the vulnerability remediation lifecycle of Alpine containers

Anybody who has the responsibility of maintaining a cluster of systems knows about the vulnerability remediation lifecycle: vulnerabilities are discovered, disclosed to vendors, mitigated by vendors and then consumers deploy the mitigations as they update their systems.

In the proprietary software world, the deployment phase is colloquially known as Patch Tuesday, because many vendors release their patches on the second Tuesday of each month.  But how does all of this actually happen, and how do you know which patches you actually need?

I thought it might be nice to look at all the moving pieces of Alpine’s remediation lifecycle, from discovery of the vulnerability, to disclosure to Alpine, to user remediation.  For this example, we will track CVE-2016-20011, a minor vulnerability in the libgrss library concerning a lack of TLS certificate validation when fetching https URIs, which I recently fixed in Alpine.

The vulnerability itself

GNOME’s libsoup is an HTTP client/server library for the GNOME platform, analogous to libcurl.  It has two sets of session APIs: the newer SoupSession API and the older SoupSessionSync/SoupSessionAsync family.  At some point after the newer SoupSession API was created, it was discovered that the older SoupSessionSync/SoupSessionAsync APIs did not enable TLS certificate validation by default.

As a result of discovering that design flaw in libsoup, Michael Catanzaro, one of the libsoup maintainers, began to audit users of libsoup in the GNOME platform.  One such user is libgrss, which did not take any steps of its own to enable TLS certificate validation, so Michael opened a bug against it in 2016.

Five years passed, and he decided to check up on these bugs.  That led to the filing of a new bug against libgrss in GNOME’s GitLab, as the GNOME Bugzilla service is in the process of being turned down.  As libgrss was still broken in 2021, he requested a CVE identifier for the vulnerability and was issued CVE-2016-20011.

How do CVE identifiers get determined, anyway?

You might notice that the identifier he was issued is CVE-2016-20011, even though it is presently 2021.  Normally, CVE identifiers use the current year, as requesting an identifier is usually an early step in the disclosure process, but CVE identifiers are actually grouped by the year in which the vulnerability was first publicly disclosed.  In the case of CVE-2016-20011, the identifier was assigned to the 2016 year because of the public GNOME Bugzilla report, which was filed in 2016.

The CVE website at MITRE has more information about how CVE identifiers are grouped if you want to know more.

The National Vulnerability Database

Our vulnerability was issued CVE-2016-20011, but how does Alpine actually find out about it?  The answer is quite simple: the NVD.  When a CVE identifier is issued, information about the vulnerability is forwarded along to the National Vulnerability Database activity at NIST, a US government agency.  The NVD consumes CVE data and enriches it with additional links and information about the vulnerability.  They also generate Common Platform Enumeration (CPE) rules, which are intended to map the vulnerability to an actual product and set of versions.

Common Platform Enumeration rules consist of a CPE URI, which tries to map a vulnerability to an ecosystem and product name, and an optional set of version range constraints.  For CVE-2016-20011, the NVD staff issued a CPE URI of cpe:2.3:a:gnome:libgrss:*:*:*:*:*:*:*:* and a version range constraint of <= 0.7.0.
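To make that concrete, here is a minimal sketch of how a consumer of NVD data might apply such a CPE rule to a package.  The function names and matching logic are illustrative only, not the NVD's actual schema or code:

```python
# Hypothetical sketch of applying a CPE rule like the one issued for
# CVE-2016-20011.  Names and logic are illustrative, not NVD code.

def version_tuple(v):
    """Split a version like '0.7.0' into a comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def cpe_matches(cpe_uri, product, version, version_end_incl):
    """Return True if (product, version) falls under the CPE rule."""
    fields = cpe_uri.split(":")   # cpe:2.3:a:<vendor>:<product>:<version>:...
    if fields[4] != product:      # field 4 is the product name
        return False
    # A "*" in the URI's version field defers to the explicit range
    # constraint, here an inclusive "versions up to" bound.
    return version_tuple(version) <= version_tuple(version_end_incl)

rule = "cpe:2.3:a:gnome:libgrss:*:*:*:*:*:*:*:*"
print(cpe_matches(rule, "libgrss", "0.7.0", "0.7.0"))  # True: affected
print(cpe_matches(rule, "libgrss", "0.8.0", "0.7.0"))  # False: out of range
```

Note that this kind of naive range matching is exactly why distribution packages can be mis-flagged: Alpine's 0.7.0-r1 carries the fix, yet an upstream-oriented constraint of <= 0.7.0 cannot express that.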

The final step in vulnerability information making its way to Alpine is the security team’s issue tracker.  Every hour, we download the latest version of the CVE-Modified and CVE-Recent feeds offered by the National Vulnerability Database activity.  We then use those feeds to update our own internal vulnerability tracking database.
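As a sketch of what consuming those feeds involves, the fragment below pulls CVE identifiers and descriptions out of a parsed NVD 1.1 JSON feed.  The feed fragment mirrors the real schema, but this is an illustration, not Alpine's actual tracker code:

```python
# Illustrative consumption of an NVD JSON feed (schema 1.1), as used by
# the CVE-Modified and CVE-Recent feeds.  Not Alpine's tracker code.
import json

def extract_cves(feed):
    """Yield (cve_id, description) pairs from a parsed NVD 1.1 feed."""
    for item in feed["CVE_Items"]:
        meta = item["cve"]["CVE_data_meta"]
        descs = item["cve"]["description"]["description_data"]
        text = descs[0]["value"] if descs else ""
        yield meta["ID"], text

# A single-entry feed fragment, abridged from the real schema:
feed = json.loads("""
{"CVE_Items": [{"cve": {
  "CVE_data_meta": {"ID": "CVE-2016-20011"},
  "description": {"description_data": [
    {"lang": "en",
     "value": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds."}]}}}]}
""")

for cve_id, text in extract_cves(feed):
    print(cve_id)   # CVE-2016-20011
```

A real consumer would fetch the gzipped feed over HTTPS on a schedule and upsert each record into the tracking database; the extraction step is the part sketched here.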

Throughout the day, the security team pulls various reports from the vulnerability tracking database, for example a list of potential vulnerabilities in edge/community.  The purpose of checking these reports is to see if there are any new vulnerabilities to investigate.

As libgrss is in edge/community, CVE-2016-20011 appeared on that report.


Mitigation

Once we start to work on a vulnerability, there are a few steps we take.  First, we research the vulnerability, checking the links provided to us through the CVE feed and the other feeds the security tracker consumes.  The NVD staff are usually very quick at linking to git commits and other data we can use for mitigating the vulnerability.  However, sometimes, as in the case of CVE-2016-20011, there is no longer an active upstream maintainer of the package, and we have to mitigate the issue ourselves.

Once we have a patch that is known to fix the issue, we prepare a software update and push it to aports.git.  We then backport the security fix to other branches in aports.git.

Once the fix is committed to all of the appropriate branches, the build servers take over, building a new version of the package with the fixes.  The build servers then upload the new packages to the master mirror, and from there, they get distributed through the mirror network to Alpine’s user community.


Remediation

At this point, if you’re a casual user of Alpine, you would just do something like apk upgrade -Ua and move on with your life, knowing that your system is up to date.

But what if you’re running a cluster of hundreds or thousands of Alpine servers and containers?  How would you know what to patch?  What should be prioritized?

To solve those problems, there are security scanners, which can check containers, images, and filesystems for vulnerabilities.  Some are proprietary software, but there are many free options.  Security scanners are not perfect, however: like Alpine’s own vulnerability investigation tooling, they sometimes generate both false positives and false negatives.

Where do security scanners get their data?  In most cases for Alpine systems, they get it from the Alpine security database, a product maintained by the Alpine security team.  They compare that database against apk’s installed-package database to determine which packages and versions are present on the system.  Let’s look at a few of them.
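Before looking at the individual scanners, the apk side of that check is easy to sketch: apk's installed database (/lib/apk/db/installed) is a series of blank-line-separated records of letter-prefixed lines, where P: is the package name and V: its version.  The parser below is a simplified illustration, not any scanner's real code:

```python
# Minimal sketch of reading apk's installed database format.  Real
# scanners handle many more fields; this only pairs names with versions.

def parse_installed(db_text):
    """Return a dict of {package: version} from apk installed-db text."""
    packages = {}
    name = version = None
    for line in db_text.splitlines():
        if line.startswith("P:"):
            name = line[2:]
        elif line.startswith("V:"):
            version = line[2:]
        elif line == "" and name:
            packages[name] = version    # blank line closes a record
            name = version = None
    if name:                            # final record without trailing blank
        packages[name] = version
    return packages

sample = "P:libgrss\nV:0.7.0-r0\n\nP:libxml2\nV:2.9.10-r7\n"
print(parse_installed(sample))  # {'libgrss': '0.7.0-r0', 'libxml2': '2.9.10-r7'}
```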

Creating a test case by mixing Alpine versions

Note: you should never actually mix Alpine versions like this.  If done in an uncontrolled way, you risk system unreliability, and your security scanning solution won’t know what to do, as each Alpine version’s security database is specific to that version of Alpine.  Don’t create a franken-alpine!

In the case of libgrss, we know that 0.7.0-r1 and newer have a fix for CVE-2016-20011, but since the security fix has already been published, where can we get the vulnerable 0.7.0-r0?  From Alpine 3.12, of course.  Accordingly, we make a filesystem with apk and install Alpine 3.12 into it:

nanabozho:~# apk add --root ~/test-image --initdb --allow-untrusted -X -X alpine-base libgrss-dev=0.7.0-r0
[…]
OK: 126 MiB in 92 packages
nanabozho:~# apk upgrade --root ~/test-image -X -X
[…]
OK: 127 MiB in 98 packages
nanabozho:~# apk info --root ~/test-image libgrss
Installed:           Available:
libgrss-0.7.0-r0     ?
nanabozho:~# cat ~/test-image/etc/alpine-release
3.13.5

Now that we have our image, let’s see what detects the vulnerability, and what doesn’t.


trivy

Trivy is considered by many to be the most reliable scanner for Alpine systems, but can it detect this vulnerability?  In theory, it should be able to.

I have installed trivy to /usr/local/bin/trivy on my machine by downloading the Go binary from the GitHub release.  They have a script that can do this for you, but I’m not a huge fan of curl | sh-type scripts.

To scan a filesystem image with trivy, you do trivy fs /path/to/filesystem:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-07T23:48:40.308-0600	INFO	Detected OS: alpine
2021-06-07T23:48:40.308-0600	INFO	Detecting Alpine vulnerabilities...
2021-06-07T23:48:40.309-0600	INFO	Number of PL dependency files: 0
[
  {
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine"
  }
]

Hmm, that’s strange.  I wonder why?

nanabozho:~# trivy --debug fs ~/test-image/
2021-06-07T23:42:54.036-0600	DEBUG	Severities: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
2021-06-07T23:42:54.038-0600	DEBUG	cache dir: /root/.cache/trivy
2021-06-07T23:42:54.039-0600	DEBUG	DB update was skipped because DB is the latest
2021-06-07T23:42:54.039-0600	DEBUG	DB Schema: 1, Type: 1, UpdatedAt: 2021-06-08 00:19:21.979880152 +0000 UTC, NextUpdate: 2021-06-08 12:19:21.979879952 +0000 UTC, DownloadedAt: 2021-06-08 05:23:09.354950757 +0000 UTC

Ah, trivy’s security database only updates twice per day, so trivy has not become aware of CVE-2016-20011 being mitigated by libgrss-0.7.0-r1 yet.

I rebuilt trivy’s database locally and put it in ~/.cache/trivy/db/trivy.db:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-08T01:37:20.574-0600	INFO	Detected OS: alpine
2021-06-08T01:37:20.574-0600	INFO	Detecting Alpine vulnerabilities...
2021-06-08T01:37:20.576-0600	INFO	Number of PL dependency files: 0
[
  {
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine",
    "Vulnerabilities": [
      {
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        },
        "SeveritySource": "nvd",
        "PrimaryURL": "",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": ["CWE-295"],
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
          }
        },
        "References": ["", ""],
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"
      },
      {
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss-dev",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        },
        "SeveritySource": "nvd",
        "PrimaryURL": "",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": ["CWE-295"],
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
          }
        },
        "References": ["", ""],
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"
      }
    ]
  }
]

Ah, that’s better.


clair

Clair is a security scanner originally written by the CoreOS team and now maintained by Red Hat.  It is considered the gold standard for security scanning of containers.  How does it do with the filesystem we baked?

nanabozho:~# clairctl report ~/test-image/
2021-06-08T00:11:04-06:00 ERR error="UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:root/test-image Type:repository]]"

Oh, right, it can’t just scan a filesystem.  One second.

nanabozho:~$ cd ~/dev-src/clair
nanabozho:~$ make local-dev-up-with-quay
[a bunch of commands later]
nanabozho:~$ clairctl report test-image:1
test-image:1 found libgrss 0.7.0-r0 CVE-2016-20011 (fixed: 0.7.0-r1)

As you can see, clair does succeed in finding the vulnerability, once you bake an actual Docker image and publish it to a local quay instance running on localhost.

But this is really a lot of work just to scan for vulnerabilities, so I wouldn’t recommend clair for that.


grype

grype is a security scanner made by Anchore.  They talk a lot about how Anchore’s products can also be used to build a Software Bill of Materials for a given image.  Let’s see how it does with our test image:

nanabozho:~# grype dir:~/test-image/
 ✔ Vulnerability DB      [updated]
 ✔ Cataloged packages    [98 packages]
 ✔ Scanned image         [3 vulnerabilities]
NAME     INSTALLED   FIXED-IN               VULNERABILITY   SEVERITY
libgrss  0.7.0-r0    (fixes indeterminate)  CVE-2016-20011  High
libxml2  2.9.10-r7   (fixes indeterminate)  CVE-2019-19956  High
openrc   0.42.1-r19  (fixes indeterminate)  CVE-2018-21269  Medium

grype does detect that a vulnerable libgrss is installed, but the (fixes indeterminate) seems fishy to me.  There also appear to be some hits that the other scanners didn’t notice.  Let’s fact-check this against a pure Alpine 3.13 container:

nanabozho:~# grype dir:~/test-image-pure/
 ✔ Vulnerability DB      [no update available]
 ✔ Cataloged packages    [98 packages]
 ✔ Scanned image         [3 vulnerabilities]
NAME     INSTALLED   FIXED-IN               VULNERABILITY   SEVERITY
libgrss  0.7.0-r1    (fixes indeterminate)  CVE-2016-20011  High
libxml2  2.9.10-r7   (fixes indeterminate)  CVE-2019-19956  High
openrc   0.42.1-r19  (fixes indeterminate)  CVE-2018-21269  Medium

Oh no, it detects 0.7.0-r1 as vulnerable too, which I assume is simply because Anchore’s database hasn’t updated yet.  Researching the other two vulnerabilities, the openrc one seems to be a vulnerability we missed, while the libxml2 one is a false positive.

It is important to note, however, that Anchore’s scanning engine assumes a package is vulnerable whenever a CVE applies and the distribution has not acknowledged a fix.  That assumption may or may not be reliable often enough, but it is an admittedly interesting approach.
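That policy difference between the scanners can be sketched as a tiny decision function.  The names and data shapes here are invented for illustration, and the version comparison is deliberately naive:

```python
# Hypothetical sketch of the two reporting policies described above:
# grype-style engines flag a package even when no fixed version is
# acknowledged, while trivy-style engines only report confirmed fixes.

def report(installed_version, fixed_version, speculative=True):
    """Return a finding string, or None if the install is not flagged."""
    if fixed_version is None:
        # No acknowledged fix from the distribution yet.
        return "(fixes indeterminate)" if speculative else None
    if installed_version < fixed_version:   # naive string comparison
        return f"fixed in {fixed_version}"
    return None

print(report("0.7.0-r0", "0.7.0-r1"))               # fixed in 0.7.0-r1
print(report("0.7.0-r0", None))                     # (fixes indeterminate)
print(report("0.7.0-r0", None, speculative=False))  # None
```

The speculative=True branch is what produced the (fixes indeterminate) rows above, including the false positive against the already-fixed 0.7.0-r1.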


Conclusions

For vulnerability scanning, I have to recommend either trivy or grype.  Clair is complicated to set up and is geared toward people scanning entire container registries at once.  In general, I would recommend trivy over grype, simply because trivy does not speculate about unconfirmed vulnerabilities, which I think is a distraction for developers.  That said, grype has a lot of potential as well, though its authors may want to add the ability to scan only for confirmed vulnerabilities.

I hope this blog entry also answers a lot of questions about the remediation lifecycle in general.
