the three taps of doom

A few years ago, I worked as the CTO of an advertising startup.  At first, we used Skype for messaging amongst the employees, and later we switched to Slack.  The main reason for switching was that Slack had an IRC gateway — you could connect to a Slack workspace with an IRC client, which allowed the people who wanted to use IRC to do so, while providing a polished experience for those who were unfamiliar with it.

the IRC gateway

In the beginning, Slack had an IRC gateway.  On May 15th, 2018, Slack discontinued the IRC gateway, beginning my descent into Cocytus.  Prior to the shutdown of the IRC gateway, I had always interacted with the Slack workspace via IRC.  This was replaced with the Slack mobile and desktop apps.

The IRC gateway, however, was quite buggy, so it was probably good that they got rid of it.  It did not comply with any reasonable IRC specification, much less support anything from IRCv3, so the user experience was disappointing, albeit serviceable.

the notifications

Switching from IRC to the native Slack clients, I now got to deal with one of Slack’s main features: notifications.  If you’ve ever used Slack, you’re likely familiar with the unholy notification sound, or as I have come to know it, the triple tap of existential doom.  Let me explain.

At this point, we used Slack for everything: chat, paging people, even monitoring tickets coming in.  The workflow was efficient, but due to matters outside my control, revenues were declining.  This led to the CEO becoming quite antsy.  One day, he discovered that he could use @all, @tech or @sales to page people with his complaints.

This means that I would now get pages like:

Monitoring: @tech Service rtb-frontend-nyc is degraded
CEO: @tech I demand you implement a filtering feature our customer is requiring to scale up

The monitoring pages were helpful.  The CEO paging us to demand that we implement filtering features that spied on users, and that definitely would not result in scaled-up revenue (the customers were paying CPM), was not.

The pages in question were actually a lot more intense than these tame examples suggest, and it felt like I had to walk on eggshells just to use Slack.

Quitting that job

In the middle of 2018, I quit that job for various reasons.  As a result, I uninstalled Slack, and immediately felt much better.  But to this day, every time I hear the Slack notification sound, I get anxious.

The moral of this story is: if you use Slack, don’t use it for paging, and make sure your CEO doesn’t have access to the paging features.  It will be a disaster.  And if you’re running a FOSS project, consider not using Slack, as there are likely many technical people who avoid Slack due to their own experiences with it.

Bits relating to Alpine security initiatives in June

As usual, I have been hard at work on various security initiatives in Alpine the past month.  Here is what I have been up to:

Alpine 3.14 release and remediation efforts in general

Alpine 3.14.0 was released on June 15, with the lowest unpatched vulnerability count of any release in the past several years.  While previous Alpine release cycles did well at patching critical vulnerabilities, the less severe ones frequently slipped through the cracks, because the project had been unable to focus on vulnerability remediation until now.

We have also largely cleaned up Alpine 3.13 (there are a few minor vulnerabilities that have not been remediated there yet, as they require ABI changes or careful backporting), and Alpine 3.12 and 3.11 are starting to catch up in terms of unpatched vulnerabilities.

While a release branch will realistically never have zero unpatched vulnerabilities, we are much closer than ever before to having the supported repositories in as optimal a state as we can have them.  Depending on how things play out, this may result in extended security support for the community repository for 3.14, since the introduction of tools and processes has reduced the maintenance burden for security updates.

Finally, with the release of Alpine 3.14, the security support period for Alpine 3.10 draws to a close, so you should upgrade to at least Alpine 3.11 to continue receiving security updates.

secfixes-tracker and the security database

This month saw a minor update to secfixes-tracker, the application which powers Alpine’s security tracker.  This update primarily focused on supporting the new security rejections database, which allows maintainers to reject CVEs from their packages with an annotated rationale.

In my previous update, I talked about a proposal to allow security trackers to exchange data using Linked Data Notifications.  This will be deployed as part of the secfixes-tracker 0.4 release, as we have come to an agreement with the Go and OSV teams about how to handle JSON-LD extensions in the format.

My goal with the Linked Data Notifications effort is to decentralize the current CVE ecosystem; a longer writeup explaining how we will achieve that is roughly half-way done in my drafts folder.  Stay tuned!

Finally, the license for the security database has been officially defined as CC-BY-SA, meaning that security vendors can now use our security database in their scanners without having a legal compliance headache.

Reproducible Builds

We have begun work on supporting reproducibility in Alpine.  While there is still a lot of work to be done in abuild to support buildinfo files, kpcyrd started to work on making the install media reproducible, beginning with the Raspberry Pi images we ship.

However, he ran into an issue with BusyBox’s cpio not supporting reproducibility, so I added the necessary flags to allow for cpio archives to be reproducible, sent the patches to upstream BusyBox and pushed an updated BusyBox with the patches to Alpine edge.

There are still a few fixes that need to be made to apk, but with some workarounds, we were able to demonstrate reproducible install images for the Raspberry Pi.

The next few steps here will involve validating that the reproducible initramfs works correctly.  For example, I don’t think we need --ignore-devno, just --renumber-inodes, and I also suspect that with --ignore-devno the image won’t actually boot, but validation will allow us to verify that everything is OK with the image.

Beyond that, we need reproducible packages, and for that, we need buildinfo files.  That’s next on my list of things to tackle.

The linux-distros list

In the last update, we were discussing whether to join the linux-distros list.  Since then, we concluded that joining the list does not net us anything useful: our post-embargo patching timeframe is the same as that of distros which participate on the list, and the requirements for sharing vulnerability data with other team members and maintainers were too onerous.  Alpine values transparency; we found that compromising that transparency in exchange for embargoed security data was not a useful tradeoff for us.

apk-tools 3

Since the last update, Timo has made a lot of progress on the ADB format used in apk-tools 3.  At this point, I think it has come along enough that we can begin working on exposing security information in the ADB-based package indices.

While Alpine itself is not yet publishing ADB-based indices, the features available in the ADB format are required to reflect the security fix information correctly (the current index format does not support structured data at all, and is just a simple key-value store).

I also intend to look at the ADB-based indices to ensure they are reproducible.  This will likely occur within the next few weeks as I work on making the current indices reproducible.


My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!

understanding thread stack sizes and how alpine is different

From time to time, somebody reports a bug to some project about their program crashing on Alpine.  Usually, one of two things happens: the developer doesn’t care and doesn’t fix the issue, because it works under GNU/Linux, or the developer fixes their program to behave correctly only for the Alpine case, and it remains silently broken on other platforms.

The Default Thread Stack Size

In general, it is my opinion that if your program crashes on Alpine, it is because your program depends on behavior that is not guaranteed to actually exist, which means your program is not actually portable.  The typical culprit in this category is the thread stack size limit.

You might be wondering: what is a thread stack, anyway?  The answer is quite simple: each thread has its own stack memory, because it is not really feasible for multiple threads to share the same stack.  On most platforms, the size of that memory is much smaller than the main thread’s stack, though programmers are not necessarily aware of that discrepancy.

Here is a table of common x86_64 platforms and their default stack sizes for the main thread (process) and child threads:

OS                         Process Stack Size   Thread Stack Size
Darwin (macOS, iOS, etc)   8 MiB                512 KiB
FreeBSD                    8 MiB                2 MiB
OpenBSD (before 4.6)       8 MiB                64 KiB
OpenBSD (4.6 and later)    8 MiB                512 KiB
Windows                    1 MiB                1 MiB
Alpine 3.10 and older      8 MiB                80 KiB
Alpine 3.11 and newer      8 MiB                128 KiB
GNU/Linux                  8 MiB                8 MiB

The OpenBSD and GNU/Linux default thread stack sizes are worth noting, because they represent the smallest and largest defaults.

Because the Linux kernel has overcommit mode, GNU/Linux systems use 8 MiB by default, which leads to a potential problem when running code developed against GNU/Linux on other systems.  As most threads only need a small amount of stack memory, other platforms use smaller limits, such as OpenBSD using only 64 KiB and Alpine using at most 128 KiB by default.  This leads to crashes in code which assumes a full 8 MiB is available for each thread to use.

If you find yourself debugging a weird crash that doesn’t make sense, and your application is multi-threaded, it likely means that you’re exhausting the stack limit.

What can I do about it?

To fix the issue, you will need to either change the way your program is written, or change the way it is compiled.  There are a few options you can take, depending on how much time you’re willing to spend.  In most cases, these sorts of crashes are caused by attempting to manipulate a large variable which is stored on the stack.  Generally, moving the variable off the stack is the best way to fix the issue, but there are alternatives.

Moving the variable off the stack

Let’s say that the code has a large array stored on the stack, which causes the stack exhaustion issue.  In this case, the easiest solution is to move it off the stack.  There are two main approaches you can use to do this: thread-local storage and heap storage.  Thread-local storage reserves additional memory for per-thread variables; think of it as static, but bound to each thread.  Heap storage is what you’re working with when you use malloc and free.

To illustrate the example, we will adjust this code to use both kinds of storage:

#include <string.h>

void some_function(void) {

    char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}

Thread-local variables are declared with the thread_local keyword, which at block scope must be combined with static.  You must include threads.h in order to use it:

#include <string.h>
#include <threads.h>

void some_function(void) {

    static thread_local char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}

You can also use the heap.  The most portable example would be the obvious one:

#include <stdlib.h>
#include <string.h>

const size_t scratchpad_size = 500000;

void some_function(void) {

    char *scratchpad = calloc(1, scratchpad_size);

    memset(scratchpad, 'A', scratchpad_size);

    free(scratchpad);
}

However, if you don’t mind sacrificing portability outside gcc and clang, you can use the cleanup attribute:

#include <stdlib.h>
#include <string.h>

/* the cleanup handler receives a pointer to the variable itself,
   so free() needs a small wrapper */
static void free_ptr(char **p) { free(*p); }

#define autofree __attribute__((cleanup(free_ptr)))

const size_t scratchpad_size = 500000;

void some_function(void) {

    autofree char *scratchpad = calloc(1, scratchpad_size);

    memset(scratchpad, 'A', scratchpad_size);
}
This is probably the best way to fix code like this if you’re not targeting compilers like the Microsoft one.

Adjusting the thread stack size at runtime

pthread_create takes an optional pthread_attr_t pointer as the second parameter.  This can be used to set an alternate stack size for the thread at runtime:

#include <pthread.h>

pthread_t worker_thread;

/* thread start routines must have this signature */
extern void *some_function(void *arg);

void launch_worker(void) {

    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024768);

    pthread_create(&worker_thread, &attr, some_function, NULL);

    pthread_attr_destroy(&attr);
}
By setting the stack size in the attributes passed to pthread_create, the child thread will have a larger stack.

Adjusting the stack size at link time

On modern Alpine systems (since 2018), it is possible to set the default thread stack size at link time, by passing a flag such as -Wl,-z,stack-size=1024768 in LDFLAGS.

You can also use tools like chelf or muslstack to patch pre-built binaries to use a larger stack, but this shouldn’t be done inside Alpine packaging, for example.

Hopefully, this article is helpful for those looking to learn how to solve the stack size issue.

the end of freenode

My first experience with IRC was in 1999.  I was in middle school, and a friend of mine ordered a Slackware CD from Walnut Creek CDROM.  This was Slackware 3.4, and contained the GNOME 1.x desktop environment on the disc, which came with the BitchX IRC client.

At first, I didn’t really know what BitchX was; I just thought it was a cool program that displayed random ASCII art and tried to connect to various servers.  After a while, I found out that an IRC client allowed you to connect to an IRC network, where you could get help with Slackware.

At that time, freenode didn’t exist.  The Slackware IRC channel was on DALnet, and I started using DALnet to learn more about Slackware.  Like most IRC newbies, it didn’t go so well: I got banned from #slackware within about five minutes.  I pleaded for forgiveness, in the way only a middle schooler can, and eventually I got unbanned and stuck around for a while.  That was my first experience with IRC.

After a few months, I got bored of running Linux and reinstalled Windows 98 on my computer, because I wanted to play games that only worked on Windows, and so, largely, my interest in IRC waned.

A few years passed… I was in eighth grade.  I found out that one of the girls in my class was a witch.  I didn’t really understand what that meant, so I pressed her for more details.  She said that she was a Wiccan, and that I should read more about it on the Internet if I wanted to know more.  I still didn’t quite understand what she meant, but I looked it up on AltaVista, which linked me to an entire category of sites.  So, I read through these websites, and on one of them I saw:

Come join our chatroom on DALnet: #wicca

DALnet!  I knew what that was, so I looked for an IRC client that worked on Windows, and eventually installed mIRC.  Then I joined DALnet again, this time to join #wicca.  I found out about a lot of other amazing ideas from the people on that channel, and wound up joining others like #otherkin around that time.  Many of my closest friends to this day are from those days.

At this time, DALnet was the largest IRC network, with almost 150,000 daily users.  Eventually, my friends introduced me to mIRC script packs like NoNameScript, and I used that for a few years on and off, sometimes using BitchX on Slackware instead, having figured out how to make my system dual boot at some point.

The DALnet DDoS attacks

For a few years, all was well, until the end of July 2002, when DALnet became the target of distributed denial of service attacks.  We would, of course, later find out that these attacks were made at the request of Jason Michael Downey (Nessun), who had just launched a competing IRC network called Rizon.

However, this resulted in #slackware and many other technical channels moving from DALnet to irc.openprojects.net, the network that was the predecessor to freenode.  Using screen, I was able to run two copies of the BitchX client, one for each network, but I had difficulties connecting to DALnet due to the DDoS attacks.

Early freenode

At the end of 2002, the Open Projects Network became freenode.  At that time, freenode was a much different place, with community projects like #freenoderadio, a group of people who streamed various ‘radio’ shows on an Icecast server.  Freenode had fewer than 5,000 users, and it was a community where most people knew each other, or at least knew somebody who knew somebody else.

At this time, freenode ran dancer-ircd with dancer-services, which were written by the Debian developer Andrew Suffield, based on ircd-hybrid 6 and HybServ respectively.

Dancer had a lot of bugs; the software would frequently do weird things, and the services were quite spartan compared to what was available on DALnet.  I knew, based on what DALnet offered, that we could make something better for freenode, and so I started to learn about ircd.

Hatching a plan to make services better

By this time, I was in my last year of high school and was writing IRC bots in Perl.  I hadn’t really tried to write anything in C yet, but I was learning a little about it by playing around with a test copy of UnrealIRCd on my local machine.  I started to talk to lilo about improving the services: I knew it could be done, but I didn’t yet know how, which led me to search for services projects that were simple and understandable.

In my search for services software, I found rakaur’s Shrike project, a very simple clone of Undernet’s X service which could be used with ircd-hybrid.  I talked with rakaur, learned more about C, and even added some features.  Unfortunately, we had a falling out around that time, because a user on the network we ran together found out that he could make rakaur’s IRC bot run rm -rf --no-preserve-root /, and did so.

After working on Shrike a bit, I finally knew what to do: extend Shrike into a full set of DALnet-like services.  I showed lilo what I was working on, and he was impressed: I became a freenode staff member and continued to work on the services, and all went well for a while.  He also recruited my friend jilles to help with the coding, and we started fixing bugs in dancer-ircd and dancer-services as an interim solution.  We also started writing atheme as a longer-term replacement for dancer-services, originally under the auspices of freenode.


In early 2006, lilo launched his Spinhome project, a fundraising effort so that he could get a mobile home to replace the double-wide trailer he had been living in.  Some people saw him fundraising while being the owner of freenode as a conflict of interest, which led to a falling out with a lot of staffers, projects, etc.  OFTC went from being a small network to a much larger one during this time.

One side effect of this was that the atheme project got spun out into its own organization, which continues to exist in some form to this day.

The project was founded on the concept of promoting digital autonomy, which is basically the network equivalent of software freedom, and has advocated in various ways to preserve IRC in the context of digital autonomy for years.  In retrospect, some of the ways we advocated for digital autonomy were somewhat obnoxious, but as they say, hindsight is always 20/20.

The hit and run

In September 2006, lilo was hit by a motorist while riding his bicycle.  This led to a managerial crisis inside freenode, with two rifts: one group that wanted to lead the network was led by Christel Dahlskjaer, while the other was led by Andrew Kirch (trelane).  Christel wanted to update the network to use all of the new software we had developed over the past few years, so we gave her our support, which convinced enough of the sponsors and so on to also support her.

A few months later, lilo’s brother tried to claim title to the network in order to turn it into some sort of business.  This led to Christel and Richard Hartmann (RichiH) meeting with him to get him to back away from that attempt.

After that, things largely ran smoothly for several years: freenode switched to atheme, and then to ircd-seven, a customized version of charybdis which we had written as a replacement for hyperion (our fork of dancer-ircd).  Things ran well until…

Freenode Limited

In 2016, Christel incorporated freenode limited, under the guise that it would be used to organize the freenode #live conferences.  In early 2017, she sold 66% of her stake in freenode limited to Andrew Lee, whom I wrote about in last month’s chapter.

All of that led to Andrew’s takeover of the network last month.  Last night, they decided to remove the #fsf and #gnu channels from the network, and k-lined my friend Amin Bandali when he criticized them for it, which means freenode is definitely no longer a network about FOSS.

Projects should use alternative networks, like OFTC or Libera, or better yet, operate their own IRC infrastructure.  Self-hosting is really what makes IRC great: you can run your own server for your community and not be beholden to anyone else.  As far as IRC goes, that’s the future I feel motivated to build.

This concludes my coverage of the freenode meltdown.  I hope people enjoyed it and also understand why freenode was important to me: without lilo’s decision to take a chance on a dumbfuck kid like myself, I would never have gotten as deeply involved in FOSS as I have, so seeing what has happened has left me heartbroken.

the vulnerability remediation lifecycle of Alpine containers

Anybody who has the responsibility of maintaining a cluster of systems knows about the vulnerability remediation lifecycle: vulnerabilities are discovered, disclosed to vendors, mitigated by vendors and then consumers deploy the mitigations as they update their systems.

In the proprietary software world, the deployment phase is colloquially known as Patch Tuesday, because many vendors release patches on the second Tuesday of each month.  But how does all of this actually happen, and how do you know what patches you actually need?

I thought it might be nice to look at all the moving pieces that exist in Alpine’s remediation lifecycle, beginning from discovery of the vulnerability, to disclosure to Alpine, to user remediation.  For this example, we will track CVE-2016-20011, a minor vulnerability in the libgrss library concerning a lack of TLS certificate validation when fetching https URIs, which I just fixed in Alpine.

The vulnerability itself

GNOME’s libsoup is an HTTP client/server library for the GNOME platform, analogous to libcurl.  It has two sets of session APIs: the newer SoupSession API and the older SoupSessionSync/SoupSessionAsync family.  In the course of creating the newer SoupSession API, it was discovered that the older SoupSessionSync/SoupSessionAsync APIs did not enable TLS certificate validation by default.

As a result of discovering that design flaw in libsoup, Michael Catanzaro, one of the libsoup maintainers, began to audit users of libsoup in the GNOME platform.  One such user is libgrss, which did not take any steps to enable TLS certificate validation on its own, so Michael opened a bug against it in 2016.

Five years passed, and he decided to check up on these bugs.  That led to the filing of a new bug against libgrss in GNOME’s GitLab, as the GNOME Bugzilla service is in the process of being turned down.  As libgrss was still broken in 2021, he requested a CVE identifier for the vulnerability and was issued CVE-2016-20011.

How do CVE identifiers get determined, anyway?

You might notice that the CVE identifier he was issued is CVE-2016-20011, even though it is presently 2021.  Normally, CVE identifiers use the current year, as requesting a CVE identifier is usually an early step in the disclosure process, but CVE identifiers are actually grouped by the year that a vulnerability was first publicly disclosed.  In the case of CVE-2016-20011, the identifier was assigned to the 2016 year because of the public GNOME bugzilla report which was filed in 2016.

The CVE website at MITRE has more information about how CVE identifiers are grouped if you want to know more.

The National Vulnerability Database

Our vulnerability was issued CVE-2016-20011, but how does Alpine actually find out about it?  The answer is quite simple: the NVD.  When a CVE identifier is issued, information about the vulnerability is forwarded along to the National Vulnerability Database activity at NIST, a US governmental agency.  The NVD consumes CVE data and enriches it with additional links and information about the vulnerability.  They also generate Common Platform Enumeration rules, which are intended to map the vulnerability to an actual product and set of versions.

Common Platform Enumeration rules consist of a CPE URI, which maps a vulnerability to an ecosystem and product name, and an optional set of version range constraints.  For CVE-2016-20011, the NVD staff issued a CPE URI of cpe:2.3:a:gnome:libgrss:*:*:*:*:*:*:*:* and a version range constraint of <= 0.7.0.

The final step in vulnerability information making its way to Alpine is the security team’s issue tracker.  Every hour, we download the latest version of the CVE-Modified and CVE-Recent feeds offered by the National Vulnerability Database activity.  We then use those feeds to update our own internal vulnerability tracking database.

Throughout the day, the security team pulls various reports from the vulnerability tracking database, for example a list of potential vulnerabilities in edge/community.  The purpose of checking these reports is to see if there are any new vulnerabilities to investigate.

As libgrss is in edge/community, CVE-2016-20011 appeared on that report.


Once we start to work on a vulnerability, there are a few steps we take.  First, we research the vulnerability, checking the links provided to us through the CVE feed and the other feeds the security tracker consumes.  The NVD staff are usually very quick to link to git commits and other data we can use for mitigation.  Sometimes, however, as in the case of CVE-2016-20011, there is no longer an active upstream maintainer of the package, and we have to mitigate the issue ourselves.

Once we have a patch that is known to fix the issue, we prepare a software update and push it to aports.git.  We then backport the security fix to other branches in aports.git.

Once the fix is committed to all of the appropriate branches, the build servers take over, building a new version of the package with the fixes.  The build servers then upload the new packages to the master mirror, and from there, they get distributed through the mirror network to Alpine’s user community.


At this point, if you’re a casual user of Alpine, you would just do something like apk upgrade -Ua and move on with your life, knowing that your system is up to date.

But what if you’re running a cluster of hundreds or thousands of Alpine servers and containers?  How would you know what to patch?  What should be prioritized?

To solve those problems, there are security scanners, which can check containers, images and filesystems for vulnerabilities.  Some are proprietary software, but there are many options that are free.  However, security scanners are not perfect: like Alpine’s vulnerability investigation tool, they sometimes generate both false positives and false negatives.

Where do security scanners get their data?  In most cases for Alpine systems, they get it from the Alpine security database, a product maintained by the Alpine security team.  Using that database, they check apk’s installed database to see what packages and versions are installed on the system.  Let’s look at a few of them.

Creating a test case by mixing Alpine versions

Note: You should never actually mix Alpine versions like this.  If done in an uncontrolled way, you risk system unreliability and your security scanning solution won’t know what to do as each Alpine version’s security database is specific to that version of Alpine.  Don’t create a franken-alpine!

In the case of libgrss, we know that 0.7.0-r1 and newer have a fix for CVE-2016-20011, and that fix has already been published.  So, where can we get 0.7.0-r0?  From Alpine 3.12, of course.  Accordingly, we make a filesystem with apk and install Alpine 3.12 into it:

nanabozho:~# apk add --root ~/test-image --initdb --allow-untrusted -X -X alpine-base libgrss-dev=0.7.0-r0
OK: 126 MiB in 92 packages
nanabozho:~# apk upgrade --root ~/test-image -X -X
OK: 127 MiB in 98 packages
nanabozho:~# apk info --root ~/test-image libgrss
Installed:                              Available:
libgrss-0.7.0-r0                      ? 
nanabozho:~# cat ~/test-image/etc/alpine-release

Now that we have our image, let’s see what detects the vulnerability, and what doesn’t.

trivy

Trivy is considered by many to be the most reliable scanner for Alpine systems, but can it detect this vulnerability?  In theory, it should be able to.

I have installed trivy to /usr/local/bin/trivy on my machine by downloading the go binary from the GitHub release.  They have a script that can do this for you, but I’m not a huge fan of curl | sh type scripts.

To scan a filesystem image with trivy, you do trivy fs /path/to/filesystem:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-07T23:48:40.308-0600 INFO Detected OS: alpine
2021-06-07T23:48:40.308-0600 INFO Detecting Alpine vulnerabilities...
2021-06-07T23:48:40.309-0600 INFO Number of PL dependency files: 0
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine"

Hmm, that’s strange.  I wonder why?

nanabozho:~# trivy --debug fs ~/test-image/
2021-06-07T23:42:54.036-0600 DEBUG Severities: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
2021-06-07T23:42:54.038-0600 DEBUG cache dir: /root/.cache/trivy
2021-06-07T23:42:54.039-0600 DEBUG DB update was skipped because DB is the latest
2021-06-07T23:42:54.039-0600 DEBUG DB Schema: 1, Type: 1, UpdatedAt: 2021-06-08 00:19:21.979880152 +0000 UTC, NextUpdate: 2021-06-08 12:19:21.979879952 +0000 UTC, DownloadedAt: 2021-06-08 05:23:09.354950757 +0000 UTC

Ah: trivy’s security database only updates twice per day, so it was not yet aware that CVE-2016-20011 had been mitigated by libgrss-0.7.0-r1.

I rebuilt trivy’s database locally and put it in ~/.cache/trivy/db/trivy.db:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-08T01:37:20.574-0600	INFO	Detected OS: alpine
2021-06-08T01:37:20.574-0600	INFO	Detecting Alpine vulnerabilities...
2021-06-08T01:37:20.576-0600	INFO	Number of PL dependency files: 0
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine",
    "Vulnerabilities": [
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        "SeveritySource": "nvd",
        "PrimaryURL": "",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": [
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
        "References": [
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss-dev",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        "SeveritySource": "nvd",
        "PrimaryURL": "",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": [
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
        "References": [
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"

Ah, that’s better.


clair

Clair is a security scanner originally written by the CoreOS team and now maintained by Red Hat.  It is considered the gold standard for security scanning of containers.  How does it do with the filesystem we baked?

nanabozho:~# clairctl report ~/test-image/
2021-06-08T00:11:04-06:00 ERR error="UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:root/test-image Type:repository]]"

Oh, right, it can’t just scan a filesystem.  One second.

nanabozho:~$ cd ~/dev-src/clair
nanabozho:~$ make local-dev-up-with-quay
[a bunch of commands later]
nanabozho:~$ clairctl report test-image:1
test-image:1 found libgrss 0.7.0-r0 CVE-2016-20011 (fixed: 0.7.0-r1)

As you can see, clair does succeed in finding the vulnerability when you bake an actual Docker image and publish it to a local quay instance running on localhost.

But this is really a lot of work to just scan for vulnerabilities, so I wouldn’t recommend clair for that.


grype

grype is a security scanner made by Anchore.  They talk a lot about how Anchore’s products can also be used to build a Software Bill of Materials for a given image.  Let’s see how it goes with our test image:

nanabozho:~# grype dir:~/test-image/
✔ Vulnerability DB [updated]
✔ Cataloged packages [98 packages]
✔ Scanned image [3 vulnerabilities]
libgrss  0.7.0-r0      (fixes indeterminate) CVE-2016-20011 High 
libxml2  2.9.10-r7     (fixes indeterminate) CVE-2019-19956 High 
openrc   0.42.1-r19    (fixes indeterminate) CVE-2018-21269 Medium

grype does detect that a vulnerable libgrss is installed, but the (fixes indeterminate) seems fishy to me.  There also appear to be some other hits that the other scanners didn’t notice.  Let’s fact-check this against a pure Alpine 3.13 container:

nanabozho:~# grype dir:~/test-image-pure/
✔ Vulnerability DB [no update available]
✔ Cataloged packages [98 packages]
✔ Scanned image [3 vulnerabilities]
libgrss  0.7.0-r1      (fixes indeterminate) CVE-2016-20011 High 
libxml2  2.9.10-r7     (fixes indeterminate) CVE-2019-19956 High 
openrc   0.42.1-r19    (fixes indeterminate) CVE-2018-21269 Medium

Oh no, it detects 0.7.0-r1 as vulnerable too, which I assume is simply because Anchore’s database hasn’t updated yet.  Researching the other two vulnerabilities, the openrc one seems to be a vulnerability we missed, while the libxml2 one is a false positive.

I think, however, it is important to note that Anchore’s scanning engine assumes a package is vulnerable if there is a CVE and the distribution hasn’t acknowledged a fix.  That assumption may or may not hold often enough to be reliable, but it is an admittedly interesting approach.


conclusions

For vulnerability scanning, I have to recommend either trivy or grype.  Clair is really complicated to set up and is geared at people scanning entire container registries at once.  In general, I would recommend trivy over grype simply because it does not speculate about unconfirmed vulnerabilities, which I think is a distraction for developers; grype has a lot of potential as well, though its authors may want to add the ability to scan only for confirmed vulnerabilities.

I hope this blog entry also answers a lot of questions about the remediation lifecycle in general.

actually, BSD kqueue is a mountain of technical debt

A side effect of the whole freenode kerfluffle is that I’ve been looking at IRCD again.  IRC is, of course, a very weird and interesting place, and the smaller community of people who run IRCDs is largely weirder and even more interesting.

However, in that community of IRCD administrators, there happen to be a few incorrect systems programming opinions that have been cargo-culted around for years.  This particular blog is about one of those bikesheds: the kqueue vs. epoll debate.

You’ve probably heard it before.  It goes something like this: “BSD is better for networking, because it has kqueue.  Linux has nothing like kqueue, epoll doesn’t come close.”  While I agree that epoll doesn’t come close, I think that’s actually a feature that has led to a much more flexible and composable design.

In the beginning…

Originally, IRCD, like most daemons, used select for polling sockets for readiness, as this was the first polling API available on systems with BSD sockets.  The select syscall works by taking a set of three bitmaps, with each bit describing a file descriptor number: bit 1 refers to file descriptor 1, and so on.  The bitmaps are the read_set, write_set and err_set, which map to the sockets that can be read, written to, or have errors accordingly.  Due to design defects in the select syscall, it can only support up to FD_SETSIZE file descriptors on most systems.  This can be mitigated by making fd_set an arbitrarily large bitmap and depending on fdmax as the upper bound, which is what WinSock has traditionally done on Windows.

The select syscall clearly had some design deficits that negatively affected scalability, so AT&T introduced the poll syscall in System V UNIX.  The poll syscall takes an array of struct pollfd of user-specified length, and updates a bitmap of flags in each struct pollfd entry with the current status of each socket.  Then you iterate over the struct pollfd list.  This is naturally a lot more efficient than select, where you have to iterate over all file descriptors up to fdmax and test for membership in each of the three bitmaps to ascertain each socket’s status.

It can be argued that select was bounded by FD_SETSIZE (which is usually 1024 sockets), while poll begins to have serious scalability issues at around 10240 sockets.  These arbitrary benchmarks have been referred to as the C1K and C10K problems respectively.  Dan Kegel has a very lengthy post on his website about his experiences mitigating the C10K problem in the context of running an FTP site.

Then there was kqueue…

In July 2000, Jonathan Lemon introduced kqueue into FreeBSD, which quickly propagated into the other BSD forks as well.  kqueue is a kernel-assisted event notification system using two syscalls: kqueue and kevent.  The kqueue syscall creates a handle in the kernel represented as a file descriptor, which a developer uses with kevent to add and remove event filters.  Event filters can match against file descriptors, processes, filesystem paths, timers, and so on.

This design allows for a single-threaded server to process hundreds of thousands of connections at once, because it can register all of the sockets it wishes to monitor with the kernel and then lazily iterate over the sockets as they have events.

Most IRCDs have supported kqueue for the past 15 to 20 years.

And then epoll…

In October 2002, Davide Libenzi got his epoll patch merged into Linux 2.5.44.  Like with kqueue, you use the epoll_create syscall to create a kernel handle which represents the set of descriptors to monitor.  You use the epoll_ctl syscall to add or remove descriptors from that set.  And finally, you use epoll_wait to wait for kernel events.

In general, the scalability aspects are the same to the application programmer: you have your sockets, you use epoll_ctl to add them to the kernel’s epoll handle, and then you wait for events, just like you would with kevent.

Like kqueue, most IRCDs have supported epoll for the past 15 years.

What is a file descriptor, anyway?

To understand the argument I am about to make, we need to talk about file descriptors.  UNIX uses the term file descriptor a lot, even when referring to things which are clearly not files, like network sockets.  Outside the UNIX world, a file descriptor is usually referred to as a kernel handle.  Indeed, on Windows, kernel-managed resources are given the HANDLE type, which makes this relationship clearer.  A kernel handle is essentially an opaque reference to an object in kernel space, and the astute reader may notice some similarities to the object-capability model as a result.

Now that we understand that file descriptors are just kernel handles, we can talk about kqueue and epoll, and why epoll is actually the correct design.

The problem with event filters

The key difference between epoll and kqueue is that kqueue operates on the notion of event filters instead of kernel handles.  This means that any time you want kqueue to do something new, you have to add a new type of event filter.

FreeBSD presently has 10 different event filter types: EVFILT_READ, EVFILT_WRITE, EVFILT_EMPTY, EVFILT_AIO, EVFILT_VNODE, EVFILT_PROC, EVFILT_PROCDESC, EVFILT_SIGNAL, EVFILT_TIMER and EVFILT_USER.  Darwin has additional event filters concerning monitoring Mach ports.

Other than EVFILT_READ, EVFILT_WRITE and EVFILT_EMPTY, all of these event filter types relate to entirely different concerns in the kernel: they monitor not kernel handles but specific kernel subsystems other than sockets.

This makes for a powerful API, but one which lacks composability.

epoll is better because it is composable

It is possible to do almost everything that kqueue can do on FreeBSD in Linux, but instead of having a single monolithic syscall to handle everything, Linux takes the approach of providing syscalls which allow almost anything to be represented as a kernel handle.

Since epoll strictly monitors kernel handles, you can register any kernel handle you have with it and get events back when its state changes.  As a comparison to Windows, this basically means that epoll is a kernel-accelerated form of WaitForMultipleObjects in the Win32 API.

You are probably wondering how this works, so here’s a table of commonly used kqueue event filters and the Linux syscall used to get a kernel handle for use with epoll.

BSD event filter              | Linux equivalent
EVFILT_PROC, EVFILT_PROCDESC  | pidfd; alternatively, bind processes to a cgroup and monitor it
EVFILT_AIO                    | aiocb.aio_fildes (treat as a socket)

Hopefully, as you can see, epoll can monitor any kind of kernel resource without having to be modified, thanks to its composable design, which makes it superior to kqueue from a technical-debt perspective.

Interestingly, FreeBSD has added support for Linux’s eventfd recently, so it appears that they may take kqueue in this direction as well.  Between that and FreeBSD’s process descriptors, it seems likely.

A slightly-delayed monthly status update

A few weeks ago, I announced the creation of a security response team for Alpine, of which I am presently the chair.

Since then, the team has been fully chartered by both the previous Alpine core team, and the new Alpine council, and we have gotten a few members on board working on security issues in Alpine.  Once the Technical Steering Committee is fully formed, the security team will report to the TSC and fall under its purview.

Accordingly, I thought it would be prudent to start writing monthly updates summarizing what I’ve been up to.  This one is a little delayed because we’ve been focused on getting Alpine 3.14 out the door (the first RC should come out on Monday)!


One of the primary activities of the security team is to manage the security database.  This is largely done using the secfixes-tracker application I wrote in April.  At AlpineConf, I gave a bubble talk about the new security team, including a demonstration of how we use the secfixes-tracker application to research and mitigate security vulnerabilities.

Since the creation of the security team through the Alpine 3.14 release cycle, other security team volunteers and I have mitigated over 100 vulnerabilities through patching or non-maintainer security upgrades in the pending 3.14 release alone, and many more in past releases which are still supported.

All of this work in finding unpatched vulnerabilities is done using secfixes-tracker.  However, while it finds many vulnerabilities, it is not perfect.  There are both false positives and false negatives, which we are working on improving.

The next step for secfixes-tracker is to integrate it into GitLab, so that maintainers can log in and reject CVEs they deem irrelevant in their packages instead of having to attribute a security fix to version 0.  I am also working on a protocol to allow security trackers to share data with each other in an automated way.


Another role of the security team is to advise the infrastructure team on security-related matters.  In the past few weeks, this primarily focused on two issues: how to securely relay patches from the alpine-aports mailing list into GitLab without compromising the security of aports.git, and our response to recent changes at freenode, where the security team’s recommendation was to leave freenode in favor of OFTC.

Reproducible Builds

Another project of mine personally is working to prove the reproducibility of Alpine package builds, as part of the Reproducible Builds project.  To this end, I hope to have the Alpine 3.15 build fully reproducible.  This will require some changes to abuild so that it produces buildinfo files, as well as a rebuilder backend.  We plan to use the same buildinfo format as Arch, and will likely adapt some of the other reproducible builds work Arch has done to Alpine.

I plan to have a meeting within the next week or two to formulate an official reproducible builds team inside Alpine and lay out the next steps for what we need to do in order to get things going.  In the meantime, join #alpine-reproducible on if you wish to follow along.

I plan for reproducible builds (perhaps getting all of main reproducible) to be a sprint in July, once the prerequisite infrastructure is in place to support it, so stay tuned on that.

apk-tools 3

On this front, there’s not much to report yet.  My goal is to integrate the security database into our APKINDEX, so that we can have apk list --upgradable --security, which lists all of the security fixes you need to apply.  Unfortunately, we are still working to finalize the ADB format which is a prerequisite for providing the security database in ADB format.  It does look like Timo is almost done with this, so once he is done, I will be able to start working on a way to reflect the security database into our APKINDEX files.

The linux-distros list

There is a mailing list which is intended to allow linux distribution security personnel to discuss security issues in private.  As Alpine now has a security team, it is possible for Alpine to take steps to participate on this list.

However… participation on this list comes with a few restrictions: you have to agree to follow all embargo terms in a precise way.  For example, if an embargoed security vulnerability is announced there and the embargo specifies you may not patch your packages until XYZ date, then you must follow that or you will be kicked off the list.

I am not sure it is necessarily appropriate or even valuable for Alpine to participate on the list.  At present, if an embargoed vulnerability falls off a truck and Alpine notices it, we can fix it immediately.  If we join the linux-distros list, then we may be put in a position where we have to hide problems, which I didn’t sign up for.  I consider it a feature that the Alpine security team is operating fully in the open for everyone to see, and want to preserve that as much as possible.

The other problem is that distributions which participate bind their package maintainers to an NDA in order to look at data relevant to their packages.  I don’t like this at all and feel that it is not in the spirit of free software to make contributors acknowledge an NDA.

We plan to discuss this over the next week and see if we can reach consensus as a team on what to do.  I prefer to fix vulnerabilities, not wait to fix vulnerabilities, but obviously I am open to being convinced that there is value to Alpine’s participation on that list.


My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation.  Without their support, I would not be able to work on security full time in Alpine, so thanks!

the whole freenode kerfluffle

But the thing is IRC has always been a glorious thing. The infra has always been sponsored by companies or people. But the great thing about IRC is you can always vote and let the networks and world know which you choose – by using /server.

— Andrew Lee (rasengan), chairman of freenode limited

Yesterday, operational control over freenode was taken over by Andrew Lee, the person who has been owner of freenode limited since 2017.  Myself and others have had questions about this arrangement since we noticed the change in ownership interest in freenode limited back in 2017.

Historically, freenode staff had stated that everything was under control and that Andrew’s involvement in freenode limited had no operational impact on the network.  It turns out that Christel was lying to them: Andrew had operational control and legal authority over the freenode domains.  This led to several current volunteers drafting their resignation letters.

When I asked Andrew about the current state of the freenode domain, one of his associates who I hadn’t spoken to in months (since terminating the Ophion project I was doodling on during lockdown) came out of nowhere and started offering me bribes of staff privileges and money for Alpine.  These developments were concerning to the Alpine council and interim technical committee, so we scheduled an event at AlpineConf to talk about the situation.

Our initial conclusion was that we should wait until the end of the month and see how the situation shakes out, and possibly plan to stand up our own IRC infrastructure or use another network.  Then this happened yesterday:

[02:54:38] <-- ChanServ (ChanServ@services.) has quit (Killed (grumble (My fellow staff so-called 'friends' are about to hand over account data to a non-staff member. If you care about your data, drop your NickServ account NOW before that happens.)))

Given that situation, members of the Alpine council and technical committee gathered together to discuss the situation.  We decided to move to OFTC immediately, as we wanted to give users the widest window of opportunity to delete their data.  This move has now been concluded, and I appreciate the help of the OFTC IRC network staff as well as the Alpine infrastructure team to migrate all of our IRC-facing services across.  The fact that we were able to move so quickly without much disruption is a testament to the fact that IRC and other open protocols like it are vital for the free software community.

So, why does he want to control freenode anyway?

I have had the pleasure of using freenode since 2003, and have been a staff member on several occasions.  My work on IRC, such as starting the IRCv3 project and writing charybdis and atheme, was largely motivated by a desire to improve freenode.  It is unfortunate that one person’s desire for control over an IRC network has lead to so much destruction.

But why is he actually driven to control these IRC networks?  Many believe it is about data mining, or selling the services database, or some other boring but sensible explanation.

But that’s not why.  What I believe to be the real answer is actually much sadder.

I spent several months talking to Andrew and his associate, Shane, last year during lockdown while I was writing an IRCX server (I didn’t have much to do last summer during lockdown and I had always wanted to write an IRCX server).  Shane linked a server to my testnet because he was enthusiastic about IRCX, he had previously been a user on  Both he and Andrew acted as IRCops on the server they linked.  In that time, I learned a lot about both of them, what their thought processes are, how they operate.

In December 2018, Andrew acquired the domain.  On that domain, he wrote a post titled Let’s take IRC further.  Based on this post, we can gather a few details about Andrew’s childhood: he grew up as a marginalized person, and as a result of that marginalization, he was bullied.  IRC was his outlet, a space for him that was actually safe for him to express himself.  Because of that, he was able to learn about technology and free software.

Because of this, I believe Andrew’s intention is to preserve IRC as it was formative in his transition from childhood to adulthood.  He finds IRC to be comforting, in the same way that I find the bunny plushie I sleep with to be comforting.  This is understandable to me, as many people strongly desire to preserve the environment they proverbially grew up in.

However, in implementing his desire to preserve the IRC network he grew up on, he has effectively destroyed it: projects are leaving or planning to leave en masse, which is sad.

Whether you want to participate in Andrew’s imaginary kingdom or not is up to you, but I believe the current situation to be untenable for the free software community.  We cannot depend on an IRC network where any criticism of Andrew may be perceived by him as a traumatic experience.

I strongly encourage everyone to move their projects to either OFTC or Libera Chat.  I will be disconnecting from freenode on May 22nd, and I have no plans to ever return.

And to the volunteers who kept the network going, with whom I had the privilege on several occasions over the years of working with: I wish you luck with Libera Chat.

AlpineConf 2021 recap

Last weekend was AlpineConf, the first one ever.  We held it as a virtual event, and over 700 participants came and went during the weekend.  Although there were many things we learned up to and during the conference that could be improved, I think that the first AlpineConf was a great success!  If you’re interested in rewatching the event, both days have mostly full recordings on the Alpine website.

What worked

We held the conference on a BigBlueButton instance I set up, and used the Alpine GitLab for organizing.  BigBlueButton scaled well: even when we had nearly 100 active participants, the server performed quite well.  Similarly, using issue tracking in GitLab helped us keep the CFP process simple.  I think we will keep this setup for future events, as it served us well.

What didn’t work so well

A major problem with BigBlueButton was attaching conference talks from YouTube.  This caused problems with several privacy extensions which blocked the YouTube player from running.  Also, the YouTube video playback segments are missing from the recordings.  I’m going to investigate alternative options for this which should hopefully help with making the recorded talks play back correctly next time.

Maybe if a BigBlueButton developer sees this, they can work to improve the YouTube viewing feature as well so that it works on the recording playback.  That would be a really nice feature to have.

Other than that, we only had one scheduling SNAFU, and that was basically my fault — I didn’t confirm the timeslot I scheduled the cloud team talk in, and so naturally, the cloud team was largely asleep because they were in US/Pacific time.

Overall though, I think things went well and many people said they enjoyed the conference.  Next year, as we will have some experience to draw from, things will be even better, hopefully.

The talks on day 1…

The first day was very exciting with a lot of talks and blahaj representation.  The talks mostly focused around user stories about Alpine.  We learned about where and how Alpine was being used… from phones, to data centers, to windmills, to the science community.  Here is the list of talks on the first day and my thoughts!

The Beauty of Simplicity, by Cameron Seid (@deltaryz)

This was the first talk of the conference and largely focused on how Cameron managed his Alpine server.  It was a good starting talk for the conference, I think, because it showed how people use Alpine at home in their personal infrastructure.  The talk was prerecorded and Cameron spent a lot of time on editing to make it look flashy.

pmbootstrap: The Swiss Army Knife of postmarketOS development, by Oliver Smith (@ollieparanoid)

postmarketOS is a distribution of Alpine for phones and other embedded devices.

In this talk, Oliver went into pmbootstrap, a tool which helps to automate many of the tasks of building postmarketOS images and packages.  About halfway through the talk, a user joined who I needed to make moderator, but I clicked the wrong button and made them presenter instead.  Thankfully, Oliver was a good sport about it and we were able to fix the video playback quickly.  I learned a lot about how pmbootstrap can be used for any sort of embedded project, and that opens up a lot of possibilities for collaborating with the pmOS team in other embedded applications involving Alpine.

Using Alpine Linux in DataCenterLight, by Nico Schottelius (@telmich)

In this talk, Nico walks us through how Alpine powers many devices in his data center project called DataCenterLight.  He is using Alpine in his routing infrastructure with 10 gigabit links!  The talk went over everything from routing all the way down to individual customer services, and briefly compared Alpine to Debian and Devuan from both a user and development point of view.

aports-qa-bot: automating aports, by Rasmus Thomsen (@Cogitri)

Rasmus talked about the aports-qa-bot he wrote which helps maintainers and the mentoring team review merge requests from contributors.  He went into some detail about the modular design of the bot and how it can be easily extended for other teams and also Alpine derivatives.  The postmarketOS team asked about deploying it for their downstream pmaports repo, so you’ll probably be seeing the bot there soon.

apk-polkit-rs: Using APK without the CLI, by Rasmus Thomsen (@Cogitri)

Rasmus had the next slot as well, where he talked about his apk-polkit-rs project which provides a DBus service that can be called for installing and upgrading packages using apk.  He also talked about the rust crate he is working on to wrap the apk-tools 3 API.  Overall, the future looks very interesting for working with apk-tools from rust!

Alpine building infrastructure update, by Natanael Copa (@ncopa)

Next, Natanael gave a bubble talk about the Alpine building infrastructure.  For me this was largely a trip down memory lane, as I witnessed the build infrastructure evolve first hand.  He talked about how the first generation build infrastructure was a series of IRC bots which reacted to IRC messages in order to trigger a new build, and how the IRC infrastructure evolved from IRC to ZeroMQ to MQTT.

He then showed how the builders work, using a live builder as an example, walking through the design and implementation of the build scripts.  Finally, he proposed some ideas for building a more robust system that allowed for parallelizing the build process where possible.

postmarketOS demo, by Martijn Braam (@MartijnBraam)

Martijn showed us postmarketOS in action on several different phones.  Did I mention he has a lot of phones?  I asked in the Q&A afterwards and he said he had like 6 pinephones and somewhere around 60 other phones.

I have to admire the dedication to reverse engineering phones that would lead to somebody acquiring 60+ phones to tinker with.

Sxmo: Simple X Mobile – A minimalist environment for Linux smartphones, by Maarten van Gompel (@proycon)

Maarten van Gompel, Anjandev Momi and Miles Alan gave a talk about and demonstration of Sxmo, their lightweight phone environment based on dwm, dmenu and a bunch of other tools as plumbing.

The UI reminds me a lot of palmOS.  I suspect if palmOS were still alive and kicking today, it would look like Sxmo.  Phone calls and text messages are routed through shell scripts, a feature I didn’t know I needed until I saw it in action.  Sxmo probably is the killer app for running an actual Linux distribution on your phone.

This UI is absolutely begging for jog-wheels to come back, and I for one hope they do.

Alpine and the larger musl ecosystem (a roundtable discussion)

This got off to a rocky start because I don’t know how to organize discussions like this.  I should have found somebody else to run it, but the discussion was really fruitful nonetheless.  We came to the conclusion that we needed to work more closely together within the musl distribution ecosystem to proactively deal with issues like misinformed upstreams, so that we do not have another Rust-like situation again.  That led to the formation of #musl-distros on freenode to coordinate on these issues.

Taking Alpine to the Edge and Beyond With Linux Foundation’s Project EVE, by Roman Shaposhnik (@rvs)

Roman talked about Project EVE, an edge computing solution being developed under the auspices of the LF Edge working group at Linux Foundation.  EVE (Edge Virtualization Engine) is a distribution of Alpine built with Docker’s LinuxKit, which has multiple Alpine-based containers working together in order to provide an edge computing solution.

He talked about how the cloud has eroded software freedom (after all, you can’t depend on free-as-in-freedom computing when it’s on hardware you don’t own) by encouraging users to trade it for convenience, and how edge computing brings that same convenience in-house, thus solving the software freedom issue.

Afterward, he demonstrated how EVE is deployed on windmills to analyze audio recordings from the windmill to determine their health.  All of that, including the customer application, is running on Alpine.

He concluded the talk with a brief update on the riscv64 port.  It looks like we are well on the way to having the port in Alpine 3.15.

BinaryBuilder.jl: The Subtle Art of Binaries that “Just Work”, by Elliot Saba and Mosè Giordano

Elliot and Mosè talked about BinaryBuilder, which they use to cross-compile software for all platforms supported by the Julia programming language.  They do this by building the software in an Alpine-based environment under Linux namespaces or Docker (on mac).

Amongst other things, they have a series of wrapper scripts around programs like uname which allow them to emulate the userspace commands of the target operating system, which helps convince badly written autoconf scripts to cooperate.

All in all, it was a fascinating talk!

The talks on day 2…

The talks on day 2 were primarily about the technical plumbing of Alpine.

Future of Alpine Linux community chats (a roundtable discussion)

We talked about the current situation on freenode.  The conclusion we came to regarding that was to support the freenode staff in their efforts to find a solution until the end of the month, at which point we would evaluate the situation again.

This led to a discussion about enhancing the IRC experience for new contributors, including the possibility of setting up an internal IRC server for the project to use, as well as working with Element to set up a hosted Matrix server alternative.

We also talked for the first time about the Alpine communities growing on non-free services such as Discord.  Laurent observed that there is value in meeting users where they already are for outreach purposes, but also pointed out that proprietary chat networks impose a software freedom issue that does not exist when we self-host our own services.  Most people agreed with these points, so we concluded that we would figure out plans to properly integrate these unofficial communities into Alpine.

Security tracker demo and security team Q&A

This was kind of a bubble talk.  I gave a demo of the new tracker, as well as an overview of how the current CVE system works with the NVD and CIRCL feeds and so on.  We then talked a bit about how the CVE system could be improved by the Linked Data proposal I am working on, which will be published shortly.

Afterwards, we talked about initiatives like bringing clang's Control Flow Integrity into Alpine, along with a number of other security topics.  It was a fun talk and we covered a lot of ground; it ran for an hour and a half, as the talk in the 15:00 slot had been cancelled.

Alpine s390x port discussion, by me

After the security talk, I talked a bit about running Alpine on mainframes, how they work, and why people still want to use them in 2021.  In the Q&A we talked about big vs little endian and why people aren’t mining Monero on mainframes.

Simplified networking configuration with ifupdown-ng, by me

This was an expanded talk about ifupdown-ng, loosely based on the one Max gave at VirtualNOG last year.  I adapted his talk, replacing Debian-specific content with Alpine content, and talked a bit about NSL (RIP).  The talk seemed to go well; in the Q&A we talked primarily about SR-IOV, which ifupdown-ng does not yet support.

Declarative networking configuration with ifstate, by Thomas Liske (@liske)

After the ifupdown-ng talk, Thomas talked about and demonstrated his ifstate project, which is available as an alternative to ifupdown in Alpine.  Unlike ifupdown-ng which takes a hybrid approach, and ifupdown which takes an imperative approach, ifstate is a fully declarative implementation.  The YAML syntax is quite interesting.  I think ifstate will be quite popular for Alpine users requiring fully declarative configuration.

AlpineConf 2.0 planning discussion

After the networking track, we talked about AlpineConf next year.  The conclusion was that AlpineConf is most valuable as a virtual event, and that if we want a physical presence, existing events like FOSDEM can serve that purpose.

Alpine cloud team talk and Q&A

This wound up being a bit of a bubble talk because I failed to actually confirm whether anyone from the cloud team could give a talk at this time.  Nonetheless the talk was a huge success.  We talked about Alpine in the cloud and how to build on it.

systemd: the good parts, by Christine Dodrill (@Xe)

Christine gave a talk about systemd’s feature set that she would like to see implemented in Alpine somehow.  In the chat, Laurent provided some commentary…

It was a fun talk that was at least somewhat amusing.

Governance event

Finally, to close out the conference, Natanael talked about Alpine governance.  In this event, he announced the dissolution of the Alpine Core Team and its replacement with the Alpine Council.  The Alpine Council will initially be managed by Natanael Copa, Carlo Landmeter and Kevin Daudt.  This group will handle the administrative responsibilities of the project, while a technical steering committee will handle the technical planning.  This arrangement is likely familiar to anyone who has used Fedora; I think it makes sense to copy what works!

Afterwards, we talked a little bit informally about everyone’s thoughts on the conference.

In closing…

Thanks to Natanael Copa for proposing the idea of AlpineConf last year, to Kevin Daudt for helping push the buttons and keeping things going (especially when my internet connection failed due to bad weather), to all of the wonderful presenters (many of whom were giving a talk for the first time ever!), and to everyone who dropped in to participate in the conference!

We will be having a technically-oriented Alpine miniconf in November, and then AlpineConf 2022 next May!  Hopefully you will be at both.  Announcements will be forthcoming about both soon.

using qemu-user emulation to reverse engineer binaries

QEMU is primarily known as the software which provides full system emulation under Linux's KVM.  It can also be used without KVM to do full emulation of machines from the hardware level up.  Finally, there is qemu-user, which emulates individual Linux programs rather than whole machines.  That's what this blog post is about.

The main use case for qemu-user is actually not reverse-engineering, but simply running programs for one CPU architecture on another.  For example, Alpine developers leverage qemu-user when they use dabuild(1) to cross-compile Alpine packages for other architectures: qemu-user is used to run the configure scripts, test suites and so on.  For those purposes, qemu-user works quite well: we are even considering using it to build the entire riscv64 architecture in the 3.15 release.

However, most people don't realize that you can run a qemu-user emulator which targets the same architecture as the host.  After all, that would be a little weird, right?  Most also don't know that you can control the emulator with gdb, which allows you to debug binaries that detect whether they are being debugged.

You don’t need gdb for this to be a powerful reverse engineering tool, however.  The emulator itself includes many powerful tracing features.  Let's explore them by writing and compiling a sample program that inefficiently determines, via mutual recursion, whether a number is even or odd:

#include <stdbool.h>
#include <stdio.h>

bool isOdd(int x);
bool isEven(int x);

bool isOdd(int x) {
    return x != 0 && isEven(x - 1);
}

bool isEven(int x) {
    return x == 0 || isOdd(x - 1);
}

int main(void) {
    printf("isEven(%d): %d\n", 1025, isEven(1025));
    return 0;
}
Compile this program with gcc, by doing gcc -ggdb3 -Os example.c -o example.

The next step is to install the qemu-user emulator for your architecture; in this case, we want the qemu-x86_64 package:

$ doas apk add qemu-x86_64
(1/1) Installing qemu-x86_64 (6.0.0-r1)

Normally, you would also want to install the qemu-openrc package and start the qemu-binfmt service to allow for the emulator to handle any program that couldn’t be run natively, but that doesn’t matter here as we will be running the emulator directly.

The first thing we will do is check to make sure the emulator can run our sample program at all:

$ qemu-x86_64 ./example 
isEven(1025): 0

Alright, all seems to be well.  Before we jump into using gdb with the emulator, let's play around a bit with the tracing features.  When reverse engineering a program, it is common to use tracing tools like strace.  These tools are quite useful, but they suffer from a design flaw: they use ptrace(2) to accomplish the tracing, and ptrace(2) can be detected by the program being traced.  With qemu-user, however, we can do the tracing in a way that is transparent to the program being analyzed:

$ qemu-x86_64 -d strace ./example 
22525 arch_prctl(4098,274903714632,136818691500777464,274903714112,274903132960,465) = 0 
22525 set_tid_address(274903715728,274903714632,136818691500777464,274903714112,0,465) = 22525 
22525 brk(NULL) = 0x0000004000005000 
22525 brk(0x0000004000007000) = 0x0000004000007000 
22525 mmap(0x0000004000005000,4096,PROT_NONE,MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED,-1,0) = 0x0000004000005000 
22525 mprotect(0x0000004001899000,4096,PROT_READ) = 0 
22525 mprotect(0x0000004000003000,4096,PROT_READ) = 0 
22525 ioctl(1,TIOCGWINSZ,0x00000040018052b8) = 0 ({55,236,0,0}) 
isEven(1025): 0 
22525 writev(1,0x4001805250,0x2) = 16 
22525 exit_group(0)

But we can do even more.  For example, we can see how a CPU would hypothetically break a program down into translation buffers full of micro-ops (these are TCG micro-ops, but real CPUs are similar enough for this to give a general understanding of the concept):

$ qemu-x86_64 -d op ./example
ld_i32 tmp11,env,$0xfffffffffffffff0 
brcond_i32 tmp11,$0x0,lt,$L0 

---- 000000400185eafb 0000000000000000 
discard cc_dst 
discard cc_src 
discard cc_src2 
discard cc_op 
mov_i64 tmp0,$0x0 
mov_i64 rbp,tmp0 

---- 000000400185eafe 0000000000000031 
mov_i64 tmp0,rsp 
mov_i64 rdi,tmp0 

---- 000000400185eb01 0000000000000031 
mov_i64 tmp2,$0x4001899dc0 
mov_i64 rsi,tmp2 

---- 000000400185eb08 0000000000000031 
mov_i64 tmp1,$0xfffffffffffffff0 
mov_i64 tmp0,rsp 
and_i64 tmp0,tmp0,tmp1 
mov_i64 rsp,tmp0 
mov_i64 cc_dst,tmp0 

---- 000000400185eb0c 0000000000000019 
mov_i64 tmp0,$0x400185eb11 
sub_i64 tmp2,rsp,$0x8 
qemu_st_i64 tmp0,tmp2,leq,0 
mov_i64 rsp,tmp2 
mov_i32 cc_op,$0x19 
goto_tb $0x0 
mov_i64 tmp3,$0x400185eb11 
st_i64 tmp3,env,$0x80 
exit_tb $0x7f72ebafc040 
set_label $L0 
exit_tb $0x7f72ebafc043

If you want to trace the actual CPU registers for every instruction executed, that’s possible too:

$ qemu-x86_64 -d cpu ./example
RAX=0000000000000000 RBX=0000000000000000 RCX=0000000000000000 RDX=0000000000000000 
RSI=0000000000000000 RDI=0000000000000000 RBP=0000000000000000 RSP=0000004001805690 
R8 =0000000000000000 R9 =0000000000000000 R10=0000000000000000 R11=0000000000000000 
R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=0000000000000000 
RIP=000000400185eafb RFL=00000202 [-------] CPL=3 II=0 A20=1 SMM=0 HLT=0 
ES =0000 0000000000000000 00000000 00000000 
CS =0033 0000000000000000 ffffffff 00effb00 DPL=3 CS64 [-RA] 
SS =002b 0000000000000000 ffffffff 00cff300 DPL=3 DS   [-WA] 
DS =0000 0000000000000000 00000000 00000000 
FS =0000 0000000000000000 00000000 00000000 
GS =0000 0000000000000000 00000000 00000000 
LDT=0000 0000000000000000 0000ffff 00008200 DPL=0 LDT 
TR =0000 0000000000000000 0000ffff 00008b00 DPL=0 TSS64-busy 
GDT=     000000400189f000 0000007f 
IDT=     000000400189e000 000001ff 
CR0=80010001 CR2=0000000000000000 CR3=0000000000000000 CR4=00000220 
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000 
DR6=00000000ffff0ff0 DR7=0000000000000400 
CCS=0000000000000000 CCD=0000000000000000 CCO=EFLAGS 

You can also trace with disassembly for each translation buffer generated:

$ qemu-x86_64 -d in_asm ./example
0x000000400185eafb:  xor    %rbp,%rbp 
0x000000400185eafe:  mov    %rsp,%rdi 
0x000000400185eb01:  lea    0x3b2b8(%rip),%rsi        # 0x4001899dc0 
0x000000400185eb08:  and    $0xfffffffffffffff0,%rsp 
0x000000400185eb0c:  callq  0x400185eb11 

0x000000400185eb11:  sub    $0x190,%rsp 
0x000000400185eb18:  mov    (%rdi),%eax 
0x000000400185eb1a:  mov    %rdi,%r8 
0x000000400185eb1d:  inc    %eax 
0x000000400185eb1f:  cltq    
0x000000400185eb21:  mov    0x8(%r8,%rax,8),%rcx 
0x000000400185eb26:  mov    %rax,%rdx 
0x000000400185eb29:  inc    %rax 
0x000000400185eb2c:  test   %rcx,%rcx 
0x000000400185eb2f:  jne    0x400185eb21

All of these options, and more, can also be stacked; for more ideas, look at qemu-x86_64 -d help.  Now, let's talk about using gdb with qemu-user's gdbserver functionality, which allows gdb to control a remote machine.

To start a program under gdbserver mode, we use the -g argument with a port number.  For example, qemu-x86_64 -g 1234 ./example will start our example program with a gdbserver listening on port 1234.  We can then connect to that gdbserver with gdb:

$ gdb ./example
Reading symbols from ./example... 
(gdb) target remote localhost:1234 
Remote debugging using localhost:1234 
0x000000400185eafb in ?? ()
(gdb) br isEven 
Breakpoint 1 at 0x4000001233: file example.c, line 12.
(gdb) c 

Breakpoint 1, isEven (x=1025) at example.c:12 
12          return x == 0 || isOdd(x - 1);
(gdb) bt full 
#0  isEven (x=1025) at example.c:12 
No locals. 
#1  0x0000004000001269 in main () at example.c:16 
No locals.

All of this is happening without any knowledge or cooperation of the program.  As far as it's concerned, it's running as normal; there is no ptrace or any other weirdness.

However, this is not 100% perfect: a clever program could run the cpuid instruction, check for GenuineIntel or AuthenticAMD, and crash out if it doesn't see that it is running on a legitimate CPU.  Thankfully, qemu-user has the ability to spoof CPUs with the -cpu option.

If you find yourself needing to spoof the CPU, you’ll probably have the best results with a simple CPU type like -cpu Opteron_G1-v1 or similar.  That CPU type spoofs an Opteron 240 processor, which was one of the first x86_64 CPUs on the market.  You can get a full list of CPUs supported by your copy of the qemu-user emulator by doing qemu-x86_64 -cpu help.

There's a lot more that qemu-user emulation can do to help with reverse engineering.  For more ideas, look at qemu-x86_64 -h or similar.