Moving my blog to Oracle cloud

In my past few blog posts, I have been talking about the current state of affairs concerning ARM VPS hosting.  To put my money where my mouth is, I have now migrated my blog to the ARM instances Oracle has to offer, as an actual production use of their cloud.  You might find this surprising, given the last post, but Oracle reached out and explained why their system terminated my original account and we found a solution for that problem.

What happened, anyway?

Back at the end of May, Oracle announced that they were offering ARM VPS servers running on Ampere Altra CPUs.  Accordingly, I was curious, so I signed up for an account on the free tier.  All went well, except that as I was signing up, my now-previous bank declined the initial charge to verify that I had a working credit card.

I was able to sign up anyway, but then a few days later, they charged my card again, which was also declined by my previous bank’s overzealous fraud protection.  Then a few weeks later, I attempted to upgrade, and the same thing happened again: the first charge was declined, I got a text message, retried, and everything went through.  This weirdness with the card reliably being declined on the first try made Oracle’s anti-fraud team anxious, so they understandably decided to cover their own asses and terminate my account.

I’m going to talk in more depth about my relationship with my previous bank soon, but I want to close my accounts out fully with them before I complain about how awful they are: you don’t talk smack about somebody who is holding large sums of your savings, after all.  Needless to say, if you find yourself at a bank being acquired by another bank, run like hell.

Given that Oracle was very proactive in addressing my criticism, and that the issue was caused by something neither myself nor Oracle had any control over (my bank demonstrating very loudly that they needed to be replaced), I decided to give them another chance, and move some of my production services over.

At the moment, since I will no longer be operating my own network as of September, I plan on running my services on a mix of Vultr, Oracle and Linode VMs, as this allows me to avoid Intel CPUs (Oracle offers both ARM and AMD EPYC VMs, while Vultr and Linode also use AMD EPYC).  I will probably run the more FOSS-centric infrastructure on fosshost’s ARM infrastructure, assuming they accept my application.

Installing Alpine on Oracle Cloud

At present, Alpine images are not offered on Oracle’s cloud.  I intend to talk with some of the folks running the service who reached out about getting official Alpine images running in their cloud, as it is a quite decent hosting option.

In the meantime, it is pretty simple to install Alpine.  The first step is to provision an ARM (or x86) instance in their control panel.  You can just use the stock Oracle Linux image, as we will be blasting it away anyway.

Once the image is running, you’ll be presented with a control panel like so:

A control panel for the newly created VPS instance.

The next step is to create an SSH-based serial console.  You will need this to access the Alpine installer.  Scroll down to the resources section and click “Console Connection.”  Then click “Create Console Connection”:

Console connections without any created yet.

This will open a modal dialog, where you can specify the SSH key to use.  You’ll need to use an RSA key, as this creation wizard doesn’t yet recognize Ed25519 keys.  Select “Paste public key” and then paste in your RSA public key, then click “Create console connection” at the bottom of the modal dialog.
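If you don’t already have one, a minimal sketch of generating a dedicated RSA key for the console connection (the file path here is an arbitrary choice):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/oci_console
cat ~/.ssh/oci_console.pub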

The console connection will be created.  Click the menu icon for it, and then click “Copy Serial Console Connection for Linux/Mac.”

Copying the SSH connection command.

Next, open a terminal and paste the command that was copied to your clipboard, and you should be able to access the VPS serial console after dealing with the SSH prompts.

VPS serial console running Oracle Linux

The next step is to SSH into the machine and download the Alpine installer.  This is just ssh opc@1.2.3.4, where 1.2.3.4 is the IP of the instance.  We want to download the installer ISO to /run (a ramdisk), write it to /dev/sda, and then reboot with sysrq b.  Here’s what that looks like:

Preparing the Alpine installer
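For reference, a rough sketch of those commands.  The ISO URL is an assumption; substitute the current aarch64 (or x86_64) release from the Alpine downloads page:

ssh opc@1.2.3.4
sudo su -
cd /run
wget https://dl-cdn.alpinelinux.org/alpine/v3.14/releases/aarch64/alpine-virt-3.14.0-aarch64.iso
# overwrite the boot disk with the installer image
dd if=alpine-virt-3.14.0-aarch64.iso of=/dev/sda
# force an immediate reboot into the installer
echo b > /proc/sysrq-trigger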

If you monitor your serial console window, you’ll find that you’ve been dropped into the Alpine installer ISO.

Alpine installer shell

From here, you can run setup-alpine and follow the directions as usual.  You will want to overwrite the boot media, so answer yes when it asks.

Installing Alpine

At this point, you can reboot, and it will dump you into your new Alpine image.  You might want to set up cloud-init, or whatever, but that’s not important to cover here.

Future plans

At the moment, the plan is to see how things perform, and if they perform well, migrate more services over.  I might also create OCIs with cloud-init enabled for other users of Alpine on Oracle cloud.

Stay tuned!

Oracle cloud sucks

Update: Oracle have made this right, and I am, in fact, now running production services on their cloud.  Thanks to Ross and the other Oracle engineers who reached out offering assistance.  The rest of the blog post is retained for historical purposes.

In my previous blog, I said that Oracle was the best option for cheap ARM hosting.

Yesterday, Oracle rewarded me for that praise by demonstrating they are, in fact, Oracle and terminating my account.  When I contacted their representative, I was told that I was running services on my instance not allowed by their policies (I was running a non-public IRC server that only connected to other IRC servers, and their policies did not discuss IRC at all) and that the termination decision was final.  Accordingly, I can no longer recommend using Oracle’s cloud services for anything — if you use their service, you are at risk of losing your hosting at any time, for any reason they choose to invent, regardless of whether you are a paying customer or not.

That leaves us with exactly zero options for cheap ARM hosting.  Hopefully Amazon will bring ARM options to Lightsail soon.

It’s time for ARM to embrace traditional hosting

ARM is everywhere these days — from phones to hyperscale server deployments.  There is even an ARM workstation available that has decent specs at an acceptable price.  Amazon and Oracle tout white paper after white paper about how their customers have switched to ARM, gotten performance wins and saved money.  Sounds like everything is on the right track, yes?  Well, actually it’s not.

ARM for the classes, x86 for the masses

For various reasons, I’ve been informed that I need to start rethinking my server infrastructure arrangements.  We won’t go into that here, but the recent swearing at San Francisco property developers on my Twitter is highly related.

I am highly allergic to using any infrastructure powered by x86 CPUs, because Intel and AMD both include firmware in the CPU which allows computation to occur without my consent (also known as a backdoor), so that Hollywood can implement a largely pointless (especially on a server) digital restrictions management scheme.  Accordingly, I decided to look at cloud-based hosting solutions using ARM CPUs, which seemed perfectly reasonable at first glance.

Unfortunately, what I found is that ARM hosting is not deployed in a way where individual users can access it at cost-competitive prices.

AWS Graviton (bespoke Neoverse CPUs)

In late 2018, AWS announced the Graviton CPU, which was based on a core design they got when they acquired Annapurna Labs.  This was followed up in 2020 with Graviton2, which is based on the ARM Neoverse N1 core design.  These are decent chips, the performance is quite good, and costs are quite low.

But, how much does it cost for an average person to actually make use of it?  We will assume that the 1 vCPU / 4GB RAM m6g.medium configuration is suitable for this comparison, as it is the most comparable to a modest x86 VPS.

The m6g.medium instance does not come with any transfer, but the first GB is always free on EC2.  Further transfer is $0.09/GB up to 10TB.  By comparison, the Linode 4GB RAM plan comes with 4TB of transfer, so we will use that for our comparison.

Hourly price (m6g.medium) $0.0385
× 720 hours $27.72
+ 3.999TB of transfer ($0.09 × 3999) $359.91
Total: $387.63

Transfer charges aside, the $27.72 monthly charge is quite competitive with Linode, clocking in at only $7.72 more for comparable performance.  But the data transfer charges have the potential to make using Graviton on EC2 very costly.

What about AWS Lightsail?

An astute reader might note that AWS actually does provide traditional VPS hosting as a product, under its Lightsail brand.  But the Lightsail VPS product is x86-only for now.

Amazon could make a huge impact in terms of driving ARM adoption in the hosting ecosystem by bringing Graviton to their Lightsail product.  Capturing Lightsail users into the Graviton ecosystem and then scaling them up to EC2 seems like a no-brainer sales strategy too.  But so far, they haven’t implemented this.

Oracle Cloud Infrastructure

A few months ago, Oracle introduced instances based on Ampere’s Altra CPUs, which are also based on the Neoverse N1 core.

The base configuration (Oracle calls it a shape) is priced at $0.01 hourly and includes a single vCPU and 6GB of memory.  These instances do not come with any inclusive data transfer, but like AWS, data transfer is pooled.  A major difference from AWS, however, is that the first 10TB of transfer is included gratis.

Hourly price $0.01
× 720 hours $7.20
+ 4TB transfer (included gratis) $0
Total: $7.20

I really, really wanted to find a reason to hate on Oracle here.  I mean, they are Oracle.  But I have to admit that Oracle’s cloud product is a lot more similar to traditional VPS hosting than Amazon’s EC2 offerings.  Update: Haha, nevermind!  They came up with a reason for me to hate on them when they terminated my account for no reason.

So, we have one option for a paid ARM VPS, and that is only an option if you are willing to deal with Oracle, which are Oracle.  Did I mention they are Oracle?

Oracle federating its login service with itself

Scaleway

Tons of people told me that Scaleway had ARM VPS for a long time.  And indeed, they used to, but they don’t anymore.  Back when they launched ARMv8 VPS on ThunderX servers, I actually used a Scaleway VPS to port libucontext.

Unfortunately, they no longer offer ARM VPS of any kind, only overpriced x86 ones that are not remotely cost-competitive with anything else on the market.

Mythic Beasts, miniNodes, etc.

These companies offer ARM instances, but they are Raspberry Pi instances, and the pricing is rather expensive for what you get.  I don’t consider these offerings competitive in any way.

Equinix Metal

You can still buy ARM servers on the Equinix Metal platform, but you have to request permission to buy them.  In testing a couple of years ago, I was able to provision a c1.large.arm server on the spot market for $0.25/hour, which translates to $180/monthly.

However, the problem with buying on the spot market is that your server might go away at any time, which means you can’t actually depend on it.

There is also the problem with data transfer: Equinix Metal follows the same billing practices for data transfer as AWS, meaning actual data transfer gets expensive quickly.

However, the folks who run Equinix Metal are great people, and I feel like ARM could work with them to get some sort of side project going where they get ARM servers into the hands of developers at reasonable pricing.  They already have an arrangement like that for FOSS projects with the Works on ARM program.

Conclusions

Right now, as noted above, Oracle is the best game in town for the average person (like me) to buy an ARM VPS.  We need more options.  Amazon should make Graviton available on its Lightsail platform.

It is also possible that, as a side effect of marcan’s Asahi Linux project, we might have cheap Linux dedicated servers on Apple M1 Mac minis soon.  That’s also a space to watch.

the three taps of doom

A few years ago, I worked as the CTO of an advertising startup.  At first, we used Skype for messaging amongst the employees, and then later, we switched to Slack.  The main reason for switching to Slack was that they had an IRC gateway: you could connect to a Slack workspace with an IRC client, which allowed the people who wanted to use IRC to do so, while providing a polished experience for those who were unfamiliar with IRC.

the IRC gateway

In the beginning, Slack had an IRC gateway.  On May 15th, 2018, Slack discontinued the IRC gateway, beginning my descent into Cocytus.  Prior to the shutdown of the IRC gateway, I had always interacted with the Slack workspace via IRC.  This was replaced with the Slack mobile and desktop apps.

The IRC gateway, however, was quite buggy, so it was probably good that they got rid of it.  It did not comply with any reasonable IRC specifications, much less support anything from IRCv3, so the user experience was serviceable but quite disappointing.

the notifications

Switching from IRC to the native Slack clients, I now got to deal with one of Slack’s main features: notifications.  If you’ve ever used Slack, you’re likely familiar with the unholy notification sound, or as I have come to know it, the triple tap of existential doom.  Let me explain.

At this point, we used Slack for everything: chat, paging people, even monitoring tickets coming in.  The workflow was efficient, but due to matters outside my control, revenues were declining.  This led to the CEO becoming quite antsy.  One day he discovered that he could use @all, @tech or @sales to page people with his complaints.

This means that I would now get pages like:

Monitoring: @tech Service rtb-frontend-nyc is degraded
CEO: @tech I demand you implement a filtering feature our customer is requiring to scale up

The monitoring pages were helpful.  The CEO paging us to demand filtering features that spied on users, and that definitely would not result in scaled-up revenue (because the customers were paying CPM), was not.

The pages in question were actually a lot more intense than these tame examples, but it felt like I had to walk on eggshells in order to use Slack.

Quitting that job

In the middle of 2018, I quit that job for various reasons.  As a result, I uninstalled Slack, and immediately felt much better.  But to this day, every time I hear the Slack notification sound, I get anxious.

The moral of this story is: if you use Slack, don’t use it for paging, and make sure your CEO doesn’t have access to the paging features.  It will be a disaster.  And if you’re running a FOSS project, consider not using Slack, as there are likely many technical people who avoid Slack due to their own experiences with it.

Bits relating to Alpine security initiatives in June

As usual, I have been hard at work on various security initiatives in Alpine the past month.  Here is what I have been up to:

Alpine 3.14 release and remediation efforts in general

Alpine 3.14.0 was released on June 15, with the lowest unpatched vulnerability count of any release in the past several years.  While previous Alpine release cycles did well on patching the critical vulnerabilities, the less important ones frequently slipped through the cracks, due to the project being unable to focus on vulnerability remediation until now.

We have also largely cleaned up Alpine 3.13 (there are a few minor vulnerabilities that have not been remediated there yet, as they require ABI changes or careful backporting), and Alpine 3.12 and 3.11 are starting to catch up in terms of unpatched vulnerabilities.

While a release branch will realistically never have zero unpatched vulnerabilities, we are much closer than ever before to having the supported repositories in as optimal a state as we can have them.  Depending on how things play out, this may result in extended security support for the community repository for 3.14, since the introduction of tools and processes has reduced the maintenance burden for security updates.

Finally, with the release of Alpine 3.14, the security support period for Alpine 3.10 draws to a close, so you should upgrade to at least Alpine 3.11 to continue receiving security updates.

secfixes-tracker and the security database

This month saw a minor update to secfixes-tracker, the application which powers security.alpinelinux.org.  This update primarily focused on supporting the new security rejections database, which allows maintainers to reject CVEs from their package with an annotated rationale.

In my previous update, I talked about a proposal which will allow security trackers to exchange data, using Linked Data Notifications.  This will be deployed on security.alpinelinux.org as part of the secfixes-tracker 0.4 release, as we have come to an agreement with the Go and OSV teams about how to handle JSON-LD extensions in the format.

My goal with the Linked Data Notifications effort is to decentralize the current CVE ecosystem, and a longer writeup explaining how we will achieve that is sitting, roughly half done, in my drafts folder.  Stay tuned!

Finally, the license for the security database has been officially defined as CC-BY-SA, meaning that security vendors can now use our security database in their scanners without having a legal compliance headache.

Reproducible Builds

We have begun work on supporting reproducibility in Alpine.  While there is still a lot of work to be done in abuild to support buildinfo files, kpcyrd started to work on making the install media reproducible, beginning with the Raspberry Pi images we ship.

However, he ran into an issue with BusyBox’s cpio not supporting reproducibility, so I added the necessary flags to allow cpio archives to be reproducible, sent the patches upstream to BusyBox, and pushed an updated BusyBox with the patches to Alpine edge.

There are still a few fixes that need to be made to apk, but with some workarounds, we were able to demonstrate reproducible install images for the Raspberry Pi.

The next few steps here will involve validating that the reproducible initramfs works correctly.  For example, I don’t think we need --ignore-devno, just --renumber-inodes, and I also suspect that with --ignore-devno the image won’t actually boot; validation will allow us to verify that everything is OK with the image.
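To illustrate those flags, here is a minimal sketch of generating an initramfs with a reproducibility-patched cpio (the staging directory and output path are hypothetical):

cd /tmp/initramfs-root    # hypothetical staging directory
# sort the file list so archive member order is deterministic, then
# strip the non-deterministic inode and device numbers
find . | sort | cpio -o -H newc --renumber-inodes --ignore-devno | gzip -9 > /tmp/initramfs.gz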

Beyond that, we need reproducible packages, and for that, we need buildinfo files.  That’s next on my list of things to tackle.

The linux-distros list

In the last update, we were discussing whether to join the linux-distros list.  Since then, we concluded that joining the list does not net us anything useful: our post-embargo patching timeframe is the same as distros which participate on the list, and the requirements for sharing vulnerability data with other team members and maintainers were too onerous.  Alpine values transparency; we found that compromising transparency in order to have embargoed security data was not a useful tradeoff for us.

apk-tools 3

Since the last update, Timo has made a lot of progress on the ADB format used in apk-tools 3.  At this point, I think it has come along enough that we can begin working on exposing security information in the ADB-based package indices.

While Alpine itself is not yet publishing ADB-based indices, the features available in the ADB format are required to reflect the security fix information correctly (the current index format does not support structured data at all, and is just a simple key-value store).

I also intend to look at the ADB-based indices to ensure they are reproducible.  This will likely occur within the next few weeks as I work on making the current indices reproducible.

Acknowledgement

My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!

understanding thread stack sizes and how alpine is different

From time to time, somebody reports a bug to some project about their program crashing on Alpine.  Usually, one of two things happens: the developer doesn’t care and doesn’t fix the issue because it works under GNU/Linux, or the developer fixes their program to behave correctly only in the Alpine case, leaving it silently broken on other platforms.

The Default Thread Stack Size

In general, it is my opinion that if your program is crashing on Alpine, it is because your program depends on behavior that is not guaranteed to actually exist, which means your program is not actually portable.  When it comes to this kind of dependency, the typical issue has to do with the thread stack size limit.

You might be wondering: what is a thread stack, anyway?  The answer is quite simple: each thread has its own stack memory, because it’s not really feasible for multiple threads to share the same stack.  On most platforms, that memory is much smaller than the main thread’s stack, though programmers are not necessarily aware of that discontinuity.

Here is a table of common x86_64 platforms and their default stack sizes for the main thread (process) and child threads:

OS Process Stack Size Thread Stack Size
Darwin (macOS, iOS, etc) 8 MiB 512 KiB
FreeBSD 8 MiB 2 MiB
OpenBSD (before 4.6) 8 MiB 64 KiB
OpenBSD (4.6 and later) 8 MiB 512 KiB
Windows 1 MiB 1 MiB
Alpine 3.10 and older 8 MiB 80 KiB
Alpine 3.11 and newer 8 MiB 128 KiB
GNU/Linux 8 MiB 8 MiB

Note the OpenBSD and GNU/Linux default thread stack sizes: they represent the smallest and largest defaults in the table.

Because the Linux kernel has overcommit, GNU/Linux systems use 8 MiB by default, which leads to a potential problem when running code developed against GNU/Linux on other systems.  As most threads only need a small amount of stack memory, other platforms use smaller limits, such as OpenBSD (before 4.6) using only 64 KiB and Alpine using at most 128 KiB by default.  This leads to crashes in code which assumes a full 8 MiB is available for each thread to use.

If you find yourself debugging a weird crash that doesn’t make sense, and your application is multi-threaded, it likely means that you’re exhausting the stack limit.

What can I do about it?

To fix the issue, you will need to either change the way your program is written, or change the way it is compiled.  There are a few options you can take to fix the problem, depending on how much time you’re willing to spend.  In most cases, these sorts of crashes are caused by attempting to manipulate a large variable which is stored on the stack.  Generally, moving the variable off the stack is the best way to fix the issue, but there are alternative options.

Moving the variable off the stack

Let’s say that the code has a large array that is stored on the stack, which causes the stack exhaustion issue.  In this case, the easiest solution is to move it off the stack.  There are two main approaches you can use to do this: thread-local storage and heap storage.  Thread-local storage is a way to reserve additional memory for thread variables; think of it like static, but bound to each thread.  Heap storage is what you’re working with when you use malloc and free.

To illustrate the example, we will adjust this code to use both kinds of storage:

#include <string.h>

void some_function(void) {
    char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}

Thread-local variables are declared with the thread_local keyword.  You must include threads.h in order to use it, and at block scope the declaration must also be marked static (or extern):

#include <string.h>
#include <threads.h>

void some_function(void) {
    static thread_local char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}

You can also use the heap.  The most portable example would be the obvious one:

#include <stdlib.h>
#include <string.h>

const size_t scratchpad_size = 500000;

void some_function(void) {
    char *scratchpad = calloc(1, scratchpad_size);
    if (scratchpad == NULL)
        return;

    memset(scratchpad, 'A', scratchpad_size);

    free(scratchpad);
}

However, if you don’t mind sacrificing portability outside gcc and clang, you can use the cleanup attribute:

#include <stdlib.h>
#include <string.h>

/* the cleanup handler is called with a pointer to the variable (here a
   char **), so free() needs a small wrapper */
static void free_cleanup(char **p) { free(*p); }

#define autofree __attribute__((cleanup(free_cleanup)))

const size_t scratchpad_size = 500000;

void some_function(void) {
    autofree char *scratchpad = calloc(1, scratchpad_size);
    if (scratchpad == NULL)
        return;

    memset(scratchpad, 'A', scratchpad_size);
}

This is probably the best way to fix code like this if you’re not targeting compilers like the Microsoft one.

Adjusting the thread stack size at runtime

pthread_create takes an optional pthread_attr_t pointer as the second parameter.  This can be used to set an alternate stack size for the thread at runtime:

#include <pthread.h>
#include <stddef.h>

void some_function(void);

/* pthread start routines must have the signature void *(*)(void *) */
static void *worker_main(void *arg) {
    (void) arg;
    some_function();
    return NULL;
}

pthread_t worker_thread;

void launch_worker(void) {
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024768);

    pthread_create(&worker_thread, &attr, worker_main, NULL);
    pthread_attr_destroy(&attr);
}

By setting the stack size in the attributes passed to pthread_create, the child thread will have a larger stack.

Adjusting the stack size at link time

In modern Alpine systems, since 2018, it is possible to set the default thread stack size at link time.  This can be done with a special LDFLAGS flag, like -Wl,-z,stack-size=1024768.
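For example, a build invocation using that flag might look like this (the program and source names are placeholders):

gcc -o myapp myapp.c -Wl,-z,stack-size=1024768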

You can also use tools like chelf or muslstack to patch pre-built binaries to use a larger stack, but this shouldn’t be done inside Alpine packaging, for example.

Hopefully, this article is helpful for those looking to learn how to solve the stack size issue.

the end of freenode

My first experience with IRC was in 1999.  I was in middle school, and a friend of mine ordered a Slackware CD from Walnut Creek CDROM.  This was Slackware 3.4, and contained the GNOME 1.x desktop environment on the disc, which came with the BitchX IRC client.

At first, I didn’t really know what BitchX was; I just thought it was a cool program that displayed random ASCII art, and so I tried to connect to various servers with it.  After a while, I found out that an IRC client allowed you to connect to an IRC network, and get help with Slackware.

At that time, freenode didn’t exist.  The Slackware IRC channel was on DALnet, and I started using DALnet to learn more about Slackware.  Like most IRC newbies, I didn’t fare so well: I got banned from #slackware in like 5 minutes or something.  I pleaded for forgiveness in the way only a middle schooler can.  And eventually, I got unbanned and stuck around for a while.  That was my first experience with IRC.

After a few months, I got bored of running Linux and reinstalled Windows 98 on my computer, because I wanted to play games that only worked on Windows, and so, largely, my interest in IRC waned.

A few years passed… I was in eighth grade.  I found out that one of the girls in my class was a witch.  I didn’t really understand what that meant, and so I pressed her for more details.  She said that she was a Wiccan, and that I should read more about it on the Internet if I wanted to know more.  I still didn’t quite understand what she meant, but I looked it up on AltaVista, which linked me to an entire category of sites on dmoz.org.  So, I read through these websites and on one of them I saw:

Come join our chatroom on DALnet: irc.dal.net #wicca

DALnet!  I knew what that was, so I looked for an IRC client that worked on Windows, and eventually installed mIRC.  Then I joined DALnet again, this time to join #wicca.  I found out about a lot of other amazing ideas from the people on that channel, and wound up joining others like #otherkin around that time.  Many of my closest friends to this day are from those days.

At this time, DALnet was the largest IRC network, with almost 150,000 daily users.  Eventually, my friends introduced me to mIRC script packs, like NoNameScript, and I used that for a few years on and off, sometimes using BitchX on Slackware instead, as I figured out how to make my system dual boot at some point.

The DALnet DDoS attacks

For a few years, all was well, until the end of July 2002, when DALnet started being the target of distributed denial of service attacks.  We would, of course, later find out that these attacks were made at the request of Jason Michael Downey (Nessun), who had just launched a competing IRC network called Rizon.

However, this resulted in #slackware and many other technical channels moving from DALnet to irc.openprojects.net, a network that was the predecessor to freenode.  Using screen, I was able to run two copies of the BitchX client, one for freenode, and one for DALnet, but I had difficulties connecting to the DALnet network due to the DDoS attacks.

Early freenode

At the end of 2002, irc.openprojects.net became freenode.  At that time, freenode was a much different place, with community projects like #freenoderadio, a group of people who streamed various ‘radio’ shows on an Icecast server.  Freenode had less than 5,000 users, and it was a community where most people knew each other, or at least knew somebody who knew somebody else.

At this time, freenode ran dancer-ircd with dancer-services, which were written by the Debian developer Andrew Suffield and based on ircd-hybrid 6 and HybServ respectively.

Dancer had a lot of bugs: the software would frequently do weird things, and the services were quite spartan compared to what was available on DALnet.  Based on what DALnet offered, I knew we could make something better for freenode, and so I started to learn about IRCD.

Hatching a plan to make services better

By this time, I was in my last year of high school, and was writing IRC bots in Perl.  I hadn’t really tried to write anything in C yet, but I was learning a little bit about C by playing around with a test copy of UnrealIRCd on my local machine.  And I started to talk to lilo about improving the services.  I knew it could be done, but I didn’t yet know how, which led me to start searching for services projects that were simple and understandable.

In my searching for services software, I found rakaur‘s Shrike project, which was a very simple clone of Undernet’s X service which could be used with ircd-hybrid.  I talked with rakaur, and I learned more about C, and even added some features.  Unfortunately, we had a falling out at that time because a user on the network we ran together found out that he could make rakaur‘s IRC bot run rm -rf --no-preserve-root /, and did so.

After working on Shrike a bit, I finally knew what to do: extend Shrike into a full set of DALnet-like services.  I showed what I was working on to lilo and he was impressed: I became a freenode staff member, and continued to work on the services, and all went well for a while.  He also recruited my friend jilles to help with the coding, and we started fixing bugs in dancer-ircd and dancer-services as an interim solution.  And we started writing atheme as a longer-term replacement to dancer-services, originally under the auspices of freenode.

Spinhome

In early 2006, lilo launched his Spinhome project.  Spinhome was a fundraising effort so that lilo could get a mobile home to replace the double-wide trailer he had been living in.  Some people saw him trying to fundraise while being the owner of freenode as a conflict of interest, which led to a falling out with a lot of staffers, projects, etc.  OFTC went from being a small network to a much larger network during this time.

One side effect of this was that the atheme project got spun out into its own organization: atheme.org, which continues to exist in some form to this day.

The atheme.org project was founded on the concept of promoting digital autonomy, which is basically the network equivalent of software freedom, and has advocated in various ways to preserve IRC in the context of digital autonomy for years.  In retrospect, some of the ways we advocated for digital autonomy were somewhat obnoxious, but as they say, hindsight is always 20/20.

The hit and run

In September 2006, lilo was hit by a motorist while riding his bicycle.  This led to a managerial crisis inside freenode, where there were two rifts: one group which wanted to lead the network was led by Christel Dahlskjaer, while the other was led by Andrew Kirch (trelane).  Christel wanted to update the network to use all of the new software we had developed over the past few years, so atheme.org gave her our support, which convinced enough of the sponsors and so on to also support her.

A few months later, lilo‘s brother tried to claim title to the network to turn it into some sort of business.  This led to Christel and Richard Hartmann (RichiH) meeting with him in order to get him to back away from that attempt.

After that, things largely ran smoothly for several years: freenode switched to atheme, and then they switched to ircd-seven, a customized version of charybdis which we had written to be a replacement for hyperion (our fork of dancer-ircd), after which things ran well until…

Freenode Limited

In 2016, Christel incorporated freenode limited, under the guise that it would be used to organize the freenode #live conferences.  In early 2017, she sold 66% of her stake in freenode limited to Andrew Lee, who I wrote about in last month’s chapter.

All of that led to Andrew’s takeover of the network last month.  Last night, they decided to remove the #fsf and #gnu channels from the network, and k-lined my friend Amin Bandali when he criticized them about it, which means freenode is definitely no longer a network about FOSS.

Projects should use alternative networks, like OFTC or Libera, or better yet, operate their own IRC infrastructure.  Self-hosting is really what makes IRC great: you can run your own server for your community and not be beholden to anyone else.  As far as IRC goes, that’s the future I feel motivated to build.

This concludes my coverage of the freenode meltdown.  I hope people enjoyed it and also understand why freenode was important to me: without lilo‘s decision to take a chance on a dumbfuck kid like myself, I wouldn’t have ever really gotten as deeply involved in FOSS as I have, so to see what has happened has left me heartbroken.

the vulnerability remediation lifecycle of Alpine containers

Anybody who has the responsibility of maintaining a cluster of systems knows about the vulnerability remediation lifecycle: vulnerabilities are discovered, disclosed to vendors, mitigated by vendors and then consumers deploy the mitigations as they update their systems.

In the proprietary software world, the deployment phase is colloquially known as Patch Tuesday, because many vendors release their patches on the second Tuesday of each month.  But how does all of this actually happen, and how do you know what patches you actually need?

I thought it might be nice to look at all the moving pieces in Alpine’s remediation lifecycle, from discovery of the vulnerability, to disclosure to Alpine, to user remediation.  For this example, we will track CVE-2016-20011, a minor vulnerability in the libgrss library concerning a lack of TLS certificate validation when fetching https URIs, which I recently fixed in Alpine.

The vulnerability itself

GNOME’s libsoup is an HTTP client/server library for the GNOME platform, analogous to libcurl.  It has two sets of session APIs: the newer SoupSession API and the older SoupSessionSync/SoupSessionAsync family of APIs.  As a result of creating the newer SoupSession API, it was discovered at some point that the older SoupSessionSync/SoupSessionAsync APIs did not enable TLS certificate validation by default.

As a result of discovering that design flaw in libsoup, Michael Catanzaro, one of the libsoup maintainers, began to audit users of libsoup in the GNOME platform.  One such user is libgrss, which did not take any steps to enable TLS certificate validation on its own, so Michael opened a bug against it in 2016.

Five years passed, and he decided to check up on these bugs.  That led to the filing of a new bug against libgrss in GNOME’s GitLab, as the GNOME Bugzilla service is in the process of being turned down.  As libgrss was still broken in 2021, he requested a CVE identifier for the vulnerability and was issued CVE-2016-20011.

How do CVE identifiers get determined, anyway?

You might notice that the CVE identifier he was issued is CVE-2016-20011, even though it is presently 2021.  Normally, CVE identifiers use the current year, as requesting a CVE identifier is usually an early step in the disclosure process, but CVE identifiers are actually grouped by the year that a vulnerability was first publicly disclosed.  In the case of CVE-2016-20011, the identifier was assigned to the 2016 year because of the public GNOME bugzilla report which was filed in 2016.

The CVE website at MITRE has more information about how CVE identifiers are grouped if you want to know more.

The National Vulnerability Database

Our vulnerability was issued CVE-2016-20011, but how does Alpine actually find out about it?  The answer is quite simple: the NVD.  When a CVE identifier is issued, information about the vulnerability is forwarded along to the National Vulnerability Database activity at NIST, a US governmental agency.  The NVD consumes CVE data and enriches it with additional links and information about the vulnerability.  They also generate Common Platform Enumeration (CPE) rules which are intended to map the vulnerability to an actual product and set of versions.

CPE rules consist of a CPE URI which tries to map a vulnerability to an ecosystem and product name, and an optional set of version range constraints.  For CVE-2016-20011, the NVD staff issued a CPE URI of cpe:2.3:a:gnome:libgrss:*:*:*:*:*:*:*:* and a version range constraint of <= 0.7.0.
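For reference, that rule appears roughly like this in the NVD JSON feed (abbreviated from the configurations section of the entry):

{
  "vulnerable": true,
  "cpe23Uri": "cpe:2.3:a:gnome:libgrss:*:*:*:*:*:*:*:*",
  "versionEndIncluding": "0.7.0"
}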

security.alpinelinux.org

The final step in vulnerability information making its way to Alpine is the security team’s issue tracker.  Every hour, we download the latest version of the CVE-Modified and CVE-Recent feeds offered by the National Vulnerability Database activity.  We then use those feeds to update our own internal vulnerability tracking database.

Throughout the day, the security team pulls various reports from the vulnerability tracking database, for example a list of potential vulnerabilities in edge/community.  The purpose of checking these reports is to see if there are any new vulnerabilities to investigate.

As libgrss is in edge/community, CVE-2016-20011 appeared on that report.

Mitigation

Once we start to work a vulnerability, there are a few steps that we take.  First, we research the vulnerability, by checking the links provided to us through the CVE feed and other feeds the security tracker consumes.  The NVD staff are usually very quick at linking to git commits and other data we can use for mitigating the vulnerability.  However, sometimes, such as in the case of CVE-2016-20011, there is no longer an active upstream maintainer of the package, and we have to mitigate the issue ourselves.

Once we have a patch that is known to fix the issue, we prepare a software update and push it to aports.git.  We then backport the security fix to other branches in aports.git.

Once the fix is committed to all of the appropriate branches, the build servers take over, building a new version of the package with the fixes.  The build servers then upload the new packages to the master mirror, and from there, they get distributed through the mirror network to Alpine’s user community.

Remediation

At this point, if you’re a casual user of Alpine, you would just do something like apk upgrade -Ua and move on with your life, knowing that your system is up to date.

But what if you’re running a cluster of hundreds or thousands of Alpine servers and containers?  How would you know what to patch?  What should be prioritized?

To solve those problems, there are security scanners, which can check containers, images and filesystems for vulnerabilities.  Some are proprietary software, but there are many options that are free.  However, security scanners are not perfect: like Alpine’s vulnerability investigation tool, they sometimes generate both false positives and false negatives.

Where do security scanners get their data?  In most cases for Alpine systems, they get their data from the Alpine security database, a product maintained by the Alpine security team.  Using that database, they check the apk installed database to see what packages and versions are installed in the system.  Let’s look at a few of them.

Creating a test case by mixing Alpine versions

Note: You should never actually mix Alpine versions like this.  If done in an uncontrolled way, you risk system unreliability and your security scanning solution won’t know what to do as each Alpine version’s security database is specific to that version of Alpine.  Don’t create a franken-alpine!

In the case of libgrss, we know that 0.7.0-r1 and newer contain the fix for CVE-2016-20011, and that fixed package has already been published to Alpine 3.13.  So, where can we get the vulnerable 0.7.0-r0?  From Alpine 3.12, of course.  Accordingly, we make a filesystem with apk and install Alpine 3.12 into it:

nanabozho:~# apk add --root ~/test-image --initdb --allow-untrusted -X http://dl-cdn.alpinelinux.org/v3.12/main -X http://dl-cdn.alpinelinux.org/v3.12/community alpine-base libgrss-dev=0.7.0-r0
[...]
OK: 126 MiB in 92 packages
nanabozho:~# apk upgrade --root ~/test-image -X http://dl-cdn.alpinelinux.org/v3.13/main -X http://dl-cdn.alpinelinux.org/v3.13/community
[...]
OK: 127 MiB in 98 packages
nanabozho:~# apk info --root ~/test-image libgrss
Installed:                              Available:
libgrss-0.7.0-r0                      ? 
nanabozho:~# cat ~/test-image/etc/alpine-release
3.13.5

Now that we have our image, let’s see what detects the vulnerability, and what doesn’t.

trivy

Trivy is considered by many to be the most reliable scanner for Alpine systems, but can it detect this vulnerability?  In theory, it should be able to.

I have installed trivy to /usr/local/bin/trivy on my machine by downloading the go binary from the GitHub release.  They have a script that can do this for you, but I’m not a huge fan of curl | sh type scripts.

To scan a filesystem image with trivy, you do trivy fs /path/to/filesystem:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-07T23:48:40.308-0600 INFO Detected OS: alpine
2021-06-07T23:48:40.308-0600 INFO Detecting Alpine vulnerabilities...
2021-06-07T23:48:40.309-0600 INFO Number of PL dependency files: 0
[
  {
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine"
  }
]

Hmm, that’s strange.  I wonder why?

nanabozho:~# trivy --debug fs ~/test-image/
2021-06-07T23:42:54.036-0600 DEBUG Severities: UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL
2021-06-07T23:42:54.038-0600 DEBUG cache dir: /root/.cache/trivy
2021-06-07T23:42:54.039-0600 DEBUG DB update was skipped because DB is the latest
2021-06-07T23:42:54.039-0600 DEBUG DB Schema: 1, Type: 1, UpdatedAt: 2021-06-08 00:19:21.979880152 +0000 UTC, NextUpdate: 2021-06-08 12:19:21.979879952 +0000 UTC, DownloadedAt: 2021-06-08 05:23:09.354950757 +0000 UTC

Ah, trivy’s security database only updates twice per day, so trivy has not yet become aware that CVE-2016-20011 is mitigated by libgrss-0.7.0-r1.

I rebuilt trivy’s database locally and put it in ~/.cache/trivy/db/trivy.db:

nanabozho:~# trivy fs -f json ~/test-image/
2021-06-08T01:37:20.574-0600	INFO	Detected OS: alpine
2021-06-08T01:37:20.574-0600	INFO	Detecting Alpine vulnerabilities...
2021-06-08T01:37:20.576-0600	INFO	Number of PL dependency files: 0
[
  {
    "Target": "localhost (alpine 3.13.5)",
    "Type": "alpine",
    "Vulnerabilities": [
      {
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        },
        "SeveritySource": "nvd",
        "PrimaryURL": "https://avd.aquasec.com/nvd/cve-2016-20011",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": [
          "CWE-295"
        ],
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
          }
        },
        "References": [
          "https://bugzilla.gnome.org/show_bug.cgi?id=772647",
          "https://gitlab.gnome.org/GNOME/libgrss/-/issues/4"
        ],
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"
      },
      {
        "VulnerabilityID": "CVE-2016-20011",
        "PkgName": "libgrss-dev",
        "InstalledVersion": "0.7.0-r0",
        "FixedVersion": "0.7.0-r1",
        "Layer": {
          "DiffID": "sha256:4bd83511239d179fb096a1aecdb2b4e1494539cd8a0a4edbb58360126ea8d093"
        },
        "SeveritySource": "nvd",
        "PrimaryURL": "https://avd.aquasec.com/nvd/cve-2016-20011",
        "Description": "libgrss through 0.7.0 fails to perform TLS certificate verification when downloading feeds, allowing remote attackers to manipulate the contents of feeds without detection. This occurs because of the default behavior of SoupSessionSync.",
        "Severity": "HIGH",
        "CweIDs": [
          "CWE-295"
        ],
        "CVSS": {
          "nvd": {
            "V2Vector": "AV:N/AC:L/Au:N/C:N/I:P/A:N",
            "V3Vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N",
            "V2Score": 5,
            "V3Score": 7.5
          }
        },
        "References": [
          "https://bugzilla.gnome.org/show_bug.cgi?id=772647",
          "https://gitlab.gnome.org/GNOME/libgrss/-/issues/4"
        ],
        "PublishedDate": "2021-05-25T21:15:00Z",
        "LastModifiedDate": "2021-06-01T17:03:00Z"
      }
    ]
  }
]

Ah, that’s better.

clair

Clair is a security scanner originally written by the CoreOS team, and now maintained by Red Hat.  It is considered the gold standard for security scanning of containers.  How does it do with the filesystem we baked?

nanabozho:~# clairctl report ~/test-image/
2021-06-08T00:11:04-06:00 ERR error="UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:root/test-image Type:repository]]"

Oh, right, it can’t just scan a filesystem.  One second.

nanabozho:~$ cd ~/dev-src/clair
nanabozho:~$ make local-dev-up-with-quay
[a bunch of commands later]
nanabozho:~$ clairctl report test-image:1
test-image:1 found libgrss 0.7.0-r0 CVE-2016-20011 (fixed: 0.7.0-r1)

As you can see, clair does succeed in finding the vulnerability, when you bake an actual Docker image and publish it to a local quay instance running on localhost.

But this is really a lot of work just to scan for vulnerabilities, so I wouldn’t recommend clair for that.

grype

grype is a security scanner made by Anchore.  They talk a lot about how Anchore’s products can also be used to build a Software Bill of Materials for a given image.  Let’s see how it goes with our test image:

nanabozho:~# grype dir:~/test-image/
✔ Vulnerability DB [updated]
✔ Cataloged packages [98 packages]
✔ Scanned image [3 vulnerabilities]
NAME     INSTALLED     FIXED-IN              VULNERABILITY  SEVERITY 
libgrss  0.7.0-r0      (fixes indeterminate) CVE-2016-20011 High 
libxml2  2.9.10-r7     (fixes indeterminate) CVE-2019-19956 High 
openrc   0.42.1-r19    (fixes indeterminate) CVE-2018-21269 Medium

grype does detect that a vulnerable libgrss is installed, but the (fixes indeterminate) seems fishy to me.  There also appear to be some other hits that the other scanners didn’t notice.  Let’s fact-check this against a pure Alpine 3.13 container:

nanabozho:~# grype dir:~/test-image-pure/
✔ Vulnerability DB [no update available]
✔ Cataloged packages [98 packages]
✔ Scanned image [3 vulnerabilities]
NAME     INSTALLED     FIXED-IN              VULNERABILITY  SEVERITY 
libgrss  0.7.0-r1      (fixes indeterminate) CVE-2016-20011 High 
libxml2  2.9.10-r7     (fixes indeterminate) CVE-2019-19956 High 
openrc   0.42.1-r19    (fixes indeterminate) CVE-2018-21269 Medium

Oh no, it detects 0.7.0-r1 as vulnerable too, which I assume is simply because Anchore’s database hasn’t updated yet.  Researching the other two vulnerabilities, the openrc one seems to be a vulnerability we missed, while the libxml2 one is a false positive.

I think, however, it is important to note that Anchore’s scanning engine assumes a package is vulnerable if there is a CVE and the distribution hasn’t acknowledged a fix.  That may or may not be reliable enough in practice, but it is an admittedly interesting approach.

Conclusion

For vulnerability scanning, I have to recommend either trivy or grype.  Clair is complicated to set up and is really geared at people scanning entire container registries at once.  In general, I would recommend trivy over grype simply because it does not speculate about unconfirmed vulnerabilities, which I think is a distraction to developers, but grype has a lot of potential as well, though they may want to add the ability to only scan for confirmed vulnerabilities.

In general, I hope this blog entry answers a lot of questions about the remediation lifecycle as well.

actually, BSD kqueue is a mountain of technical debt

A side effect of the whole freenode kerfuffle is that I’ve been looking at IRCD again.  IRC is, of course, a very weird and interesting place, and the smaller community of people who run IRCDs is even weirder and more interesting.

However, in that community of IRCD administrators there happen to be a few incorrect systems programming opinions that have been cargo-culted around for years.  This particular blog is about one of those bikesheds, namely the kqueue vs epoll debate.

You’ve probably heard it before.  It goes something like this, “BSD is better for networking, because it has kqueue.  Linux has nothing like kqueue, epoll doesn’t come close.”  While I agree that epoll doesn’t come close, I think that’s actually a feature that has led to a much more flexible and composable design.

In the beginning…

Originally, IRCDs, like most daemons, used select for polling sockets for readiness, as this was the first polling API available on systems with BSD sockets.  The select syscall works by taking a set of three bitmaps, with each bit describing a file descriptor number: bit 1 refers to file descriptor 1 and so on.  The bitmaps are the read_set, write_set and err_set, which map to sockets that can be read, written to, or have errors respectively.  Due to design defects in the select syscall, it can only support up to FD_SETSIZE file descriptors on most systems.  This can be mitigated by making fd_set an arbitrarily large bitmap and depending on fdmax to be the upper bound, which is what WinSock has traditionally done on Windows.

The select syscall clearly had some design deficits that negatively affected scalability, so AT&T introduced the poll syscall in System V UNIX.  The poll syscall takes an array of struct pollfd of user-specified length, and updates a bitmap of flags in each struct pollfd entry with the current status of each socket.  Then you iterate over the struct pollfd list.  This is naturally a lot more efficient than select, where you have to iterate over all file descriptors up to fdmax and test for membership in each of the three bitmaps to ascertain each socket’s status.
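As a sketch of what the poll flow looks like (error handling omitted):

#include <poll.h>

/* block until something is ready, then walk the array checking the
   revents bitmap that poll filled in */
void wait_and_dispatch(struct pollfd *fds, nfds_t nfds) {
    int ready = poll(fds, nfds, -1);

    for (nfds_t i = 0; ready > 0 && i < nfds; i++) {
        if (fds[i].revents & (POLLIN | POLLERR | POLLHUP)) {
            /* fds[i].fd is readable or has an error; dispatch it here */
            ready--;
        }
    }
}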

It can be argued that select was bounded by FD_SETSIZE (which is usually 1024 sockets), while poll begins to have serious scalability issues at around 10240 sockets.  These arbitrary benchmarks have been referred to as the C1K and C10K problems respectively.  Dan Kegel has a very lengthy post on his website about his experiences mitigating the C10K problem in the context of running an FTP site.

Then there was kqueue…

In July 2000, Jonathan Lemon introduced kqueue into FreeBSD, which quickly propagated into the other BSD forks as well.  kqueue is a kernel-assisted event notification system using two syscalls: kqueue and kevent.  The kqueue syscall creates a handle in the kernel represented as a file descriptor, which a developer uses with kevent to add and remove event filters.  Event filters can match against file descriptors, processes, filesystem paths, timers, and so on.

This design allows for a single-threaded server to process hundreds of thousands of connections at once, because it can register all of the sockets it wishes to monitor with the kernel and then lazily iterate over the sockets as they have events.
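A minimal sketch of that flow (error handling omitted):

#include <sys/types.h>
#include <sys/event.h>

/* register a socket for read-readiness events with an existing kqueue */
int watch_socket(int kq, int sockfd) {
    struct kevent ev;

    EV_SET(&ev, sockfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    return kevent(kq, &ev, 1, NULL, 0, NULL);
}

/* in the main loop, lazily collect whatever events are ready */
int collect_events(int kq, struct kevent *out, int nout) {
    return kevent(kq, NULL, 0, out, nout, NULL);
}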

Most IRCDs have supported kqueue for the past 15 to 20 years.

And then epoll…

In October 2002, Davide Libenzi got his epoll patch merged into Linux 2.5.44.  Like with kqueue, you use the epoll_create syscall to create a kernel handle which represents the set of descriptors to monitor.  You use the epoll_ctl syscall to add or remove descriptors from that set.  And finally, you use epoll_wait to wait for kernel events.

In general, the scalability aspects are the same to the application programmer: you have your sockets, you use epoll_ctl to add them to the kernel’s epoll handle, and then you wait for events, just like you would with kevent.
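The equivalent sketch with epoll (error handling omitted):

#include <sys/epoll.h>

/* register a socket for read-readiness events with an existing epoll handle */
int watch_socket(int epfd, int sockfd) {
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sockfd };

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &ev);
}

/* in the main loop, collect whatever events are ready */
int collect_events(int epfd, struct epoll_event *out, int nout) {
    return epoll_wait(epfd, out, nout, -1);
}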

Like kqueue, most IRCDs have supported epoll for the past 15 years.

What is a file descriptor, anyway?

To understand the argument I am about to make, we need to talk about file descriptors.  UNIX uses the term file descriptor a lot, even when referring to things which are clearly not files, like network sockets.  Outside the UNIX world, a file descriptor is usually referred to as a kernel handle.  Indeed, in Windows, kernel-managed resources are given the HANDLE type, which makes this relationship more clear.  Essentially, a kernel handle is an opaque reference to an object in kernel space, and the astute reader may notice some similarities to the object-capability model as a result.

Now that we understand that file descriptors are actually just kernel handles, we can now talk about kqueue and epoll, and why epoll is actually the correct design.

The problem with event filters

The key difference between epoll and kqueue is that kqueue operates on the notion of event filters instead of kernel handles.  This means that any time you want kqueue to do something new, you have to add a new type of event filter.

FreeBSD presently has 10 different event filter types: EVFILT_READ, EVFILT_WRITE, EVFILT_EMPTY, EVFILT_AIO, EVFILT_VNODE, EVFILT_PROC, EVFILT_PROCDESC, EVFILT_SIGNAL, EVFILT_TIMER and EVFILT_USER.  Darwin has additional event filters concerning monitoring Mach ports.

Other than EVFILT_READ, EVFILT_WRITE and EVFILT_EMPTY, all of these event filter types are related to entirely different concerns in the kernel: they don’t monitor kernel handles, but rather specific kernel subsystems other than sockets.

This makes for a powerful API, but one which lacks composability.

epoll is better because it is composable

It is possible to do almost everything that kqueue can do on FreeBSD in Linux, but instead of having a single monolithic syscall to handle everything, Linux takes the approach of providing syscalls which allow almost anything to be represented as a kernel handle.

Since epoll strictly monitors kernel handles, you can register any kernel handle you have with it and get events back when its state changes.  As a comparison to Windows, this basically means that epoll is a kernel-accelerated form of WaitForMultipleObjects in the Win32 API.

You are probably wondering how this works, so here’s a table of commonly used kqueue event filters and the Linux syscall used to get a kernel handle for use with epoll.

BSD event filter Linux equivalent
EVFILT_READ, EVFILT_WRITE, EVFILT_EMPTY Pass the socket with EPOLLIN etc.
EVFILT_VNODE inotify
EVFILT_SIGNAL signalfd
EVFILT_TIMER timerfd
EVFILT_USER eventfd
EVFILT_PROC, EVFILT_PROCDESC pidfd, alternatively bind processes to a cgroup and monitor cgroup.events
EVFILT_AIO aiocb.aio_fildes (treat as socket)

Hopefully, as you can see, epoll can automatically monitor any kind of kernel resource without having to be modified, due to its composable design, which makes it superior to kqueue from the perspective of having less technical debt.
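As a concrete example, a periodic timer (kqueue’s EVFILT_TIMER) becomes just another kernel handle registered with the same epoll loop.  A minimal sketch:

#include <sys/epoll.h>
#include <sys/timerfd.h>

/* create a one-second periodic timer and register it with an existing
   epoll handle; expirations show up as EPOLLIN like any other event */
int add_periodic_timer(int epfd) {
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
    struct itimerspec its = {
        .it_value    = { .tv_sec = 1 },
        .it_interval = { .tv_sec = 1 },
    };
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = tfd };

    timerfd_settime(tfd, 0, &its, NULL);
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);
    return tfd;
}

When the timer fires, the descriptor becomes readable and the event loop reads a uint64_t expiration count from it, with no timer-specific machinery in epoll itself.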

Interestingly, FreeBSD has added support for Linux’s eventfd recently, so it appears that they may take kqueue in this direction as well.  Between that and FreeBSD’s process descriptors, it seems likely.

A slightly-delayed monthly status update

A few weeks ago, I announced the creation of a security response team for Alpine, of which I am presently the chair.

Since then, the team has been fully chartered by both the previous Alpine core team, and the new Alpine council, and we have gotten a few members on board working on security issues in Alpine.  Once the Technical Steering Committee is fully formed, the security team will report to the TSC and fall under its purview.

Accordingly, I thought it would be prudent to start writing monthly updates summarizing what I’ve been up to.  This one is a little delayed because we’ve been focused on getting Alpine 3.14 out the door (the first RC should come out on Monday)!

secfixes-tracker

One of the primary activities of the security team is to manage the security database.  This is largely done using the secfixes-tracker application I wrote in April.  At AlpineConf, I gave a bubble talk about the new security team, including a demonstration of how we use the secfixes-tracker application to research and mitigate security vulnerabilities.

Since the creation of the security team through the Alpine 3.14 release cycle, the other security team volunteers and I have mitigated over 100 vulnerabilities through patching or non-maintainer security upgrades in the pending 3.14 release alone, and many more in past releases which are still supported.

All of this work in finding unpatched vulnerabilities is done using secfixes-tracker.  However, while it finds many vulnerabilities, it is not perfect.  There are both false positives and false negatives, which we are working on improving.

The next step for secfixes-tracker is to integrate it into GitLab, so that maintainers can log in and reject CVEs they deem irrelevant in their packages instead of having to attribute a security fix to version 0.  I am also working on a protocol to allow security trackers to share data with each other in an automated way.

Infrastructure

Another role of the security team is to advise the infrastructure team on security-related matters.  In the past few weeks, this primarily focused on two issues: how to securely relay patches from the alpine-aports mailing list into GitLab without compromising the security of aports.git, and our response to the recent changes at freenode, where the security team’s recommendation was to leave freenode in favor of OFTC.

Reproducible Builds

Another personal project of mine is working to prove the reproducibility of Alpine package builds, as part of the Reproducible Builds project.  To this end, I hope to have the Alpine 3.15 build fully reproducible.  This will require some changes to abuild so that it produces buildinfo files, as well as a rebuilder backend.  We plan to use the same buildinfo format as Arch, and will likely adapt some of the other reproducible builds work Arch has done to Alpine.

I plan to have a meeting within the next week or two to formulate an official reproducible builds team inside Alpine and lay out the next steps for what we need to do in order to get things going.  In the meantime, join #alpine-reproducible on irc.oftc.net if you wish to follow along.

I plan for reproducible builds (perhaps getting all of main reproducible) to be a sprint in July, once the prerequisite infrastructure is in place to support it, so stay tuned on that.

apk-tools 3

On this front, there’s not much to report yet.  My goal is to integrate the security database into our APKINDEX, so that we can have apk list --upgradable --security, which lists all of the security fixes you need to apply.  Unfortunately, we are still working to finalize the ADB format which is a prerequisite for providing the security database in ADB format.  It does look like Timo is almost done with this, so once he is done, I will be able to start working on a way to reflect the security database into our APKINDEX files.

The linux-distros list

There is a mailing list which is intended to allow linux distribution security personnel to discuss security issues in private.  As Alpine now has a security team, it is possible for Alpine to take steps to participate on this list.

However… participation on this list comes with a few restrictions: you have to agree to follow all embargo terms in a precise way.  For example, if an embargoed security vulnerability is announced there and the embargo specifies you may not patch your packages until XYZ date, then you must follow that or you will be kicked off the list.

I am not sure it is necessarily appropriate or even valuable for Alpine to participate on the list.  At present, if an embargoed vulnerability falls off a truck and Alpine notices it, we can fix it immediately.  If we join the linux-distros list, then we may be put in a position where we have to hide problems, which I didn’t sign up for.  I consider it a feature that the Alpine security team is operating fully in the open for everyone to see, and want to preserve that as much as possible.

The other problem is that distributions which participate bind their package maintainers to an NDA in order to look at data relevant to their packages.  I don’t like this at all and feel that it is not in the spirit of free software to make contributors acknowledge an NDA.

We plan to discuss this over the next week and see if we can reach consensus as a team on what to do.  I prefer to fix vulnerabilities, not wait to fix vulnerabilities, but obviously I am open to being convinced that there is value to Alpine’s participation on that list.

Acknowledgement

My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation.  Without their support, I would not be able to work on security full time in Alpine, so thanks!