GNU nano is my editor of choice

I have been using GNU nano for the overwhelming majority of my life.  Like an old friend, nano has always been reliable and has never failed me where other text editors have.  By far, it has been the most influential software I have ever used regarding how I approach the design of my own software.

The vim vs emacs flame war

I’ve used both vim and emacs.  I don’t like either of them, for differing reasons: modal editing doesn’t really fit my mental model of how an editor should work, and Emacs Lisp is not really a particularly fun language to use simply for customizing the behavior of an editor — as they say, Emacs is a nice operating system, it just needs a good editor.

In all cases, I think nano is a much better editor (at least for me): when properly configured (and previously with some patching), it provides all of the features from vim I would want anyway, but in a modeless format.

A note about pico

As most people know, GNU nano began its life as a clone of UW pico.  Pico (short for PIne COmposer) was bundled with the PINE email client, which was written by the University of Washington.  Unfortunately, PINE was distributed under a custom license which had many problems.  This was eventually solved when the University of Washington released ALPINE (short for Alternatively Licensed PINE) under the Apache 2.0 license.

The licensing problem in combination with a desire to make a more capable editor based on the overall pico user experience led to the creation of GNU nano.

In the Alpine Linux distribution, both pico and nano are available.  Here’s what pico looks like by default:

A screenshot of Pico showing some code, in its default configuration. The help options and titlebar are present.

The default nano experience

Like with pico, the default UI for nano is quite boring to look at.  Here is GNU nano displaying the same file with the default configuration:

The GNU nano editor showing some code in its default configuration. The help bar highlights undo/redo support and other features not present in Pico.

As you can hopefully see, the default nano configuration is quite similar to that of pico.  However, unlike pico, it can be changed by editing the ~/.nanorc file.

Building something like vim using .nanorc

What I want in an editor is something that basically looks like vim, but is modeless like nano.  Something like this:

GNU nano displaying source code as I have configured it, with syntax highlighting and minibar mode enabled.

But how do we get there?  The answer is simple: we use the ~/.nanorc file.

GNU nano displaying my .nanorc file. Some features are enabled, and some syntax highlighting packages are included.

Because many people wanted the same thing (vim-like functionality with modeless editing), nano gained several third-party patches which allowed for this.  For the most part, these patches (or equivalent functionality) have been included upstream in recent years.

Getting most of the way to a vim-like look and feel, without syntax highlighting, is quite simple.  You simply need to add these lines to your ~/.nanorc file with any recent version of nano:

# enables the minibar feature
set minibar

# disables the shortcut hints
set nohelp

That gets you something like this:

GNU nano with minibar and help disabled.

However, that minibar looks a little ugly with the inverse text.  The good news is that we can disable the inverse text by adding another snippet to ~/.nanorc:

# disable inverse text for the minibar
set titlecolor normal,normal

The way this works is by setting the foreground and background colors for the titlebar to normal, which means that nano shouldn’t change whatever color is already set.  That gives us:

GNU nano with minibar enabled, help disabled, and titlecolor set to normal/normal.
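Putting the pieces together, the vim-like base configuration is just a few lines.  The last two options below are extras I am assuming you may also want (they are standard nano options, not required for the look above):

```nanorc
# compact status line at the top instead of the titlebar
set minibar

# hide the two-row shortcut hints at the bottom
set nohelp

# keep the minibar in the terminal's own colors
set titlecolor normal,normal

# optional extras: show line numbers, and keep the
# indentation of the previous line when starting a new one
set linenumbers
set autoindent
```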

Enabling syntax highlighting

There are two ways that syntax highlighting can be enabled in nano: both come down to including configuration snippets to enable it.  GNU nano comes with some sample syntax highlighting configuration, which on Alpine systems is available in the nano-syntax package, but I don’t personally use it, as the color scheme is quite ugly.

Instead, I use an improved syntax highlighting package that is distributed on GitHub.  To install it, you can just do something like:

nanabozho:~$ git clone <repository-url> ~/.nano/

This will install the syntax highlighting package to ~/.nano.  At that point, you just add include lines for the syntax highlighters you want to enable:

include "~/.nano/c.nanorc"
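If you would rather enable every syntax definition the package ships at once, nano's include directive also accepts wildcards, so a single line does the job (assuming the package was cloned to ~/.nano):

```nanorc
# pull in every syntax definition from the package at once
include "~/.nano/*.nanorc"
```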

Once you do that, you’re done and left with a nano that looks like this:

GNU nano displaying source code as I have configured it, with syntax highlighting and minibar mode enabled.

Hopefully this post demonstrates that nano is a quite capable editor in its own right.

On the topic of community management, CoCs, etc.

Many people may remember that at one point, Alpine had a rather troubled community, which to put it diplomatically, resulted in a developer leaving the project.  This was the result of not properly managing the Alpine community as it grew — had we taken early actions to ensure appropriate moderation and community management, that particular incident would never have happened.

We did ultimately fix this issue and now have a community that tries to be friendly, welcoming and constructive, but it took a lot of work to get there.  As I was one of the main people who did that work, I think it might be helpful to talk about what I’ve learned through that process.

Moderation is critical

For large projects like Alpine, active moderation is the most crucial aspect.  It is basically the part that makes or breaks everything else you try to do.  Building the right moderation team is also important: it needs to be a team that everyone can believe in.

That means that the people who are pushing for community management may or may not be the right people to do the actual day-to-day moderation work, and should rather focus on policy.  This is because some members will be biased against the people pushing for changes in the way the community is managed.  Building a moderation team that gently enforces established policy, but is otherwise perceived as neutral, is critical to success.

Policy statements (such as Codes of Conduct)

It is not necessarily a requirement to write a Code of Conduct.  However, if you are retrofitting one into a pre-existing community, it needs to be done from the bottom up, allowing everyone to have their say.  Yes, you will get people who present bad-faith arguments, whether because they are resistant to change or because they see no problem with the status quo; in most cases, it is simple resistance to change.  By including the community in the discussion about its community management goals, you ensure that people will generally believe in the governance decisions made.

Alpine did ultimately adopt a Code of Conduct.  Most people have never read it, and it doesn’t matter.  When we wrote it, we were writing it to address specific patterns of behavior we wanted to remove from the community space.  The real purpose of a Code of Conduct is simply to set expectations, both from participants and the moderation team.

However, if you do adopt a Code of Conduct, you must actually enforce it as needed, which brings us back to moderation.  I have unfortunately seen many projects in the past few years which have simply clicked the “Add CoC” button on GitHub, attached a copy of the Contributor Covenant, and then done exactly nothing to align their community with the Code of Conduct they published.  Publishing a Code of Conduct is an optional first step to improving community relations, but it is never the last step.

Fostering inclusivity

The other key part of building a healthy community is to build a community where everyone feels like they are represented.  This is achieved by encouraging community participation in governance, both at large, and in a targeted way: the people making the decisions and moderating the community should ideally look like the people who actually use the software created.

This means that you should try to encourage women, people of color and other marginalized people to participate in project governance.  One way of doing so is by amplifying their work in your project.  You should also amplify the work of other contributors, too.  Basically, if people are doing cool stuff, the community team should make everyone aware of it.  A great side effect of a community team actively doing this is that it encourages people to work together constructively, which reinforces the community management goals.

Final thoughts

Although it was not easy, Alpine ultimately implemented all of the above, and the community is much healthier than it was even a few years ago.  People are happy, code is being written, and we’re making progress on substantive improvements to the Alpine system, as a community.

Change is scary, but in the long run, I think everyone in the Alpine community agrees by now that it was worth it.  Hopefully other communities will find this advice helpful, too.

Bits relating to Alpine security initiatives in July

Another month has passed, and we’ve gotten a lot of work done.  No big announcements to make, but lots of incremental progress, bikeshedding and meetings.  We have been laying the groundwork for several initiatives in Alpine 3.15, as well as working with other groups to find a path forward on vulnerability information sharing.

The Technical Steering Committee

By far the biggest news for this update is that the nascent Alpine Core Team has officially split into the Alpine Council and TSC.  I am a member of the TSC, representing the security team.  We have now had two TSC meetings, and the TSC is already working through a couple of issues referred to it by the security team.  There is still a lot to do involving figuring out the workflow of the TSC, but we are making progress.

sudo deprecation

The first issue the security team referred to the TSC is significant: the security team would like to deprecate sudo in favor of doas.  This is because doas is a much simpler codebase: while sudo clocks in at 174k lines of code, doas only has 2500 lines, including the portability layer.  Since both are SUID programs, it hopefully makes sense that the security team would prefer to support the simpler doas over the much more complicated sudo.

But the TSC attached conditions to its decision:

  • doas must gain an equivalent to /etc/sudoers.d, so that maintainer scripts can make use of it to do things like automatically restart services.  I have been working with upstream to enable this support, which should be landing in Alpine later this week.
  • cloud-init must be adapted to support doas.  This is because the cloud images do not enable the root user, so there needs to be a tool to allow escalation to root, such as the venerable sudo -si.  Hopefully, a plugin for cloud-init will be implemented shortly: we have had some discussion with the cloud-init maintainer about this already.
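For readers unfamiliar with doas, its configuration is itself a good illustration of why the security team prefers it: a typical /etc/doas.conf is only a line or two.  This is a sketch of common rules, not Alpine's shipped defaults:

```
# allow members of the wheel group to run commands as root,
# caching credentials similarly to sudo's timestamp behaviour
permit persist :wheel

# allow one specific user to run one specific command without a password
permit nopass backup as root cmd /usr/bin/rsync
```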

I hope other distributions follow in our footsteps and deprecate sudo in favor of doas.  While we should try to avoid SUID programs where possible, simpler SUID programs are still better than complicated ones like sudo.

A lifecycle for packages in testing

While not directly security-related (the security team does not provide security support for the testing repository), I raised an issue with the TSC to define a deadline for packages in testing: they must either leave testing for community or main, or be removed from testing.  This is because people contribute packages to testing and are then never heard from again, while their packages sit in testing, basically unmaintained.  This creates problems for the security team when we have to do rebuilds because a library changed its soname.

The TSC is presently expected to take up this issue at the next meeting, in two weeks.  Hopefully, that translates to a deadline for packages to leave testing.

Reproducible Builds

At the moment, we are still focusing on making the installation media reproducible.  On that front, I reviewed the patch by kpcyrd to make apk index reproducible.  After a little bit of discussion, this patch was accepted into apk-tools and is included in version 2.12.6 and later.

The next big step is still the design and implementation of buildinfo in Alpine.  With apk-tools 3 around the corner, we are debating using structured data in the package itself instead of an actual file on disk to represent the buildinfo data.

Once that particular design discussion is complete, we will work to get it pushed out for the edge builders, and start work on a rebuilderd backend.

secfixes-tracker, security database, sharing

In terms of deployed changes, there’s not much to talk about here.  Things are moving along with very few problems.

Behind the scenes, however, we are still working with many stakeholders on enabling real-time vulnerability information sharing.  This is, for whatever reason, a topic where the politics are constantly shifting around: for example, we had the Distributed Weakness Filing project, which has now died and been replaced with Universal Vulnerability Identifiers.

The people who run UVI want the same thing we want: real-time vulnerability data sharing based on push messaging and JSON-LD, but they are having the same difficulties that I have been having getting everyone on board with the JSON-LD part.  Unfortunately, the JSON-LD part is the most important part of all of this, because it allows everyone to participate: while the new schema proposed by the Go and OSV teams is a significant improvement over CVE v4, the designers specifically considered URIs, and therefore linked data, to be harmful.

However, linked data is really the thing that is going to let us scale up and track all vulnerabilities.  While it is true that domains can go away, it is possible to archive that data with JSON-LD: the implementations allow for substituting some URI patterns with other ones, such as replacing a dead domain with an archive mirror.  At the moment, we track around 20,000 to 30,000 CVEs per year, but in reality, there are likely millions of vulnerabilities found per year.  Having a centralized, gate-kept set of vulnerability databases simply does not scale to this reality.

By using linked data, it is possible to simply walk along a JSON document and discover more information about a vulnerability:

"references": [ { "@id": "https://security.example.org/vuln/CVE-2021-34481" } ]

That by itself is really helpful (the URI shown is illustrative).  But where this becomes even more awesome is when you realize that you can reference vulnerability data in other contexts, such as malware databases:

"name": "Win32/Some.Printer.Malware",
"exploitation_vectors": [ { "@id": "https://security.example.org/vuln/CVE-2021-34481" } ]

Now the malware researcher can follow the node for CVE-2021-34481 and get structured data back about the Print Spooler CVE, and it’s entirely transparent.  (Again, the URIs are illustrative.)

I cannot stress enough that this is how it has to be.  We will never get to the point where we are tracking every single vulnerability unless it is an open system like this, built in the same spirit as FOSS, but for data.

Unfortunately, I suspect there is going to be at least another round of discussions before we get there.  Once these issues are resolved, and a clear path forward is obvious, we will release secfixes-tracker 0.4, which makes the security data available in the standardized JSON-LD format discussed above.

apk-tools 3

Timo continues to make progress on apk-tools 3.  It is quite possible that Alpine 3.15 will ship with apk-tools 3, and that possibility is getting more likely all the time.

Since the last update, we’ve concluded that the new index format used by apk-tools 3 is reproducible.
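A quick way to sanity-check reproducibility yourself is to generate the index twice from the same set of packages and compare digests.  This sketch uses the apk index subcommand as found in apk-tools 2; the exact invocation may differ with apk-tools 3:

```shell
# generate the index twice from identical inputs
apk index -o APKINDEX.run1.tar.gz packages/*.apk
apk index -o APKINDEX.run2.tar.gz packages/*.apk

# a reproducible index yields identical digests
sha256sum APKINDEX.run1.tar.gz APKINDEX.run2.tar.gz
```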

I have also talked with Timo about introducing security fix data into the new apk-tools indices, and we are starting to work on a design for that.  This will enable the apk list --upgradable --security feature I have talked about a few times now.

I plan on doing a blog post demonstrating reproducibility of both apk-tools 3 indices as well as apk-tools 3 packages in the next week or so.  I have also been working on writing a specification for the ADB format so that other people can write their own parsers for it.  That will be going upstream as a MR shortly.

CVEs in Alpine software

Part of the responsibility of the security team is to get CVEs from MITRE.  We are considering becoming a CVE Numbering Authority to improve the process of getting CVEs, but the process CNAs use is kind of clunky, so we might not actually do it.  In July, we requested two CVEs for Alpine software or Alpine-specific packaging problems:

  • CVE-2021-36158: The xrdp package was generating the same public and private keypair for every single installation.  This was fixed by moving the keypair generation into a maintainer script.  Thanks to Leo for fixing it!
  • CVE-2021-36159: Samanta Navarro reported a vulnerability in libfetch to Alpine and FreeBSD.  We requested a CVE for the Alpine fork, but then FreeBSD decided to use the same CVE for the original libfetch, too.  As a side effect of coordination of this vulnerability, I proposed creating a unified libfetch project where all users can collaborate on its maintenance.


My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!  I also appreciate their recent sponsoring of kpcyrd, an Alpine, Arch, Debian and Reproducible Builds contributor!

Moving my blog to Oracle cloud

In my past few blog posts, I have been talking about the current state of affairs concerning ARM VPS hosting.  To put my money where my mouth is, I have now migrated my blog to the ARM instances Oracle has to offer, as an actual production use of their cloud.  You might find this surprising, given the last post, but Oracle reached out and explained why their system terminated my original account and we found a solution for that problem.

What happened, anyway?

Back at the end of May, Oracle announced that they were offering ARM VPS servers running on Ampere Altra CPUs.  Accordingly, I was curious, so I signed up for an account on the free tier.  All went well, except that as I was signing up, my now-previous bank declined the initial charge to verify that I had a working credit card.

I was able to sign up anyway, but then a few days later, they charged my card again, which was also declined by my previous bank’s overzealous fraud protection.  Then a few weeks later, I attempted to upgrade, and the same thing happened: the first charge was declined, I got a text message and retried, and everything went through.  This weirdness with the card reliably being declined on the first try, however, made Oracle’s anti-fraud team anxious, and so they understandably decided to cover their own asses and terminate my account.

I’m going to talk in more depth about my relationship with my previous bank soon, but I want to close my accounts out fully with them before I complain about how awful they are: one does not talk smack about somebody who is holding large sums of your savings, after all.  Needless to say, if you find yourself at a bank being acquired by another bank, run like hell.

Given that Oracle was very proactive in addressing my criticism, and that the issue was caused by something neither myself nor Oracle had any control over (my bank demonstrating very loudly that they needed to be replaced), I decided to give them another chance, and move some of my production services over.

At least, at the moment, since I will no longer be operating my own network as of September, I plan on running my services on a mix of Vultr, Oracle and Linode VMs, as this allows me to avoid Intel CPUs (Oracle have ARM, but also AMD EPYC VMs available, while Vultr and Linode also use AMD EPYC).  I will probably run the more FOSS-centric infrastructure on fosshost’s ARM infrastructure, assuming they accept my application anyway.

Installing Alpine on Oracle Cloud

At present, Alpine images are not offered on Oracle’s cloud.  I intend to talk with some of the folks running the service who reached out about getting official Alpine images running in their cloud, as it is a quite decent hosting option.

In the meantime, it is pretty simple to install Alpine.  The first step is to provision an ARM (or x86) instance in their control panel.  You can just use the stock Oracle Linux image, as we will be blasting it away anyway.

Once the image is running, you’ll be presented with a control panel like so:

A control panel for the newly created VPS instance.

The next step is to create an SSH-based serial console.  You will need this to access the Alpine installer.  Scroll down to the resources section and click “Console Connection.”  Then click “Create Console Connection”:

Console connections without any created yet.

This will open a modal dialog, where you can specify the SSH key to use.  You’ll need to use an RSA key, as this creation wizard doesn’t yet recognize Ed25519 keys.  Select “Paste public key” and then paste in your RSA public key, then click “Create console connection” at the bottom of the modal dialog.
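If you do not already have an RSA key handy, you can generate a dedicated one first; the file name here is just a suggestion:

```shell
# generate a 4096-bit RSA key just for the serial console
mkdir -p ~/.ssh
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/oci-console -C "oci serial console"

# print the public key so it can be pasted into the dialog
cat ~/.ssh/oci-console.pub
```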

The console connection will be created.  Click the menu icon for it, and then click “Copy Serial Console Connection for Linux/Mac.”

Copying the SSH connection command.

Next, open a terminal and paste the command that was copied to your clipboard, and you should be able to access the VPS serial console after dealing with the SSH prompts.

VPS serial console running Oracle Linux

The next step is to SSH into the machine and download the Alpine installer.  This will just be ssh opc@<instance-ip>, where <instance-ip> is the IP of the instance.  We will want to download the installer ISO to /run, which is a ramdisk, write it to /dev/sda, and then use sysrq b to reboot.  Here’s what that looks like:

Preparing the Alpine installer
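In concrete terms, the commands run over SSH look roughly like this.  The release URL and the target device are assumptions you should adjust for your instance (pick the current aarch64 release from dl-cdn.alpinelinux.org, and verify which disk is the boot disk):

```shell
# on the instance, as root
cd /run

# fetch the installer ISO into the ramdisk (example URL)
wget https://dl-cdn.alpinelinux.org/alpine/v3.14/releases/aarch64/alpine-virt-3.14.0-aarch64.iso

# overwrite the boot disk with the installer image
dd if=alpine-virt-3.14.0-aarch64.iso of=/dev/sda bs=1M

# force an immediate reboot via magic sysrq
echo 1 > /proc/sys/kernel/sysrq
echo b > /proc/sysrq-trigger
```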

If you monitor your serial console window, you’ll find that you’ve been dropped into the Alpine installer ISO.

Alpine installer shell

From here, you can run setup-alpine and follow the directions as usual.  You will want to overwrite the boot media, so answer yes when it asks.

Installing Alpine

At this point, you can reboot, and it will dump you into your new Alpine image.  You might want to set up cloud-init, or whatever, but that’s not important to cover here.

Future plans

At the moment, the plan is to see how things perform, and if they perform well, migrate more services over.  I might also create OCIs with cloud-init enabled for other users of Alpine on Oracle cloud.

Stay tuned!

Oracle cloud sucks

Update: Oracle have made this right, and I am in fact, now running production services on their cloud.  Thanks to Ross and the other Oracle engineers who reached out offering assistance.  The rest of the blog post is retained for historical purposes.

In my previous blog, I said that Oracle was the best option for cheap ARM hosting.

Yesterday, Oracle rewarded me for that praise by demonstrating they are, in fact, Oracle and terminating my account.  When I contacted their representative, I was told that I was running services on my instance not allowed by their policies (I was running a non-public IRC server that only connected to other IRC servers, and their policies did not discuss IRC at all) and that the termination decision was final.  Accordingly, I can no longer recommend using Oracle’s cloud services for anything — if you use their service, you are at risk of losing your hosting at any time, for any reason they choose to invent, regardless of whether you are a paying customer or not.

That leaves us with exactly zero options for cheap ARM hosting.  Hopefully Amazon will bring ARM options to Lightsail soon.

It’s time for ARM to embrace traditional hosting

ARM is everywhere these days — from phones to hyperscale server deployments.  There is even an ARM workstation available that has decent specs at an acceptable price.  Amazon and Oracle tout white paper after white paper about how their customers have switched to ARM, gotten performance wins and saved money.  Sounds like everything is on the right track, yes?  Well, actually it’s not.

ARM for the classes, x86 for the masses

For various reasons, I’ve been informed that I need to start rethinking my server infrastructure arrangements.  We won’t go into that here, but the recent swearing at San Francisco property developers on my Twitter is highly related.

Both Intel and AMD include firmware in their CPUs which allows computation to occur without my consent (also known as a backdoor), so that Hollywood can implement a largely pointless (especially on a server) digital restrictions management scheme.  As I am highly allergic to using any infrastructure powered by such CPUs, I decided to look at cloud-based hosting solutions using ARM CPUs, which seemed perfectly reasonable at first glance.

Unfortunately, what I found is that ARM hosting is not deployed in a way where individual users can access it at cost-competitive prices.

AWS Graviton (bespoke Neoverse CPUs)

In late 2018, AWS announced the Graviton CPU, which was based on a core design they got when they acquired Annapurna Labs.  This was followed up in 2020 with Graviton2, which is based on the ARM Neoverse N1 core design.  These are decent chips, the performance is quite good, and costs are quite low.

But, how much does it cost for an average person to actually make use of it?  We will assume that the 1 vCPU / 4GB RAM m6g.medium configuration is suitable for this comparison, as it is the most comparable to a modest x86 VPS.

The m6g.medium instance does not come with any transfer, but the first GB is always free on EC2.  Further transfer is $0.09/GB up to 10TB.  By comparison, the Linode 4GB RAM plan comes with 4TB of transfer, so we will use that for our comparison.

Hourly price (m6g.medium) $0.0385
× 720 hours $27.72
+ 3.999TB of transfer ($0.09 × 3,999) $359.91
Total: $387.63

Transfer charges aside, the $27.72 monthly charge is quite competitive to Linode, clocking in at only $7.72 more expensive for comparable performance.  But the data transfer charges have the potential to make using Graviton on EC2 very costly.

What about AWS Lightsail?

An astute reader might note that AWS actually does provide traditional VPS hosting as a product, under its Lightsail brand.  But the Lightsail VPS product is x86-only for now.

Amazon could make a huge impact in terms of driving ARM adoption in the hosting ecosystem by bringing Graviton to their Lightsail product.  Capturing Lightsail users into the Graviton ecosystem and then scaling them up to EC2 seems like a no-brainer sales strategy too.  But so far, they haven’t implemented this.

Oracle Cloud Infrastructure

A few months ago, Oracle introduced instances based on Ampere’s Altra CPUs, which are also based on the Neoverse N1 core.

The base configuration (Oracle calls it a shape) is priced at $0.01 hourly and includes a single vCPU and 6GB of memory.  These instances do not come with any data transfer included, but like AWS, data transfer is pooled.  A major difference from AWS, however, is that the first 10TB of transfer is gratis.

Hourly price $0.01
× 720 hours $7.20
+ 4TB transfer (included gratis) $0
Total: $7.20

I really, really wanted to find a reason to hate on Oracle here.  I mean, they are Oracle.  But I have to admit that Oracle’s cloud product is a lot more similar to traditional VPS hosting than Amazon’s EC2 offerings.  Update: Haha, nevermind!  They came up with a reason for me to hate on them when they terminated my account for no reason.

So, we have one option for a paid ARM VPS, and that is only an option if you are willing to deal with Oracle, which are Oracle.  Did I mention they are Oracle?

Oracle federating its login service with itself


Scaleway

Tons of people told me that Scaleway had ARM VPS for a long time.  And indeed, they used to, but they don’t anymore.  Back when they launched ARMv8 VPS on ThunderX servers, I actually used a Scaleway VPS to port libucontext.

Unfortunately, they no longer offer ARM VPS of any kind, and only overpriced x86 ones that are not remotely cost competitive to anything else on that market.

Mythic Beasts, miniNodes, etc.

These companies offer ARM instances, but they are Raspberry Pis, and the pricing is rather expensive for what you get.  I don’t consider these offerings competitive in any way.

Equinix Metal

You can still buy ARM servers on the Equinix Metal platform, but you have to request permission to buy them.  In testing a couple of years ago, I was able to provision a c1.large.arm server on the spot market for $0.25/hour, which translates to $180/monthly.

However, the problem with buying on the spot market is that your server might go away at any time, which means you can’t actually depend on it.

There is also the problem with data transfer: Equinix Metal follows the same billing practices for data transfer as AWS, meaning actual data transfer gets expensive quickly.

However, the folks who run Equinix Metal are great people, and I feel like ARM could work with them to get some sort of side project going where they get ARM servers into the hands of developers at reasonable pricing.  They already have an arrangement like that for FOSS projects with the Works on ARM program.


Conclusions

Right now, as noted above, Oracle is the best game in town for the average person (like me) to buy an ARM VPS.  We need more options.  Amazon should make Graviton available on its Lightsail platform.

It is also possible that as a side effect of marcan’s Asahi Linux project, we might have cheap Linux dedicated servers on Apple M1 mac minis soon.  That’s also a space to watch.

the three taps of doom

A few years ago, I worked as the CTO of an advertising startup.  At first, we used Skype for messaging amongst the employees, and then later, we switched to Slack.  The main reason for switching to Slack was because they had an IRC gateway — you could connect to a Slack workspace with an IRC client, which allowed for the people who wanted to use IRC to do so, while providing a polished experience for those who were unfamiliar with IRC.

the IRC gateway

In the beginning, Slack had an IRC gateway.  On May 15th, 2018, Slack discontinued the IRC gateway, beginning my descent into Cocytus.  Prior to the shutdown of the IRC gateway, I had always interacted with the Slack workspace via IRC.  This was replaced with the Slack mobile and desktop apps.

The IRC gateway, however, was quite buggy, so it was probably good that they got rid of it.  It did not comply with any reasonable IRC specifications, much less support anything from IRCv3, so the user experience was quite disappointing albeit serviceable.

the notifications

Switching from IRC to the native Slack clients, I now got to deal with one of Slack’s main features: notifications.  If you’ve ever used Slack, you’re likely familiar with the unholy notification sound, or as I have come to know it, the triple tap of existential doom.  Let me explain.

At this point, we used Slack for everything: chat, paging people, even monitoring tickets coming in.  The workflow was efficient, but due to matters outside my control, revenues were declining.  This led to the CEO becoming quite antsy.  One day he discovered that he could use @all, @tech or @sales to page people with his complaints.

This means that I would now get pages like:

Monitoring: @tech Service rtb-frontend-nyc is degraded
CEO: @tech I demand you implement a filtering feature our customer is requiring to scale up

The monitoring pages were helpful; the CEO paging us to demand filtering features that spied on users, and that definitely would not result in scaled-up revenue (because the customers were paying CPM), was not.

The pages in question were actually a lot more intense than the tame examples I show here; it felt like I had to walk on eggshells just to use Slack.

Quitting that job

In the middle of 2018, I quit that job for various reasons.  I uninstalled Slack and immediately felt much better.  But to this day, every time I hear the Slack notification sound, I get anxious.

The moral of this story is: if you use Slack, don’t use it for paging, and make sure your CEO doesn’t have access to the paging features.  It will be a disaster.  And if you’re running a FOSS project, consider not using Slack at all, as many technical people avoid it due to their own experiences with it.

Bits relating to Alpine security initiatives in June

As usual, I have been hard at work on various security initiatives in Alpine the past month.  Here is what I have been up to:

Alpine 3.14 release and remediation efforts in general

Alpine 3.14.0 was released on June 15, with the lowest unpatched vulnerability count of any release in the past several years.  While previous Alpine release cycles did well on patching the critical vulnerabilities, the less important ones frequently slipped through the cracks, due to the project being unable to focus on vulnerability remediation until now.

We have also largely cleaned up Alpine 3.13 (there are a few minor vulnerabilities that have not been remediated there yet, as they require ABI changes or careful backporting), and Alpine 3.12 and 3.11 are starting to catch up in terms of unpatched vulnerabilities.

While a release branch will realistically never have zero unpatched vulnerabilities, we are much closer than ever before to having the supported repositories in as optimal a state as we can have them.  Depending on how things play out, this may result in extended security support for the community repository for 3.14, since the introduction of tools and processes has reduced the maintenance burden for security updates.

Finally, with the release of Alpine 3.14, the security support period for Alpine 3.10 draws to a close, so you should upgrade to at least Alpine 3.11 to continue receiving security updates.

secfixes-tracker and the security database

This month saw a minor update to secfixes-tracker, the application which powers Alpine’s security tracker.  This update primarily focused on supporting the new security rejections database, which allows maintainers to reject CVEs from their package with an annotated rationale.

In my previous update, I talked about a proposal which will allow security trackers to exchange data, using Linked Data Notifications.  This will be deployed as part of the secfixes-tracker 0.4 release, as we have come to an agreement with the Go and OSV teams about how to handle JSON-LD extensions in the format.

My goal with the Linked Data Notifications effort is to decentralize the current CVE ecosystem, and a longer writeup explaining how we will achieve that is roughly halfway done, sitting in my drafts folder.  Stay tuned!

Finally, the license for the security database has been officially defined as CC-BY-SA, meaning that security vendors can now use our security database in their scanners without having a legal compliance headache.

Reproducible Builds

We have begun work on supporting reproducibility in Alpine.  While there is still a lot of work to be done in abuild to support buildinfo files, kpcyrd started to work on making the install media reproducible, beginning with the Raspberry Pi images we ship.

However, he ran into an issue with BusyBox’s cpio not supporting reproducibility, so I added the necessary flags to allow for cpio archives to be reproducible, sent the patches to upstream BusyBox and pushed an updated BusyBox with the patches to Alpine edge.

There are still a few fixes that need to be made to apk, but with some workarounds, we were able to demonstrate reproducible install images for the Raspberry Pi.

The next few steps here will involve validating that the reproducible initramfs works correctly.  For example, I don’t think we need --ignore-devno, just --renumber-inodes, and I also suspect that with --ignore-devno the image won’t actually boot; validation will allow us to verify that everything is OK with the image.

Beyond that, we need reproducible packages, and for that, we need buildinfo files.  That’s next on my list of things to tackle.

The linux-distros list

In the last update, we were discussing whether to join the linux-distros list.  Since then, we concluded that joining the list does not net us anything useful: our post-embargo patching timeframe is the same as that of distros which participate on the list, and the requirements for sharing vulnerability data with other team members and maintainers were too onerous.  Alpine values transparency; we found that compromising transparency in exchange for embargoed security data was not a useful tradeoff for us.

apk-tools 3

Since the last update, Timo has made a lot of progress on the ADB format used in apk-tools 3.  At this point, I think it has come along enough that we can begin working on exposing security information in the ADB-based package indices.

While Alpine itself is not yet publishing ADB-based indices, the features available in the ADB format are required to reflect the security fix information correctly (the current index format does not support structured data at all, and is just a simple key-value store).

I also intend to look at the ADB-based indices to ensure they are reproducible.  This will likely occur within the next few weeks as I work on making the current indices reproducible.


My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!

understanding thread stack sizes and how alpine is different

From time to time, somebody reports a bug to some project about their program crashing on Alpine.  Usually, one of two things happens: the developer doesn’t care and doesn’t fix the issue, because it works under GNU/Linux, or the developer fixes their program to behave correctly only for the Alpine case, and it remains silently broken on other platforms.

The Default Thread Stack Size

In general, it is my opinion that if your program is crashing on Alpine, it is because your program depends on behavior that is not guaranteed to actually exist, which means your program is not actually portable.  When it comes to this kind of dependency, the typical issue involves the thread stack size limit.

You might be wondering: what is a thread stack, anyway?  The answer is quite simple: each thread has its own stack memory, because it’s not really feasible for multiple threads to share the same stack.  On most platforms, that memory is much smaller than the main thread’s stack, though programmers are not necessarily aware of that difference.

Here is a table of common x86_64 platforms and their default stack sizes for the main thread (process) and child threads:

OS                        Process Stack Size  Thread Stack Size
Darwin (macOS, iOS, etc)  8 MiB               512 KiB
FreeBSD                   8 MiB               2 MiB
OpenBSD (before 4.6)      8 MiB               64 KiB
OpenBSD (4.6 and later)   8 MiB               512 KiB
Windows                   1 MiB               1 MiB
Alpine 3.10 and older     8 MiB               80 KiB
Alpine 3.11 and newer     8 MiB               128 KiB
GNU/Linux                 8 MiB               8 MiB

Note the OpenBSD (before 4.6) and GNU/Linux rows: they represent the smallest and largest default thread stack sizes.

Because the Linux kernel supports memory overcommit, GNU/Linux systems use 8 MiB by default, which leads to a potential problem when running code developed against GNU/Linux on other systems.  Since most threads only need a small amount of stack memory, other platforms use smaller limits, such as OpenBSD (before 4.6) using only 64 KiB and Alpine using at most 128 KiB by default.  This leads to crashes in code which assumes a full 8 MiB is available for each thread to use.

If you find yourself debugging a weird crash that doesn’t make sense, and your application is multi-threaded, it likely means that you’re exhausting the stack limit.

What can I do about it?

To fix the issue, you will need to either change the way your program is written or change the way it is compiled.  There are a few options, depending on how much time you’re willing to spend.  In most cases, these crashes are caused by manipulating a large variable which is stored on the stack.  Generally, moving the variable off the stack is the best fix, but there are alternatives.

Moving the variable off the stack

Let’s say that the code has a large array stored on the stack, which causes the stack exhaustion issue.  In this case, the easiest solution is to move it off the stack.  There are two main approaches: thread-local storage and heap storage.  Thread-local storage reserves additional memory per thread; think of it like static, but bound to each thread.  Heap storage is what you’re working with when you use malloc and free.

To illustrate the example, we will adjust this code to use both kinds of storage:

#include <string.h>

void some_function(void) {
    char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}


Thread-local variables are declared with the thread_local keyword; at block scope, C11 requires combining it with static or extern.  You must include threads.h in order to use it:

#include <string.h>
#include <threads.h>

void some_function(void) {
    static thread_local char scratchpad[500000];

    memset(scratchpad, 'A', sizeof scratchpad);
}


You can also use the heap.  The most portable example would be the obvious one:

#include <stdlib.h>
#include <string.h>

const size_t scratchpad_size = 500000;

void some_function(void) {
    char *scratchpad = calloc(1, scratchpad_size);
    if (scratchpad == NULL)
        return;

    memset(scratchpad, 'A', scratchpad_size);

    free(scratchpad);
}



However, if you don’t mind sacrificing portability outside gcc and clang, you can use the cleanup attribute:

#include <stdlib.h>
#include <string.h>

/* The cleanup handler receives a pointer to the variable going out of
 * scope, so free() needs a small wrapper. */
static void cleanup_free(void *p) {
    free(*(void **) p);
}

#define autofree __attribute__((cleanup(cleanup_free)))

const size_t scratchpad_size = 500000;

void some_function(void) {
    autofree char *scratchpad = calloc(1, scratchpad_size);

    memset(scratchpad, 'A', scratchpad_size);
}


This is probably the best way to fix code like this if you’re not targeting compilers like the Microsoft one.

Adjusting the thread stack size at runtime

pthread_create takes an optional pthread_attr_t pointer as the second parameter.  This can be used to set an alternate stack size for the thread at runtime:

#include <pthread.h>

pthread_t worker_thread;

extern void *some_function(void *arg);

void launch_worker(void) {
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024768);

    pthread_create(&worker_thread, &attr, some_function, NULL);

    pthread_attr_destroy(&attr);
}


By setting the stack size on the attribute object passed to pthread_create, the child thread will have a larger stack.

Adjusting the stack size at link time

In modern Alpine systems, since 2018, it is possible to set the default thread stack size at link time.  This can be done with a special LDFLAGS flag, like -Wl,-z,stack-size=1024768.

You can also use tools like chelf or muslstack to patch pre-built binaries to use a larger stack, but this shouldn’t be done inside Alpine packaging, for example.

Hopefully, this article is helpful for those looking to learn how to solve the stack size issue.

the end of freenode

My first experience with IRC was in 1999.  I was in middle school, and a friend of mine ordered a Slackware CD from Walnut Creek CDROM.  This was Slackware 3.4, and the disc contained the GNOME 1.x desktop environment, which came with the BitchX IRC client.

At first, I didn’t really know what BitchX was; I just thought it was a cool program that displayed random ASCII art, and I tried connecting to various servers with it.  After a while, I found out that an IRC client let you connect to an IRC network and get help with Slackware.

At that time, freenode didn’t exist.  The Slackware IRC channel was on DALnet, and I started using DALnet to learn more about Slackware.  Like most IRC newbies, it didn’t go so well: I got banned from #slackware within about five minutes.  I pleaded for forgiveness in the way only a middle schooler can.  Eventually, I got unbanned and stuck around for a while.  That was my first experience with IRC.

After a few months, I got bored of running Linux and reinstalled Windows 98 on my computer, because I wanted to play games that only worked on Windows, and so, largely, my interest in IRC waned.

A few years passed… I was in eighth grade.  I found out that one of the girls in my class was a witch.  I didn’t really understand what that meant, so I pressed her for more details.  She said that she was a Wiccan, and that I should read more about it on the Internet if I wanted to know more.  I still didn’t quite understand what she meant, but I looked it up on AltaVista, which linked me to an entire category of sites on the subject.  So, I read through these websites, and on one of them I saw:

Come join our chatroom on DALnet: #wicca

DALnet!  I knew what that was, so I looked for an IRC client that worked on Windows, and eventually installed mIRC.  Then I joined DALnet again, this time to join #wicca.  I found out about a lot of other amazing ideas from the people on that channel, and wound up joining others like #otherkin around that time.  Many of my closest friends to this day are from those days.

At this time, DALnet was the largest IRC network, with almost 150,000 daily users.  Eventually, my friends introduced me to mIRC script packs, like NoNameScript, and I used that for a few years on and off, sometimes using BitchX on Slackware instead, as I figured out how to make my system dual boot at some point.

The DALnet DDoS attacks

For a few years, all was well, until the end of July 2002, when DALnet started being the target of distributed denial of service attacks.  We would, of course, later find out that these attacks were done at the request of Jason Michael Downey (Nessun), who had just launched a competing IRC network called Rizon.

However, this resulted in #slackware and many other technical channels moving from DALnet to, the network that was the predecessor to freenode.  Using screen, I was able to run two copies of the BitchX client, one for each network, though I had difficulties connecting to DALnet due to the DDoS attacks.

Early freenode

At the end of 2002, became freenode.  At that time, freenode was a much different place, with community projects like #freenoderadio, a group of people who streamed various ‘radio’ shows on an Icecast server.  Freenode had fewer than 5,000 users, and it was a community where most people knew each other, or at least knew somebody who knew somebody else.

At this time, freenode ran dancer-ircd with dancer-services, which were written by the Debian developer Andrew Suffield, based on ircd-hybrid 6 and HybServ respectively.

Dancer had a lot of bugs; the software would frequently do weird things, and the services were quite spartan compared to what was available on DALnet.  I knew, based on what was available over on DALnet, that we could make something better for freenode, and so I started to learn about IRCd software.

Hatching a plan to make services better

By this time, I was in my last year of high school and was writing IRC bots in Perl.  I hadn’t really tried to write anything in C yet, but I was learning a little about C by playing around with a test copy of UnrealIRCd on my local machine.  I started to talk to lilo about improving the services.  I knew it could be done, but I didn’t know how yet, which led me to search for services projects that were simple and understandable.

In my search for services software, I found rakaur’s Shrike project, a very simple clone of Undernet’s X service which could be used with ircd-hybrid.  I talked with rakaur, learned more about C, and even added some features.  Unfortunately, we had a falling out at that time, because a user on the network we ran together found out that he could make rakaur’s IRC bot run rm -rf --no-preserve-root /, and did so.

After working on Shrike a bit, I finally knew what to do: extend Shrike into a full set of DALnet-like services.  I showed what I was working on to lilo, and he was impressed: I became a freenode staff member and continued to work on the services, and all went well for a while.  He also recruited my friend jilles to help with the coding, and we started fixing bugs in dancer-ircd and dancer-services as an interim solution.  We also started writing atheme as a longer-term replacement for dancer-services, originally under the auspices of freenode.


The Spinhome project

In early 2006, lilo launched his Spinhome project.  Spinhome was a fundraising effort so that lilo could get a mobile home to replace the double-wide trailer he had been living in.  Some people saw him fundraising while being the owner of freenode as a conflict of interest, which led to a falling out with a lot of staffers, projects, etc.  OFTC went from being a small network to a much larger network during this time.

One side effect of this was that the atheme project got spun out into its own organization,, which continues to exist in some form to this day.

The project was founded on the concept of promoting digital autonomy, which is basically the network equivalent of software freedom, and has advocated in various ways to preserve IRC in the context of digital autonomy for years.  In retrospect, some of the ways we advocated for digital autonomy were somewhat obnoxious, but as they say, hindsight is always 20/20.

The hit and run

In September 2006, lilo was hit by a motorist while riding his bicycle.  This led to a managerial crisis inside freenode, with two rifts: one group which wanted to lead the network was led by Christel Dahlskjaer, while the other was led by Andrew Kirch (trelane).  Christel wanted to update the network to use all of the new software we had developed over the past few years, and so we gave her our support, which convinced enough of the sponsors and so on to also support her.

A few months later, lilo’s brother tried to claim title to the network to turn it into some sort of business.  This led to Christel and Richard Hartmann (RichiH) meeting with him in order to get him to back away from that attempt.

After that, things largely ran smoothly for several years: freenode switched to atheme, and then they switched to ircd-seven, a customized version of charybdis which we had written to be a replacement for hyperion (our fork of dancer-ircd), after which things ran well until…

Freenode Limited

In 2016, Christel incorporated freenode limited, under the guise that it would be used to organize the freenode #live conferences.  In early 2017, she sold 66% of her stake in freenode limited to Andrew Lee, whom I wrote about in last month’s chapter.

All of that led to Andrew’s takeover of the network last month.  Last night, they decided to remove the #fsf and #gnu channels from the network, and k-lined my friend Amin Bandali when he criticized them for it, which means freenode is definitely no longer a network about FOSS.

Projects should use alternative networks, like OFTC or Libera, or better yet, operate their own IRC infrastructure.  Self-hosting is really what makes IRC great: you can run your own server for your community and not be beholden to anyone else.  As far as IRC goes, that’s the future I feel motivated to build.

This concludes my coverage of the freenode meltdown.  I hope people enjoyed it and also understand why freenode was important to me: without lilo’s decision to take a chance on a dumbfuck kid like myself, I wouldn’t have ever really gotten as deeply involved in FOSS as I have, so to see what has happened has left me heartbroken.