
introducing witchery: tools for building distroless images with alpine

As I noted in my last blog post, I have been working on a set of tools which enable the building of so-called “distroless” images based on Alpine.  These tools have now evolved to a point where they are usable for testing in lab environments, thus I am happy to announce the witchery project.

For the uninitiated, a “distroless” image is one which contains only the application and its dependencies.  This has some desirable qualities: since the image is only the application and its immediate dependencies, there is less attack surface to worry about.  For example, a simple hello-world application built with witchery clocks in at 619kB, while that same hello-world application deployed on alpine:3.14 clocks in at 5.6MB.  There are also drawbacks: a distroless image typically does not include a package manager, so there is generally no ability to add new packages to a distroless image.

As for why it’s called witchery: we are using Alpine’s package manager in new ways to perform truly deep magic.  The basic idea behind witchery is that you use it to stuff your application into an .apk file, and then use apk to install only that .apk and its dependencies into a rootfs: no alpine-base, no apk-tools, no busybox (though witchery allows you to install those things if you want them).

Deploying an example application with witchery

For those who want to see the source code without commentary, you can find the Dockerfile for this example on the witchery GitHub repo.  For everyone else, I am going to try to break down what each part is doing, so that you can hopefully understand how it all fits together. We will be looking at the Dockerfile in the hello-world example.

The first thing the reader will likely notice is that Docker images built with witchery are built in three stages.  First, you build the application itself, then you use witchery to build what will become the final image, and finally, you copy that image over to a blank filesystem.

FROM alpine:3.14 AS build
WORKDIR /root
COPY . .
RUN apk add --no-cache build-base && gcc -o hello-world hello-world.c

The first stage, which builds the application, is hopefully self-explanatory, and is aptly named build.  We fetch the alpine:3.14 image from Docker Hub, then install a compiler (build-base), and finally use gcc to build the application.

The second stage has a few steps to it, which I will split up so that it’s easier to follow along.

FROM kaniini/witchery:latest AS witchery

First, we fetch the kaniini/witchery:latest image, and name it witchery.  This image contains alpine-sdk, which is needed to make packages, and the witchery tools which drive the alpine-sdk tools, such as abuild.

RUN adduser -D builder && addgroup builder abuild
USER builder
WORKDIR /home/builder

Anybody who is familiar with abuild will tell you that it cannot be used as root.  Accordingly, we create a user for running abuild, and add it to the abuild group.  We then tell Docker that we want to run commands as this new user, and do so from its home directory.

COPY --from=build /root/hello-world .
RUN mkdir -p payloadfs/app && mv hello-world payloadfs/app/hello-world
RUN abuild-keygen -na && fakeroot witchery-buildapk -n payload payloadfs/ payloadout/

The next step is to package our application.  We begin by copying the application from our build stage.  We ultimately want the application to wind up in /app/hello-world, so we make a directory for the package filesystem, then move the application into place.  Finally, we generate a signing key for the package, and then generate a signed .apk for the application named payload.

At this point, we have a signed .apk package containing our application, but how do we actually build the image?  Well, just as we drove abuild with witchery-buildapk to build the .apk package and sign it, we will have apk build the image for us.  But first, we need to switch back to being root:

USER root
WORKDIR /root

Now that we are root again, we can generate the image.  But first, we need to add the signing key we generated in the earlier step to apk’s trusted keys.  To do that, we simply copy it from the builder user’s home directory.

RUN cp /home/builder/.abuild/*.pub /etc/apk/keys

And finally, we build the image.  Witchery contains a helper tool, witchery-compose, that makes doing this with apk really easy.

RUN witchery-compose -p ~builder/payloadout/payload*.apk -k /etc/apk/keys -X http://dl-cdn.alpinelinux.org/alpine/v3.14/main /root/outimg/

In this case, we want witchery-compose to grab the application package from ~builder/payloadout/payload*.apk.  We use a wildcard there because we don’t know the full filename of the generated package.  There are options that can be passed to witchery-buildapk to allow you to control all parts of the .apk package’s filename, so you don’t necessarily have to do this.  We also want witchery-compose to use the system’s trusted keys for validating signatures, and we want to pull dependencies from an Alpine mirror.

Once witchery-compose finishes, you will have a full image in /root/outimg.  The final step is to copy that to a new blank image.

FROM scratch
CMD ["/app/hello-world"]
COPY --from=witchery /root/outimg/ .

And that’s all there is to it!

Things left to do

There are still a lot of things left to do.  For example, we might want to implement layers that users can build from when deploying their apps, such as one containing s6.  We also don’t have a great answer for applications written in languages like Python yet; so far, this only works well for programs that are compiled in the traditional sense.

But it’s a starting point nonetheless.  I’ll be writing more about witchery over the coming months as the tools evolve into something even more powerful.  This is only the beginning.


Bits relating to Alpine security initiatives in August

As always, the primary focus of my work in Alpine is related to security, either through non-maintainer updates to address CVEs, new initiatives for hardening Alpine, maintenance of critical security-related packages or working with other projects to improve our workflows with better information sharing.  Here are some updates on that, which are slightly delayed because of the long weekend.

sudo deprecation

One of the key things we discussed in the last update was our plan to deprecate sudo by moving it to community.  sudo exists in a similar situation to firejail: it allows for some interesting use cases, but the security track record is not very good.  Additionally, the maintenance lifecycle for a release branch of sudo is very short, which makes it difficult to provide long-term support for any given version.

As such, the security team proposed to the Technical Steering Committee that we should deprecate sudo and move to an alternative implementation such as doas.  This required some work, namely, doas needed to gain support for configuration directories.  I wrote a patch for doas which provides support for configuration directories, and last week, pushed a doas package which includes this patch with some migration scripts.
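
To give a rough idea of what configuration directory support enables (the drop-in path here is illustrative, not something this post specifies), a package or administrator can ship a policy fragment as its own file instead of editing a monolithic doas.conf:

# /etc/doas.d/wheel.conf: a hypothetical drop-in fragment
# allow members of the wheel group to run commands as root,
# caching credentials for a short period
permit persist :wheel as root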

At this point, basically everything which depended on sudo for technical reasons has been moved over to using doas.  We are just waiting for the cloud-init maintainer to finish testing their support for doas.  Once that is done, sudo will be moved to community.

OpenSSL 3.0

OpenSSL 3.0 was released today.  It is my intention to migrate Alpine to using it where possible.  As OpenSSL 3.0 will require a major rebuild, after talking with Timo, we will be coordinating this migration plan with the Technical Steering Committee.  Switching to OpenSSL 3.0 should not be as invasive as the OpenSSL 1.0 to 1.1 migration, as the APIs did not change that much, and it will give us the benefit of finally being free of that damn GPL-incompatible OpenSSL license, as OpenSSL 3 was relicensed under the Apache 2.0 license.

I have already done some test rebuilds which covered much of the aports package set, and have not seen much fallout so far.  Even packages which use the more low-level APIs, such as those in libcrypto, compiled without any major problems.

A nice effect of the license change is that we should be able to drop dependencies on less-audited TLS libraries, such as GnuTLS, as many programs are licensed under the GPL and are therefore not compatible with the original OpenSSL license.

Reproducible Builds

We are starting to take steps towards reproducible packages.  The main blocker on that issue was determining what to do about storing the build metadata, so that a build environment can be recreated precisely.  To that end, I have a patch to abuild which records all of the details exactly.  A rebuilder can then simply install the pinned packages with apk add --virtual.
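
As a rough sketch, that step could look something like this (the package names and versions below are illustrative, not taken from real build metadata):

# install the exact pinned build dependencies, grouped under a
# virtual package name so they can be removed in one step later
apk add --virtual .rebuild-makedepends \
    build-base=0.5-r2 \
    openssl-dev=1.1.1l-r0

# ... perform the rebuild ...

# remove the entire pinned dependency set at once
apk del .rebuild-makedepends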

We will need some way to archive historically built packages for the verification process.  Right now, the archive only ships current packages for each branch.  I am thinking about building something with ZFS or similar which snapshots the archive on a daily basis, but suggestions are welcome if anyone knows of better approaches.
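
As a minimal sketch of that idea, assuming the archive lives on a ZFS dataset (the dataset name here is made up):

# take a dated snapshot of the archive dataset, e.g. from a daily cron job
zfs snapshot tank/apk-archive@$(date +%Y-%m-%d)

# list the archived states available to rebuilders
zfs list -t snapshot tank/apk-archive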

Once these two things are addressed, we need to also add support for attempting rebuilds to the rebuilderd project.  In general, we should be able to model our support based on the support implemented for Arch Linux.

I am expecting to make significant progress on getting the .BUILDINFO file support merged into abuild and support for rebuilderd over the next month.  kpcyrd has been quite helpful in showing us how Arch has tackled reproducibility, and we have applied some lessons from that already to Alpine.

If you’re interested in this project, feel free to join #alpine-reproducible on irc.oftc.net.

secfixes-tracker

I am working on overhauling the JSON-LD documents which are presently generated by the secfixes-tracker application, so that they are more aligned with what the UVI vocabulary will look like.  At the same time, the UVI group have largely endorsed the use of Google’s new OSV format for use cases that do not require linked data.

Accordingly, I am writing a Python library which translates UVI to OSV and vice versa.  This is possible to do without much issue because UVI is intended to be a superset of OSV.

However, we need to request two mime-types, one for OSV documents and one for UVI JSON-LD documents.  In the meantime, the secfixes tracker will support the .osv+json extension for querying our security tracker in the OSV format.
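
For example, querying a given vulnerability might then look something like this (the exact URL pattern shown is illustrative):

# fetch a vulnerability record from our security tracker in OSV format
# (substitute a real CVE identifier)
curl https://security.alpinelinux.org/vuln/CVE-XXXX-XXXXX.osv+json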

Anybody with experience requesting mime-types from IANA is encouraged to provide advice on how to do it most efficiently.

Best practices for Alpine installations

A couple of weeks ago, a kerfuffle happened where Adoptium planned to ship builds of OpenJDK with a third-party glibc package for Alpine.  Mixing libc implementations on any system is a bad idea, and has security implications which are not necessarily obvious.

As one option intended to discourage the practice of mixing musl and glibc on the same system, I proposed installing a conflict with the glibc package as part of the musl packaging.  We asked the Technical Steering Committee for guidance on this plan, and ultimately the TSC decided to solve this with documentation.

Therefore, I plan to work with the docs team to document practices to avoid (such as mixing libc implementations and release branches) to ensure Alpine systems remain secure and reliable.

Distroless, for Alpine

Google’s Distroless project provides tooling to allow users to build containers that include only the runtime dependencies to support an application.  This has some nice security advantages, because images have fewer components available to attack.  There has been some interest in building the same thing for Alpine, so that users can take advantage of the musl C library, while also having the security advantages of distroless.

It turns out that apk is capable of easily building a tool like this.  I already have a proof of concept, and I plan on expanding that into a more fully featured tool over the next week.  I also plan to do a deep dive into how that tool works once the initial version is released.

Acknowledgement

My activities relating to Alpine security work are presently sponsored by Google and the Linux Foundation. Without their support, I would not be able to work on security full time in Alpine, so thanks!


I drove 1700 miles for a Blåhaj last weekend and it was worth it

My grandmother has Alzheimer’s and has recently had to move into an assisted living facility. You’ve probably seen bits and pieces outlining my frustration with that process on Twitter over the past year or so. Anyway, I try to visit her once or twice a month, as time permits.

But what does that have to do with blåhaj, and what is a blåhaj, anyway? To answer your question literally, blåhaj is Swedish for “blue shark.” But to be more precise, it’s a popular shark stuffed animal design produced by IKEA. As a stuffed animal sommelier, I’ve been wanting one for a hot minute.

Anyway, visiting my grandmother was on the way to the St. Louis IKEA.  So, I figured, I would visit her, then buy a blåhaj, and go home, right?  Well, it was not meant to be: after visiting with my grandmother, we went to the IKEA in St. Louis, and they had sold out.  This had happened a few times before, but this time I decided I wasn’t going to take no for an answer.  A blåhaj was to be acquired at any cost.

This led us to continue our journey onto Chicago, as the IKEA website indicated they had 20 or so in stock. 20 blåhaj for the entire Chicagoland area? We figured that, indeed, odds were good that a blåhaj could be acquired. Unfortunately, it still wasn’t meant to be: by the time we got to Chicago, the website indicated zero stock.

So we kept going, onward to Minneapolis. At the very least, we could see one of the great monuments to laissez-faire capitalism, the Mall of America, and the historic George Floyd Square, which was frankly more my speed and also quite moving. But again, our attempt to get a blåhaj was unsuccessful – the IKEA at the Mall of America was out of stock.

Our search for blåhaj wasn’t finished yet: there were two options.  Winnipeg had nearly 100 in stock, and the Kansas City location had 53.  A decision had to be made.  We looked at the border crossing requirements for entering Canada and found that if you present your CDC vaccination card, you can enter Canada without any problems.  So, we flipped a coin: do we go six hours north, or six hours south?

Ultimately, we decided to go to the Kansas City location, as we wanted to start heading back towards home. It turns out that Kansas City is only about six hours away from Minneapolis, so we were able to make it to the Kansas City IKEA about an hour before it closed. Finally, a success: a blåhaj was acquired. And that’s when my truck started pissing oil, but that’s a story for another day.

Does blåhaj live up to or exceed my expectations?

Absolutely! As far as stuffed animals go, blåhaj is quite premium, and available for a bargain at only 20 dollars for the large one. The quality is quite comparable to high-end plush brands like Jellycat and Aurora. It is also very soft, unlike some of the other IKEA stuffed animals.

Some people asked about sleeping with a blåhaj. Jellycat stuffed animals, for example, are explicitly designed for spooning, which is essential for a side sleeper. The blåhaj is definitely not, but due to its softness you can use it as a body pillow in various ways, such as to support your head or back.

The shell of the blåhaj is made out of a soft micro plush material very similar to the material used on second generation Jellycat bashful designs (which is different from the yarn-like material used on the first generation bashfuls).  All stitching is done using inside seams, so the construction is quite robust.  It should last for years, even with constant abuse.

All in all, a fun trip, for a fun blåhaj, though maybe I wouldn’t drive 1700 miles round trip again for one.


How networks of consent can fix social platforms

Social platforms are powerful tools which allow a user to communicate with their friends and family.  They also allow activists to organize and manage political movements.  Unfortunately, they also allow users to harass other users, and the mitigations available for that harassment are generally lacking.

By implementing networks of consent using the techniques presented here, centralized, federated and distributed social networking platforms alike can build effective mitigations against harassment.  Not all techniques will be required, depending on the design and implementation of a given platform.

What does consent mean here?

In this case, consent does not have any special technical meaning.  It means the same thing as it does in real life: you (or your agent) are allowing or disallowing an action concerning you in some way.

As computers are incapable of inferring whether consent is given, the user records a statement affirming their consent if granted. Otherwise, the lack of a consent statement must be taken to mean that consent was not granted for an action. How this affirmation is recorded is platform specific.

In technical terms, we refer to these affirmations of consent as object capabilities.  Many things in this world are already built on object capabilities: for example, Mach’s port system and cryptographic assets are both forms of object capability.

How object capabilities can be used

In a monolithic system, you don’t really need real object capabilities, as the access grants can simply be recorded in the backend and enforced transparently.

In a federated or distributed system, there are a few techniques that can be used to represent and invoke object capabilities. For example, an object capability might be represented by a key pair. In this case, the capability is invoked by signing the request with that key. Alternatively, capability URLs are another popular option, popularized by the Second Life Grid.
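
As a rough illustration of the key pair approach, using openssl (purely a sketch; a real platform would define its own request format and signature scheme):

# generate a key pair; the public half is recorded when the capability
# is granted, and the private half is held by the capability's owner
openssl genpkey -algorithm ed25519 -out capability.pem

# invoke the capability: sign the request so that participants who
# witnessed the grant can verify the invocation
openssl pkeyutl -sign -rawin -inkey capability.pem -in request.json -out request.sig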

In a distributed system, simply having an opaque pointer to a given set of rights (and demonstrating possession of it) is sufficient, as the actor invoking the capability will invoke it directly with all relevant participants. This works because all participants are able to directly verify the validity of the capability as they witnessed its issuance to begin with.

However, in a federated system, you also need a way to provide proof that the invocation of a capability was accepted. This is usually implemented in the form of a signed proof statement created by the participant which issued the capability to begin with. Other more exotic schemes exist, but for the purpose of explaining everything this should suffice.

Building networks of consent with object capabilities

Now that we understand the basic concepts behind object capabilities, we can use them to model what a social network built from the ground up with a consent-oriented design would look like.

It should be noted that the user may configure her user agent to automatically consent to any incoming action, but this is an implementation detail. The presence of a consent framework at the network level does not imply the requirement for a user to manage whether consent is granted, it just allows for the necessary decision points to exist.

An example on a monolithic network

Let’s say that Alice wants to reply to her friend Bob’s post on Tooter, an imaginary proprietary social network that is focused on microblogging. In a proprietary network, Alice composes her reply, and then sends it to the network. The network then asks Bob’s user agent to approve or disapprove the reply. Bob’s user agent can choose to automatically accept the reply because Alice and Bob are already friends.

Now, let’s say that Karen98734762153 wants to reply to Bob’s post as well.  Karen98734762153 has no prior relationship with Bob, but because Bob’s user agent is asked to make the decision, it can present the message to Bob to accept or reject.  As Karen98734762153 wants to suggest the use of apple-flavored horse paste as a possible prophylactic for COVID, Bob chooses to reject the post, and Karen98734762153 is not granted any new privileges.

The same example in a distributed system

On a proprietary network, all of this can be implemented transparently to the end user. But can it be implemented in a distributed system? In this case, we assume a simple peer to peer network like Scuttlebutt. How would this work there?

As noted before, we can use object capabilities here. In this case, both Alice and Karen98734762153 would send their replies to Bob. Alice would reference her pre-existing relationship, and Karen98734762153 would not reference anything. Bob would commit Alice’s reply to the Scuttlebutt ledger and distribute that commitment to the pub-servers he is subscribed to, and ignore Karen98734762153’s reply.

The same example in a federated system

As we have seen, in a distributed system with a distributed ledger, where all objects are signed, this approach can work.  Federated systems are a lot trickier to get right with regard to trust relationships, but it can be done here too.  In this case, we introduce proof objects to demonstrate the acceptance of a capability.  We will refer to these proofs as endorsements.

To this end, both Alice and Karen98734762153 send their replies to Bob, like before. Bob’s user agent then makes the decision to accept or reject the replies. In this example, Bob would add the reply to his local copy of replies, and then at a minimum send an endorsement back to Alice. Either Alice, Bob or both would then distribute that endorsement to a list of interested subscribers, who could verify the validity of the endorsement.

While other instances may choose to accept replies without an endorsement, they can also choose to reject them, or to give endorsed replies special status in their user interface. As there is not a unified consensus mechanism in federated networks, that is all that can be done. But it’s still pretty good.

The application of these ideas to other scenarios is left as an exercise for the reader.


I am planning to move to Europe

I have been considering a move to Europe since the 2018 midterm election, though a combination of friends being persuasive and the COVID-19 pandemic put a damper on those plans.  Accordingly, I have tried my best to give Biden and the Democrats an opportunity to show even the most basic modicum of progress on putting the country on a different path.  I did my part: I held my nose and, as I was told, voted blue no matter who, despite a total lack of enthusiasm for the candidates.  But honestly, I can’t do this anymore.

Yesterday, Texas’ SB 8 went into force of law.  This is a law which incentivizes citizens to bring litigation against anybody involved in administering an abortion to anybody who is six weeks pregnant or later, by issuing a $10,000 reward paid by the state for such litigation if successful.

As I do not have a uterus, I am unable to get pregnant, and so you might wonder why I care about this so much.  It is simple: the right to bodily autonomy should matter to everyone.  However, there is a second reason as well.  The US Supreme Court, when requested to review the law, determined there was no constitutionality issue with it, because of the financial incentive system implemented by the law.

This means that the legislators in Texas have found a vulnerability in our legal system, one which will be surely abused to institute other draconian policies driven by the religious right.  The people who are implementing this strategy will do their best to outlaw any form of contraception.  They will try their best to outlaw queer relationships.  And, most relevant to me, they will use this vulnerability against trans people to make even our basic existence illegal.

My confidence in the continued viability for my safety in the US began to wane earlier this summer, when a transphobic activist invented false allegations that a transgender woman flashed her genitalia at other women at Wi Spa in Los Angeles.  These allegations resulted in violent demonstrations by white supremacists.  Don’t worry, the message was received.

This isn’t strictly about safety, however.  I also recognize that leaving the US is a selfish choice, and that I have a lot of privilege that others may not have.  But the thing is, I’ve worked my tail off to get where I am, on my own.

As a skilled tax-paying worker, I believe the US has an obligation to respect me as a person if it wishes to retain my patronage.  In other words, it should be competing for me and other skilled workers to remain.  Instead, the lack of any tangible action to bring an end to Trumpism, and the lack of any legislative resistance to trans people being the next stop in the culture war, show that I am not wanted here, and so I will move somewhere else, where I will be safe and respected as a person.

And so I plan to move to the Netherlands within the next year.  You see, as I primarily work as a contractor, I have a business vehicle.  The Dutch American Friendship Treaty allows me to very easily acquire a permanent Schengen visa, and after some time, citizenship, simply by moving my business vehicle to the Netherlands.  Additionally, I already have many friends in Europe, so making new friends is not something I have to worry a lot about.

I’m not sure how moving my business vehicle to Europe will affect any current engagements, so in the interest of redundancy, I will be putting together a CV package and putting together a website for my business vehicle within the next few weeks.  If you have a European company that could benefit from my unique experiences and insight, feel free to reach out.  Similarly, I’ll probably be looking for some sort of housing arrangements in the next few months, so suggestions in that area are welcome too.


there is no such thing as a “glibc based alpine image”

For whatever reason, the alpine-glibc project is apparently being used in production.  Worse yet, some are led to believe that Alpine officially supports or at least approves of its usage.  For the reasons I am about to outline, we don’t.  I have also proposed an update to Alpine which will block the installation of the glibc packages produced by the alpine-glibc project, and have referred acceptance of that update to the TSC to determine if we actually want to put our foot down or not.  I have additionally suggested that the TSC may wish to have the Alpine Council reach out to the alpine-glibc project to find a solution which appropriately communicates that the project is not supported in any way by Alpine.  It should be hopefully clear that there is no such thing as a “glibc based alpine image” because Alpine does not use glibc, it uses musl.

Update: the TSC has decided that it is better to approach this problem as a documentation issue.  We will therefore try to identify common scenarios, including using the glibc package, that cause stability issues to Alpine and document them as scenarios that should ideally be avoided.

What the alpine-glibc project actually does

The alpine-glibc project attempts to package the GNU C library (glibc) in such a way that it can be used on Alpine transparently.  However, it is conceptually flawed, because it uses system libraries where available, which have been compiled against the musl C library.  Combining code built for musl with code built for glibc is like trying to run Windows programs on OS/2: both understand .EXE files to some extent, but they are otherwise very different.

But why are they different?  They are both libraries designed to run ELF binaries, after all.  The answer is due to differences in the application binary interface, also known as an ABI.  Specifically, glibc supports and heavily uses a backwards compatibility technique called symbol versioning, and musl does not support it at all.

How symbol versioning works

Binary programs, such as those compiled against musl or glibc, have something called a symbol table.  The symbol table contains a list of symbols needed from the system libraries; for example, C library functions like printf are known as symbols.  When a binary program is run, it is not executed directly by the kernel: instead, a special program known as an ELF interpreter is loaded, which sets up the mapping from symbols in the symbol table to the actual locations where those symbols exist.  That mapping is known as the global offset table (GOT).

On a system with symbol versioning, additional data in the symbol table designates what version of a symbol is actually wanted.  For example, when you request printf on a glibc system, you might actually wind up requesting printf@GLIBC_2.34 or some other kind of versioned symbol.  This allows newer programs to prefer the newer printf function, while older programs can reference an older version of the implementation.  That allows for low-cost backwards compatibility: all you have to do is keep around the old versions of the routines until you decide to drop support in the ABI for them.
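
You can see this in action by dumping the dynamic symbol table of a glibc system’s C library (the path and output below are representative; they vary by distribution and glibc version):

$ objdump -T /usr/lib/libc.so.6 | grep ' printf$'
0000000000056cf0 g    DF .text  00000000000000c9  GLIBC_2.2.5 printf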

Why mixing these two worlds is bad

However, if you combine a world expecting symbol versioning and one which does not, you wind up with undefined behavior.  For very simple programs, it appears to work, but for more complicated programs, you will wind up with strange behavior and possible crashes, as the global offset table references routines with different behavior than expected by the program.  For example, a program expecting a C99-compliant printf routine will get one on musl if it asks for printf.  But a program expecting a C99-compliant printf routine on glibc will ask for printf@GLIBC_2.12 or similar.

The symbol versioning problem spreads to the system libraries too: on Alpine, libraries don’t provide versioned symbols; instead, you get the latest version of each symbol.  But if a glibc program is expecting foo to be an older routine without the semantics of the current implementation of foo, then it will either crash or do something weird.
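
Running the same inspection against Alpine’s C library shows the difference (again, representative output):

$ objdump -T /lib/ld-musl-x86_64.so.1 | grep ' printf$'
0000000000055600 g    DF .text  0000000000000045  printf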

This has security impacts: the lack of consistency for whether versioned symbols are actually supported by the system basically turns any interaction with versioned symbols into what is called a weird machine.  This means that an attacker possibly controls more attack surface than they would in a situation where the system ran either pure glibc or pure musl.

Alternatives to alpine-glibc

As alpine-glibc is primarily discussed in the context of containers, we will keep this conversation largely focused on that.  There are a few options, far better than using alpine-glibc, if you want a small container to run a binary blob linked against glibc.  For example, you can use Google’s distroless tools, which will build a Debian-based container with only the application and its runtime dependencies, which allows for a fairly small container.  You can also try to use the gcompat package, which emulates the GNU C library ABI in the same way that WINE emulates the Windows ABI.
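
For example, a minimal sketch of the gcompat route (the binary name is a placeholder; whether a given program works under gcompat depends on which glibc interfaces it actually uses):

# add the glibc ABI compatibility layer to an Alpine container
apk add --no-cache gcompat

# glibc-linked binaries are then loaded through gcompat's shim
./some-glibc-binary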

But whatever you do, you shouldn’t use the alpine-glibc project to do this.  You will wind up with something completely broken.


a tail of two bunnies

As many people know, I collect stuffed animals.  Accordingly, I get a lot of questions about what to look for in a quality stuffed animal which will last a long time.  While there are a lot of factors to consider when evaluating a design, I hope the two examples I present here in contrast to each other will help most people get the basic idea.

the basic things to look for

A stuffed animal is basically a set of fabric patches sewn together around some stuffing material.  Therefore, the primary mode of failure for a stuffed animal is when one or more seams suffers a tear or rip in its stitching.  A trained eye can look at a design and determine both the likelihood of failure and the most vulnerable seams, even in a high quality stuffed animal.

There are two basic ways to sew together a stuffed animal: the fabric patches can be sewn together to form inward-facing seams, or they can be sewn together to form outward-facing seams.  Generally, the stuffed animals that have inward-facing seams have more robust construction.  This means that if you can easily see the seam lines, the quality is likely to be low.  Similarly, if eyes and other accessories are sewn in along a main seam line, they become points of vulnerability in the design.

Materials also matter: if the purpose of the stuffed animal is to be placed on a bed, or in a crib, it should be made out of fire-retardant materials.  Higher quality stuffed animals will use polyester fill with a wool-polyester blend for the outside, while lower quality stuffed animals may use materials like cotton.  In the event of a fire, polyester (which is fire retardant) can potentially melt onto skin, but materials like cotton will burn much more vigorously.

Finally, it is important to verify that the stuffed animal has been certified to a well-known safety standard.  Look for compliance with the European Union’s EN71 safety standard or the ASTM F963 standard.  Do not buy any stuffed animal made by a company which is not compliant with these standards.  Stuffed animals bought off maker-oriented websites like Etsy will most likely not be certified; in these cases, you may wish to verify with the maker that they are familiar with the EN71 and ASTM F963 standards and have designed around those standards.

a good example: the jellycat bashful bunny

A Jellycat bashful bunny, cream colored, size: really big.  It is approximately 4 feet tall.

One of my favorite bunny designs is the Jellycat Bashful Bunny.  I have several of them, ranging from small to the largest size available.

This is what I would consider to be a high quality design.  While the seam line along his tummy is visible, it is a very small seam line, which is indicative that the stitching is inward-facing.  There are no other visible seam lines.  Cared for properly, this stuffed animal will last a very long time.

a bad example: build a bear’s pawlette

Jumbo Pawlette, from build a bear.  This variant is 3 feet tall.

A few people have asked me about Build a Bear’s Pawlette design recently, as it looks very similar to the Jellycat Bashful Bunny.  I don’t think it is a very good design.

To start with, you can see that there are 21 separate panels stitched together: 4 for the ears, 3 for the head, 4 for the arms, 2 for the tummy, 2 for the back, 4 for the legs, and 2 for the feet.  The seam lines are very visible, which indicates that there is a high likelihood that the stitching is outward-facing rather than inward-facing.  That makes sense, because it’s a lot easier to stitch up a stuffed animal in store that way.  Additionally, you can see that the eyes are anchored to the seam lines that make up the face, which means detachment of the eyes is a likely failure mode.

Build a Bear has some good designs that are robustly constructed, but Pawlette is not one of them.  I would avoid that one.

Hopefully this is helpful to somebody, at the very least, I can link people to this post now when they ask about this stuff.


free software does not come with any guarantees of support

This evening, I stumbled upon a Twitter post by an account which tracks features being added to GitHub, announcing a proposed feature which would allow users to send direct messages to each other on the platform.

To be absolutely clear, this is a terrible idea.  Free software maintainers already have to deal with a subset of users who believe they are automatically entitled to support and, in some cases, SLAs from the maintainer.

Thankfully, the license I tend to use these days for my software makes it very clear that no support is provided:

This software is provided ‘as is’ and without any warranty, express or implied. In no event shall the authors be liable for any damages arising from the use of this software.

However, it’s not only my license which does this.  So does the GPL, in all caps no less:

THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

Other free software licenses, such as the BSD license, and Apache 2.0 license also have a similar clause.  While maintainers might offer to provide support for free, or perhaps offer a support agreement for a fee, free software is ultimately what you make of it, for better or worse.

By integrating anti-features such as direct messages into software development forges like the proposed GitHub feature, the developers of these forges will inevitably be responsible for an onslaught of abusive messages directed at free software maintainers, which ultimately will serve to act as a Denial of Service on our time.  And, most likely, this abuse will be targeted more frequently at maintainers who are women, or people of color.

I urge the developers of software development forges to respect the labor donated to the commons by free software maintainers, by developing features which respect their agency, rather than features that will ultimately lead abusive users to believe maintainers are available at their beck and call.

Otherwise, the developers of these forges may find projects going to a platform that does respect them.


GNU nano is my editor of choice

I have been using GNU nano for the overwhelming majority of my life.  Like an old friend, nano has always been reliable and has never failed me where other text editors have.  By far, it has been the most influential software I have ever used regarding how I approach the design of my own software.

The vim vs emacs flame war

I’ve used both vim and emacs.  I don’t like either of them, for differing reasons: modal editing doesn’t really fit my mental model of how an editor should work, and Emacs Lisp is not really a particularly fun language to use simply for customizing the behavior of an editor.  As they say, Emacs is a nice operating system; it just needs a good editor.

In any case, I think nano is a much better editor (at least for me): when properly configured (and previously with some patching), it provides all of the features from vim I would want anyway, but in a modeless format.

A note about pico

As most people know, GNU nano began its life as a clone of UW pico.  Pico (short for PIne COmposer) was bundled with the PINE email client, which was written by the University of Washington.  Unfortunately, PINE was distributed under a custom license which had many problems.  This was eventually solved when the University of Washington released ALPINE (short for Alternatively Licensed PINE) under the Apache 2.0 license.

The licensing problem in combination with a desire to make a more capable editor based on the overall pico user experience led to the creation of GNU nano.

In the Alpine Linux distribution, both pico and nano are available.  Here’s what pico looks like by default:

A screenshot of Pico showing some code, in its default configuration.  The help options and titlebar are present.

The default nano experience

Like with pico, the default UI for nano is quite boring to look at.  Here is GNU nano displaying the same file with the default configuration:

The GNU nano editor showing some code in its default configuration.  The help bar highlights undo/redo support and other features not present in Pico.

As you can hopefully see, the default nano configuration is quite similar to that of pico.  However, unlike pico, it can be changed by editing the ~/.nanorc file.

Building something like vim using .nanorc

What I want in an editor is something that basically looks like vim, but is modeless like nano.  Something like this:

GNU nano displaying source code as I have configured it, with syntax highlighting and minibar mode enabled.

But how do we get there?  The answer is simple: we use the ~/.nanorc file.

GNU nano displaying my .nanorc file.  Some features are enabled, and some syntax highlighting packages are included.

As a result of many people wanting the same thing (vim-like functionality with modeless editing), nano gained several third-party patches which allowed for this.  For the most part, these patches (or equivalent functionality) have been included upstream in recent years.

Getting most of the way to a vim-like look and feel, without syntax highlighting, is quite simple.  You simply need to add these lines to your ~/.nanorc file with any recent version of nano:

# enables the minibar feature
set minibar

# disables the shortcut hints
set nohelp

That gets you something like this:

GNU nano with minibar and help disabled.

However, that minibar looks a little ugly with the inverse text.  The good news is, we can disable the inverse text by adding another snippet to ~/.nanorc:

# disable inverse text for the minibar
set titlecolor normal,normal

The way this works is by setting the foreground and background colors for the titlebar to normal, which means that nano shouldn’t change whatever color is already set.  That gives us:

GNU nano with minibar enabled, help disabled, and titlecolor set to normal/normal.

Enabling syntax highlighting

There are two ways that syntax highlighting can be enabled in nano: both come down to including configuration snippets to enable it.  GNU nano comes with some sample syntax highlighting configuration, which on Alpine systems is available in the nano-syntax package, but I don’t personally use it, as the color scheme is quite ugly.

Instead, I use an improved syntax highlighting package that is distributed on GitHub.  To install it, you can just do something like:

nanabozho:~$ git clone git@github.com:scopatz/nanorc ~/.nano/
[...]

This will install the syntax highlighting package to ~/.nano.  At that point, you just add include lines for the syntax highlighters you want to enable:

include "~/.nano/c.nanorc"

Once you do that, you’re done and left with a nano that looks like this:

GNU nano displaying source code as I have configured it, with syntax highlighting and minibar mode enabled.

Hopefully this post demonstrates that nano is quite a capable editor in its own right.


On the topic of community management, CoCs, etc.

Many people may remember that at one point, Alpine had a rather troubled community, which, to put it diplomatically, resulted in a developer leaving the project.  This was the result of not properly managing the Alpine community as it grew: had we taken early action to ensure appropriate moderation and community management, that particular incident would never have happened.

We did ultimately fix this issue and now have a community that tries to be friendly, welcoming and constructive, but it took a lot of work to get there.  As I was one of the main people who did that work, I think it might be helpful to talk about what I’ve learned through that process.

Moderation is critical

For large projects like Alpine, active moderation is the most crucial aspect.  It is basically the part that makes or breaks everything else you try to do.  Building the right moderation team is also important: it needs to be a team that everyone can believe in.

That means that the people who are pushing for community management may or may not be the right people to do the actual day-to-day moderation work, and should rather focus on policy.  This is because some members will be biased against the people pushing for changes in the way the community is managed.  Building a moderation team that gently enforces established policy, but is otherwise perceived as neutral, is critical to success.

Policy statements (such as Codes of Conduct)

It is not necessarily a requirement to write a Code of Conduct.  However, if you are retrofitting one into a pre-existing community, it needs to be done from the bottom up, allowing everyone to say their thoughts.  Yes, you will get people who present bad-faith arguments, perhaps because they see no problem with the status quo; in most cases, though, it is simply because people are resistant to change.  By including the community in the discussion about its community management goals, you ensure they will generally believe in the governance decisions made.

Alpine did ultimately adopt a Code of Conduct.  Most people have never read it, and it doesn’t matter.  When we wrote it, we were writing it to address specific patterns of behavior we wanted to remove from the community space.  The real purpose of a Code of Conduct is simply to set expectations, both from participants and the moderation team.

However, if you do adopt a Code of Conduct, you must actually enforce it as needed, which brings us back to moderation.  I have unfortunately seen many projects in the past few years, which have simply clicked the “Add CoC” button on GitHub and attached a copy of the Contributor Covenant, and then went on to do exactly nothing to actually align their community with the Code of Conduct they published.  Simply publishing a Code of Conduct is an optional first step to improving community relations, but it is never the last step.

Fostering inclusivity

The other key part of building a healthy community is to build a community where everyone feels like they are represented.  This is achieved by encouraging community participation in governance, both at large, and in a targeted way: the people making the decisions and moderating the community should ideally look like the people who actually use the software created.

This means that you should try to encourage women, people of color and other marginalized people to participate in project governance.  One way of doing so is by amplifying their work in your project.  You should also amplify the work of other contributors, too.  Basically, if people are doing cool stuff, the community team should make everyone aware of it.  A great side effect of a community team actively doing this is that it encourages people to work together constructively, which reinforces the community management goals.

Final thoughts

Although it was not easy, Alpine ultimately implemented all of the above, and the community is much healthier than it was even a few years ago.  People are happy, code is being written, and we’re making progress on substantive improvements to the Alpine system, as a community.

Change is scary, but in the long run, I think everyone in the Alpine community agrees by now that it was worth it.  Hopefully other communities will find this advice helpful, too.