libreplayer: toward a generic interface for replayer cores and music players

I’ve been taking a break from focusing on fediverse development for the past couple of weeks — I’ve done some things, but it’s not my focus right now because I’m waiting for Pleroma’s develop tree to stabilize enough to branch it for the 1.1 stable releases. So, I’ve been doing some multimedia coding instead.

The most exciting aspect of this has been libreplayer, which is essentially a generic interface between replayer emulation cores and audio players. A replayer emulation core targets libreplayer, an audio player targets libreplayer, and the two pieces (the emulation core and the libreplayer client) work together to produce audio.

The first release of libreplayer will drop soon. It will contain a PSF1/PSF2 player that is free of binary blobs. This is an important milestone because the only other PSF1/PSF2 replayer that is blob-free has many emulation bugs due to the use of incorrectly modified CPU emulation code from MAME. Highly Experimental’s dirty secret is that it contains an obfuscated version of the PS2 firmware that has been stripped down.

And so, the naming of libreplayer is apt in two ways. It is self-descriptive: libreplayer obviously conveys that it's a library for accessing replayers. But because the emulation cores included in the source kit are blob-free, it also implies that those cores are free as in freedom, which is equally important to me.

What does this mean for audacious? Well, my intention is to replace the uglier replayer emulation plugins in audacious with a libreplayer client and clean-room implementations of each replayer core. I also intend to introduce replayer emulation cores that audacious does not yet support well.

Hopefully this allows for the emulation community to be more effective stewards of their own emulation cores, while allowing for projects like audacious to focus on their core competencies. I also hope that having high-quality clean-room implementations of emulator cores written to modern coding practice will help to improve the security of the emulation scene in general. Time will tell.

Federation – what flows where, and why?

With all of the recent hullabaloo with Gab, and now Kiwi Farms joining the fediverse today, there have been a lot of people asking questions about how data flows in the fediverse and what exposure they actually have.

I’m not really particularly a fan of either of those websites, but that’s beside the point. The point here is to provide an objective presentation of how instances federate with each other and how these federation transactions impact exposure.

How Instances Federate

To start, let's describe a basic model of a federated network. This network will have five actors in it:

  • alyssa@social.example
  • bob@chatty.example
  • chris@photos.example
  • emily@cameras.example
  • sophie@blogs.example

(yeah yeah, I know, I’m not that good at making up fake domains.)

Next, we will build some relationships:

  • Sophie follows Alyssa and Bob
  • Emily follows Alyssa and Chris
  • Chris follows Emily and Alyssa
  • Bob follows Sophie and Alyssa
  • Alyssa follows Bob and Emily


Normally, posts flow through the network in the form of broadcasts. A broadcast-type post is one that is sent to, and only to, a pre-determined set of targets, typically your followers collection.

So, this means that if Sophie makes a post, chatty.example is the only server that gets a copy of it. It does not matter that chatty.example is peered with other instances (social.example).

This is, by far, the majority of traffic inside the fediverse.
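
The broadcast model above can be sketched in a few lines. This is an illustrative sketch only, assuming followers are represented as user@domain strings; real implementations deliver to inbox URIs:

```python
# Minimal sketch of broadcast delivery: a post is sent only to the
# servers hosting the author's followers, regardless of what other
# peering relationships exist between instances.

def delivery_servers(followers):
    """Return the set of servers that receive a broadcast post."""
    return {address.split("@", 1)[1] for address in followers}

# Sophie is followed only by Bob, so only chatty.example gets a copy,
# even though chatty.example also peers with social.example.
print(delivery_servers(["bob@chatty.example"]))  # {'chatty.example'}
```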


The other kind of transaction is easily described as relaying.

To extend on our example above, let's say that Bob chooses to Announce (Mastodon calls this a boost, Pleroma calls this a repeat) the post Sophie sent him.

Because Bob is followed by Sophie and Alyssa, both of them receive a copy of the Announce activity (an activity is a message which describes a transaction). Relay activities refer to the original message by its unique identifier, and recipients of Announce activities use that identifier to fetch the referred message.

For now, we will assume that Alyssa’s instance (social.example) was able to succeed in fetching the original post, because there’s presently no access control in practice on fetching posts in ActivityPub.
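
The Announce handling described above can be sketched as follows. This is a simplified illustration (the URLs and the `fetch` callable are made up; a real implementation would perform an HTTP GET for the object):

```python
# Sketch of how a recipient processes an Announce: the activity carries
# only the original post's unique identifier, and the recipient fetches
# the referenced object itself. There is presently no access control on
# such fetches in practice.

def handle_announce(activity, fetch):
    object_id = activity["object"]   # the original post's unique identifier
    return fetch(object_id)          # dereference it remotely

announce = {
    "type": "Announce",
    "actor": "https://chatty.example/~bob",
    "object": "https://blogs.example/objects/1",
}

# Stand-in for remote fetching: a dict lookup.
remote_objects = {"https://blogs.example/objects/1": {"type": "Note", "content": "hi"}}
post = handle_announce(announce, remote_objects.get)
print(post["content"])  # the original post now exists on this server too
```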

This now means that Sophie’s original post is present on three servers:

  • blogs.example (Sophie’s own instance)
  • chatty.example
  • social.example

Relaying can cause perceived problems when an instance blocks another instance, but these problems are actually caused by a lack of access control on object fetches.


A variant on the broadcast-style transaction is a Create activity that references an object as a reply.

Let’s say Alyssa responds to Sophie’s post that was boosted to her. She composes a reply that references Sophie’s original post with the inReplyTo property.

Because Alyssa is followed by actors on the entire network, now the entire network goes and fetches Sophie’s post and has a copy of it.

This too can cause problems when an instance blocks another. And like in the relaying case, it is caused by a lack of access control on object fetches.

Metadata Leakage

From time to time, people talk about metadata leakage with ActivityPub. But what does that actually mean?

Some people erroneously believe that the metadata leakage problem has to do with public (without access control) posts appearing on instances which they have blocked. While that is arguably a problem, it is caused by the lack of access controls on public posts. The technical term for a publicly available post is as:Public, a reference to the security label that is applied to it.

The metadata leakage problem is an entirely different problem. It deals with posts that are not labelled as:Public.

The metadata leakage problem is this: if Sophie composes a post addressed to her followers collection, then only Bob receives it. So far, so good: no leakage. However, because of bad implementations (and other problems), if Bob replies back to Sophie, his reply will be sent not only to Sophie but also to Alyssa. Alyssa now knows that Sophie posted something, but has no actual idea what that something was. That’s why it’s called a metadata leakage problem: metadata about the existence of one of Sophie’s objects, and hints at its contents (from the text of the reply), leak to Alyssa.

This problem is the big one. It’s not technically ActivityPub’s fault, either, but a problem in how ActivityPub is typically implemented. At the same time, it means that followers-only posts can be risky. Mastodon covers up the metadata leakage problem by hiding replies to users you don’t follow, but that’s all it is: a cover-up of the problem.
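
The leak can be demonstrated with a toy addressing function. This is a deliberately naive sketch of the buggy behavior described above, not any particular implementation's code:

```python
# Sketch of the metadata leak: a naive implementation addresses Bob's
# reply to Bob's own followers plus the original author, instead of
# restricting it to the audience of Sophie's followers-only post.

def naive_reply_recipients(original_author, reply_author_followers):
    # Bug illustrated on purpose: the reply reaches people who never
    # received (and cannot read) the original post.
    return {original_author} | set(reply_author_followers)

recipients = naive_reply_recipients(
    "sophie",              # author of the followers-only post
    {"sophie", "alyssa"},  # Bob's followers
)
print("alyssa" in recipients)  # True: Alyssa learns the post exists
```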


The solution to the metadata leakage problem is to have replies be forwarded to the OP’s audience. But to do this, we need to rework the way the protocol works a bit. That’s where proposals like moving to an OCAP-based variant of ActivityPub come into play. In those variants, doing this is easy. But in what we have now, doing this is difficult.

Anyway, I hope this post helps to explain how data flows through the network.

What is OCAP and why should I care?

OCAP refers to Object CAPabilities. Object capabilities are one of many possible ways to achieve capability-based security; OAuth Bearer Tokens, for example, are an OCAP-style implementation.

In this context, OCAP refers to an adaptation of ActivityPub which utilizes capability tokens.

But why should we care about OCAP? OCAP is a more flexible approach that allows for more efficient federation (considerably reduced cryptography overhead!) as well as conditional endorsement of actions. The latter enables things like forwarding Create activities using tokens that would not normally be authorized to do such things (think of this like sudo, but inside the federation). Tokens can also be used to authorize fetches allowing for non-public federation that works reliably without leaking metadata about threads.

In short, OCAP fixes almost everything that is lacking about ActivityPub’s security, because it defines a rigid, robust and future-proof security model for the fediverse to use.

How does it all fit together?

This work is being done in the LitePub (maybe soon to be called SocialPub) working group. LitePub is to ActivityPub what the WHATWG is to HTML5. The examples I use here don’t necessarily completely line up with what is really in the spec, because they are meant to just be a basic outline of how the scheme works.

So the first thing that we do is extend the AS2 actor description with a new endpoint (capabilityAcquisitionEndpoint) which is used to acquire a new capability object.

Example: Alyssa P. Hacker’s actor object
  {
    "@context": "https://social.example/litepub-v1.jsonld",
    "id": "https://social.example/~alyssa",
    "capabilityAcquisitionEndpoint": "https://social.example/caps/new"
  }

Bob has a server which lives at chatty.example. Bob wants to exchange notes with Alyssa. To do this, Bob’s instance needs to acquire a capability that he uses to federate in the future by POSTing a document to the capabilityAcquisitionEndpoint and signing it with HTTP Signatures:

Example: Bob’s instance acquires the inbox:write and objects:read capabilities
  {
    "@context": "https://chatty.example/litepub-v1.jsonld",
    "id": "https://chatty.example/caps/request/9b2220dc-0e2e-4c95-9a5a-912b0748c082",
    "type": "Request",
    "capability": ["inbox:write", "objects:read"],
    "actor": "https://chatty.example"
  }

It should be noted here that Bob’s instance itself makes the request, using an instance-specific actor. This is important because capability tokens are scoped to their actor. In this case, the capability token may be invoked by any children actors of the instance, because it’s an instance-wide token. But the instance could request the token strictly on Bob’s behalf by using Bob’s actor and signing the request with Bob’s key.

Alyssa’s instance responds with a capability object:

Example: A capability token
  {
    "@context": "https://social.example/litepub-v1.jsonld",
    "id": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
    "type": "Capability",
    "capability": ["inbox:write", "objects:read"],
    "scope": "https://chatty.example",
    "actor": "https://social.example"
  }

There are a few peculiar things about this object that I’m sure you’ve noticed. Let’s look at it together:

  • The scope describes the actor which may use the token. Implementations check the scope for validity by merging it against the actor referenced in the message.
  • The actor here describes the actor which granted the capability. Usually this is an instance-wide actor, but it may also be any other kind of actor.
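
The scope check described above can be sketched like this. The prefix rule for child actors of an instance is an assumption of this sketch (the formal rule belongs to the Litepub spec), and the function names are made up:

```python
# Sketch of capability scope checking: a capability token may only be
# invoked by the actor named in its scope, or (for an instance-wide
# token) by child actors of that instance.

def scope_permits(capability, invoking_actor):
    scope = capability["scope"]
    return invoking_actor == scope or invoking_actor.startswith(scope + "/")

cap = {
    "id": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
    "capability": ["inbox:write", "objects:read"],
    "scope": "https://chatty.example",
}

print(scope_permits(cap, "https://chatty.example/~bob"))    # True: child actor
print(scope_permits(cap, "https://evil.example/~mallory"))  # False: out of scope
```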

In traditional ActivityPub the mechanism through which Bob authenticates and later authorizes federation is left undefined. This is the hole that got filled with signature-based authentication, and is being filled again with OCAP.

But how do we invoke the capability to exchange messages? There’s a couple of ways.

When pushing messages, we can simply reference the capability by including it in the message:

Example: Pushing a note using a capability
  {
    "@context": "https://chatty.example/litepub-v1.jsonld",
    "id": "https://chatty.example/activities/63ffcdb1-f064-4405-ab0b-ec97b94cfc34",
    "capability": "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11",
    "type": "Create",
    "object": {
      "id": "https://chatty.example/objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b",
      "type": "Note",
      "attributedTo": "https://chatty.example/~bob",
      "content": "hey alyssa!",
      "to": ["https://social.example/~alyssa"]
    },
    "to": ["https://social.example/~alyssa"],
    "cc": [],
    "actor": "https://chatty.example/~bob"
  }

Easy enough, right? Well, there’s another way we can do it as well, which is to use the capability as a bearer token (because it is one). This is useful when fetching objects:

Example: Fetching an object with HTTP + capability token
GET /objects/de18ad80-879c-4ad2-99f7-e1c697c0d68b HTTP/1.1
Accept: application/activity+json
Authorization: Bearer https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11

HTTP/1.1 200 OK
Content-Type: application/activity+json


Because we have a valid capability token, the server can make decisions on whether or not to disclose the object based on the relationship associated with that token.
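
That server-side decision can be sketched as follows. The storage layout and status-code return are illustrative assumptions, not part of the proposal:

```python
# Sketch of the server's decision when an object is fetched with a
# capability used as a bearer token: disclose the object only if the
# token was actually issued and grants objects:read.

ISSUED_CAPS = {
    "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11": {
        "capability": ["inbox:write", "objects:read"],
        "scope": "https://chatty.example",
    },
}

def authorize_fetch(bearer_token):
    cap = ISSUED_CAPS.get(bearer_token)
    if cap is None or "objects:read" not in cap["capability"]:
        return 403  # unknown token, or token lacks the needed grant
    return 200      # disclose the object

print(authorize_fetch(
    "https://social.example/caps/640b0093-ae9a-4155-b295-a500dd65ee11"))  # 200
print(authorize_fetch("https://social.example/caps/bogus"))               # 403
```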

This is basically OCAP in a nutshell. It’s simple and easy for implementations to adopt and gives us a framework for extending it in the future to allow for all sorts of things without leakage of cryptographically-signed metadata.

If this sort of stuff interests you, drop by #litepub on freenode!

Software Does Not Make A Product

Some fediverse developers approach project management from the philosophy that they are building a product in its own right instead of a tool. But does that approach really make sense for the fediverse?

It’s that time again, patches have been presented which improve Mastodon’s compatibility with the rest of the fediverse. However, the usual suspect has expressed disinterest in clicking the merge button. The users protest loudly about this unilateral decision, as is expected by the astute reader. Threats of hard forks are made. GitHub’s emoji reactions start to arrive, mostly negative. The usual suspect fires back saying that the patches do not fit into his personal vision, leading to more negative reactions. But why?

I believe the main issue at stake is whether the fediverse software is the product, or whether the instances themselves are. Yes, both the software and the instances are products in some sense, but the real question is which one is actually more impactful?

Gargron (the author of Mastodon), for whatever reason, sees Mastodon itself as the core product. This is obvious based on the marketing copy he writes to promote the Mastodon software and the 300,000+ user instance he personally administrates where he is followed by all new signups by default. It is also obvious based on the dictatorial control he exerts over the software.

But is this view aligned with reality? Mastodon has very few configurable options, but admins have made modifications to the software, which add configuration options that contradict Gargron’s personal vision. These features are frequently deployed by Mastodon admins and, to an extent, Mastodon instances compete with each other on various configuration differences: custom emoji, theming, formatting options and even the maximum length of a post. This competition, largely, has been enabled by the existence of “friendly” forks that add the missing configuration options.

My view is different. I see fediverse software as a tool that is used to build a community which optionally exists in a community of communities (the fediverse). In my view, users should be empowered to choose an instance which provides the features they want, with information about what features are available upfront. In essence, it is the instances themselves which are competing for users, not the software.

Monoculture harms competitiveness: there are thousands of Mastodon instances to choose from, but how many of them are truly memorable? How many are shipping stock Mastodon with the same old default color scheme and theme?

Outside of Mastodon, the situation is quite different. Most of us see the software we work on as a tool for facilitating community building. Accordingly, we try to do our best to give admins as many ways as possible to make their instance look and feel as they want. They are building the product that actually matters, we’re just facilitating their work. After all, they are the ones who have to spend time customizing, promoting and managing the community they build. This is why Pleroma has extensive configuration and theming options that are presented in a way that is very easy to leverage. Likewise, Friendica, Hubzilla and even GNU Social can be customized in the same way: you’re in control as the admin, not a product designer.

But Mastodon is still problematic when it comes to innovation in the fediverse at large. Despite the ability other fediverse software gives users and admins to present their content in whatever form they want, Mastodon presently fails to render that content correctly:

Mastodon presents lists in an incorrect way.

The patches I referred to earlier correct this problem by changing how Mastodon processes posts from remote instances. They also provide a path toward improving usability in the fediverse by allowing us to work toward phasing out the use of Unicode mathematical constants as a substitute for proper formatting. The majority of fediverse microblogging software has supported this kind of formatting for a long time, many implementations predating Mastodon itself. Improved interoperability with other fediverse implementations sounds like a good thing, right? Well, it’s not aligned with the Mastodon vision, so it’s rejected.

The viewpoint that the software itself is primarily what matters is stifling fediverse development. As developers, we should be working together to improve the security and expressiveness of the underlying technology. This means that some amount of flexibility is required. Quoting RFC 791:

In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior.

There is no God of the fediverse. The fediverse exists and operates smoothly because we work together, as developers, in concert with the admin and user community at large. Accomplishing this requires compromise, not unilateral decision making.

What would ActivityPub look like with capability-based security, anyway?

This is the third article in a series about ActivityPub, detailing the challenges of building a trustworthy, secure implementation of the protocol stack.

It also takes a significant technical deep dive into informally specifying a set of protocol extensions to ActivityPub. Formal specification of these extensions will be done in the Litepub working group and will likely see some amount of change, so this blog entry should be considered non-normative in its entirety.

Over the past few years of creating and revising ActivityPub, many people have made a push for the inclusion of a capability-based security model as the core security primitive (instead, the core security primitive is “this section is non-normative,” but I’m not salty), but what would that look like?

There’s a few different proposals in the works at varying stages of development that could be used to retrofit capability-based security into ActivityPub:

  • OCAP-LD, which adds a generic object capabilities framework for any consumer of JSON-LD (such as the Linked Data Platform, or the neutered version of LDP that is described as part of ActivityPub),
  • Litepub Capability Enforcement, which is preliminarily described by this blog post, and
  • PolaPub aka CapabilityPub which is only an outline stored in an .org document. It is presumed that PolaPub or CapabilityPub or whatever it is called next week will be built on OCAP-LD, but in fairness, this is pure speculation.

Why capabilities instead of ACLs?

ActivityPub, like the fediverse in general, is an open world system. Traditional ACLs fail to provide proper scalability to the possibility of 100s of millions of accounts across millions of instances. Object capabilities, on the other hand, are opaque tokens which allow the bearer to possibly consume a set of permissions.

The capability enforcement proposals presently proposed would be deployed as a hybrid approach: capabilities to provide horizontal scalability for the large number of accounts and instances, and deny lists to block specific interactions from actors. The combination of capabilities and deny lists provides for a highly robust permissions system for the fediverse, and mimics previous work on federated open world systems.

Drawing inspiration from previous work: the Second Life Open Grid Protocol

I’ve been following large scale interactive communications architectures for many years, which has allowed me to learn many things about the design and implementation of open world horizontally-scaled systems.

One of the projects that I followed very closely was started in 2008, as a collaboration between Linden Lab, IBM and some other participants: the Open Grid Protocol. While the Open Grid Protocol itself ultimately did not work out for various reasons (largely political), a large amount of the work was recycled into a significant redesign of the Second Life service’s backend, and the SL grid itself now resembles a federated network in many ways.

OGP was built on the concept of using capability tokens as URIs, which would either map to an active web service or a confirmation. Since the capability token was opaque and difficult to forge, it provided sufficient proof of authentication without sharing any actual information about the authorization itself: the web services act on the session established by the capability URIs instead of on an account directly.

Like ActivityPub, OGP is an actor-centric messaging protocol: when logging in, the login server provides a set of “seed capabilities”, which allow use of the other services. From the perspective of the other services, invocation of those capability URIs is seen as an account performing an action. Sound familiar in a way?

The way Linden Lab implemented this part of OGP was by having a capabilities server which handled routing the invoked capability URIs to other web services. This step in and of itself is not particularly required, an OGP implementation could handle consumption of the capability URIs directly, as OpenSim does for example.

Bringing capability URIs into ActivityPub as a first step

So, we have established that capability URIs are an opaque token that can be called as a substitute for whatever backend web service was going to be used in the first place. But, what does that get us?

The simplest way to look at it is this way: there are activities which are relayable and others which are not relayable. Both can become capability-enabled, but require separate strategies.

Relayable activities

Create (in this context, thread replies) activities are relayable. This means the capability can simply be invoked by treating it as an inbox, and the server the capability is invoked on will relay the side effects forward. The exact mechanism for this is not yet defined, as it will require prototyping and verification, but it’s not impossible. Capability URIs for relayable activities can likely be directly aliased to the sharedInbox if one is available, however.

Intransitive activities

Intransitive activities (ones which act on a pre-existing object that is not supplied) like Announce, Like, Follow will require proofs. We can already provide proofs in the form of an Accept activity:

  {
    "@context": "",
    "id": "",
    "type": "Accept",
    "actor": "",
    "object": {
      "id": "",
      "type": "Announce",
      "object": "",
      "actor": "",
      "to": [""],
      "cc": [""]
    }
  }

This proof can be optionally signed with LDS in the same way as OCAP-LD proofs. Signing the proof is not covered here, and the proof must be fetchable, as somebody looking to distribute their intransitive actions on objects known to be security labeled must validate the proof somehow.

Object capability discovery

A security-labelled object has a new field, capabilities, which is an Object containing the set of allowed actions and the corresponding capability URI for each:

  {
    "@context": [""],
    "capabilities": {
      "Announce": "",
      "Create": "",
      "Like": ""
    }
  }

Example: Invoking a capability

Bob makes a post which he allows liking and replying to, but not announcing. That post looks like this:

  {
    "@context": [""],
    "capabilities": {
      "Create": "",
      "Like": ""
    },
    "id": "",
    "type": "Note",
    "content": "I'm really excited about the new capabilities feature!",
    "attributedTo": ""
  }

As you can tell, the capabilities object does not include an Announce grant, which means that a proof will not be provided for Announce objects.

Alyssa wants to like the post, so she creates a normal Like activity and sends it to the Like capability URI. The server responds with an Accept object that she can forward to her recipients:

  {
    "@context": [""],
    "id": "",
    "type": "Accept",
    "actor": "",
    "object": {
      "id": "",
      "type": "Like",
      "object": "",
      "actor": "",
      "to": [""]
    }
  }

Bob can be removed from the recipient list, as he already processed the side effects of the activity when he accepted it. Alyssa can then forward this object on to her followers, which can verify the proof by fetching it, or alternatively verifying the LDS signature if present.
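
Verifying a forwarded proof can be sketched like this. This assumes the recipient fetches the proof by its URI (the `fetch` callable and URLs are stand-ins), and it omits LDS signature checking, which the post notes is optional:

```python
# Sketch of verifying a relayed intransitive activity: the recipient
# fetches the Accept proof and checks that it actually wraps the
# activity it claims to endorse.

def proof_is_valid(proof_uri, claimed_activity_id, fetch):
    document = fetch(proof_uri)
    return (document is not None
            and document.get("type") == "Accept"
            and document.get("object", {}).get("id") == claimed_activity_id)

# Stand-in for remote fetching: a dict lookup.
proofs = {
    "https://chatty.example/proofs/1": {
        "type": "Accept",
        "object": {"id": "https://social.example/activities/like-1",
                   "type": "Like"},
    },
}

print(proof_is_valid("https://chatty.example/proofs/1",
                     "https://social.example/activities/like-1",
                     proofs.get))  # True
```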

Example: Invoking a relayable capability

Some capabilities, like Create, result in the server hosting the invoked capability relaying the message forward instead of using proofs.

In this example, the post being relayed is assumed to be publicly accessible. Instances where a post is not publicly accessible should create a capability URI which returns the post object.

Alyssa decides to post a reply to the message from Bob she just liked above:

  {
    "@context": [""],
    "to": [""],
    "cc": [""],
    "type": "Create",
    "actor": "",
    "object": {
      "capabilities": {
        "Create": "",
        "Like": ""
      },
      "type": "Note",
      "content": "I am really liking the new object capabilities feature too!",
      "attributedTo": ""
    }
  }

An astute reader will note that the capability set is the same as the parent. This is because the parent reserves the right to reject any post which requests more rights than were in the parent post’s capability set.

Alyssa POSTs this message to the Create capability from the original message and gets back a 202 Accepted status from the server. The server will then relay the message to her followers collection by dereferencing it remotely.

A possible extension here would be to allow the Create message to become intransitive and combined with a proof. This could be done by leaving the to and cc fields empty, and specifying audience instead or something along those lines.

Considerations with backwards compatibility

Obviously, it goes without saying that an ActivityPub 1.0 implementation can ignore these capabilities and do whatever it wants. Thus, it is suggested that messages with security labelling beyond what is considered normal for ActivityPub 1.0 not be sent to ActivityPub 1.0 servers.

Determining what servers are compatible ahead of time is still an area that needs significant research activity, but I believe it can be done!

ActivityPub: the present state, or why saving the ‘worse is better’ virus is both possible and important

This is the second article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In our previous episode, I laid out some personal observations about implementing an AP stack from scratch over the past year. When we started this arduous task, there were only three other AP implementations in progress: Mastodon, Kroeg and PubCrawl (the AP transport for Hubzilla), so it has been a pretty significant journey.

I also described how ActivityPub was a student of the ‘worse is better’ design philosophy. Some people felt a little hurt by this, but they shouldn’t have: after all, UNIX (of which modern Linux and BSD systems are a derivative) is also a student of the ‘worse is better’ philosophy. And much like the unices of yesteryear, ActivityPub right now has a lot of missing pieces. But that’s alright, as long as the participants in this experiment understand the limitations.

For the first time in decades, the success of ActivityPub, in part by way of its aggressive adoption of the ‘worse is better’ philosophy (which enabled its authors to actually ship something), has gained enough traction to inspire people to believe that perhaps we can take back the Web and make it open again. This in itself is a wonderful thing, and we must do our best to seize this opportunity and run with it.

As I mentioned, there have been a huge number of projects looking to implement AP in some way or another, many not yet public but seeking guidance on how to write an AP stack. My DMs have been quite busy with questions about ActivityPub over the past couple of months.

Let’s talk about the elephant in the room, actually no not that one.

ActivityPub has been brought this far by the W3C Social CG. This is a Community Group that was chartered by the W3C to advance the Social Web.

While they did a good job at getting some of the best minds into the same room and talking about building a federated social web, a lot of decisions were already predetermined (using as a basis) or left underspecified to satisfy other groups inside W3C. Finally, the ActivityPub specification itself claimed that pure JSON could be used to implement ActivityPub, but the W3C kept pushing for layered specs on top like JSON-LD Linked Data Signatures, a spec that is not yet finalized but depends on JSON-LD.

LDS has a lot of problems, but I have covered them already. You can read about some of those problems by reading up on a mitigation known as Blind Key Rotation. Anyway, this isn’t really about W3C pushing for use of LDS in AP; that is just one illustrative example of trying to bundle JSON-LD and its dependencies into ActivityPub to make JSON-LD a de facto requirement.

Because of this bundling issue, we established a new community group called LitePub, meant to be a workspace for people actually implementing ActivityPub stacks so that they could get documentation and support for using ActivityPub without JSON-LD, or using JSON-LD in a safe way. To date, the LitePub community is one of the best resources for asking questions about ActivityPub and getting real answers that can be used in production today.

But to build the next generation of ActivityPub, the LitePub group isn’t enough. Is W3C still interested? Unfortunately, from what I can tell, not really: they are pursuing another system that was developed in house called SOLID, which is built on the Linked Data Platform. Since SOLID is being developed by W3C top brass, I would assume that they aren’t interested in stewarding a new revision of ActivityPub. And why would they be? SOLID is essentially a semantic web retread of ActivityPub, which gives the W3C top brass exactly what they wanted in the first place.

In some ways, I would argue that W3C’s perceived disinterest in Social Web technologies other than SOLID largely has to do with fediverse projects having a very lukewarm response to JSON-LD and LDS.

The good news is that there have been some initial conversations between a few projects on what a working group to build the next generation of ActivityPub would look like, how it would be managed, and how it would be funded. We will be having more of these conversations over the next few months.

ActivityPub: the present state

In the first blog post, I went into a little detail about the present state of ActivityPub. But is it really as bad as I said?

I am going to break down a few examples of faults in the protocol and talk about their current state as well as what we are doing for short-term mitigations and where we are doing them.

Ambiguous addressing: is it a DM or just a post directly addressed to a circle of friends?

As Osada and Hubzilla started to get attention, Mastodon and Pleroma users started to see weird behavior in their notifications and timelines: messages from people they didn’t necessarily follow which got directly addressed to the user. These are messages sent to a group of selected friends, but can otherwise be forwarded (boosted/repeated/announced) to other audiences.

In other words, they do not have the same semantic meaning as a DM. But due to the way they were addressed, Mastodon and Pleroma saw them as a DM.

Mastodon fixed this issue in 2.6 by adding heuristics: if a message has recipients in both the to and cc fields, then it’s a public message that is addressed to a group of recipients, and not a DM. Unfortunately, Mastodon treats it similarly to a followers-only post and does not infer the correct rights.

Meanwhile, Pleroma and Friendica came up with the idea to add a semantic hint to the message with the litepub:directMessage field. If this is set to true, it should be considered as a direct message. If the field is set to false, then it should be considered a group message. If the field is unset, then heuristics are used to determine the message type.
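The precedence described above (an explicit litepub:directMessage hint wins; addressing heuristics only apply when the hint is absent) can be sketched roughly like this. The function name and return values are illustrative, not any project’s actual API:

```python
AS_PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def classify_message(activity):
    """Classify an incoming activity as 'direct', 'group', or 'public'.

    Illustrative sketch: an explicit litepub:directMessage hint takes
    precedence; otherwise we fall back to addressing heuristics.
    """
    to = activity.get("to", [])
    cc = activity.get("cc", [])

    # Anything addressed to as:Public is unambiguously public.
    if AS_PUBLIC in to or AS_PUBLIC in cc:
        return "public"

    # Explicit semantic hint wins over heuristics.
    if "litepub:directMessage" in activity:
        return "direct" if activity["litepub:directMessage"] else "group"

    # Heuristic: recipients in both to and cc means a message addressed
    # to a circle of friends, not a DM.
    if to and cc:
        return "group"
    return "direct"
```

Note that a message classified as "group" here still must not be treated as followers-only; it is a post addressed to a fixed set of recipients.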

Pleroma has a branch in progress which adds both support for the litepub:directMessage field as well as the heuristics. It should be landing shortly (it needs a rebase and I need to fix up some of the heuristics).

So overall, the issue is reasonably mitigated at this point.

Fake direction attacks

Several months ago, Puckipedia did some fake direction testing against mainstream ActivityPub implementations. Fake direction attacks are especially problematic because they allow spoofing to happen.

She found vulnerabilities in Mastodon, Pleroma and PixelFed, as well as recently a couple of other fediverse software.

The vulnerabilities she reported in Mastodon, Pleroma and PixelFed have been fixed, but, as she observes, the class of vulnerability keeps reappearing.
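A typical defense against fake direction is a same-origin cross-check: every id in a delivered activity must live on the origin of the actor whose key signed the delivery. The following is a hypothetical sketch of that class of check, not code from any of the projects mentioned:

```python
from urllib.parse import urlparse

def same_origin(a, b):
    """True when two ids share scheme and host."""
    pa, pb = urlparse(a), urlparse(b)
    return (pa.scheme, pa.netloc) == (pb.scheme, pb.netloc)

def check_fake_direction(activity, signing_actor_id):
    """Reject activities whose pieces do not all originate from the
    server that delivered them. Hypothetical illustration of the class
    of check involved."""
    if not same_origin(activity["actor"], signing_actor_id):
        raise ValueError("actor does not match the HTTP signature key owner")
    if not same_origin(activity["id"], activity["actor"]):
        raise ValueError("activity id is not on the actor's origin")
    obj = activity.get("object")
    if isinstance(obj, dict) and "id" in obj and not same_origin(obj["id"], activity["actor"]):
        raise ValueError("embedded object must be fetched from its own origin, not trusted inline")
    return True
```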

In part, we can mitigate this by writing excellent security documentation and referring people to read it. This is something that I hope the LitePub group can do in the future.

But for now, I would say this issue is not fully mitigated.

Leakage caused by Mastodon’s followers-only scope

Software which is directly compatible with the Mastodon followers-only scope has a few problems; I am grouping them together here:

  • New followers can see content that was posted before they were authorized to view any followers-only content
  • Replies to followers-only posts are addressed to their own followers instead of the followers collection of the OP at the time the post was created (which creates metadata leaks about the OP)
  • Software which does not support the followers-only scope can dereference the OP’s followers collection in any way they wish, including interpreting it as as:Public (this is explicitly allowed by the ActivityStreams 2.0 specification, you can’t even make this up)

Mitigation of this is actually incredibly easy, which makes me question why Mastodon didn’t do it to begin with: simply expand the followers collection when preparing to send the message outbound.

An implementation of this will be landing in Pleroma soon to harden the followers-only scope as well as fix followers-only threads to be more usable.

Implementation of this mitigation also brings the followers-only threads to Friendica and Hubzilla in a safe and compatible way: all fediverse software will be able to properly interact with the threads.
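The mitigation itself, expanding the followers collection into a concrete recipient list at send time, is small enough to sketch. expand_followers is a hypothetical helper, not any project’s actual function:

```python
def expand_followers(activity, followers_uri, follower_ids):
    """Replace a reference to the actor's followers collection with the
    concrete list of follower ids at send time, freezing the audience
    at the moment the post is created. New followers therefore never
    gain access to older followers-only content."""
    out = dict(activity)
    for field in ("to", "cc"):
        recipients = out.get(field, [])
        if followers_uri in recipients:
            # Drop the opaque collection URI, add the explicit members.
            recipients = [r for r in recipients if r != followers_uri]
            recipients.extend(follower_ids)
            out[field] = recipients
    return out
```

Because the audience is now explicit, software that does not understand the followers-only scope has nothing left to misinterpret as as:Public.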

The “don’t @ me” problem

Some of this interpretation of Zot may be slightly wrong; it is based on reading the specifications for Zot and Zot 6.

Other federated protocols such as DFRN, Zot and Zot 6 provide a rich framework for defining what interactions are allowed with a given message. ActivityPub doesn’t.

DFRN provides UI hints on each object that hint at what may be done with the object, but uses a capabilities system under the hood. Capability enforcement is done by the “feed producer,” which either accepts your request or denies it. If you comment on a post in DFRN, it is the responsibility of the parent “feed producer” to forward your post onward through the network.

Zot uses a similar capabilities system but provides a magic signature in response to consuming the capability, which you then forward as proof of acceptance. Zot 6 uses a similar authentication scheme, except using OpenWebAuth instead of the original Zot authentication scheme.

For ActivityPub, my proposal is to use a system of capability URIs and proof objects that are cross-checked by the receiving server. Cryptographic signatures are not a component of the proof objects themselves; the system is strictly capability-based. Cryptographic verification could be provided by leveraging HTTP Signatures to sign the response, if desired. I am still working out the details of how precisely this will work, and that will probably be what the next blog post is about.

As a datapoint: in Pleroma, we already use this cross-checking technique to verify objects which have been forwarded to us due to ActivityPub §7.1.2. This allows us to avoid JSON-LD and LDS signatures and is the recommended way to verify forwarded objects in LitePub implementations.
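That cross-checking technique amounts to refetching a forwarded object from its origin and trusting only the authoritative copy. A minimal sketch, with fetch standing in for a real HTTP client (the function name is illustrative):

```python
def verify_forwarded_object(forwarded, fetch):
    """Verify an object forwarded to us (ActivityPub section 7.1.2) by
    re-fetching it from its origin, instead of trusting inline JSON-LD
    signatures. `fetch` maps an id to an object, or None on failure."""
    authoritative = fetch(forwarded["id"])
    if authoritative is None:
        raise ValueError("object cannot be fetched from its origin; reject it")
    # Only the authoritative copy is trusted; the forwarded body is
    # discarded, so tampering with it accomplishes nothing.
    return authoritative
```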

Unauthenticated object fetching

Right now, due to the nature of ActivityPub and the design motivations behind it, fetching public objects is entirely unauthenticated.

This has led to a few incidents where fediverse users have gotten upset over their posts still arriving at servers they have blocked, since they naturally expect blocking to prevent exactly that.

Mastodon has implemented an extension for post fetching where fetching private posts is authenticated using the HTTP Signature of the user who is fetching the post. This is a possible way of solving the authentication problem: instances can be identified based on which actor signed the request.

However, I don’t think that fetching private posts in this way is a good idea (such fetches should always fail), and I wouldn’t recommend it. With that said, a more generalized approach based on using HTTP Signatures to fetch public posts could be workable.

But I do not think the AP server should use a random user’s key to sign the requests: instead there should be an AP actor which explicitly represents the whole instance, and the instance actor’s key should be used to sign the fetch requests instead. That way information about individual users isn’t leaked, and signatures aren’t created without the express consent of a random instance user.

Once object fetches are properly authenticated in a way that instances are identifiable, then objects can be selectively disclosed. This also hardens object fetching via third parties such as crawlers.
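A sketch of what such an instance-actor-signed fetch might look like. The signing-string layout follows the HTTP Signatures draft ((request-target), host, date); HMAC stands in for the RSA signing step a real implementation would use, purely to keep the example self-contained, and all names are placeholders:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timezone

def signed_fetch_headers(key_id, secret, host, path):
    """Build headers for an authenticated GET signed with an
    instance-level key, so no individual user's key is ever used."""
    date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
    # Signing string per the HTTP Signatures draft, one line per header.
    signing_string = f"(request-target): get {path}\nhost: {host}\ndate: {date}"
    sig = base64.b64encode(
        hmac.new(secret, signing_string.encode(), hashlib.sha256).digest()
    ).decode()
    return {
        "Host": host,
        "Date": date,
        "Signature": (
            f'keyId="{key_id}",algorithm="hs2019",'
            f'headers="(request-target) host date",signature="{sig}"'
        ),
    }
```

The keyId points at the instance actor’s key, so the receiving server can identify which instance is fetching without learning anything about individual users.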


In this particular blog entry, I discussed why ActivityPub is still the hero we need despite being designed with the ‘worse is better’ philosophy, outlined some early plans for cross-project collaboration on a next-generation ActivityPub-based protocol, and examined a few of the common problem areas with ActivityPub and how we can mitigate them in the future.

And with that, despite the present issues we face with ActivityPub, I will end this by borrowing a common saying from the cryptocurrency community: the future is bright, the future is decentralized.

ActivityPub: The “Worse Is Better” Approach to Federated Social Networking

This is the first article in a series that will be a fairly critical review of ActivityPub from a trust & safety perspective. Stay tuned for more.

In the modern day, I, like many other developers working on libre software, have been exposed to a protocol design philosophy that emphasizes safety and correctness. That philosophy can be summarized with these goals:

  • Simplicity: the protocol must be simple to implement. It is more important for the protocol to be simple than the backend implementation.
  • Correctness: the protocol must be verifiably correct. Incorrect behavior is simply not allowed.
  • Safety: the protocol must be designed in a way that is safe. Behavior and functionality which risks safety is considered incorrect.
  • Completeness: the protocol must cover as many situations as is practical. All reasonably expected cases must be covered. Simplicity is not a valid excuse to reduce completeness.

Most people would correctly refer to these as good characteristics and overall the right way to approach designing protocols, especially in a federated and social setting. In many ways, the Diaspora protocol could be considered as an example of this philosophy of design.

The “worse is better” approach to protocol design is only slightly different:

  • Simplicity: the protocol must be simple to implement. It is important for the backend implementation to be equally simple as the protocol itself. Simplicity of both implementation and protocol are the most important considerations in the design.
  • Correctness: the protocol must be correct when tested against reasonably expected cases. It is more important to be simple than correct. Inconsistencies between real implementations and theoretical implementations are acceptable.
  • Safety: the protocol must be safe when tested against basic use cases. It is more important to be simple than safe.
  • Completeness: the protocol must cover reasonably expected cases. It is more important for the protocol to be simple than complete. Under-specification is acceptable when it improves the simplicity of the protocol.

OStatus and ActivityPub are examples of the “worse is better” approach to protocol design. I have intentionally portrayed this design approach in a way to attempt to convince you that it is a really bad approach.

However, I do believe that this approach, even though it is a considerably worse way to design protocols and produces technologies that people cannot fully trust or feel safe using, has better survival characteristics.

To understand why, we have to look at both what expected security features of federated social networks are, and what people mostly use social networks for.

When you ask people what security features they expect of a federated social networking service such as Mastodon or Pleroma, they usually reply with a list like this:

  • I should be able to interact with my friends.
  • The messages I share only with my friends should be handled in a secure manner. I should be able to depend on the software to not compromise my private posts.
  • Blocking should work reasonably well: if I block someone, they should disappear from my experience.

These requirements sound reasonable, right? And of course, ActivityPub mostly gets the job done. After all, the main use of social media is shitposting, posting selfies of yourself and sharing pictures of your dog. But would they be better served by a different protocol? Absolutely.

See, the thing is, ActivityPub is like a virus. The protocol is simple enough to implement that people can actually do it. And they are, aren’t they? There are over 40 applications presently in development that use ActivityPub as the basis of their networking stack.

Why is this? Because, despite the design flaws in ActivityPub, it is generally good enough: you can interact with your friends, and in compliant implementations, addressing ensures that nobody else except for those you explicitly authorize will read your messages.

But it’s not good enough: for example, people have expressed that they want others to be able to read messages, but not reply to them.

Had ActivityPub been a capability-based system instead of a signature-based system, this would never have been a concern to begin with: replies to the message would have gone to a special capability URI and then been accepted or rejected.
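As a thought experiment, such a scheme might look like the following. The replyCapability property is invented purely for illustration; nothing like it is standardized:

```python
# A hypothetical object advertising a reply capability. If the
# property is absent, the origin server refuses all replies.
post = {
    "id": "https://a.example/objects/1",
    "type": "Note",
    "replyCapability": "https://a.example/caps/5f0c",
}

# Capabilities the origin server has actually issued.
issued_caps = {"https://a.example/caps/5f0c"}

def accept_reply(parent, reply, caps=issued_caps):
    """The origin server decides: a reply is only accepted when it is
    delivered to a capability URI the server itself issued."""
    cap = parent.get("replyCapability")
    return cap is not None and cap in caps and reply.get("inReplyTo") == parent["id"]
```

Withdrawing the capability (removing it from the issued set) is all it takes to implement "read, but don’t @ me."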

There are similar problems with things like the Mastodon “followers-only” posts and general concerns like direct messaging: these types of messages imply specific policy, but there is no mechanism in ActivityPub to convey these semantics. (This is in part solved by the LitePub litepub:directMessage flag, but that’s a kludge to be honest.)

I’ve also mentioned before that much of the discourse about Mastodon versus Pleroma has actually been caused by complete design failures of ActivityPub.

An example of this is instances you’ve banned still being able to see threads from your instance: somebody from a third instance interacts with the thread, and then the software (either Mastodon or Pleroma) reconstructs the entire thread. Since there is no authentication requirement for retrieving a thread, blocked instances can successfully reconstruct the threads they weren’t allowed to receive in the first place. The only difference between Mastodon and Pleroma here is that Pleroma allows the general public to view the shared timelines without using a third-party tool, which exposes the leaks caused by ActivityPub’s bad design.

In an ideal world, the number of ActivityPub implementations would be zero. But of course this is not an ideal world, so that leaves us with the question: “where do we go from here?”

And honestly, I don’t know how to answer that yet. Maybe we can save ActivityPub by extending it to be properly capability-based and eventually dropping support for the ActivityPub of today. But this will require coordination between all the vendors. And with 40+ projects out there, it’s not going to be easy. And do we even care about those 40+ projects anyway?

The Case For Blind Key Rotation

ActivityPub uses cryptographic signatures, mainly for the purpose of authenticating messages. This is largely for the purpose of spoofing prevention, but as any observant person would understand, digital signatures carry strong forensic value.

Unfortunately, while ActivityPub uses cryptographic signatures, the types of cryptographic signatures to use have been left unspecified. This has led to various implementations having to choose on their own which signature types to use.

The fediverse has settled on using not one but two types of cryptographic signature:

  • HTTP Signatures: based on an IETF internet-draft, HTTP Signatures provide cryptographic validation of the headers of a request, including a Digest header which provides some information about the underlying object. HTTP Signatures are an example of detached signatures. HTTP Signatures also generally sign the Date header, which provides a de facto validity period.
  • JSON-LD Linked Data Signatures: based on a W3C community draft, JSON-LD Linked Data Signatures provide an inline cryptographic validation of the JSON-LD document being signed. JSON-LD Linked Data Signatures are commonly referred to as LDS signatures or LDSigs because frankly the title of the spec is a mouthful. LDSigs are an example of inline signatures.
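For concreteness, here is roughly what each looks like on the wire. The shapes follow what fediverse software commonly emits (Mastodon’s LDSigs use the RsaSignature2017 suite); all values are placeholders:

```python
# Detached signature: lives in an HTTP header, covers the Date header,
# and so carries a de facto validity window.
http_signature_header = (
    'keyId="https://a.example/users/alice#main-key",'
    'algorithm="rsa-sha256",'
    'headers="(request-target) host date digest",'
    'signature="Base64EncodedSignature..."'
)

# Inline signature: embedded in the JSON-LD document itself, with no
# expiry, so it stays valid for as long as the document exists.
ldsig_document = {
    "id": "https://a.example/objects/1",
    "type": "Note",
    "signature": {
        "type": "RsaSignature2017",
        "creator": "https://a.example/users/alice#main-key",
        "created": "2019-01-10T20:40:00Z",
        "signatureValue": "Base64EncodedSignature...",
    },
}
```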

Signatures and Deniability

When we refer to deniability, what we’re talking about is forensic deniability: put simply, the ability to plausibly argue in a court or tribunal that you did not sign a given object. In essence, forensic deniability is the ability to maintain plausible deniability when presented with a piece of forensic evidence.

Digital signatures are by their very nature harmful to forensic deniability because they are digital evidence showing that you signed something. But not all signature schemes are made equal; some are less harmful to deniability than others.

A good signature scheme which does not harm deniability has the following basic attributes:

  • Signatures are ephemeral: they only hold validity for a given time period.
  • Signatures are revocable: they can be invalidated during the validity period in some way.

Both HTTP Signatures and LDSigs have weaknesses: neither allows for future revocation of a signature. LDSigs are even worse, because LDSig signatures are intentionally valid forever.

Mitigating the revocability problem with Blind Key Rotation

Blind Key Rotation is a mitigation that provides some level of revocability by building on the fact that ActivityPub implementations must fetch a given actor again whenever signature authentication fails.

The mitigation works as follows:

  1. You delete one or more objects in a short time period.
  2. Some time after the deletions are processed, the instance rekeys your account. It does not send any Update message or similar because signing your new key with your old key defeats the purpose of this exercise.
  3. When you next publish content, signature validation fails and the instance fetches your account’s actor object again to learn the new keys.
  4. With the new keys, signature validation passes and your new content is published.

It is important to emphasize that in a Blind Key Rotation, you do not send out an Update message with the new keys. This is because you do not want to create a cryptographic relationship between the keys: by creating one, you introduce new digital evidence which can be used to prove that you held the original keypair at some time in the past.
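The whole flow can be modeled in a few lines. Random tokens stand in for real keypairs, and the remote instance refreshing its cache on verification failure is exactly the step that makes the rotation work:

```python
import secrets

class Account:
    """Toy model of the rotating account. Keys are random tokens
    standing in for real keypairs; no Update is ever sent."""
    def __init__(self):
        self.key = secrets.token_hex(8)

    def rekey(self):
        # New key with no cryptographic link to the old one.
        self.key = secrets.token_hex(8)

class RemoteInstance:
    def __init__(self, account):
        self.account = account
        self.cached_key = account.key  # learned from the actor object

    def verify(self, signed_with):
        if signed_with == self.cached_key:
            return True
        # Signature check failed: re-fetch the actor, learn the new
        # key, and retry. This is the "blind" part of the rotation.
        self.cached_key = self.account.key
        return signed_with == self.cached_key

account = Account()
remote = RemoteInstance(account)
account.rekey()                    # step 2: rotate after the deletions
assert remote.verify(account.key)  # steps 3-4: refetch, then verify
```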


If you still have questions, feel free to contact me on the fediverse.

Pleroma, LitePub, ActivityPub and JSON-LD

A lot of people make assumptions about my position on whether or not JSON-LD is actually good or not. The reality is that my view is more nuanced than that: there are great uses for JSON-LD, but it’s not appropriate in the scenario it is used in ActivityPub.

What is JSON-LD anyway?

JSON-LD stands for JSON Linked Data. Linked Data is a “Big Data” technique which involves creating large graphs of interlinked pieces of data, intended to help enrich data sets with more semantic context, as well as additional data linked by URI (hence why it’s called linked data). The Linked Data concept can be extremely powerful for data analysis when used in the appropriate context. One good example of where linked data is useful is in services that use it to help compare performance and value versus cost of US health insurance plans.

ActivityPub and JSON-LD

Another example where JSON-LD is ostensibly used is ActivityPub. ActivityPub inherits its JSON-LD dependency from ActivityStreams 2.0, a data format that enjoys wide use outside of the ActivityPub ecosystem: for example, Twitter, Instagram, Facebook and Tumblr all use variations of ActivityStreams 2.0 objects in various places inside their APIs.

These services find the JSON-LD concept useful because their advertising customers can leverage JSON-LD to optimize their advertising campaigns (at Facebook, the Open Graph concept frequently pitched to advertisers is built in part on top of JSON-LD).

But does JSON-LD provide any value in a social networking environment which does not have advertising? In my opinion, not really: it’s just an artifact of the “if you’re not the customer, you’re the product” nature of the proprietary social networking services. As previously stated, the primary advantage of JSON-LD and the linked data philosophy in general is data enrichment, and data enrichment is largely useful to two groups: advertisers and intelligence (public or private).

Since the federated social networking services don’t have advertising, that just leaves intelligence.

Private intelligence and social networking, how data enrichment can impact your credit score

There are various kinds of private intelligence firms out there which collect information about you, me, and everyone else. You’ve probably heard of some of them, and some of the products they sell: companies like Experian, InfoCheckUSA and Equifax sell various products like FICO credit scores and background reports which determine everything from whether or not you can rent or buy a car or house to whether or not you can get a job.

But did you know these companies crawl your use of the proprietary social networking services? There are companies like FriendlyScore which sell credit-related data based on how you utilize social networking services. Those “social” credit scores are directly enabled by technology such as JSON-LD and ActivityStreams 2.0.

Public intelligence and social networking, how data enrichment can get you killed

We’ve all heard about Predator drones and drone strikes in the news. In the past decade, drone strikes have been used to attack countless targets. But how do our public intelligence agencies determine who is a target? It’s very similar to how the private intelligence agencies determine whether you should own a house or have a job: they use big data methods to analyze all of the metadata they collected.

If you write a post on a social networking service and attach GPS data to it, they can use that information to determine a general pattern of when and where you are, and then feed it into a machine learning algorithm to determine when and where you will likely be in the future. They can also use this metadata analysis to prove certain assertions about your identity to a level of certainty which determines if you become a target, even if you’re not really the same person they are trying to find.

Conclusion: safety is more important than data enrichment

These techniques that are used both in the public and private sector are what the press tend to refer to as “Big Data” techniques. JSON-LD is a “Big Data” technology that can be leveraged in these ways. But at the same time, we can leverage some “Big Data” techniques in such a way that JSON-LD parsers will automatically do what we want them to do.

In my opinion, it is a critical obligation of federated social networking service developers to ensure that handling of data is done in the most secure way possible, built on proven fundamentals. I view the inclusion of JSON-LD in the ActivityPub and ActivityStreams 2.0 standards to be harmful toward that obligation.

Pleroma and JSON-LD

As you may know, there are two mainstream ActivityPub servers that are in wide use: Mastodon and Pleroma. Mastodon uses JSON-LD and Pleroma does not. But they are able to interoperate just fine despite this. This is largely because Pleroma provides JSON-LD attributes in the messages it generates without actively using them itself.

Handling ActivityPub in a world without JSON-LD

The origin of the Transmogrifier name

Pleroma does not process JSON-LD at all. Instead, it has a module called Transmogrifier that translates between real ActivityPub and our internal representation. The use of AP constructs in our internal representation is the origin of the statement that Pleroma uses ActivityPub internally, and to an extent it is a truthful statement: our internal representation and object graph are directly derived from an earlier ActivityPub draft. But it’s not quite the same, and there have been a few bugs where things were not translated correctly, resulting in leaks and other problems.

Besides the Transmogrifier, we have two functions which fetch new pieces into the graphs we build: Object.normalize() and Activity.normalize(). This could be considered to be a similar approach to JSON-LD except that it’s explicit instead of implicit. The explicit fetching of new graph pieces is a security feature: it allows us to validate that we actually trust what we’re fetching before we do it. This helps us to prevent various “fake direction” attacks which can be used for spoofing.
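The explicit approach can be sketched as follows. The function name and signature are illustrative rather than Pleroma’s actual code; the key point is the trust check that gates every fetch:

```python
def normalize(object_store, ref, fetch, trusted):
    """Explicitly resolve an object reference into a full object, in
    the spirit of the normalize functions described above. Unlike
    implicit JSON-LD expansion, a network fetch only happens after an
    explicit trust check on the reference."""
    if isinstance(ref, dict):
        ref = ref.get("id")
    if ref in object_store:       # already known locally, no fetch
        return object_store[ref]
    if not trusted(ref):          # security gate before any fetch
        raise ValueError(f"refusing to fetch untrusted reference: {ref}")
    obj = fetch(ref)
    object_store[ref] = obj
    return obj
```

Because the gate runs before the fetch, a spoofed reference to an untrusted origin never reaches the network layer at all.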

LitePub and JSON-LD

LitePub is a recent initiative that was started between Pleroma and a few other ActivityPub implementations to slim down the ActivityPub standard into something that is minimalist and secure. While LitePub itself does not require JSON-LD, LitePub implementations follow some JSON-LD like behaviors where it makes sense, and LitePub provides a @context which allows JSON-LD parsers to transparently parse LitePub messages.

Leveraging Linked Data for Object Capability Enforcement

The main principle LitePub is built on is leveraging the linked data paradigm to perform object capability enforcement. This can work either explicitly (as is done in Pleroma) or implicitly (as is done in Mastodon when parsing a LitePub activity).

We do this by treating every Object ID in LitePub as a capability URI. When processing messages that reference a capability URI, we check to make sure the capability URI is still valid by re-fetching the object. If fetching the object fails, then the capability URI is no longer valid. This prevents zombie activities.
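The check is simple: an object id doubles as a capability, and a failed re-fetch means the capability has been revoked. A hypothetical sketch, with fetch returning None on failure:

```python
def capability_still_valid(cap_uri, fetch):
    """Re-fetch an object id treated as a capability URI. A failed
    fetch (e.g. the object was deleted) means the capability is no
    longer valid. `fetch` returns None on failure."""
    return fetch(cap_uri) is not None

def process_announce(announce, fetch):
    """Drop an Announce whose referenced object no longer resolves,
    so a deleted post cannot be brought back to life by boosting."""
    if not capability_still_valid(announce["object"], fetch):
        return None  # zombie activity: referenced object is gone
    return fetch(announce["object"])
```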

A note on Zombie Activities

There are two primary ways of securing ActivityPub implementations with digital signatures: JSON Linked Data Signatures (LDSigs) and the construction built on HTTP Signatures that is leveraged in LitePub. These can be referred to as inline signatures and transient signatures, respectively.

The problem with inline signatures is that they are valid forever: LDSig signatures have no expiration and no revocation method. Because of this, if an Object is deleted, it can come back to life. The solution created by the LDSig advocates is to use Tombstone objects for all deletions, but that creates a potential metadata leak proving a post once existed, which harms plausible deniability.

The LitePub approach on the other hand is to treat all objects as capability URIs. This means when an object is deleted, future attempts to access the capability URI fail and thus the object cannot come back to life through boosting or other means.


Hopefully this clarifies my views on JSON-LD and its applications in the fediverse. Feel free to ask me questions if you have any.

Do not use or provide DH-AES or DH-BLOWFISH for SASL/IAL authentication

Atheme 7.2 dropped support for the DH-AES and DH-BLOWFISH mechanisms. This was for very good reason.

At the time that DH-BLOWFISH was created, IRC was a very different place: SSL was not ubiquitous, and it was thought that having some lightweight encryption on the authentication exchange might be useful without opening services to a DoS vector. An initial audit of DH-BLOWFISH found some problems, so a second mechanism, DH-AES, was created to get rid of some of them.

However, both of these mechanisms use a small key size for Diffie-Hellman key exchange (256 bits), as previously mentioned by grawity. After the freenode incident, where a user discovered they could DoS atheme by spamming it with DH-BLOWFISH requests, we decided to audit both mechanisms and determined that they should be removed from the distribution.

The reasons why were:

  1. Users had a strong misconception that the mechanisms provided better security than PLAIN over TLS (they don’t);
  2. Because the DH key exchange is unauthenticated, users may be MITM’d by the IRC daemon;
  3. The session key is half the length of the key exchange parameters, making the entire system weak: DH can only securely provide a session key with half the bitspace of the key exchange parameters. Put more plainly, if you use DH with 256-bit parameters, the session key is 128 bits, which is weaker than PLAIN over TLS;
  4. Correcting the key exchange to use adequately sized parameters would require rewriting every single implementation anyway.

If you want secure authentication, just use PLAIN over TLS, or use atheme’s experimental family of ECDSA mechanisms, namely ECDSA-NIST256P-CHALLENGE. Yes, it’s based on NIST P-256 (secp256r1), which is a NIST curve, but it’s acceptable for authentication in most cases, and most cryptography libraries implement this curve. While not perfect, it is still much better than the DH family of mechanisms.

Unfortunately, at least one atheme fork has resurrected these mechanisms. Hopefully they remove them, as they should be treated as if they were backdoored: the mistakes made in designing them are the same type of mistakes one would introduce if one wanted to backdoor a mechanism.

Update: Unfortunately Anope also implemented these broken mechanisms. Luckily it appears that X3 has not.