Normally, when you hear the phrase “trusted computing,” you think about schemes designed to create roots of trust for companies, rather than the end user. For example, Microsoft’s Palladium project during the Longhorn development cycle of Windows is a classically cited example of trusted computing used as a basis to enforce Digital Restrictions Management against the end user.
However, for companies and software maintainers, or really anybody who is processing sensitive data, maintaining a secure chain of trust is paramount, and that root of trust is always the hardware. In the past, this was not so difficult: we had very simple computers, usually with some sort of x86 CPU and a BIOS, which was designed to be just enough to get DOS up and running on a system. This combination resulted in something trivial to audit and for the most part everything was fine.
More advanced systems of the day, like the Macintosh and UNIX workstations such as those sold by Sun and IBM used implementations of IEEE-1275, also known as Open Firmware. Unlike the BIOS used in the PC, Open Firmware was written atop a small Forth interpreter, which allowed for a lot more flexibility in handling system boot. Intel, noting the features that were enabled by Open Firmware, ultimately decided to create their own competitor called the Extensible Firmware Interface, which was launched with the Itanium.
Intel’s EFI evolved into an architecture-neutral variant known as the Unified Extensible Firmware Interface, frequently referred to as UEFI. For the most part, UEFI won out over Open Firmware: the only vendor still supporting Open Firmware is IBM, and only as a legacy compatibility option for their POWER machines. Arguably, though, the demise of Open Firmware had more to do with the industry standardizing on x86 than with the technical quality of UEFI.
So these days the most common combination is an x86 machine with UEFI firmware. Although many UEFI firmwares out there are complex, they are not impossible to audit: most are built on top of TianoCore. Still, this situation is not ideal, and it is not even the largest problem with modern hardware.
Low-level hardware initialization
Most people, when asked how a computer boots, would say that UEFI is the first thing the computer runs, and that it then boots into the operating system by way of a boot loader. For the layperson, this is a reasonable mental model. But it isn’t what actually happens at all.
In reality, most machines have either a dedicated service processor or a special execution mode in which they begin execution. Regardless of whether it is a dedicated service processor (like the AMD PSP, the older Intel ME, various ARM SoCs, POWER, and so on) or a special execution mode (the newer Intel ME), system boot starts by executing code burned into a mask ROM, which is part of the CPU circuitry itself.
Generally the mask ROM code is designed to bring up just enough of the system to allow transfer of execution to a platform-provided payload. In other words, the mask ROM typically brings up the processor’s core complex and then jumps into platform-specific firmware in NOR flash, which in turn gets you into UEFI, Open Firmware, or whatever user-facing firmware your device runs.
Some mask ROMs initialize more, others less. Because they are immutable, they cannot be tampered with on a targeted basis. However, once the main core complex is up, the service processor (or its equivalent) sometimes sticks around and remains alive. In those situations, there is a possibility that it can be used as a backdoor, so the behavior of the service processor must be carefully considered when evaluating the trustworthiness of a system.
One can ask a few simple questions to evaluate the trustworthiness of a system design, assuming the worst-case scenario for any question where the answer is unknown. These questions are:
- How does the system boot? Does it begin executing code at a hardwired address or is there a service processor?
- If there is a service processor, what initialization does it perform? Are the mask ROM and intermediate firmware auditable? Have they already been audited by a trusted party?
- What components of the low-level init process are stored in NOR flash or similar? What components are immutable?
- What other functions does the service processor perform? Can they be disabled? Can the service processor be instructed to turn off?
The next point of contention, of course, is the system firmware itself. On most systems today, this is an implementation of UEFI, typically AMI’s Aptio or Insyde’s InsydeH2O. Both are derived from the open source TianoCore EDK II codebase.
In most cases, these firmwares are too complicated for an end user to audit. However, some machines support coreboot, which can be used to replace the proprietary UEFI with a system firmware of your choosing, including one built on TianoCore.
From a practical perspective, the main consideration at the firmware level is whether the trust store can be modified. UEFI mandates the inclusion of Microsoft’s signing key by default, but if you can uninstall Microsoft’s key and enroll your own, the implementation can earn some trustworthiness, assuming it is not backdoored. This should be considered a minimum requirement for extending any trust to the system firmware; ultimately, if you cannot audit the firmware, you should not extend much trust to it.
A good system design will attempt to isolate resources using IOMMUs, because external devices, such as those on the PCIe bus, should not be trusted with unrestricted access to system memory: they can potentially be backdoored.
It is sometimes possible to use virtualization technology to create barriers between PCIe devices and the main OS. Qubes OS for example uses the Xen hypervisor and dedicated VMs to isolate specific pieces of hardware and their drivers.
Additionally, appropriate use of IOMMUs improves system stability, since badly behaved hardware and drivers are far less able to crash the system.
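To make the idea concrete, here is a toy Python model of IOMMU-style mediation (the class and method names are invented for illustration; this is not a real kernel API): a device may only DMA into pages that have been explicitly mapped into its domain, and any other access faults instead of silently touching host memory.

```python
# Toy model of IOMMU-mediated DMA. Illustrative only: real IOMMUs work on
# page tables programmed by the OS or hypervisor, not Python sets.

class IOMMUDomain:
    def __init__(self):
        self.mappings = set()  # page numbers this device may access

    def map_page(self, page):
        """Grant the device DMA access to one page."""
        self.mappings.add(page)

    def dma_access(self, page):
        """Simulate a DMA transaction: allowed only if the page is mapped."""
        if page not in self.mappings:
            raise PermissionError(f"IOMMU fault: page {page:#x} not mapped")
        return True

# A NIC is granted access to its own ring-buffer page only.
nic = IOMMUDomain()
nic.map_page(0x1000)

assert nic.dma_access(0x1000)      # mapped: DMA proceeds
try:
    nic.dma_access(0x7c00)         # unmapped: blocked by the IOMMU
except PermissionError as fault:
    print(fault)
```

The point of the sketch is the asymmetry: the device never decides what it can reach; the host programs the mappings, so a backdoored peripheral is confined to the memory it was explicitly given.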
A reasonably secure system
Based on the discussion above, we can identify some properties a secure system would have. Not all of the systems evaluated later in this post will have all of these properties, but we nonetheless have a framework: the more of these properties a system has, the more trustworthy it is:
- The system should have a hardware initialization routine that is as simple as possible.
- The service processor, if any, should be restricted to hardware initialization and teardown, and should not perform any other function.
- The system firmware should be freely available and reproducible from source.
- The system firmware must allow the end user to control any signing keys enrolled into the trust store.
- The system should use IOMMUs to mediate I/O between the main CPU and external hardware devices like PCIe cards and so on.
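As a sketch, the five-point rubric above can be expressed as a simple checklist score (my own framing in Python; the property strings are paraphrased from the list, and the example systems are hypothetical):

```python
# The five properties from the framework above, as a checklist.
PROPERTIES = (
    "simple hardware initialization",
    "service processor limited to init/teardown (or absent)",
    "firmware freely available and reproducible from source",
    "user-controlled trust store",
    "IOMMU-mediated I/O",
)

def score(system: dict) -> int:
    """Count how many of the five properties a system satisfies."""
    return sum(1 for prop in PROPERTIES if system.get(prop, False))

# A hypothetical fully open system satisfies everything.
open_system = {prop: True for prop in PROPERTIES}
assert score(open_system) == 5

# A hypothetical locked-down laptop earns only a couple of points.
locked_laptop = {
    "user-controlled trust store": True,
    "IOMMU-mediated I/O": True,
}
assert score(locked_laptop) == 2
```

This is exactly the scoring applied to the real machines below: each unknown answer is assumed to be the worst case, so a property only counts when it is demonstrably present.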
How do systems stack up in the real world?
Using the framework above, let’s look at a few of the systems I own and see how trustworthy they actually are. The results may surprise you. These are all systems that anybody can purchase from reputable vendors, without having to do any sort of hardware modification themselves. Some examples are intentionally silly: while they are secure, obsolescence means you wouldn’t actually want to use them for getting work done today.
Compaq DeskPro 486/33m
The DeskPro is an Intel 80486DX system running at 33MHz. It has 16MB of RAM, and I haven’t gotten around to unpacking it yet. But it’s reasonably secure, even when turned on.
As described in the 80486 programmer’s manual, the 80486 is hardwired to start execution from 0xFFFFFFF0. As long as a ROM is connected to the chip in such a way that the 0xFFFFFFF0 address can be read, the system will boot whatever is there. This jumps into a BIOS, and from there into the operating system. We can audit the system BIOS if desired, or, with an EPROM programmer, replace it entirely with our own implementation, since it’s socketed on the system board.
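A quick bit of arithmetic, sketched in Python, shows where the reset vector lands inside a ROM image mapped flush against the 4 GiB boundary (illustrative only, not vendor tooling):

```python
# The 486 begins fetching at 0xFFFFFFF0, 16 bytes below the 4 GiB boundary.
# If a ROM of rom_size bytes is mapped flush against that boundary, the
# reset vector always sits 16 bytes from the end of the chip.

RESET_VECTOR = 0xFFFF_FFF0
TOP_OF_MEMORY = 0x1_0000_0000  # 4 GiB

def reset_offset(rom_size: int) -> int:
    """Offset of the reset vector within the ROM image."""
    rom_base = TOP_OF_MEMORY - rom_size
    return RESET_VECTOR - rom_base

assert reset_offset(64 * 1024) == 0xFFF0    # 64 KiB BIOS ROM
assert reset_offset(128 * 1024) == 0x1FFF0  # 128 KiB part
```

This is why BIOS images traditionally end with a jump instruction at that fixed offset: whatever 16-byte stub lives there is the first code the CPU ever executes.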
There is no service processor, and booting from any device other than the hard disk can be restricted with a password. Accordingly, any practical attack against this machine would require disassembly of it, for example, to replace the hard disk.
However, this machine has no IOMMU, as it predates IOMMUs, and it is too slow to use Xen to provide equivalent functionality. Overall it scores 3 out of 5 points on the framework above: simple initialization routine, no service processor, and no trust store to worry about.
Where you can get one: eBay, local PC recycler, that sort of thing.
Dell Inspiron 5515 (AMD Ryzen 5700U)
This machine is my new workhorse for x86 tasks, since my previous x86 machine had a significant failure of the system board. Whenever I am doing x86-specific Alpine development, it is generally on this machine. But how does it stack up?
Unfortunately, it stacks up rather badly. Like modern Intel machines, system initialization is controlled by a service processor, the AMD Platform Security Processor (PSP). Worse yet, unlike Intel’s ME firmware, the PSP firmware is distributed as a single signed image and cannot have unwanted modules removed from it.
The system uses InsydeH2O for its UEFI implementation, which is closed source. It does allow Microsoft’s signing keys to be removed from the trust store. And while IOMMU functionality is available, it is available to virtualized guests only.
So, overall, it scores only 1 out of 5 possible points for trustworthiness. It should not surprise you to learn that I don’t do much sensitive computing on this device, instead using it for compiling only.
Where you can get one: basically any electronics store you want.
IBM/Lenovo ThinkPad W500
This machine used to be my primary computer quite a while ago, and ThinkPads are known for being able to take quite a beating. It was also the first computer I tried coreboot on. These days, you can use Libreboot to install a deblobbed version of coreboot on the W500. And since it is based on the Core 2 Quad CPU, it does not have the Intel Management Engine service processor.
But, of course, the Core 2 Quad is too slow for day-to-day work on an operating system where you have to compile lots of things. However, if you don’t need to compile much, it might be a reasonably priced option.
When you use this machine with a coreboot distribution like Libreboot, it scores 4 out of 5 on the trustworthiness scale, the highest of all the x86 devices evaluated. With the stock Lenovo BIOS, it scores 3 out of 5 instead, as the main differentiator is the availability of a reproducible firmware image: either way there is no Intel ME to worry about, and the UEFI BIOS allows removal of all preloaded signing keys.
Moreover, on an old ThinkPad, Libreboot introduces modern features that are not available in the Lenovo BIOS: for example, you can build a firmware image that fully supports the latest UEFI specification by using the TianoCore payload.
Where you can get it: eBay, PC recyclers. The maintainer of Libreboot sells refurbished ThinkPads on her website with Libreboot pre-installed. Although her pricing is higher than a PC recycler, you are paying not only for a refurbished ThinkPad, but also to support the Libreboot project, hence the pricing premium.
Raptor Computing Systems Blackbird (POWER9 Sforza)
A while ago, somebody sent me a Blackbird system they had built after growing tired of the #talos community. The vendor promises that the system is built entirely on user-controlled firmware. How does it measure up?
Firmware wise, it’s true: you can compile every piece of firmware yourself, and instructions are provided to do so. However, the OpenPOWER firmware initialization process is quite complicated. This is offset by the fact that you have all of the source code, of course.
There is a service processor, specifically the BMC. It runs the OpenBMC firmware, and is potentially a network-connected element. However, you can compile the firmware that runs on it yourself.
Overall, I give the Blackbird 5 out of 5 points; however, it is expensive to buy directly from Raptor. A complete system usually runs in the neighborhood of $3000-4000. There are also still a fair number of bugs in PPC64LE Linux.
Where you can get it: eBay sometimes, the Raptor Computing Systems website.
Apple MacBook Air M1
Last year, Apple announced machines based on their own ARM CPU design, the Apple M1. Why am I bringing this up, given that I am a free software developer and Apple is usually hostile to software freedom? Great question: the answer is that Apple’s M1 devices are designed in such a way that they have the potential to be trustworthy, performant, and, unlike the Blackbird, reasonably affordable. However, this is still only potential: the Asahi Linux project, while making fast progress, has not yet arrived at production-quality support for this hardware. So how does it measure up?
Looking at the Asahi docs for system boot, there are three stages: SecureROM and the two iBoot stages. The job of SecureROM is to initialize and load just enough to get the first iBoot stage running, while the first iBoot stage’s only job is to get the second iBoot stage running. The second iBoot stage then starts whatever kernel is passed to it, as long as it matches the enrolled hash for secure boot, which is user-controllable. This means the second iBoot stage can chainload into GRUB or similar to boot Linux. Notably, there is no PKI involved in the secure boot process; it is strictly based on hashes.
This means that the system initialization is as simple as possible, leaving the majority of work to the second stage bootloader. There are no keys to manage, which means no trust store. The end user may trust whatever kernel hash she wishes.
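The hash-based policy can be sketched in a few lines of Python (a toy illustration of the idea only; the real boot chain uses Apple’s own image formats and policy storage, not anything resembling this code):

```python
import hashlib

# Hash-based (PKI-free) boot verification, sketched: the earlier boot
# stage stores a hash of the payload the user enrolled, and refuses to
# start anything whose hash does not match. No signing keys, no CAs.

def enroll(kernel: bytes) -> str:
    """The user enrolls the hash of the kernel she trusts."""
    return hashlib.sha384(kernel).hexdigest()

def boot(kernel: bytes, enrolled_hash: str) -> bool:
    """A stage boots the payload only if its hash matches the policy."""
    return hashlib.sha384(kernel).hexdigest() == enrolled_hash

kernel = b"contents of the kernel image"
policy = enroll(kernel)

assert boot(kernel, policy)                 # enrolled kernel boots
assert not boot(b"tampered image", policy)  # anything else is refused
```

The appeal of this design is that trust reduces to a single user-controlled value: there is no vendor key hierarchy to audit, only the question of whether the enrolled hash matches the kernel you intended to run.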
But what about the Secure Enclave? Does it act as a service processor? No, it doesn’t: it remains offline until it is explicitly started by MacOS. And on the M1, everything is gated behind an IOMMU.
Therefore, the M1 actually gets 4 out of 5, making it roughly as trustworthy as the Libreboot ThinkPad and slightly less trustworthy than the Blackbird. But unlike those devices, the performance is good and the cost is reasonable. However, it’s not quite ready for Linux users yet. That leaves the Libreboot machines as providing the best balance between usability and trustworthiness today, even though their performance is quite slow compared to more modern computers. If you’re excited by these developments, you should follow the Asahi Linux project and perhaps donate to marcan’s Patreon.
Where to get it: basically any electronics store
SolidRun Honeycomb (NXP LX2160A, 16x Cortex-A72)
My aarch64 workhorse at the moment is the SolidRun Honeycomb. I picked one up last year and got Alpine running on it. Like the Blackbird, all firmware that can be flashed to the board is open source. SolidRun provides builds of both u-boot and TianoCore to use on the board. In general, they do a good job of enabling you to build your own firmware: the process is reasonably well documented, with the only binary blob being the DDR PHY training data.
However, mainline Linux support is only starting to mature: networking support only landed in full with Linux 5.14, for example. There are also bugs in the PCIe controller. And at $750 for the motherboard and CPU module, it is expensive to get started, though not nearly as expensive as something like the Blackbird.
If you’re willing to put up with the PCIe bugs, however, it is a good starting point for a fully open system. In that regard, Honeycomb does get 5 out of 5 points, just like the Blackbird system.
Where to get it: SolidRun’s website.
While we have largely been in the dark for modern user-trustworthy computers, things are finally starting to look up. While Apple is a problematic company for many reasons, it is at least producing computers which, once Linux is fully functional on them, are basically trustworthy, and at a much lower price point than other platforms like the Blackbird. Similarly, Libreboot seems to be back up and running, and will hopefully soon target more modern hardware.