Your computer is tormented by a wicked god

mostlysignssomeportents:

Computer security is really, really important. It was important decades ago, when computers were merely how we ran our financial system, aviation, and the power grid. Today, as more and more of us have our bodies inside of computers (cars, houses, etc) and computers in our bodies (implants), computer security is urgent.

Decades ago, security practitioners began a long argument about how best to address that looming urgency. The most vexing aspect of this argument was a modern, cybernetic variant on a debate that was as old as the ancient philosophers — a debate that Rene Descartes immortalized in the 17th Century.

You’ve doubtless heard the phrase, “I think, therefore I am” (Cogito, ergo sum). It comes from Descartes’ 1637 Discourse on the Method, which asks the question, “How can we know things?” Or, more expansively, “Given that all my reasoning begins with things I encounter through my senses, and given that my senses are sometimes wrong, how can I know anything?”

Descartes’ answer: “I know God is benevolent, because when I conceive of God, I conceive of benevolence, and God gave me my conceptions. A benevolent God wouldn’t lead me astray. Thus, the things I learn through my senses and understand through my reason are right, because a benevolent God wouldn’t have it any other way.”

I’ve hated this answer since my freshman philosophy class, and even though the TA rejected my paper explaining why it was bullshit, I still think it’s bullshit. I mean, I’m a science fiction writer, so I can handily conceive of a wicked God whose evil plan starts with making you think He is benevolent and then systematically misleading you in your senses and reasoning, tormenting you for His own sadistic pleasure.

The debate about trust and certainty has been at the center of computer security since its inception. When Ken “Unix” Thompson accepted the 1983 Turing Award, he gave an acceptance lecture, published in 1984 as “Reflections on Trusting Trust”:

https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

It’s a bombshell. In it, Thompson proposes an evil compiler, one that inserts a back-door into any operating system it compiles, and that inserts a back-door-generator into any compiler it is asked to compile. Since Thompson had created the original Unix compiler — which was used to compile every other compiler and thus every other flavor of Unix — this was a pretty wild thought experiment, especially since he didn’t outright deny having done it.
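
To make the trick concrete, here’s a toy sketch in Python (my own illustration for intuition; Thompson’s actual attack targeted the C compiler and the Unix login program). The “compiler” below just copies its input, but it quietly rewrites two kinds of programs, and the second rewrite is what lets the attack survive being recompiled from perfectly clean source:

```python
# A toy model of Thompson's evil compiler: a sketch for intuition, not his
# actual code. "Compilation" here is just copying text, but the compiler
# quietly rewrites two kinds of input.

BACKDOOR = "\nALLOW_LOGIN_IF(password == 'kt-backdoor')  # never appears in the source"
SELF_PROPAGATE = "\n# ...re-insert this whole rewriting logic into the new compiler..."

def evil_compile(source: str) -> str:
    output = source                      # pretend this is real code generation
    if "check_password" in source:       # compiling the login program?
        output += BACKDOOR               # add a back door the source never shows
    if "def compile(" in source:         # compiling a (clean) compiler?
        output += SELF_PROPAGATE         # make the next compiler evil, too
    return output

print(evil_compile("def check_password(user, pw): ..."))
```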

Trusting trust is still the most important issue in information security. Sure, you can run a virus-checker, but that virus-checker has to ask your operating system to tell it what files are on the drive, what data is in memory, and what processes are being executed. What if the OS is compromised?

Okay, so maybe you are sure the OS isn’t compromised, but how does the OS know if it’s even running on the “bare metal” of your computer? Maybe it is running inside a virtual machine, and the actual OS on the computer is a malicious program that sits between your OS and the chips and circuits, distorting the data it sends and receives. This is called a “rootkit,” and it’s a deadass nightmare that actually exists in the actual world.

A computer with a rootkit is a brain in a jar, a human battery in the Matrix. You, the computer user, can ask the operating system questions about its operating environment, and it will answer as faithfully and truthfully as it can, and those answers will all be wrong, because the actual computer is being controlled by the rootkit, which only tells your operating system what it wants it to know.
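
Here’s a minimal sketch of that lying-intermediary structure (my own toy model, not any real rootkit’s code): every layer above the rootkit answers as honestly as it can, and every answer is still wrong.

```python
# A toy model of the lying intermediary. The disk really contains three
# files; the rootkit censors itself out of every answer that passes
# through it.

REAL_FILES = ["photos.zip", "taxes.pdf", "rootkit.sys"]

def hardware_list_files():
    return list(REAL_FILES)

def rootkit_list_files():
    # Sits between the OS and the disk, hiding its own components.
    return [f for f in hardware_list_files() if f != "rootkit.sys"]

def os_list_files():
    # The OS answers "faithfully", but it is asking the rootkit, not the disk.
    return rootkit_list_files()

def virus_scanner():
    # The scanner trusts the OS, the OS trusts the rootkit: nothing to find.
    return [f for f in os_list_files() if f.endswith(".sys")]

print(os_list_files())   # ['photos.zip', 'taxes.pdf']
print(virus_scanner())   # [] ... the infection never shows up
```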

20 years ago, some clever Microsoft engineers proposed a solution to this conundrum: “Trusted Computing.” They proposed adding a second computer to your system, a sealed, secure chip with very little microcode, so little that it could all be audited in detail and purged of bugs. The chip itself would be securely affixed to your motherboard, such that any attempt to remove it and replace it with a compromised chip would be immediately obvious to you (for example, it might encapsulate some acid in a layer of epoxy that would rupture if you tried to remove the chip).
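
One way such a sealed chip can vouch for the rest of the machine is what’s now called measured boot: the chip keeps a running hash of every stage of the boot process, so any tampering shows up as a different final measurement. Here’s a simplified sketch of the idea (my own illustration, not Palladium’s design or any real TPM’s API):

```python
# A simplified sketch of measured boot: each boot stage is hashed into a
# running measurement kept by the sealed chip, so a tampered stage changes
# the final value.

import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    # The chip only ever appends-and-hashes; nothing can "un-measure" a stage.
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

boot_chain = [b"firmware v1.0", b"bootloader v2.3", b"kernel 5.19"]

pcr = b"\x00" * 32
for stage in boot_chain:
    pcr = extend(pcr, stage)

print("measurement:", pcr.hex())
# Swap in a tampered bootloader and the measurement no longer matches the
# value the owner (or a remote verifier) expects.
```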

They called this “Next Generation Secure Computing Base,” or “Palladium” for short. They came to the Electronic Frontier Foundation offices to present it. It was a memorable day:

https://pluralistic.net/2020/12/05/trusting-trust/#thompsons-devil

My then-colleague Seth Schoen — EFF’s staff technologist, the most technically sophisticated person to have been briefed on the technology without signing an NDA — made several pointed critiques of Palladium:

https://web.archive.org/web/20020802145913/http://vitanuova.loyalty.org/2002-07-05.html

And suggested a hypothetical way to make sure it only served computer users, and not corporations or governments who wanted to control them:

https://www.linuxjournal.com/article/7055

But his most salient concern was this: “what if malware gets into the trusted computing chip?”

The point of trusted computing was to create a nub of certainty, a benevolent God whose answers to your questions could always be trusted. The output from a trusted computing element would be ground truth, axiomatic, trusted without question. By having a reliable external observer of your computer and its processes, you could always tell whether you were in the Matrix or in the world. It was a red pill for your computer.

What if it was turned? What if some villain convinced it to switch sides, by subverting its code, or by subtly altering it at the manufacturer?

That is, what if Descartes’ God was a sadist who wanted to torment him?

This was a nightmare scenario in 2002, one that the trusted computing advocates never adequately grappled with. In the years since, it’s only grown more salient, as trusted computing variations have spread to many kinds of computer.

The most common version is UEFI (“Unified Extensible Firmware Interface”), a separate operating system, often running on its own chip (though sometimes running in a notionally “secure” region of your computer’s main processor), that is charged with observing and securing your computer’s boot process.

UEFI poses lots of dangers to users; it can be (and is) used by manufacturers to block third-party operating systems, which allows them to lock you into using their own products, including their app stores, letting them restrict your choices and pick your pocket.

But in exchange, UEFI is said to deliver a far more important benefit: a provably benevolent God, one who will never lie to your operating system about whether it is in the Matrix or in the real world, providing the foundational ground truth needed to find and block malicious software.
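
That promise rests on a signature check at boot: the firmware only hands control to a bootloader it can cryptographically verify. Here’s a toy sketch of that check (a simplification that uses a symmetric HMAC; real UEFI Secure Boot uses asymmetric signatures and signature databases):

```python
# A toy sketch of the Secure Boot check. The firmware refuses to hand
# control to a bootloader whose signature doesn't verify against a key it
# trusts. PLATFORM_KEY is a hypothetical stand-in for that trust anchor.

import hashlib
import hmac

PLATFORM_KEY = b"key-baked-into-the-firmware"

def sign(blob: bytes) -> bytes:
    return hmac.new(PLATFORM_KEY, blob, hashlib.sha256).digest()

def firmware_boot(bootloader: bytes, signature: bytes):
    if not hmac.compare_digest(sign(bootloader), signature):
        raise RuntimeError("refusing to boot: unsigned or tampered bootloader")
    print("booting:", bootloader.decode())

good = b"vendor bootloader v2.3"
firmware_boot(good, sign(good))               # boots normally

try:
    firmware_boot(b"evil bootloader", sign(good))
except RuntimeError as err:
    print(err)                                # rejected

# The catch: if the firmware itself is compromised (a bootkit), this check is
# performed by the liar, and it will happily vouch for anything.
```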

So it’s a big deal that Kaspersky has detected a UEFI-infecting rootkit (which they’ve dubbed a “bootkit”) called Cosmicstrand, which can reinstall itself after you reformat your drive and reinstall your OS:

https://securelist.com/cosmicstrand-uefi-firmware-rootkit/106973/

Cosmicstrand does some really clever, technical things to compromise your UEFI, which then allows it to act with near-total impunity and undetectability. Indeed, Kaspersky warns that there are probably lots of these bootkits floating around.

If you want a good lay-oriented breakdown of how Cosmicstrand installs a wicked God in your computer, check out Dan Goodin’s excellent Ars Technica writeup:

https://arstechnica.com/information-technology/2022/07/researchers-unpack-unkillable-uefi-rootkit-that-survives-os-reinstalls/

Cosmicstrand dates back at least to 2016, a year after we learned about the NSA’s BIOS attacks, thanks to the Snowden docs:

https://www.wired.com/2015/03/researchers-uncover-way-hack-bios-undermine-secure-operating-systems/

But despite its long tenure, Cosmicstrand was only just discovered. That’s because of the fundamental flaw inherent in designing a computer that its owners can’t fully inspect or alter: if you design a component that is supposed to be immune from owner override, then anyone who compromises that component can’t be detected or countered by the computer’s owner.

This is the core of a two-decade-old debate among security people, and it’s one that the “benevolent God” faction has consistently had the upper hand in. They’re the “curated computing” advocates who insist that preventing you from choosing an alternative app store or side-loading a program is for your own good — because if it’s possible for you to override the manufacturer’s wishes, then malicious software may impersonate you to do so, or you might be tricked into doing so.

This benevolent dictatorship model only works so long as the dictator is both perfectly benevolent and perfectly competent. We know the dictators aren’t always benevolent. Apple won’t invade your privacy to sell you things, but they’ll take away every Chinese user’s privacy to retain their ability to manufacture devices in China:

https://www.nytimes.com/2021/05/17/technology/apple-china-censorship-data.html

But even if you trust a dictator’s benevolence, you can’t trust in their perfection. Everyone makes mistakes. Benevolent dictator computing works well, but fails badly. Designing a computer that intentionally can’t be fully controlled by its owner is a nightmare, because that is a computer that, once compromised, can attack its owner with impunity.

Image:
Cryteria (modified)
https://commons.wikimedia.org/wiki/File:HAL9000.svg

CC BY 3.0:
https://creativecommons.org/licenses/by/3.0/deed.en


[Image ID: A remix of Benediction of God the Father by Luca Cambiaso, c. 1565, which depicts a bearded god holding the Earth under one arm. In the remix, God’s eyes have been replaced by the glaring red eyes of HAL9000 from 2001: A Space Odyssey. The Earth has been overlaid with a Matrix movie-style ‘code waterfall.’]