UEFI hacking malware
Security researchers are alarmed: the already-notorious Trickbot malware has been spotted probing infected computers to find out which version of UEFI they’re running. This is read as evidence that Trickbot has figured out how to pull off a really scary feat.

To understand why, you have to understand UEFI: a fascinating, deep, philosophical change to our view of computers, trust, and the knowability of the universe. It’s a tale of hard choices, paternalism, and the race to secure the digital realm as it merges with the physical.
Computers were once standalone: a central processing unit that might be augmented by some co-processors for specialized processes, like a graphics card or even a math co-processor.
These co-pros were subordinate to the CPU, though. You’d turn on the computer and it would read a very small set of hardcoded instructions telling it how to access a floppy disk or other storage medium and load the rest of the boot sequence: the stuff needed to bring up the system.
The hardwired instructions were in a ROM that had one job: wake up and feed some instructions to the “computer” telling it what to do, then go back to sleep. But there’s a philosophical conundrum here.
Because the world of computing is adversarial and networked computing is doubly so: there are people who want your computer to do things that are antithetical to your interests, like steal your data or spy on you or encrypt all your files and demand ransom.
To stop this, you need to be able to examine the programs running on your computer and terminate the malicious ones. And therein lies the rub: when you instruct your computer to examine its own workings, how do you know if you can trust it?
In 1983, Ken Thompson (co-creator of C, Unix, etc) was awarded a Turing Award (“computer science’s Nobel Prize”). He gave a fucking bombshell of an acceptance speech, called “Reflections on Trusting Trust.”
https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
Thompson revealed that he had created a backdoor for himself that didn’t just live in Unix, but in the C compiler that people used to build new Unix systems.
Here’s what that means: when you write a program, you produce “high-level code” with instructions like ‘printf("Hello, World!");’. Once your program is done, you turn it into machine code, a series of much simpler, lower-level instructions that your CPU understands (“mov dx, msg” and the like).
Most programmers can’t read this machine code, and even for those who can, it’s a hard slog. In general, we write our code, compile it and run it, but we don’t examine the machine code that comes out. With nontrivial programs, reading the machine code is very, very hard.
Compilers are treated as intrinsically trustworthy. Give ‘em some source, they spit out a binary, you run the binary. Sometimes there are compiler bugs, sure, and compiler improvements can be a big deal. But compilers are infrastructure: inscrutable and forgotten.
Here’s what Thompson did: he hid a program in his compiler that would check to see whether you were compiling an operating system or a compiler. If you were compiling an OS, it hid a secret login for him inside of it.
If you were compiling a compiler, it hid the program that looked for compilers or operating systems inside of it.
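If that’s hard to picture, here’s a toy sketch of the trick in Python (Thompson’s real attack was written in C and operated on actual machine code; the string-matching “compiler” below is purely illustrative):

```python
import inspect

SECRET_LOGIN = 'if password == "thompson": grant_access()  # hidden backdoor'

def evil_compile(source: str) -> str:
    """Toy stand-in for a compiler: 'compiling' just passes the text through."""
    output = source
    if "check_password" in source:
        # Target 1: an OS login routine -- hide a secret login in the output.
        output += "\n" + SECRET_LOGIN
    if "def compile" in source:
        # Target 2: a compiler -- copy this whole subversion routine into it,
        # so the backdoor re-appears even if the compiler's source is clean.
        output += "\n# hidden payload:\n" + inspect.getsource(evil_compile)
    return output

# Even a freshly written, perfectly clean compiler gets infected the moment
# you build it with the compromised one:
print(evil_compile("def compile(source): return source"))
```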
Think about what this means: every OS you compiled had an intentional security defect that the OS itself couldn’t detect.
If you suspected that your compiler was up to no good and wrote your own compiler, it would be compromised as soon as you compiled it. What Thompson did was ask us to contemplate what we meant when we “trusted” something.
It was a move straight out of René Descartes, the reasoning that leads up to “I think therefore I am.” Descartes’ “Discourse on the Method” asks how we can know things about the universe.
He points out that sometimes he thinks he senses something but is wrong - he dreams, he hallucinates, he misapprehends.
If all our reasoning depends on the impressions we get from our senses, and if our senses are sometimes faulty, how can we reason at all?
Descartes wants a point of certainty, one thing he *knows* to be absolutely true. He makes the case that if you can be certain of one thing, you can anchor everything else to this point and build up a massive edifice of trustable knowledge that all hangs off of this anchor.
Thompson is basically saying, “You thought you had descartesed your way into a trustable computing universe because of the axiom that I would never poison your lowest-level, most fundamental tools.
“*Wrong*.
“Bwahahahaha.”
(But, you know, in a nice way: an object lesson to serve as a wake-up call before computers fully merged with the physical world to form a global, species-wide digital nervous system whose untrustworthy low-level parts were foolishly, implicitly trusted).
But processors were expensive and the number of computers in the world was exploding. PCs running consumer operating systems like Windows and Mac OS (and more exotic ones like GNU/Linux and various Unices) proliferated, and they all shared this flawed security model.
They all relied on the operating system to be a faithful reporter of the computer’s internals, and operated on the assumption that they could use programs supervised by the OS to detect and terminate malicious programs.
But starting in 1999, Ken Thompson’s revenge was visited upon the computing world. Greg Hoglund released Ntrootkit, a proof-of-concept malware that attacked Windows itself, so that the operating system would lie to antivirus programs about what it was doing and seeing.
In Descartes-speak, your computer could no longer trust its senses, so it could no longer reason. The nub of trust, the piton driven into the mountain face, was made insecure and the whole thing collapsed. Security researchers at big companies like Microsoft took this to heart.
In 2002, Peter Biddle and his team from Microsoft came to EFF to show us a new model for computing: “Trusted Computing” (codenamed “Palladium”).
Palladium proposed to give computers back their nub of Descartesian certainty. It would use a co-processor, but unlike a graphics card or a math co-pro, it would run before the CPU woke up and did its thing.
And unlike a ROM, it wouldn’t just load up the boot sequence and go back to sleep.
This chip - today called a “Secure Enclave” or a “Trusted Platform Module” (etc) - would have real computing power, and it would remain available to the CPU at all times.
Inside the chip was a bunch of cool cryptographic stuff that provided the nub of certainty. At the start of the boot, the TPM would pull the first stages of the boot-code off of the drive, along with a cryptographic signature.
A quick crypto aside:
Crypto is code that mixes a key (a secret known to the user) with text to produce a scrambled text (a “ciphertext”) that can only be descrambled by the key.
Dual-key crypto has two keys. What one scrambles, the other descrambles (and vice-versa).
With dual-key crypto, you keep one key secret (the “private key”) and you publish the other one (the “public key”). If you scramble something with a private key, then anyone can descramble it with your public key and know it came from you.
If you scramble it *twice*, first with your private key and then with your friend’s public key, then they can tell it came from you (because only your private key’s ciphertexts can be descrambled with your public key).
And *you* can be certain that only they can read it (because only their private key can descramble messages that were scrambled with their public key).
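If you want to see that dance in running code, here’s a minimal sketch using RSA keys from the Python “cryptography” package (the names, message and key sizes are purely illustrative, and the real math is fancier than “scrambling,” but the trust relationships are the ones described above):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

you = rsa.generate_private_key(public_exponent=65537, key_size=2048)
friend = rsa.generate_private_key(public_exponent=65537, key_size=2048)

message = b"meet me at midnight"

# Scramble #1: sign with *your* private key (proves the message came from you).
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = you.sign(message, pss, hashes.SHA256())

# Scramble #2: encrypt with your *friend's* public key (only they can read it).
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = friend.public_key().encrypt(message, oaep)

# Your friend descrambles with their private key...
plaintext = friend.decrypt(ciphertext, oaep)
# ...and checks your public key; this raises InvalidSignature if it wasn't you.
you.public_key().verify(signature, plaintext, pss, hashes.SHA256())
```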
Code-signing uses dual-key crypto to validate who published some code.
Microsoft can make a shorter version of its code (like a fingerprint) and then scramble it with its private key. The OS that came with your computer has a copy of MSFT’s public key. When you get an OS update, you can descramble the fingerprint with that built-in key.
If it matches the update, then you know that Microsoft signed it and it hasn’t been tampered with on its way to you. If you trust Microsoft, you can run the update.
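Here’s roughly what that check looks like, sketched with Ed25519 keys from the Python “cryptography” package. The vendor and the update are made up, and real-world signing formats (like Microsoft’s Authenticode) are far more elaborate - the library also handles the fingerprinting internally - but the trust check has the same shape:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# At the vendor: sign the update with the private key...
vendor_private = ed25519.Ed25519PrivateKey.generate()
update = b"OS update v42: assorted security fixes"
signature = vendor_private.sign(update)

# ...while your computer ships with only the matching *public* key baked in.
builtin_public = vendor_private.public_key()

# At update time: check the signature before installing anything.
try:
    builtin_public.verify(signature, update)  # raises if tampered with
    print("Signature checks out: install the update")
except InvalidSignature:
    print("Unsigned or tampered update: refuse to install")
```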
But… what if a virus replaces Microsoft’s public keys with its own?
That’s where Palladium’s TPM comes in. It’s got the keys hardcoded into it. Programs running on the CPU can only ask the TPM to do a few very limited things, like signing some text or checking the signature on some text.
It’s a kind of god-chip, running below the most privileged level of user-accessible operations. By design, you - the owner of the computer - can demand things of it that it is technically capable of doing, and it can refuse you, and you can’t override it.
That way, programs running even in the most privileged mode can’t compromise it.
Back to our boot sequence: the TPM fetches some startup code from the disk along with a signature, and checks to see whether the OS has been signed by its manufacturer.
If not, it halts and shows you a scary error message. Game over, Ken Thompson!
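As a sketch, the logic looks something like this - with a make-believe TPM class standing in for the real thing (actual TPMs speak a binary command protocol and verify whole certificate chains, not a single baked-in key):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

class ToyTPM:
    def __init__(self, vendor_public_key):
        # Baked in at manufacture time; there is deliberately no method
        # here for swapping it out or reading it back.
        self._vendor_key = vendor_public_key

    def verify_boot_stage(self, code: bytes, signature: bytes) -> bool:
        try:
            self._vendor_key.verify(signature, code)
            return True
        except InvalidSignature:
            return False

def boot(tpm: ToyTPM, bootloader: bytes, signature: bytes) -> None:
    if not tpm.verify_boot_stage(bootloader, signature):
        raise SystemExit("Unsigned or tampered bootloader: refusing to boot")
    print("Signature OK, handing control to the bootloader")

# The happy path: the vendor signed the bootloader, so the machine boots.
vendor = ed25519.Ed25519PrivateKey.generate()
loader = b"stage-1 bootloader"
boot(ToyTPM(vendor.public_key()), loader, vendor.sign(loader))
```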
It is a very cool idea, but it’s also very scary, because the chip doesn’t take orders from Descartes’ omnibenevolent God.
It takes orders from Microsoft, a rapacious monopolist with a history of complicity with human rights abuses. Right from that very first meeting the brilliant EFF technologist Seth Schoen spotted this (and made the Descartes comparison):
https://web.archive.org/web/20021004125515/http://vitanuova.loyalty.org/2002-07-05.html
Seth identified a way of having your cake and eating it too: he proposed a hypothetical thing called an “owner override” - a physical switch that, when depressed, could be used to change which public keys lived in the chip.
This would allow owners of computers to decide who they trusted and would defend them against malware. But what it *wouldn’t* do is defend tech companies’ shareholders against the owner of the computer - it wouldn’t facilitate DRM.
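In code, Seth’s hypothetical owner override would be a tiny addition to the toy TPM sketched above: the trusted key can be replaced, but only by someone physically pressing a switch on the machine, something no remote program can fake (again, purely illustrative):

```python
class OwnerOverrideTPM(ToyTPM):
    def set_trusted_key(self, new_public_key, physical_switch_pressed: bool):
        # Software alone can never swap the key, so malware is out of luck,
        # but a human with a screwdriver (the owner) is not.
        if not physical_switch_pressed:
            raise PermissionError("owner override requires physical presence")
        self._vendor_key = new_public_key
```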
“Owner override” is a litmus test: are you Descartes’ God, or Thompson’s Satan?
Do you want computers to allow their owners to know the truth? Or do you want computers to bluepill their owners, lock them in a matrix where you get to decide what is true?
A month later, I published a multi-award-winning sf story called “0wnz0red” in Salon that tried to dramatize the stakes here.
https://www.salon.com/2002/08/28/0wnz0red/
Despite Seth’s technical clarity and my attempts at dramatization, owner override did not get incorporated into trusted computing architectures.
Trusted computing took years to become commonplace in PCs. In the interim, rootkits proliferated. Three years after the Palladium paper, Sony-BMG deliberately turned 6m audio CDs into rootkit vectors that would silently alter your OS when you played them from a CD drive.
The Sony rootkit broke your OS so that any filename starting with $SYS$ didn’t show up in file listings, and $SYS$ programs wouldn’t show up in the process monitor. Accompanying the rootkit was a startup program (its name starting with $SYS$) that broke CD ripping.
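The real rootkit did its hiding by hooking Windows kernel calls; this little Python sketch only shows the shape of the cloak - filter anything starting with $SYS$ out of every directory listing the rest of the system ever sees:

```python
import os

_real_listdir = os.listdir

def cloaked_listdir(path="."):
    # Hand back the true listing, minus anything the rootkit wants hidden.
    return [n for n in _real_listdir(path) if not n.startswith("$SYS$")]

# Every program that politely asks the OS now gets the lie.
os.listdir = cloaked_listdir
```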
Sony infected hundreds of thousands of US gov and mil networks. Malware authors - naturally enough - added $SYS$ to the files corresponding with their viruses, so that antivirus software (which depends on the OS for information about files and processes) couldn’t detect it.
It was an incredibly reckless, depraved act, and it wasn’t the last. Criminals, spies and corporations continued to produce rootkits to attack their adversaries (victims, rival states, customers) and trusted computing came to the rescue.
Today, trusted computing is widely used by the world’s largest tech companies to force customers to use their app stores, their OSes, their printer ink, their spare parts. It’s in medical implants, cars, tractors and kitchen appliances.
None of this stuff has an owner override. In 2012, I gave a talk to Google, Defcon and the Long Now Foundation about the crisis of owner override, called “The Coming Civil War Over General Purpose Computing.”
https://memex.craphound.com/2012/08/23/the-coming-civil-war-over-general-purpose-computing/
It proposed a way that owner override, combined with trusted computing, could allow users to resist both state and corporate power, and it warned that a lack of technological self-determination opened the door to a parade of horribles.
Because once you have a system that is designed to override owners - and not the other way around - then anyone who commands that system can, by design, do things that the user can’t discern or prevent.
This is the *real* trolley problem when it comes to autonomous vehicles: not “who should a car sacrifice in a dangerous situation?” but rather, “what happens when a car that is designed to sometimes kill its owner is compromised by Bad Guys?”
https://this.deakin.edu.au/self-improvement/car-wars
The thing is, trusted computing with an owner override is pretty magical. Take the Introspection Engine, a co-processor in a fancy Iphone case designed by Edward Snowden and Bunnie Huang. It’s designed to catch otherwise undetectable mobile malware.
https://www.tjoe.org/pub/direct-radio-introspection/release/2
You see, your phone doesn’t just run Ios or Android; the part that interfaces with the phone system - the baseband radio - runs an ancient, horribly insecure OS, and if it is infected, it can trick your phone’s senses, so that it can no longer reason.
The Introspection Engine is a small circuit board that sandwiches between your phone’s mainboard and its case, making electrical contact with all the systems that carry network traffic.
This daughterboard has a ribbon cable that snakes out of the SIM slot and into a slightly chunky phone case that has a little open source hardware chip with fully auditable code and an OLED display.
This second computer monitors the electrical signals traveling on the phone’s network buses and tells you what’s going on. This is a user-accessible god-chip, a way for you to know whether your phone is hallucinating when it tells you that it isn’t leaking your data.
That’s why it’s called an “Introspection Engine.” It lets your phone perch at an objective remove and understand how it is thinking.
(If all this sounds familiar, it’s because it plays a major role in ATTACK SURFACE, the third Little Brother book)
The reason the Introspection Engine is so exciting is that it is exceptional. The standard model for trusted computing is that it treats everyone *except* the manufacturer as its adversary - including you, the owner of the device.
This opens up many different sets of risks, all of which have been obvious since 1999’s Ntrootkit, and undeniable since 2005’s Sony Rootkit.
I. The manufacturer might not have your interests at heart.
In 2016, HP shipped a fake security update to its printers, tricking users into installing a system that rejected their third-party ink, forcing them to pay monopoly prices for HP products.
II. An insider at the company may not have your interests at heart.
Multiple “insider threat” attacks have been executed against users. Employees at AT&T, T-Mobile, even Roblox have accepted bribes to attack users on behalf of criminals.
III. A government may order the company to attack its users.
In 2017 Apple removed all working VPNs from its Chinese app stores, as part of the Chinese state’s mass surveillance program (1m members of religious minorities were subsequently sent to concentration camps).
Apple’s trusted computing prevents users from loading apps that aren’t in its app stores, meaning that Apple’s decisions about which apps you can run on your Iphone are binding on you, even if you disagree.
IV. Third parties may exploit a defect in the trusted computing system and attack users in undetectable ways that users can’t prevent.
By design, TPMs can’t be field updated, so if there’s a defect in them, it can’t be patched.
Checkm8 exploits a defect in eight generations of Apple’s mobile TPM. It’s a proof-of-concept released to demonstrate a vulnerability, not malware (thankfully).
But there have been scattered, frightening instances of malware that attacks the TPM - that suborns the mind of God so that your computer ceases to be able to reason. To date, these have all been associated with state actors who used them surgically.
State actors know that the efficacy of their cyberweapons is tied to secrecy: once a rival government knows that a system is vulnerable, they’ll fix it or stop using it or put it behind a firewall, so these tools are typically used parsimoniously.
But criminals are a different matter (and now, at long last, we’re coming back to Trickbot and UEFI) (thanks for hanging in there).
UEFI (“You-Eff-Ee”) is a trusted computing system that computer manufacturers use to prevent unauthorized OSes from running on the PCs they sell you.
Mostly, they use this to prevent malicious OSes from running on the hardware they manufacture, but there have been scattered instances of it being used for monopolistic purposes: to prevent you from replacing their OS with another one (usually a flavor of GNU/Linux).
UEFI is god-mode for your computer, and a compromise of it would be a Sony Rootkit event, but 15 years later, in a world where systems are far more widespread and are used for far more critical applications, from running power plants to handling multimillion-dollar transactions.
Trickbot is very sophisticated malware generally believed to be run by criminals, not a government. Like a lot of modern malware, there’s a mechanism for updating it in the field with new capabilities - both attacks and defenses.
And Trickbot has been observed in the wild probing infected systems’ UEFI. This leads security researchers to believe that Trickbot’s authors have figured out how to compromise UEFI on some systems.
https://www.wired.com/story/trickbot-botnet-uefi-firmware/
Now, no one has actually observed UEFI being compromised, nor has anyone captured any UEFI-compromising Trickbot code. The thinking goes that Trickbot only downloads the UEFI code when it finds a vulnerable system.
Running in UEFI would make Trickbot largely undetectable and undeletable. Even wiping and restoring the OS wouldn’t do it. Remember, TPMs are designed to be unpatchable and tamper-resistant. The physical hardware is designed to break forever if you try to swap it out.
If this is indeed what’s going on, it’s the first instance in which a trusted computing module was used to attack users by criminals (not governments or the manufacturer and its insiders). And Trickbot’s owners are really bad people.
They’ve hired out to the North Korean state to steal from multinationals; they’ve installed ransomware in big companies, and while their footprint has waned, they once controlled 1,000,000 infected systems.
You can check your UEFI to see if it’s vulnerable to tampering:
https://eclypsium.com/2019/10/23/protecting-system-firmware-storage/
and also determine whether it has been compromised.
But this isn’t the end, it’s just getting started. As Seth Schoen warned us in 2002, the paternalistic mode of computing has a huge, Ken Thompson-shaped hole in it: it requires you to trust the benevolence of a manufacturer, and, crucially, the manufacturer knows you don’t have a choice.
If companies knew that you *could* alter whom you trusted, they would have to work to earn and keep your trust. If governments knew that owners could change whom they trusted, they’d understand that their targets would simply shift tactics if they ordered a company to compromise its TPMs.
Some users would make foolish decisions about whom to trust, but they would also have recourse when a trusted system was revealed to be defective. This is a fight that’s into its third decade, and the stakes have never been higher.
Sadly, we are no closer to owner override than we were in 2002.