My 2019 book RADICALIZED opened with a novella called Unauthorized Bread, a tale of self-determination versus technical oppression that starts with a Libyan refugee hacking her stupid smart-toaster, which locks her into buying proprietary bread.
I wrote that story after watching the inexorable colonization of every kind of device - from implanted defibrillators to tractors - with computerized controllers that served a variety of purposes, many of them nakedly dystopian.
The existence of laws like Section 1201 of the DMCA really invites companies to make “smart” versions of their devices for the sole purpose of adding DRM to them, because DMCA 1201 makes it a felony to unlock DRM, even for perfectly legal purposes.
That’s how John Deere uses DRM: to force farmers to use (and pay for) authorized repair personnel when their tractors break down; it’s how Abbott Labs uses DRM: to force people with diabetes to use Abbott’s own insulin pumps with its glucose monitors.
It’s the inkjet business-model, but for everything from artificial pancreases to coffee-makers. And because DMCA 1201 is so badly* drafted, it also puts security researchers at risk.
*Assuming you’re willing to believe this isn’t what the law was supposed to do all along.
Adding networked computers to everyday gadgets is a risky business: as with any human endeavor, software is prone to error. And as with any technical pursuit, the only way to reliably root out errors is through adversarial peer review.
That is, to have people who want you to fail go through your stuff looking for stupid mistakes they can mock you over.
It’s not enough for you to go over your own work for errors. Anyone who’s ever stared right at their own typo and not seen it knows this doesn’t work.
Nor is it sufficient for your friends to look over your work - not only will they go easy on you, but sometimes your errors come from a shared set of faulty assumptions.
They CAN’T spot these errors: this is why no argument among Qanoners ever points out the most important fact, which is that the whole fucking thing is batshit.
The default for products is that ANYONE is allowed to point out their defects. If you buy a pencil and the tip breaks all the time and you do some analysis and discover that the manufacturer sucks at graphite, you can publish that analysis.
But DMCA 1201 prohibits this kind of disclosure if it means that you reveal flaws that might be used to disable the DRM. Security researchers get threatened by “smart device” companies all the time.
Just the spectre of the threat is enough to convince a lot of organizations’ lawyers to advise researchers not to go public with this information.
That means that a defect that could crash your car (or your implanted pacemaker) only gets disclosed if the company that made it authorizes the disclosure.
This is seriously bad policy.
Companies add “smarts” to get DRM, because DRM lets them control how customers use their products, shut down competitors who try to give control back to those customers, and silence critics who reveal their products’ defects.
DRM can be combined with terms of service, patents, trade secrets, binding arbitration, and other forms of “IP” to deliver near-perfect corporate control over competitors, customers and critics.
But it’s worse than that, because software designed to exercise this kind of control is necessarily designed for maximum opacity: to hide what it does, how it does it, and how to turn it off.
This obfuscation means that when your device is compromised, malicious code can take advantage of the obscure-by-design nature of the device to run undetectably as it attacks you, your data, and your physical environment.
Malicious code can also leverage DRM’s natural tamper-resistance to make it hard to remove malware once it has been detected. Once a device designed to control its owners has been compromised, the attacker gets to control the owner, too.
Which brings me to the Smarter, a $250 “smart” coffee maker that is remarkably insecure: anyone on the same wifi network as the device can replace its firmware, as Martin Hron demonstrates in a recent proof-of-concept attack.
Hron’s attack hijacks the machine, causing it to “turn on the burner, dispense water, spin the bean grinder, and display a ransom message, all while beeping repeatedly.”
As Dan Goodin points out, Hron did all this in just one week, and quite likely could find more ways to attack the device. The defects Hron identified - like the failure to use encryption in the device’s communications or firmware updates - are glaring, idiotic errors.
As is the decision to allow for unsigned firmware updates without any user intervention. This kind of design idiocy has been repeatedly identified in MANY kinds of devices.
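The fix for this class of error is well understood: a device should cryptographically verify an update before flashing it. Here's a minimal sketch of that check. Everything in it is hypothetical - it is not Smarter's code, and it uses a keyed HMAC for brevity where a real device would verify an asymmetric signature (e.g. Ed25519), so that the key baked into the device can only verify updates, never forge them:

```python
import hashlib
import hmac

# Hypothetical vendor key -- a stand-in for a real vendor's signing
# infrastructure. With HMAC this secret must be kept on the device,
# which is exactly why production firmware uses asymmetric signatures
# instead: the device then holds only a public verification key.
VENDOR_KEY = b"example-vendor-key"

def firmware_is_authentic(image: bytes, tag: bytes) -> bool:
    """Check the update's authentication tag against the vendor key."""
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()
    # compare_digest avoids leaking the mismatch position via timing.
    return hmac.compare_digest(expected, tag)

def apply_update(image: bytes, tag: bytes) -> None:
    """Flash the image only after the authenticity check passes."""
    if not firmware_is_authentic(image, tag):
        raise ValueError("unsigned or tampered firmware - refusing to flash")
    # ...only now write image to flash...
```

The point isn't the particular primitive - it's that the check happens on-device, before anything is written, regardless of who sent the update or what network it arrived over.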
Back in 2011, I watched Ang Cui silently update the OS of an HP printer by sending it a gimmicked PDF (HP’s printers received new firmware via print-jobs, ingesting everything after a PostScript comment that said, “New firmware starts here”).
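The anti-pattern Cui exploited fits in a few lines. This is a hypothetical sketch, not HP’s actual parser: everything after a magic comment in a print job gets treated as a firmware image, with no signature check and no user interaction.

```python
# Hypothetical reconstruction of the flawed design -- not HP's code.
# Any document that happens to contain the marker becomes a vector
# for a silent firmware update.
FIRMWARE_MARKER = b"%New firmware starts here"

def extract_firmware(print_job: bytes):
    """Return whatever follows the marker, or None for an ordinary job."""
    marker_at = print_job.find(FIRMWARE_MARKER)
    if marker_at == -1:
        return None  # no marker: treat it as a normal print job
    # Everything after the marker is accepted as firmware, unverified.
    return print_job[marker_at + len(FIRMWARE_MARKER):]
```

Because the marker can appear in any document anyone prints, the “update channel” is effectively open to the whole world - the same unauthenticated-input mistake as the coffee maker, a decade earlier.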
A decade later, there is no excuse for this kind of mistake. The fact that IoT vendors are still making it tells you that opacity and the power to punish critics are not powers that companies wield wisely - and that you shouldn’t trust any IoT gadget.