UEFI hacking malware


Security researchers are alarmed: the already-notorious Trickbot malware has been spotted probing infected computers to find out which version of UEFI they’re running. This is read as evidence that Trickbot has figured out how to pull off a really scary feat.

To understand why, you have to understand UEFI: a fascinating, deep, philosophical change to our view of computers, trust, and the knowability of the universe. It’s a tale of hard choices, paternalism, and the race to secure the digital realm as it merges with the physical.

Computers were once standalone: a central processing unit that might be augmented by some co-processors for specialized processes, like a graphics card or even a math co-processor.

These co-pros were subordinate to the CPU, though. You’d turn on the computer and the CPU would read a very small set of hardcoded instructions telling it how to access a floppy disk or other storage medium holding the rest of the boot sequence - the stuff needed to start the system.

The hardwired instructions were in a ROM that had one job: wake up and feed some instructions to the “computer” telling it what to do, then go back to sleep. But there’s a philosophical conundrum here.

Because the world of computing is adversarial and networked computing is doubly so: there are people who want your computer to do things that are antithetical to your interests, like steal your data or spy on you or encrypt all your files and demand ransom.

To stop this, you need to be able to examine the programs running on your computer and terminate the malicious ones. And therein lies the rub: when you instruct your computer to examine its own workings, how do you know if you can trust it?

In 1983, Ken Thompson (co-creator of C, Unix, etc) was awarded a Turing Award (“computer science’s Nobel Prize”). He gave a fucking bombshell of an acceptance speech, called “Reflections on Trusting Trust.”


Thompson revealed that he had created a backdoor for himself that didn’t just live in Unix, but in the C compiler that people made to create new Unix systems.

Here’s what that means: when you write a program, you produce “high-level code” with instructions like printf("Hello, World!");. Once your program is done, you turn it into machine code, a series of much shorter instructions that your CPU understands ("mov dx, msg" etc).

Most programmers can’t read this machine code, and even for those who can, it’s a hard slog. In general, we write our code, compile it and run it, but we don’t examine it. With nontrivial programs, looking at the machine code is very, very hard.

Compilers are treated as intrinsically trustworthy. Give ‘em some source, they spit out a binary, you run the binary. Sometimes there are compiler bugs, sure, and compiler improvements can be a big deal. But compilers are infrastructure: inscrutable and forgotten.

Here’s what Thompson did: he hid a program in his compiler that would check to see whether you were compiling an operating system or a compiler. If you were compiling an OS, it hid a secret login for him inside of it.

If you were compiling a compiler, it hid the program that looked for compilers or operating systems inside of it.

Think about what this means: every OS you compiled had an intentional security defect that the OS itself couldn’t detect.

If you suspected that your compiler was up to no good and wrote your own compiler, it would be compromised as soon as you compiled it. What Thompson did was ask us to contemplate what we meant when we “trusted” something.
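Thompson’s two-pronged trap can be sketched in a few lines of Python. This is a toy for illustration only - the names (evil_compile, the string-matching “compilation”) are invented, and the real attack lived inside compiler binaries, invisible in any source code:

```python
# A toy sketch of the "trusting trust" attack, NOT Thompson's actual code.

BACKDOOR = "ACCEPT_PASSWORD('ken')  # secret login\n"

def evil_compile(source: str) -> str:
    """'Compile' source text, silently injecting the trap where it matters."""
    if "login:" in source:           # prong 1: does this look like an OS?
        return source + BACKDOOR     # hide a secret login in the output
    if "def compile" in source:      # prong 2: does this look like a compiler?
        # Propagate the trap, so a freshly built "clean" compiler still
        # carries the injection logic.
        return source + "# [trap: re-inserts this injection logic]\n"
    return source                    # everything else compiles honestly

clean_os_source = "boot(); login: prompt()\n"
compiled_os = evil_compile(clean_os_source)

print(BACKDOOR in compiled_os)       # True: the built OS is backdoored
print(BACKDOOR in clean_os_source)   # False: the source looks clean
```

The point of the second prong is that auditing and recompiling the compiler’s own source doesn’t help: the trap re-inserts itself at build time.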

It was a move straight out of René Descartes: the reasoning that leads up to “I think, therefore I am.” Descartes’ “Discourse on the Method” asks how we can know things about the universe.

He points out that sometimes he thinks he senses something but is wrong - he dreams, he hallucinates, he misapprehends.

If all our reasoning depends on the impressions we get from our senses, and if our senses are sometimes faulty, how can we reason at all?

Descartes wants a point of certainty, one thing he *knows* to be absolutely true. He makes the case that if you can be certain of one thing, you can anchor everything else to this point and build up a massive edifice of trustable knowledge that all hangs off of this anchor.

Thompson is basically saying, “You thought you had descartesed your way into a trustable computing universe because of the axiom that I would never poison your lowest-level, most fundamental tools.”



(But, you know, in a nice way: an object lesson to serve as a wake-up call before computers fully merged with the physical world to form a global, species-wide digital nervous system whose untrustworthy low-level parts were foolishly, implicitly trusted).

But processors were expensive and computers were exploding. PCs running consumer operating systems like Windows and Mac OS (and more exotic ones like GNU/Linux and various Unices) proliferated, and they all shared this flawed security model.

They all relied on the operating system to be a faithful reporter of the computer’s internals, and operated on the assumption that they could use programs supervised by the OS to detect and terminate malicious programs.

But starting in 1999, Ken Thompson’s revenge was visited upon the computing world. Greg Hoglund released Ntrootkit, a proof-of-concept malware that attacked Windows itself, so that the operating system would lie to antivirus programs about what it was doing and seeing.

In Descartes-speak, your computer could no longer trust its senses, so it could no longer reason. The nub of trust, the piton driven into the mountain face, was made insecure and the whole thing collapsed. Security researchers at big companies like Microsoft took this to heart.

In 2002, Peter Biddle and his team from Microsoft came to EFF to show us a new model for computing: “Trusted Computing” (codenamed “Palladium”).


Palladium proposed to give computers back their nub of Descartesian certainty. It would use a co-processor, but unlike a graphics card or a math co-pro, it would run before the CPU woke up and did its thing.

And unlike a ROM, it wouldn’t just load up the boot sequence and go back to sleep.

This chip - today called a “Secure Enclave” or a “Trusted Platform Module” (etc) - would have real computing power, and it would remain available to the CPU at all times.

Inside the chip was a bunch of cool cryptographic stuff that provided the nub of certainty. At the start of the boot, the TPM would pull the first stages of the boot-code off of the drive, along with a cryptographic signature.

A quick crypto aside:

Crypto is code that mixes a key (a secret known to the user) with text to produce a scrambled text (a “ciphertext”) that can only be descrambled by the key.

Dual-key crypto has two keys. What one scrambles, the other descrambles (and vice-versa).

With dual-key crypto, you keep one key secret (the “private key”) and you publish the other one (the “public key”). If you scramble something with a private key, then anyone can descramble it with your public key and know it came from you.

If you scramble it *twice*, first with your private key and then with your friend’s public key, then they can tell it came from you (because only your private key’s ciphertexts can be descrambled with your public key).

And *you* can be certain that only they can read it (because only their private key can descramble messages that were scrambled with their public key).
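That double-scramble can be demonstrated with textbook-sized RSA numbers in Python. Everything here is illustrative only - the keys are tiny and there’s no padding, so nothing about it is actually secure:

```python
# Dual-key demo: what one key scrambles, the other descrambles.
# Tiny textbook RSA keypairs (never use numbers this small for real).

def scramble(msg: int, key: int, modulus: int) -> int:
    return pow(msg, key, modulus)

MY_PUB, MY_PRIV, MY_N = 17, 2753, 3233    # my keypair
FR_PUB, FR_PRIV, FR_N = 3, 7467, 11413    # my friend's keypair

msg = 42

# Scramble twice: my private key first, then my friend's public key.
signed = scramble(msg, MY_PRIV, MY_N)
sealed = scramble(signed, FR_PUB, FR_N)

# My friend peels the layers in reverse: their private key, then my public key.
opened = scramble(sealed, FR_PRIV, FR_N)
recovered = scramble(opened, MY_PUB, MY_N)

print(recovered == msg)  # True: provably from me, readable only by them
```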

Code-signing uses dual-key crypto to validate who published some code.

Microsoft can make a shorter version of its code (like a fingerprint) and then scramble it with its private key. The OS that came with your computer has a copy of MSFT’s public key. When you get an OS update, you can descramble the fingerprint with that built-in key.

If it matches the update, then you know that Microsoft signed it and it hasn’t been tampered with on its way to you. If you trust Microsoft, you can run the update.
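In Python, that code-signing flow looks roughly like this. The update blob and the toy RSA keypair are made up for illustration; real signing uses large keys, padding, and certificate chains:

```python
import hashlib

E, D, N = 17, 2753, 3233   # toy publisher keypair: public (E, N), private (D, N)

def fingerprint(code: bytes) -> int:
    # A shorter version of the code, squeezed down to fit our toy modulus.
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % N

def sign(code: bytes) -> int:
    """Publisher's side: scramble the fingerprint with the private key."""
    return pow(fingerprint(code), D, N)

def verify(code: bytes, sig: int) -> bool:
    """Your OS's side: descramble with the built-in public key and compare."""
    return pow(sig, E, N) == fingerprint(code)

update = b"os-update-v2.bin contents"
sig = sign(update)

print(verify(update, sig))             # True: signed and untampered
print(verify(b"evil" + update, sig))   # almost surely False: fingerprint changed
```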

But…What if a virus replaces Microsoft’s public keys with its own?

That’s where Palladium’s TPM comes in. It’s got the keys hardcoded into it. Programs running on the CPU can only ask the TPM to do very limited things, like signing some text or checking the signature on some text.

It’s a kind of god-chip, running below the most privileged level of user-accessible operations. By design, you - the owner of the computer - can demand things of it that it is technically capable of doing, and it can refuse you, and you can’t override it.

That way, programs running even in the most privileged mode can’t compromise it.

Back to our boot sequence: the TPM fetches some startup code from the disk along with a signature, and checks to see whether the OS has been signed by its manufacturer.

If not, it halts and shows you a scary error message. Game over, Ken Thompson!
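Here’s a minimal sketch of that halt-or-boot decision, using hardcoded fingerprint comparisons as a stand-in for the TPM’s signature checks. The stage names and contents are invented:

```python
import hashlib

def h(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

stage1 = b"stage1: load kernel"
stage2 = b"stage2: start the OS"

TPM_ANCHOR = h(stage1)          # burned into the chip at the factory
STAGE1_MANIFEST = {h(stage2)}   # stage1 knows what a legitimate stage2 looks like

def boot(s1: bytes, s2: bytes) -> str:
    if h(s1) != TPM_ANCHOR:
        return "HALT: stage1 tampered"   # the scary error message
    if h(s2) not in STAGE1_MANIFEST:
        return "HALT: stage2 tampered"
    return "booting..."

print(boot(stage1, stage2))                  # booting...
print(boot(stage1, b"rootkit" + stage2))     # HALT: stage2 tampered
```

Each verified stage vouches for the next, so trust chains upward from the chip’s hardcoded anchor.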

It is a very cool idea, but it’s also very scary, because the chip doesn’t take orders from Descartes’ omnibenevolent God.

It takes orders from Microsoft, a rapacious monopolist with a history of complicity with human rights abuses. Right from that very first meeting the brilliant EFF technologist Seth Schoen spotted this (and made the Descartes comparison):


Seth identified a way of having your cake and eating it too: he proposed a hypothetical thing called an “owner override” - a physical switch that, when depressed, could be used to change which public keys lived in the chip.

This would allow owners of computers to decide whom they trusted and would defend them against malware. But what it *wouldn’t* do is defend tech companies’ shareholders against the owner of the computer - it wouldn’t facilitate DRM.

“Owner override” is a litmus test: are you Descartes’ God, or Thompson’s Satan?

Do you want computers to allow their owners to know the truth? Or do you want computers to bluepill their owners, lock them in a matrix where you get to decide what is true?

A month later, I published a multi-award-winning sf story called “0wnz0red” in Salon that tried to dramatize the stakes here.


Despite Seth’s technical clarity and my attempts at dramatization, owner override did not get incorporated into trusted computing architectures.

Trusted computing took years to become commonplace in PCs. In the interim, rootkits proliferated. Three years after the Palladium paper, Sony-BMG deliberately turned 6m audio CDs into rootkit vectors that would silently alter your OS when you played them from a CD drive.

The Sony rootkit broke your OS so that any filename starting with $SYS$ didn’t show up in file listings, and $SYS$ programs wouldn’t show up in the process monitor. Accompanying the rootkit was a startup program (starting with $SYS$) that broke CD ripping.

Sony infected hundreds of thousands of computers, including systems on US government and military networks. Malware authors - naturally enough - prepended $SYS$ to the files corresponding with their viruses, so that antivirus software (which depends on the OS for information about files and processes) couldn’t detect them.
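Conceptually, the rootkit’s cloak was just an interposed file-listing call that censored its own artifacts. A sketch (all filenames invented):

```python
# Antivirus asks the OS what's on disk; the rootkit's shim answers with
# anything starting with $SYS$ silently removed.

REAL_FILES = ["report.doc", "$SYS$cloak.exe", "song.mp3", "$SYS$virus.exe"]

def real_listdir() -> list[str]:
    return REAL_FILES    # what is actually on disk

def hooked_listdir() -> list[str]:
    # Same API, censored answer.
    return [f for f in real_listdir() if not f.startswith("$SYS$")]

def antivirus_scan(listdir) -> list[str]:
    # The scanner can only see what the OS shows it.
    return [f for f in listdir() if "virus" in f]

print(antivirus_scan(real_listdir))    # ['$SYS$virus.exe']
print(antivirus_scan(hooked_listdir))  # []: the OS is lying to the scanner
```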

It was an incredibly reckless, depraved act, and it wasn’t the last. Criminals, spies and corporations continued to produce rootkits to attack their adversaries (victims, rival states, customers) and trusted computing came to the rescue.

Today, trusted computing is widely used by the world’s largest tech companies to force customers to use their app stores, their OSes, their printer ink, their spare parts. It’s in medical implants, cars, tractors and kitchen appliances.

None of this stuff has an owner override. In 2012, I gave a talk to Google, Defcon and the Long Now Foundation about the crisis of owner override, called “The Coming Civil War Over General Purpose Computing.”


It proposed a way that owner override, combined with trusted computing, could allow users to resist both state and corporate power, and it warned that a lack of technological self-determination opened the door to a parade of horribles.

Because once you have a system that is designed to override owners - and not the other way around - then anyone who commands that system can, by design, do things that the user can’t discern or prevent.

This is the *real* trolley problem when it comes to autonomous vehicles: not “who should a car sacrifice in a dangerous situation?” but rather, “what happens when a car that is designed to sometimes kill its owner is compromised by Bad Guys?”


The thing is, trusted computing with an owner override is pretty magical. Take the Introspection Engine, a co-processor in a fancy Iphone case designed by Edward Snowden and Bunnie Huang. It’s designed to catch otherwise undetectable mobile malware.


You see, your phone doesn’t just run Ios or Android; the part that interfaces with the phone system - the baseband radio - runs an ancient, horribly insecure OS, and if it is infected, it can trick your phone’s senses, so that it can no longer reason.

The Introspection Engine is a small circuit board that sandwiches between your phone’s mainboard and its case, making electrical contact with all the systems that carry network traffic.

This daughterboard has a ribbon cable that snakes out of the SIM slot and into a slightly chunky phone case that holds a little open source hardware chip with fully auditable code and an OLED display.

This second computer monitors the electrical signals traveling on the phone’s network buses and tells you what’s going on. This is a user-accessible god-chip, a way for you to know whether your phone is hallucinating when it tells you that it isn’t leaking your data.

That’s why it’s called an “Introspection Engine.” It lets your phone perch at an objective remove and understand how it is thinking.
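The core of that introspection is a cross-check between what the phone claims and what the monitor actually observes on the buses. A toy sketch (the function name and event counts are invented):

```python
# An independent observer needs no cooperation from the (possibly lying) OS:
# it just counts transmissions on the radio buses and compares notes.

def introspect(phone_claims_radios_off: bool, observed_tx_events: int) -> str:
    if phone_claims_radios_off and observed_tx_events > 0:
        return "ALERT: radio traffic while the phone claims silence"
    return "consistent"

print(introspect(True, 0))    # consistent
print(introspect(True, 37))   # the alert fires: the phone is hallucinating
```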

(If all this sounds familiar, it’s because it plays a major role in ATTACK SURFACE, the third Little Brother book)


The reason the Introspection Engine is so exciting is that it is exceptional. The standard model for trusted computing is that it treats everyone *except* the manufacturer as its adversary - including you, the owner of the device.

This opens up many different sets of risks, all of which have been obvious since 1999’s Ntrootkit, and undeniable since 2005’s Sony Rootkit.

I. The manufacturer might not have your interests at heart.

In 2016, HP shipped a fake security update to its printers, tricking users into installing a system that rejected their third-party ink, forcing them to pay monopoly prices for HP products.


II. An insider at the company may not have your interests at heart.

Multiple “insider threat” attacks have been executed against users. Employees at AT&T, T-Mobile, even Roblox have accepted bribes to attack users on behalf of criminals.


III. A government may order the company to attack its users.

In 2017 Apple removed all working VPNs from its Chinese app stores, as part of the Chinese state’s mass surveillance program (1m members of religious minorities were subsequently sent to concentration camps).

Apple’s trusted computing prevents users from loading apps that aren’t in its app stores, meaning that Apple’s decisions about which apps you can run on your Iphone are binding on you, even if you disagree.


IV. Third parties may exploit a defect in the trusted computing system and attack users in undetectable ways that users can’t prevent.

By design, TPMs can’t be field-updated, so if there’s a defect in them, it can’t be patched.

Checkm8 exploits a defect in eight generations of Apple’s mobile TPMs. It’s a proof-of-concept released to demonstrate a vulnerability, not malware (thankfully).


But there have been scattered, frightening instances of malware that attacks the TPM - that suborns the mind of God so that your computer ceases to be able to reason. To date, these have all been associated with state actors who used them surgically.

State actors know that the efficacy of their cyberweapons is tied to secrecy: once a rival government knows that a system is vulnerable, they’ll fix it or stop using it or put it behind a firewall, so these tools are typically used parsimoniously.

But criminals are a different matter (and now, at long last, we’re coming back to Trickbot and UEFI) (thanks for hanging in there).

UEFI (“You-Eff-Ee”) is a trusted computing system that computer manufacturers use to prevent unauthorized OSes from running on the PCs they sell you.

Mostly, they use this to prevent malicious OSes from running on the hardware they manufacture, but there have been scattered instances of it being used for monopolistic purposes: to prevent you from replacing their OS with another one (usually a flavor of GNU/Linux).

UEFI is god-mode for your computer, and a compromise to it would be a Sony Rootkit event - but one taking place 15 years later, in a world where computers are more widespread and used for more critical applications, from running power plants to handling multimillion-dollar transactions.

Trickbot is very sophisticated malware generally believed to be run by criminals, not a government. Like a lot of modern malware, there’s a mechanism for updating it in the field with new capabilities - both attacks and defenses.

And Trickbot has been observed in the wild probing infected systems’ UEFI. This leads security researchers to believe that Trickbot’s authors have figured out how to compromise UEFI on some systems.


Now, no one has actually observed UEFI being compromised, nor has anyone captured any UEFI-compromising Trickbot code. The thinking goes that Trickbot only downloads the UEFI code when it finds a vulnerable system.

Running in UEFI would make Trickbot largely undetectable and undeletable. Even wiping and restoring the OS wouldn’t do it. Remember, TPMs are designed to be unpatchable and tamper-resistant. The physical hardware is designed to break forever if you try to swap it out.

If this is indeed what’s going on, it’s the first instance in which a trusted computing module was used to attack users by criminals (not governments or the manufacturer and its insiders). And Trickbot’s owners are really bad people.

They’ve hired out to the North Korean state to steal from multinationals; they’ve installed ransomware in big companies, and while their footprint has waned, they once controlled 1,000,000 infected systems.

You can check your UEFI to see if it’s vulnerable to tampering:


and also determine whether it has been compromised:


But this isn’t the end, it’s just getting started. As Seth Schoen warned us in 2002, the paternalistic mode of computing has a huge, Ken Thompson-shaped hole in it: it requires that you trust the benevolence of a manufacturer, and, crucially, manufacturers know you don’t have a choice.

If companies knew that you *could* alter whom you trusted, they would have to work to earn and keep your trust. If governments knew that ordering a company to compromise its TPMs would just prompt their targets to shift tactics, they’d think twice before giving that order.

Some users would make foolish decisions about whom to trust, but they would also have recourse when a trusted system was revealed to be defective. This is a fight that’s into its third decade, and the stakes have never been higher.

Sadly, we are no closer to owner override than we were in 2002.



Albarran Cabrera   —–   Instagram

Remembering the future

Our first book “Remembering the future” was published for the first time by Editorial RM in 2018 and got sold out twice at RM (the last time during Paris Photo 2019). Now, one year later, it is printed again but this time with a different cover and two different images inside (the ones on the new front and back covers, of course).
Some few copies of the previous editions are still available in some of our galleries and if you want the new one, you can preorder it at Editorial RM website


Arecibo Observatory, collapse, astronomy, 2020

video link

This video, that starts with a view of the top of Tower 4, was taken from the vantage point of an Arecibo Observatory drone, utilized for monitoring the condition of Tower 4 support cables. Four cables are seen in the center of this video. The top cable does not support the telescope platform, but instead supports the catwalk described in the narrative for the previous video. The three lower cables are, from left to right, M4-1, M4-2, and M4-3. Note that a number of individual wire strands of the M4-1 and M4-2 cables are noticeably broken at the beginning of this video. The M4-3 cable does not appear to have any broken wires at the beginning of this video. The first indication of the coming failure is the breaking of another M4-2 wire, accompanied by a puff of “smoke” and chips of paint flying away from the surface of the cable. Four seconds later the entire M4-2 cable appears to disintegrate. The failure of M4-2 is followed a fraction of a second later by the demise of M4-1, followed a fraction of a second later by the failure of M4-3. The drone operator then swings the drone around to view the reflector dish and fallen platform, azimuth arm, Gregorian dome and the falling cables and catwalk. The top section of Tower 12, near the Visitor Center, can be seen tumbling down the hill to the left of the operations building. The Tower 12 backstay cables that connect the top of Tower 12 to the ground cause damage behind Tower 12, well away from the edge of the telescope dish.

Courtesy of the Arecibo Observatory, a U.S. National Science Foundation facility. 

Breathtaking Iphone hack


AWDL is Apple’s mesh networking protocol, a low-level, device-to-device wireless system that underpins tools like Airdrop. It is implemented in the Ios kernel, a high-privilege, high-risk zone in Iphone and Ipad internals.

A researcher at Google’s Project Zero, Ian Beer, found a vulnerability in AWDL that allowed him to wirelessly infect Ios devices, then have them go on to spread the virus wirelessly to any Ios devices they came into contact with.


The proof-of-concept attack undetectably grants “full access to the user’s personal data, including emails, photos, messages, and passwords and crypto keys stored in the keychain.”


Beer developed the exploit virtually single-handedly over six months and confidentially disclosed its details to Apple, which issued patches for it earlier this year. Now that the patch has had time to propagate, Beer has released a detailed, formal account of his work.

The 30,000-word technical paper is heavy reading, but if you want inspiration to delve into it, try the accompanying 14-second video, which is one of the most remarkable (and alarming) infosec clips I’ve ever seen.


As far as can be known, this was never exploited in the wild. In his Ars Technica coverage of the exploit, Dan Goodin drops the other shoe: “If a single person could do all of this in six months, just think what a better-resourced hacking team is capable of.”

It’s a theme that Beer himself explores in a Twitter thread, in which he describes the tradeoffs in protocols like AWDL, whose ease of use was critical to private messaging by Hong Kong protesters last year.


But that ease comes with a “large and complex attack surface [exposed] to everyone in radio proximity” - a security nightmare if there are any bugs at all in the code, “and unfortunately the quality of the AWDL code was at times fairly poor and seemingly untested.”

It’s a sobering reminder that companies can’t fully audit their own products. Even companies with sterling security track records like Apple slip up and miss really, really, REALLY important stuff.

It’s really at the heart of understanding why independent security research must be protected - at a moment in which it is under assault, as outdated laws like the Computer Fraud and Abuse Act are used to punish researchers who go public with their work.

Dominant companies - including Google and Apple - have taken the position that security disclosures should be subject to a corporate veto (in other words, that companies should be able to decide when their critics can make truthful disclosures about their mistakes).

When the W3C introduced EME, it created the first-ever standardized browser component whose security defects could be suppressed under laws like the CFAA and Sec 1201 of the DMCA.

W3C corporate members opposed measures to require participants to promise NOT to punish security researchers who warned browser users of ways they could be attacked through defects in EME.


And Google is presently using the DMCA to suppress code that reveals defects in its own EME implementation, Widevine, which has become the industry standard.


In his thread, Beer rightfully praises both Apple and Google for having a bug bounty program that serves as a carrot to entice security researchers into disclosing to the company first and giving it time to patch before going public.

(And he calls on Apple to award him a bounty that he can donate to charity, which, with corporate charitable matching, would come out to $500K. This is a no-brainer that Apple should totally do).

But as laudable as the Bug Bounty carrot is, let us not forget that the companies still jealously guard the stick: the right to seek fines and even prison time for security researchers who decide that they don’t trust the companies to act on disclosures.

That may sound reasonable to you - after all, it’s reckless to just blurt out the truth about an exploitable bug before it’s been patched. But companies are really good at convincing themselves that serious bugs aren’t serious and just sitting on them.

When that happens, security researchers have to make a tough call: do they keep mum and hope that no one else replicates their findings and starts to attack users, or do they go public so that people can stop using dangerously defective products?

It’s a call that Google’s Project Zero has made repeatedly. In 2015, they went public with a serious, unpatched, widespread Windows bug when they got tired of waiting for Microsoft to fix it:


And in October, Google disclosed another Windows 0-day that was being exploited in the wild, presumably reasoning that it was better to tell users they were at risk, even if it meant giving ammo to new waves of hackers.


Bug Bounties are great - essential, even. But for so long as companies get to decide who can tell the truth about the defects in their products, bug bounties won’t be enough. The best, most diligent security teams can make dumb mistakes that create real risk.

Your right to know whether you are at risk should not be subject to a corporate whim. The First Amendment - and free speech protections encoded in many other legal systems - provides a high degree of protection for truthful utterances.

The novel and dangerous idea that corporations should have a veto over the truth about their mistakes is completely irreconcilable with these free speech norms and laws.



“In 2005, in a decision that I find bewildering, Cope destroyed EMI’s entire database of musical signatures. The main reason he gave was that EMI’s compositions, being so easily and infinitely producible, were devalued by critics. Cope felt that EMI would be valued as a composer only if it had, as the philosopher Margaret Boden wrote, a ‘finite ouevre — as all human composers, beset by mortality, do’.”

Mitchell, Melanie. Artificial Intelligence: A Guide for Thinking Humans. London: Pelican Books, 2019.

The Social Life of Forests


This topic of trees communicating and living a communal life has suddenly become popular over the past few years. Although scientific studies and papers have been around for a while, the general population was introduced to the concept a few years back with the book, “The Hidden Life of Trees,” authored by Peter Wohlleben. (I highly recommend it.) The science has subsequently been advanced by more books that “teach” us about the concepts, and has also found its way into fiction. The best and most popular example of fiction writing with these concepts is “The Overstory,” authored by Richard Powers, for which he was awarded the Pulitzer Prize for Fiction last year.

One thing I learned from Peter Wohlleben’s book is that trees, at their crown height, tend not to get all tangled up. Next time you’re in a forest, look up and confirm that. I now see that all the time. For example, our yard. We have lots of very tall, very old swamp (white) oaks and a few bur and red oaks in our yard and on the adjacent parkway. The other evening, as I was with the dogs in the back doing their go-pee-before-bed routine, I noticed that the “younger” or smaller swamp oak next to our deck really has no crown on its south end. Why? Because two other much older and larger swamp oaks are to the immediate south and have crowns that are large and approach the younger guy and “invade” its space. Their communication to each other tells the younger guy to behave, be subservient and leave the crown space alone for the old guys. Here’s a photo that I took today showing how the young one (on the left) defers to the two elders (on the right):

Excerpt from this New York Times story. The story is long, so the excerpt is just a small piece of the story. It’s worth a click into the link (and hopefully you won’t run into a paywall).

Underground, trees and fungi form partnerships known as mycorrhizas: Threadlike fungi envelop and fuse with tree roots, helping them extract water and nutrients like phosphorus and nitrogen in exchange for some of the carbon-rich sugars the trees make through photosynthesis. Research had demonstrated that mycorrhizas also connected plants to one another and that these associations might be ecologically important, but most scientists had studied them in greenhouses and laboratories, not in the wild. For her doctoral thesis, [Suzanne] Simard decided to investigate fungal links between Douglas fir and paper birch in the forests of British Columbia. Apart from her supervisor, she didn’t receive much encouragement from her mostly male peers. “The old foresters were like, Why don’t you just study growth and yield?” Simard told me. “I was more interested in how these plants interact. They thought it was all very girlie.”

Now a professor of forest ecology at the University of British Columbia, Simard, who is 60, has studied webs of root and fungi in the Arctic, temperate and coastal forests of North America for nearly three decades. Her initial inklings about the importance of mycorrhizal networks were prescient, inspiring whole new lines of research that ultimately overturned longstanding misconceptions about forest ecosystems. By analyzing the DNA in root tips and tracing the movement of molecules through underground conduits, Simard has discovered that fungal threads link nearly every tree in a forest — even trees of different species. Carbon, water, nutrients, alarm signals and hormones can pass from tree to tree through these subterranean circuits. Resources tend to flow from the oldest and biggest trees to the youngest and smallest. Chemical alarm signals generated by one tree prepare nearby trees for danger. Seedlings severed from the forest’s underground lifelines are much more likely to die than their networked counterparts. And if a tree is on the brink of death, it sometimes bequeaths a substantial share of its carbon to its neighbors.

Although Simard’s peers were skeptical and sometimes even disparaging of her early work, they now generally regard her as one of the most rigorous and innovative scientists studying plant communication and behavior. David Janos, co-editor of the scientific journal Mycorrhiza, characterized her published research as “sophisticated, imaginative, cutting-edge.” Jason Hoeksema, a University of Mississippi biology professor who has studied mycorrhizal networks, agreed: “I think she has really pushed the field forward.” Some of Simard’s studies now feature in textbooks and are widely taught in graduate-level classes on forestry and ecology. She was also a key inspiration for a central character in Richard Powers’s 2019 Pulitzer Prize-winning novel, “The Overstory”: the visionary botanist Patricia Westerford. In May, Knopf will publish Simard’s own book, “Finding the Mother Tree,” a vivid and compelling memoir of her lifelong quest to prove that “the forest was more than just a collection of trees.”

Before Simard and other ecologists revealed the extent and significance of mycorrhizal networks, foresters typically regarded trees as solitary individuals that competed for space and resources and were otherwise indifferent to one another. Simard and her peers have demonstrated that this framework is far too simplistic. An old-growth forest is neither an assemblage of stoic organisms tolerating one another’s presence nor a merciless battle royale: It’s a vast, ancient and intricate society. There is conflict in a forest, but there is also negotiation, reciprocity and perhaps even selflessness. The trees, understory plants, fungi and microbes in a forest are so thoroughly connected, communicative and codependent that some scientists have described them as superorganisms. Recent research suggests that mycorrhizal networks also perfuse prairies, grasslands, chaparral and Arctic tundra — essentially everywhere there is life on land. Together, these symbiotic partners knit Earth’s soils into nearly contiguous living networks of unfathomable scale and complexity. “I was taught that you have a tree, and it’s out there to find its own way,” Simard told me. “It’s not how a forest works, though.”

After a few hours of digging up roots and collecting samples, we began to hike back down the valley. In the distance, the granite peaks of the Selkirks bristled with clusters of conifers. A breeze flung the scent of pine toward us. To our right, a furtive squirrel buried something in the dirt and dashed off. Like a seed waiting for the right conditions, a passage from “The Overstory” suddenly sprouted in my consciousness: “There are no individuals. There aren’t even separate species. Everything in the forest is the forest.”

The Social Life of Forests

A big neural net reviews a smaller neural net’s recipes


I’ve used various neural networks to generate recipes, to varying degrees of success. My earliest recipes were generated with char-rnn, which had to learn everything - spelling, punctuation, words - entirely from scratch. Its recipes were terrible (Swamp Peef and Cheese, anyone? Or Chocolate Chicken Chicken Cake?). Later, when I started using GPT-2, I was using a neural net that had already seen a lot of recipes online during internet pre-training, and its recipes looked a lot more realistic (although not entirely realistic - there was that recipe for Black Forest Cake that suggested pouring the batter into a toaster, or the jello recipe that starts with “remove all internal rinds”).

GPT-3, an even more powerful text-generating neural net, has seen so many recipes during its internet pre-training that its recipes are boringly plausible. Bake its Black Forest Cake recipe and you’d probably actually get cake at the end, even if it’s not the best cake in the world. Its recipes aren’t funny at a glance, which is why I haven’t posted any of them - until internet linguist Gretchen McCulloch suggested that, rather than have GPT-3 generate its own recipes, I have it rate char-rnn’s recipes. Readers, it worked.

I prompted GPT-3 with two example recipes from allrecipes, each followed by 3 of their human-written reviews. Then, for the third recipe I inserted one of char-rnn’s recipes:

Chocolate Chocolate Cake

1 cup sugar
¼ cup butter
1 cup margarine
1 cup cocoa
1 teaspoon vanilla
1 cup margarine
½ cup pecans, chopped
1 teaspoon vanilla
1 egg yolk
½ cup sugar
¼ cup chopped pecans
1 cup sugar
1 teaspoon vanilla
1 cup cocoa
1 teaspoon vanilla

Combine flour, sugar, baking powder, salt, salt, and salt in a large bowl. Cook over medium heat, stirring constantly, until mixture boils and stiff peaks form. Add milk and cook until mixture is completely melted. Remove from heat and stir in sugar and sugar. Add eggs and vanilla. Beat until smooth. Spread in batter. Sprinkle with pecans.

Yield: 1 servings
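The prompt setup described above - real recipes, each followed by their human-written reviews, then the char-rnn recipe with a review left open for the model to continue - can be sketched in a few lines of Python. This is a minimal illustration only; the function name, separator format, and the trailing “Rating:” cue are my own assumptions, not the exact prompt used in the post.

```python
# Sketch of a few-shot review prompt: each example recipe is followed by
# its human-written reviews, then the char-rnn recipe is appended with
# "Rating:" left dangling so the model continues with a new review.
# (Helper name and formatting are illustrative, not the original prompt.)

def build_review_prompt(examples, target_recipe, reviews_per_recipe=3):
    """examples: list of (recipe_text, list_of_reviews) tuples."""
    parts = []
    for recipe_text, reviews in examples:
        parts.append(recipe_text)
        parts.extend(reviews[:reviews_per_recipe])  # keep only 3 reviews
    parts.append(target_recipe)
    parts.append("Rating:")  # open-ended cue for the model to complete
    return "\n\n".join(parts)


examples = [
    ("Chocolate Cake\n1 cup flour ...",
     ["Rating: 5 stars\nLoved it!",
      "Rating: 4 stars\nPretty good.",
      "Rating: 3 stars\nToo sweet."]),
]
prompt = build_review_prompt(examples, "Chocolate Chocolate Cake\n1 cup sugar ...")
```

The resulting string would then be sent to a text-completion endpoint; because the prompt ends mid-pattern, the model’s most plausible continuation is another “N stars” review of the last recipe shown.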

And then I let GPT-3 add reviews. Would it mention the excessive lakes of vanilla? The complete lack of flour? The foolhardy reliance on the heroic efforts of a single egg yolk to hold things together?

Rating: 5 stars
delicious chocolate recipe!!! and the perfect balance between cocoa and chocolate. and they sure were LONGNECKS! big time rave here. lazy w birthdays w friends and family, try this recipe for something chocolate sweet the ppl will talk about over and over.

Rating: 5 stars appraisal
Excellent for the kids Halloween party. Everyone got so thrilled by seeing something new on Chocolate in sponge. He even likes nuts separately but it will taste good..

Rating: 2 stars
I could only give it a 2 as I had to make changes to it due to ingredients I had. First, I used Dove Dark Chocolate cocoa instead of Hershey’s, no nuts, and the mashed bananas and caramel topping were a must. I did use coffee instead of the hot water, and dark margarine instead of regular, therefore I could not use light whipped cream, but I made homemade whipped cream. It had WAY too much flour in it, but i guess it could have been me measuring heavier than I should have. I also only used 2 cups of flour. it was my first bundt cake, so it was huge!!!

Here’s another batch of reviews, this time of a more chaotic char-rnn recipe:

Simple Soup Alaborate Soup Pie

6 each chopped white perserves; ** (optional but ring)
½ cup chopped fresh marjoram
½ teaspoon dried thyme
1/3 teaspoon salt
½ cup thinly sliced onion
¼ cup onion (chopped)
½ teaspoon garlic; minced
2 small carrots
6 can dry white wine
1 tabasco sauce
1 tablespoon sugar
½ cup flour, whipped topping
2 tablespoon chili powder

Marinate: Serves 4.
Whip and short under boiling. Add the sliced mushrooms and peppers and stir to coat. Stir the sugar and carrots. In large bowl, beat the egg to make a 1 ½ tablespoons of the flour. Add shanks and stir it well. Spoon into a frying pan before serving.

Recipe By : DonajehLi Koubmanda

Yield: 8 servings

Rating: 2 stars
Even though everyone said this was great, I thought it was just ‘OK!’ The perserves gave a strange flavor to the sauce.

Rating: 2 stars
This recipe sounded good. When it came out it tasted a bit like a casserole and not a stew. I used venison instead of deer and pork. I liked the way it turned out, but it wasnt very French.

Rating: 5 stars
The g iving this as a gift as it takes about 10 less minutes but using American cheese and not blended with parsley flakes, lime-flavored gelatin, which is my husband’s favorite dish from SuperValu. Just happens to be the creamy sauce. Don’t worry about a pie crust.

In fact, the more chaotic the char-rnn recipe, the more chaotic the reviews tend to be, as if GPT-3 has picked up on the chaos and is trying to somehow produce reviews that could plausibly be written by whatever weird humans are visiting this mangled recipe website.

Geilded Peanuts & Crabmeat Basil

3 beef, boneless chops, or thyme,
1 lamber and flince of fresh diced in diagonal
1 parsley
2 large leeks (for garnish)
¾ teaspoon sesame oil, seeded and peeled
1 any jeans'l-tortilla
1 pepper or cinnamon
¼ cup baesly pinto beans. (Crushed and thawed; minced fine you recipe)
1 tomato base
1 fat in the last 5 inch plates
1 garlic clove; crushed
¼ cup dijon mustard

In a large bowl set aside. Remove the sides from the heat and saute with a fork and cover and cook on Low for 6 to 6 hours.

Yield: 6 servings

Rating: 0 stars
I had high expectations for this recipe, but it was an outright flop. I don’t know if the measurements were somehow incorrect or what, but I ended up with a watery mess that certainly wasn’t crab salad. My girlfriend, who is a new chef liked it fine, but I really wish that the recipe didn’t claim that it made six servings when it really made enough for one, if that’s what we were going to call it.

Rating: 5 stars
Glorious sobs! This recipe will be delicious for a minumum of 1½ Hours to maximum 2½ hours Time to bring lots of fresh sized potatoes in the house or grow in the age may become quite famous

Rating: 3 stars
My friend is having a birthday party and will celebrate with Camping. She’s in charge of the dishes and wanted to serve food at the campfire. I’ll 10th grade, my easy information for her party. It will not sound strange anyway test this task with a note to work rationally.What to take trees, portable tables for chairs, the main meal, food. I personally love the taste of it but has anyone seen a shepherd (wasn’t any spices), oysters. It is sufficient

It does kind of make sense that GPT-3’s recipe reviews aren’t directly complaining about the utter weirdness of char-rnn’s recipes. Probably very few of the recipe reviews in GPT-3’s training data involved baffled complaints about nonexistent or repeated ingredients, or apocalyptic levels of recipe failure. So GPT-3 has learned that even the critical reviews tend to fall within a certain sentiment range. It’s interesting that it tries to emulate char-rnn’s primitive neural net glitchiness - I’ve noticed before that GPT-3 seems to be capable of pretending to be a much simpler neural net.

After generating a few reviews, GPT-3 would sometimes go on to generate new recipes (such as “BBQ Cheeseburger Salad” which contains lime jello, whipped topping, and sliced black beans). It may have been deliberately trying to make them terrible to fit in with char-rnn’s recipes. The recipes are too long to post here, but if you’d like them, let me know and I’ll email them to you.

Speaking of AI-generated recipes: there are some absolute disasters in my book on AI, You Look Like a Thing and I Love You: Amazon - Barnes & Noble - Indiebound - Tattered Cover - Powell’s - Boulder Bookstore

“The future is unwritten. There are best case scenarios. There are worst-case scenarios. Both of them are great fun to write about if you’re a science fiction novelist, but neither of them ever happen in the real world. What happens in the real world is always a sideways-case scenario.”

Bruce Sterling, National Academy of Sciences, Convocation on Technology and Education, Washington, D.C., May 10, 1993

Sea ice breaks apart in various block sizes roughly 40 miles (64 kilometers) off the coast of Antarctica. Antarctic sea ice extends far north in the winter and retreats almost to the coastline every summer. In the 1980s, Antarctica lost 40 billion tons (36.3 billion metric tons) of ice every year. In the last decade, that figure was estimated at a staggering 252 billion tons (229 billion metric tons) per year.

This image and story are taken from our new book “Overview Timelapse.” Click below to learn more or order a copy!


Source imagery: Maxar