
“This brings us to the verge of discussing OOO’s unconventional notion of what an object is. In everyday language, the word ‘object’ often has the connotations of something physical, solid, durable, inhuman or utterly inanimate. In OOO, by contrast, ‘object’ simply means anything that cannot be reduced either downward or upward, which means anything that has a surplus beyond its constituent pieces and beneath its sum total of effects in the world.”

Harman, Graham. Object-Oriented Ontology: A New Theory of Everything. London: Pelican Books, 2018.


From Nature Magazine:

Spark of destruction. Tom Houghton (Media editor). This incredibly powerful moment — a lightning strike during the Taal Volcano eruption in the Philippines back in January — looks like a dramatic oil painting by a nineteenth-century artist. Volcanic lightning is an incredible phenomenon caused by static electricity generated from ash particles colliding in the volcano plume. It results in the most breathtaking examples of destructive power. Credit: Domcar C Lagto/PACIFIC P/SIPA/Shutterstock


“Furthermore, since reality is always radically different from our formulation of it, and is never something we encounter directly in the flesh, we must approach it indirectly. This withdrawal or withholding of things from direct access is the central principle of OOO.”

Harman, Graham. Object-Oriented Ontology: A New Theory of Everything. London: Pelican Books, 2018.


“As a consequence of the slavish “categoryitis” the scientifically illogical, and as we shall see, often meaningless questions “Where do you live?” “What are you?” “What religion?” “What race?” “What nationality?” are all thought of today as logical questions. By the twenty-first century it either will have become evident to humanity that these questions are absurd and anti-evolutionary or men will no longer be living on Earth.”

— R. Buckminster Fuller, Operating Manual for Spaceship Earth


“Motherlands are castles made of glass. In order to leave them, you have to break something—a wall, a social convention, a cultural norm, a psychological barrier, a heart. What you have broken will haunt you. To be an emigré, therefore means to forever bear shards of glass in your pockets. It is easy to forget they are there, light and minuscule as they are, and go on with your life, your little ambitions and important plans, but at the slightest contact the shards will remind you of their presence. They will cut you deep.”

Elif Shafak on What It Means to Belong in Many Places at Once


“It will come as no surprise that Joyce himself coined a word — a neologism, by way of a productive repetition — for this impasse. As we would just as easily be squeezed to death by pure unbroken order (cosmos) as dissipated by pure unleashed chaos, we are enjoined by the hermeneutic imperative to embrace what Joyce called the chaosmos, the chaosmic, the mutual interplay and continual disturbance of one side by the other, which is the condition of possibility of producing novel effects.”

Caputo, John D. Hermeneutics: Facts and Interpretation in the Age of Information. London: Pelican Books, 2018.

The Great Conjunction of Jupiter and Saturn



Credits:  NASA/Bill Ingalls

Have you noticed two bright objects in the sky getting closer together with each passing night? It’s Jupiter and Saturn doing a planetary dance that will result in the Great Conjunction on Dec. 21. On that day, Jupiter and Saturn will be right next to each other in the sky – the closest they have appeared in nearly 400 years!

Skywatching Tips from NASA


Credits: NASA/JPL-Caltech

For those who would like to see this phenomenon for themselves, here’s what to do:

  • Find a spot with an unobstructed view of the sky, such as a field or park. Jupiter and Saturn are bright, so they can be seen even from most cities.
  • An hour after sunset, look to the southwestern sky. Jupiter will look like a bright star and be easily visible. Saturn will be slightly fainter and will appear slightly above and to the left of Jupiter until December 21, when Jupiter will overtake it and they will reverse positions in the sky.
  • The planets can be seen with the unaided eye, but if you have binoculars or a small telescope, you may be able to see Jupiter’s four large moons orbiting the giant planet.

How to Photograph the Conjunction


Credits: NASA/Bill Dunford

Saturn and Jupiter are easy to see without special equipment, and can be photographed easily on DSLR cameras and many cell phone cameras. Here are a few tips and tricks:

  • These planets are visible in the early evening, and you’ll have about 1-2 hours from when they are visible to when they set. A photo from the same location can look completely different just an hour later!
  • Using a tripod will help you hold your camera steady while taking longer exposures. If you don’t have a tripod, brace your camera against something – a tree, a fence, or a car can all serve as a tripod for a several-second exposure.
  • The crescent Moon will pass near Jupiter and Saturn a few days before the conjunction. Take advantage of it in your composition!

Get more tips HERE.

Still have questions about the Great Conjunction?

Our NASA expert answered questions from social media on an episode of NASA Science Live on Thursday, Dec. 17. Watch the recording HERE.

Make sure to follow us on Tumblr for your regular dose of space: http://nasa.tumblr.com.

the influencerization of everything


From this LARB essay by Sarah Brouillette about Caroline Calloway:

We see in her case a set of conditions that are likely to intensify as the publishing industry continues to struggle: toward convergence with social media culture, the self-branding industry, gig work in the form of self-publishing, with a growing army of hungry creatives vying for attention. They are serving a new kind of consumer, too — a topic for another piece — who is drawn less to physical paperbound books and more to free content with options added, like that $100 personal phone call, and to the kinds of subscription-based services that reduce the risk of disappointment if you don’t get what you paid for.

For a long time now, I’ve argued that social media incentivize (and then ultimately compel) the production of the self as a commodity — they reconstitute self-expression as perpetual advertisements for the self, demonstrations of one’s human capital, as well as one’s capacity to leverage attention and, as Brouillette emphasizes, the promotional labor of others. The rise of influencers is indicative of the normalization of these practices, and a harbinger — it seems like most forms of work will eventually be influencerized, and workers will have to leverage their personality, their “personal brand,” to get work or to perform it up to managerial expectations. Taylor Lorenz points out in this piece how this has happened in journalism.

But the other side of the coin that Brouillette gestures toward above seems just as important: how influencerization has changed consumption, how it reflects and drives a destabilization of the object of consumption. In other words, once-static objects (books, etc.) become “content” — fluid, upgradable, networked, subject to spontaneous (or spurious) customization, directly social in that one can immediately recirculate it, comment on it, argue about it, “react” to it with a button, and so on.

It may become increasingly strange to consume objects we cannot immediately imprint with some avatar of ourselves, that we can’t immediately augment (by paying extra or performing some kind of labor). It’s not “interactivity” per se, because it is not reciprocal and it is mostly systematized and delimited by the interfaces through which media is consumed. But it is a matter of manifesting “influence.” No kind of consumption can occur outside the awareness of the asymmetries of attention that directly govern it. 

When one thinks of, say, free-to-play games, it’s easy to construe their constant attempts to milk money from you as annoying. But it may be more accurate to think of that as part of the entertainment, part of the means for subjectivizing the player, for making them feel as though they are being paid attention to, being recognized. 

This is how I understand the “new kind of consumer” Brouillette mentions. The vicarious fantasy inherent in consumption can be supplemented by more direct forms of engagement; consumers no longer need to be trained how to enjoy vicarious, imaginative experiences in the same way they used to. The emulative, mimetic aspects of consumption are more straightforward now, given the channels consumers have to immediately produce their responses and see what reactions they attract. 

Every commodified experience concretizes some aspect of the “influence” that has produced and circulated it, and the process of consuming it is now a matter of tapping into that and trying to realize it somehow for oneself.  


A huge tent was put up over the former site of an insecticide factory in Hangzhou, Zhejiang province, to contain a peculiar smell emanating from polluted soil. The tent in downtown Hangzhou covers about 20,000 square meters and is 36 meters high. It was built on top of where Hangzhou Qingfeng Agricultural Chemical once stood. The factory was relocated in 2009, but the more-than-50-year-old company left many contaminants buried in the ground.

Following an investigation and risk assessment, treatment of the polluted soil was initiated in September. However, in the process of the treatment, a peculiar smell was released, seriously affecting nearby residents.

A woman surnamed Shao, who lives two bus stops away from the site, said she can still smell a pungent odor.

(via http://www.chinadaily.com.cn/china/2014-05/14/content_17506217.htm )

UEFI hacking malware


Security researchers are alarmed: the already-notorious Trickbot malware has been spotted probing infected computers to find out which version of UEFI they’re running. This is read as evidence that Trickbot has figured out how to pull off a really scary feat.

To understand why, you have to understand UEFI: a fascinating, deep, philosophical change to our view of computers, trust, and the knowability of the universe. It’s a tale of hard choices, paternalism, and the race to secure the digital realm as it merges with the physical.

Computers were once standalone: a central processing unit that might be augmented by some co-processors for specialized processes, like a graphics card or even a math co-processor.

These co-pros were subordinate to the CPU though. You’d turn on the computer and it would read a very small set of hardcoded instructions telling it how to access a floppy disk or other storage medium that held the rest of the boot sequence, the code needed to get the system running.

The hardwired instructions were in a ROM that had one job: wake up and feed some instructions to the “computer” telling it what to do, then go back to sleep. But there’s a philosophical conundrum here.

Because the world of computing is adversarial and networked computing is doubly so: there are people who want your computer to do things that are antithetical to your interests, like steal your data or spy on you or encrypt all your files and demand ransom.

To stop this, you need to be able to examine the programs running on your computer and terminate the malicious ones. And therein lies the rub: when you instruct your computer to examine its own workings, how do you know if you can trust it?

In 1983, Ken Thompson (co-creator of C, Unix, etc) was awarded a Turing Award (“computer science’s Nobel Prize”). He gave a fucking bombshell of an acceptance speech, called “Reflections on Trusting Trust.”


Thompson revealed that he had created a backdoor for himself that didn’t just live in Unix, but in the C compiler that people made to create new Unix systems.

Here’s what that means: when you write a program, you produce “high-level code” with instructions like printf("Hello, World!");. Once your program is done, you turn it into machine code, a series of much shorter instructions that your CPU understands (mov dx, msg, etc).

Most programmers can’t read this machine code, and even for those who can, it’s a hard slog. In general, we write our code, compile it and run it, but we don’t examine it. With nontrivial programs, looking at the machine code is very, very hard.

Compilers are treated as intrinsically trustworthy. Give ‘em some source, they spit out a binary, you run the binary. Sometimes there are compiler bugs, sure, and compiler improvements can be a big deal. But compilers are infrastructure: inscrutable and forgotten.

Here’s what Thompson did: he hid a program in his compiler that would check to see whether you were compiling an operating system or a compiler. If you were compiling an OS, it hid a secret login for him inside of it.

If you were compiling a compiler, it hid the program that looked for compilers or operating systems inside of it.

Think about what this means: every OS you compiled had an intentional security defect that the OS itself couldn’t detect.

If you suspected that your compiler was up to no good and wrote your own compiler, it would be compromised as soon as you compiled it. What Thompson did was ask us to contemplate what we meant when we “trusted” something.
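Thompson’s trick can be sketched in a few lines of toy Python, where “compiling” just passes source through and the interesting part is what the compromised compiler appends. All the names here are hypothetical, for illustration only; the real attack lived in machine code.

```python
# Toy model of the "Reflections on Trusting Trust" attack. "Compiling"
# here just returns the source as the "binary"; what matters is what
# the compromised compiler quietly adds along the way.

BACKDOOR = "# secret login for Ken\n"

# The self-replicating payload, injected into any compiler it compiles:
PAYLOAD = (
    'if "def login" in source: binary += BACKDOOR\n'
    'if "def compile" in source: binary += PAYLOAD\n'
)

def evil_compile(source):
    binary = source                  # a real compiler would emit machine code
    if "def login" in source:
        binary += BACKDOOR           # stage 1: backdoor every OS login
    if "def compile" in source:
        binary += PAYLOAD            # stage 2: re-infect every new compiler
    return binary

# Perfectly clean source code, for an OS login and for a new compiler:
clean_login = "def login(user, pw): ...\n"
clean_compiler = "def compile(source): return source\n"

assert BACKDOOR in evil_compile(clean_login)     # the OS is backdoored
assert PAYLOAD in evil_compile(clean_compiler)   # the new compiler re-infects
```

Reading either piece of clean source reveals nothing wrong; the compromise lives one level down, which is exactly Thompson’s point.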

It was a move straight out of Rene Descartes, the reasoning that leads up to “I think therefore I am.” Descartes’ “Discourse on the Method” asks how we can know things about the universe.

He points out that sometimes he thinks he senses something but is wrong - he dreams, he hallucinates, he misapprehends.

If all our reasoning depends on the impressions we get from our senses, and if our senses are sometimes faulty, how can we reason at all?

Descartes wants a point of certainty, one thing he *knows* to be absolutely true. He makes the case that if you can be certain of one thing, you can anchor everything else to this point and build up a massive edifice of trustable knowledge that all hangs off of this anchor.

Thompson is basically saying, “You thought you had descartesed your way into a trustable computing universe because of the axiom that I would never poison your lowest-level, most fundamental tools.”



(But, you know, in a nice way: an object lesson to serve as a wake-up call before computers fully merged with the physical world to form a global, species-wide digital nervous system whose untrustworthy low-level parts were foolishly, implicitly trusted).

But processors were expensive and computers were exploding. PCs running consumer operating systems like Windows and Mac OS (and more exotic ones like GNU/Linux and various Unices) proliferated, and they all shared this flawed security model.

They all relied on the operating system to be a faithful reporter of the computer’s internals, and operated on the assumption that they could use programs supervised by the OS to detect and terminate malicious programs.

But starting in 1999, Ken Thompson’s revenge was visited upon the computing world. Greg Hoglund released Ntrootkit, a proof-of-concept malware that attacked Windows itself, so that the operating system would lie to antivirus programs about what it was doing and seeing.

In Descartes-speak, your computer could no longer trust its senses, so it could no longer reason. The nub of trust, the piton driven into the mountain face, was made insecure and the whole thing collapsed. Security researchers at big companies like Microsoft took this to heart.

In 2002, Peter Biddle and his team from Microsoft came to EFF to show us a new model for computing: “Trusted Computing” (codenamed “Palladium”).


Palladium proposed to give computers back their nub of Descartesian certainty. It would use a co-processor, but unlike a graphics card or a math co-pro, it would run before the CPU woke up and did its thing.

And unlike a ROM, it wouldn’t just load up the boot sequence and go back to sleep.

This chip - today called a “Secure Enclave” or a “Trusted Platform Module” (etc) - would have real computing power, and it would remain available to the CPU at all times.

Inside the chip was a bunch of cool cryptographic stuff that provided the nub of certainty. At the start of the boot, the TPM would pull the first stages of the boot-code off of the drive, along with a cryptographic signature.

A quick crypto aside:

Crypto is code that mixes a key (a secret known to the user) with text to produce a scrambled text (a “ciphertext”) that can only be descrambled by the key.

Dual-key crypto has two keys. What one scrambles, the other descrambles (and vice-versa).

With dual-key crypto, you keep one key secret (the “private key”) and you publish the other one (the “public key”). If you scramble something with a private key, then anyone can descramble it with your public key and know it came from you.

If you scramble it *twice*, first with your private key and then with your friend’s public key, then they can tell it came from you (because only your private key’s ciphertexts can be descrambled with your public key).

And *you* can be certain that only they can read it (because only their private key can descramble messages that were scrambled with their public key).
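The scramble/descramble symmetry above can be sketched with textbook RSA and deliberately tiny primes. This is a toy for illustration only: real dual-key crypto uses enormous keys, padding, and vetted libraries.

```python
# Textbook RSA with tiny primes: scrambling with one key is undone by
# the other, and the same operation works in either direction.

def make_keypair(p, q, e):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)
    return (e, n), (d, n)               # (public key, private key)

def scramble(m, key):
    exp, n = key
    return pow(m, exp, n)               # works with either key

my_pub, my_priv = make_keypair(61, 53, 17)
# Friend's modulus (89*97) is larger than mine (61*53), so the
# doubly-scrambled value always fits:
friend_pub, friend_priv = make_keypair(89, 97, 5)

message = 42

# Scramble twice: first with my private key, then my friend's public key.
signed = scramble(message, my_priv)
sealed = scramble(signed, friend_pub)

# My friend descrambles with their private key, then my public key.
unsealed = scramble(sealed, friend_priv)
recovered = scramble(unsealed, my_pub)

assert recovered == message   # came from me, and only they could read it
```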

Code-signing uses dual-key crypto to validate who published some code.

Microsoft can make a shorter version of its code (like a fingerprint) and then scramble it with its private key. The OS that came with your computer has a copy of MSFT’s public key. When you get an OS update, you can descramble the fingerprint with that built-in key.

If it matches the update, then you know that Microsoft signed it and it hasn’t been tampered with on its way to you. If you trust Microsoft, you can run the update.
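That whole flow (fingerprint, sign, verify) can be sketched the same way. Again a toy: the primes are small but well known, and the key names are stand-ins, not Microsoft’s real scheme.

```python
import hashlib

# Toy code-signing: the vendor signs a fingerprint of the update with
# its private key; the OS checks it with the baked-in public key.

P, Q = 104729, 1299709                  # the 10,000th and 100,000th primes
N = P * Q
E = 65537                               # public exponent, shipped with the OS
D = pow(E, -1, (P - 1) * (Q - 1))       # private exponent, kept by the vendor

def fingerprint(code):
    # hash the update down to a number smaller than the modulus
    return int.from_bytes(hashlib.sha256(code).digest(), "big") % N

def sign(code):                         # vendor side: has D
    return pow(fingerprint(code), D, N)

def verify(code, sig):                  # user side: only E and N
    return pow(sig, E, N) == fingerprint(code)

update = b"december cumulative security update"
sig = sign(update)

assert verify(update, sig)              # untampered update: accepted
assert not verify(update + b"!", sig)   # tampered in transit: rejected
```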

But… what if a virus replaces Microsoft’s public keys with its own?

That’s where Palladium’s TPM comes in. It’s got the keys hardcoded into it. Programs running on the CPU can only ask the TPM to do very limited things like ask it to sign some text, or to check the signature on some text.

It’s a kind of god-chip, running below the most privileged level of user-accessible operations. By design, you - the owner of the computer - can demand things of it that it is technically capable of doing, and it can refuse you, and you can’t override it.

That way, programs running even in the most privileged mode can’t compromise it.

Back to our boot sequence: the TPM fetches some startup code from the disk along with a signature, and checks to see whether the OS has been signed by its manufacturer.

If not, it halts and shows you a scary error message. Game over, Ken Thompson!

It is a very cool idea, but it’s also very scary, because the chip doesn’t take orders from Descartes’ omnibenevolent God.

It takes orders from Microsoft, a rapacious monopolist with a history of complicity with human rights abuses. Right from that very first meeting the brilliant EFF technologist Seth Schoen spotted this (and made the Descartes comparison):


Seth identified a way of having your cake and eating it too: he proposed a hypothetical thing called an “owner override” - a physical switch that, when depressed, could be used to change which public keys lived in the chip.

This would allow owners of computers to decide who they trusted and would defend them against malware. But what it *wouldn’t* do is defend tech companies shareholders against the owner of the computer - it wouldn’t facilitate DRM.

“Owner override” is a litmus test: are you Descartes’ God, or Thompson’s Satan?

Do you want computers to allow their owners to know the truth? Or do you want computers to bluepill their owners, lock them in a matrix where you get to decide what is true?

A month later, I published a multi-award-winning sf story called “0wnz0red” in Salon that tried to dramatize the stakes here.


Despite Seth’s technical clarity and my attempts at dramatization, owner override did not get incorporated into trusted computing architectures.

Trusted computing took years to become commonplace in PCs. In the interim, rootkits proliferated. Three years after the Palladium paper, Sony-BMG deliberately turned 6m audio CDs into rootkit vectors that would silently alter your OS when you played them from a CD drive.

The Sony rootkit broke your OS so that any filename starting with $SYS$ didn’t show up in file listings, and $SYS$ programs wouldn’t show up in the process monitor. Accompanying the rootkit was a startup program (starting with $SYS$) that broke CD ripping.

Sony infected hundreds of thousands of US gov and mil networks. Malware authors - naturally enough - added $SYS$ to the files corresponding with their viruses, so that antivirus software (which depends on the OS for information about files and processes) couldn’t detect it.
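What the rootkit’s hook effectively did amounts to a filter on what the OS reports. Here is a minimal sketch of that logic; the real thing patched Windows kernel calls, not a Python list.

```python
# The Sony rootkit's hiding trick, reduced to its essence: strip any
# name beginning with $SYS$ out of a listing before reporting it.

HIDE_PREFIX = "$SYS$"

def rootkit_filter(listing):
    return [name for name in listing if not name.startswith(HIDE_PREFIX)]

on_disk = ["report.doc", "$SYS$DRMServer.exe", "$SYS$somevirus.exe"]

# Antivirus asks the OS what's on disk; the hooked call answers:
assert rootkit_filter(on_disk) == ["report.doc"]
```

Any malware that adopts the prefix rides along for free, because the antivirus depends on the very call that is lying.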

It was an incredibly reckless, depraved act, and it wasn’t the last. Criminals, spies and corporations continued to produce rootkits to attack their adversaries (victims, rival states, customers) and trusted computing came to the rescue.

Today, trusted computing is widely used by the world’s largest tech companies to force customers to use their app stores, their OSes, their printer ink, their spare parts. It’s in medical implants, cars, tractors and kitchen appliances.

None of this stuff has an owner override. In 2012, I gave a talk to Google, Defcon and the Long Now Foundation about the crisis of owner override, called “The Coming Civil War Over General Purpose Computing.”


It proposed a way that owner override, combined with trusted computing, could allow users to resist both state and corporate power, and it warned that a lack of technological self-determination opened the door to a parade of horribles.

Because once you have a system that is designed to override owners - and not the other way around - then anyone who commands that system can, by design, do things that the user can’t discern or prevent.

This is the *real* trolley problem when it comes to autonomous vehicles: not “who should a car sacrifice in a dangerous situation?” but rather, “what happens when a car that is designed to sometimes kill its owner is compromised by Bad Guys?”


The thing is, trusted computing with an owner override is pretty magical. Take the Introspection Engine, a co-processor in a fancy Iphone case designed by Edward Snowden and Bunnie Huang. It’s designed to catch otherwise undetectable mobile malware.


You see, your phone doesn’t just run Ios or Android; the part that interfaces with the phone system - the baseband radio - runs an ancient, horribly insecure OS, and if it is infected, it can trick your phone’s senses, so that it can no longer reason.

The Introspection Engine is a small circuit board that sandwiches between your phone’s mainboard and its case, making electrical contact with all the systems that carry network traffic.

This daughterboard has a ribbon cable that snakes out of the SIM slot and into a slightly chunky phone case that has a little open source hardware chip with fully auditable code and an OLED display.

This second computer monitors the electrical signals traveling on the phone’s network buses and tells you what’s going on. This is a user-accessible god-chip, a way for you to know whether your phone is hallucinating when it tells you that it isn’t leaking your data.

That’s why it’s called an “Introspection Engine.” It lets your phone perch at an objective remove and understand how it is thinking.

(If all this sounds familiar, it’s because it plays a major role in ATTACK SURFACE, the third Little Brother book)


The reason the Introspection Engine is so exciting is that it is exceptional. The standard model for trusted computing is that it treats everyone *except* the manufacturer as its adversary - including you, the owner of the device.

This opens up many different sets of risks, all of which have been obvious since 1999’s Ntrootkit, and undeniable since 2005’s Sony Rootkit.

I. The manufacturer might not have your interests at heart.

In 2016, HP shipped a fake security update to its printers, tricking users into installing a system that rejected their third-party ink, forcing them to pay monopoly prices for HP products.


II. An insider at the company may not have your interests at heart.

Multiple “insider threat” attacks have been executed against users. Employees at AT&T, T-Mobile, even Roblox have accepted bribes to attack users on behalf of criminals.


III. A government may order the company to attack its users.

In 2017 Apple removed all working VPNs from its Chinese app stores, as part of the Chinese state’s mass surveillance program (1m members of religious minorities were subsequently sent to concentration camps).

Apple’s trusted computing prevents users from loading apps that aren’t in its app stores, meaning that Apple’s decisions about which apps you can run on your Iphone are binding on you, even if you disagree.


IV. Third parties may exploit a defect in the trusted computing system and attack users in undetectable ways that users can’t prevent.

By design, TPMs can’t be field updated, so if there’s a defect in them, it can’t be patched.

Checkm8 exploits a defect in eight generations of Apple’s mobile TPM. It’s a proof-of-concept released to demonstrate a vulnerability, not malware (thankfully).


But there have been scattered, frightening instances of malware that attacks the TPM - that suborns the mind of God so that your computer ceases to be able to reason. To date, these have all been associated with state actors who used them surgically.

State actors know that the efficacy of their cyberweapons is tied to secrecy: once a rival government knows that a system is vulnerable, they’ll fix it or stop using it or put it behind a firewall, so these tools are typically used parsimoniously.

But criminals are a different matter (and now, at long last, we’re coming back to Trickbot and UEFI) (thanks for hanging in there).

UEFI (“You-Eff-Ee”) is a trusted computing system that computer manufacturers use to prevent unauthorized OSes from running on the PCs they sell you.

Mostly, they use this to prevent malicious OSes from running on the hardware they manufacture, but there have been scattered instances of it being used for monopolistic purposes: to prevent you from replacing their OS with another one (usually a flavor of GNU/Linux).

UEFI is god-mode for your computer, and a compromise to it would be a Sony Rootkit event, but 15 years later, in a world where systems are more widespread and used for more critical applications, from running power plants to handling multimillion-dollar transactions.

Trickbot is very sophisticated malware generally believed to be run by criminals, not a government. Like a lot of modern malware, there’s a mechanism for updating it in the field with new capabilities - both attacks and defenses.

And Trickbot has been observed in the wild probing infected systems’ UEFI. This leads security researchers to believe that Trickbot’s authors have figured out how to compromise UEFI on some systems.


Now, no one has actually observed UEFI being compromised, nor has anyone captured any UEFI-compromising Trickbot code. The thinking goes that Trickbot only downloads the UEFI code when it finds a vulnerable system.

Running in UEFI would make Trickbot largely undetectable and undeletable. Even wiping and restoring the OS wouldn’t do it. Remember, TPMs are designed to be unpatchable and tamper-resistant. The physical hardware is designed to break forever if you try to swap it out.

If this is indeed what’s going on, it’s the first instance in which a trusted computing module was used to attack users by criminals (not governments or the manufacturer and its insiders). And Trickbot’s owners are really bad people.

They’ve hired out to the North Korean state to steal from multinationals; they’ve installed ransomware in big companies, and while their footprint has waned, they once controlled 1,000,000 infected systems.

You can check your UEFI to see if it’s vulnerable to tampering:


and also determine whether it has been compromised:


But this isn’t the end, it’s just getting started. As Seth Schoen warned us in 2002, the paternalistic mode of computing has a huge, Ken Thompson-shaped hole in it: it requires you trust the benevolence of a manufacturer, and, crucially, they know you don’t have a choice.

If companies knew that you *could* alter whom you trusted, they would have to work to earn and keep your trust. If governments knew that owners could change whom their computers trusted, they’d understand that their targets would simply shift tactics if they ordered a company to compromise its TPMs.

Some users would make foolish decisions about whom to trust, but they would also have recourse when a trusted system was revealed to be defective. This is a fight that’s into its third decade, and the stakes have never been higher.

Sadly, we are no closer to owner override than we were in 2002.


Albarran Cabrera   —–   Instagram

Remembering the future

Our first book, “Remembering the future,” was first published by Editorial RM in 2018 and sold out twice at RM (the last time during Paris Photo 2019). Now, one year later, it has been printed again, this time with a different cover and two different images inside (the ones on the new front and back covers, of course).
A few copies of the previous editions are still available in some of our galleries, and if you want the new one, you can preorder it on the Editorial RM website.