Google is now using its reCAPTCHA technique of outsourcing image-tagging to humans, a process known as ‘human-based computation’, to assist in the development of driverless cars.
Tim O'Reilly writes about the reality that more and more of our lives – including whether you end up seeing this very sentence! – are in the hands of “black boxes”: algorithmic decision-makers whose inner workings are a secret from the people they affect.
O'Reilly proposes four tests to determine whether a black box is trustable:
1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.
2. Success is measurable.
3. The goals of the algorithm’s creators are aligned with the goals of the algorithm’s consumers.
4. It leads its creators and its users to make better long-term decisions.
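The four tests can be read as a conjunctive checklist: a black box earns trust only if it clears all of them. Here is a minimal sketch of that reading in Python (the field names are my own paraphrases, not O'Reilly's wording):

```python
from dataclasses import dataclass

@dataclass
class BlackBoxAudit:
    """One yes/no field per test; names are paraphrases of O'Reilly's four criteria."""
    outcome_stated_and_verifiable: bool   # test 1
    success_measurable: bool              # test 2
    creator_consumer_goals_aligned: bool  # test 3
    improves_long_term_decisions: bool    # test 4

    def trustable(self) -> bool:
        # The framework is conjunctive: failing any single test fails the audit.
        return all([
            self.outcome_stated_and_verifiable,
            self.success_measurable,
            self.creator_consumer_goals_aligned,
            self.improves_long_term_decisions,
        ])

# e.g. an aviation autopilot plausibly passes all four tests
autopilot = BlackBoxAudit(True, True, True, True)
print(autopilot.trustable())  # True
```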
O'Reilly goes on to test these assumptions against some of the existing black boxes that we trust every day, like aviation autopilot systems, and shows that this is a very good framework for evaluating algorithmic systems.
But I have three important quibbles with O'Reilly’s framing. The first is absolutely foundational: the reason that these algorithms are black boxes is that the people who devise them argue that releasing details of their models will weaken the models’ security. This is nonsense.
For example, Facebook tweaked its algorithm to downrank “clickbait” stories. Adam Mosseri, Facebook’s VP of product management, told TechCrunch that “Facebook won’t be publicly publishing the multi-page document of guidelines for defining clickbait because ‘a big part of this is actually spam, and if you expose exactly what we’re doing and how we’re doing it, they reverse engineer it and figure out how to get around it.’”
There’s a name for this in security circles: “Security through obscurity.” It is as thoroughly discredited an idea as is possible. Since as far back as the 19th century, security experts have decried the idea that robust systems can rely on secrecy as their first line of defense against compromise.
The reason the algorithms O'Reilly discusses are black boxes is because the people who deploy them believe in security-through-obscurity. Allowing our lives to be manipulated in secrecy because of an unfounded, superstitious belief is as crazy as putting astrologers in charge of monetary policy, no-fly lists, hiring decisions, and parole and sentencing recommendations.
So there’s that: the best way to figure out whether we can trust a black box is to smash it open, demand that it be exposed to the disinfecting power of sunshine, and give no quarter to the ideologically bankrupt security-through-obscurity court astrologers of Facebook, Google, and the TSA.
Then there’s the second issue, which is important whether or not we can see inside the black box: what data was used to train the model? Or, in traditional scientific/statistical terms, what was the sampling methodology?
Garbage in, garbage out is a principle as old as computer science, and sampling bias is a problem that’s as old as the study of statistics. Algorithms are often deployed to replace biased systems with empirical ones: for example, predictive policing algorithms tell the cops where to look for crime, supposedly replacing racially biased stop-and-frisk with data-driven systems of automated suspicion.
But predictive policing training data comes from earlier, human-judgment-driven stop-and-frisk projects. If the cops only make black kids turn out their pockets, then all the drugs, guns and contraband they find will be in the pockets of black kids. Feed this data to a machine learning model and ask it where the future guns, drugs and contraband will be found, and it will dutifully send the police out to harass more black kids. The algorithm isn’t racist, but its training data is.
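The feedback loop in that example can be made concrete with a toy simulation (the numbers here are entirely hypothetical, mine rather than from any real study): two neighborhoods with identical true contraband rates, one searched ten times as often. A naive model that allocates future patrols in proportion to past finds simply reproduces the search bias.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME true contraband rate.
TRUE_RATE = {"A": 0.05, "B": 0.05}

# Historical stops are biased: A is searched 10x as often as B.
stops = ["A"] * 10_000 + ["B"] * 1_000

# "Training data": where contraband was historically found.
finds = {"A": 0, "B": 0}
for hood in stops:
    if random.random() < TRUE_RATE[hood]:
        finds[hood] += 1

# A naive model that sends future patrols in proportion to past
# finds inherits the sampling bias wholesale.
total = finds["A"] + finds["B"]
patrol_share = {h: finds[h] / total for h in finds}
print(patrol_share)  # roughly 90% of patrols sent to A, despite equal rates
```

The model isn’t “wrong” about its data; the data was wrong about the world.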
There’s a final issue, which is that algorithms have to have their models tweaked based on measurements of success. It’s not enough to merely measure success: the errors in the algorithm’s predictions also have to be fed back to it, to correct the model. That’s the difference between Amazon’s sales-optimization and automated hiring systems. Amazon’s systems predict ways of improving sales, which the company tries: the failures are used to change the model to improve it. But automated hiring systems blackball some applicants and advance others, and the companies that make these systems don’t track whether the excluded people go on to be great employees somewhere else, or whether the recommended hires end up stealing from the company or alienating its customers.
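The difference between the two loops can be sketched numerically. This is a toy illustration of my own, not a model of Amazon’s actual systems: an estimator that receives error feedback converges on the true value, while one that never measures its own errors stays wherever it started.

```python
import random

random.seed(1)

TRUE_VALUE = 10.0  # the quantity the model is trying to predict

def run(feedback: bool, steps: int = 500, lr: float = 0.05) -> float:
    estimate = 0.0
    for _ in range(steps):
        observation = TRUE_VALUE + random.gauss(0, 1)  # noisy outcome
        if feedback:
            # Sales-optimization-style loop: each measured error
            # nudges the model toward the truth.
            estimate += lr * (observation - estimate)
        # Hiring-style loop: predictions are made, but outcomes for
        # rejected candidates are never measured, so nothing updates.
    return estimate

print(round(run(feedback=True), 1))  # converges close to 10.0
print(run(feedback=False))           # stays at 0.0 forever
```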
I like O'Reilly’s framework for evaluating black boxes, but I think we need to go farther.
Meal-replacement drinks were made popular by US firm Soylent over the past few years. Founded in 2013 by Rob Rhinehart, the company was shipping 30,000 “meals” a month by the following year, and Rhinehart told Bloomberg in January this year that sales were up 300 per cent. Soylent is now valued at more than US$100 million.
Its success has seen similar start-ups springing up around the world. India’s SupermealX, Australia’s Aussielent and British-based Huel all claim to offer nutritionally complete drinks.
Shao Wei, who was working as a programmer in Hangzhou, was also intrigued by the idea. As a start-up worker, he had been looking for healthy meal options for those who had little time away from their computers. In 2014, he quit his job and set up his own meal-substitute brand, Ruffood. Its Chinese name – ruo fan in pinyin – means “like rice”.
n. [mass noun] a form of dance music of the 1960s and 1940s, presented in the 1960s and 1940s, and then in the 1960s, consisting of monochrome convocations and formal drugs.
[count noun] a person who is characterized by a high price of the form of the mononomic process of the character.
the process of transporting or recording a computer system and then making a straight line into a signal to decide the tendency to decide the first passage of the operation.
v. [with obj.] remove the mononomite of (an article or process): the first tax can be mononomiched of single drugs.
make (something) a monoconficial tendency: a moconal railway landing was mononophed from the building.
mononophist n. mononophilia n. mononophilia n. mononophilia n.
A George Washington University researcher has identified a 6,200-year-old indigo-blue fabric from Huaca Prieta, Peru, making it one of the oldest-known cotton textiles in the world and the oldest known textile decorated with indigo blue.
The discovery marks the earliest use of indigo as a dye, a technically challenging color to produce. According to Jeffrey Splitstoser, lead author of a paper on the discovery and assistant research professor of anthropology at the George Washington University, the finding speaks to the sophisticated textile technology ancient Andean people developed 6,200 years ago.
“Some of the world’s most significant technological achievements were developed first in the New World,” said Dr. Splitstoser. “Many people, however, remain mostly unaware of the important technological contributions made by Native Americans, perhaps because so many of these technologies were replaced by European systems during the conquest.”
“Numbers do not seem to work well with regard to deep time. Any number above a couple of thousand years—fifty thousand, fifty million—will with nearly equal effect awe the imagination.”
A set of six #abstract #photographs An original A4 sized Abstract #drawing unique to each edition (Randomly Chosen) White ink on Noir paper. Signed and dated by the artist. Photographs, shot on an OLYMPUS EM-10 micro-four-thirds camera and a RI by Irving Paul Pereira (via http://flic.kr/p/LdxnHs )
On July 17, 130 million cubic yards of ice and rock suddenly let go from a glacier in Tibet, hurtling down six-tenths of a mile and killing nine herders along with 350 sheep and 110 yaks. Scientists were baffled. Now, by examining satellite images before and after the event, they think it is an example of a rare glacial surge, when a glacier moves at 10 to 100 times its normal speed. Some researchers believe that climate change at high elevations can trigger such surges.
In the days following the Rutog avalanche, cracks appeared in nearby glaciers. Temperatures on the Tibetan plateau have risen 0.4 degrees Celsius per decade, twice the global average. One-tenth of the permafrost has melted in just the past decade. Rapidly melting glaciers have increased the number of lakes by 14 percent since 1970, and 80 percent of existing lakes have grown, flooding towns and pastures. In addition, precipitation in the area has increased 12 percent since 1960.
I expected particular types of experiences, as did my volunteers. We thought that mystical unitive enlightenment-like states would predominate. These states are imageless, content-free, ego-dissolving states of being merged with a powerful but undifferentiated source of being. Instead, these types of experiences were very rare. Rather, volunteers described entering into a world of intensely saturated light, buzzing and morphing, full of “things” — all manner of objects, and oftentimes sentient beings who were awaiting them and interacted with them. Perhaps if I had used another compound with more unitive properties, such as 5-methoxy-DMT, my expectations would have been met more consistently. But I studied DMT, and this is what we found.
Consider the following as a rule. Whenever you have nonlinearity, the average doesn’t matter anymore. Hence:
The more nonlinearity in the response, the less informational the average.
For instance, your benefit from drinking water would be linear if ten glasses of water were ten times as good as one single glass. If that is not the case, then necessarily the average water consumption matters less than something else that we will call “unevenness”, or volatility, or inequality in consumption. Say you need an average of one liter of water a day and I gave you ten liters one day and none for the remaining nine days, for an average of one liter a day. Odds are you won’t survive. You want your quantity of water to be as evenly distributed as possible. Within the day, you do not need to consume the same amount of water every minute, but at the scale of the day, you want maximal evenness.
From an informational standpoint, someone who tells you “We will supply you with one liter of water per day on average” is not conveying much information at all; there needs to be a second dimension: the variations around such an average. You are quite certain that you will die of thirst if his average comes from a cluster of a hundred liters every hundred days.
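The water example can be run in a few lines. Assume (as a toy stand-in for the real dose-response curve) that benefit saturates at one liter per day; then two schedules with the identical average yield very different average benefits, which is exactly why the average alone is uninformative under nonlinearity:

```python
# Toy concave "benefit" function: value saturates at one liter/day.
def benefit(liters: float) -> float:
    return min(liters, 1.0)

# Two ten-day schedules with the SAME average intake: 1 liter/day.
even   = [1.0] * 10          # one liter every day
uneven = [10.0] + [0.0] * 9  # ten liters once, then nothing

avg_benefit_even   = sum(benefit(x) for x in even) / 10    # 1.0
avg_benefit_uneven = sum(benefit(x) for x in uneven) / 10  # 0.1

print(avg_benefit_even, avg_benefit_uneven)
```

Same average in, wildly different outcomes: the second dimension (the variation) carries the information.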
The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.
“Would it be an exaggeration to say that in the unconscious there is necessarily less cruelty and terror, and of a different type, than in the consciousness of an heir, a soldier, or a Chief of State? The unconscious has its horrors, but they are not anthropomorphic. It is not the slumber of reason that engenders monsters, but vigilant and insomniac rationality.”
– Deleuze and Guattari, Anti-Oedipus, 112
Thinking about this in relation to Trump and the problem of labelling - or rather, psychopathologizing - him as a ‘narcissist’, and other terms up to ‘raving lunatic’ (which, rightly, one cannot say). The idea that what’s wrong with Trump is somehow internal to him, and not related to the economic and political system which produced him (qua free-speaker, ‘parrhesiast’) and which he ostensibly rails against, seems like a dumb mistake for anyone on the left to make.
Central to Deleuze and Guattari’s text, an “introduction to the nonfascist life” in Foucault’s preface, is the question from Wilhelm Reich, “How could the masses be made to desire their own repression?” To a degree, that’s the same question being asked about Brexit, and Trump (Clinton’s “basket of deplorables” comment is both an example of the accusation of racism being turned around into a sign of superiority, and a bald statement of fact). The potency of ideas around ‘sovereignty’ and ‘Make America Great Again’ is not simply an appeal to base libido but the circularity of ‘psychic repression’ with ‘social repression’- although Reich “did not succeed in determining the insertion of desire into the economic infrastructure itself, the insertion of the drives into social production […] Better to depart in search of the Orgone, he said to himself, in search of the vital and cosmic element of desire, than to continue being a psychoanalyst under such conditions.”
It is not the slumber of reason that engenders monsters, but vigilant and insomniac rationality. This is the other problem with the Trump-analysis - that we’ve departed from a supposed past in which politics was conducted by rational debate, not emotion and traditional positions, and that somehow ‘data’ should have made things uncontroversial. What if our problem is indeed an excess of rationality of a certain kind - economic? That’s certainly the position of André Gorz; and in An American Utopia, Fredric Jameson argues that we need to revolt against an idea of ‘efficiency’ determining our political and social lives. I haven’t read it, but the metaphor of sleep brings to mind Jonathan Crary’s 24/7: Late Capitalism and the Ends of Sleep.
Trump, unfortunately, is not a nightmare from which we are trying to awake - but a day-state which we cannot shake off.
“Today, Homo sapiens is faced with a rapid modification of his environment, a transformation for which he is the involuntary collective agent. I am not implying that our species is threatened with extinction or that the “end of the world” is approaching. I am not preaching millenarianism. Rather, I would like to point out an alternative. Either we cross a new threshold, enter a new stage of hominization, by inventing some human attribute that is as essential as language but operates at a much higher level, or we continue to “communicate” through the media and think within the context of separate institutions, which contribute to the suffocation and division of intelligence. In the latter case we will no longer be confronted only by the problems of power and survival. But if we are committed to the process of collective intelligence, we will gradually create the technologies, sign systems, forms of social organization and regulation that enable us to think as a group, concentrate our intellectual and spiritual forces, and negotiate practical real-time solutions to the complex problems we must inevitably confront. We will gradually learn … to collectively invent ourselves as a species.”
“The great merit of the capitalist system, it has been said, is that it succeeds in using the nastiest motives of nasty people for the ultimate benefit of society.”
“Bad philosophy cannot easily be countered by good philosophy – argument and explanation – because it holds itself immune. But it can be countered by progress. People want to understand the world, no matter how loudly they may deny that. And progress makes bad philosophy harder to believe.”
– Deutsch, David. The Beginning of Infinity: Explanations that Transform the World. London: Allen Lane, 2011. (via carvalhais)
Report of the Annexation of Switzerland as a Border by the K.R.E.V KonungaRikena Elgaland-Vargaland, Zürich main station, 14.05.2016
With Adrian Notz, Leif Elggren, Carl Michael von Hausswolff, Dave Phillips as special guest and the people of Zürich and beyond.
Imagine if we had a day each year when we all went around asking cancer patients if they were “okay”, yet didn’t fund practical medical help for them or give them any hope? The idea is laughable, yet the analogy is real. I’m not against awareness raising per se. In many fields it does a lot of good and changes lives for the better. And most of those who work in these fields are passionate about their cause and want to help. Awareness raising without action plans for those whose awareness has been raised is cruel. Unconscionable. And utterly disgraceful.
This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%. We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces.
Facebook’s Mission Statement states that your objective is to “make the world more open and connected”. In reality you are doing this in a totally superficial sense.
If you do not distinguish between child pornography and documentary photographs from a war, you will simply promote stupidity and fail to bring human beings closer to each other.
To pretend that it is possible to create common, global rules for what may and what may not be published, only throws dust into peoples’ eyes.
Building and maintaining an n-to-n communications platform for over a billion *daily* active users across multiple access platforms *is* difficult and *is* hard, and you’ve done it, and congratulations, that was lots of work and effort. You - and your Valley compatriots - talk excitedly and breathlessly about solving Hard Problems and Disrupting Things, but other areas - areas that are *also* legitimate hard problems, like content moderation and community moderation and abuse (which isn’t even a new thing!) - do not appear to interest you. They appear to interest you to such a little degree that it looks like you’ve given up, *compared to* the effort that’s put into other hard problems.
You can’t have it both ways. You can’t use rhetoric to say that your people - not just engineers - are the best and the brightest working to solve humanity’s problems without also including the asterisk that says “Actually, *not all hard problems*. Not all difficult problems. Just some. Just the engineering ones, for example."
What you’re doing right now - with your inflexible process that’s designed to be efficient and work at scale without critically being able to deal *at scale* with nuance and context (which, I’d say, is your difficult problem and a challenge you should *relish* - how do you deal with nuance at scale in a positive manner?!) smacks of algorithmic and system-reductionism.
It is tempting to make every fiasco at Facebook about the power (and the abuse of power) of the algorithm. The "napalm girl” controversy does not neatly fit that storyline. A little-known team of humans at Facebook decided to remove the iconic photo from the site this week.
That move revealed, in a klutzy way, just how much the company is struggling internally to exercise the most basic editorial judgment, despite claims by senior leadership that the system is working.
As in the case of Ut’s picture, the decision over whether or not to publicly share photographs like the two East Liverpool ones ought to be in the hands of highly trained photo editors, people who not only have the knowledge to understand the “news value” of the photographs, but who have also wrestled with the different underlying ethical problems.
However much any editor’s decisions might be flawed at times, at the very least we can be certain that they have thought about the underlying problems, that, in other words, we’re looking at the end result of an educated process (regardless of whether or not we end up agreeing with it). The world of Facebook does away with this.
When Edward Snowden flew to Hong Kong with thumb-drives full of damning US government documents, he assumed his freedom was forfeit: he didn’t even make an escape plan.
But after the explosive revelations of mass, illegal US spying, people around the world determined that they would save Snowden from the fate of Chelsea Manning: years of torture, decades of imprisonment. Among them was a Canadian human rights lawyer, Robert Tibbo, who had represented many of the teeming masses of refugees crammed into Hong Kong’s asylum-seeker ghetto. Tibbo and his clients shuttled Snowden from shanty to shack to cramped apartment for days, hiding him in plain sight in Kowloon’s Lai Chi Kok district among Vietnamese, Indonesian, Filipino, African and Sri Lankan asylum seekers who endure years of grinding poverty in their bid to make new lives away from their home countries.
Canada’s National Post conducted a long, wide-ranging interview with Tibbo about Snowden’s unlikely escape, filling in the blanks with information from Wikileaks volunteers, other lawyers, Laura Poitras, and Snowden himself. The tale of the vulnerable people who selflessly hid Snowden is the main meat of the story, but perhaps more salient – given the oft-repeated smear that Snowden was a Russian spy – is the story of how Snowden ended up in Moscow.
Twine is a tool that lets you make point-and-click games that run in a web browser—what a lot of people refer to as “choose your own adventure” or CYOA games. It’s pretty easy to make a game, which means that the Twine community is fairly big and diverse.
There are a lot of tools that you can use to do information architecture and to sketch out processes. Visio, PowerPoint, Keynote, or Omnigraffle, for example. In the programming world, some people use UML tools to draw pictures of how a program should operate, and then turn that into code, and a new breed of product prototyping apps are blurring the line between design and code, too. But it has always bummed me out that when you draw a picture on a computer it is, for the most part, just a picture. Why doesn’t the computer make sense of those boxes and arrows for you? Why is it so hard to turn a picture of a web product into a little, functional website?
This is a huge topic — why are most digital documents not presented as dynamic programs? (One good recent exploration of the subject is Bret Victor’s “Up and Down the Ladder of Abstraction.”) And in some ways the Twine interface is a very honest testing and prototyping environment, because it is so good at modeling choices (as in, choose your own adventure).
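The answer Twine gives to that question - why can’t the boxes and arrows *be* the program? - fits in a handful of lines. Here is a hypothetical sketch of the core idea (my own illustration, not Twine’s actual format): passages are nodes, labeled choices are edges, and “playing” is just walking the graph.

```python
# Hypothetical passage graph: the diagram IS the program.
# Each node maps to (text, {choice label: next node}).
story = {
    "start": ("You wake in a forest.", {"go north": "river", "sleep": "end"}),
    "river": ("A river blocks the path.", {"swim": "end"}),
    "end":   ("The story ends.", {}),
}

def play(node: str = "start", script=None) -> list:
    """Walk the graph; `script` is an optional list of canned choices.
    Returns the list of passages visited."""
    visited = []
    while True:
        text, choices = story[node]
        visited.append(node)
        print(text)
        if not choices:          # no outgoing edges: the story is over
            return visited
        # Take a scripted choice if provided, otherwise ask the player.
        pick = script.pop(0) if script else input(f"Choose {list(choices)}: ")
        node = choices[pick]

play(script=["go north", "swim"])
```

A drawing tool that stored its boxes and arrows in a structure like `story` would already be a working prototype, which is more or less the trick Twine pulls off.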
Last year I turned off all my notifications. I stopped booking meetings. I started living asynchronously. Now instead of being interrupted throughout the day—or rushing from one meeting to the next—I sit down and get work done. I work a lot. I communicate with hundreds of people a day. I collaborate extensively. But I do so on my own terms, at my own tempo. You can live more asynchronously, too. I’ll explain the benefits. I’ll show you how.
What does “causality” mean, and how can you represent it mathematically? How can you encode causal assumptions, and what bearing do they have on data analysis? These types of questions are at the core of the practice of data science, but deep knowledge about them is surprisingly uncommon.
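One concrete instance of these questions is confounding. The following is an illustrative simulation of my own (not drawn from the linked post): a hidden common cause Z drives both treatment X and outcome Y, so the naive comparison wildly overstates X’s effect, while stratifying on Z recovers it.

```python
import random

random.seed(42)

# Z causes both treatment X and outcome Y; X's true effect on Y is 0.1.
n = 100_000
data = []
for _ in range(n):
    z = random.random() < 0.5                  # confounder
    x = random.random() < (0.8 if z else 0.2)  # Z makes treatment likely
    y = (1.0 if z else 0.0) + (0.1 if x else 0.0) + random.gauss(0, 0.1)
    data.append((z, x, y))

def mean_y(rows):
    return sum(r[2] for r in rows) / len(rows)

# Naive contrast: inflated, because treated units mostly have Z.
naive = mean_y([r for r in data if r[1]]) - mean_y([r for r in data if not r[1]])

# Adjusted contrast: average the X-effect within each stratum of Z
# (back-door adjustment, with Z equally likely either way).
adj = 0.0
for z in (True, False):
    stratum = [r for r in data if r[0] == z]
    adj += 0.5 * (mean_y([r for r in stratum if r[1]]) -
                  mean_y([r for r in stratum if not r[1]]))

print(round(naive, 2), round(adj, 2))  # naive ~0.7, adjusted ~0.1
```

The data alone cannot tell you which analysis is right; the causal assumption (that Z is a common cause) is what licenses the adjustment.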
Moreover, the public prosecutor’s office suddenly proved wonderfully art-literate, saying that “the overriding interest in a public debate and the questions raised by the ‘Random Darknet Shopper’ justified the possession of the ecstasy”. The German curator Inke Arns wrote on Facebook: “The Swiss public prosecutor seems to be a good art critic.” And Marina Galperina, editor-in-chief of the New York online magazine Hopes&Fears, tweeted: “Swiss prosecutor: It’s OK to buy MDMA online! (as long as you’re a bot in an art project.)”
As far as style goes, the AirPods resemble the EarPods from the Season 2 episode of Doctor Who in which a megalomaniac billionaire has convinced the populace to purchase the wireless devices as a means to conduct communication and receive all their information, only to turn around and deploy them as a weapon that hacked into their brains and turned them into soulless, emotionless, homicidal metal automatons.
One of the most urgent tasks that we mortal critters have is making kin, not babies. This making kin, both with and among other humans and not humans, should happen in an enduring fashion that can sustain through generations. I propose making kin nongenealogically, which will be an absolute need for the eleven-plus billion humans by the end of this century—and is already terribly important. I’m interested in taking care of the earth in a way that makes multispecies environmental justice the means and not just the goal. So I think of making kin as a way of being really, truly prochild—making babies rare and precious—as opposed to the crazy pronatalist but actually antichild world in which we live. It’s making present the powers of mortal critters on earth in resistance to the anthropocene and capitalocene. That’s really what the book is about.
I’ve spent many years referencing Wikipedia’s list of cognitive biases whenever I have a hunch that a certain type of thinking is an official bias but I can’t recall the name or details. It’s been an invaluable reference for helping me identify the hidden flaws in my own thinking. Nothing else I’ve come across seems to be both as comprehensive and as succinct.
However, honestly, the Wikipedia page is a bit of a tangled mess. Despite trying to absorb the information of this page many times over the years, very little of it seems to stick. I often scan it and feel like I’m not able to find the bias I’m looking for, and then quickly forget what I’ve learned. I think this has to do with how the page has organically evolved over the years. Today, it groups 175 biases into vague categories (decision-making biases, social biases, memory errors, etc) that don’t really feel mutually exclusive to me, and then lists them alphabetically within categories. There are duplicates a-plenty, and many similar biases with different names, scattered willy-nilly.
I’ve taken some time over the last four weeks (I’m on paternity leave) to try to more deeply absorb and understand this list, and to try to come up with a simpler, clearer organizing structure to hang these biases off of.
Clarity is an ambiguous virtue today. It’s more frequently called “transparency” now, and the naive still advance it as a simple salve for all ills. But the ills of the early 1990s never left us. If anything, they doubled down, demonstrating how comparatively oversimplified issues like ozone depletion, statist territorialism, and rain forest conservation really were—simply being able to see those issues was supposed to lead to the implementation of their obvious remedies. Today that false dream remains, in the form of technological innovation that promises to “change the world” by producing an even more commercialized version of progress than we endured two decades ago. Would it be a step too far to call Silicon Valley one big, compostable bottle of Crystal Pepsi? Probably. The nostalgia you drink when you drink a reissued Crystal Pepsi is not a nostalgia for taste, nor for the gewgaws of the 1990s, nor even for the youth that might have accompanied the original. It is a nostalgia for a moment when a new secular, global righteousness seemed simple enough that drinking a branded cola could legitimately contribute to it.
Whakaari, also known as White Island, is an active stratovolcano, situated 48 km (30 mi) from the North Island of New Zealand in the Bay of Plenty. Whakaari is New Zealand’s most active volcano, and has been built up by continuous eruptions over the past 150,000 years. The island is approximately 2 km (1.2 mi) in diameter and rises to a height of 321 m (1,053 ft) above sea level.
In western liberal democracies (where Tor is overwhelmingly based and, by raw numbers, largely serves), human-rights advocacy has better optics than privacy. But the opposite is true in the regions that Tor aims to serve. Privacy empowers the individual. Empowering the individual naturally dovetails with human rights, so it’s plausible that greater human rights is a natural byproduct of privacy advocacy. However, Tor’s pivot from “Privacy Enthusiasts” to “Human Rights Watch for Nerds” substantially increases the risk of imprisonment to those operating a Tor relay or using the Tor Browser Bundle from less HR-friendly regions.
Mocking the work of women in the home doesn’t help women complete the labour they’re left with each day after the great feminist debate goes home to bed. It doesn’t change the structural inequality, the lack of support, failure of financial reimbursement and undervalued social status.
“Alphaliner said Hanjin’s bankruptcy represents the largest container shipping failure in history, dwarfing the 1986 crash of United States Lines (USL).”
This map represents global temperature anomalies averaged from 2008 through 2012. NASA Goddard Institute for Space Studies
Excerpt:
The National Aeronautics and Space Administration’s (NASA) top climate scientist announced Tuesday that the Earth is warming at a pace not seen in at least the past 1,000 years, making it “very unlikely” that global temperatures will stay below the 1.5 C limit agreed to in the landmark climate treaty negotiated in Paris last December.
“Maintaining temperatures below the 1.5 C guardrail requires significant and very rapid cuts in carbon dioxide emissions or coordinated geo-engineering,” he continued, referring to controversial environmental manipulations. “That is very unlikely. We are not even yet making emissions cuts commensurate with keeping warming below 2 C.”
The announcement comes amid a growing body of research—month after month after month—that shows 2016 is shaping up to be the warmest year in recorded history.
Over the past century, temperatures began to rise at a rate 10 times faster than historical averages, according to research by NASA and the National Oceanic and Atmospheric Administration (NOAA). That means the Earth will warm “at least” 20 times faster than the historical average in the coming 100 years, NASA said.