On Generative Algorithms (2016)
by Anders Hoff
When we open up data, are we empowering people to come together? Or to come apart? Who defines the values that we should be working towards? Who checks to make sure that our data projects are moving us towards those values? If we aren’t clear about what we want and the trade-offs that are involved, simply opening up data can — and often does — reify existing inequities and structural problems in society. Is that really what we’re aiming to do?
Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products due to the tremendous complexity of tasks they have to carry out, as well as their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will they become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field under study for millennia, can we ever take our current, subjective ethical notions for granted and embed them in products that will make decisions on our behalf in real-world situations?
As we make algorithms that can improve themselves — stumbling first steps on the road to artificial intelligence — how should we regulate them? Should we require them to tell us their every step […] Or should we let the algorithms run unfettered? Nara Logics’ Jana Eggers […] suggests that a good approach is to have algorithms explain themselves. After all, humans are terrible at tracking their actions, but software has no choice but to do so. Each time a machine learning algorithm generates a conclusion, it should explain why it did so. Then auditors and regulators can query the justifications to see if they’re allowed. On the surface, this seems like a good idea: Just turn on logging, and you’ll have a detailed record of why an algorithm chose a particular course of action, or classified something a certain way. […] There’s a tension between transparent regulation of the algorithms that rule our futures (having them explain themselves to us so we can guide and hone them) and the speed and alacrity with which an unfettered algorithm can evolve, adapt, and improve better than others. Is he who hesitates to unleash an AI without guidance lost? There’s no simple answer here. It’s more like parenting than computer science: Giving your kid some freedom, and a fundamental moral framework, and then randomly checking in to see that the kid isn’t a jerk. But simply asking to share the algorithm won’t give us the controls and changes we’re hoping to see.
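The idea Eggers describes — an algorithm that logs a justification alongside every conclusion — can be sketched in a few lines. The model, its weights, and the feature names below are all invented for illustration; a real system would log far richer context, but the shape of the audit record is the point:

```python
# A minimal sketch of a "self-explaining" classifier: every prediction
# carries a record of the per-feature contributions that produced it,
# which an auditor or regulator could later query.

def predict_with_explanation(weights, features):
    """Return a decision plus a log of each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= 0 else "deny"
    # The explanation is the audit trail: not just the outcome,
    # but why the model arrived at it.
    log = {
        "decision": decision,
        "score": score,
        "contributions": contributions,
    }
    return decision, log

# Hypothetical loan-screening example (weights and inputs are made up).
weights = {"income": 0.5, "missed_payments": -2.0}
decision, log = predict_with_explanation(
    weights, {"income": 3.0, "missed_payments": 1.0})
```

Here the denial can be traced directly to the `missed_payments` contribution outweighing `income` — exactly the kind of queryable justification the passage imagines, though for opaque models (deep networks, ensembles) producing such attributions is itself an open research problem.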
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”

The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.
Facebook’s Mission Statement states that your objective is to “make the world more open and connected”. In reality you are doing this in a totally superficial sense.
If you will not distinguish between child pornography and documentary photographs from a war, this will simply promote stupidity and fail to bring human beings closer to each other.
– Espen Egil Hansen (Editor-in-chief and CEO Aftenposten)
Building and maintaining an n-to-n communications platform for over a billion *daily* active users across multiple access platforms *is* difficult and *is* hard and you’ve done it and congratulations, that was lots of work and effort. You - and your Valley compatriots - talk excitedly and breathlessly about solving Hard Problems and Disrupting Things, but other areas - areas that are *also* legitimate hard problems, like content moderation and community moderation and abuse (which isn’t even a new thing!) - do not appear to interest you. They appear to interest you to so small a degree that, *compared to* the effort that’s put into other hard problems, it looks like you’ve given up.
You can’t have it both ways. You can’t use rhetoric to say that your people - not just engineers - are the best and the brightest working to solve humanity’s problems without also including the asterisk that says “Actually, *not all hard problems*. Not all difficult problems. Just some. Just the engineering ones, for example."
What you’re doing right now - with your inflexible process that’s designed to be efficient and work at scale without critically being able to deal *at scale* with nuance and context (which, I’d say, is your difficult problem and a challenge you should *relish* - how do you deal with nuance at scale in a positive manner?!) - smacks of algorithmic and system-reductionism.
–Dan Hon, s3e27: It’s Difficult
It is tempting to make every fiasco at Facebook about the power (and the abuse of power) of the algorithm. The “napalm girl” controversy does not neatly fit that storyline. A little-known team of humans at Facebook decided to remove the iconic photo from the site this week.
That move revealed, in a klutzy way, just how much the company is struggling internally to exercise the most basic editorial judgment, despite claims by senior leadership that the system is working.
The same week Nick Ut’s picture didn’t make it, the police department of the small town of East Liverpool, Ohio, posted two photographs of a couple that had overdosed in their car, with a small child sitting right behind them. Addiction experts were quick to point out that public shaming would very likely be counterproductive. In this case, it was reported, “a Facebook spokesperson said the photos did not violate the company’s community standards.”
As in the case of Ut’s picture, the decision over whether or not to publicly share photographs like the two East Liverpool ones ought to be in the hands of highly trained photo editors, people who not only have the knowledge to understand the “news value” of the photographs, but who have also wrestled with the different underlying ethical problems.
However much any editor’s decisions might be flawed at times, at the very least we can be certain that they have thought about the underlying problems, that, in other words, we’re looking at the end result of an educated process (regardless of whether we end up agreeing with it). The world of Facebook does away with this.
– Jörg M. Colberg, The Facebook Problem
Moreover, the public prosecutor’s office suddenly proved to be wonderfully art-literate, saying that the “overriding interest in a public debate and the questions raised by the ‘Random Darknet Shopper’ justified the possession of the ecstasy.” The German curator Inke Arns wrote on Facebook: “The Swiss public prosecutor seems to be a good art critic.” And Marina Galperina, editor-in-chief of the New York online magazine Hopes&Fears, tweeted: “Swiss prosecutor: It’s ok to buy MDMA online! (as long as you’re a bot in an art project.)”
Just months after the discovery that Facebook’s “trending” news module was curated and tweaked by human beings, the company has eliminated its editors and left the algorithm to do its job. The results, so far, are a disaster.
Over the weekend, the fully automated Facebook trending module pushed out a false story about Fox News host Megyn Kelly, a controversial piece about a comedian’s four-letter word attack on rightwing pundit Ann Coulter, and links to an article about a video of a man masturbating with a McDonald’s chicken sandwich.
The dismissal of the trending module team appears to have been a long-term plan at Facebook. A source told the Guardian the trending module was meant to have “learned” from the human editors’ curation decisions and was always meant to eventually reach full automation.
21st Century photography has nothing in common with the hypocritical moralism of the post-colonial document, that relies on the same representational paradigm that made colonialism possible. In short, 21st Century Photography is not the representation of the world, but the exploration of the labor practices that shape this world through mass-production, computation, self-replication and pattern recognition. Through it we come to understand that the ‘real world’ is nothing more than so much information plucked out of chaos: the randomised and chaotic conflation of bits of matter, strands of DNA, sub-atomic particles and computer code.
In photography one can glimpse how the accidental meetings of these forces are capable of producing temporary, meaningful assemblages that we call 'images’. In the 21st Century, photography is not a stale sight for sore eyes, but the inquiry into what makes something an image. As such, photography is the most essential task of art in the current time.
Policy algorithms can cause real damage that is difficult to remedy under existing legal protections, especially when algorithms terminate basic services. If community members are unfairly stigmatized by police surveillance or incorrectly denied care for acute medical conditions, it is nearly impossible to make them whole after the fact. So how do we preserve fairness, due process, and equity in automated decision-making?
Google Flu Trends, which launched in 2008, monitors web searches across the US to find terms associated with flu activity such as “cough” or “fever”. It uses those searches to predict up to nine weeks in advance the number of flu-related doctors’ visits that are likely to be made. The system has consistently overestimated flu-related visits over the past three years, and was especially inaccurate around the peak of flu season – when such data is most useful. In the 2012/2013 season, it predicted twice as many doctors’ visits as the US Centers for Disease Control and Prevention (CDC) eventually recorded. In 2011/2012 it overestimated by more than 50 per cent.
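The core mechanism behind a system like Google Flu Trends — mapping search-term volume onto officially recorded flu visits, then extrapolating — can be sketched as a simple regression. All numbers below are fabricated, and the real system used a log-odds model fitted over millions of candidate queries rather than a single predictor; this is only the shape of the idea:

```python
# Toy version of search-volume-to-flu-visits prediction: fit ordinary
# least squares on historical (query volume, recorded visits) pairs,
# then forecast visits for a new week's query volume.

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor: y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Weekly "fever"-style query volume vs. CDC-recorded visits (invented).
queries = [10, 20, 30, 40]
visits = [105, 195, 310, 390]
a, b = fit_line(queries, visits)
predicted = a * 50 + b  # forecast visits for a week with volume 50
```

The overestimation problem the passage describes follows directly from this design: the fitted relationship between queries and visits drifts (media coverage inflates searches without inflating illness), but the model keeps extrapolating from the stale fit.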
The 320-page novel, called “True Love,” is a variation on Leo Tolstoy’s 1877 classic “Anna Karenina” but written in the style of Japanese author Haruki Murakami. It is based on 17 famous literary works that were fed into the program. Within 72 hours, the computer generated its novel about true love.