When we open up data, are we empowering people to come together? Or to come apart? Who defines the values that we should be working towards? Who checks to make sure that our data projects are moving us towards those values? If we aren’t clear about what we want and the trade-offs that are involved, simply opening up data can — and often does — reify existing inequities and structural problems in society. Is that really what we’re aiming to do?
via https://points.datasociety.net/toward-accountability-6096e38878f0
Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products due to the tremendous complexity of tasks they have to carry out, as well as their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will they become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field under study for millennia, can we ever suppose that our current, subjective ethical notions can be taken for granted, and used for products that will make decisions on our behalf in real-world situations?
via https://medium.com/@mchrbn/ethical-autonomous-algorithms-5ad07c311bcc
As we make algorithms that can improve themselves — stumbling first steps on the road to artificial intelligence — how should we regulate them? Should we require them to tell us their every step […] Or should we let the algorithms run unfettered? Nara Logics’ Jana Eggers […] suggests that a good approach is to have algorithms explain themselves. After all, humans are terrible at tracking their actions, but software has no choice but to do so. Each time a machine learning algorithm generates a conclusion, it should explain why it did so. Then auditors and regulators can query the justifications to see if they’re allowed. On the surface, this seems like a good idea: Just turn on logging, and you’ll have a detailed record of why an algorithm chose a particular course of action, or classified something a certain way. […] There’s a tension between transparent regulation of the algorithms that rule our futures (having them explain themselves to us so we can guide and hone them) and the speed and alacrity with which an unfettered algorithm can evolve, adapt, and improve better than others. Is he who hesitates to unleash an AI without guidance lost? There’s no simple answer here. It’s more like parenting than computer science: Giving your kid some freedom, and a fundamental moral framework, and then randomly checking in to see that the kid isn’t a jerk. But simply asking to share the algorithm won’t give us the controls and changes we’re hoping to see.
via https://medium.com/pandemonio/how-to-regulate-an-algorithm-c2e70048da3
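The logging idea in the passage above is concrete enough to sketch. Below is a minimal, hypothetical Python illustration of what a self-explaining decision might look like: every conclusion is written to an audit record together with the per-feature contributions that produced it, and a reviewer can query those justifications afterwards. The toy linear model, feature names, and log format are all invented for illustration; a real system would use a model-appropriate explanation method and an append-only store.

```python
# Sketch of "algorithms that explain themselves": each decision is logged
# alongside the per-feature contributions that drove it, so an auditor can
# later query the justification. Model, features, and log format are
# hypothetical illustrations, not a real API.
import json
import time

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}  # toy model
THRESHOLD = 0.5
AUDIT_LOG = []  # in practice: an append-only store, not an in-memory list


def decide(applicant_id, features):
    """Score an application and record why the decision was made."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score >= THRESHOLD else "deny"
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "applicant": applicant_id,
        "decision": decision,
        "score": round(score, 3),
        "contributions": {k: round(v, 3) for k, v in contributions.items()},
    })
    return decision


def audit(predicate):
    """Let a reviewer query logged justifications, e.g. all denials."""
    return [entry for entry in AUDIT_LOG if predicate(entry)]


decide("a-001", {"income": 1.2, "debt_ratio": 0.9, "years_employed": 2.0})
denials = audit(lambda e: e["decision"] == "deny")
print(json.dumps(denials, indent=2))
```

Even a log like this only captures the model’s arithmetic, not whether the weights encode values we would endorse, which is the passage’s closing caveat: sharing the algorithm alone won’t give us the controls and changes we’re hoping to see.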
This problem has a name: the paradox of automation. It applies in a wide variety of contexts, from the operators of nuclear power stations to the crew of cruise ships, from the simple fact that we can no longer remember phone numbers because we have them all stored in our mobile phones, to the way we now struggle with mental arithmetic because we are surrounded by electronic calculators. The better the automatic systems, the more out-of-practice human operators will be, and the more extreme the situations they will have to face. The psychologist James Reason, author of Human Error, wrote: “Manual control is a highly skilled activity, and skills need to be practised continuously in order to maintain them. Yet an automatic control system that fails only rarely denies operators the opportunity for practising these basic control skills … when manual takeover is necessary something has usually gone wrong; this means that operators need to be more rather than less skilled in order to cope with these atypical conditions.”
The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.
via https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-are-setting-us-up-disaster
Facebook’s Mission Statement states that your objective is to “make the world more open and connected”. In reality you are doing this in a totally superficial sense.
If you will not distinguish between child pornography and documentary photographs from a war, this will simply promote stupidity and fail to bring human beings closer to each other.
To pretend that it is possible to create common, global rules for what may and what may not be published, only throws dust into peoples’ eyes.
– Espen Egil Hansen (Editor-in-chief and CEO, Aftenposten)
Building and maintaining an n-to-n communications platform for over a billion *daily* active users across multiple access platforms *is* difficult and *is* hard and you’ve done it and congratulations, that was lots of work and effort. You - and your Valley compatriots - talk excitedly and breathlessly about solving Hard Problems and Disrupting Things, but other areas - areas that are *also* legitimate hard problems, like content moderation and community moderation and abuse (which isn’t even a new thing!) - do not appear to interest you. They appear to interest you so little that it looks like you’ve given up *compared to* the effort that’s put into other hard problems.
You can’t have it both ways. You can’t use rhetoric to say that your people - not just engineers - are the best and the brightest working to solve humanity’s problems without also including the asterisk that says “Actually, *not all hard problems*. Not all difficult problems. Just some. Just the engineering ones, for example.”
What you’re doing right now - with your inflexible process that’s designed to be efficient and work at scale without, critically, being able to deal *at scale* with nuance and context (which, I’d say, is your difficult problem and a challenge you should *relish* - how do you deal with nuance at scale in a positive manner?!) - smacks of algorithmic and system-reductionism.
– Dan Hon, s3e27: It’s Difficult
It is tempting to make every fiasco at Facebook about the power (and the abuse of power) of the algorithm. The “napalm girl” controversy does not neatly fit that storyline. A little-known team of humans at Facebook decided to remove the iconic photo from the site this week.
That move revealed, in a klutzy way, just how much the company is struggling internally to exercise the most basic editorial judgment, despite claims by senior leadership that the system is working.
– Aarti Shahani, With ‘Napalm Girl,’ Facebook Humans (Not Algorithms) Struggle To Be Editor
The same week Nick Ut’s picture didn’t make it, the small town of East Liverpool, Ohio, posted two photographs of a couple who had overdosed in their car, with a small child sitting right behind them. Addiction experts were quick to point out that public shaming would very likely be counterproductive. In this case, it was reported, “a Facebook spokesperson said the photos did not violate the company’s community standards.”
As in the case of Ut’s picture, the decision over whether or not to publicly share photographs like the two East Liverpool ones ought to be in the hands of highly trained photo editors, people who not only have the knowledge to understand the “news value” of the photographs, but who have also wrestled with the different underlying ethical problems.
However flawed any editor’s decisions might be at times, at the very least we can be certain that they have thought about the underlying problems - that, in other words, we’re looking at the end result of an educated process (regardless of whether or not we end up agreeing with it). The world of Facebook does away with this.
– Jörg M. Colberg, The Facebook Problem
21st Century photography has nothing in common with the hypocritical moralism of the post-colonial document, which relies on the same representational paradigm that made colonialism possible. In short, 21st Century photography is not the representation of the world, but the exploration of the labor practices that shape this world through mass-production, computation, self-replication and pattern recognition. Through it we come to understand that the ‘real world’ is nothing more than so much information plucked out of chaos: the randomised and chaotic conflation of bits of matter, strands of DNA, sub-atomic particles and computer code.
In photography one can glimpse how the accidental meetings of these forces are capable of producing temporary, meaningful assemblages that we call ‘images’. In the 21st Century, photography is not a stale sight for sore eyes, but the inquiry into what makes something an image. As such, photography is the most essential task of art in the current time.
via http://thephotographersgalleryblog.org.uk/2015/07/03/what-is-21st-century-photography/