Posts tagged ethics

Like Bringing a Gantt Chart to a Casino

productivity, creativity, economics, ethics, reflection, clowncar, 2017

I imagine a spectrum where at one pole we have an assembly line worker, and on the other we have a mathematician trying to prove a famous conjecture. The productivity of the former is constrained only by physics, and may glean a few percent here and there with better tools, methods, and discipline. The latter, by contrast, may have the best tools, methods, and discipline, spend an entire career working diligently, and still not succeed. We live somewhere in between.

What does it mean to be productive if what you are producing is bad? Or even if it is good for you, it may be bad for others, who may endeavour in turn to make it bad for you.


via https://superyesmore.com/like-bringing-a-gantt-chart-to-a-casino-a0e4dd635d787c3b0a2156d9aff41218

Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are…

Karl Popper, Paradox of tolerance, open society, intolerance, ethics

“Unlimited tolerance must lead to the disappearance of tolerance. If we extend unlimited tolerance even to those who are intolerant, if we are not prepared to defend a tolerant society against the onslaught of the intolerant, then the tolerant will be destroyed, and tolerance with them. […] We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.”

Karl Popper (on the Paradox of tolerance in The Open Society and Its Enemies)

The Skin of Others in your Game

Medium, Nassim Nicholas Taleb, skin in the game, terrorism, ethics, punishment

Society likes saints and moral heroes to be celibate so they do not have family pressures and be forced into dilemmas of needing to compromise their sense of ethics to feed their children. The entire human race, something rather abstract, becomes their family. Some martyrs, such as Socrates, had young children (although he was in his seventies), and overcame the dilemma at their expense. Many can’t.

via https://medium.com/incerto/the-skin-of-others-in-your-game-3f51d8ccc3fb

Ethical Autonomous Algorithms

Medium, Matthieu Cherubini, ethics, automation, autonomy, algorithms, AI

Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products due to the tremendous complexity of tasks they have to carry out, as well as their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will they become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field under study for millennia, can we ever suppose that our current subjective ethical notions be taken for granted, and used for products that will make decisions on our behalf in real-world situations?

via https://medium.com/@mchrbn/ethical-autonomous-algorithms-5ad07c311bcc

“Nothing is Forbidden, but Some Things are Good”

Medium, ethics, morality, preference, vegetarianism, philosophy, choice, taoism

So morality may be a mirage, but it’s a useful mirage that helps us find life-giving meaning in what would otherwise be a desert of pure perception. I found de to be a helpful bridge towards holonic integration, but you might prefer Sharia law, act utilitarianism, or any number of moral or ethical ideas. Whatever your choice, in this way morality serves as an oasis that will sustain you on your journey to find meaning, especially when all meaning seems lost to the harsh winds of an uncaring world.

via https://mapandterritory.org/nothing-is-forbidden-but-some-things-are-good-b57f2aa84f1b

James Clapper on the Future of Cyberwar and Surveillance

Clapper, USA, NSA, CIA, IC, law, ethics, surveillance, warfare, Snowden, stuxnet, internment, torture

While Clapper grudgingly accepts the damage the Snowden affair has done to his own reputation, he worries more deeply about the impact it’s had on the intelligence workforce. He hates the thought that America might turn on his employees. He fears that, in the same way the nation and Congress turned their backs on the CIA officers who ran the agency’s “black sites” and torture program in the wake of 9/11, the country will one day turn on the people who carry out drone attacks. “I worry that people will decide retroactively that killing people with drones was wrong, and that will lead us to criticize, indict, and try people who helped kill with drones,” he says. “I find it really bothersome to set a moral standard retrospectively,” he says. “People raise all sorts of good questions about things America has done. Everyone now agrees that interning Japanese [Americans] in World War II was egregious—but at the time it seemed like it was in the best interests of the country.”

via https://www.wired.com/2016/11/james-clapper-us-intelligence/

Artificial intelligence will force us to confront our values

Medium, AI, ethics

For the foreseeable future, “artificial intelligence” is really just a term to describe advanced analysis of massive datasets, and the models that use that data to identify patterns or make predictions about everything from traffic patterns to criminal justice outcomes. AI can’t think for itself — it’s taught by humans to perform tasks based on the “training data” that we provide, and these systems operate within parameters that we define. But this data often reflects unhealthy social dynamics, like race and gender-based discrimination, that can be easy to miss because we’ve become so desensitized to their presence in society.

via https://medium.com/equal-future/artificial-intelligence-will-force-us-to-confront-our-values-6f32682a32ec

Rules for trusting “black boxes” in algorithmic control systems

algorithmics, trust, black boxes, security, decision making, prediction, data, machine learning, ethics

mostlysignssomeportents:

Tim O'Reilly writes about the reality that more and more of our lives – including whether you end up seeing this very sentence! – are in the hands of “black boxes” – algorithmic decision-makers whose inner workings are a secret from the people they affect.

O'Reilly proposes four tests to determine whether a black box is trustable:

1. Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome.

2. Success is measurable.

3. The goals of the algorithm’s creators are aligned with the goals of the algorithm’s consumers.

4. The algorithm leads its creators and its users to make better long-term decisions.

O'Reilly goes on to test these assumptions against some of the existing black boxes that we trust every day, like aviation autopilot systems, and shows that this is a very good framework for evaluating algorithmic systems.

But I have three important quibbles with O'Reilly’s framing. The first is absolutely foundational: the reason that these algorithms are black boxes is that the people who devise them argue that releasing details of their models will weaken the models’ security. This is nonsense.

For example, Facebook tweaked its algorithm to downrank “clickbait” stories. Adam Mosseri, Facebook’s VP of product management, told TechCrunch, “Facebook won’t be publicly publishing the multi-page document of guidelines for defining clickbait because ‘a big part of this is actually spam, and if you expose exactly what we’re doing and how we’re doing it, they reverse engineer it and figure out how to get around it.’”

There’s a name for this in security circles: “Security through obscurity.” It is as thoroughly discredited an idea as is possible. As far back as the 19th century, security experts have decried the idea that robust systems can rely on secrecy as their first line of defense against compromise.

The reason the algorithms O'Reilly discusses are black boxes is because the people who deploy them believe in security-through-obscurity. Allowing our lives to be manipulated in secrecy because of an unfounded, superstitious belief is as crazy as putting astrologers in charge of monetary policy, no-fly lists, hiring decisions, and parole and sentencing recommendations.

So there’s that: the best way to figure out whether we can trust a black box is to smash it open, demand that it be exposed to the disinfecting power of sunshine, and give no quarter to the ideologically bankrupt security-through-obscurity court astrologers of Facebook, Google, and the TSA.

Then there’s the second issue, which is important whether or not we can see inside the black box: what data was used to train the model? Or, in traditional scientific/statistical terms, what was the sampling methodology?

Garbage in, garbage out is a principle as old as computer science, and sampling bias is a problem that’s as old as the study of statistics. Algorithms are often deployed to replace biased systems with empirical ones: for example, predictive policing algorithms tell the cops where to look for crime, supposedly replacing racially biased stop-and-frisk with data-driven systems of automated suspicion.

But predictive policing training data comes from earlier, human-judgment-driven stop-and-frisk projects. If the cops only make black kids turn out their pockets, then all the drugs, guns and contraband they find will be in the pockets of black kids. Feed this data to a machine learning model and ask it where the future guns, drugs and contraband will be found, and it will dutifully send the police out to harass more black kids. The algorithm isn’t racist, but its training data is.
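
A minimal, hypothetical Python sketch of that dynamic (the neighbourhood names and rates below are invented for illustration, not drawn from the post): a model trained only on where past stops happened keeps predicting finds wherever the stops were, even though contraband is equally common everywhere.

    # Hypothetical illustration of sampling bias in "predictive" policing data.
    import random
    from collections import Counter

    random.seed(0)
    neighbourhoods = ["A", "B", "C", "D"]

    def has_contraband():
        # ground truth (hidden from the model): same 5% base rate everywhere
        return random.random() < 0.05

    # historical data: 90% of stops happen in neighbourhood A
    finds = Counter()
    for _ in range(10_000):
        place = "A" if random.random() < 0.9 else random.choice(["B", "C", "D"])
        if has_contraband():
            finds[place] += 1  # only *finds* are recorded, and finds follow the stops

    # "model": forecast tomorrow's finds from yesterday's finds
    total = sum(finds.values())
    for place in neighbourhoods:
        print(f"{place}: {finds[place] / total:.0%} of predicted future finds")
    # output clusters on A, not because A has more contraband,
    # but because that is where the police were sent to look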

There’s a final issue, which is that algorithms have to have their models tweaked based on measurements of success. It’s not enough to merely measure success: the errors in the algorithm’s predictions also have to be fed back to it, to correct the model. That’s the difference between Amazon’s sales-optimization and automated hiring systems. Amazon’s systems predict ways of improving sales, which the company tries: the failures are used to change the model to improve it. But automated hiring systems blackball some applicants and advance others, and the companies that make these systems don’t track whether the excluded people go on to be great employees somewhere else, or whether the recommended hires end up stealing from the company or alienating its customers.
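
A similarly hypothetical sketch of that missing feedback loop, with invented group names and success rates: a hiring model that only ever observes outcomes for the applicants it approves can never discover that its initial bias against one group was an error.

    # Hypothetical illustration: feedback only flows for approved applicants.
    import random
    random.seed(1)

    true_success = {"group_x": 0.7, "group_y": 0.7}   # identical in reality
    belief       = {"group_x": 0.7, "group_y": 0.4}   # the model starts out biased

    for _ in range(10_000):
        group = random.choice(["group_x", "group_y"])
        if belief[group] >= 0.5:                       # hire only if believed good
            outcome = random.random() < true_success[group]
            # outcomes are observed only for hires, so only their beliefs update
            belief[group] += 0.01 * (outcome - belief[group])
        # rejected applicants may thrive elsewhere, but that signal never returns

    print(belief)  # group_y stays stuck near 0.4: the error is never observed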

I like O'Reilly’s framework for evaluating black boxes, but I think we need to go farther.

http://boingboing.net/2016/09/15/rules-for-trusting-black-box.html

Artificial intelligence is hard to see

Medium, AI, judgement, ethics, automation, algorithmic

The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.

via https://medium.com/@katecrawford/artificial-intelligence-is-hard-to-see-a71e74f386db

Zapping Their Brains at Home

neuroscience, neurohacking, brain, brain-hacking, DIY, tDCS, ethics, research, experiment

As neuroscientists continue to conduct brain stimulation experiments, publish results in journals and hold conferences, the D.I.Y. practitioners have remained quiet downstream listeners, blogging about scientists’ experiments, posting unrestricted versions of journal articles and linking to videos of conference talks. Some practitioners create their own manuals and guides based on published papers. The growth of D.I.Y. brain stimulation stems in part from a larger frustration with the exclusionary institutions of modern medicine, such as the exorbitant price of pharmaceuticals and the glacial pace at which new therapies trickle down to patients. For people without an institutional affiliation, even reading a journal article can be prohibitively expensive. The open letter this month is about safety. But it is also a recognition that these D.I.Y. practitioners are here to stay, at least for the time being. While the letter does not condone, neither does it condemn. It sticks to the facts and eschews paternalistic tones in favor of measured ones. The letter is the first instance I’m aware of in which scientists have directly addressed these D.I.Y. users. Though not quite an olive branch, it is a commendable step forward, one that demonstrates an awareness of a community of scientifically involved citizens.

via http://www.nytimes.com/2016/07/24/opinion/sunday/zapping-their-brains-at-home.html?ribbon-ad-idx=19&rref=opinion&module=Ribbon&version=context&region=Header&action=click&contentCollection=Opinion&pgtype=article&_r=0

Neuroscientists’ Open Letter To DIY Brain Hackers

neuroscience, tDCS, brain-hacking, neurohacking, ethics, experiment, review

We now know that if you take the same subject and do tDCS with exactly the same settings on different days, they can have very different responses. We know there’s a huge amount that can actually change what effect tDCS has. What you’re doing at the time tDCS is administered, or before tDCS is administered, has an effect. There are so many different things that can have an effect – your age, your gender, your hormones, whether you drank coffee that morning, whether you’ve had exposure to brain stimulation previously, your baseline neurotransmitter level — all of this stuff can affect what tDCS does to your brain. And some of those things vary on a day-to-day basis.

via http://www.wbur.org/commonhealth/2016/07/11/caution-brain-hacking

An Open Letter Concerning Do-It-Yourself Users of Transcranial Direct Current Stimulation

neurohacking, tDCS, ethics, experimentation, medicine

“As clinicians and scientists who study noninvasive brain stimulation, we share a common interest with do-it-yourself (DIY) users, namely administering transcranial direct current stimulation (tDCS) to improve brain function. Evidence suggests that DIY users reference the scientific literature to guide their use of tDCS, including published ethical and safety standards. However, as discussed at a recent Institute of Medicine Workshop, there is much about noninvasive brain stimulation in general, and tDCS in particular, that remains unknown. Whereas some risks, such as burns to the skin and complications resulting from electrical equipment failures, are well recognized, other problematic issues may not be immediately apparent. We perceive an ethical obligation to draw the attention of both professionals and DIY users to some of these issues”

  • Stimulation affects more of the brain than a user may think
  • Stimulation interacts with ongoing brain activity, so what a user does during tDCS changes tDCS effects
  • Enhancement of some cognitive abilities may come at the cost of others
  • Changes in brain activity (intended or not) may last longer than a user may think
  • Small differences in tDCS parameters can have a big effect
  • tDCS effects are highly variable across different people
  • The risk/benefit ratio is different for treating diseases versus enhancing function

http://onlinelibrary.wiley.com/doi/10.1002/ana.24689/pdf

Do We Still Need the Trolley Problem?

The Atlantic, ethics, automation, engineering, trolley problem, driverless cars, 2015

It may be fortuitous that the trolley problem has trickled into the world of driverless cars: It illuminates some of the profound ethical—and legal—challenges we will face ahead with robots. As human agents are replaced by robotic ones, many of our decisions will cease to be in-the-moment, knee-jerk reactions. Instead, we will have the ability to premeditate different options as we program how our machines will act. For philosophers like Lin, this is the perfect example of where theory collides with the real world—and thought experiments like the trolley problem, though they may be abstract or outdated, can help us to rigorously think through scenarios before they happen. Lin and Gerdes hosted a conference about ethics and self-driving cars last month, and hope the resulting discussions will spread out to other companies and labs developing these technologies.

http://www.theatlantic.com/technology/archive/2015/10/trolley-problem-history-psychology-morality-driverless-cars/409732/

A Plea for Culinary Modernism

food, cooking, history, local, slow food, fast food, ethics, globalism, industrialisation, labour

Culinary Luddites are right, though, about two important things. We need to know how to prepare good food, and we need a culinary ethos. As far as good food goes, they’ve done us all a service by teaching us how to use the bounty delivered to us (ironically) by the global economy. Their culinary ethos, though, is another matter. Were we able to turn back the clock, as they urge, most of us would be toiling all day in the fields or the kitchen; many of us would be starving. Nostalgia is not what we need. What we need is an ethos that comes to terms with contemporary, industrialized food, not one that dismisses it, an ethos that opens choices for everyone, not one that closes them for many so that a few may enjoy their labor, and an ethos that does not prejudge, but decides case by case when natural is preferable to processed, fresh to preserved, old to new, slow to fast, artisanal to industrial.

https://www.jacobinmag.com/2015/05/slow-food-artisanal-natural-preservatives/

What does the Facebook experiment teach us?

danah boyd, facebook, research, ethics, IRB, peer review, psychology, sentiment manipulation, algorithms

For better or worse, people imagine Facebook is run by a benevolent dictator, that the site is there to enable people to better connect with others. In some senses, this is true. But Facebook is also a company […] it designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that operates, but they don’t really know, or even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think they’re being manipulated. So when they find out what soylent green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

https://medium.com/message/what-does-the-facebook-experiment-teach-us-c858c08e287f

Don’t be a Glasshole

Glass, google, ethics, instructions, social mediation

Don’t be creepy or rude (aka, a “Glasshole”). Respect others and if they have questions about Glass don’t get snappy. Be polite and explain what Glass does and remember, a quick demo can go a long way. In places where cell phone cameras aren’t allowed, the same rules will apply to Glass. If you’re asked to turn your phone off, turn Glass off as well. Breaking the rules or being rude will not get businesses excited about Glass and will ruin it for other Explorers.

https://sites.google.com/site/glasscomms/glass-explorers

The challenge of crafting policy for DIY brain stimulation

tDCS, brain stimulation, neurology, ethics, DIY, pubmed

Transcranial direct current stimulation (tDCS), a simple means of brain stimulation, possesses a trifecta of appealing features: it is relatively safe, relatively inexpensive and relatively effective. It is also relatively easy to obtain a device and the do-it-yourself (DIY) community has become galvanised by reports that tDCS can be used as an all-purpose cognitive enhancer. We provide practical recommendations designed to guide balanced discourse, propagate norms of safe use and stimulate dialogue between the DIY community and regulatory authorities. We call on all stakeholders – regulators, scientists and the DIY community – to share in crafting policy proposals that ensure public safety while supporting DIY innovation.

http://www.ncbi.nlm.nih.gov/pubmed/23733050

What Is ‘Evil’ to Google?

Ian Bogost, evil, google, morality, ethics, progress, engineering, silicon valley, narcissism

Famous though the slogan might be, its meaning has never been clear. In the 2004 IPO letter, founders Larry Page and Sergey Brin clarify that Google will be “a company that does good things for the world even if we forgo some short term gains.” But what counts as “good things,” and who constitutes “the world?” The slogan’s significance has likely changed over time, but today it seems clear that we’re misunderstanding what “evil” means to the company. For today’s Google, evil isn’t tied to malevolence or moral corruption, the customary senses of the term. Rather, it’s better to understand Google’s sense of evil as the disruption of its brand of (computational) progress.

http://www.theatlantic.com/technology/archive/2013/10/what-is-evil-to-google/280573/

Ethics and Power in the Long War

ethics, power, surveillance, Eleanor Saitta, Dymaxion, hackers, security, intelligence, centralisation

So, hacker culture is kind of at a crossroads. For a long time it was totally cool that, you know what, I don’t really want to be political, because I just like to reverse code and it’s a lot of fun, and I don’t really have time for politics cause I spend thirteen hours a day looking at shellcode and socialism takes too long. That was great for a while, but we don’t get to be apolitical anymore. Because if you’re doing security work, if you’re doing development work and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in a great part to prop up existing companies and existing power structures. Who here has worked for a security consultancy? Not that many people, ok. I don’t know anybody who has worked for a security consultancy where that consultancy has not done work for someone in the defense industry. There are probably a few, and I guarantee you that those consultancies that have done no work that is defense industry related, have taken an active political position, that we will not touch anything that is remotely fishy. If you’re apolitical, you’re aiding the enemy.

https://noisysquare.com/ethics-and-power-in-the-long-war-eleanor-saitta-dymaxion/