Posts tagged AI

“The video, called “Alternative Face v1.1”, is the work of Mario Klingemann, a German artist. It plays audio from an NBC…

GAN, Mario Klingemann, ML, AI, News, Fake News, media, 2017

video link

“The video, called “Alternative Face v1.1”, is the work of Mario Klingemann, a German artist. It plays audio from an NBC interview with Ms Conway through the mouth of Ms Hardy’s digital ghost. The video is wobbly and pixelated; a competent visual-effects shop could do much better. But Mr Klingemann did not fiddle with editing software to make it. Instead, he took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm. His computer spat it out automatically after being force-fed old music videos of Ms Hardy. It is a recording of something that never happened.”

AlphaGo, in context

Medium, AlphaGo, ML, machine learning, AI, go

AlphaGo is made up of a number of relatively standard techniques: behavior cloning (supervised learning on human demonstration data), reinforcement learning (REINFORCE), value functions, and Monte Carlo Tree Search (MCTS). However, the way these components are combined is novel and not exactly standard. In particular, AlphaGo uses a SL (supervised learning) policy to initialize the learning of an RL (reinforcement learning) policy that gets perfected with self-play, which they then estimate a value function from, which then plugs into MCTS that (somewhat surprisingly) uses the (worse!, but more diverse) SL policy to sample rollouts. In addition, the policy/value nets are deep neural networks, so getting everything to work properly presents its own unique challenges (e.g. value function is trained in a tricky way to prevent overfitting). On all of these aspects, DeepMind has executed very well. That being said, AlphaGo does not by itself use any fundamental algorithmic breakthroughs in how we approach RL problems.

via https://medium.com/@karpathy/alphago-in-context-c47718cb95a5
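Of the components the excerpt lists, REINFORCE is the easiest to show in isolation. A minimal, self-contained sketch follows; this is not AlphaGo's actual implementation, and the two-armed bandit problem, learning rate, and step count are illustrative assumptions:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_bandit(arm_rewards, steps=2000, lr=0.1, seed=0):
    """Plain REINFORCE on a multi-armed bandit: sample an action from a
    softmax policy, then nudge the logits along the score-function
    gradient scaled by the reward (no baseline, for simplicity)."""
    rng = random.Random(seed)
    logits = [0.0] * len(arm_rewards)
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choices(range(len(probs)), weights=probs)[0]
        r = arm_rewards[a]
        # d log pi(a) / d logit_i = (1 if i == a else 0) - probs[i]
        for i in range(len(logits)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            logits[i] += lr * r * grad
    return softmax(logits)

final_policy = reinforce_bandit([0.1, 1.0])
```

After training, the policy concentrates its probability mass on the higher-reward arm. AlphaGo applies this same gradient idea to a deep policy network over Go moves, with self-play outcomes as the reward.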

Physiognomy’s New Clothes

Medium, AI, machine learning, physiognomy, bias, prejudice, false objectivity, Blaise Aguera y Arcas

The practice of using people’s outer appearance to infer inner character is called physiognomy. While today it is understood to be pseudoscience, the folk belief that there are inferior “types” of people, identifiable by their facial features and body measurements, has at various times been codified into country-wide law, providing a basis to acquire land, block immigration, justify slavery, and permit genocide. When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism.

Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development. Whether intentional or not, this “laundering” of human prejudice through computer algorithms can make those biases appear to be justified objectively.


via https://medium.com/@blaisea/physiognomys-new-clothes-f2d4b59fdd6a

Dear Tech, You Suck at Delight

Medium, Sara Wachter-Boettcher, AI, UI, tech, Apple, siri, partial automation

What we’ve found, over and over, is an industry willing to invest endless resources chasing “delight” — but when put up to the pressure of real life, the results are shallow at best, and horrifying at worst. Consider this: Apple has known Siri had a problem with crisis since it launched in 2011. Back then, if you told it you were thinking about shooting yourself, it would give you directions to a gun store. When bad press rolled in, Apple partnered with the National Suicide Prevention Lifeline to offer users help when they said something Siri identified as suicidal. It’s not just crisis scenarios, either. Hell, Apple Health claimed to track “all of your metrics that you’re most interested in” back in 2014 — but it didn’t consider period tracking a worthwhile metric for over a year after launch.

via https://medium.com/@sara_ann_marie/dear-tech-you-suck-at-delight-86382d101575

Ethical Autonomous Algorithms

Medium, Matthieu Cherubini, ethics, automation, autonomy, algorithms, AI

Today, with the rapid development of digital technology, we can increasingly attempt to follow Leibniz’s logic. An increasing level of sophistication, to the point of some products becoming highly or fully autonomous, leads to complex situations requiring some form of ethical reasoning — autonomous vehicles and lethal battlefield robots are good examples of such products due to the tremendous complexity of tasks they have to carry out, as well as their high degree of autonomy. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? At present there exists no universal standard dealing with the ethics of automated systems — will they become a commodity that one can buy, change and resell depending on personal taste? Or will the ethical frameworks embedded into automated products be those chosen by the manufacturer? More importantly, as ethics has been a field under study for millennia, can we ever suppose that our current subjective ethical notions be taken for granted, and used for products that will make decisions on our behalf in real-world situations?

via https://medium.com/@mchrbn/ethical-autonomous-algorithms-5ad07c311bcc

A New Cryptocurrency For Coordinating Artificial Intelligence on Numerai

Medium, Numerai, cryptocurrency, finance, economics, network effects, AI, capital allocation

Nearly all of the most valuable companies throughout history were valuable through their strong network effects. If there is one motif in American economic history it is network effects. Every railroad made the railroad network more valuable, every telephone made the telephone network more valuable, and every Internet user made the Internet network more valuable. But no hedge fund has ever harnessed network effects. Negative network effects are too pervasive in finance, and they are the reason that there is no one hedge fund monopoly managing all the money in the world. For perspective, Bridgewater, the biggest hedge fund in the world, manages less than 1% of the total actively managed money. Facebook, on the other hand, with its powerful network effects, has a 70% market share in social networking. The most valuable hedge fund in the 21st century will be the first hedge fund to bring network effects to capital allocation.

via https://medium.com/numerai/a-new-cryptocurrency-for-coordinating-artificial-intelligence-on-numerai-9251a131419a

NanoNets : How to use Deep Learning when you have Limited Data

Medium, Machine Learning, deep learning, transfer learning, small data, AI, cats, Van Gogh

With transfer learning, we can take a pretrained model, which was trained on a large, readily available dataset (trained on a completely different task, with the same input but a different output), and then try to find layers whose output makes for reusable features. We use the output of such a layer as input features to train a much smaller network that requires far fewer parameters. This smaller network only needs to learn the relations for your specific problem, having already learnt about patterns in the data from the pretrained model. This way a model trained to detect cats can be reused to reproduce the work of Van Gogh.

via https://medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab
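The recipe the excerpt describes, a frozen feature extractor plus a small trainable head, can be sketched in a few lines of numpy. Here a random nonlinear projection stands in for the pretrained model's reusable layer, and the dataset is a made-up toy problem; both are illustrative assumptions, not the post's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network's reusable feature layer: a FROZEN
# nonlinear projection. In practice this would be a large model trained
# on a big dataset, with its task-specific output layer removed.
W_frozen = rng.normal(size=(2, 16))

def features(x):
    return np.tanh(x @ W_frozen)   # frozen: never updated below

# Small labeled dataset for the NEW task (two well-separated blobs).
X = np.vstack([rng.normal(-1.0, 0.3, size=(20, 2)),
               rng.normal(+1.0, 0.3, size=(20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Train only a tiny logistic-regression "head" on the frozen features --
# far fewer parameters than training a whole network from scratch.
w = np.zeros(16)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(features(X) @ w)))
    w -= 0.5 * features(X).T @ (p - y) / len(y)   # gradient descent step

accuracy = np.mean((features(X) @ w > 0) == (y == 1))
```

Only `w` (16 parameters) is ever updated; the "pretrained" weights stay fixed, which is what makes the approach workable on limited data.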

Introducing Factmata — Artificial intelligence for automated fact-checking

Medium, facts, post-truth, AI, machine learning, fact checking

Over the course of the next few months, we will be launching a prototype of the research already completed in statistical fact checking and claim detection. So far, our work has been in identifying claims in text by the named entities they contain, what economic statistics those claims are about, and verifying if they are “fact-checkable”. At the moment, we can only check claims that can be validated by known statistical databases — we built our system on Freebase (a fact database that came out of Wikipedia’s knowledge graph), and will be migrating it to new databases such as EUROSTAT and the World Bank Databank.

via https://medium.com/factmata/introducing-factmata-artificial-intelligence-for-political-fact-checking-db8acdbf4cf1

How to regulate an algorithm

Medium, AI, algorithms, self-analysis, conversation, explication, autoexplication, self-describing-processes

As we make algorithms that can improve themselves — stumbling first steps on the road to artificial intelligence — how should we regulate them? Should we require them to tell us their every step […] Or should we let the algorithms run unfettered? Nara Logics’ Jana Eggers […] suggests that a good approach is to have algorithms explain themselves. After all, humans are terrible at tracking their actions, but software has no choice but to do so. Each time a machine learning algorithm generates a conclusion, it should explain why it did so. Then auditors and regulators can query the justifications to see if they’re allowed. On the surface, this seems like a good idea: Just turn on logging, and you’ll have a detailed record of why an algorithm chose a particular course of action, or classified something a certain way. […] There’s a tension between transparent regulation of the algorithms that rule our futures (having them explain themselves to us so we can guide and hone them) and the speed and alacrity with which an unfettered algorithm can evolve, adapt, and improve better than others. Is he who hesitates to unleash an AI without guidance lost? There’s no simple answer here. It’s more like parenting than computer science: Giving your kid some freedom, and a fundamental moral framework, and then randomly checking in to see that the kid isn’t a jerk. But simply asking to share the algorithm won’t give us the controls and changes we’re hoping to see.

via https://medium.com/pandemonio/how-to-regulate-an-algorithm-c2e70048da3
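Eggers' suggestion, that each conclusion come with a queryable justification, can be sketched concretely. The scoring model below is a hypothetical toy; the feature names, weights, and log format are all invented for illustration:

```python
# Toy "self-explaining" decision function: every conclusion is logged
# together with the per-feature contributions that produced it, so an
# auditor or regulator can later query WHY a decision was made.
WEIGHTS = {"income": 0.8, "debt": -1.2, "age": 0.1}  # illustrative only

def decide_and_log(applicant, audit_log):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "deny"
    audit_log.append({
        "inputs": dict(applicant),
        "contributions": contributions,   # the per-decision "explanation"
        "score": score,
        "decision": decision,
    })
    return decision

audit_log = []
decision = decide_and_log({"income": 2.0, "debt": 1.0, "age": 0.3}, audit_log)

# An auditor can now ask which factor dominated the decision:
entry = audit_log[-1]
dominant = max(entry["contributions"],
               key=lambda k: abs(entry["contributions"][k]))
```

As the excerpt notes, this kind of logging is easy for a linear model; the open problem is that the justifications of large learned models are far harder to render queryable, and logging alone does not resolve the regulation-versus-speed tension.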

Murat Pak: Designing the Mind of an Online Curator

Medium, bots, aesthetics, Archillect, design, fashion, art, technology, curation, AI, character

From the very beginning, since Archillect was made to find images by following a certain relational structure, I had to trust that Archillect would have a certain character in what she found and shared, which would create an almost personal profile. This is the reason I wanted to present Archillect as a person rather than a random bot. As people perceived Archillect as a character, a personality, they also contributed to the project through the ways they interacted with the project as a result of this perception. This was important to me.

via https://medium.com/@lintropy/murat-pak-designing-the-mind-of-an-online-curator-5785e373127d

Artificial intelligence will force us to confront our values

Medium, AI, ethics

For the foreseeable future, “artificial intelligence” is really just a term to describe advanced analysis of massive datasets, and the models that use that data to identify patterns or make predictions about everything from traffic patterns to criminal justice outcomes. AI can’t think for itself — it’s taught by humans to perform tasks based on the “training data” that we provide, and these systems operate within parameters that we define. But this data often reflects unhealthy social dynamics, like race and gender-based discrimination, that can be easy to miss because we’ve become so desensitized to their presence in society.

via https://medium.com/equal-future/artificial-intelligence-will-force-us-to-confront-our-values-6f32682a32ec

Three Challenges for Artificial Intelligence in Medicine

Medium, Medicine, AI

Modern research has become so specialized that our notion of impact is sometimes siloed. A world-class clinician may be rewarded for inventing a new surgery; an AI researcher may get credit for beating the world record on MNIST. When two fields cross, there can sometimes be fear, misunderstanding, or culture clashes. We’re not unique in history. In 1944, the foundations of quantum physics had been laid, including, dramatically, the later detonation of the first atomic bomb. After the war, a generation of physicists turned their attention to biology. In the 1944 book What is Life?, Erwin Schrödinger referred to a sense of noblesse oblige that prevented researchers in disparate fields from collaborating deeply, and “beg[ged] to renounce the noblesse”.

via https://blog.cardiogr.am/three-challenges-for-artificial-intelligence-in-medicine-dfb9993ae750

Artificial intelligence is hard to see

Medium, AI, judgement, ethics, automation, algorithmic

The ‘Terror of War’ case, then, is the tip of the iceberg: a rare visible instance that points to a much larger mass of unseen automated and semi-automated decisions. The concern is that most of these ‘weak AI’ systems are making decisions that don’t garner such attention. They are embedded at the back-end of systems, working at the seams of multiple data sets, with no consumer-facing interface. Their operations are mainly unknown, unseen, and with impacts that take enormous effort to detect.

via https://medium.com/@katecrawford/artificial-intelligence-is-hard-to-see-a71e74f386db

Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI

Google, AI, DeepMind, energy, electricity, machine-learning

In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. It uses a similar technique to DeepMind software that taught itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York. The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said. The savings translate into a 15 percent improvement in power usage efficiency, or PUE, Google said in a statement. PUE measures how much electricity Google uses for its computers, versus the supporting infrastructure like cooling systems.

via http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai
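PUE, the metric quoted above, is a simple ratio: total facility power divided by the power that actually reaches the computing equipment. A quick sketch, with wattage figures made up for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: all power drawn by the facility divided
    by the power delivered to the computing equipment. 1.0 is the
    theoretical ideal (zero overhead for cooling, power delivery, etc.);
    real data centers run above it."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: a 10 MW facility delivering 8 MW to its servers.
example = pue(10_000, 8_000)   # 1.25, i.e. 25% overhead
```

On this definition, "cutting power usage by several percentage points" shows up as the overhead portion of the ratio shrinking toward 1.0.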

5 Magical Beasts And How To Replace Them With A Shell Script

occultism, history, culture, magic, daemonology, alchemy, AI, computing, automation, bots

It was the ultimate goal of many schools of occultism to create life. In Muslim alchemy, it was called Takwin. In modern literature, Frankenstein is obviously a story of abiogenesis, and not only does the main character explicitly reference alchemy as his inspiration but it’s partially credited for sparking the Victorian craze for occultism. Both the Golem and the Homunculus are different traditions’ alchemical paths to abiogenesis, in both cases partially as a way of getting closer to the Divine by imitating its power. And abiogenesis has also been an object of fascination for a great deal of AI research. Sure, in recent times we might have started to become excited by its power to create a tireless servant who can schedule meetings, manage your Twitter account, spam forums, or just order you a pizza, but the historical context is driven by the same goal as the alchemists: create artificial life. Or, more accurately, to create an artificial human. Will we get there? Is it even a good idea? One of the talks at a recent chatbot convention in London was entitled “Don’t Be Human”. Meanwhile, possibly the largest test of an intended-to-be-humanlike (and friendlike) bot is going on via the Chinese chat service WeChat.

via http://www.antipope.org/charlie/blog-static/2016/04/5-magical-beasts-and-how-to-re.html

A ‘Brief’ History of Neural Nets and Deep Learning

history, machine-learning, machinelearning, neural-nets, deep-learning, AI, computing

This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.

via http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/

“Compared with the accuracy of various human judges reported in the meta-analysis, computer models need 10, 70, 150, and 300…

psychology, personality, AI, social media, technology, judgement, personality assessment, Big 5, data driven decisions

“Compared with the accuracy of various human judges reported in the meta-analysis, computer models need 10, 70, 150, and 300 Likes, respectively, to outperform an average work colleague, cohabitant or friend, family member, and spouse (graypoints) […]

Automated, accurate, and cheap personality assessment tools could affect society in many ways: marketing messages could be tailored to users’ personalities; recruiters could better match candidates with jobs based on their personality; products and services could adjust their behavior to best match their users’ characters and changing moods; and scientists could collect personality data without burdening participants with lengthy questionnaires. Furthermore, in the future, people might abandon their own psychological judgments and rely on computers when making important life decisions, such as choosing activities, career paths, or even romantic partners. It is possible that such data-driven decisions will improve people’s lives”

http://www.pnas.org/content/112/4/1036.full.pdf

Object Lessons in Freedom

AI, AGI, DeepMind, Google, Ethics, learning, development, parenting, robot apocalypse, mind children

So let’s address our children as though they are our children, and let us revel in the fact they are playing and painting and creating; using their first box of crayons, and us proud parents are putting every masterpiece on the fridge. Even if we are calling them all “nightmarish” — a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.

http://www.afutureworththinkingabout.com/?p=4906

Agency

fiction, Quinn Norton, AI, agents, soft AI, pattern recognition, algorithmics, automation

Since we’re agent engineers, my husband and I tend to think agents are great. Also, we’re lazy and stupid by our own happy admission — and agents make us a lot smarter and more productive than we would be if we weren’t “borrowing” our brains from the rest of the internet. Like most people, whatever ambivalence we feel about our agents is buried under how much better they make our lives. Agents aren’t true AI, though heavy users sometimes think they are. They are sets of structured queries, a few API modules for services the agent’s owner uses, sets of learning algorithms you can enable by “turning up” their intelligence, and procedures for interfacing with people and things. As you use them they collect more and more of a person’s interests and history — we used to say people “rub off” on their agents over time.

https://medium.com/message/agency-3d37adfc69a3

sentient closet-based AI

AI, barbie, games, new uncanny, GLaDOS

This is a game in which a sentient closet-based AI locks four girls in a room (with giant metal barriers) because one of them smudged her make-up, and forces them to repeatedly apply lipstick and eyeliner to freakishly giant doll heads until he is satisfied. That’s not my arch interpretation of events. That’s what actually happens.

http://www.rockpapershotgun.com/2013/11/27/barbie-dreamhouse-party-creeps-the-crap-out-of-me/

Extended Senses & Invisible Fences

machine perception, AI, machine readable, humanities, cities, technology, urbanism, urban

Amidst the swirling maelstrom of technological progress so often heralded as the imminent salvation to all our ills, it can be necessary to remind ourselves that humanity sits at the center, not technology. And yet, we extrude these tools so effortlessly as if secreted by some glandular Technos expressed from deep within our genetic code. It’s difficult to separate us from our creations but it’s imperative that we examine this odd relationship as we engineer more autonomy, sensitivity, and cognition into the machines we bring into this world. The social environment, typified by the contemporary urban landscape, is evolving to include non-human actors that routinely engage with us, examining our behaviors, mediating our relationships, and assigning or revoking our rights. It is this evolving human-machine socialization that I wish to consider.

http://www.urbeingrecorded.com/news/2012/06/27/extended-senses-invisible-fences/