Posts tagged machine-learning

Recurrent generative auto-encoders and novelty search

GAN, RNN, RGAN, VAE, stability, autoencoders, pattern-recognition, Monte-Carlo, machine-learning

This post summarizes a bunch of connected tricks and methods I explored with the help of my co-authors. Following the previous post, about the stability properties of GANs, the overall aim was to improve our ability to train generative models stably and accurately, but we went through a lot of variations and experiments with different methods along the way. I’ll try to explain why I think these things worked, but we’re still exploring it ourselves as well. The basic problem is that generative neural network models seem either to be stable but fail to properly capture higher-order correlations in the data distribution (which manifests as blurriness in the image domain), or to be very unstable to train because they have to learn both the distribution and the loss function at the same time, leading to issues like non-stationarity and positive feedback loops. The way GANs capture higher-order correlations is to say ‘if there’s any statistic that distinguishes generated examples from real ones, the discriminator will exploit it’. That is, they try to make each generated example individually indistinguishable from real examples, rather than matching only in aggregate. The cost of that is the instability arising from not having a joint loss function – the discriminator can make a move that disproportionately harms the generator, and vice versa.
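As a rough illustration of that last point, here is a minimal GAN training step in PyTorch (the network sizes, learning rates, and names are made up for the sketch, not taken from the post): the discriminator and generator are optimised against separate objectives, with the discriminator acting as a learned, non-stationary loss for the generator.

```python
# Minimal GAN training-step sketch (PyTorch). There is no single joint loss:
# D and G each take a step on their own objective, which is where the
# instability described above comes from.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    fake = G(torch.randn(n, latent_dim)).detach()   # no gradient into G here
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator update: push D(G(z)) -> 1, i.e. use D as the (moving) loss.
    g_loss = bce(D(G(torch.randn(n, latent_dim))), torch.ones(n, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```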

via http://www.araya.org/archives/1306

In this paper, we demonstrated techniques for generating accessories in the form of eyeglass frames that, when printed and worn,…

face recognition, FDS, machine-learning, ML, DNN, perturbation, adversarial networks, Invisibility with the use of accessories, adversarial images

In this paper, we demonstrated techniques for generating accessories in the form of eyeglass frames that, when printed and worn, can effectively fool state-of-the-art face-recognition systems. Our research builds on recent work in fooling machine-learning classifiers by perturbing inputs in an adversarial way, but does so with attention to two novel goals: the perturbations must be physically realizable and inconspicuous. We showed that our eyeglass frames enabled subjects both to dodge recognition and to impersonate others. We believe that our demonstration of techniques to realize these goals through printed eyeglass frames is both novel and important, and should inform future deliberations on the extent to which ML can be trusted in adversarial settings. Finally, we extended our work in two additional directions: first, to so-called black-box FRSs that can be queried but whose internals are not known; and second, to defeat state-of-the-art face detection systems.
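The paper's own method constrains the perturbation to printable eyeglass-frame pixels; as a much simpler sketch of the underlying idea of adversarial input perturbation, here is an FGSM-style step in PyTorch (the function name and epsilon value are illustrative, not from the paper).

```python
# Sketch of a single adversarial perturbation step (FGSM-style), to illustrate
# the general idea the paper builds on. The paper itself restricts the
# perturbation to eyeglass-frame pixels and adds physical-realizability
# constraints, which are not shown here.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, true_label, epsilon=0.03):
    """Nudge input x in the direction that increases the classifier's loss.

    x: input tensor, true_label: LongTensor of class indices.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), true_label)
    loss.backward()
    # One signed-gradient step; larger epsilon means a more visible change.
    return (x + epsilon * x.grad.sign()).detach()
```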

https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf

Research Blog: Wide & Deep Learning: Better Together with TensorFlow

deep-learning, wide-learning, machine-learning, generalisation, learning, classification

The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven’t seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can’t fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization? It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
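A minimal sketch of the joint wide-and-deep idea, written here in PyTorch rather than TensorFlow and with made-up feature names and sizes: the wide linear part memorises sparse feature combinations, the deep embedding part generalises, and the two are trained jointly by summing their logits.

```python
# Wide & Deep sketch (PyTorch, illustrative only; Google's reference
# implementation ships with TensorFlow).
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    def __init__(self, n_wide_features, n_categories, embed_dim=8):
        super().__init__()
        # Wide part: linear model over sparse (e.g. crossed) features -> memorisation.
        self.wide = nn.Linear(n_wide_features, 1)
        # Deep part: embeddings plus an MLP over categorical ids -> generalisation.
        self.embed = nn.Embedding(n_categories, embed_dim)
        self.deep = nn.Sequential(
            nn.Linear(embed_dim, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, wide_x, cat_ids):
        # Jointly trained: the two parts' logits are simply summed.
        deep_logit = self.deep(self.embed(cat_ids).mean(dim=1))
        return torch.sigmoid(self.wide(wide_x) + deep_logit)

# Example instantiation with arbitrary sizes:
# model = WideAndDeep(n_wide_features=1000, n_categories=5000)
```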

via https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html

Google Cuts Its Giant Electricity Bill With DeepMind-Powered AI

Google, AI, DeepMind, energy, electricity, machine-learning

In recent months, the Alphabet Inc. unit put a DeepMind AI system in control of parts of its data centers to reduce power consumption by manipulating computer servers and related equipment like cooling systems. It uses a similar technique to DeepMind software that taught itself to play Atari video games, Hassabis said in an interview at a recent AI conference in New York. The system cut power usage in the data centers by several percentage points, “which is a huge saving in terms of cost but, also, great for the environment,” he said. The savings translate into a 15 percent improvement in power usage efficiency, or PUE, Google said in a statement. PUE measures how much electricity Google uses for its computers, versus the supporting infrastructure like cooling systems.

via http://www.bloomberg.com/news/articles/2016-07-19/google-cuts-its-giant-electricity-bill-with-deepmind-powered-ai

Rise Of The Trollbot

Internet, troll, bots, chatbot, machine-learning, lols, 4chan, celebrity, democracy

Right now, if you want to have someone attacked by a horde of angry strangers, you need to be a celebrity. That’s a real problem on Twitter and Facebook both, with a few users in particular becoming well-known for abusing their power to send their fans after people with whom they disagree. But remember, the Internet’s about democratising power, and this is the latest frontier. With a trollbot and some planning, this power will soon be accessible to anyone.

via http://www.antipope.org/charlie/blog-static/2016/04/rise-of-the-trollbot.html#more

DeepFool: A simple and accurate method to fool deep neural networks

neural-networks, DeepFool, DeepDream, CNN, classification, machine-learning, deep-learning

State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. In this paper, we fill this gap and propose the DeepFool framework to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers.
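As a rough sketch of the idea (my simplified reading of the binary-classifier case, not the authors' reference implementation): repeatedly linearise the classifier around the current point and take the smallest step that crosses the decision boundary.

```python
# Simplified DeepFool-style loop for a binary classifier (illustrative sketch;
# the paper handles the multi-class case and gives the full algorithm).
import torch

def deepfool_binary(f, x, max_iters=50, overshoot=0.02):
    """f maps an input tensor to a scalar score; sign(f) is the predicted class."""
    x0 = x.clone().detach()
    x_adv = x0.clone()
    orig_sign = torch.sign(f(x0))
    for _ in range(max_iters):
        x_adv = x_adv.detach().requires_grad_(True)
        score = f(x_adv)
        if torch.sign(score) != orig_sign:   # crossed the boundary: done
            break
        score.backward()
        grad = x_adv.grad
        # Minimal L2 step onto the linearised boundary: -f(x) * grad / ||grad||^2.
        step = -score.item() * grad / (grad.norm() ** 2 + 1e-8)
        x_adv = x_adv.detach() + (1 + overshoot) * step
    return x_adv, x_adv - x0
```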

via http://gitxiv.com/posts/iZpWrvKdXmtRaQusa/deepfool-a-simple-and-accurate-method-to-fool-deep-neural

A ‘Brief’ History of Neural Nets and Deep Learning

history, machine-learning, machinelearning, neural-nets, deep-learning, AI, computing

This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.

via http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/