Posts tagged neural networks

Software 2.0

Medium, Andrej Karpathy, software, computing, AI, neural networks, programming, statistics, 2017

Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input-output example pairs) and use the computational resources at our disposal to search the program space for a program that satisfies the constraints. In the case of neural networks, we restrict the search to a continuous subset of the program space where the search process can be made (somewhat surprisingly) efficient with backpropagation and stochastic gradient descent. It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data than to explicitly write the program. A large portion of the programmers of tomorrow do not maintain complex software repositories, write intricate programs, or analyze their running times. They collect, clean, manipulate, label, analyze, and visualize data that feeds neural networks.
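The search Karpathy describes can be sketched in a few lines. This is a minimal toy (not his code): the "constraints" are input-output pairs for y = 2x + 1, and stochastic gradient descent "writes" the two-weight program that satisfies them.

```python
import numpy as np

# The "specification": a dataset of input-output pairs for y = 2x + 1.
rng = np.random.default_rng(0)
xs = rng.uniform(-1, 1, size=(100, 1))   # example inputs
ys = 2.0 * xs + 1.0                      # desired outputs

# The "program": two weights, found by search rather than written by hand.
w, b = 0.0, 0.0
lr = 0.1
for step in range(500):                  # stochastic gradient descent
    i = rng.integers(0, len(xs))         # sample one constraint
    err = (w * xs[i, 0] + b) - ys[i, 0]
    w -= lr * err * xs[i, 0]             # gradient of the squared error
    b -= lr * err

print(w, b)  # the searched weights should approach 2.0 and 1.0
```

The same loop, with many more weights and automatic differentiation, is what training a real network amounts to.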

via https://medium.com/@karpathy/software-2-0-a64152b37c35

Why we should be Deeply Suspicious of BackPropagation

Medium, machine learning, ML, backpropagation, neural networks, GAN

That something else, call it imagination or call it dreaming, does not require validation with immediate reality. The closest incarnation we have today is the generative adversarial network (GAN). A GAN consists of two networks, a generator and a discriminator. One can consider a discriminator as a neural network that acts in concert with the objective function. That is, it validates an internal generator network with reality. The generator is an automaton that recreates an approximation of reality. A GAN works using back-propagation, and it does perform unsupervised learning. So perhaps unsupervised learning doesn't require an objective function; however, it may still need back-propagation.
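The generator/discriminator roles can be illustrated with a deliberately tiny toy (an assumption for illustration, not the article's code): a linear generator and a logistic discriminator, both trained by hand-derived back-propagation on 1-D data drawn from N(3, 1). Real GANs use deep networks and autodiff, but the adversarial loop is the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0        # generator G(z) = a*z + b
w, c = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    x = rng.normal(3.0, 1.0, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)   # noise
    g = a * z + b                     # fake samples

    # Discriminator step: ascend log D(x) + log(1 - D(g)).
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    w += lr * np.mean((1 - dx) * x - dg * g)
    c += lr * np.mean((1 - dx) - dg)

    # Generator step (non-saturating): ascend log D(G(z)).
    dg = sigmoid(w * (a * z + b) + c)
    grad_g = (1 - dg) * w             # d log D(g) / d g, by the chain rule
    a += lr * np.mean(grad_g * z)
    b += lr * np.mean(grad_g)

print(b)  # the generator's offset should drift toward the data mean (~3)
```

Note that the discriminator's gradient signal is all the generator ever sees of "reality", which is the sense in which the discriminator stands in for an objective function.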

via https://medium.com/intuitionmachine/the-deeply-suspicious-nature-of-backpropagation-9bed5e2b085e

A new kind of deep neural networks

Medium, neural networks, RNN, LeNet5, AlexNet, generative networks

This new kind of neural network is an evolution of the initial feed-forward model of LeNet5 / AlexNet and derivatives, and includes more sophisticated bypass schemes than ResNet / Inception. These feed-forward neural networks are also called encoders, as they compress and encode images into smaller representation vectors. The new wave of neural networks has two important new features:
generative branches: also called decoders, as they project a representation vector back into the input space
recurrent layers: that combine representations from previous time steps with the inputs and representations of the current time step
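The two features above can be shown at the level of shapes. This is an assumed toy architecture, not the article's code: an encoder compresses the input, a decoder (generative branch) projects the representation back into input space, and a recurrent layer combines the previous time step's state with the current input.

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 64, 16                        # input size, representation size

W_enc = rng.normal(0, 0.1, (H, D))   # encoder: x -> h
W_dec = rng.normal(0, 0.1, (D, H))   # decoder (generative branch): h -> x_hat
W_in = rng.normal(0, 0.1, (H, D))    # recurrent layer: input weights
W_rec = rng.normal(0, 0.1, (H, H))   # recurrent layer: state weights

def encode(x):
    return np.tanh(W_enc @ x)        # compress into a representation vector

def decode(h):
    return W_dec @ h                 # project back into the input space

def recurrent_step(x, h_prev):
    # Combine the previous time step's representation with the current input.
    return np.tanh(W_in @ x + W_rec @ h_prev)

x = rng.normal(size=D)
h = encode(x)
x_hat = decode(h)
h2 = recurrent_step(x, h)
print(h.shape, x_hat.shape, h2.shape)  # (16,) (64,) (16,)
```

The decoder's output living in the same space as the input is what makes the branch "generative", and the `W_rec @ h_prev` term is what makes the layer recurrent.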

via https://medium.com/towards-data-science/a-new-kind-of-deep-neural-networks-749bcde19108

Inceptionism: Going Deeper into Neural Networks

image classification, google research, neural networks, feedback, perception, classification

One of the challenges of neural networks is understanding what exactly goes on at each layer. We know that after training, each layer progressively extracts higher and higher-level features of the image, until the final layer essentially makes a decision on what the image shows. For example, the first layer may look for edges or corners. Intermediate layers interpret the basic features to look for overall shapes or components, like a door or a leaf. The final few layers assemble those into complete interpretations: these neurons activate in response to very complex things such as entire buildings or trees. One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation. Say you want to know what sort of image would result in "Banana." Start with an image full of random noise, then gradually tweak the image towards what the neural net considers a banana.
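"Turning the network upside down" is gradient ascent on the input rather than the weights. A minimal sketch, with a frozen random linear classifier standing in for a trained network and an arbitrary class index standing in for "banana" (both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_classes = 256, 10
W = rng.normal(0, 0.1, (n_classes, n_pixels))  # frozen "trained" classifier
target = 3                                     # pretend this class is "banana"

img = rng.normal(0, 0.01, n_pixels)            # start from random noise
before = W[target] @ img                       # initial "banana" score
for _ in range(100):
    grad = W[target]                           # d score / d img for a linear net
    img += 0.1 * grad                          # tweak the image, not the weights
after = W[target] @ img

print(before < after)  # True: the image now scores higher as "banana"
```

With a deep network the gradient with respect to the input would come from backpropagation instead of being a constant row of weights, but the loop is identical.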

via http://googleresearch.blogspot.be/2015/06/inceptionism-going-deeper-into-neural.html