Posts tagged RNN

A new kind of deep neural networks

Medium, neural networks, RNN, LeNet5, AlexNet, generative networks

This new kind of neural network is an evolution of the initial feed-forward model of LeNet5 / AlexNet and derivatives, and includes more sophisticated by-pass schemes than ResNet / Inception. These feed-forward networks are also called encoders, as they compress and encode images into smaller representation vectors. The new wave of neural networks has two important new features (a minimal code sketch follows the list):
generative branches: also called decoders, as they project a representation vector back into the input space
recurrent layers: that combine representations from previous time steps with the inputs and representations of the current time step
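
Here is a minimal PyTorch sketch of that encoder / recurrent layer / decoder arrangement; the layer sizes and the RecurrentAutoencoder name are illustrative assumptions, not the architecture from the linked article:

# Encoder compresses each frame, a GRU cell folds in past state,
# and the decoder projects the representation back into image space.
import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Encoder: feed-forward convolutions compress a 3x64x64 image
        # into a small representation vector (assumed sizes).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, hidden),
        )
        # Recurrent layer: combines the representation from previous
        # time steps with the encoding of the current frame.
        self.rnn = nn.GRUCell(hidden, hidden)
        # Generative branch (decoder): projects the representation
        # vector back into the input space.
        self.decoder = nn.Sequential(
            nn.Linear(hidden, 128 * 8 * 8),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):
        # frames: (batch, time, 3, 64, 64)
        batch, steps = frames.shape[:2]
        h = torch.zeros(batch, self.rnn.hidden_size, device=frames.device)
        outputs = []
        for t in range(steps):
            z = self.encoder(frames[:, t])   # encode the current frame
            h = self.rnn(z, h)               # combine with previous state
            outputs.append(self.decoder(h))  # decode back to image space
        return torch.stack(outputs, dim=1)

model = RecurrentAutoencoder()
recon = model(torch.rand(2, 5, 3, 64, 64))  # reconstructions: (2, 5, 3, 64, 64)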

via https://medium.com/towards-data-science/a-new-kind-of-deep-neural-networks-749bcde19108

Recurrent generative auto-encoders and novelty search

GAN, RNN, RGAN, VAE, stability, autoencoders, pattern-recognition, Monte-Carlo, machine-learning

This post summarizes a bunch of connected tricks and methods I explored with the help of my co-authors. Following the previous post, about the stability properties of GANs, the overall aim was to improve our ability to train generative models stably and accurately, but we went through a lot of variations and experiments with different methods on the way. I’ll try to explain why I think these things worked, but we’re still exploring it ourselves as well.

The basic problem is that generative neural network models seem to either be stable but fail to properly capture higher-order correlations in the data distribution (which manifests as blurriness in the image domain), or they are very unstable to train because they have to learn both the distribution and the loss function at the same time, leading to issues like non-stationarity and positive feedback loops. The way GANs capture higher-order correlations is to say ‘if there’s any statistic that distinguishes generated samples from real examples, the discriminator will exploit it’. That is, they try to make individual samples indistinguishable from real examples, rather than merely matching aggregate statistics. The cost of that is the instability arising from not having a joint loss function: the discriminator can make a move that disproportionately harms the generator, and vice versa.
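
To make the "learn both the distribution and the loss function at the same time" point concrete, here is a minimal sketch of the alternating updates in a vanilla GAN training step, in PyTorch; the tiny MLPs and toy data are placeholder assumptions, not the models from the post:

# The discriminator update and the generator update optimize different
# objectives; neither descends a shared joint loss, which is where the
# non-stationarity comes from.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):
    batch = real.size(0)

    # Discriminator move: push D(real) toward 1 and D(fake) toward 0.
    # This is the learned loss function: it exploits any statistic that
    # separates generated samples from real examples.
    fake = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator move: push D(G(z)) toward 1. Each player's update changes
    # the other player's loss surface, so one side can make a move that
    # disproportionately harms the other.
    fake = G(torch.randn(batch, latent_dim))
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# e.g. real samples drawn from some 2-D toy distribution
print(train_step(torch.randn(32, data_dim)))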

via http://www.araya.org/archives/1306

A Return to Machine Learning

Medium, Machine Learning, RNN, CNN, autoencoder, visualization, visualisation

This last year I’ve been getting back into machine learning and AI, rediscovering the things that drew me to it in the first place. I’m still in the “learning” and “small studies” phase that naturally precedes crafting any new artwork, and I wanted to share some of that process here. This is a fairly linear record of my path, but my hope is that this post is modular enough that anyone interested in a specific part can skip ahead and find something that gets them excited, too. I’ll cover some experiments with these general topics:
Convolutional Neural Networks
Recurrent Neural Networks
Dimensionality Reduction and Visualization
Autoencoders

via https://medium.com/@kcimc/a-return-to-machine-learning-2de3728558eb