Posts tagged RNN

dadabots, death, extinction, cannibal corpse, RNN, BigGAN, AI, death metal, livestream, 2019

video link

To celebrate Halloween we trained a net that creates endless vignettes about murdering humans, torture, and necrophilia (kinda funny and campy, like Evil Dead) using one of the greatest datasets ever: Cannibal Corpse lyrics.

😵🗡️🤖🔪😵🗡️🤖

Neural network generating death metal, via livestream 24/7.

Audio / lyrics / visuals are all generative.

Powered by DADABOTS http://dadabots.com 


🤖Audio generated with modified SampleRNN trained on Cannibal Corpse
🤖Lyrics generated with the pretrained 117M GPT-2 fine-tuned on Cannibal Corpse (a minimal fine-tuning sketch follows this list)
🤖Meat images generated with BigGAN interpolations in the #butchershop latent space
🤖You can generate all kinds of gross stuff on artbreeder https://artbreeder.com/i?k=ff84821d51…
🤖Vocals separated using Wave-U-Net (yup it separates death growls)
🤖Read more about our scientific research into eliminating humans from music https://arxiv.org/abs/1811.06633
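
For anyone curious about the lyrics step, here is a minimal sketch (not the DADABOTS code) of fine-tuning the small GPT-2 checkpoint on a plain-text lyrics file with PyTorch and Hugging Face transformers, then sampling from it; the file name lyrics.txt, the prompt, and the hyperparameters are all placeholders:

```python
# Hedged sketch: fine-tune GPT-2 (117M/124M, Hugging Face id "gpt2") on a
# lyrics text file, then sample. Paths, prompt, and hyperparameters are
# placeholders, not the values used for the livestream.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

class LyricsDataset(Dataset):
    """Chunk one big text file into fixed-length blocks of token ids."""
    def __init__(self, path, block=256):
        ids = tokenizer(open(path).read(), return_tensors="pt").input_ids[0]
        self.blocks = ids[: len(ids) // block * block].view(-1, block)
    def __len__(self):
        return len(self.blocks)
    def __getitem__(self, i):
        return self.blocks[i]

loader = DataLoader(LyricsDataset("lyrics.txt"), batch_size=4, shuffle=True)
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        loss = model(batch, labels=batch).loss  # causal language-modeling loss
        loss.backward()
        opt.step()
        opt.zero_grad()

# Sample new vignettes from the fine-tuned model.
model.eval()
prompt = tokenizer("Hammer smashed", return_tensors="pt")  # placeholder prompt
out = model.generate(**prompt, do_sample=True, top_k=40, max_length=120,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0]))
```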

Neural Nets for Generating Music

Medium, music, algorithmic music, generative music, history, stochastic, RNN, ML, nsynth, LSTM, Kyle McDonald, 2017

Algorithmic music composition has developed a lot in the last few years, but the idea has a long history. In some sense, the first automatic music came from nature: Chinese windchimes, ancient Greek wind-powered Aeolian harps, or the Japanese water instrument suikinkutsu. But in the 1700s music became “algorithmic”: Musikalisches Würfelspiel, a game that generates short piano compositions from fragments, with choices made by dice.
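
To make the dice game concrete, here is a toy sketch of the procedure (my illustration, not from the article): every bar position has a small table of pre-written fragments, and a roll of two dice picks which fragment fills that bar; the fragment names here are placeholders rather than real measures.

```python
# Toy Musikalisches Würfelspiel: two dice choose one pre-written fragment
# per bar position from a lookup table. Fragment names are placeholders.
import random

N_BARS = 16  # e.g. a 16-bar minuet
table = {bar: {roll: f"fragment_{bar}_{roll}" for roll in range(2, 13)}
         for bar in range(N_BARS)}

piece = [table[bar][random.randint(1, 6) + random.randint(1, 6)]
         for bar in range(N_BARS)]
print(" | ".join(piece))
```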

Dice games, Markov chains, and RNNs aren’t the only ways to make algorithmic music. Some machine learning practitioners explore alternative approaches like hierarchical temporal memory or principal components analysis. But I’m focusing on neural nets because they are responsible for most of the big changes recently. (Though even within the domain of neural nets there are some directions I’m leaving out that have fewer examples, such as restricted Boltzmann machines for composing 4-bar jazz licks, short variations on a single song, hybrid RNN-RBM models, or hybrid autoencoder-LSTM models.)



via https://medium.com/artists-and-machine-intelligence/neural-nets-for-generating-music-f46dffac21c0

A new kind of deep neural networks

Medium, neural networks, RNN, LeNet5, AlexNet, generative networks

The new kind of neural network is an evolution of the initial feed-forward model of LeNet5 / AlexNet and derivatives, and includes more sophisticated bypass schemes than ResNet / Inception. These feed-forward neural networks are also called encoders, as they compress and encode images into smaller representation vectors. The new wave of neural networks has two important new features, sketched in code after the list below:
generative branches: also called decoders, as they project a representation vector back into the input space
recurrent layers: these combine representations from previous time steps with the inputs and representations of the current time step
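
As a concrete illustration, here is a minimal PyTorch sketch of those pieces: a convolutional encoder that compresses an image into a representation vector, a decoder (the generative branch) that projects the vector back into image space, and a GRU recurrent layer that combines the current step's representation with state carried over from previous steps. The architecture and sizes are arbitrary choices for illustration, not the model from the article.

```python
# Minimal encoder / recurrent layer / decoder sketch (illustrative only).
import torch
import torch.nn as nn

class RecurrentEncoderDecoder(nn.Module):
    def __init__(self, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(              # image -> representation vector
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 16 * 16, code_dim))
        self.rnn = nn.GRUCell(code_dim, code_dim)  # mixes past state with current code
        self.decoder = nn.Sequential(              # representation vector -> image
            nn.Linear(code_dim, 64 * 16 * 16), nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, frames):                     # frames: (time, batch, 3, 64, 64)
        h, outputs = None, []
        for x in frames:
            h = self.rnn(self.encoder(x), h)       # recurrent layer over codes
            outputs.append(self.decoder(h))        # generative branch (decoder)
        return torch.stack(outputs)

# Reconstruct a random 5-frame, 64x64 "clip": output shape matches the input.
clip = torch.rand(5, 2, 3, 64, 64)
print(RecurrentEncoderDecoder()(clip).shape)       # torch.Size([5, 2, 3, 64, 64])
```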

via https://medium.com/towards-data-science/a-new-kind-of-deep-neural-networks-749bcde19108

Recurrent generative auto-encoders and novelty search

GAN, RNN, RGAN, VAE, stability, autoencoders, pattern-recognition, Monte-Carlo, machine-learning

This post summarizes a bunch of connected tricks and methods I explored with the help of my co-authors. Following the previous post, about the stability properties of GANs, the overall aim was to improve our ability to train generative models stably and accurately, but we went through a lot of variations and experiments with different methods on the way. I’ll try to explain why I think these things worked, but we’re still exploring it ourselves as well. The basic problem is that generative neural network models seem to either be stable but fail to properly capture higher-order correlations in the data distribution (which manifests as blurriness in the image domain), or be very unstable to train due to having to learn both the distribution and the loss function at the same time, leading to issues like non-stationarity and positive feedback. The way GANs capture higher-order correlations is to say: if there is any statistic that distinguishes generated examples from real ones, the discriminator will exploit it. That is, they try to make samples individually indistinguishable from real examples, rather than merely indistinguishable in the aggregate. The cost of that is the instability arising from not having a joint loss function: the discriminator can make a move that disproportionately harms the generator, and vice versa.
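
A bare-bones GAN update loop makes the “no joint loss function” point concrete: the discriminator and generator are trained in alternation against separate, opposed objectives, so a step that improves one player can disproportionately harm the other. This is an illustrative sketch on toy 2-D data, not the experiments from the post.

```python
# Toy GAN on 2-D Gaussian data: alternating, opposed updates, no joint loss.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(128, 2) * 0.5 + 2.0  # stand-in for the data distribution

for step in range(1000):
    # Discriminator move: push D(real) up and D(fake) down.
    fake = G(torch.randn(128, 16)).detach()
    d_loss = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator move: push D(fake) up -- the discriminator's objective with the
    # roles reversed, so each player's improvement can undo the other's.
    fake = G(torch.randn(128, 16))
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```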

via http://www.araya.org/archives/1306

A Return to Machine Learning

Medium, Machine Learning, RNN, CNN, autoencoder, visualization, visualisation

This last year I’ve been getting back into machine learning and AI, rediscovering the things that drew me to it in the first place. I’m still in the “learning” and “small studies” phase that naturally precedes crafting any new artwork, and I wanted to share some of that process here. This is a fairly linear record of my path, but my hope is that this post is modular enough that anyone interested in a specific part can skip ahead and find something that gets them excited, too. I’ll cover some experiments with these general topics: Convolutional Neural Networks, Recurrent Neural Networks, Dimensionality Reduction and Visualization, Autoencoders

via https://medium.com/@kcimc/a-return-to-machine-learning-2de3728558eb