Posts tagged deep-learning

Deep Learning Techniques for Music Generation

arxiv, music, sound, generative, machine-learning, deep-learning

This book is a survey and analysis of different ways of using deep learning (deep artificial neural networks) to generate musical content. First, we propose a methodology of analysis based on four dimensions:

- objective: what musical content is to be generated (e.g., melody, accompaniment…);
- representation: what information formats are used for the corpus and for the expected generated output (e.g., MIDI, piano roll, text…);
- architecture: what type of deep neural network is to be used (e.g., recurrent network, autoencoder, generative adversarial network…);
- strategy: how to model and control the process of generation (e.g., direct feedforward, sampling, unit selection…).

For each dimension, we conduct a comparative analysis of various models and techniques. For the strategy dimension, we propose a tentative typology of possible approaches and mechanisms. This classification is bottom-up, based on the analysis of many existing deep-learning-based systems for music generation, which are described in this book.
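To make the four dimensions concrete, here is a minimal sketch of one common configuration the survey covers (objective: melody; representation: MIDI-pitch tokens; architecture: recurrent network; strategy: iterative sampling). The vocabulary size, window length, and layer widths are illustrative assumptions, not taken from the book:

```python
import tensorflow as tf

VOCAB = 130    # assumption: 128 MIDI pitches plus hold/rest tokens
WINDOW = 64    # assumption: length of training windows, in tokens

# Architecture dimension: a recurrent network (LSTM) that predicts the
# next melody token from the preceding window of tokens.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),                 # representation: token ids
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),   # next-token distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Strategy dimension: iterative sampling — feed the model a seed window,
# sample one token from its output distribution, append it, and repeat.
```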

via https://arxiv.org/abs/1709.01620

Wide & Deep Learning: Better Together with TensorFlow

deep-learning, wide-learning, machine-learning, generalisation, learning, classification

The human brain is a sophisticated learning machine, forming rules by memorizing everyday events (“sparrows can fly” and “pigeons can fly”) and generalizing those learnings to apply to things we haven’t seen before (“animals with wings can fly”). Perhaps more powerfully, memorization also allows us to further refine our generalized rules with exceptions (“penguins can’t fly”). As we were exploring how to advance machine intelligence, we asked ourselves the question—can we teach computers to learn like humans do, by combining the power of memorization and generalization? It’s not an easy question to answer, but by jointly training a wide linear model (for memorization) alongside a deep neural network (for generalization), one can combine the strengths of both to bring us one step closer. At Google, we call it Wide & Deep Learning. It’s useful for generic large-scale regression and classification problems with sparse inputs (categorical features with a large number of possible feature values), such as recommender systems, search, and ranking problems.
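The post pairs this idea with a ready-made TensorFlow API; the sketch below instead shows the core structure directly: a linear model over sparse cross features (wide) and a small feedforward tower (deep) whose logits are summed and trained jointly. Input sizes, layer widths, and the binary target are illustrative assumptions:

```python
import tensorflow as tf

# Two inputs: sparse one-hot cross features for the wide (memorization)
# path, and a dense feature vector for the deep (generalization) path.
wide_in = tf.keras.Input(shape=(10_000,), name="wide_sparse")
deep_in = tf.keras.Input(shape=(32,), name="deep_dense")

# Deep path: a small feedforward tower that learns generalizing
# combinations of the dense features.
h = tf.keras.layers.Dense(128, activation="relu")(deep_in)
h = tf.keras.layers.Dense(64, activation="relu")(h)

# Wide path: a plain linear model over the sparse features, which can
# memorize exceptions ("penguins can't fly") as individual weights.
wide_logit = tf.keras.layers.Dense(1, use_bias=False)(wide_in)
deep_logit = tf.keras.layers.Dense(1)(h)

# Joint training: the two logits are summed and optimized together,
# rather than ensembling two separately trained models.
output = tf.keras.layers.Activation("sigmoid")(
    tf.keras.layers.Add()([wide_logit, deep_logit])
)

model = tf.keras.Model(inputs=[wide_in, deep_in], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy")
```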

via https://research.googleblog.com/2016/06/wide-deep-learning-better-together-with.html

DeepFool: A simple and accurate method to fool deep neural networks

neural-networks, DeepFool, DeepDream, CNN, classification, machine-learning, deep-learning

State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, carefully crafted perturbations of the images. In this paper, we propose the DeepFool framework to efficiently compute perturbations that fool deep networks, and thus to reliably quantify the robustness of arbitrary classifiers.
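For intuition, here is a rough numpy sketch of DeepFool's core step in the simplest (binary, affine-approximation) case: at each iteration the classifier is linearized around the current point, which is then moved by the closed-form minimal step onto that linearized decision boundary. The function names, overshoot constant, and stopping rule are illustrative assumptions, not the paper's reference implementation:

```python
import numpy as np

def deepfool_binary(x, f, grad_f, max_iter=50, overshoot=0.02):
    """Sketch of DeepFool for a binary classifier.

    f(x) is the signed decision score and grad_f(x) its gradient;
    both are caller-supplied and assumed differentiable.
    """
    x_adv = np.asarray(x, dtype=np.float64).copy()
    orig_sign = np.sign(f(x_adv))
    for _ in range(max_iter):
        score, grad = f(x_adv), grad_f(x_adv)
        if np.sign(score) != orig_sign:
            break  # the predicted label has flipped: stop
        # Closed-form minimal step onto the linearized boundary f = 0:
        #   r = -f(x) * grad / ||grad||^2
        x_adv -= (score / (np.dot(grad, grad) + 1e-12)) * grad
    # A small overshoot pushes the point just past the true boundary.
    return x + (1 + overshoot) * (x_adv - x)
```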

via http://gitxiv.com/posts/iZpWrvKdXmtRaQusa/deepfool-a-simple-and-accurate-method-to-fool-deep-neural

A ‘Brief’ History of Neural Nets and Deep Learning

history, machine-learning, machinelearning, neural-nets, deep-learning, AI, computing

This is the first part of ‘A Brief History of Neural Nets and Deep Learning’. In this part, we shall cover the birth of neural nets with the Perceptron in 1958, the AI Winter of the 70s, and neural nets’ return to popularity with backpropagation in 1986.

via http://www.andreykurenkov.com/writing/a-brief-history-of-neural-nets-and-deep-learning/