Many ideas were based on a paper by Yan Ke, Derek Hoiem, and Rahul Sukthankar called “Computer Vision for Music Identification” (2005). In fact, even the Last.fm fingerprinter uses the code published by the authors of this paper. This is where I learned that audio identification is more about machine learning than it is about DSP. Many useful methods for extracting interesting features from audio streams are well known, and the problem is more about how best to apply and index them. The basic idea here is to treat audio as a spectral image and index the content of that image. I’ll explain in more detail how Chromaprint uses this in a following post.

Another important paper for me was “Pairwise Boosted Audio Fingerprint” (2009) by Dalwon Jang, Chang D. Yoo, Sunil Lee, Sungwoong Kim, and Ton Kalker (Ton Kalker is a co-author of a historically important paper, “Robust Audio Hashing for Content Identification” (2001), published by Philips Research). It combined the authors’ previous experiments in audio identification based on spectral centroid features with an indexing approach similar to the one suggested by Y. Ke, D. Hoiem, and R. Sukthankar. For a long time this was the best solution I had, and since it was actually not very hard to implement, I spent most of the time tweaking the configuration to get the best results.

The last major change came after I learned about “chroma” features by reading “Efficient Index-Based Audio Matching” (2008) by Frank Kurth and Meinard Müller. I’ve read more papers about chroma features since, but this was the first and also the most important one for me, and some of its ideas about processing the feature vectors are implemented in Chromaprint.
Chroma features are typically used for music identification, as opposed to audio file identification, but I tried using them with the approach I had already implemented, and they nicely improved the quality of the fingerprinting function while actually reducing complexity, which allowed me to use much larger training data sets.
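To make the “chroma” idea concrete, here is a minimal sketch of how one frame of audio can be reduced to a 12-dimensional pitch-class vector: take the magnitude spectrum of a windowed frame and sum each FFT bin’s energy into the bin of the musical note it falls on, modulo the octave. This is only an illustration of the general technique; the function name, frequency range, and normalization here are my own assumptions, and Chromaprint’s actual filtering and post-processing of these vectors differ.

```python
import numpy as np

def chroma_vector(frame, sample_rate=11025, ref_freq=440.0):
    """Map one audio frame onto 12 pitch classes (C, C#, ..., B).

    A minimal sketch of the chroma-feature idea, not Chromaprint's
    actual implementation.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)

    chroma = np.zeros(12)
    for mag, f in zip(spectrum, freqs):
        # Ignore bins outside a rough musical range (assumed bounds).
        if f < 28.0 or f > 3520.0:
            continue
        # MIDI-style note number; 69 corresponds to A4 = 440 Hz.
        note = 69 + 12 * np.log2(f / ref_freq)
        chroma[int(round(note)) % 12] += mag

    # Normalize so the vector is invariant to overall loudness.
    norm = np.linalg.norm(chroma)
    return chroma / norm if norm > 0 else chroma
```

Because the octave is folded away, a melody played an octave higher produces nearly the same sequence of chroma vectors, which is exactly what makes these features robust for matching different renditions of the same music.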
She starts with a clip that’s been digitally altered to sound like gibberish. On first listen, to my ears, it was entirely meaningless. Next, Das plays the original, unaltered clip: a woman’s voice saying, “The Constitution Center is at the next stop.” Then we hear the gibberish clip again, and woven inside what had sounded like nonsense, we hear “The Constitution Center is at the next stop.” The point is: when our brains know what to expect to hear, they do, even if, in reality, it is impossible. Not one person could decipher that clip without knowing what they were hearing, but with the prompt, it’s impossible not to hear the message in the gibberish. This is a wonderful audio illusion.
The Shortwave Radio Audio Archive (SRAA) is a collection of shortwave radio recordings that you can download or listen to as a podcast. The collection grows every day and includes both historic recordings and current recordings from the shortwave radio spectrum.
This post presents WaveNet, a deep generative model of raw audio waveforms. We show that WaveNets are able to generate speech which mimics any human voice and which sounds more natural than the best existing Text-to-Speech systems, reducing the gap with human performance by over 50%. We also demonstrate that the same network can be used to synthesize other audio signals such as music, and present some striking samples of automatically generated piano pieces.
The sonic boom would be the first thing the target would hear. It would be followed by several sounds played over one another, including both reversed music (rising slightly in pitch as it fades out) and forward-playing music (which would play at half speed and an octave too low), followed by the crash of a stereo demolishing your neighbor’s shed.
Of all the noises that my children will not understand, the one that is nearest to my heart is not from a song or a television show or a jingle. It’s the sound of a modem connecting with another modem across the repurposed telephone infrastructure. It was the noise of being part of the beginning of the Internet.