This talk is about applying deep learning to music. Working from raw audio data, we will cover the following:
How to detect instruments from a piece of music
How to detect which notes are being played by which instrument
How to isolate instruments in multi-instrument (polyphonic) music
Instead of applying these techniques to existing recordings, we will generate our own music using simple musical rules. The benefit of this is that we control the complexity and know exactly what is being played. We start out simple and then add more instruments, different timbres, and so on. As the complexity grows, we shall see how to adapt our models to deal with it. This gives interesting insights into which structures in deep nets work well.
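To make the idea of rule-based generation concrete, here is a minimal sketch: a random walk over a pentatonic scale, where small melodic steps are more likely than large leaps. The scale, step probabilities, and pitch range are illustrative assumptions, not the rules used in the talk.

```python
import numpy as np

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones (C major pentatonic)

def improvise(n_notes, seed=0):
    """Return a list of MIDI note numbers forming a simple random-walk melody."""
    rng = np.random.default_rng(seed)
    degree, octave = 0, 5  # start on C in octave 5
    notes = []
    for _ in range(n_notes):
        # Small steps are more likely than large leaps.
        step = rng.choice([-2, -1, 0, 1, 2], p=[0.1, 0.3, 0.2, 0.3, 0.1])
        degree += step
        octave += degree // len(PENTATONIC)  # carry over scale boundaries
        degree %= len(PENTATONIC)
        octave = int(np.clip(octave, 4, 6))  # keep a playable range
        notes.append(12 * octave + PENTATONIC[degree])
    return notes

melody = improvise(16)
```

Because every note comes from the scale, the output always sounds "in key", and a different seed gives a different, unlimited stream of melodies.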
I will show:
How to build a simple synthesizer using numpy
How to create an unlimited data set of improvisations that sound musical
How to use this data set for detecting instruments using deep learning
How to filter out one instrument when multiple synthesizers are playing at once
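A simple additive synthesizer needs little more than numpy, and because we generate the data ourselves we know every frequency that is playing, so a naive spectral mask can already isolate one voice. The sketch below is an illustration of that idea, not the talk's actual implementation; the sample rate, harmonic amplitudes, decay envelope, and mask width are all assumptions.

```python
import numpy as np

SR = 44100  # sample rate in Hz (a common default, assumed here)

def tone(freq, dur, harmonics=(1.0,), sr=SR):
    """Additive-synthesis tone: a sum of harmonics with an exponential decay.
    `harmonics` gives the amplitude of each overtone, a crude stand-in for timbre."""
    t = np.arange(int(sr * dur)) / sr
    wave = sum(a * np.sin(2 * np.pi * freq * (k + 1) * t)
               for k, a in enumerate(harmonics))
    return wave * np.exp(-3.0 * t)  # simple decay envelope

# Two "instruments" playing different notes at once.
flute_like = tone(523.25, 1.0, harmonics=(1.0, 0.2))            # mostly fundamental
brass_like = tone(220.0, 1.0, harmonics=(0.8, 0.6, 0.4, 0.3))   # richer spectrum
mix = flute_like + brass_like

# A naive way to filter out one instrument: zero the FFT bins around its
# known partials. This only works because we generated the music ourselves
# and know exactly what is being played.
spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(len(mix), 1 / SR)
for k in range(1, 5):  # suppress the first four partials of the 220 Hz note
    spectrum[np.abs(freqs - 220.0 * k) < 10.0] = 0.0
isolated = np.fft.irfft(spectrum, n=len(mix))
```

Writing `mix` to a WAV file (e.g. with the standard-library `wave` module) lets you hear both voices, while `isolated` keeps only the 523.25 Hz note. A learned separator has to manage without the ground-truth frequencies, which is exactly where the deep nets come in.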
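For instrument detection, a deliberately tiny stand-in for the deep nets in the talk already works on synthetic data: logistic regression on log-magnitude spectra, trained to tell two synthetic timbres apart. Everything here (sample rate, harmonic recipes, learning rate, number of steps) is an illustrative assumption.

```python
import numpy as np

SR = 8000
rng = np.random.default_rng(0)

def note(freq, harmonics, dur=0.25, sr=SR):
    """A short additive-synthesis note with the given harmonic amplitudes."""
    t = np.arange(int(sr * dur)) / sr
    return sum(a * np.sin(2 * np.pi * freq * (k + 1) * t)
               for k, a in enumerate(harmonics))

def features(wave):
    """Log-magnitude spectrum as a fixed-size feature vector."""
    return np.log1p(np.abs(np.fft.rfft(wave)))

# Two timbres playing random pitches: label 0 = "pure", label 1 = "rich".
TIMBRES = {0: (1.0, 0.1), 1: (0.5, 0.5, 0.4, 0.3)}
X, y = [], []
for _ in range(200):
    label = int(rng.integers(0, 2))
    freq = rng.uniform(200.0, 400.0)
    X.append(features(note(freq, TIMBRES[label])))
    y.append(label)
X, y = np.array(X), np.array(y)
X /= X.max()  # crude normalization

# Plain logistic regression trained by batch gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of label 1
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

accuracy = ((p > 0.5) == y).mean()
```

The rich timbre puts energy into upper harmonics where the pure timbre has almost none, so even a linear model separates them; once timbres get closer and several instruments overlap, this is where deeper models earn their keep.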