
CRNN for Live Music Genre Recognition

Convolutional-Recurrent Neural Networks for Live Music Genre Recognition is a project aimed at creating a neural network that recognizes the genre of a piece of music and at providing a user-friendly visualization of the network's current belief about the genre of a song. The project was created for the 24-hour Braincode Hackathon in Warsaw by Piotr Kozakowski, Jakub Królak, Łukasz Margas and Bartosz Michalak.

This project uses Keras for the neural network and Tornado for serving requests.
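
As a rough illustration of how the two fit together, a minimal Tornado handler serving a Keras model might look like the sketch below. The route, model path, and input shape are assumptions for illustration, not details taken from server.py.

# Minimal sketch, not the project's actual server.py: a Tornado handler
# that feeds input to a Keras model and returns its predictions.
import numpy as np
import tornado.ioloop
import tornado.web
from keras.models import load_model

MODEL_PATH = "model.h5"  # assumed path; the package ships a default model

class PredictHandler(tornado.web.RequestHandler):
    def initialize(self, model):
        self.model = model

    def post(self):
        # The real server converts the uploaded mp3 into spectrogram
        # features first; a random tensor stands in for that step here.
        features = np.random.rand(1, 128, 647)  # placeholder input shape
        probabilities = self.model.predict(features)[0]
        self.write({"genre_probabilities": probabilities.tolist()})

if __name__ == "__main__":
    model = load_model(MODEL_PATH)
    app = tornado.web.Application(
        [(r"/predict", PredictHandler, dict(model=model))])
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()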

Demo

You can see a demo for a few selected songs here: Demo.

Usage

In a fresh virtualenv, type:

pip install -r requirements.txt

to install all the prerequisites. Run:

python server.py

to launch the server.

You can also use Docker Compose:

docker-compose up

The demo will be accessible at http://0.0.0.0:8080/. You can upload a song using the big (and only) button and see the results for yourself. All mp3 files should work fine.
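
If you would rather script the upload than use the browser, something along these lines should work; the endpoint path and form field name below are guesses, so check server.py for the actual ones.

# Hypothetical upload client: the /upload endpoint and the "song" field
# name are assumptions, not taken from server.py.
import requests

with open("song.mp3", "rb") as f:
    response = requests.post(
        "http://0.0.0.0:8080/upload",  # assumed endpoint
        files={"song": ("song.mp3", f, "audio/mpeg")},
    )
print(response.text)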

Running server.py without additional parameters launches the server with the default model provided in the package. You can supply your own model instead, as long as it matches the input and output architecture of the provided one. To train a model yourself, modify and run train_model.py: download the GTZAN dataset (or an analogous one) into the data/ directory, extract it, run create_data_pickle.py to preprocess the data, and then run train_model.py to train the model (a sketch of the preprocessing step follows the commands below):

cd data
wget http://opihi.cs.uvic.ca/sound/genres.tar.gz
tar zxvf genres.tar.gz
cd ..
python create_data_pickle.py
python train_model.py
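
In rough outline, the preprocessing step turns each GTZAN clip into a spectrogram-like feature matrix and pickles the whole dataset. The sketch below conveys the idea; the use of librosa, the mel-spectrogram parameters, and the output file name are assumptions, not the actual settings of create_data_pickle.py.

# Sketch of the kind of preprocessing create_data_pickle.py performs.
# Library choice, parameters and file names here are assumptions.
import os
import pickle

import librosa
import numpy as np

GENRES = ["blues", "classical", "country", "disco", "hiphop",
          "jazz", "metal", "pop", "reggae", "rock"]

def track_to_features(path, frames=640):
    # Load the clip and convert it to a log-scaled mel-spectrogram,
    # truncated to a fixed number of frames so tracks stack evenly.
    y, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    return librosa.power_to_db(mel)[:, :frames]

def build_dataset(root="data/genres"):
    x, y = [], []
    for label, genre in enumerate(GENRES):
        genre_dir = os.path.join(root, genre)
        for name in sorted(os.listdir(genre_dir)):
            x.append(track_to_features(os.path.join(genre_dir, name)))
            y.append(label)
    return np.array(x), np.array(y)

if __name__ == "__main__":
    x, y = build_dataset()
    with open("data/data.pickle", "wb") as f:  # assumed output name
        pickle.dump((x, y), f)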

You can "visualize" the filters learned by the convolutional layers using extract_filters.py. This script for each convolutional neuron extracts and concatenates a few chunks resulting in maximum activation of this neuron from the tracks from the dataset. By default, it will put the visualizations in the filters/ directory. It requires the GTZAN dataset and its pickled version in the data/ directory. Run the commands above to obtain them. You can control the number of extracted chunks using the --count0 argument. Extracting higher number of chunks will be slower.

Background

The rationale for this particular model is based on several works, primarily Deep Image Features in Music Information Retrieval by Grzegorz Gwardys and Daniel Grzywczak, and the blog post Recommending music on Spotify with Deep Learning. The whole idea is described in detail in our blog post Convolutional-Recurrent Neural Network for Live Music Genre Recognition.