Tools created while working on the master's thesis: Automatic music generation methods for video games
The repository contains the model, the second and third (final) steps of data preparation, a script that runs a WebSocket server for live generation, and a script that generates sample MIDI files.
The trained model is included in the save directory as everything-game-30s-transposed.sess.
- To generate MIDI file samples with various conditioning: run generate.py
- To start the WebSocket server for live generation: run generate_live.py
- Second data preparation step: run data_prep.py
- Preprocessing: run preprocess.py
- Training: run train.py
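The wire protocol of the live-generation server is not documented here, so any client has to be adapted to what generate_live.py actually expects. A minimal client sketch, assuming a local server on ws://localhost:8765 and a JSON request whose field names ("emotion", "length") are purely hypothetical:

```python
"""Hypothetical client sketch for the server started by generate_live.py.
The address, port, and message format below are assumptions -- check
generate_live.py for the real protocol before using this."""
import json


def build_request(emotion: str, length_seconds: int = 30) -> str:
    """Build an assumed JSON request; the field names are hypothetical."""
    return json.dumps({"emotion": emotion, "length": length_seconds})


# Sending the request needs a WebSocket client, e.g. the third-party
# `websockets` package:
#
#   import asyncio, websockets
#
#   async def main():
#       async with websockets.connect("ws://localhost:8765") as ws:
#           await ws.send(build_request("happy"))
#           reply = await ws.recv()  # e.g. generated MIDI data
#
#   asyncio.run(main())
```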
Everything was run with Python 3.8 on Windows in an Anaconda environment. The environment can be imported from anaconda.yaml as described in the docs.
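Inside that environment, the steps above can be chained into a single run. A sketch of such a driver, where the script names come from this repository but the invocation details (no arguments, current working directory) are assumptions:

```python
"""Sketch of running the pipeline end to end. Step order follows the
README; whether each script needs arguments is an assumption."""
import subprocess
import sys

# Data preparation, preprocessing, training, then sample generation.
PIPELINE = ["data_prep.py", "preprocess.py", "train.py", "generate.py"]


def run_pipeline(dry_run: bool = True):
    """Return the commands for each step; execute them when dry_run=False."""
    commands = [[sys.executable, script] for script in PIPELINE]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # abort if a step fails
    return commands
```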
The code is based on an implementation of EmotionBox; the paper is available at https://arxiv.org/abs/2112.08561.