A MIDI melody generator. The model is based on dvictor's word-based LSTM poetry generator, which I modified to generate MIDI tokens instead of words. https://github.com/dvictor/lstm-poetry-word-based
The model is a 2-layer LSTM with 400 nodes per layer and 0.6 dropout.
In TensorFlow (TFLearn) pseudo-code:
g = tflearn.input_data([None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 400, return_seq=True)
g = tflearn.dropout(g, 0.6)
g = tflearn.lstm(g, 400)
g = tflearn.dropout(g, 0.6)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy', learning_rate=0.001)
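The input shape [None, maxlen, len(char_idx)] implies fixed-length windows of one-hot encoded tokens. A minimal sketch of how such training tensors could be built (the names maxlen and char_idx match the snippet above; the window-building itself is my assumption about the data pipeline, not the project's actual code):

```python
import numpy as np

def build_dataset(text, maxlen=25):
    """Build one-hot training windows from a whitespace-separated token stream.

    Returns (X, y, char_idx): X has shape (samples, maxlen, vocab_size),
    y has shape (samples, vocab_size) -- the token following each window.
    """
    tokens = text.split()
    vocab = sorted(set(tokens))
    char_idx = {tok: i for i, tok in enumerate(vocab)}

    windows, targets = [], []
    for i in range(len(tokens) - maxlen):
        windows.append(tokens[i:i + maxlen])
        targets.append(tokens[i + maxlen])

    X = np.zeros((len(windows), maxlen, len(vocab)), dtype=np.float32)
    y = np.zeros((len(windows), len(vocab)), dtype=np.float32)
    for s, window in enumerate(windows):
        for t, tok in enumerate(window):
            X[s, t, char_idx[tok]] = 1.0  # one-hot mark the token at step t
        y[s, char_idx[targets[s]]] = 1.0  # one-hot mark the next token
    return X, y, char_idx
```

A word-level vocabulary of MIDI tokens grows quickly, which is why one-hot input is memory-hungry (see the embedding item in the planned features below).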
To get training data, Google "midi download" and collect some MIDI files :-)
python midi2text.py --source test.mid
This will generate a .txt file that looks like this:
0_b0_65_00 0_b0_64_02 0_b0_06_40 60_b0_65_00 0_b0_64_01 0_b0_06_40 0_b0_26_00 ...
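Each token appears to pack a delta time plus three raw MIDI bytes (status, data byte 1, data byte 2), underscore-separated and hex-encoded (b0 is a control-change status on channel 0). A small parsing sketch under that assumption; the field layout is my reading of the output, not documented by the tool:

```python
def parse_token(token):
    """Split a token like '60_b0_65_00' into numeric MIDI fields.

    Assumed layout: delta time, status byte, data byte 1, data byte 2,
    all hex-encoded (the delta-time base is a guess; it may be decimal).
    """
    delta, status, data1, data2 = (int(field, 16) for field in token.split("_"))
    return delta, status, data1, data2
```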
Put your encoded source file in a folder of its own and rename it to input.txt, then train:
python3 src/train.py --source ../data/your_input_file_folder --num_layers 2 --hidden_size 400 --dropout 0.6
Training writes TensorFlow checkpoint files; use the final checkpoint to sample some output:
python3 src/generate.py --source ../data/your_input_file_folder --output sample.txt --length 100
If you'd like to use your own seed sequence, pass the '--header' argument.
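Generation draws one token at a time from the model's softmax output, usually with a temperature parameter. A sketch of the standard word-level sampling step (the standard technique, not necessarily the project's exact generate.py):

```python
import numpy as np

def sample_next(probs, temperature=1.0):
    """Draw the index of the next token from a softmax distribution.

    Lower temperatures sharpen the distribution (more conservative output);
    higher temperatures flatten it (more surprising output).
    """
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-12) / temperature  # re-scale in log space
    exp = np.exp(logits - logits.max())           # stable softmax
    probs = exp / exp.sum()
    return int(np.random.choice(len(probs), p=probs))
```

The sampled index is mapped back to its MIDI token, appended to the window, and fed into the network again until the requested length is reached.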
python text2midi.py --source ../your_folder/sample.utf8decode
This converts the sample back into a MIDI file.
I'm planning to add the following features in the future:
- Embeddings, to make training more memory-friendly
- More correlated MIDI melodies, to enlarge the training material
- GPU support, to speed up training
This is the best introduction to LSTM networks I have found:
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
This is where my original idea came from; please follow Karpathy's README to set up your experiment for training and sampling:
https://github.com/karpathy/char-rnn
MIT