Supplementary code release.
All code is written in Python 3 (Anaconda recommended). To install the dependencies:
pip install -r requirements.txt
A copy of the Magenta codebase is required for access to MusicVAE and related components; installation instructions can be found in the Magenta public repository. You will also need to download the pretrained MusicVAE checkpoints. For our experiments, we use the 2-bar melody model.
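If you want to sanity-check the checkpoint outside of our scripts, the MusicVAE Python API can encode a melody directly. Below is a minimal sketch, assuming the large 2-bar melody config ('cat-mel_2bar_big'); the config name and paths are assumptions, so substitute whichever 2-bar melody checkpoint you downloaded:

import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

# Load the pretrained 2-bar melody MusicVAE (config name is an assumption).
model = TrainedModel(
    configs.CONFIG_MAP['cat-mel_2bar_big'],
    batch_size=1,
    checkpoint_dir_or_path='/path/to/musicvae-ckpt')

# Encode a 2-bar monophonic melody into its latent code z.
# The MIDI file must contain an extractable 2-bar melody.
ns = note_seq.midi_file_to_note_sequence('/path/to/melody.mid')
z, mu, sigma = model.encode([ns])
print(z.shape)  # (1, latent_dim)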
We use the Lakh MIDI Dataset to train our models. Follow the Lakh MIDI Dataset's download instructions to obtain and build a local copy before encoding.
To encode the Lakh dataset with MusicVAE, use scripts/generate_song_data_beam.py:
python scripts/generate_song_data_beam.py \
--checkpoint=/path/to/musicvae-ckpt \
--input=/path/to/lakh_tfrecords \
--output=/path/to/encoded_tfrecords
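To verify the encoding step worked, you can peek at the records it wrote. The sketch below makes no assumption about the feature schema (which is defined in scripts/generate_song_data_beam.py); it simply lists the feature keys of the first record:

import tensorflow as tf

# Read back one encoded record and list its feature keys.
files = tf.io.gfile.glob('/path/to/encoded_tfrecords*')
for raw in tf.data.TFRecordDataset(files).take(1):
    example = tf.train.Example()
    example.ParseFromString(raw.numpy())
    print(list(example.features.feature.keys()))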
To preprocess the encoded data and generate fixed-length latent sequences for training the diffusion and autoregressive models, use scripts/transform_encoded_data.py:
python scripts/transform_encoded_data.py \
--encoded_data=/path/to/encoded_tfrecords \
--output_path=/path/to/preprocess_tfrecords \
--mode=sequences \
--context_length=32
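With --mode=sequences and --context_length=32, each song's stream of 2-bar latents is sliced into fixed-length windows of 32 latents. As a rough illustration of that reshaping (the stride and any filtering are assumptions; the actual logic lives in scripts/transform_encoded_data.py):

import numpy as np

def to_windows(latents: np.ndarray, context_length: int = 32,
               stride: int = 32) -> np.ndarray:
    """(num_2bar_segments, latent_dim) -> (num_windows, context_length, latent_dim)."""
    starts = range(0, latents.shape[0] - context_length + 1, stride)
    return np.stack([latents[s:s + context_length] for s in starts])

song = np.random.randn(100, 512).astype(np.float32)  # 100 two-bar latents
print(to_windows(song).shape)  # (3, 32, 512) with a non-overlapping stride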
To train the diffusion model:

python train_ncsn.py --flagfile=configs/ddpm-mel-32seq-512.cfg

To train the TransformerMDN baseline:

python train_mdn.py --flagfile=configs/mdn-mel-32seq-512.cfg
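The .cfg files passed to --flagfile are standard absl flagfiles: plain text with one --flag=value per line (and # comments). The flag names below are purely illustrative assumptions; consult the real files under configs/ for the flags each trainer accepts:

# Hypothetical flagfile sketch -- see configs/ddpm-mel-32seq-512.cfg for the real flags.
--dataset=/path/to/preprocess_tfrecords
--batch_size=256
--learning_rate=1e-4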
To draw latent samples from the trained diffusion model:

python sample_ncsn.py \
--flagfile=configs/ddpm-mel-32seq-512.cfg \
--sample_seed=42 \
--sample_size=1000 \
--sampling_dir=/path/to/latent-samples
To draw latent samples from the TransformerMDN baseline (note that the sampling script changes along with the config):

python sample_mdn.py \
--flagfile=configs/mdn-mel-32seq-512.cfg \
--sample_seed=42 \
--sample_size=1000 \
--sampling_dir=/path/to/latent-samples
To convert the sampled latent sequences (generated by the diffusion or TransformerMDN models) into sequences of MIDI events, use scripts/sample_audio.py:
python scripts/sample_audio.py \
--input=/path/to/latent-samples/[ncsn|mdn] \
--output=/path/to/audio-midi \
--n_synth=1000 \
--include_wav=True
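Under the hood, this final step is a MusicVAE decode: each latent vector is mapped back to a 2-bar NoteSequence, which is then written out as MIDI. A minimal sketch of that decoding, assuming the same 2-bar melody checkpoint as above (the paths and the 32-step decode length, i.e. two bars of 16th-note steps, are assumptions):

import numpy as np
import note_seq
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel

model = TrainedModel(
    configs.CONFIG_MAP['cat-mel_2bar_big'],
    batch_size=1,
    checkpoint_dir_or_path='/path/to/musicvae-ckpt')

z = np.random.randn(1, 512).astype(np.float32)  # stand-in for one sampled latent
decoded = model.decode(z, length=32)            # list of NoteSequence protos
note_seq.sequence_proto_to_midi_file(decoded[0], '/path/to/audio-midi/sample.mid')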