Magenta: Music and Art Generation with Machine Intelligence

Magenta is a project from the Google Brain team that asks: Can we use machine learning to create compelling art and music? If so, how? If not, why not? We'll use TensorFlow, and we'll release our models and tools in open source on this GitHub. We'll also post demos, tutorial blog posts, and technical papers. Soon we'll begin accepting code contributions from the community at large. If you'd like to keep up with Magenta as it grows, you can read our blog or join our discussion group.

Installation

Docker

The easiest way to get started with Magenta is to use our Docker container. First, install Docker. Next, run this command:

docker run -it -p 6006:6006 -v /tmp/magenta:/magenta-data tensorflow/magenta

This will start a shell in a directory with all Magenta components compiled and ready to run. It will also map port 6006 of the host machine to the container so you can view TensorBoard servers that run within the container.

This also maps the directory /tmp/magenta on the host machine to /magenta-data within the Docker session. WARNING: only data saved in /magenta-data will persist across sessions.

One downside to the Docker container is that it is isolated from the host. If you want to listen to a generated MIDI file, you'll need to copy it to the host machine. Similarly, because our MIDI instrument interface requires access to the host MIDI port, it will not work within the Docker container. You'll need to use the full Development Environment.

Note: Our Docker image is also available at gcr.io/tensorflow/magenta.

Development Environment

If you want to develop on Magenta, use our MIDI instrument interface, or preview MIDI files without copying them out of the Docker environment, you'll need to set up the full Development Environment.

The installation has three components. You are going to need Bazel to build packages, TensorFlow to run models, and an up-to-date version of this repository.

First, clone this repository:

git clone https://github.com/tensorflow/magenta.git

Next, install Bazel. We recommend the latest version, currently 0.3.1.

Finally, install TensorFlow. We require version 0.10 or later.

Also, verify that your environment uses Python 2.7. We do aim to support Python 3 eventually, but that support is currently experimental.

After that's done, run the tests with this command:

bazel test //magenta/...

Building your Dataset

Now that you have a working copy of Magenta, let's build your first MIDI dataset. We do this by creating a directory of MIDI files and converting them into NoteSequences. If you don't have any MIDI files handy, you can find some at MidiWorld.

Build and run the script below. The MIDI parser may print warnings if it encounters a malformed MIDI file, but these can be safely ignored; MIDI files that cannot be parsed will be skipped.

MIDI_DIRECTORY=<folder containing MIDI files. can have child folders.>

# TFRecord file that will contain NoteSequence protocol buffers.
SEQUENCES_TFRECORD=/tmp/notesequences.tfrecord

bazel run //magenta/scripts:convert_midi_dir_to_note_sequences -- \
--midi_dir=$MIDI_DIRECTORY \
--output_file=$SEQUENCES_TFRECORD \
--recursive

Note: To build and run in separate commands, run

bazel build //magenta/scripts:convert_midi_dir_to_note_sequences

./bazel-bin/magenta/scripts/convert_midi_dir_to_note_sequences \
--midi_dir=$MIDI_DIRECTORY \
--output_file=$SEQUENCES_TFRECORD \
--recursive

Data processing APIs

If you are interested in adding your own model, please take a look at how we create our datasets under the hood: Data processing in Magenta
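The general shape of such a dataset pipeline is a chain of stages, where each stage maps one input item to zero or more output items (splitting sequences, filtering out bad ones, extracting melodies, and so on). As a toy sketch of that idea only (the names and structure here are illustrative, not Magenta's actual pipeline classes):

```python
def run_pipeline(stages, items):
    # Apply each stage in order. A stage takes one item and returns a list
    # of output items, so stages can split (return many), filter (return
    # an empty list), or transform (return one) their inputs.
    for stage in stages:
        next_items = []
        for item in items:
            next_items.extend(stage(item))
        items = next_items
    return items

# Hypothetical stages: split a sequence in two, then drop empty pieces.
split_in_two = lambda seq: [seq[:2], seq[2:]]
drop_empty = lambda seq: [seq] if seq else []
```

Running run_pipeline([split_in_two, drop_empty], ...) over a list of sequences yields the surviving fragments; real Magenta pipelines follow the same transform-and-compose pattern with NoteSequence inputs and training-example outputs.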

Generating MIDI

To create your own melodies with TensorFlow, train a model on the dataset you built above and then use it to generate new sequences. Select a model below for further instructions.

Basic RNN: A simple recurrent neural network for predicting melodies.

Lookback RNN: A recurrent neural network for predicting melodies that uses custom inputs and labels.

Attention RNN: A recurrent neural network for predicting melodies that uses attention.
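All three models follow the same broad pattern: learn next-note statistics from training melodies, then sample new sequences from the learned distribution. As a toy illustration of that pattern only (this is a first-order Markov chain over MIDI pitch numbers, far simpler than any of the RNNs above):

```python
import random
from collections import defaultdict

def train_markov(melodies):
    # Record every observed pitch-to-pitch transition across the
    # training melodies (pitches are MIDI note numbers, e.g. 60 = C4).
    transitions = defaultdict(list)
    for melody in melodies:
        for current, following in zip(melody, melody[1:]):
            transitions[current].append(following)
    return transitions

def generate(transitions, start, length, seed=None):
    # Sample a melody by repeatedly drawing a successor of the last
    # pitch; stop early if a pitch has no observed successors.
    rng = random.Random(seed)
    melody = [start]
    while len(melody) < length and transitions.get(melody[-1]):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody
```

The RNN models replace the transition table with a learned neural network that conditions on much longer history (and, for the Attention RNN, on selectively weighted past steps), but the train-then-sample workflow is the same.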

Using a MIDI Instrument

After you've trained one of the models above, you can use our MIDI interface to play with it interactively.