AudioCraft Plus

AudioCraft is a PyTorch library for deep learning research on audio generation. AudioCraft contains inference and training code for two state-of-the-art AI generative models producing high-quality audio: AudioGen and MusicGen.

Features

AudioCraft Plus is an all-in-one WebUI for the original AudioCraft, adding many quality-of-life features on top.

  • AudioGen Model
  • Multiband Diffusion
  • Custom Model Support
  • Generation Metadata and Audio Info tab
  • Mono to Stereo
  • Multiprompt/Prompt Segmentation with Structure Prompts
  • Video Output Customization
  • Music Continuation
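
Several of these features map directly onto the underlying AudioCraft Python API. For example, music continuation corresponds to MusicGen's generate_continuation call. The snippet below is a minimal sketch; the checkpoint name, prompt file name, and duration are placeholders chosen for illustration, not values prescribed by this repository.

import torchaudio
from audiocraft.models import MusicGen

# Load a pretrained MusicGen checkpoint (placeholder model size).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=12)  # target length of the output, in seconds

# Load a few seconds of existing audio to continue from ('prompt.wav' is a placeholder path).
prompt, sr = torchaudio.load('prompt.wav')

# Continue the prompt, optionally steering the result with a text description.
wav = model.generate_continuation(prompt, prompt_sample_rate=sr,
                                  descriptions=['upbeat electronic continuation'])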

Installation

If you are updating from a previous version of AudioCraft Plus, run the following steps in the AudioCraft Plus folder:

git pull
pip install transformers --upgrade
pip install torchmetrics --upgrade

Otherwise: Clean Installation

AudioCraft requires Python 3.9 and PyTorch 2.0.0. To install AudioCraft, you can run the following:

# Best to make sure you have torch installed first, in particular before installing xformers.
# Don't run this if you already have PyTorch installed.
pip install 'torch>=2.0'
# Then proceed to one of the following
pip install -U audiocraft  # stable release
pip install -U git+https://git@github.com/GrandaddyShmax/audiocraft_plus#egg=audiocraft  # bleeding edge
pip install -e .  # or if you cloned the repo locally (mandatory if you want to train).
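
Once installed, a quick way to sanity-check the setup is to generate a short clip from Python. The sketch below uses the MusicGen API bundled with AudioCraft; the checkpoint name, prompt, and duration are arbitrary illustrative choices.

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Download and load a pretrained checkpoint (placeholder model size).
model = MusicGen.get_pretrained('facebook/musicgen-small')
model.set_generation_params(duration=8)  # seconds of audio to generate
wav = model.generate(['lo-fi hip hop beat with mellow piano'])  # one waveform per description

# Write each result to disk (e.g. sample_0.wav) with loudness normalization.
for idx, one_wav in enumerate(wav):
    audio_write(f'sample_{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")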

We also recommend having ffmpeg installed, either through your system or Anaconda:

sudo apt-get install ffmpeg
# Or if you are using Anaconda or Miniconda
conda install 'ffmpeg<5' -c conda-forge

Installation video thanks to Pogs Cafe:

Additional installation guide by radaevm can be found HERE

Models

At the moment, AudioCraft contains training and inference code for the following models (a short loading sketch follows the list):

  • MusicGen: A state-of-the-art controllable text-to-music model.
  • AudioGen: A state-of-the-art text-to-sound model.
  • EnCodec: A state-of-the-art high fidelity neural audio codec.
  • Multi Band Diffusion: An EnCodec-compatible decoder using diffusion.
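
The other models are exposed through the same Python API as MusicGen. The example below loads AudioGen for text-to-sound generation; the checkpoint name, prompts, and 5-second duration are illustrative only.

from audiocraft.models import AudioGen
from audiocraft.data.audio import audio_write

# Load a pretrained AudioGen checkpoint (placeholder model name).
model = AudioGen.get_pretrained('facebook/audiogen-medium')
model.set_generation_params(duration=5)  # length of each generated sound, in seconds
wav = model.generate(['dog barking in the distance', 'footsteps on gravel'])

# Write each generated sound to disk with loudness normalization.
for idx, one_wav in enumerate(wav):
    audio_write(f'sound_{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness")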

Training code

AudioCraft contains PyTorch components for deep learning research in audio, along with training pipelines for the developed models. For a general introduction to AudioCraft's design principles and instructions for developing your own training pipeline, refer to the AudioCraft training documentation.

For reproducing existing work and using the developed training pipelines, refer to the instructions for each specific model, which provide pointers to configuration, example grids, and model/task-specific information and FAQs.

API documentation

We provide some API documentation for AudioCraft.

FAQ

Is the training code available?

Yes! We provide the training code for EnCodec, MusicGen and Multi Band Diffusion.

Where are the models stored?

Hugging Face stores the models in a specific cache location, which can be overridden by setting the AUDIOCRAFT_CACHE_DIR environment variable.
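
For example, to redirect model downloads to a custom directory, the variable can be set before any model is loaded; the path below is a placeholder, and the same effect can be achieved by exporting the variable in the shell.

import os

# Must be set before loading any model; the directory is a placeholder.
os.environ["AUDIOCRAFT_CACHE_DIR"] = "/data/audiocraft_models"

from audiocraft.models import MusicGen
model = MusicGen.get_pretrained('facebook/musicgen-small')  # weights are now cached under the directory above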

License

  • The code in this repository is released under the MIT license as found in the LICENSE file.
  • The model weights in this repository are released under the CC-BY-NC 4.0 license as found in the LICENSE_weights file.

Citation

For the general framework of AudioCraft, please cite the following.

@article{copet2023simple,
    title={Simple and Controllable Music Generation},
    author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez},
    year={2023},
    journal={arXiv preprint arXiv:2306.05284},
}

When referring to a specific model, please cite as mentioned in the model-specific README, e.g., ./docs/MUSICGEN.md, ./docs/AUDIOGEN.md, etc.