AudioLDM

AudioLDM: Generate speech, sound effects, music and beyond, with text.


Text-to-Audio Generation


Generate speech, sound effects, music and beyond.


  1. Prepare running environment
conda create -n audioldm python=3.8
conda activate audioldm
git clone git@github.com:haoheliu/AudioLDM.git
cd AudioLDM
pip3 install -e .
  2. Download pretrained checkpoint
mkdir -p ckpt
wget "https://zenodo.org/record/7600541/files/audioldm-s-full?download=1" -O ckpt/audioldm-s-full.ckpt
  3. Text-to-audio generation
# Test run
python3 scripts/text2sound.py -t "A hammer is hitting a wooden surface"

For more options (guidance scale, batch size, seed, etc.), run:

python3 scripts/text2sound.py -h
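For scripted batch generation, the CLI above can be driven from Python. The sketch below is an assumption, not part of the repository: it only uses the `-t` flag shown in the test run, and the `build_command` helper and prompt list are hypothetical.

```python
import subprocess
from pathlib import Path
from typing import List

def build_command(prompt: str) -> List[str]:
    """Build the text2sound.py invocation for one prompt, using the -t flag shown above."""
    return ["python3", "scripts/text2sound.py", "-t", prompt]

# Hypothetical prompt list for illustration.
prompts = [
    "A hammer is hitting a wooden surface",
    "Birds singing in a quiet forest",
]

# Only run the commands when executed inside the cloned AudioLDM repository.
if Path("scripts/text2sound.py").exists():
    for prompt in prompts:
        # Each call blocks until generation finishes; check=True raises on failure.
        subprocess.run(build_command(prompt), check=True)
```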

To evaluate audio generative models, please refer to audioldm_eval.

Web Demo

Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the web demo on Hugging Face Spaces.

TODO

  • Update the checkpoint with more training steps
  • Add AudioCaps finetuned AudioLDM-S model
  • Build pip installable package for commandline use
  • Add text-guided style transfer
  • Add audio super-resolution
  • Add audio inpainting

Cite this work

If you find this tool useful, please consider citing:

@article{liu2023audioldm,
  title={AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author={Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal={arXiv preprint arXiv:2301.12503},
  year={2023}
}

Hardware requirement

  • GPU with 8 GB of dedicated VRAM
  • A 64-bit operating system (Windows 7, 8.1, or 10; Ubuntu 16.04 or later; or macOS 10.13 or later) with 16 GB or more of system RAM
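The 8 GB VRAM requirement can be checked programmatically. A minimal sketch, assuming PyTorch is available (it is installed by the setup steps above); the `has_enough_vram` helper is hypothetical:

```python
def has_enough_vram(total_bytes: int, required_gb: float = 8.0) -> bool:
    """Return True if the reported VRAM meets the 8 GB requirement above."""
    return total_bytes >= required_gb * 1024**3

try:
    import torch
    if torch.cuda.is_available():
        # total_memory reports the device's VRAM in bytes.
        total = torch.cuda.get_device_properties(0).total_memory
        print("VRAM sufficient:", has_enough_vram(total))
    else:
        print("No CUDA device visible to PyTorch.")
except ImportError:
    print("PyTorch is not installed; cannot query VRAM.")
```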

Reference

Part of the code is borrowed from the following repos. We would like to thank the authors of these repos for their contributions.

https://github.com/LAION-AI/CLAP

https://github.com/CompVis/stable-diffusion

https://github.com/v-iashin/SpecVQGAN

https://github.com/toshas/torch-fidelity

We built the model with data from AudioSet, Freesound, and the BBC Sound Effects library. We share this demo under the UK copyright exception for data used in academic research.