Generate speech, sound effects, music and beyond.
- Prepare running environment
```shell
conda create -n audioldm python=3.8
conda activate audioldm
git clone git@github.com:haoheliu/AudioLDM.git
cd AudioLDM
pip3 install -e .
```
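To confirm the editable install succeeded, a minimal check such as the sketch below can be run. The distribution name `audioldm` is assumed from the repository name and may differ.

```python
# check_install.py - hypothetical helper, not part of the repository.
# Confirms that the editable install registered the package.
import importlib.metadata

try:
    version = importlib.metadata.version("audioldm")  # assumed distribution name
    print(f"audioldm {version} is installed")
except importlib.metadata.PackageNotFoundError:
    print("audioldm not found; re-run `pip3 install -e .` inside AudioLDM/")
```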
- Download the pretrained checkpoint
```shell
mkdir -p ckpt
wget "https://zenodo.org/record/7600541/files/audioldm-s-full?download=1" -O ckpt/audioldm-s-full.ckpt
```
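Since the checkpoint is large, it is worth confirming the download completed. The sketch below prints the file size and SHA-256 digest for manual comparison against the checksum listed on the Zenodo record page; the path matches the `-O` target above.

```python
# verify_ckpt.py - hypothetical helper for manually verifying the download.
import hashlib
from pathlib import Path

ckpt = Path("ckpt/audioldm-s-full.ckpt")  # the -O target used above
digest = hashlib.sha256()
with ckpt.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        digest.update(chunk)
print(f"{ckpt}: {ckpt.stat().st_size / 1024**2:.0f} MiB")
print(f"sha256: {digest.hexdigest()}")
```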
- Text-to-audio generation
```shell
# Test run
python3 scripts/text2sound.py -t "A hammer is hitting a wooden surface"
```
For more options (guidance scale, batch size, seed, etc.), run:
```shell
python3 scripts/text2sound.py -h
```
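To generate several clips in one go, a simple wrapper can shell out to the script once per prompt. This is a sketch, not a supported interface: only the documented `-t` flag is used, and the prompt list is illustrative.

```python
# batch_generate.py - hypothetical wrapper around scripts/text2sound.py.
import subprocess

prompts = [
    "A hammer is hitting a wooden surface",
    "Birds singing in a quiet forest",
]

for prompt in prompts:
    # Each invocation reloads the model, so this favors simplicity over speed.
    subprocess.run(
        ["python3", "scripts/text2sound.py", "-t", prompt],
        check=True,
    )
```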
For the evaluation of audio generative models, please refer to audioldm_eval.
Integrated into Hugging Face Spaces 🤗 using Gradio. Try out the Web Demo.
Planned updates:
- Update the checkpoint with more training steps.
- Add AudioCaps finetuned AudioLDM-S model
- Build a pip-installable package for command-line use
- Add text-guided style transfer
- Add audio super-resolution
- Add audio inpainting
If you find this tool useful, please consider citing:
```bibtex
@article{liu2023audioldm,
  title   = {AudioLDM: Text-to-Audio Generation with Latent Diffusion Models},
  author  = {Liu, Haohe and Chen, Zehua and Yuan, Yi and Mei, Xinhao and Liu, Xubo and Mandic, Danilo and Wang, Wenwu and Plumbley, Mark D},
  journal = {arXiv preprint arXiv:2301.12503},
  year    = {2023}
}
```
Hardware requirements:
- A GPU with at least 8 GB of dedicated VRAM (a quick check is sketched below)
- A 64-bit operating system (Windows 7, 8.1 or 10; Ubuntu 16.04 or later; or macOS 10.13 or later) with 16 GB or more of system RAM
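The sketch below reports whether a suitable GPU is visible; it assumes PyTorch is installed (it is pulled in as a dependency of the package) and only inspects the first CUDA device.

```python
# vram_check.py - hypothetical helper; reports VRAM of the first CUDA device.
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable GPU detected; generation will be very slow on CPU.")
else:
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    status = "OK" if total_gb >= 8 else "below the 8 GB recommendation"
    print(f"{props.name}: {total_gb:.1f} GB VRAM ({status})")
```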
Part of the code is borrowed from the following repositories; we would like to thank their authors for their contributions.
We built the model with data from AudioSet, Freesound, and the BBC Sound Effects library. We share this demo under the UK copyright exception for the use of data in academic research.