(Demo video gallery: s_01–s_03.mp4, en_01.mp4 / en_03.mp4 / en_05.mp4, ch_02–ch_04.mp4, po_01–po_03.mp4, ap_04–ap_06.mp4)
(Some of the demo images above are sourced from image websites. If there is any infringement, we will remove them immediately and apologize.)
git clone https://github.com/BadToBest/EchoMimic
cd EchoMimic
- Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
- Tested GPUs: A100 (80G) / RTX 4090D (24G) / V100 (16G)
- Tested Python Version: 3.8 / 3.10 / 3.11
Create conda environment (Recommended):
conda create -n echomimic python=3.8
conda activate echomimic
Install packages with pip
pip install -r requirements.txt
Download and decompress ffmpeg-static, then
export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
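To confirm that FFMPEG_PATH points at a working binary before running inference, a minimal check can help. This is a sketch, assuming the static build unpacks to a directory that contains an `ffmpeg` executable:

```python
import os
import subprocess

# Assumption: FFMPEG_PATH is the decompressed ffmpeg-static directory containing an "ffmpeg" binary.
ffmpeg_dir = os.environ.get("FFMPEG_PATH", "")
ffmpeg_bin = os.path.join(ffmpeg_dir, "ffmpeg")

if not os.path.isfile(ffmpeg_bin):
    raise FileNotFoundError(f"No ffmpeg binary found under FFMPEG_PATH={ffmpeg_dir!r}")

# Print the version string to verify the binary actually runs.
subprocess.run([ffmpeg_bin, "-version"], check=True)
```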
git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights
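If git-lfs is unavailable, the same weights can typically be fetched with the huggingface_hub Python client instead; a sketch using the repo id from the clone URL above:

```python
from huggingface_hub import snapshot_download

# Download the full BadToBest/EchoMimic weights repository into ./pretrained_weights.
snapshot_download(repo_id="BadToBest/EchoMimic", local_dir="pretrained_weights")
```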
The pretrained_weights directory is organized as follows:
./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│ └── ...
├── sd-image-variations-diffusers
│ └── ...
└── audio_processor
└── whisper_tiny.pt
Here, denoising_unet.pth, reference_unet.pth, motion_module.pth, and face_locator.pth are the main EchoMimic checkpoints. The other models in this hub can also be downloaded from their original hubs; we thank the authors for their brilliant work.
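Before running inference, a quick sanity check that the main checkpoints landed in the expected locations can save a failed run. This is a minimal sketch based only on the layout shown above:

```python
from pathlib import Path

weights = Path("pretrained_weights")
expected = [
    "denoising_unet.pth",
    "reference_unet.pth",
    "motion_module.pth",
    "face_locator.pth",
    "audio_processor/whisper_tiny.pt",
]

# Report any checkpoint files that are missing from the layout above.
missing = [name for name in expected if not (weights / name).is_file()]
if missing:
    raise FileNotFoundError(f"Missing checkpoint files: {missing}")
print("All main EchoMimic checkpoints found.")
```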
Run the python inference script:
python -u infer_audio2vid.py
Edit the inference config file ./configs/prompts/animation.yaml, and add your own case:
test_cases:
  "path/to/your/image":
    - "path/to/your/audio"
Then run the python inference script:
python -u infer_audio2vid.py
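Test cases can also be appended to the config programmatically. The sketch below assumes only the test_cases mapping shown above (an image path mapping to a list of audio paths); the placeholder paths are illustrative:

```python
import yaml  # pip install pyyaml

config_path = "./configs/prompts/animation.yaml"

# Load the existing config, add one image -> [audio] entry under test_cases,
# and write it back, leaving the rest of the file untouched.
with open(config_path) as f:
    config = yaml.safe_load(f)

config.setdefault("test_cases", {})["path/to/your/image"] = ["path/to/your/audio"]

with open(config_path, "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```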
Status | Milestone | ETA |
---|---|---|
✅ | Inference source code of the audio-driven algorithm released on GitHub | 9th July, 2024 |
✅ | Pretrained models trained on English and Mandarin Chinese released | 9th July, 2024 |
🚀 | Inference source code of the pose-driven algorithm to be released on GitHub | 13th July, 2024 |
🚀 | Pretrained models with better pose control to be released | 13th July, 2024 |
🚀 | Pretrained models with better singing performance to be released | TBD |
🚀 | Accelerated models to be released | TBD |
🚀 | Large-Scale and High-resolution Chinese-Based Talking Head Dataset | TBD |
We would like to thank the contributors to the AnimateDiff, Moore-AnimateAnyone and MuseTalk repositories for their open research and exploration.
We are also grateful to V-Express and hallo for their outstanding work in the area of diffusion-based talking heads.
If we have missed any open-source projects or related articles, we will add them to the acknowledgements immediately.
Many developers are actively building projects around EchoMimic, and we are deeply grateful for their contributions. We highlight a selection of these repositories below; they have significantly extended EchoMimic's capabilities and versatility.
WebUI version: https://github.com/greengerong/EchoMimic
If you find our work useful for your research, please consider citing the paper:
@misc{chen2024echomimic,
title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
year={2024},
archivePrefix={arXiv},
primaryClass={cs.CV}
}