Zhongcong Xu
·
Jianfeng Zhang
·
Jun Hao Liew
·
Hanshu Yan
·
Jia-Wei Liu
·
Chenxu Zhang
·
Jiashi Feng
·
Mike Zheng Shou
National University of Singapore | ByteDance
- [2023.12.4] Released inference code and Gradio demo. We are working to improve MagicAnimate, stay tuned!
- [2023.11.23] Released the MagicAnimate paper and project page.
Please download the pretrained base models for Stable Diffusion V1.5 and the MSE-finetuned VAE.
Download our MagicAnimate checkpoints.
Place them as follows:
magic-animate
|----pretrained_models
    |----MagicAnimate
        |----appearance_encoder
            |----diffusion_pytorch_model.safetensors
            |----config.json
        |----densepose_controlnet
            |----diffusion_pytorch_model.safetensors
            |----config.json
        |----temporal_attention
            |----temporal_attention.ckpt
    |----sd-vae-ft-mse
        |----...
    |----stable-diffusion-v1-5
        |----...
|----...
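To catch misplaced checkpoints early, you can verify the layout before running inference. This is a minimal sketch (not part of the MagicAnimate codebase) that checks the expected paths from the tree above; `missing_paths` and `EXPECTED` are illustrative names.

```python
from pathlib import Path

# Expected entries under pretrained_models, per the directory tree above.
EXPECTED = [
    "MagicAnimate/appearance_encoder/diffusion_pytorch_model.safetensors",
    "MagicAnimate/appearance_encoder/config.json",
    "MagicAnimate/densepose_controlnet/diffusion_pytorch_model.safetensors",
    "MagicAnimate/densepose_controlnet/config.json",
    "MagicAnimate/temporal_attention/temporal_attention.ckpt",
    "sd-vae-ft-mse",
    "stable-diffusion-v1-5",
]

def missing_paths(root: str) -> list:
    """Return the expected paths that are absent under root/pretrained_models."""
    base = Path(root) / "pretrained_models"
    return [p for p in EXPECTED if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_paths(".")
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("All expected checkpoint paths found.")
```

Run it from the `magic-animate` repository root; an empty result means the layout matches.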
Prerequisites: python>=3.8, CUDA>=11.3, and ffmpeg.
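As a quick sanity check for these prerequisites, something like the following sketch works; the function names are illustrative, and the CUDA check is omitted since it would require torch to be installed first.

```python
import shutil
import sys

def meets_min_version(version: tuple, minimum: tuple) -> bool:
    """Return True if version >= minimum, compared component-wise."""
    return version >= minimum

def check_prerequisites() -> dict:
    """Report whether the documented prerequisites appear to be met."""
    return {
        "python>=3.8": meets_min_version(sys.version_info[:2], (3, 8)),
        "ffmpeg": shutil.which("ffmpeg") is not None,
    }

if __name__ == "__main__":
    for name, ok in check_prerequisites().items():
        print(f"{name}: {'OK' if ok else 'MISSING'}")
```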
Install with conda:
conda env create -f environment.yaml
conda activate manimate
Or with pip:
pip3 install -r requirements.txt
Run inference on a single GPU:
bash scripts/animate.sh
Run inference with multiple GPUs:
bash scripts/animate_dist.sh
You can quickly try our online Gradio demo.
Launch the local Gradio demo on a single GPU:
python3 -m demo.gradio_animate
Launch the local Gradio demo if you have multiple GPUs:
python3 -m demo.gradio_animate_dist
Then open the Gradio demo in your local browser.
We would like to thank AK (@_akhaliq) and the Hugging Face team for their help in setting up the online Gradio demo.
If you find this codebase useful for your research, please cite it using the following BibTeX entry:
@inproceedings{xu2023magicanimate,
    author    = {Xu, Zhongcong and Zhang, Jianfeng and Liew, Jun Hao and Yan, Hanshu and Liu, Jia-Wei and Zhang, Chenxu and Feng, Jiashi and Shou, Mike Zheng},
    title     = {MagicAnimate: Temporally Consistent Human Image Animation using Diffusion Model},
    booktitle = {arXiv},
    year      = {2023}
}