
SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation (CVPR 2023)



Wenxuan Zhang *,1,2   Xiaodong Cun *,2   Xuan Wang 3   Yong Zhang 2   Xi Shen 2
Yu Guo 1   Ying Shan 2   Fei Wang 1

1 Xi'an Jiaotong University   2 Tencent AI Lab   3 Ant Group  



TL;DR: single portrait image 🙎‍♂️ + audio 🎤 = talking head video 🎞.


🔥 Highlight

  • 🔥 The beta version of the full image mode is online! Check it out here for more details.
    (Demo videos: still mode: still1_n.mp4; still + enhancer: still_e_n.mp4; input image by @bagbag1815)

  • 🔥 Several new modes, e.g., still mode, reference mode, and resize mode, are online for better and more customizable applications.

  • 🔥 Happy to see our method used in various talking and singing avatars; check out these wonderful demos on bilibili and twitter under #sadtalker.

📋 Changelog (Previous changelogs can be found here)

  • [2023.03.30]: Launch beta version of the full body mode.

  • [2023.03.30]: Launch new feature: by using reference videos, our algorithm can generate videos with more natural eye blinking and some eyebrow movement.

  • [2023.03.29]: resize mode is online via python inference.py --preprocess resize, which produces a larger crop of the image, as discussed in OpenTalker#35.

  • [2023.03.29]: The local gradio demo is online! Run python app.py to start it. A new requirements.txt is used to avoid bugs in librosa.

  • [2023.03.28]: Online demo is launched in Hugging Face Spaces, thanks AK!

🎼 Pipeline

(Figure: overview of the SadTalker pipeline)

Our method uses the coefficients of a 3DMM as the intermediate motion representation. To this end, we first generate realistic 3D motion coefficients (facial expression β, head pose ρ) from audio; these coefficients are then used to implicitly modulate a 3D-aware face render for the final video generation.
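Conceptually, the two stages can be sketched as follows. This is an illustrative stub only: the function names, array shapes, and coefficient dimensions are assumptions for clarity, not SadTalker's actual API.

```python
import numpy as np

def audio_to_coefficients(num_frames, exp_dim=64, pose_dim=6):
    """Stand-in for ExpNet + PoseVAE: map audio to per-frame 3DMM
    motion coefficients (dimensions here are illustrative)."""
    expression = np.zeros((num_frames, exp_dim))  # facial expression beta
    pose = np.zeros((num_frames, pose_dim))       # head pose rho
    return expression, pose

def render_video(source_image, expression, pose):
    """Stand-in for the 3D-aware face render: one output frame per
    row of motion coefficients, modulated by expression and pose."""
    return [source_image for _ in range(len(expression))]
```

The key design point is that audio never drives pixels directly; it only produces compact motion coefficients, which the render turns into frames.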

🚧 TODO

Previous TODOs
  • Generating a 2D face from a single image.
  • Generating a 3D face from audio.
  • Generating 4D free-view talking examples from audio and a single image.
  • Gradio/Colab demo.
  • Full body/image generation.
  • Training code for each component.
  • Audio-driven anime avatar.
  • Integrate ChatGPT for a conversation demo 🤔
  • Integrate with stable-diffusion-webui. (stay tuned!)
(Demo video: sadtalker_demo_short.mp4)

⚙️ Installation

Dependence Installation

CLICK ME For Manual Installation
git clone https://github.com/Winfredy/SadTalker.git

cd SadTalker 

conda create -n sadtalker python=3.8

conda activate sadtalker

pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113

conda install ffmpeg

pip install -r requirements.txt
CLICK For Docker Installation

A Dockerfile is also provided by @thegenerativegeneration on Docker Hub, which can be used directly as:

docker run --gpus "all" --rm -v $(pwd):/host_dir wawa9000/sadtalker \
    --driven_audio /host_dir/deyu.wav \
    --source_image /host_dir/image.jpg \
    --expression_scale 1.0 \
    --still \
    --result_dir /host_dir

Download Trained Models

CLICK ME

You can run the following script to put all the models in the right place.

bash scripts/download_models.sh

OR download our pre-trained models from Google Drive or our GitHub release page, and then put them in ./checkpoints.

Model | Description
--- | ---
checkpoints/auido2exp_00300-model.pth | Pre-trained ExpNet in SadTalker.
checkpoints/auido2pose_00140-model.pth | Pre-trained PoseVAE in SadTalker.
checkpoints/mapping_00229-model.pth.tar | Pre-trained MappingNet in SadTalker.
checkpoints/facevid2vid_00189-model.pth.tar | Pre-trained face-vid2vid model from the unofficial reproduction of face-vid2vid.
checkpoints/epoch_20.pth | Pre-trained 3DMM extractor from Deep3DFaceReconstruction.
checkpoints/wav2lip.pth | Highly accurate lip-sync model from Wav2lip.
checkpoints/shape_predictor_68_face_landmarks.dat | Face landmark model used in dlib.
checkpoints/BFM | 3DMM library files.
checkpoints/hub | Face detection models used in face alignment.
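After downloading, it can help to verify that everything landed in ./checkpoints. A minimal sketch, assuming the file names from the table above; this helper is not part of the repository:

```python
from pathlib import Path

# Checkpoint files/directories from the table above (names as shipped,
# including the "auido" spelling in the released filenames).
REQUIRED = [
    "auido2exp_00300-model.pth",
    "auido2pose_00140-model.pth",
    "mapping_00229-model.pth.tar",
    "facevid2vid_00189-model.pth.tar",
    "epoch_20.pth",
    "wav2lip.pth",
    "shape_predictor_68_face_landmarks.dat",
    "BFM",
    "hub",
]

def missing_checkpoints(root="checkpoints"):
    """Return the names from REQUIRED that are not present under root."""
    root = Path(root)
    return [name for name in REQUIRED if not (root / name).exists()]
```

If `missing_checkpoints()` returns an empty list, the models are in place.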

🔮 Quick Start

Generate a 2D face from a single image with the default config:

python inference.py --driven_audio <audio.wav> --source_image <video.mp4 or picture.png> 

The results will be saved in results/$SOME_TIMESTAMP/*.mp4.
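Since each run writes into a fresh timestamped folder, a small helper can locate the newest output. This is a hypothetical convenience snippet, not part of SadTalker:

```python
from pathlib import Path

def latest_result_videos(results_dir="results"):
    """Return the .mp4 files in the most recently modified run folder
    under results/ (hypothetical helper, not in the repo)."""
    runs = [p for p in Path(results_dir).iterdir() if p.is_dir()]
    if not runs:
        return []
    newest = max(runs, key=lambda p: p.stat().st_mtime)
    return sorted(newest.glob("*.mp4"))
```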

Or a local gradio demo can be run by:

python app.py

Advanced Configuration

Click Me
Name | Configuration | Default | Explanation
--- | --- | --- | ---
Enhance Mode | --enhancer | None | Use gfpgan or RestoreFormer to enhance the generated face via a face restoration network.
Still Mode | --still | False | Use the same pose parameters as the original image; fewer head motions.
Expressive Mode | --expression_scale | 1.0 | A larger value makes the expression motion stronger.
Save Path | --result_dir | ./results | The folder where the results will be saved.
Preprocess | --preprocess | crop | Run on and produce results from the cropped input image. Other choice: resize, where the image is resized to a specific resolution.
Ref Mode (eye) | --ref_eyeblink | None | A video path; we borrow eye blinks from this reference video to provide more natural eyebrow movement.
Ref Mode (pose) | --ref_pose | None | A video path; we borrow the pose from this head reference video.
3D Mode | --face3dvis | False | Needs additional installation. More details on generating the 3D face can be found here.
Free-view Mode | --input_yaw, --input_pitch, --input_roll | None | Generate novel-view or free-view 4D talking heads from a single image. More details can be found here.
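When scripting many runs, it can be convenient to assemble the command line from the flags in the table above. A hypothetical wrapper (not part of the repo; it only builds the string, it does not run inference):

```python
import shlex

def build_inference_cmd(audio, image, **options):
    """Assemble a python inference.py command line from keyword options.
    Boolean True values become bare switches (e.g. still=True -> --still)."""
    cmd = ["python", "inference.py", "--driven_audio", audio, "--source_image", image]
    for flag, value in options.items():
        if value is True:           # boolean switches like --still
            cmd.append(f"--{flag}")
        elif value is not None:     # valued flags like --enhancer gfpgan
            cmd += [f"--{flag}", str(value)]
    return shlex.join(cmd)
```

For example, `build_inference_cmd("a.wav", "face.png", still=True, enhancer="gfpgan")` yields a command combining still mode with face enhancement.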

Examples

basic: art_0.japanese.mp4
w/ still mode: art_0.japanese_still.mp4
w/ exp_scale 1.3: art_0.japanese_scale1.3.mp4
w/ gfpgan: art_0.japanese_es1.mp4

Please turn on the audio; videos embedded on GitHub are muted by default.

Input, w/ reference video, reference video
If the reference video is shorter than the input audio, the reference video will be looped.
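One plausible reading of that looping behavior, as a sketch (the repo's exact implementation may differ):

```python
def loop_reference_frames(ref_frames, num_audio_frames):
    """Repeat the reference video's frames cyclically until they
    cover the full length of the driving audio."""
    if not ref_frames:
        return []
    return [ref_frames[i % len(ref_frames)] for i in range(num_audio_frames)]
```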

Generating 3D face from Audio

Input, Animated 3D face (3dface.mp4)

Please turn on the audio; videos embedded on GitHub are muted by default.

Generating 4D free-view talking examples from audio and a single image

We use input_yaw, input_pitch, and input_roll to control the head pose. For example, --input_yaw -20 30 10 means the head yaw angle changes from -20 to 30 and then from 30 to 10.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --input_yaw -20 30 10
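The sweep through those control values can be illustrated with simple linear interpolation. This is an assumption for intuition only; SadTalker's actual interpolation scheme may differ:

```python
import numpy as np

def pose_sequence(control_points, num_frames):
    """Linearly interpolate pose control values across the video,
    e.g. [-20, 30, 10] sweeps -20 -> 30 -> 10 over num_frames frames."""
    xs = np.linspace(0, len(control_points) - 1, num_frames)
    return np.interp(xs, np.arange(len(control_points)), control_points)
```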
Results, Free-view results, Novel-view results

[Beta Application] Full body/image Generation

Now you can use --still to generate a natural full body video. You can add enhancer or full_img_enhancer to improve the quality of the generated video. However, combining this with other modes, such as ref_eyeblink or ref_pose, will produce poor results; we are still working on fixing this.

python inference.py --driven_audio <audio.wav> \
                    --source_image <video.mp4 or picture.png> \
                    --result_dir <a folder to store results> \
                    --still \
                    --enhancer gfpgan 

🛎 Citation

If you find our work useful in your research, please consider citing:

@article{zhang2022sadtalker,
  title={SadTalker: Learning Realistic 3D Motion Coefficients for Stylized Audio-Driven Single Image Talking Face Animation},
  author={Zhang, Wenxuan and Cun, Xiaodong and Wang, Xuan and Zhang, Yong and Shen, Xi and Guo, Yu and Shan, Ying and Wang, Fei},
  journal={arXiv preprint arXiv:2211.12194},
  year={2022}
}

💗 Acknowledgements

Facerender code borrows heavily from zhanglonghao's reproduction of face-vid2vid and from PIRender. We thank the authors for sharing their wonderful code. In the training process, we also use models from Deep3DFaceReconstruction and Wav2lip, and we thank them for their wonderful work.

🥂 Related Works

📢 Disclaimer

This is not an official product of Tencent. This repository can only be used for personal/research/non-commercial purposes.

LOGO: color and font suggestions by ChatGPT; logo font: Montserrat Alternates.

All copyrighted demo images are from community users or were generated with Stable Diffusion. Feel free to contact us if you feel uncomfortable.