This is a Python project for creating an avatar. It includes instructions for setting up a Python 3.8 environment and installing the project dependencies.
To use this project, follow the steps below:
- Create the Python 3.8 environment, activate the virtual environment, and install the project dependencies by running the following commands:
python3.8 -m venv env
source env/bin/activate
pip install -r requirements.txt
If you also want to use the AppBrowser, run the following in the terminal:
pip install PyQt5==5.15.9
pip install PyQtWebEngine==5.15.6
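For convenience, the whole environment setup can be collected into one script. This is a minimal sketch, assuming python3.8 is on your PATH and requirements.txt sits in the repository root; the PyQt packages are only needed for the AppBrowser.

```bash
#!/usr/bin/env bash
# Minimal setup sketch: create and activate a Python 3.8 virtual environment,
# then install the project dependencies (assumes requirements.txt is in the repo root).
set -e

python3.8 -m venv env
source env/bin/activate
pip install -r requirements.txt

# Optional: only needed if you want to use the AppBrowser.
pip install PyQt5==5.15.9 PyQtWebEngine==5.15.6
```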
Download wav2lip_gan.pth from Google Drive (https://drive.google.com/file/d/1L3YpmdzuIs-ToTEmADUtpJdCoa3T8Gaa/view?usp=drive_link) or via wget http://217.21.78.160/wav2lip_gan.pth -O checkpoints/wav2lip_gan.pth, and place it in the checkpoints/ folder of the repository.
For commercial requests, please contact us at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in. We have an HD model ready that can be used commercially. This code is part of the paper: A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild, published at ACM Multimedia 2020.
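Putting the download together with its expected location, a minimal sketch (run from the repository root, using the mirror URL listed above) is:

```bash
# Sketch: fetch the Wav2Lip + GAN checkpoint into the location the code expects.
# Assumes you are in the repository root; the mirror URL is the one listed above.
mkdir -p checkpoints
wget http://217.21.78.160/wav2lip_gan.pth -O checkpoints/wav2lip_gan.pth
```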
📑 Original Paper | 📰 Project Page | 🌀 Demo | ⚡ Live Testing | 📔 Colab Notebook |
---|---|---|---|---|
Paper | Project Page | Demo Video | Interactive Demo | Colab Notebook / Updated Colab Notebook |
- Weights of the visual quality discriminator have been updated in the README!
- Lip-sync videos to any target speech with high accuracy 💯. Try our interactive demo.
- ✨ Works for any identity, voice, and language. Also works for CGI faces and synthetic voices.
- Complete training code, inference code, and pretrained models are available 💥
- Or, quick-start with the Google Colab Notebook: Link. Checkpoints and samples are available in a Google Drive folder as well. There is also a tutorial video on this, courtesy of What Make Art. Also, thanks to Eyal Gruss, there is a more accessible Google Colab notebook with more useful features. A tutorial Colab notebook is available at this link.
- 🔥🔥 Several new, reliable evaluation benchmarks and metrics released [see the evaluation/ folder of this repo]. Instructions to calculate the metrics reported in the paper are also included.
All results from this open-source code or our demo website should only be used for research/academic/personal purposes. As the models are trained on the LRS2 dataset, any form of commercial use is strictly prohibited. For commercial requests, please contact us directly!
- Python 3.6
- ffmpeg:
sudo apt-get install ffmpeg
- Install the necessary packages using pip install -r requirements.txt. Alternatively, instructions for using a Docker image are provided here. Have a look at this comment and comment on the gist if you encounter any issues.
- The face detection pre-trained model should be downloaded to face_detection/detection/sfd/s3fd.pth. Use the alternative link if the above does not work.
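A minimal sketch of placing the face-detection weights, assuming you have already located the download link above; the URL below is a placeholder, not the actual download address:

```bash
# Sketch: put the face-detection weights where the code expects them.
# <S3FD_URL> is a placeholder -- substitute the download link referenced above.
mkdir -p face_detection/detection/sfd
wget "<S3FD_URL>" -O face_detection/detection/sfd/s3fd.pth
```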
Model | Description | Link to the model |
---|---|---|
Wav2Lip | Highly accurate lip-sync | Link |
Wav2Lip + GAN | Slightly inferior lip-sync, but better visual quality | Link |
Expert Discriminator | Weights of the expert discriminator | Link |
Visual Quality Discriminator | Weights of the visual disc trained in a GAN setup | Link |
You can lip-sync any video to any audio:
python inference.py --checkpoint_path <ckpt> --face <video.mp4> --audio <an-audio-source>
The result is saved (by default) in results/result_voice.mp4. You can specify a different output path as an argument, similar to several other available options. The audio source can be any file supported by FFmpeg that contains audio data: *.wav, *.mp3, or even a video file, from which the code will automatically extract the audio.
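For instance, a typical invocation might look like the sketch below; the input file names are illustrative placeholders, not files shipped with the repository:

```bash
# Example (illustrative file names): lip-sync input_video.mp4 to input_audio.wav
# using the Wav2Lip + GAN checkpoint downloaded earlier.
python inference.py \
    --checkpoint_path checkpoints/wav2lip_gan.pth \
    --face input_video.mp4 \
    --audio input_audio.wav
# The output is written to results/result_voice.mp4 by default.
```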
- Experiment with the --pads argument to adjust the detected face bounding box; this often leads to improved results. You might need to increase the bottom padding to include the chin region, e.g. --pads 0 20 0 0.
- If the mouth position looks dislocated or you see strange artifacts such as two mouths, the cause may be over-smoothing of the face detections. Try again with the --nosmooth argument.
- Experiment with the --resize_factor argument to get a lower-resolution video. Why? The models are trained on faces at a lower resolution. You might get better, more visually pleasing results for 720p videos than for 1080p videos (in many cases, the latter works well too).
- The Wav2Lip model without GAN usually needs more experimentation with the above two options to get the most ideal results, and can sometimes give you a better result as well. A combined invocation using these flags is sketched after this list.
Our models are trained on LRS2. See here for a few suggestions regarding training on other datasets.
data_root (mvlrs_v1)
├── main, pretrain (we use only main folder in this work)
| ├── list of folders
| │ ├── five-digit numbered video IDs ending with (.mp4)
Place the LRS2 filelists (train, val, test) .txt files in the filelists/ folder.
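As a sketch of this step, assuming the official train/val/test .txt filelists have already been downloaded to a local path (the source path below is a placeholder):

```bash
# Sketch: copy the LRS2 filelists into the folder the training scripts expect.
# /path/to/lrs2_filelists is a placeholder for wherever you saved the filelists.
mkdir -p filelists
cp /path/to/lrs2_filelists/{train,val,test}.txt filelists/
```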
python preprocess.py --data_root data_root/main --preprocessed_root lrs2_preprocessed/
Additional options, such as batch_size and the number of GPUs to use in parallel, can also be set.
preprocessed_root (lrs2_preprocessed)
├── list of folders
| ├── Folders with five-digit numbered video IDs
| │ ├── *.jpg
| │ ├── audio.wav
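For example, a preprocessing run that uses a larger batch size and multiple GPUs might look like the sketch below; the --batch_size and --ngpu flag names are assumptions rather than options documented in this README, so confirm them with python preprocess.py --help.

```bash
# Sketch: preprocess LRS2 with a larger face-detection batch size on 2 GPUs.
# The --batch_size and --ngpu flag names are assumed; check preprocess.py --help.
python preprocess.py \
    --data_root data_root/main \
    --preprocessed_root lrs2_preprocessed/ \
    --batch_size 32 \
    --ngpu 2
```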
There are two major steps: (i) Train the expert lip-sync discriminator, (ii) Train the Wav2Lip model(s).
You can download the pre-trained weights if you want to skip this step. To train it:
python color_syncnet_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints>
You can either train the model without the additional visual quality discriminator (< 1 day of training) or use the discriminator (~2 days). For the former, run:
python wav2lip_train.py --data_root lrs2_preprocessed/ --checkpoint_dir <folder_to_save_checkpoints> --syncnet_checkpoint_path <path_to_expert_disc_checkpoint>
To train with the visual quality discriminator, you should run hq_wav2lip_train.py instead. The arguments for both files are similar. In both cases, you can resume training as well. Look at python wav2lip_train.py --help for more details. You can also set additional, less commonly used hyper-parameters at the bottom of the hparams.py file.
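As an illustration, resuming an HQ training run might look like the sketch below; the checkpoint paths are placeholders, and the --checkpoint_path and --disc_checkpoint_path resume flags are assumptions based on the --help output mentioned above rather than options documented in this README.

```bash
# Sketch: train with the visual quality discriminator, resuming from earlier
# generator and discriminator checkpoints (placeholder file names).
# Verify the resume flag names with: python hq_wav2lip_train.py --help
python hq_wav2lip_train.py \
    --data_root lrs2_preprocessed/ \
    --checkpoint_dir checkpoints/hq_run/ \
    --syncnet_checkpoint_path checkpoints/lipsync_expert.pth \
    --checkpoint_path checkpoints/hq_run/checkpoint_step000100000.pth \
    --disc_checkpoint_path checkpoints/hq_run/disc_checkpoint_step000100000.pth
```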
Training on other datasets might require modifications to the code. Please read the following before you raise an issue:
- You might not get good results by training/fine-tuning on a few minutes of a single speaker. This is a separate research problem, to which we do not have a solution yet. Thus, we would most likely not be able to resolve your issue.
- You must train the expert discriminator for your own dataset before training Wav2Lip.
- If you are using your own dataset downloaded from the web, it will in most cases need to be sync-corrected.
- Be mindful of the FPS of the videos of your dataset. Changes to FPS would need significant code changes.
- The expert discriminator's eval loss should go down to ~0.25 and the Wav2Lip eval sync loss should go down to ~0.2 to get good results.
When raising an issue on this topic, please let us know that you are aware of all these points.
We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model.
Please check the evaluation/ folder for the instructions.
This repository can only be used for personal/research/non-commercial purposes. However, for commercial requests, please contact us directly at radrabha.m@research.iiit.ac.in or prajwal.k@research.iiit.ac.in. We have an HD model trained on a dataset allowing commercial usage. The size of the generated face will be 192 x 288 in our new model. Please cite the following paper if you use this repository:
@inproceedings{10.1145/3394171.3413532,
author = {Prajwal, K R and Mukhopadhyay, Rudrabha and Namboodiri, Vinay P. and Jawahar, C.V.},
title = {A Lip Sync Expert Is All You Need for Speech to Lip Generation In the Wild},
year = {2020},
isbn = {9781450379885},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3394171.3413532},
doi = {10.1145/3394171.3413532},
booktitle = {Proceedings of the 28th ACM International Conference on Multimedia},
pages = {484–492},
numpages = {9},
keywords = {lip sync, talking face generation, video generation},
location = {Seattle, WA, USA},
series = {MM '20}
}
Parts of the code structure are inspired by this TTS repository. We thank the author for this wonderful code. The code for face detection has been taken from the face_alignment repository. We thank the authors for releasing their code and models. We thank zabique for the tutorial Colab notebook.