This repository contains a Wav2Lip model with a GAN for improved lip syncing, combined with ESRGAN to produce high-resolution lip-synced videos.
To use the model in Google Colab, simply click on the following link:
https://colab.research.google.com/drive/1F6Bj8G1eiktudzuusKYdHRiJOPlpA4-L?usp=sharing
This will open a Google Colab notebook with the Wav2Lip model.
- Video: https://openinapp.co/5cwva
- Audio: https://openinapp.co/o9vuj
To generate a lip-synced video, follow these steps:
- Open the Google Colab notebook using the link provided above.
- Install the required dependencies and the Wav2Lip model.
- Mount your Google Drive in the Colab notebook (see the mounting sketch after this list).
- Upload the video file you want to lip-sync.
- Upload the audio file that should drive the lip sync.
- In the last cell, set the padding values to suit your video (see the inference sketch after this list).
- Run the last cell to generate the lip-synced video.
- Feed the lip-synced video into the ESRGAN stage and install its required dependencies.
- Run ESRGAN on each frame (see the upscaling sketch after this list).
- Reassemble the upscaled frames to get the final high-resolution video.
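
The sketches below are minimal examples of what the corresponding notebook cells can look like; file paths and checkpoint locations are assumptions, so adjust them to your own setup.

Mounting Google Drive uses the standard `google.colab` helper:

```python
# Mount Google Drive so the notebook can read the uploaded video and audio.
from google.colab import drive

drive.mount('/content/drive')  # follow the authorization prompt

# Example paths once mounted (hypothetical; point these at your own uploads):
video_path = '/content/drive/MyDrive/input_video.mp4'
audio_path = '/content/drive/MyDrive/input_audio.wav'
```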
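
The padding step corresponds to Wav2Lip's `--pads` argument (top, bottom, left, right padding in pixels around the detected face). A minimal sketch of the inference call, assuming the GAN checkpoint is saved as `checkpoints/wav2lip_gan.pth` and reusing the hypothetical paths above:

```python
import subprocess

# Run Wav2Lip inference; extra bottom padding often helps keep the chin in frame.
subprocess.run([
    'python', 'inference.py',
    '--checkpoint_path', 'checkpoints/wav2lip_gan.pth',   # checkpoint location is an assumption
    '--face', '/content/drive/MyDrive/input_video.mp4',
    '--audio', '/content/drive/MyDrive/input_audio.wav',
    '--pads', '0', '20', '0', '0',                        # top, bottom, left, right; tune for your video
    '--outfile', 'results/result_voice.mp4',
], check=True)
```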
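
For the ESRGAN stage, the lip-synced video is split into frames, each frame is upscaled, and the frames are re-encoded with the original audio. The sketch below assumes ffmpeg is installed and a Real-ESRGAN-style `inference_realesrgan.py` script; adapt the commands, model name, and frame rate to whichever ESRGAN build the notebook installs.

```python
import os
import subprocess

os.makedirs('frames', exist_ok=True)
os.makedirs('frames_hr', exist_ok=True)

# 1. Split the lip-synced video into individual frames.
subprocess.run(['ffmpeg', '-i', 'results/result_voice.mp4',
                'frames/frame_%05d.png'], check=True)

# 2. Upscale every frame (Real-ESRGAN-style CLI; flags are an assumption).
subprocess.run(['python', 'inference_realesrgan.py',
                '-n', 'RealESRGAN_x4plus',
                '-i', 'frames', '-o', 'frames_hr'], check=True)

# 3. Reassemble the upscaled frames and mux the original audio back in.
#    The frame rate (25) and the '_out' suffix on upscaled frames are assumptions;
#    match them to your source video and ESRGAN script.
subprocess.run(['ffmpeg', '-framerate', '25', '-i', 'frames_hr/frame_%05d_out.png',
                '-i', '/content/drive/MyDrive/input_audio.wav',
                '-c:v', 'libx264', '-pix_fmt', 'yuv420p', '-shortest',
                'final_hr_video.mp4'], check=True)
```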
Final output video: https://drive.google.com/file/d/1uNVv3ncr8J6YutaG02OcGlV9K3nSATV8/view?usp=sharing
If you have any questions, please feel free to contact me at truelykush18@gmail.com.
This repository is licensed under the MIT License.