This repository contains a Wav2Lip model that can be used to generate lip-syncing videos from audio.
To use the model in Google Colab, open the following notebook link:
https://colab.research.google.com/drive/1-n9wexxt2_2xn0JD2xIXW3wgfkXtJs6e?usp=sharing
The link opens a Google Colab notebook preloaded with the Wav2Lip model.
Sample video: https://openinapp.co/5cwva
Sample audio: https://openinapp.co/o9vuj
To generate a lip-synced video, follow these steps:
- Open the Google Colab notebook using the link provided above.
- Install the required dependencies and the Wav2Lip model.
- In the notebook, mount your Google Drive.
- Upload the video file you want to lip-sync.
- Upload the audio file to drive the lip movements.
- In the last cell, set the padding values to suit your input video.
- Run the last cell to generate the lip-synced video.
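The padding values in the steps above extend the detected face bounding box by a number of pixels on each side (extra padding below the chin is the common adjustment). A minimal sketch of how such padding is applied, using a hypothetical helper name:

```python
def apply_pads(box, pads, frame_h, frame_w):
    """Expand a face bounding box (x1, y1, x2, y2) by pads
    (top, bottom, left, right) pixels, clamped to the frame size."""
    x1, y1, x2, y2 = box
    top, bottom, left, right = pads
    return (max(0, x1 - left),
            max(0, y1 - top),
            min(frame_w, x2 + right),
            min(frame_h, y2 + bottom))

# Pad 20 px below the chin of a 100x100 face box in a 640x480 frame.
print(apply_pads((100, 100, 200, 200), (0, 20, 0, 0), 480, 640))
```

If the mouth region is cut off in the output, increasing the bottom padding usually helps.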
The generated lip-synced video is saved as output_video.mp4.
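Under the hood, the last cell typically invokes Wav2Lip's inference.py script. A sketch of that command, assuming the GAN checkpoint and Drive file paths shown here (adjust the paths to wherever your uploaded files live):

```shell
python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face /content/drive/MyDrive/input_video.mp4 \
  --audio /content/drive/MyDrive/input_audio.wav \
  --pads 0 20 0 0  # padding: top, bottom, left, right (pixels)
```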
If you have any questions, please feel free to contact me at truelykush18@gmail.com.
This repository is licensed under the MIT License.