Wav2lip_LipSync


This repository contains a Wav2Lip model that can be used to generate lip-synced videos from audio.

To use the model in Google Colab, simply click on the following link:

https://colab.research.google.com/drive/1-n9wexxt2_2xn0JD2xIXW3wgfkXtJs6e?usp=sharing

This will open a Google Colab notebook with the Wav2Lip model.

Sample Inputs

Video: https://openinapp.co/5cwva

Audio: https://openinapp.co/o9vuj

Usage

To generate a lip-synced video, follow these steps:

  1. Open the Google Colab notebook using the link provided above.
  2. Install the required dependencies and the Wav2Lip model.
  3. In the notebook, mount your Google Drive account.
  4. Upload the video file you want to lip-sync.
  5. Upload the audio file that will drive the lip movements.
  6. In the last cell, set the face-padding values to suit your video.
  7. Run the last cell to generate the lip-synced video.
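The last cell of the notebook essentially invokes the Wav2Lip `inference.py` script with your video, audio, and padding choices. As a minimal sketch, the helper below builds that command; the checkpoint and Drive paths shown are placeholders, not files shipped with this repository, so substitute your own after mounting Drive.

```python
# Sketch of the Wav2Lip inference call as run from the Colab notebook.
# All file paths here are placeholder examples.
import shlex


def build_wav2lip_command(checkpoint, face_video, audio, pads=(0, 10, 0, 0)):
    """Return the shell command for Wav2Lip's inference.py.

    `pads` is (top, bottom, left, right) padding around the detected
    face; a small bottom pad (e.g. 10) often helps include the chin.
    """
    top, bottom, left, right = pads
    return (
        f"python inference.py "
        f"--checkpoint_path {shlex.quote(checkpoint)} "
        f"--face {shlex.quote(face_video)} "
        f"--audio {shlex.quote(audio)} "
        f"--pads {top} {bottom} {left} {right}"
    )


print(build_wav2lip_command(
    "checkpoints/wav2lip_gan.pth",              # pretrained checkpoint
    "/content/drive/MyDrive/input_video.mp4",   # your uploaded video
    "/content/drive/MyDrive/input_audio.wav",   # your uploaded audio
))
```

Tweaking the `--pads` values (step 6 above) is the usual fix when the generated mouth region looks cropped or misaligned.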

Sample Output

output_video.mp4

Contact

If you have any questions, please feel free to contact me at truelykush18@gmail.com.

License

This repository is licensed under the MIT License.