
🤢 LipSick: Fast, High Quality, Low Resource Lipsync Tool 🤮



Introduction

To get started with LipSick on Windows, follow these steps to set up your environment. This branch has been tested with Anaconda, Python 3.10, and CUDA 11.6, and runs with as little as 4 GB of VRAM. Using a different CUDA version can cause speed issues.

See the other branches for Linux, HuggingFace GPU / CPU, or Google Colab.

Setup

Install
  1. Clone the repository:

     git clone https://github.com/Inferencer/LipSick.git
     cd LipSick

  2. Create and activate the Anaconda environment:

     conda env create -f environment.yml
     conda activate LipSick
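Since this branch depends on a specific CUDA version, it can be worth confirming that the framework sees the GPU before running inference. A minimal sanity check along these lines (a hypothetical helper, not part of the LipSick codebase; it assumes PyTorch, per the DINet lineage):

```python
# Report whether PyTorch and CUDA are usable in the active environment.
# Hypothetical helper -- not part of the LipSick codebase.
def cuda_status():
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed in this environment"
    if not torch.cuda.is_available():
        return "PyTorch installed, but no CUDA device is visible"
    name = torch.cuda.get_device_name(0)
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024 ** 3
    return f"CUDA {torch.version.cuda} on {name} ({vram_gb:.1f} GB VRAM)"

if __name__ == "__main__":
    print(cuda_status())
```

If this reports anything other than CUDA 11.6, expect the speed issues mentioned above.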

Download pre-trained models

Download Links

For the folder ./asserts

Please download pretrained_lipsick.pth using this link and place the file in the folder ./asserts.

Then, download output_graph.pb using this link and place the file in the same folder.

For the folder ./models

Please download shape_predictor_68_face_landmarks.dat using this link and place the file in the folder ./models.

The folder structure for manually downloaded models

.
├── ...
├── asserts                        
│   ├── examples                   # A place to store inputs if not using gradio UI
│   ├── inference_result           # Results will be saved to this folder
│   ├── output_graph.pb            # The DeepSpeech model you manually download and place here
│   └── pretrained_lipsick.pth     # Pre-trained model you manually download and place here
│                   
├── models
│   ├── Discriminator.py
│   ├── LipSick.py
│   ├── shape_predictor_68_face_landmarks.dat  # Dlib Landmark tracking model you manually download and place here
│   ├── Syncnet.py
│   └── VGG19.py   
└── ...
  3. Run the application:
python app.py

Or use the new autorun tool by double-clicking run_lipsick.bat.

This will launch a Gradio interface where you can upload your video and audio files to process them with LipSick.

To-Do List

  • Add support for macOS.
  • Add upscale reference frames with masking.
  • Add seamless clone masking to remove the common bounding box around mouths. 🤕
  • Add alternative option for face tracking model SFD (likely best results, but slower than Dlib).
  • Add custom reference frame feature. 😷
  • Examine CPU speed upgrades.
  • Reintroduce persistent folders for frame extraction as an option with existing frame checks for faster extraction on commonly used videos. 😷
  • Provide HuggingFace space CPU (free usage but slower). 😷
  • Provide Google Colab .IPYNB. 🤮
  • Add support for Linux. 🤢
  • Release Tutorial on manual masking using DaVinci. 😷
  • Looped original video generated as an option for faster manual masking. 🤮
  • Image to MP4 conversion so a single image can be used as input.
  • Automatic audio conversion to WAV regardless of input audio format. 🤢
  • Clean README.md & provide command line inference.
  • Remove input video 25fps requirement.
  • Upload cherry picked input footage for user download & use.
  • Create a Discord to share results, faster help, suggestions & cherry picked input footage.
  • Upload results footage montage to GitHub so new users can see what LipSick is capable of. 🤮
  • Close the mouth fully on silence.
  • Auto git pull updater .bat file. 🤢
  • Add auto persistent crop_radius to prevent mask flickering. 🤮
  • Auto run the UI with a .bat file. 🤮
  • Auto open UI in default browser. 🤮
  • Add custom crop radius feature to stop flickering (Example). 🤮
  • Provide HuggingFace space GPU. 🤮
  • Remove warning messages in command prompt that don't affect performance. 🤢
  • Moved frame extraction to temp folders. 🤮
  • Results with the same input video name no longer overwrite existing results. 🤮
  • Remove OpenFace CSV requirement. 🤮
  • Detect accepted media input formats only. 🤮
  • Upgrade to Python 3.10. 🤮
  • Add UI. 🤮
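On the completed "automatic audio conversion to WAV" item above: the usual approach is to shell out to ffmpeg. A sketch of the kind of command involved (hypothetical helper; it assumes ffmpeg is on PATH, and targets the 16 kHz mono WAV that Mozilla DeepSpeech models conventionally expect):

```python
# Build an ffmpeg command that converts any input audio to 16 kHz mono WAV.
# Hypothetical sketch -- not the project's actual conversion code.
def ffmpeg_wav_command(src, dst):
    return [
        "ffmpeg", "-y",     # overwrite an existing output file
        "-i", src,          # any input container/codec ffmpeg supports
        "-vn",              # drop any video stream
        "-ar", "16000",     # resample to 16 kHz
        "-ac", "1",         # downmix to mono
        dst,
    ]

# e.g. subprocess.run(ffmpeg_wav_command("speech.mp3", "speech.wav"), check=True)
```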

Key:

  • 🤮 = Completed & published
  • 🤢 = Completed & published but requires community testing
  • 😷 = Tested & working but not published yet
  • 🤕 = Tested but not ready for public use

Simple Key:

  • 🤮 / 🤢 = Available
  • 😷 / 🤕 = Unavailable

Acknowledgements

This project, LipSick, is heavily inspired by and based on DINet; specific components are borrowed and adapted to enhance LipSick.

We express our gratitude to the authors and contributors of DINet for their open-source code and documentation.