
Video Face Swap

The easiest way to make yourself the hero of video memes.
You don't need a powerful video card with ray tracing support to run it.

MacBook Pro M1 — 29 frames — 48.5 seconds

Examples

Example clips, shown without and with 🧖‍♂️ face restoration: video-1.mp4, video_01.mp4, video-2.mp4, video-3.mp4

Limitations

  • Intentionally no audio support.
  • Tested only on a MacBook Pro (Apple M1, 16GB RAM).

How it works

  1. Split the video into a sequence of images.
  2. Swap all faces in each image with the insightface module (the same approach as Roop).
  3. Restore the faces with GFPGAN if the --restore option is passed.
  4. Assemble the video from the image sequence.
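
For illustration, here is a minimal Python sketch of that pipeline. It is not the project's swap.py: the swap_video function, the OpenCV frame handling, and the model paths (./models/inswapper_128.onnx, ./models/GFPGANv1.4.pth, the buffalo_l detector) are assumptions, and it streams frames instead of writing an image sequence to disk, but the steps map onto the list above.

# Illustrative sketch only -- not the project's swap.py.
# Assumes ./models/inswapper_128.onnx and ./models/GFPGANv1.4.pth are present.
import cv2
import insightface
from insightface.app import FaceAnalysis
from gfpgan import GFPGANer

def swap_video(video_path, face_path, output_path, restore=False):
    # Face detector/embedder plus the swapper model (the same components Roop uses)
    analyser = FaceAnalysis(name="buffalo_l")
    analyser.prepare(ctx_id=0, det_size=(640, 640))
    swapper = insightface.model_zoo.get_model("./models/inswapper_128.onnx")

    # The face that will be pasted onto every detected face
    source_face = analyser.get(cv2.imread(face_path))[0]

    restorer = GFPGANer(model_path="./models/GFPGANv1.4.pth", upscale=1) if restore else None

    # 1. Read the video frame by frame (instead of dumping an image sequence to disk)
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS)
    size = (int(capture.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(capture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)

    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # 2. Swap every detected face in the frame
        for target_face in analyser.get(frame):
            frame = swapper.get(frame, target_face, source_face, paste_back=True)
        # 3. Optionally restore faces with GFPGAN
        if restorer is not None:
            _, _, frame = restorer.enhance(frame, paste_back=True)
        # 4. Append the processed frame to the output video
        writer.write(frame)

    capture.release()
    writer.release()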

Installation

git clone git@github.com:pfrankov/video-face-swap.git
cd video-face-swap
python3.10 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

You also need to put the pretrained models into the models directory:

Usage

# Make sure you're in the virtual environment
source venv/bin/activate

Single video

python swap.py ./input/leonardo-dicaprio-rick-dalton.mp4 ./my_face.jpg result.mp4
Usage:
    swap.py <video> <face> <output> [--restore]

Arguments:
  video                 Path to the .mp4 video file to process
  face                  Path to the image with your face
  output                Path to output video with .mp4 extension

Options:
  --restore             Enable face restoration. Slows down processing
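
For example, to process the same clip with face restoration enabled, add the --restore flag:

python swap.py ./input/leonardo-dicaprio-rick-dalton.mp4 ./my_face.jpg result.mp4 --restore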

Batch

# Make batch_run.sh executable
chmod +x batch_run.sh
./batch_run.sh ./my_face.jpg
Usage:
    ./batch_run.sh <face> [input_directory]

Arguments:
  face                  Path to the image with your face
  input_directory       Directory with .mp4 files. Default: `input`
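
For example, to process clips from a directory other than the default input (the ./my_videos path here is only a placeholder):

./batch_run.sh ./my_face.jpg ./my_videos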

My Telegram channel about neural networks (in Russian): https://t.me/neuronochka