Official PyTorch implementation of the paper "APB2FaceV2: Real-Time Audio-Guided Multi-Face Reenactment".
This code has been developed under Python 3.7, PyTorch 1.5.1, and CUDA 10.1 on Ubuntu 16.04.
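A minimal sketch of setting up a matching environment with the pinned versions above (conda and the `+cu101` wheel tags are assumptions; adjust the install command to your platform and package manager):

```shell
# Create an isolated environment with the pinned Python 3.7
# (conda is an assumption; any virtualenv manager works).
conda create -n apb2facev2 python=3.7 -y
conda activate apb2facev2

# PyTorch 1.5.1 built against CUDA 10.1, as stated above.
# torchvision 0.6.1 is the release paired with torch 1.5.1.
pip install torch==1.5.1+cu101 torchvision==0.6.1+cu101 \
    -f https://download.pytorch.org/whl/torch_stable.html
```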
Download the AnnVI dataset from Google Drive or Baidu Cloud (Key: str3) to `/media/datasets/AnnVI`.
```shell
python3 train.py --name AnnVI --data AnnVI --data_root DATASET_PATH --img_size 256 --mode train --trainer l2face --gan_mode lsgan --gpus 0 --batch_size 16
```

Training results are stored in `checkpoints/xxx`.
```shell
python3 test.py
```

Test results are stored in `checkpoints/AnnVI-Big/results`.
If you find this work useful, please cite:

```bibtex
@article{zhang2021real,
  title={Real-Time Audio-Guided Multi-Face Reenactment},
  author={Zhang, Jiangning and Zeng, Xianfang and Xu, Chao and Liu, Yong and Li, Hongliang},
  journal={IEEE Signal Processing Letters},
  year={2021},
  publisher={IEEE}
}
```