Website | ArXiv | Get Started | Video
The source code of the ICCV 2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering".
The proposed PIRenderer can synthesize portrait images by intuitively controlling face motions with fully disentangled 3DMM parameters. This model can be applied to tasks such as:
- Intuitive Portrait Image Editing
  - Intuitive Portrait Image Control
  - Pose & Expression Alignment
- Motion Imitation
  - Same & Cross-identity Reenactment
- Audio-Driven Facial Reenactment
  - Audio-Driven Reenactment
- 2021.9.20 Code for PyTorch is available!
Coming soon
- Python 3
- PyTorch 1.7.1
- CUDA 10.2
# 1. Create a conda virtual environment.
conda create -n PIRenderer python=3.6
conda activate PIRenderer
conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2
# 2. Install other dependencies
pip install -r requirements.txt
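After installing, a quick environment check can confirm that PyTorch 1.7.1 and CUDA are visible before moving on. This is a minimal sketch and not part of the repository's scripts:
python -c "
import torch
print('PyTorch version:', torch.__version__)          # expected: 1.7.1
print('CUDA available:', torch.cuda.is_available())   # expected: True on a CUDA 10.2 machine
print('GPU:', torch.cuda.get_device_name(0) if torch.cuda.is_available() else 'none')
"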
We train our model on the VoxCeleb dataset. You can either download the demo dataset for inference or prepare the full dataset for training and testing.
The demo dataset contains all 514 test videos. You can download the dataset with the following code:
./scripts/download_demo_dataset.sh
Or you can choose to download the resources with these links:
Google Drive & Baidu Drive (extraction password: "p9ab")
Then unzip and save the files to ./dataset
- The dataset is preprocessed following the method used in First-Order. You can follow the instructions in their repo to download and crop videos for training and testing.
- After obtaining the VoxCeleb videos, we extract 3DMM parameters using Deep3DFaceReconstruction.
  The folders are organized as follows:
  ${DATASET_ROOT_FOLDER}
  └───path_to_videos
      └───train
          └───xxx.mp4
          └───xxx.mp4
          ...
      └───test
          └───xxx.mp4
          └───xxx.mp4
          ...
  └───path_to_3dmm_coeff
      └───train
          └───xxx.mat
          └───xxx.mat
          ...
      └───test
          └───xxx.mat
          └───xxx.mat
          ...
- We save the videos and 3DMM parameters in an lmdb file. Please run the following code to do this (a small sanity-check sketch for the expected layout is shown right after this list):
  python scripts/prepare_vox_lmdb.py \
      --path path_to_videos \
      --coeff_3dmm_path path_to_3dmm_coeff \
      --out path_to_output_dir
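Before building the lmdb file, it can be worth verifying that every clip has a matching coefficient file. The following is a hedged sketch, not part of the repository: path_to_videos and path_to_3dmm_coeff are the placeholders from the layout above, scipy is assumed to be installed, and the .mat contents are printed rather than assumed.
# check_pairing.py -- sketch: verify each video has a matching 3DMM coefficient file
from pathlib import Path
from scipy.io import loadmat

video_root = Path("path_to_videos")       # placeholder, as in the layout above
coeff_root = Path("path_to_3dmm_coeff")   # placeholder, as in the layout above

for split in ("train", "test"):
    videos = sorted((video_root / split).glob("*.mp4"))
    missing = [v.name for v in videos
               if not (coeff_root / split / (v.stem + ".mat")).exists()]
    print(f"{split}: {len(videos)} videos, {len(missing)} missing .mat files")
    if missing:
        print("  e.g.", missing[:5])

# Peek at one coefficient file to see what it actually contains.
sample = next((coeff_root / "train").glob("*.mat"), None)
if sample is not None:
    keys = [k for k in loadmat(str(sample)) if not k.startswith("__")]
    print("keys in", sample.name, ":", keys)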
The trained weights can be downloaded by running the following code:
./scripts/download_weights.sh
Or you can choose to download the resources with these links:
Google Drive & Baidu Drive (extraction password: "4sy1").
Then unzip and save the files to ./result/face.
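As a quick check that the download is intact, the checkpoint can be opened with torch.load. This is only a sketch: the file pattern below is an assumption, so substitute whatever checkpoint file the archive actually places under ./result/face.
# check_weights.py -- sketch: confirm the downloaded checkpoint loads (file pattern is an assumption)
import glob
import torch

ckpts = glob.glob("./result/face/*.pt")   # adjust the pattern to the actual file name
print("found checkpoints:", ckpts)
if ckpts:
    state = torch.load(ckpts[0], map_location="cpu")
    # Print the top-level keys so you can see how the weights are organized.
    print("top-level keys:", list(state.keys()) if isinstance(state, dict) else type(state))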
Reenactment
Run the demo for face reenactment:
python -m torch.distributed.launch --nproc_per_node=1 --master_port 12345 inference.py \
--config ./config/face_demo.yaml \
--name face \
--no_resume \
--output_dir ./vox_result/face_reenactment
The output results are saved at ./vox_result/face_reenactment
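To get a quick overview of what the demo produced without opening files one by one, a small listing helper is handy. This is a convenience sketch only and makes no assumption about whether the results are images or videos:
# list_results.py -- sketch: summarize whatever the demo wrote to the output directory
from pathlib import Path

out_dir = Path("./vox_result/face_reenactment")
files = sorted(p for p in out_dir.rglob("*") if p.is_file())
print(f"{len(files)} files under {out_dir}")
for p in files[:10]:
    size_mb = p.stat().st_size / 1e6
    print(f"  {p.relative_to(out_dir)}  ({size_mb:.1f} MB)")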
Intuitive Control
coming soon
Our model can be trained with the following command:
python -m torch.distributed.launch --nproc_per_node=4 --master_port 12345 train.py \
--config ./config/face.yaml \
--name face
If you find this code helpful, please cite our paper:
@misc{ren2021pirenderer,
title={PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering},
author={Yurui Ren and Ge Li and Yuanqi Chen and Thomas H. Li and Shan Liu},
year={2021},
eprint={2109.08379},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
We build our project based on imaginaire. Some dataset preprocessing methods are derived from video-preprocessing.