Source code for the CVPR 2020 paper "Deep Facial Non-Rigid Multi-View Stereo" [paper] [supp] [video].
(1) Create an Anaconda environment with Python 3.6.

```bash
conda create -n DFNRMVS python=3.6
source activate DFNRMVS
```
(2) Clone the repository and install dependencies.

```bash
git clone https://github.com/zqbai-jeremy/DFNRMVS.git
cd DFNRMVS
conda install --yes --file requirements_conda.txt
pip install -r requirements_pip.txt
```
(3) Set up the 3DMM.
- Clone face3d (forked and modified from YadiraF/face3d) into "<DFNRMVS directory>/external/":

```bash
mkdir external
cd external
git clone https://github.com/zqbai-jeremy/face3d.git
cd face3d
```

- Set up face3d as described in YadiraF/face3d.
- Download "Exp_Pca.bin" from Guo et al. (via the "CoarseData" link in their repository) and copy it to "<DFNRMVS directory>/external/face3d/examples/Data/BFM/Out/".
- Download "std_exp.txt" from Deng et al. and copy it to "<DFNRMVS directory>/external/face3d/examples/Data/BFM/Out/".
(4) Install face-alignment.

```bash
conda install -c 1adrianb face_alignment
```
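To verify the installation, the snippet below runs 2D landmark detection on one image. It uses the face-alignment API from the 1.x releases (`LandmarksType._2D` was later renamed to `LandmarksType.TWO_D`); the image path is a placeholder.

```python
import face_alignment
from skimage import io

# 68-point 2D landmark detection with face-alignment (1adrianb/face-alignment).
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

img = io.imread('examples/some_image.jpg')  # placeholder image path
landmarks = fa.get_landmarks(img)           # list with one (68, 2) array per detected face
print(None if landmarks is None else landmarks[0].shape)
```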
(5) Download a pre-trained model (2views_model.pth or 3views_finetune_model.pth; may be used for research purposes only) to "<DFNRMVS directory>/net_weights/". You need to create this folder yourself.
(6) Run the demo.
- Modify the directory paths in demo.py (a sketch of the relevant edits follows the commands) and run:

```bash
cd <DFNRMVS_directory>
python demo.py
```
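The exact variable names in demo.py may differ from release to release; the lines below only illustrate the kind of paths to edit, and every name here is an assumption.

```python
# Illustrative only: the actual variable names in demo.py may differ.
in_dir = 'examples/my_subject/'              # folder with 2-3 views of the same face
out_dir = 'out_dir/my_subject/'              # per-view results are written here
model_path = 'net_weights/2views_model.pth'  # pre-trained weights from step (5)
```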
- All images in the input directory are used for reconstruction. Per-view results are saved to the output directory.
- Example inputs are in "<DFNRMVS directory>/examples/"; the corresponding outputs are in "<DFNRMVS directory>/out_dir/".
- The model usually gives good results for 2-view inputs with yaw angles within ±30 degrees.
- Training requires 256x256 images with ground-truth 3D scans. Loss functions and training parameters are provided in "<DFNRMVS directory>/train/losses.py".
- torch-batch-svd must be set up to use these losses (see the sketch below).
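For context, torch-batch-svd provides a batched CUDA SVD for PyTorch, which geometry losses typically need for operations such as rigid (Procrustes) alignment. The sketch below illustrates that use case under assumed inputs; it is not the actual code in train/losses.py, and the `rigid_align` helper and its point-set shapes are assumptions.

```python
import torch
from torch_batch_svd import svd  # batched SVD from torch-batch-svd (CUDA only)

def rigid_align(src, dst):
    """Batched Procrustes alignment: find R, t that best map src onto dst.

    src, dst: (B, N, 3) point sets on a CUDA device.
    Illustration only; train/losses.py may use the SVD differently.
    """
    src_c = src - src.mean(dim=1, keepdim=True)   # center both point sets
    dst_c = dst - dst.mean(dim=1, keepdim=True)
    cov = src_c.transpose(1, 2) @ dst_c           # (B, 3, 3) covariance matrices
    U, S, V = svd(cov)                            # batched SVD on the GPU
    R = V @ U.transpose(1, 2)
    # Flip the last column of V where needed so every rotation has det(R) = +1
    V = V.clone()
    V[:, :, 2] *= torch.det(R).sign().unsqueeze(1)
    R = V @ U.transpose(1, 2)
    t = dst.mean(dim=1) - (R @ src.mean(dim=1, keepdim=True).transpose(1, 2)).squeeze(2)
    return R, t
```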
If you use this code, please cite:

```
@inproceedings{bai2020deep,
  title={Deep Facial Non-Rigid Multi-View Stereo},
  author={Bai, Ziqian and Cui, Zhaopeng and Rahim, Jamal Ahmed and Liu, Xiaoming and Tan, Ping},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={5850--5860},
  year={2020}
}
```