
NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images [CVPR 2023]

Mingwu Zheng, Haiyu Zhang, Hongyu Yang, Di Huang

Official code for the CVPR 2023 paper NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images.

The paper presents NeuFace, a novel 3D face rendering model that learns accurate and physically meaningful underlying 3D representations via neural rendering techniques.

NeuFace naturally incorporates low-rank neural BRDFs into physically based rendering, allowing it to capture facial geometry and complex appearance properties collaboratively, which enhances its robustness against specular reflections. Additionally, NeuFace exhibits commendable generalization abilities when applied to common objects.
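
As background, physically based rendering evaluates the standard rendering equation (generic notation below, not necessarily the paper's exact formulation); roughly speaking, NeuFace's low-rank neural BRDFs model the reflectance term f_r inside the integral:

L_o(\mathbf{x}, \boldsymbol{\omega}_o) = \int_{\Omega} f_r(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)\, L_i(\mathbf{x}, \boldsymbol{\omega}_i)\, (\mathbf{n} \cdot \boldsymbol{\omega}_i)\, \mathrm{d}\boldsymbol{\omega}_i

where x is a surface point, ω_o and ω_i are the outgoing and incoming directions, L_i is the incoming radiance, and n is the surface normal.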

Installation Requirements

The code is compatible with Python 3.6.13 and PyTorch 1.9.1. To create an Anaconda environment named neuface with the required dependencies, run:

conda create -n neuface python==3.6.13
conda activate neuface
pip install -r requirement.txt
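
To quickly verify the environment (a minimal sanity check, not part of the repository's scripts), print the installed PyTorch version and GPU availability:

import torch
print(torch.__version__)          # expected: 1.9.1
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible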

Usage

Data and shape prior

For human faces, we use data from the FaceScape dataset to evaluate our model. The detailed 3D mesh is used to generate a mask of the face area in each image. The pretrained ImFace model can be downloaded from pretrained-model.

  • Mesh preprocessing: To obtain the preprocessed mesh, run:
python data_preprocess/cut_mesh.py

Please make sure the path in the file is correct.

  • Image and mask rendering: Once you have the preprocessed mesh, you can render the mask and image by running:
python data_preprocess/render_mask.py

Please make sure the path in the file is correct.
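
The snippet below is only a rough illustration of this step, not the repository's render_mask.py: it rasterizes a binary face-area mask from a preprocessed mesh, assuming trimesh and pyrender are installed; the file name and camera pose are placeholders.

import numpy as np
import trimesh
import pyrender

tm = trimesh.load("preprocessed_face.obj", force="mesh")  # placeholder path to a cut mesh
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tm))
camera = pyrender.PerspectiveCamera(yfov=np.pi / 3.0)
cam_pose = np.eye(4)
cam_pose[2, 3] = 0.6                                      # toy pose; use the real camera calibration in practice
scene.add(camera, pose=cam_pose)
renderer = pyrender.OffscreenRenderer(viewport_width=512, viewport_height=512)
color, depth = renderer.render(scene)
mask = (depth > 0).astype(np.uint8) * 255                 # pixels covered by the mesh form the face-area mask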

For common objects, the DTU dataset is used for model evaluation.

Train on FaceScape

To train NeuFace on the FaceScape dataset, run:

python scripts/train_pl.py

Make sure that the variables in your config file are correct. Results can be found in {out_dir}/{expname}. The trained model can be downloaded from the link below (the FaceScape authors permit releasing the trained model):

Trained Model        Description
NeuFace_1_id_2_exp   trained on 1 identity with 2 expressions (smile) from the FaceScape dataset

If you want to use our trained model, please place the downloaded file in exp_pl/ckpt/{trained_model}.
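
To confirm that a downloaded checkpoint loads correctly, a generic PyTorch check can be used (the file name below is an assumption; adjust it to the actual downloaded file):

import torch
ckpt = torch.load("exp_pl/ckpt/NeuFace_1_id_2_exp", map_location="cpu")  # assumed file name
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))       # a Lightning-style checkpoint typically contains 'state_dict'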

Evaluation on FaceScape

To evaluate the novel view metrics, run:

python scripts/eval_pl.py --ckpt [ckpt_path] --out_dir [out_dir]

Results can be found in {out_dir}/test/{expname}.
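
For example, assuming the NeuFace_1_id_2_exp checkpoint has been placed under exp_pl/ckpt/ as described above (file name assumed) and eval_out is a hypothetical output directory:

python scripts/eval_pl.py --ckpt exp_pl/ckpt/NeuFace_1_id_2_exp --out_dir eval_out

The metrics and renderings would then appear under eval_out/test/{expname}.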

Train on DTU

To train NeuFace on the DTU dataset, run:

cd common_object
python training/exp_runner.py --conf ./confs/dtu_fixed_cameras.conf --scan_id [scan_id] --gpu [GPU_ID]
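
For example, to train on scan 65 of DTU using GPU 0 (illustrative values):

python training/exp_runner.py --conf ./confs/dtu_fixed_cameras.conf --scan_id 65 --gpu 0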

Make sure that [dataset.data_dir] in your config file is correct. The results can be found in common_object/exps/{train.expname}/{timestamp}. The trained models can be downloaded from:

Trained Model        Description
NeuFace_DTU_65       trained on scan 65 of the DTU dataset
NeuFace_DTU_110      trained on scan 110 of the DTU dataset
NeuFace_DTU_118      trained on scan 118 of the DTU dataset

If you want to use our trained model, please place the downloaded file in common_object/exps/{trained_model}.

Evaluation on DTU

To evaluate the training view metrics, run:

cd common_object
python evaluation/eval.py  --conf ./confs/dtu_fixed_cameras.conf --scan_id [SCAN_ID] --eval_rendering --gpu [GPU_INDEX]
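
For example, to evaluate the released NeuFace_DTU_65 model on GPU 0 (illustrative values):

python evaluation/eval.py --conf ./confs/dtu_fixed_cameras.conf --scan_id 65 --eval_rendering --gpu 0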

Results can be found in common_object/evals/{train.expname}/rendering.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{zheng2023neuface,
title={NeuFace: Realistic 3D Neural Face Rendering from Multi-view Images},
author={Zheng, Mingwu and Zhang, Haiyu and Yang, Hongyu and Huang, Di},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2023}
}

Acknowledgments

  • The codebase is developed based on VolSDF and IDR by Yariv et al. Many thanks for their great contributions!
  • This work builds on ImFace (CVPR 2022); please check it out as well!