Extreme 3D Face Reconstruction: Seeing Through Occlusions
Please note that the main part of the code has been released, though we are still testing it to fix possible glitches. Thank you.
Python and C++ code for realistic 3D face modeling from a single image, using our shape and detail regression networks published in CVPR 2018 [1] (follow the link to our PDF, which has many more reconstruction results).
This page contains end-to-end demo code that estimates the 3D facial shape with realistic details directly from an unconstrained 2D face image. For a given input image, it produces standard ply files of the 3D face shape. It accompanies the deep networks described in our papers [1] and [2]. The occlusion recovery code, however, will be published in a future release. We also include demo code and data presented in [1].
Dependencies
Data requirements
Before compiling the code, please make sure you have all the required data in the specific folders listed below:
- Download our Bump-CNN and move the CNN model (1 file: `ckpt_109_grad.pth.tar`) into the `CNN` folder
- Download our PyTorch CNN model and move the CNN model (3 files: `shape_model.pth`, `shape_model.py`, `shape_mean.npz`) into the `CNN` folder
- Download the Basel Face Model and move `01_MorphableModel.mat` into the `3DMM_model` folder
- Acquire the 3DDFA Expression Model, run its code to generate `Model_Expression.mat`, and move this file into the `3DMM_model` folder
- Go into the `3DMM_model` folder and run the script `python trimBaselFace.py`. This should output 2 files: `BaselFaceModel_mod.mat` and `BaselFaceModel_mod.h5`
- Download the dlib face prediction model and move the `.dat` file into the `dlib_model` folder.
Note that we modified the model files from the 3DMM-CNN paper. Therefore, if you generated these files before, you need to re-create them for this code.
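To verify this layout before compiling, a quick check along the following lines can help (a minimal sketch, not part of the release; the dlib `.dat` file name is an assumption and should match the landmark model you actually downloaded):

```python
import os

# Required data files, per the list above.
REQUIRED = {
    'CNN': ['ckpt_109_grad.pth.tar', 'shape_model.pth',
            'shape_model.py', 'shape_mean.npz'],
    '3DMM_model': ['01_MorphableModel.mat', 'Model_Expression.mat'],
    # Assumed name; use whichever .dat file you downloaded.
    'dlib_model': ['shape_predictor_68_face_landmarks.dat'],
}

missing = [os.path.join(folder, name)
           for folder, names in REQUIRED.items()
           for name in names
           if not os.path.isfile(os.path.join(folder, name))]
for path in missing:
    print('Missing: ' + path)
if not missing:
    print('All required data files found.')
```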
Installation
There are 2 options below to compile our code:
Installation with Docker (recommended)
- Install Docker CE
- With Linux, manage Docker as a non-root user
- Install nvidia-docker
- Build the Docker image:

```
docker build -t extreme-3dmm-docker .
```
Installation without Docker on Linux
The steps below have been tested on Ubuntu Linux only:
- Install Python 2.7
- Install the required third-party packages:

```
sudo apt-get install -y libhdf5-serial-dev libboost-all-dev cmake libosmesa6-dev freeglut3-dev
```
- Install the Dlib C++ library. Sample commands to compile Dlib:

```
wget http://dlib.net/files/dlib-19.6.tar.bz2
tar xvf dlib-19.6.tar.bz2
cd dlib-19.6/
mkdir build
cd build
cmake ..
cmake --build . --config Release
sudo make install
cd ..
```
- Install PyTorch
- Install other required third-party Python packages:

```
pip install opencv-python torchvision scikit-image cvbase pandas mmdnn dlib
```
- Configure the Dlib and HDF5 paths in `CMakefiles.txt`, if needed
- Build the C++ code:

```
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=../demoCode ..
make
make install
cd ..
```

This should generate the `TestBump` executable in the `demoCode` folder.
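After installation, a quick import check confirms that the Python dependencies are visible to the interpreter you plan to use (a minimal sketch; the module names mirror the packages listed above):

```python
import importlib

# Import names for the packages installed above
# (opencv-python -> cv2, scikit-image -> skimage).
MODULES = ['cv2', 'torch', 'torchvision', 'skimage',
           'cvbase', 'pandas', 'mmdnn', 'dlib']

for name in MODULES:
    try:
        importlib.import_module(name)
        print(name + ': OK')
    except ImportError as err:
        print(name + ': MISSING (' + str(err) + ')')
```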
Usage
Start docker container
If you compiled our code with Docker, you need to start a Docker container to run it. You also need to set up a shared folder to transfer input/output data between the host computer and the container.
- Prepare the shared folder on the host computer, for example `/home/ubuntu/shared`
- Copy input data (if needed) to the shared folder
- Start the container:

```
nvidia-docker run --rm -ti --ipc=host --privileged -v /home/ubuntu/shared:/shared extreme-3dmm-docker bash
```

Now the folder `/home/ubuntu/shared` on your host computer is mounted as `/shared` inside the container.
3D face modeling with realistic details from a set of input images
- Go into the `demoCode` folder. The demo script can be used from the command line with the following syntax:

```
python testBatchModel.py <inputList> <outputDir>
```
where the parameters are the following:
- `<inputList>` is a text file containing the paths to the input images, one per line.
- `<outputDir>` is the path to the output directory, where the ply files are stored.
An example for `<inputList>` is `demoCode/testImages.txt`:

```
../data/test/03f245cb652c103e1928b1b27028fadd--smith-glasses-too-faced.jpg
../data/test/20140420_011855_News1-Apr-25.jpg
....
```
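Such a list can also be generated from a folder of images with a few lines of Python (a minimal sketch; the folder name and output file below are examples, not fixed paths):

```python
import glob
import os

# Collect image paths from a folder and write them one per line,
# matching the <inputList> format expected by testBatchModel.py.
image_dir = '../data/test'   # example input folder
paths = sorted(glob.glob(os.path.join(image_dir, '*.jpg')) +
               glob.glob(os.path.join(image_dir, '*.png')))

with open('myImages.txt', 'w') as f:   # example list name
    for path in paths:
        f.write(path + '\n')
print('Wrote %d image paths to myImages.txt' % len(paths))
```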
The output 3D models will be `<outputDir>/<imageName>_<postfix>.ply`, with `<postfix>` = `<modelType>_<poseType>`. `<modelType>` can be `"foundation"`, `"withBump"` (before soft-symmetry), `"sparseFull"` (soft-symmetry on the sparse mesh), or `"final"`. `<poseType>` can be `"frontal"` or `"aligned"` (based on the estimated pose). The final 3D shape has `<postfix>` = `"final_frontal"`. You can configure the output models in the code before compiling.
The PLY files can be displayed using standard off-the-shelf 3D (ply file) visualization software such as MeshLab.
Sample command:

```
python testBatchModel.py testImages.txt /shared
```
Note that our occlusion recovery code is not included in this release.
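To sanity-check an output mesh without opening a viewer, you can read its PLY header directly, which needs no extra dependencies (a minimal sketch; the file name below is an example output path, not a fixed one):

```python
# Print the element counts from a PLY header, e.g. to confirm the
# mesh was written. The path below is an example output name.
ply_path = '/shared/myImage_final_frontal.ply'

with open(ply_path, 'rb') as f:
    for raw in f:
        line = raw.decode('ascii', 'ignore').strip()
        if line.startswith('element'):
            print(line)   # e.g. "element vertex ..."
        if line == 'end_header':
            break
```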
Demo code and data in our paper
- Go into the `demoCode` folder. The demo script can be run from the command line:

```
./testPaperResults.sh
```
Before exiting the docker container, remember to save your output data to the shared folder.
Citation
If you find this work useful, please cite our paper [1] with the following BibTeX:
```
@inproceedings{tran2017extreme,
  title={Extreme {3D} Face Reconstruction: Seeing Through Occlusions},
  author={Tran, Anh Tuan and Hassner, Tal and Masi, Iacopo and Paz, Eran and Nirkin, Yuval and Medioni, G\'{e}rard},
  booktitle={IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
  year=2018
}
```
References
[1] A. Tran, T. Hassner, I. Masi, E. Paz, Y. Nirkin, G. Medioni, "Extreme 3D Face Reconstruction: Seeing Through Occlusions", IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, June 2018
[2] A. Tran, T. Hassner, I. Masi, G. Medioni, "Regressing Robust and Discriminative 3D Morphable Models with a very Deep Neural Network", CVPR 2017
Changelog
- Dec. 2018, Converted to Dockerfile
- Dec. 2017, First release
License and Disclaimer
Please see the LICENSE here.
Contacts
If you have any questions, drop an email to anhttran@usc.edu, hassner@isi.edu, or iacopoma@usc.edu, or leave a message below with GitHub (log-in is needed).