PRNet

The source code of 'Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network'.

Primary language: Python. License: MIT.

Joint 3D Face Reconstruction and Dense Alignment with Position Map Regression Network

This is an official Python implementation of PRN. The training code will be released later (in about two months).

PRN is a method for jointly regressing dense alignment and 3D face shape in an end-to-end manner. More examples on Multi-PIE and 300VW can be seen on YouTube.

The main features are:

  • End-to-End: our method directly regresses the 3D facial structure and dense alignment from a single image, bypassing 3DMM fitting.

  • Multi-task: by regressing a position map, the 3D geometry is obtained together with its semantic meaning. Thus, the tasks of dense alignment, monocular 3D face reconstruction, pose estimation, etc. can be completed effortlessly.

  • Faster than real-time: the method runs at over 100 fps (on a GTX 1080) to regress a position map.

  • Robust: tested on facial images in unconstrained conditions; our method is robust to pose, illumination, and occlusion.

Applications

Basics (evaluated in the paper)

  • Face Alignment

Dense alignment of both visible and non-visible points (including the 68 key points).

Also outputs the visibility of each point (1 for visible, 0 for non-visible).

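For illustration, here is a minimal sketch of how the 68 key points can be read off a regressed position map. It assumes pos is the (256, 256, 3) network output and that the 68-point UV index file ships with the repo under Data/uv-data (check your checkout for the exact path); this mirrors what the provided API does, but treat it as a sketch rather than the reference implementation.

    # Sketch: sparse 68-point alignment from a dense position map.
    # `pos` is assumed to be the (256, 256, 3) position map; each pixel stores
    # an (x, y, z) vertex coordinate. The UV index file lists, for each of the
    # 68 landmarks, its (u, v) location in the position map.
    import numpy as np

    uv_kpt_ind = np.loadtxt('Data/uv-data/uv_kpt_ind.txt').astype(np.int32)  # shape (2, 68)

    def get_68_landmarks(pos):
        # Index the position map at the 68 predefined UV locations.
        # Result: (68, 3) array of x, y image coordinates plus relative depth z.
        return pos[uv_kpt_ind[1, :], uv_kpt_ind[0, :], :]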

  • 3D Face Reconstruction

Get the 3D vertices and corresponding colours from a single image. The result is saved as mesh data (.obj), which can be opened with MeshLab or Microsoft 3D Builder. Note that the texture of non-visible areas is distorted due to self-occlusion.

New:

  1. You can choose to output the mesh with its original pose (default) or with a front view (meaning all output meshes are aligned).
  2. The .obj file can now also be written with a texture map, and the non-visible texture can be set to 0.

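The repo provides utilities for writing the mesh; purely as an illustration, a minimal sketch of dumping vertices, per-vertex colours, and triangles into a MeshLab-readable .obj could look like the following (the vertices, colors, and triangles arrays are assumed to come from the PRN outputs).

    # Sketch: save a coloured mesh as .obj with per-vertex colours
    # (a non-standard but widely supported extension, readable by MeshLab).
    # vertices: (N, 3), colors: (N, 3) in [0, 1], triangles: (M, 3) 0-based indices.
    def save_obj_with_colors(path, vertices, colors, triangles):
        with open(path, 'w') as f:
            for v, c in zip(vertices, colors):
                f.write('v {} {} {} {} {} {}\n'.format(v[0], v[1], v[2], c[0], c[1], c[2]))
            for t in triangles:
                # .obj face indices are 1-based
                f.write('f {} {} {}\n'.format(t[0] + 1, t[1] + 1, t[2] + 1))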

More (to be added)

  • 3D Pose Estimation

    Rather than using only 68 key points to compute the camera matrix (which is easily affected by expression and pose), we use all vertices (more than 40K) to compute a more accurate pose.

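    As an illustration of the idea (not necessarily the exact implementation used here), one can fit a camera matrix to all regressed vertices against a canonical frontal vertex set by least squares and read Euler angles from its rotation part; the canonical_vertices array is assumed to be available from the repo's data files.

      # Sketch: head pose from dense vertices via a least-squares camera matrix.
      # `vertices` and `canonical_vertices` are (N, 3) arrays in correspondence.
      import numpy as np

      def estimate_pose(vertices, canonical_vertices):
          # Solve canonical_h @ P.T ~= vertices for the 3x4 camera matrix P.
          canonical_h = np.hstack([canonical_vertices,
                                   np.ones((canonical_vertices.shape[0], 1))])
          P = np.linalg.lstsq(canonical_h, vertices, rcond=None)[0].T
          R = P[:, :3]
          R = R / np.linalg.norm(R, axis=1, keepdims=True)  # strip scale per row
          # Euler angles in degrees (naming depends on the axis convention used).
          yaw = np.degrees(np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)))
          pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
          roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
          return P, (yaw, pitch, roll)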

  • Depth image

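    A depth image can be derived directly from the regressed geometry. As a rough sketch (the repo renders it properly from the mesh triangles), one can scatter normalised z values at each vertex's image location:

      # Sketch: crude depth image from dense vertices (no triangle rasterisation).
      # vertices: (N, 3) with x, y in image pixel coordinates and z as relative depth.
      import numpy as np

      def simple_depth_image(vertices, h, w):
          depth = np.zeros((h, w), dtype=np.float32)
          z = vertices[:, 2]
          z = (z - z.min()) / (z.max() - z.min() + 1e-8)   # normalise depth to [0, 1]
          x = np.clip(np.round(vertices[:, 0]).astype(np.int32), 0, w - 1)
          y = np.clip(np.round(vertices[:, 1]).astype(np.int32), 0, h - 1)
          for xi, yi, zi in zip(x, y, z):
              if zi > depth[yi, xi]:        # keep the nearest surface per pixel
                  depth[yi, xi] = zi
          return depth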

  • Texture Editing

    • Data Augmentation/Selfie Editing

      Modify specific parts of the input face, for example the eyes:

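      The editing itself happens in UV texture space: remap both images to UV textures via their position maps, copy the region of interest, then re-render. A toy sketch with a hypothetical binary eye_mask defined in UV space:

        # Sketch: blend the eye region of one UV texture into another.
        # texture_src, texture_ref: (256, 256, 3) UV textures; eye_mask is a
        # hypothetical (256, 256) binary mask covering the eye region in UV space.
        def swap_uv_region(texture_src, texture_ref, eye_mask):
            mask = eye_mask[..., None].astype('float32')
            return texture_src * (1.0 - mask) + texture_ref * mask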

    • Face Swapping

      Replace the texture with that of another face, warp it back to the original pose, and use Poisson editing to blend the images.

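      For the blending step, OpenCV's Poisson editing is one convenient option; the rendered swapped face, the original image, and the face-region mask below are assumed to come from the earlier steps.

        # Sketch: blend a re-rendered, texture-swapped face into the original photo.
        # rendered, image: uint8 BGR images of equal size; mask: uint8, 255 inside
        # the rendered face region.
        import cv2
        import numpy as np

        def blend_face(rendered, image, mask):
            ys, xs = np.nonzero(mask)
            center = (int(xs.mean()), int(ys.mean()))   # centre of the face region
            return cv2.seamlessClone(rendered, image, mask, center, cv2.NORMAL_CLONE)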

Getting Started

Prerequisite

  • Python 2.7 (numpy, skimage, scipy)

  • TensorFlow >= 1.4

    Optional:

  • dlib (for face detection; you do not have to install it if you can provide bounding box information)

  • opencv2 (for showing results)

GPU is highly recommended. The run time is ~0.01s with a GPU (GeForce GTX 1080) and ~0.2s with a CPU (Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz).

Usage

  1. Clone the repository

    git clone https://github.com/YadiraF/PRNet
    cd PRNet

  2. Download the PRN trained model at BaiduDrive or GoogleDrive, and put it into Data/net-data

  3. Run the test code (tests on AFLW2000 images).

    python run_basics.py  # can run with only Python and TensorFlow installed

  4. Run with your own images

    python demo.py -i <inputDir> -o <outputDir> --isDlib True

    run python demo.py --help for more details.

  5. For Texture Editing Apps:

    python demo_texture.py -i image_path_1 -r image_path_2 -o output_path

    run python demo_texture.py --help for more details.
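If you would rather call PRN from your own script than run the demos, the flow in demo.py looks roughly like the sketch below; the PRN class and method names follow the repo's api.py at the time of writing, so double-check them against your checkout.

    # Sketch: programmatic use of PRN, following the structure of demo.py.
    from skimage.io import imread
    from api import PRN

    prn = PRN(is_dlib=True)              # requires dlib; otherwise supply a bounding box
    image = imread('your_image.jpg')     # replace with a real image path

    pos = prn.process(image)             # (256, 256, 3) position map (may be None if no face is found)
    if pos is not None:
        kpt = prn.get_landmarks(pos)              # (68, 3) sparse landmarks
        vertices = prn.get_vertices(pos)          # dense 3D vertices (~43K points)
        colors = prn.get_colors(image, vertices)  # per-vertex colours sampled from the image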

Changelog

  • 2018/5/10 added texture editing examples (for data augmentation and face swapping)
  • 2018/4/28 added visibility of vertices, .obj output with texture map, and depth image
  • 2018/4/26 can now output mesh with front view
  • 2018/3/28 added pose estimation
  • 2018/3/12 first release (3D reconstruction and dense alignment)

Contacts

Please contact fengyao@sjtu.edu.cn or open an issue for any questions or suggestions (for example, to push me to add more applications).

Thanks! (●'◡'●)

Acknowledgements