Joint Voxel and Coordinate Regression for Accurate 3D Facial Landmark Localization

This repository includes the PyTorch code for training and evaluating the network described in Joint Voxel and Coordinate Regression for Accurate 3D Facial Landmark Localization.
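
As the title suggests, the network first predicts per-landmark 3D volumetric heatmaps (voxels) and then regresses 3D landmark coordinates from them. The snippet below is only an illustrative sketch of the general voxel-to-coordinate idea, converting a volume into coordinates via a differentiable expectation (soft-argmax); the coordinate regression in the paper is performed by its own learned sub-network, so treat this as intuition rather than the actual implementation.

# Illustrative only: turn per-landmark 3D heatmaps into (x, y, z)
# coordinates with a differentiable expectation (soft-argmax).
import torch

def soft_argmax_3d(voxels):
    # voxels: (batch, num_landmarks, D, H, W) unnormalized scores
    b, n, d, h, w = voxels.shape
    probs = torch.softmax(voxels.reshape(b, n, -1), dim=-1).reshape(b, n, d, h, w)
    zs = torch.linspace(0, 1, d, device=voxels.device)
    ys = torch.linspace(0, 1, h, device=voxels.device)
    xs = torch.linspace(0, 1, w, device=voxels.device)
    z = (probs.sum(dim=(3, 4)) * zs).sum(dim=-1)  # expectation along depth
    y = (probs.sum(dim=(2, 4)) * ys).sum(dim=-1)  # expectation along height
    x = (probs.sum(dim=(2, 3)) * xs).sum(dim=-1)  # expectation along width
    return torch.stack([x, y, z], dim=-1)         # (batch, num_landmarks, 3)

voxels = torch.randn(1, 68, 64, 64, 64)           # 68 facial landmarks
print(soft_argmax_3d(voxels).shape)               # torch.Size([1, 68, 3])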

Requirements

  • python 2.7
  • packages: PyTorch and the other Python dependencies imported by the code

Usage

Clone the repository and install the dependencies mentioned above:

git clone https://github.com/HongwenZhang/JVCR-3Dlandmark.git
cd JVCR-3Dlandmark

Then, you can run the demo code or train a model from scratch.

Demo

  1. Download the pre-trained model (trained on 300W-LP) and put it into the checkpoint directory

  2. Run the demo code

python run_demo.py --verbose
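
If you want to confirm the checkpoint is in place and readable before running the demo, a quick check such as the one below can help. The file name used here is an assumption; use whatever name the downloaded file has.

# Hypothetical sanity check; 'checkpoint/model_best.pth.tar' is an assumed
# file name, so replace it with the file you actually downloaded.
import torch

ckpt = torch.load('checkpoint/model_best.pth.tar', map_location='cpu')
print(type(ckpt))
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. a 'state_dict' entry alongside metadata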

Training

  1. Prepare the training and evaluation datasets

ln -s /path/to/your/300W_LP data/300wLP/images
ln -s /path/to/your/aflw2000 data/aflw2000/images

  • Download the .json annotation files from here and put them into data/300wLP and data/aflw2000 respectively (a minimal loader sketch follows after these steps)

  2. Run the training code

python train.py --gpus 0 -j 4
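
The sketch below shows one minimal way the symlinked images and .json annotations could be wrapped in a PyTorch dataset, assuming each annotation file is a JSON list whose entries contain an image name and a landmark array. The file name annotations.json and the keys 'image' and 'landmarks' are placeholders, not the actual format of the downloaded files; check the .json files for the real structure.

# Hypothetical loader sketch; the annotation file name and the 'image' /
# 'landmarks' keys are placeholders for the real annotation format.
import json
import os

import torch
from PIL import Image
from torch.utils.data import Dataset

class LandmarkDataset(Dataset):
    def __init__(self, image_root, ann_file):
        self.image_root = image_root
        with open(ann_file) as f:
            self.annotations = json.load(f)  # expects a list of dicts

    def __len__(self):
        return len(self.annotations)

    def __getitem__(self, idx):
        ann = self.annotations[idx]
        image = Image.open(os.path.join(self.image_root, ann['image'])).convert('RGB')
        landmarks = torch.tensor(ann['landmarks'], dtype=torch.float32)
        return image, landmarks

# e.g. LandmarkDataset('data/300wLP/images', 'data/300wLP/annotations.json')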

Acknowledgment

The code is developed on top of PyTorch-Pose. Thanks to the original author.