face-alignment-pytorch

A re-implementation of the training and inference code for the 2D-FAN and 3D-FAN face-alignment networks described in the "How far" paper.

Primary language: Python | License: BSD 3-Clause "New" or "Revised" License (BSD-3-Clause)

Pytorch version of ‘How far are we from solving the 2D \& 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)’

For the official Torch7 version, please refer to face-alignment-training. Please visit the authors’ webpage or arXiv for technical details.

This is a re-implementation of the training and inference code for 2D-FAN and 3D-FAN as described in the “How far” paper. Having searched for a PyTorch version for a long time and always found errors in other implementations, this may be the only PyTorch repository providing both training and inference code; the results have also been checked. This code takes some other GitHub projects as reference, such as pyhowfar and face-alignment.

Pretrained models will be available soon.

Requirements

  • Install the latest PyTorch. Version 0.4.1 is fully supported; older versions have not been tested.
  • Install Python 3.6.6, which is fully supported; older versions have not been tested.

Packages

Setup

  1. Clone the GitHub repository and install all the dependencies mentioned above.
git clone https://github.com/GuohongLi/face-alignment-pytorch.git
cd face-alignment-pytorch
  2. Download the 300W-LP dataset from here.
  3. Download the 300W-LP annotations converted to `.t7` format by the paper authors from here, extract it, and move the `landmarks` folder to the root of the 300W-LP dataset.
  4. Download the face-detector pretrained model file s3fd_convert.pth.

Usage

In order to run the demo, please download the required models available below and the associated data.

Train

python train.py

In order to see all the available options please run:

python train.py --help
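The paper trains the network to regress one heatmap per landmark, using an MSE loss against Gaussian targets centred on the ground-truth points. A minimal sketch of rendering one such target map (the map size and sigma here are illustrative assumptions, not the repository's actual values):

```python
import numpy as np

def gaussian_heatmap(size, center, sigma=1.0):
    """Render a (size, size) target heatmap: a 2D Gaussian whose
    peak (value 1.0) sits at center = (x, y). One map is generated
    per landmark; training then minimizes the MSE between predicted
    and target heatmaps."""
    xs = np.arange(size)          # column coordinates, shape (size,)
    ys = np.arange(size)[:, None]  # row coordinates, shape (size, 1)
    cx, cy = center
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```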

Inference

python inference.py

In order to see all the available options please run:

python inference.py --help
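Inference produces per-landmark heatmaps that must be decoded back to pixel coordinates. A sketch of the usual argmax decoding step (`heatmaps_to_landmarks` is a hypothetical helper for illustration; the repository's own code may additionally apply sub-pixel refinement):

```python
import numpy as np

def heatmaps_to_landmarks(heatmaps):
    """Decode an array of per-landmark heatmaps with shape (N, H, W)
    into (N, 2) pixel coordinates (x, y) by taking each map's argmax."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)
```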

What’s different?

  • Python-friendly: no need for `.t7`-format annotations.
  • Added the 300W-LP test set for validation.
  • Followed exactly the same training procedure described in the paper (except for the binary-network part).
  • Added model evaluation in terms of **Mean error** and **AUC@0.07**.
  • TODO: add evaluation on test sets (300W, 300VW, AFLW2000-3D, etc.).
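The two reported metrics can be sketched as follows. This is a non-authoritative sketch: `mean_error` averages per-image normalized mean errors (NME), and `auc_at` integrates the cumulative error distribution up to the 0.07 cutoff; the grid resolution is an implementation assumption:

```python
import numpy as np

def mean_error(per_image_nme):
    """Mean of per-image normalized mean errors (NME)."""
    return float(np.mean(per_image_nme))

def auc_at(per_image_nme, threshold=0.07, steps=1000):
    """Area under the cumulative error distribution (CED) curve up to
    `threshold`, normalized so a perfect method scores 1.0."""
    errors = np.asarray(per_image_nme)
    xs = np.linspace(0.0, threshold, steps)
    ced = np.array([(errors <= x).mean() for x in xs])
    # trapezoidal integration, normalized by the threshold
    area = np.sum((ced[1:] + ced[:-1]) / 2.0 * np.diff(xs))
    return float(area / threshold)
```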

Citation

@inproceedings{bulat2017far,
  title={How far are we from solving the 2D \& 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)},
  author={Bulat, Adrian and Tzimiropoulos, Georgios},
  booktitle={International Conference on Computer Vision},
  year={2017}
}