PFLD_68Points_Pytorch

A PyTorch implementation of PFLD for 68 facial landmarks.

Datasets

  • WFLW Dataset

    Wider Facial Landmarks in-the-wild (WFLW) is a newly proposed face dataset. It contains 10000 faces (7500 for training and 2500 for testing) with 98 fully manually annotated landmarks.

    1. Download the training and testing images [Google Drive][Baidu Drive], unzip them, and put them in ./data/WFLW/raw/

    2. list_68pt_rect_attr_train.txt and list_68pt_rect_attr_test.txt are already provided. If you want to generate them yourself, first download the WFLW Face Annotations, unzip them, and put them in ./data/WFLW/, then look through get68psFrom98psWFLW.py and run it (a sketch of the 98-to-68 point index mapping is given after this dataset list).

    3. Move Mirror68.txt to ./data/WFLW/annotations/

     $ cd ./data/WFLW 
     $ python3 WFLW_SetPreparation68.py
  • 300W Dataset

    300W is a widely used face alignment dataset. It has a total of 3148 training and 689 testing images; an image may contain more than one face, but only one face per image is labeled. The directories include afw (337), helen (train 2000 + test 330), ibug (135), and lfpw (train 811 + test 224), with 68 fully manually annotated landmarks.

    1. Download the training and testing images [Databases][Baidu Drive], unzip them, and put them in ./data/300W/raw/

    2. list_68pt_rect_attr_train.txt and list_68pt_rect_attr_test.txt are already provided. If you want to generate them yourself, look through get68pointsfor300W.py and run it.

    3. Move Mirror68.txt to ./data/300W/annotations/

     $ cd ./data/300W 
     $ python3 300W_SetPreparation68.py
  • 300VW Dataset

    300VW is distributed as videos, so each video needs to be split into individual frames, with each frame paired with its corresponding .pts keypoint file.

    1. Download the training and testing images [Databases], unzip them, and put them in ./data/300VW/raw/

    2. Run get68psAndImagesFrom300VW.py to generate list_68pt_rect_attr_train.txt

    3. Move Mirror68.txt to ./data/300VW/annotations/

     $ cd ./data/300VW 
     $ python3 get68psAndImagesFrom300VW.py
     $ python3 300VW_SetPreparation68.py
  • Your Own Dataset

    If you want to obtain facial landmarks for new face data, use the Face++ Detect API. For specific operations,
    please refer to the API Document, and see ./data/getNewFacialLandmarksFromFacePP.py for an example of calling the API.

  • All Dataset

    After completing the steps for each dataset above, run merge_files.py to merge the prepared file lists.

     $ cd ./data
     $ python3 merge_files.py
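
For reference, the reduction from 98-point WFLW annotations to 68 points can be sketched as below. The index mapping shown is the commonly used 98-to-68 correspondence and is only an illustration; the authoritative conversion is the one implemented in get68psFrom98psWFLW.py, and the layout of the trailing fields is an assumption based on the list file names.

 # Sketch: reduce one 98-point WFLW annotation line to 68 points.
 # The index mapping is the commonly used 98->68 correspondence (assumption);
 # the mapping actually used by this repo is defined in get68psFrom98psWFLW.py.
 import numpy as np

 WFLW_98_TO_68 = (
     list(range(0, 33, 2)) +      # jaw line: every other contour point (17)
     [33, 34, 35, 36, 37] +       # right eyebrow, upper edge (5)
     [42, 43, 44, 45, 46] +       # left eyebrow, upper edge (5)
     list(range(51, 60)) +        # nose (9)
     [60, 61, 63, 64, 65, 67] +   # right eye: keep 6 of the 8 points
     [68, 69, 71, 72, 73, 75] +   # left eye: keep 6 of the 8 points
     list(range(76, 96))          # mouth (20)
 )

 def wflw_line_to_68(line):
     """Parse one WFLW annotation line and return the 68x2 landmarks plus the remaining fields."""
     fields = line.strip().split()
     pts98 = np.array(fields[:196], dtype=np.float32).reshape(98, 2)
     pts68 = pts98[WFLW_98_TO_68]   # (68, 2)
     return pts68, fields[196:]     # rect, attributes, image name (assumed layout)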

Training & Testing

Training:

 $ sh train.sh
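
train.sh wraps the training script. For context, the PFLD paper weights each sample's L2 landmark error by its attribute weights and by an angle term over the three estimated Euler angles; the snippet below is only a minimal sketch of that loss under assumed tensor shapes and names, not the repo's actual implementation.

 import torch

 def pfld_weighted_loss(pred_landmarks, gt_landmarks, pred_euler, gt_euler, attr_weights):
     # pred_landmarks, gt_landmarks: (B, 136) flattened 68x2 coordinates (assumed layout)
     # pred_euler, gt_euler:         (B, 3) yaw/pitch/roll angles in radians (assumed)
     # attr_weights:                 (B,) per-sample attribute weight (assumed)
     # Angle term from the paper: sum over the 3 Euler angles of (1 - cos(angle error)).
     angle_term = torch.sum(1.0 - torch.cos(pred_euler - gt_euler), dim=1)
     # Squared L2 distance between predicted and ground-truth landmark vectors.
     l2 = torch.sum((pred_landmarks - gt_landmarks) ** 2, dim=1)
     return torch.mean(attr_weights * angle_term * l2)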

Testing:

 $ python3 camera.py
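
camera.py runs the trained model on a webcam stream. For a quick sanity check on a single image, a minimal inference sketch might look like the following; the checkpoint path, the PFLDInference class, the 112x112 input size, and the output layout are all assumptions, so adjust them to this repo's actual code.

 import cv2
 import numpy as np
 import torch

 from models.pfld import PFLDInference  # assumed module and class name

 # Load a trained checkpoint (path and checkpoint layout are assumptions).
 model = PFLDInference()
 model.load_state_dict(torch.load("./checkpoint/pfld_68.pth", map_location="cpu"))
 model.eval()

 # Read an already-cropped face and resize it to the assumed 112x112 network input.
 img = cv2.imread("face.jpg")
 h, w = img.shape[:2]
 inp = cv2.resize(img, (112, 112)).astype(np.float32) / 255.0
 inp = torch.from_numpy(inp.transpose(2, 0, 1)).unsqueeze(0)  # (1, 3, 112, 112)

 with torch.no_grad():
     landmarks = model(inp)                    # assumed output: (1, 136)
 landmarks = landmarks.view(-1, 2).numpy()     # (68, 2), normalized to [0, 1] (assumption)

 # Draw the 68 points back on the original image.
 for x, y in landmarks:
     cv2.circle(img, (int(x * w), int(y * h)), 2, (0, 255, 0), -1)
 cv2.imwrite("face_landmarks.jpg", img)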

Result

Sample images:

(example 68-point landmark detection results)

PyTorch -> ONNX -> ncnn

PyTorch -> ONNX -> onnx_sim

Make sure onnx-simplifier is installed (pip3 install onnx-simplifier).

 $ python3 pytorch2onnx.py
 $ python3 -m onnxsim model.onnx model_sim.onnx
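
pytorch2onnx.py exports the trained PyTorch model to ONNX before simplification. A minimal sketch of such an export is shown below; the class name, checkpoint path, input size, and tensor names are assumptions.

 import torch

 from models.pfld import PFLDInference  # assumed module and class name

 model = PFLDInference()
 model.load_state_dict(torch.load("./checkpoint/pfld_68.pth", map_location="cpu"))  # assumed path
 model.eval()

 # Fixed-size dummy input; 1x3x112x112 is the input size assumed above.
 dummy_input = torch.randn(1, 3, 112, 112)

 torch.onnx.export(
     model,
     dummy_input,
     "model.onnx",
     input_names=["input"],
     output_names=["output"],
     opset_version=11,
 )

onnx-simplifier then folds constants and strips redundant nodes from model.onnx, producing model_sim.onnx for the ncnn conversion below.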

onnx_sim -> ncnn

How to build ncnn: https://github.com/Tencent/ncnn/wiki/how-to-build

 $ cd ncnn/build/tools/onnx
 $ ./onnx2ncnn model_sim.onnx model_sim.param model_sim.bin

References:

PFLD: A Practical Facial Landmark Detector https://arxiv.org/pdf/1902.10859.pdf

TensorFlow implementation for 98 facial landmarks: https://github.com/guoqiangqi/PFLD