# Implementation of PFLD for 68 Facial Landmarks in PyTorch
## WFLW Dataset
Wider Facial Landmarks in-the-Wild (WFLW) is a recently proposed face dataset. It contains 10,000 faces (7,500 for training and 2,500 for testing) with 98 fully manually annotated landmarks.
1. Download the training and testing images ([Google Drive][Baidu Drive]), unzip, and put them into `./data/WFLW/raw/`.
2. `list_68pt_rect_attr_train.txt` and `list_68pt_rect_attr_test.txt` are already provided. If you want to generate them yourself, read `get68psFrom98psWFLW.py` and run it; before doing so, download the WFLW Face Annotations, unzip, and put them into `./data/WFLW/` (a sketch of the 98-to-68 index mapping appears after the commands below).
3. Move `Mirror68.txt` to `./data/WFLW/annotations/`.

```
$ cd ./data/WFLW
$ python3 WFLW_SetPreparation68.py
```
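For reference, the 98-point WFLW layout is commonly reduced to the 68-point layout with a fixed index table. Below is a minimal sketch under that assumption; `get68psFrom98psWFLW.py` is the authoritative conversion here, so verify the indices against it:

```python
import numpy as np

# Commonly used WFLW(98) -> 68-point index table (assumption; check it
# against get68psFrom98psWFLW.py before relying on it).
WFLW98_TO_68 = (
    list(range(0, 33, 2))        # jaw line: 17 points
    + [33, 34, 35, 36, 37]       # right eyebrow (upper edge): 5 points
    + [42, 43, 44, 45, 46]       # left eyebrow (upper edge): 5 points
    + list(range(51, 60))        # nose: 9 points
    + [60, 61, 63, 64, 65, 67]   # right eye: 6 points
    + [68, 69, 71, 72, 73, 75]   # left eye: 6 points
    + list(range(76, 96))        # mouth: 20 points
)

def convert_98_to_68(landmarks98):
    """landmarks98: (98, 2) array -> (68, 2) array."""
    landmarks98 = np.asarray(landmarks98).reshape(98, 2)
    return landmarks98[WFLW98_TO_68]
```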
## 300W Dataset
300W is a widely used face alignment dataset. It has 3,148 training and 689 testing images; an image may contain more than one face, but only one face per image is labeled. The data cover afw (337), helen (train 2,000 + test 330), ibug (135), and lfpw (train 811 + test 224), all with 68 fully manually annotated landmarks.
1. Download the training and testing images ([Databases][Baidu Drive]), unzip, and put them into `./data/300W/raw/`.
2. `list_68pt_rect_attr_train.txt` and `list_68pt_rect_attr_test.txt` are already provided. If you want to generate them yourself, read `get68pointsfor300W.py` and run it.
3. Move `Mirror68.txt` to `./data/300W/annotations/`.

```
$ cd ./data/300W
$ python3 300W_SetPreparation68.py
```
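The 300W annotations ship as IBUG-format `.pts` files; a minimal parser sketch, assuming the standard `version` / `n_points` / `{...}` header layout:

```python
import numpy as np

def read_pts(path):
    """Parse a standard IBUG .pts file into an (n_points, 2) float array."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    # File layout: "version: 1", "n_points: 68", "{", one "x y" per line, "}"
    start = lines.index('{') + 1
    end = lines.index('}')
    pts = [tuple(map(float, ln.split())) for ln in lines[start:end]]
    return np.array(pts, dtype=np.float32)

# e.g. read_pts('./data/300W/raw/ibug/image_003_1.pts').shape == (68, 2)
```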
## 300VW Dataset
300VW is distributed as videos, which need to be split into individual frames, each matched to its per-frame `.pts` landmark file.
1. Download the training and testing videos ([Databases]), unzip, and put them into `./data/300VW/raw/`.
2. Run `get68psAndImagesFrom300VW.py` to extract the frames and generate `list_68pt_rect_attr_train.txt` (a sketch of the frame splitting appears after the commands below).
3. Move `Mirror68.txt` to `./data/300VW/annotations/`.

```
$ cd ./data/300VW
$ python3 get68psAndImagesFrom300VW.py
$ python3 300VW_SetPreparation68.py
```
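Each 300VW clip is a `vid.avi` with per-frame annotations in `annot/000001.pts`, `annot/000002.pts`, and so on. Below is a minimal OpenCV sketch of the frame splitting; this is an assumption about what `get68psAndImagesFrom300VW.py` does, and the output paths are illustrative:

```python
import os
import cv2

def dump_frames(clip_dir, out_dir):
    """Split vid.avi into numbered frames matching the annot/NNNNNN.pts files."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(os.path.join(clip_dir, 'vid.avi'))
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        idx += 1  # 300VW .pts files are 1-indexed: 000001.pts, 000002.pts, ...
        pts_file = os.path.join(clip_dir, 'annot', '%06d.pts' % idx)
        if os.path.exists(pts_file):  # keep only frames that have annotations
            cv2.imwrite(os.path.join(out_dir, '%06d.jpg' % idx), frame)
    cap.release()
```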
## Your Own Dataset
If you want to get facial landmarks for new face data, use the Detect API of Face++. For the specifics, refer to the API Document, and see `./data/getNewFacialLandmarksFromFacePP.py` for an example of calling the API.
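A minimal sketch of calling the Detect API with `requests` (the endpoint and parameters follow the public API Document; the key and secret are placeholders, and `return_landmark=2` requests Face++'s 106-point set, which then needs mapping to the 68-point layout):

```python
import requests

DETECT_URL = 'https://api-cn.faceplusplus.com/facepp/v3/detect'

def detect_landmarks(image_path, api_key, api_secret):
    """Call the Face++ Detect API; returns the parsed JSON response."""
    with open(image_path, 'rb') as f:
        resp = requests.post(
            DETECT_URL,
            data={'api_key': api_key,
                  'api_secret': api_secret,
                  'return_landmark': 2},  # 2 = 106-point landmarks
            files={'image_file': f})
    resp.raise_for_status()
    return resp.json()  # faces[i]['landmark'] holds the named points

# result = detect_landmarks('face.jpg', 'YOUR_KEY', 'YOUR_SECRET')
```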
## All Datasets
After completing the steps for each dataset above, you can run `merge_files.py` directly:

```
$ cd ./data
$ python3 merge_files.py
```
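A sketch of the assumed merging behavior (an assumption about `merge_files.py`: it concatenates the per-dataset annotation lists into combined train/test lists; the actual paths may differ):

```python
import os

# Illustrative only: concatenate each dataset's list files into one.
datasets = ['WFLW', '300W', '300VW']

for split in ('train', 'test'):
    with open('list_68pt_rect_attr_%s.txt' % split, 'w') as out:
        for name in datasets:
            src = './%s/list_68pt_rect_attr_%s.txt' % (name, split)
            if not os.path.exists(src):
                continue  # e.g. 300VW only provides a train list
            with open(src) as f:
                out.write(f.read())
```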
Training:

```
$ sh train.sh
```

Testing with images from a camera (a stripped-down sketch of the camera loop appears after these commands):

```
$ python3 camera.py
```

Testing with images from a directory:

```
$ python3 test.py
```
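A stripped-down sketch of the camera loop; the checkpoint path, the 112x112 input size, and the normalized output layout are all assumptions (the real `camera.py` may differ, e.g. by cropping detected faces before inference):

```python
import cv2
import torch

# Assumptions: the checkpoint stores the whole nn.Module, the network takes
# a 1x3x112x112 input, and it outputs 68 (x, y) pairs normalized to [0, 1].
model = torch.load('checkpoint.pth', map_location='cpu')
model.eval()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    face = cv2.resize(frame, (112, 112))
    inp = torch.from_numpy(face).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        landmarks = model(inp).reshape(-1, 2)  # 68 (x, y) pairs in [0, 1]
    h, w = frame.shape[:2]
    for x, y in landmarks:
        cv2.circle(frame, (int(x * w), int(y * h)), 2, (0, 255, 0), -1)
    cv2.imshow('pfld', frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
cap.release()
```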
Sample images:
Details about the models are below (tip: install `resnest` to use the ResNeSt models):
Name | # Params | Mean error | Failure rate | One iteration time (s)
---|---|---|---|---
ResNest50 | 122.27M | 0.046 | 0.038 | 0.304
MobileNetV2_0.25 | 1.09M | 0.075 | 0.174 | 0.154
MobileNetV2_1.00 | 7.28M | 0.065 | 0.127 | 0.203
BlazeLandmark | 7.52M | 0.069 | 0.131 | 0.171
HRNetV2 | 545.07M | 0.066 | 0.125 | 0.769
efficientnet-b0 | 16.67M | 0.064 | 0.119 | 0.202
efficientnet-b1 | 26.37M | 0.075 | 0.149 | 0.252
efficientnet-b2 | 30.85M | 0.071 | 0.145 | 0.266
efficientnet-b3 | 42.29M | 0.099 | 0.136 | 0.307
efficientnet-b4 | 68.34M | 0.077 | 0.164 | 0.375
efficientnet-b5 | 109.34M | 0.094 | 0.173 | 0.501
efficientnet-b6 | 156.34M | 0.081 | 0.175 | 0.702
efficientnet-b7 | 244.03M | 0.081 | 0.196 | 0.914
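For context on the metrics: mean error is presumably the point-to-point landmark error normalized by a reference distance (inter-ocular distance is the common choice for 68-point evaluation), and failure rate is the fraction of samples whose normalized error exceeds a threshold (0.1 is the usual cut-off). A sketch under those assumptions:

```python
import numpy as np

def nme(pred, gt):
    """Mean point-to-point error normalized by inter-ocular distance.
    pred, gt: (68, 2) arrays; 68-pt indices 36 and 45 are the outer eye corners."""
    inter_ocular = np.linalg.norm(gt[36] - gt[45])
    return np.linalg.norm(pred - gt, axis=1).mean() / inter_ocular

def failure_rate(errors, threshold=0.1):
    """Fraction of samples whose NME exceeds the threshold."""
    return float((np.asarray(errors) > threshold).mean())
```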
## PyTorch -> ONNX -> onnx_sim

Make sure `onnx-simplifier` is installed (`pip3 install onnx-simplifier`).
```
$ python3 pytorch2onnx.py
$ python3 -m onnxsim model.onnx model_sim.onnx
```
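A minimal sketch of the export step that `pytorch2onnx.py` presumably performs (a torchvision MobileNetV2 stands in here for the trained PFLD model, and the 112x112 input size is an assumption):

```python
import torch
import torchvision

# Stand-in for the trained PFLD network: any nn.Module exports the same way.
model = torchvision.models.mobilenet_v2(num_classes=68 * 2)
model.eval()

dummy = torch.randn(1, 3, 112, 112)  # assumed PFLD input size
torch.onnx.export(model, dummy, 'model.onnx',
                  input_names=['input'],
                  output_names=['landmarks'],
                  opset_version=11)
```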
## onnx_sim -> ncnn

How to build: https://github.com/Tencent/ncnn/wiki/how-to-build

```
$ cd ncnn/build/tools/onnx
$ ./onnx2ncnn model_sim.onnx model_sim.param model_sim.bin
```
## References

- [PFLD: A Practical Facial Landmark Detector](https://arxiv.org/pdf/1902.10859.pdf)
- [ResNeSt: Split-Attention Networks](https://hangzhang.org/files/resnest.pdf)
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/pdf/1905.11946.pdf)
  - PyTorch: https://github.com/lukemelas/EfficientNet-PyTorch
  - TensorFlow: https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
  - Keras: https://github.com/qubvel/efficientnet
- TensorFlow implementation for 98 facial landmarks: https://github.com/guoqiangqi/PFLD