A PyTorch implementation of RetinaFace: Single-stage Dense Face Localisation in the Wild. The model size is only 1.7M when RetinaFace uses MobileNet0.25 as the backbone network. We also provide ResNet50 as a backbone to get better results. The official MXNet code can be found here.
We also provide a set of face detectors for edge devices here, covering Python training through C++ inference.
WiderFace val performance (single scale, ResNet50 backbone):

Style | easy | medium | hard |
---|---|---|---|
Pytorch (same parameter with Mxnet) | 94.82% | 93.84% | 89.60% |
Pytorch (original image scale) | 95.48% | 94.04% | 84.43% |
Mxnet | 94.86% | 93.87% | 88.33% |
Mxnet (original image scale) | 94.97% | 93.89% | 82.27% |
WiderFace val performance (single scale, MobileNet0.25 backbone):

Style | easy | medium | hard |
---|---|---|---|
Pytorch (same parameter with Mxnet) | 88.67% | 87.09% | 80.99% |
Pytorch (original image scale) | 90.70% | 88.16% | 73.82% |
Mxnet | 88.72% | 86.97% | 79.19% |
Mxnet (original image scale) | 89.58% | 87.11% | 69.12% |
FDDB (PyTorch) | performance |
---|---|
Mobilenet0.25 | 98.64% |
Resnet50 | 99.22% |
- PyTorch 1.1.0+ and torchvision 0.3.0+ are required.
- The code is based on Python 3.
- Download the WIDERFACE dataset.
- Download annotations (face bounding boxes & five facial landmarks) from baidu cloud or dropbox.
- Organise the dataset directory as follows:
```
./data/widerface/
  train/
    images/
    label.txt
  val/
    images/
    wider_val.txt
```
Note: wider_val.txt only includes the val file names, not label information.
We also provide the organized dataset we used as in the above directory structure.
Link: google cloud or baidu cloud (password: ruck).
We provide ResNet50 and MobileNet0.25 as backbone networks to train the model. We trained MobileNet0.25 on the ImageNet dataset, reaching 46.58% top-1 accuracy. If you do not wish to train the model yourself, we also provide trained models. The pretrained and trained models are available on google cloud and baidu cloud (password: fstq). Organise the models as follows:
```
./weights/
  mobilenet0.25_Final.pth
  mobilenetV1X0.25_pretrain.tar
  Resnet50_Final.pth
```
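Checkpoints saved from multi-GPU training with `nn.DataParallel` prefix every key with `module.`, which must be stripped before loading into a plain model. A minimal hedged sketch (the helper name is ours; the checkpoint file name follows the layout above):

```python
from collections import OrderedDict

def strip_dataparallel_prefix(state_dict):
    """Remove the 'module.' prefix that nn.DataParallel adds to keys,
    so a multi-GPU checkpoint loads into a single-GPU model."""
    return OrderedDict(
        (k[len("module."):] if k.startswith("module.") else k, v)
        for k, v in state_dict.items()
    )

# Usage sketch (model construction omitted; see this repo's models/):
# state_dict = torch.load("./weights/mobilenet0.25_Final.pth", map_location="cpu")
# model.load_state_dict(strip_dataparallel_prefix(state_dict))
```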
- Before training, you can check the network configuration (e.g. batch_size, min_sizes, steps, etc.) in data/config.py and train.py.
- Train the model using WIDER FACE:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py --network resnet50
# or
CUDA_VISIBLE_DEVICES=0 python train.py --network mobile0.25
```
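The configuration mentioned above lives in data/config.py; a hedged sketch of the kind of fields a backbone config contains (names and values here are illustrative assumptions, not copied from the file):

```python
# Illustrative backbone configuration, roughly in the style of
# data/config.py; check the repo for the actual fields and values.
cfg_mnet = {
    "name": "mobilenet0.25",
    "min_sizes": [[16, 32], [64, 128], [256, 512]],  # anchor sizes per feature level
    "steps": [8, 16, 32],   # feature-map strides matching the three levels
    "batch_size": 32,
    "epoch": 250,
}
```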
- Generate the txt files:
```shell
python test_widerface.py --trained_model weight_file --network mobile0.25
```
(use `--network resnet50` for the ResNet50 model)
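The generated txt files follow the WIDER FACE evaluation format: the image name, the number of detections, then one `x y w h score` line per box. A minimal sketch of such a writer (the helper name and exact path conventions are our assumptions; see test_widerface.py for the real output logic):

```python
def write_widerface_txt(path, image_name, boxes):
    """Write one image's detections in WIDER FACE eval format.
    boxes: iterable of (x, y, w, h, score) tuples."""
    with open(path, "w") as f:
        f.write(image_name + "\n")
        f.write(str(len(boxes)) + "\n")
        for x, y, w, h, score in boxes:
            f.write(f"{x} {y} {w} {h} {score}\n")
```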
- Evaluate the txt results. The demo comes from Here.
```shell
cd ./widerface_evaluate
python setup.py build_ext --inplace
python evaluation.py
```
- You can also use the official widerface Matlab evaluation demo from Here.
- Download the FDDB images to:
```
./data/FDDB/images/
```
- Evaluate the trained model using:
```shell
python test_fddb.py --trained_model weight_file --network mobile0.25
```
(use `--network resnet50` for the ResNet50 model)
- Download eval_tool to evaluate the performance.
```
@inproceedings{deng2019retinaface,
  title={RetinaFace: Single-stage Dense Face Localisation in the Wild},
  author={Deng, Jiankang and Guo, Jia and Zhou, Yuxiang and Yu, Jinke and Kotsia, Irene and Zafeiriou, Stefanos},
  booktitle={arxiv},
  year={2019}
}
```