By Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj and Le Song
The repository contains the entire pipeline (including all the preprocessing steps) for deep face recognition with SphereFace. The recognition pipeline contains three major steps: face detection, face alignment and face recognition.
SphereFace is a recently proposed face recognition method. It was initially described in an arXiv technical report and then published in CVPR 2017. To facilitate face recognition research, we give an example of training on CASIA-WebFace and testing on LFW using the 20-layer CNN architecture described in the paper (i.e. SphereFace-20).
SphereFace is released under the MIT License (refer to the LICENSE file for details).
If you find SphereFace useful in your research, please consider citing:
```
@inproceedings{liu2017sphereface,
  author    = {Weiyang Liu and Yandong Wen and Zhiding Yu and Ming Li and Bhiksha Raj and Le Song},
  title     = {SphereFace: Deep Hypersphere Embedding for Face Recognition},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2017}
}
```
Please click the image to watch the YouTube video. For Youku users, click here.
- July 20, 2017
- This repository was built.
- August 9, 2017
- Most of the bugs are fixed. The SphereFace-20 prototxt file (`$SPHEREFACE_ROOT/train/code/sphereface_model.prototxt`) is released. This architecture is exactly the same as the 20-layer CNN reported in the paper. A well-trained model with 99.30% accuracy on LFW is released.
- August 16, 2017
- A video demo is released.
- To be updated:
- Detected facial landmarks, training image list, training log and extracted features will be released soon.
- Backward gradient.
- In this implementation, we did not strictly follow the equations in the paper. Instead, we normalize the scale of the gradient to 1. It can be interpreted as a varying learning-rate strategy that helps convergence become more stable. A similar idea and intuition also appear in https://arxiv.org/pdf/1707.04822.pdf
- More specifically, if the original gradient of f w.r.t. x can be written as `df/dx = coeff_w * w + coeff_x * x`, we use the normalized version `[df/dx] = (coeff_w * w + coeff_x * x) / norm_wx` to perform backpropagation, where `norm_wx = sqrt(coeff_w^2 + coeff_x^2)`. The same operation is also applied to the gradient of f w.r.t. w.
- If you use the original gradient to do the backprop, you can still make it work, but you may need different lambda settings.
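To make the normalization above concrete, here is a minimal NumPy sketch (not the actual Caffe layer; `coeff_w`, `coeff_x`, `w`, and `x` follow the notation in the note above):

```python
import numpy as np

def normalized_backward(coeff_w, w, coeff_x, x):
    """Gradient of f w.r.t. x with its scale normalized to 1.

    The original gradient is df/dx = coeff_w * w + coeff_x * x; dividing
    by norm_wx = sqrt(coeff_w^2 + coeff_x^2) keeps the update direction
    but fixes the scale, which helps training converge more stably.
    """
    norm_wx = np.sqrt(coeff_w ** 2 + coeff_x ** 2)
    return (coeff_w * w + coeff_x * x) / norm_wx
```

The same function, applied with the roles of `w` and `x` swapped, gives the normalized gradient w.r.t. w.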
- Requirements for `Matlab`
- Requirements for `Caffe` and `matcaffe` (see: Caffe installation instructions)
- Requirements for `MTCNN` (see: MTCNN - face detection & alignment) and `Pdollar toolbox` (see: Piotr's Image & Video Matlab Toolbox)
- Clone the SphereFace repository. We'll call the directory that you cloned SphereFace as `SPHEREFACE_ROOT`.

  ```Shell
  git clone --recursive https://github.com/wy1iu/sphereface.git
  ```
- Build Caffe and matcaffe.

  ```Shell
  cd $SPHEREFACE_ROOT/tools/caffe-sphereface
  # Now follow the Caffe installation instructions here:
  # http://caffe.berkeleyvision.org/installation.html
  make all -j8 && make matcaffe
  ```

  After successfully completing the installation, you'll be ready to run all the following experiments.
Note 1: In this part, we assume you are in the directory `$SPHEREFACE_ROOT/preprocess/`.
- Download the training set (`CASIA-WebFace`) and test set (`LFW`) and place them in `data/`.

  ```Shell
  mv /your_path/CASIA_WebFace data/
  ./code/get_lfw.sh
  tar xvf data/lfw.tgz -C data/
  ```

  Please make sure that the directory `data/` contains the two datasets.

- Detect faces and facial landmarks in the CASIA-WebFace and LFW datasets using `MTCNN` (see: MTCNN - face detection & alignment).

  ```Matlab
  % In Matlab Command Window
  run code/face_detect_demo.m
  ```

  This will create a file `dataList.mat` in the directory `result/`.

- Align faces to a canonical pose using similarity transformation.

  ```Matlab
  % In Matlab Command Window
  run code/face_align_demo.m
  ```

  This will create two folders (`CASIA-WebFace-112X96/` and `lfw-112X96/`) in the directory `result/`, containing the aligned face images.
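As a rough illustration of the alignment step, a 2-D similarity transform (scale, rotation, translation) mapping the detected landmarks to a canonical template can be solved by least squares. This NumPy sketch is an assumption about the general approach, not the repository's actual `face_align_demo.m` code; `src` and `dst` are hypothetical landmark arrays:

```python
import numpy as np

def similarity_transform(src, dst):
    """Solve for a 2-D similarity transform mapping src points to dst.

    src, dst: (N, 2) arrays of corresponding landmark coordinates.
    Each correspondence gives two linear equations in (a, b, tx, ty):
        x' = a*x - b*y + tx
        y' = b*x + a*y + ty
    Returns a 2x3 matrix M so that dst ~= src @ M[:, :2].T + M[:, 2].
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    A[0::2, 0], A[0::2, 1], A[0::2, 2] = src[:, 0], -src[:, 1], 1
    A[1::2, 0], A[1::2, 1], A[1::2, 3] = src[:, 1], src[:, 0], 1
    b[0::2], b[1::2] = dst[:, 0], dst[:, 1]
    a, bb, tx, ty = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.array([[a, -bb, tx], [bb, a, ty]])
```

The resulting matrix would then be used to warp the face image into the canonical 112x96 crop.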
Note 2: In this part, we assume you are in the directory `$SPHEREFACE_ROOT/train/`.
- Get a list of training images and labels.

  ```Shell
  mv ../preprocess/result/CASIA-WebFace-112X96 data/
  ```

  ```Matlab
  % In Matlab Command Window
  run code/get_list.m
  ```

  The aligned face images in the folder `CASIA-WebFace-112X96/` are moved from the preprocess folder to the train folder. A list `CASIA-WebFace-112X96.txt` is created in the directory `data/` for the subsequent training.

- Train the sphereface model.

  ```Shell
  ./code/sphereface/sphereface_train.sh 0,1
  ```

  After training, a model `sphereface_model_iter_28000.caffemodel` and a corresponding log file `sphereface_train.log` are placed in the directory `result/sphereface/`.
Note 3: In this part, we assume you are in the directory `$SPHEREFACE_ROOT/test/`.
- Get the pair list of LFW (view 2).

  ```Shell
  mv ../preprocess/result/lfw-112X96 data/
  ./code/get_pairs.sh
  ```

  Make sure that the LFW dataset and `pairs.txt` are in the directory `data/`.

- Extract deep features and test on LFW.

  ```Matlab
  % In Matlab Command Window
  run code/evaluation.m
  ```

  Finally we have the `sphereface_model.caffemodel`, the extracted features `pairs.mat` in the folder `result/`, and the accuracy on LFW:

  | fold | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | AVE |
  |------|---|---|---|---|---|---|---|---|---|----|-----|
  | ACC | 99.33% | 99.17% | 98.83% | 99.50% | 99.17% | 99.83% | 99.17% | 98.83% | 99.83% | 99.33% | 99.30% |
- Visualizations of network architecture (tools from ethereon):
- SphereFace-20: link
- Model file
- SphereFace-20: Google Drive | Baidu
- Following the instructions, we went through the entire pipeline 5 times. The accuracies on LFW are shown below. Generally, we report the average, but we release the model with the best accuracy here.

  | Experiment | #1 | #2 | #3 (released) | #4 | #5 |
  |------------|----|----|---------------|----|----|
  | ACC | 99.24% | 99.20% | 99.30% | 99.27% | 99.13% |
Questions can also be left as issues in the repository. We will be happy to answer them.