This project is an implementation of this paper, with the help of this repo, which proposes a solution that combines speed and accuracy for the state-of-the-art problem of face detection.
It is part of the Smart Exam Website project, where it detects the faces of the individuals present in front of the camera and returns the number of faces detected, indicating whether someone is beside the student helping them cheat in the exam.
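The cheating check described above reduces to a simple rule on the face count. The helper below is an illustrative sketch of that rule, not the project's actual `api.py` logic:

```python
# Illustrative sketch (not the project's actual api.py code): decide whether
# an exam frame looks suspicious from the number of detected faces.

def is_suspicious(num_faces):
    """More than one face means someone may be helping the student cheat;
    zero faces means the student has left the camera's view."""
    return num_faces != 1
```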
Python 3.9.7
TensorFlow 2.8.0 (but the part that saves the model and creates the pb file needs TensorFlow 1.12)
Pandas
NumPy
CV2
matplotlib
PIL
tqdm
flask
flask_restful
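For convenience, the dependencies above could be collected in a `requirements.txt`. This is a sketch; only the Python and TensorFlow versions come from the list above, the PyPI package names and unpinned versions are assumptions:

```
tensorflow==2.8.0
pandas
numpy
opencv-python
matplotlib
Pillow
tqdm
flask
flask-restful
```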
Note that some files are not uploaded to the repo but will be generated when you run the code.
./
├── Datasets/
├── Testing/
├── Training/
├── api.py
├── face_detector.py
└── try_detector.ipynb
./Testing/
├── face_detector.py
├── model.pb
├── predict_for_FDDB.ipynb
├── visulaize_original_annotations.ipynb
├── prepare_data/
│   └── explore_and_convert_FDDB.ipynb
├── eval_result/
│   ├── create_roc_files.py
│   ├── plot_roc.ipynb
│   └── FDDB-result/
│       ├── ContROC.txt
│       └── DiscROC.txt
└── fddb/
    ├── fddb_images/
    ├── fddb_folds/ :Contains files named FDDB-fold-xx.txt and FDDB-fold-xx-ellipseList.txt, where xx = {01, 02, ..., 10} is the fold index.
    │               Each line in FDDB-fold-xx.txt specifies a path to an image in the dataset; the corresponding annotations are in FDDB-fold-xx-ellipseList.txt
    ├── result/
    │   ├── detections.txt
    │   ├── ellipseList.txt
    │   ├── faceList.txt
    │   ├── ann_images/ :Contains the FDDB images with the detected bounding boxes
    │   └── images/
    └── val/
        ├── annotations/ :Contains the ground-truth annotation files
        ├── original_ann_images/ :Contains the FDDB images with the ground-truth boxes
        └── images/
./Training/
├── config.json
├── create_tfrecords.py
├── evaluation_utils.py
├── model.py
├── test_input_pipeline.ipynb
├── train.py
├── train_model.ipynb
├── prepare_data/
│   ├── explore_and_prepare_WIDER.ipynb
│   └── explore_and_prepare_MAFA.ipynb
├── mafa/
│   ├── result/
│   ├── test/
│   │   └── annotations/ :Contains the annotations of each test image in JSON files
│   ├── train/
│   │   └── annotations/ :Contains the annotations of each train image in JSON files
│   ├── train_shards/ :Contains the tfrecords of each training shard
│   └── val_shards/ :Contains the tfrecords of each validation shard
├── models/
│   └── run02/ :Contains the training checkpoints
├── save_&_create_pb/
│   ├── create_pb.py
│   ├── evaluation_utils.py
│   ├── face_detector.py
│   ├── model.py
│   ├── save.py
│   ├── export/
│   │   └── run02/ :Contains the exported saved model
│   └── src/
│       ├── __init__.py
│       ├── anchor_generator.py
│       ├── constants.py
│       ├── detector.py
│       ├── losses_and_ohem.py
│       ├── network.py
│       ├── training_target_creation.py
│       ├── input_pipeline/
│       │   ├── __init__.py
│       │   ├── other_augmentations.py
│       │   ├── pipeline.py
│       │   └── random_image_crop.py
│       └── utils/
│           ├── __init__.py
│           ├── box_utils.py
│           └── nms.py
└── src/
    ├── __init__.py
    ├── anchor_generator.py
    ├── constants.py
    ├── detector.py
    ├── losses_and_ohem.py
    ├── network.py
    ├── training_target_creation.py
    ├── input_pipeline/
    │   ├── __init__.py
    │   ├── other_augmentations.py
    │   ├── pipeline.py
    │   └── random_image_crop.py
    └── utils/
        ├── __init__.py
        ├── box_utils.py
        └── nms.py
To use the pre-trained model, download the frozen graph file (`model.pb`) from here and run `api.py` (which depends on `face_detector.py`), or use the `try_detector.ipynb` notebook.
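Inference with the frozen graph might look like the sketch below. The `FaceDetector` call signature and the `model.pb` / image paths are assumptions based on `face_detector.py`; only the pure post-processing helper is guaranteed to run anywhere:

```python
# Sketch of running the pre-trained detector, assuming the FaceDetector
# interface in face_detector.py (a wrapper around the frozen graph).

def filter_detections(boxes, scores, score_threshold=0.5):
    """Keep only the boxes whose confidence meets the threshold."""
    keep = [i for i, s in enumerate(scores) if s >= score_threshold]
    return [boxes[i] for i in keep], [scores[i] for i in keep]

def run_demo():
    # These imports need the environment described in the requirements section.
    import cv2
    from face_detector import FaceDetector  # hypothetical usage of the repo's wrapper

    detector = FaceDetector('model.pb')  # path to the downloaded frozen graph
    image = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2RGB)
    boxes, scores = detector(image, score_threshold=0.3)
    boxes, scores = filter_detections(boxes, scores, score_threshold=0.5)
    print(f'{len(boxes)} face(s) detected')

# run_demo()  # requires model.pb, an input image, and the TF environment above
```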
To evaluate the model using the FDDB dataset, go into the `Testing` directory and:

- Download the FDDB files from here into the `fddb` folder
- Put the `model.pb` file in the `Testing` directory
- Run `explore_and_convert_FDDB.ipynb` to prepare the dataset for evaluation
- Run `predict_for_FDDB.ipynb` to get the detections
- Go into `eval_result` and run `create_roc_files.py` to produce the discrete ROC and continuous ROC files using this command: `python create_roc_files.py ../fddb/result/detections.txt FDDB-result/`
- To visualize the FDDB annotations on the images, run `visulaize_original_annotations.ipynb`
I tried training the model on the MAFA dataset using Google Colab.

To train the model you need to:

1. Upload `explore_and_prepare_MAFA.ipynb` and `train_model.ipynb` as new Colab notebooks
2. Upload the rest of the files in the `Training` directory to your Google Drive
3. Run `explore_and_prepare_MAFA.ipynb` to prepare the dataset for training
4. Run `train_model.ipynb` to train the model on the prepared data, noting that you have to continue training from the last checkpoint `run00` found here
5. To export the training result into a `.pb` file, you will need to run the following files, which are in the `save_&_create_pb` directory, locally using TensorFlow 1.12:
   1. Run `save.py` to export a saved model
   2. Run `create_pb.py` using this command: `python create_pb.py --saved_model_folder="export/run02/__some_tmp_num__" --output_pb="model_2.pb"`
6. Finally, use the created `model_2.pb` for evaluation and inference
More details about the file dependencies and quick notes about each file can be found here.
This is the discrete ROC curve, in which the true positive rate at 1000 false positives is 0.902.
This project is inspired by this repo