Facial landmark detection based on a convolutional neural network.
The model is built with TensorFlow, and the training code is provided so you can train your own model with your own dataset.
A sample GIF extracted from a video file showing the detection result.
This is the companion code for the deep learning tutorial here, which covers background, dataset, preprocessing, model architecture, training, and deployment. I tried my best to make it simple and easy for beginners to understand. Feel free to open an issue when you are stuck or have some wonderful ideas to share.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Just git clone this repo and you are good to go.
```bash
# From your favorite development directory
git clone https://github.com/yinguobing/cnn-facial-landmark.git
```
Before training starts, make sure the following requirements are met:
- Training and evaluation TFRecord files (see the parsing sketch after this list).
- A directory to store the checkpoint files.
- Hyperparameters such as training steps, batch size, and number of epochs.
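If you are not sure what your records contain, a quick way to check them is to parse a few examples with `tf.data`. The sketch below assumes a TensorFlow 2.x runtime; the feature keys (`image/encoded`, `label/points`) and the 68-point (136-value) label size are assumptions, so match them to whatever your record-generation script actually wrote.

```python
# Minimal sketch for inspecting a training TFRecord file.
# Feature keys and label size are assumptions; adjust to your records.
import tensorflow as tf

def parse_example(serialized):
    features = {
        "image/encoded": tf.io.FixedLenFeature([], tf.string),
        "label/points": tf.io.FixedLenFeature([136], tf.float32),
    }
    example = tf.io.parse_single_example(serialized, features)
    image = tf.image.decode_jpeg(example["image/encoded"], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image, example["label/points"]

dataset = (tf.data.TFRecordDataset("train.record")
           .map(parse_example)
           .shuffle(1024)
           .batch(32))

for images, points in dataset.take(1):
    print(images.shape, points.shape)
```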
The following command shows how to train the model for 500 steps and evaluate it after training.
```bash
# From the repo's root directory
python3 landmark.py \
    --train_record train.record \
    --val_record validation.record \
    --model_dir train \
    --train_steps 500 \
    --batch_size 32
```
TensorFlow's SavedModel format is recommended and is the default option. Use the `--export_dir` argument to set the directory where the model should be saved.
```bash
# From the repo's root directory
python3 landmark.py \
    --model_dir train \
    --export_dir saved_model \
    --export_only True
```
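Once exported, a quick sanity check is to load the SavedModel back and run a dummy input through it. The sketch below assumes a TensorFlow 2.x runtime; the 128x128 input size and the `serving_default` signature are assumptions, and you can inspect the real signatures with `saved_model_cli show --dir saved_model --all`.

```python
# Minimal sketch: load the exported SavedModel and run a dummy inference.
# The 128x128 input size and "serving_default" signature are assumptions.
import numpy as np
import tensorflow as tf

model = tf.saved_model.load("saved_model")
infer = model.signatures["serving_default"]

# The signature expects keyword arguments; look up the input name first.
input_name = list(infer.structured_input_signature[1].keys())[0]

# Dummy RGB image batch; replace with a real, preprocessed face crop.
image = np.zeros((1, 128, 128, 3), dtype=np.float32)
outputs = infer(**{input_name: tf.constant(image)})
print({name: tensor.shape for name, tensor in outputs.items()})
```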
These devices tend to have constrained resources, and TensorFlow Lite is best suited for this situation. However, this is beyond the scope of this project. Don't worry, though: you will find a more comprehensive project in the next section.
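Still, as a rough illustration only, converting the exported SavedModel to TensorFlow Lite with a TensorFlow 2.x runtime could look like the sketch below, where `saved_model` is the export directory from the previous step.

```python
# Rough sketch: convert the exported SavedModel to a TensorFlow Lite model.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
tflite_model = converter.convert()

with open("landmark.tflite", "wb") as f:
    f.write(tflite_model)
```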
Once you have accomplished all the applications above, it's a good time to move on to a more advanced repo with the following features:
- Support for multiple public datasets: WFLW, IBUG, etc.
- Advanced model architecture: HRNet v2
- Data augmentation: randomly scale/rotate/flip
- Model optimization: quantization, pruning
Watch this video demo: HRNet Facial Landmark Detection (bilibili)
And build a better one: https://github.com/yinguobing/facial-landmark-detection-hrnet
Yin Guobing (尹国冰) - yinguobing
- The TensorFlow team for their comprehensive tutorial.
- The iBUG team for their public dataset.
Keras is now the default way of building models.
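For readers new to the Keras style, a toy sketch of building a landmark regressor this way is shown below. It is purely illustrative; the layer sizes, the 128x128 input, and the 68-point output are assumptions, and the actual architecture is defined in this repo's code.

```python
# Toy Keras model for landmark regression, for illustration only.
# The 128x128 input and 68-point (136-value) output are assumptions.
from tensorflow import keras

def build_model(input_shape=(128, 128, 3), num_points=68):
    model = keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(num_points * 2),  # (x, y) for every landmark
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
```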
A new input function has been added so the exported model can take raw tensor input. Use the `--raw_input` argument in the export command. This is useful if you want to "freeze" the model later.
For those who are interested in inference with a frozen model on images, video, or a webcam, there is a lightweight module here: https://github.com/yinguobing/butterfly. Check it out.
Good news! The code has been updated. Issues #11, #13, #38, #45 and many others have been resolved. No more "key error x" in training, and exporting the model works fine now.
Thanks for your patience. I have managed to update the repo that is used to extract face annotations and generate TFRecord files. Some bugs have been fixed, and some minimal sample files have been added. Check it out here and here.
The training part (this repo) is about to be updated. I'm working on it.
This repository now has 199 GitHub stars, which is totally beyond my expectation. Whoever you are, wherever you are from, and whichever language you speak, I want to say "Thank you!" to all 199 of you GitHub friends for your interest.
Human facial landmark detection is easy to get started with, yet hard enough to demonstrate the power of deep neural networks, which is why I chose it for my learning project. Even though I tried my best to keep an exhaustive record, which turned into this repository and the companion tutorial, they are still sloppy and confusing in some parts.
The code was published a year ago, and a lot has changed during this time. TensorFlow 2.0 is coming, and the exported model does not seem to work in the latest release, tf1.13. I think it's better to bring this project up to date and keep it beneficial to the community.
I've got a full-time job that takes nearly 12 hours of my day (including commute time), but I will try my best to keep up the pace.
Feel free to open issues so that we can discuss in detail.