Computer vision examples in python using OpenCV and other libraries.
At the moment, there are working examples only for face detection and face recognition, built with libraries such as OpenCV and Dlib in Python. Object detection will be added later.
Make sure you have Python 3 installed.
- Clone the repo and `cd` to the project root directory. We will run all commands from the root directory.
- To install dependencies, run `make install`.
- Run `make help` to see the list of available commands for running the face detection and recognition algorithms on your images.
In face detection, we are only interested in identifying which parts of an image are human faces.
Libraries like OpenCV and Dlib provide open-source classifiers which can be used with minimal configuration. That said, we may need to tweak the configs to increase or decrease the sensitivity and filter out false positives. No model training is needed for this.
Existing classifiers (HaarCascade, Dlib) give us the coordinates of a bounding box for each face, which we then draw on the image.
- Simple, fast, and good for smaller devices.
- Less accurate than other models like Dlib, so it needs a bit of fine-tuning for correct results.
- The most commonly used classifiers (face, eyes, cats, license plates) are offered by OpenCV.
- A good candidate for the Raspberry Pi as well.
Commands:
- Run detection on the default test image: `make haar-detect`
- Test on your own image: `make haar-detect TEST_IMAGE="dataset/test-samples/friends.jpg"`
- Run live detection from the webcam: `make live-detect`
Dlib:
- Face detection using Dlib (HOG + Linear SVM). Higher accuracy than HaarCascade, yet still faster than MMOD CNN: `make hog-detect`
- Face detection using Dlib (MMOD CNN). Higher accuracy than Dlib HOG, but it needs more compute and runs slower, especially on smaller machines: `make cnn-detect`
- Facial landmark detection. Detects facial features like eyes, eyebrows, nose, mouth, and lips: `make live-facial-landmarks`
Face recognition has more steps than detection: we first need to detect faces in the images and label them, so that we can later identify them in a test image that was not included in the training dataset.
At a high level, these are the steps in face recognition using OpenCV:
- We extract faces using the face detection classifiers.
- We train a recognizer on the facial data and labels (e.g. each person's name). What we get is a model which can be saved as a yml file.
- We then use OpenCV or other face recognizers to predict the label for a given input (facial image data).
- We get a label and a confidence score as the result.
- Using the given classifier, we create a model by training it on our images (faces).
- One image should contain one face.
- The training dataset should contain variation in lighting, angles, and backgrounds for better results.
We need sample images to train on, for both positive and negative cases; these live inside the dataset/faces folder.
For each person there is a unique folder where that person's images are kept. The folder name is important here, as we use it as the label for a face when it is recognized.
```
dataset
└── faces
    ├── Chandler
    │   ├── 1.png
    │   ├── 2.png
    │   ├── ...
    │   └── 50.png
    ├── Joey
    │   ├── 1.png
    │   ├── 2.png
    │   ├── ...
    │   └── 50.png
    └── Unknown
        ├── 1.png
        ├── 2.png
        ├── ...
        └── 50.png
```
A single training image should contain only one face.
Run this command, which applies the HaarCascade classifier to our image folders, creates a model, and saves it to a yml file:
make haar-train
```
# run recognition on the default image
make haar-recognize

# run recognition on your own image
make haar-recognize TEST_IMAGE="dataset/test-samples/friends.png"
```
It is not 100% accurate; tweak the configs and use a good training dataset for better results.