This project is a face mask detection tool built with YOLOv7, a state-of-the-art object detection model, trained on a custom dataset; the model architecture is described in the YOLOv7 paper. The workflow mainly relies on Google Colab's free GPU to train on the full dataset, although changing the detection source to a webcam is disabled due to conflicts with Colab.
The figure shown above compares earlier YOLO models with YOLOv7 in terms of inference time (x-axis) and accuracy (y-axis).
The dataset used in the project is available on Kaggle. It contains 853 images belonging to 3 classes (with mask, without mask, and mask worn incorrectly), along with their annotation files in PASCAL VOC format, which can be converted to YOLO format.
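Since the annotations ship as PASCAL VOC XML, they need to be converted into YOLO's normalized `class x_center y_center width height` text files before training. The snippet below is a minimal conversion sketch; the `annotations/` and `labels/` folder names are assumptions based on how the Kaggle dataset is typically laid out, so adjust the paths to your setup. The class order must match the `names` list in `masks.yaml`.

```python
# Sketch: convert PASCAL VOC XML annotations to YOLO-format .txt labels.
# Folder names ('annotations', 'labels') are assumptions for illustration.
import os
import xml.etree.ElementTree as ET

# Must match the order of `names` in masks.yaml
CLASSES = ['with_mask', 'without_mask', 'mask_weared_incorrect']

def voc_to_yolo(xml_path, out_dir):
    """Convert one VOC annotation file to a YOLO label file."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find('size/width').text)
    img_h = float(root.find('size/height').text)

    lines = []
    for obj in root.findall('object'):
        cls_id = CLASSES.index(obj.find('name').text)
        box = obj.find('bndbox')
        xmin = float(box.find('xmin').text)
        ymin = float(box.find('ymin').text)
        xmax = float(box.find('xmax').text)
        ymax = float(box.find('ymax').text)
        # YOLO expects normalized box center and size
        x_c = (xmin + xmax) / 2 / img_w
        y_c = (ymin + ymax) / 2 / img_h
        w = (xmax - xmin) / img_w
        h = (ymax - ymin) / img_h
        lines.append(f"{cls_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")

    name = os.path.splitext(os.path.basename(xml_path))[0]
    with open(os.path.join(out_dir, name + '.txt'), 'w') as f:
        f.write('\n'.join(lines))

if __name__ == '__main__':
    os.makedirs('labels', exist_ok=True)
    for xml_file in os.listdir('annotations'):
        if xml_file.endswith('.xml'):
            voc_to_yolo(os.path.join('annotations', xml_file), 'labels')
```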
Found in the repository is a `.ipynb` file containing the set of instructions needed to run the face mask detection model.
You can clone the repository to your local machine:
git clone https://github.com/<username>/facemask-detection.git
- Content found in `masks.yaml`, placed inside the `/data` directory (a sketch for populating the `train`/`val`/`test` folders it references follows the block):
train: ./train
val: ./val
test: ./test
# Classes
nc: 3 # number of classes
names: ['with_mask', 'without_mask', 'mask_weared_incorrect']
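The `train`, `val`, and `test` entries above point to relative folders that have to be populated from the Kaggle dataset. Below is a minimal split sketch, assuming the images and the converted YOLO labels sit in `images/` and `labels/` folders and that an 80/10/10 split is acceptable; the folder names, ratios, and the choice to keep each image and its label file side by side are assumptions, not requirements of the project.

```python
# Sketch: split images and their YOLO labels into the train/val/test
# folders referenced by masks.yaml. Folder names and ratios are assumptions.
import os
import random
import shutil

random.seed(0)  # reproducible split

images = sorted(f for f in os.listdir('images') if f.endswith(('.png', '.jpg')))
random.shuffle(images)

n = len(images)
splits = {
    'train': images[:int(0.8 * n)],
    'val': images[int(0.8 * n):int(0.9 * n)],
    'test': images[int(0.9 * n):],
}

for split, files in splits.items():
    os.makedirs(split, exist_ok=True)
    for img in files:
        label = os.path.splitext(img)[0] + '.txt'
        shutil.copy(os.path.join('images', img), os.path.join(split, img))
        if os.path.exists(os.path.join('labels', label)):
            shutil.copy(os.path.join('labels', label), os.path.join(split, label))
```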
- Content found in `yolov7-masks.yaml`, copied from the `yolov7.yaml` found in `yolov7/cfg/training`; the only changed value is the number of classes to be detected:
# parameters
nc: 3 # number of classes
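With both configuration files in place, training in the notebook comes down to calling YOLOv7's `train.py` with them. The command below is a sketch modeled on the standard YOLOv7 training invocation; the image size, batch size, epoch count, and pretrained `yolov7.pt` weights are illustrative values rather than settings fixed by this project, and the path to `yolov7-masks.yaml` assumes it was saved back into `cfg/training`.

```bash
python train.py --data data/masks.yaml --cfg cfg/training/yolov7-masks.yaml \
  --weights yolov7.pt --img 640 640 --batch-size 16 --epochs 100 --name yolov7-masks
```

In Google Colab the same command is run from a notebook cell prefixed with `!`.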