WearMask: Real-time In-browser Face Mask Detection

[Image: products.jpg]

Requirements

Please use Python 3.8 with all dependencies from requirements.txt installed, including torch>=1.6. Do not use Python 3.9.

$ pip install -r requirements.txt
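
You can verify the environment with a quick check (a minimal sketch, not part of the repository):

import sys
import torch

# The README pins Python 3.8 and torch>=1.6; Python 3.9 is unsupported
assert sys.version_info[:2] == (3, 8), "use Python 3.8"
major, minor = (int(v) for v in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 6), "torch>=1.6 is required"
print("environment looks OK")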

Modeling

The data is saved in ./modeling/data/. If you add extra images and annotations, re-run the code in 10-preparation-process.ipynb to regenerate the training and test sets.
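
The notebook contains the authoritative preparation code; its core is a Darknet-style train/test split along these lines (a hypothetical sketch — the image folder and the list filenames referenced by data/face_mask.data are assumptions):

import random
from pathlib import Path

# Gather all annotated images and split them ~90/10 into train and test sets
images = sorted(Path("modeling/data/images").glob("*.jpg"))  # folder name is an assumption
random.seed(0)
random.shuffle(images)
split = int(0.9 * len(images))

# Darknet-style .data files point at plain-text lists of image paths
Path("modeling/data/train.txt").write_text("\n".join(map(str, images[:split])))
Path("modeling/data/test.txt").write_text("\n".join(map(str, images[split:])))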

The following steps work on Google Colab.

1. Training

Run the following command to train the model from the pretrained COCO weights yolo-fastest.weights.

$ python3 train.py --cfg yolo-fastest.cfg --data data/face_mask.data --weights weights/yolo-fastest.weights --epochs 120

Training takes several hours. When it finishes, you can run from utils import utils; utils.plot_results() to plot the training curves.
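
For example (run from the repository root, assuming train.py has written results.txt to the working directory):

# Plot the loss and metric curves recorded during training
from utils import utils

utils.plot_results()  # reads results.txt and saves the plots (e.g. results.png)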

After training, you get the model weights best.pt, with the network structure defined in yolo-fastest.cfg. You can also use the following command to export the weights as best.weights in Darknet format.

$ python3 -c "from models import *; convert('cfg/yolo-fastest.cfg', 'weights/best.pt')"

2. Inference

With the trained model, inference can be run directly as python3 detect.py --source ... For instance, to use your webcam, run python3 detect.py --source 0.
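
If the webcam source fails, a quick OpenCV check (a hypothetical snippet, assuming opencv-python is installed via requirements.txt) can confirm that device 0 is readable:

import cv2

# Open the default webcam (device 0, the same index passed to detect.py)
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
print("webcam readable:", ok)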

Here are some example cases:

[Image: examples]

Hint: if you want to convert the model to ONNX format (not required for the deployment below), see 20-PyTorch2ONNX.ipynb.
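
The notebook has the details; the heart of such a conversion is typically a torch.onnx.export call along these lines (a rough sketch, not the notebook's exact code — loading via the repo's models.Darknet and the 320x320 input size are assumptions):

import torch
from models import Darknet  # model definition from this repository

# Rebuild the network from its cfg and load the trained checkpoint
model = Darknet("cfg/yolo-fastest.cfg")
model.load_state_dict(torch.load("weights/best.pt", map_location="cpu")["model"])
model.eval()

# Export with a fixed input size; adjust to the size used for training
dummy = torch.zeros(1, 3, 320, 320)
torch.onnx.export(model, dummy, "yolo-fastest.onnx", opset_version=11,
                  input_names=["images"], output_names=["output"])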

Deployment

The deployment is based on NCNN and WASM (WebAssembly).

1. PyTorch to NCNN

First, you need to compile the NCNN library; see Tutorial for compiling NCNN library for details.

Once NCNN is compiled, you can use the tools in the ncnn/build/tools folder to convert the model.

For example, copy the Darknet model files yolo-fastest.cfg and best.weights to ncnn/build/tools/darknet, then run this command to convert them to an NCNN model; the trailing 1 merges the detection output layers.

./darknet2ncnn yolo-fastest.cfg best.weights yolo-fastest.param yolo-fastest.bin 1

To shrink the model, move yolo-fastest.param and yolo-fastest.bin to ncnn/build/tools, then run the ncnnoptimize program; the final argument 65536 stores the weights in FP16, halving the file size.

ncnnoptimize yolo-fastest.param yolo-fastest.bin yolo-fastest-opt.param yolo-fastest-opt.bin 65536 

2. NCNN to WASM

Now you have yolo-fastest-opt.param and yolo-fastest-opt.bin as the final model. To make it work as WASM, you need to re-compile the NCNN library for WASM; see Tutorial for compiling NCNN with WASM.

Then you need to write a C++ program that takes image data as input, runs the NCNN model, and returns the model output. The C++ code I used has been uploaded to the facemask-detection repository.
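
Before moving to C++/WASM, you can sanity-check the converted model with NCNN's Python bindings (pip install ncnn). This is a hypothetical sketch, not the repository's code; the blob names "data" and "output" and the 320x320 input size are assumptions that should be read from yolo-fastest-opt.param:

import ncnn
import numpy as np

net = ncnn.Net()
net.load_param("yolo-fastest-opt.param")
net.load_model("yolo-fastest-opt.bin")

# A random BGR frame standing in for real camera data
img = np.random.randint(0, 255, (320, 320, 3), dtype=np.uint8)
mat = ncnn.Mat.from_pixels(img, ncnn.Mat.PixelType.PIXEL_BGR, 320, 320)

ex = net.create_extractor()
ex.input("data", mat)            # input blob name: check the .param file
ret, out = ex.extract("output")  # detections as an ncnn.Mat
print(ret, np.array(out).shape)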

Compile the C++ code with emcmake cmake and emmake make to obtain yolo.js, yolo.wasm, yolo.worker.js, and yolo.data. These files are the model in WASM format.

3. Build webpage

After building the webpage, you can test it locally in the facemask-detection repository with the following steps:

  1. Start an HTTP server: python3 -m http.server 8888
  2. Launch Google Chrome, open chrome://flags, and enable all experimental WebAssembly features.
  3. Re-launch Chrome, open http://127.0.0.1:8888/test.html, and test on a single frame.
  4. Re-launch Chrome, open http://127.0.0.1:8888/index.html, and test with your webcam.

To publish the webpage, you can use GitHub Pages as a free server. For more details, visit https://pages.github.com/.

Acknowledgement

The modeling code is adapted from Ultralytics. The model is modified from the Yolo-Fastest model shared by dog-qiuqiu. Thanks to nihui, the author of NCNN, for her help with the NCNN and WASM approach.