FaceScan

FaceScan is a Python application that detects face masks in webcams, images, video files, and online streams.

Primary language: Jupyter Notebook · License: Apache-2.0




✨ Demo

📹 Webcam

FaceScan is able to detect face masks in the video feed from a connected webcam in real time:

📁 Files

FaceScan is able to detect face masks from locally saved video and image files:

Before

After

📲 Online Streams

FaceScan is able to detect face masks from online streams such as RTSP, RTMP, and HTTP.

💻 Using the GUI

FaceScan also comes with a GUI that offers the same functionality as the command-line interface in a more accessible form.

⚙️ Installation

Clone the Git repository:

$ git clone https://github.com/DanielLechner/FaceScan

Change directory to FaceScan/app:

$ cd FaceScan/app

If you are using Windows, activate your Anaconda environment and install PyTorch and torchvision with this command:

$ conda install pytorch torchvision cpuonly -c pytorch-nightly -c defaults -c conda-forge

Install all dependencies from requirements.txt. We recommend using Python 3.8.5.

$ pip install -r requirements.txt

The environment should now be fully functional.

🚀 Usage

As mentioned above, there are many ways to use FaceScan. The --source argument accepts the following input types:

$ python detect.py --source 0  # webcam
                            file.jpg  # image 
                            file.mp4  # video
                            path/  # directory
                            path/*.jpg  # glob
                            rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                            rtmp://192.168.1.105/live/test  # rtmp stream
                            http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream

All outputs are saved to FaceScan/app/runs/detect.

Furthermore, you can use the following command-line flags:

--weights: weights of the trained model
--source: input file/folder to run inference on, 0 for webcam
--output: directory to save results
--iou-thres: IoU threshold for non-maximum suppression (NMS); defaults to 0.45
--conf-thres: object confidence threshold
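The flags above can be modeled with a standard argparse parser. This is a sketch of how the interface maps to argparse, not the real parser in detect.py; the defaults for --weights, --output, and --conf-thres are illustrative assumptions (only the 0.45 IoU default is stated above):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Sketch of FaceScan's CLI flags; defaults other than --iou-thres
    # are assumptions for illustration.
    p = argparse.ArgumentParser(description="FaceScan inference (sketch)")
    p.add_argument("--weights", type=str, default="best.pt",
                   help="weights of the trained model")
    p.add_argument("--source", type=str, default="0",
                   help="input file/folder to run inference on, 0 for webcam")
    p.add_argument("--output", type=str, default="runs/detect",
                   help="directory to save results")
    p.add_argument("--iou-thres", type=float, default=0.45,
                   help="IoU threshold for NMS")
    p.add_argument("--conf-thres", type=float, default=0.25,
                   help="object confidence threshold")
    return p

args = build_parser().parse_args(["--source", "file.mp4", "--conf-thres", "0.5"])
print(args.source, args.conf_thres, args.iou_thres)  # file.mp4 0.5 0.45
```

Note that argparse converts the dashes in --iou-thres and --conf-thres into underscores (args.iou_thres, args.conf_thres).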

🧠 Models

We decided to use the YOLOv5s model because it is the fastest variant. Speed matters here: a live feed must be processed quickly enough to avoid lag.
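To make the speed requirement concrete, here is the per-frame time budget implied by a live feed. The 30 FPS figure is an assumption (a typical webcam frame rate), used only to illustrate the arithmetic:

```python
# Illustrative arithmetic: why inference speed matters for live video.
fps = 30  # typical webcam frame rate (assumption)
budget_ms = 1000 / fps  # milliseconds available per frame
print(f"At {fps} FPS, each frame must be processed in under {budget_ms:.1f} ms to avoid lag.")
```

Any model whose per-frame inference time exceeds this budget will make the live feed fall behind, which is why the fastest YOLOv5 variant was chosen.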

Our weights achieve high precision, as shown in this graph, which plots the confidence the model gained over the course of training. The evaluation was done on the test dataset.

👨🏾‍💻👨🏻‍💻 Code Contributors

DanielLechner · gabcode1712

📝 License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.