This repository builds extra methods on top of the ultralytics YOLOv5 repository. Extra features are:
- Lightweight, robust custom object tracker
- Keypoint finder and feature matcher using optical flow
- Variance of the image Laplacian to estimate image blur (to filter out possible false detections)
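The blur check in the last bullet is a standard technique: convolve the grayscale frame with a Laplacian kernel and threshold the variance of the response. The repository most likely uses OpenCV's `cv2.Laplacian` for this; the sketch below is a dependency-light NumPy version, and the threshold value is an illustrative assumption, not taken from this repo:

```python
import numpy as np

# Standard 3x3 Laplacian kernel (second-derivative edge detector).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_variance(gray):
    """Variance of the Laplacian response over a 2-D grayscale image.

    Sharp frames have strong edges, hence high variance; blurry
    frames have weak edges, hence low variance.
    """
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    # "Valid" convolution written as a shifted sum (kernel is symmetric,
    # so correlation and convolution coincide).
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return out.var()

def is_blurry(gray, threshold=100.0):
    # threshold=100.0 is an assumed example value; the repo's cutoff may differ.
    return laplacian_variance(gray) < threshold
```

A flat image scores 0 (maximally "blurry"), while a sharp checkerboard scores very high, so frames below the cutoff can be skipped before detection.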
The whole pipeline was run on 100 randomly selected videos from Berlin. Each video comes with a JSON file that indicates the individual car locations (private data). The model output was compared with this ground truth; the tracker achieved an accuracy of 89.5% in the test.
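The tracker itself is custom and not documented here, but light trackers of this kind typically associate existing tracks with new detections frame-to-frame via bounding-box IoU. A minimal greedy-association sketch (function names, box format `(x1, y1, x2, y2)`, and the 0.3 threshold are all assumptions, not taken from this repo):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_tracks(tracks, detections, iou_thresh=0.3):
    """Greedily match each track to its best-overlapping unclaimed detection.

    Returns (matches, unmatched_detection_indices); unmatched detections
    would normally spawn new tracks.
    """
    matches, unmatched = [], set(range(len(detections)))
    for t_idx, t_box in enumerate(tracks):
        best, best_iou = None, iou_thresh
        for d_idx in unmatched:
            score = iou(t_box, detections[d_idx])
            if score > best_iou:
                best, best_iou = d_idx, score
        if best is not None:
            matches.append((t_idx, best))
            unmatched.discard(best)
    return matches, sorted(unmatched)
```

The repo's tracker also combines this with the optical-flow feature matching listed above, which greedy IoU alone does not capture.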
To install the pipeline, run:
$ pip install -r requirements.txt
To run inference on the test video, simply run:
$ python analysis/main.py
To run inference on a specific video, run:
$ python analysis/main.py --source /path/to/video.mp4
By default the script saves the video output; if you do not want to save it, set the video_output argument to False:
$ python analysis/main.py --source /path/to/video.mp4 --video_output False
For faster inference, reduce the image size:
$ python analysis/main.py --source /path/to/video.mp4 --img_size 320
or
$ python analysis/main.py --source /path/to/video.mp4 --img_size 160
- A car plate number detection & anonymisation script has been added.
- Face detection, pistol detection, and combined human face & car plate detection models have been trained and uploaded to Google Cloud.
To use any of these models, first download the trained model, then run:
$ python analysis/main.py --source /path/to/video.mp4 --weights path/to/downloaded/model.pt
or, if you want to anonymize the detected objects:
$ python analysis/anonymization.py --source /path/to/video.mp4 --weights path/to/downloaded/model.pt
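Conceptually, the anonymization step replaces each detected face or plate box with unrecognizable pixels. A minimal pixelation sketch in NumPy (the actual `anonymization.py` may use Gaussian blurring instead; the box format `(x1, y1, x2, y2)` and block size are assumptions):

```python
import numpy as np

def anonymize_boxes(frame, boxes, block=8):
    """Pixelate each (x1, y1, x2, y2) region of `frame`.

    Every `block`-sized patch inside a box is replaced by its mean
    color, destroying fine detail while keeping coarse appearance.
    Works for grayscale (H, W) or color (H, W, 3) arrays.
    """
    out = frame.copy()
    for x1, y1, x2, y2 in boxes:
        region = out[y1:y2, x1:x2]  # view into the copy
        h, w = region.shape[:2]
        for by in range(0, h, block):
            for bx in range(0, w, block):
                patch = region[by:by + block, bx:bx + block]
                # Average over the spatial axes; broadcasts per channel.
                patch[...] = patch.mean(axis=(0, 1))
    return out
```

Running this on every detection box from the plate/face model before writing the video frame out yields the anonymized output.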
Models were trained on an NVIDIA Tesla V100 GPU for 100 epochs on the private datasets.
Model | Size (pixels) | AP50 (%) |
---|---|---|
YOLOv5s_carPlate | 640 | 92.3 |
YOLOv5x_carPlate | 640 | 97.5 |
YOLOv5s_pistol | 640 | 95.4 |
YOLOv5s_face | 640 | 98.2 |
YOLOv5s_faceAndCarPlate | 640 | 98.2 |
- YOLOv5s_carPlate in real-time: