Dev logs
[19/12/2021] Update to new YOLOv5 version 6. Can load checkpoints from the original repo now.
[16/07/2021] BIG REFACTOR: code is cleaned up and working fine now, promise.
[27/09/2021] All trained checkpoints on AIC-HCMC-2020 have been lost. Pretrained COCO models are now used for inference.
- Use YOLOv5 for the vehicle detection task; only objects inside the Region of Interest (ROI) are considered.
- Use DeepSORT for vehicle tracking; this model is not retrained, it is used for inference only.
- Use cosine similarity to assign each object's track to the most similar annotated direction (see the sketch below).
- Count each type of vehicle in each direction.
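The direction-assignment step can be illustrated with a minimal sketch: compare a track's overall movement vector (last point minus first point) against each annotated direction vector and pick the direction with the highest cosine similarity. The function and variable names below are illustrative, not the repository's actual API.

```python
import numpy as np

def assign_direction(fpoint, lpoint, directions):
    """Return the id of the annotated direction whose vector is most
    similar (by cosine similarity) to the track's overall movement.

    fpoint, lpoint: (x, y) first/last coordinates of the track
    directions: dict mapping direction id -> ((x1, y1), (x2, y2))
    """
    move = np.asarray(lpoint, dtype=float) - np.asarray(fpoint, dtype=float)
    best_id, best_sim = None, -1.0
    for dir_id, (p1, p2) in directions.items():
        vec = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
        denom = np.linalg.norm(move) * np.linalg.norm(vec)
        if denom == 0:
            continue
        sim = float(np.dot(move, vec) / denom)
        if sim > best_sim:
            best_id, best_sim = dir_id, sim
    return best_id

# A track moving toward the upper right matches the horizontal direction (id 1)
directions = {1: ((0, 0), (100, 0)), 2: ((0, 0), (0, 100))}
print(assign_direction((635.5, 534.5), (977.0, 281.5), directions))  # -> 1
```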
- For inference, use this notebook
- To retrain the detection model, follow the instructions from the original YOLOv5 repository
- AIC-HCMC-2020: link
- Direction and ROI annotation format:
cam_01.json    # matches the video name
{
    "shapes": [
        {
            "label": "zone",
            "points": [[x1,y1], [x2,y2], [x3,y3], [x4,y4], ...]   # points of the ROI polygon
        },
        {
            "label": "direction01",
            "points": [[x1,y1], [x2,y2]]   # points of the direction vector
        },
        {
            "label": "direction{id}",
            "points": [[x1,y1], [x2,y2]]
        },
        ...
    ]
}
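As an illustration of how the "zone" polygon can restrict detection to the ROI, the sketch below keeps only boxes whose center lies inside the polygon. It assumes a real annotation file with numeric points; cv2.pointPolygonTest is from OpenCV, and the file path and helper name are illustrative.

```python
import json
import numpy as np
import cv2

# Load the annotation matching the video name (path is illustrative)
with open("cam_01.json") as f:
    shapes = json.load(f)["shapes"]

# The ROI polygon is the shape labelled "zone"
zone = next(s for s in shapes if s["label"] == "zone")
polygon = np.array(zone["points"], dtype=np.int32)

def in_roi(box):
    """True if the center of an [x1, y1, x2, y2] box lies inside the ROI."""
    cx = (box[0] + box[2]) / 2
    cy = (box[1] + box[3]) / 2
    return cv2.pointPolygonTest(polygon, (float(cx), float(cy)), False) >= 0

# Example: keep only detections inside the ROI
boxes = [[607, 487, 664, 582], [10, 10, 40, 40]]
roi_boxes = [b for b in boxes if in_roi(b)]
```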
- Download models finetuned on the AIC-HCMC-2020 dataset:
Model | Image Size | Weights | Precision | Recall | mAP@0.5 | mAP@0.5:0.95 |
---|---|---|---|---|---|---|
YOLOv5s | 640x640 | link | 0.87203 | 0.87356 | 0.91797 | 0.60795 |
YOLOv5m | 1024x1024 | link | 0.89626 | 0.91098 | 0.94711 | 0.66816 |
- File structure
this repo
│   detect.py
└───configs
│       configs.yaml          # Contains the model's configurations
│       cam_configs.yaml      # Contains the DeepSORT configuration for each video
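Both files are plain YAML, so they can be inspected or loaded with PyYAML; the sketch below is illustrative and the keys inside depend on the actual configs.

```python
import yaml

# Load the global model configuration (file names taken from the tree above)
with open("configs/configs.yaml") as f:
    cfg = yaml.safe_load(f)

# Load the per-video DeepSORT settings
with open("configs/cam_configs.yaml") as f:
    cam_cfg = yaml.safe_load(f)
```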
- Install dependencies by running:
pip install -r requirements.txt
- To run the full pipeline:
python run.py --input_path=<input video or dir> --output_path=<output dir> --weight=<trained weight>
- Extra parameters:
- --min_conf: minimum confidence threshold for detections
- --min_iou: minimum IoU threshold for detections
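For example, a full run with both thresholds set explicitly (the paths and threshold values here are illustrative):
python run.py --input_path=demo/cam_01.mp4 --output_path=outputs --weight=weights/yolov5m.pt --min_conf=0.4 --min_iou=0.5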
- After running, a .csv file containing the results is produced, with the following example format:
track_id | frame_id | box | color | label | direction | fpoint | lpoint | fframe | lframe |
---|---|---|---|---|---|---|---|---|---|
2 | 3 | [607, 487, 664, 582] | (144, 238, 144) | 0 | 1 | (635.5, 534.5) | (977.0, 281.5) | 3 | 109 |
2 | 4 | [625, 475, 681, 566] | (144, 238, 144) | 0 | 1 | (635.5, 534.5) | (977.0, 281.5) | 3 | 109 |
2 | 5 | [631, 471, 686, 561] | (144, 238, 144) | 0 | 1 | (635.5, 534.5) | (977.0, 281.5) | 3 | 109 |
- With:
  - track_id: the id of the object
  - frame_id: the current frame
  - box: the box that wraps around the object in the corresponding frame
  - color: the color used to visualize the object
  - direction: the direction of the object
  - fpoint, lpoint: first/last coordinates where the object appears
  - fframe, lframe: first/last frames where the object appears
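As a quick way to turn this file into per-direction counts (the goal stated above), the sketch below uses pandas; the file path is illustrative and the column names are those listed above.

```python
import pandas as pd

# Load the result file produced by the pipeline (path is illustrative)
df = pd.read_csv("outputs/cam_01.csv")

# Keep one row per track, since a track appears once per frame
tracks = df.drop_duplicates(subset="track_id")

# Count vehicles of each class moving in each direction
counts = tracks.groupby(["direction", "label"]).size()
print(counts)
```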
Visualization result
- DeepSORT from https://github.com/ZQPei/deep_sort_pytorch
- YOLOv5 from https://github.com/ultralytics/yolov5