🏍️ 🚙 Vehicle Tracking using YOLOv5 + DeepSORT 🚌

Dev logs

  • [19/12/2021] Update to new YOLOv5 version 6. Checkpoints from the original repo can now be loaded 🤞
  • [16/07/2021] BIG REFACTOR: the code has been cleaned up and is working fine now, promise 🤞
  • [27/09/2021] All checkpoints trained on AIC-HCMC-2020 have been lost. Models pretrained on COCO are now used for inference.

Method

  • Use YOLOv5 for the vehicle detection task, considering only objects inside the Region of Interest (ROI).
  • Use DeepSORT for vehicle tracking; the model is not retrained, only used for inference.
  • Use cosine similarity to assign each object's track to the most similar direction (see the sketch after this list).
  • Count each type of vehicle in each direction.
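
The last two steps can be illustrated with a minimal sketch (not the repository's actual code): summarize a track by the vector from its first point (fpoint) to its last point (lpoint) and compare it against the annotated direction vectors with cosine similarity. The helper names below are hypothetical.

import numpy as np

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

def assign_direction(fpoint, lpoint, directions):
    # directions: {direction_id: ((x1, y1), (x2, y2))}, taken from the annotation file
    track_vec = (lpoint[0] - fpoint[0], lpoint[1] - fpoint[1])
    scores = {
        did: cosine_similarity(track_vec, (p2[0] - p1[0], p2[1] - p1[1]))
        for did, (p1, p2) in directions.items()
    }
    # the direction whose vector is most aligned with the track's motion wins
    return max(scores, key=scores.get)

Counting then reduces to tallying, per direction, the class label of every finished track.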

📔 Notebook

  • For inference, use this notebook: Notebook
  • To retrain the detection model, follow the instructions from the original YOLOv5 repo.

Dataset

  • AIC-HCMC-2020: link
  • Direction and ROI annotation format (a parsing sketch follows the example below):
cam_01.json  # matches the video name
{
    "shapes": [
        {
            "label": "zone",
            "points": [[x1,y1], [x2,y2], [x3,y3], [x4,y4], ... ] #Points of a polygon
        },
        {
            "label": "direction01",
            "points": [[x1,y1], [x2,y2]] #Points of vector
        },
        {
            "label": "direction{id}",
            "points": [[x1,y1], [x2,y2]]
        },...
    ],
}
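
For reference, here is a minimal sketch of reading such a file into an ROI polygon and a dictionary of direction vectors (the inline # comments above are explanatory, not valid JSON; the helper below is hypothetical and not part of this repo):

import json

def load_zone_and_directions(path):
    # path: annotation file such as cam_01.json (same name as the video)
    with open(path) as f:
        data = json.load(f)
    zone, directions = None, {}
    for shape in data["shapes"]:
        if shape["label"] == "zone":
            zone = shape["points"]                          # polygon: [[x1, y1], [x2, y2], ...]
        elif shape["label"].startswith("direction"):
            directions[shape["label"]] = shape["points"]    # vector: [[x1, y1], [x2, y2]]
    return zone, directions

# zone, directions = load_zone_and_directions("cam_01.json")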

🥇 Pretrained weights

  • Download models fine-tuned on the AIC-HCMC-2020 dataset (a quick loading check follows the table):

| Model   | Image Size | Weights | Precision | Recall  | mAP@0.5 | mAP@0.5:0.95 |
| ------- | ---------- | ------- | --------- | ------- | ------- | ------------ |
| YOLOv5s | 640x640    | link    | 0.87203   | 0.87356 | 0.91797 | 0.60795      |
| YOLOv5m | 1024x1024  | link    | 0.89626   | 0.91098 | 0.94711 | 0.66816      |
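
To sanity-check a downloaded checkpoint outside this repo's pipeline, the upstream Ultralytics YOLOv5 hub entry point can load a custom weight file (the paths below are placeholders):

import torch

# Load a custom YOLOv5 checkpoint via the upstream Ultralytics hub entry point.
# This only verifies the weight file; it does not run this repo's tracking/counting pipeline.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5s_aic.pt')  # placeholder path
results = model('demo_frame.jpg')  # placeholder image
results.print()                    # prints detected classes and confidences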

🌟 Inference

  • File structure
this repo
│   detect.py
└───configs
│      configs.yaml           # Contains model's configurations
│      cam_configs.yaml       # Contains DeepSORT's configuration for each video
  • Install dependencies with pip install -r requirements.txt
  • To run the full pipeline:
python run.py --input_path=<input video or dir> --output_path=<output dir> --weight=<trained weight>
  • Extra parameters (an example invocation follows this list):
    • --min_conf: minimum confidence threshold for detection
    • --min_iou: minimum IoU threshold for detection
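
For example, a hypothetical invocation with placeholder file names (none of these files ship with the repo):

python run.py --input_path=demo/cam_01.mp4 --output_path=results --weight=weights/yolov5s.pt --min_conf=0.4 --min_iou=0.5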

Results

  • After running, a .csv file containing the results is produced, with the following example format (a sketch of turning these rows into counts follows the column descriptions):

| track_id | frame_id | box                  | color           | label | direction | fpoint         | lpoint         | fframe | lframe |
| -------- | -------- | -------------------- | --------------- | ----- | --------- | -------------- | -------------- | ------ | ------ |
| 2        | 3        | [607, 487, 664, 582] | (144, 238, 144) | 0     | 1         | (635.5, 534.5) | (977.0, 281.5) | 3      | 109    |
| 2        | 4        | [625, 475, 681, 566] | (144, 238, 144) | 0     | 1         | (635.5, 534.5) | (977.0, 281.5) | 3      | 109    |
| 2        | 5        | [631, 471, 686, 561] | (144, 238, 144) | 0     | 1         | (635.5, 534.5) | (977.0, 281.5) | 3      | 109    |
  • Where:
    • track_id: the id of the tracked object
    • frame_id: the current frame
    • box: the bounding box of the object in the corresponding frame
    • color: the color used to visualize the object
    • label: the class label of the detected vehicle
    • direction: the direction assigned to the object
    • fpoint, lpoint: the first/last coordinates where the object appears
    • fframe, lframe: the first/last frames where the object appears
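
As a sketch of how these rows can be turned into the final counts (assuming the .csv is comma-separated and uses the column names above; the grouping below is illustrative, not the repo's code):

import pandas as pd

df = pd.read_csv("results/cam_01.csv")  # placeholder path to the generated .csv

# Each track keeps the same label and direction across frames (see fpoint/lpoint/fframe/lframe),
# so one row per track_id is enough for counting.
tracks = df.drop_duplicates(subset="track_id")

# Number of vehicles of each label passing in each direction.
counts = tracks.groupby(["direction", "label"]).size().unstack(fill_value=0)
print(counts)
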
Visualization result

References