multi-object-tracker

Multi-object trackers in Python

An easy-to-use implementation of various multi-object tracking algorithms.

Demos: cars tracked with YOLOv3 + CentroidTracker (video source: link); cows tracked with TF-MobileNetSSD + CentroidTracker (video source: link).

Available Multi Object Trackers

CentroidTracker
IOUTracker
CentroidKF_Tracker
SORT
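
A minimal construction sketch for these trackers is shown below. The keyword arguments used here (max_lost, iou_threshold, tracker_output_format) are assumptions based on common tracker options and may not match the actual constructor signatures; please check the package docstrings.

from motrackers import CentroidTracker, CentroidKF_Tracker, IOUTracker, SORT

# Pick one tracker; the keyword names below are assumptions, not the verified API.
tracker = CentroidTracker(max_lost=3, tracker_output_format='mot_challenge')
# tracker = CentroidKF_Tracker(max_lost=3, tracker_output_format='mot_challenge')
# tracker = IOUTracker(max_lost=2, iou_threshold=0.5, tracker_output_format='mot_challenge')
# tracker = SORT(max_lost=3, iou_threshold=0.3, tracker_output_format='mot_challenge')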

Available OpenCV-based object detectors:

detector.TF_SSDMobileNetV2
detector.Caffe_SSDMobileNet
detector.YOLOv3
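
These detector classes are built on OpenCV's dnn module. For orientation, the sketch below shows roughly what such a wrapper does internally, using cv2.dnn directly with a Caffe SSD-MobileNet model; the file paths are placeholders, and the post-processing (confidence thresholding and conversion to (bb_left, bb_top, bb_width, bb_height) boxes) is omitted.

import cv2

# Load a Caffe SSD-MobileNet model with OpenCV's dnn module (placeholder paths).
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel")

image = cv2.imread("frame.jpg")
# Preprocess the frame into the 4D blob expected by the network.
blob = cv2.dnn.blobFromImage(image, scalefactor=0.007843, size=(300, 300), mean=127.5)
net.setInput(blob)
detections = net.forward()
# `detections` has shape (1, 1, N, 7): [_, class_id, confidence, x1, y1, x2, y2],
# with box corners normalized to [0, 1] relative to the input image size.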

Installation

OpenCV (version 3.4.3 or later) is required and can be installed with pip. To install this package, clone the repository and install it with the following commands:

git clone https://github.com/adipandas/multi-object-tracker
cd multi-object-tracker
pip install -r requirements.txt
pip install -e .
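
As a quick sanity check (assuming the package installs under the name motrackers, which is the import name used in the examples below), you can try importing the trackers:

python -c "from motrackers import CentroidTracker, CentroidKF_Tracker, IOUTracker, SORT"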

Note: using neural-network models with GPU
To use the OpenCV dnn-based object detection modules provided in this repository with a GPU, you may have to compile a CUDA-enabled version of OpenCV from source.

  • To build OpenCV from source, refer to the following links: [link-1], [link-2]
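
Once OpenCV has been rebuilt with CUDA support, the dnn backend and target can be switched to CUDA on a loaded network. The snippet below is a generic OpenCV example with placeholder paths, not code specific to this repository's detector classes; it requires OpenCV 4.2 or later built with CUDA.

import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholder paths
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)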

How to use?: Examples

The interface of each tracker is simple and similar. Please refer to the example template below.

from motrackers import CentroidTracker # or IOUTracker, CentroidKF_Tracker, SORT

input_data = ...
detector = ...
tracker = CentroidTracker(...) # or IOUTracker(...), CentroidKF_Tracker(...), SORT(...)

while True:
    done, image = <read(input_data)>
    if done:
        break

    detection_bboxes, detection_confidences, detection_class_ids = detector.detect(image)
    # NOTE:
    # * `detection_bboxes` is a numpy.ndarray of shape (n, 4); each row is (bb_left, bb_top, bb_width, bb_height)
    # * `detection_confidences` is a numpy.ndarray of shape (n,)
    # * `detection_class_ids` is a numpy.ndarray of shape (n,)

    output_tracks = tracker.track(detection_bboxes, detection_confidences, detection_class_ids)
    
    # `output_tracks` is a list; each element is a tuple of
    # (<frame>, <id>, <bb_left>, <bb_top>, <bb_width>, <bb_height>, <conf>, <x>, <y>, <z>)
    for track in output_tracks:
        assert len(track) == 10
        frame, track_id, bb_left, bb_top, bb_width, bb_height, confidence, x, y, z = track
        print(track)

Please refer to the examples folder of this repository for more details. You can clone the repository and run the examples as shown here.
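
For a concrete illustration of the expected array shapes, here is a self-contained sketch that runs CentroidTracker on a video read with OpenCV. The detector is a stand-in stub that emits one fixed box per frame; in practice you would use one of the detector classes listed above. The video path is a placeholder, and the tracker is assumed to work with its default constructor arguments.

import cv2
import numpy as np
from motrackers import CentroidTracker

def dummy_detector(image):
    # Stand-in for a real detector: one fixed box in (bb_left, bb_top, bb_width, bb_height) format.
    bboxes = np.array([[50, 60, 120, 80]])
    confidences = np.array([0.9])
    class_ids = np.array([0])
    return bboxes, confidences, class_ids

tracker = CentroidTracker()
cap = cv2.VideoCapture("video.mp4")  # placeholder path

while True:
    ok, frame = cap.read()
    if not ok:
        break
    bboxes, confidences, class_ids = dummy_detector(frame)
    output_tracks = tracker.track(bboxes, confidences, class_ids)
    for track in output_tracks:
        print(track)  # (frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z)

cap.release()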

Pretrained object detection models

You will have to download the pretrained weights for the neural-network models. Shell scripts for downloading the weights are provided under the respective model folders. Please refer to DOWNLOAD_WEIGHTS.md for more details.

Notes

  • The implementations of SORT and the IoU Tracker differ in some details from the algorithms described in the original papers.
  • If you find any bugs, I will be happy to accept your pull request, or you can create an issue to point them out.

References, Credits and Contributions

Please see REFERENCES.md and CONTRIBUTING.md.

Citation

If you use this repository in your work, please consider citing it with:

@misc{multiobjtracker_amd2018,
  author = {Deshpande, Aditya M.},
  title = {Multi-object trackers in Python},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/adipandas/multi-object-tracker}},
}

@software{aditya_m_deshpande_2020_3951169,
  author       = {Aditya M. Deshpande},
  title        = {Multi-object trackers in Python},
  month        = jul,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {v1.0.0},
  doi          = {10.5281/zenodo.3951169},
  url          = {https://doi.org/10.5281/zenodo.3951169}
}