This project was accepted at WACV 2025! 🎉
DragonTrack is a dynamic, robust, adaptive graph-based tracker designed as an end-to-end framework for multi-person tracking (MPT). It integrates a detection transformer model for object detection and feature extraction with a graph convolutional network for re-identification.
DragonTrack leverages encoded features from the transformer model to facilitate precise subject matching and track maintenance. The graph convolutional network processes these features alongside geometric data to predict subsequent positions of tracked individuals. This approach aims to enhance tracking accuracy and reliability, leading to improvements in key metrics such as higher order tracking accuracy (HOTA) and multiple object tracking accuracy (MOTA).
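To make the idea concrete, below is a minimal PyTorch sketch of this kind of pipeline. It is an illustration under assumed names and dimensions, not the released DragonTrack implementation: each detection becomes a graph node whose features concatenate an appearance embedding (standing in for the DETR encoder output) with box geometry, and a single graph convolution produces association embeddings whose pairwise similarity can drive matching.

```python
# Illustrative sketch only -- not the released DragonTrack code.
# Detections are graph nodes; node features = appearance (DETR-style
# encoder output) concatenated with box geometry (cx, cy, w, h).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        a = adj + torch.eye(adj.size(0))          # add self-loops
        d = a.sum(dim=1).clamp(min=1e-6).rsqrt()  # D^-1/2 as a vector
        a_norm = d.unsqueeze(1) * a * d.unsqueeze(0)
        return torch.relu(self.lin(a_norm @ h))

# Hypothetical dimensions: 256-d appearance features, 4-d geometry.
num_dets, app_dim, geo_dim = 5, 256, 4
appearance = torch.randn(num_dets, app_dim)  # stand-in for DETR features
geometry = torch.rand(num_dets, geo_dim)     # normalized cx, cy, w, h
nodes = torch.cat([appearance, geometry], dim=1)

# Fully connected detection graph (any pair may be associated).
adj = torch.ones(num_dets, num_dets) - torch.eye(num_dets)

gcn = SimpleGCNLayer(app_dim + geo_dim, 128)
embeddings = gcn(nodes, adj)                 # (5, 128) association embeddings

# Pairwise cosine similarity could then drive re-identification/matching.
sim = nn.functional.cosine_similarity(
    embeddings.unsqueeze(1), embeddings.unsqueeze(0), dim=-1)
print(sim.shape)  # torch.Size([5, 5])
```

The fully connected detection graph here is a simplification; a real tracker might restrict edges, e.g. by spatial proximity between boxes.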
- Integration of detection transformer for object detection and feature extraction.
- Utilization of graph convolutional networks for re-identification and track prediction.
- Focus on enhancing tracking accuracy and reliability.
- Outperforms state-of-the-art MOT methods on the MOT17 dataset, achieving 82.0 MOTA and 65.3 HOTA (both metrics are summarized below).
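For reference, MOTA penalizes false negatives (FN), false positives (FP), and identity switches (IDSW) relative to the total number of ground-truth objects (GT):

$$\mathrm{MOTA} = 1 - \frac{\mathrm{FN} + \mathrm{FP} + \mathrm{IDSW}}{\mathrm{GT}}$$

HOTA complements it by jointly scoring detection and association accuracy in a single number.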
Follow these steps to set up your environment and install the necessary dependencies:
- Clone the DragonTrack repository to your local machine:

  ```bash
  git clone https://yourprojectrepository.com/DragonTrack
  cd DragonTrack
  ```
Use Conda to manage your environment and dependencies. If you do not have Conda installed, download it from Miniconda or Anaconda.
- Create a new Conda environment:

  ```bash
  conda create --name dragontrack python=3.8
  ```
- Activate the Conda environment:

  ```bash
  conda activate dragontrack
  ```
- Install the required packages from `requirements.txt`:

  ```bash
  pip install -r requirements.txt
  ```
- Download the MOT17 dataset from the MOT Challenge website and place it in a folder named `MOT_dataset` (a layout sanity check is sketched after these steps):

  ```bash
  mkdir -p MOT_dataset
  # Download the dataset into the MOT_dataset folder
  ```
- Run the script to create the necessary folders:

  ```bash
  ./create_folders.sh
  ```
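The exact folder layout the training and tracking scripts expect is not stated above. Assuming the standard MOTChallenge layout (e.g. `MOT_dataset/MOT17/train/<sequence>/img1/`, which is an assumption, not a documented requirement), a short hypothetical check like this can confirm the data is in place:

```python
# Hypothetical sanity check -- assumes the standard MOTChallenge layout;
# adjust the paths if DragonTrack's scripts expect something different.
from pathlib import Path

root = Path("MOT_dataset/MOT17/train")  # assumed location of the train split
if not root.is_dir():
    raise SystemExit(f"Expected dataset at {root}; adjust the path if needed.")

for seq in sorted(root.iterdir()):
    img_dir = seq / "img1"  # MOTChallenge stores frames under img1/
    n_frames = len(list(img_dir.glob("*.jpg"))) if img_dir.is_dir() else 0
    print(f"{seq.name}: {n_frames} frames")
```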
- Command: run the training script.

  ```bash
  ./train.sh
  ```

- Result: training starts, saving trained models in `/models`. Modify settings in `tracking.py`.
- Specify the trained model in `tracking.py`.
- Command: initiate testing.

  ```bash
  ./test.sh
  ```

- Result: generates `.txt` files and videos, saved in `/output`; the result format is sketched below. Settings can be changed in `tracking.py`.
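MOT benchmark result files conventionally use the MOTChallenge format, one box per line: `frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z`. Assuming DragonTrack's output follows that convention (an assumption, not confirmed above), a minimal loader looks like this; the file path is a hypothetical example:

```python
# Assumes MOTChallenge-style output: frame, id, x, y, w, h, conf, x, y, z.
# "output/MOT17-02.txt" is a hypothetical example path.
import csv

tracks = {}  # track id -> list of (frame, x, y, w, h)
with open("output/MOT17-02.txt", newline="") as f:
    for row in csv.reader(f):
        frame, tid = int(row[0]), int(row[1])
        x, y, w, h = map(float, row[2:6])
        tracks.setdefault(tid, []).append((frame, x, y, w, h))

print(f"{len(tracks)} tracks loaded")
for tid, boxes in sorted(tracks.items())[:3]:
    print(f"track {tid}: {len(boxes)} boxes, first frame {boxes[0][0]}")
```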
Pre-processed Tracktor detection files from this repository were used for benchmark evaluation.
If you use this code or dataset for your research, please consider citing our paper:
```bibtex
@inproceedings{Amraee2025DragonTrack,
  title     = {Transformer-Enhanced Graphical Multi-Person Tracking in Complex Scenarios},
  author    = {Bishoy Galoaa and Somaieh Amraee and Sarah Ostadabbas},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {1},
  year      = {2025}
}
```