
[T-NNLS 2024] AttMOT: Improving Multiple-Object Tracking by Introducing Auxiliary Pedestrian Attributes


Yunhao Li, Zhen Xiao, Lin Yang, Dan Meng, Heng Fan, Libo Zhang
arXiv Dataset

examples
Figure: We introduce AttMOT, a large, richly annotated synthetic dataset for pedestrian tracking, containing over 80k frames and 6 million pedestrian IDs across different times of day, weather conditions, and scenarios. To the best of our knowledge, AttMOT is the first MOT dataset with semantic attributes.

πŸ“· Attribute Samples

samples
Figure: Visualization of several attribute annotation examples in the proposed AttMOT.

🚩 Usage

πŸ”Ή πŸ‘‰ Organization

Due to the large data size, we split AttMOT into multiple zip files. Each file is organized as follows:

part-01.zip
β”œβ”€β”€ seq_001
β”‚   β”œβ”€β”€ det.txt
β”‚   β”œβ”€β”€ feature.txt
β”‚   β”œβ”€β”€ seqinfo.ini
β”‚   β”œβ”€β”€ 0.jpg
β”‚   β”œβ”€β”€ 1.jpg
β”‚   β”œβ”€β”€ 2.jpg
β”‚   └── ...
└── ...
part-02.zip
β”œβ”€β”€ seq_101
β”‚   └── ...
└── ...
...
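The layout above can be handled with a short helper. Below is a minimal sketch (not part of the official toolkit) that extracts every `part-*.zip` archive into one folder and lists the resulting sequence directories; adjust the paths and glob pattern to match your download location.

```python
# Sketch: extract the AttMOT zip parts and list the sequences they contain.
# File names follow the "part-XX.zip / seq_XXX" layout shown above.
import zipfile
from pathlib import Path


def extract_parts(archive_dir: str, out_dir: str) -> list[str]:
    """Extract every part-*.zip in archive_dir and return the sequence names."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for part in sorted(Path(archive_dir).glob("part-*.zip")):
        with zipfile.ZipFile(part) as zf:
            zf.extractall(out)
    # Each top-level directory (e.g. seq_001) is one video sequence.
    return sorted(p.name for p in out.iterdir() if p.is_dir())
```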

πŸ”Ή πŸ‘‰ Format of Each Video Sequence

In each video folder, we provide the frames of the video, bounding box annotations in the det.txt file, pedestrian attribute annotations in the feature.txt file, and basic information about the video sequence in the seqinfo.ini file. The bounding box and sequence-information formats are consistent with the MOTChallenge datasets, which use the mainstream annotation format in the MOT field.
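Since det.txt and seqinfo.ini follow the MOTChallenge convention, they can be read with the standard library alone. The sketch below assumes the usual MOTChallenge field order for det.txt (frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z) and a `[Sequence]` section in seqinfo.ini; the exact column layout of feature.txt is not specified here, so it is omitted.

```python
# Sketch: read MOTChallenge-style annotations from one sequence folder.
# Assumed det.txt field order (MOTChallenge convention):
#   frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z
import configparser
from pathlib import Path


def load_boxes(seq_dir: str):
    """Yield (frame, track_id, left, top, width, height, conf) per line of det.txt."""
    for line in Path(seq_dir, "det.txt").read_text().splitlines():
        if not line.strip():
            continue
        f = line.split(",")
        yield (int(f[0]), int(f[1]), float(f[2]), float(f[3]),
               float(f[4]), float(f[5]), float(f[6]))


def load_seqinfo(seq_dir: str) -> dict:
    """Read seqinfo.ini; MOTChallenge files keep metadata under [Sequence]."""
    cfg = configparser.ConfigParser()
    cfg.read(Path(seq_dir, "seqinfo.ini"))
    return dict(cfg["Sequence"])
```

Note that `configparser` lowercases option names by default, so keys such as `frameRate` come back as `framerate`.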

πŸ”Ή πŸ‘‰ Downloading Links

The experiments in the AttMOT paper use a version of the dataset containing 450 sequences. For further research purposes, we later built a larger version with more sequences. We therefore provide download links for both versions:

  • The downloading link for the 450-seqs version is here: baidu (code:g2aa)
  • The downloading link for the 1800-seqs version is here.

Note: Our dataset is a synthetic dataset, thus, it only consists of a training set and does not include a separate test set.

πŸ“ Evaluation

AttMOT does not include a test set. Therefore, models trained on it are typically evaluated on existing real-world MOT datasets, such as the MOTChallenge benchmarks, using the TrackEval evaluation toolkit.

🎈 Citation

πŸ™ If you use AttMOT for your research, please consider giving it a star ⭐ and citing it:

@article{li2024attmot,
  title={AttMOT: Improving Multiple-Object Tracking by Introducing Auxiliary Pedestrian Attributes},
  author={Li, Yunhao and Xiao, Zhen and Yang, Lin and Meng, Dan and Fan, Heng and Zhang, Libo},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2024}
}