AttMOT: Improving Multiple-Object Tracking by Introducing Auxiliary Pedestrian Attributes
Yunhao Li, Zhen Xiao, Lin Yang, Dan Meng, Heng Fan, Libo Zhang
arXiv
Dataset
Figure: We introduce AttMOT, a large, highly enriched synthetic dataset for pedestrian tracking, containing over 80k frames and 6 million pedestrian IDs across different times of day, weather conditions, and scenarios. To the best of our knowledge, AttMOT is the first MOT dataset with semantic attributes.
Figure: Visualization of several attribute annotation examples in the proposed AttMOT.
Organization
Due to the large data size, we split AttMOT into multiple Zip files. Each file has the following organization:
part-01.zip
├── seq_001
│   ├── det.txt
│   ├── feature.txt
│   ├── seqinfo.ini
│   ├── 0.jpg
│   ├── 1.jpg
│   ├── 2.jpg
│   └── ...
part-02.zip
├── seq_101
│   └── ...
...
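After extracting a Zip file, the sequence folders can be enumerated programmatically. The sketch below lists each sequence and parses its seqinfo.ini; it assumes the file follows the MOTChallenge convention of a [Sequence] section with keys such as seqLength, imWidth, and imHeight (note that configparser lowercases keys):

```python
import configparser
from pathlib import Path

def list_sequences(root):
    """Yield (sequence name, parsed seqinfo) for each extracted sequence folder.

    Assumes seqinfo.ini follows the MOTChallenge convention: a [Sequence]
    section holding metadata such as seqLength, imWidth, and imHeight.
    """
    for seq_dir in sorted(Path(root).iterdir()):
        ini = seq_dir / "seqinfo.ini"
        if not ini.is_file():
            continue  # skip stray files that are not sequence folders
        cfg = configparser.ConfigParser()
        cfg.read(ini)
        yield seq_dir.name, dict(cfg["Sequence"])
```

This keeps each sequence self-contained: everything a tracker needs (frames, detections, attributes, metadata) lives in one folder.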
Format of Each Video Sequence
In each video folder, we provide the video frames, bounding-box annotations in the det.txt file, pedestrian attribute annotations in the feature.txt file, and basic information about the sequence in the seqinfo.ini file. The format of the bounding boxes and the sequence information is consistent with the MOTChallenge datasets, which use the mainstream annotation format in the MOT field.
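Since the annotations follow the MOTChallenge layout, a det.txt line can be parsed with a few lines of Python. The sketch below assumes the standard MOTChallenge field order (frame, id, bb_left, bb_top, bb_width, bb_height, conf, followed by trailing values that are ignored here); verify it against the actual files:

```python
def parse_mot_line(line):
    """Parse one MOTChallenge-style annotation line into a dict.

    Assumed field order (standard MOTChallenge layout):
    frame, id, bb_left, bb_top, bb_width, bb_height, conf, [x, y, z].
    The trailing 3D world coordinates, if present, are ignored.
    """
    fields = line.strip().split(",")
    frame, track_id = int(fields[0]), int(fields[1])
    x, y, w, h = (float(v) for v in fields[2:6])
    conf = float(fields[6]) if len(fields) > 6 else 1.0
    return {"frame": frame, "id": track_id, "bbox": (x, y, w, h), "conf": conf}
```

Boxes are given as top-left corner plus width and height in pixel coordinates, as in the MOTChallenge ground truth.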
Downloading Links
The AttMOT paper uses a version of the dataset containing 450 sequences. For further research, we later built a larger version with more sequences. We therefore provide download links for two different versions of the dataset:
- The downloading link for the 450-seqs version is here: baidu (code: g2aa)
- The downloading link for the 1800-seqs version is here.
Note: AttMOT is a synthetic dataset, so it consists of a training set only and does not include a separate test set. Models trained on it are therefore typically evaluated on existing real-world MOT datasets, such as the MOTChallenge benchmarks, using the TrackEval evaluation toolkit.
If you use AttMOT for your research, please consider giving it a star and citing it:
@article{li2024attmot,
title={AttMOT: Improving Multiple-Object Tracking by Introducing Auxiliary Pedestrian Attributes},
  author={Li, Yunhao and Xiao, Zhen and Yang, Lin and Meng, Dan and Fan, Heng and Zhang, Libo},
journal={IEEE Transactions on Neural Networks and Learning Systems},
year={2024}
}