Augmented Dual-Contrastive Aggregation Learning for Unsupervised Visible-Infrared Person Re-Identification (ACM MM 2022)
- We propose a dual-stream contrastive learning framework with two modality-specific memory modules for USL-VI-ReID. To learn color-invariant features, the visible stream employs random channel augmentation, a strong color augmentation, as a bridge to the infrared modality for joint contrastive learning (a minimal sketch of this augmentation follows the list).
- We design a Cross-modality Memory Aggregation (CMA) module that selects reliable positive samples and aggregates the corresponding memory representations in a parameter-free manner, enabling the dual-stream framework to learn better modality-invariant knowledge while reinforcing each contrastive learning stream (see the aggregation sketch after this list).
- We present extensive experiments on the SYSU-MM01 and RegDB datasets, demonstrating that our method outperforms existing unsupervised methods under various settings and even surpasses some supervised counterparts, providing a new baseline for the USL-VI-ReID task and pushing VI-ReID toward real-world deployment.
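
For reference, here is a minimal sketch of the random channel augmentation idea, assuming a PyTorch tensor pipeline. The class name `RandomChannelAugmentation` and the probability `p` are illustrative, not the repository's API; the exact augmentation variants used in training follow the paper and the run scripts.

```python
import random
import torch

class RandomChannelAugmentation:
    """Illustrative random channel augmentation (hypothetical class name).

    With probability p, one randomly chosen RGB channel is replicated across
    all three channels, discarding color cues so the augmented visible image
    looks closer to the infrared modality.
    """

    def __init__(self, p: float = 0.5):
        self.p = p

    def __call__(self, img: torch.Tensor) -> torch.Tensor:
        # img: a (3, H, W) visible-image tensor
        if random.random() < self.p:
            c = random.randint(0, 2)            # pick R, G, or B at random
            img = img[c:c + 1].repeat(3, 1, 1)  # replicate it to all channels
        return img
```

In a typical pipeline this transform would be composed before normalization, alongside the usual crop and flip augmentations.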
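Similarly, a hedged sketch of the parameter-free aggregation idea behind CMA: each visible memory representation selects its most similar infrared memory representations as reliable cross-modality positives and averages them in. The `topk` selection rule and the blending step here are assumptions made for illustration; the paper's actual positive-selection criterion may differ.

```python
import torch
import torch.nn.functional as F

def cross_modality_aggregate(mem_vis: torch.Tensor,
                             mem_ir: torch.Tensor,
                             topk: int = 3) -> torch.Tensor:
    """Illustrative parameter-free cross-modality aggregation.

    mem_vis: (Nv, d) visible memory; mem_ir: (Nr, d) infrared memory.
    Returns an updated visible memory enriched with reliable IR positives.
    """
    v = F.normalize(mem_vis, dim=1)        # L2-normalize both memories
    r = F.normalize(mem_ir, dim=1)
    sim = v @ r.t()                        # cosine similarity, (Nv, Nr)
    _, idx = sim.topk(topk, dim=1)         # indices of reliable IR positives
    positives = r[idx]                     # (Nv, topk, d)
    aggregated = positives.mean(dim=1)     # parameter-free aggregation
    # blend the aggregated cross-modality knowledge back into the memory
    return F.normalize(v + aggregated, dim=1)
```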
Put the SYSU-MM01 and RegDB datasets into data/sysu and data/regdb, then run prepare_sysu.py and prepare_regdb.py to prepare the training data (converting it to the Market-1501 format; a hedged renaming sketch is shown below).
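To illustrate the Market-1501 naming convention the prepare scripts target (filenames like 0001_c1s1_000001_00.jpg), here is a hedged sketch for SYSU-MM01, assuming its standard cam1–cam6/person-ID layout. The helper `convert_sysu` and its paths are hypothetical; the real prepare_sysu.py may organize splits differently.

```python
import os
import shutil

def convert_sysu(src_root: str = "data/sysu",
                 dst_dir: str = "data/sysu/market_style") -> None:
    """Hypothetical helper: rename SYSU-MM01 images to Market-1501 style.

    Assumes the standard SYSU-MM01 layout cam{1..6}/<person_id>/<frame>.jpg
    (cameras 3 and 6 are infrared).
    """
    os.makedirs(dst_dir, exist_ok=True)
    for cam in range(1, 7):
        cam_dir = os.path.join(src_root, f"cam{cam}")
        if not os.path.isdir(cam_dir):
            continue
        for pid in sorted(os.listdir(cam_dir)):
            pid_dir = os.path.join(cam_dir, pid)
            for i, name in enumerate(sorted(os.listdir(pid_dir))):
                # Market-1501 pattern: <pid>_c<cam>s<seq>_<frame>_<idx>.jpg
                new_name = f"{int(pid):04d}_c{cam}s1_{i:06d}_00.jpg"
                shutil.copy(os.path.join(pid_dir, name),
                            os.path.join(dst_dir, new_name))
```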
- Train on SYSU-MM01: sh run_train_sysu.sh
- Train on RegDB: sh run_train_regdb.sh
- Test on SYSU-MM01: sh run_test_sysu.sh
- Test on RegDB: sh run_test_regdb.sh
@inproceedings{adca,
  title     = {Augmented Dual-Contrastive Aggregation Learning for Unsupervised Visible-Infrared Person Re-Identification},
  author    = {Yang, Bin and Ye, Mang and Chen, Jun and Wu, Zesen},
  booktitle = {ACM MM},
  pages     = {2843--2851},
  year      = {2022}
}