PHA


[CVPR 2023] PHA: Patch-wise High-frequency Augmentation for Transformer-based Person Re-identification [pdf]

Official Code for the CVPR 2023 paper [PHA: Patch-wise High-frequency Augmentation for Transformer-based Person Re-identification].

Requirements

Installation

pip install -r requirements.txt
(We use a 32GB V100 GPU for training and evaluation.)
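
Optionally, you can confirm that PyTorch sees the GPU before launching training (a minimal sketch, assuming PyTorch is installed via requirements.txt; this helper is not part of the repo):

# Quick sanity check that PyTorch and CUDA are available before training.
import torch

print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))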

Prepare ViT Pre-trained Models

You need to download the ImageNet-pretrained transformer model: ViT-Base.
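
After downloading, a quick way to confirm the checkpoint is intact is to load it with PyTorch (a minimal sketch; the file name below is the timm-style ViT-Base checkpoint name commonly used with TransReID and is an assumption, so point it at wherever you saved the weights):

# Confirm the downloaded ViT-Base weights load as a state dict.
import torch

ckpt_path = "jx_vit_base_p16_224-80ecf9dd.pth"  # assumed file name; adjust to your download location
state_dict = torch.load(ckpt_path, map_location="cpu")
print(len(state_dict), "parameter tensors, e.g.", next(iter(state_dict)))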

Training

We use a single GPU for training.

CUDA_VISIBLE_DEVICES=0 python train.py --config_file configs/Cuhk03_labeled/vit_transreid_stride.yml
CUDA_VISIBLE_DEVICES=0 python train.py --config_file configs/Market/vit_transreid_stride.yml
CUDA_VISIBLE_DEVICES=0 python train.py --config_file configs/MSMT17/vit_transreid_stride.yml

Citation

If you find this code useful for your research, please cite our paper

@InProceedings{Zhang_2023_CVPR,
    author    = {Zhang, Guiwei and Zhang, Yongfei and Zhang, Tianyu and Li, Bo and Pu, Shiliang},
    title     = {PHA: Patch-wise High-frequency Augmentation for Transformer-based Person Re-identification},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {14133-14142}
}

Acknowledgement

Our code is based on TransReID. Thanks for the great work!

@InProceedings{He_2021_ICCV,
    author    = {He, Shuting and Luo, Hao and Wang, Pichao and Wang, Fan and Li, Hao and Jiang, Wei},
    title     = {TransReID: Transformer-Based Object Re-Identification},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {15013-15022}
}