Semantic-guided Pixel Sampling for Cloth-Changing Person Re-identification

In our paper, published on arXiv, we propose a semantic-guided pixel sampling approach for the cloth-changing person re-ID task. This repo contains the training and testing code.
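
For intuition, the core idea is to replace each image's clothing pixels with clothing pixels sampled from other images, guided by human parsing masks, so the model cannot rely on clothing appearance. Below is a minimal, hypothetical sketch of this kind of pixel sampling; the function name, tensor shapes, and sampling-with-replacement strategy are our assumptions, and the repo's actual implementation (see the training scripts) may differ in details:

```python
import torch

def semantic_pixel_sampling(images, clothes_masks):
    """Hypothetical sketch of semantic-guided pixel sampling.

    images:        (B, 3, H, W) float tensor with a batch of person images.
    clothes_masks: (B, H, W) binary tensor, 1 where the SCHP parsing
                   labels a pixel as clothing (e.g. upper clothes, pants).

    Each image's clothing pixels are overwritten with clothing pixels
    sampled from another image in the batch, so clothing appearance
    stops being a reliable identity cue.
    """
    out = images.clone()
    donors = torch.randperm(images.size(0))  # pick a random donor for every image
    for i, j in enumerate(donors.tolist()):
        src = images[j][:, clothes_masks[j].bool()]  # donor clothing pixels, shape (3, Nj)
        dst = clothes_masks[i].bool()                # receiver clothing region, shape (H, W)
        n = int(dst.sum())
        if n == 0 or src.size(1) == 0:
            continue  # nothing to replace, or donor has no clothing pixels
        idx = torch.randint(src.size(1), (n,))       # sample donor pixels with replacement
        out[i][:, dst] = src[:, idx]
    return out
```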

Prepare Dataset

  1. Download the PRCC dataset: PRCC
  2. Obtain the human body part masks using SCHP
  3. Download the masks of the PRCC dataset: Baidu (password: r9kc) or Google. A sketch for pairing images with their masks follows this list.
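
After downloading, it can help to sanity-check that every image has a matching mask. The sketch below is hypothetical: it assumes the mask archive mirrors the rgb folder structure with matching file names, and the `/data/prcc/rgb` and `/data/prcc/mask` paths are placeholders; adjust both to the actual layout of the archives you downloaded:

```python
import os

def pair_images_with_masks(rgb_dir, mask_dir):
    """Hypothetical helper: pair each PRCC image with its parsing mask.

    Assumes the masks mirror the rgb folder structure and share file
    names (with a .png extension).
    """
    pairs = []
    for root, _, files in os.walk(rgb_dir):
        for name in files:
            if not name.lower().endswith(('.jpg', '.png')):
                continue
            img_path = os.path.join(root, name)
            rel = os.path.relpath(img_path, rgb_dir)
            mask_path = os.path.join(mask_dir, os.path.splitext(rel)[0] + '.png')
            if os.path.isfile(mask_path):
                pairs.append((img_path, mask_path))
    return pairs

pairs = pair_images_with_masks('/data/prcc/rgb', '/data/prcc/mask')
print(f'{len(pairs)} image/mask pairs found')
```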

Trained Models

The trained models can be downloaded from BaiduPan (password: 6ulj) or Google.

Put the trained models in the corresponding directories:
>pixel_sampling/imagenet/resnet50-19c8e357.pth
>pixel_sampling/logs/prcc_base/checkpoint_best.pth
>pixel_sampling/logs/prcc_hpm/checkpoint_best.pth
>...... 
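
For reference, resnet50-19c8e357.pth is the standard torchvision ImageNet checkpoint for ResNet-50, so it loads directly into a stock model. The format of this repo's own checkpoint_best.pth files depends on how the training script saves them, so the sketch below only inspects the file rather than assuming its keys:

```python
import torch
from torchvision.models import resnet50

# resnet50-19c8e357.pth is the official torchvision ImageNet checkpoint,
# so it loads directly into a stock ResNet-50.
backbone = resnet50()
backbone.load_state_dict(torch.load('imagenet/resnet50-19c8e357.pth'))

# The repo's own checkpoints: inspect the contents first, since the exact
# format (plain state_dict vs. a wrapper dict) depends on the training script.
ckpt = torch.load('logs/prcc_base/checkpoint_best.pth', map_location='cpu')
print(type(ckpt), list(ckpt)[:5] if isinstance(ckpt, dict) else None)
```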

Training and Testing Models

You only need to modify a few parameters:

>parser.add_argument('--train', type=str, default='train', help='train, test')

>parser.add_argument('--data_dir', type=str, default='/data/prcc/')

Then run:

>python train_prcc_base.py
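
Since these options are defined via argparse, you can also override them on the command line instead of editing the defaults, e.g. to run testing (assuming the script calls parse_args() as usual):

>python train_prcc_base.py --train test --data_dir /data/prcc/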

Citations

If you find this work useful, please cite:

@article{shu2021semantic,
  title={Semantic-guided Pixel Sampling for Cloth-Changing Person Re-identification},
  author={Shu, Xiujun and Li, Ge and Wang, Xiao and Ruan, Weijian and Tian, Qi},
  journal={IEEE Signal Processing Letters},
  volume={28},
  pages={1365--1369},
  year={2021}
}

If you have any questions, please contact us at: shuxj@mail.ioa.ac.cn