Prerequisite: Python 3.7 and PyTorch 1.3.0
- Install PyTorch
- Download and prepare the dataset as follows (a layout-check sketch follows the list):
  a. PETA: Baidu Yun (passwd: 5vep) or Google Drive. Arrange the files as
       ./dataset/peta/images/*.png
       ./dataset/peta/PETA.mat
       ./dataset/peta/README
     then run: python script/dataset/transform_peta.py
  b. RAP: Google Drive. Arrange the files as
       ./dataset/rap/RAP_dataset/*.png
       ./dataset/rap/RAP_annotation/RAP_annotation.mat
     then run: python script/dataset/transform_rap.py
  c. PA100K: Links. Arrange the files as
       ./dataset/pa100k/data/*.png
       ./dataset/pa100k/annotation.mat
     then run: python script/dataset/transform_pa100k.py
  d. RAP(v2): Links. Arrange the files as
       ./dataset/rap2/RAP_dataset/*.png
       ./dataset/rap2/RAP_annotation/RAP_annotation.mat
     then run: python script/dataset/transform_rap2.py
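Before running the transform scripts, it can help to confirm that the downloads are unpacked into the layout listed above. The helper below is only an illustrative sketch (it is not part of the repository); the path patterns are the ones from the list:

```python
import glob

# Expected layout per dataset, taken from the preparation list above.
EXPECTED = {
    'peta':   ['./dataset/peta/images/*.png',
               './dataset/peta/PETA.mat',
               './dataset/peta/README'],
    'rap':    ['./dataset/rap/RAP_dataset/*.png',
               './dataset/rap/RAP_annotation/RAP_annotation.mat'],
    'pa100k': ['./dataset/pa100k/data/*.png',
               './dataset/pa100k/annotation.mat'],
    'rap2':   ['./dataset/rap2/RAP_dataset/*.png',
               './dataset/rap2/RAP_annotation/RAP_annotation.mat'],
}

def check_layout(name):
    """Print whether each expected pattern matches at least one file."""
    for pattern in EXPECTED[name]:
        found = glob.glob(pattern)
        print('%-55s %s' % (pattern, 'ok (%d)' % len(found) if found else 'MISSING'))

if __name__ == '__main__':
    check_layout('peta')  # or 'rap', 'pa100k', 'rap2'
```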
- Train the model:
  sh script/experiment/train.sh
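The cited DeepMAR paper trains the attribute classifier with a weighted sigmoid cross-entropy loss that compensates for rare attributes. The sketch below shows one common variant of that idea; the exp(1 - p)/exp(p) weighting and the `pos_ratio` input are illustrative assumptions, not necessarily the exact formulation train.sh uses:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedBCE(nn.Module):
    """Multi-label BCE with per-attribute weights derived from label rarity.

    pos_ratio[l] is the fraction of training images on which attribute l is
    positive. Weighting positives by exp(1 - p) and negatives by exp(p) is one
    common choice in pedestrian attribute recognition; it is an illustrative
    assumption here, not necessarily the repository's exact loss.
    """

    def __init__(self, pos_ratio):
        super().__init__()
        self.register_buffer('pos_ratio',
                             torch.as_tensor(pos_ratio, dtype=torch.float))

    def forward(self, logits, targets):
        # Larger weight on positive labels of rare attributes (small pos_ratio).
        w = torch.where(targets > 0.5,
                        torch.exp(1.0 - self.pos_ratio),
                        torch.exp(self.pos_ratio))
        return F.binary_cross_entropy_with_logits(logits, targets,
                                                  weight=w, reduction='mean')

# Usage with a batch of 8 samples and 35 attributes (the size of the commonly
# used PETA attribute subset); the ratios would come from the training labels.
criterion = WeightedBCE(torch.rand(35))
loss = criterion(torch.randn(8, 35), (torch.rand(8, 35) > 0.7).float())
```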
- Test the trained model:
  sh script/experiment/test.sh
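Pedestrian attribute benchmarks are usually reported with the label-based mean accuracy (mA): per attribute, the average of the true-positive and true-negative rates, averaged over all attributes. A small sketch of that metric (what test.sh actually reports may include additional example-based metrics):

```python
import numpy as np

def mean_accuracy(pred, gt):
    """Label-based mA over binary predictions.

    pred, gt: (num_samples, num_attributes) arrays of 0/1 labels.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tpr = (pred & gt).sum(0) / np.maximum(gt.sum(0), 1)        # true-positive rate
    tnr = (~pred & ~gt).sum(0) / np.maximum((~gt).sum(0), 1)   # true-negative rate
    return float(((tpr + tnr) / 2).mean())
```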
- Run the demo:
  python script/experiment/demo.py
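demo.py's command-line interface is not reproduced here; as a rough, hypothetical sketch of what single-image attribute inference involves (a torchvision ResNet-50 with a sigmoid multi-label head stands in for the repository's own model class, and the checkpoint path, attribute count, image path, and normalization constants are assumptions):

```python
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

NUM_ATTRS = 35   # assumption: number of attributes the model was trained on

# Stand-in model: ResNet-50 backbone with a linear multi-label head.
# The repository's own model class may differ from this.
model = models.resnet50()
model.fc = nn.Linear(model.fc.in_features, NUM_ATTRS)
state = torch.load('exp/model.pth', map_location='cpu')    # hypothetical checkpoint path
model.load_state_dict(state, strict=False)
model.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),  # ImageNet stats
])

img = preprocess(Image.open('person.png').convert('RGB')).unsqueeze(0)   # hypothetical image
with torch.no_grad():
    probs = torch.sigmoid(model(img))[0]    # one independent probability per attribute
positive = [i for i, p in enumerate(probs.tolist()) if p > 0.5]
print('attribute indices predicted positive:', positive)
```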
If this code helps your research, please cite the DeepMAR paper:

@inproceedings{li2015deepmar,
  author    = {Dangwei Li and Xiaotang Chen and Kaiqi Huang},
  title     = {Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios},
  booktitle = {ACPR},
  pages     = {111--115},
  year      = {2015}
}
Parts of the code are based on the repository from Houjing Huang.
The code should only be used for academic research.