Prerequisites: Python 2.7 and PyTorch 0.3.1

Install PyTorch first; a quick way to verify the installation is sketched below.
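As a sanity check (this snippet is not part of the repository; it only verifies the prerequisites above), you can confirm the interpreter and framework versions before continuing:

```python
# Sanity check only: confirm Python 2.7 and PyTorch 0.3.1 are active.
import sys
import torch

print(sys.version)                # expect 2.7.x
print(torch.__version__)          # expect 0.3.1
print(torch.cuda.is_available())  # True if a CUDA-enabled build and GPU are present
```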
Download and prepare the datasets as follows (a hypothetical sketch of what the transform step does is shown after this list):

a. PETA: download from Baidu Yun (passwd: 5vep) or Google Drive, and arrange the files as

       ./dataset/peta/images/*.png
       ./dataset/peta/PETA.mat
       ./dataset/peta/README

   then run

       python script/dataset/transform_peta.py

b. RAP: download from Google Drive, and arrange the files as

       ./dataset/rap/RAP_dataset/*.png
       ./dataset/rap/RAP_annotation/RAP_annotation.mat

   then run

       python script/dataset/transform_rap.py

c. PA100K: download from the provided links, and arrange the files as

       ./dataset/pa100k/data/*.png
       ./dataset/pa100k/annotation.mat

   then run

       python script/dataset/transform_pa100k.py

d. RAP(v2): download from the provided links, and arrange the files as

       ./dataset/rap2/RAP_dataset/*.png
       ./dataset/rap2/RAP_annotation/RAP_annotation.mat

   then run

       python script/dataset/transform_rap2.py
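Each `transform_*.py` script converts the raw annotation file into the partition/label format the training code expects; the authoritative definition of that format is the scripts themselves. Purely as a hypothetical illustration of the general idea, a RAP-style conversion might look like the sketch below; the struct name `RAP_annotation`, the fields `name` and `label`, and the output pickle path are assumptions, not the repository's actual format.

```python
# Hypothetical illustration only: field names and output layout are assumptions.
# The real conversion logic lives in script/dataset/transform_rap.py.
import pickle

import numpy as np
from scipy.io import loadmat

mat = loadmat('./dataset/rap/RAP_annotation/RAP_annotation.mat')
anno = mat['RAP_annotation']  # assumed name of the top-level MATLAB struct

# Assume the struct holds a cell array of image file names and a
# [num_images x num_attributes] binary label matrix.
image_names = [str(n[0]) for n in anno['name'][0, 0].squeeze()]
labels = np.asarray(anno['label'][0, 0], dtype=np.int64)

# Dump a simple image-list / attribute-matrix pair for the training code to read.
with open('./dataset/rap/rap_dataset.pkl', 'wb') as f:
    pickle.dump({'image': image_names, 'att': labels}, f)
```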
Train a model:

    sh script/experiment/train.sh

Evaluate a trained model:

    sh script/experiment/test.sh

Run the demo script (a minimal inference sketch follows below):

    python script/experiment/demo.py
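For readers who want a concrete picture of what the demo does, here is a minimal DeepMAR-style inference sketch written against the PyTorch 0.3 `Variable` API. The backbone, checkpoint path, attribute count, input size, and decision threshold are all assumptions made for illustration; the actual entry point and its arguments are defined in `script/experiment/demo.py`.

```python
# Minimal sketch of single-image attribute inference (PyTorch 0.3-style API).
# The model layout, checkpoint path, and attribute count are hypothetical; the
# real pipeline is implemented in script/experiment/demo.py.
import torch
import torch.nn as nn
from torch.autograd import Variable
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

NUM_ATTRIBUTES = 35  # assumption, e.g. a PETA-sized attribute set

# DeepMAR-style model: a CNN backbone with one sigmoid output per attribute.
model = models.resnet50(pretrained=False)
model.fc = nn.Linear(model.fc.in_features, NUM_ATTRIBUTES)

# Assume the checkpoint stores the model's state dict directly.
state = torch.load('checkpoint.pth', map_location=lambda storage, loc: storage)
model.load_state_dict(state)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open('person.png').convert('RGB')).unsqueeze(0)
scores = torch.sigmoid(model(Variable(img, volatile=True)))  # per-attribute probabilities
predicted = (scores.data > 0.5).squeeze(0)                   # 1 = attribute predicted present
print(predicted)
```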
If you use this code, please cite:

    @inproceedings{li2015deepmar,
      author    = {Dangwei Li and Xiaotang Chen and Kaiqi Huang},
      title     = {Multi-attribute Learning for Pedestrian Attribute Recognition in Surveillance Scenarios},
      booktitle = {ACPR},
      pages     = {111--115},
      year      = {2015}
    }
Parts of the code are based on the repository of Houjing Huang.

This code should be used for academic research only.