A PyTorch implementation of the Local-Bottom Network (LB) in the paper:
Wu, Zifeng, et al. "A comprehensive study on cross-view gait based human identification with deep CNNs." IEEE Transactions on Pattern Analysis and Machine Intelligence 39.2 (2017): 209-226.
- In `src/model.py` there are two models: `LBNet` and `LBNet_1`. `LBNet_1` is closer to the model described in Section 4.2.1 of the original paper. You can select either one; the results are close to each other.
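A minimal sketch of how the model choice might be wired up; the `LBNet`/`LBNet_1` names come from `src/model.py`, but the class bodies below are empty stand-ins, not the real networks:

```python
# Stand-in definitions so this sketch is self-contained; in the repo these are
# the CNN classes defined in src/model.py.
class LBNet:
    pass

class LBNet_1:
    pass

# Hypothetical switch: LBNet_1 follows Section 4.2.1 of the paper more closely.
USE_PAPER_VARIANT = True
model = LBNet_1() if USE_PAPER_VARIANT else LBNet()
```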
- To train the model, put the CASIA-B silhouette data under the repository root. Run `mkdir snapshot` to create the directory for saving models, then go to the `src` dir and run `python3 train.py`. The model will be saved into the execution dir every 10000 iterations; you can change the interval in `train.py`.
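The periodic-snapshot schedule can be sketched as follows. The interval and the `snapshot_<iteration>.pth` naming pattern match the checkpoints referenced later (e.g. `snapshot_75000.pth`), but this is an illustration of the scheduling only; the real `train.py` would call `th.save(...)` at each save point:

```python
# Hedged sketch of the save-every-N-iterations logic assumed to live in train.py.
SAVE_INTERVAL = 10000  # change this value in train.py to save more or less often

def snapshot_path(iteration):
    # Filename pattern matching the snapshots loaded by test.py (hypothetical helper).
    return f"snapshot_{iteration}.pth"

# Which checkpoints a 50000-iteration run would produce under this schedule.
saved = [snapshot_path(i) for i in range(1, 50001) if i % SAVE_INTERVAL == 0]
```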
- Install visdom.
- Start the visdom server with `python3 -m visdom.server -port 5274` or any port you like (change the port in `train.py` and `test.py`).
- Open http://localhost:5274 in your browser. You will see the training loss curve and the validation accuracy curve.
- Go to the `src` dir and run `python3 test.py`. You can select which snapshot to use by changing `checkpoint = th.load('../snapshot/snapshot_75000.pth')` to another snapshot. Be patient, since it takes a long time. The computed similarities will be saved into `similarity.npy`.
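The similarity step can be sketched with numpy. The shapes, the cosine metric, and the variable names below are assumptions for illustration, not the repo's exact code; `test.py` presumably extracts features for every gallery and probe sequence and then scores all pairs:

```python
import numpy as np

# Toy stand-ins for the network's feature embeddings (assumed 128-d here).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 128))  # 4 gallery sequences
probe = rng.normal(size=(3, 128))    # 3 probe sequences

# L2-normalize, then a matrix product gives all pairwise cosine similarities.
g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
p = probe / np.linalg.norm(probe, axis=1, keepdims=True)
similarity = p @ g.T                 # shape (3, 4): probes x gallery

# Persist the score matrix the way the README describes.
np.save("similarity.npy", similarity)
```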
- Run `python3 compute_acc_per_angle.py` to compute the accuracy for each probe view and gallery view. The results will be saved into `acc_table.csv`. You will get a table like this.
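The per-cell computation behind such a table can be sketched as rank-1 accuracy from a similarity matrix. The toy labels below are illustrative; the real script would repeat this for every (probe view, gallery view) pair and write the full grid to `acc_table.csv`:

```python
import csv

import numpy as np

# Toy similarity matrix for one (probe view, gallery view) cell: probes x gallery.
similarity = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.7, 0.3]])
gallery_ids = np.array([0, 1])       # subject id of each gallery sequence
probe_ids = np.array([0, 1, 0])      # subject id of each probe sequence

# Rank-1: each probe is assigned the id of its most similar gallery entry.
best = similarity.argmax(axis=1)
acc = float((gallery_ids[best] == probe_ids).mean())

# One row of the hypothetical accuracy table.
with open("acc_table.csv", "w", newline="") as f:
    csv.writer(f).writerow(["probe_view_0_vs_gallery_view_0", acc])
```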