I created this simple project to check whether different implementations of the evaluation code for person re-identification generate identical results. So it is just a test case for evaluation code.

Here I compare the results of the implementations by Tong Xiao and Zhun Zhong.
## Files

- `./data`: an example case for computing CMC and mAP scores, involving 100 query images and 5332 gallery images. Their identities, cameras, and the query-gallery distance matrix are provided.
- `./python_version`
  - `ranking.py`, copied from open-reid (link)
  - `main.py`, computing CMC and mAP
- `./matlab_version`
  - `evaluation.m`, copied from person-re-ranking (link)
  - `compute_AP.m`, copied from person-re-ranking (link); this is also the same as the one provided by Market1501 (link)
  - `main.m`, computing CMC and mAP
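To illustrate how the pieces in `./data` fit together, here is a minimal single-shot CMC sketch over a distance matrix. The function and variable names are my own (not from this repo's code), and the filtering rule — dropping gallery entries that share both identity and camera with the query — follows the usual Market1501 protocol rather than the exact logic of `ranking.py` or `evaluation.m`:

```python
import numpy as np

def cmc_curve(distmat, query_ids, gallery_ids, query_cams, gallery_cams, topk=10):
    """Sketch of a single-gallery-shot CMC curve.

    For each query: rank the gallery by distance, drop gallery samples that
    share both identity and camera with the query, then record the rank of
    the first correct match. cmc[k] = fraction of queries whose first match
    appears within the top (k+1) ranks.
    """
    ranks = np.zeros(topk)
    n_valid = 0
    order = np.argsort(distmat, axis=1)  # gallery indices, nearest first
    for i in range(distmat.shape[0]):
        idx = order[i]
        # Remove gallery samples with the same id AND same camera as the query.
        keep = ~((gallery_ids[idx] == query_ids[i]) &
                 (gallery_cams[idx] == query_cams[i]))
        matches = gallery_ids[idx][keep] == query_ids[i]
        if not matches.any():
            continue  # no valid ground truth for this query
        n_valid += 1
        first_hit = np.flatnonzero(matches)[0]
        if first_hit < topk:
            ranks[first_hit:] += 1
    return ranks / max(n_valid, 1)
```

For the example case in `./data`, `distmat` would be the 100×5332 query-gallery distance matrix, with the id and camera arrays taken from the provided annotations.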
## Usage

To run the python version, `numpy` and `scikit-learn` are required:

```shell
cd python_version
python main.py
```

To run the matlab version, change the working directory to `matlab_version` in Matlab and run `main.m`.

Then you can compare the results of the two versions.
## Note

The CMC scores of the two versions are identical.

As for mAP, the python version is implemented with `sklearn.metrics.average_precision_score`. This function changed its behavior in version 0.19. My installed version (0.18.1) generates mAP identical to the matlab version, i.e. 64.03%, while version 0.19.1 generates 66.52%. Since the Market1501 paper introduced mAP into person re-identification, I stick to the matlab version of mAP.
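The discrepancy comes down to how average precision is accumulated. The matlab `compute_AP.m` (and, to my understanding, scikit-learn before 0.19) uses a trapezoidal update over the precision-recall curve, whereas scikit-learn 0.19+ uses a step-function sum. A rough standalone Python sketch of the trapezoidal rule, assuming the `ap += (recall - old_recall) * (old_precision + precision) / 2` update from Market1501's code and ignoring junk-image handling:

```python
import numpy as np

def ap_trapezoid(matches):
    """Trapezoidal average precision over a ranked binary match list.

    `matches[k]` is 1 if the gallery item at rank k+1 has the query's
    identity. At every rank, precision and recall are updated and the
    trapezoid area (recall - old_recall) * (old_precision + precision) / 2
    is added; ranks without a new hit contribute zero area but still
    lower old_precision, matching the matlab loop.
    """
    matches = np.asarray(matches, dtype=bool)
    n_good = int(matches.sum())
    if n_good == 0:
        return 0.0
    ap = 0.0
    old_recall, old_precision = 0.0, 1.0
    hits = 0
    for rank, is_match in enumerate(matches, start=1):
        if is_match:
            hits += 1
        recall = hits / n_good
        precision = hits / rank
        ap += (recall - old_recall) * (old_precision + precision) / 2.0
        old_recall, old_precision = recall, precision
    return ap
```

By contrast, the non-interpolated definition used by newer scikit-learn sums `precision_at_k` only at the hit ranks, which is why the two mAP numbers diverge while the CMC scores do not.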
You can check your installed version of scikit-learn with `pip freeze | grep scikit-learn`. To install scikit-learn version 0.18.1, try this:

```shell
pip uninstall scikit-learn
pip install scikit-learn==0.18.1
```