Cysu/open-reid

Version of sklearn.metrics.average_precision_score Should Be Carefully Considered for mAP

huanghoujing opened this issue · 7 comments

Hi, Tong Xiao.

I find that sklearn.metrics.average_precision_score changed its behavior in version 0.19. Earlier versions (I have only tested 0.18.1) produce mAP identical to the official Market-1501 evaluation code, while newer versions (I have only tested 0.19.1) produce higher mAP.

I have put together a test case for this: link.
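
In case the link is unavailable, here is a minimal standalone sketch (toy data, not the linked test case; the helper name market1501_style_ap is just for this sketch) comparing sklearn's average_precision_score with a hand-rolled AP that uses the same trapezoidal interpolation as the Market-1501 evaluation code, with junk images ignored. With scikit-learn 0.18.1 the two printed numbers should agree; with 0.19.1 they generally will not.

import numpy as np
from sklearn.metrics import average_precision_score

def market1501_style_ap(y_true, y_score):
    # AP with trapezoidal interpolation between operating points,
    # mirroring the Market-1501 evaluation code (junk images ignored).
    order = np.argsort(-np.asarray(y_score))   # rank gallery by descending score
    matches = np.asarray(y_true)[order]
    num_good = float(matches.sum())            # float so the division also works under Python 2
    ap, hits = 0.0, 0
    old_recall, old_precision = 0.0, 1.0
    for rank, match in enumerate(matches):
        hits += int(match)
        recall = hits / num_good
        precision = hits / (rank + 1.0)
        ap += (recall - old_recall) * (old_precision + precision) / 2.0
        old_recall, old_precision = recall, precision
    return ap

# Toy ranking: 1 = same identity as the query, 0 = a different identity.
y_true = [1, 0, 1, 0, 0, 1]
y_score = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
print('Market-1501-style AP    :', market1501_style_ap(y_true, y_score))
print('average_precision_score :', average_precision_score(y_true, y_score))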

Thank you!

Cysu commented

@huanghoujing Thanks a lot for this information! That's interesting. I will look into it when I have time.

huanghoujing commented

Thanks again. Yeah, maybe it is necessary to make the mAP evaluation code self-contained in open-reid. Looking forward to your solution.

Cysu commented

@huanghoujing you may install a specific version with:

pip uninstall scikit-learn
pip install scikit-learn==0.18.1

Yeah, my current workaround is also a bit awkward: downgrade the package and check the version in the code.
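
The check I mean is just something along these lines (a sketch, not the actual open-reid code):

import sklearn
from distutils.version import LooseVersion

# Fail fast if the installed sklearn would silently change the mAP protocol.
if LooseVersion(sklearn.__version__) >= LooseVersion('0.19'):
    raise RuntimeError('sklearn >= 0.19 changed average_precision_score; '
                       'please install scikit-learn==0.18.1 to reproduce the '
                       'Market-1501 mAP.')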

yee-kevin commented

@huanghoujing
Hi, I am following your mAP calculation and I ran into this issue too.
Could you advise me on the following:

  1. Should I use sklearn version 0.18.1 or 0.19.1?
    From the documentation: http://scikit-learn.org/dev/modules/generated/sklearn.metrics.average_precision_score.html#sklearn.metrics.average_precision_score
    "Changed in version 0.19: Instead of linearly interpolating between operating points, precisions are weighted by the change in recall since the last operating point."

  2. Running under Python 2 and Python 3 also gives me different mAP results. Which one is correct?

  3. For other ReID datasets such as CUHK, do you happen to know which sklearn version is standard for the mAP calculation?

Thank you

huanghoujing commented

@yee-kevin

  1. You should use 0.18.1 (see the sketch after this list for what changed in 0.19).
  2. If you are using open-reid, which recommends Python 3, then you should use Python 3 to reduce potential inconsistency.
  3. sklearn 0.18.1 produces the standard mAP for ReID, regardless of which dataset you are using.
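
To make point 1 concrete, here is a small sketch (toy data, names are just for this example) of what the 0.19+ definition computes: each precision value is weighted by the change in recall since the previous operating point, with no interpolation. With sklearn >= 0.19 the two printed numbers should coincide; with 0.18.1 the second one is the interpolated value and will usually differ.

import numpy as np
from sklearn.metrics import precision_recall_curve, average_precision_score

y_true = np.array([1, 0, 1, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])

precision, recall, _ = precision_recall_curve(y_true, y_score)
# recall comes back in decreasing order and ends at 0; flip both arrays so
# recall increases, then weight each precision by the change in recall.
step_ap = np.sum(np.diff(recall[::-1]) * precision[::-1][1:])

print('step-wise AP            :', step_ap)
print('average_precision_score :', average_precision_score(y_true, y_score))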

yee-kevin commented

@huanghoujing
Thank you very much!