eric-yyjau/pytorch-superpoint

Why is the benchmark different from the SuperPoint paper?

SuanNaiShuiGuoLao opened this issue · 1 comment

Hi! Thank you for your great work!

I noticed that the SIFT results in your benchmark are quite different from those reported in the original SuperPoint paper.
The SIFT results from the original paper and from your benchmark are, respectively:

|                   | ε = 1 | ε = 3 | ε = 5 | Rep. | MLE  | NN mAP | MS   |
| ----------------- | ----- | ----- | ----- | ---- | ---- | ------ | ---- |
| SIFT in the paper | 0.42  | 0.68  | 0.76  | 0.50 | 0.83 | 0.69   | 0.31 |
| SIFT in this work | 0.60  | 0.75  | 0.80  | 0.47 | 1.13 | 0.71   | 0.31 |

(both at 480×640 resolution, top 1000 points)
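For context on the ε columns: as I read the SuperPoint paper, homography estimation counts a pair as correct when the four image corners, warped by the estimated vs. ground-truth homography, differ by less than ε pixels on average. A minimal sketch of that check (my reading of the metric, not this repo's code; `homography_corner_error` is a name I made up):

```python
import numpy as np

def homography_corner_error(H_est, H_gt, h, w):
    """Mean distance between the four image corners warped by the
    estimated and the ground-truth homography (sketch of the paper's
    homography-estimation metric, as I understand it)."""
    corners = np.array([[0, 0, 1],
                        [w - 1, 0, 1],
                        [w - 1, h - 1, 1],
                        [0, h - 1, 1]], dtype=np.float64)
    warped_est = corners @ H_est.T
    warped_gt = corners @ H_gt.T
    # Normalize homogeneous coordinates before measuring distances.
    warped_est = warped_est[:, :2] / warped_est[:, 2:3]
    warped_gt = warped_gt[:, :2] / warped_gt[:, 2:3]
    return np.linalg.norm(warped_est - warped_gt, axis=1).mean()

# A pair would count as correct at threshold eps if:
# homography_corner_error(H_est, H_gt, 480, 640) < eps
```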

P.S.: to get the top-1000 results, I changed line 50 of classical_detectors_descriptors.py from
`sift = cv2.xfeatures2d.SIFT_create(contrastThreshold=1e-5)`
to
`sift = cv2.xfeatures2d.SIFT_create(1000, contrastThreshold=1e-5)`.

Could you please explain what causes these differences, especially in the MLE and homography estimation results? Thanks a lot!

Hi @SuanNaiShuiGuoLao ,

Thank you for your question.
I'm not sure what causes the difference.
The paper says, "Repeatability is computed at 240 x 320 resolution with 300 points detected in each image."
So maybe you can try the top 300 points at that resolution?
There may also be some SIFT parameters worth tuning.
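If it helps, here is a minimal sketch of that setting (resize to 240 x 320, keep 300 points); the image path is a placeholder and this is not the repo's evaluation code:

```python
import cv2

# Sketch of the paper's setting: 240x320 resolution, 300 points per image.
# "image.png" is a placeholder path, not a file from this repo.
img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
img_small = cv2.resize(img, (320, 240))  # cv2.resize takes (width, height)
sift = cv2.xfeatures2d.SIFT_create(nfeatures=300, contrastThreshold=1e-5)
keypoints, descriptors = sift.detectAndCompute(img_small, None)
```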