Nanne/pytorch-NetVlad

recall

Closed this issue · 1 comments

whuzs commented

Hi, Nanne. You have done a great job of reproducing NetVLAD!! I have a small question about the recall results. I got results similar to yours on the Pitts-30k dataset, which are:
====> Calculating recall @ N
====> Recall@1: 0.8567
====> Recall@5: 0.9507
====> Recall@10: 0.9713
====> Recall@20: 0.9840
=> Best Recall@5: 0.9508
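For context, the Recall@N in these logs counts a query as correct if at least one of its top-N retrieved database images is a ground-truth positive. A minimal sketch of that computation (hypothetical helper name, not the repository's exact code):

```python
def recall_at_n(predictions, positives, n_values=(1, 5, 10, 20)):
    """Fraction of queries with at least one true positive among the top-N retrievals.

    predictions: per-query lists of database indices, best match first.
    positives: per-query lists of ground-truth positive database indices.
    """
    hits = [0] * len(n_values)
    for pred, pos in zip(predictions, positives):
        pos = set(pos)
        for i, n in enumerate(n_values):
            if any(p in pos for p in pred[:n]):
                # a hit within the top N also counts for every larger N
                for j in range(i, len(n_values)):
                    hits[j] += 1
                break
    return [h / len(predictions) for h in hits]


# Toy example: query 0 finds its positive at rank 2, query 1 never does.
print(recall_at_n([[3, 1, 2], [0, 4, 5]], [[1], [9]], n_values=(1, 2, 3)))
# → [0.0, 0.5, 0.5]
```

Note that a hit at rank 1 is propagated to all larger N, which is why Recall@N is monotonically non-decreasing in N, as in the numbers above.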

But in the Patch-NetVLAD paper, the Pitts-30k results are:

[image: Pitts-30k results table from the Patch-NetVLAD paper]

The difference is about 3%.

Is this result reasonable?

Nanne commented

Thanks! The difference may just be down to the backbone used: the original NetVLAD uses an AlexNet backbone and scores a bit lower than the VGG16 backbone I used here.

Your Recall@5 and @10 do perhaps seem a bit low compared to your Recall@1, though.

This is not really a code issue, though, so I will close it.