iago-suarez/BEBLID

Problem in the scale of execution times

iago-suarez opened this issue · 5 comments

We have found a problem with the scale of the time measurements in the original paper "Suárez, I., Sfeir, G., Buenaposada, J. M., & Baumela, L. (2020). BEBLID: Boosted efficient binary local image descriptor. Pattern Recognition Letters, 133, 366-372." Due to a bug in the experiment source code, the reported times were scaled down by a constant factor of roughly 13×. For example, the real execution time of BEBLID-512 on the Oxford dataset images (sizes between 765x512 and 1000x700) is not 0.21 ms as stated in the paper, but 0.21 ms × 13 = 2.73 ms. The same scaling affects all the other descriptors equally, so the relative comparison and the conclusions of the paper remain the same.

Hi Suarez,
I also noticed some points of confusion about the execution times.

  1. In the original BEBLID paper (Tab. 1), BEBLID takes ~10 ms on a desktop CPU, while in the BAD/HashSIFT paper (Table V) it takes 1.56 ms. What explains the difference? Maybe it is the difference between float BEBLID and binary BEBLID?
  2. I have tested the OpenCV wrappers of BEBLID and BAD myself, but the execution-time gap between ORB and these descriptors is not as significant as in the paper (ORB: 77 ms, BEBLID_256b: 44.28 ms, TEBLID_256b: 44.16 ms); a sketch of my timing loop is below. Also, TEBLID is not faster than BEBLID. I am not sure whether something is missing in my experiments, maybe the OpenCV build configuration?
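For reference, here is a rough sketch of the kind of timing loop I mean (not my exact code: the image path and keypoint budget are placeholders, the 1.00f scale factor for ORB keypoints follows the xfeatures2d documentation, and TEBLID requires a recent opencv_contrib):

```cpp
// Time descriptor computation only, on the same set of ORB keypoints.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>

// Keypoints are passed by value because compute() may drop some of them.
static double timeCompute(const cv::Ptr<cv::Feature2D>& descriptor,
                          const cv::Mat& img, std::vector<cv::KeyPoint> kps) {
    cv::Mat desc;
    cv::TickMeter tm;
    tm.start();
    descriptor->compute(img, kps, desc);
    tm.stop();
    return tm.getTimeMilli();
}

int main() {
    cv::Mat img = cv::imread("graf1.png", cv::IMREAD_GRAYSCALE);  // placeholder image
    auto orb = cv::ORB::create(2000);                             // up to 2000 keypoints
    std::vector<cv::KeyPoint> kps;
    orb->detect(img, kps);

    auto beblid = cv::xfeatures2d::BEBLID::create(1.00f, cv::xfeatures2d::BEBLID::SIZE_256_BITS);
    auto teblid = cv::xfeatures2d::TEBLID::create(1.00f, cv::xfeatures2d::TEBLID::SIZE_256_BITS);

    std::cout << "ORB compute:    " << timeCompute(orb, img, kps) << " ms\n";
    std::cout << "BEBLID compute: " << timeCompute(beblid, img, kps) << " ms\n";
    std::cout << "TEBLID compute: " << timeCompute(teblid, img, kps) << " ms\n";
    return 0;
}
```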

Thank you in advance!

Hi @xushangnjlh ,

I haven't had time to respond to the other issue yet; it has been a busy week 😓.

Regarding execution times, the correct ones are those reported in the BAD/HashSIFT paper, since the ones in the BEBLID paper suffer from the wrong scale described above.

Despite this, BEBLID's descriptor computation is faster than ORB's because of its parallel execution:

https://github.com/opencv/opencv_contrib/blob/de84cc02a876894a4047ce31f7d9fd179f213e95/modules/xfeatures2d/src/beblid.cpp#L368-L370

Parallel execution should be enabled by default in OpenCV, but the speedup depends on how many cores are available on your machine and how many keypoints you process. We run our experiments with at most 2000 keypoints and images of around 800 x 800 px (Oxford dataset).
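As a quick check (a minimal sketch, not the exact code from the paper experiments), you can vary the OpenCV thread count with cv::setNumThreads and see how BEBLID's compute() scales; the image path and keypoint budget below are placeholders:

```cpp
// Time BEBLID descriptor computation with different OpenCV thread counts.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("graf1.png", cv::IMREAD_GRAYSCALE);  // placeholder image
    std::vector<cv::KeyPoint> kps;
    cv::ORB::create(2000)->detect(img, kps);
    auto beblid = cv::xfeatures2d::BEBLID::create(1.00f, cv::xfeatures2d::BEBLID::SIZE_512_BITS);

    for (int nThreads : {1, 2, 4, cv::getNumberOfCPUs()}) {
        cv::setNumThreads(nThreads);              // 1 disables OpenCV's threading
        std::vector<cv::KeyPoint> kpsCopy = kps;  // compute() may modify the vector
        cv::Mat desc;
        cv::TickMeter tm;
        tm.start();
        beblid->compute(img, kpsCopy, desc);      // runs the parallel_for_ linked above
        tm.stop();
        std::cout << nThreads << " thread(s): " << tm.getTimeMilli() << " ms\n";
    }
    return 0;
}
```

With a single thread the parallel_for_ runs sequentially, so the timing should approach the single-core cost of the descriptor.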

I hope you find this answer helpful.

Best,
Iago.


Hi, have you ever tried HashSIFT with the parallel_for_ option OFF? In my experiment its descriptor computation time is even higher than OpenCV's SIFT; is this normal?

Hi Shengnan,

Iago did a very good job of taking only the descriptor computation part from the OpenCV SIFT implementation for HashSIFT. If I remember correctly, this means that HashSIFT is more efficient than OpenCV's SIFT descriptor when used with keypoint detectors other than SIFT. However, when the SIFT detector and descriptor are used together, the OpenCV implementation is faster because the descriptor reuses results already computed by the detector (e.g. the Gaussian scale-space pyramid). The other way around is also true: HashSIFT used with OpenCV's SIFT detector should be slower than OpenCV's own SIFT descriptor used with the SIFT detector.
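A quick way to see this effect with OpenCV alone (a minimal sketch; HashSIFT's own API is not shown, just swap it in for the second compute() call):

```cpp
// Coupled vs. decoupled SIFT pipelines: the decoupled path rebuilds the
// Gaussian pyramid in the descriptor stage.
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("graf1.png", cv::IMREAD_GRAYSCALE);  // placeholder image
    auto sift = cv::SIFT::create(2000);
    cv::TickMeter tm;

    // A) Coupled: detection and description share the scale-space pyramid.
    std::vector<cv::KeyPoint> kpsA;
    cv::Mat descA;
    tm.start();
    sift->detectAndCompute(img, cv::noArray(), kpsA, descA);
    tm.stop();
    std::cout << "SIFT detectAndCompute:     " << tm.getTimeMilli() << " ms\n";

    // B) Decoupled: detect first, then describe the provided keypoints.
    // The descriptor stage rebuilds the pyramid, which is the extra cost
    // a standalone descriptor (e.g. HashSIFT) also has to pay.
    std::vector<cv::KeyPoint> kpsB;
    cv::Mat descB;
    sift->detect(img, kpsB);
    tm.reset();
    tm.start();
    sift->compute(img, kpsB, descB);  // replace with HashSIFT's compute() to test it
    tm.stop();
    std::cout << "Detect + separate compute: " << tm.getTimeMilli() << " ms\n";
    return 0;
}
```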

Is this the case? Are you using HashSIFT with OpenCV's SIFT detector?

Iago, please, correct me if I'm wrong.

Hi guys, since this is BEBLID's repo, let's keep the discussion in iago-suarez/efficient-descriptors#3