cvlab-epfl/tf-lift

missing file: lift_desc_sub_kernel.h5

pacidic opened this issue · 6 comments

After studying the code for a bit, I think I can load the legacy models contained in the original LIFT repo. However, when trying to run the "kp" subtask on a test image, I get an OSError at line 67 in "modules/lift_desc.py" due to the missing file "tf-lift/modules/lift_desc_sub_kernel.h5". Would it be possible to include it in the repo? Thanks.

Should be working now.

kmyi commented

After studying the code for a bit, I think I am able to load in the legacy models contained in the original LIFT repo.

It's not really supported, as we did not have enough time to debug that part.
If you want, you can try it yourself, as all the data you need is in the LIFT repo and this repo.
We are always grateful for any pull requests!

Kwang

Thanks for the info. I was trying to load the legacy model from the LIFT repo, but I couldn't get past the error about the missing "lift_desc_sub_kernel.h5" file. It's not included in the LIFT repo either; would it be possible for you to add it?

My situation is the following: I need to compute LIFT keypoints & descriptors for many images (>10k) in order to establish the performance of a method I am working on compared to the state of the art. I can use the "run.sh" script in the LIFT repo, but it is slow (2-3 min/image on a Tesla K20), and I can't seem to parallelise it due to the Theano compiler lock. How would you suggest speeding this up? In particular, is there a way for me to parallelise the evaluation?

Thanks again.

I see what you're trying to do, but the "legacy" models are not those trained with the Theano implementation of LIFT; they are the weights for an even older version of (only) the descriptor, trained with lua-torch. We don't use them at all; we just forgot to remove those lines.

We know that running theano-LIFT in batch can be very annoying due to the compiler checks. It's a big part of why we ported everything to TensorFlow. There isn't a trivial fix; when we absolutely must use the Theano models, we just eat the overhead. We plan to release models for the new codebase as soon as we can. We'll ping you when we do.

kmyi commented

As a tip, you can put the .theano directory on the fastest available storage, e.g. a tmpfs in RAM, to avoid this overhead as much as you can.
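Building on that tip, one way to sketch a batch setup is to give every worker process its own Theano compile cache (via the real `base_compiledir` flag in `THEANO_FLAGS`) on a RAM-backed tmpfs such as `/dev/shm`, so no two processes ever contend for the same compiler lock. This is an untested illustration: `run.sh` is the script from the LIFT repo, while the `chunk_$i.txt` image-list files and the worker count of 4 are placeholders you would adapt.

```shell
# Each worker gets a private Theano compile cache on tmpfs (/dev/shm),
# so the per-directory compiler lock is never shared between workers.
# For safety this only prints the commands; remove the `echo` and
# append `&` plus a final `wait` to actually launch them in parallel.
for i in 0 1 2 3; do
  echo "THEANO_FLAGS=base_compiledir=/dev/shm/theano-worker-$i ./run.sh chunk_$i.txt"
done
```

The first run per worker still pays full compilation cost, but subsequent images reuse that worker's warm cache, and the caches never block each other.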