mihaidusmanu/d2-net

About the evaluation in visuallocalizationbenchmark

Closed this issue · 8 comments

Hello again. I am trying to reproduce the evaluation part of d2-net.
I managed to extract the features and save them into .npz files; I simply used extract_features.py for the extraction.
However, after importing the features and running the matching, the pipeline fails at the triangulation step, which also causes the reconstruction to fail.

Can you show me which part I got wrong? Thank you.

==============================================================================
Triangulating image #1112
==============================================================================

  => Image has 0 / 0 points
  => Triangulated 0 points

==============================================================================
Triangulating image #29
==============================================================================

  => Image has 0 / 0 points
  => Triangulated 0 points

==============================================================================
Triangulating image #3542
==============================================================================

  => Image has 0 / 0 points
  => Triangulated 0 points

==============================================================================
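As a quick first sanity check, here is a minimal sketch that verifies none of the exported .d2-net files are empty before they are imported (the glob path is hypothetical; adjust it to wherever the feature files were written):

import glob

import numpy as np

# Hypothetical location of the exported features; adjust to your dataset layout.
feature_paths = glob.glob('./data/aachen-day-night/images/**/*.d2-net', recursive=True)

empty = []
for path in feature_paths:
    with np.load(path) as d:
        # Each file is expected to hold 'keypoints' (N x 3) and 'descriptors' (N x 512).
        if d['keypoints'].shape[0] == 0:
            empty.append(path)

print(f'{len(feature_paths)} feature files checked, {len(empty)} empty')
for path in empty:
    print('empty:', path)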

Hello! This looks like an issue during the geometric verification process. Could you report the output of the GV step (https://github.com/tsattler/visuallocalizationbenchmark/blob/master/local_feature_evaluation/reconstruction_pipeline.py#L213)? I'll look into it on my end as well and update you in case I find any problems.

I disabled everything else and only ran the geometric verification step on my server:

Running geometric verification...
qt.qpa.screen: QXcbConnection: Could not connect to display 
Could not connect to any X display.

I also printed out the paths:

print(paths.database_path)
print(paths.match_list_path)

./data/aachen-day-night/d2-net.db
./data/aachen-day-night/image_pairs_to_match.txt

To be sure about the extraction part, I also printed out some keypoints and descriptors.
They look fine to me, so I am very confused.

# command
import numpy as np

# Load the features exported for one database image.
d = np.load('./db/101.jpg.d2-net')

print('--------keypoints--------')
print(d['keypoints'])
print(d['keypoints'].shape)

print('--------descriptors--------')
print(d['descriptors'])
print(d['descriptors'].shape)

# results
--------keypoints--------
[[1.0220248e+03 2.3840601e+02 1.0000000e+00]
 [1.8756195e+02 3.9289459e+02 1.0000000e+00]
 [9.5907166e+02 7.4171857e+02 1.0000000e+00]
 ...
 [5.7415186e+02 1.5254072e+03 1.0000000e+00]
 [6.6166510e+02 1.2554918e+02 1.0000000e+00]
 [4.9555618e+02 4.4235461e+02 1.0000000e+00]]
(10210, 3)
--------descriptors--------
[[0.17914139 0.09293804 0.         ... 0.05883683 0.00216836 0.10146339]
 [0.20785215 0.03320646 0.02627012 ... 0.00305413 0.03256616 0.        ]
 [0.20039983 0.02432002 0.         ... 0.08115707 0.         0.05065363]
 ...
 [0.02823835 0.0051806  0.         ... 0.05338924 0.2248854  0.        ]
 [0.         0.07378493 0.0822665  ... 0.0012342  0.01612465 0.16420138]
 [0.         0.02574669 0.         ... 0.         0.0203543  0.22006287]]
(10210, 512)

I suspect you are using the non-CUDA build of COLMAP (you can verify this by running colmap -h).
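For a scripted version of that check, a minimal sketch that simply looks for the "without CUDA" string in the banner printed by colmap -h:

import subprocess

# `colmap -h` prints a banner such as
# "COLMAP 3.4 -- Structure-from-Motion and Multi-View Stereo (Commit ... with CUDA)".
banner = subprocess.run(['colmap', '-h'], capture_output=True, text=True).stdout

if 'without CUDA' in banner:
    print('This COLMAP binary was built without CUDA support.')
else:
    print('This COLMAP binary appears to have CUDA support.')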

One workaround, in case your machine has an X server, is to run export DISPLAY=:0.0 before launching the reconstruction script, or to use X forwarding (ssh -X) if you are running it remotely (be aware that, with the second option, you need to stay connected while the script is running; disconnecting will probably break it).

To fix this permanently, you will need to compile COLMAP from source with CUDA support, which will let you run it remotely without issues.

It seems like I did install the non-CUDA version. I will try that. Thanks!

COLMAP 3.4 -- Structure-from-Motion and Multi-View Stereo
(Commit Unknown on Unknown without CUDA)

Hello @mihaidusmanu, it turns out it was because of the COLMAP version. After building it from source, everything is working.

I ran into a problem along the way, but I fixed it according to the following issue:
colmap/colmap#188

conda uninstall libtiff

I have a final question though. The visuallocalizationbenchmark pipeline does not give me the results of your Figure 5. Can you kindly tell me exactly which files you use to compute those results? Or is there a script that I missed? Many thanks.

The annotations for the Aachen Day-Night dataset are private because it is part of an ongoing challenge - we do not plan on releasing them in the near future (this includes the evaluation scripts).

Nevertheless, you can submit the results file generated by the reconstruction script (Aachen_eval_[method-name].txt) to the evaluation server available at https://www.visuallocalization.net (Dataset: Aachen Day-Night; Local feature challenge) in order to check the results.
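In case it helps, a minimal sketch of a pre-upload sanity check, assuming the usual one-pose-per-line layout of the submission file (the file name below is hypothetical):

# Hypothetical file name; the reconstruction script writes Aachen_eval_[method-name].txt.
submission_path = 'Aachen_eval_d2-net.txt'

with open(submission_path) as f:
    for line_number, line in enumerate(f, start=1):
        fields = line.split()
        # Assumed layout: image name, rotation as quaternion (qw qx qy qz),
        # translation (tx ty tz) -- 8 whitespace-separated fields per line.
        if len(fields) != 8:
            print(f'line {line_number}: expected 8 fields, found {len(fields)}')
            continue
        try:
            [float(value) for value in fields[1:]]
        except ValueError:
            print(f'line {line_number}: pose values are not all numeric')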

Thank you very much. Hope to chat with you in CVPR :)