3DOM-FBK/deep-image-matching

How to check the matching result?

Closed this issue · 10 comments

Hi, I found some problems when processing my own data (large images).
When downsampled to 2K, the position calculation result is correct. When downsampled to 2K and processed with tiling preselection (tile_size set to half of the default, try_match_full_images set to true), it is still OK.
But when downsampled to 4K, the result is wrong. When downsampled to 4K and processed with tiling preselection (try_match_full_images tested as both true and false), it is still wrong.
It seems that the processing of large images may have a problem.

I suspect the matching result is not good, and I want to debug this.
I can find image pairs with features in the folder results_superpoint+lightglue_matching_lowres_quality_high/debug.
But there is no matching result for debugging. I have read the documentation and still do not know how to check the matching results.
Is there a way to check them?

Update: I have found show_matches.py in the root folder, but I cannot run it as described in the usage.
Error message is:
ModuleNotFoundError: No module named 'cv2'
I have tried to run pip install opencv-python, and got message:

Requirement already satisfied: opencv-python in c:\users\up2u\miniconda3\envs\deep-image-matching\lib\site-packages (4.9.0.80)
Requirement already satisfied: numpy>=1.17.0 in c:\users\up2u\miniconda3\envs\deep-image-matching\lib\site-packages (from opencv-python) (1.26.3)

Thank you.

Hi, the easier way to debug is to use the COLMAP GUI. Please check here for the COLMAP installation (https://colmap.github.io/install.html). Then:

  • run COLMAP GUI
  • File tab > New project
  • Click Database Open (not New), and select the database (.db format) that you can find in the results folder
  • Click Images Select - select the folder with images that you used
  • Click Save
  • Processing tab > Database Management
  • Click an image > click on the top right Overlapping images
  • Click Two-view geometries tab
  • Click an overlapping image and then Show matches
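If you prefer to skip the GUI, the same information can be read straight from the COLMAP database with Python's sqlite3 module. This is an illustrative sketch, not a script from the repo: `two_view_geometries` stores the geometrically verified matches per image pair, and COLMAP encodes the pair as `pair_id = image_id1 * (2**31 - 1) + image_id2`.

```python
# Sketch: list the number of verified matches per image pair from a COLMAP
# database, without opening the GUI. Not part of deep-image-matching itself.
import sqlite3

MAX_IMAGE_ID = 2**31 - 1  # constant used by COLMAP's pair_id encoding


def pair_id_to_image_ids(pair_id):
    """Invert COLMAP's pair_id encoding back to the two image ids."""
    image_id2 = pair_id % MAX_IMAGE_ID
    image_id1 = pair_id // MAX_IMAGE_ID
    return image_id1, image_id2


def list_verified_matches(database_path):
    con = sqlite3.connect(database_path)
    # map image_id -> file name
    names = dict(con.execute("SELECT image_id, name FROM images"))
    # 'rows' is the count of geometrically verified correspondences
    for pair_id, rows in con.execute(
        "SELECT pair_id, rows FROM two_view_geometries"
    ):
        id1, id2 = pair_id_to_image_ids(pair_id)
        print(f"{names[id1]} <-> {names[id2]}: {rows} verified matches")
    con.close()
```

Pairs with very few verified matches are the ones worth inspecting in the GUI.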

Another way is to use this script: https://github.com/3DOM-FBK/deep-image-matching/blob/master/show_matches.py
Here is how you can use it from the second Colab script (https://github.com/3DOM-FBK/deep-image-matching/blob/master/notebooks/colab_run_from_bash_custom_images.ipynb):

# Visualize results
# Pass to --images argument the names of the images (e.g. "img01.jpg img02.jpg") or their ids (e.g. "1 2") to visualize verified matches inside COLMAP database (change --type ['names', 'ids'])
%%bash
python3 ./deep-image-matching/show_matches.py \
  --images "1 2" \
  --type ids \
  --database /content/custom_example/results_superpoint+lightglue_matching_lowres_quality_high/database.db \
  --imgsdir /content/custom_example/images \
  --output /content/custom_example/matches.png

I am also updating the documentation.

Let me know if you manage to debug it.

About the ModuleNotFoundError: No module named 'cv2' error: it is strange, because opencv is a module needed to run deep-image-matching, so please check that you have activated the environment used to run main.py.
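A quick way to diagnose this is to check which interpreter the console is actually using and whether that interpreter can see cv2 (an illustrative snippet, not part of the repo; the conda env name is taken from the pip output above):

```python
# Check which Python the console runs and whether cv2 is importable from it.
# If sys.executable points outside the conda env, pip installed the package
# into a different interpreter than the one running the script.
import importlib.util
import sys

print("interpreter:", sys.executable)  # should point inside the conda env
if importlib.util.find_spec("cv2") is None:
    print("cv2 NOT found: activate the env first, "
          "e.g. `conda activate deep-image-matching`")
else:
    print("cv2 found")
```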

Thanks for your reply.

I forgot that I can use the COLMAP UI to see the matching result!

And about show_matches.py: I ran it in the same console in which I run main.py, and got the error ModuleNotFoundError: No module named 'cv2'.

OK, good. Please let us know if there is an issue with the matches when downsampling the images.

Hi, I have checked some pairs and found the matching result is quite different between the 2K and 4K images.

For example,

2K:
(screenshot: 2010_2013_2K_small)

4K:
(screenshot: 2010_2013_4K_small)

When the image is 4K size, the light in front of the main object is mismatched.
There are fewer feature points on the main object, and almost no pairs between the main objects of the two images.

Could you share the command you used? I imagine preselection is none and you are using superpoint+lightglue. Have you tried different local features and matchers? You could also try passing python main.py --dir assets/example_cyprus --pipeline superpoint+lightglue --config config/superpoint+lightglue.yaml. You can modify the detector and matcher options inside superpoint+lightglue.yaml. In general, I would say that deep-learning local features are trained on small images, so it is much better to work with the preselection option, which works per tile and should give you the best results. Let me know if the preselection solves the issue.
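The per-tile idea can be sketched roughly like this (a minimal illustration with numpy, not the actual deep-image-matching implementation; `extract_fn` stands in for any keypoint extractor): the full image is split into tiles, the extractor runs on each tile at a size closer to its training resolution, and the tile-local keypoint coordinates are shifted back into full-image coordinates.

```python
# Minimal sketch of per-tile keypoint extraction (illustrative only).
import numpy as np


def tile_and_extract(image, tile_size, extract_fn):
    """Run a keypoint extractor per tile and map tile-local coordinates
    back to full-image coordinates. extract_fn(tile) returns an (N, 2)
    array of (x, y) keypoints."""
    h, w = image.shape[:2]
    keypoints = []
    for y0 in range(0, h, tile_size):
        for x0 in range(0, w, tile_size):
            tile = image[y0:y0 + tile_size, x0:x0 + tile_size]
            kpts = extract_fn(tile)
            if len(kpts):
                # offset tile-local coordinates by the tile origin
                keypoints.append(kpts + np.array([x0, y0]))
    return np.concatenate(keypoints) if keypoints else np.empty((0, 2))
```

Because every tile gets its own extraction budget, keypoints end up spread over the whole frame instead of clustering on the most textured region.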

I just ran python main.py --dir xxxxx --pipeline superpoint+lightglue as in the readme.
I have also tried copying the config/superpoint+lightglue.yaml file to the base folder and then running python main.py --dir xxxxx --pipeline superpoint+lightglue -c config.yaml. (Once I tried adding the try_match_full_images option when tiling, and still got no good result.)
When processing 4K images, I tried running both with and without preselection, and could not get a good result either way.

I will try some other large-image data to see if it has the same problem.

If you want, you can share a folder with the two images of the statue (at full resolution), so that I can take a look.

Thanks for your help.
The image ZIP file is larger than 25 MB (18 images), so I could not upload it here.
I have uploaded it to https://gigafile.nu/, which can temporarily store files.
Here is the download link: https://53.gigafile.nu/0510-p1cc7f950c7d23246eed74a7bada1804c
password: 1234

Sorry for using a Japanese-language website; here is how to use it:
Enter the password in the left box, then click the button, and the download will start.

I think I may have found the reason.
When processing large images, the extractor parameter max_keypoints may need to be set larger.
Otherwise, most of the points may go to the background, not the main object.
When I tried 4K images with max_keypoints set to 32000, the result was almost the same as with 2K images and the default max_keypoints of 8000.
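That ratio matches a simple area argument: going from 2K to 4K roughly quadruples the pixel count, so keeping the same keypoint density requires roughly 4x the max_keypoints budget. A back-of-the-envelope check (the 2048x1536 / 4096x3072 resolutions are assumed typical sizes, not taken from this dataset):

```python
# Back-of-the-envelope: keep keypoint density constant when scaling up.
# Resolutions are assumed typical 2K/4K sizes, not from the actual dataset.
w2k, h2k = 2048, 1536
w4k, h4k = 4096, 3072

area_ratio = (w4k * h4k) / (w2k * h2k)
print(area_ratio)  # 4.0: a 4K frame has 4x the pixels of a 2K frame

max_keypoints_2k = 8000  # the default mentioned above
print(int(max_keypoints_2k * area_ratio))  # 32000, the value that worked at 4K
```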

Good. Sorry, I haven't had time to check the images yet. It sounds reasonable: keypoints are extracted based on a score, the texture of a 4K image can be much richer than that of a 2K image, and keypoints may not be extracted uniformly. A simple way to get more evenly distributed keypoints is to work by tiles. Thanks for the feedback, and feel free to open other issues or to collaborate on the project!