xuelunshen/gim

Request for Advice on Processing Large-Scale Sequential Image Data

takafire2222 opened this issue · 3 comments

Dear Developer,

I hope this message finds you well.

We are currently working with sequential image data consisting of over 1,000 images, similar to those captured by drones. In processing such a large-scale dataset, we have encountered several challenges, and we would greatly appreciate your insights on the following:

  1. Are there any best practices for handling such large sequential datasets (1000+ images) in the context of 3D reconstruction? Additionally, are there any parameters or configurations we should adjust when working with this type of data compared to more general datasets?

  2. Do you have any specific strategies or modifications to the existing code that you would recommend for handling extensive sequential data?

Any ideas you could share on adapting the current system to handle larger, sequential datasets would be immensely helpful.

Thank you in advance for your time and expertise.

Best regards,

I was on vacation last week, so this reply is a little late. Please forgive me.

If you use the 3D reconstruction released by GIM on 1000+ images, I strongly recommend that you modify the sampling logic for the reconstructed image pairs, which is here:

exhaustive_pairs = pairs_from_exhaustive.main(image_pairs, image_list=images)
# if j - i > 5:
#     continue
If you uncomment these two lines, the code will only pair each image with its 5 adjacent images for reconstruction, rather than matching all 1000+ images against each other exhaustively.

For reconstruction, it only makes sense to match images that have overlapping areas. When your drone flies far away, an image does not need to be matched against images taken far from it, because they share no overlapping pixels.
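For reference, the same windowed sampling can be written as a small standalone helper. The sketch below is only illustrative and is not code from the repository; the function name, the window parameter, and the assumption that images is an ordered list of file names are assumptions for illustration.

from itertools import combinations

def sequential_pairs(images, window=5):
    # Keep only pairs of images that are at most `window` apart in the
    # capture sequence, mirroring the commented-out `if j - i > 5: continue`
    # logic mentioned above.
    pairs = []
    for i, j in combinations(range(len(images)), 2):
        if j - i > window:
            continue
        pairs.append((images[i], images[j]))
    return pairs

# For 1,200 ordered images and window=5 this produces about 6,000 pairs,
# instead of the roughly 719,400 exhaustive pairs.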

Hello,

Thank you for your previous advice. Thanks to your guidance, I was able to run gim_dkm successfully with the sequential pairing method.

Using this method, processing 1,200 images while pairing each image with its 40 nearest neighbors in the sequence took about 24 hours on an NVIDIA RTX 6000 Ada GPU. The results were better than when using only 5 neighbors.

I have another question. I also have a large amount of material shot in the past, amounting to 6,000 images, and I feel that running gim_dkm sequentially on all of them would require a significant amount of time and effort.

Do you have any ideas or suggestions for performing dense reconstruction and recomputing matches with gim_dkm when sparse results are already available (cameras.bin, images.bin, points3D.bin)? This would be similar to the functionality of pipeline_loftr.py in hloc.

I believe it would be meaningful to enhance legacy datasets with gim_dkm, which is why I wanted to propose this idea.

Thank you for your time and assistance.

Best regards,

As you mentioned, incremental reconstruction on an already reconstructed scene can follow the approach in pipeline_loftr.py from hloc. My suggestion is to reconstruct only on image pairs that overlap: you can use image retrieval methods to find the overlapping image pairs and then proceed with incremental reconstruction.
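As a rough sketch of that retrieval-based pairing with hloc (the paths, the NetVLAD configuration, and num_matched=20 below are placeholders to adapt, not fixed recommendations):

from pathlib import Path
from hloc import extract_features, pairs_from_retrieval

images = Path('datasets/my_scene/images')   # placeholder image directory
outputs = Path('outputs/my_scene')          # placeholder output directory
sfm_pairs = outputs / 'pairs-netvlad.txt'

# 1. Extract one global descriptor per image (NetVLAD as an example).
retrieval_conf = extract_features.confs['netvlad']
retrieval_path = extract_features.main(retrieval_conf, images, outputs)

# 2. Keep only each image's 20 most similar images as candidate pairs,
#    so distant, non-overlapping views are never matched.
pairs_from_retrieval.main(retrieval_path, sfm_pairs, num_matched=20)

# 3. Run gim_dkm matching and incremental reconstruction on sfm_pairs
#    instead of exhaustive pairs.

If the sparse results (cameras.bin, images.bin, points3D.bin) already exist, the model can be loaded with pycolmap.Reconstruction to obtain the registered image list, and the new gim_dkm matches can then be triangulated on top of the existing poses, similar to how the hloc pipelines triangulate with known camera poses.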