Question about the dataset of Application 1: Train ∇-RANSAC for two-view geometry estimation
Thank you for your great work. I have some questions about the dataset of Application 1. What dataset does Application 1 use? Is it diff_ransac_data? I noticed that the dataset contains pair_.npy files instead of images. How do you convert images into these pair_.npy files? Is the input to ∇-RANSAC training a set of tentative correspondences? I also read in the paper that the feature matcher and the trainable quality function f can be trained and optimized together. Is this not done in Application 1? Sorry for asking so many questions at once; looking forward to your reply!
Dear author, I have a question about DS_Block. In w1 = self.ds_0(points), is the input points the positions of the points, and is the output the probability of these points being sampled?
Hi! Thanks for your interest. There are two applications (1 & 2) that train an importance score prediction network (e.g., the backbone from CLNet) together with the differentiable RANSAC.
If you are interested in how the data are dumped (feature detection and matching), check out ARS-MAGSAC or NG-RANSAC.
Regarding the statement that the feature matcher and the trainable quality function f can be trained and optimized together: please see Application 3, which trains the feature matcher (e.g., LoFTR) together with the differentiable RANSAC.
I also recommend checking our poster. Feel free to ask any questions.
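To make the inputs and outputs of that score prediction concrete, here is a minimal illustrative sketch (my own toy code, not the actual DS_Block or CLNet architecture; the class name and feature dimensions are made up): an MLP that maps each tentative correspondence, e.g. its normalized coordinates plus a matcher confidence, to a per-point sampling probability that a differentiable sampler could then use.

```python
import torch
import torch.nn as nn

class ToyImportanceNet(nn.Module):
    """Toy stand-in for an importance-score network: maps each tentative
    correspondence to a probability of being sampled."""
    def __init__(self, in_dim=5, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):
        # points: (B, N, 5) = normalized (x1, y1, x2, y2) plus a matcher confidence
        logits = self.mlp(points).squeeze(-1)  # (B, N) per-correspondence score
        return torch.sigmoid(logits)           # in (0, 1): weights for drawing minimal samples

# Example: importance weights for 2000 tentative correspondences of one image pair
net = ToyImportanceNet()
w = net(torch.randn(1, 2000, 5))  # shape (1, 2000); higher weight = more likely to be sampled
```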
Dear author, I noticed the statement in the paper: "Indoors is trained with the coordinates and confidence output from the most commonly used feature detector and matcher, i.e., SuperPoint with SuperGlue." May I ask whether this part is open source?
Thank you for your kind reply.
Dear author, I attempted to run your Python script with the following command:
python test.py -nf 2000 -m pretrained_models/saved_model_5PC_l_epi/model.net -bs 32 -fmat 1 -sam 3 -ds sacre_coeur -t 2 -pth "diff_ransac_data/data"
which gives the following results:
Rotation error = 27.368475788562268 | Translation error = 17.417572128006835
Rotation error median= 4.00565242767334 | Translation error median= 5.565282344818115
AUC scores = [0.41858143, 0.47052947, 0.5377123]
Invalid Pairs (ignored in the following metrics): 59
F1 Score: 43.95 percent
% Inliers: 25.57
Mean Epi Error: 18.90
Median Epi Error: 1.43
The rotation error and translation error medians are still far from the 2.91/1.36 reported in Table 3, and from the 0.84/0.74 and 0.79/0.53 in Table 5. Could you please provide guidance on how to obtain results closer to those reported in the paper?
It is open source for sure. For indoors, we ran SuperPoint + SuperGlue on ScanNet to dump the data. I will commit a link to the dumped data, along with the code and parameters used for that.
Glad to hear it. test.py runs the basic RANSAC loop in Python with our sampler, and it gave you the results on the sacre_coeur scene only.
To achieve the SOTA results from the paper, run test_magsac.py instead; please install MAGSAC in Python and test with it.
Note that passing '-bm 1' instead of '-ds <scene>' (a specific scene) will test on the full list of test scenes.
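Assuming test_magsac.py accepts the same command-line flags as test.py shown above (this is an assumption; please check its argument parser), a benchmark run over all test scenes would look roughly like:

python test_magsac.py -nf 2000 -m pretrained_models/saved_model_5PC_l_epi/model.net -bs 32 -sam 3 -bm 1 -t 2 -pth "diff_ransac_data/data"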
Thank you! I added "-bm 1" when running test.py and found that it did not work well on many scenes, for example:
Rotation error = 36.36013293599749 | Translation error = 35.57712629499969
Rotation error median= 2.6460447311401367 | Translation error median= 26.830575942993164
Rotation error = 50.85618265550007 | Translation error = 44.1930445529126
Rotation error median= 17.746257781982422 | Translation error median= 45.45615768432617
Rotation error = 88.51307068209906 | Translation error = 47.50164546813819
Rotation error median= 95.91495513916016 | Translation error median= 48.0619010925293
Is this due to the model's lack of generalisation, or am I doing something wrong somewhere? I also have a basic question: is there any relationship between F1, AUC, the epipolar error, and the rotation/translation errors? When testing on the list of test scenes, I found that some scenes have better F1 and AUC than others, yet much worse rotation and translation errors.
There is no point in running test.py on all scenes; it is only meant as a demo that works without installing MAGSAC. Please run test_magsac.py instead if you want results comparable to the paper, as it includes the final refinements, etc. The F1 score and epipolar errors are used for fundamental matrix evaluation. R and T are decomposed from the essential matrix, and you get their errors with respect to the ground truth. AUC is the area under the recall curve over different thresholds, based on the pose error err(pose) = max(err_R, err_T).
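For reference, here is a minimal sketch of how such angular pose errors and the pose AUC are commonly computed (my own illustrative code, not the repository's evaluation script; the 5/10/20 degree thresholds are an assumption):

```python
import numpy as np

def rotation_error_deg(R_est, R_gt):
    # Angle of the relative rotation R_est^T @ R_gt, in degrees.
    cos = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def translation_error_deg(t_est, t_gt):
    # Angle between translation directions, in degrees (sign/scale of t is not observable).
    t_est = t_est / np.linalg.norm(t_est)
    t_gt = t_gt / np.linalg.norm(t_gt)
    return np.degrees(np.arccos(np.clip(abs(t_est @ t_gt), 0.0, 1.0)))

def pose_auc(pose_errors, thresholds=(5.0, 10.0, 20.0)):
    # pose_errors[i] = max(err_R, err_T) for pair i; AUC of the recall curve per threshold.
    errors = np.sort(np.asarray(pose_errors))
    recall = (np.arange(len(errors)) + 1) / len(errors)
    errors = np.concatenate(([0.0], errors))
    recall = np.concatenate(([0.0], recall))
    aucs = []
    for th in thresholds:
        last = np.searchsorted(errors, th)
        r = np.concatenate((recall[:last], [recall[last - 1]]))
        e = np.concatenate((errors[:last], [th]))
        aucs.append(np.trapz(r, x=e) / th)
    return aucs
```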
Sorry, I am now running test_magsac.py and hit the following problem:
Traceback (most recent call last):
  File "test_magsac.py", line 233, in <module>
    test(model, test_loader, opt)
  File "test_magsac.py", line 114, in test
    E, mask, save_samples = pymagsac.findEssentialMatrix(
TypeError: findEssentialMatrix(): incompatible function arguments. The following argument types are supported:
1. (correspondences: numpy.ndarray[numpy.float64], K1: numpy. w2: float, h2: float, probabilities: numpy.ndarray[numpy.float64], sampler: int = 4, use_magsac_plus_plus: bool = True, sigma_th: float = 1.0, conf: float = 0.99, min_iters: int = 50, max_iters: int = 1000, partition_num: int = 5) -> tuple
Is this a problem with my magsac installation? Or do I need to change the code?
I have resolved the issue. Installing with 'pip install pymagsac' installs pymagsac==0.2, whereas building with CMake installs pymagsac==0.3. After reinstalling, test_magsac.py runs successfully. However, I noticed that the output does not include the Rotation error | Translation error and Rotation error median | Translation error median estimates. How can I modify the script so that it reports these errors?
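For illustration, here is a minimal sketch of how the essential matrix returned by pymagsac could be decomposed into R and t with OpenCV and compared against the ground-truth pose; the function pose_errors_from_E and its arguments are hypothetical names, and test_magsac.py may compute this differently internally:

```python
import cv2
import numpy as np

def pose_errors_from_E(E, pts1, pts2, K1, K2, R_gt, t_gt):
    """Recover (R, t) from an estimated essential matrix and report angular errors
    against the ground-truth pose. Illustrative only."""
    # Normalize pixel coordinates so recoverPose can be called with identity intrinsics.
    pts1_n = cv2.undistortPoints(pts1.reshape(-1, 1, 2).astype(np.float64), K1, None)
    pts2_n = cv2.undistortPoints(pts2.reshape(-1, 1, 2).astype(np.float64), K2, None)
    _, R, t, _ = cv2.recoverPose(E.astype(np.float64), pts1_n, pts2_n, np.eye(3))

    # Rotation error: angle of the relative rotation, in degrees.
    cos_r = (np.trace(R.T @ R_gt) - 1.0) / 2.0
    err_R = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))

    # Translation error: angle between directions (t is only known up to sign/scale).
    t = t.ravel() / np.linalg.norm(t)
    t_gt = np.asarray(t_gt).ravel() / np.linalg.norm(t_gt)
    err_t = np.degrees(np.arccos(np.clip(abs(t @ t_gt), 0.0, 1.0)))
    return err_R, err_t
```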
Sorry, I forgot to ask an important question, and I could not find the answer in the paper: what is the unit of the translation error, meters or centimeters?
Hi, please check the text below Eqn. 6 in the paper: both the translation and rotation errors are angular errors, measured in degrees.
Thanks for the answer; I'll close the issue for now.