Question about the evaluated results
XJTU-Haolin opened this issue · 2 comments
Dear authors,
Thanks for your excellent work!
After running the evaluation with your released checkpoint (mickey.ckpt), I got the following results:
{
  "Average Median Translation Error": 1.9044957215846383,
  "Average Median Rotation Error": 37.06174682877119,
  "Average Median Reprojection Error": 142.83572575743437,
  "Precision @ Pose Error < (25.0cm, 5deg)": 0.09847359178711333,
  "AUC @ Pose Error < (25.0cm, 5deg)": 0.2579417106896131,
  "Precision @ VCRE < 90px": 0.45265432932594896,
  "AUC @ VCRE < 90px": 0.7204040026315026,
  "Estimates for % of frames": 1.0
}
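For reference, the precision and AUC numbers above are fractions of frames whose pose error falls below the stated thresholds. A minimal sketch of how such metrics could be computed from per-frame errors is shown below; the function names and the error values are hypothetical illustrations, not taken from the released evaluation code:

```python
# Hypothetical sketch: precision and AUC for a pose-error threshold,
# given per-frame translation errors (meters) and rotation errors (degrees).

def precision_at(t_errs, r_errs, t_thresh=0.25, r_thresh=5.0):
    """Fraction of frames whose pose error is below both thresholds."""
    hits = sum(1 for t, r in zip(t_errs, r_errs)
               if t < t_thresh and r < r_thresh)
    return hits / len(t_errs)

def auc_at(t_errs, r_errs, t_thresh=0.25, r_thresh=5.0, steps=100):
    """Approximate AUC: mean precision as both thresholds scale from 0 up
    to their maximum values."""
    total = 0.0
    for i in range(1, steps + 1):
        frac = i / steps
        total += precision_at(t_errs, r_errs, t_thresh * frac, r_thresh * frac)
    return total / steps

# Toy example with four frames (made-up numbers):
t_errs = [0.10, 0.30, 0.05, 1.20]
r_errs = [2.0, 4.0, 1.0, 30.0]
print(precision_at(t_errs, r_errs))  # 0.5 (two of four frames within both thresholds)
```

The AUC variant rewards estimates that are accurate even under tighter thresholds, so it is always less than or equal to the precision at the full threshold.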
However, these are not comparable with the results in your paper.
For example: Average Median Reprojection Error is 142.8 (checkpoint) vs 126.9 (paper), Average Median Translation Error is 1.90 (checkpoint) vs 1.59 (paper), and Average Median Rotation Error is 37.1 (checkpoint) vs 25.9 (paper).
Could you please offer any guidance?
Thanks for your time.
Haolin
Hello Haolin!
Thanks for your interest in our work!
Just to confirm, are those results on the validation set or the test set? The results reported in the paper are for the test set, which needs to be evaluated through the Map-free benchmark website. Your numbers look very similar to the validation results, so I just want to make sure before we look deeper into the problem.
Thanks a lot!
Now I understand. The results I got were computed on the validation set.
Thanks for your explanation!