ToniRV/NeRF-SLAM

Question about evaluation results

mix345 opened this issue · 1 comment

Hi @ToniRV

Thank you very much for sharing your work! The code and paper are both impressive!

I have some questions about the evaluation. I assume the evaluation values in the paper can be obtained by setting `self.evaluate=True` and registering `ref_frame` in `nerf_fusion.py`.

1. Color space: I enabled the visualization flag and checked the images used for the PSNR calculation. I noticed that both the reference and predicted images are darker than the original data and seem to be in a different color space. I think the PSNR will be higher in this color space, because all pixel values are suppressed. After applying the `linear_to_srgb` function, the colors look natural, and the PSNR values on training views are close to the value I get by running Instant-NGP's original implementation on the Replica office0 scene (around 44 dB after 20,000 iterations). What do you think about this?

[screenshot: PSNR comparison]

2. PSNR values: In Table 1 of the paper, I noticed that some methods show a very low PSNR, below 10 dB (iMAP*, TSDF, sigma-Fusion). Could you share details on these sequences, if possible? A PSNR below 10 dB is not very common for novel view synthesis. For example, I measured the PSNR of an all-black image (assuming the rendering is completely broken) against every color image in Replica office0 and took the average, and it is still above 10 dB.

3. Measurement timing: Could you tell me when you measured the PSNR/depth numbers? Is it when the real-time SLAM process finishes, or when training reaches a certain iteration count (25000 in `stop_iters`) after the SLAM process ends?
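To make question 1 concrete, the suspected effect can be sketched numerically. This is a minimal sketch, not the repository's evaluation code: `linear_to_srgb` below is the standard sRGB transfer function, and the dark synthetic image pair is an assumption standing in for the linear-space renders; the exact function in `nerf_fusion.py` may differ.

```python
import numpy as np

def linear_to_srgb(img):
    # Standard sRGB transfer function: linear segment below 0.0031308,
    # gamma 1/2.4 above (assumed to match the repo's conversion).
    img = np.clip(img, 0.0, 1.0)
    return np.where(img <= 0.0031308,
                    12.92 * img,
                    1.055 * np.power(img, 1.0 / 2.4) - 0.055)

def psnr(pred, ref):
    # PSNR in dB; images assumed to be in [0, 1].
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Dark synthetic image pair: the same additive noise, measured in both spaces.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 0.2, size=(64, 64, 3))   # dark linear-space image
pred = np.clip(ref + rng.normal(0.0, 0.01, ref.shape), 0.0, 1.0)

print(f"linear-space PSNR: {psnr(pred, ref):.2f} dB")
print(f"sRGB-space PSNR:   {psnr(linear_to_srgb(pred), linear_to_srgb(ref)):.2f} dB")
```

Because the sRGB curve is steep near zero, the same pixel error is amplified for dark images, so the linear-space PSNR comes out several dB higher, consistent with the suppression effect described in question 1.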
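Likewise, the black-image sanity check from question 2 is easy to reproduce. The synthetic reference frame below is an assumption standing in for the Replica office0 images, not actual dataset data:

```python
import numpy as np

def psnr(pred, ref, max_val=1.0):
    # PSNR in dB between two images in [0, max_val].
    mse = np.mean((pred.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

# An all-black "rendering" against a dark-ish reference frame. Since the
# error is just the reference itself, PSNR = 10*log10(1 / E[ref^2]), which
# already exceeds 10 dB whenever the mean squared intensity is below 0.1.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 0.5, size=(480, 640, 3))  # stand-in for a Replica frame
black = np.zeros_like(ref)
print(f"PSNR(black vs ref) = {psnr(black, ref):.2f} dB")  # roughly 10.8 dB here
```

This supports the point in question 2: even a completely broken (all-black) rendering can score above 10 dB against typical indoor imagery, so sub-10 dB numbers in Table 1 are surprising.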

Thanks in advance!

Hello @ToniRV,
Could you reply to this? I think this is a critical part of fully understanding your method.

Thanks,