Simple evaluation on monodepth2's code
Opened this issue · 1 comment
xapaxca commented
Hi Jaime,
Again, thank you for the work!
I am trying to evaluate your KBR weights on Monodepth2's evaluation code with Eigen split and median scaling.
See the code: evaluate_depth_kbr
However, the results I'm obtaining are worse than anticipated. Please see the details below:
Loading weights with prefix 'nets.depth.encoder.':
Total number of keys: 340
Number of missing keys: 0
Number of unexpected keys: 0
Loading weights with prefix 'nets.depth.decoders.disp.':
Total number of keys: 28
Number of missing keys: 0
Number of unexpected keys: 0
-> Computing predictions with size 640x192
-> Evaluating
Mono evaluation - using median scaling
Scaling ratios | med: 1.755 | std: 0.170
abs_rel | sq_rel | rmse | rmse_log | a1 | a2 | a3 |
& 0.137 & 1.731 & 5.461 & 0.215 & 0.851 & 0.944 & 0.974 \\
-> Done!
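For clarity, by median scaling I mean Monodepth2's per-image scheme: each predicted depth map is rescaled so its median matches the ground-truth median. A minimal NumPy sketch of the idea (helper name `median_scale` is mine, for illustration):

```python
import numpy as np

def median_scale(pred_depth, gt_depth):
    # Per-image median scaling: rescale the prediction so that its
    # median matches the ground-truth median, as in Monodepth2's eval.
    ratio = np.median(gt_depth) / np.median(pred_depth)
    return pred_depth * ratio, ratio

# Toy example: a prediction off by a constant factor is fully corrected.
gt = np.array([2.0, 4.0, 8.0])
pred = np.array([1.0, 2.0, 4.0])
scaled, ratio = median_scale(pred, gt)
```

So a consistently high ratio (like the 1.755 above) means the raw predictions are systematically too small by roughly that factor, even though the scaled metrics absorb it.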
Scaling the disparities does not significantly change the results:
pred_disp, _ = disp_to_depth(pred_disp, opt.min_depth, opt.max_depth)
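For reference, this is `disp_to_depth` as I understand it from Monodepth2's `layers.py`: the network's sigmoid output is mapped linearly in disparity space between `1/max_depth` and `1/min_depth`, then inverted to depth:

```python
def disp_to_depth(disp, min_depth, max_depth):
    # Convert the network's sigmoid output into a depth prediction.
    # Disparity is interpolated linearly between 1/max_depth and
    # 1/min_depth, then inverted to obtain depth.
    min_disp = 1 / max_depth
    max_disp = 1 / min_depth
    scaled_disp = min_disp + (max_disp - min_disp) * disp
    depth = 1 / scaled_disp
    return scaled_disp, depth
```

Since this is just a fixed monotonic remapping, it shouldn't be the source of a large global scale discrepancy, which matches what I observed.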
Could I be overlooking something?
Thank you for your assistance!
jspenmar commented
Hi xapaxca,
At the moment I don't have time to look at the code you sent in much detail, but here are some off-the-top-of-my-head recommendations:
- Have you visualised the predictions and seen if they're similar to those in the paper?
- Are the images being ImageNet-standardised?
- Our results are in https://github.com/jspenmar/slowtv_monodepth/tree/main/results/kbr/base. From what I can see, the median scaling factor is very different.
- Use quickstart/run as a reference for loading the model and so on.
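To clarify what I mean by ImageNet standardisation: after scaling images to [0, 1], they should be normalised with the standard ImageNet mean/std used by pretrained backbones. A minimal NumPy sketch of the idea (check quickstart/run for the exact pipeline we use):

```python
import numpy as np

# Standard ImageNet statistics used by torchvision pretrained backbones.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406]).reshape(1, 3, 1, 1)
IMAGENET_STD = np.array([0.229, 0.224, 0.225]).reshape(1, 3, 1, 1)

def standardize(images):
    # Normalise a (B, 3, H, W) batch already scaled to [0, 1].
    return (images - IMAGENET_MEAN) / IMAGENET_STD
```

Monodepth2's own evaluation only scales images to [0, 1] without this normalisation, so feeding un-standardised inputs to a model trained with it would degrade the predictions.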