Evaluation metrics on MipNeRF360
Closed this issue · 2 comments
Thanks for sharing the great work! I have a small question about how you compute the evaluation metrics on the MipNeRF360 dataset in Tab. 1:
"Where 1u is equal to the object’s largest dimension. "
Can you point me to the code where you compute this object's largest dimension to normalize the translation error?
For the other methods, did you compute the numbers yourself on this dataset, or are they taken from other papers?
Hi,
thanks for your interest in the project, and thanks for waiting for my answer. The code for computing the largest dimension is not included in this repository; the conversion was applied before putting the numbers into the tables.
I report here the ratios that I used:
| Model | Ratio with original |
|---|---|
| Bicycle | 0.01 |
| Bonsai | 0.016495 |
| Counter | 0.0367 |
| Garden | 0.009734 |
| Kitchen | 0.0152 |
| Room | 0.0165 |
| Stump | 0.0043 |
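To make the conversion concrete, here is a minimal sketch of how such per-scene ratios could be applied. The dictionary name, the function name, and the direction of the conversion (multiplying a scene-unit error by the ratio to get object-relative units) are my assumptions for illustration, not code from this repository:

```python
# Hypothetical helper: rescale a translation error using the per-scene
# ratios from the table above, so that 1 unit equals the object's
# largest dimension. The multiply-vs-divide convention is an assumption.

SCENE_RATIOS = {
    "bicycle": 0.01,
    "bonsai": 0.016495,
    "counter": 0.0367,
    "garden": 0.009734,
    "kitchen": 0.0152,
    "room": 0.0165,
    "stump": 0.0043,
}


def normalized_translation_error(t_err_scene_units: float, scene: str) -> float:
    """Convert a translation error from scene units to object-relative units."""
    return t_err_scene_units * SCENE_RATIOS[scene]
```

For example, a raw error of 100 scene units on Bicycle would map to 1.0 object-relative unit under this convention.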
For the other methods, I re-ran them myself, mainly because some of them were not tested on certain datasets and/or used slightly different testing protocols, so I tried to put everyone on the same footing, following the iNeRF protocol.
Thanks!