About Evaluation Scripts
jjongs97 opened this issue · 4 comments
Thanks for releasing the code for this awesome work!
Could you please provide the evaluation scripts? I am confused about the evaluation of multimodal completion. Thank you.
Thanks!
For the multimodal completion, we follow the script provided by MSC: https://github.com/ChrisWu1997/Multimodal-Shape-Completion/tree/master/evaluation. We will also provide the script soon.
Please also let me know which part of the evaluation confuses you, so that I can clarify it for you. Sorry for the inconvenience.
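For reference, the MSC evaluation measures completion diversity with metrics such as Total Mutual Difference (TMD), which compares the k completions generated for one partial input against each other. Below is a minimal illustrative sketch in Python/NumPy; the function names and the exact normalization are assumptions on my part, so please defer to the MSC script for the authoritative definitions.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def total_mutual_difference(completions):
    """Average pairwise Chamfer distance among k completions of one partial shape.

    Illustrative only -- the normalization in the official MSC script may differ.
    """
    k = len(completions)
    total = sum(
        chamfer_distance(completions[i], completions[j])
        for i in range(k)
        for j in range(i + 1, k)
    )
    return total / (k * (k - 1) / 2)
```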
I don't fully understand the description in the paper.
"For a fair comparison, we also give the baseline methods additional points within the truncation threshold."
So I would like to ask a question about the setting.
Hi, sorry for the late reply!
Because we fill the missing regions with 0.2 (the truncation value), the model is essentially given some information about the boundaries of those regions, which the point-cloud-based baselines do not have. So, for a fair comparison, we use fewer SDF grid cells when computing the metrics. For instance, suppose the resolution is 64 and the bottom half of the shape is missing. In that case we fill 33 (or more) slices of the grid with 0.2 instead of 32 to indicate the missing region. This effectively gives the other methods additional points during the evaluation.
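To make the setup concrete, here is a minimal sketch of the adjustment, assuming a NumPy SDF volume; the resolution, the truncation value of 0.2, and the bottom-half mask follow the example above, while the array names and the choice of axis are hypothetical.

```python
import numpy as np

res, trunc = 64, 0.2  # grid resolution and truncation value from the example above
sdf = np.zeros((res, res, res), dtype=np.float32)  # placeholder full SDF grid

# Naive masking: fill exactly the bottom half (32 slices along one axis) with
# the truncation value to mark it as missing.
sdf_naive = sdf.copy()
sdf_naive[:, :res // 2, :] = trunc

# Fair-comparison masking: fill one extra slice (33 instead of 32), shrinking
# the known region our SDF-based model sees; equivalently, the point-cloud
# baselines receive additional points within the truncation threshold.
sdf_fair = sdf.copy()
sdf_fair[:, :res // 2 + 1, :] = trunc
```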
Could you also provide the language-guided evaluation script, so that subsequent work can align metrics with yours and reference it?
Thanks very much.