naver/croco

pre-training code

Huiimin5 opened this issue · 3 comments

Dear author,

Thank you for your interesting contribution. Do you mind sharing the pre-training code, too? 

Many thanks

Hi @Huiimin5
We are currently trying to get the internal authorization to release the pre-training code.
We'll let you know when this happens (hopefully soon, i.e., within the next few months).

Thank you for your response.

I noticed that the results of several SOTA methods reported in your paper are not consistent with those in their original papers. The NYUv2 depth estimation results for MAE and MultiMAE can be as high as 85.1 and 86.4, respectively (https://github.com/EPFL-VILAB/MultiMAE), but in Table 3 of your paper these numbers are 79.6 and 83.0. I am confused about where this difference comes from.