ShaohuaDong2021/DPLNet

Question about the preprocessing of depth images


When training on the NYUv2 dataset, did you use HHA-encoded images as inputs, or did you just use colorized depth images? Furthermore, did you crop the white border of the depth images?

We do not use HHA encoding; we only use the raw depth images, and we do not crop the white border of the depth images.
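A minimal sketch of what "raw depth input" could look like in practice: load the depth map, normalize it, and replicate it to three channels for the depth branch. The file path, the min-max normalization, and the channel replication are illustrative assumptions, not DPLNet's confirmed pipeline.

```python
# Sketch: raw depth as network input, without HHA encoding or border cropping.
# Path and normalization scheme are assumptions for illustration only.
import numpy as np
from PIL import Image

def load_raw_depth(path):
    # NYUv2 depth maps are commonly stored as 16-bit PNGs.
    depth = np.asarray(Image.open(path), dtype=np.float32)
    # Scale to [0, 1]; no HHA encoding and no border cropping.
    depth /= max(depth.max(), 1e-6)
    # Replicate to 3 channels so the depth branch matches an RGB-style input.
    return np.repeat(depth[..., None], 3, axis=-1)

depth_input = load_raw_depth("nyuv2/depth/0001.png")  # (H, W, 3), float32
```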

Thanks for your reply. I have another question regarding image preprocessing on the SunRGBD dataset. This dataset contains images with different sizes and aspect ratios. How did you preprocess them to train and evaluate on this dataset fairly?

We followed DFormer and cropped the images to the same resolution.
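For illustration, a hedged sketch of how variable-size SunRGBD images might be unified to one resolution by resizing up when needed and then cropping. The 480x640 target, the random crop for training, and the center crop for evaluation are assumptions in the spirit of DFormer-style preprocessing, not the repository's exact code.

```python
# Sketch: unify image sizes by resizing (if too small) then cropping to a
# fixed resolution. Target size and crop policy are assumptions.
import random
from PIL import Image

TARGET_H, TARGET_W = 480, 640  # assumed common training resolution

def crop_to_fixed(rgb, depth, train=True):
    # Upscale if either dimension is smaller than the target.
    w, h = rgb.size
    scale = max(TARGET_W / w, TARGET_H / h, 1.0)
    if scale > 1.0:
        w, h = int(round(w * scale)), int(round(h * scale))
        rgb = rgb.resize((w, h), Image.BILINEAR)
        depth = depth.resize((w, h), Image.NEAREST)  # avoid blending depth values
    # Random crop for training, center crop for evaluation.
    if train:
        x = random.randint(0, w - TARGET_W)
        y = random.randint(0, h - TARGET_H)
    else:
        x, y = (w - TARGET_W) // 2, (h - TARGET_H) // 2
    box = (x, y, x + TARGET_W, y + TARGET_H)
    return rgb.crop(box), depth.crop(box)
```

Applying the same crop box to the RGB image and its depth map keeps the two modalities spatially aligned, which is the main constraint when preprocessing paired RGB-D inputs.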