autonomousvision/unimatch

tartan air dataset

Closed this issue · 5 comments

I know that the TartanAir dataset's depth maps are not disparity maps, but I don't see any code that converts depth to disparity in the dataset build code.

Could you please clarify what the intended behavior is?

Hi, the tartan air dataset is only used for stereo disparity estimation in our experiments and it's not used for the depth task.

Thank you for your kindness.
I understand that the TartanAir dataset provides ground-truth (GT) values as left_depth.npy. I believe these are depth values, but am I misunderstanding something?
https://github.com/castacks/tartanair_tools/blob/master/data_type.md#depth-image

If I'm mistaken, I would appreciate your correction. As always, I send my admiration for your efforts.

Hi @haofeixu

In your 'dataset.py' code, I noticed that for common datasets like KITTI, disparity images are loaded as GT. However, for datasets like TartanAir, the code appears to load depth maps instead, as below:

disp_files = sorted(glob(data_dir + '/*/*/*/*/depth_left/*.npy'))

disp_files = sorted(glob(data_dir + '/*/*/*left.depth.png'))

Since these two quantities (disparity vs. depth) represent different things, I'm concerned the mismatch could affect training. Could you clarify whether this is the case, and whether the difference in ground-truth representation between datasets is intentional?
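For reference, depth and disparity are related through the stereo camera geometry (disparity = fx * baseline / depth), so a depth map can be converted before training. A minimal sketch, assuming the stereo parameters listed in the tartanair_tools documentation (focal length fx = 320 px, baseline = 0.25 m — verify these against your own data before use):

```python
import numpy as np

# Assumed TartanAir stereo parameters (check the tartanair_tools docs):
FX = 320.0       # focal length in pixels
BASELINE = 0.25  # stereo baseline in meters

def depth_to_disparity(depth, fx=FX, baseline=BASELINE):
    """Convert a metric depth map (meters) to a disparity map (pixels)."""
    depth = np.asarray(depth, dtype=np.float32)
    # disparity = fx * baseline / depth; clamp depth to avoid division by zero
    return fx * baseline / np.maximum(depth, 1e-6)

# Example: with fx * baseline = 80, a pixel 80 m away has 1.0 px disparity
disp = depth_to_disparity(np.array([[80.0, 8.0]]))
```

This would be applied to each loaded left_depth.npy array before it is used as a disparity target; whether the unimatch pipeline itself performs this conversion is exactly the question raised above.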

@haofeixu Thank you sir!