Training for metric depth on a custom dataset.
abhishek0696 opened this issue · 1 comment
abhishek0696 commented
Hi @LiheYoung @1ssb, I tried using the depth_to_pointcloud script as-is to estimate depth for some RGB images for which I have pixel-wise ground truth. As expected, because I used the pre-trained outdoor weights unchanged, I got inaccurate depths at farther distances. I would like to train the ZoeDepth + Depth Anything model on my custom dataset. It would be great to get some instructions on the following:
- How can I stage my dataset in the same format as the built-in datasets? My image size is (512, 1024).
- Can I create a custom config for my dataset, or do I need to reuse the NYU/KITTI ones? If the latter, is there a way to get accurate results at my image size, given that (as far as I know) the images in the built-in datasets are considerably smaller?
- What modifications do I need to make to the training script for a custom dataset? I would also like to use the small or base model to accelerate inference.
Thanks in advance!
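For the dataset-staging question, one common convention (used by the ZoeDepth-style NYU/KITTI loaders) is a split text file with one line per sample: an RGB path, a depth path, and a focal length in pixels. The sketch below generates such a file; the directory layout, file stems, and the focal value are hypothetical placeholders, so verify the exact line format expected by the loader you end up using.

```python
import os
import tempfile

# Placeholder focal length in pixels; replace with your camera's actual fx.
FOCAL = 1024.0

def write_split(out_path, stems, focal=FOCAL):
    """Write a split file with one '<rgb path> <depth path> <focal>' line per sample.

    Assumes a hypothetical layout of rgb/<stem>.png alongside depth/<stem>.png,
    where the depth PNGs store metric depth with a fixed scale factor
    (e.g. 16-bit millimetres), as the built-in datasets do.
    """
    with open(out_path, "w") as f:
        for stem in stems:
            f.write(f"rgb/{stem}.png depth/{stem}.png {focal}\n")

# Demo: write a two-sample training split to a temporary directory.
out = os.path.join(tempfile.mkdtemp(), "my_dataset_train.txt")
write_split(out, ["000000", "000001"])
```

From there, a custom config would typically point at this split file and declare the (512, 1024) input size, rather than reusing the NYU/KITTI crop sizes directly.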
1ssb commented
I think this is something @LiheYoung can help with better.
Try an approach based on metric point maps: use the (x, y, z) coordinates consistent with the metric depth you have, and supervise the model directly using the ZoeDepth architecture instead of fine-tuning on Depth Anything.
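To make the point-map idea concrete, here is a minimal NumPy sketch of how such supervision could work: unproject both the predicted and ground-truth metric depth into per-pixel (x, y, z) maps with the camera intrinsics, then take an L1 loss between them. The function names and the simple L1 choice are my own illustration, not anything from the repository.

```python
import numpy as np

def depth_to_pointmap(depth, fx, fy, cx, cy):
    """Unproject a metric depth map (H, W) into a per-pixel (x, y, z) point map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # shape (H, W, 3)

def pointmap_l1_loss(pred_depth, gt_depth, fx, fy, cx, cy, valid=None):
    """L1 loss between point maps derived from predicted and ground-truth depth."""
    pred = depth_to_pointmap(pred_depth, fx, fy, cx, cy)
    gt = depth_to_pointmap(gt_depth, fx, fy, cx, cy)
    if valid is None:
        valid = gt_depth > 0  # ignore pixels with no ground truth
    return np.abs(pred[valid] - gt[valid]).mean()
```

In a real training loop the loss would be computed in your autograd framework on the model's depth output, with the intrinsics taken per-image from your dataset.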