alibaba/cascade-stereo

Tanks and Temples reproduction problem, and a possible fix?

Opened this issue · 2 comments

Hi, dear authors. Thanks for your great work. When I tried to reproduce the results on the Tanks and Temples dataset, I got many background pixels, especially in "Family", where most of the main subject is lost. I think the reason there are so many background pixels is that in general_eval.py -> read_cam_file we use self.ndepths=192, but in YaoYao's T&T dataset (for example, Family) the num_depth is 700+. So I commented out lines 72~75 in general_eval.py and got rid of many of the background pixels. I don't know whether this is a mistake, but it really confused me when I tried to reproduce the results on T&T. I would like to know: when you submitted the results to the T&T benchmark, how many depth hypotheses (self.ndepths) did you use, and did you use the num_depth provided by YaoYao's dataset? Hoping for your reply.
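For context, here is a minimal sketch of the depth-range logic being discussed, assuming the standard MVSNet-style cam.txt layout where line index 11 holds depth_min, depth_interval and optionally num_depth. It is not a verbatim quote of general_eval.py, and the function name is hypothetical; it only illustrates what keeping vs. commenting out the lines 72~75 behaviour would do:

```python
# Hedged sketch of the depth-range logic under discussion; not a verbatim quote of
# general_eval.py -> read_cam_file, and the function name is hypothetical.
def read_depth_range(cam_lines, ndepths=192, rescale_to_full_range=True):
    """Parse the depth line of an MVSNet-style cam.txt:
    'depth_min depth_interval [num_depth depth_max]' (assumed at line index 11)."""
    fields = cam_lines[11].split()
    depth_min = float(fields[0])
    depth_interval = float(fields[1])

    if rescale_to_full_range and len(fields) >= 3:
        # The dataset ships its own plane count, e.g. 700+ for Family in YaoYao's T&T.
        num_depth = int(float(fields[2]))
        depth_max = depth_min + num_depth * depth_interval
        # Stretch the interval so ndepths (e.g. 192) hypotheses span the full range.
        depth_interval = (depth_max - depth_min) / ndepths

    # With rescale_to_full_range=False (roughly what commenting out lines 72~75 does),
    # the 192 hypotheses only cover depth_min .. depth_min + 192 * depth_interval,
    # which drops far background but can also truncate valid geometry.
    return depth_min, depth_interval
```

The trade-off is in the last comment: rescaling covers the whole dataset-provided range at a coarser spacing, while keeping the original fine interval restricts the range and therefore discards distant (often background) points.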

Please, I want to know how to change the code in order to test on the T&T dataset; I don't see the depth images and mask images in the T&T dataset.
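A rough sketch of one workaround, under assumptions: since the T&T evaluation scenes ship only images and cam files, test-time loading can simply skip ground-truth depth and mask. The helper passed in (read_cam_file), the path layout, and the output dict keys are assumptions here and must be matched to whatever the repo's eval script actually expects:

```python
# Hedged sketch: test-time loading for a T&T scan that has only images and cam files
# (no ground-truth depth or mask). Paths, helpers and dict keys are assumptions.
import os
import numpy as np
import cv2

def load_test_sample(datapath, scan, ref_view, src_views, nviews,
                     read_cam_file, ndepths=192):
    view_ids = [ref_view] + src_views[:nviews - 1]

    imgs, proj_matrices, depth_ranges = [], [], []
    for vid in view_ids:
        img_path = os.path.join(datapath, scan, "images", "{:0>8}.jpg".format(vid))
        cam_path = os.path.join(datapath, scan, "cams", "{:0>8}_cam.txt".format(vid))
        imgs.append(cv2.imread(img_path).astype(np.float32) / 255.0)

        intrinsics, extrinsics, depth_min, depth_interval = read_cam_file(cam_path)
        proj = extrinsics.copy()
        proj[:3, :4] = intrinsics @ extrinsics[:3, :4]
        proj_matrices.append(proj)
        depth_ranges.append((depth_min, depth_interval))

    # Depth hypotheses come from the reference view; there are no 'depth'/'mask'
    # entries, since the network only predicts depth maps that are fused afterwards.
    ref_min, ref_interval = depth_ranges[0]
    depth_values = np.arange(ref_min, ref_min + ref_interval * ndepths,
                             ref_interval, dtype=np.float32)
    return {"imgs": np.stack(imgs).transpose(0, 3, 1, 2),  # V x 3 x H x W (adjust to the model)
            "proj_matrices": np.stack(proj_matrices),
            "depth_values": depth_values,
            "filename": os.path.join(scan, "{:0>8}".format(ref_view))}
```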

@HuanyuanZhou May I ask how this code fuses the predicted depth maps into a point cloud for the Tanks and Temples dataset?
Looking forward to your reply.
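For reference, below is a hedged sketch of the usual MVSNet-style fusion recipe (photometric confidence thresholding plus cross-view geometric consistency, then back-projection and merging). It is not a verbatim copy of this repo's fusion code; the thresholds and the `views` data structure are assumptions:

```python
# Hedged sketch of MVSNet-style depth-map fusion; not a verbatim copy of this repo's
# fusion script. Thresholds and the 'views' structure are assumptions.
import numpy as np
import cv2

def backproject(depth, K, E):
    """Lift a depth map to homogeneous world coordinates, shape 4 x (H*W).
    K: 3x3 intrinsics, E: 4x4 world-to-camera extrinsics."""
    h, w = depth.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.vstack((x.ravel(), y.ravel(), np.ones(h * w)))
    cam = np.linalg.inv(K) @ (pix * depth.ravel())
    return np.linalg.inv(E) @ np.vstack((cam, np.ones(h * w)))

def geometric_consistency(depth_ref, K_ref, E_ref, depth_src, K_src, E_src,
                          pix_thresh=1.0, rel_depth_thresh=0.01):
    """A reference pixel is consistent with a source view if projecting it into the
    source view, sampling the source depth, and reprojecting back lands within
    pix_thresh pixels and rel_depth_thresh relative depth error."""
    h, w = depth_ref.shape
    x_ref, y_ref = np.meshgrid(np.arange(w), np.arange(h))

    # reference -> source
    world = backproject(depth_ref, K_ref, E_ref)
    uv_src = K_src @ (E_src @ world)[:3]
    x_src = (uv_src[0] / uv_src[2]).reshape(h, w).astype(np.float32)
    y_src = (uv_src[1] / uv_src[2]).reshape(h, w).astype(np.float32)

    # sample the source depth at the projected locations, then source -> reference
    depth_sampled = cv2.remap(depth_src.astype(np.float32), x_src, y_src, cv2.INTER_LINEAR)
    pix_src = np.vstack((x_src.ravel(), y_src.ravel(), np.ones(h * w)))
    cam_src = np.linalg.inv(K_src) @ (pix_src * depth_sampled.ravel())
    world_back = np.linalg.inv(E_src) @ np.vstack((cam_src, np.ones(h * w)))
    cam_ref = (E_ref @ world_back)[:3]
    uv_ref = K_ref @ cam_ref
    x_back = (uv_ref[0] / uv_ref[2]).reshape(h, w)
    y_back = (uv_ref[1] / uv_ref[2]).reshape(h, w)
    depth_back = cam_ref[2].reshape(h, w)

    pix_err = np.sqrt((x_back - x_ref) ** 2 + (y_back - y_ref) ** 2)
    depth_err = np.abs(depth_back - depth_ref) / np.maximum(depth_ref, 1e-6)
    return (pix_err < pix_thresh) & (depth_err < rel_depth_thresh)

def fuse(views, conf_thresh=0.9, min_consistent=3):
    """views: list of dicts with 'depth' (HxW), 'confidence' (HxW), 'image' (HxWx3),
    'K' (3x3), 'E' (4x4). Returns an N x 6 array of (x, y, z, r, g, b) points."""
    points = []
    for i, ref in enumerate(views):
        photo_mask = ref["confidence"] > conf_thresh
        n_consistent = np.zeros(ref["depth"].shape, dtype=np.int32)
        for j, src in enumerate(views):
            if j != i:
                n_consistent += geometric_consistency(ref["depth"], ref["K"], ref["E"],
                                                      src["depth"], src["K"], src["E"])
        mask = photo_mask & (n_consistent >= min_consistent) & (ref["depth"] > 0)
        xyz = backproject(ref["depth"], ref["K"], ref["E"])[:3].T.reshape(*ref["depth"].shape, 3)
        points.append(np.concatenate([xyz[mask], ref["image"][mask]], axis=1))
    return np.concatenate(points, axis=0)  # write out as a .ply afterwards
```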