alibaba/cascade-stereo

How to reproduce your result on DTU dataset?

Closed this issue · 16 comments

Hello,

I'm trying to reproduce the result presented in the paper on the DTU dataset. Specifically, the paper reports an accuracy of 0.325 and a completeness of 0.385, but I got an accuracy of 0.357 and a completeness of 0.359 when running your Matlab evaluation code on the output of your pretrained model.
Can you provide more detail about the process?

Many thanks,
Khang Truong
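(For anyone unsure what the two numbers measure: accuracy is the mean distance from the reconstructed points to the ground-truth cloud, and completeness is the mean distance in the other direction. Below is a toy pure-Python sketch of that idea; the official DTU Matlab evaluation additionally applies observability masks and outlier thresholds, so treat this only as an illustration.)

```python
import math

def nearest_dist(p, cloud):
    # Euclidean distance from point p to its nearest neighbour in cloud
    return min(math.dist(p, q) for q in cloud)

def accuracy(recon, gt):
    # mean distance from reconstruction to ground truth (lower is better)
    return sum(nearest_dist(p, gt) for p in recon) / len(recon)

def completeness(recon, gt):
    # mean distance from ground truth to reconstruction (lower is better)
    return sum(nearest_dist(p, recon) for p in gt) / len(gt)

# toy example: one perfect point, one point 1 unit off
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
recon = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
print(accuracy(recon, gt))      # 0.5
print(completeness(recon, gt))  # 0.5
```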

Yes, I also used the official pretrained model, but my final result was mean accuracy 0.3579 and mean completeness 0.3577.
How can the result presented in the paper be reproduced?
Can you provide more detail about the process?

Many Thanks,
Puyuan Yi

@jamesYI123, I got the same result as in the paper. You need to use the gipuma fusion method (described in their tutorial).

Thanks!! I had used the normal fusion method to get the final point cloud result. Many thanks!

I used the Gipuma method for fusion, but I can't reproduce the result in the paper: I got an accuracy of 0.347083 and a completeness of 0.400340. I just cloned the code from https://github.com/YoYo000/fusibile and then ran `cmake` and `make`. What's wrong with my procedure?
Thanks
@TruongKhang

Hello, all. I tried training the network from scratch and used gipuma as the fusion method, but my evaluation result is 0.3358 (mean acc) and 0.4244 (mean comp), which is not comparable to the numbers reported in the paper (0.325, 0.385). Have any of you managed to reproduce the paper's result by retraining the network?

Hello, try normal fusion with thres_view=4.
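(For readers unfamiliar with the parameter: thres_view is the minimum number of views that must agree on a pixel's depth for it to survive fusion. A toy sketch of that idea, with hypothetical names and a simplified 1-D consistency check; the repo's actual fusion reprojects depths between views before comparing them.)

```python
def fuse_depth(depths, thres_view=4, max_rel_diff=0.01):
    """Keep a depth estimate only if at least `thres_view` views agree.

    depths: per-view depth estimates for the same pixel, already
    reprojected into the reference view (a big simplification of
    real geometric-consistency filtering).
    """
    ref = depths[0]
    # count views whose depth matches the reference within a relative tolerance
    consistent = sum(1 for d in depths if abs(d - ref) / ref <= max_rel_diff)
    return ref if consistent >= thres_view else None

print(fuse_depth([1.00, 1.001, 0.999, 1.002, 5.0]))  # 1.0  (4 consistent views)
print(fuse_depth([1.00, 1.2, 0.8, 5.0, 3.0]))        # None (only the reference agrees)
```

Raising thres_view discards more unreliable points, which tends to improve accuracy at the cost of completeness, matching the trade-offs reported in this thread.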

I've tried normal fusion with thres_view=4, but I got mean acc 0.4226 and mean comp 0.3204, which is not the 0.355 in the paper.

I can reproduce the result in the paper now. It turns out that you have to use fusibile from here: https://github.com/YoYo000/fusibile

@Tangshengku, after compiling fusibile successfully, run `test.py` directly without changing any parameters to produce the 3D models (.ply files). Then, using the provided Matlab code for evaluation, you'll get results similar to those in the paper. Did you follow exactly these steps?

Yes! I reproduced the result. The problem was that my PyTorch version was too high: I was on 1.6.0, which may sample differently from older versions. So don't use 1.6.0 or higher; 1.1.0 works fine.

Hello, I tried training from scratch but couldn't reproduce the final result. I trained with batch size = 2, used gipuma for fusion, and did not use Apex; after 16 epochs of training I still couldn't reproduce the result. How did you manage to reproduce it?

How can I run the Matlab evaluation code?

Hi @gxd1994, thanks for your amazing work!
How do you reproduce the high-res (1600x1184) results on the DTU dataset? Which fusion method do you use, and how do you set the fusion parameters?
Looking forward to your reply.

Hi @XYZ-qiyh, please check my comment above; I described the steps to reproduce the result on the DTU dataset!

Thanks for your reply!

Hi, did you try align_corners=True in F.grid_sample (PyTorch 1.6.0)? It seems the differences lie in the align_corners behavior across PyTorch versions.

I notice that the author used PyTorch 1.2.0, which means F.grid_sample behaves as align_corners=True. However, the author set align_corners=False in F.interpolate.

I would like to know how these parameters influence the final results. @gxd1994, many thanks in advance.
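(To make the align_corners difference concrete: the two settings map a normalized grid coordinate x in [-1, 1] to pixel space differently. A minimal pure-Python sketch of the coordinate convention F.grid_sample documents, with no torch dependency; `unnormalize` is a made-up helper name.)

```python
def unnormalize(x, size, align_corners):
    """Map a normalized coordinate x in [-1, 1] to a pixel index.

    - align_corners=True : -1 and 1 refer to the centres of the corner pixels
    - align_corners=False: -1 and 1 refer to the outer edges of the corner pixels
    """
    if align_corners:
        return (x + 1) / 2 * (size - 1)
    return ((x + 1) * size - 1) / 2

W = 4  # a 4-pixel-wide image
for x in (-1.0, 0.0, 1.0):
    print(x, unnormalize(x, W, True), unnormalize(x, W, False))
# -1.0 maps to 0.0 vs -0.5, and 1.0 maps to 3.0 vs 3.5
```

The half-pixel shift between the two conventions is why mixing PyTorch versions (whose defaults changed in 1.3.0) can subtly change the sampled depth hypotheses and, in turn, the final point-cloud metrics.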