xy-guo/GwcNet

About the evaluation on the kitti12 validation set

Xt-Chen opened this issue · 12 comments

Firstly, thank you for publishing your code.
I used your pre-trained model for KITTI 2012 (best.ckpt) to evaluate on the KITTI 2012 validation set (14 pairs of images) and got the following results for gwcnet-gc:
avg_test_scalars {'D1': [0.024143276503309608], 'EPE': [1.3547629032816206], 'Thres1': [0.6347695120743343], 'Thres2': [0.06658247805067471], 'Thres3': [0.02868282133048134]}
However, this differs from the results reported in your paper. There, Gwc40-Cat24 achieves EPE (px): 0.659 and D1-all (%): 2.10 on KITTI 2012.
These results confuse me. Am I doing something wrong?

Could you provide more details? For example, the command you used.

First, I downloaded the pre-trained model you provided. The path I load it from is .\checkpoints\kitti12\gwcnet-gc\best.ckpt. Second, to evaluate the model on the KITTI 2012 validation set, I used the test_sample function you provide in main.py, which produced the evaluation results above.
That is, the result above is your pre-trained model (\checkpoints\kitti12\gwcnet-gc\best.ckpt) evaluated on kitti12_val.txt.
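In code terms, my evaluation follows the test phase of main.py, roughly like this (a sketch from memory; test_sample, AverageMeterDict, and TestImgLoader are the helpers from the repo, so the exact signatures may differ):

```python
# rough sketch of the evaluation loop, mirroring the test phase in main.py
avg_test_scalars = AverageMeterDict()
for batch_idx, sample in enumerate(TestImgLoader):
    # test_sample runs one forward pass and computes the D1/EPE/Thres metrics
    loss, scalar_outputs, image_outputs = test_sample(sample)
    avg_test_scalars.update(scalar_outputs)
print("avg_test_scalars", avg_test_scalars.mean())
```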
Thank you very much for your reply.

Maybe you can first check whether you have loaded the correct checkpoint. Make sure test_batch_size is set to 1 and the model is in eval mode.
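For reference, loading the released checkpoint and switching to eval mode should look roughly like this (a minimal sketch following the conventions in main.py; the __models__ dict and the 'model' key in the checkpoint are taken from the repo, so adjust if your local copy differs):

```python
import torch
import torch.nn as nn
from models import __models__  # model constructors registered in the repo

# build the model the same way main.py does (gwcnet-gc, maxdisp = 192)
model = __models__["gwcnet-gc"](192)
model = nn.DataParallel(model)
model.cuda()

# the released checkpoint stores the weights under the 'model' key
state_dict = torch.load("./checkpoints/kitti12/gwcnet-gc/best.ckpt")
model.load_state_dict(state_dict["model"])

# eval mode matters: BatchNorm must use running statistics, not batch statistics
model.eval()
```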

I tried to test the model with my released code.

First, I removed the adjust-learning-rate / training / saving code in the train() function of main.py. Then I ran the following command:

python main.py --dataset kitti \
    --datapath $DATAPATH --trainlist ./filenames/kitti12_train.txt --testlist ./filenames/kitti12_val.txt \
    --epochs 300 --lrepochs "200:10" \
    --model gwcnet-gc --logdir ./checkpoints/kitti12/gwcnet-gc \
    --test_batch_size 1 --loadckpt ./checkpoints/kitti12/gwcnet-gc/best.ckpt

and I got the following results:

avg_test_scalars {'loss': 0.2046549352152007, 'D1': [0.022020522937444702], 'EPE': [0.6573930361441204], 'Thres1': [0.11551064665296248], 'Thres2': [0.04537256248295307], 'Thres3': [0.02558313699306122]}

Thank you very much for your detailed answer. However, the strange thing is that I performed exactly the same steps and still got the same results as before:
start at epoch 0
Epoch 0/300, Iter 0/14, test loss = 0.699, time = 3.617939
Epoch 0/300, Iter 1/14, test loss = 0.384, time = 0.383580
Epoch 0/300, Iter 2/14, test loss = 0.355, time = 0.372590
Epoch 0/300, Iter 3/14, test loss = 0.514, time = 0.367138
Epoch 0/300, Iter 4/14, test loss = 0.405, time = 0.380387
Epoch 0/300, Iter 5/14, test loss = 0.419, time = 0.382480
Epoch 0/300, Iter 6/14, test loss = 0.383, time = 0.377939
Epoch 0/300, Iter 7/14, test loss = 0.456, time = 0.373806
Epoch 0/300, Iter 8/14, test loss = 0.657, time = 0.374106
Epoch 0/300, Iter 9/14, test loss = 0.462, time = 0.370982
Epoch 0/300, Iter 10/14, test loss = 0.335, time = 0.373985
Epoch 0/300, Iter 11/14, test loss = 0.409, time = 0.374382
Epoch 0/300, Iter 12/14, test loss = 0.408, time = 0.375034
Epoch 0/300, Iter 13/14, test loss = 0.373, time = 0.375253
avg_test_scalars {'loss': 0.4471520973103387, 'D1': [0.024143276503309608], 'EPE': [1.3547629032816206], 'Thres1': [0.6347695120743343], 'Thres2': [0.06658247805067471], 'Thres3': [0.02868282133048134]}
Could different PyTorch versions affect the results? I am using PyTorch 1.0.1. What configuration are you using?

My code has been tested under Ubuntu 14 + CUDA 8 + PyTorch 0.4.1 and Ubuntu 16 + CUDA 9 + PyTorch 1.0. Are you using Windows? I notice your paths use \ instead of /.

Emmm... sorry, that's a typo. I also use Ubuntu 16 + CUDA 9.

Can you check if you are on the latest commit? Please paste the git diff output and I can check it for you.

And which Python version? (I am using Python 3.6.)

Thank you so much, I have found the reason for this bug. With align_corners=True in F.interpolate the performance is worse; after setting align_corners=False I get the same performance as you.
Thanks again!
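For anyone hitting the same issue, the two settings really do produce different values. A minimal standalone check (plain PyTorch, independent of the repo; a 2D feature map here just for illustration):

```python
import torch
import torch.nn.functional as F

# a tiny 1x1x2x2 feature map to upsample
x = torch.tensor([[[[1.0, 2.0],
                    [3.0, 4.0]]]])

up_true = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
up_false = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)

# align_corners changes the sampling grid, so the interpolated values
# (and hence any disparities regressed from them) differ
print(torch.allclose(up_true, up_false))  # False
```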

@Xt-Chen
The baseline PSMNet sets align_corners=True (https://github.com/JiaRenChang/PSMNet/blob/master/README.md#notice).
I have tried align_corners=False in PSMNet: the performance on SceneFlow improves, but after finetuning on K15 the performance gets worse.
Have you tried finetuning the pretrained network on K15 with GwcNet? How is the performance?

@zhFuECL
Hello, I have experimented and concluded that align_corners must be consistent between training and testing; whether it is set to True or False has little influence on the results.
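If you want to rule out an accidental mismatch, one option (an illustrative pattern, not something from the released code) is to route every upsampling call through a single constant so the training and testing paths can never disagree:

```python
import torch.nn.functional as F

# one switch shared by the training and testing code paths
ALIGN_CORNERS = True

def upsample_cost_volume(cost, size):
    # trilinear upsampling of a 5D cost volume, GwcNet-style
    return F.interpolate(cost, size, mode="trilinear",
                         align_corners=ALIGN_CORNERS)
```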