NVlabs/PWC-Net

Could you please provide the pretrained model with the AEPE of 2.00 on FlyingChairs?

littlespray opened this issue · 4 comments

Hi,

Thank you for your great work! I am re-implementing PWC-Net in PyTorch, and your repository has helped me a lot!

However, I have run into a problem: I trained my model for 400 epochs, but it only reaches an AEPE of 3.9 on the training set, far from the 2.0 reported in the original paper. So I tried the pre-trained weights you provided, pwc_net_chairs.pth.tar, and got an AEPE of 3.56.
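For clarity, the AEPE numbers in this thread are the standard average end-point error: the mean Euclidean distance between predicted and ground-truth flow vectors. A minimal sketch (function name is my own):

```python
import numpy as np

def aepe(flow_pred, flow_gt):
    """Average end-point error between two HxWx2 flow fields:
    per-pixel Euclidean distance, averaged over all pixels."""
    return np.linalg.norm(flow_pred - flow_gt, axis=-1).mean()
```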

I know this pre-trained model is fine-tuned on FlyingThings3D, and I suspect that is why it does not match the original paper's result. Could you provide a pre-trained model that reaches an AEPE of 2.0 on FlyingChairs? Thank you very much!

I also want to confirm the preprocessing: is it only randomly cropping the images and ground-truth flows to [384, 448] and then dividing the ground-truth flows by 20? And there is no need to crop the images when calling infer(), right?
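As a sanity check on the preprocessing described above, here is a minimal NumPy sketch of what I mean (the function names are my own, not from the repository):

```python
import numpy as np

def random_crop(img1, img2, flow, crop_h=384, crop_w=448, rng=None):
    """Crop an image pair and its ground-truth flow with the SAME random
    window, as discussed above ([384, 448] crops for FlyingChairs).
    Arrays are assumed to be HxWxC."""
    rng = rng or np.random.default_rng()
    h, w = img1.shape[:2]
    y = int(rng.integers(0, h - crop_h + 1))
    x = int(rng.integers(0, w - crop_w + 1))
    window = (slice(y, y + crop_h), slice(x, x + crop_w))
    return img1[window], img2[window], flow[window]

def scale_flow_target(flow, div=20.0):
    # Ground-truth flow is divided by 20 before computing the training loss.
    return flow / div
```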

OK, I have solved this problem. There is nothing wrong with the provided model: the model fine-tuned on FlyingThings can achieve an AEPE of 2.3. The reason I could not reach the same AEPE is that I use PyTorch 1.4.0 & CUDA 10.2, and there are implementation differences between that setup and PyTorch 1.0.0 & CUDA 9.0.

Hi Bro,
I pretrained the model on FlyingChairs and it converged to an EPE of 2.3. However, when I fine-tune it on FlyingThings3D with 10% of the pretraining initial learning rate and a patch size of 384x768, it does not converge and stays at an AEPE of 39.8. Have you met this problem? What are your parameter settings for fine-tuning?

Hi Bro,

I didn't run into that problem. You can refer to irr's repository; I use the same settings as theirs. Hope it helps.
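For anyone landing here later: the fine-tuning schedule described in the PWC-Net and FlowNet2 papers ("S_fine") starts from a small learning rate and halves it at fixed iteration milestones. A sketch under that assumption follows; the base rate and milestone values are taken from the papers as I recall them, so please verify them against irr's configs before relying on them:

```python
def s_fine_lr(step, base_lr=1e-5, milestones=(200_000, 300_000, 400_000, 500_000)):
    """Piecewise-constant 'S_fine' learning-rate schedule: start at base_lr
    and halve the rate at each iteration milestone that has been passed.
    Milestones here are assumptions from the papers, not verified settings."""
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= 0.5
    return lr
```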