twhui/LiteFlowNet

Help needed regarding training strategy used for finetuning with FlyingThings dataset

syed-mujtaba-hassan opened this issue · 1 comment

Hi,

I tried to replicate the LiteFlowNet Caffe model using the procedure described in the paper. For training on the FlyingChairs dataset, the accuracy I reached was close to what was reported in the paper (33.68% compared to 32.59%). However, training on the FlyingThings dataset is not increasing accuracy significantly: I can only reach 32.63% (not the reported 28.59%), even after finetuning for more than 500k iterations. I also removed the harmful subset of the data, as pointed out in FlowNet2. One thing I have noticed is that training is very slow for the last layer (layer2), and accuracy has not increased much by adding this layer. Moreover, the test loss for this layer is much greater than for the other layers.

Can you guide me about the training procedure for finetuning with the FlyingThings dataset? I cannot figure out what configurations are used in the solver prototxt file for finetuning with FlyingThings.

Thanks for taking your time and reading the post :)

Thanks.

twhui commented

You don't need to train the models yourself. You can simply find my trained models here: LiteFlowNet/models/trained/.

Level-wise training is only needed for FlyingChairs but not the others.
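For readers looking for a starting point, a hedged sketch of what the FlyingThings3D finetuning solver might look like is below. The hyperparameters follow the S_fine schedule described in the FlowNet2 paper (learning rate 1e-5, halved at 200k, 300k, and 400k iterations, for 500k iterations total); none of these values are confirmed settings from this repository, and the `net` path is hypothetical.

```
# solver.prototxt — illustrative sketch for FlyingThings3D finetuning.
# All values are assumptions based on the S_fine schedule in the
# FlowNet2 paper, not confirmed settings from this repository.
net: "model/train.prototxt"   # hypothetical path to the training net
type: "Adam"                  # FlowNet-family models train with Adam
base_lr: 1e-5                 # assumed starting rate for finetuning
lr_policy: "multistep"
gamma: 0.5                    # halve the learning rate at each stepvalue
stepvalue: 200000
stepvalue: 300000
stepvalue: 400000
max_iter: 500000
momentum: 0.9
momentum2: 0.999
weight_decay: 0.0004
solver_mode: GPU
```

If training the last pyramid level still stalls, lowering `base_lr` further for that stage is a common thing to try, though this is speculation rather than the authors' procedure.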