MeanIU of FCN32 lower than the result in Shelhamer et al. (2016)
yasuru opened this issue · 6 comments
I'm training FCN-32s. The mean IU on the validation dataset is ~30% after 100000 iterations.
It doesn't reach the result (63.6%) reported by Shelhamer et al. (2016). Is this expected?
Could you tell me what the difference is?
According to the paper at http://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf (page 6), the mean IU of FCN-32s is 59.4.
The differences are:
- dataset (PASCAL 2011 seg val in their work, 2012 in mine)
- iterations (theirs are probably more than mine)
Maybe you can try training on VOC2011 for more than 100000 iterations.
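For reference, the number being compared is mean IU: the per-class intersection over union, averaged over classes, computed from a confusion matrix accumulated over the whole val set. A minimal NumPy sketch of the metric (the function name and signature are mine, not this repo's):

```python
import numpy as np

def mean_iu(label_trues, label_preds, n_class=21):
    """Mean IU over classes (PASCAL VOC: 21 classes including background).

    label_trues / label_preds: lists of HxW integer label maps.
    """
    # Accumulate an n_class x n_class confusion matrix over all images.
    hist = np.zeros((n_class, n_class), dtype=np.int64)
    for lt, lp in zip(label_trues, label_preds):
        mask = (lt >= 0) & (lt < n_class)  # ignore e.g. the 255 "void" label
        hist += np.bincount(
            n_class * lt[mask].astype(int) + lp[mask],
            minlength=n_class ** 2,
        ).reshape(n_class, n_class)
    # Per-class IU = TP / (TP + FP + FN); classes absent from both GT and
    # prediction give 0/0 and are excluded from the mean.
    tp = np.diag(hist)
    denom = hist.sum(axis=1) + hist.sum(axis=0) - tp
    with np.errstate(invalid='ignore'):
        iu = tp / denom
    return np.nanmean(iu)
```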
Thank you for the quick response.
I will try it.
I have another question. I'm training FCN-16s initialized from a pretrained FCN-32s with mean IU ~30%. The mean IU of FCN-16s is 22% after 100000 iterations, which is lower than that of FCN-32s. Have you encountered this problem? I'm using the following parameters (see the sketch after this list):
- learning rate: 1e-10
- momentum: 0.99
- weight decay: 5e-4
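Just to show concretely what those values mean, here is a minimal sketch of the same settings on a plain SGD optimizer. I'm using PyTorch only for illustration (the repo may use a different framework), and `model` is a stand-in for the FCN-16s network, not this repo's code. As far as I know these values match the reference FCN Caffe solvers, which pair a very small fixed learning rate with high momentum and an unnormalized (summed) softmax loss.

```python
import torch.nn as nn
import torch.optim as optim

# Placeholder network standing in for FCN-16s (hypothetical, for illustration only).
model = nn.Conv2d(3, 21, kernel_size=1)

optimizer = optim.SGD(
    model.parameters(),
    lr=1e-10,          # very small fixed learning rate
    momentum=0.99,     # high momentum ("heavy learning" schedule)
    weight_decay=5e-4, # L2 regularization
)
```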
Actually, I've never tried training FCN-16s on the PASCAL dataset.
Noted with thanks.
Hi, I think I found the cause of this issue.
It's because I didn't copy the weights of the fc layers from the pre-trained model (VGG) into FCN-32s.
The fc layer weights can be reshaped and used as the weights of the corresponding conv layers.
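For anyone hitting the same problem: this is the standard FCN "net surgery" step, where VGG's fc6/fc7 weights are reshaped into the kernels of equivalent convolutions so the fully convolutional net starts from the classifier's parameters instead of a random init. A minimal NumPy sketch with random stand-ins for the pretrained weights (layer names and shapes follow VGG16; the exact parameter access depends on the framework):

```python
import numpy as np

# VGG16's fc6 is a 25088 -> 4096 fully connected layer; its input is the
# flattened 512 x 7 x 7 pool5 feature map.  Reshaping its weight matrix
# gives the kernel of an equivalent 7x7 convolution with 4096 outputs.
fc6_W = np.random.randn(4096, 512 * 7 * 7).astype(np.float32)  # stand-in for the pretrained weight
fc6_b = np.random.randn(4096).astype(np.float32)

conv6_W = fc6_W.reshape(4096, 512, 7, 7)  # (out_ch, in_ch, kh, kw)
conv6_b = fc6_b                           # bias is copied unchanged

# Likewise fc7 (4096 -> 4096) becomes a 1x1 convolution.
fc7_W = np.random.randn(4096, 4096).astype(np.float32)
conv7_W = fc7_W.reshape(4096, 4096, 1, 1)
```

Note that the plain reshape is only valid if the fc layer's input was flattened in (channel, height, width) order, as in Caffe or Chainer; if the framework flattens in a different order, the weight has to be permuted before reshaping.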