Some implementation queries on dropout and VGG16 last-feature-map tapping
Opened this issue · 1 comment
Deleted user commented
- It seems like the dropout layer before the top-down attention logits is missing the is_training argument (line 283 in nets_factory.py). Kindly check this.
- In vgg.py, the vgg16 function (lines 166-170) uses the ReLU'd output of conv5_2 (not the pre-ReLU conv5_3) as the final convolutional endpoint, and the ReLU activation of conv5_3 is missing from the architecture. Since the ImageNet weights are being loaded, the architecture has to be consistent. Please share your thoughts.
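As a toy illustration of why the tap point matters (plain NumPy with hypothetical stand-in layers, not the actual code in vgg.py): if the endpoint is taken after conv5_2's ReLU, the features produced by conv5_3 and its activation never appear in the endpoint at all.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv(x, w):
    # Stand-in for a conv layer: a plain linear map on a feature vector.
    return x @ w

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w1, w2, w3 = (rng.standard_normal((8, 8)) for _ in range(3))

net = relu(conv(x, w1))                    # conv5_1 + ReLU
conv5_2_out = relu(conv(net, w2))          # conv5_2 + ReLU
conv5_3_out = relu(conv(conv5_2_out, w3))  # conv5_3 + ReLU: the intended endpoint

# The reported bug: the ReLU'd conv5_2 output is tapped as the final
# convolutional endpoint, so conv5_3 (and its ReLU) never reach the endpoint.
buggy_endpoint = conv5_2_out

print(np.allclose(conv5_3_out, buggy_endpoint))  # the two taps differ
```

Since pretrained ImageNet weights expect the full conv5_1 → conv5_2 → conv5_3 stack (each followed by its ReLU), any endpoint taken earlier sees different features than the weights were trained to produce.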
rohitgirdhar commented
- I don't think it's required, because I define it in the arg_scope here. Every call to slim.dropout within that block is passed the is_training argument defined in the arg_scope.
- I think you're right, I've fixed it now. Thanks for pointing it out. In any case it shouldn't affect the results, since I don't use VGG networks in this work.