rohitgirdhar/AttentionalPoolingAction

Some implementation queries on dropout and tapping the last VGG-16 feature map

Opened this issue · 1 comment

  1. It seems that the dropout layer before the top-down attention logits is missing the is_training argument (line 283 in nets_factory.py). Kindly check this.

  2. In vgg.py, the vgg16 function (lines 166-170) uses the ReLU'd output of conv5_2 (not the pre-ReLU conv5_3) as the final convolutional endpoint, and the ReLU activation of conv5_3 is missing from the architecture. Since we are loading the ImageNet weights, the architecture has to be consistent (see the sketch of the standard conv5 block right after this list). Please share your thoughts.
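
For reference, here is a minimal sketch of what the conv5 block looks like in the standard TF-slim vgg_16 definition (this is not the repo's code, just the layout the ImageNet checkpoint expects): slim.conv2d applies ReLU by default, so an endpoint taken after the full block includes conv5_3 and its activation.

```python
import tensorflow as tf

slim = tf.contrib.slim


def conv5_block(net):
    # Three 3x3, 512-channel conv layers, each followed by ReLU
    # (conv5/conv5_1, conv5/conv5_2, conv5/conv5_3), matching the
    # layer names the ImageNet VGG-16 checkpoint was trained with.
    net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], scope='conv5')
    return net
```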

  1. I don't think it's required, because I define it in the arg_scope here: every call to slim.dropout within that block is passed the is_training argument defined in the arg_scope (see the sketch after this list).
  2. I think you're right; I've fixed it now. Thanks for pointing it out. In any case it shouldn't affect the results, since I don't use VGG networks in this work.
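
A minimal sketch of the arg_scope mechanism being described (the function and scope names here are hypothetical, not the repo's actual code): any slim.dropout call inside the arg_scope picks up is_training without it being passed explicitly.

```python
import tensorflow as tf

slim = tf.contrib.slim


def attention_logits(net, is_training):
    # slim.arg_scope forwards is_training to every slim.dropout call
    # made inside this block, so the call below does not need the
    # argument spelled out at the call site.
    with slim.arg_scope([slim.dropout], is_training=is_training):
        net = slim.dropout(net, keep_prob=0.5, scope='dropout_pre_logits')
        # 1x1 conv producing the top-down attention logits (no activation).
        net = slim.conv2d(net, 1, [1, 1], activation_fn=None,
                          scope='attention_logits')
    return net
```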