sanghoon/pytorch_imagenet

Why is there an added ReLU after the last conv of the feature extractor?


Also, this implementation seems to differ from the detection model: both the input normalization and the weight initialization are different.

Hi @argman
First, the PVANet implemented here is based on the 'pre-activation' scheme (please refer to https://arxiv.org/abs/1603.05027). That's why we need an additional ReLU: it makes the final feature map from the convolutional layers a post-activation map.
Regarding the parameters, I haven't put much effort into implementing exactly the same network. Please note that the input size for fc6 also differs from the original (7x7 here vs. 6x6 before).