Why do we need padding=100 for a filter of size 3?
deartonym opened this issue · 5 comments
As the title says: in torchfcn/models/fcn32s.py, the first conv layer is defined as:
nn.Conv2d(3, 64, 3, padding=100),
Why do we need a padding of 100 instead of the padding of 1 that a 3x3 filter would normally suggest?
Thanks
Shelhamer and Long use a padding of 100 for the first conv layer in their Caffe implementation.
https://github.com/shelhamer/fcn.berkeleyvision.org/blob/master/pascalcontext-fcn8s/train.prototxt
Thank you @Viresh-DL
In case anyone is interested, here is the explanation from the original FCN repository:
Why pad the input?: The 100 pixel input padding guarantees that the network output can be aligned to the input for any input size in the given datasets, for instance PASCAL VOC. The alignment is handled automatically by net specification and the crop layer. It is possible, though less convenient, to calculate the exact offsets necessary and do away with this amount of padding.
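To make the alignment argument concrete, here is a rough size calculation. It is only a sketch, assuming the torchfcn FCN32s layer configuration (3x3 convs with padding 1 except conv1_1 with padding 100, five 2x2 max pools with ceil_mode=True, a 7x7 fc6 conv with no padding, and a stride-32 transposed conv with kernel size 64; the upsampled map is then cropped at a fixed offset, 19 pixels in the fcn32s case). With padding=100 the upsampled output is always at least as large as the input, so the fixed crop yields an exactly aligned map; with padding=1 it generally is not.

```python
import math

def fcn32s_output_size(h, first_pad=100):
    """Spatial size of the upsampled score map for an input of size h."""
    h = h + 2 * first_pad - 2      # conv1_1: kernel 3, padding first_pad
    for _ in range(5):             # pool1..pool5: kernel 2, stride 2, ceil_mode=True
        h = math.ceil(h / 2)
    h = h - 6                      # fc6: kernel 7, no padding
    return (h - 1) * 32 + 64       # upscore: ConvTranspose2d(kernel 64, stride 32)

for size in (224, 500):
    print(size, fcn32s_output_size(size, first_pad=100), fcn32s_output_size(size, first_pad=1))
# 224 -> 288 vs 64, 500 -> 544 vs 352: with padding=100 the output always covers the
# input (so it can be cropped back to 224 or 500), with padding=1 it does not.
```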
I have another issue. I am writing the code in Keras / tf.keras.
How can I add padding=100? It seems Keras only supports padding='same' or padding='valid'.
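A possible workaround (a sketch, assuming tf.keras): since Conv2D only understands padding='valid'/'same', add the 100-pixel border with a separate ZeroPadding2D layer and then use a 'valid' convolution, which is equivalent to nn.Conv2d(3, 64, 3, padding=100) in PyTorch.

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(None, None, 3))            # fully convolutional: any input size
x = layers.ZeroPadding2D(padding=100)(inputs)           # pad H and W by 100 on each side
x = layers.Conv2D(64, 3, padding='valid',               # 'valid' conv on the padded tensor
                  activation='relu', name='conv1_1')(x)
model = tf.keras.Model(inputs, x)
model.summary()
```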
What about using tf.layers.conv2d?
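tf.layers.conv2d (the TF1-style API, deprecated in TF2) also only accepts padding='valid'/'same'. One option, sketched below, is to drop down to tf.nn.conv2d, which in recent TensorFlow versions accepts an explicit per-dimension padding list; the filter variable here is only a hypothetical stand-in for illustration.

```python
import tensorflow as tf

images = tf.zeros([1, 224, 224, 3])                      # dummy NHWC batch
filters = tf.Variable(tf.random.normal([3, 3, 3, 64]))   # 3x3 kernel, 3 -> 64 channels
out = tf.nn.conv2d(images, filters, strides=1,
                   padding=[[0, 0], [100, 100], [100, 100], [0, 0]])  # pad H and W by 100
print(out.shape)  # (1, 422, 422, 64): 224 + 2*100 - (3 - 1) = 422
```

Alternatively, you can tf.pad the input yourself and then call any of the conv layers with padding='valid'.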