meet-minimalist/TinyImageNet-Benchmarks

Batch norm in MobileNetV2.

meet-minimalist opened this issue · 0 comments

As per the architecture definition provided here, batch normalization is used in the inverted residual blocks as follows:

  1. The bottleneck (expansion) layer, which is a normal convolutional layer, is equipped with batch norm.
  2. The depthwise layer is also equipped with batch norm.
  3. The pointwise layer is also equipped with batch norm.

Even in their paper they mention that batch norm is used after every layer (see the sketch below).
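For reference, here is a minimal Keras sketch of that layout, with batch norm after each of the three convolutions. The ReLU6 activations and the linear projection follow the paper; the function name, channel counts, and input shape are illustrative and not taken from this repo:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inverted_residual(x, expansion, out_channels, stride):
    """One inverted residual block laid out as in the paper:
    batch norm after every convolution (names are illustrative)."""
    in_channels = x.shape[-1]

    # 1. Bottleneck (expansion) 1x1 conv -> BN -> ReLU6
    h = layers.Conv2D(expansion * in_channels, 1, use_bias=False)(x)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)

    # 2. Depthwise 3x3 conv -> BN -> ReLU6
    h = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(h)
    h = layers.BatchNormalization()(h)
    h = layers.ReLU(max_value=6.0)(h)

    # 3. Pointwise (projection) 1x1 conv -> BN, no activation (linear bottleneck)
    h = layers.Conv2D(out_channels, 1, use_bias=False)(h)
    h = layers.BatchNormalization()(h)

    # Residual connection only when the shapes line up
    if stride == 1 and in_channels == out_channels:
        h = layers.Add()([x, h])
    return h

# Example usage: one stride-1 block with expansion factor 6
inputs = tf.keras.Input(shape=(56, 56, 24))
outputs = inverted_residual(inputs, expansion=6, out_channels=24, stride=1)
model = tf.keras.Model(inputs, outputs)
model.summary()
```

Note that every conv is built with `use_bias=False`, since the batch norm that follows it makes a separate bias redundant.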

But when we download a pretrained model from TensorFlow and visualize it in Netron, the blocks look like this:

  1. The bottleneck layer doesn't use batch norm; it uses a bias instead.
  2. The depthwise layer is equipped with batch norm.
  3. The pointwise layer doesn't use batch norm; it uses a bias instead.

This makes a significant difference in the number of parameters and in the final accuracy. One quick way to inspect the layout programmatically is sketched below.
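This sketch loads a pretrained MobileNetV2 and lists which conv layers carry a bias and where the batch norm layers sit, plus the total parameter count for comparison. It assumes the `tf.keras.applications` build; the frozen graph viewed in Netron may come from a different export and can differ in how bias and batch norm appear:

```python
import tensorflow as tf

# Assumption: the tf.keras.applications pretrained build of MobileNetV2,
# which may not match the downloaded graph inspected in Netron.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

for layer in model.layers:
    if isinstance(layer, (tf.keras.layers.DepthwiseConv2D, tf.keras.layers.Conv2D)):
        # use_bias tells us whether the conv carries its own bias term
        print(f"{layer.name:45s} use_bias={layer.use_bias}")
    elif isinstance(layer, tf.keras.layers.BatchNormalization):
        print(f"{layer.name:45s} batch norm")

# Total parameter count, to compare against a build that swaps BN for bias
print("total params:", model.count_params())
```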