deep-learning-with-pytorch/dlwpt-code

Strange bias initialization with He initialization

pietroventurini opened this issue · 1 comment

I noticed that for the classification models of part 2, weights are initialized using nn.init.kaiming_normal_ (He initialization). However, the biases are initialized (p2ch14.model, line 94) in a strange way, with nn.init.normal_(m.bias, -bound, bound). Since nn.init.normal_ takes (mean, std) as its arguments, this samples the biases from a Gaussian with mean -bound and standard deviation bound, which is hard to justify. I believe it's probably a leftover from a previous uniform initialization, where the same arguments passed to nn.init.uniform_ would have meant the symmetric range (-bound, bound).

Should lines 91-94 be replaced with m.bias.data.fill_(.0)?

The same also holds for

  • p2ch11.model, lines 43-46
  • p2ch12.model, lines 43-46
  • p2ch13.model, lines 41-44
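
For reference, here is a minimal sketch of the initialization pattern in question, reconstructed from the description above. The free-function form, the set of module types, and the derivation of bound via the private helper init._calculate_fan_in_and_fan_out are assumptions; the chapters' actual methods may differ in those details, but the questioned call is quoted verbatim.

```python
import math

import torch.nn as nn
from torch.nn import init


def init_weights(model):
    """Reconstruction of the part-2 init pattern (not the exact repo code)."""
    for m in model.modules():
        if type(m) in {nn.Linear, nn.Conv3d}:  # assumed set of module types
            # He initialization for the weights
            init.kaiming_normal_(m.weight.data, a=0, mode='fan_out',
                                 nonlinearity='relu')
            if m.bias is not None:
                # assumed derivation of `bound` from the layer's fan
                _fan_in, fan_out = init._calculate_fan_in_and_fan_out(m.weight.data)
                bound = 1 / math.sqrt(fan_out)
                # the questioned line: normal_(tensor, mean, std) samples
                # N(-bound, bound**2), not the symmetric U(-bound, bound)
                init.normal_(m.bias, -bound, bound)
```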
t-vi commented

I completely agree. Thank you for pointing this out! I'd probably use nn.init.zeros_ in keeping with the nn.init theme.
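
A sketch of the fix along those lines, mirroring the hypothetical init_weights above (a suggestion in the spirit of this comment, not the committed change):

```python
import torch.nn as nn
from torch.nn import init


def init_weights(model):
    for m in model.modules():
        if type(m) in {nn.Linear, nn.Conv3d}:
            init.kaiming_normal_(m.weight.data, a=0, mode='fan_out',
                                 nonlinearity='relu')
            if m.bias is not None:
                # zero the biases, staying within the nn.init API
                init.zeros_(m.bias)
```

In effect this is the same as m.bias.data.fill_(0.0); init.zeros_ just keeps the naming consistent with the surrounding nn.init calls.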