enajx/HebbianMetaLearning

bias in the linear layer

Howuhh opened this issue · 1 comment

Hi!
I really curious to know why biases are disabled in all layers (Linear, CNN). What's the reason behind that? There is D_w in hebbian update rule, but, as i understand, that's not the same. It affects weights, not activations.

Can this hurt performance? Can the Hebbian rule be adapted to update network biases as well? Or is there no need, and I'm missing something?

Thanks!

enajx commented

It comes down to the idea of Hebbian learning: the weights are meant to change as a result of the neuron's pre- and post-synaptic activations, with no bias involved. You could also co-evolve biases along with the Hebbian rules, but we haven't explored that since we found we didn't need them.
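
For concreteness, here is a minimal NumPy sketch of an ABCD-style Hebbian update on a bias-free linear layer, roughly in the spirit of the rule discussed above. All names (`hebbian_update`, the coefficient shapes, `eta`) are illustrative assumptions for this sketch, not the repo's actual API.

```python
import numpy as np

def hebbian_update(W, pre, post, A, B, C, D, eta):
    """ABCD Hebbian update for a bias-free linear layer.

    W:    (out, in) weight matrix
    pre:  (in,)  pre-synaptic activations (layer input)
    post: (out,) post-synaptic activations (layer output)
    A, B, C, D, eta: per-weight coefficients, each shaped like W
    """
    # Correlation term: post_i * pre_j, broadcast to the shape of W.
    corr = np.outer(post, pre)
    dW = eta * (A * corr + B * pre[None, :] + C * post[:, None] + D)
    return W + dW

# With no bias, post depends only on W @ pre, so the plasticity rule
# above is the only thing shaping the layer between evaluations.
rng = np.random.default_rng(0)
n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in)) * 0.1
pre = rng.standard_normal(n_in)
post = np.tanh(W @ pre)
coeffs = [rng.standard_normal((n_out, n_in)) for _ in range(5)]
W = hebbian_update(W, pre, post, *coeffs)
```

One hypothetical way to make a bias plastic under the same rule would be to treat it as an extra weight connected to a constant input of 1 (i.e. `pre = 1` for that connection), but that is an assumption about a possible extension, not something explored here.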