TimDettmers/sparse_learning

condition to regrow the connection!!

vkvats opened this issue · 1 comment

This is a great paper, full of information and ideas. Thank you for this amazing work.

While reading I came across this line: "we want to look at the momentum magnitude of “missing” or zero-valued weights, that is, we want to look at those weights which have been excluded from training before." I was wondering: if some weight has a large momentum, and assuming this momentum value has accumulated over several updates and not just a single update, why was this connection missing in the first place? Is it because the connection-regrowing step is not done frequently enough? And if so, could regrowing connections more frequently give faster convergence?

Thank you for your time and attention.
Vibhas.

The "missing" weights are weights that are 0. The gradient can still be computed for those weights, and the momentum is just the exponential mean of these gradients over time. As such, it is possible that the momentum buffers for these missing weights are large. Regrowing connections more frequently might yield faster convergence. I tried it a little and did not see any obvious improvement, but I also did not test it carefully. I think it is an open research question how the frequency of changing the sparsity pattern relates to time to convergence.
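To make the mechanism concrete, here is a minimal NumPy sketch of the idea described above: even though some weights are masked to zero, their gradients still arrive every step, so an exponential-mean momentum buffer keeps accumulating for them, and regrowth can pick the missing weights with the largest momentum magnitude. All names (`mask`, `momentum`, `beta`, `k`) and the synthetic random gradients are illustrative assumptions, not the repo's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 10
weights = rng.normal(size=n)
mask = np.ones(n, dtype=bool)
mask[:5] = False            # first 5 connections are "missing" (pruned)
weights[~mask] = 0.0        # missing weights are held at zero

momentum = np.zeros(n)
beta = 0.9                  # exponential-averaging coefficient (assumed)

# Gradients are still computed for zero-valued weights, so their
# momentum buffers keep accumulating across updates.
for _ in range(100):
    grad = rng.normal(size=n)           # stand-in for real gradients
    momentum = beta * momentum + (1 - beta) * grad

# Regrowth step: among the *missing* weights, re-enable the k
# connections with the largest momentum magnitude.
missing = np.flatnonzero(~mask)
k = 2
regrow = missing[np.argsort(-np.abs(momentum[missing]))[:k]]
mask[regrow] = True         # these connections re-enter training

print("regrown indices:", regrow)
print("active connections:", int(mask.sum()))
```

This is only a sketch of the selection rule; the paper's method also prunes small-magnitude weights and redistributes parameters across layers, which is omitted here.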