cjlin1/libmf

Hi, how can I decrease the learning rate as the number of iterations increases?

Closed this issue · 1 comments

I found that if the learning rate is too large, convergence becomes very slow as the number of iterations increases.

Hi @AlecWong,
Please see Algorithm 1 in our supplementary material (http://www.csie.ntu.edu.tw/~cjlin/papers/libmf/libmf_supp.pdf).
At the beginning of every iteration, the learning rate eta is divided by the square root of the accumulated squared gradient.
One way to decrease the learning rate as the number of iterations increases is to add an extra 1 each time the accumulated squared gradient is updated.
Thanks for your question; any discussion is welcome.
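To illustrate, here is a minimal scalar sketch of an AdaGrad-style update with the suggested extra 1 added to the accumulated squared gradient. This is not LIBMF's actual code; the names (`eta0`, `acc`, `extra`) and the scalar setting are assumptions for illustration only.

```python
import math

def sgd_with_decay(grads, eta0=0.1, extra=1.0):
    """Run AdaGrad-style updates on a scalar weight.

    Returns the final weight and the effective learning rate used at
    each step.  Adding `extra` (here 1.0) to the accumulator on every
    update guarantees the accumulator grows each iteration, so the
    effective rate eta0 / sqrt(acc) shrinks even when gradients are
    small or zero.  (Illustrative sketch, not LIBMF's internals.)
    """
    w, acc, rates = 0.0, 1.0, []
    for g in grads:
        acc += g * g + extra              # extra=1.0 forces acc to grow each step
        rate = eta0 / math.sqrt(acc)      # effective learning rate this step
        rates.append(rate)
        w -= rate * g
    return w, rates

_, rates = sgd_with_decay([0.5, 0.0, 0.2, 0.0])
# The effective rate decreases even on steps with zero gradient,
# because the extra 1 still inflates the accumulator.
assert all(rates[i] > rates[i + 1] for i in range(len(rates) - 1))
```

Without the extra 1, a run of near-zero gradients would leave the accumulator (and hence the learning rate) essentially unchanged; the added constant makes the decay strictly monotone in the iteration count.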