glm-tools/pyglmnet

cdfast convergence based on ratio norm(delta_beta) / norm(beta)

cxrodgers opened this issue · 2 comments

One thing I've noticed is that models with large beta take a lot longer to reach convergence than those with a small beta. I wonder if it has to do with this line:

if (t > 1) and (np.linalg.norm(beta - beta_old) < tol):

Convergence is assessed by:
np.linalg.norm(beta - beta_old) < tol

Why not:
np.linalg.norm(beta - beta_old) / np.linalg.norm(beta) < tol
Would that make sense? Basically, we just want to know that our estimate of beta is not changing by more than X% (e.g., 0.1%) between iterations. I find this more interpretable than an absolute change in the beta values.

But I don't know if this makes sense from a theoretical perspective. Thanks for any comments!
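To make the idea concrete, here is a minimal sketch of the relative criterion (the names beta, beta_old, t, and tol come from the snippet above; the eps guard is my own addition, to avoid dividing by zero when beta is near zero):

import numpy as np

def has_converged(beta, beta_old, t, tol, eps=1e-12):
    # Relative change in the coefficients between iterations:
    # ||beta - beta_old|| / ||beta||, guarded against ||beta|| ~ 0.
    if t <= 1:
        return False
    denom = max(np.linalg.norm(beta), eps)
    return np.linalg.norm(beta - beta_old) / denom < tol

With this, tol = 1e-3 reads as "stop once beta changes by less than 0.1% per iteration", regardless of the overall scale of beta.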

Hmm ... what you say makes sense to me. @pavanramkumar, any comments?

You might also want to check out #278. It's not merged yet because the fix didn't seem clean, but perhaps you have an opinion there.

We have now changed the convergence criterion, and it pretty much aligns with what you proposed, @cxrodgers. Closing this for now. Feel free to reopen if you think the problem persists.