jwcalder/GraphLearning

Strange numerical behavior with utils.conjgrad


Hi, I noticed some strange numerical behavior in the utils.conjgrad() method when applied to certain systems. I have been using it to solve systems of the form $(L + I)X = B$, where $L$ is a graph Laplacian and $B$ is a scaled random normal matrix. When the norm of $L$ grows very large, the conjugate gradient implementation in graphlearning.utils sometimes converges to a poor solution, as measured by the discrete $\ell^2$ norm of the residual.
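
For reference, this is a minimal sketch of the kind of check I'm doing. The graph setup here is hypothetical (a random symmetric weight matrix built with scipy.sparse rather than graphlearning's own graph construction), and the direct solve is just a stand-in for the solver under test; in my experiments `X` comes from utils.conjgrad instead.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

n = 500
rng = np.random.default_rng(0)

# Random symmetric weight matrix and the corresponding (unnormalized) graph Laplacian.
W = sparse.random(n, n, density=0.02, random_state=0)
W = W + W.T
L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W

scale = 1e6                            # large ||L|| is where the bad solutions show up
A = (scale * L + sparse.identity(n)).tocsc()
B = rng.standard_normal(n)

# Stand-in direct solve; replace with the CG solver being tested.
X = spsolve(A, B)
print(np.linalg.norm(A @ X - B))       # discrete l2 norm of the residual
```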

In practice, it looks like making a one-line edit to the initialization of the search direction, changing p=r on line 517 to p=r.copy(), resolves the issue. It could have something to do with side effects from how Python handles references and in-place array updates, but I'm not sure.
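
For context, here is a minimal sketch of a textbook CG loop (not the actual code in utils.conjgrad, just an illustration of the structure I have in mind) showing why the copy matters when the residual is updated in place:

```python
import numpy as np

def conjgrad_sketch(A, b, x0=None, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()          # with p = r, the in-place update of r below corrupts p
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap   # in-place: if p aliases r, p changes here too
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # rebinds p, so later iterations are fine
        rs_old = rs_new
    return x
```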

Here is a Colab notebook reproducing the issue: colab.

Hi Chester,

Thanks for noticing this; it's indeed a bug. p=r simply points p at the same array as r, and does not copy, so the two variables are linked and an in-place update to one also updates the other. The aliasing is likely broken after the first iteration, when p is reassigned at the end of the loop, at which point the remaining iterations should be fine. For some reason the error in that first step only causes visible problems when the norm of L is large.
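
A small illustration of the aliasing, just plain NumPy and nothing specific to the solver:

```python
import numpy as np

r = np.array([1.0, 2.0, 3.0])
p = r            # p and r reference the same array
r -= 1.0         # in-place update of r
print(p)         # [0. 1. 2.]  -- p changed too

r = np.array([1.0, 2.0, 3.0])
p = r.copy()     # independent copy
r -= 1.0
print(p)         # [1. 2. 3.]  -- unaffected
```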

Anyway, I've corrected it now in version 1.6.0. Let me know if you have any other issues.

Best,
Jeff