JuliaManifolds/Manopt.jl

Quasi-Newton update rules issues

mateuszbaran opened this issue · 8 comments

I'm currently working on some updates to quasi-Newton direction update rules. Here are a few things to check/consider:

  • `QuasiNewtonCautiousDirectionUpdate` doesn't seem to use the `θ` function anywhere.
  • `QuasiNewtonLimitedMemoryDirectionUpdate` doesn't seem to use `scale`.
  • Some direction updates need a reset mechanism called by `initialize_solver!`, and maybe also when the direction stops being a descent direction. That shouldn't happen for a well-designed set of options, but it's a bit hard to tell which ones are safe from that issue (a rough sketch of such a reset follows at the end of this comment).
  • Unify projections to a custom function instead of a boolean flag.

I'm also working on a variant of the direction update for manifolds with corners, inspired by L-BFGS-B. There are different approaches with elaborate line searches and update matrices, but I'm aiming for something relatively simple and generic. I'm not trying to compete with the Fortran codes; my goal is to make combined manifold and box-constrained optimization work relatively well.
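For the third point, a reset hook could look roughly like this (a rough sketch; the name `reset!` is mine, and I'm assuming the memory buffers support `empty!`):

```julia
# Hypothetical reset hook, to be called from initialize_solver! and possibly
# also when the computed direction stops being a descent direction.
# `reset!` and the field details are assumptions, not current Manopt.jl API.
function reset!(d::QuasiNewtonLimitedMemoryDirectionUpdate)
    empty!(d.memory_s)  # stored steps s_k
    empty!(d.memory_y)  # stored gradient differences y_k
    return d
end
```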

I started looking a bit into these points.

Ad 1) The function is used when updating the Hessian at

bound = d.θ(norm(M, p, X))

or
bound = d.θ(norm(M, p_old, get_gradient(mp, p_old)))
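For context, the bound then enters the cautious rule roughly as follows (a sketch reusing the names from the snippets above; the delegated call to the wrapped update and the `update` field are assumptions):

```julia
# Cautious update sketch: the wrapped (L-)BFGS update is only applied when
# the curvature ratio exceeds the bound; otherwise the previous Hessian
# approximation is kept unchanged.
bound = d.θ(norm(M, p_old, get_gradient(mp, p_old)))
if inner(M, p_old, st.yk, st.sk) / norm(M, p_old, st.sk)^2 > bound
    update_hessian!(d.update, mp, st, p_old, iter)  # delegate to wrapped rule
end
```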

Ad 2) That might indeed be the case; I could only find the scale in the update of the full-matrix variants:

if iter == 1 && d.scale == true
    d.matrix = skyk_c / inner(M, p, st.sk, st.sk) * d.matrix
end

I am not sure where the scaling got “lost” in one of the reworks, but we can surely bring that back.
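If `scale` became a real factor, that snippet might simply turn into (a sketch of the proposal, not merged code):

```julia
# Sketch: `scale` as a real factor instead of a Bool, applied on the first
# iteration; field names as in the snippet above.
if iter == 1
    d.matrix = d.scale * skyk_c / inner(M, p, st.sk, st.sk) * d.matrix
end
```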

Ad 3) Sure, I am not 100% sure how to realise that, but when we have the iteration number available, doing a reset when it is zero sounds reasonable. When we wrote this code, not much thought was given to reusing the state (and hence the updates), for sure.

  • I also noticed that one of the recent updates, which was meant to introduce a stability idea, did

project::Bool=true,

and
d.project && embed_project!(M, r, p, r)

which is basically what we have used before, just a bit more generic (not just a boolean):

* `project!`: (`copyto!`) for numerical stability it is possible to project onto

and then
tcgs.project!(M, tcgs.δ, p, tcgs.δ)

which might be a bit nicer, since it also allows for other ways to take care of stability.
We should both unify this and probably also switch the second one to `embed_project!` (since that default predates the embed/project discussion in Manifolds.jl).
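For illustration, the unified pattern could look like this (a sketch; the wrapper function and its name are hypothetical):

```julia
# One function-valued keyword replaces the Bool flag: `copyto!` is the no-op
# default (copies the direction onto itself), `embed_project!` restores the
# stabilizing projection.
function compute_direction!(M, r, p; project!::Function=copyto!)
    # ... compute the new search direction into r ...
    project!(M, r, p, r)  # no-op for copyto!, projection for embed_project!
    return r
end
```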

> Ad 1) The function is used when updating the Hessian at

I see, thanks.

> Ad 3) Sure, I am not 100% sure how to realise that, but when we have the iteration number available, doing a reset when it is zero sounds reasonable. When we wrote this code, not much thought was given to reusing the state (and hence the updates), for sure.

I have it sketched already, so I will just ask for a review when it's ready 🙂

> We should both unify this and probably also switch the second one to `embed_project!` (since that default predates the embed/project discussion in Manifolds.jl).

Yes, unifying that is a good idea.

If you see how to correctly add the scaling (and unify it to be a real value), feel free to add that as well. I am sure it was from one of the Huang papers about QN.

I think the last point mentioned here should be addressed now in #392, since I removed `stabilize` and introduced `project!`.

Edit: to be precise, it is described at

Manopt.jl/Changelog.md

Lines 35 to 40 in 649bc7e

* changed the `stabilize::Bool=` keyword in `quasi_Newton` to the more flexible `project!=`
  keyword, this is also more in line with the other solvers. Internally the same is done
  within the `QuasiNewtonLimitedMemoryDirectionUpdate`. To adapt,
  * the previous `stabilize=true` is now set with `(project!)=embed_project!` in general,
    and if the manifold is represented by points in the embedding, like the sphere, `(project!)=project!` suffices
  * the new default is `(project!)=copyto!`, so by default no projection/stabilization is performed.
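For reference, a minimal usage sketch of the new keyword on the sphere (the objective here is illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2p[1], 0.0, 0.0])  # Riemannian gradient
p0 = [0.5, 0.5, sqrt(0.5)]

# previously `stabilize=true`; the parentheses keep `project!=` from being
# parsed as `!=`. On the sphere, the plain `project!` suffices.
p_opt = quasi_Newton(M, f, grad_f, p0; (project!)=project!)
```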

I also checked for the scaling: if you compare the scaling applied between the backward and forward pass in

https://github.com/NicolasBoumal/manopt/blob/3d742b5a4755bb1f256fba1d2dbe6de17b9e5c2f/manopt/solvers/bfgs/rlbfgs.m#L531

the scaling factor should be added in this line

r ./= d.ρ[last_safe_index] * norm(M, p, d.memory_y[last_safe_index])^2

so we could/should write

r .*= d.scale / (d.ρ[last_safe_index] * norm(M, p, d.memory_y[last_safe_index])^2)

I can add that to the PR we are doing as well, since we would change it to a real in that step anyway.
Ah, and it is not 100% the same as in the Matlab version, since there they have a function that does the gradient-norm idea, as we already do. So our `scale` is an additional scaling factor one can specify.
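For context, here is the textbook two-loop recursion (Nocedal & Wright) with the initial scaling γ applied between the two passes; vector transports and edge cases are omitted, and the extra `scale` factor discussed above simply multiplies γ:

```julia
using ManifoldsBase

# Euclidean-flavored sketch of the L-BFGS two-loop recursion; X is the
# current gradient and ρ[i] = 1 / ⟨s_i, y_i⟩.
function two_loop(M, p, X, memory_s, memory_y, ρ; scale=1.0)
    q = copy(X)
    α = zeros(length(memory_s))
    for i in length(memory_s):-1:1                 # backward pass
        α[i] = ρ[i] * inner(M, p, memory_s[i], q)
        q .-= α[i] .* memory_y[i]
    end
    # initial scaling γ = ⟨s, y⟩ / ‖y‖² = 1 / (ρ ‖y‖²), matching the
    # `r ./= d.ρ[...] * norm(...)^2` line above
    γ = 1 / (ρ[end] * norm(M, p, memory_y[end])^2)
    r = (scale * γ) .* q                           # scaling between the passes
    for i in 1:length(memory_s)                    # forward pass
        β = ρ[i] * inner(M, p, memory_y[i], r)
        r .+= (α[i] - β) .* memory_s[i]
    end
    return r  # ≈ (scaled) inverse Hessian approximation applied to X
end
```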

Oh, and the full-matrix variant also has a Bool that we would change to a real and then scale, basically at

copyto!(d.matrix, I)

as well.
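That could look like this (sketch):

```julia
# Sketch: initialize the full-matrix approximation to a scaled identity once
# `scale` is a real value (mirroring the proposal above); I is the identity
# from LinearAlgebra, as in the snippet quoted above.
copyto!(d.matrix, I)
d.matrix .*= d.scale
```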

Nice, so that PR would completely resolve this 👍

Yes, I can do that tomorrow after my oral exams.