lindonroberts/trust-region

pure quadratic convex maximization fails


It is my understanding that the trust-region subproblem (TRS) is solvable to global optimality even in the case of convex maximization (i.e. a problem that would be unbounded if unconstrained). However, with the code

    import trustregion
    import numpy as np

    n = 5
    g = np.zeros(n)
    H = -np.eye(n)
    delta = 1.0
    x = trustregion.solve(g, H, delta)

I would expect a nonzero solution with a negative objective value. However, I get a solution of 0.
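To spell out the expected behaviour (a numpy-only check, independent of the package): with g = 0 and H = -I, any point on the trust-region boundary attains model value -delta^2/2, strictly below the value 0 at the origin:

```python
import numpy as np

n = 5
g = np.zeros(n)
H = -np.eye(n)
delta = 1.0

def model(x):
    # TRS objective: m(x) = g.x + 0.5 * x.H.x
    return g @ x + 0.5 * x @ H @ x

# any point with ||x|| = delta; here delta * (1, ..., 1) / sqrt(n)
x_boundary = delta * np.ones(n) / np.sqrt(n)

print(model(np.zeros(n)))  # 0.0
print(model(x_boundary))   # -delta**2 / 2, up to rounding
```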

Using Python 3.8, installed via pip.

Hi, you're right - you can solve the TRS to global optimality even if the Hessian is not positive definite. The algorithms implemented in the package don't find this global solution; instead, they find only an approximate solution, which here is x = 0 since this is a stationary point of the TRS objective (I get the same result as you for your example).

For example, if you change your choice of g to something that is strictly nonzero:

    g = 1e-15 * np.ones(n)

then you get a global solution. The implementations here give good enough solutions to guarantee convergence of a trust-region method to first-order stationary points (but not necessarily second-order stationary points).
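To illustrate why a tiny nonzero gradient changes the outcome, here is a minimal sketch of a Steihaug-style truncated-CG solver (illustrative only; not the actual algorithm this package uses). With g exactly zero the initial residual is zero and the iteration stops at x = 0, while any nonzero g produces a first direction along which the negative curvature of H pushes the step to the boundary:

```python
import numpy as np

def steihaug_cg(g, H, delta, tol=1e-16, maxiter=100):
    # Truncated CG sketch for min g.x + 0.5 x.H.x s.t. ||x|| <= delta
    n = len(g)
    x = np.zeros(n)
    r = g.copy()
    if np.linalg.norm(r) <= tol:
        return x  # g = 0: x = 0 is stationary, CG stops immediately
    p = -r

    def to_boundary(x, p):
        # solve ||x + tau*p|| = delta for tau > 0
        a, b, c = p @ p, 2 * (x @ p), x @ x - delta**2
        tau = (-b + np.sqrt(b**2 - 4 * a * c)) / (2 * a)
        return x + tau * p

    for _ in range(maxiter):
        Hp = H @ p
        pHp = p @ Hp
        if pHp <= 0:
            return to_boundary(x, p)  # negative curvature: go to boundary
        alpha = (r @ r) / pHp
        x_new = x + alpha * p
        if np.linalg.norm(x_new) >= delta:
            return to_boundary(x, p)  # step leaves region: stop at boundary
        r_new = r + alpha * Hp
        if np.linalg.norm(r_new) <= tol:
            return x_new
        p = -r_new + ((r_new @ r_new) / (r @ r)) * p
        x, r = x_new, r_new
    return x

n = 5
H = -np.eye(n)
delta = 1.0
print(steihaug_cg(np.zeros(n), H, delta))        # all zeros
print(np.linalg.norm(steihaug_cg(1e-15 * np.ones(n), H, delta)))  # ~1.0
```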

I think it would be worthwhile adding an implementation of a procedure to compute the full global solution.
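For small dense problems, one way to compute the global solution is via an eigendecomposition of H, including the "hard case" (g orthogonal to the eigenspace of the smallest eigenvalue, as in this issue where g = 0). This is a hypothetical sketch, not part of the trustregion package:

```python
import numpy as np

def trs_global(g, H, delta, tol=1e-12):
    # Global TRS solution via eigendecomposition (small dense H only).
    # Optimality: x = -(H + sigma*I)^{-1} g with sigma >= max(0, -lam_min)
    # and either ||x|| <= delta with sigma = 0, or ||x|| = delta.
    lam, V = np.linalg.eigh(H)   # lam sorted ascending
    gt = V.T @ g                 # gradient in eigenbasis
    lam_min = lam[0]

    def x_of(sigma):
        return -gt / (lam + sigma)

    if lam_min > 0:
        x = x_of(0.0)
        if np.linalg.norm(x) <= delta:
            return V @ x  # unconstrained minimizer is interior

    lo = max(0.0, -lam_min) + tol
    if np.linalg.norm(x_of(lo)) < delta:
        # hard case: pad with a component along the lam_min eigenvector
        x = x_of(lo)
        x[0] += np.sqrt(delta**2 - x @ x)
        return V @ x

    # otherwise bisect on sigma for ||x(sigma)|| = delta (norm is decreasing)
    hi = lo + 1.0
    while np.linalg.norm(x_of(hi)) > delta:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(x_of(mid)) > delta:
            lo = mid
        else:
            hi = mid
    return V @ x_of(0.5 * (lo + hi))

n = 5
g = np.zeros(n)
H = -np.eye(n)
x = trs_global(g, H, 1.0)
print(np.linalg.norm(x))           # 1.0: boundary solution despite g = 0
print(g @ x + 0.5 * x @ H @ x)     # -0.5, the global minimum value
```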

Thanks for the reply. I think the tiny nonzero gradient is a reasonable workaround for my current application (maximizing the per-step Hamiltonian in optimal control problems), but a guaranteed global solution would be very useful.