stevengj/nlopt

nlopt COBYLA optimizer gives unexpected output

wesselvaneeghen opened this issue · 5 comments

Hi all,
I am using several optimizers from nlopt in Python. I sometimes encounter an unexpected result when using the COBYLA optimizer.

I am trying to minimize a function subject to a non-linear constraint, and I give the optimizer a starting point that satisfies the constraint. COBYLA, however, returns a result that does not satisfy the constraint, while reporting that the optimization finished successfully. This is not what I expect to happen, and I cannot use the result.

Can someone please enlighten me on this issue? I added the code below. Thanks!

example_code_cobyla.txt

I can confirm the behaviour:

Optimization result satisfies constraint: False
[ 1.43056631e+00 -3.79219946e-16  1.28916135e-01  5.15591384e+00
 -3.10138817e+00  1.80302533e+00 -1.78555743e+00]

Would you be able to provide a little context about the problem you're trying to solve? Your objective and constraint functions both seem to just be doing some number juggling, so having a grasp of the maths might help.

After further testing, you seem to have found a magic value for the starting point, namely:

startpoint = np.array([6.69338188e-01,  -3.43007266e-16,   3.30661812e-01,
                       7.25335154e+00, 
                      -1.57398018e+01,   1.03509155e+01,   5.38888629e+00])

(reformatting mine)

If I repeat the process with various rounded (using .round()) versions of the input, and various other "random" inputs, I get convergence to values which satisfy the constraint.

For your input array z = (z0 z1 z2 z3 z4 z5 z6) you seem to break it into
X = (z0 z1 z2), c = z3 and Y = (z4 z5 z6) (naming convention mine), and define:
f = { ||Y|| for ||Y|| > 1; sqrt(||Y||) otherwise }
fc = X·Y + c + 1
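For concreteness, here is my reconstruction of the two callbacks in nlopt's Python callback signature (a sketch, not the attached script; the names and the sign convention of the constraint, which nlopt enforces as fc(z) <= 0 for inequality constraints, are my assumptions):

```python
import math

def objective(z, grad):
    # COBYLA is derivative-free, so grad is empty and can be ignored.
    # Y = (z4, z5, z6); f = ||Y|| if ||Y|| > 1, sqrt(||Y||) otherwise
    n = math.sqrt(z[4]**2 + z[5]**2 + z[6]**2)
    return n if n > 1.0 else math.sqrt(n)

def constraint(z, grad):
    # fc = X.Y + c + 1 with X = (z0, z1, z2) and c = z3
    return z[0]*z[4] + z[1]*z[5] + z[2]*z[6] + z[3] + 1.0
```

Note the kink in f at ||Y|| = 1 and the non-smoothness of sqrt(||Y||) at Y = 0, which may matter for how COBYLA's linear approximations behave near the solutions below.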

Although I can't comment on why your magic input leads to this "misbehaving" solution...

So, in your given case, the result code is 4 (XTOL_REACHED), which means that the x stopping criterion was reached.

I noticed that you set both the absolute and relative tolerances, for both f(x) and x.

I decided to disable the x absolute tolerance (just by commenting out the line), and interestingly, I got a different solution, which does satisfy the constraint:

Optimization result satisfies constraint: True
[ 1.45825290e+00 -4.39086148e-16 -2.54990815e-02 -1.58698803e+00
  2.78030139e-05  2.50365035e-05  1.57070194e-04]

Given that some of your variables are of the order of 1e-16, my suspicion is that (some of) your tolerance values of 1e-6 are inappropriate, but I'd have to think a bit longer before I could argue that rigorously.
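A quick back-of-the-envelope check of that suspicion (a sketch of the stopping logic, not nlopt's actual implementation): an absolute x tolerance of 1e-6 is trivially satisfied on any coordinate living at the 1e-16 scale, so that coordinate can never hold off termination, however far it still is from its optimum in relative terms.

```python
xtol_abs = 1e-6
coordinate_scale = 1e-16          # e.g. the ~e-16 component of the iterates
step = 100.0 * coordinate_scale   # even a huge *relative* move on it
print(abs(step) <= xtol_abs)      # True: always looks "converged"
```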

Furthermore, if I comment out both the x and f absolute tolerances (i.e. only have the relative tolerances set), I get yet another different solution, which also satisfies the constraints, and which gives return code 4 again (x stopping criterion):

Optimization result satisfies constraint: True
[ 1.45824731e+00 -4.39082689e-16 -2.54953761e-02 -1.58691325e+00
 -6.88049859e-06  1.69752289e-06 -3.91427719e-06]

It is my strong suspicion that this is the minimum you're really looking for.

For now, all I can suggest, @wesselvaneeghen, is that you read the section in the documentation on function and parameter tolerances: https://nlopt.readthedocs.io/en/latest/NLopt_Introduction/#function-value-and-parameter-tolerances

On the other hand, I'm still ignorant as to why setting these absolute tolerances causes "solution" values which violate the constraint...

@aphirst
Thanks a lot for taking an interest and exploring this issue.

I had actually given up on the COBYLA solver a few weeks ago, but I may revisit that in light of your findings.

I had decided to go with nlopt's SLSQP solver, which works great for me. In fact, for some reason nlopt's SLSQP seems to produce more stable output than the SLSQP implemented in SciPy, which I was using before. SciPy's SLSQP sometimes fails with the error 'Positive directional derivative for linesearch', whereas nlopt's SLSQP seems to be able to optimize further.

Anyway, mainly wanted to say thanks :)