embotech/ecos

ECOS incorrectly infeasible for small LP

Closed this issue · 6 comments

This is a copy from cvxgrp/cvxpy#216:

I made a small diet problem feasibility LP for a class, and ECOS seems to fail, while SCS and CVXOPT handle it fine.

The data:

from cvxpy import *
import numpy as np
h = np.array([.8, .4, .5])
c = np.array([0, .3, 2.0])
r = np.array([1, 2.1, 1.7])
p = np.array([1, .25])

The minimization version

x = Variable(2)
obj = Minimize(x.T*p)
constr = [
          h*x[0] + c*x[1] >= 50*r,
          x >= 0]

prob = Problem(obj, constr)
prob.solve(solver='ECOS')
print(prob.value)

gives an optimal value of 129.166666672. If I write a feasibility version of the problem:

x = Variable(2)
obj = Minimize(0)
constr = [x.T*p <= 130,
          h*x[0] + c*x[1] >= 50*r,
          x >= 0]

prob = Problem(obj, constr)
prob.solve(solver='ECOS')
print(prob.value)

ECOS incorrectly determines the problem to be infeasible, which is weird, since it just solved the minimization version of the problem without any trouble. SCS and CVXOPT don't have any trouble with either version.

Also, as @SteveDiamond pointed out, this doesn't look like a numerical tolerance issue, since loosening the bound in x.T*p <= 130 still comes back "infeasible".

Any idea what's going on?
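As a sanity check (not part of the original report), the feasibility version can be verified by hand with NumPy alone. The candidate point below is my own reconstruction of the optimum implied by the reported value 129.1666...: x0 is pinned by the first constraint row and x1 by the second.

```python
import numpy as np

h = np.array([.8, .4, .5])
c = np.array([0, .3, 2.0])
r = np.array([1, 2.1, 1.7])
p = np.array([1, .25])

# Reconstructed optimizer of the minimization version:
# .8*x0 >= 50 gives x0 = 62.5, then .4*62.5 + .3*x1 >= 105 gives x1 = 800/3.
x = np.array([62.5, 800.0 / 3.0])

# All three diet constraints hold (first two with equality, up to roundoff)...
print(np.all(h * x[0] + c * x[1] >= 50 * r - 1e-9))

# ...and the objective is ~129.1667 <= 130, so the feasibility LP with the
# bound x.T*p <= 130 provably has a feasible point.
print(x @ p)
```

This shows directly that the "infeasible" status from ECOS is wrong: an explicit feasible point exists.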

echu commented

My guess is that it has to do with how we check the conditions for primal
or dual infeasibility at stopping time. Any thoughts, @adomahidi?

Well, I guess that is the problem, but I'll have to investigate which point ECOS converges to, and whether the self-dual embedding certificates are numerically satisfied.

It seems that we have a couple of bugs in the stopping criteria; I know how to fix one, but that alone does not completely fix @ajfriend's issue. The first bug is in the relative gap, which is computed as gap/estimate_of_objective. Clearly we messed up here: it should be gap/MAX(estimate_of_objective, 1). I believe the second bug is that when the problem has no cost vector c, the infeasibility residual A'y + G'z = inf_res is identical to the dual residual A'y + G'z + tau*c = dual_res. Then, as the solution approaches optimality, we believe we are detecting a certificate of infeasibility.

Upon further inspection I can confirm that the issue is that the stopping criteria assume that if the residual A'y + G'z = pinfres becomes small while (-b'y - h'z)/normalization > reltol, then the algorithm has converged to a recession direction. However, when c = 0 this residual is identical to the dual residual A'y + G'z + tau*c = dres, so convergence to optimality looks like convergence to a certificate of infeasibility. We could check for pinfres < dres before declaring a problem infeasible, but I'm not sure that is the best way to go about it.
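A toy numeric illustration of the point above (random stand-in data, not the actual ECOS iterates): when c = 0, the two residuals coincide term by term, so the infeasibility test cannot be distinguished from the dual-feasibility test.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((6, 2))   # stand-in cone constraint matrix
z = rng.standard_normal(6)        # stand-in dual iterate
tau = 0.9                         # homogeneous-embedding scaling variable
c = np.zeros(2)                   # feasibility problem: zero cost vector

inf_res = G.T @ z                 # residual tested for primal infeasibility
dual_res = G.T @ z + tau * c      # dual residual of the KKT system

# With c = 0 the tau*c term vanishes, so the residuals are identical.
print(np.allclose(inf_res, dual_res))
```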

An infeasible problem is detected when the residual A'y + G'z = inf_res, with z in K*, has very small norm and the dual objective -b'y - h'z > 0 (is positive). This happens when (y, z) is a feasible recession direction for the dual.

However, when we write a feasibility problem like

from cvxpy import *
import numpy as np
h = np.array([.8, .4, .5])
c = np.array([0, .3, 2.0])
r = np.array([1, 2.1, 1.7])
p = np.array([1, .25])
x = Variable(2)
obj = Minimize(0)
constr = [x.T*p <= 130,
          h*x[0] + c*x[1] >= 50*r,
          x >= 0]

Then the dual residual of the optimality conditions

    A'y + G'z + \tau c = d_res 

is identical to the residual used to detect infeasibility (since c = 0). As the algorithm approaches optimality, the dual residual looks more and more like a certificate of infeasibility. The final piece of the puzzle is that across iterations the dual objective -b'y - h'z is not guaranteed to increase monotonically, so close to optimality it can take a value larger than zero. When that happens, the stopping criteria make a mistake like:

prob = Problem(obj, constr)
prob.solve(solver='ECOS')
print(prob.value)
print(prob.status)  # prints 'infeasible'

Increasing the precision fixes the issue, because the residual is then not accepted as evidence of infeasibility before the gap is small:

prob = Problem(obj, constr)
prob.solve(solver='ECOS', feastol = 1e-10)
print(prob.value)
print(prob.status)  # prints 'optimal'

I think the least intrusive change is to modify the condition from:

Infeasible if:
- not optimal, and
- the dual objective is positive, and
- the infeasibility residual A'y + G'z = inf_res has small norm.

to:

Infeasible if:
- not optimal, and
- the dual objective is positive, and
- the infeasibility residual A'y + G'z = inf_res has small norm, and
- tau < kappa.

And we should do the same for the unboundedness detection.
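In Python-flavoured pseudocode (illustrative names only; the real check lives in ECOS's C source), the amended criterion reads:

```python
# Hypothetical sketch of the amended infeasibility test. All names here are
# illustrative, not ECOS's actual variables.
def declares_infeasible(optimal, dual_obj, inf_res_norm, tau, kappa, feastol):
    # New guard: a true recession direction drives tau toward zero (below
    # kappa), while a converging optimal iterate keeps tau well above kappa.
    return (not optimal
            and dual_obj > 0
            and inf_res_norm < feastol
            and tau < kappa)

# A near-optimal iterate of the feasibility LP: small residual, slightly
# positive dual objective, but tau large. Previously a false positive;
# with the tau < kappa guard it is no longer flagged infeasible.
print(declares_infeasible(False, 1e-6, 1e-9, tau=0.8, kappa=1e-7, feastol=1e-8))
```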

This condition will fix this particular bug and should not affect problems in the wild: I do not know of any infeasible problems for which tau fails to drop below kappa really close to the solution, and of course in the limit that cannot happen.

Adding this condition fixes the issue.

from cvxpy import *
import numpy as np
h = np.array([.8, .4, .5])
c = np.array([0, .3, 2.0])
r = np.array([1, 2.1, 1.7])
p = np.array([1, .25])
x = Variable(2)
obj = Minimize(0)
constr = [x.T*p <= 129.35,
          h*x[0] + c*x[1] >= 50*r,
          x >= 0]
prob = Problem(obj, constr)
prob.solve(solver='ECOS')
print(prob.value)
print(prob.status)  # prints 'optimal'

Merged into version 2.0.3.