scipopt/PySCIPOpt

No dual solution available for constraint in LP with presolve disabled.

Viech opened this issue · 31 comments

Viech commented

I'm having issues with LP duals (with presolve disabled). Some constraints yield the correct duals, while other constraints in the same problem, during the same optimization pass, throw "no dual solution available for constraint" warnings. Setting the affected dual values to 0.0 is not correct; some of the affected constraints are supposed to have non-zero duals.

Unfortunately, my example in which some duals of tight constraints are returned while others are not is too involved for a bug report. However, here is an example with a single constraint where retrieving its dual fails even though the constraint is tight (its dual value should be 1.0):

#!/usr/bin/python
import pyscipopt
m = pyscipopt.Model()
x = m.addVar("x")
c = m.addCons(x >= 1)
m.setObjective(x, "minimize")
m.setPresolve(pyscipopt.scip.PY_SCIP_PARAMSETTING.OFF)
m.optimize()
m.getDualsolLinear(c)

Output on v1.2.0:

Traceback (most recent call last):
  File "src/pyscipopt/scip.pyx", line 1276, in pyscipopt.scip.Model.getDualsolLinear
  File "src/pyscipopt/scip.pyx", line 1228, in pyscipopt.scip.Model.getTransformedCons
  File "src/pyscipopt/scip.pyx", line 283, in pyscipopt.scip.Constraint.create
Warning: cannot create Constraint with SCIP_CONS* == NULL

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./test_scip_duals.py", line 9, in <module>
    m.getDualsolLinear(c)
  File "src/pyscipopt/scip.pyx", line 1292, in pyscipopt.scip.Model.getDualsolLinear
Warning: no dual solution available for constraint c1

Output on current f5956e0:

Traceback (most recent call last):
  File "src/pyscipopt/scip.pyx", line 1577, in pyscipopt.scip.Model.getDualsolLinear
  File "src/pyscipopt/scip.pyx", line 1529, in pyscipopt.scip.Model.getTransformedCons
  File "src/pyscipopt/scip.pyx", line 405, in pyscipopt.scip.Constraint.create
Warning: cannot create Constraint with SCIP_CONS* == NULL

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "./test_scip_duals.py", line 9, in <module>
    m.getDualsolLinear(c)
  File "src/pyscipopt/scip.pyx", line 1592, in pyscipopt.scip.Model.getDualsolLinear
Warning: no dual solution available for constraint c1

Python is 3.6.4, scipoptsuite is 5.0.1.

Could you please try disabling heuristics and propagation as well:

model.setHeuristics(pyscipopt.SCIP_PARAMSETTING.OFF)
model.disablePropagation()

We had a similar problem where SCIP modified the problem internally and then the LP solver couldn't provide the dual solution anymore.

On your example model, it is already enough to disable propagation to get the correct result.
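
For reference, a sketch of the first example with propagation disabled as the only extra change (presolve stays off as before); it should print the expected dual of 1.0:

#!/usr/bin/python
import pyscipopt

m = pyscipopt.Model()
x = m.addVar("x")
c = m.addCons(x >= 1)
m.setObjective(x, "minimize")
m.setPresolve(pyscipopt.SCIP_PARAMSETTING.OFF)
m.disablePropagation()  # the decisive change for this example
m.optimize()
print(m.getDualsolLinear(c))  # expected: 1.0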

Viech commented

Hello @mattmilten, I can confirm that this fixes the problem with my small example. Unfortunately, just like with the fix that was proposed in #137, I am now getting -0.0 for the previously affected constraints in my more complex example. I'll try to get a plaintext version of it or create another one.

For disablePropagation, what would be the correct way to reset this to its default?

These are SCIP's default values for propagation:

self.setIntParam("propagating/maxroundsroot", 1000)
self.setIntParam("propagating/maxrounds", 100)

Can't you call writeProblem() before solving?
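
For example (a sketch with a hypothetical model object; SCIP derives the output format from the file extension):

model.writeProblem("example.lp")  # dump the problem to a file before model.optimize()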

Viech commented

Here's my other example:

#!/usr/bin/python

import pyscipopt

M = pyscipopt.Model()

x = []
for i in range(5):
    x.append(M.addVar(name = "x[{}]".format(i), lb = -1e20, ub = 1e20))

M.setObjective( 1.0 * x[0] + 2.0 * x[1] + 3.0 * x[2] + 4.0 * x[3] + 5.0 * x[4], "maximize")

C = []
C.append(M.addCons(-1.0 * x[0] <= 1))
C.append(M.addCons( 1.0 * x[0] <= 1))
C.append(M.addCons(-2.0 * x[2] <= 1))
C.append(M.addCons( 2.0 * x[2] <= 1))
C.append(M.addCons(-2.0 * x[3] <= 1))
C.append(M.addCons( 2.0 * x[3] <= 1))
C.append(M.addCons(-1.0 * x[3] <= 1))
C.append(M.addCons( 1.0 * x[0] <= 1))
C.append(M.addCons( 1.0 * x[0] + 2.0 * x[2] <= 1))
C.append(M.addCons(-1.0 * x[0] - 2.0 * x[2] <= 1))
C.append(M.addCons( 1.0 * x[1] + 1.0 * x[2] + 1.0 * x[3] <= 1))
C.append(M.addCons(-1.0 * x[1] - 1.0 * x[2] - 1.0 * x[3] <= 1))
C.append(M.addCons( 1.0 * x[0] + 2.0 * x[1] <= 1))
C.append(M.addCons(-1.0 * x[0] - 2.0 * x[1] <= 1))
C.append(M.addCons( 1.0 * x[0] + 3.0 * x[2] + 1.0 * x[4] <= 1))
C.append(M.addCons(-1.0 * x[0] - 3.0 * x[2] - 1.0 * x[4] <= 1))

M.setPresolve(pyscipopt.SCIP_PARAMSETTING.OFF)
M.setHeuristics(pyscipopt.SCIP_PARAMSETTING.OFF)
M.disablePropagation()
M.optimize()

print("Obj:", M.getObjVal())
for j in range(len(C)):
    print("C[{}]:".format(j), M.getDualsolLinear(C[j]))

The output is:

Obj: 14.0
C[0]: -0.0
C[1]: -0.0
C[2]: -0.0
C[3]: -0.0
C[4]: -0.0
C[5]: -0.0
C[6]: -0.0
C[7]: -0.0
C[8]: 0.0
C[9]: 5.0
C[10]: 0.0
C[11]: 0.0
C[12]: 1.0
C[13]: 0.0
C[14]: 5.0
C[15]: 0.0

Solving the same problem with Gurobi, I get the same duals for C[8] to C[15], while C[3] = 1 and C[4] = 2 (all other duals are "positive" 0.0). Note that if you take adjacent pairs of constraints and sum up the absolute values of the differences of their duals, you get 14, the dual optimal value.
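
A quick arithmetic check of that observation, using Gurobi's duals as described (a sketch; the list is ordered C[0] through C[15]):

duals = [0, 0, 0, 1, 2, 0, 0, 0, 0, 5, 0, 0, 1, 0, 5, 0]
pairs = [(duals[j], duals[j + 1]) for j in range(0, len(duals), 2)]
print(sum(abs(a - b) for a, b in pairs))  # prints 14, the dual optimal value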

For completeness, here is the original test case written for PICOS. Both primal and dual tests pass for all of CPLEX, CVXOPT, GLPK, Gurobi, MOSEK and SMCP, while for my work-in-progress reimplementation of the SCIP interface only the primal test passes (the old SCIP implementation did not retrieve any duals):

    def setUp(self):
        # Data.
        c = picos.new_param("c", cvxopt.matrix([1,2,3,4,5]))
        A = picos.new_param("A", [
            cvxopt.matrix([1,0,0,0,0]).T,
            cvxopt.matrix([0,0,2,0,0]).T,
            cvxopt.matrix([0,0,0,2,0]).T,
            cvxopt.matrix([1,0,0,0,0]).T,
            cvxopt.matrix([1,0,2,0,0]).T,
            cvxopt.matrix([0,1,1,1,0]).T,
            cvxopt.matrix([1,2,0,0,0]).T,
            cvxopt.matrix([1,0,3,0,1]).T
        ])
        self.n = n = len(A)
        self.m = m = c.size[0]

        # Primal problem.
        self.P = P = picos.Problem()
        self.u = u = P.add_variable("u", m)
        P.set_objective("max", c|u)
        self.C = P.add_list_of_constraints(
            [abs(A[i]*u) < 1 for i in range(n)], "i", "[n]")

        # Dual problem.
        self.D = D = picos.Problem()
        self.z = z = [D.add_variable("z[{}]".format(i)) for i in range(n)]
        self.mu = mu = D.add_variable("mu", n)
        D.set_objective("min", 1|mu)
        D.add_list_of_constraints(
            [abs(z[i]) < mu[i] for i in range(n)], "i", "[n]")
        D.add_constraint(
            picos.sum([A[i].T * z[i] for i in range(n)], "i", "[n]") == c)

        self.skipUnsupported(P, D)

    def testPrimalSolution(self):
        self.primalSolve(self.P)
        self.expectObjective(self.P, 14.0)

    def testDualSolution(self):
        self.dualSolve(self.P)
        for i in range(self.n):
            self.z[i].set_value(self.C[i].dual[1] - self.C[i].dual[0])
        self.mu.set_value(cvxopt.matrix([abs(z_i.value) for z_i in self.z]))
        self.expectObjective(self.D, 14.0)

Sorry, I somehow lost track of this. Is this issue still relevant?

Viech commented

Tested again on current c317270. I get the same infeasible dual solution as before. Unfortunately, this is still the only reproducible example that I have; if you are having trouble debugging it, I could write a couple more basic LP test cases for PICOS, which I should do eventually anyway, and see if another one breaks.

I see, a basic LP test case would be much appreciated.

Viech commented

Sorry for the delay. I did find a smaller example where the returned duals are incorrect:

#!/usr/bin/python

import pyscipopt

M = pyscipopt.Model()
x = M.addVar(name = "x", lb = -1e20, ub = 1e20)
M.setObjective(x, "maximize")
C1 = M.addCons(-1.0 * x <= 1)
C2 = M.addCons( 1.0 * x <= 1)

M.setPresolve(pyscipopt.SCIP_PARAMSETTING.OFF)
M.setHeuristics(pyscipopt.SCIP_PARAMSETTING.OFF)
M.disablePropagation()
M.optimize()

print()
print("Obj  :", M.getObjVal())
print("x    :", M.getVal(x))
print("Duals:", M.getDualsolLinear(C1), M.getDualsolLinear(C2))

The output is:

presolving:
presolving (0 rounds: 0 fast, 0 medium, 0 exhaustive):
 0 deleted vars, 0 deleted constraints, 0 added constraints, 0 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients
 0 implications, 0 cliques
presolved problem has 1 variables (0 bin, 0 int, 0 impl, 1 cont) and 2 constraints
      2 constraints of type <linear>
Presolving Time: 0.00

 time | node  | left  |LP iter|LP it/n| mem |mdpt |frac |vars |cons |cols |rows |cuts |confs|strbr|  dualbound   | primalbound  |  gap   
* 0.0s|     1 |     0 |     0 |     - | 543k|   0 |   - |   1 |   2 |   1 |   0 |   0 |   0 |   0 | 1.000000e+00 | 1.000000e+00 |   0.00%
  0.0s|     1 |     0 |     0 |     - | 543k|   0 |   - |   1 |   2 |   1 |   0 |   0 |   0 |   0 | 1.000000e+00 | 1.000000e+00 |   0.00%

SCIP Status        : problem is solved [optimal solution found]
Solving Time (sec) : 0.00
Solving Nodes      : 1
Primal Bound       : +1.00000000000000e+00 (1 solutions)
Dual Bound         : +1.00000000000000e+00
Gap                : 0.00 %

Obj  : 1.0
x    : 1.0
Duals: 1.0 1.0

SCIP claims that x = 1 is an optimal primal solution, which is clearly correct. However, complementary slackness does not hold: the first constraint, -1.0 * x <= 1, is not tight, yet the dual returned for it is nonzero.
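
A minimal complementary-slackness check for this output, in plain arithmetic, using the reported solution x = 1 and the two returned duals:

x = 1.0  # the primal solution SCIP reports
for name, activity, dual in [("C1", -1.0 * x, 1.0), ("C2", 1.0 * x, 1.0)]:
    slack = 1.0 - activity  # both constraints have right-hand side 1
    print(name, "slack * dual =", slack * dual)  # must be 0 at an optimum
# C1 has slack 2.0 and dual 1.0, so slack * dual = 2.0: not complementary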

Thank you for providing such a small example for reproducing the error!

Oh, so the problem in our code was that we always assumed that the coefficient in the constraints representing bounds was one?

Yes, exactly.

Viech commented

I can confirm that the small example works now, but I'm still not getting what I'm expecting with my larger test case.

SCIP now returns the following duals:

+0.00
+0.00

+2.00
-0.00

-0.00
-4.00

-0.00
-0.00

+5.00
+0.00

-0.00
-0.00

+0.00
+1.00

+0.00
+5.00

CVXOPT, SMCP, GLPK, MOSEK and ECOS all produce the following duals (up to signs of near-zeros):

 0.00
 0.00

 1.00
 0.00

 0.00
 2.00

 0.00
 0.00

 5.00
-0.00

 0.00
-0.00

 0.00
 1.00

 0.00
 5.00

My test puts these values into the dual problem, and concludes that the dual solution given by SCIP is indeed infeasible.

EDIT: I have yet to extract the constraints from the test case such that the same output as above is produced when they are fed to SCIP directly. This is not so easy because the test case uses a higher-level representation of the problem. I'll add a script when I have one.

Viech commented

OK, have this fancy script:

#!/usr/bin/python

import pyscipopt

M = pyscipopt.Model()
x = []
C = []

for i in range(5):
    x.append(M.addVar(name = "x[{}]".format(i), lb = -1e20, ub = 1e20))

M.setObjective(
    1.0*x[0] + 2.0*x[1] + 3.0*x[2] + 4.0*x[3] + 5.0*x[4],
    "minimize"
)

C.append(M.addCons( 1.0 * x[0] - 1 <= 0))
C.append(M.addCons(-1.0 * x[0] - 1 <= 0))

C.append(M.addCons( 2.0 * x[2] - 1 <= 0))
C.append(M.addCons(-2.0 * x[2] - 1 <= 0))

C.append(M.addCons( 2.0 * x[3] - 1 <= 0))
C.append(M.addCons(-2.0 * x[3] - 1 <= 0))

C.append(M.addCons( 1.0 * x[0] - 1 <= 0))
C.append(M.addCons(-1.0 * x[0] - 1 <= 0))

C.append(M.addCons( 1.0 * x[0] + 2.0 * x[2] - 1 <= 0))
C.append(M.addCons(-1.0 * x[0] - 2.0 * x[2] - 1 <= 0))

C.append(M.addCons( 1.0 * x[1] + 1.0 * x[2] + 1.0 * x[3] - 1 <= 0))
C.append(M.addCons(-1.0 * x[1] - 1.0 * x[2] - 1.0 * x[3] - 1 <= 0))

C.append(M.addCons( 1.0 * x[0] + 2.0 * x[1] - 1 <= 0))
C.append(M.addCons(-1.0 * x[0] - 2.0 * x[1] - 1 <= 0))

C.append(M.addCons( 1.0 * x[0] + 3.0 * x[2] + 1.0 * x[4] - 1 <= 0))
C.append(M.addCons(-1.0 * x[0] - 3.0 * x[2] - 1.0 * x[4] - 1 <= 0))

M.setIntParam("display/verblevel", 0)
M.setPresolve(pyscipopt.SCIP_PARAMSETTING.OFF)
M.setHeuristics(pyscipopt.SCIP_PARAMSETTING.OFF)
M.disablePropagation()
M.optimize()

shouldBe = [0, 0, 1, 0, 0, 2, 0, 0, 5, 0, 0, 0, 0, 1, 0, 5]

print("Con | Negated Dual | Other Solvers | Matches")
for j in range(len(C)):
    negDual = -M.getDualsolLinear(C[j])
    should  = shouldBe[j]
    if j % 2 == 0:
        print("-"*44)
    print("{:3d} | {:12.2f} | {:13.2f} | {:>7s}"
        .format(j, negDual, should, "yes" if negDual == should else "NO"))

Right now the output is:

Con | Negated Dual | Other Solvers | Matches
--------------------------------------------
  0 |        -0.00 |          0.00 |     yes
  1 |        -0.00 |          0.00 |     yes
--------------------------------------------
  2 |         2.00 |          1.00 |      NO
  3 |        -0.00 |          0.00 |     yes
--------------------------------------------
  4 |        -0.00 |          0.00 |     yes
  5 |        -4.00 |          2.00 |      NO
--------------------------------------------
  6 |         0.00 |          0.00 |     yes
  7 |         0.00 |          0.00 |     yes
--------------------------------------------
  8 |         5.00 |          5.00 |     yes
  9 |         0.00 |          0.00 |     yes
--------------------------------------------
 10 |         0.00 |          0.00 |     yes
 11 |         0.00 |          0.00 |     yes
--------------------------------------------
 12 |         0.00 |          0.00 |     yes
 13 |         1.00 |          1.00 |     yes
--------------------------------------------
 14 |         0.00 |          0.00 |     yes
 15 |         5.00 |          5.00 |     yes

OK, so the problem is to compute, at an optimal solution, the dual multiplier of a constraint of the form l <= a*x <= u, where l, u, and a are real numbers (assume a > 0) and x is a single variable.

If the reduced cost of the variable x is c > 0, then x must be at its lower bound. The reduced cost measures the change in the objective when the lower bound of x changes by 1, while the dual multiplier of a constraint measures the change in the objective when the active side of the constraint changes by 1.

So, the lower bound is actually l/a <= x, and the reduced cost measures the change when the bound becomes l/a - 1.
The dual multiplier wants to know the change when the side becomes l - 1, that is, l - 1 <= a*x.

So, if the reduced cost is c > 0, then c/a should be the change in the objective when the side changes by 1.

Let us apply it to the example above:

  • Getting the dual multiplier of C[2]: 2.0 * x[2] - 1 <= 0.
    SCIP says the reduced cost is -2, so the upper bound of x[2] is active, that is, C[2] is binding. The objective changes by -2 when the variable bound changes by 1, so it changes by -1 = -2/2 when the side of the constraint changes by 1, which is correct.

  • Getting the dual multiplier of C[5]: -2.0 * x[3] - 1 <= 0.
    SCIP says the reduced cost is 4, so the lower bound of x[3] is active, that is, C[5] is binding. The objective changes by 4 when the variable bound changes by 1, so it changes by only 2 when the side of C[5] changes by 1.

So, the code should also divide by the absolute value of val (the constraint coefficient) when computing the dual multiplier from the reduced cost.
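
A hypothetical sketch of that conversion (not the actual PySCIPOpt code), applied to the two reduced costs from the bullets above:

def dual_from_redcost(redcost, coef):
    # The reduced cost measures the objective change when the variable
    # bound l/|coef| moves by 1; moving the constraint side l by 1 moves
    # the variable bound by only 1/|coef|.
    return redcost / abs(coef)

print(dual_from_redcost(-2.0, 2.0))   # C[2]: -1.0
print(dual_from_redcost(4.0, -2.0))   # C[5]:  2.0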

However, @mattmilten, I think there is a deeper problem: if a constraint appears many times, we will compute the same dual multiplier for every copy, but this is wrong. So I guess we can't really fix this at the SCIP or PySCIPOpt level, right?

Viech commented

I don't have a proof or a clear argument that the dual value produced by the other solvers for C[5] is the unique correct value (it does not help that the dual problem used by the test case is just one possible dual), but dividing SCIP's -4 by an absolute value alone won't yield a positive 2, so maybe another operation is necessary?

Viech commented

The other solvers' values are also brought into a form where they constitute a solution to PICOS' dual problem formulation, i.e., they are negated where necessary (in particular when the primal objective is maximize). In any case, if 1 and 2 are indeed the only correct duals for C[2] and C[5] (modulo negation), then your values of -2 and 4 (as SCIP outputs them, before I negate them) will not both become positive (resp. negative) by dividing each by some absolute value, and thus remain incorrect.

Again, I wish I had a smaller example that I fully understand and where uniqueness is clear. Maybe @gsagnol can shed some light on the test case, as he must be the original author. Since abs is used when putting the values into the dual problem, the sign might well not matter here…

Up to a change of variables, the dual problem asks to find the largest t such that t*c lies in a polytope with vertices a_i and -a_i. With the above values, this largest t is equal to 1/14. Now, if we can write 1/14*c as a convex combination of some a_i's and -a_i's, we get a dual solution z_i by dividing the weights of this combination by 14.

So, to check whether the dual solution is unique, we need to identify the face of the polytope with vertices ±a_i on which c/14 lies. The solution is unique iff this face is a simplex (i.e., if it has 5 vertices). Looking at the solution, we see that it works with the 5 vertices {-a1, a2, -a4, a6, a7}, but we must make sure that c/14 cannot be expressed as a convex combination of other vertices.
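
This combination can be verified numerically. A sketch in plain Python, assuming the a_i above are the 0-indexed rows of the A matrix from the PICOS test case, with signed weights z[i] taken from the other solvers' duals via the test's z[i] = dual[1] - dual[0] convention:

A = [
    [1, 0, 0, 0, 0],  # a0
    [0, 0, 2, 0, 0],  # a1
    [0, 0, 0, 2, 0],  # a2
    [1, 0, 0, 0, 0],  # a3
    [1, 0, 2, 0, 0],  # a4
    [0, 1, 1, 1, 0],  # a5
    [1, 2, 0, 0, 0],  # a6
    [1, 0, 3, 0, 1],  # a7
]
c = [1, 2, 3, 4, 5]
z = [0, -1, 2, 0, -5, 0, 1, 5]

# The dual equality constraint sum_i A[i]^T * z[i] == c holds ...
assert [sum(A[i][k] * z[i] for i in range(8)) for k in range(5)] == c
# ... and the weights |z[i]| sum to 14, so c/14 is a convex combination of
# the 5 vertices sign(z[i]) * a_i (here -a1, a2, -a4, a6, a7)
# with weights |z[i]|/14.
assert sum(abs(v) for v in z) == 14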

Although this is not a solution, I just added a note to the README advising users to avoid bound constraints when dual values are desired.

Viech commented

On PICOS end, I introduced support levels:

Support levels replace the class method Problem.supports and allow
solvers to give a more fine-grained rating on how well they support a
certain problem with the given solver options. This improves solver
selection as solvers that require a problem reformulation or offer only
experimental or limited support will have gradually lower priorities.
The test bench also makes use of support levels; tests are skipped by
default if support is subpar.

So SCIP won't be automatically selected for LPs unless duals are disabled by the user (and even less so for other conic problems, where it does not return duals at all). Once this is fixed, it'll take precedence over the fallback solver again.

@mattmilten May I ask whether this is an ongoing issue with the latest version (apparently it is still the case)? Does it only affect single-variable bound constraints, i.e., something like 0 <= x + y <= 1 should be OK? Thanks!

As far as I know, not much has changed in this regard. Sorry!

@mattmilten Thanks for the reply. May I say that as long as we don't use bound constraints (i.e., we make sure there are at least 2 variables in each constraint), we should be fine? Or do you think there is still some risk of getting wrong dual values?

I assume you should be good as long as you don't have bound constraints in your problem.

@mattmilten Thanks. May I ask whether this (the wrong dual values for bound constraints) is an issue with SCIP or with the linear solver (SoPlex) itself, so that I know whether using SoPlex (C++) directly would be an option? Thank you very much!

This is really an issue with how SCIP works: SCIP relies completely on the LP solver for dual values. It's definitely better to use the LP solver directly when working with dual values.

@mattmilten got you, thank you very much!

Viech commented

Just to be sure, this needs to be fixed in SCIP, not PySCIPOpt?

In the README there are some recommendations for getting correct dual values. Does that not work? I know it is annoying because of the "no bound constraints" condition.

For sure, a clean implementation of this should be done in SCIP and not in PySCIPOpt. However, I don't think this is going to get fixed.

This does not solve the inherent issue, but if we create a variable y = model.addVar(lb=0, ub=0) (so y == 0) and add it to the bound constraints, the discrepancies in the dual values seem to disappear. To be clear, this is what I mean:

    import pyscipopt

    M = pyscipopt.Model()
    x = []
    C = []

    for i in range(5):
        x.append(M.addVar(name = "x[{}]".format(i), lb = -1e20, ub = 1e20))
    y = M.addVar(lb=0,ub=0)

    M.setObjective(
        1.0*x[0] + 2.0*x[1] + 3.0*x[2] + 4.0*x[3] + 5.0*x[4],
        "minimize"
    )

    C.append(M.addCons( 1.0 * x[0] + y - 1 <= 0)) # y has to be added here
    C.append(M.addCons(-1.0 * x[0] + y - 1 <= 0))

    C.append(M.addCons( 2.0 * x[2] + y - 1 <= 0))
    C.append(M.addCons(-2.0 * x[2] + y - 1 <= 0))

    C.append(M.addCons( 2.0 * x[3] + y - 1 <= 0))
    C.append(M.addCons(-2.0 * x[3] + y - 1 <= 0))

    C.append(M.addCons( 1.0 * x[0] + y - 1 <= 0))
    C.append(M.addCons(-1.0 * x[0] + y - 1 <= 0))

    C.append(M.addCons( 1.0 * x[0] + 2.0 * x[2] - 1 <= 0))
    C.append(M.addCons(-1.0 * x[0] - 2.0 * x[2] - 1 <= 0))

    C.append(M.addCons( 1.0 * x[1] + 1.0 * x[2] + 1.0 * x[3] - 1 <= 0))
    C.append(M.addCons(-1.0 * x[1] - 1.0 * x[2] - 1.0 * x[3] - 1 <= 0))

    C.append(M.addCons( 1.0 * x[0] + 2.0 * x[1] - 1 <= 0))
    C.append(M.addCons(-1.0 * x[0] - 2.0 * x[1] - 1 <= 0))

    C.append(M.addCons( 1.0 * x[0] + 3.0 * x[2] + 1.0 * x[4] - 1 <= 0))
    C.append(M.addCons(-1.0 * x[0] - 3.0 * x[2] - 1.0 * x[4] - 1 <= 0))

    M.setIntParam("display/verblevel", 0)
    M.setPresolve(pyscipopt.SCIP_PARAMSETTING.OFF)
    M.setHeuristics(pyscipopt.SCIP_PARAMSETTING.OFF)
    M.disablePropagation()
    M.optimize()

    shouldBe = [0, 0, 1, 0, 0, 2, 0, 0, 5, 0, 0, 0, 0, 1, 0, 5]

    print("Con | Negated Dual | Other Solvers | Matches")
    for j in range(len(C)):
        negDual = -M.getDualsolLinear(C[j])
        should  = shouldBe[j]
        if j % 2 == 0:
            print("-"*44)
        print("{:3d} | {:12.2f} | {:13.2f} | {:>7s}"
            .format(j, negDual, should, "yes" if negDual == should else "NO"))

At least the script yields matches for all cases, which it does not without the y variable. This workaround also solved an issue I was having with my own code. @Viech, does this solve your issue?

Viech commented

That's a good catch! I will keep it in mind as a workaround for when I find some time to revisit the solver implementations. Though, on PySCIPOpt's end, I suggest returning None or raising an exception when one queries the dual of a bound constraint, which should be detectable. In my opinion this would resolve the issue on the Python interface side, while merely documenting the problematic behavior might not reach every user.
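
A minimal sketch of such a guard on the Python side; safe_dual is a hypothetical name, and this assumes getValsLinear (which returns the coefficient mapping of a linear constraint) can be used to detect bound-like constraints:

def safe_dual(model, cons):
    # Hypothetical guard: refuse a dual for a single-variable constraint,
    # since SCIP may treat it as a variable bound internally.
    if len(model.getValsLinear(cons)) < 2:
        return None  # or raise an exception, as suggested above
    return model.getDualsolLinear(cons)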