ds4dm/Tulip.jl

Expose presolve code

joehuchette opened this issue · 9 comments

@mtanneau mentioned at the last JuMP developers call that he was considering making the presolve code here in Tulip available for use in other packages. I'm starting work on a prototype solver for which this code would be useful. I'll eventually want to build out some MIP presolve routines on top.

What is your preferred way to do this? Would you prefer we depend on Tulip? Or would you consider splitting the code off into a separate package?

cc @BochuanBob and @Anhtu07

I think it makes sense to eventually break things off into smaller packages, especially if several projects use it.

At this point, Tulip's presolve is almost self-contained in src/Presolve. It is still tied to some Tulip-level data structures, notably ProblemData and, to a lesser extent, Model and Solution (which handle the interface with the internal IPM optimizers).

My current belief is that a "stand-alone" presolve package should be able to operate as follows:

  • Receive the original problem in some format
  • Perform presolve (this can be black-box)
  • Return presolved model (in a format similar to the original) and necessary ingredients for pre/post crush
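
The three-step workflow above could be sketched roughly as follows. This is a hypothetical illustration of the proposed API shape, not existing Tulip code: `presolve`, `postsolve`, and `PresolveResult` are made-up names, and the only reduction shown (eliminating variables with equal bounds) is a toy stand-in for a real presolve pass.

```julia
# Hypothetical sketch of: receive problem -> presolve -> return reduced
# problem plus the bookkeeping needed for postsolve (un-crush).
struct PresolveResult{T}
    reduced_c::Vector{T}   # objective of the presolved problem
    fixed::Dict{Int,T}     # variables eliminated (fixed) during presolve
    keep::Vector{Int}      # indices of surviving variables
end

# Toy "presolve": eliminate variables whose lower and upper bounds coincide.
function presolve(c::Vector{T}, lc::Vector{T}, uc::Vector{T}) where {T}
    fixed = Dict{Int,T}()
    keep = Int[]
    for j in eachindex(c)
        if lc[j] == uc[j]
            fixed[j] = lc[j]     # variable j is fixed; drop it
        else
            push!(keep, j)
        end
    end
    return PresolveResult(c[keep], fixed, keep)
end

# Postsolve: map a solution of the reduced problem back to original space.
function postsolve(res::PresolveResult{T}, x_reduced::Vector{T}) where {T}
    x = Vector{T}(undef, length(res.keep) + length(res.fixed))
    for (k, j) in enumerate(res.keep)
        x[j] = x_reduced[k]
    end
    for (j, v) in res.fixed
        x[j] = v
    end
    return x
end
```

The essential point is that `presolve` returns not just the reduced problem but a transformation record, so `postsolve` can recover a solution to the original problem from a solution of the reduced one.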

Pointer for some inspiration: COIN-OR's OsiPresolve.

Some questions to fuel the discussion:

  • What classes of problems? (MI)LP? (MI)Conic? (MI)NLP?
  • What would it be interfaced to? MOI? Something lower-level?

cc @frapac

We only care about MILP.

For our purposes, we don't necessarily want to be tied to a particular solver, and will eventually pipe the model through MOI anyway (to solve and/or bridge). So, working at the MOI level is probably best for us.

Is there anything about your current approach that will not map nicely to MOI?

We will also potentially want to disable certain presolve routines but not others (e.g. in order to keep problem dimensions the same). So baking that into the API would be very useful.

Internally, the presolve code works with an LP representation

min    c'x + c₀
s.t.   lr ≤ Ax ≤ ur
       lc ≤  x ≤ uc

where A is stored both row-by-row and column-by-column (to allow fast row-wise and column-wise access).
Bounds, right-hand sides, and integrality requirements (if you go the MILP route) are stored in vectors.
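
A data structure matching that LP form might look like the sketch below. The field names are illustrative, not Tulip's actual internals; the dual storage trick shown (keeping both A in CSC form and Aᵀ in CSC form, so that rows of A are cheap to access as columns of Aᵀ) is one common way to get the fast row-wise and column-wise access described above.

```julia
using SparseArrays

# Hypothetical container for  min c'x + c0  s.t.  lr ≤ Ax ≤ ur,  lc ≤ x ≤ uc
struct LPData{T}
    c::Vector{T}                    # objective coefficients c
    c0::T                           # objective constant c₀
    Acsc::SparseMatrixCSC{T,Int}    # A stored by columns (fast column access)
    Acsr::SparseMatrixCSC{T,Int}    # Aᵀ stored by columns, i.e. A by rows
    lr::Vector{T}                   # row lower bounds
    ur::Vector{T}                   # row upper bounds
    lc::Vector{T}                   # column (variable) lower bounds
    uc::Vector{T}                   # column (variable) upper bounds
    is_integer::BitVector           # integrality marks (MILP extension)
end

# Convenience constructor: build both orientations from a single matrix.
LPData(c, c0, A, lr, ur, lc, uc, is_integer) =
    LPData(c, c0, sparse(A), sparse(transpose(A)), lr, ur, lc, uc, is_integer)
```

Keeping the two orientations in sync after each reduction is the main cost of this layout, but row and column scans are both O(nnz) slices.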

To interface with MOI, you need to convert MOI -> LP and then back LP -> MOI.
It's not hard, and should become even easier once the functionality of MatrixOptInterface is merged into MOI.

I would be also interested in having a presolve package for https://github.com/exanauts/Simplex.jl
I do not know when MatrixOptInterface will be merged into MOI. Maybe it would make sense to build a MOIPresolve.jl on top of MatOI, but I do not know if that's the best solution available.

It seems reasonable to me to keep the internal representation the same, and then add an MOI and/or MatrixOptInterface layer on top as needed. Based on my brief look, I don't see any reason why you could not directly add the MIP information to your internal data structures. Generally, I'm not all that concerned about the indirection of going through MOI, so I'm fine with tying the API to MOI if others are as well.

Ideally, there would also be a programmatic way to configure the presolve! function.
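
One possible shape for such configuration is an options struct with per-reduction toggles, so callers can disable the routines that change problem dimensions (as requested earlier in the thread) while keeping dimension-preserving ones. Everything below is hypothetical; the rule names are invented for illustration.

```julia
# Hypothetical options for a configurable presolve!; none of these fields
# exist in Tulip, they only sketch per-rule enable/disable switches.
Base.@kwdef struct PresolveOptions
    remove_empty_rows::Bool = true        # changes problem dimensions
    remove_fixed_variables::Bool = true   # changes problem dimensions
    tighten_bounds::Bool = true           # dimension-preserving
    max_passes::Int = 10                  # cap on presolve iterations
end

# Keep problem dimensions unchanged by disabling eliminating reductions:
opts = PresolveOptions(remove_empty_rows = false,
                       remove_fixed_variables = false)
```

A `presolve!(pb, opts)` signature taking such a struct would then let each consumer pick its own subset of reductions.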

(By the way, we have some cycles to spend on making this happen, once we converge on a plan).

I suggest we take the discussion over to... 🚧 MathOptPresolve.jl 🚧

dpo commented

I'm also very interested in building upon Tulip's presolve but I don't use MOI. It would be great to simply pass an LP or QP in "matrix form" and receive a presolved problem in the same format. It seems there could be a low-level API with an MOI layer on top.

This issue has been stale for 2 years; closing.

For reference: