GAMS-dev/gams.jl

slow on iterative solves

Jovansam opened this issue · 7 comments

How can we pass the solvelink=5 option? This would be useful for a solution I am obtaining iteratively.

Actually, GAMS.jl forces the use of solvelink=5 at the moment. Do you need other solvelink values in other cases?

Really nice that you thought of setting that by default. Unfortunately, in my case it does not seem to be working. I have a model that I solve iteratively; it runs about 60 times. With Ipopt.jl this takes 1.7 seconds to go through all 60 solves. With KNITRO/CONOPT via GAMS.jl it takes 17 seconds. Of course the solutions are eventually the same, so my guess was to try solvelink. But then it must be something else.

Hm, that doesn't sound good. :-/ Are you able to share your model so I can have a close look? Otherwise I will try to find the bottleneck running some benchmark models.

The code below illustrates the issue. Just change the solver from Ipopt to GAMS. With Ipopt all iterations are done in 2 seconds; conversely, GAMS.jl takes 16 seconds.


```julia
using JuMP
using Ipopt
using GAMS


function NLP()
    nlp = Model(optimizer_with_attributes(Ipopt.Optimizer))
    #nlp = Model(optimizer_with_attributes(GAMS.Optimizer, "Solver" => "CONOPT","ModelType" => "NLP", "solve_link" => 5))
    set_silent(nlp)
    @variable(nlp, x>=0, start = 1)
    @NLobjective(nlp, Max, log(x))
    optimize!(nlp)
    display(value(x))
    display(termination_status(nlp))
end

function solve()
    cnt = 100
    while cnt > 3
        NLP()
        cnt -= 1
    end
end

@time solve()
```

I'm afraid I can't offer a good solution here. On my machine, a single solve with one iteration takes only approx. 0.002 s with Ipopt.jl, while GAMS.jl (with CONOPT) takes 0.014 s. This is of course a significant overhead that multiplies with each solve in this loop. Of this one GAMS solve, 14.5% is spent initializing the GAMS workspace (e.g. finding GAMS on your system), 1.0% translating the model to GAMS, and 74.2% running the GAMS program. Starting a process is quite expensive compared to calling an API directly. I also compared the time needed to run the process from Julia and from the command line, and it is pretty much the same, so I don't see a way to improve timing here. I would love to fill the GAMS data structures more directly in GAMS.jl using our APIs, but unfortunately this isn't currently possible due to legal restrictions.
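For reference, here is a minimal sketch of how the per-solve overhead can be measured on your own machine, assuming both Ipopt.jl and a local GAMS installation are available (the helper function `time_single_solve` is hypothetical and just rebuilds the toy NLP from the example above):

```julia
using JuMP, Ipopt, GAMS

# Build and solve the toy NLP once with a given optimizer factory,
# returning the wall-clock time of a single solve.
function time_single_solve(factory)
    nlp = Model(factory)
    set_silent(nlp)
    @variable(nlp, x >= 0, start = 1)
    @NLobjective(nlp, Max, log(x))
    return @elapsed optimize!(nlp)
end

# Warm up once per backend (compilation), then compare a single solve each.
time_single_solve(Ipopt.Optimizer)
time_single_solve(() -> GAMS.Optimizer())
@show time_single_solve(Ipopt.Optimizer)
@show time_single_solve(() -> GAMS.Optimizer())
```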

If you have a model that takes a long time to solve, such overhead is of course negligible, but that is not the case in your setup. The one thing I can do: I'll add an option that lets you pass the GAMSWorkspace, so that you can initialize it once instead of in every solve. But this won't fully solve the issue.

@odow Do you know how I could have an option of type GAMSWorkspace in MathOptInterface (via MOI.AbstractOptimizerAttribute)? I currently get the following error:

ERROR: MethodError: no method matching map_indices(::MathOptInterface.Utilities.var"#7#8"{MathOptInterface.Utilities.IndexMap}, ::GAMSWorkspace)

It seems that options are only allowed to take certain types. Or should a GAMSWorkspace not be passed as an MOI.AbstractOptimizerAttribute?

odow commented

For environments like this, the easiest way is probably to do it like Gurobi's Env.

Modify Optimizer to take a workspace argument:

```julia
workspace = GAMS.Workspace()
model = Model(() -> GAMS.Optimizer(workspace))
```

The map_indices thing is here: https://github.com/jump-dev/MathOptInterface.jl/blob/master/src/Utilities/functions.jl#L55-L163. You probably just want

```julia
MOI.Utilities.map_indices(::MOI.Utilities.IndexMap, x::GAMSWorkspace) = x
```

Thanks @odow, I followed the way Gurobi.jl does it.

Since GAMS.jl version 0.2.3, a GAMS workspace can be initialized with one of:

```julia
ws = GAMS.GAMSWorkspace()
ws = GAMS.GAMSWorkspace("<gams_system_dir>")
ws = GAMS.GAMSWorkspace("<gams_system_dir>", "<gams_working_dir>")
```

and the model then initialized with

```julia
model = Model(() -> GAMS.Optimizer(ws))
```

With this you are able to create a GAMSWorkspace once and use it for all models in your loop, @Jovansam. Unfortunately, it only reduces the execution time slightly. Because I cannot improve the GAMS command call at the moment, I'll close the issue. If you think the bottleneck is somewhere else, please don't hesitate to reopen the issue.
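Applied to the benchmark above, a minimal sketch of the reuse pattern might look like this (assuming GAMS.jl ≥ 0.2.3 and a local CONOPT license; the "Solver" attribute is taken from the commented-out line in the original example):

```julia
using JuMP
using GAMS

# Create the workspace once, outside the loop, so the GAMS system
# directory is located only a single time.
ws = GAMS.GAMSWorkspace()

function NLP(ws)
    # Reuse the existing workspace for every model instance.
    nlp = Model(() -> GAMS.Optimizer(ws))
    set_optimizer_attribute(nlp, "Solver", "CONOPT")
    set_silent(nlp)
    @variable(nlp, x >= 0, start = 1)
    @NLobjective(nlp, Max, log(x))
    optimize!(nlp)
    return value(x), termination_status(nlp)
end

function solve(ws)
    cnt = 100
    while cnt > 3
        NLP(ws)
        cnt -= 1
    end
end

@time solve(ws)
```

This removes the workspace initialization from each iteration; the remaining per-solve cost of launching the GAMS process, discussed above, stays the same.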