jmejia8/Metaheuristics.jl

Error in GA global optimisation

Corkman99 opened this issue · 8 comments

Hi,
I am working on an optimisation problem in 3 dimensions. I have been using Particle Swarm Optimisation and it was executing without any issues. Since GA has a very similar syntax to PSO, I wanted to start using it as well. On my first attempt I obtained the following error:

MethodError: no method matching (::var"#objective_function#41")(::Vector{Float64}, ::Vector{Float64}, ::Vector{Float64})
Stacktrace:
  [1] (::var"#to_optim#42"{Matrix{Float64}, var"#objective_function#41"})(theta::Vector{Float64})
    @ Main \ga_optim.jl:135
  [2] create_solution(x::Vector{Float64}, problem::Problem{Float64}; e::Float64)
    @ Metaheuristics \packages\Metaheuristics\xnKGn\src\solutions\individual.jl:320
  [3] (::Metaheuristics.var"#32#33"{Float64, Problem{Float64}, Matrix{Float64}})(i::Int64)
    @ Metaheuristics .\none:0
  [4] iterate
    @ .\generator.jl:47 [inlined]
  [5] collect(itr::Base.Generator{UnitRange{Int64}, Metaheuristics.var"#32#33"{Float64, Problem{Float64}, Matrix{Float64}}})
    @ Base .\array.jl:724
  [6] #generate_population#31
    @ \packages\Metaheuristics\xnKGn\src\solutions\individual.jl:260 [inlined]
  [7] gen_initial_

Double-checking my work, I ran the code again a number of times. Confusingly, the error arises about 50% of the time when using the default GA parameters. I have found that using a smaller population size reduces this to about 10%, while increasing the population size pushes it to almost 100%. I do not understand the error, as I am providing the right type for each function input; it must be some conversion issue inside the optimisation routine. I would really appreciate any help with this.
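For reference, my calls look roughly like the sketch below (the bounds and the objective body are placeholders; to_optim is my wrapper around the likelihood):

using Metaheuristics

to_optim(theta) = sum(abs2, theta)   # stand-in here for my actual likelihood wrapper

# placeholder box constraints for the 3 parameters (row 1: lower bounds, row 2: upper bounds)
bounds = [0.0 0.0 0.0; 1.0 1.0 1.0]

optimize(to_optim, bounds, PSO())   # this form has been running fine for me
optimize(to_optim, bounds, GA())    # the equivalent GA call is the one that fails intermittently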

Kind regards!

The error may arise when the objective function does not return values of the appropriate type.
Make sure that your objective function always returns:

  1. Float64 for single-objective unconstrained problems (e.g. f(x) = 1.0 + sum(x))
  2. Tuple{Float64, Vector{Float64}, Vector{Float64}} for single-objective constrained problems (e.g. f(x) = 1.0, [0.0], [0.0])
  3. Tuple{Vector{Float64}, Vector{Float64}, Vector{Float64}} for multi-objective problems (e.g. f(x) = [0.0, 1.0], [0.0], [0.0])

Here, I'm assuming that you are not performing batch evaluations.

Note that returning a Matrix instead of a Vector can lead to errors (except in batch evaluations).
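For a 3-variable problem, the three cases could look like the following sketch (placeholder function bodies; only the return types matter, assuming the usual g(x) <= 0, h(x) == 0 constraint convention):

# 1. Single-objective, unconstrained: return a single Float64.
f1(x) = 1.0 + sum(x)

# 2. Single-objective, constrained: return (fx, gx, hx), where gx are the
#    inequality constraints and hx the equality constraints.
f2(x) = 1.0 + sum(x), [x[1] - 1.0], [0.0]

# 3. Multi-objective: return (vector of objectives, gx, hx).
f3(x) = [sum(x .^ 2), sum((x .- 1.0) .^ 2)], [0.0], [0.0]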

I will check this out, thanks for the quick answer!

No luck on my end. This is what I added to my objective function:

function objective_fn(theta)::Float64
    fn(theta[1], theta[2], theta[3]) ...
end

in order to return a Float64, since I am solving an unconstrained single-objective problem. My objective function takes some data as input (it is a maximum-likelihood function), which I pass in as a matrix. Should I convert these to Vectors?

You may have to reduce the output computed from your data matrix to a scalar value. Also, if theta is a matrix, try converting it into a vector.
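Something along these lines, as a sketch only (negloglik and data are placeholder names; to_optim is the wrapper name from your stack trace):

# fixed observations (placeholder data)
data = randn(100, 3)

# placeholder likelihood; the point is that it reduces everything to a single Float64
negloglik(theta, data) = sum(abs2, data * vec(theta))

# the closure captures the data, so the optimizer only ever calls a function
# of theta::Vector{Float64} that returns a scalar Float64
to_optim(theta) = negloglik(theta, data)::Float64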

Still no luck, unfortunately, after requiring theta to be a Vector{Float64} and converting the observation matrices to vectors as well. I am still getting an error:

MethodError: no method matching !(::Float64)
Closest candidates are:
  !(!Matched::Function) at C:\Users\marco\AppData\Local\Programs\JULIA-~1.3\share\julia\base\operators.jl:1117
  !(!Matched::Bool) at C:\Users\marco\AppData\Local\Programs\JULIA-~1.3\share\julia\base\bool.jl:35
  !(!Matched::Missing) at C:\Users\marco\AppData\Local\Programs\JULIA-~1.3\share\julia\base\missing.jl:101
Stacktrace:
  [1] _broadcast_getindex_evalf
    @ .\broadcast.jl:670 [inlined]
  [2] _broadcast_getindex
    @ .\broadcast.jl:643 [inlined]
  [3] getindex
    @ .\broadcast.jl:597 [inlined]
  [4] copy
    @ .\broadcast.jl:899 [inlined]
  [5] materialize
    @ .\broadcast.jl:860 [inlined]
  [6] mutation!(Q::Matrix{Float64}, parameters::BitFlipMutation)
    @ Metaheuristics C:\Users\marco\.julia\packages\Metaheuristics\xnKGn\src\operators\mutation.jl:13
  [7] update_state!(::State{Metaheuristics.xf_solution{Vector{Float64}}}, ::GA{RandomInBounds, TournamentSelection, UniformCro

Read the documentation on GA for Real Encoding.
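In particular, real (floating-point) decision variables need real-coded operators instead of the binary ones. Roughly along the lines of the real-coded GA example in the docs (the bounds and objective here are placeholders):

using Metaheuristics

f(theta) = sum(abs2, theta)            # stand-in for your objective, returning a Float64
bounds = [0.0 0.0 0.0; 1.0 1.0 1.0]    # placeholder box constraints, 3 variables

ga = GA(;
    crossover = SBX(; bounds),                  # real-coded simulated binary crossover
    mutation  = PolynomialMutation(; bounds),   # real-coded polynomial mutation
    environmental_selection = GenerationalReplacement(),
)

result = optimize(f, bounds, ga)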

It works! I will be more attentive; I had not realised that the choice of crossover and mutation operators needs to match the encoding type. Thank you for your help.

You are welcome! :)