jMetal/jMetalPy

Reference front for quality indicators for real world problems

mishras9 opened this issue · 4 comments

As per the discussion in jMetal/jMetal#171 (comment).

As suggested by @ajnebro: most of the indicators require the Pareto front to be computed, but this front is rarely available when dealing with real-world problems. The most commonly used strategy is to build a reference Pareto front, which is composed of the result of merging all the solutions obtained by all the algorithms in all their runs (a minimal sketch of this merge is shown below).

Can this be achieved in jMetalPy while we are running several algorithms in an experiment?
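For concreteness, the merging strategy described above boils down to concatenating the objective vectors produced by every algorithm in every run and keeping only the non-dominated ones. Below is a minimal NumPy sketch of that filter; this is not jMetalPy code, the names and example fronts are illustrative, and all objectives are assumed to be minimized:

```python
import numpy as np

def non_dominated(points: np.ndarray) -> np.ndarray:
    """Keep only the non-dominated rows of an (n_points, n_objectives) array
    (all objectives assumed to be minimized)."""
    points = np.unique(np.asarray(points, dtype=float), axis=0)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        if not keep[i]:
            continue
        # Remove every point that p dominates: >= p in all objectives, > p in at least one.
        dominated = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        keep &= ~dominated
    return points[keep]

# Merge the fronts produced by all algorithms in all runs into one reference front.
fronts_from_all_runs = [
    np.array([[0.1, 0.9], [0.5, 0.5]]),   # e.g. algorithm A, run 1 (illustrative values)
    np.array([[0.2, 0.7], [0.9, 0.1]]),   # e.g. algorithm B, run 1 (illustrative values)
]
reference_front = non_dominated(np.vstack(fronts_from_all_runs))
```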

If you mean whether the reference front is dynamically calculated during algorithm execution, this is not provided by jMetalPy (nor by jMetal).

@ajnebro
So, how can we retrieve the Pareto front from each run of an algorithm to create a reference front? Basically, can we store the solutions, function values, and computing time for each run?

I suggest doing the experimentation in two steps: first, execute all the algorithms, so you can generate the reference Pareto front approximations from all the obtained fronts; second, use those fronts to compute the quality indicators and generate the tables with statistical information (a sketch of both steps follows).
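As a rough illustration of the two steps, the sketch below first merges the FUN files written by the runs into a reference front and then scores each run with an inverted generational distance computed directly in NumPy. The directory layout in the glob pattern and the output file names are assumptions to adapt to your own experiment; jMetalPy's indicators in jmetal.core.quality_indicator could be used instead of the hand-written IGD.

```python
import glob
import numpy as np

def non_dominated(points: np.ndarray) -> np.ndarray:
    # Same Pareto filter as in the earlier sketch (minimization assumed), written compactly.
    keep = [not any(np.all(q <= p) and np.any(q < p) for q in points) for p in points]
    return np.unique(points[np.array(keep)], axis=0)

# --- Step 1: build the reference front from every FUN file produced by the runs ---
# The glob pattern is hypothetical; point it at wherever your experiment stores
# the FUN files of each algorithm/problem/run.
fun_files = glob.glob("data/*/ZDT1/FUN*.tsv")
all_points = np.vstack([np.atleast_2d(np.loadtxt(f)) for f in fun_files])
reference_front = non_dominated(all_points)
np.savetxt("ZDT1.reference.pf", reference_front)   # illustrative file name

# --- Step 2: score each run against the reference front ---
def igd(front: np.ndarray, reference: np.ndarray) -> float:
    """Inverted generational distance: mean distance from each reference point
    to its closest point in the approximated front."""
    dists = np.linalg.norm(reference[:, None, :] - front[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

for fun_file in fun_files:
    print(fun_file, igd(np.atleast_2d(np.loadtxt(fun_file)), reference_front))
```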

In jMetal we use the GenerateReferenceParetoSetAndFrontFromDoubleSolutions class, but it is not included in jMetalPy.
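Until an equivalent utility is available in jMetalPy, a similar result can be obtained by hand, as in the sketch below. The configured_algorithms() helper is hypothetical (it stands for however you configure one algorithm instance per algorithm/run combination), and the import path of the solution helpers has moved between jMetalPy releases, so check the version you have installed.

```python
# Hedged sketch: the import path below matches recent jMetalPy releases; some
# older releases expose the same helpers under jmetal.util.solutions.
from jmetal.util.solution import (
    get_non_dominated_solutions,
    print_function_values_to_file,
    print_variables_to_file,
)

def configured_algorithms():
    """Hypothetical helper: yield a freshly configured algorithm instance
    (NSGA-II, SPEA2, ...) for every algorithm/run combination of the experiment."""
    return []

all_solutions = []
for algorithm in configured_algorithms():
    algorithm.run()
    all_solutions.extend(algorithm.get_result())

# Keep only the non-dominated solutions of the merged results. Writing both the
# variables and the objective values gives a reference Pareto set and a reference
# Pareto front, which is essentially what GenerateReferenceParetoSetAndFrontFromDoubleSolutions
# produces in jMetal. File names are illustrative.
reference = get_non_dominated_solutions(all_solutions)
print_variables_to_file(reference, "referenceSet.csv")
print_function_values_to_file(reference, "referenceFront.csv")
```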