Different results for plotOptPath and getTuneResult
PhilippPro commented
Maybe I have overlooked something, but I get different optimal results in the plot than from getTuneResult. I really like the plotOptPath function!
par.set = makeParamSet(
  makeNumericParam("C", lower = -5, upper = 15, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -15, upper = 3, trafo = function(x) 2^x)
)
par.config = makeParConfig(
  par.set = par.set,
  par.vals = list(kernel = "rbfdot"),
  learner.name = "svm",
  note = "Based on the practical guide to SVM: https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf"
)
lrn = makeLearner("classif.ksvm", id = "ksvm.hyperopt")
lrn = makeHyperoptWrapper(lrn, par.config)
mod = train(lrn, iris.task)
print(getTuneResult(mod))
plotOptPath(mod$learner.model$opt.result$opt.path)
jakob-r commented
- The output of `print(getTuneResult(mod))` is already back-transformed, as you probably noticed.
- As this is a noisy optimization problem, mlrHyperopt suggests the point where the surrogate predicts the best performance, considering only points that have actually been evaluated. The plot generated by `plotOptPath()` instead highlights the point with the best *observed* performance. This can be overly optimistic, since that point might only look good due to noise. `plotOptPath` does not (and cannot) respect the mbo settings in this case.
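To see why these two answers differ, here is a toy sketch in base R. It does not use mlrHyperopt at all: `loess` merely stands in for the mbo surrogate, and the variable names are made up for illustration. Under noise, the best observed value is optimistically biased, whereas the surrogate's prediction smooths the noise out.

```r
# Hypothetical noisy tuning problem (not mlrHyperopt itself).
set.seed(1)
x <- seq(0, 1, length.out = 50)
true_perf <- (x - 0.3)^2                              # true error curve, minimum at x = 0.3
observed  <- true_perf + rnorm(length(x), sd = 0.05)  # noisy evaluations along the opt.path

# Analogous to what plotOptPath() highlights: the best observed evaluation.
best_observed <- x[which.min(observed)]

# Analogous to the surrogate-based suggestion: pick the evaluated point
# where a smooth model (loess as a stand-in for the mbo surrogate)
# predicts the best performance.
fit <- loess(observed ~ x)
best_surrogate <- x[which.min(predict(fit))]

c(best_observed = best_observed, best_surrogate = best_surrogate)
```

Rerunning with different seeds, `best_observed` tends to jump around with the noise, while `best_surrogate` stays closer to the true minimum, which is the behavior jakob-r describes.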
PhilippPro commented
Ok, I understand. In the graph you only want to show evaluated points. It's just a bit confusing for the user, who is not sure which point to take, but the surrogate's suggestion is probably better. (Maybe an interesting research question.)