mlr-org/mlrMBO

Error in if (err < tol) break : missing value where TRUE/FALSE needed for 'classif.gausspr'

Seager1989 opened this issue · 6 comments

Hi,

I am working on hyperparameter tuning for 'classif.gausspr' and get the following error:
Error in if (err < tol) break : missing value where TRUE/FALSE needed

I tried the following actions to fix this problem, but they failed:

  1. Setting impute.val=1
  2. Setting tol=0.1 or larger

A simplified version of the code is listed below for your reference. The dataset has ten independent features (values in 0~1) and two class labels.

controlALL <- makeTuneControlMBO(budget = 50, impute.val = 1)
MLGPR <- makeLearner("classif.gausspr", par.vals = list(kernel = "polydot"))  # create Gaussian process classifier object
PSGPR <- makeParamSet(
  makeIntegerParam(id = "degree", lower = 1L, upper = 6L),
  makeNumericParam(id = "scale", default = 0.1, lower = 0.1, upper = 10),
  makeNumericParam(id = "offset", default = 0.1, lower = 0.1, upper = 10)
)
taskML <- makeClassifTask(data = MLdatasetLabel, target = colnames(MLdatasetLabel)[ncol(MLdatasetLabel)])
MLGPROPT <- tuneParams(MLGPR, taskML, cv5, par.set = PSGPR, control = controlALL,
                       measures = list(mmce, setAggregation(mmce, test.sd)))

I understand that the error occurs when err is computed, but I do not know how to fix it. Any suggestions are appreciated. Thank you.

It's hard to reproduce the error without the data MLdatasetLabel. Could you post a traceback()?

Thank you for your reply.

The traceback() output is attached as 'TracebackofGPRclassifi.txt' and the dataset is attached as 'MLdatasetLabel.xlsx'. I hope these help to track down the problem.

TracebackofGPRclassifi.txt

MLdatasetLabel.xlsx

I would appreciate it if anyone could comment on this issue. Thank you.

So the traceback indicates that this is an error in the learner you are trying to tune, not in mlrMBO itself. It looks like the learner (makeLearner("classif.gausspr", par.vals = list(kernel = "polydot"))) simply crashes for some of the hyperparameter settings suggested by mlrMBO.
According to the traceback, the learner was called with the parameter settings fit = FALSE, kernel = "polydot", degree = 4L, scale = 4.52147865854204, offset = 0.899868381844135.
You can either change the search space (par.set) to ranges that do not crash, or ignore those cases.
It looks like you already tried the latter by setting impute.val=1. However, to actually activate the imputation you also have to configure mlr to fail silently or with only a warning, by calling configureMlr(on.learner.error = "warn").
I will update the documentation in mlr to state that more clearly.
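For reference, a minimal sketch of the suggested setup (reusing the poster's learner and search space; MLdatasetLabel stands in for the attached dataset, so this only runs once that data frame is loaded):

```r
library(mlr)
library(mlrMBO)

# Let failed learner fits raise a warning instead of an error, so the
# missing performance value can be replaced by impute.val during tuning.
configureMlr(on.learner.error = "warn")

ctrl <- makeTuneControlMBO(budget = 50, impute.val = 1)
lrn  <- makeLearner("classif.gausspr", par.vals = list(kernel = "polydot"))
ps   <- makeParamSet(
  makeIntegerParam("degree", lower = 1L, upper = 6L),
  makeNumericParam("scale",  lower = 0.1, upper = 10),
  makeNumericParam("offset", lower = 0.1, upper = 10)
)
task <- makeClassifTask(data = MLdatasetLabel,
                        target = colnames(MLdatasetLabel)[ncol(MLdatasetLabel)])
res  <- tuneParams(lrn, task, cv5, par.set = ps, control = ctrl,
                   measures = list(mmce, setAggregation(mmce, test.sd)))
```

With this configuration, hyperparameter settings that crash the learner are scored as mmce = 1 instead of aborting the whole tuning run.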

Thank you for your help. The configureMlr(on.learner.error = "warn") works for me.

Changing the search space may be difficult. I found that the three hyperparameters (polynomial kernel degree, scale, and offset) are coupled with respect to the training crash, so it is hard to find a feasible domain that avoids crashes without excluding the optimum.

You are welcome. Thanks for making us aware of the gap in the documentation.