mdabros/SharpLearning

A way to Save Bayesian Optimizer progress and continue later.

wasilus opened this issue · 1 comment

Hello,
I would like to use the Bayesian Optimizer for Hyperparameter tuning.
Is there a way to save the current state of the optimizer and then resume later? I could not find one.
I also could not figure out a way to pass a cancellation token.
Great project.
Thank you.

Hi @wasilus,

Currently there is no way of stopping the optimizer once it has been started. That might be a feature worth adding, though, since it would be generally useful, perhaps for the other optimizers as well.

To work around this with the current implementation, you could log the parameters and the resulting metric after each iteration; this can be done from inside the optimization objective function. Then use the "open loop" feature of the BayesianOptimizer, passing in the previously logged results. You can see how this is done in this unit test:

public void BayesianOptimizer_OptimizeBest_MultipleParameters_Open_Loop_Using_PreviousResults()
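The logging half of that workaround can be sketched without touching any SharpLearning types. Note that SharpLearning's objective actually returns an `OptimizerResult` rather than a plain `double`; the sketch below uses a double-valued objective and only the .NET base class library, just to show the log-and-reload pattern. All names here (`OptimizerLog`, `WithLogging`, the CSV layout) are hypothetical, not part of the library:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;

public static class OptimizerLog
{
    const string LogPath = "optimizer-log.csv";

    // Wrap the real objective so every evaluated parameter set and its
    // metric are appended to a CSV file before the metric is returned.
    public static Func<double[], double> WithLogging(Func<double[], double> objective)
    {
        return parameters =>
        {
            var metric = objective(parameters);
            var fields = parameters
                .Select(p => p.ToString(CultureInfo.InvariantCulture))
                .Append(metric.ToString(CultureInfo.InvariantCulture));
            File.AppendAllLines(LogPath, new[] { string.Join(";", fields) });
            return metric;
        };
    }

    // On restart, read the logged parameter sets and metrics back so they
    // can be handed to the optimizer's open-loop overload as previous results.
    public static (List<double[]> ParameterSets, List<double> Metrics) Load()
    {
        var parameterSets = new List<double[]>();
        var metrics = new List<double>();
        foreach (var line in File.ReadLines(LogPath))
        {
            var values = line.Split(';')
                .Select(v => double.Parse(v, CultureInfo.InvariantCulture))
                .ToArray();
            parameterSets.Add(values.Take(values.Length - 1).ToArray());
            metrics.Add(values.Last());
        }
        return (parameterSets, metrics);
    }
}
```

After a restart, `Load()` gives you the previously evaluated parameter sets and metrics, which is the shape of data the open-loop path in the unit test above seeds the BayesianOptimizer with.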

That is probably the best we can do with the current implementation.

Best regards
Mads