Bayesian Optimization not implemented as a tuner
goergen95 opened this issue · 3 comments
The function for Bayesian optimization points to an oracle rather than the implemented tuner. Is there a reason for this that I am not aware of, and would it be possible to implement the function in a way that the tuner is directly available without the need to write a custom tuner?
I imagine the function pointing to the tuner like this:
do.call(kerastuner$tuners$BayesianOptimization, args)
rather than its current implementation pointing to the oracle:
do.call(kerastuner$oracles$BayesianOptimization, args)
Hi. This is probably due to this example: https://keras-team.github.io/keras-tuner/tutorials/subclass-tuner/
would it be possible to implement the function in a way that the tuner is directly
available without the need to write a custom tuner?
Yes. But if you have an option in mind, please open a PR.
In my opinion, the following could be done:
oracles$BayesianOptimization
Its parameters are:
objective = objective,
max_trials = max_trials,
num_initial_points = num_initial_points,
alpha = alpha,
beta = beta,
seed = seed,
hyperparameters = hyperparameters,
allow_new_entries = allow_new_entries,
tune_new_entries = tune_new_entries
tuners$BayesianOptimization
Its parameters are almost the same, but with one additional parameter:
hypermodel = hypermodel,
What could you do? You could accept all the parameters of tuners$BayesianOptimization (the current state) plus the one additional parameter, hypermodel. Then, if the hypermodel parameter is missing, call the oracle; otherwise, call the tuner.
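The dispatch described above could be sketched roughly like this. This is a hypothetical wrapper, not the package's actual implementation; it assumes `kerastuner` is the Python module object exposed by the R binding and simply forwards any further arguments unchanged:

```r
# Hypothetical sketch: dispatch to the tuner when a hypermodel is supplied,
# otherwise fall back to the oracle (the current behaviour).
BayesianOptimization <- function(hypermodel = NULL, ...) {
  args <- list(...)
  if (is.null(hypermodel)) {
    # No hypermodel given: construct the oracle, as the function does today
    do.call(kerastuner$oracles$BayesianOptimization, args)
  } else {
    # Hypermodel given: construct the full tuner instead
    do.call(kerastuner$tuners$BayesianOptimization,
            c(list(hypermodel = hypermodel), args))
  }
}
```

One design caveat with this approach: the function's return type then depends on whether hypermodel was passed, which callers need to be aware of.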
What do you think?
Thanks for the quick reply.
That's basically what I tried with a PR from earlier today. Somehow it failed every single check, so I thought I would create this issue to figure out whether something I am not aware of is going wrong.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.