EagerAI/kerastuneR

Optimization parameters are not updating

pauldhami opened this issue · 1 comment

Greetings,

I am trying to implement hyperparameter tuning with kerastuneR in R, with the end goal of using Bayesian optimization. Before that, though, and following this tutorial:

https://eagerai.github.io/kerastuneR/

I am starting with RandomSearch. Below is my code, adapted to the neural network I would like to test:

library(keras)
library(kerastuneR)

build_model <- function(hp) {
  model <- keras_model_sequential()

  model %>%
    # First hidden layer: tune the number of units between 10 and 50
    layer_dense(units = hp$Int('units', min_value = 10, max_value = 50, step = 5),
                activation = "relu",
                input_shape = dim(X_pca_scores_scaled)[[2]]) %>%
    # Tune the dropout rate between 0 and 0.5
    layer_dropout(rate = hp$Float('rate', min_value = 0, max_value = 0.5, step = 0.1)) %>%
    # Second hidden layer: note this reuses the name 'units', so it receives
    # the same sampled value as the first layer
    layer_dense(units = hp$Int('units', min_value = 10, max_value = 50, step = 5),
                activation = "relu") %>%
    layer_dropout(rate = hp$Float('rate', min_value = 0, max_value = 0.5, step = 0.1)) %>%
    layer_dense(units = 1) %>%
    compile(
      optimizer = "adam",
      loss = "mse",
      metrics = c("mae"))
  return(model)
}
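
Side note: because both dense layers reuse the hyperparameter name 'units' (and both dropout layers the name 'rate'), my understanding is that keras-tuner treats each pair as a single shared hyperparameter. If the layers should be tuned independently, each call needs a unique name; a minimal sketch of that variant (the names 'units_1', 'units_2', 'rate_1', 'rate_2' are made up):

build_model <- function(hp) {
  model <- keras_model_sequential()

  model %>%
    layer_dense(units = hp$Int('units_1', min_value = 10, max_value = 50, step = 5),
                activation = "relu",
                input_shape = dim(X_pca_scores_scaled)[[2]]) %>%
    layer_dropout(rate = hp$Float('rate_1', min_value = 0, max_value = 0.5, step = 0.1)) %>%
    layer_dense(units = hp$Int('units_2', min_value = 10, max_value = 50, step = 5),
                activation = "relu") %>%
    layer_dropout(rate = hp$Float('rate_2', min_value = 0, max_value = 0.5, step = 0.1)) %>%
    layer_dense(units = 1) %>%
    compile(optimizer = "adam", loss = "mse", metrics = c("mae"))
  return(model)
}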

I then run:

tuner <- RandomSearch(
  build_model,
  objective = 'mae',
  max_trials = 5,
  executions_per_trial = 3)
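
The next step would be to launch the search itself, roughly like this (a sketch: y_response, X_val, and y_val are placeholders for my actual training and validation data):

tuner %>% fit_tuner(
  x = X_pca_scores_scaled,
  y = y_response,
  epochs = 30,
  validation_data = list(X_val, y_val))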

Before launching the search, however, running "tuner %>% search_summary()" prints this:

Search space summary
Default search space size: 1
units (Int)
{'default': None, 'conditions': [], 'min_value': 50, 'max_value': 500, 'step': 50, 'sampling': None}

Those parameter values (min_value 50, max_value 500, step 50) are not from the code above, and the search space contains only a single entry even though I tune both 'units' and 'rate'. What am I doing wrong?
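
One thing I wondered about: as far as I understand, keras-tuner saves its search state under a directory/project_name pair and reloads it when a tuner at the same location is created again, so the summary might be reflecting an earlier run. A sketch of pointing the tuner at a fresh project (directory and project_name are keras-tuner arguments; the values here are made up):

tuner <- RandomSearch(
  build_model,
  objective = 'mae',
  max_trials = 5,
  executions_per_trial = 3,
  directory = 'fresh_tuner_dir',  # new, empty directory so no saved state is reloaded
  project_name = 'pca_nn')        # made-up project name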

Could you share a reproducible example?