EagerAI/kerastuneR

Using validation_split with fit_tuner

briank621 opened this issue · 4 comments

Is it possible to use the validation_split argument with fit_tuner, something like this:

fit_tuner(x_train, y_train, validation_split = 0.2)

When I try this, I get the following error: "validation_split is only supported for Tensors or NumPy arrays." I was able to get the same code working with the fit function, so I'm wondering whether this is simply not possible with fit_tuner or whether the data needs additional modifications.

Hi @briank621.
Could you share a reproducible example, please?

I have uploaded an example script, as well as the .RData file with the necessary inputs.
I've also tried manually splitting the data (roughly as in the sketch below), but am running into some issues with types again.

https://filebin.net/u3qof1oqugpwi4wz

Please let me know if there are any issues.
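
(For reference, the manual split mentioned above would look roughly like the sketch below. It assumes x_train is a 3D array of shape (samples, steps, channels) and that fit_tuner forwards validation_data to the underlying search; note that the same "NumPy arrays only" type issue applies until the inputs are converted as in the fix that follows.)

set.seed(42)

# Hold out 20% of the samples for validation
n = dim(x_train)[1]
val_idx = sample(n, size = floor(0.2 * n))

x_val = x_train[val_idx, , , drop = FALSE]
y_val = y_train[val_idx]
x_tr  = x_train[-val_idx, , , drop = FALSE]
y_tr  = y_train[-val_idx]

tuner %>% fit_tuner(x_tr, y_tr,
                    epochs = epochs,
                    batch_size = batch_size,
                    validation_data = list(x_val, y_val))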

@briank621 Hi, sorry for the late reply. Here is the solution:

library("keras")
library('kerastuneR')

load("kerastuner.RData") 

# hp is Keras Tuner's HyperParameters object. Note that reusing the
# name 'filters' makes all four conv layers share one sampled value;
# give each call a distinct name to tune the layers independently.
build_model = function(hp){
  cnn_model = keras_model_sequential() %>%
    layer_conv_1d(hp$Int('filters', min_value=4, max_value=8, step=4), kernel_size = 3, padding="same", activation="relu", input_shape=input_shape) %>% 
    layer_conv_1d(hp$Int('filters', min_value=4, max_value=8, step=4), kernel_size = 3, padding="same", activation = "relu") %>% 
    layer_max_pooling_1d(pool_size = 2) %>% 
    layer_dropout(rate = 0.2) %>% 
    layer_conv_1d(hp$Int('filters', min_value=4, max_value=8, step=4), kernel_size = 3, padding="same", activation = "relu") %>% 
    layer_conv_1d(hp$Int('filters', min_value=4, max_value=8, step=4), kernel_size = 3, padding="same", activation = "relu") %>% 
    layer_max_pooling_1d(pool_size = 2) %>% 
    layer_dropout(rate = 0.2) %>% 
    layer_flatten() %>% 
    layer_dense(units = 8, activation = "relu") %>% 
    layer_dropout(rate = 0.5) %>% 
    layer_dense(units = 4, activation = "relu") %>% 
    layer_dense(units = 1, activation = "sigmoid") %>% 
    compile(
      loss = loss_binary_crossentropy,
      optimizer = optimizer_adam(hp$Choice('learning_rate', values=c(1e-2, 1e-3, 1e-4))),
      metrics = c('accuracy')
    )
  return(cnn_model)
}

epochs = 10
batch_size = 5

# Random search: try 5 hyperparameter configurations, training each 3 times
tuner = RandomSearch(
  build_model,
  objective = 'val_accuracy',
  max_trials = 5,
  executions_per_trial = 3)

# The fix: convert the inputs to NumPy arrays via reticulate, since
# validation_split is only supported for Tensors or NumPy arrays
np = reticulate::import('numpy', convert = FALSE)

x_train = np$array(x_train)
y_train = np$array(as.integer(y_train))

# with NumPy inputs, validation_split works just as in keras::fit()
tuner %>% fit_tuner(x_train, y_train,
                    epochs = epochs,
                    batch_size = batch_size,
                    validation_split = 0.2)
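
Once the search finishes, the results can be inspected with kerastuneR's helpers; a brief sketch (assuming the results_summary() and get_best_models() functions exported by current kerastuneR):

# Summarize the completed trials
tuner %>% results_summary(num_trials = 5)

# Retrieve the top model (get_best_models returns a list)
best_model = (tuner %>% get_best_models(num_models = 1))[[1]]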

Thank you, the code works beautifully.