fabiodimarco/tf-levenberg-marquardt

TypeError when trying to train model

Closed this issue · 1 comments

Hi.

First of all, thank you very much for your effort in developing this Levenberg-Marquardt implementation for Keras, which is the only one I know of. The main issue I am facing is the following error.

TypeError: Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.

This error arises when I try to train the model as follows:

# imports assumed by the snippet below
import levenberg_marquardt as lm
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import EarlyStopping

def create_model_lm(neurons1=1, neurons2=0, n_features=None):
    # create model
    model = Sequential()
    model.add(Dense(neurons1, input_dim=n_features, kernel_initializer='uniform', activation='tanh'))
    model.add(Dropout(0.2))
    if neurons2 != 0:
        model.add(Dense(neurons2, kernel_initializer='uniform', activation='tanh'))
        model.add(Dropout(0.2))
    model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid'))
    # Compile model
    model_wrapper = lm.ModelWrapper(model)
    model_wrapper.compile(
        optimizer=SGD(learning_rate=0.1),
        loss=lm.BinaryCrossentropy(from_logits=True),
        metrics=['accuracy'])

    return model_wrapper

model = create_model_lm(params['neurons1'], params['neurons2'], params['npcs'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=0, patience=20, restore_best_weights=True)

history = model.fit(train_data, train_target, batch_size=batch,
                        validation_data=(test_data, test_target),
                        shuffle=True, epochs=400, verbose=0,
                        callbacks=[es])

Can you give me a little help with any ideas for this specific error? I would be very grateful for any insight.

Good catch, thank you.
It seems to be a problem introduced by the new TensorFlow version 2.4.0; I get the same error on my examples in Google Colab.
Installing TensorFlow 2.3.0 should solve the issue. I will try to make a fix for TensorFlow 2.4.0 in the next few days.
Let me know if you have any further problems.
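As a stopgap, the working version can be pinned with pip (the package name is the standard PyPI one; in Colab, restart the runtime after installing):

```shell
# Downgrade to the last TensorFlow release known to work with this library
pip install tensorflow==2.3.0
```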

P.S.
If you use BinaryCrossentropy with the option from_logits=True (which is more numerically stable), I would suggest using a linear activation in the final layer. If you keep the sigmoid activation, set from_logits=False.
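To illustrate why from_logits=True is more numerically stable: computing the cross-entropy from a saturated sigmoid output loses all precision once the probability rounds to exactly 0.0 or 1.0, whereas computing it directly from the logit stays finite. A minimal sketch in plain Python (the helper functions are illustrative, not part of Keras):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_naive(y, p):
    # cross-entropy computed from a probability p = sigmoid(z)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_from_logits(y, z):
    # numerically stable form: max(z, 0) - z*y + log(1 + exp(-|z|))
    return max(z, 0) - z * y + math.log1p(math.exp(-abs(z)))

z, y = 40.0, 0.0
try:
    # sigmoid(40) rounds to exactly 1.0 in float64, so log(1 - p) is log(0)
    naive = bce_naive(y, sigmoid(z))
except ValueError:
    naive = float("inf")

stable = bce_from_logits(y, z)  # finite: 40.0 plus a vanishingly small term
```

Here the naive path blows up while the from-logits path returns the correct loss of about 40, which is why pairing a linear output layer with from_logits=True is the safer combination.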