fabiodimarco/tf-levenberg-marquardt

Getting a shape error while trying to fit another dataset


Hello,
I am trying to use this algorithm by following the example code. After preparing the model and dataset, whenever I try to fit a different dataset, the boston_housing dataset imported from Keras, I get a shape error: "Cannot convert a partially known tensor shape to a tensor: (13, 1, None)." I am confused about how to shape a dataset to fit this algorithm, especially for a regression problem. Please help me; I hope to get your response.

Hi, sorry for the inconvenience.
I think the problem may be due to the target shape not matching the shape of the model output: for example, target shape=(None,) and model output shape=(None, 1). I am going to add a check to handle this case.
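Concretely, the fix is to give the targets a trailing axis so their rank matches the model output. A minimal sketch (the sizes here are illustrative):

import tensorflow as tf

y_train = tf.zeros([506])                    # target shape (506,), rank 1
y_train = tf.expand_dims(y_train, axis=-1)   # target shape (506, 1), matches Dense(1) output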
If this does not solve your problem, please provide code that I can use to replicate the error.
Here is a dummy boston_housing example:

import tensorflow as tf
import numpy as np
import time
import levenberg_marquardt as lm

(x_train, y_train), (x_test, y_test) = \
    tf.keras.datasets.boston_housing.load_data()

train_mean = np.mean(x_train, axis=0)
train_std = np.std(x_train, axis=0)
x_train = (x_train - train_mean) / train_std

x_train = tf.cast(x_train, tf.float32)
y_train = tf.cast(y_train, tf.float32)
y_train = tf.expand_dims(y_train, axis=-1)  # without this line I get an error similar to yours

x_test = (x_test - train_mean) / train_std  # normalize with the training statistics
x_test = tf.cast(x_test, tf.float32)
y_test = tf.cast(y_test, tf.float32)
y_test = tf.expand_dims(y_test, axis=-1)

train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.batch(x_train.shape[0]).cache()  # a single full batch
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation=tf.nn.relu,
                          input_shape=[x_train.shape[1]]),
    tf.keras.layers.Dense(20, activation=tf.nn.relu),
    tf.keras.layers.Dense(1)
])

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss='mse')

model.summary()

model_wrapper = lm.ModelWrapper(tf.keras.models.clone_model(model))  # clone the architecture (fresh weights) for LM training

model_wrapper.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
    loss=lm.MeanSquaredError(),
    solve_method='solve')

print("Train using Adam")
t1_start = time.perf_counter()
model.fit(train_dataset, epochs=200)
t1_stop = time.perf_counter()
print("Elapsed time: ", t1_stop - t1_start)

print("\n_________________________________________________________________")
print("Train using Levenberg-Marquardt")
t2_start = time.perf_counter()
model_wrapper.fit(train_dataset, epochs=200)
t2_stop = time.perf_counter()
print("Elapsed time: ", t2_stop - t2_start)

Thank you so much. This worked really well, and I am so grateful for your help. I am going to use this algorithm in my research project. Honestly speaking, I searched many resources for an implementation of this algorithm, but couldn't find one better than yours.
I have one more question: by following the above method, is it possible to fit any dataset for a regression problem with this algorithm?
Thank you once again; I hope to get your help in the future with any problem related to this. Best wishes!

You're welcome!

By following the above method, is it possible to fit any dataset for a regression problem with this algorithm?

In theory, yes. However, you may encounter problems with models that have a large number of parameters.
In my tests I found better results with relatively small models (under 2000 parameters) and large batch sizes, but you can try larger models by using smaller batch sizes.
The problem is that the algorithm's memory usage scales as n^2, and solving the linear system costs n^3, where n is equal to either the number of parameters or the batch size; the algorithm automatically chooses the smaller of the two.
You can find more details in the "Memory, Speed and Convergence considerations" section of the readme.
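For intuition, here is a rough sketch (not the library's actual code) of the two equivalent update formulations, assuming a residual vector of length m (batch size times number of outputs) and n trainable parameters; the linear solve is n x n or m x m depending on which is smaller:

import tensorflow as tf

m, n, damping = 64, 2000, 1e-3   # illustrative sizes: m residuals, n parameters
J = tf.random.normal((m, n))     # Jacobian of the residuals w.r.t. the parameters
r = tf.random.normal((m, 1))     # residuals

if n <= m:
    # solve the n x n system (J^T J + damping * I) dx = J^T r
    A = tf.matmul(J, J, transpose_a=True) + damping * tf.eye(n)
    dx = tf.linalg.solve(A, tf.matmul(J, r, transpose_a=True))
else:
    # solve the m x m system (J J^T + damping * I) a = r, then dx = J^T a
    A = tf.matmul(J, J, transpose_b=True) + damping * tf.eye(m)
    dx = tf.matmul(J, tf.linalg.solve(A, r), transpose_a=True)

Either way dx has shape (n, 1), but the matrix that must be stored and factorized has size min(m, n) squared.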
I have read a paper that uses vector-Jacobian products to overcome this problem, but it requires forward-mode automatic differentiation, which is not implemented in TensorFlow.
Please let me know if you find any problems that benefit from using Levenberg-Marquardt in the small-batch-size setting.

Thank you, and I definitely will let you know, as this is part of my research project. Good luck and best wishes!