Keras symbolic inputs/outputs and layer_names issue
razvanw0w opened this issue · 8 comments
Hello,
I'm trying to train a model and use Keract to display the feature maps produced while training, but I ran into a problem. The model I'm trying to train is shown in the snippet below.
def build_model2():
    model = Sequential()
    model.add(Conv2D(75, (3, 3), strides=1, name="conv1", padding='same', activation='relu',
                     kernel_initializer="he_normal", input_shape=(28, 28, 1)))
    model.add(BatchNormalization(name="batchnorm1"))
    model.add(MaxPooling2D((2, 2), strides=2, padding='same', name="maxpool1"))
    model.add(Conv2D(50, (3, 3), strides=1, name="conv2", padding='same', activation='relu',
                     kernel_initializer="he_normal"))
    model.add(Dropout(0.1, name="dropout1"))
    model.add(BatchNormalization(name="batchnorm2"))
    model.add(MaxPooling2D((2, 2), strides=2, name="maxpool2", padding='same'))
    model.add(Conv2D(25, (3, 3), strides=1, name="conv3", padding='same', activation='relu',
                     kernel_initializer="he_normal"))
    model.add(BatchNormalization(name="batchnorm3"))
    model.add(MaxPooling2D((2, 2), strides=2, name="maxpool3", padding='same'))
    model.add(Flatten(name="flatten"))
    model.add(Dense(512, name="dense1", activation='relu'))
    model.add(Dropout(0.15, name="dropout2"))
    model.add(Dense(25, name="dense_output", activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    plot_model(model, to_file="model_enhanced.png", show_shapes=True)
    return model
If I run the following snippet, I get the error below, although I don't think I'm passing any symbolic inputs/outputs myself:
first_input = test_X[0]
activations = get_activations(model, first_input, auto_compile=True)
display_activations(activations, cmap="gray", save=False)
TypeError: Keras symbolic inputs/outputs do not implement `op`. You may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
A bigger issue: if I try to get the activation map for a single layer only, I get another error:
first_input = test_X[0]
activations = get_activations(model, first_input, layer_names="conv1", auto_compile=True)
display_activations(activations, cmap="gray", save=False)
KeyError: 'Could not find a layer with name: [conv1]. Network layers are [conv1, batchnorm1, maxpool1, conv2, dropout1, batchnorm2, maxpool2, conv3, batchnorm3, maxpool3, flatten, dense1, dropout2, dense_output]'
Do you have any idea why this happens?
@razvanw0w I'll have a look tomorrow. In the meantime, make sure your imports are from tensorflow and not from keras: import tensorflow.keras instead of import keras.
Done, I ruled this out. Libraries aren't the problem.
@razvanw0w so, good news: I don't have any problem and can run your code perfectly.
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense
from tensorflow.python.keras.utils.vis_utils import plot_model
from keract import get_activations, display_activations
def build_model2():
    model = Sequential()
    model.add(
        Conv2D(75, (3, 3), strides=1, name="conv1", padding='same', activation='relu', kernel_initializer="he_normal",
               input_shape=(28, 28, 1)))
    model.add(BatchNormalization(name="batchnorm1"))
    model.add(MaxPooling2D((2, 2), strides=2, padding='same', name="maxpool1"))
    model.add(
        Conv2D(50, (3, 3), strides=1, name="conv2", padding='same', activation='relu', kernel_initializer="he_normal"))
    model.add(Dropout(0.1, name="dropout1"))
    model.add(BatchNormalization(name="batchnorm2"))
    model.add(MaxPooling2D((2, 2), strides=2, name="maxpool2", padding='same'))
    model.add(
        Conv2D(25, (3, 3), strides=1, name="conv3", padding='same', activation='relu', kernel_initializer="he_normal"))
    model.add(BatchNormalization(name="batchnorm3"))
    model.add(MaxPooling2D((2, 2), strides=2, name="maxpool3", padding='same'))
    model.add(Flatten(name="flatten"))
    model.add(Dense(512, name="dense1", activation='relu'))
    model.add(Dropout(0.15, name="dropout2"))
    model.add(Dense(25, name="dense_output", activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
    model.summary()
    plot_model(model, to_file="model_enhanced.png", show_shapes=True)
    return model

if __name__ == '__main__':
    test_X = np.random.uniform(size=(2, 28, 28, 1))
    first_input = test_X[0:1]
    m = build_model2()
    print(m.predict(first_input).shape)
    activations = get_activations(m, first_input)
    display_activations(activations, cmap="gray")
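One thing worth pointing out: compared to your snippet, I pass test_X[0:1] rather than test_X[0], so the batch axis is kept. A quick sketch of the difference (shapes assume the (2, 28, 28, 1) test_X above):
first_input = test_X[0]    # shape (28, 28, 1): the batch axis is gone
first_input = test_X[0:1]  # shape (1, 28, 28, 1): a batch of one sample, which predict() and get_activations() expect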
It's likely a dependency problem. Compare your tensorflow version with mine:
pip list | grep tensorflow
tensorflow 2.3.0
tensorflow-estimator 2.3.0
Had the same issue. It turns out to be a tensorflow dependency problem. get_activations() doesn't work on TF 2.4.0; after I downgraded to 2.3.0, it works!
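If you need the downgrade, something like this should do it (pip pulls in the matching tensorflow-estimator automatically):
pip install tensorflow==2.3.0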
The dependency fix did the trick. I had TF 2.4.0 and downgrading solved it. I will close the issue.
SOLVED!
I'm going to add a TF<=2.3.0 constraint to the requirements.
TensorFlow keeps breaking compatibility at every version...
I pushed a new version, 4.3.4, that solves the compatibility with TF 2.4.0 (TF 2.4.0 changed how symbolic Keras inputs/outputs are represented internally, which is what the original "do not implement op" error was about).
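So if you're on TF 2.4.0, upgrading keract should be enough:
pip install --upgrade keract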