Does the tool support other layers, such as dense layer or conv1d layers?
tinyxuyan opened this issue · 1 comment
All the examples in the project use conv2d layers and take 3-D image input. I am wondering whether the tf-explain tool supports other layers, such as dense, conv1d, and LSTM layers. I am trying to explain a model with conv1d layers, but I get errors when the explainers are used, and I am not sure whether the cause is my code or tf-explain itself. Some examples of the detailed errors are listed below:
In the model, the input feature is a 1-D vector of length 120, which should be classified into 4 different classes. The detailed network summary is listed below:
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
Input_1 (InputLayer) [(None, 120)] 0
_________________________________________________________________
Dense_2 (Dense) (None, 120) 14520
_________________________________________________________________
Dense_3 (Dense) (None, 100) 12100
_________________________________________________________________
Dense_4 (Dense) (None, 80) 8080
_________________________________________________________________
Dense_5 (Dense) (None, 60) 4860
_________________________________________________________________
Dense_6 (Dense) (None, 40) 2440
_________________________________________________________________
Dense_7 (Dense) (None, 20) 820
_________________________________________________________________
Dense_8 (Dense) (None, 10) 210
_________________________________________________________________
Dense_9 (Dense) (None, 4) 44
=================================================================
Total params: 43,074
Trainable params: 43,074
Non-trainable params: 0
_________________________________________________________________
When I use the GradientsInputs API (test_dataset is a vector of length 120, and test_labelsymbol[0] is an integer):
from tf_explain.core.gradients_inputs import GradientsInputs
explainer = GradientsInputs()
data = (test_dataset, None)
grid = explainer.explain(data, model, test_labelsymbol[0])
the output error is (omitting the lengthy traceback):
ValueError: 'image' must be at least three-dimensional.
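For context, tf-explain's grid utilities are written for image data with at least three dimensions (height, width, channels). A possible workaround, sketched below with NumPy only, is to reshape the length-120 vectors into a (120, 1, 1) pseudo-image before passing them to an explainer. All array names here are illustrative, and the model itself would also have to be rebuilt to accept the extra singleton axes for this to work end to end:

```python
import numpy as np

# Hypothetical batch of ten 1-D feature vectors of length 120,
# standing in for test_dataset from the report above.
test_dataset = np.random.rand(10, 120).astype("float32")

# Add singleton width and channel axes so the data is at least
# three-dimensional per sample: (batch, 120) -> (batch, 120, 1, 1).
reshaped = test_dataset.reshape(-1, 120, 1, 1)
print(reshaped.shape)  # (10, 120, 1, 1)
```

The reshape changes only the layout, not the values, so the explanation produced on the pseudo-image can be flattened back to a length-120 attribution vector afterwards.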
When I use the GradCAM API,
from tf_explain.core.grad_cam import GradCAM
explainer = GradCAM()
data = (test_dataset, None)
grid = explainer.explain(data, model, test_labelsymbol[0], layer_name="Dense_5")
the output error is (omitting the lengthy traceback):
ValueError: No such layer: Dense_5
According to the model summary, the 'Dense_5' layer should exist.
When you instantiate your model, can you try passing (...name="Dense_5") to the appropriate layer? That is, explicitly name the layer yourself instead of relying on TensorFlow's automatic naming.
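A minimal sketch of that suggestion, using an abbreviated two-layer version of the model above (layer sizes trimmed for brevity; this assumes the standard tf.keras functional API). TensorFlow's automatic names are lowercase (e.g. "dense_4"), so they would never match the "Dense_5" string passed to GradCAM unless set explicitly:

```python
import tensorflow as tf

# Name each layer explicitly at construction time instead of relying
# on TensorFlow's automatic naming.
inputs = tf.keras.Input(shape=(120,), name="Input_1")
x = tf.keras.layers.Dense(60, activation="relu", name="Dense_5")(inputs)
outputs = tf.keras.layers.Dense(4, activation="softmax", name="Dense_9")(x)
model = tf.keras.Model(inputs, outputs)

# The layer is now retrievable by the exact name GradCAM looks up
# internally via model.get_layer(layer_name).
print(model.get_layer("Dense_5").name)  # Dense_5
```

With the layer named this way, `layer_name="Dense_5"` in the `explain()` call above should resolve without the "No such layer" error.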