[Question] Input_t_gradient vs LRP-Z
Closed this issue · 1 comments
Is it expected that the output heatmap of the input_t_gradient analyzer is always the same as that of LRP-Z?
I observe this in all my analyses of VGG16, and the example notebooks show the same, at least visually, e.g. examples/mnist_compare_methods.ipynb.
I think that either the two methods are indeed equivalent in these cases, or this is a bug. Either way, it would be interesting to know. Thanks in advance for your help!
Indeed, after looking at the literature [G. Montavon, A. Binder, S. Lapuschkin, W. Samek, and K.-R. Müller. Layer-wise relevance propagation: An overview. In Explainable AI, volume 11700 of Lecture Notes in Computer Science, pages 193–209. Springer, 2019] and [A. Shrikumar, P. Greenside, A. Shcherbina, and A. Kundaje. Not just a black box: Learning important features through propagating activation differences. CoRR abs/1605.01713, 2016], I understand that the two methods are equivalent when the z-rule (LRP-0) is applied uniformly to the whole network of ReLU activations.
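For what it's worth, the equivalence is easy to check numerically on a toy network. The sketch below (plain NumPy, random weights chosen only for illustration) builds a two-layer ReLU network, computes Gradient×Input by manual backpropagation, and then applies the z-rule (R_j = Σ_k a_j w_jk / z_k · R_k) layer by layer; the two attributions coincide, because for a ReLU neuron a_k = z_k whenever z_k > 0, so the ratio a_k / z_k reduces to the ReLU gradient mask:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network with random weights (illustrative only)
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 1)), rng.normal(size=1)
x = rng.normal(size=4)

# Forward pass
z1 = x @ W1 + b1
a1 = np.maximum(z1, 0)           # ReLU
z2 = a1 @ W2 + b2                # linear output neuron
out = z2[0]

# Gradient x Input, via manual backprop
g_a1 = W2[:, 0]                  # d out / d a1
g_x = W1 @ (g_a1 * (z1 > 0))     # d out / d x (ReLU mask)
gxi = x * g_x

# LRP-0 / z-rule: R_j = sum_k (a_j * w_jk / z_k) * R_k, starting with R = out
R1 = a1 * W2[:, 0] * (out / z2[0])        # hidden-layer relevance
# Guard against z1 == 0; where a1 == 0 the relevance R1 is already 0
s = np.where(z1 != 0, R1 / np.where(z1 != 0, z1, 1.0), 0.0)
Rx = x * (W1 @ s)                          # input-layer relevance

print(np.allclose(gxi, Rx))
```

Note the check only works because every nonlinearity is a ReLU; with other activations, or with LRP variants that add a stabilizer (e.g. the epsilon rule), the two maps diverge.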