Compute the gradient of the class score or of an intermediate neuron's activation with respect to the input image.
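A minimal sketch of this idea (the saliency-map approach of Simonyan et al., listed below), using a hypothetical two-layer toy model in plain NumPy instead of a deep-learning framework: the saliency map is the gradient of a scalar class score with respect to the input pixels, computed here by manual backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "network": flatten -> linear -> ReLU -> linear class score.
W1 = rng.standard_normal((16, 64)) * 0.1
W2 = rng.standard_normal(16) * 0.1

def class_score(x_flat):
    h = np.maximum(W1 @ x_flat, 0.0)  # hidden ReLU activations
    return W2 @ h                     # scalar score for one class

def saliency(x_flat):
    # Manual backprop: ds/dx = W1^T (W2 * relu'(W1 x))
    pre = W1 @ x_flat
    grad_h = W2 * (pre > 0)           # gradient through the ReLU
    return W1.T @ grad_h              # gradient w.r.t. the input pixels

x = rng.standard_normal(64)           # stand-in for an 8x8 "image"
g = saliency(x)
sal_map = np.abs(g).reshape(8, 8)     # per-pixel importance map
```

In a real framework the same gradient comes from automatic differentiation (e.g. one backward pass from the score to the input); the absolute value (or max over channels) is what is usually visualized.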
- Feature Visualization [page]
Olah et al., 2017
- Understanding Neural Networks Through Deep Visualization [paper]
Jason Yosinski et al., 2015
- Striving for Simplicity: The All Convolutional Net [paper]
Jost Tobias Springenberg et al., 2015
- Understanding Deep Image Representations by Inverting Them [paper]
Aravindh Mahendran et al., 2015
- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps [paper]
Karen Simonyan et al., 2013
- Visualizing and Understanding Convolutional Networks [paper]
Matthew D Zeiler et al., 2013
- Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space [paper]
Anh Nguyen et al., 2017
- Inverting Visual Representations with Convolutional Networks [paper]
Alexey Dosovitskiy et al., 2016
- Object Detectors Emerge in Deep Scene CNNs [paper]
Bolei Zhou et al., 2015
- Understanding Deep Features with Computer-generated Imagery [paper]
Mathieu Aubry et al., 2015
- How Transferable are Features in Deep Neural Networks [paper]
Jason Yosinski et al., 2014
- Going Deeper with Convolutions [paper]
Christian Szegedy et al., 2014
- Interpretable Explanations of Black Boxes by Meaningful Perturbation [paper]
Ruth Fong et al., 2017
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization [paper]
Ramprasaath R. Selvaraju et al., 2017
- Visualizing Deep Neural Network Decisions: Prediction Difference Analysis [paper]
Luisa M Zintgraf et al., 2017
- The (Un)reliability of saliency methods [paper]
Pieter-Jan Kindermans et al., 2017
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier [paper]
Marco Tulio Ribeiro et al., 2016
- Understanding Black-box Predictions via Influence Functions [paper]
Pang Wei Koh et al., 2017
- One Pixel Attack for Fooling Deep Neural Networks [paper]
Jiawei Su et al., 2017
- Harnessing Deep Neural Networks with Logic Rules [paper]
Zhiting Hu et al., 2016
- Examining CNN Representations with respect to Dataset Bias [paper]
Quanshi Zhang et al., 2017
- Growing Interpretable Part Graphs on ConvNets via Multi-Shot Learning [paper]
Quanshi Zhang et al., 2016
- Interpreting CNN Knowledge Via An Explanatory Graph [paper]
Quanshi Zhang et al., 2018
- Interpreting CNNs via Decision Trees [paper]
Quanshi Zhang et al., 2018
- Interpret Neural Networks by Identifying Critical Data Routing Paths [paper]
Yulong Wang et al., 2018
Modifying model structure to interpret model
- Interpretable Convolutional Neural Networks [paper]
Quanshi Zhang et al., 2018
- Towards Interpretable R-CNN by Unfolding Latent Structures [paper]
Tianfu Wu et al., 2018
- Dynamic Routing Between Capsules [paper]
Sara Sabour et al., 2017
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets [paper]
Xi Chen et al., 2016
- Network Dissection: Quantifying Interpretability of Deep Visual Representations [paper]
David Bau et al., 2017
- Visual Interpretability for Deep Learning: a Survey [paper]
Quanshi Zhang, Song-Chun Zhu, 2018