A Deep Neural Network explanation-by-example library for generating meaningful explanations, used in our Explainability Study.
Install the required Python packages:

```shell
pip3 install -r requirements.txt
```
- Import the ExMatchina class:

```python
from ExMatchina import ExMatchina
```
- Load ExMatchina with a particular TensorFlow model and example prototypes (e.g. training data):

```python
import numpy as np
from tensorflow.keras.models import load_model

# X_train.npy: a numpy array of prototypes
# model: the model of interest
training_data = np.load('./X_train.npy')
model = load_model('./model')

# selected_layer: the layer to use when identifying examples.
# We recommend the layer immediately following the last convolution
# (e.g. the flatten layer).
selected_layer = "Flatten_1"
exm = ExMatchina(model=model, layer=selected_layer, examples=training_data)
```
- Fetch examples and corresponding indices for a given input:

```python
# X_test.npy: a numpy array of model inputs
test_data = np.load('./X_test.npy')
test_input = test_data[0]
(examples, indices) = exm.return_nearest_examples(test_input)
```
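The retrieval step above can be sketched in plain NumPy: compare the input's activations at the selected layer against the prototypes' activations and return the closest matches. This is only an illustrative sketch, not ExMatchina's implementation; the use of cosine similarity, the `k` parameter, and the `nearest_examples` helper are assumptions, and extracting the activations themselves (not shown) would require running the model up to `selected_layer`.

```python
import numpy as np

def nearest_examples(input_activation, prototype_activations, k=3):
    """Hypothetical helper: indices of the k prototypes whose (flattened)
    layer activations are most similar to the input's, by cosine similarity."""
    a = input_activation.ravel()
    P = prototype_activations.reshape(len(prototype_activations), -1)
    # Cosine similarity of each prototype's activation vector with the input's
    sims = P @ a / (np.linalg.norm(P, axis=1) * np.linalg.norm(a) + 1e-12)
    top = np.argsort(-sims)[:k]  # highest similarity first
    return top, sims[top]

# Toy activations: 4 prototypes with a 5-dimensional layer output
protos = np.array([[1, 0, 0, 0, 0],
                   [0, 1, 0, 0, 0],
                   [1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1]], dtype=float)
query = np.array([1.0, 0.1, 0.0, 0.0, 0.0])
idx, scores = nearest_examples(query, protos, k=2)
```

The returned indices can then be used to look up the original prototype inputs, which is what makes the explanation human-interpretable: the user sees real training examples, not abstract feature values.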
The `Examples/` folder contains tutorials, as Python notebooks, on using ExMatchina with different types of input data.
The preprocessed data is available via this Google Drive link: Link. Download each of the folders there and place them in `Examples/data/`.
The `trained_models/` folder contains the pretrained models, one per domain (image, text, ECG), named `[domain].hdf5`.
If you find this code and these results useful in your research, please cite:

```
@article{jeyakumar2020can,
  title={How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods},
  author={Jeyakumar, Jeya Vikranth and Noor, Joseph and Cheng, Yu-Hsi and Garcia, Luis and Srivastava, Mani},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}
```