This is a fork of https://github.com/jiweil/Visualizing-and-Understanding-Neural-Models-in-NLP, created for my final project for Berkeley MIDS W266 (NLP). Additional code for layer-wise relevance propagation (LRP) is adapted from https://github.com/ArrasL/LRP_for_LSTM.
Requirements:

- GPU
- Torch (`nn`, `cutorch`, `cunn`, `nngraph`)
- Python matplotlib library (only for matrix plotting purposes)
To train the models, download the data and run:

```
sh bidi_and_lrp.sh
```
This command trains models across a range of hidden and embedding dimensions and takes about 12 hours to run on my NVIDIA GTX 980 GPU.
To run the analysis notebook, rename the output file from `LRP_Output.txt` to `LRP_Output_Large.csv` and add the following header line:

```
hidden_dimensions|embedding_dimensions|fine_accuracy|coarse_accuracy|true_class|predicted_class|sentence|class_scores|word_relevances|heatmap_html
```
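The rename-and-add-header step can also be scripted; a minimal sketch, assuming GNU `sed` and that `LRP_Output.txt` sits in the working directory:

```shell
# Rename the raw output file produced by bidi_and_lrp.sh.
mv LRP_Output.txt LRP_Output_Large.csv

# Prepend the pipe-delimited header line (GNU sed in-place insert).
sed -i '1i hidden_dimensions|embedding_dimensions|fine_accuracy|coarse_accuracy|true_class|predicted_class|sentence|class_scores|word_relevances|heatmap_html' LRP_Output_Large.csv
```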
The analysis can then be run using `LRP_Analysis_Large.ipynb`.
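For reference, the pipe-delimited file can also be loaded outside the notebook; a sketch assuming pandas is available (the function name `load_lrp_output` is illustrative, not from the repository):

```python
import pandas as pd

def load_lrp_output(path="LRP_Output_Large.csv"):
    """Load the pipe-delimited LRP output; sep="|" matches the header added above."""
    return pd.read_csv(path, sep="|")
```

Each row then pairs a sentence with its per-word relevance scores and the model's class predictions.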