Opinionated list of resources facilitating model interpretability (introspection, simplification, visualization, explanation).
- Interpretable models
- Simple decision trees (a minimal sketch closes this subsection)
- Rules
- (Regularized) linear regression
- k-NN
- (2008) Predictive learning via rule ensembles by Jerome H. Friedman, Bogdan E. Popescu
- (2014) Comprehensible classification models by Alex A. Freitas
- https://dx.doi.org/10.1145/2594473.2594475
- http://www.kdd.org/exploration_files/V15-01-01-Freitas.pdf
- Interesting discussion of interpretability for a few classification models (decision trees, classification rules, decision tables, nearest neighbors and Bayesian network classifier)
- (2015) Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model by Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan
- (2017) Learning Explanatory Rules from Noisy Data by Richard Evans, Edward Grefenstette
- (2019) Transparent Classification with Multilayer Logical Perceptrons and Random Binarization by Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
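A minimal scikit-learn sketch (the library and dataset are illustrative assumptions, not tied to any paper above) of the first item in this subsection: a shallow decision tree is interpretable because the entire fitted model can be printed as a handful of rules.

```python
# Minimal sketch: a shallow decision tree printed as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# export_text renders the whole fitted model as if/else rules
print(export_text(tree, feature_names=list(data.feature_names)))
```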
- Models offering feature importance measures
- Random forest
- Boosted trees
- Extremely randomized trees
- (2006) Extremely randomized trees by Pierre Geurts, Damien Ernst, Louis Wehenkel
- Random ferns
- (2015) rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning by Miron B. Kursa
- Linear regression (with a grain of salt)
- (2007) Bias in random forest variable importance measures: Illustrations, sources and a solution by Carolin Strobl, Anne-Laure Boulesteix, Achim Zeileis, Torsten Hothorn
- (2008) Conditional Variable Importance for Random Forests by Carolin Strobl, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, Achim Zeileis
- (2018) Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the “Rashomon” Perspective by Aaron Fisher, Cynthia Rudin, Francesca Dominici
- https://arxiv.org/pdf/1801.01489
- https://github.com/aaronjfisher/mcr
- Universal (model agnostic) variable importance measure
- (2019) Please Stop Permuting Features: An Explanation and Alternatives by Giles Hooker, Lucas Mentch
- https://arxiv.org/pdf/1905.03151
- Argues against permutation-based measures of feature importance and discusses alternatives
- (2018) Visualizing the Feature Importance for Black Box Models by Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl
- https://arxiv.org/pdf/1804.06620
- https://github.com/giuseppec/featureImportance
- Global and local (model agnostic) variable importance measure (based on Model Reliance)
- Very good blog post describing deficiencies of the random forest feature importance and of permutation importance
- Permutation importance, a simple model-agnostic approach, is described in the ELI5 documentation (a minimal sketch follows below)
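A minimal sketch of permutation importance with scikit-learn (an assumed setup; the ELI5 `PermutationImportance` wrapper mentioned above works similarly). Computing the importance on held-out data sidesteps some of the biases discussed by Strobl et al., although the Hooker & Mentch paper above cautions against permuting strongly correlated features.

```python
# Minimal sketch: model-agnostic permutation importance on a held-out set
# (assumes scikit-learn >= 0.22 for sklearn.inspection.permutation_importance).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.4f} +/- {result.importances_std[i]:.4f}")
```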
- Classification of feature selection methods
- Filters
- Wrappers
- Embedded methods
- (2003) An Introduction to Variable and Feature Selection by Isabelle Guyon, André Elisseeff
- http://www.jmlr.org/papers/volume3/guyon03a/guyon03a.pdf
- Be sure to read this very illustrative introduction to feature selection
- Filter methods
- (2006) On the Use of Variable Complementarity for Feature Selection in Cancer Classification by Patrick Meyer, Gianluca Bontempi
- https://dx.doi.org/10.1007/11732242_9
- https://pdfs.semanticscholar.org/d72f/f5063520ce4542d6d9b9e6a4f12aafab6091.pdf
- Introduces an information-theoretic criterion: double input symmetrical relevance (DISR)
- (2012) Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection by Gavin Brown, Adam Pocock, Ming-Jie Zhao, Mikel Luján
- http://www.jmlr.org/papers/volume13/brown12a/brown12a.pdf
- Code: https://github.com/Craigacp/FEAST
- Discusses various approaches based on mutual information (MIM, mRMR, MIFS, CMIM, JMI, DISR, ICAP, CIFE, CMI); a minimal sketch of the simplest criterion closes this subsection
- (2012) Feature selection via joint likelihood by Adam Pocock
- (2017) Relief-Based Feature Selection: Introduction and Review by Ryan J. Urbanowicz, Melissa Meeker, William LaCava, Randal S. Olson, Jason H. Moore
- (2017) Benchmarking Relief-Based Feature Selection Methods for Bioinformatics Data Mining by Ryan J. Urbanowicz, Randal S. Olson, Peter Schmitt, Melissa Meeker, Jason H. Moore
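A minimal sketch of the simplest criterion from the information-theoretic family above (MIM: rank features by marginal mutual information with the target), using scikit-learn as an illustrative stand-in; the conditional criteria (mRMR, JMI, CMIM, ...) need dedicated packages such as FEAST or praznik.

```python
# Minimal sketch of a mutual-information filter (MIM-style ranking).
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# Score each feature by its mutual information with the target, keep the top 5
selector = SelectKBest(score_func=mutual_info_classif, k=5).fit(X, y)
X_reduced = selector.transform(X)
print(selector.get_support(indices=True))  # indices of the selected features
```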
- Wrapper methods
- (2010) Feature Selection with the Boruta Package by Miron B. Kursa, Witold R. Rudnicki
- Boruta for those in a hurry
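A minimal sketch of the Boruta all-relevant wrapper, assuming the Python reimplementation from the `boruta` package (BorutaPy); the paper above describes the original R package.

```python
# Minimal sketch: Boruta compares each real feature against shuffled
# "shadow" copies and keeps features that beat them consistently
# (assumes the `boruta` PyPI package).
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)

forest = RandomForestClassifier(n_jobs=-1, max_depth=5, random_state=0)
boruta = BorutaPy(forest, n_estimators='auto', random_state=0)
boruta.fit(X, y)

print(np.where(boruta.support_)[0])  # indices of confirmed-relevant features
```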
- General
- (1994) Irrelevant Features and the Subset Selection Problem by George John, Ron Kohavi, Karl Pfleger
- https://pdfs.semanticscholar.org/a83b/ddb34618cc68f1014ca12eef7f537825d104.pdf
- Classic paper discussing weakly relevant features, irrelevant features, strongly relevant features
- (2003) Special issue of JMLR on feature selection
- (2004) Result Analysis of the NIPS 2003 Feature Selection Challenge by Isabelle Guyon, Steve Gunn, Asa Ben-Hur, Gideon Dror
- (2007) Consistent Feature Selection for Pattern Recognition in Polynomial Time by Roland Nilsson, José Peña, Johan Björkegren, Jesper Tegnér
- http://www.jmlr.org/papers/volume8/nilsson07a/nilsson07a.pdf
- Discusses minimal optimal vs all-relevant approaches to feature selection
- Feature Engineering and Selection by Kuhn & Johnson
- Slightly off-topic, but very interesting book
- http://www.feat.engineering/index.html
- https://bookdown.org/max/FES/
- https://github.com/topepo/FES
- Feature Engineering presentation by H. J. van Veen
- Slightly off-topic, but very interesting deck of slides
- Slides: https://www.slideshare.net/HJvanVeen/feature-engineering-72376750
- Magnets by R. P. Feynman https://www.youtube.com/watch?v=wMFPe-DwULM
- (2002) Looking Inside the Black Box, a presentation by Leo Breiman
- (2010) To Explain or to Predict? by Galit Shmueli
- (2016) The Mythos of Model Interpretability by Zachary C. Lipton
- (2017) Towards A Rigorous Science of Interpretable Machine Learning by Finale Doshi-Velez, Been Kim
- (2017) The Promise and Peril of Human Evaluation for Model Interpretability by Bernease Herman
- (2018) The Book of Why: The New Science of Cause and Effect by Judea Pearl
- (2018) Please Stop Doing the “Explainable” ML by Cynthia Rudin
- Video (starts 17:30, lasts 10 min): https://zoom.us/recording/play/0y-iI9HamgyDzzP2k_jiTu6jB7JgVVXnjWZKDMbnyRTn3FsxTDZy6Wkrj3_ekx4J
- Linked at: https://users.cs.duke.edu/~cynthia/mediatalks.html
- (2018) Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning by Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal
- (2019) Interpretable machine learning: definitions, methods, and applications by W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu
- (2019) On Explainable Machine Learning Misconceptions & A More Human-Centered Machine Learning by Patrick Hall
- (2019) An Introduction to Machine Learning Interpretability. An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI by Patrick Hall and Navdeep Gill
- (2009) How to Explain Individual Classification Decisions by David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller
- (2013) Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation by Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin
- (2016) “Why Should I Trust You?”: Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
- https://arxiv.org/pdf/1602.04938
- Code: https://github.com/marcotcr/lime
- https://github.com/marcotcr/lime-experiments
- https://www.youtube.com/watch?v=bCgEP2zuYxI
- Introduces the LIME method (Local Interpretable Model-agnostic Explanations); a minimal usage sketch follows below
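A minimal usage sketch of the authors' `lime` package on tabular data (the dataset and model are illustrative assumptions):

```python
# Minimal sketch: LIME fits a local, interpretable surrogate around one
# instance of a black-box model (pip install lime).
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode='classification',
)
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # (feature condition, local weight) pairs
```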
- (2016) A Model Explanation System: Latest Updates and Extensions by Ryan Turner
- (2017) Understanding Black-box Predictions via Influence Functions by Pang Wei Koh, Percy Liang
- (2017) A Unified Approach to Interpreting Model Predictions by Scott Lundberg, Su-In Lee
- https://arxiv.org/pdf/1705.07874
- Code: https://github.com/slundberg/shap
- Introduces the SHAP method (SHapley Additive exPlanations), generalizing LIME; a minimal usage sketch follows below
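A minimal usage sketch of the authors' `shap` package (dataset and model are illustrative assumptions); `TreeExplainer` computes SHAP values efficiently for tree ensembles:

```python
# Minimal sketch: per-sample, per-feature SHAP attributions for a tree
# model, plus a global beeswarm summary (pip install shap).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

shap.summary_plot(shap_values, X)  # global view of the attributions
```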
- (2018) Anchors: High-Precision Model-Agnostic Explanations by Marco Ribeiro, Sameer Singh, Carlos Guestrin
- (2018) Learning to Explain: An Information-Theoretic Perspective on Model Interpretation by Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
- (2018) Explanations of model predictions with live and breakDown packages by Mateusz Staniak, Przemyslaw Biecek
- (2018) A review book - Interpretable Machine Learning. A Guide for Making Black Box Models Explainable by Christoph Molnar
- (2018) Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead by Cynthia Rudin
- (2019) Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition by Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
- (2013) Visualizing and Understanding Convolutional Networks by Matthew D Zeiler, Rob Fergus
- (2013) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps by Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
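A minimal PyTorch sketch of the vanilla gradient saliency map from this paper (assumes torchvision >= 0.13 for the `weights` argument; the random tensor stands in for a real preprocessed image):

```python
# Minimal sketch: saliency as the input-gradient magnitude of the top
# class score (Simonyan et al., 2013).
import torch
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image

score = model(image)[0].max()  # score of the most likely class
score.backward()               # gradient of that score w.r.t. the pixels
saliency = image.grad.abs().max(dim=1)[0]  # max over RGB -> (1, 224, 224) map
```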
- (2015) Understanding Neural Networks Through Deep Visualization by Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson
- (2016) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization by Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
- (2016) Generating Visual Explanations by Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
- (2016) Rationalizing Neural Predictions by Tao Lei, Regina Barzilay, Tommi Jaakkola
- (2016) Gradients of Counterfactuals by Mukund Sundararajan, Ankur Taly, Qiqi Yan
- (2017) High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks by Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, Kyunghyun Cho
- Pixel entropy can be used to detect relevant picture regions (for ConvNets)
- See Visualization section and Fig. 5 of the paper
- (2017) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability by Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein
- (2017) Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks by Jose Oramas, Kaili Wang, Tinne Tuytelaars
- (2017) Axiomatic Attribution for Deep Networks by Mukund Sundararajan, Ankur Taly, Qiqi Yan
- https://arxiv.org/pdf/1703.01365
- Code: https://github.com/ankurtaly/Integrated-Gradients
- Proposes the Integrated Gradients method (a framework-agnostic sketch follows below)
- See also: Gradients of Counterfactuals https://arxiv.org/pdf/1611.02639.pdf
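A framework-agnostic sketch of Integrated Gradients, assuming a user-supplied `grad_fn` (hypothetical here) that returns the gradient of the model output with respect to its input; the method averages gradients along the straight-line path from a baseline to the input and scales by the input difference:

```python
# Minimal sketch of Integrated Gradients (Sundararajan et al., 2017).
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=100):
    # Points on the straight-line path from the baseline to the input
    alphas = np.linspace(0.0, 1.0, steps + 1).reshape(-1, *([1] * x.ndim))
    path = baseline + alphas * (x - baseline)
    # Riemann approximation of the path integral of the gradients
    avg_grad = np.mean([grad_fn(p) for p in path], axis=0)
    return (x - baseline) * avg_grad

# Toy check with f(x) = sum(x**2), whose gradient is 2*x: the attributions
# come out as x**2 and sum to f(x) - f(baseline), as the axioms require.
print(integrated_gradients(np.array([1.0, 2.0]), np.zeros(2), lambda x: 2 * x))
```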
- (2017) Learning Important Features Through Propagating Activation Differences by Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
- https://arxiv.org/pdf/1704.02685
- Proposes Deep Lift method
- Code: https://github.com/kundajelab/deeplift
- Videos: https://www.youtube.com/playlist?list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML
- (2017) The (Un)reliability of saliency methods by Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
- https://arxiv.org/pdf/1711.00867
- Reviews failure modes of methods that extract the pixels most important for a prediction
- (2018) Classifier-agnostic saliency map extraction by Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho
- (2018) A Benchmark for Interpretability Methods in Deep Neural Networks by Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
- (2018) The Building Blocks of Interpretability by Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, Alexander Mordvintsev
- https://dx.doi.org/10.23915/distill.00010
- Has some embedded links to notebooks
- Uses Lucid library https://github.com/tensorflow/lucid
- (2018) Hierarchical interpretations for neural network predictions by Chandan Singh, W. James Murdoch, Bin Yu
- (2018) iNNvestigate neural networks! by Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
- (2018) YASENN: Explaining Neural Networks via Partitioning Activation Sequences by Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin
- (2019) Attention is not Explanation by Sarthak Jain, Byron C. Wallace
- (2019) Attention Interpretability Across NLP Tasks by Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui
- (2017) Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples by Gail Weiss, Yoav Goldberg, Eran Yahav
- (2017) Distilling a Neural Network Into a Soft Decision Tree by Nicholas Frosst, Geoffrey Hinton
- (2017) Detecting Bias in Black-Box Models Using Transparent Model Distillation by Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou
- Visualizing Statistical Models: Removing the blindfold
- Partial dependence plots (see the sketch below)
- http://scikit-learn.org/stable/auto_examples/ensemble/plot_partial_dependence.html
- pdp: An R Package for Constructing Partial Dependence Plots https://journal.r-project.org/archive/2017/RJ-2017-016/RJ-2017-016.pdf https://cran.r-project.org/web/packages/pdp/index.html
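A minimal sketch of partial dependence with scikit-learn's inspection module (assumes scikit-learn >= 1.0 for `PartialDependenceDisplay`); `kind="both"` overlays the individual conditional expectation (ICE) curves of Goldstein et al. listed above:

```python
# Minimal sketch: average model response as one feature varies (PDP),
# with per-sample ICE curves overlaid.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

PartialDependenceDisplay.from_estimator(model, X, features=[0, 2], kind="both")
plt.show()
```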
- ggfortify: Unified Interface to Visualize Statistical Results of Popular R Packages
- RandomForestExplainer
- ggRandomForest
- Tutorial on Interpretable machine learning at ICML 2017
- P. Biecek, Show Me Your Model - Tools for Visualisation of Statistical Models
- S. Ritchie, Just-So Stories of AI
- C. Jarmul, Towards Interpretable Accountable Models
- I. Ozsvald, Machine Learning Libraries You’d Wish You’d Known About
- A large part of the talk covers model explanation and visualization
- Video: https://www.youtube.com/watch?v=nDF7_8FOhpI
- Associated notebook on explaining regression predictions: https://github.com/ianozsvald/data_science_delivered/blob/master/ml_explain_regression_prediction.ipynb
- G. Varoquaux, Understanding and diagnosing your machine-learning models (covers PDP and Lime among others)
- Interpretable ML Symposium (NIPS 2017) (contains links to papers, slides and videos)
- http://interpretable.ml/
- Debate, Interpretability is necessary in machine learning
- Workshop on Human Interpretability in Machine Learning (WHI), organised in conjunction with ICML
- 2018 (contains links to papers and slides)
- 2017 (contains links to papers and slides)
- 2016 (contains links to papers)
- Analyzing and interpreting neural networks for NLP (BlackboxNLP), organised in conjunction with EMNLP
- 2019 (links below may get prefixed by 2019 later on)
- https://blackboxnlp.github.io/
- https://blackboxnlp.github.io/program.html
- Papers should be available on arXiv
- 2018
- FAT/ML Fairness, Accountability, and Transparency in Machine Learning https://www.fatml.org/
- 2018
- 2017
- 2016
- 2015
- 2014
- AAAI/ACM Annual Conference on AI, Ethics, and Society
- 2019 (links below may get prefixed by 2019 later on)
- 2018
Software related to papers is mentioned along with each publication. Here only standalone software is included.
- DALEX - R package, Descriptive mAchine Learning EXplanations
- ELI5 - Python package dedicated to debugging machine learning classifiers and explaining their predictions
- forestmodel - R package visualizing coefficients of different models with the so-called forest plot
- fscaret - R package with automated Feature Selection from ‘caret’
- iml - R package for Interpretable Machine Learning
- interpret - Python package for training interpretable models and explaining black-box systems, by Microsoft
- lime - R package implementing LIME
- lofo-importance - Python package computing feature importance with the Leave One Feature Out (LOFO) method
- Lucid - a collection of infrastructure and tools for research in neural network interpretability
- praznik - R package with a collection of feature selection filters performing greedy optimisation of mutual information-based usefulness criteria, see JMLR 13, 27−66 (2012)
- yellowbrick - Python package offering visual analysis and diagnostic tools to facilitate machine learning model selection
- Awesome list of resources by Patrick Hall
- Awesome XAI resources by Przemysław Biecek