InterpretML is an open-source package that incorporates state-of-the-art machine learning interpretability techniques under one roof. With this package, you can train interpretable glassbox models and explain blackbox systems. InterpretML helps you understand your model's global behavior and the reasons behind individual predictions.
Interpretability is essential for:
- Model debugging - Why did my model make this mistake?
- Feature engineering - How can I improve my model?
- Detecting fairness issues - Does my model discriminate?
- Human-AI cooperation - How can I understand and trust the model's decisions?
- Regulatory compliance - Does my model satisfy legal requirements?
- High-risk applications - Healthcare, finance, judicial, ...
Supports Python 3.6+ on Linux, macOS, and Windows.
```sh
pip install interpret
```
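If you prefer conda, a conda-forge build is, to our knowledge, also available:

```sh
conda install -c conda-forge interpret
```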
EBM is an interpretable model developed at Microsoft Research. It uses modern machine learning techniques like bagging, gradient boosting, and automatic interaction detection to breathe new life into traditional GAMs (Generalized Additive Models). This makes EBMs as accurate as state-of-the-art techniques like random forests and gradient boosted trees. However, unlike these blackbox models, EBMs produce exact explanations and are editable by domain experts.
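Concretely, EBMs keep the additive structure that makes GAMs readable. Writing the model form out (standard GA2M notation from the papers cited below; g is the link function, β₀ the intercept, and each shape function f is learned with bagged, gradient-boosted shallow trees):

```math
g\big(\mathbb{E}[y]\big) = \beta_0 + \sum_i f_i(x_i) + \sum_{(i,j)} f_{i,j}(x_i, x_j)
```

Because every term depends on at most one or two features, each shape function can be plotted, inspected, and hand-edited directly, which is where the exact explanations and editability come from.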
Dataset (AUROC) | Domain | Logistic Regression | Random Forest | XGBoost | Explainable Boosting Machine |
---|---|---|---|---|---|
Adult Income | Finance | .907±.003 | .903±.002 | .927±.001 | .928±.002 |
Heart Disease | Medical | .895±.030 | .890±.008 | .851±.018 | .898±.013 |
Breast Cancer | Medical | .995±.005 | .992±.009 | .992±.010 | .995±.006 |
Telecom Churn | Business | .849±.005 | .824±.004 | .828±.010 | .852±.006 |
Credit Fraud | Security | .979±.002 | .950±.007 | .981±.003 | .981±.003 |
Notebook for reproducing table
Interpretability Technique | Type |
---|---|
Explainable Boosting | glassbox model |
Decision Tree | glassbox model |
Decision Rule List | glassbox model |
Linear/Logistic Regression | glassbox model |
SHAP Kernel Explainer | blackbox explainer |
LIME | blackbox explainer |
Morris Sensitivity Analysis | blackbox explainer |
Partial Dependence | blackbox explainer |
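The blackbox explainers follow the same explain-and-show pattern as the glassbox models. A minimal sketch, assuming a fitted scikit-learn-style model `blackbox_model` and the `LimeTabular` constructor of recent interpret releases (exact signatures have varied across versions):

```python
# Sketch only: `blackbox_model` is any fitted scikit-learn-style classifier,
# and the LimeTabular signature is assumed from recent interpret releases.
from interpret.blackbox import LimeTabular
from interpret import show

lime = LimeTabular(blackbox_model, X_train)        # wrap the opaque model
show(lime.explain_local(X_test[:5], y_test[:5]))   # explain a few predictions
```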
Let's fit an Explainable Boosting Machine
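The snippets below assume a train/test split already exists. A minimal setup sketch (the file name and label column here are placeholders for your own tabular data, not a fixed API):

```python
# Setup sketch: "adult.csv" and the "income" column are hypothetical
# placeholders; any pandas dataframe or numpy array works.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("adult.csv")
X, y = df.drop(columns="income"), df["income"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```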
```python
from interpret.glassbox import ExplainableBoostingClassifier

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
# or substitute with LogisticRegression, DecisionTreeClassifier, RuleListClassifier, ...
# EBM supports pandas dataframes, numpy arrays, and handles "string" data natively.
```
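Fitted EBMs follow the scikit-learn estimator API, so scoring held-out data works the usual way (AUROC here is just one reasonable choice of metric):

```python
# EBMs expose the standard scikit-learn predict/predict_proba interface.
from sklearn.metrics import roc_auc_score

probs = ebm.predict_proba(X_test)[:, 1]  # probability of the positive class
print("AUROC:", roc_auc_score(y_test, probs))
```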
Understand the model
```python
from interpret import show

ebm_global = ebm.explain_global()
show(ebm_global)
```
Understand individual predictions
```python
ebm_local = ebm.explain_local(X_test, y_test)
show(ebm_local)
```
And if you have multiple model explanations, compare them
```python
show([logistic_regression_global, decision_tree_global])
```
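The explanation objects compared above come from fitting other glassbox models first. A minimal sketch using interpret's scikit-learn-style wrappers (class names from interpret.glassbox):

```python
# Sketch: interpret.glassbox wraps familiar scikit-learn models and adds
# explain_global/explain_local on top of the usual fit/predict API.
from interpret.glassbox import LogisticRegression, ClassificationTree

lr = LogisticRegression()
lr.fit(X_train, y_train)
logistic_regression_global = lr.explain_global()

dt = ClassificationTree()
dt.fit(X_train, y_train)
decision_tree_global = dt.explain_global()
```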
If you need to keep your data private, we also support Differentially Private EBMs (see DP-EBMs).
```python
from interpret.privacy import DPExplainableBoostingClassifier, DPExplainableBoostingRegressor

dp_ebm = DPExplainableBoostingClassifier(epsilon=1, delta=1e-5)  # specify privacy parameters
dp_ebm.fit(X_train, y_train)

show(dp_ebm.explain_global())  # identical function calls to standard EBMs
```
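Tighter privacy budgets (smaller epsilon) generally cost some accuracy. A hedged sketch of how you might measure that trade-off on held-out data, reusing the ebm fitted above:

```python
# Sketch: both models follow the scikit-learn API, so any standard metric
# can quantify the accuracy cost of the privacy guarantee.
from sklearn.metrics import roc_auc_score

for name, model in [("EBM", ebm), ("DP-EBM (eps=1)", dp_ebm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUROC = {auc:.3f}")
```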
For more information, see the documentation.
InterpretML was originally created by (equal contributions): Samuel Jenkins, Harsha Nori, Paul Koch, and Rich Caruana
EBMs are a fast derivative of GA2M, which was invented by: Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker
Many people have supported us along the way. Check out ACKNOWLEDGEMENTS.md!
We also build on top of many great packages. Please check them out!
plotly | dash | scikit-learn | lime | shap | salib | skope-rules | treeinterpreter | gevent | joblib | pytest | jupyter
InterpretML
"InterpretML: A Unified Framework for Machine Learning Interpretability" (H. Nori, S. Jenkins, P. Koch, and R. Caruana 2019)
```bibtex
@article{nori2019interpretml,
  title={InterpretML: A Unified Framework for Machine Learning Interpretability},
  author={Nori, Harsha and Jenkins, Samuel and Koch, Paul and Caruana, Rich},
  journal={arXiv preprint arXiv:1909.09223},
  year={2019}
}
```
Paper link
Explainable Boosting
"Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission" (R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, and N. Elhadad 2015)
```bibtex
@inproceedings{caruana2015intelligible,
  title={Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission},
  author={Caruana, Rich and Lou, Yin and Gehrke, Johannes and Koch, Paul and Sturm, Marc and Elhadad, Noemie},
  booktitle={Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  pages={1721--1730},
  year={2015},
  organization={ACM}
}
```
Paper link
"Accurate intelligible models with pairwise interactions" (Y. Lou, R. Caruana, J. Gehrke, and G. Hooker 2013)
```bibtex
@inproceedings{lou2013accurate,
  title={Accurate intelligible models with pairwise interactions},
  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes and Hooker, Giles},
  booktitle={Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining},
  pages={623--631},
  year={2013},
  organization={ACM}
}
```
Paper link
"Intelligible models for classification and regression" (Y. Lou, R. Caruana, and J. Gehrke 2012)
```bibtex
@inproceedings{lou2012intelligible,
  title={Intelligible models for classification and regression},
  author={Lou, Yin and Caruana, Rich and Gehrke, Johannes},
  booktitle={Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining},
  pages={150--158},
  year={2012},
  organization={ACM}
}
```
Paper link
"Axiomatic Interpretability for Multiclass Additive Models" (X. Zhang, S. Tan, P. Koch, Y. Lou, U. Chajewska, and R. Caruana 2019)
```bibtex
@inproceedings{zhang2019axiomatic,
  title={Axiomatic Interpretability for Multiclass Additive Models},
  author={Zhang, Xuezhou and Tan, Sarah and Koch, Paul and Lou, Yin and Chajewska, Urszula and Caruana, Rich},
  booktitle={Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery \& Data Mining},
  pages={226--234},
  year={2019},
  organization={ACM}
}
```
Paper link
"Distill-and-compare: auditing black-box models using transparent model distillation" (S. Tan, R. Caruana, G. Hooker, and Y. Lou 2018)
```bibtex
@inproceedings{tan2018distill,
  title={Distill-and-compare: auditing black-box models using transparent model distillation},
  author={Tan, Sarah and Caruana, Rich and Hooker, Giles and Lou, Yin},
  booktitle={Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society},
  pages={303--310},
  year={2018},
  organization={ACM}
}
```
Paper link
"Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models" (B. Lengerich, S. Tan, C. Chang, G. Hooker, R. Caruana 2019)
```bibtex
@article{lengerich2019purifying,
  title={Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models},
  author={Lengerich, Benjamin and Tan, Sarah and Chang, Chun-Hao and Hooker, Giles and Caruana, Rich},
  journal={arXiv preprint arXiv:1911.04974},
  year={2019}
}
```
Paper link
"Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning" (H. Kaur, H. Nori, S. Jenkins, R. Caruana, H. Wallach, J. Wortman Vaughan 2020)
```bibtex
@inproceedings{kaur2020interpreting,
  title={Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning},
  author={Kaur, Harmanpreet and Nori, Harsha and Jenkins, Samuel and Caruana, Rich and Wallach, Hanna and Wortman Vaughan, Jennifer},
  booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  pages={1--14},
  year={2020}
}
```
Paper link
"How Interpretable and Trustworthy are GAMs?" (C. Chang, S. Tan, B. Lengerich, A. Goldenberg, R. Caruana 2020)
```bibtex
@article{chang2020interpretable,
  title={How Interpretable and Trustworthy are GAMs?},
  author={Chang, Chun-Hao and Tan, Sarah and Lengerich, Ben and Goldenberg, Anna and Caruana, Rich},
  journal={arXiv preprint arXiv:2006.06466},
  year={2020}
}
```
Paper link
Differential Privacy
"Accuracy, Interpretability, and Differential Privacy via Explainable Boosting" (H. Nori, R. Caruana, Z. Bu, J. Shen, J. Kulkarni 2021)
```bibtex
@inproceedings{pmlr-v139-nori21a,
  title={Accuracy, Interpretability, and Differential Privacy via Explainable Boosting},
  author={Nori, Harsha and Caruana, Rich and Bu, Zhiqi and Shen, Judy Hanwen and Kulkarni, Janardhan},
  booktitle={Proceedings of the 38th International Conference on Machine Learning},
  pages={8227--8237},
  year={2021},
  volume={139},
  series={Proceedings of Machine Learning Research},
  publisher={PMLR}
}
```
Paper link
LIME
"Why should i trust you?: Explaining the predictions of any classifier" (M. T. Ribeiro, S. Singh, and C. Guestrin 2016)
```bibtex
@inproceedings{ribeiro2016should,
  title={Why should i trust you?: Explaining the predictions of any classifier},
  author={Ribeiro, Marco Tulio and Singh, Sameer and Guestrin, Carlos},
  booktitle={Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining},
  pages={1135--1144},
  year={2016},
  organization={ACM}
}
```
Paper link
SHAP
"A Unified Approach to Interpreting Model Predictions" (S. M. Lundberg and S.-I. Lee 2017)
```bibtex
@incollection{NIPS2017_7062,
  title={A Unified Approach to Interpreting Model Predictions},
  author={Lundberg, Scott M and Lee, Su-In},
  booktitle={Advances in Neural Information Processing Systems 30},
  editor={I. Guyon and U. V. Luxburg and S. Bengio and H. Wallach and R. Fergus and S. Vishwanathan and R. Garnett},
  pages={4765--4774},
  year={2017},
  publisher={Curran Associates, Inc.},
  url={http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf}
}
```
Paper link
"Consistent individualized feature attribution for tree ensembles" (Lundberg, Scott M and Erion, Gabriel G and Lee, Su-In 2018)
```bibtex
@article{lundberg2018consistent,
  title={Consistent individualized feature attribution for tree ensembles},
  author={Lundberg, Scott M and Erion, Gabriel G and Lee, Su-In},
  journal={arXiv preprint arXiv:1802.03888},
  year={2018}
}
```
Paper link
"Explainable machine-learning predictions for the prevention of hypoxaemia during surgery" (S. M. Lundberg et al. 2018)
```bibtex
@article{lundberg2018explainable,
  title={Explainable machine-learning predictions for the prevention of hypoxaemia during surgery},
  author={Lundberg, Scott M and Nair, Bala and Vavilala, Monica S and Horibe, Mayumi and Eisses, Michael J and Adams, Trevor and Liston, David E and Low, Daniel King-Wai and Newman, Shu-Fang and Kim, Jerry and others},
  journal={Nature Biomedical Engineering},
  volume={2},
  number={10},
  pages={749},
  year={2018},
  publisher={Nature Publishing Group}
}
```
Paper link
Sensitivity Analysis
"SALib: An open-source Python library for Sensitivity Analysis" (J. D. Herman and W. Usher 2017)
```bibtex
@article{herman2017salib,
  title={SALib: An open-source Python library for Sensitivity Analysis},
  author={Herman, Jonathan D and Usher, Will},
  journal={J. Open Source Software},
  volume={2},
  number={9},
  pages={97},
  year={2017}
}
```
Paper link
"Factorial sampling plans for preliminary computational experiments" (M. D. Morris 1991)
```bibtex
@article{morris1991factorial,
  title={Factorial sampling plans for preliminary computational experiments},
  author={Morris, Max D},
  journal={Technometrics},
  volume={33},
  number={2},
  pages={161--174},
  year={1991},
  publisher={Taylor \& Francis Group}
}
```
Paper link
Partial Dependence
"Greedy function approximation: a gradient boosting machine" (J. H. Friedman 2001)
```bibtex
@article{friedman2001greedy,
  title={Greedy function approximation: a gradient boosting machine},
  author={Friedman, Jerome H},
  journal={Annals of statistics},
  pages={1189--1232},
  year={2001},
  publisher={JSTOR}
}
```
Paper link
Open Source Software
"Scikit-learn: Machine learning in Python" (F. Pedregosa et al. 2011)
```bibtex
@article{pedregosa2011scikit,
  title={Scikit-learn: Machine learning in Python},
  author={Pedregosa, Fabian and Varoquaux, Ga{\"e}l and Gramfort, Alexandre and Michel, Vincent and Thirion, Bertrand and Grisel, Olivier and Blondel, Mathieu and Prettenhofer, Peter and Weiss, Ron and Dubourg, Vincent and others},
  journal={Journal of machine learning research},
  volume={12},
  number={Oct},
  pages={2825--2830},
  year={2011}
}
```
Paper link
"Collaborative data science" (Plotly Technologies Inc. 2015)
```bibtex
@online{plotly,
  author={Plotly Technologies Inc.},
  title={Collaborative data science},
  publisher={Plotly Technologies Inc.},
  address={Montreal, QC},
  year={2015},
  url={https://plot.ly}
}
```
Link
"Joblib: running python function as pipeline jobs" (G. Varoquaux and O. Grisel 2009)
```bibtex
@article{varoquaux2009joblib,
  title={Joblib: running python function as pipeline jobs},
  author={Varoquaux, Ga{\"e}l and Grisel, O},
  journal={packages.python.org/joblib},
  year={2009}
}
```
Link
- The Science Behind InterpretML: Explainable Boosting Machine
- How to Explain Models with InterpretML Deep Dive
- Black-Box and Glass-Box Explanation in Machine Learning
- Explainable AI explained! By-design interpretable models with Microsoft's InterpretML
- Interpreting Machine Learning Models with InterpretML
- Interpretable or Accurate? Why Not Both?
- The Explainable Boosting Machine. As accurate as gradient boosting, as interpretable as linear regression.
- Performance And Explainability With EBM
- InterpretML: Another Way to Explain Your Model
- A gentle introduction to GA2Ms, a white box model
- Model Interpretation with Microsoft’s Interpret ML
- Explaining Model Pipelines With InterpretML
- Explain Your Model with Microsoft’s InterpretML
- On Model Explainability: From LIME, SHAP, to Explainable Boosting
- Dealing with Imbalanced Data (Mortgage loans defaults)
- The right way to compute your Shapley Values
- The Art of Sprezzatura for Machine Learning
- Explainable Boosting Machines for Slope Failure Spatial Predictive Modeling
- Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health
- Identifying main and interaction effects of risk factors to predict intensive care admission in patients hospitalized with COVID-19
- Neural Additive Models: Interpretable Machine Learning with Neural Nets
- NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning
- Integrating Co-Clustering and Interpretable Machine Learning for the Prediction of Intravenous Immunoglobulin Resistance in Kawasaki Disease
- GAMI-Net: An Explainable Neural Network based on Generalized Additive Models with Structured Interactions
- Interpretable Prediction of Goals in Soccer
- Extending the Tsetlin Machine with Integer-Weighted Clauses for Increased Interpretability
- In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction
- Development and Validation of an Interpretable 3-day Intensive Care Unit Readmission Prediction Model Using Explainable Boosting Machines
- Explainable Boosting Machine for Predicting Alzheimer’s Disease from MRI Hippocampal Subfields
- Impact of Accuracy on Model Interpretations
- Interpretable Machine Learning with Python
- Explainable Artificial Intelligence: An Introduction to Interpretable Machine Learning
There are multiple ways to get in touch:
- Email us at interpret@microsoft.com
- Or, feel free to raise a GitHub issue