This repository contains Jupyter notebooks implementing the code samples from the book *Interpretable AI: Building Explainable Machine Learning Systems* (Manning Publications). The notebooks cover only a subset of the material in the book.
These notebooks use Python 3.7, scikit-learn 0.21.3, and PyTorch 1.4.0. You can create the conda environment from the `environment.yml` file as follows:

```shell
conda env create -f environment.yml
```

The environment is named `interpretable-ai` and can be activated as follows:

```shell
conda activate interpretable-ai
```
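As a quick sanity check after activating the environment, you can confirm that the pinned library versions are available. This snippet is not part of the book's code, just a minimal sketch; the expected version strings are taken from the requirements above.

```python
import importlib

# Versions the notebooks were written against (from this README).
EXPECTED = {"sklearn": "0.21.3", "torch": "1.4.0"}

results = {}
for name, want in EXPECTED.items():
    try:
        module = importlib.import_module(name)
        results[name] = getattr(module, "__version__", "unknown")
    except ImportError:
        results[name] = "not installed"
    print(f"{name}: {results[name]} (expected {want})")
```

Newer library versions may also work for most notebooks, but the outputs can differ from those shown in the book.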
- Chapter 2: White-Box Models
- Chapter 3: Model Agnostic Methods - Global Interpretability
  - Tree Ensembles and Global Interpretability
    - Tree Ensembles (Random Forest)
    - Partial Dependence Plots (PDPs)
    - Feature Interactions
  - Data
  - Models
- Chapter 4: Model Agnostic Methods - Local Interpretability
  - Deep Neural Networks and Local Interpretability
    - Deep Neural Networks (DNNs)
    - Local Interpretable Model-agnostic Explanations (LIME)
    - Shapley Additive exPlanations (SHAP)
    - Anchors
  - Illustration of Activation Functions
  - Data
  - Models
- Chapter 5: Saliency Mapping
  - Convolutional Neural Networks and Visual Attribution
    - Convolutional Neural Networks (CNNs)
    - Visual Attribution Methods
      - Vanilla backpropagation
      - Guided backpropagation
      - Integrated gradients
      - SmoothGrad
      - Grad-CAM
      - Guided Grad-CAM
  - Data
  - Models
- Chapter 6: Understanding Layers and Units
  - Setup: refer to the README on how to set up the network dissection framework
  - Results: refer to the README to download the network dissection results for certain pre-trained models
  - Network Dissection
  - Visualize Network Dissection Results
- Appendix A: PyTorch
- Chapter 7 (work in progress)
- Chapter 8 (work in progress)