interpretability
There are 698 repositories under the interpretability topic.
shap/shap
A game theoretic approach to explain the output of any machine learning model.
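For orientation, a minimal sketch of typical SHAP usage (assuming shap and scikit-learn are installed; the diabetes dataset and random-forest regressor are illustrative choices, not part of the library):

```python
# Minimal SHAP sketch: explain a tree ensemble with the unified Explainer API.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)   # dispatches to a tree explainer for this model
shap_values = explainer(X.iloc[:100])  # Explanation object for 100 rows

shap.plots.beeswarm(shap_values)       # global summary of feature contributions
```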
EthicalML/awesome-production-machine-learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
jacobgil/pytorch-grad-cam
Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
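A hedged sketch of the library's basic flow (assuming a recent release of the grad-cam package and torchvision >= 0.13; the ResNet, layer choice, and random tensor are placeholders):

```python
import torch
from torchvision.models import resnet50
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet50(weights="IMAGENET1K_V1").eval()
input_tensor = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed image batch

# Grad-CAM needs one or more target conv layers; layer4[-1] is a common pick for ResNets
cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
heatmap = cam(input_tensor=input_tensor,
              targets=[ClassifierOutputTarget(281)])  # 281 = ImageNet "tabby cat"
print(heatmap.shape)                                  # (1, 224, 224) saliency map
```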
interpretml/interpret
Fit interpretable models. Explain blackbox machine learning.
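A minimal glassbox sketch with interpret's Explainable Boosting Machine (assuming interpret and scikit-learn are installed; the breast-cancer dataset is only an example):

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()            # inherently interpretable GAM-style model
ebm.fit(X_train, y_train)

show(ebm.explain_global())                       # per-feature shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations
```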
pytorch/captum
Model interpretability and understanding for PyTorch
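A minimal Integrated Gradients sketch with Captum (the toy model and random inputs are placeholders, not part of the library):

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3)).eval()
inputs = torch.rand(4, 10)

ig = IntegratedGradients(model)
# Attribute the class-1 score of each sample back to its 10 input features
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions.shape)   # torch.Size([4, 10])
print(delta)                # per-sample convergence error of the approximation
```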
tensorflow/lucid
A collection of infrastructure and tools for research in neural network interpretability.
jphall663/awesome-machine-learning-interpretability
A curated list of awesome responsible machine learning resources.
stellargraph/stellargraph
StellarGraph - Machine Learning on Graphs
MAIF/shapash
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
SeldonIO/alibi
Algorithms for explaining machine learning models
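A hedged sketch of one Alibi explainer, AnchorTabular (assuming a recent alibi release; the iris data and random forest are illustrative):

```python
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(clf.predict_proba, feature_names=data.feature_names)
explainer.fit(data.data)                                # builds the sampling distribution
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))      # human-readable rule
print("Precision:", explanation.precision)
```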
frgfm/torch-cam
Class activation maps for your PyTorch models (CAM, Grad-CAM, Grad-CAM++, Smooth Grad-CAM++, Score-CAM, SS-CAM, IS-CAM, XGrad-CAM, Layer-CAM)
chaoyanghe/Awesome-Federated-Learning
FedML - The Research and Production Integrated Federated Learning Library: https://fedml.ai
google-deepmind/penzai
A JAX research toolkit for building, editing, and visualizing neural networks.
ramprs/grad-cam
[ICCV 2017] Torch code for Grad-CAM
wangyongjie-ntu/Awesome-explainable-AI
A collection of research materials on explainable AI/ML
csinva/imodels
Interpretable ML package 🔍 for concise, transparent, and accurate predictive modeling (sklearn-compatible).
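A minimal sketch of one imodels estimator, RuleFitClassifier (the dataset is illustrative; the package exposes many other sklearn-compatible rule- and tree-based models):

```python
from imodels import RuleFitClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RuleFitClassifier()           # learns a sparse set of human-readable rules
model.fit(X_train, y_train)
print(model.score(X_test, y_test))    # standard sklearn API throughout
```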
microsoft/responsible-ai-toolbox
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly, and take better data-driven actions.
ModelOriented/DALEX
moDel Agnostic Language for Exploration and eXplanation
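A hedged sketch of the Python port (the dalex package); DALEX also has an R implementation, and the model and data here are illustrative:

```python
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

explainer = dx.Explainer(clf, X, y)           # one model-agnostic wrapper
explainer.model_parts().plot()                # permutation feature importance
explainer.predict_parts(X.iloc[[0]]).plot()   # break-down of a single prediction
```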
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
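A hedged sketch of the advertised two-line usage (the sentiment checkpoint is just a public example model, not mandated by the library):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers_interpret import SequenceClassificationExplainer

name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

explainer = SequenceClassificationExplainer(model, tokenizer)
word_attributions = explainer("I loved every minute of this movie.")
print(explainer.predicted_class_name)   # e.g. POSITIVE
print(word_attributions)                # (token, attribution score) pairs
```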
stanfordnlp/pyreft
ReFT: Representation Finetuning for Language Models
EthicalML/xai
XAI - An eXplainability toolbox for machine learning
sicara/tf-explain
Interpretability Methods for tf.keras models with Tensorflow 2.x
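A hedged Grad-CAM sketch with tf-explain on a Keras classifier (the ResNet50 and random image are placeholders; a layer_name can also be passed explicitly):

```python
import numpy as np
import tensorflow as tf
from tf_explain.core.grad_cam import GradCAM

model = tf.keras.applications.ResNet50(weights="imagenet")
img = np.random.rand(224, 224, 3).astype("float32")   # stand-in for a real image

explainer = GradCAM()
# validation_data is an (images, labels) tuple; labels may be None
grid = explainer.explain(([img], None), model, class_index=281)
explainer.save(grid, ".", "grad_cam.png")              # writes the heatmap overlay
```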
shubhomoydas/ad_examples
A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule mining, and description for diversity/explanation/interpretability. Also analyzes incorporating label feedback with ensemble and tree-based detectors, and includes adversarial attacks with a Graph Convolutional Network.
kundajelab/deeplift
Public facing deeplift repo
pbiecek/xai_resources
Interesting resources related to XAI (Explainable Artificial Intelligence)
hila-chefer/Transformer-MM-Explainability
[ICCV 2021- Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Including examples for DETR, VQA.
oneTaken/awesome_deep_learning_interpretability
Highly cited and top-conference papers from recent years on the interpretability of deep neural network models (with code).
MisaOgura/flashtorch
Visualization toolkit for neural networks in PyTorch!
jphall663/interpretable_machine_learning_with_python
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
tensorflow/decision-forests
A collection of state-of-the-art algorithms for the training, serving and interpretation of Decision Forest models in Keras.
stanfordnlp/pyvene
Stanford NLP Python Library for Understanding and Improving PyTorch Models via Interventions
deel-ai/xplique
👋 Xplique is a Neural Networks Explainability Toolbox
tensorflow/tcav
Code for the TCAV ML interpretability project
alvinwan/neural-backed-decision-trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
kmeng01/rome
Locating and editing factual associations in GPT (NeurIPS 2022)
understandable-machine-intelligence-lab/Quantus
Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations