interpretable-ai

There are 103 repositories under the interpretable-ai topic.

  • jacobgil/pytorch-grad-cam

    Advanced AI explainability for computer vision. Supports CNNs, Vision Transformers, classification, object detection, segmentation, image similarity, and more (a minimal usage sketch follows the list below).

    Language: Python
  • interpretml/interpret

    Fit interpretable models. Explain blackbox machine learning (an Explainable Boosting Machine sketch follows the list below).

    Language: C++
  • pytorch/captum

    Model interpretability and understanding for PyTorch (an Integrated Gradients sketch follows the list below).

    Language: Python
  • jphall663/awesome-machine-learning-interpretability

    A curated list of awesome responsible machine learning resources.

  • wangyongjie-ntu/Awesome-explainable-AI

    A collection of research materials on explainable AI/ML

  • jphall663/interpretable_machine_learning_with_python

    Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.

    Language: Jupyter Notebook
  • h2oai/mli-resources

    H2O.ai Machine Learning Interpretability Resources

    Language: Jupyter Notebook
  • explainX/explainx

    Explainable AI framework for data scientists. Explain & debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward. Reach out @ ms8909@nyu.edu

    Language: Jupyter Notebook
  • chr5tphr/zennit

    Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP (an LRP attribution sketch follows the list below).

    Language: Python
  • pietrobarbiero/pytorch_explain

    PyTorch Explain: Interpretable Deep Learning in Python.

    Language: Jupyter Notebook
  • ajayarunachalam/Deep_XF

    Package for building explainable forecasting and nowcasting models with state-of-the-art deep neural networks and dynamic factor models on time-series data sets with a single line of code. Also provides utilities for time-series signal similarity matching and for removing noise from time-series signals.

    Language: Jupyter Notebook
  • andreysharapov/xaience

    All about explainable AI, algorithmic fairness and more

    Language: HTML
  • Julia-XAI/ExplainableAI.jl

    Explainable AI in Julia.

    Language: Julia
  • VincentGranville/Machine-Learning

    Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.

    Language: Python
  • 12wang3/rrl

    Code for the NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and the TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification".

    Language: Python
  • fat-forensics/fat-forensics

    Modular Python Toolbox for Fairness, Accountability and Transparency Forensics

    Language: Python
  • AthenaCore/AwesomeResponsibleAI

    A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI, Trustworthy AI, and Human-Centered AI.

  • adaamko/POTATO

    XAI-based human-in-the-loop framework for automatic rule learning.

    Language: Jupyter Notebook
  • MarcoParola/pytorch-sidu

    SIDU: SImilarity Difference and Uniqueness method for explainable AI

    Language: Python
  • jialinwu17/self_critical_vqa

    Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering".

    Language: Python
  • linkedin/TE2Rules

    Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.

    Language: Python
  • TooTouch/WhiteBox-Part1

    This part introduces and experiments with ways to interpret and evaluate models in the image domain (PyTorch).

    Language: Jupyter Notebook
  • naotoo1/Beyond-Neural-Scaling

    Implementation of "Beyond Neural Scaling: beating power laws" for deep models and prototype-based models.

    Language: Python
  • koriavinash1/BioExp

    Explainability of Deep Learning Models

    Language: Python
  • weimin17/Multimodal_Transformer

    A Multimodal Transformer: Fusing Clinical Notes With Structured EHR Data for Interpretable In-Hospital Mortality Prediction

    Language: Python
  • willbakst/pytorch-lattice

    A PyTorch implementation of constrained optimization and modeling techniques

    Language: Python
  • guidelabs/infembed

    Find the samples in the test data on which your (generative) model makes mistakes.

    Language: Python
  • jphall663/hc_ml

    Slides, videos and other potentially useful artifacts from various presentations on responsible machine learning.

    Language: TeX
  • 12wang3/mllp

    Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".

    Language: Python
  • navdeep-G/interpretable-ml

    Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.

    Language: Jupyter Notebook
  • si-cim/prototorch

    ProtoTorch is a PyTorch-based Python toolbox for bleeding-edge research in prototype-based machine learning algorithms.

    Language: Python
  • cwangrun/ST-ProtoPNet

    [ICCV 2023] Learning Support and Trivial Prototypes for Interpretable Image Classification

    Language: Python
  • deepfx/netlens

    A toolkit for interpreting and analyzing neural networks (vision)

    Language: Jupyter Notebook
  • prclibo/ice

    Interpretable Control Exploration and Counterfactual Explanation (ICE) on StyleGAN

    Language: Jupyter Notebook
  • uncbiag/NAISR

    NAISR: A 3D Neural Additive Model for Interpretable Shape Representation

    Language: Python
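
For jacobgil/pytorch-grad-cam, a minimal Grad-CAM sketch following the call pattern in the project's README; the torchvision ResNet-50, the random input tensor, and the class index 281 are placeholders, not part of the listing above.

    import torch
    from torchvision.models import resnet50
    from pytorch_grad_cam import GradCAM
    from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

    # Placeholder model and input; swap in your own network and a preprocessed image batch.
    model = resnet50(weights=None).eval()       # untrained stand-in; load real weights in practice
    target_layers = [model.layer4[-1]]          # last conv block of the ResNet
    input_tensor = torch.randn(1, 3, 224, 224)  # stand-in for a normalized image

    cam = GradCAM(model=model, target_layers=target_layers)
    targets = [ClassifierOutputTarget(281)]     # placeholder class index to explain
    grayscale_cam = cam(input_tensor=input_tensor, targets=targets)[0, :]
    print(grayscale_cam.shape)                  # (224, 224) heatmap to overlay on the image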
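
For interpretml/interpret, a minimal sketch of fitting and inspecting an Explainable Boosting Machine; the scikit-learn breast-cancer dataset is just a convenient placeholder.

    from interpret import show
    from interpret.glassbox import ExplainableBoostingClassifier
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    # Small public dataset just to show the call pattern.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    ebm = ExplainableBoostingClassifier()       # glassbox, GAM-style model
    ebm.fit(X_train, y_train)

    show(ebm.explain_global())                  # per-feature shape functions
    show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations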
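
For pytorch/captum, a minimal Integrated Gradients sketch using Captum's documented attribute API; the small throwaway model and random inputs are placeholders.

    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Throwaway model standing in for whatever network you want to explain.
    model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2)).eval()
    inputs = torch.randn(4, 10, requires_grad=True)
    baselines = torch.zeros_like(inputs)        # all-zeros reference point

    ig = IntegratedGradients(model)
    attributions, delta = ig.attribute(
        inputs, baselines=baselines, target=1, return_convergence_delta=True
    )
    print(attributions.shape)                   # per-feature attributions, same shape as inputs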
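
For chr5tphr/zennit, a minimal LRP-style attribution sketch, assuming the EpsilonPlusFlat composite and Gradient attributor from Zennit's documentation; the VGG-16, the random input, and the target class 437 are placeholders.

    import torch
    from torchvision.models import vgg16
    from zennit.attribution import Gradient
    from zennit.composites import EpsilonPlusFlat

    # Placeholder network and input; any feed-forward PyTorch model works similarly.
    model = vgg16(weights=None).eval()          # untrained stand-in; load real weights in practice
    data = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
    target = torch.eye(1000)[[437]]             # one-hot output relevance at a placeholder class

    composite = EpsilonPlusFlat()               # assigns LRP rules to the model's layers
    with Gradient(model=model, composite=composite) as attributor:
        output, relevance = attributor(data, target)
    print(relevance.shape)                      # input-level relevance map, same shape as data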