captum

There are 29 repositories under the captum topic.

  • cdpierse/transformers-interpret

    Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.

    Language: Jupyter Notebook
  • inseq-team/inseq

    Interpretability for sequence generation models 🐛 🔍

    Language: Python
  • DFKI-NLP/thermostat

    Collection of NLP model explanations and accompanying analysis tools

    Language: Jsonnet
  • copenlu/ALPS_2021

    XAI Tutorial for the Explainable AI track in the ALPS winter school 2021

    Language: Jupyter Notebook
  • TannerGilbert/Model-Interpretation

    Overview of different model interpretability libraries.

    Language: Jupyter Notebook
  • robinvanschaik/interpret-flair

    A small repository to test Captum Explainable AI with a trained Flair transformers-based text classifier.

    Language: Jupyter Notebook
  • richouzo/hate-speech-detection-survey

    Trained Neural Networks (LSTM, HybridCNN/LSTM, PyramidCNN, Transformers, etc.) & comparison for the task of Hate Speech Detection on the OLID Dataset (Tweets).

    Language: Jupyter Notebook
  • CECNL/XBrainLab

    We introduce XBrainLab, open-source, user-friendly software for accelerated interpretation of neural patterns from EEG data, based on cutting-edge computational approaches.

    Language: Python
  • jihyeonseong/SAI-board-by-streamlit

    Cyber Security AI Dashboard

    Language: Jupyter Notebook
  • LuanAdemi/VisualGo

    Training a CNN to recognize the current Go position with photorealistic renders

    Language: Jupyter Notebook
  • esceptico/toxic

    End-to-end toxic Russian comment classification

    Language: Python
  • nicovandenhooff/indoor-scene-detector

    This repository contains the source code for Indoor Scene Detector, a full stack deep learning computer vision application.

    Language: Python
  • braindatalab/xai-tris

    XAI-Tris

    Language: Jupyter Notebook
  • dg1223/explainable-ai

    Model interpretability for Explainable Artificial Intelligence

    Language: Jupyter Notebook
  • speediedan/deep_classiflie

    Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a proof of concept, the initial alpha release generates and analyzes a model that continuously classifies a single individual's statements (Donald Trump's) using a single ground-truth labeling source (The Washington Post). For statements the model deems most likely to be labeled falsehoods, the @DeepClassiflie Twitter bot tweets out a statement analysis and model-interpretation "report".

    Language: Python
  • the-ahuja-lab/Odorify-webserver

    OdoriFy is an open-source tool with multiple prediction engines. This is the source code of the webserver.

    Language: Python
  • tsKenneth/interpretable-graph-classification

    Interpretable graph classifications using Graph Convolutional Neural Network

    Language: GLSL
  • FilippoMB/Tutorial_GNN_explainability

    This is an introduction to PyTorch Geometric, the deep learning library for Graph Neural Networks, and to interpretability tools for analyzing the decision process of a GNN.

    Language: Jupyter Notebook
  • js-yoo/xai_kimst2020

    Public release of the code and examples used in "Analysis and Trend of Attribution Methods for XAI".

    Language: Jupyter Notebook
  • k-forghani/pytorch-workshop

    PyTorch Beginner Workshop (Brad Heintz)

    Language: Jupyter Notebook
  • LennardZuendorf/thesis-files

    Collection of associated files for my bachelor thesis

    Language: Jupyter Notebook
  • speediedan/deep_classiflie_db

    Deep_classiflie_db is the backend data system for managing Deep Classiflie metadata, analyzing Deep Classiflie intermediate datasets, and orchestrating Deep Classiflie model training pipelines. It includes data scraping modules for the initial model data sources. Deep Classiflie depends on deep_classiflie_db for much of its analytical and dataset-generation functionality, but the data system is currently maintained as a separate repository here to maximize architectural flexibility. Depending on how Deep Classiflie evolves (e.g. as it supports distributed data stores), it may make more sense to integrate deep_classiflie_db back into deep_classiflie. Currently, deep_classiflie_db releases are synchronized to deep_classiflie releases. To learn more, visit deepclassiflie.org.

    Language: Jupyter Notebook
  • luispky/XAI-RAI-UniTS

    Repository with the project of the Explainable and Reliable Artificial Intelligence course at UniTS (2024-2025).

    Language: Python
  • manyue-zhang/Frontend-for-Captum

    Based on the paper "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)" and Captum's implementation of it (https://captum.ai/docs/captum_insights), this frontend for the Captum project was developed with the Streamlit framework.

  • NajdBinrabah/Deep-Learning-with-PyTorch-and-Captum

    This project classifies smoking images using VGG19 with data augmentation, and uses Captum for model explainability, identifying the key features behind each prediction.

    Language: Jupyter Notebook
  • R-N/covid-forecasting-joint-learning

    COVID-19 forecasting model for East Java cities using Joint Learning

    Language: Python
  • kotiyalanurag/Exploring-Data-Augmentation-Methods-through-Attribution

    Code for my Master Thesis titled "Exploring Data Augmentation Methods through Attribution".

    Language: Python
  • ProGamerGov/captum-tutorials

    Language: Jupyter Notebook
  • yuneg11/Interpretability-Metrics

    Interpretability Metrics

    Language: Python