captum
There are 23 repositories under the captum topic.
cdpierse/transformers-interpret
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
inseq-team/inseq
Interpretability for sequence generation models 🐛 🔍
DFKI-NLP/thermostat
Collection of NLP model explanations and accompanying analysis tools
copenlu/ALPS_2021
XAI Tutorial for the Explainable AI track in the ALPS winter school 2021
TannerGilbert/Model-Interpretation
Overview of different model interpretability libraries.
robinvanschaik/interpret-flair
A small repository to test Captum Explainable AI with a trained Flair transformers-based text classifier.
richouzo/hate-speech-detection-survey
Trained neural networks (LSTM, HybridCNN/LSTM, PyramidCNN, Transformers, etc.) and their comparison on the task of hate speech detection on the OLID dataset (tweets).
LuanAdemi/VisualGo
Training a CNN to recognize the current Go position from photorealistic renders
esceptico/toxic
End-to-end toxic Russian comment classification
nicovandenhooff/indoor-scene-detector
This repository contains the source code for Indoor Scene Detector, a full stack deep learning computer vision application.
tsKenneth/interpretable-graph-classification
Interpretable graph classification using graph convolutional neural networks
speediedan/deep_classiflie
Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a proof of concept, the initial alpha release of Deep Classiflie generates and analyzes a model that continuously classifies a single individual's statements (Donald Trump's) using a single ground-truth labeling source (The Washington Post). For statements the model deems most likely to be labeled falsehoods, the @DeepClassiflie Twitter bot tweets out a statement analysis and model interpretation "report".
the-ahuja-lab/Odorify-webserver
OdoriFy is an open-source tool with multiple prediction engines. This is the source code of the webserver.
braindatalab/xai-tris
XAI-Tris
jihyeonseong/SAI-board-by-streamlit
Cyber Security AI Dashboard
dg1223/explainable-ai
Model interpretability for Explainable Artificial Intelligence
js-yoo/xai_kimst2020
"XAI를 위한 Attribution Method 접근법 분석 및 동향 Analysis and Trend of Attribution Methods for XAI" 에서 사용한 코드와 예시를 공개
speediedan/deep_classiflie_db
Deep_classiflie_db is the backend data system for managing Deep Classiflie metadata, analyzing Deep Classiflie intermediate datasets and orchestrating Deep Classiflie model training pipelines. Deep_classiflie_db includes data scraping modules for the initial model data sources. Deep Classiflie depends upon deep_classiflie_db for much of its analytical and dataset generation functionality but the data system is currently maintained as a separate repository here to maximize architectural flexibility. Depending on how Deep Classiflie evolves (e.g. as it supports distributed data stores etc.), it may make more sense to integrate deep_classiflie_db back into deep_classiflie. Currently, deep_classiflie_db releases are synchronized to deep_classiflie releases. To learn more, visit deepclassiflie.org.
manyue-zhang/Frontend-for-Captum
Based on the paper "Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)" and Captum's implementation of it (https://captum.ai/docs/captum_insights), we developed this frontend for the Captum project using the Streamlit framework.
R-N/covid-forecasting-joint-learning
COVID-19 forecasting model for East Java cities using Joint Learning. My undergrad thesis.
LennardZuendorf/thesis-files
Collection of associated files for my bachelor thesis
yuneg11/Interpretability-Metrics
Interpretability Metrics