BROranger's Stars
changzhang777/ANCRA
Official code of ANCRA
JacksonWuxs/UsableXAI_LLM
Using Explanations as a Tool for Advanced LLMs
rmrisforbidden/Fooling_Neural_Network-Interpretations
This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our paper has been accepted to NeurIPS 2019.
fastai/imagenette
A smaller subset of 10 easily classified classes from Imagenet, and a little more French
harshays/inputgradients
Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781)
alps-lab/adv2
ADV2: Interpretable Deep Learning under Fire
AkhilanB/Proper-Interpretability
Codes for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", published at ICML 2020
Muzammal-Naseer/CDA
Official repository for "Cross-Domain Transferability of Adversarial Perturbations" (NeurIPS 2019)
openai/CLIP
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
BioroboticsLab/IBA
Information Bottlenecks for Attribution
thestephencasper/feature_level_adv
Demo code for the paper: One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features
marcotcr/lime
Lime: Explaining the predictions of any machine learning classifier
pytorch/vision
Datasets, Transforms and Models specific to Computer Vision
ivanpanshin/SupCon-Framework
Implementation of Supervised Contrastive Learning with AMP, EMA, SWA, and many other tricks
MadryLab/robustness
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
locuslab/fast_adversarial
[ICLR 2020] A repository for extremely fast adversarial training using FGSM
samzabdiel/XAI
Papers and code on Explainable AI, especially for image classification
RUCAIBox/RecSysDatasets
This is a repository of public data sources for Recommender Systems (RS).
xherdan76/A-Unified-Approach-to-Interpreting-and-Boosting-Adversarial-Transferability
A Unified Approach to Interpreting and Boosting Adversarial Transferability (ICLR2021)
alvinwan/neural-backed-decision-trees
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
ankurtaly/Integrated-Gradients
Attributing predictions made by the Inception network using the Integrated Gradients method
oneTaken/awesome_deep_learning_interpretability
Highly-cited and top-conference papers from recent years on the interpretability of deep neural network models (with code)
pytorch/captum
Model interpretability and understanding for PyTorch
XAITK/xaitk-saliency
As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm interfaces and implementations, built for analytics and autonomy applications.
Harry24k/adversarial-attacks-pytorch
PyTorch implementation of adversarial attacks [torchattacks]
zymaples/Sign
localhost02/SealUtil
Seal generation tool: uses Java Graphics2D to generate various circular/elliptical official seal and private seal images
utkuozbulak/pytorch-cnn-visualizations
Pytorch implementation of convolutional neural network visualization techniques
NVlabs/stylegan
StyleGAN - Official TensorFlow Implementation
junyanz/pytorch-CycleGAN-and-pix2pix
Image-to-Image Translation in PyTorch