adversarial-defense
There are 98 repositories under the adversarial-defense topic.
thunlp/TAADpapers
Must-read Papers on Textual Adversarial Attack and Defense
Verified-Intelligence/auto_LiRPA
auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs
Hadisalman/smoothing-adversarial
Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers"
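Randomized smoothing, the technique behind this repository, classifies by majority vote of a base classifier under Gaussian input noise. The following is a minimal Monte Carlo sketch of that idea, not the repository's code; the names `smoothed_predict` and `base_classify` and the toy linear classifier are illustrative assumptions.

```python
import numpy as np

def smoothed_predict(base_classify, x, sigma, n, rng):
    # Monte Carlo estimate of the smoothed classifier
    # g(x) = argmax_c P[ base_classify(x + N(0, sigma^2 I)) = c ].
    counts = {}
    for _ in range(n):
        c = base_classify(x + rng.normal(0.0, sigma, size=x.shape))
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get), counts

# Hypothetical base classifier: a linear decision rule on 2-D inputs.
def base_classify(x):
    return int(x[0] + x[1] > 0)

rng = np.random.default_rng(0)
x = np.array([1.0, 1.0])  # lies well inside class 1
pred, counts = smoothed_predict(base_classify, x, sigma=0.5, n=1000, rng=rng)
print(pred, counts)
```

In the actual method (Cohen et al.), the vote proportions additionally yield a certified L2 robustness radius; this sketch shows only the prediction step.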
tao-bai/attack-and-defense-methods
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
chs20/RobustVLM
[ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
as791/Adversarial-Example-Attack-and-Defense
This repository contains implementations of three adversarial example attack methods (FGSM, I-FGSM, and MI-FGSM) and of defensive distillation as a defense against all three, evaluated on the MNIST dataset.
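FGSM, the simplest of the attacks listed above, perturbs the input by a fixed step in the direction of the sign of the loss gradient. Below is a minimal sketch on a binary logistic-regression model with an analytically derived gradient, not the repository's MNIST code; the function name `fgsm` and the toy weights are assumptions for illustration.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    # Fast Gradient Sign Method for a binary logistic-regression model.
    # The model predicts sigmoid(w @ x + b); y is the true label in {0, 1}.
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))  # predicted probability of class 1
    grad_x = (p - y) * w          # d(cross-entropy)/dx, derived analytically
    return x + eps * np.sign(grad_x)

# Toy example: a point classified correctly, then attacked.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # logit w @ x + b = 1.5 > 0 -> class 1
x_adv = fgsm(x, w, b, y=1.0, eps=0.9)
print(w @ x + b, w @ x_adv + b)  # the attack flips the sign of the logit
```

I-FGSM repeats this step with a smaller `eps` while clipping to the allowed perturbation ball, and MI-FGSM adds a momentum term to the accumulated gradient.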
nebula-beta/awesome-adversarial-deep-learning
A list of awesome resources on adversarial attack and defense methods in deep learning
huanzhang12/CROWN-IBP
Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTorch).
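IBP (interval bound propagation), one half of the CROWN-IBP approach above, pushes an axis-aligned input box through the network layer by layer to obtain sound output bounds. This is a minimal NumPy sketch of plain IBP on a two-layer ReLU network, under assumed toy weights; it is not the repository's (much tighter) CROWN implementation.

```python
import numpy as np

def ibp_linear(l, u, W, b):
    # Propagate the box [l, u] through the affine layer W x + b.
    mid = (u + l) / 2.0
    rad = (u - l) / 2.0
    mid_out = W @ mid + b
    rad_out = np.abs(W) @ rad  # worst-case spread of the box
    return mid_out - rad_out, mid_out + rad_out

def ibp_relu(l, u):
    # ReLU is monotone, so interval bounds map elementwise.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Tiny 2-layer network; bound its output for all inputs within eps of x0.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.zeros(2)
W2 = np.array([[1.0, 1.0]]);              b2 = np.zeros(1)
x0, eps = np.array([0.5, 0.5]), 0.1

l, u = ibp_linear(x0 - eps, x0 + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_linear(l, u, W2, b2)
print(l, u)  # sound lower/upper bounds on the output over the input box
```

If the lower bound of the true-class margin stays positive over the box, the prediction is certified robust for that perturbation budget; CROWN tightens these bounds with linear relaxations of the ReLUs.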
microsoft/denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
davide97l/rl-policies-attacks-defenses
Adversarial attacks on Deep Reinforcement Learning (RL)
AI-secure/InfoBERT
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
ForeverPs/Robust-Classification
CVPR 2022 Workshop Robust Classification
wunderwuzzi23/mlattacks
Machine Learning Attack Series
dongyp13/Adversarial-Distributional-Training
Adversarial Distributional Training (NeurIPS 2020)
lionelmessi6410/awesome-real-world-adversarial-examples
😎 A curated list of awesome real-world adversarial examples resources
wssun/TiSE-CodeLM-Security
This repository provides studies on the security of language models for code (CodeLMs).
sukrutrao/Adversarial-Patch-Training
Code for the paper: Adversarial Training Against Location-Optimized Adversarial Patches. ECCV-W 2020.
elliothe/CVPR_2019_PNI
pytorch implementation of Parametric Noise Injection for adversarial defense
dvlab-research/LBGAT
Learnable Boundary Guided Adversarial Training (ICCV2021)
cornell-zhang/GARNET
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
YonghaoXu/SACNet
[IEEE TIP 2021] Self-Attention Context Network: Addressing the Threat of Adversarial Attacks for Hyperspectral Image Classification
jh-jeong/smoothing-consistency
Code for the paper "Consistency Regularization for Certified Robustness of Smoothed Classifiers" (NeurIPS 2020)
wkim97/FSR
Feature Separation and Recalibration (CVPR 2023 Highlights)
Harry24k/catastrophic-overfitting
Understanding Catastrophic Overfitting in Single-step Adversarial Training [AAAI 2021]
SEC4SR/SEC4SR
Source Code for 'SECurity evaluation platform FOR Speaker Recognition' released in 'Defending against Audio Adversarial Examples on Speaker Recognition Systems'
cdluminate/advrank
Adversarial Ranking Attack and Defense, ECCV, 2020.
cdluminate/robrank
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
rshaojimmy/OSAD
[ECCV 2020] Pytorch codes for Open-set Adversarial Defense
jh-jeong/smoothmix
Code for the paper "SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness" (NeurIPS 2021)
cdluminate/robdml
Enhancing Adversarial Robustness for Deep Metric Learning, CVPR, 2022
sayakpaul/Denoised-Smoothing-TF
Minimal implementation of Denoised Smoothing (https://arxiv.org/abs/2003.01908) in TensorFlow.
tangxianfeng/PA-GNN
Implementation of paper "Transferring Robustness for Graph Neural Network Against Poisoning Attacks".
YudiDong/GAN-based-E2E-communications-system-for-defense-against-adversarial-attack
A Robust Adversarial Network-Based End-to-End Communications System With Strong Generalization Ability Against Adversarial Attacks
CEA-LIST/adv-reid
Metric Adversarial Attacks and Defense
khalooei/LSA
LSA: a Layer Sustainability Analysis framework for analyzing layer vulnerability in a given neural network. LSA is a toolkit for assessing deep neural networks and for extending adversarial training approaches to improve the sustainability of model layers via layer monitoring and analysis.
safreita1/unmask
Adversarial detection and defense for deep learning systems using robust feature alignment