adversarial-robustness
There are 93 public repositories under the adversarial-robustness topic.
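Before the list: the object every repository below attacks, defends, or certifies is the adversarial example. As a minimal pure-Python toy (not taken from any listed repo), here is the classic fast gradient sign method (FGSM), x_adv = x + eps * sign(∇x loss), on a two-feature logistic classifier whose gradient is known analytically:

```python
import math

# Toy FGSM sketch on a hand-rolled logistic classifier; the weights,
# inputs, and eps below are illustrative values, not from any repo.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    # probability of class 1 under a logistic model
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    # for logistic loss, the input gradient is (p - y) * w
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.4, 0.1], 1           # clean point, true label 1
x_adv = fgsm(w, b, x, y, 0.3)  # L_inf perturbation of size 0.3
print(predict(w, b, x), predict(w, b, x_adv))
```

With these values the clean point is classified correctly (p > 0.5) and the eps = 0.3 perturbation flips the decision (p < 0.5), which is exactly the failure mode the defenses below target.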
RobustBench/robustbench
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
fra31/auto-attack
Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
thu-ml/ares
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
alibaba/easyrobust
EasyRobust: an easy-to-use library for state-of-the-art robust computer vision research with PyTorch.
Verified-Intelligence/alpha-beta-CROWN
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, 2024, 2025)
max-andr/square-attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
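A hedged sketch of the random-search idea the description names: propose a localized square-shaped perturbation, query the black box, and keep the proposal only if the loss improves. The stand-in `loss` model, the image size, and the lack of a projection back into the eps-ball are all simplifications of the real attack:

```python
import random

# Illustrative only: a greedy random search with square proposals.
# The "model" is a fake black-box loss that depends on the pixel sum.

def loss(img):
    return abs(sum(sum(row) for row in img))  # stand-in black-box score

def square_attack(img, eps=0.2, side=2, queries=200, seed=0):
    rng = random.Random(seed)
    n = len(img)
    best = [row[:] for row in img]
    best_loss = loss(best)
    for _ in range(queries):
        cand = [row[:] for row in best]
        i = rng.randrange(n - side + 1)       # top-left corner of the square
        j = rng.randrange(n - side + 1)
        delta = rng.choice([-eps, eps])       # +/-eps, as in the paper's proposals
        for r in range(i, i + side):
            for c in range(j, j + side):
                cand[r][c] += delta
        cand_loss = loss(cand)
        if cand_loss > best_loss:             # accept only improving queries
            best, best_loss = cand, cand_loss
    return best, best_loss

img = [[0.5] * 4 for _ in range(4)]
adv, l = square_attack(img)
```

Each iteration costs exactly one model query, which is the source of the attack's query efficiency; no gradients are ever needed.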
LayneH/self-adaptive-training
[TPAMI 2022 & NeurIPS 2020] Official implementation of Self-Adaptive Training
VITA-Group/Aug-NeRF
[CVPR 2022] "Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations" by Tianlong Chen*, Peihao Wang*, Zhiwen Fan, Zhangyang Wang
imrahulr/adversarial_robustness_pytorch
Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples" & "Fixing Data Augmentation to Improve Adversarial Robustness" in PyTorch
microsoft/denoised-smoothing
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
AI-secure/InfoBERT
[ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu
VITA-Group/Adv-SS-Pretraining
[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Haichao-Zhang/FeatureScatter
Feature Scattering Adversarial Training (NeurIPS 2019)
cemanil/LNets
Lipschitz Neural Networks described in "Sorting Out Lipschitz Function Approximation" (ICML 2019).
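The sorting-based activation that paper studies can be sketched in a few lines. The ascending within-group order used here is one convention; since sorting is a permutation of its inputs, the activation is 1-Lipschitz either way:

```python
def groupsort(x, group_size=2):
    """GroupSort activation (hedged sketch): split the vector into
    consecutive groups and sort within each group. A permutation of
    coordinates preserves gradient norm, unlike ReLU, which zeroes it."""
    assert len(x) % group_size == 0
    out = []
    for i in range(0, len(x), group_size):
        out.extend(sorted(x[i:i + group_size]))
    return out

# group_size=2 recovers the MaxMin unit: each pair becomes (min, max)
print(groupsort([3.0, -1.0, 0.5, 2.0]))  # [-1.0, 3.0, 0.5, 2.0]
```

Combined with norm-constrained weight matrices, such activations let the whole network be 1-Lipschitz by construction, which is the paper's route to expressive Lipschitz function approximation.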
zjysteven/DVERGE
[NeurIPS'20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
jiequancui/DKL
Decoupled Kullback-Leibler Divergence Loss (DKL), NeurIPS 2024 / Generalized Kullback-Leibler Divergence Loss (GKL)
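Background only, not the paper's decoupled formulation: the plain KL-divergence loss that such work starts from, KL(p || q) = sum_i p_i log(p_i / q_i) between two categorical distributions (e.g. clean and adversarial softmax outputs in TRADES-style training):

```python
import math

# Plain KL divergence between two categorical distributions; the
# example distributions are illustrative.

def kl_div(p, q):
    # terms with p_i = 0 contribute 0 by the usual convention
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]
print(kl_div(p, p))                        # 0.0: KL of a distribution with itself
print(kl_div(p, [1/3, 1/3, 1/3]) > 0.0)    # True: non-negativity (Gibbs' inequality)
```

The repo's contribution is a decomposition of this loss into decoupled terms that can be reweighted independently; see the paper for the exact form.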
VITA-Group/Alleviate-Robust-Overfitting
[ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang
zbh2047/L_inf-dist-net
[ICML 2021] Official GitHub repository for training L_inf-dist nets with high certified accuracy.
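The distinctive building block here, as described in the accompanying paper, is a neuron that computes an l_inf distance instead of a dot product; a hedged minimal sketch:

```python
def linf_dist_neuron(x, w, b=0.0):
    """One L_inf-distance unit (sketch): output ||x - w||_inf + b.
    The map x -> ||x - w||_inf is 1-Lipschitz w.r.t. the l_inf norm,
    so networks stacked from these units are 1-Lipschitz by
    construction, which is what makes certification cheap."""
    return max(abs(xi - wi) for xi, wi in zip(x, w)) + b

print(linf_dist_neuron([1.0, 4.0], [0.0, 2.0]))  # 2.0
```

Because every layer is 1-Lipschitz in l_inf, an input perturbation of size eps can move any unit's output by at most eps, giving certified bounds without a separate verifier.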
Harry24k/MAIR
Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023]
hxxdtd/Awesome-Diffusion-Model-Unlearning
A repository of resources on machine unlearning for diffusion models
sayakpaul/par-cvpr-21
Contains notebooks for the PAR tutorial at CVPR 2021.
XinyiYS/Robust-and-Fair-Federated-Learning
Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".
GATECH-EIC/Patch-Fool
[ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, and Yingyan (Celine) Lin.
fra31/fab-attack
Code for the FAB attack (Fast Adaptive Boundary attack)
imrahulr/hat
Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off
cdluminate/robrank
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
VITA-Group/triple-wins
[ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference"
zbh2047/L_inf-dist-net-v2
[ICLR 2022] Training L_inf-dist-net with faster acceleration and better training strategies
zhichao-lu/robust-residual-network
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective
fra31/robust-finetuning
Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers"
CN-TU/adversarial-recurrent-ids
Explainability methods and adversarial-robustness metrics for RNN-based intrusion detection systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl"). Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri.
GATECH-EIC/NeRFool
[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin
sigeisler/reliable_gnn_via_robust_aggregation
This repository contains the official implementation of the paper "Reliable Graph Neural Networks via Robust Aggregation" (NeurIPS, 2020).
lafeat/lafeat
LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral)
alexklwong/targeted-adversarial-perturbations-monocular-depth
PyTorch implementation of Targeted Adversarial Perturbations for Monocular Depth Predictions (NeurIPS 2020)
yangarbiter/interpretable-robust-trees
Connecting Interpretability and Robustness in Decision Trees through Separation