adversarial-robustness

There are 93 repositories under the adversarial-robustness topic.

  • RobustBench/robustbench

    RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] (see the loading sketch after this list)

    Language: Python
  • fra31/auto-attack

    Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" (AutoAttack; see the usage sketch after this list)

    Language: Python
  • thu-ml/ares

    A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.

    Language: Python
  • alibaba/easyrobust

    EasyRobust: an easy-to-use library for state-of-the-art robust computer vision research with PyTorch.

    Language: Jupyter Notebook
  • Verified-Intelligence/alpha-beta-CROWN

    alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, 2023, 2024, 2025)

    Language: Python
  • max-andr/square-attack

    Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020] (see the sketch of the random-search idea after this list)

    Language: Python
  • LayneH/self-adaptive-training

    [TPAMI 2022 & NeurIPS 2020] Official implementation of Self-Adaptive Training

    Language: Python
  • VITA-Group/Aug-NeRF

    [CVPR 2022] "Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations" by Tianlong Chen*, Peihao Wang*, Zhiwen Fan, Zhangyang Wang

    Language: Python
  • imrahulr/adversarial_robustness_pytorch

    Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples" & "Fixing Data Augmentation to Improve Adversarial Robustness" in PyTorch

    Language: Python
  • microsoft/denoised-smoothing

    Provably defending pretrained classifiers, including the Azure, Google, AWS, and Clarifai APIs (denoised smoothing; see the sketch of the smoothing step after this list)

    Language: Jupyter Notebook
  • AI-secure/InfoBERT

    [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

    Language: Python
  • VITA-Group/Adv-SS-Pretraining

    [CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning

    Language: Python
  • Haichao-Zhang/FeatureScatter

    Feature Scattering Adversarial Training (NeurIPS19)

    Language: Python
  • cemanil/LNets

    Lipschitz Neural Networks described in "Sorting Out Lipschitz Function Approximation" (ICML 2019).

    Language: Python
  • zjysteven/DVERGE

    [NeurIPS'20 Oral] DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles

    Language: Python
  • jiequancui/DKL

    Decoupled Kullback-Leibler Divergence Loss (DKL), NeurIPS 2024 / Generalized Kullback-Leibler Divergence Loss (GKL)

    Language: Python
  • VITA-Group/Alleviate-Robust-Overfitting

    [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang

    Language: Python
  • zbh2047/L_inf-dist-net

    [ICML 2021] Official repository for training L_inf-dist nets with high certified accuracy.

    Language: Python
  • Harry24k/MAIR

    Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023]

    Language: Python
  • hxxdtd/Awesome-Diffusion-Model-Unlearning

    A repository of resources on machine unlearning for diffusion models

  • sayakpaul/par-cvpr-21

    Contains notebooks for the PAR tutorial at CVPR 2021.

    Language: Jupyter Notebook
  • XinyiYS/Robust-and-Fair-Federated-Learning

    Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".

    Language: Python
  • GATECH-EIC/Patch-Fool

    [ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, and Yingyan (Celine) Lin.

    Language: Python
  • fra31/fab-attack

    Code for the FAB attack (Fast Adaptive Boundary)

    Language: Python
  • imrahulr/hat

    Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off

    Language: Python
  • cdluminate/robrank

    Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024

    Language: Python
  • VITA-Group/triple-wins

    [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference"

    Language: Python
  • zbh2047/L_inf-dist-net-v2

    [ICLR 2022] Training L_inf-dist-net with faster acceleration and better training strategies

    Language: Cuda
  • zhichao-lu/robust-residual-network

    Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective

    Language: Python
  • fra31/robust-finetuning

    Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers"

    Language: Python
  • CN-TU/adversarial-recurrent-ids

    Explainability methods and adversarial robustness metrics for RNN-based intrusion detection systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl"). Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri.

    Language: TeX
  • GATECH-EIC/NeRFool

    [ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin

    Language: Python
  • sigeisler/reliable_gnn_via_robust_aggregation

    Official implementation of the paper "Reliable Graph Neural Networks via Robust Aggregation" (NeurIPS 2020).

    Language: Python
  • lafeat/lafeat

    LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral)

    Language: Python
  • alexklwong/targeted-adversarial-perturbations-monocular-depth

    PyTorch implementation of "Targeted Adversarial Perturbations for Monocular Depth Predictions" (NeurIPS 2020)

    Language: HTML
  • yangarbiter/interpretable-robust-trees

    Connecting Interpretability and Robustness in Decision Trees through Separation

    Language: Jupyter Notebook
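
The sketches below illustrate a few of the entries above; they are minimal examples under stated assumptions, not official code from those repositories.

For RobustBench/robustbench, the library exposes leaderboard models through `load_model`. The sketch assumes `robustbench` is installed via pip and uses 'Carmon2019Unlabeled' as an example identifier from the CIFAR-10 Linf leaderboard; weights are downloaded on first use.

```python
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Load a CIFAR-10 Linf leaderboard entry by its RobustBench identifier
# ('Carmon2019Unlabeled' is one example name from the leaderboard).
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf')
model.eval()

# Small clean-accuracy check on a handful of test points.
x_test, y_test = load_cifar10(n_examples=64)
with torch.no_grad():
    clean_acc = (model(x_test).argmax(dim=1) == y_test).float().mean().item()
print(f'clean accuracy on 64 points: {clean_acc:.2%}')
```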
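For fra31/auto-attack, the documented entry point is the `AutoAttack` class. The sketch reuses `model`, `x_test`, and `y_test` from the RobustBench example and assumes a classifier that takes inputs in [0, 1] and returns logits.

```python
import torch
from autoattack import AutoAttack

# Standard AutoAttack evaluation at the usual CIFAR-10 Linf budget of 8/255.
# device='cpu' is set only so the sketch runs without a GPU.
adversary = AutoAttack(model, norm='Linf', eps=8 / 255,
                       version='standard', device='cpu')
x_adv = adversary.run_standard_evaluation(x_test, y_test, bs=32)

# Robust accuracy on the crafted examples.
with torch.no_grad():
    robust_acc = (model(x_adv).argmax(dim=1) == y_test).float().mean().item()
print(f'robust accuracy on 64 points: {robust_acc:.2%}')
```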
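For max-andr/square-attack, the following is a deliberately simplified sketch of the random-search idea, not the repository's implementation: it proposes square-shaped Linf perturbations with a fixed square size and one square location per batch, and keeps a proposal only where the per-sample cross-entropy loss increases (the actual attack uses a margin loss, per-image squares, and a decaying square size). The model is assumed to be in eval mode and to return logits.

```python
import torch

@torch.no_grad()
def square_attack_linf_sketch(model, x, y, eps=8 / 255, n_iters=1000, p=0.05):
    """Simplified random-search Linf attack in the spirit of Square Attack."""
    b, c, h, w = x.shape
    loss_fn = torch.nn.CrossEntropyLoss(reduction='none')

    # Start from a random vertex of the Linf ball, clipped to the image range.
    x_adv = torch.clamp(x + eps * torch.sign(torch.randn_like(x)), 0.0, 1.0)
    best_loss = loss_fn(model(x_adv), y)

    for _ in range(n_iters):
        s = max(int(round((p * h * w) ** 0.5)), 1)        # square side length
        row = torch.randint(0, h - s + 1, (1,)).item()    # top-left row
        col = torch.randint(0, w - s + 1, (1,)).item()    # top-left column

        # Resample the square with a fresh +/- eps value per channel.
        x_new = x_adv.clone()
        delta = eps * torch.sign(torch.randn(b, c, 1, 1, device=x.device))
        x_new[:, :, row:row + s, col:col + s] = torch.clamp(
            x[:, :, row:row + s, col:col + s] + delta, 0.0, 1.0)

        # Accept the proposal only where it increases the loss.
        new_loss = loss_fn(model(x_new), y)
        improved = new_loss > best_loss
        x_adv[improved] = x_new[improved]
        best_loss = torch.max(best_loss, new_loss)

    return x_adv
```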
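For microsoft/denoised-smoothing, a generic sketch of the prediction step the repository builds on: add Gaussian noise, pass the noisy copies through a denoiser and then the pretrained classifier, and take a majority vote. The certification machinery of the actual repository is omitted, and `denoiser` and `classifier` are assumed to be user-supplied PyTorch modules.

```python
import torch

@torch.no_grad()
def denoised_smoothing_predict(denoiser, classifier, x, sigma=0.25,
                               n_samples=100, num_classes=10):
    """Majority-vote prediction for a single image x of shape (1, C, H, W)."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(n_samples):
        noisy = x + sigma * torch.randn_like(x)       # Gaussian corruption
        logits = classifier(denoiser(noisy))          # denoise, then classify
        counts[logits.argmax(dim=1).item()] += 1
    return counts.argmax().item()                     # smoothed prediction
```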