PapersAboutAdversarialExamples

My notes on papers about adversarial examples for neural networks.


This repository focuses only on adversarial examples for vision neural networks.

Most papers from 2019 onward are listed.

Currently updated through CVPR 2022.

Surveys

After 2017

  • Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
  • Adversarial attacks and defenses against deep neural networks: A survey
  • Adversarial Examples: Attacks and Defenses for Deep Learning
  • Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
  • Adversarial Machine Learning in Image Classification: A Survey Towards the Defender’s Perspective
  • Adversarial Attacks and Defenses in Images, Graphs and Text: A Review

White-box attacks (including transfer scenarios)

2017 and earlier

  • Explaining and Harnessing Adversarial Examples (ICLR 2015) (introduces FGSM; see the sketch after this list)
  • Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images (CVPR 2015)
  • DeepFool: a simple and accurate method to fool deep neural networks (CVPR 2016)
  • The Limitations of Deep Learning in Adversarial Settings (EuroS&P 2016)
  • Adversarial Machine Learning at Scale (ICLR 2017)
  • Towards Evaluating the Robustness of Neural Networks (S&P 2017)
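
The first paper above introduces the fast gradient sign method (FGSM): take a single step of size eps in the direction of the sign of the input gradient of the loss. Below is a minimal sketch, assuming a pretrained PyTorch classifier `model`, a batch `x` of pixels in [0, 1], and labels `y`; the function name and the eps value are illustrative, not from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step L-infinity attack: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```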

After 2017

  • Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
  • Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers (ICCV 2019)
  • Physical Adversarial Textures That Fool Visual Object Tracking (ICCV 2019)
  • advPattern: Physical-World Attacks on Deep Person Re-Identification via Adversarially Transformable Patterns (ICCV 2019)
  • Exact Adversarial Attack to Image Captioning via Structured Output Learning with Latent Variables (CVPR 2019)
  • Adversarial Attacks Beyond the Image Space (CVPR 2019)
  • Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses (CVPR 2019)
  • Trust Region Based Adversarial Attack on Neural Networks (CVPR 2019)
  • Functional Adversarial Attacks (NIPS 2019)
  • Cross-Modal Learning with Adversarial Samples (NIPS 2019)
  • Adversarial camera stickers: A physical camera-based attack on deep learning systems (ICML 2019)
  • Wasserstein Adversarial Examples via Projected Sinkhorn Iterations (ICML 2019)
  • Adversarial T-Shirt! Evading Person Detectors in a Physical World (ECCV 2020)
  • AdvPC: Transferable Adversarial Perturbations on 3D Point Clouds (ECCV 2020)
  • Sparse Adversarial Attack via Perturbation Factorization (ECCV 2020)
  • Improving the Transferability of Adversarial Examples with Resized-Diverse-Inputs, Diversity-Ensemble and Region Fitting (ECCV 2020)
  • Backpropagating Linearly Improves Transferability of Adversarial Examples (NIPS 2020)
  • On Adaptive Attacks to Adversarial Example Defenses (NIPS 2020)
  • Targeted Adversarial Perturbations for Monocular Depth Prediction (NIPS 2020)
  • GreedyFool: Distortion-Aware Sparse Adversarial Attack (NIPS 2020)
  • Practical No-box Adversarial Attacks against DNNs (NIPS 2020)
  • Stronger and Faster Wasserstein Adversarial Attacks (ICML 2020)
  • Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack (ICML 2020)
  • Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking (ICLR 2020)
  • Unrestricted Adversarial Examples via Semantic Manipulation (ICLR 2020)
  • Learning Transferable Adversarial Perturbations (NIPS 2021)
  • Adversarial Attack Generation Empowered by Min-Max Optimization (NIPS 2021)
  • Distilling Robust and Non-Robust Features in Adversarial Examples by Information Bottleneck (NIPS 2021)
  • Adversarial Robustness with Non-uniform Perturbations (NIPS 2021)
  • Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints (NIPS 2021)
  • Mind the Box: l1-APGD for Sparse Adversarial Attacks on Image Classifiers (ICML 2021)
  • Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm (ICML 2021)
  • A Unified Approach to Interpreting and Boosting Adversarial Transferability (ICLR 2021)
  • Frequency-driven Imperceptible Adversarial Attack on Semantic Similarity (CVPR 2022)
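
Many of the attacks above iterate the one-step FGSM idea: repeat small signed gradient steps and project back into an eps-ball around the clean input, the PGD-style backbone that momentum, diverse-input, and Nesterov variants build on. A minimal L-infinity sketch under the same assumptions as the FGSM snippet; it is a generic template, not any single listed paper's method, and the step size and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative L-infinity attack with projection onto the eps-ball."""
    x = x.clone().detach()
    # Random start inside the eps-ball, as in PGD.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # gradient ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back to the ball
            x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels valid
    return x_adv.detach()
```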

Black-box attacks

2017 and earlier

  • Practical Black-Box Attacks against Machine Learning (AsiaCCS 2017) (substitute-model transfer attack; see the sketch after this list)
  • UPSET and ANGRI: Breaking High Performance Image Classifiers (arXiv preprint 2017)
  • Houdini: Fooling Deep Structured Prediction Models (arXiv preprint 2017)
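
The first paper above popularized the substitute-model transfer attack: label inputs by querying the victim, train a local surrogate on those labels, then run a white-box attack on the surrogate and transfer the result. A minimal sketch, assuming a callable `victim` that returns hard label indices, a tensor of query images `queries`, and the `fgsm` helper sketched in the white-box section; the surrogate architecture and training loop are illustrative placeholders, not the paper's Jacobian-based augmentation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def substitute_transfer_attack(victim, queries, eps=8 / 255, epochs=50):
    """Fit a surrogate to victim-provided labels, then attack the surrogate."""
    with torch.no_grad():
        labels = victim(queries)                  # query the black box once
    num_classes = int(labels.max().item()) + 1
    surrogate = nn.Sequential(                    # illustrative architecture
        nn.Flatten(),
        nn.Linear(queries[0].numel(), 256), nn.ReLU(),
        nn.Linear(256, num_classes))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):                       # imitate the victim locally
        opt.zero_grad()
        F.cross_entropy(surrogate(queries), labels).backward()
        opt.step()
    # White-box attack on the surrogate; success relies on transferability.
    return fgsm(surrogate, queries, labels, eps=eps)
```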

After 2017

  • Boosting Adversarial Attacks with Momentum
  • Transferable Perturbations of Deep Feature Distributions
  • Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks
  • On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method (ICCV 2019)
  • Universal Adversarial Perturbation via Prior Driven Uncertainty Approximation (ICCV 2019)
  • Sparse and Imperceivable Adversarial Attacks (ICCV 2019)
  • Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019)
  • Enhancing Adversarial Example Transferability With an Intermediate Level Attack (ICCV 2019)
  • The LogBarrier adversarial attack: making effective use of decision boundary information (ICCV 2019)
  • Guessing Smart: Biased Sampling for Efficient Black-Box Adversarial Attacks (ICCV 2019)
  • Targeted Mismatch Adversarial Attack: Query With a Flower to Retrieve the Tower (ICCV 2019)
  • Improving Transferability of Adversarial Examples with Input Diversity (CVPR 2019)
  • Feature Space Perturbations Yield More Transferable Adversarial Examples (CVPR 2019)
  • Efficient Decision-Based Black-Box Adversarial Attacks on Face Recognition (CVPR 2019)
  • Catastrophic Child’s Play: Easy to Perform, Hard to Defend Adversarial Attacks (CVPR 2019)
  • Improving Black-box Adversarial Attacks with a Transfer-based Prior (NIPS 2019)
  • Simple Black-box Adversarial Attacks (ICML 2019) (SimBA; see the sketch after this list)
  • NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks (ICML 2019)
  • Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization (ICML 2019)
  • Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors (ECCV 2020)
  • Regional Homogeneity: Towards Learning Transferable Universal Adversarial Perturbations Against Defenses (ECCV 2020)
  • Bias-based Universal Adversarial Patch Attack for Automatic Check-out (ECCV 2020)
  • SemanticAdv: Generating Adversarial Examples via Attribute-conditioned Image Editing (ECCV 2020)
  • Boosting Decision-based Black-box Adversarial Attacks with Random Sign Flip (ECCV 2020)
  • Design and Interpretation of Universal Adversarial Patches in Face Detection (ECCV 2020)
  • Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior (ECCV 2020)
  • Square Attack: a query-efficient black-box adversarial attack via random search (ECCV 2020)
  • Improving Query Efficiency of Black-box Adversarial Attack (ECCV 2020)
  • Efficient Adversarial Attacks for Visual Object Tracking (ECCV 2020)
  • AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows (NIPS 2020)
  • Black-Box Adversarial Attack with Transferable Model-based Embedding (ICLR 2020)
  • Sign-OPT: A Query-Efficient Hard-label Adversarial Attack (ICLR 2020)
  • BayesOpt Adversarial Attack (ICLR 2020)
  • Breaking Certified Defenses: Semantic Adversarial Examples with Spoofed Robustness Certificates (ICLR 2020)
  • IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking (CVPR 2021)
  • BASAR: Black-box Attack on Skeletal Action Recognition (CVPR 2021)
  • Enhancing the Transferability of Adversarial Attacks through Variance Tuning (CVPR 2021)
  • Adversarial Attacks on Black Box Video Classifiers: Leveraging the Power of Geometric Transformations (NIPS 2021)
  • Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks (NIPS 2021)
  • Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples (ICLR 2021)
  • LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition (ICLR 2021)
  • Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon (CVPR 2022)
  • Adversarial Texture for Fooling Person Detectors in the Physical World (CVPR 2022)
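
Several of the query-based papers above share one loop: propose a small perturbation, query the model's scores, and keep the proposal only if the true-class probability drops. A minimal SimBA-style sketch in the pixel basis, assuming score access to `model` logits for a single image `x` with a batch dimension and integer label `y`; eps and the query budget are illustrative.

```python
import torch

def simba_pixel(model, x, y, eps=0.2, max_queries=1000):
    """Score-based black-box attack: perturb one random pixel coordinate at a
    time, keeping the +/-eps step that lowers the true-class probability."""
    x_adv = x.clone().detach()
    with torch.no_grad():
        prob = torch.softmax(model(x_adv), dim=1)[0, y]
        perm = torch.randperm(x_adv.numel())          # random coordinate order
        for i in range(min(max_queries, x_adv.numel())):
            delta = torch.zeros_like(x_adv).view(-1)
            delta[perm[i]] = eps
            delta = delta.view_as(x_adv)
            for sign in (1.0, -1.0):                  # try +eps, then -eps
                cand = (x_adv + sign * delta).clamp(0.0, 1.0)
                p = torch.softmax(model(cand), dim=1)[0, y]
                if p < prob:                          # keep improving steps
                    x_adv, prob = cand, p
                    break
    return x_adv
```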

Defenses and robustness

2017 and earlier

  • Intriguing properties of neural networks (ICLR 2014)
  • Adversarial Machine Learning at Scale (ICLR 2017)

After 2017

  • Ensemble Adversarial Training: Attacks and Defenses
  • Boosting Adversarial Attacks with Momentum
  • Mitigating Adversarial Effects Through Randomization
  • Thermometer Encoding: One Hot Way To Resist Adversarial Examples
  • Countering Adversarial Images using Input Transformations
  • Stochastic activation pruning for robust adversarial defense
  • PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples
  • Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
  • Towards Deep Learning Models Resistant to Adversarial Attacks (ICLR 2018) (PGD adversarial training; see the sketch after this list)
  • Adversarial Robustness vs. Model Compression, or Both? (ICCV 2019)
  • Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks (ICCV 2019)
  • Towards Adversarially Robust Object Detection (ICCV 2019)
  • Adversarial Defense via Learning to Generate Diverse Attacks (ICCV 2019)
  • Hilbert-Based Generative Defense for Adversarial Examples (ICCV 2019)
  • Improving Adversarial Robustness via Guided Complement Entropy (ICCV 2019)
  • Defending Against Universal Perturbations With Shared Adversarial Training (ICCV 2019)
  • Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks (ICCV 2019)
  • CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising (ICCV 2019)
  • Feature Denoising for Improving Adversarial Robustness (CVPR 2019)
  • Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attack (CVPR 2019)
  • Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness Against Adversarial Attack (CVPR 2019)
  • Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples (CVPR 2019)
  • Searching for a Robust Neural Architecture in Four GPU Hours (CVPR 2019)
  • What Does It Mean to Learn in Deep Networks? And, How Does One Detect Adversarial Attacks? (CVPR 2019)
  • Detection Based Defense Against Adversarial Examples From the Steganalysis Point of View (CVPR 2019)
  • Adversarial Defense Through Network Profiling Based Path Extraction (CVPR 2019)
  • ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples (CVPR 2019)
  • Curls & Whey: Boosting Black-Box Adversarial Attacks (CVPR 2019)
  • Barrage of Random Transforms for Adversarially Robust Defense (CVPR 2019)
  • Disentangling Adversarial Robustness and Generalization (CVPR 2019)
  • ShieldNets: Defending Against Adversarial Attacks Using Probabilistic Adversarial Robustness (CVPR 2019)
  • Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search (CVPR 2019)
  • Adversarial Defense by Stratified Convolutional Sparse Coding (CVPR 2019)
  • Robustness of 3D Deep Learning in an Adversarial Setting (CVPR 2019)
  • Defending Against Adversarial Attacks by Randomized Diversification (CVPR 2019)
  • Lower Bounds on Adversarial Robustness from Optimal Transport (NIPS 2019)
  • Adversarial Robustness through Local Linearization (NIPS 2019)
  • On Robustness to Adversarial Examples and Polynomial Optimization (NIPS 2019)
  • Model Compression with Adversarial Robustness: A Unified Optimization Framework (NIPS 2019)
  • Unlabeled Data Improves Adversarial Robustness (NIPS 2019)
  • Theoretical evidence for adversarial robustness through randomization (NIPS 2019)
  • Provably robust boosted decision stumps and trees against adversarial attacks (NIPS 2019)
  • Robustness to Adversarial Perturbations in Learning from Incomplete Data (NIPS 2019)
  • Reverse KL-Divergence Training of Prior Networks: Improved Uncertainty and Adversarial Robustness (NIPS 2019)
  • Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes (NIPS 2019)
  • Are Labels Required for Improving Adversarial Robustness? (NIPS 2019)
  • Metric Learning for Adversarial Robustness (NIPS 2019)
  • A New Defense Against Adversarial Images: Turning a Weakness into a Strength (NIPS 2019)
  • Error Correcting Output Codes Improve Probability Estimation and Adversarial Robustness of Deep Neural Networks (NIPS 2019)
  • Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training (NIPS 2019)
  • Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers (NIPS 2019)
  • Adversarial examples from computational constraints (ICML 2019)
  • Certified Adversarial Robustness via Randomized Smoothing (ICML 2019)
  • Generalized No Free Lunch Theorem for Adversarial Robustness (ICML 2019)
  • On the Connection Between Adversarial Robustness and Saliency Map Interpretability (ICML 2019)
  • Adversarial Examples Are a Natural Consequence of Test Error in Noise (ICML 2019)
  • Are Generative Classifiers More Robust to Adversarial Attacks? (ICML 2019)
  • On Certifying Non-Uniform Bounds against Adversarial Attacks (ICML 2019)
  • Improving Adversarial Robustness via Promoting Ensemble Diversity (ICML 2019)
  • The Odds are Odd: A Statistical Test for Detecting Adversarial Examples (ICML 2019)
  • First-Order Adversarial Vulnerability of Neural Networks and Input Dimension (ICML 2019)
  • ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation (ICML 2019)
  • Rademacher Complexity for Adversarially Robust Generalization (ICML 2019)
  • Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory (CVPR 2020)
  • Multitask Learning Strengthens Adversarial Robustness (ECCV 2020)
  • Towards Automated Testing and Robustification by Semantic Adversarial Data Generation (ECCV 2020)
  • Robust Tracking against Adversarial Attacks (ECCV 2020)
  • Connecting the Dots: Detecting Adversarial Perturbations Using Context Inconsistency (ECCV 2020)
  • Adversarial Robustness on In- and Out-Distribution Improves Explainability (ECCV 2020)
  • Improving Adversarial Robustness by Enforcing Local and Global Compactness (ECCV 2020)
  • Defense Against Adversarial Attacks via Controlling Gradient Leaking on Embedded Manifolds (ECCV 2020)
  • Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations (ECCV 2020)
  • Manifold Projection for Adversarial Defense on Face Recognition (ECCV 2020)
  • Adversarial Robustness of Supervised Sparse Coding (NIPS 2020)
  • Biologically Inspired Mechanisms for Adversarial Robustness (NIPS 2020)
  • Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks (NIPS 2020)
  • On the Tightness of Semidefinite Relaxations for Certifying Robustness to Adversarial Examples (NIPS 2020)
  • Adversarial robustness via robust low rank representations (NIPS 2020)
  • Fast Adversarial Robustness Certification of Nearest Prototype Classifiers for Arbitrary Seminorms (NIPS 2020)
  • Contrastive Learning with Adversarial Examples (NIPS 2020)
  • Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NIPS 2020)
  • Black-box Certification and Learning under Adversarial Perturbations (ICML 2020)
  • Proper Network Interpretability Helps Adversarial Robustness in Classification (ICML 2020)
  • Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks (ICML 2020)
  • Sharp Statistical Guarantees for Adversarially Robust Gaussian Classification (ICML 2020)
  • Hierarchical Verification for Adversarial Robustness (ICML 2020)
  • Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability (ICML 2020)
  • Adversarial Robustness Against the Union of Multiple Perturbation Models (ICML 2020)
  • Efficiently Learning Adversarially Robust Halfspaces with Noise (ICML 2020)
  • Randomization matters How to defend against strong adversarial attacks (ICML 2020)
  • Second-Order Provable Defenses against Adversarial Attacks (ICML 2020)
  • Overfitting in adversarially robust deep learning (ICML 2020)
  • Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations (ICML 2020)
  • Neural Network Control Policy Verification With Persistent Adversarial Perturbation (ICML 2020)
  • Towards Understanding the Regularization of Adversarial Robustness on Neural Networks (ICML 2020)
  • Adversarial Robustness via Runtime Masking and Cleansing (ICML 2020)
  • Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier (ICLR 2020)
  • Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks (ICLR 2020)
  • Rethinking Softmax Cross-Entropy Loss for Adversarial Robustness (ICLR 2020)
  • Improving Adversarial Robustness Requires Revisiting Misclassified Examples (ICLR 2020)
  • Detecting and Diagnosing Adversarial Images with Class-Conditional Capsule Reconstructions (ICLR 2020)
  • GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification (ICLR 2020)
  • Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing (ICLR 2020)
  • Adversarially Robust Representations with Smooth Encoders (ICLR 2020)
  • Adversarially robust transfer learning (ICLR 2020)
  • Biologically inspired sleep algorithm for increased generalization and adversarial robustness in deep neural networks (ICLR 2020)
  • Jacobian Adversarially Regularized Networks for Robustness (ICLR 2020)
  • Certified Defenses for Adversarial Patches (ICLR 2020)
  • Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks (ICLR 2020)
  • Provable robustness against all adversarial lp-perturbations for p≥1 (ICLR 2020)
  • EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks (ICLR 2020)
  • Anti-Adversarially Manipulated Attributions for Weakly and Semi-Supervised Semantic Segmentation (CVPR 2021)
  • When Human Pose Estimation Meets Robustness: Adversarial Algorithms and Benchmarks (CVPR 2021)
  • Understanding the Robustness of Skeleton-based Action Recognition under Adversarial Attack (CVPR 2021)
  • LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021)
  • LiBRe: A Practical Bayesian Approach to Adversarial Detection (CVPR 2021)
  • Zero-shot Adversarial Quantization (CVPR 2021)
  • VideoMoCo: Contrastive Video Representation Learning with Temporally Adversarial Examples (CVPR 2021)
  • Adversarial Robustness under Long-Tailed Distribution (CVPR 2021)
  • Renofeation: A Simple Transfer Learning Method for Improved Adversarial Robustness (CVPR 2021)
  • AugMax: Adversarial Composition of Random Augmentations for Robust Training (NIPS 2021)
  • Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness (NIPS 2021)
  • Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples (NIPS 2021)
  • Shift Invariance Can Reduce Adversarial Robustness (NIPS 2021)
  • Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (NIPS 2021)
  • Adversarial Robustness with Semi-Infinite Constrained Learning (NIPS 2021)
  • Do Wider Neural Networks Really Help Adversarial Robustness? (NIPS 2021)
  • Adversarial Examples in Multi-Layer Random ReLU Networks (NIPS 2021)
  • Generalized Depthwise-Separable Convolutions for Adversarially Robust and Efficient Neural Networks (NIPS 2021)
  • Stable Neural ODE with Lyapunov-Stable Equilibrium Points for Defending Against Adversarial Attacks (NIPS 2021)
  • Exponential Separation between Two Learning Models and Adversarial Robustness (NIPS 2021)
  • When does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? (NIPS 2021)
  • Automated Discovery of Adaptive Attacks on Adversarial Defenses (NIPS 2021)
  • Can we have it all? On the Trade-off between Spatial and Adversarial Robustness of Neural Networks (NIPS 2021)
  • Encoding Robustness to Image Style via Adversarial Feature Perturbations (NIPS 2021)
  • Adversarially robust learning for security-constrained optimal power flow (NIPS 2021)
  • Neural Architecture Dilation for Adversarial Robustness (NIPS 2021)
  • Clustering Effect of Adversarial Robust Models (NIPS 2021)
  • CARTL: Cooperative Adversarially-Robust Transfer Learning (ICML 2021)
  • SPADE: A Spectral Method for Black-Box Adversarial Robustness Evaluation (ICML 2021)
  • Adversarial Robustness Guarantees for Random Deep Neural Networks (ICML 2021)
  • Learning Diverse-Structured Networks for Adversarial Robustness (ICML 2021)
  • Weight-covariance alignment for adversarially robust neural networks (ICML 2021)
  • Maximum Mean Discrepancy Test is Aware of Adversarial Attacks (ICML 2021)
  • Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial Attacks (ICML 2021)
  • CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection (ICML 2021)
  • Towards Defending against Adversarial Examples via Attack-Invariant Features (ICML 2021)
  • Improving Adversarial Robustness via Channel-wise Activation Suppressing (ICLR 2021)
  • Improving VAEs' Robustness to Adversarial Attack (ICLR 2021)
  • Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds (ICLR 2021)
  • Perceptual Adversarial Robustness: Defense Against Unseen Threat Models (ICLR 2021)
  • Self-supervised Adversarial Robustness for the Low-label, High-data Regime (ICLR 2021)
  • Provably robust classification of adversarial examples with detection (ICLR 2021)
  • Stochastic Security: Adversarial Defense Using Long-Run Dynamics of Energy-Based Models (ICLR 2021)
  • ARMOURED: Adversarially Robust MOdels using Unlabeled data by REgularizing Diversity (ICLR 2021)
  • Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input (CVPR 2022)
  • Enhancing Adversarial Training with Second-Order Statistics of Weights (CVPR 2022)
  • Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack (CVPR 2022)
  • On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles (CVPR 2022)
  • Enhancing Adversarial Robustness for Deep Metric Learning (CVPR 2022)
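
The baseline that much of this section measures itself against is PGD adversarial training from Towards Deep Learning Models Resistant to Adversarial Attacks: solve the inner maximization with PGD, then take an outer gradient step on the adversarial batch. A minimal sketch of one training step, reusing the `pgd_linf` helper from the white-box section; the optimizer handling and eps are illustrative.

```python
import torch.nn.functional as F

def adversarial_training_step(model, opt, x, y, eps=8 / 255):
    """One min-max step: maximize the loss inside the eps-ball, then
    minimize the loss on the adversarial examples that were found."""
    model.eval()                          # inner maximization on fixed weights
    x_adv = pgd_linf(model, x, y, eps=eps)
    model.train()                         # outer minimization
    opt.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()
```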

Others

  • Detecting Overfitting via Adversarial Examples (NIPS 2019)
  • On Relating Explanations and Adversarial Examples (NIPS 2019)
  • Cross-Domain Transferability of Adversarial Perturbations (NIPS 2019)
  • Adversarial Examples Are Not Bugs, They Are Features (NIPS 2019)
  • A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses (NIPS 2020)
  • Most ReLU Networks Suffer from ℓ2 Adversarial Perturbations (NIPS 2020)
  • On the Trade-off between Adversarial and Backdoor Robustness (NIPS 2020)
  • HYDRA: Pruning Adversarially Robust Neural Networks (NIPS 2020)
  • APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection (ECCV 2020)
  • Towards a Unified Game-Theoretic View of Adversarial Perturbations and Robustness (NIPS 2021)
  • A PAC-Bayes Analysis of Adversarial Robustness (NIPS 2021)
  • Adversarial Examples Make Strong Poisons (NIPS 2021)
  • Query Complexity of Adversarial Attacks (ICML 2021)
  • Mixed Nash Equilibria in the Adversarial Examples Game (ICML 2021)