Inspired by this repo and ML Writing Month. Questions and discussions are most welcome!
Lil-log is the best blog I have ever read!
- Adversarial Examples: Attacks and Defenses for Deep Learning, TNNLS 2019
- Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, IEEE Access 2018
- Adversarial Attacks and Defenses in Images, Graphs and Text: A Review, 2019
- A Study of Black Box Adversarial Attacks in Computer Vision, 2019
- Adversarial Examples in Modern Machine Learning: A Review, 2019
- Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey, 2020
- Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks, 2020
- Adversarial Attack and Defense in Reinforcement Learning - from AI Security View, 2019
- A Survey of Privacy Attacks in Machine Learning, 2020
- Learning from Noisy Labels with Deep Neural Networks: A Survey, 2020
- Optimization for Deep Learning: An Overview, 2020
- Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review, 2020
- Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective, 2020
- Efficient Transformers: A Survey, 2020
- The Limitations of Deep Learning in Adversarial Settings, EuroS&P
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks, CVPR
- Towards Evaluating the Robustness of Neural Networks (C&W), S&P
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, arXiv
- Adversarial Images for Variational Autoencoders, NIPS
- Delving into Transferable Adversarial Examples and Black-box Attacks, ICLR
- Universal Adversarial Perturbations, CVPR
- Adversarial Examples for Semantic Segmentation and Object Detection, ICCV
- Adversarial Examples that Fool Detectors, arXiv
- A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection, CVPR
- Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics, ICCV
- Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods, AISec
- Universal Adversarial Perturbations Against Semantic Image Segmentation, ICCV (UNIVERSAL)
- Generating Natural Adversarial Examples, ICLR
- Constructing Unrestricted Adversarial Examples with Generative Models, NeurIPS
- Generating Adversarial Examples with Adversarial Networks, IJCAI
- Generative Adversarial Perturbations, CVPR
- Learning to Attack: Adversarial Transformation Networks, AAAI
- Learning Universal Adversarial Perturbations with Generative Models, S&P
- Robust Physical-World Attacks on Deep Learning Visual Classification, CVPR
- Spatially Transformed Adversarial Examples, ICLR
- Boosting Adversarial Attacks with Momentum, CVPR
- Art of Singular Vectors and Universal Adversarial Perturbations, CVPR (UNIVERSAL)
- Adversarial Spheres, arXiv
- Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples, ICML
- Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation, ECCV
- Feature Space Perturbations Yield More Transferable Adversarial Examples, CVPR
- The Limitations of Adversarial Training and the Blind-Spot Attack, ICLR
- Are Adversarial Examples Inevitable?, ICLR 💭
- One Pixel Attack for Fooling Deep Neural Networks, IEEE TEC
- Generalizable Adversarial Attacks Using Generative Models, arXiv
- NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks, ICML 💭
- SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing, arXiv
- Rob-GAN: Generator, Discriminator, and Adversarial Attacker, CVPR
- Cycle-Consistent Adversarial GAN: The Integration of Adversarial Attack and Defense, arXiv
- Generating Realistic Unrestricted Adversarial Inputs using Dual-Objective GAN Training, arXiv 💭
- Sparse and Imperceivable Adversarial Attacks, ICCV 💭
- Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions, arXiv
- Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks, arXiv
- Transferable Adversarial Attacks for Image and Video Object Detection, IJCAI
- Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations, TPAMI
- Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses, CVPR
- FDA: Feature Disruptive Attack, ICCV
- SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations, arXiv
- SparseFool: A Few Pixels Make a Big Difference, CVPR
- Adversarial Attacks on Graph Neural Networks via Meta Learning, ICLR
- Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking, ICLR 💭
- Sponge Examples: Energy-Latency Attacks on Neural Networks, arXiv
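Many of the gradient-based attacks above (FGSM and its momentum and PGD variants, DeepFool, C&W) share one core move: perturb the input in a direction that increases the model's loss. A minimal NumPy sketch of the one-step FGSM idea on a toy binary logistic-regression model (the model, weights, and epsilon here are illustrative only, not taken from any listed paper):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM against a binary logistic-regression model.

    For cross-entropy loss L, dL/dx = (sigmoid(w.x + b) - y) * w,
    and FGSM takes a single step of size eps along sign(dL/dx).
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's probability of class 1
    grad_x = (p - y) * w                    # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)        # signed step that increases the loss

# Toy model: classifies by the sign of the first feature (illustrative only).
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 1.0])        # clean input, correctly classified as class 1
x_adv = fgsm(x, y=1.0, w=w, b=b, eps=0.5)
```

Iterating this signed step and projecting back into an epsilon-ball gives the multi-step PGD attack that most adversarial-training defenses are evaluated against.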
- Countering Adversarial Images using Input Transformations, ICLR
- SafetyNet: Detecting and Rejecting Adversarial Examples Robustly, ICCV
- Detecting Adversarial Samples from Artifacts, arXiv (Detection)
- On Detecting Adversarial Perturbations, ICLR (Detection) 💭
- Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models, ICLR
- Ensemble Adversarial Training: Attacks and Defenses, ICLR
- Defense Against Universal Adversarial Perturbations, CVPR
- Deflecting Adversarial Attacks with Pixel Deflection, CVPR
- Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning, TPAMI 💭
- Adversarial Logit Pairing, arXiv
- Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser, CVPR
- Evaluating and Understanding the Robustness of Adversarial Logit Pairing, arXiv
- Machine Learning with Membership Privacy Using Adversarial Regularization, CCS
- Adversarial Training and Robustness for Multiple Perturbations, NeurIPS
- Adversarial Robustness through Local Linearization, NeurIPS
- Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples, CVPR
- Feature Denoising for Improving Adversarial Robustness, CVPR
- A New Defense Against Adversarial Images: Turning a Weakness into a Strength, NeurIPS
- Interpreting Adversarially Trained Convolutional Neural Networks, ICML
- Robustness May Be at Odds with Accuracy, ICLR 💭
- Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss, IJCAI
- Adversarial Examples Are a Natural Consequence of Test Error in Noise, ICML 💭
- On the Connection Between Adversarial Robustness and Saliency Map Interpretability, ICML
- Metric Learning for Adversarial Robustness, NeurIPS
- Defending Adversarial Attacks by Correcting Logits, arXiv
- Adversarial Learning with Margin-Based Triplet Embedding Regularization, ICCV
- CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising, ICCV
- Adversarial Examples Are Not Bugs, They Are Features, NeurIPS
- Using Pre-Training Can Improve Model Robustness and Uncertainty, ICML
- Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training, NeurIPS 💭
- Improving Adversarial Robustness via Guided Complement Entropy, ICCV
- Robust Attribution Regularization, NeurIPS 💭
- Are Labels Required for Improving Adversarial Robustness?, NeurIPS
- Theoretically Principled Trade-off between Robustness and Accuracy, ICML
- Adversarial Defense by Stratified Convolutional Sparse Coding, CVPR
- Jacobian Adversarially Regularized Networks for Robustness, ICLR
- What it Thinks is Important is Important: Robustness Transfers through Input Gradients, CVPR
- Adversarially Robust Representations with Smooth Encoders, ICLR 💭
- Heat and Blur: An Effective and Fast Defense Against Adversarial Examples, arXiv
- Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference, ICLR
- Wavelet Integrated CNNs for Noise-Robust Image Classification, CVPR
- Deflecting Adversarial Attacks, arXiv
- Robust Local Features for Improving the Generalization of Adversarial Training, ICLR
- Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier, ICLR
- A Self-supervised Approach for Adversarial Robustness, CVPR
- Improving Adversarial Robustness Requires Revisiting Misclassified Examples, ICLR 👍
- Manifold Regularization for Adversarial Robustness, arXiv
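Most of the adversarial-training defenses listed above instantiate the same min-max recipe: an inner maximization finds a worst-case perturbation (typically PGD in an L-infinity ball), and an outer minimization updates the weights on those perturbed examples. A toy NumPy-only sketch on logistic regression (the data, step sizes, and epoch counts are made up for illustration, not values from any listed paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps, step, iters):
    """Inner maximization: PGD confined to an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(iters):
        grad_x = (sigmoid(x_adv @ w + b) - y) * w  # dL/dx for cross-entropy
        x_adv = x_adv + step * np.sign(grad_x)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project back into the ball
    return x_adv

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Outer minimization: SGD on the worst-case perturbed inputs."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            xa = pgd_linf(xi, yi, w, b, eps, step=eps / 2, iters=5)
            p = sigmoid(xa @ w + b)
            w = w - lr * (p - yi) * xa             # gradient step on the weights
            b = b - lr * (p - yi)
    return w, b

# Toy separable data: label 1 iff the first feature is positive (illustrative).
X = np.array([[0.5, 0.2], [0.8, -0.4], [-0.6, 0.3], [-0.9, -0.1]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w, b = adversarial_train(X, y)
```

Because the data margin exceeds eps, the trained model classifies every point correctly even under the inner PGD attack, which is exactly the robustness the min-max objective asks for.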
- CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training, ICCV 2017
- Autoencoding Beyond Pixels Using a Learned Similarity Metric, ICML 2016
- Natural Adversarial Examples, arXiv 2019
- Conditional Image Synthesis with Auxiliary Classifier GANs, ICML 2017
- SinGAN: Learning a Generative Model from a Single Natural Image, ICCV 2019
- Robust and Interpretable Blind Image Denoising via Bias-Free Convolutional Neural Networks, ICLR 2020
- Pay Attention to Features, Transfer Learn Faster CNNs, ICLR 2020
- On Robustness of Neural Ordinary Differential Equations, ICLR 2020
- Real Image Denoising with Feature Attention, ICCV 2019
- Multi-Scale Dense Networks for Resource Efficient Image Classification, ICLR 2018
- Rethinking Data Augmentation: Self-Supervision and Self-Distillation, arXiv 2019
- Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation, ICCV 2019
- Adversarially Robust Distillation, arXiv 2019
- Knowledge Distillation from Internal Representations, arXiv 2019
- Contrastive Representation Distillation, ICLR 2020 💭
- Faster Neural Networks Straight from JPEG, NeurIPS 2018
- A Closer Look at Double Backpropagation, arXiv 2019 💭
- Learning Deep Features for Discriminative Localization, CVPR 2016
- Noise2Self: Blind Denoising by Self-Supervision, ICML 2019
- Supervised Contrastive Learning, arXiv 2020
- High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks, CVPR 2020
- Counterfactual Fairness, NeurIPS 2017
- An Adversarial Approach for Explaining the Predictions of Deep Neural Networks, arXiv 2020
- Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation, CVPR 2014
- Spectral Normalization for Generative Adversarial Networks, ICLR 2018
- MetaGAN: An Adversarial Approach to Few-Shot Learning, NeurIPS 2018
- Breaking the Cycle -- Colleagues Are All You Need, arXiv 2019
- LOGAN: Latent Optimisation for Generative Adversarial Networks, arXiv 2019
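Several entries above (Adversarially Robust Distillation, Knowledge Distillation from Internal Representations, Contrastive Representation Distillation) build on the classic soft-target distillation loss: a temperature-softened cross-entropy against the teacher's outputs, mixed with the usual hard-label term. A NumPy sketch of that loss (the temperature and mixing weight are illustrative defaults, not values from the papers):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T                                # temperature-soften the logits
    z = z - z.max(axis=-1, keepdims=True)    # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation: T^2-scaled cross-entropy against the softened
    teacher distribution, mixed with the ordinary hard-label cross-entropy."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    soft = -(p_teacher * log_p_student).sum(axis=-1)   # soft-target term
    hard = -np.log(softmax(student_logits)[np.arange(len(labels)), labels])
    return (alpha * T * T * soft + (1 - alpha) * hard).mean()
```

The T^2 factor keeps the soft-target gradients on the same scale as the hard-label term as the temperature grows; a student whose logits match the teacher's incurs a strictly lower loss than one that disagrees.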