attack-and-defense-methods

A curated list of papers on adversarial machine learning (adversarial examples and defense methods).


About

Inspired by this repo and ML Writing Month. Questions and discussions are most welcome!

Lil-log is the best blog I have ever read!

Papers

Survey

  1. TNNLS 2019 Adversarial Examples: Attacks and Defenses for Deep Learning
  2. IEEE ACCESS 2018 Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
  3. 2019 Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
  4. 2019 A Study of Black Box Adversarial Attacks in Computer Vision
  5. 2019 Adversarial Examples in Modern Machine Learning: A Review
  6. 2020 Opportunities and Challenges in Deep Learning Adversarial Robustness: A Survey
  7. TPAMI 2021 Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks
  8. 2019 Adversarial attack and defense in reinforcement learning-from AI security view
  9. 2020 A Survey of Privacy Attacks in Machine Learning
  10. 2020 Learning from Noisy Labels with Deep Neural Networks: A Survey
  11. 2020 Optimization for Deep Learning: An Overview
  12. 2020 Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
  13. 2020 Adversarial Machine Learning in Image Classification: A Survey Towards the Defender's Perspective
  14. 2020 Efficient Transformers: A Survey
  15. 2019 A Survey of Black-Box Adversarial Attacks on Computer Vision Models
  16. 2020 Backdoor Learning: A Survey
  17. 2020 Transformers in Vision: A Survey
  18. 2020 A Survey on Neural Network Interpretability
  19. 2020 Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
  20. 2021 Recent Advances in Adversarial Training for Adversarial Robustness (Our work, accepted by IJCAI 2021)
  21. 2021 Explainable Artificial Intelligence Approaches: A Survey
  22. 2021 A Survey on Understanding, Visualizations, and Explanation of Deep Neural Networks
  23. 2020 A survey on Semi-, Self- and Unsupervised Learning for Image Classification
  24. 2021 Model Complexity of Deep Learning: A Survey
  25. 2021 Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models
  26. 2019 Advances and Open Problems in Federated Learning
  27. 2021 Countering Malicious DeepFakes: Survey, Battleground, and Horizon

Attack

2013

  1. ICLR Evasion Attacks against Machine Learning at Test Time

2014

  1. ICLR Intriguing properties of neural networks
  2. ARXIV [Identifying and attacking the saddle point problem in high-dimensional non-convex optimization]

2015

  1. ICLR Explaining and Harnessing Adversarial Examples (introduces FGSM; a minimal sketch follows below)
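
The FGSM paper above is the canonical starting point for gradient-based attacks, so a code sketch may help anchor the rest of this list. This is a minimal, untested PyTorch sketch, assuming a classifier `model` and image batches `x` scaled to [0, 1]; the `eps` default is just a common CIFAR-10 choice:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x by eps in the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Ascend the loss, then clamp back to the valid image range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```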

2016

  1. EuroS&P The limitations of deep learning in adversarial settings
  2. CVPR DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
  3. S&P Towards Evaluating the Robustness of Neural Networks (C&W attack)
  4. ARXIV Transferability in machine learning: from phenomena to black-box attacks using adversarial samples
  5. NIPS [Adversarial Images for Variational Autoencoders]
  6. ARXIV [A boundary tilting persepective on the phenomenon of adversarial examples]
  7. ARXIV [Adversarial examples in the physical world]

2017

  1. ICLR Delving into Transferable Adversarial Examples and Black-box Attacks
  2. CVPR Universal Adversarial Perturbations
  3. ICCV Adversarial Examples for Semantic Segmentation and Object Detection
  4. ARXIV Adversarial Examples that Fool Detectors
  5. CVPR A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection
  6. ICCV Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
  7. AISec [Adversarial examples are not easily detected: Bypassing ten detection methods]
  8. ICCV UNIVERSAL [Universal Adversarial Perturbations Against Semantic Image Segmentation]
  9. ICLR [Adversarial Machine Learning at Scale]
  10. ARXIV [The space of transferable adversarial examples]
  11. ARXIV [Adversarial attacks on neural network policies]

2018

  1. ICLR Generating Natural Adversarial Examples
  2. NeurIPS Constructing Unrestricted Adversarial Examples with Generative Models
  3. IJCAI Generating Adversarial Examples with Adversarial Networks
  4. CVPR Generative Adversarial Perturbations
  5. AAAI Learning to Attack: Adversarial transformation networks
  6. S&P Learning Universal Adversarial Perturbations with Generative Models
  7. CVPR Robust physical-world attacks on deep learning visual classification
  8. ICLR Spatially Transformed Adversarial Examples
  9. CVPR Boosting Adversarial Attacks With Momentum (MI-FGSM; see the sketch after this list)
  10. ICML Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples 👍
  11. CVPR UNIVERSAL [Art of Singular Vectors and Universal Adversarial Perturbations]
  12. ARXIV [Adversarial Spheres]
  13. ECCV [Characterizing adversarial examples based on spatial consistency information for semantic segmentation]
  14. ARXIV [Generating natural language adversarial examples]
  15. S&P [Audio adversarial examples: Targeted attacks on speech-to-text]
  16. ARXIV [Adversarial attack on graph structured data]
  17. ARXIV [Maximal Jacobian-based Saliency Map Attack (a variant of JSMA)]
  18. S&P [Exploiting Unintended Feature Leakage in Collaborative Learning]
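
As referenced at item 9, MI-FGSM stabilizes iterative attacks by accumulating a momentum of normalized gradients, which also improves transferability. A hedged, untested sketch follows; `model`, the NCHW shape assumption, and the hyperparameter defaults are illustrative, not taken from the paper's code:

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Momentum iterative FGSM: accumulate normalized gradients so the
    update direction does not oscillate between steps."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize by the mean absolute value (proportional to the L1 norm).
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv.detach()
```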

2019

  1. CVPR Feature Space Perturbations Yield More Transferable Adversarial Examples
  2. ICLR The Limitations of Adversarial Training and the Blind-Spot Attack
  3. ICLR Are adversarial examples inevitable? 💭
  4. IEEE TEC One pixel attack for fooling deep neural networks
  5. ARXIV Generalizable Adversarial Attacks Using Generative Models
  6. ICML NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks 💭
  7. ARXIV SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
  8. CVPR Rob-GAN: Generator, Discriminator, and Adversarial Attacker
  9. ARXIV Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense
  10. ARXIV Generating Realistic Unrestricted Adversarial Inputs using Dual-Objective GAN Training 💭
  11. ICCV Sparse and Imperceivable Adversarial Attacks 💭
  12. ARXIV Perturbations are not Enough: Generating Adversarial Examples with Spatial Distortions
  13. ARXIV Joint Adversarial Training: Incorporating both Spatial and Pixel Attacks
  14. IJCAI Transferable Adversarial Attacks for Image and Video Object Detection
  15. TPAMI Generalizable Data-Free Objective for Crafting Universal Adversarial Perturbations
  16. CVPR Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
  17. CVPR [FDA: Feature Disruptive Attack]
  18. ARXIV [SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations]
  19. CVPR [SparseFool: a few pixels make a big difference]
  20. ICLR [Adversarial Attacks on Graph Neural Networks via Meta Learning]
  21. NeurIPS [Deep Leakage from Gradients]
  22. CCS [Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning]
  23. ICCV [Universal Perturbation Attack Against Image Retrieval]
  24. ICCV [Enhancing Adversarial Example Transferability with an Intermediate Level Attack]
  25. CVPR [Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks]
  26. ICLR [ADef: an Iterative Algorithm to Construct Adversarial Deformations]
  27. NeurIPS [iDLG: Improved deep leakage from gradients]
  28. ARXIV [Reversible Adversarial Attack based on Reversible Image Transformation]

2020

  1. ICLR Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking 💭
  2. ARXIV [Sponge Examples: Energy-Latency Attacks on Neural Networks]
  3. ICML [Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack]
  4. ICML [Stronger and Faster Wasserstein Adversarial Attacks]
  5. CVPR [QEBA: Query-Efficient Boundary-Based Blackbox Attack]
  6. ECCV [New Threats Against Object Detector with Non-local Block]
  7. ARXIV [Towards Imperceptible Universal Attacks on Texture Recognition]
  8. ECCV [Frequency-Tuned Universal Adversarial Attacks]
  9. AAAI [Learning Transferable Adversarial Examples via Ghost Networks]
  10. ECCV [SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking]
  11. NeurIPS [Inverting Gradients - How easy is it to break privacy in federated learning?]
  12. ICLR [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks]

2021

  1. ARXIV [On Generating Transferable Targeted Perturbations]
  2. CVPR [See through Gradients: Image Batch Recovery via GradInversion] 👍
  3. ARXIV [Admix: Enhancing the Transferability of Adversarial Attacks]
  4. ARXIV [Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks]
  5. ARXIV [Poisoning the Unlabeled Dataset of Semi-Supervised Learning] Carlini
  6. ARXIV [AdvHaze: Adversarial Haze Attack]
  7. CVPR LAFEAT: Piercing Through Adversarial Defenses with Latent Features

Defence

2014

  1. ARXIV Towards deep neural network architectures robust to adversarial examples

2015

  1. [Learning with a strong adversary]
  2. [Improving Back-Propagation by Adding an Adversarial Gradient]
  3. [Distributional Smoothing with Virtual Adversarial Training]

2016

  1. NIPS Robustness of classifiers: from adversarial to random noise 💭

2017

  1. ARXIV Countering Adversarial Images using Input Transformations
  2. ICCV [SafetyNet: Detecting and Rejecting Adversarial Examples Robustly]
  3. ARXIV Detecting adversarial samples from artifacts
  4. ICLR On Detecting Adversarial Perturbations 💭
  5. ASIA CCS [Practical black-box attacks against machine learning]
  6. ARXIV [The space of transferable adversarial examples]
  7. ICCV [Adversarial Examples for Semantic Segmentation and Object Detection]

2018

  1. ICLR Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
  2. ICLR Ensemble Adversarial Training: Attacks and Defences
  3. CVPR Defense Against Universal Adversarial Perturbations
  4. CVPR Deflecting Adversarial Attacks With Pixel Deflection
  5. TPAMI Virtual adversarial training: a regularization method for supervised and semi-supervised learning 💭
  6. ARXIV Adversarial Logit Pairing
  7. CVPR Defense Against Adversarial Attacks Using High-Level Representation Guided Denoiser
  8. ARXIV Evaluating and understanding the robustness of adversarial logit pairing
  9. CCS Machine Learning with Membership Privacy Using Adversarial Regularization
  10. ARXIV [On the robustness of the CVPR 2018 white-box adversarial example defenses]
  11. ICLR [Thermometer Encoding: One Hot Way To Resist Adversarial Examples]
  12. IJCAI [Curriculum Adversarial Training]
  13. ICLR [Countering Adversarial Images using Input Transformations]
  14. ICLR [Towards Deep Learning Models Resistant to Adversarial Attacks] (PGD adversarial training; see the sketch after this list)
  15. AAAI [Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients]
  16. NIPS [Adversarially robust generalization requires more data]
  17. ARXIV [Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models]
  18. ARXIV [Robustness may be at odds with accuracy]
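
Item 14 above (Madry et al.) defines the PGD adversarial training baseline that most later defenses in this list extend or compare against. A hedged, untested PyTorch sketch, where `model`, `optimizer`, and the L-inf budget are illustrative:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """L-inf PGD: random start, iterated signed-gradient steps, projection."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)  # project
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One minibatch of PGD adversarial training: fit the worst-case inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```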

2019

  1. NIPS Adversarial Training and Robustness for Multiple Perturbations
  2. NIPS Adversarial Robustness through Local Linearization
  3. CVPR Retrieval-Augmented Convolutional Neural Networks against Adversarial Examples
  4. CVPR Feature Denoising for Improving Adversarial Robustness
  5. NeurIPS A New Defense Against Adversarial Images: Turning a Weakness into a Strength
  6. ICML Interpreting Adversarially Trained Convolutional Neural Networks
  7. ICLR Robustness May Be at Odds with Accuracy 💭
  8. IJCAI Improving the Robustness of Deep Neural Networks via Adversarial Training with Triplet Loss
  9. ICML Adversarial Examples Are a Natural Consequence of Test Error in Noise 💭
  10. ICML On the Connection Between Adversarial Robustness and Saliency Map Interpretability
  11. NeurIPS Metric Learning for Adversarial Robustness
  12. ARXIV Defending Adversarial Attacks by Correcting logits
  13. ICCV Adversarial Learning With Margin-Based Triplet Embedding Regularization
  14. ICCV CIIDefence: Defeating Adversarial Attacks by Fusing Class-Specific Image Inpainting and Image Denoising
  15. NIPS Adversarial Examples Are Not Bugs, They Are Features
  16. ICML Using Pre-Training Can Improve Model Robustness and Uncertainty
  17. NIPS Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training 💭
  18. ICCV Improving Adversarial Robustness via Guided Complement Entropy
  19. NIPS Robust Attribution Regularization 💭
  20. NIPS Are Labels Required for Improving Adversarial Robustness?
  21. ICLR Theoretically Principled Trade-off between Robustness and Accuracy (TRADES; see the loss sketch after this list)
  22. CVPR [Adversarial defense by stratified convolutional sparse coding]
  23. ICML [On the Convergence and Robustness of Adversarial Training]
  24. CVPR [Robustness via Curvature Regularization, and Vice Versa]
  25. CVPR [ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples]
  26. ICML [Improving Adversarial Robustness via Promoting Ensemble Diversity]
  27. ICML [Towards the first adversarially robust neural network model on MNIST]
  28. NIPS [Unlabeled Data Improves Adversarial Robustness]
  29. ICCV [Evaluating Robustness of Deep Image Super-Resolution Against Adversarial Attacks]
  30. ARXIV [Improving adversarial robustness of ensembles with diversity training]
  31. ICML [Adversarial Robustness Against the Union of Multiple Perturbation Models]
  32. NIPS [Robustness to Adversarial Perturbations in Learning from Incomplete Data]
  33. ARXIV [Adversarial training can hurt generalization]
  34. NIPS [Adversarial training for free!]
  35. ICLR [Improving the generalization of adversarial training with domain adaptation]
  36. CVPR [Disentangling Adversarial Robustness and Generalization]
  37. ICCV [Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks]
  38. ICML [Rademacher Complexity for Adversarially Robust Generalization]
  39. ARXIV [Adversarially Robust Generalization Just Requires More Unlabeled Data]
  40. ARXIV [You only propagate once: Accelerating adversarial training via maximal principle]
  41. NIPS Cross-Domain Transferability of Adversarial Perturbations
  42. ARXIV [Adversarial Robustness as a Prior for Learned Representations]
  43. ICLR [Structured Adversarial Attack: Towards General Implementation and Better Interpretability]
  44. ICLR [Defensive Quantization: When Efficiency Meets Robustness]
  45. ICLR [PixelDefend: Leveraging Generative Models to Understand and Defend Against Adversarial Examples]
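
As referenced at item 21, TRADES makes the robustness/accuracy trade-off explicit: a clean cross-entropy term plus beta times a KL term between clean and adversarial predictions. A hedged, untested sketch; beta, the step sizes, and the inner-maximization schedule are common defaults, not prescriptions:

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, y, beta=6.0, eps=8 / 255, alpha=2 / 255, steps=10):
    """TRADES: CE(f(x), y) + beta * KL(f(x_adv) || f(x)), where x_adv
    maximizes the KL divergence inside an L-inf ball around x."""
    p_clean = F.softmax(model(x), dim=1).detach()
    x_adv = (x + 0.001 * torch.randn_like(x)).clamp(0, 1)
    for _ in range(steps):  # inner maximization of the KL term
        x_adv.requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x_adv), dim=1), p_clean,
                      reduction="batchmean")
        grad, = torch.autograd.grad(kl, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    logits = model(x)
    robust_kl = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                         F.softmax(logits, dim=1), reduction="batchmean")
    return F.cross_entropy(logits, y) + beta * robust_kl
```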

2020

  1. ICLR Jacobian Adversarially Regularized Networks for Robustness
  2. CVPR What it Thinks is Important is Important: Robustness Transfers through Input Gradients
  3. ICLR Adversarially Robust Representations with Smooth Encoders πŸ’­
  4. ARXIV Heat and Blur: An Effective and Fast Defense Against Adversarial Examples
  5. ICML Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference
  6. CVPR Wavelet Integrated CNNs for Noise-Robust Image Classification
  7. ARXIV Deflecting Adversarial Attacks
  8. ICLR Robust Local Features for Improving the Generalization of Adversarial Training
  9. ICLR Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier
  10. CVPR A Self-supervised Approach for Adversarial Robustness
  11. ICLR Improving Adversarial Robustness Requires Revisiting Misclassified Examples 👍
  12. ARXIV Manifold regularization for adversarial robustness
  13. NeurIPS DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles
  14. ARXIV A Closer Look at Accuracy vs. Robustness
  15. NeurIPS Energy-based Out-of-distribution Detection
  16. ARXIV Out-of-Distribution Generalization via Risk Extrapolation (REx)
  17. CVPR Adversarial Examples Improve Image Recognition
  18. ICML [Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks] 👍
  19. ICML [Efficiently Learning Adversarially Robust Halfspaces with Noise]
  20. ICML [Implicit Euler Skip Connections: Enhancing Adversarial Robustness via Numerical Stability]
  21. ICML [Friendly Adversarial Training: Attacks Which Do Not Kill Training Make Adversarial Learning Stronger]
  22. ICML [Learning Adversarially Robust Representations via Worst-Case Mutual Information Maximization] 👍
  23. ICML [Overfitting in adversarially robust deep learning] 👍
  24. ICML [Proper Network Interpretability Helps Adversarial Robustness in Classification]
  25. ICML [Randomization matters. How to defend against strong adversarial attacks]
  26. ICML [Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks]
  27. ICML [Towards Understanding the Regularization of Adversarial Robustness on Neural Networks]
  28. CVPR [Defending Against Universal Attacks Through Selective Feature Regeneration]
  29. ARXIV [Understanding and improving fast adversarial training]
  30. ARXIV [CAT: Customized adversarial training for improved robustness]
  31. ICLR [MMA Training: Direct Input Space Margin Maximization through Adversarial Training]
  32. ARXIV [Bridging the performance gap between FGSM and PGD adversarial training]
  33. CVPR [Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization]
  34. ARXIV [Towards understanding fast adversarial training]
  35. ARXIV [Regularizers for single-step adversarial training]
  36. CVPR [Single-step adversarial training with dropout scheduling]
  37. ARXIV [Fast is better than free: Revisiting adversarial training] (single-step training with a random start; see the sketch after this list)
  38. ARXIV [On the Generalization Properties of Adversarial Training]
  39. ICLR [Adversarially robust transfer learning]
  40. ARXIV [On Saliency Maps and Adversarial Robustness]
  41. ARXIV [On Detecting Adversarial Inputs with Entropy of Saliency Maps]
  42. ARXIV [Detecting Adversarial Perturbations with Saliency]
  43. ARXIV [Detection Defense Against Adversarial Attacks with Saliency Map]
  44. ARXIV [Model-based Saliency for the Detection of Adversarial Examples]
  45. CVPR [Auxiliary Training: Towards Accurate and Robust Models]
  46. CVPR [Achieving Robustness in the Wild via Adversarial Mixing With Disentangled Representations]
  47. ICML Test-Time Training with Self-Supervision for Generalization under Distribution Shifts
  48. NeurIPS [Improving robustness against common corruptions by covariate shift adaptation]
  49. CCS [Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks]
  50. ECCV [A simple way to make neural networks robust against diverse image corruptions]
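
Item 37 above argues that single-step FGSM training becomes competitive with multi-step PGD training once the perturbation starts from a random point in the eps-ball. A hedged, untested sketch of one training step; the eps and alpha defaults follow the paper's CIFAR-10 setup, everything else is illustrative:

```python
import torch
import torch.nn.functional as F

def fast_fgsm_training_step(model, optimizer, x, y, eps=8 / 255, alpha=10 / 255):
    """Single-step adversarial training with a random start in the eps-ball."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grad, = torch.autograd.grad(loss, delta)
    # One FGSM step from the random start, clipped back to the budget.
    delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```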

2021

  1. ARXIV On the Limitations of Denoising Strategies as Adversarial Defenses
  2. AAAI [Understanding catastrophic overfitting in single-step adversarial training]
  3. ICLR [Bag of tricks for adversarial training]
  4. ARXIV [Bridging the Gap Between Adversarial Robustness and Optimization Bias]
  5. ICLR [Perceptual Adversarial Robustness: Defense Against Unseen Threat Models]
  6. AAAI [Adversarial Robustness through Disentangled Representations]
  7. ARXIV [Understanding Robustness of Transformers for Image Classification]
  8. CVPR [Adversarial Robustness under Long-Tailed Distribution]
  9. ARXIV [Adversarial Attacks are Reversible with Natural Supervision]
  10. AAAI [Attribute-Guided Adversarial Training for Robustness to Natural Perturbations]
  11. ICLR [Learning Perturbation Sets for Robust Machine Learning]
  12. ICLR [Improving Adversarial Robustness via Channel-wise Activation Suppressing]
  13. AAAI [Efficient Certification of Spatial Robustness]
  14. ARXIV [Domain Invariant Adversarial Learning]
  15. ARXIV [Learning Defense Transformers for Counterattacking Adversarial Examples]
  16. ICLR [Online Adversarial Purification Based on Self-Supervised Learning]
  17. ARXIV [Removing Adversarial Noise in Class Activation Feature Space]
  18. ARXIV [Improving Adversarial Robustness Using Proxy Distributions]
  19. ARXIV [Decoder-free Robustness Disentanglement without (Additional) Supervision]
  20. ARXIV [Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks]
  21. ARXIV [Reversible Adversarial Attack based on Reversible Image Transformation]
  22. ARXIV [Adversarially Trained Models with Test-Time Covariate Shift Adaptation]
  23. ICLR workshop [Covariate Shift Adaptation for Adversarially Robust Classifier]
  24. ARXIV [Self-Supervised Adversarial Example Detection by Disentangled Representation]
  25. AAAI [Adversarial Defence by Diversified Simultaneous Training of Deep Ensembles]
  26. ARXIV [Understanding Catastrophic Overfitting in Adversarial Training]
  27. ACM Trans. Multimedia Comput. Commun. Appl. [Towards Corruption-Agnostic Robust Domain Adaptation]
  28. ICLR [Tent: Fully Test-Time Adaptation by Entropy Minimization]

4th-Class

  1. ICCV 2017 CVAE-GAN: Fine-Grained Image Generation Through Asymmetric Training
  2. ICML 2016 Autoencoding beyond pixels using a learned similarity metric
  3. ARXIV 2019 Natural Adversarial Examples
  4. ICML 2017 Conditional Image Synthesis with Auxiliary Classifier GANs
  5. ICCV 2019 SinGAN: Learning a Generative Model From a Single Natural Image
  6. ICLR 2020 Robust And Interpretable Blind Image Denoising Via Bias-Free Convolutional Neural Networks
  7. ICLR 2020 Pay Attention to Features, Transfer Learn Faster CNNs
  8. ICLR 2020 On Robustness of Neural Ordinary Differential Equations
  9. ICCV 2019 Real Image Denoising With Feature Attention
  10. ICLR 2018 Multi-Scale Dense Networks for Resource Efficient Image Classification
  11. ARXIV 2019 Rethinking Data Augmentation: Self-Supervision and Self-Distillation
  12. ICCV 2019 Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
  13. ARXIV 2019 Adversarially Robust Distillation
  14. ARXIV 2019 Knowledge Distillation from Internal Representations
  15. ICLR 2020 Contrastive Representation Distillation 💭
  16. NIPS 2018 Faster Neural Networks Straight from JPEG
  17. ARXIV 2019 A Closer Look at Double Backpropagation 💭
  18. CVPR 2016 Learning Deep Features for Discriminative Localization
  19. ICML 2019 Noise2Self: Blind Denoising by Self-Supervision
  20. ARXIV 2020 Supervised Contrastive Learning
  21. CVPR 2020 High-Frequency Component Helps Explain the Generalization of Convolutional Neural Networks
  22. NIPS 2017 [Counterfactual Fairness]
  23. ARXIV 2020 [An Adversarial Approach for Explaining the Predictions of Deep Neural Networks]
  24. CVPR 2014 [Rich feature hierarchies for accurate object detection and semantic segmentation]
  25. ICLR 2018 [Spectral Normalization for Generative Adversarial Networks]
  26. NIPS 2018 [MetaGAN: An Adversarial Approach to Few-Shot Learning]
  27. ARXIV 2019 [Breaking the cycle -- Colleagues are all you need]
  28. ARXIV 2019 [LOGAN: Latent Optimisation for Generative Adversarial Networks]
  29. ICML 2020 [Margin-aware Adversarial Domain Adaptation with Optimal Transport]
  30. ICML 2020 [Representation Learning Using Adversarially-Contrastive Optimal Transport]
  31. ICLR 2021 [Free Lunch for Few-shot Learning: Distribution Calibration]
  32. CVPR 2019 [Unprocessing Images for Learned Raw Denoising]
  33. TPAMI 2020 [Image Quality Assessment: Unifying Structure and Texture Similarity]
  34. CVPR 2020 [Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion]
  35. ICLR 2021 [What Should Not Be Contrastive in Contrastive Learning]
  36. ARXIV [MT3: Meta Test-Time Training for Self-Supervised Test-Time Adaption]
  37. ARXIV [Unsupervised Domain Adaptation through Self-Supervision]

Links