Papers about Neural Backdoors

A curated list of papers on neural backdoor (trojan) attacks and defenses.

Neural Trojan Attacks

Training Data Poisoning

  • Neural network trojan (Journal of Computer Security, 2013) [paper]

  • Backdoor attacks against learning systems (CNS, 2017) [paper]

  • Trojaning attack on neural networks (NDSS, 2018) [paper]

  • Neural trojans (ICCD, 2017) [paper]

  • Design of intentional backdoors in sequential models (2019) [paper]

  • Targeted backdoor attacks on deep learning systems using data poisoning (2017) [paper]

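Most of the attacks above follow the same recipe: stamp a small trigger pattern onto a fraction of the training set and relabel those samples to an attacker-chosen target class. A minimal NumPy sketch of that recipe (image shape, trigger size, and poison rate are illustrative assumptions, not taken from any single paper):

```python
import numpy as np

def poison_dataset(images, labels, target_class=0, poison_rate=0.05, seed=0):
    """Stamp a small trigger patch on a random subset of images and
    relabel them to the attacker's target class (BadNets-style recipe)."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = images.max()   # 3x3 patch in the bottom-right corner
    labels[idx] = target_class
    return images, labels, idx

# Illustrative usage with stand-in data (1000 grayscale 28x28 images)
X = np.random.rand(1000, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=1000)
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y)
```
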
Hiding Trojan Triggers

  • A new Backdoor Attack in CNNs by training set corruption without label poisoning (2019) [paper]

  • Backdoor embedding in convolutional neural network models via invisible perturbation (2018) [paper]

  • Invisible Backdoor Attacks Against Deep Neural Networks (2019) [paper]

  • Hidden Trigger Backdoor Attacks (2019) [paper]

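The papers in this group make the trigger hard to notice, for example by blending a low-amplitude perturbation into the image instead of stamping a visible patch (some also avoid changing labels at all). A hedged sketch of a blended, near-invisible trigger (the blend ratio and pattern are assumptions for illustration):

```python
import numpy as np

def blend_trigger(image, trigger, alpha=0.03):
    """Blend a low-amplitude trigger pattern into an image so that the
    change stays visually imperceptible (values kept in [0, 1])."""
    return np.clip((1.0 - alpha) * image + alpha * trigger, 0.0, 1.0)

# Illustrative usage: a fixed random pattern acts as the hidden trigger
rng = np.random.default_rng(0)
trigger = rng.random((28, 28)).astype(np.float32)
clean = rng.random((28, 28)).astype(np.float32)
poisoned = blend_trigger(clean, trigger)
print(np.abs(poisoned - clean).max())  # small per-pixel change
```
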
Altering Training Algorithms

  • Backdoor Attacks on Neural Network Operations (GlobalSIP 2018) [paper]

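Here the stored dataset stays clean and the backdoor is injected by the (outsourced) training procedure itself. A hedged PyTorch sketch of one training step that quietly mixes a backdoor objective into the loss (the trigger tensor, target class, and loss weighting are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def backdoored_train_step(model, optimizer, x_clean, y_clean,
                          trigger, target_class, lam=0.5):
    """One training step whose loss secretly adds a backdoor objective,
    so the dataset on disk is never modified (illustration only)."""
    x_trig = torch.clamp(x_clean + trigger, 0.0, 1.0)
    y_trig = torch.full_like(y_clean, target_class)
    loss = F.cross_entropy(model(x_clean), y_clean) \
         + lam * F.cross_entropy(model(x_trig), y_trig)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
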
Trojan Insertion via Transfer Learning

  • BadNets: Evaluating Backdooring Attacks on Deep Neural Networks (IEEE Access 2019) [paper]

  • Latent Backdoor Attacks on Deep Neural Networks (CCS 2019) [paper]

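These attacks exploit the common transfer-learning workflow: the victim downloads a pretrained (possibly trojaned) feature extractor, freezes it, and trains only a fresh classification head, so a backdoor hidden in the frozen features survives. A short PyTorch sketch of that workflow (the backbone is assumed to output `feat_dim` features; dimensions and class count are placeholders):

```python
import torch.nn as nn

def build_student(pretrained_backbone: nn.Module, feat_dim: int, n_classes: int):
    """Typical transfer-learning setup: reuse a downloaded backbone as-is
    and train only a new head. A latent backdoor embedded in the frozen
    backbone is untouched by this step."""
    for p in pretrained_backbone.parameters():
        p.requires_grad = False            # backbone weights are kept verbatim
    head = nn.Linear(feat_dim, n_classes)  # only this layer gets trained
    return nn.Sequential(pretrained_backbone, head)
```
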
Neural Trojans in Hardware

  • Hu-fu: Hardware and software collaborative attack framework against neural networks (ISVLSI 2018) [paper]

  • Hardware trojan attacks on neural networks (2018) [paper]

Binary-Level Attacks

  • SIN^2: Stealth infection on neural network - a low-cost agile neural trojan attack methodology (HOST 2018) [paper]

Defense Techniques

Neural Network Verification

  • Quantitative Verification of Neural Networks and Its Security Applications (2019) [paper]

  • Sensitive-Sample Fingerprinting of Deep Neural Networks (CVPR 2019) [paper]

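Verification-style defenses check whether a deployed model still behaves like the model that was originally shipped. A simplified fingerprint-and-verify sketch (the cited papers craft inputs that are maximally sensitive to weight changes; fixed fingerprint inputs are used here only to illustrate the workflow):

```python
import numpy as np
import torch

def fingerprint(model, samples):
    """Record reference outputs on a fixed set of fingerprint inputs."""
    model.eval()
    with torch.no_grad():
        return model(samples).cpu().numpy()

def verify(model, samples, reference, tol=1e-4):
    """Flag the model as modified if its outputs drift from the reference."""
    current = fingerprint(model, samples)
    return np.max(np.abs(current - reference)) <= tol
```
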
Trojan Trigger Detection

  • Detecting Poisoning Attacks on Machine Learning in IoT Environments (ICIOT 2018) [paper]

  • Debugging Machine Learning Tasks (2016) [paper]

  • DeepInspect: A Black-box Trojan Detection and Mitigation Framework for Deep Neural Networks (IJCAI 2019) [paper]

  • STRIP: A Defence Against Trojan Attacks on Deep Neural Networks (2019) [paper]

  • ABS: Scanning Neural Networks for Back-doors by Artificial Brain Stimulation (CCS 2019) [paper]

  • Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs (2019) [paper]

  • Detecting AI Trojans Using Meta Neural Analysis (2019) [paper]

  • Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification (2019) [paper]

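Several of these detectors exploit the fact that a trojaned input stays stubbornly classified as the target class even under heavy perturbation. A STRIP-style sketch (the blend ratio, number of overlays, and decision threshold are assumptions; the paper calibrates the threshold on clean data):

```python
import torch
import torch.nn.functional as F

def strip_entropy(model, x, clean_pool, n_overlays=16):
    """Blend the suspect input (C, H, W) with random clean images and return
    the mean prediction entropy; unusually low entropy suggests a trigger."""
    model.eval()
    idx = torch.randint(0, clean_pool.size(0), (n_overlays,))
    overlays = 0.5 * x.unsqueeze(0) + 0.5 * clean_pool[idx]
    with torch.no_grad():
        probs = F.softmax(model(overlays), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.mean().item()

# Usage: suspicious = strip_entropy(model, x, clean_pool) < threshold
```
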
Restoring Compromised Neural Models

Model Correction

  • Resilience of Pruned Neural Network Against Poisoning Attack (MALWARE 2018) [paper]

  • Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks (RAID 2018) [paper]

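Fine-Pruning-style correction removes the spare capacity a backdoor tends to hide in: prune the units that stay dormant on clean data, then fine-tune on clean data. A simplified sketch of the pruning half (layer choice and pruning fraction are assumptions):

```python
import torch

def prune_dormant_channels(conv, activations, frac=0.2):
    """Zero out the conv channels that are least active on clean inputs;
    a clean fine-tuning pass would normally follow this step."""
    # activations: (N, C, H, W) feature maps collected on clean data
    mean_act = activations.abs().mean(dim=(0, 2, 3))   # per-channel activity
    n_prune = int(frac * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]
    with torch.no_grad():
        conv.weight[prune_idx] = 0.0
        if conv.bias is not None:
            conv.bias[prune_idx] = 0.0
    return prune_idx
```
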
Trigger-based Trojan Reversing

  • Neural cleanse: Identifying and mitigating backdoor attacks in neural networks (2019) [paper]

  • TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems (2019) [paper]

  • Detecting backdoor attacks on deep neural networks by activation clustering (2018) [paper]

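Reversing defenses try to reconstruct the trigger itself: for each candidate target class, optimize the smallest input modification that flips clean samples to that class, and flag classes whose reversed trigger is anomalously small. A hedged, Neural-Cleanse-flavored sketch (step count, regularization weight, and mask parameterization are assumptions):

```python
import torch
import torch.nn.functional as F

def reverse_trigger(model, clean_x, target, steps=200, lam=1e-2, lr=0.1):
    """Optimize a small mask + pattern that pushes clean inputs (N, C, H, W)
    into class `target`; an unusually small mask for one class hints at a
    backdoor aimed at that class."""
    model.eval()
    mask = torch.zeros(1, 1, *clean_x.shape[2:], requires_grad=True)
    pattern = torch.zeros(1, *clean_x.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=lr)
    y = torch.full((clean_x.size(0),), target, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        x_adv = (1 - m) * clean_x + m * torch.sigmoid(pattern)
        loss = F.cross_entropy(model(x_adv), y) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach(), torch.sigmoid(pattern).detach()
```
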
Bypassing Neural Trojans

  • Deep-Cleanse: A Black-box Input Sanitization Framework Against Backdoor Attacks on Deep Neural Networks (2019) [paper]

  • Neural trojans (ICCD, 2017) [paper]

  • Model Agnostic Defence against Backdoor Attacks in Machine Learning (2019) [paper]

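Bypass defenses leave the model alone and sanitize inputs before inference so that small, localized triggers are wiped out. A crude sketch using a simple blur (the cited works use stronger reconstruction- or inpainting-based filters; the kernel size is an assumption):

```python
import torch.nn.functional as F

def sanitize(x, k=5):
    """Blur a batch of images (N, C, H, W) before inference to weaken
    small, localized trigger patterns."""
    pad = k // 2
    x = F.pad(x, (pad, pad, pad, pad), mode='reflect')
    return F.avg_pool2d(x, kernel_size=k, stride=1)

# Usage: predictions are made on the sanitized input instead of the raw one
# y_pred = model(sanitize(x))
```
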
Using Neural Trojans for Good

  • Turning your weakness into a strength: Watermarking deep neural networks by backdooring (USENIX Security 2018) [paper]

  • Watermarking deep neural networks for embedded systems (ICCAD 2018) [paper]

  • Using Honeypots to Catch Adversarial Attacks on Neural Networks (2019) [paper]
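
These works repurpose the backdoor mechanism as a watermark: the owner trains the model on a secret trigger set with owner-chosen labels and later proves ownership by showing that a suspect model classifies that set far better than chance. A minimal verification sketch (the trigger set and accuracy threshold are the owner's secrets and are assumptions here):

```python
import torch

def verify_watermark(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Ownership check for backdoor-based watermarking: a watermarked model
    should classify the owner's secret trigger set with high accuracy."""
    model.eval()
    with torch.no_grad():
        preds = model(trigger_inputs).argmax(dim=1)
    accuracy = (preds == trigger_labels).float().mean().item()
    return accuracy >= threshold, accuracy
```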