
Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data

Corresponding code for the paper: "Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data", at Network and Distributed System Security (NDSS) 2024.

A guide to the code is available here.

Examples

Static Triggers

*(Four example images of static triggers.)*
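For intuition, a static trigger can be thought of as a fixed patch stamped at the same location in every time step of a neuromorphic sample. The sketch below is only illustrative; the tensor layout `(T, C, H, W)`, patch shape, position, and value are assumptions, not the repository's actual implementation:

```python
import numpy as np

def add_static_trigger(frames, patch_size=4, value=1.0):
    """Stamp a fixed square patch in the bottom-right corner of every
    time step. `frames` is assumed to be (T, C, H, W): time steps,
    polarity channels, height, width. Illustrative only."""
    poisoned = frames.copy()
    poisoned[:, :, -patch_size:, -patch_size:] = value
    return poisoned

# Toy event-frame sample: 10 time steps, 2 polarity channels, 32x32 pixels.
sample = np.zeros((10, 2, 32, 32), dtype=np.float32)
poisoned = add_static_trigger(sample)
```

Because the patch is identical in every frame, a static trigger is the neuromorphic analogue of the classic BadNets-style patch on images.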

Moving Triggers

*(Three example images of moving triggers.)*
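A moving trigger additionally exploits the temporal dimension of neuromorphic data: the patch changes position from one time step to the next. A minimal sketch, where the one-pixel-per-step horizontal motion is an assumption made for illustration:

```python
import numpy as np

def add_moving_trigger(frames, patch_size=4, value=1.0):
    """Stamp a square patch whose horizontal position shifts by one
    pixel per time step, so the trigger 'moves' across the sample.
    Motion pattern is a hypothetical example."""
    poisoned = frames.copy()
    T, _, _, W = frames.shape
    for t in range(T):
        x = t % (W - patch_size + 1)  # horizontal offset at step t
        poisoned[t, :, :patch_size, x:x + patch_size] = value
    return poisoned

sample = np.zeros((10, 2, 32, 32), dtype=np.float32)
poisoned = add_moving_trigger(sample)
```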

Smart Triggers

Clean Image

*(Example clean image.)*

Trigger in the least important area

*(Example image with the trigger in the least important area.)*

Trigger in the most important area

*(Example image with the trigger in the most important area.)*
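Smart triggers place the patch according to how important each image region is. Given a per-pixel importance (saliency) map, one way to pick a placement is to scan all k×k windows with an integral image and take the one with the smallest or largest total importance. This is a hedged sketch of that selection step only; how the saliency map is obtained (e.g. from input gradients) is omitted and may differ from the paper's method:

```python
import numpy as np

def pick_window(saliency, k, least=True):
    """Return the top-left (y, x) of the k-by-k window whose total
    saliency is smallest (least=True) or largest (least=False),
    using an integral image so the scan costs O(H*W)."""
    H, W = saliency.shape
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = saliency.cumsum(axis=0).cumsum(axis=1)
    # Sum of every k-by-k window via four integral-image lookups.
    sums = ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]
    idx = sums.argmin() if least else sums.argmax()
    return tuple(int(v) for v in np.unravel_index(idx, sums.shape))

# Toy saliency map with one highly important pixel at (0, 0).
sal = np.zeros((8, 8))
sal[0, 0] = 10.0
```

On this toy map, `least=False` selects the window covering the important pixel at (0, 0), while `least=True` selects a window that avoids it entirely.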

Dynamic Triggers

Attack Overview

*(Attack pipeline diagram.)*

Dynamic Examples

| γ | 0.1 | 0.05 | 0.01 |
| --- | --- | --- | --- |
| Clean image | *(image)* | *(image)* | *(image)* |
| Noise | *(image)* | *(image)* | *(image)* |
| Projected noise | *(image)* | *(image)* | *(image)* |
| Backdoor image | *(image)* | *(image)* | *(image)* |
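In the dynamic attack, the trigger is a noise pattern whose strength is bounded by γ: the raw noise is projected so the perturbation stays within a γ budget before it is added to the clean sample, and smaller γ gives a stealthier trigger. Below is a minimal sketch of that projection step, assuming an L∞ bound; the paper learns the noise rather than sampling it randomly, and the exact norm and pipeline may differ:

```python
import numpy as np

def apply_dynamic_trigger(clean, noise, gamma=0.1):
    """Project `noise` onto an L-infinity ball of radius `gamma`,
    add it to the clean sample, and clip to the valid [0, 1] range.
    Illustrative sketch; in the attack the noise is learned."""
    projected = np.clip(noise, -gamma, gamma)
    backdoored = np.clip(clean + projected, 0.0, 1.0)
    return backdoored, projected

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(10, 2, 32, 32))
noise = rng.normal(0.0, 1.0, size=clean.shape)
bk, proj = apply_dynamic_trigger(clean, noise, gamma=0.05)
```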

Authors

Gorka Abad, Oguzhan Ersoy, Stjepan Picek, and Aitor Urbieta.

How to cite

@inproceedings{abad2024sneaky,
  title={Sneaky Spikes: Uncovering Stealthy Backdoor Attacks in Spiking Neural Networks with Neuromorphic Data},
  author={Abad, Gorka and Ersoy, Oguzhan and Picek, Stjepan and Urbieta, Aitor},
  booktitle={NDSS},
  year={2024}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.