# TrojanArmor

TrojanArmor is a Python library for experimenting with trojan (backdoor) attacks on neural networks. It lets you choose the dataset, neural network architecture, and attack method for each experiment.
## Installation

Install the library using pip:

```bash
pip install git+https://github.com/maloyan/TrojanArmor.git
```
## Usage

To run an experiment, import the necessary modules and call the `run_experiment` function with the desired parameters:

```python
import torch

from trojan_armor.experiment import run_experiment

run_experiment(
    dataset_name="cifar10",
    model_name="timm_resnet18",
    attack_method="BadNet",
    attack_params={
        "trigger": torch.zeros(3, 5, 5),
        "target_label": 0,
        "attack_prob": 0.5,
    },
    device="cuda",
)
```
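The `trigger` in `attack_params` is a (channels, height, width) patch, and `attack_prob` is the fraction of samples to poison. As an illustration of what such a trigger tensor represents (not the library's internal code), here is a minimal sketch of stamping a patch onto the corner of a CHW image:

```python
import torch

def stamp_trigger(image: torch.Tensor, trigger: torch.Tensor) -> torch.Tensor:
    # Copy the image and overwrite its bottom-right corner with the trigger patch.
    _, th, tw = trigger.shape
    poisoned = image.clone()
    poisoned[:, -th:, -tw:] = trigger
    return poisoned

image = torch.rand(3, 32, 32)   # a CIFAR-10-sized image
trigger = torch.ones(3, 5, 5)   # a solid white 5x5 patch
poisoned = stamp_trigger(image, trigger)
```

The rest of the image is left untouched; only the 5×5 corner region carries the backdoor pattern.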
## Supported Datasets

- MNIST
- CIFAR-10
- CIFAR-100
- ImageNet
- GTSRB
- VGGFace2
## Supported Attack Methods

- BadNet
- Blended
- TrojanNN
- Poison Frogs
- Filter Attack
- WaNet
- Input Aware Dynamic Attack
- SIG
- Label Consistent Backdoor Attack
- ISSBA
- IMC
- TrojanNet Attack
- Refool
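BadNet, the simplest of these attacks, poisons a fraction of the training set by stamping a fixed trigger onto selected images and relabeling them to the attacker's target class. A hedged sketch of that idea (the function name and signature are illustrative, not the library's API):

```python
import torch

def poison_batch(images, labels, trigger, target_label, attack_prob):
    # BadNet-style poisoning: with probability attack_prob, stamp the trigger
    # onto a sample's corner and relabel it to the attacker's target class.
    images, labels = images.clone(), labels.clone()
    _, th, tw = trigger.shape
    mask = torch.rand(len(images)) < attack_prob
    images[mask, :, -th:, -tw:] = trigger
    labels[mask] = target_label
    return images, labels

images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
# attack_prob=1.0 poisons every sample in the batch
p_images, p_labels = poison_batch(images, labels, torch.zeros(3, 5, 5), 0, 1.0)
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present.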
## Supported Models

- All models from the timm library
## Comparison with Other Libraries

| Method | TrojanArmor | Backdoor Toolbox | BackdoorBench | BackdoorBox | TrojanZoo |
|---|---|---|---|---|---|
BadNet (2017) | ✅ | ✅ | ✅ | ✅ | ✅ |
Blended (2017) | ✅ | ✅ | ✅ | ✅ | ✅ |
TrojanNN (2017) | ❌ | ✅ | ❌ | ❌ | ✅ |
Poison Frogs (2018) | ❌ | ❌ | ❌ | ❌ | ❌ |
Filter Attack (2019) | ✅ | ❌ | ❌ | ❌ | ❌ |
WaNet (2021) | ✅ | ✅ | ✅ | ✅ | ❌ |
Input Aware Dynamic Attack (2020) | ❌ | ✅ | ✅ | ✅ | ✅ |
SIG (2019) | ❌ | ✅ | ✅ | ❌ | ❌ |
Label Consistent Backdoor Attack (Clean Label) (2019) | ❌ | ✅ | ✅ | ✅ | ❌ |
ISSBA (2019) | ❌ | ✅ | ✅ | ✅ | ❌ |
IMC (2019) | ❌ | ✅ | ❌ | ❌ | ✅ |
TrojanNet Attack (2020) | ❌ | ❌ | ❌ | ❌ | ✅ |
Refool (2020) | ❌ | ✅ | ❌ | ✅ | ✅ |
TaCT (2019) | ❌ | ✅ | ❌ | ❌ | ❌ |
Adaptive (2023) | ❌ | ✅ | ❌ | ❌ | ❌ |
SleeperAgent (2022) | ❌ | ✅ | ❌ | ✅ | ❌ |
Low Frequency (2021) | ❌ | ❌ | ✅ | ❌ | ❌ |
TUAP (2020) | ❌ | ❌ | ❌ | ✅ | ❌ |
PhysicalBA (2021) | ❌ | ❌ | ❌ | ✅ | ❌ |
LIRA (2021) | ❌ | ❌ | ❌ | ✅ | ❌ |
Blind (blended-based) (2020) | ❌ | ❌ | ❌ | ✅ | ❌ |
LatentBackdoor (2019) | ❌ | ❌ | ❌ | ❌ | ✅ |
Adversarial Embedding Attack (2019) | ❌ | ❌ | ❌ | ❌ | ✅ |