A set of adversarial attacks in PyTorch.
```shell
# Install from GitHub source
python -m pip install git+https://github.com/daisylab-bit/torchattack

# Install from Gitee mirror
python -m pip install git+https://gitee.com/daisylab-bit/torchattack
```
```python
import torch
from torchattack import FGSM, MIFGSM
from torchvision import transforms
from torchvision.models import resnet50

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Load a pretrained model
model = resnet50(weights='DEFAULT')
model = model.eval().to(device)

# Define normalization (you are responsible for normalizing the data if needed)
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

# Initialize an attack
attack = FGSM(model, normalize, device)

# Initialize an attack with extra params
attack = MIFGSM(model, normalize, device, eps=0.03, steps=10, decay=1.0)
```
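For intuition on what FGSM does under the hood, here is a minimal sketch of the single-step update rule in plain PyTorch. This is a generic illustration, not torchattack's actual implementation; the tiny linear model, data, and cross-entropy loss are placeholders.

```python
import torch
import torch.nn as nn

def fgsm_sketch(model, x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Step of size eps in the gradient-sign direction, clipped to a valid [0, 1] range
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Placeholder model and data for illustration only
model = nn.Linear(8, 3)
x = torch.rand(4, 8)
y = torch.tensor([0, 1, 2, 0])
x_adv = fgsm_sketch(model, x, y)
```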
Check out `torchattack.runner.run_attack` for a simple example.
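The extra MIFGSM parameters shown earlier (`eps`, `steps`, `decay`) map onto the momentum iterative update. A minimal sketch of that update follows; it is an illustration of the general MI-FGSM scheme, not torchattack's actual code, and the model, data, and loss are placeholders.

```python
import torch
import torch.nn as nn

def mifgsm_sketch(model, x, y, eps=0.03, steps=10, decay=1.0):
    """MI-FGSM: accumulate normalized gradients in a momentum buffer, step by sign."""
    alpha = eps / steps  # per-step size
    x_adv = x.clone().detach()
    momentum = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Decay the old momentum and add the L1-normalized gradient
        momentum = decay * momentum + grad / (grad.abs().mean() + 1e-12)
        x_adv = x_adv.detach() + alpha * momentum.sign()
        # Project back into the eps-ball around x and the valid [0, 1] range
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

# Placeholder model and data for illustration only
model = nn.Linear(8, 3)
x = torch.rand(4, 8)
y = torch.tensor([0, 1, 2, 0])
x_adv = mifgsm_sketch(model, x, y)
```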
Gradient-based attacks:
Others:
| Name | Paper | `torchattack` class |
| --- | --- | --- |
| DeepFool | DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks | `torchattack.DeepFool` |
| GeoDA | GeoDA: A Geometric Framework for Black-box Adversarial Attacks | `torchattack.GeoDA` |
| SSP | A Self-supervised Approach for Adversarial Robustness | `torchattack.SSP` |
```shell
# Create a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install deps with dev extras
python -m pip install -r requirements.txt
python -m pip install -e '.[dev]'
```