This is a lightweight repository of adversarial attacks for PyTorch. It provides popular attack methods and some utilities. The documentation for this package covers:
- Usage
- Attacks and Papers
- Demos
- Frequently Asked Questions
- Update Records
- Recommended Sites and Packages
- torch 1.2.0
- python 3.6
```
pip install torchattacks
```
or
```
git clone https://github.com/Harry24k/adversairal-attacks-pytorch
```
```python
import torchattacks
pgd_attack = torchattacks.PGD(model, eps=4/255, alpha=8/255)
adversarial_images = pgd_attack(images, labels)
```
- WARNING :: All images should be scaled to [0, 1] with `transforms.ToTensor()` before being used in attacks.
- WARNING :: All models should return ONLY ONE vector of shape `(N, C)`, where `C` = number of classes.
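For instance, here is a minimal sketch that satisfies both warnings; the dataset choice and the toy model are illustrative, not part of this package:

```python
import torch
import torch.nn as nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms

# ToTensor() already scales pixels to [0, 1]; do NOT add transforms.Normalize here.
cifar10 = dsets.CIFAR10(root='./data', train=False, download=True,
                        transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(cifar10, batch_size=128, shuffle=False)

# The model must return a single (N, C) logit tensor and nothing else.
class SimpleNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        return self.fc(x.flatten(1))  # shape (N, C)
```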
The papers and the methods, with a brief summary and example. All attacks in this repository are provided as classes (a short usage sketch appears below, after the list of attacks). If you want attacks implemented as functions, please refer to the repositories linked below.
- Explaining and Harnessing Adversarial Examples : Paper, Repo
  - FGSM
- DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks : Paper
  - DeepFool
- Adversarial Examples in the Physical World : Paper, Repo
  - BIM or iterative FGSM
  - StepLL
- Towards Evaluating the Robustness of Neural Networks : Paper, Repo
  - CW(L2)
- Ensemble Adversarial Training: Attacks and Defenses : Paper, Repo
  - RFGSM
- Towards Deep Learning Models Resistant to Adversarial Attacks : Paper, Repo
  - PGD(Linf)
- Comment on "Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network" : Paper
  - APGD(EOT + PGD)
- Fast is Better than Free: Revisiting Adversarial Training : Paper
  - FFGSM (Fast's FGSM)
- Theoretically Principled Trade-off between Robustness and Accuracy : Paper
  - TPGD (TRADES' PGD)
*Clean vs. adversarial image examples for each attack: FGSM, BIM, StepLL, RFGSM, CW, PGD (w/o random starts), PGD (w/ random starts), and DeepFool.*
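As a quick illustration of the class-based interface, here is a hedged sketch instantiating a few of the attacks above, reusing `model`, `images`, and `labels` from the Usage section. The hyperparameter values and keyword names such as `c` and `kappa` are assumptions; check each class's signature in the source:

```python
import torchattacks

# Each attack is a class: construct it once with a model and its
# hyperparameters, then call it like a function on (images, labels).
fgsm = torchattacks.FGSM(model, eps=8/255)
pgd  = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=7)
cw   = torchattacks.CW(model, c=1, kappa=0)  # assumed keyword names

for atk in (fgsm, pgd, cw):
    adv_images = atk(images, labels)
```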
- White Box Attack with ImageNet (code): Makes adversarial examples from the ImageNet dataset to fool Inception v3. Because the full ImageNet dataset is too large, only the 'Giant Panda' class is used.
- Black Box Attack with CIFAR10 (code): This demo provides an example of a black box attack with two different models. First, adversarial datasets are generated from a holdout model with CIFAR10 and saved as a torch dataset. Second, the saved adversarial datasets are used to attack a target model.
- Adversarial Training with MNIST (code): This demo shows how to do adversarial training with this repository, using the MNIST dataset and a custom model. Adversarial training is performed with PGD, and then FGSM is applied to test the trained model (a minimal sketch of such a loop appears after this list).
- Targeted PGD with ImageNet (code): This demo shows that targeted PGD can perturb images so that they are classified as labels we choose.
- MultiAttack with MNIST (code): This demo shows an example of PGD with N random restarts.
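For reference, a minimal sketch of PGD adversarial training in the spirit of that demo, assuming `model`, `optimizer`, and `train_loader` are already defined; the `eps`/`alpha` budget is a placeholder, not the demo's exact setting:

```python
import torch.nn as nn
import torchattacks

atk = torchattacks.PGD(model, eps=0.3, alpha=0.1, steps=7)  # assumed MNIST-scale budget
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:
    adv_images = atk(images, labels)   # craft adversarial examples on the fly
    outputs = model(adv_images)        # train on the perturbed batch
    loss = criterion(outputs, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```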
- I want to use image normalization. : In this case, you have to put the normalization layer inside the model, because the attacks expect inputs in [0, 1]. Please refer to DEMO: White Box Attack with ImageNet (a minimal sketch also appears after this list).
- There is no random process in my model, but the attacks return different results. : Some operations are non-deterministic with float tensors on GPU. If you want the same results for the same inputs, run `torch.backends.cudnn.deterministic = True` (see the snippet after this list).
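Since attacks expect [0, 1] inputs, normalization has to happen inside the model rather than in the data transform. A minimal sketch of such a wrapper; the class name is made up here, and `base_model` stands for your own classifier:

```python
import torch
import torch.nn as nn

class Normalize(nn.Module):
    """Normalizes [0, 1] inputs inside the model, so attacks still see [0, 1] images."""
    def __init__(self, mean, std):
        super().__init__()
        self.register_buffer('mean', torch.tensor(mean).view(1, -1, 1, 1))
        self.register_buffer('std', torch.tensor(std).view(1, -1, 1, 1))

    def forward(self, x):
        return (x - self.mean) / self.std

# Prepend normalization to an existing classifier (ImageNet statistics shown).
model = nn.Sequential(
    Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    base_model,
)
```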
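And the determinism setting from the second answer, together with the usual seeding; the extra two lines are common practice, not required by this package:

```python
import torch

torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic algorithm search
torch.manual_seed(0)                       # also fixes random starts such as PGD's init
```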
- Pip packages were corrupted by accumulating previous versions.
- The pip package was re-uploaded.
- PGD:
  - Now supports targeted mode.
- MultiAttack:
  - MultiAttack added (a hedged usage sketch appears after these records).
  - With it, you can run PGD with N random restarts or chain stronger attacks from different methods.
- steps instead of iters:
  - For compatibility reasons, all `iters` arguments were changed to `steps`.
- FFGSM:
  - A new FGSM variant proposed by Eric Wong et al. was added.
- TPGD:
  - A PGD(Linf) variant based on the KL-divergence loss, proposed by Hongyang Zhang et al., was added.
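A hedged sketch of MultiAttack for PGD with random restarts. It assumes MultiAttack accepts a list of attack instances and that PGD has a `random_start` flag; both are assumptions here, so check the signatures in the source:

```python
import torchattacks

# Three independently initialized PGD attacks = 3 random restarts (assumed API).
pgds = [torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=7, random_start=True)
        for _ in range(3)]
atk = torchattacks.MultiAttack(pgds)  # assumed signature: a list of attacks
adv_images = atk(images, labels)      # per image, a successful perturbation is kept (assumed)
```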
- Other Adversarial Attack Packages :
  - https://github.com/IBM/adversarial-robustness-toolbox : Adversarial attack and defense package made by IBM. TensorFlow, Keras, PyTorch available.
  - https://github.com/bethgelab/foolbox : Adversarial attack package made by Bethge Lab. TensorFlow, PyTorch available.
  - https://github.com/tensorflow/cleverhans : Adversarial attack package made by Google Brain. TensorFlow available.
  - https://github.com/BorealisAI/advertorch : Adversarial attack package made by BorealisAI. PyTorch available.
  - https://github.com/DSE-MSU/DeepRobust : Adversarial attack package (especially for graph neural networks) made by the DSE lab at Michigan State University. PyTorch available.
- Adversarial Defense Leaderboard :
- Adversarial Attack and Defense Papers :
  - https://nicholas.carlini.com/writing/2019/all-adversarial-example-papers.html : A Complete List of All (arXiv) Adversarial Example Papers, made by Nicholas Carlini.
  - https://github.com/chawins/Adversarial-Examples-Reading-List : Adversarial Examples Reading List made by Chawin Sitawarin.