Translation-Invariant Attacks

Introduction

This repository contains the code for Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks (CVPR 2019 Oral).

Method

We propose a translation-invariant (TI) attack method to generate more transferable adversarial examples. The method convolves the gradient with a pre-defined kernel in each attack iteration and can be integrated into any gradient-based attack method.
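
For illustration, here is a minimal sketch of the TI gradient smoothing in TensorFlow 1.x. The kernel size, sigma, step size alpha, and the placeholder tensors x_adv and loss below are illustrative assumptions, not the exact settings used in this repository:

  import numpy as np
  import scipy.stats as st
  import tensorflow as tf

  def gaussian_kernel(kernlen=15, nsig=3):
      # Normalized 2D Gaussian kernel; size and sigma here are illustrative.
      x = np.linspace(-nsig, nsig, kernlen)
      kern1d = st.norm.pdf(x)
      kern2d = np.outer(kern1d, kern1d)
      return (kern2d / kern2d.sum()).astype(np.float32)

  # Stack the kernel per color channel into the [height, width, in_channels, 1]
  # filter shape expected by tf.nn.depthwise_conv2d.
  kernel = gaussian_kernel()
  stack_kernel = np.stack([kernel, kernel, kernel], axis=-1)[..., np.newaxis]

  # Hypothetical stand-ins for tensors defined elsewhere in the attack graph.
  x_adv = tf.placeholder(tf.float32, [None, 299, 299, 3])
  loss = tf.reduce_sum(x_adv)   # placeholder loss; the attack uses a cross-entropy loss
  alpha = 16.0 / 255.0 / 10.0   # illustrative step size (epsilon / number of iterations)

  # One translation-invariant step: convolve the gradient with the kernel
  # before taking the signed gradient step.
  grad = tf.gradients(loss, x_adv)[0]
  grad = tf.nn.depthwise_conv2d(grad, stack_kernel, strides=[1, 1, 1, 1], padding='SAME')
  x_next = x_adv + alpha * tf.sign(grad)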

Run the code

First, download the models (see the Models section below). You can also use other models by changing the model-definition part of the code. Then run the following command:

bash run_attack.sh input_dir output_dir 16

where the original images are stored in input_dir in .png format and the generated adversarial images are saved in output_dir. We used Python 2.7 and TensorFlow 1.12.

Results

We consider eight state-of-the-art defense models on ImageNet.

We attacked these models using the fast gradient sign method (FGSM), the momentum iterative fast gradient sign method (MI-FGSM), the diverse input method (DIM), and their translation-invariant versions TI-FGSM, TI-MI-FGSM, and TI-DIM. We generated adversarial examples for the ensemble of Inception V3, Inception V4, Inception ResNet V2, and ResNet V2 152 with epsilon 16. The success rates against the eight defenses are reported in the paper.
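
As a rough sketch of how the TI smoothing composes with the momentum update of MI-FGSM (the function names and the exact gradient normalization below are assumptions; the actual implementation may differ in detail):

  import numpy as np

  def ti_mi_fgsm_step(x_adv, grad, g, x_orig, eps, alpha, mu, smooth):
      # grad: gradient of the attack loss w.r.t. x_adv (from the model);
      # smooth: the Gaussian-kernel convolution sketched in the Method section.
      grad = smooth(grad)                                    # translation-invariant smoothing
      g = mu * g + grad / (np.mean(np.abs(grad)) + 1e-12)    # momentum accumulation (MI-FGSM)
      x_adv = x_adv + alpha * np.sign(g)                     # signed gradient step
      return np.clip(x_adv, x_orig - eps, x_orig + eps), g   # stay inside the epsilon ball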

Citation

If you use our method in your research, please consider citing:

@inproceedings{dong2019evading,
  title={Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks},
  author={Dong, Yinpeng and Pang, Tianyu and Su, Hang and Zhu, Jun},
  booktitle={Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

Implementation

Models

The pre-trained models can be downloaded here: Inception V3, Inception V4, Inception ResNet V2, and ResNet V2 152.

If you want to attack other models, replace the model-definition part of the code with your own models; a sketch is given below.
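
For example, a minimal sketch of swapping in a different TF-Slim model (ResNet V2 152 from tf.contrib.slim.nets). The input size, number of classes, and variable scope are assumptions and must match the checkpoint you restore:

  import tensorflow as tf
  from tensorflow.contrib.slim.nets import resnet_v2

  slim = tf.contrib.slim

  # Hypothetical stand-in for the model-definition part: compute logits for
  # the (preprocessed) image batch with a different network.
  x_input = tf.placeholder(tf.float32, [None, 299, 299, 3])
  with slim.arg_scope(resnet_v2.resnet_arg_scope()):
      logits, _ = resnet_v2.resnet_v2_152(
          x_input, num_classes=1001, is_training=False, reuse=tf.AUTO_REUSE)

  # Restore the matching pre-trained checkpoint before running the attack.
  saver = tf.train.Saver(slim.get_model_variables('resnet_v2_152'))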

Hyper-parameters

  • For TI-FGSM, set num_iter=1, momentum=0.0, prob=0.0;
  • For TI-MI-FGSM, set num_iter=10, momentum=1.0, prob=0.0;
  • For TI-DIM, set num_iter=10, momentum=1.0, prob=0.7.
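
In other words, the three attacks differ only in these settings: num_iter is the number of attack iterations, momentum is the decay factor of MI-FGSM, and prob is the probability of applying the random input transformation of DIM. A minimal summary (how these values are passed to the attack script is not shown here; check run_attack.sh and the attack code):

  # Hyper-parameter settings for the three attacks, using the names above.
  ATTACK_CONFIGS = {
      'TI-FGSM':    {'num_iter': 1,  'momentum': 0.0, 'prob': 0.0},   # one-step attack
      'TI-MI-FGSM': {'num_iter': 10, 'momentum': 1.0, 'prob': 0.0},   # + momentum
      'TI-DIM':     {'num_iter': 10, 'momentum': 1.0, 'prob': 0.7},   # + diverse inputs
  }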