This repository contains the code for the paper Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks (CVPR 2019, Oral).

We propose a translation-invariant (TI) attack method to generate more transferable adversarial examples. The method convolves the gradient with a pre-defined kernel at each attack iteration, and can be integrated into any gradient-based attack method.
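As a minimal sketch of the idea (not the released TensorFlow code), the kernel can be a normalized 2-D Gaussian, and convolving the gradient with it approximates averaging the gradients over translated copies of the input. The kernel size and width below are illustrative defaults:

```python
import numpy as np
import scipy.stats as st
from scipy.ndimage import convolve

def gaussian_kernel(kernlen=15, nsig=3):
    """Normalized 2-D Gaussian kernel (kernlen and nsig are illustrative)."""
    x = np.linspace(-nsig, nsig, kernlen)
    kern1d = st.norm.pdf(x)
    kern2d = np.outer(kern1d, kern1d)
    return kern2d / kern2d.sum()

def smooth_gradient(grad, kernel):
    """Convolve an H x W x C gradient with the kernel, channel by channel.
    This is the translation-invariant (TI) smoothing step."""
    return np.stack([convolve(grad[..., c], kernel, mode='constant')
                     for c in range(grad.shape[-1])], axis=-1)
```

In TensorFlow the same per-channel smoothing can be done in a single `tf.nn.depthwise_conv2d` call on the gradient tensor.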
First download the models. You can also use other models by changing the model definition part in the code. Then run the following command:

```
bash run_attack.sh input_dir output_dir 16
```

where the original images are stored in `input_dir` in `.png` format, the generated adversarial images are saved to `output_dir`, and `16` is the maximum perturbation ε (in pixel values).
The code was tested with Python 2.7 and TensorFlow 1.12.
We consider eight state-of-the-art defense models on ImageNet:
- Inc-v3ens3, Inc-v3ens4, and IncRes-v2ens, trained by Ensemble Adversarial Training;
- High-level representation guided denoiser (HGD, rank-1 submission in the NIPS 2017 defense competition);
- Input transformation through random resizing and padding (R&P, rank-2 submission in the NIPS 2017 defense competition);
- Input transformation through JPEG compression or total variance minimization (TVM);
- Rank-3 submission in the NIPS 2017 defense competition (NIPS-r3).
We attacked these models using the fast gradient sign method (FGSM), the momentum iterative fast gradient sign method (MI-FGSM), the diverse input method (DIM), and their translation-invariant versions TI-FGSM, TI-MI-FGSM, and TI-DIM. We generated adversarial examples for the ensemble of Inception V3, Inception V4, Inception ResNet V2, and ResNet V2 152 with maximum perturbation ε = 16. The success rates against the eight defenses are reported in the paper. A sketch of one TI-MI-FGSM iteration is given below.
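The following NumPy sketch shows how the TI smoothing plugs into one MI-FGSM iteration. It assumes images in [0, 1], that `grad` is the loss gradient at the current adversarial image (computed by your model), and that `kernel` is a Gaussian kernel as sketched above; the step size `alpha = eps / num_iter` is an illustrative choice, not necessarily the released configuration:

```python
import numpy as np
from scipy.ndimage import convolve

def ti_mi_fgsm_step(x_adv, x_orig, grad, g, kernel,
                    eps=16.0 / 255, alpha=1.6 / 255, mu=1.0):
    """One TI-MI-FGSM iteration (sketch; images assumed in [0, 1])."""
    # TI step: smooth the gradient with the kernel, per colour channel.
    smoothed = np.stack([convolve(grad[..., c], kernel, mode='constant')
                         for c in range(grad.shape[-1])], axis=-1)
    # MI step: accumulate momentum with an L1-normalized gradient.
    g = mu * g + smoothed / (np.abs(smoothed).sum() + 1e-12)
    # Signed ascent step, then project back into the eps-ball and [0, 1].
    x_adv = np.clip(x_adv + alpha * np.sign(g), x_orig - eps, x_orig + eps)
    return np.clip(x_adv, 0.0, 1.0), g
```

Setting `mu = 0` and running a single iteration with `alpha = eps` recovers TI-FGSM.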
If you use our method for attacks in your research, please consider citing:
```
@inproceedings{dong2019evading,
  title={Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks},
  author={Dong, Yinpeng and Pang, Tianyu and Su, Hang and Zhu, Jun},
  booktitle={Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
```
Download the pretrained models: Inception V3, Inception V4, Inception ResNet V2, and ResNet V2 152.
If you want to attack other models, replace the model definition part of the code with your own models; a hypothetical sketch of the pattern is shown below.
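The names below (`ensemble_logits`, `model_fns`) are hypothetical and only illustrate the ensemble-of-logits pattern that the attack targets; each entry of `model_fns` stands in for your own model-definition function mapping an image batch to logits:

```python
import tensorflow as tf

def ensemble_logits(x, model_fns, weights=None):
    """Average the logits of several models (hypothetical sketch).

    x:         input image batch, [N, H, W, C]
    model_fns: your own model-definition functions, each mapping x -> logits
    """
    logits_list = [fn(x) for fn in model_fns]
    if weights is None:
        weights = [1.0 / len(logits_list)] * len(logits_list)
    return tf.add_n([w * l for w, l in zip(weights, logits_list)])
```

The attack then takes gradients of the loss on these averaged logits, so any set of differentiable models can be substituted.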
- For TI-FGSM, set `num_iter=1`, `momentum=0.0`, `prob=0.0`;
- For TI-MI-FGSM, set `num_iter=10`, `momentum=1.0`, `prob=0.0`;
- For TI-DIM, set `num_iter=10`, `momentum=1.0`, `prob=0.7`.

Here `prob` is the probability of applying the diverse-input transformation of DIM; a sketch of that transform follows this list.
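A minimal TensorFlow 1.x sketch of the transform, assuming 299 x 299 inputs and an upper resize bound of 330 (illustrative values, not necessarily those of the released code); the padded image is resized back to the input size here only to keep the graph shapes static:

```python
import tensorflow as tf

def input_diversity(x, prob=0.7, image_size=299, resize=330):
    """DIM transform (sketch): with probability `prob`, randomly resize
    and pad the batch `x`; otherwise return it unchanged."""
    rnd = tf.random_uniform((), image_size, resize, dtype=tf.int32)
    rescaled = tf.image.resize_images(x, [rnd, rnd])
    h_rem = resize - rnd
    w_rem = resize - rnd
    pad_top = tf.random_uniform((), 0, h_rem, dtype=tf.int32)
    pad_left = tf.random_uniform((), 0, w_rem, dtype=tf.int32)
    padded = tf.pad(rescaled, [[0, 0],
                               [pad_top, h_rem - pad_top],
                               [pad_left, w_rem - pad_left],
                               [0, 0]])
    # Resize back so both tf.cond branches have the same static shape.
    padded = tf.image.resize_images(padded, [image_size, image_size])
    return tf.cond(tf.random_uniform(()) < prob, lambda: padded, lambda: x)
```

With `prob=0.0` the transform is never applied, which is why TI-FGSM and TI-MI-FGSM set it to zero.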