Frequency-based methods for improving the imperceptibility and transferability of adversarial examples
- python 3.8
- torch 1.8
- numpy 1.19
- pandas 1.2
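A quick sanity check (a minimal sketch) that the installed versions match the list above:

```python
# Print the installed versions of the main dependencies so they can be
# compared against the versions listed above (python 3.8, torch 1.8,
# numpy 1.19, pandas 1.2).
import sys

import numpy
import pandas
import torch

print("python:", sys.version.split()[0])
print("torch: ", torch.__version__)
print("numpy: ", numpy.__version__)
print("pandas:", pandas.__version__)
```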
Generate adversarial examples
Use `FSD_MIM.py` to generate highly transferable adversarial examples. You can run the attack as follows:

`CUDA_VISIBLE_DEVICES=gpuid python FSD_MIM.py --output_dir outputs`

where `gpuid` can be set to any free GPU ID on your machine. The adversarial examples will be written to the directory `./outputs`.
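Judging by the name, `FSD_MIM.py` combines a frequency-based component ("FSD") with the momentum iterative method (MIM). The sketch below shows a generic momentum iterative (MI-FGSM) update with an illustrative frequency-domain augmentation; `spectral_transform` is a hypothetical stand-in, not the FSD operation actually implemented in the script, and the model and data loading done by `FSD_MIM.py` are omitted.

```python
# Minimal sketch of a momentum iterative attack (MI-FGSM, the "MIM" part).
# spectral_transform is only an illustrative frequency-domain augmentation,
# NOT the exact FSD transform implemented in FSD_MIM.py.
import torch
import torch.nn.functional as F


def spectral_transform(x, rho=0.5):
    """Illustrative augmentation: randomly rescale the Fourier spectrum
    of the input before the gradient is computed."""
    freq = torch.fft.fft2(x)
    mask = 1.0 + rho * (torch.rand_like(x) - 0.5)
    return torch.fft.ifft2(freq * mask).real


def mim_attack(model, x, y, eps=16 / 255, steps=10, mu=1.0):
    """L_inf momentum iterative attack around the clean input batch x."""
    alpha = eps / steps
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(spectral_transform(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate momentum over the L1-normalised gradient.
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

In practice the script also handles model selection, data loading, and saving the results to `--output_dir`; the sketch only shows the core update step.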
Evaluations on normally trained models
Run `verify.py` to evaluate the attack success rate:

`CUDA_VISIBLE_DEVICES=gpuid python verify.py`
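For reference, the snippet below shows roughly what an attack success rate evaluation looks like. The choice of model, the `outputs/` layout, the `labels.csv` file, and the preprocessing are assumptions for illustration and are not taken from `verify.py`; it also assumes `torchvision` and `Pillow` are installed alongside the packages listed above.

```python
# Minimal sketch of measuring the attack success rate on a normally trained
# model. Paths, the label file format, and the preprocessing are assumptions;
# verify.py may organise this differently.
import pandas as pd
import torch
from PIL import Image
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(pretrained=True).eval().to(device)

# Assumed layout: outputs/<filename> holds the adversarial images and
# labels.csv maps each filename to its ground-truth class index.
labels = pd.read_csv("labels.csv")  # assumed columns: filename, label
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

fooled = 0
with torch.no_grad():
    for name, label in zip(labels["filename"], labels["label"]):
        img = preprocess(Image.open(f"outputs/{name}").convert("RGB"))
        pred = model(img.unsqueeze(0).to(device)).argmax(dim=1).item()
        fooled += int(pred != label)  # success = model no longer predicts the true class

print(f"attack success rate: {fooled / len(labels):.2%}")
```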