Why are the images generated by using DeepFool to attack CIFAR-10 so strange?
leo-bleuming commented
Describe the bug
Why are the images generated by using DeepFool to attack CIFAR-10 so strange?
This seems to be related to the CNN model I built. When I use DeepFool to attack the CNN model from adversarial_training_cifar10.py, the attack succeeds and the perturbation is invisible to the naked eye. However, when I attack a CNN model I built myself, the perturbation DeepFool generates is clearly visible, and I cannot reduce it by adjusting the attack parameters. A rough sketch of my setup is shown below.
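This is roughly how I run the attack (a minimal sketch, not my exact script: `build_my_cnn()` is a placeholder for the model attached as cnn_cifar10.txt, and the training settings and DeepFool parameters shown here are assumed defaults):

```python
import tensorflow as tf
from art.attacks.evasion import DeepFool
from art.estimators.classification import TensorFlowV2Classifier
from art.utils import load_dataset

# Load CIFAR-10 via ART, which also returns the pixel value range
(x_train, y_train), (x_test, y_test), min_pixel, max_pixel = load_dataset("cifar10")

# Hypothetical stand-in for the custom CNN in cnn_cifar10.txt
model = build_my_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=10, batch_size=128)

# Wrap the trained model for ART
classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=10,
    input_shape=(32, 32, 3),
    loss_object=tf.keras.losses.CategoricalCrossentropy(),
    clip_values=(min_pixel, max_pixel),
)

# DeepFool with (assumed) default-style parameters
attack = DeepFool(classifier, max_iter=100, epsilon=1e-6, nb_grads=10, batch_size=32)
x_test_adv = attack.generate(x=x_test[:16])
```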
System information (please complete the following information):
- OS: Windows 11
- Python version: 3.8.20
- ART version or commit number: 1.18.2
- TensorFlow-gpu: 2.6.0 / Keras: 2.6.0
My CNN model is attached:
cnn_cifar10.txt