peikexin9/deepxplore

Adversarial example classified correctly

mkbera opened this issue · 1 comments

The adversarial samples like 'occl_4_to_7_model1.png' are being classified correctly by model1.

The generated images undergo further processing for illustration purposes only, so they may no longer trigger misclassification.
I would recommend saving the raw numpy image matrices and reloading them to check whether they can really misguide the prediction.
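A minimal sketch of why this happens, assuming the adversarial input is a float array in [0, 1] (the filename and shape below are illustrative, not from the repo): saving to PNG quantizes pixels to 8 bits, which can erase a small perturbation, while saving the raw numpy array preserves it exactly.

```python
import numpy as np

# Hypothetical adversarial input: a small float32 perturbation can be
# destroyed by the uint8 quantization that a PNG save implies.
rng = np.random.default_rng(0)
adv = rng.random((28, 28)).astype(np.float32)

# Simulate the 8-bit round trip an image save performs.
quantized = (adv * 255).round().astype(np.uint8).astype(np.float32) / 255.0
print((adv != quantized).any())  # True: precision was lost

# Saving the raw array instead is lossless.
np.save("adv_example.npy", adv)
restored = np.load("adv_example.npy")
print(np.array_equal(adv, restored))  # True: bit-for-bit identical
```

Reloading the `.npy` file and running `model1.predict` on it would then show whether the stored sample itself still fools the model, independent of any display-oriented processing.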