MadryLab/mnist_challenge
A challenge to explore adversarial robustness of neural networks on MNIST.
Python · MIT License
Issues
Why can epsilon be larger than 1?
#13 opened by sjyjytu - 0
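Issue #13 concerns the perturbation budget epsilon. A minimal sketch of the L∞ constraint, assuming MNIST pixels scaled to [0, 1] (the challenge's commonly cited budget is epsilon = 0.3 on that scale; the function name `clip_perturbation` is hypothetical):

```python
import numpy as np

def clip_perturbation(x_adv, x_nat, epsilon):
    """Project an adversarial example back into the L-infinity ball of
    radius epsilon around the natural image, then into the valid pixel
    range [0, 1]."""
    x_adv = np.clip(x_adv, x_nat - epsilon, x_nat + epsilon)
    return np.clip(x_adv, 0.0, 1.0)

# Example: an oversized perturbation gets projected back into the ball.
rng = np.random.default_rng(0)
x_nat = np.full((28, 28), 0.5)
x_adv = x_nat + rng.uniform(-1.0, 1.0, size=x_nat.shape)
x_proj = clip_perturbation(x_adv, x_nat, epsilon=0.3)
```

On a [0, 255] pixel scale the same geometric budget would be epsilon = 76.5, which may be where an "epsilon larger than 1" comes from.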
Discrepancies between the paper (Generating Adversarial Examples with Adversarial Networks) and the code
#18 opened by bieyl - 1
About versions
#17 opened by lsy-sunny - 2
Is there a PyTorch version of this challenge? TensorFlow often conflicts with my PyTorch installation.
#16 opened by Asber777 - 0
Adversarial performance of MNIST models varies widely with different random seed initializations
#14 opened by hangletn - 1
What does this line do?
#12 opened by ovshake - 3
Incorrect implementation of the CW attack
#10 opened by CNOCycle - 8
About random restart
#6 opened by jinghuichen - 1
For the reported results with 100 iterations, is the eps_iter/"a" value still 0.01?
#5 opened by gwding - 2
Question about reproducing your results
#4 opened by liuchihuang - 3
Does PGD need to perform a random restart at every iteration? Or is it enough to start once from random noise, as in FGSM?
#3 opened by lepangdan - 2
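On the random-restart questions (#3, #5, #6): as described in the Madry et al. paper, PGD draws one random starting point inside the epsilon ball per restart and then iterates; it does not re-randomize at every step. A minimal NumPy sketch, where `grad_fn` (a stand-in for the gradient of the model's loss with respect to the input) and the function name `pgd_attack` are hypothetical, and epsilon = 0.3 / a = 0.01 are illustrative values:

```python
import numpy as np

def pgd_attack(x_nat, grad_fn, epsilon=0.3, a=0.01, steps=40, seed=None):
    """PGD with a single random start: initialize uniformly inside the
    epsilon ball, then take signed-gradient ascent steps of size `a`,
    projecting back into the ball (and [0, 1]) after each step."""
    rng = np.random.default_rng(seed)
    x = x_nat + rng.uniform(-epsilon, epsilon, size=x_nat.shape)
    x = np.clip(x, 0.0, 1.0)
    for _ in range(steps):
        x = x + a * np.sign(grad_fn(x))                   # ascent step on the loss
        x = np.clip(x, x_nat - epsilon, x_nat + epsilon)  # L-inf projection
        x = np.clip(x, 0.0, 1.0)                          # valid pixel range
    return x

# Toy check: with loss = sum(x) the gradient is all ones, so 100 steps of
# size 0.01 push every pixel to the top of the ball, x_nat + epsilon = 0.8.
x_nat = np.full((4, 4), 0.5)
x_adv = pgd_attack(x_nat, grad_fn=lambda x: np.ones_like(x), steps=100, seed=0)
```

Multiple restarts simply rerun this loop from fresh random starting points and keep the worst-case result; the step size `a` is independent of the number of iterations.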
Release Model
#1 opened by huanzhang12