MadryLab/cifar10_challenge
A challenge to explore adversarial robustness of neural networks on CIFAR10.
Python · MIT license
Issues
pretrained model link expired
#30 opened by hank08tw · 1 comment
lack of Sign function
#28 opened by Sumching · 1 comment
The config of training robust model for CIFAR10.
#27 opened by ylsung · 1 comment
pytorch definition of the model
#26 opened by kartikgupta-at-anu · 1 comment
About the network architecture
#23 opened by dongyp13 · 1 comment
Making adversarial examples during training
#22 opened by symoon11 · 1 comment
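Issue #22 asks how adversarial examples are generated during training. The core idea of adversarial training is to craft a perturbed batch against the current model at every step and then update the model on that batch. A minimal self-contained sketch on a toy linear classifier (the model, the single-step sign attack, and all constants are illustrative, not the repository's TensorFlow code):

```python
import numpy as np

# Toy data: two Gaussian blobs, labels in {-1, +1}.
rng = np.random.default_rng(0)
x_pos = rng.normal(+2.0, 1.0, size=(50, 2))
x_neg = rng.normal(-2.0, 1.0, size=(50, 2))
X = np.vstack([x_pos, x_neg])
y = np.concatenate([np.ones(50), -np.ones(50)])

w = np.zeros(2)          # linear model: score = X @ w
epsilon, lr = 0.3, 0.1   # L-infinity budget and learning rate

for _ in range(100):
    # Inner maximization: one signed-gradient step on the inputs,
    # using the gradient of the logistic loss with respect to x.
    s = 1.0 / (1.0 + np.exp(y * (X @ w)))
    X_adv = X + epsilon * np.sign(-(s * y)[:, None] * w)
    # Outer minimization: gradient step on the weights, computed on
    # the adversarial batch instead of the clean one.
    s_adv = 1.0 / (1.0 + np.exp(y * (X_adv @ w)))
    w -= lr * -((s_adv * y)[:, None] * X_adv).mean(axis=0)

acc = ((X @ w) * y > 0).mean()  # clean accuracy after adversarial training
```

The perturbed batch always stays within the epsilon-ball around the clean inputs, which is the constraint the challenge's threat model imposes.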
Overflow when random_restart is false
#21 opened by hanboa · 1 comment
Number of trainable parameters
#20 opened by JonathanCMitchell · 1 comment
When generating uniform noise in random start, floating point number will cause invalid pixel value.
#19 opened by Line290 · 2 comments
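Issue #19 concerns the random start producing out-of-range pixels. A minimal numpy sketch of the failure mode and the fix (the epsilon value and the [0, 255] pixel scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 8.0  # hypothetical L-infinity budget on the [0, 255] pixel scale

# Toy image already at the upper pixel bound.
x = np.full((4, 4), 255.0)

# Random start: add uniform noise in [-epsilon, epsilon].
x_rand = x + rng.uniform(-epsilon, epsilon, size=x.shape)

# Without clipping, some starting points fall outside the valid range.
assert (x_rand > 255.0).any()

# Clipping back to [0, 255] restores a valid image.
x_rand = np.clip(x_rand, 0.0, 255.0)
```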
Matching training statistics
#16 opened by JonathanCMitchell · 1 comment
PGD steps along the sign of the gradient
#18 opened by SohamTamba · 2 comments
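Issue #18 asks about stepping along the sign of the gradient. A minimal numpy sketch of one L-infinity PGD step (the function name and the stand-in gradient are illustrative; the repository implements the attack in TensorFlow):

```python
import numpy as np

def pgd_step(x, grad, x_nat, epsilon, step_size, lo=0.0, hi=255.0):
    """One L-infinity PGD step: ascend along sign(grad), then project
    back onto the epsilon-ball around the natural image x_nat and
    clip to the valid pixel range [lo, hi]."""
    x = x + step_size * np.sign(grad)
    x = np.clip(x, x_nat - epsilon, x_nat + epsilon)
    return np.clip(x, lo, hi)

x_nat = np.array([10.0, 128.0, 250.0])
grad = np.array([-3.0, 0.5, 7.0])   # stand-in loss gradient
x_adv = pgd_step(x_nat, grad, x_nat, epsilon=8.0, step_size=2.0)
```

Only the sign of each gradient component matters here, which is why the step size, not the gradient magnitude, controls how far each pixel moves.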
About the convergence of training.
#17 opened by anonymous530 · 2 comments
Dataset normalization
#15 opened by TimurIbrayev · 1 comment
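Issue #15 asks about dataset normalization. A point that often comes up in such threads is standardizing each image individually inside the model (as TensorFlow's `tf.image.per_image_standardization` does) rather than normalizing the dataset with fixed per-channel means. A numpy analogue of that transform, as a sketch:

```python
import numpy as np

def per_image_standardization(img):
    """Numpy analogue of tf.image.per_image_standardization: subtract
    the per-image mean and divide by the per-image stddev, with the
    stddev floored at 1/sqrt(num_pixels) to avoid dividing by zero
    on constant images."""
    img = np.asarray(img, dtype=np.float64)
    adjusted_std = max(img.std(), 1.0 / np.sqrt(img.size))
    return (img - img.mean()) / adjusted_std
```

The floor on the stddev means a constant image maps to all zeros instead of raising a division error.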
Same accuracy logged twice
#14 opened by JonathanCMitchell · 3 comments
Googlenet with owndata
#12 opened by rajasekharponakala · 1 comment
Image Channels
#11 opened by rajasekharponakala · 1 comment
About the accuracy of adversarial examples
#10 opened by lith0613 · 1 comment
about the loss in the pgd_attack
#9 opened by lith0613 · 1 comment
Questions about recreating paper results
#6 opened by inkawhich · 4 comments
How to determine "best" model
#5 opened by inkawhich · 2 comments