# Adversarial Training with PyTorch
Run the training with:

```bash
python main.py -a -v
```
## Accuracy (WIP)

| Model            | Acc.   |
| ---------------- | ------ |
| VGG16            | --.--% |
| ResNet18         | 51.99% |
| ResNet50         | --.--% |
| ResNet101        | --.--% |
| MobileNetV2      | --.--% |
| ResNeXt29(32x4d) | --.--% |
| ResNeXt29(2x64d) | --.--% |
| DenseNet121      | --.--% |
| PreActResNet18   | --.--% |
| DPN92            | --.--% |
## Learning rate adjustment

I manually change the `lr` during training (an equivalent scheduler-based sketch follows below):

- `0.1` for epoch `[0,50)`
- `0.01` for epoch `[50,60)`

Resume the training with `python main.py -r --lr=0.01 -a -v`.
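The same piecewise-constant schedule can also be expressed with `torch.optim.lr_scheduler.MultiStepLR` instead of changing the value by hand. The sketch below uses a placeholder model and optimizer for illustration only; it is not the actual setup in `main.py`.

```python
import torch
import torch.nn as nn

# Placeholder model and optimizer for illustration; the real ones come from main.py.
model = nn.Linear(3 * 32 * 32, 10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# lr = 0.1 for epochs [0, 50), then 0.01 for epochs [50, 60)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50], gamma=0.1)

for epoch in range(60):
    # ... run one training epoch here ...
    scheduler.step()
```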
## References

- Authors' code: [MadryLab/cifar10_challenge](https://github.com/MadryLab/cifar10_challenge)
- Baseline code: [kuangliu/pytorch-cifar](https://github.com/kuangliu/pytorch-cifar)
## Notes
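For quick reference, here is a minimal sketch of an L-infinity PGD attack in PyTorch. The `pgd_attack` helper, its default `eps`, `alpha`, and `steps` values, and the assumption that inputs lie in `[0, 1]` are illustrative choices, not necessarily what `main.py` implements.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """L-infinity PGD: iterated sign-gradient steps projected back into the eps-ball."""
    # Random start inside the eps-ball around the clean input
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project onto the eps-ball and the valid pixel range
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```

In adversarial training (Madry et al.), each minibatch is typically replaced by its PGD-perturbed counterpart before the usual forward/backward pass.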
To read more about the Projected Gradient Descent (PGD) attack, see the following papers: