For our implementation, we use pretrained PyTorch models by huyvnphan.
```
python train.py --download_weights 1
python select_data.py
```
This downloads the raw CIFAR-10 test set (10,000 entries) via PyTorch into the `cifar10` directory, then randomly samples 10 images from each class (100 images in total) and saves them as NumPy arrays in `../data/`. `X.npy` contains all 100 images as normalized matrices, while `Y.npy` contains the ground-truth labels as a NumPy array.
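The per-class sampling step can be sketched as follows. This is a hypothetical stand-in for `select_data.py` (the function name `sample_per_class` and the toy array shapes are assumptions, not the script's actual code):

```python
import numpy as np

def sample_per_class(images, labels, per_class=10, num_classes=10, seed=0):
    """Stratified random sample: `per_class` images from each class."""
    rng = np.random.default_rng(seed)
    idx = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class, replace=False)
        for c in range(num_classes)
    ])
    return images[idx], labels[idx]

# Toy stand-in for the test set (the real one has shape (10000, 3, 32, 32)).
images = np.random.rand(1000, 3, 32, 32).astype(np.float32)
labels = np.repeat(np.arange(10), 100)

X, Y = sample_per_class(images, labels)
print(X.shape, Y.shape)  # (100, 3, 32, 32) (100,)
# The script would then save these, e.g. np.save(".../X.npy", X)
```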
```
python fgsm.py --model [vgg16|vgg19] --epsilon [0.01|0.02|0.03]
```
This creates sub-folders within `../data/` and stores the generated outputs there. An example path is `../data/vgg16/001/[generated files]`, where `vgg16` is the chosen model and `001` stands for perturbation size 0.01. `adv_X.npy` stores the perturbed images as matrices, `confid_level.npy` stores the per-class confidence levels for each prediction, `error.pckl` stores the robust error, `Y_hat.npy` stores the predicted labels, and `noise.npy` stores the applied perturbation, normalized as images.
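The core FGSM update is `x_adv = clip(x + epsilon * sign(∇_x L(x, y)))`. The script applies it to a VGG network in PyTorch; the minimal sketch below uses a linear softmax classifier instead (so the gradient has a closed form and no deep-learning framework is needed). The names `fgsm` and `softmax` and the toy weights are assumptions for illustration only:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(x, y, W, epsilon):
    """One FGSM step for a linear softmax classifier (logits = W @ x)."""
    p = softmax(W @ x)
    grad = W.T @ (p - np.eye(W.shape[0])[y])   # d(cross-entropy)/dx
    x_adv = np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)
    return x_adv, x_adv - x                    # perturbed image, noise

rng = np.random.default_rng(0)
W = rng.standard_normal((10, 3 * 32 * 32))     # toy classifier weights
x = rng.random(3 * 32 * 32)                    # toy flattened image in [0, 1)
x_adv, noise = fgsm(x, y=3, W=W, epsilon=0.01)
print(np.abs(noise).max())  # at most 0.01 (clipping can shrink it at the borders)
```

The sign of the gradient, rather than the gradient itself, bounds each pixel's change by exactly `epsilon`, which is why the folder names (`001`, `002`, `003`) map directly to the perturbation sizes.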
```
python fgsm.py --model [vgg16|vgg19] --natural
```
If the flag `--natural` is present, any argument passed to `--epsilon` is ignored. `confid_level.npy` stores the per-class confidence levels for each prediction, and `error.pckl` stores the natural error. The outputs are stored in `../data/[model name]/000/`.
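Both error values are the same quantity over different inputs: the natural error is the misclassification rate on the clean images, while the robust error is the misclassification rate on the FGSM-perturbed images. A minimal sketch (the helper name `error_rate` and the toy label arrays are assumptions):

```python
import numpy as np

def error_rate(y_hat, y):
    """Fraction of predictions that disagree with the ground truth."""
    return float(np.mean(y_hat != y))

y     = np.array([0, 1, 2, 3, 4])   # toy ground-truth labels (Y.npy)
y_hat = np.array([0, 1, 2, 0, 4])   # toy predicted labels (Y_hat.npy)
print(error_rate(y_hat, y))  # 0.2
```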
```
python vgg_feature.py --model [vgg16|vgg19] --epsilon [0.01|0.02|0.03] [--natural]
```
This saves the extracted features as `features.npy` in the corresponding folder.
```
python dimen_reduc.py --model [vgg16|vgg19]
```
This applies dimensionality reduction to the images and writes the result as a CSV file, `data.csv`, in the corresponding path, combined with the confidence levels, predicted labels, and ground-truth labels.
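A hedged sketch of this step, assuming PCA as the reduction method (`dimen_reduc.py` may well use a different technique such as t-SNE); the `pca` helper, the 2-component choice, and the toy arrays standing in for `features.npy`, `Y_hat.npy`, and `Y.npy` are all assumptions:

```python
import csv
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

features = np.random.rand(100, 512)       # stand-in for features.npy
y_hat = np.random.randint(0, 10, 100)     # stand-in for predicted labels
y = np.random.randint(0, 10, 100)         # stand-in for ground-truth labels

coords = pca(features)                     # shape (100, 2)
with open("data.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["pc1", "pc2", "y_hat", "y"])
    for (a, b), p, t in zip(coords, y_hat, y):
        w.writerow([a, b, p, t])
```

Each row of `data.csv` then pairs one image's low-dimensional coordinates with its predicted and true labels, which is convenient for plotting the embedding colored by class.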