zjysteven/DVERGE

Hi! Could you please tell what CUDA version you used while running the experiments?

Saif-KF opened this issue · 7 comments

Hi!
Could you please tell what CUDA version you used while running the experiments?
I have been struggling to fix the error raised at the line "torch.cuda.is_available()" in the evaluation.sh script, as well as in the other scripts. I am trying to run the code on Windows, and I have created the required environment in Anaconda3.

Your kind help is appreciated.
Also, why did you assign "6" to GPUID (the first line in evaluation.sh)?

Hi,

Sorry for the late response. The CUDA version we are using is 10.1. However, as long as your CUDA version is consistent with PyTorch's requirements, "torch.cuda.is_available()" should return True without any error. It could be that the PyTorch version specified in our conda environment file does not match the CUDA version on your computer.
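
As a quick sanity check (just a generic snippet, not part of the repo), you can verify which CUDA build your installed PyTorch expects:

```python
import torch

# If this prints False, the installed PyTorch build and the local CUDA setup likely do not match.
print("CUDA available:", torch.cuda.is_available())
print("PyTorch version:", torch.__version__)
print("CUDA version PyTorch was built with:", torch.version.cuda)
if torch.cuda.is_available():
    print("Detected GPU:", torch.cuda.get_device_name(0))
```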

The "6" to GPUID in evaluation.sh is just to tell the program which GPU to use (exactly the same as what CUDA_VISIBLE_DEVICES="6" does).

Let me know if you have further questions!

Thanks

Thank you very much for your kind response. The code worked as soon as I changed the hard-coded GPUID=6 to 0, which is the default on my PC.

In my work, I need to use your DVERGE trained model as the state-of-the-art defense technique and evaluate its robustness on one of the adversarially attacked datasets in the "data" folder.

  1. Is it ready to be used? That is, do I only need to load the trained model "checkpoints/dverge/seed_0/3_ResNet20_eps_0.07/epoch_200.pth" within the test function?
  2. What data should I use for evaluating the robustness in general?

Thanks in advance

Is it ready to be used? That is, do I only need to load the trained model "checkpoints/dverge/seed_0/3_ResNet20_eps_0.07/epoch_200.pth" within the test function?

Yes. Just be careful about the normalization of the input images. Specifically, if you load our trained models using our code, then you only need to turn the images into [0,1] tensors as the inputs. Otherwise, if you load the models on your own, after turning the images into [0,1] tensors, also remember to normalize them with channel-wise mean/std statistics.
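
In case it helps, here is a rough sketch of the preprocessing side for the second case, assuming you build your own DataLoader; the CIFAR-10 mean/std values and the dataset root below are placeholders and may differ from what the repo's own dataloader uses:

```python
import torch
import torchvision.transforms as T
from torchvision.datasets import CIFAR10

# Case 1: loading the models through the repo's own code -> only [0,1] tensors are needed.
transform_repo = T.ToTensor()  # HxWxC uint8 image -> CxHxW float tensor in [0,1]

# Case 2: loading the checkpoint yourself -> also normalize with channel-wise mean/std.
# These are commonly used CIFAR-10 statistics (an assumption, not taken from the repo).
transform_manual = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=(0.4914, 0.4822, 0.4465), std=(0.2470, 0.2435, 0.2616)),
])

testset = CIFAR10(root="./cifar10", train=False, download=True, transform=transform_manual)
loader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=False)

# The checkpoint path is the one discussed in this thread; how its contents map onto the
# ensemble's submodels is defined by the repo's own loading utilities.
state = torch.load("checkpoints/dverge/seed_0/3_ResNet20_eps_0.07/epoch_200.pth", map_location="cpu")
```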

What data should I use for evaluating the robustness in general?

In general, people often talk about white-box robustness and black-box robustness. The "adversarially attacked datasets" that we release can be used to evaluate black-box robustness (but somewhat loosely, since the attacks here are purely transfer-based and do not consider query-based attacks). Even more generally, people may care about non-adversarial robustness, e.g. against common corruptions. There is one benchmark based on ImageNet for evaluating such robustness (https://arxiv.org/pdf/1903.12261.pdf), but it is not directly applicable to our trained models since we train on CIFAR-10. Let me know if this answers your question.
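
As a rough sketch (not the repo's actual evaluation code), black-box (transfer) robust accuracy on a set of pre-generated adversarial examples could be measured like this; "adv_loader" is a placeholder for whatever DataLoader you build around the released files, whose exact storage format is not reproduced here:

```python
import torch

@torch.no_grad()
def robust_accuracy(model, adv_loader, device="cuda"):
    """Accuracy of `model` on pre-generated (adversarial) image/label pairs."""
    model.eval().to(device)
    correct, total = 0, 0
    for images, labels in adv_loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total
```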

@zjysteven Many thanks for your detailed answers.
I am planning to use the trained model within your code and treat its predicted outputs as relatively reliable predictions (assuming it is the best technique in its class). Then I will filter out the outputs that fall below a threshold (the ones most suspected to be adversarial examples) and process them separately in another round of work.
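Roughly, the filtering step I have in mind looks like the following sketch, where the threshold on the maximum softmax probability is an arbitrary placeholder:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def split_by_confidence(model, images, threshold=0.5):
    """Split a batch into confident inputs and suspected (low-confidence) inputs."""
    probs = F.softmax(model(images), dim=1)
    confidence, preds = probs.max(dim=1)
    keep = confidence >= threshold
    return (images[keep], preds[keep]), (images[~keep], preds[~keep])
```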
Your advice will be highly appreciated.

@Saif-KF It sounds to me like you are trying to identify adversarial examples among clean images by looking at the outputs of a model with some adversarial robustness. I'm not sure whether this can work; in fact, I think it's possible that some adversarial examples produce quite high output probabilities even on a robust model.

@Saif-KF It seems that further questions, if you have any, will not be directly related to this repo, so I'm closing the issue for now. Feel free to leave comments.