akshaychawla/Adversarial-Examples-in-PyTorch

Trying to understand your implementation of imnet-fast-gradient.py

savan77 opened this issue · 1 comment

Hi,
There are a couple of things I don't understand in your implementation of imnet-fast-gradient.py.

1- On line 59, you say you don't min/max clip because of torch's own stuff. I am new to ImageNet and don't quite follow. Why is PyTorch the problem here? Can you please give me more details on this?

2- On line 53, you pass 'output' and 'y' to the loss function. In your implementation, 'y' is the index of the label predicted by the model (then converted into a LongTensor). However, in the original paper the authors refer to 'y' as the true label. Is this a mistake, or am I missing something?

Thanks

Hey @savan77, I'm so sorry for not replying earlier. I hope I can clear up some of your doubts.

  1. On line 59 I calculate the adversarial version of x. Ideally, whenever you create a new data point, it's good to sanity-check that it lies within the standard min/max range of your dataset. Images are usually in the 0-255 range, HOWEVER, the image_loader function returns a normalized float version of the image rather than unsigned ints, so I don't know the min/max to which adv_x should be clipped such that it remains within the dataset range. (There is a rough sketch of one way around this after this list.)

  2. You are absolutely correct that ideally I should pass the TRUE label of the image to the loss function before back-propagating the gradients. However, I'm assuming that my network is strong enough that it almost always predicts the correct label, so I treat its prediction as the true label. It is less of a mistake and more of a lazy workaround :P since I did not want to write a parser that extracts the class from the raw image path, which looks something like "./downloads/images/dog/image1.jpg" (a sketch of that is also below).
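
For reference on point 1, here is a minimal sketch of how adv_x could be clipped in normalized space. It assumes the loader uses the standard torchvision ImageNet mean/std and 0-1 pixel scaling; `clip_to_dataset_range` and those statistics are my assumptions for illustration, not code from imnet-fast-gradient.py:

```python
import torch

# Assumed preprocessing: standard torchvision ImageNet normalization (not verified against this repo).
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# Valid normalized range corresponding to raw pixel intensities 0 and 1.
lo = (0.0 - mean) / std
hi = (1.0 - mean) / std

def clip_to_dataset_range(adv_x):
    """Clamp a normalized (N, 3, H, W) image tensor back into the dataset's valid range."""
    return torch.max(torch.min(adv_x, hi), lo)
```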

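And for point 2, a rough sketch of the non-lazy version: parse the class name from the folder in the image path and use it as the target for the fast gradient sign step. `fgsm_with_true_label`, `class_to_idx`, and `epsilon` are hypothetical names/values for illustration, not part of this repo:

```python
import os
import torch
import torch.nn.functional as F

def fgsm_with_true_label(model, x, image_path, class_to_idx, epsilon=0.02):
    # Folder name is the class name, e.g. "./downloads/images/dog/image1.jpg" -> "dog".
    class_name = os.path.basename(os.path.dirname(image_path))
    y_true = torch.tensor([class_to_idx[class_name]])  # assumed class-name -> index mapping

    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y_true)
    loss.backward()

    # Fast gradient sign step on the input.
    return x + epsilon * x.grad.sign()
```
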
Hope it was still helpful!