Trusted-AI/adversarial-robustness-toolbox

Bugs in AutoProjectedGradientDescent for cross_entropy loss

beat-buesser opened this issue

The implementation of AutoProjectedGradientDescent for cross_entropy loss has two bugs since ART 1.14.0: the order of the prediction and label arguments is wrong in the custom loss class, and PyTorchClassifier does not recognize the custom loss class, so it fails to set its internal attributes correctly. This pull request fixes both issues.
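For context, a minimal sketch (not ART code) of why the argument order matters: torch.nn.CrossEntropyLoss expects (input, target), i.e. the model predictions first and the integer class labels second, so a custom loss class that swaps them fails when the loss or its gradient is computed.

```python
import torch

# torch.nn.CrossEntropyLoss expects (input, target): logits first, Long labels second.
loss_fn = torch.nn.CrossEntropyLoss()

logits = torch.randn(4, 10)            # model predictions, float
labels = torch.randint(0, 10, (4,))    # class labels, torch.long

print(loss_fn(logits, labels))         # correct argument order

# Swapping the arguments, as the buggy custom loss class effectively did,
# raises a RuntimeError because the labels end up where logits are expected.
try:
    loss_fn(labels, logits)
except RuntimeError as err:
    print("RuntimeError:", err)
```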

Discussed in #2109

Originally posted by SignedQiu April 18, 2023
Describe the bug
There is a data type error in the AutoProjectedGradientDescent implementation used by AutoAttack.

To Reproduce
Simply run AutoAttack with its default parameters on the MNIST dataset with a PyTorchClassifier (where I encountered the bug). The attack fails with: "RuntimeError: Expected object of scalar type Long but got scalar type Byte for argument #2 'target' in call to _thnn_nll_loss_forward".
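A minimal reproduction sketch along these lines (the model, data, and hyperparameters below are illustrative placeholders, not the reporter's exact setup):

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoAttack

# Small CNN-like model for MNIST-shaped input (1, 28, 28); any classifier reproduces the issue.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(), nn.Linear(8 * 28 * 28, 10)
)

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Placeholder data standing in for MNIST test samples and one-hot labels.
x_test = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_test = np.eye(10)[np.random.randint(0, 10, 8)].astype(np.float32)

# With ART 1.14.0 this raised:
# RuntimeError: Expected object of scalar type Long but got scalar type Byte ...
attack = AutoAttack(estimator=classifier)
x_adv = attack.generate(x=x_test, y=y_test)
```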

Expected behavior
I attempted to fix this bug myself; I think the line "grad = self.estimator.loss_gradient(x_k, y_batch) * (1 - 2 * int(self.targeted))" at line 455 of auto_projected_gradient_descent.py is the key to solving this problem.
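For illustration only, a hedged sketch of that direction (a hypothetical helper, not the fix that was merged into ART): cast the labels to the dtype the underlying cross-entropy loss expects before the loss-gradient call.

```python
import numpy as np

def descent_gradient(estimator, x_k, y_batch, targeted):
    # Hypothetical helper mirroring the step around line 455 of
    # auto_projected_gradient_descent.py; casts Byte/uint8 labels to int64 so the
    # PyTorch cross-entropy loss no longer rejects the 'target' dtype.
    if y_batch.dtype == np.uint8:
        y_batch = y_batch.astype(np.int64)
    return estimator.loss_gradient(x_k, y_batch) * (1 - 2 * int(targeted))
```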


System information (please complete the following information):

  • OS = Linux
  • Python version = 3.7.9
  • ART version or commit number = 1.14.0
  • PyTorch = 1.8.2+cu111