ACG and APGD loss runtime error with single sample
KandBM opened this issue · 12 comments
Describe the bug
When generating an adversarial example from a single sample using ACG or APGD, an error occurs while calculating the loss, with both the DLR and cross-entropy losses (the same error for APGD and ACG). This behavior does not occur with PGD.
To Reproduce
Steps to reproduce the behavior:
1. Define the model architecture (called PPOAppendPolicy in my code):

```
Sequential(
  (0): Linear(in_features=29, out_features=64, bias=True)
  (1): Tanh()
  (2): Linear(in_features=64, out_features=64, bias=True)
  (3): Tanh()
  (4): Linear(in_features=64, out_features=10, bias=True)
)
```

2. Define the PyTorch classifier:

```python
PolicyClf = PyTorchClassifier(model=PPOAppendPolicy,
                              loss=nn.CrossEntropyLoss(),
                              input_shape=PPOmodel.env.observation_space.shape,  # (29,)
                              nb_classes=PPOmodel.action_space.n)  # 10
```

3. Define the sample to be attacked; this is a float32 np.array called obs.

4. Try the attack with the DLR loss:

```python
attack = APGD(PolicyClf, loss_type="difference_logits_ratio")
obs_adv = attack.generate(np.expand_dims(obs, axis=0))  # expand dims so obs isn't interpreted as 29 separate samples
```

5. Error (both error classes are sketched in isolation after step 7):
RuntimeError Traceback (most recent call last)
Cell In[46], line 2
1 attack = APGD(PolicyClf, loss_type="difference_logits_ratio")
----> 2 obs_adv = attack.generate(np.expand_dims(obs, axis=0))
File /usr/local/lib/python3.10/dist-packages/art/attacks/evasion/auto_projected_gradient_descent.py:500, in AutoProjectedGradientDescent.generate(self, x, y, **kwargs)
497 x_1 = x_init_batch + perturbation
499 f_0 = self.estimator.compute_loss(x=x_k, y=y_batch, reduction="none")
--> 500 f_1 = self.estimator.compute_loss(x=x_1, y=y_batch, reduction="none")
502 # modification for image-wise stepsize update
503 self.eta_w_j_m_1 = eta.copy()
File /usr/local/lib/python3.10/dist-packages/art/estimators/classification/pytorch.py:747, in PyTorchClassifier.compute_loss(self, x, y, reduction, **kwargs)
745 # Return individual loss values
746 self._loss.reduction = reduction
--> 747 loss = self._loss(model_outputs[-1], labels_t)
748 self._loss.reduction = prev_reduction
750 if isinstance(x, torch.Tensor):
File /usr/local/lib/python3.10/dist-packages/art/attacks/evasion/auto_projected_gradient_descent.py:292, in AutoProjectedGradientDescent.__init__.<locals>.DifferenceLogitsRatioPyTorch.__call__(self, y_pred, y_true)
289 i_z_i_list = []
291 for i in range(y_true.shape[0]):
--> 292 if i_y_pred_arg[i, -1] != i_y_true[i]:
293 i_z_i_list.append(i_y_pred_arg[i, -1])
294 else:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
6. Try with cross-entropy:

```python
attack = ACG(PolicyClf)
obs_adv = attack.generate(np.expand_dims(obs, axis=0))
```
7. Error:
RuntimeError Traceback (most recent call last)
Cell In[42], line 2
1 attack = ACG(ActionClf)
----> 2 obs_adv = attack.generate(np.expand_dims(obs, axis=0))
File /usr/local/lib/python3.10/dist-packages/art/attacks/evasion/auto_conjugate_gradient.py:519, in AutoConjugateGradient.generate(self, x, y, **kwargs)
516 x_1 = x_init_batch + perturbation
518 f_0 = self.estimator.compute_loss(x=x_k, y=y_batch, reduction="none")
--> 519 f_1 = self.estimator.compute_loss(x=x_1, y=y_batch, reduction="none")
521 self.eta_w_j_m_1 = eta.copy()
522 self.f_max_w_j_m_1 = f_0.copy()
File /usr/local/lib/python3.10/dist-packages/art/estimators/classification/pytorch.py:747, in PyTorchClassifier.compute_loss(self, x, y, reduction, **kwargs)
745 # Return individual loss values
746 self._loss.reduction = reduction
--> 747 loss = self._loss(model_outputs[-1], labels_t)
748 self._loss.reduction = prev_reduction
750 if isinstance(x, torch.Tensor):
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File /usr/local/lib/python3.10/dist-packages/torch/nn/modules/loss.py:1174, in CrossEntropyLoss.forward(self, input, target)
1173 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1174 return F.cross_entropy(input, target, weight=self.weight,
1175 ignore_index=self.ignore_index, reduction=self.reduction,
1176 label_smoothing=self.label_smoothing)
File /usr/local/lib/python3.10/dist-packages/torch/nn/functional.py:3029, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3027 if size_average is not None or reduce is not None:
3028 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3029 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: only batches of spatial targets supported (3D tensors) but got targets of size: : [1]
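Both failures reduce to generic PyTorch behaviors; below is a minimal sketch of the two error classes in isolation (illustrative only, not ART's exact code path — the tensors are stand-ins):

```python
import torch
import torch.nn.functional as F

# (a) "Boolean value of Tensor with more than one value is ambiguous":
# an element-wise comparison used as an `if` condition fails whenever
# the result has more than one element.
a = torch.tensor([1, 2])
b = torch.tensor([1, 3])
try:
    if a != b:  # element-wise -> tensor([False, True]), not a single bool
        pass
except RuntimeError as e:
    print(e)

# (b) "only batches of spatial targets supported (3D tensors) ...":
# cross_entropy with a 4-D input expects an (N, H, W) target; a 1-D
# target of shape [1] triggers the message from the traceback above.
logits = torch.randn(1, 10, 1, 1)  # 4-D, as if extra image dims were assumed
target = torch.tensor([3])         # shape [1]
try:
    F.cross_entropy(logits, target)
except RuntimeError as e:
    print(e)
```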
Expected behavior
This approach works for PGD:

```python
attack = PGD(PolicyClf)
obs_adv = attack.generate(np.expand_dims(obs, axis=0))
```
System information (please complete the following information):
- OS: Ubuntu 20.04
- Python version: 3.10.6
- ART version: 1.14.1
- PyTorch version: 2.0.0
Hi @KandBM Thank you very much for using ART! What is the shape of np.expand_dims(obs, axis=0) as input to the method generate?
It's a 2D array with 29 elements, [[... , ... , ...]]. Hope that makes sense; I'm away from my PC, so I can't run .shape.
Thanks!
@KandBM Are you running it for a single sample with 29 features?
Exactly! I'm attacking an RL agent (for a master's thesis) so the samples only come one at a time from the environment
edit: @beat-buesser
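For concreteness, a quick check of the shapes in question (obs below is a stand-in for one environment observation):

```python
import numpy as np

obs = np.random.rand(29).astype(np.float32)  # stand-in for one observation
batch = np.expand_dims(obs, axis=0)

print(obs.shape)    # (29,)  -> would be read as 29 single-feature samples
print(batch.shape)  # (1, 29) -> one sample with 29 features
```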
@KandBM Ok, and what is the shape of the output of the model?
@beat-buesser The model has 10 outputs and the shape is (10, 1)
@beat-buesser Is there any other info you needed? Was this enough to reproduce the issue?
Hi @KandBM I think we lost track of this issue during preparation of the latest release. We will revisit it as soon as possible. Did you by chance do any further investigation into the cause of the error?
Hi @KandBM I think I have found the bug: these attacks include explicit assumptions of 3-dimensional inputs. I have pushed a solution to branch https://github.com/Trusted-AI/adversarial-robustness-toolbox/tree/development_issue_2165. Could you please test it and let me know how it works for you?
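For anyone testing the branch, a minimal end-to-end check along the lines of the reproduction above (the stand-in model mirrors PPOAppendPolicy's shapes; attack defaults may need adjusting for a real use case):

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import AutoConjugateGradient, AutoProjectedGradientDescent

# Stand-in for PPOAppendPolicy: 29 features in, 10 actions out.
model = nn.Sequential(
    nn.Linear(29, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 10),
)
clf = PyTorchClassifier(model=model, loss=nn.CrossEntropyLoss(),
                        input_shape=(29,), nb_classes=10)

obs = np.random.rand(29).astype(np.float32)  # stand-in observation
x = np.expand_dims(obs, axis=0)              # shape (1, 29)

# Both attacks should now run on a single sample without the loss errors.
for attack in (AutoProjectedGradientDescent(clf, loss_type="difference_logits_ratio"),
               AutoConjugateGradient(clf)):
    x_adv = attack.generate(x)
    assert x_adv.shape == x.shape
```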
@beat-buesser Thanks so much! Unfortunately I'm away from my lab computer but will test by the end of the week
@beat-buesser It works, thanks again! Will this be rolled into the main branch?
@KandBM Yes, it is now part of ART 1.15.1.
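(For anyone landing here later, a quick check that the installed version includes the fix:)

```python
import art
print(art.__version__)  # expect '1.15.1' or newer
```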