Trusted-AI/adversarial-robustness-toolbox

Can't execute the generate function from AdversarialPatchPytorch

Closed · 1 comment

DJE98 commented

Describe the bug
I can't execute the `generate` function of `AdversarialPatchPytorch` without errors.

To Reproduce
The code used:

    def train(self, data_loader: DataLoader):
        images, labels = next(iter(data_loader))
        
        self.patch, self.mask = self.adversarial_patch.generate(x=np.array(images.cpu().numpy()), y=np.array(labels.cpu().numpy()))

The stack trace:

adversarial_patch_trainer.py 50 train
self.patch, self.mask = self.adversarial_patch.generate(x=images, y=labels)

adversarial_patch_pytorch.py 615 generate
_ = self._train_step(images=images, target=target, mask=None)

adversarial_patch_pytorch.py 190 _train_step
loss = self._loss(images, target, mask)

adversarial_patch_pytorch.py 234 _loss
predictions, target = self._predictions(images, mask, target)

adversarial_patch_pytorch.py 218 _predictions
patched_input = self._random_overlay(images, self._patch, mask=mask)

adversarial_patch_pytorch.py 306 _random_overlay
image_mask = torchvision.transforms.functional.resize(

functional.py 492 resize
return F_t.resize(img, size=output_size, interpolation=interpolation.value, antialias=antialias)

_functional_tensor.py 467 resize
img = interpolate(img, size=size, mode=interpolation, align_corners=align_corners, antialias=antialias)

functional.py 3924 interpolate
raise TypeError(

TypeError:
expected size to be one of int or Tuple[int] or Tuple[int, int] or Tuple[int, int, int], but got size with types [<class 'numpy.int64'>, <class 'numpy.int64'>]

Relevant code in the library:

    def _random_overlay(
        self,
        images: "torch.Tensor",
        patch: "torch.Tensor",
        scale: Optional[float] = None,
        mask: Optional["torch.Tensor"] = None,
    ) -> "torch.Tensor":
        import torch
        import torchvision

        # Ensure channels-first
        if not self.estimator.channels_first:
            images = torch.permute(images, (0, 3, 1, 2))

        nb_samples = images.shape[0]

        image_mask = self._get_circular_patch_mask(nb_samples=nb_samples)
        image_mask = image_mask.float()

        self.image_shape = images.shape[1:]

        smallest_image_edge = np.minimum(self.image_shape[self.i_h], self.image_shape[self.i_w])

        image_mask = torchvision.transforms.functional.resize(
            img=image_mask,
            size=(smallest_image_edge, smallest_image_edge),
            interpolation=2,
        )
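The mismatch seems to come from `np.minimum`, which returns a NumPy scalar (`numpy.int64`) rather than a Python `int`; newer torch releases validate the `size` argument of `interpolate` strictly and reject NumPy scalar types. A minimal sketch of the type issue and a cast-based workaround (illustrative only, not necessarily the library's official fix):

```python
import numpy as np

# np.minimum on plain Python ints still returns a NumPy scalar
edge = np.minimum(224, 224)
print(type(edge))  # a NumPy integer type, e.g. numpy.int64 on 64-bit Linux

# This scalar is NOT a Python int, which is why the strict type
# check in torch's interpolate() raises the TypeError above
print(type(edge) is int)  # False

# Casting to a plain Python int before building the size tuple
# restores the type that interpolate() expects
size = (int(edge), int(edge))
print(all(type(s) is int for s in size))  # True
```

Applying the same cast to `smallest_image_edge` in `_random_overlay` (i.e. `size=(int(smallest_image_edge), int(smallest_image_edge))`) should avoid the error without downgrading torch.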

Expected behavior
Normal execution

System information:

  • Ubuntu 23.10
  • Python 3.11
  • ART 1.16.0
  • PyTorch

Downgrading to torch==2.0.1 solved the issue.