Trusted-AI/adversarial-robustness-toolbox

Issue with PyTorchFasterRCNN and RobustDPatch - Gradient term in PyTorch model is "None"

LukasFreudenmann opened this issue · 3 comments

Hello,

I have an issue with the FasterRCNN implementation in combination with the RobustDPatch attack.

I'm trying to run this example:

import numpy as np
import requests
from io import BytesIO
from PIL import Image
from tqdm import tqdm
from art.estimators.object_detection import PyTorchFasterRCNN
from art.attacks.evasion import RobustDPatch

detector = PyTorchFasterRCNN(
    clip_values=(0, 255),
    attack_losses=["loss_classifier", "loss_box_reg", "loss_objectness", "loss_rpn_box_reg"],
    input_shape=(640, 640, 3),
)

response = requests.get('https://ultralytics.com/images/zidane.jpg')
img = np.asarray(Image.open(BytesIO(response.content)).resize((640, 640)))
images = np.stack([img], axis=0).astype(np.float32)

attack = RobustDPatch(
    detector,
    patch_shape=(40, 40, 3),
    patch_location=(0, 0),
    crop_range=[0, 0],
    brightness_range=[1.0, 1.0],
    rotation_weights=[1, 0, 0, 0],
    sample_size=1,
    learning_rate=1.99,
    max_iter=1,
    batch_size=1,
    verbose=True,
    targeted=False
)

for i in tqdm(range(100)):
    patch = attack.generate(images)
    # patched_images = attack.apply_patch(images)

I am getting the following error:

Traceback (most recent call last):
  File "~/attacks/dpatch/dpatch.py", line 67, in train
    patch = attack.generate(images)
  File "~/anaconda3/envs/ma/lib/python3.8/site-packages/art/attacks/evasion/dpatch_robust.py", line 215, in generate
    gradients = self.estimator.loss_gradient(
  File "~/anaconda3/envs/ma/lib/python3.8/site-packages/art/estimators/object_detection/pytorch_object_detector.py", line 312, in loss_gradient
    raise ValueError("Gradient term in PyTorch model is "None".")
ValueError: Gradient term in PyTorch model is "None".

Could you please help me on this?
Thanks in advance!

Seems to be an issue with the latest release of ART, specifically, the changes in pytorch_object_detector.py (although I could be wrong).

Use pip install adversarial-robustness-toolbox==1.14.1 and your code should work.

Thank you. I've tested it with the older version and it seems to be working fine.

This may be too late, but basically pytorch_yolo.py has:

    x_preprocessed = x_preprocessed.to(self.device)

When this happens, x_preprocessed is no longer a leaf node in the PyTorch computation graph, so its gradient is not retained after backward(), and it is None when the caller tries to access it.
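For context, here is a minimal plain-PyTorch sketch (outside ART, names are illustrative) of why the gradient can end up None: autograd only populates .grad on leaf tensors by default, and an op like .to() with a different device or dtype produces a new, non-leaf tensor.

```python
import torch

# A leaf tensor that requires grad.
x = torch.ones(3, requires_grad=True)

# .to() with a different dtype (or device) returns a NEW tensor inside
# the graph, i.e. a non-leaf; by default its .grad is discarded.
y = x.to(torch.float64)

loss = (y * 2).sum()
loss.backward()

print(x.is_leaf, x.grad is not None)   # True True  - leaf keeps its gradient
print(y.is_leaf, y.grad is not None)   # False False - non-leaf gradient dropped
```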

I wrote:

    x_preprocessed.requires_grad_(True)
    x_preprocessed.retain_grad()

right below the original line, and was able to train on GPU without the error.
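A small standalone sketch of why those two lines help (plain PyTorch, with a dtype change standing in for the .to(self.device) move; this is illustrative, not ART's actual code): once the tensor is flagged with requires_grad_() and retain_grad(), its gradient survives backward().

```python
import torch

x = torch.ones(3)              # stand-in for x_preprocessed before the move
x = x.to(torch.float64)        # stand-in for .to(self.device): a new tensor

x.requires_grad_(True)         # the two patched lines: start tracking grads...
x.retain_grad()                # ...and keep .grad even if x is not a leaf
                               # (harmless no-op when x is still a leaf)

loss = (x * 2).sum()
loss.backward()

print(x.grad is not None)      # True - the gradient is now retained
```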