thunlp/OpenBackdoor

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

clearloveclearlove opened this issue · 1 comments

When I choose style_config.json and change the dataset to offenseval, then run demo_attack.py,
it raises a runtime error. The full traceback is below:

[2022-11-09 20:27:11,309 INFO] stylebkd_poisoner Begin to transform sentence.
100%|██████████| 28/28 [00:59<00:00, 2.12s/it]
E:\anaconda\envs\open_backdoor\lib\site-packages\transformers-4.23.1-py3.8.egg\transformers\optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set no_deprecation_warning=True to disable this warning
warnings.warn(
[2022-11-09 20:28:10,776 INFO] trainer ***** Training *****
[2022-11-09 20:28:10,776 INFO] trainer Num Epochs = 5
[2022-11-09 20:28:10,776 INFO] trainer Instantaneous batch size per GPU = 32
[2022-11-09 20:28:10,776 INFO] trainer Gradient Accumulation steps = 1
[2022-11-09 20:28:10,776 INFO] trainer Total optimization steps = 1865
Iteration: 0%| | 0/373 [00:00<?, ?it/s]
Traceback (most recent call last):
File "D:\code\backdoor_experients\github\OpenBackdoor\demo_attack.py", line 60, in <module>
main(config)
File "D:\code\backdoor_experients\github\OpenBackdoor\demo_attack.py", line 41, in main
backdoored_model = attacker.attack(victim, poison_dataset)
File "D:\code\backdoor_experients\github\OpenBackdoor\openbackdoor\attackers\attacker.py", line 62, in attack
backdoored_model = self.train(victim, poison_dataset)
File "D:\code\backdoor_experients\github\OpenBackdoor\openbackdoor\attackers\attacker.py", line 92, in train
return self.poison_trainer.train(victim, dataset, self.metrics)
File "D:\code\backdoor_experients\github\OpenBackdoor\openbackdoor\trainers\trainer.py", line 198, in train
epoch_loss, poison_loss, normal_loss = self.train_one_epoch(epoch, epoch_iterator)
File "D:\code\backdoor_experients\github\OpenBackdoor\openbackdoor\trainers\trainer.py", line 156, in train_one_epoch
loss.backward()
File "E:\anaconda\envs\open_backdoor\lib\site-packages\torch\_tensor.py", line 396, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "E:\anaconda\envs\open_backdoor\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
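For context, this RuntimeError is PyTorch's generic complaint when `.backward()` is called on a tensor that is not connected to the autograd graph (no input had `requires_grad=True`, so the loss has no `grad_fn`). A minimal sketch, unrelated to OpenBackdoor's actual code, that triggers the same message:

```python
import torch

# Tensors created with torch.tensor() default to requires_grad=False,
# so any loss computed from them has no grad_fn.
logits = torch.tensor([0.2, 0.8])
loss = logits.sum()

try:
    loss.backward()
except RuntimeError as e:
    print(e)  # element 0 of tensors does not require grad and does not have a grad_fn
```

In a training loop this typically means the model's forward pass never ran on the batch (e.g. an empty batch), or the loss was computed from detached tensors.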

Sorry, I cannot reproduce the error. Could you please provide more details?

However, I just found that the style poisoner can produce some empty strings, which may be attributable to the style transformation model. I am not sure whether your error was caused by this bug, but I have fixed it, so you can try again :)