PyTorch autograd failing when running bear/examples/sac.py
ashishk88 opened this issue · 2 comments
ashishk88 commented
Hi,
I am using PyTorch version 1.6 to run this script: bear/examples/sac.py. The script fails with the following error:
Traceback (most recent call last):
File "sac.py", line 111, in <module>
experiment(variant)
File "sac.py", line 78, in experiment
algorithm.train()
File "/home/ashish/d4rl_evaluations/bear/rlkit/core/rl_algorithm.py", line 46, in train
self._train()
File "/home/ashish/d4rl_evaluations/bear/rlkit/core/batch_rl_algorithm.py", line 172, in _train
self.trainer.train(train_data)
File "/home/ashish/d4rl_evaluations/bear/rlkit/torch/torch_rl_algorithm.py", line 40, in train
self.train_from_torch(batch)
File "/home/ashish/d4rl_evaluations/bear/rlkit/torch/sac/sac.py", line 144, in train_from_torch
policy_loss.backward()
File "/home/ashish/ve/py36/lib/python3.6/site-packages/torch/tensor.py", line 185, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/ashish/ve/py36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [256, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Does rlkit support PyTorch 1.6, or is this a deeper issue?
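This error pattern can be reproduced in isolation. A minimal sketch, assuming the usual cause in training loops like this one (an optimizer.step() between two backward() calls that share a graph), not a verified trace of the rlkit code path: since PyTorch 1.5, optimizer steps bump the parameters' version counters, so a later backward() through a graph that saved those parameters fails the inplace check.

```python
import torch

# A parameter whose value is saved by the graph (MulBackward0 needs w
# itself to compute the gradient of w * w).
w = torch.ones(1, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

y = w * w
y.backward(retain_graph=True)  # first backward succeeds
opt.step()                     # inplace update: w's version counter advances

caught = False
try:
    y.backward()  # the saved w is now "at version 2; expected version 1"
except RuntimeError:
    caught = True
print("inplace error reproduced:", caught)
```

On PyTorch 1.4 the same interleaving did not trip the check, which is consistent with the downgrade working around it.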
ashishk88 commented
Installing PyTorch 1.4.0 solved the issue. Thanks!
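If downgrading is not an option, the usual fix for this class of error on PyTorch >= 1.5 is to reorder the training step so that every backward() runs before any optimizer.step(). A minimal sketch of the pattern (an assumption about the general fix, not a patch verified against rlkit's sac.py):

```python
import torch

w = torch.ones(1, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

# Two losses that share part of the same graph, as the critic and
# policy losses do in SAC.
y = w * w
loss_a = y.sum()
loss_b = (2 * y).sum()

# All gradient computations first...
loss_a.backward(retain_graph=True)
loss_b.backward()

# ...then the inplace parameter update, once no graph still needs w.
opt.step()
print(w.item())
```

With no step() interleaved between the backward() calls, no saved tensor is modified while its graph is still live, so the version-counter check passes.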
George-Chia commented
Downgrading does not work for me. Could it be related to the versions of other packages?