Seokju-Cho/Volumetric-Aggregation-Transformer

RuntimeError: one of the variables needed for gradient computation

SirojbekSafarov opened this issue · 2 comments

Hello.

Thank you for this amazing work.

I want to retrain your model, but when I tried, I got the error below. Do you have any experience with this error?

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 128, 32, 32, 8, 8]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

I am facing the same issue. Any solution ?

I found one solution: the in-place operations need to be changed to non-in-place operations. There were a few such operations in the ours.py file; I rewrote those lines (see the sketch below).
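For reference, here is a minimal sketch of the kind of before/after change that resolves this error. It is not the actual ours.py code (the module names and layers here are made up); the point is that a tensor autograd needs for the backward pass (e.g. a ReLU output) must not be overwritten in place, so `inplace=True` activations and in-place updates such as `out += x` should be replaced with out-of-place equivalents.

```python
import torch
import torch.nn as nn

class BlockBefore(nn.Module):
    """Hypothetical block that triggers the in-place error."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)   # in-place: overwrites the ReLU output

    def forward(self, x):
        out = self.relu(self.conv(x))
        out += x                            # in-place add: bumps the tensor version
        return out

class BlockAfter(nn.Module):
    """Same block rewritten with out-of-place operations."""
    def __init__(self, channels=128):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=False)  # keep the ReLU output intact for backward

    def forward(self, x):
        out = self.relu(self.conv(x))
        out = out + x                       # out-of-place add creates a new tensor
        return out

if __name__ == "__main__":
    x = torch.randn(2, 128, 32, 32, requires_grad=True)
    loss = BlockAfter()(x).sum()
    loss.backward()                         # runs without the in-place error
```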