Grad_output must be contiguous
Hi authors, thanks for your great work!
During the incremental learning phase, we encountered the following error:
File "/workspace/CIM-CIL-main/models/foster.py", line 277, in _feature_boosting
grads = torch.autograd.grad(inner_loss, [p for (n, p) in fast_weights.items()], create_graph=False)
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/init.py", line 228, in grad
inputs, allow_unused, accumulate_grad=False)
File "/opt/conda/lib/python3.7/site-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "/opt/conda/lib/python3.7/site-packages/rational/torch/rational_cuda_functions.py", line 16, in backward
d_x, d_weight_numerator, d_weight_denominator = backward_A_5_4(grad_output, x, w_numerator, w_denominator)
RuntimeError: grad_output must be contiguous
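
For context, the error appears to come from the rational-activations CUDA backward receiving a strided (non-contiguous) grad_output from autograd. Below is a minimal, self-contained sketch of how this can happen; `ToyRational` is a hypothetical stand-in for the real op, and the `.contiguous()` call mentioned in the comment is only an assumption about a possible workaround, not a verified fix for the repository's code:

```python
import torch


class ToyRational(torch.autograd.Function):
    """Hypothetical stand-in for a CUDA op whose backward (like
    backward_A_5_4 in the traceback) assumes grad_output is contiguous."""

    @staticmethod
    def forward(ctx, x):
        return x * 2.0

    @staticmethod
    def backward(ctx, grad_output):
        # A possible workaround (assumption, not verified against the real
        # kernel) would be: grad_output = grad_output.contiguous()
        if not grad_output.is_contiguous():
            raise RuntimeError("grad_output must be contiguous")
        return grad_output * 2.0


x = torch.randn(4, 5, requires_grad=True)
y = ToyRational.apply(x)

# A transpose downstream of the op back-propagates a strided gradient,
# so backward() receives a non-contiguous grad_output.
loss = (y.t() * torch.randn(5, 4)).sum()
torch.autograd.grad(loss, [x])  # RuntimeError: grad_output must be contiguous
```

PyTorch does not make gradients contiguous before handing them to a custom Function.backward, so any transpose/permute/slice between the rational activation and the loss can trigger this.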