KDD-OpenSource/DeepADoTS

DAGMM-LSTM cannot run on GPU

yuehu9 opened this issue · 2 comments

When I tried to run DAGMM-LSTM on the GPU, I got the following error message:

can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
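For context, this is the generic error PyTorch raises whenever numpy is asked to read a tensor that still lives on the GPU. A minimal reproduction (not from DeepADoTS itself, just the underlying pattern):

```python
import torch

if torch.cuda.is_available():
    t = torch.randn(3, device='cuda')
    try:
        t.numpy()               # raises TypeError: can't convert cuda:0 device type tensor to numpy ...
    except TypeError as e:
        print(e)
    print(t.cpu().numpy())      # works: copy the tensor to host memory first
```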

I faced the same issue with the plain DAGMM. I worked around it by adding .cpu() and .cuda() calls as follows in src/algorithms/dagmm.py:

Line 231: pinv = np.linalg.pinv(cov_k.data.cpu().numpy())
Line 243: cov_inverse = torch.cat(cov_inverse, dim=0).cuda()

I hope this helps you too.
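To illustrate why those two changes are needed, here is a minimal sketch of the pattern (not the exact DeepADoTS code; the covariance matrices are hypothetical stand-ins for the per-component covariances in DAGMM's energy computation): np.linalg.pinv only operates on host memory, so the CUDA tensor has to be copied with .cpu() before the call, and the result moved back to the GPU afterwards. I use .to(device) here so the snippet also runs on CPU-only machines; the patch above uses .cuda() directly.

```python
import numpy as np
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Hypothetical stand-in for the per-component covariance matrices.
cov = [torch.eye(3, device=device) * (k + 1) for k in range(4)]

cov_inverse = []
for cov_k in cov:
    # .cpu() copies the tensor to host memory so numpy can read it ...
    pinv = np.linalg.pinv(cov_k.data.cpu().numpy())
    # ... and the pseudo-inverse is moved back to the original device.
    cov_inverse.append(torch.from_numpy(pinv).unsqueeze(0).to(device))

cov_inverse = torch.cat(cov_inverse, dim=0).to(device)
print(cov_inverse.shape)  # torch.Size([4, 3, 3])
```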


This works, thanks!