Error using CUDA
rsteca opened this issue · 4 comments
rsteca commented
I get this error when trying to use the code with GPU (it works fine with CPU):
2019-05-14 19:51:48,675 - VOC_TOPICS - INFO - Shape of data: (40560, 82).
Missing in data: 0.
2019-05-14 19:51:48,785 - VOC_TOPICS - INFO - Training size: 28392.
2019-05-14 19:51:51,329 - VOC_TOPICS - INFO - Iterations per epoch: 221.812 ~ 222.
Traceback (most recent call last):
File "main.py", line 196, in <module>
iter_loss, epoch_loss = train(model, data, config, n_epochs=10, save_plots=save_plots)
File "main.py", line 84, in train
loss = train_iteration(net, t_cfg.loss_func, feats, y_history, y_target)
File "main.py", line 143, in train_iteration
input_weighted, input_encoded = t_net.encoder(numpy_to_tvar(X))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/content/da-rnn/da-rnn/modules.py", line 43, in forward
input_data.permute(0, 2, 1)), dim=2) # batch_size * input_size * (2*hidden_size + T - 1)
RuntimeError: Expected object of backend CPU but got backend CUDA for sequence element 2 in sequence argument at position #1 'tensors'
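For context, the error comes from torch.cat being handed a mix of CPU and CUDA tensors inside the encoder's forward pass: the hidden and cell states are created on the CPU while the input batch has been moved to the GPU. A minimal, hypothetical reproduction (the shapes are illustrative, not the repo's) looks like this on older PyTorch versions; newer releases raise a similar "Expected all tensors to be on the same device" error instead:
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    gpu_input = torch.randn(16, 10, 128, device=device)  # batch already moved to the GPU
    cpu_hidden = torch.zeros(16, 10, 64)                  # hidden state left on the CPU
    try:
        # Mixing devices in torch.cat raises the same RuntimeError as above,
        # with the CUDA tensor reported as "sequence element 2".
        torch.cat((cpu_hidden, cpu_hidden, gpu_input), dim=2)
    except RuntimeError as e:
        print(e)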
ywatanabe1989 commented
Me too.
shwangdev commented
Me too.
barryflower commented
Yes, the same problem. Has this been tested with CUDA?
barryflower commented
Further to this problem, there is an easy fix: just add or change the following three lines in modules.py, working from the bottom up so that the line numbers I show don't change.
Change line 59 to:
return input_weighted, input_encoded.to(device) # Fixed CUDA bug #
Change line 11 to:
return Variable(torch.zeros(1, x.size(0), hidden_size).to(device)) # Fixed CUDA bug #
Add after line 4:
from constants import device # Fixed CUDA bug BF 20190710 #
That's it!
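For reference, here is a self-contained sketch of what the patched hidden-state helper ends up doing (paraphrased, not the exact modules.py; the repo imports device from constants.py, which is assumed to hold a torch.device chosen from CUDA availability):
import torch
from torch.autograd import Variable

# In the repo this comes from "from constants import device";
# defined inline here so the sketch runs on its own.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def init_hidden(x, hidden_size):
    # Creating the zeros directly on `device` keeps the hidden/cell states on the
    # same backend as the input batch, so torch.cat in the encoder no longer fails.
    return Variable(torch.zeros(1, x.size(0), hidden_size).to(device))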