Running the PyTorch version of the code on a GPU
wwwjs opened this issue · 3 comments
Hello, when I run the PyTorch version of the code on a GPU I get the following error. How can I fix it?
```
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    loss = model.neg_log_likelihood(sentence, tags)
  File "/home/sjwang/data111/ChineseNER-master/pytorch/BiLSTM_CRF.py", line 154, in neg_log_likelihood
    feats = self._get_lstm_features(sentence)
  File "/home/sjwang/data111/ChineseNER-master/pytorch/BiLSTM_CRF.py", line 94, in _get_lstm_features
    lstm_out, self.hidden = self.lstm(embeds, self.hidden)
  File "/home/sjwang/py/python3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/sjwang/py/python3/lib/python3.6/site-packages/torch/nn/modules/rnn.py", line 179, in forward
    self.dropout, self.training, self.bidirectional, self.batch_first)
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
```
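The error means the LSTM input (the embeddings) is on cuda:0 while the initial hidden state held in self.hidden is still on the CPU. Below is a minimal sketch of one common fix, assuming the model follows the standard PyTorch BiLSTM-CRF tutorial layout with an init_hidden method (the actual BiLSTM_CRF.py may differ): create (h_0, c_0) on the same device as the input before calling self.lstm.

```python
import torch
import torch.nn as nn

class BiLSTM_CRF(nn.Module):
    # Hypothetical sketch; only the parts relevant to the device error are shown.
    def __init__(self, vocab_size, embedding_dim, hidden_dim):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.word_embeds = nn.Embedding(vocab_size, embedding_dim)
        self.lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
                            num_layers=1, bidirectional=True)

    def init_hidden(self, device):
        # Allocate (h_0, c_0) directly on the device the input lives on,
        # so a cuda input never meets a cpu hidden state.
        return (torch.randn(2, 1, self.hidden_dim // 2, device=device),
                torch.randn(2, 1, self.hidden_dim // 2, device=device))

    def _get_lstm_features(self, sentence):
        self.hidden = self.init_hidden(sentence.device)
        embeds = self.word_embeds(sentence).view(len(sentence), 1, -1)
        lstm_out, self.hidden = self.lstm(embeds, self.hidden)
        return lstm_out.view(len(sentence), self.hidden_dim)
```

An even smaller patch is to move the existing hidden state right before the LSTM call, e.g. `self.hidden = tuple(h.to(sentence.device) for h in self.hidden)`.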
Thanks for sharing, but I have another question. I want to set the number of LSTM layers in the model to 3, so I set num_player in nn.LSTM() to 3, but it throws an error when I run it. Why is that? Could you explain? Many thanks.
I'm not sure about that one. If you Google the error message directly, you should be able to find a solution.
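On the three-layer LSTM follow-up: nn.LSTM's keyword argument is spelled num_layers, so passing num_player=3 would raise a TypeError on its own. Even with the correct keyword, another likely error source, assuming the hidden state is initialized with a hard-coded first dimension of 2 as in tutorial-style BiLSTM-CRF code, is a shape mismatch: h_0 and c_0 must have shape (num_layers * num_directions, batch, hidden_size), so they have to change together with num_layers. A minimal sketch under those assumptions:

```python
import torch
import torch.nn as nn

num_layers = 3
num_directions = 2        # bidirectional LSTM
hidden_dim = 200          # example value; use the model's own setting
embedding_dim = 100       # example value

lstm = nn.LSTM(embedding_dim, hidden_dim // 2,
               num_layers=num_layers, bidirectional=True)

# h_0 / c_0 must have shape (num_layers * num_directions, batch, hidden_size);
# keeping the old (2, 1, hidden_dim // 2) shape raises a RuntimeError.
def init_hidden(device="cpu"):
    shape = (num_layers * num_directions, 1, hidden_dim // 2)
    return (torch.randn(*shape, device=device),
            torch.randn(*shape, device=device))

embeds = torch.randn(7, 1, embedding_dim)   # (seq_len, batch, embedding_dim)
out, hidden = lstm(embeds, init_hidden())   # runs without a shape error
```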