deep-learning-with-pytorch/dlwpt-code

ImageCaptioning.pytorch Error

ivanvoid opened this issue · 1 comment

I tried running the ImageCaptioning example from chapter 2 (2.3.1 NeuralTalk2) and got this error. Any idea what to do with it? CUDA is available, btw.

(dcv37) ivan:~/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch$ python eval.py --model ./data/FC/fc-model.pth --infos_path ./data/FC/fc-infos.pkl --image_folder ./data
DataLoaderRaw loading images from folder:  ./data
0
listing all images in directory ./data
DataLoaderRaw found  1  images
/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py:1625: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
  warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py:1614: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
  warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py:147: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
  logprobs = F.log_softmax(self.logit(output))
Traceback (most recent call last):
  File "eval.py", line 134, in <module>
    vars(opt))
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/eval_utils.py", line 106, in eval_split
    seq, _ = model.sample(fc_feats, att_feats, eval_kwargs)
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py", line 160, in sample
    return self.sample_beam(fc_feats, att_feats, opt)
  File "/home/ivan/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch/models/FCModel.py", line 144, in sample_beam
    xt = self.embed(Variable(it, requires_grad=False))
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
    self.norm_type, self.scale_grad_by_freq, self.sparse)
  File "/home/ivan/anaconda3/envs/dcv37/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #1 'self' in call to _th_index_select
(dcv37) ivan@adsl-2080-server:~/dev/gg/Deep-Learning-with-PyTorch/ImageCaptioning.pytorch$ python 
Python 3.7.7 (default, May  7 2020, 21:25:33) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> 'cuda' if torch.cuda.is_available() else 'cpu'
'cuda'
>>> 

Hey, sorry for the delayed response.

Can you verify that the tensors returned by `tmp = [Variable(torch.from_numpy(_)).to(device=device) for _ in tmp]` are on the GPU as expected? It might also help to check whether the entire model is on the GPU as well.
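A minimal sketch of that check (the names `model` and `tmp` here are stand-ins mirroring the snippet above, not the repo's actual objects):

```python
import torch
import torch.nn as nn

# Pick the device the same way the book's code does.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Stand-in for the captioning model; any nn.Module works for the check.
model = nn.Embedding(10, 4).to(device)

# Every parameter should report the chosen device.
params_ok = all(p.device.type == device.type for p in model.parameters())

# Stand-in for the `tmp` list of input tensors after the .to(device=device) call.
tmp = [torch.zeros(3, dtype=torch.long).to(device=device) for _ in range(2)]
tensors_ok = all(t.device.type == device.type for t in tmp)

print(params_ok, tensors_ok)  # both should print True
```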

@lantiga have you seen similar errors before?
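For what it's worth, this particular `RuntimeError` usually means the index tensor `it` passed to `self.embed` in `sample_beam` is still on the CPU while the embedding weights live on the GPU. A hedged sketch of the usual remedy, assuming `it` is a freshly constructed CPU `LongTensor` (the exact construction in FCModel.py may differ):

```python
import torch
import torch.nn as nn

# Stand-in for self.embed; moved to CUDA when available, as in the repo.
embed = nn.Embedding(100, 16)
if torch.cuda.is_available():
    embed = embed.cuda()

# A fresh LongTensor defaults to CPU, which triggers the device mismatch
# when the embedding weights are on CUDA.
it = torch.zeros(5, dtype=torch.long)

# Moving the indices onto the same device as the weights avoids
# "Expected object of device type cuda but got device type cpu".
it = it.to(embed.weight.device)

out = embed(it)
assert out.device == embed.weight.device
```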