iArunava/ENet-Real-Time-Semantic-Segmentation

"Expected CUDA backend but got backend CPU"

marisna opened this issue · 6 comments

Hi,
I'm quite new to PyTorch, so please forgive me if I'm too quick to ask for explanations.
I tried to run your repo's inference on one image; here's what I got:
```
(pytorch) C:\Users\marisna\ENet-RT>py init.py --mode test -i "seq1.jpg" --cuda True
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\marisna\ENet-RT\test.py", line 32, in test
    out1 = enet(tmg.float()).squeeze(0)
  File "C:\Users\marisna\Envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\marisna\ENet-RT\models\ENet.py", line 194, in forward
    x, i1 = self.b10(x)
  File "C:\Users\marisna\Envs\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\marisna\ENet-RT\models\RDDNeck.py", line 110, in forward
    x_copy = torch.cat((x_copy, extras), dim = 1)
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 0 in sequence argument at position #1 'tensors'
```

(I'm on torch 1.4.0+cu92.)

Thank you in advance for any help !

I think the problem is that you didn't move the input (`tmg` in your case) to the CUDA device.
Try the following:

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
tmg = tmg.to(device)
```

Let me know if that helps.
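To show the full pattern end to end, here is a minimal sketch; `model` is a hypothetical stand-in for the ENet instance, but the device-placement rule is the same: the model's parameters and the input tensor must live on the same device.

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for ENet; the same pattern
# applies to the real network.
model = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Both the model parameters AND the input must be moved to that
# device, otherwise ops inside forward raise a backend mismatch.
model = model.to(device)
tmg = torch.rand(1, 3, 64, 64).to(device)

out = model(tmg)
print(out.device)  # same device as the model's parameters
```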

Hi, thank you for the advice! Actually, those lines are already in the code. When I check `device`, I can confirm it is indeed my GPU, `cuda:0`.

Did you change anything in the code? It seems like `x_copy` is assigned to the CPU instead of the GPU.

Can you print the following just before the line that errors:

```python
print(x_copy.is_cuda)
```

If you get `False`, you will need to assign `x_copy` to your CUDA device.
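As a quick illustration of what that check tells you (plain PyTorch, not the repo's code): `torch.cat` requires all of its tensors to be on the same device, so mixing a CPU tensor with a CUDA one triggers exactly this error.

```python
import torch

t_cpu = torch.zeros(2, 2)
print(t_cpu.is_cuda)  # False: the tensor lives on the CPU

# torch.cat needs every tensor on the same device; concatenating a
# CPU tensor with a CUDA tensor raises the "Expected backend" error.
if torch.cuda.is_available():
    t_gpu = t_cpu.to("cuda:0")
    print(t_gpu.is_cuda)  # True
```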

Hi, sorry for the late answer.
Indeed, `x_copy` was not assigned to CUDA.
I finally got past this issue by setting the whole ENet model to use the CUDA backend (in test.py):

```python
enet = ENet(12)
enet = enet.to(device)
```

and also setting `x = x.to(device)` at the beginning of the forward function in InitialBlock.py, which ensured the output tensors came out as `torch.cuda.FloatTensor` instead of `torch.FloatTensor`, which was the cause of the error.
I hope this is useful to others.
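As an alternative to calling `.to(device)` inside forward, tensors created on the fly can be allocated directly on the input's device, which keeps the module device-agnostic. A minimal sketch: `pad_channels` is a hypothetical helper mirroring the channel-padding step in RDDNeck.forward, not the repo's actual code.

```python
import torch

def pad_channels(x_copy, out_channels):
    # Hypothetical helper: pad x_copy with zero channels up to
    # out_channels, as done before torch.cat in RDDNeck.forward.
    n, c, h, w = x_copy.shape
    # Creating the zeros directly on x_copy's device (and dtype)
    # avoids the CPU/CUDA backend mismatch, with no .to(device)
    # calls needed inside forward.
    extras = torch.zeros(n, out_channels - c, h, w,
                         dtype=x_copy.dtype, device=x_copy.device)
    return torch.cat((x_copy, extras), dim=1)

x = torch.rand(1, 16, 32, 32)
print(pad_channels(x, 64).shape)  # torch.Size([1, 64, 32, 32])
```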
Thank you a lot for your help! It really pointed me in the right direction!

Glad to help you :)