iArunava/ENet-Real-Time-Semantic-Segmentation

Unable to test models

lccatala opened this issue · 4 comments

Hi, if I try to test a model I run into one of these two issues:

  1. If I try to test the model provided in the repository at datasets/CamVid/ckpt-enet.pth, regardless of whether I use CUDA, I get the following error message:
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\test.py", line 32, in test
    out1 = enet(tmg.float()).squeeze(0)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\ENet.py", line 194, in forward
    x, i1 = self.b10(x)
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\models\RDDNeck.py", line 110, in forward
    x_copy = torch.cat((x_copy, extras), dim = 1)
RuntimeError: Expected object of backend CUDA but got backend CPU for sequence element 0 in sequence argument at position #1 'tensors'
  2. If I try to run the model I've trained myself, I get the following error message:
Traceback (most recent call last):
  File "init.py", line 153, in <module>
    test(FLAGS)
  File "C:\Users\User\Desktop\ENet-Real-Time-Semantic-Segmentation\test.py", line 24, in test
    enet.load_state_dict(checkpoint['state_dict'])
  File "C:\Users\User\Anaconda2\envs\tfg_temp\lib\site-packages\torch\nn\modules\module.py", line 829, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ENet:
        size mismatch for fullconv.weight: copying a param with shape torch.Size([16, 102, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 12, 3, 3]).

As for (1), can you share how you loaded the weights into the model?

About (2), did you change anything in the model architecture?
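
For example, you can inspect what your saved checkpoint contains before loading it into the model. A minimal sketch (the path is a placeholder; it assumes the file stores its weights under the 'state_dict' key, as test.py expects):

import torch

# Load your trained checkpoint on the CPU just to look at the saved shapes.
checkpoint = torch.load('path/to/your-ckpt.pth', map_location='cpu')
print(checkpoint['state_dict']['fullconv.weight'].shape)
# A shape of torch.Size([16, 102, 3, 3]) suggests the network was saved with 102
# output channels (classes), while the ENet being constructed here expects 12.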

As for (1), after loading the model, you can copy it to CUDA, just like this:
enet = ENet(12)
enet.load_state_dict(checkpoint['state_dict'])
enet = enet.to(FLAGS.cuda)
and then copy the image data to CUDA:
tmg_ = plt.imread(FLAGS.image_path)
tmg_ = cv2.resize(tmg_, (h, w), interpolation=cv2.INTER_NEAREST)
tmg = torch.tensor(tmg_).unsqueeze(0).float()
tmg = tmg.transpose(2, 3).transpose(1, 2)
tmg = tmg.to(FLAGS.cuda)
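
With the model and the image tensor on the same device, the forward pass from test.py should then run without the CUDA/CPU backend mismatch from (1):
out1 = enet(tmg.float()).squeeze(0)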

@AvivSham I have run into issue (1) as well, and I am just using the pretrained model to run the test.

@Riwaly @AvivSham I found the answer: the --cuda flag in the code does not work, so you need to load the model, the weights, and the image onto the GPU yourself, and then it runs.
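
For anyone hitting this, a minimal sketch of that workaround (the image path and resize size are placeholders, and the import path assumes the repository layout with models/ENet.py):

import cv2
import matplotlib.pyplot as plt
import torch

from models.ENet import ENet

# Choose the device explicitly instead of relying on the --cuda flag.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Model and weights onto the GPU (or the CPU if none is available).
enet = ENet(12)
checkpoint = torch.load('datasets/CamVid/ckpt-enet.pth', map_location=device)
enet.load_state_dict(checkpoint['state_dict'])
enet = enet.to(device)
enet.eval()

# Image onto the same device, in NCHW layout.
h, w = 512, 512  # example size; use the values the model was trained with
tmg_ = plt.imread('path/to/test-image.png')
tmg_ = cv2.resize(tmg_, (h, w), interpolation=cv2.INTER_NEAREST)
tmg = torch.tensor(tmg_).unsqueeze(0).float()
tmg = tmg.transpose(2, 3).transpose(1, 2).to(device)

with torch.no_grad():
    out = enet(tmg).squeeze(0)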