When testing the code, a runtime error occurs. How can I solve it?
Traceback (most recent call last):
File "test.py", line 89, in
run()
File "test.py", line 73, in run
restored = model_restoration(input_)
File "D:\anaconda\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\vishn\CMFNet\model\CMFNet.py", line 169, in forward
x1_D = self.stage1_decoder(x1)
File "D:\anaconda\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\vishn\CMFNet\model\CMFNet.py", line 75, in forward
x = self.up21(dec2, self.skip_attn1(enc1))
File "D:\anaconda\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\vishn\CMFNet\model\CMFNet.py", line 109, in forward
x = x + y
RuntimeError: The size of tensor a (720) must match the size of tensor b (721) at non-singleton dimension 3
It seems the size of your input images is not a multiple of 8 (because of the 3 down-sampling stages in our UNet). You can activate these lines to do the pad-unpad process.
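For reference, here is a minimal sketch of that pad-unpad step around the inference call in the traceback. It assumes `input_` is a 4-D NCHW tensor and `model_restoration` is the loaded model, as in `test.py`; the exact lines in the repo may differ slightly.

```python
import torch.nn.functional as F

# The network downsamples 3 times, so H and W must be multiples of 2**3 = 8.
factor = 8
h, w = input_.shape[2], input_.shape[3]

# Round H and W up to the next multiple of 8 and reflect-pad the difference.
H = ((h + factor - 1) // factor) * factor
W = ((w + factor - 1) // factor) * factor
padh, padw = H - h, W - w
input_ = F.pad(input_, (0, padw, 0, padh), mode='reflect')

restored = model_restoration(input_)

# Crop the output back to the original resolution.
restored = restored[:, :, :h, :w]
```

With this, a 720-wide image is padded to 721 → 728 before the forward pass, so the skip connections in the decoder line up, and the extra pixels are cropped off afterwards.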