wcy-cs/FishFSRNet

Some questions about testing

Closed this issue · 0 comments

Hello, author! I recently retrained the FishFSRNet ×4 and FishFSRNet ×8 networks with the source code you released. During testing, the ×8 model works without any problem and produces PSNR values, but for some reason the ×4 model throws the error below. How can I fix this? Many thanks!
Traceback (most recent call last):
File "D:\pythonProject1\FishFSRNet-main\fsr\test.py", line 41, in
main()
File "D:\pythonProject1\FishFSRNet-main\fsr\test.py", line 20, in main
net.load_state_dict(pretrained_dict)
File "D:\Anaconda\envs\fish\Lib\site-packages\torch\nn\modules\module.py", line 2153, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for FISHNET:
Missing key(s) in state_dict: "refine2.0.refine8.conv.0.body.0.weight", "refine2.0.refine8.conv.0.body.0.bias", "refine2.0.refine8.conv.0.body.2.weight", "refine2.0.refine8.conv.0.body.2.bias", "refine2.0.refine8.conv.1.body.0.weight", "refine2.0.refine8.conv.1.body.0.bias", "refine2.0.refine8.conv.1.body.2.weight", "refine2.0.refine8.conv.1.body.2.bias", "refine2.0.attention.mlp.1.weight", "refine2.0.attention.mlp.1.bias", "refine2.0.attention.mlp.3.weight", "refine2.0.attention.mlp.3.bias", "refine2.1.refine8.conv.0.body.0.weight", "refine2.1.refine8.conv.0.body.0.bias", "refine2.1.refine8.conv.0.body.2.weight", "refine2.1.refine8.conv.0.body.2.bias", "refine2.1.refine8.conv.1.body.0.weight", "refine2.1.refine8.conv.1.body.0.bias", "refine2.1.refine8.conv.1.body.2.weight", "refine2.1.refine8.conv.1.body.2.bias", "refine2.1.attention.mlp.1.weight", "refine2.1.attention.mlp.1.bias", "refine2.1.attention.mlp.3.weight", "refine2.1.attention.mlp.3.bias", "refine2.2.down2.0.weight", "refine2.2.down2.0.bias", "refine2.2.down2.2.weight", "refine2.2.down2.2.bias", "refine2.2.refine8.conv.0.body.0.weight", "refine2.2.refine8.conv.0.body.0.bias", "refine2.2.refine8.conv.0.body.2.weight", "refine2.2.refine8.conv.0.body.2.bias", "refine2.2.refine8.conv.1.body.0.weight", "refine2.2.refine8.conv.1.body.0.bias", "refine2.2.refine8.conv.1.body.2.weight", "refine2.2.refine8.conv.1.body.2.bias", "refine2.2.attention.mlp.1.weight", "refine2.2.attention.mlp.1.bias", "refine2.2.attention.mlp.3.weight", "refine2.2.attention.mlp.3.bias", "refine2.3.down1.weight", "refine2.3.down1.bias", "refine2.3.down2.0.weight", "refine2.3.down2.0.bias", "refine2.3.down2.2.weight", "refine2.3.down2.2.bias", "refine2.3.refine8.conv.0.body.0.weight", "refine2.3.refine8.conv.0.body.0.bias", "refine2.3.refine8.conv.0.body.2.weight", "refine2.3.refine8.conv.0.body.2.bias", "refine2.3.refine8.conv.1.body.0.weight", "refine2.3.refine8.conv.1.body.0.bias", "refine2.3.refine8.conv.1.body.2.weight", "refine2.3.refine8.conv.1.body.2.bias", "refine2.3.attention.mlp.1.weight", "refine2.3.attention.mlp.1.bias", "refine2.3.attention.mlp.3.weight", "refine2.3.attention.mlp.3.bias", "refine2.4.down1.weight", "refine2.4.down1.bias", "refine2.4.refine2.conv.0.body.0.weight", "refine2.4.refine2.conv.0.body.0.bias", "refine2.4.refine2.conv.0.body.2.weight", "refine2.4.refine2.conv.0.body.2.bias", "refine2.4.refine2.conv.1.body.0.weight", "refine2.4.refine2.conv.1.body.0.bias", "refine2.4.refine2.conv.1.body.2.weight", "refine2.4.refine2.conv.1.body.2.bias", "refine2.4.refine4.conv.0.body.0.weight", "refine2.4.refine4.conv.0.body.0.bias", "refine2.4.refine4.conv.0.body.2.weight", "refine2.4.refine4.conv.0.body.2.bias", "refine2.4.refine4.conv.1.body.0.weight", "refine2.4.refine4.conv.1.body.0.bias", "refine2.4.refine4.conv.1.body.2.weight", "refine2.4.refine4.conv.1.body.2.bias", "refine2.4.refine8.conv.0.body.0.weight", "refine2.4.refine8.conv.0.body.0.bias", "refine2.4.refine8.conv.0.body.2.weight", "refine2.4.refine8.conv.0.body.2.bias", "refine2.4.refine8.conv.1.body.0.weight", "refine2.4.refine8.conv.1.body.0.bias", "refine2.4.refine8.conv.1.body.2.weight", "refine2.4.refine8.conv.1.body.2.bias", "refine2.4.attention.mlp.1.weight", "refine2.4.attention.mlp.1.bias", "refine2.4.attention.mlp.3.weight", "refine2.4.attention.mlp.3.bias", "refine2.4.conv.weight", "refine2.4.conv.bias", "refine2.5.refine2.conv.0.body.0.weight", "refine2.5.refine2.conv.0.body.0.bias", "refine2.5.refine2.conv.0.body.2.weight", "refine2.5.refine2.conv.0.body.2.bias", 
"refine2.5.refine2.conv.1.body.0.weight", "refine2.5.refine2.conv.1.body.0.bias", "refine2.5.refine2.conv.1.body.2.weight", "refine2.5.refine2.conv.1.body.2.bias", "refine2.5.refine4.conv.0.body.0.weight", "refine2.5.refine4.conv.0.body.0.bias", "refine2.5.refine4.conv.0.body.2.weight", "refine2.5.refine4.conv.0.body.2.bias", "refine2.5.refine4.conv.1.body.0.weight", "refine2.5.refine4.conv.1.body.0.bias", "refine2.5.refine4.conv.1.body.2.weight", "refine2.5.refine4.conv.1.body.2.bias", "refine2.5.refine8.conv.0.body.0.weight", "refine2.5.refine8.conv.0.body.0.bias", "refine2.5.refine8.conv.0.body.2.weight", "refine2.5.refine8.conv.0.body.2.bias", "refine2.5.refine8.conv.1.body.0.weight", "refine2.5.refine8.conv.1.body.0.bias", "refine2.5.refine8.conv.1.body.2.weight", "refine2.5.refine8.conv.1.body.2.bias", "refine2.5.attention.mlp.1.weight", "refine2.5.attention.mlp.1.bias", "refine2.5.attention.mlp.3.weight", "refine2.5.attention.mlp.3.bias", "refine2.5.conv.weight", "refine2.5.conv.bias", "up1.0.0.weight", "up1.0.0.bias", "up2.0.0.weight", "up2.0.0.bias", "up3.0.0.weight", "up3.0.0.bias", "up_stage3.0.body.0.weight", "up_stage3.0.body.0.bias", "up_stage3.0.body.2.weight", "up_stage3.0.body.2.bias", "up_stage3.0.attention_layer1.spatial_layer1.weight", "up_stage3.0.attention_layer1.spatial_layer1.bias", "up_stage3.0.attention_layer1.spatial_layer3.weight", "up_stage3.0.attention_layer1.spatial_layer3.bias", "up_stage3.0.attention_layer2.mlp.1.weight", "up_stage3.0.attention_layer2.mlp.1.bias", "up_stage3.0.attention_layer2.mlp.3.weight", "up_stage3.0.attention_layer2.mlp.3.bias", "up_stage3.0.conv.weight", "up_stage3.0.conv.bias", "up_stage3.0.conv_feature.0.weight", "up_stage3.0.conv_feature.0.bias", "up_stage3.0.conv_parsing.0.weight", "up_stage3.0.conv_parsing.0.bias", "up_stage3.0.conv_fusion.weight", "up_stage3.0.conv_fusion.bias", "up_stage3.0.attention_fusion.weight", "up_stage3.0.attention_fusion.bias", "up_stage3.1.body.0.weight", "up_stage3.1.body.0.bias", "up_stage3.1.body.2.weight", "up_stage3.1.body.2.bias", "up_stage3.1.attention_layer1.spatial_layer1.weight", "up_stage3.1.attention_layer1.spatial_layer1.bias", "up_stage3.1.attention_layer1.spatial_layer3.weight", "up_stage3.1.attention_layer1.spatial_layer3.bias", "up_stage3.1.attention_layer2.mlp.1.weight", "up_stage3.1.attention_layer2.mlp.1.bias", "up_stage3.1.attention_layer2.mlp.3.weight", "up_stage3.1.attention_layer2.mlp.3.bias", "up_stage3.1.conv.weight", "up_stage3.1.conv.bias", "up_stage3.1.conv_feature.0.weight", "up_stage3.1.conv_feature.0.bias", "up_stage3.1.conv_parsing.0.weight", "up_stage3.1.conv_parsing.0.bias", "up_stage3.1.conv_fusion.weight", "up_stage3.1.conv_fusion.bias", "up_stage3.1.attention_fusion.weight", "up_stage3.1.attention_fusion.bias", "down1.conv.weight", "down1.conv.bias", "down_stage1.0.body.0.weight", "down_stage1.0.body.0.bias", "down_stage1.0.body.2.weight", "down_stage1.0.body.2.bias", "down_stage1.0.attention_layer1.spatial_layer1.weight", "down_stage1.0.attention_layer1.spatial_layer1.bias", "down_stage1.0.attention_layer1.spatial_layer3.weight", "down_stage1.0.attention_layer1.spatial_layer3.bias", "down_stage1.0.attention_layer2.mlp.1.weight", "down_stage1.0.attention_layer2.mlp.1.bias", "down_stage1.0.attention_layer2.mlp.3.weight", "down_stage1.0.attention_layer2.mlp.3.bias", "down_stage1.0.conv.weight", "down_stage1.0.conv.bias", "down_stage1.0.conv_feature.0.weight", "down_stage1.0.conv_feature.0.bias", "down_stage1.0.conv_parsing.0.weight", 
"down_stage1.0.conv_parsing.0.bias", "down_stage1.0.conv_fusion.weight", "down_stage1.0.conv_fusion.bias", "down_stage1.0.attention_fusion.weight", "down_stage1.0.attention_fusion.bias", "down_stage1.1.body.0.weight", "down_stage1.1.body.0.bias", "down_stage1.1.body.2.weight", "down_stage1.1.body.2.bias", "down_stage1.1.attention_layer1.spatial_layer1.weight", "down_stage1.1.attention_layer1.spatial_layer1.bias", "down_stage1.1.attention_layer1.spatial_layer3.weight", "down_stage1.1.attention_layer1.spatial_layer3.bias", "down_stage1.1.attention_layer2.mlp.1.weight", "down_stage1.1.attention_layer2.mlp.1.bias", "down_stage1.1.attention_layer2.mlp.3.weight", "down_stage1.1.attention_layer2.mlp.3.bias", "down_stage1.1.conv.weight", "down_stage1.1.conv.bias", "down_stage1.1.conv_feature.0.weight", "down_stage1.1.conv_feature.0.bias", "down_stage1.1.conv_parsing.0.weight", "down_stage1.1.conv_parsing.0.bias", "down_stage1.1.conv_fusion.weight", "down_stage1.1.conv_fusion.bias", "down_stage1.1.attention_fusion.weight", "down_stage1.1.attention_fusion.bias", "conv_tail1.weight", "conv_tail1.bias", "conv.weight", "conv.bias", "up21.0.0.weight", "up21.0.0.bias", "conv_tail2.weight", "conv_tail2.bias", "up22.0.0.weight", "up22.0.0.bias", "up23.0.0.weight", "up23.0.0.bias", "conv_tail3.weight", "conv_tail3.bias", "up2_stage3.0.body.0.weight", "up2_stage3.0.body.0.bias", "up2_stage3.0.body.2.weight", "up2_stage3.0.body.2.bias", "up2_stage3.0.attention_layer1.spatial_layer1.weight", "up2_stage3.0.attention_layer1.spatial_layer1.bias", "up2_stage3.0.attention_layer1.spatial_layer3.weight", "up2_stage3.0.attention_layer1.spatial_layer3.bias", "up2_stage3.0.attention_layer2.mlp.1.weight", "up2_stage3.0.attention_layer2.mlp.1.bias", "up2_stage3.0.attention_layer2.mlp.3.weight", "up2_stage3.0.attention_layer2.mlp.3.bias", "up2_stage3.0.conv.weight", "up2_stage3.0.conv.bias", "up2_stage3.0.conv_feature.0.weight", "up2_stage3.0.conv_feature.0.bias", "up2_stage3.0.conv_parsing.0.weight", "up2_stage3.0.conv_parsing.0.bias", "up2_stage3.0.conv_fusion.weight", "up2_stage3.0.conv_fusion.bias", "up2_stage3.0.attention_fusion.weight", "up2_stage3.0.attention_fusion.bias", "up2_stage3.1.body.0.weight", "up2_stage3.1.body.0.bias", "up2_stage3.1.body.2.weight", "up2_stage3.1.body.2.bias", "up2_stage3.1.attention_layer1.spatial_layer1.weight", "up2_stage3.1.attention_layer1.spatial_layer1.bias", "up2_stage3.1.attention_layer1.spatial_layer3.weight", "up2_stage3.1.attention_layer1.spatial_layer3.bias", "up2_stage3.1.attention_layer2.mlp.1.weight", "up2_stage3.1.attention_layer2.mlp.1.bias", "up2_stage3.1.attention_layer2.mlp.3.weight", "up2_stage3.1.attention_layer2.mlp.3.bias", "up2_stage3.1.conv.weight", "up2_stage3.1.conv.bias", "up2_stage3.1.conv_feature.0.weight", "up2_stage3.1.conv_feature.0.bias", "up2_stage3.1.conv_parsing.0.weight", "up2_stage3.1.conv_parsing.0.bias", "up2_stage3.1.conv_fusion.weight", "up2_stage3.1.conv_fusion.bias", "up2_stage3.1.attention_fusion.weight", "up2_stage3.1.attention_fusion.bias".
Unexpected key(s) in state_dict: "refine2.0.attention.body.0.weight", "refine2.0.attention.body.0.bias", "refine2.0.attention.body.2.conv1.weight", "refine2.0.attention.body.2.conv1.bias", "refine2.0.attention.body.2.conv3.weight", "refine2.0.attention.body.2.conv3.bias", "refine2.0.attention.body.2.conv5.weight", "refine2.0.attention.body.2.conv5.bias", "refine2.0.attention.body.2.conv7.weight", "refine2.0.attention.body.2.conv7.bias", "refine2.0.attention.attention_layer2.mlp.1.weight", "refine2.0.attention.attention_layer2.mlp.1.bias", "refine2.0.attention.attention_layer2.mlp.3.weight", "refine2.0.attention.attention_layer2.mlp.3.bias", "refine2.1.attention.body.0.weight", "refine2.1.attention.body.0.bias", "refine2.1.attention.body.2.conv1.weight", "refine2.1.attention.body.2.conv1.bias", "refine2.1.attention.body.2.conv3.weight", "refine2.1.attention.body.2.conv3.bias", "refine2.1.attention.body.2.conv5.weight", "refine2.1.attention.body.2.conv5.bias", "refine2.1.attention.body.2.conv7.weight", "refine2.1.attention.body.2.conv7.bias", "refine2.1.attention.attention_layer2.mlp.1.weight", "refine2.1.attention.attention_layer2.mlp.1.bias", "refine2.1.attention.attention_layer2.mlp.3.weight", "refine2.1.attention.attention_layer2.mlp.3.bias", "refine2.2.attention.body.0.weight", "refine2.2.attention.body.0.bias", "refine2.2.attention.body.2.conv1.weight", "refine2.2.attention.body.2.conv1.bias", "refine2.2.attention.body.2.conv3.weight", "refine2.2.attention.body.2.conv3.bias", "refine2.2.attention.body.2.conv5.weight", "refine2.2.attention.body.2.conv5.bias", "refine2.2.attention.body.2.conv7.weight", "refine2.2.attention.body.2.conv7.bias", "refine2.2.attention.attention_layer2.mlp.1.weight", "refine2.2.attention.attention_layer2.mlp.1.bias", "refine2.2.attention.attention_layer2.mlp.3.weight", "refine2.2.attention.attention_layer2.mlp.3.bias", "refine2.3.attention.body.0.weight", "refine2.3.attention.body.0.bias", "refine2.3.attention.body.2.conv1.weight", "refine2.3.attention.body.2.conv1.bias", "refine2.3.attention.body.2.conv3.weight", "refine2.3.attention.body.2.conv3.bias", "refine2.3.attention.body.2.conv5.weight", "refine2.3.attention.body.2.conv5.bias", "refine2.3.attention.body.2.conv7.weight", "refine2.3.attention.body.2.conv7.bias", "refine2.3.attention.attention_layer2.mlp.1.weight", "refine2.3.attention.attention_layer2.mlp.1.bias", "refine2.3.attention.attention_layer2.mlp.3.weight", "refine2.3.attention.attention_layer2.mlp.3.bias", "up1.body.0.weight", "up1.body.0.bias", "up2.body.0.weight", "up2.body.0.bias", "up21.body.0.weight", "up21.body.0.bias", "up22.body.0.weight", "up22.body.0.bias".
size mismatch for refine2.0.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.1.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.2.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
size mismatch for refine2.3.conv.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 192, 1, 1]).
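
The combination of missing keys, unexpected keys, and size mismatches in this error means the FISHNET instance built in test.py does not have the same architecture as the network that produced the ×4 checkpoint; this typically happens when a command-line option (for example the scale) or the model code itself differs between training and testing. One way to narrow this down is to compare the checkpoint's keys and shapes against the freshly built model before calling load_state_dict. The snippet below is only a minimal diagnostic sketch, not code from the repository: it assumes it is pasted into test.py right before the failing net.load_state_dict(pretrained_dict) call, with net already constructed there, and "fishfsrnet_x4.pth" standing in as a placeholder for your own ×4 checkpoint path.

import torch

# Diagnostic sketch (assumption: `net` is the FISHNET instance already built in
# test.py; "fishfsrnet_x4.pth" is a placeholder for the x4 checkpoint path).
pretrained_dict = torch.load("fishfsrnet_x4.pth", map_location="cpu")

# Some training scripts save {"state_dict": ...} or wrap the model in
# nn.DataParallel, which prefixes every key with "module." -- unwrap if present.
if isinstance(pretrained_dict, dict) and "state_dict" in pretrained_dict:
    pretrained_dict = pretrained_dict["state_dict"]
pretrained_dict = {k[len("module."):] if k.startswith("module.") else k: v
                   for k, v in pretrained_dict.items()}

model_dict = net.state_dict()
ckpt_keys, model_keys = set(pretrained_dict), set(model_dict)

# Keys on only one side point to an architecture/option mismatch; shape
# mismatches point to a channel-width difference between the two builds.
print("keys only in checkpoint:", sorted(ckpt_keys - model_keys)[:20])
print("keys only in model:    ", sorted(model_keys - ckpt_keys)[:20])
print("shape mismatches:",
      [(k, tuple(pretrained_dict[k].shape), tuple(model_dict[k].shape))
       for k in sorted(ckpt_keys & model_keys)
       if pretrained_dict[k].shape != model_dict[k].shape])

Whichever side the leftover keys appear on shows which configuration changed. The usual remedy is to rebuild net with exactly the arguments used when training the ×4 model (or to point the script at the matching checkpoint), rather than loading with strict=False, since strict=False only ignores missing and unexpected keys and still raises on the refine2.*.conv.weight size mismatches.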