SRFormer based on Real-ESRGAN?
AIisCool opened this issue · 10 comments
Is this still coming at some point?
Sorry for the late reply. I trained SRFormer on Real-ESRGAN earlier, but have not yet had the opportunity to verify its correctness because I was preoccupied with another project. Today I briefly checked this model. If there are no issues, I plan to update the repo within a week.
@Z-YuPeng Any news?
VERY SORRY for the delay! I have updated a version; thanks for your continued interest in my work. Also, we are about to launch SRFormer V2.
Hi, thank you @Z-YuPeng for sharing the real-world SR checkpoint.
However, I get an error when loading the state_dict for SRFormer with your newly uploaded checkpoint and YAML config file. The error message is as follows.
RuntimeError: Error(s) in loading state_dict for SRFormer:
    size mismatch for layers.0.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.0.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.0.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.1.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.1.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.1.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.2.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.2.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.2.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.3.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.3.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.3.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.4.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.4.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.4.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.5.residual_group.blocks.1.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.5.residual_group.blocks.3.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
    size mismatch for layers.5.residual_group.blocks.5.attn_mask: copying a param with shape torch.Size([9, 576, 144]) from checkpoint, the shape in current model is torch.Size([4, 576, 144]).
The inference command I used with infer_sr.py was as follows.
python basicsr/infer_sr.py -opt options/test/SRFormer/test_SRFormer-S_x4_real.yml --input_dir ../datasets/RealSRSet+5images/ --output_dir ./out/
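For anyone who hits the same mismatch before updated weights are available, here is how I read it and a possible workaround (my own interpretation, not confirmed by the authors): attn_mask is a buffer computed from the training patch size and the window size (576 = 24²), not a learned weight; 9 masks would come from a 72×72 patch, while the 48×48 patch in the test config yields 4. One option is therefore to strip the attn_mask buffers from the checkpoint and let the model keep the ones it builds from the yml. The file path and the params/params_ema keys below are assumptions based on the usual BasicSR checkpoint layout.

```python
import torch

# Sketch of a workaround, assuming the usual BasicSR checkpoint layout
# ({'params': state_dict} or {'params_ema': state_dict}); adjust the path as needed.
path = 'experiments/pretrained_models/SRFormer-S_x4_real.pth'
ckpt = torch.load(path, map_location='cpu')
key = 'params_ema' if 'params_ema' in ckpt else 'params' if 'params' in ckpt else None
state = ckpt[key] if key else ckpt

# attn_mask depends only on the patch/window geometry, so dropping it is safe:
# the model re-registers these buffers when it is built from the yml config.
state = {k: v for k, v in state.items() if 'attn_mask' not in k}

torch.save({key: state} if key else state, path.replace('.pth', '_stripped.pth'))
```

Since the stripped file no longer contains every key the model expects, strict loading may still complain; setting strict_load_g: false in the yml (or passing strict=False when loading manually) should let it pass.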
Hi @cyy2427, thanks for your quick reply. I have fixed it and uploaded a new weight for SRFormer-S_x4_real; please redownload SRFormer-S_x4_real.pth and run git pull in this repo.
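If anyone wants to sanity-check the reuploaded weights against the 48×48 test config before running inference, a quick inspection like the one below works; the params/params_ema unwrapping is an assumption about the checkpoint layout.

```python
import torch

# Print the attn_mask shapes stored in the checkpoint; with a 48x48 test patch and
# 24x24 windows they should now match the model, i.e. torch.Size([4, 576, 144])
# (or be absent entirely if the buffers were dropped from the file).
ckpt = torch.load('SRFormer-S_x4_real.pth', map_location='cpu')
state = ckpt.get('params_ema', ckpt.get('params', ckpt))
for name, tensor in state.items():
    if 'attn_mask' in name:
        print(name, tuple(tensor.shape))
```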
@Z-YuPeng Can you also upload the pre-change parameter version? In Chainner, that version worked normally for me, unlike the corrected one. Thanks.
Hi @zelenooki87, I have reuploaded it! Thanks for your suggestion!
Thank you so much for the reupload. I converted the model to ONNX, and with the vsmlrt module in Selur's Hybrid it works phenomenally for upscaling video clips. I am thrilled. It is also good for photos, of course. Can't wait for SRFormer V2. Greetings.
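For others who want to reproduce the ONNX conversion for vsmlrt, a rough sketch of what such an export can look like is below. The basicsr.archs.srformer_arch import path, the network_g section of the yml, and the checkpoint key names are assumptions; match them to the repo and config you actually have.

```python
import torch
import yaml

# Assumed import path for the SRFormer architecture in this BasicSR-based repo.
from basicsr.archs.srformer_arch import SRFormer

# Build the network from the same options used for testing (network_g holds the arch args).
with open('options/test/SRFormer/test_SRFormer-S_x4_real.yml') as f:
    opt = yaml.safe_load(f)
arch_opt = dict(opt['network_g'])
arch_opt.pop('type', None)  # 'type: SRFormer' only selects the class in BasicSR
model = SRFormer(**arch_opt)

ckpt = torch.load('SRFormer-S_x4_real.pth', map_location='cpu')
# strict=False tolerates buffer-only differences such as attn_mask.
model.load_state_dict(ckpt.get('params_ema', ckpt.get('params', ckpt)), strict=False)
model.eval()

# Export at a fixed tile size that is a multiple of the 24x24 window; vsmlrt can tile
# larger frames down to this size. Adjust 96 to taste.
dummy = torch.randn(1, 3, 96, 96)
torch.onnx.export(model, dummy, 'SRFormer-S_x4_real.onnx',
                  opset_version=17, input_names=['input'], output_names=['output'])
```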
@Z-YuPeng Quick question - I can't find "SRFormer-S_x4_real.pth" in the Google Drive link. Was it removed for some reason?