Output is not sharp
harpavatkeerti opened this issue · 5 comments
Hi, very interesting work! I wonder whether a slightly modified architecture can also be used for image deblurring. I tried the following modified model for deblurring 128x128 images:
```yaml
network_g:
  type: SRFormer
  upscale: 1
  in_chans: 3
  img_size: 128
  window_size: 16
  img_range: 1.
  depths: [6, 6, 6, 6]
  embed_dim: 60
  num_heads: [6, 6, 6, 6]
  mlp_ratio: 2
  # upsampler: None
  resi_connection: '1conv'
```
with the simple loss function

```python
l1_loss = torch.nn.L1Loss(reduction='mean')
```
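To make the setup concrete, here is a minimal sketch of a training step under this objective. The model here is a placeholder conv net standing in for SRFormer (with upscale: 1 the output resolution matches the input), since the real network isn't importable in a self-contained snippet; the tensors are random stand-ins for a blurry/sharp image pair.

```python
import torch

# Placeholder for the SRFormer restoration network (upscale=1,
# so output size equals input size). Illustrative only.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 60, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(60, 3, 3, padding=1),
)
l1_loss = torch.nn.L1Loss(reduction='mean')
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

blurry = torch.rand(1, 3, 128, 128)  # degraded input
sharp = torch.rand(1, 3, 128, 128)   # ground-truth target

restored = model(blurry)
loss = l1_loss(restored, sharp)
loss.backward()
optimizer.step()
```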
The outputs I am getting are decent, but the model is not able to produce sharp edges and corners. The output looks smooth, closer to an oil painting. Can you please tell me if there is something wrong with this approach?
I had a couple of other doubts about the model architecture:
- I didn't understand the img_size parameter. I can pass an image of any size as input and the model produces output without any error.
- Are you not using cross-window attention as SwinIR does through shifted windows?
Thanks a lot!!!
I am very sorry for the late reply.
> I didn't understand the img_size parameter. I am able to pass any sized image as input and the model gives the output without any error.
You are right: the img_size parameter is only used to initialize the attention mask before training. If you pass an input whose size differs from img_size, the mask is simply recomputed, so no error is raised.
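The pattern described above can be sketched as follows. This is an illustrative simplification, not SRFormer's actual code: the class and function names are hypothetical, and the Swin-style mask construction is reduced to its region-labeling step. The point is only the caching behavior: the mask is built once for img_size, then rebuilt on the fly whenever the forward pass sees a different spatial size.

```python
import torch

def calculate_mask(h, w, window_size=16, shift=8):
    # Simplified Swin-style region labeling for the shifted-window
    # attention mask: assign each pixel a region index so that pixels
    # from different regions can later be masked out of attention.
    img_mask = torch.zeros(1, h, w, 1)
    cnt = 0
    for hs in (slice(0, -window_size), slice(-window_size, -shift), slice(-shift, None)):
        for ws in (slice(0, -window_size), slice(-window_size, -shift), slice(-shift, None)):
            img_mask[:, hs, ws, :] = cnt
            cnt += 1
    return img_mask

class MaskedBlock(torch.nn.Module):
    def __init__(self, img_size=128):
        super().__init__()
        self.input_resolution = (img_size, img_size)
        # Mask precomputed once for the configured img_size.
        self.register_buffer("attn_mask", calculate_mask(img_size, img_size))

    def forward(self, x):
        h, w = x.shape[-2:]
        if (h, w) == self.input_resolution:
            return self.attn_mask                     # reuse cached mask
        return calculate_mask(h, w).to(x.device)      # recompute for new size
```

This is why any input size works: a mismatch with img_size only costs an extra mask computation, not an error.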
> Are you not using cross-window attention as SwinIR does through shifted windows?
We also use shifted windows; you can find the implementation here.
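For readers unfamiliar with the mechanism: alternating blocks cyclically shift the feature map by half the window size before partitioning, so windows in consecutive blocks overlap and information flows across window boundaries. A minimal sketch of the shift/partition round trip, purely illustrative and not SRFormer's exact code:

```python
import torch

def shift_and_partition(x, window_size=16, shift=8):
    # x: (B, H, W, C). Cyclic shift, then split into non-overlapping windows.
    b, h, w, c = x.shape
    shifted = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    windows = shifted.view(b, h // window_size, window_size,
                           w // window_size, window_size, c)
    return windows.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, c)

def reverse_shift(windows, h, w, window_size=16, shift=8):
    # Undo the window partitioning, then undo the cyclic shift.
    c = windows.shape[-1]
    x = windows.view(-1, h // window_size, w // window_size,
                     window_size, window_size, c)
    x = x.permute(0, 1, 3, 2, 4, 5).reshape(-1, h, w, c)
    return torch.roll(x, shifts=(shift, shift), dims=(1, 2))
```

Attention is computed within each window after the shift; the reverse operation restores the original layout exactly.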
> The outputs which I am getting are decent, but it is not able to produce sharp edges and corners. It seems to be a smooth output, closer to some oil painting. Can you please tell me if there is something wrong with this method?
If your code is implemented correctly, this may be a question worth exploring. We haven't tested our approach on deblurring, so we may not be able to answer this.
I didn't check the issue earlier because of other commitments. Thank you for your attention to our work!
No issues, thanks a lot for the replies.