ceciliavision/zoom-learn-zoom

Question about CoBiRGB loss

zpkosmos opened this issue · 4 comments

Thanks for your work!
1. Does the compute_patch_contextual_loss in loss.py correspond to the CoBiRGB loss in your paper?
2. Is CoBiRGB a kind of pixel-level loss like the L1 loss in EDSR and RDN, except that it computes cosine distance?

I think so too, but cosine distance ignores the absolute difference between pixel values. I am not sure whether it would be better to use the absolute difference between pixel values instead.
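To make the concern concrete, here is a tiny toy example (plain NumPy, not code from the repo): two patches that differ only in brightness have essentially zero cosine distance but a large absolute difference.

```python
import numpy as np

# Toy illustration: two "patches" that differ only in brightness.
a = np.array([0.1, 0.1, 0.1])
b = np.array([0.9, 0.9, 0.9])

cos_dist = 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
l1_dist = np.abs(a - b).mean()

print(cos_dist)  # ~0.0  cosine distance ignores the overall scale
print(l1_dist)   # 0.8   absolute difference sees the brightness gap
```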

  1. compute_contextual_loss uses VGG features for feature search and matching, while compute_patch_contextual_loss uses RGB pixel values.
  2. L1 is a pixel-wise loss, whereas CoBiRGB is designed for unaligned data pairs: for every feature in the source, it only matches the most similar feature in the target.
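To illustrate the matching idea in point 2, here is a rough, simplified sketch in plain NumPy (not the repo's TensorFlow code; the function name, the w_s default, and the coordinate normalisation are illustrative choices): every source feature is compared with all target features using cosine distance plus a spatially weighted term, and only the best match contributes to the loss.

```python
import numpy as np

def cobi_rgb_sketch(src_feats, tgt_feats, src_xy, tgt_xy, w_s=0.1):
    """Toy sketch of CoBi-style matching (not the repo's implementation).

    src_feats, tgt_feats: (N, D) / (M, D) feature vectors, e.g. flattened
        n x n RGB patches.
    src_xy, tgt_xy: (N, 2) / (M, 2) patch coordinates normalised to [0, 1].
    w_s: weight on the spatial term (the "bilateral" part of CoBi).
    """
    # Cosine distance between every source and target feature.
    s = src_feats / (np.linalg.norm(src_feats, axis=1, keepdims=True) + 1e-8)
    t = tgt_feats / (np.linalg.norm(tgt_feats, axis=1, keepdims=True) + 1e-8)
    d_feat = 1.0 - s @ t.T                                   # (N, M)

    # Spatial distance between patch locations.
    d_xy = np.linalg.norm(src_xy[:, None, :] - tgt_xy[None, :, :], axis=-1)

    d = d_feat + w_s * d_xy
    # Each source feature is matched only to its most similar target feature,
    # so the loss tolerates misalignment between the two images.
    return d.min(axis=1).mean()
```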

When you introduce CoBiRGB, you write: "where we use n×n RGB patches as features for CoBiRGB, and n should be larger for the 8X zoom (optimal n = 15) than the 4X zoom model (optimal n = 10)".
I cannot understand the relation between the zoom factor and n.
1. What is the relation between the zoom factor and n?
2. Why should n×n patches be used as features rather than single RGB pixels?
3. For a 1X zoom model, how should I set n?
Can you please tell me?
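To check my understanding of question 2, here is a rough sketch of what I think "n×n RGB patches as features" means (a simple sliding window in plain NumPy, not code from loss.py). My guess is that a larger zoom factor means larger misalignment between the input and the target, so a larger n gives each feature more spatial context to be matched reliably.

```python
import numpy as np

def extract_rgb_patches(img, n, stride=1):
    """Sketch: slide an n x n window over an (H, W, 3) image and flatten
    each window into a feature vector of length n*n*3. These flattened
    patches, rather than single pixels, would then be the features matched
    by a CoBi-style loss."""
    h, w, _ = img.shape
    feats, coords = [], []
    for y in range(0, h - n + 1, stride):
        for x in range(0, w - n + 1, stride):
            feats.append(img[y:y + n, x:x + n, :].reshape(-1))
            coords.append((y / h, x / w))   # normalised patch location
    return np.asarray(feats), np.asarray(coords)
```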

I set n = 5 for 2X zoom, and also tried n = 10 for 2X zoom, but I cannot find any difference between the two settings except for more artifacts on the restored images. Has anyone else encountered such artifacts?