twtygqyy/pytorch-SRResNet

Orthogonal weight init

Kaixhin opened this issue · 7 comments

Shouldn't the repo be called pytorch-SRGAN instead of pytorch-SRResNet?

Anyway, the same group mentions using orthogonal weight initialisation in the ESPCN paper, released around the same time - even if they haven't specified it for SRGAN, it's definitely worth trying. The text is as follows:

Biases are initialised to 0 and weights use orthogonal initialisation with gain √2 following recommendations in [30].

So for all convolutional layers you'll want:

```python
import math
import torch.nn as nn

nn.init.orthogonal_(layer.weight, math.sqrt(2))
nn.init.constant_(layer.bias, 0)
```

Also, there is a v5 of the paper which I believe has more training details, so it's worth checking carefully to see if there's anything you missed.

@Kaixhin Thanks for the suggestions, I will take a look and have a try.

You may also want to check out the PyTorch super-resolution example for info on weight init, if need be.

They use orthogonal init there (but it was written before `nn.init` existed), so you should indeed do that. Note that the √2 gain is for layers preceding ReLUs, and therefore the final (output) layer does not get it (it uses the default gain of 1).
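Putting the above together, a minimal sketch (the three-layer stack below is just a stand-in for the real SRResNet architecture):

```python
import math
import torch
import torch.nn as nn

# Stand-in for an SRResNet-style network: conv layers followed by ReLUs,
# plus a final output conv with no ReLU after it.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),  # output layer: no ReLU follows
)

convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
for i, conv in enumerate(convs):
    # gain sqrt(2) for layers preceding ReLUs, default gain 1 for the output layer
    gain = math.sqrt(2) if i < len(convs) - 1 else 1.0
    nn.init.orthogonal_(conv.weight, gain)
    nn.init.constant_(conv.bias, 0)
```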

Hi, I have a question: why not implement the VGG loss from the original paper?

@CasdDesnDR I updated the code for content loss support.

@Kaixhin Hi, I don't know why it needs so much memory at test time, while training is normal. Have you ever had this problem?

```
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1501969512886/work/pytorch-0.1.12/torch/lib/THC/generic/THCStorage.cu:66
```

@HPL123 There's not enough detail in your comment to determine what the issue is, but if you are talking about running out of GPU memory because of the orthogonal weight initialisation (if not, wrong issue), then you should initialise on the CPU and only afterwards transfer the model to the GPU.
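That is, something along these lines (the single conv layer is a stand-in for the full model):

```python
import math
import torch
import torch.nn as nn

# Stand-in for the full SRResNet; modules are created on the CPU by default.
model = nn.Conv2d(3, 64, 3, padding=1)

# Run the orthogonal init while the parameters still live on the CPU,
# so no extra GPU memory is consumed during initialisation.
nn.init.orthogonal_(model.weight, math.sqrt(2))
nn.init.constant_(model.bias, 0)

# Transfer to the GPU only after initialisation is done.
if torch.cuda.is_available():
    model = model.cuda()
```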