warmup learning rate should be float?
3zhang opened this issue · 4 comments
Current:

```python
parser.add_argument("--warmup-learning-rate", type=int, default=1e-6,
                    help="learning rate for warmup")
```
You're right.
Also, the warmup option has a problem: it is only applied for the first epoch.
I can't touch my PC right now; I will check when I get home (next week).
I read your Waifu2xDataset code. At the end of __getitem__ there's a line TF.pad(y, [-self.model_offset] * 4). Could you explain it a little? What is model_offset?
model_offset is the unpad (crop) size of the model output.
Historically, waifu2x models do not use zero padding for conv2d, because of the problem of visible seams in tiled rendering (related to nagadomi/waifu2x#238; the quoted comment there was written by me).
So the output size of the model is smaller than 2x the input size.
For example, UpConv7.i2i_scale = 2 and UpConv7.i2i_offset = 14
(nunif/waifu2x/models/upconv_7.py, lines 7 to 11 in 2992576).

Running the code at nunif/waifu2x/models/upconv_7.py, lines 41 to 51 in 2992576, with an input x of 256x256 prints

torch.Size([1, 3, 484, 484])

so the output size is 484x484.
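A minimal sketch of the same shape arithmetic, using a stand-in network (not the repo's actual UpConv7 module) built from six unpadded 3x3 convolutions followed by a 4x4, stride-2 transposed convolution:

```python
import torch
import torch.nn as nn

# Stand-in for UpConv7, not the repo's actual module.
# Every conv uses padding=0, so each 3x3 conv trims 1 px per side
# (6 px per side before upscaling, i.e. 12 px per side at 2x);
# the final transposed conv trims 2 more px per side, for a total
# offset of 14 px per side in output coordinates.
layers = []
in_ch = 3
for out_ch in (16, 32, 64, 128, 128, 256):
    layers += [nn.Conv2d(in_ch, out_ch, 3, padding=0), nn.LeakyReLU(0.1)]
    in_ch = out_ch
layers.append(nn.ConvTranspose2d(in_ch, 3, 4, stride=2, padding=3))
model = nn.Sequential(*layers)

x = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    print(model(x).shape)  # torch.Size([1, 3, 484, 484])
```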
output_size = input_size * scale - offset * 2
(484 = 256 * 2 - 14 * 2)
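To connect this back to the original question: in the dataset, the negative padding crops model_offset pixels from each side of the 2x ground-truth tile y so that it matches the smaller model output. A rough sketch, assuming a tensor y and a hypothetical 512x512 ground truth for a 256x256 input:

```python
import torch
import torchvision.transforms.functional as TF

model_offset = 14             # per-side crop in output coordinates (UpConv7.i2i_offset)
y = torch.rand(3, 512, 512)   # hypothetical 2x ground-truth tile for a 256x256 input

# Negative padding removes pixels instead of adding them: 14 px are
# cropped from each border, so y becomes 484x484 and lines up with
# the model output.
y = TF.pad(y, [-model_offset] * 4)
print(y.shape)  # torch.Size([3, 484, 484])
```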