Two problems when training model to generate 128x128
shoutOutYangJie opened this issue · 1 comment
When I train the model to generate 64x64 images, everything is normal. However, when I train at 128x128, I get a strange result from the auxiliary input.
The first five rows show the output of the INR network, and the last five rows show the output of the auxiliary RGB layer. The result is clearly abnormal, whereas at 64x64 the last five rows are normal as well.
So I wonder whether the author has also encountered this problem.
By the way, I suspect that when I increased the resolution, new layers were also inserted in front of the original discriminator. I am not sure whether randomly initialized layers inserted before a pretrained network can actually work.
So I have two questions:
- Is the above result of the auxiliary RGB layer normal?
- Is it reasonable to insert new layers into a pretrained model?
Thanks.
- Use the following training opts:
--tl_opts curriculum.new_attrs.image_list_file datasets/ffhq/images256x256_image_list.txt \
D_first_layer_warmup True reset_best_fid True update_aux_every 16 d_reg_every 1 train_aux_img True
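The `D_first_layer_warmup True` flag suggests that the newly added discriminator layers are warmed up before full joint training. A minimal sketch of one way such a warmup can work, assuming PyTorch (the helper `set_warmup`, the module `TinyD`, and the layer names are illustrative, not the actual repo implementation):

```python
# Hypothetical sketch of a first-layer warmup: freeze the pretrained
# discriminator body so that only the newly inserted input layers train.
# `set_warmup` and `TinyD` are illustrative, not the actual CIPS-3D code.
import torch.nn as nn

def set_warmup(disc: nn.Module, new_layers: set, warming_up: bool) -> None:
    """During warmup, only parameters belonging to `new_layers` get gradients."""
    for name, param in disc.named_parameters():
        is_new = name.split(".")[0] in new_layers
        param.requires_grad = is_new or not warming_up

class TinyD(nn.Module):
    def __init__(self):
        super().__init__()
        self.new_from_rgb = nn.Conv2d(3, 8, 1)  # new, randomly initialized
        self.body = nn.Conv2d(8, 8, 3)          # stands in for pretrained weights

disc = TinyD()
set_warmup(disc, {"new_from_rgb"}, warming_up=True)
print(disc.body.weight.requires_grad)          # pretrained body is frozen
print(disc.new_from_rgb.weight.requires_grad)  # new layer still trains
```

Once the new layers have stabilized, calling `set_warmup(disc, ..., warming_up=False)` re-enables gradients for the whole network.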
- Based on my experience training StyleGAN, adding new layers to the pretrained discriminator works well in our case, where the generator is already able to generate reasonable images.
However, when the generator is initialized randomly, it will most likely not work, because the generator and discriminator are unbalanced in that case (the discriminator is much stronger than the generator).
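The layer-insertion scheme described above can be sketched as follows, assuming PyTorch. `Disc64`, `Disc128`, and the layer widths are hypothetical stand-ins, not the actual CIPS-3D discriminator: the pretrained 64x64 body is reused unchanged, and a randomly initialized downsampling block is prepended to handle the extra resolution level.

```python
# Hypothetical sketch: extend a discriminator pretrained at 64x64 to accept
# 128x128 input by prepending new, randomly initialized layers.
# Module and layer names are illustrative, not the actual CIPS-3D code.
import torch
import torch.nn as nn

class Disc64(nn.Module):
    """Stand-in for a discriminator pretrained on 64x64 images."""
    def __init__(self):
        super().__init__()
        self.from_rgb = nn.Conv2d(3, 64, 1)  # pretrained input layer
        self.body = nn.Sequential(
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )

    def forward(self, x):  # x: (N, 3, 64, 64)
        return self.body(self.from_rgb(x))

class Disc128(nn.Module):
    """Wraps the pretrained body with a new, randomly initialized front block."""
    def __init__(self, pretrained: Disc64):
        super().__init__()
        self.new_from_rgb = nn.Conv2d(3, 64, 1)  # new input layer for 128x128
        self.new_block = nn.Sequential(          # new 128 -> 64 downsampling
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.body = pretrained.body              # reuse pretrained weights

    def forward(self, x):  # x: (N, 3, 128, 128)
        return self.body(self.new_block(self.new_from_rgb(x)))

d64 = Disc64()
d128 = Disc128(d64)
out = d128(torch.randn(2, 3, 128, 128))
print(out.shape)  # same feature-map shape the pretrained body produced at 64x64
```

Because `Disc128.body` shares the pretrained weights, only the front block starts from random initialization, which is why a warmup phase for those layers helps keep the discriminator from being destabilized.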