Patch selection for training
tboen1 opened this issue · 2 comments
Hi, how are patches selected for training the generator and discriminator at each scale? I'm aware that the receptive field remains constant while the image is upscaled at each scale, but could someone direct me to a specific method or piece of code that shows how these patches are selected?
I believe that the method `creat_reals_pyramid(real, reals, opt)` creates the rescaled image at each scale, but how/where are patches extracted from these rescaled images?
Thank you.
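(For readers following along, here is a minimal sketch of the kind of pyramid that function produces: whole rescaled copies of the real image at every level, with no patch extraction anywhere. The helper name `build_pyramid`, the `scale_factor`, and the `num_scales` values are illustrative assumptions, not the repo's exact code, and `real` is assumed to be a 4D `(N, C, H, W)` tensor.)

```python
import torch
import torch.nn.functional as F

def build_pyramid(real, scale_factor=0.75, num_scales=8):
    """Return a list of whole images, coarsest first; no patches are cut anywhere."""
    reals = []
    for i in range(num_scales, -1, -1):
        scale = scale_factor ** i  # smallest at the coarsest level, 1.0 at the finest
        reals.append(F.interpolate(real, scale_factor=scale,
                                   mode='bilinear', align_corners=False))
    return reals
```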
We do not directly extract patches from the image at any point.
As you said, the receptive field of both the generator and the discriminator is 11x11, which means they only ever see small patches within the image.
The discriminator gets an input image (either real or fake) and outputs a discrimination map in which each pixel is the discriminator's score for the corresponding 11x11 patch of the input image.
Because of this, we never need to explicitly extract image patches.
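To make this concrete, here is a minimal sketch of such a fully convolutional (Markovian/PatchGAN-style) discriminator, assuming five 3x3, stride-1 convolutions, which yields the 11x11 receptive field (1 + 5*(3-1) = 11). The layer width and the absence of normalization layers are simplifying assumptions for brevity, not the repo's exact architecture:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=3, width=32):
        super().__init__()
        layers = [nn.Conv2d(in_channels, width, kernel_size=3, padding=1),
                  nn.LeakyReLU(0.2)]
        for _ in range(3):  # three intermediate 3x3 conv blocks
            layers += [nn.Conv2d(width, width, kernel_size=3, padding=1),
                       nn.LeakyReLU(0.2)]
        layers += [nn.Conv2d(width, 1, kernel_size=3, padding=1)]  # fifth conv: 1-channel score map
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Same spatial size as the input; every output value scores one 11x11 patch.
        return self.body(x)

img = torch.randn(1, 3, 128, 128)       # a whole image, no cropping
score_map = PatchDiscriminator()(img)    # -> torch.Size([1, 1, 128, 128])
```

Because the network is purely convolutional, feeding it the whole image is equivalent to scoring every 11x11 patch at once, which is why no explicit patch extraction is needed.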
Hi, I have a question: at each scale (level), do we work with 11x11 patches from the down-sampled real image, or with patches from the full-size real image? This is what confuses me about how the receptive field increases as we go up (to the finest scale). Thank you for your attention. Regards.
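To illustrate the point from the reply above with some simple arithmetic: at every level the networks see the whole image of that level (i.e. the down-sampled real at coarse scales), and the receptive field stays a fixed 11x11 pixels, so the same patch covers a larger fraction of the scene at coarse scales and a smaller one at fine scales. The image sizes below are assumed for illustration only, not taken from the repo:

```python
receptive_field = 11
# Assumed image widths per scale, coarse to fine (illustration only)
for side in [25, 33, 44, 59, 78, 104, 139, 186, 248]:
    coverage = receptive_field / side
    print(f"{side}x{side} image: an 11x11 patch spans {coverage:.0%} of the width")
```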