Why not use a patch discriminator?
taki0112 opened this issue · 4 comments
In the original paper, the discriminator is a PatchGAN.
In your model.py it seems to be implemented (via random_crop; I'm not sure whether that is correct).
Why didn't you use it?
There was no particular reason not to implement PatchGAN. I talked to the authors, and they said they did not use PatchGAN in their implementation on GitHub.
They used the following function for the discriminator: https://github.com/junyanz/CycleGAN/blob/master/models/architectures.lua#L338
I'm surprised that the authors said they did not use PatchGAN!
In the training options https://github.com/junyanz/CycleGAN/blob/master/options.lua they set n_layers_D = 3, and I thought that would lead to a PatchGAN with a 70x70 receptive field? (See the receptive-field check below.)
I also thought cropping the image to 70x70 would be correct, if we average over all overlapping patches of the image.
junyanz/pytorch-CycleGAN-and-pix2pix#39
But this code seems to differ slightly from the original idea?
If anything is incorrect, please tell me~
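As a sanity check on the 70x70 figure, here is a small sketch (my own, not taken from either repository) that walks the receptive field through the discriminator stack usually assumed for n_layers_D = 3: three stride-2 4x4 convolutions followed by two stride-1 4x4 convolutions.

```python
# Sketch: receptive-field bookkeeping for the discriminator stack assumed
# for n_layers_D = 3 (three stride-2 layers, then two stride-1 layers,
# all with 4x4 kernels).
layers = [
    ("conv1", 4, 2),  # (name, kernel, stride)
    ("conv2", 4, 2),
    ("conv3", 4, 2),
    ("conv4", 4, 1),
    ("conv5", 4, 1),  # final 1-channel prediction layer
]

rf, jump = 1, 1  # receptive field and cumulative stride of one output pixel
for name, k, s in layers:
    rf += (k - 1) * jump
    jump *= s
    print(f"{name}: receptive field = {rf}x{rf}")
# Last line prints: conv5: receptive field = 70x70
```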
Hi @ydnaandy123,
That's right, we used a 70x70 PatchGAN discriminator for CycleGAN, which is indeed what n_layers_D = 3 does. The discriminator has the same architecture as in pix2pix.
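For anyone following along, here is a minimal sketch of what such a 70x70 PatchGAN discriminator typically looks like in PyTorch, assuming the standard pix2pix-style C64-C128-C256-C512 stack; the class name and the normalization choice here are my own assumptions, not code copied from either repository. The key point is that the whole image is processed convolutionally and the loss is averaged over every spatial position of the output map, so each output value judges one overlapping 70x70 patch; randomly cropping a single 70x70 patch is only an approximation of this.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Sketch of a 70x70 PatchGAN discriminator (pix2pix-style, n_layers = 3)."""
    def __init__(self, in_channels=3, ndf=64):
        super().__init__()
        self.net = nn.Sequential(
            # C64: no normalization on the first layer
            nn.Conv2d(in_channels, ndf, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            # C128
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(ndf * 2),
            nn.LeakyReLU(0.2, inplace=True),
            # C256
            nn.Conv2d(ndf * 2, ndf * 4, 4, stride=2, padding=1),
            nn.InstanceNorm2d(ndf * 4),
            nn.LeakyReLU(0.2, inplace=True),
            # C512, stride 1
            nn.Conv2d(ndf * 4, ndf * 8, 4, stride=1, padding=1),
            nn.InstanceNorm2d(ndf * 8),
            nn.LeakyReLU(0.2, inplace=True),
            # One-channel prediction per spatial position (per 70x70 patch)
            nn.Conv2d(ndf * 8, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # a grid of patch logits, not a single scalar

d = PatchDiscriminator()
patch_logits = d(torch.randn(1, 3, 256, 256))
print(patch_logits.shape)  # torch.Size([1, 1, 30, 30]) for a 256x256 input
# The GAN loss is averaged over all 30x30 positions instead of cropping one patch.
```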
Same question here. Glad to see some explanation that isn't in the blog post :)