tkwoo/anogan-keras

Loss Function for Generator and Discriminator

Closed · 1 comment

Why are we using 'mse' as the loss function for both the generator and the discriminator? Shouldn't we use 'binary_crossentropy' instead when compiling the models?

Also, what is the reason behind using Conv2DTranspose layers instead of UpSampling layers?

tkwoo commented

Of course, you can use cross-entropy (as in the original DCGAN).
I used mse because I had read the LSGAN (Least Squares GAN) paper, which argues that a least-squares loss makes the training process more stable.
Please check the LSGAN paper: https://arxiv.org/abs/1611.04076
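
As a rough sketch (not this repo's actual code; the toy discriminator and optimizer settings below are assumptions), the choice shows up only in the `compile` call: 'mse' on the discriminator output gives the LSGAN objective, while 'binary_crossentropy' recovers the original DCGAN loss:

```python
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import Adam

# Hypothetical toy discriminator; the real architecture lives in the repo.
discriminator = Sequential([
    Flatten(input_shape=(28, 28, 1)),
    Dense(128, activation='relu'),
    Dense(1, activation='sigmoid'),  # real/fake score in [0, 1]
])

# LSGAN: regress real samples toward 1 and fakes toward 0 with a
# least-squares loss, which penalizes confident-but-wrong outputs smoothly.
discriminator.compile(loss='mse', optimizer=Adam(lr=0.0002, beta_1=0.5))

# Original DCGAN alternative: same model, cross-entropy loss instead.
# discriminator.compile(loss='binary_crossentropy',
#                       optimizer=Adam(lr=0.0002, beta_1=0.5))
```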

From my own experience, Conv2DTranspose works better than upsampling. I think the reason is that a transposed convolution adds more non-linearity than upsampling, since upsampling itself is not trainable... well, I am not sure of the exact reason.
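
For comparison (again just a sketch, with made-up shapes and filter counts), the two options look like this in Keras: Conv2DTranspose learns its own upsampling kernel, while UpSampling2D has no weights and must be paired with a regular Conv2D to learn anything:

```python
from keras.models import Model
from keras.layers import Input, Conv2D, Conv2DTranspose, UpSampling2D

inp = Input(shape=(7, 7, 128))  # hypothetical feature map

# Option 1: transposed convolution; the 4x4 kernel is trainable, so the
# layer learns how to upsample while doubling the spatial size.
up_a = Conv2DTranspose(64, kernel_size=4, strides=2, padding='same',
                       activation='relu')(inp)

# Option 2: fixed nearest-neighbour upsampling (no parameters) followed
# by a learned convolution; only the Conv2D carries trainable weights.
up_b = Conv2D(64, kernel_size=3, padding='same',
              activation='relu')(UpSampling2D(size=2)(inp))

print(Model(inp, up_a).count_params())  # weights in the transposed conv
print(Model(inp, up_b).count_params())  # weights only in the Conv2D
```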