Why is the direction of the loss function the same in the Generator and the Discriminator?
Minqi824 opened this issue · 1 comment
Minqi824 commented
Here I mainly want to discuss the latent loss, which appears in both the generator and the discriminator.
In the generator, the latent loss should be minimized (the smaller the better), so that the x and x_hat vectors become more similar.
However, in the discriminator the latent loss should point in the opposite direction (i.e., be negated), so that the whole GAN is trained adversarially. Could the author answer this question? Thanks a lot! @samet-akcay
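To make the question concrete, here is a minimal PyTorch sketch of the training pattern being asked about. This is my own illustrative code, not the repository's implementation; the names `netG`, `netD_feat`, `netD_head`, and the toy architectures are assumptions.

```python
# Minimal illustrative sketch (NOT the repo's code).
# The generator minimizes a latent / feature-matching loss ||f(x) - f(x_hat)||;
# the question is whether the discriminator should use the negated term,
# or a separate real/fake objective, to keep the training adversarial.
import torch
import torch.nn as nn

netG = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))   # toy generator: x -> x_hat
netD_feat = nn.Sequential(nn.Linear(16, 8), nn.ReLU())                  # toy discriminator feature extractor f(.)
netD_head = nn.Linear(8, 1)                                             # real/fake classification head
opt_g = torch.optim.Adam(netG.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(list(netD_feat.parameters()) + list(netD_head.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
l2 = nn.MSELoss()

x = torch.randn(4, 16)  # dummy batch

# --- Generator step: the latent loss is minimized, pulling f(x_hat) toward f(x) ---
x_hat = netG(x)
loss_g_latent = l2(netD_feat(x_hat), netD_feat(x).detach())  # smaller is better for G
opt_g.zero_grad()
loss_g_latent.backward()
opt_g.step()

# --- Discriminator step: should this be `-loss_g_latent` (maximizing the gap),
# --- or an independent real/fake BCE loss as sketched below? ---
x_hat = netG(x).detach()
loss_d = bce(netD_head(netD_feat(x)), torch.ones(4, 1)) + \
         bce(netD_head(netD_feat(x_hat)), torch.zeros(4, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()
```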
tomatowithpotato commented
same problem