Regarding the LSGAN and RaLSGAN variant loss functions
Closed this issue · 1 comment
Hi Lucas,
I'm a bit confused about the LSGAN variant. As per your code, the discriminator loss is computed as follows:
if args.loss == 0:
    dis_loss = (((real_out - fake_out.mean() - 1) ** 2).mean() +
                ((fake_out - real_out.mean() + 1) ** 2).mean()) / 2
else:
    dis_loss = (torch.mean((real_out - 1) ** 2) + torch.mean((fake_out - 0) ** 2)) / 2
For the discriminator, the code uses (real_out - fake_out.mean() - 1) in the first term of the loss and (fake_out - real_out.mean() + 1) in the second term, but for the generator the signs are swapped. Is this done to fool the discriminator?
Requesting your help on this.
--Prashant
Hi,
Sorry, I don't really remember. I think loss == 0 is the RaLSGAN, since dis_loss = (torch.mean((real_out - 1) ** 2) + torch.mean((fake_out - 0) ** 2)) / 2 looks like the original definition of LSGAN. But I might be wrong; it's been a while since I've played with GANs.
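For anyone else reading this thread, here is a minimal PyTorch sketch of how the two variants usually look side by side. The function names are made up for illustration, real_out and fake_out follow the snippet above, and the generator losses reflect the "swapped signs" reading of RaLSGAN and the original LSGAN paper rather than code taken from this repo:

    import torch

    def ralsgan_d_loss(real_out, fake_out):
        # RaLSGAN discriminator loss: real scores pushed ~1 above the
        # average fake score, fake scores ~1 below the average real score.
        return (((real_out - fake_out.mean() - 1) ** 2).mean() +
                ((fake_out - real_out.mean() + 1) ** 2).mean()) / 2

    def ralsgan_g_loss(real_out, fake_out):
        # Generator loss: the same expression with the roles of real and fake
        # swapped (the "replaced signs"), so fakes are pushed above the
        # average real score.
        return (((fake_out - real_out.mean() - 1) ** 2).mean() +
                ((real_out - fake_out.mean() + 1) ** 2).mean()) / 2

    def lsgan_d_loss(real_out, fake_out):
        # Original LSGAN discriminator loss: real scores toward 1, fake toward 0.
        return (torch.mean((real_out - 1) ** 2) + torch.mean((fake_out - 0) ** 2)) / 2

    def lsgan_g_loss(fake_out):
        # Original LSGAN generator loss: push fake scores toward 1.
        return torch.mean((fake_out - 1) ** 2)

Note that in the relativistic version the generator loss also depends on real_out, so a batch of real samples has to go through the discriminator on the generator step as well.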