junyanz/BicycleGAN

Question about backward_G_alone


In this line, could you explain why the loss is `torch.mean(torch.abs(self.mu2 - self.z_random))` and not `torch.mean(torch.abs(self.z_predict - self.z_random))`?

As we mention in the paper (Sec. 3.3): "Note that the encoder E here is producing a point estimate for z, whereas the encoder in the previous section was predicting a Gaussian distribution."
In practice, `z_predict` is sampled from Gaussian(`mu2`, `std2`), so the loss computed on `z_predict` might not be stable if your `std2` is large.
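A minimal sketch of the two variants of the latent regression loss being discussed (the helper name and `use_point_estimate` flag are hypothetical, not from the repo; the reparameterized sampling follows the usual VAE convention of an encoder outputting `mu2` and `logvar2`):

```python
import torch

def latent_regression_loss(mu2, logvar2, z_random, use_point_estimate=True):
    """Hypothetical helper contrasting the two loss choices in the question."""
    if use_point_estimate:
        # what the repo does: compare the encoder's mean (point estimate)
        # directly to the random latent code -- no sampling noise
        return torch.mean(torch.abs(mu2 - z_random))
    # alternative from the question: sample z_predict via the
    # reparameterization trick; the std2 * eps term injects noise,
    # so the loss is high-variance whenever std2 is large
    std2 = torch.exp(0.5 * logvar2)
    z_predict = mu2 + std2 * torch.randn_like(std2)
    return torch.mean(torch.abs(z_predict - z_random))
```

With `mu2 == z_random`, the point-estimate loss is exactly zero, while the sampled variant still reports a nonzero value driven purely by the sampling noise, which illustrates the stability argument above.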