bojone/vae

2nd element of recon_loss?

YongWookHa opened this issue · 3 comments

Hello, I'm a student studying deep learning.
First of all, your code has been really helpful for learning about VAEs.
Thank you very much.

I've got a question.
I'm curious about the reason you put log(2*pi) in the second term of recon_loss.

Thank you in advance for your answer.
Have a good day.

Hi @YongWookHa, were you able to figure out the logic behind the reconstruction loss in vae_keras_celeba.py?

recon_loss = 0.5 * K.sum(K.mean(x_out**2, 0)) + 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:])

Hello, @moha23.
A year has passed! :)

x_out = Subtract()([x_in, x_recon])
recon_loss = 0.5 * K.sum(K.mean(x_out**2, 0)) + 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:])

As I understand it, in recon_loss, 0.5 * K.sum(K.mean(x_out**2, 0)) is the MSE term.
The added value 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:]) is a constant.
The code np.prod(K.int_shape(x_out)[1:]) calculates H x W x C, the number of output dimensions.
I think this constant works like a bias: it shifts the loss value, but since it doesn't depend on any trainable parameters, it doesn't change the gradients.
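
Where does the constant come from? If the loss is read as the negative log-likelihood of a Gaussian with unit variance (my interpretation, writing D = H x W x C for the number of output dimensions), it is just the normalizer of the density:

-log p(x|z) = -log N(x; x_recon, I)
            = 0.5 * ||x - x_recon||^2 + (D/2) * log(2*pi)

The first term, averaged over the batch, is 0.5 * K.sum(K.mean(x_out**2, 0)), and the second is exactly 0.5 * np.log(2*np.pi) * np.prod(K.int_shape(x_out)[1:]).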

So, I guess you would get a similar result without the latter term.
I'm sorry that I'm not in a position to run a full training experiment to confirm this.
I'll leave that to you.
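
If you just want a cheap sanity check, a toy sketch like the one below should show it (my own sketch, assuming TensorFlow 2 eager mode rather than the repo's original Keras setup): the two losses differ by exactly the constant, and their gradients are identical.

import numpy as np
import tensorflow as tf

# Toy check: the 0.5 * log(2*pi) * D term shifts the loss value
# but contributes nothing to the gradients.
x = tf.random.normal((4, 8, 8, 3))                      # fake batch of "images"
x_recon = tf.Variable(tf.random.normal((4, 8, 8, 3)))   # fake reconstruction

with tf.GradientTape(persistent=True) as tape:
    x_out = x - x_recon
    mse = 0.5 * tf.reduce_sum(tf.reduce_mean(x_out**2, axis=0))
    D = np.prod(x_out.shape[1:].as_list())              # H x W x C
    loss_full = mse + 0.5 * np.log(2 * np.pi) * D       # loss as in vae_keras_celeba.py
    loss_mse_only = mse                                  # loss without the constant

g_full = tape.gradient(loss_full, x_recon)
g_mse_only = tape.gradient(loss_mse_only, x_recon)
print(float(loss_full - loss_mse_only))                  # exactly 0.5 * log(2*pi) * D
print(np.allclose(g_full.numpy(), g_mse_only.numpy()))   # True: identical gradients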

Have a nice day.

Thanks @YongWookHa! Yes, that's the direction I was going in too 👍

Wishing you another fruitful year ahead :)