sbarratt/inception-score-pytorch

about entropy

albb762 opened this issue · 3 comments

Hi.
Thank you for sharing your code.
But when I tried it with some toy data, I found that if we want to assign a high score to data that has a uniform P(y) and a skewed P(y|x), we should use entropy(py, pyx) instead of entropy(pyx, py).
Here is the code from openai:
kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))
https://github.com/openai/improved-gan/blob/master/inception_score/model.py
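For context, that line sits inside the openai score computation roughly like this (a minimal sketch, not their exact code; `part` is assumed to be an (N, C) array of softmax outputs for one split, and the Dirichlet draw is just toy data):

```python
import numpy as np

# Toy stand-in for `part`: an (N, C) array of softmax outputs p(y|x) for one split.
part = np.random.dirichlet(np.ones(10), size=100)

# p(y) is estimated as the marginal over the split.
py = np.expand_dims(np.mean(part, 0), 0)  # shape (1, C)

# Per-image KL(p(y|x) || p(y)), summed over classes, then averaged and exponentiated.
kl = part * (np.log(part) - np.log(py))
score = np.exp(np.mean(np.sum(kl, 1)))
print(score)
```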


Hi,
I have the same confusion. So should I replace 'entropy(py, pyx)' with 'kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))'? In my opinion, the two are not equal.

scipy.stats.entropy computes the KL divergence if two distributions are given.

"If qk is not None, then compute the Kullback-Leibler divergence"
You can check the documentation here:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.entropy.html
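A quick toy check (made-up 3-class distributions, not from either repo) that the two-argument form matches the explicit KL term in the openai line above:

```python
import numpy as np
from scipy.stats import entropy

pyx = np.array([0.7, 0.2, 0.1])   # p(y|x) for a single image (toy values)
py = np.array([1/3, 1/3, 1/3])    # marginal p(y) (toy values)

# Two-argument entropy is KL(pyx || py) ...
kl_scipy = entropy(pyx, py)

# ... which is the same quantity as the explicit formula in the openai code.
kl_manual = np.sum(pyx * (np.log(pyx) - np.log(py)))

print(kl_scipy, kl_manual)  # prints the same value twice
```

So entropy(pyx, py) computes the same per-image KL term as the openai line.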