When running the example code, why is the loss positive at first and then decreases to a negative value?
SnowYJ opened this issue · 1 comment
SnowYJ commented
Hi,
I'm wondering whether it is correct that the loss is positive at first and then decreases to a negative value?
fdraxler commented
Hi, this is expected: The negative log likelihood loss is bounded from below, but not by zero. In fact, it is bounded from below by the differential entropy of the data distribution that is fed into the normalizing flow, and that entropy can itself be negative. Intuitively, the model evaluates a probability density rather than a probability, and a density can exceed 1, so its negative log can drop below zero.
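To make this concrete, here is a minimal NumPy sketch (not code from this repository; the narrow Gaussian and its analytic density are only illustrative assumptions). For a concentrated distribution the density exceeds 1 near the mean, so the average negative log likelihood, and hence its lower bound, the differential entropy, is negative:

```python
import numpy as np

# Hypothetical example: samples from a narrow 1-D Gaussian N(0, sigma^2).
sigma = 0.1
x = np.random.normal(0.0, sigma, size=10_000)

# Per-sample negative log likelihood under the true density.
# Since the density exceeds 1 near the mean, many of these terms are negative.
nll = 0.5 * (x / sigma) ** 2 + np.log(sigma) + 0.5 * np.log(2 * np.pi)
print("mean NLL:", nll.mean())  # roughly -0.88 for sigma = 0.1

# Differential entropy of a Gaussian: 0.5 * log(2*pi*e*sigma^2).
# This is the lower bound on the NLL loss, and it is negative here.
entropy = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
print("differential entropy:", entropy)
```

So a loss curve that starts positive and settles at a negative value simply means the flow's density estimate is approaching (from above) the entropy of the data.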