The training log shows that the loss value is negative
AugggRush opened this issue · 2 comments
AugggRush commented
I trained this model myself on the same dataset, LJSpeech, and I found that the loss
printed on the terminal is negative. As the iterations grow, it becomes more negative, moving further from zero.
I want to know whether this is correct. I thought that to maximize the log-likelihood, the loss should generally approach zero.
If anyone has ideas, please let me know, thanks very much.
yasntrk commented
It is okay to get a negative loss.
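To add some intuition (this is a general sketch, not code from this repo): models trained by maximum likelihood on a *continuous* density minimize a negative log-likelihood, and a probability density can exceed 1, so its log can be positive and the loss can legitimately go below zero and keep decreasing. A toy 1-D Gaussian shows this:

```python
import math

def gaussian_log_pdf(x, mu, sigma):
    """Log density of a 1-D Gaussian N(mu, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# A sharp Gaussian (sigma = 0.1) evaluated at its mean has density > 1,
# so the log-density is positive ...
log_p = gaussian_log_pdf(0.0, 0.0, 0.1)
print(log_p)   # about +1.38

# ... and the negative log-likelihood loss is therefore negative.
nll = -log_p
print(nll)     # about -1.38
```

So a loss that grows more negative just means the model assigns higher and higher density to the training data, which is exactly what maximizing the likelihood is supposed to do.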
KB00100100 commented
I have the same question. The loss value gets more negative as the iterations grow; will the final trained model work?