aamini/evidential-deep-learning

NIG_Loss smaller than zero!

PrinceDouble opened this issue · 2 comments

NIG_Loss smaller than zero! NIG_loss becomes negative during training, and I don't know why. Thank you!

Epoch: 0, iter: 360, abs_loss: 0.075639, nll_loss: -0.842837, reg_loss:0.364714 ,acc: 0.952216, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 361, abs_loss: 0.075609, nll_loss: -0.843703, reg_loss:0.364581 ,acc: 0.952219, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 362, abs_loss: 0.075563, nll_loss: -0.844643, reg_loss:0.364431 ,acc: 0.952307, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 363, abs_loss: 0.075479, nll_loss: -0.846680, reg_loss:0.364073 ,acc: 0.952395, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 364, abs_loss: 0.075361, nll_loss: -0.849301, reg_loss:0.363639 ,acc: 0.952526, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 365, abs_loss: 0.075248, nll_loss: -0.851855, reg_loss:0.363197 ,acc: 0.952655, lr_pre: 0.000030, lr_last: 0.000300
Epoch: 0, iter: 366, abs_loss: 0.075118, nll_loss: -0.854777, reg_loss:0.362667 ,acc: 0.952784, lr_pre: 0.000030, lr_last: 0.000300

This is expected: NLL is the negative log likelihood. If the log likelihood is positive (i.e., the predicted density at the target exceeds 1), the NLL will be negative. This loss can take any real value, positive or negative.
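To make this concrete, here is a minimal sketch of the NIG (Normal-Inverse-Gamma) NLL term from the deep evidential regression paper, written standalone in plain Python (the actual repo implementation is in TensorFlow; the function name `nig_nll` here is just for illustration). When the evidential parameters imply a sharp predictive density (e.g. a small `beta`), the density at the target can exceed 1, so the NLL goes negative, exactly as in the logs above:

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log likelihood of target y under a Normal-Inverse-Gamma
    evidential distribution with parameters (gamma, nu, alpha, beta).

    Follows Eq. 8 of Amini et al., "Deep Evidential Regression" (NeurIPS 2020),
    with Omega = 2 * beta * (1 + nu).
    """
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

# A sharp, accurate prediction (small beta) gives a density > 1,
# so the NLL is negative:
print(nig_nll(0.0, 0.0, 1.0, 2.0, 0.01))   # negative

# A diffuse prediction (large beta) gives a density < 1, so the NLL
# is positive:
print(nig_nll(0.0, 0.0, 1.0, 2.0, 10.0))   # positive
```

So a negative `nll_loss` is not a bug; it simply means the model is placing high density on the targets.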

First of all, thank you very much for your excellent work. Recently, I have been studying face anti-spoofing detection. My model performs relatively well intra-dataset, but its generalization is weak. I want to use evidential learning, with feature-map supervision, to improve the model's generalization on the face anti-spoofing task. I see that your work applies evidential learning to depth estimation; may I ask whether the loss function from this work can also be used for face anti-spoofing? Thanks very much!