maudzung/RTM3D

Nan loss occurs while training

lyp0413 opened this issue · 2 comments

There is a small bug in src/losses/losses.py, lines 52 & 53, which may cause a NaN loss while training.
I think an epsilon such as 1e-10 should be added to avoid log(0).
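A minimal sketch of the problem and the suggested epsilon fix (the `pred` and `eps` names here are illustrative, not the actual variables in `losses.py`):

```python
import torch

# Illustration: a predicted heatmap probability of exactly 0 or 1
# makes the focal-loss log terms evaluate to -inf (and then NaN
# once multiplied by a zero weight during backprop).
pred = torch.tensor([0.0, 0.5, 1.0])
bad = torch.log(pred) + torch.log(1.0 - pred)   # contains -inf

# Adding a small epsilon keeps both log terms finite.
eps = 1e-10
safe = torch.log(pred + eps) + torch.log(1.0 - pred + eps)
```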

At what epoch did you encounter the NaN loss? I trained the model without any NaN loss. Please make sure you are using the latest code in the repo.

Hi @lyp0413
I clamped the heatmap values after the sigmoid activation, before computing the losses. You can refer to the implementation here.