loss problem
Closed this issue · 2 comments
xubangwu commented
Hello, if the number of instances in one batch is zero, the smooth L1 losses (Linit, Lcoarse, Liter) become 'nan'. How can I address this problem?
zhang-tao-whu commented
Sorry for the late answer; I was very busy recently and could not reply sooner.
You can add a check that changes how the loss is calculated when the number of instances is 0.
For example:
if len(pred_polys) == 0:
    # Summing the empty tensor yields 0 but keeps the prediction in the
    # autograd graph, so backward() still works and no nan is produced.
    loss = torch.sum(pred_polys)
else:
    loss = smooth_l1(pred_polys, target_polys)
zhang-tao-whu commented
I fixed this bug.