inst_loss: nan
Closed this issue · 1 comment
spatiall commented
Hello author, when I run the command CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/ESAM_CA/ESAM_sv_scannet200_CA.py --work-dir work_dirs/ESAM_sv_scannet200_CA/, NaN appears in the training loss.
After debugging, I found that in the call mask_bce_losses.append(F.binary_cross_entropy_with_logits(pred_mask, tgt_mask.float())), pred_mask and tgt_mask.float() are sometimes empty tensors (tensor([])), which causes the loss to become NaN.
Do you know how to solve this problem?
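For reference, here is a minimal sketch of what I think is happening (the guard at the end is just an illustration of a possible workaround, not a fix taken from this repository):

```python
import torch
import torch.nn.functional as F

# When a sample has no matched instance masks, pred_mask and tgt_mask can be
# empty. The default reduction='mean' then averages over zero elements,
# which produces NaN.
pred_mask = torch.empty(0)   # tensor([])
tgt_mask = torch.empty(0)    # tensor([])
loss = F.binary_cross_entropy_with_logits(pred_mask, tgt_mask.float())
print(loss)  # tensor(nan)

# A possible (hypothetical) guard: skip the term when there is nothing to
# match, so the NaN never enters the accumulated loss.
if tgt_mask.numel() > 0:
    loss = F.binary_cross_entropy_with_logits(pred_mask, tgt_mask.float())
else:
    loss = pred_mask.new_zeros(())  # scalar zero contributes nothing
print(loss)  # tensor(0.)
```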
xuxw98 commented
That's OK. This NaN will not affect the training.