Getting cls_logits NaN or Inf during training
AceMcAwesome77 opened this issue · 1 comment
I am training this retinanet 3D detection model with mostly the same parameters as the example in this repo, except with batch_size = 1 in the config because many image volumes are smaller than the training patch size. During training, I get this error at random, several epochs in:
Traceback of TorchScript, original code (most recent call last):
  File "/home/mycomputer/.local/lib/python3.10/site-packages/monai/apps/detection/networks/retinanet_network.py", line 130, in forward
    if torch.isnan(cls_logits).any() or torch.isinf(cls_logits).any():
        if torch.is_grad_enabled():
            raise ValueError("cls_logits is NaN or Inf.")
            ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        else:
            warnings.warn("cls_logits is NaN or Inf.")
builtins.ValueError: cls_logits is NaN or Inf.
On the last few training attempts, this failed at epoch 6 on the first two attempts, then at epoch 12 on the third attempt, so it can make it through all the training data without failing on any particular case. Does anyone know what could be causing this? If it's exploding gradients, is there something built into MONAI to clip these and prevent the training from crashing? Thanks!
Hi @AceMcAwesome77,
The error message you're encountering, "cls_logits is NaN or Inf.", is telling you that at some point during training the cls_logits tensor contained a Not a Number (NaN) or Infinity (Inf) value.
This can occur for several reasons: a learning rate that is too high, numerical instabilities in your operations, uninitialized variables, or a problem with the specific data you're feeding into the model. It's a sign that the model is diverging and the gradients are getting out of control, often through exploding or vanishing gradients.
You can indeed attempt to mitigate the issue with gradient clipping, which keeps the gradients from exceeding a chosen threshold. However, gradient clipping doesn't guarantee that the root cause of the problem is resolved.
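For reference, here is a minimal sketch of norm-based gradient clipping in plain PyTorch, inserted between the backward pass and the optimizer step. The names model, optimizer, loss_fn, and data_loader are placeholders for whatever your training script already builds, not part of the MONAI detection example itself:

```python
import torch

# Hypothetical training-loop excerpt: model, optimizer, loss_fn and
# data_loader stand in for the objects your script already creates.
for batch in data_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(batch["image"]), batch["label"])
    loss.backward()
    # Rescale gradients in place so their global L2 norm never exceeds max_norm.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```

A max_norm around 1.0 is a common starting point; if the loss still blows up, lowering the learning rate is usually the next thing to try.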
I would recommend looking at your training process more holistically: inspect the learning rate, look for possible issues in the data (a quick check is sketched below), try normalizing the inputs, or try different weight initialization techniques.
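On the data side, a sweep like the sketch below can rule out corrupt inputs before they reach the model. It assumes a dictionary-style pipeline with an "image" key, and data_loader is again a placeholder for your own loader; NormalizeIntensityd from monai.transforms can then be appended to your preprocessing chain to standardize intensities:

```python
import torch
from monai.transforms import Compose, NormalizeIntensityd

# Hypothetical sweep over the existing loader: flag any volume that already
# contains NaN/Inf or has an extreme intensity range before training starts.
for i, sample in enumerate(data_loader):
    img = sample["image"]
    if torch.isnan(img).any() or torch.isinf(img).any():
        print(f"sample {i}: NaN/Inf in the input volume")
    print(f"sample {i}: min={img.min().item():.2f}, max={img.max().item():.2f}")

# Example preprocessing step: normalize the non-zero voxels to zero mean and
# unit variance, appended to whatever transform chain your config already builds.
preprocess = Compose([NormalizeIntensityd(keys="image", nonzero=True)])
```

If any sample already contains NaN/Inf or a wildly different intensity range, fixing that is more likely to help than clipping gradients after the fact.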
Hope it helps, thanks.