facebookresearch/vicreg

Loss becomes Nan suddenly.

dc250601 opened this issue · 0 comments

The loss suddenly becomes NaN after some number of epochs, and the model never converges afterwards. This happens at random. I am training on a custom dataset with a batch size of 2048 and a base lr of 0.2 on 4 A100s.
{"epoch": 299, "step": 264121, "loss": 14.823115348815918, "time": 33933, "lr": 1.2852132052592342} {"epoch": 299, "step": 264163, "loss": 14.84267520904541, "time": 33993, "lr": 1.2851170355212638} {"epoch": 299, "step": 264205, "loss": NaN, "time": 34054, "lr": 1.2850208546990547} {"epoch": 299, "step": 264245, "loss": 14.683608055114746, "time": 34115, "lr": 1.2849292436129383} {"epoch": 300, "step": 264300, "loss": 14.624290466308594, "time": 34213, "lr": 1.2848032619604233} {"epoch": 300, "step": 264335, "loss": 14.825407981872559, "time": 34275, "lr": 1.2847230819273718} {"epoch": 300, "step": 264376, "loss": 14.545491218566895, "time": 34336, "lr": 1.2846291469640327} {"epoch": 300, "step": 264417, "loss": 14.715323448181152, "time": 34397, "lr": 1.2845352014486329} {"epoch": 300, "step": 264458, "loss": 54.99197769165039, "time": 34458, "lr": 1.284441245383221} {"epoch": 300, "step": 264501, "loss": 22.475656509399414, "time": 34518, "lr": 1.2843426947628278} {"epoch": 300, "step": 264542, "loss": 23.183082580566406, "time": 34579, "lr": 1.2842487170891577} {"epoch": 300, "step": 264583, "loss": 23.91857147216797, "time": 34640, "lr": 1.284154728871724} {"epoch": 300, "step": 264626, "loss": 24.39642906188965, "time": 34701, "lr": 1.284056144537631} {"epoch": 300, "step": 264668, "loss": 24.610559463500977, "time": 34762, "lr": 1.2839598416708278} {"epoch": 300, "step": 264711, "loss": 24.703632354736328, "time": 34823, "lr": 1.2838612344228337} {"epoch": 300, "step": 264753, "loss": 24.7344970703125, "time": 34883, "lr": 1.283764909179525} {"epoch": 300, "step": 264793, "loss": 24.75002670288086, "time": 34944, "lr": 1.283673160576227} {"epoch": 300, "step": 264835, "loss": 24.750017166137695, "time": 35005, "lr": 1.2835768137547283} {"epoch": 300, "step": 264878, "loss": 24.750001907348633, "time": 35066, "lr": 1.2834781615145805} {"epoch": 300, "step": 264921, "loss": 24.75, "time": 35127, "lr": 1.283379497695405} {"epoch": 300, "step": 264963, "loss": 
24.75, "time": 35187, "lr": 1.2832831172076795} {"epoch": 300, "step": 265005, "loss": 24.75, "time": 35248, "lr": 1.2831867256776874}
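As a stopgap while debugging, a guard like the following can catch the first non-finite loss and skip that update instead of letting it poison the weights. This is a generic sketch, not the vicreg training loop; `safe_step` and `max_norm` are hypothetical names, and gradient clipping is an assumption on my part, not something the repo does:

```python
import torch

def safe_step(model, optimizer, loss, max_norm=1.0):
    # Skip the update entirely if the loss is NaN/Inf; otherwise
    # backprop, clip the gradient norm, and step. Returns whether
    # the update was applied, so the caller can log skipped batches.
    if not torch.isfinite(loss):
        optimizer.zero_grad(set_to_none=True)
        return False
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True

# Tiny usage example on a dummy model.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 4)).pow(2).mean()
stepped = safe_step(model, optimizer, loss)                        # finite loss: applied
skipped = safe_step(model, optimizer, torch.tensor(float("nan")))  # NaN loss: skipped
```

Skipping a batch only masks the symptom, of course; it mainly helps pin down which step (and which batch) first produces the NaN.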