NaN in loss occurred when training on nyu with default config
dongli12 opened this issue · 2 comments
Dear authors,
I followed your instructions and used your default config to train densenet161 on NYU, but I encountered NaN in the loss. Have you run into the same issue? Do you have any suggestions to help address it? I have attached part of the training log below for your reference.
28665 [epoch][s/s_per_e/gs]: [4][2564/6058/26796], lr: 0.000092801574, loss: 0.587257266045
28666 [epoch][s/s_per_e/gs]: [4][2565/6058/26797], lr: 0.000092801304, loss: 0.969502031803
28667 [epoch][s/s_per_e/gs]: [4][2566/6058/26798], lr: 0.000092801034, loss: 0.852333068848
28668 [epoch][s/s_per_e/gs]: [4][2567/6058/26799], lr: 0.000092800764, loss: 0.468196690083
28669 [epoch][s/s_per_e/gs]: [4][2568/6058/26800], lr: 0.000092800494, loss: 0.398437529802
28670 bts_nyu_v2_pytorch_test
28671 GPU: 0 | examples/s: 1.30 | loss: 0.39844 | var sum: -8274.029 avg: -36.449 | time elapsed: 6.43h | time left: 66.27h
28672 [epoch][s/s_per_e/gs]: [4][2569/6058/26801], lr: 0.000092800224, loss: 0.396373450756
28673 [epoch][s/s_per_e/gs]: [4][2570/6058/26802], lr: 0.000092799954, loss: 0.725467324257
28674 [epoch][s/s_per_e/gs]: [4][2571/6058/26803], lr: 0.000092799685, loss: 0.693935453892
28675 [epoch][s/s_per_e/gs]: [4][2572/6058/26804], lr: 0.000092799415, loss: 0.809877395630
28676 [epoch][s/s_per_e/gs]: [4][2573/6058/26805], lr: 0.000092799145, loss: 0.508117854595
28677 [epoch][s/s_per_e/gs]: [4][2574/6058/26806], lr: 0.000092798875, loss: 0.631862759590
28678 [epoch][s/s_per_e/gs]: [4][2575/6058/26807], lr: 0.000092798605, loss: 0.562379121780
28679 [epoch][s/s_per_e/gs]: [4][2576/6058/26808], lr: 0.000092798335, loss: 0.636592268944
28680 [epoch][s/s_per_e/gs]: [4][2577/6058/26809], lr: 0.000092798065, loss: 1.480627655983
28681 [epoch][s/s_per_e/gs]: [4][2578/6058/26810], lr: 0.000092797795, loss: 0.720919907093
28682 [epoch][s/s_per_e/gs]: [4][2579/6058/26811], lr: 0.000092797525, loss: 0.398592323065
28683 [epoch][s/s_per_e/gs]: [4][2580/6058/26812], lr: 0.000092797255, loss: 0.565192580223
28684 [epoch][s/s_per_e/gs]: [4][2581/6058/26813], lr: 0.000092796986, loss: 0.622062742710
28685 [epoch][s/s_per_e/gs]: [4][2582/6058/26814], lr: 0.000092796716, loss: 0.572721838951
28686 [epoch][s/s_per_e/gs]: [4][2583/6058/26815], lr: 0.000092796446, loss: 0.990063309669
28687 [epoch][s/s_per_e/gs]: [4][2584/6058/26816], lr: 0.000092796176, loss: 0.572463631630
28688 [epoch][s/s_per_e/gs]: [4][2585/6058/26817], lr: 0.000092795906, loss: 0.458469241858
28689 [epoch][s/s_per_e/gs]: [4][2586/6058/26818], lr: 0.000092795636, loss: 1.144481301308
28690 [epoch][s/s_per_e/gs]: [4][2587/6058/26819], lr: 0.000092795366, loss: 0.880649805069
28691 [epoch][s/s_per_e/gs]: [4][2588/6058/26820], lr: 0.000092795096, loss: 0.429279804230
28692 [epoch][s/s_per_e/gs]: [4][2589/6058/26821], lr: 0.000092794826, loss: 0.502728164196
28693 [epoch][s/s_per_e/gs]: [4][2590/6058/26822], lr: 0.000092794556, loss: 0.697629332542
28694 [epoch][s/s_per_e/gs]: [4][2591/6058/26823], lr: 0.000092794286, loss: 1.542018651962
28695 [epoch][s/s_per_e/gs]: [4][2592/6058/26824], lr: 0.000092794017, loss: 0.493366718292
28696 [epoch][s/s_per_e/gs]: [4][2593/6058/26825], lr: 0.000092793747, loss: 0.714045524597
28697 [epoch][s/s_per_e/gs]: [4][2594/6058/26826], lr: 0.000092793477, loss: 0.727897644043
28698 [epoch][s/s_per_e/gs]: [4][2595/6058/26827], lr: 0.000092793207, loss: nan
28699 NaN in loss occurred. Aborting training.
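The final line above suggests the training loop checks each step's loss for NaN and aborts when it is no longer finite. A minimal sketch of such a guard (assumed for illustration, not the repository's exact code):

```python
import math

import torch


def loss_is_finite(loss: torch.Tensor) -> bool:
    """Return False once the scalar loss has turned into NaN or Inf."""
    return math.isfinite(loss.detach().item())


# Inside the training loop (sketch only, names assumed):
#     if not loss_is_finite(loss):
#         print('NaN in loss occurred. Aborting training.')
#         break
```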
Looking forward to your reply.
Thanks!
Dong
Dear authors,
The same issue occurred when training on KITTI with the default config:
5055 [epoch][s/s_per_e/gs]: [0][4696/5790/4696], lr: 0.000098685021, loss: 0.928951561451
5056 [epoch][s/s_per_e/gs]: [0][4697/5790/4697], lr: 0.000098684741, loss: 1.438320755959
5057 [epoch][s/s_per_e/gs]: [0][4698/5790/4698], lr: 0.000098684461, loss: 1.086255431175
5058 [epoch][s/s_per_e/gs]: [0][4699/5790/4699], lr: 0.000098684180, loss: 1.162461757660
5059 [epoch][s/s_per_e/gs]: [0][4700/5790/4700], lr: 0.000098683900, loss: 1.128360152245
5060 bts_eigen_v2_pytorch_test
5061 GPU: 0 | examples/s: 1.51 | loss: 1.12836 | var sum: -5138.767 avg: -22.638 | time elapsed: 1.19h | time left: 72.19h
5062 [epoch][s/s_per_e/gs]: [0][4701/5790/4701], lr: 0.000098683620, loss: 0.960600614548
5063 [epoch][s/s_per_e/gs]: [0][4702/5790/4702], lr: 0.000098683340, loss: 0.803400218487
5064 [epoch][s/s_per_e/gs]: [0][4703/5790/4703], lr: 0.000098683059, loss: 0.712845563889
5065 [epoch][s/s_per_e/gs]: [0][4704/5790/4704], lr: 0.000098682779, loss: 0.893965423107
5066 [epoch][s/s_per_e/gs]: [0][4705/5790/4705], lr: 0.000098682499, loss: 1.325345993042
5067 [epoch][s/s_per_e/gs]: [0][4706/5790/4706], lr: 0.000098682219, loss: 1.235568523407
5068 [epoch][s/s_per_e/gs]: [0][4707/5790/4707], lr: 0.000098681938, loss: 1.001750826836
5069 [epoch][s/s_per_e/gs]: [0][4708/5790/4708], lr: 0.000098681658, loss: 0.820477724075
5070 [epoch][s/s_per_e/gs]: [0][4709/5790/4709], lr: 0.000098681378, loss: 0.757687449455
5071 [epoch][s/s_per_e/gs]: [0][4710/5790/4710], lr: 0.000098681098, loss: 0.767939090729
5072 [epoch][s/s_per_e/gs]: [0][4711/5790/4711], lr: 0.000098680817, loss: 0.980791509151
5073 [epoch][s/s_per_e/gs]: [0][4712/5790/4712], lr: 0.000098680537, loss: 1.235353946686
5074 [epoch][s/s_per_e/gs]: [0][4713/5790/4713], lr: 0.000098680257, loss: 1.501036763191
5075 [epoch][s/s_per_e/gs]: [0][4714/5790/4714], lr: 0.000098679977, loss: 0.976272106171
5076 [epoch][s/s_per_e/gs]: [0][4715/5790/4715], lr: 0.000098679696, loss: 0.827641785145
5077 [epoch][s/s_per_e/gs]: [0][4716/5790/4716], lr: 0.000098679416, loss: 0.638071060181
5078 [epoch][s/s_per_e/gs]: [0][4717/5790/4717], lr: 0.000098679136, loss: 0.891529560089
5079 [epoch][s/s_per_e/gs]: [0][4718/5790/4718], lr: 0.000098678856, loss: 0.863552570343
5080 [epoch][s/s_per_e/gs]: [0][4719/5790/4719], lr: 0.000098678575, loss: 0.979450643063
5081 [epoch][s/s_per_e/gs]: [0][4720/5790/4720], lr: 0.000098678295, loss: 0.916973829269
5082 [epoch][s/s_per_e/gs]: [0][4721/5790/4721], lr: 0.000098678015, loss: 1.077144861221
5083 [epoch][s/s_per_e/gs]: [0][4722/5790/4722], lr: 0.000098677735, loss: 1.062819480896
5084 [epoch][s/s_per_e/gs]: [0][4723/5790/4723], lr: 0.000098677454, loss: 1.114348530769
5085 [epoch][s/s_per_e/gs]: [0][4724/5790/4724], lr: 0.000098677174, loss: 1.016950249672
5086 [epoch][s/s_per_e/gs]: [0][4725/5790/4725], lr: 0.000098676894, loss: 1.080961704254
5087 [epoch][s/s_per_e/gs]: [0][4726/5790/4726], lr: 0.000098676614, loss: 0.860958218575
5088 [epoch][s/s_per_e/gs]: [0][4727/5790/4727], lr: 0.000098676333, loss: 0.900979578495
5089 [epoch][s/s_per_e/gs]: [0][4728/5790/4728], lr: 0.000098676053, loss: 0.744510948658
5090 [epoch][s/s_per_e/gs]: [0][4729/5790/4729], lr: 0.000098675773, loss: 1.005219578743
5091 [epoch][s/s_per_e/gs]: [0][4730/5790/4730], lr: 0.000098675493, loss: 1.165772676468
5092 [epoch][s/s_per_e/gs]: [0][4731/5790/4731], lr: 0.000098675212, loss: 0.926906228065
5093 [epoch][s/s_per_e/gs]: [0][4732/5790/4732], lr: 0.000098674932, loss: 1.262896656990
5094 [epoch][s/s_per_e/gs]: [0][4733/5790/4733], lr: 0.000098674652, loss: 1.104318261147
5095 [epoch][s/s_per_e/gs]: [0][4734/5790/4734], lr: 0.000098674372, loss: 1.177158355713
5096 [epoch][s/s_per_e/gs]: [0][4735/5790/4735], lr: 0.000098674091, loss: 0.944988369942
5097 [epoch][s/s_per_e/gs]: [0][4736/5790/4736], lr: 0.000098673811, loss: 0.748923361301
5098 [epoch][s/s_per_e/gs]: [0][4737/5790/4737], lr: 0.000098673531, loss: 1.580052614212
5099 [epoch][s/s_per_e/gs]: [0][4738/5790/4738], lr: 0.000098673251, loss: 3.080396413803
5100 [epoch][s/s_per_e/gs]: [0][4739/5790/4739], lr: 0.000098672970, loss: 8.520328521729
5101 [epoch][s/s_per_e/gs]: [0][4740/5790/4740], lr: 0.000098672690, loss: 6.916765689850
5102 [epoch][s/s_per_e/gs]: [0][4741/5790/4741], lr: 0.000098672410, loss: nan
5103 NaN in loss occurred. Aborting training.
5104
5105 Exception in thread Thread-1:
5106 Traceback (most recent call last):
5107 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/threading.py", line 926, in _bootstrap_inner
5108 self.run()
5109 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/site-packages/tensorboardX/event_file_writer.py", line 202, in run
5110 data = self._queue.get(True, queue_wait_duration)
5111 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/queues.py", line 108, in get
5112 res = self._recv_bytes()
5113 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 216, in recv_bytes
5114 buf = self._recv_bytes(maxlength)
5115 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
5116 buf = self._recv(4)
5117 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 383, in _recv
5118 raise EOFError
5119 EOFError
5120
5121 Exception in thread Thread-2:
5122 Traceback (most recent call last):
5123 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/threading.py", line 926, in _bootstrap_inner
5124 self.run()
5125 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/site-packages/tensorboardX/event_file_writer.py", line 202, in run
5126 data = self._queue.get(True, queue_wait_duration)
5127 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/queues.py", line 108, in get
5128 res = self._recv_bytes()
5129 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 216, in recv_bytes
5130 buf = self._recv_bytes(maxlength)
5131 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 407, in _recv_bytes
5132 buf = self._recv(4)
5133 File "/scratch/workspace/dongl/anaconda3/envs/bts/lib/python3.7/multiprocessing/connection.py", line 383, in _recv
5134 raise EOFError
5135 EOFError
For training on both NYU and KITTI, I set the batch size to 4 and used multiprocessing_distributed with your provided default config.
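For reference, those overrides would correspond to entries roughly like the following in the argument file passed to bts_main.py (flag names assumed from the repository's sample argument files, not verified against this exact commit):

```
--batch_size 4
--multiprocessing_distributed
```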
Looking forward to your reply. Thanks!
Dong
@dongli12 Hi. Could you adjust the epsilon value at
https://github.com/cogaplex-bts/bts/blob/653b5d3a57c9ba0dd19f4d25e9121449d2cc761c/pytorch/bts_main.py#L373
and try again?
A proper value should lie in the range [1e-8, 1e-1].
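The epsilon referred to above is presumably the eps argument of the Adam-family optimizer constructed in bts_main.py. A hedged sketch of the kind of change being suggested (optimizer choice, variable names, and values are assumptions, not the exact code at that line):

```python
import torch
import torch.nn as nn

# Stand-in model; in practice this would be the BTS network.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())

# A larger eps damps Adam-style updates when the second-moment estimate is
# tiny, which can help keep a step from blowing up into NaN. Try values in
# the suggested range [1e-8, 1e-1], e.g. 1e-3 or 1e-2.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, eps=1e-2)
```

If the loss still diverges, lowering the learning rate or clipping gradients are common complementary mitigations.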