prismformore/Multi-Task-Transformer

RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10)

Closed this issue · 7 comments

5%|████▊ | 9/198 [00:08<03:04, 1.02it/s]
Traceback (most recent call last):
  File "main.py", line 172, in <module>
    main()
  File "main.py", line 148, in main
    end_signal, iter_count = train_phase(p, args, train_dataloader, test_dataloader, model, criterion, optimizer, scheduler, epoch, tb_writer_train, tb_writer_test, iter_count)
  File "/public/home/ws/vpt/InvPT/utils/train_utils.py", line 41, in train_phase
    optimizer.step()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/adam.py", line 144, in step
    eps=group['eps'])
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/optim/functional.py", line 98, in adam
    param.addcdiv_(exp_avg, denom, value=-step_size)
RuntimeError: value cannot be converted to type float without overflow: (1.37209e-09,-4.45819e-10)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 182046) of binary: /public/home/ws/Anacondas/anaconda3/envs/invpt/bin/python
Traceback (most recent call last):
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
    )(*cmd_args)
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/public/home/ws/Anacondas/anaconda3/envs/invpt/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
    failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

main.py FAILED

Failures:
[1]:
  time      : 2022-09-21_15:42:37
  host      : ai_gpu02
  rank      : 1 (local_rank: 1)
  exitcode  : 1 (pid: 182053)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Root Cause (first observed failure):
[0]:
  time      : 2022-09-21_15:42:37
  host      : ai_gpu02
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 182046)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

Excuse me, I hit this error after 39000+ training iterations. How can I solve it? Could it be related to the number of GPUs used? I trained with two cards.

I seem to have figured this out. The iteration count had reached 40k, so under the poly schedule the learning rate was essentially 0 by that point, even though the run had not yet reached the max epoch.
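To illustrate what this looks like in practice (just a sketch, assuming the usual poly formulation lr = base_lr * (1 - iter / max_iter) ** power; the constants below are made up for illustration and are not this repo's config), once the iteration counter overshoots max_iter the poly base goes negative and the learning rate becomes a complex number, which also matches the pair of values printed in the RuntimeError:

# Hypothetical values for illustration only.
base_lr, power, max_iter = 2e-4, 0.9, 40000

def poly_lr(cur_iter):
    # Once cur_iter exceeds max_iter, the base (1 - cur_iter / max_iter) is
    # negative, and a negative float raised to a fractional power is a
    # *complex* number in Python 3.
    return base_lr * (1 - cur_iter / max_iter) ** power

print(poly_lr(39000))   # ~7e-06: tiny, but still a valid float
print(poly_lr(40100))   # a complex number, e.g. (-8.7e-07+2.8e-07j)
# Adam derives its step_size from this lr, so if a complex lr ever reaches
# param.addcdiv_(..., value=-step_size), it raises:
# RuntimeError: value cannot be converted to type float without overflow: (a,b)
# where "(a,b)" is how the complex value is printed.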

@tb2-sy I am not sure what the real reason is. The direct cause of this bug is that the lr is too small. Did you change the scheduler, the learning rate, or another setting? Did you use a different batch size or retrain the model?

Oh, the batch size on each GPU is 2. I used two cards, so the total batch size is 4. Could that be the reason?

I don't think so. The LR is adjusted based on the iteration number, so the batch size should not matter here.

@tb2-sy Hi, have you solved this issue? : )

I have not encountered this problem again. My guess is that the error was caused by changing the configured number of iterations when resuming training.
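For anyone who resumes with a reduced iteration budget and hits this, a simple defensive clamp on the poly base avoids it. This is only an illustrative sketch with assumed names and values, not code from this repo:

def safe_poly_lr(cur_iter, base_lr=2e-4, power=0.9, max_iter=40000):
    # Clamp the base at 0 so the LR bottoms out at 0.0 instead of turning
    # complex when cur_iter overshoots max_iter (e.g. after resuming with a
    # smaller configured iteration count).
    frac = max(0.0, 1.0 - cur_iter / max_iter)
    return base_lr * frac ** power

print(safe_poly_lr(40100))  # 0.0 rather than a complex value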

@tb2-sy Great. I will close this issue for now. Thanks.