Problems running the training code
Sun15194 opened this issue · 2 comments
Hello, I encountered the following problem when running the training code, and I hope you can help me resolve it.
```
(letr) root@shuusv005:~/sjc/LETR# bash ./script/train/a0_train_stage1_res50.sh res50_stage1
folder not exist
| distributed init (rank 1): env://
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
| distributed init (rank 0): env://
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
  File "src/main.py", line 215, in <module>
    main(args)
  File "src/main.py", line 21, in main
    utils.init_distributed_mode(args)
  File "/home/shu-usv005/sjc/LETR/src/util/misc.py", line 421, in init_distributed_mode
    torch.cuda.set_device(args.gpu)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/cuda/__init__.py", line 261, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Traceback (most recent call last):
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
    main()
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/home/shu-usv005/anaconda3/envs/letr/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/shu-usv005/anaconda3/envs/letr/bin/python', '-u', 'src/main.py', '--coco_path', 'data/wireframe_processed', '--output_dir', 'exp/res50_stage1', '--backbone', 'resnet50', '--resume', 'https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth', '--batch_size', '1', '--epochs', '500', '--lr_drop', '200', '--num_queries', '1000', '--num_gpus', '1', '--layer1_num', '3']' returned non-zero exit status 1.
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
Killing subprocess 6288
Killing subprocess 6289
Killing subprocess 6290
Killing subprocess 6291
Killing subprocess 6292
Killing subprocess 6293
Killing subprocess 6294
Killing subprocess 6295
```
Have you modified the a0_train_stage1_res50.sh file to match your GPU availability? Set --nproc_per_node and --num_gpus to the number of GPUs available to you, and also update --world_size in src/args.py with the same number.
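For reference, on a single-GPU machine the launch line inside a0_train_stage1_res50.sh would look roughly like the sketch below. The flags are reconstructed from the failing command echoed in the log above, not the script's exact contents, so treat the values as assumptions and adjust them to your setup; the key point is that --nproc_per_node, --num_gpus, and --world_size must all equal the number of GPUs PyTorch can actually see.

```bash
#!/usr/bin/env bash
# Sketch only: flags copied from the command in the log above, with the process
# count reduced to a single GPU. "$1" stands for the experiment-name argument
# (res50_stage1 in the log). Adjust --nproc_per_node / --num_gpus here, and
# --world_size in src/args.py, to however many GPUs your machine exposes.
python -m torch.distributed.launch --nproc_per_node=1 src/main.py \
    --coco_path data/wireframe_processed \
    --output_dir "exp/$1" \
    --backbone resnet50 \
    --resume https://dl.fbaipublicfiles.com/detr/detr-r50-e632da11.pth \
    --batch_size 1 \
    --epochs 500 \
    --lr_drop 200 \
    --num_queries 1000 \
    --num_gpus 1 \
    --layer1_num 3
```

You can check how many GPUs are visible with `python -c "import torch; print(torch.cuda.device_count())"`; the "invalid device ordinal" error means a worker process was assigned a GPU index beyond that count.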
The problem has been solved. Thank you for your kind answer; it was a great help to me.