microsoft/UniVL

RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

lokeaichirou opened this issue · 3 comments

Hi @ArrowLuo, I ran training in the fine-tuning stage for the video captioning task. However, I get the error 'RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.'


RuntimeError                              Traceback (most recent call last)
in ()
     31 coef_lr = 1.0
     32 optimizer, scheduler, model = prep_optimizer(args, model, num_train_optimization_steps, device, n_gpu,
---> 33                                              args.local_rank, coef_lr=coef_lr)
     34
     35 if args.local_rank == 0:

in prep_optimizer(args, model, num_train_optimization_steps, device, n_gpu, local_rank, coef_lr)
     28
     29     model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank],
---> 30                                                       output_device=local_rank, find_unused_parameters=True)
     31
     32     # model = torch.nn.DataParallel(model).cuda()

/usr/local/lib/python3.7/dist-packages/torch/nn/parallel/distributed.py in __init__(self, module, device_ids, output_device, dim, broadcast_buffers, process_group, bucket_cap_mb, find_unused_parameters, check_reduction, gradient_as_bucket_view)
    399
    400         if process_group is None:
--> 401             self.process_group = _get_default_group()
    402         else:
    403             self.process_group = process_group

/usr/local/lib/python3.7/dist-packages/torch/distributed/distributed_c10d.py in _get_default_group()
    345     """
    346     if not is_initialized():
--> 347         raise RuntimeError("Default process group has not been initialized, "
    348                            "please make sure to call init_process_group.")
    349     return GroupMember.WORLD

RuntimeError: Default process group has not been initialized, please make sure to call init_process_group.

I set world_size = 1. I have attached my log.txt here.
log (1).txt

Did you launch the task via python -m torch.distributed.launch --nproc_per_node=1? The process group is initialized by torch.distributed.init_process_group(backend="nccl") in https://github.com/microsoft/UniVL/blob/main/main_task_caption.py#L24. The world size will be set automatically.
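For context, a hedged sketch of what the launcher supplies: torch.distributed.launch spawns the worker processes and sets environment variables such as MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE before the script runs, which is why init_process_group can pick up the world size automatically. The concrete values below are illustrative defaults, not taken from the repository:

```python
import os

# Illustrative sketch: these are the environment variables that
# torch.distributed.launch sets for each worker; the values shown
# correspond to a single-worker run (--nproc_per_node=1 on one node).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")  # rendezvous address
os.environ.setdefault("MASTER_PORT", "29500")      # rendezvous port (example)
os.environ.setdefault("RANK", "0")                 # global rank of this worker
os.environ.setdefault("WORLD_SIZE", "1")           # total number of workers

# With the default env:// init method, init_process_group reads these
# variables to build the default process group.
print(os.environ["WORLD_SIZE"])
```

If the script is started with plain `python` instead of the launcher, none of these variables are set, and DistributedDataParallel fails with exactly the error above.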


Hi @ArrowLuo, I use Google Colab to run the main_task_caption script. Is there any way to launch the distributed computation in Colab?

@lokeaichirou Sorry, I am not familiar with Google Colab.
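For readers hitting the same error in a notebook, where torch.distributed.launch is not practical, a minimal sketch of initializing a single-process group manually before constructing DistributedDataParallel. This is an assumption-laden workaround, not the repository's supported path: the address/port values are illustrative, and "gloo" is chosen here only because it initializes without a GPU (the repo itself uses "nccl"):

```python
import os
import torch.distributed as dist

# Mimic the env vars that torch.distributed.launch would set
# (illustrative values; any free port works).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")

# Create the default process group for a single process.
# rank=0, world_size=1 means this process is the entire "cluster".
dist.init_process_group(backend="gloo", rank=0, world_size=1)

print(dist.is_initialized())  # the DDP constructor error no longer fires

dist.destroy_process_group()
```

With the default group initialized, the `_get_default_group()` call inside DistributedDataParallel's `__init__` succeeds instead of raising the RuntimeError from the traceback.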