Distributed Training
NehaNishikant opened this issue · 0 comments
NehaNishikant commented
Hi, does `train_mhop.py` support distributed training?
I noticed a call to `torch.distributed.init_process_group`, but `n_gpu` is hardcoded to 1.
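For reference, this is the kind of setup I'd expect if multi-GPU training were supported: world size and rank taken from the launcher's environment variables rather than a hardcoded `n_gpu`. This is a generic sketch, not the repo's code, and the helper names (`setup_distributed`, `wrap_model`) are mine:

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def setup_distributed():
    # torchrun / torch.distributed.launch export these variables per process
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    world_size = int(os.environ.get("WORLD_SIZE", 1))

    if world_size > 1:
        dist.init_process_group(backend="nccl")
        torch.cuda.set_device(local_rank)

    return local_rank, world_size


def wrap_model(model, local_rank, world_size):
    model = model.cuda(local_rank)
    if world_size > 1:
        # One process per GPU; gradients are all-reduced across processes
        model = DDP(model, device_ids=[local_rank])
    return model
```

launched with something like `torchrun --nproc_per_node=4 train_mhop.py ...`. Is that the intended usage here, or is single-GPU the only supported mode?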