facebookresearch/multihop_dense_retrieval

Distributed Training

NehaNishikant opened this issue · 0 comments

Hi, does train_mhop.py support distributed (multi-GPU) training?

I noticed a call to torch.distributed.init_process_group, but n_gpu appears to be hardcoded to 1.
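For context, this is the standard pattern that a torch.distributed.init_process_group call implies. A minimal, hypothetical sketch (not the repo's actual code) using the CPU-only "gloo" backend with a single process, so it runs without GPUs; the variable names here are illustrative assumptions:

```python
import os
import torch
import torch.distributed as dist

# Rendezvous address for the process group; in a real multi-process launch
# (e.g. via torchrun) these are set by the launcher.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# Single-process group on the CPU "gloo" backend, just to show the pattern.
dist.init_process_group(backend="gloo", rank=0, world_size=1)

# If n_gpu were derived from the environment instead of hardcoded,
# it would typically come from something like this:
local_rank = int(os.environ.get("LOCAL_RANK", 0))
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0

world_size = dist.get_world_size()
dist.destroy_process_group()
```

With a launcher such as torchrun, one process per GPU is spawned and world_size/LOCAL_RANK are populated automatically, which is why a hardcoded n_gpu = 1 would effectively disable multi-GPU behavior even if the process group is initialized.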