pytorch/examples

If I am training on a SINGLE GPU, should this "--dist-backend 'gloo'" argument be added to the command?

HassanBinHaroon opened this issue · 10 comments

@Jaiaid

Is this "--dist-backend 'gloo'" be included in the terminal command if using a SINGLE GPU or having just one GPU on the machine?

Is the following example command correct for SINGLE GPU?

python main.py --dist-backend 'gloo' -a resnet18 [imagenet-folder with train and val folders]

Is that what your new committed warning implies?

@HassanBinHaroon
Yes.
It should be --dist-backend gloo (without the quotes); you do not need to quote command-line args.
But if your system's nccl version is < 2.5, it should be fine to use nccl.
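
For context, this is roughly what the flag does: the string from --dist-backend is handed to torch.distributed.init_process_group. A minimal sketch (the tcp:// address and the single-rank setup below are just illustrative, not a verbatim copy of main.py):

import argparse
import torch.distributed as dist

parser = argparse.ArgumentParser()
parser.add_argument("--dist-backend", default="gloo", type=str,
                    help="distributed backend, e.g. gloo or nccl")
args = parser.parse_args()

# No quotes are needed on the command line; argparse already receives
# the plain string, e.g. "gloo".
dist.init_process_group(backend=args.dist_backend,
                        init_method="tcp://127.0.0.1:23456",
                        world_size=1, rank=0)
print(f"initialized {dist.get_backend()} backend")
dist.destroy_process_group()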

@Jaiaid Thanks!

@Jaiaid Do nccl and gloo come with PyTorch?
If yes, how can I check their availability and version?

I am checking their availability using the following commands. Is this the right procedure?

"import torch.distributed as dist

print(dist.is_available()) # Should print True if distributed is available
print(dist.is_nccl_available()) # Should print True if NCCL is available
print(dist.is_gloo_available()) # Should print True if Gloo is available
"
Moreover, I am checking the nccl version using the following command. Again, please let me know whether this is the correct procedure.

import torch

print(torch.cuda.nccl.version())   # version is a function, so it needs to be called

The output is (2, 19, 3).
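
As a sanity check on my side, this is roughly how I am comparing that tuple against the 2.5 threshold you mentioned earlier (my own sketch; the (2, 5) cut-off comes from your comment, not from any official rule):

import torch

ver = torch.cuda.nccl.version()      # returns a tuple, e.g. (2, 19, 3)
print("NCCL", ".".join(map(str, ver)))

# Tuple comparison is element-wise, so this effectively checks "version >= 2.5".
if ver >= (2, 5):
    print("NCCL is 2.5 or newer")
else:
    print("NCCL is older than 2.5")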

Does it mean that I must use --dist-backend gloo?

Please elaborate. Thanks!

@HassanBinHaroon
Yes, they should come with PyTorch.
Yes; otherwise NCCL should throw an error (something like an internal check error), and to avoid that you have to pass --dist-backend gloo.
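
If it helps, here is a small defensive sketch for choosing a backend (my own illustration; pick_backend is a hypothetical helper, not something in main.py):

import torch
import torch.distributed as dist

def pick_backend() -> str:
    # Prefer NCCL when CUDA and the NCCL bindings are usable;
    # otherwise fall back to gloo, which also works on CPU-only machines.
    if torch.cuda.is_available() and dist.is_nccl_available():
        return "nccl"
    return "gloo"

print(pick_backend())   # pass the result as --dist-backend
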
BTW, have you actually tried running your code with NCCL?

@Jaiaid Yes, I have tried running the code with --dist-backend nccl. It logs the user warning that (I think) you recently added, and the code executes smoothly BTW.

Thank you for the information.

@HassanBinHaroon
If you are using one rank on one GPU then the nccl backend should be fine, but if there are multiple ranks using a single GPU then it will be an issue. I guess I should improve the message.
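
To illustrate what I mean (my own sketch, not code from the example): with one process per rank, ranks only end up sharing a GPU when the world size exceeds the number of visible devices.

import torch

world_size = 4                                   # assumed number of ranks/processes
num_gpus = max(torch.cuda.device_count(), 1)

for rank in range(world_size):
    # the usual mapping: each rank is pinned to cuda:(rank % num_gpus)
    print(f"rank {rank} -> cuda:{rank % num_gpus}")

if world_size > num_gpus:
    print("several ranks share one GPU; NCCL can be a problem here, gloo is safer")
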
In your case, are you using multiple ranks/processes on a single-GPU machine?

@Jaiaid I am using just a SINGLE GPU for training. I am not explicitly setting the rank; it's -1 by default. The command that I have been using is "python main.py -b 512 --dist-backend gloo -a resnet18 imagenet/"

@HassanBinHaroon
If you use "python main.py -b 512 --dist-backend nccl -a resnet18 imagenet/", does it run smoothly?


@Jaiaid Yes, it absolutely runs smoothly.