facebookresearch/fairseq-lua

Pointers to distributed training

Closed this issue · 2 comments

Hi,

I am wondering whether it is possible to run multi-GPU training across multiple nodes. Any pointers would be helpful.
Thanks!

You can probably do this with NCCL by now, but we do not support it in this project.
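
For reference, a minimal sketch of what multi-node, multi-GPU training with the NCCL backend looks like in PyTorch (outside this Lua project; the model, ports, and launch parameters below are illustrative only, not part of fairseq-lua):

```python
# Minimal multi-node, multi-GPU setup using the NCCL backend (PyTorch, not fairseq-lua).
# Launch one process per GPU on every node (e.g. with torchrun), which sets
# RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR and MASTER_PORT in the environment.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # Each process is pinned to one local GPU.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # NCCL handles the GPU-to-GPU communication, including across nodes.
    dist.init_process_group(backend="nccl")

    # Any model wrapped in DDP has its gradients all-reduced across all processes.
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # ... usual training loop; DDP synchronizes gradients during backward() ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched on each of two 8-GPU nodes with something like `torchrun --nnodes=2 --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<master-host>:29500 train.py` (the hostname and port are placeholders).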

Thanks!