Run the experiment with distributed mode
hwang595 opened this issue · 1 comment
hwang595 commented
Hi @JiahuiYu, thanks a lot for releasing the code for your awesome work! I'm inspired a lot by reading through your implementation.
I have one quick question about running the slimmable ResNet-50 experiment in distributed mode. The current configuration file triggers this line by default. However, I have a server with 16 GPUs, where DataParallel is not efficient. How can I enable training with distributed or distributed_all_reduce, e.g. as implemented here?
Thanks a lot in advance.
Best,
Hongyi
JiahuiYu commented
Hi Hongyi,
Just add distributed: True to the config file and you should be ready to run.
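For illustration, a minimal sketch of what that change might look like in a YAML config (the surrounding keys here are hypothetical placeholders, not copied from the repository's actual config files):

```yaml
# Hypothetical config sketch: only the `distributed: True` line is
# from the answer above; the other keys are illustrative placeholders.
model: resnet50
num_gpus: 16
distributed: True   # switch from DataParallel to distributed training
```

The exact set of surrounding keys depends on the repository's own config schema, so check an existing config file and add the `distributed: True` line alongside the options already there.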