Multiple GPUs
wangqian621 opened this issue · 1 comment
wangqian621 commented
How to use multiple GPUs for training? Do I need to write the function of setting multiple GPUs into the model?
airsplay commented
You can try distributed data parallel: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. However, since different nav rollouts have different lengths and PyTorch DDP performs synchronized updates, the speed of each step would equal that of the slowest rollout.