airsplay/R2R-EnvDrop

Multiple GPUs

wangqian621 opened this issue · 1 comment

How to use multiple GPUs for training? Do I need to write the function of setting multiple GPUs into the model?

You can try distributed data parallel: https://pytorch.org/tutorials/intermediate/ddp_tutorial.html. However, since different nav rollouts have different lengths and PyTorch DDP performs synchronized gradient updates, each step would run at the speed of the slowest rollout.
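For reference, here is a minimal DDP sketch. It is not this repo's actual training loop: the `torch.nn.Linear` model and the random batch are hypothetical placeholders standing in for the nav agent and its rollouts.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK; launch with:
    #   torchrun --nproc_per_node=NUM_GPUS train_ddp.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; the real nav agent would go here.
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    for step in range(100):
        # Placeholder batch standing in for a rollout.
        x = torch.randn(8, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        # Gradients are all-reduced across GPUs during backward(),
        # so every process waits for the slowest one at each step.
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The synchronization point is the gradient all-reduce inside `backward()`: because rollouts of different lengths finish at different times, the per-step wall-clock time is set by the slowest GPU.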