drprojects/superpoint_transformer

How to train on different devices?

Closed this issue · 2 comments

Hello,
If I have 4 GPUs on 1 node and I want to train separate models, each on its own GPU (e.g. training1 → gpu0, training2 → gpu1, ...), how do I specify which GPU each training run should use (e.g. gpu0, gpu1)?

The code is based on the lightning-hydra-template, which itself builds on PyTorch Lightning. Please have a look at the corresponding documentation for device selection: the Lightning `Trainer` exposes `accelerator` and `devices` arguments, and the template lets you override them from the command line via Hydra.
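As a sketch, assuming the template's usual entry point (`src/train.py`) and a trainer config exposing the standard Lightning `Trainer` arguments, you could pin each run to a single GPU either through a Hydra override or by masking visible devices with `CUDA_VISIBLE_DEVICES`:

```shell
# Option 1: Hydra override of the Lightning Trainer arguments
# (devices=[0] selects GPU index 0; change the index per run)
python src/train.py trainer.accelerator=gpu trainer.devices=[0]
python src/train.py trainer.accelerator=gpu trainer.devices=[1]

# Option 2: restrict which GPU the process can see at all,
# then train on "the only" visible device
CUDA_VISIBLE_DEVICES=0 python src/train.py trainer.accelerator=gpu trainer.devices=1
CUDA_VISIBLE_DEVICES=1 python src/train.py trainer.accelerator=gpu trainer.devices=1
```

The exact config group names (`trainer.accelerator`, `trainer.devices`) are an assumption based on the lightning-hydra-template defaults; check this repo's `configs/trainer/` for the actual keys. Note that with `CUDA_VISIBLE_DEVICES=1`, the masked GPU appears to the process as device 0.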

Thank you!