Do we maintain two copies of the same model in two processes?
Opened this issue · 1 comment
LukeLIN-web commented
Line 106 in 82ee2a3
Is this actually useful in practice? I think we eventually only need one model.
How can I tell which GPU holds each data replica and each model replica?
LukeLIN-web commented
we need to use code like the following, so that each process (rank) pins its own model replica and its data to its own GPU:
device = torch.device("cuda:{}".format(rank))
model = Net().to(device)
data, target = data.to(device), target.to(device)