odegeasslbc/Progressive-GAN-pytorch

cannot use multiple gpus

fanbyprinciple opened this issue · 0 comments

When I uncomment the line that wraps the models in DataParallel to use multiple GPUs, the accumulate function fails with a key error: DataParallel prefixes every parameter name with "module.", so the plain names no longer match. After changing it to:

def accumulate(model1, model2, decay=0.999):
    par1 = dict(model1.named_parameters())
    par2 = dict(model2.named_parameters())

    # sanity check: both models should report the same number of parameters
    print(len(par1.keys()))
    print(len(par2.keys()))

    for k in par1.keys():
        # model2 is wrapped in DataParallel, so its parameter names
        # carry a "module." prefix
        k_module = "module." + k
        par1[k].data.mul_(decay).add_(1 - decay, par2[k_module].data)
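
For reference, an alternative that avoids the "module." prefix bookkeeping altogether would be to unwrap the DataParallel container before reading its parameters. A minimal sketch, assuming model2 is the DataParallel-wrapped generator and model1 is the plain EMA copy (g_running):

import torch.nn as nn

def accumulate(model1, model2, decay=0.999):
    # Unwrap DataParallel so both models report the same plain parameter
    # names, with no "module." prefix to account for.
    if isinstance(model2, nn.DataParallel):
        model2 = model2.module

    par1 = dict(model1.named_parameters())
    par2 = dict(model2.named_parameters())

    for k in par1.keys():
        par1[k].data.mul_(decay).add_(par2[k].data, alpha=1 - decay)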

Now a new error has cropped up:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [55], in <cell line: 27>()
     24 g_optimizer = optim.Adam(generator.parameters(), lr=args.lr, betas=(0.0, 0.99))
     25 d_optimizer = optim.Adam(discriminator.parameters(), lr=args.lr, betas=(0.0, 0.99))
---> 27 accumulate(g_running, generator, 0)
     29 loader = imagefolder_loader(args.path)
     31 print(loader.__len__)

Input In [54], in accumulate(model1, model2, decay)
      9 for k in par1.keys():
     10     k_module = "module." + k
---> 11     par1[k].data.mul_(decay).add_(1 - decay, par2[k_module].data)

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
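
I suspect the cause is that g_running is never moved to the GPU, so its parameters stay on the CPU while the DataParallel generator's parameters live on a CUDA device. A minimal sketch of what I think a device-safe version of the loop would look like, assuming that mismatch is indeed the problem:

def accumulate(model1, model2, decay=0.999):
    par1 = dict(model1.named_parameters())
    par2 = dict(model2.named_parameters())

    for k in par1.keys():
        k_module = "module." + k
        # Copy the source tensor onto the target parameter's device before
        # the in-place update, so mul_/add_ never mixes cuda and cpu tensors.
        src = par2[k_module].data.to(par1[k].data.device)
        par1[k].data.mul_(decay).add_(src, alpha=1 - decay)

Alternatively, moving g_running onto the generator's device up front (e.g. g_running.cuda() before the accumulate(g_running, generator, 0) call) should let the original loop work unchanged; I have not verified which device each model actually ends up on.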

Any response is welcome.