Code from the ETRI project.
Wrapped the training loop with torch.nn.parallel.DistributedDataParallel to train on multiple GPUs, giving roughly a 4x speedup when run on 4 GPUs.
Forked from: https://github.com/eriklindernoren/PyTorch-GAN