This repo provides test code for running a PyTorch model on multiple GPUs.
You can find the environment setup for multiple GPUs in this repo.
You only need to wrap your model with torch.nn.DataParallel:
model = nn.DataParallel(model)
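A minimal sketch of the wrapping step above (the ToyModel class, layer sizes, and batch shape are illustrative placeholders, not code from this repo):

```python
import torch
import torch.nn as nn

class ToyModel(nn.Module):
    """Stand-in for any model you want to parallelize."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = ToyModel()
if torch.cuda.device_count() > 1:
    # DataParallel splits each input batch along dim 0 across the
    # available GPUs, runs the replicas, and gathers the outputs.
    model = nn.DataParallel(model)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

out = model(torch.randn(4, 10).to(device))
print(out.shape)  # torch.Size([4, 2])
```

Note that DataParallel only kicks in when more than one GPU is visible; on a single GPU (or CPU) the model runs unwrapped, so the same script works in both environments.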
You may check the code here to test your multiple-GPU environment. The code is mainly adapted from this tutorial.
Sample code for running a deep learning model is provided in this folder; it replicates the paper Maximum Classifier Discrepancy for Unsupervised Domain Adaptation.
Note that once the model is wrapped, instead of using model.xxx, you must access the model's attributes through model.module.xxx.
[ref: https://discuss.pytorch.org/t/how-to-reach-model-attributes-wrapped-by-nn-dataparallel/1373]
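A short sketch of that attribute-access rule (the ToyModel class and its fc layer are hypothetical names for illustration):

```python
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

model = nn.DataParallel(ToyModel())

# model.fc would raise AttributeError: DataParallel exposes the
# wrapped model only through its .module attribute.
fc = model.module.fc
print(fc.out_features)  # 2

# The same applies when saving weights: use model.module.state_dict()
# so the checkpoint keys are not prefixed with "module.".
state = model.module.state_dict()
print(list(state.keys()))  # ['fc.weight', 'fc.bias']
```

Saving through model.module also keeps checkpoints loadable by an unwrapped copy of the model, which is handy when you train on multiple GPUs but run inference on one.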