Can multiple GPUs be utilized with the PyTorch DataParallel class?
Tpool1 opened this issue · 1 comment
Tpool1 commented
Hello,
I would like to use the 3D_GMIC model with a batch size greater than 1 via the PyTorch DataParallel class and multiple GPUs. I noticed there are assertions at the beginning of the forward function that expect a batch size of exactly 1. Has anyone successfully run the model with a larger batch size? Any assistance would be greatly appreciated!
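For context, below is a minimal sketch of the pattern I am attempting. `TinyModel` is only a stand-in for the real 3D_GMIC network (the class name, layers, and tensor shapes are placeholders, not the repo's actual code), but it illustrates how `nn.DataParallel` slices the batch across GPUs and why the batch-size-1 assertions in forward() would need to be relaxed:

```python
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    """Placeholder standing in for 3D_GMIC; not the actual model code."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv3d(1, 8, kernel_size=3, padding=1)

    def forward(self, x):
        # The real 3D_GMIC forward() begins with assertions requiring a
        # batch size of 1. Under DataParallel, each replica receives a
        # slice of the input batch (split along dim 0), so those
        # assertions would have to be removed or relaxed to allow a
        # per-GPU batch size greater than 1.
        return self.conv(x)

# Requires at least 2 visible CUDA devices.
model = nn.DataParallel(TinyModel(), device_ids=[0, 1]).to("cuda:0")

# Batch of 4 volumes; DataParallel sends 2 to each GPU.
# The shape (N, C, D, H, W) here is illustrative, not 3D_GMIC's real input size.
batch = torch.randn(4, 1, 16, 64, 64, device="cuda:0")
with torch.no_grad():
    out = model(batch)
print(out.shape)  # (4, 8, 16, 64, 64), gathered back on cuda:0
```

Is simply removing the assertions safe here, or does the model's logic (e.g. the patch retrieval step) genuinely depend on a batch size of 1?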