A simple reference implementation for training multiple models in parallel on one or more GPUs with PyTorch.
A minimal example of training multiple networks on MNIST in parallel using a multiprocessing queue:
- `single_gpu.py` — parallel training of multiple models on a single GPU
- `multi_gpu.py` — as above, but with multiple GPUs
- `main.py` — a simple implementation of population based training, from here.
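The scripts above share a common pattern: a pool of worker processes pulls model configurations off a shared queue, trains them independently, and pushes results back. Below is a minimal sketch of that pattern using only the standard library `multiprocessing` module (PyTorch's `torch.multiprocessing` exposes the same API); the training step is stubbed out with a placeholder computation, and the function and variable names are illustrative, not taken from the repository.

```python
import multiprocessing as mp

def worker(task_queue, result_queue):
    # Each worker pulls (model_id, lr) configurations off the shared queue.
    # In the real scripts this is where a model would be built, moved to a
    # GPU, and trained on MNIST; here "training" is a placeholder formula.
    while True:
        task = task_queue.get()
        if task is None:  # sentinel value: no more work
            break
        model_id, lr = task
        final_loss = 1.0 / (1.0 + lr * model_id)  # stand-in for a real loss
        result_queue.put((model_id, final_loss))

def train_population(configs, num_workers=2):
    # Dispatch every configuration to a fixed pool of worker processes
    # and collect one result per configuration.
    task_queue = mp.Queue()
    result_queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(task_queue, result_queue))
               for _ in range(num_workers)]
    for w in workers:
        w.start()
    for cfg in configs:
        task_queue.put(cfg)
    for _ in workers:
        task_queue.put(None)  # one sentinel per worker so all of them exit
    results = dict(result_queue.get() for _ in configs)
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(train_population([(1, 0.1), (2, 0.1), (3, 0.1)]))
```

Pinning one GPU per worker is then a matter of passing a device id in each task and calling `model.to(device)` inside the worker.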
Requirements:

- PyTorch >= 1.0.0