rosinality/alias-free-gan-pytorch

GPU Scalability Bug (GPU 0 uses 4x the VRAM of all others)

Skylion007 opened this issue · 2 comments

I tried running this on multiple GPUs and noticed that all of the processes allocate memory on the first GPU (GPU 0). This essentially limits me to a batch size of just 1, when each GPU could support closer to a batch size of 4 on its own.
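A common cause of this symptom in PyTorch DDP setups (not necessarily the one fixed in the PR below) is that CUDA work happens before each process is pinned to its local GPU, so tensors created with a bare `.cuda()` or `device="cuda"` land on `cuda:0` for every rank. A minimal sketch of the usual guard, assuming `LOCAL_RANK` is set by `torchrun`:

```python
# Sketch of the usual DDP device-pinning guard; not the repo's actual code.
import os

import torch
import torch.distributed as dist

local_rank = int(os.environ["LOCAL_RANK"])

# Pin this process to its own GPU *before* any CUDA allocation happens,
# so unqualified .cuda() / device="cuda" calls resolve to cuda:<local_rank>
# instead of defaulting to cuda:0.
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")

device = torch.device("cuda", local_rank)
# Example: latents created explicitly on the local device rather than cuda:0.
z = torch.randn(4, 512, device=device)
```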

Confirmed: the difference is entirely caused by the sampling code. I optimized it and submitted a PR.
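For context on how sampling code can produce this kind of imbalance, here is a hedged sketch (not the actual PR): creating latents without an explicit device puts them on `cuda:0` for every process, and generating all samples in one pass holds every image in GPU memory at once. Sampling on the local device, in chunks, under `torch.no_grad()` avoids both. The names `generator`, `n_sample`, `latent_dim`, and the chunk size are illustrative assumptions, not the repo's API:

```python
# Hedged sketch of a lower-memory, device-aware sampling loop;
# generator / n_sample / latent_dim are illustrative, not from the PR.
import torch


@torch.no_grad()
def sample_images(generator, n_sample, latent_dim, device, batch=4):
    images = []
    for start in range(0, n_sample, batch):
        # Create latents on this process's own GPU rather than the default cuda:0.
        z = torch.randn(min(batch, n_sample - start), latent_dim, device=device)
        # Move each chunk off the GPU immediately so sampling never holds
        # the full set of generated images in device memory.
        images.append(generator(z).cpu())
    return torch.cat(images, 0)
```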