Train on 2 or more GPUs
muranski opened this issue · 1 comment
muranski commented
Awesome work!
How should I change this code to train on 2 or more GPUs?
seungwonpark commented
Probably you won't have to. It has been shown that a batch size of 16 yields the best audio fidelity (see #14), and that only consumes about 4~5GB of GPU memory.
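That said, if you still want to try multi-GPU training, the usual generic PyTorch approach is to wrap the model in `nn.DataParallel` when more than one GPU is visible. This is a minimal sketch, not taken from this repo's training script; the helper name `maybe_parallelize` is made up for illustration, and you would apply it to the generator/discriminator before the training loop:

```python
import torch
import torch.nn as nn

def maybe_parallelize(model):
    # Hypothetical helper: wrap the model in DataParallel only when
    # more than one CUDA device is visible; otherwise return it as-is.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    return model

# Example with a stand-in module (replace with the actual model):
model = maybe_parallelize(nn.Linear(4, 2))
```

Note that `DataParallel` splits each batch across GPUs, so with the recommended batch size of 16 the per-GPU batch gets smaller, which may change training dynamics. For serious multi-GPU runs, `torch.nn.parallel.DistributedDataParallel` is generally preferred over `DataParallel`.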