lucidrains/byol-pytorch

Advice on how to avoid CUDA out of memory?

HyamsG opened this issue · 0 comments

On which machine did you manage to train this code?
When I run it with batch size = 1, image size 128, and accelerator='dp' or the default (ddp_spawn, which does not make sense to me) on 8 16GB GPUs, I get a CUDA out-of-memory error.
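For reference, the setup being described might look roughly like the sketch below. This is a hypothetical reconstruction, not the repo's actual training script: the `SelfSupervisedLearner` wrapper, optimizer choice, and learning rate are assumptions; only `BYOL(net, image_size=..., hidden_layer=...)` follows the library's documented API.

```python
# Hypothetical reconstruction of the reported setup (wrapper class,
# optimizer, and lr are assumptions, not taken from the repo).
import pytorch_lightning as pl
import torch
from torchvision import models
from byol_pytorch import BYOL

class SelfSupervisedLearner(pl.LightningModule):
    def __init__(self, net, **kwargs):
        super().__init__()
        self.learner = BYOL(net, **kwargs)

    def training_step(self, images, _):
        # BYOL returns its loss directly from the forward pass
        return self.learner(images)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=3e-4)

resnet = models.resnet50(pretrained=True)
model = SelfSupervisedLearner(resnet, image_size=128, hidden_layer='avgpool')

trainer = pl.Trainer(
    gpus=8,
    accelerator='dp',  # as reported; omitting this falls back to ddp_spawn
)
# trainer.fit(model, train_dataloader)  # reportedly OOMs even at batch_size=1
```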