lena-voita/good-translation-wrong-in-context

How to utilise all available GPU memory?

Closed this issue · 1 comment

Hi!
How do I get the model to utilise all available GPU memory on each GPU?
I tried changing --batch-len, --optimizer, --optimizer-opts and some other parameters, but I can't seem to get it to use anything other than 416MiB per GPU.
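
For context: in a TensorFlow 1.x setup (which I assume this repo uses), the per-process GPU memory footprint is governed by the session's GPU options, and by default TF grabs nearly all free memory. A flat 416MiB therefore suggests the GPU isn't really being used for compute. A minimal sketch of those options, not taken from this repo's code:

```python
# Minimal sketch of TF 1.x GPU memory options (assumed setup, not this repo's code).
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate memory on demand instead of all upfront
# config.gpu_options.per_process_gpu_memory_fraction = 0.9  # or cap the allocation explicitly

sess = tf.Session(config=config)
print(sess.list_devices())  # should list /device:GPU:0 if CUDA/cuDNN are set up correctly
```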

Here I'm training 3 models in parallel:
[screenshot of nvidia-smi output]

Thanks!

It seems I had problems with my CUDA and cuDNN versions... it works just great on a different machine :)
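
For anyone who hits the same symptom: a quick sanity check like the one below (a minimal sketch assuming a TensorFlow 1.x install, nothing repo-specific) fails loudly when CUDA/cuDNN are mismatched, instead of quietly leaving the GPUs almost idle.

```python
# Sanity check for GPU availability under TF 1.x (assumed environment, not from this repo).
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
print([d.name for d in device_lib.list_local_devices()])  # should include /device:GPU:0

# Force a small op onto the GPU; this raises an error if the GPU can't actually be used.
with tf.Graph().as_default():
    with tf.device('/device:GPU:0'):
        a = tf.random_normal([1024, 1024])
        b = tf.matmul(a, a)
    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        sess.run(b)
```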