fidler-lab/polyrnn-pp-pytorch

Runtime error: CUDA error: out of memory

abinjoabraham opened this issue · 4 comments

I was trying to run the demo using the model ../models/ggnn_epoch5_step14000.pth, and while running the code I hit a GPU out-of-memory error. I have an Nvidia Quadro GPU with 2 GB of memory.

Is there somewhere in the program where I can adjust the data processing chunk size so that it will run on my GPU?

Hi, this is weird since the model takes a maximum of about 1100 MB on the GPU, so if you have that much free, it should run!
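One quick way to see how much GPU memory is actually free before running the demo is `nvidia-smi` on the command line, or a couple of lines of PyTorch. This is just an illustrative snippet, not part of the repo's code, and `torch.cuda.mem_get_info` needs a reasonably recent PyTorch version:

```python
import torch

# Report free vs. total memory on the first CUDA device, so you can see
# whether other applications are already holding most of it.
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info(torch.device("cuda:0"))
    print(f"Free: {free_bytes / 1024**2:.0f} MB of {total_bytes / 1024**2:.0f} MB total")
else:
    print("No CUDA device visible to PyTorch.")
```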

@amlankar Thanks a lot, Amlan. You were right: some of the GPU memory was held by other applications, and the runtime error was resolved once I restarted my system. Now it's working perfectly for me. :)

Great! Just leaving a note here: the inference model (tool) takes around 1.1 GB on the GPU at most. During training, different models take different amounts of GPU memory; the most I have seen is around 10 GB while training the cross-entropy model. The easiest way to reduce this is to lower the batch size during training (which will also lead to lower performance). A sketch of what that looks like is below.
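To make the batch-size suggestion concrete, here is a minimal sketch of the trade-off when building a training DataLoader in PyTorch. The dataset below is a dummy stand-in just to keep the example self-contained, not the repo's actual data pipeline; if the training scripts read the batch size from an experiment config, change it there instead:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in data; the real repo builds its own dataset objects.
dummy_images = torch.randn(100, 3, 224, 224)
dummy_targets = torch.zeros(100, dtype=torch.long)
dataset = TensorDataset(dummy_images, dummy_targets)

# Halving the batch size roughly halves the activation memory per step,
# at the cost of noisier gradients (and, as noted above, some performance).
batch_size = 4  # e.g. lowered from 8 or 16 to fit a smaller GPU

loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, num_workers=2)
```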

Closing for now, feel free to reopen if the issue isn't solved!