RuntimeError: CUDA out of memory
schrtim opened this issue · 1 comment
Hi, I am trying to run the new version that you recently uploaded to fix a bug with the new torch version.
While trying to run the code now, I am prompted with the following error:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.82 GiB total capacity; 1.99 GiB already allocated; 16.00 MiB free; 2.00 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I have already tried to clear the CUDA cache with:
import torch
torch.cuda.memory_summary(device=None, abbreviated=False)  # diagnostic report only; frees nothing
torch.cuda.empty_cache()  # releases cached blocks, but not tensors that are still referenced
However, this does not resolve the issue.
Could this be resolved by reducing the batch size, or is my GPU just too small?
Thanks for your help!
DeepGaze needs quite a bit of GPU memory. I use a batch size of 4 on 2080s, which have 12 GB of RAM, although right now I'm not sure whether larger batch sizes would have worked for me. If even a batch size of 1 doesn't work for you, you can still run the model on the CPU (only training would probably be infeasible).
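As the error message suggests, you could also try limiting the allocator's split size to reduce fragmentation. This mainly helps when reserved memory far exceeds allocated memory, and the value of 128 below is just a starting point to tune:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

or, equivalently, from Python before the first CUDA allocation:

import os
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'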
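For reference, here is a minimal sketch of the CPU fallback with a batch size of 1. The DeepGazeIIE entry point and the (image, centerbias) call signature follow this repo's README, but the input shapes are only illustrative; adjust them to your actual data:

import torch
import deepgaze_pytorch

# Run on CPU: much slower, but not limited by GPU memory
device = torch.device('cpu')
model = deepgaze_pytorch.DeepGazeIIE(pretrained=True).to(device)
model.eval()

# Dummy batch of size 1: an RGB image and a centerbias log-density prior
image = torch.zeros(1, 3, 768, 1024)
centerbias = torch.zeros(1, 768, 1024)

with torch.no_grad():
    log_density = model(image.to(device), centerbias.to(device))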