About GPU memory
Can the code run on a GPU with ≤ 24GB of VRAM?
I get the following error on a 24GB VRAM machine:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 2.18 GiB (GPU 0; 23.64 GiB total capacity; 18.12 GiB already allocated; 1.88 GiB free; 20.57 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
How should I adjust the code/config?
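As a stopgap, the `max_split_size_mb` hint in the error message can be tried by setting `PYTORCH_CUDA_ALLOC_CONF` before CUDA initializes. This is a minimal sketch, not a fix from this repo, and the value `128` is an arbitrary example:

```python
import os

# The allocator reads this setting when it first initializes, so it must be
# set before the first CUDA tensor is created (safest: before importing
# anything that touches CUDA).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the CUDA caching allocator picks up the setting on first use
```

This only mitigates fragmentation (reserved >> allocated); it does not reduce the amount of memory the model actually needs.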
Hi guys, I am so sorry for my late reply!
Please check out my latest commit. I fixed this memory problem by limiting the number of detections forwarded per image to 4 (previously, all detections were forwarded at once).
It requires much less memory now. Please don't hesitate to re-open the issue if there are any problems.
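For readers hitting the same OOM on an older commit, the idea behind the fix looks roughly like the sketch below. It is not the repo's actual code; `head`, `detections`, and `forward_in_chunks` are hypothetical names, and it assumes the detections for an image are stacked in a single tensor:

```python
import torch

def forward_in_chunks(head, detections, chunk_size=4):
    # Run the per-detection head on at most `chunk_size` detections at a
    # time, so peak activation memory is bounded by the chunk size rather
    # than by the total number of detections in the image.
    outputs = []
    for start in range(0, detections.size(0), chunk_size):
        outputs.append(head(detections[start:start + chunk_size]))
    return torch.cat(outputs, dim=0)

# e.g.: out = forward_in_chunks(model.head, dets, chunk_size=4)
```

A smaller `chunk_size` lowers peak VRAM at the cost of more forward calls, so it can be tuned to the GPU at hand.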