TheLastBen/PPS

CUDA Out of Memory Error on Base Install


I'm running on Paperspace Gradient Notebooks with a P4000 GPU and 50GB of persistent storage. I run the default notebook, go through the installation steps, start the Gradio page, and keep getting this error whenever I try to run a "prediction" (any kind of ML generation):

```
OutOfMemoryError: CUDA out of memory. Tried to allocate 10.00 MiB (GPU 0; 7.92 GiB total capacity; 6.80 GiB already allocated; 1.69 MiB free; 7.04 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Any ideas? Is it just a GPU with too little VRAM? I see that Stable Diffusion's recommendation is a GPU with at least 6GB of VRAM, and this one has 8GB, so why would that be the problem?
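
For reference, the max_split_size_mb setting mentioned at the end of the error is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable; it only mitigates fragmentation and won't help if the model genuinely doesn't fit. A minimal sketch of setting it (the 128 MiB value is an arbitrary example, not a tested recommendation, and it has to be set before PyTorch makes its first CUDA allocation):

```python
# Sketch only: configure the caching allocator *before* torch touches CUDA.
# 128 MiB is an example split size; smaller values reduce fragmentation at
# some cost in allocator overhead.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
print(torch.cuda.get_device_name(0))  # e.g. "Quadro P4000"
print(torch.cuda.get_device_properties(0).total_memory / 1024**3, "GiB total")
```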

If you're running SDXL, 8GB might not always be enough. It may work at 1024x1024, but if you use high-res fix or the refiner, it will crash.
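
For reference, these are the usual memory-saving options for SDXL on an ~8GB card, shown as a generic diffusers sketch rather than this notebook's own code path (model ID, prompt, and settings are just placeholders):

```python
# Generic diffusers sketch (not the PPS notebook's own code) of running SDXL
# base at 1024x1024 with the common memory-saving options for ~8GB of VRAM.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # half precision roughly halves VRAM use
    variant="fp16",
    use_safetensors=True,
)

pipe.enable_model_cpu_offload()  # keeps only the active submodule on the GPU (needs accelerate)
pipe.enable_vae_slicing()        # decodes the latent in slices to cap peak VRAM

image = pipe(
    "a photo of an astronaut riding a horse",
    height=1024, width=1024,     # going much above this is what tends to OOM on 8GB
    num_inference_steps=30,
).images[0]
image.save("out.png")
```

The refiner is a second full pipeline loaded on top of the base model, which is why it tends to be the thing that tips an 8GB card over the edge.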