0xbitches/sd-webui-lcm

Out of memory during loading


When using the GPU, the following error occurs during startup.
Is there a minimum VRAM requirement here?

webui-docker-auto-1  | *** Error loading script: main.py
webui-docker-auto-1  |     Traceback (most recent call last):
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
webui-docker-auto-1  |         script_module = script_loading.load_module(scriptfile.path)
webui-docker-auto-1  |       File "/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
webui-docker-auto-1  |         module_spec.loader.exec_module(module)
webui-docker-auto-1  |       File "<frozen importlib._bootstrap_external>", line 883, in exec_module
webui-docker-auto-1  |       File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
webui-docker-auto-1  |       File "/stable-diffusion-webui/extensions/sd-webui-lcm/scripts/main.py", line 75, in <module>
webui-docker-auto-1  |         pipe.to("cuda")
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 733, in to
webui-docker-auto-1  |         module.to(torch_device, torch_dtype)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1145, in to
webui-docker-auto-1  |         return self._apply(convert)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
webui-docker-auto-1  |         module._apply(fn)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
webui-docker-auto-1  |         module._apply(fn)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 797, in _apply
webui-docker-auto-1  |         module._apply(fn)
webui-docker-auto-1  |       [Previous line repeated 2 more times]
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 820, in _apply
webui-docker-auto-1  |         param_applied = fn(param)
webui-docker-auto-1  |       File "/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1143, in convert
webui-docker-auto-1  |         return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
webui-docker-auto-1  |     torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.95 GiB total capacity; 3.84 GiB already allocated; 10.75 MiB free; 3.89 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
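As a side note, the allocator hint at the end of the traceback can be applied by setting `PYTORCH_CUDA_ALLOC_CONF` before PyTorch initializes CUDA. A minimal sketch (the `128` MiB value is an illustrative assumption, not a tuned number, and this mitigates fragmentation rather than an absolute shortage of VRAM):

```python
import os

# Must be set before `import torch` (or in the shell/docker environment
# that launches the webui), since the allocator reads it when CUDA is
# first initialized. max_split_size_mb caps the size of cached blocks
# the allocator will split, which can reduce fragmentation when
# reserved memory is much larger than allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```
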

This is due to Automatic1111 loading an SD checkpoint and the LCM model at the same time. Try "Settings -> Actions -> Unload SD checkpoint to free VRAM".
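For reference, a programmatic alternative to the Settings toggle would be to load the LCM pipeline with sequential CPU offload instead of calling `pipe.to("cuda")`. This is a hedged sketch, not what the extension currently does: `enable_sequential_cpu_offload()` is a standard diffusers pipeline method that keeps weights in system RAM and streams each submodule to the GPU only while it runs, lowering peak VRAM at the cost of speed. The `model_id` argument stands in for whatever checkpoint the extension loads.

```python
def load_lcm_low_vram(model_id: str):
    """Load a diffusion pipeline without moving it wholesale to CUDA.

    Sketch only: instead of `pipe.to("cuda")` (which needs the entire
    pipeline to fit in VRAM at once), sequential CPU offload keeps
    weights in RAM and moves one submodule at a time to the GPU.
    """
    from diffusers import DiffusionPipeline  # imported lazily; heavy dependency

    pipe = DiffusionPipeline.from_pretrained(model_id)
    pipe.enable_sequential_cpu_offload()  # replaces pipe.to("cuda")
    return pipe
```

This trades generation speed for a much lower VRAM floor, which may matter on ~4 GiB cards like the one in the traceback above.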

"Settings -> Actions -> Unload SD checkpoint to free VRAM"

This is exactly what I've been doing so far. Before doing that, I was not able to use 728x728 resolution, but now it works perfectly.

Closing this for now. I'll mostly be using #5 to keep track of lower-end machine optimizations.