tin2tin/Pallaidium

_offload_gpu_id

cl4usman opened this issue · 6 comments

I just created a sample text strip and hit generate...


Strip input processing started (ctrl+c to cancel).
Use file seed and prompt: Yes

1/1
Prompt: a woman singing, a woman singing, a woman singing, a car,
Negative Prompt: low quality
Load: runwayml/stable-diffusion-v1-5 Model
unet\diffusion_pytorch_model.fp16.safetensors not found
Loading pipeline components...: 71%|█████████████████████████████████████▏ | 5/7 [00:01<00:00, 3.69it/s]
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["id2label"] will be overriden.
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["bos_token_id"] will be overriden.
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["eos_token_id"] will be overriden.
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 7/7 [00:02<00:00, 3.38it/s]
Traceback (most recent call last):
File "C:\Users\claud\AppData\Roaming\Blender Foundation\Blender\3.6\scripts\addons\Pallaidium-main\__init__.py", line 2638, in execute
pipe.enable_model_cpu_offload()
File "C:\Users\claud\AppData\Roaming\Python\Python310\site-packages\diffusers\pipelines\pipeline_utils.py", line 1363, in enable_model_cpu_offload
self._offload_gpu_id = gpu_id or torch_device.index or self._offload_gpu_id or 0
File "C:\Users\claud\AppData\Roaming\Python\Python310\site-packages\diffusers\configuration_utils.py", line 137, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'StableDiffusionPipeline' object has no attribute '_offload_gpu_id'
Traceback (most recent call last):
File "C:\Users\claud\AppData\Roaming\Blender Foundation\Blender\3.6\scripts\addons\Pallaidium-main\__init__.py", line 3191, in execute
sequencer.generate_image()
File "C:\Program Files\Blender Foundation\Blender 3.6\3.6\scripts\modules\bpy\ops.py", line 113, in __call__
ret = _op_call(self.idname_py(), None, **kw)
RuntimeError: Error: Python: Traceback (most recent call last):
File "C:\Users\claud\AppData\Roaming\Blender Foundation\Blender\3.6\scripts\addons\Pallaidium-main\__init__.py", line 2638, in execute
pipe.enable_model_cpu_offload()
File "C:\Users\claud\AppData\Roaming\Python\Python310\site-packages\diffusers\pipelines\pipeline_utils.py", line 1363, in enable_model_cpu_offload
self._offload_gpu_id = gpu_id or torch_device.index or self._offload_gpu_id or 0
File "C:\Users\claud\AppData\Roaming\Python\Python310\site-packages\diffusers\configuration_utils.py", line 137, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'StableDiffusionPipeline' object has no attribute '_offload_gpu_id'
Location: C:\Program Files\Blender Foundation\Blender 3.6\3.6\scripts\modules\bpy\ops.py:113

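For reference, the failing diffusers line chains its fallbacks with `or`, so the lookup of `self._offload_gpu_id` is evaluated before the final `or 0` default can apply; on a pipeline where that attribute was never set, and whose `__getattr__` raises (as diffusers' config machinery does), the whole call fails. A minimal reproduction with a stand-in class, not the real diffusers code:

```python
class StubPipeline:
    # Mimics the behaviour shown in the traceback: unknown attributes
    # raise AttributeError instead of returning None.
    def __getattr__(self, name):
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")

pipe = StubPipeline()
gpu_id = None
torch_device_index = None

try:
    # The expression from pipeline_utils.py: each falsy fallback forces
    # evaluation of the next one, so the attribute access always happens.
    offload_gpu_id = gpu_id or torch_device_index or pipe._offload_gpu_id or 0
except AttributeError as exc:
    print(exc)  # 'StubPipeline' object has no attribute '_offload_gpu_id'

# A defensive variant that tolerates the missing attribute: three-argument
# getattr() swallows the AttributeError and returns the default instead.
offload_gpu_id = gpu_id or torch_device_index or getattr(pipe, "_offload_gpu_id", None) or 0
print(offload_gpu_id)  # 0
```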

Using the runwayml/stable-diffusion-v1-5 model works fine here with enable_model_cpu_offload() (6 GB VRAM, one graphics card), so it might be due to a hardware difference. What is your OS, how much VRAM do you have, and how many graphics cards?

If you have more than 6 GB of VRAM, you can change the limit where offloading kicks in at line 529:
[screenshot of the code around line 529]

In the latest version, the VRAM enhancements kick in under 6 GB of VRAM. I hope this solves your problem.
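The decision described above can be sketched as pure logic (illustrative names only, not the add-on's actual code; in practice the total VRAM would be read from the GPU, e.g. via `torch.cuda.get_device_properties(0).total_memory`, and the threshold lives around line 529):

```python
LOW_VRAM_LIMIT_GB = 6.0  # the limit mentioned above; raise it if you have more VRAM


def should_offload_to_cpu(total_vram_gb: float, limit_gb: float = LOW_VRAM_LIMIT_GB) -> bool:
    """Return True when the card is below the limit, meaning model layers
    should be offloaded to the CPU instead of keeping the whole pipeline
    on the GPU."""
    return total_vram_gb < limit_gb


print(should_offload_to_cpu(4.0))  # True: a low-VRAM card offloads to the CPU
print(should_offload_to_cpu(8.0))  # False: an 8 GB card keeps everything on the GPU
```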

I have a somewhat older NVIDIA GTX 1070 (8 GB GDDR5, 2048 CUDA cores) installed. I know it's a bit on the low end performance-wise.
I'll check how I can free up some resources to run Pallaidium.
Thanks for your time!

In theory, it should work better with your 8 GB of VRAM than with my RTX 2060's 6 GB. With the latest update of Pallaidium, your 8 GB of VRAM should be enough to load everything onto the GPU instead of offloading to the CPU, which seems to be the problem here. So try the new version and let me know how it goes.
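For anyone who hits this on an older install before updating, one hypothetical workaround is to seed the missing attribute before calling `enable_model_cpu_offload()`. Sketched here against a stub, since the real fix is simply upgrading:

```python
class StubPipeline:
    # Stand-in for the pipeline in the traceback: unknown attributes raise.
    def __getattr__(self, name):
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")


pipe = StubPipeline()

# hasattr() returns False here because __getattr__ raises; a plain
# assignment then puts the attribute in the instance dict, so the
# `or` chain inside enable_model_cpu_offload() can read it safely.
if not hasattr(pipe, "_offload_gpu_id"):
    pipe._offload_gpu_id = 0

print(pipe._offload_gpu_id)  # 0
```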

Hey, I downloaded the latest version and it finally worked!
You're awesome! Thanks!

Ah, cool! Nice to hear! If you make something nice, please share it! There is a Cookbook in the Discussions, so if you come up with some great settings or workflows, please share them there. Have fun.