AUTOMATIC1111/stable-diffusion-webui

[Bug]: When using ControlNet Union with "Low VRAM" checked, I get the error: "Expected all tensors to be on the same device..." (Details in thread.)

LinaWolfGoddess opened this issue · 0 comments

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I'm doing inpainting assisted by ControlNet Union (Depth and LineArt) and IPAdapter (FaceID), together with the IPAdapter LoRA and another LoRA for style, on an SDXL checkpoint. The checkpoint itself doesn't seem to matter, since I get the error with both of the ones I use (RealVis and FaeTastic).
I can't fit both Union instances, IPAdapter, the LoRAs, and SDXL in VRAM at once, so I normally check the "Low VRAM" option in ControlNet to reduce memory usage and keep the GPU from overflowing into shared memory (that overflow doesn't crash anything, it just slows generation down tremendously; it's not the issue here).

With "Low VRAM" checked, generation aborts with an error. The webui itself doesn't crash; it just breaks execution of the job. The error is: "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)"
(Full traceback in the console logs below.)

With "Low VRAM" unchecked, everything works.

If I've missed anything, please let me know; I'll provide as much info as I can.
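
For context on what I think the error itself is telling us: it's the generic PyTorch complaint you get when a module whose weights sit on one device is fed a tensor that sits on another. A trivial standalone sketch, unrelated to the webui code, just to show the same class of failure (needs a CUDA machine to run; either direction of mismatch triggers it):

    import torch

    # A linear layer left on the CPU, which is roughly what an offloaded /
    # "low VRAM" module looks like.
    layer = torch.nn.Linear(8, 8)

    # An input tensor that lives on the GPU.
    x = torch.randn(1, 8, device="cuda")

    # Raises "RuntimeError: Expected all tensors to be on the same device,
    # but found at least two devices ..." (the exact argument it names in the
    # message varies with which operand is mismatched).
    layer(x)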

Steps to reproduce the problem

  1. Set up any inpainting job in img2img.
  2. Enable ControlNet Union on two units, one set to Depth and one to LineArt (Hard Edge).
  3. Preprocess the ControlNet input image and use the result as the unit's input image. The preprocessor doesn't seem relevant: I preprocess once, drag the result into the input, and set the preprocessor to "None."
  4. Either add an IPAdapter unit and check its "Low VRAM" option, or check "Low VRAM" on either (or both) of the ControlNet Union units. It breaks either way.
  5. Click "Generate" with your favorite prompt. (A rough API version of these steps is sketched below.)
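
For anyone who'd rather trigger this outside the UI, here is a rough sketch of the same setup through the webui's /sdapi/v1/img2img endpoint (webui started with --api). I've only reproduced the bug through the UI myself, so treat the ControlNet unit field names as assumptions: I copied them from the ControlNetUnit dump in the console logs, and older versions of the extension's API spell some of them differently (e.g. "lowvram"). The image file names are placeholders, the second Union unit and the IPAdapter unit are omitted for brevity, and I don't know how the Union control type (Depth) is selected through the API, so that part is left out.

    import base64
    import requests

    def b64(path: str) -> str:
        # Encode an image file for the JSON payload.
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode()

    payload = {
        "prompt": "3d render, chibi fairy with bun updo",
        "init_images": [b64("inpaint_source.png")],   # placeholder file names
        "mask": b64("inpaint_mask.png"),
        "denoising_strength": 0.8,
        "steps": 46,
        "width": 1024,
        "height": 1024,
        "alwayson_scripts": {
            "controlnet": {
                "args": [
                    {
                        "module": "none",   # preprocessed image supplied directly
                        "model": "Controlnet--Union [15e6ad5d]",
                        "image": b64("depth_map.png"),
                        "weight": 0.5,
                        "pixel_perfect": True,
                        "guidance_start": 0.0,
                        "guidance_end": 1.0,
                        "low_vram": True,   # the toggle that breaks generation for me
                    },
                ]
            }
        },
    }

    resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
    print(resp.status_code)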

What should have happened?

It should just do the inpainting, preferably without breaking execution before it's done!

What browsers do you use to access the UI?

Microsoft Edge

Sysinfo

sysinfo-2024-10-19-16-14.json

Console logs

2024-10-19 18:52:19,143 - ControlNet - INFO - unit_separate = False, style_align = False 30/30 [00:44<00:00,  1.43s/it]
2024-10-19 18:52:19,481 - ControlNet - INFO - Loading model: ip-adapter-faceid-plusv2_sdxl [187cb962]
2024-10-19 18:52:20,487 - ControlNet - INFO - Loaded state_dict from [C:\Diffusion\webui\models\ControlNet\ip-adapter-faceid-plusv2_sdxl.bin]
2024-10-19 18:52:28,154 - ControlNet - INFO - ControlNet model ip-adapter-faceid-plusv2_sdxl [187cb962](ControlModelType.IPAdapter) loaded.
2024-10-19 18:52:28,189 - ControlNet - INFO - Using preprocessor: ip-adapter_face_id_plus
2024-10-19 18:52:28,189 - ControlNet - INFO - preprocessor resolution = 1024
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
find model: C:\Diffusion\webui\extensions\sd-webui-controlnet\annotator\downloads\insightface\models\buffalo_l\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5
set det-size: (640, 640)
2024-10-19 18:52:33,648 - ControlNet - WARNING - Insightface: More than one face is detected in the image. Only the biggest one will be used.
2024-10-19 18:53:10,457 - ControlNet - WARNING - Unable to determine version for ControlNet model 'Controlnet--Union [15e6ad5d]'.
2024-10-19 18:53:10,814 - ControlNet - INFO - Loading model: Controlnet--Union [15e6ad5d]
2024-10-19 18:53:10,987 - ControlNet - INFO - Loaded state_dict from [C:\Diffusion\webui\models\ControlNet\Controlnet--Union.safetensors]
2024-10-19 18:53:11,005 - ControlNet - INFO - controlnet_sdxl_config
2024-10-19 18:53:44,786 - ControlNet - INFO - ControlNet model Controlnet--Union [15e6ad5d](ControlModelType.ControlNetUnion) loaded.
2024-10-19 18:53:45,202 - ControlNet - INFO - Using preprocessor: none
2024-10-19 18:53:45,202 - ControlNet - INFO - preprocessor resolution = 1024
2024-10-19 18:53:45,409 - ControlNet - INFO - ControlNetUnion control type: ControlNetUnionControlType.DEPTH
2024-10-19 18:53:45,410 - ControlNet - WARNING - Unable to determine version for ControlNet model 'Controlnet--Union [15e6ad5d]'.
2024-10-19 18:53:45,412 - ControlNet - INFO - Loading model from cache: Controlnet--Union [15e6ad5d]
2024-10-19 18:53:45,620 - ControlNet - INFO - Using preprocessor: none
2024-10-19 18:53:45,621 - ControlNet - INFO - preprocessor resolution = 1024
2024-10-19 18:53:45,647 - ControlNet - INFO - ControlNetUnion control type: ControlNetUnionControlType.HARD_EDGE
2024-10-19 18:53:48,044 - ControlNet - INFO - ControlNet Hooked - Time = 88.91933393478394
  0%|                                                                                           | 0/46 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(7gaia0ozp7g9rh2)', <gradio.routes.Request object at 0x0000028736B25DE0>, 2, '3d render, chibi fairy with bun updo, almond-shaped slanted eyes, makeup, looking curiously. <lora:ip-adapter-faceid-plusv2_sdxl_lora:0.45>', '', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=1024x1024 at 0x28736C6B880>, 'mask': <PIL.Image.Image image mode=RGB size=1024x1024 at 0x28736C68910>}, None, None, None, None, 4, 0, 0, 4, 1, 7, 1.5, 1, 0.0, 1024, 1024, 1, 0, 1, 64, 0, '', '', '', [], False, [], '', 0, False, 1, 0.5, 4, 0, 0.5, 2, 46, 'Restart', 'Automatic', False, '', 0.8, 11628035, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x0000028736C6BDC0>, ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='ip-adapter_face_id_plus', model='ip-adapter-faceid-plusv2_sdxl [187cb962]', weight=1.0, image={'image': array([[[87, 64, 56],
***         [87, 64, 56],
***         [87, 64, 56],
***         ...,
***         [81, 53, 41],
***         [81, 53, 41],
***         [81, 53, 41]],
***
***        [[87, 64, 56],
***         [87, 64, 56],
***         [87, 64, 56],
***         ...,
***         [81, 53, 41],
***         [81, 53, 41],
***         [81, 53, 41]],
***
***        [[88, 65, 57],
***         [88, 65, 57],
***         [88, 65, 57],
***         ...,
***         [81, 53, 41],
***         [81, 53, 41],
***         [81, 53, 41]],
***
***        ...,
***
***        [[44, 65, 92],
***         [44, 65, 92],
***         [43, 64, 91],
***         ...,
***         [34, 36, 48],
***         [33, 35, 47],
***         [33, 35, 47]],
***
***        [[42, 66, 92],
***         [42, 66, 92],
***         [41, 65, 91],
***         ...,
***         [32, 34, 46],
***         [31, 33, 45],
***         [31, 33, 45]],
***
***        [[42, 66, 92],
***         [42, 66, 92],
***         [41, 65, 91],
***         ...,
***         [30, 32, 44],
***         [29, 31, 43],
***         [29, 31, 43]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        ...,
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=True, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=False, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.UNKNOWN: 'Unknown'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='none', model='Controlnet--Union [15e6ad5d]', weight=0.5, image={'image': array([[[ 14,  14,  14],
***         [ 14,  14,  14],
***         [ 14,  14,  14],
***         ...,
***         [ 17,  17,  17],
***         [ 16,  16,  16],
***         [ 15,  15,  15]],
***
***        [[ 14,  14,  14],
***         [ 14,  14,  14],
***         [ 14,  14,  14],
***         ...,
***         [ 17,  17,  17],
***         [ 16,  16,  16],
***         [ 16,  16,  16]],
***
***        [[ 14,  14,  14],
***         [ 14,  14,  14],
***         [ 14,  14,  14],
***         ...,
***         [ 17,  17,  17],
***         [ 17,  17,  17],
***         [ 17,  17,  17]],
***
***        ...,
***
***        [[254, 254, 254],
***         [254, 254, 254],
***         [254, 254, 254],
***         ...,
***         [176, 176, 176],
***         [176, 176, 176],
***         [176, 176, 176]],
***
***        [[254, 254, 254],
***         [254, 254, 254],
***         [254, 254, 254],
***         ...,
***         [176, 176, 176],
***         [176, 176, 176],
***         [176, 176, 176]],
***
***        [[254, 254, 254],
***         [254, 254, 254],
***         [253, 253, 253],
***         ...,
***         [176, 176, 176],
***         [176, 176, 176],
***         [176, 176, 176]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        ...,
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.DEPTH: 'Depth'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), ControlNetUnit(is_ui=True, input_mode=<InputMode.SIMPLE: 'simple'>, batch_images='', output_dir='', loopback=False, enabled=True, module='none', model='Controlnet--Union [15e6ad5d]', weight=0.5, image={'image': array([[[ 5,  5,  5],
***         [10, 10, 10],
***         [ 6,  6,  6],
***         ...,
***         [ 2,  2,  2],
***         [ 1,  1,  1],
***         [ 1,  1,  1]],
***
***        [[ 8,  8,  8],
***         [10, 10, 10],
***         [ 5,  5,  5],
***         ...,
***         [ 1,  1,  1],
***         [ 1,  1,  1],
***         [ 1,  1,  1]],
***
***        [[ 4,  4,  4],
***         [ 2,  2,  2],
***         [ 3,  3,  3],
***         ...,
***         [ 1,  1,  1],
***         [ 1,  1,  1],
***         [ 1,  1,  1]],
***
***        ...,
***
***        [[ 6,  6,  6],
***         [ 7,  7,  7],
***         [ 4,  4,  4],
***         ...,
***         [ 0,  0,  0],
***         [ 0,  0,  0],
***         [ 0,  0,  0]],
***
***        [[ 1,  1,  1],
***         [ 1,  1,  1],
***         [ 1,  1,  1],
***         ...,
***         [ 0,  0,  0],
***         [ 0,  0,  0],
***         [ 1,  1,  1]],
***
***        [[ 1,  1,  1],
***         [ 1,  1,  1],
***         [ 1,  1,  1],
***         ...,
***         [ 1,  1,  1],
***         [ 1,  1,  1],
***         [ 1,  1,  1]]], dtype=uint8), 'mask': array([[[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        ...,
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]],
***
***        [[0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0],
***         ...,
***         [0, 0, 0],
***         [0, 0, 0],
***         [0, 0, 0]]], dtype=uint8)}, resize_mode=<ResizeMode.INNER_FIT: 'Crop and Resize'>, low_vram=False, processor_res=1024, threshold_a=0.5, threshold_b=0.5, guidance_start=0.0, guidance_end=1.0, pixel_perfect=True, control_mode=<ControlMode.BALANCED: 'Balanced'>, inpaint_crop_input_image=True, hr_option=<HiResFixOption.BOTH: 'Both'>, save_detected_map=True, advanced_weighting=None, effective_region_mask=None, pulid_mode=<PuLIDMode.FIDELITY: 'Fidelity'>, union_control_type=<ControlNetUnionControlType.HARD_EDGE: 'Hard Edge'>, ipadapter_input=None, mask=None, batch_mask_dir=None, animatediff_batch=False, batch_modifiers=[], batch_image_files=[], batch_keyframe_idx=None), False, '', 0.5, True, False, '', 'Lerp', False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', 0, '', True, False, False, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50, False, False, 0, 'Range', 1, 'GPU', True, False, False, False, False, 0, 448, False, 448, False, False, 3, False, 3, True, 3, False, 'Horizontal', False, False, 'u2net', False, True, True, False, 0, 2.5, 'polylines_sharp', ['left-right', 'red-cyan-anaglyph'], 2, 0, False, '∯boost∯clipdepth∯clipdepth_far∯clipdepth_mode∯clipdepth_near∯compute_device∯do_output_depth∯gen_normalmap∯gen_rembg∯gen_simple_mesh∯gen_stereo∯model_type∯net_height∯net_size_match∯net_width∯normalmap_invert∯normalmap_post_blur∯normalmap_post_blur_kernel∯normalmap_pre_blur∯normalmap_pre_blur_kernel∯normalmap_sobel∯normalmap_sobel_kernel∯output_depth_combine∯output_depth_combine_axis∯output_depth_invert∯pre_depth_background_removal∯rembg_model∯save_background_removal_masks∯save_outputs∯simple_mesh_occlude∯simple_mesh_spherical∯stereo_balance∯stereo_divergence∯stereo_fill_algo∯stereo_modes∯stereo_offset_exponent∯stereo_separation∯tiling_mode') {}
    Traceback (most recent call last):
      File "C:\Diffusion\webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Diffusion\webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Diffusion\webui\modules\img2img.py", line 232, in img2img
        processed = process_images(p)
      File "C:\Diffusion\webui\modules\processing.py", line 845, in process_images
        res = process_images_inner(p)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Diffusion\webui\modules\processing.py", line 981, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 470, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "C:\Diffusion\webui\modules\processing.py", line 1741, in sample
        samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
      File "C:\Diffusion\webui\modules\sd_samplers_kdiffusion.py", line 172, in sample_img2img
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Diffusion\webui\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Diffusion\webui\modules\sd_samplers_kdiffusion.py", line 172, in <lambda>
        samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Diffusion\webui\modules\sd_samplers_extra.py", line 71, in restart_sampler
        x = heun_step(x, old_sigma, new_sigma)
      File "C:\Diffusion\webui\modules\sd_samplers_extra.py", line 19, in heun_step
        denoised = model(x, old_sigma * s_in, **extra_args)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Diffusion\webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Diffusion\webui\modules\sd_models_xl.py", line 44, in apply_model
        return self.model(x, t, cond)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Diffusion\webui\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Diffusion\webui\repositories\generative-models\sgm\modules\diffusionmodules\wrappers.py", line 28, in forward
        return self.diffusion_model(
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 905, in forward_webui
        raise e
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 902, in forward_webui
        return forward(*args, **kwargs)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\hook.py", line 613, in forward
        control = param.control_model(
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 32, in forward
        return self.control_model(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\cldm.py", line 370, in forward
        emb += self.control_add_embedding(control_type, emb.dtype, emb.device)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\extensions\sd-webui-controlnet\scripts\controlnet_core\controlnet_union.py", line 64, in forward
        return self.linear_2(torch.nn.functional.silu(self.linear_1(c_type)))
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Diffusion\webui\extensions-builtin\Lora\networks.py", line 503, in network_Linear_forward
        return originals.Linear_forward(self, input)
      File "C:\Diffusion\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
        return F.linear(input, self.weight, self.bias)
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
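
In case it helps whoever looks into this: the last frames above are cldm.py line 370 ("emb += self.control_add_embedding(control_type, emb.dtype, emb.device)") and controlnet_union.py line 64 ("return self.linear_2(torch.nn.functional.silu(self.linear_1(c_type)))"), so it looks like the control-type embedding's linear weights and the c_type tensor end up on different devices once the Low VRAM offloading is involved. I don't know the extension's internals well enough to say where a fix belongs, but the kind of device alignment I mean is sketched below on a toy stand-in module. Only the linear_1 / linear_2 / SiLU shape is taken from the traceback; everything else is my assumption, not the extension's actual code.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ControlTypeEmbeddingSketch(nn.Module):
        # Toy stand-in for the union model's control_add_embedding:
        # two linear layers with a SiLU in between, as in the last traceback frame.
        def __init__(self, dim: int = 8) -> None:
            super().__init__()
            self.linear_1 = nn.Linear(dim, dim)
            self.linear_2 = nn.Linear(dim, dim)

        def forward(self, c_type: torch.Tensor) -> torch.Tensor:
            # Align the incoming tensor with wherever the weights currently live,
            # so it no longer matters whether an offload has left them on the CPU.
            w = self.linear_1.weight
            c_type = c_type.to(device=w.device, dtype=w.dtype)
            return self.linear_2(F.silu(self.linear_1(c_type)))

    # With the alignment in place, a module left on the CPU no longer chokes on a
    # GPU-resident input; without it, this is exactly the error above.
    if torch.cuda.is_available():
        emb = ControlTypeEmbeddingSketch()
        out = emb(torch.randn(1, 8, device="cuda"))
        print(out.device)  # cpu, because the weights stayed on the CPU

Whether that belongs in the union code or on the webui's low-VRAM side (so the result lands back on emb's device before the "emb +=" above), I can't say.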

---

Additional information

No response