kex0/batch-face-swap

Looks like the new plugin version is not compatible with xformers

xiaojiemeidu opened this issue · 10 comments

python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: 0.0.19  •  gradio: 3.28.1
Microsoft Edge

fc79eef

[--opt-sdp-attention is OK; the error only occurs with xformers enabled.]

Total progress: 22it [02:29, 3.52s/it]
Will process 1 images, generating 1 new images for each.
Found 1 face(s) in <PIL.Image.Image image mode=RGB size=944x944 at 0x1A77AB4C9A0>
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:01<00:00, 10.01it/s]
Found 1 faces in 1 images in 0.953125 seconds.
Error completing request
Arguments: ('task(f4zd02y7lcvqlui)', 0, '', '', [], <PIL.Image.Image image mode=RGBA size=944x944 at 0x1A77AB4F070>, None, None, None, None, None, None, 20, 0, 4, 0, 1, False, False, 1, 1, 7, 1.5, 0.75, -1.0, -1.0, 0, 0, 0, False, 0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], 0, True, 'img2img', False, '', '', False, 'Euler a', False, '2339HalfTheFishWas_v10.safetensors [7bf8da1368]', True, 0.5, True, 4, True, 32, '', False, 1, 'Both ▦', False, '', False, True, True, False, False, False, False, 1, False, '', '', '', 'generateMasksTab', 1, 4, 2.5, 30, 1.03, 1, 1, 5, 0.5, 5, False, True, False, 20, False, 'MultiDiffusion', False, 10, 1, 1, 64, False, True, 1024, 1024, 96, 96, 48, 1, 'None', 2, False, False, False, False, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, 0.4, 0.4, 0.2, 0.2, '', '', 'Background', 0.2, -1.0, False, True, True, True, False, 1536, 96, <controlnet.py.UiControlNetUnit object at 0x000001A77AD5A0E0>, '

    \n
  • CFG Scale should be 2 or lower.
  • \n
\n', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '

Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8

', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '

Will upscale the image by the selected scale factor; use width and height sliders to set tile size

', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 1.6, 0.97, 0.4, 0.15, 20, 0, 0, '', False, False, False, None, False, 50, 'dynamic_thresholding;dynamic_prompting', True, True, True, 2, 'Original', 32, 5, 512, 512, 0.1, True, True, 'Inner', 'Original', 'sam_vit_b_01ec64.pth', 'groundingdino_swint_ogc.pth', True, 32, 4, '', '', '', '', '', '', 0.4, 0.4, 0, 0, 0, 0, True, True, 16, 16, 'Text', 'Center', None, 100, 100, '', None, 'Trebuchet MS', 50, 10, 0.4, '') {}
Traceback (most recent call last):
File "D:\StabilityAI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "D:\StabilityAI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\modules\img2img.py", line 181, in img2img
processed = process_images(p)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "D:\StabilityAI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 604, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "D:\StabilityAI\stable-diffusion-webui\modules\processing.py", line 1084, in init
self.init_latent = self.sd_model.get_first_stage_encoding(self.sd_model.encode_first_stage(image))
File "D:\StabilityAI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "D:\StabilityAI\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in call
return self.__orig_func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
return self.first_stage_model.encode(x)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
h = self.encoder(x)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 536, in forward
h = self.mid.attn_1(h)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "D:\StabilityAI\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 192, in memory_efficient_attention
return memory_efficient_attention(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 290, in _memory_efficient_attention
return memory_efficient_attention_forward(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha_init
.py", line 310, in memory_efficient_attention_forward
out, *
= op.apply(inp, needs_gradient=False)
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
out, lse, rng_seed, rng_offset = cls.OPERATOR(
File "D:\StabilityAI\stable-diffusion-webui\venv\lib\site-packages\torch_ops.py", line 502, in call
return self._op(*args, **kwargs or {})
RuntimeError: CUDA error: invalid configuration argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
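
For what it's worth, the failing frame is the xformers memory_efficient_attention call inside the VAE encoder's attention block, which matches the note above that --opt-sdp-attention is OK: that flag computes the same attention through torch.nn.functional.scaled_dot_product_attention instead of xformers. A minimal side-by-side sketch of the two calls (shapes are illustrative, not taken from this run):

import torch
import torch.nn.functional as F
import xformers.ops

# Illustrative shapes; fp16 on CUDA mirrors the webui setup.
q = torch.randn(1, 4096, 512, device="cuda", dtype=torch.float16)
k = torch.randn(1, 4096, 512, device="cuda", dtype=torch.float16)
v = torch.randn(1, 4096, 512, device="cuda", dtype=torch.float16)

# xformers path (--xformers): the call that raises in the traceback above
out_xf = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)

# PyTorch 2.0 path (--opt-sdp-attention): same math, no xformers involved
out_sdp = F.scaled_dot_product_attention(q, k, v)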

kex0 commented

I just updated to the latest A1111, updated torch and xformers and it's working. I don't know what could be causing your issue.

xiaojiemeidu commented

Is your xformers 0.0.17?

kex0 commented

Yes

[screenshot: chrome_qzdcRuRfvq]

xiaojiemeidu commented

Could it be a problem with the latest version of xformers?

kex0 commented

Possibly.

xiaojiemeidu commented

OK, I will downgrade the xformers version and try again. Thank you for your reply.
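
For reference, pinning xformers back to 0.0.17 (the version reported working above) would look like this, assuming xformers was installed via pip into the webui venv:

# Equivalent to running `pip install xformers==0.0.17` with the venv's
# Python (venv\Scripts\python.exe on Windows).
import subprocess
import sys

subprocess.check_call([sys.executable, "-m", "pip", "install", "xformers==0.0.17"])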

kex0 commented

I just tested the img2img tab and I'm getting the same error; I'll look into it.

kex0 commented

Alright, it looks like these lines are causing the issue:

p.batch_size = 0
p.n_iter = 0
p.init_images[0] = all_images[0]

Removing them works, but then it generates twice as many images in img2img.
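
Presumably they fail because p.batch_size = 0 ends up handing the VAE encode an empty batch. A hypothetical minimal repro of the error above, under that assumption (shapes are made up):

import torch
import xformers.ops

# batch dimension of 0, as p.batch_size = 0 would produce downstream
q = torch.randn(0, 4096, 512, device="cuda", dtype=torch.float16)
k = torch.randn(0, 4096, 512, device="cuda", dtype=torch.float16)
v = torch.randn(0, 4096, 512, device="cuda", dtype=torch.float16)

# expected on affected versions:
# RuntimeError: CUDA error: invalid configuration argument
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)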

The reason is that I'm already generating the faces here:

proc = renderImg2Img(
    bfs_prompt,
    bfs_nprompt,
    sd_sampler,
    steps,
    cfg_scale,
    seed,
    bfs_width,
    bfs_height,
    image,
    image_mask,
    batch_size,
    n_iter,
    denoising_strength,
    mask_blur,
    inpainting_fill,
    inpainting_full_res,
    inpaint_full_res_padding,
    do_not_save_samples=True,
)

but then processing continues and runs the regular img2img pass as well. I need to somehow stop it after it's done with the faces.

kex0 commented

Just setting p.batch_size to 1 seems to be working.

Can you please try it out?

New:

p.batch_size = 1
p.n_iter = 0
p.init_images[0] = all_images[0]

Old:

p.batch_size = 0
p.n_iter = 0
p.init_images[0] = all_images[0]
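
Why this plausibly works, paraphrasing the control flow of modules/processing.py from memory (a sketch, not the verbatim source): init() always runs and encodes the init image, so batch_size has to stay at least 1, while the generation loop runs n_iter times, so n_iter = 0 skips the extra img2img pass:

# Simplified, paraphrased sketch of process_images_inner in
# modules/processing.py -- not the verbatim source.
def process_images_inner(p):
    # Always runs: builds the init latent from p.init_images; the image
    # batch is repeated batch_size times, so batch_size == 0 yields an
    # empty tensor and the xformers kernel error seen above.
    p.init(p.all_prompts, p.all_seeds, p.all_subseeds)

    # The generation loop: with n_iter == 0 the body never executes,
    # so no second round of img2img images is produced.
    for n in range(p.n_iter):
        ...  # sample batch n and decode/save the images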

xiaojiemeidu commented

[screenshots]

python: 3.10.6  •  torch: 2.0.0+cu118  •  xformers: 0.0.19

It works! Thanks!