glucauze/sd-webui-faceswaplab

LDSR on Automatic v1.6.0-125-g59544321

Zloigad opened this issue · 2 comments

When I use LDSR in a standard scenario (tested in the Batch Process window for the purity of the experiment), as of today I get the original, unmodified image in the output.

If I switch to any other upscaler, the problem does not occur, but image quality suffers.

LDSR does work when run via Global Post-Processing, but that does not give the desired effect.

A clean reinstall of the extension did not help. The problem first appeared today, so it may be related to an update of the Automatic WebUI.


0it [00:00, ?it/s]
2023-09-12 18:45:03,815 - FaceSwapLab - INFO - Try to use model : C:\Users\zloig\stable-diffusion-webui\models\faceswaplab\inswapper_128.onnx
2023-09-12 18:45:04,236 - FaceSwapLab - INFO - blend all faces together
2023-09-12 18:45:04,237 - FaceSwapLab - INFO - loading face ZSobchak.safetensors
2023-09-12 18:45:04,238 - FaceSwapLab - INFO - Int Gender : 0
2023-09-12 18:45:04,238 - FaceSwapLab - INFO - Process face 0
2023-09-12 18:45:04,249 - FaceSwapLab - INFO - Source Gender 0
2023-09-12 18:45:04,249 - FaceSwapLab - INFO - Target faces count : 1
2023-09-12 18:45:04,249 - FaceSwapLab - INFO - swap face 0
2023-09-12 18:45:05,068 - FaceSwapLab - INFO - ********************************************************************************
2023-09-12 18:45:05,069 - FaceSwapLab - INFO - Inswapper
2023-09-12 18:45:05,072 - FaceSwapLab - INFO - Upscale with LDSR scale = 4
Loading model from C:\Users\zloig\stable-diffusion-webui\models\LDSR\model.ckpt
LatentDiffusionV1: Running in eps-prediction mode
Keeping EMAs of 308.
Applying attention optimization: xformers... done.
Down sample rate is 1 from 4 / 4 (Not downsampling)
Plotting: Switched to EMA weights
Sampling with eta = 1.0; steps: 100
Data shape for DDIM sampling is (1, 3, 128, 128), eta 1.0
Running DDIM Sampling with 100 timesteps
DDIM Sampler: 0%| | 0/100 [00:00<?, ?it/s]
Plotting: Restored training weights
Traceback (most recent call last):
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\upscaled_inswapper.py", line 215, in get
    bgr_fake = self.upscale_and_restore(
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\upscaled_inswapper.py", line 153, in upscale_and_restore
    upscaled = upscaling.upscale_img(pil_img, pp_options)
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_postprocessing\upscaling.py", line 19, in upscale_img
    result_image = pp_options.upscaler.scaler.upscale(
  File "C:\Users\zloig\stable-diffusion-webui\modules\upscaler.py", line 62, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\scripts\ldsr_model.py", line 58, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 137, in super_resolution
    logs = self.run(model["model"], im_padded, diffusion_steps, eta)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 96, in run
    logs = make_convolutional_sample(example, model,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 228, in make_convolutional_sample
    sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 184, in convsample_ddim
    samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 189, in p_sample_ddim
    model_output = self.model.apply_model(x, t, c)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in apply_model
    output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in <listcomp>
    output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 1400, in forward
    out = self.diffusion_model(xc, t)
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
TypeError: 'NoneType' object is not callable
2023-09-12 18:45:09,693 - FaceSwapLab - ERROR - Conversion failed 'NoneType' object is not callable
2023-09-12 18:45:09,693 - FaceSwapLab - ERROR - Failed to swap face in postprocess method : 'NoneType' object is not callable
Traceback (most recent call last):
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab.py", line 187, in postprocess
    swapped_images = swapper.process_images_units(
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\swapper.py", line 841, in process_images_units
    swapped = process_image_unit(model, units[0], image, info, force_blend)
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\swapper.py", line 752, in process_image_unit
    result: ImageResult = swap_face(
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\swapper.py", line 658, in swap_face
    raise e
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\swapper.py", line 646, in swap_face
    result = face_swapper.get(
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\upscaled_inswapper.py", line 328, in get
    raise e
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\upscaled_inswapper.py", line 215, in get
    bgr_fake = self.upscale_and_restore(
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_swapping\upscaled_inswapper.py", line 153, in upscale_and_restore
    upscaled = upscaling.upscale_img(pil_img, pp_options)
  File "C:\Users\zloig\stable-diffusion-webui\extensions\sd-webui-faceswaplab\scripts\faceswaplab_postprocessing\upscaling.py", line 19, in upscale_img
    result_image = pp_options.upscaler.scaler.upscale(
  File "C:\Users\zloig\stable-diffusion-webui\modules\upscaler.py", line 62, in upscale
    img = self.do_upscale(img, selected_model)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\scripts\ldsr_model.py", line 58, in do_upscale
    return ldsr.super_resolution(img, ddim_steps, self.scale)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 137, in super_resolution
    logs = self.run(model["model"], im_padded, diffusion_steps, eta)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 96, in run
    logs = make_convolutional_sample(example, model,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 228, in make_convolutional_sample
    sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 184, in convsample_ddim
    samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 189, in p_sample_ddim
    model_output = self.model.apply_model(x, t, c)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in apply_model
    output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in <listcomp>
    output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 1400, in forward
    out = self.diffusion_model(xc, t)
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
    return original_forward(self, x, timesteps, context, *args, **kwargs)
TypeError: 'NoneType' object is not callable
{"prompt": "", "all_prompts": [""], "negative_prompt": "", "all_negative_prompts": [""], "seed": 417081445, "all_seeds": [417081445], "subseed": 534183807, "all_subseeds": [534183807], "subseed_strength": 0, "width": 512, "height": 512, "sampler_name": "DPM++ 2M Karras", "cfg_scale": 6.5, "steps": 1, "batch_size": 1, "restore_faces": false, "face_restoration_model": null, "sd_model_name": "v1-5-pruned-emaonly", "sd_model_hash": "cc6cb27103", "sd_vae_name": null, "sd_vae_hash": null, "seed_resize_from_w": -1, "seed_resize_from_h": -1, "denoising_strength": 0, "extra_generation_params": {"Mask blur": 4}, "index_of_first_image": 0, "infotexts": ["Steps: 1, Sampler: DPM++ 2M Karras, CFG scale: 6.5, Seed: 417081445, Size: 512x512, Model hash: cc6cb27103, Model: v1-5-pruned-emaonly, Denoising strength: 0, NGMS: 0.01, Mask blur: 4, Version: v1.6.0-125-g59544321"], "styles": [], "job_timestamp": "20230912184502", "clip_skip": 1, "is_using_inpainting_conditioning": false, "version": "v1.6.0-125-g59544321"}
Traceback (most recent call last):
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\modules\call_queue.py", line 13, in f
    res = func(*args, **kwargs)
  File "C:\Users\zloig\stable-diffusion-webui\modules\ui.py", line 172, in update_token_counter
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "C:\Users\zloig\stable-diffusion-webui\modules\ui.py", line 172, in <listcomp>
    token_count, max_length = max([model_hijack.get_prompt_lengths(prompt) for prompt in prompts], key=lambda args: args[0])
  File "C:\Users\zloig\stable-diffusion-webui\modules\sd_hijack.py", line 308, in get_prompt_lengths
    _, token_count = self.clip.process_texts([text])
  File "C:\Users\zloig\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Identity' object has no attribute 'process_texts'
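The `TypeError` in the traceback comes from `modules/sd_unet.py`, where a hijacked `UNetModel_forward` delegates to a saved `original_forward` that is `None` when LDSR invokes the model. As a minimal sketch (hypothetical names, not the actual webui code), the failure class looks like this: a wrapper calls a stored original callable without checking that it was ever set, which is consistent with the main model's forward having been swapped out or unloaded while LDSR's hijacked UNet is active.

```python
# Hypothetical reduction of the failure pattern in the traceback: a
# monkey-patched forward() delegates to a saved original that is None.
class UNetStub:
    def __init__(self):
        # The hijack stores the original forward here; in the broken
        # state this was never set (or was reset on model unload).
        self.original_forward = None

    def forward(self, x):
        # Mirrors the unchecked delegation in UNetModel_forward:
        # calling None raises the TypeError seen in the log.
        return self.original_forward(x)

unet = UNetStub()
try:
    unet.forward("latents")
except TypeError as e:
    print(e)  # 'NoneType' object is not callable
```

This only illustrates why the error message is `'NoneType' object is not callable` rather than a missing-attribute error: the attribute exists but holds `None`.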

version: v1.6.0-125-g59544321  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: 0.0.20  •  gradio: 3.41.2

Extension: 42d1c75

I'm getting the same issue on the dev branch with LDSR. It also fails on the Extras tab.

Has there been any fix for this? I'm encountering the same issue now.