Woolverine94/biniou

Stable Diffusion error

Closed this issue · 8 comments

Describe the bug
Generating an image with the Stable Diffusion module fails with "Error" in the output window instead of producing an image.

To Reproduce
Steps to reproduce the behavior:

  1. Go to https://127.0.0.1:7860/?__theme=dark
  2. Click on Image, then Stable Diffusion
  3. Type any word in the Prompt
  4. "Error" appears in the preview/output window

Expected behavior
Image to appear

Console log

A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
IMPORTANT: You are using gradio version 3.50.2, however version 4.29.0 is available, please upgrade.
--------
>>>[biniou 🧠 ]: Up and running at https://192.168.0.98:7860/?__theme=dark
Running on local URL:  https://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
>>>[Stable Diffusion 🖼️ ]: starting module
Some weights of the model checkpoint were not used when initializing CLIPTextModel:
 ['text_model.embeddings.position_ids']
Traceback (most recent call last):
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\blocks.py", line 1550, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\anyio\_backends\_asyncio.py", line 859, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\gradio\utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\ressources\common.py", line 573, in wrap_func
    result = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\ressources\txt2img_sd.py", line 384, in image_txt2img_sd
    image = pipe_txt2img_sd(
            ^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 1006, in __call__
    noise_pred = self.unet(
                 ^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\torch\nn\modules\module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 1145, in forward
    aug_emb = self.get_aug_embed(
              ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\bpvar\biniou\env\Lib\site-packages\diffusers\models\unets\unet_2d_condition.py", line 966, in get_aug_embed
    if "text_embeds" not in added_cond_kwargs:
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable


Hardware (please complete the following information):

  • RAM size: 64GB
  • Processor physical core number or vcore number:
    AMD Ryzen 9 7945HX with Radeon Graphics
    Base speed: 2.50 GHz
    Sockets: 1
    Cores: 16
    Logical processors: 32
  • Storage type of biniou installation: 1TB NVMe
  • Free storage space for biniou installation: 400GB
  • GPU : Nvidia Geforce RTX 4080 Laptop

Desktop (please complete the following information):

  • OS: Windows 11
  • Browser: Chrome
  • Version: 126.0.6478.127
  • Python version: 3.11.5


Additional information

  • I access the webui through the IP 127.0.0.1 (browser and biniou installation on the same system):
    [X] Yes [ ] No

Hello @Bortus-AI,

Thanks for reporting the issue!

Unfortunately, I don't see anything in your log that would allow a definitive diagnosis.

Could you give a few more details:

1. Confirm the Stable Diffusion model you were using (value of the "model" field in the module settings).
2. Confirm that you did not load custom settings at startup (either in the global settings or at the module level).
3. Confirm whether you have enabled CUDA, and whether CUDA 12.1 is installed.
4. Confirm that other modules work.

Thanks for your answers.

It seems like it doesn't like any of the Pony models.
No custom settings.
Tried it with CUDA and without; 12.1 is installed.
Other models work, but no module works with Pony models.

@Bortus-AI,

Thanks for your feedback.

I can confirm that results with standalone models are not guaranteed, especially if they are SDXL models whose names don't reflect it. Model detection is really basic and, for standalone models, relies only on the model name.

A workaround for this specific case could be to add "XL" to the filename of the model, to force its detection as an SDXL model rather than an SD 1.5 one.
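The name-based detection this workaround relies on can be sketched as follows (a hypothetical illustration, not biniou's actual code; the function name and keyword list are assumptions):

```python
def looks_like_sdxl(filename: str) -> bool:
    """Guess whether a standalone checkpoint is an SDXL model from its
    file name alone -- the heuristic described in this thread.
    Hypothetical sketch, not biniou's actual implementation."""
    name = filename.lower()
    # Keywords commonly found in SDXL-derived checkpoint names;
    # "pony" is included here as the kind of special case discussed below.
    return any(tag in name for tag in ("xl", "sdxl", "pony"))

# A model without any marker is treated as SD 1.5:
print(looks_like_sdxl("realisticVision_v51.safetensors"))     # False
# Renaming it with "XL" flips the detection, as the workaround suggests:
print(looks_like_sdxl("realisticVision_v51_XL.safetensors"))  # True
```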

I can probably make some kind of fix for Pony models.

Can you post the URL of one of the faulty repo(s), so that I can try to find a workaround?

@Woolverine94 thanks for the quick reply.

I tried adding XL to the filename but I get this error. I have the safety checker/NSFW filter turned on, but even if I turn it off I get the same message. One of the Pony models already has XL in its name, but same error.

The two models I tried are:
ponyDiffusionV6XL_v6StartWithThisOne.safetensors
ponyRealism_v21MainVAE.safetensors

https://civitai.com/models/257749/pony-diffusion-v6-xl
https://civitai.com/models/372465?modelVersionId=534642

             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/blocks.py", line 1550, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/gradio/utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/ressources/common.py", line 573, in wrap_func
    result = func(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/ressources/txt2img_sd.py", line 181, in image_txt2img_sd
    pipe_txt2img_sd = StableDiffusionXLPipeline.from_single_file(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/biniou/biniou/env/lib/python3.11/site-packages/diffusers/loaders/single_file.py", line 556, in from_single_file
    pipe = pipeline_class(**init_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: StableDiffusionXLPipeline.__init__() got an unexpected keyword argument 'safety_checker'

@Bortus-AI

I definitely can't reproduce this with another .safetensors file, but I got the same result as you with Pony v6.

I will try to make a fix, but I'm almost sure the best I can do is to add official support for Pony v6 via an HF repo.

Okay, sounds good. Another unrelated issue: if I disable the NSFW filter and use the PicX_real model, I get the error

Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed

but if I change to Realistic Vision, the NSFW image shows up. So it seems the filter is not being disabled on all models.

@Bortus-AI

The problem is probably the same, but with different symptoms: these models' settings seem to supersede biniou's default settings. In one case, they introduce a "safety_checker" setting that doesn't exist in the pipeline; in the other, they force activation of the safety checker.
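For models that force the checker on, a common community workaround is to detach the checker after the pipeline is loaded rather than at load time. A minimal sketch (the helper name is an assumption; the `hasattr` guards make it a no-op on pipelines that lack the attributes):

```python
def disable_safety_checker(pipe):
    """Detach the NSFW checker from an already-loaded pipeline object.

    The attribute names match those of diffusers' StableDiffusionPipeline;
    the hasattr guards keep this safe on pipelines without a checker
    (e.g. StableDiffusionXLPipeline). Hypothetical helper, not biniou code.
    """
    if hasattr(pipe, "safety_checker"):
        pipe.safety_checker = None  # no checker -> no black-image replacement
    if hasattr(pipe, "requires_safety_checker"):
        pipe.requires_safety_checker = False  # silence the related warning
    return pipe
```

On a real diffusers `StableDiffusionPipeline`, setting `pipe.safety_checker = None` after `from_pretrained`/`from_single_file` is the widely used way to disable the NSFW filter, regardless of what the checkpoint's own settings request.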

I've probably missed something in the diffusers documentation about the from_single_file method.

@Bortus-AI

Commit 63f68d9 introduces a bugfix for the safety checker issue when using ponyDiffusionV6XL_v6StartWithThisOne.safetensors.

The fault was mine: it makes no sense to pass a load_safety_checker argument to from_single_file, as the safety checker is now external to that method.
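The TypeError in the log above is the generic failure mode of forwarding checkpoint-derived kwargs to a constructor that no longer accepts them. A self-contained sketch of defensively filtering such kwargs against the target signature (hypothetical helper and fake pipeline class, not the actual commit):

```python
import inspect

def filter_init_kwargs(cls, kwargs: dict) -> dict:
    """Keep only the kwargs that cls.__init__ actually accepts, dropping
    leftovers such as 'safety_checker' that would raise a TypeError."""
    accepted = set(inspect.signature(cls.__init__).parameters) - {"self"}
    return {k: v for k, v in kwargs.items() if k in accepted}

class FakeSDXLPipeline:
    # Mimics the relevant trait of StableDiffusionXLPipeline:
    # its __init__ has no 'safety_checker' parameter.
    def __init__(self, unet=None, vae=None):
        self.unet, self.vae = unet, vae

# 'safety_checker' is silently dropped instead of crashing the constructor:
init_kwargs = {"unet": "u", "vae": "v", "safety_checker": None}
pipe = FakeSDXLPipeline(**filter_init_kwargs(FakeSDXLPipeline, init_kwargs))
```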

Anyway, I can still reproduce the behavior you describe with PicX_real and can't find an explanation for it...

I'm closing this issue, as Pony Diffusion is now usable as a standalone .safetensors file, but don't hesitate to re-open it if required.

Thanks again for your contributions!