asomoza/gallerydl-beyond

I can't find the file for custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img"


In your blog: https://huggingface.co/blog/OzzyGT/outpainting-differential-diffusion
when I run the following:
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    custom_pipeline="pipeline_stable_diffusion_xl_differential_img2img",
).to("cuda")

image = pipeline(
    prompt="",
    negative_prompt="",
    width=1024,
    height=1024,
    guidance_scale=6.0,
    num_inference_steps=25,
    original_image=image,
    image=image,
    strength=1.0,
    map=mask,
).images[0]

this error occurs:
raise EntryNotFoundError(message, response) from e
huggingface_hub.utils._errors.EntryNotFoundError: 404 Client Error. (Request ID: Root=1-66754f9f-3a2993354103c03c2837e594;6430dfdf-d5a4-4fda-8c89-1f2875a72d6d)

Entry Not Found for url: https://huggingface.co/datasets/diffusers/community-pipelines-mirror/resolve/main/v0.29.1/pipeline_stable_diffusion_xl_differential_img2img.py.

That's an error in the CI for the 0.29.1 release. I'll look into it and let you know when it's fixed. In the meantime you can install diffusers 0.29.0 (not 0.29.1).

Also, this is not the repo for issues related to diffusers; if you have more problems with diffusers, you can open an issue there.

Alternatively you can download this file: https://huggingface.co/datasets/diffusers/community-pipelines-mirror/resolve/main/v0.29.0/pipeline_stable_diffusion_xl_differential_img2img.py and use it directly instead of StableDiffusionXLPipeline.
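If you go that route, here is a minimal sketch of the workaround, assuming the downloaded file sits next to your script and that the pipeline class it defines is named StableDiffusionXLDifferentialImg2ImgPipeline (open the file to confirm the name):

import torch

from pipeline_stable_diffusion_xl_differential_img2img import (
    StableDiffusionXLDifferentialImg2ImgPipeline,
)

# Load the SDXL base weights into the downloaded differential img2img pipeline class;
# no custom_pipeline argument is needed since the class is imported locally.
pipeline = StableDiffusionXLDifferentialImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# The call arguments (original_image, image, strength, map) stay the same as in the code above.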

Thanks, it works.

I find it's hard to outpaint an image like this:
[image: f38ee329ab4a035a8fd2595b5b766612]

prompt = "A yellow flower on a black background"
[image: 25-result_my_prompt]

The original image is:
[image: 921718295966_pic]

Sometimes the result looks like this:
[image: girl2-25-128-4-result_my_prompt]

The code in that guide is just that, a guide; it's not a solution for all use cases.

When you only have half of an image, it's really hard for the model to understand what it is, and it tries to interpret it with what it knows.

For example, the flower image isn't a bad result, it just isn't what you want, which is to recover the original exactly. If you want that, you probably need to guide the model more with a better prompt, or maybe a ControlNet with a drawing or something similar, but it will still be really hard for it to produce that specific result.

About the anime one: I don't really generate anime and I'm not good at prompting with tags, but you probably need a really good anime model to do an image expansion like that, so I can't help much there.

I recommend starting with at most a quarter of an image for the expansion, unless it's a really easy image. Also, as I told you, this is a starting point for you to experiment and learn; try different techniques and parameters, and other models as well.

For example, I used Telea in my guide to fill the image, but some images work better with the other generative fills.
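For context, "Telea" here refers to OpenCV's Telea inpainting algorithm, a classic (non-generative) fill. A minimal sketch of that fill step, assuming OpenCV is installed and using hypothetical file names for the expanded canvas and the mask of the empty area:

import cv2

# Hypothetical inputs: the original image already placed on the larger canvas, and a
# mask that is 255 where the canvas is still empty and 0 elsewhere.
image = cv2.imread("expanded_canvas.png")
mask = cv2.imread("empty_area_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the empty area with Telea's method; cv2.INPAINT_NS is the other classic option.
filled = cv2.inpaint(image, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled_canvas.png", filled)

The filled canvas would then be what you pass to the pipeline as the image to refine.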

Thanks a lot for your reply, I'll try other methods.