Bad generations with Stable Diffusion when using modal
Closed this issue · 2 comments
Hello, thank you very much for your work!
I'm trying to deploy my Stable Diffusion-based model on your platform, but it always gives me bad generations. When I run the generation script in Google Colab, everything is fine. I think it's because some environment variables are missing, but I'm not sure about it.
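If it helps, my understanding is that environment variables can be attached to the Modal image, roughly like the sketch below (the packages and variable names here are only placeholders, and I haven't confirmed this is what's actually missing):

import modal

# Hypothetical sketch: attaching environment variables to a Modal image.
# The installed packages and the HF_HOME value are placeholders, not my real setup.
image = (
    modal.Image.debian_slim()
    .pip_install("diffusers", "transformers", "accelerate", "torch")
    .env({"HF_HOME": "/cache"})  # example env var only
)

stub = modal.Stub("sticker-model", image=image)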
Here is my code:
@stub.cls(gpu="T4")
class Model:
def __init__(self):
...
@modal.build()
def download_model_to_folder(self):
...
@modal.enter()
def setup(self):
...
@modal.method()
def generate(self, original_image, mode):
self.pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
self.models[mode],
torch_dtype=torch.float16,
safety_checker=None,
local_files_only=True,
cache_dir=mode
)
self.pipeline.scheduler = LCMScheduler.from_config(self.pipeline.scheduler.config)
self.pipeline.load_lora_weights(
pretrained_model_name_or_path_or_dict="latent-consistency/lcm-lora-sdv1-5",
weight_name="pytorch_lora_weights.safetensors",
cache_dir="lcm",
local_files_only=True)
self.pipeline.generator = torch.Generator(device='cuda:0').manual_seed(42)
self.pipeline.load_ip_adapter(
pretrained_model_name_or_path_or_dict="h94/IP-Adapter",
subfolder="models",
weight_name="ip-adapter_sd15.bin",
local_files_only=True,
cache_dir="adapter"
)
self.pipeline.set_ip_adapter_scale(1)
self.pipeline = self.pipeline.to("cuda")
print("Generating image...")
cropped_image = self.crop_img(original_image)
if not cropped_image:
return None
edited_image = self.pipeline(
prompt="Refashion the photo into a sticker.",
image=cropped_image,
ip_adapter_image=cropped_image,
num_inference_steps=4,
image_guidance_scale=1,
guidance_scale=2,
).images[0]
return edited_image
The generation script in Colab is the same as the generate(self, original_image, mode) function; the only difference is how it is invoked (see the sketch below).
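On Modal I call it roughly like this (simplified; the entrypoint, file names, and mode value are only illustrative), whereas in Colab I run the same logic directly in the notebook:

# Hypothetical sketch of how I invoke the class on Modal; names are illustrative.
@stub.local_entrypoint()
def main():
    from PIL import Image

    original_image = Image.open("input.png")  # placeholder input
    # Runs generate() remotely on the T4 container.
    edited = Model().generate.remote(original_image, "sticker")
    if edited is not None:
        edited.save("output.png")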
Thanks for the kind words!
We keep the discussion in this repo focused on the examples themselves, rather than on issues in code derived from the examples.
Would you mind raising your issue in the Modal Slack? And we'll move faster if you include a link to the Colab (or send it to me via DM if you don't want to share it publicly).
Hello again! I've managed to solve my problem. Thank you for a quick answer!