How to compile instruct pix2pix using AIT
mzeynali opened this issue · 9 comments
Hello guys,
I have two problems:
1. How can I compile instruct pix2pix? Do you have any plans for this?
2. How can I compile Stable Diffusion with ControlNet?
The pix2pix PyTorch modules would need to be translated to an AITemplate graph.
ControlNet is available; refer to the documentation and code for further details:
compile_controlnet.py
compile_alt.py
@hlky Thanks so much.
I checked your compiler code. AIT seems fully compatible with the Hugging Face diffusers library, and since diffusers fully supports Stable Diffusion instruct pix2pix, in my opinion your code should easily support pix2pix too, right?
AIT is not fully compatible with Diffusers. Certain aspects of Diffusers related to Stable Diffusion have been implemented as an AITemplate graph, but there are block types etc. that have not been implemented.
I can confirm that the timbrooks/instruct-pix2pix UNet compiles without issue: the block types are standard and the input channel count is a power of 2. CLIP and VAE share the v1.5 architecture, so they do not need to be compiled separately from v1.5. I do not anticipate issues running the UNet module, but it is untested; I have not implemented a pix2pix pipeline yet.
Please note that the timbrooks/instruct-pix2pix modules will not work with any of the pipelines in this repo or my own repo; this will require a new pipeline that matches Diffusers'.
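To illustrate why a dedicated pipeline is needed, here is a plain-Python sketch (hypothetical shapes, not the real pipeline code) of the two things an instruct-pix2pix pipeline does differently from plain SD v1.5: the encoded source-image latents are concatenated onto the noise latents channel-wise, and classifier-free guidance runs three branches (text+image, image-only, unconditional) instead of two.

```python
# Hypothetical sketch of instruct-pix2pix UNet input, assuming SD v1.x
# latents at 512x512 (64x64 spatial). Not the actual AIT or Diffusers API.

def pix2pix_unet_input_shape(batch: int, latent_ch: int = 4, image_ch: int = 4):
    """Latent tensor shape fed to the instruct-pix2pix UNet per step.

    - channels: 4 noise latents + 4 encoded source-image latents = 8,
      vs. 4 for a plain SD v1.5 UNet;
    - batch: tripled by three-way classifier-free guidance, vs. doubled
      for plain SD.
    """
    return (batch * 3, latent_ch + image_ch, 64, 64)

print(pix2pix_unet_input_shape(1))  # (3, 8, 64, 64)
```

This is why the UNet compiles fine (8 input channels is still a power of 2) but none of the existing pipelines can drive it correctly.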
@hlky
Is it possible to change the number of steps before or after compiling? If so, how?
@mzeynali It's unclear what you mean; the inference step count is not involved in compilation.
@hlky In Stable Diffusion models we have `num_inference_steps` for the UNet denoising steps, like this Stable Diffusion pipeline call using Hugging Face:

```python
out_image = pipe(
    "disco dancer with colorful lights",
    num_inference_steps=20,
    generator=generator,
    image=canny_image,
).images[0]
```

In AIT, how can I change this value? I compiled SD with AIT and I see `num_inference_steps` is 50 by default. How can I change the default of 50 after compiling?
Second question: how can I set the batch size in AIT, and does this run the Stable Diffusion model in parallel, or can it not process more than one input at a time? What I mean is: suppose I have 5 different input images or 5 different prompts; is it possible to generate 5 different images for these prompts/images at the same time using batch size = 5?
If you look at where demo.py
sets the default of 50 steps, you know how to change it :) demo_alt.py
has a `--steps`
option.
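To make the point above concrete: the step count is a runtime parameter of the denoising loop, not something baked into the compiled graph. A minimal stdlib sketch (hypothetical helper names, assuming a DDIM-style schedule over the usual 1000 training timesteps) of how a pipeline derives its timesteps:

```python
# Hypothetical sketch: num_inference_steps only selects which timesteps
# the denoising loop visits; the compiled UNet graph is unchanged.
TRAIN_TIMESTEPS = 1000  # SD v1.x schedulers are trained over 1000 steps

def make_timesteps(num_inference_steps: int) -> list[int]:
    """Evenly spaced timesteps, descending, DDIM-style."""
    stride = TRAIN_TIMESTEPS // num_inference_steps
    return list(range(0, TRAIN_TIMESTEPS, stride))[::-1]

# The same compiled model is simply called once per timestep:
for t in make_timesteps(20):  # 20 steps instead of the default 50
    pass  # latents = compiled_unet(latents, t, text_embeddings)
```

So changing 50 to 20 is purely a change in the Python pipeline code (or a `--steps` flag); no recompilation is needed.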
Yes, it is possible to run different images as a batch; you will need to adjust the code for that purpose.
Please note, the demo pipelines are not production ready; they are for demo purposes only, so not all of your desired features will be present.
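As a rough illustration of what "adjusting the code" involves (plain Python, hypothetical helper, not the real AIT API): batching N prompts means stacking their latents and text embeddings along the batch dimension so one UNet call per timestep denoises all N images in parallel. Note that with classifier-free guidance the UNet actually sees double the batch, and a compiled AIT module must have been compiled to accept that batch size.

```python
# Hypothetical sketch of the UNet input shape for a batch of prompts,
# assuming SD v1.x at 512x512 (latents are batch x 4 x 64 x 64).
prompts = [
    "disco dancer with colorful lights",
    "a red sports car",
    "a medieval castle",
    "a sleeping cat",
    "a tropical beach",
]

def unet_batch_shape(num_prompts: int, guidance_scale: float = 7.5):
    """Latent shape seen by the UNet for one denoising step.

    With classifier-free guidance (scale > 1), conditional and
    unconditional inputs are concatenated, doubling the batch.
    """
    batch = num_prompts * 2 if guidance_scale > 1.0 else num_prompts
    return (batch, 4, 64, 64)

# 5 prompts with guidance -> the compiled UNet must accept batch 10.
print(unet_batch_shape(len(prompts)))  # (10, 4, 64, 64)
```

So generating 5 images in one pass works as long as the module was compiled with a batch range covering the effective batch size.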