haofanwang/ControlNet-for-Diffusers

Reduce Memory Consumption

ghpkishore opened this issue · 7 comments

Hi @haofanwang, is it possible to directly save the SD v1.5 UNet + depth control model into another new model called SD_V1.5_depth, load that model in our pipe_control, and not call the pipe_inpaint pipeline at all, thereby saving VRAM? Is it possible, or am I making a mistake in my assumption?

Yeah, it works, and it also saves 2 GB of VRAM! Total VRAM consumption is less than 8 GB if you directly use a control_sd15_depth_inpaint model, which is the same as the control_sd15_depth model but with its unet folder replaced by the unet folder from stable-diffusion-inpainting.
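For anyone who wants to reproduce the merge, here is a minimal sketch of the folder swap. It assumes you have local diffusers-format copies of control_sd15_depth and stable-diffusion-inpainting (the paths below are placeholders). Afterwards, point the pipe_control loading call from the README at the merged folder and skip pipe_inpaint entirely.

```python
import shutil
from pathlib import Path

# Placeholder paths to local diffusers-format checkpoints (adjust as needed).
depth_dir = Path("control_sd15_depth")
inpaint_dir = Path("stable-diffusion-inpainting")
merged_dir = Path("control_sd15_depth_inpaint")

# Start from a full copy of the depth ControlNet model...
shutil.copytree(depth_dir, merged_dir)

# ...then swap its unet folder for the inpainting unet.
shutil.rmtree(merged_dir / "unet")
shutil.copytree(inpaint_dir / "unet", merged_dir / "unet")

# Load only pipe_control from merged_dir (same call as in the README) and
# never construct pipe_inpaint, which is where the ~2 GB of VRAM is saved.
```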

Screenshot of VRAM consumption in ControlNet with pipe_inpaint

Screenshot of VRAM consumption in ControlNet without pipe_inpaint

Enjoy!

Adding pipe_control.enable_attention_slicing() makes it possible to generate 1024 x 1024 images on a 16 GB VRAM GPU, as long as pipe_inpaint is not loaded. By additionally passing output_type="latent" to pipe_control when generating outputs, saving the latents to an array, and then running the SD upscaler on them, we can get 2048 x 2048 images on a 16 GB VRAM machine with ControlNet. A rough sketch of the combination is below.
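The sketch assumes pipe_control is already loaded from control_sd15_depth_inpaint as in the README (its exact prompt / control-image arguments are placeholders here), and uses the diffusers latent upscaler (stabilityai/sd-x2-latent-upscaler) as one possible "SD upscaler":

```python
import torch
from diffusers import StableDiffusionLatentUpscalePipeline

# Trick 1: attention slicing trades a little speed for a much lower peak VRAM,
# which is what lets 1024 x 1024 fit on a 16 GB card.
pipe_control.enable_attention_slicing()

# Trick 2: keep the ControlNet output in latent space instead of decoding to pixels.
# prompt / control_image are placeholders for whatever pipe_control expects.
latents = pipe_control(prompt, control_image, output_type="latent").images

# Hand the low-resolution latents to the SD latent upscaler, which decodes them
# at 2x the resolution (1024 -> 2048).
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")
image = upscaler(prompt=prompt, image=latents, num_inference_steps=20).images[0]
image.save("controlnet_2048.png")
```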

@haofanwang can you add this to the README, or otherwise let people know that it is possible to load only one pipeline for one ControlNet?

It is great to know that pipe_control.enable_attention_slicing() and output_type="latent" help save memory. It would be much appreciated if you could make a PR covering both memory-saving tricks and how to load a single pipeline. @ghpkishore

I have never done that before. To give you context, I started coding only 5 months ago. Should I make a fork and update the README? I don't understand how to do it yet, so I will need some time to figure it out. Hope that is fine.

I see. I will come back and make a PR once I'm available. Thanks.