How to convert FLUX.1-Depth/Canny/Fill-dev.safetensors to Q8?
ymzlygw opened this issue · 4 comments
Hi, Black Forest Labs just released their strong ControlNet models, but they are as big as FLUX.1-dev fp16 (24GB).
Could you please explain how to convert FLUX.1-Depth/Canny/Fill-dev.safetensors to Q8, and can you support it? Thanks!
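For context on what a Q8 conversion actually does, here is a minimal NumPy sketch of blockwise 8-bit quantization in the style of GGUF's Q8_0 type (as I understand it: blocks of 32 values, each with one fp16 scale and 32 signed 8-bit integers). This only illustrates the math, not the on-disk GGUF format or the actual conversion tooling:

```python
import numpy as np

def quantize_q8_0(x: np.ndarray, block: int = 32):
    """Blockwise 8-bit quantization (Q8_0-style sketch): each block of
    `block` values gets one fp16 scale and `block` int8 values."""
    x = x.reshape(-1, block).astype(np.float32)
    # per-block scale so the largest magnitude in the block maps to 127
    d = np.abs(x).max(axis=1, keepdims=True) / 127.0
    d[d == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(x / d), -127, 127).astype(np.int8)
    return d.astype(np.float16), q

def dequantize_q8_0(d: np.ndarray, q: np.ndarray) -> np.ndarray:
    return d.astype(np.float32) * q.astype(np.float32)

# toy weight tensor standing in for a model layer
w = np.random.default_rng(0).standard_normal((3072, 64)).astype(np.float32)
d, q = quantize_q8_0(w)
w_hat = dequantize_q8_0(d, q).reshape(w.shape)
# ~8 bits per weight plus one fp16 scale per 32 weights ≈ 8.5 bpw,
# roughly halving the size of an fp16 checkpoint
print(np.abs(w - w_hat).max())
```

The takeaway is that Q8 roughly halves the fp16 model size with a small per-block rounding error, which is why the 24GB checkpoints become manageable.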
If you search YouTube, there is a video showing a Colab that does this, but I don't know how to do it myself.
Can you share the video URL? I don't know what keywords to search for.
There are conversions available on Hugging Face. Here, for example, is a quantized FLUX.1-Fill-dev:
https://huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/tree/main
FLUX.1-Fill-dev.gguf itself is supported by the ComfyUI-GGUF nodes.
The problem I have is that FLUX.1 Dev LoRAs are not working with FLUX.1-Fill-dev.gguf. This is the error I get:
ERROR lora diffusion_model.img_in.weight shape '[3072, 384]' is invalid for input of size 196608
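The numbers in that error already hint at the cause. A back-of-the-envelope check (assuming the LoRA was trained against FLUX.1 Dev's `img_in` layer, and that Fill-dev widens that layer for its extra conditioning inputs, which is my reading of the shapes, not something stated in the error):

```python
# Numbers taken directly from the error message:
lora_elements = 196608     # total size of the LoRA's img_in patch
fill_shape = (3072, 384)   # img_in.weight shape in FLUX.1-Fill-dev

# The patch divides evenly by the 3072 output features, giving the
# input width the LoRA was trained for:
dev_in_features = lora_elements // 3072
print(dev_in_features)     # 64 -> the LoRA expects img_in of [3072, 64]

# Fill-dev's img_in takes 384 input channels instead of 64, so the
# Dev-trained patch cannot be reshaped onto it:
print(lora_elements == fill_shape[0] * fill_shape[1])  # False
```

If that reading is right, any Dev LoRA that includes an `img_in` patch will fail on Fill-dev, while LoRAs that only touch the inner transformer blocks can still load.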
Are LoRAs even supposed to work with FLUX.1-Fill-dev?
EDIT:
At least these turbo/hyper LoRAs trigger the error:
FLUX.1-Turbo-Alpha
Hyper-FLUX.1-dev-8steps-lora
Other LoRAs I tested do work, but the likeness of faces is much worse with FLUX.1-Fill-dev than with FLUX.1 Dev.