Load model from civitai
oovm opened this issue · 5 comments
https://civitai.com/ is a website that hosts a lot of models; I'm trying to convert some of them.
Many models only include the `unet` part, while `hf2pyke.py` seems to require the full model directory.
I cloned runwayml/stable-diffusion-v1-5 and renamed the downloaded model file to `unet/diffusion_pytorch_model.bin`. That worked for PyTorch exports.
But many models are distributed as `*.safetensors` files, which `model_loader` doesn't seem to recognize, and I don't know how to handle them.
You should use the `sd2pyke.py` script instead (looks like I forgot to update the docs when I added it, sorry). Usage is almost identical to `hf2pyke`, but it takes a `.ckpt` or `.safetensors` file instead of a Hugging Face model:
```
$ python scripts/sd2pyke.py ~/AbyssOrangeMix2_sfw.safetensors ~/models/abyss2 --fp16 -C v1-inference.yaml
```
What about the VAE? Some models are split into two parts (a main checkpoint plus a separate VAE file).
The VAE is included in the main checkpoint (`anything-v4.0-pruned-fp16.safetensors`) and will be converted properly by `sd2pyke`.
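One way to check this yourself is to inspect the checkpoint's tensor names: the `.safetensors` format begins with an 8-byte little-endian header length followed by a JSON header, and original SD v1 checkpoints store VAE weights under the `first_stage_model.` prefix. A stdlib-only sketch (`checkpoint_has_vae` is a hypothetical helper, not part of the repo):

```python
import json
import struct

def checkpoint_has_vae(path: str) -> bool:
    """Check a .safetensors checkpoint for bundled VAE weights."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 length of the JSON header.
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    # SD v1 ckpt-style checkpoints keep VAE tensors under this prefix.
    return any(key.startswith("first_stage_model.") for key in header)
```

If this returns `False`, the model author likely expects you to supply an external VAE file alongside the checkpoint.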
Why is the hash of each text encoder different? Isn't it the same encoder, downloaded from the same place?
> Why is the hash of each text encoder different? Isn't it the same encoder, downloaded from the same place?
I'm not sure tbh. It could be due to:
- usage of `--simplify-small-models` or `optimize.py`
- Stable Diffusion v1-based models vs Stable Diffusion v2-based models: v2 uses OpenCLIP whereas v1 uses OpenAI CLIP
- different versions of `transformers` that changed the `CLIPTextModel` implementation, which changed the ONNX graph slightly
Maybe there's a better way to uniquely identify the models, or perhaps the text encoder should just never be replaced (outside of the planned textual inversion implementation, which would require a separate text encoder for each new token added).
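For comparing converted text encoders, a streamed content hash of the exported file is the simplest fingerprint, though as the list above suggests it will differ whenever the ONNX graph bytes differ at all (a sketch; `file_sha256` is a hypothetical helper, not one of the repo's scripts):

```python
import hashlib

def file_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-gigabyte models needn't fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()
```

A hash over the graph's structure (node ops and initializer shapes rather than raw bytes) would be more robust to the `transformers`-version differences mentioned above, at the cost of more involved ONNX parsing.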