NUROISEA/anime-webui-colab

Cannot use "provide your own model" successfully with Taiyi model

mhcpan opened this issue · 1 comment

I tried to use the IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1 (https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Anime-Chinese-v0.1/blob/main/model.ckpt) and IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1 (https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1) models with the "provide your own model" Colab notebook, but both fail to load.

IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1:
Model link: https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/resolve/main/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt
VAE link: https://huggingface.co/IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1/resolve/main/vae/diffusion_pytorch_model.bin
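
For reference, the manual equivalent of fetching those two files would be something like the sketch below (using huggingface_hub; the target paths are taken from the launch arguments further down, and the notebook's actual download code may differ):

```python
from huggingface_hub import hf_hub_download
import shutil

repo = "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1"

# Checkpoint goes where --ckpt points (/content/models)
ckpt = hf_hub_download(repo_id=repo, filename="Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt")
shutil.copy(ckpt, "/content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt")

# VAE goes where --vae-path points (the web UI's models/VAE folder)
vae = hf_hub_download(repo_id=repo, filename="vae/diffusion_pytorch_model.bin")
shutil.copy(vae, "/content/stable-diffusion-webui/models/VAE/diffusion_pytorch_model.bin")
```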

Messages when launching web UI:
Launching Web UI with arguments: --xformers --lowram --no-hashing --enable-insecure-extension-access --no-half-vae --disable-safe-unpickle --opt-channelslast --gradio-queue --ckpt /content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt --vae-path /content/stable-diffusion-webui/models/VAE/diffusion_pytorch_model.bin --ckpt-dir /content/models --share
/usr/local/lib/python3.9/dist-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be removed in 0.17. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
warnings.warn(
Loading weights [None] from /content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt
Creating model from config: /content/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Downloading (…)olve/main/vocab.json: 100% 961k/961k [00:00<00:00, 1.14MB/s]
Downloading (…)olve/main/merges.txt: 100% 525k/525k [00:00<00:00, 24.4MB/s]
Downloading (…)cial_tokens_map.json: 100% 389/389 [00:00<00:00, 140kB/s]
Downloading (…)okenizer_config.json: 100% 905/905 [00:00<00:00, 285kB/s]
Downloading (…)lve/main/config.json: 100% 4.52k/4.52k [00:00<00:00, 1.67MB/s]
Downloading pytorch_model.bin: 100% 1.71G/1.71G [00:19<00:00, 86.8MB/s]
loading stable diffusion model: RuntimeError
Traceback (most recent call last):
  File "/content/stable-diffusion-webui/webui.py", line 136, in initialize
    modules.sd_models.load_model()
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 436, in load_model
    load_model_weights(sd_model, checkpoint_info, state_dict, timer)
  File "/content/stable-diffusion-webui/modules/sd_models.py", line 277, in load_model_weights
    model.load_state_dict(state_dict, strict=False)
  File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    size mismatch for cond_stage_model.transformer.text_model.embeddings.position_ids: copying a param with shape torch.Size([1, 512]) from checkpoint, the shape in current model is torch.Size([1, 77]).

Stable diffusion model failed to load, exiting
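
The mismatch can also be seen by inspecting the checkpoint directly. A small sketch (the key name and expected shapes are taken from the error above; the path is where the notebook downloaded the checkpoint):

```python
import torch

# Load only the state dict of the downloaded checkpoint
ckpt = torch.load("/content/models/Taiyi-Stable-Diffusion-1B-Chinese-v0.1.ckpt", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)

# The key from the error message: the text encoder's position ids
key = "cond_stage_model.transformer.text_model.embeddings.position_ids"
print(key, tuple(state_dict[key].shape))  # prints (1, 512) per the error above
# A stock SD v1 checkpoint has (1, 77) here, matching CLIP's 77-token context.
```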

It seems like the model you're trying to use is incompatible with the web UI. The size mismatch in the traceback means the checkpoint ships a different text encoder (one with a 512-token context, presumably for Chinese prompts) than the standard CLIP encoder with its 77-token context that the v1-inference.yaml config builds, so the weights can't be loaded into the web UI's model.

Just to be sure, I've also downloaded other models (that I know work) with the same notebook, and all of them load fine, so the problem is not in my notebook.

I'm sorry, but I don't think I can resolve this.
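
If you just want to run the Taiyi model, the Hugging Face repo appears to be in diffusers format (the vae/ subfolder in your VAE link suggests this), so loading it directly with diffusers instead of the web UI might work. A rough, untested sketch:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Chinese Taiyi model straight from the Hugging Face repo
pipe = StableDiffusionPipeline.from_pretrained(
    "IDEA-CCNL/Taiyi-Stable-Diffusion-1B-Chinese-v0.1",
    torch_dtype=torch.float16,
).to("cuda")

# Prompts should be in Chinese, since the text encoder was trained on Chinese text
prompt = "飞流直下三千尺，油画"  # "A waterfall plunging three thousand feet, oil painting"
image = pipe(prompt, guidance_scale=7.5).images[0]
image.save("taiyi_sample.png")
```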