zqiu24/oft

error while trying to load dreambooth weights

Opened this issue · 1 comment

I am trying to load DreamBooth weights for inference after fine-tuning Stable Diffusion with train_dreambooth_oft.py.

I have tried both `pipe.load_lora_weights()` and `pipe.unet.load_attn_procs()`; both give the same error:

```
Traceback (most recent call last):
  File "/home/depecikbora/oft/oft-db/fullimg.py", line 8, in <module>
    pipe.load_lora_weights("hardmodel")
  File "/home/depecikbora/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/loaders.py", line 928, in load_lora_weights
    self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
  File "/home/depecikbora/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/loaders.py", line 1210, in load_lora_into_unet
    unet.load_attn_procs(state_dict, network_alphas=network_alphas)
  File "/home/depecikbora/.pyenv/versions/3.10.6/lib/python3.10/site-packages/diffusers/loaders.py", line 479, in load_attn_procs
    raise ValueError(
ValueError: None does not seem to be in the correct format expected by LoRA or Custom Diffusion training.
```
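For context, the `ValueError` is raised by a format check on the state-dict key names: `load_attn_procs` only accepts checkpoints whose keys look like LoRA (or Custom Diffusion) parameters, while an OFT checkpoint stores differently named parameters. A minimal sketch of that kind of key check (the key names below are illustrative assumptions, not the exact strings diffusers or OFT use):

```python
# Sketch: why a LoRA loader rejects an OFT checkpoint.
# The loader inspects state-dict key names; keys that follow the
# LoRA naming pattern are accepted, anything else is treated as an
# unknown format and raises a ValueError like the one above.
# Key names here are illustrative, not the exact diffusers/OFT keys.

def looks_like_lora(state_dict):
    """Mimic the format check: every key must mention 'lora'."""
    return all("lora" in key for key in state_dict)

lora_sd = {"down_blocks.0.attn1.processor.to_q_lora.down.weight": None}
oft_sd = {"down_blocks.0.attn1.processor.R": None}  # assumed OFT-style key

print(looks_like_lora(lora_sd))  # True  -> loader accepts
print(looks_like_lora(oft_sd))   # False -> loader raises ValueError
```

So the failure is a format mismatch rather than a corrupt checkpoint: the OFT weights simply are not in the shape the LoRA loader expects.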

Here is the command I used to fine-tune the model:

```shell
accelerate launch train_dreambooth_oft.py \
  --pretrained_model_name_or_path="dreamlike-art/dreamlike-photoreal-2.0" \
  --instance_data_dir="input9" \
  --output_dir="hardmodel" \
  --instance_prompt="a photo of sks t-shirt" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=7e-4 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=600 \
  --seed="0" \
  --eps=6e-5 \
  --r="2" \
  --coft
```

Am I missing something here? Or is there another way to run inference without using the diffusers library?
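One quick diagnostic is to list what the training run actually wrote into `--output_dir` before deciding how to load it, since the file names reveal whether the script saved a diffusers-style LoRA checkpoint or its own OFT format. A hedged sketch (the weight file name in the example is an assumption, and a throwaway directory stands in for `hardmodel`):

```python
# Hedged sketch: list the weight files a training run left in its
# output directory, so you can see what format was saved before
# picking a loader. File names below are examples, not guaranteed
# outputs of train_dreambooth_oft.py.
from pathlib import Path

def checkpoint_files(output_dir):
    """Return the names of weight files found under output_dir."""
    weight_exts = {".bin", ".safetensors", ".pt"}
    return sorted(p.name for p in Path(output_dir).rglob("*")
                  if p.suffix in weight_exts)

# Example with a temporary directory standing in for "hardmodel":
import tempfile
with tempfile.TemporaryDirectory() as d:
    Path(d, "oft_weights.safetensors").touch()  # assumed file name
    print(checkpoint_files(d))  # ['oft_weights.safetensors']
```

If the files are not named like diffusers LoRA checkpoints (e.g. `pytorch_lora_weights.safetensors`), that is consistent with the `ValueError` above, and the checkpoint would need to be loaded by whatever mechanism the OFT repository itself provides.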

me too.