haofanwang/Lora-for-Diffusers

SafetensorError: Error while deserializing header: HeaderTooLarge

ShaunXZ opened this issue · 12 comments

Hi,

I am trying to convert a LoRA from safetensors format to bin using the script in format_convert.py. The bin file is generated successfully, but loading it always throws a HeaderTooLarge error. Could you please help? Thanks in advance!
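For context (my understanding of the format, not something from this repo): a safetensors file begins with an 8-byte little-endian unsigned integer giving the length of the JSON header that follows. If a loader is handed a pickle-based .bin file instead, those first 8 bytes decode to a huge bogus length, which is exactly what HeaderTooLarge reports. A quick stdlib-only sketch of what the reader sees:

```python
import pickle
import struct

def apparent_safetensors_header_len(data: bytes) -> int:
    """Interpret the first 8 bytes the way a safetensors reader would:
    as a little-endian u64 giving the JSON header length."""
    return struct.unpack("<Q", data[:8])[0]

# A torch .bin file is a pickle under the hood; fake a tiny one with plain pickle.
bin_bytes = pickle.dumps({"lora.down.weight": [0.0] * 4})

header_len = apparent_safetensors_header_len(bin_bytes)
print(f"file size: {len(bin_bytes)} bytes, apparent header length: {header_len}")
# The "header length" dwarfs the file itself, hence HeaderTooLarge.
```

So the error usually means the file being opened as safetensors is actually a pickle/.bin (or truncated), not that the LoRA itself is broken.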


Below is the script that produces the above error. Env: Google Colab.

# load diffusers model
model_id = "runwayml/stable-diffusion-v1-5"
pipeline = StableDiffusionPipeline.from_pretrained(model_id,torch_dtype=torch.float32)

# convert
# you have to download a suitable safetensors file; not all are supported!
# download example from https://huggingface.co/SenY/LoRA/tree/main
# wget https://huggingface.co/SenY/LoRA/resolve/main/CheapCotton.safetensors
safetensor_path = "CheapCotton.safetensors"

bin_path = "CheapCotton.bin"
safetensors_to_bin(safetensor_path, bin_path)

# load it into UNet
# please note that diffusers' load_attn_procs only supports adding LoRA to attention layers
# LoRA with other insertion points is not supported yet
pipeline.unet.load_attn_procs(bin_path)

Got the same issue

@ShaunXZ @Pirog17000 This issue may help. I hit this problem before too and solved it by re-downloading the model (please check whether the base model and the safetensors file downloaded correctly; if the file size is too small, it is probably corrupted). By the way, can you share your colab link so that I can take a look for you?

@haofanwang Thank you for your quick response. I double-checked the downloaded safetensors file and it seems to have the right size (over 100 MB). Below is the colab used to test this script:
https://colab.research.google.com/drive/12wFobWFL_NZ64fOV0gEYXePzZMZlRpr_?usp=sharing

Thanks,

My issue was resolved by updating diffusers. Since I run it locally, my steps were:
pip uninstall diffusers
pip install git+https://github.com/huggingface/diffusers.git

No reinstall or update flags helped; only a straight uninstall-and-install. No more issues, works well.

@Pirog17000 Hi, I tried your method in colab and it still didn't work... Could you take a look at the colab link above? Thank you!

+1 I'm also seeing this issue 😭 It's able to create the bin, but fails when running pipeline.unet.load_attn_procs(bin_path)

+1, same issue here.

According to issue3367, pipeline.unet.load_attn_procs() takes the directory where the .bin file is stored, not the path to the .bin file itself. Changing the input from "CheapCotton.bin" to "/PathToWhereItsStored" solved this error for me.
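To make that concrete, here is a minimal sketch of the workaround (the helper name prepare_lora_dir and the folder name are my own; the fixed part is that, in the diffusers versions discussed here, load_attn_procs looks inside the given directory for a file literally named pytorch_lora_weights.bin):

```python
import shutil
from pathlib import Path

def prepare_lora_dir(bin_path: str, target_dir: str) -> str:
    """Copy a converted LoRA .bin into a folder under the filename
    diffusers' load_attn_procs expects; return that folder's path."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy(bin_path, target / "pytorch_lora_weights.bin")
    return str(target)

# Demo with a placeholder file standing in for the converted LoRA:
Path("CheapCotton.bin").write_bytes(b"placeholder")
lora_dir = prepare_lora_dir("CheapCotton.bin", "cheap_cotton_lora")

# Then pass the *directory*, not the .bin file itself:
# pipeline.unet.load_attn_procs(lora_dir)
```

Passing the bare .bin path makes diffusers try to parse it as a safetensors file, which is what triggers the HeaderTooLarge error above.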

I met this issue too. Here is how I solved it, though I'm not sure my way is right.

The error occurred with:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/pytorch_model.bin"
pipe.unet.load_attn_procs(lora_model_path)
which gave the same HeaderTooLarge error as yours.

Then I changed it to:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)
which gave a different error: no file pytorch_lora_weights.bin.

Then I ran:
cp pytorch_model.bin pytorch_lora_weights.bin
and re-ran the code:
lora_model_path = "./text_to_image/sddata/finetune/lora/pokemon/checkpoint-11000/"
pipe.unet.load_attn_procs(lora_model_path)
Success!
Why? Why? Why?

same issue here, any solution yet?

I think your solution is right:

  1. mkdir a new folder called whatever you want
  2. rename the new bin file to pytorch_lora_weights.bin
  3. put pytorch_lora_weights.bin into the new folder you just created
  4. call pipe.unet.load_attn_procs(new_folder_path)

and it will work.

However, I then hit this error:
File "/workspace/demo/Diffusion/models.py", line 301, in get_model
return basic_unet.load_attn_procs(self.lora)
File "/usr/local/lib/python3.8/dist-packages/diffusers/loaders.py", line 234, in load_attn_procs
rank = value_dict["to_k_lora.down.weight"].shape[0]
KeyError: 'to_k_lora.down.weight'