oneThousand1000/Portrait3D

KeyError: 'state_dict'

icekeg opened this issue · 5 comments

seed_everything(opt.seed)

config = OmegaConf.load(f"{opt.config}")
model = load_model_from_config(config, f"{opt.ckpt}")

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model = model.to(device)

/datasets/Portrait3D/stable-diffusion/scripts/txt2realistic_human.py:53 in load_model_from_config

pl_sd = torch.load(ckpt, map_location="cpu")
if "global_step" in pl_sd:
    print(f"Global Step: {pl_sd['global_step']}")
sd = pl_sd["state_dict"]
model = instantiate_from_config(config.model)
m, u = model.load_state_dict(sd, strict=False)
if len(m) > 0 and verbose:

KeyError: 'state_dict'
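For context, the crash happens because the loader assumes the .ckpt wraps its weights under a 'state_dict' key, which a checkpoint exported without the Lightning wrapper will not have. A minimal sketch of a tolerant unwrap (the helper name is mine, not from the repo):

```python
def unwrap_state_dict(pl_sd):
    """Return the weight dict whether or not the checkpoint wraps it.

    Lightning-style .ckpt files store {'state_dict': ..., 'global_step': ...};
    some exported checkpoints store the raw weight dict directly.
    """
    if isinstance(pl_sd, dict) and "state_dict" in pl_sd:
        return pl_sd["state_dict"]
    return pl_sd

# Usage with the real file (requires torch):
#   pl_sd = torch.load(ckpt, map_location="cpu")
#   sd = unwrap_state_dict(pl_sd)
```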

Did you convert the diffusers-format model to the original Stable Diffusion format?
Could you print the keys in realisticVisionV51_v51VAE.ckpt?
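One way to inspect the checkpoint as asked (a sketch; the helper is hypothetical, and loading the real file requires torch):

```python
def describe_checkpoint(obj, limit=20):
    """Summarize the top-level structure of a loaded checkpoint object."""
    if isinstance(obj, dict):
        keys = sorted(obj.keys())
        return f"dict with {len(keys)} top-level keys, e.g. {keys[:limit]}"
    return f"object of type {type(obj).__name__}"

# With the real file (requires torch):
#   pl_sd = torch.load("realisticVisionV51_v51VAE.ckpt", map_location="cpu")
#   print(describe_checkpoint(pl_sd))
```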

I downloaded the repository directly from https://github.com/huggingface/diffusers:
git clone https://github.com/huggingface/diffusers
When I tried to convert the model, I got this error:

Traceback (most recent call last):
  File "convert_diffusers_to_original_stable_diffusion.py", line 319, in
    text_enc_dict = load_file(text_enc_path, device="cpu")
  File "/data/home/alfredchen/anaconda3/envs/text_to_3dportrait/lib/python3.8/site-packages/safetensors/torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
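HeaderTooLarge usually means the file is not a real safetensors file at all — most often it is a Git LFS pointer left behind when the repo was cloned without `git lfs pull`, so safetensors misreads the pointer text as an enormous header length. A hedged diagnostic sketch (the function and thresholds are my assumptions):

```python
import struct
from pathlib import Path

def diagnose_safetensors(path):
    """Heuristic check for why safetensors fails with HeaderTooLarge.

    A real .safetensors file starts with an 8-byte little-endian header
    length. A Git LFS pointer (a short text file) interpreted that way
    yields a huge bogus length, hence the error.
    """
    data = Path(path).read_bytes()[:256]
    if data.startswith(b"version https://git-lfs"):
        return "git-lfs pointer (weights not actually downloaded)"
    if len(data) < 8:
        return "file too small / truncated"
    (header_len,) = struct.unpack("<Q", data[:8])
    if header_len > 100 * 1024 * 1024:
        return f"implausible header length {header_len}: corrupted or not safetensors"
    return "header length looks plausible"
```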

Please follow these steps in the README:
[screenshot of the README conversion steps]

I've downloaded Realistic_Vision_V5.1_noVAE,
pulled the repository from https://github.com/huggingface/diffusers over HTTPS,
and converted the model in the way you suggested, but I still got this error:

Traceback (most recent call last):
  File "convert_diffusers_to_original_stable_diffusion.py", line 319, in
    text_enc_dict = load_file(text_enc_path, device="cpu")
  File "/data/home/alfredchen/anaconda3/envs/text_to_3dportrait/lib/python3.8/site-packages/safetensors/torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge

You can try:

  1. Check that every .safetensors file you downloaded is complete. You could carefully compare the file sizes, or try using diffusers to load the whole model like this:
    pipe = StableDiffusionPipeline.from_pretrained(model_key, torch_dtype=self.precision_t)
  2. If any safetensors files are incomplete, redownload them.
  3. If it still does not work, download the .bin files instead and replace all the safetensors files.
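One quick way to carry out step 1 offline is to scan the model folder for weight files that are suspiciously small — an un-pulled Git LFS pointer is only a few hundred bytes, while real weights are hundreds of MB. A sketch (the helper name and the 10 KB threshold are my assumptions):

```python
import os

def find_suspicious_weight_files(root, min_bytes=10_000):
    """Flag weight files too small to hold real weights — usually
    Git LFS pointers left by a clone without `git lfs pull`."""
    bad = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith((".safetensors", ".bin")):
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) < min_bytes:
                    bad.append(path)
    return bad

# Usage: print(find_suspicious_weight_files("./Realistic_Vision_V5.1_noVAE"))
```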