OpenBMB/VisCPM

RuntimeError: Error(s) in loading state_dict for VLU_CPMBee: Missing key(s) in state_dict: "query", "vpm.beit3.text_embed.weight", "vpm.beit3.vision_embed.mask_token",

funykatebird opened this issue · 1 comment

Python 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.

>>> from VisCPM import VisCPMChat
>>> from PIL import Image
>>> model_path = '/home/jovyan/mm_large_model/viscpm_paint_zhplus_checkpoint.pt'
>>> viscpm_chat = VisCPMChat(model_path, image_safety_checker=True)
Traceback (most recent call last):
File "", line 1, in
File "/home/jovyan/mm_large_model/test/VisCPM/VisCPM/viscpm_chat.py", line 50, in init
self.vlu_cpmbee.load_state_dict(vlu_state_dict)
File "/opt/conda/envs/viscpm2/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1671, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for VLU_CPMBee:
Missing key(s) in state_dict: "query", "vpm.beit3.text_embed.weight", "vpm.beit3.vision_embed.mask_token", "vpm.beit3.vision_embed.cls_token", "vpm.beit3.vision_embed.proj.weight", "vpm.beit3.vision_embed.proj.bias", "vpm.beit3.encoder.embed_positions.A.weight", "vpm.beit3.encoder.embed_positions.B.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.A.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.A.bias", "vpm.beit3.encoder.layers.0.self_attn.k_proj.B.weight", "vpm.beit3.encoder.layers.0.self_attn.k_proj.B.bias", "vpm.beit3.encoder.layers.0.self_attn.v_proj.A.weight", "vpm.beit3.encoder.layers.0.self_attn.v_proj.A.bias", "vpm.beit3.encoder.layers.0.s

You are using the wrong checkpoint file: `viscpm_paint_zhplus_checkpoint.pt` is a VisCPM-Paint (text-to-image) checkpoint, while `VisCPMChat` expects a VisCPM-Chat checkpoint, so none of the keys the chat model looks for (`query`, `vpm.beit3.*`, etc.) are present. Download the VisCPM-Chat checkpoint and pass that path instead.
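A quick way to confirm a checkpoint mismatch like this before calling `load_state_dict` is to diff the model's expected keys against the keys stored in the file. The sketch below is generic: the "expected" names are taken from the traceback above, while the single "checkpoint" key is a hypothetical stand-in for what a paint-model file might contain.

```python
def diff_state_dict_keys(expected_keys, checkpoint_keys):
    """Return (missing, unexpected) key lists, mirroring what
    load_state_dict reports as Missing/Unexpected key(s)."""
    expected = set(expected_keys)
    found = set(checkpoint_keys)
    return sorted(expected - found), sorted(found - expected)

# Keys the VLU_CPMBee model expects (from the error message above)
expected = ["query", "vpm.beit3.text_embed.weight",
            "vpm.beit3.vision_embed.mask_token"]
# Hypothetical key from a paint-style checkpoint
checkpoint = ["unet.conv_in.weight"]

missing, unexpected = diff_state_dict_keys(expected, checkpoint)
print("missing:", missing)
print("unexpected:", unexpected)
```

In practice you would load the file with `ckpt = torch.load(model_path, map_location='cpu')` and compare `model.state_dict().keys()` with `ckpt.keys()`; if nearly every key is missing, the file belongs to a different model entirely.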