Error when exporting the ONNX model of MiniCPM-V-2_6
Opened this issue · 3 comments
Command executed:
cd compile
python3 export_onnx.py --model_path your_minicpmv_path
Error:
File "/usr/local/anaconda3/envs/hulei/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 644, in forward
hidden_states, self_attn_weights, present_key_value = self.self_attn(
File "/usr/local/anaconda3/envs/hulei/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/anaconda3/envs/hulei/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/anaconda3/envs/hulei/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1522, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/anaconda3/envs/hulei/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py", line 321, in forward
attn_weights = attn_weights + causal_mask
RuntimeError: The size of tensor a (513) must match the size of tensor b (28) at non-singleton dimension 3
After copying this repository's modeling_qwen2.py into the installed transformers package, the shapes no longer match when computing attn_weights after the transpose.
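The RuntimeError above comes from broadcasting two tensors whose last dimensions disagree (513 vs. 28). A minimal NumPy sketch of the same failure, with hypothetical shapes matching the traceback (batch, heads, query length, key/value length):

```python
import numpy as np

# Hypothetical shapes: attn_weights covers a KV length of 513,
# while the causal mask was built for a sequence of length 28.
attn_weights = np.zeros((1, 2, 513, 513))  # (batch, heads, q_len, kv_len)
causal_mask = np.zeros((1, 1, 513, 28))    # mask from a mismatched length

try:
    _ = attn_weights + causal_mask  # same op as modeling_qwen2.py line 321
except ValueError as e:
    print("shape mismatch:", e)
```

This usually indicates that the causal mask was constructed for a different sequence length than the hidden states being exported, e.g. a seq_length mismatch between the export script and the patched modeling file.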
Is your LLM-TPU checkout up to date, and is your transformers version aligned?
LLM-TPU is at the latest version, in a freshly created conda environment. The transformers installation followed: pip install transformers_stream_generator einops tiktoken accelerate torch==2.0.1+cpu torchvision==0.15.2 transformers==4.40.0
cp files/openbmb-MiniCPM-V-2_6/modeling_qwen2.py /usr/local/lib/python3.10/dist-packages/transformers/models/qwen2/ @chuxiaoyi2023
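Note that the cp command above targets /usr/local/lib/python3.10/dist-packages, while the traceback shows transformers living in a conda environment (/usr/local/anaconda3/envs/hulei/...). If the patched file is copied into the wrong install, the fix never takes effect. A small sketch to locate the transformers install that the current interpreter actually uses (directory names are assumptions based on the standard transformers layout):

```python
import importlib.util
import os

# Find where the active Python environment resolves "transformers".
spec = importlib.util.find_spec("transformers")
if spec is not None and spec.origin is not None:
    pkg_dir = os.path.dirname(spec.origin)
    # The patched modeling_qwen2.py should go under models/qwen2/ here.
    print(os.path.join(pkg_dir, "models", "qwen2"))
else:
    print("transformers is not installed in this environment")
```

Copying the patched file into the path printed here (rather than a hard-coded dist-packages path) ensures the environment that runs export_onnx.py picks it up.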
@chuxiaoyi2023 The export_onnx.py script also does not support the cpu device or seq_length 512; please check whether this script is out of date.