microsoft/onnxruntime

onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model) onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from onnx_data/cpm_large_opt.onnx failed:Protobuf parsing failed.

lucasjinreal opened this issue · 17 comments

onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from onnx_data/cpm_large_opt.onnx failed:Protobuf parsing failed.


A folder contains a very big ONNX model, about 7 GB, with its .onnx file and external data. ONNX Runtime is not able to load this model back.

snnn commented

To debug the problem, you may use the Google protobuf Python API to load the model directly. If it fails, then something is wrong with the model. Otherwise, it's a bug in onnxruntime.

Document: https://developers.google.com/protocol-buffers/docs/pythontutorial

Protobuf def file: https://github.com/onnx/onnx/blob/main/onnx/onnx-ml.proto . The data type you will need to use is: ModelProto.

onnx_data/cpm_large_opt.onnx can't be bigger than 2GB, but the folder can.
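
A minimal sketch of that check (assuming the onnx Python package is installed, which ships the generated ModelProto class; the path is taken from the error above):

```python
# Parse the model file directly with the protobuf-generated ModelProto class.
# A google.protobuf.message.DecodeError here means the file itself is
# malformed; if this succeeds, the failure is more likely on the ORT side.
from onnx import ModelProto

model = ModelProto()
with open("onnx_data/cpm_large_opt.onnx", "rb") as f:
    model.ParseFromString(f.read())

print(model.ir_version, len(model.graph.node))
```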

@snnn thanks. The onnx model file is just 174 KB, is that normal? The model was just exported via PyTorch, and I can visualize it with Netron and read the model successfully with onnx.


Actually, what I am doing is using the onnxruntime transformers optimization API to optimize this huge model. It seems to be an onnxruntime problem.
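
For reference, a minimal sketch of how that optimization pass is usually invoked; the model type, head count, and hidden size below are placeholders, not the actual values used for this model:

```python
# Hedged sketch of the onnxruntime transformers optimizer; the model_type
# and all numeric values are illustrative placeholders.
from onnxruntime.transformers import optimizer

opt_model = optimizer.optimize_model(
    "onnx_data/cpm_large.onnx",  # hypothetical path to the exported model
    model_type="gpt2",           # assumption: CPM is a GPT-style decoder
    num_heads=32,                # placeholder
    hidden_size=2560,            # placeholder
)

# Write the optimized graph back out; external data format is needed >2 GB.
opt_model.save_model_to_file(
    "onnx_data/cpm_large_opt.onnx", use_external_data_format=True
)
```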

snnn commented

is that normal?

It is. The error was thrown from the protobuf API, so I was guessing it should be reproducible without using ONNX or ONNX Runtime.

@snnn that would be strange. As I showed above, I can read the onnx model normally and print its information. It shouldn't be a protobuf problem.

I think the reason may be that onnx just reads the onnx model itself without the huge external data, while ort reads all of it and somehow breaks. Would you help me take a deeper look at models larger than 7 GB in ort?
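
A small sketch to test that hypothesis with the onnx Python package: load only the proto, then load it again with the external data resolved (the second load pulls several GB into memory):

```python
import onnx

path = "onnx_data/cpm_large_opt.onnx"

# Load just the small .onnx proto, leaving the external tensor files on disk.
proto_only = onnx.load(path, load_external_data=False)
print("initializers:", len(proto_only.graph.initializer))

# Load again with the external data files referenced by the proto; if this
# step fails, the external-data references are the likely culprit.
full_model = onnx.load(path, load_external_data=True)

# Checking by path lets the checker handle models larger than 2 GB.
onnx.checker.check_model(path)
```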

snnn commented

For example, sometimes other tools read/write text-format models, but the ONNX spec requires models in binary format.
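
A quick, hedged way to test for that case: if the file parses as protobuf text format, it is not the binary serialization that ORT expects.

```python
# If Parse() succeeds, the file is a text-format proto, which onnxruntime
# rejects; the ONNX spec expects the binary wire format.
import onnx
from google.protobuf import text_format

with open("onnx_data/cpm_large_opt.onnx", "r", errors="replace") as f:
    text_format.Parse(f.read(), onnx.ModelProto())
```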

snnn commented

The only places that can produce this error message are in https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/graph/model.cc

If you search for "Protobuf parsing failed", there are only 3 places. They all fire because protobuf returned an error.

I also met this problem, is there a solution? Thanks ~

Hello, Ahmed here, I have the same problem.

The same problem with a big model.

The same problem with the options.ExecutionMode = ExecutionMode.ORT_PARALLEL; config, and a model of at most 200 MB.

ABZig commented

This error may occur if there's something wrong with the model file inswapper_128.onnx.

Try to download it manually from here and put it into stable-diffusion-webui\models\insightface, replacing the existing one.


Thanks man, this fixed my issue

If anyone landed here because of a ComfyUI InstantID plugin error, it is probably because you haven't downloaded the corresponding models or haven't put them in the location the plugin expects.

  1. Just put all of the models from https://huggingface.co/DIAMONIK7777/antelopev2/tree/main into the specified location ComfyUI//custom_nodes/ComfyUI-InstantID/models/antelopev2


I just tried. A current workaround might be to save the model with save_as_external_data=True, then load it with onnxruntime.InferenceSession(), but you have to make sure the ONNX ModelProto passes the checker.
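
A minimal sketch of that workaround, assuming the model is already loaded in memory as an onnx.ModelProto named model and the paths are illustrative:

```python
import onnx
import onnxruntime

# Re-save the >2 GB model with all weights moved to one external data file.
onnx.save_model(
    model,                               # a loaded onnx.ModelProto
    "onnx_data/cpm_large_opt.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="cpm_large_opt.onnx.data",  # hypothetical external-data file name
)

# Validate by path (the checker resolves external data when given a path),
# then create the inference session.
onnx.checker.check_model("onnx_data/cpm_large_opt.onnx")
sess = onnxruntime.InferenceSession(
    "onnx_data/cpm_large_opt.onnx", providers=["CPUExecutionProvider"]
)
```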