Loading an ONNX model fails with an error
shangerxin opened this issue · 7 comments
I try to load the ONNX model with the following code:
import winrt.windows.ai.machinelearning as winml
import os

def timed_op(fun):
    import time

    def wrapper(*args, **kwds):
        print("Starting", fun.__name__)
        start = time.perf_counter()

        ret = fun(*args, **kwds)

        end = time.perf_counter()
        print(fun.__name__, "took", end - start, "seconds")
        return ret

    return wrapper

@timed_op
def load_model(model_path):
    return winml.LearningModel.load_from_file_path(os.fspath(model_path))

model_path = r'...\Downloads\od_net_model.onnx'
model = load_model(model_path)
and get this error:
RuntimeError Traceback (most recent call last)
<ipython-input-1-e4ff321a9c65> in <module>
22
23 model_path = r'...\od_net_model.onnx'
---> 24 model = load_model(model_path)
<ipython-input-1-e4ff321a9c65> in wrapper(*args, **kwds)
9 start = time.perf_counter()
10
---> 11 ret = fun(*args, **kwds)
12
13 end = time.perf_counter()
<ipython-input-1-e4ff321a9c65> in load_model(model_path)
19 @timed_op
20 def load_model(model_path):
---> 21 return winml.LearningModel.load_from_file_path(os.fspath(model_path))
22
23 model_path = r'...\Downloads\od_net_model.onnx'
RuntimeError: Node:ConstantOfShape_17 No Op or Function registered for ConstantOfShape with domain_version of 11
Hey @smk2007 or @wchao1115 - this should probably go over to the ONNX AI/ML repo, since it's complaining about the model rather than about how the APIs are being called?
From the error it looks like the model fails to load for some reason. If you can provide the model file, we can take a look and see what's wrong with it.
Hi,
It looks like you are using the WinML xlang Python projection. That projection was a prototype and is no longer supported.
The reason you see an error is that this projection uses the WinML engine that ships inbox on Windows. The latest available version of Windows does not support domain_version 11 (ONNX opset 11), so the model fails to load.
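As a quick check, here is a minimal sketch that prints the opset versions a model declares, so you can confirm it targets opset 11. It assumes the onnx pip package is installed (that package is an assumption on my part, not something the projection provides):

# Sketch: inspect which operator-set versions the model requires.
# Assumes: pip install onnx
import onnx

model = onnx.load(r'...\Downloads\od_net_model.onnx')
for opset in model.opset_import:
    # An empty domain string means the default ai.onnx operator set.
    print('domain:', opset.domain or 'ai.onnx', 'version:', opset.version)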
If you want to use Windows Machine Learning with Python bindings on Windows, take a look at https://www.onnxruntime.ai/python/.
These bindings also let you evaluate against the same onnxruntime core engine that WinML uses.
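For illustration, here is a minimal sketch of loading and running a model with those bindings; the input shape and dtype below are placeholders and must be replaced with your model's actual ones:

# Sketch: evaluate an ONNX model with onnxruntime instead of WinML.
# Assumes: pip install onnxruntime
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(r'...\Downloads\od_net_model.onnx',
                            providers=['CPUExecutionProvider'])
input_name = sess.get_inputs()[0].name
# Placeholder shape/dtype; substitute the model's real input.
dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = sess.run(None, {input_name: dummy})
print([o.shape for o in outputs])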
Thanks for the response. The model file cannot be shared due to company policy.
Thank you for the response. I ended up here by following a link in the documentation at https://www.onnxruntime.ai/python/.
What I am trying to do is ship ONNX GPU and CPU support together. The question is: does ONNX GPU require a full CUDA and cuDNN installation on the client machine? Can we copy only some specific DLLs and set the environment path instead, to reduce the package size? Thanks in advance.
Hi,
This question about CUDA deployment with ORT should be opened in the onnxruntime GitHub project here: https://github.com/microsoft/onnxruntime/issues
I am curious as to why you were using the xlang Python projection. Understanding your scenario may help us determine whether something like WinRT WinML projections in Python are needed.
Generally speaking, ONNX models can be evaluated on GPU or CPU depending on how evaluation is performed.
For C++ and C#, you can execute models on the GPU using DML or CUDA, through either the WinML API or the OnnxRuntime API.
For Python, the OnnxRuntime API is available.
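As a rough sketch of how GPU/CPU selection looks in that API (CUDAExecutionProvider is only available with the onnxruntime-gpu package, which is an assumption about your install):

# Sketch: request CUDA first and fall back to CPU.
# Assumes: pip install onnxruntime-gpu for CUDA support.
import onnxruntime as ort

providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
sess = ort.InferenceSession(r'...\Downloads\od_net_model.onnx', providers=providers)
print(sess.get_providers())  # shows the providers actually in use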
@smk2007 Got it. I got the code sample from the onnxruntime documentation. Maybe it is out of date. Thank you.