Conversion of some models is buggy
Describe the bug
When converting the Hugging Face OpenAI Whisper models to ONNX with the Olive configuration, I am running into a few different problems.
The exact failure partly depends on which version of ONNX Runtime I try: with current versions, from 1.15.0 onwards, the models give different errors depending on how I load them.
To Reproduce
Convert the Whisper model using Olive:
set model="openai/whisper-large.en"
set config="whisper_cpu_int8.json"
python prepare_whisper_configs.py --model_name %model%
python -m olive.workflows.run --config %config% --setup
python -m olive.workflows.run --config %config%
Then try to load the model via CNTK:
string path = @"C:\Your\Path\Goes\Here";
string modelPath = Path.Combine(path, @"Models\Whisper.Small.en\model.onnx");
// Load the converted ONNX model through CNTK's C# API.
Function model = Function.Load(modelPath, DeviceDescriptor.CPUDevice, ModelFormat.ONNX);
// App-side helpers that prepare the Whisper model inputs.
WhisperConfig config = new WhisperConfig(modelPath);
var inputs = WhisperConfig.BuildWhisperInput(Path.Combine(path, @"Audio\sampleaudio.wav"));
This gives the error:
System.ApplicationException: Failed to load model: 'At top level graph without matching NodeArg that subgraph consumes. Name=s_d_decoder.model.decoder.embed_tokens.weight_quantized Graph may not conform to the ONNX spec and contain initializers that are not graph inputs.'
[CALL STACK]
> CNTK::TrainingParameterSchedule::GetMinibatchSize
- CNTK::XavierInitializer
- CNTK::Function::Load
- CSharp_CNTK_Function__Load__SWIG_0
- 00007FF7AAEF2975 (SymFromAddr() error: The specified module could not be found.)
Alternatively, using the example code found here and loading the model with ONNX Runtime, I get this stack trace (a minimal sketch of the loading code follows it):
at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus) in Microsoft.ML.OnnxRuntime\NativeApiStatus.cs:line 23
at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options, PrePackedWeightsContainer prepackedWeightsContainer) in Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 595
at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath, SessionOptions options) in Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 124
at MLWebApi.MLServices.AutomaticSpeechRecognition..ctor(String modelPath) in C:\Users\User\Desktop\C# Code\OnnxConsoleApp\MLWebApi\MLServices\AutomaticSpeechRecognition.cs:line 88
at Program.<Main>$(String[] args) in C:...
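For context, a minimal sketch of the loading path that fails above, assuming the Microsoft.ML.OnnxRuntime package (the model path is a placeholder and the AutomaticSpeechRecognition internals are omitted):

using Microsoft.ML.OnnxRuntime;

// Sketch: the exception above is thrown from the InferenceSession constructor.
string modelPath = @"C:\Your\Path\Goes\Here\Models\Whisper.Small.en\model.onnx"; // placeholder
using var options = new SessionOptions();
using var session = new InferenceSession(modelPath, options);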
Sometimes, with a different version of the runtime library, I get this error instead:
[ErrorCode:Fail] subgraph_whisper_encoder.cc:43 onnxruntime::contrib::transformers::WhisperEncoderSubgraph::Validate expect 2 inputs, got:3
at Microsoft.ML.OnnxRuntime.NativeApiStatus.VerifySuccess(IntPtr nativeStatus)
at Microsoft.ML.OnnxRuntime.InferenceSession.Init(String modelPath, SessionOptions options, PrePackedWeightsContainer prepackedWeightsContainer) in Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 595
at Microsoft.ML.OnnxRuntime.InferenceSession..ctor(String modelPath, SessionOptions options) in Microsoft.ML.OnnxRuntime\InferenceSession.cs:line 124
at MLWebApi.MLServices.AutomaticSpeechRecognition..ctor(String modelPath) in C:\Users\User\Desktop\C# Code\OnnxConsoleApp\MLWebApi\MLServices\AutomaticSpeechRecognition.cs:line 87
at Program.<Main>$(String[] args) in C:...
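As a diagnostic sketch (not from the original report): wrapping session creation in a try/catch for OnnxRuntimeException surfaces the validation message, and when a load does succeed on some runtime version, session.InputMetadata shows which inputs the exported graph actually declares, which is the quickest way to see the 2-vs-3 input mismatch:

using System;
using Microsoft.ML.OnnxRuntime;

try
{
    using var session = new InferenceSession(modelPath); // same placeholder path as above
    // On success, list the inputs the exported model declares so they can be
    // compared against what this runtime's Whisper subgraph expects.
    foreach (var kv in session.InputMetadata)
        Console.WriteLine($"{kv.Key}: [{string.Join(",", kv.Value.Dimensions)}]");
}
catch (OnnxRuntimeException ex)
{
    // e.g. "WhisperEncoderSubgraph::Validate expect 2 inputs, got:3"
    Console.WriteLine(ex.Message);
}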
Expected behavior
It would be fantastic for this to work: the converted model should load in ONNX Runtime and run inference without errors!
Olive config
Standard, out-of-the-box config (whisper_cpu_int8.json), run in a fresh environment with everything from requirements.txt installed.
Olive logs
No errors were reported.
Other information
- OS: Windows 10
- Olive version: 0.5.0 or main
- ONNXRuntime package and version: CPU package at the moment, 1.15.0 onwards
This is a fantastic tool and it is very useful! Once a few bugs are ironed out, it will be even better. Thank you all!