microsoft/Olive

Whisper model converted via onnxruntime 1.17.1 won't work

khmyznikov opened this issue · 5 comments

Describe the bug
If you run the Olive example passes for the whisper-small model with onnxruntime 1.17.1 (for example, whisper_cpu_int8.json), the converted model does not work: transcription produces no output at all and raises no errors.

To Reproduce

  • Use onnxruntime 1.17.1 (a quick check of the installed runtime is sketched after these steps)
  • python .\prepare_whisper_configs.py --model_name "openai/whisper-small" --multilingual --enable_timestamps --package_model --skip_evaluation --no_audio_decoder
  • python -m olive.workflows.run --config .\whisper_cpu_int8.json
  • python test_transcription.py --config whisper_cpu_int8.json --predict_timestamps
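
Since the failure is version-specific, it is worth confirming which onnxruntime build the repro actually loads before running the workflow; a minimal check, assuming a standard pip install:

  import onnxruntime

  # Print the exact runtime version and the execution providers this build exposes;
  # if both onnxruntime and onnxruntime-gpu are installed, the wrong one may be picked up.
  print(onnxruntime.__version__)
  print(onnxruntime.get_available_providers())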

Expected behavior
Transcription should produce some output; on onnxruntime 1.16.3 everything works fine.
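
Until this is fixed, pinning the runtime back to the last known-good release is a possible workaround (assuming a pip-managed environment):

  pip install onnxruntime==1.16.3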

Other information

  • OS: Windows
  • Olive version: 0.6.0
  • ONNXRuntime package and version: onnxruntime-gpu 1.17.1

Facing the same issue.

What output did you get? Can you try without --predict_timestamps to see if it works?
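
For reference, that would be the same test command from the repro steps with the flag dropped:

  python test_transcription.py --config whisper_cpu_int8.json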

Hi, thanks for reporting this issue. I have opened a PR to fix this: #1016

The fix has been merged. Please reopen if it doesn't work on your side.

@jambayk Still doesn't work for me... I converted with the latest Olive and it is still completely silent, with no output.
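
In case it helps narrow this down, here is a minimal sketch for inspecting the exported model directly with onnxruntime, independent of the transcription script; the model path is an assumption and should point at whatever the workflow actually wrote to your output directory:

  import onnxruntime as ort

  # Hypothetical output path; substitute the model file produced by
  # the whisper_cpu_int8.json workflow on your machine.
  model_path = "models/whisper_cpu_int8/model.onnx"

  session = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])

  # Listing the graph's declared inputs and outputs shows whether the
  # export itself looks sane before suspecting the test script.
  for inp in session.get_inputs():
      print("input:", inp.name, inp.shape, inp.type)
  for out in session.get_outputs():
      print("output:", out.name, out.shape, out.type)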