optimizing whisper without audio_decoder
Aiurus opened this issue · 3 comments
Describe the bug
I tried to optimize the whisper-tiny.en model without the audio decoder, but an error occurred.
To Reproduce
- python3 prepare_whisper_configs.py --model_name openai/whisper-tiny.en --no_audio_decoder
- olive run --config whisper_cpu_int8.json --setup
- olive run --config whisper_cpu_int8.json 2> /dev/null
After running the 3rd command, the optimized model should be generated in the 'model' folder, but an error occurred instead.
Expected behavior
When I run the same steps with the audio decoder enabled (i.e. without `--no_audio_decoder`), the workflow completes successfully.
Olive logs
[olive_evaluator.py:236:generate_metric_user_config_with_model_io] Model input shapes are not static. Cannot use inferred input shapes for creating dummy data. This will cause an error when creating dummy data for tuning.
Other information
- OS: Debian
Can you try again after removing `"evaluator": "common_evaluator"` from the template? There might be an issue with the evaluator, but it is not required.
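Removing the evaluator reference might look like this (a sketch only; the surrounding keys are illustrative and the exact layout of whisper_cpu_int8.json may differ). Before:

```json
{
    "engine": {
        "evaluator": "common_evaluator",
        "output_dir": "models"
    }
}
```

After deleting the `"evaluator"` entry:

```json
{
    "engine": {
        "output_dir": "models"
    }
}
```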
If it still fails, please share the full log from the run.
We don't provide an option to remove this mode; it was added by onnxruntime-extensions in this PR: microsoft/onnxruntime-extensions#681
Please install the previous version of onnxruntime-extensions (0.10.1) and rerun the workflow. You can also add `"clean_run_cache": true` at the same level as this line:
Olive/examples/whisper/whisper_template.json, line 105 (commit 80e1fa9)
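Putting the two suggestions together: pin the extensions package with `pip install onnxruntime-extensions==0.10.1`, then add the flag to the pass config. A sketch of the JSON change (the pass name and type here are illustrative placeholders; only the `clean_run_cache` key is the suggested addition):

```json
{
    "passes": {
        "conversion": {
            "type": "OnnxConversion",
            "clean_run_cache": true
        }
    }
}
```

With `clean_run_cache` enabled, Olive discards the cached output of that pass so the rerun does not reuse artifacts produced by the newer, incompatible extensions version.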