This example demonstrates how to run the Whisper tiny.en model in your browser using onnxruntime-web and the browser's audio interfaces.
First, install the required dependencies by running the following command in your terminal:
npm install
Next, bundle the code using webpack by running:
npm run build
This generates the bundled script at ./dist/bundle.min.js.
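Whisper expects 16 kHz mono float PCM, while browser AudioContexts typically capture at 44.1 kHz or 48 kHz, so captured audio has to be resampled before inference. As a rough sketch of what the bundled code needs to do (this helper and its name are illustrative assumptions, not part of the example's source):

```javascript
// Resample a Float32Array of mono PCM from `fromRate` to `toRate`
// using simple linear interpolation. Hypothetical helper for illustration.
function resample(pcm, fromRate, toRate) {
  if (fromRate === toRate) return pcm.slice();
  const ratio = fromRate / toRate;
  const outLength = Math.floor(pcm.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;                              // position in source
    const left = Math.floor(pos);
    const right = Math.min(left + 1, pcm.length - 1);
    const frac = pos - left;
    out[i] = pcm[left] * (1 - frac) + pcm[right] * frac; // interpolate
  }
  return out;
}
```

In the browser, the source samples would come from something like `audioBuffer.getChannelData(0)` after decoding or recording.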
To create an optimized end-to-end ONNX model from the original OpenAI Whisper model, follow these steps:
- Go to https://github.com/microsoft/Olive/tree/main/examples/whisper and follow the setup instructions there.
- Run the following commands:
python prepare_whisper_configs.py --model_name openai/whisper-tiny.en --no_audio_decoder
python -m olive.workflows.run --config whisper_cpu_int8.json --setup
python -m olive.workflows.run --config whisper_cpu_int8.json
- Move the resulting model from models/whisper_cpu_int8_0_model.onnx to the same directory as this code.
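Whisper models operate on 30-second windows of 16 kHz audio, so shorter recordings are zero-padded and longer ones truncated before being fed to the exported model. A minimal sketch of that step (the helper name and constants are assumptions for illustration; check `session.inputNames` in onnxruntime-web for the exact tensor name the Olive-exported model expects):

```javascript
// Whisper's expected input window: 30 s of 16 kHz mono PCM.
const SAMPLE_RATE = 16000;
const CHUNK_SECONDS = 30;
const CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_SECONDS; // 480000 samples

// Zero-pad or truncate a Float32Array of 16 kHz PCM to one Whisper window.
function toWhisperChunk(pcm) {
  const out = new Float32Array(CHUNK_SAMPLES); // zero-initialized
  out.set(pcm.subarray(0, Math.min(pcm.length, CHUNK_SAMPLES)));
  return out;
}
```

The resulting array is what gets wrapped in an ONNX tensor and passed to the model's PCM input.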
Use the npm package light-server to serve the current folder at http://localhost:8888/.
To start the server, run:
npx light-server -s . -p 8888
Once the web server is running, open your browser and navigate to http://localhost:8888/. You should now be able to run Whisper in your browser.
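The page served at that address needs to load the webpack bundle built earlier. The example ships its own page, so treat the following index.html as an illustrative sketch only (element ids and file names are assumptions):

```html
<!-- Minimal sketch of a page loading the built bundle; illustrative only. -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8" />
    <title>Whisper in the browser</title>
  </head>
  <body>
    <button id="record">Record</button>
    <div id="transcript"></div>
    <!-- The bundle produced by `npm run build` -->
    <script src="./dist/bundle.min.js"></script>
  </body>
</html>
```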