OpenAI Whisper REST API and task queue

The OpenAI Whisper model exposed as a REST API with a task queue, powered by FastAPI and Celery.

Usage

Running locally

Build and start the project with docker-compose:

docker-compose up

Making test requests

Once docker-compose is up, the server container is available at http://localhost:8000/, and you can make requests to that address.
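For example, a request might look like the following sketch. The endpoint path and payload shape here are assumptions, not the project's confirmed API; FastAPI serves interactive docs at http://localhost:8000/docs by default, so check there for the real routes.

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- confirm the actual routes
# in the interactive docs at http://localhost:8000/docs.
payload = json.dumps({"audio_url": "http://example.com/file.mp3"}).encode()
request = urllib.request.Request(
    "http://localhost:8000/transcribe",
    data=payload,
    headers={"Content-Type": "application/json"},
)

# With the docker-compose stack running, uncommenting this would submit
# the task and print the server's response:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())
print(request.full_url)
```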

To test a local mp3 file, it must be reachable over HTTP. A pre-made script handles this: it exposes your file via a dummy web server and then makes a request to the running API server. Invoke the script with the path to the input mp3 file:

./scripts/test_request.sh /path/to/file.mp3
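Conceptually, the script's approach can be sketched in Python as follows. The port, directory, and exact flow are illustrative assumptions, not the script's actual implementation:

```python
import functools
import http.server
import threading

# Serve the directory containing the mp3 over HTTP so the API server
# can fetch the file. Port 0 asks the OS for any free port; the real
# script may choose differently.
handler = functools.partial(
    http.server.SimpleHTTPRequestHandler, directory="/path/to"
)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The file is now reachable at this URL; the script then submits it
# in a request to the API running at http://localhost:8000/.
port = server.server_address[1]
file_url = f"http://127.0.0.1:{port}/file.mp3"
print(file_url)
server.shutdown()
```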

Development

Requirements

This project requires Python 3.11 and Poetry for dependency management. Make sure both are installed.

The project currently runs on Linux machines only because of the openai-whisper dependency. If you are on macOS or Windows, consider using a virtual machine for development.

Setup

Install project dependencies:

poetry install

Install pre-commit hooks:

pre-commit install

Running tests

Run all tests:

pytest

Start the continuous test runner (ptw comes from the pytest-watch package and reruns the tests whenever files change):

ptw .