Local Whisper API implementation

This is the server implementation of the Telegram Whisper bot.

This implementation uses FastAPI to receive MP3 files, run them through the Whisper model, and return a transcription of the audio.

The repo includes a requests.py script that shows how to use the endpoint.

The server runs on localhost:8000 and exposes a single endpoint, "/transcribe".

Installation

  • If you are using WSL, some dependencies may fail to load. To fix this, open your shell configuration file and add the following line:
nano ~/.bashrc
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
  • Create a virtual environment and activate it:
python -m venv venv
source venv/bin/activate
  • Because this is a FastAPI server, you also need uvicorn; install it together with the other requirements:
pip install uvicorn
pip install -r requirements.txt
  • To run the server:
uvicorn main:app --reload

The server will now be ready to receive requests.

Requirements

  • Python 3.8+
  • uvicorn
  • The packages listed in requirements.txt (FastAPI, Whisper)

Usage

See the requests.py file for an example of how to call the endpoint.
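As a rough guide, a client could look like the sketch below. It assumes the server expects a multipart form field named "file" and returns JSON with a "text" key; both are assumptions, and requests.py is the authoritative reference.

```python
# Hypothetical client sketch for the /transcribe endpoint.
import requests

API_URL = "http://localhost:8000/transcribe"

def transcribe(path: str) -> str:
    # Upload the MP3 as multipart/form-data and return the transcription.
    with open(path, "rb") as f:
        response = requests.post(
            API_URL,
            files={"file": ("audio.mp3", f, "audio/mpeg")},
            timeout=300,  # Whisper inference can be slow on CPU
        )
    response.raise_for_status()
    return response.json()["text"]  # response key is an assumption

# Example: print(transcribe("voice_note.mp3"))
```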