whisper.cpp.docker


Run whisper.cpp in a Docker container with GPU support.

TL;DR

docker compose up

or

MODEL=large-v2 LANGUAGE=ru docker compose up
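
Instead of prefixing variables on the command line, Docker Compose also reads a .env file placed next to the compose file. A minimal sketch, using the variable names from the commands above:

```shell
# .env — picked up automatically by docker compose from the project directory
MODEL=large-v2
LANGUAGE=ru
```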

Step by step

1. Build CUDA image (single run)

docker compose build --progress=plain
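
Before building, it can save time to check that Docker is installed and that the NVIDIA container runtime is working. A sketch of such a pre-flight check; the commented-out CUDA image tag is an assumption, and any image that ships nvidia-smi works for the GPU test:

```shell
# Sketch: pre-build sanity check.
if command -v docker >/dev/null 2>&1; then
  echo "docker found"
  # On a GPU host, verify the NVIDIA container runtime is wired up:
  # docker run --rm --gpus all nvidia/cuda:12.3.2-base-ubuntu22.04 nvidia-smi
else
  echo "docker not found"
fi
```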

2. Download models (single run)

You may want to run this step manually so you can watch the download progress.

./models/download.sh large-v2 

This script is a plain copy of whisper.cpp's download-ggml-model.sh. You can find additional information and configuration options in the upstream whisper.cpp repository.
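
Once the download finishes, a quick check confirms the model file landed. This sketch assumes the ggml-&lt;model&gt;.bin naming convention and the ./models/ target directory used by the download script:

```shell
MODEL=large-v2
# download-ggml-model.sh saves models as ggml-<MODEL>.bin
if [ -f "./models/ggml-${MODEL}.bin" ]; then
  echo "model present"
else
  echo "model missing"
fi
```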

3. Prepare your files

Place all the audio files you want transcribed in the ./volume/input/ directory
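
This step can be sketched as a couple of shell commands; the source path for the recordings is illustrative only:

```shell
# Sketch: stage audio files where the container expects them.
mkdir -p ./volume/input
# cp ~/recordings/*.wav ./volume/input/   # point this at your own files
ls ./volume/input/
```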

4. Run the docker compose

docker compose up

Configure defaults

MODEL=large-v2 \
LANGUAGE=ru \
    docker compose up
Argument    Values                                    Default
MODEL       base, medium, large, and other options    large-v2
LANGUAGE    en, ru, fr, etc. (depends on the model)   ru
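
The defaulting behaviour above can be reproduced with standard shell parameter expansion, as you would in a wrapper script; the exact mechanism inside the compose file may differ:

```shell
# Fall back to the documented defaults when the variables are unset.
MODEL=${MODEL:-large-v2}
LANGUAGE=${LANGUAGE:-ru}
echo "model=${MODEL} language=${LANGUAGE}"
```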

5. Result

You can find the transcription results in the ./volume/output/ directory
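
A short sketch for enumerating the results once the run completes; the output file formats (e.g. .txt, .srt) depend on how whisper.cpp is invoked by the image:

```shell
# List transcription outputs, handling an empty or missing directory.
for f in ./volume/output/*; do
  [ -e "$f" ] || continue  # glob did not match: directory empty or absent
  echo "output: $f"
done
```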