CUDA 12.1:
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121

CUDA 11.8 (for older cards):
$ docker run --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest

CPU (not recommended):
$ docker run -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest-cpu

Run with a fine-tuned model:
Make sure the model folder /path/to/model/folder contains the following files:
- config.json
- model.pth
- vocab.json
$ docker run -v /path/to/model/folder:/app/tts_models --gpus=all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 ghcr.io/coqui-ai/xtts-streaming-server:latest

Setting the COQUI_TOS_AGREED environment variable to 1 indicates that you have read and agreed to
the terms of the CPML license. (Fine-tuned XTTS models are also under the CPML license.)
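If you prefer Docker Compose, the docker run invocations above can be expressed as a compose file. This is a sketch, not a file shipped with the repo: the service name xtts is arbitrary, and the GPU reservation syntax assumes the NVIDIA Container Toolkit is installed on the host.

```yaml
# Hypothetical docker-compose.yml mirroring the CUDA 12.1 docker run command.
services:
  xtts:
    image: ghcr.io/coqui-ai/xtts-streaming-server:latest-cuda121
    environment:
      - COQUI_TOS_AGREED=1   # agree to the CPML license terms
    ports:
      - "8000:80"            # expose the server on localhost:8000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia # equivalent of --gpus=all
              count: all
              capabilities: [gpu]
```

Start it with `docker compose up`; for the CPU image, drop the deploy block and switch the image tag to latest-cpu.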
To build the Docker container yourself (PyTorch 2.1 and CUDA 11.8):
DOCKERFILE may be Dockerfile, Dockerfile.cpu, Dockerfile.cuda121, or your own custom Dockerfile.
$ git clone git@github.com:coqui-ai/xtts-streaming-server.git
$ cd xtts-streaming-server/server
$ docker build -t xtts-stream . -f DOCKERFILE
$ docker run --gpus all -e COQUI_TOS_AGREED=1 --rm -p 8000:80 xtts-stream
Once your Docker container is running, you can test that it's working properly. Run the following commands from a fresh terminal.
Clone xtts-streaming-server if you haven't already:
$ git clone git@github.com:coqui-ai/xtts-streaming-server.git

Using the Gradio demo:
$ cd xtts-streaming-server
$ python -m pip install -r test/requirements.txt
$ python demo.py

Using the test script:
$ cd xtts-streaming-server/test
$ python -m pip install -r requirements.txt
$ python test_streaming.py
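Beyond the bundled test script, you can talk to the server directly over HTTP. The sketch below is a minimal streaming client; the /tts_stream endpoint name and the payload field names are assumptions on my part — confirm the exact schema against the interactive FastAPI docs the server serves (by default at http://localhost:8000/docs).

```python
# Minimal streaming client sketch for a locally running xtts-streaming-server.
# Endpoint path and payload fields are assumptions; verify via /docs.
import json
import urllib.request

SERVER = "http://localhost:8000"

def build_payload(text, language, speaker_embedding, gpt_cond_latent):
    """Assemble the JSON body for the assumed /tts_stream endpoint.

    speaker_embedding and gpt_cond_latent are the voice-conditioning
    vectors the server computes from a reference audio clip.
    """
    return {
        "text": text,
        "language": language,
        "speaker_embedding": speaker_embedding,
        "gpt_cond_latent": gpt_cond_latent,
    }

def stream_tts(payload, chunk_size=4096):
    """POST the payload and yield raw audio chunks as they arrive."""
    req = urllib.request.Request(
        f"{SERVER}/tts_stream",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        while chunk := resp.read(chunk_size):
            yield chunk
```

A caller would write the yielded chunks to a file or an audio device as they arrive, e.g. `for chunk in stream_tts(payload): out.write(chunk)` — that incremental delivery is the point of the streaming server over a plain one-shot TTS endpoint.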