LOCAL LOW LATENCY SPEECH TO SPEECH
- git clone https://github.com/Zbrooklyn/Local-Low-Latency-Speech-to-Speech.git
- cd Local-Low-Latency-Speech-to-Speech
- Run the following commands to set up the environment (an optional install sanity check is sketched after this list):
- conda create -n openvoice python=3.9
- conda activate openvoice
- conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
- pip install -r requirements.txt
- Create a folder named "checkpoints"
- Download the checkpoints from https://myshell-public-repo-hosting.s3.amazonaws.com/checkpoints_1226.zip
- Unzip the archive into the "checkpoints" folder (base speakers + converter); a download/unzip sketch is included after this list
- Install LM Studio (https://lmstudio.ai/)
- In LM Studio, download TheBloke's Dolphin 2.2.1 Mistral 7B (https://huggingface.co/TheBloke/dolphin-2.2.1-mistral-7B-AWQ)
- Set up the Local Server in LM Studio (walkthrough: https://youtu.be/IgcBuXFE6QE)
- Start the server; a quick connectivity test is sketched after this list
- Set PATH / PATHS to your reference voice file(s) (mp3) on line 204; a hypothetical example follows this list
- Run: python talk-cpu.py
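
Optional sanity check for the environment setup above. This is not required by the project; it only confirms that PyTorch imported correctly and reports whether CUDA is visible (CPU-only is fine, since the pipeline runs via talk-cpu.py).

```python
# sanity_check.py - optional check that the conda environment is usable
import torch

print("PyTorch version:", torch.__version__)          # expect 1.13.1
print("CUDA available:", torch.cuda.is_available())   # False is fine for talk-cpu.py
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```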
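If you prefer to script the checkpoint download instead of doing it by hand, here is a minimal sketch. It assumes the zip from the URL above unpacks into a checkpoints/ layout containing the base speakers and converter; adjust the paths if the archive layout differs.

```python
# get_checkpoints.py - download and unpack the OpenVoice checkpoints
import urllib.request
import zipfile
from pathlib import Path

URL = "https://myshell-public-repo-hosting.s3.amazonaws.com/checkpoints_1226.zip"
ZIP_PATH = Path("checkpoints_1226.zip")

if not ZIP_PATH.exists():
    print("Downloading", URL)
    urllib.request.urlretrieve(URL, ZIP_PATH)

# Assumption: the archive contains a top-level checkpoints/ folder.
with zipfile.ZipFile(ZIP_PATH) as zf:
    zf.extractall(".")

if Path("checkpoints").exists():
    print("checkpoints/ contains:", sorted(p.name for p in Path("checkpoints").iterdir()))
else:
    print("Archive extracted; check where the base speakers and converter landed.")
```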
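Once the LM Studio local server is running, you can confirm it is reachable before starting the pipeline. LM Studio exposes an OpenAI-compatible HTTP API; the sketch below assumes the default address http://localhost:1234/v1, so change it if you configured a different port.

```python
# test_lmstudio.py - confirm the LM Studio local server answers chat requests
import requests

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default; adjust if you changed the port

payload = {
    "model": "local-model",  # LM Studio serves whichever model is loaded in the UI
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
    "temperature": 0.7,
}

resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```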
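The exact contents of line 204 are not reproduced here; as a hypothetical illustration only, the reference-voice setting will look something like the assignment below. The names PATH / PATHS come from the step above, and the file locations are placeholders for your own mp3 file(s).

```python
# Hypothetical illustration of the reference-voice setting (around line 204).
# The actual variable layout in the script may differ; point it at your own .mp3 file(s).
PATH = "voices/my_reference_voice.mp3"       # single reference voice
PATHS = ["voices/my_reference_voice.mp3"]    # or a list, if the script expects several
```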