README: EN | 中文
EmotiVoice is a powerful and modern open-source text-to-speech engine. It speaks both English and Chinese, with over 2000 different voices. Its most prominent feature is emotional synthesis, which lets you create speech with a wide range of emotions, including happy, excited, sad, and angry.
An easy-to-use web interface is provided. There is also a scripting interface for batch generation of results.
Here are a few samples that EmotiVoice generates:
- emotivoice_intro_cn_im.1.mp4
- emotivoice_intro_en_im.1.mp4
- emotivoice_intro_en_fun_im.1.mp4
The easiest way to try EmotiVoice is by running the Docker image. You need a machine with an NVIDIA GPU. If you have not done so already, set up the NVIDIA Container Toolkit by following the instructions for Linux or Windows WSL2. Then EmotiVoice can be run with:
docker run -dp 127.0.0.1:8501:8501 syq163/emoti-voice:latest
Now open your browser and navigate to http://localhost:8501 to start using EmotiVoice's powerful TTS capabilities.
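If the container fails to start or cannot see the GPU, you can first check that the NVIDIA Container Toolkit is working. The base image tag below is just an assumption; any CUDA-enabled image you already have locally will do:
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi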
For a full installation without Docker, create a conda environment and install the dependencies:
conda create -n EmotiVoice python=3.8 -y
conda activate EmotiVoice
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
pip install numpy numba scipy transformers==4.26.1 soundfile yacs g2p_en jieba pypinyin
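As an optional sanity check (not part of the official setup), you can confirm that PyTorch was installed with working CUDA support before going further:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"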
git lfs install
git lfs clone https://huggingface.co/WangZeJun/simbert-base-chinese WangZeJun/simbert-base-chinese
or, you can run:
mkdir -p WangZeJun/simbert-base-chinese
wget https://huggingface.co/WangZeJun/simbert-base-chinese/resolve/main/config.json -P WangZeJun/simbert-base-chinese
wget https://huggingface.co/WangZeJun/simbert-base-chinese/resolve/main/pytorch_model.bin -P WangZeJun/simbert-base-chinese
wget https://huggingface.co/WangZeJun/simbert-base-chinese/resolve/main/vocab.txt -P WangZeJun/simbert-base-chinese
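Either way, you can verify the download by listing the directory and loading the tokenizer with transformers; this is just a suggested check and assumes the config identifies a standard BERT tokenizer:
ls WangZeJun/simbert-base-chinese
python -c "from transformers import AutoTokenizer; AutoTokenizer.from_pretrained('WangZeJun/simbert-base-chinese'); print('tokenizer ok')"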
- You have to download the pretrained models and run:
mkdir -p outputs/style_encoder/ckpt
mkdir -p outputs/prompt_tts_open_source_joint/ckpt
- Place g_* and do_* under outputs/prompt_tts_open_source_joint/ckpt, and put checkpoint_* in outputs/style_encoder/ckpt.
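After copying, a quick listing should show the checkpoints in place; the exact file names depend on the release you downloaded (g_00140000 is the one referenced by the inference command below):
ls outputs/style_encoder/ckpt
ls outputs/prompt_tts_open_source_joint/ckpt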
- The inference text format is <speaker>|<style_prompt/emotion_prompt/content>|<phoneme>|<content>.
- Inference text example:
Maria_Kasper|Happy|<sos/eos> [IH0] [M] [AA1] [T] engsp4 [V] [OY1] [S] engsp4 [AH0] engsp1 [M] [AH1] [L] [T] [IY0] engsp4 [V] [OY1] [S] engsp1 [AE1] [N] [D] engsp1 [P] [R] [AA1] [M] [P] [T] engsp4 [K] [AH0] [N] [T] [R] [OW1] [L] [D] engsp1 [T] [IY1] engsp4 [T] [IY1] engsp4 [EH1] [S] engsp1 [EH1] [N] [JH] [AH0] [N] . <sos/eos>|Emoti-Voice - a Multi-Voice and Prompt-Controlled T-T-S Engine
- You can get phonemes by running:
python frontend_en.py data/my_text.txt > data/my_text_for_tts.txt
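If you need to assemble an inference line by hand, for example to change the speaker or the emotion prompt, the four fields are simply joined with |. The snippet below is only an illustration: the output file name is arbitrary, and the phoneme string should normally come from the frontend_en.py output rather than be written manually:
printf '%s|%s|%s|%s\n' \
  "Maria_Kasper" "Happy" \
  "<sos/eos> [HH] [AH0] [L] [OW1] . <sos/eos>" \
  "Hello" > data/my_manual_input.txt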
- Then run:
TEXT=data/inference/text
python inference_am_vocoder_joint.py \
--logdir prompt_tts_open_source_joint \
--config_folder config/joint \
--checkpoint g_00140000 \
--test_file $TEXT
The synthesized speech is under outputs/prompt_tts_open_source_joint/test_audio.
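You can list that directory to see the generated files, and optionally read one back with soundfile (already installed above) to confirm it decodes; the .wav extension is an assumption about the output format:
ls outputs/prompt_tts_open_source_joint/test_audio
python -c "import glob, soundfile as sf; f = sorted(glob.glob('outputs/prompt_tts_open_source_joint/test_audio/*.wav'))[0]; audio, sr = sf.read(f); print(f, sr, audio.shape)"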
- Or if you just want to use the interactive TTS demo page, run:
pip install streamlit
streamlit run demo_page.py
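Streamlit chooses its own port by default; if you want the demo on a fixed port or reachable from other machines, the standard Streamlit server flags can be passed, for example:
streamlit run demo_page.py --server.port 8501 --server.address 0.0.0.0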
Training code: to be released.
- Our future plans can be found in the ROADMAP file.
- The current implementation focuses on emotion/style control via prompts. It uses only pitch, speed, energy, and emotion as style factors, and does not use gender. However, it would not be complicated to adapt it for style/timbre control.
- Suggestions are welcome. You can file issues or mention @ydopensource on Twitter. You are also welcome to scan the personal QR code below to join the WeChat group.
- PromptTTS. The PromptTTS paper is a key basis of this project.
- LibriTTS. The LibriTTS dataset is used in the training of EmotiVoice.
- HiFiTTS. The HiFi TTS dataset is used in the training of EmotiVoice.
- ESPnet.
- WeTTS
- HiFi-GAN
- Transformers
- tacotron
- KAN-TTS
- StyleTTS
- Simbert
EmotiVoice is provided under the Apache-2.0 License - see the LICENSE file for details.
The interactive page is provided under the terms described in the User Agreement file.