
EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine


README: EN | 中文


EmotiVoice is a powerful and modern open-source text-to-speech engine that is available to you at no cost. EmotiVoice speaks both English and Chinese, with over 2000 different voices (refer to the List of Voices for details). Its most prominent feature is emotional synthesis, allowing you to create speech with a wide range of emotions, including happy, excited, sad, angry, and others.

An easy-to-use web interface is provided. There is also a scripting interface for batch generation of results.

Here are a few samples that EmotiVoice generates:

  • emotivoice_intro_cn_im.1.mp4
  • emotivoice_intro_en_im.1.mp4
  • emotivoice_intro_en_fun_im.1.mp4

Demo

A demo is hosted on Replicate: EmotiVoice.

Hot News

Features under development

  • Support for more languages, such as Japanese and Korean. #19 #22

EmotiVoice prioritizes community input and user requests. We welcome your feedback!

Quickstart

EmotiVoice Docker image

The easiest way to try EmotiVoice is by running the Docker image. You need a machine with an NVIDIA GPU. If you have not done so already, set up the NVIDIA Container Toolkit by following the instructions for Linux or Windows WSL2. Then EmotiVoice can be run with:

docker run -dp 127.0.0.1:8501:8501 syq163/emoti-voice:latest

The Docker image was updated on January 4th, 2024. If you have an older version, please update it by running the following commands:

docker pull syq163/emoti-voice:latest
docker run -dp 127.0.0.1:8501:8501 -p 127.0.0.1:8000:8000 syq163/emoti-voice:latest

Now open your browser and navigate to http://localhost:8501 to start using EmotiVoice's powerful TTS capabilities.

Starting from this version, the OpenAI-compatible TTS API is accessible via http://localhost:8000/.

Full installation

conda create -n EmotiVoice python=3.8 -y
conda activate EmotiVoice
pip install torch torchaudio
pip install numpy numba scipy transformers soundfile yacs g2p_en jieba pypinyin pypinyin_dict

Prepare model files

We recommend that users refer to the wiki page How to download the pretrained model files if they encounter any issues.

git lfs install
git lfs clone https://huggingface.co/WangZeJun/simbert-base-chinese WangZeJun/simbert-base-chinese

or, you can run:

git clone https://www.modelscope.cn/syq163/WangZeJun.git

Inference

  1. You can download the pretrained models by simply running the following command:
git clone https://www.modelscope.cn/syq163/outputs.git
  2. The inference text format is <speaker>|<style_prompt/emotion_prompt/content>|<phoneme>|<content> (a sketch for assembling such lines programmatically is shown after this list).
  • Inference text example: 8051|Happy|<sos/eos> [IH0] [M] [AA1] [T] engsp4 [V] [OY1] [S] engsp4 [AH0] engsp1 [M] [AH1] [L] [T] [IY0] engsp4 [V] [OY1] [S] engsp1 [AE1] [N] [D] engsp1 [P] [R] [AA1] [M] [P] [T] engsp4 [K] [AH0] [N] [T] [R] [OW1] [L] [D] engsp1 [T] [IY1] engsp4 [T] [IY1] engsp4 [EH1] [S] engsp1 [EH1] [N] [JH] [AH0] [N] . <sos/eos>|Emoti-Voice - a Multi-Voice and Prompt-Controlled T-T-S Engine.
  3. You can get the phonemes by running python frontend.py data/my_text.txt > data/my_text_for_tts.txt.

  4. Then run:

TEXT=data/inference/text
python inference_am_vocoder_joint.py \
--logdir prompt_tts_open_source_joint \
--config_folder config/joint \
--checkpoint g_00140000 \
--test_file $TEXT

The synthesized speech is under outputs/prompt_tts_open_source_joint/test_audio.

  5. Or, if you just want to use the interactive TTS demo page, run:
pip install streamlit
streamlit run demo_page.py
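
For batch scripting, the inference lines in the format above can be assembled programmatically. Below is a minimal sketch; the helper name and example values are illustrative, and in practice the phoneme field should come from frontend.py rather than being written by hand.

# build_inference_text.py -- illustrative helper, not part of the EmotiVoice repository.
# Assembles lines in the format <speaker>|<style_prompt/emotion_prompt/content>|<phoneme>|<content>
# expected by inference_am_vocoder_joint.py.

def make_inference_line(speaker: str, prompt: str, phonemes: str, content: str) -> str:
    """Join the four fields with '|' as described above."""
    return f"{speaker}|{prompt}|{phonemes}|{content}"

if __name__ == "__main__":
    # Placeholder values; real phonemes should be produced by `python frontend.py`.
    line = make_inference_line(
        speaker="8051",
        prompt="Happy",
        phonemes="<sos/eos> [HH] [AH0] [L] [OW1] . <sos/eos>",
        content="Hello.",
    )
    with open("data/inference/text", "a", encoding="utf-8") as f:
        f.write(line + "\n")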

OpenAI-compatible-TTS API

Thanks to @lewangdev for adding an OpenAI-compatible TTS API (#60). To set it up, use the following commands:

pip install fastapi pydub uvicorn[standard] pyrubberband
uvicorn openaiapi:app --reload
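
Once the server is running, you can test it from Python. The sketch below assumes the service mirrors OpenAI's POST /v1/audio/speech route and accepts an EmotiVoice speaker id as the voice; the exact route, payload fields, and voice identifiers may differ in your deployment, so adjust them to match the local openaiapi implementation.

# tts_client.py -- illustrative client for the locally running OpenAI-compatible TTS API.
# Assumes an OpenAI-style POST /v1/audio/speech endpoint; verify the route and payload
# against your local setup before relying on it.
import requests

payload = {
    "model": "emoti-voice",             # placeholder model name
    "input": "Hello from EmotiVoice.",  # text to synthesize
    "voice": "8051",                    # an EmotiVoice speaker id (see List of Voices)
    "response_format": "mp3",
}

resp = requests.post("http://localhost:8000/v1/audio/speech", json=payload, timeout=60)
resp.raise_for_status()

with open("hello.mp3", "wb") as f:
    f.write(resp.content)  # the response body is the synthesized audio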

Wiki page

You may find more information from our wiki page.

Training

Voice cloning with your personal data was released on December 13th, 2023.

Training a new language model

Training a new language model involves a considerable amount of resources, including computing power, time, and a large, diverse dataset. If you're interested in training a new language model, particularly using OpenAI's GPT-3 architecture, here are the general steps and considerations:

Access to GPT Codebase:

OpenAI has not released the training code for GPT-3, but they have released the codebase for GPT-2. You can find it on OpenAI's GitHub repository.

Compute Resources:

Training a large language model like GPT-3 requires substantial computational resources, including powerful GPUs or TPUs and large-scale distributed computing.

Dataset:

The size of your dataset is crucial. GPT-3 was trained on a massive and diverse dataset comprising a significant portion of the internet. The exact size is not disclosed, but it's on the order of hundreds of gigabytes.

Data Preprocessing:

You'll need to preprocess your dataset, tokenizing and formatting it appropriately for training. GPT models often use byte-pair encoding or other tokenization techniques.
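
As an illustration of this step only (not EmotiVoice-specific code), the transformers package already listed in the installation above ships a byte-pair-encoding tokenizer for GPT-2:

# Illustrative sketch of byte-pair encoding with the GPT-2 tokenizer from transformers.
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # downloads the BPE vocab on first use
text = "EmotiVoice is a multi-voice and prompt-controlled TTS engine."

token_ids = tokenizer.encode(text)                   # integer ids fed to the model during training
tokens = tokenizer.convert_ids_to_tokens(token_ids)  # the underlying subword pieces

print(tokens)
print(token_ids)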

Training Parameters:

Configuring training parameters, such as the number of layers, hidden units, and other hyperparameters, is a crucial step. These choices can impact the model's performance and training time.

Training Time:

Training large language models takes a substantial amount of time. GPT-3 was trained for weeks on powerful hardware. The exact duration will depend on the size of your model and the dataset.

Evaluation and Fine-Tuning:

After the initial training, you may need to evaluate your model's performance and fine-tune it on specific tasks or domains if necessary.

Ethical Considerations:

Ensure that your use of the language model aligns with ethical standards, and be aware of potential biases in your training data.

Remember that training a model like GPT-3 requires significant expertise in machine learning, access to substantial computational resources, and the ability to handle large datasets. If you don't have these resources, consider exploring pre-trained models or collaborating with research institutions that specialize in natural language processing.

Roadmap & Future work

  • Our future plan can be found in the ROADMAP file.
  • The current implementation focuses on emotion/style control via prompts. It uses only pitch, speed, energy, and emotion as style factors, and does not use gender; however, it is not complicated to change it to style/timbre control.
  • Suggestions are welcome. You can file issues or reach out to @ydopensource on Twitter.

WeChat group

You are welcome to scan the QR code below to join the WeChat group.

[QR code image]

Credits

License

EmotiVoice is provided under the Apache-2.0 License - see the LICENSE file for details.

The interactive page is provided under the User Agreement file.