A... vits2_pytorch and MB-iSTFT-VITS hybrid... Gods, an abomination! Who created this atrocity?
This is an experimental build, so performance is not guaranteed.
According to shigabeev's experiment, it can now dare to claim the word SOTA for its performance (at least for Russian).
- Python >= 3.8
- CUDA
- PyTorch 1.13.1 (+cu117)
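The pins above can be sanity-checked before installing anything else; a minimal sketch (the helper name is ours, and a similar comparison can cover `torch.__version__` once PyTorch is installed):

```python
import sys

def meets_minimum(version, minimum):
    """Numeric comparison of dotted version strings, e.g. '3.10' >= '3.8'."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(version) >= as_tuple(minimum)

# Check the interpreter against the Python pin above.
assert meets_minimum("%d.%d" % sys.version_info[:2], "3.8"), "Python >= 3.8 required"
```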
- Clone this repository
- Install Python requirements: `pip install -r requirements.txt`

  If you want to proceed with the cleaned texts in the filelists, you may need to install espeak first: `apt-get install espeak`
- Prepare datasets & configuration
  - Download and extract a dataset (for example, the LJ Speech dataset), then rename or create a link to the dataset folder: `ln -s /path/to/LJSpeech-1.1/wavs DUMMY1`
  - wav files (22050 Hz, mono, 16-bit PCM)
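That format can be verified with the stdlib `wave` module before training; a minimal sketch (the helper and the demo filename are ours, not part of the repo):

```python
import wave

def check_wav(path):
    """Return True if the file is 22050 Hz, mono, 16-bit PCM."""
    with wave.open(path, "rb") as w:
        return (w.getframerate(), w.getnchannels(), w.getsampwidth()) == (22050, 1, 2)

# Demo with a short silent file (placeholder, not part of any dataset).
with wave.open("check_demo.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 2 bytes per sample = 16-bit PCM
    w.setframerate(22050)
    w.writeframes(b"\x00\x00" * 2205)  # 0.1 s of silence

assert check_wav("check_demo.wav")
```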
  - Prepare text files, one for training and one for validation, and split your dataset between them. The validation filelist should contain fewer entries than the training one, and none of its entries should appear in the training file.
    - Single speaker: `wavfile_path|transcript`
    - Multi speaker: `wavfile_path|speaker_id|transcript`
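Splitting a dataset into these two filelists can be scripted; a minimal sketch (the helper name, paths, and transcripts below are placeholders, not from the repo):

```python
import random

def split_filelist(lines, val_size, seed=1234):
    """Shuffle pipe-delimited filelist lines and split into (train, val)."""
    lines = list(lines)
    random.Random(seed).shuffle(lines)
    return lines[val_size:], lines[:val_size]

lines = [
    "DUMMY1/LJ001-0001.wav|printing, in the only sense with which we are at present concerned",
    "DUMMY1/LJ001-0002.wav|in being comparatively modern.",
    "DUMMY1/LJ001-0003.wav|produced the block books,",
]
train, val = split_filelist(lines, val_size=1)
# Validation entries are fewer than, and disjoint from, the training entries.
assert len(val) == 1 and set(train).isdisjoint(val)
```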
  - Run preprocessing with a cleaner of your choice. You may change the symbols as well.
    - Single speaker:
      ```sh
      python preprocess.py --text_index 1 --filelists PATH_TO_train.txt --text_cleaners CLEANER_NAME
      python preprocess.py --text_index 1 --filelists PATH_TO_val.txt --text_cleaners CLEANER_NAME
      ```
    - Multi speaker:
      ```sh
      python preprocess.py --text_index 2 --filelists PATH_TO_train.txt --text_cleaners CLEANER_NAME
      python preprocess.py --text_index 2 --filelists PATH_TO_val.txt --text_cleaners CLEANER_NAME
      ```

    The resulting cleaned filelists will look like the provided single- and multi-speaker examples.
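In essence, preprocessing rewrites the text column of each filelist line through the chosen cleaner (in VITS-style repos the cleaners typically live in text/cleaners.py, and the output is written to a new filelist). A simplified sketch, with `str.lower` standing in for a real cleaner:

```python
def clean_filelist(lines, text_index, cleaner):
    """Apply `cleaner` to column `text_index` of each pipe-delimited line."""
    cleaned = []
    for line in lines:
        cols = line.rstrip("\n").split("|")
        cols[text_index] = cleaner(cols[text_index])
        cleaned.append("|".join(cols))
    return cleaned

# Stand-in cleaner for illustration only; real cleaners do phonemization etc.
assert clean_filelist(["a.wav|HELLO World"], 1, str.lower) == ["a.wav|hello world"]
```

This is also why `--text_index` differs between single speaker (column 1) and multi speaker (column 2): the transcript sits one column later when a speaker_id is present.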
- Build Monotonic Alignment Search.
  ```sh
  # Cython-version Monotonic Alignment Search
  cd monotonic_align
  mkdir monotonic_align
  python setup.py build_ext --inplace
  ```
- Edit configurations based on the files and cleaners you used.

Setting the json file in configs:

| Model | How to set up the json file in configs | Sample json configuration |
|---|---|---|
| iSTFT-VITS2 | `"istft_vits": true, "upsample_rates": [8,8],` | istft_vits2_base.json |
| MB-iSTFT-VITS2 | `"subbands": 4, "mb_istft_vits": true, "upsample_rates": [4,4],` | mb_istft_vits2_base.json |
| MS-iSTFT-VITS2 | `"subbands": 4, "ms_istft_vits": true, "upsample_rates": [4,4],` | ms_istft_vits2_base.json |
| Mini-iSTFT-VITS2 | `"istft_vits": true, "upsample_rates": [8,8], "hidden_channels": 96, "n_layers": 3,` | mini_istft_vits2_base.json |
| Mini-MB-iSTFT-VITS2 | `"subbands": 4, "mb_istft_vits": true, "upsample_rates": [4,4], "hidden_channels": 96, "n_layers": 3, "upsample_initial_channel": 256,` | mini_mb_istft_vits2_base.json |
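For example, for MB-iSTFT-VITS2 the flags from the table would appear in the config json (in VITS-style configs these keys sit under the `"model"` section; this is a fragment only, not a complete file):

```json
{
  "model": {
    "subbands": 4,
    "mb_istft_vits": true,
    "upsample_rates": [4, 4]
  }
}
```

The corresponding sample file, mb_istft_vits2_base.json in configs, already contains a complete working set of keys.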
Training example (use train_ms.py for multi speaker):

```sh
python train.py -c configs/mb_istft_vits2_base.json -m models/test
```