- VITS
- 1.0
- An end-to-end speech-synthesis model trained on the constructed dataset, applying the Stochastic Duration Predictor algorithm to distinguish speakers' voices and a VAE algorithm that bridges the Encoder, which analyzes speech, and the Decoder, which generates it
- VITS at training
  - inputs: (x, x_lengths, y, y_lengths, waveform, aux_input={'d_vectors': None, 'language_ids': None, 'speaker_ids': None})
  - x: [B, Tseq]
  - x_lengths: [B]
  - y: [B, C, Tspec]
  - y_lengths: [B]
  - waveform: [B, 1, Twav]
  - d_vectors: [B, C, 1]
  - speaker_ids: [B]
  - language_ids: [B]
- VITS at inference
  - inputs: (x, aux_input={'d_vectors': None, 'durations': None, 'language_ids': None, 'speaker_ids': None, 'x_lengths': None})
  - outputs:
    - model_outputs: [B, 1, Twav]
    - alignments: [B, Tseq, Tdec]
    - z: [B, C, Tdec]
    - z_p: [B, C, Tdec]
    - m_p: [B, C, Tdec]
    - logs_p: [B, C, Tdec]
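The tensor shapes above can be sanity-checked with a short sketch. All concrete sizes (B, C, Tseq, Tspec) are illustrative assumptions, not values from this document; only the shape layout follows the listing above.

```python
import numpy as np

# Illustrative sizes (assumptions): batch, latent channels, token/frame lengths
B, C = 2, 192          # batch size; latent channels
Tseq, Tspec = 50, 300  # text-token count; spectrogram frame count
Twav = Tspec * 256     # waveform samples = frames * hop_length (256)

# Training-time inputs, shaped as documented above
x = np.zeros((B, Tseq), dtype=np.int64)   # input token IDs
x_lengths = np.full(B, Tseq)              # valid token count per item
y = np.zeros((B, 513, Tspec))             # linear spectrogram (filter_length//2 + 1 bins)
waveform = np.zeros((B, 1, Twav))         # raw audio
speaker_ids = np.zeros(B, dtype=np.int64)

# Inference-time outputs, shaped as documented above (here Tdec == Tspec)
model_outputs = np.zeros((B, 1, Twav))
alignments = np.zeros((B, Tseq, Tspec))

print(x.shape, y.shape, waveform.shape, model_outputs.shape, alignments.shape)
```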
| WAV | TXT |
|---|---|
| 381,456 files | 381,456 files |

- WAV format: mono, 22050 Hz
- TXT format: audio path|speaker ID|script
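The pipe-delimited TXT format above can be parsed in a few lines. The sample line below is a hypothetical illustration, not an entry from the actual dataset.

```python
def parse_filelist_line(line: str):
    """Split one 'audio path|speaker ID|script' line into its three fields."""
    # maxsplit=2 keeps any '|' inside the script text intact
    path, speaker_id, script = line.rstrip("\n").split("|", maxsplit=2)
    return path, int(speaker_id), script

# Hypothetical example line in the documented format
path, spk, text = parse_filelist_line("wavs/0001.wav|3|안녕하세요.")
print(path, spk, text)
```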
- train:

```json
{
  "log_interval": 200,
  "eval_interval": 1000,
  "seed": 1234,
  "epochs": 10000,
  "learning_rate": 2e-4,
  "betas": [0.8, 0.99],
  "eps": 1e-9,
  "batch_size": 32,
  "fp16_run": false,
  "lr_decay": 0.999875,
  "segment_size": 8192,
  "init_lr_ratio": 1,
  "warmup_epochs": 0,
  "c_mel": 45,
  "c_kl": 1.0
}
```
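The lr_decay value implies an exponential learning-rate schedule. Assuming the decay is applied once per epoch (as in the VITS reference training loop, an assumption about this setup), the learning rate after n epochs is 2e-4 * 0.999875**n:

```python
base_lr, lr_decay = 2e-4, 0.999875  # values from the train config above

def lr_at_epoch(n: int) -> float:
    # Exponential decay applied once per epoch
    return base_lr * lr_decay ** n

print(lr_at_epoch(0))     # initial learning rate
print(lr_at_epoch(1000))  # rate after 1000 epochs of decay
```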
- data:

```json
{
  "training_files": "filelists/nia22_audio_text_train_filelist.txt.cleaned",
  "validation_files": "filelists/nia22_audio_text_val_filelist.txt.cleaned",
  "text_cleaners": ["korean_cleaners"],
  "max_wav_value": 32768.0,
  "sampling_rate": 22050,
  "filter_length": 1024,
  "hop_length": 256,
  "win_length": 1024,
  "n_mel_channels": 80,
  "mel_fmin": 0.0,
  "mel_fmax": null,
  "add_blank": true,
  "n_speakers": 100,
  "cleaned_text": true
}
```
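The audio parameters above fix how many spectrogram frames a clip yields: with hop_length = 256 at 22050 Hz, one second of audio produces roughly one frame per hop. A sketch (the "+ 1" assumes a center-padded STFT, as in the reference implementation):

```python
sampling_rate, hop_length, filter_length = 22050, 256, 1024  # from the data config

def n_frames(n_samples: int) -> int:
    # Center-padded STFT: one frame per hop, plus one
    return n_samples // hop_length + 1

n_freq_bins = filter_length // 2 + 1  # linear-spectrogram frequency bins
print(n_frames(sampling_rate), n_freq_bins)
```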
- model:

```json
{
  "inter_channels": 192,
  "hidden_channels": 192,
  "filter_channels": 768,
  "n_heads": 2,
  "n_layers": 6,
  "kernel_size": 3,
  "p_dropout": 0.1,
  "resblock": "1",
  "resblock_kernel_sizes": [3, 7, 11],
  "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
  "upsample_rates": [8, 8, 2, 2],
  "upsample_initial_channel": 512,
  "upsample_kernel_sizes": [16, 16, 4, 4],
  "n_layers_q": 3,
  "use_spectral_norm": false,
  "gin_channels": 256
}
```
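A useful consistency check on this config: the decoder's upsample_rates must multiply out to hop_length, so that each spectrogram frame expands to exactly one hop of audio samples.

```python
from math import prod

upsample_rates = [8, 8, 2, 2]  # from the model config above
hop_length = 256               # from the data config above

total_upsampling = prod(upsample_rates)  # 8 * 8 * 2 * 2
# If this fails, generated audio length will not match the frame count
assert total_upsampling == hop_length, "decoder upsampling must equal hop_length"
print(total_upsampling)
```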
- Target: MOS (Mean Opinion Score) of 3.5 or higher