furukawa-ai/deeplearning_papers

Parallel WaveNet: Fast High-Fidelity Speech Synthesis


msrks commented

A WaveNet whose inference can be parallelized: the slow, sample-by-sample autoregressive WaveNet is replaced at inference time by a parallel feed-forward network ("parallel WaveNet").
The parallel WaveNet takes white noise as input and is trained to reproduce the output distribution of the original WaveNet.
Training uses IAF (Inverse Autoregressive Flows), a variational-inference technique (which I could not fully follow), in a distillation setup with the trained WaveNet as the teacher. Concretely, the parallel WaveNet is trained to minimize, under entropy regularization, the KL divergence to the teacher WaveNet. Training takes longer as a result, but inference becomes parallelizable and dramatically faster.
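As a rough illustration of the distillation objective described above, here is a minimal PyTorch-style sketch. The `student_iaf` and `teacher_wavenet` objects and their interfaces are hypothetical stand-ins, not the authors' code, and the KL term KL(P_student || P_teacher) = H(P_S, P_T) - H(P_S) is estimated by Monte Carlo from the student's own samples, whereas the paper derives the student entropy analytically from the IAF.

```python
# Minimal sketch of Probability Density Distillation.
# `student_iaf` and `teacher_wavenet` are hypothetical models, not the paper's code.
import torch

def distillation_loss(student_iaf, teacher_wavenet, noise, conditioning):
    # The student (an inverse autoregressive flow) maps white noise to an
    # audio waveform in one parallel pass and reports its own log-probability
    # for that sample.
    audio, student_log_prob = student_iaf(noise, conditioning)

    # The frozen teacher WaveNet scores the student's sample. Its weights are
    # assumed to have requires_grad=False, but gradients still flow through
    # `audio` back into the student.
    teacher_log_prob = teacher_wavenet.log_prob(audio, conditioning)

    # KL(P_student || P_teacher) = H(P_S, P_T) - H(P_S): cross-entropy against
    # the teacher minus the student's own entropy, both estimated here by
    # Monte Carlo over the sampled audio.
    cross_entropy = -teacher_log_prob.mean()
    student_entropy = -student_log_prob.mean()
    return cross_entropy - student_entropy
```

Minimizing the cross-entropy term alone would let the student collapse onto a single high-likelihood output; keeping the entropy term (the "entropy regularization" mentioned above) penalizes that collapse.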


https://arxiv.org/pdf/1711.10433.pdf

The recently-developed WaveNet architecture is the current state of the art in realistic speech synthesis, consistently rated as more natural sounding for many different languages than any previous system. However, because WaveNet relies on sequential generation of one audio sample at a time, it is poorly suited to today's massively parallel computers, and therefore hard to deploy in a real-time production setting. This paper introduces Probability Density Distillation, a new method for training a parallel feed-forward network from a trained WaveNet with no significant difference in quality. The resulting system is capable of generating high-fidelity speech samples at more than 20 times faster than real-time, and is deployed online by Google Assistant, including serving multiple English and Japanese voices.

https://deepmind.com/blog/high-fidelity-speech-synthesis-wavenet/