


MT-KD

"Decoding Knowledge Transfer for Neural Text-to-Speech Training"

Authors: Rui Liu, Berrak Sisman, Guanglai Gao, and Haizhou Li

This paper was accepted by IEEE/ACM Transactions on Audio, Speech, and Language Processing (IEEE/ACM TASLP) 2022.

Speech samples

Speech samples are available on the demo page.

Citing

To cite this work:

@ARTICLE{9767637,
  author={Liu, Rui and Sisman, Berrak and Gao, Guanglai and Li, Haizhou},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, 
  title={Decoding Knowledge Transfer for Neural Text-to-Speech Training}, 
  year={2022},
  volume={30},
  number={},
  pages={1789-1802},
  keywords={Decoding;Training;Speech processing;Knowledge transfer;Data models;Computational modeling;Adversarial machine learning;Autoregressive model;end-to-end TTS;exposure bias;knowledge distillation;knowledge transfer},
  doi={10.1109/TASLP.2022.3171974}}