List of speech synthesis papers (-> more papers <-). You are welcome to recommend more awesome papers.
Repositories for collecting awesome speech papers:
- awesome-speech-recognition-speech-synthesis-papers (from ponyzhang)
- awesome-python-scientific-audio (from Fabian-Robert Stöter)
- TTS-papers (from Eren Gölge)
- awesome-speech-enhancement (from Vincent Liu)
- speech-recognition-papers (from Xingchen Song)
- awesome-tts-samples (from Seung-won Park)
- awesome-speech-translation (from dqqcasia)
What is the meaning of '⭐'? I add '⭐' to papers whose citation count is over 50 (only in Acoustic Model, Vocoder, and TTS towards Stylization). Beginners can read these papers first to gain basic knowledge of Deep-Learning-based TTS models (#1).
## Text Front-end
- Pre-trained Text Representations for Improving Front-End Text Processing in Mandarin Text-to-Speech Synthesis (Interspeech 2019)
- A unified sequence-to-sequence front-end model for Mandarin text-to-speech synthesis (ICASSP 2020)
- A hybrid text normalization system using multi-head self-attention for mandarin (ICASSP 2020)
- Unified Mandarin TTS Front-end Based on Distilled BERT Model (2021-01)
## Acoustic Model
### Autoregressive Model
- Tacotron V1⭐: Tacotron: Towards End-to-End Speech Synthesis (Interspeech 2017)
- Tacotron V2⭐: Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions (ICASSP 2018)
- Deep Voice V1⭐: Deep Voice: Real-time Neural Text-to-Speech (ICML 2017)
- Deep Voice V2⭐: Deep Voice 2: Multi-Speaker Neural Text-to-Speech (NeurIPS 2017)
- Deep Voice V3⭐: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning (ICLR 2018)
- Transformer-TTS⭐: Neural Speech Synthesis with Transformer Network (AAAI 2019)
- DurIAN: DurIAN: Duration Informed Attention Network For Multimodal Synthesis (2019)
- Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis (ICASSP 2020)
- Flowtron (flow based): Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis (2020)
- Non-Attentive Tacotron: Robust and Controllable Neural TTS Synthesis Including Unsupervised Duration Modeling (under review ICLR 2021)
- RobuTrans (towards robust): RobuTrans: A Robust Transformer-Based Text-to-Speech Model (AAAI 2020)
- DeviceTTS: DeviceTTS: A Small-Footprint, Fast, Stable Network for On-Device Text-to-Speech (2020-10)
### Non-Autoregressive Model
- ParaNet: Non-Autoregressive Neural Text-to-Speech (ICML 2020)
- FastSpeech⭐: FastSpeech: Fast, Robust and Controllable Text to Speech (NeurIPS 2019)
- JDI-T: JDI-T: Jointly trained Duration Informed Transformer for Text-To-Speech without Explicit Alignment (2020)
- EATS: End-to-End Adversarial Text-to-Speech (2020)
- FastSpeech 2: FastSpeech 2: Fast and High-Quality End-to-End Text to Speech (2020)
- FastPitch: FastPitch: Parallel Text-to-speech with Pitch Prediction (2020)
- Glow-TTS (flow based, Monotonic Attention): Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search (NeurIPS 2020)
- Flow-TTS (flow based): Flow-TTS: A Non-Autoregressive Network for Text to Speech Based on Flow (ICASSP 2020)
- SpeedySpeech: SpeedySpeech: Efficient Neural Speech Synthesis (Interspeech 2020)
- Parallel Tacotron: Parallel Tacotron: Non-Autoregressive and Controllable TTS (2020)
- Wave-Tacotron: Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis (2020-11)
### Alignment Study
- Monotonic Attention⭐: Online and Linear-Time Attention by Enforcing Monotonic Alignments (ICML 2017)
- Monotonic Chunkwise Attention⭐: Monotonic Chunkwise Attention (ICLR 2018)
- Forward Attention in Sequence-to-sequence Acoustic Modelling for Speech Synthesis (ICASSP 2018)
- RNN-T for TTS: Initial investigation of an encoder-decoder end-to-end TTS framework using marginalization of monotonic hard latent alignments (2019)
- Location-Relative Attention Mechanisms For Robust Long-Form Speech Synthesis (ICASSP 2020)
- Non-Attentive Tacotron: Robust and Controllable Neural TTS Synthesis Including Unsupervised Duration Modeling (under review ICLR 2021)
- EfficientTTS: EfficientTTS: An Efficient and High-Quality Text-to-Speech Architecture (2020-12)
### Data Efficiency
- Semi-Supervised Training for Improving Data Efficiency in End-to-End Speech Synthesis (2018)
- Almost Unsupervised Text to Speech and Automatic Speech Recognition (ICML 2019)
- Unsupervised Learning For Sequence-to-sequence Text-to-speech For Low-resource Languages (Interspeech 2020)
- Multilingual Speech Synthesis: One Model, Many Languages: Meta-learning for Multilingual Text-to-Speech (Interspeech 2020)
- Low-resource expressive text-to-speech using data augmentation (2020-11)
## Vocoder
- WaveNet⭐: WaveNet: A Generative Model for Raw Audio (2016)
- WaveRNN⭐: Efficient Neural Audio Synthesis (ICML 2018)
- WaveGAN⭐: Adversarial Audio Synthesis (ICLR 2019)
- LPCNet⭐: LPCNet: Improving Neural Speech Synthesis Through Linear Prediction (ICASSP 2019)
- Towards achieving robust universal neural vocoding (Interspeech 2019)
- GAN-TTS: High Fidelity Speech Synthesis with Adversarial Networks (2019)
- MultiBand-WaveRNN: DurIAN: Duration Informed Attention Network For Multimodal Synthesis (2019)
- Parallel-WaveNet⭐: Parallel WaveNet: Fast High-Fidelity Speech Synthesis (2017)
- WaveGlow⭐: WaveGlow: A Flow-based Generative Network for Speech Synthesis (2018)
- Parallel-WaveGAN⭐: Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram (2019)
- MelGAN⭐: MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis (NeurIPS 2019)
- MultiBand-MelGAN: Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech (2020)
- VocGAN: VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network (Interspeech 2020)
- WaveGrad: WaveGrad: Estimating Gradients for Waveform Generation (2020)
- DiffWave: DiffWave: A Versatile Diffusion Model for Audio Synthesis (2020)
- HiFi-GAN: HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis (NeurIPS 2020)
- Parallel-WaveGAN (New): Parallel waveform synthesis based on generative adversarial networks with voicing-aware conditional discriminators (2020-10)
- Improved parallel WaveGAN vocoder with perceptually weighted spectrogram loss (SLT 2021)
- Universal Vocoder Based on Parallel WaveNet: Universal Neural Vocoding with Parallel WaveNet (ICASSP 2021)
- LightSpeech: LightSpeech: Lightweight and Fast Text to Speech with Neural Architecture Search (ICASSP 2021)
## TTS towards Stylization
### Expressive TTS
- ReferenceEncoder-Tacotron⭐: Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron (ICML 2018)
- GST-Tacotron⭐: Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis (ICML 2018)
- Predicting Expressive Speaking Style From Text In End-To-End Speech Synthesis (2018)
- GMVAE-Tacotron2⭐: Hierarchical Generative Modeling for Controllable Speech Synthesis (ICLR 2019)
- BERT-TTS: Towards Transfer Learning for End-to-End Speech Synthesis from Deep Pre-Trained Language Models (2019)
- (Multi-style Decouple): Multi-Reference Neural TTS Stylization with Adversarial Cycle Consistency (2019)
- (Multi-style Decouple): Multi-reference Tacotron by Intercross Training for Style Disentangling, Transfer and Control in Speech Synthesis (Interspeech 2019)
- Mellotron: Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens (2019)
- Flowtron (flow based): Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis (2020)
- (local style): Fully-hierarchical fine-grained prosody modeling for interpretable speech synthesis (ICASSP 2020)
- Controllable Neural Prosody Synthesis (Interspeech 2020)
- GraphSpeech: GraphSpeech: Syntax-Aware Graph Attention Network For Neural Speech Synthesis (2020-10)
- BERT-TTS: Improving Prosody Modelling with Cross-Utterance BERT Embeddings for End-to-end Speech Synthesis (2020-11)
- (Global Emotion Style Control): Controllable Emotion Transfer For End-to-End Speech Synthesis (2020-11)
- (Phone Level Style Control): Fine-grained Emotion Strength Transfer, Control and Prediction for Emotional Speech Synthesis (2020-11)
- (Phone Level Prosody Modelling): Mixture Density Network for Phone-Level Prosody Modelling in Speech Synthesis (ICASSP 2021)
### MultiSpeaker TTS
- Meta-Learning for TTS⭐: Sample Efficient Adaptive Text-to-Speech (ICLR 2019)
- SV-Tacotron⭐: Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis (NeurIPS 2018)
- Deep Voice V3⭐: Deep Voice 3: Scaling Text-to-Speech with Convolutional Sequence Learning (ICLR 2018)
- Zero-Shot Multi-Speaker Text-To-Speech with State-of-the-art Neural Speaker Embeddings (ICASSP 2020)
- MultiSpeech: MultiSpeech: Multi-Speaker Text to Speech with Transformer (2020)
- SC-WaveRNN: Speaker Conditional WaveRNN: Towards Universal Neural Vocoder for Unseen Speaker and Recording Conditions (Interspeech 2020)
- MultiSpeaker Dataset: AISHELL-3: A Multi-speaker Mandarin TTS Corpus and the Baselines (2020)
## Voice Conversion
### ASR & TTS Based
- (introduce PPG into voice conversion): Phonetic posteriorgrams for many-to-one voice conversion without parallel data training (2016)
- A Vocoder-free WaveNet Voice Conversion with Non-Parallel Data (2019)
- TTS-Skins: TTS Skins: Speaker Conversion via ASR (2019)
- One-shot Voice Conversion by Separating Speaker and Content Representations with Instance Normalization (Interspeech 2019)
- Cotatron (combine text information with voice conversion system): Cotatron: Transcription-Guided Speech Encoder for Any-to-Many Voice Conversion without Parallel Data (Interspeech 2020)
- (TTS & ASR): Voice Conversion by Cascading Automatic Speech Recognition and Text-to-Speech Synthesis with Prosody Transfer (Interspeech 2020)
- FragmentVC (wav to vec): FragmentVC: Any-to-Any Voice Conversion by End-to-End Extracting and Fusing Fine-Grained Voice Fragments With Attention (2020)
- Towards Natural and Controllable Cross-Lingual Voice Conversion Based on Neural TTS Model and Phonetic Posteriorgram (ICASSP 2021)
### VAE & Auto-Encoder Based
- VAE-VC (VAE based): Voice Conversion from Non-parallel Corpora Using Variational Auto-encoder (2016)
- (Speech representation learning by VQ-VAE): Unsupervised speech representation learning using WaveNet autoencoders (2019)
- Blow (Flow based): Blow: a single-scale hyperconditioned flow for non-parallel raw-audio voice conversion (NeurIPS 2019)
- AutoVC: AUTOVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss (2019)
- F0-AutoVC: F0-consistent many-to-many non-parallel voice conversion via conditional autoencoder (ICASSP 2020)
- One-Shot Voice Conversion by Vector Quantization (ICASSP 2020)
- SpeechFlow (auto-encoder): Unsupervised Speech Decomposition via Triple Information Bottleneck (ICML 2020)
### GAN Based
- CycleGAN-VC V1: Parallel-Data-Free Voice Conversion Using Cycle-Consistent Adversarial Networks (2017)
- StarGAN-VC: StarGAN-VC: non-parallel many-to-many Voice Conversion Using Star Generative Adversarial Networks (2018)
- CycleGAN-VC V2: CycleGAN-VC2: Improved CycleGAN-based Non-parallel Voice Conversion (2019)
- CycleGAN-VC V3: CycleGAN-VC3: Examining and Improving CycleGAN-VCs for Mel-spectrogram Conversion (2020)
## Singing
### Singing Voice Synthesis
- XiaoIce Band: XiaoIce Band: A Melody and Arrangement Generation Framework for Pop Music (KDD 2018)
- Mellotron: Mellotron: Multispeaker expressive voice synthesis by conditioning on rhythm, pitch and global style tokens (2019)
- ByteSing: ByteSing: A Chinese Singing Voice Synthesis System Using Duration Allocated Encoder-Decoder Acoustic Models and WaveRNN Vocoders (2020)
- JukeBox: Jukebox: A Generative Model for Music (2020)
- XiaoIce Sing: XiaoiceSing: A High-Quality and Integrated Singing Voice Synthesis System (2020)
- HiFiSinger: HiFiSinger: Towards High-Fidelity Neural Singing Voice Synthesis (2020)
- Sequence-to-sequence Singing Voice Synthesis with Perceptual Entropy Loss (2020)
- Learn2Sing: Learn2Sing: Target Speaker Singing Voice Synthesis by learning from a Singing Teacher (2020-11)
### Singing Voice Conversion
- A Universal Music Translation Network (2018)
- Unsupervised Singing Voice Conversion (Interspeech 2019)
- PitchNet: PitchNet: Unsupervised Singing Voice Conversion with Pitch Adversarial Network (ICASSP 2020)
- DurIAN-SC: DurIAN-SC: Duration Informed Attention Network based Singing Voice Conversion System (Interspeech 2020)
- Speech-to-Singing Conversion based on Boundary Equilibrium GAN (Interspeech 2020)
- PPG-based singing voice conversion with adversarial representation learning (2020)