This is a paper list for pre-training based dialogue models, covering both task-oriented and open-domain dialogue models.

Keywords: Dialogue model, Pre-training method, Natural Language Processing
- Improving Language Understanding by Generative Pre-Training, OpenAI Blog, [paper]
- Better Language Models and Their Implications, OpenAI Blog, [paper]
- ConveRT: Efficient and Accurate Conversational Representations from Transformers, arXiv-2019, [paper]
- DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation, arXiv-2019, [paper], [code]
- PLATO: Pre-trained Dialogue Generation Model with Discrete Latent Variable, arXiv-2019, [paper]
- PLATO-2: Towards Building an Open-Domain Chatbot via Curriculum Learning, arXiv-2020, [paper]
- MASS: Masked Sequence to Sequence Pre-training for Language Generation, ICML2019, [paper], [code]
- Unified Language Model Pre-training for Natural Language Understanding and Generation, NeurIPS2019, [paper], [code]
- Pretraining Methods for Dialog Context Representation Learning, ACL2019, [paper]
- Denoising based Sequence-to-Sequence Pre-training for Text Generation, EMNLP2019, [paper], [code]
- Masking Orchestration: Multi-task Pretraining for Multi-role Dialogue Representation Learning, AAAI2020
- Hello, It's GPT-2 - How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems, WNGT2019, [paper]
- Alternating Roles Dialog Model with Large-scale Pre-trained Language Models, arXiv-2019, [paper]
- Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual Task-oriented Dialogue Systems, AAAI2020, [paper], [code]
- Relevance-Promoting Language Model for Short-Text Conversation, AAAI2020, [paper]
- TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents, NeurIPS 2018 CAI Workshop, [paper]
- Large-Scale Transfer Learning for Natural Language Generation, ACL2019, [paper], [code]
- Persona-aware Dialogue Generation with Enriched Profile, AAAI2020, [paper]
- Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline, arXiv-2019, [paper]
- Few-shot NLG with Pre-trained Language Model, arXiv-2019, [paper], [code]
- Harnessing Pre-Trained Neural Networks with Rules for Formality Style Transfer, EMNLP2019, [paper], [code]
By Yinhe Zheng (zhengyinhe1@163.com)
Feel free to open an issue or submit a pull request!