This repository hosts reviews of papers read by the Kakao Brain natural language processing (NLP) team.
We plan to update it weekly with the papers the NLP research team is reading.
While NLP papers are the main focus, we do not read NLP papers exclusively.
- 2020.10.28 What Have We Achieved on Text Summarization?
- 2020.10.28 Revisiting Modularized Multilingual NMT to Meet Industrial Demands
- 2020.10.14 The Dialogue Dodecathlon: Open-Domain Knowledge and Image Grounded Conversational Agents
- 2020.10.14 One Model, Many Languages: Meta-Learning for Multilingual Text-to-Speech
- 2020.09.29 Pattern-Exploiting Training (English)
- 2020.09.23 Can Unconditional Language Models Recover Arbitrary Sentences?
- 2020.09.23 Deep Double Descent
- 2020.09.16 AMBERT
- 2020.09.16 Wikipedia2Vec
- 2020.09.16 Wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations
- 2020.09.09 Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation
- 2020.09.02 SPECTER: Document-level Representation Learning using Citation-informed Transformers
- 2020.08.26 Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets
- 2020.08.19 Hybrid Discriminative-Generative Training via Contrastive Learning (English)
- 2020.08.12 Supervised Contrastive Learning
- 2020.08.12 Recent Advances in Neural Question Generation
- 2020.08.04 SciREX: A Challenge Dataset for Document-Level Information Extraction
- 2020.08.04 Big Bird: Transformers for Longer Sequences
- 2020.07.29 G-DAUG: Generative Data Augmentation for Commonsense Reasoning (English)
- 2020.07.29 Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks (English)
- 2020.07.22 Balancing Training for Multilingual NMT
- 2020.07.22 TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
- 2020.07.15 Optimizing Data Usage via Differentiable Rewards
- 2020.07.15 Language-agnostic BERT Sentence Embedding
- 2020.07.08 PLATO & PLATO-2
- 2020.06.17 GPT-3 (English)
- 2020.03.25 XLU: XNLI, XLM, XLM-R (English)
- 2020.02.05 Meena (English)