NLP-THU

NLP Course Material & QA

This repository provides the reading materials recommended by the NLP-THU course.

1. Introduction

Introduction

  1. Foundations of Statistical Natural Language Processing. Christopher D. Manning and Hinrich Schütze. MIT Press 2001. [link]
  2. Introduction to Information Retrieval. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Cambridge University Press 2008. [link]
  3. Semantic Relations Between Nominals. Vivi Nastase, Preslav Nakov, Diarmuid Ó Séaghdha and Stan Szpakowicz. Morgan & Claypool Publishers 2013. [link]

2. Word Representation and Neural Networks

a. Word Representation

  1. Linguistic Regularities in Continuous Space Word Representations. Tomas Mikolov, Wen-tau Yih and Geoffrey Zweig. NAACL 2013. [link]
  2. Glove: Global Vectors for Word Representation. Jeffrey Pennington, Richard Socher and Christopher D. Manning. EMNLP 2014. [link]
  3. Deep Contextualized Word Representations. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee and Luke Zettlemoyer. NAACL 2018. [link]
  4. Parallel Distributed Processing. Jerome A. Feldman, Patrick J. Hayes and David E. Rumelhart. 1986.
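
As a quick, concrete view of what these embeddings buy you, the sketch below reproduces the analogy test popularized by the first paper above ("king" - "man" + "woman" ≈ "queen") using cosine similarity. The toy vectors are invented purely for illustration; a real evaluation would use trained word2vec or GloVe vectors.

```python
# Minimal word-analogy sketch in the spirit of Mikolov et al. (2013).
# The toy vectors below are made up for illustration only.
import numpy as np

vocab = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.7, 0.9]),
    "man":   np.array([0.2, 0.9, 0.1]),
    "woman": np.array([0.2, 0.9, 0.9]),
    "apple": np.array([0.9, 0.1, 0.2]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

target = vocab["king"] - vocab["man"] + vocab["woman"]
# Exclude the query words themselves, as is standard for this test.
best = max((w for w in vocab if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vocab[w], target))
print(best)  # "queen" with these toy vectors
```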

b. RNN & CNN

  1. ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012 [link]
  2. Convolutional Neural Networks for Sentence Classification. EMNLP 2014 [link]
  3. Long Short-Term Memory. Neural Computation 1997 [link]
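
For readers who want to connect the second paper above to code, here is a minimal sketch of a Kim-style CNN sentence classifier, assuming PyTorch is installed; the hyperparameters and random input are illustrative, not the paper's exact configuration.

```python
# Sketch of a CNN for sentence classification (in the spirit of Kim, EMNLP 2014):
# parallel convolutions over word embeddings, max-over-time pooling, linear output.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_classes=2,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                    # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # (batch, num_classes)

logits = TextCNN(vocab_size=10000)(torch.randint(0, 10000, (4, 20)))
```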

3. Seq2Seq Modeling

a. Machine Translation

Must-read Papers

  1. The Mathematics of Statistical Machine Translation: Parameter Estimation. Peter E. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. Computational Linguistics 1993. [link]
  2. (Seq2seq) Sequence to Sequence Learning with Neural Networks. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. NIPS 2014. [link]
  3. (BLEU) BLEU: a Method for Automatic Evaluation of Machine Translation. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. ACL 2002. [link]
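
To make the BLEU entry above concrete, here is a stripped-down sentence-level BLEU sketch: clipped n-gram precisions, their geometric mean, and a brevity penalty. Real implementations work at the corpus level and add smoothing; the sentence pair below is a toy example.

```python
# Toy BLEU sketch (Papineni et al., 2002): clipped n-gram precision,
# geometric mean over n-gram orders, and a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(count, ref[g]) for g, count in cand.items())  # clipped
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:                      # no smoothing in this sketch
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))  # brevity penalty
    return bp * geo_mean

# Bigram-only BLEU on a toy pair (full BLEU uses up to 4-grams):
print(round(bleu("the cat sat on the mat".split(),
                 "the cat is on the mat".split(), max_n=2), 2))   # ~0.71
```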

Further Reading

  1. Statistical Phrase-Based Translation. Philipp Koehn, Franz J. Och, and Daniel Marcu. NAACL 2003. [link]
  2. Hierarchical Phrase-Based Translation. David Chiang. Computational Linguistics 2007. [link]
  3. (Beam Search) Beam Search Strategies for Neural Machine Translation. Markus Freitag and Yaser Al-Onaizan. Workshop on Neural Machine Translation 2017 [link]
  4. Visualizing and Understanding Neural Machine Translation. Yanzhuo Ding, Yang Liu, Huanbo Luan, and Maosong Sun. ACL 2017 [link]
  5. MT paper list. [link]
  6. THUMT toolkit. [link]

b. Attention

  1. Introduction to attention. [link]
  2. Neural Machine Translation by Jointly Learning to Align and Translate. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. ICLR 2015. [link]
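
The core computation of the Bahdanau et al. paper fits in a few lines: score each encoder state against the current decoder state, softmax the scores, and take the weighted sum as the context vector. The sketch below uses random toy weights and states purely to show the shapes.

```python
# Additive (Bahdanau-style) attention sketch with toy random tensors.
import numpy as np

rng = np.random.default_rng(0)
T, d = 5, 8                                  # source length, hidden size
enc_states = rng.normal(size=(T, d))         # encoder states h_1..h_T
dec_state = rng.normal(size=d)               # current decoder state s
W_enc, W_dec = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)

# e_j = v^T tanh(W_dec s + W_enc h_j),  alpha = softmax(e),  c = sum_j alpha_j h_j
scores = np.tanh(enc_states @ W_enc.T + dec_state @ W_dec.T) @ v
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()
context = alpha @ enc_states                 # context vector fed to the decoder
```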

c. Transformer

Must-read Papers

  1. (Transformer) Attention is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. NIPS 2017. [link]
  2. (BPE) Neural Machine Translation of Rare Words with Subword Units. Rico Sennrich, Barry Haddow, and Alexandra Birch. ACL 2016. [link]
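
The building block of the Transformer paper above is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A single-head sketch with toy shapes (multi-head projection and masking are omitted):

```python
# Single-head scaled dot-product attention from "Attention is All You Need".
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (n_queries, n_keys)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (n_queries, d_v)

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 16)), rng.normal(size=(6, 16)), rng.normal(size=(6, 32))
out = scaled_dot_product_attention(Q, K, V)          # shape (4, 32)
```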

Further Reading

  1. Illustrated Transformer. [link]
  2. Layer Normalization. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. [link]
  3. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. ACL 2020. [link]

4. Pre-Trained Language Models

Must-read papers

  1. Semi-supervised Sequence Learning. [link]
  2. (ELMo) Deep contextualized word representations. [link]
  3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [link]
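
As a practical companion to the BERT paper, the sketch below pulls contextual token representations from a pre-trained checkpoint via the HuggingFace `transformers` library (assumed installed; `bert-base-uncased` is just one common checkpoint, not one prescribed by this course).

```python
# Extracting contextual representations from a pre-trained BERT.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The bank raised interest rates.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One vector per word piece; the same word gets different vectors in different contexts.
token_vectors = outputs.last_hidden_state            # (1, seq_len, 768)
```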

Further Reading

  1. Introduction of Pre-trained LM. [link]
  2. Transformer code repo. [link]
  3. Transfer Learning in Natural Language Processing. Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, Thomas Wolf. NAACL 2019 [link]
  4. PLM paper list. [link]

5. Knowledge Graph

a. Introduction to KG

  1. Towards a Definition of Knowledge Graphs. Lisa Ehrlinger, Wolfram Wöß [link]
  2. KG Definition & History Wiki [link]
  3. Semantic Network [link]

b. Knowledge Representation Learning

Must-read papers

  1. KRL paper list [link]
  2. Knowledge Graph Embedding: A Survey of Approaches and Applications. Quan Wang, Zhendong Mao, Bin Wang, Li Guo. TKDE 2017.  [link]
  3. TransE: Translating Embeddings for Modeling Multi-relational Data. Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, Oksana Yakhnenko. NIPS 2013. [link]
  4. NTN: Reasoning With Neural Tensor Networks for Knowledge Base Completion. Richard Socher, Danqi Chen, Christopher D. Manning, Andrew Ng. NIPS 2013. [link]
  5. ComplEx: Complex Embeddings for Simple Link Prediction. Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier and Guillaume Bouchard. ICML 2016. [link]
  6. RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space. Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, Jian Tang. ICLR 2019. [link]
  7. TuckER: Tensor Factorization for Knowledge Graph Completion. Ivana Balazevic, Carl Allen, Timothy M. Hospedales. EMNLP 2019. [link]
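
The TransE entry above is easy to state in code: a triple (h, r, t) is scored by how close h + r lands to t, i.e. score = -||h + r - t||. The embeddings below are random placeholders; training would optimize them with a margin-based ranking loss against corrupted triples.

```python
# TransE scoring sketch (Bordes et al., 2013) with toy random embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
entity_emb = {e: rng.normal(size=dim)
              for e in ["Beijing", "China", "Paris", "France"]}
relation_emb = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t, norm=1):
    # higher (less negative) means the triple is more plausible
    return -np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t], ord=norm)

# After training, the true triple should outscore the corrupted one:
print(transe_score("Beijing", "capital_of", "China"),
      transe_score("Beijing", "capital_of", "France"))
```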

Further reading

  1. OpenKE [link]

c. Reasoning

  1. KG Reasoning paper list [link] & PPT [link]
  2. Differentiable Learning of Logical Rules for Knowledge Base Reasoning. Fan Yang, Zhilin Yang, William W. Cohen. NeurIPS 2017. [link]
  3. Go for a Walk and Arrive at the Answer: Reasoning Over Paths in Knowledge Bases using Reinforcement Learning. Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, Andrew McCallum. ICLR 2018. [link]

6. Information Extraction - 1

a. Part of Speech Tagging (POS Tagging)

  1. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. Barbara Plank, Anders Søgaard, Yoav Goldberg. ACL 2016. [link]
  2. Blog: NLP Guide: Identifying Part of Speech Tags using Conditional Random Fields. [link]

b. Sequence Labelling

  1. Hierarchically-Refined Label Attention Network for Sequence Labeling. Leyang Cui, Yue Zhang. EMNLP-IJCNLP 2019. [link]
  2. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. Xuezhe Ma, Eduard Hovy. ACL 2016. [link]
  3. Comparisons of Sequence Labeling Algorithms and Extensions. Nam Nguyen, Yunsong Guo. ICML 2007. [link]
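
The decoding step shared by the CRF-based taggers above (e.g., BiLSTM-CNNs-CRF) is Viterbi search over emission and transition scores. The sketch below uses random scores purely to show the recurrence; a real tagger would produce them from the neural network.

```python
# Viterbi decoding sketch for a linear-chain CRF.
import numpy as np

def viterbi(emissions, transitions):
    # emissions: (seq_len, num_tags) per-token tag scores
    # transitions: (num_tags, num_tags) score of moving from tag i to tag j
    seq_len, num_tags = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((seq_len, num_tags), dtype=int)
    for t in range(1, seq_len):
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)       # best previous tag for each tag
        score = total.max(axis=0)
    best = [int(score.argmax())]
    for t in range(seq_len - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]                           # highest-scoring tag sequence

rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(6, 4)), rng.normal(size=(4, 4))))
```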

c. Named Entity Recognition

  1. Blog: Named Entity Recognition Tagging. CS230 [link]
  2. A Survey of Named Entity Recognition and Classification. David Nadeau, Satoshi Sekine. Computational Linguistics 2007. [link]
  3. Neural Architectures for Named Entity Recognition. Guillaume Lample, et al. NAACL 2016. [link]
  4. Named Entity Recognition with Bidirectional LSTM-CNNs. Jason P. C. Chiu, Eric Nichols. TACL 2016. [link]

7. Information Extraction - 2

a. Relation Extraction

Must-read papers

  1. Relation Classification via Convolutional Deep Neural Network. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao. COLING 2014. [link]
  2. Distant Supervision for Relation Extraction without Labeled Data. Mike Mintz, Steven Bills, Rion Snow, Dan Jurafsky. ACL-IJCNLP 2009. [link]
  3. Neural Relation Extraction with Selective Attention over Instances. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, Maosong Sun. ACL 2016. [link]
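
The distant supervision heuristic from the second paper above fits in a few lines: any sentence containing both entities of a knowledge-base triple is labeled with that triple's relation, which yields large but noisy training data. The triples and sentences below are made-up toys.

```python
# Distant supervision labeling sketch (Mintz et al., 2009) on toy data.
kb = {("Barack Obama", "Hawaii"): "born_in",
      ("Steve Jobs", "Apple"): "founder_of"}

sentences = [
    "Barack Obama was born in Hawaii .",
    "Barack Obama visited Hawaii last year .",   # noise: still labeled born_in
    "Steve Jobs introduced the iPhone .",        # no aligned triple, unlabeled
]

labeled = []
for sent in sentences:
    for (head, tail), relation in kb.items():
        if head in sent and tail in sent:
            labeled.append((sent, head, tail, relation))

for example in labeled:
    print(example)
```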

Further reading

  1. RE paper list [link]

b. Advanced Topics

- Event Extraction

  1. Joint Event Extraction via Structured Prediction with Global Features. Qi Li, Heng Ji and Liang Huang. ACL 2013. [link]
  2. Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng and Jun Zhao. ACL 2015. [link]
  3. Adversarial Training for Weakly Supervised Event Detection. Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun and Peng Li. NAACL 2019. [link]
  4. CLEVE: Contrastive Pre-training for Event Extraction. Ziqi Wang, Xiaozhi Wang, Xu Han, Yankai Lin, Lei Hou, Zhiyuan Liu, Peng Li, Juanzi Li, Jie Zhou. ACL 2021. [link]

- OpenRE

  1. Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data. Ruidong Wu, Yuan Yao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun. EMNLP 2019. [link]
  2. Discrete-state Variational Autoencoders for Joint Discovery and Factorization of Relations. Diego Marcheggiani and Ivan Titov. TACL 2016. [link]
  3. Open Hierarchical Relation Extraction. Kai Zhang, Yuan Yao, Ruobing Xie, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun. NAACL 2021. [link]

- Document-Level RE

  1. DocRED: A Large-Scale Document-Level Relation Extraction Dataset. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun. ACL 2019. [link]
  2. A Walk-based Model on Entity Graphs for Relation Extraction. Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou. ACL 2018. [link]
  3. Graph Neural Networks with Generated Parameters for Relation Extraction. Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-Seng Chua, Maosong Sun. ACL 2019. [link]
  4. Reasoning with Latent Structure Refinement for Document-Level Relation Extraction. Guoshun Nan, Zhijiang Guo, Ivan Sekulić, Wei Lu. ACL 2020. [link]

- Few-shot RE

  1. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun. EMNLP 2018 [link]
  2. Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra. NIPS 2016 [link]
  3. Prototypical Networks for Few-shot Learning. Jake Snell, Kevin Swersky, Richard S. Zemel. NIPS 2017 [link]
  4. Meta-Information Guided Meta-Learning for Few-Shot Relation Classification. Bowen Dong, Yuan Yao, Ruobing Xie, Tianyu Gao, Xu Han, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun. COLING 2020. [link]

8. Knowledge-Guided NLP

Must-read papers

  1. ERNIE: Enhanced Language Representation with Informative Entities. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu. ACL 2019 [link]
  2. Neural natural language inference models enhanced with external knowledge. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. ACL 2018 [link]
  3. Neural knowledge acquisition via mutual attention between knowledge graph and text. Xu Han, Zhiyuan Liu, and Maosong Sun. AAAI 2018 [link]

Further reading

  1. Language Models as Knowledge Bases? [link]
  2. Knowledge enhanced contextual word representations. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. EMNLP 2019 [link]
  3. Barack's Wife Hillary: Using Knowledge Graphs for Fact-Aware Language Modeling. Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. ACL 2019 [link]
  4. Knowledgeable Reader: Enhancing Cloze-style Reading Comprehension with External Commonsense Knowledge. Todor Mihaylov and Anette Frank. ACL 2018 [link]
  5. Improving question answering by commonsense-based pre-training. Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018 [link]
  6. Adaptive knowledge sharing in multi-task learning: Improving low-resource neural machine translation. Poorya Zaremoodi, Wray Buntine, and Gholamreza Haffari. ACL 2018 [link]
  7. ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning. Yujia Qin, Yankai Lin, Ryuichi Takanobu, Zhiyuan Liu, Peng Li, Heng Ji, Minlie Huang, Maosong Sun, Jie Zhou. ACL 2021 [link]
  8. K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu ji, Guihong Cao, Daxin Jiang, Ming Zhou. ACL 2021 [link]

9. Advanced Learning Methods

a. Adversarial Training

Must-read papers
  1. Explaining and Harnessing Adversarial Examples. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. ICLR 2015 [link]
  2. Generative Adversarial Nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. NIPS 2014 [link]
  3. Wasserstein GAN. Martín Arjovsky, Soumith Chintala, and Léon Bottou. ICML 2017 [link]
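
The fast gradient sign method from the first paper above is a one-line perturbation: move the input a small step in the direction of the sign of the loss gradient. The sketch below uses a toy linear model and random input (PyTorch assumed installed).

```python
# FGSM adversarial-example sketch on a toy model.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(10, 2)              # stand-in for any differentiable model
x = torch.randn(1, 10, requires_grad=True)  # toy input
y = torch.tensor([1])                       # toy label
epsilon = 0.1                               # perturbation budget

loss = F.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).detach()   # adversarial example
```
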
Further reading
  1. Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, Percy Liang. EMNLP 2017 [link]
  2. Certified Defenses Against Adversarial Examples. Raghunathan, Aditi, Jacob Steinhardt, and Percy Liang. ICLR 2018 [link]
  3. Robust Neural Machine Translation with Doubly Adversarial Inputs. Yong Cheng, Lu Jiang, and Wolfgang Macherey. ACL 2019 [link]
  4. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. Alec Radford, Luke Metz, and Soumith Chintala. ICLR 2016 [link]
  5. Improved Training of Wasserstein GANs. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. NIPS 2017 [link]
  6. Are GANs Created Equal? A Large-scale Study. Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. NIPS 2018 [link]
  7. Unsupervised Machine Translation Using Monolingual Corpora Only. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. ICLR 2018 [link]
  8. Adversarial Multi-task Learning for Text Classification. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. ACL 2017 [link]
  9. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. AAAI 2017 [link]

b. Reinforcement Learning

Must-read papers
  1. Playing atari with deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller. 2013 [link]
  2. Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis. Nature 2015 [link]
  3. Mastering the game of go with deep neural networks and tree search. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis. Nature 2016 [link]
  4. Reinforcement learning for relation classification from noisy data. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, Xiaoyan Zhu. AAAI 2018 [link]
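
Underneath the DQN papers above sits the classic Q-learning update, Q(s,a) ← Q(s,a) + lr·(r + γ·max Q(s',·) − Q(s,a)); DQN replaces the table with a neural network plus experience replay and a target network. The tiny chain environment below is invented for illustration.

```python
# Tabular Q-learning sketch on a made-up 3-state chain environment.
import random

num_states, num_actions = 3, 2
Q = [[0.0] * num_actions for _ in range(num_states)]
lr, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    # toy dynamics: action 1 moves right, action 0 stays; reward at the last state
    next_state = min(state + action, num_states - 1)
    reward = 1.0 if next_state == num_states - 1 else 0.0
    return next_state, reward

state = 0
for _ in range(1000):
    if random.random() < epsilon:                  # epsilon-greedy exploration
        action = random.randrange(num_actions)
    else:
        action = max(range(num_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    target = reward + gamma * max(Q[next_state])
    Q[state][action] += lr * (target - Q[state][action])
    state = 0 if next_state == num_states - 1 else next_state
```
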
Further reading
  1. Reinforced co-training. Jiawei Wu, Lei Li, William Yang Wang. NAACL 2018 [link]
  2. Playing 20 question game with policy-based reinforcement learning. Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, Zhan Chen. EMNLP 2018 [link]
  3. Entity-relation extraction as multi-turn question answering. Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li. ACL 2019 [link]
  4. Language understanding for text-based games using deep reinforcement learning. Karthik Narasimhan, Tejas D Kulkarni, Regina Barzilay. EMNLP 2015 [link]
  5. Deep reinforcement learning with a natural language action space. Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf. ACL 2016 [link]

c. Few-Shot Learning

Must-read papers
  1. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun. EMNLP 2018 [link]
  2. Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra. NIPS 2016 [link]
  3. Prototypical Networks for Few-shot Learning. Jake Snell, Kevin Swersky, Richard S. Zemel. NIPS 2017 [link]
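
The prototypical-network idea in the third paper above reduces to two steps: average each class's support embeddings into a prototype, then assign a query to the nearest prototype. Embeddings below are random toys; in the paper they come from a learned encoder.

```python
# Prototypical Networks classification sketch with toy embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_way, k_shot, dim = 5, 3, 64
support = rng.normal(size=(n_way, k_shot, dim))   # N classes x K support examples
query = rng.normal(size=dim)

prototypes = support.mean(axis=1)                  # one prototype per class
distances = np.linalg.norm(prototypes - query, axis=1)
predicted_class = int(distances.argmin())          # nearest prototype wins
```
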
Further reading
  1. FewRel 2.0: Towards More Challenging Few-Shot Relation Classification. Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou. EMNLP 2019 [link]
  2. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Chelsea Finn, Pieter Abbeel, Sergey Levine. ICML 2017 [link]
  3. Matching the Blanks: Distributional Similarity for Relation Learning. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski. ACL 2019 [link]

10. Information Retrieval

Must-read papers

  1. A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval. Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. CIKM 2014. [link]
  2. Text Matching as Image Recognition. Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, and Xueqi Cheng. AAAI 2016. [link]
  3. Query Expansion with Locally-Trained Word Embeddings. Fernando Diaz, Bhaskar Mitra, and Nick Craswell. ACL 2016. [link]
  4. PACRR: A Position-Aware Neural IR Model for Relevance Matching. Kai Hui, Andrew Yates, Klaus Berberich, Gerard de Melo. EMNLP 2017. [link]
  5. Dense Passage Retrieval for Open-Domain Question Answering. Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. EMNLP 2020. [link]
  6. Entity-Duet Neural Ranking: Understanding the Role of Knowledge Graph Semantics in Neural Information Retrieval. Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. ACL 2018. [link]
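
The dual-encoder setup behind the DPR paper above can be sketched with any placeholder encoder: embed the query and each passage independently and rank passages by inner product. The bag-of-words random-projection "encoder" below is a stand-in invented for illustration; DPR itself uses two BERT encoders and a learned index.

```python
# Dense retrieval ranking sketch with a toy encoder.
import numpy as np

rng = np.random.default_rng(0)
vocab = {w: i for i, w in enumerate(
    "who wrote hamlet shakespeare newton discovered gravity".split())}
projection = rng.normal(size=(len(vocab), 32))

def encode(text):
    ids = [vocab[w] for w in text.lower().split() if w in vocab]
    return projection[ids].mean(axis=0)            # toy bag-of-words encoder

passages = ["Shakespeare wrote Hamlet", "Newton discovered gravity"]
passage_matrix = np.stack([encode(p) for p in passages])
scores = passage_matrix @ encode("who wrote hamlet")   # inner-product relevance
ranking = scores.argsort()[::-1]                        # best passage first
```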

Further reading

  1. Explicit Semantic Ranking for Academic Search via Knowledge Graph Embedding. Chenyan Xiong, Russell Power, Jamie Callan. WWW 2017. [link]
  2. Deeper Text Understanding for IR with Contextual Neural Language Modeling. Zhuyun Dai, Jamie Callan. SIGIR 2019. [link]

11. Question Answering

a. Reading Comprehension

  1. SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP 2016 [link]
  2. Bidirectional Attention Flow for Machine Comprehension. ICLR 2017 [link]
  3. Simple and Effective Multi-Paragraph Reading Comprehension. ACL 2018 [link]

b. Open-domain QA

  1. Reading Wikipedia to Answer Open-Domain Questions. ACL 2017 [link]
  2. Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. EMNLP 2018 [link]

c. KBQA

  1. Question Answering with Subgraph Embeddings. EMNLP 2014 [link]
  2. Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base. ACL 2015 [link]

d. Other Topics

  1. (Multi-hop) Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning. EMNLP 2019 [link]
  2. (Symbolic) Neural Symbolic Reader: Scalable Integration of Distributed and Symbolic Representations for Reading Comprehension. ICLR 2020 [link]
  3. (Adversarial) Adversarial Examples for Evaluating Reading Comprehension Systems. EMNLP 2017 [link]
  4. (PIQA) Phrase-indexed question answering: A new challenge for scalable document comprehension. EMNLP 2018 [link]
  5. (Common Sense) Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering. [link]
  6. (CQA) SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering. [link]
  7. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. EMNLP 2018 [link]

12. Text Generation

a. Survey

  1. Tutorial on variational autoencoders [link]
  2. Neural text generation: A practical guide [link]
  3. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation [link]
  4. Neural Text Generation: Past, Present and Beyond [link]

b. Classic

  1. A neural probabilistic language model [link] (NNLM)
  2. Recurrent neural network based language model [link] (RNNLM)
  3. Sequence to sequence learning with neural networks [link] (seq2seq)

c. VAE based

  1. Generating Sentences from a Continuous Space [link]
  2. Long and Diverse Text Generation with Planning-based Hierarchical Variational Model [link]
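
The sentence VAE of Bowman et al. (the first paper above) relies on the reparameterization trick so gradients can flow through the sampling step. A minimal sketch with toy tensors, assuming PyTorch is installed:

```python
# Reparameterization trick and KL term used by sentence VAEs.
import torch

mu = torch.zeros(1, 16, requires_grad=True)        # encoder-predicted mean
log_var = torch.zeros(1, 16, requires_grad=True)   # encoder-predicted log-variance

eps = torch.randn_like(mu)
z = mu + torch.exp(0.5 * log_var) * eps            # latent code fed to the decoder

# KL divergence to a standard normal prior (the regularizer in the VAE objective):
kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
```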

d. GAN based

  1. Adversarial feature matching for text generation [link] (TextGAN)

e. Knowledge based

  1. Text Generation from Knowledge Graphs with Graph Transformers [link]
  2. Neural Text Generation from Rich Semantic Representations [link]

13. Discourse Analysis

a. Reference in Language & Coreference Resolution

  1. Unsupervised Models for Coreference Resolution. Vincent Ng. EMNLP 2008. [link]
  2. End-to-end Neural Coreference Resolution. Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer. EMNLP 2017. [link]
  3. Coreference Resolution as Query-based Span Prediction. Wei Wu, Fei Wang, Arianna Yuan, Fei Wu, Jiwei Li. ACL 2020. [link]

b. Coherence & Discourse Relation Classification

  1. Implicit Discourse Relation Classification via Multi-Task Neural Networks. Yang Liu, Sujian Li, Xiaodong Zhang, Zhifang Sui. AAAI 2016. [link]
  2. Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network. Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, Xuanjing Huang. ACL 2016. [link]
  3. Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings. Linh The Nguyen, Ngo Van Linh, Khoat Than, Thien Huu Nguyen. ACL 2019. [link]
  4. Linguistic Properties Matter for Implicit Discourse Relation Recognition: Combining Semantic Interaction, Topic Continuity and Attribution. Wenqiang Lei, Yuanxin Xiang, Yuwei Wang, Qian Zhong, Meichun Liu, Min-Yen Kan. AAAI 2018. [link]

c. Context Modeling and Conversation

  1. A Survey on Dialogue Systems: Recent Advances and New Frontiers. Hongshen Chen, Xiaorui Liu, Dawei Yin, Jiliang Tang. 2018. [link]
  2. A Diversity-Promoting Objective Function for Neural Conversation Models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan. NAACL 2016. [link]
  3. A Persona-Based Neural Conversation Model. Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, Bill Dolan. ACL 2016. [link]

14. Interdisciplinary Topics

a. Cognitive Linguistics and NLP

  1. Computational Cognitive Linguistics [link]
  2. Ten Lectures on Cognitive Linguistics by George Lakoff [link (access using Tsinghua Laboratory Account)]

b. Psycholinguistics and NLP

  1. A Computational Psycholinguistic Model of Natural Language Processing [link]
  2. Slides of the Cambridge NLP course [link]
  3. Reading materials of the MIT course Computational Psycholinguistics [link]

c. Sociolinguistics and NLP

  1. Computational Sociolinguistics: A Survey [link]
  2. Research Topic of Computational Sociolinguistics in Frontiers [link]
  3. Introduction to Computational Sociolinguistics [link]