awesome-language-model-probes

Factual Knowledge Probes

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel.
(EMNLP 2019)
[paper] [code]
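
LAMA probes a masked LM with cloze templates and counts how often the gold object is the top prediction (precision@1). A minimal sketch with the HuggingFace fill-mask pipeline; the queries here are illustrative, not the paper's data or code:

```python
# A minimal LAMA-style cloze probe using the HuggingFace fill-mask
# pipeline; the P@1 metric mirrors the paper's setup, but this is
# not the paper's code.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-cased")

# (cloze template, gold object) pairs in the style of LAMA's T-REx split.
queries = [
    ("The capital of France is [MASK].", "Paris"),
    ("Dante was born in [MASK].", "Florence"),
]

hits = 0
for template, gold in queries:
    preds = fill(template, top_k=5)
    tokens = [p["token_str"].strip() for p in preds]
    hits += tokens[0] == gold          # precision@1
    print(template, "->", tokens)

print(f"P@1 = {hits / len(queries):.2f}")
```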

Inducing relational knowledge from BERT.
Zied Bouraoui, Jose Camacho-Collados, Steven Schockaert.
(AAAI 2020)
[paper]

Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly.
Nora Kassner, Hinrich Schütze.
(ACL 2020)
[paper]

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig.
(TACL 2020)
[paper] [code]
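
This paper (LPAQA) mines and paraphrases many templates per relation, then ensembles them. A sketch of the ensembling step, averaging the masked LM's answer distribution over paraphrased prompts (model choice and templates are illustrative):

```python
# Template ensembling in the spirit of LPAQA: average a masked LM's
# [MASK] distribution over several paraphrased prompts. Not the
# paper's code; templates are made up for illustration.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

templates = [
    "[X] was born in [MASK].",
    "[X] is a native of [MASK].",
    "The birthplace of [X] is [MASK].",
]

@torch.no_grad()
def ensemble_probs(subject):
    """Average the [MASK] distribution over all templates."""
    dists = []
    for t in templates:
        ids = tok(t.replace("[X]", subject), return_tensors="pt")
        mask_pos = (ids.input_ids == tok.mask_token_id).nonzero()[0, 1]
        logits = mlm(**ids).logits[0, mask_pos]
        dists.append(logits.softmax(-1))
    return torch.stack(dists).mean(0)

print(tok.decode(ensemble_probs("Dante").argmax().item()))
```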

How Context Affects Language Models’ Factual Predictions.
Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel.
(AKBC 2020)
[paper]

AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts.
Taylor Shin, Yasaman Razeghi, Robert L. Logan IV, Eric Wallace, Sameer Singh.
(EMNLP 2020)
[paper] [website] [code]
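
AutoPrompt searches for discrete trigger tokens with a HotFlip-style, gradient-guided candidate step: replacement tokens are ranked by the first-order change in loss. A sketch of that single step (the prompt, trigger position, and subject/object pair are illustrative; see the released code for the full search loop):

```python
# One gradient-guided candidate step in the AutoPrompt style: rank
# replacement tokens for a trigger slot by -e_v . grad, the
# first-order decrease in loss. Names and prompt are illustrative.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
emb = mlm.get_input_embeddings().weight            # (V, d)

# "[CLS] Paris the the [MASK] . [SEP]"; triggers start out as "the".
ids = tok("Paris the the [MASK].", return_tensors="pt")
trigger_pos = 2                                    # first trigger slot
mask_pos = (ids.input_ids == tok.mask_token_id).nonzero()[0, 1]
gold = torch.tensor([tok.convert_tokens_to_ids("France")])

# Forward through the embedding layer so we can take its gradient.
inputs = emb[ids.input_ids].detach().requires_grad_(True)
logits = mlm(inputs_embeds=inputs,
             attention_mask=ids.attention_mask).logits
loss = torch.nn.functional.cross_entropy(logits[0, mask_pos][None], gold)
loss.backward()

# Tokens whose embeddings most decrease the loss, to first order.
grad = inputs.grad[0, trigger_pos]                 # (d,)
with torch.no_grad():
    candidates = torch.topk(-(emb @ grad), k=10).indices
print(tok.convert_ids_to_tokens(candidates.tolist()))
```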

X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models.
Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig.
(EMNLP 2020)
[paper] [website] [code]

How Much Knowledge Can You Pack Into the Parameters of a Language Model?
Adam Roberts, Colin Raffel, Noam Shazeer.
(EMNLP 2020)
[paper] [code]
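
The paper evaluates closed-book QA: a fine-tuned T5 answers from its parameters alone, with no retrieved context. A sketch assuming one of the checkpoints the authors released on the HuggingFace Hub (treat the exact model name as an assumption):

```python
# Closed-book QA: the model sees only the question, no context.
# "google/t5-small-ssm-nq" is assumed to be one of the paper's
# released checkpoints (salient span masking + Natural Questions).
from transformers import pipeline

qa = pipeline("text2text-generation", model="google/t5-small-ssm-nq")
print(qa("When was Franklin D. Roosevelt born?")[0]["generated_text"])
```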

E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT.
Nina Poerner, Ulli Waltinger, Hinrich Schütze.
(Findings of EMNLP 2020)
[paper] [code]

Multilingual LAMA: Investigating Knowledge in Multilingual Pretrained Language Models.
Nora Kassner, Philipp Dufter, Hinrich Schütze.
(EACL 2021)
[paper] [code]

Factual Probing Is [MASK]: Learning vs. Learning to Recall.
Zexuan Zhong, Dan Friedman, Danqi Chen.
(NAACL 2021)
[paper] [code]

Learning How to Ask: Querying LMs with Mixtures of Soft Prompts.
Guanghui Qin, Jason Eisner.
(NAACL 2021)
[paper]
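
Soft prompts replace discrete template tokens with learned continuous vectors while the LM stays frozen; the paper additionally trains a mixture of such prompts. A minimal single-prompt sketch (hyperparameters and the toy one-example loop are illustrative):

```python
# A single soft prompt prepended to a frozen masked LM, trained to
# predict the object token. The paper trains a *mixture* of prompts
# with learned mixture weights; this sketch shows one prompt only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
for p in mlm.parameters():
    p.requires_grad_(False)                        # freeze the LM

d = mlm.config.hidden_size
soft = torch.nn.Parameter(torch.randn(4, d) * 0.02)  # 4 prompt vectors
opt = torch.optim.Adam([soft], lr=1e-3)

def loss_for(subject, obj):
    ids = tok(f"{subject} {tok.mask_token}.", return_tensors="pt")
    tok_emb = mlm.get_input_embeddings()(ids.input_ids)   # (1, L, d)
    # Insert the soft prompt right after [CLS].
    x = torch.cat([tok_emb[:, :1], soft.unsqueeze(0), tok_emb[:, 1:]], dim=1)
    logits = mlm(inputs_embeds=x).logits
    mask_pos = (ids.input_ids == tok.mask_token_id).nonzero()[0, 1] + soft.shape[0]
    gold = torch.tensor([tok.convert_tokens_to_ids(obj)])
    return torch.nn.functional.cross_entropy(logits[0, mask_pos][None], gold)

for step in range(100):                            # toy one-example loop
    opt.zero_grad()
    loss_for("Paris", "France").backward()
    opt.step()
```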

How Can We Know When Language Models Know? On the Calibration of Language Models for Question Answering.
Zhengbao Jiang, Jun Araki, Haibo Ding, Graham Neubig.
(TACL 2021)
[paper] [code]
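
A standard way to quantify "knowing when it knows" is expected calibration error (ECE), which compares confidence with accuracy within confidence bins. A self-contained sketch over made-up (confidence, correctness) pairs:

```python
# Expected calibration error with the standard 10-bin scheme.
# The numbers at the bottom are fake, for illustration only.
import numpy as np

def ece(conf, correct, n_bins=10):
    conf = np.asarray(conf)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            total += in_bin.mean() * gap           # bin weight * |acc - conf|
    return total

print(ece(conf=[0.9, 0.8, 0.95, 0.4], correct=[1, 0, 1, 1]))
```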

Measuring and Improving Consistency in Pretrained Language Models.
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg.
(TACL 2021)
[paper]
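
Here consistency means that paraphrased prompts for the same fact yield the same prediction, as in the paper's ParaRel resource. A sketch that scores agreement across paraphrase pairs, given any cloze-probe prediction function (the function itself is a stand-in):

```python
# Consistency in the ParaRel sense: the fraction of paraphrase pairs
# whose top predictions agree. `predict` is a stand-in for any cloze
# probe, e.g. the fill-mask sketch above; templates are illustrative.
from itertools import combinations

def consistency(predict, subject, paraphrases):
    """predict: str prompt -> str answer; returns pairwise agreement."""
    preds = [predict(t.replace("[X]", subject)) for t in paraphrases]
    pairs = list(combinations(preds, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

templates = [
    "[X] was born in [MASK].",
    "[X] is a native of [MASK].",
    "The birthplace of [X] is [MASK].",
]
# Example: consistency(my_cloze_probe, "Dante", templates)
```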

Knowledge Neurons in Pretrained Transformers.
Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Furu Wei.
(arXiv 2021)
[paper]

Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases.
Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu.
(ACL-IJCNLP 2021)
[paper] [code]

Syntactic Knowledge Probes

A Structural Probe for Finding Syntax in Word Representations.
John Hewitt, Christopher D. Manning.
(NAACL 2019)
[paper]
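
The structural probe learns a linear map B such that squared distances ||B(h_i - h_j)||^2 between contextual word vectors approximate syntactic tree distances. A minimal sketch of the probe and its L1 training step (data loading is omitted; dimensions are illustrative):

```python
# The Hewitt-Manning distance probe in a few lines: fit a low-rank
# linear map B so that ||B(h_i - h_j)||^2 regresses onto gold parse
# tree distances. Treebank loading and batching are omitted.
import torch

d_model, rank = 768, 64                       # illustrative dimensions
B = torch.nn.Parameter(torch.randn(rank, d_model) * 0.01)
opt = torch.optim.Adam([B], lr=1e-3)

def probe_distances(H):
    """H: (n_words, d_model) -> (n_words, n_words) squared distances."""
    x = H @ B.T                               # (n, rank)
    diff = x.unsqueeze(0) - x.unsqueeze(1)    # (n, n, rank)
    return (diff ** 2).sum(-1)

def train_step(H, tree_dist):
    """One L1 regression step against gold tree distances (n, n)."""
    opt.zero_grad()
    loss = (probe_distances(H) - tree_dist).abs().mean()
    loss.backward()
    opt.step()
    return loss.item()
```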

A Non-Linear Structural Probe.
Jennifer C. White, Tiago Pimentel, Naomi Saphra, Ryan Cotterell.
(NAACL 2021)
[paper]