Not all quantifiers are equal: Probing Transformer-based language models’ understanding of generalised quantifiers
- Tharindu Madusanka
- Iqra Zahid
- Hao Li
- Ian Pratt-Hartmann
- Riza Batista-Navarro
How do different generalised quantifiers affect the behaviour of transformer-based language models (TLMs)? The recent popularity of TLMs and the central role that generalised quantifiers have traditionally played in linguistics and logic bring this question into particular focus. Prior work investigating this subject has not utilised a task defined purely in logical terms, and thus has not captured the underlying logical significance of generalised quantifiers; consequently, it has not answered the question above faithfully or adequately. We therefore investigate how different generalised quantifiers affect TLMs by employing a textual entailment problem defined in a purely logical sense, namely, model-checking with natural language. Our approach permits the automatic construction of datasets with respect to which we can assess the ability of TLMs to learn the meanings of generalised quantifiers. Our investigation reveals that TLMs can generally comprehend the logical semantics of the most common generalised quantifiers, but that distinct quantifiers influence TLMs in varying ways.
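The task the abstract refers to, model-checking with natural language, asks whether a given finite model satisfies a quantified sentence. The following is a minimal illustrative sketch of the truth conditions involved, not the authors' dataset-construction code: the quantifier inventory, function name, and toy model below are assumptions chosen for exposition.

```python
from typing import Set

def evaluate(quant: str, A: Set[str], B: Set[str], k: int = 0) -> bool:
    """Truth conditions of some common generalised quantifiers Q(A, B):
    decide whether 'Q As are Bs' holds in a finite model, given the set A
    of entities with the restrictor property and the set B with the scope
    property. (Illustrative sketch; not from the paper.)"""
    if quant == "every":
        return A <= B                      # A is a subset of B
    if quant == "some":
        return bool(A & B)                 # A and B overlap
    if quant == "no":
        return not (A & B)                 # A and B are disjoint
    if quant == "most":
        return len(A & B) > len(A - B)     # more As are Bs than are not
    if quant == "at_least":
        return len(A & B) >= k             # at least k As are Bs
    raise ValueError(f"unknown quantifier: {quant}")

# Hypothetical toy model: entities with the properties "artist" and "beekeeper".
artists = {"ann", "bob", "cam"}
beekeepers = {"ann", "bob"}

print(evaluate("most", artists, beekeepers))   # True: 2 of 3 artists keep bees
print(evaluate("every", artists, beekeepers))  # False: cam is not a beekeeper
```

Because each quantifier's truth condition is decidable over a finite model in this way, entailment instances can be generated and labelled automatically, which is what makes the dataset construction described in the abstract possible.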
@inproceedings{madusanka-etal-2023-quantifiers,
title = "Not all quantifiers are equal: Probing Transformer-based language models{'} understanding of generalised quantifiers",
author = "Madusanka, Tharindu and
Zahid, Iqra and
Li, Hao and
Pratt-Hartmann, Ian and
Batista-Navarro, Riza",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.536",
doi = "10.18653/v1/2023.emnlp-main.536",
pages = "8680--8692"
}