EmbeddingRegression

Repository for paper "Embedding Regression: Models for Context-Specific Description and Inference"

MIT License


Paper and related materials for Rodriguez, Spirling, and Stewart (2022). The abstract is as follows:

Social scientists commonly seek to make statements about how word use varies over circumstances—including time, partisan identity, or some other document-level covariate. For example, researchers might wish to know how Republicans and Democrats diverge in their understanding of the term "immigration." Building on the success of pretrained language models, we introduce the à la carte on text (conText) embedding regression model for this purpose. This fast and simple method produces valid vector representations of how words are used—and thus what words "mean"—in different contexts. We show that it outperforms slower, more complicated alternatives and works well even with very few documents. The model also allows for hypothesis testing and statements about statistical significance. We demonstrate that it can be used for a broad range of important tasks, including understanding US polarization, historical legislative development, and sentiment detection. We provide open-source software for fitting the model.

You can find the paper (open access) here and a non-technical explainer here.

The R software for fitting our models (the conText package) is here, along with a vignette and links to data sets.
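As a quick illustration, fitting an embedding regression with conText looks roughly like the following. This is a minimal sketch, not a verbatim excerpt from the package vignette: the sample objects (`cr_sample_corpus`, `cr_glove_subset`, `cr_transform`), covariate names, and argument defaults are assumptions based on the package's quick-start materials and may differ across versions.

```r
# Sketch only: object and argument names below are assumptions
# drawn from the conText quick-start guide, not guaranteed API.
library(conText)
library(quanteda)

# Tokenize the sample corpus that ships with the package
toks <- tokens(cr_sample_corpus, remove_punct = TRUE)

# Embedding regression: how does the usage of "immigration"
# vary with document-level covariates (here, party and gender)?
model <- conText(formula = immigration ~ party + gender,
                 data = toks,
                 pre_trained = cr_glove_subset,   # pretrained GloVe subset
                 transform = TRUE,
                 transform_matrix = cr_transform, # a la carte transform
                 bootstrap = TRUE,
                 num_bootstraps = 100,
                 permute = TRUE,
                 num_permutations = 100)

# Inspect normed coefficients (with bootstrap uncertainty and
# permutation-based p-values) printed by the fitted object
model
```

See the package vignette for the authoritative usage and for how to supply your own corpus, pretrained embeddings, and transformation matrix.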

This paper is now published in the American Political Science Review, but comments are still very welcome: please send us an email, or open an "Issue" here.