CognitiveSkills

Analyzing Cognitive Skills in NLP: A Comparative Study of Model Predictions vs. Human Performance in Language Prediction Tasks


Abstract

Natural Language Processing (NLP) models have made remarkable strides, demonstrating strong linguistic capabilities across a spectrum of tasks, including question answering, sentence comprehension, summarization, commonsense reasoning, and translation. However, understanding the extent of their cognitive capabilities remains a challenging endeavor. A pivotal question arises: to what degree do the mechanisms that underlie language comprehension in humans correspond to those employed by language models? In this study, we aim to uncover the similarities and disparities between these two entities at different checkpoints of a trained language model. We do this by studying the cognitive skills developed by LLMs, exemplified by MultiBERTs, in analyzing complex sentence structures, and by comparing the models' predictions with human performance. Our preliminary results indicate that the models outperformed human participants in specific linguistic scenarios.
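
As a rough illustration of the comparison described above, the sketch below probes several MultiBERTs pre-training checkpoints on a cloze-style prediction task and checks whether a hypothetical majority human completion appears among each checkpoint's top predictions. The checkpoint names follow the public MultiBERTs release on the Hugging Face Hub; the example sentence, the `human_choice` value, and the particular checkpoints chosen are illustrative assumptions, not artifacts of this study.

```python
# Minimal sketch (not the study's actual pipeline): compare masked-token
# predictions from MultiBERTs checkpoints against a human completion.
from transformers import pipeline

# Intermediate pre-training checkpoints for seed 0 (assumed naming scheme
# from the MultiBERTs release on the Hugging Face Hub).
CHECKPOINTS = [
    "google/multiberts-seed_0-step_20k",
    "google/multiberts-seed_0-step_200k",
    "google/multiberts-seed_0-step_2000k",
]

# Cloze sentence with a complex (object-relative) structure; [MASK] marks
# the position where model and human predictions are compared.
sentence = "The author that the critics praised wrote a [MASK] novel."
human_choice = "new"  # hypothetical majority human completion

for ckpt in CHECKPOINTS:
    fill = pipeline("fill-mask", model=ckpt)
    top = fill(sentence, top_k=5)
    predictions = [p["token_str"].strip() for p in top]
    print(f"{ckpt}: top-5 = {predictions} | "
          f"matches human choice: {human_choice in predictions}")
```

Tracking whether the human-preferred completion enters the model's top-k across successive checkpoints gives one simple view of when, during pre-training, a given cognitive skill emerges.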