UC Berkeley Cognition Seminar, Spring 2024, 2 Units
- Fridays, 11am - 1pm, Berkeley Way West, Room 1213
- Instructor: Bill Thompson wdt@berkeley.edu
- Course website: github.com/ecl-ucb/290Q
- In the course catalogue: classes.berkeley.edu
- Course paper due: Friday May 3rd [tentative]
Large language models (LLMs) are machine learning systems that generate text. Recent models (e.g. OpenAI's GPT-4, Google DeepMind's Gemini) appear to exhibit capabilities in communication and reasoning that are more open-ended and human-like than those of any previous technology. As a result of these apparent capabilities, LLMs are often claimed to be on the verge of transformative impacts on education, science, art, business, and society. But how justified are these claims? What are LLMs? What can they actually do? How should we conceptualize these technologies and evaluate their capabilities?
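To make "generate text" concrete before any of the debates begin: an LLM repeatedly predicts a probability distribution over possible next tokens and samples from it, one token at a time. The minimal sketch below uses GPT-2, a small, openly available predecessor of the models named above, purely as an illustration; no programming is required for this course (see the note on scope below), and the prompt is an arbitrary example.

```python
# Minimal sketch of autoregressive text generation (illustrative only).
# GPT-2 stands in here for the much larger models discussed in this course.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completion = generator(
    "Large language models are",
    max_new_tokens=20,  # generate up to 20 additional tokens
    do_sample=True,     # sample from the model's next-token distribution
)
print(completion[0]["generated_text"])
```

Everything the larger models do, from answering questions to role-playing personas, is at bottom produced by this same next-token loop, run at vastly greater scale and with additional fine-tuning.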
In this course, we will discuss recent research into the capabilities of LLMs and their applications to questions in cognitive and social science. What might LLMs teach us about human cognition? Can we use these technologies to do more impactful science? Can we use cognitive science to advance evidence-based, ethical, and equitable integration of LLMs into societies and economies? We will discuss these questions and others on Fridays over lunch, 11am - 1pm, at 2121 Berkeley Way West, Room 1213.
We will read one paper per week (or a few short papers -- several are very brief perspective pieces). Everyone reads the paper, one or more people present it, and we discuss. Some weeks we will invite speakers from cognitive science and industry to present their recent LLM research in person or over Zoom. Invited speaker schedule TBD.
A note on scope: this is a non-technical course. We will not focus on engineering considerations (model architectures, training, etc.) or do any data analysis or programming as part of the class. Attendees should be comfortable with basic concepts in experimental psychology, cognitive science, and data analysis, but no technical experience with language models is required.
Part 1: Introduction
Background on large language models (LLMs) and the wider social context of their application. What are large language models? Why so much excitement and fear? Is all this attention justified? How do LLMs work? Why are LLMs having such an outsized impact on society, science, education, and business? Part One will include a very brief introduction to large language models, a summary of their impacts on science and society so far, and some reflection on how LLMs are conceptualized by companies and scientists.
Part 2: LLMs as Subjects
What are the notable capabilities of LLMs? Can they reason? Can they count? Do they use stereotypes and biases to make judgments and give advice? Can they solve analogies or exhibit creativity? Do LLMs have rich social reasoning capabilities such as teaching, theory of mind, or information-seeking questioning strategies? Can an LLM have a personality? Can an LLM understand the visual world from language alone?
Questions such as these are the subject of intense research in cognitive and computer science. We will review some of this research and discuss questions such as: What relation does this research have to cognitive psychology? Is it appropriate to talk about the cognitive abilities of an LLM at all, or to compare them to those of other species? How do an LLM's abilities relate to human cognition? Are there alternative rhetorical frameworks (e.g. what computations can LLMs implement?) we could consider? What sort of experiments should be the standard for LLM research? Does current research meet these standards?
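To make one of these methodological questions concrete: several of the readings below (e.g. Dentella et al., who report an absence of response stability) suggest that a single response from a model is weak evidence of a capability. Here is a minimal sketch of a repeated-trials stability check; the client library, model name, test item, and trial count are illustrative placeholders, not class materials.

```python
# Sketch of a minimal response-stability check: present the same item
# repeatedly and report how often the modal answer occurs.
# Model name, test item, and trial count are illustrative placeholders.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

ITEM = ("Is the sentence 'The key to the cabinets are on the table' "
        "grammatical? Answer 'yes' or 'no' only.")
N_TRIALS = 20

answers = []
for _ in range(N_TRIALS):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": ITEM}],
        temperature=1.0,  # sampling left on, so instability can surface
    )
    answers.append(response.choices[0].message.content.strip().lower())

modal_answer, modal_count = Counter(answers).most_common(1)[0]
print(f"Modal answer: {modal_answer!r} ({modal_count}/{N_TRIALS} trials)")
```

Low stability on items like this is one reason several authors below argue that LLM evaluations need the same repeated-measures discipline as human experiments.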
Part 3: LLMs as Tools
In what ways can we incorporate LLMs into cognitive and social science research for the greater good? On the one hand, LLMs are not yet ready for important tasks such as editorial work (would you be comfortable with a language model reviewing your manuscript?). On the other hand, LLMs hold the potential to be incredibly useful tools for behavioral simulation, semantic data analysis, computational piloting, stimuli generation, and many more applications in science. LLMs are already replacing human participants in market research, the design of social media algorithms, and the creation of large-scale norming datasets, among many other applications that have historically been out of reach. Should we be excited or fearful? Or skeptical?
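To ground the "tools" framing, here is what machine-generated norming data can look like in practice, in the spirit of the typicality-norming case study by Heyman & Heyman in the reading list below. The client library, model name, category, and rating scale are illustrative placeholders; whether such data should ever substitute for human judgments is precisely what the Part 3 readings debate.

```python
# Sketch of LLM-based typicality norming (cf. Heyman & Heyman, 2023).
# Category, exemplars, rating scale, and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

CATEGORY = "bird"
EXEMPLARS = ["robin", "penguin", "ostrich", "sparrow"]

ratings = {}
for exemplar in EXEMPLARS:
    prompt = (
        f"On a scale from 1 (very atypical) to 7 (very typical), how "
        f"typical is a {exemplar} as a member of the category "
        f"'{CATEGORY}'? Respond with a single number."
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep ratings as stable as possible
    )
    ratings[exemplar] = response.choices[0].message.content.strip()

print(ratings)
```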
Part 4: Implications for Cognitive Science and Cognitive Scientists
We will end by discussing some of the most contentious questions, theoretical and practical, that link cognitive science and Artificial Intelligence. What can we learn about human cognition from the successes and failures of large language models? Do we need more cognitive science in an age of intelligent machines, or less? What is the role of cognitive science in the evaluation and regulation of machine learning systems?
The goal for this seminar is to produce at least one perspective or review paper, co-authored by seminar participants. Depending on the number of participants, we may cluster into groups working on different papers or collaborate on a single piece. A full first draft of the paper is due Friday May 3rd. Participants who do not wish to co-author a paper can instead write a short single-authored reflection piece, review, or position paper relating to the intersection of language models and their own research (approximately 1500 words). I would also be happy to consider supervising relevant research projects as a substitute for this requirement.
The schedule below is tentative because the selection and sequence of readings may vary as a function of class participants' interests and backgrounds. Below the schedule is a larger collection of papers for potential substitution into the schedule (I will keep adding new papers to this collection as they come out -- feel free to send me suggestions for additional papers).
Date | Topic | Presenter | Reading |
---|---|---|---|
2024-01-19 | Introduction & Course Overview | Bill Thompson | OpenAI: Introducing ChatGPT & Google CEO Sundar Pichai on the coming age of AI |
2024-01-26 | Scene Setting: what are the stakes? | Guest Speaker: Mayank Agrawal, Co-founder of Roundtable.ai | Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences. Crockett, M., & Messeri, L. (2023). Should large language models replace human participants? PsyArXiv preprint. And/or Harding, J., D'Alessandro, W., Laskowski, N. G., & Long, R. (2023). AI language models cannot replace human research participants. AI & SOCIETY, 1-3. |
2024-02-02 | Scene Setting: what are the stakes? | Melanie Mitchell, Prof. at SFI, speaking @ the Kadish Seminar | Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120. |
2024-02-09 | Models of who? | Bill Thompson | Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which humans? See also AnthroScore |
Date | Topic | Presenter | Reading |
---|---|---|---|
2024-02-16 | Capabilities: Agency | Josh Tenenbaum, Prof. at MIT, speaking @ the Kadish Seminar | Paul, L. A., Ullman, T., De Freitas, J., & Tenenbaum, J. (2023). Reverse-engineering the self. |
2024-02-23 | Capabilities: Reasoning | Alyson Wong | Stevenson, C. E., ter Veen, M., Choenni, R., van der Maas, H. L., & Shutova, E. (2023). Do large language models solve verbal analogies like children do? arXiv preprint arXiv:2310.20384. |
2024-03-01 | LLMs as models of People | Fei Dai & Mingyu Yuan | Binz, M., & Schulz, E. (2023). Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917. Agnew, W., Bergman, A. S., Chien, J., Díaz, M., El-Sayed, S., Pittman, J., ... & McKee, K. R. (2024). The illusion of artificial inclusion. arXiv preprint arXiv:2401.08572. |
2024-03-08 | Grounding and Embodiment in Intelligence & LLMs | Guest Speaker: Ishita Dasgupta, Research Scientist at Google DeepMind | Background reading: Alayrac, J. B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., ... & Simonyan, K. (2022). Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35, 23716-23736. Optional additional reading: Dasgupta, I., Lampinen, A. K., Chan, S. C., Creswell, A., Kumaran, D., McClelland, J. L., & Hill, F. (2022). Language models show human-like content effects on reasoning. arXiv preprint arXiv:2207.07051. |
2024-03-15 | Perspectives: What are language models? | Sophie Regan & Jing-Jing Li | McCoy, R. T., Yao, S., Friedman, D., Hardy, M., & Griffiths, T. L. (2023). Embers of autoregression: Understanding large language models through the problem they are trained to solve. arXiv preprint arXiv:2309.13638. Momennejad, I., Hasanbeig, H., Vieira Frujeri, F., Sharma, H., Jojic, N., Palangi, H., ... & Larson, J. (2024). Evaluating cognitive maps and planning in large language models with CogEval. Advances in Neural Information Processing Systems, 36. |
2024-03-29 | Perspectives: interacting with a language model | Ti-Fen Pan | Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 1-6. Also: some great podcasts for Spring recess (optional): Ellie Pavlick on Brain Inspired & Raphaël Millière on Mindscape & Murray Shanahan on Many Minds |
Date | Topic | Presenter | Reading |
---|---|---|---|
2024-04-05 | Causal Understanding from Passive Training? | Andrew Lampinen, Google DeepMind | Lampinen, A., Chan, S., Dasgupta, I., Nam, A., & Wang, J. (2024). Passive learning of active causal strategies in agents and language models. Advances in Neural Information Processing Systems, 36. |
2024-04-12 | Real-world Planning with LLMs | Vijay Ramesh, VP of AI @ Regrello | Kambhampati, S., Valmeekam, K., Guan, L., Stechly, K., Verma, M., Bhambri, S., ... & Murthy, A. (2024). LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks. arXiv preprint arXiv:2402.01817. |
Date | Topic | Presenter | Reading |
---|---|---|---|
2024-04-19 | Reflections | | Binz, M., Alaniz, S., Roskies, A., Aczel, B., Bergstrom, C. T., Allen, C., ... & Schulz, E. (2023). How should the advent of large language models affect the practice of science? arXiv preprint arXiv:2312.03759. Gary Lupyan on Metaphors for LLMs (~20 minute audio presentation). Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which humans? |
2024-04-26 | Outlook and Conclusions | | Buttrick, N. (2024). Studying large language models as compression algorithms for human culture. Trends in Cognitive Sciences. Frank, M. C. (2023). Openly accessible LLMs can help us to understand human cognition. Nature Human Behaviour. |
2024-05-03 | Closing thoughts & Course Paper Submission Deadline | | Noah Goodman on LLMs and future psychology |
Here is a list of potential readings, approximately organized according to the structure of the course. Participants can select from the readings below or suggest alternatives as replacements for papers listed in the tentative schedule above.
Part 1: Introduction
- Dillion, D., Tandon, N., Gu, Y., & Gray, K. (2023). Can AI language models replace human participants? Trends in Cognitive Sciences.
- Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222-1223.
- Lindsay, G. W. (2023). LLMs are not ready for editorial work. Nature Human Behaviour.
- Meta Fundamental AI Research Diplomacy Team (FAIR)†, Bakhtin, A., Brown, N., Dinan, E., Farina, G., Flaherty, C., ... & Zijlstra, M. (2022). Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science, 378(6624), 1067-1074.
- Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bao, M. (2022, June). The values encoded in machine learning research. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 173-184).
- Millière, R., & Rathkopf, C. (2023). Why it's important to remember that AI isn't human. Vox piece.
- Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which humans?
- Heyman, T., & Heyman, G. (2023). The impact of ChatGPT on human data collection: A case study involving typicality norming data. Behavior Research Methods, 1-8.
Part 2: LLMs as Subjects
(How) Do LLMs do X?
- Dentella, V., Günther, F., & Leivada, E. (2023). Systematic testing of three Language Models reveals low language accuracy, absence of response stability, and a yes-response bias. Proceedings of the National Academy of Sciences, 120(51), e2309583120.
- Stevenson, C. E., ter Veen, M., Choenni, R., van der Maas, H. L., & Shutova, E. (2023). Do large language models solve verbal analogies like children do? arXiv preprint arXiv:2310.20384.
- Eisape, T., Tessler, M. H., Dasgupta, I., Sha, F., van Steenkiste, S., & Linzen, T. (2023). A Systematic Comparison of Syllogistic Reasoning in Humans and Language Models. arXiv preprint arXiv:2311.00445.
- Han, S. J., Ransom, K. J., Perfors, A., & Kemp, C. (2024). Inductive reasoning in humans and large language models. Cognitive Systems Research, 83, 101155.
- Gupta, S., Shrivastava, V., Deshpande, A., Kalyan, A., Clark, P., Sabharwal, A., & Khot, T. (2023). Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs. arXiv preprint arXiv:2311.04892.
- Trott, S., Jones, C., Chang, T., Michaelov, J., & Bergen, B. (2023). Do Large Language Models know what humans know? Cognitive Science, 47(7), e13309.
- Yamakoshi, T., McClelland, J. L., Goldberg, A. E., & Hawkins, R. D. (2023). Causal interventions expose implicit situation models for commonsense language understanding. arXiv preprint arXiv:2306.03882.
- Safdari, M., Serapio-García, G., Crepy, C., Fitz, S., Romero, P., Sun, L., ... & Matarić, M. (2023). Personality traits in large language models. arXiv preprint arXiv:2307.00184.
- Griffin, L., Kleinberg, B., Mozes, M., Mai, K., Vau, M. D. M., Caldwell, M., & Mavor-Parker, A. (2023, July). Large Language Models respond to Influence like Humans. In Proceedings of the First Workshop on Social Influence in Conversations (SICon 2023) (pp. 15-24).
- Webb, T., Holyoak, K. J., & Lu, H. (2023). Emergent analogical reasoning in large language models. Nature Human Behaviour, 7(9), 1526-1541.
- Dasgupta, I., Lampinen, A. K., Chan, S. C., Creswell, A., Kumaran, D., McClelland, J. L., & Hill, F. (2022). Language models show human-like content effects on reasoning. arXiv preprint arXiv:2207.07051.
- Ruis, L., Khan, A., Biderman, S., Hooker, S., Rocktäschel, T., & Grefenstette, E. (2022). Large language models are not zero-shot communicators. arXiv preprint arXiv:2210.14986.
- Creswell, A., Shanahan, M., & Higgins, I. (2022). Selection-inference: Exploiting large language models for interpretable logical reasoning. arXiv preprint arXiv:2205.09712.
- Hu, X., Storks, S., Lewis, R. L., & Chai, J. (2023). In-Context Analogical Reasoning with Pre-Trained Language Models. arXiv preprint arXiv:2305.17626.
- Lampinen, A. K., Dasgupta, I., Chan, S. C., Matthewson, K., Tessler, M. H., Creswell, A., ... & Hill, F. (2022). Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329.
- Kıcıman, E., Ness, R., Sharma, A., & Tan, C. (2023). Causal reasoning and large language models: Opening a new frontier for causality. arXiv preprint arXiv:2305.00050.
- Valmeekam, K., Marquez, M., Sreedharan, S., & Kambhampati, S. (2023). On the Planning Abilities of Large Language Models -- A Critical Investigation. arXiv preprint arXiv:2305.15771.
- Marjieh, R., Sucholutsky, I., van Rijn, P., Jacoby, N., & Griffiths, T. L. (2023). What language reveals about perception: Distilling psychophysical knowledge from large language models. arXiv preprint arXiv:2302.01308.
- Gandhi, K., Fränken, J. P., Gerstenberg, T., & Goodman, N. D. (2023). Understanding social reasoning in language models with language models. arXiv preprint arXiv:2306.15448.
- Prystawski, B., & Goodman, N. D. (2023). Why think step-by-step? Reasoning emerges from the locality of experience. arXiv preprint arXiv:2304.03843.
- Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., ... & Sutton, C. (2021). Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
- Feng, J., & Steinhardt, J. (2023). How do Language Models Bind Entities in Context? arXiv preprint arXiv:2310.17191.
Assessing LLMs
- Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120.
- Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., ... & Wang, G. (2022). Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615.
- Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., ... & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.
- Shiffrin, R., & Mitchell, M. (2023). Probing the psychology of AI models. Proceedings of the National Academy of Sciences, 120(10), e2300963120.
- Ivanova, A. A. (2023). Running cognitive evaluations on large language models: The do's and the don'ts. arXiv preprint arXiv:2312.01276.
- Stella, M., Hills, T. T., & Kenett, Y. N. (2023). Using cognitive psychology to understand GPT-like models needs to extend beyond human biases. Proceedings of the National Academy of Sciences, 120(43), e2312911120.
- Kosoy, E., Reagan, E. R., Lai, L., Gopnik, A., & Cobb, D. K. (2023). Comparing Machines and Children: Using Developmental Psychology Experiments to Assess the Strengths and Weaknesses of LaMDA Responses. arXiv preprint arXiv:2305.11243.
Knowledge from Language
- Ha, D., & Schmidhuber, J. (2018). World models. arXiv preprint arXiv:1803.10122.
- Li, K., Hopkins, A. K., Bau, D., Viégas, F., Pfister, H., & Wattenberg, M. (2022). Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382.
- Misra, K., Rayz, J. T., & Ettinger, A. (2022). COMPS: Conceptual minimal pair sentences for testing property knowledge and inheritance in pre-trained language models. arXiv preprint arXiv:2210.01963.
- Gurnee, W., & Tegmark, M. (2023). Language models represent space and time. arXiv preprint arXiv:2310.02207.
- Meng, K., Bau, D., Andonian, A., & Belinkov, Y. (2022). Locating and editing factual associations in GPT. Advances in Neural Information Processing Systems, 35, 17359-17372.
- Wang, R., Todd, G., Yuan, E., Xiao, Z., Côté, M. A., & Jansen, P. (2023). ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games. arXiv preprint arXiv:2305.14879.
- Patel, R., & Pavlick, E. (2021, October). Mapping language models to grounded conceptual spaces. In International Conference on Learning Representations.
- Qiu, Y., Zhao, Z., Ziser, Y., Korhonen, A., Ponti, E. M., & Cohen, S. B. (2023). Are Large Language Models Temporally Grounded? arXiv preprint arXiv:2311.08398.
- Li, B. Z., Nye, M., & Andreas, J. (2021). Implicit representations of meaning in neural language models. arXiv preprint arXiv:2106.00737.
- Hazineh, D. S., Zhang, Z., & Chiu, J. (2023). Linear Latent World Models in Simple Transformers: A Case Study on Othello-GPT. arXiv preprint arXiv:2310.07582.
- McGrath, T., Kapishnikov, A., Tomašev, N., Pearce, A., Wattenberg, M., Hassabis, D., ... & Kramnik, V. (2022). Acquisition of chess knowledge in AlphaZero. Proceedings of the National Academy of Sciences, 119(47), e2206625119.
- DeLeo, M., & Guven, E. (2022). Learning Chess With Language Models and Transformers. arXiv preprint arXiv:2209.11902.
- Wong, L., Grand, G., Lew, A. K., Goodman, N. D., Mansinghka, V. K., Andreas, J., & Tenenbaum, J. B. (2023). From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought. arXiv preprint arXiv:2306.12672.
Perspectives
- Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI's large language models. Proceedings of the National Academy of Sciences, 120(13), e2215907120.
- Frank, M. C. (2023). Openly accessible LLMs can help us to understand human cognition. Nature Human Behaviour.
- Van Dis, E. A., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614(7947), 224-226.
- Chemero, A. (2023). LLMs differ from human cognition because they are not embodied. Nature Human Behaviour.
- Noah Goodman (2023). LLMs and Future Psychology.
- Abdurahman, S., Atari, M., Karimi-Malekabadi, F., Xue, M. J., Trager, J., Park, P. S., ... & Dehghani, M. (2023, November 15). Perils and Opportunities in Using Large Language Models in Psychological Research. OSF preprint.
- Demszky, D., Yang, D., Yeager, D. S., et al. (2023). Using large language models in psychology. Nature Reviews Psychology, 2, 688-701.
- Shanahan, M., McDonell, K., & Reynolds, L. (2023). Role play with large language models. Nature, 1-6.
- Frank, M. C. (2023). Bridging the data gap between children and large language models. Trends in Cognitive Sciences.
- Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., & Fedorenko, E. (2023). Dissociating language and thought in large language models: a cognitive perspective. arXiv preprint arXiv:2301.06627.
- Andreas, J. (2022). Language models as agent models. arXiv preprint arXiv:2212.01681.
- Blank, I. A. (2023). What are large language models supposed to model? Trends in Cognitive Sciences.
- McCoy, R. T., Yao, S., Friedman, D., Hardy, M., & Griffiths, T. L. (2023). Embers of autoregression: Understanding large language models through the problem they are trained to solve. arXiv preprint arXiv:2309.13638.
- Piantadosi, S. T. (2023). Modern language models refute Chomsky's approach to language.
- Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610-623).
- Buschoff, L. M. S., Akata, E., Bethge, M., & Schulz, E. (2023). Have we built machines that think like people? arXiv preprint arXiv:2311.16093.
- Lederman, H., & Mahowald, K. (2024). Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs. arXiv preprint arXiv:2401.04854.
Relevant Engineering/ML Papers
- Fu, Y., Peng, H., Ou, L., Sabharwal, A., & Khot, T. (2023). Specializing Smaller Language Models towards Multi-Step Reasoning. arXiv preprint arXiv:2301.12726.
- Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., & Narasimhan, K. (2023). Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601.
- Liu, A., Wu, Z., Michael, J., Suhr, A., West, P., Koller, A., ... & Choi, Y. (2023). We're Afraid Language Models Aren't Modeling Ambiguity. arXiv preprint arXiv:2304.14399.
- Schaeffer, R., Miranda, B., & Koyejo, S. (2023). Are emergent abilities of Large Language Models a mirage? arXiv preprint arXiv:2304.15004.
- Lampinen, A. K., Chan, S. C., Dasgupta, I., Nam, A. J., & Wang, J. X. (2023). Passive learning of active causal strategies in agents and language models. arXiv preprint arXiv:2305.16183.
- Lee, N., Sreenivasan, K., Lee, J. D., Lee, K., & Papailiopoulos, D. (2023). Teaching arithmetic to small transformers. arXiv preprint arXiv:2307.03381.
Part 3: LLMs as Tools
- Li, B. Z., Tamkin, A., Goodman, N., & Andreas, J. (2023). Eliciting Human Preferences with Language Models. arXiv preprint arXiv:2310.11589.
- Rathje, S., Mirea, D. M., Sucholutsky, I., Marjieh, R., Robertson, C., & Van Bavel, J. J. (2023). GPT is an effective tool for multilingual psychological text analysis. OSF preprint.
- Liu, R., Yen, H., Marjieh, R., Griffiths, T. L., & Krishna, R. (2023). Improving Interpersonal Communication by Simulating Audiences with Language Models. arXiv preprint arXiv:2311.00687.
- Törnberg, P., Valeeva, D., Uitermark, J., & Bail, C. (2023). Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms. arXiv preprint arXiv:2310.05984.
- Aher, G., Arriaga, R. I., & Kalai, A. T. (2022). Using large language models to simulate multiple humans. arXiv preprint arXiv:2208.10264.
- Wang, R. E., Zhang, Q., Robinson, C., Loeb, S., & Demszky, D. (2023). Step-by-Step Remediation of Students' Mathematical Mistakes. arXiv preprint arXiv:2310.10648.
- Argyle, L. P., Bail, C. A., Busby, E. C., Gubler, J. R., Howe, T., Rytting, C., ... & Wingate, D. (2023). Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale. Proceedings of the National Academy of Sciences, 120(41), e2311627120.
- Park, J. S., O'Brien, J., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023, October). Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (pp. 1-22).
- Tjuatja, L., Chen, V., Wu, S. T., Talwalkar, A., & Neubig, G. (2023). Do LLMs exhibit human-like response biases? A case study in survey design. arXiv preprint arXiv:2311.04076.
- Bakker, M., Chadwick, M., Sheahan, H., Tessler, M., Campbell-Gillingham, L., Balaguer, J., ... & Summerfield, C. (2022). Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems, 35, 38176-38189.
- Markel, J. M., Opferman, S. G., Landay, J. A., & Piech, C. (2023). GPTeach: Interactive TA Training with GPT Based Students.
Part 4: Implications for Cognitive Science and Cognitive Scientists
Theoretical Implications?
- Fernyhough, C., & Borghi, A. M. (2023). Inner speech as language process and cognitive tool. Trends in Cognitive Sciences.
- Stokel-Walker, C., & Van Noorden, R. (2023). What ChatGPT and generative AI mean for science. Nature, 614(7947), 214-216.
- Grossmann, I., Feinberg, M., Parker, D. C., Christakis, N. A., Tetlock, P. E., & Cunningham, W. A. (2023). AI and the transformation of social science research. Science, 380(6650), 1108-1109.
- van Rooij, I., Guest, O., Adolfi, F. G., de Haan, R., Kolokolova, A., & Rich, P. (2023). Reclaiming AI as a theoretical tool for cognitive science. PsyArXiv preprint.
Practical Implications?
- Biever, C. (2023). ChatGPT broke the Turing test - the race is on for new ways to assess AI. Nature, 619(7971), 686-689.