
AI Psychology Starter

Code templates to get started as an AI psychologist.

Interpretability research is sometimes described as neuroscience for ML models. Neuroscience is one approach to understanding how human brains work, but empirical psychology research is another. I think more people should engage in the analogous activity for language models: trying to figure out how they work just by looking at their behavior, rather than by inspecting their internals.

This repository was created as part of the Language Model Hackathon, and several projects were submitted that you can check out.

Inspiration

Starter code

R starter code. Contains a small test experiment along with a standardized way to get responses out of the API. See R-starter.Rmd.

Python starter code. Contains the same test experiment as the R Markdown starter. See Python-starter.ipynb (it can run in the browser using Google Colab).
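
If you just want the gist, here is a minimal sketch of a standardized API call, assuming the pre-1.0 `openai` Python package and an `OPENAI_API_KEY` environment variable (the model name below is an illustrative choice, not something the starter code prescribes):

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def get_response(prompt, model="text-davinci-002", max_tokens=64):
    """Send a prompt to the completions endpoint and return the generated text."""
    response = openai.Completion.create(
        model=model,          # illustrative model choice
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=0,        # keep outputs repeatable across experiment runs
    )
    return response["choices"][0]["text"].strip()

print(get_response("Q: Is the moon made of cheese?\nA:"))
```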

No-code experimental kit. See the template here.

Text analysis. Shows some ways to extract quantitative information from text, e.g. word frequency, TF-IDF, word embeddings, and topics.
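
For example, a small scikit-learn sketch of word frequency and TF-IDF extraction (the toy responses are placeholders):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

responses = [
    "I think the answer is yes.",
    "The answer is clearly no.",
    "I am not sure about the answer.",
]

# Raw word frequencies across all responses
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(responses)
print(dict(zip(vectorizer.get_feature_names_out(), counts.toarray().sum(axis=0))))

# TF-IDF down-weights words that appear in every response
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(responses)
print(dict(zip(tfidf.get_feature_names_out(), weights.toarray()[0].round(2))))
```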

Scaling plots. From the Inverse Scaling Prize. See the instructions page for how to use it; it lets you generate plots that show how model performance scales with parameter count.

Colab to test your data for inverse scaling: https://colab.research.google.com/drive/1IEXWy9aJaOdVKiy29LxlF-0vw9Cx-hi2
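
If you would rather plot a scaling trend by hand, a matplotlib sketch like the following works; the accuracy numbers are made-up placeholders illustrating an inverse scaling trend, not real results:

```python
import matplotlib.pyplot as plt

param_counts = [3.5e8, 1.3e9, 6.7e9, 1.75e11]  # e.g. GPT-3 family sizes
accuracy = [0.71, 0.64, 0.58, 0.52]            # placeholder values showing inverse scaling

plt.plot(param_counts, accuracy, marker="o")
plt.xscale("log")  # parameter counts span several orders of magnitude
plt.xlabel("Parameters")
plt.ylabel("Accuracy")
plt.title("Task accuracy vs. model size (placeholder data)")
plt.show()
```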

Data

Inverse scaling round 1 winning datasets

The winning datasets from the first round of the Inverse Scaling Prize.

https://drive.google.com/drive/u/1/folders/1mHrPQlfB3-pfwB3iBAKheIEO3EBAb_qg 

Inverse scaling

The inverse-scaling folder contains many small datasets that can serve as inspiration, e.g. biased statements, cognitive biases, sentiment analysis, and more.

https://github.com/inverse-scaling/prize/ 

Harmless and Helpful language model

A large list of "chosen" and "rejected" pairs of texts: a human received two language model outputs and selected the preferred one. The data is in jsonl format, so you can read it with a few lines of Python or open it in VS Code.

See the containing folder.

https://github.com/anthropics/hh-rlhf 
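
A minimal sketch for loading the pairs in Python; the "chosen"/"rejected" keys follow the repository's format, but the file path below is an assumption (substitute whichever split you downloaded):

```python
import json

pairs = []
with open("train.jsonl") as f:  # assumed path; use the file you downloaded
    for line in f:
        record = json.loads(line)
        pairs.append((record["chosen"], record["rejected"]))

print(len(pairs), "preference pairs loaded")
print(pairs[0][0][:200])  # first 200 characters of the first chosen text
```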

Red teaming dataset

Contains many human attempts at tripping up a language model and getting it to answer in harmful ways.

red_team_attempts.jsonl

https://github.com/anthropics/hh-rlhf 
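
Since only the .jsonl extension is given here, a hedged loader that handles both one-object-per-line files and a single JSON array may save you a debugging round:

```python
import json

with open("red_team_attempts.jsonl") as f:
    text = f.read()

try:
    attempts = json.loads(text)  # file is one big JSON value
except json.JSONDecodeError:
    attempts = [json.loads(line) for line in text.splitlines() if line.strip()]

print(len(attempts), "red-team attempts")
print(sorted(attempts[0].keys()))  # inspect the record fields before relying on them
```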

TruthfulQA

This repository contains code for evaluating model performance on the TruthfulQA benchmark. The full set of benchmark questions and reference answers is contained in TruthfulQA.csv. The paper introducing the benchmark can be found here.

https://github.com/sylinrl/TruthfulQA/blob/main/data/v0/TruthfulQA.csv
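
A quick pandas sketch for loading the questions; inspect the columns first rather than trusting any assumed names:

```python
import pandas as pd

df = pd.read_csv("TruthfulQA.csv")
print(len(df), "questions")
print(df.columns.tolist())  # check which question/reference-answer columns exist
```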

MathQA

This is the official repo for the ACL 2022 paper "Learning to Reason Deductively: Math Word Problem Solving as Complex Relation Extraction". The text describes free-form world states for elementary school math problems.

https://github.com/allanj/deductive-mwp

Language models are few-shot learners

Language models can learn a task from training examples placed directly in their prompt (few-shot learning), with no weight updates.

Data: https://github.com/openai/gpt-3/tree/master/data

https://github.com/openai/gpt-3
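
A minimal sketch of the idea: the "training examples" are simply prepended to the prompt, and the model infers the task from them (the sentiment task below is an illustrative choice):

```python
# In-context "training set": two labeled examples, then the query.
few_shot_prompt = """Review: I loved this movie!
Sentiment: positive

Review: Complete waste of two hours.
Sentiment: negative

Review: The acting was superb and the plot was gripping.
Sentiment:"""

# Sending this to a completions endpoint (e.g. via the get_response helper
# sketched under Starter code) should yield "positive".
print(few_shot_prompt)
```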

Moral Uncertainty

We provide a dataset containing a mix of clear-cut (wrong or not-wrong) and morally ambiguous scenarios where a first-person character describes actions they took in some setting. The scenarios are often long (usually multiple paragraphs, up to 2,000 words) and involve complex social dynamics. Each scenario has a label which indicates whether, according to commonsense moral judgments, the first-person character should not have taken that action.

Our dataset was collected from a website where posters describe a scenario and users vote on whether the poster was in the wrong. Clear-cut scenarios are ones where the voter agreement rate is 95% or more, while ambiguous scenarios have 50% ± 10% agreement. All scenarios have at least 100 total votes.

https://github.com/JunShern/moral-uncertainty#dataset

https://moraluncertainty.mlsafety.org/ 

IMDB dataset

This dataset contains a large number of movie reviews and their associated ratings. It is classically used to train sentiment analysis models, but maybe you can find something fun to do with it!

See the containing folder.
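
For instance, a classic bag-of-words baseline with scikit-learn; the toy reviews below stand in for the real dataset:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Great film, would watch again",
    "Terrible pacing and wooden acting",
    "An instant classic",
    "I want my money back",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a logistic regression classifier
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)
print(model.predict(["A great film and an instant classic"]))  # should print [1]
```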