devevals

📊 Studying developmental evals on Pythia models

MIT License

Developmental Evaluations

Inspired by devinterp, we instead construct a developmental landscape based on performance on dangerous capabilities benchmarks.

This differs from layer-based nonlinear probes in that we investigate how capabilities develop throughout training.

This differs from developmental interpretability in that the developmental space is constructed from evaluation benchmark results.

The most comprehensive research in this direction is the evaluation work accompanying the Pythia model suite, which tracks e.g. SciQ accuracy over the number of tokens trained for all 8 models (70M through 12B).

Although the Pythia models were intended as an opportunity for in-depth developmental interpretability over a standardized training setup, surprisingly few papers have come out that investigate this. Mostly, researchers have evaluated models of varied sizes and origins to develop so-called "scaling laws".

Experimental design

We take a series of benchmarks used for evaluating dangerous capabilities and run them on the Pythia 12B model at the multiple checkpoints saved throughout its training.
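A minimal sketch of such an evaluation loop, assuming the EleutherAI lm-evaluation-harness (`lm_eval`, 0.4 or later) and the fact that Pythia checkpoints are published as Hugging Face branches named `step{N}`. The checkpoint steps and task list below are illustrative placeholders, not the exact benchmarks used here:

```python
# Sketch: scoring Pythia 12B checkpoints with the lm-evaluation-harness.
# Assumes `lm_eval` (EleutherAI lm-evaluation-harness >= 0.4) is installed.
# Checkpoint steps and tasks are illustrative, not the project's fixed choices.
import json

import lm_eval

CHECKPOINT_STEPS = [1000, 16000, 64000, 143000]  # Pythia checkpoints live on HF branches "step{N}"
TASKS = ["sciq"]  # replace with the dangerous capabilities benchmarks under study

records = {}
for step in CHECKPOINT_STEPS:
    results = lm_eval.simple_evaluate(
        model="hf",
        model_args=f"pretrained=EleutherAI/pythia-12b,revision=step{step}",
        tasks=TASKS,
        batch_size=8,
    )
    # Metric key names vary by harness version (e.g. "acc" vs "acc,none"),
    # so we keep the whole per-task result dict.
    records[step] = {task: results["results"][task] for task in TASKS}

with open("pythia12b_checkpoint_scores.json", "w") as f:
    json.dump(records, f, indent=2)
```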

Datasets evaluated:

From these datasets we obtain performance metrics over training steps, which together form a feature space we can run PCA over.
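As a sketch of this step, assuming the per-checkpoint scores have been collected into a pandas DataFrame (one row per checkpoint, one column per benchmark metric; the random values and column names below are placeholders), the principal components can be computed with scikit-learn:

```python
# Sketch: PCA over a checkpoints-by-benchmarks score matrix.
# The DataFrame below is a stand-in for real benchmark results.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

scores = pd.DataFrame(
    np.random.rand(10, 4),  # placeholder scores: 10 checkpoints x 4 benchmarks
    index=[f"step{s}" for s in range(0, 143000, 14300)],
    columns=["sciq_acc", "benchmark_b", "benchmark_c", "benchmark_d"],
)

# Standardize each benchmark so no single metric dominates the components.
X = StandardScaler().fit_transform(scores.values)

pca = PCA(n_components=2)
trajectory = pca.fit_transform(X)  # shape: (n_checkpoints, 2)

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Benchmark loadings on PC1:", dict(zip(scores.columns, pca.components_[0])))
```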

We can inspect the principal components to understand which patterns show up and trace their training trajectories.
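One way to do this, continuing from the `trajectory` and `scores` objects in the previous sketch, is to plot the checkpoints in PC1/PC2 space in training order; the plotting choices below are illustrative:

```python
# Sketch: plotting the PCA trajectory from the previous sketch, coloring
# checkpoints by training order so shifts along training are visible.
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
order = list(range(len(trajectory)))
points = ax.scatter(trajectory[:, 0], trajectory[:, 1], c=order, cmap="viridis")
ax.plot(trajectory[:, 0], trajectory[:, 1], alpha=0.3)
for i, name in enumerate(scores.index):
    ax.annotate(name, trajectory[i])
fig.colorbar(points, label="checkpoint order")
ax.set_xlabel("PC1")
ax.set_ylabel("PC2")
fig.savefig("pca_trajectory.png")
```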

Additionally, we can attempt to relate the sudden emergence of dangerous capabilities ("grokking") in the benchmark results to sudden shifts in the training trajectory.
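A simple heuristic for this comparison, again assuming the `trajectory` and `scores` objects from the PCA sketch above, is to flag checkpoints where the step-to-step displacement in PCA space is unusually large and report the corresponding benchmark jumps; the two-standard-deviation threshold is an arbitrary illustrative choice:

```python
# Sketch: flagging sudden trajectory shifts and comparing them to benchmark jumps.
# Assumes `trajectory` (PCA coordinates per checkpoint) and `scores`
# (checkpoints-by-benchmarks DataFrame) from the previous sketches.
import numpy as np

# Step-to-step displacement in PCA space; large values mark abrupt trajectory shifts.
displacement = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)

# Step-to-step change in each benchmark; large values mark candidate "grokking" points.
score_jumps = np.abs(np.diff(scores.values, axis=0)).max(axis=1)

threshold = displacement.mean() + 2 * displacement.std()
for i, (d, j) in enumerate(zip(displacement, score_jumps)):
    if d > threshold:
        print(f"{scores.index[i]} -> {scores.index[i + 1]}: "
              f"trajectory shift {d:.2f}, max benchmark jump {j:.2f}")
```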