TransformerLens
A Library for Mechanistic Interpretability of Generative Language Models.
This is a library for doing mechanistic interpretability of GPT-2-style language models. The goal of mechanistic interpretability is to take a trained model and reverse engineer, from its weights, the algorithms the model learned during training.
TransformerLens lets you load in 50+ different open source language models, and exposes the internal activations of the model to you. You can cache any internal activation in the model, and add in functions to edit, remove or replace these activations as the model runs.
Quick Start
Install
pip install transformer_lens
Use
import transformer_lens
# Load a model (eg GPT-2 Small)
model = transformer_lens.HookedTransformer.from_pretrained("gpt2-small")
# Run the model and get logits and activations
logits, activations = model.run_with_cache("Hello World")
Gallery
Research done involving TransformerLens:
- Progress Measures for Grokking via Mechanistic Interpretability (ICLR Spotlight, 2023) by Neel Nanda, Lawrence Chan, Tom Lieberum, Jess Smith, Jacob Steinhardt
- Finding Neurons in a Haystack: Case Studies with Sparse Probing by Wes Gurnee, Neel Nanda, Matthew Pauly, Katherine Harvey, Dmitrii Troitskii, Dimitris Bertsimas
- Towards Automated Circuit Discovery for Mechanistic Interpretability by Arthur Conmy, Augustine N. Mavor-Parker, Aengus Lynch, Stefan Heimersheim, Adrià Garriga-Alonso
- Actually, Othello-GPT Has A Linear Emergent World Representation by Neel Nanda
- A circuit for Python docstrings in a 4-layer attention-only transformer by Stefan Heimersheim and Jett Janiak
- A Toy Model of Universality (ICML, 2023) by Bilal Chughtai, Lawrence Chan, Neel Nanda
- N2G: A Scalable Approach for Quantifying Interpretable Neuron Representations in Large Language Models (2023, ICLR Workshop RTML) by Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Fazl Barez
- Eliciting Latent Predictions from Transformers with the Tuned Lens by Nora Belrose, Zach Furman, Logan Smith, Danny Halawi, Igor Ostrovsky, Lev McKinney, Stella Biderman, Jacob Steinhardt
User contributed examples of the library being used in action:
- Induction Heads Phase Change Replication: A partial replication of In-Context Learning and Induction Heads from Connor Kissane
- Decision Transformer Interpretability: A set of scripts for training decision transformers that uses TransformerLens to view intermediate activations and to perform attribution and ablations. A write-up of the initial work can be found here.
Check out our demos folder for more examples of TransformerLens in practice.
Getting Started in Mechanistic Interpretability
Mechanistic interpretability is a very young and small field, and there are a lot of open problems. This means there's a lot of low-hanging fruit and the bar to entry is low - if you would like to help, please try working on one! The standard answer to "why has no one done this yet" is just that there aren't enough people! Key resources:
- A Guide to Getting Started in Mechanistic Interpretability
- ARENA Mechanistic Interpretability Tutorials from Callum McDougall. A comprehensive practical introduction to mech interp, written in TransformerLens - full of snippets to copy, and they come with exercises and solutions! Notable tutorials:
- Coding GPT-2 from scratch, with accompanying video tutorial from me (1 2) - a good introduction to transformers
- Introduction to Mech Interp and TransformerLens: An introduction to TransformerLens and mech interp via studying induction heads. Covers the foundational concepts of the library
- Indirect Object Identification: a replication of Interpretability in the Wild that covers standard techniques in mech interp such as direct logit attribution, activation patching, and path patching
- Mech Interp Paper Reading List
- 200 Concrete Open Problems in Mechanistic Interpretability
- A Comprehensive Mechanistic Interpretability Explainer: To look up all the jargon and unfamiliar terms you're going to come across!
- Neel Nanda's Youtube channel: A range of mech interp video content, including paper walkthroughs, and walkthroughs of doing research
Support & Community
If you have issues, questions, feature requests or bug reports, please search the issues to check if it's already been answered, and if not please raise an issue!
You're also welcome to join the open source mech interp community on Slack. Please use issues for concrete discussions about the package, and Slack for higher-bandwidth discussions, e.g. about supporting important new use cases, or if you want to make substantial contributions to the library and want a maintainer's opinion. We'd also love for you to come and share your projects on the Slack!
Credits
This library was created by Neel Nanda and is maintained by Joseph Bloom.
The core features of TransformerLens were heavily inspired by the interface to Anthropic's excellent Garcon tool. Credit to Nelson Elhage and Chris Olah for building Garcon and showing the value of good infrastructure for enabling exploratory research!
Creator's Note (Neel Nanda)
I (Neel Nanda) used to work for the Anthropic interpretability team, and I wrote this library because after I left and tried doing independent research, I got extremely frustrated by the state of open source tooling. There's a lot of excellent infrastructure like HuggingFace and DeepSpeed to use or train models, but very little to dig into their internals and reverse engineer how they work. This library tries to solve that, and to make it easy to get into the field even if you don't work at an industry org with real infrastructure! One of the great things about mechanistic interpretability is that you don't need large models or tons of compute. There are lots of important open problems that can be solved with a small model in a Colab notebook!
Citation
Please cite this library as:
@misc{nanda2022transformerlens,
title = {TransformerLens},
author = {Neel Nanda and Joseph Bloom},
year = {2022},
howpublished = {\url{https://github.com/neelnanda-io/TransformerLens}},
}