This is an open source replication of Anthropic's *Towards Monosemanticity* paper. The autoencoder was trained on the gelu-1l model in TransformerLens; you can access two trained autoencoders and the model using this tutorial.
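For orientation, here is a minimal sketch of loading the underlying model and caching the MLP activations the autoencoder operates on, using the standard TransformerLens API. The prompt is arbitrary and the shapes are illustrative:

```python
from transformer_lens import HookedTransformer

# Load the 1-layer GELU model the autoencoders were trained on
model = HookedTransformer.from_pretrained("gelu-1l")

# Run some text and cache the layer-0 MLP activations (the autoencoder's input)
tokens = model.to_tokens("An example prompt")
logits, cache = model.run_with_cache(tokens)
mlp_acts = cache["post", 0]  # shape [batch, position, d_mlp] = [1, n_tokens, 2048]
```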
This is a pretty scrappy training codebase and won't run from the top; I mostly recommend reading the code and copying snippets. See also Hoagy Cunningham's GitHub.
- `utils.py` contains various utils to define the Autoencoder, the data Buffer, and the training data. A minimal sketch of such an autoencoder is included after this list.
    - Toggle `loading_data_first_time` to `True` to load and process the text data used to run the model and generate activations.
- `train.py` is a scrappy training script.
    - `cfg["remove_rare_dir"]` was an experiment in training an autoencoder whose features were all orthogonal to the shared direction among rare features. Those lines of code can be ignored; they weren't used for the open source autoencoders.
    - There was a bug in the code that sets the decoder weights to unit norm: it makes the gradients orthogonal to the decoder directions, but I forgot to also set the norm back to 1 after each gradient update (it turns out that a unit-norm vector plus a perpendicular vector does not remain unit norm!). I think I have now fixed the bug; a sketch of the corrected constraint appears after this list.
- `analysis.py` is a scrappy set of experiments for exploring the autoencoder. I recommend reading the Colab tutorial instead for something cleaner and better commented.
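As a reference point, here is a minimal sparse-autoencoder sketch in the spirit of the Autoencoder defined in `utils.py`: a linear encoder with a ReLU, a linear decoder, and an L1 sparsity penalty on the feature activations. The class name, dimensions, and `l1_coeff` value below are illustrative assumptions, not the repo's exact API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    # Illustrative sizes: gelu-1l has d_mlp = 2048; the dictionary is larger
    def __init__(self, d_mlp: int = 2048, d_hidden: int = 16384, l1_coeff: float = 3e-4):
        super().__init__()
        self.W_enc = nn.Parameter(torch.empty(d_mlp, d_hidden))
        self.W_dec = nn.Parameter(torch.empty(d_hidden, d_mlp))
        nn.init.kaiming_uniform_(self.W_enc)
        nn.init.kaiming_uniform_(self.W_dec)
        self.b_enc = nn.Parameter(torch.zeros(d_hidden))
        self.b_dec = nn.Parameter(torch.zeros(d_mlp))
        self.l1_coeff = l1_coeff

    def forward(self, x):
        # Encode: subtract decoder bias, linear map, ReLU -> sparse feature activations
        acts = F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Decode: reconstruct the MLP activations from the features
        x_hat = acts @ self.W_dec + self.b_dec
        # Loss: reconstruction error plus L1 sparsity penalty on the features
        loss = F.mse_loss(x_hat, x) + self.l1_coeff * acts.abs().sum(-1).mean()
        return loss, x_hat, acts
```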
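And here is a sketch of the decoder unit-norm constraint discussed above, with both halves: projecting out the gradient component parallel to each decoder direction, and re-normalizing the rows after every optimizer step (the step the buggy version omitted). The function names are mine, not the repo's:

```python
import torch

@torch.no_grad()
def remove_parallel_grad(W_dec: torch.nn.Parameter):
    # Call after loss.backward(), before optimizer.step(): drop the gradient
    # component parallel to each feature's decoder direction (each row of W_dec)
    W_unit = W_dec / W_dec.norm(dim=-1, keepdim=True)
    parallel = (W_dec.grad * W_unit).sum(-1, keepdim=True) * W_unit
    W_dec.grad -= parallel

@torch.no_grad()
def renormalize_decoder(W_dec: torch.nn.Parameter):
    # Call after optimizer.step(): a unit-norm vector plus a perpendicular
    # update is no longer unit norm, so project each row back onto the unit sphere
    W_dec /= W_dec.norm(dim=-1, keepdim=True)
```

In a training loop this looks like: `loss.backward()`, then `remove_parallel_grad(ae.W_dec)`, then `optimizer.step()`, then `renormalize_decoder(ae.W_dec)`, then `optimizer.zero_grad()`.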