SAE Lens

Training Sparse Autoencoders on Language Models

SAELens exists to help researchers:

  • Train sparse autoencoders.
  • Analyse sparse autoencoders and conduct mechanistic interpretability research.
  • Generate insights which make it easier to create safe and aligned AI systems.
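At its core, a sparse autoencoder reconstructs a model's activations through a wide, sparsely activating feature layer. Below is a minimal NumPy sketch of this standard architecture (the ReLU encoder, pre-decoder-bias subtraction, and L1 penalty follow common implementations; all names and dimensions here are illustrative, not SAE Lens's actual internals):

```python
import numpy as np

def sae_forward(x, W_enc, b_enc, W_dec, b_dec):
    """One forward pass of a ReLU sparse autoencoder.

    x: (batch, d_model) activations taken from a language model.
    Returns the reconstruction and the sparse feature activations.
    """
    # Encode: project into a wider feature space; ReLU zeroes most
    # features, yielding a sparse code.
    f = np.maximum(0.0, (x - b_dec) @ W_enc + b_enc)
    # Decode: reconstruct the original activations from the features.
    x_hat = f @ W_dec + b_dec
    return x_hat, f

rng = np.random.default_rng(0)
d_model, d_sae, batch = 8, 32, 4  # toy sizes for illustration
x = rng.normal(size=(batch, d_model))
W_enc = 0.1 * rng.normal(size=(d_model, d_sae))
W_dec = 0.1 * rng.normal(size=(d_sae, d_model))
b_enc = np.zeros(d_sae)
b_dec = np.zeros(d_model)

x_hat, f = sae_forward(x, W_enc, b_enc, W_dec, b_dec)

# Training minimises reconstruction error plus an L1 sparsity penalty
# on the feature activations (coefficient here is illustrative).
l1_coeff = 1e-3
loss = np.mean((x - x_hat) ** 2) + l1_coeff * np.abs(f).sum(axis=-1).mean()
```

The L1 term is what pushes most features to zero on any given input, which is what makes the learned features candidates for interpretable directions.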

Please refer to the documentation for information on how to:

  • Download and analyse pre-trained sparse autoencoders.
  • Train your own sparse autoencoders.
  • Generate feature dashboards with the SAE-Vis Library.

SAE Lens is the result of many contributors working collectively to improve humanity's understanding of neural networks, many of whom are motivated by a desire to safeguard humanity from risks posed by artificial intelligence.

This library is maintained by Joseph Bloom and David Chanin.

Loading Pre-trained SAEs

Pre-trained SAEs for various models can be loaded via SAE Lens. See the documentation for a list of all available SAEs.
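Loading typically looks like the following sketch, which assumes the `SAE.from_pretrained` API and the `gpt2-small-res-jb` release documented by SAE Lens (the exact return signature has varied across versions, and running this downloads weights):

```python
from sae_lens import SAE  # pip install sae-lens

# Load a pre-trained SAE trained on GPT-2 small's residual stream.
# In versions that return a tuple, this also yields the config dict
# and estimated feature sparsity; check your installed version.
sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gpt2-small-res-jb",       # release name from the SAE table
    sae_id="blocks.8.hook_resid_pre",  # hook point within the model
    device="cpu",
)
```

The loaded SAE can then be run on activations captured at the matching hook point of the underlying model.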

Tutorials

Join the Slack!

Feel free to join the Open Source Mechanistic Interpretability Slack for support!

Citations and References

Research:

Reference Implementations: