This repository contains the code and notebooks that supplement the Vision Transformers Explained series written by Skylar Callis for Towards Data Science. These articles can be read on Medium or in their equivalent Jupyter Notebooks:
- Vision Transformers, Explained
- Attention for Vision Transformers, Explained
- Position Embeddings for Vision Transformers, Explained
- Tokens-to-Token Vision Transformers, Explained
This project was developed by Skylar Callis while working as a post-bachelors student at Los Alamos National Laboratory (LANL) from 2022–2024. To see what they are up to these days, visit Skylar's Website.
The Vision Transformers Explained code has been approved by LANL for a BSD-3 open source license under O#4693. The written components have been approved for release as LA-UR-23-33876.
The GitHub page for this code can be found here.