
# transformer-from-scratch

Code for my blog post: [Transformers from Scratch in PyTorch](https://fkodom.substack.com/p/transformers-from-scratch-in-pytorch)

**Note:** This Transformer implementation does not include masked attention. That was intentional, because omitting it leads to a much cleaner implementation. This repository is intended for educational purposes only. I believe everything here is correct, but I make no guarantees if you decide to use it in your own project.
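For reference, here is a minimal sketch of how a causal (masked) variant of scaled dot-product attention is typically written in PyTorch. The function names, signatures, and tensor shapes below are illustrative assumptions, not code from this repository or the blog post:

```python
from typing import Optional

import torch
import torch.nn.functional as F
from torch import Tensor


def scaled_dot_product_attention(
    query: Tensor, key: Tensor, value: Tensor, mask: Optional[Tensor] = None
) -> Tensor:
    # query/key/value: (batch, seq_len, dim); scores: (batch, seq_len, seq_len)
    scores = query.bmm(key.transpose(1, 2)) / (query.size(-1) ** 0.5)
    if mask is not None:
        # Positions where the mask is False are excluded from attention.
        scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1).bmm(value)


def causal_mask(seq_len: int) -> Tensor:
    # Lower-triangular boolean mask: token i attends only to tokens <= i.
    return torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))


# Example usage with hypothetical shapes:
q = k = v = torch.randn(2, 8, 16)
out = scaled_dot_product_attention(q, k, v, mask=causal_mask(8))
```

The only change from the unmasked version is the `masked_fill` step, which sets disallowed attention scores to negative infinity so they receive zero weight after the softmax.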

## Citations

```bibtex
@misc{vaswani2023attention,
    title={Attention Is All You Need},
    author={Ashish Vaswani and Noam Shazeer and Niki Parmar and Jakob Uszkoreit and Llion Jones and Aidan N. Gomez and Lukasz Kaiser and Illia Polosukhin},
    year={2023},
    eprint={1706.03762},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```