
Transformer From Scratch

Overview

A from-scratch Python implementation of the Transformer architecture introduced in "Attention Is All You Need" [1]. The main components covered are:

Positional encoding
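Since the Transformer has no recurrence, each token embedding is summed with a positional encoding. A minimal NumPy sketch of the sinusoidal encoding from [1] (sine on even dimensions, cosine on odd ones; the function name is illustrative, not from this repo):

```python
import numpy as np

def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sinusoidal positional encodings, shape (seq_len, d_model)."""
    positions = np.arange(seq_len)[:, None]          # (seq_len, 1)
    dims = np.arange(d_model)[None, :]               # (1, d_model)
    # angle = pos / 10000^(2i / d_model), with i = dim // 2
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates                 # (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])            # even dims: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])            # odd dims: cosine
    return pe
```

Because the wavelengths form a geometric progression, nearby positions get similar encodings while distant ones stay distinguishable.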

Multi-Head Attention
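Multi-head attention runs several scaled dot-product attentions in parallel on lower-dimensional projections and concatenates the results. A NumPy sketch under the usual assumption d_model = num_heads * d_head (weight names `w_q`, `w_k`, `w_v`, `w_o` are illustrative):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """softmax(q k^T / sqrt(d_k)) v, over the last two axes."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_head_attention(x, w_q, w_k, w_v, w_o, num_heads):
    """Self-attention over x of shape (seq_len, d_model)."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads

    def split_heads(t):  # (seq, d_model) -> (heads, seq, d_head)
        return t.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split_heads(x @ w_q), split_heads(x @ w_k), split_heads(x @ w_v)
    out = scaled_dot_product_attention(q, k, v)      # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # concat heads
    return out @ w_o
```

Splitting into heads lets each head attend to a different subspace at the same cost as one full-width attention.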

Masked Attention (optional)
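In the decoder, a causal mask keeps position i from attending to positions j > i, so training cannot peek at future tokens. A common trick, sketched here in NumPy, is to set masked scores to a large negative value before the softmax (names are illustrative):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Boolean lower-triangular mask: True where attention is allowed."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def masked_attention(q, k, v):
    """Causal scaled dot-product attention; returns (output, weights)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    # large negative score -> ~0 probability after softmax
    scores = np.where(causal_mask(q.shape[0]), scores, -1e9)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights
```

After the softmax, every entry above the diagonal of the attention-weight matrix is effectively zero, and the first position attends only to itself.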

References

[1] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. arXiv preprint arXiv:1706.03762, 2017.

[2] TensorFlow Transformer tutorial.