CUDA and Triton implementations of Flash Attention with SoftmaxN.
Primary language: Python. License: GNU General Public License v3.0 (GPL-3.0).
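
SoftmaxN (sometimes written softmax_n or "softmax1") is commonly defined as a softmax whose denominator includes an extra constant n, so the attention weights can sum to less than one; n = 0 recovers the standard softmax. The repository does not spell out the definition, so the following NumPy sketch is an assumption based on that common formulation, shown in a numerically stable form (the function name `softmax_n` is illustrative, not from the repo):

```python
import numpy as np

def softmax_n(x, n=1.0):
    """Softmax with an extra constant n in the denominator (assumed SoftmaxN):
    softmax_n(x)_i = exp(x_i) / (n + sum_j exp(x_j)).  n=0 is standard softmax."""
    x = np.asarray(x, dtype=np.float64)
    # Subtract the running max for numerical stability; include 0 in the max
    # so the constant term n = n * exp(0) is also rescaled safely.
    m = np.maximum(x.max(axis=-1, keepdims=True), 0.0)
    e = np.exp(x - m)
    return e / (n * np.exp(-m) + e.sum(axis=-1, keepdims=True))
```

With n = 1 a single zero logit gets weight exp(0) / (1 + exp(0)) = 0.5 instead of 1, which is the property that lets an attention head "abstain" by pushing all logits strongly negative.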