flash-linear-attention

Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
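For context, linear attention replaces the quadratic softmax attention with a kernel feature map, so causal attention can be computed as an O(N) running-state recurrence; this is the computation such Triton kernels accelerate. Below is a minimal plain-PyTorch sketch of that recurrence (following the elu+1 feature map of Katharopoulos et al.); all names are illustrative, and this is not this library's API.

```python
# Minimal sketch of causal linear attention via a running state.
# Illustrative only -- NOT the flash-linear-attention API.
import torch


def linear_attention(q, k, v):
    """q, k, v: (batch, heads, seq_len, head_dim).

    Uses phi(x) = elu(x) + 1 as the positive feature map, then maintains
    state = sum_j phi(k_j) v_j^T and norm = sum_j phi(k_j) over time.
    """
    phi = lambda x: torch.nn.functional.elu(x) + 1.0
    q, k = phi(q), phi(k)

    b, h, n, d = q.shape
    state = torch.zeros(b, h, d, d, dtype=q.dtype, device=q.device)
    norm = torch.zeros(b, h, d, dtype=q.dtype, device=q.device)
    out = torch.empty_like(v)
    for t in range(n):
        # Accumulate the rank-1 update phi(k_t) v_t^T and the normalizer.
        state = state + k[:, :, t].unsqueeze(-1) * v[:, :, t].unsqueeze(-2)
        norm = norm + k[:, :, t]
        # Query the running state: O(d^2) per step instead of O(N) per step.
        num = torch.einsum('bhd,bhde->bhe', q[:, :, t], state)
        den = torch.einsum('bhd,bhd->bh', q[:, :, t], norm).clamp(min=1e-6)
        out[:, :, t] = num / den.unsqueeze(-1)
    return out
```

In practice, fused kernels process the sequence in chunks rather than one step at a time, trading this purely sequential loop for parallel intra-chunk matmuls; the recurrence above is the reference semantics they reproduce.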

Primary language: Python · License: MIT
