flash-linear-attention

🚀 Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton.
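To give a sense of what "linear attention" means here, below is a minimal conceptual sketch in plain PyTorch. It shows the core trick the library's fused Triton kernels implement: replacing softmax(QKᵀ)V with φ(Q)(φ(K)ᵀV), which drops the cost from O(T²d) to O(Td²) in sequence length T. This is an illustration of the technique only, not the library's own API; the function name and feature map choice are assumptions for the example.

```python
# Conceptual sketch of (non-causal) linear attention in plain PyTorch.
# NOT this library's API -- the repo provides fused Triton kernels instead.
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """q, k, v: (batch, seq_len, dim). Returns (batch, seq_len, dim).

    Instead of materializing the (seq_len x seq_len) attention matrix,
    accumulate a (dim x dim) key-value summary and query it.
    """
    phi = lambda x: F.elu(x) + 1                 # positive feature map (one common choice)
    q, k = phi(q), phi(k)
    kv = torch.einsum('btd,bte->bde', k, v)      # sum_t phi(k_t) v_t^T
    z = k.sum(dim=1)                             # sum_t phi(k_t), for normalization
    num = torch.einsum('btd,bde->bte', q, kv)    # phi(q_t)^T (K^T V)
    den = torch.einsum('btd,bd->bt', q, z).clamp_min(eps)
    return num / den.unsqueeze(-1)

q = k = v = torch.randn(2, 128, 64)
out = linear_attention(q, k, v)
print(out.shape)  # torch.Size([2, 128, 64])
```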

Primary language: Python · License: MIT
