sustcsonglin/flash-linear-attention
Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton
Language: Python · License: MIT
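The linear attention mechanism this library accelerates can be sketched in plain NumPy. This is a minimal illustration, not the repository's Triton kernels: the `elu(x) + 1` feature map and the recurrent-state formulation below are assumptions matching common linear-attention formulations, and both function names are hypothetical.

```python
import numpy as np

def feature_map(x):
    """elu(x) + 1: one common positive feature map for linear attention."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def causal_linear_attention(q, k, v):
    """O(T) recurrent form: a state matrix S accumulates phi(k_t) v_t^T,
    and each output is a read-out of S normalized by a running key sum."""
    T, d = q.shape
    S = np.zeros((d, v.shape[1]))   # running sum of outer products
    z = np.zeros(d)                 # running sum of mapped keys
    out = np.zeros((T, v.shape[1]))
    for t in range(T):
        qt, kt = feature_map(q[t]), feature_map(k[t])
        S += np.outer(kt, v[t])
        z += kt
        out[t] = qt @ S / (qt @ z)
    return out

def causal_linear_attention_quadratic(q, k, v):
    """Equivalent O(T^2) masked form, used here only as a cross-check."""
    qf, kf = feature_map(q), feature_map(k)
    scores = np.tril(qf @ kf.T)     # causal mask: keep s <= t
    return (scores / scores.sum(-1, keepdims=True)) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
o_recurrent = causal_linear_attention(q, k, v)
o_quadratic = causal_linear_attention_quadratic(q, k, v)
assert np.allclose(o_recurrent, o_quadratic)
```

The recurrent form is what makes linear attention "linear": the per-step state update replaces the quadratic attention matrix, which is the structure the library's Triton kernels exploit.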
Stargazers
- 2kha (Bambou Tree Group)
- atveit (Microsoft)
- chen-yingfa (THUNLP)
- chillingche
- Doraemonzzz (Shanghai)
- evanzd (Shanghai, China)
- fly51fly (PRIS)
- geonwooko
- Hannibal046 (Peking University; intern@DeepSeek)
- HillZhang1999 (Bytedance)
- IcecreamArtist
- ImKeTT (UC Santa Cruz)
- JeffCarpenter (Canada)
- JL-er
- justinchiu
- L1aoXingyu (Beijing, China)
- lambda7xx (Shanghai Jiao Tong University)
- lirundong (NVIDIA)
- LouChao98
- lsvih (Peking University)
- lucky9-cyou
- LZhengisme (Hong Kong)
- MARD1NO (SiliconFlow)
- okotaku (Orange)
- Pent
- phalanx-hk (Japan)
- radarFudan (NUS)
- renll (Microsoft)
- Ryu1845
- speedcell4 (NICT)
- sustcsonglin (MIT)
- VPeterV (The Hong Kong University of Science and Technology (HKUST))
- wawpaopao (BIo group)
- xffxff (Beijing, China)
- yzhangcs (Soochow University)
- ZhousLab (@MathEXLab)