Blealtan/RWKV-LM-LoRA
RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), combining the best of RNNs and transformers: great performance, fast inference, low VRAM usage, fast training, "infinite" ctx_len, and free sentence embeddings.
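The "best of RNN and transformer" claim rests on RWKV's time-mixing (WKV) operator: it can be computed token-by-token like an RNN at inference time, yet expanded in parallel over the whole sequence during training. Below is a minimal NumPy sketch of the recurrent form of the RWKV-4 WKV formula; the function name, shapes, and the unstabilized exponentials are illustrative assumptions for clarity, not this repository's actual API (the real kernel uses a numerically stabilized CUDA implementation).

```python
import numpy as np

def wkv_recurrent(w, u, k, v):
    """Recurrent (inference-style) WKV, per the RWKV-4 formulation.

    w: per-channel decay rate (positive), shape (C,)
    u: per-channel "bonus" for the current token, shape (C,)
    k, v: key and value sequences, shape (T, C)
    Returns the WKV output, shape (T, C).
    """
    T, C = k.shape
    out = np.zeros((T, C))
    # Running state: exponentially decayed sums of e^{k_i} * v_i and e^{k_i}.
    num = np.zeros(C)
    den = np.zeros(C)
    for t in range(T):
        # Current token gets the extra bonus u before mixing with the state.
        cur = np.exp(u + k[t])
        out[t] = (num + cur * v[t]) / (den + cur)
        # Decay the state by e^{-w} and absorb the current token.
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out

# Illustrative usage with toy shapes (T=3 tokens, C=2 channels).
w = np.full(2, 0.5)
u = np.zeros(2)
k = np.array([[0.1, -0.2], [0.3, 0.0], [-0.1, 0.2]])
v = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(wkv_recurrent(w, u, k, v))
```

Note that each output is a softmax-style weighted average of past values, so the first token's output is exactly `v[0]`, and later outputs interpolate between past and current values. Only the fixed-size `num`/`den` state is carried forward, which is why inference needs constant memory regardless of context length.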
Python · Apache-2.0