Lightning-AI/lit-llama
Implementation of the LLaMA language model based on nanoGPT. Supports flash attention, Int8 and GPTQ 4-bit quantization, LoRA and LLaMA-Adapter fine-tuning, and pre-training. Apache 2.0-licensed.
Python | Apache-2.0
Stargazers
- akihironitta (@kumo-ai, @pyg-team)
- alecmerdler (@authzed)
- asifr (New York)
- bailoo (Lazy IITian)
- batman-do (VCCorp)
- bhaddow
- bilelomrani1 (@illuin-tech)
- Borda (Lightning.ai | Grid.ai)
- daigo0927 (Tokyo, Japan)
- DDanlov
- din0s (Zeta Alpha)
- DonLeif
- edenlightning
- ElieAntoine
- faizwhb
- fer-git (Singapore)
- florianbaud (Visiativ)
- goxcc (China)
- gyuro
- josephwinston
- JugglingNumbers
- laifi (Munich)
- mfranzone (Xact-lab)
- msaroufim (@PyTorch)
- msrivastava (UCLA)
- pavelklymenko (San Francisco Bay Area)
- robmarkcole (@earthdaily)
- SamerW
- sarisel
- satani99
- senad96 (Sapienza - University of Rome)
- tchaton (Lightning.ai | Pytorch Lightning)
- thiagogalesi
- towzeur (Paris, France)
- VikramxD (Elevate AI)
- Yannlecun