bigscience-workshop/petals
🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
Python · MIT license
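The tagline describes the core idea: a client runs only the model's small input/output layers locally, while the transformer blocks are served by volunteer peers in a public swarm. A minimal client-side sketch follows; the model name is an assumption for illustration and depends on which models the swarm is currently serving.

```python
# Minimal sketch of Petals client-side inference (not from this listing).
# Assumes `petals` and `transformers` are installed and that the public swarm
# is serving the chosen model.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # assumed example model name

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Embeddings run locally; transformer blocks execute on remote peers.
inputs = tokenizer("A cat sat on", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))
```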
Stargazers
- AidanTilgner (@Digital-Strata)
- alexunderch
- artek0chumak
- astariul (Fleksy)
- Codle (@bytedance)
- donglixp (Microsoft Research)
- dvmazur (Madvillain Bistro Bed and Breakfast Bar and Grill)
- FCdSP (FundThrough)
- fly51fly (PRIS)
- IAL32 (@Tacto Technology)
- jgardin
- jianggy (Peking University)
- jungokasai
- justheuristic (YSDA)
- Lazauya
- lkwq007 (CUHK)
- lukaemon (WA)
- naturalnumber (UPEI, student)
- NobreHD
- NouamaneTazi (@huggingface)
- ocramz (@unfoldml)
- pommedeterresautee (@ELS-RD Lefebvre Sarrut)
- shawn2306 (Mumbai, India)
- slyviacassell
- smiraldr (Bengaluru)
- sodabeta7 (Apple)
- subhobrata
- thomasw21 (MistralAI)
- tombailey
- tripathiarpan20
- WellyZhang (UCLA)
- workingsun (Seoul, South Korea)
- yangapku (Peking Univ.)
- younesbelkada (@huggingface)
- Yuxin-CV (Microsoft Research)
- Zengyf-CVer