RahulSChand/gpu_poor
Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
JavaScript
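The repo description says it estimates GPU memory requirements for running an LLM under various quantization schemes. As a hedged sketch (not the repo's actual code), the dominant terms are the quantized weights plus a KV-cache term; the function name, parameters, and the 10% overhead fudge factor below are illustrative assumptions:

```javascript
// Rough GPU memory estimate for LLM inference (illustrative sketch only).
// Assumes: weights stored at `bitsPerParam`, fp16 KV cache (2 bytes/value),
// and a flat 10% overhead for activations/fragmentation.
function estimateGpuMemoryGB({ numParamsB, bitsPerParam, contextLen = 2048, numLayers = 32, hiddenSize = 4096 }) {
  const weights = numParamsB * 1e9 * (bitsPerParam / 8);       // quantized model weights
  const kvCache = 2 * numLayers * contextLen * hiddenSize * 2; // K and V tensors, fp16
  const overhead = weights * 0.1;                              // assumed fudge factor
  return (weights + kvCache + overhead) / 1e9;
}

// Example: a 7B-parameter model at 4-bit quantization
console.log(estimateGpuMemoryGB({ numParamsB: 7, bitsPerParam: 4 }).toFixed(1) + " GB"); // prints "4.9 GB"
```

Lowering `bitsPerParam` (e.g. 4-bit QLoRA vs. 16-bit fp16) shrinks the weight term linearly, which is why quantization support matters for fitting models on smaller GPUs.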
Stargazers
- acvnace
- austineda
- bsenftner (Denver, CO)
- budhash (San Francisco Bay Area)
- carteakey (Toronto, Ontario)
- chorseng
- Curiosity007
- DanCard (Google)
- dbuades (@clinia)
- DrMark (Virginia Beach, VA)
- drtiwari (University of Edinburgh, UK)
- eomizrak
- euclaise (@NousResearch)
- geronimi73
- ibrahim-elshar (University of Pittsburgh)
- iiLaurens
- intari (Earth)
- jonpage0
- jsilveira94 (A Coruña, Spain)
- Kiv (@redwoodresearch)
- Klendat
- kuzheren
- Luis-Munu (Spain)
- nicdgonzalez (@GuestBots)
- onexixi
- PKXLIVE
- qaziquza
- robinrheem (South Korea, Seoul)
- Sejinkonye
- StephenHnilica
- tencerjo (California)
- thenbe
- weixin00
- wstrinz
- yishaik
- ZQ-Dev8 (California)