vaibkumr/prompt-optimizer
Minimize LLM prompt token counts to save API costs and model computation.
Python · MIT License
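The core idea of a prompt optimizer can be sketched without the library itself: strip low-information words from a prompt before sending it to an LLM API, so the request uses fewer tokens and costs less. The `STOPWORDS` set and `shrink_prompt` function below are illustrative assumptions, not this repository's actual API.

```python
# Illustrative sketch of prompt token reduction (not this repo's actual API).
STOPWORDS = {"a", "an", "the", "please", "kindly", "very", "really"}


def shrink_prompt(prompt: str) -> str:
    """Drop common filler words; fewer words usually means fewer tokens."""
    kept = [
        w for w in prompt.split()
        if w.lower().strip(".,!?") not in STOPWORDS
    ]
    return " ".join(kept)


original = "Please summarize the following very long article in a concise way."
optimized = shrink_prompt(original)
# The shortened prompt is then sent to the LLM in place of the original.
```

Real optimizers are more careful than this: they typically measure token counts with the model's own tokenizer and check that compression does not change the model's output quality.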