braintrustdata/autoevals
AutoEvals is a tool for quickly and easily evaluating AI model outputs using best practices.
TypeScript · MIT License
Stargazers
- abrenneke (@Ironclad)
- adham-elarabawy (Harvey AI)
- ankrgyl
- anthonyli358 (London, UK)
- BharatSingla12
- blakejwc
- d33bs (University of Colorado Anschutz)
- dkossnick-figma
- eladsegal (@Loora-ai)
- ericvanlare
- fblissjr (Aptitive)
- gabrielalmeida (@WhatsHQ)
- giaosudau (Vietnam)
- grantmwilliams (@justvanilla)
- gregnr
- guruchahal (SF Bay Area, CA)
- huytool157
- ishaan-jaff (metaverse)
- jaschadub (@tarnover)
- jenniferli23
- jiito (~/.build)
- kasrak (San Francisco)
- keeganmccallum (Making magic at Luma Labs)
- leonardtang (cookin')
- lhl (@AUGMXNT)
- luquitared
- MartinBucko (Freelancer)
- merlinarer (Zhejiang University)
- michaelsharpe
- Reed-Schimmel
- saikatmitra91 (empirical.run)
- schultzjack
- smellslikeml (SmellsLikeML)
- timothyasp (Bellingham, WA)
- wong-codaio
- wpusergithub