openai/evals

Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.

Primary language: Python · License: MIT
