Stability-AI/StableLM

Consider using OpenAI Evals


OpenAI Evals is an open-source, crowdsourced collection of tests designed to evaluate many newly emergent capabilities in LLMs. While it may be slightly GPT-4-centric, since tests that GPT-4 can already pass easily don't get merged, it remains a valuable tool for automatically benchmarking LLMs.

As a well-designed, standardized test suite, it would let us objectively and easily compare different open-access models against each other, and even against OpenAI's offerings, giving us deeper insight into what works and what doesn't.

For a reference on testing non-OpenAI models with Evals, see the OpenAssistant model evals.
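
To make the integration a bit more concrete, here is a minimal, untested sketch of the kind of completion-function wrapper Evals expects around a local Hugging Face model (the OpenAssistant evals do something similar, but more completely). The model name, generation settings, and string-only prompt handling are assumptions for illustration; a real integration would also register the wrapper as a custom completion function in Evals' registry and run it with the `oaieval` CLI.

```python
# Rough sketch of a completion-function wrapper for a local StableLM model.
# Assumptions: prompts arrive as plain strings, and the model/generation
# settings below are placeholders, not a tested configuration.
from transformers import AutoModelForCausalLM, AutoTokenizer


class StableLMCompletionResult:
    def __init__(self, text: str):
        self.text = text

    def get_completions(self) -> list[str]:
        # Evals reads the model's answers from this method.
        return [self.text]


class StableLMCompletionFn:
    def __init__(self, model_name: str = "stabilityai/stablelm-tuned-alpha-7b"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    def __call__(self, prompt: str, **kwargs) -> StableLMCompletionResult:
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        output_ids = self.model.generate(**inputs, max_new_tokens=256, do_sample=False)
        # Drop the prompt tokens so only the generated continuation remains.
        completion = self.tokenizer.decode(
            output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        return StableLMCompletionResult(completion)
```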