/HumanEval-Test

An extension of the HumanEval benchmark to evaluate the ability of LLMs to write effective tests
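The repository's actual task format and scoring scripts are not shown on this page, so the following is only a minimal sketch of how a test-writing evaluation over a HumanEval-style problem might work: an LLM-generated test suite is judged "effective" if it accepts a reference solution while rejecting a buggy variant. All names here (`reference_solution`, `buggy_solution`, `generated_tests`) are illustrative assumptions, not the repository's API.

```python
# Hypothetical sketch of a test-writing evaluation; names are illustrative,
# not taken from the HumanEval-Test repository itself.
from typing import List


def reference_solution(numbers: List[float], threshold: float) -> bool:
    # Correct implementation of HumanEval/0 (has_close_elements).
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False


def buggy_solution(numbers: List[float], threshold: float) -> bool:
    # Deliberately wrong variant (uses <= instead of <), used to probe
    # whether the generated tests can detect subtle bugs.
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) <= threshold:
                return True
    return False


# Example tests an LLM might generate; in a real evaluation these would be
# produced by the model under test.
generated_tests = [
    (([1.0, 2.0, 3.0], 0.5), False),
    (([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3), True),
    (([1.0, 2.0], 1.0), False),  # boundary case distinguishing < from <=
]


def passes(solution, tests) -> bool:
    # A solution passes the suite if every test's expected output matches.
    return all(solution(*args) == expected for args, expected in tests)


if __name__ == "__main__":
    # An effective test suite accepts the reference and rejects the bug.
    print("reference passes:", passes(reference_solution, generated_tests))
    print("bug detected:", not passes(buggy_solution, generated_tests))
```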

Primary Language: Python
License: MIT
