code-eval

Run evaluations of LLMs on the HumanEval benchmark.
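
A minimal sketch of the workflow this implies, assuming OpenAI's `human-eval` package (`pip install human-eval`); `generate_completion` is a hypothetical placeholder for whatever model wrapper the repo actually uses:

```python
from human_eval.data import read_problems, write_jsonl


def generate_completion(prompt: str) -> str:
    # Hypothetical placeholder: call the LLM under test and return only
    # the code that completes the prompted function body.
    raise NotImplementedError


def main() -> None:
    # Each HumanEval problem maps task_id -> {"prompt", "entry_point", ...}.
    problems = read_problems()
    samples = [
        {"task_id": task_id, "completion": generate_completion(problem["prompt"])}
        for task_id, problem in problems.items()
    ]
    # The evaluator expects one JSON object per line with these two keys.
    write_jsonl("samples.jsonl", samples)


if __name__ == "__main__":
    main()
```

Scoring then uses the package's own CLI, `evaluate_functional_correctness samples.jsonl`, which executes the completions against the benchmark's unit tests and reports pass@k.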

Primary language: Python · License: MIT
