/llm-code-eval

LLMCodeEval: An Execution-Based Multilingual Multitask Multidimensional Benchmark for Evaluating Large Language Models on Code Understanding and Generation

Primary Language: Python
