llm-code-eval

LLMCodeEval: An Execution-Based Multilingual Multitask Multidimensional Benchmark for Evaluating Large Language Models on Code Understanding and Generation

Primary language: Python
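
The repository's own evaluation code is not shown here, so the following is only a minimal sketch of what an execution-based check typically looks like: run a model-generated solution together with its test code in a separate process and treat a clean exit as a pass. The function name `run_solution` and the `solution`/`test_code` task format are illustrative assumptions, not the benchmark's actual API.

```python
"""Minimal sketch of execution-based evaluation (illustration only)."""

import os
import subprocess
import sys
import tempfile


def run_solution(solution: str, test_code: str, timeout: float = 5.0) -> bool:
    """Execute a generated solution plus its tests in a fresh Python
    process; a zero exit code (all asserts passed) counts as a pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)


if __name__ == "__main__":
    # Hypothetical example task: a model-generated function and one test.
    generated = "def add(a, b):\n    return a + b"
    tests = "assert add(2, 3) == 5"
    print("pass" if run_solution(generated, tests) else "fail")
```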