CodeScope

[ACL 2024] CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation


Leaderboard   |   📄 Paper   |   🤗 Access the datasets from Hugging Face   |   Access the datasets from Google Drive

CodeScope is an execution-based, multilingual, multi-task, multi-dimensional evaluation benchmark for comprehensively gauging the capabilities of LLMs on coding tasks. CodeScope covers 43 programming languages and 8 coding tasks, and evaluates the coding performance of LLMs along three dimensions (perspectives): difficulty, efficiency, and length.

🌈 Updates

  • [2024.05.15] CodeScope was accepted to the ACL 2024 Main Conference. We thank the academic community for its recognition.
  • [2023.11.15] 🎉🎉🎉 CodeScope is released! 🎉🎉🎉

Datasets

🤗 Hugging Face or Google Drive or GitHub Data

Code

CodeScope evaluates the comprehensive abilities of LLMs in code understanding and code generation across eight coding tasks.

Code Understanding

  1. Code Summarization
  2. Code Smell
  3. Code Review
  4. Automated Testing

Code Generation

  1. Program Synthesis
  2. Code Translation
  3. Code Repair
  4. Code Optimization

Citation

Please cite our paper if you use the data or code from CodeScope.

@misc{yan2023codescope,
      title={CodeScope: An Execution-based Multilingual Multitask Multidimensional Benchmark for Evaluating LLMs on Code Understanding and Generation},
      author={Weixiang Yan and Haitian Liu and Yunkun Wang and Yunzhe Li and Qian Chen and Wen Wang and Tingyu Lin and Weishan Zhao and Li Zhu and Shuiguang Deng and Hari Sundaram},
      year={2023},
      eprint={2311.08588},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

Contact

For questions, please feel free to reach out via email at weixiangyan@ucsb.edu.