ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models

Zhelun Shi*,1,2  Zhipin Wang*,1  Hongxing Fan*,1  Zhenfei Yin2  Lu Sheng✉,1
Yu Qiao2  Jing Shao✉,2
1Beihang University  2Shanghai AI Lab
* Equal Contribution  ✉ Corresponding Author

Official Repository of ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models

Introduction

ChEF is a Comprehensive Evaluation Framework for the reliable and indicative assessment of multimodal large language models (MLLMs). It is highly scalable and can be flexibly extended to evaluate any new model or task.

ChEF decouples the evaluation pipeline into four components:

  • Scenario: A set of datasets concerning representative multimodal tasks that are suitable for MLLMs.
  • Instruction: The module of posing questions and setting instruction examples to the MLLMs.
  • Inferencer: Strategies for MLLMs to answer questions.
  • Metric: Score functions designed to evaluate the performance of MLLMs.

With a systematic selection of these four components, ChEF facilitates versatile evaluations in a standardized framework. Users can easily build new evaluations from new Recipes (i.e., specific choices of the four components); a minimal sketch of this idea follows below. ChEF also sets up several new evaluations to quantify the desiderata (desired capabilities) that a competent MLLM should possess.
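To make the recipe abstraction concrete, here is a minimal, self-contained Python sketch of how the four components could compose into one evaluation. All names below (Recipe, evaluate, the toy scenario) are illustrative placeholders, not ChEF's actual API; see get_start.md for the real usage.

    # Minimal sketch of the Recipe idea: an evaluation is a specific
    # choice of four components. Names here are hypothetical, not ChEF's API.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Recipe:
        scenario: List[dict]                       # Scenario: samples to evaluate on
        instruction: Callable[[dict], str]         # Instruction: turn a sample into a prompt
        inferencer: Callable[[str], str]           # Inferencer: how the model answers
        metric: Callable[[List[str], List[dict]], float]  # Metric: score the answers

    def evaluate(recipe: Recipe) -> float:
        prompts = [recipe.instruction(sample) for sample in recipe.scenario]
        answers = [recipe.inferencer(prompt) for prompt in prompts]
        return recipe.metric(answers, recipe.scenario)

    # Toy usage: a two-sample scenario scored by exact-match accuracy.
    toy = Recipe(
        scenario=[{"image": "img_0.png", "label": "cat"},
                  {"image": "img_1.png", "label": "dog"}],
        instruction=lambda s: f"What is shown in {s['image']}? Answer with one word.",
        inferencer=lambda prompt: "cat",  # stand-in for a real MLLM call
        metric=lambda answers, data: sum(a == d["label"]
                                         for a, d in zip(answers, data)) / len(data),
    )
    print(evaluate(toy))  # 0.5

Swapping any one field (e.g., a chain-of-thought inferencer, or a different metric) yields a new recipe without touching the other three components, which is the point of the decoupled design.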

News

📆 [2023-11]

  1. ChEF code is available!
  2. ChEF has been merged into LAMM. We will maintain the code in OpenGVLab/LAMM.

Scenarios and MLLMs

Supported Scenarios:

Supported MLLMs:

More details can be found in models.md.

Get Started

Please see get_start.md for the basic usage of ChEF.

License

This project is released under the CC BY-NC 4.0 license (allowing only non-commercial use), and models trained using the dataset should not be used outside of research purposes.

Citation

@misc{shi2023chef,
      title={ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models}, 
      author={Zhelun Shi and Zhipin Wang and Hongxing Fan and Zhenfei Yin and Lu Sheng and Yu Qiao and Jing Shao},
      year={2023},
      eprint={2311.02692},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}