Official Repository of ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models
ChEF is a Comprehensive Evaluation Framework for the reliable and indicative assessment of MLLMs. It is highly scalable and can be flexibly extended to evaluate any new model or task.
ChEF decouples the evaluation pipeline into four components:
- Scenario: A set of datasets concerning representative multimodal tasks that are suitable for MLLMs.
- Instruction: The module that poses questions and supplies instruction examples to the MLLMs.
- Inferencer: Strategies for MLLMs to answer questions.
- Metric: Score functions designed to evaluate the performance of MLLMs.
With a systematic selection of these four components, ChEF facilitates versatile evaluations in a standardized framework. Users can easily build new evaluations according to new Recipes (i.e., specific choices of the four components); see the sketch below. ChEF also sets up several new evaluations to quantify the desiderata (desired capabilities) that a competent MLLM should possess.
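Conceptually, a Recipe is just a binding of one choice per component. The following minimal Python sketch illustrates this composition; all class and function names here are hypothetical, chosen for illustration, and are not ChEF's actual API.

```python
# Hypothetical sketch of the decoupled evaluation pipeline described above.
# None of these names come from the ChEF codebase; see get_start.md for the
# real interfaces.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class Recipe:
    scenario: Any                                     # dataset of multimodal samples
    instruction: Callable[[Dict], str]                # turns a sample into a prompt
    inferencer: Callable[[Any, str], str]             # answering strategy for an MLLM
    metric: Callable[[List[str], List[str]], float]   # scores predictions

    def evaluate(self, model: Any) -> float:
        predictions, references = [], []
        for sample in self.scenario:
            prompt = self.instruction(sample)                   # Instruction
            predictions.append(self.inferencer(model, prompt))  # Inferencer
            references.append(sample["answer"])
        return self.metric(predictions, references)             # Metric
```

Under this framing, swapping any single component (a different instruction template, a likelihood-based inferencer, an alternative metric) yields a new evaluation without touching the rest of the pipeline.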
📆 [2023-11]
- ChEF code is available!
- ChEF has been merged into LAMM. We will maintain the code in OpenGVLab/LAMM.
Supported Scenarios:
Supported MLLMs:
More details can be found in models.md.
Please see get_start.md for the basic usage of ChEF.
This project is released under the CC BY-NC 4.0 license (non-commercial use only), and models trained using the dataset must not be used outside of research purposes.
@misc{shi2023chef,
title={ChEF: A Comprehensive Evaluation Framework for Standardized Assessment of Multimodal Large Language Models},
author={Zhelun Shi and Zhipin Wang and Hongxing Fan and Zhenfei Yin and Lu Sheng and Yu Qiao and Jing Shao},
year={2023},
eprint={2311.02692},
archivePrefix={arXiv},
primaryClass={cs.CV}
}