BradyFU/Awesome-Multimodal-Large-Language-Models

What is MME_Benchmark_release_version?


In many projects, evaluations on the MME benchmark mention MME_Benchmark_release_version, such as in LLaVA:

[screenshot of LLaVA's MME evaluation instructions referencing MME_Benchmark_release_version]

However, none of them specify how to obtain MME_Benchmark_release_version. I tried the dataset provided on HuggingFace, but some data appears to be missing, such as the questions_answers_YN referenced in convert_answer_to_mme.py.

So where might the issue be?

You can follow the guidelines to apply for the MME data and then arrange it as described.
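
Once the data is arranged, a quick sanity check can confirm it matches the layout that convert_answer_to_mme.py expects. The sketch below is only illustrative: it assumes a local root named `MME_Benchmark_release_version` with one folder per subtask, each containing a `questions_answers_YN` directory (the name mentioned above); adjust the paths to your own setup.

```python
import os

# Hypothetical root after arranging the MME data; adjust to your local path.
MME_ROOT = "./MME_Benchmark_release_version"

# Assumed layout: one folder per subtask, each expected to contain a
# questions_answers_YN directory (the name referenced by convert_answer_to_mme.py).
for category in sorted(os.listdir(MME_ROOT)):
    category_dir = os.path.join(MME_ROOT, category)
    if not os.path.isdir(category_dir):
        continue
    qa_dir = os.path.join(category_dir, "questions_answers_YN")
    status = "ok" if os.path.isdir(qa_dir) else "MISSING questions_answers_YN"
    print(f"{category}: {status}")
```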

Alternatively, you can simply run the evaluation script provided by lmms-eval; a sketch of such a run is shown below. Note that the dataset they host on HuggingFace is a reformatted version, not the original files.
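
As a rough sketch, an lmms-eval run on the MME task might be launched as follows. The model name, checkpoint id, and flags here are assumptions based on the lmms-eval command-line interface and should be checked against `python -m lmms_eval --help` and the lmms-eval README.

```python
import subprocess

# Illustrative lmms-eval invocation for the MME task; the model name,
# pretrained checkpoint, and flags are assumptions -- verify them against
# your installed lmms-eval version before running.
cmd = [
    "python", "-m", "lmms_eval",
    "--model", "llava",
    "--model_args", "pretrained=liuhaotian/llava-v1.5-7b",
    "--tasks", "mme",
    "--batch_size", "1",
    "--log_samples",
    "--output_path", "./logs/",
]
subprocess.run(cmd, check=True)
```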

I have already done that, but there has been no response yet.