MMMU-Benchmark/MMMU

model evaluation

mactavish91 opened this issue · 2 comments

Thank you for your great evaluation work. We recently adopted a training strategy similar to LLaVA's, co-training on VQA and chat data, which yielded significant improvements. Could you re-evaluate our model? https://github.com/THUDM/CogVLM/

You can submit your test-set results on EvalAI: https://eval.ai/web/challenges/challenge-page/2179. We will update the results in our paper based on your submission.