FlagEval is an evaluation toolkit for AI large foundation models. Our goal is to explore and integrate scientific, fair, and open benchmarks, methods, and tools for foundation model evaluation. FlagEval will support multi-dimensional evaluation (accuracy, efficiency, robustness, etc.) of foundation models within and across different modalities (NLP, audio, CV, and multimodal) in the future. We hope that evaluating foundation models will deepen our understanding of them and promote related technological innovation and industrial applications.
- An evaluation toolkit, mCLIPEval, for vision-language models such as CLIP (Contrastive Language–Image Pre-training).
- Multilingual (12 languages) and monolingual (English/Chinese) datasets.
- Support for zero-shot classification, zero-shot retrieval, and zero-shot composition tasks.
- Adapters for FlagAI pretrained models (AltCLIP, EVA-CLIP), OpenCLIP pretrained models, Chinese CLIP models, Multilingual CLIP models, Taiyi series pretrained models, and customized models.
- Data preparation from various sources, such as torchvision, huggingface, and kaggle (see the sketch after this list).
- Visualization of evaluation results through leaderboard figures or tables, and detailed comparisons between two specific models.
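As an illustration of the data-preparation step above, a download might look like the following minimal sketch, run from the mCLIPEval directory after installation. The script name `download.py` and the `--datasets` flag are assumptions for illustration; consult mCLIPEval/README.md for the actual interface.

```bash
# Hypothetical data-preparation step: fetch a benchmark dataset from one of
# the supported sources (torchvision, huggingface, kaggle) before evaluation.
# The script name and flag below are assumptions; see mCLIPEval/README.md.
python download.py --datasets=cifar10
```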
- PyTorch version >= 1.8.0
- Python version >= 3.8
- For evaluating models on GPUs, you'll also need to install CUDA and NCCL
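Before installing mCLIPEval, you can quickly confirm that your environment meets these requirements with standard Python and PyTorch commands:

```bash
# Check interpreter and framework versions against the requirements above.
python --version                                             # expect >= 3.8
python -c "import torch; print(torch.__version__)"           # expect >= 1.8.0
# Confirm that PyTorch can see a CUDA-capable GPU (for GPU evaluation).
python -c "import torch; print(torch.cuda.is_available())"
```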
How to use mCLIPEval
```bash
git clone https://github.com/FlagOpen/FlagEval.git
cd FlagEval/mCLIPEval/
pip install -r requirements.txt
```
Please refer to mCLIPEval/README.md for more details.
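After installation, a typical evaluation run might look like the sketch below. The entry point `evaluate.py`, the flags `--model_name`, `--datasets`, and `--output`, and the visualization call are assumptions for illustration; the authoritative usage is documented in mCLIPEval/README.md.

```bash
# Hypothetical end-to-end run: evaluate a pretrained model on a prepared
# dataset and write the metrics to a JSON file. Script names and flags are
# assumptions; see mCLIPEval/README.md for the actual interface.
python evaluate.py --model_name=AltCLIP-XLMR-L --datasets=cifar10 --output=altclip.json

# Visualize the results or compare them with another model's JSON output
# (also an assumed interface).
python visual.py --json=altclip.json
```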
- For help with FlagEval, or to report an issue or bug, please open a GitHub issue or e-mail flageval@baai.ac.cn. Let's build a better & stronger FlagEval together :)
- We're hiring! If you are interested in working with us on foundation model evaluation, please contact flageval@baai.ac.cn.
- Welcome to collaborate with FlagEval! Submissions of new tasks and datasets are encouraged. If you are interested in contributing a new task, dataset, or tool to FlagEval, please contact flageval@baai.ac.cn.
The majority of FlagEval is licensed under the Apache 2.0 license; however, portions of the project are available under separate license terms:
- The usage of CLIP_benchmark is licensed under the MIT license.
- The usage of the ImageNet1k dataset is subject to the huggingface datasets license and the ImageNet license.