🍮 📚 Flan-Eval: Reproducible Held-Out Evaluation for Instruction Tuning

This repository contains code to evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. We aim to facilitate simple and convenient benchmarking across multiple tasks and models.

Why?

Instruction-tuned models such as Flan-T5 and Alpaca represent an exciting direction for approximating the performance of large language models (LLMs) like ChatGPT at lower cost. However, it is difficult to compare models through qualitative inspection alone. To evaluate how well they generalize to a wide range of unseen and challenging tasks, we can use academic benchmarks such as MMLU and BBH. Compared to existing libraries such as lm-evaluation-harness and HELM, this repo enables simple and convenient evaluation across multiple models. Notably, we support most models from HuggingFace Transformers 🤗.
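
For intuition, the --model_name flag in the commands below selects how a checkpoint is loaded: llama-style models are decoder-only causal LMs, while seq_to_seq models are encoder-decoder models. The helper below is a minimal sketch of that distinction using the standard Transformers auto classes; it is not the repo's actual loading code, and load_model is a hypothetical name.

from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer

def load_model(model_name: str, model_path: str):
    # Hypothetical helper mirroring the --model_name / --model_path CLI flags.
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    if model_name == "seq_to_seq":
        # Encoder-decoder models such as google/flan-t5-xl
        model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
    else:
        # Decoder-only models such as chavinlo/alpaca-native
        model = AutoModelForCausalLM.from_pretrained(model_path)
    return tokenizer, model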

Results

Model Name   Model Path                      Paper   Parameters   MMLU Score   BBH Score
seq_to_seq   google/flan-t5-xl               Link    3B           49.25        40.26
llama        eachadea/vicuna-13b             Link    13B          49.70        37.17
llama        TheBloke/koala-13B-HF           Link    13B          44.60        34.68
llama        chavinlo/alpaca-native          Link    7B           41.64        33.36
llama        decapoda-research/llama-7b-hf   Link    7B           35.22        30.96
chatglm      THUDM/chatglm-6b                Link    6B           36.16        31.38

Example Usage

Evaluate on Massive Multitask Language Understanding (MMLU), which includes exam questions from 57 tasks such as mathematics, history, law, and medicine.

python main.py mmlu --model_name llama --model_path chavinlo/alpaca-native
# 0.4163936761145136

python main.py mmlu --model_name seq_to_seq --model_path google/flan-t5-xl 
# 0.49252243270189433
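
MMLU questions are four-way multiple choice, so the reported score is simply accuracy over the answer letter the model picks. Below is a minimal sketch of that scoring loop, not the repo's actual prompting or evaluation code; generate_answer is an assumed helper that returns the model's predicted letter for a prompt.

def mmlu_accuracy(questions, generate_answer):
    # questions: iterable of dicts with "question", "choices" (4 options), and a gold "answer" letter
    correct = 0
    total = 0
    for q in questions:
        options = "\n".join(
            f"{letter}. {choice}" for letter, choice in zip("ABCD", q["choices"])
        )
        prompt = f"{q['question']}\n{options}\nAnswer:"
        prediction = generate_answer(prompt).strip().upper()[:1]  # keep only the first letter
        correct += prediction == q["answer"]
        total += 1
    return correct / total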

Evaluate on BIG-Bench Hard (BBH), which includes 23 challenging tasks on which PaLM (540B) performs below the average human rater.

python main.py bbh --model_name llama --model_path TheBloke/koala-13B-HF --load_8bit
# 0.3468942926723247

python main.py bbh --model_name llama --model_path eachadea/vicuna-13b --load_8bit
# 0.3717117791946168
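
The --load_8bit flag above reduces GPU memory for the 13B models. In HuggingFace Transformers this kind of option typically corresponds to 8-bit weight loading via bitsandbytes, roughly as sketched below; the exact wiring inside this repo may differ.

from transformers import AutoModelForCausalLM

# Requires the bitsandbytes and accelerate packages and a CUDA GPU.
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/koala-13B-HF",
    load_in_8bit=True,   # quantize weights to int8 to roughly halve GPU memory vs fp16
    device_map="auto",   # place layers across available devices automatically
)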

Setup

Install dependencies and download data.

conda create -n flan-eval python=3.8 -y
conda activate flan-eval
pip install -r requirements.txt
mkdir -p data
wget https://people.eecs.berkeley.edu/~hendrycks/data.tar -O data/mmlu.tar
tar -xf data/mmlu.tar -C data && mv data/data data/mmlu
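
After extraction, data/mmlu should contain the MMLU CSV splits (dev, val, test). A quick sanity check, assuming the standard Hendrycks layout of one CSV per subject whose rows are question, four options, and the answer letter:

import csv, glob

# Count test-set subjects to confirm the download extracted correctly.
files = sorted(glob.glob("data/mmlu/test/*_test.csv"))
print(f"{len(files)} subjects found")  # expect 57
with open(files[0]) as f:
    rows = list(csv.reader(f))
print(files[0], len(rows), "questions; first answer:", rows[0][-1])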