A framework for few-shot evaluation of autoregressive language models.

Language Model Evaluation Harness

Announcement

A new v0.4.0 release of lm-evaluation-harness is available!

New updates and features include:

  • Internal refactoring
  • Config-based task creation and configuration
  • Easier import and sharing of externally-defined task config YAMLs
  • Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource
  • More advanced configuration options, including output post-processing, answer extraction, multiple LM generations per document, configurable few-shot settings, and more
  • Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more
  • Logging and usability changes
  • New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more

Please see our updated documentation pages in docs/ for more details.

Development will be continuing on the main branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the EleutherAI discord!

Overview

This project provides a unified framework to test generative language models on a large number of different evaluation tasks.

Features:

  • Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
  • Support for models loaded via transformers (including quantization via AutoGPTQ), GPT-NeoX, and Megatron-DeepSpeed, with a flexible tokenization-agnostic interface.
  • Support for fast and memory-efficient inference with vLLM.
  • Support for commercial APIs including OpenAI and TextSynth.
  • Support for evaluation on adapters (e.g. LoRA) supported in HuggingFace's PEFT library.
  • Support for local models and benchmarks.
  • Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
  • Easy support for custom prompts and evaluation metrics.

The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular Open LLM Leaderboard, has been used in hundreds of papers, and is used internally by dozens of companies including NVIDIA, Cohere, Nous Research, Booz Allen Hamilton, and MosaicML.

Install

To install the lm-eval package from the GitHub repository, run:

git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .

We also provide a number of optional dependencies for extended functionality. Extras can be installed via pip install -e ".[NAME]"

Name            Use
anthropic       For using Anthropic's models
gptq            For loading models with GPTQ
dev             You probably don't want to use this
multilingual    For multilingual tokenizers
openai          For using OpenAI's models
promptsource    For using PromptSource prompts
sentencepiece   For using the sentencepiece tokenizer
vllm            For loading models with vLLM
zeno            For visualizing results with Zeno
all             Loads all extras
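
For example, to install several optional extras at once (an illustrative command; the extra names are taken from the table above):

pip install -e ".[openai,vllm,sentencepiece]"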

Basic Usage

Hugging Face transformers

To evaluate a model hosted on the HuggingFace Hub (e.g. GPT-J-6B) on hellaswag you can use the following command:

lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8

Additional arguments can be provided to the model constructor using the --model_args flag. Most notably, this supports the common practice of using the revisions feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
    --tasks lambada_openai,hellaswag \
    --device cuda:0 \
    --batch_size 8

Models loaded via either transformers.AutoModelForCausalLM (autoregressive, decoder-only GPT-style models) or transformers.AutoModelForSeq2SeqLM (encoder-decoder models such as T5) in Hugging Face are supported.

Batch size selection can be automated by setting the --batch_size flag to auto. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append :N to the flag to automatically recompute the largest batch size N times. For example, to recompute the batch size 4 times, the command would be:

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
    --tasks lambada_openai,hellaswag \
    --device cuda:0 \
    --batch_size auto:4

The full list of supported arguments is provided here, and on the terminal by calling lm_eval -h. Alternatively, you can use lm-eval instead of lm_eval.

Note

Just like you can provide a local path to transformers.AutoModel, you can also provide a local path to lm_eval via --model_args pretrained=/path/to/model
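
For example, assuming a checkpoint saved locally at /path/to/model (a placeholder path), an invocation might look like:

lm_eval --model hf \
    --model_args pretrained=/path/to/model \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8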

Multi-GPU Evaluation with Hugging Face accelerate

To parallelize evaluation of HuggingFace models across multiple GPUs, we leverage the accelerate 🚀 library as follows:

accelerate launch -m lm_eval --model hf \
    --tasks lambada_openai,arc_easy \
    --batch_size 16

This will perform data-parallel evaluation: that is, placing a single full copy of your model onto each available GPU and splitting batches across GPUs to evaluate on K GPUs K times faster than on one.

If your model is too large to fit on a single GPU, you can use accelerate with Fully Sharded Data Parallel (FSDP), which splits the weights of the model across your data-parallel ranks. To enable this, select YES when asked Do you want to use FullyShardedDataParallel? while running accelerate config. To enable memory-efficient loading, also select YES when asked Do you want each individually wrapped FSDP unit to broadcast module parameters from rank 0 at the start?. This ensures that only the rank 0 process loads the model and then broadcasts the parameters to the other ranks, instead of every rank loading all parameters, which can cause large RAM usage spikes around the start of the script and may lead to errors.
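
As a sketch, once accelerate config has written an FSDP configuration, evaluation is launched the same way as the data-parallel case (fsdp_config.yaml below is a hypothetical file path; if --config_file is omitted, accelerate picks up its saved default configuration):

accelerate launch --config_file fsdp_config.yaml -m lm_eval --model hf \
    --tasks lambada_openai,arc_easy \
    --batch_size 16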

To pass even more advanced keyword arguments to accelerate, we also allow for the following arguments (an illustrative invocation follows the list):

  • device_map_option: How to split model weights across available GPUs. Defaults to "auto".
  • max_memory_per_gpu: the max GPU memory to use per GPU in loading the model.
  • max_cpu_memory: the max amount of CPU memory to use when offloading the model weights to RAM.
  • offload_folder: a folder where model weights will be offloaded to disk if needed.
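
As an illustrative sketch, these arguments can be supplied through --model_args alongside parallelize=True like any other constructor argument (the model choice EleutherAI/pythia-1.4b, the 20GiB memory cap, and the ./offload folder are assumptions, not recommendations):

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-1.4b,parallelize=True,device_map_option=auto,max_memory_per_gpu=20GiB,offload_folder=./offload \
    --tasks lambada_openai \
    --batch_size 8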

To use accelerate with the lm-eval command, use

accelerate launch --no_python lm-eval --model ...

Tensor + Data Parallel and Optimized Inference with vLLM

We also support vLLM for faster inference on supported model types, covering single-GPU and multi-GPU setups with tensor parallelism, data parallelism, or a combination of both. For example:

lm_eval --model vllm \
    --model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
    --tasks lambada_openai \
    --batch_size auto
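
For instance, a concrete invocation might look like the following (an illustrative sketch assuming two GPUs and EleutherAI/pythia-1.4b, run as two single-GPU data-parallel replicas):

lm_eval --model vllm \
    --model_args pretrained=EleutherAI/pythia-1.4b,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.8,data_parallel_size=2 \
    --tasks lambada_openai \
    --batch_size auto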

For a full list of supported vLLM configurations, please reference our vLLM integration and the vLLM documentation.

vLLM occasionally differs in output from Hugging Face. We treat Hugging Face as the reference implementation, and provide a script for checking the validity of vLLM results against HF.

Model APIs and Inference Servers

Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.

To call a hosted model, use:

export OPENAI_API_KEY=YOUR_KEY_HERE
lm_eval --model openai-completions \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag

We also support using your own local inference server, provided it implements a version of the OpenAI ChatCompletions endpoint, and passing trained HuggingFace artifacts and tokenizers to it:

lm_eval --model local-chat-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1

Note that for externally hosted models, configs such as --device and --batch_size should not be used and do not function. Just like you can use --model_args to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.

API or Inference Server | Implemented? | --model <xxx> name | Models supported | Request Types
OpenAI Completions | ✔️ | openai-completions | up to code-davinci-002 | generate_until, loglikelihood, loglikelihood_rolling
OpenAI ChatCompletions | ✔️ | openai-chat-completions, local-chat-completions | All ChatCompletions API models | generate_until (no logprobs)
Anthropic | ✔️ | anthropic | Supported Anthropic Engines | generate_until (no logprobs)
Textsynth | ✔️ | textsynth | All supported engines | generate_until, loglikelihood, loglikelihood_rolling
Cohere | ⌛ (blocked on Cohere API bug) | N/A | All cohere.generate() engines | generate_until, loglikelihood, loglikelihood_rolling
Llama.cpp (via llama-cpp-python) | ✔️ | gguf, ggml | All models supported by llama.cpp | generate_until, loglikelihood, loglikelihood_rolling
vLLM | ✔️ | vllm | Most HF Causal Language Models | generate_until, loglikelihood, loglikelihood_rolling
Your local inference server! | ✔️ | local-chat-completions (using openai-chat-completions model type) | Any server address that accepts GET requests using HF models and mirrors OpenAI's ChatCompletions interface | generate_until

It is on our roadmap to create task variants designed to enable models which do not serve logprobs/loglikelihoods to be compared with generation performance of open-source models.

Other Frameworks

A number of other libraries contain scripts for calling the eval harness through their library. These include GPT-NeoX, Megatron-DeepSpeed, and mesh-transformer-jax.

Additional Features

If you have a Metal-compatible Mac, you can run the eval harness using the MPS back-end by replacing --device cuda:0 with --device mps (requires PyTorch version 2.1 or higher).
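
For example, a minimal sketch using the MPS back-end (EleutherAI/pythia-160m is an assumed model choice, small enough for most Apple Silicon machines):

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks lambada_openai \
    --device mps \
    --batch_size 8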

Note

You can inspect what the LM inputs look like by running the following command:

python write_out.py \
    --tasks all_tasks \
    --num_fewshot 5 \
    --num_examples 10 \
    --output_base_path /path/to/output/folder

This will write out one text file for each task.

To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the --check_integrity flag:

lm_eval --model openai-completions \
    --model_args engine=davinci \
    --tasks lambada_openai,hellaswag \
    --check_integrity

Advanced Usage Tips

For models loaded with the HuggingFace transformers library, any arguments provided via --model_args get passed to the relevant constructor directly. This means that anything you can do with AutoModel can be done with our library. For example, you can pass a local path via pretrained= or use models finetuned with PEFT by taking the call you would run to evaluate the base model and adding ,peft=PATH to the model_args argument:

lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
    --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
    --device cuda:0

GPTQ-quantized models can be loaded by specifying their file names via ,gptq=NAME (or ,gptq=True for default names) in the model_args argument:

lm_eval --model hf \
    --model_args pretrained=model-name-or-path,gptq=model.safetensors,gptq_use_triton=True \
    --tasks hellaswag

We support wildcards in task names; for example, you can run all of the machine-translated lambada tasks via --tasks lambada_openai_mt_*.
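
An illustrative full command (the hf backend and EleutherAI/pythia-160m are assumptions; the task pattern is quoted so the shell does not expand it before it reaches lm_eval):

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks "lambada_openai_mt_*" \
    --device cuda:0 \
    --batch_size 8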

To save evaluation results, provide an --output_path. We also support logging model responses with the --log_samples flag for post-hoc analysis.

Additionally, one can provide a directory with --use_cache to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring.
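
Putting these flags together, a sketch of a run that saves results, logs per-sample responses, and caches requests (the output/pythia-160m and lm_cache paths are placeholders) might look like:

lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8 \
    --output_path output/pythia-160m \
    --log_samples \
    --use_cache lm_cache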

For a full list of supported arguments, check out the interface guide in our documentation!

Visualizing Results

You can use Zeno to visualize the results of your eval harness runs.

First, head to hub.zenoml.com to create an account and get an API key on your account page. Add this key as an environment variable:

export ZENO_API_KEY=[your api key]

You'll also need to install the lm_eval[zeno] package extra.
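
For example, from a source checkout of the repository:

pip install -e ".[zeno]"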

To visualize the results, run the eval harness with the --log_samples and --output_path flags. We expect the output_path directory to contain multiple folders that represent individual model names. You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.

lm_eval \
    --model hf \
    --model_args pretrained=EleutherAI/gpt-j-6B \
    --tasks hellaswag \
    --device cuda:0 \
    --batch_size 8 \
    --log_samples \
    --output_path output/gpt-j-6B

Then, you can upload the resulting data using the zeno_visualize script:

python scripts/zeno_visualize.py \
    --data_path output \
    --project_name "Eleuther Project"

This will use all subfolders in data_path as different models and upload all tasks within these model folders to Zeno. If you run the eval harness on multiple tasks, the project_name will be used as a prefix and one project will be created per task.

How to Contribute or Learn More?

For more information on the library and how everything fits together, check out all of our documentation pages! We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help.

Implementing new tasks

To implement a new task in the eval harness, see this guide.

In general, we follow this priority list for addressing concerns about prompting and other eval details:

  1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.
  2. If there is a clear and unambiguous official implementation, use that procedure.
  3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.
  4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers.

These are guidelines and not rules, and can be overruled in special circumstances.

We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from Language Models are Few Shot Learners as our original goal was specifically to compare results with that paper.

Support

The best way to get support is to open an issue on this repo or join the EleutherAI Discord server. The #lm-thunderdome channel is dedicated to developing this project and the #release-discussion channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!

Cite as

@misc{eval-harness,
  author       = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
  title        = {A framework for few-shot language model evaluation},
  month        = 12,
  year         = 2023,
  publisher    = {Zenodo},
  version      = {v0.4.0},
  doi          = {10.5281/zenodo.10256836},
  url          = {https://zenodo.org/records/10256836}
}