This repo is an open effort to instruction-tune popular pretrained language models on publicly available datasets. We release this repo and will keep updating it with:
- Code for finetuning language models with the latest techniques and instruction datasets in a unified format.
- Code for running standard evaluation on a range of benchmarks, targeting different capabilities of these language models.
- Checkpoints or other useful artifacts that we build in our exploration.
Please see our first paper How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources for more thoughts behind this project and our initial findings. Please see our second paper Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2 for newer results using Llama-2 models and direct preference optimization. We are still working on more models, so stay tuned for future work!
- [2023-11-27] We released Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2. Check out our models here. We have added a DPO finetuning script for replicating our results.
- [2023-09-26] We switched to using the official alpaca-eval library to run AlpacaFarm evaluation, but with regenerated, longer reference outputs. This will change the numbers reported in the paper. We will update the paper soon.
- [2023-09-25] Supported using vLLM for our evaluations, which speeds up the evaluation by 10x.
- [2023-09-17] Supported LoRA and QLoRA finetuning. See here for more details.
- [2023-08-18] Added support for ToxiGen/TruthfulQA evaluation. Check our scripts/eval/ for examples of running them.
- [2023-08-08] Supported several new instruction datasets, including LIMA / WizardLM / Open-Orca. See the preparation script for details. Performance hasn't been evaluated yet.
- [2023-08-06] Supported LLaMa 2 finetuning and FlashAttention-2 by bumping the version of transformers and many other dependencies.
- [2023-06-29] Added licensing info for our released models.
- [2023-06-09] Released Tülu (a suite of LLaMa models fully-finetuned on a strong mix of datasets) and many other checkpoints on HuggingFace [Links].
- [2023-06-09] Initial release of the codebase containing the training and evaluation code for our arxiv paper.
To run training, evaluation, or inference for our finetuned models, you need to install the required packages by running the following command (after installing pytorch):
pip install -r requirements.txt
If you just want the dependencies for the weight diff script, use:
pip install -r weight-diff-requirements.txt
If you'd like to experiment with AI2's OLMo models, you should also install:
pip install ai2-olmo
If you'd like to run experiments within a Docker environment, you can create one using:
docker build --build-arg CUDA=11.8.0 --build-arg TARGET=cudnn8-devel --build-arg DIST=ubuntu20.04 . -t <your tag here>
If you are internal to AI2, you can use this pre-built beaker image here.
We include a collection of representative instruction datasets in our exploration and are adding new ones to our list. We unify them into the same chat format. To download and prepare these datasets, simply run the following command:
./scripts/prepare_train_data.sh
Please check these datasets for licenses and restrictions around their use!
You can also find the processed Tulu v1 and Tulu v2 SFT datasets on HuggingFace.
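For a concrete picture of the unified format, here is a minimal sketch of what a processed training example looks like (the field names are illustrative; check the files produced by the preparation script for the exact schema):

```python
# A minimal sketch of a single training example in the unified chat format.
# The exact top-level fields may differ slightly from what the preparation
# script emits; the key idea is a list of role-tagged messages.
example = {
    "dataset": "flan_v2",   # name of the source dataset (illustrative)
    "id": "flan_v2_12345",  # unique example id (illustrative)
    "messages": [
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "The capital of France is Paris."},
    ],
}
```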
Generally, most huggingface-compatible causal language models should work fine with our codebase, potentially with some adjustments for different tokenizers etc. Some models may require additional requests to download. E.g., for LLaMa 1 and 2, please consult the Hugging Face documentation for requesting access and converting them to a huggingface-compatible format.
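As a quick sanity check that a converted checkpoint is huggingface-compatible, you can load it with the standard transformers API (a minimal sketch; the local path below is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path to a local huggingface-format checkpoint (or a Hub model name).
model_name_or_path = "/path/to/llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, torch_dtype="auto")
print(model.config.model_type)  # e.g. "llama" if conversion worked
```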
You can use the following command to run instruction tuning (finetuning a pretrained model to follow instructions):
./scripts/finetune_with_accelerate.sh
Make sure to adjust model_name_or_path, tokenizer_name, train_file, and output_dir to your models / data / setting. By default, this uses deepspeed with accelerate.
We support LoRA finetuning, wherein only a small number of parameters are updated, resulting in faster and cheaper training. For even more efficiency, we also support QLoRA finetuning, wherein the non-trained (underlying) model parameters are quantised to 4 bits during training. This means you can train a 70B Llama model on a single 80GB A100! Please refer to the respective papers for more details.
Please also note you cannot currently run QLoRA with model parallelism - only data-parallel training is supported, so you cannot train a model that does not fit on one GPU. For LoRA, you can use deepspeed + zero-3 to achieve model parallelism (and FSDP is not currently supported).
Please see ./scripts/finetune_lora_with_accelerate.sh and ./scripts/finetune_qlora_with_accelerate.sh for example hyperparameters. We found a larger rank (e.g. 256) and higher learning rate (e.g. 2e-4) worked best. Additionally, we found that QLoRA tended to achieve results similar to LoRA, while LoRA itself sometimes fell behind full finetuning, especially on long, complex generation tasks. However, for most purposes, LoRA training essentially matches full-finetuning performance. We recommend merging modules learnt with QLoRA into a dequantised model (run our merge script with the --qlora flag).
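To make the LoRA/QLoRA setup concrete, here is a minimal sketch using the peft and bitsandbytes integrations in transformers. The values below are illustrative only; the exact hyperparameters and flags we use live in the two scripts above.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the frozen base model with 4-bit quantised weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "/path/to/llama-2-7b-hf",        # placeholder path
    quantization_config=bnb_config,
)

# LoRA: attach low-rank adapters; only these parameters are trained.
lora_config = LoraConfig(
    r=256,                # a larger rank (e.g. 256) worked best in our runs
    lora_alpha=256,       # illustrative value
    lora_dropout=0.05,    # illustrative value
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```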
For an example of how to fully finetune a model with DPO, see scripts/dpo_train_with_accelerate.sh. Note that you will need at least 8 80GB A100s to train a 7B model, and more compute for anything larger. We have not tested multi-node training with this script, but it should work.
Our script also supports PEFT training with QLoRA. See scripts/dpo_train_with_qlora.sh for an example. We have not trained models with this, so it may require additional hyperparameter tuning to achieve reasonable results.
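For reference, the heart of DPO is a simple preference loss computed against a frozen reference model. Below is a minimal sketch of the objective only (not our training script), with beta as the usual KL-control hyperparameter:

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (sketch).

    Each argument is the summed log-probability of a chosen/rejected response
    under the policy or the frozen reference model; beta controls how far the
    policy may drift from the reference.
    """
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```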
Our checkpoints can be found:
Our Tulu V1 models were released as weight diffs (due to the LLaMa 1 license). We use a slightly modified form of the Alpaca weight diff script, which runs the same way.
To merge a model:
- Download the relevant LLaMa model and convert it to Hugging Face format (see above).
- Download our repository and install the right dependencies (see above).
- Download the model diff you want.
- Run the command below:
python scripts/weight_diff.py recover --path_raw ${hf_llama_path} --path_tuned ${output_path} --path_diff ${diff_location}
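Conceptually, the recover step adds the released diff weights back onto the base LLaMa weights. The sketch below shows the idea only; the actual scripts/weight_diff.py does additional bookkeeping (e.g. integrity checks), so use it for real recovery. Paths here are placeholders.

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder paths; real recovery should go through scripts/weight_diff.py.
base = AutoModelForCausalLM.from_pretrained("/path/to/llama-7b-hf")
diff = AutoModelForCausalLM.from_pretrained("/path/to/tulu-7b-diff")

# tuned = base + diff, applied parameter-by-parameter.
with torch.no_grad():
    for p_base, p_diff in zip(base.parameters(), diff.parameters()):
        p_diff.add_(p_base)

diff.save_pretrained("/path/to/recovered-tulu-7b")
```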
We provide scripts for running evaluation of Huggingface/OpenAI models on a list of standard benchmarks targeting the core capabilities of large language models. These benchmarks include:
- MMLU
- Grade School Math (GSM)
- Big-Bench Hard (BBH)
- TydiQA
- Codex HumanEval
- IFEval
- ToxiGen
- XSTest
- TruthfulQA
- AlpacaEval
We are working on adding more promising benchmarks to this list. Please stay tuned!
You can use the following script to download all the evaluation data:
./scripts/prepare_eval_data.sh
Evaluation scripts for different datasets are located under ./scripts/eval. For example, you can use the following command to run the MMLU evaluation script:
./scripts/eval/mmlu.sh
We release our human evaluation interface and collected annotations in the ./human_eval folder. Please see the corresponding README for more details.
This codebase is licensed under Apache 2.0 as given in LICENSE.
The license we use for released V1 models (along with the base model licenses) can be found in model_licenses/tulu_license.txt - just replace <MODELNAME> with the actual model name (i.e., the name on HuggingFace).
V2 models are licensed under the low-risk AI2 ImpACT license. See here for more details.
If you used this repository or our models, please cite our work:
@misc{wang2023far,
title={How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources},
author={Yizhong Wang and Hamish Ivison and Pradeep Dasigi and Jack Hessel and Tushar Khot and Khyathi Raghavi Chandu and David Wadden and Kelsey MacMillan and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2306.04751},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@misc{ivison2023camels,
title={Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2},
author={Hamish Ivison and Yizhong Wang and Valentina Pyatkin and Nathan Lambert and Matthew Peters and Pradeep Dasigi and Joel Jang and David Wadden and Noah A. Smith and Iz Beltagy and Hannaneh Hajishirzi},
year={2023},
eprint={2311.10702},
archivePrefix={arXiv},
primaryClass={cs.CL}
}