Pretrain, finetune, deploy 20+ LLMs on your own data
Uses the latest state-of-the-art techniques:
✅ fp4/8/16/32 ✅ LoRA, QLoRA, Adapter (v1, v2) ✅ flash attention ✅ FSDP ✅ 1-1000+ GPUs/TPUs
Lightning.ai • Install • Get started • Use LLMs • Finetune, pretrain LLMs • Models • Features • Training recipes (YAML) • Design principles
Install LitGPT with all dependencies (including CLI, quantization, tokenizers for all models, etc.):
pip install 'litgpt[all]'
Advanced install options
Install from source:
git clone https://github.com/Lightning-AI/litgpt
cd litgpt
pip install -e '.[all]'
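To verify the installation, list the available subcommands (the exact set may vary slightly between versions):
# Show the LitGPT CLI entry point and its subcommands
litgpt --help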
LitGPT is a command-line tool to use, pretrain, finetune and deploy LLMs.
Here's an example showing how to use the Mistral 7B LLM.
# 1) Download a pretrained model
litgpt download --repo_id mistralai/Mistral-7B-Instruct-v0.2
# 2) Chat with the model
litgpt chat \
  --checkpoint_dir checkpoints/mistralai/Mistral-7B-Instruct-v0.2
>> Prompt: What do Llamas eat?
For more information, refer to the download and inference tutorials.
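For one-off, non-interactive generation, here is a minimal sketch assuming the litgpt generate base subcommand and its --prompt and --max_new_tokens options are available in your installed version (check litgpt generate base --help):
# Answer a single prompt without starting an interactive session
litgpt generate base \
  --checkpoint_dir checkpoints/mistralai/Mistral-7B-Instruct-v0.2 \
  --prompt "What do Llamas eat?" \
  --max_new_tokens 128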
Finetune a model to specialize it on your own custom dataset:
# 1) Download a pretrained model
litgpt download --repo_id microsoft/phi-2
# 2) Finetune the model
curl -L https://huggingface.co/datasets/medalpaca/medical_meadow_health_advice/raw/main/medical_meadow_health_advice.json -o my_custom_dataset.json
litgpt finetune lora \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --data JSON \
  --data.json_path my_custom_dataset.json \
  --data.val_split_fraction 0.1 \
  --out_dir out/phi-2-lora
# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/phi-2-lora/final
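The JSON loader used above expects an Alpaca-style list of records with instruction, optional input, and output fields. A minimal sketch of writing such a dataset by hand (the records below are made up for illustration):
# Create a tiny Alpaca-style dataset (instruction / input / output records)
cat > my_custom_dataset.json << 'EOF'
[
  {"instruction": "Give one tip for staying hydrated.", "input": "", "output": "Drink water regularly throughout the day, especially during exercise."},
  {"instruction": "Summarize the advice.", "input": "Sleep 7 to 9 hours per night to support recovery.", "output": "Get 7 to 9 hours of sleep."}
]
EOF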
Train an LLM from scratch on your own data via pretraining:
mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt
# 1) Download a tokenizer
litgpt download \
  --repo_id EleutherAI/pythia-160m \
  --tokenizer_only True
# 2) Pretrain the model
litgpt pretrain \
  --model_name pythia-160m \
  --tokenizer_dir checkpoints/EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model
# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/custom-model/final
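Before launching a long run, it helps to relate the corpus size to --train.max_tokens. As a rough rule of thumb, English text averages about 4 characters per token:
# Rough token estimate for the corpus (~4 characters per token)
wc -c custom_texts/*.txt
# If the two books total roughly 2 MB, that is on the order of 500K tokens,
# so --train.max_tokens 10_000_000 will cycle over this small corpus about 20 times.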
Continued pretraining is another way of finetuning that specializes an already pretrained model by training it on custom data:
mkdir -p custom_texts
curl https://www.gutenberg.org/cache/epub/24440/pg24440.txt --output custom_texts/book1.txt
curl https://www.gutenberg.org/cache/epub/26393/pg26393.txt --output custom_texts/book2.txt
# 1) Download a pretrained model
litgpt download --repo_id EleutherAI/pythia-160m
# 2) Continue pretraining the model
litgpt pretrain \
  --model_name pythia-160m \
  --initial_checkpoint_dir checkpoints/EleutherAI/pythia-160m \
  --data TextFiles \
  --data.train_data_path "custom_texts/" \
  --train.max_tokens 10_000_000 \
  --out_dir out/custom-model
# 3) Chat with the model
litgpt chat \
  --checkpoint_dir out/custom-model/final
Note
Use, finetune, pretrain, and deploy 20+ LLMs (full list).
Model | Model size | Author | Reference |
---|---|---|---|
CodeGemma | 7B | Google Team, Google Deepmind | |
Code Llama | 7B, 13B, 34B, 70B | Meta AI | Rozière et al. 2023 |
Dolly | 3B, 7B, 12B | Databricks | Conover et al. 2023 |
Falcon | 7B, 40B, 180B | TII UAE | TII 2023 |
FreeWilly2 (Stable Beluga 2) | 70B | Stability AI | Stability AI 2023 |
Function Calling Llama 2 | 7B | Trelis | Trelis et al. 2023 |
Gemma | 2B, 7B | Google Team, Google Deepmind | |
Llama 2 | 7B, 13B, 70B | Meta AI | Touvron et al. 2023 |
LongChat | 7B, 13B | LMSYS | LongChat Team 2023 |
Mistral | 7B | Mistral AI | Mistral website |
Nous-Hermes | 7B, 13B, 70B | NousResearch | Org page |
OpenLLaMA | 3B, 7B, 13B | OpenLM Research | Geng & Liu 2023 |
Phi | 1.3B, 2.7B | Microsoft Research | Li et al. 2023 |
Platypus | 7B, 13B, 70B | Lee et al. | Lee, Hunter, and Ruiz 2023 |
Pythia | {14,31,70,160,410}M, {1,1.4,2.8,6.9,12}B | EleutherAI | Biderman et al. 2023 |
RedPajama-INCITE | 3B, 7B | Together | Together 2023 |
StableCode | 3B | Stability AI | Stability AI 2023 |
StableLM | 3B, 7B | Stability AI | Stability AI 2023 |
StableLM Zephyr | 3B | Stability AI | Stability AI 2023 |
TinyLlama | 1.1B | Zhang et al. | Zhang et al. 2023 |
Vicuna | 7B, 13B, 33B | LMSYS | Li et al. 2023 |
✅ State-of-the-art optimizations: Flash Attention v2, multi-GPU support via fully-sharded data parallelism, optional CPU offloading, and TPU and XLA support.
✅ Pretrain, finetune, and deploy
✅ Reduce compute requirements with low-precision settings: FP16, BF16, and FP16/FP32 mixed.
✅ Lower memory requirements with quantization: 4-bit floats, 8-bit integers, and double quantization.
✅ Configuration files for great out-of-the-box performance.
✅ Parameter-efficient finetuning: LoRA, QLoRA, Adapter, and Adapter v2.
✅ Exporting to other popular model weight formats.
✅ Many popular datasets for pretraining and finetuning, and support for custom datasets.
✅ Readable and easy-to-modify code to experiment with the latest research ideas.
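For example, the quantization and parameter-efficient finetuning options combine into QLoRA-style training; a minimal sketch using the quantize and precision settings documented in the configs below:
# LoRA finetuning with 4-bit NormalFloat quantization (QLoRA-style)
litgpt finetune lora \
  --checkpoint_dir checkpoints/microsoft/phi-2 \
  --quantize bnb.nf4 \
  --precision bf16-true \
  --out_dir out/phi-2-qlora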
LitGPT comes with validated recipes (YAML configs) to train models under different conditions.
We generated these recipes based on the parameters we found to perform best under different training conditions.
litgpt finetune lora \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml
Browse all training recipes here.
Configs let you customize all granular training parameters, for example:
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf
# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b
# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true
...
Example: LoRA finetuning config
# The path to the base model's checkpoint directory to load for finetuning. (type: <class 'Path'>, default: checkpoints/stabilityai/stablelm-base-alpha-3b)
checkpoint_dir: checkpoints/meta-llama/Llama-2-7b-hf
# Directory in which to save checkpoints and logs. (type: <class 'Path'>, default: out/lora)
out_dir: out/finetune/qlora-llama2-7b
# The precision to use for finetuning. Possible choices: "bf16-true", "bf16-mixed", "32-true". (type: Optional[str], default: null)
precision: bf16-true
# If set, quantize the model with this algorithm. See ``tutorials/quantize.md`` for more information. (type: Optional[Literal['nf4', 'nf4-dq', 'fp4', 'fp4-dq', 'int8-training']], default: null)
quantize: bnb.nf4
# How many devices/GPUs to use. (type: Union[int, str], default: 1)
devices: 1
# The LoRA rank. (type: int, default: 8)
lora_r: 32
# The LoRA alpha. (type: int, default: 16)
lora_alpha: 16
# The LoRA dropout value. (type: float, default: 0.05)
lora_dropout: 0.05
# Whether to apply LoRA to the query weights in attention. (type: bool, default: True)
lora_query: true
# Whether to apply LoRA to the key weights in attention. (type: bool, default: False)
lora_key: false
# Whether to apply LoRA to the value weights in attention. (type: bool, default: True)
lora_value: true
# Whether to apply LoRA to the output projection in the attention block. (type: bool, default: False)
lora_projection: false
# Whether to apply LoRA to the weights of the MLP in the attention block. (type: bool, default: False)
lora_mlp: false
# Whether to apply LoRA to output head in GPT. (type: bool, default: False)
lora_head: false
# Data-related arguments. If not provided, the default is ``litgpt.data.Alpaca``.
data:
  class_path: litgpt.data.Alpaca2k
  init_args:
    mask_prompt: false
    val_split_fraction: 0.05
    prompt_style: alpaca
    ignore_index: -100
    seed: 42
    num_workers: 4
    download_dir: data/alpaca2k
# Training-related arguments. See ``litgpt.args.TrainArgs`` for details
train:
  # Number of optimizer steps between saving checkpoints (type: Optional[int], default: 1000)
  save_interval: 200
  # Number of iterations between logging calls (type: int, default: 1)
  log_interval: 1
  # Number of samples between optimizer steps across data-parallel ranks (type: int, default: 128)
  global_batch_size: 8
  # Number of samples per data-parallel rank (type: int, default: 4)
  micro_batch_size: 2
  # Number of iterations with learning rate warmup active (type: int, default: 100)
  lr_warmup_steps: 10
  # Number of epochs to train on (type: Optional[int], default: 5)
  epochs: 4
  # Total number of tokens to train on (type: Optional[int], default: null)
  max_tokens:
  # Limits the number of optimizer steps to run (type: Optional[int], default: null)
  max_steps:
  # Limits the length of samples (type: Optional[int], default: null)
  max_seq_length: 512
  # Whether to tie the embedding weights with the language modeling head weights (type: Optional[bool], default: null)
  tie_embeddings:
  # (type: float, default: 0.0003)
  learning_rate: 0.0002
  # (type: float, default: 0.02)
  weight_decay: 0.0
  # (type: float, default: 0.9)
  beta1: 0.9
  # (type: float, default: 0.95)
  beta2: 0.95
  # (type: Optional[float], default: null)
  max_norm:
  # (type: float, default: 6e-05)
  min_lr: 6.0e-05
# Evaluation-related arguments. See ``litgpt.args.EvalArgs`` for details
eval:
  # Number of optimizer steps between evaluation calls (type: int, default: 100)
  interval: 100
  # Number of tokens to generate (type: Optional[int], default: 100)
  max_new_tokens: 100
  # Number of iterations (type: int, default: 100)
  max_iters: 100
# The name of the logger to send metrics to. (type: Literal['wandb', 'tensorboard', 'csv'], default: csv)
logger_name: csv
# The random seed to use for reproducibility. (type: int, default: 1337)
seed: 1337
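Two derived quantities are worth keeping in mind when editing a config like this: the LoRA update is scaled by lora_alpha / lora_r, and the number of gradient accumulation steps per optimizer step works out to global_batch_size / (micro_batch_size × devices). With the values above:
# Derived from the example config (lora_alpha=16, lora_r=32, global_batch_size=8, micro_batch_size=2, devices=1)
echo "LoRA scaling factor: $(awk 'BEGIN { print 16/32 }')"   # 0.5
echo "Gradient accumulation steps: $(( 8 / (2 * 1) ))"       # 4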
Override any parameter in the CLI:
litgpt finetune lora \
  --config https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml \
  --lora_r 4
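You can also download a recipe, adjust it locally, and point --config at the local copy (saved here as lora.yaml):
# Fetch a recipe, edit it as needed, then train from the local file
curl -L https://raw.githubusercontent.com/Lightning-AI/litgpt/main/config_hub/finetune/llama-2-7b/lora.yaml -o lora.yaml
litgpt finetune lora --config lora.yaml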
We appreciate your feedback and contributions. If you have feature requests, questions, or want to contribute code or config files, please don't hesitate to use the GitHub Issue tracker.
We welcome all individual contributors, regardless of their level of experience or hardware. Your contributions are valuable, and we are excited to see what you can accomplish in this collaborative and supportive environment.
Tip
Unsure about contributing? Check out our How to Contribute to LitGPT guide.
If you have general questions about building with LitGPT, please join our Discord.
Note
If you are new to LitGPT, we recommend starting with the Zero to LitGPT: Getting Started with Pretraining, Finetuning, and Using LLMs tutorial.
Tutorials and in-depth feature documentation can be found below:
- Finetuning, incl. LoRA, QLoRA, and Adapters (tutorials/finetune.md)
- Pretraining (tutorials/pretrain.md)
- Model evaluation (tutorials/evaluation.md)
- Supported and custom datasets (tutorials/prepare_dataset.md)
- Quantization (tutorials/quantize.md)
- Tips for dealing with out-of-memory (OOM) errors (tutorials/oom.md)
Lightning AI has partnered with Google to add first-class support for Cloud TPUs in Lightning's frameworks and LitGPT, helping democratize AI for millions of developers and researchers worldwide.
Using TPUs with Lightning is as straightforward as changing one line of code.
We provide scripts fully optimized for TPUs in the XLA directory.
This implementation builds on Lit-LLaMA and nanoGPT, and it's powered by Lightning Fabric ⚡.
- @karpathy for nanoGPT
- @EleutherAI for GPT-NeoX and the Evaluation Harness
- @TimDettmers for bitsandbytes
- @Microsoft for LoRA
- @tridao for Flash Attention 2
Check out the projects below that use and build on LitGPT. If you have a project you'd like to add to this section, please don't hesitate to open a pull request.
🏆 NeurIPS 2023 Large Language Model Efficiency Challenge: 1 LLM + 1 GPU + 1 Day
The LitGPT repository was the official starter kit for the NeurIPS 2023 LLM Efficiency Challenge, a competition focused on finetuning an existing non-instruction-tuned LLM for 24 hours on a single GPU.
🦙 TinyLlama: An Open-Source Small Language Model
LitGPT powered the TinyLlama project and TinyLlama: An Open-Source Small Language Model research paper.
🍪 MicroLlama: MicroLlama-300M
MicroLlama is a 300M-parameter Llama model pretrained on 50B tokens, powered by TinyLlama and LitGPT.
If you use LitGPT in your research, please cite the following work:
@misc{litgpt-2023,
  author       = {Lightning AI},
  title        = {LitGPT},
  howpublished = {\url{https://github.com/Lightning-AI/litgpt}},
  year         = {2023},
}
LitGPT is released under the Apache 2.0 license.