| Documentation | Blog | Paper | Discord |
Please use the `jais` branch for Jais support. The current code only works with the vLLM `0.2.1-post1` tag.
- Clone this repo: vllm-jais
- Copy the file `jais.py` into the directory `vllm/model_executor/models/` in your vLLM installation
- Update `__init__.py`
- Download the Jais model from HuggingFace
- Update the `config.json` file if required (see the note below)
- Run the `main_jais.py` file (a minimal sketch of such a script follows this list)
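`main_jais.py` is provided in this repo; the following is only a hedged sketch of what such an offline-inference script typically looks like with vLLM's Python API. The model path, prompts, and sampling settings are placeholders, not values taken from this repo.

```python
# Hedged sketch of a minimal offline-inference script (not the actual main_jais.py).
from vllm import LLM, SamplingParams

# Path to the downloaded Jais weights (with the patched config.json) -- placeholder.
MODEL_PATH = "/path/to/jais-13b"

prompts = [
    "The capital of the UAE is",
    "Explain paged attention in one sentence:",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

# trust_remote_code is typically needed for models that ship custom code on HuggingFace.
llm = LLM(model=MODEL_PATH, trust_remote_code=True, tensor_parallel_size=1)

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"Prompt: {output.prompt!r}")
    print(f"Generated: {output.outputs[0].text!r}")
```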
- Tested only with the `13b` and `30b` models
- Works only with the vLLM `0.2.1-post1` tag
- `13b` can only be used on a single GPU due to non-divisibility of the FF layer dim
- `30b` can only be used on either a single GPU or two GPUs due to non-divisibility of the FF layer dim (a hedged divisibility check is sketched after this list)
- Need to modify the `config.json` file to add extra attributes (see the note below)
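The single-GPU / two-GPU restriction comes from whether the feed-forward (FF) dimension divides evenly across tensor-parallel ranks. Below is a hedged helper for checking this from `config.json`; the field names `n_inner` and `intermediate_size` are assumptions about how the FF dimension may be stored, not confirmed for Jais.

```python
# Hedged helper: check whether the FF (intermediate) layer dimension divides
# evenly across the requested number of GPUs. The config field names tried
# below are assumptions, not confirmed for Jais.
import json

def ff_dim_divisible(config_path: str, tensor_parallel_size: int) -> bool:
    with open(config_path) as f:
        config = json.load(f)
    # Try common field names for the feed-forward / intermediate dimension.
    ff_dim = config.get("n_inner") or config.get("intermediate_size")
    if ff_dim is None:
        raise KeyError("Could not find an FF-dimension field in config.json")
    return ff_dim % tensor_parallel_size == 0

# Example: check whether 2-way tensor parallelism is possible (placeholder path).
print(ff_dim_divisible("/path/to/jais-30b/config.json", 2))
```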
NOTE: You might need to modify the `config.json` file after downloading from HuggingFace. The file will be located wherever the model weights are located. I have added the config files for Jais-13B and Jais-30B at `configs/config_13B.json` and `configs/config_30B.json` respectively. Replace the contents of the `config.json` with the corresponding copy.
For example, the following config might not be present in the `config.json` file:
"architectures": [
"GPT2LMHeadModel"
],
"""
The Second vLLM Bay Area Meetup (Jan 31st 5pm-7:30pm PT)
We are thrilled to announce our second vLLM Meetup! The vLLM team will share recent updates and roadmap. We will also have vLLM collaborators from IBM coming up to the stage to discuss their insights on LLM optimizations. Please register here and join us!
Latest News 🔥
- [2023/12] Added ROCm support to vLLM.
- [2023/10] We hosted the first vLLM meetup in SF! Please find the meetup slides here.
- [2023/09] We created our Discord server! Join us to discuss vLLM and LLM serving! We will also post the latest announcements and updates there.
- [2023/09] We released our PagedAttention paper on arXiv!
- [2023/08] We would like to express our sincere gratitude to Andreessen Horowitz (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/07] Added support for LLaMA-2! You can run and serve 7B/13B/70B LLaMA-2s on vLLM with a single command!
- [2023/06] Serving vLLM On any Cloud with SkyPilot. Check out a 1-click example to start the vLLM demo, and the blog post for the story behind vLLM development on the clouds.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered LMSYS Vicuna and Chatbot Arena since mid-April. Check out our blog post.
vLLM is a fast and easy-to-use library for LLM inference and serving.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graph
- Quantization: GPTQ, AWQ, SqueezeLLM
- Optimized CUDA kernels
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including parallel sampling, beam search, and more
- Tensor parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server (a hedged usage sketch follows this list)
- Support for NVIDIA GPUs and AMD GPUs
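As an illustration of the OpenAI-compatible server, vLLM ships an API server entrypoint (`python -m vllm.entrypoints.openai.api_server --model <model>`) that can be queried with plain HTTP. The sketch below assumes a server is already running locally on the default port 8000; the model name and prompt are placeholders.

```python
# Hedged sketch: query a running vLLM OpenAI-compatible server over HTTP.
import requests

response = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "facebook/opt-125m",   # placeholder: whichever model the server serves
        "prompt": "San Francisco is a",
        "max_tokens": 32,
        "temperature": 0.0,
    },
)
print(response.json()["choices"][0]["text"])
```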
vLLM seamlessly supports many Hugging Face models, including the following architectures:
- Aquila & Aquila2 (`BAAI/AquilaChat2-7B`, `BAAI/AquilaChat2-34B`, `BAAI/Aquila-7B`, `BAAI/AquilaChat-7B`, etc.)
- Baichuan & Baichuan2 (`baichuan-inc/Baichuan2-13B-Chat`, `baichuan-inc/Baichuan-7B`, etc.)
- BLOOM (`bigscience/bloom`, `bigscience/bloomz`, etc.)
- ChatGLM (`THUDM/chatglm2-6b`, `THUDM/chatglm3-6b`, etc.)
- DeciLM (`Deci/DeciLM-7B`, `Deci/DeciLM-7B-instruct`, etc.)
- Falcon (`tiiuae/falcon-7b`, `tiiuae/falcon-40b`, `tiiuae/falcon-rw-7b`, etc.)
- GPT-2 (`gpt2`, `gpt2-xl`, etc.)
- GPT BigCode (`bigcode/starcoder`, `bigcode/gpt_bigcode-santacoder`, etc.)
- GPT-J (`EleutherAI/gpt-j-6b`, `nomic-ai/gpt4all-j`, etc.)
- GPT-NeoX (`EleutherAI/gpt-neox-20b`, `databricks/dolly-v2-12b`, `stabilityai/stablelm-tuned-alpha-7b`, etc.)
- InternLM (`internlm/internlm-7b`, `internlm/internlm-chat-7b`, etc.)
- LLaMA & LLaMA-2 (`meta-llama/Llama-2-70b-hf`, `lmsys/vicuna-13b-v1.3`, `young-geng/koala`, `openlm-research/open_llama_13b`, etc.)
- Mistral (`mistralai/Mistral-7B-v0.1`, `mistralai/Mistral-7B-Instruct-v0.1`, etc.)
- Mixtral (`mistralai/Mixtral-8x7B-v0.1`, `mistralai/Mixtral-8x7B-Instruct-v0.1`, etc.)
- MPT (`mosaicml/mpt-7b`, `mosaicml/mpt-30b`, etc.)
- OPT (`facebook/opt-66b`, `facebook/opt-iml-max-30b`, etc.)
- Phi (`microsoft/phi-1_5`, `microsoft/phi-2`, etc.)
- Qwen (`Qwen/Qwen-7B`, `Qwen/Qwen-7B-Chat`, etc.)
- Qwen2 (`Qwen/Qwen2-7B-beta`, `Qwen/Qwen-7B-Chat-beta`, etc.)
- StableLM (`stabilityai/stablelm-3b-4e1t`, `stabilityai/stablelm-base-alpha-7b-v2`, etc.)
- Yi (`01-ai/Yi-6B`, `01-ai/Yi-34B`, etc.)
Install vLLM with pip or from source:
```bash
pip install vllm
```
Visit our documentation to get started.
We welcome and value any contributions and collaborations. Please check out CONTRIBUTING.md for how to get involved.
If you use vLLM for your research, please cite our paper:
```bibtex
@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}
```