👋 join us on Twitter, Discord and WeChat
- [2023/11] TurboMind supports loading HF models directly. Click here for details.
- [2023/11] TurboMind major upgrades, including: Paged Attention, faster attention kernels without sequence length limitation, 2x faster KV8 kernels, Split-K decoding (Flash Decoding), and W4A16 inference for sm_75
- [2023/09] TurboMind supports Qwen-14B
- [2023/09] TurboMind supports InternLM-20B
- [2023/09] TurboMind supports all features of Code Llama: code completion, infilling, chat / instruct, and python specialist. Click here for deployment guide
- [2023/09] TurboMind supports Baichuan2-7B
- [2023/08] TurboMind supports FlashAttention-2.
- [2023/08] TurboMind supports Qwen-7B, dynamic NTK-RoPE scaling and dynamic logN scaling
- [2023/08] TurboMind supports Windows (tp=1)
- [2023/08] TurboMind supports 4-bit inference, 2.4x faster than FP16, the fastest open-source implementation🚀. Check this guide for detailed info
- [2023/08] LMDeploy has launched on the HuggingFace Hub, providing ready-to-use 4-bit models.
- [2023/08] LMDeploy supports 4-bit quantization using the AWQ algorithm.
- [2023/07] TurboMind supports Llama-2 70B with GQA.
- [2023/07] TurboMind supports Llama-2 7B/13B.
- [2023/07] TurboMind supports tensor-parallel inference of InternLM.
LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams. It has the following core features:
- Efficient Inference Engine (TurboMind): Based on FasterTransformer, we have implemented an efficient inference engine, TurboMind, which supports the inference of LLaMA and its variant models on NVIDIA GPUs.
- Interactive Inference Mode: By caching the k/v of attention during multi-round dialogue, the engine remembers dialogue history and avoids reprocessing historical sessions.
- Multi-GPU Model Deployment and Quantization: We provide comprehensive model deployment and quantization support, validated at different model scales.
- Persistent Batch Inference: Further optimization of model execution efficiency.
LMDeploy has two inference backends, PyTorch and TurboMind. You can run `lmdeploy list` to check the supported model names.
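For example, run it from the shell (the output format may differ between versions):

```shell
# print the model names recognized by the inference backends
lmdeploy list
```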
> **Note**
> W4A16 inference requires NVIDIA GPUs with Ampere architecture or above.

Models supported by TurboMind:

Models | Tensor Parallel | FP16 | KV INT8 | W4A16 | W8A8 |
---|---|---|---|---|---|
Llama | Yes | Yes | Yes | Yes | No |
Llama2 | Yes | Yes | Yes | Yes | No |
SOLAR | Yes | Yes | Yes | Yes | No |
InternLM-7B | Yes | Yes | Yes | Yes | No |
InternLM-20B | Yes | Yes | Yes | Yes | No |
QWen-7B | Yes | Yes | Yes | Yes | No |
QWen-14B | Yes | Yes | Yes | Yes | No |
Baichuan-7B | Yes | Yes | Yes | Yes | No |
Baichuan2-7B | Yes | Yes | Yes | Yes | No |
Code Llama | Yes | Yes | No | No | No |

Models supported by PyTorch:

Models | Tensor Parallel | FP16 | KV INT8 | W4A16 | W8A8 |
---|---|---|---|---|---|
Llama | Yes | Yes | No | No | No |
Llama2 | Yes | Yes | No | No | No |
InternLM-7B | Yes | Yes | No | No | No |
- Case I: output token throughput with fixed input and output token numbers (1, 2048)
- Case II: request throughput with real conversation data

Test setting: LLaMA-7B, NVIDIA A100 (80G)
The output token throughput of TurboMind exceeds 2000 tokens/s, which is about 5% - 15% higher than DeepSpeed overall and outperforms Hugging Face Transformers by up to 2.3x. The request throughput of TurboMind is 30% higher than vLLM's.
Install lmdeploy with pip (Python 3.8+) or from source:

```shell
pip install lmdeploy
```
> **Note**
> `pip install lmdeploy` installs only the required runtime packages. If users want to run code from modules like `lmdeploy.lite` and `lmdeploy.serve`, they need to install the extra required packages. For instance, running `pip install lmdeploy[lite]` installs the extra dependencies for the `lmdeploy.lite` module.
- `all`: Install lmdeploy with all dependencies in `requirements.txt`
- `lite`: Install lmdeploy with extra dependencies in `requirements/lite.txt`
- `serve`: Install lmdeploy with dependencies in `requirements/serve.txt`
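
For example:

```shell
# runtime only
pip install lmdeploy

# runtime plus quantization (lite) extras
pip install lmdeploy[lite]

# runtime plus serving extras
pip install lmdeploy[serve]

# everything
pip install lmdeploy[all]
```

If your shell treats square brackets specially (e.g. zsh), quote the argument: `pip install 'lmdeploy[lite]'`.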
To use the TurboMind inference engine, you must first convert the model into the TurboMind format. Currently, both online and offline conversion are supported. With online conversion, TurboMind can load the Hugging Face model directly, while with offline conversion, you need to save the converted model before using it.
The following uses internlm/internlm-chat-7b-v1_1 as an example to show how to use TurboMind with online conversion. You can refer to load_hf.md for other methods.
```shell
lmdeploy chat turbomind internlm/internlm-chat-7b-v1_1 --model-name internlm-chat-7b
```
> **Note**
> The internlm/internlm-chat-7b-v1_1 model will be downloaded under the `.cache` folder. You can also use a local path here.
> **Note**
> When inferring with FP16 precision, the InternLM-7B model requires at least 15.7 GB of GPU memory on TurboMind.
> It is recommended to use NVIDIA cards such as the 3090, V100, and A100. Disabling GPU ECC can free up 10% of memory; try `sudo nvidia-smi --ecc-config=0` and reboot the system.
> **Note**
> Tensor parallelism is available for inference on multiple GPUs. Add `--tp=<num_gpu>` to the `chat` command to enable runtime TP.
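If you prefer offline conversion, the sketch below uses the `lmdeploy convert` sub-command; the output directory (`./workspace` here) and argument details are assumptions that may differ across versions, so check `lmdeploy convert --help`:

```shell
# convert a locally downloaded HF model into TurboMind format ahead of time
# (assumes the converted model lands in ./workspace, the default we observed)
lmdeploy convert internlm-chat-7b ./internlm-chat-7b-v1_1

# chat against the converted model
lmdeploy chat turbomind ./workspace
```

With online loading, runtime tensor parallelism on two GPUs looks like `lmdeploy chat turbomind internlm/internlm-chat-7b-v1_1 --model-name internlm-chat-7b --tp=2`.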
To launch a Gradio demo:

```shell
# install lmdeploy with extra dependencies
pip install lmdeploy[serve]

lmdeploy serve gradio internlm/internlm-chat-7b-v1_1 --model-name internlm-chat-7b
```
Launch the inference server by:

```shell
# install lmdeploy with extra dependencies
pip install lmdeploy[serve]

lmdeploy serve api_server internlm/internlm-chat-7b-v1_1 --model-name internlm-chat-7b --instance_num 32 --tp 1
```
Then, you can communicate with it from the command line:

```shell
# api_server_url is what the api_server prints, e.g. http://localhost:23333
lmdeploy serve api_client api_server_url
```
or through the web UI:

```shell
# api_server_url is what the api_server prints, e.g. http://localhost:23333
# server_name and server_port here are for the gradio UI
# example: lmdeploy serve gradio http://localhost:23333 --server_name localhost --server_port 6006
lmdeploy serve gradio api_server_url --server_name ${gradio_ui_ip} --server_port ${gradio_ui_port}
```
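You can also call the HTTP API directly. The curl request below is only an illustration and assumes an OpenAI-style `/v1/chat/completions` route; the routes your server actually exposes may differ:

```shell
# illustrative only: the endpoint path and payload fields are assumptions,
# not a confirmed part of the lmdeploy RESTful API
curl http://localhost:23333/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "internlm-chat-7b",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```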
Refer to restful_api.md for more details.
For detailed instructions on inference with PyTorch models, see here.
```shell
lmdeploy chat torch $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```
For inference on multiple GPUs with DeepSpeed:

```shell
deepspeed --module --num_gpus 2 lmdeploy.pytorch.chat \
    $NAME_OR_PATH_TO_HF_MODEL \
    --max_new_tokens 64 \
    --temperature 0.8 \
    --top_p 0.95 \
    --seed 0
```
You need to install deepspeed first to use this feature:

```shell
pip install deepspeed
```
LMDeploy uses the AWQ algorithm for model weight quantization.
Click here to view the test results for weight int4 usage.
Click here to view the usage method, implementation formula, and test results for kv int8.
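As a rough sketch, weight quantization is driven by the `lmdeploy lite auto_awq` sub-command; the flag spellings below are assumptions that have changed between releases, so check `lmdeploy lite auto_awq --help`:

```shell
# quantize an HF model to 4-bit weights with AWQ
# (flag names are illustrative and differ across lmdeploy releases)
lmdeploy lite auto_awq $NAME_OR_PATH_TO_HF_MODEL \
    --w-bits 4 \
    --w-group-size 128 \
    --work-dir ./quantized-model
```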
> **Warning**
> Runtime tensor parallelism for quantized models is not available. Please set `--tp` on `deploy` to enable static TP.
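For example, assuming the `deploy` step corresponds to `lmdeploy convert` in your version (the flag placement is an assumption; consult the command's help):

```shell
# hypothetical sketch: set the tensor-parallel degree at conversion/deploy time
# so the quantized model is statically sharded across 2 GPUs
lmdeploy convert internlm-chat-7b ./quantized-model --tp 2
```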
We appreciate all contributions to LMDeploy. Please refer to CONTRIBUTING.md for the contributing guideline.
This project is released under the Apache 2.0 license.