| Rust Documentation | Python Documentation | Discord |
Mistral.rs is a fast LLM inference platform supporting inference on a variety of devices, quantization, and easy application integration with an OpenAI-API-compatible HTTP server and Python bindings.
Please submit requests for new models here.
Deploy with our easy-to-use APIs.

🦙 Run the Llama 3 model

After following installation instructions:

```
./mistralrs_server -i plain -m meta-llama/Meta-Llama-3-8B-Instruct -a llama
```

φ³ Run the Phi 3 model with a 128K context window

After following installation instructions:

```
./mistralrs_server -i plain -m microsoft/Phi-3-mini-128k-instruct -a phi3
```

φ³ 📷 Run the Phi 3 vision model: documentation and guide here

After following installation instructions:

```
./mistralrs_server --port 1234 vision-plain -m microsoft/Phi-3-vision-128k-instruct -a phi3v
```
Other models: see a support matrix and how to run them
Mistral.rs supports several model categories:
- text
- vision (see the docs)
Fast:
- Quantized model support: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit and 8-bit for faster inference and optimized memory usage.
- Continuous batching.
- Prefix caching.
- Device mapping: load and run some layers on the device and the rest on the CPU.
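As an illustration of the prefix-caching idea above, here is a hypothetical sketch (not mistral.rs internals): a cache keyed on token-ID prefixes lets a new request reuse the work already done for the longest previously seen prefix.

```python
# Toy prefix cache: maps token-ID prefixes to (mock) KV-cache entries.
# This is an illustrative sketch, not mistral.rs's actual implementation.

class PrefixCache:
    def __init__(self):
        self._cache = {}  # tuple of prefix tokens -> opaque KV state

    def put(self, tokens, kv_state):
        self._cache[tuple(tokens)] = kv_state

    def longest_prefix(self, tokens):
        # Return the longest cached prefix of `tokens` and its KV state.
        for end in range(len(tokens), 0, -1):
            key = tuple(tokens[:end])
            if key in self._cache:
                return list(key), self._cache[key]
        return [], None

cache = PrefixCache()
cache.put([1, 2, 3], "kv-for-123")
prefix, kv = cache.longest_prefix([1, 2, 3, 4, 5])
print(prefix, kv)  # the [1, 2, 3] prefix is reused; only [4, 5] needs compute
```

A real implementation evicts old entries and stores device tensors rather than strings, but the lookup structure is the essential idea.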
Accelerator support:
- Apple silicon support with the Metal framework.
- CPU inference with `mkl`, `accelerate` support and optimized backend.
- CUDA support with flash attention and cuDNN.
Easy:
- Lightweight OpenAI API compatible HTTP server.
- Python API.
- Grammar support with Regex and Yacc.
- ISQ (In situ quantization): run `.safetensors` models directly from Hugging Face Hub by quantizing them after loading instead of creating a GGUF file.
  - This loads the ISQ-able weights on CPU before quantizing with ISQ and then moving them to the device, to avoid memory spikes.
  - Provides methods to further reduce memory spikes.
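To illustrate the idea behind quantizing after loading, here is a minimal sketch of symmetric 8-bit round-to-nearest quantization in pure Python. The GGML-style k-quants mistral.rs actually uses (2-bit through 8-bit) are more sophisticated; this only shows the basic scale-and-round scheme.

```python
# Illustrative 8-bit symmetric quantization of one weight row. Conceptual
# demo only; not the exact quantization scheme used by mistral.rs.

def quantize_q8(weights):
    """Quantize floats to int8 values with a single per-row scale."""
    amax = max(abs(w) for w in weights) or 1.0
    scale = amax / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize_q8(q, scale):
    return [v * scale for v in q]

row = [0.5, -1.0, 0.25, 0.0]
q, scale = quantize_q8(row)
approx = dequantize_q8(q, scale)
max_err = max(abs(a - b) for a, b in zip(row, approx))
print(q, scale, max_err)
```

The reconstruction error is bounded by half the scale per element, which is why larger bit widths (smaller scales) trade memory for accuracy.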
Powerful:
- Fast LoRA support with weight merging.
- First X-LoRA inference platform with first class support.
- Speculative decoding: mix supported models as the draft model or the target model.
- Dynamic LoRA adapter swapping at runtime with adapter preloading: examples and docs
This is a demo of interactive mode with streaming, running Mistral GGUF (video: demo_new.mp4).
Note: See supported models for more information
| Model | Supports quantization | Supports adapters | Supports device mapping |
|---|---|---|---|
| Mistral v0.1/v0.2/v0.3 | ✅ | ✅ | ✅ |
| Gemma | ✅ | ✅ | ✅ |
| Llama 2/3 | ✅ | ✅ | ✅ |
| Mixtral | ✅ | ✅ | ✅ |
| Phi 2 | ✅ | ✅ | ✅ |
| Phi 3 | ✅ | ✅ | ✅ |
| Qwen 2 | ✅ | ✅ | |
| Phi 3 Vision | ✅ | ✅ | |
| Idefics 2 | ✅ | ✅ | |
Rust Crate
Rust multithreaded/async API for easy integration into any application.
Python API
Python API for mistral.rs.
```python
from mistralrs import Runner, Which, ChatCompletionRequest

runner = Runner(
    which=Which.GGUF(
        tok_model_id="mistralai/Mistral-7B-Instruct-v0.1",
        quantized_model_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
        quantized_filename="mistral-7b-instruct-v0.1.Q4_K_M.gguf",
        tokenizer_json=None,
        repeat_last_n=64,
    )
)

res = runner.send_chat_completion_request(
    ChatCompletionRequest(
        model="mistral",
        messages=[
            {"role": "user", "content": "Tell me a story about the Rust type system."}
        ],
        max_tokens=256,
        presence_penalty=1.0,
        top_p=0.1,
        temperature=0.1,
    )
)
print(res.choices[0].message.content)
print(res.usage)
```

Llama Index integration
- CUDA:
  - Enable with the `cuda` feature: `--features cuda`
  - Flash attention support with the `flash-attn` feature, only applicable to non-quantized models: `--features flash-attn`
  - cuDNN support with the `cudnn` feature: `--features cudnn`
- Metal:
  - Enable with the `metal` feature: `--features metal`
- CPU:
  - Intel MKL with the `mkl` feature: `--features mkl`
  - Apple Accelerate with the `accelerate` feature: `--features accelerate`
Enabling features is done by passing `--features ...` to the build system. When using `cargo run` or `maturin develop`, pass the `--features` flag before the `--` separating build flags from runtime flags.

- To enable a single feature like `metal`: `cargo build --release --features metal`
- To enable multiple features, specify them in quotes: `cargo build --release --features "cuda flash-attn cudnn"`
| Device | Mistral.rs Completion T/s | Llama.cpp Completion T/s | Model | Quant |
|---|---|---|---|---|
| A10 GPU, CUDA | 78 | 78 | mistral-7b | 4_K_M |
| Intel Xeon 8358 CPU, AVX | 6 | 19 | mistral-7b | 4_K_M |
| Raspberry Pi 5 (8GB), Neon | 2 | 3 | mistral-7b | 2_K |
| A100 GPU, CUDA | 119 | 119 | mistral-7b | 4_K_M |
Please submit more benchmarks by raising an issue!
Note: You can use our Docker containers here. Learn more about running Docker containers: https://docs.docker.com/engine/reference/run/
Install required packages:

- OpenSSL (example on Ubuntu: `sudo apt install libssl-dev`)
- Linux only: `pkg-config` (example on Ubuntu: `sudo apt install pkg-config`)
Install Rust: https://rustup.rs/

Example on Ubuntu:

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
```
Optional: Set the HF token correctly (skip if already set, if your model is not gated, or if you want to use the `token_source` parameters in Python or the command line).

- Note: you can install `huggingface-cli` as documented here.

```
huggingface-cli login
```
Download the code:

```
git clone https://github.com/EricLBuehler/mistral.rs.git
cd mistral.rs
```
Build or install:

- Base build command:

  ```
  cargo build --release
  ```

- Build with CUDA support:

  ```
  cargo build --release --features cuda
  ```

- Build with CUDA and Flash Attention V2 support:

  ```
  cargo build --release --features "cuda flash-attn"
  ```

- Build with Metal support:

  ```
  cargo build --release --features metal
  ```

- Build with Accelerate support:

  ```
  cargo build --release --features accelerate
  ```

- Build with MKL support:

  ```
  cargo build --release --features mkl
  ```

- Install with `cargo install` for easy command line usage. Pass the same values to `--features` as you would for `cargo build`:

  ```
  cargo install --path mistralrs-server --features cuda
  ```
The build process will output a binary `mistralrs-server` at `./target/release/mistralrs-server`, which may be copied into the working directory with the following command:

Example on Ubuntu:

```
cp ./target/release/mistralrs-server ./mistralrs_server
```
Installing Python support
You can install Python support by following the guide here.
There are 2 ways to run a model with mistral.rs:
- From Hugging Face Hub (easiest)
- From local files
- Running a GGUF model fully locally
Mistral.rs can automatically download models from the HF Hub. To access gated models, you should provide a token source. It may be one of:

- `literal:<value>`: Load from a specified literal
- `env:<value>`: Load from a specified environment variable
- `path:<value>`: Load from a specified file
- `cache` (default): Load from the HF token at `~/.cache/huggingface/token` or equivalent
- `none`: Use no HF token
This is passed in the following ways:
- Command line:

  ```
  ./mistralrs_server --token-source none -i plain -m microsoft/Phi-3-mini-128k-instruct -a phi3
  ```

- Python: here is an example of setting the token source.

If the token cannot be loaded, no token will be used (i.e. effectively using `none`).
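As a sketch of how these token-source strings could be interpreted (the real logic lives in mistral.rs's Rust code; this helper is hypothetical):

```python
import os
from pathlib import Path

# Sketch of resolving the token-source strings described above
# (literal:, env:, path:, cache, none). Illustrative only.

def resolve_hf_token(source="cache"):
    if source == "none":
        return None
    if source == "cache":
        p = Path.home() / ".cache" / "huggingface" / "token"
        # If the token cannot be loaded, no token is used.
        return p.read_text().strip() if p.exists() else None
    kind, _, value = source.partition(":")
    if kind == "literal":
        return value
    if kind == "env":
        return os.environ.get(value)
    if kind == "path":
        return Path(value).read_text().strip()
    raise ValueError(f"unknown token source: {source!r}")

print(resolve_hf_token("literal:hf_example"))
```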
You can also instruct mistral.rs to load models fully locally by modifying the `*_model_id` arguments or options:

```
./mistralrs_server --port 1234 plain -m . -a mistral
```

Throughout mistral.rs, any model ID argument or option may be a local path and should contain the following files for each model ID option:

- `--model-id` (server) or `model_id` (python/rust) or `--tok-model-id` (server) or `tok_model_id` (python/rust):
  - `config.json`
  - `tokenizer_config.json`
  - `tokenizer.json` (if not specified separately)
  - `.safetensors` files
- `--quantized-model-id` (server) or `quantized_model_id` (python/rust):
  - Specified `.gguf` or `.ggml` file
- `--x-lora-model-id` (server) or `xlora_model_id` (python/rust):
  - `xlora_classifier.safetensors`
  - `xlora_config.json`
  - Adapter `.safetensors` and `adapter_config.json` files in their respective directories
- `--adapters-model-id` (server) or `adapters_model_id` (python/rust):
  - Adapter `.safetensors` and `adapter_config.json` files in their respective directories
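A small hypothetical helper can check a local directory for the plain-model files listed above before pointing a model ID argument at it:

```python
import tempfile
from pathlib import Path

# Sketch: check that a local directory contains the files required for a
# plain model ID. Illustrative helper, not part of mistral.rs itself.

REQUIRED = ["config.json", "tokenizer_config.json", "tokenizer.json"]

def missing_model_files(model_dir):
    d = Path(model_dir)
    missing = [f for f in REQUIRED if not (d / f).exists()]
    if not any(d.glob("*.safetensors")):
        missing.append("*.safetensors")
    return missing

with tempfile.TemporaryDirectory() as tmp:
    print(missing_model_files(tmp))  # an empty dir is missing everything
```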
To run GGUF models fully locally, the only mandatory arguments are the quantized model ID and the quantized filename.

The chat template can be automatically detected and loaded from the GGUF file if no other chat template source is specified, including the tokenizer model ID. In that case, you do not need to specify the tokenizer model ID argument; instead, you should pass a path to the chat template JSON file (examples here; you will need to create your own by specifying the chat template and bos/eos tokens) as well as specifying a local model ID. For example:

```
./mistralrs-server --chat-template <chat_template> gguf -m . -f Phi-3-mini-128k-instruct-q4_K_M.gguf
```

If you do not specify a chat template, then the `--tok-model-id`/`-t` tokenizer model ID argument is expected, where the `tokenizer_config.json` file should be provided. If that model ID contains a `tokenizer.json`, then that will be used over the GGUF tokenizer.
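To create your own chat template file, you could write a JSON file along the lines of the sketch below. The key names (`chat_template`, `bos_token`, `eos_token`) follow the `tokenizer_config.json` convention, but they are an assumption here; check the examples in the repo's chat_templates directory for the exact expected format.

```python
import json

# Sketch: write a minimal chat template JSON file specifying the chat
# template and bos/eos tokens. Key names assumed from the
# tokenizer_config.json convention; the Jinja template is a toy example.

template = {
    "bos_token": "<s>",
    "eos_token": "</s>",
    "chat_template": (
        "{{ bos_token }}{% for m in messages %}"
        "[{{ m['role'] }}] {{ m['content'] }}\n{% endfor %}"
    ),
}

with open("my_chat_template.json", "w") as f:
    json.dump(template, f, indent=2)

with open("my_chat_template.json") as f:
    print(json.load(f)["bos_token"])
```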
The following tokenizer model types are currently supported. If you would like one to be added, please raise an issue. Otherwise, please consider using the method demonstrated in examples below, where the tokenizer is sourced from Hugging Face.
Supported GGUF tokenizer types
- `llama` (sentencepiece)
- `gpt2` (BPE)
Mistral.rs uses subcommands to control the model type. They are generally of the format `<XLORA/LORA>-<QUANTIZATION>`. Please run `./mistralrs_server --help` to see the subcommands.

Additionally, for models without quantization, the model architecture should be provided as the `--arch` or `-a` argument, in contrast to GGUF models, which encode the architecture in the file.
Note: for plain models, you can specify the data type to load and run in. This must be one of `f32`, `f16`, `bf16` or `auto` to choose based on the device. This is specified in the `--dtype`/`-d` parameter after the model architecture (`plain`).

- `mistral`
- `gemma`
- `mixtral`
- `llama`
- `phi2`
- `phi3`
- `qwen2`
Note: for vision models, you can specify the data type to load and run in. This must be one of `f32`, `f16`, `bf16` or `auto` to choose based on the device. This is specified in the `--dtype`/`-d` parameter after the model architecture (`vision-plain`).

- `phi3v`
- `idefics2`
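As a sketch of what `auto` selection could look like (the actual policy is implemented in mistral.rs; this function and its parameters are illustrative assumptions):

```python
# Sketch of dtype resolution for --dtype/-d: "auto" picks bf16 where the
# device supports it, else f16 on GPU, else f32. This mirrors the general
# idea, not necessarily mistral.rs's exact policy.

VALID_DTYPES = {"f32", "f16", "bf16", "auto"}

def resolve_dtype(requested, device="cpu", supports_bf16=False):
    if requested not in VALID_DTYPES:
        raise ValueError(f"dtype must be one of {sorted(VALID_DTYPES)}")
    if requested != "auto":
        return requested
    if supports_bf16:
        return "bf16"
    return "f16" if device == "cuda" else "f32"

print(resolve_dtype("auto", device="cuda", supports_bf16=True))  # bf16
```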
Interactive mode:
You can launch interactive mode, a simple chat application running in the terminal, by passing -i:
```
./mistralrs_server -i plain -m microsoft/Phi-3-mini-128k-instruct -a phi3
```

- X-LoRA with no quantization
To start an X-LoRA server exactly as presented in the paper:
```
./mistralrs_server --port 1234 x-lora-plain -o orderings/xlora-paper-ordering.json -x lamm-mit/x-lora
```

- LoRA with a model from GGUF
To start a LoRA server with adapters from the X-LoRA paper (you should modify the ordering file to use only one adapter, as the adapter static scalings are all 1 and the signal will otherwise become distorted):
```
./mistralrs_server --port 1234 lora-gguf -o orderings/xlora-paper-ordering.json -m TheBloke/zephyr-7B-beta-GGUF -f zephyr-7b-beta.Q8_0.gguf -a lamm-mit/x-lora
```

Normally with a LoRA model you would use a custom ordering file. However, for this example we use the ordering from the X-LoRA paper because we are using the adapters from the X-LoRA paper.
- With a model from GGUF
To start a server running Mistral from GGUF:
```
./mistralrs_server --port 1234 gguf -t mistralai/Mistral-7B-Instruct-v0.1 -m TheBloke/Mistral-7B-Instruct-v0.1-GGUF -f mistral-7b-instruct-v0.1.Q4_K_M.gguf
```

- With a model from GGML
To start a server running Llama from GGML:
```
./mistralrs_server --port 1234 ggml -t meta-llama/Llama-2-13b-chat-hf -m TheBloke/Llama-2-13B-chat-GGML -f llama-2-13b-chat.ggmlv3.q4_K_M.bin
```

- Plain model from safetensors
To start a server running Mistral from safetensors:

```
./mistralrs_server --port 1234 plain -m mistralai/Mistral-7B-Instruct-v0.1 -a mistral
```

We provide a method to select models with a `.toml` file. The keys are the same as the command line, with `no_kv_cache` and `tokenizer_json` being "global" keys.
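A selector file of this shape could be read as in the sketch below. The section and key names are illustrative assumptions, and the parser handles only a flat `key = "value"` TOML subset; see the repo's toml-selectors examples for real schemas (on Python 3.11+ you could use `tomllib` instead).

```python
# Minimal parser for a flat `key = "value"` TOML subset, sketching how a
# model-selector file might be read. Key names are illustrative.

def parse_selector(text):
    config, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]
            config[section] = {}
            continue
        key, _, value = (part.strip() for part in line.partition("="))
        target = config[section] if section else config
        target[key] = value.strip('"')
    return config

doc = '''
[model]
model_id = "mistralai/Mistral-7B-Instruct-v0.1"
arch = "mistral"
'''
cfg = parse_selector(doc)
print(cfg["model"]["arch"])  # mistral
```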
Example:
```
./mistralrs_server --port 1234 toml -f toml-selectors/gguf.toml
```

Quantization support
| Model | GGUF | GGML | ISQ |
|---|---|---|---|
| Mistral 7B | ✅ | | ✅ |
| Gemma | | | ✅ |
| Llama | ✅ | ✅ | ✅ |
| Mixtral 8x7B | ✅ | | ✅ |
| Phi 2 | ✅ | | ✅ |
| Phi 3 | ✅ | | ✅ |
| Qwen 2 | | | ✅ |
| Phi 3 Vision | | | ✅ |
| Idefics 2 | | | ✅ |
Device mapping support
| Model category | Supported |
|---|---|
| Plain | ✅ |
| GGUF | ✅ |
| GGML | |
| Vision Plain | ✅ |
X-LoRA and LoRA support
| Model | X-LoRA | X-LoRA+GGUF | X-LoRA+GGML |
|---|---|---|---|
| Mistral 7B | ✅ | ✅ | |
| Gemma | ✅ | | |
| Llama | ✅ | ✅ | ✅ |
| Mixtral 8x7B | ✅ | ✅ | |
| Phi 2 | ✅ | | |
| Phi 3 | ✅ | ✅ | |
| Qwen 2 | | | |
| Phi 3 Vision | | | |
| Idefics 2 | | | |
To use a derivative model, select the model architecture using the correct subcommand. To see what can be passed for the architecture, pass `--help` after the subcommand. For example, when using a model other than the default, specify the following for each model type:
- Plain: Model id
- Quantized: Quantized model id, quantized filename, and tokenizer id
- X-LoRA: Model id, X-LoRA ordering
- X-LoRA quantized: Quantized model id, quantized filename, tokenizer id, and X-LoRA ordering
- LoRA: Model id, LoRA ordering
- LoRA quantized: Quantized model id, quantized filename, tokenizer id, and LoRA ordering
- Vision Plain: Model id
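The per-type requirements above can be encoded as a small lookup table; the names below are descriptive placeholders, not actual CLI flags.

```python
# The required arguments per model kind, as listed above. Illustrative
# helper for checking a configuration before launching a server.

REQUIRED_ARGS = {
    "plain": ["model_id"],
    "quantized": ["quantized_model_id", "quantized_filename", "tokenizer_id"],
    "x-lora": ["model_id", "xlora_ordering"],
    "x-lora-quantized": ["quantized_model_id", "quantized_filename",
                         "tokenizer_id", "xlora_ordering"],
    "lora": ["model_id", "lora_ordering"],
    "lora-quantized": ["quantized_model_id", "quantized_filename",
                       "tokenizer_id", "lora_ordering"],
    "vision-plain": ["model_id"],
}

def missing_args(kind, provided):
    return [a for a in REQUIRED_ARGS[kind] if a not in provided]

print(missing_args("x-lora", {"model_id": "my/model"}))  # ['xlora_ordering']
```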
See this section to determine if it is necessary to prepare an X-LoRA/LoRA ordering file; it is always necessary if the target modules or architecture changed, or if the adapter order changed.
It is also important to check the chat template style of the model. If the HF Hub repo has a `tokenizer_config.json` file, it is not necessary to specify one. Otherwise, templates can be found in chat_templates and should be passed before the subcommand. If the model is not instruction tuned, no chat template will be found and the APIs will only accept a prompt, not messages.
For example, when using a Zephyr model:
```
./mistralrs_server --port 1234 --log output.txt gguf -t HuggingFaceH4/zephyr-7b-beta -m TheBloke/zephyr-7B-beta-GGUF -f zephyr-7b-beta.Q5_0.gguf
```
An adapter model is a model with X-LoRA or LoRA. X-LoRA support is provided by selecting the `x-lora-*` architecture, and LoRA support by selecting the `lora-*` architecture. Please find docs for adapter models here.
Mistral.rs will attempt to automatically load a chat template and tokenizer. This enables high flexibility across models and ensures accurate and flexible chat templating. However, this behavior can be customized. Please find detailed documentation here.
Thank you for contributing! If you have any problems or want to contribute something, please raise an issue or pull request. If you want to add a new model, please contact us via an issue and we can coordinate how to do this.
- Debugging: setting the environment variable `MISTRALRS_DEBUG=1` causes the following:
  - If loading a GGUF or GGML model, this will output a file containing the names, shapes, and types of each tensor: `mistralrs_gguf_tensors.txt` or `mistralrs_ggml_tensors.txt`.
  - More logging.
- Setting the CUDA compiler path:
  - Set the `NVCC_CCBIN` environment variable during the build.
- Error: `recompile with -fPIE`:
  - Some Linux distributions require compiling with `-fPIE`.
  - Set the `CUDA_NVCC_FLAGS` environment variable to `-fPIE` during the build: `CUDA_NVCC_FLAGS=-fPIE`
- Error `CUDA_ERROR_NOT_FOUND` or symbol not found when using a normal or vision model:
  - For non-quantized models, you can specify the data type to load and run in. This must be one of `f32`, `f16`, `bf16` or `auto` to choose based on the device.
This project would not be possible without the excellent work at candle. Additionally, thank you to all contributors! Contributing can range from raising an issue or suggesting a feature to adding some new functionality.
