Rust implementation of the DeepSeek-OCR inference stack with a fast CLI and an OpenAI-compatible HTTP server. The workspace packages the vision-language model, prompt tooling, and serving layer so you can build document understanding pipelines that run locally on CPU, Apple Metal, or (alpha) NVIDIA CUDA GPUs.
For documentation in Chinese, see README_CN.md.
Want ready-made binaries? Latest macOS (Metal-enabled) and Windows bundles live in the build-binaries workflow artifacts. Grab them from the newest green run.
- Vision preprocessing – `prepare_vision_input_from_image` builds a square global canvas with letterboxing (`build_global_view`) and, when crop mode is enabled, applies `dynamic_preprocess` tiling to produce high-resolution local crops plus optional thumbnails.
- SAM + CLIP fusion – each view is normalised via `image_to_tensor`, pushed through the Candle ports of SAM (`SamBackbone`) and CLIP-L (`ClipVisionModel`), then flattened with `build_clip_sam_tokens` so the features stay spatially aligned.
- Projector & layout tokens – the custom `ImageProjector` linearly maps the concatenated SAM/CLIP channels into the language hidden size while injecting learned `image_newline`/`view_separator` tokens to preserve grid structure, yielding the multimodal embeddings used during decoding.
- Tokenizer alignment – `build_prompt_tokens` synthesises `<image>` spans whose length exactly matches the projected token count (global + local grids), ensuring OpenAI-style prompts remain consistent even after chat history pruning.
- Decoder & caching – the text stack is a Candle reimplementation of DeepSeek-V2 (`DeepseekLanguageModel`) with optional FlashAttention, rotary position embeddings, and `DynamicCache` guards so both the CLI and server can stream tokens efficiently.
- Observability & parity – debug builds expose CLIP/SAM traces (`VisionDebugFeatures`) so we can diff intermediate tensors against the PyTorch reference; most stages are already numerically aligned, and the few remaining deltas (mainly projector normalisation and vision tiling) are tracked on the roadmap for upcoming releases.
The original DeepSeek-OCR ships as a Python + Transformers stack: powerful, but hefty to deploy and awkward to embed. Rewriting the pipeline in Rust gives us:
- Smaller deployable artifacts with zero Python runtime or conda baggage.
- Memory-safe, thread-friendly infrastructure that blends into native Rust backends.
- Unified tooling (CLI + server) running on Candle + Rocket without the Python GIL overhead.
- Drop-in compatibility with OpenAI-style clients while tuned for single-turn OCR prompts.
- Candle for tensor compute, with Metal and CUDA backends and FlashAttention support.
- Rocket + async streaming for the OpenAI-compatible `/v1/responses` and `/v1/chat/completions` endpoints.
- tokenizers (upstream DeepSeek release) wrapped by `crates/assets` for deterministic caching via Hugging Face and ModelScope mirrors.
- Pure Rust vision/prompt pipeline shared by the CLI and server to avoid duplicated logic.
- Faster cold-start on Apple Silicon, lower RSS, and native binary distribution.
- Deterministic dual-source (Hugging Face + ModelScope) asset download + verification built into the workspace.
- Automatic single-turn chat compaction so OCR outputs stay stable even when clients send history.
- Ready-to-use OpenAI compatibility for tools like Open WebUI without adapters.
- One repo, two entrypoints – a batteries-included CLI for batch jobs and a Rocket-based server that speaks `/v1/responses` and `/v1/chat/completions`.
- Works out of the box – pulls model weights, configs, and tokenizer from whichever of Hugging Face or ModelScope responds fastest on first run.
- Optimised for Apple Silicon – optional Metal backend with FP16 execution for real-time OCR on laptops.
- CUDA (alpha) – experimental support via `--features cuda` + `--device cuda --dtype f16`; expect rough edges while we finish kernel coverage.
- Intel MKL (preview) – faster BLAS on x86 via `--features mkl` (install Intel oneMKL beforehand).
- OpenAI client compatibility – drop-in replacement for popular SDKs; the server automatically collapses chat history to the latest user turn for OCR-friendly prompts.
- Rust 1.78+ (edition 2024 support)
- Git
- Optional: Apple Silicon running macOS 13+ for Metal acceleration
- Optional: CUDA 12.2+ toolkit + driver for experimental NVIDIA GPU acceleration on Linux/Windows
- Optional: Intel oneAPI MKL for preview x86 acceleration (see below)
- (Recommended) Hugging Face account with `HF_TOKEN` when pulling from the `deepseek-ai/DeepSeek-OCR` repo (ModelScope is used automatically when it's faster/reachable).
```bash
git clone https://github.com/TimmyOVO/deepseek-ocr.rs.git
cd deepseek-ocr.rs
cargo fetch
```

The first invocation of the CLI or server downloads the config, tokenizer, and model-00001-of-000001.safetensors (~6.3GB) into DeepSeek-OCR/. To prefetch manually:

```bash
cargo run -p deepseek-ocr-cli --release -- --help   # dev profile is extremely slow; always prefer --release
```

Always include `--release` when running from source; the debug profile is extremely slow with this model. Set `HF_HOME`/`HF_TOKEN` if you store Hugging Face caches elsewhere (ModelScope downloads land alongside the same asset tree); a minimal setup is shown below. The full model package is ~6.3GB on disk and typically requires ~13GB of RAM headroom during inference (model + activations).
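For example, a sketch of that environment setup; the cache path and token are placeholders you should replace:

```bash
# Optional: relocate the Hugging Face cache and authenticate before the first run.
export HF_HOME=/data/hf-cache          # hypothetical cache location
export HF_TOKEN=hf_your_token_here     # only needed for authenticated Hugging Face pulls
cargo run -p deepseek-ocr-cli --release -- --help   # triggers the asset download
```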
The CLI and server share the same configuration. On first launch we create a config.toml populated with defaults; later runs reuse it so both entrypoints stay in sync.
| Platform | Config file (default) | Model cache root |
|---|---|---|
| Linux | `~/.config/deepseek-ocr/config.toml` | `~/.cache/deepseek-ocr/models/<id>/…` |
| macOS | `~/Library/Application Support/deepseek-ocr/config.toml` | `~/Library/Caches/deepseek-ocr/models/<id>/…` |
| Windows | `%APPDATA%\deepseek-ocr\config.toml` | `%LOCALAPPDATA%\deepseek-ocr\models\<id>\…` |
- Override the location with `--config /path/to/config.toml` (available on both the CLI and server). Missing files are created automatically.
- Each `[models.entries."<id>"]` record can point to custom `config`, `tokenizer`, or `weights` files. When omitted, we fall back to the cache directory above and download/update assets as required.
- Runtime values resolve in this order: command-line flags → values stored in `config.toml` → built-in defaults. The HTTP API adds a final layer where request payload fields (for example `max_tokens`) override everything else for that call.
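As a sketch of that flow (the config path is hypothetical), both entrypoints accept the same `--config` flag, and flags still win over stored values:

```bash
# Point the CLI at a shared, version-controlled config file.
cargo run -p deepseek-ocr-cli --release -- \
  --config /path/to/config.toml \
  --prompt "<image>\nConvert this page to markdown." \
  --image ./page.png

# The same file drives the server; flags such as --port override config.toml for this run.
cargo run -p deepseek-ocr-server --release -- \
  --config /path/to/config.toml \
  --port 8080
```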
The generated file starts with the defaults below; adjust them to persistently change behaviour:
```toml
[models]
active = "deepseek-ocr"

[models.entries.deepseek-ocr]

[inference]
device = "cpu"
template = "plain"
base_size = 1024
image_size = 640
crop_mode = true
max_new_tokens = 512
use_cache = true

[server]
host = "0.0.0.0"
port = 8000
model_id = "deepseek-ocr"
```

- `[models]` picks the active model and lets you add more entries (each entry can point to its own config/tokenizer/weights).
- `[inference]` controls the runtime defaults shared by the CLI and server (device, template, vision sizing, decoding budget, cache usage).
- `[server]` sets the network binding and the model identifier reported by `/v1/models`.
See crates/cli/README.md and crates/server/README.md for concise override tables.
Single-request Rust CLI (Accelerate backend on macOS) compared with the reference Python pipeline on the same prompt and image:
| Stage | Rust total (ms) | Rust avg (ms) | Python total (ms) | Python / Rust |
|---|---|---|---|---|
| Decode – Overall (`decode.generate`) | 30077.840 | 30077.840 | 56554.873 | 1.88x |
| Decode – Token Loop (`decode.iterative`) | 26930.216 | 26930.216 | 39227.974 | 1.46x |
| Decode – Prompt Prefill (`decode.prefill`) | 3147.337 | 3147.337 | 5759.684 | 1.83x |
| Prompt – Build Tokens (`prompt.build_tokens`) | 0.466 | 0.466 | 45.434 | 97.42x |
| Prompt – Render Template (`prompt.render`) | 0.005 | 0.005 | 0.019 | 3.52x |
| Vision – Embed Images (`vision.compute_embeddings`) | 6391.435 | 6391.435 | 3953.459 | 0.62x |
| Vision – Prepare Inputs (`vision.prepare_inputs`) | 62.524 | 62.524 | 45.438 | 0.73x |
Build and run directly from the workspace:
```bash
cargo run -p deepseek-ocr-cli --release -- \
  --prompt "<image>\n<|grounding|>Convert this receipt to markdown." \
  --image baselines/sample/images/test.png \
  --device cpu --max-new-tokens 512
```

Tip: `--release` is required for reasonable throughput; debug builds can be 10x slower.

macOS tip: append `--features metal` to the `cargo run`/`cargo build` commands to compile with the Accelerate + Metal backends.

CUDA tip (Linux/Windows): append `--features cuda` and run with `--device cuda --dtype f16` to target NVIDIA GPUs; the feature is still alpha, so be ready for quirks.

Intel MKL preview: install Intel oneMKL, then build with `--features mkl` for faster CPU matmuls on x86.
Install the CLI as a binary:
```bash
cargo install --path crates/cli
deepseek-ocr-cli --help
```

Key flags:

- `--prompt`/`--prompt-file`: text with `<image>` slots
- `--image`: path(s) matching the `<image>` placeholders
- `--device` and `--dtype`: choose `metal` + `f16` on Apple Silicon or `cuda` + `f16` on NVIDIA GPUs
- `--max-new-tokens`: decoding budget
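A typical invocation of the installed binary on Apple Silicon might look like the sketch below; the prompt file and image path are placeholders, and `prompt.txt` is assumed to contain the instruction text plus one `<image>` slot:

```bash
# Hypothetical inputs: prompt.txt holds the instruction and a single <image> slot.
deepseek-ocr-cli \
  --prompt-file ./prompt.txt \
  --image ./invoice.png \
  --device metal --dtype f16 \
  --max-new-tokens 512
```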
Launch an OpenAI-compatible endpoint:
```bash
cargo run -p deepseek-ocr-server --release -- \
  --host 0.0.0.0 --port 8000 \
  --device cpu --max-new-tokens 512
```

Keep `--release` on the server as well; the debug profile is far too slow for inference workloads.

macOS tip: add `--features metal` to the `cargo run -p deepseek-ocr-server` command when you want the server binary to link against Accelerate + Metal (and pair it with `--device metal` at runtime).

CUDA tip: add `--features cuda` and start the server with `--device cuda --dtype f16` to offload inference to NVIDIA GPUs (alpha-quality support).

Intel MKL preview: install Intel oneMKL before building with `--features mkl` to accelerate CPU workloads on x86.
Notes:
- Use `data:` URLs or remote `http(s)` links; local paths are rejected.
- The server collapses multi-turn chat inputs to the latest user message to keep prompts OCR-friendly.
- Works out of the box with tools such as Open WebUI or any OpenAI-compatible client: point the base URL at your server (`http://localhost:8000/v1`) and select the `deepseek-ocr` model (see the example request below).
- Adjust the request body limit via Rocket's configuration if you routinely send large images.
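For instance, a minimal `curl` request, assuming the standard OpenAI chat-completions payload shape (text plus `image_url` content parts); the image URL is a placeholder, and whether grounding markers belong in the text part depends on your prompt template:

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-ocr",
    "max_tokens": 512,
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Convert this receipt to markdown."},
        {"type": "image_url", "image_url": {"url": "https://example.com/receipt.png"}}
      ]
    }]
  }'
```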
- Metal (macOS 13+, Apple Silicon) – pass `--device metal --dtype f16` and build binaries with `--features metal` so Candle links against Accelerate + Metal.
- CUDA (alpha, NVIDIA GPUs) – install the CUDA 12.2+ toolkit, build with `--features cuda`, and launch the CLI/server with `--device cuda --dtype f16`; still experimental.
- Intel MKL (preview) – install Intel oneMKL and build with `--features mkl` to speed up CPU workloads on x86.
- For any backend, prefer release builds (e.g. `cargo build --release -p deepseek-ocr-cli --features metal` or `--features cuda`) to maximise throughput; see the build matrix below.
- Combine GPU runs with `--max-new-tokens` and crop tuning flags to balance latency vs. quality.
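A minimal build matrix, assuming you compile on the platform that matches each backend:

```bash
# Pick the line that matches your hardware; each produces ./target/release/deepseek-ocr-cli.
cargo build --release -p deepseek-ocr-cli --features metal   # Apple Silicon (macOS 13+)
cargo build --release -p deepseek-ocr-cli --features cuda    # NVIDIA GPUs (alpha)
cargo build --release -p deepseek-ocr-cli --features mkl     # x86 CPUs with Intel oneMKL
cargo build --release -p deepseek-ocr-cli                    # portable CPU-only build
```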
- `crates/core` – shared inference pipeline, model loaders, conversation templates.
- `crates/cli` – command-line frontend (`deepseek-ocr-cli`).
- `crates/server` – Rocket server exposing OpenAI-compatible endpoints.
- `crates/assets` – asset management (configuration, tokenizer, Hugging Face + ModelScope download helpers).
- `baselines/` – reference inputs and outputs for regression testing.
Detailed CLI usage lives in crates/cli/README.md. The server's OpenAI-compatible interface is covered in crates/server/README.md.
- Where do assets come from? – downloads automatically pick between Hugging Face and ModelScope based on latency; the CLI prints the chosen source for each file.
- Slow first response – model load and GPU warm-up (Metal/CUDA alpha) happen on the initial request; later runs are faster.
- Large image rejection – increase the Rocket JSON limits in `crates/server/src/main.rs` or downscale the input (see the sketch below).
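If the server keeps Rocket's default figment-based configuration (an assumption; the limit may instead be hard-coded in `crates/server/src/main.rs`), Rocket's standard environment override is one way to raise it:

```bash
# Sketch only: raise Rocket's JSON body limit to 20 MiB before starting the server.
export ROCKET_LIMITS='{json="20MiB"}'
cargo run -p deepseek-ocr-server --release -- --host 0.0.0.0 --port 8000
```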
- ✅ Apple Metal backend with FP16 support and CLI/server parity on macOS.
- ✅ NVIDIA CUDA backend (alpha) – build with `--features cuda`, run with `--device cuda --dtype f16` for Linux/Windows GPUs; polishing in progress.
- 🚧 Parity polish – finish projector normalisation and crop tiling alignment; extend the intermediate-tensor diff suite beyond the current sample baseline.
- 🚧 Grounding & streaming – port the Python post-processing helpers (box extraction, markdown polish) and refine SSE streaming ergonomics.
- 🚧 Cross-platform acceleration – continue tuning CUDA kernels, add automatic device detection across CPU/Metal/CUDA, and publish opt-in GPU benchmarks.
- 🚧 Packaging & ops – ship binary releases with deterministic asset checksums, richer logging/metrics, and Helm/Docker references for server deploys.
- 🚧 Structured outputs – optional JSON schema tooling for downstream automation once parity gaps close.
This repository inherits the licenses of its dependencies and the upstream DeepSeek-OCR model. Refer to DeepSeek-OCR/LICENSE for model terms and apply the same restrictions to downstream use.
