LitServe is an engine for scalable AI model deployment built on FastAPI. Features like batching, streaming, and GPU autoscaling eliminate the need to rebuild a FastAPI server for each model.
✅ 8x faster serving ✅ Streaming ✅ Auto-GPU, multi-GPU ✅ Multi-modal ✅ PyTorch/JAX/TF ✅ Full control ✅ Batching ✅ Built on FastAPI ✅ Custom specs (OpenAI)
Install LitServe via pip (or advanced installs):
```shell
pip install litserve
```
Here's a hello world example (explore real examples):
```python
# server.py
import litserve as ls


# STEP 1: DEFINE A MODEL API
class SimpleLitAPI(ls.LitAPI):
    # Called once at startup. Set up models, DB connections, etc.
    def setup(self, device):
        self.model = lambda x: x**2

    # Convert the request payload to model input.
    def decode_request(self, request):
        return request["input"]

    # Run inference on the model, return the output.
    def predict(self, x):
        return self.model(x)

    # Convert the model output to a response payload.
    def encode_response(self, output):
        return {"output": output}


# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)
```
Now run the server from the command line:

```shell
python server.py
```
- The `LitAPI` class gives you full control and hackability.
- `LitServer` handles optimizations like batching, auto-GPU scaling, etc.
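To make the batching optimization concrete, here is a rough, server-free sketch of the idea (the helper names below are illustrative, not LitServer's actual internals): queued requests are grouped, run through the model in one vectorized call, then split back into per-request responses.

```python
# Conceptual sketch of request batching (NOT LitServer's real implementation).

def model(xs):
    # A "vectorized" model call: one invocation handles many inputs at once.
    return [x ** 2 for x in xs]

def serve_batched(requests, max_batch_size=4):
    responses = []
    for i in range(0, len(requests), max_batch_size):
        batch = requests[i:i + max_batch_size]   # group queued requests
        outputs = model(batch)                   # one model call for the batch
        # "unbatch": split outputs back into per-request responses
        responses.extend({"output": o} for o in outputs)
    return responses

print(serve_batched([1.0, 2.0, 3.0]))
```

Amortizing one model call across several requests is what makes batching pay off on GPUs, where per-call overhead dominates for small inputs.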
Use the automatically generated LitServe client:

```shell
python client.py
```
Or write a custom client:

```python
import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
)
print(response.json())
```
Use LitServe to deploy any type of model or AI service (embeddings, LLMs, vision, audio, multi-modal, etc).
Our benchmarks show that LitServe (built on FastAPI) handles more simultaneous requests than FastAPI and TorchServe (higher is better).
Reproduce the full benchmarks here.
These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization, etc.).
💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), use LitGPT or build your custom vLLM-like server with LitServe. Optimizations like kv-caching, which can be done with LitServe, are needed to maximize LLM performance.
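To see why kv-caching matters, here is a toy, framework-free sketch (every name below is illustrative; this is not LitServe or any real LLM API): in autoregressive decoding, the key/value pairs of already-processed tokens are cached, so each new step only computes them for the newest token and attends over the cache.

```python
# Toy illustration of kv-caching in autoregressive decoding.
# All names are hypothetical; real attention uses tensors, not integers.

def make_kv(token):
    # Stand-in for projecting a token embedding into a key/value pair.
    return (token * 2, token * 3)

def decode_with_cache(tokens):
    cache = []    # (key, value) for every token seen so far
    outputs = []
    for t in tokens:
        cache.append(make_kv(t))            # kv computed ONLY for the new token
        # attend over all cached keys/values: O(n) work per step,
        # instead of recomputing kv for the whole prefix each step
        attn = sum(k + v for k, v in cache)
        outputs.append(attn)
    return outputs

print(decode_with_cache([1, 2, 3]))
```

Without the cache, step n would recompute key/value pairs for all n tokens, making generation quadratic in sequence length.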
Self-manage LitServe deployments (just run it on any machine!), or deploy with one click on Lightning AI.
LitServe is developed by Lightning AI which provides infrastructure for deploying AI models.
| Feature | Self Managed | Fully Managed on Studios |
|---|---|---|
| Deployment | ✅ Do-it-yourself deployment | ✅ One-button cloud deploy |
| Load balancing | ❌ | ✅ |
| Autoscaling | ❌ | ✅ |
| Scale to zero | ❌ | ✅ |
| Multi-machine inference | ❌ | ✅ |
| Authentication | ❌ | ✅ |
| Own VPC | ❌ | ✅ |
| AWS, GCP | ❌ | ✅ |
| Use your own cloud commits | ❌ | ✅ |
LitServe supports multiple advanced state-of-the-art features.
✅ All model types: LLMs, vision, time series, etc.
✅ Auto-GPU scaling
✅ Authentication
✅ Autoscaling
✅ Batching
✅ Streaming
✅ All ML frameworks: PyTorch, JAX, TensorFlow, Hugging Face...
✅ OpenAI spec
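The streaming feature above boils down to yielding partial results instead of returning one final payload; a plain-Python generator captures the idea (conceptual sketch only; in LitServe the mechanics live inside `LitAPI`/`LitServer`):

```python
# Conceptual sketch of a streaming predict step: yield chunks as they
# are produced, the way an LLM yields tokens, instead of buffering the
# whole response.

def predict_stream(prompt):
    for word in prompt.split():   # pretend each word is a generated token
        yield word.upper()

# The "client" can consume chunks as they arrive:
for chunk in predict_stream("hello streaming world"):
    print(chunk)
```

Streaming matters for LLM-style workloads where the first token should reach the client long before the full response is done.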
Note: Our goal is not to jump on every hype train, but to support features that scale under the most demanding enterprise deployments.
LitServe is a community project that welcomes contributions. Let's build the world's most advanced AI inference engine together.