LitServe is a flexible serving engine for AI models built on FastAPI. Features like batching, streaming, and GPU autoscaling eliminate the need to rebuild a FastAPI server per model.
LitServe is at least 2x faster than plain FastAPI.
✅ (2x)+ faster serving ✅ Self-host or fully managed ✅ GPU autoscaling ✅ Multi-modal ✅ PyTorch/JAX/TF ✅ OpenAPI compliant ✅ Batching ✅ Built on FastAPI ✅ Streaming
Install LitServe via pip (other install options):
pip install litserve
Here's a toy example with 2 models that highlights the flexibility (explore real examples):
# server.py
import litserve as ls

# STEP 1: DEFINE A MODEL API
class SimpleLitAPI(ls.LitAPI):
    def setup(self, device):
        # setup is called once at startup. Build a compound AI system (1+ models), connect DBs, load data, etc...
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def decode_request(self, request):
        # Convert the request payload to model input.
        return request["input"]

    def predict(self, x):
        # Run inference on the AI system, return the output.
        squared = self.model1(x)
        cubed = self.model2(x)
        output = squared + cubed
        return {"output": output}

    def encode_response(self, output):
        # Convert the model output to a response payload.
        return {"output": output["output"]}

# STEP 2: START THE SERVER
if __name__ == "__main__":
    api = SimpleLitAPI()
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)
Now run the server via the command line:
python server.py
- `LitAPI`: gives full control and hackability.
- `LitServer`: handles optimizations like batching, auto-GPU scaling, etc...
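Here's a minimal sketch of how those optimizations are switched on. The max_batch_size, batch_timeout, and workers_per_device arguments follow the LitServe docs; the values shown are illustrative assumptions, not tuned settings:

# Sketch: batching + multiple workers per device (values are assumptions).
server = ls.LitServer(
    api,
    accelerator="auto",    # auto-select CPU/GPU
    max_batch_size=8,      # group up to 8 concurrent requests per predict() call
    batch_timeout=0.05,    # wait at most 50 ms while filling a batch
    workers_per_device=2,  # run 2 copies of the model on each device
)

With batching enabled, predict can receive a list of decoded inputs, so throughput grows without any changes on the client side.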
Use the automatically generated LitServe client:
python client.py
Write a custom client:

import requests

response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0}
)
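For the toy model above, the response works out to 4.0**2 + 4.0**3 = 80.0:

print(response.json())  # {"output": 80.0}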
Use LitServe to deploy any model or AI service (Gen AI, classical ML, embedding servers, LLMs, vision, audio, multi-modal systems, etc...).
Featured examples
- Toy model: Hello world
- LLMs: Llama 3 (8B), LLM Proxy server
- NLP: Hugging Face, BERT, Text embedding API
- Multimodal: OpenAI Clip, MiniCPM, Chameleon 30B, Phi-3.5 Vision Instruct
- Audio: Whisper, AudioCraft, StableAudio, Noise cancellation (DeepFilterNet)
- Vision: Stable diffusion 2, AuraFlow, Flux, Image super resolution (Aura SR)
- Speech: Text-to-speech (XTTS V2)
- Classical ML: Random forest, XGBoost
- Miscellaneous: Media conversion API (ffmpeg)
Browse 100s of community-built templates.
LitServe supports multiple advanced, state-of-the-art features:
✅ (2x)+ faster serving than plain FastAPI
✅ Self host on your own machines
✅ Host fully managed on Lightning AI
✅ Serve all models: LLMs, vision, time series, etc...
✅ Auto-GPU scaling
✅ Authentication
✅ Autoscaling
✅ Batching
✅ Streaming (see the sketch after this list)
✅ Scale to zero (serverless)
✅ All ML frameworks: PyTorch, JAX, TensorFlow, Hugging Face...
✅ OpenAPI compliant
✅ OpenAI compatibility
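Here's a minimal sketch of the streaming feature from the list above, following the LitServe pattern of stream=True plus generator hooks (the stand-in token loop is an assumption, not a real model):

# stream_server.py
import litserve as ls

class StreamAPI(ls.LitAPI):
    def setup(self, device):
        # Stand-in "model" that yields a few fake tokens (assumption for this sketch).
        self.model = lambda prompt: (f"token-{i}" for i in range(3))

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        # Yield results one chunk at a time instead of returning once.
        yield from self.model(prompt)

    def encode_response(self, output_stream):
        # Encode each chunk as it is produced.
        for out in output_stream:
            yield {"output": out}

if __name__ == "__main__":
    server = ls.LitServer(StreamAPI(), stream=True)
    server.run(port=8000)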
Note: Our goal is not to jump on every hype train, but instead to support features that scale under the most demanding enterprise deployments.
LitServe is designed for AI workloads. Specialized multi-worker handling delivers a minimum 2x speedup over FastAPI.
Additional features like batching and GPU autoscaling can drive performance well beyond 2x, scaling efficiently to handle more simultaneous requests than FastAPI and TorchServe.
Reproduce the full benchmarks here (higher is better).
These results are for image and text classification ML tasks. The performance relationships hold for other ML tasks (embedding, LLM serving, audio, segmentation, object detection, summarization, etc...).
💡 Note on LLM serving: For high-performance LLM serving (like Ollama/vLLM), use LitGPT or build your custom vLLM-like server with LitServe. Optimizations like kv-caching, which can be done with LitServe, are needed to maximize LLM performance.
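As a sketch of that idea, one way kv-caching can ride along in a LitServe endpoint is through Hugging Face generate with use_cache=True. The model choice (gpt2) and the generation settings are illustrative assumptions, not a tuned vLLM replacement:

# llm_server.py (sketch; gpt2 and max_new_tokens=50 are assumptions)
import litserve as ls
from transformers import AutoModelForCausalLM, AutoTokenizer

class LLMAPI(ls.LitAPI):
    def setup(self, device):
        self.tokenizer = AutoTokenizer.from_pretrained("gpt2")
        self.model = AutoModelForCausalLM.from_pretrained("gpt2").to(device)

    def decode_request(self, request):
        return request["prompt"]

    def predict(self, prompt):
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        # use_cache=True reuses the kv-cache across decoding steps inside generate().
        out = self.model.generate(**inputs, max_new_tokens=50, use_cache=True)
        return self.tokenizer.decode(out[0], skip_special_tokens=True)

    def encode_response(self, output):
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(LLMAPI(), accelerator="auto")
    server.run(port=8000)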
LitServe can be hosted independently on your own machines or fully managed via Lightning Studios.
Self-hosting is ideal for hackers, students, and DIY developers, while fully managed hosting is ideal for enterprise developers who need easy autoscaling, security, release management, 99.995% uptime, and observability.
| Feature | Self Managed | Fully Managed on Studios |
|---|---|---|
| Deployment | ✅ Do it yourself deployment | ✅ One-button cloud deploy |
| Load balancing | ❌ | ✅ |
| Autoscaling | ❌ | ✅ |
| Scale to zero | ❌ | ✅ |
| Multi-machine inference | ❌ | ✅ |
| Authentication | ❌ | ✅ |
| Own VPC | ❌ | ✅ |
| AWS, GCP | ❌ | ✅ |
| Use your own cloud commits | ❌ | ✅ |
LitServe is a community project accepting contributions. Let's make the world's most advanced AI inference engine.