model-serving

There are 129 repositories under the model-serving topic.

  • vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs; a minimal usage sketch follows below.

    Language: Python · 20.3k stars
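
    A minimal sketch of vLLM's offline-generation API; the model name is just an example, and any supported Hugging Face causal LM works:

        from vllm import LLM, SamplingParams

        # Load a model and generate completions with simple sampling settings.
        llm = LLM(model="facebook/opt-125m")
        params = SamplingParams(temperature=0.8, max_tokens=64)
        outputs = llm.generate(["Model serving is"], params)
        print(outputs[0].outputs[0].text)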
  • bentoml/BentoML

    The easiest way to serve AI/ML models in production - Build Model Inference Services, LLM APIs, Multi-model Inference Graphs/Pipelines, LLM/RAG apps, and more! A sketch of the 1.x service API follows below.

    Language: Python · 6.7k stars
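
    A sketch against BentoML's 1.x Service/Runner API; "iris_clf:latest" is a hypothetical saved-model tag, and the service would be started with "bentoml serve service.py:svc":

        import bentoml
        from bentoml.io import JSON

        # Wrap a previously saved sklearn model in a runner.
        runner = bentoml.sklearn.get("iris_clf:latest").to_runner()
        svc = bentoml.Service("iris_classifier", runners=[runner])

        @svc.api(input=JSON(), output=JSON())
        async def classify(payload: dict) -> dict:
            result = await runner.predict.async_run([payload["features"]])
            return {"prediction": int(result[0])}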
  • ahkarami/Deep-Learning-in-Production

    In this repository, I will share some useful notes and references about deploying deep learning-based models in production.

  • FedML-AI/FedML

    FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, TensorOpera AI (https://TensorOpera.ai) is your generative AI platform at scale.

    Language: Python · 4.1k stars
  • kserve/kserve

    Standardized Serverless ML Inference Platform on Kubernetes; a client-side sketch of its V1 REST protocol follows below.

    Language: Python · 3.2k stars
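
    Clients of a deployed InferenceService speak KServe's V1 REST protocol; a sketch where the host and model name are hypothetical (in practice they come from the InferenceService's status URL):

        import requests

        # KServe V1 protocol: POST /v1/models/<name>:predict with "instances".
        resp = requests.post(
            "http://sklearn-iris.default.example.com/v1/models/sklearn-iris:predict",
            json={"instances": [[6.8, 2.8, 4.8, 1.4]]},
        )
        print(resp.json())  # e.g. {"predictions": [1]}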
  • tensorchord/envd

    🏕️ Reproducible development environment

    Language: Go · 1.9k stars
  • ModelTC/lightllm

    LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability, and high-speed performance.

    Language: Python · 1.9k stars
  • microsoft/aici

    AICI: Prompts as (Wasm) Programs

    Language: Rust · 1.8k stars
  • predibase/lorax

    Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs; a hedged request sketch follows below.

    Language: Python · 1.7k stars
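
    Assuming LoRAX follows text-generation-inference's REST route and selects a fine-tuned adapter per request via an adapter_id parameter, as its README describes, a request might look like this ("my-org/my-lora-adapter" is a hypothetical adapter ID):

        import requests

        resp = requests.post(
            "http://localhost:8080/generate",
            json={
                "inputs": "Summarize: model serving is",
                "parameters": {
                    "max_new_tokens": 64,
                    # Route this request through a specific LoRA adapter.
                    "adapter_id": "my-org/my-lora-adapter",
                },
            },
        )
        print(resp.json())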
  • mlrun/mlrun

    MLRun is an open source MLOps platform for quickly building and managing continuous ML applications across their lifecycle. MLRun integrates into your development and CI/CD environment and automates the delivery of production data, ML pipelines, and online applications.

    Language: Python · 1.3k stars
  • logicalclocks/hopsworks

    Hopsworks - Data-Intensive AI platform with a Feature Store

    Language: Java · 1.1k stars
  • basetenlabs/truss

    The simplest way to serve AI/ML models in production

    Language: Python · 848 stars
  • bentoml/Yatai

    Model Deployment at Scale on Kubernetes 🦄️

    Language: TypeScript · 771 stars
  • mosecorg/mosec

    A high-performance ML model serving framework offering dynamic batching and CPU/GPU pipelines to fully utilize your compute resources; a minimal worker sketch follows below.

    Language: Python · 713 stars
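
    A minimal mosec worker sketch; with max_batch_size > 1, mosec forms dynamic batches, so forward() receives a list of decoded JSON requests and must return a list of results (clients POST JSON to /inference by default):

        from mosec import Server, Worker

        class Inference(Worker):
            def forward(self, batch: list) -> list:
                # Stand-in for a real model call over the whole dynamic batch.
                return [{"length": len(item.get("text", ""))} for item in batch]

        if __name__ == "__main__":
            server = Server()
            server.append_worker(Inference, num=1, max_batch_size=8)
            server.run()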
  • openvinotoolkit/model_server

    A scalable inference server for models optimized with OpenVINO™

    Language: C++ · 641 stars
  • underneathall/pinferencia

    Python + Inference - Model Deployment library in Python. Simplest model inference server ever.

    Language: Python · 558 stars
  • alibaba/rtp-llm

    RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications.

    Language: C++ · 404 stars
  • Lightning-Universe/stable-diffusion-deploy

    Learn to serve Stable Diffusion models on cloud infrastructure at scale. This Lightning App shows load balancing, orchestration, pre-provisioning, dynamic batching, GPU inference, and microservices working together via the Lightning Apps framework.

    Language: Python · 393 stars
  • eightBEC/fastapi-ml-skeleton

    A FastAPI skeleton app for serving machine learning models in a production-ready way; a self-contained sketch follows below.

    Language: Python · 351 stars
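
    The pattern such a skeleton encodes is small enough to sketch in full; the scoring line is a placeholder for a real model call, and the app runs under uvicorn (uvicorn main:app):

        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI(title="Model server")

        class Payload(BaseModel):
            features: list[float]

        @app.post("/predict")
        def predict(payload: Payload) -> dict:
            # Placeholder for model.predict(payload.features).
            score = sum(payload.features) / max(len(payload.features), 1)
            return {"score": score}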
  • bentoml/OneDiffusion

    OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease

    Language: Python · 324 stars
  • aniketmaurya/chitra

    A multi-functional library for full-stack Deep Learning. Simplifies Model Building, API development, and Model Deployment.

    Language: Python · 224 stars
  • lightbend/kafka-with-akka-streams-kafka-streams-tutorial

    Code samples for the Lightbend tutorial on writing microservices with Akka Streams, Kafka Streams, and Kafka

    Language: Scala · 213 stars
  • jozu-ai/kitops

    Tools for easing the handoff between AI/ML and App/SRE teams.

    Language: Go · 204 stars
  • google/JetStream

    JetStream is a throughput- and memory-optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in the future -- PRs welcome).

    Language: Python · 149 stars
  • spotify/zoltar

    Common library for serving TensorFlow, XGBoost and scikit-learn models in production.

    Language: Java · 138 stars
  • FederatedAI/FATE-Serving

    A scalable, high-performance serving system for federated learning models

    Language: Java · 134 stars
  • bentoml/gallery

    BentoML Example Projects 🎨

    Language: Python · 133 stars
  • allegroai/clearml-serving

    ClearML - Model-Serving Orchestration and Repository Solution

    Language: Python · 128 stars
  • alvarobartt/serving-pytorch-models

    Serving PyTorch models with TorchServe 🔥; a client request sketch follows below.

    Language: Jupyter Notebook · 101 stars
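
    Once a model archive is registered, TorchServe answers on its default inference port; a client sketch where "resnet18" is a hypothetical registered model name:

        import requests

        # POST raw bytes to TorchServe's inference API.
        with open("kitten.jpg", "rb") as f:
            resp = requests.post(
                "http://localhost:8080/predictions/resnet18", data=f.read()
            )
        print(resp.json())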
  • FlinkML/flink-jpmml

    flink-jpmml is a library for dynamic, real-time machine learning predictions, built on top of PMML standard models and the Apache Flink streaming engine.

    Language: Scala · 97 stars
  • notAI-tech/fastDeploy

    Deploy DL/ML inference pipelines with minimal extra code.

    Language: Python · 93 stars
  • NimbleBoxAI/nbox

    The official python package for NimbleBox. Exposes all APIs as CLIs and contains modules to make ML 🌸

    Language: Python · 88 stars
  • Project-MONAI/monai-deploy-app-sdk

    MONAI Deploy App SDK offers a framework and associated tools to design, develop and verify AI-driven applications in the healthcare imaging domain.

    Language: Jupyter Notebook · 83 stars
  • EmbeddedLLM/vllm-rocm

    vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs

    Language: Python · 78 stars
  • aporia-ai/inferencedb

    🚀 Stream inferences of real-time ML models in production to any data lake (Experimental)

    Language: Python · 77 stars