seldon-core

Machine Learning Deployment for Kubernetes



Branch Status: CI build badges are published for the master, release-0.2, and release-0.1 branches.

Seldon Core is an open source platform for deploying machine learning models on Kubernetes.

Goals

Machine learning deployment has many challenges, and Seldon Core aims to address them. Its high-level goals are:

  • Allow data scientists to create models using any machine learning toolkit or programming language. We initially plan to cover the following tools and languages:
    • Python-based models, including
      • TensorFlow models
      • scikit-learn models
    • Spark models
    • H2O models
    • R models
  • Automatically expose machine learning models via REST and gRPC when deployed, for easy integration into business apps that need predictions.
  • Allow complex runtime inference graphs to be deployed as microservices (see the sketch after this list). These graphs can be composed of:
    • Models - runtime inference executables for machine learning models
    • Routers - route API requests to sub-graphs. Examples: A/B tests, multi-armed bandits.
    • Combiners - combine the responses from sub-graphs. Example: ensembles of models.
    • Transformers - transform requests or responses. Example: feature transformation.
  • Handle full lifecycle management of the deployed model:
    • Updating the runtime graph with no downtime
    • Scaling
    • Monitoring
    • Security
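
As an illustration of how routers and models compose, below is a minimal sketch of the graph section of a SeldonDeployment that splits traffic between two models using the predefined RANDOM_ABTEST router. The child names (classifier-a, classifier-b) are hypothetical and must match containers defined elsewhere in the resource.

```yaml
# Sketch: graph section of a SeldonDeployment using the predefined
# RANDOM_ABTEST router to split traffic between two child models.
graph:
  name: ab-test
  implementation: RANDOM_ABTEST
  parameters:
  - name: ratioA          # fraction of traffic routed to the first child
    value: "0.5"
    type: FLOAT
  children:
  - name: classifier-a    # hypothetical; must match a container name
    type: MODEL
    endpoint:
      type: REST
  - name: classifier-b    # hypothetical; must match a container name
    type: MODEL
    endpoint:
      type: REST
```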

Prerequisites

A Kubernetes cluster. Kubernetes can be deployed in many environments, both in the cloud and on-premise.

Quick Start

Read the overview of how to use seldon-core.

Example Components

Seldon-core allows various types of components to be built and plugged into the runtime prediction graph. These include models, routers, transformers and combiners. Several example components are available as part of the project.
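
For instance, a model component for the Python wrappers is a plain class exposing a predict method. A minimal sketch, assuming the predict(self, X, features_names) contract used by the Python S2I wrappers listed below (the class and file names are hypothetical):

```python
# MyModel.py - minimal model component for the Seldon Core Python wrapper.
# The wrapper instantiates the class once and calls predict() per request.
class MyModel(object):
    def __init__(self):
        # Load model artifacts here; this runs once at container startup.
        pass

    def predict(self, X, features_names):
        # X arrives as an array-like of feature values; return the
        # prediction in the same array-like form. Echoing X is a stub.
        return X
```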

Integrations

Install

Follow the install guide for details on ways to install Seldon Core onto your Kubernetes cluster.

Deployment Guide

API

Three steps:

  1. Wrap your runtime prediction model.
  2. Define your runtime inference graph in a SeldonDeployment custom resource.
  3. Deploy the graph.
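
A minimal sketch of step 2, assuming a single wrapped model image (the resource name and image are hypothetical); running kubectl apply -f on this file performs step 3:

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: my-model
spec:
  name: my-model
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier             # must match the graph node below
          image: my-repo/my-model:0.1  # hypothetical wrapped-model image
    graph:
      name: classifier                 # single-node inference graph
      type: MODEL
      endpoint:
        type: REST
```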
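Once deployed, the graph is exposed over REST and gRPC. A request sketch in Python, assuming routing through an Ambassador ingress (the host is a placeholder, and the /seldon/<name>/ path prefix follows the Ambassador routing convention):

```python
import requests

# Seldon protocol payload: named columns plus an ndarray of feature rows.
payload = {"data": {"names": ["f0", "f1"], "ndarray": [[1.0, 2.0]]}}

# <ingress-host> is a placeholder for your cluster's ingress address.
resp = requests.post(
    "http://<ingress-host>/seldon/my-model/api/v0.1/predictions",
    json=payload,
)
print(resp.json())
```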

Advanced Tutorials

Reference

Articles/Blogs/Videos

Release Highlights

Testing

Configuration

Community

Developer

Latest Seldon Images

Description | Image URL | Stable Version | Development
Seldon Operator | seldonio/cluster-manager | 0.2.4 | 0.2.5-SNAPSHOT
Seldon Service Orchestrator | seldonio/engine | 0.2.4 | 0.2.5-SNAPSHOT
Seldon API Gateway | seldonio/apife | 0.2.4 | 0.2.5-SNAPSHOT
Seldon Python 3 (3.6) Wrapper for S2I | seldonio/seldon-core-s2i-python3 | 0.3 | 0.4-SNAPSHOT
Seldon Python 3.6 Wrapper for S2I | seldonio/seldon-core-s2i-python36 | 0.3 | 0.4-SNAPSHOT
Seldon Python 3.7 Wrapper for S2I | seldonio/seldon-core-s2i-python37 | 0.3 | 0.4-SNAPSHOT
Seldon Python 2 Wrapper for S2I | seldonio/seldon-core-s2i-python2 | 0.3 | 0.4-SNAPSHOT
Seldon Python ONNX Wrapper for S2I | seldonio/seldon-core-s2i-python3-ngraph-onnx | 0.2 |
Seldon Core Python Wrapper | seldonio/core-python-wrapper | 0.7 |
Seldon Java Build Wrapper for S2I | seldonio/seldon-core-s2i-java-build | 0.1 |
Seldon Java Runtime Wrapper for S2I | seldonio/seldon-core-s2i-java-runtime | 0.1 |
Seldon R Wrapper for S2I | seldonio/seldon-core-s2i-r | 0.1 |
Seldon NodeJS Wrapper for S2I | seldonio/seldon-core-s2i-nodejs | 0.1 | 0.2-SNAPSHOT
Seldon TensorFlow Serving Proxy | seldonio/tfserving-proxy | 0.1 |
Seldon NVIDIA Inference Server Proxy | seldonio/nvidia-inference-server-proxy | 0.1 |

Java Packages

Description | Package | Version
Seldon Core Wrapper | seldon-core-wrapper | 0.1.2
Seldon Core JPMML | seldon-core-jpmml | 0.0.1

Usage Reporting

Tools that help the development of Seldon Core by reporting anonymous usage data.