| Branch | Status |
|---|---|
| master | |
| release-0.1 | |
Seldon Core is an open source platform for deploying machine learning models on Kubernetes.
Machine learning deployment has many challenges. Seldon Core intends to help solve them. Its high-level goals are:
- Allow data scientists to create models using any machine learning toolkit or programming language. We plan to initially cover the tools/languages below:
  - Python based models, including
    - Tensorflow models
    - Sklearn models
  - Spark models
  - H2O models
  - R models
- Expose machine learning models via REST and gRPC automatically when deployed for easy integration into business apps that need predictions.
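A deployed model is reached over REST by POSTing a JSON body in Seldon's `data` message format. A minimal sketch of building such a request body in Python (the feature names and values here are illustrative, not from the project):

```python
import json

def build_prediction_request(feature_names, rows):
    """Build a Seldon-style prediction request body as a JSON string.

    The payload shape ({"data": {"names": ..., "ndarray": ...}}) follows
    Seldon's REST prediction message format.
    """
    return json.dumps({"data": {"names": feature_names, "ndarray": rows}})

# Illustrative two-feature request with a single row of inputs.
body = build_prediction_request(["f0", "f1"], [[1.0, 2.0]])
print(body)
```

The same `data` structure is returned in responses, with the model's outputs in place of the input rows.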
- Allow complex runtime inference graphs to be deployed as microservices. These graphs can be composed of:
- Models - runtime inference executable for machine learning models
- Routers - route API requests to sub-graphs. Examples: AB Tests, Multi-Armed Bandits.
- Combiners - combine the responses from sub-graphs. Examples: ensembles of models
  - Transformers - transform requests or responses. Example: transform feature requests.
- Handle full lifecycle management of the deployed model:
- Updating the runtime graph with no downtime
- Scaling
- Monitoring
- Security
You will need a Kubernetes cluster. Kubernetes can be deployed into many environments, both in the cloud and on-premise.
We have updated our core API to v1alpha2, which introduces a breaking change from v1alpha1 in the SeldonDeployments CRD.
Read the details of how to update your Kubernetes SeldonDeployment resources.
- 0.2 releases will now respect the v1alpha2 API.
- 0.1 releases respect the v1alpha1 API and will not be worked on further.
It is possible to deploy Seldon with two operators that can handle both v1alpha1 and v1alpha2 resources, though this is not part of our standard deployment docs. If you need this, please get in touch.
Read the overview of using seldon-core.
- Jupyter notebooks showing worked examples:
  - Minikube
  - GCP
  - Azure
- Advanced graphs showing the various types of runtime prediction graphs that can be built.
Seldon-core allows various types of components to be built and plugged into the runtime prediction graph. These include models, routers, transformers and combiners. Some example components that are available as part of the project are:
- Models: examples that illustrate simple machine learning models to help you build your own integrations
- Routers
- Transformers
  - Mahalanobis distance outlier detection. Example usage can be found in the Advanced graphs notebook.
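A model component in the style of Seldon's Python wrapper is a plain class exposing a `predict(X, feature_names)` method that returns one output row per input row. The scaling logic below is a stand-in for a real trained model, not a project example:

```python
class MeanScaler:
    """Illustrative model component: predicts the scaled mean of each
    input row, standing in for a real trained model."""

    def __init__(self, scale=2.0):
        self.scale = scale

    def predict(self, X, feature_names):
        # X is a list of feature rows; return one prediction per row.
        return [[self.scale * sum(row) / len(row)] for row in X]
```

Wrapping tools build such a class into a container image that serves the REST and gRPC prediction endpoints for you.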
- Kubeflow
  - Seldon-core can be installed as part of the Kubeflow project. A detailed end-to-end example provides a complete workflow for training various models and deploying them using seldon-core.
- IBM's Fabric for Deep Learning
- Istio and Seldon
Follow the install guide for details on ways to install Seldon Core onto your Kubernetes cluster.
Three steps:
- Wrap your runtime prediction model.
- Define your runtime inference graph in a seldon deployment custom resource.
- Deploy the graph.
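Steps 2 and 3 amount to writing and applying a `SeldonDeployment` custom resource. A minimal sketch for a single-model graph under the v1alpha2 API (the resource name and container image are placeholders for your own):

```yaml
apiVersion: machinelearning.seldon.io/v1alpha2
kind: SeldonDeployment
metadata:
  name: my-model            # placeholder deployment name
spec:
  predictors:
  - name: default
    replicas: 1
    componentSpecs:
    - spec:
        containers:
        - name: classifier
          image: my-registry/my-model:0.1   # your wrapped model image
    graph:                  # the runtime inference graph; a single MODEL node here
      name: classifier
      type: MODEL
      endpoint:
        type: REST
```

More complex graphs add ROUTER, COMBINER, or TRANSFORMER nodes with `children` entries; applying the resource with `kubectl` deploys the graph.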