This example uses Kubeflow, Seldon-Core, and Argo.
The example will be the MNIST handwritten digit classification task. We will train 3 different models to solve this task:
- A TensorFlow neural network model.
- A scikit-learn random forest model.
- An R least squares model.
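To give a feel for the second of these, here is a minimal sketch of training a scikit-learn random forest on digit data. It uses sklearn's bundled 8x8 digits dataset as a small stand-in for full MNIST; the real training code in this repo will differ.

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# The bundled 8x8 digits dataset stands in for full MNIST here.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # typically well above 0.9 on held-out digits
```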
We will then show a series of rolling deployments:
- Deploy the single TensorFlow model.
- Do a rolling update to an A/B test of the TensorFlow model and the scikit-learn model.
- Do a rolling update to a multi-armed bandit over all three models to direct traffic in real time to the best model.
In the following we will:

Either:
- Follow the Kubeflow docs to:
  - Create a persistent disk for NFS. Call it nfs-1.
  - Install Kubeflow with an NFS volume, Argo, and seldon-core onto your cluster.

Or:
- Follow a consolidated guide that covers the same steps.
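Once the nfs-1 disk exists, it is typically exposed to the cluster through a PersistentVolume. The fragment below is only a generic sketch of what such a manifest looks like; the server address, path, and capacity are placeholders, not values from this repo.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-1
spec:
  capacity:
    storage: 10Gi          # placeholder size
  accessModes:
  - ReadWriteMany          # NFS allows shared read/write across pods
  nfs:
    server: <nfs-server-ip>   # placeholder: address of your NFS server
    path: "/"
```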
For each model the repo provides:

TensorFlow model:
- Python training code
- Python runtime prediction code
- A script to wrap the runtime prediction code to run under Seldon-Core using Source-to-Image.

scikit-learn model:
- Python training code
- Python runtime prediction code
- A script to wrap the runtime prediction code to run under Seldon-Core using Source-to-Image.

R model:
- R training code
- R runtime prediction code
- A script to wrap the runtime prediction code to run under Seldon-Core using Source-to-Image.
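Seldon-Core's Python wrapping convention expects a class exposing a `predict` method, which the Source-to-Image builder packages into a serving image. The skeleton below is illustrative only; the class name and the stubbed-out model are not taken from this repo's actual prediction code.

```python
class MnistClassifier:
    """Skeleton of a Seldon-Core style Python model wrapper.

    The wrapped service instantiates the class once and calls
    predict() for each incoming request.
    """

    def __init__(self):
        # A real service would load the trained model (e.g. from the
        # NFS volume) here; we stub it out with the class count.
        self.n_classes = 10

    def predict(self, X, feature_names=None):
        # Return one score per digit class for each input row.
        # Dummy uniform scores stand in for real model output.
        return [[1.0 / self.n_classes] * self.n_classes for _ in X]
```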
Follow the steps in ./notebooks/training.ipynb to:
- Run Argo jobs for each model to:
  - Create training images and push them to the repo
  - Run training
  - Create runtime prediction images and push them to the repo
  - Deploy the individual runtime models
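In outline, an Argo Workflow that runs one of these steps has the shape below. The names and image are illustrative placeholders, not the repo's actual manifests.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mnist-train-      # Argo appends a unique suffix
spec:
  entrypoint: train
  templates:
  - name: train
    container:
      image: my-repo/mnist-train:latest   # placeholder image name
      command: ["python", "train.py"]     # placeholder training entrypoint
```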
To push the Docker images to your own repo you will need to set up your Docker credentials as a Kubernetes secret using the template in k8s_setup/docker-credentials-secret.yaml.tpl.
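For orientation, a Kubernetes docker-registry secret generally has the shape below; this is the standard generic form, and the repo's .tpl template may differ in names and details.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: docker-credentials        # placeholder name
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded contents of your ~/.docker/config.json
  .dockerconfigjson: <base64-encoded-docker-config>
```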
Follow the steps in ./notebooks/serving.ipynb to:
- Deploy the single TensorFlow model.
- Do a rolling update to an A/B test of the TensorFlow model and the scikit-learn model.
- Do a rolling update to a multi-armed bandit over all three models to direct traffic in real time to the best model.
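The multi-armed bandit step routes each request to whichever model currently looks best while still exploring the others. An epsilon-greedy strategy is one common choice; the sketch below is an illustrative stand-in, not Seldon-Core's router implementation.

```python
import random

class EpsilonGreedyRouter:
    """Route requests to one of n models: with probability epsilon
    explore a random model, otherwise exploit the model with the
    best observed mean reward so far."""

    def __init__(self, n_models, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_models
        self.values = [0.0] * n_models  # running mean reward per model

    def route(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.counts))  # explore
        return max(range(len(self.counts)), key=lambda i: self.values[i])

    def update(self, model, reward):
        # Incremental update of the mean reward for the chosen model.
        self.counts[model] += 1
        self.values[model] += (reward - self.values[model]) / self.counts[model]
```

In a deployment, `reward` would come from request feedback (e.g. whether the prediction was correct); over time traffic concentrates on the best-performing model.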
If you have installed the Seldon-Core analytics you can view the metrics on the Grafana dashboard.