# MNIST MLOps

## Points
- Training can be run via the train.py script.
- Inference can be run via the inference.py script.
- The inference Dockerfile is the repo's Dockerfile.
- BentoML is used for model serving, with Prometheus integrated (metrics exposed at the /metrics route); see the service sketch after this list.
- Deployment manifests can be found in the manifests folder.
- MLflow is integrated for logging models and metrics (a logging sketch also follows below).
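
A minimal sketch of what the BentoML service might look like, assuming a PyTorch model saved to the local Bento store under the name mnist_model (the model name, service name, and API signature are assumptions, not the repo's actual code). BentoML 1.x serves Prometheus metrics at /metrics automatically:

```python
# service.py - illustrative only; names are assumptions, not the repo's code.
import numpy as np

import bentoml
from bentoml.io import NumpyNdarray

# Load the latest saved MNIST model from the local Bento store as a runner.
runner = bentoml.pytorch.get("mnist_model:latest").to_runner()

svc = bentoml.Service("mnist-mlops", runners=[runner])

@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
async def predict(images: np.ndarray) -> np.ndarray:
    # Batch of 28x28 images in, per-class scores out.
    return await runner.async_run(images)
```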
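
Similarly, a hedged sketch of the kind of MLflow logging train.py performs; the experiment name, parameters, metrics, and values here are illustrative, not taken from the repo:

```python
import mlflow
import mlflow.pytorch
import torch.nn as nn

mlflow.set_tracking_uri("http://localhost:8000/")   # the server started below
mlflow.set_experiment("mnist-mlops")                # experiment name is assumed

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in model

with mlflow.start_run():
    mlflow.log_param("epochs", 5)
    mlflow.log_metric("val_accuracy", 0.97)
    # Model artifacts land in the S3 bucket configured on the server.
    mlflow.pytorch.log_model(model, artifact_path="model")
```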
## How to run
- Clone the repo:

```bash
git clone https://github.com/sethusaim/mnist_mlops.git
```
- Set up the MLflow server. Run this in a separate terminal:

```bash
mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root s3://<your-bucket-name>/ --host 0.0.0.0 -p 8000
```

Then export MLFLOW_TRACKING_URI so the training script logs to this server:

```bash
export MLFLOW_TRACKING_URI=http://localhost:8000/
```
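
Optionally, you can verify the tracking server is reachable from Python (MLflow 2.x API):

```python
import mlflow

mlflow.set_tracking_uri("http://localhost:8000/")
# A fresh server should list at least the "Default" experiment.
print(mlflow.search_experiments())
```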
- Run the training script:

```bash
python train.py
```

The training script generates the BentoML dockerfile and builds the inference image (a sketch of that flow follows).
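
A sketch, under assumed names, of the save-then-containerize flow a training script like train.py can implement with BentoML 1.x; the repo's actual model tags, service path, and image tag may differ:

```python
import subprocess

import bentoml
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in for the trained model
bentoml.pytorch.save_model("mnist_model", model)             # save into the local Bento store

# Package the service (service.py from the sketch above) into a Bento, then
# containerize it via the BentoML CLI, which generates the dockerfile and
# runs the docker build.
bento = bentoml.bentos.build(
    service="service:svc",
    include=["service.py"],
    python={"packages": ["torch", "numpy"]},
)
subprocess.run(
    ["bentoml", "containerize", str(bento.tag), "-t", "mnist-mlops:latest"],
    check=True,
)
```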
- Run the inference docker image:

```bash
docker run -d -p 3000:3000 mnist-mlops
```

At http://localhost:3000/ you will get a Swagger UI exposing the predict route and the Prometheus metrics route. An example client call follows.
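
For example, assuming the NumPy-array API from the service sketch above (verify the actual route and payload shape in the Swagger UI):

```python
import numpy as np
import requests

image = np.zeros((1, 28, 28), dtype=np.float32)  # dummy blank digit

# BentoML's NumpyNdarray IO descriptor accepts a JSON array body.
resp = requests.post("http://localhost:3000/predict", json=image.tolist())
print(resp.status_code, resp.json())

# Prometheus exposition-format metrics.
print(requests.get("http://localhost:3000/metrics").text[:300])
```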