In this project, you will apply the skills you have acquired in this course to operationalize a Machine Learning Microservice API.
You are given a pre-trained scikit-learn model that has been trained to predict housing prices in Boston according to several features, such as the average number of rooms in a home and data about highway access, teacher-to-pupil ratios, and so on. You can read more about the data, which was initially taken from Kaggle, on the data source site. This project tests your ability to operationalize a Python flask app—in a provided file, `app.py`—that serves out predictions (inference) about housing prices through API calls. This project could be extended to any pre-trained machine learning model, such as those for image recognition and data labeling.
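To make the inference flow concrete, here is a sketch of the kind of JSON payload such an API consumes. The feature names below are drawn from the Boston housing dataset, and the endpoint/port in the comment are assumptions for illustration — the provided `app.py` defines the actual names it expects:

```python
import json

# Hypothetical feature values for a single house. The real app.py
# determines the exact feature names the pre-trained model expects.
payload = {
    "CHAS": {"0": 0},        # Charles River dummy variable
    "RM": {"0": 6.575},      # average number of rooms per dwelling
    "TAX": {"0": 296.0},     # property-tax rate
    "PTRATIO": {"0": 15.3},  # pupil-teacher ratio
    "B": {"0": 396.9},
    "LSTAT": {"0": 4.98},    # % lower status of the population
}

# Serialize the payload. With the app running you could POST it, e.g.:
#   curl -d @payload.json -H "Content-Type: application/json" \
#        -X POST http://localhost:8000/predict
body = json.dumps(payload)
print(body)
```

The `{"0": value}` nesting mirrors a one-row pandas DataFrame serialized to JSON, which is a common shape for sklearn inference payloads.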
Your project goal is to operationalize this working machine learning microservice using Kubernetes, which is an open-source system for automating the management of containerized applications. In this project you will:
- Test your project code using linting
- Complete a Dockerfile to containerize this application
- Deploy your containerized application using Docker and make a prediction
- Improve the log statements in the source code for this application
- Configure Kubernetes and create a Kubernetes cluster
- Deploy a container using Kubernetes and make a prediction
- Upload a complete GitHub repo with CircleCI to indicate that your code has been tested
You can find a detailed project rubric here.
The final implementation of the project will showcase your abilities to operationalize production microservices.
- Create a virtualenv with Python 3.7 and activate it. Refer to this link for help on specifying the Python version in the virtualenv.
```bash
python3 -m pip install --user virtualenv
# You should have Python 3.7 available in your host.
# Check the Python path using `which python3`
# Use a command similar to this one:
python3 -m virtualenv --python=<path-to-Python3.7> .devops
source .devops/bin/activate
```
- Run `make install` to install the necessary dependencies
- Standalone: `python app.py`
- Run in Docker: `./run_docker.sh`
- Run in Kubernetes: `./run_kubernetes.sh`
- Setup and Configure Docker locally
- Setup and Configure Kubernetes locally
- Create Flask app in Container
- Run via kubectl
- Files to build and run the docker image
  - `run_docker.sh`
- Files to upload images to Docker Hub
  - `upload_docker.sh`
- Files to deploy to Kubernetes
  - `run_kubernetes.sh`
- Files to build the application
  - `Makefile`
- Application files
  - `app.py`
  - `requirements.txt`
- Application output log files
  - `output_txt_files/docker_out.txt`
  - `output_txt_files/kubernetes_out.txt`
- Folder for application models
  - `model_data/`
- Folder for CircleCI config files
  - `.circleci/`
To see the screenshots of each task, go to the `screenshots` directory.
- Specify your python version.
- Specify a working directory.
- Copy the `app.py` source code to that directory.
- Install any dependencies in `requirements.txt` (`make install`).
- Expose a port when the container is created (port 80 is standard).
- Specify that the app runs at container launch.
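Putting the steps above together, a minimal Dockerfile might look like the following. The base image tag and the exact files copied are assumptions — adjust them to your repository and rubric:

```dockerfile
# Use a Python 3.7 base image
FROM python:3.7.3-stretch

# Specify a working directory
WORKDIR /app

# Copy the app source code and dependency list to that directory
COPY app.py requirements.txt /app/

# Install dependencies from requirements.txt
RUN pip install --upgrade pip && \
    pip install --trusted-host pypi.python.org -r requirements.txt

# Expose port 80 (the standard HTTP port)
EXPOSE 80

# Run app.py at container launch
CMD ["python", "app.py"]
```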
Note: If you want to install the python dependencies and hadolint, use `make install` and `make install-hadolint`. To run `make lint`, don't forget to create and activate the virtual env first:
```bash
$ make setup                      # create the virtual env
$ source ~/.devops/bin/activate   # activate the virtual env
$ make lint
```
- Build the docker image from the Dockerfile; it is recommended that you use the optional `--tag` parameter as described in the build documentation.
- List the created docker images (for logging purposes).
- Run the containerized Flask app; publish the container's port (`80`) to a host port (`8000`).
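A `run_docker.sh` implementing these steps could look roughly like this. The image tag `api` is a placeholder, and the script requires a local Docker daemon, so it is a sketch rather than a tested script:

```bash
#!/usr/bin/env bash

# Build the image from the Dockerfile and add a descriptive tag
docker build --tag=api .

# List docker images (for logging purposes)
docker image ls

# Run the Flask app, publishing container port 80 on host port 8000
docker run -p 8000:80 api
```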
Run the container using the `run_docker.sh` script created by following the steps above:
```bash
$ . ./run_docker.sh
```
After running the container (docker app), we can run the prediction using the `make_prediction.sh` script:
```bash
$ . ./make_prediction.sh  # Don't forget to run the container first
```
- Add a prediction log statement
- Run the container and make a prediction to check the logs
Note: If you don't see any logs in your terminal, you can use the `docker logs` command. To get the container ID of your docker app, use `docker ps`, then pass that ID to `docker logs`. E.g.: if `docker ps` shows the container ID `4c01db0b339c`, your command to get the logs is `docker logs 4c01db0b339c`.
```bash
$ docker ps
```
Note: Don't forget to copy the output to `docker_out.txt`
- Create a Docker Hub account
- Build the docker container with this command: `docker build --tag=<your_tag> .` (Don't forget the tag name)
- Define a `dockerpath`, which is `<docker_hub_username>/<project_name>`, e.g.: `osaigbovoemmanuel/skylearnmlproject`
- Authenticate and tag the image
- Push your docker image to the `dockerpath`
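An `upload_docker.sh` covering these steps might look like this sketch. The local tag `api` and the `dockerpath` value are placeholders, and the script needs Docker Hub credentials to actually run:

```bash
#!/usr/bin/env bash

# Placeholder: replace with <docker_hub_username>/<project_name>
dockerpath="<docker_hub_username>/<project_name>"

# Authenticate, then tag the local image with the dockerpath
docker login
docker tag api "$dockerpath"

# Push the image to Docker Hub
docker push "$dockerpath"
```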
Note: replace `<your_tag>` with the tag name that you want to use. For example: api -> `docker build --tag=api .`

After completing all steps, run the upload using the `upload_docker.sh` script:
```bash
$ . ./upload_docker.sh
```
- Define a `dockerpath` which will be `<docker_hub_username>/<project_name>`; this should be the same name as your uploaded repository (the same as in `upload_docker.sh`)
- Run the docker container with `kubectl`; you'll have to specify the container and the port
- List the Kubernetes pods
- Forward the container port to a host port, using the same ports as before
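A `run_kubernetes.sh` following these steps could be sketched as below. The `dockerpath` is the same placeholder as before, and the pod name `app` is an arbitrary choice; note also that `kubectl run` behavior varies across Kubernetes versions, so a Deployment manifest may be preferable on newer clusters:

```bash
#!/usr/bin/env bash

# Same dockerpath as in upload_docker.sh (placeholder)
dockerpath="<docker_hub_username>/<project_name>"

# Run the container in the cluster, specifying the image and port
kubectl run app --image="$dockerpath" --port=80

# List the Kubernetes pods
kubectl get pods

# Forward the container port to the same host port as before
kubectl port-forward pod/app 8000:80
```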
After completing all steps, run Kubernetes using the `run_kubernetes.sh` script:
```bash
$ . ./run_kubernetes.sh
```
After running Kubernetes, make a prediction using the `make_prediction.sh` script as we did in the second task.

Note: Don't forget to copy the output to `kubernetes_out.txt`
If you want to delete the Kubernetes cluster, just run `minikube delete`. You can also stop the Kubernetes cluster with `minikube stop`.
- Create a CircleCI account (use your GitHub account for better integration)
- Create a config using this template
- Add a status badge using this template: `[![CircleCI](https://circleci.com/gh/<github_username>/<repository>.svg?style=svg)](https://circleci.com/gh/<github_username>/<repository>)` — replace `<github_username>` and `<repository>` with your data and paste it at the top of your readme file.
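A minimal `.circleci/config.yml` in the spirit of the template might look like the following. The image tag, hadolint version, and job names are assumptions — pin them to whatever the template you use specifies:

```yaml
version: 2.1
jobs:
  build:
    docker:
      # Assumed base image; match your project's Python version
      - image: python:3.7.3-stretch
    working_directory: ~/repo
    steps:
      - checkout
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            make install
            # install hadolint for Dockerfile linting (version is an assumption)
            wget -O /bin/hadolint https://github.com/hadolint/hadolint/releases/download/v1.16.3/hadolint-Linux-x86_64
            chmod +x /bin/hadolint
      - run:
          name: run lint
          command: |
            . venv/bin/activate
            make lint
```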