
Operationalizing Machine Learning

Armando Medina

(December 2020)

Project Overview

In this project, we continue working with the bank marketing dataset. We use Azure to set up a cloud-based machine learning production model, deploy it, and consume it. We also create, publish, and consume a pipeline.

Main steps of the project:

  1. Authentication
  2. Automated ML Experiment
  3. Deploy the best model
  4. Enable logging
  5. Swagger Documentation
  6. Consume model endpoints
  7. Create and publish a pipeline
  8. Screencast
  9. Future Works

1. Authentication

In this step, I install the Azure Machine Learning extension for the `az` CLI, which allows you to interact with Azure Machine Learning Studio from the command line.

After installing the extension, create a service principal and associate it with the specific workspace.
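
Once the service principal has access to the workspace, scripts can authenticate non-interactively through the Python SDK. Below is a minimal sketch; the environment variable names, workspace name, and resource group are placeholders, not values from this project:

```python
import os

from azureml.core import Workspace
from azureml.core.authentication import ServicePrincipalAuthentication

# Authenticate with the service principal instead of an interactive login.
# Environment variable names and workspace details below are placeholders.
sp_auth = ServicePrincipalAuthentication(
    tenant_id=os.environ["AZ_TENANT_ID"],
    service_principal_id=os.environ["AZ_CLIENT_ID"],
    service_principal_password=os.environ["AZ_CLIENT_SECRET"],
)

ws = Workspace.get(
    name="my-workspace",                 # placeholder workspace name
    auth=sp_auth,
    subscription_id=os.environ["AZ_SUBSCRIPTION_ID"],
    resource_group="my-resource-group",  # placeholder resource group
)
print(ws.name, ws.location, sep="\t")
```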

2. Automated ML Experiment

In this step, I create an experiment using Automated ML, configure a compute cluster, and use that cluster to run the experiment.
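
A minimal sketch of provisioning the compute cluster with the Python SDK, assuming a config.json in the working directory; the cluster name, VM size, and node counts are assumptions:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()  # reads config.json downloaded from the portal

# Provision a CPU cluster for the AutoML run (name/size are assumptions).
cluster_name = "automl-cluster"
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,
    max_nodes=4,
)
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True)
```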


Dataset registered in Azure ML from a URL.
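
A sketch of how this registration can be done with the SDK; the dataset name and the Bank Marketing CSV URL below are assumptions, and `ws` comes from the sketch above:

```python
from azureml.core import Dataset

# URL of the bank marketing CSV (assumed path).
data_url = (
    "https://automlsamplenotebookdata.blob.core.windows.net/"
    "automl-sample-notebook-data/bankmarketing_train.csv"
)

# Create a TabularDataset from the URL and register it in the workspace.
dataset = Dataset.Tabular.from_delimited_files(path=data_url)
dataset = dataset.register(
    workspace=ws,
    name="BankMarketing Dataset",
    description="Bank marketing dataset registered from a URL",
)
```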


We apply AutoML to our dataset. Here is the completed experiment.
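
A sketch of the AutoML configuration and submission, reusing `ws`, `dataset`, and `compute_target` from the sketches above; the timeout, primary metric, and validation settings are assumptions:

```python
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig

# AutoML settings in the spirit of the project (exact values are assumptions).
automl_config = AutoMLConfig(
    task="classification",
    training_data=dataset,           # the registered TabularDataset from above
    label_column_name="y",           # target column of the bank marketing data
    compute_target=compute_target,
    experiment_timeout_minutes=30,
    primary_metric="accuracy",
    n_cross_validations=5,
    enable_early_stopping=True,
)

experiment = Experiment(ws, "bankmarketing-automl")
remote_run = experiment.submit(automl_config, show_output=True)
remote_run.wait_for_completion()
```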


Here we see the best model of our experiment: MaxAbsScaler, XGBoostClassifier.
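
The best run and its fitted model can be pulled from the completed run; this sketch assumes `remote_run` from the submission above:

```python
# Retrieve the best run and its fitted model from the completed AutoML run.
best_run, fitted_model = remote_run.get_output()
print(best_run)
print(fitted_model.steps)  # e.g. MaxAbsScaler followed by XGBoostClassifier
```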

3. Deploy the best model

In this step, we deploy the best model so that we can interact with it through an HTTP API service, sending data via POST requests.
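
A sketch of such a deployment to Azure Container Instances, assuming `ws` and `best_run` from the earlier sketches and a scoring script named score.py; the model and service names are placeholders:

```python
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Register the best model, then deploy it as an ACI web service.
model = best_run.register_model(
    model_name="bankmarketing-model",   # placeholder name
    model_path="outputs/model.pkl",     # typical AutoML output path
)

inference_config = InferenceConfig(entry_script="score.py")
deployment_config = AciWebservice.deploy_configuration(
    cpu_cores=1, memory_gb=1, auth_enabled=True
)

service = Model.deploy(ws, "bankmarketing-service", [model],
                       inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```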

4. Enable logging

Now that the best model has been deployed, we enable Application Insights and retrieve the logs through a script.

Here we see "Application Insights" enable inthe deatials tab of the endpoint.

Here we see the output when you run logs.py.
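
A minimal sketch of what a script like logs.py might do, assuming the service name used in the deployment sketch above:

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()

# Look up the deployed service by name and turn on Application Insights.
service = Webservice(workspace=ws, name="bankmarketing-service")  # assumed name
service.update(enable_app_insights=True)

# Print the service logs, similar to what logs.py outputs.
print(service.get_logs())
```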

5. Swagger Documentation

In this step, we consume the deployed model using Swagger.

Here we see Swagger running on localhost, showing the HTTP API methods and responses for the model.
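
The local Swagger UI needs the service's swagger.json. A sketch of fetching it with the SDK, assuming `service` is the deployed web service from the previous step:

```python
import requests

# Download the swagger.json that the deployed service exposes, so a local
# Swagger UI container can render the API documentation.
response = requests.get(service.swagger_uri)
with open("swagger.json", "w") as f:
    f.write(response.text)
```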

6. Consume model endpoints

In this step, I used the provided endpoint.py script to interact with the trained model.

Here we see the output of endpoint.py.
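
A sketch of what endpoint.py does: the scoring URI and key below are placeholders taken from the endpoint's Consume tab, and the sample records are abbreviated rather than the full bank marketing feature set:

```python
import json
import requests

# Placeholders; the real values come from the endpoint's Consume tab.
scoring_uri = "http://<your-endpoint>.azurecontainer.io/score"
key = "<your-primary-key>"

# Two sample records shaped like the bank marketing features (abbreviated).
data = {"data": [
    {"age": 40, "job": "blue-collar", "marital": "married", "default": "no"},
    {"age": 35, "job": "technician", "marital": "single", "default": "no"},
]}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",
}
response = requests.post(scoring_uri, data=json.dumps(data), headers=headers)
print(response.json())
```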

Here we see the output of ApacheBench (ab) run against the HTTP API.

7. Create and publish a pipeline

This step shows our work with (a sketch of the pipeline code follows the list):

  • The Pipelines section of Azure ML Studio, showing that the pipeline has been created.
  • The Pipelines section of Azure ML Studio, showing the Pipeline Endpoint.
  • The Bankmarketing dataset with the AutoML module.
  • The “Published Pipeline overview”, showing a REST endpoint and a status of ACTIVE.
  • The “Use RunDetails Widget” cell in the Jupyter Notebook, showing the step runs.
  • The scheduled run in ML Studio.
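
A sketch of creating and publishing the pipeline with the SDK, assuming `ws` and `automl_config` from the sketches in step 2; the step, experiment, and pipeline names are placeholders:

```python
from azureml.core.experiment import Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import AutoMLStep

# Wrap the AutoML configuration from step 2 in a pipeline step.
# Metrics/model outputs are omitted here to keep the sketch short.
automl_step = AutoMLStep(name="automl_module",
                         automl_config=automl_config,
                         allow_reuse=True)

pipeline = Pipeline(workspace=ws, steps=[automl_step])
pipeline_run = Experiment(ws, "bankmarketing-pipeline").submit(pipeline)
pipeline_run.wait_for_completion()

# Publishing exposes a REST endpoint, shown as ACTIVE in the studio.
published_pipeline = pipeline_run.publish_pipeline(
    name="Bankmarketing Train",
    description="Training bankmarketing pipeline",
    version="1.0",
)
print(published_pipeline.endpoint)
```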

8. Screencast

Screencast video

9. Future Works

In step one, we created a service principal so that we could authenticate that way; however, in our scripts we authenticate with the help of the config.json file downloaded from Azure Machine Learning. In future work, in addition to better encapsulating some elements, we could use the service principal as the authentication method for our scripts.