
Bank Marketing Campaigns ML Pipeline

Table of Contents

  • Overview

  • Architectural Diagram

  • Key Steps

    • Automated ML Experiment
    • Deploy the best model
    • Enable App Insights & Logging
    • Swagger Documentation
    • Consume model endpoints
    • Create and publish a pipeline
  • Screen Recording
  • Future Improvements

Overview

This project aims to create a cloud-based machine learning model for the Bank Marketing Dataset, which contains data about a bank's marketing campaigns. We use the power of AutoML on this classification problem to predict whether or not a client will subscribe to a bank product.
The project involves configuring, deploying, and consuming the model. It also includes creating, consuming, and publishing an ML pipeline.

Architectural Diagram

The diagram below illustrates the key steps of our operation: (Diagram: Architecture)

  • We use our dataset to create an AutoML run to find the best model to fit our data.
  • We use the model obtained and our data to train an ML pipeline.
  • We publish both the model & pipeline to endpoints to be ready for consumption.

Key Steps

Step 1: Automated ML Experiment

In this step, we first register our dataset using the dataset's URI. (Screenshot: Dataset)

Then we use this data to create an AutoML run that determines the best model. (Screenshot: Completed run)
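A minimal sketch of this step with the v1 Azure ML Python SDK (the dataset URL, compute cluster name, and experiment name are assumptions; adjust them to your workspace):

```python
from azureml.core import Dataset, Experiment, Workspace
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()

# Register the Bank Marketing dataset from its public URI (assumed URL)
data_url = ("https://automlsamplenotebookdata.blob.core.windows.net/"
            "automl-sample-notebook-data/bankmarketing_train.csv")
dataset = Dataset.Tabular.from_delimited_files(path=data_url)

automl_config = AutoMLConfig(
    task="classification",
    primary_metric="accuracy",
    training_data=dataset,
    label_column_name="y",         # target column of the dataset
    compute_target="cpu-cluster",  # assumed compute cluster name
    experiment_timeout_hours=1,    # the 1-hour exit criterion used here
    enable_early_stopping=True,
    n_cross_validations=5,
)

experiment = Experiment(ws, "bankmarketing-automl")
remote_run = experiment.submit(automl_config, show_output=True)
```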

In the Experiment section, we can see both the AutoML run and our pipeline marked as completed:
(Screenshot: Completed runs)

After the run is complete, we can identify the best model, which is the Voting Ensemble with an accuracy of 0.91866. (Screenshot: Best model)

Step 2: Deploy the best model

In this step, we deploy the Voting Ensemble model using Azure Container Instances (ACI), making sure that the Authentication option is enabled. (Screenshot: Deployment)
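A sketch of the deployment, continuing from the run above (the model name, service name, and score.py entry script are assumptions; score.py is the scoring script downloaded from the best run's outputs):

```python
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Retrieve and register the best child run's model from the AutoML run above
best_run, fitted_model = remote_run.get_output()
model = best_run.register_model(
    model_name="bankmarketing-automl",  # assumed model name
    model_path="outputs/model.pkl")

inference_config = InferenceConfig(
    entry_script="score.py",
    environment=best_run.get_environment())

# ACI deployment with key-based authentication enabled
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1, memory_gb=1, auth_enabled=True)

service = Model.deploy(ws, "bankmarketing-service",
                       [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```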

Step 3: Enable App Insights & Logging

We enable Application Insights by editing the logs.py script to match the deployed service's name, setting enable_app_insights to True, and then running the Python script. (Screenshot: App Insights)
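A sketch of what a logs.py like this contains, assuming the service name from the previous step:

```python
from azureml.core import Workspace
from azureml.core.webservice import Webservice

ws = Workspace.from_config()

# Assumed service name; use the name of your deployed web service
name = "bankmarketing-service"
service = Webservice(name=name, workspace=ws)

# Turn on Application Insights for the deployed service
service.update(enable_app_insights=True)

# Print the service logs
for line in service.get_logs().split("\n"):
    print(line)
```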

Running logs.py after editing it: (Screenshot: Logs)

Output of the logs: (Screenshots: Logs)

Step 4: Swagger Documentation

In this step we set up Swagger to document and consume the model's API. We start by downloading the swagger.json file from our deployed model on Azure, then we run the swagger.sh and serve.py scripts from a PowerShell window. Swagger on localhost:
(Screenshots: Swagger UI)
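Here swagger.sh typically runs the swagger-ui Docker container, while serve.py serves swagger.json on localhost so the UI can load it. A minimal sketch of such a serve.py (the port is an assumption; run it from the directory containing swagger.json):

```python
# A minimal static file server for swagger.json (sketch; port is an assumption)
from http.server import HTTPServer, SimpleHTTPRequestHandler

class CORSRequestHandler(SimpleHTTPRequestHandler):
    def end_headers(self):
        # Allow the Swagger UI (running in Docker) to fetch swagger.json
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), CORSRequestHandler).serve_forever()
```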

Step 5: Consume model endpoints

We use the endpoint.py script to consume the model endpoint: we first edit the scoring_uri and key variables in the script to match the URI and authentication key generated after deployment, then we execute the script. (Screenshot: Endpoint)
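A sketch of what endpoint.py does (the URI, key, and sample record are placeholders; the feature names must match the model's input schema):

```python
import requests

# Placeholders: paste the scoring URI and primary key shown after deployment
scoring_uri = "http://<your-aci-endpoint>/score"
key = "<your-primary-key>"

# One sample record from the Bank Marketing feature set
data = {"data": [{
    "age": 40, "job": "blue-collar", "marital": "married",
    "education": "basic.9y", "default": "no", "housing": "yes",
    "loan": "no", "contact": "cellular", "month": "may",
    "day_of_week": "mon", "duration": 100, "campaign": 1,
    "pdays": 999, "previous": 0, "poutcome": "nonexistent",
    "emp.var.rate": 1.1, "cons.price.idx": 93.994,
    "cons.conf.idx": -36.4, "euribor3m": 4.857, "nr.employed": 5191.0,
}]}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {key}",  # key-based auth enabled at deployment
}

response = requests.post(scoring_uri, json=data, headers=headers)
print(response.json())
```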

Step 6: Create and publish a pipeline

In this step, we first update our Jupyter notebook variables to match our environment, then we create our training pipeline, run it, and publish it.
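A minimal sketch of creating and publishing the pipeline, reusing ws and automl_config from Step 1 (the step, experiment, and pipeline names are assumptions):

```python
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import AutoMLStep

# Wrap the AutoML configuration from Step 1 in a pipeline step
automl_step = AutoMLStep(
    name="automl_module",
    automl_config=automl_config,
    allow_reuse=True)

pipeline = Pipeline(ws, steps=[automl_step])
pipeline_run = Experiment(ws, "bankmarketing-pipeline").submit(pipeline)
pipeline_run.wait_for_completion()

# Publishing exposes the pipeline behind a REST endpoint
published = pipeline_run.publish_pipeline(
    name="Bankmarketing Train",
    description="Training bankmarketing pipeline",
    version="1.0")
print(published.endpoint)
```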

Here we can see the Pipelines section in Azure Machine Learning Studio: (Screenshot: Pipelines)

Pipeline endpoint: (Screenshot: Pipeline endpoint)

Bankmarketing dataset with the AutoML module: (Screenshot: Pipeline graph)

Published pipeline overview: (Screenshot: Published pipeline)

RunDetails widget: (Screenshots: RunDetails widget)

Screen Recording

Screencast

Future Improvements

  • We can increase the Exit Criterion time from 1 hour to the default value of 3 hours, giving AutoML more time to find models of higher accuracy.
  • We can enable data drift tracking to monitor the model's accuracy over time.