Model Interpretability & Data Drift Meetup

This repository contains artifacts for the Model Interpretability and Data Drift meetup talk from the Citizen Data Scientists Melbourne group. More information can be found here: https://www.meetup.com/Citizen/events/265084753/

Abstract

During the training and development cycle of a machine learning model, interpreting model outputs is essential for verifying hypotheses and building trust with stakeholders. But what methods exist to interpret models? And are these methods limited to the model development process?
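As a minimal illustration of one such method (not the specific techniques covered in the talk), the sketch below computes permutation feature importance with scikit-learn: each feature is shuffled in turn and the resulting drop in test accuracy indicates how much the model relies on it. The dataset and model are illustrative placeholders.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative data and model; any fitted estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy;
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, mean_importance in ranked[:5]:
    print(f"{name}: {mean_importance:.4f}")
```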

Moreover, after deploying a machine learning model, the data seen at inference time can drift away from the training data. This is one of the main causes of degraded model performance over time. But what causes this drift? And how do you verify a model's robustness after deployment?
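A common way to check for this kind of drift is to compare the distribution of each feature at training time against its distribution on recent inference data. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic arrays stand in for a real training sample and a real stream of inference data.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training = rng.normal(loc=0.0, scale=1.0, size=5000)   # feature at training time
inference = rng.normal(loc=0.4, scale=1.0, size=5000)  # same feature after deployment, mean shifted

statistic, p_value = ks_2samp(training, inference)

# A small p-value suggests the two samples come from different
# distributions, i.e. the feature has drifted.
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```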

This meetup will focus on the topics of model interpretability and data drift. It will aim to answer the questions above, balancing theory and practice, and explore how Microsoft Azure can help tackle these issues.

Speaker Bio

Nicholas has worked in several technical roles spanning finance, government, and technology. He is currently a Cloud Solution Architect at Microsoft specializing in Data and AI. His professional interests include artificial intelligence, big data, and cloud computing.