Unified ML Monitoring on Databricks Workshop

Whether you are monitoring one model or hundreds, understanding the performance of your models and the infrastructure supporting them is key to the long-term success of your data science efforts. Each cloud offers its own solutions for managing model artifacts, logs, versions, and other metadata; however, unifying these key data points can be difficult.

This workshop shows you how to build a Unified Machine Learning Model Monitoring Solution on Databricks. A Databricks ML Engineer will walk you through collecting key model metrics from sources like MLflow and external services such as Azure ML or SageMaker, then show how to use Delta Lake to manage the collected model performance data and calculate model drift indicators. Lastly, they will show how it all comes together in SQL Analytics as a Unified ML Monitoring Dashboard, complete with an alerting mechanism that triggers a retraining job when one of the model drift calculations falls below a defined threshold.
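To make the metric-collection step concrete, here is a minimal sketch of pulling run metrics out of MLflow and landing them in a Delta table. The experiment name, the `ml_monitoring.model_metrics` table, and the `accuracy` metric are illustrative assumptions, not part of the workshop materials.

```python
# Minimal sketch: collect MLflow run metrics into a Delta table for monitoring.
# Assumes an MLflow experiment at "/Shared/churn-model" whose runs log an
# "accuracy" metric; both names are hypothetical placeholders.
import mlflow
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# mlflow.search_runs returns a pandas DataFrame with one row per run and
# logged metrics exposed as "metrics.<name>" columns.
runs_pdf = mlflow.search_runs(experiment_names=["/Shared/churn-model"])

# Keep only the columns the monitoring dashboard needs, with flat column names.
metrics_pdf = runs_pdf[["run_id", "start_time", "metrics.accuracy"]].rename(
    columns={"metrics.accuracy": "accuracy"}
)

# Append the collected metrics to the Delta table the dashboard will query.
(spark.createDataFrame(metrics_pdf)
    .write.format("delta")
    .mode("append")
    .saveAsTable("ml_monitoring.model_metrics"))
```

Metrics from external services such as Azure ML or SageMaker can be appended to the same table in a matching schema, so the downstream drift calculations and dashboard see a single, unified source.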

In this virtual workshop, we will create a Unified ML Monitoring Solution, including:

  • Collecting model performance data
  • Estimating model drift based on collected model data (see the sketch after this list)
  • Unifying collected model monitoring data into a single dashboard
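
As a rough illustration of the drift-estimation step, the sketch below compares the latest run's accuracy against a trailing 30-day baseline and writes the result to a Delta table that a SQL Analytics alert could watch. The table names, the 30-day window, and the ratio-based drift indicator are assumptions for illustration; the workshop may define drift differently.

```python
# Minimal sketch: compute a simple drift indicator from the collected metrics.
# Assumes the hypothetical "ml_monitoring.model_metrics" table from the
# collection sketch above, with "start_time" and "accuracy" columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

metrics = spark.table("ml_monitoring.model_metrics")

# Trailing baseline: mean accuracy over the last 30 days of runs.
baseline = (metrics
    .filter(F.col("start_time") >= F.date_sub(F.current_date(), 30))
    .agg(F.avg("accuracy").alias("baseline_accuracy")))

# Accuracy of the most recent run.
latest = (metrics
    .orderBy(F.col("start_time").desc())
    .limit(1)
    .select(F.col("accuracy").alias("latest_accuracy")))

# Drift indicator: values below 1.0 mean the latest model underperforms the
# baseline. A SQL Analytics alert on this table can trigger a retraining job
# when the indicator falls below a chosen threshold (e.g. 0.95).
drift = (latest.crossJoin(baseline)
    .withColumn("drift_indicator",
                F.col("latest_accuracy") / F.col("baseline_accuracy")))

(drift.write.format("delta")
    .mode("append")
    .saveAsTable("ml_monitoring.model_drift"))
```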