Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger

This repository contains the notebook and scripts for the blog post "Detecting and analyzing incorrect model predictions with Amazon SageMaker Model Monitor and Debugger".

Create a SageMaker notebook instance and clone the repository:

```
git clone git@github.com:aws-samples/amazon-sagemaker-analyze-model-predictions.git
```

In the notebook `analyze_model_predictions.ipynb`, we first deploy a ResNet18 model that has been trained to distinguish between 43 categories of traffic signs using the German Traffic Sign dataset.
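
As a rough sketch of that deployment step with the SageMaker Python SDK (the S3 path, entry-point script name, and framework version below are placeholders; the notebook contains the actual values):

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

role = sagemaker.get_execution_role()

# Placeholder artifact location and inference script; see the notebook for the real ones.
model = PyTorchModel(
    model_data="s3://<your-bucket>/model/model.tar.gz",
    role=role,
    entry_point="inference.py",  # hypothetical script name
    framework_version="1.5.0",
    py_version="py3",
)

predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```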

We will set up SageMaker Model Monitor to automatically capture inference requests and predictions. Afterwards, we launch a monitoring schedule that periodically kicks off a custom processing job to inspect the collected data and detect unexpected model behavior.
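
Data capture is enabled by passing a `DataCaptureConfig` to the deploy call above, and the custom analysis runs on a schedule. A minimal sketch, assuming the custom processing logic is packaged as a container image (the image URI and S3 destinations are placeholders):

```python
from sagemaker.model_monitor import (
    CronExpressionGenerator,
    DataCaptureConfig,
    ModelMonitor,
    MonitoringOutput,
)

# Passed as data_capture_config=... in the model.deploy(...) call
capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=100,  # capture every request and response
    destination_s3_uri="s3://<your-bucket>/endpoint/data-capture",
)

# Custom analysis logic packaged as a container image (placeholder URI)
monitor = ModelMonitor(
    role=role,
    image_uri="<account-id>.dkr.ecr.<region>.amazonaws.com/custom-monitor:latest",
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

monitor.create_monitoring_schedule(
    endpoint_input=predictor.endpoint_name,
    output=MonitoringOutput(
        source="/opt/ml/processing/output",
        destination="s3://<your-bucket>/monitoring/reports",
    ),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```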

We will then create adversarial images that lead the model to make incorrect predictions. Once Model Monitor detects this issue, we use SageMaker Debugger to obtain visual explanations of the deployed model: we update the endpoint to emit tensors during inference and then use those tensors to compute saliency maps.
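
At its core, the saliency computation takes gradients of the predicted class score with respect to the input pixels. A minimal PyTorch sketch of that idea (the notebook derives the maps from tensors that Debugger emits, rather than computing them locally like this):

```python
import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Gradient of the target class score w.r.t. the input pixels.

    `image` is a single CxHxW tensor; returns an HxW relevance map.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    # Take the maximum absolute gradient across the color channels
    return image.grad.abs().max(dim=0).values
```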

The saliency map can be rendered as a heat map and reveals the parts of an image that were critical to the prediction. Below is an example (taken from the German Traffic Sign dataset): the image on the left is the input to the fine-tuned ResNet model, which predicted class 25 (‘Road work’). The image on the right shows the input overlaid with a heat map, where red indicates the pixels most relevant and blue the pixels least relevant for predicting class 25.
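
A short matplotlib sketch of such an overlay, assuming `image` is an HxWx3 array scaled to [0, 1] and `saliency` is an HxW relevance map like the one above:

```python
import matplotlib.pyplot as plt

def plot_saliency_overlay(image, saliency):
    """Show the input next to the input overlaid with the saliency heat map."""
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(image)
    axes[0].set_title("Input image")
    axes[1].imshow(image)
    # The 'jet' colormap maps high relevance to red and low relevance to blue
    axes[1].imshow(saliency, cmap="jet", alpha=0.5)
    axes[1].set_title("Saliency heat map")
    for ax in axes:
        ax.axis("off")
    plt.show()
```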

License

This project is licensed under the Apache-2.0 License.