

# Amazon SageMaker Model Dashboard

Amazon SageMaker Model Dashboard is a centralized portal, accessible from the SageMaker AI console, where you can view, search, and explore all of the models in your account. You can track which models are deployed for inference and whether they are used in batch transform jobs or hosted on endpoints. If you set up monitors with Amazon SageMaker Model Monitor, you can also track the performance of your models as they make real-time predictions on live data. You can use the dashboard to find models that violate the thresholds you set for data quality, model quality, bias, and explainability. The dashboard’s comprehensive presentation of all your monitor results helps you quickly identify models that don’t have these metrics configured.

The Model Dashboard aggregates model-related information from several SageMaker AI features. In addition to the services provided in Model Monitor, you can view model cards, visualize workflow lineage, and track your endpoint performance. You no longer have to sort through logs, query in notebooks, or access other AWS services to collect the data you need. With a cohesive user experience and integration into existing services, SageMaker AI’s Model Dashboard provides an out-of-the-box model governance solution to help you ensure quality coverage across all your models.

**Prerequisites**

To use the Model Dashboard, you should have one or more models in your account. You can train models using Amazon SageMaker AI or import models you've trained elsewhere. To create a model in SageMaker AI, you can use the `CreateModel` API. For more information, see [CreateModel](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateModel.html). You can also use SageMaker AI-provided ML environments, such as Amazon SageMaker Studio Classic, which provides project templates that set up model training and deployment for you. For information about how to get started with Studio Classic, see [Amazon SageMaker Studio Classic](https://docs.aws.amazon.com/sagemaker/latest/dg/studio.html).

While this is not a mandatory prerequisite, customers gain the most value out of the dashboard if they set up model monitoring jobs using SageMaker Model Monitor for models deployed to endpoints. For prerequisites and instructions on how to use SageMaker Model Monitor, see [Data and model quality monitoring with Amazon SageMaker Model Monitor](model-monitor.md).

## Model Dashboard elements


The Model Dashboard view extracts high-level details from each model to provide a comprehensive summary of every model in your account. If your model is deployed for inference, the dashboard helps you track the performance of your model and endpoint in real time.

Important details on this page include:
+ **Risk rating**: A user-specified parameter from the model card with a **low**, **medium**, or **high** value. The model card’s risk rating is a categorical measure of the business impact of the model’s predictions. Models are used for a variety of business applications, each of which assumes a different level of risk. For example, incorrectly detecting a cyber attack has much greater business impact than incorrectly categorizing an email. If you don’t know the model risk, you can set it to **unknown**. For information about Amazon SageMaker Model Cards, see [Model Cards](https://docs.aws.amazon.com/sagemaker/latest/dg/model-cards.html).
+ **Model Monitor alerts**: Model Monitor alerts are a primary focus of the Model Dashboard, and reviewing the existing documentation on the various monitors provided by SageMaker AI is a helpful way to get started. For an in-depth explanation of the SageMaker Model Monitor feature and sample notebooks, see [Data and model quality monitoring with Amazon SageMaker Model Monitor](model-monitor.md).

  The Model Dashboard displays Model Monitor status values by the following monitor types:
  + *Data Quality*: Compares live data to training data. If they diverge, your model's inferences may no longer be accurate. For additional details about the Data Quality monitor, see [Data quality](model-monitor-data-quality.md).
  + *Model Quality*: Compares the predictions that the model makes with the actual Ground Truth labels that the model attempts to predict. For additional details about the Model Quality monitor, see [Model quality](model-monitor-model-quality.md).
  + *Bias Drift*: Compares the distribution of live data to training data, which can also cause inaccurate predictions. For additional details about the Bias Drift monitor, see [Bias drift for models in production](clarify-model-monitor-bias-drift.md).
  + *Feature Attribution Drift*: Also known as explainability drift. Compares the relative rankings of your features in training data versus live data, which could also be a result of bias drift. For additional details about the Feature Attribution Drift monitor, see [Feature attribution drift for models in production](clarify-model-monitor-feature-attribution-drift.md).

  Each Model Monitor status is one of the following values:
  + **None**: No monitor is scheduled
  + **Inactive**: A monitor was scheduled, but it was deactivated
  + **OK**: A monitor is scheduled and active, and has not encountered enough violations in recent monitoring executions to raise an alert
  + **Time and date**: An active monitor raised an alert at the specified time and date
+ **Endpoint**: The endpoints that host your model for real-time inference. Within the Model Dashboard, you can select the endpoint column to view performance metrics such as CPU, GPU, disk, and memory utilization of your endpoints in real time to help you track the performance of your compute instances.
+ **Batch transform job**: The most recent batch transform job that ran using this model. This column helps you determine if a model is actively used for batch inference.
+ **Model details**: Each entry in the dashboard links to a model details page where you can dive deeper into an individual model. You can access the model’s lineage graph, which visualizes the workflow from data preparation to deployment, and metadata for each step. You can also create and view the model card, review alert details and history, assess the performance of your real-time endpoints, and access other infrastructure-related details.

# Model Monitor schedules and alerts


Using the Python SDK, you can create a model monitor for data quality, model quality, bias drift, or feature attribution drift. For more information about using SageMaker Model Monitor, see [Data and model quality monitoring with Amazon SageMaker Model Monitor](model-monitor.md). The Model Dashboard populates information from all the monitors you create on all your models in your account. You can track the status of each monitor, which indicates whether your monitor is running as expected or failed due to an internal error. You can also activate or deactivate any monitor in the model details page itself. For instructions about how to view scheduled monitors for a model, see [View scheduled monitors](model-dashboard-schedule-view.md). For instructions about how to activate or deactivate model monitors, see [Activate or deactivate a model monitor](model-dashboard-schedule-activate.md).
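To make the request shape concrete, the following is a minimal sketch of the body you might pass to the SageMaker `CreateMonitoringSchedule` API, built as a plain dictionary. The schedule name, job definition name, and cron expression are hypothetical placeholders; consult the API reference for the full set of fields.

```python
import json

# Illustrative request body for the SageMaker CreateMonitoringSchedule API.
# The schedule and job definition names below are hypothetical placeholders.
request = {
    "MonitoringScheduleName": "my-data-quality-schedule",          # hypothetical
    "MonitoringScheduleConfig": {
        "ScheduleConfig": {
            # Run the monitoring job at the top of every hour.
            "ScheduleExpression": "cron(0 * ? * * *)",
        },
        "MonitoringJobDefinitionName": "my-data-quality-job-def",  # hypothetical
        "MonitoringType": "DataQuality",
    },
}

print(json.dumps(request, indent=2))
```

The `MonitoringType` value selects which of the four monitor types the schedule runs; the Model Dashboard then surfaces the schedule and its status alongside the model.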

A properly configured, actively running model monitor might raise alerts, in which case the monitoring executions produce violation reports. For details about how alerts work, and how to view alert results, history, and links to job reports for debugging, see [View and edit alerts](model-dashboard-alerts.md).

# View scheduled monitors


Use SageMaker Model Monitor to continuously monitor your machine learning models for data drift, model quality, bias, and other issues that might impact model performance. After you've set up monitoring schedules, you can view the details of these scheduled monitors through the SageMaker AI console. The following procedure outlines the steps to access and review the scheduled monitors for a particular model, including their current status:

**To view a model’s scheduled monitors**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the scheduled monitors you want to view.

1. View the scheduled monitors in the **Monitor schedule** section. You can review the status for each monitor in the **Status schedule** column, which is one of the following values:
   + **Failed**: The monitoring schedule failed due to a problem with the configuration or settings (such as incorrect user permissions).
   + **Pending**: The monitor is in the process of becoming scheduled.
   + **Stopped**: The schedule is stopped by the user.
   + **Scheduled**: The schedule is created and runs at the frequency you specified.

# Activate or deactivate a model monitor


Use the following procedure to activate or deactivate a model monitor.

**To activate or deactivate a model monitor, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the alert you want to modify.

1. Select the radio button next to the monitor schedule of the alert you want to modify.

1. (optional) Choose **Deactivate monitor schedule** if you want to deactivate your monitor schedule.

1. (optional) Choose **Activate monitor schedule** if you want to activate your monitor schedule.

# View and edit alerts


The Model Dashboard displays alerts you configured in Amazon CloudWatch. You can modify the alert criteria within the dashboard itself. The alert criteria depend upon two parameters:
+ **Datapoints to alert**: Within the evaluation period, how many execution failures raise an alert.
+ **Evaluation period**: The number of most recent monitoring executions to consider when evaluating alert status.

The following image shows an example scenario of a series of Model Monitor executions in which we set a hypothetical **Evaluation period** of 3 and a **Datapoints to alert** value of 2. After every monitoring execution, the number of failures is counted within the **Evaluation period** of 3. If the number of failures meets or exceeds the **Datapoints to alert** value of 2, the monitor raises an alert and remains in alert status until the number of failures within the **Evaluation period** becomes less than 2 in subsequent iterations. In the image, the evaluation windows are red when the monitor raises an alert or remains in alert status, and green otherwise.

Note that even if the evaluation window size has not reached the **Evaluation period** of 3, as shown in the first 2 rows of the image, the monitor still raises an alert if the number of failures meets or exceeds the **Datapoints to alert** value of 2.

![A sequence of seven example monitoring executions.](https://docs.aws.amazon.com/sagemaker/latest/dg/images/model_monitor/model-dashboard-alerts-window.png)
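The sliding-window logic described above can be sketched in a few lines of Python. This is an illustrative reimplementation of the rule, not the dashboard's actual code:

```python
def alert_statuses(failed, evaluation_period, datapoints_to_alert):
    """For each monitoring execution, report whether the monitor is in
    alert status: True when the number of failures within the trailing
    evaluation window meets or exceeds the datapoints-to-alert value."""
    statuses = []
    for i in range(len(failed)):
        # Trailing window of at most `evaluation_period` executions,
        # including the current one.
        window = failed[max(0, i - evaluation_period + 1): i + 1]
        statuses.append(sum(window) >= datapoints_to_alert)
    return statuses

# Seven executions (True = the execution failed), evaluated with an
# evaluation period of 3 and a datapoints-to-alert value of 2.
executions = [True, True, False, False, True, True, False]
print(alert_statuses(executions, evaluation_period=3, datapoints_to_alert=2))
# → [False, True, True, False, False, True, True]
```

Note that the second execution already raises an alert even though the window has not yet reached the full evaluation period of 3, matching the behavior described above.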


Within the monitor details page, you can view your alert history, edit existing alert criteria, and view job reports to help you debug alert failures. For instructions about how to view alert history or job reports for failed monitoring executions, see [View alert history or job reports](model-dashboard-alerts-view.md). For instructions about how to edit alert criteria, see [Edit alert criteria](model-dashboard-alerts-edit.md).

# View alert history or job reports


**To view alert history or job reports of failed executions, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the alert history you want to view.

1. In the **Schedule name** column, select the monitor name of the alert history you want to view.

1. To view alert history, select the **Alert history** tab.

1. (optional) To view job reports of monitoring executions, complete the following steps:

   1. In the **Alert history** tab, choose **View executions** for the alert you want to investigate.

   1. In the **Execution history** table, choose **View report** of the monitoring execution you want to investigate.

The report displays the following information:
+ **Feature**: The user-defined ML feature that was monitored
+ **Constraint**: The specific check within the monitor
+ **Violation details**: Information about why the constraint was violated

# Edit alert criteria


**To edit an alert in the Model Dashboard, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the alert you want to modify.

1. Select the radio button next to the monitor schedule of the alert you want to modify.

1. Choose **Edit Alert** in the **Monitor schedule** section.

1. (optional) Change **Datapoints to alert** if you want to change the number of failures within the **Evaluation period** that initiate an alert.

1. (optional) Change **Evaluation period** if you want to change the number of most recent monitoring executions to consider when evaluating alert status.

# View a model lineage graph


When you train a model, Amazon SageMaker AI creates a visualization of your entire ML workflow from data preparation to deployment. This visualization is called a model lineage graph. The following page describes how to view a model lineage graph in the SageMaker AI console.

Model lineage graphs use entities to represent individual steps in your workflow. For example, a basic model lineage graph might have an entity representing your training set, which is associated with an entity representing your training job, which is associated with another entity representing your model. In addition, the graph stores information about each step in your workflow. With this information, you can recreate any step in the workflow or track model and dataset lineage. For example, SageMaker AI Lineage stores the S3 URI of your input data sources with each job so you can perform further analysis of the data sources for compliance verification.

While the model lineage graph can help you view the steps in individual workflows, there are many other capabilities that you can leverage using the AWS SDK. For example, with the AWS SDK you can create or query your entities. For more information about the full set of features in SageMaker AI Lineage and example notebooks, see [Amazon SageMaker ML Lineage Tracking](lineage-tracking.md).

# Introduction to entities


Amazon SageMaker AI automatically creates tracking entities for SageMaker AI jobs, models, model packages, and endpoints if the data is available. For a basic workflow, suppose you train a model using a dataset. SageMaker AI automatically generates a lineage graph with three entities: 
+ **Dataset**: A type of artifact, which is an entity representing a URI-addressable object or data. An artifact is generally either an input or an output to a trial component or action.
+ **TrainingJob**: A type of trial component, which is an entity representing processing, training, and transform jobs.
+ **Model**: Another type of artifact. Like the **Dataset** artifact, a **Model** is a URI-addressable object. In this case, it is an output of the **TrainingJob** trial component.

Your model lineage graph expands quickly if you add additional steps to your workflow, such as data preprocessing or postprocessing, if you deploy your model to an endpoint, or if you include your model in a model package, among many other possibilities. For the complete list of SageMaker AI entities, see [Amazon SageMaker ML Lineage Tracking](lineage-tracking.md).

## Entity properties


Each node in the graph displays the entity type, but you can choose the vertical ellipsis to the right of the entity type to see specific details related to your workflow. In the basic lineage graph described previously, you can choose the vertical ellipsis next to **Dataset** to see specific values for the following properties (common to all artifact entities):
+ **Name**: The name of your dataset.
+ **Source URI**: The Amazon S3 location of your dataset.

For the `TrainingJob` entity, you can see the specific values for the following properties (common to all `TrialComponent` entities):
+ **Name**: The name of the training job.
+ **Job ARN**: The Amazon Resource Name (ARN) of your training job.

For the **Model** entity, you see the same properties as listed for **Dataset** since they are both artifact entities. For a list of the entities and their associated properties, see [Lineage Tracking Entities](lineage-tracking-entities.md).

## Entity queries


Amazon SageMaker AI automatically generates graphs of lineage entities as you use them. However, if you are running many iterations of an experiment and don't want to view every lineage graph, the AWS SDK can help you perform queries across all your workflows. For example, you can query your lineage entities for all the processing jobs that use an endpoint. Or, you can see all the downstream trials that use an artifact. For a list of all the queries you can perform, see [Querying Lineage Entities](querying-lineage-entities.md).
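As a sketch of what such a query looks like, the following builds an illustrative request body for the SageMaker `QueryLineage` API, asking for everything downstream of a model artifact. The artifact ARN is a hypothetical placeholder; check the API reference for the full filter options.

```python
import json

# Illustrative request body for the SageMaker QueryLineage API.
# The artifact ARN is a hypothetical placeholder.
query = {
    "StartArns": [
        "arn:aws:sagemaker:us-east-1:111122223333:artifact/example-model"  # hypothetical
    ],
    "Direction": "Descendants",  # follow the workflow downstream
    "IncludeEdges": True,        # return associations as well as entities
    "MaxDepth": 10,              # bound the traversal depth
}

print(json.dumps(query, indent=2))
```

Setting `Direction` to `Ascendants` instead would walk the graph upstream, for example from a deployed model back to its training datasets.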

## View a model’s lineage graph


**To view the lineage graph for a model, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the lineage graph you want to view.

1. Choose **View lineage** in the **Model Overview** section.

# View Endpoint Status


If you want to use your trained model to perform inference on live data, you deploy your model to a real-time endpoint. To ensure appropriate latency of your predictions, you want to make sure the instances that host your model are running efficiently. Model Dashboard’s endpoint monitoring feature displays real-time information about your endpoint configuration and helps you track endpoint performance with metrics. 

**Monitor settings**

The Model Dashboard links to existing SageMaker AI endpoint details pages, which display real-time graphs of metrics that you can select in Amazon CloudWatch. Within your dashboard, you can track these metrics while your endpoint handles real-time inference requests. Some metrics you can select are the following:
+ `CPUUtilization`: The sum of each individual CPU core's utilization. Each core ranges from 0% to 100%, so the sum can exceed 100% on multi-core instances.
+ `MemoryUtilization`: The percentage of memory used by the containers on an instance, from 0% to 100%.
+ `DiskUtilization`: The percentage of disk space used by the containers on an instance, from 0% to 100%.

For the complete list of metrics you can view in real time, see [Amazon SageMaker AI metrics in Amazon CloudWatch](monitoring-cloudwatch.md).
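The same metrics can be retrieved programmatically. The following is a minimal sketch of the parameters you might pass to the CloudWatch `GetMetricStatistics` API to chart an endpoint's CPU utilization over the last hour; the endpoint and variant names are hypothetical placeholders, and metric name casing follows CloudWatch's published names.

```python
import json
from datetime import datetime, timedelta, timezone

# Illustrative parameters for a CloudWatch GetMetricStatistics call that
# charts endpoint CPU utilization. Endpoint and variant names below are
# hypothetical placeholders.
now = datetime.now(timezone.utc)
params = {
    "Namespace": "/aws/sagemaker/Endpoints",
    "MetricName": "CPUUtilization",
    "Dimensions": [
        {"Name": "EndpointName", "Value": "my-endpoint"},  # hypothetical
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    "StartTime": (now - timedelta(hours=1)).isoformat(),
    "EndTime": now.isoformat(),
    "Period": 60,               # one datapoint per minute
    "Statistics": ["Average"],
}

print(json.dumps(params, indent=2))
```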

**Runtime settings**

Amazon SageMaker AI supports automatic scaling (auto scaling) for your hosted models. Auto scaling dynamically adjusts the number of instances provisioned for a model in response to changes in your workload. When the workload increases, auto scaling brings more instances online. When the workload decreases, auto scaling removes unnecessary instances so that you don't pay for provisioned instances that you aren't using. You can customize the following runtime settings in the Model Dashboard:
+ *Update weights*: Change the amount of workload assigned to each instance with numerical weighting. For more information about instance weighting during auto scaling, see [Configure instance weighting for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups-instance-weighting.html).
+ *Update instance count*: Change the number of total instances that can service your workload when it increases.

For more information about endpoint runtime settings, see [CreateEndpointConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpointConfig.html).
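The weight and instance-count updates above correspond to the SageMaker `UpdateEndpointWeightsAndCapacities` API. The following is an illustrative request body; the endpoint and variant names are hypothetical placeholders.

```python
import json

# Illustrative request body for the SageMaker
# UpdateEndpointWeightsAndCapacities API. The endpoint and variant names
# are hypothetical placeholders.
request = {
    "EndpointName": "my-endpoint",  # hypothetical
    "DesiredWeightsAndCapacities": [
        {"VariantName": "variant-a", "DesiredWeight": 3.0, "DesiredInstanceCount": 2},
        {"VariantName": "variant-b", "DesiredWeight": 1.0, "DesiredInstanceCount": 1},
    ],
}

# With these weights, variant-a receives 3/4 of the traffic
# (weight 3 out of a total weight of 4).
print(json.dumps(request, indent=2))
```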

**Endpoint configuration settings**

Endpoint configuration settings display the settings you specified when you created the endpoint. These settings tell SageMaker AI which resources to provision for your endpoint. The settings include the following:
+ *Data capture*: You can choose to capture information about your endpoint's inputs and outputs. For example, you may want to sample incoming traffic to see if the results correlate to training data. You can customize your sampling frequency, the format of the stored data, and Amazon S3 location of stored data. For more information about setting up your data capture configuration, see [Data capture](model-monitor-data-capture.md).
+ *Production variants*: See the previous discussion in *Runtime settings*.
+ *Async invocation configuration*: If your endpoint is asynchronous, this section includes the maximum number of concurrent requests sent by the SageMaker AI client to the model container, the Amazon S3 location of your success and failure notifications, and the output location of your endpoint outputs. For more information about asynchronous outputs, see [Asynchronous endpoint operations](async-inference-create-invoke-update-delete.md).
+ *Encryption key*: You can enter your encryption key if you want to encrypt your outputs.

For more information about endpoint configuration settings, see [CreateEndpointConfig](https://docs.aws.amazon.com/sagemaker/latest/APIReference/API_CreateEndpointConfig.html).
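Putting the pieces above together, the following is a minimal sketch of a `CreateEndpointConfig` request body that combines a production variant with data capture. All names, the S3 location, and the instance type are hypothetical placeholders.

```python
import json

# Illustrative request body for the SageMaker CreateEndpointConfig API.
# All names, the S3 bucket, and the instance type are hypothetical.
config = {
    "EndpointConfigName": "my-endpoint-config",  # hypothetical
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "ModelName": "my-model",             # hypothetical
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }
    ],
    "DataCaptureConfig": {
        "EnableCapture": True,
        "InitialSamplingPercentage": 20,         # sample 20% of traffic
        "DestinationS3Uri": "s3://amzn-s3-demo-bucket/datacapture",  # hypothetical
        "CaptureOptions": [
            {"CaptureMode": "Input"},
            {"CaptureMode": "Output"},
        ],
    },
}

print(json.dumps(config, indent=2))
```

The captured inputs and outputs land in the `DestinationS3Uri` location, where Model Monitor can compare them against your training baseline.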

## View status and configuration for an endpoint


**To view the status and configuration for a model’s endpoint, complete the following steps:**

1. Open the [SageMaker AI console](https://console.aws.amazon.com/sagemaker/).

1. Choose **Governance** in the left panel.

1. Choose **Model Dashboard**.

1. In the **Models** section of the Model Dashboard, select the model name of the endpoint you want to view.

1. Select the endpoint name in the **Endpoints** section.

# Model Dashboard FAQ


Refer to the following FAQ topics for answers to commonly asked questions about Amazon SageMaker Model Dashboard.

## Q. What is Model Dashboard?


Amazon SageMaker Model Dashboard is a centralized repository of all models created in your account. The models are generally the outputs of SageMaker training jobs, but you can also import models trained elsewhere and host them on SageMaker AI. Model Dashboard provides a single interface for IT administrators, model risk managers, and business leaders to track all deployed models and aggregates data from multiple AWS services to provide indicators about how your models are performing. You can view details about model endpoints, batch transform jobs, and monitoring jobs for additional insights into model performance. The dashboard’s visual display helps you quickly identify which models have missing or inactive monitors so you can ensure all models are periodically checked for data drift, model drift, bias drift, and feature attribution drift. Lastly, the dashboard’s ready access to model details helps you dive deep so you can access logs, infrastructure-related information, and resources to help you debug monitoring failures.

## Q. What are the prerequisites to use Model Dashboard?


You should have one or more models created in SageMaker AI, either trained on SageMaker AI or externally trained. While this is not a mandatory prerequisite, you gain the most value from the dashboard if you set up model monitoring jobs via Amazon SageMaker Model Monitor for models deployed to endpoints.

## Q. Who should use Model Dashboard?


Model risk managers, ML practitioners, data scientists, and business leaders can get a comprehensive overview of models using the Model Dashboard. The dashboard aggregates and displays data from the Amazon SageMaker Model Cards, Endpoints, and Model Monitor services to surface valuable information such as model metadata from the model card and model registry, endpoints where the models are deployed, and insights from model monitoring.

## Q. How do I use Model Dashboard?


Model Dashboard is available out of the box with Amazon SageMaker AI and does not require any prior configuration. However, if you have set up model monitoring jobs using SageMaker Model Monitor and Clarify, you use Amazon CloudWatch to configure alerts that raise a flag in the dashboard when model performance deviates from an acceptable range. You can create and add new model cards to the dashboard, and view all the monitoring results associated with endpoints. Model Dashboard currently does not support cross-account models.

## Q. What is Amazon SageMaker Model Monitor?


With Amazon SageMaker Model Monitor, you can select the data you want to monitor and analyze without writing any code. SageMaker Model Monitor lets you select data, such as prediction output, from a menu of options and captures metadata such as timestamp, model name, and endpoint so you can analyze model predictions. You can specify the sampling rate of data capture as a percentage of overall traffic in the case of high volume real-time predictions. This data is stored in your own Amazon S3 bucket. You can also encrypt this data, configure fine-grained security, define data retention policies, and implement access control mechanisms for secure access.

## Q. What types of model monitors does SageMaker AI support?


SageMaker Model Monitor provides the following types of [model monitors](https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html):
+ *Data Quality*: Monitor drift in data quality.
+ *Model Quality*: Monitor drift in model quality metrics, such as accuracy.
+ *Bias Drift for Models in Production*: Monitor bias in your model's predictions by comparing the distribution of training and live data.
+ *Feature Attribution Drift for Models in Production*: Monitor drift in feature attribution by comparing the relative rankings of features in training and live data.

## Q. What inference methods does SageMaker Model Monitor support?


Model Monitor currently supports endpoints that host a single model for real-time inference and does not support monitoring of [multi-model endpoints](https://docs.aws.amazon.com/sagemaker/latest/dg/multi-model-endpoints.html).

## Q. How can I get started with SageMaker Model Monitor?


You can use the following resources to get started with model monitoring:
+ [Data quality monitor example notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_model_monitor/introduction/SageMaker-ModelMonitoring.ipynb)
+ [Model quality monitor example notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_model_monitor/introduction/SageMaker-ModelMonitoring.ipynb)
+ [Bias drift monitor example notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_model_monitor/fairness_and_explainability/SageMaker-Model-Monitor-Fairness-and-Explainability.ipynb)
+ [Feature attribution drift monitor example notebook](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker_model_monitor/fairness_and_explainability/SageMaker-Model-Monitor-Fairness-and-Explainability.ipynb)

For more examples of model monitoring, see the GitHub repository [amazon-sagemaker-examples](https://github.com/aws/amazon-sagemaker-examples/tree/main/sagemaker_model_monitor).

## Q. How does Model Monitor work?


Amazon SageMaker Model Monitor automatically monitors machine learning models in production, using rules to detect drift in your model. Model Monitor notifies you when quality issues arise through alerts. To learn more, see [How Amazon SageMaker Model Monitor works](model-monitor.md#model-monitor-how-it-works).

## Q. When and how do you bring your own container (BYOC) for Model Monitor?


Model Monitor computes model metrics and statistics on tabular data only. For use cases other than tabular datasets, such as images or text, you can bring your own containers (BYOC) to monitor your data and models. For example, you can use BYOC to monitor an image classification model that takes images as input and outputs a label. To learn more about container contracts, see [Support for Your Own Containers With Amazon SageMaker Model Monitor](model-monitor-byoc-containers.md).

## Q. Where can I find examples of BYOC for Model Monitor?


You can find helpful BYOC examples in the following links:
+ [Data and model quality monitoring with Amazon SageMaker Model Monitor](model-monitor.md)
+ [GitHub example repository](https://github.com/aws/amazon-sagemaker-examples/tree/master/sagemaker_model_monitor)
+ [Support for Your Own Containers With Amazon SageMaker Model Monitor](model-monitor-byoc-containers.md)
+ [Detecting data drift in NLP using BYOC Model Monitor](https://aws.amazon.com/blogs/machine-learning/detect-nlp-data-drift-using-custom-amazon-sagemaker-model-monitor)
+ [Detecting and analyzing incorrect predictions in CV](https://aws.amazon.com/blogs/machine-learning/detecting-and-analyzing-incorrect-model-predictions-with-amazon-sagemaker-model-monitor-and-debugger)

## Q. How do I integrate Model Monitor with Pipelines?


For details about how to integrate Model Monitor and Pipelines, see [Amazon SageMaker Pipelines now integrates with SageMaker Model Monitor and SageMaker Clarify](https://aws.amazon.com/about-aws/whats-new/2021/12/amazon-sagemaker-pipelines-integrates-sagemaker-model-monitor-sagemaker-clarify/).

For an example, see the GitHub sample notebook [Pipelines integration with Model Monitor and Clarify](https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-pipelines/tabular/model-monitor-clarify-pipelines/sagemaker-pipeline-model-monitor-clarify-steps.ipynb).

## Q. Are there any performance concerns using `DataCapture`?


When turned on, data capture occurs asynchronously on the SageMaker AI endpoints. To prevent impact to inference requests, `DataCapture` stops capturing requests at high levels of disk usage. We recommend that you keep your disk utilization below 75% so that `DataCapture` continues capturing requests.