

# Alerts in Grafana version 9
<a name="v9-alerts"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana alerting provides you with robust and actionable alerts that help you learn about problems in the systems moments after they occur, minimizing disruption to your services.

Amazon Managed Grafana includes access to an updated alerting system, *Grafana alerting*, that centralizes alerting information in a single, searchable view. It includes the following features:
+ Create and manage Grafana alerts in a centralized view.
+ Create and manage Cortex and Loki managed alerts through a single interface.
+ View alerting information from Prometheus, Amazon Managed Service for Prometheus, and other Alertmanager compatible data sources.

When you create your Amazon Managed Grafana workspace, you have the choice of using Grafana alerting, or the [Classic dashboard alerts](old-alerts-overview.md). This section covers Grafana alerting.

**Note**  
If you created your workspace with Classic alerts enabled and want to switch to Grafana alerting, you can [switch between the two alerting systems](v9-alerting-use-grafana-alerts.md).

## Grafana alerting limitations
<a name="v9-alert-limitations"></a>
+ The Grafana alerting system can retrieve rules from all available Amazon Managed Service for Prometheus, Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch rules from other supported data sources.
+ Alert rules defined in Grafana, rather than in Prometheus, send multiple notifications to your contact point. If you are using native Grafana alerts, we recommend that you stay on classic dashboard alerting rather than enabling the new Grafana alerting feature. If you want to view alerts defined in your Prometheus data source, we recommend that you enable Grafana alerting, which sends only a single notification for alerts created in Prometheus Alertmanager.
**Note**  
This limitation no longer applies in Amazon Managed Grafana workspaces that support Grafana version 10.4 and later.

**Topics**
+ [Grafana alerting limitations](#v9-alert-limitations)
+ [Overview](v9-alerting-overview.md)
+ [Exploring alerting](v9-alerting-explore.md)
+ [Set up Alerting](v9-alerting-setup.md)
+ [Migrating classic dashboard alerts to Grafana alerting](v9-alerting-use-grafana-alerts.md)
+ [Manage your alert rules](v9-alerting-managerules.md)
+ [Manage your alert notifications](v9-alerting-managenotifications.md)

# Overview
<a name="v9-alerting-overview"></a>

The following gives you an overview of how Grafana Alerting works and introduces some of the key concepts that work together to form the core of its flexible and powerful alerting engine.

1. **Data source**

   Connects to the data to be used by alerting. This is often time series data, which shows the details of a system to be monitored and analyzed. For more information, see [data sources](AMG-data-sources-builtin.md).

1. **Alert rules**

   Set the evaluation criteria that determine whether an alert instance fires. An alert rule consists of one or more queries and expressions to pull data from the data source, a condition describing what constitutes the need for an alert, the frequency of evaluation, and optionally, the duration over which the condition must be met for an alert to fire.

   Grafana managed alerts support multi-dimensional alerting, which means that each alert rule can create multiple alert instances. This is exceptionally powerful if you are observing multiple series in a single expression.

1. **Labels**

   Match an alert rule and its instances to notification policies and silences. They can also be used to group your alerts by severity.

1. **Notification policies**

   Set where, when, and how the alerts get routed to notify your team when the alert fires. Each notification policy specifies a set of label matchers to indicate which alerts they are responsible for. A notification policy has a contact point assigned to it that consists of one or more notifiers.

1. **Contact points**

   Define how your contacts are notified when an alert fires. We support a multitude of ChatOps tools to ensure the alerts come to your team.

## Features
<a name="v9-alerting-features"></a>

**One page for all alerts**

A single Grafana Alerting page consolidates both Grafana-managed alerts and alerts that reside in your Prometheus-compatible data source in one single place.

**Multi-dimensional alerts**

Alert rules can create multiple individual alert instances per alert rule, known as multi-dimensional alerts, giving you the power and flexibility to gain visibility into your entire system with just a single alert.

**Routing alerts**

Route each alert instance to a specific contact point based on labels you define. Notification policies are the set of rules for where, when, and how the alerts are routed to contact points.

**Silencing alerts**

Silences allow you to stop receiving persistent notifications from one or more alerting rules. You can also partially pause an alert based on certain criteria. Silences have their own dedicated section for better organization and visibility, so that you can scan your paused alert rules without cluttering the main alerting view.

**Mute timings**

With mute timings, you can specify a time interval when you don’t want new notifications to be generated or sent. You can also freeze alert notifications for recurring periods of time, such as during a maintenance period.

# Exploring alerting
<a name="v9-alerting-explore"></a>

Whether you’re starting or expanding your implementation of Grafana Alerting, learn more about the key concepts and available features that help you create, manage, and take action on your alerts and improve your team’s ability to resolve issues quickly.

First of all, let’s look at the different alert rule types that Grafana Alerting offers.

## Alert rule types
<a name="v9-alerting-explore-rule-types"></a>

**Grafana-managed rules**

Grafana-managed rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of our supported data sources. In addition to supporting multiple data sources, you can also add expressions to transform your data and set alert conditions. This is the only type of rule that allows alerting from multiple data sources in a single rule definition.

**Mimir and Loki rules**

To create Mimir or Loki alerts, you must have a compatible Prometheus or Loki data source. You can check whether your data source supports rule creation through Grafana by testing the data source and observing whether the ruler API is supported.

**Recording rules**

Recording rules are only available for compatible Prometheus or Loki data sources. A recording rule allows you to pre-compute frequently needed or computationally expensive expressions and save their result as a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query computationally expensive expressions repeatedly.

## Key concepts and features
<a name="v9-alerting-explore-features"></a>

The following table includes a list of key concepts, features and their definitions, designed to help you make the most of Grafana Alerting.


| Key concept or feature | Definition | 
| --- | --- | 
|  Data sources for Alerting  |  Select data sources you want to query and visualize metrics, logs and traces from.  | 
|  Provisioning for Alerting  |  Manage your alerting resources and provision them into your Grafana system using file provisioning or Terraform.  | 
|  Alertmanager  |  Manages the routing and grouping of alert instances.  | 
|  Alert rule  |  A set of evaluation criteria for when an alert rule should fire. An alert rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and the duration over which the condition is met. An alert rule can produce multiple alert instances.  | 
|  Alert instance  |  An alert instance is an instance of an alert rule. A single-dimensional alert rule has one alert instance. A multidimensional alert rule has one or more alert instances. A single alert rule that matches to multiple results, such as CPU against 10 VMs, is counted as multiple (in this case 10) alert instances. This number can vary over time. For example, an alert rule that monitors CPU usage for all VMs in a system has more alert instances as VMs are added. For more information about alert-instance quotas, see [Quota reached errors](v9-alerting-managerules-grafana.md#v9-alerting-rule-quota-reached).  | 
|  Alert group  |  The Alertmanager groups alert instances by default using the labels for the root notification policy. This controls de-duplication and groups of alert instances, which are sent to contact points.  | 
|  Contact point  |  Define how your contacts are notified when an alert rule fires.  | 
|  Message templating  |  Create reusable custom templates and use them in contact points.  | 
|  Notification policy  |  Set of rules for where, when, and how the alerts are grouped and routed to contact points.  | 
|  Labels and label matchers  |  Labels uniquely identify alert rules. They link alert rules to notification policies and silences, determining which policy should handle them and which alert rules should be silenced.  | 
|  Silences  |  Stop notifications from one or more alert instances. The difference between a silence and a mute timing is that a silence lasts only for a specified window of time, whereas a mute timing recurs on a schedule. Uses label matchers to silence alert instances.  | 
|  Mute timings  |  Specify a time interval when you don’t want new notifications to be generated or sent. You can also freeze alert notifications for recurring periods of time, such as during a maintenance period. Must be linked to an existing notification policy.  | 

# Data sources
<a name="v9-alerting-explore-datasources"></a>

There are a number of [data sources](AMG-data-sources-builtin.md) that are compatible with Grafana Alerting. Each data source is supported by a plugin. You can use one of the built-in data sources listed below.

These are the data sources that are compatible with and supported by Amazon Managed Grafana.
+ [Connect to an Alertmanager data source](data-source-alertmanager.md)
+ [Connect to an Amazon CloudWatch data source](using-amazon-cloudwatch-in-AMG.md)
+ [Connect to an Amazon OpenSearch Service data source](using-Amazon-OpenSearch-in-AMG.md)
+ [Connect to an AWS IoT SiteWise data source](using-iotsitewise-in-AMG.md)
+ [Connect to an AWS IoT TwinMaker data source](AMG-iot-twinmaker.md)
+ [Connect to Amazon Managed Service for Prometheus and open-source Prometheus data sources](prometheus-data-source.md)
+ [Connect to an Amazon Timestream data source](timestream-datasource.md)
+ [Connect to an Amazon Athena data source](AWS-Athena.md)
+ [Connect to an Amazon Redshift data source](AWS-Redshift.md)
+ [Connect to an AWS X-Ray data source](x-ray-data-source.md)
+ [Connect to an Azure Monitor data source](using-azure-monitor-in-AMG.md)
+ [Connect to a Google Cloud Monitoring data source](using-google-cloud-monitoring-in-grafana.md)
+ [Connect to a Graphite data source](using-graphite-in-AMG.md)
+ [Connect to an InfluxDB data source](using-influxdb-in-AMG.md)
+ [Connect to a Loki data source](using-loki-in-AMG.md)
+ [Connect to a Microsoft SQL Server data source](using-microsoft-sql-server-in-AMG.md)
+ [Connect to a MySQL data source](using-mysql-in-AMG.md)
+ [Connect to an OpenTSDB data source](using-opentsdb-in-AMG.md)
+ [Connect to a PostgreSQL data source](using-postgresql-in-AMG.md)
+ [Connect to a Jaeger data source](jaeger-data-source.md)
+ [Connect to a Zipkin data source](zipkin-data-source.md)
+ [Connect to a Tempo data source](tempo-data-source.md)
+ [Configure a TestData data source for testing](testdata-data-source.md)

# About alert rules
<a name="v9-alerting-explore-rules"></a>

An alerting rule is a set of evaluation criteria that determines whether an alert instance will fire. The rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met.

While queries and expressions select the data set to evaluate, a condition sets the threshold that the data must meet or exceed to create an alert.

An interval specifies how frequently an alerting rule is evaluated. Duration, when configured, indicates how long a condition must be met. Alert rules can also define alerting behavior in the absence of data.

**Topics**
+ [Alert rule types](v9-alerting-explore-rules-types.md)
+ [Alert instances](v9-alerting-rules-instances.md)
+ [Namespaces and groups](v9-alerting-rules-grouping.md)
+ [Notification templating](v9-alerting-rules-notification-templates.md)

# Alert rule types
<a name="v9-alerting-explore-rules-types"></a>

Grafana supports several alert rule types. The following sections explain their merits and drawbacks and help you choose the right alert type for your use case.

**Grafana-managed rules**

Grafana-managed rules are the most flexible alert rule type. They allow you to create alerts that can act on data from any of your existing data sources.

In addition to supporting any data source, you can add [expressions](v9-panels-query-xform-expressions.md) to transform your data and express alert conditions.

**Mimir, Loki, and Cortex rules**

To create Mimir, Loki, or Cortex alerts, you must have a compatible Prometheus data source. You can check whether your data source is compatible by testing the data source and checking in the details whether the ruler API is supported.

**Recording rules**

Recording rules are only available for compatible Prometheus data sources like Mimir, Loki and Cortex.

A recording rule allows you to save an expression’s result to a new set of time series. This is useful if you want to run alerts on aggregated data or if you have dashboards that query the same expression repeatedly.

Read more about [recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/) in Prometheus.
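
As a sketch, a Prometheus-style recording rule that pre-computes an average CPU utilization per instance might look like the following (the group name, rule name, and expression are illustrative, not taken from this guide):

```
groups:
  - name: cpu_recording
    rules:
      - record: instance:node_cpu_utilisation:rate1m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[1m]))
```

Dashboards and alert rules can then query the pre-computed series `instance:node_cpu_utilisation:rate1m` instead of re-evaluating the expensive expression each time.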

# Alert instances
<a name="v9-alerting-rules-instances"></a>

Grafana managed alerts support multi-dimensional alerting. Each alert rule can create multiple alert instances. This is powerful if you are observing multiple series in a single expression.

Consider the following PromQL expression:

```
sum by(cpu) (
  rate(node_cpu_seconds_total{mode!="idle"}[1m])
)
```

A rule using this expression creates as many alert instances as the number of CPUs observed during evaluation, allowing a single rule to report the status of each CPU.
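
For illustration, on a host where the query returns two CPU series, that single rule could yield two alert instances, each identified by its own `cpu` label (the rule name and label values here are hypothetical):

```
{alertname="HighCPUUsage", cpu="0"}   Alerting
{alertname="HighCPUUsage", cpu="1"}   Normal
```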

# Namespaces and groups
<a name="v9-alerting-rules-grouping"></a>

Alerts can be organized using folders for Grafana-managed rules, and namespaces and group names for Mimir, Loki, or Prometheus rules.

**Namespaces**

When creating Grafana-managed rules, the folder can be used to perform access control and grant or deny access to all rules within a specific folder.

**Groups**

All rules within a group are evaluated at the same **interval**.

Alert rules and recording rules within a group are always evaluated **sequentially**, meaning that rules are evaluated one at a time, in order of appearance.

**Tip**  
If you want rules to be evaluated concurrently and with different intervals, consider storing them in different groups.
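
For example, with a Prometheus-compatible ruler, two groups with different evaluation intervals could be sketched like this (group names, rule names, and expressions are illustrative):

```
groups:
  - name: fast
    interval: 1m
    rules:
      - alert: HighErrorRate
        expr: rate(http_errors_total[5m]) > 0.05
  - name: slow
    interval: 10m
    rules:
      - alert: DiskFillingUp
        expr: predict_linear(node_filesystem_free_bytes[1h], 4 * 3600) < 0
```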

# Notification templating
<a name="v9-alerting-rules-notification-templates"></a>

Notifications sent via contact points are built using notification templates. Grafana’s default templates are based on the [Go templating system](https://golang.org/pkg/text/template) where some fields are evaluated as text, while others are evaluated as HTML (which can affect escaping).

The default template [default_template.go](https://github.com/grafana/alerting/blob/main/templates/default_template.go) is a useful reference for custom templates.

Since most of the contact point fields can be templated, you can create reusable custom templates and use them in multiple contact points. To learn about custom notifications using templates, see [Customize notifications](v9-alerting-notifications.md).

**Nested templates**

You can embed templates within other templates.

For example, you can define a template fragment using the `define` keyword.

```
{{ define "mytemplate" }}
  {{ len .Alerts.Firing }} firing. {{ len .Alerts.Resolved }} resolved.
{{ end }}
```

You can then embed custom templates within this fragment using the `template` keyword. For example:

```
Alert summary:
{{ template "mytemplate" . }}
```

You can use any of the following built-in template options to embed custom templates.


| Name | Notes | 
| --- | --- | 
|  `default.title`  |  Displays high-level status information.  | 
|  `default.message`  |  Provides a formatted summary of firing and resolved alerts.  | 
|  `teams.default.message`  |  Similar to `default.message`, formatted for Microsoft Teams.  | 
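
As an illustration, a custom template can embed one of these built-in templates alongside the `mytemplate` fragment defined above (the name `custom.summary` is arbitrary):

```
{{ define "custom.summary" }}
{{ template "default.title" . }}
{{ template "mytemplate" . }}
{{ end }}
```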

**HTML in notification templates**

HTML in alerting notification templates is escaped. We do not support rendering of HTML in the resulting notification.

Some notifiers support alternative methods of changing the look and feel of the resulting notification. For example, Grafana installs the base template for alerting emails to `<grafana-install-dir>/public/emails/ng_alert_notification.html`. You can edit this file to change the appearance of all alerting emails.

# Alerting on numeric data
<a name="v9-alerting-explore-numeric"></a>

This topic describes how Grafana handles alerting on numeric rather than time series data.

With certain data sources, numeric data that is not time series can be alerted on directly, or passed into Server Side Expressions (SSE). This allows for more processing within the data source, resulting in greater efficiency, and can also simplify alert rules. When alerting on numeric data instead of time series data, there is no need to reduce each labeled time series into a single number; instead, labeled numbers are returned to Grafana.

**Tabular Data**

This feature is supported with backend data sources that query tabular data:
+ SQL data sources such as MySQL, Postgres, MSSQL, and Oracle.
+ The Azure Kusto based services: Azure Monitor (Logs), Azure Monitor (Azure Resource Graph), and Azure Data Explorer.

A query with Grafana managed alerts or SSE is considered numeric with these data sources if:
+ The “Format AS” option is set to “Table” in the data source query.
+ The table response returned to Grafana from the query includes only one numeric column (for example, int, double, or float), and optionally, additional string columns.

If there are string columns, those columns become labels. The name of the column becomes the label name, and the value for each row becomes the value of the corresponding label. If multiple rows are returned, each row must be uniquely identified by its labels.

**Example**

For a MySQL table called “DiskSpace”:


| Time | Host | Disk | PercentFree | 
| --- | --- | --- | --- | 
|  2021-June-7  |  web1  |  /etc  |  3  | 
|  2021-June-7  |  web2  |  /var  |  4  | 
|  2021-June-7  |  web3  |  /var  |  8  | 
|  ...  |  ...  |  ...  |  ...  | 

You can query the data filtering on time, but without returning the time series to Grafana. For example, an alert that would trigger per Host, Disk when there is less than 5% free space:

```
SELECT Host, Disk, CASE WHEN PercentFree < 5.0 THEN PercentFree ELSE 0 END AS PercentFree
FROM (
   SELECT
      Host,
      Disk,
      AVG(PercentFree) AS PercentFree
   FROM DiskSpace
   WHERE __timeFilter(Time)
   GROUP BY
      Host,
      Disk
) AS t
```

This query returns the following Table response to Grafana:


| Host | Disk | PercentFree | 
| --- | --- | --- | 
|  web1  |  /etc  |  3  | 
|  web2  |  /var  |  4  | 
|  web3  |  /var  |  0  | 

When this query is used as the **condition** in an alert rule, rows with a non-zero value will be alerting. As a result, three alert instances are produced:


| Labels | Status | 
| --- | --- | 
|  `{Host=web1,disk=/etc}`  |  Alerting  | 
|  `{Host=web2,disk=/var}`  |  Alerting  | 
|  `{Host=web3,disk=/var}`  |  Normal  | 

# Labels and annotations
<a name="v9-alerting-explore-labels"></a>

Labels and annotations contain information about an alert. Both have the same structure: a set of named values; however, their intended uses are different. An example of a label, or the equivalent annotation, might be `alertname="test"`.

The main difference between a label and an annotation is that labels are used to differentiate an alert from all other alerts, while annotations are used to add additional information to an existing alert.

For example, consider two high CPU alerts: one for `server1` and another for `server2`. In such an example, we might have a label called `server` where the first alert has the label `server="server1"` and the second alert has the label `server="server2"`. However, we might also want to add a description to each alert such as `"The CPU usage for server1 is above 75%."`, where `server1` and `75%` are replaced with the name and CPU usage of the server (please refer to the documentation on [Templating labels and annotations](v9-alerting-explore-labels-templating.md) for how to do this). This kind of description would be more suitable as an annotation.

## Labels
<a name="v9-alerting-explore-labels-labels"></a>

Labels contain information that identifies an alert. An example of a label might be `server=server1`. Each alert can have more than one label, and the complete set of labels for an alert is called its label set. It is this label set that identifies the alert.

For example, an alert might have the label set `{alertname="High CPU usage",server="server1"}` while another alert might have the label set `{alertname="High CPU usage",server="server2"}`. These are two separate alerts because although their `alertname` labels are the same, their `server` labels are different.

The label set for an alert is a combination of the labels from the datasource, custom labels from the alert rule, and a number of reserved labels such as `alertname`.

**Custom Labels**

Custom labels are additional labels from the alert rule. Like annotations, custom labels must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. Documentation on how to template custom labels can be found [here](v9-alerting-explore-labels-templating.md).

When using custom labels with templates, it is important to make sure that the label value does not change between consecutive evaluations of the alert rule as this will end up creating large numbers of distinct alerts. However, it is OK for the template to produce different label values for different alerts. For example, do not put the value of the query in a custom label as this will end up creating a new set of alerts each time the value changes. Instead use annotations.

It is also important to make sure that the label set for an alert does not have two or more labels with the same name. If a custom label has the same name as a label from the datasource then it will replace that label. However, should a custom label have the same name as a reserved label then the custom label will be omitted from the alert.

## Annotations
<a name="v9-alerting-explore-labels-annotations"></a>

Annotations are named pairs that add additional information to existing alerts. There are a number of suggested annotations in Grafana such as `description`, `summary`, `runbook_url`, `dashboardUId` and `panelId`. Like custom labels, annotations must have a name, and their value can contain a combination of text and template code that is evaluated when an alert is fired. If an annotation contains template code, the template is evaluated once when the alert is fired. It is not re-evaluated, even when the alert is resolved. Documentation on how to template annotations can be found [here](v9-alerting-explore-labels-templating.md).

# How label matching works
<a name="v9-alerting-explore-labels-matching"></a>

Use labels and label matchers to link alert rules to notification policies and silences. This provides a very flexible way to manage your alert instances, specify which policy should handle them, and choose which alerts to silence.

A label matcher consists of three distinct parts: the **label**, the **value**, and the **operator**.
+ The **Label** field is the name of the label to match. It must exactly match the label name.
+ The **Value** field matches against the corresponding value for the specified **Label** name. How it matches depends on the **Operator** value.
+ The **Operator** field is the operator to match against the label value. The available operators are:


| Operator | Description | 
| --- | --- | 
|  `=`  |  Select labels that are exactly equal to the value.  | 
|  `!=`  |  Select labels that are not equal to the value.  | 
|  `=~`  |  Select labels that regex-match the value.  | 
|  `!~`  |  Select labels that do not regex-match the value.  | 

If you are using multiple label matchers, they are combined using the AND logical operator. This means that all matchers must match in order to link a rule to a policy.

**Example scenario**

If you define the following set of labels for your alert:

```
{ foo=bar, baz=qux, id=12 }
```

then:
+ A label matcher defined as `foo=bar` matches this alert rule.
+ A label matcher defined as `foo!=bar` does *not* match this alert rule.
+ A label matcher defined as `id=~[0-9]+` matches this alert rule.
+ A label matcher defined as `baz!~[0-9]+` matches this alert rule.
+ Two label matchers defined as `foo=bar` and `id=~[0-9]+` match this alert rule.
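
The behavior above can be sketched in Python. This is not Grafana's implementation; the `matches` helper and the `(label, operator, value)` tuple format are invented for illustration, and regex matching is anchored in the Prometheus style:

```
import re

def matches(labels, matchers):
    """Return True only when every (label, operator, value) matcher matches (AND logic)."""
    for name, op, value in matchers:
        actual = labels.get(name, "")
        if op == "=":
            ok = actual == value
        elif op == "!=":
            ok = actual != value
        elif op == "=~":
            ok = re.fullmatch(value, actual) is not None  # anchored regex match
        elif op == "!~":
            ok = re.fullmatch(value, actual) is None
        else:
            raise ValueError(f"unknown operator: {op}")
        if not ok:
            return False
    return True

labels = {"foo": "bar", "baz": "qux", "id": "12"}
print(matches(labels, [("foo", "=", "bar")]))                          # True
print(matches(labels, [("foo", "!=", "bar")]))                         # False
print(matches(labels, [("foo", "=", "bar"), ("id", "=~", "[0-9]+")]))  # True
```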

# Labels in Grafana Alerting
<a name="v9-alerting-explore-labels-alerting"></a>

This topic explains why labels are a fundamental component of alerting.
+ The complete set of labels for an alert is what uniquely identifies an alert within Grafana alerts.
+ The Alertmanager uses labels to match alerts for silences and alert groups in notification policies.
+ The alerting UI shows labels for every alert instance generated during evaluation of that rule.
+ Contact points can access labels to dynamically generate notifications that contain information specific to the alert that is resulting in a notification.
+ You can add labels to an [alerting rule](v9-alerting-managerules.md). Labels can be configured manually, can use template functions, and can reference other labels. Labels added to an alerting rule take precedence in the event of a collision between labels (except in the case of Grafana reserved labels; see below for more information).

**External Alertmanager Compatibility**

Grafana’s built-in Alertmanager supports both Unicode label keys and values. If you are using an external Prometheus Alertmanager, label keys must be compatible with their [data model](https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels). This means that label keys must only contain **ASCII letters**, **numbers**, as well as **underscores** and match the regex `[a-zA-Z_][a-zA-Z0-9_]*`. Any invalid characters will be removed or replaced by the Grafana alerting engine before being sent to the external Alertmanager according to the following rules:
+ `Whitespace` will be removed.
+ `Invalid ASCII characters` will be replaced with `_`.
+ `All other characters` will be replaced with their lower-case hex representation. If this is the first character it will be prefixed with `_`.
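
For example, applying these rules to two hypothetical label keys (the keys are illustrative, and the exact output may vary by Grafana version):

```
response time  ->  responsetime    (whitespace removed)
error-rate     ->  error_rate      ("-" is not allowed by the regex, so it is replaced with "_")
```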

**Note**  
If multiple label keys are sanitized to the same value, the duplicates will have a short hash of the original label appended as a suffix.

**Grafana reserved labels**

**Note**  
Labels prefixed with `grafana_` are reserved by Grafana for special use. If you add a manually configured label beginning with `grafana_`, it can be overwritten in case of a collision.

Grafana reserved labels can be used in the same way as manually configured labels. The current list of available reserved labels is:


| Label | Description | 
| --- | --- | 
|  grafana_folder  |  Title of the folder containing the alert.  | 

# Templating labels and annotations
<a name="v9-alerting-explore-labels-templating"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

In Grafana, you template labels and annotations just as you would in Prometheus. If you have used Prometheus before, you should be familiar with the `$labels` and `$value` variables, which contain the labels and value of the alert. You can use the same variables in Grafana, even if the alert does not use a Prometheus data source. If you haven’t used Prometheus before, don’t worry; each of these variables, and how to template them, is explained in the rest of this page.

## Go’s templating language
<a name="v9-alerting-explore-labels-templating-go"></a>

Templates for labels and annotations are written in Go’s templating language, [text/template](https://pkg.go.dev/text/template).

**Opening and closing tags**

In text/template, templates start with `{{` and end with `}}`, irrespective of whether the template prints a variable or runs control structures such as if statements. This is different from other templating languages such as Jinja, where printing a variable uses `{{` and `}}` and control structures use `{%` and `%}`.

**Print**

To print the value of something, use `{{` and `}}`. You can print the result of a function or the value of a variable. For example, to print the `$labels` variable, you would write the following:

```
{{ $labels }}
```

**Iterate over labels**

To iterate over each label in `$labels` you can use a `range`. Here `$k` refers to the name and `$v` refers to the value of the current label. For example, if your query returned a label `instance=test` then `$k` would be `instance` and `$v` would be `test`.

```
{{ range $k, $v := $labels }}
{{ $k }}={{ $v }}
{{ end }}
```

## The labels, value and values variables
<a name="v9-alerting-explore-labels-templating-variables"></a>

**The labels variable**

The `$labels` variable contains the labels from the query. For example, a query that checks whether an instance is down might return an instance label with the name of the instance that is down. Suppose you have an alert rule that fires when one of your instances has been down for more than 5 minutes, and you want to add a summary to the alert that tells you which instance is down. With the `$labels` variable, you can create a summary that prints the instance label:

```
Instance {{ $labels.instance }} has been down for more than 5 minutes
```

**Labels with dots**

If the label you want to print contains a dot (full stop or period) in its name, using the same dot in the template will not work:

```
Instance {{ $labels.instance.name }} has been down for more than 5 minutes
```

This is because the template is attempting to use a nonexistent field called `name` in `$labels.instance`. You should instead use the `index` function, which prints the label `instance.name` from the `$labels` variable:

```
Instance {{ index $labels "instance.name" }} has been down for more than 5 minutes
```

**The value variable**

The `$value` variable works differently from its Prometheus counterpart. In Prometheus, `$value` is a floating point number containing the value of the expression, but in Grafana it is a string containing the labels and values of all Threshold, Reduce, and Math expressions, and Classic Conditions, for this alert rule. It does not contain the results of queries, as these can return anywhere from tens to tens of thousands of rows or metrics.

If you were to use the `$value` variable in the summary of an alert:

```
{{ $labels.service }} has over 5% of responses with 5xx errors: {{ $value }}
```

The summary might look something like the following:

```
api has over 5% of responses with 5xx errors: [ var='B' labels={service=api} value=6.789 ]
```

Here `var='B'` refers to the expression with RefID B. In Grafana, each query and expression in an alert rule is identified by a RefID. Similarly, `labels={service=api}` refers to the labels, and `value=6.789` refers to the value.

You might have observed that there is no RefID A. That is because in most alert rules the RefID A refers to a query, and since queries can return many rows or time series they are not included in `$value`.

**The values variable**

If the `$value` variable contains more information than you need, you can instead print the labels and value of individual expressions using `$values`. Unlike `$value`, the `$values` variable is a table of objects containing the labels and floating point values of each expression, indexed by their RefID.

If you were to print the value of the expression with RefID `B` in the summary of the alert:

```
{{ $labels.service }} has over 5% of responses with 5xx errors: {{ $values.B }}%
```

The summary will contain just the value:

```
api has over 5% of responses with 5xx errors: 6.789%
```

However, while `{{ $values.B }}` prints the number 6.789, it is actually a string, because you are printing the object that contains both the labels and value for RefID B, not the floating point value of B. To use the floating point value of RefID B, you must use the `Value` field from `$values.B`. If you were to humanize the floating point value in the summary of an alert:

```
{{ $labels.service }} has over 5% of responses with 5xx errors: {{ humanize $values.B.Value }}%
```

**No data, runtime errors and timeouts**

If the query in your alert rule returns no data, or fails because of a datasource error or timeout, then any Threshold, Reduce, or Math expressions that use that query will also return no data or an error. When this happens, these expressions will be absent from `$values`. It is good practice to check that a RefID is present before using it; otherwise, your template will break if your query returns no data or an error. You can do this using an if statement:

```
{{ if $values.B }}{{ $labels.service }} has over 5% of responses with 5xx errors: {{ humanizePercentage $values.B.Value }}{{ end }}
```

## Classic Conditions
<a name="v9-alerting-explore-labels-templating-classic"></a>

If the rule uses Classic Conditions instead of Threshold, Reduce and Math expressions, then the `$values` variable is indexed by both the Ref ID and position of the condition in the Classic Condition. For example, if you have a Classic Condition with RefID B containing two conditions, then `$values` will contain two conditions `B0` and `B1`.

```
The first condition is {{ $values.B0 }}, and the second condition is {{ $values.B1 }}
```

## Functions
<a name="v9-alerting-explore-labels-templating-functions"></a>

The following functions are also available when expanding labels and annotations:

**args**

The `args` function translates a list of objects to a map with keys `arg0`, `arg1`, and so on. This is intended to allow multiple arguments to be passed to templates.

**Example**

```
{{define "x"}}{{.arg0}} {{.arg1}}{{end}}{{template "x" (args 1 "2")}}
```

```
1 2
```

**externalURL**

The `externalURL` function returns the external URL of the Grafana server as configured in the ini file(s).

**Example**

```
{{ externalURL }}
```

```
https://example.com/grafana
```

**graphLink**

The `graphLink` function returns the path to the graphical view in [Explore in Grafana version 9](v9-explore.md) for the given expression and data source.

**Example**

```
{{ graphLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
```

```
/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":false,"range":true}]
```

**humanize**

The `humanize` function humanizes decimal numbers.

**Example**

```
{{ humanize 1000.0 }}
```

```
1k
```

**humanize1024**

The `humanize1024` function works similarly to `humanize`, but uses 1024 rather than 1000 as the base.

**Example**

```
{{ humanize1024 1024.0 }}
```

```
1ki
```

**humanizeDuration**

The `humanizeDuration` function humanizes a duration in seconds.

**Example**

```
{{ humanizeDuration 60.0 }}
```

```
1m 0s
```

**humanizePercentage**

The `humanizePercentage` function humanizes a ratio value to a percentage.

**Example**

```
{{ humanizePercentage 0.2 }}
```

```
20%
```

**humanizeTimestamp**

The `humanizeTimestamp` function humanizes a Unix timestamp.

**Example**

```
{{ humanizeTimestamp 1577836800.0 }}
```

```
2020-01-01 00:00:00 +0000 UTC
```

**match**

The `match` function matches the text against a regular expression pattern.

**Example**

```
{{ match "a.*" "abc" }}
```

```
true
```

**pathPrefix**

The `pathPrefix` function returns the path of the Grafana server as configured in the ini file(s).

**Example**

```
{{ pathPrefix }}
```

```
/grafana
```

**tableLink**

The `tableLink` function returns the path to the tabular view in [Explore in Grafana version 9](v9-explore.md) for the given expression and data source.

**Example**

```
{{ tableLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}
```

```
/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":true,"range":false}]
```

**title**

The `title` function capitalizes the first character of each word.

**Example**

```
{{ title "hello, world!" }}
```

```
Hello, World!
```

**toLower**

The `toLower` function returns all text in lowercase.

**Example**

```
{{ toLower "Hello, world!" }}
```

```
hello, world!
```

**toUpper**

The `toUpper` function returns all text in uppercase.

**Example**

```
{{ toUpper "Hello, world!" }}
```

```
HELLO, WORLD!
```

**reReplaceAll**

The `reReplaceAll` function replaces text matching the regular expression.

**Example**

```
{{ reReplaceAll "localhost:(.*)" "example.com:$1" "localhost:8080" }}
```

```
example.com:8080
```

# State and health of alerting rules
<a name="v9-alerting-explore-state"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

The state and health of alerting rules help you understand several key status indicators about your alerts.

There are three key components: *alert rule state*, *alert instance state*, and *alert rule health*. Although related, each component conveys subtly different information.

**Alert rule state**

An alert rule can be in either of the following states:


| State | Description | 
| --- | --- | 
|  Normal  |  None of the time series returned by the evaluation engine is in a `Pending` or `Firing` state.  | 
|  Pending  |  At least one time series returned by the evaluation engine is `Pending`.  | 
|  Firing  |  At least one time series returned by the evaluation engine is `Firing`.  | 

**Note**  
Alerts transition first to `Pending` and then to `Firing`, so it takes at least two evaluation cycles before an alert fires.

**Alert instance state**

An alert instance can be in either of the following states:


| State | Description | 
| --- | --- | 
|  Normal  |  The state of an alert that is neither firing nor pending; everything is working correctly.  | 
|  Pending  |  The state of an alert that has been active for less than the configured threshold duration.  | 
|  Alerting  |  The state of an alert that has been active for longer than the configured threshold duration.  | 
|  NoData  |  No data has been received for the configured time window.  | 
|  Error  |  An error occurred when attempting to evaluate the alerting rule.  | 

**Alert rule health**

An alert rule can have one of the following health statuses:


| State | Description | 
| --- | --- | 
|  Ok  |  No error when evaluating an alerting rule.  | 
|  Error  |  An error occurred when evaluating an alerting rule.  | 
|  NoData  |  The absence of data in at least one time series returned during a rule evaluation.  | 

**Special alerts for `NoData` and `Error`**

When evaluation of an alerting rule produces state `NoData` or `Error`, Grafana Alerting will generate alert instances that have the following additional labels:


| Label | Description | 
| --- | --- | 
|  alertname  |  Either `DatasourceNoData` or `DatasourceError` depending on the state.  | 
|  datasource_uid  |  The UID of the data source that caused the state.  | 

You can handle these alerts the same way as regular alerts, by adding a silence, routing them to a contact point, and so on.

# Contact points
<a name="v9-alerting-explore-contacts"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Use contact points to define how your contacts are notified when an alert rule fires. A contact point can have one or more contact point types, for example, email, Slack, webhook, and so on. When an alert rule fires, a notification is sent to all contact point types listed for a contact point. Contact points can be configured for the Grafana Alertmanager as well as external alertmanagers.

You can also use notification templating to customize notification messages for contact point types.

**Supported contact point types**

The following table lists the contact point types supported by Grafana.


| Name | Type | 
| --- | --- | 
|  Amazon SNS  |  `sns`  | 
|  OpsGenie  |  `opsgenie`  | 
|  Pager Duty  |  `pagerduty`  | 
|  Slack  |  `slack`  | 
|  VictorOps  |  `victorops`  | 

For more information about contact points, see [Working with contact points](v9-alerting-contact-points.md) and [Customize notifications](v9-alerting-notifications.md).

# Notifications
<a name="v9-alerting-explore-notifications"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana uses Alertmanagers to send notifications for firing and resolved alerts. Grafana has its own Alertmanager, referred to as “Grafana” in the user interface, and also supports sending notifications from other Alertmanagers, such as the [Prometheus Alertmanager](https://prometheus.io/docs/alerting/latest/alertmanager/). The Grafana Alertmanager uses notification policies and contact points to configure how and where a notification is sent; how often a notification should be sent; and whether alerts should all be sent in the same notification, sent in grouped notifications based on a set of labels, or sent as separate notifications.

## Notification policies
<a name="v9-alerting-explore-notifications-policies"></a>

Notification policies control when and where notifications are sent. A notification policy can choose to send all alerts together in the same notification, send alerts in grouped notifications based on a set of labels, or send alerts as separate notifications. You can configure each notification policy to control how often notifications are sent, and to have one or more mute timings that inhibit notifications at certain times of the day and on certain days of the week.

Notification policies are organized in a tree structure where at the root of the tree there is a notification policy called the root policy. There can be only one root policy and the root policy cannot be deleted.

Specific routing policies are children of the root policy and can be used to match either all alerts or a subset of alerts based on a set of matching labels. A notification policy matches an alert when its matching labels match the labels in the alert.

A specific routing policy can have its own child policies, called nested policies, which allow for additional matching of alerts. An example of a specific routing policy could be sending infrastructure alerts to the Ops team, while a child policy might send high priority alerts to Pager Duty and low priority alerts to Slack.

All alerts, irrespective of their labels, match the root policy. However, when the root policy receives an alert, it looks at each specific routing policy and sends the alert to the first specific routing policy that matches the alert. If the specific routing policy has further child policies, it attempts to match the alert against one of its nested policies. If no nested policies match the alert, then the specific routing policy is the matching policy. If there are no specific routing policies, or no specific routing policies match the alert, then the root policy is the matching policy.
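
In Prometheus Alertmanager configuration terms, a routing tree like the one described above might be sketched as follows; the receiver names and labels are illustrative, not part of any real configuration:

```
route:
  receiver: default-contact-point      # root policy: matches every alert
  routes:
    - matchers:
        - team="ops"                   # specific routing policy
      receiver: ops-contact-point
      routes:
        - matchers:
            - priority="high"          # nested policy
          receiver: ops-pagerduty
        - matchers:
            - priority="low"
          receiver: ops-slack
```

An alert labeled `team=ops, priority=high` would be routed to `ops-pagerduty`; an alert with neither label would fall back to the root policy's receiver.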

## Contact points
<a name="v9-alerting-explore-notifications-contacts"></a>

Contact points contain the configuration for sending notifications. A contact point is a list of integrations, each of which sends a notification to a particular email address, service, or URL. Contact points can have multiple integrations of the same kind, or a combination of integrations of different kinds. For example, a contact point could contain a Pager Duty integration; a Pager Duty and Slack integration; or a Pager Duty integration, a Slack integration, and two Amazon SNS integrations. You can also configure a contact point with no integrations, in which case no notifications are sent.

A contact point cannot send notifications until it has been added to a notification policy. A notification policy can only send alerts to one contact point, but a contact point can be added to a number of notification policies at the same time. When an alert matches a notification policy, the alert is sent to the contact point in that notification policy, which then sends a notification to each integration in its configuration.

**Note**  
For information about supported integrations for contact points, see [Contact points](v9-alerting-explore-contacts.md).

## Templating notifications
<a name="v9-alerting-explore-notifications-templating"></a>

You can customize notifications with templates. For example, templates can be used to change the title and message of notifications sent to Slack.

Templates are not limited to an individual integration or contact point, but instead can be used in a number of integrations in the same contact point and even integrations across different contact points. For example, a Grafana user can create a template called `custom_subject_or_title` and use it for both templating subjects in Pager Duty and titles of Slack messages without having to create two separate templates.

All notification templates are written in [Go’s templating language](https://pkg.go.dev/text/template), and are found on the Contact points tab on the Alerting page.
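
For example, a shared template such as the `custom_subject_or_title` template mentioned above could be defined once and then referenced from any title or subject field. This is a sketch; the template name and message wording are illustrative:

```
{{ define "custom_subject_or_title" }}[{{ .Status }}] {{ len .Alerts.Firing }} firing alert(s){{ end }}
```

A Slack title field or a Pager Duty subject field can then reference it with `{{ template "custom_subject_or_title" . }}`.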

## Silences
<a name="v9-alerting-explore-notifications-silences"></a>

You can use silences to mute notifications from one or more firing rules. Silences do not stop alerts from firing or being resolved, nor do they hide firing alerts in the user interface. A silence lasts for its configured duration, which can be set in minutes, hours, days, months, or years.

# Set up Alerting
<a name="v9-alerting-setup"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Configure the features and integrations that you need to create and manage your alerts.

**Topics**
+ [Add an external Alertmanager](v9-alerting-setup-alertmanager.md)
+ [Provisioning Grafana Alerting resources](v9-alerting-setup-provision.md)

# Add an external Alertmanager
<a name="v9-alerting-setup-alertmanager"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Set up Grafana to use an external Alertmanager as a single Alertmanager to receive all of your alerts. This external Alertmanager can then be configured and administered from within Grafana itself.

Once you have added the Alertmanager, you can use the Grafana Alerting UI to manage silences, contact points, and notification policies. A dropdown option on these pages allows you to switch between Alertmanagers.

**Note**  
Starting with Grafana 9.2, the URL configuration of external alertmanagers from the Admin tab on the Alerting page is deprecated. It will be removed in a future release.

External alertmanagers should now be configured as data sources using Grafana Configuration from the main Grafana navigation menu. This enables you to manage the contact points and notification policies of external alertmanagers from within Grafana and also encrypts HTTP basic authentication credentials that were previously visible when configuring external alertmanagers by URL.

To add an external Alertmanager, complete the following steps.

1. Click **Configuration** and then **Data sources**.

1. Search for **Alertmanager**.

1. Choose your **Implementation** and fill out the fields on the page, as required.

   If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.
**Note**  
Prometheus, Grafana Mimir, and Cortex implementations of Alertmanager are supported. For Prometheus, contact points and notification policies are read-only in the Grafana Alerting UI.

1. Click **Save & test**.
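
If you provision the Alertmanager data source from a file rather than through the console, the `handleGrafanaManagedAlerts` flag sits under `jsonData`. The following is a minimal sketch; the data source name and URL are placeholders:

```
apiVersion: 1
datasources:
  - name: My Alertmanager
    type: alertmanager
    access: proxy
    url: http://my-alertmanager:9093
    jsonData:
      # Which Alertmanager implementation this is: prometheus, mimir, or cortex
      implementation: prometheus
      # Send Grafana-managed alerts to this Alertmanager
      handleGrafanaManagedAlerts: true
```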

# Provisioning Grafana Alerting resources
<a name="v9-alerting-setup-provision"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Alerting infrastructure is often complex, with many pieces of the pipeline that often live in different places. Scaling this across multiple teams and organizations is an especially challenging task. Grafana Alerting provisioning makes this process easier by enabling you to create, manage, and maintain your alerting data in a way that best suits your organization.

There are two options to choose from:

1. Provision your alerting resources using the Alerting Provisioning HTTP API.
**Note**  
Typically, you cannot edit API-provisioned alert rules from the Grafana UI.  
To enable editing, add the `X-Disable-Provenance` header to the following requests when creating or editing your alert rules in the API:  

   ```
   POST /api/v1/provisioning/alert-rules
   PUT /api/v1/provisioning/alert-rules/{UID}
   ```

1. Provision your alerting resources using Terraform.
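
For the HTTP API option, a request that sets the `X-Disable-Provenance` header might look like the following sketch; the Grafana URL, API token, and payload file are placeholders:

```
curl -X POST "https://<your-grafana-url>/api/v1/provisioning/alert-rules" \
  -H "Authorization: Bearer <api-token>" \
  -H "Content-Type: application/json" \
  -H "X-Disable-Provenance: true" \
  -d @alert-rule.json
```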

**Note**  
Currently, provisioning for Grafana Alerting supports alert rules, contact points, mute timings, and templates. Provisioned alerting resources using file provisioning or Terraform can only be edited in the source that created them and not from within Grafana or any other source. For example, if you provision your alerting resources using files from disk, you cannot edit the data in Terraform or from within Grafana.

**Topics**
+ [Create and manage alerting resources using Terraform](v9-alerting-setup-provision-terraform.md)
+ [Viewing provisioned alerting resources in Grafana](v9-alerting-setup-provision-view.md)

# Create and manage alerting resources using Terraform
<a name="v9-alerting-setup-provision-terraform"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Use Terraform’s Grafana Provider to manage your alerting resources and provision them into your Grafana system. Terraform provider support for Grafana Alerting makes it easy to create, manage, and maintain your entire Grafana Alerting stack as code.

For more information on managing your alerting resources using Terraform, refer to the [Grafana Provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs) documentation in the Terraform documentation.

Complete the following tasks to create and manage your alerting resources using Terraform.

1. Create an API key for provisioning.

1. Configure the Terraform provider.

1. Define your alerting resources in Terraform.

1. Run `terraform apply` to provision your alerting resources.

## Prerequisites
<a name="v9-alerting-setup-provision-tf-prerequisites"></a>
+ Ensure you have the grafana/grafana [Terraform provider](https://registry.terraform.io/providers/grafana/grafana/1.28.0) 1.27.0 or higher.
+ Ensure you are using Grafana 9.1 or higher. If you created your Amazon Managed Grafana instance with Grafana version 9, this will be true.

## Create an API key for provisioning
<a name="v9-alerting-setup-provision-tf-apikey"></a>

You can [create a normal Grafana API key](Using-Grafana-APIs.md) to authenticate Terraform with Grafana. Most existing tooling using API keys should automatically work with the new Grafana Alerting support. For information specifically about creating keys for use with Terraform, see [Using Terraform for Amazon Managed Grafana automation](https://aws-observability.github.io/observability-best-practices/recipes/recipes/amg-automation-tf/).

**To create an API key for provisioning**

1. Create a new service account for your CI pipeline.

1. Assign the role “Access the alert rules Provisioning API.”

1. Create a new service account token.

1. Name and save the token for use in Terraform.

Alternatively, you can use basic authentication. To view all the supported authentication formats, see [Grafana authentication](https://registry.terraform.io/providers/grafana/grafana/latest/docs#authentication) in the Terraform documentation.

## Configure the Terraform provider
<a name="v9-alerting-setup-provision-tf-configure"></a>

Grafana Alerting support is included as part of the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs).

The following is an example you can use to configure the Terraform provider.

```
terraform {
    required_providers {
        grafana = {
            source = "grafana/grafana"
            version = ">= 1.28.2"
        }
    }
}

provider "grafana" {
    url = <YOUR_GRAFANA_URL>
    auth = <YOUR_GRAFANA_API_KEY>
}
```

## Provision contact points and templates
<a name="v9-alerting-setup-provision-tf-contacts"></a>

Contact points connect an alerting stack to the outside world. They tell Grafana how to connect to your external systems and where to deliver notifications. There are over fifteen different [integrations](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/contact_point#optional) to choose from. This example uses a Slack contact point.

**To provision contact points and templates**

1. Copy this code block into a .tf file on your local machine. Replace *<slack-webhook-url>* with your Slack webhook URL (or the corresponding configuration for another contact point type).

   This example creates a contact point that sends alert notifications to Slack.

   ```
   resource "grafana_contact_point" "my_slack_contact_point" {
       name = "Send to My Slack Channel"
   
       slack {
           url = <slack-webhook-url>
           text = <<EOT
   {{ len .Alerts.Firing }} alerts are firing!
   
   Alert summaries:
   {{ range .Alerts.Firing }}
   {{ template "Alert Instance Template" . }}
   {{ end }}
   EOT
       }
   }
   ```

1. Enter the text for your notification in the `text` field.

   The `text` field supports [Go-style templating](https://pkg.go.dev/text/template). This enables you to manage your Grafana Alerting notification templates directly in Terraform.

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your contact point.

   You cannot edit resources provisioned via Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the contact point works correctly.

**Note**  
You can reuse the same templates across many contact points. In the example above, a shared template is embedded using the statement `{{ template "Alert Instance Template" . }}`  
This fragment can then be managed separately in Terraform:  

```
resource "grafana_message_template" "my_alert_template" {
    name = "Alert Instance Template"

    template = <<EOT
{{ define "Alert Instance Template" }}
Firing: {{ .Labels.alertname }}
Silence: {{ .SilenceURL }}
{{ end }}
EOT
}
```

## Provision notification policies and routing
<a name="v9-alerting-setup-provision-tf-notifications"></a>

Notification policies tell Grafana how to route alert instances, as opposed to where. They connect firing alerts to your previously defined contact points using a system of labels and matchers.

**To provision notification policies and routing**

1. Copy this code block into a .tf file on your local machine.

   In this example, the alerts are grouped by `alertname`, which means that any notifications coming from alerts that share the same name are grouped into the same Slack message.

   If you want to route specific notifications differently, you can add sub-policies. Sub-policies apply routing to different alerts based on label matching. In this example, a mute timing is applied to all alerts with the label `a=b`.

   ```
   resource "grafana_notification_policy" "my_policy" {
       group_by = ["alertname"]
       contact_point = grafana_contact_point.my_slack_contact_point.name
   
       group_wait = "45s"
       group_interval = "6m"
       repeat_interval = "3h"
   
       policy {
           matcher {
               label = "a"
               match = "="
               value = "b"
           }
           group_by = ["..."]
           contact_point = grafana_contact_point.a_different_contact_point.name
           mute_timings = [grafana_mute_timing.my_mute_timing.name]
   
           policy {
               matcher {
                   label = "sublabel"
                   match = "="
                   value = "subvalue"
               }
               contact_point = grafana_contact_point.a_third_contact_point.name
               group_by = ["..."]
           }
       }
   }
   ```

1. In the `mute_timings` field, link a mute timing to your notification policy.

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your notification policy.
**Note**  
You cannot edit resources provisioned from Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the notification policy works correctly.

## Provision mute timings
<a name="v9-alerting-setup-provision-tf-mutetiming"></a>

Mute timings provide the ability to mute alert notifications for defined time periods.

**To provision mute timings**

1. Copy this code block into a .tf file on your local machine.

   In this example, alert notifications are muted between 04:56 and 14:17 on Saturdays, Sundays, and Tuesday through Thursday, during December and January through March, in the years 2025 through 2027.

   ```
   resource "grafana_mute_timing" "my_mute_timing" {
       name = "My Mute Timing"
   
       intervals {
           times {
             start = "04:56"
             end = "14:17"
           }
           weekdays = ["saturday", "sunday", "tuesday:thursday"]
           months = ["january:march", "12"]
           years = ["2025:2027"]
       }
   }
   ```

1. Run the command `terraform apply`.

1. Go to the Grafana UI and check the details of your mute timing.

1. Reference your newly created mute timing in a notification policy using the `mute_timings` field. This will apply your mute timing to some or all of your notifications.
**Note**  
You cannot edit resources provisioned from Terraform from the UI. This ensures that your alerting stack always stays in sync with your code.

1. Click **Test** to verify that the mute timing is working correctly.
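
After provisioning, a mute timing is applied by referencing it from the `mute_timings` field of a notification policy. The following is a minimal sketch, not a complete configuration; the policy structure follows the earlier example, and the `team=backend` matcher is a placeholder for whatever labels you route on.

```
resource "grafana_notification_policy" "my_policy" {
    group_by      = ["alertname"]
    contact_point = grafana_contact_point.my_slack_contact_point.name

    policy {
        matcher {
            label = "team"
            match = "="
            value = "backend"
        }
        contact_point = grafana_contact_point.my_slack_contact_point.name
        // Notifications that match this sub-policy are muted
        // whenever "My Mute Timing" is active.
        mute_timings = [grafana_mute_timing.my_mute_timing.name]
    }
}
```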

## Provision alert rules
<a name="v9-alerting-setup-provision-tf-rules"></a>

[Alert rules](v9-alerting-managerules.md) enable you to alert against any Grafana data source. This can be a data source that you already have configured, or you can [define your data sources in Terraform](https://registry.terraform.io/providers/grafana/grafana/latest/docs/resources/data_source) alongside your alert rules.

**To provision alert rules**

1. Create a data source to query and a folder to store your rules in.

   In this example, the TestData data source is used. For more information, see [Configure a TestData data source for testing](testdata-data-source.md).

   Alerts can be defined against any backend data source in Grafana.

   ```
   resource "grafana_data_source" "testdata_datasource" {
       name = "TestData"
       type = "testdata"
   }
   
   resource "grafana_folder" "rule_folder" {
       title = "My Rule Folder"
   }
   ```

1. Define an alert rule.

   For more information on alert rules, refer to [how to create Grafana-managed alerts](https://grafana.com/blog/2022/08/01/grafana-alerting-video-how-to-create-alerts-in-grafana-9/).

1. Create a rule group containing one or more rules.

   In this example, the `grafana_rule_group` resource is used.

   ```
   resource "grafana_rule_group" "my_rule_group" {
       name = "My Alert Rules"
       folder_uid = grafana_folder.rule_folder.uid
       interval_seconds = 60
       org_id = 1
   
       rule {
           name = "My Random Walk Alert"
           condition = "C"
           for = "0s"
   
           // Query the datasource.
           data {
               ref_id = "A"
               relative_time_range {
                   from = 600
                   to = 0
               }
               datasource_uid = grafana_data_source.testdata_datasource.uid
               // `model` is a JSON blob that sends datasource-specific data.
               // It's different for every datasource. The alert's query is defined here.
               model = jsonencode({
                   intervalMs = 1000
                   maxDataPoints = 43200
                   refId = "A"
               })
           }
   
           // The query was configured to obtain data from the last 60 seconds. Let's alert on the average value of that series using a Reduce stage.
           data {
               datasource_uid = "__expr__"
               // You can also create a rule in the UI, then GET that rule to obtain the JSON.
               // This can be helpful when using more complex reduce expressions.
               model = <<EOT
   {"conditions":[{"evaluator":{"params":[0,0],"type":"gt"},"operator":{"type":"and"},"query":{"params":["A"]},"reducer":{"params":[],"type":"last"},"type":"avg"}],"datasource":{"name":"Expression","type":"__expr__","uid":"__expr__"},"expression":"A","hide":false,"intervalMs":1000,"maxDataPoints":43200,"reducer":"last","refId":"B","type":"reduce"}
   EOT
               ref_id = "B"
               relative_time_range {
                   from = 0
                   to = 0
               }
           }
   
           // Now, let's use a math expression as our threshold.
           // We want to alert when the value of stage "B" above exceeds 70.
           data {
               datasource_uid = "__expr__"
               ref_id = "C"
               relative_time_range {
                   from = 0
                   to = 0
               }
               model = jsonencode({
                   expression = "$B > 70"
                   type = "math"
                   refId = "C"
               })
           }
       }
   }
   ```

1. Go to the Grafana UI and check your alert rule.

   You can see whether the alert rule is firing. You can also see a visualization of each of the alert rule’s query stages.

   When the alert fires, Grafana routes a notification through the policy you defined.

   For example, if you chose Slack as a contact point, Grafana’s embedded [Alertmanager](https://github.com/prometheus/alertmanager) automatically posts a message to Slack.

# Viewing provisioned alerting resources in Grafana
<a name="v9-alerting-setup-provision-view"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

You can verify that your alerting resources were created in Grafana.

**To view your provisioned resources in Grafana**

1. Open your Grafana instance.

1. Navigate to Alerting.

1. Click an alerting resource folder, for example, Alert rules.

   Provisioned resources are labeled **Provisioned**, so that it is clear that they were not created manually.

**Note**  
You cannot edit provisioned resources from Grafana. You can only change the resource properties by changing the provisioning file and restarting Grafana or carrying out a hot reload. This prevents changes being made to the resource that would be overwritten if a file is provisioned again or a hot reload is carried out.

# Migrating classic dashboard alerts to Grafana alerting
<a name="v9-alerting-use-grafana-alerts"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Workspaces that choose not to use Grafana alerting use classic dashboard alerting. To switch to the new Grafana alerting, you must opt in to the feature.

You can configure your Amazon Managed Grafana instance to use Grafana alerting using the AWS Management Console, the AWS CLI, or the Amazon Managed Grafana API. For details about how to configure Amazon Managed Grafana, including turning Grafana alerting on or off, see [Configure an Amazon Managed Grafana workspace](AMG-configure-workspace.md).

**Note**  
When using Grafana alerting, alert rules defined in Grafana, rather than in Prometheus, send multiple notifications to your contact point. If you are using native Grafana alerts, we recommend that you stay on classic dashboard alerting and not enable the new Grafana alerting feature. If you want to view alerts defined in your Prometheus data source, we recommend that you enable Grafana alerting, which sends only a single notification for alerts created in Prometheus Alertmanager.  
This limitation has been removed in Amazon Managed Grafana workspaces that support Grafana v10.4 and later.

## Migrating to Grafana alerting system
<a name="v9-alerting-use-grafana-alerts-opt-in"></a>

When Grafana alerting is turned on, existing classic dashboard alerts are migrated in a format compatible with Grafana alerting. In the Alerting page of your Grafana instance, you can view the migrated alerts alongside new alerts. With Grafana alerting, your Grafana-managed alert rules send multiple notifications rather than a single alert when they are matched.

Read and write access to classic dashboard alerts and Grafana alerts are governed by the permissions of the folders storing them. During migration, classic dashboard alert permissions are matched to the new rules permissions as follows:
+ If the original alert's dashboard has permissions, migration creates a folder named with this format `Migrated {"dashboardUid": "UID", "panelId": 1, "alertId": 1}` to match permissions of the original dashboard (including the inherited permissions from the folder).
+ If there are no dashboard permissions and the dashboard is under a folder, then the rule is linked to this folder and inherits its permissions.
+ If there are no dashboard permissions and the dashboard is under the General folder, then the rule is linked to the General Alerting folder, and the rule inherits the default permissions.

**Note**  
Since there is no `Keep Last State` option for `NoData` in Grafana alerting, this option becomes `NoData` during the classic rules migration. Option `Keep Last State` for `Error` handling is migrated to a new option `Error`. To match the behavior of the `Keep Last State`, in both cases, during the migration Amazon Managed Grafana automatically creates a silence for each alert rule with a duration of one year.

Notification channels are migrated to an Alertmanager configuration with the appropriate routes and receivers. Default notification channels are added as contact points to the default route. Notification channels not associated with any Dashboard alert go to the `autogen-unlinked-channel-recv` route.

### Limitations
<a name="v9-alerting-use-grafana-alerts-limitations"></a>
+ The Grafana alerting system can retrieve rules from all available Prometheus, Loki, and Alertmanager data sources. It might not be able to fetch alerting rules from other supported data sources.
+ Migrating back and forth between Grafana alerts and the classic dashboard alerting can result in data loss for features supported in one system, but not the other.
**Note**  
If you migrate back to classic dashboard alerting, you lose all changes to the alerting configuration that were made while Grafana alerting was enabled, including any new alert rules that were created.

# Manage your alert rules
<a name="v9-alerting-managerules"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

An alert rule is a set of evaluation criteria that determines whether an alert will fire. The alert rule consists of one or more queries and expressions, a condition, the frequency of evaluation, and optionally, the duration over which the condition is met.

While queries and expressions select the data set to evaluate, a condition sets the threshold that an alert must meet or exceed to create an alert. An interval specifies how frequently an alert rule is evaluated. Duration, when configured, indicates how long a condition must be met. Alert rules can also define alerting behavior in the absence of data.

**Note**  
Grafana managed alert rules can only be edited or deleted by users with Edit permissions for the folder storing the rules.  
Alert rules for an external Grafana Mimir or Loki instance can be edited or deleted by users with Editor or Admin roles.

**Topics**
+ [Creating Grafana managed alert rules](v9-alerting-managerules-grafana.md)
+ [Creating Grafana Mimir or Loki managed alert rules](v9-alerting-managerules-mimir-loki.md)
+ [Creating Grafana Mimir or Loki managed recording rules](v9-alerting-managerules-mimir-loki-recording.md)
+ [Grafana Mimir or Loki rule groups and namespaces](v9-alerting-managerules-mimir-loki-groups.md)
+ [View and edit alerting rules](v9-alerting-managerules-view-edit.md)

# Creating Grafana managed alert rules
<a name="v9-alerting-managerules-grafana"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Grafana allows you to create alert rules that query one or more data sources, reduce or transform the results, and compare them to each other or to fixed thresholds. When an alert rule fires, Grafana sends notifications to the contact point.

**To add a Grafana managed rule**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page listing existing alerts.

1. Choose **New alert rule**.

1. In **Step 1**, add the rule name, type and storage location, as follows:
   + In **Rule name**, add a descriptive name. This name is displayed in the alert rules list. It is also the `alertname` label for every alert instance that is created from this rule.
   + From the **Rule type** dropdown, select **Grafana managed alert**.
   + From the **Folder** dropdown, select the folder where you want to store the rule. If you do not select a folder, the rule is stored in the `General` folder. To create a folder, select the dropdown and enter a new folder name.

1. In **Step 2**, add the queries and expressions to evaluate.
   + Keep the default name or hover over and choose the edit icon to change the name.
   + For queries, select a data source from the dropdown.
   + Add one or more [queries or expressions](v9-panels-query-xform-expressions.md).
   + For each expression, select either **Classic condition** to create a single alert rule, or choose from the **Math**, **Reduce**, or **Resample** options to generate separate alerts for each series. For details on these options, see [Single and multidimensional rules](#v9-alerting-single-multi-rule).
   + Choose **Run queries** to verify that the query is successful.

1. In **Step 3**, add conditions.
   + From the **Condition** dropdown, select the query or expression to initiate the alert rule.
   + For **Evaluate every**, specify the frequency of evaluation. The interval must be a multiple of 10 seconds, for example `1m` or `30s`.
   + For **Evaluate for**, specify the duration for which the condition must be true before an alert is initiated.
**Note**  
After a condition is breached, the alert goes into `Pending` state. If the condition remains breached for the duration specified, the alert transitions to the `Firing` state. If it is no longer met, it reverts to the `Normal` state.
   + In **Configure no data and error handling**, configure alerting behavior in the absence of data. Use the guidelines in [Handling no data or error cases](#v9-alerting-rule-no-data-error).
   + Choose **Preview alerts** to check the result of running the query at this moment. Preview excludes no data and error handling conditions.

1. In **Step 4**, add additional metadata associated with the rule.
   + Add a description and summary to customize alert messages. Use the guidelines in [Labels and annotations](v9-alerting-explore-labels.md).
   + Add Runbook URL, panel, dashboard, and alert IDs.
   + Add custom labels.

1. Choose **Save** to save the rule or **Save and exit** to save the rule and go back to the **Alerting** page.

After you have created your rule, you can create a notification for your rule. For more information about notifications, see [Manage your alert notifications](v9-alerting-managenotifications.md).

## Single and multidimensional rules
<a name="v9-alerting-single-multi-rule"></a>

For Grafana managed alert rules, you can create a rule with a classic condition or you can create a multidimensional rule.

**Single dimensional rule (classic condition)**

Use a classic condition expression to create a rule that initiates a single alert when its condition is met. For a query that returns multiple series, Grafana does not track the alert state of each series. As a result, Grafana sends only a single alert even when alert conditions are met for multiple series.

For more information about how to format expressions, see [Expressions](https://grafana.com/docs/grafana/next/panels/query-a-data-source/) in the *Grafana documentation*.

**Multidimensional rule**

To generate a separate alert instance for each series returned in the query, create a multidimensional rule.

**Note**  
Each alert instance generated by a multidimensional rule counts toward your total quota of alerts. Rules are not evaluated when you reach your quota of alerts. For more information about quotas for multidimensional rules, see [Quota reached errors](#v9-alerting-rule-quota-reached).

To create multiple instances from a single rule, use `Math`, `Reduce`, or `Resample` expressions to create a multidimensional rule. For example, you can:
+ Add a `Reduce` expression for each query to aggregate values in the selected time range into a single value. (Not needed for [rules using numeric data](v9-alerting-explore-numeric.md)).
+ Add a `Math` expression with the condition for the rule. This is not needed if a query or reduce expression already returns 0 when the rule should not initiate an alert, or a positive number when it should.

  Some examples:
  + `$B > 70` initiates an alert when the value of the B query or expression is greater than 70.
  + `$B < $C * 100` initiates an alert when the value of B is less than the value of C multiplied by 100. If the queries being compared return multiple series, series from different queries are matched if they have the same labels or one set of labels is a subset of the other.

**Note**  
Grafana does not support alert queries with template variables. More information is available at the community page [Template variables are not supported in alert queries while setting up Alert](https://community.grafana.com/t/template-variables-are-not-supported-in-alert-queries-while-setting-up-alert/2514).



**Performance considerations for multidimensional rules**

Each alert instance counts toward the alert quota. Multidimensional rules that create more instances than can be accommodated within the alert quota are not evaluated and return a quota error. For more information, see [Quota reached errors](#v9-alerting-rule-quota-reached).

Multidimensional alerts can have a high impact on the performance of your Grafana workspace, as well as on the performance of your data sources as Grafana queries them to evaluate your alert rules. The following considerations can be helpful as you are trying to optimize the performance of your monitoring system.
+ **Frequency of rule evaluation** – The **Evaluate Every** property of an alert rule controls the frequency of rule evaluation. We recommend using the lowest acceptable evaluation frequency. 
+ **Result set cardinality** – The number of alert instances you create with a rule affects its performance. Suppose you are monitoring API response errors for every API path, on every VM in your fleet. This set has a cardinality of the number of paths multiplied by the number of VMs. You can reduce the cardinality of the result set, for example, by monitoring total errors per VM instead of per path per VM.
+ **Complexity of the query** – Queries that data sources can process and respond to quickly consume fewer resources. Although this consideration is less important than the other considerations listed above, if you have reduced those as much as possible, looking at individual query performance could make a difference. You should also be aware of the performance impact that evaluating these rules has on your data sources. Alerting queries are often the vast majority of queries handled by monitoring databases, so the same load factors that affect the Grafana instance affect them as well.

## Quota reached errors
<a name="v9-alerting-rule-quota-reached"></a>

There is a quota for the number of alert instances you can have within a single workspace. When you reach that number, you can no longer create new alert rules in that workspace. With multidimensional alerts, the number of alert instances can vary over time.

The following are important to remember when working with alert instances.
+ If you create only single-dimensional rules, each rule is a single alert instance. You can create the same number of rules in a single workspace as your alert-instance quota, and no more.
+ Multidimensional rules create multiple alert instances; however, the number is not known until they are evaluated. For example, if you create an alert rule that tracks the CPU usage of your Amazon EC2 instances, there might be 50 EC2 instances when you create it (and therefore 50 alert instances), but if you add 10 more EC2 instances a week later, the next evaluation has 60 alert instances.

  The number of alert instances is evaluated when you create a multidimensional alert, and you can't create one that immediately puts you over your alert instance quota. Because the number of alert instances can change, your quota is checked each time that your rules are evaluated.
+ At rule evaluation time, if a rule causes you to go beyond your quota for alert instances, that rule is not evaluated until an update is made to the alert rule that brings the total count of alert instances below the service quota. When this happens, you receive an alert notification letting you know that your quota has been reached (the notification uses the notification policy for the rule being evaluated). The notification includes an `Error` annotation with the value `QuotaReachedError`.
+ A rule that causes a `QuotaReachedError` stops being evaluated. Evaluation is only resumed when an update is made and the evaluation after the update does not itself cause a `QuotaReachedError`. A rule that is not being evaluated shows the **Quota reached** error in the Grafana console.
+ You can lower the number of alert instances by removing alert rules, or by editing multidimensional alerts to have fewer alert instances (for example, by having one alert on errors per VM, rather than one alert on error per API in a VM).
+ To resume evaluations, update the alert and save it. You can update it to lower the number of alert instances, or if you have made other changes to lower the number of alert instances, you can save it with no changes. If it can be resumed, it is. If it causes another `QuotaReachedError`, you are not able to save it.
+ When an alert is saved and resumes evaluation without going over the alerts quota, the **Quota reached** error can continue to show in the Grafana console for some time (up to its evaluation interval); however, the alert rule evaluation does start, and alerts are sent if the rule threshold is met.
+ For details on the alerts quota, as well as other quotas, see [Amazon Managed Grafana service quotas](AMG_quotas.md).

## Handling no data or error cases
<a name="v9-alerting-rule-no-data-error"></a>

Choose options for how to handle alerting behavior in the absence of data or when there are errors.

The options for handling no data are listed in the following table.


| No Data option | Behavior | 
| --- | --- | 
|  No Data  |  Create an alert `DatasourceNoData` with the name and UID of the alert rule, and the UID of the data source that returned no data, as labels.  | 
|  Alerting  |  Set alert rule state to `Alerting`.  | 
|  OK  |  Set alert rule state to `Normal`.  | 

The options for handling error cases are listed in the following table.


| Error or timeout option | Behavior | 
| --- | --- | 
|  Alerting  |  Set alert rule state to `Alerting`.  | 
|  OK  |  Set alert rule state to `Normal`.  | 
|  Error  |  Create an alert `DatasourceError` with the name and UID of the alert rule, and the UID of the data source that returned the error, as labels.  | 
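
If you provision your rules with Terraform, the options in these tables map onto per-rule attributes of the `grafana_rule_group` resource, documented by the [Grafana Terraform provider](https://registry.terraform.io/providers/grafana/grafana/latest/docs) as `no_data_state` and `exec_err_state`. The following fragment is a minimal sketch, not a complete rule; the query and expression `data` blocks are omitted.

```
resource "grafana_rule_group" "my_rule_group" {
    name             = "My Alert Rules"
    folder_uid       = grafana_folder.rule_folder.uid
    interval_seconds = 60

    rule {
        name      = "My Random Walk Alert"
        condition = "C"
        for       = "0s"

        // Behavior when the query returns no data: "NoData", "Alerting", or "OK".
        no_data_state  = "NoData"
        // Behavior on an error or timeout: "Alerting", "OK", or "Error".
        exec_err_state = "Error"

        // data blocks (queries and expressions) omitted for brevity.
    }
}
```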

# Creating Grafana Mimir or Loki managed alert rules
<a name="v9-alerting-managerules-mimir-loki"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Using Grafana, you can create alerting rules for an external Grafana Mimir or Loki instance.

**Note**  
Grafana Mimir can connect to Amazon Managed Service for Prometheus and Prometheus data sources.

**Prerequisites**
+ Verify that you have write permissions to the Prometheus data source. Otherwise, you are not able to create or update Grafana Mimir or Loki managed alerting rules.
+ For Grafana Mimir and Loki data sources, enable the ruler API by configuring their respective services.
  + **Loki** – The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other storage types.
  + **Grafana Mimir** – Use the legacy `/api/prom` prefix, not `/prometheus`. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the Query API and Ruler API are under the same URL. You cannot provide a separate URL for the Ruler API.

**Note**  
If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via Alerting UI** checkbox.

**To add a Grafana Mimir or Loki managed alerting rule**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page listing existing alerts.

1. Choose **Create alert rule**.

1. In **Step 1**, choose the rule type, and details, as follows:
   + Choose **Mimir or Loki alert**.
   + In **Rule name**, add a descriptive name. This name is displayed in the alert rules list. It is also the `alertname` label for every alert instance that is created from this rule.
   + From the **Select data source** dropdown, select a Prometheus, or Loki data source.
   + From the **Namespace** dropdown, select an existing rule namespace. Otherwise, choose **Add new** and enter a name to create one. Namespaces can contain one or more rule groups and have only an organizational purpose. For more information, see [Grafana Mimir or Loki rule groups and namespaces](v9-alerting-managerules-mimir-loki-groups.md).
   + From the **Group** dropdown, select an existing group within the selected namespace. Otherwise, choose **Add new** and enter a name to create one. Newly created rules are appended to the end of the group. Rules within a group run sequentially at a regular interval, with the same evaluation time.

1. In **Step 2**, add the query to evaluate.

   The value can be a PromQL or LogQL expression. The rule initiates an alert if the evaluation result has at least one series with a value that is greater than 0. An alert is created for each series.

1. In **Step 3**, specify the alert evaluation interval.

   In the **For** text box of the condition, specify the duration for which the condition must be true before the alert is initiated. If you specify `5m`, the conditions must be true for five minutes before the alert is initiated.
**Note**  
After a condition is met, the alert goes into `Pending` state. If the condition remains active for the duration specified, the alert transitions to the `Firing` state. If it is no longer met, it reverts to the `Normal` state.

1. In **Step 4**, add additional metadata associated with the rule.
   + Add a description and summary to customize alert messages. Use the guidelines in [Labels and annotations](v9-alerting-explore-labels.md).
   + Add Runbook URL, panel, dashboard, and alert IDs.
   + Add custom labels.

1. Choose **Preview alerts** to evaluate the rule and see what alerts it would produce. It displays a list of alerts with state and value of each one.

1. Choose **Save** to save the rule or **Save and exit** to save the rule and go back to the **Alerting** page.

After you have created your rule, you can create a notification for your rule. For more information about notifications, see [Manage your alert notifications](v9-alerting-managenotifications.md).

# Creating Grafana Mimir or Loki managed recording rules
<a name="v9-alerting-managerules-mimir-loki-recording"></a>


You can create and manage recording rules for an external Grafana Mimir or Loki instance. Recording rules calculate frequently needed expressions or computationally expensive expressions in advance and save the result as a new set of time series. Querying this new time series is faster, especially for dashboards since they query the same expression every time the dashboards refresh.

**Prerequisites**

For Grafana Mimir and Loki data sources, enable the ruler API by configuring their respective services.
+ **Loki** – The `local` rule storage type, default for the Loki data source, supports only viewing of rules. To edit rules, configure one of the other storage types.
+ **Grafana Mimir** – When configuring a data source to point to Grafana Mimir, use the legacy `/api/prom` prefix, not `/prometheus`. The Prometheus data source supports both Grafana Mimir and Prometheus, and Grafana expects that both the Query API and Ruler API are under the same URL. You cannot provide a separate URL for the Ruler API.

**Note**  
If you do not want to manage alerting rules for a particular Loki or Prometheus data source, go to its settings and clear the **Manage alerts via Alerting UI** check box.

**To add a Grafana Mimir or Loki managed recording rule**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page listing existing alerts.

1. Choose **Create alert rule**.

1. In **Step 1**, add the rule type, rule name, and storage location, as follows.
   + Select the **Mimir or Loki recording rule** option.
   + In **Rule name**, add a descriptive name. This name is displayed in the rule list and is used as the name of the new time series that the recording rule creates.
   + From the **Select data source** dropdown, select a Prometheus or Loki data source.
   + From the **Namespace** dropdown, select an existing rule namespace. Otherwise, choose **Add new** and enter a name to create one. Namespaces can contain one or more rule groups and only have an organizational purpose. For more information, see [Cortex or Loki rule groups and namespaces](alert-rules.md#alert-rule-groups).
   + From the **Group** dropdown, select an existing group within the selected namespace. Otherwise, choose **Add new** and enter a name to create one. Newly created rules are appended to the end of the group. Rules within a group run sequentially at a regular interval, with the same evaluation time.

1. In **Step 2**, add the query to evaluate.

   The value can be a PromQL or LogQL expression. The result of each evaluation is saved as a new set of time series, as described earlier in this topic.

1. In **Step 3**, add additional metadata associated with the rule.
   + Add a description and summary to customize alert messages. Use the guidelines in [Annotations and labels for alerting rules](alert-rules.md#alert-rule-labels).
   + Add Runbook URL, panel, dashboard, and alert IDs.
   + Add custom labels.

1. Choose **Save** to save the rule or **Save and exit** to save the rule and go back to the **Alerting** page.

# Grafana Mimir or Loki rule groups and namespaces
<a name="v9-alerting-managerules-mimir-loki-groups"></a>


You can organize your rules. Rules are created within rule groups, and rule groups are organized into namespaces. The rules within a rule group are run sequentially at a regular interval. The default interval is one minute. You can rename Grafana Mimir or Loki namespaces and rule groups, and edit rule group evaluation intervals.

**To edit a rule group or namespace**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Navigate to a rule within the rule group or namespace you want to edit.

1. Choose the **Edit** (pen) icon.

1. Make changes to the rule group or namespace.
**Note**  
For namespaces, you can only edit the name. For rule groups, you can change the name or the evaluation interval for rules in the group. For example, you can choose `1m` to evaluate the rules once per minute, or `30s` to evaluate them once every 30 seconds.

1. Choose **Save changes**.

# View and edit alerting rules
<a name="v9-alerting-managerules-view-edit"></a>


The **Alerting** page lists alerting rules. By default, rules are grouped by type of data source. The **Grafana** section lists rules managed by Grafana, and the **Cortex/Loki** section lists rules for Prometheus-compatible data sources. You can view alerting rules for Prometheus-compatible data sources, but you cannot edit them.

The Mimir/Cortex/Loki rules section lists all rules for Mimir, Cortex, or Loki data sources. Cloud alert rules are also listed in this section.

When managing large volumes of alerts, you can use extended alert rule search capabilities to filter on folders, evaluation groups, and rules. Additionally, you can filter alert rules by their properties like labels, state, type, and health.

**Note**  
You can view query definitions for provisioned alerts, but you cannot edit them. Being able to view them allows you to verify that your queries and rule definitions are correct without going back to your provisioning repository for rule definitions.

## View alerting rules
<a name="v9-alerting-managerules-view"></a>

Using Grafana alerting, you can view all of your alerts on one page.

**To view alerting details**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page. By default, rules are displayed in groups by data source type. You can also view by the current state of each alert (these are described in more detail in the following text).

1. In **View as**, you can toggle between the group and state views by choosing the option you prefer.

1. Choose the arrow next to a row to view more details for that row. The details for a rule include the rule labels, annotations, data sources, and queries, as well as a list of alert instances resulting from the rule.

**Note**  
For more information about understanding the details of your alerts, see [State and health of alerting rules](v9-alerting-explore-state.md).

**Group view**

Group view shows Grafana alert rules grouped by folder, and Loki or Prometheus alert rules grouped by `namespace` and `group`. This is the default rule list view, intended for managing rules. You can expand each group to view a list of the rules in that group. Expand a rule further to view its details, as well as its action buttons and the alerts resulting from the rule.

**State view**

State view shows alert rules grouped by state. Use this view to get an overview of which rules are in what state. Each rule can be expanded to view its details, action buttons, and any alerts generated by the rule. Each alert can be further expanded to view its details.

**Filter alerting rules**

You can filter the alerting rules that appear on the **Alerting** page in several ways.
+ You can filter to display the rules that query a specific data source by choosing **Select data sources**, then selecting a data source to filter to.
+ You can filter by labels by choosing search criteria in **Search by label**. For example, you could enter `environment=production,region=~US|EU,severity!=warning` to show production alerts from the US and EU regions whose severity is not `warning`.
+ You can filter to display the rules in a specific state by choosing **Filter alerts by state**, and then selecting the state you want to view.

## Edit or delete alerting rules
<a name="v9-alerting-managerules-edit"></a>

Grafana managed alerting rules can only be edited or deleted by users with Edit permissions for the folder storing the rules. Alerting rules for an external Mimir or Loki instance can be edited or deleted by users with Editor or Admin roles.

**To edit or delete a rule**

1. Expand a rule until you can see the rule controls for **View**, **Edit**, and **Delete**.

1. Choose **Edit** to open the create rule page. Make updates in the same way that you create a rule. For details, see the instructions in [Creating Grafana managed alert rules](v9-alerting-managerules-grafana.md) or [Creating Grafana Mimir or Loki managed alert rules](v9-alerting-managerules-mimir-loki.md).

1. Optionally, choose **Delete** to delete a rule.

## Export alert rules
<a name="v9-alerting-managerules-export"></a>

You can export rules to YAML or JSON in the Grafana workspace by choosing **Export**. You also have the option to define a new rule and then export it. For example, you can create a rule using the UI and then export it for use in the provisioning API or Terraform scripts.

**Note**  
This is supported in both the Grafana workspace and the provisioning interface.

# Manage your alert notifications
<a name="v9-alerting-managenotifications"></a>


Choosing how, when, and where to send your alert notifications is an important part of setting up your alerting system. These decisions will have a direct impact on your ability to resolve issues quickly and not miss anything important.

As a first step, define your *contact points*: where to send your alert notifications. A contact point is a set of destinations for matching notifications. Add notification templates to contact points for reuse and consistent messaging in your notifications.

Next, create a *notification policy*, which is a set of rules for where, when, and how your alerts are routed to contact points. In a notification policy, you define where to send your alert notifications by choosing one of the contact points you created. You can also add mute timings to your notification policy. A *mute timing* is a recurring interval of time during which you don’t want any notifications to be sent.

When an alert rule is evaluated, it sends alert instances to the Alertmanager. One alert rule can trigger multiple individual *alert instances*.

The Alertmanager receives these alert instances and then handles mute timings, groups alerts, and sends notifications to your contact points as defined in the notification policy.

**Topics**
+ [Alertmanager](v9-alerting-managenotifications-alertmanager.md)
+ [Working with contact points](v9-alerting-contact-points.md)
+ [Working with notification policies](v9-alerting-notification-policies.md)
+ [Customize notifications](v9-alerting-notifications.md)
+ [Silencing alert notifications for Prometheus data sources](v9-alerting-silences.md)
+ [Mute timings](v9-alerting-notification-muting.md)
+ [View and filter by alert groups](v9-alerting-viewfiltergroups.md)
+ [View notification errors](v9-alerting-viewnotificationerrors.md)

# Alertmanager
<a name="v9-alerting-managenotifications-alertmanager"></a>


Alertmanager enables you to quickly and efficiently manage and respond to alerts. It receives alerts and handles silencing, inhibition, grouping, and routing by sending notifications out through your channel of choice, for example, email or Slack.

In Grafana, you can use the Grafana Alertmanager or an external Alertmanager. You can also run multiple Alertmanagers; your decision depends on your setup and where your alerts are generated.

**Grafana Alertmanager**

Grafana Alertmanager is an internal Alertmanager that is preconfigured and available for selection by default if you run Grafana on-premises or as open source.

The Grafana Alertmanager can receive alerts from Grafana, but it cannot receive alerts from outside Grafana, for example, from Mimir or Loki.

**Note**  
Inhibition rules are not supported in the Grafana Alertmanager.

**External Alertmanager**

If you want to use a single alertmanager to receive all your Grafana, Loki, Mimir, and Prometheus alerts, you can set up Grafana to use an external Alertmanager. This external Alertmanager can be configured and administered from within Grafana itself.

Here are two examples of when you might want to configure your own external Alertmanager and send your alerts there instead of to the Grafana Alertmanager:

1. You already have Alertmanagers on-premises in your own cloud infrastructure that you have set up and still want to use, because you have other alert generators, such as Prometheus.

1. You want to use both Prometheus on-premises and hosted Grafana to send alerts to the same Alertmanager that runs in your cloud infrastructure.

Alertmanagers are visible from the dropdown menu on the Alerting **Contact points** and **Notification policies** pages.

If you are provisioning your data source, set the flag `handleGrafanaManagedAlerts` in the `jsonData` field to `true` to send Grafana-managed alerts to this Alertmanager.

# Working with contact points
<a name="v9-alerting-contact-points"></a>


Use contact points to define how your contacts are notified when an alert is initiated. A contact point can have one or more contact point integrations, for example, Amazon Simple Notification Service or Slack. When an alert is initiated, a notification is sent to all contact point integrations listed for a contact point. Optionally, use [notification templates](v9-alerting-create-templates.md) to customize the notification messages for the contact point types.

**Note**  
You can create and edit contact points for Grafana managed alerts. Contact points for Alertmanager alerts are read-only.

## Working with contact points
<a name="v9-alerting-working-contact-points"></a>

The following procedures detail how to add, edit, test, and delete contact points.

**To add a contact point**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Choose **Contact points**, then **Add contact point**.

1. From the **Alertmanager** dropdown, select an Alertmanager. The Grafana Alertmanager is selected by default.

1. Enter a **Name** for the contact point.

1. From **Contact point integration**, choose a type and complete the mandatory fields for that type. For example, if you choose Slack, enter the Slack channels and users who should be contacted.

1. If available for the contact point you selected, choose any desired **Optional settings** to specify additional settings.

1. Under **Notification settings**, optionally select **Disable resolved message** if you do not want to be notified when an alert resolves.

1. If your contact point needs more contact point types, you can choose **Add contact point integration** and repeat the steps for each contact point type needed.

1. Choose **Save contact point** to save your changes.

**To edit a contact point**

1. Choose **Contact points** to see a list of existing contact points.

1. Select the contact point to edit, then choose the **Edit** icon (pen).

1. Make any necessary changes, and then choose **Save contact point** to save your changes.

After your contact point is created, you can send a test notification to verify that it is configured properly.

**To send a test notification**

1. Choose **Contact points** to open the list of existing contact points.

1. Select the contact point to test, then choose the **Edit** icon (pen).

1. Select the **Test** icon (paper airplane).

1. Choose whether to send a predefined test notification or choose **Custom** to add your own custom annotations and labels in the test notification.

1. Choose **Send test notification** to test the alert with the given contact points.

You can delete contact points that are not in use by a notification policy.

**To delete a contact point**

1. Choose **Contact points** to open the list of existing contact points.

1. Select the contact point to delete, then choose the **Delete** icon (trash can).

1. In the confirmation dialog box, choose **Yes, delete**.

**Note**  
If the contact point is in use by a notification policy, you must delete the notification policy or edit it to use a different contact point before deleting the contact point.

## List of supported notifiers
<a name="v9-alerting-contactpoint-supported-notifiers"></a>


| Name | Type |
| --- | --- |
| Amazon SNS | `sns` |
| OpsGenie | `opsgenie` |
| PagerDuty | `pagerduty` |
| Slack | `slack` |
| VictorOps | `victorops` |

# Working with notification policies
<a name="v9-alerting-notification-policies"></a>


Notification policies determine how alerts are routed to contact points. Policies have a tree structure, where each policy can have one or more child policies. Each policy, except for the root policy, can also match specific alert labels. Each alert is evaluated by the root policy and then by each child policy. If you enable the **Continue matching subsequent sibling nodes** option for a policy, evaluation continues even after one or more matches. A parent policy’s configuration settings and contact point information govern the behavior of an alert that does not match any of the child policies. The root policy governs any alert that does not match a specific policy.

**Note**  
You can create and edit notification policies for Grafana managed alerts. Notification policies for Alertmanager alerts are read-only.
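
The routing behavior described above can be sketched as a walk over a policy tree. The following Go sketch is a simplified one-level model, not Grafana code; the `Policy` type and `route` function are invented for the example. A child policy handles an alert when all of its matchers are satisfied, `ContinueMatching` mirrors the **Continue matching subsequent sibling nodes** option, and the root handles anything no child matches.

```go
package main

import "fmt"

// Policy is a simplified stand-in for a notification policy node.
type Policy struct {
	Name             string
	Matchers         map[string]string // label name -> required value
	ContinueMatching bool
	Children         []*Policy
}

// matches reports whether all of the policy's matchers are satisfied.
func (p *Policy) matches(labels map[string]string) bool {
	for k, v := range p.Matchers {
		if labels[k] != v {
			return false
		}
	}
	return true
}

// route returns the names of the policies that handle the alert. If no
// child matches, the parent (here, the root) handles it.
func route(root *Policy, labels map[string]string) []string {
	var handled []string
	for _, child := range root.Children {
		if child.matches(labels) {
			handled = append(handled, child.Name)
			if !child.ContinueMatching {
				break
			}
		}
	}
	if len(handled) == 0 {
		handled = append(handled, root.Name)
	}
	return handled
}

func main() {
	root := &Policy{
		Name: "root",
		Children: []*Policy{
			{Name: "team-db", Matchers: map[string]string{"team": "db"}, ContinueMatching: true},
			{Name: "critical", Matchers: map[string]string{"severity": "critical"}},
		},
	}
	// Matches both children because team-db continues matching siblings.
	fmt.Println(route(root, map[string]string{"team": "db", "severity": "critical"}))
	// Matches no child, so the root policy handles it.
	fmt.Println(route(root, map[string]string{"team": "web"}))
}
```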

**Grouping notifications**

Grouping categorizes alert notifications of a similar nature into a single funnel. This allows you to control alert notifications during larger outages, when many parts of a system fail at once and cause a high number of alerts to initiate simultaneously.

**Grouping example**

Suppose you have 100 services connected to a database in different environments. These services are differentiated by the label `env=environmentname`. An alert rule is in place to monitor whether your services can reach the database. The alert rule creates alerts named `alertname=DatabaseUnreachable`.

If a network partition occurs, where half of your services can no longer reach the database, 50 different alerts are initiated. For this situation, you want to receive a single-page notification (as opposed to 50) with a list of the environments that are affected.

You can configure grouping to be `group_by: [alertname]` (not using the `env` label, which is different for each service). With this configuration in place, Grafana sends a single compact notification that has all the affected environments for this alert rule.
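
The effect of `group_by: [alertname]` in this example can be sketched in a few lines of Go. The `groupBy` helper is invented for illustration; it shows that alerts differing only in `env` collapse into one notification group when grouped by `alertname`.

```go
package main

import (
	"fmt"
	"sort"
)

// groupBy buckets alerts (represented as label maps) by the values of the
// given label names, mirroring the group_by behavior described above.
func groupBy(alerts []map[string]string, keys []string) map[string][]map[string]string {
	groups := map[string][]map[string]string{}
	for _, a := range alerts {
		groupKey := ""
		for _, k := range keys {
			groupKey += k + "=" + a[k] + ";"
		}
		groups[groupKey] = append(groups[groupKey], a)
	}
	return groups
}

func main() {
	// Three alerts from the same rule, differing only in env.
	var alerts []map[string]string
	for _, env := range []string{"prod-eu", "prod-us", "staging"} {
		alerts = append(alerts, map[string]string{"alertname": "DatabaseUnreachable", "env": env})
	}
	groups := groupBy(alerts, []string{"alertname"})
	fmt.Println("notification groups:", len(groups))
	for _, g := range groups {
		var envs []string
		for _, a := range g {
			envs = append(envs, a["env"])
		}
		sort.Strings(envs)
		fmt.Println("affected environments:", envs)
	}
}
```

Grouping by `env` instead would produce one group, and therefore one notification, per environment.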

**Special groups**

Grafana has two special groups. The default group, `group_by: null`, groups *all* alerts together into a single group. You can also use a special label named `...` to group alerts by all labels, which effectively disables grouping and sends each alert into its own group.

## Working with notifications
<a name="v9-alerting-notification-policies-working"></a>

The following procedures show you how to create and manage notification policies.

**To edit the root notification policy**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Choose **Notification policies**.

1. From the **Alertmanager** dropdown, select the Alertmanager you want to edit.

1. In the **Root policy** section, choose the **Edit** icon (pen).

1. In **Default contact point**, update the contact point where notifications should be sent when alerts do not match any specific policy.

1. In **Group by**, choose the labels (or special groups) to group alerts by.

1. In **Timing options**, select from the following options.
   + **Group wait** – Time to wait to buffer alerts of the same group before sending an initial notification. The default is 30 seconds.
   + **Group interval** – Minimum time interval between two notifications for a group. The default is 5 minutes.
   + **Repeat interval** – Minimum time interval before resending a notification if no new alerts were added to the group. The default is 4 hours.

1. Choose **Save** to save your changes.

**To add a new, top-level specific policy**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Choose **Notification policies**.

1. From the **Alertmanager** dropdown, select the Alertmanager you want to edit.

1. In the **Specific routing** section, choose **New specific policy**.

1. In the **Matching labels** section, add one or more matching alert labels. For more information about label matching, see **How label matching works** later in this topic.

1. In **Contact point**, add the contact point to send notifications to if the alert matches this specific policy. Nested policies override this contact point.

1. Optionally, enable **Continue matching subsequent sibling nodes** to continue matching sibling policies even after the alert matches the current policy. When this option is enabled, you can receive more than one notification for the same alert.

1. Optionally select **Override grouping** to specify a grouping different from the root policy.

1. Optionally select **Override general timings** to override the timing options in the group notification policy.

1. Choose **Save policy** to save your changes.

**To add a nested policy**

1. Expand the specific policy you want to create a nested policy under.

1. Choose **Add nested policy**, then add the details (as when adding a top-level specific policy).

1. Choose **Save policy** to save your changes.

**To edit a specific policy**

1. From the **Alerting** page, choose **Notification policies** to open the page listing existing policies.

1. Select the policy that you want to edit, then choose the **Edit** icon (pen).

1. Make any changes (as when adding a top-level specific policy).

1. Choose **Save policy**.

**Searching for policies**

You can search within the tree of policies by *Label matchers* or *contact points*.
+ To search by contact point, enter a partial or full name of a contact point in the **Search by contact point** field.
+ To search by label, enter a valid label matcher in the **Search by label** field. Multiple matchers can be entered, separated by a comma. For example, a valid matcher input could be `severity=high, region=~EMEA|NA`.
**Note**  
When searching by label, all matched policies will be exact matches. Partial matches and regex-style matches are not supported.

**How label matching works**

A policy matches an alert if the alert's labels match all the *Matching Labels* specified on the policy.
+ **Label** – The name of the label to match. It must exactly match the label name of the alert.
+ **Operator** – The operator used to compare the label value with the matching label value. The available operators are:
  + `=` Select labels whose value exactly matches the provided string.
  + `!=` Select labels whose value does not match the provided string.
  + `=~` Select labels whose value matches the provided string, which is interpreted as a regular expression.
  + `!~` Select labels whose value does not match the provided string, which is interpreted as a regular expression.
+ **Value** – The value to match the label value to. It can match as a string or as a regular expression, depending on the operator chosen.
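
As a sketch of how the four operators behave, the following Go example implements them with Go's `regexp` package. The `matches` function is illustrative only, not Grafana's matcher implementation; here regex matchers are assumed to be anchored to the full label value.

```go
package main

import (
	"fmt"
	"regexp"
)

// matches reports whether a label value satisfies a matcher with the given
// operator. The operator names mirror the list above.
func matches(op, labelValue, matchValue string) bool {
	switch op {
	case "=":
		return labelValue == matchValue
	case "!=":
		return labelValue != matchValue
	case "=~":
		ok, _ := regexp.MatchString("^(?:"+matchValue+")$", labelValue)
		return ok
	case "!~":
		ok, _ := regexp.MatchString("^(?:"+matchValue+")$", labelValue)
		return !ok
	}
	return false
}

func main() {
	fmt.Println(matches("=", "high", "high"))     // exact match
	fmt.Println(matches("=~", "EMEA", "EMEA|NA")) // regex match
	fmt.Println(matches("!=", "warning", "high")) // negative exact match
}
```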

# Customize notifications
<a name="v9-alerting-notifications"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Customize your notifications with notification templates.

You can use notification templates to change the title, message, and format of the message in your notifications.

Notification templates are not tied to specific contact point integrations, such as email or Slack. However, you can choose to create separate notification templates for different contact point integrations.

You can use notification templates to:
+ Add, remove, or re-order information in the notification including the summary, description, labels and annotations, values, and links
+ Format text in bold and italic, and add or remove line breaks

You cannot use notification templates to:
+ Change the design of notifications in instant messaging services such as Slack and Microsoft Teams

**Topics**
+ [Using Go’s templating language](v9-alerting-notifications-go-templating.md)
+ [Create notification templates](v9-alerting-create-templates.md)
+ [Template reference](v9-alerting-template-reference.md)

# Using Go’s templating language
<a name="v9-alerting-notifications-go-templating"></a>


You write notification templates in Go’s templating language, [text/template](https://pkg.go.dev/text/template).

This section provides an overview of Go’s templating language and writing templates in text/template.

## Dot
<a name="v9-go-dot"></a>

In text/template there is a special cursor, called dot, which is written as `.`. You can think of this cursor as a variable whose value changes depending on where in the template it is used. For example, at the start of a notification template `.` refers to the `ExtendedData` object, which contains a number of fields including `Alerts`, `Status`, `GroupLabels`, `CommonLabels`, `CommonAnnotations`, and `ExternalURL`. However, dot might refer to something else when used in a `range` over a list, when used inside a `with`, or when writing templates to be used in other templates. You can see examples of this in [Create notification templates](v9-alerting-create-templates.md), and all data and functions in the [Template reference](v9-alerting-template-reference.md).

## Opening and closing tags
<a name="v9-go-openclosetags"></a>

In text/template, templates start with `{{` and end with `}}` irrespective of whether the template prints a variable or runs control structures such as if statements. This is different from other templating languages such as Jinja where printing a variable uses `{{` and `}}` and control structures use `{%` and `%}`.

## Print
<a name="v9-go-print"></a>

To print the value of something use `{{` and `}}`. You can print the value of dot, a field of dot, the result of a function, and the value of a [variable](#v9-go-variables). For example, to print the `Alerts` field where dot refers to `ExtendedData` you would write the following:

```
{{ .Alerts }}
```

## Iterate over alerts
<a name="v9-go-iterate-alerts"></a>

To print just the labels of each alert, rather than all information about the alert, you can use a `range` to iterate the alerts in `ExtendedData`:

```
{{ range .Alerts }}
{{ .Labels }}
{{ end }}
```

Inside the range dot no longer refers to `ExtendedData`, but to an `Alert`. You can use `{{ .Labels }}` to print the labels of each alert. This works because `{{ range .Alerts }}` changes dot to refer to the current alert in the list of alerts. When the range is finished dot is reset to the value it had before the start of the range, which in this example is `ExtendedData`:

```
{{ range .Alerts }}
{{ .Labels }}
{{ end }}
{{/* does not work, .Labels does not exist here */}}
{{ .Labels }}
{{/* works, cursor was reset */}}
{{ .Status }}
```
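
You can verify this dot-reset behavior with Go's text/template package directly. In the following sketch, `Alert` and `ExtendedData` are simplified stand-ins that model only the fields the template uses:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Alert and ExtendedData are simplified stand-ins for the notification payload.
type Alert struct {
	Labels map[string]string
}

type ExtendedData struct {
	Status string
	Alerts []Alert
}

// render parses and executes a template against the given data.
func render(tmplText string, data interface{}) string {
	t := template.Must(template.New("demo").Parse(tmplText))
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	data := ExtendedData{
		Status: "firing",
		Alerts: []Alert{{Labels: map[string]string{"severity": "high"}}},
	}
	// Inside the range, dot is an Alert; after the range, dot is ExtendedData again,
	// so .Status resolves.
	fmt.Println(render("{{ range .Alerts }}{{ .Labels }}{{ end }}{{ .Status }}", data))
}
```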

## Iterate over annotations and labels
<a name="v9-go-iterate-labels"></a>

Let’s write a template to print the labels of each alert in the format `The name of the label is $name, and the value is $value`, where `$name` and `$value` contain the name and value of each label.

Like in the previous example, use a range to iterate over the alerts in `.Alerts` such that dot refers to the current alert in the list of alerts, and then use a second range on the sorted labels so dot is updated a second time to refer to the current label. Inside the second range use `.Name` and `.Value` to print the name and value of each label:

```
{{ range .Alerts }}
{{ range .Labels.SortedPairs }}
The name of the label is {{ .Name }}, and the value is {{ .Value }}
{{ end }}
{{ range .Annotations.SortedPairs }}
The name of the annotation is {{ .Name }}, and the value is {{ .Value }}
{{ end }}
{{ end }}
```

## If statements
<a name="v9-go-if"></a>

You can use if statements in templates. For example, to print `There are no alerts` if there are no alerts in `.Alerts` you would write the following:

```
{{ if .Alerts }}
There are alerts
{{ else }}
There are no alerts
{{ end }}
```

## With
<a name="v9-go-with"></a>

`with` is similar to an if statement; however, unlike an if statement, `with` also updates dot to refer to the value of its expression:

```
{{ with .Alerts }}
There are {{ len . }} alert(s)
{{ else }}
There are no alerts
{{ end }}
```

## Variables
<a name="v9-go-variables"></a>

Variables in text/template must be created within the template. For example, to create a variable called `$variable` with the current value of dot you would write the following:

```
{{ $variable := . }}
```

You can use `$variable` inside a range or `with` and it will refer to the value of dot at the time the variable was defined, not the current value of dot.

For example, you cannot write a template that uses `{{ .Labels }}` in the second range, because there dot refers to the current label, not the current alert:

```
{{ range .Alerts }}
{{ range .Labels.SortedPairs }}
{{ .Name }} = {{ .Value }}
{{/* does not work because in the second range . is a label not an alert */}}
There are {{ len .Labels }}
{{ end }}
{{ end }}
```

You can fix this by defining a variable called `$alert` in the first range and before the second range:

```
{{ range .Alerts }}
{{ $alert := . }}
{{ range .Labels.SortedPairs }}
{{ .Name }} = {{ .Value }}
{{/* works because $alert refers to the value of dot inside the first range */}}
There are {{ len $alert.Labels }}
{{ end }}
{{ end }}
```
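You can verify this variable capture outside Grafana with plain `text/template`. This sketch uses a simplified, hypothetical `Alert` type and ranges over a plain map instead of Grafana's `SortedPairs`:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Alert is a simplified, hypothetical stand-in for Grafana's alert type.
type Alert struct {
	Labels map[string]string
}

// render shows that $alert still points at the alert captured in the
// outer range, even while the inner range has changed dot to a label.
func render(alerts []Alert) string {
	const text = "{{ range . }}{{ $alert := . }}" +
		"{{ range $name, $value := .Labels }}" +
		"{{ $name }} = {{ $value }} ({{ len $alert.Labels }} labels)\n" +
		"{{ end }}{{ end }}"
	t := template.Must(template.New("vars").Parse(text))
	var buf bytes.Buffer
	if err := t.Execute(&buf, alerts); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render([]Alert{{Labels: map[string]string{"alertname": "Test"}}}))
}
```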

## Range with index
<a name="v9-go-rangeindex"></a>

You can get the index of each alert within a range by defining index and value variables at the start of the range:

```
{{ $num_alerts := len .Alerts }}
{{ range $index, $alert := .Alerts }}
This is alert {{ $index }} out of {{ $num_alerts }}
{{ end }}
```

## Define templates
<a name="v9-go-define"></a>

You can define templates that can be used within other templates, using `define` and the name of the template in double quotes. Do not define templates with the same name as other templates, including default templates such as `__subject`, `__text_values_list`, `__text_alert_list`, `default.title`, and `default.message`. When a template has the same name as a default template, or as a template in another notification template, Grafana might use either one. Grafana does not prevent this, and does not show an error message, when two or more templates share the same name.

```
{{ define "print_labels" }}
{{ end }}
```

## Embed templates
<a name="v9-go-embed"></a>

You can embed a defined template within your template using `template`, the name of the template in double quotes, and the cursor that should be passed to the template:

```
{{ template "print_labels" . }}
```

## Pass data to templates
<a name="v9-go-passdata"></a>

Within a template dot refers to the value that is passed to the template.

For example, if a template is passed a list of firing alerts then dot refers to that list of firing alerts:

```
{{ template "print_alerts" .Alerts }}
```

If the template is passed the sorted labels for an alert then dot refers to the list of sorted labels:

```
{{ template "print_labels" .SortedLabels }}
```

This is useful when writing reusable templates. For example, to print all alerts you might write the following:

```
{{ template "print_alerts" .Alerts }}
```

Then to print just the firing alerts you could write this:

```
{{ template "print_alerts" .Alerts.Firing }}
```

This works because both `.Alerts` and `.Alerts.Firing` are lists of alerts.

```
{{ define "print_alerts" }}
{{ range . }}
{{ template "print_labels" .SortedLabels }}
{{ end }}
{{ end }}
```
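Putting `define`, `template`, and dot-passing together, this is a minimal `text/template` sketch of the pattern above. The `Label` and `Alert` types here are simplified stand-ins, and `SortedLabels` is assumed to already be sorted:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// Label is a hypothetical simplified pair; Grafana's SortedPairs
// yields similar Name/Value items.
type Label struct{ Name, Value string }

type Alert struct{ SortedLabels []Label }

func render(alerts []Alert) string {
	// "print_alerts" receives the alert list as its dot, and passes
	// each alert's labels on to "print_labels" in turn.
	const text = `
{{- define "print_labels" -}}
{{ range . }}{{ .Name }}={{ .Value }} {{ end }}
{{- end -}}
{{- define "print_alerts" -}}
{{ range . }}{{ template "print_labels" .SortedLabels }}
{{ end }}
{{- end -}}
{{ template "print_alerts" . }}`
	t := template.Must(template.New("root").Parse(text))
	var buf bytes.Buffer
	if err := t.Execute(&buf, alerts); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	fmt.Print(render([]Alert{
		{SortedLabels: []Label{{"alertname", "Test"}, {"severity", "critical"}}},
	}))
}
```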

## Comments
<a name="v9-go-comments"></a>

You can add comments with `{{/*` and `*/}}`:

```
{{/* This is a comment */}}
```

To prevent comments from adding line breaks, use:

```
{{- /* This is a comment with no leading or trailing line breaks */ -}}
```

## Indentation
<a name="v9-go-indentation"></a>

You can use indentation (both tabs and spaces) and line breaks to make templates more readable:

```
{{ range .Alerts }}
  {{ range .Labels.SortedPairs }}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

However, indentation in the template will also be present in the text. Next we will see how to remove it.

## Remove spaces and line breaks
<a name="v9-go-removespace"></a>

In text/template use `{{-` and `-}}` to remove leading and trailing spaces and line breaks.

For example, when using indentation and line breaks to make a template more readable:

```
{{ range .Alerts }}
  {{ range .Labels.SortedPairs }}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

The indentation and line breaks will also be present in the text:

```
    alertname = "Test"

    grafana_folder = "Test alerts"
```

You can remove the indentation and line breaks from the text by changing `}}` to `-}}` at the start of each range:

```
{{ range .Alerts -}}
  {{ range .Labels.SortedPairs -}}
    {{ .Name }} = {{ .Value }}
  {{ end }}
{{ end }}
```

The indentation and line breaks in the template are now absent from the text:

```
alertname = "Test"
grafana_folder = "Test alerts"
```
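You can compare trimmed and untrimmed output directly with Go's `text/template`. This sketch trims at both the start and the end of the range, which is a slight extension of the example above:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a template string against data and returns the output.
func render(text string, data any) string {
	t := template.Must(template.New("t").Parse(text))
	var buf bytes.Buffer
	if err := t.Execute(&buf, data); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	labels := map[string]string{"alertname": "Test", "grafana_folder": "Test alerts"}

	// Without trim markers, the template's own line breaks and
	// indentation end up in the output.
	plain := "{{ range $n, $v := . }}\n    {{ $n }} = {{ $v }}\n{{ end }}\n"

	// With -}} on the range and end actions, the surrounding
	// whitespace is trimmed away.
	trimmed := "{{ range $n, $v := . -}}\n{{ $n }} = {{ $v }}\n{{ end -}}\n"

	fmt.Printf("plain:   %q\n", render(plain, labels))
	fmt.Printf("trimmed: %q\n", render(trimmed, labels))
}
```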

# Create notification templates
<a name="v9-alerting-create-templates"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

Create reusable notification templates to send to your contact points.

You can add one or more templates to your notification template.

Your notification template name must be unique. You cannot have two templates with the same name in the same notification template or in different notification templates. Avoid defining templates with the same name as default templates, such as: `__subject`, `__text_values_list`, `__text_alert_list`, `default.title` and `default.message`.

In the Contact points tab, you can see a list of your notification templates.

## Creating notification templates
<a name="v9-alerting-creating-templates"></a>

**To create a notification template**

1. Click **Add template**.

1. Choose a name for the notification template, such as `email.subject`.

1. Write the content of the template in the content field.

   For example:

   ```
   {{ if .Alerts.Firing -}}
   {{ len .Alerts.Firing }} firing alerts
   {{ end }}
   {{ if .Alerts.Resolved -}}
   {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Click **Save**.

   `{{ define "email.subject" }}` (where `email.subject` is the name of your template) and `{{ end }}` are automatically added to the start and end of the content.

**To create a notification template that contains more than one template:**

1. Click **Add template**.

1. Enter a name for the overall notification template. For example, `email`.

1. Write each template in the Content field, including `{{ define "name-of-template" }}` and `{{ end }}` at the start and end of each template. You can use descriptive names for each of the templates in the notification template, for example, `email.subject` or `email.message`. In this case, do not reuse the name of the notification template you entered above.

   The following sections show detailed examples for templates you might create.

1. Click **Save**.

## Creating a template for the subject of an email
<a name="v9-alerting-create-template-subject"></a>

Create a template for the subject of an email that contains the number of firing and resolved alerts, as in this example:

```
1 firing alerts, 0 resolved alerts
```

**To create a template for the subject of an email**

1. Create a template called `email.subject` with the following content:

   ```
   {{ define "email.subject" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Subject** field with the `template` keyword.

   ```
   {{ template "email.subject" . }}
   ```

## Creating a template for the message of an email
<a name="v9-alerting-create-template-message"></a>

Create a template for the message of an email that contains a summary of all firing and resolved alerts, as in this example:

```
There are 2 firing alerts, and 1 resolved alerts

Firing alerts:

- alertname=Test 1 grafana_folder=GrafanaCloud has value(s) B=1
- alertname=Test 2 grafana_folder=GrafanaCloud has value(s) B=2

Resolved alerts:

- alertname=Test 3 grafana_folder=GrafanaCloud has value(s) B=0
```

**To create a template for the message of an email**

1. Create a notification template called `email` with two templates in the content: `email.message_alert` and `email.message`.

   The `email.message_alert` template is used to print the labels and values for each firing and resolved alert while the `email.message` template contains the structure of the email.

   ```
   {{- define "email.message_alert" -}}
   {{- range .Labels.SortedPairs }}{{ .Name }}={{ .Value }} {{ end }} has value(s)
   {{- range $k, $v := .Values }} {{ $k }}={{ $v }}{{ end }}
   {{- end -}}
   
   {{ define "email.message" }}
   There are {{ len .Alerts.Firing }} firing alerts, and {{ len .Alerts.Resolved }} resolved alerts
   
   {{ if .Alerts.Firing -}}
   Firing alerts:
   {{- range .Alerts.Firing }}
   - {{ template "email.message_alert" . }}
   {{- end }}
   {{- end }}
   
   {{ if .Alerts.Resolved -}}
   Resolved alerts:
   {{- range .Alerts.Resolved }}
   - {{ template "email.message_alert" . }}
   {{- end }}
   {{- end }}
   
   {{ end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Text Body** field with the `template` keyword.

   ```
   {{ template "email.message" . }}
   ```

## Creating a template for the title of a Slack message
<a name="v9-alerting-create-template-slack-title"></a>

Create a template for the title of a Slack message that contains the number of firing and resolved alerts, as in the following example:

```
1 firing alerts, 0 resolved alerts
```

**To create a template for the title of a Slack message**

1. Create a template called `slack.title` with the following content:

   ```
   {{ define "slack.title" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Title** field with the `template` keyword.

   ```
   {{ template "slack.title" . }}
   ```

## Creating a template for the content of a Slack message
<a name="v9-alerting-create-template-slack-message"></a>

Create a template for the content of a Slack message that contains a description of all firing and resolved alerts, including their labels, annotations, and Dashboard URL:

```
1 firing alerts:

[firing] Test1
Labels:
- alertname: Test1
- grafana_folder: GrafanaCloud
Annotations:
- description: This is a test alert
Go to dashboard: https://example.com/d/dlhdLqF4z?orgId=1

1 resolved alerts:

[resolved] Test2
Labels:
- alertname: Test2
- grafana_folder: GrafanaCloud
Annotations:
- description: This is another test alert
Go to dashboard: https://example.com/d/dlhdLqF4z?orgId=1
```

**To create a template for the content of a Slack message**

1. Create a template called `slack` with two templates in the content: `slack.print_alert` and `slack.message`.

   The `slack.print_alert` template is used to print the labels, annotations, and DashboardURL while the `slack.message` template contains the structure of the notification.

   ```
   {{ define "slack.print_alert" -}}
   [{{.Status}}] {{ .Labels.alertname }}
   Labels:
   {{ range .Labels.SortedPairs -}}
   - {{ .Name }}: {{ .Value }}
   {{ end -}}
   {{ if .Annotations -}}
   Annotations:
   {{ range .Annotations.SortedPairs -}}
   - {{ .Name }}: {{ .Value }}
   {{ end -}}
   {{ end -}}
   {{ if .DashboardURL -}}
     Go to dashboard: {{ .DashboardURL }}
   {{- end }}
   {{- end }}
   
   {{ define "slack.message" -}}
   {{ if .Alerts.Firing -}}
   {{ len .Alerts.Firing }} firing alerts:
   {{ range .Alerts.Firing }}
   {{ template "slack.print_alert" . }}
   {{ end -}}
   {{ end }}
   {{ if .Alerts.Resolved -}}
   {{ len .Alerts.Resolved }} resolved alerts:
   {{ range .Alerts.Resolved }}
   {{ template "slack.print_alert" .}}
   {{ end -}}
   {{ end }}
   {{- end }}
   ```

1. Use the template when creating your contact point integration by putting it into the **Text Body** field with the `template` keyword.

   ```
   {{ template "slack.message" . }}
   ```

## Template both email and Slack with shared templates
<a name="v9-alerting-create-shared-templates"></a>

Instead of creating separate notification templates for each contact point, such as email and Slack, you can share the same template.

For example, if you want to send an email with this subject and Slack message with this title `1 firing alerts, 0 resolved alerts`, you can create a shared template.

**To create a shared template**

1. Create a template called `common.subject_title` with the following content:

   ```
   {{ define "common.subject_title" }}
   {{ len .Alerts.Firing }} firing alerts, {{ len .Alerts.Resolved }} resolved alerts
   {{ end }}
   ```

1. For email, run the template from the subject field in your email contact point integration:

   ```
   {{ template "common.subject_title" . }}
   ```

1. For Slack, run the template from the title field in your Slack contact point integration:

   ```
   {{ template "common.subject_title" . }}
   ```

## Using notification templates
<a name="v9-alerting-use-notification-templates"></a>

Use templates in contact points to customize your notifications.

**To use a template when creating a contact point**

1. From the **Alerting** menu, choose **Contact points** to see a list of existing contact points.

1. Choose **Add contact point**. Alternatively, you can edit an existing contact point by choosing the **Edit** icon (pen) next to the contact point you wish to edit.

1. Enter the templates you wish to use in one or more fields, such as **Message** or **Subject**. To enter a template, use the form `{{ template "template_name" . }}`, replacing *template\_name* with the name of the template you want to use.

1. Click **Save contact point**.

# Template reference
<a name="v9-alerting-template-reference"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

This section provides reference information for creating your templates.

## Template data
<a name="v9-alerting-template-data"></a>

The following data is passed to message templates.


| Name | Type | Notes | 
| --- | --- | --- | 
|  `Receiver`  |  string  |  Name of the contact point that the notification is being sent to.  | 
|  `Status`  |  string  |  `firing` if at least one alert is firing, otherwise `resolved`.  | 
|  `Alerts`  |  []Alert  |  List of alert objects that are included in this notification (see the Alert type below).  | 
|  `GroupLabels`  |  KeyValue  |  Labels these alerts were grouped by.  | 
|  `CommonLabels`  |  KeyValue  |  Labels common to all the alerts included in this notification.  | 
|  `CommonAnnotations`  |  KeyValue  |  Annotations common to all the alerts included in this notification.  | 
|  `ExternalURL`  |  string  |  A link back to the Grafana workspace that sent the notification. If you are using an external Alertmanager, a link back to that Alertmanager.  | 

The `Alerts` type exposes two functions for filtering the alerts returned.
+ `Alerts.Firing` – Returns a list of firing alerts.
+ `Alerts.Resolved` – Returns a list of resolved alerts.

**Alert (type)**

The alert type contains the following data.


| Name | Type | Notes | 
| --- | --- | --- | 
|  Status  |  string  |  `firing` or `resolved`.  | 
|  Labels  |  KeyValue  |  A set of labels attached to the alert.  | 
|  Annotations  |  KeyValue  |  A set of annotations attached to the alert.  | 
| Values | KeyValue | The values of all expressions, including Classic Conditions. | 
|  StartsAt  |  time.Time  |  Time the alert started firing.  | 
|  EndsAt  |  time.Time  |  Only set if the end time of an alert is known. Otherwise set to a configurable timeout period from the time since the last alert was received.  | 
|  GeneratorURL  |  string  |  A back link to Grafana or external Alertmanager.  | 
|  SilenceURL  |  string  |  A link to silence the alert (with labels for this alert pre-filled). Only for Grafana managed alerts.  | 
|  DashboardURL  |  string  |  Link to the Grafana dashboard, if the alert rule belongs to one. Only for Grafana managed alerts.  | 
|  PanelURL  |  string  |  Link to the Grafana dashboard panel, if the alert rule belongs to one. Only for Grafana managed alerts.  | 
|  Fingerprint  |  string  |  Fingerprint that can be used to identify the alert.  | 
|  ValueString  |  string  |  A string that contains the labels and value of each reduced expression in the alert.  | 

**ExtendedData**

The ExtendedData object contains the following properties.


| Name | Kind | Description | Example | 
| --- | --- | --- | --- | 
|  Receiver  |  `string`  |  The name of the contact point sending the notification.  |  `{{ .Receiver }}`  | 
|  Status  |  `string`  |  The status is `firing` if at least one alert is firing, otherwise `resolved`.  |  `{{ .Status }}`  | 
|  Alerts  |  `[]Alert`  |  List of all firing and resolved alerts in this notification.  |  `There are {{ len .Alerts }} alerts`  | 
|  Firing alerts  |  `[]Alert`  |  List of all firing alerts in this notification.  |  `There are {{ len .Alerts.Firing }} firing alerts`  | 
|  Resolved alerts  |  `[]Alert`  |  List of all resolved alerts in this notification.  |  `There are {{ len .Alerts.Resolved }} resolved alerts`  | 
|  GroupLabels  |  `KeyValue`  |  The labels that group these alerts in this notification.  |  `{{ .GroupLabels }}`  | 
|  CommonLabels  |  `KeyValue`  |  The labels common to all alerts in this notification.  |  `{{ .CommonLabels }}`  | 
|  CommonAnnotations  |  `KeyValue`  |  The annotations common to all alerts in this notification.  |  `{{ .CommonAnnotations }}`  | 
|  ExternalURL  |  `string`  |  A link to the Grafana workspace or Alertmanager that sent this notification.  |  `{{ .ExternalURL }}`  | 

**KeyValue type**

The `KeyValue` type is a set of key/value string pairs that represent labels and annotations.

In addition to direct access of the data stored as a `KeyValue`, there are also methods for sorting, removing, and transforming the data.


| Name | Arguments | Returns | Notes | Example | 
| --- | --- | --- | --- | --- | 
|  SortedPairs  |    |  Sorted list of key and value string pairs  |    | `{{ .Annotations.SortedPairs }}` | 
|  Remove  |  []string  |  KeyValue  |  Returns a copy of the Key/Value map without the given keys.  | `{{ .Annotations.Remove "summary" }}` | 
|  Names  |    |  []string  |  List of label names  | `{{ .Names }}` | 
|  Values  |    |  []string  |  List of label values  | `{{ .Values }}` | 

**Time**

Time is from the Go [https://pkg.go.dev/time#Time](https://pkg.go.dev/time#Time) package. You can print a time in a number of different formats. For example, to print the time that an alert fired in the format `Monday, 1 January 2022 at 10:00AM`, you would write the following template:

```
{{ .StartsAt.Format "Monday, 2 January 2006 at 3:04PM" }}
```

You can find a reference for Go’s time format [here](https://pkg.go.dev/time#pkg-constants).

## Template functions
<a name="v9-alerting-template-functions"></a>

Using template functions you can process labels and annotations to generate dynamic notifications. The following functions are available.


| Name | Argument type | Return type | Description | 
| --- | --- | --- | --- | 
|  `humanize`  |  number or string  |  string  |  Converts a number to a more readable format, using metric prefixes.  | 
|  `humanize1024`  |  number or string  |  string  |  Like humanize, but uses 1024 as the base rather than 1000.  | 
|  `humanizeDuration`  |  number or string  |  string  |  Converts a duration in seconds to a more readable format.  | 
|  `humanizePercentage`  |  number or string  |  string  |  Converts a ratio value to a fraction of 100.  | 
|  `humanizeTimestamp`  |  number or string  |  string  |  Converts a Unix timestamp in seconds to a more readable format.  | 
|  `title`  |  string  |  string  |  strings.Title, capitalizes first character of each word.  | 
|  `toUpper`  |  string  |  string  |  strings.ToUpper, converts all characters to upper case.  | 
|  `toLower`  |  string  |  string  |  strings.ToLower, converts all characters to lower case.  | 
|  `match`  |  pattern, text  |  Boolean  |  regexp.MatchString Tests for an unanchored regexp match.  | 
|  `reReplaceAll`  |  pattern, replacement, text  |  string  |  Regexp.ReplaceAllString Regexp substitution, unanchored.  | 
|  `graphLink`  |  string - JSON Object with `expr` and `datasource` fields  |  string  |  Returns the path to graphical view in Explore for the given expression and data source.  | 
|  `tableLink`  |  string - JSON Object with `expr` and `datasource` fields  |  string  |  Returns the path to tabular view in Explore for the given expression and data source.  | 
|  `args`  |  []interface{}  |  map[string]interface{}  |  Converts a list of objects to a map with keys, for example, arg0, arg1. Use this function to pass multiple arguments to templates.  | 
|  `externalURL`  |  nothing  |  string  |  Returns a string representing the external URL.  | 
|  `pathPrefix`  |  nothing  |  string  |  Returns the path of the external URL.  | 

The following table shows examples of using each function.


| Function | Template string | Input | Expected | 
| --- | --- | --- | --- | 
|  humanize  |  `{{ humanize $value }}`  |  1234567.0  |  1.235M  | 
|  humanize1024  |  `{{ humanize1024 $value }}`  |  1048576.0  |  1Mi  | 
|  humanizeDuration  |  `{{ humanizeDuration $value }}`  |  899.99  |  14m 59s  | 
|  humanizePercentage  |  `{{ humanizePercentage $value }}`  |  0.1234567  |  12.35%  | 
|  humanizeTimestamp  |  `{{ humanizeTimestamp $value }}`  |  1435065584.128  |  2015-06-23 13:19:44.128 +0000 UTC  | 
|  title  |  `{{ $value \| title }}`  |  aa bB CC  |  Aa Bb Cc  | 
|  toUpper  |  `{{ $value \| toUpper }}`  |  aa bB CC  |  AA BB CC  | 
|  toLower  |  `{{ $value \| toLower }}`  |  aa bB CC  |  aa bb cc  | 
|  match  |  `{{ match "a+" $labels.instance }}`  |  aa  |  true  | 
|  reReplaceAll  |  `{{ reReplaceAll "localhost:(.*)" "my.domain:$1" $labels.instance }}`  |  localhost:3000  |  my.domain:3000  | 
|  graphLink  |  `{{ graphLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}`  |    |  `/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":false,"range":true}]`  | 
|  tableLink  |  `{{ tableLink "{\"expr\": \"up\", \"datasource\": \"gdev-prometheus\"}" }}`  |    |  `/explore?left=["now-1h","now","gdev-prometheus",{"datasource":"gdev-prometheus","expr":"up","instant":true,"range":false}]`  | 
|  args  |  `{{ define "x" }}{{ .arg0 }} {{ .arg1 }}{{ end }}{{ template "x" (args 1 "2") }}`  |    |  1 2  | 
|  externalURL  |  `{{ externalURL }}`  |    |  http://localhost/path/prefix  | 
|  pathPrefix  |  `{{ pathPrefix }}`  |    |  /path/prefix  | 

# Silencing alert notifications for Prometheus data sources
<a name="v9-alerting-silences"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

For external Alertmanager data sources (including Amazon Managed Service for Prometheus), you can suppress alert notifications with a *silence*. A silence only stops notifications from being created: silences do not prevent alert rules from being evaluated, and they do not stop alert instances from being shown in the user interface. When you silence an alert, you specify a window of time for it to be suppressed.

You can configure silences for an external Alertmanager data source.

**Note**  
To suppress alert notifications at regular time intervals (for example, during regular maintenance periods), or for other data sources, use [Mute timings](v9-alerting-notification-muting.md) rather than silences.

**To add a silence**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Choose **Silences** to open a page listing existing silences.

1. Choose the external Alertmanager from the **Alertmanager** dropdown.

1. Select **Add Silence**.

1. Select the start and end date in **Silence start and end** to indicate when the silence should go into effect and when it should end.

   As an alternative to setting an end time, in **Duration**, specify how long the silence is enforced. This automatically updates the end time in the **Silence start and end** field.

1. In the **Name** and **Value** fields, enter one or more *Matching Labels*. Matchers determine which rules the silence applies to. Label matching is discussed in more detail following this procedure.

1. Optionally, add a **Comment**, or modify the **Creator** to set the owner of the silence.

1. Choose **Create** to create the silence.

You can edit an existing silence by choosing the **Edit** icon (pen).

**Label matching for alert suppression**

When you create a silence, you create a set of *matching labels* as part of the silence. This is a set of rules about labels that must match for the alert to be suppressed. The matching labels consist of three parts:
+ **Label** – The name of the label to match. It must exactly match the label name of the alert.
+ **Operator** – The operator used to compare the label value with the matching label value. The available operators are:
  + `=` Select labels whose value exactly matches the provided string.
  + `!=` Select labels whose value does not match the provided string.
  + `=~` Select labels whose value matches the provided string, interpreted as a regular expression.
  + `!~` Select labels whose value does not match the provided string, interpreted as a regular expression.
+ **Value** – The value to match the label value to. It can match as a string or as a regular expression, depending on the operator chosen.

A silence ends at the indicated end date, but you can manually end the suppression at any time.

**To end a silence manually**

1. In the **Alerting** page, choose **Silences** to view the list of existing silences.

1. Select the silence that you want to end, and choose **Unsilence**. This ends the alert suppression.
**Note**  
Unsilencing ends the alert suppression, as if the end time were set to the current time. Silences that have ended (automatically or manually) are retained and listed for five days. You cannot manually remove a silence from the list.

**Creating a link to the silence creation form**

You can create a URL to the silence creation form with details already filled in. Operators can use this to suppress an alarm quickly during an operational event.

When creating a link to a silence form, use a `matchers` query parameter to specify the matching labels, and a `comment` query parameter to specify a comment. The `matchers` parameter requires one or more values in the form `[label][operator][value]`, separated by commas.

**Example URL**

To link to a silence form, with matching labels `severity=critical` and `cluster!~europe-.*`, with a comment that says `Silencing critical EU alerts`, use a URL like the following. Replace *mygrafana* with the hostname of your Grafana instance.

```
https://mygrafana/alerting/silence/new?matchers=severity%3Dcritical%2Ccluster!~europe-*&comment=Silence%20critical%20EU%20alert
```

To link to a new silence page for an external Alertmanager, add an `alertmanager` query parameter with the Alertmanager data source name, such as `alertmanager=myAlertmanagerdatasource`.

# Mute timings
<a name="v9-alerting-notification-muting"></a>

****  
This documentation topic is designed for Grafana workspaces that support **Grafana version 9.x**.  
For Grafana workspaces that support Grafana version 12.x, see [Working in Grafana version 12](using-grafana-v12.md).  
For Grafana workspaces that support Grafana version 10.x, see [Working in Grafana version 10](using-grafana-v10.md).  
For Grafana workspaces that support Grafana version 8.x, see [Working in Grafana version 8](using-grafana-v8.md).

A mute timing is a recurring interval of time when no new notifications for a policy are generated or sent. Use mute timings to suppress notifications during a specific, recurring period, such as a regular maintenance window.

Similar to silences, mute timings do not prevent alert rules from being evaluated, nor do they stop alert instances from being shown in the user interface. They only prevent notifications from being created.

You can configure Grafana managed mute timings as well as mute timings for an external Alertmanager data source.

**Mute timings compared to silences**

The following table highlights the differences between mute timings and silences.


| Mute timing | Silence | 
| --- | --- | 
|  Uses time interval definitions that can reoccur.  |  Has a fixed start and end time.  | 
|  Is created and then added to notification policies.  |  Uses labels to match against an alert to determine whether to silence or not.  | 
|  Works with Grafana alerting and external Alertmanagers.  |  Works only with external Alertmanagers.  | 

**To create a mute timing**

1. From your Grafana console, in the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page.

1. Choose **Notification policies**.

1. From the **Alertmanager** dropdown, select the Alertmanager you want to edit.

1. In the **Mute timings** section, choose the **Add mute timing** button.

1. Choose the time interval for which you want the mute timing to apply.

1. Choose **Submit** to create the mute timing.

**To add a mute timing to a notification policy**

1. Select the notification policy you would like to add the mute timing to, and choose the **Edit** button.

1. From the **Mute timings** dropdown, select the mute timings you would like to add to the policy.

   Choose the **Save policy** button.

**Time intervals**

A time interval is a definition for a range of time. If an alert is initiated during this interval it is suppressed. Ranges are supported using `:` (for example, `monday:thursday`). A mute timing can contain multiple time intervals. A time interval consists of multiple fields (details in the following list), all of which must match in order to suppress the alerts. For example, if you specify days of the week `monday:friday` and time range from 8:00-9:00, then alerts are suppressed from 8–9, Monday through Friday, but not, for example, 8–9 on Saturday.
+ **Time range** – The time of day to suppress notifications. Consists of two sub-fields, **Start time** and **End time**. An example time is `14:30`. Time is in 24 hour notation, in UTC.
+ **Days of the week** – The days of the week. Can be a single day, such as `monday`, a range, such as `monday:friday`, or a comma-separated list of days, such as `monday, tuesday, wednesday`.
+ **Months** – The months to select. You can specify months with numeric designations, or with the full month name, for example `1` or `january` both specify January. You can specify a single month, a range of months, or a comma-separated list of months.
+ **Days of the month** – The dates within a month. Values can range from `1` to `31`. Negative values specify days of the month in reverse order, so `-1` represents the last day of the month. Days of the month can be specified as a single day, a range of days, or a comma-separated list of days.
+ **Years** – The year or years for the interval. For example, `2023:2025`.

Each of these elements can be a list, and at least one item in the element must be satisfied to be a match. So if you set years to `2023:2025, 2027`, then it would be true during 2023, 2024, 2025, and 2027 (but not 2026).

If a field is left blank, any moment of time will match the field. A moment of time must match all fields to match a complete time interval.

If you want to specify an exact occurrence, specify all the fields needed to define it. For example, to create a time interval for the first Monday of the month in March, June, September, and December, between 12:00 and 24:00 UTC, your time interval specification could be:
+ Time range:
  + Start time: `12:00`
  + End time: `24:00`
+ Days of the week: `monday`
+ Months: `3, 6, 9, 12`
+ Days of the month: `1:7` (the first Monday of a month always falls within the first seven days)
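The matching rules above can be sketched in Python. This is an illustrative model only, not Grafana's implementation; the helper function and its field names are hypothetical. Each field left unset matches any moment in time, and every specified field must match for the interval to apply:

```python
from datetime import datetime, timezone

# Hypothetical sketch of time-interval matching: every specified field
# must match for a moment in time to fall inside the interval.
def in_time_interval(ts, *, time_range=None, weekdays=None,
                     months=None, monthdays=None, years=None):
    """Return True if ts (an aware UTC datetime) matches all given fields."""
    if time_range is not None:
        start, end = time_range                       # e.g. ("12:00", "24:00")
        to_min = lambda s: int(s[:2]) * 60 + int(s[3:])
        minutes = ts.hour * 60 + ts.minute
        if not (to_min(start) <= minutes < to_min(end)):
            return False
    if weekdays is not None:
        names = ["monday", "tuesday", "wednesday", "thursday",
                 "friday", "saturday", "sunday"]       # Python: Monday == 0
        if names[ts.weekday()] not in weekdays:
            return False
    if months is not None and ts.month not in months:
        return False
    if monthdays is not None and ts.day not in monthdays:
        return False
    if years is not None and ts.year not in years:
        return False
    return True

# The "first Monday of March, June, September, and December,
# 12:00-24:00 UTC" example from above:
spec = dict(time_range=("12:00", "24:00"), weekdays={"monday"},
            months={3, 6, 9, 12}, monthdays=set(range(1, 8)))

# 4 March 2024 was the first Monday of March 2024.
print(in_time_interval(datetime(2024, 3, 4, 13, 0, tzinfo=timezone.utc), **spec))
```

Note that the day-of-month range `1:7` combined with `monday` is what selects the *first* Monday; either field alone would match too many days.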

# View and filter by alert groups
<a name="v9-alerting-viewfiltergroups"></a>

Alert groups show grouped alerts from an Alertmanager instance. By default, alert rules are grouped by the label keys for the root policy in notification policies. Grouping related alert rules into a single alert group prevents duplicate notifications from being sent.

You can view alert groups and also filter for alert rules that match specific criteria.

**To view alert groups**

1. In the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page, which lists existing alerts.

1. Choose **Alert groups** to open the page listing existing groups.

1. From the **Alertmanager** dropdown, select an external Alertmanager as your data source.

1. From the **Custom group by** dropdown, select a combination of labels to view a grouping other than the default. This is useful when debugging and verifying your notification policy groupings.

If an alert does not contain the labels specified in either the root policy grouping or the custom grouping, the alert is added to a catch-all group with the header `No grouping`.
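The grouping behavior can be sketched as follows. This is an illustrative model under assumed data shapes (alerts as plain label dictionaries), not Grafana's implementation:

```python
from collections import defaultdict

# Illustrative sketch: alerts are bucketed by the values of the group-by
# label keys; alerts missing any of those labels fall into "No grouping".
def group_alerts(alerts, group_by):
    groups = defaultdict(list)
    for labels in alerts:
        if all(key in labels for key in group_by):
            key = tuple((k, labels[k]) for k in group_by)
        else:
            key = "No grouping"                 # catch-all group
        groups[key].append(labels)
    return dict(groups)

alerts = [
    {"alertname": "HighCPU", "cluster": "a"},
    {"alertname": "HighCPU", "cluster": "b"},
    {"alertname": "DiskFull"},                  # missing "cluster" label
]
print(group_alerts(alerts, ["alertname", "cluster"]))
```

The two `HighCPU` alerts land in separate groups because their `cluster` values differ, while the `DiskFull` alert, which lacks a `cluster` label, falls into the `No grouping` bucket.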

**To filter by label**
+ In **Search**, enter an existing label to view alerts matching the label.

  For example, `environment=production,region=~US|EU,severity!=warning`.
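The filter above uses Prometheus-style label matchers: `=` (equals), `!=` (not equals), `=~` (regular expression match), and `!~` (negative regular expression match). A minimal sketch of how such matchers could be evaluated against an alert's labels (illustrative only, not Grafana's implementation):

```python
import re

# Illustrative evaluator for Prometheus-style label matchers:
# =, !=, =~ (anchored regex), !~ (negative anchored regex).
def matches(matcher, labels):
    name, op, value = re.match(r"(\w+)(=~|!=|!~|=)(.*)", matcher).groups()
    actual = labels.get(name, "")
    if op == "=":
        return actual == value
    if op == "!=":
        return actual != value
    if op == "=~":
        return re.fullmatch(value, actual) is not None
    return re.fullmatch(value, actual) is None   # !~

alert_labels = {"environment": "production", "region": "EU",
                "severity": "critical"}

# The example filter from above, split on commas; all matchers must match.
filters = ["environment=production", "region=~US|EU", "severity!=warning"]
print(all(matches(m, alert_labels) for m in filters))
```

An alert is shown only when every matcher in the filter is satisfied, so the comma acts as a logical AND.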

**To filter by state**
+ In **States**, select from the **Active**, **Suppressed**, or **Unprocessed** states to view alerts matching your selected state. All other alerts are hidden.

# View notification errors
<a name="v9-alerting-viewnotificationerrors"></a>

View notification errors and understand why they failed to be sent or were not received.

**Note**  
This feature is only supported for Grafana Alertmanager.

**To view notification errors**

1. In the Grafana menu, choose the **Alerting** (bell) icon to open the **Alerting** page, which lists existing alerts.

1. Choose **Contact points** to see a list of the existing contact points.

   If any contact points are failing, a message in the right-hand corner of the screen indicates that there are errors, and how many.

1. Choose a contact point to view the details of the errors for that contact point.

   Error details are displayed when you hover over the **Error** icon.

   If a contact point has more than one integration, the errors for each integration are listed.

1. In the **Health** column, check the status of the notification.

   The status can be **OK**, **No attempts**, or **Error**.