

# Connect to Enterprise data sources
<a name="AMG-data-sources-enterprise"></a>

The following data sources are supported in workspaces that have been upgraded to include Amazon Managed Grafana Enterprise plugins. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).

Enterprise plugins are regularly updated. This includes updates to existing plugins and, sometimes, new data sources. The following documentation might not include all available data sources. For a list of the current Enterprise plugins supported by Amazon Managed Grafana, see [Grafana Enterprise plugins](https://grafana.com/docs/plugins/) in the *Grafana documentation*.

For workspaces that support version 9 and newer, Enterprise data sources are no longer installed by default. You must install the correct data source plugin. You can install plugins for all Enterprise data sources, including any that aren't listed here. You can also choose to update the version of a plugin that you already have installed. For more information about managing plugins, see [Extend your workspace with plugins](grafana-plugins.md).

**Topics**
+ [AppDynamics](appdynamics-AMG-datasource.md)
+ [Databricks](AMG-databricks-datasource.md)
+ [Datadog](AMG-datadog-datasource-plugin.md)
+ [Dynatrace](dynatrace-AMG-datasource.md)
+ [GitLab](gitlab-AMG-datasource.md)
+ [Honeycomb](honeycomb-AMG-datasource.md)
+ [Jira](jira-AMG-datasource.md)
+ [MongoDB](AMG-mongodb-datasource.md)
+ [New Relic](new-relic-data-source.md)
+ [Oracle Database](oracle-datasource-AMG.md)
+ [Salesforce](salesforce-AMG-datasource.md)
+ [SAP HANA](saphana-AMG-datasource.md)
+ [ServiceNow](grafana-enterprise-servicenow-datasource.md)
+ [Snowflake](snowflake-datasource-for-AMG.md)
+ [Splunk](splunk-datasource.md)
+ [Splunk Infrastructure Monitoring](AMG-datasource-splunkinfra.md)
+ [Wavefront](wavefront-datasource-for-AMG.md)

# Connect to an AppDynamics data source
<a name="appdynamics-AMG-datasource"></a>

 The AppDynamics data source for Amazon Managed Grafana enables you to query metrics from AppDynamics using its Metrics API and visualize them in Grafana dashboards. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Note on the data source configuration
<a name="note-on-the-datasource-config"></a>

 Use Server (proxy) access mode and basic authentication, to avoid CORS issues and to keep users from looking up your password. Remember that the username should take the form "user@account" (that is, your.name@customer1 or my.user@saas.account.name). 

 Configure the password using the following steps: 

1.  Navigate to [https://accounts.appdynamics.com/subscriptions](https://accounts.appdynamics.com/subscriptions) 

1.  Choose the link in the **Name** column on the row for your subscription. 

1.  Navigate to **License details** by choosing the tab at the top of the page. 

1.  The **Access Key** field has a **Show** button. Choose **Show** to display the access key. 

1.  Copy the access key into the **Password** field in the **Basic Auth Details** section on the configuration page in Grafana. 

 Set up a user and role for Amazon Managed Grafana using the following steps. 

1.  In AppDynamics, navigate to Settings, Administration. 

1.  Select the **Roles** tab, and choose the **+** button to create a new role; for example, `grafana_readonly`. 

1.  In the **Account** tab of the Create Role section, add the permission `View Business Flow`.

1.  In the **Applications** tab, check the **View** box to allow Grafana to view application data. 

1.  In the **Databases** tab, check the **View** box to allow Grafana to view database data. 

1.  In the **Analytics** tab, check the **Can view data from all Applications** box to allow Grafana to view application analytics data. 

1.  In the **Users** tab of the Administration page, create a new user; for example, `grafana`. Assign the new user (or a Group to which the user belongs) to the role you just created; for example, `grafana_readonly`.

## Templating
<a name="appdynamics-templating"></a>

 The currently supported template queries are: 

1.  `Applications` (All Applications) 

1.  `AppName.BusinessTransactions` (All BTs for the Application Name) 

1.  `AppName.Tiers` (All Tiers for the Application Name) 

1.  `AppName.Nodes` (All Nodes for the Application Name) 

1.  `AppName.TierName.BusinessTransactions` (All BTs for a specific Tier) 

1.  `AppName.TierName.Nodes` (All Nodes for a specific Tier) 

1.  `AppName.Path.<Any Metric Path>` (Any metric Path can be specified) 

## Legend keys
<a name="legend-keys"></a>

 The default legend key can be quite long, but you can customize this formatting. 

 The legend key can be prefixed with the application name by choosing the `App on legend` option. For example: `MyApp - Overall Application Performance|Average Response Time (ms)`. 

 If the query is for a singlestat or other panel where you cannot see the legend key, choose the **Show Metadata** option to see the legend key (also called an alias) for the query. 

 The Legend dropdown list has three options: `Full Path`, `Segments` and `Custom`. 

### Legend option – full path
<a name="legend-option---full-path"></a>

 The legend key is the full metric path; for example, `Overall Application Performance|Average Response Time (ms)`. 

### Legend option – segments
<a name="legend-option---segments"></a>

 The metric name is made up of segments. You can choose which segments to show. 

 For example, with a metric name: 

 `Errors|mywebsite|Error|Errors per Minute` 

 entering `2,4` in the **Segments** field returns `mywebsite|Errors per Minute`. 

 Indexing starts at 1, so `1` returns `Errors`. 
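The segment selection described above can be sketched in Python (a hypothetical illustration of the behavior, not plugin code):

```python
# Hypothetical sketch of how the Segments option selects parts of a metric
# name. Segment indexes are 1-based, as in the plugin's Segments field.
def select_segments(metric_name, segments):
    parts = metric_name.split("|")
    # Keep only the requested (1-based) segment positions, joined with "|".
    return "|".join(parts[i - 1] for i in segments)

metric = "Errors|mywebsite|Error|Errors per Minute"
print(select_segments(metric, [2, 4]))  # mywebsite|Errors per Minute
print(select_segments(metric, [1]))     # Errors
```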

### Legend option – custom
<a name="legend-option---custom"></a>

 Create a custom legend by combining text with the following aliasing patterns to mix in metric metadata. 
+  `{{app}}` returns the Application name 
+  `{{1}}` returns a segment from the metric path. 

   For example, the metric: `Overall Application Performance|Average Response Time (ms)` has two segments. `{{1}}` returns the first segment, `{{2}}` returns the second segment. 

 Examples of legend key patterns and the legend keys that are generated: 
+  `custom legend key` => `custom legend key` 
+  `App: {{app}} MetricPart2: {{2}}` => `App: myApp MetricPart2: Average Response Time (ms)` 
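The aliasing patterns above can be sketched as follows (a hypothetical Python illustration, not the plugin's implementation):

```python
import re

# Hypothetical sketch of the {{app}} / {{n}} aliasing patterns.
def render_legend(pattern, app_name, metric_path):
    segments = metric_path.split("|")
    out = pattern.replace("{{app}}", app_name)
    # {{1}}, {{2}}, ... refer to 1-based segments of the metric path.
    return re.sub(r"\{\{(\d+)\}\}", lambda m: segments[int(m.group(1)) - 1], out)

metric = "Overall Application Performance|Average Response Time (ms)"
print(render_legend("App: {{app}} MetricPart2: {{2}}", "myApp", metric))
# App: myApp MetricPart2: Average Response Time (ms)
```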

# Connect to a Databricks data source
<a name="AMG-databricks-datasource"></a>

The Databricks data source enables you to query and visualize Databricks data within Amazon Managed Grafana. It includes a SQL editor to format and color code your queries.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Adding a Databricks data source
<a name="AMG-databricks-add-datasource"></a>

Follow these steps to add a Databricks data source in the Grafana console.

**To add a Databricks data source**

1. Open the side menu by choosing the Grafana icon in the top header.

1. In the side menu, under the **Dashboards** link, select **Data Sources**.
**Note**  
If you don't see the **Data Sources** link, you do not have the `Admin` role for Grafana.

1. Choose the **+ Add data source** button in the top header. 

1. Select **Databricks** from the **Type** dropdown list.
**Note**  
If you don't see the **Databricks** option and you need it, you must upgrade to Grafana Enterprise.

1. Choose the options to connect to and edit your data.

## Notes when using the Databricks data source
<a name="AMG-databricks-notes"></a>

**Time series**

Time series visualizations are selectable when you add a `datetime` field to your query. This field will be used as the timestamp for the series. If the field does not include a specific time zone, Grafana will assume that the time is UTC.

**Multi-line time series**

To create a multi-line time series visualization, the query must include at least three fields in the following order.

1. A `datetime` field with an alias of `time`.

1. A value to `GROUP BY`.

1. One or more metric values to visualize.

The following is an example of a query that will return multi-line time series options.

```sql
SELECT log_time AS time, machine_group, avg(disk_free) AS avg_disk_free
FROM mgbench.logs1
GROUP BY machine_group, log_time
ORDER BY log_time
```

# Connect to a Datadog data source
<a name="AMG-datadog-datasource-plugin"></a>

 The Datadog data source enables you to visualize metrics from the Datadog monitoring service in Amazon Managed Grafana. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Usage
<a name="datadog-usage"></a>

### Caching
<a name="datadog-caching"></a>

 Large dashboards that make many queries can be rate limited by the Datadog API (reaching the maximum number of API calls per hour that the Datadog API allows). The caching feature caches unique queries for 60 seconds. You can change this interval to be longer or shorter on the configuration page. 

### Query editor
<a name="datadog-query-editor"></a>

 Select an aggregation and a metric. To filter the results, select one or more tags. 

 The Datadog data source supports all of the advanced functions that the Datadog query editor supports. Select a function from the dropdown list, and arrange the functions by choosing a function name.

 **Alias by field usage possibilities**: 
+  Enter the alias into the "Alias by" field. 
+  Use scoped variables: 
  +  `$__metric` = replaced with metric name 
  +  `$__display_name` = replaced with metric name 
  +  `$__expression` = replaced with full metric expression 
  +  `$__aggr` = replaced with metric aggregation function (for example, avg, max, min, sum) 
  +  `$__scope` = replaced with metric scope (for example, region, site, env, host) 
+  Use regular expressions: 
  +  Enter your regular expression into the "Alias RegExp" field, in `/your regexp here/flags` format. 
  +  If the "Alias by" field is empty, the RegExp group results are joined with a comma. Example with metric expression `avg:system.load.5{*}`: leave the "Alias by" field empty, enter `avg:(.+)\.(\d)` in the "Alias RegExp" field, and the result is `system.load, 5`. 
  +  Use `$<group_number>` variables in the "Alias by" field. Example with metric expression `avg:system.load.5{*}`: "Alias by" field input `$1: $2 seconds`, "Alias RegExp" field input `avg:(.+)\.(\d)`, result: `system.load: 5 seconds`. 
  +  Use `$0` to get the whole expression. Example with metric expression `datadog.dogstatsd.packet.count{*}`: "Alias by" field input `Expression: $0`, "Alias RegExp" field input `/DOGstatsd\.(.*)\.(.*){\*}/i`, result: `Expression: datadog.dogstatsd.packet.count{*}`. 

   Note: you’ll get an error if you use a nonexistent group number. 
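The alias examples above can be reproduced with ordinary regular expressions. The following is a hypothetical Python sketch of the idea, not the plugin's implementation:

```python
import re

# Hypothetical sketch of how "Alias RegExp" capture groups combine with
# the "Alias by" field.
def alias(expression, alias_by, alias_regexp):
    match = re.search(alias_regexp, expression)
    if not alias_by:
        # With an empty "Alias by" field, group results are joined.
        return ", ".join(match.groups())
    # $1, $2, ... refer to capture groups; $0 is the whole match.
    return re.sub(r"\$(\d+)", lambda m: match.group(int(m.group(1))), alias_by)

print(alias("avg:system.load.5{*}", "", r"avg:(.+)\.(\d)"))
# system.load, 5
print(alias("avg:system.load.5{*}", "$1: $2 seconds", r"avg:(.+)\.(\d)"))
# system.load: 5 seconds
```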

#### Metric arithmetic
<a name="datadog-metric-arithmetic"></a>

 To use metric arithmetic, set *Query type* to *Arithmetic*. Reference the metric that you want by using the `#` sign. For example, `#A * 2` doubles the result of query `A`. Arithmetic between two metrics works the same way: add the queries whose results you want to use in the calculation, and then reference them in a third query, such as `#A / #B`. 
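Conceptually, the arithmetic is applied point by point across the query results. A hypothetical Python sketch (invented sample values, not plugin code):

```python
# Hypothetical sketch of arithmetic between query results, as in `#A / #B`.
series = {
    "A": [10.0, 20.0, 30.0],   # results of query A
    "B": [2.0, 4.0, 5.0],      # results of query B
}

doubled = [a * 2 for a in series["A"]]                      # #A * 2
ratio = [a / b for a, b in zip(series["A"], series["B"])]   # #A / #B
print(doubled)  # [20.0, 40.0, 60.0]
print(ratio)    # [5.0, 5.0, 6.0]
```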

### Annotations
<a name="datadog-annotations"></a>

 An annotation is an event that is overlaid on top of graphs; examples of events include a deployment or an outage. With this data source, you can fetch events from Datadog and overlay them on graphs in Amazon Managed Grafana. Annotation events can be filtered by source, tag, or priority. 

### Templating
<a name="datadog-templating"></a>

 There are a few options for getting template variable values: metrics and tags. To fetch the list of available metrics, specify `*` in the *Query* field. 

 To return all tags, use the value `tag` or `scope`. 

 To return tags for a specified tag group, use one of the following default category values: 
+  `host` 
+  `device` 
+  `env` 
+  `region` 
+  `site` 
+  `status` 
+  `version` 

 For custom tag groups, enter the tag group name. For example, if your custom tag group name is `subscription_name`, enter that in the *Query* field. 

 Filter results by using the *Regex* field. Multi-value variables are supported when using tags; multiple selected tag values are converted into a comma-separated list of tags. 

#### Ad-hoc filters
<a name="datadog-ad-hoc-filters"></a>

 Grafana has a special type of template variable called *Ad-hoc filters*. This variable applies to *all* the Datadog queries in a dashboard, which allows you to use it as a quick filter. An ad-hoc variable for Datadog fetches all the key-value pairs from tags, for example, `region:east, region:west`, and uses them as query tags. To create this variable, select the *Ad-hoc filters* type and choose your Datadog data source. You can set any name for this variable. 
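The key-value tag pairs can be pictured as extra scope entries appended to every query. A hypothetical sketch (the tag-scope syntax shown is illustrative):

```python
# Hypothetical sketch: ad-hoc filter values are key:value tag pairs that
# get appended to a query's tag scope; not the plugin's implementation.
def apply_adhoc_filters(scope_tags, adhoc_pairs):
    # e.g. adhoc_pairs = ["region:east", "region:west"]
    return "{" + ",".join(scope_tags + adhoc_pairs) + "}"

print(apply_adhoc_filters(["env:prod"], ["region:east"]))
# {env:prod,region:east}
```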

# Connect to a Dynatrace data source
<a name="dynatrace-AMG-datasource"></a>

This is the data source for [Dynatrace](https://www.dynatrace.com/). To use this data source, you must have a Dynatrace account.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

**Known limitations**

Template variables can't be multi-select. Only single selection is supported.

Only v2 metric APIs are supported.

## Features
<a name="features"></a>

### Core features
<a name="core-features"></a>
+  Template Variables 
  +  Metric Names 
  +  Single selection only (**no multi-select**) 
  +  Ad-Hoc Filters 
+  Annotations 
  +  Not currently supported 
+  Aliasing 
  +  Metric Names 
  +  Aggregation 
  +  Display Name 
  +  Host 
  +  Description 
+  Alerting 
  +  Full alerting support 

### Dynatrace specific features
<a name="dynatrace-specific-features"></a>

 Supports both built-in and custom metrics using the Dynatrace metrics v2 API. For more information, see the Dynatrace documentation: [Metrics API v2](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/) and [Metric ingestion](https://www.dynatrace.com/support/help/how-to-use-dynatrace/metrics/metric-ingestion/). 

Depending on the metric, the API might support additional transformation options.

## Dynatrace permissions
<a name="dynatrace-permissions"></a>

 You need the following permissions in Dynatrace: 
+  Read metrics using API V2 (`metrics.read`) 
+  Read entities using API V2 (`entities.read`) 

## Get an API key from Dynatrace
<a name="dynatrace-apikey"></a>

To set up an API token, see [Dynatrace API - Tokens and authentication](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication/?api-token%3C-%3Epersonal-access-token=api-token).

Set the `metrics.read` and `entities.read` permissions for your API token.

### Configuration
<a name="configuration"></a>

1.  Choose **Settings/Data Sources** within the logical Grafana server UI and choose **Add data source**. 

1.  On the **Add data source** page, filter for **Dynatrace**, and select the Dynatrace plugin. 

1. Configuring a Dynatrace data source requires the following parameters: 
   +  `Name` - The name you want to apply to the Dynatrace data source (default: Dynatrace). 
   +  `Dynatrace API Type` - The type of Dynatrace instance that you’re connecting to. This is either `SaaS` or `Managed Cluster`. 
   +  `Dynatrace API Token` - This is the API token that you generated in the previous step. 

   The next settings depend on whether you use Dynatrace SaaS or a Managed Cluster:
   + In a SaaS example of `yfc55578.live.dynatrace.com`, your **Environment ID** would be `yfc55578`.
   + In the Managed example of `yd8888.managed-sprint.dynalabs.io/e/abc99984-3af2-55tt-72kl-0672983gc45`, your **Environment ID** would be `abc99984-3af2-55tt-72kl-0672983gc45` and your **Domain** would be `yd8888.managed-sprint.dynalabs.io`.

1.  After all of the configuration values have been set, choose **Save & Test** to validate the configuration and save your changes. 
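The SaaS and Managed examples in the steps above can be sketched as a small parsing helper (a hypothetical illustration; the data source itself does not run this code):

```python
# Hypothetical sketch of extracting the Environment ID (and, for Managed,
# the Domain) from a Dynatrace URL, following the examples above.
def parse_dynatrace_url(url):
    if "/e/" in url:  # Managed Cluster: <domain>/e/<environment-id>
        domain, env_id = url.split("/e/", 1)
        return {"Environment ID": env_id, "Domain": domain}
    # SaaS: <environment-id>.live.dynatrace.com
    return {"Environment ID": url.split(".")[0]}

print(parse_dynatrace_url("yfc55578.live.dynatrace.com"))
print(parse_dynatrace_url(
    "yd8888.managed-sprint.dynalabs.io/e/abc99984-3af2-55tt-72kl-0672983gc45"))
```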

### Query the data source
<a name="dynatrace-usage"></a>

Use the query editor to query Dynatrace metrics and problems. The query type can be `metric` or `problem`.

**Metric query type**
+ `Metric`— Select the metric that you want to see. To get the metric list from Dynatrace again, choose the **Refresh** button.
+ `Aggregations`— Select the aggregation that you want to use for a specific metric. Choose the aggregation value to change the aggregation type, or choose **+** to add another aggregation.
+ `Transformations`— You can select transformations in the query editor, and then enter the parameters for the selected transformation. Currently, only the merge transformation is supported. For more information about the merge transforms, see [Merge transformation](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/metric-selector/#merge-transformation).
+ `Filters`— The Dynatrace data source dynamically queries the appropriate filters for each metric. To add a filter, choose the **+** symbol next to the **Filters** label on the Dynatrace query editor, select which field to filter on, select the operator to use, and then select a value to filter by. The Dynatrace data source allows you to create Filter Groups that you can join together to create complex logical comparisons. For most use cases, Filter Groups are not required. When creating filters with Tags, regardless of the conjunction selected, Dynatrace will always use AND. Dynatrace does not support OR filters with Tags.
+ `Alias`— There are two different types of aliases you will encounter while using the Dynatrace data source. The first is a static alias. An alias of this type is available on every query that you build, and the name of the alias starts with a lowercase letter. The second is a dynamic alias, which changes based on the metric that you are using in your query, and the name of the alias starts with an uppercase letter. The Dynatrace plugin supports several different aliases: `Metric Names`, `Aggregation`, `Display Name`, `Host`, and `Description`.


|  Name  |  Value  | 
| --- | --- | 
|  {{name}}  |  builtin:apps.other.keyUserActions.reportedErrorCount.os  | 
|  {{aggregation}}  |  auto,value  | 
|  {{displayName}}  | Reported error count (by key user action, OS) [mobile, custom] | 

**Problems query type**
+ `Problem Query Type`— Select a problem query type. Currently, only the feed problem query type is supported. For information about the feed problem query type, see [Merge transformation](https://www.dynatrace.com/support/help/dynatrace-api/environment-api/metric-v2/metric-selector/#merge-transformation)
+ `Status Filter`— Filter the result problems by the status.
+ `Impact Filter`— Filter the result problems by the impact level.
+ `Severity Filter`— Filter the result problems by the severity level.
+ `Expand Details`— If set, includes related events in the response.

#### Using template variables
<a name="using-template-variables"></a>

 To add a new Dynatrace query variable, see [add a new template variable](variables-types.md#add-a-query-variable). Use your Dynatrace data source as your data source for the following available queries: 
+ `Query type`— Select a query type. The query type associates some data with some key or descriptor.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/grafana/latest/userguide/dynatrace-AMG-datasource.html)
+ `Regex`— (Optional) Filter out any of the returned values from your query with a regular expression.

**Note**  
`Multi-value` and `Include All option` are currently not supported by the Dynatrace data source.

After creating a variable, you can find it in the **Metric** drop-down menu. 

##### Import a dashboard for Dynatrace
<a name="dynatrace-import"></a>

To import a dashboard, see [Importing a dashboard](dashboard-export-and-import.md#importing-a-dashboard). Imported dashboards can be found in **Configuration** > **Data Sources** > select your Dynatrace data source > select the **Dashboards** tab to see available pre-made dashboards.

# Connect to a GitLab data source
<a name="gitlab-AMG-datasource"></a>

The GitLab data source allows you to keep track of detailed GitLab statistics, such as top contributors, commits per day, or deployments per day. You can also use template variables, such as projects, to set up filters for your dashboards. You can combine data from the GitLab API with data from other sources.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Known limitations
<a name="gitlab-known-limitations"></a>

Alerting is not yet supported on this plugin, because transformations are not supported in alert queries, and transformations are the only way to obtain meaningful aggregate metrics from raw GitLab API data.

## Adding the data source
<a name="gitlab-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 
**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

1.  Select **GitLab** from the list of data sources. 

1. Enter the following information:
   + For **Name**, enter a name for this GitLab data source.
   + For **URL**, enter the root URL for your GitLab instance, such as **https://gitlab.com/api/v4**.
   + For **Access token**, enter your GitLab personal access token.

## Query the GitLab data source
<a name="gitlab-query"></a>

From the GitLab Query Editor you can select different resource types, such as commits, issues, or releases.

**Filter and view projects**

1.  From the dropdown menu, choose **Projects**. 

1.  (Optional) Filter by the projects that you own. 

1.  Use the dropdown and select **Yes** or **No** to filter the results. 
**Note**  
 Fetching all projects (**Owned = No**) can take a long time. 

**Filter and view commits**

1.  From the dropdown menu, choose **Commits**. 

1.  Use the input field to add the project ID. 

1.  (Optional) To filter by branch or tag, use the input field to add a branch or tag reference. 

**Filter and view issues**

1.  From the dropdown menu, choose **Issues**. 

1.  Use the input field to add the project ID. 

1.  (Optional) To filter by title/description, use the input field to search issues based on their **title** and **description**. 

**Filter and view deployments**

1.  From the dropdown menu, choose **Deployments**. 

1.  Use the input field to add the project ID. 

1.  (Optional) To filter by environment/status, use the input fields. The **status** attribute can be one of the following values: `created`, `running`, `success`, `failed`, or `canceled`. 

**View labels**

1.  From the dropdown menu, choose **Labels**. 

1.  Use the input field to add the project ID. 

## Templates and variables
<a name="gitlab-templates"></a>

To add a new GitLab query variable, see [Adding a query variable](variables-types.md#add-a-query-variable). Use your GitLab data source as the data source. Choose a resource type: **Releases**, **Projects**, or **Labels**.

To get a dynamic list of projects, labels, and so on to choose from, create a Query type variable. Query type variables use the GitLab Query Editor to query and return projects, labels, and so on. The following example creates a Project variable to parameterize your queries.

**Create a Project variable to parameterize your queries**

1.  Add a variable of type **Query** named **project**. 

1.  Select your GitLab data source and refresh **On Dashboard Load**. 

1.  Select the **Projects** resource type, **Yes** for **Owned**, **name** for **display field**, and **id** for **value field**. 

1. Choose **Update** to add the variable to the dashboard.

1. Add a new panel to the dashboard and use **$project** as the project ID.

   Now, when choosing from the dropdown, you get the results that belong to that project.

## Using transformations from Grafana to answer common questions
<a name="gitlab-transformations"></a>

Now that you can perform basic GitLab queries to find commits, issues, and so on, you can use transformations to visualize, aggregate, group, and join datasets, along with many other types of transformations that turn simple results into answers for complex questions. The following are a few common questions and how to use transformations to answer them.

**How many commits/issues/deployments per day in my project?**

1.  Add a query. Select **Commits** for the resource type and add the project ID. 

1.  Add a new **Group by** transformation: for **Group by**, select **created_at_date**, and then calculate **(Count)=id**. 

1. Choose the **Graph** visualization.
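The group-and-count step above amounts to counting rows per date. A hypothetical Python sketch with invented sample commit rows (field names follow the transformation described above):

```python
from collections import Counter

# Hypothetical sketch of "Group by created_at_date, calculate (Count)=id"
# applied to raw commit rows; the sample data is invented.
commits = [
    {"id": "a1", "created_at_date": "2023-05-01"},
    {"id": "b2", "created_at_date": "2023-05-01"},
    {"id": "c3", "created_at_date": "2023-05-02"},
]
per_day = Counter(row["created_at_date"] for row in commits)
print(dict(per_day))  # {'2023-05-01': 2, '2023-05-02': 1}
```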

**What is the average time to close issues in my project?**

1.  Add a query. Select **Issues** for the resource type and add the project ID. 

1.  Add a new **Add field from calculation** transformation: for **Mode**, select **Binary Operation**; for **Operation**, select **closed_at - created_at**; and for **Alias**, choose **resolution_time**. 

1.  Add a new **Add field from calculation** transformation: for **Mode**, select **Binary Operation**; for **Operation**, select **resolution_time / 86400000**; and for **Alias**, choose **resolution_time**. 

   For **Replace all fields**, choose **True**.

1. Choose the **Stat** visualization.
   + Show = Calculate
   + Calculation = Mean
   + Fields = **resolution_time**
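The two calculation steps above convert a millisecond timestamp difference into days, then average the result. A hypothetical Python sketch with invented timestamps:

```python
from statistics import mean

# Hypothetical sketch of the transformation chain above:
# closed_at - created_at gives milliseconds; dividing by 86400000 gives days.
issues = [
    {"created_at": 1_684_000_000_000, "closed_at": 1_684_172_800_000},  # 2 days
    {"created_at": 1_684_000_000_000, "closed_at": 1_684_432_000_000},  # 5 days
]
resolution_days = [
    (i["closed_at"] - i["created_at"]) / 86_400_000 for i in issues
]
print(mean(resolution_days))  # 3.5
```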

# Connect to a Honeycomb data source
<a name="honeycomb-AMG-datasource"></a>

The Honeycomb data source allows you to query and visualize Honeycomb metrics and link to Honeycomb traces from within Amazon Managed Grafana.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Known limitations
<a name="honeycomb-known-limitations"></a>
+  This data source does not support ad-hoc queries. 
+  Because of API limitations, the variable editor can only return the first 1000 unique values for a selected column. 
+  Because of API limitations, the data source can query only the last 7 days of data. 
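Because only the last 7 days of data can be queried, a requested time range effectively gets clamped. A hypothetical Python sketch of that behavior (not the data source's actual code):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: clamp a requested start time to the last 7 days,
# mirroring the API limitation described above.
def clamp_start(start, now=None):
    now = now or datetime.now(timezone.utc)
    earliest = now - timedelta(days=7)
    return max(start, earliest)

now = datetime(2023, 5, 10, tzinfo=timezone.utc)
print(clamp_start(datetime(2023, 4, 1, tzinfo=timezone.utc), now))
# 2023-05-03 00:00:00+00:00
```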

## Adding the data source
<a name="honeycomb-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 

1.  Select **Honeycomb** from the list of data sources. 

**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

**Honeycomb settings**


|  Name  |  Description  | 
| --- | --- | 
|  Name  |  The data source name. This is how you see the data source in panels, queries, and Explore.  | 
|  Honeycomb API key  |  The API key that you obtained from Honeycomb.  | 
|  URL  |  The URL of the Honeycomb API. For example, https://api.honeycomb.io.  | 
|  Team  |  The Honeycomb team associated with the API key.  | 

## Query the Honeycomb data source
<a name="honeycomb-query"></a>

To query metrics, enter values into the editor fields:
+  Select a dataset. 
+  The default query is a `COUNT` over the selected dataset. 
+  To refine the query, select values for any of the remaining fields, such as **Visualization**, **Where**, **Constraint**, **Group by**, **Order by**, or **Limit**. 

## Templates and variables
<a name="honeycomb-templates"></a>

To add a new Honeycomb query variable, see [Adding a query variable](variables-types.md#add-a-query-variable).

You can create variables containing datasets, columns, or column values.
+  If no dataset is selected, the variable will contain datasets. 
+  If only a dataset is selected, the variable will contain column names. 
+  If both a dataset and a column are selected, the variable will contain column values. Column values can be further constrained using the **Where** fields in the editor. 

## View query in Honeycomb UI
<a name="honeycomb-view"></a>

To see the query you have created in the Honeycomb UI from the dashboard panel, choose any point in the graph and choose **Open in Honeycomb**. 

To see the query you have created in the Honeycomb UI from the Query Editor, choose **Open in Honeycomb**. 

## Import a dashboard for Honeycomb
<a name="honeycomb-import"></a>

To import a dashboard, see [Importing a dashboard](dashboard-export-and-import.md#importing-a-dashboard). 

To find your imported dashboards, choose **Configuration**, **Data sources**. 

To see the available pre-made dashboards, choose the Honeycomb data source and choose the **Dashboards** tab. 

# Connect to a Jira data source
<a name="jira-AMG-datasource"></a>

Get the whole picture of your development process by combining issue data from Jira with application performance data from other sources.
+ Create annotations based on issue creation or resolution, to see the relationship between issues and metrics.
+ Track detailed Jira stats, such as mean time to resolution and issue throughput.

To use the Jira data source, you need an Atlassian account with access to a Jira project.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Known limitations
<a name="jira-known-limitations"></a>

Custom field types from Jira addons might not be supported.

## Adding the data source
<a name="jira-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 
**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

1.  Select **Jira** from the list of data sources. 

1. Enter the following information:
   + For **Name**, enter a name for this Jira data source.
   + For **URL**, enter the root URL for your Atlassian instance, such as **https://bletchleypark.atlassian.net**.
   + For **User**, enter an email address for the user/service account.
   + For **API token**, enter an API token generated for the user.

## Query the Jira data source
<a name="jira-query"></a>

From the Jira Query Editor you can select fields and query issues.

The Jira data source queries Jira for issues, which can represent bugs, user stories, support tickets, or other tasks in Jira.

**Filter and view issues**

1.  Choose the **Fields** dropdown and use type-ahead to select any of the fields in your Jira instance, including custom fields. Some fields to try: 
   + **Summary**— The name of the issue
   + **Epic Name**— The epic that an issue belongs to
   + **Story Point Estimate**— The number of story points that the team has estimated for an issue

1.  Filter or sort the issues. To do so, enter any valid JQL expression that filters or sorts the issues based on any of their fields, such as **Project**, **Assignee**, or **Sprint**, using the Atlassian query language, JQL. 

From here, you can display your data in a table or use Grafana transformations to manipulate that issue data, run calculations, or turn the data into a time series graph. For more information, see [Applying a transformation](panel-transformations.md#apply-a-transformation).
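For example, the following JQL expression (the project and status names are hypothetical placeholders) returns a project's unresolved issues, newest first:

```
project = "Your Project" AND status != Done ORDER BY created DESC
```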

## Time series query
<a name="jira-timeseries-query"></a>

To show time series data, choose a **Date** field along with a numeric field, then switch to graph visualization. For example: **Sprint Start Date**, **Story point estimate**.

On its own, the preceding example is not very useful. The numeric field can be (and most likely will be) calculated with transformations. For example, using the **Group By** transformation to group by **Sprint Start Date** and total the **Story point estimate** produces a visualization of story points per sprint over time. For more information about transformations, see [Applying a transformation](panel-transformations.md#apply-a-transformation). 

## Templates and variables
<a name="jira-templates"></a>

To add a new Jira query variable, see [Adding a query variable](variables-types.md#add-a-query-variable). Use your Jira data source as the data source.

You can define variables on your dashboards and reference them in JQL expressions. For example, you can create a project status dashboard and choose between projects, or an epic status dashboard and choose different epics, or a task status dashboard and choose different assignees.
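As a sketch, a dashboard variable named `project` (an assumed name) can then be referenced directly in a panel's JQL filter:

```
project = $project AND status = "In Progress" ORDER BY priority DESC
```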

To get a dynamic list of projects, epics, assignees, and so on to choose from, create a Query type variable. Query type variables use JQL to query issues and return Projects, Epics, Assignees, or anything related to issues. The following is an example:

**Create an Assignee variable to get the status of issues by Assignee**

1.  Add a variable of type **Query** named **assignee**. 

1.  Select **Field: Assignee**. 

1.  (Optional) Add a JQL filter **project = 'your project'**. 

1.  Choose **Run** to see a list of Assignees. 

1. Choose **Update** to add the variable to the dashboard.

1. Add a new panel to the dashboard and edit the JQL to filter using your new variable: **assignee = $assignee**.

   Now, when choosing from the dropdown, you see only the issues assigned to that user.

Multi-value variables allow selecting multiple options and can be used as part of an IN clause. For example, **assignee IN ($assignee)**.
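Putting this together, a sketch of a panel filter that combines a hypothetical project name with a multi-value assignee variable:

```
project = "Your Project" AND assignee IN ($assignee) ORDER BY updated DESC
```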

## Macros
<a name="jira-macros"></a>

Macros are variables that reference the dashboard time window, so that you can filter issues to only those within the range of the dashboard window. There are two macros: 
+ `$__timeFrom`
+ `$__timeTo`

The following example JQL query filters issues created within the dashboard time window: `createdDate >= $__timeFrom AND createdDate <= $__timeTo`

## Get the most out of the data source
<a name="jira-getmost"></a>

Using Grafana's transformations and other built-in features can help you view your Jira data meaningfully.

### Using transformations to augment JQL
<a name="gitlab-transformations-JQL"></a>

While there are many transformations in Grafana to choose from, the following provide a powerful augmentation that gives JQL some of the features and power of SQL.

**Group By** This transformation provides a key feature that is not part of the standard Jira JQL syntax: Grouping. Using the **Group By** transformation, you can group by Sprints or other Issue fields, and aggregate by group to get metrics like velocity and story point estimates vs actual completed in a Sprint.

**Outer Join** Similar to SQL joins, you can join 2 or more queries together by common fields. This provides a way to combine datasets from queries and use other transformations to calculate values from multiple queries/datasets.

**Add Field from Calculation** Similar to SQL expressions, this transformation allows adding new fields to your dataset based on calculations of other fields. The fields used in the calculation can be from a single query or from queries you've joined together. You can also chain together calculations and perform calculations from calculated fields.

### Using transformations from Grafana to answer common questions
<a name="gitlab-transformations-common"></a>

You can use Transformations to visualize, aggregate, group, and join datasets, along with many other types of transformations to transform simple results into answers for complex questions.

**How do I show Velocity per Sprint?**

1.  Select Fields: **Sprint Name**, **Story point estimate**. 

1.  Add a JQL filter: `project = "Your Project" AND type != epic AND status = done order by created ASC` 

1.  Add a **Group By** transformation: 
   + Sprint Name | Group By
   + Story Point Estimate | Calculate | Total

1. Choose the **Bar Gauge** visualization.

**How do I show what was Completed vs Estimated in a Sprint?**

1.  Add a query. First, select Fields: **Sprint Name**, **Sprint Start Date**, **Story point estimate**. 

   Then add a JQL filter: `project = 'Your Project' AND type != epic` 

1.  Add a second query. First, select Fields: **Sprint Name**, **Sprint Start Date**, **Story point estimate**. 

   Then add a JQL filter: `project = 'Your Project' AND type != epic AND status = done` 

1.  Add a **Group By** transformation: 
   + Sprint Name | Group By
   + Sprint Start Date | Group By
   + Story Point Estimate | Calculate | Total

1. Choose the **Graph** visualization.

**What is the Average time to complete issues in my project?**

1.  Add a query. First, select Fields: **Created**, **Status Category Changed**. 

   Then add a JQL filter: `project = 'Your Project' AND type != epic AND status = done` 

1.  Add a transformation: **Add field from calculation**
   + Mode = Reduce Row
   + Calculation = Difference

1.  Add a transformation: **Add field from calculation**
   + Mode = Binary Operation
   + Operation = Difference / 86400000
   + Alias = Days

1.  Add a transformation: **Organize fields**
   + Hide the Difference field

1.  Add a transformation: **Filter data by values**
   + Filter Type = Include
   + Conditions = Match any
     + Field = Days | Match = Is greater | Value = 1

1.  Add a transformation: **Reduce**
   + Mode = Series to Rows
   + Calculations = mean

1. Choose the **Stat** visualization.

# Connect to a MongoDB data source
<a name="AMG-mongodb-datasource"></a>

 The MongoDB Data source enables you to visualize data from MongoDB in Amazon Managed Grafana. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Usage
<a name="mongo-usage"></a>

### Query editor
<a name="mongo-query-editor"></a>

 The query editor supports the same syntax as the MongoDB Shell, with the following limitations: 
+ You can run only one command or query.
+ Only read commands are supported: **find** and **aggregate**.
+ *Most* object constructors are not supported (with the exception of **ISODate**, which is supported).

 The editor expands upon the MongoDB Shell syntax in the following ways: 
+  **Database selection** – You can supply the name of the database in place of the normal "db": 
**Note**  
You can still use "db". It will refer to the default database in your connection string.

  ```
  sample_mflix.movies.find()
  ```
+  **Aggregate sorting** – Normally sorting happens with a step within the aggregate pipeline, however the MongoDB Atlas free tier doesn’t allow sorting. We have expanded the syntax to allow it for those using the free tier. 
**Note**  
MongoDB doesn’t perform the sort with this syntax. The sort happens after the results are queried from the collection.

  ```
  sample_mflix.movies.aggregate({}).sort({"time": 1})
  ```
+  With a blank editor, **Ctrl + Space** will show a selection of all the available databases. 
+  Entering a dot after the database will show a selection of all the available collections for that database. 
+  Entering a dot after the collection will show the available query methods. 
+  Entering a dot after the query method will show additional functions: sort/limit. 
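Typing out the full chain with the editor's completions might produce a query like the following sketch (the `sample_mflix` database is an assumption, borrowed from the examples in this section):

```
sample_mflix.comments.find({"movie_id": {"$ne": null}}).sort({"date": -1}).limit(10)
```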

#### Running the query
<a name="mongo-running-the-query"></a>

 Press **Cmd + S** to run the query. 

### Time series
<a name="mongo-time-series"></a>

 When visualizing time series data, the plugin needs to know which field to use as the time. Simply project the field with a name alias of "time". The field data type must be a date. 
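For example, if the collection already contains a date-typed field, a minimal sketch (assuming the `sample_mflix` sample dataset used elsewhere in this section) projects it under the alias "time":

```
sample_mflix.movies.aggregate([
  {"$match": {"released": {"$ne": null}}},
  {"$project": {"_id": 0, "time": "$released", "title": 1}}
]).sort({"time": 1})
```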

 You can coerce non-date data types to a date, which allows you to use non-date fields as the time series time. The following example shows how to convert the int field "year" to a date that is projected as "time", using the MongoDB `$dateFromParts` pipeline operator. 

```
sample_mflix.movies.aggregate([
{"$match": { "year": {"$gt" : 2000} }},
{"$group": { "_id": "$year", "count": { "$sum": 1 }}},
{"$project": { "_id": 0, "count": 1, "time": { "$dateFromParts": {"year": "$_id", "month": 2}}}}
]
).sort({"time": 1})
```

### Diagnostics
<a name="mongo-diagnostics"></a>

 [Diagnostic Commands](https://docs.mongodb.com/manual/reference/command/nav-diagnostic/) 

 The following diagnostic commands are currently supported: "stats", "serverStatus", "replSetGetStatus", "getLog", "connPoolStats", "connectionStatus", "buildInfo", "dbStats", "hostInfo", "lockInfo" 

 Examples: 

```
admin.connectionStatus()  // run the connectionStatus command
admin.connectionStatus({"authInfo.authenticatedUserRoles": 1})  // run and only return the "authInfo.authenticatedUserRoles" field
admin.connPoolStats({arg: "pool"})  // run the connPoolStats command and pass 1 argument
admin.serverStatus({args: {repl: 0, metrics:0}})  // run the serverStatus command and pass multiple args
```

### Macros
<a name="mongo-macros"></a>

 You can reference the dashboard time range in your queries.
+ ` $__timeFrom `— a macro that references the dashboard start time
+ ` $__timeTo `— a macro that references the dashboard end time

```
sample_mflix.movies.find({released: {$gt: "$__timeFrom"}}).sort({year: 1})
```

#### Template variables
<a name="mongo-variables"></a>

The MongoDB data source supports the idea of "compound variables", which enable you to use one variable as multiple variables to perform complex multi-key filters.

To create a compound variable, use the naming convention of breaking the variable name up with underscores (it must start with an underscore): `_var1_var2`. When queried, the response must be in the format `val1-val2`.

**Example: I want to filter results on both movie name and year.**

1. Create a variable of type Query: `_movie_year`

1. Set the variable query to a query that will return an array of items with one movie-year property, as shown in the following example.

   ```
   // Example
   sample_mflix.movies.aggregate([
     {"$match": {year: {"$gt": 2011}}},
     {"$project": {_id: 0, movie_year: {"$concat": ["$title", " - ", {"$toString": "$year"}]}}}
   ])
   ```

   ```
   // Example response
   // [{"movie-year": "Ted - 2016"}, {"movie-year": "The Terminator - 1985"}]
   ```

1. Now, in your query, you can reference "Movie" and "Year" as separate template variables, using the standard `$variable` syntax (for example, `$movie` and `$year`). 

##### Using ad-hoc filters
<a name="mongo-adhoc"></a>

In addition to the standard "ad-hoc filter" type variable of any name, you must create a second helper variable. It should be a "constant" type with the name `mongodb_adhoc_query` and a value compatible with the query editor. The query result is used to populate the selectable filters. You can choose to hide this variable from view, because it serves no further purpose.

```
sample_mflix.movies.aggregate([
  {"$group": {"_id": "$year"}},
  {"$project": {"year": "$_id", "_id": 0}}
])
```

# Connect to a New Relic data source
<a name="new-relic-data-source"></a>

 This section covers New Relic [APM](https://newrelic.com/products/application-monitoring) and [Insights](https://newrelic.com/products/insights) for Grafana. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Features
<a name="newrelic-features"></a>
+  Template variables 
  +  Metric names 
  +  Metric values 
+  Annotations 
+  Aliasing 
  +  Metric names 
  +  Metric values 
+  Ad-hoc filters 
  +  Not currently supported 
+  Alerting 

## Configuration
<a name="newrelic-configuration"></a>

 Add the data source, filling out the fields for your [admin API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#admin), [personal API key](https://docs.newrelic.com/docs/apis/get-started/intro-apis/types-new-relic-api-keys#personal-api-key) and [account ID](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/account-id). 

## Usage
<a name="newrelic-usage"></a>

### Service types
<a name="newrelic-service-types"></a>
+  **Metrics** – for querying New Relic APM via New Relic’s [REST API](https://docs.newrelic.com/docs/apis/rest-api-v2). 
+  **Insights** – for querying New Relic Insights via [NRQL](https://docs.newrelic.com/docs/insights/nrql-new-relic-query-language/nrql-resources/nrql-syntax-components-functions). 

### Aliases
<a name="newrelic-aliases"></a>

 You can combine plaintext with the following variables to produce custom output. 


|  Variable  |  Description  |  Example value  | 
| --- | --- | --- | 
|  `$__nr_metric`  |  Metric name  |  CPU/User time  | 
|  `$__nr_metric_value`  |  Metric values  |  average_value  | 

For example:

```
Server: $__nr_server Metric: $__nr_metric
```

### Templates and variables
<a name="newrelic-templates-and-variables"></a>

1.  Create a template variable for your dashboard. For more information, see [Templates and variables](templates-and-variables.md). 

1.  Select the "Query" type. 

1.  Select the "New Relic" data source. 

1.  Formulate a query using relative [REST API](https://docs.newrelic.com/docs/apis/rest-api-v2) endpoints (excluding file extensions). 

List of available applications:

```
applications
```

List of available metrics for an application:

```
applications/{application_id}/metrics
```

### NRQL macros
<a name="nrql-macros"></a>

 To improve the writing experience when creating New Relic Query Language (NRQL) queries, the editor supports predefined macros: 
+  `$__timeFilter` (or `[[timeFilter]]`) will interpolate to `SINCE <from> UNTIL <to>` based on your dashboard’s time range. 

Example:

```
SELECT average(value) FROM $event_template_variable $__timeFilter TIMESERIES
```

 For further hints on how to use macros and template variables, refer to the editor’s help section. 

### Alert events
<a name="newrelic-alert-events"></a>

 Select your New Relic data source and set additional filters. Without any filters set, all events will be returned. 

 If you want to filter events by *Entity ID*, use template variables, because then you can select the entity name instead of the ID. For example, to filter events for a particular application, create a variable `$app` that retrieves a list of apps, and use it as an *Entity ID* filter. 

### Deployment events
<a name="newrelic-deployment-events"></a>

 *Application ID* is a required field. 

# Connect to an Oracle Database data source
<a name="oracle-datasource-AMG"></a>

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Adding the data source
<a name="datasource-configuration"></a>

1.  Select **Data sources** on the left panel of Grafana. 

1.  Select **Add data source**. 

1.  Enter **oracle** to find the data source. 

1.  Enter the Oracle server details: a hostname (or IP address) along with the port number, and the username and password to connect. 

   With the **tnsnames** option toggled on, any valid entry found in your tnsnames.ora configuration file can be used instead, along with basic authentication. 

   You can also use Kerberos for authentication. See the Kerberos-specific setup guide for details on how to configure the OS or Docker container to use Kerberos. 

1.  Optionally, change the time zone used to connect to the Oracle server and to be used by time zone aware macros. The default setting is UTC. 

1.  Save and test the data source. You should see a green message with "Database Connection OK". 

## Usage
<a name="usage-4"></a>

### Macros
<a name="macros-1"></a>

 To simplify syntax and to allow for dynamic parts, such as date range filters, the query can contain macros. The column name must be contained within double-quotes (`"`). 


|  Macro example  |  Description  | 
| --- | --- | 
|  `$__time(dateColumn)`  |  Will be replaced by an expression to rename the column to `time`. For example, `dateColumn as time`.  | 
|  `$__timeEpoch(dateColumn)`  |  Will be replaced by an expression to rename the column to `time` and convert the value to a unix timestamp (in milliseconds).  | 
|  `$__timeFilter(dateColumn)`  |  Will be replaced by a time range filter using the specified column name. For example, `dateColumn BETWEEN TO_DATE('19700101','yyyymmdd') + (1/24/60/60/1000) * 1500376552001 AND TO_DATE('19700101','yyyymmdd') + (1/24/60/60/1000) * 1500376552002`.  | 
|  `$__timeFrom()`  |  Will be replaced by the start of the currently active time selection converted to the `DATE` data type. For example, `TO_DATE('19700101','yyyymmdd') + (1/24/60/60/1000) * 1500376552001`.  | 
|  `$__timeTo()`  |  Will be replaced by the end of the currently active time selection converted to the `DATE` data type.  | 
|  `$__timeGroup(dateColumn,'5m')`  |  Will be replaced by an expression usable in a GROUP BY clause.  | 
|  `$__timeGroup(dateColumn,'5m', fillvalue)`  |  Same as above, but providing a fill value of NULL or a floating value automatically fills empty series in the time range with that value. For example, `$__timeGroup("createdAt", '1m', 0)`.  | 
|  `$__timeGroup(dateColumn,'5m', NULL)`  |  Same as above, but NULL is used as the value for missing points.  | 
|  `$__timeGroup(dateColumn,'5m', previous)`  |  Same as above, but the previous value in that series is used as the fill value; if no value has been seen yet, NULL is used.  | 
|  `$__unixEpochFilter(dateColumn)`  |  Will be replaced by a time range filter using the specified column name, with times represented as unix timestamps (in milliseconds). For example, `dateColumn >= 1500376552001 AND dateColumn <= 1500376552002`.  | 
|  `$__unixEpochFrom()`  |  Will be replaced by the start of the currently active time selection as a unix timestamp. For example, 1500376552001.  | 
|  `$__unixEpochTo()`  |  Will be replaced by the end of the currently active time selection as a unix timestamp. For example, 1500376552002.  | 

 The plugin also supports notation using braces `{}`. Use this notation when queries are needed inside parameters. 

**Note**  
Use one notation type per query. If the query needs braces, all macros in the query must use braces. 

```
$__timeGroup{"dateColumn",'5m'}
$__timeGroup{SYS_DATE_UTC("SDATE"),'5m'}
$__timeGroup{FROM_TZ(CAST("SDATE" as timestamp), 'UTC'), '1h'}
```

 The query editor has a **Generated SQL** link that shows up after a query has run, while in panel edit mode. When you choose the link, it expands and shows the raw interpolated SQL string that was run. 

### Table queries
<a name="table-queries"></a>

 If the **Format as** query option is set to **Table**, you can write any type of SQL query. The table panel automatically shows the results of whatever columns and rows your query returns. You can control the names of the table panel columns by using the regular `as` SQL column selection syntax. 
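For example, a sketch of a table query (the `test_data` table and its `value_double` and `time_date_time` columns are taken from the time series example later in this section; the `hostname` column is an assumption):

```
SELECT
  "hostname" AS "Host",
  "value_double" AS "Value"
FROM test_data
WHERE $__timeFilter("time_date_time")
```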

### Time series queries
<a name="time-series-queries"></a>

 If you set **Format as** to **Time series**, for use in Graph panel for example, the query must return a column named `time` that returns either a SQL datetime or any numeric data type representing unix epoch in seconds. Grafana interprets DATE and TIMESTAMP columns without explicit time zone as UTC. Any column except `time` and `metric` is treated as a value column. You can return a column named `metric` that is used as metric name for the value column. 

 The following code example shows the `metric` column. 

```
SELECT
  $__timeGroup("time_date_time", '5m') AS time,
  MIN("value_double"),
  'MIN' as metric
FROM test_data
WHERE $__timeFilter("time_date_time")
GROUP BY $__timeGroup("time_date_time", '5m')
ORDER BY time
```

### More queries – using oracle-fake-data-gen
<a name="more-queries---using-oracle-fake-data-gen"></a>

```
SELECT
  $__timeGroup("createdAt", '5m') AS time,
  MIN("value"),
  'MIN' as metric
FROM "grafana_metric"
WHERE $__timeFilter("createdAt")
GROUP BY $__timeGroup("createdAt", '5m')
ORDER BY time
```

 The following code example shows a Fake Data time series. 

```
SELECT
  "createdAt",
  "value"
FROM "grafana_metric"
WHERE $__timeFilter("createdAt")
ORDER BY "createdAt" ASC
```

```
SELECT
  "createdAt" as time,
  "value" as value
FROM "grafana_metric"
WHERE $__timeFilter("createdAt")
ORDER BY time ASC
```

 The following example shows a useful table result. 

```
select tc.table_name Table_name
,tc.column_id Column_id
,lower(tc.column_name) Column_name
,lower(tc.data_type) Data_type
,nvl(tc.data_precision,tc.data_length) Length
,lower(tc.data_scale) Data_scale
,tc.nullable nullable
FROM all_tab_columns tc
,all_tables t
WHERE tc.table_name = t.table_name
```

### Templating
<a name="templating-3"></a>

 Instead of hardcoding values such as server, application, and sensor names in your metric queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. These dropdown boxes make it easy to change the data being displayed in your dashboard. 

#### Query variable
<a name="query-variable-1"></a>

 If you add a template variable of the type `Query`, you can write an Oracle query that returns things such as measurement names, key names, or key values, which are shown as a dropdown select box. 

 For example, you can have a variable that contains all values for the `hostname` column in a table if you specify a query like this in the templating variable *Query* setting. 

```
SELECT "hostname" FROM host
```

 A query can return multiple columns and Grafana will automatically create a list from them. For example, the following query will return a list with values from `hostname` and `hostname2`. 

```
SELECT "host.hostname", "other_host.hostname2" FROM host JOIN other_host ON host.city = other_host.city
```

 To use time range dependent macros such as `$__timeFilter("time_column")` in your query the refresh mode of the template variable needs to be set to *On Time Range Change*. 

```
SELECT "event_name" FROM event_log WHERE $__timeFilter("time_column")
```

 Another option is a query that can create a key/value variable. The query should return two columns that are named `__text` and `__value`. The `__text` column value should be unique (if it is not unique then the first value is used). The options in the dropdown list will have a text and value that allows you to have a friendly name as text and an id as the value. The following example code shows a query with `hostname` as the text and `id` as the value. 

```
SELECT "hostname" AS __text, "id" AS __value FROM host
```

 You can also create nested variables. For example, if you had another variable named `region`, you could have the hosts variable show only hosts from the currently selected region, with a query like the following (if `region` is a multi-value variable, use the `IN` comparison operator rather than `=` to match against multiple values). 

```
SELECT "hostname" FROM host WHERE region IN('$region')
```

#### Using variables in queries
<a name="using-variables-in-queries-1"></a>

 Template variable values are quoted only when the template variable is a multi-value variable. 

 If the variable is a multi-value variable then use the `IN` comparison operator rather than `=` to match against multiple values. 

 There are two syntaxes: 

 `$<varname>` Example with a template variable named `hostname`: 

```
SELECT
  "atimestamp" as time,
  "aint" as value
FROM table
WHERE $__timeFilter("atimestamp") AND "hostname" IN('$hostname')
ORDER BY "atimestamp" ASC
```

 `[[varname]]` Example with a template variable named `hostname`: 

```
SELECT
  "atimestamp" as time,
  "aint" as value
FROM table
WHERE $__timeFilter("atimestamp") AND "hostname" IN('[[hostname]]')
ORDER BY atimestamp ASC
```

# Connect to a Salesforce data source
<a name="salesforce-AMG-datasource"></a>

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

The Salesforce data source allows you to visualize data from Salesforce within Amazon Managed Grafana.

To use this data source, you must have a [Salesforce](https://www.salesforce.com/) account and a [Salesforce Connected App](https://help.salesforce.com/articleView?id=sf.connected_app_overview.htm&type=5).

## Known limitations
<a name="salesforce-known-limitations"></a>
+  Ad-hoc filters are not supported yet. 
+  Only SOQL queries, and data that is accessible via SOQL are currently supported. SOSL and SAQL query formats are not yet supported. 

## Required settings
<a name="salesforce-settings"></a>

The following settings are required.

**Note**  
The plugin currently uses the OAuth 2.0 Username-Password Flow. The required callback URL in the Connected App is not used. Thus, you can set it to any valid URL.


|  Name  |  Description  | 
| --- | --- | 
|  Enable OAuth settings  |  You must check this to enable OAuth.  | 
|  Callback URL  |  Not used in this plugin, so you can specify any valid URL.  | 
|  Selected OAuth Scopes (minimum requirements)  | Access and Manage your data (api). | 
|  Require Secret for Refresh Token Flow  |  You can either enable or disable this.  | 

## Adding the data source
<a name="salesforce-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 
**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

1.  Select **Salesforce** from the list of data sources. 

1. Enter the following information:
   + For **User Name**, enter the username for the Salesforce account that you want to use to connect and query Salesforce.
   + For **Password**, enter the password for that user.
   + For **Security Token**, enter the security token for that user.
   + For **Consumer Key**, enter a consumer key to connect to Salesforce. You can obtain this from your Salesforce Connected App.
   + For **Consumer Secret**, enter a consumer secret to connect to Salesforce. You can obtain this from your Salesforce Connected App.
   + For **Use Sandbox**, select this if you want to use a Salesforce sandbox.

## Query the Salesforce data source
<a name="salesforce-query"></a>

The query editor supports the modes Query Builder and SOQL Editor. SOQL stands for [ Salesforce Object Query Language](https://developer.salesforce.com/docs/atlas.en-us.soql_sosl.meta/soql_sosl/sforce_api_calls_soql.htm). 

### Query Builder (SOQL Builder)
<a name="salesforce-query-builder"></a>

Query Builder is a user-friendly interface for building SOQL queries. If you are not familiar with writing SOQL queries, you can use this mode to build the SOQL to query Salesforce objects. The **FROM** field in the query builder refers to the entity or entities in Salesforce. You must select the **FROM** field before any other operation in the query builder. After you choose the **FROM** field, choose the builder mode. SOQL Builder currently supports the following modes.
+ `List`— List the items, with their fields, from the selected Salesforce entity. Use this mode to get results such as "Show me a list of opportunities created in this fiscal quarter, along with their name, value, and stage."
+ `Aggregate`— Aggregate the items in an entity. Use this mode to get results such as "Count the opportunities created in the last month" or "What is the total value of the opportunities, grouped by their stage name?"
+ `Trend`— Display the aggregated results over time. Use this mode to get results such as "Count the number of opportunities by CreatedDate" or "What is the total sum of value, grouped by the opportunities' closing dates?"

After you choose the `Entity/FROM` and the **mode** in the query editor, build your query using the following options. 


|  **Fields**  |  **Applicable to**  |  **Description**  | 
| --- | --- | --- | 
|  SELECT |  ALL  |  Select the list of fields that you want to see. For the aggregate or trend view, also select how you want to aggregate the values. | 
|  WHERE |  ALL  |  (Optional) Specify the filter conditions. The results are filtered based on the conditions that you select. | 
|  ORDER BY |  LIST, AGGREGATE  |  (Optional) Select the field name and the sort order that you want for the results. | 
|  LIMIT |  LIST, AGGREGATE  |  (Optional) Limit the number of results returned. The default is 100. | 
|  GROUP BY |  AGGREGATE  |  (Optional) Select the field if you want to split the aggregated value by any specific field. | 
|  TIME FIELD |  TREND  |  Specify the date field by which you want to group your results. Results are filtered based on Grafana's time picker range. | 

As you configure the preceding fields in the query editor, you will also see a preview of the generated SOQL below the query editor. If you run into a limitation in the query builder, you can safely switch to the SOQL Editor, where you can customize the generated SOQL query.

### SOQL editor
<a name="salesforce-SOQL-editor"></a>

The raw SOQL editor provides the option to query Salesforce objects via a raw SOQL query. The SOQL editor provides autocomplete suggestions, such as the available entities (tables) and their corresponding fields. Press Ctrl+Space after SELECT or WHERE to see the available entities. Enter a dot after an entity name to see its available fields.

**Shortcuts**

Use Ctrl+Space to show code completion, which displays available contextual options.

Cmd+S runs the query.

**Query as time series**

Make a time series query by aliasing a date field to time, and a metric field to metric, then grouping by the metric and date. The following is an example:

```
SELECT sum(Amount) amount, CloseDate time, Type metric from Opportunity
group by Type, CloseDate
```

**Macros**

To filter by the dashboard time range, you can use Macros in your SOQL queries:
+ `$__timeFrom`— Will be replaced by the start of the currently active time selection converted to the `time` data type.
+ `$__timeTo`— Will be replaced by the end of the currently active time selection converted to the `time` data type.
+ `$__quarterStart`— The start of the fiscal quarter (derived from Salesforce fiscal year settings).
+ `$__quarterEnd`— The end of the fiscal quarter (derived from Salesforce fiscal year settings).

```
SELECT UserId, LoginTime from LoginHistory where LoginTime > $__timeFrom
```
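Similarly, the fiscal quarter macros can be used in a WHERE clause. The following is a sketch using the standard `Opportunity` object:

```
SELECT Id, Name, Amount FROM Opportunity WHERE CloseDate >= $__quarterStart AND CloseDate <= $__quarterEnd
```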

## Templates and variables
<a name="salesforce-templates"></a>

To add a new Salesforce query variable, see [Adding a query variable](variables-types.md#add-a-query-variable). Use your Salesforce data source as your data source. You can use any SOQL query here.

If you want to use name/value pairs (for example, a user ID and user name), return two fields from your SOQL query. The first field is used as the ID. Do this when you want to filter by key (such as ID) in your query editor SOQL.
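For example, the following sketch of a variable query returns user ID and name pairs from the standard `User` object:

```
SELECT Id, Name FROM User
```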

Use the variable in your SOQL queries by using Variable syntax. For more information, see [Variable syntax](templates-and-variables.md#variable-syntax).

# Connect to an SAP HANA data source
<a name="saphana-AMG-datasource"></a>

[SAP HANA](https://www.sap.com/products/technology-platform/hana.html) is a high-performance, in-memory database that speeds up data-driven, real-time decisions and actions. It is developed and marketed by SAP. The SAP HANA data source plugin helps you to connect your SAP HANA instance to Grafana.

With the SAP HANA Grafana Enterprise plugin, you can visualize your SAP HANA data alongside all of your other data sources in Grafana as well as log and metric data in context. This plugin includes a built-in query editor, supports annotations, and it allows you to set alerting thresholds, control access, set permissions, and more.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Features
<a name="saphana-features"></a>
+ **Query editor**— The plugin comes with a built-in SQL query editor with syntax highlighting that lets you visualize time series or table data and autocompletes basic Grafana macros.
+ **Data source permissions**— Control who can view or query SAP HANA data in Grafana.
+ **Annotations**— Overlay SAP HANA events or data on any Grafana graph to correlate events with other graph data.
+ **Alerting**— Set alerts based on metrics stored in SAP HANA.
+ **Variables for queries**— Create template variables in Grafana, which are based on SAP HANA data, and include variables in SAP HANA queries to make dashboards interactive.

## Adding the data source
<a name="saphana-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 
**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

1.  Select **SAP HANA** from the list of data sources. 

1. In the Config editor, enter the following information:
   + For **Server address**, provide the address of the SAP HANA instance. Example: `xxxxxxx-xxxx-xxxx-xxxx-xxxxxxx.hana.trial-us10.hanacloud.ondemand.com`.
   + For **Server port**, provide the port of the SAP HANA instance.
   + For **Username**, enter the username to use to connect to the SAP HANA instance.
   + For **Password**, enter the password for this user.
   + (Optional) Enable **Skip TLS verify** if you want to skip TLS verification.
   + (Optional) Enable **TLS Client Auth** if you need to provide a client cert and key.
   + (Optional) Enable **With CA cert** if you want to enable verifying self-signed TLS Certs.
   + (Optional) For **Default schema**, enter a default schema to be used. If you omit this, you will need to specify the schema in every query. 

**Access and permissions**

To connect Grafana to SAP HANA, use dedicated credentials, and grant the user only the required permissions. First, create a restricted user with a username and password. The following example query creates a restricted user and also disables the forced password change.

```
CREATE RESTRICTED USER <USER> PASSWORD <PASSWORD> NO FORCE_FIRST_PASSWORD_CHANGE;
```

Next, allow the user to connect to the system through clients such as Grafana with the following:

```
ALTER USER <USER> ENABLE CLIENT CONNECT;
```

Finally, give the user access to the necessary views, tables, and schemas.

```
ALTER USER <USER> GRANT ROLE PUBLIC;
GRANT SELECT ON SCHEMA <SCHEMA> TO <USER>;
```

**User level permissions**

Limit access to SAP HANA by choosing the **Permissions** tab on the data source configuration page to enable data source permissions. On the permissions page, admins can enable permissions and restrict query permissions to specific users and teams.

## Query editor
<a name="saphana-queryeditor"></a>

The SAP HANA Grafana plugin comes with an SQL query editor where you can enter any HANA query. If your query returns time series data, you can format it as a time series to visualize it in a graph panel. The query editor provides autocompletion for supported Grafana macros and syntax highlighting for your SQL query.

## Annotations
<a name="saphana-annotations"></a>

You can use SAP HANA queries as the sources of Grafana annotations. Your annotation query should return at least one time column and one text column. For more information about annotations, see [Annotations](dashboard-annotations.md).
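For example, an annotation query might look like the following sketch, assuming a hypothetical `events` table with `event_time` and `event_text` columns:

```
select "event_time" as "time", "event_text" as "text" from "events" where $__timeFilter("event_time")
```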

**To create annotations from SAP HANA**

1.  Choose the **Dashboard settings** gear icon. 

1.  From the left menu, choose **Annotations**, **New**. 

1.  From the **Data source** drop-down menu, select your SAP HANA data source instance. 

1.  In the **Query** field, enter a SAP HANA query that returns at least one time field and one text field. 

1.  In the **Format as** drop-down menu, select **Time Series**. 

1.  For each annotation, configure the **From** fields. 

## Templates and variables
<a name="saphana-templates"></a>

To add a new SAP HANA query variable, see [Adding a query variable](variables-types.md#add-a-query-variable). Use your SAP HANA data source as your data source.

The following example query returns the distinct list of `username` from the `users` table.

```
select distinct("username") from "users"
```

**Note**  
Be sure to select only one column in your variable query. If your query returns two columns, the first column is used as the display value and the second column is used as the actual value of the variable. If your query returns more than two columns, the query is rejected.
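For example, the following sketch (assuming a hypothetical `users` table with `username` and `id` columns) returns two columns, so the user's name is displayed while the ID is used as the variable's actual value:

```
select distinct "username", "id" from "users"
```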

### Using variables in queries
<a name="saphana-Grafana-variables"></a>

You can use any Grafana variable in your query. The following examples show how to use single-value and multi-value variables in your query.

```
--- For example, the following query
select * from "users" where "city" = ${city}
--- will be translated into
select * from "users" where "city" = 'london'
--- where the ${city} variable is replaced with its actual value
```

Similar to text fields, variables also work for numeric fields. In the following example, `${age}` is a text box variable that accepts numbers and is compared against the numeric field in the table.

```
select * from "users" where "age" > ${age}
--- will be translated into
select * from "users" where "age" > '36'
```

If your variable returns multiple values, you can use it in the SAP HANA query's `in` condition, as shown below. Note the parentheses surrounding the variable, which make the `where in` condition valid in SAP HANA.

```
select * from "users" where "city" in (${cities})
--- will be translated into
select * from "users" where "city" in ('london','perth','delhi')
--- where ${cities} is turned into the list of selected variable values
--- You can also write the same query using shorthand notation as shown below
select * from "users" where "city" in ($cities)
```

### Macros
<a name="saphana-macros"></a>
+ `$__timeFilter(<time_column>)`— Applies Grafana's time range to the specified column when used in the raw query. Applicable to date/timestamp/long time columns.
+ `$__timeFilter(<time_column>, <format>)`— Same as above, but gives the ability to specify the format of the time column stored in the database.
+ `$__timeFilter(<time_column>, "epoch", <format>)`— Same as above, but can be used when your time column is in epoch format. The format can be one of 's', 'ms', and 'ns'.
+ `$__fromTimeFilter(<time_column>)`— Returns a time condition based on Grafana's from time over a time field.
+ `$__fromTimeFilter(<time_column>, <comparison_predicate>)`— Same as above, but able to specify the comparison predicate.
+ `$__fromTimeFilter(<time_column>, <format>)`— Same as above, but able to specify the format of the time column.
+ `$__fromTimeFilter(<time_column>, <format>, <comparison_predicate>)`— Same as above, but able to specify both the format and the comparison predicate.
+ `$__toTimeFilter(<time_column>)`— Returns a time condition based on Grafana's to time over a time field.
+ `$__toTimeFilter(<time_column>, <comparison_predicate>)`— Same as above, but able to specify the comparison predicate.
+ `$__toTimeFilter(<time_column>, <format>)`— Same as above, but able to specify the format of the time column.
+ `$__toTimeFilter(<time_column>, <format>, <comparison_predicate>)`— Same as above, but able to specify both the format and the comparison predicate.
+ `$__timeGroup(<time_column>, <interval>)`— Expands the time column into interval groups. Applicable to date/timestamp/long time columns.

**`$__timeFilter(<time_column>)` macro**

The following example explains the `$__timeFilter(<time_column>)` macro:

```
--- In the following example, the query
select ts, temperature from weather where $__timeFilter(ts)
--- will be translated into
select ts, temperature from weather where ts > '2021-02-24T12:52:48Z' AND ts < '2021-03-24T12:52:48Z'
--- where you can see the grafana dashboard's time range is applied to the column ts in the query.
```

**`$__timeFilter(<time_column>, <format>)` macro**

In some cases, time columns in the database are stored in custom formats. The following example explains the `$__timeFilter(<time_column>, <format>)` macro, which helps to filter custom timestamps based on Grafana's time picker:

```
SELECT TO_TIMESTAMP("TS",'YYYYMMDDHH24MISS') AS METRIC_TIME , "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TS","YYYYMMDDHH24MISS") -- TS is in 20210421162012 format
SELECT TO_TIMESTAMP("TS",'YYYY-MON-DD') AS METRIC_TIME , "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TS","YYYY-MON-DD") -- TS is in 2021-JAN-15 format
```

In the macro, the format can be any valid HANA format matching your timestamp column. For example, `YYYYMMDDHH24MISS` is a valid format when your data is stored in the `20210421162012` format.

**`$__timeFilter(<time_column>, "epoch", <format>)` macro**

In some cases, timestamps are stored as epoch timestamps in your database. The following example explains the `$__timeFilter(<time_column>, "epoch", <format>)` macro, which helps to filter epoch timestamps based on Grafana's time picker. In the macro, the format can be one of 'ms', 's', or 'ns'. If not specified, 's' is treated as the default format.

```
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP") AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch") -- Example : TIMESTAMP field stored in epoch_second format 1257894000
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP") AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch","s") -- Example : TIMESTAMP field stored in epoch_second format 1257894000
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP"/1000) AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch","ms") -- Example : TIMESTAMP field stored in epoch_ms format 1257894000000
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP"/1000000000) AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch","ns") -- Example : TIMESTAMP field stored in epoch_nanoseconds format 1257894000000000000
```

Instead of using a third argument to `$__timeFilter`, you can use one of `epoch_s`, `epoch_ms`, or `epoch_ns` as your second argument.

```
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP"/1000) AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch","ms")
-- is same as
SELECT ADD_SECONDS('1970-01-01', "TIMESTAMP"/1000) AS "METRIC_TIME", "VALUE" FROM "SCH"."TBL" WHERE $__timeFilter("TIMESTAMP","epoch_ms")
```

**`$__fromTimeFilter()` and `$__toTimeFilter()` macros**

The `$__fromTimeFilter()` macro expands to a condition over a time field based on the from time of Grafana's time picker.

This macro accepts up to three parameters. The first parameter is the time field name. You can pass the comparison predicate or the format of the time column as the second argument. If you want to pass both, the format is the second parameter and the comparison predicate is the third.

**<format>**— If the format is not specified, the plugin assumes that the time column is of timestamp/date type. If your time column is stored in a format other than timestamp/date, pass the format as the second argument. <format> can be one of `epoch_s`, `epoch_ms`, `epoch_ns`, or any other custom format such as `YYYY-MM-DD`.

**<comparison_predicate>**— An optional parameter. If not passed, the plugin uses > as the comparison predicate. <comparison_predicate> can be one of =, !=, <>, <, <=, >, or >=.

`$__toTimeFilter()` works the same as `$__fromTimeFilter()`, except that it uses Grafana's to time instead of the from time, and the default comparison predicate is <.
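As an illustration of the default predicates, assuming a hypothetical `weather` table with a `ts` timestamp column:

```
select ts, temperature from weather where $__fromTimeFilter(ts) and $__toTimeFilter(ts)
--- will be translated into something like
select ts, temperature from weather where ts > '2021-02-24T12:52:48Z' and ts < '2021-03-24T12:52:48Z'
```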

**`$__timeGroup(<time_column>, <interval>)` macro**

For example, the macro `$__timeGroup(timecol,1h)` is expanded to `SERIES_ROUND("timecol", 'INTERVAL 1 HOUR')` in the query.

The following example explains the `$__timeGroup(<time_column>, <interval>)` macro.

```
SELECT $__timeGroup(timestamp,1h),  "user", sum("value") as "value"
FROM "salesdata"
WHERE $__timeFilter("timestamp")
GROUP BY $__timeGroup(timestamp,1h), "user"
ORDER BY $__timeGroup(timestamp,1h) ASC
```

This is translated into the following query where `$__timeGroup(timestamp,1h)` is expanded into `SERIES_ROUND("timestamp", 'INTERVAL 1 HOUR')`.

```
SELECT SERIES_ROUND("timestamp", 'INTERVAL 1 HOUR') as "timestamp",  "user", sum("value") as "value"
FROM "salesdata"
WHERE "timestamp" > '2020-01-01T00:00:00Z' AND "timestamp" < '2020-01-01T23:00:00Z'
GROUP BY SERIES_ROUND("timestamp", 'INTERVAL 1 HOUR'), "user"
ORDER BY "timestamp" ASC
```

**Note**  
When using group by with the `$__timeGroup` macro, make sure that your select and order by fields use the same name as your group by field. Otherwise, HANA might not recognize the query.

If you don't want to hardcode the interval in the `$__timeGroup()` function, you can leave that to Grafana by specifying `$__interval` as your interval. Grafana calculates the interval from the dashboard time range. Example query:

```
SELECT $__timeGroup(timestamp, $__interval), sum("value") as "value"
FROM "salesdata"
WHERE $__timeFilter("timestamp")
GROUP BY $__timeGroup(timestamp, $__interval)
ORDER BY $__timeGroup(timestamp, $__interval) ASC
```

That query is translated into the following query based on the dashboard time range.

```
SELECT SERIES_ROUND("timestamp", 'INTERVAL 1 MINUTE'), sum("value") as "value"
FROM "salesdata"
WHERE "timestamp" > '2019-12-31T23:09:14Z' AND "timestamp" < '2020-01-01T23:17:54Z'
GROUP BY SERIES_ROUND("timestamp", 'INTERVAL 1 MINUTE')
ORDER BY SERIES_ROUND("timestamp", 'INTERVAL 1 MINUTE') ASC
```

### Alerting
<a name="saphana-alerting"></a>

**To set up a SAP HANA alert in Grafana**

1. Create a graph panel in your dashboard.

1. Create a SAP HANA query in time series format.

1. Choose the **Alert** tab and specify the alerting criteria.

1. Choose **Test Rule** to test the alert query.

1. Specify the alert recipients, message, and error handling.

1. Save the dashboard.

#### Alerting on non-timeseries data
<a name="saphana-alerting-nontimeseries"></a>

To alert on non-time series data, use the `TO_TIMESTAMP('${__to:date}')` macro to turn non-time series metrics into a time series. This converts your metric into a single-point time series query. The format of the query is as follows:

```
SELECT TO_TIMESTAMP('${__to:date}'),  <METRIC> FROM <TABLE> WHERE <YOUR CONDITIONS>
```

In the following example, a table has four fields: username, age, city, and role. This table doesn't have a time field. Suppose you want to be notified when the number of users with the dev role is less than three.

```
SELECT  TO_TIMESTAMP('${__to:date}'), count(*) as "count" FROM (
   SELECT 'John' AS "username", 32 AS "age", 'Chennai' as "city", 'dev' as "role" FROM dummy
   UNION ALL SELECT 'Jacob' AS "username", 32 AS "age", 'London' as "city", 'accountant' as "role" FROM dummy
   UNION ALL SELECT 'Ali' AS "username", 42 AS "age", 'Delhi' as "city", 'admin' as "role" FROM dummy
   UNION ALL SELECT 'Raja' AS "username", 12 AS "age", 'New York' as "city", 'ceo' as "role" FROM dummy
   UNION ALL SELECT 'Sara' AS "username", 35 AS "age", 'Cape Town' as "city", 'dev' as "role" FROM dummy
   UNION ALL SELECT 'Ricky' AS "username", 25 AS "age", 'London' as "city", 'accountant' as "role" FROM dummy
   UNION ALL SELECT 'Angelina' AS "username", 31 AS "age", 'London' as "city", 'cxo' as "role" FROM dummy
) WHERE "role" = 'dev'
```

# Connect to a ServiceNow data source
<a name="grafana-enterprise-servicenow-datasource"></a>

Use the ServiceNow data source to connect to ServiceNow instances.

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Features
<a name="features-1"></a>
+  Queries 
  +  Stat API Queries 
  +  Table API Queries 
    +  Incidents, Changes, and any other table 
+  Alerts 
+  Annotations (beta feature) 
+  Template Variables 

## Configuration
<a name="configuration-2"></a>

1.  Select **Data sources** on the left panel of Grafana. 

1.  Select **Add data source**. 

1.  Enter **servicenow** to find the data source plugin. 

1.  Enter the ServiceNow URL. 

1.  Choose **Save & Test**. You should see a green message with "ServiceNow Connection OK". 

### Example dashboards
<a name="example-dashboards"></a>

 Pre-made dashboards are included with the plugin and can be imported through the data source configuration page, under the dashboards tab. 

## Usage
<a name="usage-2"></a>

 There are two ways to return data in the query editor. 
+  TableAPI 
+  AggregateAPI 

 Users can currently choose between querying pre-defined tables, such as the following: 
+  Changes 
+  Incidents 

 Or, as of `v1.4.0`, an API-driven list of tables and fields using the **Other (Custom Table)** option. This option lets you query data in any table available to the user that was used to set up the ServiceNow data source. 

 The **Custom Table** option should support all of the same features as the pre-defined table lists. 

### TableAPI queries
<a name="tableapi-queries"></a>

 The TableAPI returns data suitable for displaying in a table panel. It allows for an ordered selection of fields to display plus filtering options. The query editor also provides a field to limit the number of rows returned by a query. 

 Example table panel showing results from the previous query. 

#### Show
<a name="show"></a>

 The *Show* row provides a selector for a field to be displayed. Multiple fields can also be specified. The fields are returned in the exact order specified. 

#### Display Values
<a name="display-values"></a>

 The *Display Values* flag causes the query to return human-friendly values, or display values, instead of numeric values. 

 For example, a severity of `1` without this flag would only display `1`. If the flag is enabled, the value displayed will be `1 - High`. 

 According to the [ServiceNow API documentation](https://developer.servicenow.com/dev.do#!/reference/api/orlando/rest/c_TableAPI), this can have a negative performance impact. 

**Note**  
 […] specifying the display value can cause performance issues since it is not reading directly from the database and could include referencing other fields and records. 

#### Filters (general)
<a name="filters-general"></a>

 The *Filters* row provides the ability to narrow down the displayed rows based on multiple field and value criteria. 

 All filters are combined with an *AND* or an *OR* operation. 

 The following fields are available when not using a custom table (this list will expand in the future).

```
Active
Asset
Group
Assigned To
Escalation
Issue Number
Description
Priority
State
Type
Change Risk
Change State
Start Date
End Date
On Hold
```

 When selecting a custom table, fields are automatically populated from the Service Now API. 

##### Date filters
<a name="date-filters"></a>


|  Time Field  |  Operators  |  Value  | 
| --- | --- | --- | 
|  Opened At, Activity Due, Closed At, Due Date, Expected Start, Reopened Time, Resolved At, Work Start, Work End, Ignore Time  |  Today, Not Today, Before, At or Before, After, At or After  |  A timestamp, or a relative value such as `javascript:gs.daysAgo(30)`  | 

 For additional date values, see: https://developer.servicenow.com/app.do#!/api_doc?v=newyork&id=r_SGSYS-dateGenerate_S_S 

##### Operators (general, string-based)
<a name="operators-generalstring-based"></a>
+  Starts With 
+  Ends With 
+  Like 
+  Not Like 
+  Equals 
+  Not Equals 
+  Is Empty 

##### Operators (time-based)
<a name="operators-time-based"></a>
+  Today 
+  Not Today 
+  Before 
+  At or Before 
+  After 
+  At or After 

##### Values
<a name="values"></a>

 Value selection depends on the type of filter selected. 
+  Boolean filters have True/False options 
+  Text filters will allow typing any value 
+  Escalation, Priority has a fixed set of numerical values 

#### Sort By
<a name="sort-by"></a>

 The *Sort By* row provides the ability to order the displayed rows by one or more fields. 

#### Limit
<a name="limit"></a>

 A row limit can be specified to prevent returning too much data. The default value is 25. 

#### Time Field
<a name="time-field"></a>

 The `Time Field` is what turns your queried data into a time series. Because your data is handled as a time series, values in your selected time field that do not fall within your dashboard or panel's time range are not displayed. 

 The default time field used is "Opened At", but can be changed to any available field that holds a time value. 

 A special value "Ignore Time" is provided to allow results "up until now" and also to enable the filters to control what data is displayed. 

### AggregateAPI queries (Stats)
<a name="aggregateapi-queries-stats"></a>

 The AggregateAPI will always return metrics, with the following aggregations: avg, min, max, sum. Filtering is also available to narrow queries. 

#### Show
<a name="show-1"></a>

 The *Show* row provides a selector for a metric to be displayed. Multiple metrics can also be specified. 

#### Filters (general)
<a name="filters-general-1"></a>

 Aggregate *Filters* provide the ability to narrow down the displayed metrics based on field and value criteria, similar to the table option. 

 All filters are combined with an *AND* operation. Support for additional operators will be added. 

 Stat filter options are the same as the TableAPI. 

#### Aggregation
<a name="aggregation"></a>

 There are four types of metric aggregations, plus a "count": 
+  Average 
+  Minimum 
+  Maximum 
+  Sum 
+  Count - this returns the "number" of metrics returned by a query 

##### Group By
<a name="group-by"></a>

 This selector provides the ability to split metrics into lesser aggregates. Grouping by "priority" would return the metrics with a "tag" of priority and the unique values separated. 

### Templating
<a name="templating-2"></a>

 Instead of hardcoding names in your queries, you can use variables in their place. Variables are shown as dropdown select boxes at the top of the dashboard. You can use these dropdown boxes to change the data being displayed on your dashboard. 

 See the example in the **Query Variable** section on how to add a query variable and reference that with a Template value. 

#### Query variable
<a name="query-variable"></a>

 If you add a template variable of the type `Query`, you can write a query that can return items such as category names, key names, or key values that are shown as a dropdown select box. 

 For example, you can have a variable that contains all values for `categories` by specifying a query such as this in the templating variable *Query* setting. 

 When choosing the **Query** setting, a **Filter** section is displayed, allowing you to choose a **Type** and **Field**. Currently, **Type** is limited to Incidents and Changes. When selecting a type, you are provided with a list of fields applicable to that Type. Once a **Type** and **Field** are selected, a preview of values will be displayed at the bottom showing the options available for that Type/Field. Those values will be displayed in a dropdown list on the Dashboard, which you can use along with Templating to filter data on your Dashboard Panels. 

 For example, if you add a variable named *category*, then select Type = Incidents and Field = Category, you will see a list of options for Category. If you then add a filter to a panel and select Category Equals `$category`, the panel shows only data for the Category that is selected from the dashboard dropdown list. 

 Import the **Incidents By Category** dashboard to see an example. 

#### Using variables in queries
<a name="using-variables-in-queries"></a>

 There are two syntaxes: 

 `$<varname>`— For example, with a template variable named `hostname`, use `$hostname` in your query. 

 `[[varname]]`— For example, with a template variable named `hostname`, use `[[hostname]]` in your query. 

## Alerting
<a name="servicenow-alerting"></a>

 Standard Grafana alerting is supported. Any queries defined in a graph panel can be used to generate alerts. 

 The following is an example query and an alert. This query will return a graph of all open critical high priority incidents: 

 This alert will be initiated when there are more than five open critical high priority incidents: 

 Testing the alert rule will display output from the alert rule, and selecting the state history will show the alert transitioning from ok to pending to alerting. 

 The graph view will show a vertical line and the heart icon at the top will turn orange while the alert is pending. 

 Once the criteria for alerting has been met, the rule transitions to red.

 In the graph view, the red vertical line will appear and the heart icon at the top will turn red. 

### Writing incidents for alerts
<a name="writing-incidents-for-alerts"></a>

 **Beta feature** 
+  Configure a Notification Channel for your ServiceNow data source. 

 This will configure a [Grafana Notification Channel](https://grafana.com/docs/grafana/latest/alerting/notifications/) which uses your configured user to create incidents on the ServiceNow instance for this data source. 

 This action requires that the ServiceNow data source user has permissions for writing incidents. 

## Annotations
<a name="annotations-1"></a>

 Grafana Annotations are a **beta feature** as of `v1.4.0` of this data source. Annotations give you the ability to overlay events on graphs. 

 The Annotations query supports the same options as the standard query editor with a few minor differences: 
+  Only one "Show" column is selectable. This is likely going to be fixed in a future improvement. 
+  The time field is required. 

## FAQ
<a name="faq-1"></a>

### What if we don’t have the ITSM Roles Plugin?
<a name="what-if-we-dont-have-the-itsm-roles-plugin"></a>

 **Administrator access is required to perform the following actions** 

 Option 1: Grant Grafana user admin permissions to allow access to all tables. 

 Option 2: Create a role and apply ACLs to all tables that must be accessed by Grafana.


1.  The logged-in administrator needs to elevate access to security_admin.

   1.  In the top-right navigation pane, choose the profile icon. The profile icon has a dropdown caret indicator. 

   1.  From the dropdown list, choose **Elevate Roles**. 

   1.  From the modal that is shown, select the **security_admin** check box.

   1.  Choose **OK**. 

1. Create a new role with whatever naming convention you want.

   1.  Navigate to the roles section in the left-hand navigation: **System Security => Users and Groups => Roles**. 

   1.  Choose **New** at the top.

   1.  Enter a name for the role and a relevant description. 

   1.  Choose **Submit**. 

1.  Create a new user or modify an existing user with the needed roles. 

   1.  The role that you created in Step 2 

   1.  personalize_dictionary 

   1.  personalize_choices 

   1.  cmdb_read (this will grant read access to all cmdb tables) 

1.  Create Table ACLs for the required tables and fields. 

   1.  Create an ACL for the sys_db_object table. 

      1.  In the second search header column **Name**, enter **sys_db_object**, and press **Enter**. 

     1.  The filtered result should show **Table**. Choose **Table** to navigate into the record. 

     1.  On the tab section, choose **Controls**.

     1.  On the lower portion of the page, make sure that **Access Controls** is the selected tab. 

     1.  Choose **New** to create a new ACL. 

     1.  Change the **Operation** selection to read. 

     1.  In the **Requires Role** section in the lower part of the screen, choose (double-click) **Insert New Row**, and search for the role that you created. 

     1. After you select the role you created, choose the green check mark. 

     1.  Choose **Submit** in the lower part of the screen to create the ACL, and then choose **Continue** when the modal appears. 

1.  Create ACLs for specific sys_db_object fields. The following steps must be repeated for each of the following fields: Name, Label, Display Name, and Extends table. 

   1.  While still on the table record view for sys_db_object, select the **Columns** tab in the tab group closest to the top of the screen.

   1.  Locate the field name and select it. 

   1.  In the lower tab section, choose **New** on the **Access Controls** tab. 

   1.  Change the operation to **read**. 

   1.  Choose (double-click) the **Insert New Row** text in the **Requires role** table at the bottom. 

   1.  Search for the role that you created, and choose the green check mark. 

   1.  Choose **Submit**. 

   1.  Make sure that you’ve repeated these steps for all required fields: Name, Label, Display Name, and Extends table. 

1.  Repeat the steps from 4.1 on Change, Incident, and any other non-CMDB tables that you want to query from Grafana. Do not repeat the steps from 4.2; that step is only required for sys_db_object. 

# Connect to a Snowflake data source
<a name="snowflake-datasource-for-AMG"></a>

 With the Snowflake Enterprise data source, you can visualize your Snowflake data alongside all of your other data sources in Grafana, and view log and metric data in context. This data source includes a powerful type-ahead query editor, supports complex annotations, and lets you set alerting thresholds, control access and permissions, and more. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Overview
<a name="snowflake-overview"></a>

### What is Snowflake?
<a name="what-is-snowflake"></a>

 Snowflake offers a cloud-based data storage and analytics service, generally termed "data warehouse-as-a-service," that provides a solution for data warehousing, data lakes, data engineering, data science, data application development, and data sharing. Over the last few years, Snowflake has gained massive popularity because of its ability to affordably store and analyze data using cloud-based hardware and software, recently culminating in the largest software IPO ever. Today, many companies use Snowflake as their primary database to store application and business data such as transaction counts, active user sessions, and even time series and metric data. 

### Making the most of Snowflake and Amazon Managed Grafana
<a name="making-the-most-of-snowflake-and-AMG"></a>

 **Visualize Snowflake data without moving it**: Grafana’s unique architecture queries data directly where it lives rather than moving it and paying for redundant storage and ingestion. 

 **Compose panels from varied sources:** With pre-built and custom dashboards, bring data together from many different data sources into a single pane of glass. 

 **Transform and compute at the user level**: Users can transform data and run various computations on data they see, requiring less data preparation. 

 **Combine, compute, and visualize within panels**: Create mixed-data source panels that display related data from Snowflake and other sources. 

### Features
<a name="snowflake-features"></a>

 **Query editor:** The query editor is a smart SQL autocompletion editor that lets you visualize time series or table data, handles SQL syntax errors, and autocompletes basic SQL keywords. 

 **Data source permissions**: Control who can view or query Snowflake data in Grafana. 

 **Annotations:** Overlay Snowflake events on any Grafana graph to correlate events with other graph data. 

 **Alerting:** Set alerts based on metrics stored in Snowflake. 

 **Variables for queries:** Create template variables in Grafana based on Snowflake data, and include variables in Snowflake queries to make interactive dashboards. 

 **Multi-metric queries:** Write a single query that returns multiple metrics, each in its own column. 

## Get started with the Snowflake plugin
<a name="get-started-with-the-snowflake-plugin"></a>

 Here are five quick steps to get started with the Snowflake plugin in Grafana: 

### Step 1: Set up the Snowflake Data Source
<a name="set-up-the-snowflake-data-source"></a>

 To configure the data source, choose **Configuration**, **Data Sources**, **Add data source**, **Snowflake**. 

 Add your authentication details, and the data source is ready to query!

 The following configuration fields are available. 


|  Name  |  Description  | 
| --- | --- | 
|  Account  |  Account for Snowflake.  | 
|  Username  |  Username for the service account.  | 
|  Password  |  Password for the service account.  | 
|  Schema (optional)  |  Sets a default schema for queries.  | 
|  Warehouse (optional)  |  Sets a default warehouse for queries.  | 
|  Database (optional)  |  Sets a default database for queries.  | 
|  Role (optional)  |  Assumes a role for queries.  | 
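
 After you save the configuration, you can verify the connection context with a quick panel query. The following is a minimal sketch that uses Snowflake's built-in context functions and assumes no particular table: 

```
select current_warehouse(), current_database(), current_role();
```

 If you set the optional warehouse, database, or role fields, the results should reflect those defaults. 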

### Step 2: Write queries for your Snowflake data
<a name="write-queries-for-your-snowflake-data"></a>

 Create a panel in a dashboard, and select a Snowflake Data Source to start using the query editor. 
+  A date/time column can appear anywhere in the query, as long as it is included. 
+  A numerical column must be included. This can be an aggregation or an int/float column. 
+  Optionally, you can include string columns to create separate data series, if your time series data is formatted for different metrics. 

#### Layout of a Snowflake query
<a name="layout-of-a-snowflake-query"></a>

```
select
  <time_column>,
  <any_numerical_column>,
  <other_column_1>,
  <other_column_2>,
  <...>
from
  <any_table>
where
  $__timeFilter(<time_column>) // predefined where clause for time range
  and $<custom_variable> = 1 // custom variables start with dollar sign
```

#### SQL query format for time series group by interval
<a name="sql-query-format-for-timeseries-group-by-interval"></a>

```
select
  $__timeGroup(<time_column>, '1h'), // group time by interval of 1h
  <any_numerical_column>,
  <metric_column>
from
  <any_table>
where
  $__timeFilter(<time_column>) // predefined where clause for time range
  and $<custom_variable> = 1 // custom variables start with dollar sign
group by <time_column>
```
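
 As a concrete sketch of the preceding template (the `orders` table and `order_total` column are hypothetical), a query charting hourly average order value might look like the following; the positional `group by 1` groups on the time-bucket expression. 

```
select
  $__timeGroup(created_ts, '1h'),
  avg(order_total) as avg_order_value
from
  orders
where
  $__timeFilter(created_ts)
group by 1
```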

#### SQL query format for tables
<a name="sql-query-format-for-tables"></a>

```
select
  <time_column>, // optional if result format option is table
  <any_column_1>,
  <any_column_2>,
  <any_column_3>
from
  <any_table>
where
  $__timeFilter(<time_column>) // macro for time range, optional if format as option is table
  and $<custom_variable> = 1 // custom variables start with dollar sign
```
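
 For example, a table-format query against a hypothetical `app_sessions` table (all column names here are illustrative) can simply select the fields to display: 

```
select
  user_name,
  session_count,
  last_login_ts
from
  app_sessions
where
  $__timeFilter(last_login_ts)
```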

### Step 3: Create and use Template Variables
<a name="snowflake-create-and-use-template-variables"></a>

#### Using template variables
<a name="snowflake-using-template-variables-1"></a>

 You can include template variables in queries, as shown in the following example. 

```
 select
   <column>
 from 
   <table>
 WHERE <column> >= '$variable'
```

 The following example shows using multi-value variables in a query. 

```
select
  <column>
from 
  <table>
WHERE <column> regexp '${variable:regex}'
```

#### Using the Snowflake data source to create variables
<a name="using-the-snowflake-datasource-to-create-variables"></a>

 In the dashboard settings, choose **Variables**, and choose **New**. 

 For the variable type, select **Query**, and select the Snowflake data source as the **Data source**. 

**Important**  
 Be sure to select only one column in your variable query. 

 Example: 

```
SELECT DISTINCT query_type from account_usage.query_history;
```

 returns these variable values: 

```
All DESCRIBE USE UNKNOWN GRANT SELECT CREATE DROP SHOW
```

### Step 4: Set up an alert
<a name="snowflake-set-up-an-alert"></a>

 You can set alerts on specific Snowflake metrics or on queries you’ve created. 

 In the query editor, choose the **Alert** tab, and then choose **Create Alert**. 

### Step 5. Create an annotation
<a name="snowflake-create-an-annotation"></a>

 Annotations allow you to overlay events on a graph. 

 To create an annotation, in the dashboard settings, choose **Annotations**, and **New**, and select Snowflake as the data source. 

 Because annotations are events, they require at least one time column and one column to describe the event. 

 The following example code shows a query to annotate all failed logins to Snowflake. 

```
SELECT
  EVENT_TIMESTAMP as time,
  EVENT_TYPE,
  CLIENT_IP
FROM ACCOUNT_USAGE.LOGIN_HISTORY
WHERE $__timeFilter(time) AND IS_SUCCESS!='YES'
ORDER BY time ASC;
```

 Use the following field mappings for the annotation: 
+  time: `TIME` 
+  title: `EVENT_TYPE` 
+  text: `CLIENT_IP` 

 This will overlay annotations of all failed logins to Snowflake on your dashboard panels. 

## Additional functionality
<a name="additional-functionality"></a>

### Using the Display Name field
<a name="snowflake-using-display-name"></a>

 This plugin uses the Display Name field in the Field tab of the Options panel to shorten or alter a legend key based on its name, labels, or values. Other data sources use custom `alias` functionality to modify legend keys, but the Display Name function is a more consistent way to do so. 

### Data source permissions
<a name="snowflake-data-source-permissions"></a>

 Limit access to Snowflake by choosing the **Permissions** tab in the data source configuration page to enable data source permissions. On the permission page, Admins can enable permissions and restrict query permissions to specific Users and Teams. 

### Understand your Snowflake billing and usage data
<a name="understand-your-snowflake-billing-and-usage-data"></a>

 Within the Snowflake data source, you can import a billing and usage dashboard that shows you useful billing and usage information. 

 Add the dashboard in the Snowflake data source configuration page. 

 This dashboard uses the ACCOUNT_USAGE database and requires the querier to have the ACCOUNTADMIN role. To do this securely, create a new Grafana data source that has a user with the ACCOUNTADMIN role. Then select that data source in the variables. 
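
 As a sketch of the kind of query such a dashboard panel can run (adjust the interval and grouping to your needs), the following charts daily credit consumption per warehouse from the `ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY` view, which also requires the ACCOUNTADMIN role: 

```
select
  $__timeGroup(start_time, '1d'),
  warehouse_name,
  sum(credits_used) as credits
from account_usage.warehouse_metering_history
where $__timeFilter(start_time)
group by 1, 2
```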

# Connect to a Splunk data source
<a name="splunk-datasource"></a>

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Configuration
<a name="splunk-configuration-3"></a>

### Data source configuration
<a name="splunk-data-source-config"></a>

 When configuring the data source, ensure that the URL field uses `https` and points to your configured Splunk port. The default Splunk API port is 8089, not 8000 (8000 is the default web UI port). Enable *Basic Auth* and specify your Splunk username and password. 

#### Browser (direct) access mode and CORS
<a name="splunk-browser-direct-access-mode-and-cors"></a>

 Amazon Managed Grafana does not support browser direct access for the Splunk data source. 

### Advanced options
<a name="splunk-advanced-options"></a>

#### Stream mode
<a name="stream-mode"></a>

 Enable stream mode if you want to get search results as they become available. This is an experimental feature; don’t enable it unless you really need it. 

#### Poll result
<a name="splunk-poll-result"></a>

 Run the search, and then periodically check for the result. Under the hood, this option runs a `search/jobs` API call with `exec_mode` set to `normal`. In this case, the API request returns a job SID, and Grafana then checks the job status from time to time to get the job result. This option can be helpful for slow queries. By default, this option is disabled, and Grafana sets `exec_mode` to `oneshot`, which returns the search result in the same API call. See more about the `search/jobs` API endpoint in the [Splunk docs](https://docs.splunk.com/Documentation/Splunk/latest/RESTREF/RESTsearch#search.2Fjobs). 

#### Search polling interval
<a name="splunk-search-polling-interval"></a>

 This option adjusts how often Amazon Managed Grafana polls Splunk for search results. The time of the next poll is chosen randomly from the [min, max) interval. If you run a lot of heavy searches, it makes sense to increase these values. Tips: increase *Min* if search jobs take a long time to run, and increase *Max* if you run many parallel searches (many Splunk metrics on a Grafana dashboard). The default is the [500, 3000) millisecond interval. 

#### Automatic cancellation
<a name="auto-cancel"></a>

 If specified, the job automatically cancels after this many seconds of inactivity (0 means never auto-cancel). Default is 30. 

#### Status buckets
<a name="status-buckets"></a>

 The maximum number of status buckets to generate. 0 indicates not to generate timeline information. The default is 300. 

#### Fields search mode
<a name="splunk-fields-search-mode"></a>

 When you use the visual query editor, the data source attempts to get the list of available fields for the selected source type. 
+  quick - Use the first available result from the preview. 
+  full - Wait for the job to finish and get the full result. 

#### Default earliest time
<a name="default-earliest-time"></a>

 Some searches can’t use the dashboard time range (such as template variable queries). This option helps prevent searching over all time, which can slow down Splunk. The syntax is an integer and a time unit: `[+|-]<time_integer><time_unit>`. For example, `-1w`. The [time unit](https://docs.splunk.com/Documentation/Splunk/latest/Search/Specifytimemodifiersinyoursearch) can be `s, m, h, d, w, mon, q, y`. 

#### Variables search mode
<a name="splunk-variables-search-mode"></a>

 Search mode for template variable queries. Possible values: 
+  fast - Field discovery off for event searches. No event or field data for stats searches. 
+  smart - Field discovery on for event searches. No event or field data for stats searches. 
+  verbose - All event & field data. 

## Usage
<a name="splunk-usage-5"></a>

### Query editor
<a name="splunk-query-editor-2"></a>

#### Editor modes
<a name="splunk-editor-modes"></a>

 The query editor supports two modes: raw and visual. To switch between these modes, choose the hamburger icon on the right side of the editor and select *Toggle Editor Mode*. 

#### Raw mode
<a name="raw-mode"></a>

 Use `timechart` command for time series data, as shown in the following code example. 

```
index=os sourcetype=cpu | timechart span=1m avg(pctSystem) as system, avg(pctUser) as user, avg(pctIowait) as iowait
index=os sourcetype=ps | timechart span=1m limit=5 useother=false avg(cpu_load_percent) by process_name
```

 Queries support template variables, as shown in the following example. 

```
sourcetype=cpu | timechart span=1m avg($cpu)
```

 Keep in mind that Grafana is a time series–oriented application, and your search should return time series data (a timestamp and a value) or a single value. You can read about the [timechart](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Timechart) command and find more search examples in the official [Splunk Search Reference](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/WhatsInThisManual). 

#### Splunk Metrics and `mstats`
<a name="splunk-metrics-and-mstats"></a>

 Splunk 7.x provides the `mstats` command for analyzing metrics. To get charts working properly with `mstats`, combine it with the `timechart` command and set the `prestats=t` option. 

```
Deprecated syntax:
| mstats prestats=t avg(_value) AS Value WHERE index="collectd" metric_name="disk.disk_ops.read" OR metric_name="disk.disk_ops.write" by metric_name span=1m
| timechart avg(_value) span=1m by metric_name

Current syntax:
| mstats prestats=t avg(disk.disk_ops.read) avg(disk.disk_ops.write) WHERE index="collectd" by metric_name span=1m
| timechart avg(disk.disk_ops.read) avg(disk.disk_ops.write) span=1m
```

 Read more about the `mstats` command in the [Splunk Search Reference](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Mstats). 

#### Format as
<a name="format-as"></a>

 There are two supported result format modes: *Time series* (default) and *Table*. Table mode is suitable for use with the Table panel when you want to display aggregated data. It works with raw events (returning all selected fields) and with the `stats` search function, which returns table-like data. Examples: 

```
index="os" sourcetype="vmstat" | fields host, memUsedMB
index="os" sourcetype="ps" | stats avg(PercentProcessorTime) as "CPU time", latest(process_name) as "Process", avg(UsedBytes) as "Memory" by PID
```

 The result is similar to the *Statistics* tab in the Splunk UI.

 Read more about the `stats` function in the [Splunk Search Reference](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Stats). 

#### Visual mode
<a name="splunk-visual-mode"></a>

This mode provides step-by-step search creation. Note that this mode creates a `timechart` Splunk search. Select the index, source type, and metrics, and optionally set split-by fields. 

##### Metric
<a name="splunk-metric"></a>

 You can add multiple metrics to the search by choosing the *plus* button on the right side of the metric row. The metric editor contains a list of frequently used aggregations, but you can specify any other function here. Choose the aggregation segment (`avg` by default) and enter the function you need. Select the field you're interested in from the dropdown list (or enter it), and set an alias if you want. 

##### Split by and Where
<a name="split-by-and-where"></a>

 If you set a Split by field and use *Time series* mode, the Where editor becomes available. Choose *plus* and select an operator, aggregation, and value, for example *Where avg in top 10*. Note that this *Where* clause is part of *Split by*. See more at the [timechart docs](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/timechart#where_clause). 

#### Options
<a name="splunk-options"></a>

 To change the default timechart options, choose **Options** in the last row.

See more about these options in [timechart docs](https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/timechart). 

#### Rendered splunk search
<a name="rendered-splunk-search"></a>

 Choose the target letter at the left to collapse the editor and show the rendered Splunk search. 

### Annotations
<a name="splunk-annotations-2"></a>

Use annotations if you want to show Splunk alerts or events on a graph. An annotation can be either a predefined Splunk alert or a regular Splunk search. 

#### Splunk alert
<a name="splunk-alert"></a>

 Specify an alert name, or keep the field blank to get all fired alerts. Template variables are supported. 

#### Splunk search
<a name="splunk-search"></a>

 Use a Splunk search to get the events you need, as shown in the following example. 

```
index=os sourcetype=iostat | where total_ops > 400
index=os sourcetype=iostat | where total_ops > $io_threshold
```

 Template variables are supported. 

 The **Event field as text** option is suitable if you want to use field value as annotation text. The following example shows error message text from logs. 

```
Event field as text: _raw
Regex: WirelessRadioManagerd\[\d*\]: (.*)
```

 The regex allows you to extract a part of the message. 

### Template variables
<a name="splunk-template-variables"></a>

 The template variables feature supports Splunk queries that return a list of values, for example with the `stats` command. 

```
index=os sourcetype="iostat" | stats values(Device)
```

 This query returns a list of `Device` field values from the `iostat` source. You can then use these device names in time series queries or annotations. 
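
 For example, after defining a `$device` variable with the preceding query, a time series panel can filter on it. The following is a sketch reusing the same hypothetical `iostat` source type: 

```
index=os sourcetype="iostat" Device="$device" | timechart span=1m avg(total_ops)
```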

 There are two types of variable queries that can be used in Grafana. The first is a simple query (as presented earlier), which returns a list of values. The second type is a query that creates a key/value variable. The query should return two columns, named `_text` and `_value`. The `_text` column value should be unique (if it is not unique, the first value is used). The options in the dropdown list will have a text and a value, so that you can have a friendly name as the text and an ID as the value. 

 For example, this search returns a table with the columns `Name` (Docker container name) and `Id` (container ID). 

```
source=docker_inspect | stats count latest(Name) as Name by Id | table Name, Id
```

 To use the container name as the visible value for the variable and the ID as its real value, modify the query as in the following example. 

```
source=docker_inspect | stats count latest(Name) as Name by Id | table Name, Id | rename Name as "_text", Id as "_value"
```

#### Multi-value variables
<a name="splunk-multi-value-variables"></a>

 It’s possible to use multi-value variables in queries. The interpolated search depends on the variable usage context. The plugin supports a number of these contexts. Assume there’s a variable `$container` with the selected values `foo` and `bar`: 
+  Basic filter for `search` command 

  ```
  source=docker_stats $container
  =>
  source=docker_stats (foo OR bar)
  ```
+  Field-value filter 

  ```
  source=docker_stats container_name=$container
  =>
  source=docker_stats (container_name=foo OR container_name=bar)
  ```
+  Field-value filter with the `IN` operator and `in()` function 

  ```
  source=docker_stats container_name IN ($container)
  =>
  source=docker_stats container_name IN (foo, bar)
  
  source=docker_stats | where container_name in($container)
  =>
  source=docker_stats | where container_name in(foo, bar)
  ```

#### Multi-value variables and quotes
<a name="multi-value-variables-and-quotes"></a>

 If the variable is wrapped in quotes (either double or single), its values are also quoted, as in the following example. 

```
source=docker_stats container_name="$container"
=>
source=docker_stats (container_name="foo" OR container_name="bar")

source=docker_stats container_name='$container'
=>
source=docker_stats (container_name='foo' OR container_name='bar')
```

# Connect to a Splunk Infrastructure Monitoring data source
<a name="AMG-datasource-splunkinfra"></a>

Provides support for Splunk Infrastructure Monitoring (formerly SignalFx).

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## Adding the data source
<a name="bigquery-adding-the-data-source"></a>

1.  Open the Grafana console in the Amazon Managed Grafana workspace and make sure you are logged in. 

1.  In the side menu under **Configuration** (the gear icon), choose **Data Sources**. 

1.  Choose **Add data source**. 
**Note**  
 If you don't see the **Data Sources** link in your side menu, it means that your current user does not have the `Admin` role. 

1.  Select **Splunk Infrastructure Monitoring** from the list of data sources. 

1. Enter the following information:
   + For **Access Token**, enter the token that is generated by your SignalFx account. For more information, see [Authentication Tokens](https://docs.signalfx.com/en/latest/admin-guide/tokens.html).
   + For **Realm**, enter the self-contained deployment that hosts your organization. You can find your realm name on your profile page when signed in to the SignalFx user interface.

## Using the query editor
<a name="splunkinfra-query"></a>

The query editor accepts a [SignalFlow](https://dev.splunk.com/observability/docs/signalflow/) program/query.

For labels, a SignalFlow label such as `publish(label = 'foo')` is applied as metadata to the results: `"label":"foo"`
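
A minimal SignalFlow program that streams a metric and labels the published results might look like the following sketch (the `cpu.utilization` metric name is illustrative):

```
data('cpu.utilization').mean().publish(label = 'foo')
```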

For query type template variables, there is no **Query** field. Instead, you select one of the following query types:
+ Dimensions
+ Metrics
+ Tags

Ad-hoc filters are supported, allowing global filters using dimensions.

Grafana annotations are supported. When you create annotations, use SignalFlow Alerts or Events queries.

Example of getting alerts for a detector:

```
alerts(detector_name='Deployment').publish();
```

Example of getting custom Events by type:

```
events(eventType='simulated').publish();
```

# Connect to a Wavefront data source (VMware Tanzu Observability by Wavefront)
<a name="wavefront-datasource-for-AMG"></a>

 The Wavefront (VMware Tanzu Observability by Wavefront) data source enables Amazon Managed Grafana users to query the data they’re collecting directly from Wavefront and easily visualize it alongside any other metric, log, or tracing data source. This flexible, single-pane view makes it easier to track system health and debug issues. 

**Note**  
This data source is for Grafana Enterprise only. For more information, see [Manage access to Enterprise plugins](upgrade-to-enterprise-plugins.md).  
Additionally, in workspaces that support version 9 or newer, this data source might require you to install the appropriate plugin. For more information, see [Extend your workspace with plugins](grafana-plugins.md).

## What is Wavefront?
<a name="what-is-wavefront"></a>

 [Wavefront](https://www.wavefront.com) is a cloud monitoring and analytics tool developed by VMware. Wavefront is a cloud-hosted service where you send your time-series (metric) data – from CollectD, StatsD, JMX, Ruby’s logger, AWS, or other tools. With Wavefront, users can perform mathematical operations on those series, render charts to see anomalies, track KPIs, and create alerts. 

## Maximizing your tech stack with Wavefront and Grafana
<a name="maximizing-your-tech-stack-with-wavefront-and-AMG"></a>

 While Grafana and Wavefront sound similar on the surface, many organizations use both as critical parts of their observability workflows. 

 **Visualize without Moving Data Sources:** Grafana’s unique architecture queries data directly where it lives rather than moving it and paying for redundant storage and ingestion. 

 **Compose Panels from Varied Sources:** With pre-built and custom dashboards, bring data together from many different data sources into a single pane of glass. 

 **Transform and Compute at the User Level:** Users can transform data and run various computations on data they see, requiring less data preparation. 

 **Combine, Compute, and Visualize within Panels:** Create mixed-data source panels that display related data from Wavefront and other sources, such as Prometheus and InfluxDB. 

## Documentation
<a name="wavefront-documentation"></a>

### Features
<a name="wavefront-features-3"></a>
+  Timeseries Visualizations 
+  Table Visualizations 
+  Heatmap Visualizations 
+  Single Stat Visualizations 
+  Guided Query Editor 
+  Raw WQL Query Editor 
+  Annotations for event data 
+  Template Variables 
+  Ad-Hoc Filters 
+  Alerting 

### Configuration
<a name="wavefront-configuration-4"></a>

 Configuring the Wavefront data source is relatively straightforward. There are only two fields required to complete the configuration: `API URL` and `Token`. 
+  `API URL` will be the URL you use to access your Wavefront environment. Example: `https://myenvironment.wavefront.com`. 
+  `Token` must be generated from a user account or service account. 

  1.  To create a user account based token, log into your Wavefront environment, choose the cog on the top right corner of the page, choose your username (for example, `me@grafana.com`), select the **API Access** tab at the top of the user page, then copy an existing key or choose **generate**.

   1. To create a service account based token, log into your Wavefront environment, choose the cog in the top right corner of the page, and choose **Account Management**. In the left navigation, select **Accounts, Groups, & Roles**, choose the **Service Accounts** tab at the top, and then choose **Create New Account**. Enter a name for the service account; this can be anything you want. Copy the token that is provided under the **Tokens** section.

   1. The last step is to make sure that the **Accounts, Groups, & Roles** check box is selected under **Permissions**. 

 After you have the token, add it to the `Token` configuration field and you should be set! 


### Usage
<a name="wavefront-usage-6"></a>

#### Using the query editor
<a name="wavefront-using-the-query-editor"></a>

 The Wavefront query editor has two modes: **Query Builder** and **Raw Query**. To toggle between them, use the selector in the top right of the query form. 

 In **Query Builder** mode, you will be presented with four choices: 

1.  What metric do you want to query? 

1.  What aggregation do you want to perform on that metric? 

1.  How do you want to filter the results from that metric query? 

1.  Do you want to apply any additional functions to the result? 

 The metric selector is a categorized hierarchy. Select a category, then choose again to drill into the subcategories. Repeat this process until you have reached the metric that you want. 

 After selecting a metric, the available filters and filter values will be automatically populated for you. 

 In **Raw Query** mode, you will see a single field labeled **Query**. This allows you to run any [WQL](#wavefront-references) query that you want. 

#### Using filters
<a name="wavefront-using-filters-1"></a>

 The Wavefront plugin will dynamically query the appropriate filters for each metric. 

 To add a filter, choose the **+** next to the **Filters** label on the Wavefront query editor, select which field you want to filter on, and select a value to filter by. 

#### Using functions
<a name="wavefront-using-functions"></a>

 Functions provide an additional way to aggregate, manipulate, and perform calculations on the metric response data. To view the available functions, choose the dropdown list by the function label on the **Query Builder**. Based on the function you select, you will be able to perform further actions such as setting a group by field or applying thresholds. Users are able to chain multiple functions together to perform advanced calculations or data manipulations. 

#### Adding a query template variable
<a name="wavefront-adding-a-query-template-variable-1"></a>

1.  To create a new Wavefront template variable for a dashboard, choose the settings cog on the top right portion of the dashboard. 

1.  In the panel at the left, choose **Variables**. 

1.  At the top right of the Variables page, choose **New**. 

1.  Enter a **Name** and a **Label** for the template variable you want to create. **Name** is the value you will use inside of queries to reference the template variable. **Label** is a human friendly name to display for the template variable on the dashboard select panel. 

1.  Select the type **Query** for the type field (it should be selected by default). 

1.  Under the **Query Options** heading, select **Wavefront** in the **Data source** dropdown list. 

1.  See [Template Variable Query Structure](#template-variable-query-structure) for details on what should be entered into the **Query** field. 

1.  If you want to filter out any of the returned values from your query, enter a regular expression in the **Regex** input field. 

1.  Apply any sorting preferences you might have by choosing a sort type in the **Sort** dropdown list. 

1.  After verifying the configuration, choose **Add** to add the template variable, then choose **Save dashboard** on the left hand navigation panel to save your changes. 

#### Template variable query structure
<a name="template-variable-query-structure"></a>

+  Metric lists: `metrics: ts(…)` 
+  Source lists: `sources: ts(…)` 
+  Source tag lists: `sourceTags: ts(…)` 
+  Matching source tag lists: `matchingSourceTags: ts(…)` 
+  Tag name lists: `tagNames: ts(…)` 
+  Tag value lists: `tagValues(<tag>): ts(…)` 

 **Notes** 
+  The `s` at the end of each query type is optional. 
+  All-lowercase forms are supported. You can use `tagnames` or `tagNames`, but not `TAGNAMES`. 
+  Spaces around the `:` are optional. 

   **WARNING** 

   The **Multi-value** and **Include All option** settings are currently not supported by the Wavefront plugin. 
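
 For example, assuming a hypothetical metric namespace `requests.latency` and a hypothetical tag named `env`, each query type might look like the following: 

```
metrics: ts(requests.latency)
sources: ts(requests.latency)
sourceTags: ts(requests.latency)
matchingSourceTags: ts(requests.latency)
tagNames: ts(requests.latency)
tagValues(env): ts(requests.latency)
```

 Each query returns the corresponding list of values (metric names, source names, and so on) found in the time series that the `ts(…)` expression matches. 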

#### Using template variables
<a name="wavefront-using-template-variables-2"></a>

 After completing the steps to [add a new template variable](#wavefront-adding-a-query-template-variable-1), you’re now ready to use the template variable within your dashboard panels to create dynamic visualizations. 

1.  Add a new dashboard panel by using the **Add panel** icon at the top right corner of your dashboard. 

1.  Select the aggregate you want to use for your query. 

1.  Choose the **\+** icon beside the **Filters** label, and select the key type that matches your template variable (for example, `host=` for a host filter). 

1.  Enter the name of the template variable you created in the **Value** input field of the filter. 

1.  Save the dashboard. 

 You should now be able to cycle through different values of your template variable and have your panel update dynamically. 
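
 For example, if your template variable is named `host` (a hypothetical name), the filter might be configured as follows: 

```
Filter key:    host=
Filter value:  $host
```

 Grafana interpolates `$host` with the currently selected variable value, so when the dashboard selector is set to, for example, `web-01`, the panel queries only that host. 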

#### Using Ad-Hoc filters
<a name="wavefront-using-ad-hoc-filters"></a>

 To use ad hoc filters, you must create two template variables. The first is a helper variable that is used to select a metric, so that ad hoc filters can be populated for that metric name. The second is the actual ad hoc filter variable. 

**Important**  
 The required helper variable must be named `metriclink`. It can be a custom variable with the list of metrics that you want to use, or a query-based variable that uses the [Template Variable Query Structure](#template-variable-query-structure). If you want to populate the ad hoc filter fields with values from only a single metric, you can hide the `metriclink` template variable. 

 After creating the `metriclink` variable, you can add the ad hoc filter by following the same steps detailed in [Adding a Query Template Variable](#wavefront-adding-a-query-template-variable-1). The difference is that you select **Ad Hoc Filters** for the **Type**, and no **Query** input is required. 

#### Adding annotations
<a name="wavefront-adding-annotations"></a>

1.  To create a new Wavefront annotation for a dashboard, choose the settings cog on the top right portion of the dashboard. 

1.  In the panel at the left, choose **Annotations**. 

1.  At the top right of the Annotations page, choose **New**. 

1.  Enter a name for the annotation. This name is used as the label of the annotation toggle on the dashboard. 

1.  For **Data source**, select **Wavefront**. 

1.  By default, annotations return a maximum of 100 alert events. To change this, set the **Limit** field to the value that you want. 

1.  Choose **Add**. 

#### Using annotations
<a name="using-annotations"></a>

 When annotations are toggled on, you should now see the alert events and issues that correlate with a given time period. 

 If you pause on the bottom of an annotated section of a visualization, a pop-up window displays the alert name and provides a direct link to the alert in Wavefront. 

#### Using the Display Name field
<a name="wavefront-using-display-name-1"></a>

 This data source uses the **Display Name** field, on the **Field** tab of the **Options** panel, to shorten or alter a legend key based on its name, labels, or values. Other data sources use custom `alias` functionality to modify legend keys, but the **Display Name** field is a more consistent way to do so. 

### References
<a name="wavefront-references"></a>
+  [WQL (Wavefront Query Language)](https://docs.wavefront.com/query_language_reference.html) 