

# Use metric streams


You can use *metric streams* to continually stream CloudWatch metrics to a destination of your choice, with near-real-time delivery and low latency. Supported destinations include AWS destinations such as Amazon Simple Storage Service (Amazon S3) and several third-party service provider destinations.

There are three main usage scenarios for CloudWatch metric streams:
+ **Custom setup with Firehose**— Create a metric stream and direct it to an Amazon Data Firehose delivery stream that delivers your CloudWatch metrics to where you want them to go. You can stream them to a data lake such as Amazon S3, or to any destination or endpoint supported by Firehose including third-party providers. JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0 formats are supported natively, or you can configure transformations in your Firehose delivery stream to convert the data to a different format such as Parquet. With a metric stream, you can continually update monitoring data, or combine this CloudWatch metric data with billing and performance data to create rich datasets. You can then use tools such as Amazon Athena to get insight into cost optimization, resource performance, and resource utilization.
+ **Quick S3 setup**— Stream to Amazon Simple Storage Service with a quick setup process. By default, CloudWatch creates the resources needed for the stream. JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0 formats are supported.
+ **Quick AWS partner setup**— CloudWatch provides a quick setup experience for some third-party partners. You can use third-party service providers to monitor, troubleshoot, and analyze your applications using the streamed CloudWatch data. When you use the quick partner setup workflow, you need to provide only a destination URL and API key for your destination, and CloudWatch handles the rest of the setup. Quick partner setup is available for the following third-party providers: 

  
  + Datadog
  + Dynatrace
  + Elastic
  + New Relic
  + Splunk Observability Cloud
  + Sumo Logic

You can stream all of your CloudWatch metrics, or use filters to stream only specified metrics. Each metric stream can include up to 1000 filters that either include or exclude metric namespaces or specific metrics. A single metric stream can have only include or exclude filters, but not both.

After a metric stream is created, if new metrics are created that match the filters in place, the new metrics are automatically included in the stream.
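The include/exclude behavior can be sketched as follows. This is a minimal illustration, not the CloudWatch implementation; the filter dictionaries mirror the `MetricStreamFilter` shape (`Namespace`, optional `MetricNames`) used by the `PutMetricStream` API.

```python
def is_streamed(namespace, metric_name, include=None, exclude=None):
    """Decide whether a metric is included in a stream, given its filters."""
    # A metric stream can have include filters or exclude filters, never both.
    if include and exclude:
        raise ValueError("A metric stream cannot have both include and exclude filters")

    def matches(filters):
        for f in filters:
            if f["Namespace"] == namespace:
                names = f.get("MetricNames")
                # A namespace filter with no metric names matches every
                # metric in that namespace.
                if not names or metric_name in names:
                    return True
        return False

    if include:
        return matches(include)
    if exclude:
        return not matches(exclude)
    return True  # no filters: stream all metrics

# Include only AWS/EC2 metrics; a new AWS/EC2 metric created later
# would also match and be streamed automatically.
inc = [{"Namespace": "AWS/EC2"}]
print(is_streamed("AWS/EC2", "CPUUtilization", include=inc))  # True
print(is_streamed("AWS/RDS", "CPUUtilization", include=inc))  # False
```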

There is no limit on the number of metric streams per account or per Region, and no limit on the number of metric updates being streamed.

Each stream can use JSON format, OpenTelemetry 1.0.0 format, or OpenTelemetry 0.7.0 format. You can edit the output format of a metric stream at any time, such as for upgrading from OpenTelemetry 0.7.0 to OpenTelemetry 1.0.0. For more information about output formats, see [CloudWatch metric stream output in JSON format](CloudWatch-metric-streams-formats-json.md), [CloudWatch metric stream output in OpenTelemetry 1.0.0 format](CloudWatch-metric-streams-formats-opentelemetry-100.md), and [CloudWatch metric stream output in OpenTelemetry 0.7.0 format](CloudWatch-metric-streams-formats-opentelemetry.md).

For metric streams in monitoring accounts, you can choose whether to include metrics from the source accounts linked to that monitoring account. For more information, see [CloudWatch cross-account observability](CloudWatch-Unified-Cross-Account.md).

Metric streams always include the `Minimum`, `Maximum`, `SampleCount`, and `Sum` statistics. You can also choose to include additional statistics at an additional charge. For more information, see [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md). 

Metric streams pricing is based on the number of metric updates. You also incur charges from Firehose for the delivery stream used for the metric stream. For more information, see [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/).

**Topics**
+ [Set up a metric stream](CloudWatch-metric-streams-setup.md)
+ [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md)
+ [Metric stream operation and maintenance](CloudWatch-metric-streams-operation.md)
+ [Monitoring your metric streams with CloudWatch metrics](CloudWatch-metric-streams-monitoring.md)
+ [Trust between CloudWatch and Firehose](CloudWatch-metric-streams-trustpolicy.md)
+ [CloudWatch metric stream output in JSON format](CloudWatch-metric-streams-formats-json.md)
+ [CloudWatch metric stream output in OpenTelemetry 1.0.0 format](CloudWatch-metric-streams-formats-opentelemetry-100.md)
+ [CloudWatch metric stream output in OpenTelemetry 0.7.0 format](CloudWatch-metric-streams-formats-opentelemetry.md)
+ [Troubleshooting metric streams in CloudWatch](CloudWatch-metric-streams-troubleshoot.md)

# Set up a metric stream


Use the steps in the following sections to set up a CloudWatch metric stream.

After a metric stream is created, the time it takes for metric data to appear at the destination depends on the buffering settings configured on the Firehose delivery stream. Buffering is expressed as a maximum payload size and a maximum wait time, and delivery occurs when whichever is reached first. If these are set to the minimum values (60 seconds, 1 MB), the expected latency is within 3 minutes, provided that the selected CloudWatch namespaces have active metric updates.

In a CloudWatch metric stream, data is sent every minute. Data might arrive at the final destination out of order. All specified metrics in the specified namespaces are sent in the metric stream, except metrics with a timestamp that is more than two days old. 

For each combination of metric name and namespace that you stream, all dimension combinations of that metric name and namespace are streamed.

For metric streams in monitoring accounts, you can choose whether to include metrics from the source accounts linked to that monitoring account. For more information, see [CloudWatch cross-account observability](CloudWatch-Unified-Cross-Account.md).

To create and manage metric streams, you must be logged on to an account that has the **CloudWatchFullAccess** policy and the `iam:PassRole` permission, or an account that has the following list of permissions:
+ `iam:PassRole`
+ `cloudwatch:PutMetricStream`
+ `cloudwatch:DeleteMetricStream`
+ `cloudwatch:GetMetricStream`
+ `cloudwatch:ListMetricStreams`
+ `cloudwatch:StartMetricStreams`
+ `cloudwatch:StopMetricStreams`

If you're going to have CloudWatch set up the IAM role needed for metric streams, you must also have the `iam:CreateRole` and `iam:PutRolePolicy` permissions.

**Important**  
A user with the `cloudwatch:PutMetricStream` permission has access to the CloudWatch metric data that is being streamed, even if they don't have the `cloudwatch:GetMetricData` permission.
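As a sketch, a customer managed IAM policy granting the permissions listed above might look like the following. The `Resource` values are illustrative assumptions; in practice, scope `iam:PassRole` to the actual service role that your metric stream uses.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricStream",
        "cloudwatch:DeleteMetricStream",
        "cloudwatch:GetMetricStream",
        "cloudwatch:ListMetricStreams",
        "cloudwatch:StartMetricStreams",
        "cloudwatch:StopMetricStreams"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/MyMetricStreamRole"
    }
  ]
}
```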

**Topics**
+ [Custom setup with Firehose](CloudWatch-metric-streams-setup-datalake.md)
+ [Use Quick Amazon S3 setup](CloudWatch-metric-streams-setup-Quick-S3.md)
+ [Quick partner setup](CloudWatch-metric-streams-QuickPartner.md)

# Custom setup with Firehose


Use this method to create a metric stream and direct it to an Amazon Data Firehose delivery stream that delivers your CloudWatch metrics to where you want them to go. You can stream them to a data lake such as Amazon S3, or to any destination or endpoint supported by Firehose including third-party providers.

JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0 formats are supported natively, or you can configure transformations in your Firehose delivery stream to convert the data to a different format such as Parquet. With a metric stream, you can continually update monitoring data, or combine this CloudWatch metric data with billing and performance data to create rich datasets. You can then use tools such as Amazon Athena to get insight into cost optimization, resource performance, and resource utilization.

You can use the CloudWatch console, the AWS CLI, AWS CloudFormation, or the AWS Cloud Development Kit (AWS CDK) to set up a metric stream.

The Firehose delivery stream that you use for your metric stream must be in the same account and the same Region where you set up the metric stream. To achieve cross-Region functionality, you can configure the Firehose delivery stream to stream to a final destination that is in a different account or a different Region.

## CloudWatch console


This section describes how to use the CloudWatch console to set up a metric stream using Firehose.

**To set up a custom metric stream using Firehose**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, **Streams**. Then choose **Create metric stream**.

1. (Optional) If you are signed in to an account that is set up as a monitoring account in CloudWatch cross-account observability, you can choose whether to include metrics from linked source accounts in this metric stream. To include metrics from source accounts, choose **Include source account metrics**.

1. Choose **Custom setup with Firehose**.

1. For **Select your Kinesis Data Firehose stream**, select the Firehose delivery stream to use. It must be in the same account. The default format for this option is OpenTelemetry 0.7.0, but you can change the format later in this procedure.

1. (Optional) You can choose **Select existing service role** to use an existing IAM role instead of having CloudWatch create a new one for you.

1. (Optional) To change the output format from the default format for your scenario, choose **Change output format**. The supported formats are JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0.

1. For **Metrics to be streamed**, choose either **All metrics** or **Select metrics**.

   If you choose **All metrics**, all metrics from this account will be included in the stream.

   Consider carefully whether to stream all metrics, because the more metrics that you stream, the higher your metric stream charges will be.

   If you choose **Select metrics**, do one of the following:
   + To stream most metric namespaces, choose **Exclude** and select the namespaces or metrics to exclude. When you specify a namespace in **Exclude**, you can optionally select some specific metrics from that namespace to exclude. If you choose to exclude a namespace but don't then select metrics in that namespace, all metrics from that namespace are excluded.
   + To include only a few metric namespaces or metrics in the metric stream, choose **Include** and then select the namespaces or metrics to include. If you choose to include a namespace but don't then select metrics in that namespace, all metrics from that namespace are included.

1. (Optional) To stream additional statistics for some of these metrics beyond Minimum, Maximum, SampleCount, and Sum, choose **Add additional statistics**. Either choose **Add recommended metrics** to add some commonly used statistics, or manually select the namespace and metric name to stream additional statistics for. Next, select the additional statistics to stream.

   To then choose another group of metrics to stream a different set of additional statistics for, choose **Add additional statistics**. Each metric can include as many as 20 additional statistics, and as many as 100 metrics within a metric stream can include additional statistics.

   Streaming additional statistics incurs more charges. For more information, see [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md).

   For definitions of the additional statistics, see [CloudWatch statistics definitions](Statistics-definitions.md).

1. (Optional) Customize the name of the new metric stream under **Metric stream name**.

1. Choose **Create metric stream**.

## AWS CLI or AWS API


Use the following steps to create a CloudWatch metric stream.

**To use the AWS CLI or AWS API to create a metric stream**

1. If you're streaming to Amazon S3, first create the bucket. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).

1. Create the Firehose delivery stream. For more information, see [Creating a Firehose stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

1. Create an IAM role that enables CloudWatch to write to the Firehose delivery stream. For more information about the contents of this role, see [Trust between CloudWatch and Firehose](CloudWatch-metric-streams-trustpolicy.md).

1. Use the `aws cloudwatch put-metric-stream` CLI command or the `PutMetricStream` API to create the CloudWatch metric stream.
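As an example of step 4, the request parameters can be assembled as follows. The stream name and ARNs are placeholders for the resources you created in the earlier steps; you can pass the resulting JSON to `aws cloudwatch put-metric-stream --cli-input-json`, or use the equivalent boto3 `put_metric_stream` call.

```python
import json

# Placeholder values -- replace with the Firehose delivery stream and
# IAM role you created in the previous steps.
params = {
    "Name": "my-metric-stream",
    "FirehoseArn": "arn:aws:firehose:us-east-1:123456789012:deliverystream/my-stream",
    "RoleArn": "arn:aws:iam::123456789012:role/MyMetricStreamRole",
    "OutputFormat": "json",  # or "opentelemetry1.0" / "opentelemetry0.7"
    # Include filters: stream only these namespaces.
    "IncludeFilters": [{"Namespace": "AWS/EC2"}, {"Namespace": "AWS/RDS"}],
}

# Write the parameters out for the CLI:
#   aws cloudwatch put-metric-stream --cli-input-json file://params.json
print(json.dumps(params, indent=2))
```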

## AWS CloudFormation


You can use CloudFormation to set up a metric stream. For more information, see [AWS::CloudWatch::MetricStream](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudwatch-metricstream.html).

**To use CloudFormation to create a metric stream**

1. If you're streaming to Amazon S3, first create the bucket. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).

1. Create the Firehose delivery stream. For more information, see [Creating a Firehose stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

1. Create an IAM role that enables CloudWatch to write to the Firehose delivery stream. For more information about the contents of this role, see [Trust between CloudWatch and Firehose](CloudWatch-metric-streams-trustpolicy.md).

1. Create the stream in CloudFormation. For more information, see [AWS::CloudWatch::MetricStream](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudwatch-metricstream.html).
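For step 4, the resource might look like the following template fragment. The logical IDs `MyDeliveryStream` and `MetricStreamRole` are assumptions standing in for the Firehose delivery stream and IAM role defined elsewhere in the same template.

```yaml
Resources:
  MyMetricStream:
    Type: AWS::CloudWatch::MetricStream
    Properties:
      Name: my-metric-stream
      FirehoseArn: !GetAtt MyDeliveryStream.Arn
      RoleArn: !GetAtt MetricStreamRole.Arn
      OutputFormat: json
      IncludeFilters:
        - Namespace: AWS/EC2
        - Namespace: AWS/RDS
```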

## AWS Cloud Development Kit (AWS CDK)


You can use AWS Cloud Development Kit (AWS CDK) to set up a metric stream. 

**To use the AWS CDK to create a metric stream**

1. If you're streaming to Amazon S3, first create the bucket. For more information, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html).

1. Create the Firehose delivery stream. For more information, see [Creating a Firehose stream](https://docs.aws.amazon.com/firehose/latest/dev/basic-create.html).

1. Create an IAM role that enables CloudWatch to write to the Firehose delivery stream. For more information about the contents of this role, see [Trust between CloudWatch and Firehose](CloudWatch-metric-streams-trustpolicy.md).

1. Create the metric stream. The metric stream resource is available in the AWS CDK as a Level 1 (L1) construct named `CfnMetricStream`. For more information, see [Using L1 constructs](https://docs.aws.amazon.com/cdk/latest/guide/constructs.html#constructs_l1_using).

# Use Quick Amazon S3 setup


The **Quick S3 Setup** method works well if you want to quickly set up a stream to Amazon S3 and you don't need any formatting transformation beyond the supported JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0 formats. CloudWatch will create all necessary resources including the Firehose delivery stream and the necessary IAM roles. The default format for this option is JSON, but you can change the format while you set up the stream.

If you want the final format to be Parquet or Optimized Row Columnar (ORC), instead follow the steps in [Custom setup with Firehose](CloudWatch-metric-streams-setup-datalake.md).

## CloudWatch console


This section describes how to use the CloudWatch console to set up a metric stream to Amazon S3 using Quick S3 setup.

**To set up a metric stream using Quick S3 setup**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, **Streams**. Then choose **Create metric stream**.

1. (Optional) If you are signed in to an account that is set up as a monitoring account in CloudWatch cross-account observability, you can choose whether to include metrics from linked source accounts in this metric stream. To include metrics from source accounts, choose **Include source account metrics**.

1. Choose **Quick S3 setup**. CloudWatch will create all necessary resources including the Firehose delivery stream and the necessary IAM roles. The default format for this option is JSON, but you can change the format later in this procedure.

1. (Optional) Choose **Select existing resources** to use an existing S3 bucket or existing IAM roles instead of having CloudWatch create new ones for you.

1. (Optional) To change the output format from the default format for your scenario, choose **Change output format**. The supported formats are JSON, OpenTelemetry 1.0.0, and OpenTelemetry 0.7.0.

1. For **Metrics to be streamed**, choose either **All metrics** or **Select metrics**.

   If you choose **All metrics**, all metrics from this account will be included in the stream.

   Consider carefully whether to stream all metrics, because the more metrics that you stream, the higher your metric stream charges will be.

   If you choose **Select metrics**, do one of the following:
   + To stream most metric namespaces, choose **Exclude** and select the namespaces or metrics to exclude. When you specify a namespace in **Exclude**, you can optionally select some specific metrics from that namespace to exclude. If you choose to exclude a namespace but don't then select metrics in that namespace, all metrics from that namespace are excluded.
   + To include only a few metric namespaces or metrics in the metric stream, choose **Include** and then select the namespaces or metrics to include. If you choose to include a namespace but don't then select metrics in that namespace, all metrics from that namespace are included.

1. (Optional) To stream additional statistics for some of these metrics beyond Minimum, Maximum, SampleCount, and Sum, choose **Add additional statistics**. Either choose **Add recommended metrics** to add some commonly used statistics, or manually select the namespace and metric name to stream additional statistics for. Next, select the additional statistics to stream.

   To then choose another group of metrics to stream a different set of additional statistics for, choose **Add additional statistics**. Each metric can include as many as 20 additional statistics, and as many as 100 metrics within a metric stream can include additional statistics.

   Streaming additional statistics incurs more charges. For more information, see [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md).

   For definitions of the additional statistics, see [CloudWatch statistics definitions](Statistics-definitions.md).

1. (Optional) Customize the name of the new metric stream under **Metric stream name**.

1. Choose **Create metric stream**.

# Quick partner setup


CloudWatch provides a quick setup experience for the following third-party partners. To use this workflow, you need to provide only a destination URL and API key for your destination. CloudWatch handles the rest of the setup, including creating the Firehose delivery stream and the necessary IAM roles.

**Important**  
Before you use quick partner setup to create a metric stream, we strongly recommend that you read that partner's documentation, linked in the following list.
+ [Datadog](https://docs.datadoghq.com/integrations/guide/aws-cloudwatch-metric-streams-with-kinesis-data-firehose/)
+ [Dynatrace](https://www.dynatrace.com/support/help/dynatrace-api/basics/dynatrace-api-authentication)
+ [Elastic](https://www.elastic.co/docs/current/integrations/awsfirehose)
+ [New Relic](https://docs.newrelic.com/install/aws-cloudwatch/)
+ [Splunk Observability Cloud](https://docs.splunk.com/observability/en/gdi/get-data-in/connect/aws/aws-console-ms.html)
+ [Sumo Logic](https://www.sumologic.com)

When you set up a metric stream to one of these partners, the stream is created with some default settings, as listed in the following sections.

**Topics**
+ [Set up a metric stream using quick partner setup](#CloudWatch-metric-streams-QuickPartner-setup)
+ [Datadog stream defaults](#CloudWatch-metric-streams-QuickPartner-Datadog)
+ [Dynatrace stream defaults](#CloudWatch-metric-streams-QuickPartner-Dynatrace)
+ [Elastic stream defaults](#CloudWatch-metric-streams-QuickPartner-Elastic)
+ [New Relic stream defaults](#CloudWatch-metric-streams-QuickPartner-NewRelic)
+ [Splunk Observability Cloud stream defaults](#CloudWatch-metric-streams-QuickPartner-Splunk)
+ [Sumo Logic stream defaults](#CloudWatch-metric-streams-QuickPartner-Sumologic)

## Set up a metric stream using quick partner setup


CloudWatch provides a quick setup option for some third-party partners. Before you start the steps in this section, you must have certain information for the partner. This information might include a destination URL and/or an API key for your partner destination. You should also read the documentation at the partner's website linked in the previous section, and the defaults for that partner listed in the following sections.

To stream to a third-party destination not supported by quick setup, follow the instructions in [Custom setup with Firehose](CloudWatch-metric-streams-setup-datalake.md) to set up a stream using Firehose, and then send those metrics from Firehose to the final destination.

**To use quick partner setup to create a metric stream to a third-party provider**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, **Streams**. Then choose **Create metric stream**.

1. (Optional) If you are signed in to an account that is set up as a monitoring account in CloudWatch cross-account observability, you can choose whether to include metrics from linked source accounts in this metric stream. To include metrics from source accounts, choose **Include source account metrics**.

1. Choose **Quick Amazon Web Services partner setup**.

1. Select the name of the partner that you want to stream metrics to.

1. For **Endpoint URL**, enter the destination URL.

1. For **Access Key** or **API Key**, enter the access key for the partner. Not all partners require an access key.

1. For **Metrics to be streamed**, choose either **All metrics** or **Select metrics**.

   If you choose **All metrics**, all metrics from this account will be included in the stream.

   Consider carefully whether to stream all metrics, because the more metrics that you stream, the higher your metric stream charges will be.

   If you choose **Select metrics**, do one of the following:
   + To stream most metric namespaces, choose **Exclude** and select the namespaces or metrics to exclude. When you specify a namespace in **Exclude**, you can optionally select some specific metrics from that namespace to exclude. If you choose to exclude a namespace but don't then select metrics in that namespace, all metrics from that namespace are excluded.
   + To include only a few metric namespaces or metrics in the metric stream, choose **Include** and then select the namespaces or metrics to include. If you choose to include a namespace but don't then select metrics in that namespace, all metrics from that namespace are included.

1. (Optional) To stream additional statistics for some of these metrics beyond Minimum, Maximum, SampleCount, and Sum, choose **Add additional statistics**. Either choose **Add recommended metrics** to add some commonly used statistics, or manually select the namespace and metric name to stream additional statistics for. Next, select the additional statistics to stream.

   To then choose another group of metrics to stream a different set of additional statistics for, choose **Add additional statistics**. Each metric can include as many as 20 additional statistics, and as many as 100 metrics within a metric stream can include additional statistics.

   Streaming additional statistics incurs more charges. For more information, see [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md).

   For definitions of the additional statistics, see [CloudWatch statistics definitions](Statistics-definitions.md).

1. (Optional) Customize the name of the new metric stream under **Metric stream name**.

1. Choose **Create metric stream**.

## Datadog stream defaults


Quick partner setup streams to Datadog use the following defaults:
+ **Output format:** OpenTelemetry 0.7.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 4 MB
+ **Firehose stream retry option:** Duration of 60 seconds

When you use quick partner setup to create a metric stream to Datadog and you stream certain metrics, by default those metrics include some additional statistics. Streaming additional statistics can incur additional charges. For more information about statistics and their charges, see [Statistics that can be streamed](CloudWatch-metric-streams-statistics.md).

The following list shows the metrics that have additional statistics streamed by default, if you choose to stream those metrics. You can choose to de-select these additional statistics before you start the stream.
+ **`Duration` in `AWS/Lambda`:** p50, p80, p95, p99, p99.9
+ **`PostRuntimeExtensionDuration` in `AWS/Lambda`:** p50, p99
+ **`FirstByteLatency` and `TotalRequestLatency` in `AWS/S3`:** p50, p90, p95, p99, p99.9
+ **`ResponseLatency` in `AWS/Polly` and `TargetResponseTime` in `AWS/ApplicationELB`:** p50, p90, p95, p99
+ **`Latency` and `IntegrationLatency` in `AWS/ApiGateway`:** p90, p95, p99
+ **`Latency` and `TargetResponseTime` in `AWS/ELB`:** p95, p99
+ **`RequestLatency` in `AWS/AppRunner`:** p50, p95, p99
+ **`ActivityTime`, `ExecutionTime`, `LambdaFunctionRunTime`, `LambdaFunctionScheduleTime`, `LambdaFunctionTime`, `ActivityRunTime`, and `ActivityScheduleTime` in `AWS/States`:** p95, p99
+ **`EncoderBitRate`, `ConfiguredBitRate`, and `ConfiguredBitRateAvailable` in `AWS/MediaLive`:** p90
+ **`Latency` in `AWS/AppSync`:** p90

## Dynatrace stream defaults


Quick partner setup streams to Dynatrace use the following defaults:
+ **Output format:** OpenTelemetry 0.7.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 5 MB
+ **Firehose stream retry option:** Duration of 600 seconds

## Elastic stream defaults


Quick partner setup streams to Elastic use the following defaults:
+ **Output format:** OpenTelemetry 1.0.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 1 MB
+ **Firehose stream retry option:** Duration of 60 seconds

## New Relic stream defaults


Quick partner setup streams to New Relic use the following defaults:
+ **Output format:** OpenTelemetry 0.7.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 1 MB
+ **Firehose stream retry option:** Duration of 60 seconds

## Splunk Observability Cloud stream defaults


Quick partner setup streams to Splunk Observability Cloud use the following defaults:
+ **Output format:** OpenTelemetry 1.0.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 1 MB
+ **Firehose stream retry option:** Duration of 300 seconds

## Sumo Logic stream defaults


Quick partner setup streams to Sumo Logic use the following defaults:
+ **Output format:** OpenTelemetry 0.7.0
+ **Firehose stream content encoding:** GZIP
+ **Firehose stream buffering options:** Interval of 60 seconds, size of 1 MB
+ **Firehose stream retry option:** Duration of 60 seconds

# Statistics that can be streamed


Metric streams always include the following statistics: `Minimum`, `Maximum`, `SampleCount`, and `Sum`. You can also choose to include the following additional statistics in a metric stream. This choice is on a per-metric basis. For more information about these statistics, see [CloudWatch statistics definitions](Statistics-definitions.md).
+ Percentile values such as p95 or p99 (for streams with either JSON or OpenTelemetry format)
+ Trimmed mean (only for streams with the JSON format)
+ Winsorized mean (only for streams with the JSON format)
+ Trimmed count (only for streams with the JSON format)
+ Trimmed sum (only for streams with the JSON format)
+ Percentile rank (only for streams with the JSON format)
+ Interquartile mean (only for streams with the JSON format)

Streaming additional statistics incurs additional charges. Streaming between one and five of these additional statistics for a particular metric is billed as an additional metric update. Thereafter, each additional set of up to five of these statistics is billed as another metric update. 

 For example, suppose that for one metric you are streaming the following six additional statistics: p95, p99, p99.9, Trimmed mean, Winsorized mean, and Trimmed sum. Each update of this metric is billed as three metric updates: one for the metric update which includes the default statistics, one for the first five additional statistics, and one for the sixth additional statistic. Adding up to four more additional statistics for a total of ten would not increase the billing, but an eleventh additional statistic would do so.
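The billing rule above reduces to a short calculation: one update for the default statistics, plus one for each set of up to five additional statistics.

```python
import math

def billed_updates_per_metric(additional_statistics: int) -> int:
    """Billed metric updates for one update of a metric, given how many
    additional statistics are streamed for that metric."""
    # One update covers the default statistics (Minimum, Maximum,
    # SampleCount, Sum); each set of up to five additional statistics
    # is billed as one more update.
    return 1 + math.ceil(additional_statistics / 5)

print(billed_updates_per_metric(6))   # six additional statistics -> 3
print(billed_updates_per_metric(10))  # ten -> still 3
print(billed_updates_per_metric(11))  # eleven -> 4
```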

When you specify a metric name and namespace combination to stream additional statistics, all dimension combinations of that metric name and namespace are streamed with the additional statistics. 

CloudWatch metric streams publish a metric, `TotalMetricUpdate`, which reflects the base number of metric updates plus the extra metric updates incurred by streaming additional statistics. For more information, see [Monitoring your metric streams with CloudWatch metrics](CloudWatch-metric-streams-monitoring.md).

For more information, see [Amazon CloudWatch Pricing](https://aws.amazon.com/cloudwatch/pricing/).

**Note**  
Some metrics do not support percentiles. Percentile statistics for these metrics are excluded from the stream and do not incur metric stream charges. For example, some metrics in the `AWS/ECS` namespace do not support percentiles.

The additional statistics that you configure are streamed only if they match the filters for the stream. For example, if you create a stream that has only `EC2` and `RDS` in the include filters and your statistics configuration lists `EC2` and `Lambda`, the stream includes `EC2` metrics with additional statistics and `RDS` metrics with only the default statistics, and doesn't include `Lambda` metrics at all.
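The interaction between stream filters and the additional-statistics configuration can be sketched like this. It is a simplified model that matches only on namespace, not the CloudWatch implementation.

```python
DEFAULT_STATS = ["Minimum", "Maximum", "SampleCount", "Sum"]

def streamed_statistics(namespace, include_namespaces, extra_stats_config):
    """Statistics streamed for a metric, given the stream's include
    filters and its additional-statistics configuration."""
    if namespace not in include_namespaces:
        return []  # filtered out: not in the stream at all
    # Additional statistics apply only to metrics that pass the filters.
    return DEFAULT_STATS + extra_stats_config.get(namespace, [])

include = {"AWS/EC2", "AWS/RDS"}
extra = {"AWS/EC2": ["p99"], "AWS/Lambda": ["p99"]}

print(streamed_statistics("AWS/EC2", include, extra))     # defaults + p99
print(streamed_statistics("AWS/RDS", include, extra))     # defaults only
print(streamed_statistics("AWS/Lambda", include, extra))  # [] (not streamed)
```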

# Metric stream operation and maintenance


Metric streams are always in one of two states, **Running** or **Stopped**.
+ **Running**— The metric stream is running correctly. There might not be any metric data streamed to the destination because of the filters on the stream.
+ **Stopped**— The metric stream has been explicitly stopped by someone, and not because of an error. It might be useful to stop your stream to temporarily pause the streaming of data without deleting the stream.

If you stop and restart a metric stream, the metric data that was published to CloudWatch while the metric stream was stopped is not backfilled to the metric stream.

If you change the output format of a metric stream, in certain cases you might see a small amount of metric data written to the destination in both the old format and the new format. To avoid this situation, create a new Firehose delivery stream with the same configuration as your current one, then switch to the new delivery stream and change the output format at the same time. This way, the Kinesis records with different output formats are stored in Amazon S3 as separate objects. Later, you can direct the traffic back to the original Firehose delivery stream and delete the second delivery stream. 

**To view, edit, stop, and start your metric streams**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Metrics**, **Streams**.

   The list of streams appears, and the **Status** column displays whether each stream is running or stopped.

1. To stop or start a metric stream, select the stream and choose **Stop** or **Start**.

1. To see the details about a metric stream, select the stream and choose **View details**.

1. To change the stream's output format, filters, destination Firehose stream, or roles, choose **Edit** and make the changes that you want.

   If you change the filters, there might be some gaps in the metric data during the transition.

# Monitoring your metric streams with CloudWatch metrics

Metric streams emit CloudWatch metrics about their health and operation in the `AWS/CloudWatch/MetricStreams` namespace. Each of the following metrics is emitted both with a `MetricStreamName` dimension and with no dimensions. Use the metrics with no dimensions to see aggregated values across all of your metric streams, and the metrics with the `MetricStreamName` dimension to see the values for a single metric stream.

For all of these metrics, values are emitted only for metric streams that are in the **Running** state.


| Metric | Description | 
| --- | --- | 
|  `MetricUpdate`  |  The number of metric updates sent to the metric stream. If no metric updates are streamed during a time period, this metric is not emitted during that time period. If you stop the metric stream, this metric stops being emitted until the metric stream is started again. Valid Statistic: `Sum` Units: None | 
|  `TotalMetricUpdate`  |  This is calculated as **MetricUpdate plus a number based on the additional statistics that are being streamed**. For each unique namespace and metric name combination, streaming 1-5 additional statistics adds 1 to the `TotalMetricUpdate`, streaming 6-10 additional statistics adds 2 to `TotalMetricUpdate`, and so on.  Valid Statistic: `Sum` Units: None | 
|  `PublishErrorRate`  |  The number of unrecoverable errors that occur when putting data into the Firehose delivery stream. If no errors occur during a time period, this metric is not emitted during that time period. If you stop the metric stream, this metric stops being emitted until the metric stream is started again. Valid Statistic: `Average` to see the rate of metric updates unable to be written. This value will be between 0.0 and 1.0. Units: None  | 

# Trust between CloudWatch and Firehose


The Firehose delivery stream must trust CloudWatch through an IAM role that has write permissions to Firehose. These permissions can be limited to the single Firehose delivery stream that the CloudWatch metric stream uses. The IAM role must trust the `streams.metrics.cloudwatch.amazonaws.com` service principal.

If you use the CloudWatch console to create a metric stream, you can have CloudWatch create the role with the correct permissions. If you use another method to create a metric stream, or you want to create the IAM role itself, it must contain the following permissions policy and trust policy.

**Permissions policy**

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Action": [
                "firehose:PutRecord",
                "firehose:PutRecordBatch"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/*"
        }
    ]
}
```

**Trust policy**

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "streams.metrics.cloudwatch.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```


Metric data is streamed by CloudWatch to the destination Firehose delivery stream on behalf of the source that owns the metric stream resource. 

# CloudWatch metric stream output in JSON format

In a CloudWatch metric stream that uses the JSON format, each Firehose record contains multiple JSON objects separated by a newline character (`\n`). Each object includes a single data point of a single metric.

The JSON format that is used is fully compatible with AWS Glue and with Amazon Athena. If you have a Firehose delivery stream and an AWS Glue table formatted correctly, the format can be automatically transformed into Parquet format or Optimized Row Columnar (ORC) format before being stored in S3. For more information about transforming the format, see [Converting Your Input Record Format in Firehose](https://docs.aws.amazon.com/firehose/latest/dev/record-format-conversion.html). For more information about the correct format for AWS Glue, see [Which AWS Glue schema should I use for JSON output format?](#CloudWatch-metric-streams-format-glue).

In the JSON format, the valid values for `unit` are the same as for the value of `unit` in the `MetricDatum` API structure. For more information, see [MetricDatum](https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_MetricDatum.html). The value for the `timestamp` field is in epoch milliseconds, such as `1616004674229`.
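
For example, an epoch-millisecond `timestamp` value can be converted to a readable time like this:

```python
from datetime import datetime, timezone

ts_ms = 1616004674229  # value of the "timestamp" field
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
# dt is a timezone-aware datetime in UTC
```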

The following is an example of the format. In this example, the JSON is formatted for easy reading, but in practice the whole format is on a single line.

```
{
    "metric_stream_name": "MyMetricStream",
    "account_id": "1234567890",
    "region": "us-east-1",
    "namespace": "AWS/EC2",
    "metric_name": "DiskWriteOps",
    "dimensions": {
        "InstanceId": "i-123456789012"
    },
    "timestamp": 1611929698000,
    "value": {
        "count": 3.0,
        "sum": 20.0,
        "max": 18.0,
        "min": 0.0,
        "p99": 17.56,
        "p99.9": 17.8764,
        "TM(25%:75%)": 16.43
    },
    "unit": "Seconds"
}
```
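
Because each Firehose record holds newline-delimited JSON objects, a consumer can split and parse a record payload like this (a minimal sketch using made-up sample data):

```python
import json

def parse_metric_stream_record(payload: bytes):
    # Each non-empty line is one JSON object for one data point.
    return [json.loads(line) for line in payload.decode("utf-8").splitlines() if line]

sample = (
    b'{"metric_name": "DiskWriteOps", "value": {"count": 3.0, "sum": 20.0}}\n'
    b'{"metric_name": "DiskReadOps", "value": {"count": 1.0, "sum": 4.0}}\n'
)
records = parse_metric_stream_record(sample)
```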

## Which AWS Glue schema should I use for JSON output format?


The following is an example of a JSON representation of the `StorageDescriptor` for an AWS Glue table, which would then be used by Firehose. For more information about `StorageDescriptor`, see [StorageDescriptor](https://docs.aws.amazon.com/glue/latest/webapi/API_StorageDescriptor.html).

```
{
  "Columns": [
    {
      "Name": "metric_stream_name",
      "Type": "string"
    },
    {
      "Name": "account_id",
      "Type": "string"
    },
    {
      "Name": "region",
      "Type": "string"
    },
    {
      "Name": "namespace",
      "Type": "string"
    },
    {
      "Name": "metric_name",
      "Type": "string"
    },
    {
      "Name": "timestamp",
      "Type": "timestamp"
    },
    {
      "Name": "dimensions",
      "Type": "map<string,string>"
    },
    {
      "Name": "value",
      "Type": "struct<min:double,max:double,count:double,sum:double,p99:double,p99.9:double>"
    },
    {
      "Name": "unit",
      "Type": "string"
    }
  ],
  "Location": "s3://amzn-s3-demo-bucket/",
  "InputFormat": "org.apache.hadoop.mapred.TextInputFormat",
  "OutputFormat": "org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat",
  "SerdeInfo": {
    "SerializationLibrary": "org.apache.hive.hcatalog.data.JsonSerDe"
  },
  "Parameters": {
    "classification": "json"
  }
}
```

The preceding example is for data written on Amazon S3 in JSON format. Replace the values in the following fields with the indicated values to store the data in Parquet format or Optimized Row Columnar (ORC) format.
+ **Parquet:**
  + inputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
  + outputFormat: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
  + SerDeInfo.serializationLib: org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe
  + parameters.classification: parquet
+ **ORC:**
  + inputFormat: org.apache.hadoop.hive.ql.io.orc.OrcInputFormat
  + outputFormat: org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat
  + SerDeInfo.serializationLib: org.apache.hadoop.hive.ql.io.orc.OrcSerde
  + parameters.classification: orc

# CloudWatch metric stream output in OpenTelemetry 1.0.0 format

**Note**  
With the OpenTelemetry 1.0.0 format, metric attributes are encoded as a list of `KeyValue` objects instead of the `StringKeyValue` type used in the 0.7.0 format. For consumers, this is the only major change between the 0.7.0 and 1.0.0 formats. A parser generated from the 0.7.0 proto files won't parse metric attributes encoded in the 1.0.0 format. The same is true in reverse: a parser generated from the 1.0.0 proto files won't parse metric attributes encoded in the 0.7.0 format.

OpenTelemetry is a collection of tools, APIs, and SDKs. You can use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis. OpenTelemetry is part of the Cloud Native Computing Foundation. For more information, see [OpenTelemetry](https://opentelemetry.io/).

For information about the full OpenTelemetry 1.0.0 specification, see [Release version 1.0.0](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v1.0.0).

A Kinesis record can contain one or more `ExportMetricsServiceRequest` OpenTelemetry data structures. Each data structure starts with a header with an `UnsignedVarInt32` indicating the record length in bytes. Each `ExportMetricsServiceRequest` may contain data from multiple metrics at once.

The following is a string representation of the `ExportMetricsServiceRequest` OpenTelemetry data structure. OpenTelemetry serializes data with the Google Protocol Buffers binary protocol, so the serialized form itself is not human-readable.

```
resource_metrics {
  resource {
    attributes {
      key: "cloud.provider"
      value {
        string_value: "aws"
      }
    }
    attributes {
      key: "cloud.account.id"
      value {
        string_value: "123456789012"
      }
    }
    attributes {
      key: "cloud.region"
      value {
        string_value: "us-east-1"
      }
    }
    attributes {
      key: "aws.exporter.arn"
      value {
        string_value: "arn:aws:cloudwatch:us-east-1:123456789012:metric-stream/MyMetricStream"
      }
    }
  }
  scope_metrics {
    metrics {
      name: "amazonaws.com/AWS/DynamoDB/ConsumedReadCapacityUnits"
      unit: "NoneTranslated"
      summary {
        data_points {
          start_time_unix_nano: 60000000000
          time_unix_nano: 120000000000
          count: 1
          sum: 1.0
          quantile_values {
            value: 1.0
          }
          quantile_values {
            quantile: 0.95
            value: 1.0
          }
          quantile_values {
            quantile: 0.99
            value: 1.0
          }
          quantile_values {
            quantile: 1.0
            value: 1.0
          }
          attributes {
            key: "Namespace"
            value {
              string_value: "AWS/DynamoDB"
            }
          }
          attributes {
            key: "MetricName"
            value {
              string_value: "ConsumedReadCapacityUnits"
            }
          }
          attributes {
            key: "Dimensions"
            value {
              kvlist_value {
                values {
                  key: "TableName"
                  value {
                    string_value: "MyTable"
                  }
                }
              }
            }
          }
        }
        data_points {
          start_time_unix_nano: 70000000000
          time_unix_nano: 130000000000
          count: 2
          sum: 5.0
          quantile_values {
            value: 2.0
          }
          quantile_values {
            quantile: 1.0
            value: 3.0
          }
          attributes {
            key: "Namespace"
            value {
              string_value: "AWS/DynamoDB"
            }
          }
          attributes {
            key: "MetricName"
            value {
              string_value: "ConsumedReadCapacityUnits"
            }
          }
          attributes {
            key: "Dimensions"
            value {
              kvlist_value {
                values {
                  key: "TableName"
                  value {
                    string_value: "MyTable"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

**Top-level object to serialize OpenTelemetry metric data**

`ExportMetricsServiceRequest` is the top-level wrapper to serialize an OpenTelemetry exporter payload. It contains one or more `ResourceMetrics`.

```
message ExportMetricsServiceRequest {
  // An array of ResourceMetrics.
  // For data coming from a single resource this array will typically contain one
  // element. Intermediary nodes (such as OpenTelemetry Collector) that receive
  // data from multiple origins typically batch the data before forwarding further and
  // in that case this array will contain multiple elements.
  repeated opentelemetry.proto.metrics.v1.ResourceMetrics resource_metrics = 1;
}
```

`ResourceMetrics` is the top-level object to represent MetricData objects. 

```
// A collection of ScopeMetrics from a Resource.
message ResourceMetrics {
  reserved 1000;

  // The resource for the metrics in this message.
  // If this field is not set then no resource info is known.
  opentelemetry.proto.resource.v1.Resource resource = 1;

  // A list of metrics that originate from a resource.
  repeated ScopeMetrics scope_metrics = 2;

  // This schema_url applies to the data in the "resource" field. It does not apply
  // to the data in the "scope_metrics" field which have their own schema_url field.
  string schema_url = 3;
}
```

**The Resource object**

A `Resource` object is a value-pair object that contains some information about the resource that generated the metrics. For metrics created by AWS, the data structure contains the Amazon Resource Name (ARN) of the resource related to the metric, such as an EC2 instance or an S3 bucket.

The `Resource` object contains an attribute called `attributes`, which stores a list of key-value pairs.
+ `cloud.account.id` contains the account ID
+ `cloud.region` contains the Region
+ `aws.exporter.arn` contains the metric stream ARN
+ `cloud.provider` is always `aws`.

```
// Resource information.
message Resource {
  // Set of attributes that describe the resource.
  // Attribute keys MUST be unique (it is not allowed to have more than one
  // attribute with the same key).
  repeated opentelemetry.proto.common.v1.KeyValue attributes = 1;

  // dropped_attributes_count is the number of dropped attributes. If the value is 0, then
  // no attributes were dropped.
  uint32 dropped_attributes_count = 2;
}
```

**The ScopeMetrics object**

CloudWatch does not fill the `scope` field; it fills only the `metrics` field that it exports.

```
// A collection of Metrics produced by an Scope.
message ScopeMetrics {
  // The instrumentation scope information for the metrics in this message.
  // Semantically when InstrumentationScope isn't set, it is equivalent with
  // an empty instrumentation scope name (unknown).
  opentelemetry.proto.common.v1.InstrumentationScope scope = 1;

  // A list of metrics that originate from an instrumentation library.
  repeated Metric metrics = 2;

  // This schema_url applies to all metrics in the "metrics" field.
  string schema_url = 3;
}
```

**The Metric object**

The metric object contains some metadata and a `Summary` data field that contains a list of `SummaryDataPoint`.

For metric streams, the metadata is as follows:
+ `name` will be `amazonaws.com/metric_namespace/metric_name`
+ `description` will be blank
+ `unit` will be filled by mapping the metric datum's unit to the case-sensitive variant of the Unified code for Units of Measure. For more information, see [Translations with OpenTelemetry 1.0.0 format in CloudWatch](CloudWatch-metric-streams-formats-opentelemetry-translation-100.md) and [The Unified Code For Units of Measure](https://ucum.org/ucum.html).
+ `type` will be `SUMMARY`

```
message Metric {
  reserved 4, 6, 8;

  // name of the metric, including its DNS name prefix. It must be unique.
  string name = 1;

  // description of the metric, which can be used in documentation.
  string description = 2;

  // unit in which the metric value is reported. Follows the format
  // described by http://unitsofmeasure.org/ucum.html.
  string unit = 3;

  // Data determines the aggregation type (if any) of the metric, what is the
  // reported value type for the data points, as well as the relationship to
  // the time interval over which they are reported.
  oneof data {
    Gauge gauge = 5;
    Sum sum = 7;
    Histogram histogram = 9;
    ExponentialHistogram exponential_histogram = 10;
    Summary summary = 11;
  }
}

message Summary {
  repeated SummaryDataPoint data_points = 1;
}
```

**The SummaryDataPoint object**

The SummaryDataPoint object contains the value of a single data point in a time series in a DoubleSummary metric.

```
// SummaryDataPoint is a single data point in a timeseries that describes the
// time-varying values of a Summary metric.
message SummaryDataPoint {
  reserved 1;

  // The set of key/value pairs that uniquely identify the timeseries from
  // where this point belongs. The list may be empty (may contain 0 elements).
  // Attribute keys MUST be unique (it is not allowed to have more than one
  // attribute with the same key).
  repeated opentelemetry.proto.common.v1.KeyValue attributes = 7;

  // StartTimeUnixNano is optional but strongly encouraged, see the
  // detailed comments above Metric.
  //
  // Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
  // 1970.
  fixed64 start_time_unix_nano = 2;

  // TimeUnixNano is required, see the detailed comments above Metric.
  //
  // Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
  // 1970.
  fixed64 time_unix_nano = 3;

  // count is the number of values in the population. Must be non-negative.
  fixed64 count = 4;

  // sum of the values in the population. If count is zero then this field
  // must be zero.
  //
  // Note: Sum should only be filled out when measuring non-negative discrete
  // events, and is assumed to be monotonic over the values of these events.
  // Negative events *can* be recorded, but sum should not be filled out when
  // doing so.  This is specifically to enforce compatibility w/ OpenMetrics,
  // see: https://github.com/OpenObservability/OpenMetrics/blob/main/specification/OpenMetrics.md#summary
  double sum = 5;

  // Represents the value at a given quantile of a distribution.
  //
  // To record Min and Max values following conventions are used:
  // - The 1.0 quantile is equivalent to the maximum value observed.
  // - The 0.0 quantile is equivalent to the minimum value observed.
  //
  // See the following issue for more context:
  // https://github.com/open-telemetry/opentelemetry-proto/issues/125
  message ValueAtQuantile {
    // The quantile of a distribution. Must be in the interval
    // [0.0, 1.0].
    double quantile = 1;

    // The value at the given quantile of a distribution.
    //
    // Quantile values must NOT be negative.
    double value = 2;
  }

  // (Optional) list of values at different quantiles of the distribution calculated
  // from the current snapshot. The quantiles must be strictly increasing.
  repeated ValueAtQuantile quantile_values = 6;

  // Flags that apply to this specific data point.  See DataPointFlags
  // for the available flags and their meaning.
  uint32 flags = 8;
}
```

For more information, see [Translations with OpenTelemetry 1.0.0 format in CloudWatch](CloudWatch-metric-streams-formats-opentelemetry-translation-100.md).

# Translations with OpenTelemetry 1.0.0 format in CloudWatch


CloudWatch performs some transformations to put CloudWatch data into OpenTelemetry format.

**Translating namespace, metric name, and dimensions**

These attributes are key-value pairs encoded in the mapping.
+ One attribute has the key `Namespace` and its value is the namespace of the metric
+ One attribute has the key `MetricName` and its value is the name of the metric
+ One pair has the key `Dimensions` and its value is a nested list of key-value pairs. Each pair in this list maps to a CloudWatch metric dimension, where the pair's key is the name of the dimension and its value is the value of the dimension.

**Translating Average, Sum, SampleCount, Min and Max**

The Summary datapoint enables CloudWatch to export all of these statistics using one datapoint.
+ `startTimeUnixNano` contains the CloudWatch `startTime`
+ `timeUnixNano` contains the CloudWatch `endTime`
+ `sum` contains the Sum statistic.
+ `count` contains the SampleCount statistic.
+ `quantile_values` contains a `valueAtQuantile` object for each quantile statistic in the stream:
  + `valueAtQuantile.quantile = 0.0` with `valueAtQuantile.value = Min value`
  + `valueAtQuantile.quantile = 0.99` with `valueAtQuantile.value = p99 value`
  + `valueAtQuantile.quantile = 0.999` with `valueAtQuantile.value = p99.9 value`
  + `valueAtQuantile.quantile = 1.0` with `valueAtQuantile.value = Max value`

Resources that consume the metric stream can calculate the Average statistic as **Sum/SampleCount**.
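
As a sketch, a consumer could recover the CloudWatch statistics from a translated data point like this (the dict mirrors the `SummaryDataPoint` fields described above):

```python
def summary_to_statistics(dp):
    quantiles = {q["quantile"]: q["value"] for q in dp["quantile_values"]}
    return {
        "SampleCount": dp["count"],
        "Sum": dp["sum"],
        "Average": dp["sum"] / dp["count"],   # Average = Sum / SampleCount
        "Minimum": quantiles.get(0.0),        # quantile 0.0 carries Min
        "Maximum": quantiles.get(1.0),        # quantile 1.0 carries Max
    }

# Second data point from the earlier example: count=2, sum=5.0,
# quantile 0.0 -> 2.0, quantile 1.0 -> 3.0
point = {"count": 2, "sum": 5.0,
         "quantile_values": [{"quantile": 0.0, "value": 2.0},
                             {"quantile": 1.0, "value": 3.0}]}
stats = summary_to_statistics(point)
```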

**Translating units**

CloudWatch units are mapped to the case-sensitive variant of the Unified code for Units of Measure, as shown in the following table. For more information, see [The Unified Code For Units of Measure](https://ucum.org/ucum.html).


| CloudWatch | OpenTelemetry | 
| --- | --- | 
|  Second or Seconds |  s | 
|  Microseconds |  us | 
|  Milliseconds |  ms | 
|  Bytes |  By | 
|  Kilobytes |  kBy | 
|  Megabytes |  MBy | 
|  Gigabytes |  GBy | 
|  Terabytes |  TBy | 
|  Bits |  bit | 
|  Kilobits |  kbit | 
|  Megabits |  Mbit | 
|  Gigabits |  Gbit | 
|  Terabits |  Tbit | 
|  Percent |  % | 
|  Count |  {Count} | 
|  None |  1 | 

Units that are combined with a slash are mapped by applying the OpenTelemetry conversion to each unit. For example, Bytes/Second is mapped to By/s.
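
The slash rule can be sketched with a partial translation table (only a few units shown; see the full table above):

```python
UNIT_MAP = {
    "Second": "s", "Seconds": "s",
    "Bytes": "By", "Bits": "bit",
    "Percent": "%", "Count": "{Count}", "None": "1",
}

def to_ucum(cloudwatch_unit):
    # Compound units such as Bytes/Second are converted piecewise.
    return "/".join(UNIT_MAP.get(p, p) for p in cloudwatch_unit.split("/"))
```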

# How to parse OpenTelemetry 1.0.0 messages


This section provides information to help you get started with parsing OpenTelemetry 1.0.0.

First, you should get language-specific bindings, which enable you to parse OpenTelemetry 1.0.0 messages in your preferred language.

**To get language-specific bindings**
+ The steps depend on your preferred language.
  + To use Java, add the following Maven dependency to your Java project: [opentelemetry-proto 0.14.1](https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-proto/0.14.1).
  + To use any other language, follow these steps:

    1. Make sure that your language is supported by checking the list at [Generating Your Classes](https://developers.google.com/protocol-buffers/docs/proto3#generating).

    1. Install the Protobuf compiler by following the steps at [Download Protocol Buffers](https://developers.google.com/protocol-buffers/docs/downloads).

    1. Download the OpenTelemetry 1.0.0 ProtoBuf definitions at [Release version 1.0.0](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v1.0.0). 

    1. Confirm that you are in the root folder of the downloaded OpenTelemetry 1.0.0 ProtoBuf definitions. Then create a `src` folder and then run the command to generate language-specific bindings. For more information, see [Generating Your Classes](https://developers.google.com/protocol-buffers/docs/proto3#generating). 

       The following is an example of how to generate JavaScript bindings.

       ```
       protoc --proto_path=./ --js_out=import_style=commonjs,binary:src \
       opentelemetry/proto/common/v1/common.proto \
       opentelemetry/proto/resource/v1/resource.proto \
       opentelemetry/proto/metrics/v1/metrics.proto \
       opentelemetry/proto/collector/metrics/v1/metrics_service.proto
       ```

The following section includes examples of using the language-specific bindings that you can build using the previous instructions.

**Java**

```
package com.example;

import io.opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceRequest;

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class MyOpenTelemetryParser {

    public List<ExportMetricsServiceRequest> parse(InputStream inputStream) throws IOException {
        List<ExportMetricsServiceRequest> result = new ArrayList<>();

        ExportMetricsServiceRequest request;
        /* A Kinesis record can contain multiple `ExportMetricsServiceRequest`
           records, each of them starting with a header with an
           UnsignedVarInt32 indicating the record length in bytes:
            ------ --------------------------- ------ -----------------------
           |UINT32|ExportMetricsServiceRequest|UINT32|ExportMetricsService...
            ------ --------------------------- ------ -----------------------
         */
        while ((request = ExportMetricsServiceRequest.parseDelimitedFrom(inputStream)) != null) {
            // Do whatever we want with the parsed message
            result.add(request);
        }

        return result;
    }
}
```

**JavaScript**

This example assumes that the root folder containing the generated bindings is `./`.

The `data` argument of the `parseRecord` function can be one of the following types:
+ `Uint8Array` (optimal)
+ `Buffer` (optimal under Node.js)
+ `Array` of 8-bit integers

```
const pb = require('google-protobuf')
const pbMetrics =
    require('./opentelemetry/proto/collector/metrics/v1/metrics_service_pb')

function parseRecord(data) {
    const result = []

    // Loop until we've read all the data from the buffer
    while (data.length) {
        /* A Kinesis record can contain multiple `ExportMetricsServiceRequest`
           records, each of them starting with a header with an
           UnsignedVarInt32 indicating the record length in bytes:
            ------ --------------------------- ------ -----------------------
           |UINT32|ExportMetricsServiceRequest|UINT32|ExportMetricsService...
            ------ --------------------------- ------ -----------------------
         */
        const reader = new pb.BinaryReader(data)
        const messageLength = reader.decoder_.readUnsignedVarint32()
        const messageFrom = reader.decoder_.cursor_
        const messageTo = messageFrom + messageLength

        // Extract the current `ExportMetricsServiceRequest` message to parse
        const message = data.subarray(messageFrom, messageTo)

        // Parse the current message using the ProtoBuf library
        const parsed =
            pbMetrics.ExportMetricsServiceRequest.deserializeBinary(message)

        // Do whatever we want with the parsed message
        result.push(parsed.toObject())

        // Shrink the remaining buffer, removing the already parsed data
        data = data.subarray(messageTo)
    }

    return result
}
```

**Python**

You must read the var-int delimiters yourself, or use the internal methods `_VarintBytes(size)` and `_DecodeVarint32(buffer, position)`. These return the position in the buffer just after the length prefix. The read side constructs a new buffer that is limited to reading only the bytes of the message. 

```
from google.protobuf.internal.encoder import _VarintBytes
from google.protobuf.internal.decoder import _DecodeVarint32

# Write a length-delimited message
size = my_metric.ByteSize()
f.write(_VarintBytes(size))
f.write(my_metric.SerializeToString())

# Read a length-delimited message back
msg_len, new_pos = _DecodeVarint32(buf, 0)
msg_buf = buf[new_pos:new_pos + msg_len]
request = metrics_service_pb.ExportMetricsServiceRequest()
request.ParseFromString(msg_buf)
```

**Go**

Use `Buffer.DecodeMessage()`.

**C#**

Use `CodedInputStream`. This class can read size-delimited messages.

**C++**

The functions described in `google/protobuf/util/delimited_message_util.h` can read size-delimited messages.

**Other languages**

For other languages, see [Download Protocol Buffers](https://developers.google.com/protocol-buffers/docs/downloads).

When implementing the parser, consider that a Kinesis record can contain multiple `ExportMetricsServiceRequest` Protocol Buffers messages, each of them starting with a header with an `UnsignedVarInt32` that indicates the record length in bytes.
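
As a language-agnostic illustration of that framing, the following pure-Python sketch splits a buffer of length-delimited messages without any protobuf dependency (the length header is a standard base-128 varint):

```python
def read_varint32(buf, pos):
    # Decode a base-128 UnsignedVarInt32 starting at pos;
    # return (value, position just after the varint).
    result = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result, pos
        shift += 7

def split_delimited(buf):
    # Collect each length-delimited message body in the buffer.
    pos, messages = 0, []
    while pos < len(buf):
        length, pos = read_varint32(buf, pos)
        messages.append(buf[pos:pos + length])
        pos += length
    return messages
```

Each extracted message body can then be handed to a protobuf parser such as `ExportMetricsServiceRequest.ParseFromString`.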

# CloudWatch metric stream output in OpenTelemetry 0.7.0 format

OpenTelemetry is a collection of tools, APIs, and SDKs. You can use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis. OpenTelemetry is part of the Cloud Native Computing Foundation. For more information, see [OpenTelemetry](https://opentelemetry.io/).

For information about the full OpenTelemetry 0.7.0 specification, see [v0.7.0 release](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.7.0).

A Kinesis record can contain one or more `ExportMetricsServiceRequest` OpenTelemetry data structures. Each data structure starts with a header with an `UnsignedVarInt32` indicating the record length in bytes. Each `ExportMetricsServiceRequest` may contain data from multiple metrics at once.

The following is a string representation of the `ExportMetricsServiceRequest` OpenTelemetry data structure. OpenTelemetry serializes data with the Google Protocol Buffers binary protocol, so the serialized form itself is not human-readable.

```
resource_metrics {
  resource {
    attributes {
      key: "cloud.provider"
      value {
        string_value: "aws"
      }
    }
    attributes {
      key: "cloud.account.id"
      value {
        string_value: "2345678901"
      }
    }
    attributes {
      key: "cloud.region"
      value {
        string_value: "us-east-1"
      }
    }
    attributes {
      key: "aws.exporter.arn"
      value {
        string_value: "arn:aws:cloudwatch:us-east-1:123456789012:metric-stream/MyMetricStream"
      }
    }
  }
  instrumentation_library_metrics {
    metrics {
      name: "amazonaws.com/AWS/DynamoDB/ConsumedReadCapacityUnits"
      unit: "1"
      double_summary {
        data_points {
          labels {
            key: "Namespace"
            value: "AWS/DynamoDB"
          }
          labels {
            key: "MetricName"
            value: "ConsumedReadCapacityUnits"
          }
          labels {
            key: "TableName"
            value: "MyTable"
          }
          start_time_unix_nano: 1604948400000000000
          time_unix_nano: 1604948460000000000
          count: 1
          sum: 1.0
          quantile_values {
            quantile: 0.0
            value: 1.0
          }
          quantile_values {
            quantile: 0.95
            value: 1.0
          }          
          quantile_values {
            quantile: 0.99
            value: 1.0
          }
          quantile_values {
            quantile: 1.0
            value: 1.0
          }
        }
        data_points {
          labels {
            key: "Namespace"
            value: "AWS/DynamoDB"
          }
          labels {
            key: "MetricName"
            value: "ConsumedReadCapacityUnits"
          }
          labels {
            key: "TableName"
            value: "MyTable"
          }
          start_time_unix_nano: 1604948460000000000
          time_unix_nano: 1604948520000000000
          count: 2
          sum: 5.0
          quantile_values {
            quantile: 0.0
            value: 2.0
          }
          quantile_values {
            quantile: 1.0
            value: 3.0
          }
        }
      }
    }
  }
}
```

**Top-level object to serialize OpenTelemetry metric data**

`ExportMetricsServiceRequest` is the top-level wrapper to serialize an OpenTelemetry exporter payload. It contains one or more `ResourceMetrics`.

```
message ExportMetricsServiceRequest {
  // An array of ResourceMetrics.
  // For data coming from a single resource this array will typically contain one
  // element. Intermediary nodes (such as OpenTelemetry Collector) that receive
  // data from multiple origins typically batch the data before forwarding further and
  // in that case this array will contain multiple elements.
  repeated opentelemetry.proto.metrics.v1.ResourceMetrics resource_metrics = 1;
}
```

`ResourceMetrics` is the top-level object to represent MetricData objects. 

```
// A collection of InstrumentationLibraryMetrics from a Resource.
message ResourceMetrics {
  // The resource for the metrics in this message.
  // If this field is not set then no resource info is known.
  opentelemetry.proto.resource.v1.Resource resource = 1;
  
  // A list of metrics that originate from a resource.
  repeated InstrumentationLibraryMetrics instrumentation_library_metrics = 2;
}
```

**The Resource object**

A `Resource` object is a set of key-value pairs that describes the resource that generated the metrics. For metrics created by AWS, the data structure contains the Amazon Resource Name (ARN) of the resource related to the metric, such as an EC2 instance or an S3 bucket.

The `Resource` object contains a field called `attributes`, which stores a list of key-value pairs.
+ `cloud.account.id` contains the account ID.
+ `cloud.region` contains the Region.
+ `aws.exporter.arn` contains the metric stream ARN.
+ `cloud.provider` is always `aws`.

```
// Resource information.
message Resource {
  // Set of labels that describe the resource.
  repeated opentelemetry.proto.common.v1.KeyValue attributes = 1;
  
  // dropped_attributes_count is the number of dropped attributes. If the value is 0,
  // no attributes were dropped.
  uint32 dropped_attributes_count = 2;
}
```
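As an illustration of reading these attributes, the following sketch flattens an attribute list into a plain dictionary once the message has been decoded into Python dicts (for example with the protobuf library's `MessageToDict`). The `flatten_attributes` helper is hypothetical, not part of any AWS or OpenTelemetry library.

```python
def flatten_attributes(attributes):
    """Flatten an OpenTelemetry attribute list into a plain dict.

    Each entry is shaped like the Resource attributes shown earlier:
    {"key": "...", "value": {"string_value": "..."}}.
    """
    result = {}
    for attr in attributes:
        # CloudWatch resource attributes only use string_value
        result[attr["key"]] = attr["value"].get("string_value")
    return result

resource_attrs = flatten_attributes([
    {"key": "cloud.provider", "value": {"string_value": "aws"}},
    {"key": "cloud.region", "value": {"string_value": "us-east-1"}},
])
```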

**The InstrumentationLibraryMetrics object**

The `instrumentation_library` field is not filled. Only the `metrics` field is filled with the metrics that are being exported.

```
// A collection of Metrics produced by an InstrumentationLibrary.
message InstrumentationLibraryMetrics {
  // The instrumentation library information for the metrics in this message.
  // If this field is not set then no library info is known.
  opentelemetry.proto.common.v1.InstrumentationLibrary instrumentation_library = 1;
  // A list of metrics that originate from an instrumentation library.
  repeated Metric metrics = 2;
}
```

**The Metric object**

The `Metric` object contains a `DoubleSummary` data field that contains a list of `DoubleSummaryDataPoint` objects.

```
message Metric {
  // name of the metric, including its DNS name prefix. It must be unique.
  string name = 1;

  // description of the metric, which can be used in documentation.
  string description = 2;

  // unit in which the metric value is reported. Follows the format
  // described by http://unitsofmeasure.org/ucum.html.
  string unit = 3;

  oneof data {
    IntGauge int_gauge = 4;
    DoubleGauge double_gauge = 5;
    IntSum int_sum = 6;
    DoubleSum double_sum = 7;
    IntHistogram int_histogram = 8;
    DoubleHistogram double_histogram = 9;
    DoubleSummary double_summary = 11;
  }
}

message DoubleSummary {
  repeated DoubleSummaryDataPoint data_points = 1;
}
```

**The MetricDescriptor object**

The MetricDescriptor object contains metadata. For more information, see [metrics.proto](https://github.com/open-telemetry/opentelemetry-proto/blob/main/opentelemetry/proto/metrics/v1/metrics.proto#L110) on GitHub.

For metric streams, the MetricDescriptor has the following contents:
+ `name` will be `amazonaws.com/metric_namespace/metric_name`
+ `description` will be blank.
+ `unit` will be filled by mapping the metric datum's unit to the case-sensitive variant of the Unified Code for Units of Measure. For more information, see [Translations with OpenTelemetry 0.7.0 format in CloudWatch](CloudWatch-metric-streams-formats-opentelemetry-translation.md) and [The Unified Code For Units of Measure](https://ucum.org/ucum.html).
+ `type` will be `SUMMARY`.

**The DoubleSummaryDataPoint object**

The DoubleSummaryDataPoint object contains the value of a single data point in a time series in a DoubleSummary metric.

```
// DoubleSummaryDataPoint is a single data point in a timeseries that describes the
// time-varying values of a Summary metric.
message DoubleSummaryDataPoint {
  // The set of labels that uniquely identify this timeseries.
  repeated opentelemetry.proto.common.v1.StringKeyValue labels = 1;

  // start_time_unix_nano is the last time when the aggregation value was reset
  // to "zero". For some metric types this is ignored, see data types for more
  // details.
  //
  // The aggregation value is over the time interval (start_time_unix_nano,
  // time_unix_nano].
  //
  // Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
  // 1970.
  //
  // Value of 0 indicates that the timestamp is unspecified. In that case the
  // timestamp may be decided by the backend.
  fixed64 start_time_unix_nano = 2;

  // time_unix_nano is the moment when this aggregation value was reported.
  //
  // Value is UNIX Epoch time in nanoseconds since 00:00:00 UTC on 1 January
  // 1970.
  fixed64 time_unix_nano = 3;

  // count is the number of values in the population. Must be non-negative.
  fixed64 count = 4;

  // sum of the values in the population. If count is zero then this field
  // must be zero.
  double sum = 5;

  // Represents the value at a given quantile of a distribution.
  //
  // To record Min and Max values following conventions are used:
  // - The 1.0 quantile is equivalent to the maximum value observed.
  // - The 0.0 quantile is equivalent to the minimum value observed.
  message ValueAtQuantile {
    // The quantile of a distribution. Must be in the interval
    // [0.0, 1.0].
    double quantile = 1;

    // The value at the given quantile of a distribution.
    double value = 2;
  }

  // (Optional) list of values at different quantiles of the distribution calculated
  // from the current snapshot. The quantiles must be strictly increasing.
  repeated ValueAtQuantile quantile_values = 6;
}
```

For more information, see [Translations with OpenTelemetry 0.7.0 format in CloudWatch](CloudWatch-metric-streams-formats-opentelemetry-translation.md).

# Translations with OpenTelemetry 0.7.0 format in CloudWatch


CloudWatch performs some transformations to put CloudWatch data into OpenTelemetry format.

**Translating namespace, metric name, and dimensions**

These attributes are key-value pairs encoded in the mapping.
+ One pair contains the namespace of the metric
+ One pair contains the name of the metric
+ For each dimension, CloudWatch stores the following pair: `metricDatum.Dimensions[i].Name, metricDatum.Dimensions[i].Value`
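Sketched with plain Python dicts, the label list for one datum can be built as follows. The `to_otel_labels` helper is illustrative, not part of any AWS library.

```python
def to_otel_labels(namespace, metric_name, dimensions):
    """Build the OpenTelemetry 0.7.0 label list for one CloudWatch datum.

    dimensions: list of (name, value) pairs, as in metricDatum.Dimensions.
    """
    labels = [
        {"key": "Namespace", "value": namespace},
        {"key": "MetricName", "value": metric_name},
    ]
    # One label pair per dimension, preserving order
    for name, value in dimensions:
        labels.append({"key": name, "value": value})
    return labels

labels = to_otel_labels("AWS/DynamoDB", "ConsumedReadCapacityUnits",
                        [("TableName", "MyTable")])
```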

**Translating Average, Sum, SampleCount, Min and Max**

The Summary datapoint enables CloudWatch to export all of these statistics using one datapoint.
+ `startTimeUnixNano` contains the CloudWatch `startTime`
+ `timeUnixNano` contains the CloudWatch `endTime`
+ `sum` contains the Sum statistic.
+ `count` contains the SampleCount statistic.
+ `quantile_values` contains at least two `valueAtQuantile` objects, with additional quantiles present when the stream includes extra percentile statistics:
  + `valueAtQuantile.quantile = 0.0` with `valueAtQuantile.value = Min value`
  + `valueAtQuantile.quantile = 0.99` with `valueAtQuantile.value = p99 value` (when streamed)
  + `valueAtQuantile.quantile = 0.999` with `valueAtQuantile.value = p99.9 value` (when streamed)
  + `valueAtQuantile.quantile = 1.0` with `valueAtQuantile.value = Max value`

Resources that consume the metric stream can calculate the Average statistic as **Sum/SampleCount**.
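For example, using the second data point from the sample message earlier (count 2, sum 5.0), the CloudWatch statistics can be recovered like this. `summary_stats` is an illustrative helper operating on plain dicts, not part of any AWS library.

```python
def summary_stats(data_point):
    """Recover CloudWatch statistics from one summary data point.

    Quantile 0.0 carries the Min value and quantile 1.0 the Max value.
    """
    quantiles = {q["quantile"]: q["value"] for q in data_point["quantile_values"]}
    return {
        "SampleCount": data_point["count"],
        "Sum": data_point["sum"],
        "Min": quantiles[0.0],
        "Max": quantiles[1.0],
        # Average is not exported directly; consumers derive it
        "Average": data_point["sum"] / data_point["count"],
    }

stats = summary_stats({
    "count": 2, "sum": 5.0,
    "quantile_values": [
        {"quantile": 0.0, "value": 2.0},
        {"quantile": 1.0, "value": 3.0},
    ],
})
```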

**Translating units**

CloudWatch units are mapped to the case-sensitive variant of the Unified Code for Units of Measure, as shown in the following table. For more information, see [The Unified Code For Units of Measure](https://ucum.org/ucum.html).


| CloudWatch | OpenTelemetry | 
| --- | --- | 
|  Second or Seconds |  s | 
|  Microseconds |  us | 
|  Milliseconds |  ms | 
|  Bytes |  By | 
|  Kilobytes |  kBy | 
|  Megabytes |  MBy | 
|  Gigabytes |  GBy | 
|  Terabytes |  TBy | 
|  Bits |  bit | 
|  Kilobits |  kbit | 
|  Megabits |  Mbit | 
|  Gigabits |  Gbit | 
|  Terabits |  Tbit | 
|  Percent |  % | 
|  Count |  {Count} | 
|  None |  1 | 

Units that are combined with a slash are mapped by applying the OpenTelemetry conversion to both units. For example, Bytes/Second is mapped to By/s.
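The table and the compound-unit rule can be sketched in Python. The dictionary transcribes the table above, and `to_ucum` is an illustrative helper name, not an AWS API.

```python
# CloudWatch unit -> UCUM code, transcribed from the table above
CLOUDWATCH_TO_UCUM = {
    "Second": "s", "Seconds": "s",
    "Microseconds": "us", "Milliseconds": "ms",
    "Bytes": "By", "Kilobytes": "kBy", "Megabytes": "MBy",
    "Gigabytes": "GBy", "Terabytes": "TBy",
    "Bits": "bit", "Kilobits": "kbit", "Megabits": "Mbit",
    "Gigabits": "Gbit", "Terabits": "Tbit",
    "Percent": "%", "Count": "{Count}", "None": "1",
}

def to_ucum(unit):
    """Map a CloudWatch unit to UCUM, converting each side of compound units."""
    return "/".join(CLOUDWATCH_TO_UCUM[part] for part in unit.split("/"))
```

For example, `to_ucum("Bytes/Second")` yields `By/s`.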

# How to parse OpenTelemetry 0.7.0 messages


This section provides information to help you get started with parsing OpenTelemetry 0.7.0.

First, you should get language-specific bindings, which enable you to parse OpenTelemetry 0.7.0 messages in your preferred language.

**To get language-specific bindings**
+ The steps depend on your preferred language.
  + To use Java, add the following Maven dependency to your Java project: [OpenTelemetry Java >> 0.14.1](https://mvnrepository.com/artifact/io.opentelemetry/opentelemetry-proto/0.14.1).
  + To use any other language, follow these steps:

    1. Make sure that your language is supported by checking the list at [Generating Your Classes](https://developers.google.com/protocol-buffers/docs/proto3#generating).

    1. Install the Protobuf compiler by following the steps at [Download Protocol Buffers](https://developers.google.com/protocol-buffers/docs/downloads).

    1. Download the OpenTelemetry 0.7.0 ProtoBuf definitions at [v0.7.0 release](https://github.com/open-telemetry/opentelemetry-proto/releases/tag/v0.7.0). 

    1. Confirm that you are in the root folder of the downloaded OpenTelemetry 0.7.0 ProtoBuf definitions. Then create a `src` folder and run the command to generate language-specific bindings. For more information, see [Generating Your Classes](https://developers.google.com/protocol-buffers/docs/proto3#generating). 

       The following is an example of how to generate JavaScript bindings.

       ```
       protoc --proto_path=./ --js_out=import_style=commonjs,binary:src \
       opentelemetry/proto/common/v1/common.proto \
       opentelemetry/proto/resource/v1/resource.proto \
       opentelemetry/proto/metrics/v1/metrics.proto \
       opentelemetry/proto/collector/metrics/v1/metrics_service.proto
       ```

The following section includes examples of using the language-specific bindings that you can build using the previous instructions.

**Java**

```
package com.example;

import io.opentelemetry.proto.collector.metrics.v1.ExportMetricsServiceRequest;

import java.io.IOException;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class MyOpenTelemetryParser {

    public List<ExportMetricsServiceRequest> parse(InputStream inputStream) throws IOException {
        List<ExportMetricsServiceRequest> result = new ArrayList<>();

        ExportMetricsServiceRequest request;
        /* A Kinesis record can contain multiple `ExportMetricsServiceRequest`
           records, each of them starting with a header with an
           UnsignedVarInt32 indicating the record length in bytes:
            ------ --------------------------- ------ -----------------------
           |UINT32|ExportMetricsServiceRequest|UINT32|ExportMetricsService...
            ------ --------------------------- ------ -----------------------
         */
        while ((request = ExportMetricsServiceRequest.parseDelimitedFrom(inputStream)) != null) {
            // Do whatever we want with the parsed message
            result.add(request);
        }

        return result;
    }
}
```

**JavaScript**

This example assumes that the root folder with the generated bindings is `./`.

The `data` argument of the `parseRecord` function can be one of the following types:
+ `Uint8Array` (optimal)
+ `Buffer` (optimal under Node.js)
+ `Array` of numbers (8-bit integers)

```
const pb = require('google-protobuf')
const pbMetrics =
    require('./opentelemetry/proto/collector/metrics/v1/metrics_service_pb')

function parseRecord(data) {
    const result = []

    // Loop until we've read all the data from the buffer
    while (data.length) {
        /* A Kinesis record can contain multiple `ExportMetricsServiceRequest`
           records, each of them starting with a header with an
           UnsignedVarInt32 indicating the record length in bytes:
            ------ --------------------------- ------ -----------------------
           |UINT32|ExportMetricsServiceRequest|UINT32|ExportMetricsService...
            ------ --------------------------- ------ -----------------------
         */
        const reader = new pb.BinaryReader(data)
        const messageLength = reader.decoder_.readUnsignedVarint32()
        const messageFrom = reader.decoder_.cursor_
        const messageTo = messageFrom + messageLength

        // Extract the current `ExportMetricsServiceRequest` message to parse
        const message = data.subarray(messageFrom, messageTo)

        // Parse the current message using the ProtoBuf library
        const parsed =
            pbMetrics.ExportMetricsServiceRequest.deserializeBinary(message)

        // Do whatever we want with the parsed message
        result.push(parsed.toObject())

        // Shrink the remaining buffer, removing the already parsed data
        data = data.subarray(messageTo)
    }

    return result
}
```

**Python**

You must read the varint delimiters yourself, or use the internal methods `_VarintBytes(size)` and `_DecodeVarint32(buffer, position)`. These return the position in the buffer just after the size bytes. The read side constructs a new buffer that is limited to reading only the bytes of the message. 

```
from google.protobuf.internal.encoder import _VarintBytes
from google.protobuf.internal.decoder import _DecodeVarint32

# Write side: prefix the serialized message with its length as a varint
size = my_metric.ByteSize()
f.write(_VarintBytes(size))
f.write(my_metric.SerializeToString())

# Read side: decode the length header, then parse exactly that many bytes
msg_len, new_pos = _DecodeVarint32(buf, 0)
msg_buf = buf[new_pos:new_pos + msg_len]
request = metrics_service_pb.ExportMetricsServiceRequest()
request.ParseFromString(msg_buf)
```

**Go**

Use `Buffer.DecodeMessage()`.

**C\#**

Use `CodedInputStream`. This class can read size-delimited messages.

**C++**

The functions described in `google/protobuf/util/delimited_message_util.h` can read size-delimited messages.

**Other languages**

For other languages, see [Download Protocol Buffers](https://developers.google.com/protocol-buffers/docs/downloads).

When implementing the parser, consider that a Kinesis record can contain multiple `ExportMetricsServiceRequest` Protocol Buffers messages, each of them starting with a header with an `UnsignedVarInt32` that indicates the record length in bytes.
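To make the framing concrete, the following pure-Python sketch decodes the varint length headers by hand and splits a buffer into raw message payloads. The helper names are illustrative; in practice, use the delimited-read support of the Protocol Buffers library for your language, as shown in the examples above.

```python
def read_uvarint32(buf, pos):
    """Decode an unsigned varint starting at pos; return (value, new_pos).

    This is the length header that precedes each ExportMetricsServiceRequest
    message in a Kinesis record.
    """
    result = shift = 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:  # high bit clear: last byte of the varint
            return result, pos
        shift += 7

def split_messages(buf):
    """Split a length-delimited buffer into raw message byte strings."""
    messages, pos = [], 0
    while pos < len(buf):
        length, pos = read_uvarint32(buf, pos)
        messages.append(buf[pos:pos + length])
        pos += length
    return messages
```

Each element returned by `split_messages` can then be handed to the ProtoBuf library's `ExportMetricsServiceRequest` parser.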

# Troubleshooting metric streams in CloudWatch


If you're not seeing metric data at your final destination, check the following:
+ Check that the metric stream is in the running state. For steps on how to use the CloudWatch console to do this, see [Metric stream operation and maintenance](CloudWatch-metric-streams-operation.md).
+ Metrics published more than two days in the past are not streamed. To determine whether a particular metric will be streamed, graph the metric in the CloudWatch console and check how old the last visible datapoint is. If it is more than two days in the past, then it won't be picked up by metric streams.
+ Check the metrics emitted by the metric stream. In the CloudWatch console, under **Metrics**, look at the **AWS/CloudWatch/MetricStreams** namespace for the **MetricUpdate**, **TotalMetricUpdate**, and **PublishErrorRate** metrics.
+ If the **PublishErrorRate** metric is high, confirm that the destination that is used by the Firehose delivery stream exists and that the IAM role specified in the metric stream's configuration grants the `CloudWatch` service principal permissions to write to it. For more information, see [Trust between CloudWatch and Firehose](CloudWatch-metric-streams-trustpolicy.md).
+ Check that the Firehose delivery stream has permission to write to the final destination.
+ In the Firehose console, view the Firehose delivery stream that is used for the metric stream and check the **Monitoring** tab to see whether the Firehose delivery stream is receiving data.
+ Confirm that you have configured your Firehose delivery stream with the correct details.
+ Check any available logs or metrics for the final destination that the Firehose delivery stream writes to.
+ To get more detailed information, enable CloudWatch Logs error logging on the Firehose delivery stream. For more information, see [Monitoring Amazon Data Firehose Using CloudWatch Logs](https://docs.aws.amazon.com/firehose/latest/dev/monitoring-with-cloudwatch-logs.html).
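The two-day cutoff mentioned above can be expressed as a simple check. `will_be_streamed` is an illustrative helper, not an AWS API.

```python
from datetime import datetime, timedelta, timezone

def will_be_streamed(last_datapoint_time, now=None):
    """Return True if a metric's newest data point is recent enough to stream.

    Data points more than two days in the past are not picked up
    by metric streams.
    """
    now = now or datetime.now(timezone.utc)
    return now - last_datapoint_time <= timedelta(days=2)
```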

**Note**  
After a data point for a specific metric and timestamp has been sent, it won’t be sent again even if the value of the data point changes later.