Amazon Managed Service for Prometheus service quotas
The following sections describe the quotas and limits associated with Amazon Managed Service for Prometheus.
Service quotas
Amazon Managed Service for Prometheus has the following quotas. Amazon Managed Service for Prometheus vends CloudWatch usage metrics to monitor Prometheus resource usage. Using the Amazon CloudWatch usage metrics alarm feature, you can monitor Prometheus resources and usage to prevent limit errors.
As your projects and workspaces grow, the most common quotas to monitor or request increases for are Active series per workspace and Ingestion rate per workspace.
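For example, the following sketch creates a CloudWatch alarm that fires when active series usage crosses 80 percent of the applied quota, using the SERVICE_QUOTA metric math function on the vended usage metric. The dimension values shown (Service, Type, Resource, Class) are assumptions based on the common AWS/Usage layout; check the usage metrics that appear in your account for the exact names.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# The dimension values below are assumptions; verify them against the
# AWS/Usage metrics that Amazon Managed Service for Prometheus publishes
# in your account before creating the alarm.
usage_metric = {
    "Namespace": "AWS/Usage",
    "MetricName": "ResourceCount",
    "Dimensions": [
        {"Name": "Service", "Value": "Prometheus"},
        {"Name": "Type", "Value": "Resource"},
        {"Name": "Resource", "Value": "ActiveSeries"},
        {"Name": "Class", "Value": "None"},
    ],
}

cloudwatch.put_metric_alarm(
    AlarmName="amp-active-series-near-quota",
    Metrics=[
        {
            "Id": "usage",
            "MetricStat": {"Metric": usage_metric, "Period": 300, "Stat": "Maximum"},
            "ReturnData": False,
        },
        {
            # Percentage of the applied quota that is currently in use.
            "Id": "pct_of_quota",
            "Expression": "(usage / SERVICE_QUOTA(usage)) * 100",
            "ReturnData": True,
        },
    ],
    EvaluationPeriods=1,
    Threshold=80,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```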
For all adjustable quotas, you can request a quota increase by choosing the link in the Adjustable column, or by submitting a request in the Service Quotas console.
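If you prefer to script this, the Service Quotas API can list the adjustable quotas and submit an increase request. A minimal sketch follows; the service code "aps" for Amazon Managed Service for Prometheus is an assumption you should confirm with list_services, and the quota code shown in the comment is a placeholder.

```python
import boto3

quotas = boto3.client("service-quotas")

# "aps" is assumed to be the service code for Amazon Managed Service for
# Prometheus; confirm it with quotas.list_services() before relying on it.
for quota in quotas.list_service_quotas(ServiceCode="aps")["Quotas"]:
    if quota["Adjustable"]:
        print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# With the quota code for, say, "Active series per workspace" in hand
# (the code below is a placeholder, not a real value):
# quotas.request_service_quota_increase(
#     ServiceCode="aps", QuotaCode="L-XXXXXXXX", DesiredValue=100_000_000,
# )
```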
The Active series per workspace limit is dynamically applied. For more information, see Active series default quotas. The Ingestion rate per workspace quota determines how quickly you can ingest data into your workspace. For more information, see Ingestion throttling.
Note
Unless otherwise noted, these quotas are per workspace. The maximum value for active series per workspace is one billion.
Name | Default | Adjustable | Description |
---|---|---|---|
Active metrics with metadata per workspace | Each supported Region: 20,000 | No | The number of unique active metrics with metadata per workspace. Note: If the limit is reached, the metric sample is recorded, but metadata over the limit is dropped. |
Active series per workspace | Each supported Region: 50,000,000 | Yes | The number of unique active series per workspace (up to a maximum of 1 billion). A series is active if a sample has been reported in the past 2 hours. Capacity from 2 million to 50 million is automatically adjusted based on the last 30 minutes of usage. |
Alert aggregation group size in alert manager definition file | Each supported Region: 1,000 | Yes | The maximum size of an alert aggregation group in an alert manager definition file. Each label value combination of group_by creates an aggregation group. |
Alert manager definition file size | Each supported Region: 1,000,000 | No | The maximum size of an alert manager definition file, in bytes. |
Alert payload size in Alert Manager | Each supported Region: 20 | No | The maximum alert payload size of all Alert Manager alerts per workspace, in megabytes. Alert size depends on labels and annotations. |
Alerts in Alert Manager | Each supported Region: 1,000 | Yes | The maximum number of concurrent Alert Manager alerts per workspace. |
HA tracker clusters | Each supported Region: 500 | No | The maximum number of clusters that the HA tracker keeps track of for ingested samples per workspace. |
Ingestion rate per workspace | Each supported Region: 170,000 | Yes | The metric sample ingestion rate per workspace, per second. |
Inhibition rules in alert manager definition file | Each supported Region: 100 | Yes | The maximum number of inhibition rules in an alert manager definition file. |
Label size | Each supported Region: 7 | No | The maximum combined size of all labels and label values accepted for a series, in kilobytes. |
LabelSet limits per workspace | Each supported Region: 100 | Yes | The maximum number of labelset limits that can be created per workspace. |
Labels per metric series | Each supported Region: 150 | Yes | The number of labels per metric series. |
Metadata length | Each supported Region: 1 | No | The maximum length accepted for metric metadata, in kilobytes. Metadata refers to the metric name, type, unit, and help text. |
Metadata per metric | Each supported Region: 10 | No | The number of metadata entries per metric. Note: If the limit is reached, the metric sample is recorded, but metadata over the limit is dropped. |
Nodes in alert manager routing tree | Each supported Region: 100 | Yes | The maximum number of nodes in the alert manager routing tree. |
Number of API operations per region in transactions per second | Each supported Region: 10 | Yes | The maximum number of API operations per second per Region for all Amazon Managed Service for Prometheus APIs, including workspace CRUD APIs, tagging APIs, rule groups namespace CRUD APIs, and alert manager definition CRUD APIs. |
Number of GetSeries, GetLabels and GetMetricMetadata API operations per workspace in transactions per second | Each supported Region: 10 | No | The maximum number of GetSeries, GetLabels, and GetMetricMetadata Prometheus-compatible API operations per second per workspace. |
Number of QueryMetrics API operations per workspace in transactions per second | Each supported Region: 300 | No | The maximum number of QueryMetrics Prometheus-compatible API operations per second per workspace. |
Number of RemoteWrite API operations per workspace in transactions per second | Each supported Region: 3,000 | No | The maximum number of RemoteWrite Prometheus-compatible API operations per second per workspace. |
Number of other Prometheus-compatible API operations per workspace in transactions per second | Each supported Region: 100 | No | The maximum number of API operations per second per workspace for all other Prometheus-compatible APIs, including ListAlerts, ListRules, and so on. |
Query bytes for instant queries | Each supported Region: 5 | No | The maximum number of bytes that can be scanned by a single instant query, in gigabytes. |
Query bytes for range queries | Each supported Region: 5 | No | The maximum number of bytes that can be scanned per 24-hour interval in a single range query, in gigabytes. |
Query samples | Each supported Region: 50,000,000 | No | The maximum number of samples that can be scanned during a single query. |
Query series fetched | Each supported Region: 12,000,000 | No | The maximum number of series that can be scanned during a single query. |
Query time range in days | Each supported Region: 95 | No | The maximum time range of the QueryMetrics, GetSeries, and GetLabels APIs, in days. |
Request size | Each supported Region: 1 | No | The maximum request size for ingestion or query, in megabytes. |
Rule evaluation interval | Each supported Region: 30 | Yes | The minimum rule evaluation interval of a rule group per workspace, in seconds. |
Rule group namespace definition file size | Each supported Region: 1,000,000 | No | The maximum size of a rule group namespace definition file, in bytes. |
Rules per workspace | Each supported Region: 2,000 | Yes | The maximum number of rules per workspace. |
Silences per workspace | Each supported Region: 1,000 | Yes | The maximum number of silences, including expired, active, and pending silences, per workspace. |
Templates in alert manager definition file | Each supported Region: 100 | Yes | The maximum number of templates in the alert manager definition file. |
Workspaces per region per account | Each supported Region: 25 | Yes | The maximum number of workspaces per Region per account. |
Active series default quotas
Amazon Managed Service for Prometheus workspaces automatically adapt to your ingestion usage. As your usage increases, the service automatically increases your time series capacity up to the default quota.
Your Amazon Managed Service for Prometheus workspace scales automatically, based on your usage, in two ways:
- When your 30-minute average usage is below 5 million series, the capacity doubles (for example, a workspace with 3.5M usage gets 7M capacity).
- When usage exceeds 5 million series, the workspace adds a 10 million series buffer (for example, a workspace with 25M usage gets 35M capacity).
Amazon Managed Service for Prometheus automatically allocates more capacity as your ingestion increases, up to your quota. This helps ensure that your workload does not experience sustained throttling. However, throttling can occur if your ingestion more than doubles your previous baseline, or exceeds it by more than 10 million series, where the baseline is computed over the last 30 minutes. To avoid throttling, Amazon Managed Service for Prometheus recommends gradually increasing ingestion when going beyond your previous baseline.
Note
The minimum capacity for active time series is 2 million, and there is no throttling when you have fewer than 2 million series.
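As a rough illustration of the rule above, the following sketch estimates the capacity a workspace is given for a 30-minute average usage level, assuming the default 50 million quota and the 2 million minimum; it is a simplification of the documented behavior, not service code.

```python
def estimated_capacity(avg_usage_30min: float, quota: float = 50_000_000) -> float:
    """Estimate active series capacity from 30-minute average usage."""
    if avg_usage_30min < 5_000_000:
        capacity = 2 * avg_usage_30min            # capacity doubles below 5 million
    else:
        capacity = avg_usage_30min + 10_000_000   # 10 million buffer above 5 million
    capacity = max(capacity, 2_000_000)           # minimum capacity is 2 million
    return min(capacity, quota)                   # never exceeds the workspace quota

print(estimated_capacity(3_500_000))   # 7,000,000
print(estimated_capacity(25_000_000))  # 35,000,000
```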
To go beyond your default quota, you can request a quota increase.
Scaling above the default quota
When you request a quota increase above the default active series quota, Amazon Managed Service for Prometheus adjusts your workspace capacity accordingly. If you don't fully utilize the increased capacity, the service will reclaim the unused portion over time. As your usage grows, the workspace will scale up again automatically.
However, throttling can occur if you more than double your previous baseline, or exceed it by more than 50 million active time series, where the baseline is computed from the last 2 hours. For example:
- If your quota is 100 million and your baseline is 30 million, you can scale up to 60 million within 2 hours without throttling.
- If your quota is 100 million and your baseline is 50 million, you can scale up to the full 100 million within 2 hours without throttling.
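A minimal sketch of that rule, reproducing the two examples above; it assumes the baseline is the active series count computed from the last 2 hours and is only an illustration of the documented thresholds.

```python
def max_series_without_throttling(baseline_2h: float, quota: float) -> float:
    """Largest active series count reachable within 2 hours without throttling:
    no more than double the baseline, no more than 50 million above it,
    and never above the workspace quota."""
    return min(quota, 2 * baseline_2h, baseline_2h + 50_000_000)

print(max_series_without_throttling(30_000_000, 100_000_000))  # 60,000,000
print(max_series_without_throttling(50_000_000, 100_000_000))  # 100,000,000
```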
Ingestion throttling
Amazon Managed Service for Prometheus throttles ingestion for each workspace, based on your current limits. This helps maintain the performance of the workspace. If you exceed the limit, you will see DiscardedSamples in CloudWatch metrics (with the rate_limited reason). You can use CloudWatch to monitor your ingestion, and to create an alarm to warn you when you are close to reaching the throttling limits. For more information, see Use CloudWatch metrics to monitor Amazon Managed Service for Prometheus resources.
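For example, the following sketch sums the DiscardedSamples datapoints from the last hour. It assumes the metric is vended in the AWS/Prometheus namespace and discovers the dimensions with list_metrics rather than hard-coding them; filter on the rate_limited reason once you see how the dimensions appear in your account.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Discover DiscardedSamples metrics instead of guessing their dimensions.
# The AWS/Prometheus namespace is assumed here; adjust it if your vended
# metrics appear under a different namespace.
metrics = cloudwatch.list_metrics(
    Namespace="AWS/Prometheus", MetricName="DiscardedSamples"
)["Metrics"]

end = datetime.now(timezone.utc)
for metric in metrics:
    stats = cloudwatch.get_metric_statistics(
        Namespace=metric["Namespace"],
        MetricName=metric["MetricName"],
        Dimensions=metric["Dimensions"],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
        Period=300,
        Statistics=["Sum"],
    )
    total = sum(dp["Sum"] for dp in stats["Datapoints"])
    print(metric["Dimensions"], total)
```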
Amazon Managed Service for Prometheus uses the token bucket algorithm to control ingestion rates. Each data sample ingested removes one token from the bucket. If your bucket size is 1,000,000 tokens, your workspace can ingest one million data samples in one second. If you try to ingest more than one million samples in that second, the workspace is throttled and does not ingest any more records; the additional data samples are discarded.
The bucket automatically refills at a set rate. If the bucket is below its maximum capacity, a set number of tokens is added back to it every second until it reaches its maximum capacity. If the bucket is full when the refill tokens arrive, they are discarded. The bucket can't hold more than its maximum number of tokens. The refill rate for sample ingestion is set by the Ingestion rate per workspace limit. If your Ingestion rate per workspace is set to 170,000, then the refill rate for the bucket is 170,000 tokens per second.
If your workspace ingests 1,000,000 data samples in a second, your bucket is immediately reduced to zero tokens. The bucket is then refilled by 170,000 tokens every second until it reaches its maximum capacity of 1,000,000 tokens. If there is no more ingestion, the previously empty bucket returns to its maximum capacity in about 6 seconds.
Note
Ingestion happens in batched requests. If you have 100 tokens available, and send a request with 101 samples, the entire request is rejected. Amazon Managed Service for Prometheus does not partially accept requests. If you are writing a collector, you can manage retries (with smaller batches or after some time has passed).
You do not need to wait for the bucket to be full before your workspace can ingest more data samples. You can use tokens as they are added to the bucket. If you immediately use the refill tokens, the bucket does not reach its maximum capacity. For example, if you deplete the bucket, you can continue to ingest 170,000 data samples per second. The bucket can refill to maximum capacity only if you ingest fewer than 170,000 data samples per second.
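The following sketch models the behavior described in this section: a bucket with a capacity of 1,000,000 tokens, a refill rate of 170,000 tokens per second, and whole-batch rejection when a request needs more tokens than are available. It is an illustration of the algorithm, not the service implementation.

```python
import time

class TokenBucket:
    """Illustrative token bucket with the capacity and refill rate described above."""

    def __init__(self, capacity: int = 1_000_000, refill_rate: int = 170_000):
        self.capacity = capacity          # maximum tokens the bucket can hold
        self.refill_rate = refill_rate    # tokens added back per second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()

    def _refill(self) -> None:
        now = time.monotonic()
        elapsed = now - self.last_refill
        # Refill at a constant rate, never exceeding the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now

    def try_ingest(self, samples: int) -> bool:
        """Each sample consumes one token. A batch that needs more tokens than
        are available is rejected whole; there is no partial acceptance."""
        self._refill()
        if samples > self.tokens:
            return False                  # retry later or with a smaller batch
        self.tokens -= samples
        return True

bucket = TokenBucket()
print(bucket.try_ingest(1_000_000))  # True: this empties the bucket
print(bucket.try_ingest(200_000))    # False: only ~170,000 tokens refill per second
```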
Additional limits on ingested data
Amazon Managed Service for Prometheus also has the following requirements for data ingested into the workspace. These limits are not adjustable.
- Metric samples older than 1 hour are rejected and not ingested.
- Every sample and every metadata entry must have a metric name.
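A collector could pre-filter samples against these two requirements before calling remote write. The sketch below is hypothetical: the sample shape (a labels dict with a __name__ entry and a millisecond timestamp) is an assumption, not a defined API.

```python
import time

MAX_SAMPLE_AGE_SECONDS = 3600  # samples older than 1 hour are rejected

def is_ingestible(sample: dict) -> bool:
    """Hypothetical pre-flight check: the sample must carry a metric name
    (the __name__ label) and must be no more than 1 hour old."""
    has_name = bool(sample.get("labels", {}).get("__name__"))
    age_seconds = time.time() - sample.get("timestamp_ms", 0) / 1000.0
    return has_name and age_seconds <= MAX_SAMPLE_AGE_SECONDS
```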