

# Set up managed Prometheus collectors for Amazon MSK
<a name="prom-msk-integration"></a>

To use an Amazon Managed Service for Prometheus collector, you create a scraper that discovers and pulls metrics from your Amazon Managed Streaming for Apache Kafka (Amazon MSK) cluster. You can also create a scraper that integrates with Amazon Elastic Kubernetes Service. For more information, see [Integrate Amazon EKS](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-collector-how-to.html).

## Create a scraper
<a name="prom-msk-create-scraper"></a>

An Amazon Managed Service for Prometheus collector consists of a scraper that discovers and collects metrics from an Amazon MSK cluster. Amazon Managed Service for Prometheus manages the scraper for you, giving you the scalability, security, and reliability that you need, without having to manage any instances, agents, or scrapers yourself.

You can create a scraper using either the AWS API or the AWS CLI as described in the following procedures.

There are a few prerequisites for creating your own scraper:
+ You must have an Amazon MSK cluster created.
+ Configure your Amazon MSK cluster's security group to allow inbound traffic on ports **11001 (JMX Exporter)** and **11002 (Node Exporter)** from within your Amazon VPC. The scraper connects to the brokers' DNS names on these ports to collect Prometheus metrics.
+ The Amazon VPC in which the Amazon MSK cluster resides must have [DNS enabled](https://docs.aws.amazon.com/vpc/latest/userguide/AmazonDNS-concepts.html).

**Note**  
The cluster is associated with the scraper by its Amazon Resource Name (ARN). If you delete a cluster and then create a new one with the same name, the new cluster reuses the ARN. Because of this, the scraper attempts to collect metrics for the new cluster. You [delete scrapers](#prom-msk-delete-scraper) separately from deleting the cluster.

------
#### [ To create a scraper using the AWS API ]

Use the `CreateScraper` API operation to create a scraper with the AWS API. The following example creates a scraper in the US East (N. Virginia) Region. Replace the *example* content with your Amazon MSK cluster information, and provide your scraper configuration.

**Note**  
Configure the security group and subnets to match your target cluster. Include at least two subnets across two Availability Zones.

```
POST /scrapers HTTP/1.1
Content-Length: 415
Authorization: AUTHPARAMS
X-Amz-Date: 20201201T193725Z
User-Agent: aws-cli/1.18.147 Python/2.7.18 Linux/5.4.58-37.125.amzn2int.x86_64 botocore/1.18.6

{
    "alias": "myScraper",
    "destination":  {
        "ampConfiguration": {
            "workspaceArn": "arn:aws:aps:us-east-1:123456789012:workspace/ws-workspace-id"
        }
    },
    "source": {
        "vpcConfiguration": {
            "securityGroupIds": ["sg-security-group-id"],
            "subnetIds": ["subnet-subnet-id-1", "subnet-subnet-id-2"]
        }
    },
    "scrapeConfiguration": {
        "configurationBlob": base64-encoded-blob
    }
}
```

In the example, the `scrapeConfiguration` parameter requires a base64-encoded Prometheus configuration YAML file that specifies the DNS records of the MSK cluster.

Each DNS record represents a broker endpoint in a specific Availability Zone, allowing clients to connect to brokers distributed across your chosen AZs for high availability.

The number of DNS records in your MSK cluster properties corresponds to the number of broker nodes and Availability Zones in your cluster configuration:
+ **Default configuration** – 3 broker nodes across 3 AZs = 3 DNS records
+ **Custom configuration** – 2 broker nodes across 2 AZs = 2 DNS records

To get the DNS records for your MSK cluster, open the MSK console at [https://console.aws.amazon.com/msk/home](https://console.aws.amazon.com/msk/home?region=us-east-1#/home/). Go to your MSK cluster, and then choose **Properties**, **Brokers**, and **Endpoints**.

You have two options for configuring Prometheus to scrape metrics from your MSK cluster:

1. **Cluster-level DNS resolution (Recommended)** – Use the cluster's base DNS name to automatically discover all brokers. If your broker endpoint is `b-1.clusterName.xxx.xxx.xxx`, use `clusterName.xxx.xxx.xxx` as the DNS record. This allows Prometheus to automatically scrape all brokers in the cluster.

1. **Individual broker endpoints** – Specify each broker endpoint individually for granular control. Use the full broker identifiers (`b-1`, `b-2`, and so on) in your configuration. For example:

   ```
   dns_sd_configs:
     - names:
       - b-1.clusterName.xxx.xxx.xxx
       - b-2.clusterName.xxx.xxx.xxx  
       - b-3.clusterName.xxx.xxx.xxx
   ```

**Note**  
Replace `clusterName.xxx.xxx.xxx` with your actual MSK cluster endpoint from the AWS Console.

For more information, see [dns_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config) in the *Prometheus* documentation.
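
As a quick illustration of the cluster-level option, the cluster's base DNS name is simply a broker endpoint with its per-broker `b-N.` prefix removed. A minimal sketch, using a placeholder endpoint rather than a real cluster:

```shell
# Placeholder broker endpoint; substitute your real MSK broker endpoint.
broker_endpoint="b-1.clusterName.xxx.xxx.xxx"

# Strip everything up to and including the first dot (the "b-1." prefix).
cluster_dns="${broker_endpoint#*.}"

echo "$cluster_dns"   # clusterName.xxx.xxx.xxx
```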

The following is an example of the scraper configuration file:

```
global:
  scrape_interval: 30s
  external_labels:
    clusterArn: msk-test-1

scrape_configs:
  - job_name: msk-jmx
    scheme: http
    metrics_path: /metrics
    scrape_timeout: 10s
    dns_sd_configs:
      - names:
          - dns-record-1
          - dns-record-2
          - dns-record-3
        type: A
        port: 11001
    relabel_configs:
      - source_labels: [__meta_dns_name]
        target_label: broker_dns
      - source_labels: [__address__]
        target_label: instance
        regex: '(.*)'
        replacement: '${1}'

  - job_name: msk-node
    scheme: http
    metrics_path: /metrics
    scrape_timeout: 10s
    dns_sd_configs:
      - names:
          - dns-record-1
          - dns-record-2
          - dns-record-3
        type: A
        port: 11002
    relabel_configs:
      - source_labels: [__meta_dns_name]
        target_label: broker_dns
      - source_labels: [__address__]
        target_label: instance
        regex: '(.*)'
        replacement: '${1}'
```

Run one of the following commands to convert the YAML file (shown here with the placeholder file name `scraper-config.yaml`) to base64. You can also use any base64 converter to convert the file.

**Example Linux/macOS**  

```
base64 < scraper-config.yaml
```

**Example Windows PowerShell**  

```
[Convert]::ToBase64String([System.IO.File]::ReadAllBytes("scraper-config.yaml"))
```
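
Whichever tool you use, it is worth confirming that the encoded blob decodes back to your original YAML before you pass it to `CreateScraper`. A minimal sketch, assuming GNU coreutils `base64` and a placeholder config file:

```shell
# Write a tiny placeholder config; use your real scraper configuration file.
printf 'global:\n  scrape_interval: 30s\n' > scraper-config.yaml

# Encode, decode, and compare against the original; diff is silent on success.
base64 < scraper-config.yaml > scraper-config.b64
base64 --decode < scraper-config.b64 | diff - scraper-config.yaml && echo "round trip OK"
```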

------
#### [ To create a scraper using the AWS CLI ]

Use the `create-scraper` command to create a scraper using the AWS Command Line Interface. The following example creates a scraper in the US East (N. Virginia) Region. Replace the *example* content with your Amazon MSK cluster information, and provide your scraper configuration.

**Note**  
Configure the security group and subnets to match your target cluster. Include at least two subnets across two Availability Zones.

```
aws amp create-scraper \
  --source vpcConfiguration="{securityGroupIds=['sg-security-group-id'],subnetIds=['subnet-subnet-id-1','subnet-subnet-id-2']}" \
  --scrape-configuration configurationBlob=base64-encoded-blob \
  --destination ampConfiguration="{workspaceArn='arn:aws:aps:us-east-1:123456789012:workspace/ws-workspace-id'}"
```

------
The following is a full list of the scraper operations that you can use with the AWS API:
+ Create a scraper with the [CreateScraper](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_CreateScraper.html) API operation.
+ List your existing scrapers with the [ListScrapers](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_ListScrapers.html) API operation.
+ Update the alias, configuration, or destination of a scraper with the [UpdateScraper](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_UpdateScraper.html) API operation.
+ Delete a scraper with the [DeleteScraper](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_DeleteScraper.html) API operation.
+ Get more details about a scraper with the [DescribeScraper](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_DescribeScraper.html) API operation.

## Cross-account setup
<a name="prom-msk-cross-account"></a>

To create a scraper in a cross-account setup, where the Amazon MSK cluster that you want to collect metrics from is in a different account from the Amazon Managed Service for Prometheus workspace, use the following procedure.

For example, suppose that you have two accounts: a source account, `account_id_source`, where the Amazon MSK cluster is located, and a target account, `account_id_target`, where the Amazon Managed Service for Prometheus workspace resides.

**To create a scraper in a cross-account setup**

1. In the source account, create a role `arn:aws:iam::111122223333:role/Source` and add the following trust policy.

   ```
   {
       "Effect": "Allow",
       "Principal": {
           "Service": [
               "scraper.aps.amazonaws.com"
           ]
       },
       "Action": "sts:AssumeRole",
       "Condition": {
           "ArnEquals": {
               "aws:SourceArn": "arn:aws:aps:aws-region:111122223333:scraper/scraper-id"
           },
           "StringEquals": {
               "AWS:SourceAccount": "111122223333"
           }
       }
   }
   ```

1. For every combination of source (Amazon MSK cluster) and target (Amazon Managed Service for Prometheus workspace), create a role `arn:aws:iam::444455556666:role/Target`, attach the [AmazonPrometheusRemoteWriteAccess](https://docs.aws.amazon.com/prometheus/latest/userguide/security-iam-awsmanpol.html) permissions policy, and add the following trust policy.

   ```
   {
     "Effect": "Allow",
     "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/Source"
     },
     "Action": "sts:AssumeRole",
     "Condition": {
        "StringEquals": {
           "sts:ExternalId": "arn:aws:aps:aws-region:111122223333:scraper/scraper-id"
         }
     }
   }
   ```

1. Create a scraper with the `--role-configuration` option.

   ```
   aws amp create-scraper \
     --source vpcConfiguration="{subnetIds=['subnet-subnet-id'],securityGroupIds=['sg-security-group-id']}" \
     --scrape-configuration configurationBlob=<base64-encoded-blob> \
     --destination ampConfiguration="{workspaceArn='arn:aws:aps:aws-region:444455556666:workspace/ws-workspace-id'}" \
     --role-configuration '{"sourceRoleArn":"arn:aws:iam::111122223333:role/Source","targetRoleArn":"arn:aws:iam::444455556666:role/Target"}'
   ```

1. Validate the scraper creation.

   ```
   aws amp list-scrapers
   {
       "scrapers": [
           {
               "scraperId": "s-example123456789abcdef0",
               "arn": "arn:aws:aps:aws-region:111122223333:scraper/s-example123456789abcdef0",
               "roleArn": "arn:aws:iam::111122223333:role/Source",
               "status": {
                   "statusCode": "ACTIVE"
               },
               "statusReason": "Scraper is running successfully",
               "createdAt": "2025-10-27T18:45:00.000Z",
               "lastModifiedAt": "2025-10-27T18:50:00.000Z",
               "tags": {},
               "source": {
                   "vpcConfiguration": {
                       "subnetIds": ["subnet-subnet-id"],
                       "securityGroupIds": ["sg-security-group-id"]
                   }
               },
               "destination": {
                   "ampConfiguration": {
                       "workspaceArn": "arn:aws:aps:aws-region:444455556666:workspace/ws-workspace-id"
                   }
               },
               "scrapeConfiguration": {
                   "configurationBlob": "<base64-encoded-blob>"
               }
           }
       ]
   }
   ```

## Changing between RoleConfiguration and service-linked role
<a name="prom-msk-changing-roles"></a>

When you want to switch from using a `RoleConfiguration` back to a service-linked role for writing to an Amazon Managed Service for Prometheus workspace, call `UpdateScraper` and provide a workspace in the same account as the scraper, without a `RoleConfiguration`. The `RoleConfiguration` is removed from the scraper, and the service-linked role is used.

When you change to a different workspace in the same account as the scraper and want to keep using the `RoleConfiguration`, you must again provide the `RoleConfiguration` in the `UpdateScraper` call.
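
For example, switching back to the service-linked role might look like the following sketch, assuming the `update-scraper` AWS CLI command (which mirrors the `UpdateScraper` API operation) and placeholder IDs. Because no `--role-configuration` is provided and the workspace is in the same account as the scraper, the service-linked role is used:

```
aws amp update-scraper \
  --scraper-id s-example123456789abcdef0 \
  --destination ampConfiguration="{workspaceArn='arn:aws:aps:aws-region:111122223333:workspace/ws-workspace-id'}"
```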

## Find and delete scrapers
<a name="prom-msk-delete-scraper"></a>

You can use the AWS API or the AWS CLI to list the scrapers in your account or to delete them.

**Note**  
Make sure that you are using the latest version of the AWS CLI or SDK. The latest version provides you with the latest features and functionality, as well as security updates. Alternatively, use [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html), which provides an always up-to-date command line experience, automatically.

To list all the scrapers in your account, use the [ListScrapers](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_ListScrapers.html) API operation.

Alternatively, with the AWS CLI, call:

```
aws amp list-scrapers
```

`ListScrapers` returns all of the scrapers in your account, for example:

```
{
    "scrapers": [
        {
            "scraperId": "s-1234abcd-56ef-7890-abcd-1234ef567890",
            "arn": "arn:aws:aps:aws-region:123456789012:scraper/s-1234abcd-56ef-7890-abcd-1234ef567890",
            "roleArn": "arn:aws:iam::123456789012:role/aws-service-role/AWSServiceRoleForAmazonPrometheusScraper_1234abcd-2931",
            "status": {
                "statusCode": "DELETING"
            },
            "createdAt": "2023-10-12T15:22:19.014000-07:00",
            "lastModifiedAt": "2023-10-12T15:55:43.487000-07:00",
            "tags": {},
            "source": {
                "vpcConfiguration": {
                   "securityGroupIds": [
                        "sg-1234abcd5678ef90"
                    ],
                    "subnetIds": [
                        "subnet-abcd1234ef567890", 
                        "subnet-1234abcd5678ab90"
                    ]
                }
            },
            "destination": {
                "ampConfiguration": {
                    "workspaceArn": "arn:aws:aps:aws-region:123456789012:workspace/ws-1234abcd-5678-ef90-ab12-cdef3456a78"
                }
            }
        }
    ]
}
```

To delete a scraper, find the `scraperId` for the scraper that you want to delete, using the `ListScrapers` operation, and then use the [DeleteScraper](https://docs.aws.amazon.com/prometheus/latest/APIReference/API_DeleteScraper.html) operation to delete it.

Alternatively, with the AWS CLI, call:

```
aws amp delete-scraper --scraper-id scraperId
```

## Metrics collected from Amazon MSK
<a name="prom-msk-metrics"></a>

When you integrate with Amazon MSK, the Amazon Managed Service for Prometheus collector automatically scrapes the following metrics:

### Metrics: jmx_exporter and node_exporter jobs
<a name="broker-metrics"></a>


| Metric | Description / Purpose | 
| --- | --- | 
| jmx_config_reload_failure_total | Total number of times the JMX exporter failed to reload its configuration file. | 
| jmx_scrape_duration_seconds | Time taken to scrape JMX metrics in seconds for the current collection cycle. | 
| jmx_scrape_error | Indicates whether an error occurred during JMX metric scraping (1 = error, 0 = success). | 
| java_lang_Memory_HeapMemoryUsage_used | Amount of heap memory (in bytes) currently used by the JVM. | 
| java_lang_Memory_HeapMemoryUsage_max | Maximum amount of heap memory (in bytes) that can be used for memory management. | 
| java_lang_Memory_NonHeapMemoryUsage_used | Amount of non-heap memory (in bytes) currently used by the JVM. | 
| kafka_cluster_Partition_Value | Current state or value related to Kafka cluster partitions, broken down by partition ID and topic. | 
| kafka_consumer_consumer_coordinator_metrics_assigned_partitions | Number of partitions currently assigned to this consumer. | 
| kafka_consumer_consumer_coordinator_metrics_commit_latency_avg | Average time taken to commit offsets in milliseconds. | 
| kafka_consumer_consumer_coordinator_metrics_commit_rate | Number of offset commits per second. | 
| kafka_consumer_consumer_coordinator_metrics_failed_rebalance_total | Total number of failed consumer group rebalances. | 
| kafka_consumer_consumer_coordinator_metrics_last_heartbeat_seconds_ago | Number of seconds since the last heartbeat was sent to the coordinator. | 
| kafka_consumer_consumer_coordinator_metrics_rebalance_latency_avg | Average time taken for consumer group rebalances in milliseconds. | 
| kafka_consumer_consumer_coordinator_metrics_rebalance_total | Total number of consumer group rebalances. | 
| kafka_consumer_consumer_fetch_manager_metrics_bytes_consumed_rate | Average number of bytes consumed per second by the consumer. | 
| kafka_consumer_consumer_fetch_manager_metrics_fetch_latency_avg | Average time taken for a fetch request in milliseconds. | 
| kafka_consumer_consumer_fetch_manager_metrics_fetch_rate | Number of fetch requests per second. | 
| kafka_consumer_consumer_fetch_manager_metrics_records_consumed_rate | Average number of records consumed per second. | 
| kafka_consumer_consumer_fetch_manager_metrics_records_lag_max | Maximum lag in terms of number of records for any partition in this consumer. | 
| kafka_consumer_consumer_metrics_connection_count | Current number of active connections. | 
| kafka_consumer_consumer_metrics_incoming_byte_rate | Average number of bytes received per second from all servers. | 
| kafka_consumer_consumer_metrics_last_poll_seconds_ago | Number of seconds since the last consumer poll() call. | 
| kafka_consumer_consumer_metrics_request_rate | Number of requests sent per second. | 
| kafka_consumer_consumer_metrics_response_rate | Number of responses received per second. | 
| kafka_consumer_group_ConsumerLagMetrics_Value | Current consumer lag value for a consumer group, indicating how far behind the consumer is. | 
| kafka_controller_KafkaController_Value | Current state or value of the Kafka controller (1 = active controller, 0 = not active). | 
| kafka_controller_ControllerEventManager_Count | Total number of controller events processed. | 
| kafka_controller_ControllerEventManager_Mean | Mean (average) time taken to process controller events. | 
| kafka_controller_ControllerStats_MeanRate | Mean rate of controller statistics operations per second. | 
| kafka_coordinator_group_GroupMetadataManager_Value | Current state or value of the group metadata manager for consumer groups. | 
| kafka_log_LogFlushStats_Count | Total number of log flush operations. | 
| kafka_log_LogFlushStats_Mean | Mean (average) time taken for log flush operations. | 
| kafka_log_LogFlushStats_MeanRate | Mean rate of log flush operations per second. | 
| kafka_network_RequestMetrics_Count | Total count of network requests processed. | 
| kafka_network_RequestMetrics_Mean | Mean (average) time taken to process network requests. | 
| kafka_network_RequestMetrics_MeanRate | Mean rate of network requests per second. | 
| kafka_network_Acceptor_MeanRate | Mean rate of accepted connections per second. | 
| kafka_server_Fetch_queue_size | Current size of the fetch request queue. | 
| kafka_server_Produce_queue_size | Current size of the produce request queue. | 
| kafka_server_Request_queue_size | Current size of the general request queue. | 
| kafka_server_BrokerTopicMetrics_Count | Total count of broker topic operations (messages in/out, bytes in/out). | 
| kafka_server_BrokerTopicMetrics_MeanRate | Mean rate of broker topic operations per second. | 
| kafka_server_BrokerTopicMetrics_OneMinuteRate | One-minute moving average rate of broker topic operations. | 
| kafka_server_DelayedOperationPurgatory_Value | Current number of delayed operations in the purgatory (waiting to be completed). | 
| kafka_server_DelayedFetchMetrics_MeanRate | Mean rate of delayed fetch operations per second. | 
| kafka_server_FetcherLagMetrics_Value | Current lag value for replica fetcher threads (how far behind the leader). | 
| kafka_server_FetcherStats_MeanRate | Mean rate of fetcher operations per second. | 
| kafka_server_ReplicaManager_Value | Current state or value of the replica manager. | 
| kafka_server_ReplicaManager_MeanRate | Mean rate of replica manager operations per second. | 
| kafka_server_LeaderReplication_byte_rate | Rate of bytes replicated per second for partitions where this broker is the leader. | 
| kafka_server_group_coordinator_metrics_group_completed_rebalance_count | Total number of completed consumer group rebalances. | 
| kafka_server_group_coordinator_metrics_offset_commit_count | Total number of offset commit operations. | 
| kafka_server_group_coordinator_metrics_offset_commit_rate | Rate of offset commit operations per second. | 
| kafka_server_socket_server_metrics_connection_count | Current number of active connections. | 
| kafka_server_socket_server_metrics_connection_creation_rate | Rate of new connection creation per second. | 
| kafka_server_socket_server_metrics_connection_close_rate | Rate of connection closures per second. | 
| kafka_server_socket_server_metrics_failed_authentication_total | Total number of failed authentication attempts. | 
| kafka_server_socket_server_metrics_incoming_byte_rate | Rate of incoming bytes per second. | 
| kafka_server_socket_server_metrics_outgoing_byte_rate | Rate of outgoing bytes per second. | 
| kafka_server_socket_server_metrics_request_rate | Rate of requests per second. | 
| kafka_server_socket_server_metrics_response_rate | Rate of responses per second. | 
| kafka_server_socket_server_metrics_network_io_rate | Rate of network I/O operations per second. | 
| kafka_server_socket_server_metrics_io_ratio | Fraction of time spent in I/O operations. | 
| kafka_server_controller_channel_metrics_connection_count | Current number of active connections for controller channels. | 
| kafka_server_controller_channel_metrics_incoming_byte_rate | Rate of incoming bytes per second for controller channels. | 
| kafka_server_controller_channel_metrics_outgoing_byte_rate | Rate of outgoing bytes per second for controller channels. | 
| kafka_server_controller_channel_metrics_request_rate | Rate of requests per second for controller channels. | 
| kafka_server_replica_fetcher_metrics_connection_count | Current number of active connections for replica fetcher. | 
| kafka_server_replica_fetcher_metrics_incoming_byte_rate | Rate of incoming bytes per second for replica fetcher. | 
| kafka_server_replica_fetcher_metrics_request_rate | Rate of requests per second for replica fetcher. | 
| kafka_server_replica_fetcher_metrics_failed_authentication_total | Total number of failed authentication attempts for replica fetcher. | 
| kafka_server_ZooKeeperClientMetrics_Count | Total count of ZooKeeper client operations. | 
| kafka_server_ZooKeeperClientMetrics_Mean | Mean latency of ZooKeeper client operations. | 
| kafka_server_KafkaServer_Value | Current state or value of the Kafka server (typically indicates server is running). | 
| node_cpu_seconds_total | Total seconds the CPUs spent in each mode (user, system, idle, etc.), broken down by CPU and mode. | 
| node_disk_read_bytes_total | Total number of bytes read successfully from disks, broken down by device. | 
| node_disk_reads_completed_total | Total number of reads completed successfully for disks, broken down by device. | 
| node_disk_writes_completed_total | Total number of writes completed successfully for disks, broken down by device. | 
| node_disk_written_bytes_total | Total number of bytes written successfully to disks, broken down by device. | 
| node_filesystem_avail_bytes | Available filesystem space in bytes for non-root users, broken down by device and mount point. | 
| node_filesystem_size_bytes | Total size of the filesystem in bytes, broken down by device and mount point. | 
| node_filesystem_free_bytes | Free filesystem space in bytes, broken down by device and mount point. | 
| node_filesystem_files | Total number of file nodes (inodes) on the filesystem, broken down by device and mount point. | 
| node_filesystem_files_free | Number of free file nodes (inodes) on the filesystem, broken down by device and mount point. | 
| node_filesystem_readonly | Indicates whether the filesystem is mounted read-only (1 = read-only, 0 = read-write). | 
| node_filesystem_device_error | Indicates whether an error occurred while getting filesystem statistics (1 = error, 0 = success). | 
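
These metrics are typically combined in queries. For example, the percentage of filesystem space in use can be derived from node_filesystem_size_bytes and node_filesystem_avail_bytes. A minimal sketch of the arithmetic, using placeholder sample values rather than real scraped data:

```shell
# Placeholder sample values for one mount point.
size_bytes=100000000000    # node_filesystem_size_bytes
avail_bytes=25000000000    # node_filesystem_avail_bytes

# Percent used = (size - avail) / size * 100.
awk -v s="$size_bytes" -v a="$avail_bytes" 'BEGIN { printf "%.1f%%\n", (s - a) / s * 100 }'
```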

## Limitations
<a name="prom-msk-limitations"></a>

The current Amazon MSK integration with Amazon Managed Service for Prometheus has the following limitations:
+ Only supported for Amazon MSK Provisioned clusters (not available for Amazon MSK Serverless)
+ Not supported for Amazon MSK clusters with public access enabled in combination with KRaft metadata mode
+ Not supported for Amazon MSK Express brokers
+ Currently supports a 1:1 mapping between Amazon MSK clusters and Amazon Managed Service for Prometheus collectors/workspaces