


# Grant IAM users and roles access to Kubernetes APIs
<a name="grant-k8s-access"></a>

Your cluster has a Kubernetes API endpoint, which `kubectl` uses. You can authenticate to this API using two types of identities:
+  **An AWS Identity and Access Management (IAM) *principal* (role or user)** – This type requires authentication to IAM. Users can sign in to AWS as an [IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) user or with a [federated identity](https://aws.amazon.com/identity/federation/) by using credentials provided through an identity source. Users can only sign in with a federated identity if your administrator previously set up identity federation using IAM roles. When users access AWS by using federation, they’re indirectly [assuming a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/when-to-use-iam.html#security-iam-authentication-iamrole). When users use this type of identity, you:
  + Can assign them Kubernetes permissions so that they can work with Kubernetes objects on your cluster. For more information, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).
  + Can assign them IAM permissions so that they can work with your Amazon EKS cluster and its resources using the Amazon EKS API, AWS CLI, AWS CloudFormation, AWS Management Console, or `eksctl`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference.
  + Nodes join your cluster by assuming an IAM role. The ability to access your cluster using IAM principals is provided by the [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme), which runs on the Amazon EKS control plane.
+  **A user in your own OpenID Connect (OIDC) provider** – This type requires authentication to your [OIDC](https://openid.net/connect/) provider. For more information about setting up your own OIDC provider with your Amazon EKS cluster, see [Grant users access to Kubernetes with an external OIDC provider](authenticate-oidc-identity-provider.md). When users use this type of identity, you:
  + Can assign them Kubernetes permissions so that they can work with Kubernetes objects on your cluster.
  + Can’t assign them IAM permissions so that they can work with your Amazon EKS cluster and its resources using the Amazon EKS API, AWS CLI, AWS CloudFormation, AWS Management Console, or `eksctl`.

You can use both types of identities with your cluster. The IAM authentication method cannot be disabled. The OIDC authentication method is optional.

## Associate IAM Identities with Kubernetes Permissions
<a name="authentication-modes"></a>

The [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme) is installed on your cluster’s control plane. It authenticates [AWS Identity and Access Management](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) (IAM) principals (roles and users) that you allow to access Kubernetes resources on your cluster. You can allow IAM principals to access Kubernetes objects on your cluster using one of the following methods:
+  **Creating access entries** – If your cluster is at or later than the platform version listed in the [Prerequisites](access-entries.md) section for your cluster’s Kubernetes version, we recommend that you use this option.

  Use *access entries* to manage the Kubernetes permissions of IAM principals from outside the cluster. You can add and manage access to the cluster by using the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console. This means you can manage users with the same tools that you created the cluster with.

  To get started, follow [Change authentication mode to use access entries](setting-up-access-entries.md), then [Migrating existing aws-auth ConfigMap entries to access entries](migrating-access-entries.md).
+  **Adding entries to the `aws-auth` `ConfigMap`** – If your cluster’s platform version is earlier than the version listed in the [Prerequisites](access-entries.md) section, then you must use this option. If your cluster’s platform version is at or later than the platform version listed in the [Prerequisites](access-entries.md) section for your cluster’s Kubernetes version, and you’ve added entries to the `ConfigMap`, then we recommend that you migrate those entries to access entries. However, you can’t migrate entries that Amazon EKS added to the `ConfigMap`, such as entries for IAM roles used with managed node groups or Fargate profiles. For more information, see [Grant IAM users and roles access to Kubernetes APIs](#grant-k8s-access).
  + If you have to use the `aws-auth` `ConfigMap` option, you can add entries to the `ConfigMap` using the `eksctl create iamidentitymapping` command. For more information, see [Manage IAM users and roles](https://eksctl.io/usage/iam-identity-mappings/) in the `eksctl` documentation.

## Set Cluster Authentication Mode
<a name="set-cam"></a>

Each cluster has an *authentication mode*. The authentication mode determines which methods you can use to allow IAM principals to access Kubernetes objects on your cluster. There are three authentication modes.

**Important**  
Once the access entry method is enabled, it cannot be disabled.  
If the `ConfigMap` method is not enabled during cluster creation, it cannot be enabled later. All clusters created before the introduction of access entries have the `ConfigMap` method enabled.  
If you are using hybrid nodes with your cluster, you must use the `API` or `API_AND_CONFIG_MAP` cluster authentication modes.

 **The `aws-auth` `ConfigMap` inside the cluster**   
This is the original authentication mode for Amazon EKS clusters. The IAM principal that created the cluster is the initial user that can access the cluster by using `kubectl`. The initial user must add other users to the list in the `aws-auth` `ConfigMap` and assign permissions that affect the other users within the cluster. These other users can’t manage or remove the initial user, as there isn’t an entry in the `ConfigMap` to manage.

 **Both the `ConfigMap` and access entries**   
With this authentication mode, you can use both methods to add IAM principals to the cluster. Note that each method stores separate entries; for example, if you add an access entry from the AWS CLI, the `aws-auth` `ConfigMap` is not updated.

 **Access entries only**   
With this authentication mode, you can use the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console to manage access to the cluster for IAM principals.  
Each access entry has a *type*. You can combine an *access scope*, which limits the principal to a specific namespace, with an *access policy*, which applies a preconfigured, reusable permissions policy. Alternatively, you can use the `STANDARD` type with Kubernetes RBAC groups to assign custom permissions.


| Authentication mode | Methods | 
| --- | --- | 
|   `ConfigMap` only (`CONFIG_MAP`)  |   `aws-auth` `ConfigMap`   | 
|  EKS API and `ConfigMap` (`API_AND_CONFIG_MAP`)  |  access entries in the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console and `aws-auth` `ConfigMap`   | 
|  EKS API only (`API`)  |  access entries in the EKS API, AWS Command Line Interface, AWS SDKs, AWS CloudFormation, and AWS Management Console   | 

**Note**  
Amazon EKS Auto Mode requires access entries.

# Grant IAM users access to Kubernetes with EKS access entries
<a name="access-entries"></a>

This section shows you how to manage IAM principal access to Kubernetes clusters in Amazon Elastic Kubernetes Service (EKS) using access entries and policies. You’ll find details on changing authentication modes; migrating from legacy `aws-auth` `ConfigMap` entries; creating, updating, and deleting access entries; associating policies with entries; reviewing predefined policy permissions; and key prerequisites and considerations for secure access management.

## Overview
<a name="_overview"></a>

EKS access entries are the best way to grant users access to the Kubernetes API. For example, you can use access entries to grant developers access to use `kubectl`. Fundamentally, an EKS access entry associates a set of Kubernetes permissions with an IAM identity, such as an IAM role. For example, a developer might assume an IAM role and use it to authenticate to an EKS cluster.

## Features
<a name="_features"></a>
+  **Centralized Authentication and Authorization**: Controls access to Kubernetes clusters directly via Amazon EKS APIs, eliminating the need to switch between AWS and Kubernetes APIs for user permissions.
+  **Granular Permissions Management**: Uses access entries and policies to define fine-grained permissions for AWS IAM principals, including modifying or revoking cluster-admin access from the creator.
+  **IaC Tool Integration**: Supports infrastructure as code tools like AWS CloudFormation, Terraform, and AWS CDK to define access configurations during cluster creation.
+  **Misconfiguration Recovery**: Allows restoring cluster access through the Amazon EKS API without direct Kubernetes API access.
+  **Reduced Overhead and Enhanced Security**: Centralizes operations to lower overhead while leveraging AWS IAM features like CloudTrail audit logging and multi-factor authentication.

## How to attach permissions
<a name="_how_to_attach_permissions"></a>

You can attach Kubernetes permissions to access entries in two ways:
+ Use an access policy. Access policies are pre-defined Kubernetes permissions templates maintained by AWS. For more information, see [Review access policy permissions](access-policy-permissions.md).
+ Reference a Kubernetes group. If you associate an IAM Identity with a Kubernetes group, you can create Kubernetes resources that grant the group permissions. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
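As a concrete sketch, the two approaches map to different CLI calls. The following dry run only prints the commands instead of executing them (remove `echo` to run them for real); the cluster name, account ID, role, and group are placeholders:

```shell
# Placeholders below; remove `echo` to execute against a real cluster.
CLUSTER=my-cluster
PRINCIPAL=arn:aws:iam::111122223333:role/my-role

# Option 1: attach an AWS-managed access policy to the access entry.
echo aws eks associate-access-policy --cluster-name "$CLUSTER" \
    --principal-arn "$PRINCIPAL" \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
    --access-scope type=cluster

# Option 2: reference a Kubernetes group on the access entry, then grant
# that group permissions with your own RBAC objects.
echo aws eks create-access-entry --cluster-name "$CLUSTER" \
    --principal-arn "$PRINCIPAL" \
    --kubernetes-groups my-rbac-group
```

With option 2, the `my-rbac-group` name carries no permissions by itself; you bind it to a `Role` or `ClusterRole` with a `RoleBinding` or `ClusterRoleBinding`.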

## Considerations
<a name="_considerations"></a>

When enabling EKS access entries on existing clusters, keep the following in mind:
+  **Legacy Cluster Behavior**: For clusters created before the introduction of access entries (those with initial platform versions earlier than specified in [Platform version requirements](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)), EKS automatically creates an access entry reflecting pre-existing permissions. This entry includes the IAM identity that originally created the cluster and the administrative permissions granted to that identity during cluster creation.
+  **Handling Legacy `aws-auth` ConfigMap**: If your cluster relies on the legacy `aws-auth` ConfigMap for access management, only the access entry for the original cluster creator is automatically created upon enabling access entries. Additional roles or permissions added to the ConfigMap (e.g., custom IAM roles for developers or services) are not automatically migrated. To address this, manually create corresponding access entries.

## Get started
<a name="_get_started"></a>

1. Determine the IAM identity and access policy you want to use.
   +  [Review access policy permissions](access-policy-permissions.md) 

1. Enable EKS Access Entries on your cluster. Confirm you have a supported platform version.
   +  [Change authentication mode to use access entries](setting-up-access-entries.md) 

1. Create an access entry that associates an IAM identity with Kubernetes permissions.
   +  [Create access entries](creating-access-entries.md) 

1. Authenticate to the cluster using the IAM identity.
   +  [Set up AWS CLI](install-awscli.md) 
   +  [Set up `kubectl` and `eksctl`](install-kubectl.md) 

# Associate access policies with access entries
<a name="access-policies"></a>

You can assign one or more access policies to *access entries* of *type* `STANDARD`. Amazon EKS automatically grants the other types of access entries the permissions required to function properly in your cluster. Amazon EKS access policies include Kubernetes permissions, not IAM permissions. Before associating an access policy with an access entry, make sure that you’re familiar with the Kubernetes permissions included in each access policy. For more information, see [Review access policy permissions](access-policy-permissions.md). If none of the access policies meet your requirements, then don’t associate an access policy with an access entry. Instead, specify one or more *group names* for the access entry and create and manage Kubernetes role-based access control objects. For more information, see [Create access entries](creating-access-entries.md).

To associate access policies, you need the following:
+ An existing access entry. To create one, see [Create access entries](creating-access-entries.md).
+ An AWS Identity and Access Management role or user with the following permissions: `ListAccessEntries`, `DescribeAccessEntry`, `UpdateAccessEntry`, `ListAccessPolicies`, `AssociateAccessPolicy`, and `DisassociateAccessPolicy`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the *Service Authorization Reference*.

Before associating access policies with access entries, consider the following requirements:
+ You can associate multiple access policies to each access entry, but you can only associate each policy to an access entry once. If you associate multiple access policies, the access entry’s IAM principal has all permissions included in all associated access policies.
+ You can scope an access policy to all resources on a cluster or by specifying the name of one or more Kubernetes namespaces. You can use wildcard characters for a namespace name. For example, if you want to scope an access policy to all namespaces that start with `dev-`, you can specify `dev-*` as a namespace name. Make sure that the namespaces exist on your cluster and that your spelling matches the actual namespace name on the cluster. Amazon EKS doesn’t confirm the spelling or existence of the namespaces on your cluster.
+ You can change the *access scope* for an access policy after you associate it to an access entry. If you’ve scoped the access policy to Kubernetes namespaces, you can add and remove namespaces for the association, as necessary.
+ If you associate an access policy to an access entry that also has *group names* specified, then the IAM principal has all the permissions in all associated access policies. It also has all the permissions in any Kubernetes `Role` or `ClusterRole` object that is referenced by a Kubernetes `RoleBinding` or `ClusterRoleBinding` object that specifies those group names.
+ If you run the `kubectl auth can-i --list` command, you won’t see any Kubernetes permissions assigned by access policies associated with an access entry for the IAM principal you’re using when you run the command. The command only shows Kubernetes permissions if you’ve granted them in Kubernetes `Role` or `ClusterRole` objects that you’ve bound to the group names or username that you specified for an access entry.
+ If you impersonate a Kubernetes user or group when interacting with Kubernetes objects on your cluster, such as using the `kubectl` command with `--as username` or `--as-group group-name`, you force the use of Kubernetes RBAC authorization. As a result, the IAM principal has no permissions assigned by any access policies associated with the access entry. The impersonated user or group has only the Kubernetes permissions that you’ve granted in `Role` or `ClusterRole` objects bound to the group names or username that you specified for the access entry. For your IAM principal to have the permissions in associated access policies, don’t impersonate a Kubernetes user or group. For more information, see [User impersonation](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#user-impersonation) in the Kubernetes documentation.
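The namespace wildcard described above behaves like a shell-style prefix glob. The bash comparison below is only an illustration of that matching behavior (Amazon EKS evaluates the pattern server-side, not with bash):

```shell
# Illustrative only: bash glob matching mirrors how a dev-* namespace
# scope would match namespace names.
pattern='dev-*'
for ns in dev-web dev-api prod-web; do
  if [[ "$ns" == $pattern ]]; then
    echo "$ns matches $pattern"
  else
    echo "$ns does not match $pattern"
  fi
done
```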

You can associate an access policy to an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-associate-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that has an access entry that you want to associate an access policy to.

1. Choose the **Access** tab.

1. If the type of the access entry is **Standard**, you can associate or disassociate Amazon EKS **access policies**. If the type of your access entry is anything other than **Standard**, then this option isn’t available.

1. Choose **Associate access policy**.

1. For **Policy name**, select the policy with the permissions you want the IAM principal to have. To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. For **Access scope**, choose an access scope. If you choose **Cluster**, the permissions in the access policy are granted to the IAM principal for resources in all Kubernetes namespaces. If you choose **Kubernetes namespace**, you can then choose **Add new namespace**. In the **Namespace** field that appears, you can enter the name of a Kubernetes namespace on your cluster. If you want the IAM principal to have the permissions across multiple namespaces, then you can enter multiple namespaces.

1. Choose **Add access policy**.

## AWS CLI
<a name="access-associate-cli"></a>

1. Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
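   The version-check pipeline above isolates the version number from the rest of the `aws --version` string. A quick illustration with a sample string (the sample is hypothetical; your output will differ):

   ```shell
   # Sample `aws --version` output (hypothetical). The first cut splits on
   # "/" and takes the second field ("2.15.30 Python"); the second cut
   # keeps everything before the first space.
   sample='aws-cli/2.15.30 Python/3.11.8 Linux/6.1.0 exe/x86_64'
   echo "$sample" | cut -d / -f2 | cut -d ' ' -f1   # prints 2.15.30
   ```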

1. View the available access policies.

   ```
   aws eks list-access-policies --output table
   ```

   An example output is as follows.

   ```
   ---------------------------------------------------------------------------------------------------------
   |                                          ListAccessPolicies                                           |
   +-------------------------------------------------------------------------------------------------------+
   ||                                           accessPolicies                                            ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ||                                 arn                                 |             name              ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy         |  AmazonEKSAdminPolicy         ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy  |  AmazonEKSClusterAdminPolicy  ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy          |  AmazonEKSEditPolicy          ||
   ||  arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy          |  AmazonEKSViewPolicy          ||
   |+---------------------------------------------------------------------+-------------------------------+|
   ```

   To view the permissions included in each policy, see [Review access policy permissions](access-policy-permissions.md).

1. View your existing access entries. Replace *my-cluster* with the name of your cluster.

   ```
   aws eks list-access-entries --cluster-name my-cluster
   ```

   An example output is as follows.

   ```
   {
       "accessEntries": [
           "arn:aws:iam::111122223333:role/my-role",
           "arn:aws:iam::111122223333:user/my-user"
       ]
   }
   ```

1. Associate an access policy to an access entry. The following example associates the `AmazonEKSViewPolicy` access policy to an access entry. Whenever the *my-role* IAM role attempts to access Kubernetes objects on the cluster, Amazon EKS will authorize the role to use the permissions in the policy to access Kubernetes objects in the *my-namespace1* and *my-namespace2* Kubernetes namespaces only. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you want Amazon EKS to authorize access to Kubernetes cluster objects for.

   ```
   aws eks associate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy
   ```

   If you want the IAM principal to have the permissions cluster-wide, replace `type=namespace,namespaces=my-namespace1,my-namespace2` with `type=cluster`. If you want to associate multiple access policies to the access entry, run the command multiple times, each with a unique access policy. Each associated access policy has its own scope.
**Note**  
If you later want to change the scope of an associated access policy, run the previous command again with the new scope. For example, if you wanted to remove *my-namespace2*, you’d run the command again using `type=namespace,namespaces=my-namespace1` only. If you wanted to change the scope from `namespace` to `cluster`, you’d run the command again using `type=cluster`, removing `type=namespace,namespaces=my-namespace1,my-namespace2`.
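   As a sketch, removing *my-namespace2* from the earlier association would look like the following dry run (the command is printed rather than executed; remove `echo` to run it):

   ```shell
   # Dry run: re-running associate-access-policy replaces the previous
   # scope for this policy and principal. Remove `echo` to execute.
   SCOPE='type=namespace,namespaces=my-namespace1'
   echo aws eks associate-access-policy --cluster-name my-cluster \
       --principal-arn arn:aws:iam::111122223333:role/my-role \
       --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy \
       --access-scope "$SCOPE"
   ```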

1. Determine which access policies are associated to an access entry.

   ```
   aws eks list-associated-access-policies --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```

   An example output is as follows.

   ```
   {
       "clusterName": "my-cluster",
       "principalArn": "arn:aws:iam::111122223333",
       "associatedAccessPolicies": [
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy",
               "accessScope": {
                   "type": "cluster",
                   "namespaces": []
               },
               "associatedAt": "2023-04-17T15:25:21.675000-04:00",
               "modifiedAt": "2023-04-17T15:25:21.675000-04:00"
           },
           {
               "policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy",
               "accessScope": {
                   "type": "namespace",
                   "namespaces": [
                       "my-namespace1",
                       "my-namespace2"
                   ]
               },
               "associatedAt": "2023-04-17T15:02:06.511000-04:00",
               "modifiedAt": "2023-04-17T15:02:06.511000-04:00"
           }
       ]
   }
   ```

   In the previous example, the IAM principal for this access entry has view permissions across all namespaces on the cluster, and administrator permissions to two Kubernetes namespaces.
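   If you script against this response, you can pull the associated policy names out of the JSON. The following is a rough sketch using standard text tools against a trimmed copy of the example response (the file path is arbitrary); in practice you might prefer the AWS CLI’s `--query` option or `jq`:

   ```shell
   # Recreate a trimmed copy of the example response, then list the
   # associated policy names it contains.
   printf '%s\n' '{"associatedAccessPolicies": [{"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy", "accessScope": {"type": "cluster", "namespaces": []}}, {"policyArn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy", "accessScope": {"type": "namespace", "namespaces": ["my-namespace1", "my-namespace2"]}}]}' > /tmp/associations.json
   grep -o 'cluster-access-policy/[A-Za-z]*' /tmp/associations.json | sed 's|.*/||' | sort -u
   ```

   For the example response, this prints `AmazonEKSAdminPolicy` and `AmazonEKSViewPolicy`.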

1. Disassociate an access policy from an access entry. In this example, the `AmazonEKSAdminPolicy` policy is disassociated from an access entry. However, the IAM principal retains the permissions in the `AmazonEKSViewPolicy` access policy, because that access policy is not disassociated from the access entry.

   ```
   aws eks disassociate-access-policy --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role \
       --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy
   ```

To list available access policies, see [Review access policy permissions](access-policy-permissions.md).

# Migrating existing `aws-auth ConfigMap` entries to access entries
<a name="migrating-access-entries"></a>

If you’ve added entries to the `aws-auth` `ConfigMap` on your cluster, we recommend that you create access entries for the existing entries in your `aws-auth` `ConfigMap`. After creating the access entries, you can remove the entries from your `ConfigMap`. You can’t associate [access policies](access-policies.md) to entries in the `aws-auth` `ConfigMap`. If you want to associate access policies to your IAM principals, create access entries.

**Important**  
When a cluster is in `API_AND_CONFIG_MAP` authentication mode and there’s a mapping for the same IAM role in both the `aws-auth` `ConfigMap` and in access entries, the role uses the access entry’s mapping for authentication. Access entries take precedence over `ConfigMap` entries for the same IAM principal.  
Before removing existing `aws-auth` `ConfigMap` entries that Amazon EKS created for a [managed node group](managed-node-groups.md) or a [Fargate profile](fargate-profile.md) on your cluster, double-check that the correct access entries for those specific resources exist in your Amazon EKS cluster. If you remove entries that Amazon EKS created in the `ConfigMap` without having the equivalent access entries, your cluster won’t function properly.

## Prerequisites
<a name="migrating_access_entries_prereq"></a>
+ Familiarity with access entries and access policies. For more information, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) and [Associate access policies with access entries](access-policies.md).
+ An existing cluster with a platform version that is at or later than the versions listed in the Prerequisites of the [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md) topic.
+ Version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ Kubernetes permissions to modify the `aws-auth` `ConfigMap` in the `kube-system` namespace.
+ An AWS Identity and Access Management role or user with the following permissions: `CreateAccessEntry` and `ListAccessEntries`. For more information, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference.

## `eksctl`
<a name="migrating_access_entries_eksctl"></a>

1. View the existing entries in your `aws-auth ConfigMap`. Replace *my-cluster* with the name of your cluster.

   ```
   eksctl get iamidentitymapping --cluster my-cluster
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                                                  ACCOUNT
   arn:aws:iam::111122223333:role/EKS-my-cluster-Admins                                            Admins                                  system:masters
   arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers                              my-namespace-Viewers                    Viewers
   arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1                                 system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   arn:aws:iam::111122223333:user/my-user                                                          my-user
   arn:aws:iam::111122223333:role/EKS-my-cluster-fargateprofile1                                   system:node:{{SessionName}}             system:bootstrappers,system:nodes,system:node-proxier
   arn:aws:iam::111122223333:role/EKS-my-cluster-managed-ng                                        system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   ```

1.  [Create access entries](creating-access-entries.md) for any of the `ConfigMap` entries that you created that are returned in the previous output. When creating the access entries, make sure to specify the same values for `ARN`, `USERNAME`, `GROUPS`, and `ACCOUNT` returned in your output. In the example output, you would create access entries for all entries except the last two, since those entries were created by Amazon EKS for a Fargate profile and a managed node group.
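   For example, recreating the Viewers mapping from the output above as an access entry might look like the following dry run (the command is printed rather than executed; remove `echo` to run it). Note that access entries don’t accept Kubernetes group names that begin with `system:`, so an entry like the Admins mapping to `system:masters` should instead use an access policy such as `AmazonEKSClusterAdminPolicy`:

   ```shell
   # Dry run: print the create-access-entry command that mirrors the
   # Viewers ConfigMap entry from the example output above.
   ROLE_ARN=arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers
   echo aws eks create-access-entry --cluster-name my-cluster \
       --principal-arn "$ROLE_ARN" \
       --username my-namespace-Viewers \
       --kubernetes-groups Viewers
   ```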

1. Delete the entries from the `ConfigMap` for any access entries that you created. If you don’t delete the entry from the `ConfigMap`, the settings for the access entry for the IAM principal ARN override the `ConfigMap` entry. Replace *111122223333* with your AWS account ID and *EKS-my-cluster-my-namespace-Viewers* with the name of the role in the entry in your `ConfigMap`. If the entry you’re removing is for an IAM user, rather than an IAM role, replace `role` with `user` and *EKS-my-cluster-my-namespace-Viewers* with the user name.

   ```
   eksctl delete iamidentitymapping --arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --cluster my-cluster
   ```

# Review access policy permissions
<a name="access-policy-permissions"></a>

Access policies include `rules` that contain Kubernetes `verbs` (permissions) and `resources`. Access policies don’t include IAM permissions or resources. Similar to Kubernetes `Role` and `ClusterRole` objects, access policies only include `allow` `rules`. You can’t modify the contents of an access policy. You can’t create your own access policies. If the permissions in the access policies don’t meet your needs, then create Kubernetes RBAC objects and specify *group names* for your access entries. For more information, see [Create access entries](creating-access-entries.md). The permissions contained in access policies are similar to the permissions in the Kubernetes user-facing cluster roles. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.

## List all policies
<a name="access-policies-cli-command"></a>

Use any one of the access policies listed on this page, or retrieve a list of all available access policies using the AWS CLI:

```
aws eks list-access-policies
```

An example output is as follows (abbreviated):

```
{
    "accessPolicies": [
        {
            "name": "AmazonAIOpsAssistantPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonAIOpsAssistantPolicy"
        },
        {
            "name": "AmazonARCRegionSwitchScalingPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonARCRegionSwitchScalingPolicy"
        },
        {
            "name": "AmazonEKSAdminPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy"
        },
        {
            "name": "AmazonEKSAdminViewPolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy"
        },
        {
            "name": "AmazonEKSAutoNodePolicy",
            "arn": "arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy"
        }
        // Additional policies omitted
    ]
}
```
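
You can also list the access policies already associated with a specific IAM principal. A minimal sketch, assuming a cluster named *my-cluster* and a placeholder principal ARN:

```
aws eks list-associated-access-policies \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role
```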

## AmazonEKSAdminPolicy
<a name="access-policy-permissions-amazoneksadminpolicy"></a>

This access policy grants an IAM principal most permissions to resources. When associated with an access entry, its access scope is typically one or more Kubernetes namespaces. If you want an IAM principal to have administrator access to all resources on your cluster, associate the [AmazonEKSClusterAdminPolicy](#access-policy-permissions-amazoneksclusteradminpolicy) access policy with your access entry instead.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `authorization.k8s.io`   |   `localsubjectaccessreviews`   |   `create`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|   `rbac.authorization.k8s.io`   |   `rolebindings`, `roles`   |   `create`, `delete`, `deletecollection`, `get`, `list`, `patch`, `update`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
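
As a sketch, you might associate this policy at namespace scope with the AWS CLI. The cluster name, principal ARN, and namespace below are placeholders:

```
aws eks associate-access-policy \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminPolicy \
    --access-scope type=namespace,namespaces=my-namespace
```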

## AmazonEKSClusterAdminPolicy
<a name="access-policy-permissions-amazoneksclusteradminpolicy"></a>

This access policy grants an IAM principal administrator access to a cluster. When associated with an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. If you want an IAM principal to have a more limited administrative scope, consider associating the [AmazonEKSAdminPolicy](#access-policy-permissions-amazoneksadminpolicy) access policy with your access entry instead.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy` 


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `*`   |   `*`   | 
|  |   `*`   |  |   `*`   | 
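
Because this policy is intended for cluster-wide administration, a typical association uses cluster scope. A sketch with a placeholder cluster name and principal ARN:

```
aws eks associate-access-policy \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-admin-role \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
    --access-scope type=cluster
```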

## AmazonEKSAdminViewPolicy
<a name="access-policy-permissions-amazoneksadminviewpolicy"></a>

This access policy grants an IAM principal access to list and view all resources in a cluster. Note that this includes [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAdminViewPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `get`, `list`, `watch`   | 

## AmazonEKSEditPolicy
<a name="access-policy-permissions-amazonekseditpolicy"></a>

This access policy includes permissions that allow an IAM principal to edit most Kubernetes resources.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSEditPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `replicasets`, `replicasets/scale`, `statefulsets`, `statefulsets/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `jobs`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `deployments`, `deployments/rollback`, `deployments/scale`, `ingresses`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicationcontrollers/scale`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `networkpolicies`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 
|  |   `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`, `secrets`, `services/proxy`   |   `get`, `list`, `watch`   | 
|  |   `serviceaccounts`   |   `impersonate`   | 
|  |   `pods`, `pods/attach`, `pods/exec`, `pods/portforward`, `pods/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `events`, `persistentvolumeclaims`, `replicationcontrollers`, `replicationcontrollers/scale`, `secrets`, `serviceaccounts`, `services`, `services/proxy`   |   `create`, `delete`, `deletecollection`, `patch`, `update`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 

## AmazonEKSViewPolicy
<a name="access-policy-permissions-amazoneksviewpolicy"></a>

This access policy includes permissions that allow an IAM principal to view most Kubernetes resources.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSViewPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `apps`   |   `controllerrevisions`, `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `replicasets`, `replicasets/scale`, `replicasets/status`, `statefulsets`, `statefulsets/scale`, `statefulsets/status`   |   `get`, `list`, `watch`   | 
|   `autoscaling`   |   `horizontalpodautoscalers`, `horizontalpodautoscalers/status`   |   `get`, `list`, `watch`   | 
|   `batch`   |   `cronjobs`, `cronjobs/status`, `jobs`, `jobs/status`   |   `get`, `list`, `watch`   | 
|   `discovery.k8s.io`   |   `endpointslices`   |   `get`, `list`, `watch`   | 
|   `extensions`   |   `daemonsets`, `daemonsets/status`, `deployments`, `deployments/scale`, `deployments/status`, `ingresses`, `ingresses/status`, `networkpolicies`, `replicasets`, `replicasets/scale`, `replicasets/status`, `replicationcontrollers/scale`   |   `get`, `list`, `watch`   | 
|   `networking.k8s.io`   |   `ingresses`, `ingresses/status`, `networkpolicies`   |   `get`, `list`, `watch`   | 
|   `policy`   |   `poddisruptionbudgets`, `poddisruptionbudgets/status`   |   `get`, `list`, `watch`   | 
|  |   `configmaps`, `endpoints`, `persistentvolumeclaims`, `persistentvolumeclaims/status`, `pods`, `replicationcontrollers`, `replicationcontrollers/scale`, `serviceaccounts`, `services`, `services/status`   |   `get`, `list`, `watch`   | 
|  |   `bindings`, `events`, `limitranges`, `namespaces/status`, `pods/log`, `pods/status`, `replicationcontrollers/status`, `resourcequotas`, `resourcequotas/status`   |   `get`, `list`, `watch`   | 
|  |   `namespaces`   |   `get`, `list`, `watch`   | 

## AmazonEKSSecretReaderPolicy
<a name="_amazonekssecretreaderpolicy"></a>

This access policy includes permissions that allow an IAM principal to read [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/).

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSSecretReaderPolicy` 


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `get`, `list`, `watch`   | 

## AmazonEKSAutoNodePolicy
<a name="_amazoneksautonodepolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy` 

This policy includes permissions that allow Amazon EKS components to complete the following tasks:
+  `kube-proxy` – Monitor network endpoints and services, and manage related events. This enables cluster-wide network proxy functionality.
+  `ipamd` – Manage AWS VPC networking resources and container network interfaces (CNI). This allows the IP address management daemon to handle pod networking.
+  `coredns` – Access service discovery resources like endpoints and services. This enables DNS resolution within the cluster.
+  `ebs-csi-driver` – Work with storage-related resources for Amazon EBS volumes. This allows dynamic provisioning and attachment of persistent volumes.
+  `neuron` – Monitor nodes and pods for AWS Neuron devices. This enables management of AWS Inferentia and Trainium accelerators.
+  `node-monitoring-agent` – Access node diagnostics and events. This enables cluster health monitoring and diagnostics collection.

Each component uses a dedicated service account and is restricted to only the permissions required for its specific function.

If you manually specify a node IAM role in a NodeClass, you need to create an access entry that associates the new node IAM role with this access policy.
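
A sketch of that association, assuming a placeholder cluster name and a hypothetical custom node role named *MyCustomNodeRole*:

```
# Create an EC2-type access entry for the custom node IAM role.
aws eks create-access-entry \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyCustomNodeRole \
    --type EC2

# Associate the Auto Mode node access policy at cluster scope.
aws eks associate-access-policy \
    --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/MyCustomNodeRole \
    --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSAutoNodePolicy \
    --access-scope type=cluster
```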

## AmazonEKSBlockStoragePolicy
<a name="_amazoneksblockstoragepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStoragePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election and coordination resources for storage operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS storage components to coordinate their activities across the cluster through a leader election mechanism.

The policy is scoped to specific lease resources used by the EKS storage components to prevent conflicting access to other coordination resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSLoadBalancingPolicy
<a name="_amazoneksloadbalancingpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for load balancing:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS load balancing components to coordinate activities across multiple replicas by electing a leader.

The policy is scoped specifically to load balancing lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSNetworkingPolicy
<a name="_amazoneksnetworkingpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingPolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for networking:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS networking components to coordinate IP address allocation activities by electing a leader.

The policy is scoped specifically to networking lease resources to ensure proper coordination while preventing access to other lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSComputePolicy
<a name="_amazonekscomputepolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputePolicy` 

This policy includes permissions that allow Amazon EKS to manage leader election resources for compute operations:
+  `coordination.k8s.io` – Create and manage lease objects for leader election. This enables EKS compute components to coordinate node scaling activities by electing a leader.

The policy is scoped specifically to compute management lease resources while allowing basic read access (`get`, `watch`) to all lease resources in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute management capability to function properly.

## AmazonEKSBlockStorageClusterPolicy
<a name="_amazoneksblockstorageclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSBlockStorageClusterPolicy` 

This policy grants permissions necessary for the block storage capability of Amazon EKS Auto Mode. It enables efficient management of block storage resources within Amazon EKS clusters. The policy includes the following permissions:

CSI Driver Management:
+ Create, read, update, and delete CSI drivers, specifically for block storage.

Volume Management:
+ List, watch, create, update, patch, and delete persistent volumes.
+ List, watch, and update persistent volume claims.
+ Patch persistent volume claim statuses.

Node and Pod Interaction:
+ Read node and pod information.
+ Manage events related to storage operations.

Storage Classes and Attributes:
+ Read storage classes and CSI nodes.
+ Read volume attribute classes.

Volume Attachments:
+ List, watch, and modify volume attachments and their statuses.

Snapshot Operations:
+ Manage volume snapshots, snapshot contents, and snapshot classes.
+ Handle operations for volume group snapshots and related resources.

This policy is designed to support comprehensive block storage management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including provisioning, attaching, resizing, and snapshotting of block storage volumes.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the block storage capability to function properly.

## AmazonEKSComputeClusterPolicy
<a name="_amazonekscomputeclusterpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSComputeClusterPolicy` 

This policy grants permissions necessary for the compute management capability of Amazon EKS Auto Mode. It enables efficient orchestration and scaling of compute resources within Amazon EKS clusters. The policy includes the following permissions:

Node Management:
+ Create, read, update, delete, and manage status of NodePools and NodeClaims.
+ Manage NodeClasses, including creation, modification, and deletion.

Scheduling and Resource Management:
+ Read access to pods, nodes, persistent volumes, persistent volume claims, replication controllers, and namespaces.
+ Read access to storage classes, CSI nodes, and volume attachments.
+ List and watch deployments, daemon sets, replica sets, and stateful sets.
+ Read pod disruption budgets.

Event Handling:
+ Create, read, and manage cluster events.

Node Deprovisioning and Pod Eviction:
+ Update, patch, and delete nodes.
+ Create pod evictions and delete pods when necessary.

Custom Resource Definition (CRD) Management:
+ Create new CRDs.
+ Manage specific CRDs related to node management (NodeClasses, NodePools, NodeClaims, and NodeDiagnostics).

This policy is designed to support comprehensive compute management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node provisioning, scheduling, scaling, and resource optimization.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the compute management capability to function properly.

## AmazonEKSLoadBalancingClusterPolicy
<a name="_amazoneksloadbalancingclusterpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSLoadBalancingClusterPolicy` 

This policy grants permissions necessary for the load balancing capability of Amazon EKS Auto Mode. It enables efficient management and configuration of load balancing resources within Amazon EKS clusters. The policy includes the following permissions:

Event and Resource Management:
+ Create and patch events.
+ Read access to pods, nodes, endpoints, and namespaces.
+ Update pod statuses.

Service and Ingress Management:
+ Full management of services and their statuses.
+ Comprehensive control over ingresses and their statuses.
+ Read access to endpoint slices and ingress classes.

Target Group Bindings:
+ Create and modify target group bindings and their statuses.
+ Read access to ingress class parameters.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management of targetgroupbindings.eks.amazonaws.com and ingressclassparams.eks.amazonaws.com CRDs.

Webhook Configuration:
+ Create and read mutating and validating webhook configurations.
+ Manage the eks-load-balancing-webhook configuration.

This policy is designed to support comprehensive load balancing management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including service exposure, ingress routing, and integration with AWS load balancing services.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the load balancing capability to function properly.

## AmazonEKSNetworkingClusterPolicy
<a name="_amazoneksnetworkingclusterpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSNetworkingClusterPolicy` 

This policy grants permissions necessary for the networking capability of Amazon EKS Auto Mode. It enables efficient management and configuration of networking resources within Amazon EKS clusters. The policy includes the following permissions:

Node and Pod Management:
+ Read access to NodeClasses and their statuses.
+ Read access to NodeClaims and their statuses.
+ Read access to pods.

CNI Node Management:
+ Permissions for CNINodes and their statuses, including create, read, update, delete, and patch.

Custom Resource Definition (CRD) Management:
+ Create and read all CRDs.
+ Specific management (update, patch, delete) of the cninodes.eks.amazonaws.com CRD.

Event Management:
+ Create and patch events.

This policy is designed to support comprehensive networking management within Amazon EKS clusters running in Auto Mode. It combines permissions for various operations including node networking configuration, CNI (Container Network Interface) management, and related custom resource handling.

The policy allows the networking components to interact with node-related resources, manage CNI-specific node configurations, and handle custom resources critical for networking operations in the cluster.

Amazon EKS automatically creates an access entry with this access policy for the cluster IAM role when Auto Mode is enabled, ensuring that the necessary permissions are in place for the networking capability to function properly.

## AmazonEKSHybridPolicy
<a name="access-policy-permissions-amazonekshybridpolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

This access policy includes permissions that grant Amazon EKS access to the nodes of a cluster. When associated with an access entry, its access scope is typically the cluster, rather than a Kubernetes namespace. This policy is used by Amazon EKS hybrid nodes.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSHybridPolicy` 


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|   `*`   |  |   `nodes`   |   `list`   | 

## AmazonEKSClusterInsightsPolicy
<a name="access-policy-permissions-AmazonEKSClusterInsightsPolicy"></a>

**Note**  
This policy is designated for AWS service-linked roles only and cannot be used with customer-managed roles.

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterInsightsPolicy` 

This policy grants read-only permissions for Amazon EKS Cluster Insights functionality. The policy includes the following permissions:

Node Access:
+ List and view cluster nodes.
+ Read node status information.

DaemonSet Access:
+ Read access to `kube-proxy` configuration.

This policy is automatically managed by the EKS service for Cluster Insights. For more information, see [Prepare for Kubernetes version upgrades and troubleshoot misconfigurations with cluster insights](cluster-insights.md).

## AWSBackupFullAccessPolicyForBackup
<a name="_awsbackupfullaccesspolicyforbackup"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForBackup` 

This policy grants the permissions necessary for AWS Backup to manage and create backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`   | 

## AWSBackupFullAccessPolicyForRestore
<a name="_awsbackupfullaccesspolicyforrestore"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AWSBackupFullAccessPolicyForRestore` 

This policy grants the permissions necessary for AWS Backup to manage and restore backups of the EKS Cluster. This policy includes the following permissions:


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `*`   |   `*`   |   `list`, `get`, `create`   | 

## AmazonEKSACKPolicy
<a name="_amazoneksackpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSACKPolicy` 

This policy grants permissions necessary for the AWS Controllers for Kubernetes (ACK) capability to manage AWS resources from Kubernetes. The policy includes the following permissions:

ACK Custom Resource Management:
+ Full access to all ACK service custom resources across more than 50 AWS services, including S3, RDS, DynamoDB, Lambda, and EC2.
+ Create, read, update, and delete ACK custom resource definitions.

Namespace Access:
+ Read access to namespaces for resource organization.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete specific ACK service controller leases.

Event Management:
+ Create and patch events for ACK operations.

This policy is designed to support comprehensive AWS resource management through Kubernetes APIs. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the ACK capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `namespaces`   |   `get`, `watch`, `list`   | 
|   `services.k8s.aws`, `acm.services.k8s.aws`, `acmpca.services.k8s.aws`, `apigateway.services.k8s.aws`, `apigatewayv2.services.k8s.aws`, `applicationautoscaling.services.k8s.aws`, `athena.services.k8s.aws`, `bedrock.services.k8s.aws`, `bedrockagent.services.k8s.aws`, `bedrockagentcorecontrol.services.k8s.aws`, `cloudfront.services.k8s.aws`, `cloudtrail.services.k8s.aws`, `cloudwatch.services.k8s.aws`, `cloudwatchlogs.services.k8s.aws`, `codeartifact.services.k8s.aws`, `cognitoidentityprovider.services.k8s.aws`, `documentdb.services.k8s.aws`, `dynamodb.services.k8s.aws`, `ec2.services.k8s.aws`, `ecr.services.k8s.aws`, `ecrpublic.services.k8s.aws`, `ecs.services.k8s.aws`, `efs.services.k8s.aws`, `eks.services.k8s.aws`, `elasticache.services.k8s.aws`, `elbv2.services.k8s.aws`, `emrcontainers.services.k8s.aws`, `eventbridge.services.k8s.aws`, `iam.services.k8s.aws`, `kafka.services.k8s.aws`, `keyspaces.services.k8s.aws`, `kinesis.services.k8s.aws`, `kms.services.k8s.aws`, `lambda.services.k8s.aws`, `memorydb.services.k8s.aws`, `mq.services.k8s.aws`, `networkfirewall.services.k8s.aws`, `opensearchservice.services.k8s.aws`, `organizations.services.k8s.aws`, `pipes.services.k8s.aws`, `prometheusservice.services.k8s.aws`, `ram.services.k8s.aws`, `rds.services.k8s.aws`, `recyclebin.services.k8s.aws`, `route53.services.k8s.aws`, `route53resolver.services.k8s.aws`, `s3.services.k8s.aws`, `s3control.services.k8s.aws`, `sagemaker.services.k8s.aws`, `secretsmanager.services.k8s.aws`, `ses.services.k8s.aws`, `sfn.services.k8s.aws`, `sns.services.k8s.aws`, `sqs.services.k8s.aws`, `ssm.services.k8s.aws`, `wafv2.services.k8s.aws`   |   `*`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (specific ACK service controller leases only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 

## AmazonEKSArgoCDClusterPolicy
<a name="_amazoneksargocdclusterpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDClusterPolicy` 

This policy grants cluster-level permissions necessary for the Argo CD capability to discover resources and manage cluster-scoped objects. The policy includes the following permissions:

Namespace Management:
+ Create, read, update, and delete namespaces for application namespace management.

Custom Resource Definition Management:
+ Manage Argo CD-specific CRDs (Applications, AppProjects, ApplicationSets).

API Discovery:
+ Read access to Kubernetes API endpoints for resource discovery.

This policy is designed to support cluster-level Argo CD operations including namespace management and CRD installation. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created.


| Kubernetes API groups | Kubernetes nonResourceURLs | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | --- | 
|  |  |   `namespaces`   |   `create`, `get`, `update`, `patch`, `delete`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions`   |   `create`   | 
|   `apiextensions.k8s.io`   |  |   `customresourcedefinitions` (Argo CD CRDs only)  |   `get`, `update`, `patch`, `delete`   | 
|  |   `/api`, `/api/*`, `/apis`, `/apis/*`   |  |   `get`   | 

## AmazonEKSArgoCDPolicy
<a name="_amazoneksargocdpolicy"></a>

 **ARN** – ` arn:aws:eks::aws:cluster-access-policy/AmazonEKSArgoCDPolicy` 

This policy grants namespace-level permissions necessary for the Argo CD capability to deploy and manage applications. The policy includes the following permissions:

Secret Management:
+ Full access to secrets for Git credentials and cluster secrets.

ConfigMap Access:
+ Read access to ConfigMaps to send warnings if customers try to use unsupported Argo CD ConfigMaps.

Event Management:
+ Read and create events for application lifecycle tracking.

Argo CD Resource Management:
+ Full access to Applications, ApplicationSets, and AppProjects.
+ Manage finalizers and status for Argo CD resources.

This policy is designed to support namespace-level Argo CD operations including application deployment and management. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the Argo CD capability is created, scoped to the Argo CD namespace.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|  |   `secrets`   |   `*`   | 
|  |   `configmaps`   |   `get`, `list`, `watch`   | 
|  |   `events`   |   `get`, `list`, `watch`, `patch`, `create`   | 
|   `argoproj.io`   |   `applications`, `applications/finalizers`, `applications/status`, `applicationsets`, `applicationsets/finalizers`, `applicationsets/status`, `appprojects`, `appprojects/finalizers`, `appprojects/status`   |   `*`   | 

## AmazonEKSKROPolicy
<a name="_amazonekskropolicy"></a>

 **ARN** – `arn:aws:eks::aws:cluster-access-policy/AmazonEKSKROPolicy` 

This policy grants permissions necessary for the kro (Kube Resource Orchestrator) capability to create and manage custom Kubernetes APIs. The policy includes the following permissions:

kro Resource Management:
+ Full access to all kro resources including ResourceGraphDefinitions and custom resource instances.

Custom Resource Definition Management:
+ Create, read, update, and delete CRDs for custom APIs defined by ResourceGraphDefinitions.

Leader Election:
+ Create and read coordination leases for leader election.
+ Update and delete the kro controller lease.

Event Management:
+ Create and patch events for kro operations.

This policy is designed to support comprehensive resource composition and custom API management through kro. Amazon EKS automatically creates an access entry with this access policy for the capability IAM role that you supply when the kro capability is created.


| Kubernetes API groups | Kubernetes resources | Kubernetes verbs (permissions) | 
| --- | --- | --- | 
|   `kro.run`   |   `*`   |   `*`   | 
|   `apiextensions.k8s.io`   |   `customresourcedefinitions`   |   `*`   | 
|   `coordination.k8s.io`   |   `leases`   |   `create`, `get`, `list`, `watch`   | 
|   `coordination.k8s.io`   |   `leases` (kro controller lease only)  |   `delete`, `update`, `patch`   | 
|  |   `events`   |   `create`, `patch`   | 

## Access policy updates
<a name="access-policy-updates"></a>

The following table describes updates to access policies since they were introduced. For automatic alerts about changes to this page, subscribe to the RSS feed in [Document history](doc-history.md).


| Change | Description | Date | 
| --- | --- | --- | 
|  Add policies for EKS Capabilities  |  Publish `AmazonEKSACKPolicy`, `AmazonEKSArgoCDClusterPolicy`, `AmazonEKSArgoCDPolicy`, and `AmazonEKSKROPolicy` for managing EKS Capabilities  |  November 22, 2025  | 
|  Add `AmazonEKSSecretReaderPolicy`   |  Add a new policy for read-only access to secrets  |  November 6, 2025  | 
|  Add policy for EKS Cluster Insights  |  Publish `AmazonEKSClusterInsightsPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Hybrid  |  Publish `AmazonEKSHybridPolicy`   |  December 2, 2024  | 
|  Add policies for Amazon EKS Auto Mode  |  These access policies give the Cluster IAM Role and Node IAM Role permission to call Kubernetes APIs. AWS uses these to automate routine tasks for storage, compute, and networking resources.  |  December 2, 2024  | 
|  Add `AmazonEKSAdminViewPolicy`   |  Add a new policy for expanded view access, including resources like Secrets.  |  April 23, 2024  | 
|  Access policies introduced.  |  Amazon EKS introduced access policies.  |  May 29, 2023  | 

# Change authentication mode to use access entries
<a name="setting-up-access-entries"></a>

To begin using access entries, you must change the authentication mode of the cluster to either `API_AND_CONFIG_MAP` or `API`. This enables the API for access entries.

## AWS Console
<a name="access-entries-setup-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. The **Authentication mode** shows the current authentication mode of the cluster. If the mode says EKS API, you can already add access entries and you can skip the remaining steps.

1. Choose **Manage access**.

1. For **Cluster authentication mode**, select a mode that includes the EKS API. Note that you can’t change the authentication mode back to a mode that removes the EKS API and access entries.

1. Choose **Save changes**. Amazon EKS begins to update the cluster, the status of the cluster changes to Updating, and the change is recorded in the **Update history** tab.

1. Wait for the status of the cluster to return to Active. When the cluster is Active, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.

## AWS CLI
<a name="access-setup-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the *AWS Command Line Interface User Guide*.

1. Run the following command. Replace *my-cluster* with the name of your cluster. If you want to disable the `ConfigMap` method permanently, replace `API_AND_CONFIG_MAP` with `API`.

   After you run the command, Amazon EKS begins to update the cluster. The status of the cluster changes to `UPDATING`, and you can view the update with the `aws eks list-updates` command.

   ```
   aws eks update-cluster-config --name my-cluster --access-config authenticationMode=API_AND_CONFIG_MAP
   ```

1. Wait for the status of the cluster to return to `ACTIVE`. When the cluster is `ACTIVE`, you can follow the steps in [Create access entries](creating-access-entries.md) to add access to the cluster for IAM principals.
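Rather than checking the console, you can poll the status from the CLI, or use the built-in waiter to block until the update completes. The cluster name is illustrative:

```
# Check the current status of the cluster
aws eks describe-cluster --name my-cluster --query 'cluster.status' --output text

# Or block until the update completes and the status is ACTIVE
aws eks wait cluster-active --name my-cluster
```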

## Required platform version
<a name="_required_platform_version"></a>

To use *access entries*, the cluster must have a platform version that is the same or later than the version listed in the following table, or a Kubernetes version that is later than the versions listed in the table. If your Kubernetes version is not listed, all platform versions support access entries.


| Kubernetes version | Platform version | 
| --- | --- | 
|  Not Listed  |  All Supported  | 
|   `1.30`   |   `eks.2`   | 
|   `1.29`   |   `eks.1`   | 
|   `1.28`   |   `eks.6`   | 

For more information, see [platform-versions](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html).
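To compare your cluster against this table, you can read both the Kubernetes version and the platform version with a single `describe-cluster` call. The cluster name is illustrative:

```
# Show the Kubernetes version and platform version of the cluster
aws eks describe-cluster --name my-cluster \
    --query '{kubernetesVersion: cluster.version, platformVersion: cluster.platformVersion}' \
    --output table
```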

# Create access entries
<a name="creating-access-entries"></a>

Before creating access entries, consider the following:
+ A properly set authentication mode. See [Change authentication mode to use access entries](setting-up-access-entries.md).
+ An *access entry* includes the Amazon Resource Name (ARN) of one, and only one, existing IAM principal. An IAM principal can’t be included in more than one access entry. Additional considerations for the ARN that you specify:
  + IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.
  + If the ARN is for an IAM role, it *can* include a path. ARNs in `aws-auth` `ConfigMap` entries *can’t* include a path. For example, your ARN can be `arn:aws:iam::<111122223333>:role/<development/apps/my-role>` or `arn:aws:iam::<111122223333>:role/<my-role>`.
  + If the type of the access entry is anything other than `STANDARD` (see next consideration about types), the ARN must be in the same AWS account that your cluster is in. If the type is `STANDARD`, the ARN can be in the same, or different, AWS account than the account that your cluster is in.
  + You can’t change the IAM principal after the access entry is created.
  + If you ever delete the IAM principal with this ARN, the access entry isn’t automatically deleted. We recommend that you delete the access entry with an ARN for an IAM principal that you delete. If you don’t delete the access entry and ever recreate the IAM principal, even if it has the same ARN, the access entry won’t work. This is because even though the ARN is the same for the recreated IAM principal, the `roleID` or `userID` (you can see this with the `aws sts get-caller-identity` AWS CLI command) is different for the recreated IAM principal than it was for the original IAM principal. Even though you don’t see the IAM principal’s `roleID` or `userID` for an access entry, Amazon EKS stores it with the access entry.
+ Each access entry has a *type*. The type of the access entry depends on the type of resource it is associated with, and does not define the permissions. If you don’t specify a type, Amazon EKS automatically sets the type to `STANDARD`.
  +  `EC2_LINUX` - For an IAM role used with Linux or Bottlerocket self-managed nodes
  +  `EC2_WINDOWS` - For an IAM role used with Windows self-managed nodes
  +  `FARGATE_LINUX` - For an IAM role used with AWS Fargate (Fargate)
  +  `HYBRID_LINUX` - For an IAM role used with hybrid nodes
  +  `STANDARD` - Default type if none specified
  +  `EC2` - For EKS Auto Mode custom node classes. For more information, see [Create node class access entry](create-node-class.md#auto-node-access-entry).
  + You can’t change the type after the access entry is created.
+ It’s unnecessary to create an access entry for an IAM role that’s used for a managed node group or a Fargate profile. Amazon EKS creates access entries for these roles automatically (if access entries are enabled), or updates the `aws-auth` `ConfigMap` (if access entries are unavailable).
+ If the type of the access entry is `STANDARD`, you can specify a *username* for the access entry. If you don’t specify a value for username, Amazon EKS sets one of the following values for you, depending on the type of the access entry and whether the IAM principal that you specified is an IAM role or IAM user. Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you. If you specify your own username:
  + It can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
  + If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username. If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}`. When this role is assumed, the AWS STS session name that is specified when assuming the role is automatically passed to the cluster and appears in CloudTrail logs. For example, you can’t have a username of `john{{SessionName}}`. The username would have to be `:john{{SessionName}}` or `jo:hn{{SessionName}}`. The colon only has to be before `{{SessionName}}`. The username generated by Amazon EKS in the following table includes an ARN. Since an ARN includes colons, it meets this requirement. The colon isn’t required if you don’t include `{{SessionName}}` in your username. Note that in `{{SessionName}}` the special character "@" is replaced with "-" in the session name. `{{SessionNameRaw}}` keeps all special characters in the session name.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/creating-access-entries.html)

    You can change the username after the access entry is created.
+ If an access entry’s type is `STANDARD`, and you want to use Kubernetes RBAC authorization, you can add one or more *group names* to the access entry. After you create an access entry you can add and remove group names. For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based authorization (RBAC) objects. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`. Kubernetes authorizes the IAM principal access to any cluster objects that you’ve specified in a Kubernetes `Role` or `ClusterRole` object that you’ve also specified in your binding’s `roleRef`. If you specify group names, we recommend that you’re familiar with the Kubernetes role-based authorization (RBAC) objects. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
**Important**  
Amazon EKS doesn’t confirm that any Kubernetes RBAC objects that exist on your cluster include any of the group names that you specify. For example, if you create an access entry for a group that doesn’t currently exist, Amazon EKS accepts the configuration without returning an error, but the IAM principal won’t have any permissions until matching Kubernetes RBAC resources are created.

  Instead of, or in addition to, Kubernetes authorizing the IAM principal access to Kubernetes objects on your cluster, you can associate Amazon EKS *access policies* to an access entry. Amazon EKS authorizes IAM principals to access Kubernetes objects on your cluster with the permissions in the access policy. You can scope an access policy’s permissions to Kubernetes namespaces that you specify. Use of access policies doesn’t require you to manage Kubernetes RBAC objects. For more information, see [Associate access policies with access entries](access-policies.md).
+ If you create an access entry with type `EC2_LINUX` or `EC2_WINDOWS`, the IAM principal creating the access entry must have the `iam:PassRole` permission. For more information, see [Granting a user permissions to pass a role to an AWS service](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_passrole.html) in the *IAM User Guide*.
+ Similar to standard [IAM behavior](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency), access entry creation and updates are eventually consistent, and may take several seconds to take effect after the initial API call returns successfully. You must design your applications to account for these potential delays. We recommend that you don’t include access entry creates or updates in the critical, high-availability code paths of your application. Instead, make changes in a separate initialization or setup routine that you run less frequently. Also, be sure to verify that the changes have been propagated before production workflows depend on them.
+ Access entries do not support [service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html). You cannot create access entries where the principal ARN is a service-linked role. You can identify service-linked roles by their ARN, which is in the format `arn:aws:iam::*:role/aws-service-role/*`.
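As one way to handle the eventual consistency described above, a setup script can poll until a newly created access entry is visible before anything depends on it. The cluster name, account ID, and role name below are illustrative:

```
# Poll until the access entry is readable; describe-access-entry returns a
# non-zero exit code until the entry has propagated
until aws eks describe-access-entry --cluster-name my-cluster \
      --principal-arn arn:aws:iam::111122223333:role/my-role >/dev/null 2>&1; do
  sleep 5
done
echo "access entry is visible"
```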

You can create an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-create-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to create an access entry in.

1. Choose the **Access** tab.

1. Choose **Create access entry**.

1. For **IAM principal**, select an existing IAM role or user. IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

1. For **Type**, if the access entry is for the node role used for self-managed Amazon EC2 nodes, select **EC2 Linux** or **EC2 Windows**. Otherwise, accept the default (**Standard**).

1. If the **Type** you chose is **Standard** and you want to specify a **Username**, enter the username.

1. If the **Type** you chose is **Standard** and you want to use Kubernetes RBAC authorization for the IAM principal, specify one or more names for **Groups**. If you don’t specify any group names and want to use Amazon EKS authorization, you can associate an access policy in a later step, or after the access entry is created.

1. (Optional) For **Tags**, assign labels to the access entry. For example, to make it easier to find all resources with the same tag.

1. Choose **Next**.

1. On the **Add access policy** page, if the type you chose was **Standard** and you want Amazon EKS to authorize the IAM principal to have permissions to the Kubernetes objects on your cluster, complete the following steps. Otherwise, choose **Next**.

   1. For **Policy name**, choose an access policy. You can’t view the permissions of the access policies, but they include similar permissions to those in the Kubernetes user-facing `ClusterRole` objects. For more information, see [User-facing roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles) in the Kubernetes documentation.

   1. Choose one of the following options:
      +  **Cluster** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects on your cluster.
      +  **Kubernetes namespace** – Choose this option if you want Amazon EKS to authorize the IAM principal to have the permissions in the access policy for all Kubernetes objects in a specific Kubernetes namespace on your cluster. For **Namespace**, enter the name of the Kubernetes namespace on your cluster. If you want to add additional namespaces, choose **Add new namespace** and enter the namespace name.

   1. If you want to add additional policies, choose **Add policy**. You can scope each policy differently, but you can add each policy only once.

   1. Choose **Next**.

1. Review the configuration for your access entry. If anything looks incorrect, choose **Previous** to go back through the steps and correct the error. If the configuration is correct, choose **Create**.

## AWS CLI
<a name="access-create-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. Use any of the following examples to create an access entry:
   + Create an access entry for a self-managed Amazon EC2 Linux node group. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-self-managed-ng-1* with the name of your [node IAM role](create-node-role.md). If your node group is a Windows node group, then replace `EC2_LINUX` with `EC2_WINDOWS`.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-self-managed-ng-1 --type EC2_LINUX
     ```

     You can’t use the `--kubernetes-groups` option when you specify a type other than `STANDARD`. You can’t associate an access policy to this access entry, because its type is a value other than `STANDARD`.
   + Create an access entry for an IAM role that isn’t used for an Amazon EC2 self-managed node group and that you want Kubernetes to authorize access to your cluster for. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of your IAM role. Replace *Viewers* with the name of a group that you’ve specified in a Kubernetes `RoleBinding` or `ClusterRoleBinding` object on your cluster.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role --type STANDARD --username Viewers --kubernetes-groups Viewers
     ```
   + Create an access entry that allows an IAM user to authenticate to your cluster. This example is provided because this is possible, though IAM best practices recommend accessing your cluster using IAM *roles* that have short-term credentials, rather than IAM *users* that have long-term credentials. For more information, see [Require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp) in the *IAM User Guide*.

     ```
     aws eks create-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:user/my-user --type STANDARD --username my-user
     ```

     If you want this user to have more access to your cluster than the permissions in the Kubernetes API discovery roles, then you need to associate an access policy to the access entry, since the `--kubernetes-groups` option isn’t used. For more information, see [Associate access policies with access entries](access-policies.md) and [API discovery roles](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#discovery-roles) in the Kubernetes documentation.
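After creating an entry with any of the examples above, you can confirm that it exists and see how Amazon EKS recorded it. The cluster name, account ID, and role name are illustrative:

```
# List all access entries on the cluster
aws eks list-access-entries --cluster-name my-cluster

# Show the username, type, and Kubernetes groups of one entry
aws eks describe-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role
```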

# Update access entries
<a name="updating-access-entries"></a>

You can update an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-update-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that contains the access entry that you want to update.

1. Choose the **Access** tab.

1. Choose the access entry that you want to update.

1. Choose **Edit**.

1. For **Username**, you can change the existing value.

1. For **Groups**, you can remove existing group names or add new group names. If the following groups names exist, don’t remove them: **system:nodes** or **system:bootstrappers**. Removing these groups can cause your cluster to function improperly. If you don’t specify any group names and want to use Amazon EKS authorization, associate an [access policy](access-policies.md) in a later step.

1. For **Tags**, you can assign labels to the access entry. For example, to make it easier to find all resources with the same tag. You can also remove existing tags.

1. Choose **Save changes**.

1. If you want to associate an access policy to the entry, see [Associate access policies with access entries](access-policies.md).

## AWS CLI
<a name="access-update-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To update an access entry, run the following command. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *EKS-my-cluster-my-namespace-Viewers* with the name of an IAM role.

   ```
   aws eks update-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers --kubernetes-groups Viewers
   ```

   You can’t use the `--kubernetes-groups` option if the type of the access entry is a value other than `STANDARD`. You also can’t associate an access policy to an access entry with a type other than `STANDARD`.
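To verify the change, you can describe the entry and check the `kubernetesGroups` field in the response. The cluster name, account ID, and role name are illustrative:

```
# Confirm that the updated group list was applied
aws eks describe-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/EKS-my-cluster-my-namespace-Viewers \
    --query 'accessEntry.kubernetesGroups'
```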

# Delete access entries
<a name="deleting-access-entries"></a>

If you discover that you deleted an access entry in error, you can always recreate it. If the access entry that you’re deleting is associated to any access policies, the associations are automatically deleted. You don’t have to disassociate access policies from an access entry before deleting the access entry.

You can delete an access entry using the AWS Management Console or the AWS CLI.

## AWS Management Console
<a name="access-delete-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Choose the name of the cluster that you want to delete an access entry from.

1. Choose the **Access** tab.

1. In the **Access entries** list, choose the access entry that you want to delete.

1. Choose **Delete**.

1. In the confirmation dialog box, choose **Delete**.

## AWS CLI
<a name="access-delete-cli"></a>

1. Install the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

1. To delete an access entry, run the following command. Replace *my-cluster* with the name of your cluster, *111122223333* with your AWS account ID, and *my-role* with the name of the IAM role that you no longer want to have access to your cluster.

   ```
   aws eks delete-access-entry --cluster-name my-cluster --principal-arn arn:aws:iam::111122223333:role/my-role
   ```

# Set a custom username for EKS access entries
<a name="set-custom-username"></a>

When creating access entries for Amazon EKS, you can either use the automatically generated username or specify a custom username. This page explains both options and guides you through setting a custom username.

## Overview
<a name="_overview"></a>

The username in an access entry is used to identify the IAM principal in Kubernetes logs and audit trails. By default, Amazon EKS generates a username based on the IAM identity’s ARN, but you can specify a custom username if needed.

## Default username generation
<a name="_default_username_generation"></a>

If you don’t specify a value for username, Amazon EKS automatically generates a username based on the IAM Identity:
+  **For IAM Users**:
  + EKS sets the Kubernetes username to the ARN of the IAM User
  + Example:

    ```
    arn:aws:iam::<111122223333>:user/<my-user>
    ```
+  **For IAM Roles**:
  + EKS sets the Kubernetes username to the STS ARN of the role when it’s assumed, with `{{SessionName}}` appended. If the ARN of the role that you specified contained a path, Amazon EKS removes the path in the generated username.
  + Example:

    ```
    arn:aws:sts::<111122223333>:assumed-role/<my-role>/{{SessionName}}
    ```

Unless you have a specific reason for specifying your own username, we recommend that you don’t specify one and let Amazon EKS auto-generate it for you.

## Setting a custom username
<a name="_setting_a_custom_username"></a>

When creating an access entry, you can specify a custom username using the `--username` parameter:

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --username <custom-username>
```

### Requirements for custom usernames
<a name="_requirements_for_custom_usernames"></a>

If you specify a custom username:
+ The username can’t start with `system:`, `eks:`, `aws:`, `amazon:`, or `iam:`.
+ If the username is for an IAM role, we recommend that you add `{{SessionName}}` or `{{SessionNameRaw}}` to the end of your username.
  + If you add either `{{SessionName}}` or `{{SessionNameRaw}}` to your username, the username must include a colon *before* `{{SessionName}}` or `{{SessionNameRaw}}`.
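For example, a create command that satisfies both requirements might look like the following. The cluster name, account ID, role name, and the `admin:` prefix are illustrative:

```
# The colon before {{SessionName}} is required; quote the username so the
# shell doesn't interpret the braces
aws eks create-access-entry --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role \
    --type STANDARD --username 'admin:{{SessionName}}'
```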

# Create an access entry for an IAM role or user using an access policy and the AWS CLI
<a name="create-standard-access-entry-policy"></a>

Create Amazon EKS access entries that use AWS-managed EKS access policies to grant IAM identities standardized permissions for accessing and managing Kubernetes clusters.

## Overview
<a name="_overview"></a>

Access entries in Amazon EKS define how IAM identities (users and roles) can access and interact with your Kubernetes clusters. By creating access entries with EKS access policies, you can:
+ Grant specific IAM users or roles permission to access your EKS cluster
+ Control permissions using AWS-managed EKS access policies that provide standardized, predefined permission sets
+ Scope permissions to specific namespaces or cluster-wide
+ Simplify access management without modifying the `aws-auth` ConfigMap or creating Kubernetes RBAC resources
+ Use AWS-integrated approach to Kubernetes access control that covers common use cases while maintaining security best practices

This approach is recommended for most use cases because it provides AWS-managed, standardized permissions without requiring manual Kubernetes RBAC configuration. EKS access policies eliminate the need to manually configure Kubernetes RBAC resources and offer predefined permission sets that cover common use cases.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.

## Step 1: Define access entry
<a name="ap1-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine if you want the Amazon EKS access policy permissions to apply to only a specific Kubernetes namespace, or across the entire cluster.
   + If you want to limit the permissions to a specific namespace, make note of the namespace name.

1. Select the EKS access policy you want for the IAM identity. This policy gives in-cluster permissions. Note the ARN of the policy.
   + For a list of policies, see [available access policies](access-policy-permissions.md).

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   + AWS auto-generates this value based on the IAM identity. You can set a custom username instead; the username is visible in Kubernetes logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).
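If you don't have the IAM ARN at hand, you can look it up with the IAM CLI. The role and user names are illustrative:

```
# Look up the ARN of a role or user you plan to grant access to
aws iam get-role --role-name my-role --query 'Role.Arn' --output text
aws iam get-user --user-name my-user --query 'User.Arn' --output text
```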

## Step 2: Create access entry
<a name="ap1-s2"></a>

After planning the access entry, use the AWS CLI to create it.

The following example covers most use cases. [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).

You will attach the access policy in the next step.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD
```

## Step 3: Associate access policy
<a name="_step_3_associate_access_policy"></a>

The command differs based on whether you want the policy to be limited to a specified Kubernetes namespace.

You need the ARN of the access policy. Review the [available access policies](access-policy-permissions.md).

### Associate policy with cluster scope
<a name="_create_policy_without_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --access-scope type=cluster --policy-arn <access-policy-arn>
```

### Associate policy with namespace scope
<a name="_create_with_namespace_scope"></a>

```
aws eks associate-access-policy --cluster-name <cluster-name> --principal-arn <iam-identity-arn> \
    --access-scope type=namespace,namespaces=my-namespace1,my-namespace2 --policy-arn <access-policy-arn>
```
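After associating a policy either way, you can list the associations to confirm the policy ARN and scope. The cluster name, account ID, and role name are illustrative:

```
# Show which access policies are associated with the principal, and their scope
aws eks list-associated-access-policies --cluster-name my-cluster \
    --principal-arn arn:aws:iam::111122223333:role/my-role
```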

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 

# Create an access entry using Kubernetes groups with the AWS CLI
<a name="create-k8s-group-access-entry"></a>

Create Amazon EKS access entries that use Kubernetes groups for authorization and require manual RBAC configuration.

**Note**  
For most use cases, we recommend using EKS Access Policies instead of the Kubernetes groups approach described on this page. EKS Access Policies provide a simpler, more AWS-integrated way to manage access without requiring manual RBAC configuration. Use the Kubernetes groups approach only when you need more granular control than what EKS Access Policies offer.

## Overview
<a name="_overview"></a>

Access entries define how IAM identities (users and roles) access your Kubernetes clusters. The Kubernetes groups approach grants IAM users or roles permission to access your EKS cluster through standard Kubernetes RBAC groups. This method requires creating and managing Kubernetes RBAC resources (Roles, RoleBindings, ClusterRoles, and ClusterRoleBindings) and is recommended when you need highly customized permission sets or complex authorization requirements, or when you want to maintain consistent access control patterns across hybrid Kubernetes environments.

This topic doesn’t cover creating access entries for the IAM roles that Amazon EC2 instances use to join EKS clusters.

## Prerequisites
<a name="_prerequisites"></a>
+ The *authentication mode* of your cluster must be configured to enable *access entries*. For more information, see [Change authentication mode to use access entries](setting-up-access-entries.md).
+ Install and configure the AWS CLI, as described in [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) in the AWS Command Line Interface User Guide.
+ Familiarity with Kubernetes RBAC is recommended. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

## Step 1: Define access entry
<a name="k8s-group-s1"></a>

1. Find the ARN of the IAM identity, such as a user or role, that you want to grant permissions to.
   + Each IAM identity can have only one EKS access entry.

1. Determine which Kubernetes groups you want to associate with this IAM identity.
   + You will need to create or use existing Kubernetes `Role`/`ClusterRole` and `RoleBinding`/`ClusterRoleBinding` resources that reference these groups.

1. Determine if the auto-generated username is appropriate for the access entry, or if you need to manually specify a username.
   +  AWS auto-generates a username from the IAM identity, or you can set a custom one. The username appears in Kubernetes audit logs.
   + For more information, see [Set a custom username for EKS access entries](set-custom-username.md).

## Step 2: Create access entry with Kubernetes groups
<a name="k8s-group-s2"></a>

After planning the access entry, use the AWS CLI to create it with the appropriate Kubernetes groups.

```
aws eks create-access-entry --cluster-name <cluster-name> --principal-arn <iam-identity-arn> --type STANDARD --kubernetes-groups <groups>
```

Replace:
+  `<cluster-name>` with your EKS cluster name
+  `<iam-identity-arn>` with the ARN of the IAM user or role
+  `<groups>` with a comma-separated list of Kubernetes groups (for example, `developers,readers`). Avoid the `system:` prefix, which Kubernetes reserves for its own users and groups.

 [View the CLI reference for all configuration options](https://docs.aws.amazon.com/cli/latest/reference/eks/create-access-entry.html).
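
Because Kubernetes reserves the `system:` prefix for its own groups, a scripted workflow might sanity-check the group list before creating the entry. A minimal sketch (the group names are placeholders):

```
# Split a comma-separated --kubernetes-groups value and flag names that
# use the reserved "system:" prefix.
check_groups() {
  local IFS=,
  for g in $1; do
    case "$g" in
      system:*) echo "reserved: $g" ;;
      *)        echo "ok: $g" ;;
    esac
  done
}

check_groups "developers,system:masters"
# prints:
#   ok: developers
#   reserved: system:masters
```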

## Step 3: Configure Kubernetes RBAC
<a name="_step_3_configure_kubernetes_rbac"></a>

For the IAM principal to have access to Kubernetes objects on your cluster, you must create and manage Kubernetes role-based access control (RBAC) objects:

1. Create Kubernetes `Role` or `ClusterRole` objects that define the permissions.

1. Create Kubernetes `RoleBinding` or `ClusterRoleBinding` objects on your cluster that specify the group name as a `subject` for `kind: Group`.

For detailed information about configuring groups and permissions in Kubernetes, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
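
As an illustration of these two steps, the following sketch emits a minimal `ClusterRoleBinding` that binds the built-in `view` `ClusterRole` to a hypothetical `developers` group; the same group name would appear in the access entry’s `--kubernetes-groups`. The names are assumptions, not values from your cluster.

```
# Print a minimal ClusterRoleBinding for the hypothetical "developers"
# group. Pipe the output to: kubectl apply -f -
manifest=$(cat <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-view
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
EOF
)
echo "$manifest"
```

Applying this manifest would grant read-only access cluster-wide; a namespaced `RoleBinding` follows the same shape with a `Role` or `ClusterRole` in `roleRef`.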

## Next steps
<a name="_next_steps"></a>
+  [Create a kubeconfig so you can use kubectl with an IAM identity](create-kubeconfig.md) 

# Grant IAM users access to Kubernetes with a ConfigMap
<a name="auth-configmap"></a>

**Important**  
The `aws-auth ConfigMap` is deprecated. For the recommended method to manage access to Kubernetes APIs, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).

Access to your cluster using [IAM principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) is enabled by the [AWS IAM Authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator#readme), which runs on the Amazon EKS control plane. The authenticator gets its configuration information from the `aws-auth` `ConfigMap`. For all `aws-auth` `ConfigMap` settings, see [Full Configuration Format](https://github.com/kubernetes-sigs/aws-iam-authenticator#full-configuration-format) on GitHub.

## Add IAM principals to your Amazon EKS cluster
<a name="aws-auth-users"></a>

When you create an Amazon EKS cluster, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that creates the cluster is automatically granted `system:masters` permissions in the cluster’s role-based access control (RBAC) configuration in the Amazon EKS control plane. This principal doesn’t appear in any visible configuration, so make sure to keep track of which principal originally created the cluster. To grant additional IAM principals the ability to interact with your cluster, edit the `aws-auth ConfigMap` within Kubernetes and create a Kubernetes `rolebinding` or `clusterrolebinding` with the name of a `group` that you specify in the `aws-auth ConfigMap`.

**Note**  
For more information about Kubernetes role-based access control (RBAC) configuration, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

1. Determine which credentials `kubectl` is using to access your cluster. On your computer, you can see which credentials `kubectl` uses with the following command. Replace *~/.kube/config* with the path to your `kubeconfig` file if you don’t use the default path.

   ```
   cat ~/.kube/config
   ```

   An example output is as follows.

   ```
   [...]
   contexts:
   - context:
       cluster: my-cluster.region-code.eksctl.io
       user: admin@my-cluster.region-code.eksctl.io
     name: admin@my-cluster.region-code.eksctl.io
   current-context: admin@my-cluster.region-code.eksctl.io
   [...]
   ```

   In the previous example output, the credentials for a user named *admin* are configured for a cluster named *my-cluster*. If this is the user that created the cluster, then it already has access to your cluster. If it’s not the user that created the cluster, then you need to complete the remaining steps to enable cluster access for other IAM principals. [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. You can see which other principals currently have access to your cluster with the following command:

   ```
   kubectl describe -n kube-system configmap/aws-auth
   ```

   An example output is as follows.

   ```
   Name:         aws-auth
   Namespace:    kube-system
   Labels:       <none>
   Annotations:  <none>
   
   Data
   ====
   mapRoles:
   ----
   - groups:
     - system:bootstrappers
     - system:nodes
     rolearn: arn:aws:iam::111122223333:role/my-node-role
     username: system:node:{{EC2PrivateDNSName}}
   
   
   BinaryData
   ====
   
   Events:  <none>
   ```

   The previous example is a default `aws-auth` `ConfigMap`. Only the node instance role has access to the cluster.

1. Make sure that you have existing Kubernetes `roles` and `rolebindings` or `clusterroles` and `clusterrolebindings` that you can map IAM principals to. For more information about these resources, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.

   1. View your existing Kubernetes `roles` or `clusterroles`. `Roles` are scoped to a `namespace`, but `clusterroles` are scoped to the cluster.

      ```
      kubectl get roles -A
      ```

      ```
      kubectl get clusterroles
      ```

   1. View the details of any `role` or `clusterrole` returned in the previous output and confirm that it has the permissions (`rules`) that you want your IAM principals to have in your cluster.

      Replace *role-name* with a `role` name returned in the output from the previous command. Replace *kube-system* with the namespace of the `role`.

      ```
      kubectl describe role role-name -n kube-system
      ```

      Replace *cluster-role-name* with a `clusterrole` name returned in the output from the previous command.

      ```
      kubectl describe clusterrole cluster-role-name
      ```

   1. View your existing Kubernetes `rolebindings` or `clusterrolebindings`. `Rolebindings` are scoped to a `namespace`, but `clusterrolebindings` are scoped to the cluster.

      ```
      kubectl get rolebindings -A
      ```

      ```
      kubectl get clusterrolebindings
      ```

   1. View the details of any `rolebinding` or `clusterrolebinding` and confirm that it has a `role` or `clusterrole` from the previous step listed as a `roleRef` and a group name listed for `subjects`.

      Replace *role-binding-name* with a `rolebinding` name returned in the output from the previous command. Replace *kube-system* with the `namespace` of the `rolebinding`.

      ```
      kubectl get rolebinding role-binding-name -n kube-system -o yaml
      ```

      An example output is as follows.

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: eks-console-dashboard-restricted-access-role-binding
        namespace: default
      subjects:
      - kind: Group
        name: eks-console-dashboard-restricted-access-group
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: eks-console-dashboard-restricted-access-role
        apiGroup: rbac.authorization.k8s.io
      ```

      Replace *cluster-role-binding-name* with a `clusterrolebinding` name returned in the output from the previous command.

      ```
      kubectl get clusterrolebinding cluster-role-binding-name -o yaml
      ```

      An example output is as follows.

      ```
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: eks-console-dashboard-full-access-binding
      subjects:
      - kind: Group
        name: eks-console-dashboard-full-access-group
        apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: ClusterRole
        name: eks-console-dashboard-full-access-clusterrole
        apiGroup: rbac.authorization.k8s.io
      ```

1. Edit the `aws-auth` `ConfigMap`. You can use a tool such as `eksctl` to update the `ConfigMap` or you can update it manually by editing it.
**Important**  
We recommend using `eksctl`, or another tool, to edit the `ConfigMap`. For information about other tools you can use, see [Use tools to make changes to the aws-auth ConfigMap](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#use-tools-to-make-changes-to-the-aws-auth-configmap) in the Amazon EKS best practices guides. An improperly formatted `aws-auth` `ConfigMap` can cause you to lose access to your cluster.
   + View steps to [edit the ConfigMap with eksctl](#configmap-eksctl).
   + View steps to [edit the ConfigMap manually](#configmap-manual).

### Edit the ConfigMap with eksctl
<a name="configmap-eksctl"></a>

1. You need version `0.215.0` or later of the `eksctl` command line tool installed on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. View the current mappings in the `ConfigMap`. Replace *my-cluster* with the name of your cluster. Replace *region-code* with the AWS Region that your cluster is in.

   ```
   eksctl get iamidentitymapping --cluster my-cluster --region=region-code
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                          ACCOUNT
   arn:aws:iam::111122223333:role/eksctl-my-cluster-my-nodegroup-NodeInstanceRole-1XLS7754U3ZPA    system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   ```

1. Add a mapping for a role. Replace *my-role* with your role name. Replace *eks-console-dashboard-full-access-group* with the name of the group specified in your Kubernetes `RoleBinding` or `ClusterRoleBinding` object. Replace *111122223333* with your account ID. You can replace *admin* with any name you choose.

   ```
   eksctl create iamidentitymapping --cluster my-cluster --region=region-code \
       --arn arn:aws:iam::111122223333:role/my-role --username admin --group eks-console-dashboard-full-access-group \
       --no-duplicate-arns
   ```
**Important**  
The role ARN can’t include a path such as `role/my-team/developers/my-role`. The format of the ARN must be `arn:aws:iam::111122223333:role/my-role`. In this example, `my-team/developers/` needs to be removed.

   An example output is as follows.

   ```
   [...]
   2022-05-09 14:51:20 [ℹ]  adding identity "arn:aws:iam::111122223333:role/my-role" to auth ConfigMap
   ```
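
   The path restriction described in the preceding important note can also be handled in scripts. The following is a minimal sketch (pure shell string handling, no AWS calls; the ARNs are placeholders) that strips any path from a role ARN:

   ```
   # Strip any path from a role ARN so it matches the accepted form
   # arn:aws:iam::<account>:role/<role-name>.
   strip_role_path() {
     local arn="$1"
     local prefix="${arn%%:role/*}:role/"    # up to and including ":role/"
     local rest="${arn#*:role/}"             # path plus role name
     printf '%s%s\n' "$prefix" "${rest##*/}" # keep only the final segment
   }

   strip_role_path "arn:aws:iam::111122223333:role/my-team/developers/my-role"
   # prints: arn:aws:iam::111122223333:role/my-role
   ```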

1. Add a mapping for a user. [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. Replace *my-user* in the ARN with your IAM user name. Replace *eks-console-dashboard-restricted-access-group* with the name of the group specified in your Kubernetes `RoleBinding` or `ClusterRoleBinding` object. Replace *111122223333* with your account ID. The `--username` value can be any name you choose.

   ```
   eksctl create iamidentitymapping --cluster my-cluster --region=region-code \
       --arn arn:aws:iam::111122223333:user/my-user --username my-user --group eks-console-dashboard-restricted-access-group \
       --no-duplicate-arns
   ```

   An example output is as follows.

   ```
   [...]
   2022-05-09 14:53:48 [ℹ]  adding identity "arn:aws:iam::111122223333:user/my-user" to auth ConfigMap
   ```

1. View the mappings in the `ConfigMap` again.

   ```
   eksctl get iamidentitymapping --cluster my-cluster --region=region-code
   ```

   An example output is as follows.

   ```
   ARN                                                                                             USERNAME                                GROUPS                                  ACCOUNT
   arn:aws:iam::111122223333:role/eksctl-my-cluster-my-nodegroup-NodeInstanceRole-1XLS7754U3ZPA    system:node:{{EC2PrivateDNSName}}       system:bootstrappers,system:nodes
   arn:aws:iam::111122223333:role/my-role                                                          admin                                   eks-console-dashboard-full-access-group
   arn:aws:iam::111122223333:user/my-user                                                          my-user                                 eks-console-dashboard-restricted-access-group
   ```

### Edit the ConfigMap manually
<a name="configmap-manual"></a>

1. Open the `ConfigMap` for editing.

   ```
   kubectl edit -n kube-system configmap/aws-auth
   ```
**Note**  
If you receive an error stating "`Error from server (NotFound): configmaps "aws-auth" not found`", then use the procedure in [Apply the aws-auth ConfigMap to your cluster](#aws-auth-configmap) to apply the stock `ConfigMap`.

1. Add your IAM principals to the `ConfigMap`. An IAM group isn’t an IAM principal, so it can’t be added to the `ConfigMap`.
   +  **To add an IAM role (for example, for [federated users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html)):** Add the role details to the `mapRoles` section of the `ConfigMap`, under `data`. Add this section if it does not already exist in the file. Each entry supports the following parameters:
     +  **rolearn**: The ARN of the IAM role to add. This value can’t include a path. For example, you can’t specify an ARN such as `arn:aws:iam::111122223333:role/my-team/developers/role-name`. The ARN needs to be `arn:aws:iam::111122223333:role/role-name` instead.
     +  **username**: The user name within Kubernetes to map to the IAM role.
     +  **groups**: The group or list of Kubernetes groups to map the role to. The group can be a default group, or a group specified in a `clusterrolebinding` or `rolebinding`. For more information, see [Default roles and role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) in the Kubernetes documentation.
   +  **To add an IAM user:** [IAM best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) recommend that you grant permissions to roles instead of users. Add the user details to the `mapUsers` section of the `ConfigMap`, under `data`. Add this section if it does not already exist in the file. Each entry supports the following parameters:
     +  **userarn**: The ARN of the IAM user to add.
     +  **username**: The user name within Kubernetes to map to the IAM user.
     +  **groups**: The group, or list of Kubernetes groups to map the user to. The group can be a default group, or a group specified in a `clusterrolebinding` or `rolebinding`. For more information, see [Default roles and role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings) in the Kubernetes documentation.

1. For example, the following YAML block contains:
   + A `mapRoles` section that maps the IAM node instance to Kubernetes groups so that nodes can register themselves with the cluster and the `my-console-viewer-role` IAM role that is mapped to a Kubernetes group that can view all Kubernetes resources for all clusters. For a list of the IAM and Kubernetes group permissions required for the `my-console-viewer-role` IAM role, see [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).
   + A `mapUsers` section that maps the `admin` IAM user from the default AWS account to the `system:masters` Kubernetes group and the `my-user` user from a different AWS account that is mapped to a Kubernetes group that can view Kubernetes resources for a specific namespace. For a list of the IAM and Kubernetes group permissions required for the `my-user` IAM user, see [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions).

     Add or remove lines as necessary and replace all example values with your own values.

     ```
     # Please edit the object below. Lines beginning with a '#' will be ignored,
     # and an empty file will abort the edit. If an error occurs while saving this file will be
     # reopened with the relevant failures.
     #
     apiVersion: v1
     data:
       mapRoles: |
         - groups:
           - system:bootstrappers
           - system:nodes
           rolearn: arn:aws:iam::111122223333:role/my-role
           username: system:node:{{EC2PrivateDNSName}}
         - groups:
           - eks-console-dashboard-full-access-group
           rolearn: arn:aws:iam::111122223333:role/my-console-viewer-role
           username: my-console-viewer-role
       mapUsers: |
         - groups:
           - system:masters
           userarn: arn:aws:iam::111122223333:user/admin
           username: admin
         - groups:
           - eks-console-dashboard-restricted-access-group
           userarn: arn:aws:iam::444455556666:user/my-user
           username: my-user
     ```

1. Save the file and exit your text editor.

## Apply the `aws-auth` `ConfigMap` to your cluster
<a name="aws-auth-configmap"></a>

The `aws-auth` `ConfigMap` is automatically created and applied to your cluster when you create a managed node group or when you create a node group using `eksctl`. It is initially created to allow nodes to join your cluster, but you also use this `ConfigMap` to add role-based access control (RBAC) access to IAM principals. If you’ve launched self-managed nodes and haven’t applied the `aws-auth` `ConfigMap` to your cluster, you can do so with the following procedure.

1. Check to see if you’ve already applied the `aws-auth` `ConfigMap`.

   ```
   kubectl describe configmap -n kube-system aws-auth
   ```

   If you receive an error stating " `Error from server (NotFound): configmaps "aws-auth" not found` ", then proceed with the following steps to apply the stock `ConfigMap`.

1. Download, edit, and apply the AWS authenticator configuration map.

   1. Download the configuration map.

      ```
      curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
      ```

   1. In the `aws-auth-cm.yaml` file, set the `rolearn` to the Amazon Resource Name (ARN) of the IAM role associated with your nodes. You can do this with a text editor, or by replacing *my-node-instance-role* and running the following command:

      ```
      sed -i.bak -e 's|<ARN of instance role (not instance profile)>|my-node-instance-role|' aws-auth-cm.yaml
      ```

      Don’t modify any other lines in this file.
**Important**  
The role ARN can’t include a path such as `role/my-team/developers/my-role`. The format of the ARN must be `arn:aws:iam::111122223333:role/my-role`. In this example, `my-team/developers/` needs to be removed.

      You can inspect the AWS CloudFormation stack outputs for your node groups and look for the following values:
      +  **InstanceRoleARN** – For node groups that were created with `eksctl` 
      +  **NodeInstanceRole** – For node groups that were created with Amazon EKS vended AWS CloudFormation templates in the AWS Management Console 

   1. Apply the configuration. This command may take a few minutes to finish.

      ```
      kubectl apply -f aws-auth-cm.yaml
      ```
**Note**  
If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

   Enter `Ctrl`+`C` to return to a shell prompt.

# Grant users access to Kubernetes with an external OIDC provider
<a name="authenticate-oidc-identity-provider"></a>

Amazon EKS supports using OpenID Connect (OIDC) identity providers as a method to authenticate users to your cluster. OIDC identity providers can be used with, or as an alternative to AWS Identity and Access Management (IAM). For more information about using IAM, see [Grant IAM users and roles access to Kubernetes APIs](grant-k8s-access.md). After configuring authentication to your cluster, you can create Kubernetes `roles` and `clusterroles` to assign permissions to the roles, and then bind the roles to the identities using Kubernetes `rolebindings` and `clusterrolebindings`. For more information, see [Using RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in the Kubernetes documentation.
+ You can associate one OIDC identity provider with your cluster.
+ Kubernetes doesn’t provide an OIDC identity provider. You can use an existing public OIDC identity provider, or you can run your own identity provider. For a list of certified providers, see [OpenID Certification](https://openid.net/certification/) on the OpenID site.
+ The issuer URL of the OIDC identity provider must be publicly accessible, so that Amazon EKS can discover the signing keys. Amazon EKS doesn’t support OIDC identity providers with self-signed certificates.
+ You can’t disable IAM authentication to your cluster, because it’s still required for joining nodes to a cluster.
+ An Amazon EKS cluster must still be created by an AWS [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), rather than an OIDC identity provider user. This is because the cluster creator interacts with the Amazon EKS APIs, rather than the Kubernetes APIs.
+ OIDC identity provider-authenticated users are listed in the cluster’s audit log if CloudWatch logs are turned on for the control plane. For more information, see [Enable or disable control plane logs](control-plane-logs.md#enabling-control-plane-log-export).
+ You can’t sign in to the AWS Management Console with an account from an OIDC provider. You can only [View Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md) by signing into the AWS Management Console with an AWS Identity and Access Management account.

## Associate an OIDC identity provider
<a name="associate-oidc-identity-provider"></a>

Before you can associate an OIDC identity provider with your cluster, you need the following information from your provider:

 **Issuer URL**   
The URL of the OIDC identity provider that allows the API server to discover public signing keys for verifying tokens. The URL must begin with `https://` and should correspond to the `iss` claim in the provider’s OIDC ID tokens. In accordance with the OIDC standard, path components are allowed but query parameters are not. Typically the URL consists of only a host name, like `https://server.example.org` or `https://example.com`. This URL should point to the level below `.well-known/openid-configuration` and must be publicly accessible over the internet.

 **Client ID (also known as *audience*)**   
The ID for the client application that makes authentication requests to the OIDC identity provider.
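
Before associating a provider, you might sanity-check the issuer URL offline against the rules above: it must begin with `https://`, and per the OIDC standard it must not carry a query string or fragment. A minimal sketch; it doesn’t perform OIDC discovery, which would require fetching `<issuer>/.well-known/openid-configuration` over the internet.

```
# Offline sanity check for an OIDC issuer URL (string matching only).
check_issuer_url() {
  case "$1" in
    https://*\?*|https://*'#'*) echo "invalid: query or fragment not allowed" ;;
    https://*)                  echo "ok" ;;
    *)                          echo "invalid: must begin with https://" ;;
  esac
}

check_issuer_url "https://server.example.org"   # prints: ok
```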

You can associate an identity provider using `eksctl` or the AWS Management Console.

### Associate an identity provider using eksctl
<a name="identity-associate-eksctl"></a>

1. Create a file named `associate-identity-provider.yaml` with the following contents. Replace the example values with your own. The values in the `identityProviders` section are obtained from your OIDC identity provider. Values are only required for the `name`, `type`, `issuerUrl`, and `clientId` settings under `identityProviders`.

   ```
   ---
   apiVersion: eksctl.io/v1alpha5
   kind: ClusterConfig
   
   metadata:
     name: my-cluster
     region: your-region-code
   
   identityProviders:
     - name: my-provider
       type: oidc
       issuerUrl: https://example.com
       clientId: kubernetes
       usernameClaim: email
       usernamePrefix: my-username-prefix
       groupsClaim: my-claim
       groupsPrefix: my-groups-prefix
       requiredClaims:
         string: string
       tags:
         env: dev
   ```
**Important**  
Don’t specify `system:`, or any portion of that string, for `groupsPrefix` or `usernamePrefix`.

1. Create the provider.

   ```
   eksctl associate identityprovider -f associate-identity-provider.yaml
   ```

1. To use `kubectl` to work with your cluster and OIDC identity provider, see [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) in the Kubernetes documentation.

### Associate an identity provider using the AWS Console
<a name="identity-associate-console"></a>

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Select your cluster, and then select the **Access** tab.

1. In the **OIDC Identity Providers** section, select **Associate Identity Provider**.

1. On the **Associate OIDC Identity Provider** page, enter or select the following options, and then select **Associate**.
   + For **Name**, enter a unique name for the provider.
   + For **Issuer URL**, enter the URL for your provider. This URL must be accessible over the internet.
   + For **Client ID**, enter the OIDC identity provider’s client ID (also known as **audience**).
   + For **Username claim**, enter the claim to use as the username.
   + For **Groups claim**, enter the claim to use as the user’s group.
   + (Optional) Select **Advanced options**, enter or select the following information.
     +  **Username prefix** – Enter a prefix to prepend to username claims. The prefix is prepended to username claims to prevent clashes with existing names. If you do not provide a value, and the username is a value other than `email`, the prefix defaults to the value for **Issuer URL**. You can use the value `-` to disable all prefixing. Don’t specify `system:` or any portion of that string.
     +  **Groups prefix** – Enter a prefix to prepend to groups claims. The prefix is prepended to group claims to prevent clashes with existing names (such as `system:` groups). For example, the value `oidc:` creates group names like `oidc:engineering` and `oidc:infra`. Don’t specify `system:` or any portion of that string.
     +  **Required claims** – Select **Add claim** and enter one or more key-value pairs that describe required claims in the client ID token. If set, each claim is verified to be present in the ID token with a matching value.

1. To use `kubectl` to work with your cluster and OIDC identity provider, see [Using kubectl](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#using-kubectl) in the Kubernetes documentation.

## Example IAM policy
<a name="oidc-identity-provider-iam-policy"></a>

If you want to prevent an OIDC identity provider from being associated with a cluster, create and associate the following IAM policy to the IAM accounts of your Amazon EKS administrators. For more information, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) and [Adding IAM identity permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#add-policies-console) in the *IAM User Guide* and [Actions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelasticcontainerserviceforkubernetes.html) in the Service Authorization Reference.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "denyOIDC",
            "Effect": "Deny",
            "Action": [
                "eks:AssociateIdentityProviderConfig"
            ],
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/*"

        },
        {
            "Sid": "eksAdmin",
            "Effect": "Allow",
            "Action": [
                "eks:*"
            ],
            "Resource": "*"
        }
    ]
}
```

The following example policy allows OIDC identity provider association if the `clientID` is `kubernetes` and the `issuerUrl` is `https://cognito-idp.us-west-2.amazonaws.com/*`.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCognitoOnly",
            "Effect": "Deny",
            "Action": "eks:AssociateIdentityProviderConfig",
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-instance",
            "Condition": {
                "StringNotLikeIfExists": {
                    "eks:issuerUrl": "https://cognito-idp.us-west-2.amazonaws.com/*"
                }
            }
        },
        {
            "Sid": "DenyOtherClients",
            "Effect": "Deny",
            "Action": "eks:AssociateIdentityProviderConfig",
            "Resource": "arn:aws:eks:us-west-2:111122223333:cluster/my-instance",
            "Condition": {
                "StringNotEquals": {
                    "eks:clientId": "kubernetes"
                }
            }
        },
        {
            "Sid": "AllowOthers",
            "Effect": "Allow",
            "Action": "eks:*",
            "Resource": "*"
        }
    ]
}
```

# Disassociate an OIDC identity provider from your cluster
<a name="disassociate-oidc-identity-provider"></a>

If you disassociate an OIDC identity provider from your cluster, users included in the provider can no longer access the cluster. However, you can still access the cluster with [IAM principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal).

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. Select your cluster, and then select the **Access** tab.

1. In the **OIDC Identity Providers** section, select **Disassociate**, enter the identity provider name, and then select **Disassociate**.