


# Learn about VPC CNI modes and configuration

The Amazon VPC CNI plugin for Kubernetes provides networking for Pods. Use the following table to learn more about the available networking features.


| Networking feature | Learn more | 
| --- | --- | 
|  Assign IPv6 addresses to your cluster, Pods, and services  |   [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md)   | 
|  Use IPv4 Source Network Address Translation for Pods  |   [Enable outbound internet access for Pods](external-snat.md)   | 
|  Restrict network traffic to and from your Pods  |   [Restrict Pod network traffic with Kubernetes network policies](cni-network-policy-configure.md)   | 
|  Customize the secondary network interface in nodes  |   [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md)   | 
|  Increase IP addresses for your node  |   [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md)   | 
|  Use security groups for Pod network traffic  |   [Assign security groups to individual Pods](security-groups-for-pods.md)   | 
|  Use multiple network interfaces for Pods  |   [Attach multiple network interfaces to Pods](pod-multiple-network-interfaces.md)   | 

# Learn about IPv6 addresses to clusters, Pods, and services

 **Applies to**: Pods with Amazon EC2 instances and Fargate Pods

By default, Kubernetes assigns `IPv4` addresses to your Pods and services. Instead of assigning `IPv4` addresses to your Pods and services, you can configure your cluster to assign `IPv6` addresses to them. Amazon EKS doesn’t support dual-stacked Pods or services, even though Kubernetes does. As a result, you can’t assign both `IPv4` and `IPv6` addresses to your Pods and services.

You select which IP family you want to use for your cluster when you create it. You can’t change the family after you create the cluster.

For a tutorial to deploy an Amazon EKS `IPv6` cluster, see [Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes](deploy-ipv6-cluster.md).

The following are considerations for using the feature:

## `IPv6` feature support

+  **No Windows support**: Windows Pods and services aren’t supported.
+  **Nitro-based EC2 nodes required**: You can only use `IPv6` with AWS Nitro-based Amazon EC2 or Fargate nodes.
+  **EC2 and Fargate nodes supported**: You can use `IPv6` with [Assign security groups to individual Pods](security-groups-for-pods.md) with Amazon EC2 nodes and Fargate nodes.
+  **Outposts not supported**: You can’t use `IPv6` with [Deploy Amazon EKS on-premises with AWS Outposts](eks-outposts.md).
+  **FSx for Lustre not supported**: [Use high-performance app storage with Amazon FSx for Lustre](fsx-csi.md) isn’t supported with the `IPv6` family.
+  **Custom networking not supported**: If you previously used [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md) to help alleviate IP address exhaustion, you can use `IPv6` instead. You can’t use custom networking with `IPv6`. If you use custom networking for network isolation, then you might need to continue to use custom networking and the `IPv4` family for your clusters.

## IP address assignments

+  **Kubernetes services**: Services are assigned only `IPv6` addresses. They aren’t assigned `IPv4` addresses.
+  **Pods**: Pods are assigned an `IPv6` address and a host-local `IPv4` address. The host-local `IPv4` address is assigned by a host-local CNI plugin chained with the VPC CNI, and it isn’t reported to the Kubernetes control plane. It’s used only when a Pod needs to communicate with external `IPv4` resources in another Amazon VPC or on the internet. The VPC CNI source network address translates (SNATs) the host-local `IPv4` address to the primary `IPv4` address of the primary ENI of the worker node.
+  **External `IPv4` communication**: From the cluster’s perspective, Pods and services receive only `IPv6` addresses, not `IPv4` addresses. When Pods need to communicate with external `IPv4` endpoints, they use NAT on the node itself. This built-in NAT capability eliminates the need for [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html#nat-gateway-nat64-dns64). For traffic requiring public internet access, the Pod’s traffic is source network address translated to a public IP address.
+  **Routing addresses**: When a Pod communicates outside the VPC, its original `IPv6` address is preserved (not translated to the node’s `IPv6` address). This traffic is routed directly through an internet gateway or egress-only internet gateway.
+  **Nodes**: All nodes are assigned an `IPv4` and `IPv6` address.
+  **Fargate Pods**: Each Fargate Pod receives an `IPv6` address from the CIDR that’s specified for the subnet that it’s deployed in. The underlying hardware unit that runs Fargate Pods gets a unique `IPv4` and `IPv6` address from the CIDRs that are assigned to the subnet that the hardware unit is deployed in.

## How to use `IPv6` with EKS

+  **Create new cluster**: You must create a new cluster and specify that you want to use the `IPv6` family for that cluster. You can’t enable the `IPv6` family for a cluster that you updated from a previous version. For instructions on how to create a new cluster, see [Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes](deploy-ipv6-cluster.md).
+  **Use recent VPC CNI**: Deploy Amazon VPC CNI version `1.10.1` or later. This version or later is deployed by default. After you deploy the add-on, you can’t downgrade your Amazon VPC CNI add-on to a version lower than `1.10.1` without first removing all nodes in all node groups in your cluster.
+  **Configure VPC CNI for `IPv6`**: If you use Amazon EC2 nodes, you must configure the Amazon VPC CNI add-on with IP prefix delegation and `IPv6`. If you choose the `IPv6` family when creating your cluster, version `1.10.1` or later of the add-on defaults to this configuration. This is the case for both the self-managed and Amazon EKS add-on types. For more information about IP prefix delegation, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md).
+  **Configure `IPv4` and `IPv6` addresses**: The VPC and subnets that you specify when you create a cluster must have both an `IPv6` CIDR block and an `IPv4` CIDR block assigned to them. Even if you only want to use `IPv6`, a VPC still requires an `IPv4` CIDR block to function. For more information, see [Associate an IPv6 CIDR block with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#vpc-associate-ipv6-cidr) in the Amazon VPC User Guide.
+  **Auto-assign IPv6 addresses to nodes:** When you create your nodes, you must specify subnets that are configured to auto-assign `IPv6` addresses. Otherwise, you can’t deploy your nodes. By default, this configuration is disabled. For more information, see [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the Amazon VPC User Guide.
+  **Set route tables to use `IPv6`**: The route tables that are assigned to your subnets must have routes for `IPv6` addresses. For more information, see [Migrate to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide.
+  **Set security groups for `IPv6`**: Your security groups must allow `IPv6` addresses. For more information, see [Migrate to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the Amazon VPC User Guide.
+  **Set up load balancer**: Use version `2.3.1` or later of the AWS Load Balancer Controller to load balance HTTP applications with [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) or network traffic with [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) to `IPv6` Pods. Either load balancer must be in IP mode, not instance mode. For more information, see [Route internet traffic with AWS Load Balancer Controller](aws-load-balancer-controller.md).
+  **Add `IPv6` IAM policy**: You must attach an `IPv6` IAM policy to your node IAM or CNI IAM role. Between the two, we recommend that you attach it to a CNI IAM role. For more information, see [Create IAM policy for clusters that use the `IPv6` family](cni-iam-role.md#cni-iam-role-create-ipv6-policy) and [Step 1: Create the Amazon VPC CNI plugin for Kubernetes IAM role](cni-iam-role.md#cni-iam-role-create-role).
+  **Evaluate all components**: Perform a thorough evaluation of your applications, Amazon EKS add-ons, and AWS services that you integrate with before deploying `IPv6` clusters. This is to ensure that everything works as expected with `IPv6`.

# Deploying an Amazon EKS `IPv6` cluster and managed Amazon Linux nodes

In this tutorial, you deploy an `IPv6` Amazon VPC, an Amazon EKS cluster with the `IPv6` family, and a managed node group with Amazon EC2 Amazon Linux nodes. You can’t deploy Amazon EC2 Windows nodes in an `IPv6` cluster. You can also deploy Fargate nodes to your cluster, though those instructions aren’t provided in this topic for simplicity.

## Prerequisites


Complete the following before you start the tutorial:

Install and configure the following tools and resources that you need to create and manage an Amazon EKS cluster.
+ We recommend that you familiarize yourself with all settings and deploy a cluster with the settings that meet your requirements. For more information, see [Create an Amazon EKS cluster](create-cluster.md), [Simplify node lifecycle with managed node groups](managed-node-groups.md), and the [Considerations](cni-ipv6.md) for this topic. Some settings can only be enabled when you create your cluster.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ The IAM security principal that you’re using must have permissions to work with Amazon EKS IAM roles, service linked roles, AWS CloudFormation, a VPC, and related resources. For more information, see [Actions](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html) and [Using service-linked roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html) in the IAM User Guide.
+ If you use eksctl, install version `0.215.0` or later on your computer. To install or update it, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.

You can use eksctl or the AWS CLI to deploy an `IPv6` cluster.

## Deploy an IPv6 cluster with eksctl


1. Create the `ipv6-cluster.yaml` file. Copy the command that follows to your device. Make the following modifications to the command as needed and then run the modified command:
   + Replace *my-cluster* with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   + Replace *region-code* with any AWS Region that is supported by Amazon EKS. For a list of AWS Regions, see [Amazon EKS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the AWS General Reference guide.
   + Replace the value for `version` with the Kubernetes version of your cluster. For more information, see [Amazon EKS supported versions](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html).
   + Replace *my-nodegroup* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   + Replace *t3.medium* with any [AWS Nitro System instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances).

     ```
     cat >ipv6-cluster.yaml <<EOF
     ---
     apiVersion: eksctl.io/v1alpha5
     kind: ClusterConfig
     
     metadata:
       name: my-cluster
       region: region-code
       version: "X.XX"
     
     kubernetesNetworkConfig:
       ipFamily: IPv6
     
     addons:
       - name: vpc-cni
         version: latest
       - name: coredns
         version: latest
       - name: kube-proxy
         version: latest
     
     iam:
       withOIDC: true
     
     managedNodeGroups:
       - name: my-nodegroup
         instanceType: t3.medium
     EOF
     ```

1. Create your cluster.

   ```
   eksctl create cluster -f ipv6-cluster.yaml
   ```

   Cluster creation takes several minutes. Don’t proceed until you see the last line of output, which looks similar to the following output.

   ```
   [...]
   [✓]  EKS cluster "my-cluster" in "region-code" region is ready
   ```

1. Confirm that default Pods are assigned `IPv6` addresses.

   ```
   kubectl get pods -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME                       READY   STATUS    RESTARTS   AGE     IP                                       NODE                                            NOMINATED NODE   READINESS GATES
   aws-node-rslts             1/1     Running   1          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   aws-node-t74jh             1/1     Running   0          5m32s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-cw7w2   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::                ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-tx6n8   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::1               ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   kube-proxy-btpbk           1/1     Running   0          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   kube-proxy-jjk2g           1/1     Running   0          5m33s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   ```
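   Rather than checking the `IP` column by eye, you can count addresses that contain a colon, which every `IPv6` address does and no `IPv4` address does. The pipeline below is a sketch that runs against two sample addresses; in your cluster, you would instead pipe in the output of `kubectl get pods -n kube-system -o jsonpath='{.items[*].status.podIP}' | tr ' ' '\n'`.

   ```shell
   # Sample addresses stand in for the Pod IPs reported by kubectl.
   # grep -c counts the lines that contain ':', that is, the IPv6 addresses.
   printf '%s\n' 2600:1f13:b66:8200:11a5:ade0:c590:6ac8 2600:1f13:b66:8203:34e5:: |
     grep -c ':'
   ```

   Here both sample addresses are `IPv6`, so the count printed equals the number of input lines.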

1. Confirm that default services are assigned `IPv6` addresses.

   ```
   kubectl get services -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME       TYPE        CLUSTER-IP          EXTERNAL-IP   PORT(S)         AGE   SELECTOR
   kube-dns   ClusterIP   fd30:3087:b6c2::a   <none>        53/UDP,53/TCP   57m   k8s-app=kube-dns
   ```
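   The `fd` prefix on the `CLUSTER-IP` above marks a unique local IPv6 address (ULA), the `fd00::/8` range used for addresses that aren’t routable on the public internet. A quick shell check, sketched here with the example address hard-coded:

   ```shell
   # Classify the example kube-dns ClusterIP; addresses in fd00::/8 are ULAs.
   ip=fd30:3087:b6c2::a
   case $ip in
     fd*) echo "unique local IPv6 address" ;;
     *)   echo "not a ULA" ;;
   esac
   ```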

1. (Optional) [Deploy a sample application](sample-deployment.md) or deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) and a sample application to load balance HTTP applications with [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) or network traffic with [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) to `IPv6` Pods.

1. After you’ve finished with the cluster and nodes that you created for this tutorial, you should clean up the resources that you created with the following command.

   ```
   eksctl delete cluster my-cluster
   ```

## Deploy an IPv6 cluster with AWS CLI


**Important**  
You must complete all steps in this procedure as the same user. To check the current user, run the following command:  

  ```
  aws sts get-caller-identity
  ```
You must complete all steps in this procedure in the same shell. Several steps use variables set in previous steps. Steps that use variables won’t function properly if the variable values are set in a different shell. If you use the [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) to complete the following procedure, remember that if you don’t interact with it using your keyboard or pointer for approximately 20–30 minutes, your shell session ends. Running processes do not count as interactions.
The instructions are written for the Bash shell, and may need adjusting in other shells.
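Because later steps depend on these variables, one option is to save them to a file and source that file if your shell session ends. This is a sketch; the file name `eks-ipv6-env.sh` is arbitrary, and the example values match the ones you set in the first step below.

```shell
# Persist the tutorial variables (example values shown) to a file.
cat >eks-ipv6-env.sh <<'EOF'
export region_code=region-code
export cluster_name=my-cluster
export nodegroup_name=my-nodegroup
export account_id=111122223333
EOF

# In a new shell, restore them before continuing:
. ./eks-ipv6-env.sh
echo "$cluster_name"   # prints "my-cluster"
```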

Replace all example values in the steps of this procedure with your own values.

1. Run the following commands to set some variables used in later steps.
   + Replace *region-code* with the AWS Region that you want to deploy your resources in. The value can be any AWS Region that is supported by Amazon EKS. For a list of AWS Regions, see [Amazon EKS endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/eks.html) in the AWS General Reference guide.
   + Replace *my-cluster* with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   + Replace *my-nodegroup* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.
   + Replace *111122223333* with your account ID.

   ```
   export region_code=region-code
   export cluster_name=my-cluster
   export nodegroup_name=my-nodegroup
   export account_id=111122223333
   ```

1. Create an Amazon VPC with public and private subnets that meets Amazon EKS and `IPv6` requirements.

   1. Run the following command to set a variable for your AWS CloudFormation stack name. You can replace *my-eks-ipv6-vpc* with any name you choose.

      ```
      export vpc_stack_name=my-eks-ipv6-vpc
      ```

   1. Create an `IPv6` VPC using an AWS CloudFormation template.

      ```
      aws cloudformation create-stack --region $region_code --stack-name $vpc_stack_name \
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-ipv6-vpc-public-private-subnets.yaml
      ```

      The stack takes a few minutes to create. Run the following command. Don’t continue to the next step until the output of the command is `CREATE_COMPLETE`.

      ```
      aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name --query Stacks[].StackStatus --output text
      ```
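      Rather than re-running that command by hand, you can poll it with a small helper function. This is a sketch; `wait_for_status` is our own name, not an AWS CLI command.

      ```shell
      # wait_for_status: re-run a command until it prints the wanted value,
      # checking every 10 seconds.
      wait_for_status() {
        want=$1; shift
        until [ "$("$@")" = "$want" ]; do sleep 10; done
      }

      # Usage against the stack (assumes $region_code and $vpc_stack_name are set):
      # wait_for_status CREATE_COMPLETE aws cloudformation describe-stacks \
      #     --region $region_code --stack-name $vpc_stack_name \
      #     --query Stacks[].StackStatus --output text
      ```

      The same helper works for the later steps that wait for `ACTIVE` cluster and node group statuses.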

   1. Retrieve the IDs of the public subnets that were created.

      ```
      aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPublic`].OutputValue' --output text
      ```

      An example output is as follows.

      ```
      subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE
      ```

   1. Enable the auto-assign `IPv6` address option for the public subnets that were created.

      ```
      aws ec2 modify-subnet-attribute --region $region_code --subnet-id subnet-0a1a56c486EXAMPLE --assign-ipv6-address-on-creation
      aws ec2 modify-subnet-attribute --region $region_code --subnet-id subnet-099e6ca77aEXAMPLE --assign-ipv6-address-on-creation
      ```

   1. Retrieve the names of the subnets and security groups created by the template from the deployed AWS CloudFormation stack and store them in variables for use in a later step.

      ```
      security_groups=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SecurityGroups`].OutputValue' --output text)
      
      public_subnets=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPublic`].OutputValue' --output text)
      
      private_subnets=$(aws cloudformation describe-stacks --region $region_code --stack-name $vpc_stack_name \
          --query='Stacks[].Outputs[?OutputKey==`SubnetsPrivate`].OutputValue' --output text)
      
      subnets=${public_subnets},${private_subnets}
      ```

1. Create a cluster IAM role and attach the required Amazon EKS IAM managed policy to it. Kubernetes clusters managed by Amazon EKS make calls to other AWS services on your behalf to manage the resources that you use with the service.

   1. Create a file named `eks-cluster-role-trust-policy.json` with the following contents.

      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      ```

   1. Run the following command to set a variable for your role name. You can replace *myAmazonEKSClusterRole* with any name you choose.

      ```
      export cluster_role_name=myAmazonEKSClusterRole
      ```

   1. Create the role.

      ```
      aws iam create-role --role-name $cluster_role_name --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
      ```

   1. Retrieve the ARN of the IAM role and store it in a variable for a later step.

      ```
      CLUSTER_IAM_ROLE=$(aws iam get-role --role-name $cluster_role_name --query="Role.Arn" --output text)
      ```

   1. Attach the required Amazon EKS managed IAM policy to the role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name $cluster_role_name
      ```

1. Create your cluster.

   ```
   aws eks create-cluster --region $region_code --name $cluster_name --kubernetes-version 1.XX \
      --role-arn $CLUSTER_IAM_ROLE --resources-vpc-config subnetIds=$subnets,securityGroupIds=$security_groups \
      --kubernetes-network-config ipFamily=ipv6
   ```

   1. You might receive an error that one of the Availability Zones in your request doesn’t have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see [Insufficient capacity](troubleshooting.md#ice).

      The cluster takes several minutes to create. Run the following command. Don’t continue to the next step until the output from the command is `ACTIVE`.

      ```
      aws eks describe-cluster --region $region_code --name $cluster_name --query cluster.status
      ```

1. Create or update a `kubeconfig` file for your cluster so that you can communicate with your cluster.

   ```
   aws eks update-kubeconfig --region $region_code --name $cluster_name
   ```

   By default, the `config` file is created in `~/.kube` or the new cluster’s configuration is added to an existing `config` file in `~/.kube`.

1. Create a node IAM role.

   1. Create a file named `vpc-cni-ipv6-policy.json` with the following contents.

      ```
      {
          "Version":"2012-10-17",		 	 	 
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": [
                      "ec2:AssignIpv6Addresses",
                      "ec2:DescribeInstances",
                      "ec2:DescribeTags",
                      "ec2:DescribeNetworkInterfaces",
                      "ec2:DescribeInstanceTypes"
                  ],
                  "Resource": "*"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "ec2:CreateTags"
                  ],
                  "Resource": [
                      "arn:aws:ec2:*:*:network-interface/*"
                  ]
              }
          ]
      }
      ```

   1. Create the IAM policy.

      ```
      aws iam create-policy --policy-name AmazonEKS_CNI_IPv6_Policy --policy-document file://vpc-cni-ipv6-policy.json
      ```

   1. Create a file named `node-role-trust-relationship.json` with the following contents.

      ```
      {
        "Version":"2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      ```

   1. Run the following command to set a variable for your role name. You can replace *AmazonEKSNodeRole* with any name you choose.

      ```
      export node_role_name=AmazonEKSNodeRole
      ```

   1. Create the IAM role.

      ```
      aws iam create-role --role-name $node_role_name --assume-role-policy-document file://"node-role-trust-relationship.json"
      ```

   1. Attach the IAM policy to the IAM role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy \
          --role-name $node_role_name
      ```
      **Important**  
      For simplicity in this tutorial, the policy is attached to this IAM role. In a production cluster however, we recommend attaching the policy to a separate IAM role. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

   1. Attach two required IAM managed policies to the IAM role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
        --role-name $node_role_name
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
        --role-name $node_role_name
      ```

   1. Retrieve the ARN of the IAM role and store it in a variable for a later step.

      ```
      node_iam_role=$(aws iam get-role --role-name $node_role_name --query="Role.Arn" --output text)
      ```

1. Create a managed node group.

   1. View the IDs of the subnets that you created in a previous step.

      ```
      echo $subnets
      ```

      An example output is as follows.

      ```
      subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE,subnet-0377963d69EXAMPLE,subnet-0c05f819d5EXAMPLE
      ```

   1. Create the node group. Replace *0a1a56c486EXAMPLE*, *099e6ca77aEXAMPLE*, *0377963d69EXAMPLE*, and *0c05f819d5EXAMPLE* with the values returned in the output of the previous step. Be sure to remove the commas between subnet IDs from the previous output in the following command. You can replace *t3.medium* with any [AWS Nitro System instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances).

      ```
      aws eks create-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name \
          --subnets subnet-0a1a56c486EXAMPLE subnet-099e6ca77aEXAMPLE subnet-0377963d69EXAMPLE subnet-0c05f819d5EXAMPLE \
          --instance-types t3.medium --node-role $node_iam_role
      ```
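      Instead of editing the subnet IDs by hand, Bash parameter expansion can strip the commas from the `$subnets` variable for you. This is a sketch with example IDs hard-coded; note that `${var//,/ }` is a Bash feature, not POSIX `sh`.

      ```shell
      # Replace every comma with a space so the IDs can be passed to --subnets.
      subnets=subnet-0a1a56c486EXAMPLE,subnet-099e6ca77aEXAMPLE,subnet-0377963d69EXAMPLE,subnet-0c05f819d5EXAMPLE
      echo "${subnets//,/ }"
      ```

      With that, the `create-nodegroup` command can take `--subnets ${subnets//,/ }` directly.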

      The node group takes a few minutes to create. Run the following command. Don’t proceed to the next step until the output returned is `ACTIVE`.

      ```
      aws eks describe-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name \
          --query nodegroup.status --output text
      ```

1. Confirm that the default Pods are assigned `IPv6` addresses in the `IP` column.

   ```
   kubectl get pods -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME                       READY   STATUS    RESTARTS   AGE     IP                                       NODE                                            NOMINATED NODE   READINESS GATES
   aws-node-rslts             1/1     Running   1          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   aws-node-t74jh             1/1     Running   0          5m32s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-cw7w2   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::                ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   coredns-85d5b4454c-tx6n8   1/1     Running   0          56m     2600:1f13:b66:8203:34e5::1               ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   kube-proxy-btpbk           1/1     Running   0          5m36s   2600:1f13:b66:8200:11a5:ade0:c590:6ac8   ip-192-168-34-75.region-code.compute.internal   <none>           <none>
   kube-proxy-jjk2g           1/1     Running   0          5m33s   2600:1f13:b66:8203:4516:2080:8ced:1ca9   ip-192-168-253-70.region-code.compute.internal  <none>           <none>
   ```

1. Confirm that the default services are assigned `IPv6` addresses in the `IP` column.

   ```
   kubectl get services -n kube-system -o wide
   ```

   An example output is as follows.

   ```
   NAME       TYPE        CLUSTER-IP          EXTERNAL-IP   PORT(S)         AGE   SELECTOR
   kube-dns   ClusterIP   fd30:3087:b6c2::a   <none>        53/UDP,53/TCP   57m   k8s-app=kube-dns
   ```

1. (Optional) [Deploy a sample application](sample-deployment.md) or deploy the [AWS Load Balancer Controller](aws-load-balancer-controller.md) and a sample application to load balance HTTP applications with [Route application and HTTP traffic with Application Load Balancers](alb-ingress.md) or network traffic with [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md) to `IPv6` Pods.

1. After you’ve finished with the cluster and nodes that you created for this tutorial, you should clean up the resources that you created with the following commands. Make sure that you’re not using any of the resources outside of this tutorial before deleting them.

   1. If you’re completing this step in a different shell than you completed the previous steps in, set the values of all the variables used in previous steps, replacing the example values with the values you specified when you completed the previous steps. If you’re completing this step in the same shell that you completed the previous steps in, skip to the next step.

      ```
      export region_code=region-code
      export vpc_stack_name=my-eks-ipv6-vpc
      export cluster_name=my-cluster
      export nodegroup_name=my-nodegroup
      export account_id=111122223333
      export node_role_name=AmazonEKSNodeRole
      export cluster_role_name=myAmazonEKSClusterRole
      ```

   1. Delete your node group.

      ```
      aws eks delete-nodegroup --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name
      ```

      Deletion takes a few minutes. Run the following command. Don’t proceed to the next step if any output is returned.

      ```
      aws eks list-nodegroups --region $region_code --cluster-name $cluster_name --query nodegroups --output text
      ```
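
      Alternatively, the AWS CLI provides a waiter that blocks until the node group is deleted, which you can use instead of polling (assuming a CLI version that includes the `eks wait` commands):

      ```
      aws eks wait nodegroup-deleted --region $region_code --cluster-name $cluster_name --nodegroup-name $nodegroup_name
      ```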

   1. Delete the cluster.

      ```
      aws eks delete-cluster --region $region_code --name $cluster_name
      ```

      The cluster takes a few minutes to delete. Before continuing, make sure that the cluster is deleted with the following command.

      ```
      aws eks describe-cluster --region $region_code --name $cluster_name
      ```

      Don’t proceed to the next step until your output is similar to the following output.

      ```
      An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: my-cluster.
      ```
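
      Alternatively, you can block until the cluster is gone with the AWS CLI waiter (assuming a CLI version that includes the `eks wait` commands):

      ```
      aws eks wait cluster-deleted --region $region_code --name $cluster_name
      ```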

   1. Delete the IAM resources that you created. Replace *AmazonEKS_CNI_IPv6_Policy* with the name you chose, if you chose a different name than the one used in previous steps.

      ```
      aws iam detach-role-policy --role-name $cluster_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      aws iam detach-role-policy --role-name $node_role_name --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy
      aws iam delete-policy --policy-arn arn:aws:iam::$account_id:policy/AmazonEKS_CNI_IPv6_Policy
      aws iam delete-role --role-name $cluster_role_name
      aws iam delete-role --role-name $node_role_name
      ```

   1. Delete the AWS CloudFormation stack that created the VPC.

      ```
      aws cloudformation delete-stack --region $region_code --stack-name $vpc_stack_name
      ```

# Enable outbound internet access for Pods
Outbound traffic

 **Applies to**: Linux `IPv4` Fargate nodes, Linux nodes with Amazon EC2 instances

If you deployed your cluster using the `IPv6` family, then the information in this topic isn’t applicable to your cluster, because `IPv6` addresses are not network translated. For more information about using `IPv6` with your cluster, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

By default, each Pod in your cluster is assigned a [private](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-private-addresses) `IPv4` address from a classless inter-domain routing (CIDR) block that is associated with the VPC that the Pod is deployed in. Pods in the same VPC communicate with each other using these private IP addresses as endpoints. When a Pod communicates to any `IPv4` address that isn’t within a CIDR block that’s associated with your VPC, the Amazon VPC CNI plugin (for both [Linux](https://github.com/aws/amazon-vpc-cni-k8s#amazon-vpc-cni-k8s) and [Windows](https://github.com/aws/amazon-vpc-cni-plugins/tree/master/plugins/vpc-bridge)) translates the Pod’s `IPv4` address to the primary private `IPv4` address of the primary [elastic network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#eni-basics) of the node that the Pod is running on, by default (for an exception, see [Host networking](#snat-exception)).

**Note**  
For Windows nodes, there are additional details to consider. By default, the [VPC CNI plugin for Windows](https://github.com/aws/amazon-vpc-cni-plugins/tree/master/plugins/vpc-bridge) is defined with a networking configuration in which the traffic to a destination within the same VPC is excluded for SNAT. This means that internal VPC communication has SNAT disabled and the IP address allocated to a Pod is routable inside the VPC. But traffic to a destination outside of the VPC has the source Pod IP SNAT’ed to the instance ENI’s primary IP address. This default configuration for Windows ensures that the pod can access networks outside of your VPC in the same way as the host instance.

Due to this behavior:
+ Your Pods can communicate with internet resources only if the node that they’re running on has a [public](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses) or [elastic](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-eips.html) IP address assigned to it and is in a [public subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics). A public subnet’s associated [route table](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) has a route to an internet gateway. We recommend deploying nodes to private subnets whenever possible.
+ For versions of the plugin earlier than `1.8.0`, resources that are in networks or VPCs that are connected to your cluster VPC using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), a [transit VPC](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html), or [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) can’t initiate communication to your Pods behind secondary elastic network interfaces. Your Pods can initiate communication to those resources and receive responses from them, though.

If either of the following statements is true in your environment, then change the default configuration with the command that follows.
+ You have resources in networks or VPCs that are connected to your cluster VPC using [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html), a [transit VPC](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/transit-vpc-option.html), or [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) that need to initiate communication with your Pods using an `IPv4` address and your plugin version is earlier than `1.8.0`.
+ Your Pods are in a [private subnet](https://docs.aws.amazon.com/vpc/latest/userguide/configure-subnets.html#subnet-basics) and need to communicate outbound to the internet. The subnet has a route to a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html).

```
kubectl set env daemonset -n kube-system aws-node AWS_VPC_K8S_CNI_EXTERNALSNAT=true
```
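
You can confirm that the variable was set by reading it back from the `DaemonSet` (a quick check using `kubectl` JSONPath output):

```
kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[?(@.name=="aws-node")].env[?(@.name=="AWS_VPC_K8S_CNI_EXTERNALSNAT")].value}'
```

The command should print `true` once the change has been applied.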

**Note**  
The `AWS_VPC_K8S_CNI_EXTERNALSNAT` and `AWS_VPC_K8S_CNI_EXCLUDE_SNAT_CIDRS` CNI configuration variables aren’t applicable to Windows nodes. Disabling SNAT isn’t supported for Windows. As for excluding a list of `IPv4` CIDRs from SNAT, you can define this by specifying the `ExcludedSnatCIDRs` parameter in the Windows bootstrap script. For more information on using this parameter, see [Bootstrap script configuration parameters](eks-optimized-windows-ami.md#bootstrap-script-configuration-parameters).

## Host networking


If a Pod’s spec contains `hostNetwork=true` (default is `false`), then its IP address isn’t translated to a different address. This is the case for the `kube-proxy` and Amazon VPC CNI plugin for Kubernetes Pods that run on your cluster, by default. For these Pods, the IP address is the same as the node’s primary IP address, so the Pod’s IP address isn’t translated. For more information about a Pod’s `hostNetwork` setting, see [PodSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podspec-v1-core) in the Kubernetes API reference.

# Limit Pod traffic with Kubernetes network policies
Kubernetes policies

## Overview


By default, there are no restrictions in Kubernetes for IP addresses, ports, or connections between any Pods in your cluster or between your Pods and resources in any other network. You can use Kubernetes *network policy* to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

## Standard network policy


You can use the standard `NetworkPolicy` to segment pod-to-pod traffic in the cluster. These network policies operate at layers 3 and 4 of the OSI network model, allowing you to control traffic flow at the IP address or port level within your Amazon EKS cluster. Standard network policies are scoped to the namespace level.

### Use cases

+ Segment network traffic between workloads to ensure that only related applications can talk to each other.
+ Isolate tenants at the namespace level using policies to enforce network separation.

### Example


In the policy below, egress traffic from the *webapp* pods in the *sun* namespace is restricted.

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-egress-policy
  namespace: sun
spec:
  podSelector:
    matchLabels:
      role: webapp
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: moon
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
  - to:
    - namespaceSelector:
        matchLabels:
          name: stars
      podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 8080
```

The policy applies to pods with the label `role: webapp` in the `sun` namespace.
+ Allowed traffic: Pods with the label `role: frontend` in the `moon` namespace on TCP port `8080` 
+ Allowed traffic: Pods with the label `role: frontend` in the `stars` namespace on TCP port `8080`
+ Blocked traffic: All other outbound traffic from `webapp` pods is implicitly denied
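
For contrast, the implicit deny could be made explicit with a minimal egress policy that selects the same pods but allows nothing (a sketch using the same labels as the example above):

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-default-deny-egress
  namespace: sun
spec:
  podSelector:
    matchLabels:
      role: webapp
  policyTypes:
  - Egress
```

Because the policy declares `Egress` in `policyTypes` but defines no `egress` rules, all outbound traffic from the selected pods is denied.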

## Admin (or cluster) network policy


![\[Illustration of the evaluation order for network policies in EKS\]](http://docs.aws.amazon.com/eks/latest/userguide/images/evaluation-order.png)


You can use the `ClusterNetworkPolicy` to enforce a network security standard that applies to the whole cluster. Instead of repetitively defining and maintaining a distinct policy for each namespace, you can use a single policy to centrally manage network access controls for different workloads in the cluster, irrespective of their namespace.

### Use cases

+ Centrally manage network access controls for all (or a subset of) workloads in your EKS cluster.
+ Define a default network security posture across the cluster.
+ Extend organizational security standards to the scope of the cluster in a more operationally efficient way.

### Example


In the policy below, you can explicitly block cluster traffic from other namespaces to prevent network access to a sensitive workload namespace.

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: protect-sensitive-workload
spec:
  tier: Admin
  priority: 10
  subject:
    namespaces:
      matchLabels:
        kubernetes.io/metadata.name: earth
  ingress:
    - action: Deny
      from:
      - namespaces:
          matchLabels: {} # Match all namespaces.
      name: select-all-deny-all
```

## Important notes


Network policies in the Amazon VPC CNI plugin for Kubernetes are supported in the configurations listed below.
+ Version `1.21.0` (or later) of the Amazon VPC CNI plugin for both standard and admin network policies.
+ Cluster configured for `IPv4` or `IPv6` addresses.
+ You can use network policies with [security groups for Pods](security-groups-for-pods.md). With network policies, you can control all in-cluster communication. With security groups for Pods, you can control access to AWS services from applications within a Pod.
+ You can use network policies with *custom networking* and *prefix delegation*.

## Considerations


 **Architecture** 
+ When applying network policies to your cluster with the Amazon VPC CNI plugin for Kubernetes, you can apply the policies to Amazon EC2 Linux nodes only. You can’t apply the policies to Fargate or Windows nodes.
+ Network policies only apply to either `IPv4` or `IPv6` addresses, but not both. In an `IPv4` cluster, the VPC CNI assigns `IPv4` addresses to pods and applies `IPv4` policies. In an `IPv6` cluster, the VPC CNI assigns `IPv6` addresses to pods and applies `IPv6` policies. Any `IPv4` network policy rules applied to an `IPv6` cluster are ignored, and any `IPv6` network policy rules applied to an `IPv4` cluster are ignored.

 **Network Policies** 
+ Network Policies are only applied to Pods that are part of a Deployment. Standalone Pods that don’t have a `metadata.ownerReferences` set can’t have network policies applied to them.
+ You can apply multiple network policies to the same Pod. When two or more policies that select the same Pod are configured, all policies are applied to the Pod.
+ The maximum number of combinations of ports and protocols for a single IP address range (CIDR) is 24 across all of your network policies. Selectors such as `namespaceSelector` resolve to one or more CIDRs. If multiple selectors resolve to a single CIDR or you specify the same direct CIDR multiple times in the same or different network policies, these all count toward this limit.
+ For any of your Kubernetes services, the service port must be the same as the container port. If you’re using named ports, use the same name in the service spec too.

 **Admin Network Policies** 

1.  **Admin tier policies (evaluated first)**: All Admin tier ClusterNetworkPolicies are evaluated before any other policies. Within the Admin tier, policies are processed in priority order (lowest priority number first). The action type determines what happens next.
   +  **Deny action (highest precedence)**: When an Admin policy with a Deny action matches traffic, that traffic is immediately blocked regardless of any other policies. No further ClusterNetworkPolicy or NetworkPolicy rules are processed. This ensures that organization-wide security controls cannot be overridden by namespace-level policies.
   +  **Allow action**: After Deny rules are evaluated, Admin policies with Allow actions are processed in priority order (lowest priority number first). When an Allow action matches, the traffic is accepted and no further policy evaluation occurs. These policies can grant access across multiple namespaces based on label selectors, providing centralized control over which workloads can access specific resources.
   +  **Pass action**: Pass actions in Admin tier policies delegate decision-making to lower tiers. When traffic matches a Pass rule, evaluation skips all remaining Admin tier rules for that traffic and proceeds directly to the NetworkPolicy tier. This allows administrators to explicitly delegate control for certain traffic patterns to application teams. For example, you might use Pass rules to delegate intra-namespace traffic management to namespace administrators while maintaining strict controls over external access.

1.  **Network policy tier**: If no Admin tier policy matches with Deny or Allow, or if a Pass action was matched, namespace-scoped NetworkPolicy resources are evaluated next. These policies provide fine-grained control within individual namespaces and are managed by application teams. Namespace-scoped policies can only be more restrictive than Admin policies. They cannot override an Admin policy’s Deny decision, but they can further restrict traffic that was allowed or passed by Admin policies.

1.  **Baseline tier Admin policies**: If no Admin or namespace-scoped policies match the traffic, Baseline tier ClusterNetworkPolicies are evaluated. These provide default security postures that can be overridden by namespace-scoped policies, allowing administrators to set organization-wide defaults while giving teams flexibility to customize as needed. Baseline policies are evaluated in priority order (lowest priority number first).

1.  **Default deny (if no policies match)**: This deny-by-default behavior ensures that only explicitly permitted connections are allowed, maintaining a strong security posture.
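
As an illustration of the Baseline tier described above, a cluster-wide default could be expressed with the same schema as the earlier `ClusterNetworkPolicy` example (a sketch; the name, priority, and label values are hypothetical):

```
apiVersion: networking.k8s.aws/v1alpha1
kind: ClusterNetworkPolicy
metadata:
  name: baseline-default-deny
spec:
  tier: Baseline
  priority: 100
  subject:
    namespaces:
      matchLabels: {} # Apply to all namespaces.
  ingress:
    - action: Deny
      from:
      - namespaces:
          matchLabels: {} # Match all namespaces.
      name: baseline-deny-all
```

Because Baseline policies are evaluated after namespace-scoped policies, application teams can still open specific paths with their own `NetworkPolicy` objects.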

 **Migration** 
+ If your cluster is currently using a third party solution to manage Kubernetes network policies, you can use those same policies with the Amazon VPC CNI plugin for Kubernetes. However you must remove your existing solution so that it isn’t managing the same policies.

**Warning**  
We recommend that after you remove a network policy solution, you replace all of the nodes that had the solution applied to them. This is because a pod of the previous solution might leave traffic rules behind if it exits unexpectedly.

 **Installation** 
+ The network policy feature creates and requires a Custom Resource Definition (CRD) called `policyendpoints.networking.k8s.aws`. The `PolicyEndpoint` custom resources of this CRD are managed by Amazon EKS. You shouldn’t modify or delete these resources.
+ If you run pods that use the instance role IAM credentials or connect to the EC2 IMDS, be careful to check for network policies that would block access to the EC2 IMDS. You may need to add a network policy to allow access to EC2 IMDS. For more information, see [Instance metadata and user data](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html) in the Amazon EC2 User Guide.

  Pods that use *IAM roles for service accounts* or *EKS Pod Identity* don’t rely on the EC2 IMDS for credentials.
+ The Amazon VPC CNI plugin for Kubernetes doesn’t apply network policies to additional network interfaces for each pod, only the primary interface for each pod (`eth0`). This affects the following architectures:
  +  `IPv6` pods with the `ENABLE_V4_EGRESS` variable set to `true`. This variable enables the `IPv4` egress feature to connect the IPv6 pods to `IPv4` endpoints such as those outside the cluster. The `IPv4` egress feature works by creating an additional network interface with a local loopback IPv4 address.
  + When using chained network plugins such as Multus. Because these plugins add network interfaces to each pod, network policies aren’t applied to the chained network plugins.

# Restrict Pod network traffic with Kubernetes network policies
Restrict traffic

You can use a Kubernetes network policy to restrict network traffic to and from your Pods. For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) in the Kubernetes documentation.

You must configure the following in order to use this feature:

1. Set up policy enforcement at Pod startup. You do this in the `aws-node` container of the VPC CNI `DaemonSet`.

1. Enable the network policy parameter for the add-on.

1. Configure your cluster to use Kubernetes network policies.

Before you begin, review the considerations. For more information, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

## Prerequisites


The following are prerequisites for the feature:

### Minimum cluster version


An existing Amazon EKS cluster. To deploy one, see [Get started with Amazon EKS](getting-started.md). The cluster must be running one of the Kubernetes versions and platform versions listed in the following table. Note that any Kubernetes and platform versions later than those listed are also supported. You can check your current Kubernetes version by replacing *my-cluster* in the following command with the name of your cluster and then running the modified command:

```
aws eks describe-cluster --name my-cluster --query cluster.version --output text
```


| Kubernetes version | Platform version | 
| --- | --- | 
|   `1.27.4`   |   `eks.5`   | 
|   `1.26.7`   |   `eks.6`   | 

### Minimum VPC CNI version


To create both standard Kubernetes network policies and admin network policies, you need to run version `1.21` of the VPC CNI plugin. You can see which version that you currently have with the following command.

```
kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
```

If your version is earlier than `1.21`, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md) to upgrade to version `1.21` or later.
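
If you capture the version string from the command above, a quick local comparison with `sort -V` can tell you whether an upgrade is needed (a sketch; substitute your actual version for the example value):

```
# Compare an installed VPC CNI version against the required minimum using
# sort -V. Replace INSTALLED with the output of the kubectl command above.
REQUIRED="1.21.0"
INSTALLED="v1.19.4"   # example value
if [ "$(printf '%s\n%s\n' "$REQUIRED" "${INSTALLED#v}" | sort -V | head -n1)" = "$REQUIRED" ]; then
  echo "version OK"
else
  echo "upgrade required"
fi
```

For the example value shown, the script prints `upgrade required` because `1.19.4` sorts before `1.21.0`.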

### Minimum Linux kernel version


Your nodes must have Linux kernel version `5.10` or later. You can check your kernel version with `uname -r`. If you’re using the latest versions of the Amazon EKS optimized Amazon Linux, Amazon EKS optimized accelerated Amazon Linux AMIs, and Bottlerocket AMIs, they already have the required kernel version.

Amazon EKS optimized accelerated Amazon Linux AMI versions `v20231116` and later have kernel version `5.10`.
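
A quick way to check a node’s kernel against the `5.10` minimum is to compare the output of `uname -r` using `sort -V` (a sketch; the example release string is hypothetical):

```
# Check a kernel release string against the 5.10 minimum. Replace KERNEL with
# the output of `uname -r` on your node.
KERNEL="5.4.219-126.411.amzn2.x86_64"   # example value
BASE="${KERNEL%%-*}"                    # strip the build suffix -> 5.4.219
if [ "$(printf '5.10\n%s\n' "$BASE" | sort -V | head -n1)" = "5.10" ]; then
  echo "kernel OK"
else
  echo "kernel too old: $BASE"
fi
```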

## Step 1: Set up policy enforcement at Pod startup


The Amazon VPC CNI plugin for Kubernetes configures network policies for pods in parallel with pod provisioning. Until all of the policies are configured for the new pod, containers in the new pod start with a *default allow policy*. This is called *standard mode*. A default allow policy means that all ingress and egress traffic is allowed to and from the new pods. In other words, the pods don’t have any firewall rules enforced (all traffic is allowed) until the new pod is updated with the active policies.

With the `NETWORK_POLICY_ENFORCING_MODE` variable set to `strict`, pods that use the VPC CNI start with a *default deny policy*, and then policies are configured. This is called *strict mode*. In strict mode, you must have a network policy for every endpoint that your pods need to access in your cluster. Note that this requirement applies to the CoreDNS pods. The default deny policy isn’t configured for pods that use host networking.
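
For example, in strict mode a namespace typically needs an explicit egress rule to reach CoreDNS. A minimal sketch (assuming the default `kube-dns` labels in `kube-system`; replace *my-namespace* with your namespace) might look like the following:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```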

You can change the default network policy by setting the environment variable `NETWORK_POLICY_ENFORCING_MODE` to `strict` in the `aws-node` container of the VPC CNI `DaemonSet`.

```
env:
  - name: NETWORK_POLICY_ENFORCING_MODE
    value: "strict"
```

## Step 2: Enable the network policy parameter for the add-on


The network policy feature uses port `8162` on the node for metrics by default. The feature also uses port `8163` for health probes. If another application on the nodes or inside pods needs to use these ports, that application fails to run. In VPC CNI version `v1.14.1` and later, you can change these ports.

Use the following procedure to enable the network policy parameter for the add-on.

### AWS Management Console


1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`.

      The following example enables the network policy feature and sets the metrics and health probe ports to their default numbers:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "healthProbeBindAddr": "8163",
              "metricsBindAddr": "8162"
          }
      }
      ```

### Helm


If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to change the ports.

1. Run the following command to change the ports. Set the metrics port in the value for the key `nodeAgent.metricsBindAddr` and the health probe port in the value for the key `nodeAgent.healthProbeBindAddr`.

   ```
   helm upgrade --set nodeAgent.metricsBindAddr=8162 --set nodeAgent.healthProbeBindAddr=8163 aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

### kubectl


1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the port numbers in the following command arguments in the `args:` section of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
   - args:
     - --metrics-bind-addr=:8162
     - --health-probe-bind-addr=:8163
   ```

## Step 3: Configure your cluster to use Kubernetes network policies


You can set this for an Amazon EKS add-on or self-managed add-on.

### Amazon EKS add-on


Using the AWS CLI, you can configure the cluster to use Kubernetes network policies by running the following command. Replace `my-cluster` with the name of your cluster and the IAM role ARN with the role that you are using.

```
aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
    --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
    --resolve-conflicts PRESERVE --configuration-values '{"enableNetworkPolicy": "true"}'
```
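
Because `--configuration-values` must be valid JSON, it can help to validate the string locally before calling `update-addon` (a local check only; `python3` is assumed to be available):

```
# Validate the configuration values locally before passing them to the CLI.
CONFIG='{"enableNetworkPolicy": "true"}'
if echo "$CONFIG" | python3 -m json.tool > /dev/null; then
  echo "valid JSON"
fi
```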

To configure this using the AWS Management Console, follow these steps:

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** list.

   1. Expand the **Optional configuration settings**.

   1. Enter the JSON key `"enableNetworkPolicy":` and value `"true"` in **Configuration values**. The resulting text must be a valid JSON object. If this key and value are the only data in the text box, surround the key and value with curly braces `{ }`. The following example shows the network policy feature enabled:

      ```
      { "enableNetworkPolicy": "true" }
      ```

      The following screenshot shows an example of this scenario.  
![\[AWS Management Console showing the VPC CNI add-on with network policy in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy.png)

### Self-managed add-on


If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to enable network policy.

1. Run the following command to enable network policy.

   ```
   helm upgrade --set enableNetworkPolicy=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

1. Open the `amazon-vpc-cni` `ConfigMap` in your editor.

   ```
   kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
   ```

1. Add the following line to the `data` in the `ConfigMap`.

   ```
   enable-network-policy-controller: "true"
   ```

   Once you’ve added the line, your `ConfigMap` should look like the following example.

   ```
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: amazon-vpc-cni
     namespace: kube-system
   data:
     enable-network-policy-controller: "true"
   ```

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

   1. Replace the `false` with `true` in the command argument `--enable-network-policy=false` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

      ```
      - args:
        - --enable-network-policy=true
      ```

## Step 4. Next steps


After you complete the configuration, confirm that the `aws-node` pods are running on your cluster.

```
kubectl get pods -n kube-system | grep 'aws-node\|amazon'
```

An example output is as follows.

```
aws-node-gmqp7                                          2/2     Running   1 (24h ago)   24h
aws-node-prnsh                                          2/2     Running   1 (24h ago)   24h
```

There are two containers in the `aws-node` pods in versions `1.14` and later. In previous versions, and if network policy is disabled, there is only a single container in the `aws-node` pods.
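
To confirm that the second container is the network policy agent, you can list the container names in the `DaemonSet` (container names as used elsewhere in this topic):

```
kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[*].name}'
```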

You can now deploy Kubernetes network policies to your cluster.

To implement Kubernetes network policies, you can create Kubernetes `NetworkPolicy` or `ClusterNetworkPolicy` objects and deploy them to your cluster. `NetworkPolicy` objects are scoped to a namespace, while `ClusterNetworkPolicy` objects can be scoped to the whole cluster or multiple namespaces. You implement policies to allow or deny traffic between Pods based on label selectors, namespaces, and IP address ranges. For more information about creating `NetworkPolicy` objects, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource) in the Kubernetes documentation.

Enforcement of Kubernetes `NetworkPolicy` objects is implemented using the Extended Berkeley Packet Filter (eBPF). Relative to `iptables`-based implementations, it offers lower latency and better performance characteristics, including reduced CPU utilization and the avoidance of sequential lookups. Additionally, eBPF probes provide access to context-rich data that helps debug complex kernel-level issues and improve observability. Amazon EKS supports an eBPF-based exporter that leverages the probes to log policy results on each node and export the data to external log collectors to aid in debugging. For more information, see the [eBPF documentation](https://ebpf.io/what-is-ebpf/#what-is-ebpf).

# Disable Kubernetes network policies for Amazon EKS Pod network traffic
Disable

Disable Kubernetes network policies to stop restricting Amazon EKS Pod network traffic.

1. List all Kubernetes network policies.

   ```
   kubectl get netpol -A
   ```

1. Delete each Kubernetes network policy. You must delete all network policies before disabling network policies.

   ```
   kubectl delete netpol <policy-name> -n <namespace>
   ```
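
   If you have policies in several namespaces, a loop over the output of the first step can remove them all (a destructive sketch; review the list from the previous step before running it):

   ```
   kubectl get netpol -A -o jsonpath='{range .items[*]}{.metadata.namespace} {.metadata.name}{"\n"}{end}' | while read -r ns name; do
     kubectl delete netpol "$name" -n "$ns"
   done
   ```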

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace the `true` with `false` in the command argument `--enable-network-policy=true` in the `args:` in the `aws-network-policy-agent` container in the VPC CNI `aws-node` daemonset manifest.

   ```
   - args:
     - --enable-network-policy=false
   ```

# Troubleshooting Kubernetes network policies for Amazon EKS
Troubleshooting

This is the troubleshooting guide for the network policy feature of the Amazon VPC CNI.

This guide covers:
+ Install information, including the CRD and RBAC permissions: [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions)
+ Logs to examine when diagnosing network policy problems: [Network policy logs](#network-policies-troubleshooting-flowlogs)
+ Running the eBPF SDK collection of tools to troubleshoot
+ [Known issues and solutions](#network-policies-troubleshooting-known-issues)

**Note**  
Note that network policies are only applied to pods that are made by Kubernetes *Deployments*. For more limitations of the network policies in the VPC CNI, see [Considerations](cni-network-policy.md#cni-network-policy-considerations).

You can troubleshoot and investigate network connections that use network policies by reading the [Network policy logs](#network-policies-troubleshooting-flowlogs) and by running tools from the [eBPF SDK](#network-policies-ebpf-sdk).

## New `policyendpoints` CRD and permissions

+ CRD: `policyendpoints.networking.k8s.aws` 
+ Kubernetes API: `apiservice` called `v1.networking.k8s.io` 
+ Kubernetes resource: `Kind: NetworkPolicy` 
+ RBAC: `ClusterRole` called `aws-node` (VPC CNI), `ClusterRole` called `eks:network-policy-controller` (network policy controller in EKS cluster control plane)

For network policy, the VPC CNI creates a new `CustomResourceDefinition` (CRD) called `policyendpoints.networking.k8s.aws`. The VPC CNI must have permissions to create the CRD and create CustomResources (CR) of this and the other CRD installed by the VPC CNI (`eniconfigs.crd.k8s.amazonaws.com`). Both of the CRDs are available in the [`crds.yaml` file](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/charts/aws-vpc-cni/crds/customresourcedefinition.yaml) on GitHub. Specifically, the VPC CNI must have "get", "list", and "watch" verb permissions for `policyendpoints`.

The Kubernetes *Network Policy* is part of the `apiservice` called `v1.networking.k8s.io`, and this is `apiversion: networking.k8s.io/v1` in your policy YAML files. The VPC CNI `DaemonSet` must have permissions to use this part of the Kubernetes API.

The VPC CNI permissions are in a `ClusterRole` called `aws-node`. Note that `ClusterRole` objects aren’t grouped in namespaces. The following shows the `aws-node` `ClusterRole` of a cluster:

```
kubectl get clusterrole aws-node -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: aws-vpc-cni
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: aws-node
    app.kubernetes.io/version: v1.19.4
    helm.sh/chart: aws-vpc-cni-1.19.4
    k8s-app: aws-node
  name: aws-node
rules:
- apiGroups:
  - crd.k8s.amazonaws.com
  resources:
  - eniconfigs
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
  - get
- apiGroups:
  - ""
  - events.k8s.io
  resources:
  - events
  verbs:
  - create
  - patch
  - list
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
- apiGroups:
  - vpcresources.k8s.aws
  resources:
  - cninodes
  verbs:
  - get
  - list
  - watch
  - patch
```
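A quick way to confirm that the required verbs are present is to filter the role output. This sketch counts the `get`, `list`, and `watch` verbs granted on `policyendpoints`; a saved excerpt of the rules stands in here for a live cluster, where you would pipe `kubectl get clusterrole aws-node -o yaml` through the same filters:

```shell
# Count the get/list/watch verbs granted on policyendpoints; expect 3.
# The excerpt stands in for `kubectl get clusterrole aws-node -o yaml` output.
role_excerpt='- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - get
  - list
  - watch'
echo "$role_excerpt" | grep -A 6 -- '- policyendpoints' | grep -cE -- '- (get|list|watch)$'
```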

Also, a new controller runs in the control plane of each EKS cluster. The controller uses the permissions of the `ClusterRole` called `eks:network-policy-controller`. The following shows the `eks:network-policy-controller` `ClusterRole` of a cluster:

```
kubectl get clusterrole eks:network-policy-controller -o yaml
```

```
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: amazon-network-policy-controller-k8s
  name: eks:network-policy-controller
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints
  verbs:
  - create
  - delete
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/finalizers
  verbs:
  - update
- apiGroups:
  - networking.k8s.aws
  resources:
  - policyendpoints/status
  verbs:
  - get
  - patch
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - networkpolicies
  verbs:
  - get
  - list
  - patch
  - update
  - watch
```

## Network policy logs


Each decision by the VPC CNI about whether a connection is allowed or denied by a network policy is logged in *flow logs*. The network policy logs on each node include the flow logs for every Pod that has a network policy. Network policy logs are stored at `/var/log/aws-routed-eni/network-policy-agent.log`. The following example is from a `network-policy-agent.log` file:

```
{"level":"info","timestamp":"2023-05-30T16:05:32.573Z","logger":"ebpf-client","msg":"Flow Info: ","Src
IP":"192.168.87.155","Src Port":38971,"Dest IP":"64.6.160","Dest
Port":53,"Proto":"UDP","Verdict":"ACCEPT"}
```
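Because each entry is a single JSON line, you can filter the log with standard tools. This sketch counts `DENY` verdicts; the two sample lines stand in for the real file at `/var/log/aws-routed-eni/network-policy-agent.log` on a node:

```shell
# Count DENY verdicts. On a node, replace the printf with:
#   cat /var/log/aws-routed-eni/network-policy-agent.log
printf '%s\n' \
  '{"logger":"ebpf-client","msg":"Flow Info: ","Src IP":"192.168.87.155","Verdict":"ACCEPT"}' \
  '{"logger":"ebpf-client","msg":"Flow Info: ","Src IP":"192.168.87.156","Verdict":"DENY"}' |
grep -c '"Verdict":"DENY"'
```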

Network policy logs are disabled by default. To enable the network policy logs, follow these steps:

**Note**  
Network policy logs require an additional 1 vCPU for the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

### Amazon EKS add-on


 **AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. In **Configuration values**, enter a top-level JSON key `"nodeAgent"` whose value is an object with the key `"enablePolicyEventLogs"` set to `"true"`. The resulting text must be a valid JSON object. The following example shows that network policy and the network policy logs are enabled:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true"
          }
      }
      ```

The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy logs enabled in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs.png)
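Before you paste configuration values like the example above into the console, you can check locally that the text parses as JSON. This sketch uses `python3 -m json.tool` as a lightweight validator; it prints the document back if the JSON parses and fails otherwise:

```shell
# Validate the add-on configuration values locally before pasting them.
echo '{"enableNetworkPolicy": "true", "nodeAgent": {"enablePolicyEventLogs": "true"}}' |
python3 -m json.tool
```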


 **AWS CLI**

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true"}}'
   ```

### Self-managed add-on


**Helm**
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to write the network policy logs.  

1. Run the following command to enable the network policy logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

**kubectl**
If you have installed the Amazon VPC CNI plugin for Kubernetes through `kubectl`, you can update the configuration to write the network policy logs.  

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace `false` with `true` in the command argument `--enable-policy-event-logs=false` in the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
        - args:
           - --enable-policy-event-logs=true
   ```

### Send network policy logs to Amazon CloudWatch Logs


You can monitor the network policy logs using services such as Amazon CloudWatch Logs. You can use the following methods to send the network policy logs to CloudWatch Logs.

For EKS clusters, the policy logs are located under `/aws/eks/cluster-name/cluster/`. For self-managed Kubernetes clusters, the logs are placed under `/aws/k8s-cluster/cluster/`.

#### Send network policy logs with Amazon VPC CNI plugin for Kubernetes


If you enable network policy, a second container is added to the `aws-node` Pods for a *node agent*. This node agent can send the network policy logs to CloudWatch Logs.

**Note**  
Only the network policy logs are sent by the node agent. Other logs made by the VPC CNI aren’t included.

##### Prerequisites

+ Add the following permissions as a stanza or separate policy to the IAM role that you are using for the VPC CNI.

  ```
  {
      "Version": "2012-10-17",
      "Statement": [
          {
              "Sid": "VisualEditor0",
              "Effect": "Allow",
              "Action": [
                  "logs:DescribeLogGroups",
                  "logs:CreateLogGroup",
                  "logs:CreateLogStream",
                  "logs:PutLogEvents"
              ],
              "Resource": "*"
          }
      ]
  }
  ```
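The policy above grants the logging actions on all resources. If you prefer a tighter policy, you can scope `Resource` to the log group that the node agent writes to. The following sketch assumes the EKS log group path described later on this page; the Region code, account ID, and cluster name are placeholders you would replace:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:region-code:111122223333:log-group:/aws/eks/my-cluster/cluster/*"
        }
    ]
}
```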

##### Amazon EKS add-on


 **AWS Management Console**

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. In the left navigation pane, select **Clusters**, and then select the name of the cluster that you want to configure the Amazon VPC CNI add-on for.

1. Choose the **Add-ons** tab.

1. Select the box in the top right of the add-on box and then choose **Edit**.

1. On the **Configure Amazon VPC CNI** page:

   1. Select a `v1.14.0-eksbuild.3` or later version in the **Version** dropdown list.

   1. Expand the **Optional configuration settings**.

   1. In **Configuration values**, enter a top-level JSON key `"nodeAgent"` whose value is an object with the key `"enableCloudWatchLogs"` set to `"true"`. The resulting text must be a valid JSON object. The following example shows that network policy and the network policy logs are enabled, and the logs are sent to CloudWatch Logs:

      ```
      {
          "enableNetworkPolicy": "true",
          "nodeAgent": {
              "enablePolicyEventLogs": "true",
              "enableCloudWatchLogs": "true"
          }
      }
      ```
The following screenshot shows an example of this scenario.

![\[AWS Management Console showing the VPC CNI add-on with network policy and CloudWatch Logs in the optional configuration.\]](http://docs.aws.amazon.com/eks/latest/userguide/images/console-cni-config-network-policy-logs-cwl.png)


 **AWS CLI**

1. Run the following AWS CLI command. Replace `my-cluster` with the name of your cluster and replace the IAM role ARN with the role that you are using.

   ```
   aws eks update-addon --cluster-name my-cluster --addon-name vpc-cni --addon-version v1.14.0-eksbuild.3 \
       --service-account-role-arn arn:aws:iam::123456789012:role/AmazonEKSVPCCNIRole \
       --resolve-conflicts PRESERVE --configuration-values '{"nodeAgent": {"enablePolicyEventLogs": "true", "enableCloudWatchLogs": "true"}}'
   ```

##### Self-managed add-on


 **Helm**   
If you have installed the Amazon VPC CNI plugin for Kubernetes through `helm`, you can update the configuration to send network policy logs to CloudWatch Logs.  

1. Run the following command to enable network policy logs and send them to CloudWatch Logs.

   ```
   helm upgrade --set nodeAgent.enablePolicyEventLogs=true --set nodeAgent.enableCloudWatchLogs=true aws-vpc-cni --namespace kube-system eks/aws-vpc-cni
   ```

 **kubectl**   

1. Open the `aws-node` `DaemonSet` in your editor.

   ```
   kubectl edit daemonset -n kube-system aws-node
   ```

1. Replace `false` with `true` in the two command arguments `--enable-policy-event-logs=false` and `--enable-cloudwatch-logs=false` in the `args:` of the `aws-network-policy-agent` container in the VPC CNI `aws-node` `DaemonSet` manifest.

   ```
        - args:
           - --enable-policy-event-logs=true
           - --enable-cloudwatch-logs=true
   ```

#### Send network policy logs with a Fluent Bit `DaemonSet`


If you are using Fluent Bit in a `DaemonSet` to send logs from your nodes, you can add configuration to include the network policy logs. You can use the following example configuration:

```
    [INPUT]
        Name              tail
        Tag               eksnp.*
        Path              /var/log/aws-routed-eni/network-policy-agent*.log
        Parser            json
        DB                /var/log/aws-routed-eni/flb_npagent.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
```
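To forward the tailed entries somewhere useful, pair the `[INPUT]` with an `[OUTPUT]` stanza. The following sketch uses Fluent Bit's `cloudwatch_logs` output plugin; the Region and log group name are placeholders you would replace:

```
    [OUTPUT]
        Name              cloudwatch_logs
        Match             eksnp.*
        region            region-code
        log_group_name    /aws/eks/my-cluster/network-policy-agent
        log_stream_prefix from-
        auto_create_group true
```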

## Included eBPF SDK


The Amazon VPC CNI plugin for Kubernetes installs the eBPF SDK collection of tools on the nodes. You can use the eBPF SDK tools to identify issues with network policies. For example, the following command lists the eBPF programs that are running on the node.

```
sudo /opt/cni/bin/aws-eks-na-cli ebpf progs
```

To run this command, you can use any method to connect to the node.

## Known issues and solutions


The following sections describe known issues with the Amazon VPC CNI network policy feature and their solutions.

### Network policy logs generated despite enable-policy-event-logs set to false


 **Issue**: The EKS VPC CNI generates network policy logs even when the `enable-policy-event-logs` setting is set to `false`.

 **Solution**: The `enable-policy-event-logs` setting only disables the policy "decision" logs, but it won’t disable all Network Policy agent logging. This behavior is documented in the [aws-network-policy-agent README](https://github.com/aws/aws-network-policy-agent/) on GitHub. To completely disable logging, you might need to adjust other logging configurations.

### Network policy map cleanup issues


 **Issue**: Network `policyendpoint` objects still exist and aren’t cleaned up after Pods are deleted.

 **Solution**: This issue was caused by a problem with the VPC CNI add-on version 1.19.3-eksbuild.1. Update to a newer version of the VPC CNI add-on to resolve this issue.

### Network policies aren’t applied


 **Issue**: Network policy feature is enabled in the Amazon VPC CNI plugin, but network policies are not being applied correctly.

If you create a network policy (`kind: NetworkPolicy`) and it doesn’t affect the Pod, check that a `policyendpoint` object was created in the same namespace as the Pod. If there aren’t `policyendpoint` objects in the namespace, the network policy controller (part of the EKS cluster control plane) was unable to create network policy rules for the network policy agent (part of the VPC CNI) to apply.

 **Solution**: Fix the permissions of the VPC CNI (`ClusterRole`: `aws-node`) and the network policy controller (`ClusterRole`: `eks:network-policy-controller`), and allow these actions in any policy enforcement tool, such as Kyverno. Ensure that Kyverno policies aren’t blocking the creation of `policyendpoint` objects. For the necessary permissions, see [New `policyendpoints` CRD and permissions](#network-policies-troubleshooting-permissions).

### Pods don’t return to default deny state after policy deletion in strict mode


 **Issue**: When network policies are enabled in strict mode, pods start with a default deny policy. After policies are applied, traffic is allowed to the specified endpoints. However, when policies are deleted, the pod doesn’t return to the default deny state and instead goes to a default allow state.

 **Solution**: This issue was fixed in the VPC CNI release 1.19.3, which included the network policy agent 1.2.0 release. After the fix, with strict mode enabled, once policies are removed, the pod will fall back to the default deny state as expected.

### Security Groups for Pods startup latency


 **Issue**: When using the Security Groups for Pods feature in EKS, there is increased pod startup latency.

 **Solution**: The latency is caused by API throttling of the `CreateNetworkInterface` operation, which the VPC resource controller uses to create branch ENIs for the Pods. Check your account’s API limits for this operation and consider requesting a limit increase if needed.

### FailedScheduling due to insufficient vpc.amazonaws.com/pod-eni


 **Issue**: Pods fail to schedule with the error: `FailedScheduling 2m53s (x28 over 137m) default-scheduler 0/5 nodes are available: 5 Insufficient vpc.amazonaws.com/pod-eni. preemption: 0/5 nodes are available: 5 No preemption victims found for incoming pod.` 

 **Solution**: As with the previous issue, assigning security groups to Pods increases Pod scheduling latency. The time to attach each branch ENI can exceed the CNI threshold, causing Pod startup failures. This is expected behavior when using security groups for Pods. Consider the scheduling implications when designing your workload architecture.

### IPAM connectivity issues and segmentation faults


 **Issue**: Multiple errors occur including IPAM connectivity issues, throttling requests, and segmentation faults:
+  `Checking for IPAM connectivity ...` 
+  `Throttling request took 1.047064274s` 
+  `Retrying waiting for IPAM-D` 
+  `panic: runtime error: invalid memory address or nil pointer dereference` 

 **Solution**: This issue occurs if you install `systemd-udev` on AL2023, as the file is re-written with a breaking policy. This can happen when updating to a different `releasever` that has an updated package or manually updating the package itself. Avoid installing or updating `systemd-udev` on AL2023 nodes.

### Failed to find device by name error


 **Issue**: Error message: `{"level":"error","ts":"2025-02-05T20:27:18.669Z","caller":"ebpf/bpf_client.go:578","msg":"failed to find device by name eni9ea69618bf0: %!w(netlink.LinkNotFoundError={0xc000115310})"}` 

 **Solution**: This issue has been identified and fixed in the latest versions of the Amazon VPC CNI network policy agent (v1.2.0). Update to the latest version of the VPC CNI to resolve this issue.

### CVE vulnerabilities in Multus CNI image


 **Issue**: Enhanced EKS ImageScan CVE Report identifies vulnerabilities in the Multus CNI image version `v4.1.4-eksbuild.2_thick`.

 **Solution**: Update to the new versions of the Multus CNI image and the Network Policy Controller image, which address the vulnerabilities found in the previous version.

### Flow Info DENY verdicts in logs


 **Issue**: Network policy logs show DENY verdicts: `{"level":"info","ts":"2024-11-25T13:34:24.808Z","logger":"ebpf-client","caller":"events/events.go:193","msg":"Flow Info: ","Src IP":"","Src Port":9096,"Dest IP":"","Dest Port":56830,"Proto":"TCP","Verdict":"DENY"}` 

 **Solution**: This issue has been resolved in the new version of the Network Policy Controller. Update to the latest EKS platform version to resolve logging issues.

### Pod-to-pod communication issues after migrating from Calico


 **Issue**: After upgrading an EKS cluster to version 1.30 and switching from Calico to Amazon VPC CNI for network policy, pod-to-pod communication fails when network policies are applied. Communication is restored when network policies are deleted.

 **Solution**: The network policy agent in the VPC CNI doesn’t support as many individual ports in a policy as Calico does. The maximum number of unique combinations of ports for each protocol in each `ingress:` or `egress:` selector in a network policy is 24. Use port ranges in your network policies to reduce the number of unique ports and avoid this limitation.
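As a sketch of the workaround, the following hypothetical policy replaces a long list of individual ports with a single `port`/`endPort` range (supported in `networking.k8s.io/v1` on Kubernetes 1.25 and later); the names and the range are illustrative:

```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-port-range
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: my-app
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: client
      ports:
        - protocol: TCP
          port: 8000
          endPort: 8080
```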

### Network policy agent doesn’t support standalone pods


 **Issue**: Network policies applied to standalone pods may have inconsistent behavior.

 **Solution**: The network policy agent currently only supports Pods that are deployed as part of a Deployment or ReplicaSet. If network policies are applied to standalone Pods, there might be some inconsistencies in the behavior. This is documented at the top of this page, in the [Considerations](cni-network-policy.md#cni-network-policy-considerations), and in [aws-network-policy-agent issue #327](https://github.com/aws/aws-network-policy-agent/issues/327) on GitHub. Deploy Pods as part of a Deployment or ReplicaSet for consistent network policy behavior.

# Stars demo of network policy for Amazon EKS
Stars policy demo

This demo creates a front-end, back-end, and client service on your Amazon EKS cluster. The demo also creates a management graphical user interface that shows the available ingress and egress paths between each service. We recommend that you complete the demo on a cluster that you don’t run production workloads on.

Before you create any network policies, all services can communicate bidirectionally. After you apply the network policies, you can see that the client can only communicate with the front-end service, and the back-end only accepts traffic from the front-end.

1. Apply the front-end, back-end, client, and management user interface services:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   ```

1. View all Pods on the cluster.

   ```
   kubectl get pods -A
   ```

   An example output is as follows.

   In your output, you should see Pods in the namespaces shown in the following output. The names of your Pods and the number of Pods in the `READY` column differ from those in the following output. Don’t continue until you see Pods with similar names that all have `Running` in the `STATUS` column.

   ```
   NAMESPACE         NAME                                       READY   STATUS    RESTARTS   AGE
   [...]
   client            client-xlffc                               1/1     Running   0          5m19s
   [...]
   management-ui     management-ui-qrb2g                        1/1     Running   0          5m24s
   stars             backend-sz87q                              1/1     Running   0          5m23s
   stars             frontend-cscnf                             1/1     Running   0          5m21s
   [...]
   ```

1. To find the address of the management user interface, get the `EXTERNAL-IP` of the service running on your cluster:

   ```
   kubectl get service/management-ui -n management-ui
   ```

1. Open a browser to the location from the previous step. You should see the management user interface. The **C** node is the client service, the **F** node is the front-end service, and the **B** node is the back-end service. Each node has full communication access to all other nodes, as indicated by the bold, colored lines.  
![\[Open network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-default.png)

1. Apply the following network policy in both the `stars` and `client` namespaces to isolate the services from each other:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     name: default-deny
   spec:
     podSelector:
       matchLabels: {}
   ```

   You can use the following commands to apply the policy to both namespaces:

   ```
   kubectl apply -n stars -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   kubectl apply -n client -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/default-deny.yaml
   ```

1. Refresh your browser. You see that the management user interface can no longer reach any of the nodes, so they don’t show up in the user interface.

1. Apply the following network policies to allow the management user interface to access the services. Apply this policy to allow the UI to reach services in the `stars` namespace:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   Apply this policy to allow the client:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: client
     name: allow-ui
   spec:
     podSelector:
       matchLabels: {}
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: management-ui
   ```

   You can use the following commands to apply both policies:

   ```
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui.yaml
   kubectl apply -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/apply_network_policies.files/allow-ui-client.yaml
   ```

1. Refresh your browser. You see that the management user interface can reach the nodes again, but the nodes cannot communicate with each other.  
![\[UI access network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-no-traffic.png)

1. Apply the following network policy to allow traffic from the front-end service to the back-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: backend-policy
   spec:
     podSelector:
       matchLabels:
         role: backend
     ingress:
       - from:
           - podSelector:
               matchLabels:
                 role: frontend
         ports:
           - protocol: TCP
             port: 6379
   ```

1. Refresh your browser. You see that the front-end can communicate with the back-end.  
![\[Front-end to back-end policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-front-end-back-end.png)

1. Apply the following network policy to allow traffic from the client to the front-end service:

   ```
   kind: NetworkPolicy
   apiVersion: networking.k8s.io/v1
   metadata:
     namespace: stars
     name: frontend-policy
   spec:
     podSelector:
       matchLabels:
         role: frontend
     ingress:
       - from:
           - namespaceSelector:
               matchLabels:
                 role: client
         ports:
           - protocol: TCP
             port: 80
   ```

1. Refresh your browser. You see that the client can communicate to the front-end service. The front-end service can still communicate to the back-end service.  
![\[Final network policy\]](http://docs.aws.amazon.com/eks/latest/userguide/images/stars-final.png)

1. (Optional) When you are done with the demo, you can delete its resources.

   ```
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/client.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/frontend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/backend.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/management-ui.yaml
   kubectl delete -f https://raw.githubusercontent.com/aws-samples/eks-workshop/2f9d29ed3f82ed6b083649e975a0e574fb8a4058/content/beginner/120_network-policies/calico/stars_policy_demo/create_resources.files/namespace.yaml
   ```

   Even after you delete the resources, network policy endpoints might remain on the nodes and interfere with networking in your cluster in unexpected ways. The only sure way to remove these rules is to reboot the nodes, or to terminate all of the nodes and replace them. To terminate all nodes, either set the Auto Scaling group desired count to 0 and then back up to the desired number, or terminate the nodes directly.

# Deploy Pods in alternate subnets with custom networking
Custom networking

 **Applies to**: Linux `IPv4` Fargate nodes, Linux nodes with Amazon EC2 instances

![\[Diagram of node with multiple network interfaces\]](http://docs.aws.amazon.com/eks/latest/userguide/images/cn-image.png)


By default, when the Amazon VPC CNI plugin for Kubernetes creates secondary [elastic network interfaces](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html) (network interfaces) for your Amazon EC2 node, it creates them in the same subnet as the node’s primary network interface. It also associates the same security groups to the secondary network interface that are associated to the primary network interface. For one or more of the following reasons, you might want the plugin to create secondary network interfaces in a different subnet or want to associate different security groups to the secondary network interfaces, or both:
+ There’s a limited number of `IPv4` addresses available in the subnet that the primary network interface is in. This might limit the number of Pods that you can create in the subnet. By using a different subnet for secondary network interfaces, you can increase the number of `IPv4` addresses available for Pods.
+ For security reasons, your Pods might need to use a different subnet or security groups than the node’s primary network interface.
+ The nodes are configured in public subnets, and you want to place the Pods in private subnets. The route table associated to a public subnet includes a route to an internet gateway. The route table associated to a private subnet doesn’t include a route to an internet gateway.

**Tip**  
You can also add a new or existing subnet directly to your Amazon EKS Cluster, without using custom networking. For more information, see [Add an existing VPC Subnet to an Amazon EKS cluster from the management console](eks-networking.md#add-existing-subnet).

## Considerations


The following are considerations for using the feature.
+ With custom networking enabled, no IP addresses assigned to the primary network interface are assigned to Pods. Only IP addresses from secondary network interfaces are assigned to Pods.
+ If your cluster uses the `IPv6` family, you can’t use custom networking.
+ If you plan to use custom networking only to help alleviate `IPv4` address exhaustion, you can create a cluster using the `IPv6` family instead. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Even though Pods deployed to subnets specified for secondary network interfaces can use different subnet and security groups than the node’s primary network interface, the subnets and security groups must be in the same VPC as the node.
+ For Fargate, subnets are controlled through the Fargate profile. For more information, see [Define which Pods use AWS Fargate when launched](fargate-profile.md).

# Customize the secondary network interface in Amazon EKS nodes
Secondary interface

Complete the following before you start the tutorial:
+ Review the considerations in the previous section.
+ Be familiar with how the Amazon VPC CNI plugin for Kubernetes creates secondary network interfaces and assigns IP addresses to Pods. For more information, see [ENI Allocation](https://github.com/aws/amazon-vpc-cni-k8s#eni-allocation) on GitHub.
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ The `kubectl` command line tool is installed on your device or AWS CloudShell. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ We recommend that you complete the steps in this topic in a Bash shell. If you aren’t using a Bash shell, some script commands such as line continuation characters and the way variables are set and used require adjustment for your shell. Additionally, the quoting and escaping rules for your shell might be different. For more information, see [Using quotation marks with strings in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-quoting-strings.html) in the AWS Command Line Interface User Guide.

For this tutorial, we recommend using the example values, except where it’s noted to replace them. You can replace any example value when completing the steps for a production cluster. We recommend completing all steps in the same terminal. This is because variables are set and used throughout the steps and won’t exist in different terminals.

The commands in this topic are formatted using the conventions listed in [Using the AWS CLI examples](https://docs.aws.amazon.com/cli/latest/userguide/welcome-examples.html). If you’re running commands from the command line against resources that are in a different AWS Region than the default AWS Region defined in the AWS CLI [profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-profiles) that you’re using, then you need to add `--region us-west-2` to the commands, replacing `us-west-2` with your AWS region.

When you want to deploy custom networking to your production cluster, skip to [Step 2: Configure your VPC](#custom-networking-configure-vpc).

## Step 1: Create a test VPC and cluster


The following procedures help you create a test VPC and cluster and configure custom networking for that cluster. We don’t recommend using the test cluster for production workloads because several unrelated features that you might use on your production cluster aren’t covered in this topic. For more information, see [Create an Amazon EKS cluster](create-cluster.md).

1. Run the following command to define the `account_id` variable.

   ```
   account_id=$(aws sts get-caller-identity --query Account --output text)
   ```

1. Create a VPC.

   1. If you are deploying to a test system, create a VPC using an Amazon EKS AWS CloudFormation template.

      ```
      aws cloudformation create-stack --stack-name my-eks-custom-networking-vpc \
        --template-url https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/amazon-eks-vpc-private-subnets.yaml \
        --parameters ParameterKey=VpcBlock,ParameterValue=192.168.0.0/24 \
        ParameterKey=PrivateSubnet01Block,ParameterValue=192.168.0.64/27 \
        ParameterKey=PrivateSubnet02Block,ParameterValue=192.168.0.96/27 \
        ParameterKey=PublicSubnet01Block,ParameterValue=192.168.0.0/27 \
        ParameterKey=PublicSubnet02Block,ParameterValue=192.168.0.32/27
      ```

   1. The AWS CloudFormation stack takes a few minutes to create. To check on the stack’s deployment status, run the following command.

      ```
      aws cloudformation describe-stacks --stack-name my-eks-custom-networking-vpc --query 'Stacks[].StackStatus' --output text
      ```

      Don’t continue to the next step until the output of the command is `CREATE_COMPLETE`.

   1. Define variables with the values of the private subnet IDs created by the template.

      ```
      subnet_id_1=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
          --query "StackResources[?LogicalResourceId=='PrivateSubnet01'].PhysicalResourceId" --output text)
      subnet_id_2=$(aws cloudformation describe-stack-resources --stack-name my-eks-custom-networking-vpc \
          --query "StackResources[?LogicalResourceId=='PrivateSubnet02'].PhysicalResourceId" --output text)
      ```

   1. Define variables with the Availability Zones of the subnets retrieved in the previous step.

      ```
      az_1=$(aws ec2 describe-subnets --subnet-ids $subnet_id_1 --query 'Subnets[*].AvailabilityZone' --output text)
      az_2=$(aws ec2 describe-subnets --subnet-ids $subnet_id_2 --query 'Subnets[*].AvailabilityZone' --output text)
      ```

1. Create a cluster IAM role.

   1. Run the following command to create an IAM trust policy JSON file named `eks-cluster-role-trust-policy.json`.

      ```
      cat >eks-cluster-role-trust-policy.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "eks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Create the Amazon EKS cluster IAM role. If necessary, preface `eks-cluster-role-trust-policy.json` with the path on your computer that you wrote the file to in the previous step. The command associates the trust policy that you created in the previous step to the role. To create an IAM role, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that is creating the role must be assigned the `iam:CreateRole` action (permission).

      ```
      aws iam create-role --role-name myCustomNetworkingAmazonEKSClusterRole --assume-role-policy-document file://"eks-cluster-role-trust-policy.json"
      ```

   1. Attach the Amazon EKS managed policy named [AmazonEKSClusterPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKSClusterPolicy.html#AmazonEKSClusterPolicy-json) to the role. To attach an IAM policy to an [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), the principal that is attaching the policy must be assigned one of the following IAM actions (permissions): `iam:AttachUserPolicy` or `iam:AttachRolePolicy`.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy --role-name myCustomNetworkingAmazonEKSClusterRole
      ```

1. Create an Amazon EKS cluster and configure your device to communicate with it.

   1. Create a cluster.

      ```
      aws eks create-cluster --name my-custom-networking-cluster \
         --role-arn arn:aws:iam::$account_id:role/myCustomNetworkingAmazonEKSClusterRole \
         --resources-vpc-config subnetIds="$subnet_id_1","$subnet_id_2"
      ```
**Note**  
You might receive an error that one of the Availability Zones in your request doesn’t have sufficient capacity to create an Amazon EKS cluster. If this happens, the error output contains the Availability Zones that can support a new cluster. Retry creating your cluster with at least two subnets that are located in the supported Availability Zones for your account. For more information, see [Insufficient capacity](troubleshooting.md#ice).

   1. The cluster takes several minutes to create. To check on the cluster’s deployment status, run the following command.

      ```
      aws eks describe-cluster --name my-custom-networking-cluster --query cluster.status
      ```

      Don’t continue to the next step until the output of the command is `"ACTIVE"`.

   1. Configure `kubectl` to communicate with your cluster.

      ```
      aws eks update-kubeconfig --name my-custom-networking-cluster
      ```
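
Several steps in this tutorial wait for an AWS resource to reach a target status (`CREATE_COMPLETE`, `ACTIVE`, and so on) before continuing. If you script the tutorial, that wait can be factored into a small helper. The following is an illustrative sketch only, not part of the tutorial; `fetch_status` is a hypothetical stand-in for whichever `aws ... describe-*` command applies.

```python
import time

def wait_for_status(fetch_status, target, poll_seconds=15, timeout_seconds=1800):
    """Poll fetch_status() until it returns target, then return it.

    fetch_status is any zero-argument callable; in this tutorial it would
    wrap a command such as `aws eks describe-cluster ... --query cluster.status`.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == target:
            return status
        # Fail fast on terminal states rather than waiting out the timeout.
        if status.endswith("FAILED") or status.startswith("ROLLBACK"):
            raise RuntimeError(f"unexpected status: {status}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"gave up waiting for status {target}")
```

For example, `fetch_status` could wrap `subprocess.run` around the `describe-stacks` command shown earlier and return the parsed status string.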

## Step 2: Configure your VPC


This tutorial requires the VPC created in [Step 1: Create a test VPC and cluster](#custom-networking-create-cluster). For a production cluster, adjust the steps accordingly for your VPC by replacing all of the example values with your own.

1. Confirm that your currently-installed Amazon VPC CNI plugin for Kubernetes is the latest version. To determine the latest version for the Amazon EKS add-on type and update your version to it, see [Update an Amazon EKS add-on](updating-an-add-on.md). To determine the latest version for the self-managed add-on type and update your version to it, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).

1. Retrieve the ID of your cluster VPC and store it in a variable for use in later steps.

   ```
   vpc_id=$(aws eks describe-cluster --name my-custom-networking-cluster --query "cluster.resourcesVpcConfig.vpcId" --output text)
   ```

1. Associate an additional Classless Inter-Domain Routing (CIDR) block with your cluster’s VPC. The CIDR block can’t overlap with any existing associated CIDR blocks.

   1. View the current CIDR blocks associated to your VPC.

      ```
      aws ec2 describe-vpcs --vpc-ids $vpc_id \
          --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --out table
      ```

      An example output is as follows.

      ```
      ----------------------------------
      |          DescribeVpcs          |
      +-----------------+--------------+
      |    CIDRBlock    |    State     |
      +-----------------+--------------+
      |  192.168.0.0/24 |  associated  |
      +-----------------+--------------+
      ```

   1. Associate an additional CIDR block to your VPC. Replace the CIDR block value in the following command. For more information, see [Associate additional IPv4 CIDR blocks with your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/modify-vpcs.html#add-ipv4-cidr) in the Amazon VPC User Guide.

      ```
      aws ec2 associate-vpc-cidr-block --vpc-id $vpc_id --cidr-block 192.168.1.0/24
      ```

   1. Confirm that the new block is associated.

      ```
      aws ec2 describe-vpcs --vpc-ids $vpc_id --query 'Vpcs[*].CidrBlockAssociationSet[*].{CIDRBlock: CidrBlock, State: CidrBlockState.State}' --out table
      ```

      An example output is as follows.

      ```
      ----------------------------------
      |          DescribeVpcs          |
      +-----------------+--------------+
      |    CIDRBlock    |    State     |
      +-----------------+--------------+
      |  192.168.0.0/24 |  associated  |
      |  192.168.1.0/24 |  associated  |
      +-----------------+--------------+
      ```

   Don’t proceed to the next step until your new CIDR block’s `State` is `associated`.

1. Create as many subnets as you want to use in each Availability Zone that your existing subnets are in. Specify a CIDR block that’s within the CIDR block that you associated with your VPC in a previous step.

   1. Create new subnets. Replace the CIDR block values in the following command. The subnets must be created in a different VPC CIDR block than your existing subnets are in, but in the same Availability Zones as your existing subnets. In this example, one subnet is created in the new CIDR block in each Availability Zone that the current private subnets exist in. The IDs of the subnets created are stored in variables for use in later steps. The `Name` values match the values assigned to the subnets created using the Amazon EKS VPC template in a previous step. Names aren’t required. You can use different names.

      ```
      new_subnet_id_1=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_1 --cidr-block 192.168.1.0/27 \
          --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet01},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
          --query Subnet.SubnetId --output text)
      new_subnet_id_2=$(aws ec2 create-subnet --vpc-id $vpc_id --availability-zone $az_2 --cidr-block 192.168.1.32/27 \
          --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=my-eks-custom-networking-vpc-PrivateSubnet02},{Key=kubernetes.io/role/internal-elb,Value=1}]' \
          --query Subnet.SubnetId --output text)
      ```
**Important**  
By default, your new subnets are implicitly associated with your VPC’s [main route table](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#RouteTables). This route table allows communication between all the resources that are deployed in the VPC. However, it doesn’t allow communication with resources that have IP addresses that are outside the CIDR blocks that are associated with your VPC. You can associate your own route table to your subnets to change this behavior. For more information, see [Subnet route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html#subnet-route-tables) in the Amazon VPC User Guide.

   1. View the current subnets in your VPC.

      ```
      aws ec2 describe-subnets --filters "Name=vpc-id,Values=$vpc_id" \
          --query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
          --output table
      ```

      An example output is as follows.

      ```
      ----------------------------------------------------------------------
      |                           DescribeSubnets                          |
      +------------------+--------------------+----------------------------+
      | AvailabilityZone |     CidrBlock      |         SubnetId           |
      +------------------+--------------------+----------------------------+
      |  us-west-2d      |  192.168.0.0/27    |     subnet-example1        |
      |  us-west-2a      |  192.168.0.32/27   |     subnet-example2        |
      |  us-west-2a      |  192.168.0.64/27   |     subnet-example3        |
      |  us-west-2d      |  192.168.0.96/27   |     subnet-example4        |
      |  us-west-2a      |  192.168.1.0/27    |     subnet-example5        |
      |  us-west-2d      |  192.168.1.32/27   |     subnet-example6        |
      +------------------+--------------------+----------------------------+
      ```

      You can see that the subnets you created in the `192.168.1.0` CIDR block are in the same Availability Zones as the subnets in the `192.168.0.0` CIDR block.
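
The constraints in this step — the new CIDR block can’t overlap an existing block, the new subnets must be carved from it, and each new subnet must share an Availability Zone with an existing node subnet — can be sanity-checked locally before you run any AWS commands. The following sketch uses Python’s standard `ipaddress` module with the example values from this tutorial (the subnet IDs are placeholders, not real resources):

```python
import ipaddress
from collections import defaultdict

original_block = ipaddress.ip_network("192.168.0.0/24")
new_block = ipaddress.ip_network("192.168.1.0/24")

# associate-vpc-cidr-block would fail if the new block overlapped an existing one.
assert not new_block.overlaps(original_block)

# Rows shaped like the describe-subnets output above (IDs are examples).
subnets = [
    ("us-west-2a", "192.168.0.64/27", "subnet-example3"),   # existing private
    ("us-west-2d", "192.168.0.96/27", "subnet-example4"),   # existing private
    ("us-west-2a", "192.168.1.0/27",  "subnet-example5"),   # new
    ("us-west-2d", "192.168.1.32/27", "subnet-example6"),   # new
]

by_az = defaultdict(lambda: {"existing": [], "new": []})
for az, cidr, subnet_id in subnets:
    net = ipaddress.ip_network(cidr)
    # A subnet counts as "new" if it falls within the newly associated block.
    kind = "new" if net.subnet_of(new_block) else "existing"
    by_az[az][kind].append(subnet_id)

# Each AZ that holds a new subnet must also hold an existing node subnet.
for az, groups in by_az.items():
    assert not groups["new"] or groups["existing"], f"no node subnet in {az}"
print("subnet plan is consistent")
```

The same checks apply unchanged if you substitute your own production CIDR blocks and subnet IDs.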

## Step 3: Configure Kubernetes resources


1. Set the `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG` environment variable to `true` in the `aws-node` DaemonSet.

   ```
   kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
   ```

1. Retrieve the ID of your [cluster security group](sec-group-reqs.md) and store it in a variable for use in the next step. Amazon EKS automatically creates this security group when you create your cluster.

   ```
   cluster_security_group_id=$(aws eks describe-cluster --name my-custom-networking-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text)
   ```

1.  Create an `ENIConfig` custom resource for each subnet that you want to deploy Pods in.

   1. Create a unique file for each network interface configuration.

      The following commands create separate `ENIConfig` files for the two subnets that were created in a previous step. The value for `name` must be unique. The name is the same as the Availability Zone that the subnet is in. The cluster security group is assigned to the `ENIConfig`.

      ```
      cat >$az_1.yaml <<EOF
      apiVersion: crd.k8s.amazonaws.com/v1alpha1
      kind: ENIConfig
      metadata:
        name: $az_1
      spec:
        securityGroups:
          - $cluster_security_group_id
        subnet: $new_subnet_id_1
      EOF
      ```

      ```
      cat >$az_2.yaml <<EOF
      apiVersion: crd.k8s.amazonaws.com/v1alpha1
      kind: ENIConfig
      metadata:
        name: $az_2
      spec:
        securityGroups:
          - $cluster_security_group_id
        subnet: $new_subnet_id_2
      EOF
      ```

      For a production cluster, you can make the following changes to the previous commands:
      + Replace `$cluster_security_group_id` with the ID of an existing [security group](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) that you want to use for each `ENIConfig`.
      + We recommend naming your `ENIConfigs` the same as the Availability Zone that you’ll use the `ENIConfig` for, whenever possible. You might need to use different names for your `ENIConfigs` than the names of the Availability Zones for a variety of reasons. For example, if you have more than two subnets in the same Availability Zone and want to use them both with custom networking, then you need multiple `ENIConfigs` for the same Availability Zone. Since each `ENIConfig` requires a unique name, you can’t name more than one of your `ENIConfigs` using the Availability Zone name.

        If your `ENIConfig` names aren’t all the same as Availability Zone names, then replace `$az_1` and `$az_2` with your own names in the previous commands and [annotate your nodes with the ENIConfig](#custom-networking-annotate-eniconfig) later in this tutorial.
**Note**  
If you don’t specify a valid security group for use with a production cluster and you’re using:
      + version `1.8.0` or later of the Amazon VPC CNI plugin for Kubernetes, then the security groups associated with the node’s primary elastic network interface are used.
      + a version of the Amazon VPC CNI plugin for Kubernetes that’s earlier than `1.8.0`, then the default security group for the VPC is assigned to secondary network interfaces.
**Important**  
 `AWS_VPC_K8S_CNI_EXTERNALSNAT=false` is a default setting in the configuration for the Amazon VPC CNI plugin for Kubernetes. If you’re using the default setting, then traffic that is destined for IP addresses that aren’t within one of the CIDR blocks associated with your VPC use the security groups and subnets of your node’s primary network interface. The subnets and security groups defined in your `ENIConfigs` that are used to create secondary network interfaces aren’t used for this traffic. For more information about this setting, see [Enable outbound internet access for Pods](external-snat.md).
If you also use security groups for Pods, the security group that’s specified in a `SecurityGroupPolicy` is used instead of the security group that’s specified in the `ENIConfigs`. For more information, see [Assign security groups to individual Pods](security-groups-for-pods.md).

   1. Apply each custom resource file that you created to your cluster with the following commands.

      ```
      kubectl apply -f $az_1.yaml
      kubectl apply -f $az_2.yaml
      ```

1. Confirm that your `ENIConfigs` were created.

   ```
   kubectl get ENIConfigs
   ```

   An example output is as follows.

   ```
   NAME         AGE
   us-west-2a   117s
   us-west-2d   105s
   ```

1. If you’re enabling custom networking on a production cluster and named your `ENIConfigs` something other than the Availability Zone that you’re using them for, then skip to the [next step](#custom-networking-deploy-nodes) to deploy Amazon EC2 nodes.

   Enable Kubernetes to automatically apply the `ENIConfig` for an Availability Zone to any new Amazon EC2 nodes created in your cluster.

   1. For the test cluster in this tutorial, skip to the [next step](#custom-networking-automatically-apply-eniconfig).

      For a production cluster, check to see if the [`ENI_CONFIG_ANNOTATION_DEF`](https://github.com/aws/amazon-vpc-cni-k8s#eni_config_annotation_def) environment variable, which specifies a node annotation key such as `k8s.amazonaws.com/eniConfig`, exists in the container spec for the `aws-node` DaemonSet.

      ```
      kubectl describe daemonset aws-node -n kube-system | grep ENI_CONFIG_ANNOTATION_DEF
      ```

      If output is returned, then the variable is set. If no output is returned, then the variable is not set. For a production cluster, you can use either this setting or the setting in the following step. If you use this setting, it overrides the setting in the following step. In this tutorial, the setting in the next step is used.

   1.  Update your `aws-node` DaemonSet to automatically apply the `ENIConfig` for an Availability Zone to any new Amazon EC2 nodes created in your cluster.

      ```
      kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
      ```

## Step 4: Deploy Amazon EC2 nodes


1. Create a node IAM role.

   1. Run the following command to create an IAM trust policy JSON file named `node-role-trust-relationship.json`.

      ```
      cat >node-role-trust-relationship.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Create an IAM role and store its returned Amazon Resource Name (ARN) in a variable for use in a later step.

      ```
      node_role_arn=$(aws iam create-role --role-name myCustomNetworkingNodeRole --assume-role-policy-document file://"node-role-trust-relationship.json" \
          --query Role.Arn --output text)
      ```

   1. Attach three required IAM managed policies to the IAM role.

      ```
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
        --role-name myCustomNetworkingNodeRole
      aws iam attach-role-policy \
        --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
        --role-name myCustomNetworkingNodeRole
      aws iam attach-role-policy \
          --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
          --role-name myCustomNetworkingNodeRole
      ```
**Important**  
For simplicity in this tutorial, the [`AmazonEKS_CNI_Policy`](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKS_CNI_Policy.html) policy is attached to the node IAM role. In a production cluster however, we recommend attaching the policy to a separate IAM role that is used only with the Amazon VPC CNI plugin for Kubernetes. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).

1. Create one of the following types of node groups. To determine the instance type that you want to deploy, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). For this tutorial, complete the **Managed**, **Without a launch template or with a launch template without an AMI ID specified** option. If you’re going to use the node group for production workloads, then we recommend that you familiarize yourself with all of the [managed node group](create-managed-node-group.md) and [self-managed node group](worker.md) options before deploying the node group.
   +  **Managed** – Deploy your node group using one of the following options:
     +  **Without a launch template or with a launch template without an AMI ID specified** – Run the following command. For this tutorial, use the example values. For a production node group, replace all example values with your own. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters.

       ```
       aws eks create-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup \
           --subnets $subnet_id_1 $subnet_id_2 --instance-types t3.medium --node-role $node_role_arn
       ```
     +  **With a launch template with a specified AMI ID** 

       1. Determine the maximum number of Pods for your nodes based on your instance type. For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence). Note the value for use in the next step.

       1. In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI, then [deploy the node group using a launch template](launch-templates.md) and provide the following user data in the launch template. This user data passes arguments into the `NodeConfig` specification. For more information about NodeConfig, see the [NodeConfig API reference](https://awslabs.github.io/amazon-eks-ami/nodeadm/doc/api/#nodeconfig). You can replace `20` with either the value from the previous step (recommended) or your own value.

          ```
          MIME-Version: 1.0
          Content-Type: multipart/mixed; boundary="BOUNDARY"

          --BOUNDARY
          Content-Type: application/node.eks.aws

          ---
          apiVersion: node.eks.aws/v1alpha1
          kind: NodeConfig
          spec:
            cluster:
              name: my-cluster
              ...
            kubelet:
              config:
                maxPods: 20

          --BOUNDARY--
          ```

          If you’ve created a custom AMI that is not built off the Amazon EKS optimized AMI, then you need to create this configuration yourself.
   +  **Self-managed** 

     1. Determine the maximum number of Pods for your nodes based on your instance type. For more information, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence). Note the value for use in the next step.

     1. Deploy the node group using the instructions in [Create self-managed Amazon Linux nodes](launch-workers.md).
**Note**  
If you want nodes in a production cluster to support a significantly higher number of Pods, you can enable prefix delegation. For example, with prefix delegation enabled, the maximum Pods calculation returns `110` for an `m5.large` instance type. For instructions on how to enable this capability, see [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md). You can use this capability with custom networking.
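
To make the maximum-Pods numbers above concrete, the following is a sketch of the formula that AWS’s max-pods calculation applies, assuming the published ENI and IP limits for these instance types. It’s illustrative only; for authoritative values, use the calculation referenced in [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).

```python
def max_pods(enis, ips_per_eni, prefix_delegation=False, custom_networking=False):
    """Sketch of the Amazon VPC CNI maximum-Pods formula.

    - Custom networking doesn't use the primary ENI for Pods, so one
      fewer ENI carries Pod IP addresses.
    - Prefix delegation turns each secondary IP slot into a /28 prefix
      (16 addresses).
    - Kubernetes guidance caps smaller instances at 110 Pods per node.
    """
    if custom_networking:
        enis -= 1
    per_eni = ips_per_eni - 1          # the first IP on each ENI is reserved
    if prefix_delegation:
        per_eni *= 16
    return min(enis * per_eni + 2, 110)

# m5.large: 3 ENIs x 10 IPs each. With prefix delegation the raw value
# is far larger, so the Kubernetes-recommended cap of 110 applies.
print(max_pods(3, 10))                            # 29
print(max_pods(3, 10, prefix_delegation=True))    # 110
# t3.medium (this tutorial's instance type): 3 ENIs x 6 IPs each.
print(max_pods(3, 6))                             # 17
print(max_pods(3, 6, custom_networking=True))     # 12
```

The last line shows why custom networking reduces the Pod capacity of a node: the primary network interface no longer contributes Pod IP addresses.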

1. Node group creation takes several minutes. You can check the status of the creation of a managed node group with the following command.

   ```
   aws eks describe-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup --query nodegroup.status --output text
   ```

   Don’t continue to the next step until the output returned is `ACTIVE`.

1.  For the tutorial, you can skip this step.

   For a production cluster, if you didn’t name your `ENIConfigs` the same as the Availability Zone that you’re using them for, then you must annotate your nodes with the `ENIConfig` name that should be used with the node. This step isn’t necessary if you only have one subnet in each Availability Zone and you named your `ENIConfigs` with the same names as your Availability Zones. This is because the Amazon VPC CNI plugin for Kubernetes automatically associates the correct `ENIConfig` with the node for you when you enabled it to do so in a [previous step](#custom-networking-automatically-apply-eniconfig).

   1. Get the list of nodes in your cluster.

      ```
      kubectl get nodes
      ```

      An example output is as follows.

      ```
      NAME                                          STATUS   ROLES    AGE     VERSION
      ip-192-168-0-126.us-west-2.compute.internal   Ready    <none>   8m49s   v1.22.9-eks-810597c
      ip-192-168-0-92.us-west-2.compute.internal    Ready    <none>   8m34s   v1.22.9-eks-810597c
      ```

   1. Determine which Availability Zone each node is in. Run the following command for each node that was returned in the previous step, replacing the IP addresses based on the previous output.

      ```
      aws ec2 describe-instances --filters Name=network-interface.private-dns-name,Values=ip-192-168-0-126.us-west-2.compute.internal \
      --query 'Reservations[].Instances[].{AvailabilityZone: Placement.AvailabilityZone, SubnetId: SubnetId}'
      ```

      An example output is as follows.

      ```
      [
          {
              "AvailabilityZone": "us-west-2d",
              "SubnetId": "subnet-Example5"
          }
      ]
      ```

   1. Annotate each node with the `ENIConfig` that you created for the subnet ID and Availability Zone. You can only annotate a node with one `ENIConfig`, though multiple nodes can be annotated with the same `ENIConfig`. Replace the example values with your own.

      ```
      kubectl annotate node ip-192-168-0-126.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName1
      kubectl annotate node ip-192-168-0-92.us-west-2.compute.internal k8s.amazonaws.com/eniConfig=EniConfigName2
      ```

1.  If you had nodes in a production cluster with running Pods before you switched to using the custom networking feature, complete the following tasks:

   1. Make sure that you have available nodes that are using the custom networking feature.

   1. Cordon and drain the nodes to gracefully shut down the Pods. For more information, see [Safely Drain a Node](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/) in the Kubernetes documentation.

   1. Terminate the nodes. If the nodes are in an existing managed node group, you can delete the node group. Run the following command.

      ```
      aws eks delete-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup
      ```

   Only new nodes that are registered with the `k8s.amazonaws.com/eniConfig` label use the custom networking feature.

1. Confirm that Pods are assigned an IP address from a CIDR block that’s associated to one of the subnets that you created in a previous step.

   ```
   kubectl get pods -A -o wide
   ```

   An example output is as follows.

   ```
   NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE     IP              NODE                                          NOMINATED NODE   READINESS GATES
   kube-system   aws-node-2rkn4             1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   aws-node-k96wp             1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
   kube-system   coredns-657694c6f4-smcgr   1/1     Running   0          56m     192.168.1.23    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   coredns-657694c6f4-stwv9   1/1     Running   0          56m     192.168.1.28    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   kube-proxy-jgshq           1/1     Running   0          7m19s   192.168.0.92    ip-192-168-0-92.us-west-2.compute.internal    <none>           <none>
   kube-system   kube-proxy-wx9vk           1/1     Running   0          7m15s   192.168.0.126   ip-192-168-0-126.us-west-2.compute.internal   <none>           <none>
   ```

   You can see that the coredns Pods are assigned IP addresses from the `192.168.1.0` CIDR block that you added to your VPC. Without custom networking, they would have been assigned addresses from the `192.168.0.0` CIDR block, because it was the only CIDR block originally associated with the VPC.

   If a Pod’s `spec` contains `hostNetwork=true`, it’s assigned the primary IP address of the node. It isn’t assigned an address from the subnets that you added. By default, this value is set to `false`. This value is set to `true` for the `kube-proxy` and Amazon VPC CNI plugin for Kubernetes (`aws-node`) Pods that run on your cluster. This is why the `kube-proxy` and the plugin’s `aws-node` Pods aren’t assigned 192.168.1.x addresses in the previous output. For more information about a Pod’s `hostNetwork` setting, see [PodSpec v1 core](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/#podspec-v1-core) in the Kubernetes API reference.
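The split described above can be checked mechanically. The following local demonstration classifies Pod IPs by CIDR block using only text processing; the sample rows are abbreviated from the example output (the IP is the seventh column of `kubectl get pods -A -o wide`).

```shell
# Count Pod IPs per VPC CIDR block from `kubectl get pods -A -o wide` output.
awk '$7 ~ /^192\.168\.1\./ { added++ }
     $7 ~ /^192\.168\.0\./ { original++ }
     END { printf "original CIDR: %d, added CIDR: %d\n", original, added }' <<'EOF'
kube-system aws-node-2rkn4 1/1 Running 0 7m19s 192.168.0.92 node-a <none> <none>
kube-system coredns-657694c6f4-smcgr 1/1 Running 0 56m 192.168.1.23 node-a <none> <none>
kube-system coredns-657694c6f4-stwv9 1/1 Running 0 56m 192.168.1.28 node-a <none> <none>
kube-system kube-proxy-jgshq 1/1 Running 0 7m19s 192.168.0.92 node-a <none> <none>
EOF
```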

## Step 5: Delete tutorial resources


After you complete the tutorial, we recommend that you delete the resources that you created. You can then adjust the steps to enable custom networking for a production cluster.

1. If the node group that you created was just for testing, then delete it.

   ```
   aws eks delete-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup
   ```

1. Even after the AWS CLI output says that the node group is deleted, the delete process might not actually be complete. The delete process takes a few minutes. Confirm that it’s complete by running the following command.

   ```
   aws eks describe-nodegroup --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup --query nodegroup.status --output text
   ```

   Don’t continue until the returned output is similar to the following output.

   ```
   An error occurred (ResourceNotFoundException) when calling the DescribeNodegroup operation: No node group found for name: my-nodegroup.
   ```
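   Rather than polling by hand, the AWS CLI also provides a waiter, `aws eks wait nodegroup-deleted --cluster-name my-custom-networking-cluster --nodegroup-name my-nodegroup`, which blocks until the node group is gone. A minimal sketch of the equivalent polling loop, with the describe call stubbed so the control flow can run anywhere:

   ```shell
   # Stub standing in for `aws eks describe-nodegroup ...`; it "succeeds"
   # (node group still exists) for the first two polls, then fails.
   attempts=0
   describe_nodegroup() {
     attempts=$((attempts + 1))
     [ "$attempts" -le 2 ]
   }

   # Keep polling until the describe call fails with ResourceNotFoundException.
   while describe_nodegroup; do
     sleep 0   # use something like `sleep 15` against the real API
   done
   echo "gone after $attempts polls"
   ```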

1. If the node group that you created was just for testing, then delete the node IAM role.

   1. Detach the policies from the role.

      ```
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
      aws iam detach-role-policy --role-name myCustomNetworkingNodeRole --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
      ```

   1. Delete the role.

      ```
      aws iam delete-role --role-name myCustomNetworkingNodeRole
      ```

1. Delete the cluster.

   ```
   aws eks delete-cluster --name my-custom-networking-cluster
   ```

   Confirm the cluster is deleted with the following command.

   ```
   aws eks describe-cluster --name my-custom-networking-cluster --query cluster.status --output text
   ```

   When output similar to the following is returned, the cluster is successfully deleted.

   ```
   An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: my-custom-networking-cluster.
   ```

1. Delete the cluster IAM role.

   1. Detach the policies from the role.

      ```
      aws iam detach-role-policy --role-name myCustomNetworkingAmazonEKSClusterRole --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
      ```

   1. Delete the role.

      ```
      aws iam delete-role --role-name myCustomNetworkingAmazonEKSClusterRole
      ```

1. Delete the subnets that you created in a previous step.

   ```
   aws ec2 delete-subnet --subnet-id $new_subnet_id_1
   aws ec2 delete-subnet --subnet-id $new_subnet_id_2
   ```

1. Delete the VPC that you created.

   ```
   aws cloudformation delete-stack --stack-name my-eks-custom-networking-vpc
   ```

# Assign more IP addresses to Amazon EKS nodes with prefixes
Increase IP addresses

 **Applies to**: Linux and Windows nodes with Amazon EC2 instances

 **Applies to**: Public and private subnets

Each Amazon EC2 instance supports a maximum number of elastic network interfaces and a maximum number of IP addresses that can be assigned to each network interface. Each node requires one IP address for each network interface. All other available IP addresses can be assigned to `Pods`. Each `Pod` requires its own IP address. As a result, you might have nodes that have available compute and memory resources, but can’t accommodate additional `Pods` because the node has run out of IP addresses to assign to `Pods`.

You can increase the number of IP addresses that nodes can assign to `Pods` by assigning IP prefixes, rather than assigning individual secondary IP addresses to your nodes. Each prefix includes several IP addresses. If you don’t configure your cluster for IP prefix assignment, your cluster must make more Amazon EC2 application programming interface (API) calls to configure network interfaces and IP addresses necessary for Pod connectivity. As clusters grow to larger sizes, the frequency of these API calls can lead to longer Pod and instance launch times. This results in scaling delays to meet the demand of large and spiky workloads, and adds cost and management overhead because you need to provision additional clusters and VPCs to meet scaling requirements. For more information, see [Kubernetes Scalability thresholds](https://github.com/kubernetes/community/blob/master/sig-scalability/configs-and-limits/thresholds.md) on GitHub.
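As a rough illustration of why prefixes help, each slot that previously held one secondary `IPv4` address instead holds a `/28` prefix of 16 addresses. The ENI numbers below are illustrative, not a statement about any particular instance type; look up real per-instance-type limits in the Amazon EC2 documentation.

```shell
# Illustrative ENI limits for a hypothetical instance type.
enis=3
ips_per_eni=10
prefix_size=16    # a /28 IPv4 prefix contains 16 addresses

# Without prefix delegation: one primary IP per ENI, the rest for Pods.
echo $(( enis * (ips_per_eni - 1) ))                 # 27 Pod addresses

# With prefix delegation: each secondary slot holds a /28 prefix instead.
echo $(( enis * (ips_per_eni - 1) * prefix_size ))   # 432 Pod addresses
```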

## Compatibility with Amazon VPC CNI plugin for Kubernetes features


You can use IP prefixes with the following features:
+ IPv4 Source Network Address Translation - For more information, see [Enable outbound internet access for Pods](external-snat.md).
+ IPv6 addresses to clusters, Pods, and services - For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Restricting traffic using Kubernetes network policies - For more information, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).

The following list provides information about the Amazon VPC CNI plugin settings that apply. For more information about each setting, see [amazon-vpc-cni-k8s](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/README.md) on GitHub.
+  `WARM_IP_TARGET` 
+  `MINIMUM_IP_TARGET` 
+  `WARM_PREFIX_TARGET` 

## Considerations


Consider the following when you use this feature:
+ Each Amazon EC2 instance type supports a maximum number of Pods. If your managed node group consists of multiple instance types, the smallest number of maximum Pods for an instance in the cluster is applied to all nodes in the cluster.
+ By default, the maximum number of `Pods` that you can run on a node is 110, but you can change that number. If you change the number and have an existing managed node group, the next AMI or launch template update of your node group results in new nodes coming up with the changed value.
+ When transitioning from assigning IP addresses to assigning IP prefixes, we recommend that you create new node groups to increase the number of available IP addresses, rather than doing a rolling replacement of existing nodes. Running Pods on a node that has both IP addresses and prefixes assigned can lead to inconsistency in the advertised IP address capacity, impacting the future workloads on the node. For the recommended way of performing the transition, see [Prefix Delegation mode for Linux](https://docs.aws.amazon.com/eks/latest/best-practices/prefix-mode-linux.html) in the *Amazon EKS Best Practices Guide*.
+ Security groups are scoped at the node level. For more information, see [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).
+ IP prefixes assigned to a network interface support high Pod density per node and have the best launch time.
+ IP prefixes and IP addresses are associated with standard Amazon EC2 elastic network interfaces. Pods that require specific security groups are assigned the primary IP address of a branch network interface. On the same node, you can mix Pods that get individual IP addresses or IP addresses from prefixes with Pods that get branch network interfaces.
+ For clusters with Linux nodes only.
  + After you configure the add-on to assign prefixes to network interfaces, you can’t downgrade your Amazon VPC CNI plugin for Kubernetes add-on to a version lower than `1.9.0` (or `1.10.1`) without removing all nodes in all node groups in your cluster.
  + If you’re also using security groups for Pods, with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard` and `AWS_VPC_K8S_CNI_EXTERNALSNAT`=`false`, when your Pods communicate with endpoints outside of your VPC, the node’s security groups are used, rather than any security groups you’ve assigned to your Pods.

    If you’re also using [security groups for Pods](security-groups-for-pods.md), with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, when your `Pods` communicate with endpoints outside of your VPC, the `Pod’s` security groups are used.

# Increase the available IP addresses for your Amazon EKS node
Procedure

You can increase the number of IP addresses that nodes can assign to Pods by assigning IP prefixes, rather than assigning individual secondary IP addresses to your nodes.

## Prerequisites

+ You need an existing cluster. To deploy one, see [Create an Amazon EKS cluster](create-cluster.md).
+ The subnets that your Amazon EKS nodes are in must have sufficient contiguous `/28` (for `IPv4` clusters) or `/80` (for `IPv6` clusters) Classless Inter-Domain Routing (CIDR) blocks. You can only have Linux nodes in an `IPv6` cluster. Using IP prefixes can fail if IP addresses are scattered throughout the subnet CIDR. We recommend the following:
  + Use a subnet CIDR reservation so that even if any IP addresses within the reserved range are still in use, they aren’t reassigned after their release. This ensures that prefixes are available for allocation without fragmentation.
  + Use new subnets that are specifically used for running the workloads that IP prefixes are assigned to. Both Windows and Linux workloads can run in the same subnet when assigning IP prefixes.
+ To assign IP prefixes to your nodes, your nodes must be AWS Nitro-based. Instances that aren’t Nitro-based continue to allocate individual secondary IP addresses, but have a significantly lower number of IP addresses to assign to Pods than Nitro-based instances do.
+  **For clusters with Linux nodes only** – If your cluster is configured for the `IPv4` family, you must have version `1.9.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on installed. You can check your current version with the following command.

  ```
  kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
  ```

  If your cluster is configured for the `IPv6` family, you must have version `1.10.1` of the add-on installed. If your plugin version is earlier than the required versions, you must update it. For more information, see the updating sections of [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+  **For clusters with Windows nodes only** 
  + You must have Windows support enabled for your cluster. For more information, see [Deploy Windows nodes on EKS clusters](windows-support.md).
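One way to check the Linux version prerequisite above is to compare the version reported by the `kubectl describe daemonset` command against the required minimum with `sort -V`. The version values below are examples:

```shell
required="1.9.0"      # use 1.10.1 for clusters configured for the IPv6 family
current="1.7.6"       # e.g. parsed from the kubectl command above

# sort -V orders version strings numerically; if the smallest of the two
# is $required, then $current meets or exceeds the minimum.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n 1)" = "$required" ]; then
  echo "OK: $current meets the minimum"
else
  echo "update needed: $current is older than $required"
fi
```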

## Assign IP address prefixes to nodes


Configure your cluster to assign IP address prefixes to nodes. Complete the procedure that matches your node’s operating system.

### Linux


1. Enable the parameter to assign prefixes to network interfaces for the Amazon VPC CNI DaemonSet. When you deploy a cluster, version `1.10.1` or later of the Amazon VPC CNI plugin for Kubernetes add-on is deployed with it. If you created the cluster with the `IPv6` family, this setting was set to `true` by default. If you created the cluster with the `IPv4` family, this setting was set to `false` by default.

   ```
   kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
   ```
**Important**  
Even if your subnet has available IP addresses, if the subnet does not have any contiguous `/28` blocks available, you will see the following error in the Amazon VPC CNI plugin for Kubernetes logs.  

   ```
   InsufficientCidrBlocks: The specified subnet does not have enough free cidr blocks to satisfy the request
   ```
This can happen due to fragmentation of existing secondary IP addresses spread out across a subnet. To resolve this error, either create a new subnet and launch Pods there, or use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet for use with prefix assignment. For more information, see [Subnet CIDR reservations](https://docs.aws.amazon.com/vpc/latest/userguide/subnet-cidr-reservation.html) in the Amazon VPC User Guide.
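For the reservation approach, the call looks like the following. It is shown with a leading `echo` and placeholder values so that nothing runs against your account; the subnet ID and CIDR are illustrative. Remove the `echo` and substitute your own values to create the reservation.

```shell
# Reserve a /26 slice of the subnet for prefix allocation.
echo aws ec2 create-subnet-cidr-reservation \
  --subnet-id subnet-0123456789abcdef0 \
  --reservation-type prefix \
  --cidr 192.168.1.192/26
```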

1. If you plan to deploy a managed node group without a launch template, or with a launch template that doesn’t specify an AMI ID, and you’re using a version of the Amazon VPC CNI plugin for Kubernetes at or later than the versions listed in the prerequisites, then skip to the next step. Managed node groups automatically calculate the maximum number of Pods for you.

   If you’re deploying a self-managed node group or a managed node group with a launch template that you have specified an AMI ID in, then you must set the maximum number of Pods for your nodes. For more information about how to determine the appropriate value, see [How maxPods is determined](choosing-instance-type.md#max-pods-precedence).
**Important**  
Managed node groups enforce a maximum value for `maxPods`. For instances with fewer than 30 vCPUs, the maximum is 110, and for all other instances, the maximum is 250. This maximum is applied whether prefix delegation is enabled or not.
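That cap combines with the prefix-delegation address capacity roughly as follows. The ENI numbers are illustrative, and this mirrors the common calculation only approximately; the authoritative value comes from the Amazon EKS `max-pods` calculation.

```shell
vcpus=8
enis=3
ips_per_eni=10

# Prefix-delegation address capacity, plus 2 for host-network Pods
# (aws-node and kube-proxy); treat this as an approximation.
capacity=$(( enis * (ips_per_eni - 1) * 16 + 2 ))

# Managed node group cap: 110 below 30 vCPUs, 250 otherwise.
if [ "$vcpus" -lt 30 ]; then cap=110; else cap=250; fi

maxpods=$capacity
if [ "$maxpods" -gt "$cap" ]; then maxpods=$cap; fi
echo "$maxpods"   # 110
```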

1. If you’re using a cluster configured for `IPv6`, skip to the next step.

   Specify the parameters in one of the following options. To determine which option is right for you and what value to provide for it, see [`WARM_PREFIX_TARGET`, `WARM_IP_TARGET`, and `MINIMUM_IP_TARGET`](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/docs/prefix-and-ip-target.md) on GitHub.

   You can replace the example values with a value greater than zero.
   +  `WARM_PREFIX_TARGET` 

     ```
     kubectl set env ds aws-node -n kube-system WARM_PREFIX_TARGET=1
     ```
   +  `WARM_IP_TARGET` or `MINIMUM_IP_TARGET` – If either value is set, it overrides any value set for `WARM_PREFIX_TARGET`.

     ```
     kubectl set env ds aws-node -n kube-system WARM_IP_TARGET=5
     ```

     ```
     kubectl set env ds aws-node -n kube-system MINIMUM_IP_TARGET=2
     ```
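   A simplified sketch of how these knobs interact under prefix delegation. The real `ipamd` logic handles more cases, and the numbers here are illustrative.

   ```shell
   pods=20
   warm_prefix_target=1

   # Prefixes needed to cover running Pods: ceil(pods / 16).
   needed=$(( (pods + 15) / 16 ))

   # WARM_PREFIX_TARGET keeps that many spare prefixes attached on top.
   attached=$(( needed + warm_prefix_target ))
   echo "$attached prefixes attached"   # 3 prefixes attached
   ```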

1. Create one of the following types of node groups with at least one Amazon EC2 Nitro Amazon Linux 2023 instance type. For a list of Nitro instance types, see [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) in the Amazon EC2 User Guide. This capability is not supported on Windows. For the options that include *110*, replace it with either the value from step 3 (recommended), or your own value.
   +  **Self-managed** – Deploy the node group using the instructions in [Create self-managed Amazon Linux nodes](launch-workers.md). Before creating the CloudFormation stack, open the template file and adjust the `UserData` in the `NodeLaunchTemplate` to be like the following

     ```
     ...
                 apiVersion: node.eks.aws/v1alpha1
                 kind: NodeConfig
                 spec:
                   cluster:
                     name: ${ClusterName}
                     apiServerEndpoint: ${ApiServerEndpoint}
                     certificateAuthority: ${CertificateAuthorityData}
                     cidr: ${ServiceCidr}
                   kubelet:
                     config:
                       maxPods: 110
     ...
     ```

     If you’re using `eksctl` to create the node group, you can use the following command.

     ```
     eksctl create nodegroup --cluster my-cluster --managed=false --max-pods-per-node 110
     ```
   +  **Managed** – Deploy your node group using one of the following options:
     +  **Without a launch template or with a launch template without an AMI ID specified** – Complete the procedure in [Create a managed node group for your cluster](create-managed-node-group.md). Managed node groups automatically calculates the Amazon EKS recommended `max-pods` value for you.
     +  **With a launch template with a specified AMI ID** – In your launch template, specify an Amazon EKS optimized AMI ID, or a custom AMI built off the Amazon EKS optimized AMI, then [deploy the node group using a launch template](launch-templates.md) and provide the following user data in the launch template. This user data passes a `NodeConfig` object to be read by the `nodeadm` tool on the node. For more information about `nodeadm`, see [the nodeadm documentation](https://awslabs.github.io/amazon-eks-ami/nodeadm).

       ```
       MIME-Version: 1.0
       Content-Type: multipart/mixed; boundary="//"
       
       --//
       Content-Type: application/node.eks.aws
       
        ---
        apiVersion: node.eks.aws/v1alpha1
        kind: NodeConfig
        spec:
          cluster:
            apiServerEndpoint: my-api-server-endpoint
            certificateAuthority: LS0t...
            cidr: 10.100.0.0/16
            name: my-cluster
          kubelet:
            config:
              maxPods: 110
       --//--
       ```

       If you’re using `eksctl` to create the node group, you can use the following command.

       ```
       eksctl create nodegroup --cluster my-cluster --max-pods-per-node 110
       ```

        If you’ve created a custom AMI that is not built off the Amazon EKS optimized AMI, then you need to create the configuration yourself.
**Note**  
If you also want to assign IP addresses to Pods from a different subnet than the instance’s, then you need to enable the capability in this step. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).

### Windows


1. Enable assignment of IP prefixes.

   1. Open the `amazon-vpc-cni` `ConfigMap` for editing.

      ```
      kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
      ```

   1. Add the following line to the `data` section.

      ```
        enable-windows-prefix-delegation: "true"
      ```

   1. Save the file and close the editor.

   1. Confirm that the line was added to the `ConfigMap`.

      ```
      kubectl get configmap -n kube-system amazon-vpc-cni -o "jsonpath={.data.enable-windows-prefix-delegation}"
      ```

      If the returned output isn’t `true`, then there might have been an error. Try completing the step again.
**Important**  
Even if your subnet has available IP addresses, if the subnet does not have any contiguous `/28` blocks available, you will see the following error in the Amazon VPC CNI plugin for Kubernetes logs.  

      ```
      InsufficientCidrBlocks: The specified subnet does not have enough free cidr blocks to satisfy the request
      ```
This can happen due to fragmentation of existing secondary IP addresses spread out across a subnet. To resolve this error, either create a new subnet and launch Pods there, or use an Amazon EC2 subnet CIDR reservation to reserve space within a subnet for use with prefix assignment. For more information, see [Subnet CIDR reservations](https://docs.aws.amazon.com/vpc/latest/userguide/subnet-cidr-reservation.html) in the Amazon VPC User Guide.

1. (Optional) Specify additional configuration for controlling the pre-scaling and dynamic scaling behavior for your cluster. For more information, see [Configuration options with Prefix Delegation mode on Windows](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/docs/windows/prefix_delegation_config_options.md) on GitHub.

   1. Open the `amazon-vpc-cni` `ConfigMap` for editing.

      ```
      kubectl edit configmap -n kube-system amazon-vpc-cni -o yaml
      ```

   1. Replace the example values with a value greater than zero and add the entries that you require to the `data` section of the `ConfigMap`. If you set a value for either `warm-ip-target` or `minimum-ip-target`, the value overrides any value set for `warm-prefix-target`.

      ```
        warm-prefix-target: "1"
        warm-ip-target: "5"
        minimum-ip-target: "2"
      ```

   1. Save the file and close the editor.

1. Create Windows node groups with at least one Amazon EC2 Nitro instance type. For a list of Nitro instance types, see [Instances built on the Nitro System](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/instance-types.html#ec2-nitro-instances) in the Amazon EC2 User Guide. By default, the maximum number of Pods that you can deploy to a node is 110. If you want to increase or decrease that number, specify the following in the user data for the bootstrap configuration. Replace *max-pods-quantity* with your max pods value.

   ```
   -KubeletExtraArgs '--max-pods=max-pods-quantity'
   ```

   If you’re deploying managed node groups, this configuration needs to be added in the launch template. For more information, see [Customize managed nodes with launch templates](launch-templates.md). For more information about the configuration parameters for Windows bootstrap script, see [Bootstrap script configuration parameters](eks-optimized-windows-ami.md#bootstrap-script-configuration-parameters).

## Determine max Pods and available IP addresses


1. Once your nodes are deployed, view the nodes in your cluster.

   ```
   kubectl get nodes
   ```

   An example output is as follows.

   ```
   NAME                                             STATUS     ROLES    AGE   VERSION
   ip-192-168-22-103.region-code.compute.internal   Ready      <none>   19m   v1.XX.X-eks-6b7464
   ip-192-168-97-94.region-code.compute.internal    Ready      <none>   19m   v1.XX.X-eks-6b7464
   ```

1. Describe one of the nodes to determine the value of `max-pods` for the node and the number of available IP addresses. Replace *192.168.30.193* with the `IPv4` address in the name of one of your nodes returned in the previous output.

   ```
   kubectl describe node ip-192-168-30-193.region-code.compute.internal | grep 'pods\|PrivateIPv4Address'
   ```

   An example output is as follows.

   ```
   pods:                                  110
   vpc.amazonaws.com/PrivateIPv4Address:  144
   ```

   In the previous output, `110` is the maximum number of Pods that Kubernetes will deploy to the node, even though *144* IP addresses are available.

# Assign security groups to individual Pods
Security groups for Pods

 **Applies to**: Linux nodes with Amazon EC2 instances

 **Applies to**: Private subnets

Security groups for Pods integrate Amazon EC2 security groups with Kubernetes Pods. You can use Amazon EC2 security groups to define rules that allow inbound and outbound network traffic to and from Pods that you deploy to nodes running on many Amazon EC2 instance types and Fargate. For a detailed explanation of this capability, see the [Introducing security groups for Pods](https://aws.amazon.com/blogs/containers/introducing-security-groups-for-pods) blog post.

## Compatibility with Amazon VPC CNI plugin for Kubernetes features


You can use security groups for Pods with the following features:
+ IPv4 Source Network Address Translation - For more information, see [Enable outbound internet access for Pods](external-snat.md).
+ IPv6 addresses to clusters, Pods, and services - For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Restricting traffic using Kubernetes network policies - For more information, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).

## Considerations


Before deploying security groups for Pods, consider the following limitations and conditions:
+ Security groups for Pods can’t be used with Windows nodes or EKS Auto Mode.
+ Security groups for Pods can be used with clusters configured for the `IPv6` family that contain Amazon EC2 nodes by using version `1.16.0` or later of the Amazon VPC CNI plugin. You can use security groups for Pods with clusters configured for the `IPv6` family that contain only Fargate nodes by using version `1.7.7` or later of the Amazon VPC CNI plugin. For more information, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).
+ Security groups for Pods are supported by most [Nitro-based](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html#ec2-nitro-instances) Amazon EC2 instance families, though not by all generations of a family. For example, the `m5`, `c5`, `r5`, `m6g`, `c6g`, and `r6g` instance families and generations are supported. No instance types in the `t` family are supported. For a complete list of supported instance types, see the [limits.go](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/v1.5.0/pkg/aws/vpc/limits.go) file on GitHub. Your nodes must be one of the listed instance types that have `IsTrunkingCompatible: true` in that file.
+ If you’re using custom networking and security groups for Pods together, the security group specified by security groups for Pods is used instead of the security group specified in the `ENIConfig`.
+ If you’re using version `1.10.2` or earlier of the Amazon VPC CNI plugin and you include the `terminationGracePeriodSeconds` setting in your Pod spec, the value for the setting can’t be zero.
+ If you’re using version `1.10` or earlier of the Amazon VPC CNI plugin, or version `1.11` with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, which is the default setting, then Kubernetes services of type `NodePort` and `LoadBalancer` using instance targets with an `externalTrafficPolicy` set to `Local` aren’t supported with Pods that you assign security groups to. For more information about using a load balancer with instance targets, see [Route TCP and UDP traffic with Network Load Balancers](network-load-balancing.md).
+ If you’re using version `1.10` or earlier of the Amazon VPC CNI plugin or version `1.11` with `POD_SECURITY_GROUP_ENFORCING_MODE`=`strict`, which is the default setting, source NAT is disabled for outbound traffic from Pods with assigned security groups so that outbound security group rules are applied. To access the internet, Pods with assigned security groups must be launched on nodes that are deployed in a private subnet configured with a NAT gateway or instance. Pods with assigned security groups deployed to public subnets are not able to access the internet.

  If you’re using version `1.11` or later of the plugin with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`, then Pod traffic destined for outside of the VPC is translated to the IP address of the instance’s primary network interface. For this traffic, the rules in the security groups for the primary network interface are used, rather than the rules in the Pod’s security groups.
+ To use Calico network policy with Pods that have associated security groups, you must use version `1.11.0` or later of the Amazon VPC CNI plugin and set `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`. Otherwise, traffic to and from Pods with associated security groups isn’t subjected to Calico network policy enforcement and is limited to Amazon EC2 security group enforcement only. To update your Amazon VPC CNI plugin version, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+ Pods running on Amazon EC2 nodes that use security groups in clusters that use [NodeLocal DNSCache](https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/) are only supported with version `1.11.0` or later of the Amazon VPC CNI plugin and with `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`. To update your Amazon VPC CNI plugin version, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).
+ Security groups for Pods might lead to higher Pod startup latency for Pods with high churn. This is due to rate limiting in the resource controller.
+ With security groups for Pods, Amazon EC2 security groups are scoped at the Pod level. For more information, see [Security groups](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html).

  If you set `POD_SECURITY_GROUP_ENFORCING_MODE=standard` and `AWS_VPC_K8S_CNI_EXTERNALSNAT=false`, traffic destined for endpoints outside the VPC use the node’s security groups, not the Pod’s security groups.

# Configure the Amazon VPC CNI plugin for Kubernetes for security groups for Amazon EKS Pods
Configure

If you use Pods with Amazon EC2 instances, you need to configure the Amazon VPC CNI plugin for Kubernetes to use security groups for Pods.

If you use Fargate Pods only, and don’t have any Amazon EC2 nodes in your cluster, see [Use a security group policy for an Amazon EKS Pod](sg-pods-example-deployment.md).

1. Check your current Amazon VPC CNI plugin for Kubernetes version with the following command:

   ```
   kubectl describe daemonset aws-node --namespace kube-system | grep amazon-k8s-cni: | cut -d : -f 3
   ```

   An example output is as follows.

   ```
   v1.7.6
   ```

   If your Amazon VPC CNI plugin for Kubernetes version is earlier than `1.7.7`, then update the plugin to version `1.7.7` or later. For more information, see [Assign IPs to Pods with the Amazon VPC CNI](managing-vpc-cni.md).

1. Add the [AmazonEKSVPCResourceController](https://console.aws.amazon.com/iam/home#/policies/arn:aws:iam::aws:policy/AmazonEKSVPCResourceController) managed IAM policy to the [cluster role](cluster-iam-role.md#create-service-role) that is associated with your Amazon EKS cluster. The policy allows the role to manage network interfaces, their private IP addresses, and their attachment and detachment to and from network instances.

   1. Retrieve the name of your cluster IAM role and store it in a variable. Replace *my-cluster* with the name of your cluster.

      ```
      cluster_role=$(aws eks describe-cluster --name my-cluster --query cluster.roleArn --output text | cut -d / -f 2)
      ```

   1. Attach the policy to the role.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSVPCResourceController --role-name $cluster_role
      ```

1. Enable the Amazon VPC CNI add-on to manage network interfaces for Pods by setting the `ENABLE_POD_ENI` variable to `true` in the `aws-node` DaemonSet. Once this setting is set to `true`, the add-on creates a `CNINode` custom resource for each node in the cluster. The VPC resource controller creates and attaches one special network interface called a *trunk network interface* with the description `aws-k8s-trunk-eni`.

   ```
   kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true
   ```
**Note**  
The trunk network interface is included in the maximum number of network interfaces supported by the instance type. For a list of the maximum number of network interfaces supported by each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*. If your node already has the maximum number of standard network interfaces attached to it then the VPC resource controller will reserve a space. You will have to scale down your running Pods enough for the controller to detach and delete a standard network interface, create the trunk network interface, and attach it to the instance.

1. You can see which of your nodes have a `CNINode` custom resource with the following command. If `No resources found` is returned, then wait several seconds and try again. The previous step requires restarting the Amazon VPC CNI plugin for Kubernetes Pods, which takes several seconds.

   ```
   kubectl get cninode -A
   NAME                                           FEATURES
   ip-192-168-64-141.us-west-2.compute.internal   [{"name":"SecurityGroupsForPods"}]
   ip-192-168-7-203.us-west-2.compute.internal    [{"name":"SecurityGroupsForPods"}]
   ```

   If you are using VPC CNI versions older than `1.15`, node labels were used instead of the `CNINode` custom resource. You can see which of your nodes have the node label `vpc.amazonaws.com/has-trunk-attached` set to `true` with the following command. If `No resources found` is returned, then wait several seconds and try again. The previous step requires restarting the Amazon VPC CNI plugin for Kubernetes Pods, which takes several seconds.

   ```
   kubectl get nodes -o wide -l vpc.amazonaws.com/has-trunk-attached=true
   ```

   Once the trunk network interface is created, Pods are assigned secondary IP addresses from the trunk or standard network interfaces. The trunk interface is automatically deleted if the node is deleted.

   When you deploy a security group for a Pod in a later step, the VPC resource controller creates a special network interface called a *branch network interface* with a description of `aws-k8s-branch-eni` and associates the security groups to it. Branch network interfaces are created in addition to the standard and trunk network interfaces attached to the node.
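
   If you want to see these special interfaces in your account, one approach (a sketch; it assumes the AWS CLI is configured for the cluster’s account and region) is to filter network interfaces by those descriptions:

   ```shell
   # List trunk and branch network interfaces by their well-known descriptions.
   aws ec2 describe-network-interfaces \
     --filters "Name=description,Values=aws-k8s-trunk-eni,aws-k8s-branch-eni" \
     --query "NetworkInterfaces[].[NetworkInterfaceId,Description,Status]" \
     --output text
   ```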

   If you are using liveness or readiness probes, then you also need to disable *TCP early demux*, so that the `kubelet` can connect to Pods on branch network interfaces using TCP. To disable *TCP early demux*, run the following command:

   ```
   kubectl patch daemonset aws-node -n kube-system \
     -p '{"spec": {"template": {"spec": {"initContainers": [{"env":[{"name":"DISABLE_TCP_EARLY_DEMUX","value":"true"}],"name":"aws-vpc-cni-init"}]}}}}'
   ```
**Note**  
If you’re using `1.11.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on and set `POD_SECURITY_GROUP_ENFORCING_MODE`=`standard`, as described in the next step, then you don’t need to run the previous command.

1. If your cluster uses `NodeLocal DNSCache`, if you want to use Calico network policy with Pods that have their own security groups, or if you have Kubernetes services of type `NodePort` or `LoadBalancer` that use instance targets with `externalTrafficPolicy` set to `Local` for Pods that you want to assign security groups to, then you must use version `1.11.0` or later of the Amazon VPC CNI plugin for Kubernetes add-on, and you must enable the following setting:

   ```
   kubectl set env daemonset aws-node -n kube-system POD_SECURITY_GROUP_ENFORCING_MODE=standard
   ```

**Important**  
Pod security group rules aren’t applied to traffic between Pods, or to traffic between Pods and services (such as `kubelet` or `nodeLocalDNS`), that are on the same node. Pods that use different security groups on the same node can’t communicate because they are configured in different subnets, and routing is disabled between these subnets.  
Outbound traffic from Pods to addresses outside of the VPC is network address translated to the IP address of the instance’s primary network interface (unless you’ve also set `AWS_VPC_K8S_CNI_EXTERNALSNAT=true`). For this traffic, the rules in the security groups for the primary network interface are used, rather than the rules in the Pod’s security groups.  
For this setting to apply to existing Pods, you must restart the Pods or the nodes that the Pods are running on.

1. To see how to use a security group policy for your Pod, see [Use a security group policy for an Amazon EKS Pod](sg-pods-example-deployment.md).

# Use a security group policy for an Amazon EKS Pod
SecurityGroupPolicy

To use security groups for Pods, you must have an existing security group. The following steps show you how to use the security group policy for a Pod. Unless otherwise noted, complete all steps from the same terminal because variables are used in the following steps that don’t persist across terminals.

If you have a Pod with Amazon EC2 instances, you must configure the plugin before you use this procedure. For more information, see [Configure the Amazon VPC CNI plugin for Kubernetes for security groups for Amazon EKS Pods](security-groups-pods-deployment.md).

1. Create a Kubernetes namespace to deploy resources to. You can replace *my-namespace* with the name of a namespace that you want to use.

   ```
   kubectl create namespace my-namespace
   ```

1. Deploy an Amazon EKS `SecurityGroupPolicy` to your cluster.

   1. Copy the following contents to your device. You can replace *podSelector* with `serviceAccountSelector` if you’d rather select Pods based on service account labels. You must specify one selector or the other. An empty `podSelector` (example: `podSelector: {}`) selects all Pods in the namespace. You can change *my-role* to the name of your role. An empty `serviceAccountSelector` selects all service accounts in the namespace. You can replace *my-security-group-policy* with a name for your `SecurityGroupPolicy` and *my-namespace* with the namespace that you want to create the `SecurityGroupPolicy` in.

      You must replace *my_pod_security_group_id* with the ID of an existing security group. If you don’t have an existing security group, then you must create one. For more information, see [Amazon EC2 security groups for Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html) in the [Amazon EC2 User Guide](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/). You can specify one to five security group IDs. If you specify more than one ID, then the combination of all the rules in all the security groups is effective for the selected Pods.

      ```
      cat >my-security-group-policy.yaml <<EOF
      apiVersion: vpcresources.k8s.aws/v1beta1
      kind: SecurityGroupPolicy
      metadata:
        name: my-security-group-policy
        namespace: my-namespace
      spec:
        podSelector:
          matchLabels:
            role: my-role
        securityGroups:
          groupIds:
            - my_pod_security_group_id
      EOF
      ```
**Important**  
The security group or groups that you specify for your Pods must meet the following criteria:  
They must exist. If they don’t exist, then, when you deploy a Pod that matches the selector, your Pod remains stuck in the creation process. If you describe the Pod, you’ll see an error message similar to the following one: `An error occurred (InvalidSecurityGroupID.NotFound) when calling the CreateNetworkInterface operation: The securityGroup ID 'sg-05b1d815d1EXAMPLE' does not exist`.
They must allow inbound communication from the security group applied to your nodes (for `kubelet`) over any ports that you’ve configured probes for.
They must allow outbound communication over `TCP` and `UDP` port 53 to a security group assigned to the Pods (or nodes that the Pods run on) running CoreDNS. The security group for your CoreDNS Pods must allow inbound `TCP` and `UDP` port 53 traffic from the security group that you specify.
They must have necessary inbound and outbound rules to communicate with other Pods that they need to communicate with.
They must have rules that allow the Pods to communicate with the Kubernetes control plane if you’re using the security group with Fargate. The easiest way to do this is to specify the cluster security group as one of the security groups.
Security group policies only apply to newly scheduled Pods. They do not affect running Pods.

   1. Deploy the policy.

      ```
      kubectl apply -f my-security-group-policy.yaml
      ```
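
      You can confirm that the policy was created with `kubectl get` (the `SecurityGroupPolicy` kind is served by the `vpcresources.k8s.aws` API group):

      ```shell
      kubectl get securitygrouppolicy my-security-group-policy -n my-namespace
      ```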

1. Deploy a sample application with a label that matches the *my-role* value for *podSelector* that you specified in a previous step.

   1. Copy the following contents to your device. Replace the example values with your own and then run the modified command. If you replace *my-role*, make sure that it’s the same as the value you specified for the selector in a previous step.

      ```
      cat >sample-application.yaml <<EOF
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: my-deployment
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        replicas: 4
        selector:
          matchLabels:
            app: my-app
        template:
          metadata:
            labels:
              app: my-app
              role: my-role
          spec:
            terminationGracePeriodSeconds: 120
            containers:
            - name: nginx
              image: public.ecr.aws/nginx/nginx:1.23
              ports:
              - containerPort: 80
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: my-app
        namespace: my-namespace
        labels:
          app: my-app
      spec:
        selector:
          app: my-app
        ports:
          - protocol: TCP
            port: 80
            targetPort: 80
      EOF
      ```

   1. Deploy the application with the following command. When you deploy the application, the Amazon VPC CNI plugin for Kubernetes matches the `role` label and the security groups that you specified in the previous step are applied to the Pod.

      ```
      kubectl apply -f sample-application.yaml
      ```

1. View the Pods deployed with the sample application. For the remainder of this topic, this terminal is referred to as `TerminalA`.

   ```
   kubectl get pods -n my-namespace -o wide
   ```

   An example output is as follows.

   ```
   NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE                                            NOMINATED NODE   READINESS GATES
   my-deployment-5df6f7687b-4fbjm   1/1     Running   0          7m51s   192.168.53.48    ip-192-168-33-28.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-j9fl4   1/1     Running   0          7m51s   192.168.70.145   ip-192-168-92-33.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-rjxcz   1/1     Running   0          7m51s   192.168.73.207   ip-192-168-92-33.region-code.compute.internal   <none>           <none>
   my-deployment-5df6f7687b-zmb42   1/1     Running   0          7m51s   192.168.63.27    ip-192-168-33-28.region-code.compute.internal   <none>           <none>
   ```
**Note**  
Try these tips if any Pods are stuck.  
If any Pods are stuck in the `Waiting` state, then run `kubectl describe pod my-deployment-xxxxxxxxxx-xxxxx -n my-namespace`. If you see `Insufficient permissions: Unable to create Elastic Network Interface.`, confirm that you added the IAM policy to the cluster IAM role in a previous step.
If any Pods are stuck in the `Pending` state, confirm that your node instance type is listed in [limits.go](https://github.com/aws/amazon-vpc-resource-controller-k8s/blob/master/pkg/aws/vpc/limits.go) and that the maximum number of branch network interfaces supported by the instance type, multiplied by the number of nodes in your node group, hasn’t already been reached. For example, an `m5.large` instance supports nine branch network interfaces. If your node group has five nodes, then a maximum of 45 branch network interfaces can be created for the node group. The 46th Pod that you attempt to deploy will sit in the `Pending` state until another Pod that has associated security groups is deleted.
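
   The note’s capacity arithmetic can be sketched as follows (the per-instance branch interface count is illustrative; look up your instance type in `limits.go`):

   ```shell
   # Maximum branch network interfaces for a node group =
   #   (branch ENIs per instance) x (number of nodes)
   branch_enis_per_instance=9   # e.g. m5.large, per limits.go
   node_count=5
   echo $(( branch_enis_per_instance * node_count ))   # prints: 45
   ```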

   If you run `kubectl describe pod my-deployment-xxxxxxxxxx-xxxxx -n my-namespace` and see a message similar to the following message, then it can be safely ignored. This message might appear when the Amazon VPC CNI plugin for Kubernetes tries to set up host networking and fails while the network interface is being created. The plugin logs this event until the network interface is created.

   ```
   Failed to create Pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e24268322e55c8185721f52df6493684f6c2c3bf4fd59c9c121fd4cdc894579f" network for Pod "my-deployment-5df6f7687b-4fbjm": networkPlugin
   cni failed to set up Pod "my-deployment-5df6f7687b-4fbjm-c89wx_my-namespace" network: add cmd: failed to assign an IP address to container
   ```

   You can’t exceed the maximum number of Pods that can be run on the instance type. For a list of the maximum number of Pods that you can run on each instance type, see [eni-max-pods.txt](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/eni-max-pods.txt) on GitHub. When you delete a Pod that has associated security groups, or delete the node that the Pod is running on, the VPC resource controller deletes the branch network interface. If you delete a cluster that has Pods that use security groups for Pods, then the controller doesn’t delete the branch network interfaces, so you’ll need to delete them yourself. For information about how to delete network interfaces, see [Delete a network interface](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#delete_eni) in the *Amazon EC2 User Guide*.

1. In a separate terminal, shell into one of the Pods. For the remainder of this topic, this terminal is referred to as `TerminalB`. Replace *5df6f7687b-4fbjm* with the ID of one of the Pods returned in your output from the previous step.

   ```
   kubectl exec -it -n my-namespace my-deployment-5df6f7687b-4fbjm -- /bin/bash
   ```

1. From the shell in `TerminalB`, confirm that the sample application works.

   ```
   curl my-app
   ```

   An example output is as follows.

   ```
   <!DOCTYPE html>
   <html>
   <head>
   <title>Welcome to nginx!</title>
   [...]
   ```

   You received the output because all Pods running the application are associated with the security group that you created. That group contains a rule that allows all traffic between all Pods that the security group is associated with. DNS traffic is allowed outbound from that security group to the cluster security group, which is associated with your nodes. The nodes are running the CoreDNS Pods, which your Pods performed a name lookup against.

1. From `TerminalA`, remove the security group rules that allow DNS communication to the cluster security group from your security group. If you didn’t add the DNS rules to the cluster security group in a previous step, then replace *$my_cluster_security_group_id* with the ID of the security group that you created the rules in.

   ```
   aws ec2 revoke-security-group-ingress --group-id $my_cluster_security_group_id --security-group-rule-ids $my_tcp_rule_id
   aws ec2 revoke-security-group-ingress --group-id $my_cluster_security_group_id --security-group-rule-ids $my_udp_rule_id
   ```

1. From `TerminalB`, attempt to access the application again.

   ```
   curl my-app
   ```

   An example output is as follows.

   ```
   curl: (6) Could not resolve host: my-app
   ```

   The attempt fails because the Pod is no longer able to access the CoreDNS Pods, which have the cluster security group associated to them. The cluster security group no longer has the security group rules that allow DNS communication from the security group associated to your Pod.

   If you attempt to access the application using the IP addresses returned for one of the Pods in a previous step, you still receive a response because all ports are allowed between Pods that have the security group associated to them and a name lookup isn’t required.

1. Once you’ve finished experimenting, you can remove the sample security group policy, application, and security group that you created. Run the following commands from `TerminalA`.

   ```
   kubectl delete namespace my-namespace
   aws ec2 revoke-security-group-ingress --group-id $my_pod_security_group_id --security-group-rule-ids $my_inbound_self_rule_id
   wait
   sleep 45s
   aws ec2 delete-security-group --group-id $my_pod_security_group_id
   ```

# Attach multiple network interfaces to Pods
Multiple interfaces

By default, the Amazon VPC CNI plugin assigns one IP address to each pod. This IP address is attached to an *elastic network interface* that handles all incoming and outgoing traffic for the pod. To increase bandwidth and packets-per-second performance, you can use the *Multi-NIC feature* of the VPC CNI to configure a multi-homed pod. A multi-homed pod is a single Kubernetes pod that uses multiple network interfaces (and multiple IP addresses). By running a multi-homed pod, you can spread its application traffic across multiple network interfaces by using concurrent connections. This is especially useful for Artificial Intelligence (AI), Machine Learning (ML), and High Performance Computing (HPC) use cases.

The following diagram shows a multi-homed pod running on a worker node with multiple network interface cards (NICs) in use.

![A multi-homed pod with two network interfaces attached: one network interface with ENA and one network interface with ENA and EFA](http://docs.aws.amazon.com/eks/latest/userguide/images/multi-homed-pod.png)


## Background


On Amazon EC2, an *elastic network interface* is a logical networking component in a VPC that represents a virtual network card. For many EC2 instance types, the network interfaces share a single network interface card (NIC) in hardware. This single NIC has a maximum bandwidth and packet per second rate.

If the multi-NIC feature is enabled, the VPC CNI doesn’t assign IP addresses in bulk, which it does by default. Instead, the VPC CNI assigns one IP address to a network interface on each network card on demand when a new pod starts. This behavior reduces IP address exhaustion, which multi-homed pods would otherwise accelerate. Because the VPC CNI assigns IP addresses on demand, pods might take longer to start on instances with the multi-NIC feature enabled.

## Considerations

+ Ensure that your Kubernetes cluster is running VPC CNI version `1.20.0` or later. The multi-NIC feature isn’t available in earlier versions.
+ Enable the `ENABLE_MULTI_NIC` environment variable in the VPC CNI plugin. You can run the following command to set the variable and trigger a rollout of the DaemonSet.
  +  `kubectl set env daemonset aws-node -n kube-system ENABLE_MULTI_NIC=true` 
+ Ensure that you create worker nodes that have multiple network interface cards (NICs). For a list of EC2 instances that have multiple network interface cards, see [Network cards](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#network-cards) in the *Amazon EC2 User Guide*.
+ If the multi-NIC feature is enabled, the VPC CNI doesn’t assign IP addresses in bulk, which it does by default. Because the VPC CNI assigns IP addresses on demand, pods might take longer to start on instances with the multi-NIC feature enabled. For more information, see the previous section [Background](#pod-multi-nic-background).
+ With the multi-NIC feature enabled, pods don’t have multiple network interfaces by default. You must configure each workload to use multi-NIC. Add the `k8s.amazonaws.com/nicConfig: multi-nic-attachment` annotation to workloads that should have multiple network interfaces.
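
After setting the variable, you can confirm that it reached the DaemonSet spec (a quick sanity check; the exact output shape depends on your add-on configuration):

```shell
# Show the ENABLE_MULTI_NIC entry in the aws-node DaemonSet environment.
kubectl get daemonset aws-node -n kube-system -o yaml | grep -A 1 ENABLE_MULTI_NIC
```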

### `IPv6` Considerations

+  **Custom IAM policy** - For `IPv6` clusters, create and use the following custom IAM policy for the VPC CNI. This policy is specific to multi-NIC. For more general information about using the VPC CNI with `IPv6` clusters, see [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md).

  ```
  {
      "Version":"2012-10-17",		 	 	 
      "Statement": [
          {
              "Sid": "AmazonEKSCNIPolicyIPv6MultiNIC",
              "Effect": "Allow",
              "Action": [
                  "ec2:CreateNetworkInterface",
                  "ec2:DescribeInstances",
                  "ec2:AssignIpv6Addresses",
                  "ec2:DetachNetworkInterface",
                  "ec2:DescribeNetworkInterfaces",
                  "ec2:DescribeTags",
                  "ec2:ModifyNetworkInterfaceAttribute",
                  "ec2:DeleteNetworkInterface",
                  "ec2:DescribeInstanceTypes",
                  "ec2:UnassignIpv6Addresses",
                  "ec2:AttachNetworkInterface",
                  "ec2:DescribeSubnets"
              ],
              "Resource": "*"
          },
          {
              "Sid": "AmazonEKSCNIPolicyENITagIPv6MultiNIC",
              "Effect": "Allow",
              "Action": "ec2:CreateTags",
              "Resource": "arn:aws:ec2:*:*:network-interface/*"
          }
      ]
  }
  ```
+  `IPv6` **transition mechanism not available** - If you use the multi-NIC feature, the VPC CNI doesn’t assign an `IPv4` address to pods on an `IPv6` cluster. Without the multi-NIC feature, the VPC CNI assigns a host-local `IPv4` address to each pod so that a pod can communicate with external `IPv4` resources in another Amazon VPC or the internet.

## Usage


After the multi-NIC feature is enabled in the VPC CNI and the `aws-node` pods have restarted, you can configure each workload to be multi-homed. The following is an example of a YAML configuration with the required annotation:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-deployment
  namespace: ecommerce
  labels:
    app: orders
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      annotations:
         k8s.amazonaws.com/nicConfig: multi-nic-attachment
      labels:
        app: orders
    spec:
...
```
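
After deploying, one way to verify that the annotation reached the pods is to search the pod manifests for it (the namespace and label here match the example above; substitute your own):

```shell
# Print any nicConfig annotations on the deployment's pods.
kubectl get pods -n ecommerce -l app=orders -o yaml | grep nicConfig
```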

## Frequently Asked Questions


### **1. What is a network interface card (NIC)?**


A network interface card (NIC), also simply called a network card, is a physical device that enables network connectivity for the underlying cloud compute hardware. In modern EC2 servers, this refers to the Nitro network card. An Elastic Network Interface (ENI) is a virtual representation of this underlying network card.

Some EC2 instance types have multiple NICs for greater bandwidth and packet rate performance. For such instances, you can assign secondary ENIs to the additional network cards. For example, ENI 1 can function as the interface for the NIC attached to network card index 0, whereas ENI 2 can function as the interface for the NIC attached to a separate network card index.

### **2. What is a multi-homed pod?**


A multi-homed pod is a single Kubernetes pod with multiple network interfaces (and by implication multiple IP addresses). Each pod network interface is associated with an [Elastic Network Interface (ENI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html), and these ENIs are logical representations of separate NICs on the underlying worker node. With multiple network interfaces, a multi-homed pod has additional data transfer capacity, which also raises its data transfer rate.

**Important**  
The VPC CNI can only configure multi-homed pods on instance types that have multiple NICs.

### **3. Why should I use this feature?**


If you need to scale network performance in your Kubernetes-based workloads, you can use the multi-NIC feature to run multi-homed pods that interface with all the underlying NICs that have an ENA device attached to them. Leveraging additional network cards raises the bandwidth capacity and packet rate performance of your applications by distributing application traffic across multiple concurrent connections. This is especially useful for Artificial Intelligence (AI), Machine Learning (ML), and High Performance Computing (HPC) use cases.

### **4. How do I use this feature?**


1. First, you must ensure that your Kubernetes cluster is using VPC CNI version 1.20 or later. For the steps to update the VPC CNI as an EKS add-on, see [Update the Amazon VPC CNI (Amazon EKS add-on)](vpc-add-on-update.md).

1. Then, you have to enable multi-NIC support in the VPC CNI by using the `ENABLE_MULTI_NIC` environment variable.

1. Then, you must ensure that you make and join nodes that have multiple network cards. For a list of EC2 instance types that have multiple network cards, see [Network cards](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#network-cards) in the *Amazon EC2 User Guide*.

1. Finally, you configure each workload to use either multiple network interfaces (multi-homed pods) or use a single network interface.

### **5. How do I configure my workloads to use multiple NICs on a supported worker node?**


To use multi-homed pods, you need to add the following annotation: `k8s.amazonaws.com/nicConfig: multi-nic-attachment`. This attaches an ENI from every NIC in the underlying instance to the pod (a one-to-many mapping between the pod and the NICs).

If this annotation is missing, the VPC CNI assumes that your pod only requires one network interface and assigns it an IP address from an ENI on any available NIC.

### **6. What network interface adapters are supported with this feature?**


You can use any network interface adapter if you have at least one ENA attached to the underlying network card for IP traffic. For more information about ENA, see [Elastic Network Adapter (ENA)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html) in the *Amazon EC2 User Guide*.

Supported network device configurations:
+  **ENA** interfaces provide all of the traditional IP networking and routing features that are required to support IP networking for a VPC. For more information, see [Enable enhanced networking with ENA on your EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/enhanced-networking-ena.html).
+  **EFA (EFA with ENA)** interfaces provide both the ENA device for IP networking and the EFA device for low-latency, high-throughput communication.

**Important**  
If a network card only has an **EFA-only** adapter attached to it, the VPC CNI will skip it when provisioning network connectivity for a multi-homed pod. However, if you combine an **EFA-only** adapter with an **ENA** adapter on a network card, then the VPC CNI will manage ENIs on this device as well. To use EFA-only interfaces with EKS clusters, see [Run machine learning training on Amazon EKS with Elastic Fabric Adapter](node-efa.md).

### **7. Can I see if a node in my cluster has ENA support?**


Yes, you can use the AWS CLI or EC2 API to retrieve network information about an EC2 instance in your cluster. This provides details on whether or not the instance has ENA support. In the following example, replace `<your-instance-id>` with the EC2 instance ID of a node.

 AWS CLI example:

```
aws ec2 describe-instances --instance-ids <your-instance-id> --query "Reservations[].Instances[].EnaSupport"
```

Example output:

```
[ true ]
```

### **8. Can I see the different IP addresses associated with a pod?**


No, not easily. However, you can use `nsenter` from the node to run common network tools such as `ip route show` and see the additional IP addresses and interfaces.
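
For example, a rough sketch, run as root on the worker node (it assumes `crictl` and a containerd runtime on the node, and uses a hypothetical pod name):

```shell
# Find the pod's sandbox, extract its PID, then look inside the pod's
# network namespace to see its interfaces and routes.
pod_id=$(crictl pods --name my-deployment-5df6f7687b-4fbjm -q)
pid=$(crictl inspectp "$pod_id" | grep -o '"pid": [0-9]\+' | grep -o '[0-9]\+')
nsenter -t "$pid" -n ip -brief addr show
nsenter -t "$pid" -n ip route show
```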

### **9. Can I control the number of network interfaces for my pods?**


No. When your workload is configured to use multiple NICs on a supported instance, a single pod automatically has an IP address from every network card on the instance. In contrast, single-homed pods have one network interface attached to one NIC on the instance.

**Important**  
Network cards that *only* have an **EFA-only** device attached to them are skipped by the VPC CNI.

### **10. Can I configure my pods to use a specific NIC?**


No, this isn’t supported. If a pod has the relevant annotation, then the VPC CNI automatically configures it to use every NIC with an ENA adapter on the worker node.

### **11. Does this feature work with the other VPC CNI networking features?**


Yes, the multi-NIC feature in the VPC CNI works with both *custom networking* and *enhanced subnet discovery*. However, the multi-homed pods don’t use the custom subnets or security groups. Instead, the VPC CNI assigns IP addresses and network interfaces to the multi-homed pods with the same subnet and security group configuration as the node. For more information about custom networking, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).

The multi-NIC feature in the VPC CNI doesn’t work with and can’t be combined with *security groups for pods*.

### **12. Can I use network policies with this feature?**


Yes, you can use Kubernetes network policies with multi-NIC. Kubernetes network policies restrict network traffic to and from your pods. For more information about applying network policies with the VPC CNI, see [Limit Pod traffic with Kubernetes network policies](cni-network-policy.md).

### **13. Is multi-NIC support enabled in EKS Auto Mode?**


Multi-NIC isn’t supported for EKS Auto Mode clusters.