


# Deploy Amazon EKS on-premises with AWS Outposts

You can use Amazon EKS to run on-premises Kubernetes applications on AWS Outposts. You can deploy Amazon EKS on Outposts in the following ways:
+  **Extended clusters** – Run the Kubernetes control plane in an AWS Region and nodes on your Outpost.
+  **Local clusters** – Run the Kubernetes control plane and nodes on your Outpost.

For both deployment options, the Kubernetes control plane is fully managed by AWS. You can use the same Amazon EKS APIs, tools, and console that you use in the cloud to create and run Amazon EKS on Outposts.

The following diagram shows these deployment options.

![Outpost deployment options](https://docs.aws.amazon.com/eks/latest/userguide/images/outposts-deployment-options.png)


## When to use each deployment option


Both local and extended clusters are general-purpose deployment options and can be used for a range of applications.

With local clusters, you can run the entire Amazon EKS cluster locally on Outposts. This option can mitigate the risk of application downtime that might result from temporary network disconnects to the cloud. These network disconnects can be caused by fiber cuts or weather events. Because the entire Amazon EKS cluster runs locally on Outposts, applications remain available. You can perform cluster operations during network disconnects to the cloud. For more information, see [Prepare local Amazon EKS clusters on AWS Outposts for network disconnects](eks-outposts-network-disconnects.md). If you’re concerned about the quality of the network connection from your Outposts to the parent AWS Region and require high availability through network disconnects, use the local cluster deployment option.

With extended clusters, you can conserve capacity on your Outpost because the Kubernetes control plane runs in the parent AWS Region. This option is suitable if you can invest in reliable, redundant network connectivity from your Outpost to the AWS Region. The quality of the network connection is critical for this option. The way that Kubernetes handles network disconnects between the Kubernetes control plane and nodes might lead to application downtime. For more information on the behavior of Kubernetes, see [Scheduling, Preemption, and Eviction](https://kubernetes.io/docs/concepts/scheduling-eviction/) in the Kubernetes documentation.
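The option is fixed at cluster creation time: a `CreateCluster` request that includes an Outpost configuration creates a local cluster, and one without it creates a cluster whose control plane runs in the cloud. A minimal sketch with the AWS CLI (every name, ID, and ARN below is a placeholder):

```shell
# Sketch only; all names, IDs, and ARNs below are placeholders.
# The presence of "outpostConfig" is what makes the cluster a local cluster.
cat >local-cluster.json <<'EOF'
{
  "name": "my-local-cluster",
  "roleArn": "arn:aws:iam::111122223333:role/myAmazonEKSLocalClusterRole",
  "resourcesVpcConfig": {
    "subnetIds": ["subnet-ExampleID1"]
  },
  "outpostConfig": {
    "outpostArns": ["arn:aws:outposts:region-code:111122223333:outpost/op-uniqueid"],
    "controlPlaneInstanceType": "m5.large"
  }
}
EOF
# Then create it with: aws eks create-cluster --cli-input-json file://local-cluster.json
# For an extended cluster, omit "outpostConfig" and launch your nodes in Outpost subnets.
```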

## Comparing the deployment options


The following table compares the differences between the two options.


| Feature | Extended cluster | Local cluster | 
| --- | --- | --- | 
|  Kubernetes control plane location  |   AWS Region  |  Outpost  | 
|  Kubernetes control plane account  |   AWS account  |  Your account  | 
|  Regional availability  |  See [Service endpoints](https://docs.aws.amazon.com/general/latest/gr/eks.html#eks_region)   |  US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Middle East (Bahrain), and South America (São Paulo)  | 
|  Kubernetes minor versions  |  See [Supported Amazon EKS versions](kubernetes-versions.md)   |  See [Supported Amazon EKS versions](kubernetes-versions.md)   | 
|  Platform versions  |  See [EKS platform versions](https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html)   |  See [Learn Kubernetes and Amazon EKS platform versions for AWS Outposts](eks-outposts-platform-versions.md)   | 
|  Outpost form factors  |  Outpost racks  |  Outpost racks  | 
|  User interfaces  |   AWS Management Console, AWS CLI, Amazon EKS API, `eksctl`, AWS CloudFormation, and Terraform  |   AWS Management Console, AWS CLI, Amazon EKS API, `eksctl`, AWS CloudFormation, and Terraform  | 
|  Managed policies  |   [AmazonEKSClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazoneksclusterpolicy) and [AWS managed policy: AmazonEKSServiceRolePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazoneksservicerolepolicy)   |   [AmazonEKSLocalOutpostClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazonekslocaloutpostclusterpolicy) and [AWS managed policy: AmazonEKSLocalOutpostServiceRolePolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazonekslocaloutpostservicerolepolicy)   | 
|  Cluster VPC and subnets  |  See [View Amazon EKS networking requirements for VPC and subnets](network-reqs.md)   |  See [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md)   | 
|  Cluster endpoint access  |  Public, private, or both  |  Private only  | 
|  Kubernetes API server authentication  |   AWS Identity and Access Management (IAM) and OIDC  |  IAM and `x.509` certificates  | 
|  Node types  |  Self-managed only  |  Self-managed only  | 
|  Node compute types  |  Amazon EC2 on-demand  |  Amazon EC2 on-demand  | 
|  Node storage types  |  Amazon EBS `gp2` and local NVMe SSD  |  Amazon EBS `gp2` and local NVMe SSD  | 
|  Amazon EKS optimized AMIs  |  Amazon Linux, Windows, and Bottlerocket  |  Amazon Linux only  | 
|  IP versions  |   `IPv4` only  |   `IPv4` only  | 
|  Add-ons  |  Amazon EKS add-ons or self-managed add-ons  |  Self-managed add-ons only  | 
|  Default Container Network Interface  |  Amazon VPC CNI plugin for Kubernetes  |  Amazon VPC CNI plugin for Kubernetes  | 
|  Kubernetes control plane logs  |  Amazon CloudWatch Logs  |  Amazon CloudWatch Logs  | 
|  Load balancing  |  Use the [AWS Load Balancer Controller](aws-load-balancer-controller.md) to provision Application Load Balancers only (no Network Load Balancers)  |  Use the [AWS Load Balancer Controller](aws-load-balancer-controller.md) to provision Application Load Balancers only (no Network Load Balancers)  | 
|  Secrets envelope encryption  |  See [Encrypt Kubernetes secrets with KMS on existing clusters](enable-kms.md)   |  Not supported  | 
|  IAM roles for service accounts  |  See [IAM roles for service accounts](iam-roles-for-service-accounts.md)   |  Not supported  | 
|  Troubleshooting  |  See [Troubleshoot problems with Amazon EKS clusters and nodes](troubleshooting.md)   |  See [Troubleshoot local Amazon EKS clusters on AWS Outposts](eks-outposts-troubleshooting.md)   | 


# Create local Amazon EKS clusters on AWS Outposts for high availability

You can use local clusters to run your entire Amazon EKS cluster locally on AWS Outposts. This helps mitigate the risk of application downtime that might result from temporary network disconnects to the cloud. These disconnects can be caused by fiber cuts or weather events. Because the entire Kubernetes cluster runs locally on Outposts, applications remain available. You can perform cluster operations during network disconnects to the cloud. For more information, see [Prepare local Amazon EKS clusters on AWS Outposts for network disconnects](eks-outposts-network-disconnects.md). The following diagram shows a local cluster deployment.

![Outpost local cluster](https://docs.aws.amazon.com/eks/latest/userguide/images/outposts-local-cluster.png)


Local clusters are generally available for use with Outposts racks.

## Supported AWS Regions


You can create local clusters in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Middle East (Bahrain), and South America (São Paulo). For detailed information about supported features, see [Comparing the deployment options](eks-outposts.md#outposts-overview-comparing-deployment-options).


# Deploy an Amazon EKS cluster on AWS Outposts

This topic provides an overview of what to consider when running a local cluster on an Outpost. The topic also provides instructions for how to deploy a local cluster on an Outpost.

**Important**  
These considerations aren’t replicated in related Amazon EKS documentation. If other Amazon EKS documentation topics conflict with the considerations here, follow the considerations here.
These considerations are subject to change and might change frequently. So, we recommend that you regularly review this topic.
Many of the considerations are different than the considerations for creating a cluster on the AWS Cloud.
+ Local clusters support Outpost racks only. A single local cluster can run across multiple physical Outpost racks that comprise a single logical Outpost. A single local cluster can’t run across multiple logical Outposts. Each logical Outpost has a single Outpost ARN.
+ Local clusters run and manage the Kubernetes control plane in your account on the Outpost. You can’t run workloads on the Kubernetes control plane instances or modify the Kubernetes control plane components. These nodes are managed by the Amazon EKS service. Changes to the Kubernetes control plane don’t persist through automatic Amazon EKS management actions, such as patching.
+ Local clusters support self-managed add-ons and self-managed Amazon Linux node groups. The [Amazon VPC CNI plugin for Kubernetes](managing-vpc-cni.md), [kube-proxy](managing-kube-proxy.md), and [CoreDNS](managing-coredns.md) add-ons are automatically installed on local clusters.
+ Local clusters require the use of Amazon EBS on Outposts. Your Outpost must have Amazon EBS available for the Kubernetes control plane storage. Outposts support Amazon EBS `gp2` volumes only.
+ Amazon EBS backed Kubernetes `PersistentVolumes` are supported using the Amazon EBS CSI driver.
+ The control plane instances of local clusters are set up in [stacked highly available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). Two out of the three control plane instances must be healthy at all times to maintain quorum. If quorum is lost, contact AWS support, as some service-side actions will be required to enable the new managed instances.

 **Prerequisites** 
+ Familiarity with the [Outposts deployment options](eks-outposts.md#outposts-overview-comparing-deployment-options), [Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations](eks-outposts-capacity-considerations.md), and [VPC requirements and considerations](eks-outposts-vpc-subnet-requirements.md).
+ An existing Outpost. For more information, see [What is AWS Outposts?](https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html)
+ The `kubectl` command line tool is installed on your computer or AWS CloudShell. The version can be the same as or up to one minor version earlier or later than the Kubernetes version of your cluster. For example, if your cluster version is `1.29`, you can use `kubectl` version `1.28`, `1.29`, or `1.30` with it. To install or upgrade `kubectl`, see [Set up `kubectl` and `eksctl`](install-kubectl.md).
+ Version `2.12.3` or later or version `1.27.160` or later of the AWS Command Line Interface (AWS CLI) installed and configured on your device or AWS CloudShell. To check your current version, use `aws --version | cut -d / -f2 | cut -d ' ' -f1`. Package managers such as `yum`, `apt-get`, or Homebrew for macOS are often several versions behind the latest version of the AWS CLI. To install the latest version, see [Installing](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html) and [Quick configuration with aws configure](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config) in the *AWS Command Line Interface User Guide*. The AWS CLI version that is installed in AWS CloudShell might also be several versions behind the latest version. To update it, see [Installing AWS CLI to your home directory](https://docs.aws.amazon.com/cloudshell/latest/userguide/vm-specs.html#install-cli-software) in the *AWS CloudShell User Guide*.
+ An IAM principal (user or role) with permissions to `create` and `describe` an Amazon EKS cluster. For more information, see [Create a local Kubernetes cluster on an Outpost](security-iam-id-based-policy-examples.md#policy-create-local-cluster) and [List or describe all clusters](security-iam-id-based-policy-examples.md#policy-example2).

When a local Amazon EKS cluster is created, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that creates the cluster is permanently added. The principal is specifically added to the Kubernetes RBAC authorization table as the administrator. This entity has `system:masters` permissions. The identity of this entity isn’t visible in your cluster configuration. So, it’s important to note the entity that created the cluster and make sure that you never delete it. Initially, only the principal that created the cluster can make calls to the Kubernetes API server using `kubectl`. If you use the console to create the cluster, make sure that the same IAM credentials are in the AWS SDK credential chain when you run `kubectl` commands on your cluster. After your cluster is created, you can grant other IAM principals access to your cluster.
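For example, granting another principal administrator access with EKS access entries might look like the following sketch (the role ARN is a placeholder; `AmazonEKSClusterAdminPolicy` is an Amazon EKS access policy):

```shell
# Placeholder principal; replace with the IAM role or user to grant access.
aws eks create-access-entry --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-admin-role
aws eks associate-access-policy --cluster-name my-cluster \
  --principal-arn arn:aws:iam::111122223333:role/my-admin-role \
  --policy-arn arn:aws:eks::aws:policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```

Both commands require credentials that can administer the cluster.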

## Create an Amazon EKS local cluster


You can create a local cluster with either of the following tools, described on this page:
+  [`eksctl`](#eksctl_create_cluster_outpost) 
+  [AWS Management Console](#console_create_cluster_outpost) 

You can also use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/eks/create-cluster.html), the [Amazon EKS API](https://docs.aws.amazon.com/eks/latest/APIReference/API_CreateCluster.html), the [AWS SDKs](https://aws.amazon.com/developer/tools/), [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-eks-cluster-outpostconfig.html), or [Terraform](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) to create clusters on Outposts.

### `eksctl`


 **To create a local cluster with `eksctl`** 

1. Install version `0.215.0` or later of the `eksctl` command line tool on your device or AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. Copy the contents that follow to your device. Replace the following values and then run the modified command to create the `outpost-control-plane.yaml` file:
   + Replace *region-code* with the [supported AWS Region](eks-outposts-local-cluster-overview.md#outposts-control-plane-supported-regions) that you want to create your cluster in.
   + Replace *my-cluster* with a name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   + Replace *vpc-ExampleID1* and *subnet-ExampleID1* with the IDs of your existing VPC and subnet. The VPC and subnet must meet the requirements in [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md).
   + Replace *uniqueid* with the ID of your Outpost.
   + Replace *m5.large* with an instance type available on your Outpost. Before choosing an instance type, see [Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations](eks-outposts-capacity-considerations.md). Three control plane instances are deployed. You can’t change this number.

     ```
     cat >outpost-control-plane.yaml <<EOF
     apiVersion: eksctl.io/v1alpha5
     kind: ClusterConfig
     
     metadata:
       name: my-cluster
       region: region-code
       version: "1.35"
     
     vpc:
       clusterEndpoints:
         privateAccess: true
       id: "vpc-ExampleID1"
       subnets:
         private:
           outpost-subnet-1:
             id: "subnet-ExampleID1"
     
     outpost:
       controlPlaneOutpostARN: arn:aws:outposts:region-code:111122223333:outpost/op-uniqueid
       controlPlaneInstanceType: m5.large
     EOF
     ```

     For a complete list of all available options and defaults, see [AWS Outposts Support](https://eksctl.io/usage/outposts/) and [Config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation.

1. Create the cluster using the configuration file that you created in the previous step. If you don't specify an existing VPC and subnet in the configuration file, `eksctl` creates them on your Outpost for you.

   ```
   eksctl create cluster -f outpost-control-plane.yaml
   ```

   Cluster provisioning takes several minutes. While the cluster is being created, several lines of output appear. The last line of output is similar to the following example line.

   ```
   [✓]  EKS cluster "my-cluster" in "region-code" region is ready
   ```
**Tip**  
To see the most commonly used options that you can specify when creating a cluster with `eksctl`, use the `eksctl create cluster --help` command. To see all available options, you can use a `config` file. For more information, see [Using config files](https://eksctl.io/usage/creating-and-managing-clusters/#using-config-files) and the [config file schema](https://eksctl.io/usage/schema/) in the `eksctl` documentation. You can find [config file examples](https://github.com/weaveworks/eksctl/tree/master/examples) on GitHub.

   The `eksctl` command automatically created an [access entry](access-entries.md) for the IAM principal (user or role) that created the cluster and granted the IAM principal administrator permissions to Kubernetes objects on the cluster. If you don’t want the cluster creator to have administrator access to Kubernetes objects on the cluster, add the following text to the previous configuration file: `bootstrapClusterCreatorAdminPermissions: false` (at the same level as `metadata`, `vpc`, and `outpost`). If you added the option, then after cluster creation, you need to create an access entry for at least one IAM principal, or no IAM principals will have access to Kubernetes objects on the cluster.
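   If you choose that option, a sketch of adding it to the configuration file from the earlier step (same file name; the key sits at the top level, alongside `metadata`, `vpc`, and `outpost`):

   ```shell
   # Append the top-level key to the config file created earlier.
   cat >>outpost-control-plane.yaml <<'EOF'
   bootstrapClusterCreatorAdminPermissions: false
   EOF
   ```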

### AWS Management Console


 **To create your cluster with the AWS Management Console** 

1. You need an existing VPC and subnet that meet Amazon EKS requirements. For more information, see [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md).

1. If you already have a local cluster IAM role, or you’re going to create your cluster with `eksctl`, then you can skip this step. By default, `eksctl` creates a role for you.

   1. Run the following command to create the `eks-local-cluster-role-trust-policy.json` file.

      ```
      cat >eks-local-cluster-role-trust-policy.json <<EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
              "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
      ```

   1. Create the Amazon EKS cluster IAM role. To create an IAM role, the [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal) that is creating the role must be assigned the `iam:CreateRole` action (permission).

      ```
      aws iam create-role --role-name myAmazonEKSLocalClusterRole --assume-role-policy-document file://"eks-local-cluster-role-trust-policy.json"
      ```

   1. Attach the Amazon EKS managed policy named [AmazonEKSLocalOutpostClusterPolicy](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonEKSLocalOutpostClusterPolicy.html) to the role. To attach an IAM policy to an [IAM principal](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html#iam-term-principal), the principal that is attaching the policy must be assigned one of the following IAM actions (permissions): `iam:AttachUserPolicy` or `iam:AttachRolePolicy`.

      ```
      aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonEKSLocalOutpostClusterPolicy --role-name myAmazonEKSLocalClusterRole
      ```

1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

1. At the top of the console screen, make sure that you have selected a [supported AWS Region](eks-outposts-local-cluster-overview.md#outposts-control-plane-supported-regions).

1. Choose **Add cluster** and then choose **Create**.

1. On the **Configure cluster** page, enter or select values for the following fields:
   +  **Kubernetes control plane location** – Choose AWS Outposts.
   +  **Outpost ID** – Choose the ID of the Outpost that you want to create your control plane on.
   +  **Instance type** – Select an instance type. Only the instance types available on your Outpost are displayed. In the dropdown list, each instance type shows the number of nodes that it’s recommended for. Before choosing an instance type, see [Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations](eks-outposts-capacity-considerations.md). All replicas are deployed using the same instance type. You can’t change the instance type after your cluster is created. Three control plane instances are deployed. You can’t change this number.
   +  **Name** – A name for your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **Kubernetes version** – Choose the Kubernetes version that you want to use for your cluster. We recommend selecting the latest version, unless you need to use an earlier version.
   +  **Cluster service role** – Choose the Amazon EKS cluster IAM role that you created in a previous step to allow the Kubernetes control plane to manage AWS resources.
   +  **Kubernetes cluster administrator access** – If you want the IAM principal (role or user) that’s creating the cluster to have administrator access to the Kubernetes objects on the cluster, accept the default (allow). Amazon EKS creates an access entry for the IAM principal and grants cluster administrator permissions to the access entry. For more information about access entries, see [Grant IAM users access to Kubernetes with EKS access entries](access-entries.md).

     If you want a different IAM principal than the one creating the cluster to have administrator access to Kubernetes cluster objects, choose the disallow option. After cluster creation, any IAM principal that has IAM permissions to create access entries can add access entries for any IAM principals that need access to Kubernetes cluster objects. For more information about the required IAM permissions, see [Actions defined by Amazon Elastic Kubernetes Service](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonelastickubernetesservice.html#amazonelastickubernetesservice-actions-as-permissions) in the Service Authorization Reference. If you choose the disallow option and don’t create any access entries, then no IAM principals will have access to the Kubernetes objects on the cluster.
   +  **Tags** – (Optional) Add any tags to your cluster. For more information, see [Organize Amazon EKS resources with tags](eks-using-tags.md). When you’re done with this page, choose **Next**.

1. On the **Specify networking** page, select values for the following fields:
   +  **VPC** – Choose an existing VPC. The VPC must have a sufficient number of IP addresses available for the cluster, any nodes, and other Kubernetes resources that you want to create. Your VPC must meet the requirements in [VPC requirements and considerations](eks-outposts-vpc-subnet-requirements.md#outposts-vpc-requirements).
   +  **Subnets** – By default, all available subnets in the VPC specified in the previous field are preselected. The subnets that you choose must meet the requirements in [Subnet requirements and considerations](eks-outposts-vpc-subnet-requirements.md#outposts-subnet-requirements).
   +  **Security groups** – (Optional) Specify one or more security groups that you want Amazon EKS to associate to the network interfaces that it creates. Amazon EKS automatically creates a security group that enables communication between your cluster and your VPC. Amazon EKS associates this security group, and any that you choose, to the network interfaces that it creates. For more information about the cluster security group that Amazon EKS creates, see [View Amazon EKS security group requirements for clusters](sec-group-reqs.md). You can modify the rules in the cluster security group that Amazon EKS creates. If you choose to add your own security groups, you can’t change the ones that you choose after cluster creation. For on-premises hosts to communicate with the cluster endpoint, you must allow inbound traffic from the cluster security group. For clusters that don’t have an ingress and egress internet connection (also known as private clusters), you must do one of the following:
     + Add the security group associated with required VPC endpoints. For more information about the required endpoints, see [Using interface VPC endpoints](eks-outposts-vpc-subnet-requirements.md#vpc-subnet-requirements-vpc-endpoints) in [Subnet access to AWS services](eks-outposts-vpc-subnet-requirements.md#subnet-access-to-services).
     + Modify the security group that Amazon EKS created to allow traffic from the security group associated with the VPC endpoints. When you’re done with this page, choose **Next**.

1. On the **Configure observability** page, you can optionally choose which **Metrics** and **Control plane logging** options you want to turn on. By default, each log type is turned off.
   + For more information on the Prometheus metrics option, see [Step 1: Turn on Prometheus metrics](prometheus.md#turn-on-prometheus-metrics).
   + For more information on the **Control plane logging** options, see [Send control plane logs to CloudWatch Logs](control-plane-logs.md). When you’re done with this page, choose **Next**.

1. On the **Review and create** page, review the information that you entered or selected on the previous pages. If you need to make changes, choose **Edit**. When you’re satisfied, choose **Create**. The **Status** field shows **CREATING** while the cluster is provisioned.

   Cluster provisioning takes several minutes.

## View your Amazon EKS local cluster


1. After your cluster is created, you can view the Amazon EC2 control plane instances that were created.

   ```
   aws ec2 describe-instances --query 'Reservations[*].Instances[*].{Name:Tags[?Key==`Name`]|[0].Value}' | grep my-cluster-control-plane
   ```

   An example output is as follows.

   ```
   "Name": "my-cluster-control-plane-id1"
   "Name": "my-cluster-control-plane-id2"
   "Name": "my-cluster-control-plane-id3"
   ```

   Each instance is tainted with `node-role.eks-local.amazonaws.com/control-plane` so that no workloads are ever scheduled on the control plane instances. For more information about taints, see [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) in the Kubernetes documentation. Amazon EKS continuously monitors the state of local clusters. We perform automatic management actions, such as security patches and repairing unhealthy instances. When local clusters are disconnected from the cloud, we complete actions to ensure that the cluster is repaired to a healthy state upon reconnect.

1. If you created your cluster using `eksctl`, then you can skip this step. `eksctl` completes this step for you. Enable `kubectl` to communicate with your cluster by adding a new context to the `kubectl` `config` file. For instructions on how to create and update the file, see [Connect kubectl to an EKS cluster by creating a kubeconfig file](create-kubeconfig.md).

   ```
   aws eks update-kubeconfig --region region-code --name my-cluster
   ```

   An example output is as follows.

   ```
   Added new context arn:aws:eks:region-code:111122223333:cluster/my-cluster to /home/username/.kube/config
   ```

1. To connect to your local cluster’s Kubernetes API server, you must have access to the local gateway for the subnet, or you must connect from within the VPC. For more information about connecting an Outpost rack to your on-premises network, see [How local gateways for racks work](https://docs.aws.amazon.com/outposts/latest/userguide/how-racks-work.html) in the AWS Outposts User Guide. If you use Direct VPC Routing and the Outpost subnet has a route to your local gateway, the private IP addresses of the Kubernetes control plane instances are automatically broadcast over your local network. The local cluster’s Kubernetes API server endpoint is hosted in Amazon Route 53 (Route 53). The API server endpoint can be resolved by public DNS servers to the Kubernetes API servers’ private IP addresses.

   Local clusters’ Kubernetes control plane instances are configured with static elastic network interfaces with fixed private IP addresses that don’t change throughout the cluster lifecycle. Machines that interact with the Kubernetes API server might not have connectivity to Route 53 during network disconnects. If this is the case, we recommend configuring `/etc/hosts` with the static private IP addresses for continued operations. We also recommend setting up local DNS servers and connecting them to your Outpost. For more information, see the [AWS Outposts documentation](https://docs.aws.amazon.com/outposts/latest/userguide/how-outposts-works.html#dns). Run the following command to confirm that communication is established with your cluster.

   ```
   kubectl get svc
   ```

   An example output is as follows.

   ```
   NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
   kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   28h
   ```

1. (Optional) Test authentication to your local cluster when it’s in a disconnected state from the AWS Cloud. For instructions, see [Prepare local Amazon EKS clusters on AWS Outposts for network disconnects](eks-outposts-network-disconnects.md).
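If you adopt the `/etc/hosts` recommendation from the earlier step, the entries can be generated like this (the hostname and IP addresses are placeholders; use your cluster’s API server endpoint hostname and the fixed private IPs of the control plane instances):

```shell
# Placeholders: replace with your cluster's API server hostname and the
# fixed private IP addresses of the three control plane instances. Note
# these down while the Outpost is still connected to the Region.
API_HOST="eks-local-cluster.example.com"
for ip in 10.0.1.10 10.0.1.11 10.0.1.12; do
  printf '%s %s\n' "$ip" "$API_HOST"
done > hosts-entries.txt
# Append as root so name resolution keeps working during a disconnect:
#   cat hosts-entries.txt >> /etc/hosts
```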

### Internal resources


Amazon EKS creates the following resources on your cluster. The resources are for Amazon EKS internal use. For proper functioning of your cluster, don’t edit or modify these resources.
+ The following [mirror Pods](https://kubernetes.io/docs/reference/glossary/?all=true#term-mirror-pod):
  +  `aws-iam-authenticator-node-hostname` 
  +  `eks-certificates-controller-node-hostname` 
  +  `etcd-node-hostname` 
  +  `kube-apiserver-node-hostname` 
  +  `kube-controller-manager-node-hostname` 
  +  `kube-scheduler-node-hostname` 
+ The following self-managed add-ons:
  +  `kube-system/coredns` 
  +  `kube-system/kube-proxy` (not created until you add your first node)
  +  `kube-system/aws-node` (not created until you add your first node). Local clusters use the Amazon VPC CNI plugin for Kubernetes for cluster networking. Do not change the configuration for control plane instances (Pods named `aws-node-controlplane-*`). There are configuration variables that you can use to change the default value for when the plugin creates new network interfaces. For more information, see the [documentation](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/README.md) on GitHub.
+ The following services:
  +  `default/kubernetes` 
  +  `kube-system/kube-dns` 
+ A `PodSecurityPolicy` named `eks.system` 
+ A `ClusterRole` named `eks:system:podsecuritypolicy` 
+ A `ClusterRoleBinding` named `eks:system` 
+ In addition to the [cluster security group](sec-group-reqs.md), Amazon EKS creates a security group in your AWS account that’s named `eks-local-internal-do-not-use-or-edit-cluster-name-uniqueid`. This security group allows traffic to flow freely between Kubernetes components running on the control plane instances.

Recommended next steps:
+  [Grant the IAM principal that created the cluster the required permissions to view Kubernetes resources in the AWS Management Console](view-kubernetes-resources.md#view-kubernetes-resources-permissions) 
+  [Grant IAM entities access to your cluster](grant-k8s-access.md). If you want the entities to view Kubernetes resources in the Amazon EKS console, grant the [Required permissions](view-kubernetes-resources.md#view-kubernetes-resources-permissions) to the entities.
+  [Configure logging for your cluster](control-plane-logs.md) 
+ Familiarize yourself with what happens during [network disconnects](eks-outposts-network-disconnects.md).
+  [Add nodes to your cluster](eks-outposts-self-managed-nodes.md) 
+ Consider setting up a backup plan for your `etcd`. Amazon EKS doesn’t support automated backup and restore of `etcd` for local clusters. For more information, see [Backing up an etcd cluster](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) in the Kubernetes documentation. The two main options are using `etcdctl` to automate taking snapshots or using Amazon EBS volume backups.
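If you choose the `etcdctl` option, a scheduled snapshot job is the usual pattern. The following sketch only assembles a hypothetical cron entry for inspection; the backup directory, schedule, and the certificates and endpoints that `etcdctl` needs to reach your cluster's `etcd` are deployment-specific assumptions.

```shell
# Sketch: assemble a nightly cron entry that snapshots etcd with etcdctl.
# Paths and schedule are hypothetical; etcdctl must be able to reach your
# cluster's etcd endpoints with the proper client certificates.
BACKUP_DIR=/var/backups/etcd
# \$ keeps the date expansion for cron; \% escapes % as cron requires.
SNAPSHOT_CMD="etcdctl snapshot save ${BACKUP_DIR}/etcd-\$(date +\%F).db"
CRON_ENTRY="0 2 * * * ${SNAPSHOT_CMD}"
echo "${CRON_ENTRY}"
```

You would add the printed line to the crontab of a host with access to the control plane instances.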

# Learn Kubernetes and Amazon EKS platform versions for AWS Outposts
EKS platform versions

Local cluster platform versions represent the capabilities of the Amazon EKS cluster on AWS Outposts. The versions include the components that run on the Kubernetes control plane and the Kubernetes API server flags that are enabled. They also include the current Kubernetes patch version. Each Kubernetes minor version has one or more associated platform versions. The platform versions for different Kubernetes minor versions are independent. The platform versions for local clusters and Amazon EKS clusters in the cloud are independent.

When a new Kubernetes minor version is available for local clusters, such as `1.31`, the initial platform version for that Kubernetes minor version starts at `eks-local-outposts.1`. Amazon EKS then releases new platform versions periodically to enable new Kubernetes control plane settings and to provide security fixes.

When new local cluster platform versions become available for a minor version:
+ The platform version number is incremented (`eks-local-outposts.n+1`).
+ Amazon EKS automatically updates all existing local clusters to the latest platform version for their corresponding Kubernetes minor version. Automatic updates are rolled out incrementally: the managed Kubernetes control plane instances running on the Outpost are replaced one at a time until all three instances are replaced by new ones.
+ The Kubernetes control plane instance replacement process stops progressing if there is a risk of service interruption. Amazon EKS attempts to replace an instance only when the other two Kubernetes control plane instances are healthy and passing all readiness conditions as cluster nodes.
+ A platform version rollout typically takes less than 30 minutes to complete. If a cluster remains in the `UPDATING` state for an extended amount of time, see [Troubleshoot local Amazon EKS clusters on AWS Outposts](eks-outposts-troubleshooting.md) and seek help from AWS Support. Never manually terminate Kubernetes control plane instances unless instructed by AWS Support.
+ Amazon EKS might publish a new node AMI with a corresponding patch version. All patch versions are compatible between the Kubernetes control plane and node AMIs for a single Kubernetes minor version.

New platform versions don’t introduce breaking changes or cause service interruptions.

Local clusters are always created with the latest available platform version (`eks-local-outposts.n`) for the specified Kubernetes version.
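You can read a cluster's current platform version with `aws eks describe-cluster` and compare it against the newest entry for its Kubernetes minor version in the tables in this topic. A minimal sketch, using hypothetical version strings in place of the CLI output:

```shell
# Sketch: check whether an automatic platform version update is pending.
# The strings below are hypothetical; in practice, read the current value:
#   aws eks describe-cluster --name my-cluster \
#       --query cluster.platformVersion --output text
current="eks-local-outposts.5"
latest="eks-local-outposts.8"
# Compare the numeric suffixes after the last dot.
if [ "${current##*.}" -lt "${latest##*.}" ]; then
  STATUS="update pending"
else
  STATUS="up to date"
fi
echo "${STATUS}"
```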

The current and recent platform versions are described in the following tables.

To receive notifications of all source file changes to this specific documentation page, you can subscribe to the following URL with an RSS reader:

```
https://github.com/awsdocs/amazon-eks-user-guide/commits/mainline/latest/ug/outposts/eks-outposts-platform-versions.adoc.atom
```

## Kubernetes version `1.31`


The following admission controllers are enabled for all `1.31` platform versions: `CertificateApproval`, `CertificateSigning`, `CertificateSubjectRestriction`, `ClusterTrustBundleAttest`, `DefaultIngressClass`, `DefaultStorageClass`, `DefaultTolerationSeconds`, `ExtendedResourceToleration`, `LimitRanger`, `MutatingAdmissionWebhook`, `NamespaceLifecycle`, `NodeRestriction`, `PersistentVolumeClaimResize`, `PodSecurity`, `Priority`, `ResourceQuota`, `RuntimeClass`, `ServiceAccount`, `StorageObjectInUseProtection`, `TaintNodesByCondition`, `ValidatingAdmissionPolicy`, and `ValidatingAdmissionWebhook`.


| Kubernetes version | Amazon EKS platform version | Release notes | Release date | 
| --- | --- | --- | --- | 
|   `1.31.14`   |   `eks-local-outposts.8`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.31.14`. AWS IAM Authenticator updated to `v0.7.8`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.4`. Bottlerocket version updated to `v1.52.0`.  |  December 23, 2025  | 
|   `1.31.12`   |   `eks-local-outposts.5`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.31.10`. AWS IAM Authenticator updated to `v0.7.4`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.2`. Bottlerocket version updated to `v1.47.0`.  |  October 3, 2025  | 
|   `1.31.9`   |   `eks-local-outposts.4`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.31.9`. AWS IAM Authenticator updated to `v0.7.2`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.0`. Bottlerocket version updated to `v1.43.0`.  |  August 9, 2025  | 
|   `1.31.7`   |   `eks-local-outposts.3`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.31.9`. AWS IAM Authenticator updated to `v0.7.1`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.5`. Bottlerocket version updated to `v1.40.0`.  |  June 19, 2025  | 
|   `1.31.6`   |   `eks-local-outposts.2`   |  New platform version with security fixes and enhancements. Bottlerocket version updated to `v1.36.0`.  |  April 24, 2025  | 
|   `1.31.6`   |   `eks-local-outposts.1`   |  Initial release of Kubernetes version `v1.31` for local Amazon EKS clusters on Outposts.  |  April 9, 2025  | 

## Kubernetes version `1.30`


The following admission controllers are enabled for all `1.30` platform versions: `CertificateApproval`, `CertificateSigning`, `CertificateSubjectRestriction`, `ClusterTrustBundleAttest`, `DefaultIngressClass`, `DefaultStorageClass`, `DefaultTolerationSeconds`, `ExtendedResourceToleration`, `LimitRanger`, `MutatingAdmissionWebhook`, `NamespaceLifecycle`, `NodeRestriction`, `PersistentVolumeClaimResize`, `PodSecurity`, `Priority`, `ResourceQuota`, `RuntimeClass`, `ServiceAccount`, `StorageObjectInUseProtection`, `TaintNodesByCondition`, `ValidatingAdmissionPolicy`, and `ValidatingAdmissionWebhook`.


| Kubernetes version | Amazon EKS platform version | Release notes | Release date | 
| --- | --- | --- | --- | 
|   `1.30.14`   |   `eks-local-outposts.10`   |  New platform version with security fixes and enhancements. AWS IAM Authenticator updated to `v0.7.8`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.4`. Bottlerocket version updated to `v1.52.0`.  |  December 23, 2025  | 
|   `1.30.14`   |   `eks-local-outposts.7`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.30.14`. AWS IAM Authenticator updated to `v0.7.4`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.2`. Bottlerocket version updated to `v1.47.0`.  |  October 3, 2025  | 
|   `1.30.13`   |   `eks-local-outposts.6`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.30.13`. AWS IAM Authenticator updated to `v0.7.2`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.0`. Bottlerocket version updated to `v1.43.0`.  |  August 9, 2025  | 
|   `1.30.11`   |   `eks-local-outposts.5`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.30.11`. AWS IAM Authenticator updated to `v0.7.1`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.5`. Bottlerocket version updated to `v1.40.0`.  |  June 19, 2025  | 
|   `1.30.10`   |   `eks-local-outposts.4`   |  New platform version with security fixes and enhancements. Bottlerocket version updated to `v1.36.0`.  |  April 24, 2025  | 
|   `1.30.10`   |   `eks-local-outposts.3`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.30.10`. AWS IAM Authenticator updated to `v0.6.29`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.2`. CoreDNS updated to `v1.11.4`. AWS Cloud Controller Manager updated to `v1.30.8`. Bottlerocket version updated to `v1.34.0`.  |  March 27, 2025  | 
|   `1.30.7`   |   `eks-local-outposts.2`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.30.7`. AWS IAM Authenticator updated to `v0.6.28`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.0`. Updated Bottlerocket version to `v1.29.0`.  |  January 10, 2025  | 
|   `1.30.5`   |   `eks-local-outposts.1`   |  Initial release of Kubernetes version `v1.30` for local Amazon EKS clusters on Outposts.  |  November 13, 2024  | 

## Kubernetes version `1.29`


The following admission controllers are enabled for all `1.29` platform versions: `CertificateApproval`, `CertificateSigning`, `CertificateSubjectRestriction`, `ClusterTrustBundleAttest`, `DefaultIngressClass`, `DefaultStorageClass`, `DefaultTolerationSeconds`, `ExtendedResourceToleration`, `LimitRanger`, `MutatingAdmissionWebhook`, `NamespaceLifecycle`, `NodeRestriction`, `PersistentVolumeClaimResize`, `PodSecurity`, `Priority`, `ResourceQuota`, `RuntimeClass`, `ServiceAccount`, `StorageObjectInUseProtection`, `TaintNodesByCondition`, `ValidatingAdmissionPolicy`, and `ValidatingAdmissionWebhook`.


| Kubernetes version | Amazon EKS platform version | Release notes | Release date | 
| --- | --- | --- | --- | 
|   `1.29.15`   |   `eks-local-outposts.13`   |  New platform version with security fixes and enhancements. AWS IAM Authenticator updated to `v0.7.8`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.4`. Bottlerocket version updated to `v1.52.0`.  |  December 23, 2025  | 
|   `1.29.15`   |   `eks-local-outposts.10`   |  New platform version with security fixes and enhancements. AWS IAM Authenticator updated to `v0.7.4`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.2`. Bottlerocket version updated to `v1.47.0`.  |  October 3, 2025  | 
|   `1.29.15`   |   `eks-local-outposts.9`   |  New platform version with security fixes and enhancements. AWS IAM Authenticator updated to `v0.7.2`. Amazon VPC CNI plugin for Kubernetes updated to `v1.20.0`. Bottlerocket version updated to `v1.43.0`.  |  August 9, 2025  | 
|   `1.29.15`   |   `eks-local-outposts.8`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.29.15`. AWS IAM Authenticator updated to `v0.7.1`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.5`. Bottlerocket version updated to `v1.40.0`.  |  June 19, 2025  | 
|   `1.29.14`   |   `eks-local-outposts.7`   |  New platform version with security fixes and enhancements. Bottlerocket version updated to `v1.36.0`.  |  April 24, 2025  | 
|   `1.29.14`   |   `eks-local-outposts.6`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.29.14`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.2`. CoreDNS updated to `v1.11.4`. AWS Cloud Controller Manager updated to `v1.29.8`. Bottlerocket version updated to `v1.34.0`.  |  March 27, 2025  | 
|   `1.29.11`   |   `eks-local-outposts.5`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.29.11`. Amazon VPC CNI plugin for Kubernetes updated to `v1.19.0`. Updated CoreDNS image to `v1.11.3`. Updated Bottlerocket version to `v1.29.0`.  |  January 10, 2025  | 
|   `1.29.9`   |   `eks-local-outposts.4`   |  New platform version with security fixes and enhancements. kube-proxy updated to `v1.29.9`. AWS IAM Authenticator updated to `v0.6.26`. Updated Bottlerocket version to `v1.26.0`.  |  November 8, 2024  | 
|   `1.29.6`   |   `eks-local-outposts.3`   |  New platform version with security fixes and enhancements. Updated Bottlerocket version to `v1.22.0`.  |  October 22, 2024  | 
|   `1.29.6`   |   `eks-local-outposts.2`   |  New platform version with security fixes and enhancements. Updated Bottlerocket version to `v1.21.0`.  |  August 27, 2024  | 
|   `1.29.6`   |   `eks-local-outposts.1`   |  Initial release of Kubernetes version `v1.29` for local Amazon EKS clusters on Outposts.  |  August 20, 2024  | 

# Create a VPC and subnets for Amazon EKS clusters on AWS Outposts
Create a VPC and subnets

When you create a local cluster, you specify a VPC and at least one private subnet that runs on Outposts. This topic provides an overview of the VPC and subnets requirements and considerations for your local cluster.

## VPC requirements and considerations


When you create a local cluster, the VPC that you specify must meet the following requirements and considerations:
+ Make sure that the VPC has enough IP addresses for the local cluster, any nodes, and other Kubernetes resources that you want to create. If the VPC that you want to use doesn’t have enough IP addresses, increase the number of available IP addresses. You can do this by [associating additional Classless Inter-Domain Routing (CIDR) blocks](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#add-ipv4-cidr) with your VPC. You can associate private (RFC 1918) and public (non-RFC 1918) CIDR blocks with your VPC either before or after you create your cluster. It can take up to five hours for a CIDR block that you associate with a VPC to be recognized by your cluster.
+ The VPC can’t have assigned IP prefixes or IPv6 CIDR blocks. Because of these constraints, the information that’s covered in [Assign more IP addresses to Amazon EKS nodes with prefixes](cni-increase-ip-addresses.md) and [Learn about IPv6 addresses to clusters, Pods, and services](cni-ipv6.md) isn’t applicable to your VPC.
+ The VPC must have DNS hostnames and DNS resolution enabled. Without these features, the local cluster fails to create, and you need to enable the features and recreate your cluster. For more information, see [DNS attributes for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the Amazon VPC User Guide.
+ To access your local cluster over your local network, the VPC must be associated with your Outpost’s local gateway route table. For more information, see [VPC associations](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-local-gateways.html#vpc-associations) in the AWS Outposts User Guide.
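As a rough check on the first requirement (enough IP addresses), the number of addresses that a CIDR block contributes follows from its prefix length. A minimal sketch, using an example `/24` block; Amazon VPC reserves five addresses in each subnet (network address, VPC router, DNS, future use, and broadcast):

```shell
# Sketch: estimate how many IP addresses a CIDR block adds to your VPC.
# The /24 prefix is an example; substitute your block's prefix length.
prefix=24
total=$(( 1 << (32 - prefix) ))
usable_per_subnet=$(( total - 5 ))   # if the whole block were one subnet
echo "${total} addresses total, ${usable_per_subnet} usable as a single subnet"
```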

## Subnet requirements and considerations


When you create the cluster, specify at least one private subnet. If you specify more than one subnet, the Kubernetes control plane instances are evenly distributed across the subnets. The subnets must exist on the same Outpost and must have proper routes and security group permissions to communicate with each other. When you create a local cluster, the subnets that you specify must meet the following requirements:
+ The subnets are all on the same logical Outpost.
+ The subnets together have at least three available IP addresses for the Kubernetes control plane instances. If three subnets are specified, each subnet must have at least one available IP address. If two subnets are specified, each subnet must have at least two available IP addresses. If one subnet is specified, the subnet must have at least three available IP addresses.
+ The subnets have a route to the Outpost rack’s [local gateway](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-local-gateways.html) to access the Kubernetes API server over your local network. If the subnets don’t have a route to the Outpost rack’s local gateway, you must communicate with your Kubernetes API server from within the VPC.
+ The subnets must use IP address-based naming. Amazon EC2 [resource-based naming](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-naming.html#instance-naming-rbn) isn’t supported by Amazon EKS.
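The per-subnet free-IP requirement above can be expressed as a small check. The subnet counts below are hypothetical; in practice you would read `AvailableIpAddressCount` from `aws ec2 describe-subnets` for each subnet:

```shell
# Sketch: verify the free-IP requirement for the three control plane
# instances: 1 subnet needs >= 3 free IPs, 2 subnets need >= 2 each,
# 3 or more subnets need >= 1 each.
check_subnets() {
  count=$#
  case "$count" in
    1) min=3 ;;
    2) min=2 ;;
    *) min=1 ;;
  esac
  for free in "$@"; do
    [ "$free" -ge "$min" ] || { echo "insufficient"; return 1; }
  done
  echo "ok"
}
check_subnets 4 2   # two subnets with 4 and 2 free addresses
```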

## Subnet access to AWS services


The local cluster’s private subnets on Outposts must be able to communicate with Regional AWS services. You can achieve this by using a [NAT gateway](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) for outbound internet access or, if you want to keep all traffic private within your VPC, using [interface VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html).

### Using a NAT gateway


The local cluster’s private subnets on Outposts must have an associated route table that has a route to a NAT gateway in a public subnet that is in the Outpost’s parent Availability Zone. The public subnet must have a route to an [internet gateway](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html). The NAT gateway enables outbound internet access and prevents unsolicited inbound connections from the internet to instances on the Outpost.

### Using interface VPC endpoints


If the local cluster’s private subnets on Outposts don’t have an outbound internet connection, or if you want to keep all traffic private within your VPC, then you must create the following interface VPC endpoints and [gateway endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in a Regional subnet before creating your cluster.


| Endpoint | Endpoint type | 
| --- | --- | 
|   `com.amazonaws.region-code.ssm`   |  Interface  | 
|   `com.amazonaws.region-code.ssmmessages`   |  Interface  | 
|   `com.amazonaws.region-code.ec2messages`   |  Interface  | 
|   `com.amazonaws.region-code.ec2`   |  Interface  | 
|   `com.amazonaws.region-code.secretsmanager`   |  Interface  | 
|   `com.amazonaws.region-code.logs`   |  Interface  | 
|   `com.amazonaws.region-code.sts`   |  Interface  | 
|   `com.amazonaws.region-code.ecr.api`   |  Interface  | 
|   `com.amazonaws.region-code.ecr.dkr`   |  Interface  | 
|   `com.amazonaws.region-code.s3`   |  Gateway  | 

The endpoints must meet the following requirements:
+ Created in a private subnet located in your Outpost’s parent Availability Zone
+ Have private DNS names enabled
+ Have an attached security group that permits inbound HTTPS traffic from the CIDR range of the private Outpost subnet

Creating endpoints incurs charges. For more information, see [AWS PrivateLink pricing](https://aws.amazon.com/privatelink/pricing/). If your Pods need access to other AWS services, then you need to create additional endpoints. For a comprehensive list of endpoints, see [AWS services that integrate with AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/aws-services-privatelink-support.html).
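The endpoint names in the preceding table follow the pattern `com.amazonaws.region-code.service`. The following sketch expands them for a placeholder Region; substitute your Outpost's parent Region code:

```shell
# Sketch: generate the fully qualified endpoint names for your Region.
# us-west-2 is a placeholder Region code.
REGION=us-west-2
ENDPOINTS=""
for svc in ssm ssmmessages ec2messages ec2 secretsmanager logs sts ecr.api ecr.dkr s3; do
  ENDPOINTS="${ENDPOINTS} com.amazonaws.${REGION}.${svc}"
done
echo "${ENDPOINTS}"
```

You would pass each name as the `--service-name` value when creating the endpoints (all of type Interface except `s3`, which is a Gateway endpoint).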

## Create a VPC


You can create a VPC that meets the previous requirements using one of the following AWS CloudFormation templates:
+  **[Template 1](https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2022-09-20/amazon-eks-local-outposts-vpc-subnet.yaml)** – This template creates a VPC with one private subnet on the Outpost and one public subnet in the AWS Region. The private subnet has a route to the internet through a NAT gateway that resides in the public subnet in the AWS Region. This template can be used to create a local cluster in a subnet with egress internet access.
+  ** [Template 2](https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2023-03-20/amazon-eks-local-outposts-fully-private-vpc-subnet.yaml) ** – This template creates a VPC with one private subnet on the Outpost and the minimum set of VPC Endpoints required to create a local cluster in a subnet that doesn’t have ingress or egress internet access (also referred to as a private subnet).
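Either template can be launched with AWS CloudFormation. The stack name below is hypothetical, and the sketch only assembles the command for inspection; run it (or use the CloudFormation console) against your own account and Region:

```shell
# Sketch: launch Template 1 as a CloudFormation stack.
# The stack name is hypothetical; the command is assembled for inspection.
TEMPLATE_URL="https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2022-09-20/amazon-eks-local-outposts-vpc-subnet.yaml"
CREATE_CMD="aws cloudformation create-stack --stack-name eks-local-outposts-vpc --template-url ${TEMPLATE_URL}"
echo "${CREATE_CMD}"
```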

# Prepare local Amazon EKS clusters on AWS Outposts for network disconnects
Prepare for disconnects

If your local network has lost connectivity with the AWS Cloud, you can continue to use your local Amazon EKS cluster on an Outpost. This topic covers how you can prepare your local cluster for network disconnects and related considerations.
+ Local clusters enable stability and continued operations during temporary, unplanned network disconnects. AWS Outposts remains a fully connected offering that acts as an extension of the AWS Cloud in your data center. In the event of network disconnects between your Outpost and the AWS Cloud, we recommend attempting to restore your connection. For instructions, see [AWS Outposts rack network troubleshooting checklist](https://docs.aws.amazon.com/outposts/latest/userguide/network-troubleshoot.html) in the *AWS Outposts User Guide*. For more information about how to troubleshoot issues with local clusters, see [Troubleshoot local Amazon EKS clusters on AWS Outposts](eks-outposts-troubleshooting.md).
+ Outposts emit a `ConnectedStatus` metric that you can use to monitor the connectivity state of your Outpost. For more information, see [Outposts Metrics](https://docs.aws.amazon.com/outposts/latest/userguide/outposts-cloudwatch-metrics.html#outposts-metrics) in the *AWS Outposts User Guide*.
+ Local clusters use IAM as the default authentication mechanism using the [AWS Identity and Access Management authenticator for Kubernetes](https://github.com/kubernetes-sigs/aws-iam-authenticator). IAM isn’t available during network disconnects. So, local clusters support an alternative authentication mechanism using `x.509` certificates that you can use to connect to your cluster during network disconnects. For information about how to obtain and use an `x.509` certificate for your cluster, see [Authenticating to your local cluster during a network disconnect](#outposts-network-disconnects-authentication).
+ If you can’t access Route 53 during network disconnects, consider using local DNS servers in your on-premises environment. The Kubernetes control plane instances use static IP addresses. You can configure the hosts that you use to connect to your cluster with the endpoint hostname and IP addresses as an alternative to using local DNS servers. For more information, see [DNS](https://docs.aws.amazon.com/outposts/latest/userguide/how-outposts-works.html#dns) in the *AWS Outposts User Guide*.
+ If you expect increases in application traffic during network disconnects, you can provision spare compute capacity in your cluster when connected to the cloud. Amazon EC2 instances are included in the price of AWS Outposts. So, running spare instances doesn’t impact your AWS usage cost.
+ To enable create, update, and scale operations for workloads during network disconnects, your application’s container images must be accessible over the local network, and your cluster must have enough capacity. Local clusters don’t host a container registry for you. If the Pods have previously run on those nodes, container images are cached on the nodes. If you typically pull your application’s container images from Amazon ECR in the cloud, consider running a local cache or registry. A local cache or registry is helpful if you require create, update, and scale operations for workload resources during network disconnects.
+ Local clusters use Amazon EBS as the default storage class for persistent volumes and the Amazon EBS CSI driver to manage the lifecycle of Amazon EBS persistent volumes. During network disconnects, Pods that are backed by Amazon EBS can’t be created, updated, or scaled. This is because these operations require calls to the Amazon EBS API in the cloud. If you’re deploying stateful workloads on local clusters and require create, update, or scale operations during network disconnects, consider using an alternative storage mechanism.
+ Amazon EBS snapshots can’t be created or deleted if AWS Outposts can’t access the relevant AWS in-region APIs (such as the APIs for Amazon EBS or Amazon S3).
+ When integrating ALB (Ingress) with AWS Certificate Manager (ACM), certificates are pushed and stored in the memory of the AWS Outposts ALB compute instance. Current TLS termination will continue to operate in the event of a disconnect from the AWS Region. Mutating operations in this context will fail (such as new ingress definitions, new ACM-based certificate API operations, ALB compute scaling, or certificate rotation). For more information, see [Troubleshooting managed certificate renewal](https://docs.aws.amazon.com/acm/latest/userguide/troubleshooting-renewal.html) in the *AWS Certificate Manager User Guide*.
+ The Amazon EKS control plane logs are cached locally on the Kubernetes control plane instances during network disconnects. Upon reconnect, the logs are sent to CloudWatch Logs in the parent AWS Region. You can use [Prometheus](https://prometheus.io/), [Grafana](https://grafana.com/), or Amazon EKS partner solutions to monitor the cluster locally using the Kubernetes API server’s metrics endpoint or using Fluent Bit for logs.
+ If you’re using the AWS Load Balancer Controller on Outposts for application traffic, existing Pods fronted by the AWS Load Balancer Controller continue to receive traffic during network disconnects. New Pods created during network disconnects don’t receive traffic until the Outpost is reconnected to the AWS Cloud. Consider setting the replica count for your applications while connected to the AWS Cloud to accommodate your scaling needs during network disconnects.
+ The Amazon VPC CNI plugin for Kubernetes defaults to [secondary IP mode](https://aws.github.io/aws-eks-best-practices/networking/vpc-cni/#overview). It’s configured with `WARM_ENI_TARGET=1`, which causes the plugin to keep a full elastic network interface of available IP addresses in reserve. Consider changing the `WARM_ENI_TARGET`, `WARM_IP_TARGET`, and `MINIMUM_IP_TARGET` values according to your scaling needs during a disconnected state. For more information, see the [readme](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/README.md) file for the plugin on GitHub. For a list of the maximum number of Pods that’s supported by each instance type, see the [eni-max-pods.txt](https://github.com/aws/amazon-vpc-cni-k8s/blob/master/misc/eni-max-pods.txt) file on GitHub.
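In secondary IP mode, the per-instance Pod limit in the `eni-max-pods.txt` file follows from the instance type's ENI and per-ENI IP counts. A minimal sketch for an `m5.large`, which has 3 ENIs with 10 IPv4 addresses each:

```shell
# Sketch: compute the Pod capacity of an instance in secondary IP mode.
# Values are for an m5.large: 3 ENIs, 10 IPv4 addresses per ENI.
enis=3
ips_per_eni=10
# One IP per ENI is reserved for the ENI's primary address; 2 is added
# for Pods that use host networking (for example, kube-proxy and aws-node).
max_pods=$(( enis * (ips_per_eni - 1) + 2 ))
echo "${max_pods}"
```

This matches the `29` listed for `m5.large` in `eni-max-pods.txt`, and shows why larger warm-pool targets consume the instance's IP budget faster.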

## Authenticating to your local cluster during a network disconnect


AWS Identity and Access Management (IAM) isn’t available during network disconnects. You can’t authenticate to your local cluster using IAM credentials while disconnected. However, you can connect to your cluster over your local network using X.509 certificates when disconnected. You need to download and store a client X.509 certificate to use during disconnects. In this topic, you learn how to create and use the certificate to authenticate to your cluster when it’s in a disconnected state.

1. Create a certificate signing request.

   1. Generate a certificate signing request.

      ```
      openssl req -new -newkey rsa:4096 -nodes -days 365 \
          -keyout admin.key -out admin.csr -subj "/CN=admin"
      ```

   1. Create a `CertificateSigningRequest` manifest for Kubernetes.

      ```
      BASE64_CSR=$(cat admin.csr | base64 -w 0)
      cat << EOF > admin-csr.yaml
      apiVersion: certificates.k8s.io/v1
      kind: CertificateSigningRequest
      metadata:
        name: admin-csr
      spec:
        signerName: kubernetes.io/kube-apiserver-client
        request: ${BASE64_CSR}
        usages:
        - client auth
      EOF
      ```

1. Submit the certificate signing request to your cluster using `kubectl`.

   ```
   kubectl create -f admin-csr.yaml
   ```

1. Check the status of the certificate signing request.

   ```
   kubectl get csr admin-csr
   ```

   An example output is as follows.

   ```
   NAME       AGE   REQUESTOR                       CONDITION
   admin-csr  11m   kubernetes-admin                Pending
   ```

   Kubernetes created the certificate signing request.

1. Approve the certificate signing request.

   ```
   kubectl certificate approve admin-csr
   ```

1. Recheck the certificate signing request status for approval.

   ```
   kubectl get csr admin-csr
   ```

   An example output is as follows.

   ```
   NAME       AGE   REQUESTOR                     CONDITION
   admin-csr  11m   kubernetes-admin              Approved
   ```

1. Retrieve and verify the certificate.

   1. Retrieve the certificate.

      ```
      kubectl get csr admin-csr -o jsonpath='{.status.certificate}' | base64 --decode > admin.crt
      ```

   1. Verify the certificate.

      ```
      cat admin.crt
      ```

1. Create a cluster role binding for an `admin` user.

   ```
   kubectl create clusterrolebinding admin --clusterrole=cluster-admin \
       --user=admin --group=system:masters
   ```

1. Generate a user-scoped kubeconfig for a disconnected state.

   You can generate a `kubeconfig` file using the downloaded `admin` certificates. Replace *my-cluster* and *apiserver-endpoint* in the following commands.

   ```
   aws eks describe-cluster --name my-cluster \
       --query "cluster.certificateAuthority" \
       --output text | base64 --decode > ca.crt
   ```

   ```
   kubectl config --kubeconfig admin.kubeconfig set-cluster my-cluster \
       --certificate-authority=ca.crt --server apiserver-endpoint --embed-certs
   ```

   ```
   kubectl config --kubeconfig admin.kubeconfig set-credentials admin \
       --client-certificate=admin.crt --client-key=admin.key --embed-certs
   ```

   ```
   kubectl config --kubeconfig admin.kubeconfig set-context admin@my-cluster \
       --cluster my-cluster --user admin
   ```

   ```
   kubectl config --kubeconfig admin.kubeconfig use-context admin@my-cluster
   ```

1. Confirm that you can access your cluster with the new `kubeconfig` file by listing your cluster’s nodes.

   ```
   kubectl get nodes --kubeconfig admin.kubeconfig
   ```

1. If you have services already in production on your Outpost, skip this step. If Amazon EKS is the only service running on your Outpost and the Outpost isn’t currently in production, you can simulate a network disconnect. Before you go into production with your local cluster, simulate a disconnect to make sure that you can access your cluster when it’s in a disconnected state.

   1. Apply firewall rules on the networking devices that connect your Outpost to the AWS Region. This disconnects the service link of the Outpost. You can’t create any new instances. Currently running instances lose connectivity to the AWS Region and the internet.

   1. You can test the connection to your local cluster while disconnected using the X.509 certificate. Make sure to change your `kubeconfig` to the `admin.kubeconfig` that you created in a previous step. Replace *my-cluster* with the name of your local cluster.

      ```
      kubectl config use-context admin@my-cluster --kubeconfig admin.kubeconfig
      ```

   If you notice any issues with your local clusters while they’re in a disconnected state, we recommend opening a support ticket.
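While the service link is down, you can also probe the local Kubernetes API server directly with the downloaded certificates, independent of `kubectl`. A minimal sketch, assuming the `ca.crt`, `admin.crt`, and `admin.key` files from the earlier steps, with *apiserver-endpoint* as a placeholder:

```
# Probes the API server readiness endpoint with the x509 admin certificates.
probe_apiserver() {
  curl --silent --fail --connect-timeout 5 \
      --cacert ca.crt --cert admin.crt --key admin.key \
      "https://$1/readyz" || echo "apiserver unreachable"
}

probe_apiserver apiserver-endpoint
```

A plain `ok` response from `/readyz` confirms the control plane is serving requests in the disconnected state.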

# Select instance types and placement groups for Amazon EKS clusters on AWS Outposts based on capacity considerations
Capacity considerations

This topic provides guidance for selecting the Kubernetes control plane instance type and (optionally) using placement groups to meet high-availability requirements for your local Amazon EKS cluster on an Outpost.

Before you select an instance type (such as `m5`, `c5`, or `r5`) to use for your local cluster’s Kubernetes control plane on Outposts, confirm the instance types that are available on your Outpost configuration. After you identify the available instance types, select the instance size (such as `large`, `xlarge`, or `2xlarge`) based on the number of nodes that your workloads require. The following table provides recommendations for choosing an instance size.

**Note**  
The instance sizes must be slotted on your Outposts. Make sure that you have enough capacity for three instances of the size available on your Outposts for the lifetime of your local cluster. For a list of the available Amazon EC2 instance types, see the Compute and storage sections in [AWS Outposts rack features](https://aws.amazon.com/outposts/rack/features/).


| Number of nodes | Kubernetes control plane instance size | 
| --- | --- | 
|  1–20  |   `large`   | 
|  21–100  |   `xlarge`   | 
|  101–250  |   `2xlarge`   | 
|  251–500  |   `4xlarge`   | 

Each local cluster requires 246 GB of Amazon EBS storage for the Kubernetes control plane to meet the required IOPS for `etcd`. When the local cluster is created, the Amazon EBS volumes are provisioned automatically for you.
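As a hypothetical helper (the function name is illustrative), the sizing table can be encoded in shell so that tooling can pick a size from a planned node count:

```
# Maps a planned node count to the control plane instance size
# recommended in the table above.
control_plane_size() {
  local nodes=$1
  if   [ "$nodes" -le 20 ];  then echo large
  elif [ "$nodes" -le 100 ]; then echo xlarge
  elif [ "$nodes" -le 250 ]; then echo 2xlarge
  elif [ "$nodes" -le 500 ]; then echo 4xlarge
  else
    echo "clusters with more than 500 nodes are not covered by the table" >&2
    return 1
  fi
}

control_plane_size 150   # prints 2xlarge
```

Combine the result with an instance family that is available on your Outpost, for example `m5.2xlarge`.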

## Control plane placement


When you don’t specify a placement group with the `OutpostConfig.ControlPlanePlacement.GroupName` property, the Amazon EC2 instances provisioned for your Kubernetes control plane don’t receive any specific hardware placement enforcement across the underlying capacity available on your Outpost.

You can use placement groups to meet the high-availability requirements for your local Amazon EKS cluster on an Outpost. By specifying a placement group during cluster creation, you influence the placement of the Kubernetes control plane instances. The instances are spread across independent underlying hardware (racks or hosts), minimizing correlated instance impact in the event of hardware failures.

The type of spread that you can configure depends on the number of Outpost racks you have in your deployment.
+  **Deployments with one or two physical racks in a single logical Outpost** – You must have at least three hosts that are configured with the instance type that you choose for your Kubernetes control plane instances. A *spread* placement group using *host-level spread* ensures that all Kubernetes control plane instances run on distinct hosts within the underlying racks available in your Outpost deployment.
+  **Deployments with three or more physical racks in a single logical Outpost** – You must have at least three hosts configured with the instance type you choose for your Kubernetes control plane instances. A *spread* placement group using *rack-level spread* ensures that all Kubernetes control plane instances run on distinct racks in your Outpost deployment. You can alternatively use the *host-level spread* placement group as described in the previous option.

You are responsible for creating the desired placement group. You specify the placement group when calling the `CreateCluster` API. For more information about placement groups and how to create them, see [Placement Groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html) in the Amazon EC2 User Guide.
+ When a placement group is specified, there must be available slotted capacity on your Outpost to successfully create a local Amazon EKS cluster. The capacity varies based on whether you use the host or rack spread type. If there isn’t enough capacity, the cluster remains in the `Creating` state. You can check for the `Insufficient Capacity Error` in the health field of the [DescribeCluster](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) API response. You must free capacity for the creation process to progress.
+ During Amazon EKS local cluster platform and version updates, the Kubernetes control plane instances from your cluster are replaced by new instances using a rolling update strategy. During this replacement process, each control plane instance is terminated, freeing up its respective slot. A new, updated instance is provisioned in its place. The updated instance might be placed in the slot that was released. If the slot is consumed by another unrelated instance and there is no capacity left that respects the required spread topology, then the cluster remains in the `Updating` state. You can see the respective `Insufficient Capacity Error` in the health field of the [DescribeCluster](https://docs.aws.amazon.com/eks/latest/APIReference/API_DescribeCluster.html) API response. You must free capacity so that the update process can progress and reestablish the prior high availability level.
+ You can create a maximum of 500 placement groups per account in each AWS Region. For more information, see [General rules and limitations](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html#placement-groups-limitations-general) in the Amazon EC2 User Guide.
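The creation step can be sketched with the AWS CLI. The group name is a placeholder; `--spread-level` accepts `host` or `rack` depending on your rack count:

```
# Creates a spread placement group for the Kubernetes control plane.
# Wrapped in a function so you can run it once your credentials and
# Region are configured; eks-local-cp is a placeholder name.
create_cp_placement_group() {
  aws ec2 create-placement-group \
      --group-name "$1" \
      --strategy spread \
      --spread-level "$2"
}

# Example for a deployment with three or more racks:
#   create_cp_placement_group eks-local-cp rack
```

Pass the resulting group name in `OutpostConfig.ControlPlanePlacement.GroupName` when you call `CreateCluster`.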

# Troubleshoot local Amazon EKS clusters on AWS Outposts
Troubleshoot clusters

This topic covers some common errors that you might see while using local clusters and how to troubleshoot them. Local clusters are similar to Amazon EKS clusters in the cloud, but there are some differences in how they’re managed by Amazon EKS.

**Important**  
Never terminate any managed EKS local cluster `Kubernetes` control-plane instance running on Outpost unless explicitly instructed by AWS Support. Terminating these instances poses a risk to local cluster service availability, including loss of the local cluster if multiple instances are terminated simultaneously. EKS local cluster `Kubernetes` control-plane instances are identified by the tag `eks-local:controlplane-name` on the EC2 instance console.

## API behavior


Local clusters are created through the Amazon EKS API, but run asynchronously. This means that requests to the Amazon EKS API return immediately for local clusters. However, these requests might succeed, fail fast because of input validation errors, or fail with descriptive validation errors. This behavior is similar to the Kubernetes API.

Local clusters don’t transition to a `FAILED` status. Amazon EKS attempts to reconcile the cluster state with the user-requested desired state in a continuous manner. As a result, a local cluster might remain in the `CREATING` state for an extended period of time until the underlying issue is resolved.

## Describe cluster health field


You can discover local cluster issues by using the [describe-cluster](https://docs.aws.amazon.com/cli/latest/reference/eks/describe-cluster.html) Amazon EKS AWS CLI command. Local cluster issues are surfaced in the `cluster.health` field of the `describe-cluster` response. The message contained in this field includes an error code, a descriptive message, and related resource IDs. This information is available through the Amazon EKS API and AWS CLI only. In the following example, replace *my-cluster* with the name of your local cluster.

```
aws eks describe-cluster --name my-cluster --query 'cluster.health'
```

An example output is as follows.

```
{
    "issues": [
        {
            "code": "ConfigurationConflict",
            "message": "The instance type 'm5.large' is not supported in Outpost 'my-outpost-arn'.",
            "resourceIds": [
                "my-cluster-arn"
            ]
        }
    ]
}
```
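When scripting against this response, the issue codes can be extracted with standard tools. A minimal sketch (`health_codes` is a hypothetical helper; prefer a JSON-aware tool such as `jq` when available):

```
# Extracts the "code" values from a cluster.health JSON payload on stdin.
health_codes() {
  grep -o '"code": *"[^"]*"' | sed 's/.*"\([^"]*\)"$/\1/'
}

# Demo with an inline payload; in practice, pipe in the output of:
#   aws eks describe-cluster --name my-cluster --query 'cluster.health'
health_codes <<'EOF'
{ "issues": [ { "code": "ConfigurationConflict" } ] }
EOF
```

The demo prints `ConfigurationConflict`, matching the `code` field in the example output above.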

If the problem is beyond repair, you might need to delete the local cluster and create a new one. For example, this can happen if you try to provision a cluster with an instance type that’s not available on your Outpost. The following table includes common health-related errors.


| Error scenario | Code | Message | ResourceIds | 
| --- | --- | --- | --- | 
|  Provided subnets couldn’t be found.  |   `ResourceNotFound`   |   `The subnet ID subnet-id does not exist`   |  All provided subnet IDs  | 
|  Provided subnets don’t belong to the same VPC.  |   `ConfigurationConflict`   |   `Subnets specified must belong to the same VPC`   |  All provided subnet IDs  | 
|  Some provided subnets don’t belong to the specified Outpost.  |   `ConfigurationConflict`   |   `Subnet subnet-id expected to be in outpost-arn, but is in other-outpost-arn `   |  Problematic subnet ID  | 
|  Some provided subnets don’t belong to any Outpost.  |   `ConfigurationConflict`   |   `Subnet subnet-id is not part of any Outpost`   |  Problematic subnet ID  | 
|  Some provided subnets don’t have enough free addresses to create elastic network interfaces for control plane instances.  |   `ResourceLimitExceeded`   |   `The specified subnet does not have enough free addresses to satisfy the request.`   |  Problematic subnet ID  | 
|  The specified control plane instance type isn’t supported on your Outpost.  |   `ConfigurationConflict`   |   `The instance type type is not supported in Outpost outpost-arn `   |  Cluster ARN  | 
|  You terminated a control plane Amazon EC2 instance or `run-instance` succeeded, but the state observed changes to `Terminated`. This can happen for a period of time after your Outpost reconnects and Amazon EBS internal errors cause an Amazon EC2 internal work flow to fail.  |   `InternalFailure`   |   `EC2 instance state "Terminated" is unexpected`   |  Cluster ARN  | 
|  You have insufficient capacity on your Outpost. This can also happen when a cluster is being created if an Outpost is disconnected from the AWS Region.  |   `ResourceLimitExceeded`   |   `There is not enough capacity on the Outpost to launch or start the instance.`   |  Cluster ARN  | 
|  Your account exceeded your security group quota.  |   `ResourceLimitExceeded`   |  Error message returned by Amazon EC2 API  |  Target VPC ID  | 
|  Your account exceeded your elastic network interface quota.  |   `ResourceLimitExceeded`   |  Error message returned by Amazon EC2 API  |  Target subnet ID  | 
|  Control plane instances weren’t reachable through AWS Systems Manager. For resolution, see [Control plane instances aren’t reachable through AWS Systems Manager](#outposts-troubleshooting-control-plane-instances-ssm).  |   `ClusterUnreachable`   |  Amazon EKS control plane instances are not reachable through SSM. Please verify your SSM and network configuration, and reference the EKS on Outposts troubleshooting documentation.  |  Amazon EC2 instance IDs  | 
|  An error occurred while getting details for a managed security group or elastic network interface.  |  Based on Amazon EC2 client error code.  |  Error message returned by Amazon EC2 API  |  All managed security group IDs  | 
|  An error occurred while authorizing or revoking security group ingress rules. This applies to both the cluster and control plane security groups.  |  Based on Amazon EC2 client error code.  |  Error message returned by Amazon EC2 API  |  Problematic security group ID  | 
|  An error occurred while deleting an elastic network interface for a control plane instance.  |  Based on Amazon EC2 client error code.  |  Error message returned by Amazon EC2 API  |  Problematic elastic network interface ID  | 

The following table lists errors from other AWS services that are presented in the health field of the `describe-cluster` response.


| Amazon EC2 error code | Cluster health issue code | Description | 
| --- | --- | --- | 
|   `AuthFailure`   |   `AccessDenied`   |  This error can occur for a variety of reasons. The most common reason is that you accidentally removed a tag that the service uses to scope down the service linked role policy from the control plane. If this occurs, Amazon EKS can no longer manage and monitor these AWS resources.  | 
|   `UnauthorizedOperation`   |   `AccessDenied`   |  This error can occur for a variety of reasons. The most common reason is that you accidentally removed a tag that the service uses to scope down the service linked role policy from the control plane. If this occurs, Amazon EKS can no longer manage and monitor these AWS resources.  | 
|   `InvalidSubnetID.NotFound`   |   `ResourceNotFound`   |  This error occurs when the subnet ID for the ingress rules of a security group can’t be found.  | 
|   `InvalidPermission.NotFound`   |   `ResourceNotFound`   |  This error occurs when the permissions for the ingress rules of a security group aren’t correct.  | 
|   `InvalidGroup.NotFound`   |   `ResourceNotFound`   |  This error occurs when the group of the ingress rules of a security group can’t be found.  | 
|   `InvalidNetworkInterfaceID.NotFound`   |   `ResourceNotFound`   |  This error occurs when the network interface ID for the ingress rules of a security group can’t be found.  | 
|   `InsufficientFreeAddressesInSubnet`   |   `ResourceLimitExceeded`   |  This error occurs when the subnet resource quota is exceeded.  | 
|   `InsufficientCapacityOnOutpost`   |   `ResourceLimitExceeded`   |  This error occurs when the outpost capacity quota is exceeded.  | 
|   `NetworkInterfaceLimitExceeded`   |   `ResourceLimitExceeded`   |  This error occurs when the elastic network interface quota is exceeded.  | 
|   `SecurityGroupLimitExceeded`   |   `ResourceLimitExceeded`   |  This error occurs when the security group quota is exceeded.  | 
|   `VcpuLimitExceeded`   |   `ResourceLimitExceeded`   |  This is observed when creating an Amazon EC2 instance in a new account. The error might be similar to the following: "`You have requested more vCPU capacity than your current vCPU limit of 32 allows for the instance bucket that the specified instance type belongs to. Please visit http://aws.amazon.com/contact-us/ec2-request to request an adjustment to this limit."`   | 
|   `InvalidParameterValue`   |   `ConfigurationConflict`   |  Amazon EC2 returns this error code if the specified instance type isn’t supported on the Outpost.  | 
|  All other failures  |   `InternalFailure`   |  None  | 

## Unable to create or modify clusters


Local clusters require different permissions and policies than Amazon EKS clusters that are hosted in the cloud. When a cluster fails to create and produces an `InvalidPermissions` error, double check that the cluster role that you’re using has the [AmazonEKSLocalOutpostClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazonekslocaloutpostclusterpolicy) managed policy attached to it. All other API calls require the same set of permissions as Amazon EKS clusters in the cloud.

## Cluster is stuck in `CREATING` state


The amount of time it takes to create a local cluster varies depending on several factors. These factors include your network configuration, Outpost configuration, and the cluster’s configuration. In general, a local cluster is created and changes to the `ACTIVE` status within 15–20 minutes. If a local cluster remains in the `CREATING` state, you can call `describe-cluster` for information about the cause in the `cluster.health` output field.

The most common issues are the following:
+ Systems Manager in the AWS Region can’t connect to the control plane instance. You can verify this by calling `aws ssm start-session --target instance-id` from an in-Region bastion host. If that command doesn’t work, check whether Systems Manager is running on the control plane instance. As a workaround, you can also delete the cluster and then recreate it.
+ The control plane instances fail to create because of KMS key permissions for EBS volumes. If you use a user-managed KMS key for encrypted EBS volumes, the control plane instances terminate when the key isn’t accessible. If the instances are terminated, either switch to an AWS managed KMS key or make sure that your user-managed key policy grants the necessary permissions to the cluster role.
+ The control plane instances might not have the internet access that Systems Manager requires. Check whether the subnet that you provided when you created the cluster has a NAT gateway and a VPC with an internet gateway. Use VPC Reachability Analyzer to verify that the control plane instance can reach the internet gateway. For more information, see [Getting started with VPC Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/getting-started.html).
+ The role ARN that you provided is missing policies. Check if the [AWS managed policy: AmazonEKSLocalOutpostClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazonekslocaloutpostclusterpolicy) was removed from the role. This can also occur if an AWS CloudFormation stack is misconfigured.
+ All the provided subnets must be associated with the same Outpost and must reach each other. When multiple subnets are specified when a cluster is created, Amazon EKS attempts to spread the control plane instances across multiple subnets.
+ The Amazon EKS managed security groups are applied at the elastic network interface. However, other configuration elements such as NACL firewall rules might conflict with the rules for the elastic network interface.
+ VPC and subnet DNS configuration is misconfigured or missing. Review [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md).
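While investigating the issues above, it helps to fetch the cluster status and health issues together. A minimal sketch, with *my-cluster* as a placeholder (wrapped in a function because it needs configured AWS CLI credentials):

```
# Prints the cluster status alongside any reported health issues.
watch_cluster_health() {
  aws eks describe-cluster --name "$1" \
      --query '{status: cluster.status, issues: cluster.health.issues}' \
      --output json
}

# Example: watch_cluster_health my-cluster
```

Re-run the function after each remediation step until the status changes from `CREATING` to `ACTIVE`.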

## Cluster is stuck in `UPDATING` state


Amazon EKS automatically updates all existing local clusters to the latest platform versions for their corresponding Kubernetes minor version. For more information about platform versions, see [Learn Kubernetes and Amazon EKS platform versions for AWS Outposts](eks-outposts-platform-versions.md).

During an automatic platform-version rollout, the cluster status changes to `UPDATING`. The update process replaces all Kubernetes control plane instances with new ones that contain the latest security patches and bug fixes released for the respective Kubernetes minor version. In general, a local cluster platform update completes in less than 30 minutes, and the cluster returns to the `ACTIVE` status. If a local cluster remains in the `UPDATING` state for an extended period of time, call `describe-cluster` for information about the cause in the `cluster.health` output field.

Amazon EKS ensures that at least two of the three Kubernetes control plane instances are healthy and operational in order to maintain local cluster availability and prevent service interruption. If a local cluster stalls in the `UPDATING` state, it’s usually because an infrastructure or configuration issue prevents this minimum availability of two instances from being guaranteed if the process continues. In that case, the update process stops progressing to protect the local cluster from service interruption.

It’s important to troubleshoot a local cluster that’s stuck in the `UPDATING` status and address the root cause so that the update process can complete and restore the local cluster to `ACTIVE` with the high availability of three Kubernetes control plane instances.

Do not terminate any managed EKS local cluster `Kubernetes` instances on Outposts unless explicitly instructed by AWS Support. This is especially important for local clusters stuck in the `UPDATING` state, because there’s a high probability that another control plane node isn’t completely healthy, and terminating the wrong instance could cause service interruption and risk local cluster data loss.

The most common issues are the following:
+ One or more control plane instances can’t connect to Systems Manager because of a networking configuration change made since the local cluster was created. You can verify this by calling `aws ssm start-session --target instance-id` from an in-Region bastion host. If that command doesn’t work, check whether Systems Manager is running on the control plane instance.
+ New control plane instances fail to be created because of KMS key permissions for EBS volumes. If you use a user-managed KMS key for encrypted EBS volumes, the control plane instances terminate when the key isn’t accessible. If the instances are terminated, either switch to an AWS managed KMS key or make sure that your user-managed key policy grants the necessary permissions to the cluster role.
+ The control plane instances might have lost the internet access that Systems Manager requires. Check whether the subnet that was provided when you created the cluster has a NAT gateway and a VPC with an internet gateway. Use VPC Reachability Analyzer to verify that the control plane instance can reach the internet gateway. For more information, see [Getting started with VPC Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/getting-started.html). If your private networks don’t have an outbound internet connection, make sure that all the required VPC endpoints and the gateway endpoint are still present in the Regional subnet of your cluster (see [Subnet access to AWS services](eks-outposts-vpc-subnet-requirements.md#subnet-access-to-services)).
+ The role ARN that you provided is missing policies. Check that the [AWS managed policy: AmazonEKSLocalOutpostClusterPolicy](security-iam-awsmanpol.md#security-iam-awsmanpol-amazonekslocaloutpostclusterpolicy) hasn’t been removed from the role.
+ One of the new Kubernetes control plane instances might have experienced an unexpected bootstrapping failure. File a ticket with [AWS Support Center](https://console.aws.amazon.com/support/home) for further guidance on troubleshooting and log collection in this exceptional case.
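The Systems Manager connectivity check in the first bullet can also be done without opening a session. A minimal sketch; the instance ID is a placeholder:

```
# Reports the Systems Manager ping status for a control plane instance;
# "Online" means the instance is reachable through SSM.
ssm_reachable() {
  aws ssm describe-instance-information \
      --filters "Key=InstanceIds,Values=$1" \
      --query 'InstanceInformationList[0].PingStatus' \
      --output text
}

# Example: ssm_reachable i-0123456789abcdef0
```

Run this for each control plane instance ID reported in the `cluster.health` field.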

## Can’t join nodes to a cluster

+ AMI issues:
  + You’re using an incompatible AMI. Only the Amazon EKS optimized Amazon Linux 2023 AMIs are supported. If you attempt to join nodes built from Amazon Linux 2 or other unsupported AMIs to local clusters on AWS Outposts, the nodes fail to join the cluster. For more information, see [Create Amazon Linux nodes on AWS Outposts](eks-outposts-self-managed-nodes.md).
  + If you used an AWS CloudFormation template to create your nodes, make sure it wasn’t using an unsupported AMI.
+ Missing the AWS IAM Authenticator `ConfigMap` – If it’s missing, you must create it. For more information, see [Apply the `aws-auth`   `ConfigMap` to your cluster](auth-configmap.md#aws-auth-configmap) .
+ The wrong security group is used – Make sure to use `eks-cluster-sg-cluster-name-uniqueid ` for your worker nodes' security group. The selected security group is changed by AWS CloudFormation to allow a new security group each time the stack is used.
+ Following unexpected private link VPC steps – Wrong CA data (`--b64-cluster-ca`) or API Endpoint (`--apiserver-endpoint`) are passed.
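To rule out the security group problem above, list the security group names attached to a node and look for the `eks-cluster-sg-` prefix. A minimal sketch; the instance ID is a placeholder:

```
# Lists security group names attached to an instance; look for
# eks-cluster-sg-<cluster-name>-<uniqueid> among the results.
node_security_groups() {
  aws ec2 describe-instances --instance-ids "$1" \
      --query 'Reservations[].Instances[].SecurityGroups[].GroupName' \
      --output text
}

# Example: node_security_groups i-0123456789abcdef0
```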

## Collecting logs


When an Outpost gets disconnected from the AWS Region that it’s associated with, the Kubernetes cluster likely will continue working normally. However, if the cluster doesn’t work properly, follow the troubleshooting steps in [Prepare local Amazon EKS clusters on AWS Outposts for network disconnects](eks-outposts-network-disconnects.md). If you encounter other issues, contact AWS Support. AWS Support can guide you on downloading and running a log collection tool. That way, you can collect logs from your Kubernetes cluster control plane instances and send them to AWS Support for further investigation.

## Control plane instances aren’t reachable through AWS Systems Manager


When the Amazon EKS control plane instances aren’t reachable through AWS Systems Manager (Systems Manager), Amazon EKS displays the following error for your cluster.

```
Amazon EKS control plane instances are not reachable through SSM. Please verify your SSM and network configuration, and reference the EKS on Outposts troubleshooting documentation.
```

To resolve this issue, make sure that your VPC and subnets meet the requirements in [Create a VPC and subnets for Amazon EKS clusters on AWS Outposts](eks-outposts-vpc-subnet-requirements.md) and that you completed the steps in [Setting up Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) in the AWS Systems Manager User Guide.

# Create Amazon Linux nodes on AWS Outposts
Nodes

**Important**  
Amazon EKS Local Clusters on Outposts only supports nodes created from the following Amazon EKS-optimized Amazon Linux 2023 AMIs:  
Standard Amazon Linux 2023 (`amazon-linux-2023/x86_64/standard`)
Accelerated Nvidia Amazon Linux 2023 (`amazon-linux-2023/x86_64/nvidia`)
Accelerated Neuron Amazon Linux 2023 (`amazon-linux-2023/x86_64/neuron`)
 AWS ended support for EKS AL2-optimized and AL2-accelerated AMIs, effective November 26, 2025. While you can continue using EKS AL2 AMIs after this end-of-support (EOS) date, EKS no longer releases new Kubernetes versions or updates to AL2 AMIs, including minor releases, patches, and bug fixes. See [Amazon EKS AMI deprecation FAQs](https://docs.aws.amazon.com/eks/latest/userguide/eks-ami-deprecation-faqs.html) for more information on AL2 deprecation.

This topic describes how you can launch Auto Scaling groups of Amazon Linux nodes on an Outpost that register with your Amazon EKS cluster. The cluster can be on the AWS Cloud or on an Outpost.

**Prerequisites**
+ An existing Outpost. For more information, see [What is AWS Outposts](https://docs.aws.amazon.com/outposts/latest/userguide/what-is-outposts.html).
+ An existing Amazon EKS cluster. To deploy a cluster on the AWS Cloud, see [Create an Amazon EKS cluster](create-cluster.md). To deploy a cluster on an Outpost, see [Create local Amazon EKS clusters on AWS Outposts for high availability](eks-outposts-local-cluster-overview.md).
+ If you’re creating your nodes in a cluster on the AWS Cloud and you have subnets in the AWS Region where AWS Outposts, AWS Wavelength, or AWS Local Zones are enabled, those subnets must not have been passed in when you created your cluster. If you’re creating your nodes in a cluster on an Outpost, you must have passed in an Outpost subnet when creating your cluster.
+ (Recommended for clusters on the AWS Cloud) The Amazon VPC CNI plugin for Kubernetes add-on configured with its own IAM role that has the necessary IAM policy attached to it. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md). Local clusters do not support IAM roles for service accounts.

You can create a self-managed Amazon Linux node group with `eksctl` or the AWS Management Console (with an AWS CloudFormation template). You can also use Terraform.

You can create a self-managed node group for a local cluster with the following tools described on this page:
+  [`eksctl`](#eksctl_create_nodes_outpost) 
+  [AWS Management Console](#console_create_nodes_outpost) 

**Important**  
A self-managed node group includes Amazon EC2 instances in your account. These instances aren’t automatically upgraded when you or Amazon EKS update the control plane version. A self-managed node group doesn’t have any indication in the console that it needs updating. To determine which nodes need updating, you can view the `kubelet` version installed on a node by selecting the node in the **Nodes** list on the **Overview** tab of your cluster. You must manually update the nodes. For more information, see [Update self-managed nodes for your cluster](update-workers.md).
The certificates used by `kubelet` on your self-managed nodes are issued with a one-year expiration. By default, certificate rotation is **not** enabled (see [KubeletConfiguration](https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/#kubelet-config-k8s-io-v1beta1-KubeletConfiguration)). This means that if a self-managed node runs for more than one year, it can no longer authenticate to the Kubernetes API.
As a best practice, we recommend that you regularly update your self-managed node groups to receive CVE fixes and security patches from the latest Amazon EKS optimized AMI. Updating the AMI used in self-managed node groups also triggers re-creation of the nodes and makes sure that they don’t run into issues due to expired `kubelet` certificates.
Alternatively, you can enable client certificate rotation (see [Certificate Rotation](https://kubernetes.io/docs/tasks/tls/certificate-rotation/)) when creating self-managed node groups to make sure that `kubelet` certificates are renewed as the current certificate approaches expiration.
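To see when a node’s current kubelet client certificate expires, you can inspect it with `openssl`. A minimal sketch; the certificate path shown is the kubelet default and is an assumption, so verify it on your nodes:

```
# Prints the expiration (notAfter) date of a certificate file.
cert_expiry() {
  openssl x509 -noout -enddate -in "$1"
}

# Example, run on the node itself (path is an assumption; verify it):
#   cert_expiry /var/lib/kubelet/pki/kubelet-client-current.pem
```

If the printed `notAfter` date is approaching, update the node group or rotate the certificate before it expires.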

## `eksctl`


 **To launch self-managed Linux nodes using `eksctl` ** 

1. Install version `0.215.0` or later of the `eksctl` command line tool on your device or in AWS CloudShell. To install or update `eksctl`, see [Installation](https://eksctl.io/installation) in the `eksctl` documentation.

1. If your cluster is on the AWS Cloud and the **AmazonEKS_CNI_Policy** managed IAM policy is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md). If your cluster is on your Outpost, the policy must be attached to your node role.

1. The following command creates a node group in an existing cluster. The cluster must have been created using `eksctl`. Replace *al-nodes* with a name for your node group. The node group name can’t be longer than 63 characters. It must start with a letter or digit, but can also include hyphens and underscores for the remaining characters. Replace *my-cluster* with the name of your cluster. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in. If your cluster exists on an Outpost, replace *id* with the ID of an Outpost subnet. If your cluster exists on the AWS Cloud, replace *id* with the ID of a subnet that you didn’t specify when you created your cluster. Replace the remaining example values with your own values. By default, the nodes are created with the same Kubernetes version as the control plane.

   Replace *instance-type* with an instance type available on your Outpost.

   Replace *my-key* with the name of your Amazon EC2 key pair or public key. This key is used to SSH into your nodes after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.

   Create your node group with the following command.

   ```
   eksctl create nodegroup --cluster my-cluster --name al-nodes --node-type instance-type \
       --nodes 3 --nodes-min 1 --nodes-max 4 --managed=false \
       --node-volume-type gp2 --subnet-ids subnet-id \
       --node-ami-family AmazonLinux2023
   ```

   If your cluster is deployed on the AWS Cloud:
   + The node group that you deploy can assign `IPv4` addresses to Pods from a different CIDR block than that of the instance. For more information, see [Deploy Pods in alternate subnets with custom networking](cni-custom-network.md).
   + The node group that you deploy doesn’t require outbound internet access. For more information, see [Deploy private clusters with limited internet access](private-clusters.md).

   For a complete list of all available options and defaults, see [AWS Outposts Support](https://eksctl.io/usage/outposts/) in the `eksctl` documentation.
   + If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in [Troubleshoot problems with Amazon EKS clusters and nodes](troubleshooting.md) and [Can’t join nodes to a cluster](eks-outposts-troubleshooting.md#outposts-troubleshooting-unable-to-join-nodes-to-a-cluster) in [Troubleshoot local Amazon EKS clusters on AWS Outposts](eks-outposts-troubleshooting.md).
   + An example output is as follows. Several lines are output while the nodes are created. One of the last lines of output is the following example line.

     ```
     [✔]  created 1 nodegroup(s) in cluster "my-cluster"
     ```
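   To confirm that the node group was created, you can list the node groups in your cluster (a sketch; *my-cluster* matches the name used above):

   ```
   # List node groups in the cluster and confirm that al-nodes appears
   eksctl get nodegroup --cluster my-cluster
   ```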

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Linux nodes.

## AWS Management Console


 **Step 1: Launch self-managed Linux nodes using the AWS Management Console** 

1. Download the latest version of the AWS CloudFormation template.

   ```
   curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2025-11-24/amazon-eks-outpost-nodegroup.yaml
   ```

1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation/).

1. Choose **Create stack** and then select **With new resources (standard)**.

1. For **Specify template**, select **Upload a template file** and then select **Choose file**. Select the `amazon-eks-outpost-nodegroup.yaml` file that you downloaded in a previous step and then select **Next**.

1. On the **Specify stack details** page, enter the following parameters accordingly, and then choose **Next**:
   +  **Stack name**: Choose a stack name for your AWS CloudFormation stack. For example, you can call it *al-nodes*. The name can contain only alphanumeric characters (case-sensitive) and hyphens. It must start with an alphanumeric character and can’t be longer than 100 characters. The name must be unique within the AWS Region and AWS account that you’re creating the cluster in.
   +  **ApiServerEndpoint**: Enter the Kubernetes API server endpoint. It’s visible in the Amazon EKS console or through the `DescribeCluster` API.
   +  **ClusterName**: Enter the name of your cluster. If this name doesn’t match your cluster name, your nodes can’t join the cluster.
   +  **ClusterId**: Enter the ID that the Amazon EKS service assigned to the cluster. It’s visible through the `DescribeCluster` API. If this ID doesn’t match your cluster ID, your nodes can’t join the cluster.
   +  **CertificateAuthority**: Enter the base64-encoded Kubernetes certificate authority data. It’s visible in the Amazon EKS console or through the `DescribeCluster` API.
   +  **ServiceCidr**: Enter the Kubernetes service CIDR. It’s visible in the Amazon EKS console or through the `DescribeCluster` API.
   +  **ClusterControlPlaneSecurityGroup**: Choose the **SecurityGroups** value from the AWS CloudFormation output that you generated when you created your [VPC](creating-a-vpc.md).

     The following steps show one operation to retrieve the applicable group.

      1. Open the [Amazon EKS console](https://console.aws.amazon.com/eks/home#/clusters).

     1. Choose the name of the cluster.

     1. Choose the **Networking** tab.

     1. Use the **Additional security groups** value as a reference when selecting from the **ClusterControlPlaneSecurityGroup** dropdown list.
   +  **NodeGroupName**: Enter a name for your node group. This name can be used later to identify the Auto Scaling node group that’s created for your nodes.
   +  **NodeAutoScalingGroupMinSize**: Enter the minimum number of nodes that your node Auto Scaling group can scale in to.
   +  **NodeAutoScalingGroupDesiredCapacity**: Enter the desired number of nodes to scale to when your stack is created.
   +  **NodeAutoScalingGroupMaxSize**: Enter the maximum number of nodes that your node Auto Scaling group can scale out to.
   +  **NodeInstanceType**: Choose an instance type for your nodes. If your cluster is running on the AWS Cloud, then for more information, see [Choose an optimal Amazon EC2 node instance type](choosing-instance-type.md). If your cluster is running on an Outpost, then you can only select an instance type that is available on your Outpost.
   +  **NodeImageIdSSMParam**: Pre-populated with the Amazon EC2 Systems Manager parameter of a recent Amazon EKS optimized AMI for a variable Kubernetes version. To use a different Kubernetes minor version supported with Amazon EKS, replace *1.XX* with a different [supported version](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html). We recommend specifying the same Kubernetes version as your cluster.

      To use an Amazon EKS optimized accelerated AMI, update the **NodeImageIdSSMParam** value to the desired SSM parameter. For more information about retrieving Amazon EKS AMI IDs from SSM, see [Retrieve recommended Amazon Linux AMI IDs](https://docs.aws.amazon.com/eks/latest/userguide/retrieve-ami-id.html).
**Note**  
The Amazon EKS node AMIs are based on Amazon Linux. You can track security or privacy events for Amazon Linux at the [Amazon Linux security center](https://alas.aws.amazon.com/) by choosing the tab for your desired version. You can also subscribe to the applicable RSS feed. Security and privacy events include an overview of the issue, what packages are affected, and how to update your instances to correct the issue.
   +  **NodeImageId**: (Optional) If you’re using your own custom AMI (instead of an Amazon EKS optimized AMI), enter a node AMI ID for your AWS Region. If you specify a value here, it overrides any values in the **NodeImageIdSSMParam** field.
   +  **NodeVolumeSize**: Specify a root volume size for your nodes, in GiB.
   +  **NodeVolumeType**: Specify a root volume type for your nodes.
   +  **KeyName**: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your nodes with SSH after they launch. If you don’t already have an Amazon EC2 key pair, you can create one in the AWS Management Console. For more information, see [Amazon EC2 key pairs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
**Note**  
If you don’t provide a key pair here, the AWS CloudFormation stack creation fails.
   +  **DisableIMDSv1**: By default, each node supports the Instance Metadata Service Version 1 (IMDSv1) and IMDSv2. You can disable IMDSv1. To prevent future nodes and Pods in the node group from using IMDSv1, set **DisableIMDSv1** to **true**. For more information about IMDS, see [Configuring the instance metadata service](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html). For more information about restricting access to it on your nodes, see [Restrict access to the instance profile assigned to the worker node](https://aws.github.io/aws-eks-best-practices/security/docs/iam/#restrict-access-to-the-instance-profile-assigned-to-the-worker-node).
   +  **VpcId**: Enter the ID for the [VPC](creating-a-vpc.md) that you created. Before choosing a VPC, review [VPC requirements and considerations](eks-outposts-vpc-subnet-requirements.md#outposts-vpc-requirements).
   +  **Subnets**: If your cluster is on an Outpost, then choose at least one private subnet in your VPC. Before choosing subnets, review [Subnet requirements and considerations](eks-outposts-vpc-subnet-requirements.md#outposts-subnet-requirements). You can see which subnets are private by opening each subnet link from the **Networking** tab of your cluster.
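   Several of the parameter values above come from your cluster itself. As a sketch, the following AWS CLI call retrieves the API server endpoint, cluster ID, certificate authority data, and service CIDR in one call. Here, *my-cluster* is a placeholder, and the `id` field is populated for local clusters on Outposts.

   ```
   # Retrieve the cluster values needed for the stack parameters
   aws eks describe-cluster --name my-cluster \
       --query 'cluster.{ApiServerEndpoint: endpoint, ClusterId: id, CertificateAuthority: certificateAuthority.data, ServiceCidr: kubernetesNetworkConfig.serviceIpv4Cidr}' \
       --output table
   ```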

1. Select your desired choices on the **Configure stack options** page, and then choose **Next**.

1. Select the check box to the left of **I acknowledge that AWS CloudFormation might create IAM resources.**, and then choose **Create stack**.

1. When your stack has finished creating, select it in the console and choose **Outputs**.

1. Record the **NodeInstanceRole** for the node group that was created. You need this when you configure your Amazon EKS nodes.
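   As an alternative to copying the value from the console, you can query the stack output with the AWS CLI (a sketch; replace *al-nodes* with your stack name):

   ```
   # Print the NodeInstanceRole output value of the CloudFormation stack
   aws cloudformation describe-stacks --stack-name al-nodes \
       --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" \
       --output text
   ```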

 **Step 2: Enable nodes to join your cluster** 

1. Check to see if you already have an `aws-auth` `ConfigMap`.

   ```
   kubectl describe configmap -n kube-system aws-auth
   ```

1. If you are shown an `aws-auth` `ConfigMap`, then update it as needed.

   1. Open the `ConfigMap` for editing.

      ```
      kubectl edit -n kube-system configmap/aws-auth
      ```

   1. Add a new `mapRoles` entry as needed. Set the `rolearn` value to the **NodeInstanceRole** value that you recorded in the previous procedure.

      ```
      [...]
      data:
        mapRoles: |
          - rolearn: <ARN of instance role (not instance profile)>
            username: system:node:{{EC2PrivateDNSName}}
            groups:
              - system:bootstrappers
              - system:nodes
      [...]
      ```

   1. Save the file and exit your text editor.

1. If you received the error "`Error from server (NotFound): configmaps "aws-auth" not found`", then apply the stock `ConfigMap`.

   1. Download the configuration map.

      ```
      curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/cloudformation/2020-10-29/aws-auth-cm.yaml
      ```

   1. In the `aws-auth-cm.yaml` file, set the `rolearn` to the **NodeInstanceRole** value that you recorded in the previous procedure. You can do this with a text editor, or by replacing *my-node-instance-role* and running the following command:

      ```
      sed -i.bak -e 's|<ARN of instance role (not instance profile)>|my-node-instance-role|' aws-auth-cm.yaml
      ```

   1. Apply the configuration. This command may take a few minutes to finish.

      ```
      kubectl apply -f aws-auth-cm.yaml
      ```

1. Watch the status of your nodes and wait for them to reach the `Ready` status.

   ```
   kubectl get nodes --watch
   ```

   Enter `Ctrl`+`C` to return to a shell prompt.
**Note**  
If you receive any authorization or resource type errors, see [Unauthorized or access denied (`kubectl`)](troubleshooting.md#unauthorized) in the troubleshooting topic.

   If nodes fail to join the cluster, then see [Nodes fail to join cluster](troubleshooting.md#worker-node-fail) in [Troubleshoot problems with Amazon EKS clusters and nodes](troubleshooting.md) and [Can’t join nodes to a cluster](eks-outposts-troubleshooting.md#outposts-troubleshooting-unable-to-join-nodes-to-a-cluster) in [Troubleshoot local Amazon EKS clusters on AWS Outposts](eks-outposts-troubleshooting.md).
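   As an alternative to watching interactively, the following sketch blocks until every node reports the `Ready` condition. Adjust the timeout for your environment.

   ```
   # Wait up to 10 minutes for all nodes to become Ready
   kubectl wait --for=condition=Ready nodes --all --timeout=10m
   ```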

1. Install the Amazon EBS CSI driver. For more information, see [Installation](https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/master/docs/install.md) on GitHub. In the **Set up driver permission** section, make sure to follow the instructions for the **Using IAM instance profile** option. You must use the `gp2` storage class. The `gp3` storage class isn’t supported.

   To create a `gp2` storage class on your cluster, complete the following steps.

   1. Run the following command to create the `gp2-storage-class.yaml` file.

      ```
      cat >gp2-storage-class.yaml <<EOF
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        annotations:
          storageclass.kubernetes.io/is-default-class: "true"
        name: ebs-sc
      provisioner: ebs.csi.aws.com
      volumeBindingMode: WaitForFirstConsumer
      parameters:
        type: gp2
        encrypted: "true"
      allowVolumeExpansion: true
      EOF
      ```

   1. Apply the manifest to your cluster.

      ```
      kubectl apply -f gp2-storage-class.yaml
      ```
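   To confirm that the storage class provisions volumes, you can create a test claim against it. This manifest is a hypothetical example; because the class uses `WaitForFirstConsumer`, the claim stays `Pending` until a Pod references it.

   ```
   # Write a test PersistentVolumeClaim that requests a 4 GiB volume from the ebs-sc class
   cat >test-pvc.yaml <<EOF
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: test-claim
   spec:
     accessModes:
       - ReadWriteOnce
     storageClassName: ebs-sc
     resources:
       requests:
         storage: 4Gi
   EOF
   ```

   Apply it with `kubectl apply -f test-pvc.yaml`, then check its status with `kubectl get pvc test-claim`.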

1. (GPU nodes only) If you chose a GPU instance type and an Amazon EKS optimized accelerated AMI, you must apply the [NVIDIA device plugin for Kubernetes](https://github.com/NVIDIA/k8s-device-plugin) as a DaemonSet on your cluster. Replace *vX.X.X* with your desired [NVIDIA/k8s-device-plugin](https://github.com/NVIDIA/k8s-device-plugin/releases) version before running the following command.

   ```
   kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/vX.X.X/deployments/static/nvidia-device-plugin.yml
   ```
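   After the DaemonSet Pods are running, you can verify that the GPUs are advertised as an allocatable resource (a sketch; `<none>` in the output means no GPUs are advertised on that node):

   ```
   # Show each node's allocatable GPU count
   kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
   ```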

 **Step 3: Additional actions** 

1. (Optional) Deploy a [sample application](sample-deployment.md) to test your cluster and Linux nodes.

1. If your cluster is deployed on an Outpost, then skip this step. If your cluster is deployed on the AWS Cloud, the following information is optional. If the **AmazonEKS_CNI_Policy** managed IAM policy is attached to your [Amazon EKS node IAM role](create-node-role.md), we recommend assigning it to an IAM role that you associate with the Kubernetes `aws-node` service account instead. For more information, see [Configure Amazon VPC CNI plugin to use IRSA](cni-iam-role.md).
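   One way to make this association is with `eksctl`. The following is a sketch that assumes your cluster already has an IAM OIDC provider associated with it; *my-cluster* is a placeholder.

   ```
   # Create an IAM role for the aws-node service account and attach the CNI policy
   eksctl create iamserviceaccount --name aws-node --namespace kube-system \
       --cluster my-cluster \
       --attach-policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
       --approve --override-existing-serviceaccounts
   ```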