


# Enable EKS Auto Mode on existing EKS clusters

You can enable EKS Auto Mode on existing EKS clusters.

**AWS supports the following migrations:**
+ Migrating from Karpenter to EKS Auto Mode nodes. For more information, see [Migrate from Karpenter to EKS Auto Mode using kubectl](auto-migrate-karpenter.md).
+ Migrating from EKS Managed Node Groups to EKS Auto Mode nodes. For more information, see [Migrate from EKS Managed Node Groups to EKS Auto Mode](auto-migrate-mng.md).
+ Migrating from EKS Fargate to EKS Auto Mode. For more information, see [Migrate from EKS Fargate to EKS Auto Mode](auto-migrate-fargate.md).

**AWS does not support the following migrations:**
+ Migrating volumes from the EBS CSI controller (using the Amazon EKS add-on) to the EKS Auto Mode EBS CSI controller (managed by EKS Auto Mode). PVCs made with one can't be mounted by the other, because they use two different Kubernetes volume provisioners.
  + The [eks-auto-mode-ebs-migration-tool](https://github.com/awslabs/eks-auto-mode-ebs-migration-tool) (an AWS Labs project) enables migration between the standard EBS CSI StorageClass (`ebs.csi.aws.com`) and the EKS Auto Mode EBS CSI StorageClass (`ebs.csi.eks.amazonaws.com`). Note that migration requires deleting and re-creating the existing PersistentVolumeClaim/PersistentVolume resources, so validation in a non-production environment is essential before implementation.
+ Migrating load balancers from the AWS Load Balancer Controller to EKS Auto Mode

  You can install the AWS Load Balancer Controller on an Amazon EKS Auto Mode cluster. Use the `IngressClass` or `loadBalancerClass` options to associate Service and Ingress resources with either the Load Balancer Controller or EKS Auto Mode.
+ Migrating EKS clusters with alternative CNIs or other unsupported networking configurations
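
For example, to have EKS Auto Mode (rather than a self-managed AWS Load Balancer Controller) reconcile a Service, set its `loadBalancerClass`. This is a minimal sketch; the Service name, selector, and ports are illustrative:

```
apiVersion: v1
kind: Service
metadata:
  name: my-app # illustrative name
spec:
  type: LoadBalancer
  # eks.amazonaws.com/nlb associates the Service with EKS Auto Mode;
  # use service.k8s.aws/nlb for the self-managed controller instead
  loadBalancerClass: eks.amazonaws.com/nlb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```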

## Migration reference


Use the following migration reference to configure Kubernetes resources to be owned by either self-managed controllers or EKS Auto Mode.


| Capability | Resource | Field | Self Managed | EKS Auto Mode | 
| --- | --- | --- | --- | --- | 
| Block storage | `StorageClass` | `provisioner` | `ebs.csi.aws.com` | `ebs.csi.eks.amazonaws.com` |
| Load balancing | `Service` | `loadBalancerClass` | `service.k8s.aws/nlb` | `eks.amazonaws.com/nlb` |
| Load balancing | `IngressClass` | `controller` | `ingress.k8s.aws/alb` | `eks.amazonaws.com/alb` |
| Load balancing | `IngressClassParams` | `apiVersion` | `elbv2.k8s.aws/v1beta1` | `eks.amazonaws.com/v1` |
| Load balancing | `TargetGroupBinding` | `apiVersion` | `elbv2.k8s.aws/v1beta1` | `eks.amazonaws.com/v1` |
| Compute | `NodeClass` | `apiVersion` | `karpenter.sh/v1` | `eks.amazonaws.com/v1` |
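
For example, a `StorageClass` owned by EKS Auto Mode sets the `provisioner` value shown in the table. The class name and parameters in this sketch are illustrative:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc # illustrative name
provisioner: ebs.csi.eks.amazonaws.com # EKS Auto Mode; the self-managed value is ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
```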

## Migrating EBS volumes


When migrating workloads to EKS Auto Mode, you need to handle EBS volume migration due to different CSI driver provisioners:
+ EKS Auto Mode provisioner: `ebs.csi.eks.amazonaws.com` 
+ Open source EBS CSI provisioner: `ebs.csi.aws.com` 

Follow these steps to migrate your persistent volumes:

1.  **Modify volume retention policy**: Change the existing persistent volume's (PV's) `persistentVolumeReclaimPolicy` to `Retain` to ensure the underlying EBS volume is not deleted.

1.  **Remove PV from Kubernetes**: Delete the old PV resource while keeping the actual EBS volume intact.

1.  **Create a new PV with static provisioning**: Create a new PV that references the same EBS volume but works with the target CSI driver.

1.  **Bind to a new PVC**: Create a new PVC that specifically references your PV using the `volumeName` field.
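
The steps above can be sketched as a statically provisioned PV and PVC pair. The names, sizes, volume ID, and StorageClass are placeholders; replace them with your own values and validate in a non-production environment first:

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: migrated-pv # illustrative name
spec:
  capacity:
    storage: 100Gi # must match the size of the existing EBS volume
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: auto-ebs-sc # illustrative class using the EKS Auto Mode provisioner
  csi:
    driver: ebs.csi.eks.amazonaws.com # EKS Auto Mode CSI driver
    volumeHandle: vol-0123456789abcdef0 # ID of the existing EBS volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: migrated-pvc # illustrative name
spec:
  storageClassName: auto-ebs-sc
  volumeName: migrated-pv # bind explicitly to the PV above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```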

### Considerations

+ Ensure your applications are stopped before beginning this migration.
+ Back up your data before starting the migration process.
+ This process needs to be performed for each persistent volume.
+ The workload must be updated to use the new PVC.

## Migrating load balancers


You cannot directly transfer existing load balancers from the self-managed AWS load balancer controller to EKS Auto Mode. Instead, you must implement a blue-green deployment strategy. This involves maintaining your existing load balancer configuration while creating new load balancers under the managed controller.

To minimize service disruption, we recommend a DNS-based traffic shifting approach. First, create new load balancers by using EKS Auto Mode while keeping your existing configuration operational. Then, use DNS routing (such as Route 53) to gradually shift traffic from the old load balancers to the new ones. Once traffic has been successfully migrated and you’ve verified the new configuration, you can decommission the old load balancers and self-managed controller.
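
As a sketch, the "green" side of that strategy is an `IngressClass` owned by EKS Auto Mode plus an Ingress that references it, while the existing self-managed resources stay untouched. The names below are illustrative:

```
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: eks-auto-alb # illustrative name
spec:
  controller: eks.amazonaws.com/alb # EKS Auto Mode owns Ingresses using this class
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-green # illustrative name
spec:
  ingressClassName: eks-auto-alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app # illustrative backend Service
                port:
                  number: 80
```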

# Enable EKS Auto Mode on an existing cluster

This topic describes how to enable Amazon EKS Auto Mode on your existing Amazon EKS clusters. Enabling Auto Mode on an existing cluster requires updating IAM permissions and configuring core EKS Auto Mode settings. Once enabled, you can begin migrating your existing compute workloads to take advantage of Auto Mode’s simplified operations and automated infrastructure management.

**Important**  
Verify you have the minimum required version of certain Amazon EKS Add-ons installed before enabling EKS Auto Mode. For more information, see [Required add-on versions](#auto-addons-required).

Before you begin, ensure you have administrator access to your Amazon EKS cluster and permissions to modify IAM roles. The steps in this topic guide you through enabling Auto Mode using either the AWS Management Console or AWS CLI.

## AWS Management Console


You must be logged into the AWS console with permission to manage IAM, EKS, and EC2 resources.

**Note**  
The Cluster IAM role of an EKS Cluster cannot be changed after the cluster is created. EKS Auto Mode requires additional permissions on this role. You must attach additional policies to the current role.

### Update Cluster IAM role


1. Open your cluster overview page in the AWS Management Console.

1. Under **Cluster IAM role ARN**, select **View in IAM**.

1. From the **Add Permissions** dropdown, select **Attach Policies**.

1. Use the **Search** box to find and select the following policies:
   +  `AmazonEKSComputePolicy` 
   +  `AmazonEKSBlockStoragePolicy` 
   +  `AmazonEKSLoadBalancingPolicy` 
   +  `AmazonEKSNetworkingPolicy` 
   +  `AmazonEKSClusterPolicy` 

1. Select **Add permissions**.

1. From the **Trust relationships** tab, select **Edit trust policy**.

1. Insert the following Cluster IAM Role trust policy, and select **Update policy**:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:TagSession"
      ]
    }
  ]
}
```

### Enable EKS Auto Mode


1. Open your cluster overview page in the AWS Management Console.

1. Under **EKS Auto Mode**, select **Manage**.

1. Toggle **EKS Auto Mode** to on.

1. From the **EKS Node Pool** dropdown, select the default node pools you want to create.
   + For more information about node pools in EKS Auto Mode, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

1. If you have previously created an EKS Auto Mode Node IAM role in this AWS account, select it in the **Node IAM Role** dropdown. If you have not created this role before, select **Create recommended Role** and follow the steps.

## AWS CLI


### Prerequisites

+ The Cluster IAM Role of the existing EKS Cluster must include sufficient permissions for EKS Auto Mode, such as the following policies:
  +  `AmazonEKSComputePolicy` 
  +  `AmazonEKSBlockStoragePolicy` 
  +  `AmazonEKSLoadBalancingPolicy` 
  +  `AmazonEKSNetworkingPolicy` 
  +  `AmazonEKSClusterPolicy` 
+ The Cluster IAM Role must have an updated trust policy including the `sts:TagSession` action. For more information on creating a Cluster IAM Role, see [Create an EKS Auto Mode Cluster with the AWS CLI](automode-get-started-cli.md).
+ The `aws` CLI installed, authenticated, and at a sufficient version. You must have permission to manage IAM, EKS, and EC2 resources. For more information, see [Set up to use Amazon EKS](setting-up.md).

### Procedure


Use the following commands to enable EKS Auto Mode on an existing cluster.

**Note**  
The compute, block storage, and load balancing capabilities must all be enabled or disabled in the same request.

```
aws eks update-cluster-config \
 --name $CLUSTER_NAME \
 --compute-config enabled=true \
 --kubernetes-network-config '{"elasticLoadBalancing":{"enabled": true}}' \
 --storage-config '{"blockStorage":{"enabled": true}}'
```

## Required add-on versions


If you’re planning to enable EKS Auto Mode on an existing cluster, you may need to update certain add-ons. Please note:
+ This applies only to existing clusters transitioning to EKS Auto Mode.
+ New clusters created with EKS Auto Mode enabled don’t require these updates.

If you have any of the following add-ons installed, ensure they are at least at the specified minimum version:


| Add-on name | Minimum required version | 
| --- | --- | 
|  Amazon VPC CNI plugin for Kubernetes  |  v1.19.0-eksbuild.1  | 
|  Kube-proxy  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/eks/latest/userguide/auto-enable-existing.html)  | 
|  Amazon EBS CSI driver  |  v1.37.0-eksbuild.1  | 
|  CSI snapshot controller  |  v8.1.0-eksbuild.2  | 
|  EKS Pod Identity Agent  |  v1.3.4-eksbuild.1  | 

For more information, see [Update an Amazon EKS add-on](updating-an-add-on.md).

## Next Steps

+ To migrate Managed Node Group workloads, see [Migrate from EKS Managed Node Groups to EKS Auto Mode](auto-migrate-mng.md).
+ To migrate from Self-Managed Karpenter, see [Migrate from Karpenter to EKS Auto Mode using kubectl](auto-migrate-karpenter.md).

# Migrate from Karpenter to EKS Auto Mode using kubectl

This topic walks you through the process of migrating workloads from Karpenter to Amazon EKS Auto Mode using kubectl. The migration can be performed gradually, allowing you to move workloads at your own pace while maintaining cluster stability and application availability throughout the transition.

The step-by-step approach outlined below enables you to run Karpenter and EKS Auto Mode side by side during the migration period. This dual-operation strategy helps ensure a smooth transition by allowing you to validate workload behavior on EKS Auto Mode before completely decommissioning Karpenter. You can migrate applications individually or in groups, providing flexibility to accommodate your specific operational requirements and risk tolerance.

## Prerequisites


Before beginning the migration, ensure you have:
+ Karpenter v1.1 or later installed on your cluster. For more information, see [Upgrading to 1.1.0](https://karpenter.sh/docs/upgrading/upgrade-guide/#upgrading-to-110) in the Karpenter docs.
+  `kubectl` installed and connected to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).

This topic assumes you are familiar with Karpenter and NodePools. For more information, see the [Karpenter documentation](https://karpenter.sh/).

## Step 1: Enable EKS Auto Mode on the cluster


Enable EKS Auto Mode on your existing cluster using the AWS CLI or Management Console. For more information, see [Enable EKS Auto Mode on an existing cluster](auto-enable-existing.md).

**Note**  
While enabling EKS Auto Mode, don't enable the `general-purpose` NodePool at this stage of the transition. This NodePool is not selective, so pending workloads could schedule onto EKS Auto Mode nodes before you intend.  
For more information, see [Enable or Disable Built-in NodePools](set-builtin-node-pools.md).

## Step 2: Create a tainted EKS Auto Mode NodePool


Create a new NodePool for EKS Auto Mode with a taint. This ensures that existing pods won’t automatically schedule on the new EKS Auto Mode nodes. This node pool uses the `default` `NodeClass` built into EKS Auto Mode. For more information, see [Create a Node Class for Amazon EKS](create-node-class.md).

Example node pool with taint:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode
spec:
  template:
    spec:
      requirements:
        - key: "eks.amazonaws.com/instance-category"
          operator: In
          values: ["c", "m", "r"]
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      taints:
        - key: "eks-auto-mode"
          effect: "NoSchedule"
```

Update the requirements for the node pool to match the Karpenter configuration you are migrating from. You need at least one requirement.

## Step 3: Update workloads for migration


Identify and update the workloads you want to migrate to EKS Auto Mode. Add both tolerations and node selectors to these workloads:

```
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      tolerations:
      - key: "eks-auto-mode"
        effect: "NoSchedule"
      nodeSelector:
        eks.amazonaws.com/compute-type: auto
```

This change allows the workload to be scheduled on the new EKS Auto Mode nodes.

EKS Auto Mode uses different labels than Karpenter. Labels related to EC2 managed instances start with `eks.amazonaws.com`. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).
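
For example, a nodeSelector pinned to Spot capacity would be rewritten with the EKS Auto Mode label prefix. The `capacity-type` mapping shown here is an assumption; confirm the exact label names in the node pool documentation:

```
# Before: Karpenter-managed nodes (illustrative)
nodeSelector:
  karpenter.sh/capacity-type: spot

# After: EKS Auto Mode-managed nodes (assumed equivalent label)
nodeSelector:
  eks.amazonaws.com/capacity-type: spot
```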

## Step 4: Gradually migrate workloads


Repeat Step 3 for each workload you want to migrate. This allows you to move workloads individually or in groups, based on your requirements and risk tolerance.

## Step 5: Remove the original Karpenter NodePool


Once all workloads have been migrated, you can remove the original Karpenter NodePool:

```
kubectl delete nodepool <original-nodepool-name>
```

## Step 6: Remove taint from EKS Auto Mode NodePool (Optional)


If you want EKS Auto Mode to become the default for new workloads, you can remove the taint from the EKS Auto Mode NodePool:

```
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: eks-auto-mode
spec:
  template:
    spec:
      nodeClassRef:
        group: eks.amazonaws.com
        kind: NodeClass
        name: default
      # Remove the taints section
```

## Step 7: Remove node selectors from workloads (Optional)


If you’ve removed the taint from the EKS Auto Mode NodePool, you can optionally remove the node selectors from your workloads, as EKS Auto Mode is now the default:

```
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      # Remove the nodeSelector section
      tolerations:
      - key: "eks-auto-mode"
        effect: "NoSchedule"
```

## Step 8: Uninstall Karpenter from your cluster


The steps to remove Karpenter depend on how you installed it. For more information, see the [Karpenter install instructions](https://karpenter.sh/docs/getting-started/getting-started-with-karpenter/#create-a-cluster-and-add-karpenter).

# Migrate from EKS Managed Node Groups to EKS Auto Mode

When transitioning your Amazon EKS cluster to EKS Auto Mode, you can smoothly migrate your existing workloads from managed node groups (MNGs) using the `eksctl` CLI tool. This process ensures continuous application availability while EKS Auto Mode optimizes your compute resources. The migration can be performed with minimal disruption to your running applications.

This topic walks you through the steps to safely drain pods from your existing managed node groups and allow EKS Auto Mode to reschedule them on newly provisioned instances. By following this procedure, you can take advantage of EKS Auto Mode's intelligent workload consolidation while maintaining your application's availability throughout the migration.

## Prerequisites

+ Cluster with EKS Auto Mode enabled
+  `eksctl` CLI installed and connected to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).
+ Karpenter is not installed on the cluster.

## Procedure


Use the following `eksctl` CLI command to drain pods from the existing managed node group instances and delete the group. EKS Auto Mode will create new nodes to host the displaced pods.

```
eksctl delete nodegroup --cluster=<clusterName> --name=<nodegroupName>
```

You will need to run this command for each managed node group in your cluster.

For more information on this command, see [Deleting and draining nodegroups](https://eksctl.io/usage/nodegroups/#deleting-and-draining-nodegroups) in the eksctl docs.

# Migrate from EKS Fargate to EKS Auto Mode

This topic walks you through the process of migrating workloads from EKS Fargate to Amazon EKS Auto Mode using `kubectl`. The migration can be performed gradually, allowing you to move workloads at your own pace while maintaining cluster stability and application availability throughout the transition.

The step-by-step approach outlined below enables you to run EKS Fargate and EKS Auto Mode side by side during the migration period. This dual-operation strategy helps ensure a smooth transition by allowing you to validate workload behavior on EKS Auto Mode before completely decommissioning EKS Fargate. You can migrate applications individually or in groups, providing flexibility to accommodate your specific operational requirements and risk tolerance.

## Comparing Amazon EKS Auto Mode and EKS with AWS Fargate


Amazon EKS with AWS Fargate remains an option for customers who want to run EKS, but Amazon EKS Auto Mode is the recommended approach moving forward. EKS Auto Mode is fully Kubernetes conformant, supporting all upstream Kubernetes primitives and platform tools like Istio, which Fargate cannot support. EKS Auto Mode also fully supports all EC2 purchase options, including GPU and Spot instances, enabling customers to leverage negotiated EC2 discounts and other savings mechanisms. These capabilities are not available when using EKS with Fargate.

Furthermore, EKS Auto Mode allows customers to achieve the same isolation model as Fargate, using standard Kubernetes scheduling capabilities to ensure each EC2 instance runs a single application container. By adopting Amazon EKS Auto Mode, customers can unlock the full benefits of running Kubernetes on AWS — a fully Kubernetes-conformant platform that provides the flexibility to leverage the entire breadth of EC2 and purchasing options while retaining the ease of use and abstraction from infrastructure management that Fargate provides.

### Achieving Fargate-like isolation in EKS Auto Mode


To replicate Fargate’s pod isolation model where each pod runs on its own dedicated instance, you can use Kubernetes topology spread constraints. This is the recommended approach for controlling pod distribution across nodes:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isolated-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: isolated-app
  template:
    metadata:
      labels:
        app: isolated-app
      annotations:
        eks.amazonaws.com/compute-type: ec2
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: isolated-app
        minDomains: 1
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
```

In this configuration:
+  `maxSkew: 1` ensures that the difference in pod count between any two nodes is at most 1, effectively distributing one pod per node
+  `topologyKey: kubernetes.io/hostname` defines the node as the topology domain
+  `whenUnsatisfiable: DoNotSchedule` prevents scheduling if the constraint cannot be met
+  `minDomains: 1` ensures at least one domain (node) exists before scheduling

EKS Auto Mode will automatically provision new EC2 instances as needed to satisfy this constraint, providing the same isolation model as Fargate while giving you access to the full range of EC2 instance types and purchasing options.

Alternatively, you can use pod anti-affinity rules for stricter isolation:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: isolated-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: isolated-app
  template:
    metadata:
      labels:
        app: isolated-app
      annotations:
        eks.amazonaws.com/compute-type: ec2
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - isolated-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx
        ports:
        - containerPort: 80
```

The `podAntiAffinity` rule with `requiredDuringSchedulingIgnoredDuringExecution` ensures that no two pods with the label `app: isolated-app` can be scheduled on the same node. This approach provides hard isolation guarantees similar to Fargate.

## Prerequisites


Before beginning the migration, ensure you have:
+ Set up a cluster with Fargate. For more information, see [Get started with AWS Fargate for your cluster](fargate-getting-started.md).
+ Installed and connected `kubectl` to your cluster. For more information, see [Set up to use Amazon EKS](setting-up.md).

## Step 1: Check the Fargate cluster


1. Check if the EKS cluster with Fargate is running:

   ```
   kubectl get node
   ```

   ```
   NAME                                     STATUS   ROLES    AGE   VERSION
   fargate-ip-192-168-92-52.ec2.internal    Ready    <none>   25m   v1.30.8-eks-2d5f260
   fargate-ip-192-168-98-196.ec2.internal   Ready    <none>   24m   v1.30.8-eks-2d5f260
   ```

1. Check running pods:

   ```
   kubectl get pod -A
   ```

   ```
   NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
   kube-system   coredns-6659cb98f6-gxpjz   1/1     Running   0          26m
   kube-system   coredns-6659cb98f6-gzzsx   1/1     Running   0          26m
   ```

1. Create a deployment in a file called `deployment_fargate.yaml`:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     labels:
       app: nginx
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
         annotations:
           eks.amazonaws.com/compute-type: fargate
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
   ```

1. Apply the deployment:

   ```
   kubectl apply -f deployment_fargate.yaml
   ```

   ```
   deployment.apps/nginx-deployment created
   ```

1. Check the pods and deployments:

   ```
   kubectl get pod,deploy
   ```

   ```
   NAME                                    READY   STATUS    RESTARTS   AGE
   pod/nginx-deployment-5c7479459b-6trtm   1/1     Running   0          61s
   pod/nginx-deployment-5c7479459b-g8ssb   1/1     Running   0          61s
   pod/nginx-deployment-5c7479459b-mq4mf   1/1     Running   0          61s
   
   NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
   deployment.apps/nginx-deployment   3/3     3            3           61s
   ```

1. Check the node:

   ```
   kubectl get node -owide
   ```

   ```
   NAME                                    STATUS  ROLES  AGE VERSION             INTERNAL-IP     EXTERNAL-IP OS-IMAGE       KERNEL-VERSION                  CONTAINER-RUNTIME
   fargate-ip-192-168-111-43.ec2.internal  Ready   <none> 31s v1.30.8-eks-2d5f260 192.168.111.43  <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   fargate-ip-192-168-117-130.ec2.internal Ready   <none> 36s v1.30.8-eks-2d5f260 192.168.117.130 <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   fargate-ip-192-168-74-140.ec2.internal  Ready   <none> 36s v1.30.8-eks-2d5f260 192.168.74.140  <none>      Amazon Linux 2 5.10.234-225.910.amzn2.x86_64  containerd://1.7.25
   ```

## Step 2: Enable EKS Auto Mode on the cluster


1. Enable EKS Auto Mode on your existing cluster using the AWS CLI or Management Console. For more information, see [Enable EKS Auto Mode on an existing cluster](auto-enable-existing.md).

1. Check the nodepool:

   ```
   kubectl get nodepool
   ```

   ```
   NAME              NODECLASS   NODES   READY   AGE
   general-purpose   default     1       True    6m58s
   system            default     0       True    3d14h
   ```

## Step 3: Update workloads for migration


Identify and update the workloads you want to migrate to EKS Auto Mode.

To migrate a workload from Fargate to EKS Auto Mode, apply the annotation `eks.amazonaws.com/compute-type: ec2`. This ensures that the workload is no longer scheduled by Fargate, despite the matching Fargate profile, and is instead picked up by the EKS Auto Mode NodePool. For more information, see [Create a Node Pool for EKS Auto Mode](create-node-pool.md).

1. Modify your deployments (for example, the `deployment_fargate.yaml` file) to change the compute type to `ec2`:

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: nginx-deployment
     labels:
       app: nginx
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: nginx
     template:
       metadata:
         labels:
           app: nginx
         annotations:
           eks.amazonaws.com/compute-type: ec2
       spec:
         containers:
         - name: nginx
           image: nginx
           ports:
           - containerPort: 80
   ```

1. Apply the deployment. This change allows the workload to be scheduled on the new EKS Auto Mode nodes:

   ```
   kubectl apply -f deployment_fargate.yaml
   ```

1. Check that the deployment is running in the EKS Auto Mode cluster:

   ```
   kubectl get pod -o wide
   ```

   ```
   NAME                               READY   STATUS    RESTARTS   AGE     IP               NODE                  NOMINATED NODE   READINESS GATES
   nginx-deployment-97967b68d-ffxxh   1/1     Running   0          3m31s   192.168.43.240   i-0845aafcb51630ffb   <none>           <none>
   nginx-deployment-97967b68d-mbcgj   1/1     Running   0          2m37s   192.168.43.241   i-0845aafcb51630ffb   <none>           <none>
   nginx-deployment-97967b68d-qpd8x   1/1     Running   0          2m35s   192.168.43.242   i-0845aafcb51630ffb   <none>           <none>
   ```

1. Verify that no Fargate nodes are running and that the deployment is running on EKS Auto Mode managed nodes:

   ```
   kubectl get node -owide
   ```

   ```
   NAME                STATUS ROLES  AGE   VERSION             INTERNAL-IP     EXTERNAL-IP OS-IMAGE                                         KERNEL-VERSION CONTAINER-RUNTIME
   i-0845aafcb51630ffb Ready  <none> 3m30s v1.30.8-eks-3c20087 192.168.41.125  3.81.118.95 Bottlerocket (EKS Auto) 2025.3.14 (aws-k8s-1.30) 6.1.129        containerd://1.7.25+bottlerocket
   ```

## Step 4: Gradually migrate workloads


Repeat Step 3 for each workload you want to migrate. This allows you to move workloads individually or in groups, based on your requirements and risk tolerance.

## Step 5: Remove the original fargate profile


Once all workloads have been migrated, you can delete the original Fargate profile. Replace *<fargate profile name>* with the name of your Fargate profile, and replace *eks-fargate-demo-cluster* with the name of your cluster:

```
aws eks delete-fargate-profile --cluster-name eks-fargate-demo-cluster --fargate-profile-name <fargate profile name>
```

## Step 6: Scale down CoreDNS


Because EKS Auto Mode handles CoreDNS, you can scale the `coredns` deployment down to 0:

```
kubectl scale deployment coredns -n kube-system --replicas=0
```