

# Amazon ECS capacity providers for EC2 workloads
<a name="asg-capacity-providers"></a>

When you use Amazon EC2 instances for your capacity, you use Auto Scaling groups to manage the Amazon EC2 instances registered to your clusters. Auto Scaling helps ensure that you have the correct number of Amazon EC2 instances available to handle the application load.

You can use the managed scaling feature to have Amazon ECS manage the scale-in and scale-out actions of the Auto Scaling group, or you can manage the scaling actions yourself. For more information, see [Automatically manage Amazon ECS capacity with cluster auto scaling](cluster-auto-scaling.md).

We recommend that you create a new, empty Auto Scaling group. If you use an existing Auto Scaling group, any Amazon EC2 instances associated with the group that were already running and registered to an Amazon ECS cluster before the Auto Scaling group was used to create a capacity provider might not be properly registered with the capacity provider. This might cause issues when using the capacity provider in a capacity provider strategy. Use `DescribeContainerInstances` to confirm whether a container instance is associated with a capacity provider.
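For example, you can check the association with the AWS CLI. The cluster name and container instance ARN below are placeholders; substitute your own values:

```shell
# List each container instance with the capacity provider it's registered to.
# An empty capacityProvider value means the instance isn't associated with one.
aws ecs describe-container-instances \
  --cluster my-cluster \
  --container-instances arn:aws:ecs:us-east-1:111122223333:container-instance/my-cluster/1234567890abcdef0 \
  --query 'containerInstances[].{arn:containerInstanceArn,capacityProvider:capacityProviderName}'
```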

**Note**  
To create an empty Auto Scaling group, set the desired count to zero. After you create the capacity provider and associate it with a cluster, you can then scale it out.  
When you use the Amazon ECS console, Amazon ECS creates an Amazon EC2 launch template and Auto Scaling group on your behalf as part of the CloudFormation stack. They are prefixed with `EC2ContainerService-<ClusterName>`. You can use the Auto Scaling group as a capacity provider for that cluster.
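A sketch of creating an empty group with the AWS CLI follows; the group name, launch template name, and subnet IDs are placeholders for your own resources:

```shell
# Create an Auto Scaling group with zero desired capacity. Scale it out only
# after the capacity provider is created and associated with a cluster.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --launch-template LaunchTemplateName=my-ecs-launch-template,Version='$Latest' \
  --min-size 0 \
  --max-size 10 \
  --desired-capacity 0 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```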

We recommend that you use managed instance draining to allow for graceful termination of Amazon EC2 instances without disrupting your workloads. This feature is on by default. For more information, see [Safely stop Amazon ECS workloads running on EC2 instances](managed-instance-draining.md).

Consider the following when using Auto Scaling group capacity providers in the console:
+ An Auto Scaling group must have a `MaxSize` greater than zero to scale out.
+ The Auto Scaling group can't have instance weighting settings.
+ If the Auto Scaling group can't scale out to accommodate the number of tasks run, the tasks fail to transition beyond the `PROVISIONING` state.
+ Don't modify the scaling policy resource associated with Auto Scaling groups that are managed by capacity providers.
+ If managed scaling is turned on when you create a capacity provider, the Auto Scaling group desired count can be set to `0`. When managed scaling is turned on, Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group.
+ You must associate a capacity provider with a cluster before you can use it in a capacity provider strategy.
+ You can specify a maximum of 20 capacity providers for a capacity provider strategy.
+ You can't update a service that uses an Auto Scaling group capacity provider to use a Fargate capacity provider. The reverse is also true.
+ In a capacity provider strategy, if no `weight` value is specified for a capacity provider in the console, then the default value of `1` is used. If using the API or AWS CLI, the default value of `0` is used.
+ When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value that's greater than zero. Any capacity providers with a weight of zero aren't used to place tasks. If you specify multiple capacity providers in a strategy with all the same weight of zero, then any `RunTask` or `CreateService` actions using the capacity provider strategy fail.
+ In a capacity provider strategy, only one capacity provider can have a defined *base* value. If no base value is specified, the default value of zero is used.
+ A cluster can contain a mix of both Auto Scaling group capacity providers and Fargate capacity providers. However, a capacity provider strategy can only contain Auto Scaling group or Fargate capacity providers, but not both.
+ A cluster can contain a mix of services and standalone tasks that use both capacity providers and launch types. A service can be updated to use a capacity provider strategy rather than a launch type. However, you must force a new deployment when doing so.
+ Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of pre-initialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses the pre-initialized instances from the warm pool rather than launching cold instances. This allows for any final initialization process to run before the instance is placed into service. For more information, see [Configuring pre-initialized instances for your Amazon ECS Auto Scaling group](using-warm-pool.md).
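As an illustration of the `base` and `weight` rules above, the following sketch creates a service across two hypothetical Auto Scaling group capacity providers, `cp-a` and `cp-b`. The first two tasks land on `cp-a` because of its `base`, and the remaining tasks split evenly because the weights are equal. All names are placeholders:

```shell
# With the AWS CLI, an omitted weight defaults to 0, so set it explicitly.
aws ecs create-service \
  --cluster my-cluster \
  --service-name web \
  --task-definition web:1 \
  --desired-count 6 \
  --capacity-provider-strategy \
      capacityProvider=cp-a,base=2,weight=1 \
      capacityProvider=cp-b,weight=1
```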

For more information about creating an Amazon EC2 Auto Scaling launch template, see [Auto Scaling launch templates](https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-templates.html) in the *Amazon EC2 Auto Scaling User Guide*. For more information about creating an Amazon EC2 Auto Scaling group, see [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html) in the *Amazon EC2 Auto Scaling User Guide*.

# Amazon EC2 container instance security considerations for Amazon ECS
<a name="ec2-security-considerations"></a>

You should consider a single container instance and its access within your threat model. For example, a single compromised task might be able to leverage the IAM permissions of an uncompromised task on the same instance.

We recommend that you use the following to help prevent this:
+ Do not use administrator privileges when running your tasks. 
+ Assign a task role with least-privileged access to your tasks. 

  The container agent automatically creates a token with a unique credential ID, which is used to access Amazon ECS resources.
+ To prevent containers run by tasks that use the `awsvpc` network mode from accessing the credential information supplied to the Amazon EC2 instance profile, while still allowing the permissions provided by the task role, set the `ECS_AWSVPC_BLOCK_IMDS` agent configuration variable to `true` in the agent configuration file and restart the agent.
+ Use Amazon GuardDuty Runtime Monitoring to detect threats for clusters and containers within your AWS environment. Runtime Monitoring uses a GuardDuty security agent that adds runtime visibility into individual Amazon ECS workloads, for example, file access, process execution, and network connections. For more information, see [GuardDuty Runtime Monitoring](https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html) in the *GuardDuty User Guide*.
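For example, on a container instance running the Amazon ECS-optimized AMI, blocking IMDS access for `awsvpc` tasks might look like the following. The file path and service name shown are the defaults for that AMI:

```shell
# Append the setting to the agent configuration file, then restart the agent.
echo "ECS_AWSVPC_BLOCK_IMDS=true" | sudo tee -a /etc/ecs/ecs.config
sudo systemctl restart ecs
```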

# Creating an Amazon ECS cluster for Amazon EC2 workloads
<a name="create-ec2-cluster-console-v2"></a>

You create a cluster to define the infrastructure your tasks and services run on.

Before you begin, be sure that you've completed the steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) and assigned the appropriate IAM permissions. For more information, see [Amazon ECS cluster examples](security_iam_id-based-policy-examples.md#IAM_cluster_policies). The Amazon ECS console provides a simple way to create the resources needed by an Amazon ECS cluster by creating a CloudFormation stack.

To make the cluster creation process as easy as possible, the console has default selections for many choices, which we describe below. Help panels are also available for most of the sections in the console, providing further context.

You can register Amazon EC2 instances when you create the cluster or register additional instances with the cluster after it has been created.

You can modify the following default options:
+ Change the subnets where your instances launch.
+ Change the security groups used to control traffic to the container instances.
+ Add a namespace to the cluster.

  A namespace allows services that you create in the cluster to connect to the other services in the namespace without additional configuration. For more information, see [Interconnect Amazon ECS services](interconnecting-services.md).
+ Enable task events to receive EventBridge notifications for task state changes.
+ Assign an AWS KMS key for your managed storage. For information about how to create a key, see [Create a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service User Guide*.
+ Assign an AWS KMS key for your Fargate ephemeral storage. For information about how to create a key, see [Create a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service User Guide*.
+ Configure the AWS KMS key and logging for ECS Exec.
+ Add tags to help you identify your cluster.

## Auto Scaling group options
<a name="capacity-providers"></a>

When you use Amazon EC2 instances, you must specify an Auto Scaling group to manage the infrastructure that your tasks and services run on. 

When you choose to create a new Auto Scaling group, it is automatically configured for the following behavior:
+ Amazon ECS manages the scale-in and scale-out actions of the Auto Scaling group.
+ Amazon ECS will not prevent Amazon EC2 instances that contain tasks and that are in an Auto Scaling group from being terminated during a scale-in action. For more information, see [Instance Protection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-instance-termination.html#instance-protection) in the *Amazon EC2 Auto Scaling User Guide*.

You configure the following Auto Scaling group properties which determine the type and number of instances to launch for the group:
+ The Amazon ECS-optimized AMI. 
+ The instance type.
+ The SSH key pair that proves your identity when you connect to the instance. For information about how to create SSH keys, see [Amazon EC2 key pairs and Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
+ The minimum number of instances to launch for the Auto Scaling group. 
+ The maximum number of instances that are started for the Auto Scaling group. 

  In order for the group to scale out, the maximum must be greater than 0.

Amazon ECS creates an Amazon EC2 Auto Scaling launch template and Auto Scaling group on your behalf as part of the CloudFormation stack. The values that you specified for the AMI, the instance types, and the SSH key pair are part of the launch template. The templates are prefixed with `EC2ContainerService-<ClusterName>`, which makes them easy to identify. The Auto Scaling groups are prefixed with `<ClusterName>-ECS-Infra-ECSAutoScalingGroup`.

Instances launched for the Auto Scaling group use the launch template.

## Networking options
<a name="networking-options"></a>

By default, instances are launched into the default subnets for the Region. The security groups currently associated with those subnets, which control the traffic to your container instances, are used. You can change the subnets and security groups for the instances.

You can choose an existing subnet. You can either use an existing security group, or create a new one. To create tasks in an IPv6-only configuration, use subnets that include only an IPv6 CIDR block.

When you create a new security group, you need to specify at least one inbound rule. 

The inbound rules determine what traffic can reach your container instances and include the following properties: 
+ The protocol to allow
+ The range of ports to allow
+ The inbound traffic (source)

To allow inbound traffic from a specific address or CIDR block, use **Custom** for **Source** with the allowed CIDR. 

To allow inbound traffic from any source, use **Anywhere** for **Source**. This automatically adds the 0.0.0.0/0 IPv4 CIDR block and the ::/0 IPv6 CIDR block.

To allow inbound traffic from your local computer, use **Source group** for **Source**. This automatically adds the current IP address of your local computer as the allowed source.

**To create a new cluster (Amazon ECS console)**

Before you begin, assign the appropriate IAM permission. For more information, see [Amazon ECS cluster examples](security_iam_id-based-policy-examples.md#IAM_cluster_policies).

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, select the Region to use.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **Create cluster**.

1. Under **Cluster configuration**, configure the following:
   + For **Cluster name**, enter a unique name.

     The name can contain up to 255 letters (uppercase and lowercase), numbers, and hyphens.
   + (Optional) To have the namespace used for Service Connect be different from the cluster name, under **Service Connect defaults**, for **Default namespace**, choose or enter a namespace name. To use a shared namespace, choose or enter a namespace ARN. For more information about using shared namespaces, see [Amazon ECS Service Connect with shared AWS Cloud Map namespaces](service-connect-shared-namespaces.md).

1. To add Amazon EC2 instances to your cluster, expand **Infrastructure**, and then select **Fargate and Self-managed instances**.

   Next, configure the Auto Scaling group which acts as the capacity provider:

   1. To use an existing Auto Scaling group, from **Auto Scaling group (ASG)**, select the group.

   1. To create an Auto Scaling group, from **Auto Scaling group (ASG)**, select **Create new group**, and then provide the following details about the group:
      + For **Provisioning model**, choose whether to use **On-demand** instances or **Spot** Instances.
      + If you choose to use Spot Instances, for **Allocation Strategy**, choose what Spot capacity pools (instance types and Availability Zones) are used for the instances.

        For most workloads, you can choose **Price capacity optimized**.

        For more information, see [Allocation strategies for Spot Instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-allocation-strategy.html) in the *Amazon EC2 User Guide*.
      + For **Container instance Amazon Machine Image (AMI)**, choose the Amazon ECS-optimized AMI for the Auto Scaling group instances.
      + For **EC2 instance type**, choose the instance type for your workloads.

         Managed scaling works best if your Auto Scaling group uses the same or similar instance types. 
      + For **EC2 instance role**, choose an existing container instance role, or you can create a new one.

        For more information, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
      + For **Capacity**, enter the minimum number and the maximum number of instances to launch in the Auto Scaling group. 
      + For **SSH key pair**, choose the pair that proves your identity when you connect to the instance.
      + To allow for larger images and storage, for **Root EBS volume size**, enter the value in GiB.

1. (Optional) To change the VPC and subnets, under **Networking for Amazon EC2 instances**, perform any of the following operations:
   + To remove a subnet, under **Subnets**, choose **X** for each subnet that you want to remove.
   + To change to a VPC other than the **default** VPC, under **VPC**, choose an existing **VPC**, and then under **Subnets**, choose the subnets. For an IPv6-only configuration, choose a VPC that has an IPv6 CIDR block and subnets that have only an IPv6 CIDR block.
   + Choose the security groups. Under **Security group**, choose one of the following options:
     + To use an existing security group, choose **Use an existing security group**, and then choose the security group.
     + To create a security group, choose **Create a new security group**. Then, choose **Add rule** for each inbound rule.

       For information about inbound rules, see [Networking options](#networking-options). 
   + To automatically assign public IP addresses to your Amazon EC2 container instances, for **Auto-assign public IP**, choose one of the following options:
     + **Use subnet setting** – Assign a public IP address to the instances when the subnet that the instances launch in is a public subnet.
     + **Turn on** – Assign a public IP address to the instances.

1. (Optional) To use Container Insights, expand **Monitoring**, and then choose one of the following options:
   + To use the recommended Container Insights with enhanced observability, choose **Container Insights with enhanced observability**.
   + To use Container Insights, choose **Container Insights**.

1. (Optional) To enable task events, expand **Task events**, and then turn on **Enable task events**.

   When you enable task events, Amazon ECS sends task state change events to EventBridge. This allows you to monitor and respond to task lifecycle changes automatically.

1. (Optional) To use ECS Exec to debug tasks in the cluster, expand **Troubleshooting configuration**, and then configure the following:
   + (Optional) For **AWS KMS key for ECS Exec**, enter the ARN of the AWS KMS key you want to use to encrypt the ECS Exec session data.
   + (Optional) For **ECS Exec logging**, choose the log destination:
     + To send logs to CloudWatch Logs, choose **Amazon CloudWatch**.
     + To send logs to Amazon S3, choose **Amazon S3**.
     + To disable logging, choose **None**.

1. (Optional) If you use Runtime Monitoring with the manual option and you want to have this cluster monitored by GuardDuty, choose **Add tag** and do the following:
   + For **Key**, enter **guardDutyRuntimeMonitoringManaged**.
   + For **Value**, enter **true**.

1. (Optional) Encrypt the data on managed storage. Under **Encryption**, for **Managed storage**, enter the ARN of the AWS KMS key you want to use to encrypt the managed storage data.

1. (Optional) To manage the cluster tags, expand **Tags**, and then perform one of the following operations:

   [Add a tag] Choose **Add tag** and do the following:
   + For **Key**, enter the key name.
   + For **Value**, enter the key value.

   [Remove a tag] Choose **Remove** to the right of the tag’s Key and Value.

1. Choose **Create**.

## Next steps
<a name="ec2-cluster-next-steps"></a>

After you create the cluster, you can create task definitions for your applications and then run them as standalone tasks, or as part of a service. For more information, see the following:
+ [Amazon ECS task definitions](task_definitions.md)
+ [Running an application as an Amazon ECS task](standalone-task-create.md)
+ [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md)

# Automatically manage Amazon ECS capacity with cluster auto scaling
<a name="cluster-auto-scaling"></a>

Amazon ECS can manage the scaling of Amazon EC2 instances that are registered to your cluster. This is referred to as Amazon ECS *cluster auto scaling*. You turn on managed scaling when you create the Amazon ECS Auto Scaling group capacity provider. Then, you set a target percentage (the `targetCapacity`) for the instance utilization in this Auto Scaling group. Amazon ECS creates two custom CloudWatch metrics and a target tracking scaling policy for your Auto Scaling group. Amazon ECS then manages the scale-in and scale-out actions based on the resource utilization that your tasks use.

For each Auto Scaling group capacity provider that's associated with a cluster, Amazon ECS creates and manages the following resources:
+ A low metric value CloudWatch alarm
+ A high metric value CloudWatch alarm
+ A target tracking scaling policy
**Note**  
Amazon ECS creates the target tracking scaling policy and attaches it to the Auto Scaling group. To update the target tracking scaling policy, update the capacity provider managed scaling settings, rather than updating the scaling policy directly.

When you turn off managed scaling or disassociate the capacity provider from a cluster, Amazon ECS removes both CloudWatch metrics and the target tracking scaling policy resources.

Amazon ECS uses the following metrics to determine what actions to take:

`CapacityProviderReservation`  
The percent of container instances in use for a specific capacity provider. Amazon ECS generates this metric.  
Amazon ECS sets the `CapacityProviderReservation` value to a number between 0-100. Amazon ECS uses the following formula to represent the ratio of how much capacity remains in the Auto Scaling group. Then, Amazon ECS publishes the metric to CloudWatch. For more information about how the metric is calculated, see [ Deep Dive on Amazon ECS Cluster Auto Scaling](https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).  

```
CapacityProviderReservation = (number of instances needed) / (number of running instances) x 100
```
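As a sketch with made-up numbers: if 5 instances are needed but only 4 are running, the metric rises above a 100% target and triggers a scale-out; if only 2 are needed, it falls to 50% and triggers a scale-in.

```shell
# Integer sketch of the formula above (not how Amazon ECS computes it internally).
capacity_provider_reservation() {
  needed=$1; running=$2
  echo $(( needed * 100 / running ))
}

capacity_provider_reservation 5 4   # prints 125
capacity_provider_reservation 2 4   # prints 50
```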

`DesiredCapacity`  
The amount of capacity for the Auto Scaling group. This metric isn't published to CloudWatch.

Amazon ECS publishes the `CapacityProviderReservation` metric to CloudWatch in the `AWS/ECS/ManagedScaling` namespace. The `CapacityProviderReservation` metric causes one of the following actions to occur:

**The `CapacityProviderReservation` value equals `targetCapacity`**  
The Auto Scaling group doesn't need to scale in or scale out. The target utilization percentage has been reached.

**The `CapacityProviderReservation` value is greater than `targetCapacity`**  
There are more tasks using a higher percentage of the capacity than your `targetCapacity` percentage. The increased value of the `CapacityProviderReservation` metric causes the associated CloudWatch alarm to act. This alarm updates the `DesiredCapacity` value for the Auto Scaling group. The Auto Scaling group uses this value to launch EC2 instances, and then register them with the cluster.  
When the `targetCapacity` is the default value of 100%, the new tasks are in the `PENDING` state during the scale-out because there is no available capacity on the instances to run the tasks. After the new instances register with Amazon ECS, these tasks start on the new instances.

**The `CapacityProviderReservation` value is less than `targetCapacity`**  
There are fewer tasks using a lower percentage of the capacity than your `targetCapacity` percentage, and there is at least one instance that can be terminated. The decreased value of the `CapacityProviderReservation` metric causes the associated CloudWatch alarm to act. This alarm updates the `DesiredCapacity` value for the Auto Scaling group. The Auto Scaling group uses this value to terminate EC2 container instances, and then deregister them from the cluster.  
The Auto Scaling group follows the group termination policy to determine which instances to terminate first during scale-in events. Additionally, it avoids instances that have the instance scale-in protection setting turned on. Cluster auto scaling can manage which instances have the instance scale-in protection setting if you turn on managed termination protection. For more information about managed termination protection, see [Control the instances Amazon ECS terminates](managed-termination-protection.md). For more information about how Auto Scaling groups terminate instances, see [Control which Auto Scaling instances terminate during scale in](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-instance-protection.html) in the *Amazon EC2 Auto Scaling User Guide*.
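The three cases above reduce to a simple comparison. This is a sketch only; the real scale-in decision additionally requires at least one instance that can be terminated:

```shell
# Usage: scaling_action <reservation> <targetCapacity>
scaling_action() {
  if   [ "$1" -gt "$2" ]; then echo "scale out"
  elif [ "$1" -lt "$2" ]; then echo "scale in"     # only if an instance can be terminated
  else                         echo "steady state"
  fi
}

scaling_action 125 100   # prints "scale out"
```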

Consider the following when using cluster auto scaling:
+ Don't change or manage the desired capacity for the Auto Scaling group that's associated with a capacity provider with any scaling policies other than the one Amazon ECS manages.
+ When Amazon ECS scales out from 0 instances, it automatically launches 2 instances.
+ Amazon ECS uses the `AWSServiceRoleForECS` service-linked IAM role for the permissions that it requires to call AWS Auto Scaling on your behalf. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md).
+ When using capacity providers with Auto Scaling groups, the user, group, or role that creates the capacity providers requires the `autoscaling:CreateOrUpdateTags` permission. This is because Amazon ECS adds a tag to the Auto Scaling group when it associates it with the capacity provider.
**Important**  
Make sure any tooling that you use doesn't remove the `AmazonECSManaged` tag from the Auto Scaling group. If this tag is removed, Amazon ECS can't manage the scaling.
+ Cluster auto scaling doesn't modify the **MinimumCapacity** or **MaximumCapacity** for the group. For the group to scale out, the value for **MaximumCapacity** must be greater than zero.
+ When Auto Scaling (managed scaling) is turned on, a capacity provider can only be connected to one cluster at the same time. If your capacity provider has managed scaling turned off, you can associate it with multiple clusters.
+ When managed scaling is turned off, the capacity provider doesn't scale in or scale out. You can use a capacity provider strategy to balance your tasks between capacity providers.
+ The `binpack` strategy is the most efficient strategy in terms of capacity.
+ When the target capacity is less than 100%, within the placement strategy, the `binpack` strategy must have a higher order than the `spread` strategy. This prevents the capacity provider from scaling out until each task has a dedicated instance or the limit is reached.

## Turn on cluster auto scaling
<a name="cluster-auto-scale-use"></a>

You can turn on cluster auto scaling by using the console or the AWS CLI.

When you create a cluster that uses EC2 capacity providers using the console, Amazon ECS creates an Auto Scaling group on your behalf and sets the target capacity. For more information, see [Creating an Amazon ECS cluster for Amazon EC2 workloads](create-ec2-cluster-console-v2.md).

You can also create an Auto Scaling group, and then assign it to a cluster. For more information, see [Updating an Amazon ECS capacity provider](update-capacity-provider-console-v2.md).

When you use the AWS CLI, perform the following steps after you create the cluster:

1. Before you create the capacity provider, you need to create an Auto Scaling group. For more information, see [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html) in the *Amazon EC2 Auto Scaling User Guide*.

1. Use `put-cluster-capacity-providers` to associate the capacity provider with the cluster. For more information, see [Turning on Amazon ECS cluster auto scaling](turn-on-cluster-auto-scaling.md).
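Sketched with the AWS CLI, these steps might look like the following. The cluster name, capacity provider name, and Auto Scaling group ARN are placeholders:

```shell
# 1. Create a capacity provider from an existing Auto Scaling group,
#    with managed scaling turned on and an 80% target capacity.
aws ecs create-capacity-provider \
  --name my-capacity-provider \
  --auto-scaling-group-provider \
      'autoScalingGroupArn=arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111:autoScalingGroupName/my-ecs-asg,managedScaling={status=ENABLED,targetCapacity=80}'

# 2. Associate the capacity provider with the cluster and make it the default.
aws ecs put-cluster-capacity-providers \
  --cluster my-cluster \
  --capacity-providers my-capacity-provider \
  --default-capacity-provider-strategy capacityProvider=my-capacity-provider,weight=1
```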

# Optimize Amazon ECS cluster auto scaling
<a name="capacity-cluster-speed-up-ec2"></a>

Customers who run Amazon ECS on Amazon EC2 can take advantage of cluster auto scaling to manage the scaling of Amazon EC2 Auto Scaling groups. With cluster auto scaling, you can configure Amazon ECS to scale your Auto Scaling group automatically, and just focus on running your tasks. Amazon ECS ensures the Auto Scaling group scales in and out as needed with no further intervention required. Amazon ECS capacity providers are used to manage the infrastructure in your cluster by ensuring there are enough container instances to meet the demands of your application. To learn how cluster auto scaling works under the hood, see [Deep Dive on Amazon ECS Cluster Auto Scaling](https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).

Cluster auto scaling relies on a CloudWatch-based integration with the Auto Scaling group for adjusting cluster capacity. Therefore, it has inherent latency associated with:
+ Publishing the CloudWatch metrics
+ The time taken for the `CapacityProviderReservation` metric to breach CloudWatch alarms (both high and low)
+ The time taken by a newly launched Amazon EC2 instance to warm up

You can take the following actions to make cluster auto scaling more responsive for faster deployments.

## Capacity provider step scaling sizes
<a name="cas-step-size"></a>

Amazon ECS capacity providers grow and shrink the number of container instances to meet the demands of your application. The minimum number of instances that Amazon ECS launches at a time is set to 1 by default. This may add additional time to your deployments if several instances are required to place your pending tasks. You can increase the [minimumScalingStepSize](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ManagedScaling.html) via the Amazon ECS API to increase the minimum number of instances that Amazon ECS scales in or out at a time. A [maximumScalingStepSize](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ManagedScaling.html) that is too low can limit how many container instances are scaled in or out at a time, which can slow down your deployments.

**Note**  
This configuration is currently only available via the [CreateCapacityProvider](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateCapacityProvider.html) or [UpdateCapacityProvider](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateCapacityProvider.html) APIs.
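A sketch of raising the step sizes with the AWS CLI follows; the capacity provider name and the values shown are placeholders to adapt to your workload:

```shell
# Scale by at least 2 and at most 25 instances per scaling action.
aws ecs update-capacity-provider \
  --name my-capacity-provider \
  --auto-scaling-group-provider \
      'managedScaling={status=ENABLED,targetCapacity=100,minimumScalingStepSize=2,maximumScalingStepSize=25}'
```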

## Instance warm-up period
<a name="instance-warmup-period"></a>

The instance warm-up period is the period of time after which a newly launched Amazon EC2 instance can contribute to CloudWatch metrics for the Auto Scaling group. After the specified warm-up period expires, the instance is counted toward the aggregated metrics of the Auto Scaling group, and cluster auto scaling proceeds with its next iteration of calculations to estimate the number of instances required.

The default value for [instanceWarmupPeriod](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ManagedScaling.html#ECS-Type-ManagedScaling-instanceWarmupPeriod) is 300 seconds. You can configure it to a lower value via the [CreateCapacityProvider](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateCapacityProvider.html) or [UpdateCapacityProvider](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_UpdateCapacityProvider.html) APIs for more responsive scaling. We recommend that you set the value to greater than 60 seconds to avoid over-provisioning.
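For example, lowering the warm-up period to 120 seconds might look like the following sketch; the capacity provider name is a placeholder:

```shell
aws ecs update-capacity-provider \
  --name my-capacity-provider \
  --auto-scaling-group-provider \
      'managedScaling={status=ENABLED,instanceWarmupPeriod=120}'
```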

## Spare capacity
<a name="spare-capacity"></a>

If your capacity provider has no container instances available for placing tasks, then it needs to increase (scale out) cluster capacity by launching Amazon EC2 instances on the fly and waiting for them to boot up before it can launch containers on them. This can significantly lower the task launch rate. You have two options here.

The first option is to maintain spare capacity. Having spare Amazon EC2 capacity already launched and ready to run tasks increases the effective task launch rate. You can use the `Target Capacity` configuration to indicate that you want to maintain spare capacity in your clusters. For example, by setting `Target Capacity` to 80%, you indicate that your cluster needs 20% spare capacity at all times. This spare capacity allows any standalone tasks to be launched immediately, ensuring task launches aren't throttled. The trade-off for this approach is the potential increased cost of keeping spare cluster capacity.

The second option is to add headroom to your service rather than to the capacity provider. Instead of reducing the `Target Capacity` configuration to launch spare capacity, you can increase the number of replicas in your service by modifying the target tracking scaling metric or the step scaling thresholds of service auto scaling. This approach is only helpful for spiky workloads; it has no effect when you deploy new services and go from 0 to N tasks for the first time. For more information about the related scaling policies, see [Target Tracking Scaling Policies](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-autoscaling-targettracking.html) or [Step Scaling Policies](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-autoscaling-stepscaling.html) in the *Amazon Elastic Container Service Developer Guide*.
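As a sketch of the first option, the `Target Capacity` setting maps to the `targetCapacity` field when you create the capacity provider; the capacity provider name and Auto Scaling group ARN below are placeholders:

```shell
# Keep roughly 20% spare capacity by targeting 80% utilization.
# "MyCapacityProvider" and the Auto Scaling group ARN are placeholders.
aws ecs create-capacity-provider \
  --name MyCapacityProvider \
  --auto-scaling-group-provider "autoScalingGroupArn=arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:uuid:autoScalingGroupName/MyAsg,managedScaling={status=ENABLED,targetCapacity=80}"
```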

# Amazon ECS managed scaling behavior
<a name="managed-scaling-behavior"></a>

When you have Auto Scaling group capacity providers that use managed scaling, Amazon ECS estimates the optimal number of instances to add to your cluster and uses the value to determine how many instances to request or release.

## Managed scale-out behavior
<a name="managed-scaling-scaleout"></a>

Amazon ECS selects a capacity provider for each task by following the capacity provider strategy from the service, standalone task, or the cluster default. Amazon ECS follows the rest of these steps for a single capacity provider.

Tasks without a capacity provider strategy are ignored by capacity providers. A pending task that doesn't have a capacity provider strategy won't cause any capacity provider to scale out. A task or service can't set a capacity provider strategy if it also sets a launch type.

The following describes the scale-out behavior in more detail.
+ Group all of the provisioning tasks for this capacity provider so that each group has identical resource requirements.
+ When you use multiple instance types in an Auto Scaling group, the instance types in the Auto Scaling group are sorted by their parameters. These parameters include vCPU, memory, elastic network interfaces (ENIs), ports, and GPUs. The smallest and the largest instance types for each parameter are selected. For more information about how to choose the instance type, see [Amazon EC2 container instances for Amazon ECS](create-capacity.md).
**Important**  
If a group of tasks have resource requirements that are greater than the smallest instance type in the Auto Scaling group, then that group of tasks can’t run with this capacity provider. The capacity provider doesn’t scale the Auto Scaling group. The tasks remain in the `PROVISIONING` state.  
To prevent tasks from staying in the `PROVISIONING` state, we recommend that you create separate Auto Scaling groups and capacity providers for different minimum resource requirements. When you run tasks or create services, only add capacity providers to the capacity provider strategy when the task can run on the smallest instance type in the Auto Scaling group. For the other parameters, you can use placement constraints.
+ For each group of tasks, Amazon ECS calculates the number of instances that are required to run the unplaced tasks. This calculation uses a `binpack` strategy. This strategy accounts for the vCPU, memory, elastic network interfaces (ENI), ports, and GPUs requirements of the tasks. It also accounts for the resource availability of the Amazon EC2 instances. The values for the largest instance types are treated as the maximum calculated instance count. The values for the smallest instance type are used as protection. If the smallest instance type can't run at least one instance of the task, the calculation considers the task as not compatible. As a result, the task is excluded from scale-out calculation. When all the tasks aren't compatible with the smallest instance type, cluster auto scaling stops and the `CapacityProviderReservation` value remains at the `targetCapacity` value.
+ Amazon ECS publishes the `CapacityProviderReservation` metric to CloudWatch and requests instances with respect to the scaling step sizes. The number of instances requested is one of the following:
  + The `minimumScalingStepSize`, if the maximum calculated instance count is less than the minimum scaling step size.
  + The lower value of either the `maximumScalingStepSize` or the maximum calculated instance count.
+ CloudWatch alarms use the `CapacityProviderReservation` metric for capacity providers. When the `CapacityProviderReservation` metric is greater than the `targetCapacity` value, alarms also increase the `DesiredCapacity` of the Auto Scaling group. The `targetCapacity` value is a capacity provider setting that's sent to the CloudWatch alarm during the cluster auto scaling activation phase. 

  The default `targetCapacity` is 100%.
+ The Auto Scaling group launches additional EC2 instances. To prevent over-provisioning, Auto Scaling makes sure that recently launched EC2 instance capacity is stabilized before it launches new instances. Auto Scaling checks whether all existing instances have passed the `instanceWarmupPeriod` (measured as the current time minus the instance launch time). Scale-out is blocked for instances that are within the `instanceWarmupPeriod`.

  The default number of seconds for a newly launched instance to warm up is 300.

For more information, see [Deep dive on Amazon ECS cluster auto scaling](https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).
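The clamping between `minimumScalingStepSize` and `maximumScalingStepSize` described above can be sketched with shell arithmetic; the task counts and step sizes below are hypothetical:

```shell
# Hypothetical inputs: 10 unplaced tasks, binpack estimates 4 tasks per instance.
tasks=10
tasks_per_instance=4
# Ceiling division gives the maximum calculated instance count.
needed=$(( (tasks + tasks_per_instance - 1) / tasks_per_instance ))

min_step=1   # minimumScalingStepSize
max_step=10  # maximumScalingStepSize

# Clamp the calculated count between the minimum and maximum step sizes.
if [ "$needed" -lt "$min_step" ]; then
  request=$min_step
elif [ "$needed" -gt "$max_step" ]; then
  request=$max_step
else
  request=$needed
fi
echo "$request"   # prints 3
```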

### Scale-out considerations
<a name="scale-out-considerations"></a>

 Consider the following for the scale-out process:
+ Although there are multiple placement constraints, we recommend that you only use the `distinctInstance` task placement constraint. This prevents the scale-out process from stopping because you're using a placement constraint that's not compatible with the sampled instances.
+ Managed scaling works best if your Auto Scaling group uses the same or similar instance types. 
+ When a scale-out process is required and there are no currently running container instances, Amazon ECS always scales out to two instances initially, and then performs additional scale-out or scale-in processes. Any additional scale-out waits for the instance warm-up period. Amazon ECS always waits 15 minutes after a scale-out process before starting a scale-in process.
+ The second scale-out step must wait until the `instanceWarmupPeriod` expires, which might affect the overall scaling speed. If you need to reduce this time, make sure that `instanceWarmupPeriod` remains large enough for the EC2 instance to launch and start the Amazon ECS agent, which prevents over-provisioning.
+ Cluster auto scaling supports launch configurations, launch templates, and multiple instance types in the capacity provider Auto Scaling group. You can also use attribute-based instance type selection without multiple instance types.
+ When using an Auto Scaling group with On-Demand instances and multiple instance types or Spot Instances, place the larger instance types higher in the priority list and don't specify a weight. Specifying a weight isn't supported at this time. For more information, see [Auto Scaling groups with multiple instance types](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html) in the *AWS Auto Scaling User Guide*.
+ Amazon ECS then launches either the `minimumScalingStepSize`, if the maximum calculated instance count is less than the minimum scaling step size, or the lower value of either the `maximumScalingStepSize` or the maximum calculated instance count.
+ If an Amazon ECS service or `run-task` launches a task and the capacity provider container instances don't have enough resources to start the task, then Amazon ECS limits the number of tasks with this status for each cluster and prevents any tasks from exceeding this limit. For more information, see [Amazon ECS service quotas](service-quotas.md).

## Managed scale-in behavior
<a name="managed-scaling-scalein"></a>

Amazon ECS monitors container instances for each capacity provider within a cluster. When a container instance isn't running any tasks, the container instance is considered empty and Amazon ECS starts the scale-in process. 

CloudWatch scale-in alarms require 15 data points (15 minutes) before the scale-in process for the Auto Scaling group starts. After the scale-in process starts, until Amazon ECS no longer needs to reduce the number of registered container instances, the Auto Scaling group lowers the `DesiredCapacity` value each minute, by at least one instance but by no more than 50% of the current capacity.

When Amazon ECS requests a scale-out (when `CapacityProviderReservation` is greater than 100) while a scale-in process is in progress, the scale-in process is stopped and starts from the beginning if required.

The following describes the scale-in behavior in more detail:

1. Amazon ECS calculates the number of container instances that are empty. A container instance is considered empty even when daemon tasks are running.

1. Amazon ECS sets the `CapacityProviderReservation` value to a number from 0 through 100 that represents the ratio of how big the Auto Scaling group needs to be relative to how big it actually is, expressed as a percentage and calculated with the following formula. Then, Amazon ECS publishes the metric to CloudWatch. For more information about how the metric is calculated, see [Deep Dive on Amazon ECS Cluster Auto Scaling](https://aws.amazon.com/blogs/containers/deep-dive-on-amazon-ecs-cluster-auto-scaling/).

   ```
   CapacityProviderReservation = (number of instances needed) / (number of running instances) x 100
   ```

1. The `CapacityProviderReservation` metric generates a CloudWatch alarm. This alarm updates the `DesiredCapacity` value for the Auto Scaling group. Then, one of the following actions occurs:
   + If you don't use capacity provider managed termination, the Auto Scaling group selects EC2 instances using the Auto Scaling group termination policy and terminates the instances until the number of EC2 instances reaches the `DesiredCapacity`. The container instances are then deregistered from the cluster.
   + If all the container instances use managed termination protection, Amazon ECS removes the scale-in protection on the container instances that are empty. The Auto Scaling group will then be able to terminate the EC2 instances. The container instances are then deregistered from the cluster.
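As a rough check of the formula in step 2, the metric value can be computed with shell arithmetic; the instance counts below are hypothetical:

```shell
# Hypothetical counts: 6 instances needed, 10 currently running.
needed=6
running=10
# CapacityProviderReservation = (instances needed) / (running instances) x 100
reservation=$(( needed * 100 / running ))
echo "$reservation"   # prints 60
```

A value of 60, below a `targetCapacity` of 100, drives the CloudWatch alarm to lower the Auto Scaling group's `DesiredCapacity`.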

# Control the instances Amazon ECS terminates
<a name="managed-termination-protection"></a>

**Important**  
You must turn on Auto Scaling *instance scale-in protection* on the Auto Scaling group to use the managed termination protection feature of cluster auto scaling.

Managed termination protection allows cluster auto scaling to control which instances are terminated. When you use managed termination protection, Amazon ECS only terminates EC2 instances that don't have any running Amazon ECS tasks. Tasks that are run by a service that uses the `DAEMON` scheduling strategy are ignored; an instance can be terminated by cluster auto scaling even when it's running these tasks, because all of the instances in the cluster run them.

Amazon ECS first turns on the *instance scale-in protection* option for the EC2 instances in the Auto Scaling group. Then, Amazon ECS places the tasks on the instances. When all non-daemon tasks are stopped on an instance, Amazon ECS initiates the scale-in process and turns off scale-in protection for the EC2 instance. The Auto Scaling group can then terminate the instance.

Auto Scaling *instance scale-in protection* controls which EC2 instances can be terminated by Auto Scaling. Instances with the scale-in feature turned on can't be terminated during the scale-in process. For more information about Auto Scaling instance scale-in protection, see [Using instance scale-in protection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-instance-protection.html) in the *Amazon EC2 Auto Scaling User Guide*.

You can set the `targetCapacity` percentage so that you have spare capacity. This helps future tasks launch more quickly because the Auto Scaling group doesn't have to launch more instances. Amazon ECS uses the target capacity value to manage the CloudWatch metric that it creates, so that the Auto Scaling group settles into a steady state where no scaling action is required. The values can be from 0-100%. For example, to configure Amazon ECS to keep 10% free capacity on top of that used by Amazon ECS tasks, set the target capacity value to 90%. Consider the following when setting the `targetCapacity` value on a capacity provider.
+ A `targetCapacity` value of less than 100% represents the amount of free capacity (Amazon EC2 instances) that need to be present in the cluster. Free capacity means that there are no running tasks.
+ Placement constraints such as Availability Zones, without an additional `binpack` strategy, eventually force Amazon ECS to run one task for each instance, which might not be the desired behavior.

You must turn on Auto Scaling instance scale-in protection on the Auto Scaling group to use managed termination protection. If you don't turn on scale-in protection, then turning on managed termination protection can lead to undesirable behavior. For example, you may have instances stuck in draining state. For more information, see [Using instance scale-in protection](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-instance-protection.html) in the *Amazon EC2 Auto Scaling User Guide*.

When you use termination protection with a capacity provider, don't perform any manual actions, like detaching the instance, on the Auto Scaling group associated with the capacity provider. Manual actions can break the scale-in operation of the capacity provider. If you detach an instance from the Auto Scaling group, you need to also [deregister the detached instance](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deregister_container_instance.html) from the Amazon ECS cluster.
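If you do detach an instance, the corresponding container instance can be deregistered with the AWS CLI; the cluster name and container instance ID below are placeholders:

```shell
# "MyCluster" and the container instance ID are placeholders.
# --force deregisters the instance even if it still has running tasks.
aws ecs deregister-container-instance \
  --cluster MyCluster \
  --container-instance a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 \
  --force
```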

# Updating managed termination protection for Amazon ECS capacity providers
<a name="update-managed-termination-protection"></a>

To turn managed termination protection on or off for an existing capacity provider, update the capacity provider setting by using the console or the AWS CLI.

## Console
<a name="update-managed-termination-protection-console"></a>

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster.

1. On the cluster page, choose the **Infrastructure** tab.

1. Choose the capacity provider.

1. Choose **Update** to modify the capacity provider settings.

1. Under **Auto Scaling group settings**, toggle **Managed termination protection** to enable or disable the feature.

1. Choose **Update**.

## AWS CLI
<a name="update-managed-termination-protection-cli"></a>

You can update a capacity provider's managed termination protection setting using the `update-capacity-provider` command:

To enable managed termination protection:

```
aws ecs update-capacity-provider \
  --name CapacityProviderName \
  --auto-scaling-group-provider "managedScaling={status=ENABLED,targetCapacity=70,minimumScalingStepSize=1,maximumScalingStepSize=10},managedTerminationProtection=ENABLED"
```

To disable managed termination protection:

```
aws ecs update-capacity-provider \
  --name CapacityProviderName \
  --auto-scaling-group-provider "managedScaling={status=ENABLED,targetCapacity=70,minimumScalingStepSize=1,maximumScalingStepSize=10},managedTerminationProtection=DISABLED"
```

**Note**  
It might take a few minutes for the changes to take effect across your cluster. When enabling managed termination protection, instances that are already running tasks will be protected from scale-in events. When disabling managed termination protection, the protection flag will be removed from instances during the next ECS capacity provider management cycle.

## Console for running tasks
<a name="update-managed-termination-protection-console-tasks"></a>

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster.

1. On the cluster page, choose the **Tasks** tab.

1. Choose the task.

1. Choose **Configure task scale-in protection**.

   The **Configure task scale-in protection** dialog box displays.

   1. Under **Task scale-in protection**, toggle **Turn on**.

   1. For **Expires in minutes**, enter the number of minutes before task scale-in protection ends.

   1. Choose **Update**.

# Turning on Amazon ECS cluster auto scaling
<a name="turn-on-cluster-auto-scaling"></a>

You turn on cluster auto scaling so that Amazon ECS manages the scaling of Amazon EC2 instances that are registered to your cluster.

If you want to use the console to turn on cluster auto scaling, see [Creating a capacity provider for Amazon ECS](create-capacity-provider-console-v2.md).

Before you begin, create an Auto Scaling group and a capacity provider. For more information, see [Amazon ECS capacity providers for EC2 workloads](asg-capacity-providers.md).

To turn on cluster auto scaling, you associate the capacity provider with the cluster, and then turn on managed scaling for the capacity provider.

1. Use the `put-cluster-capacity-providers` command to associate one or more capacity providers with the cluster. 

   To add the AWS Fargate capacity providers, include the `FARGATE` and `FARGATE_SPOT` capacity providers in the request. For more information, see `[put-cluster-capacity-providers](https://docs.aws.amazon.com/cli/latest/reference/ecs/put-cluster-capacity-providers.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs put-cluster-capacity-providers \
     --cluster ClusterName \
     --capacity-providers CapacityProviderName FARGATE FARGATE_SPOT \
     --default-capacity-provider-strategy capacityProvider=CapacityProviderName,weight=1
   ```

   To add an Auto Scaling group for EC2, include the Auto Scaling group name in the request. For more information, see `[put-cluster-capacity-providers](https://docs.aws.amazon.com/cli/latest/reference/ecs/put-cluster-capacity-providers.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs put-cluster-capacity-providers \
     --cluster ClusterName \
     --capacity-providers CapacityProviderName \
     --default-capacity-provider-strategy capacityProvider=CapacityProviderName,weight=1
   ```

1. Use the `describe-clusters` command to verify that the association was successful. For more information, see `[describe-clusters](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-clusters.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs describe-clusters \
     --cluster ClusterName \
     --include ATTACHMENTS
   ```

1. Use the `update-capacity-provider` command to turn on managed auto scaling for the capacity provider. For more information, see `[update-capacity-provider](https://docs.aws.amazon.com/cli/latest/reference/ecs/update-capacity-provider.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs update-capacity-provider \
     --name CapacityProviderName \
     --auto-scaling-group-provider "managedScaling={status=ENABLED}"
   ```

# Turning off Amazon ECS cluster auto scaling
<a name="turn-off-cluster-auto-scaling"></a>

You turn off cluster auto scaling when you need more granular control of the EC2 instances that are registered to your cluster.

To turn off cluster auto scaling for a cluster, you can either disassociate the capacity provider with managed scaling turned on from the cluster or update the capacity provider to turn off managed scaling.

## Disassociate the capacity provider
<a name="disassociate-capacity-provider"></a>

Use the following steps to disassociate a capacity provider from a cluster.

1. Use the `put-cluster-capacity-providers` command to disassociate the Auto Scaling group capacity provider from the cluster. The cluster can keep its association with the AWS Fargate capacity providers. For more information, see `[put-cluster-capacity-providers](https://docs.aws.amazon.com/cli/latest/reference/ecs/put-cluster-capacity-providers.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs put-cluster-capacity-providers \
     --cluster ClusterName \
     --capacity-providers FARGATE FARGATE_SPOT \
     --default-capacity-provider-strategy '[]'
   ```

   Alternatively, use the `put-cluster-capacity-providers` command with empty lists to disassociate all capacity providers from the cluster. For more information, see `[put-cluster-capacity-providers](https://docs.aws.amazon.com/cli/latest/reference/ecs/put-cluster-capacity-providers.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs put-cluster-capacity-providers \
     --cluster ClusterName \
     --capacity-providers '[]' \
     --default-capacity-provider-strategy '[]'
   ```

1. Use the `describe-clusters` command to verify that the disassociation was successful. For more information, see `[describe-clusters](https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-clusters.html)` in the *AWS CLI Command Reference*.

   ```
   aws ecs describe-clusters \
     --cluster ClusterName \
     --include ATTACHMENTS
   ```

## Turn off managed scaling for the capacity provider
<a name="turn-off-managed-scaling"></a>

Use the following steps to turn off managed scaling for the capacity provider.
+ Use the `update-capacity-provider` command to turn off managed auto scaling for the capacity provider. For more information, see `[update-capacity-provider](https://docs.aws.amazon.com/cli/latest/reference/ecs/update-capacity-provider.html)` in the *AWS CLI Command Reference*.

  ```
  aws ecs update-capacity-provider \
    --name CapacityProviderName \
    --auto-scaling-group-provider "managedScaling={status=DISABLED}"
  ```

# Creating a capacity provider for Amazon ECS
<a name="create-capacity-provider-console-v2"></a>

After the cluster creation completes, you can create a new capacity provider (Auto Scaling group) for EC2. Capacity providers help you manage and scale the infrastructure for your applications.

Before you create the capacity provider, you need to create an Auto Scaling group. For more information, see [Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html) in the *Amazon EC2 Auto Scaling User Guide*.

**To create a capacity provider for the cluster (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster.

1. On the **Cluster : *name*** page, choose **Infrastructure**, and then choose **Create**.

1. On the **Create capacity providers** page, configure the following options.

   1. Under **Basic details**, for **Capacity provider name**, enter a unique capacity provider name.

   1. Under **Auto Scaling group**, for **Use an existing Auto Scaling group**, choose the Auto Scaling group.

   1. (Optional) To configure a scaling policy, under **Scaling policies**, configure the following options.
      + To have Amazon ECS manage the scale-in and scale-out actions, select **Turn on managed scaling**.
      + To prevent EC2 instances with running Amazon ECS tasks from being terminated, select **Turn on scaling protection**.
      + For **Set target capacity**, enter the target value for the CloudWatch metric used in the Amazon ECS-managed target tracking scaling policy.

1. Choose **Create**.

# Updating an Amazon ECS capacity provider
<a name="update-capacity-provider-console-v2"></a>

When you use an Auto Scaling group as a capacity provider, you can modify the group's scaling policy.

**To update a capacity provider for the cluster (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster.

1. On the **Cluster : *name*** page, choose **Infrastructure**, and then choose **Update**.

1. On the **Create capacity providers** page, configure the following options.

   1. Under **Auto Scaling group**, under **Scaling policies**, configure the following options.
     + To have Amazon ECS manage the scale-in and scale-out actions, select **Turn on managed scaling**.
     + To prevent EC2 instances with running Amazon ECS tasks from being terminated, select **Turn on scaling protection**.
     + For **Set target capacity**, enter the target value for the CloudWatch metric used in the Amazon ECS-managed target tracking scaling policy.

1. Choose **Update**.

# Deleting an Amazon ECS capacity provider
<a name="delete-capacity-provider-console-v2"></a>

If you are finished using an Auto Scaling group capacity provider, you can delete it. After the group is deleted, the Auto Scaling group capacity provider transitions to the `INACTIVE` state. Capacity providers with an `INACTIVE` status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future, so you should not rely on `INACTIVE` capacity providers persisting. Before the Auto Scaling group capacity provider is deleted, the capacity provider must be removed from the capacity provider strategy from all services. You can use the `UpdateService` API or the update service workflow in the Amazon ECS console to remove a capacity provider from a service's capacity provider strategy. Use the **Force new deployment** option to ensure that any tasks using the Amazon EC2 instance capacity provided by the capacity provider are transitioned to use the capacity from the remaining capacity providers.
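As a sketch, the removal-then-delete workflow might look like the following with the AWS CLI; the cluster, service, and capacity provider names are placeholders:

```shell
# "MyCluster", "MyService", "RemainingProvider", and "MyCapacityProvider"
# are placeholder names.
# Remove the capacity provider from the service's strategy and force a new
# deployment so tasks move onto the remaining capacity providers.
aws ecs update-service \
  --cluster MyCluster \
  --service MyService \
  --capacity-provider-strategy capacityProvider=RemainingProvider,weight=1 \
  --force-new-deployment

# After the tasks have transitioned, delete the capacity provider.
aws ecs delete-capacity-provider \
  --capacity-provider MyCapacityProvider
```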

**To delete a capacity provider for the cluster (Amazon ECS console)**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the cluster.

1. On the **Cluster : *name*** page, choose **Infrastructure**, the Auto Scaling group, and then choose **Delete**.

1. In the confirmation box, enter **delete *Auto Scaling group name***.

1. Choose **Delete**.

# Safely stop Amazon ECS workloads running on EC2 instances
<a name="managed-instance-draining"></a>

Managed instance draining facilitates graceful termination of Amazon EC2 instances. This allows your workloads to stop safely and be rescheduled to non-terminating instances. You can perform infrastructure maintenance and updates without worrying about disruption to workloads. By using managed instance draining, you simplify infrastructure management workflows that require replacement of Amazon EC2 instances while maintaining the resilience and availability of your applications.

Amazon ECS managed instance draining works with Auto Scaling group instance replacements. Based on instance refresh and maximum instance lifetime, customers can ensure that they stay compliant with the latest OS and security mandates for their capacity.

Managed instance draining can only be used with Amazon ECS capacity providers. You can turn on managed instance draining when you create or update your Auto Scaling group capacity providers using the Amazon ECS console, AWS CLI, or SDK.
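For example, managed instance draining could be turned on for an existing capacity provider with the AWS CLI; the capacity provider name below is a placeholder:

```shell
# "MyCapacityProvider" is a placeholder capacity provider name.
aws ecs update-capacity-provider \
  --name MyCapacityProvider \
  --auto-scaling-group-provider "managedDraining=ENABLED"
```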

The following events are covered by Amazon ECS managed instance draining.
+ [Auto Scaling group instance refresh](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-instance-refresh.html) ‐ Use instance refresh to perform rolling replacement of your Amazon EC2 instances in your Auto Scaling group instead of manually doing it in batches. This is useful when you need to replace a large number of instances. An instance refresh is initiated through the Amazon EC2 console or the `StartInstanceRefresh` API. Make sure you select `Replace` for Scale-in protection when calling `StartInstanceRefresh` if you're using managed termination protection.
+ [Maximum instance lifetime](https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-max-instance-lifetime.html) ‐ You can define a maximum lifetime when it comes to replacing Auto Scaling group instances. This is helpful for scheduling replacement instances based on internal security policies or compliance.
+ Auto Scaling group scale-in ‐ Based on scaling policies and scheduled scaling actions, Auto Scaling groups support automatic scaling of instances. By using an Auto Scaling group as an Amazon ECS capacity provider, you can scale in Auto Scaling group instances when no tasks are running on them.
+ [Auto Scaling group health checks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-health-checks.html) ‐ Auto Scaling groups support several health checks to manage termination of unhealthy instances.
+ [CloudFormation stack updates](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-direct.html) ‐ You can add an `UpdatePolicy` attribute to your CloudFormation stack to perform rolling updates when the group changes.
+ [Spot capacity rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-capacity-rebalancing.html) ‐ The Auto Scaling group tries to proactively replace Spot Instances that have a higher risk of interruption, based on the Amazon EC2 capacity rebalance notice. The Auto Scaling group terminates the old instance when the replacement is launched and healthy. Amazon ECS managed instance draining drains the Spot Instance the same way it drains a non-Spot Instance.
+ [Spot interruption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) ‐ Spot Instances are terminated with a two-minute notice. Amazon ECS managed instance draining puts the instance in the draining state in response.

**Amazon EC2 Auto Scaling lifecycle hooks with managed instance draining**  
Auto Scaling group lifecycle hooks enable you to create solutions that are triggered by certain events in the instance lifecycle and perform a custom action when such an event occurs. An Auto Scaling group allows for up to 50 hooks. Multiple termination hooks can exist and are performed in parallel, and the Auto Scaling group waits for all hooks to finish before terminating an instance.

In addition to the Amazon ECS-managed termination hook, you can also configure your own lifecycle termination hooks.

If you've already configured an Auto Scaling group termination lifecycle hook and also enabled Amazon ECS managed instance draining, both lifecycle hooks are performed. The relative timings, however, are not guaranteed. Lifecycle hooks have a `default action` setting that specifies the action to take when the timeout elapses. In case of failures, we recommend using `continue` as the default result in your custom hook. This ensures that other hooks, particularly the Amazon ECS managed hooks, aren't impacted by any errors in your custom lifecycle hook. The alternative result of `abandon` causes all other hooks to be skipped and should be avoided. For more information about Auto Scaling group lifecycle hooks, see [Amazon EC2 Auto Scaling lifecycle hooks](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lifecycle-hooks.html) in the *Amazon EC2 Auto Scaling User Guide*.

**Tasks and managed instance draining**  
Amazon ECS managed instance draining uses the existing draining feature for container instances. The [container instance draining](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/container-instance-draining.html) feature replaces and stops replica tasks that belong to an Amazon ECS service. Standalone tasks, such as those invoked by `RunTask`, that are in the `PENDING` or `RUNNING` state remain unaffected. You must either wait for these tasks to complete or stop them manually. The container instance remains in the `DRAINING` state until either all tasks are stopped or 48 hours have passed. Daemon tasks are the last to stop, after all replica tasks have stopped.

**Managed instance draining and managed termination protection**  
Managed instance draining works even if managed termination protection is disabled. For information about managed termination protection, see [Control the instances Amazon ECS terminates](managed-termination-protection.md). 

The following table summarizes the behavior for different combinations of managed termination and managed draining.


|  Managed termination  |  Managed draining  |  Outcome  | 
| --- | --- | --- | 
|  Enabled  | Enabled | Amazon ECS protects Amazon EC2 instances that are running tasks from being terminated by scale-in events. Any instances undergoing termination, such as those that don't have termination protection set, have received Spot interruption, or are forced by instance refresh are gracefully drained. | 
|  Disabled  | Enabled | Amazon ECS doesn't protect Amazon EC2 instances running tasks from being scaled-in. However, any instances that are being terminated are gracefully drained. | 
|  Enabled  | Disabled | Amazon ECS protects Amazon EC2 instances that are running tasks from being terminated by scale-in events. However, instances can still get terminated by Spot interruption or forced instance refresh, or if they aren't running any tasks. Amazon ECS doesn't perform graceful draining for these instances, and launches replacement service tasks after they stop. | 
|  Disabled  | Disabled | Amazon EC2 instances can be scaled-in or terminated at any time, even if they are running Amazon ECS tasks. Amazon ECS will launch replacement service tasks after they stop. | 

**Managed instance draining and Spot Instance draining**  
With Spot Instance draining, you can set the environment variable `ECS_ENABLE_SPOT_INSTANCE_DRAINING` on the Amazon ECS agent, which enables Amazon ECS to place an instance in the `DRAINING` status in response to the two-minute Spot interruption notice. Amazon ECS managed instance draining facilitates graceful shutdown of Amazon EC2 instances undergoing termination for many reasons, not just Spot interruption. For instance, you can use Amazon EC2 Auto Scaling capacity rebalancing to proactively replace Spot Instances at elevated risk of interruption, and managed instance draining gracefully shuts down the Spot Instances being replaced. When you use managed instance draining, you don't need to enable Spot Instance draining separately, so `ECS_ENABLE_SPOT_INSTANCE_DRAINING` in Auto Scaling group user data is redundant. For more information about Spot Instance draining, see [Spot Instances](create-capacity.md#container-instance-spot).
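
For reference, when you aren't using managed instance draining, Spot Instance draining is enabled through the agent configuration file in instance user data. The following is a minimal user-data sketch of that agent setting; as noted above, it's redundant when managed instance draining is enabled:

```
#!/bin/bash
# Enable Spot Instance draining in the Amazon ECS agent configuration.
# This setting is unnecessary when managed instance draining is enabled.
echo "ECS_ENABLE_SPOT_INSTANCE_DRAINING=true" >> /etc/ecs/ecs.config
```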

## How managed instance draining works with EventBridge
<a name="managed-instance-draining-eventbridge"></a>

Amazon ECS managed instance draining events are published to Amazon EventBridge, and Amazon ECS creates an EventBridge managed rule on your account's default event bus to support managed instance draining. You can route these events to other AWS services, such as Lambda, Amazon SNS, and Amazon SQS, to monitor and troubleshoot.
+ Amazon EC2 Auto Scaling sends an event to EventBridge when a lifecycle hook is invoked.
+ Spot interruption notices are published to EventBridge.
+ Amazon ECS generates error messages that you can retrieve through the Amazon ECS console and APIs.
+ EventBridge has retry mechanisms built in as mitigations for temporary failures.
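
For example, you can create your own EventBridge rule that matches the Auto Scaling termination lifecycle events and sends them to a target of your choice. The rule name below is a placeholder, and this is a sketch of your own monitoring rule, not the managed rule that Amazon ECS creates:

```
aws events put-rule \
  --name my-asg-terminate-lifecycle-rule \
  --event-pattern '{
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-terminate Lifecycle Action"]
  }'
```

After creating the rule, use `aws events put-targets` to attach a Lambda function, SNS topic, or SQS queue.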

# Configuring Amazon ECS capacity providers to safely shut down instances
<a name="enable-managed-instance-draining"></a>

You can turn on managed instance draining when you create or update your Auto Scaling group capacity providers using the Amazon ECS console and AWS CLI.

**Note**  
Managed instance draining is on by default when you create a capacity provider.

The following are examples using the AWS CLI for creating a capacity provider with managed instance draining enabled and enabling managed instance draining for a cluster's existing capacity provider.

**Create a capacity provider with managed instance draining enabled**  
To create a capacity provider with managed instance draining enabled, use the `create-capacity-provider` command. Set the `managedDraining` parameter to `ENABLED`.

```
aws ecs create-capacity-provider \
--name capacity-provider \
--auto-scaling-group-provider '{
  "autoScalingGroupArn": "asg-arn",
  "managedScaling": {
    "status": "ENABLED",
    "targetCapacity": 100,
    "minimumScalingStepSize": 1,
    "maximumScalingStepSize": 1
  },
  "managedDraining": "ENABLED",
  "managedTerminationProtection": "ENABLED"
}'
```

Response:

```
{
    "capacityProvider": {
        "capacityProviderArn": "capacity-provider-arn",
        "name": "capacity-provider",
        "status": "ACTIVE",
        "autoScalingGroupProvider": {
            "autoScalingGroupArn": "asg-arn",
            "managedScaling": {
                "status": "ENABLED",
                "targetCapacity": 100,
                "minimumScalingStepSize": 1,
                "maximumScalingStepSize": 1
            },
            "managedTerminationProtection": "ENABLED",
            "managedDraining": "ENABLED"
        }
    }
}
```

**Enable managed instance draining for a cluster's existing capacity provider**  
To enable managed instance draining for a cluster's existing capacity provider, use the `update-capacity-provider` command. In the response, `managedDraining` still shows `DISABLED` and `updateStatus` shows `UPDATE_IN_PROGRESS` while the update is being applied.

```
aws ecs update-capacity-provider \
--name cp-draining \
--auto-scaling-group-provider '{
  "managedDraining": "ENABLED"
}'
```

Response:

```
{
    "capacityProvider": {
        "capacityProviderArn": "cp-draining-arn",
        "name": "cp-draining",
        "status": "ACTIVE",
        "autoScalingGroupProvider": {
            "autoScalingGroupArn": "asg-draining-arn",
            "managedScaling": {
                "status": "ENABLED",
                "targetCapacity": 100,
                "minimumScalingStepSize": 1,
                "maximumScalingStepSize": 1,
                "instanceWarmupPeriod": 300
            },
            "managedTerminationProtection": "DISABLED",
            "managedDraining": "DISABLED" // before update
        },
        "updateStatus": "UPDATE_IN_PROGRESS", // update in progress; describe again to see the result
        "tags": [
        ]
    }
}
```



Use the `describe-clusters` command and include `ATTACHMENTS`. The `status` of the managed instance draining attachment is `PRECREATED`, and the overall `attachmentsStatus` is `UPDATING`.

```
aws ecs describe-clusters --clusters cluster-name --include ATTACHMENTS
```

Response:

```
{
    "clusters": [
        {
            ...

            "capacityProviders": [
                "cp-draining"
            ],
            "defaultCapacityProviderStrategy": [],
            "attachments": [
                // new precreated managed draining attachment
                {
                    "id": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
                    "type": "managed_draining",
                    "status": "PRECREATED",
                    "details": [
                        {
                            "name": "capacityProviderName",
                            "value": "cp-draining"
                        },
                        {
                            "name": "autoScalingLifecycleHookName",
                            "value": "ecs-managed-draining-termination-hook"
                        }
                    ]
                },

                ...

            ],
            "attachmentsStatus": "UPDATING"
        }
    ],
    "failures": []
}
```

When the update is finished, use `describe-capacity-providers`, and you see `managedDraining` is now `ENABLED`.

```
aws ecs describe-capacity-providers --capacity-providers cp-draining
```

Response:

```
{
    "capacityProviders": [
        {
            "capacityProviderArn": "cp-draining-arn",
            "name": "cp-draining",
            "status": "ACTIVE",
            "autoScalingGroupProvider": {
                "autoScalingGroupArn": "asg-draining-arn",
                "managedScaling": {
                    "status": "ENABLED",
                    "targetCapacity": 100,
                    "minimumScalingStepSize": 1,
                    "maximumScalingStepSize": 1,
                    "instanceWarmupPeriod": 300
                },
                "managedTerminationProtection": "DISABLED",
                "managedDraining": "ENABLED" // successfully updated
            },
            "updateStatus": "UPDATE_COMPLETE",
            "tags": []
        }
    ]
}
```

## Amazon ECS managed instance draining troubleshooting
<a name="managed-instance-troubleshooting"></a>

You might need to troubleshoot issues with managed instance draining. The following is an example of an issue and its resolution.

**Instances don't terminate after exceeding maximum instance lifetime when using an Auto Scaling group.**  
If your instances aren't terminating even after exceeding the maximum instance lifetime in an Auto Scaling group, it may be because they're protected from scale-in. You can turn off managed termination protection and allow managed instance draining to handle instance recycling. 
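
As a sketch of that resolution, assuming a capacity provider named `cp-draining`, you might turn off managed termination protection while leaving managed draining enabled:

```
aws ecs update-capacity-provider \
--name cp-draining \
--auto-scaling-group-provider '{
  "managedTerminationProtection": "DISABLED",
  "managedDraining": "ENABLED"
}'
```

With this combination, scale-in events can terminate instances that exceed the maximum instance lifetime, and managed instance draining gracefully drains them first.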

## Draining behavior for Amazon ECS Managed Instances
<a name="managed-instances-draining-behavior"></a>

Amazon ECS Managed Instances termination handling ensures graceful workload transitions while optimizing costs and maintaining system health. The termination system provides three distinct decision paths for instance termination, each with different timing characteristics and customer impact profiles.

### Termination decision paths
<a name="managed-instances-termination-paths"></a>

Customer-initiated termination  
Provides direct control over instance removal when you need to remove container instances from service immediately. You invoke the `DeregisterContainerInstance` API with the `force` flag set to `true`, indicating that immediate termination is required despite any running workloads.

System-initiated idle termination  
Amazon ECS Managed Instances continuously monitors your capacity and proactively optimizes costs by terminating idle container instances that aren't running any tasks. Amazon ECS uses a heuristic delay to give container instances a chance to acquire newly launched tasks before they are terminated. You can customize this delay with the `scaleInAfter` Amazon ECS Managed Instances capacity provider configuration parameter.

Infrastructure refresh termination  
Amazon ECS Managed Instances automatically manages and updates software on managed container instances to ensure security and compliance while maintaining workload availability. For more information, see [patching in Amazon ECS Managed Instances](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/managed-instances-patching.html).

### Graceful draining and workload migration
<a name="managed-instances-draining-coordination"></a>

The graceful draining system coordinates with Amazon ECS service management to ensure that service-managed tasks are properly migrated away from instances scheduled for termination.

**Service task draining coordination**  
When an instance transitions to the `DRAINING` state, the Amazon ECS scheduler stops placing new tasks on the instance and gracefully shuts down existing service tasks. Draining coordinates with service deployment strategies, health check requirements, and your draining preferences to ensure optimal migration timing and success rates.

**Standalone task handling**  
Standalone tasks require different handling because they don't benefit from automatic service management. The system evaluates standalone task characteristics, including task duration estimates, completion probability, and customer impact. The graceful completion strategy allows standalone tasks to complete naturally during an extended grace period, while forced termination ensures that infrastructure refresh occurs within acceptable timeframes when tasks haven't completed naturally.

### Two-phase completion strategy
<a name="managed-instances-two-phase-completion"></a>

The termination system implements a two-phase approach that balances workload continuity against infrastructure management requirements.

**Phase 1: Graceful completion period**  
During this phase, the system implements graceful draining strategies that prioritize workload continuity. Service tasks are gracefully drained through normal Amazon ECS scheduling processes, standalone tasks continue running and may complete naturally, and the system monitors for all tasks to reach stopped state through natural completion processes.

**Phase 2: Hard deadline enforcement**  
When graceful completion does not achieve termination objectives within acceptable timeframes, the system implements hard deadline enforcement. The hard deadline is typically set to draining initiation time plus seven days, providing substantial time for graceful completion while maintaining operational requirements. The enforcement includes automatic invocation of force deregistration procedures and immediate termination of all remaining tasks regardless of completion status.

# Creating resources for Amazon ECS cluster auto scaling using the AWS Management Console
<a name="tutorial-cluster-auto-scaling-console"></a>

Learn how to create the resources for cluster auto scaling using the AWS Management Console. Where resources require a name, we use the prefix `ConsoleTutorial` to ensure they all have unique names and to make them easy to locate.

**Topics**
+ [Prerequisites](#console-tutorial-prereqs)
+ [Step 1: Create an Amazon ECS cluster](#console-tutorial-cluster)
+ [Step 2: Register a task definition](#console-tutorial-register-task-definition)
+ [Step 3: Run a task](#console-tutorial-run-task)
+ [Step 4: Verify](#console-tutorial-verify)
+ [Step 5: Clean up](#console-tutorial-cleanup)

## Prerequisites
<a name="console-tutorial-prereqs"></a>

This tutorial assumes that the following prerequisites have been completed:
+ The steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md) have been completed.
+ Your IAM user has the required permissions specified in the [AmazonECS_FullAccess](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonECS_FullAccess) IAM policy example.
+ The Amazon ECS container instance IAM role is created. For more information, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
+ The Amazon ECS service-linked IAM role is created. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md).
+ The Auto Scaling service-linked IAM role is created. For more information, see [Service-Linked Roles for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-service-linked-role.html) in the *Amazon EC2 Auto Scaling User Guide*.
+ You have a VPC and security group created to use. For more information, see [Create a virtual private cloud](get-set-up-for-amazon-ecs.md#create-a-vpc).

## Step 1: Create an Amazon ECS cluster
<a name="console-tutorial-cluster"></a>

Use the following steps to create an Amazon ECS cluster. 

Amazon ECS creates an Amazon EC2 Auto Scaling launch template and Auto Scaling group on your behalf as part of the CloudFormation stack. 

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **Create cluster**.

1. Under **Cluster configuration**, for **Cluster name**, enter `ConsoleTutorial-cluster`.

1. Under **Infrastructure**, clear **AWS Fargate (serverless)**, and then select **Amazon EC2 instances**. Next, configure the Auto Scaling group that acts as the capacity provider.

   1. Under **Auto Scaling group (ASG)**, select **Create new ASG**, and then provide the following details about the group:
     + For **Operating system/Architecture**, choose **Amazon Linux 2**.
     + For **EC2 instance type**, choose **t3.nano**.
     + For **Capacity**, enter the minimum number and the maximum number of instances to launch in the Auto Scaling group. 

1. (Optional) To manage the cluster tags, expand **Tags**, and then perform one of the following operations:

   [Add a tag] Choose **Add tag** and do the following:
   + For **Key**, enter the key name.
   + For **Value**, enter the key value.

   [Remove a tag] Choose **Remove** to the right of the tag’s Key and Value.

1. Choose **Create**.

## Step 2: Register a task definition
<a name="console-tutorial-register-task-definition"></a>

Before you can run a task on your cluster, you must register a task definition. Task definitions are lists of containers grouped together. The following example is a simple task definition that uses an `amazonlinux` image from Docker Hub and simply sleeps. For more information about the available task definition parameters, see [Amazon ECS task definitions](task_definitions.md).

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the **JSON editor** box, paste the following contents.

   ```
   {
       "family": "ConsoleTutorial-taskdef",
       "containerDefinitions": [
           {
               "name": "sleep",
               "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
               "memory": 20,
               "essential": true,
               "command": [
                   "sh",
                   "-c",
                   "sleep infinity"
               ]
           }
       ],
       "requiresCompatibilities": [
           "EC2"
       ]
   }
   ```

1. Choose **Create**.

## Step 3: Run a task
<a name="console-tutorial-run-task"></a>

After you have registered a task definition for your account, you can run a task in the cluster. For this tutorial, you run five instances of the `ConsoleTutorial-taskdef` task definition in your `ConsoleTutorial-cluster` cluster.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose **ConsoleTutorial-cluster**.

1. Under **Tasks**, choose **Run new task**.

1. In the **Environment** section, under **Compute options**, choose **Capacity provider strategy**.

1. Under **Deployment configuration**, for **Application type**, choose **Task**.

1. Choose **ConsoleTutorial-taskdef** from the **Family** dropdown list.

1. Under **Desired tasks**, enter 5.

1. Choose **Create**.

## Step 4: Verify
<a name="console-tutorial-verify"></a>

At this point in the tutorial, you should have a cluster with five tasks running and an Auto Scaling group with a capacity provider. The capacity provider has Amazon ECS managed scaling enabled.

You can verify that everything is working properly by viewing the CloudWatch metrics, the Auto Scaling group settings, and finally the Amazon ECS cluster task count.

**To view the CloudWatch metrics for your cluster**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. On the navigation bar at the top of the screen, select the Region.

1. On the navigation pane, under **Metrics**, choose **All metrics**.

1. On the **All metrics** page, under the **Browse** tab, choose `AWS/ECS/ManagedScaling`.

1. Choose **CapacityProviderName, ClusterName**.

1. Select the check box that corresponds to the `ConsoleTutorial-cluster` **ClusterName**.

1. Under the **Graphed metrics** tab, change **Period** to **30 seconds** and **Statistic** to **Maximum**.

   The value displayed in the graph shows the target capacity value for the capacity provider. It should begin at `100`, which was the target capacity percent we set. You should see it scale up to `200`, which will trigger an alarm for the target tracking scaling policy. The alarm will then trigger the Auto Scaling group to scale out.

Use the following steps to view your Auto Scaling group details to confirm that the scale-out action occurred.

**To verify the Auto Scaling group scaled out**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. On the navigation bar at the top of the screen, select the Region.

1. On the navigation pane, under **Auto Scaling**, choose **Auto Scaling Groups**.

1. Choose the `ConsoleTutorial-cluster` Auto Scaling group created in this tutorial. View the value under **Desired capacity**, and view the instances under the **Instance management** tab to confirm that your group scaled out to two instances.

Use the following steps to view your Amazon ECS cluster to confirm that the Amazon EC2 instances were registered with the cluster and your tasks transitioned to a `RUNNING` status.

**To verify the instances in the Auto Scaling group**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose the `ConsoleTutorial-cluster` cluster.

1. On the **Tasks** tab, confirm you see five tasks in the `RUNNING` status.

## Step 5: Clean up
<a name="console-tutorial-cleanup"></a>

When you have finished this tutorial, clean up the resources associated with it to avoid incurring charges for resources that you aren't using. Deleting capacity providers and task definitions isn't supported, but there is no cost associated with these resources.

**To clean up the tutorial resources**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **ConsoleTutorial-cluster**.

1. On the **ConsoleTutorial-cluster** page, choose the **Tasks** tab, and then choose **Stop**, **Stop all**.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **ConsoleTutorial-cluster**.

1. In the upper-right of the page, choose **Delete cluster**. 

1. In the confirmation box, enter **delete ConsoleTutorial-cluster**, and then choose **Delete**.

1. Delete the Auto Scaling groups using the following steps.

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. On the navigation bar at the top of the screen, select the Region.

   1. On the navigation pane, under **Auto Scaling**, choose **Auto Scaling Groups**.

   1. Select the `ConsoleTutorial-cluster` Auto Scaling group, then choose **Actions**.

   1. From the **Actions** menu, choose **Delete**. Enter **delete** in the confirmation box, and then choose **Delete**.

# Amazon EC2 container instances for Amazon ECS
<a name="create-capacity"></a>

An Amazon ECS container instance is an Amazon EC2 instance that runs the Amazon ECS container agent and is registered to a cluster. When you run tasks with Amazon ECS using the EC2 launch type, an external capacity provider, or an Auto Scaling group capacity provider, your tasks are placed on your active container instances. You are responsible for container instance management and maintenance.

Although you can create your own Amazon EC2 instance AMI that meets the basic specifications needed to run your containerized workloads on Amazon ECS, the Amazon ECS-optimized AMIs are preconfigured and tested on Amazon ECS by AWS engineers. They are the simplest way for you to get started and to get your containers running on AWS quickly.

When you create a cluster using the console, Amazon ECS creates a launch template for your instances with the latest AMI associated with the selected operating system. 

When you use CloudFormation to create a cluster, the SSM parameter is part of the Amazon EC2 launch template for the Auto Scaling group instances. You can configure the template to use a dynamic Systems Manager parameter to determine which Amazon ECS-optimized AMI to deploy. With this parameter, each time you deploy the stack, CloudFormation checks whether an update needs to be applied to the EC2 instances. For an example of how to use the Systems Manager parameter, see [Create an Amazon ECS cluster with the Amazon ECS-optimized Amazon Linux 2023 AMI](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-cluster.html#aws-resource-ecs-cluster--examples--Create_an_cluster_with_the_Amazon_Linux_2023_ECS-Optimized-AMI) in the *AWS CloudFormation User Guide*.
+ [Retrieving Amazon ECS-optimized Linux AMI metadata](retrieve-ecs-optimized_AMI.md)
+ [Retrieving Amazon ECS-optimized Bottlerocket AMI metadata](ecs-bottlerocket-retrieve-ami.md)
+ [Retrieving Amazon ECS-optimized Windows AMI metadata](retrieve-ecs-optimized_windows_AMI.md)

You can choose from the instance types that are compatible with your application. With larger instances, you can launch more tasks at the same time. With smaller instances, you can scale out in a more fine-grained way to save costs. You don't need to choose a single Amazon EC2 instance type that fits all the applications in your cluster. Instead, you can create multiple Auto Scaling groups, where each group has a different instance type. Then, you can create an Amazon EC2 capacity provider for each one of these groups.
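
As a sketch, after creating one capacity provider per Auto Scaling group, you can associate them with a cluster and weight them in the default strategy. The cluster and capacity provider names here are placeholders:

```
aws ecs put-cluster-capacity-providers \
  --cluster my-cluster \
  --capacity-providers cp-large cp-small \
  --default-capacity-provider-strategy \
      capacityProvider=cp-large,weight=1 \
      capacityProvider=cp-small,weight=1
```

Equal weights split tasks evenly across the two instance types; adjust the weights to favor one group over the other.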

Use the following guidelines to determine the instance family types and instance type to use:
+ Eliminate the instance types or instance families that don't meet the specific requirements of your application. For example, if your application requires a GPU, you can exclude any instance types that don't have a GPU.
+ Consider requirements including network throughput and storage.
+ Consider the CPU and memory. As a general rule, the CPU and memory must be large enough to hold at least one replica of the task that you want to run. 
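
To shortlist candidate instance types against requirements like these, you can query the Amazon EC2 `describe-instance-types` API. The filter values below are illustrative; for example, this lists current-generation types with 8 GiB of memory:

```
aws ec2 describe-instance-types \
  --filters "Name=current-generation,Values=true" \
            "Name=memory-info.size-in-mib,Values=8192" \
  --query 'InstanceTypes[].InstanceType' --output text
```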

## Spot Instances
<a name="container-instance-spot"></a>

Spot capacity can provide significant cost savings over on-demand instances. Spot capacity is excess capacity that's priced significantly lower than on-demand or reserved capacity. Spot capacity is suitable for batch processing and machine-learning workloads, and development and staging environments. More generally, it's suitable for any workload that tolerates temporary downtime. 

Be aware of the following consequences, because Spot capacity might not be available all the time.
+ During periods of extremely high demand, Spot capacity might be unavailable. This can cause Amazon EC2 Spot Instance launches to be delayed. In these events, Amazon ECS services retry launching tasks, and Amazon EC2 Auto Scaling groups also retry launching instances, until the required capacity becomes available. Amazon EC2 doesn't replace Spot capacity with on-demand capacity. 
+ When the overall demand for capacity increases, Spot Instances and tasks might be terminated with only a two-minute warning. After the warning is sent, tasks should begin an orderly shutdown if necessary before the instance is fully terminated. This helps minimize the possibility of errors. For more information about a graceful shutdown, see [Graceful shutdowns with ECS](https://aws.amazon.com/blogs/containers/graceful-shutdowns-with-ecs/).

To help minimize Spot capacity shortages, consider the following recommendations: 
+ Use multiple Regions and Availability Zones - Spot capacity varies by Region and Availability Zone. You can improve Spot availability by running your workloads in multiple Regions and Availability Zones. If possible, specify subnets in all the Availability Zones in the Regions where you run your tasks and instances. 
+ Use multiple Amazon EC2 instance types - When you use a mixed instances policy with Amazon EC2 Auto Scaling, multiple instance types are launched into your Auto Scaling group. This ensures that a request for Spot capacity can be fulfilled when needed. To maximize reliability and minimize complexity, use instance types with roughly the same amount of CPU and memory in your mixed instances policy. These instances can be from a different generation, or variants of the same base instance type. Note that they might come with additional features that you might not require. An example of such a list could include m4.large, m5.large, m5a.large, m5d.large, m5n.large, m5dn.large, and m5ad.large. For more information, see [Auto Scaling groups with multiple instance types and purchase options](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-mixed-instances-groups.html) in the *Amazon EC2 Auto Scaling User Guide*.
+ Use the capacity-optimized Spot allocation strategy - With Amazon EC2 Spot, you can choose between the capacity- and cost-optimized allocation strategies. If you choose the capacity-optimized strategy when launching a new instance, Amazon EC2 Spot selects the instance type with the greatest availability in the selected Availability Zone. This helps reduce the possibility that the instance is terminated soon after it launches. 
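
The recommendations above can be combined in a single Auto Scaling group. The following sketch creates a group that spans multiple subnets, uses several similar instance types, and requests Spot capacity with the `capacity-optimized` allocation strategy; the group name, launch template name, and subnet IDs are placeholders:

```
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-spot-asg \
  --min-size 0 --max-size 10 \
  --vpc-zone-identifier "subnet-1111,subnet-2222,subnet-3333" \
  --mixed-instances-policy '{
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {
        "LaunchTemplateName": "my-launch-template",
        "Version": "$Latest"
      },
      "Overrides": [
        {"InstanceType": "m5.large"},
        {"InstanceType": "m5a.large"},
        {"InstanceType": "m5d.large"}
      ]
    },
    "InstancesDistribution": {
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "capacity-optimized"
    }
  }'
```

Setting `OnDemandPercentageAboveBaseCapacity` to `0` makes all capacity above the base Spot; raise it if you want a mix of On-Demand and Spot Instances.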

For information about how to configure spot termination notices on your container instances, see:
+ [Configuring Amazon ECS Linux container instances to receive Spot Instance notices](spot-instance-draining-linux-container.md)
+ [Configuring Amazon ECS Windows container instances to receive Spot Instance notices](windows-spot-instance-draining-container.md)

# Amazon ECS-optimized Linux AMIs
<a name="ecs-optimized_AMI"></a>

**Important**  
The Amazon ECS-Optimized Amazon Linux 2 AMI reaches end-of-life on June 30, 2026, mirroring the EOL date of the upstream Amazon Linux 2 operating system (for more information, see the [Amazon Linux 2 FAQs](https://aws.amazon.com/amazon-linux-2/faqs/)). We encourage customers to upgrade their applications to use Amazon Linux 2023, which includes long-term support through 2028. For information about migrating from Amazon Linux 2 to Amazon Linux 2023, see [Migrating from the Amazon Linux 2 Amazon ECS-optimized AMI to the Amazon Linux 2023 Amazon ECS-optimized AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/al2-to-al2023-ami-transition.html).

By default, the deprecation date of all Amazon ECS-optimized AMIs is set to two years after the AMI creation date. You can use the Amazon EC2 `DescribeImages` API to check the deprecation status and date of an AMI. For more information, see [DescribeImages](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeImages.html) in the *Amazon Elastic Compute Cloud API Reference*.
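
For example, the following query returns the deprecation timestamp of an AMI, if one is set. The AMI ID is a placeholder:

```
aws ec2 describe-images \
  --image-ids ami-0123456789abcdef0 \
  --query 'Images[0].DeprecationTime'
```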

Amazon ECS provides the Amazon ECS-optimized AMIs that are preconfigured with the requirements and recommendations to run your container workloads. We recommend that you use the Amazon ECS-optimized Amazon Linux 2023 AMI for your Amazon EC2 instances. Launching your container instances from the most recent Amazon ECS-Optimized AMI ensures that you receive the current security updates and container agent version. For information about how to launch an instance, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).

When you create a cluster using the console, Amazon ECS creates a launch template for your instances with the latest AMI associated with the selected operating system. 

When you use CloudFormation to create a cluster, the SSM parameter is part of the Amazon EC2 launch template for the Auto Scaling group instances. You can configure the template to use a dynamic Systems Manager parameter to determine which Amazon ECS-optimized AMI to deploy. This parameter ensures that each time you deploy the stack, CloudFormation checks whether there is an available update to apply to the EC2 instances. For an example of how to use the Systems Manager parameter, see [Create an Amazon ECS cluster with the Amazon ECS-optimized Amazon Linux 2023 AMI](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-cluster.html#aws-resource-ecs-cluster--examples--Create_an_cluster_with_the_Amazon_Linux_2023_ECS-Optimized-AMI) in the *AWS CloudFormation User Guide*.
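As a sketch of this pattern, a launch template in the stack can resolve the recommended AMI at deploy time through a dynamic SSM reference. The resource name and instance type below are illustrative:

```
Resources:
  EcsLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        # Resolved on each stack deployment, so new AMI releases are picked up.
        ImageId: '{{resolve:ssm:/aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id}}'
        InstanceType: c5.large
```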

If you need to customize the Amazon ECS-optimized AMI, see [Amazon ECS Optimized AMI Build Recipes](https://github.com/aws/amazon-ecs-ami) on GitHub.

The following variants of the Amazon ECS-optimized AMI are available for your Amazon EC2 instances with the Amazon Linux 2023 operating system.


| Operating system | AMI | Description | Storage configuration | 
| --- | --- | --- | --- | 
| Amazon Linux 2023 |  Amazon ECS-optimized Amazon Linux 2023 AMI |  Amazon Linux 2023 is the next generation of Amazon Linux from AWS. In most cases, it is recommended for launching your Amazon EC2 instances for your Amazon ECS workloads. For more information, see [What is Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/what-is-amazon-linux.html) in the *Amazon Linux 2023 User Guide*.  | By default, the Amazon ECS-optimized Amazon Linux 2023 AMI ships with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2023 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2023 (arm64) |  Amazon ECS-optimized Amazon Linux 2023 (arm64) AMI |  Based on Amazon Linux 2023, this AMI is recommended for use when launching your Amazon EC2 instances, which are powered by Arm-based AWS Graviton/Graviton 2/Graviton 3/Graviton 4 Processors, for your Amazon ECS workloads. For more information, see [Specifications for the Amazon EC2 general purpose instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html) in the *Amazon EC2 Instance Types guide*.  | By default, the Amazon ECS-optimized Amazon Linux 2023 AMI ships with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2023 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2023 (Neuron) |  Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI  |  Based on Amazon Linux 2023, this AMI is for Amazon EC2 Inf1, Trn1, or Inf2 instances. It comes pre-configured with AWS Inferentia and AWS Trainium drivers and the AWS Neuron runtime for Docker, which makes running machine learning inference workloads easier on Amazon ECS. For more information, see [Amazon ECS task definitions for AWS Neuron machine learning workloads](ecs-inference.md).  The Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI does not come with the AWS CLI preinstalled.  | By default, the Amazon ECS-optimized Amazon Linux 2023 AMI ships with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2023 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2023 GPU | Amazon ECS-optimized Amazon Linux 2023 GPU AMI |  Based on Amazon Linux 2023, this AMI is recommended for use when launching your Amazon EC2 GPU-based instances for your Amazon ECS workloads. It comes pre-configured with NVIDIA kernel drivers and a Docker GPU runtime, which makes it easier to run workloads that take advantage of GPUs on Amazon ECS. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md).  | By default, the Amazon ECS-optimized Amazon Linux 2023 AMI ships with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2023 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 

The following variants of the Amazon ECS-optimized AMI are available for your Amazon EC2 instances with the Amazon Linux 2 operating system.


| Operating system | AMI | Description | Storage configuration | 
| --- | --- | --- | --- | 
|  **Amazon Linux 2**   |  Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI | Based on Amazon Linux 2, this AMI is for use when launching your Amazon EC2 instances and you want to use Linux kernel 5.10 instead of kernel 4.14 for your Amazon ECS workloads. The Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI does not come with the AWS CLI preinstalled. | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
|  **Amazon Linux 2**  |  Amazon ECS-optimized Amazon Linux 2 AMI | Based on Amazon Linux 2, this AMI is for use when launching your Amazon EC2 instances for your Amazon ECS workloads. The Amazon ECS-optimized Amazon Linux 2 AMI does not come with the AWS CLI preinstalled. | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
|  **Amazon Linux 2 (arm64)**  |  Amazon ECS-optimized Amazon Linux 2 kernel 5.10 (arm64) AMI |  Based on Amazon Linux 2, this AMI is for your Amazon EC2 instances, which are powered by Arm-based AWS Graviton/Graviton 2/Graviton 3/Graviton 4 Processors, and you want to use Linux kernel 5.10 instead of Linux kernel 4.14 for your Amazon ECS workloads. For more information, see [Specifications for Amazon EC2 general purpose instances](https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html) in the *Amazon EC2 Instance Types guide*. The Amazon ECS-optimized Amazon Linux 2 (arm64) AMI does not come with the AWS CLI preinstalled.  | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2 (arm64) | Amazon ECS-optimized Amazon Linux 2 (arm64) AMI |  Based on Amazon Linux 2, this AMI is for use when launching your Amazon EC2 instances, which are powered by Arm-based AWS Graviton/Graviton 2/Graviton 3/Graviton 4 Processors, for your Amazon ECS workloads. The Amazon ECS-optimized Amazon Linux 2 (arm64) AMI does not come with the AWS CLI preinstalled.  | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
|  **Amazon Linux 2 (GPU)**  | Amazon ECS GPU-optimized kernel 5.10 AMI | Based on Amazon Linux 2, this AMI is recommended for use when launching your Amazon EC2 GPU-based instances with Linux kernel 5.10 for your Amazon ECS workloads. It comes pre-configured with NVIDIA kernel drivers and a Docker GPU runtime, which makes it easier to run workloads that take advantage of GPUs on Amazon ECS. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md). | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2 (GPU) | Amazon ECS GPU-optimized AMI | Based on Amazon Linux 2, this AMI is recommended for use when launching your Amazon EC2 GPU-based instances with Linux kernel 4.14 for your Amazon ECS workloads. It comes pre-configured with NVIDIA kernel drivers and a Docker GPU runtime, which makes it easier to run workloads that take advantage of GPUs on Amazon ECS. For more information, see [Amazon ECS task definitions for GPU workloads](ecs-gpu.md). | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2 (Neuron)  | Amazon ECS optimized Amazon Linux 2 (Neuron) kernel 5.10 AMI  | Based on Amazon Linux 2 with Linux kernel 5.10, this AMI is for Amazon EC2 Inf1, Trn1, or Inf2 instances. It comes pre-configured with AWS Inferentia and AWS Trainium drivers and the AWS Neuron runtime for Docker, which makes running machine learning inference workloads easier on Amazon ECS. For more information, see [Amazon ECS task definitions for AWS Neuron machine learning workloads](ecs-inference.md). The Amazon ECS optimized Amazon Linux 2 (Neuron) AMI does not come with the AWS CLI preinstalled. | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 
| Amazon Linux 2 (Neuron)  | Amazon ECS optimized Amazon Linux 2 (Neuron) AMI | Based on Amazon Linux 2, this AMI is for Amazon EC2 Inf1, Trn1 or Inf2 instances. It comes pre-configured with AWS Inferentia and AWS Trainium drivers and the AWS Neuron runtime for Docker which makes running machine learning inference workloads easier on Amazon ECS. For more information, see [Amazon ECS task definitions for AWS Neuron machine learning workloads](ecs-inference.md). The Amazon ECS optimized Amazon Linux 2 (Neuron) AMI does not come with the AWS CLI preinstalled. | By default, the Amazon Linux 2-based Amazon ECS-optimized AMIs (Amazon ECS-optimized Amazon Linux 2 AMI, Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, and Amazon ECS GPU-optimized AMI) ship with a single 30-GiB root volume. You can modify the 30-GiB root volume size at launch time to increase the available storage on your container instance. This storage is used for the operating system and for Docker images and metadata. The default filesystem for the Amazon ECS-optimized Amazon Linux 2 AMI is `xfs`, and Docker uses the `overlay2` storage driver. For more information, see [Use the OverlayFS storage driver](https://docs.docker.com/engine/storage/drivers/overlayfs-driver/) in the Docker documentation. | 

Amazon ECS provides a changelog for the Linux variant of the Amazon ECS-optimized AMI on GitHub. For more information, see [Changelog](https://github.com/aws/amazon-ecs-ami/blob/main/CHANGELOG.md).

The Linux variants of the Amazon ECS-optimized AMI use the Amazon Linux 2 AMI or Amazon Linux 2023 AMI as their base. You can retrieve the AMI name for each variant by querying the Systems Manager Parameter Store API. For more information, see [Retrieving Amazon ECS-optimized Linux AMI metadata](retrieve-ecs-optimized_AMI.md). Release notes for the base operating systems are also available. For more information, see the [Amazon Linux 2 release notes](https://docs.aws.amazon.com/AL2/latest/relnotes/relnotes-al2.html) and the [Amazon Linux 2023 release notes](https://docs.aws.amazon.com/linux/al2023/release-notes/relnotes.html).

The following pages provide additional information about the changes:
+ [Source AMI release](https://github.com/aws/amazon-ecs-ami/releases) notes on GitHub
+ [Docker Engine release notes](https://docs.docker.com/engine/release-notes/) in the Docker documentation
+ [NVIDIA Driver Documentation](https://docs.nvidia.com/datacenter/tesla/index.html) in the NVIDIA documentation
+ [Amazon ECS agent changelog](https://github.com/aws/amazon-ecs-agent/blob/master/CHANGELOG.md) on GitHub

  The source code for the `ecs-init` application and the scripts and configuration for packaging the agent are now part of the agent repository. For older versions of `ecs-init` and packaging, see [Amazon ecs-init changelog](https://github.com/aws/amazon-ecs-init/blob/master/CHANGELOG.md) on GitHub.

## Applying security updates to the Amazon ECS-optimized AMI
<a name="ecs-optimized-AMI-security-changes"></a>

The Amazon ECS-optimized AMIs based on Amazon Linux contain a customized version of cloud-init. Cloud-init is a package that is used to bootstrap Linux images in a cloud computing environment and perform desired actions when launching an instance. By default, all Amazon ECS-optimized AMIs based on Amazon Linux released before June 12, 2024 have all "Critical" and "Important" security updates applied upon instance launch.

Beginning with the June 12, 2024 releases of the Amazon ECS-optimized AMIs based on Amazon Linux 2, the default behavior no longer includes updating packages at launch. Instead, we recommend that you update to a new Amazon ECS-optimized AMI as releases are made available. The Amazon ECS-optimized AMIs are released when there are available security updates or base AMI changes. This ensures that you receive the latest package versions and security updates, and that package versions remain immutable across instance launches. For more information on retrieving the latest Amazon ECS-optimized AMI, see [Retrieving Amazon ECS-optimized Linux AMI metadata](retrieve-ecs-optimized_AMI.md).

We recommend automating your environment to update to new AMIs as they are made available. For information about the available options, see [Amazon ECS enables easier EC2 capacity management, with managed instance draining](https://aws.amazon.com/blogs/containers/amazon-ecs-enables-easier-ec2-capacity-management-with-managed-instance-draining/).

To continue applying "Critical" and "Important" security updates manually on an AMI version, you can run the following command on your Amazon EC2 instance.

```
yum update --security
```

**Warning**  
Updating the `docker` or `containerd` packages stops all running containers on the host, which stops all running Amazon ECS tasks. Plan accordingly to minimize service disruption. 

If you want to re-enable security updates at launch, you can add the following line to the `#cloud-config` section of the cloud-init user data when launching your Amazon EC2 instance. For more information, see [Using cloud-init on Amazon Linux 2](https://docs.aws.amazon.com/linux/al2/ug/amazon-linux-cloud-init.html) in the *Amazon Linux User Guide*.

```
#cloud-config
repo_upgrade: security
```

## Version-locked packages in Amazon ECS-optimized AL2023 GPU AMIs
<a name="ecs-optimized-ami-version-locked-packages"></a>

Certain packages are critical for correct, performant behavior of GPU functionality in Amazon ECS-optimized AL2023 GPU AMIs. These include:
+ NVIDIA drivers (`nvidia*`)
+ Kernel modules (`kmod*`)
+ NVIDIA libraries (`libnvidia*`)
+ Kernel packages (`kernel*`)

**Note**  
This is not an exhaustive list. The complete list of locked packages is available by running `dnf versionlock list`.

These packages are version-locked to ensure stability and prevent unintentional changes that could disrupt GPU workloads. As a result, these packages should generally be modified within the bounds of a managed process that gracefully handles potential issues and maintains GPU functionality.

To prevent unintended modifications, the `dnf versionlock` plugin is used on these packages.

To modify a locked package, remove its version lock first:

```
# unlock a single package
sudo dnf versionlock delete $PACKAGE_NAME

# unlock all packages
sudo dnf versionlock clear
```

**Important**  
When updates to these packages are necessary, customers should consider using the latest AMI version that includes the required updates. If updating existing instances is required, a careful approach involving unlocking, updating, and re-locking packages should be employed, always ensuring GPU functionality is maintained throughout the process.
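The unlock, update, and re-lock flow described above can be scripted. The following is a sketch (the package name is illustrative) with a `DRY_RUN` switch so you can preview the commands before running them on an instance:

```
#!/bin/sh
# Unlock a package, update it, then restore the version lock.
# With DRY_RUN=1, print each command instead of executing it with sudo.
update_locked_pkg() {
  pkg="$1"
  run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else sudo "$@"; fi
  }
  run dnf versionlock delete "$pkg" &&
  run dnf update -y "$pkg" &&
  run dnf versionlock add "$pkg"
}

# Preview the commands without executing them.
DRY_RUN=1 update_locked_pkg kernel
```

After an update like this, verify that GPU functionality is intact (for example, that `nvidia-smi` still reports the expected devices) before returning the instance to service.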

# Retrieving Amazon ECS-optimized Linux AMI metadata
<a name="retrieve-ecs-optimized_AMI"></a>

You can programmatically retrieve the Amazon ECS-optimized AMI metadata. The metadata includes the AMI name, Amazon ECS container agent version, and Amazon ECS runtime version which includes the Docker version. 

When you create a cluster using the console, Amazon ECS creates a launch template for your instances with the latest AMI associated with the selected operating system. 

When you use CloudFormation to create a cluster, the SSM parameter is part of the Amazon EC2 launch template for the Auto Scaling group instances. You can configure the template to use a dynamic Systems Manager parameter to determine which Amazon ECS-optimized AMI to deploy. This parameter ensures that each time you deploy the stack, CloudFormation checks whether there is an available update to apply to the EC2 instances. For an example of how to use the Systems Manager parameter, see [Create an Amazon ECS cluster with the Amazon ECS-optimized Amazon Linux 2023 AMI](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-cluster.html#aws-resource-ecs-cluster--examples--Create_an_cluster_with_the_Amazon_Linux_2023_ECS-Optimized-AMI) in the *AWS CloudFormation User Guide*.

The AMI ID, image name, operating system, container agent version, source image name, and runtime version for each variant of the Amazon ECS-optimized AMIs can be programmatically retrieved by querying the Systems Manager Parameter Store API. For more information about the Systems Manager Parameter Store API, see [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) and [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html).

**Note**  
Your administrative user must have the following IAM permissions to retrieve the Amazon ECS-optimized AMI metadata. These permissions have been added to the `AmazonECS_FullAccess` IAM policy.  
+ `ssm:GetParameters`
+ `ssm:GetParameter`
+ `ssm:GetParametersByPath`
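If you manage permissions with a custom policy instead of `AmazonECS_FullAccess`, a minimal statement might look like the following sketch. The `Resource` scoping shown here is illustrative; you can also use `*`:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "ssm:GetParameter",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:*::parameter/aws/service/ecs/optimized-ami/*"
    }
  ]
}
```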

## Systems Manager Parameter Store parameter format
<a name="ecs-optimized-ami-parameter-format"></a>

The following is the format of the parameter name for each Amazon ECS-optimized AMI variant.

**Linux Amazon ECS-optimized AMIs**
+ Amazon Linux 2023 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2023/<version>
  ```
+ Amazon Linux 2023 (arm64) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2023/arm64/<version>
  ```
+ Amazon Linux 2023 (Neuron) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2023/neuron/<version>
  ```
+ Amazon Linux 2023 (GPU) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2023/gpu/<version>
  ```

+ Amazon Linux 2 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/<version>
  ```
+ Amazon Linux 2 kernel 5.10 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/<version>
  ```
+ Amazon Linux 2 (arm64) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/arm64/<version>
  ```
+ Amazon Linux 2 kernel 5.10 (arm64) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/arm64/<version>
  ```
+ Amazon ECS GPU-optimized kernel 5.10 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/gpu/<version>
  ```
+ Amazon Linux 2 (GPU) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/gpu/<version>
  ```
+ Amazon ECS optimized Amazon Linux 2 (Neuron) kernel 5.10 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/inf/<version>
  ```
+ Amazon Linux 2 (Neuron) AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/inf/<version>
  ```

The following parameter name format retrieves the image ID of the latest recommended Amazon ECS-optimized Amazon Linux 2 AMI by using the sub-parameter `image_id`.

```
/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id
```

The following parameter name format retrieves the metadata of a specific Amazon ECS-optimized AMI version by specifying the AMI name.
+ Amazon ECS-optimized Amazon Linux 2 AMI metadata:

  ```
  /aws/service/ecs/optimized-ami/amazon-linux-2/amzn2-ami-ecs-hvm-2.0.20181112-x86_64-ebs
  ```

**Note**  
Not all versions of the Amazon ECS-optimized Amazon Linux 2 AMI are available for retrieval. Only Amazon ECS-optimized AMI versions `amzn-ami-2017.09.l-amazon-ecs-optimized` (Linux) and later can be retrieved. 

## Examples
<a name="ecs-optimized-ami-parameter-examples"></a>

The following examples show ways in which you can retrieve the metadata for each Amazon ECS-optimized AMI variant.

### Retrieving the metadata of the latest recommended Amazon ECS-optimized AMI
<a name="ecs-optimized-ami-parameter-examples-1"></a>

You can retrieve the latest recommended Amazon ECS-optimized AMI using the following AWS CLI commands.

**Linux Amazon ECS-optimized AMIs**
+ **For the Amazon ECS-optimized Amazon Linux 2023 AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2023 (arm64) AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/arm64/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/neuron/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2023 GPU AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/gpu/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2 AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2 kernel 5.10 (arm64) AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/arm64/recommended --region us-east-1
  ```
+ **For the Amazon ECS-optimized Amazon Linux 2 (arm64) AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/arm64/recommended --region us-east-1
  ```
+ **For the Amazon ECS GPU-optimized kernel 5.10 AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/gpu/recommended --region us-east-1
  ```
+ **For the Amazon ECS GPU-optimized AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/gpu/recommended --region us-east-1
  ```
+ **For the Amazon ECS optimized Amazon Linux 2 (Neuron) kernel 5.10 AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/inf/recommended --region us-east-1
  ```
+ **For the Amazon ECS optimized Amazon Linux 2 (Neuron) AMIs:**

  ```
  aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/inf/recommended --region us-east-1
  ```

### Retrieving the image ID of the latest recommended Amazon ECS-optimized Amazon Linux 2023 AMI
<a name="ecs-optimized-ami-parameter-examples-6"></a>

You can retrieve the image ID of the latest recommended Amazon ECS-optimized Amazon Linux 2023 AMI by using the sub-parameter `image_id`.

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id --region us-east-1
```

To retrieve the `image_id` value only, you can query the specific parameter value; for example:

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2023/recommended/image_id --region us-east-1 --query "Parameters[0].Value"
```

### Retrieving the metadata of a specific Amazon ECS-optimized Amazon Linux 2 AMI version
<a name="ecs-optimized-ami-parameter-examples-2"></a>

Retrieve the metadata of a specific Amazon ECS-optimized Amazon Linux AMI version using the following AWS CLI command. Replace the AMI name with the name of the Amazon ECS-optimized Amazon Linux AMI to retrieve. 

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/amzn2-ami-ecs-hvm-2.0.20200928-x86_64-ebs --region us-east-1
```

### Retrieving the Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI metadata using the Systems Manager GetParametersByPath API
<a name="ecs-optimized-ami-parameter-examples-3"></a>

Retrieve the Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI metadata with the Systems Manager `GetParametersByPath` API by using the following AWS CLI command.

```
aws ssm get-parameters-by-path --path /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/ --region us-east-1
```

### Retrieving the image ID of the latest recommended Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI
<a name="ecs-optimized-ami-parameter-examples-4"></a>

You can retrieve the image ID of the latest recommended Amazon ECS-optimized Amazon Linux 2 kernel 5.10 AMI by using the sub-parameter `image_id`.

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/recommended/image_id --region us-east-1
```

To retrieve the `image_id` value only, you can query the specific parameter value; for example:

```
aws ssm get-parameters --names /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/recommended/image_id --region us-east-1 --query "Parameters[0].Value"
```

### Using the latest recommended Amazon ECS-optimized AMI in a CloudFormation template
<a name="ecs-optimized-ami-parameter-examples-5"></a>

You can reference the latest recommended Amazon ECS-optimized AMI in a CloudFormation template by referencing the Systems Manager Parameter Store parameter name.

**Linux example**

```
Parameters:
  LatestECSOptimizedAMI:
    Description: AMI ID
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ecs/optimized-ami/amazon-linux-2/kernel-5.10/recommended/image_id
```

# Migrating from an Amazon Linux 2 to an Amazon Linux 2023 Amazon ECS-optimized AMI
<a name="al2-to-al2023-ami-transition"></a>

Following the [Amazon Linux](https://aws.amazon.com/amazon-linux-2/faqs) support schedule, Amazon ECS ends standard support for Amazon Linux 2 Amazon ECS-optimized AMIs effective June 30, 2026. After this date, the Amazon ECS agent version is pinned, and new Amazon Linux 2 Amazon ECS-optimized AMIs are published only when the source Amazon Linux 2 AMI is updated. Complete End of Life (EOL) occurs on June 30, 2026, after which no more Amazon ECS-optimized Amazon Linux 2 AMIs are published, even if the source AMI is updated.

Amazon Linux 2023 provides a secure-by-default approach with preconfigured security policies, SELinux in permissive mode, IMDSv2-only mode enabled by default, optimized boot times, and improved package management for enhanced security and performance.

There is a high degree of compatibility between the Amazon Linux 2 and Amazon Linux 2023 Amazon ECS-optimized AMIs, and most customers will experience minimal-to-zero changes in their workloads between the two operating systems.

For more information, see [Comparing Amazon Linux 2 and *Amazon Linux 2023*](https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2.html) in the *Amazon Linux 2023 User Guide* and the [AL2023 FAQs](https://aws.amazon.com/linux/amazon-linux-2023/faqs).

## Compatibility considerations
<a name="al2-to-al2023-ami-transition-compatibility"></a>

### Package management and OS updates
<a name="al2-to-al2023-ami-transition-compatibility-package-management"></a>

Unlike previous versions of Amazon Linux, Amazon ECS-optimized Amazon Linux 2023 AMIs are locked to a specific version of the Amazon Linux repository. This insulates users from inadvertently updating packages that might bring in unwanted or breaking changes. For more information, see [Managing repositories and OS updates in Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/managing-repos-os-updates.html) in the *Amazon Linux 2023 User Guide*.

### Linux kernel versions
<a name="al2-to-al2023-ami-transition-compatibility-kernel"></a>

Amazon Linux 2 AMIs are based on Linux kernels 4.14 and 5.10, while Amazon Linux 2023 uses Linux kernel 6.1 and 6.12. For more information, see [Comparing Amazon Linux 2 and Amazon Linux 2023 kernels](https://docs.aws.amazon.com/linux/al2023/ug/compare-with-al2-kernel.html) in the *Amazon Linux 2023 User Guide*.

### Package availability changes
<a name="al2-to-al2023-ami-transition-compatibility-packages"></a>

The following are notable package changes in Amazon Linux 2023:
+ Some source binary packages in Amazon Linux 2 are no longer available in Amazon Linux 2023. For more information, see [Packages removed from Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/release-notes/removed.html) in the *Amazon Linux 2023 Release Notes*.
+ Amazon Linux 2023 changes how different versions of packages are supported. The `amazon-linux-extras` system used in Amazon Linux 2 does not exist in Amazon Linux 2023; all packages are available in the core repository.
+ Extra Packages for Enterprise Linux (EPEL) are not supported in Amazon Linux 2023. For more information, see [EPEL compatibility in Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/epel.html) in the *Amazon Linux 2023 User Guide*.
+ 32-bit applications are not supported in Amazon Linux 2023. For more information, see [Deprecated features from Amazon Linux 2](https://docs.aws.amazon.com/linux/al2023/ug/deprecated-al2.html#deprecated-32bit-rpms) in the *Amazon Linux 2023 User Guide*.

### Control Groups (cgroups) changes
<a name="al2-to-al2023-ami-transition-compatibility-cgroups"></a>

A Control Group (cgroup) is a Linux kernel feature to hierarchically organize processes and distribute system resources between them. Control Groups are used extensively to implement a container runtime, and by `systemd`.

The Amazon ECS agent, Docker, and containerd all support both cgroupv1 and cgroupv2. cgroupv2 changes how container memory usage is calculated. In cgroupv1 (Amazon Linux 2), container memory utilization as reported by the container runtime typically excludes page cache. In cgroupv2 (Amazon Linux 2023), page cache is included in the reported memory usage. The same workload may report higher memory utilization on Amazon Linux 2023 compared to Amazon Linux 2, even when actual application memory consumption has not changed.

We recommend benchmarking memory usage on Amazon Linux 2023 instances before migrating production workloads, and adjusting task and container memory limits if needed. You can use [Container Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html) to compare memory utilization between Amazon Linux 2 and Amazon Linux 2023.

For further details on cgroupv2, see [Control groups v2 in Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/cgroupv2.html) in the *Amazon Linux 2023 User Guide*.
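As a quick diagnostic before benchmarking, you can confirm which cgroup version an instance is actually running by checking the filesystem type mounted at `/sys/fs/cgroup` (for example, over a Session Manager shell). This is a generic Linux check, not an Amazon ECS-specific tool:

```shell
# Print which cgroup version this host uses.
# cgroup2fs => cgroupv2 (Amazon Linux 2023); otherwise cgroupv1 (Amazon Linux 2).
if [ "$(stat -fc %T /sys/fs/cgroup/)" = "cgroup2fs" ]; then
  echo "cgroupv2"
else
  echo "cgroupv1"
fi
```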

### Instance Metadata Service (IMDS) changes
<a name="al2-to-al2023-ami-transition-compatibility-imds"></a>

Amazon Linux 2023 requires Instance Metadata Service version 2 (IMDSv2) by default. IMDSv2 has several benefits that help improve security posture. It uses a session-oriented authentication method that requires the creation of a secret token in a simple HTTP PUT request to start the session. A session's token can be valid for anywhere between 1 second and 6 hours.
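The IMDSv2 request pattern looks like the following when run on the instance itself; the 21600-second TTL shown here is the six-hour maximum:

```shell
# Request an IMDSv2 session token (valid for up to 6 hours), then use it
# to read instance metadata. Run these commands on the EC2 instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id
```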

For more information on how to transition from IMDSv1 to IMDSv2, see [Transition to using Instance Metadata Service Version 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-metadata-transition-to-version-2.html) in the *Amazon EC2 User Guide*.

If you would like to use IMDSv1, you can still do so by manually overriding the settings using instance metadata option launch properties.
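For example, the following command relaxes the token requirement on an existing instance so that IMDSv1 calls are also accepted; the instance ID shown is a placeholder:

```shell
# Allow IMDSv1 by making session tokens optional (placeholder instance ID).
aws ec2 modify-instance-metadata-options \
    --instance-id i-1234567890abcdef0 \
    --http-tokens optional \
    --http-endpoint enabled
```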

### Memory swappiness changes
<a name="al2-to-al2023-ami-transition-compatibility-memory-swappiness"></a>

Per-container memory swappiness is not supported on Amazon Linux 2023 and cgroups v2. For more information, see [Managing container swap memory space on Amazon ECS](container-swap.md).

### FIPS validation changes
<a name="al2-to-al2023-ami-transition-compatibility-fips"></a>

Amazon Linux 2 is certified under FIPS 140-2 and Amazon Linux 2023 is certified under FIPS 140-3.

To enable FIPS mode on Amazon Linux 2023, install the necessary packages on your Amazon EC2 instance and follow the configuration steps using the instructions in [Enable FIPS Mode on Amazon Linux 2023](https://docs.aws.amazon.com/linux/al2023/ug/fips-mode.html) in the *Amazon Linux 2023 User Guide*.

### Accelerated instance support
<a name="al2-to-al2023-ami-transition-compatibility-accelerated"></a>

The Amazon ECS-optimized Amazon Linux 2023 AMIs support both Neuron and GPU accelerated instance types. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).

## Building custom AMIs
<a name="al2-to-al2023-ami-transition-custom-ami"></a>

While we recommend moving to officially supported and published Amazon ECS-optimized AMIs for Amazon Linux 2023, you can continue to build custom Amazon Linux 2 Amazon ECS-optimized AMIs using the open-source build scripts that are used to build the Linux variants of the Amazon ECS-optimized AMI. For more information, see [Amazon ECS-optimized Linux AMI build script](ecs-ami-build-scripts.md).

## Migration strategies
<a name="al2-to-al2023-ami-transition-migration"></a>

We recommend creating and implementing a migration plan that includes thorough application testing. The following sections outline different migration strategies based on how you manage your Amazon ECS infrastructure.

### Migrating with Amazon ECS capacity providers
<a name="al2-to-al2023-ami-transition-migration-capacity-providers"></a>

1. Create a new capacity provider with a new launch template. This should reference an Auto Scaling group with a launch template similar to your existing one, but instead of the Amazon Linux 2 Amazon ECS-optimized AMI, it should specify one of the Amazon Linux 2023 variants. Add this new capacity provider to your existing Amazon ECS cluster.

1. Update your cluster's default capacity provider strategy to include both the existing Amazon Linux 2 capacity provider and the new Amazon Linux 2023 capacity provider. Start with a higher weight on the Amazon Linux 2 provider and a lower weight on the Amazon Linux 2023 provider (for example, Amazon Linux 2: weight 80, Amazon Linux 2023: weight 20). This causes Amazon ECS to begin provisioning Amazon Linux 2023 instances as new tasks are scheduled. Verify that the instances register correctly and that tasks are able to run successfully on the new instances.

1. Gradually adjust the capacity provider weights in your cluster's default strategy, increasing the weight for the Amazon Linux 2023 provider while decreasing the Amazon Linux 2 provider weight over time (for example, 60/40, then 40/60, then 20/80). You can also update individual service capacity provider strategies to prioritize Amazon Linux 2023 instances. Monitor task placement to ensure they're successfully running on Amazon Linux 2023 instances.

1. Optionally drain Amazon Linux 2 container instances to accelerate task migration. If you have sufficient Amazon Linux 2023 replacement capacity, you can manually drain your Amazon Linux 2 container instances through the Amazon ECS console or AWS CLI to speed up the transition of your tasks from Amazon Linux 2 to Amazon Linux 2023. After the migration is complete, remove the Amazon Linux 2 capacity provider from your cluster and delete the associated Auto Scaling group.
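The strategy updates in the steps above can be applied with the `put-cluster-capacity-providers` AWS CLI command; the cluster and capacity provider names shown here are placeholders:

```shell
# Attach both capacity providers and weight new task placement 80/20
# toward Amazon Linux 2 to start (placeholder names).
aws ecs put-cluster-capacity-providers \
    --cluster my-cluster \
    --capacity-providers al2-capacity-provider al2023-capacity-provider \
    --default-capacity-provider-strategy \
        capacityProvider=al2-capacity-provider,weight=80 \
        capacityProvider=al2023-capacity-provider,weight=20
```

Rerun the same command with adjusted weights (for example, 40/60 and then 20/80) as your verification progresses.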

### Migrating with an Amazon EC2 Auto Scaling group
<a name="al2-to-al2023-ami-transition-migration-asg"></a>

1. Create a new Amazon EC2 Auto Scaling group with a new launch template. This should be similar to your existing launch template, but instead of the Amazon Linux 2 Amazon ECS-optimized AMI, it should specify one of the Amazon Linux 2023 variants. This new Auto Scaling group can launch instances to your existing cluster.

1. Scale up the Auto Scaling group so that you begin to have Amazon Linux 2023 instances registering to your cluster. Verify that the instances register correctly and that tasks are able to run successfully on the new instances.

1. After your tasks have been verified to work on Amazon Linux 2023, scale up the Amazon Linux 2023 Auto Scaling group while gradually scaling down the Amazon Linux 2 Auto Scaling group, until you have completely replaced all Amazon Linux 2 instances.

1. If you have sufficient Amazon Linux 2023 replacement capacity, you might want to explicitly drain the container instances to speed up the transition of your tasks from Amazon Linux 2 to Amazon Linux 2023. For more information, see [Draining Amazon ECS container instances](container-instance-draining.md).
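The gradual scale-out and scale-in described in these steps can be driven with the `set-desired-capacity` AWS CLI command; the Auto Scaling group names and counts shown here are placeholders:

```shell
# Scale the Amazon Linux 2023 group up and the Amazon Linux 2 group
# down in matching steps (placeholder group names and counts).
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name al2023-ecs-asg --desired-capacity 6
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name al2-ecs-asg --desired-capacity 2
```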

### Migrating with manually managed instances
<a name="al2-to-al2023-ami-transition-migration-manual"></a>

1. Manually launch (or adjust scripts that launch) new Amazon EC2 instances using the Amazon ECS-optimized Amazon Linux 2023 AMI instead of Amazon Linux 2. Ensure these instances use the same security groups, subnets, IAM roles, and cluster configuration as your existing Amazon Linux 2 instances. The instances should automatically register to your existing Amazon ECS cluster upon launch.

1. Verify the new Amazon Linux 2023 instances are successfully registering to your Amazon ECS cluster and are in an `ACTIVE` state. Test that tasks can be scheduled and run properly on these new instances by either waiting for natural task placement or manually stopping/starting some tasks to trigger rescheduling.

1. Gradually replace your Amazon Linux 2 instances by launching additional Amazon Linux 2023 instances as needed, then manually draining and terminating the Amazon Linux 2 instances one by one. You can drain instances through the Amazon ECS console by setting the instance to `DRAINING` status, which will stop placing new tasks on it and allow existing tasks to finish or be rescheduled elsewhere.
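Setting an instance to `DRAINING` can also be done from the AWS CLI. A sketch, with a placeholder cluster name and container instance ARN:

```shell
# Set an Amazon Linux 2 container instance to DRAINING so that Amazon ECS
# stops placing new tasks on it and reschedules service tasks elsewhere.
aws ecs update-container-instances-state \
    --cluster my-cluster \
    --container-instances arn:aws:ecs:us-east-1:111122223333:container-instance/my-cluster/abcd1234 \
    --status DRAINING
```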

# Amazon ECS-optimized Linux AMI build script
<a name="ecs-ami-build-scripts"></a>

Amazon ECS has open-sourced the build scripts that are used to build the Linux variants of the Amazon ECS-optimized AMI. These build scripts are now available on GitHub. For more information, see [amazon-ecs-ami](https://github.com/aws/amazon-ecs-ami) on GitHub.

If you need to customize the Amazon ECS-optimized AMI, see [Amazon ECS Optimized AMI Build Recipes](https://github.com/aws/amazon-ecs-ami) on GitHub.

The build scripts repository includes a [HashiCorp Packer](https://developer.hashicorp.com/packer/docs) template and build scripts to generate each of the Linux variants of the Amazon ECS-optimized AMI. These scripts are the source of truth for Amazon ECS-optimized AMI builds, so you can follow the GitHub repository to monitor changes to our AMIs. For example, perhaps you want your own AMI to use the same version of Docker that the Amazon ECS team uses for the official AMI.


**To build an Amazon ECS-optimized Linux AMI**

1. Clone the `aws/amazon-ecs-ami` GitHub repo.

   ```
   git clone https://github.com/aws/amazon-ecs-ami.git
   ```

1. Add an environment variable for the AWS Region to use when creating the AMI. Replace the `us-west-2` value with the Region to use.

   ```
   export REGION=us-west-2
   ```

1. A Makefile is provided to build the AMI. From the root directory of the cloned repository, use one of the following commands, corresponding to the Linux variant of the Amazon ECS-optimized AMI you want to build.
   + Amazon ECS-optimized Amazon Linux 2 AMI

     ```
     make al2
     ```
   + Amazon ECS-optimized Amazon Linux 2 (arm64) AMI

     ```
     make al2arm
     ```
   + Amazon ECS GPU-optimized AMI

     ```
     make al2gpu
     ```
   + Amazon ECS-optimized Amazon Linux 2 (Neuron) AMI

     ```
     make al2inf
     ```
   + Amazon ECS-optimized Amazon Linux 2023 AMI

     ```
     make al2023
     ```
   + Amazon ECS-optimized Amazon Linux 2023 (arm64) AMI

     ```
     make al2023arm
     ```
   + Amazon ECS-optimized Amazon Linux 2023 GPU AMI

     ```
     make al2023gpu
     ```
   + Amazon ECS-optimized Amazon Linux 2023 (Neuron) AMI

     ```
     make al2023neu
     ```

# Amazon ECS-optimized Bottlerocket AMIs
<a name="ecs-bottlerocket"></a>

Bottlerocket is a Linux-based open-source operating system that is purpose-built by AWS for running containers on virtual machines or bare metal hosts. The Amazon ECS-optimized Bottlerocket AMI is secure and includes only the minimum set of packages required to run containers. This improves resource usage, reduces the security attack surface, and helps lower management overhead. The Bottlerocket AMI is also integrated with Amazon ECS to help reduce the operational overhead involved in updating container instances in a cluster.

Bottlerocket differs from Amazon Linux in the following ways:
+ Bottlerocket doesn't include a package manager, and its software can only be run as containers. Updates to Bottlerocket are applied, and can be rolled back, in a single step, which reduces the likelihood of update errors.
+ The primary mechanism to manage Bottlerocket hosts is with a container scheduler. Unlike Amazon Linux, logging into individual Bottlerocket instances is intended to be an infrequent operation for advanced debugging and troubleshooting purposes only.

For more information about Bottlerocket, see the [documentation](https://github.com/bottlerocket-os/bottlerocket/blob/develop/README.md) and [releases](https://github.com/bottlerocket-os/bottlerocket/releases) on GitHub.

There are variants of the Amazon ECS-optimized Bottlerocket AMI for kernel 6.1 and kernel 5.10.

The following variants use kernel 6.1:
+ `aws-ecs-2`
+ `aws-ecs-2-nvidia`

The following variants use kernel 5.10:
+ `aws-ecs-1`
+ `aws-ecs-1-nvidia`

  For more information about the `aws-ecs-1-nvidia` variant, see [Announcing NVIDIA GPU support for Bottlerocket on Amazon ECS](https://aws.amazon.com/blogs/containers/announcing-nvidia-gpu-support-for-bottlerocket-on-amazon-ecs/).

## Considerations
<a name="ecs-bottlerocket-considerations"></a>

Consider the following when using a Bottlerocket AMI with Amazon ECS.
+ Bottlerocket supports Amazon EC2 instances with `x86_64` and `arm64` processors. The Bottlerocket AMI isn't recommended for use with Amazon EC2 instances with an Inferentia chip.
+ Bottlerocket images don't include an SSH server or a shell. However, you can use out-of-band management tools to gain SSH administrator access and perform bootstrapping. 

   For more information, see these sections in the [Bottlerocket README.md](https://github.com/bottlerocket-os/bottlerocket) on GitHub:
  + [Exploration](https://github.com/bottlerocket-os/bottlerocket#exploration)
  + [Admin container](https://github.com/bottlerocket-os/bottlerocket#admin-container)
+ By default, Bottlerocket has a [control container](https://github.com/bottlerocket-os/bottlerocket-control-container) that's enabled. This container runs the [AWS Systems Manager agent](https://github.com/aws/amazon-ssm-agent) that you can use to run commands or start shell sessions on Amazon EC2 Bottlerocket instances. For more information, see [Setting up Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) in the *AWS Systems Manager User Guide*.
+ Bottlerocket is optimized for container workloads and has a focus on security. Bottlerocket doesn't include a package manager and is immutable. 

  For information about the security features and guidance, see [Security Features](https://github.com/bottlerocket-os/bottlerocket/blob/develop/SECURITY_FEATURES.md) and [Security Guidance](https://github.com/bottlerocket-os/bottlerocket/blob/develop/SECURITY_GUIDANCE.md) on GitHub.
+ The `awsvpc` network mode is supported for Bottlerocket AMI version `1.1.0` or later.
+ App Mesh in a task definition is supported for Bottlerocket AMI version `1.15.0` or later.
+ The `initProcessEnabled` task definition parameter is supported for Bottlerocket AMI version `1.19.0` or later.
+ The Bottlerocket AMIs also don't support the following services and features:
  + ECS Anywhere
  + Service Connect
  + Amazon EFS in encrypted mode
  + Amazon EFS in `awsvpc` network mode
  + Mounting Amazon EBS volumes
  + Elastic Inference Accelerator

# Retrieving Amazon ECS-optimized Bottlerocket AMI metadata
<a name="ecs-bottlerocket-retrieve-ami"></a>

You can retrieve the Amazon Machine Image (AMI) ID for Amazon ECS-optimized AMIs by querying the AWS Systems Manager Parameter Store API. By using this parameter, you don't need to manually look up Amazon ECS-optimized AMI IDs. For more information about the Systems Manager Parameter Store API, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html). The IAM principal that you use must have the `ssm:GetParameter` permission to retrieve the Amazon ECS-optimized AMI metadata.
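All of the Bottlerocket parameter names in the sections that follow share one path layout, so a small helper can compose the name before you call `get-parameter`. This helper function is illustrative, and the variant, architecture, and version values are examples:

```shell
# Compose the SSM Parameter Store name for a Bottlerocket ECS variant.
bottlerocket_param() {
  # $1 = variant (e.g. aws-ecs-2), $2 = arch (x86_64 | arm64), $3 = version
  echo "/aws/service/bottlerocket/$1/$2/$3/image_id"
}

bottlerocket_param aws-ecs-2 x86_64 latest
# -> /aws/service/bottlerocket/aws-ecs-2/x86_64/latest/image_id
```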

## `aws-ecs-2` Bottlerocket AMI variant
<a name="ecs-bottlerocket-aws-ecs-2-variant"></a>

You can retrieve the latest stable `aws-ecs-2` Bottlerocket AMI variant by AWS Region and architecture with the AWS CLI or the AWS Management Console. 
+ **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon ECS-optimized Bottlerocket AMI with the following AWS CLI command by using the subparameter `image_id`. Replace the `region` with the Region code that you want the AMI ID for. 

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub. To retrieve a version other than the latest, replace `latest` with the version number.
  + For the 64-bit (`x86_64`) architecture:

    ```
    aws ssm get-parameter --region us-east-2 --name "/aws/service/bottlerocket/aws-ecs-2/x86_64/latest/image_id" --query Parameter.Value --output text
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    aws ssm get-parameter --region us-east-2 --name "/aws/service/bottlerocket/aws-ecs-2/arm64/latest/image_id" --query Parameter.Value --output text
    ```
+ **AWS Management Console** – You can query for the recommended Amazon ECS-optimized AMI ID using a URL in the AWS Management Console. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following URL, replace `region` with the Region code that you want the AMI ID for. 

   For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub.
  + For the 64-bit (`x86_64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-2/x86_64/latest/image_id/description?region=region#
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-2/arm64/latest/image_id/description?region=region#
    ```

## `aws-ecs-2-nvidia` Bottlerocket AMI variant
<a name="ecs-bottlerocket-aws-ecs-2-nvidia-variant"></a>

You can retrieve the latest stable `aws-ecs-2-nvidia` Bottlerocket AMI variant by AWS Region and architecture with the AWS CLI or the AWS Management Console.
+ **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon ECS-optimized Bottlerocket AMI with the following AWS CLI command by using the subparameter `image_id`. Replace the `region` with the Region code that you want the AMI ID for. 

   For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub. To retrieve a version other than the latest, replace `latest` with the version number.
  + For the 64-bit (`x86_64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-2-nvidia/x86_64/latest/image_id" --query Parameter.Value --output text
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-2-nvidia/arm64/latest/image_id" --query Parameter.Value --output text
    ```
+ **AWS Management Console** – You can query for the recommended Amazon ECS optimized AMI ID using a URL in the AWS Management Console. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following URL, replace `region` with the Region code that you want the AMI ID for. 

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub.
  + For the 64-bit (`x86_64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-2-nvidia/x86_64/latest/image_id/description?region=region#
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-2-nvidia/arm64/latest/image_id/description?region=region#
    ```

## `aws-ecs-1` Bottlerocket AMI variant
<a name="ecs-bottlerocket-aws-ecs-1-variant"></a>

You can retrieve the latest stable `aws-ecs-1` Bottlerocket AMI variant by AWS Region and architecture with the AWS CLI or the AWS Management Console. 
+ **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon ECS-optimized Bottlerocket AMI with the following AWS CLI command by using the subparameter `image_id`. Replace the `region` with the Region code that you want the AMI ID for. 

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub. To retrieve a version other than the latest, replace `latest` with the version number.
  + For the 64-bit (`x86_64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-1/x86_64/latest/image_id" --query Parameter.Value --output text
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-1/arm64/latest/image_id" --query Parameter.Value --output text
    ```
+ **AWS Management Console** – You can query for the recommended Amazon ECS-optimized AMI ID using a URL in the AWS Management Console. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following URL, replace `region` with the Region code that you want the AMI ID for.

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub.
  + For the 64-bit (`x86_64`) architecture:

    ```
    https://region.console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-1/x86_64/latest/image_id/description
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    https://region.console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-1/arm64/latest/image_id/description
    ```

## `aws-ecs-1-nvidia` Bottlerocket AMI variant
<a name="ecs-bottlerocket-aws-ecs-1-nvidia-variants"></a>

You can retrieve the latest stable `aws-ecs-1-nvidia` Bottlerocket AMI variant by AWS Region and architecture with the AWS CLI or the AWS Management Console.
+ **AWS CLI** – You can retrieve the image ID of the latest recommended Amazon ECS-optimized Bottlerocket AMI with the following AWS CLI command by using the subparameter `image_id`. Replace the `region` with the Region code that you want the AMI ID for. 

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub.
  + For the 64-bit (`x86_64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-1-nvidia/x86_64/latest/image_id" --query Parameter.Value --output text
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    aws ssm get-parameter --region us-east-1 --name "/aws/service/bottlerocket/aws-ecs-1-nvidia/arm64/latest/image_id" --query Parameter.Value --output text
    ```
+ **AWS Management Console** – You can query for the recommended Amazon ECS optimized AMI ID using a URL in the AWS Management Console. The URL opens the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following URL, replace `region` with the Region code that you want the AMI ID for. 

  For information about the supported AWS Regions, see [Finding an AMI](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md#finding-an-ami) on GitHub.
  + For the 64-bit (`x86_64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-1-nvidia/x86_64/latest/image_id/description?region=region#
    ```
  + For the 64-bit Arm (`arm64`) architecture:

    ```
    https://console.aws.amazon.com/systems-manager/parameters/aws/service/bottlerocket/aws-ecs-1-nvidia/arm64/latest/image_id/description?region=region#
    ```

## Next steps
<a name="bottlerocket-next-steps"></a>

For a detailed tutorial on how to get started with the Bottlerocket operating system on Amazon ECS, see [Using a Bottlerocket AMI with Amazon ECS](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md) on GitHub and [Getting started with Bottlerocket and Amazon ECS](https://aws.amazon.com/blogs/containers/getting-started-with-bottlerocket-and-amazon-ecs/) on the AWS blog.

For information about how to launch a Bottlerocket instance, see [Launching a Bottlerocket instance for Amazon ECS](bottlerocket-launch.md).

# Launching a Bottlerocket instance for Amazon ECS
<a name="bottlerocket-launch"></a>

You can launch a Bottlerocket instance so that you can run your container workloads.

You can use the AWS CLI to launch the Bottlerocket instance.

1. Create a file that's called `userdata.toml`. This file is used for the instance user data. Replace *cluster-name* with the name of your cluster.

   ```
   [settings.ecs]
   cluster = "cluster-name"
   ```

1. Use one of the commands that are included in [Retrieving Amazon ECS-optimized Bottlerocket AMI metadata](ecs-bottlerocket-retrieve-ami.md) to get the Bottlerocket AMI ID. You use this in the following step.

1. Run the following command to launch the Bottlerocket instance. Remember to replace the following parameters:
   + Replace *subnet* with the ID of the private or public subnet that your instance will launch in.
   + Replace *bottlerocket_ami* with the AMI ID from the previous step.
   + Replace *t3.large* with the instance type that you want to use.
   + Replace *region* with the Region code.

   ```
   aws ec2 run-instances --key-name ecs-bottlerocket-example \
      --subnet-id subnet \
      --image-id bottlerocket_ami \
      --instance-type t3.large \
      --region region \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=bottlerocket,Value=example}]' \
      --user-data file://userdata.toml \
      --iam-instance-profile Name=ecsInstanceRole
   ```

1. Run the following command to verify that the container instance is registered to the cluster. When you run this command, remember to replace the following parameters:
   + Replace *cluster-name* with the name of your cluster.
   + Replace *region* with your Region code.

   ```
   aws ecs list-container-instances --cluster cluster-name --region region
   ```

For a detailed walkthrough of how to get started with the Bottlerocket operating system on Amazon ECS, see [Using a Bottlerocket AMI with Amazon ECS](https://github.com/bottlerocket-os/bottlerocket/blob/develop/QUICKSTART-ECS.md) on GitHub and [Getting started with Bottlerocket and Amazon ECS](https://aws.amazon.com/blogs/containers/getting-started-with-bottlerocket-and-amazon-ecs/) on the AWS blog.

# Amazon ECS Linux container instance management
<a name="manage-linux"></a>

When you use EC2 instances for your Amazon ECS workloads, you are responsible for maintaining the instances.

**Topics**
+ [Launching a container instance](launch_container_instance.md)
+ [Bootstrapping Linux container instances](bootstrap_container_instance.md)
+ [Configuring container instances to receive Spot Instance notices](spot-instance-draining-linux-container.md)
+ [Running a script when you launch a container instance](start_task_at_launch.md)
+ [Increasing Amazon ECS Linux container instance network interfaces](container-instance-eni.md)
+ [Reserving container instance memory](memory-management.md)
+ [Manage container instances remotely](ec2-run-command.md)
+ [Using an HTTP proxy for Linux container instances](http_proxy_config.md)
+ [Configuring pre-initialized instances for your Auto Scaling group](using-warm-pool.md)
+ [Updating the Amazon ECS container agent](ecs-agent-update.md)

Each Amazon ECS container agent version supports a different feature set and provides bug fixes from previous versions. When possible, we recommend that you use the latest version of the Amazon ECS container agent. To update your container agent to the latest version, see [Updating the Amazon ECS container agent](ecs-agent-update.md).

To see which features and enhancements are included with each agent release, see [https://github.com/aws/amazon-ecs-agent/releases](https://github.com/aws/amazon-ecs-agent/releases).

**Important**  
The minimum Docker version for reliable metrics is Docker version `v20.10.13` and newer, which is included in Amazon ECS-optimized AMI `20220607` and newer.  
Amazon ECS agent versions `1.20.0` and newer have deprecated support for Docker versions older than `18.01.0`.

# Launching an Amazon ECS Linux container instance
<a name="launch_container_instance"></a>

You can create Amazon ECS container instances using the Amazon EC2 console. 

You can launch an instance by various methods including the Amazon EC2 console, AWS CLI, and SDK. For information about the other methods for launching an instance, see [Launch your instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/LaunchingAndUsingInstances.html) in the *Amazon EC2 User Guide*.

For more information about the launch wizard, see [Launch an instance using the new launch instance wizard](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*. 

Before you begin, complete the steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).

You can use the new Amazon EC2 wizard to launch an instance. The launch instance wizard specifies the launch parameters that are required for launching an instance. 

**Topics**
+ [Procedure](#linux-liw-initiate-instance-launch)
+ [Name and tags](#linux-liw-name-and-tags)
+ [Application and OS Images (Amazon Machine Image)](#linux-liw-ami)
+ [Instance type](#linux-liw-instance-type)
+ [Key pair (login)](#linux-liw-key-pair)
+ [Network settings](#linux-liw-network-settings)
+ [Configure storage](#linux-liw-storage)
+ [Advanced details](#linux-liw-advanced-details)

## Procedure
<a name="linux-liw-initiate-instance-launch"></a>

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation bar at the top of the screen, the current AWS Region is displayed (for example, US East (Ohio)). Select a Region in which to launch the instance. 

1. From the Amazon EC2 console dashboard, choose **Launch instance**.

## Name and tags
<a name="linux-liw-name-and-tags"></a>

The instance name is a tag, where the key is **Name**, and the value is the name that you specify. You can tag the instance, the volumes, and elastic graphics. For Spot Instances, you can tag the Spot Instance request only. 

Specifying an instance name and additional tags is optional.
+ For **Name**, enter a descriptive name for the instance. If you don't specify a name, the instance can be identified by its ID, which is automatically generated when you launch the instance.
+ To add additional tags, choose **Add additional tags**. Choose **Add tag**, and then enter a key and value, and select the resource type to tag. Choose **Add tag** again for each additional tag to add.

## Application and OS Images (Amazon Machine Image)
<a name="linux-liw-ami"></a>

An Amazon Machine Image (AMI) contains the information required to create an instance. For example, an AMI might contain the software that's required to act as a web server, such as Apache, and your website.

Use the **Search** bar to find a suitable Amazon ECS-optimized AMI published by AWS.

1. Enter one of the following terms in the **Search** bar.
   + **ami-ecs**
   + The **Value** of an Amazon ECS-optimized AMI.

     For the latest Amazon ECS-optimized AMIs and their values, see [Linux Amazon ECS-optimized AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html#ecs-optimized-ami-linux).

1. Press **Enter**.

1. On the **Choose an Amazon Machine Image (AMI)** page, select the **AWS Marketplace AMIs** tab.

1. From the left **Refine results** pane, select **Amazon Web Services** as the **Publisher**.

1. Choose **Select** on the row of the AMI that you want to use.

   Alternatively, choose **Cancel** (at top right) to return to the launch instance wizard without choosing an AMI. A default AMI will be selected. Ensure that the AMI meets the requirements outlined in [Amazon ECS-optimized Linux AMIs](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html).

## Instance type
<a name="linux-liw-instance-type"></a>

The instance type defines the hardware configuration and size of the instance. Larger instance types have more CPU and memory. For more information, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide*. If you want to run an IPv6-only workload, certain instance types don't support IPv6 addresses. For more information, see [IPv6 addresses](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#ipv6-addressing) in the *Amazon EC2 User Guide*.
+ For **Instance type**, select the instance type for the instance. 

   The instance type that you select determines the resources available for your tasks to run on.

## Key pair (login)
<a name="linux-liw-key-pair"></a>

For **Key pair name**, choose an existing key pair, or choose **Create new key pair** to create a new one. 

**Important**  
If you choose the **Proceed without key pair (Not recommended)** option, you won't be able to connect to the instance unless you choose an AMI that is configured to allow users another way to log in.

## Network settings
<a name="linux-liw-network-settings"></a>

Configure the network settings as necessary after choosing the **Edit** button for the **Network settings** section of the form.
+ For **VPC**, choose the VPC that you want to launch the instance into. To run an IPv6-only workload, choose a dual stack VPC that includes both an IPv4 and an IPv6 CIDR block.
+ For **Subnet**, choose the subnet to launch the instance in. You can launch an instance in a subnet associated with an Availability Zone, Local Zone, Wavelength Zone, or Outpost.

  To launch the instance in an Availability Zone, select the subnet in which to launch your instance. To create a new subnet, choose **Create new subnet** to go to the Amazon VPC console. When you are done, return to the launch instance wizard and choose the Refresh icon to load your subnet in the list.

  To launch the instance in a Local Zone, select a subnet that you created in the Local Zone. 

  To launch an instance in an Outpost, select a subnet in a VPC that you associated with the Outpost.

  To run an IPv6-only workload, choose a subnet that includes only an IPv6 CIDR block.
+ **Auto-assign Public IP**: If your instance should be accessible from the internet, verify that the **Auto-assign Public IP** field is set to **Enable**. If not, set this field to **Disable**.
**Note**  
Container instances need access to communicate with the Amazon ECS service endpoint. This can be through an interface VPC endpoint or through your container instances having public IP addresses.  
For more information about interface VPC endpoints, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](vpc-endpoints.md).  
If you do not have an interface VPC endpoint configured and your container instances do not have public IP addresses, then they must use network address translation (NAT) to provide this access. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide* and [Using an HTTP proxy for Amazon ECS Linux container instances](http_proxy_config.md) in this guide. 
+ If you choose a dual stack VPC and an IPv6-only subnet, for **Auto-assign IPv6 IP**, choose **Enable**.
+ **Firewall (security groups)**: Use a security group to define firewall rules for your container instance. These rules specify which incoming network traffic is delivered to your container instance. All other traffic is ignored. 
  + To select an existing security group, choose **Select existing security group**, and select the security group that you created in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).
+ If you are launching the instance for an IPv6-only workload, choose **Advanced network configuration**, and then for **Assign Primary IPv6 IP**, choose **Yes**.
**Note**  
Without a primary IPv6 address, tasks running on the container instance in the host or bridge network modes will fail to register with load balancers or with AWS Cloud Map.

## Configure storage
<a name="linux-liw-storage"></a>

The AMI you selected includes one or more volumes of storage, including the root volume. You can specify additional volumes to attach to the instance.

You can use the **Simple** view.
+ **Storage type**: Configure the storage for your container instance.

  If you are using the Amazon ECS-optimized Amazon Linux 2 AMI, your instance has a single 30 GiB volume configured, which is shared between the operating system and Docker.

  If you are using the Amazon ECS-optimized AMI, your instance has two volumes configured. The **Root** volume is for the operating system's use, and the second Amazon EBS volume (attached to `/dev/xvdcz`) is for Docker's use.

  You can optionally increase or decrease the volume sizes for your instance to meet your application needs.

## Advanced details
<a name="linux-liw-advanced-details"></a>

For **Advanced details**, expand the section to view the fields and specify any additional parameters for the instance.
+ **Purchasing option**: Choose **Request Spot Instances** to request Spot Instances. You also need to set the other fields related to Spot Instances. For more information, see [Spot Instance Requests](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html).
**Note**  
If you are using Spot Instances and see a `Not available` message, you may need to choose a different instance type.
+ **IAM instance profile**: Select your container instance IAM role. This is usually named `ecsInstanceRole`.
**Important**  
If you do not launch your container instance with the proper IAM permissions, your Amazon ECS agent cannot connect to your cluster. For more information, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
+ **User data**: Configure your Amazon ECS container instance with user data, such as the agent environment variables from [Amazon ECS container agent configuration](ecs-agent-config.md). Amazon EC2 user data scripts are executed only one time, when the instance is first launched. The following are common examples of what user data is used for:
  + By default, your container instance launches into your default cluster. To launch into a non-default cluster, choose the **Advanced Details** list. Then, paste the following script into the **User data** field, replacing *your_cluster_name* with the name of your cluster.

    ```
    #!/bin/bash
    echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
    ```
  + If you have an `ecs.config` file in Amazon S3 and have enabled Amazon S3 read-only access to your container instance role, choose the **Advanced Details** list. Then, paste the following script into the **User data** field, replacing *your_bucket_name* with the name of your bucket to install the AWS CLI and write your configuration file at launch time.
**Note**  
For more information about this configuration, see [Storing Amazon ECS container instance configuration in Amazon S3](ecs-config-s3.md).

    ```
    #!/bin/bash
    yum install -y aws-cli
    aws s3 cp s3://your_bucket_name/ecs.config /etc/ecs/ecs.config
    ```
  + Specify tags for your container instance using the `ECS_CONTAINER_INSTANCE_TAGS` configuration parameter. This creates tags that are associated with Amazon ECS only; they cannot be listed using the Amazon EC2 API.
**Important**  
If you launch your container instances using an Amazon EC2 Auto Scaling group, then you should use the `ECS_CONTAINER_INSTANCE_TAGS` agent configuration parameter to add tags. This is due to the way in which tags are added to Amazon EC2 instances that are launched using Auto Scaling groups.

    ```
    #!/bin/bash
    cat <<'EOF' >> /etc/ecs/ecs.config
    ECS_CLUSTER=your_cluster_name
    ECS_CONTAINER_INSTANCE_TAGS={"tag_key": "tag_value"}
    EOF
    ```
  + Specify tags for your container instance and then use the `ECS_CONTAINER_INSTANCE_PROPAGATE_TAGS_FROM` configuration parameter to propagate them from Amazon EC2 to Amazon ECS.

    The following is an example of a user data script that would propagate the tags associated with a container instance, as well as register the container instance with a cluster named `your_cluster_name`:

    ```
    #!/bin/bash
    cat <<'EOF' >> /etc/ecs/ecs.config
    ECS_CLUSTER=your_cluster_name
    ECS_CONTAINER_INSTANCE_PROPAGATE_TAGS_FROM=ec2_instance
    EOF
    ```
  + By default, the Amazon ECS container agent tries to detect whether the container instance supports an IPv6-only configuration by looking at the instance's default IPv4 and IPv6 routes. To override this behavior, you can set the `ECS_INSTANCE_IP_COMPATIBILITY` parameter to `ipv4` or `ipv6` in the instance's `/etc/ecs/ecs.config` file.

    ```
    #!/bin/bash
    cat <<'EOF' >> /etc/ecs/ecs.config
    ECS_CLUSTER=your_cluster_name
    ECS_INSTANCE_IP_COMPATIBILITY=ipv6
    EOF
    ```

  For more information, see [Bootstrapping Amazon ECS Linux container instances to pass data](bootstrap_container_instance.md).
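
If you set JSON-valued variables such as `ECS_CONTAINER_INSTANCE_TAGS` in user data, a syntax error is easy to introduce. As a quick local check before launch, you might validate the value on your workstation; this sketch assumes `python3` is available, and the tag keys are hypothetical:

```shell
# Validate a candidate tag value for ECS_CONTAINER_INSTANCE_TAGS before
# baking it into user data. python3 on the workstation is an assumption.
tags='{"team": "platform", "stack": "prod"}'

if echo "$tags" | python3 -m json.tool > /dev/null 2>&1; then
    result=valid
else
    result=invalid
fi
echo "tag JSON is $result"
```

Catching a malformed value locally is much faster than launching an instance and debugging the agent configuration afterward.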

# Bootstrapping Amazon ECS Linux container instances to pass data
<a name="bootstrap_container_instance"></a>

When you launch an Amazon EC2 instance, you can pass user data to the EC2 instance. The data can be used to perform common automated configuration tasks and even run scripts when the instance boots. For Amazon ECS, the most common use cases for user data are to pass configuration information to the Docker daemon and the Amazon ECS container agent.

You can pass multiple types of user data to Amazon EC2, including cloud boothooks, shell scripts, and `cloud-init` directives. For more information about these and other format types, see the [Cloud-Init documentation](https://cloudinit.readthedocs.io/en/latest/explanation/format.html). 

To pass the user data when using the Amazon EC2 launch wizard, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).

You can configure the container instance to pass data in the container agent configuration or in the Docker daemon configuration.

## Amazon ECS container agent
<a name="bootstrap_container_agent"></a>

The Linux variants of the Amazon ECS-optimized AMI look for agent configuration data in the `/etc/ecs/ecs.config` file when the container agent starts. You can specify this configuration data at launch with Amazon EC2 user data. For more information about available Amazon ECS container agent configuration variables, see [Amazon ECS container agent configuration](ecs-agent-config.md).

To set only a single agent configuration variable, such as the cluster name, use **echo** to copy the variable to the configuration file:

```
#!/bin/bash
echo "ECS_CLUSTER=MyCluster" >> /etc/ecs/ecs.config
```

If you have multiple variables to write to `/etc/ecs/ecs.config`, use the following `heredoc` format. This format writes everything between the lines beginning with **cat** and `EOF` to the configuration file.

```
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_ENGINE_AUTH_TYPE=docker
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"username":"my_name","password":"my_password","email":"email@example.com"}}
ECS_LOGLEVEL=debug
ECS_WARM_POOLS_CHECK=true
EOF
```
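
Note the single quotes around the `'EOF'` delimiter: they prevent the shell from expanding `$` characters and backticks, so values such as the `ECS_ENGINE_AUTH_DATA` password are written literally. A local sketch of the difference, using a temporary file in place of `/etc/ecs/ecs.config`:

```shell
# Why the heredoc delimiter is quoted: <<'EOF' writes $-signs literally,
# while an unquoted <<EOF would let the shell expand them as variables.
config=$(mktemp)

cat <<'EOF' >> "$config"
ECS_CLUSTER=MyCluster
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"password":"pa$$word"}}
EOF

# With the quoted delimiter, the literal $$ survives intact; an unquoted
# delimiter would have replaced it with the shell's process ID.
written=$(grep -c 'pa\$\$word' "$config")
echo "literal lines: $written"
rm -f "$config"
```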

To set custom instance attributes, set the `ECS_INSTANCE_ATTRIBUTES` environment variable.

```
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_INSTANCE_ATTRIBUTES={"envtype":"prod"}
EOF
```

## Docker daemon
<a name="bootstrap_docker_daemon"></a>

You can specify Docker daemon configuration information with Amazon EC2 user data. For more information about configuration options, see [the Docker daemon documentation](https://docs.docker.com/reference/cli/dockerd/).

**Note**  
AWS doesn't support custom Docker configurations, because they can sometimes conflict with future Amazon ECS changes or features without warning.

In the example below, the custom options are added to the Docker daemon configuration file, `/etc/docker/daemon.json`, which is then written by the user data when the instance is launched. This example turns on debug logging.

```
#!/bin/bash
cat <<EOF >/etc/docker/daemon.json
{"debug": true}
EOF
systemctl restart docker --no-block
```

The example below uses the same approach to disable the docker-proxy in the Docker daemon configuration file.

```
#!/bin/bash
cat <<EOF >/etc/docker/daemon.json
{"userland-proxy": false}
EOF
systemctl restart docker --no-block
```
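
Because a syntax error in `/etc/docker/daemon.json` can prevent the Docker daemon from starting, you might validate the file before restarting Docker. The following is a local sketch, assuming `python3` is available; a temporary file stands in for `/etc/docker/daemon.json`:

```shell
# Validate a candidate daemon.json before restarting Docker; a syntax
# error here can prevent the daemon from starting.
daemon_json=$(mktemp)
cat <<EOF > "$daemon_json"
{"userland-proxy": false, "debug": true}
EOF

if python3 -m json.tool "$daemon_json" > /dev/null 2>&1; then
    result=valid
else
    result=invalid
fi
echo "daemon.json is $result"
rm -f "$daemon_json"
```

In user data you could guard the `systemctl restart docker --no-block` call on a check like this so a bad configuration is never applied.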

# Configuring Amazon ECS Linux container instances to receive Spot Instance notices
<a name="spot-instance-draining-linux-container"></a>

Amazon EC2 terminates, stops, or hibernates your Spot Instance when the Spot price exceeds the maximum price for your request or capacity is no longer available. Amazon EC2 provides a two-minute interruption notice for the terminate and stop actions, but not for the hibernate action. If Amazon ECS Spot Instance draining is turned on for the instance, Amazon ECS receives the Spot Instance interruption notice and places the instance in `DRAINING` status. 

**Important**  
Amazon ECS does not receive a notice from Amazon EC2 when instances are removed by Auto Scaling Capacity Rebalancing. For more information, see [Amazon EC2 Auto Scaling Capacity Rebalancing](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-capacity-rebalancing.html).

When a container instance is set to `DRAINING`, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. Service tasks on the draining container instance that are in the `PENDING` state are stopped immediately. If there are container instances in the cluster that are available, replacement service tasks are started on them.

Spot Instance draining is turned off by default. 

You can turn on Spot Instance draining when you launch an instance. Add the following script into the **User data** field. Replace *MyCluster* with the name of the cluster to register the container instance to.

```
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
EOF
```

For more information, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).

**To turn on Spot Instance draining for an existing container instance**

1. Connect to the Spot Instance over SSH.

1. Edit the `/etc/ecs/ecs.config` file and add the following:

   ```
   ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
   ```

1. Restart the `ecs` service.
   + For the Amazon ECS-optimized Amazon Linux 2 AMI:

     ```
     sudo systemctl restart ecs
     ```

1. (Optional) You can verify that the agent is running and see some information about your new container instance by querying the agent introspection API operation. For more information, see [Amazon ECS container introspection](ecs-agent-introspection.md).

   ```
   curl http://localhost:51678/v1/metadata
   ```
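
To confirm that the setting was written as expected, you can also grep the configuration file. The following local sketch uses a temporary file in place of `/etc/ecs/ecs.config`:

```shell
# Sketch of verifying that Spot Instance draining is enabled in ecs.config
# (a temporary file stands in for /etc/ecs/ecs.config here).
ecs_config=$(mktemp)
cat <<'EOF' >> "$ecs_config"
ECS_CLUSTER=MyCluster
ECS_ENABLE_SPOT_INSTANCE_DRAINING=true
EOF

enabled=$(grep -c '^ECS_ENABLE_SPOT_INSTANCE_DRAINING=true$' "$ecs_config")
echo "draining entries: $enabled"
rm -f "$ecs_config"
```

On a real container instance you would run the `grep` against `/etc/ecs/ecs.config` after restarting the `ecs` service.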

# Running a script when you launch an Amazon ECS Linux container instance
<a name="start_task_at_launch"></a>

You might need to run a specific container on every container instance to deal with operations or security concerns such as monitoring, security, metrics, service discovery, or logging.

To do this, you can configure your container instances to call the **docker run** command with the user data script at launch, or in some init system such as Upstart or **systemd**. While this method works, it has some disadvantages because Amazon ECS has no knowledge of the container and cannot monitor the CPU, memory, ports, or any other resources used. To ensure that Amazon ECS can properly account for all task resources, create a task definition for the container to run on your container instances. Then, use Amazon ECS to place the task at launch time with Amazon EC2 user data.

The Amazon EC2 user data script in the following procedure uses the Amazon ECS introspection API to identify the container instance. Then, it uses the AWS CLI and the **start-task** command to run a specified task on itself during startup. 

**To start a task at container instance launch time**

1. Modify your `ecsInstanceRole` IAM role to add permissions for the `StartTask` API operation. For more information, see [Update permissions for a role](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *AWS Identity and Access Management User Guide*.

1. Launch one or more container instances using the Amazon ECS-optimized Amazon Linux 2 AMI with the following example script in the EC2 **User data** field. Replace *your_cluster_name* with the cluster for the container instance to register into and *my_task_def* with the task definition to run on the instance at launch.

   For more information, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).
**Note**  
The MIME multi-part content below uses a shell script to set configuration values and install packages. It also uses a systemd job to start the task after the **ecs** service is running and the introspection API is available.

   ```
   Content-Type: multipart/mixed; boundary="==BOUNDARY=="
   MIME-Version: 1.0
   
   --==BOUNDARY==
   Content-Type: text/x-shellscript; charset="us-ascii"
   
   #!/bin/bash
   # Specify the cluster that the container instance should register into
   cluster=your_cluster_name
   
   # Write the cluster configuration variable to the ecs.config file
   # (add any other configuration variables here also)
   echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
   
   START_TASK_SCRIPT_FILE="/etc/ecs/ecs-start-task.sh"
   cat <<- 'EOF' > ${START_TASK_SCRIPT_FILE}
   	exec 2>>/var/log/ecs/ecs-start-task.log
   	set -x
   	
   	# Install prerequisite tools
   	yum install -y jq aws-cli
   	
   	# Wait for the ECS service to be responsive
   	until curl -s http://localhost:51678/v1/metadata
   	do
   		sleep 1
   	done
   
   	# Grab the container instance ARN and AWS Region from instance metadata
   	instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
   	cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
   	region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
   
   	# Specify the task definition to run at launch
   	task_definition=my_task_def
   
   	# Run the AWS CLI start-task command to start your task on this container instance
   	aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
   EOF
   
   # Write systemd unit file
   UNIT="ecs-start-task.service"
   cat <<- EOF > /etc/systemd/system/${UNIT}
         [Unit]
         Description=ECS Start Task
         Requires=ecs.service
         After=ecs.service
    
         [Service]
         Restart=on-failure
         RestartSec=30
         ExecStart=/usr/bin/bash ${START_TASK_SCRIPT_FILE}
   
         [Install]
         WantedBy=default.target
   EOF
   
   # Enable our ecs.service dependent service with `--no-block` to prevent systemd deadlock
   # See https://github.com/aws/amazon-ecs-agent/issues/1707
   systemctl enable --now --no-block "${UNIT}"
   --==BOUNDARY==--
   ```

1. Verify that your container instances launch into the correct cluster and that your tasks have started.

   1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

   1. From the navigation bar, choose the Region that your cluster is in.

   1. In the navigation pane, choose **Clusters** and select the cluster that hosts your container instances.

   1. On the **Cluster** page, choose **Tasks**, and then choose your tasks.

      Each container instance you launched should have your task running on it.

      If you do not see your tasks, you can log in to your container instances with SSH and check the `/var/log/ecs/ecs-start-task.log` file for debugging information.
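
You can check the `awk` extractions used in the script above against a sample container instance ARN on your workstation (the account ID and instance ID below are placeholders):

```shell
# The introspection API returns the container instance ARN; the script
# derives the instance ID and Region from it with awk, as shown here.
arn="arn:aws:ecs:us-west-2:111122223333:container-instance/default/1f9da33627d74b7ab05e1e4dc3abcdef"

instance_id=$(echo "$arn" | awk -F/ '{print $NF}')
region=$(echo "$arn" | awk -F: '{print $4}')

echo "instance: $instance_id"
echo "region:   $region"
```

`-F/` splits on slashes so `$NF` is the final path segment, and `-F:` splits on colons so `$4` is the Region field of the ARN.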

# Increasing Amazon ECS Linux container instance network interfaces
<a name="container-instance-eni"></a>

**Note**  
This feature is not available on Fargate.

Each task that uses the `awsvpc` network mode receives its own elastic network interface (ENI), which is attached to the container instance that hosts it. There is a default limit to the number of network interfaces that can be attached to an Amazon EC2 instance, and the primary network interface counts as one. For example, by default a `c5.large` instance may have up to three ENIs attached to it. The primary network interface for the instance counts as one, so you can attach an additional two ENIs to the instance. Because each task using the `awsvpc` network mode requires an ENI, you can typically only run two such tasks on this instance type.

Amazon ECS supports launching container instances with increased ENI density using supported Amazon EC2 instance types. When you use these instance types and turn on the `awsvpcTrunking` account setting, additional ENIs are available on newly launched container instances. This configuration allows you to place more tasks on each container instance. To use the console to turn on the feature, see [Modifying Amazon ECS account settings](ecs-modifying-longer-id-settings.md). To use the AWS CLI to turn on the feature, see [Managing Amazon ECS account settings using the AWS CLI](account-setting-management-cli.md). 

For example, a `c5.large` instance with `awsvpcTrunking` enabled has an increased ENI limit of twelve. The container instance keeps its primary network interface, and Amazon ECS creates and attaches a "trunk" network interface to it. This configuration allows you to launch ten tasks on the container instance instead of two.

The trunk network interface is fully managed by Amazon ECS and is deleted when you either terminate or deregister your container instance from the cluster. For more information, see [Amazon ECS task networking options for EC2](task-networking.md).

## Considerations
<a name="eni-trunking-considerations"></a>

Consider the following when using the ENI trunking feature.
+ Only Linux variants of the Amazon ECS-optimized AMI, or other Amazon Linux variants with version `1.28.1` or later of the container agent and version `1.28.1-2` or later of the ecs-init package, support the increased ENI limits. If you use the latest Linux variant of the Amazon ECS-optimized AMI, these requirements will be met. Windows containers are not supported at this time.
+ Only new Amazon EC2 instances launched after enabling `awsvpcTrunking` receive the increased ENI limits and the trunk network interface. Previously launched instances do not receive these features regardless of the actions taken.
+ Amazon EC2 instances must have resource-based IPv4 DNS requests turned off. To disable this option, clear the **Enable resource-based IPV4 (A record) DNS requests** option when you create a new instance in the Amazon EC2 console. To disable this option using the AWS CLI, use the following command.

  ```
  aws ec2 modify-private-dns-name-options --instance-id i-xxxxxxx --no-enable-resource-name-dns-a-record --no-dry-run
  ```
+ Amazon EC2 instances in shared subnets are not supported. They will fail to register to a cluster if they are used.
+ Your tasks must use the `awsvpc` network mode and the EC2 launch type. Tasks that run on Fargate always receive a dedicated ENI regardless of how many are launched, so this feature is not needed.
+ Your tasks must be launched in the same Amazon VPC as your container instance. Your tasks will fail to start with an attribute error if they are not within the same VPC.
+ When launching a new container instance, the instance transitions to a `REGISTERING` status while the trunk elastic network interface is provisioned for the instance. If the registration fails, the instance transitions to a `REGISTRATION_FAILED` status. You can troubleshoot a failed registration by describing the container instance to view the `statusReason` field which describes the reason for the failure. The container instance then can be manually deregistered or terminated. Once the container instance is successfully deregistered or terminated, Amazon ECS will delete the trunk ENI.
**Note**  
Amazon ECS emits container instance state change events which you can monitor for instances that transition to a `REGISTRATION_FAILED` state. For more information, see [Amazon ECS container instance state change events](ecs_container_instance_events.md).
+ Once the container instance is terminated, the instance transitions to a `DEREGISTERING` status while the trunk elastic network interface is deprovisioned. The instance then transitions to an `INACTIVE` status.
+ If a container instance in a public subnet with the increased ENI limits is stopped and then restarted, the instance loses its public IP address, and the container agent loses its connection.
+ When you enable `awsvpcTrunking`, container instances receive an additional ENI that uses the VPC's default security group, and is managed by Amazon ECS.

  A default VPC comes with a public subnet in each Availability Zone, an internet gateway, and settings to enable DNS resolution. The subnet is a public subnet because the main route table sends the subnet's traffic that is destined for the internet to the internet gateway. You can make a default subnet into a private subnet by removing the route from the destination 0.0.0.0/0 to the internet gateway. However, if you do this, no container instance running in that subnet can access the internet. You can add or delete security group rules to control the traffic into and out of your subnets. For more information, see [Security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html) in the *Amazon VPC User Guide*.

## Prerequisites
<a name="eni-trunking-launching"></a>

Before you launch a container instance with the increased ENI limits, the following prerequisites must be completed.
+ The service-linked role for Amazon ECS must be created. The Amazon ECS service-linked role provides Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you automatically when you create a cluster, or if you create or update a service in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role with the following AWS CLI command.

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Your account or container instance IAM role must enable the `awsvpcTrunking` account setting. We recommend that you create two container instance roles (`ecsInstanceRole`). You can then enable the `awsvpcTrunking` account setting for one role and use that role for tasks that require ENI trunking. For information about the container instance role, see [Amazon ECS container instance IAM role](instance_IAM_role.md).

After the prerequisites are met, you can launch a new container instance using one of the supported Amazon EC2 instance types, and the instance has the increased ENI limits. For a list of supported instance types, see [Supported instances for increased Amazon ECS container network interfaces](eni-trunking-supported-instance-types.md). The container instance must have version `1.28.1` or later of the container agent and version `1.28.1-2` or later of the `ecs-init` package. If you use the latest Linux variant of the Amazon ECS-optimized AMI, these requirements are met. For more information, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).

**Important**  
Amazon EC2 instances must have resource-based IPv4 DNS requests turned off. To turn off this option when you create a new instance using the Amazon EC2 console, clear the **Enable resource-based IPv4 (A record) DNS requests** option. To turn off this option using the AWS CLI, use the following command.  

```
aws ec2 modify-private-dns-name-options --instance-id i-xxxxxxx --no-enable-resource-name-dns-a-record --no-dry-run
```
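To verify the setting on an existing instance, you can query its private DNS name options. This is a sketch; replace the instance ID with your own.

```
# Check whether resource-based IPv4 (A record) DNS requests are enabled.
# EnableResourceNameDnsARecord should be false for ENI trunking.
aws ec2 describe-instances \
    --instance-ids i-xxxxxxx \
    --query "Reservations[].Instances[].PrivateDnsNameOptions"
```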

**To view your container instances with increased ENI limits using the AWS CLI**

Each container instance has a default network interface, referred to as a trunk network interface. Use the following command to list your container instances with increased ENI limits by querying for the `ecs.awsvpc-trunk-id` attribute, which indicates it has a trunk network interface.
+ [list-attributes](https://docs.aws.amazon.com/cli/latest/reference/ecs/list-attributes.html) (AWS CLI)

  ```
  aws ecs list-attributes \
        --target-type container-instance \
        --attribute-name ecs.awsvpc-trunk-id \
        --cluster cluster_name \
        --region us-east-1
  ```
+ [Get-ECSAttributeList](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-ECSAttributeList.html) (AWS Tools for Windows PowerShell)

  ```
  Get-ECSAttributeList -TargetType container-instance -AttributeName ecs.awsvpc-trunk-id -Region us-east-1
  ```

# Supported instances for increased Amazon ECS container network interfaces
<a name="eni-trunking-supported-instance-types"></a>

The following tables show the supported Amazon EC2 instance types and how many tasks that use the `awsvpc` network mode can be launched on each instance type before and after you enable the `awsvpcTrunking` account setting.

**Important**  
Although other instance types in the same instance families are supported, the `a1.metal`, `c5.metal`, `c5a.8xlarge`, `c5ad.8xlarge`, `c5d.metal`, `m5.metal`, `p3dn.24xlarge`, `r5.metal`, `r5.8xlarge`, and `r5d.metal` instance types are not supported.  
The `c5n`, `d3`, `d3en`, `g3`, `g3s`, `g4dn`, `i3`, `i3en`, `inf1`, `m5dn`, `m5n`, `m5zn`, `mac1`, `r5b`, `r5n`, `r5dn`, `u-12tb1`, `u-6tb1`, `u-9tb1`, and `z1d` instance families are not supported.

**Topics**
+ [General purpose](#eni-branch-gp)
+ [Compute optimized](#eni-branch-co)
+ [Memory optimized](#eni-branch-mo)
+ [Storage optimized](#eni-branch-so)
+ [Accelerated computing](#eni-branch-ac)
+ [High performance computing](#eni-branch-hpc)

## General purpose
<a name="eni-branch-gp"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| a1.medium | 1 | 10 | 
| a1.large | 2 | 10 | 
| a1.xlarge | 3 | 20 | 
| a1.2xlarge | 3 | 40 | 
| a1.4xlarge | 7 | 60 | 
| m5.large | 2 | 10 | 
| m5.xlarge | 3 | 20 | 
| m5.2xlarge | 3 | 40 | 
| m5.4xlarge | 7 | 60 | 
| m5.8xlarge | 7 | 60 | 
| m5.12xlarge | 7 | 60 | 
| m5.16xlarge | 14 | 120 | 
| m5.24xlarge | 14 | 120 | 
| m5a.large | 2 | 10 | 
| m5a.xlarge | 3 | 20 | 
| m5a.2xlarge | 3 | 40 | 
| m5a.4xlarge | 7 | 60 | 
| m5a.8xlarge | 7 | 60 | 
| m5a.12xlarge | 7 | 60 | 
| m5a.16xlarge | 14 | 120 | 
| m5a.24xlarge | 14 | 120 | 
| m5ad.large | 2 | 10 | 
| m5ad.xlarge | 3 | 20 | 
| m5ad.2xlarge | 3 | 40 | 
| m5ad.4xlarge | 7 | 60 | 
| m5ad.8xlarge | 7 | 60 | 
| m5ad.12xlarge | 7 | 60 | 
| m5ad.16xlarge | 14 | 120 | 
| m5ad.24xlarge | 14 | 120 | 
| m5d.large | 2 | 10 | 
| m5d.xlarge | 3 | 20 | 
| m5d.2xlarge | 3 | 40 | 
| m5d.4xlarge | 7 | 60 | 
| m5d.8xlarge | 7 | 60 | 
| m5d.12xlarge | 7 | 60 | 
| m5d.16xlarge | 14 | 120 | 
| m5d.24xlarge | 14 | 120 | 
| m5d.metal | 14 | 120 | 
| m6a.large | 2 | 10 | 
| m6a.xlarge | 3 | 20 | 
| m6a.2xlarge | 3 | 40 | 
| m6a.4xlarge | 7 | 60 | 
| m6a.8xlarge | 7 | 90 | 
| m6a.12xlarge | 7 | 120 | 
| m6a.16xlarge | 14 | 120 | 
| m6a.24xlarge | 14 | 120 | 
| m6a.32xlarge | 14 | 120 | 
| m6a.48xlarge | 14 | 120 | 
| m6a.metal | 14 | 120 | 
| m6g.medium | 1 | 4 | 
| m6g.large | 2 | 10 | 
| m6g.xlarge | 3 | 20 | 
| m6g.2xlarge | 3 | 40 | 
| m6g.4xlarge | 7 | 60 | 
| m6g.8xlarge | 7 | 60 | 
| m6g.12xlarge | 7 | 60 | 
| m6g.16xlarge | 14 | 120 | 
| m6g.metal | 14 | 120 | 
| m6gd.medium | 1 | 4 | 
| m6gd.large | 2 | 10 | 
| m6gd.xlarge | 3 | 20 | 
| m6gd.2xlarge | 3 | 40 | 
| m6gd.4xlarge | 7 | 60 | 
| m6gd.8xlarge | 7 | 60 | 
| m6gd.12xlarge | 7 | 60 | 
| m6gd.16xlarge | 14 | 120 | 
| m6gd.metal | 14 | 120 | 
| m6i.large | 2 | 10 | 
| m6i.xlarge | 3 | 20 | 
| m6i.2xlarge | 3 | 40 | 
| m6i.4xlarge | 7 | 60 | 
| m6i.8xlarge | 7 | 90 | 
| m6i.12xlarge | 7 | 120 | 
| m6i.16xlarge | 14 | 120 | 
| m6i.24xlarge | 14 | 120 | 
| m6i.32xlarge | 14 | 120 | 
| m6i.metal | 14 | 120 | 
| m6id.large | 2 | 10 | 
| m6id.xlarge | 3 | 20 | 
| m6id.2xlarge | 3 | 40 | 
| m6id.4xlarge | 7 | 60 | 
| m6id.8xlarge | 7 | 90 | 
| m6id.12xlarge | 7 | 120 | 
| m6id.16xlarge | 14 | 120 | 
| m6id.24xlarge | 14 | 120 | 
| m6id.32xlarge | 14 | 120 | 
| m6id.metal | 14 | 120 | 
| m6idn.large | 2 | 10 | 
| m6idn.xlarge | 3 | 20 | 
| m6idn.2xlarge | 3 | 40 | 
| m6idn.4xlarge | 7 | 60 | 
| m6idn.8xlarge | 7 | 90 | 
| m6idn.12xlarge | 7 | 120 | 
| m6idn.16xlarge | 14 | 120 | 
| m6idn.24xlarge | 14 | 120 | 
| m6idn.32xlarge | 15 | 120 | 
| m6idn.metal | 15 | 120 | 
| m6in.large | 2 | 10 | 
| m6in.xlarge | 3 | 20 | 
| m6in.2xlarge | 3 | 40 | 
| m6in.4xlarge | 7 | 60 | 
| m6in.8xlarge | 7 | 90 | 
| m6in.12xlarge | 7 | 120 | 
| m6in.16xlarge | 14 | 120 | 
| m6in.24xlarge | 14 | 120 | 
| m6in.32xlarge | 15 | 120 | 
| m6in.metal | 15 | 120 | 
| m7a.medium | 1 | 4 | 
| m7a.large | 2 | 10 | 
| m7a.xlarge | 3 | 20 | 
| m7a.2xlarge | 3 | 40 | 
| m7a.4xlarge | 7 | 60 | 
| m7a.8xlarge | 7 | 90 | 
| m7a.12xlarge | 7 | 120 | 
| m7a.16xlarge | 14 | 120 | 
| m7a.24xlarge | 14 | 120 | 
| m7a.32xlarge | 14 | 120 | 
| m7a.48xlarge | 14 | 120 | 
| m7a.metal-48xl | 14 | 120 | 
| m7g.medium | 1 | 4 | 
| m7g.large | 2 | 10 | 
| m7g.xlarge | 3 | 20 | 
| m7g.2xlarge | 3 | 40 | 
| m7g.4xlarge | 7 | 60 | 
| m7g.8xlarge | 7 | 60 | 
| m7g.12xlarge | 7 | 60 | 
| m7g.16xlarge | 14 | 120 | 
| m7g.metal | 14 | 120 | 
| m7gd.medium | 1 | 4 | 
| m7gd.large | 2 | 10 | 
| m7gd.xlarge | 3 | 20 | 
| m7gd.2xlarge | 3 | 40 | 
| m7gd.4xlarge | 7 | 60 | 
| m7gd.8xlarge | 7 | 60 | 
| m7gd.12xlarge | 7 | 60 | 
| m7gd.16xlarge | 14 | 120 | 
| m7gd.metal | 14 | 120 | 
| m7i.large | 2 | 10 | 
| m7i.xlarge | 3 | 20 | 
| m7i.2xlarge | 3 | 40 | 
| m7i.4xlarge | 7 | 60 | 
| m7i.8xlarge | 7 | 90 | 
| m7i.12xlarge | 7 | 120 | 
| m7i.16xlarge | 14 | 120 | 
| m7i.24xlarge | 14 | 120 | 
| m7i.48xlarge | 14 | 120 | 
| m7i.metal-24xl | 14 | 120 | 
| m7i.metal-48xl | 14 | 120 | 
| m7i-flex.large | 2 | 4 | 
| m7i-flex.xlarge | 3 | 10 | 
| m7i-flex.2xlarge | 3 | 20 | 
| m7i-flex.4xlarge | 7 | 40 | 
| m7i-flex.8xlarge | 7 | 60 | 
| m7i-flex.12xlarge | 7 | 120 | 
| m7i-flex.16xlarge | 14 | 120 | 
| m8a.medium | 1 | 4 | 
| m8a.large | 2 | 10 | 
| m8a.xlarge | 3 | 20 | 
| m8a.2xlarge | 3 | 40 | 
| m8a.4xlarge | 7 | 60 | 
| m8a.8xlarge | 9 | 90 | 
| m8a.12xlarge | 11 | 120 | 
| m8a.16xlarge | 15 | 120 | 
| m8a.24xlarge | 15 | 120 | 
| m8a.48xlarge | 23 | 120 | 
| m8a.metal-24xl | 15 | 120 | 
| m8a.metal-48xl | 23 | 120 | 
| m8azn.medium | 2 | 4 | 
| m8azn.large | 3 | 10 | 
| m8azn.xlarge | 3 | 20 | 
| m8azn.3xlarge | 7 | 40 | 
| m8azn.6xlarge | 7 | 60 | 
| m8azn.12xlarge | 15 | 120 | 
| m8azn.24xlarge | 15 | 120 | 
| m8azn.metal-12xl | 15 | 120 | 
| m8azn.metal-24xl | 15 | 120 | 
| m8g.medium | 1 | 4 | 
| m8g.large | 2 | 10 | 
| m8g.xlarge | 3 | 20 | 
| m8g.2xlarge | 3 | 40 | 
| m8g.4xlarge | 7 | 60 | 
| m8g.8xlarge | 7 | 60 | 
| m8g.12xlarge | 7 | 60 | 
| m8g.16xlarge | 14 | 120 | 
| m8g.24xlarge | 14 | 120 | 
| m8g.48xlarge | 14 | 120 | 
| m8g.metal-24xl | 14 | 120 | 
| m8g.metal-48xl | 14 | 120 | 
| m8gb.medium | 1 | 4 | 
| m8gb.large | 2 | 10 | 
| m8gb.xlarge | 3 | 20 | 
| m8gb.2xlarge | 3 | 40 | 
| m8gb.4xlarge | 7 | 60 | 
| m8gb.8xlarge | 9 | 60 | 
| m8gb.12xlarge | 11 | 60 | 
| m8gb.16xlarge | 15 | 120 | 
| m8gb.24xlarge | 23 | 120 | 
| m8gb.48xlarge | 23 | 120 | 
| m8gb.metal-24xl | 23 | 120 | 
| m8gb.metal-48xl | 23 | 120 | 
| m8gd.medium | 1 | 4 | 
| m8gd.large | 2 | 10 | 
| m8gd.xlarge | 3 | 20 | 
| m8gd.2xlarge | 3 | 40 | 
| m8gd.4xlarge | 7 | 60 | 
| m8gd.8xlarge | 7 | 60 | 
| m8gd.12xlarge | 7 | 60 | 
| m8gd.16xlarge | 14 | 120 | 
| m8gd.24xlarge | 14 | 120 | 
| m8gd.48xlarge | 14 | 120 | 
| m8gd.metal-24xl | 14 | 120 | 
| m8gd.metal-48xl | 14 | 120 | 
| m8gn.medium | 1 | 4 | 
| m8gn.large | 2 | 10 | 
| m8gn.xlarge | 3 | 20 | 
| m8gn.2xlarge | 3 | 40 | 
| m8gn.4xlarge | 7 | 60 | 
| m8gn.8xlarge | 9 | 60 | 
| m8gn.12xlarge | 11 | 60 | 
| m8gn.16xlarge | 15 | 120 | 
| m8gn.24xlarge | 23 | 120 | 
| m8gn.48xlarge | 23 | 120 | 
| m8gn.metal-24xl | 23 | 120 | 
| m8gn.metal-48xl | 23 | 120 | 
| m8i.large | 2 | 10 | 
| m8i.xlarge | 3 | 20 | 
| m8i.2xlarge | 3 | 40 | 
| m8i.4xlarge | 7 | 60 | 
| m8i.8xlarge | 9 | 90 | 
| m8i.12xlarge | 11 | 120 | 
| m8i.16xlarge | 15 | 120 | 
| m8i.24xlarge | 15 | 120 | 
| m8i.32xlarge | 23 | 120 | 
| m8i.48xlarge | 23 | 120 | 
| m8i.96xlarge | 23 | 120 | 
| m8i.metal-48xl | 23 | 120 | 
| m8i.metal-96xl | 23 | 120 | 
| m8id.large | 2 | 10 | 
| m8id.xlarge | 3 | 20 | 
| m8id.2xlarge | 3 | 40 | 
| m8id.4xlarge | 7 | 60 | 
| m8id.8xlarge | 9 | 90 | 
| m8id.12xlarge | 11 | 120 | 
| m8id.16xlarge | 15 | 120 | 
| m8id.24xlarge | 15 | 120 | 
| m8id.32xlarge | 23 | 120 | 
| m8id.48xlarge | 23 | 120 | 
| m8id.96xlarge | 23 | 120 | 
| m8id.metal-48xl | 23 | 120 | 
| m8id.metal-96xl | 23 | 120 | 
| m8i-flex.large | 2 | 4 | 
| m8i-flex.xlarge | 3 | 10 | 
| m8i-flex.2xlarge | 3 | 20 | 
| m8i-flex.4xlarge | 7 | 40 | 
| m8i-flex.8xlarge | 9 | 60 | 
| m8i-flex.12xlarge | 11 | 120 | 
| m8i-flex.16xlarge | 15 | 120 | 
| mac2.metal | 7 | 12 | 
| mac2-m1ultra.metal | 7 | 12 | 
| mac2-m2.metal | 7 | 12 | 
| mac2-m2pro.metal | 7 | 12 | 
| mac-m4.metal | 7 | 12 | 
| mac-m4pro.metal | 7 | 12 | 
| mac-m4max.metal | 7 | 12 | 

## Compute optimized
<a name="eni-branch-co"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| c5.large | 2 | 10 | 
| c5.xlarge | 3 | 20 | 
| c5.2xlarge | 3 | 40 | 
| c5.4xlarge | 7 | 60 | 
| c5.9xlarge | 7 | 60 | 
| c5.12xlarge | 7 | 60 | 
| c5.18xlarge | 14 | 120 | 
| c5.24xlarge | 14 | 120 | 
| c5a.large | 2 | 10 | 
| c5a.xlarge | 3 | 20 | 
| c5a.2xlarge | 3 | 40 | 
| c5a.4xlarge | 7 | 60 | 
| c5a.12xlarge | 7 | 60 | 
| c5a.16xlarge | 14 | 120 | 
| c5a.24xlarge | 14 | 120 | 
| c5ad.large | 2 | 10 | 
| c5ad.xlarge | 3 | 20 | 
| c5ad.2xlarge | 3 | 40 | 
| c5ad.4xlarge | 7 | 60 | 
| c5ad.12xlarge | 7 | 60 | 
| c5ad.16xlarge | 14 | 120 | 
| c5ad.24xlarge | 14 | 120 | 
| c5d.large | 2 | 10 | 
| c5d.xlarge | 3 | 20 | 
| c5d.2xlarge | 3 | 40 | 
| c5d.4xlarge | 7 | 60 | 
| c5d.9xlarge | 7 | 60 | 
| c5d.12xlarge | 7 | 60 | 
| c5d.18xlarge | 14 | 120 | 
| c5d.24xlarge | 14 | 120 | 
| c6a.large | 2 | 10 | 
| c6a.xlarge | 3 | 20 | 
| c6a.2xlarge | 3 | 40 | 
| c6a.4xlarge | 7 | 60 | 
| c6a.8xlarge | 7 | 90 | 
| c6a.12xlarge | 7 | 120 | 
| c6a.16xlarge | 14 | 120 | 
| c6a.24xlarge | 14 | 120 | 
| c6a.32xlarge | 14 | 120 | 
| c6a.48xlarge | 14 | 120 | 
| c6a.metal | 14 | 120 | 
| c6g.medium | 1 | 4 | 
| c6g.large | 2 | 10 | 
| c6g.xlarge | 3 | 20 | 
| c6g.2xlarge | 3 | 40 | 
| c6g.4xlarge | 7 | 60 | 
| c6g.8xlarge | 7 | 60 | 
| c6g.12xlarge | 7 | 60 | 
| c6g.16xlarge | 14 | 120 | 
| c6g.metal | 14 | 120 | 
| c6gd.medium | 1 | 4 | 
| c6gd.large | 2 | 10 | 
| c6gd.xlarge | 3 | 20 | 
| c6gd.2xlarge | 3 | 40 | 
| c6gd.4xlarge | 7 | 60 | 
| c6gd.8xlarge | 7 | 60 | 
| c6gd.12xlarge | 7 | 60 | 
| c6gd.16xlarge | 14 | 120 | 
| c6gd.metal | 14 | 120 | 
| c6gn.medium | 1 | 4 | 
| c6gn.large | 2 | 10 | 
| c6gn.xlarge | 3 | 20 | 
| c6gn.2xlarge | 3 | 40 | 
| c6gn.4xlarge | 7 | 60 | 
| c6gn.8xlarge | 7 | 60 | 
| c6gn.12xlarge | 7 | 60 | 
| c6gn.16xlarge | 14 | 120 | 
| c6i.large | 2 | 10 | 
| c6i.xlarge | 3 | 20 | 
| c6i.2xlarge | 3 | 40 | 
| c6i.4xlarge | 7 | 60 | 
| c6i.8xlarge | 7 | 90 | 
| c6i.12xlarge | 7 | 120 | 
| c6i.16xlarge | 14 | 120 | 
| c6i.24xlarge | 14 | 120 | 
| c6i.32xlarge | 14 | 120 | 
| c6i.metal | 14 | 120 | 
| c6id.large | 2 | 10 | 
| c6id.xlarge | 3 | 20 | 
| c6id.2xlarge | 3 | 40 | 
| c6id.4xlarge | 7 | 60 | 
| c6id.8xlarge | 7 | 90 | 
| c6id.12xlarge | 7 | 120 | 
| c6id.16xlarge | 14 | 120 | 
| c6id.24xlarge | 14 | 120 | 
| c6id.32xlarge | 14 | 120 | 
| c6id.metal | 14 | 120 | 
| c6in.large | 2 | 10 | 
| c6in.xlarge | 3 | 20 | 
| c6in.2xlarge | 3 | 40 | 
| c6in.4xlarge | 7 | 60 | 
| c6in.8xlarge | 7 | 90 | 
| c6in.12xlarge | 7 | 120 | 
| c6in.16xlarge | 14 | 120 | 
| c6in.24xlarge | 14 | 120 | 
| c6in.32xlarge | 15 | 120 | 
| c6in.metal | 15 | 120 | 
| c7a.medium | 1 | 4 | 
| c7a.large | 2 | 10 | 
| c7a.xlarge | 3 | 20 | 
| c7a.2xlarge | 3 | 40 | 
| c7a.4xlarge | 7 | 60 | 
| c7a.8xlarge | 7 | 90 | 
| c7a.12xlarge | 7 | 120 | 
| c7a.16xlarge | 14 | 120 | 
| c7a.24xlarge | 14 | 120 | 
| c7a.32xlarge | 14 | 120 | 
| c7a.48xlarge | 14 | 120 | 
| c7a.metal-48xl | 14 | 120 | 
| c7g.medium | 1 | 4 | 
| c7g.large | 2 | 10 | 
| c7g.xlarge | 3 | 20 | 
| c7g.2xlarge | 3 | 40 | 
| c7g.4xlarge | 7 | 60 | 
| c7g.8xlarge | 7 | 60 | 
| c7g.12xlarge | 7 | 60 | 
| c7g.16xlarge | 14 | 120 | 
| c7g.metal | 14 | 120 | 
| c7gd.medium | 1 | 4 | 
| c7gd.large | 2 | 10 | 
| c7gd.xlarge | 3 | 20 | 
| c7gd.2xlarge | 3 | 40 | 
| c7gd.4xlarge | 7 | 60 | 
| c7gd.8xlarge | 7 | 60 | 
| c7gd.12xlarge | 7 | 60 | 
| c7gd.16xlarge | 14 | 120 | 
| c7gd.metal | 14 | 120 | 
| c7gn.medium | 1 | 4 | 
| c7gn.large | 2 | 10 | 
| c7gn.xlarge | 3 | 20 | 
| c7gn.2xlarge | 3 | 40 | 
| c7gn.4xlarge | 7 | 60 | 
| c7gn.8xlarge | 7 | 60 | 
| c7gn.12xlarge | 7 | 60 | 
| c7gn.16xlarge | 14 | 120 | 
| c7gn.metal | 14 | 120 | 
| c7i.large | 2 | 10 | 
| c7i.xlarge | 3 | 20 | 
| c7i.2xlarge | 3 | 40 | 
| c7i.4xlarge | 7 | 60 | 
| c7i.8xlarge | 7 | 90 | 
| c7i.12xlarge | 7 | 120 | 
| c7i.16xlarge | 14 | 120 | 
| c7i.24xlarge | 14 | 120 | 
| c7i.48xlarge | 14 | 120 | 
| c7i.metal-24xl | 14 | 120 | 
| c7i.metal-48xl | 14 | 120 | 
| c7i-flex.large | 2 | 4 | 
| c7i-flex.xlarge | 3 | 10 | 
| c7i-flex.2xlarge | 3 | 20 | 
| c7i-flex.4xlarge | 7 | 40 | 
| c7i-flex.8xlarge | 7 | 60 | 
| c7i-flex.12xlarge | 7 | 120 | 
| c7i-flex.16xlarge | 14 | 120 | 
| c8a.medium | 1 | 4 | 
| c8a.large | 2 | 10 | 
| c8a.xlarge | 3 | 20 | 
| c8a.2xlarge | 3 | 40 | 
| c8a.4xlarge | 7 | 60 | 
| c8a.8xlarge | 9 | 90 | 
| c8a.12xlarge | 11 | 120 | 
| c8a.16xlarge | 15 | 120 | 
| c8a.24xlarge | 15 | 120 | 
| c8a.48xlarge | 23 | 120 | 
| c8a.metal-24xl | 15 | 120 | 
| c8a.metal-48xl | 23 | 120 | 
| c8g.medium | 1 | 4 | 
| c8g.large | 2 | 10 | 
| c8g.xlarge | 3 | 20 | 
| c8g.2xlarge | 3 | 40 | 
| c8g.4xlarge | 7 | 60 | 
| c8g.8xlarge | 7 | 60 | 
| c8g.12xlarge | 7 | 60 | 
| c8g.16xlarge | 14 | 120 | 
| c8g.24xlarge | 14 | 120 | 
| c8g.48xlarge | 14 | 120 | 
| c8g.metal-24xl | 14 | 120 | 
| c8g.metal-48xl | 14 | 120 | 
| c8gb.medium | 1 | 4 | 
| c8gb.large | 2 | 10 | 
| c8gb.xlarge | 3 | 20 | 
| c8gb.2xlarge | 3 | 40 | 
| c8gb.4xlarge | 7 | 60 | 
| c8gb.8xlarge | 9 | 60 | 
| c8gb.12xlarge | 11 | 60 | 
| c8gb.16xlarge | 15 | 120 | 
| c8gb.24xlarge | 23 | 120 | 
| c8gb.48xlarge | 23 | 120 | 
| c8gb.metal-24xl | 23 | 120 | 
| c8gb.metal-48xl | 23 | 120 | 
| c8gd.medium | 1 | 4 | 
| c8gd.large | 2 | 10 | 
| c8gd.xlarge | 3 | 20 | 
| c8gd.2xlarge | 3 | 40 | 
| c8gd.4xlarge | 7 | 60 | 
| c8gd.8xlarge | 7 | 60 | 
| c8gd.12xlarge | 7 | 60 | 
| c8gd.16xlarge | 14 | 120 | 
| c8gd.24xlarge | 14 | 120 | 
| c8gd.48xlarge | 14 | 120 | 
| c8gd.metal-24xl | 14 | 120 | 
| c8gd.metal-48xl | 14 | 120 | 
| c8gn.medium | 1 | 4 | 
| c8gn.large | 2 | 10 | 
| c8gn.xlarge | 3 | 20 | 
| c8gn.2xlarge | 3 | 40 | 
| c8gn.4xlarge | 7 | 60 | 
| c8gn.8xlarge | 9 | 60 | 
| c8gn.12xlarge | 11 | 60 | 
| c8gn.16xlarge | 15 | 120 | 
| c8gn.24xlarge | 23 | 120 | 
| c8gn.48xlarge | 23 | 120 | 
| c8gn.metal-24xl | 23 | 120 | 
| c8gn.metal-48xl | 23 | 120 | 
| c8i.large | 2 | 10 | 
| c8i.xlarge | 3 | 20 | 
| c8i.2xlarge | 3 | 40 | 
| c8i.4xlarge | 7 | 60 | 
| c8i.8xlarge | 9 | 90 | 
| c8i.12xlarge | 11 | 120 | 
| c8i.16xlarge | 15 | 120 | 
| c8i.24xlarge | 15 | 120 | 
| c8i.32xlarge | 23 | 120 | 
| c8i.48xlarge | 23 | 120 | 
| c8i.96xlarge | 23 | 120 | 
| c8i.metal-48xl | 23 | 120 | 
| c8i.metal-96xl | 23 | 120 | 
| c8ib.large | 3 | 10 | 
| c8ib.xlarge | 3 | 20 | 
| c8ib.2xlarge | 3 | 40 | 
| c8ib.4xlarge | 7 | 60 | 
| c8ib.8xlarge | 7 | 90 | 
| c8ib.12xlarge | 11 | 120 | 
| c8ib.16xlarge | 15 | 120 | 
| c8ib.24xlarge | 15 | 120 | 
| c8ib.32xlarge | 15 | 120 | 
| c8ib.48xlarge | 23 | 120 | 
| c8ib.96xlarge | 23 | 120 | 
| c8ib.metal-48xl | 23 | 120 | 
| c8ib.metal-96xl | 23 | 120 | 
| c8id.large | 2 | 10 | 
| c8id.xlarge | 3 | 20 | 
| c8id.2xlarge | 3 | 40 | 
| c8id.4xlarge | 7 | 60 | 
| c8id.8xlarge | 9 | 90 | 
| c8id.12xlarge | 11 | 120 | 
| c8id.16xlarge | 15 | 120 | 
| c8id.24xlarge | 15 | 120 | 
| c8id.32xlarge | 23 | 120 | 
| c8id.48xlarge | 23 | 120 | 
| c8id.96xlarge | 23 | 120 | 
| c8id.metal-48xl | 23 | 120 | 
| c8id.metal-96xl | 23 | 120 | 
| c8in.large | 3 | 10 | 
| c8in.xlarge | 3 | 20 | 
| c8in.2xlarge | 3 | 40 | 
| c8in.4xlarge | 7 | 60 | 
| c8in.8xlarge | 7 | 90 | 
| c8in.12xlarge | 11 | 120 | 
| c8in.16xlarge | 15 | 120 | 
| c8in.24xlarge | 15 | 120 | 
| c8in.32xlarge | 15 | 120 | 
| c8in.48xlarge | 23 | 120 | 
| c8in.96xlarge | 23 | 120 | 
| c8in.metal-48xl | 23 | 120 | 
| c8in.metal-96xl | 23 | 120 | 
| c8i-flex.large | 2 | 4 | 
| c8i-flex.xlarge | 3 | 10 | 
| c8i-flex.2xlarge | 3 | 20 | 
| c8i-flex.4xlarge | 7 | 40 | 
| c8i-flex.8xlarge | 9 | 60 | 
| c8i-flex.12xlarge | 11 | 120 | 
| c8i-flex.16xlarge | 15 | 120 | 

## Memory optimized
<a name="eni-branch-mo"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| r5.large | 2 | 10 | 
| r5.xlarge | 3 | 20 | 
| r5.2xlarge | 3 | 40 | 
| r5.4xlarge | 7 | 60 | 
| r5.12xlarge | 7 | 60 | 
| r5.16xlarge | 14 | 120 | 
| r5.24xlarge | 14 | 120 | 
| r5a.large | 2 | 10 | 
| r5a.xlarge | 3 | 20 | 
| r5a.2xlarge | 3 | 40 | 
| r5a.4xlarge | 7 | 60 | 
| r5a.8xlarge | 7 | 60 | 
| r5a.12xlarge | 7 | 60 | 
| r5a.16xlarge | 14 | 120 | 
| r5a.24xlarge | 14 | 120 | 
| r5ad.large | 2 | 10 | 
| r5ad.xlarge | 3 | 20 | 
| r5ad.2xlarge | 3 | 40 | 
| r5ad.4xlarge | 7 | 60 | 
| r5ad.8xlarge | 7 | 60 | 
| r5ad.12xlarge | 7 | 60 | 
| r5ad.16xlarge | 14 | 120 | 
| r5ad.24xlarge | 14 | 120 | 
| r5b.16xlarge | 14 | 120 | 
| r5d.large | 2 | 10 | 
| r5d.xlarge | 3 | 20 | 
| r5d.2xlarge | 3 | 40 | 
| r5d.4xlarge | 7 | 60 | 
| r5d.8xlarge | 7 | 60 | 
| r5d.12xlarge | 7 | 60 | 
| r5d.16xlarge | 14 | 120 | 
| r5d.24xlarge | 14 | 120 | 
| r5dn.16xlarge | 14 | 120 | 
| r6a.large | 2 | 10 | 
| r6a.xlarge | 3 | 20 | 
| r6a.2xlarge | 3 | 40 | 
| r6a.4xlarge | 7 | 60 | 
| r6a.8xlarge | 7 | 90 | 
| r6a.12xlarge | 7 | 120 | 
| r6a.16xlarge | 14 | 120 | 
| r6a.24xlarge | 14 | 120 | 
| r6a.32xlarge | 14 | 120 | 
| r6a.48xlarge | 14 | 120 | 
| r6a.metal | 14 | 120 | 
| r6g.medium | 1 | 4 | 
| r6g.large | 2 | 10 | 
| r6g.xlarge | 3 | 20 | 
| r6g.2xlarge | 3 | 40 | 
| r6g.4xlarge | 7 | 60 | 
| r6g.8xlarge | 7 | 60 | 
| r6g.12xlarge | 7 | 60 | 
| r6g.16xlarge | 14 | 120 | 
| r6g.metal | 14 | 120 | 
| r6gd.medium | 1 | 4 | 
| r6gd.large | 2 | 10 | 
| r6gd.xlarge | 3 | 20 | 
| r6gd.2xlarge | 3 | 40 | 
| r6gd.4xlarge | 7 | 60 | 
| r6gd.8xlarge | 7 | 60 | 
| r6gd.12xlarge | 7 | 60 | 
| r6gd.16xlarge | 14 | 120 | 
| r6gd.metal | 14 | 120 | 
| r6i.large | 2 | 10 | 
| r6i.xlarge | 3 | 20 | 
| r6i.2xlarge | 3 | 40 | 
| r6i.4xlarge | 7 | 60 | 
| r6i.8xlarge | 7 | 90 | 
| r6i.12xlarge | 7 | 120 | 
| r6i.16xlarge | 14 | 120 | 
| r6i.24xlarge | 14 | 120 | 
| r6i.32xlarge | 14 | 120 | 
| r6i.metal | 14 | 120 | 
| r6id.large | 2 | 10 | 
| r6id.xlarge | 3 | 20 | 
| r6id.2xlarge | 3 | 40 | 
| r6id.4xlarge | 7 | 60 | 
| r6id.8xlarge | 7 | 90 | 
| r6id.12xlarge | 7 | 120 | 
| r6id.16xlarge | 14 | 120 | 
| r6id.24xlarge | 14 | 120 | 
| r6id.32xlarge | 14 | 120 | 
| r6id.metal | 14 | 120 | 
| r6idn.large | 2 | 10 | 
| r6idn.xlarge | 3 | 20 | 
| r6idn.2xlarge | 3 | 40 | 
| r6idn.4xlarge | 7 | 60 | 
| r6idn.8xlarge | 7 | 90 | 
| r6idn.12xlarge | 7 | 120 | 
| r6idn.16xlarge | 14 | 120 | 
| r6idn.24xlarge | 14 | 120 | 
| r6idn.32xlarge | 15 | 120 | 
| r6idn.metal | 15 | 120 | 
| r6in.large | 2 | 10 | 
| r6in.xlarge | 3 | 20 | 
| r6in.2xlarge | 3 | 40 | 
| r6in.4xlarge | 7 | 60 | 
| r6in.8xlarge | 7 | 90 | 
| r6in.12xlarge | 7 | 120 | 
| r6in.16xlarge | 14 | 120 | 
| r6in.24xlarge | 14 | 120 | 
| r6in.32xlarge | 15 | 120 | 
| r6in.metal | 15 | 120 | 
| r7a.medium | 1 | 4 | 
| r7a.large | 2 | 10 | 
| r7a.xlarge | 3 | 20 | 
| r7a.2xlarge | 3 | 40 | 
| r7a.4xlarge | 7 | 60 | 
| r7a.8xlarge | 7 | 90 | 
| r7a.12xlarge | 7 | 120 | 
| r7a.16xlarge | 14 | 120 | 
| r7a.24xlarge | 14 | 120 | 
| r7a.32xlarge | 14 | 120 | 
| r7a.48xlarge | 14 | 120 | 
| r7a.metal-48xl | 14 | 120 | 
| r7g.medium | 1 | 4 | 
| r7g.large | 2 | 10 | 
| r7g.xlarge | 3 | 20 | 
| r7g.2xlarge | 3 | 40 | 
| r7g.4xlarge | 7 | 60 | 
| r7g.8xlarge | 7 | 60 | 
| r7g.12xlarge | 7 | 60 | 
| r7g.16xlarge | 14 | 120 | 
| r7g.metal | 14 | 120 | 
| r7gd.medium | 1 | 4 | 
| r7gd.large | 2 | 10 | 
| r7gd.xlarge | 3 | 20 | 
| r7gd.2xlarge | 3 | 40 | 
| r7gd.4xlarge | 7 | 60 | 
| r7gd.8xlarge | 7 | 60 | 
| r7gd.12xlarge | 7 | 60 | 
| r7gd.16xlarge | 14 | 120 | 
| r7gd.metal | 14 | 120 | 
| r7i.large | 2 | 10 | 
| r7i.xlarge | 3 | 20 | 
| r7i.2xlarge | 3 | 40 | 
| r7i.4xlarge | 7 | 60 | 
| r7i.8xlarge | 7 | 90 | 
| r7i.12xlarge | 7 | 120 | 
| r7i.16xlarge | 14 | 120 | 
| r7i.24xlarge | 14 | 120 | 
| r7i.48xlarge | 14 | 120 | 
| r7i.metal-24xl | 14 | 120 | 
| r7i.metal-48xl | 14 | 120 | 
| r7iz.large | 2 | 10 | 
| r7iz.xlarge | 3 | 20 | 
| r7iz.2xlarge | 3 | 40 | 
| r7iz.4xlarge | 7 | 60 | 
| r7iz.8xlarge | 7 | 90 | 
| r7iz.12xlarge | 7 | 120 | 
| r7iz.16xlarge | 14 | 120 | 
| r7iz.32xlarge | 14 | 120 | 
| r7iz.metal-16xl | 14 | 120 | 
| r7iz.metal-32xl | 14 | 120 | 
| r8a.medium | 1 | 4 | 
| r8a.large | 2 | 10 | 
| r8a.xlarge | 3 | 20 | 
| r8a.2xlarge | 3 | 40 | 
| r8a.4xlarge | 7 | 60 | 
| r8a.8xlarge | 9 | 90 | 
| r8a.12xlarge | 11 | 120 | 
| r8a.16xlarge | 15 | 120 | 
| r8a.24xlarge | 15 | 120 | 
| r8a.48xlarge | 23 | 120 | 
| r8a.metal-24xl | 15 | 120 | 
| r8a.metal-48xl | 23 | 120 | 
| r8g.medium | 1 | 4 | 
| r8g.large | 2 | 10 | 
| r8g.xlarge | 3 | 20 | 
| r8g.2xlarge | 3 | 40 | 
| r8g.4xlarge | 7 | 60 | 
| r8g.8xlarge | 7 | 60 | 
| r8g.12xlarge | 7 | 60 | 
| r8g.16xlarge | 14 | 120 | 
| r8g.24xlarge | 14 | 120 | 
| r8g.48xlarge | 14 | 120 | 
| r8g.metal-24xl | 14 | 120 | 
| r8g.metal-48xl | 14 | 120 | 
| r8gb.medium | 1 | 4 | 
| r8gb.large | 2 | 10 | 
| r8gb.xlarge | 3 | 20 | 
| r8gb.2xlarge | 3 | 40 | 
| r8gb.4xlarge | 7 | 60 | 
| r8gb.8xlarge | 9 | 60 | 
| r8gb.12xlarge | 11 | 60 | 
| r8gb.16xlarge | 15 | 120 | 
| r8gb.24xlarge | 23 | 120 | 
| r8gb.48xlarge | 23 | 120 | 
| r8gb.metal-24xl | 23 | 120 | 
| r8gb.metal-48xl | 23 | 120 | 
| r8gd.medium | 1 | 4 | 
| r8gd.large | 2 | 10 | 
| r8gd.xlarge | 3 | 20 | 
| r8gd.2xlarge | 3 | 40 | 
| r8gd.4xlarge | 7 | 60 | 
| r8gd.8xlarge | 7 | 60 | 
| r8gd.12xlarge | 7 | 60 | 
| r8gd.16xlarge | 14 | 120 | 
| r8gd.24xlarge | 14 | 120 | 
| r8gd.48xlarge | 14 | 120 | 
| r8gd.metal-24xl | 14 | 120 | 
| r8gd.metal-48xl | 14 | 120 | 
| r8gn.medium | 1 | 4 | 
| r8gn.large | 2 | 10 | 
| r8gn.xlarge | 3 | 20 | 
| r8gn.2xlarge | 3 | 40 | 
| r8gn.4xlarge | 7 | 60 | 
| r8gn.8xlarge | 9 | 60 | 
| r8gn.12xlarge | 11 | 60 | 
| r8gn.16xlarge | 15 | 120 | 
| r8gn.24xlarge | 23 | 120 | 
| r8gn.48xlarge | 23 | 120 | 
| r8gn.metal-24xl | 23 | 120 | 
| r8gn.metal-48xl | 23 | 120 | 
| r8i.large | 2 | 10 | 
| r8i.xlarge | 3 | 20 | 
| r8i.2xlarge | 3 | 40 | 
| r8i.4xlarge | 7 | 60 | 
| r8i.8xlarge | 9 | 90 | 
| r8i.12xlarge | 11 | 120 | 
| r8i.16xlarge | 15 | 120 | 
| r8i.24xlarge | 15 | 120 | 
| r8i.32xlarge | 23 | 120 | 
| r8i.48xlarge | 23 | 120 | 
| r8i.96xlarge | 23 | 120 | 
| r8i.metal-48xl | 23 | 120 | 
| r8i.metal-96xl | 23 | 120 | 
| r8id.large | 2 | 10 | 
| r8id.xlarge | 3 | 20 | 
| r8id.2xlarge | 3 | 40 | 
| r8id.4xlarge | 7 | 60 | 
| r8id.8xlarge | 9 | 90 | 
| r8id.12xlarge | 11 | 120 | 
| r8id.16xlarge | 15 | 120 | 
| r8id.24xlarge | 15 | 120 | 
| r8id.32xlarge | 23 | 120 | 
| r8id.48xlarge | 23 | 120 | 
| r8id.96xlarge | 23 | 120 | 
| r8id.metal-48xl | 23 | 120 | 
| r8id.metal-96xl | 23 | 120 | 
| r8i-flex.large | 2 | 4 | 
| r8i-flex.xlarge | 3 | 10 | 
| r8i-flex.2xlarge | 3 | 20 | 
| r8i-flex.4xlarge | 7 | 40 | 
| r8i-flex.8xlarge | 9 | 60 | 
| r8i-flex.12xlarge | 11 | 120 | 
| r8i-flex.16xlarge | 15 | 120 | 
| u-3tb1.56xlarge | 7 | 12 | 
| u-6tb1.56xlarge | 14 | 12 | 
| u-18tb1.112xlarge | 14 | 12 | 
| u-18tb1.metal | 14 | 12 | 
| u-24tb1.112xlarge | 14 | 12 | 
| u-24tb1.metal | 14 | 12 | 
| u7i-6tb.112xlarge | 14 | 120 | 
| u7i-8tb.112xlarge | 14 | 120 | 
| u7i-12tb.224xlarge | 14 | 120 | 
| u7in-16tb.224xlarge | 15 | 120 | 
| u7in-24tb.224xlarge | 15 | 120 | 
| u7in-32tb.224xlarge | 15 | 120 | 
| u7inh-32tb.480xlarge | 15 | 120 | 
| x2gd.medium | 1 | 10 | 
| x2gd.large | 2 | 10 | 
| x2gd.xlarge | 3 | 20 | 
| x2gd.2xlarge | 3 | 40 | 
| x2gd.4xlarge | 7 | 60 | 
| x2gd.8xlarge | 7 | 60 | 
| x2gd.12xlarge | 7 | 60 | 
| x2gd.16xlarge | 14 | 120 | 
| x2gd.metal | 14 | 120 | 
| x2idn.16xlarge | 14 | 120 | 
| x2idn.24xlarge | 14 | 120 | 
| x2idn.32xlarge | 14 | 120 | 
| x2idn.metal | 14 | 120 | 
| x2iedn.xlarge | 3 | 13 | 
| x2iedn.2xlarge | 3 | 29 | 
| x2iedn.4xlarge | 7 | 60 | 
| x2iedn.8xlarge | 7 | 120 | 
| x2iedn.16xlarge | 14 | 120 | 
| x2iedn.24xlarge | 14 | 120 | 
| x2iedn.32xlarge | 14 | 120 | 
| x2iedn.metal | 14 | 120 | 
| x2iezn.2xlarge | 3 | 64 | 
| x2iezn.4xlarge | 7 | 120 | 
| x2iezn.6xlarge | 7 | 120 | 
| x2iezn.8xlarge | 7 | 120 | 
| x2iezn.12xlarge | 14 | 120 | 
| x2iezn.metal | 14 | 120 | 
| x8g.medium | 1 | 4 | 
| x8g.large | 2 | 10 | 
| x8g.xlarge | 3 | 20 | 
| x8g.2xlarge | 3 | 40 | 
| x8g.4xlarge | 7 | 60 | 
| x8g.8xlarge | 7 | 60 | 
| x8g.12xlarge | 7 | 60 | 
| x8g.16xlarge | 14 | 120 | 
| x8g.24xlarge | 14 | 120 | 
| x8g.48xlarge | 14 | 120 | 
| x8g.metal-24xl | 14 | 120 | 
| x8g.metal-48xl | 14 | 120 | 
| x8aedz.large | 3 | 10 | 
| x8aedz.xlarge | 3 | 20 | 
| x8aedz.3xlarge | 7 | 40 | 
| x8aedz.6xlarge | 7 | 60 | 
| x8aedz.12xlarge | 15 | 120 | 
| x8aedz.24xlarge | 15 | 120 | 
| x8aedz.metal-12xl | 15 | 120 | 
| x8aedz.metal-24xl | 15 | 120 | 
| x8i.large | 2 | 10 | 
| x8i.xlarge | 3 | 20 | 
| x8i.2xlarge | 3 | 40 | 
| x8i.4xlarge | 7 | 60 | 
| x8i.8xlarge | 9 | 90 | 
| x8i.12xlarge | 11 | 120 | 
| x8i.16xlarge | 15 | 120 | 
| x8i.24xlarge | 15 | 120 | 
| x8i.32xlarge | 23 | 120 | 
| x8i.48xlarge | 23 | 120 | 
| x8i.64xlarge | 23 | 120 | 
| x8i.96xlarge | 23 | 120 | 
| x8i.metal-48xl | 23 | 120 | 
| x8i.metal-96xl | 23 | 120 | 

## Storage optimized
<a name="eni-branch-so"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| i4g.large | 2 | 10 | 
| i4g.xlarge | 3 | 20 | 
| i4g.2xlarge | 3 | 40 | 
| i4g.4xlarge | 7 | 60 | 
| i4g.8xlarge | 7 | 60 | 
| i4g.16xlarge | 14 | 120 | 
| i4i.xlarge | 3 | 8 | 
| i4i.2xlarge | 3 | 28 | 
| i4i.4xlarge | 7 | 58 | 
| i4i.8xlarge | 7 | 118 | 
| i4i.12xlarge | 7 | 118 | 
| i4i.16xlarge | 14 | 248 | 
| i4i.24xlarge | 14 | 118 | 
| i4i.32xlarge | 14 | 498 | 
| i4i.metal | 14 | 498 | 
| i7i.large | 2 | 10 | 
| i7i.xlarge | 3 | 20 | 
| i7i.2xlarge | 3 | 40 | 
| i7i.4xlarge | 7 | 60 | 
| i7i.8xlarge | 7 | 90 | 
| i7i.12xlarge | 7 | 90 | 
| i7i.16xlarge | 14 | 120 | 
| i7i.24xlarge | 14 | 120 | 
| i7i.48xlarge | 14 | 120 | 
| i7i.metal-24xl | 14 | 120 | 
| i7i.metal-48xl | 14 | 120 | 
| i7ie.large | 2 | 20 | 
| i7ie.xlarge | 3 | 29 | 
| i7ie.2xlarge | 3 | 29 | 
| i7ie.3xlarge | 3 | 29 | 
| i7ie.6xlarge | 7 | 60 | 
| i7ie.12xlarge | 7 | 60 | 
| i7ie.18xlarge | 14 | 120 | 
| i7ie.24xlarge | 14 | 120 | 
| i7ie.48xlarge | 14 | 120 | 
| i7ie.metal-24xl | 14 | 120 | 
| i7ie.metal-48xl | 14 | 120 | 
| i8g.large | 2 | 10 | 
| i8g.xlarge | 3 | 20 | 
| i8g.2xlarge | 3 | 40 | 
| i8g.4xlarge | 7 | 60 | 
| i8g.8xlarge | 7 | 60 | 
| i8g.12xlarge | 7 | 60 | 
| i8g.16xlarge | 14 | 120 | 
| i8g.24xlarge | 14 | 120 | 
| i8g.48xlarge | 14 | 120 | 
| i8g.metal-24xl | 14 | 120 | 
| i8g.metal-48xl | 14 | 120 | 
| i8ge.large | 2 | 20 | 
| i8ge.xlarge | 3 | 29 | 
| i8ge.2xlarge | 3 | 29 | 
| i8ge.3xlarge | 5 | 29 | 
| i8ge.6xlarge | 9 | 60 | 
| i8ge.12xlarge | 11 | 60 | 
| i8ge.18xlarge | 15 | 120 | 
| i8ge.24xlarge | 15 | 120 | 
| i8ge.48xlarge | 23 | 120 | 
| i8ge.metal-24xl | 15 | 120 | 
| i8ge.metal-48xl | 23 | 120 | 
| im4gn.large | 2 | 10 | 
| im4gn.xlarge | 3 | 20 | 
| im4gn.2xlarge | 3 | 40 | 
| im4gn.4xlarge | 7 | 60 | 
| im4gn.8xlarge | 7 | 60 | 
| im4gn.16xlarge | 14 | 120 | 
| is4gen.medium | 1 | 4 | 
| is4gen.large | 2 | 10 | 
| is4gen.xlarge | 3 | 20 | 
| is4gen.2xlarge | 3 | 40 | 
| is4gen.4xlarge | 7 | 60 | 
| is4gen.8xlarge | 7 | 60 | 

## Accelerated computing
<a name="eni-branch-ac"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| dl1.24xlarge | 59 | 120 | 
| dl2q.24xlarge | 14 | 120 | 
| f2.6xlarge | 7 | 90 | 
| f2.12xlarge | 7 | 120 | 
| f2.48xlarge | 14 | 120 | 
| g4ad.xlarge | 1 | 12 | 
| g4ad.2xlarge | 1 | 12 | 
| g4ad.4xlarge | 2 | 12 | 
| g4ad.8xlarge | 3 | 12 | 
| g4ad.16xlarge | 7 | 12 | 
| g5.xlarge | 3 | 6 | 
| g5.2xlarge | 3 | 19 | 
| g5.4xlarge | 7 | 40 | 
| g5.8xlarge | 7 | 90 | 
| g5.12xlarge | 14 | 120 | 
| g5.16xlarge | 7 | 120 | 
| g5.24xlarge | 14 | 120 | 
| g5.48xlarge | 6 | 120 | 
| g5g.xlarge | 3 | 20 | 
| g5g.2xlarge | 3 | 40 | 
| g5g.4xlarge | 7 | 60 | 
| g5g.8xlarge | 7 | 60 | 
| g5g.16xlarge | 14 | 120 | 
| g5g.metal | 14 | 120 | 
| g6.xlarge | 3 | 20 | 
| g6.2xlarge | 3 | 40 | 
| g6.4xlarge | 7 | 60 | 
| g6.8xlarge | 7 | 90 | 
| g6.12xlarge | 7 | 120 | 
| g6.16xlarge | 14 | 120 | 
| g6.24xlarge | 14 | 120 | 
| g6.48xlarge | 14 | 120 | 
| g6e.xlarge | 3 | 20 | 
| g6e.2xlarge | 3 | 40 | 
| g6e.4xlarge | 7 | 60 | 
| g6e.8xlarge | 7 | 90 | 
| g6e.12xlarge | 9 | 120 | 
| g6e.16xlarge | 14 | 120 | 
| g6e.24xlarge | 19 | 120 | 
| g6e.48xlarge | 39 | 120 | 
| g6f.large | 1 | 10 | 
| g6f.xlarge | 3 | 20 | 
| g6f.2xlarge | 3 | 40 | 
| g6f.4xlarge | 7 | 60 | 
| gr6.4xlarge | 7 | 60 | 
| gr6.8xlarge | 7 | 90 | 
| gr6f.4xlarge | 7 | 60 | 
| g7e.2xlarge | 3 | 242 | 
| g7e.4xlarge | 7 | 242 | 
| g7e.8xlarge | 7 | 242 | 
| g7e.12xlarge | 9 | 242 | 
| g7e.24xlarge | 19 | 242 | 
| g7e.48xlarge | 39 | 242 | 
| inf2.xlarge | 3 | 20 | 
| inf2.8xlarge | 7 | 90 | 
| inf2.24xlarge | 14 | 120 | 
| inf2.48xlarge | 14 | 120 | 
| p4d.24xlarge | 59 | 120 | 
| p4de.24xlarge | 59 | 120 | 
| p5.4xlarge | 3 | 60 | 
| p5.48xlarge | 63 | 242 | 
| p5e.48xlarge | 63 | 242 | 
| p5en.48xlarge | 63 | 242 | 
| p6-b200.48xlarge | 31 | 242 | 
| p6-b300.48xlarge | 67 | 242 | 
| p6e-gb200.36xlarge | 38 | 120 | 
| trn1.2xlarge | 3 | 19 | 
| trn1.32xlarge | 39 | 120 | 
| trn1n.32xlarge | 79 | 242 | 
| trn2.3xlarge | 1 | 14 | 
| trn2.48xlarge | 31 | 242 | 
| trn2u.48xlarge | 31 | 242 | 
| vt1.3xlarge | 3 | 40 | 
| vt1.6xlarge | 7 | 60 | 
| vt1.24xlarge | 14 | 120 | 

## High performance computing
<a name="eni-branch-hpc"></a>


| Instance type | Task limit without ENI trunking | Task limit with ENI trunking | 
| --- | --- | --- | 
| hpc6a.48xlarge | 1 | 120 | 
| hpc6id.32xlarge | 1 | 120 | 
| hpc7g.4xlarge | 3 | 120 | 
| hpc7g.8xlarge | 3 | 120 | 
| hpc7g.16xlarge | 3 | 120 | 
| hpc8a.96xlarge | 3 | -2 | 

# Reserving Amazon ECS Linux container instance memory
<a name="memory-management"></a>

When the Amazon ECS container agent registers a container instance to a cluster, the agent must determine how much memory the container instance has available to reserve for your tasks. Because of platform memory overhead and memory occupied by the system kernel, this number is different than the installed memory amount that is advertised for Amazon EC2 instances. For example, an `m4.large` instance has 8 GiB of installed memory. However, this does not always translate to exactly 8192 MiB of memory available for tasks when the container instance registers.

## ECS Managed Instances memory resource determination
<a name="ecs-mi-memory-calculation"></a>

Amazon ECS Managed Instances uses a hierarchical approach to determine memory resource requirements for tasks. Unlike ECS on EC2, which relies on Docker's memory introspection, ECS Managed Instances calculates memory requirements directly from the task payload during scheduling decisions.

When the ECS Managed Instances agent receives a task, it calculates the memory requirement using the following priority order:

1. **Task-level memory (highest priority)** - If task-level memory is specified in the task definition, the agent uses this value directly. This takes precedence over all container-level memory settings.

1. **Container-level memory sum (fallback)** - If task-level memory is not specified (or is 0), the agent sums the memory requirements from all containers in the task. For each container, it uses:

   1. *Memory reservation (soft limit)* - If a container specifies `memoryReservation` in its configuration, the agent uses this value.

   1. *Container memory (hard limit)* - If `memoryReservation` is not specified, the agent uses the container's `memory` field.
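
The priority order above can be sketched as follows (illustrative logic, not the agent's actual source code; the sample task definition is a hypothetical two-container task with one reservation):

```
# Sketch of the Managed Instances memory calculation: task-level memory wins;
# otherwise sum memoryReservation (or memory) per container.
required=$(python3 - <<'PYEOF'
taskdef = {
    "family": "my-task",
    "containerDefinitions": [
        {"name": "container1", "memory": 1024, "memoryReservation": 512},
        {"name": "container2", "memory": 512},
    ],
}

mem = taskdef.get("memory")
if mem:  # task-level memory takes precedence over all container-level settings
    print(int(mem))
else:    # fall back to summing memoryReservation (or memory) per container
    print(sum(c.get("memoryReservation") or c.get("memory", 0)
              for c in taskdef["containerDefinitions"]))
PYEOF
)
echo "$required"
```

With no task-level memory set, the sketch sums 512 MiB and 512 MiB for a total of 1024 MiB.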

**Example Task-level memory specified**  
When task-level memory is specified, it takes precedence over container-level settings:  

```
{
  "family": "my-task",
  "memory": "2048",
  "containerDefinitions": [
    {
      "name": "container1",
      "memory": 1024,
      "memoryReservation": 512
    }
  ]
}
```
The agent reserves 2048 MiB (task-level memory takes precedence).

**Example Container-level memory with reservations**  
When task-level memory is not specified, the agent sums container memory requirements:  

```
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "container1",
      "memory": 1024,
      "memoryReservation": 512
    },
    {
      "name": "container2",
      "memory": 512
    }
  ]
}
```
The agent reserves 512 MiB (container1 reservation) + 512 MiB (container2 memory) = 1024 MiB total.

The ECS Managed Instances agent performs memory calculation in three phases:

1. **Task reception** - When a task payload arrives from the ECS control plane, the agent immediately calculates the required memory.

1. **Resource storage** - The calculated memory requirement is stored in the task model for later use in resource accounting operations.

1. **Scheduling decision** - Before accepting a task, the agent checks if sufficient memory is available. If insufficient memory is available, the task is rejected and remains in the ECS service queue until resources become available.

**Note**  
Unlike ECS on EC2, ECS Managed Instances does not use the `ECS_RESERVED_MEMORY` configuration variable. Memory reservation for system processes is handled through the underlying platform's resource management, and the agent performs accurate resource accounting based on task definitions.

 For ECS on EC2, the Amazon ECS container agent provides a configuration variable called `ECS_RESERVED_MEMORY`, which you can use to remove a specified number of MiB of memory from the pool that is allocated to your tasks. This effectively reserves that memory for critical system processes.

If you occupy all of the memory on a container instance with your tasks, then it is possible that your tasks will contend with critical system processes for memory and possibly cause a system failure.

For example, if you specify `ECS_RESERVED_MEMORY=256` in your container agent configuration file, then the agent registers the total memory minus 256 MiB for that instance, and 256 MiB of memory cannot be allocated by ECS tasks. For more information about agent configuration variables and how to set them, see [Amazon ECS container agent configuration](ecs-agent-config.md) and [Bootstrapping Amazon ECS Linux container instances to pass data](bootstrap_container_instance.md).
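
Following the user-data pattern used elsewhere in this guide, a minimal sketch of setting this variable at launch time (the cluster name *MyCluster* is a placeholder):

```
#!/bin/bash
# Reserve 256 MiB for critical system processes; the agent registers
# the instance's total memory minus 256 MiB.
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_RESERVED_MEMORY=256
EOF
```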

If you specify 8192 MiB for the task, and none of your container instances have 8192 MiB or greater of memory available to satisfy this requirement, then the task cannot be placed in your cluster. If you are using a managed compute environment, then AWS Batch must launch a larger instance type to accommodate the request.

The Amazon ECS container agent uses the Docker `ReadMemInfo()` function to query the total memory available to the operating system. Both Linux and Windows provide command line utilities to determine the total memory.

**Example - Determine Linux total memory**  
The **free** command returns the total memory that is recognized by the operating system.  

```
$ free -b
```
Example output for an `m4.large` instance running the Amazon ECS-optimized Amazon Linux AMI.  

```
             total       used       free     shared    buffers     cached
Mem:    8373026816  348180480 8024846336      90112   25534464  205418496
-/+ buffers/cache:  117227520 8255799296
```
This instance has 8373026816 bytes of total memory, which translates to 7985 MiB available for tasks.
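
The MiB figure follows directly from the byte count (1 MiB = 1048576 bytes); a quick way to reproduce the conversion:

```
# Convert the total bytes reported by free -b into MiB.
total_bytes=8373026816
total_mib=$((total_bytes / 1048576))
echo "$total_mib"
```

Integer division yields 7985 MiB, matching the value above.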

**Example - Determine Windows total memory**  
The **wmic** command returns the total memory that is recognized by the operating system.  

```
C:\> wmic ComputerSystem get TotalPhysicalMemory
```
Example output for an `m4.large` instance running the Amazon ECS-optimized Windows Server AMI.  

```
TotalPhysicalMemory
8589524992
```
This instance has 8589524992 bytes of total memory, which translates to 8191 MiB available for tasks.

## Viewing container instance memory
<a name="viewing-memory"></a>

You can view how much memory a container instance registers with in the Amazon ECS console (or with the [DescribeContainerInstances](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeContainerInstances.html) API operation). If you are trying to maximize your resource utilization by providing your tasks as much memory as possible for a particular instance type, you can observe the memory available for that container instance and then assign your tasks that much memory.
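
For example, the registered and remaining memory can be retrieved with the AWS CLI (the cluster name and container instance ID below are placeholders):

```
# Query the MEMORY entries from the instance's registered and remaining resources.
aws ecs describe-container-instances \
    --cluster MyCluster \
    --container-instances 12345678-1234-1234-1234-123456789012 \
    --query "containerInstances[].{registered:registeredResources[?name=='MEMORY'],available:remainingResources[?name=='MEMORY']}"
```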

**To view container instance memory**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose the cluster that hosts your container instance.

1. Choose **Infrastructure**, and then under Container instances, choose a container instance.

1. The **Resources** section shows the registered and available memory for the container instance.

   The **Registered** memory value is what the container instance registered with Amazon ECS when it was first launched, and the **Available** memory value is what has not already been allocated to tasks.

# Managing Amazon ECS container instances remotely using AWS Systems Manager
<a name="ec2-run-command"></a>

You can use the Run Command capability in AWS Systems Manager (Systems Manager) to securely and remotely manage the configuration of your Amazon ECS container instances. Run Command provides a simple way to perform common administrative tasks without logging on locally to the instance. You can manage configuration changes across your clusters by simultaneously executing commands on multiple container instances. Run Command reports the status and results of each command.

Here are some examples of the types of tasks you can perform with Run Command:
+ Install or uninstall packages.
+ Perform security updates.
+ Clean up Docker images.
+ Stop or start services.
+ View system resources.
+ View log files.
+ Perform file operations.

For more information about Run Command, see [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html) in the *AWS Systems Manager User Guide*.

The following are prerequisites for using Systems Manager with Amazon ECS.

1. You must grant the container instance role (**ecsInstanceRole**) permissions to access the Systems Manager APIs. You can do this by attaching the **AmazonSSMManagedInstanceCore** managed policy to the `ecsInstanceRole` role. For information about how to attach a policy to a role, see [Update permissions for a role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_update-role-permissions.html) in the *AWS Identity and Access Management User Guide*.

1. Verify that SSM Agent is installed on your container instances. For more information, see [Manually installing and uninstalling SSM Agent on EC2 instances for Linux](https://docs.aws.amazon.com/systems-manager/latest/userguide/manually-install-ssm-agent-linux.html).

After you attach Systems Manager managed policies to your `ecsInstanceRole` and verify that AWS Systems Manager Agent (SSM Agent) is installed on your container instances, you can start using Run Command to send commands to your container instances. For information about running commands and shell scripts on your instances and viewing the resulting output, see [Running Commands Using Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html) and [Run Command Walkthroughs](https://docs.aws.amazon.com/systems-manager/latest/userguide/run-command-walkthroughs.html) in the *AWS Systems Manager User Guide*. 

A common use case is to update container instance software with Run Command. You can follow the procedures in the *AWS Systems Manager User Guide* with the following parameters.


| Parameter | Value | 
| --- | --- | 
|  **Command document**  | AWS-RunShellScript | 
| Command |  <pre>$ yum update -y</pre> | 
| Target instances | Your container instances | 
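
The equivalent AWS CLI call looks like the following (the instance ID is a placeholder):

```
# Run yum update on a container instance through Systems Manager Run Command.
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --instance-ids i-1234567890abcdef0 \
    --parameters 'commands=["yum update -y"]'
```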

# Using an HTTP proxy for Amazon ECS Linux container instances
<a name="http_proxy_config"></a>

You can configure your Amazon ECS container instances to use an HTTP proxy for both the Amazon ECS container agent and the Docker daemon. This is useful if your container instances do not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance. 

To configure your Amazon ECS Linux container instance to use an HTTP proxy, set the following variables in the relevant files at launch time (with Amazon EC2 user data). You can also manually edit the configuration file, and then restart the agent.

`/etc/ecs/ecs.config` (Amazon Linux 2 and Amazon Linux AMI)    
`HTTP_PROXY=10.0.0.131:3128`  
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for the Amazon ECS agent to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.  
`NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock`  
Set this value to `169.254.169.254,169.254.170.2,/var/run/docker.sock` to filter EC2 instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy. 

`/etc/systemd/system/ecs.service.d/http-proxy.conf` (Amazon Linux 2 only)    
`Environment="HTTP_PROXY=10.0.0.131:3128"`  
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for `ecs-init` to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.  
`Environment="NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock"`  
Set this value to `169.254.169.254,169.254.170.2,/var/run/docker.sock` to filter EC2 instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy. 

`/etc/init/ecs.override` (Amazon Linux AMI only)    
`env HTTP_PROXY=10.0.0.131:3128`  
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for `ecs-init` to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.  
`env NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock`  
Set this value to `169.254.169.254,169.254.170.2,/var/run/docker.sock` to filter EC2 instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy. 

`/etc/systemd/system/docker.service.d/http-proxy.conf` (Amazon Linux 2 only)    
`Environment="HTTP_PROXY=http://10.0.0.131:3128"`  
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for the Docker daemon to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.  
`Environment="NO_PROXY=169.254.169.254,169.254.170.2"`  
Set this value to `169.254.169.254,169.254.170.2` to filter EC2 instance metadata from the proxy. 

`/etc/sysconfig/docker` (Amazon Linux AMI and Amazon Linux 2 only)    
`export HTTP_PROXY=http://10.0.0.131:3128`  
Set this value to the hostname (or IP address) and port number of an HTTP proxy to use for the Docker daemon to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.  
`export NO_PROXY=169.254.169.254,169.254.170.2`  
Set this value to `169.254.169.254,169.254.170.2` to filter EC2 instance metadata from the proxy. 

Setting these environment variables in the above files only affects the Amazon ECS container agent, `ecs-init`, and the Docker daemon. They do not configure any other services (such as **yum**) to use the proxy.
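
As a sketch, the Amazon Linux 2 settings above could be applied together at launch time with user data (the proxy address `10.0.0.131:3128` is the placeholder used throughout this section; adjust for your environment):

```
#!/bin/bash
# Configure the ECS agent, ecs-init, and the Docker daemon to use an HTTP proxy.
mkdir -p /etc/systemd/system/ecs.service.d /etc/systemd/system/docker.service.d

cat <<'EOF' >> /etc/ecs/ecs.config
HTTP_PROXY=10.0.0.131:3128
NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock
EOF

cat <<'EOF' > /etc/systemd/system/ecs.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=10.0.0.131:3128"
Environment="NO_PROXY=169.254.169.254,169.254.170.2,/var/run/docker.sock"
EOF

cat <<'EOF' > /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.0.0.131:3128"
Environment="NO_PROXY=169.254.169.254,169.254.170.2"
EOF

# Pick up the systemd drop-in files.
systemctl daemon-reload
```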

For information about how to configure the proxy, see [How do I set up an HTTP proxy for Docker and the Amazon ECS container agent in Amazon Linux 2 or AL2023](https://repost.aws/knowledge-center/ecs-http-proxy-docker-linux2).

# Configuring pre-initialized instances for your Amazon ECS Auto Scaling group
<a name="using-warm-pool"></a>

Amazon ECS supports Amazon EC2 Auto Scaling warm pools. A warm pool is a group of pre-initialized Amazon EC2 instances ready to be placed into service. Whenever your application needs to scale out, Amazon EC2 Auto Scaling uses pre-initialized instances from the warm pool rather than launching cold instances, allowing any final initialization process to run before the instance is placed into service.

To learn more about warm pools and how to add a warm pool to your Auto Scaling group, see [Warm pools for Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html) in the *Amazon EC2 Auto Scaling User Guide*.

When you create or update a warm pool for an Auto Scaling group for Amazon ECS, you cannot set the option that returns instances to the warm pool on scale in (`ReuseOnScaleIn`). For more information, see [put-warm-pool](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-warm-pool.html) in the *AWS Command Line Interface Reference*.
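
For example, a warm pool without the unsupported reuse option can be created as follows (the Auto Scaling group name is a placeholder):

```
# Create a warm pool of stopped, pre-initialized instances; do not pass
# --instance-reuse-policy, because ReuseOnScaleIn is not supported with Amazon ECS.
aws autoscaling put-warm-pool \
    --auto-scaling-group-name my-ecs-asg \
    --pool-state Stopped \
    --min-size 1
```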

To use warm pools with your Amazon ECS cluster, set the `ECS_WARM_POOLS_CHECK` agent configuration variable to `true` in the **User data** field of your Amazon EC2 Auto Scaling group launch template. 

The following shows an example of how the agent configuration variable can be specified in the **User data** field of an Amazon EC2 launch template. Replace *MyCluster* with the name of your cluster.

```
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=MyCluster
ECS_WARM_POOLS_CHECK=true
EOF
```

The `ECS_WARM_POOLS_CHECK` variable is only supported on agent versions `1.59.0` and later. For more information about the variable, see [Amazon ECS container agent configuration](ecs-agent-config.md).

# Updating the Amazon ECS container agent
<a name="ecs-agent-update"></a>

Occasionally, you might need to update the Amazon ECS container agent to pick up bug fixes and new features. Updating the Amazon ECS container agent does not interrupt running tasks or services on the container instance. The process for updating the agent differs depending on whether your container instance was launched with an Amazon ECS-optimized AMI or another operating system.

**Note**  
Agent updates do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.

## Checking the Amazon ECS container agent version
<a name="checking_agent_version"></a>

You can check the version of the container agent that is running on your container instances to see if you need to update it. The container instance view in the Amazon ECS console provides the agent version. Use the following procedure to check your agent version.

------
#### [ Amazon ECS console ]

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, choose the Region where your external instance is registered.

1. In the navigation pane, choose **Clusters** and select the cluster that hosts the external instance.

1. On the **Cluster : *name*** page, choose the **Infrastructure** tab.

1. Under **Container instances**, note the **Agent version** column for your container instances. If the container instance does not contain the latest version of the container agent, the console alerts you with a message and flags the outdated agent version.

   If your agent version is outdated, you can update your container agent with the following procedures:
   + If your container instance is running an Amazon ECS-optimized AMI, see [Updating the Amazon ECS container agent on an Amazon ECS-optimized AMI](agent-update-ecs-ami.md).
   + If your container instance is not running an Amazon ECS-optimized AMI, see [Manually updating the Amazon ECS container agent (for non-Amazon ECS-Optimized AMIs)](manually_update_agent.md).
**Important**  
To update the Amazon ECS agent version from versions before v1.0.0 on your Amazon ECS-optimized AMI, we recommend that you terminate your current container instance and launch a new instance with the most recent AMI version. Any container instances that use a preview version should be retired and replaced with the most recent AMI. For more information, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).

------
#### [ Amazon ECS container agent introspection API  ]

You can also use the Amazon ECS container agent introspection API to check the agent version from the container instance itself. For more information, see [Amazon ECS container introspection](ecs-agent-introspection.md).

**To check if your Amazon ECS container agent is running the latest version with the introspection API**

1. Log in to your container instance via SSH.

1. Query the introspection API.

   ```
   [ec2-user ~]$ curl -s 127.0.0.1:51678/v1/metadata | python3 -mjson.tool
   ```
**Note**  
The introspection API added `Version` information in version v1.0.0 of the Amazon ECS container agent. If `Version` is not present when querying the introspection API, or the introspection API is not present in your agent at all, then the version you are running is v0.0.3 or earlier, and you should update it.
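
To print just the agent version, you can filter the metadata response on the container instance itself; this sketch assumes `python3` is available, as in the query above:

```
# Extract the Version field from the introspection API response.
curl -s 127.0.0.1:51678/v1/metadata \
    | python3 -c 'import json,sys; print(json.load(sys.stdin).get("Version", "pre-v1.0.0: no Version field"))'
```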

------

# Updating the Amazon ECS container agent on an Amazon ECS-optimized AMI
<a name="agent-update-ecs-ami"></a>

If you are using an Amazon ECS-optimized AMI, you have several options to get the latest version of the Amazon ECS container agent (shown in order of recommendation):
+ Terminate the container instance and launch the latest version of the Amazon ECS-optimized Amazon Linux 2 AMI (either manually or by updating your Auto Scaling launch configuration with the latest AMI). This provides a fresh container instance with the most current tested and validated versions of Amazon Linux, Docker, `ecs-init`, and the Amazon ECS container agent. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
+ Connect to the instance with SSH and update the `ecs-init` package (and its dependencies) to the latest version. This operation provides the most current tested and validated versions of Docker and `ecs-init` that are available in the Amazon Linux repositories and the latest version of the Amazon ECS container agent. For more information, see [To update the `ecs-init` package on an Amazon ECS-optimized AMI](#procedure_update_ecs-init).
+ Update the container agent with the `UpdateContainerAgent` API operation, either through the console or with the AWS CLI or AWS SDKs. For more information, see [Updating the Amazon ECS container agent with the `UpdateContainerAgent` API operation](#agent-update-api).

**Note**  
Agent updates do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.<a name="procedure_update_ecs-init"></a>

**To update the `ecs-init` package on an Amazon ECS-optimized AMI**

1. Log in to your container instance via SSH.

1. Update the `ecs-init` package with the following command.

   ```
   sudo yum update -y ecs-init
   ```
**Note**  
The `ecs-init` package and the Amazon ECS container agent are updated immediately. However, newer versions of Docker are not loaded until the Docker daemon is restarted. Restart either by rebooting the instance, or by running the following commands on your instance:  
Amazon ECS-optimized Amazon Linux 2 AMI:  

     ```
     sudo systemctl restart docker
     ```
Amazon ECS-optimized Amazon Linux AMI:  

     ```
     sudo service docker restart && sudo start ecs
     ```

## Updating the Amazon ECS container agent with the `UpdateContainerAgent` API operation
<a name="agent-update-api"></a>

**Important**  
The `UpdateContainerAgent` API is only supported on Linux variants of the Amazon ECS-optimized AMI, with the exception of the Amazon ECS-optimized Amazon Linux 2 (arm64) AMI. For container instances using the Amazon ECS-optimized Amazon Linux 2 (arm64) AMI, update the `ecs-init` package to update the agent. For container instances that are running other operating systems, see [Manually updating the Amazon ECS container agent (for non-Amazon ECS-Optimized AMIs)](manually_update_agent.md). If you are using Windows container instances, we recommend that you launch new container instances to update the agent version in your Windows clusters.

The `UpdateContainerAgent` API process begins when you request an agent update, either through the console or with the AWS CLI or AWS SDKs. Amazon ECS checks your current agent version against the latest available agent version to determine whether an update is possible. If an update is not available, for example, if the agent is already running the most recent version, then a `NoUpdateAvailableException` is returned.

The stages in the update process are as follows:

`PENDING`  
An agent update is available, and the update process has started.

`STAGING`  
The agent has begun downloading the agent update. If the agent cannot download the update, or if the contents of the update are incorrect or corrupted, then the agent sends a notification of the failure and the update transitions to the `FAILED` state.

`STAGED`  
The agent download has completed and the agent contents have been verified.

`UPDATING`  
The `ecs-init` service is restarted and it picks up the new agent version. If the agent is for some reason unable to restart, the update transitions to the `FAILED` state; otherwise, the agent signals Amazon ECS that the update is complete.

**Note**  
Agent updates do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.
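
You can also request the update with the AWS CLI (the cluster name and container instance ID are placeholders):

```
# Request an agent update; returns NoUpdateAvailableException if the agent
# is already running the latest version.
aws ecs update-container-agent \
    --cluster MyCluster \
    --container-instance 12345678-1234-1234-1234-123456789012
```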

**To update the Amazon ECS container agent on an Amazon ECS-optimized AMI in the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, choose the Region where your external instance is registered.

1. In the navigation pane, choose **Clusters** and select the cluster.

1. On the **Cluster : *name*** page, choose the **Infrastructure** tab.

1. Under **Container instances**, select the instances to update, and then choose **Actions**, **Update agent**.

# Manually updating the Amazon ECS container agent (for non-Amazon ECS-Optimized AMIs)
<a name="manually_update_agent"></a>

Occasionally, you might need to update the Amazon ECS container agent to pick up bug fixes and new features. Updating the Amazon ECS container agent does not interrupt running tasks or services on the container instance.
**Note**  
Agent updates do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.

1. Log in to your container instance via SSH.

1. Check to see if your agent uses the `ECS_DATADIR` environment variable to save its state.

   ```
   ubuntu:~$ docker inspect ecs-agent | grep ECS_DATADIR
   ```

   Output:

   ```
   "ECS_DATADIR=/data",
   ```
**Important**  
If the previous command does not return the `ECS_DATADIR` environment variable, you must stop any tasks running on this container instance before updating your agent. Newer agents with the `ECS_DATADIR` environment variable save their state and you can update them while tasks are running without issues.

1. Stop the Amazon ECS container agent.

   ```
   ubuntu:~$ docker stop ecs-agent
   ```

1. Delete the agent container.

   ```
   ubuntu:~$ docker rm ecs-agent
   ```

1. Ensure that the `/etc/ecs` directory and the Amazon ECS container agent configuration file exist at `/etc/ecs/ecs.config`.

   ```
   ubuntu:~$ sudo mkdir -p /etc/ecs && sudo touch /etc/ecs/ecs.config
   ```

1. Edit the `/etc/ecs/ecs.config` file and ensure that it contains at least the following variable declarations. If you do not want your container instance to register with the default cluster, specify your cluster name as the value for `ECS_CLUSTER`.

   ```
   ECS_DATADIR=/data
   ECS_ENABLE_TASK_IAM_ROLE=true
   ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
   ECS_LOGFILE=/log/ecs-agent.log
   ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
   ECS_LOGLEVEL=info
   ECS_CLUSTER=default
   ```

   For more information about these and other agent runtime options, see [Amazon ECS container agent configuration](ecs-agent-config.md).
**Note**  
You can optionally store your agent environment variables in Amazon S3 (which can be downloaded to your container instances at launch time using Amazon EC2 user data). This is recommended for sensitive information such as authentication credentials for private repositories. For more information, see [Storing Amazon ECS container instance configuration in Amazon S3](ecs-config-s3.md) and [Using non-AWS container images in Amazon ECS](private-auth.md).

1. Pull the latest Amazon ECS container agent image from Amazon Elastic Container Registry Public.

   ```
   ubuntu:~$ docker pull public.ecr.aws/ecs/amazon-ecs-agent:latest
   ```

   Output:

   ```
   Pulling repository amazon/amazon-ecs-agent
   a5a56a5e13dc: Download complete
   511136ea3c5a: Download complete
   9950b5d678a1: Download complete
   c48ddcf21b63: Download complete
   Status: Image is up to date for amazon/amazon-ecs-agent:latest
   ```

1. Run the latest Amazon ECS container agent on your container instance.
**Note**  
Use Docker restart policies or a process manager (such as **upstart** or **systemd**) to treat the container agent as a service or a daemon and ensure that it is restarted after exiting. The Amazon ECS-optimized AMI uses the `ecs-init` RPM for this purpose, and you can view the [source code for this RPM](https://github.com/aws/amazon-ecs-init) on GitHub. 

   The following example of the agent run command is broken into separate lines to show each option. For more information about these and other agent runtime options, see [Amazon ECS container agent configuration](ecs-agent-config.md).
**Important**  
Operating systems with SELinux enabled require the `--privileged` option in your **docker run** command. In addition, for SELinux-enabled container instances, we recommend that you add the `:Z` option to the `/log` and `/data` volume mounts. However, the host mounts for these volumes must exist before you run the command or you receive a `no such file or directory` error. Take the following action if you experience difficulty running the Amazon ECS agent on an SELinux-enabled container instance:  
Create the host volume mount points on your container instance.  

     ```
     ubuntu:~$ sudo mkdir -p /var/log/ecs /var/lib/ecs/data
     ```
Add the `--privileged` option to the **docker run** command below.
Append the `:Z` option to the `/log` and `/data` container volume mounts (for example, `--volume=/var/log/ecs/:/log:Z`) to the **docker run** command below.

   ```
   ubuntu:~$ sudo docker run --name ecs-agent \
   --detach=true \
   --restart=on-failure:10 \
   --volume=/var/run:/var/run \
   --volume=/var/log/ecs/:/log \
   --volume=/var/lib/ecs/data:/data \
   --volume=/etc/ecs:/etc/ecs \
   --volume=/etc/ecs:/etc/ecs/pki \
   --net=host \
   --env-file=/etc/ecs/ecs.config \
   amazon/amazon-ecs-agent:latest
   ```
**Note**  
If you receive an `Error response from daemon: Cannot start container` message, you can delete the failed container with the **sudo docker rm ecs-agent** command and try running the agent again. 

# Amazon ECS-optimized Windows AMIs
<a name="ecs-optimized_windows_AMI"></a>

The Amazon ECS-optimized AMIs are preconfigured with the necessary components that you need to run Amazon ECS workloads. Although you can create your own container instance AMI that meets the basic specifications needed to run your containerized workloads on Amazon ECS, the Amazon ECS-optimized AMIs are preconfigured and tested on Amazon ECS by AWS engineers. It is the simplest way for you to get started and to get your containers running on AWS quickly.

The Amazon ECS-optimized AMI metadata for each variant, including the AMI name, the Amazon ECS container agent version, and the Amazon ECS runtime version (which includes the Docker version), can be retrieved programmatically. For more information, see [Retrieving Amazon ECS-optimized Windows AMI metadata](retrieve-ecs-optimized_windows_AMI.md).

**Important**  
 All ECS-optimized AMI variants produced after August 2022 will be migrating from Docker EE (Mirantis) to Docker CE (Moby project).  
To ensure that customers have the latest security updates by default, Amazon ECS maintains at least the last three Windows Amazon ECS-optimized AMIs. After releasing new Windows Amazon ECS-optimized AMIs, Amazon ECS makes the older Windows Amazon ECS-optimized AMIs private. If there is a private AMI that you need access to, let us know by filing a ticket with Cloud Support.

## Amazon ECS-optimized AMI variants
<a name="ecs-optimized-ami-variants"></a>

The following Windows Server variants of the Amazon ECS-optimized AMI are available for your Amazon EC2 instances.

**Important**  
All ECS-optimized AMI variants produced after August 2022 will be migrating from Docker EE (Mirantis) to Docker CE (Moby project).
+ **Amazon ECS-optimized Windows Server 2025 Full AMI** 
+ **Amazon ECS-optimized Windows Server 2025 Core AMI** 
+ **Amazon ECS-optimized Windows Server 2022 Full AMI** 
+ **Amazon ECS-optimized Windows Server 2022 Core AMI** 
+ **Amazon ECS-optimized Windows Server 2019 Full AMI** 
+ **Amazon ECS-optimized Windows Server 2019 Core AMI** 
+ **Amazon ECS-optimized Windows Server 2016 Full AMI**

**Important**  
Windows Server 2016 does not support the latest Docker versions (for example, 25.x.x). Therefore, the Windows Server 2016 Full AMIs will not receive security or bug patches for the Docker runtime. We recommend that you move to one of the following Windows platforms:  
Windows Server 2022 Full
Windows Server 2022 Core
Windows Server 2019 Full
Windows Server 2019 Core

On August 9, 2022, the Amazon ECS-optimized Windows Server 20H2 Core AMI reached its end of support date. No new versions of this AMI will be released. For more information, see [Windows Server release information](https://learn.microsoft.com/en-us/windows-server/get-started/windows-server-release-info).

Windows Server 2025, Windows Server 2022, Windows Server 2019, and Windows Server 2016 are Long-Term Servicing Channel (LTSC) releases. Windows Server 20H2 is a Semi-Annual Channel (SAC) release. For more information, see [Windows Server release information](https://learn.microsoft.com/en-us/windows-server/get-started/windows-server-release-info).

### Considerations
<a name="windows_caveats"></a>

Here are some things you should know about Amazon EC2 Windows containers and Amazon ECS.
+ Windows containers can't run on Linux container instances, and vice versa. For better task placement for Windows and Linux tasks, keep Windows and Linux container instances in separate clusters, and only place Windows tasks on Windows clusters. You can ensure that Windows task definitions are only placed on Windows instances by setting the following placement constraint: `memberOf(ecs.os-type=='windows')`.
+ Windows containers are supported for tasks that use the EC2 and Fargate launch types.
+ Windows containers and container instances can't support all the task definition parameters that are available for Linux containers and container instances. For some parameters, they aren't supported at all, and others behave differently on Windows than they do on Linux. For more information, see [Amazon ECS task definition differences for EC2 instances running Windows](windows_task_definitions.md).
+ For the IAM roles for tasks feature, you need to configure your Windows container instances to allow the feature at launch. Your containers must run some provided PowerShell code when they use the feature. For more information, see [Amazon EC2 Windows instance additional configuration](task-iam-roles.md#windows_task_IAM_roles).
+ The IAM roles for tasks feature uses a credential proxy to provide credentials to the containers. This credential proxy occupies port 80 on the container instance, so if you use IAM roles for tasks, port 80 is not available for tasks. For web service containers, you can use an Application Load Balancer and dynamic port mapping to provide standard HTTP port 80 connections to your containers. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).
+ The Windows Server Docker images are large (9 GiB). So, your Windows container instances require more storage space than Linux container instances.
+ To run a Windows container on a Windows Server, the container’s base image OS version must match that of the host. For more information, see [Windows container version compatibility](https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2022%2Cwindows-11) on the Microsoft documentation website. If your cluster runs multiple Windows versions, you can ensure that a task is placed on an EC2 instance running on the same version by using the placement constraint: `memberOf(attribute:ecs.os-family == WINDOWS_SERVER_<OS_Release>_<FULL or CORE>)`. For more information, see [Retrieving Amazon ECS-optimized Windows AMI metadata](retrieve-ecs-optimized_windows_AMI.md).
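The `ecs.os-type` and `ecs.os-family` constraints described above are set as placement constraints in a task definition. The following fragment is a sketch; the `WINDOWS_SERVER_2022_FULL` value is one possible OS release, and you would substitute the release and Full or Core variant that matches your instances:

```
{
  "placementConstraints": [
    {
      "type": "memberOf",
      "expression": "attribute:ecs.os-family == WINDOWS_SERVER_2022_FULL"
    }
  ]
}
```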

# Retrieving Amazon ECS-optimized Windows AMI metadata
<a name="retrieve-ecs-optimized_windows_AMI"></a>

The AMI ID, image name, operating system, container agent version, and runtime version for each variant of the Amazon ECS-optimized AMIs can be programmatically retrieved by querying the Systems Manager Parameter Store API. For more information about the Systems Manager Parameter Store API, see [GetParameters](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameters.html) and [GetParametersByPath](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParametersByPath.html).

**Note**  
Your administrative user must have the following IAM permissions to retrieve the Amazon ECS-optimized AMI metadata. These permissions have been added to the `AmazonECS_FullAccess` IAM policy.  
`ssm:GetParameters`  
`ssm:GetParameter`  
`ssm:GetParametersByPath`
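As a sketch, a standalone IAM policy granting just these permissions might look like the following. Scoping the `Resource` element to the `/aws/service/ami-windows-latest/*` parameter path is an assumption for illustration; you can broaden or tighten it to fit your needs:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "ssm:GetParameter",
        "ssm:GetParametersByPath"
      ],
      "Resource": "arn:aws:ssm:*::parameter/aws/service/ami-windows-latest/*"
    }
  ]
}
```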

## Systems Manager Parameter Store parameter format
<a name="ecs-optimized-ami-parameter-format"></a>

**Note**  
The following Systems Manager Parameter Store API parameters are deprecated and should not be used to retrieve the latest Windows AMIs:  
`/aws/service/ecs/optimized-ami/windows_server/2016/english/full/recommended/image_id`
`/aws/service/ecs/optimized-ami/windows_server/2019/english/full/recommended/image_id`

The following is the format of the parameter name for each Amazon ECS-optimized AMI variant.
+ Windows Server 2025 Full AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2025-English-Full-ECS_Optimized
  ```
+ Windows Server 2025 Core AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2025-English-Core-ECS_Optimized
  ```
+ Windows Server 2022 Full AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2022-English-Full-ECS_Optimized
  ```
+ Windows Server 2022 Core AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2022-English-Core-ECS_Optimized
  ```
+ Windows Server 2019 Full AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized
  ```
+ Windows Server 2019 Core AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2019-English-Core-ECS_Optimized
  ```
+ Windows Server 2016 Full AMI metadata:

  ```
  /aws/service/ami-windows-latest/Windows_Server-2016-English-Full-ECS_Optimized
  ```

The following AWS CLI command retrieves the metadata of the latest stable Windows Server 2019 Full AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized
```

The following is an example of the JSON object that is returned for the parameter value.

```
{
    "Parameters": [
        {
            "Name": "/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized",
            "Type": "String",
            "Value": "{\"image_name\":\"Windows_Server-2019-English-Full-ECS_Optimized-2023.06.13\",\"image_id\":\"ami-0debc1fb48e4aee16\",\"ecs_runtime_version\":\"Docker (CE) version 20.10.21\",\"ecs_agent_version\":\"1.72.0\"}",
            "Version": 58,
            "LastModifiedDate": "2023-06-22T19:37:37.841000-04:00",
            "ARN": "arn:aws:ssm:us-east-1::parameter/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized",
            "DataType": "text"
        }
    ],
    "InvalidParameters": []
}
```
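Note that the parameter `Value` field is itself a JSON document, so it must be decoded a second time to reach the individual fields. The following is a minimal Python sketch of that double decode, using an abridged copy of the example response above:

```python
import json

# Example `get-parameters` response, abridged from the output above.
response = {
    "Parameters": [
        {
            "Name": "/aws/service/ami-windows-latest/"
                    "Windows_Server-2019-English-Full-ECS_Optimized",
            "Value": '{"image_name":"Windows_Server-2019-English-Full-'
                     'ECS_Optimized-2023.06.13","image_id":"ami-0debc1fb48e4aee16",'
                     '"ecs_runtime_version":"Docker (CE) version 20.10.21",'
                     '"ecs_agent_version":"1.72.0"}',
        }
    ]
}

# The Value field is a JSON string, so decode it a second time.
metadata = json.loads(response["Parameters"][0]["Value"])

print(metadata["image_id"])           # ami-0debc1fb48e4aee16
print(metadata["ecs_agent_version"])  # 1.72.0
```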

Each of the fields in the output above is available to be queried as a sub-parameter. Construct the parameter path for a sub-parameter by appending the sub-parameter name to the path for the selected AMI. The following sub-parameters are available:
+ `schema_version`
+ `image_id`
+ `image_name`
+ `os`
+ `ecs_agent_version`
+ `ecs_runtime_version`
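Constructing a sub-parameter path is plain string concatenation. The following Python sketch builds two of the paths listed above:

```python
# Base parameter path for the selected AMI variant (from the list above).
base = "/aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized"

# Append a sub-parameter name to query a single field.
image_id_param = f"{base}/image_id"
agent_version_param = f"{base}/ecs_agent_version"

print(image_id_param)
# /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized/image_id
```

You can then pass the resulting path to `aws ssm get-parameters --names` to retrieve only that field.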

## Examples
<a name="ecs-optimized-ami-windows-parameter-examples"></a>

The following examples show ways in which you can retrieve the metadata for each Amazon ECS-optimized AMI variant.

### Retrieving the metadata of the latest stable Amazon ECS-optimized AMI
<a name="ecs-optimized-ami-windows-parameter-examples-1"></a>

You can retrieve the latest stable Amazon ECS-optimized AMI with the following AWS CLI commands.
+ **For the Amazon ECS-optimized Windows Server 2025 Full AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2025-English-Full-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2025 Core AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2025-English-Core-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2022 Full AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2022-English-Full-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2022 Core AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2022-English-Core-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2019 Full AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2019 Core AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Core-ECS_Optimized --region us-east-1
  ```
+ **For the Amazon ECS-optimized Windows Server 2016 Full AMI:**

  ```
  aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2016-English-Full-ECS_Optimized --region us-east-1
  ```

### Using the latest recommended Amazon ECS-optimized AMI in a CloudFormation template
<a name="ecs-optimized-ami-windows-parameter-examples-5"></a>

You can reference the latest recommended Amazon ECS-optimized AMI in a CloudFormation template by referencing the Systems Manager Parameter Store name.

```
Parameters:
  LatestECSOptimizedAMI:
    Description: AMI ID
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized/image_id
```
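The parameter value can then be referenced wherever an AMI ID is expected in the same template. The following fragment is a sketch; the `EcsLaunchTemplate` resource name is illustrative:

```
Resources:
  EcsLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref LatestECSOptimizedAMI
```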

# Amazon ECS-optimized Windows AMI versions
<a name="ecs-windows-ami-versions"></a>

View the current and previous versions of the Amazon ECS-optimized AMIs and their corresponding versions of the Amazon ECS container agent, Docker, and the `ecs-init` package.

The Amazon ECS-optimized AMI metadata, including the AMI ID, for each variant can be retrieved programmatically. For more information, see [Retrieving Amazon ECS-optimized Windows AMI metadata](retrieve-ecs-optimized_windows_AMI.md). 

The following tabs display a list of Windows Amazon ECS-optimized AMI versions. For details on referencing the Systems Manager Parameter Store parameter in a CloudFormation template, see [Using the latest recommended Amazon ECS-optimized AMI in a CloudFormation template](retrieve-ecs-optimized_windows_AMI.md#ecs-optimized-ami-windows-parameter-examples-5).

**Important**  
To ensure that customers have the latest security updates by default, Amazon ECS maintains at least the last three Windows Amazon ECS-optimized AMIs. After releasing new Windows Amazon ECS-optimized AMIs, Amazon ECS makes the Windows Amazon ECS-optimized AMIs that are older private. If there is a private AMI that you need access to, let us know by filing a ticket with Cloud Support.  
Windows Server 2016 does not support the latest Docker versions (for example, 25.x.x). Therefore, the Windows Server 2016 Full AMIs will not receive security or bug patches for the Docker runtime. We recommend that you move to one of the following Windows platforms:  
Windows Server 2022 Full
Windows Server 2022 Core
Windows Server 2019 Full
Windows Server 2019 Core

**Note**  
gMSA plugin logging has been migrated from file-based logging (`C:\ProgramData\Amazon\gmsa`) to Windows Event logging with the August 2025 AMI release. The public log collector script will collect all gMSA logs. For more information, see [Collecting container logs with Amazon ECS logs collector](ecs-logs-collector.md).

------
#### [ Windows Server 2025 Full AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2025 Full AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2025 Full AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2025-English-Full-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Full-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Full-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Full-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2025 Full AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2025-English-Full-ECS_Optimized
```

------
#### [ Windows Server 2025 Core AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2025 Core AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2025 Core AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2025-English-Core-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Core-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Core-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2025-English-Core-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2025 Core AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2025-English-Core-ECS_Optimized
```

------
#### [ Windows Server 2022 Full AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2022 Full AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2022 Full AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2022-English-Full-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Full-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Full-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Full-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2022 Full AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2022-English-Full-ECS_Optimized
```

------
#### [ Windows Server 2022 Core AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2022 Core AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2022 Core AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2022-English-Core-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Core-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Core-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2022-English-Core-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2022 Core AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2022-English-Core-ECS_Optimized
```

------
#### [ Windows Server 2019 Full AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2019 Full AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2019 Full AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2019-English-Full-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Full-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Full-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Full-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2019 Full AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Full-ECS_Optimized
```

------
#### [ Windows Server 2019 Core AMI versions ]

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2019 Core AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2019 Core AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2019-English-Core-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Core-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Core-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `25.0.6 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2019-English-Core-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `25.0.6 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2019 Core AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2019-English-Core-ECS_Optimized
```

------
#### [ Windows Server 2016 Full AMI versions ]

**Important**  
Windows Server 2016 does not support the latest Docker versions (for example, 25.x.x). Therefore, the Windows Server 2016 Full AMIs will not receive security or bug patches for the Docker runtime. We recommend that you move to one of the following Windows platforms:  
Windows Server 2022 Full
Windows Server 2022 Core
Windows Server 2019 Full
Windows Server 2019 Core

The table below lists the current and previous versions of the Amazon ECS-optimized Windows Server 2016 Full AMI and their corresponding versions of the Amazon ECS container agent and Docker.


|  Amazon ECS-optimized Windows Server 2016 Full AMI  |  Amazon ECS container agent version  |  Docker version  |  Visibility  | 
| --- | --- | --- | --- | 
|  **Windows\_Server-2016-English-Full-ECS\_Optimized-2026.03.13**  |  `1.102.0`  |  `20.10.23 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2016-English-Full-ECS\_Optimized-2026.02.13**  |  `1.101.3`  |  `20.10.23 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2016-English-Full-ECS\_Optimized-2026.01.16**  |  `1.101.2`  |  `20.10.23 (Docker CE)`  |  Public  | 
|  **Windows\_Server-2016-English-Full-ECS\_Optimized-2025.12.13**  |  `1.101.0`  |  `20.10.23 (Docker CE)`  |  Public  | 

Use the following AWS CLI command to retrieve the current Amazon ECS-optimized Windows Server 2016 Full AMI.

```
aws ssm get-parameters --names /aws/service/ami-windows-latest/Windows_Server-2016-English-Full-ECS_Optimized
```

------

# Building your own Amazon ECS-optimized Windows AMI
<a name="windows-custom-ami"></a>

Use EC2 Image Builder to build your own custom Amazon ECS-optimized Windows AMI. This makes it easier to use a Windows AMI with your own license on Amazon ECS. Amazon ECS provides a managed Image Builder component that supplies the system configuration needed to run Windows instances that host your containers. Each Amazon ECS managed component includes a specific container agent and Docker version. You can customize your image to use the latest Amazon ECS managed component, or, if you need an older container agent or Docker version, you can specify a different component.

For a full walkthrough of using EC2 Image Builder, see [Getting started with EC2 Image Builder](https://docs.aws.amazon.com/imagebuilder/latest/userguide/set-up-ib-env.html#image-builder-accessing-prereq) in the *EC2 Image Builder User Guide*.

When building your own Amazon ECS-optimized Windows AMI using EC2 Image Builder, you create an image recipe. Your image recipe must meet the following requirements:
+ The **Source image** should be based on Windows Server 2019 Core, Windows Server 2019 Full, Windows Server 2022 Core, or Windows Server 2022 Full. Other Windows operating systems are not supported and might not be compatible with the component.
+ When specifying the **Build components**, the `ecs-optimized-ami-windows` component is required. We also recommend including the `update-windows` component, which ensures that the image contains the latest security updates.

  To specify a different component version, expand the **Versioning options** menu and specify the component version you want to use. For more information, see [Listing the `ecs-optimized-ami-windows` component versions](#windows-component-list).

## Listing the `ecs-optimized-ami-windows` component versions
<a name="windows-component-list"></a>

When creating an EC2 Image Builder recipe and specifying the `ecs-optimized-ami-windows` component, you can either use the default option or you can specify a specific component version. To determine what component versions are available, along with the Amazon ECS container agent and Docker versions contained within the component, you can use the AWS Management Console.

**To list the available `ecs-optimized-ami-windows` component versions**

1. Open the EC2 Image Builder console at [https://console.aws.amazon.com/imagebuilder/](https://console.aws.amazon.com/imagebuilder/).

1. On the navigation bar, select the Region that you are building your image in.

1. In the navigation pane, under the **Saved configurations** menu, choose **Components**.

1. On the **Components** page, type `ecs-optimized-ami-windows` in the search bar. Then, open the filter menu next to the search bar and select **Quick start (Amazon-managed)**.

1. Use the **Description** column to determine the component version with the Amazon ECS container agent and Docker version your image requires.

# Amazon ECS Windows container instance management
<a name="manage-windows"></a>

When you use EC2 instances for your Amazon ECS workloads, you are responsible for maintaining the instances.

Agent updates do not apply to Windows container instances. We recommend that you launch new container instances to update the agent version in your Windows clusters.

**Topics**
+ [Launching a container instance](launch_window-container_instance.md)
+ [Bootstrapping container instances](bootstrap_windows_container_instance.md)
+ [Using an HTTP proxy for Windows container instances](http_proxy_config-windows.md)
+ [Configuring container instances to receive Spot Instance notices](windows-spot-instance-draining-container.md)

# Launching an Amazon ECS Windows container instance
<a name="launch_window-container_instance"></a>

Your Amazon ECS container instances are created using the Amazon EC2 console. Before you begin, be sure that you've completed the steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).

For more information about the launch wizard, see [Launch an instance using the new launch instance wizard](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-launch-instance-wizard.html) in the *Amazon EC2 User Guide*. 

You can use the new Amazon EC2 wizard to launch an instance. Use the following list for the parameters, and leave the parameters that aren't listed set to their defaults. The following instructions take you through each parameter group.

## Procedure
<a name="liw-initiate-instance-launch"></a>

Before you begin, complete the steps in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. In the navigation bar at the top of the screen, the current AWS Region is displayed (for example, US East (Ohio)). Select a Region in which to launch the instance. This choice is important because some Amazon EC2 resources can be shared between Regions, while others can't. 

1. From the Amazon EC2 console dashboard, choose **Launch instance**.

## Name and tags
<a name="liw-name-and-tags"></a>

The instance name is a tag, where the key is **Name**, and the value is the name that you specify. You can tag the instance, the volumes, and elastic graphics. For Spot Instances, you can tag the Spot Instance request only. 

Specifying an instance name and additional tags is optional.
+ For **Name**, enter a descriptive name for the instance. If you don't specify a name, the instance can be identified by its ID, which is automatically generated when you launch the instance.
+ To add additional tags, choose **Add additional tags**. Choose **Add tag**, and then enter a key and value, and select the resource type to tag. Choose **Add tag** again for each additional tag to add.

## Application and OS Images (Amazon Machine Image)
<a name="liw-ami"></a>

An Amazon Machine Image (AMI) contains the information required to create an instance. For example, an AMI might contain the software that's required to act as a web server, such as Apache, and your website.

For the latest Amazon ECS-optimized AMIs and their values, see [Windows Amazon ECS-optimized AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_windows_AMI.html).

Use the **Search** bar to find a suitable Amazon ECS-optimized AMI published by AWS.

1. Based on your requirements, enter one of the following AMIs in the **Search** bar and press **Enter**.
   + Windows\_Server-2022-English-Full-ECS\_Optimized
   + Windows\_Server-2022-English-Core-ECS\_Optimized
   + Windows\_Server-2019-English-Full-ECS\_Optimized
   + Windows\_Server-2019-English-Core-ECS\_Optimized
   + Windows\_Server-2016-English-Full-ECS\_Optimized

1. On the **Choose an Amazon Machine Image (AMI)** page, select the **Community AMIs** tab.

1. From the list that appears, choose a Microsoft-verified AMI with the most recent publish date, and then choose **Select**.

## Instance type
<a name="liw-instance-type"></a>

The instance type defines the hardware configuration and size of the instance. Larger instance types have more CPU and memory. For more information, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html).
+ For **Instance type**, select the instance type for the instance. 

   The instance type that you select determines the resources available for your tasks to run on.

## Key pair (login)
<a name="liw-key-pair"></a>

For **Key pair name**, choose an existing key pair, or choose **Create new key pair** to create a new one. 

**Important**  
If you choose the **Proceed without key pair (Not recommended)** option, you won't be able to connect to the instance unless you choose an AMI that is configured to allow users another way to log in.

## Network settings
<a name="liw-network-settings"></a>

Configure the network settings, as necessary.
+ **Networking platform**: Choose **Virtual Private Cloud (VPC)**, and then specify the subnet in the **Network interfaces** section. 
+ **VPC**: Select an existing VPC in which to create the security group.
+ **Subnet**: You can launch an instance in a subnet associated with an Availability Zone, Local Zone, Wavelength Zone, or Outpost.

  To launch the instance in an Availability Zone, select the subnet in which to launch your instance. To create a new subnet, choose **Create new subnet** to go to the Amazon VPC console. When you are done, return to the launch instance wizard and choose the Refresh icon to load your subnet in the list.

  To launch the instance in a Local Zone, select a subnet that you created in the Local Zone. 

  To launch an instance in an Outpost, select a subnet in a VPC that you associated with the Outpost.
+ **Auto-assign Public IP**: If your instance should be accessible from the internet, verify that the **Auto-assign Public IP** field is set to **Enable**. If not, set this field to **Disable**.
**Note**  
Container instances need access to communicate with the Amazon ECS service endpoint. This can be through an interface VPC endpoint or through your container instances having public IP addresses.  
For more information about interface VPC endpoints, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](vpc-endpoints.md)  
If you do not have an interface VPC endpoint configured and your container instances do not have public IP addresses, then they must use network address translation (NAT) to provide this access. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide* and [Using an HTTP proxy for Amazon ECS Linux container instances](http_proxy_config.md) in this guide.
+ **Firewall (security groups)**: Use a security group to define firewall rules for your container instance. These rules specify which incoming network traffic is delivered to your container instance. All other traffic is ignored. 
  + To select an existing security group, choose **Select existing security group**, and select the security group that you created in [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).

## Configure storage
<a name="liw-storage"></a>

The AMI you selected includes one or more volumes of storage, including the root volume. You can specify additional volumes to attach to the instance.

You can use the **Simple** view.
+ **Storage type**: Configure the storage for your container instance.

  If you are using the Amazon ECS-optimized Amazon Linux AMI, your instance has two volumes configured. The **Root** volume is for the operating system's use, and the second Amazon EBS volume (attached to `/dev/xvdcz`) is for Docker's use.

  You can optionally increase or decrease the volume sizes for your instance to meet your application needs.

## Advanced details
<a name="liw-advanced-details"></a>

For **Advanced details**, expand the section to view the fields and specify any additional parameters for the instance.
+ **Purchasing option**: Choose **Request Spot Instances** to request Spot Instances. You also need to set the other fields related to Spot Instances. For more information, see [Spot Instance Requests](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-requests.html).
**Note**  
If you are using Spot Instances and see a `Not available` message, you may need to choose a different instance type.
+ **IAM instance profile**: Select your container instance IAM role. This is usually named `ecsInstanceRole`.
**Important**  
If you do not launch your container instance with the proper IAM permissions, your Amazon ECS agent cannot connect to your cluster. For more information, see [Amazon ECS container instance IAM role](instance_IAM_role.md).
+ (Optional) **User data**: Configure your Amazon ECS container instance with user data, such as the agent environment variables from [Amazon ECS container agent configuration](ecs-agent-config.md). Amazon EC2 user data scripts are executed only one time, when the instance is first launched. The following are common examples of what user data is used for:
  + By default, your container instance launches into your default cluster. To launch into a non-default cluster, paste the following script into the **User data** field, replacing *your_cluster_name* with the name of your cluster.

    The `EnableTaskIAMRole` option turns on the task IAM roles feature for the tasks.

    In addition, the following options are available when you use the `awsvpc` network mode.
    + `EnableTaskENI`: This flag turns on task networking and is required when you use the `awsvpc` network mode.
    + `AwsvpcBlockIMDS`: This optional flag blocks IMDS access for the task containers running in the `awsvpc` network mode.
    + `AwsvpcAdditionalLocalRoutes`: This optional flag allows you to have additional routes in the task namespace.

      Replace `ip-address` with the IP address for the additional routes, for example, `172.31.42.23/32`.

    ```
    <powershell>
    Import-Module ECSTools
    Initialize-ECSAgent -Cluster your_cluster_name -EnableTaskIAMRole -EnableTaskENI -AwsvpcBlockIMDS -AwsvpcAdditionalLocalRoutes '["ip-address"]'
    </powershell>
    ```

# Bootstrapping Amazon ECS Windows container instances to pass data
<a name="bootstrap_windows_container_instance"></a>

When you launch an Amazon EC2 instance, you can pass user data to the EC2 instance. The data can be used to perform common automated configuration tasks and even run scripts when the instance boots. For Amazon ECS, the most common use cases for user data are to pass configuration information to the Docker daemon and the Amazon ECS container agent.

You can pass multiple types of user data to Amazon EC2, including cloud boothooks, shell scripts, and `cloud-init` directives. For more information about these and other format types, see the [Cloud-Init documentation](https://cloudinit.readthedocs.io/en/latest/explanation/format.html). 

You can pass this user data when using the Amazon EC2 launch wizard. For more information, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).
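On Windows instances, user data runs only if it is wrapped in tags that tell the launch agent how to execute it: `<powershell>` runs the content as a PowerShell script, and `<script>` runs it as a batch file. A minimal sketch:

```
<powershell>
# Runs as a PowerShell script, one time, at first boot
Write-Host "Bootstrapping container instance"
</powershell>
```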

## Default Windows user data
<a name="windows-default-userdata"></a>

This example user data script shows the default user data that your Windows container instances receive if you use the console. The script below does the following:
+ Sets the cluster name to the name you entered.
+ Sets the IAM roles for tasks.
+ Sets `json-file` and `awslogs` as the available logging drivers.

In addition, the following options are available when you use the `awsvpc` network mode.
+ `EnableTaskENI`: This flag turns on task networking and is required when you use the `awsvpc` network mode.
+ `AwsvpcBlockIMDS`: This optional flag blocks IMDS access for the task containers running in `awsvpc` network mode.
+ `AwsvpcAdditionalLocalRoutes`: This optional flag allows you to have additional routes.

  Replace `ip-address` with the IP address for the additional routes, for example, `172.31.42.23/32`.

You can use this script for your own container instances (provided that they are launched from the Amazon ECS-optimized Windows Server AMI). 

In the `-Cluster cluster-name` option, replace `cluster-name` with your own cluster name.

```
<powershell>
Initialize-ECSAgent -Cluster cluster-name -EnableTaskIAMRole -LoggingDrivers '["json-file","awslogs"]' -EnableTaskENI -AwsvpcBlockIMDS -AwsvpcAdditionalLocalRoutes '["ip-address"]'
</powershell>
```

For Windows tasks that are configured to use the `awslogs` logging driver, you must also set the `ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE` environment variable on your container instance. Use the following syntax.

In the `-Cluster cluster-name` option, replace `cluster-name` with your own cluster name.

```
<powershell>
[Environment]::SetEnvironmentVariable("ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE", $TRUE, "Machine")
Initialize-ECSAgent -Cluster cluster-name -EnableTaskIAMRole -LoggingDrivers '["json-file","awslogs"]'
</powershell>
```

## Windows agent installation user data
<a name="agent-service-userdata"></a>

This example user data script installs the Amazon ECS container agent on an instance launched with a **Windows_Server-2016-English-Full-Containers** AMI. It has been adapted from the agent installation instructions on the [Amazon ECS Container Agent GitHub repository](https://github.com/aws/amazon-ecs-agent) README page.

**Note**  
This script is shared for example purposes. It is much easier to get started with Windows containers by using the Amazon ECS-optimized Windows Server AMI. For more information, see [Creating an Amazon ECS cluster for Fargate workloads](create-cluster-console-v2.md).

For information about how to install the Amazon ECS agent on Windows Server 2022 Full, see [Issue 3753](https://github.com/aws/amazon-ecs-agent/issues/3753) on GitHub.

You can use this script for your own container instances (provided that they are launched with a version of the **Windows_Server-2016-English-Full-Containers** AMI). Be sure to replace the `windows` cluster name in the script with your own cluster name (if you are not using a cluster called `windows`).

```
<powershell>
# Set up directories the agent uses
New-Item -Type directory -Path ${env:ProgramFiles}\Amazon\ECS -Force
New-Item -Type directory -Path ${env:ProgramData}\Amazon\ECS -Force
New-Item -Type directory -Path ${env:ProgramData}\Amazon\ECS\data -Force
# Set up configuration
$ecsExeDir = "${env:ProgramFiles}\Amazon\ECS"
[Environment]::SetEnvironmentVariable("ECS_CLUSTER", "windows", "Machine")
[Environment]::SetEnvironmentVariable("ECS_LOGFILE", "${env:ProgramData}\Amazon\ECS\log\ecs-agent.log", "Machine")
[Environment]::SetEnvironmentVariable("ECS_DATADIR", "${env:ProgramData}\Amazon\ECS\data", "Machine")
# Download the agent
$agentVersion = "latest"
$agentZipUri = "https://s3.amazonaws.com/amazon-ecs-agent/ecs-agent-windows-$agentVersion.zip"
$zipFile = "${env:TEMP}\ecs-agent.zip"
Invoke-RestMethod -OutFile $zipFile -Uri $agentZipUri
# Put the executables in the executable directory.
Expand-Archive -Path $zipFile -DestinationPath $ecsExeDir -Force
Set-Location ${ecsExeDir}
# Set $EnableTaskIAMRoles to $true to enable task IAM roles
# Note that enabling IAM roles will make port 80 unavailable for tasks.
[bool]$EnableTaskIAMRoles = $false
if (${EnableTaskIAMRoles}) {
  $HostSetupScript = Invoke-WebRequest https://raw.githubusercontent.com/aws/amazon-ecs-agent/master/misc/windows-deploy/hostsetup.ps1
  Invoke-Expression $($HostSetupScript.Content)
}
# Install the agent service
New-Service -Name "AmazonECS" `
        -BinaryPathName "$ecsExeDir\amazon-ecs-agent.exe -windows-service" `
        -DisplayName "Amazon ECS" `
        -Description "Amazon ECS service runs the Amazon ECS agent" `
        -DependsOn Docker `
        -StartupType Manual
sc.exe failure AmazonECS reset=300 actions=restart/5000/restart/30000/restart/60000
sc.exe failureflag AmazonECS 1
Start-Service AmazonECS
</powershell>
```

# Using an HTTP proxy for Amazon ECS Windows container instances
<a name="http_proxy_config-windows"></a>

You can configure your Amazon ECS container instances to use an HTTP proxy for both the Amazon ECS container agent and the Docker daemon. This is useful if your container instances do not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.

To configure your Amazon ECS Windows container instance to use an HTTP proxy, set the following variables at launch time (with Amazon EC2 user data).

`[Environment]::SetEnvironmentVariable("HTTP_PROXY", "http://proxy.mydomain:port", "Machine")`  
Set `HTTP_PROXY` to the hostname (or IP address) and port number of an HTTP proxy to use for the Amazon ECS agent to connect to the internet. For example, your container instances may not have external network access through an Amazon VPC internet gateway, NAT gateway, or instance.

`[Environment]::SetEnvironmentVariable("NO_PROXY", "169.254.169.254,169.254.170.2,\\.\pipe\docker_engine", "Machine")`  
Set `NO_PROXY` to `169.254.169.254,169.254.170.2,\\.\pipe\docker_engine` to filter EC2 instance metadata, IAM roles for tasks, and Docker daemon traffic from the proxy. 

**Example Windows HTTP proxy user data script**  
The example user data PowerShell script below configures the Amazon ECS container agent and the Docker daemon to use an HTTP proxy that you specify. You can also specify a cluster into which the container instance registers itself.  
To use this script when you launch a container instance, follow the steps in [Launching an Amazon ECS Windows container instance](launch_window-container_instance.md). Just copy and paste the PowerShell script below into the **User data** field (be sure to substitute the example values with your own proxy and cluster information).  
The `-EnableTaskIAMRole` option is required to enable IAM roles for tasks. For more information, see [Amazon EC2 Windows instance additional configuration](task-iam-roles.md#windows_task_IAM_roles).

```
<powershell>
Import-Module ECSTools

$proxy = "http://proxy.mydomain:port"
[Environment]::SetEnvironmentVariable("HTTP_PROXY", $proxy, "Machine")
[Environment]::SetEnvironmentVariable("NO_PROXY", "169.254.169.254,169.254.170.2,\\.\pipe\docker_engine", "Machine")

Restart-Service Docker
Initialize-ECSAgent -Cluster MyCluster -EnableTaskIAMRole
</powershell>
```
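After the instance boots, you can confirm that the machine-scope variables were applied by running the following in a PowerShell session on the instance (a quick check only, not part of the user data):

```
[Environment]::GetEnvironmentVariable("HTTP_PROXY", "Machine")
[Environment]::GetEnvironmentVariable("NO_PROXY", "Machine")
```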

# Configuring Amazon ECS Windows container instances to receive Spot Instance notices
<a name="windows-spot-instance-draining-container"></a>

Amazon EC2 terminates, stops, or hibernates your Spot Instance when the Spot price exceeds the maximum price for your request or capacity is no longer available. Amazon EC2 provides a Spot Instance interruption notice, which gives the instance a two-minute warning before it is interrupted. If Amazon ECS Spot Instance draining is enabled on the instance, ECS receives the Spot Instance interruption notice and places the instance in `DRAINING` status.

**Important**  
Amazon ECS monitors for the Spot Instance interruption notices that have the `terminate` and `stop` instance-actions. If you specified the `hibernate` instance interruption behavior when requesting your Spot Instances or Spot Fleet, then Amazon ECS Spot Instance draining is not supported for those instances.

When a container instance is set to `DRAINING`, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. Service tasks on the draining container instance that are in the `PENDING` state are stopped immediately. If there are container instances in the cluster that are available, replacement service tasks are started on them.

You can turn on Spot Instance draining when you launch an instance. You must set the `ECS_ENABLE_SPOT_INSTANCE_DRAINING` parameter before you start the container agent. Replace *my-cluster* with the name of your cluster.

```
[Environment]::SetEnvironmentVariable("ECS_ENABLE_SPOT_INSTANCE_DRAINING", "true", "Machine")

# Initialize the agent
Initialize-ECSAgent -Cluster my-cluster
```

For more information, see [Launching an Amazon ECS Windows container instance](launch_window-container_instance.md).