

# Auto Scaling groups


**Note**  
If you are new to Auto Scaling groups, work through the steps in the [Create your first Auto Scaling group](create-your-first-auto-scaling-group.md) tutorial to get started and see how an Auto Scaling group responds when an instance in the group terminates.

An *Auto Scaling group* contains a collection of EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also lets you use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Maintaining the number of instances in an Auto Scaling group and automatically scaling that number are the core functionality of the Amazon EC2 Auto Scaling service.

The size of an Auto Scaling group depends on the number of instances that you set as the desired capacity. You can adjust its size to meet demand, either manually or by using automatic scaling. 

An Auto Scaling group starts by launching enough instances to meet its desired capacity, and it maintains that number of instances by performing periodic health checks on the instances in the group. If an instance becomes unhealthy, the group terminates the unhealthy instance and launches another instance to replace it, so the group continues to run the intended number of instances. For more information, see [Health checks for instances in an Auto Scaling group](ec2-auto-scaling-health-checks.md). 

You can use scaling policies to increase or decrease the number of instances in your group dynamically to meet changing conditions. When the scaling policy is in effect, the Auto Scaling group adjusts the desired capacity of the group, between the minimum and maximum capacity values that you specify, and launches or terminates the instances as needed. You can also scale on a schedule. For more information, see [Choose your scaling method](scaling-overview.md). 
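
As an illustration of how a scaling policy is attached from the command line, the following AWS CLI sketch creates a target tracking policy that keeps average CPU utilization near 50 percent. The group name `my-asg`, the policy name, and the target value are placeholders, not values from this guide:

```shell
# Write the target tracking configuration to a file (placeholder values).
cat > tt-config.json <<'EOF'
{
  "TargetValue": 50.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  }
}
EOF

# Attach the policy to an existing Auto Scaling group named "my-asg".
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu50-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration file://tt-config.json
```

With this policy in effect, Amazon EC2 Auto Scaling adjusts the desired capacity between your minimum and maximum limits to keep the metric near the target value.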

When creating an Auto Scaling group, you can choose whether to launch On-Demand Instances, Spot Instances, or both. You can specify multiple purchase options for your Auto Scaling group only when you use a launch template. For more information, see [Auto Scaling groups with multiple instance types and purchase options](ec2-auto-scaling-mixed-instances-groups.md). 

Spot Instances provide you with access to unused EC2 capacity at steep discounts relative to On-Demand prices. For more information, see [Amazon EC2 Spot Instances](https://aws.amazon.com/ec2/spot/pricing/). There are key differences between Spot Instances and On-Demand Instances:
+ The price for Spot Instances varies based on demand
+ Amazon EC2 can terminate an individual Spot Instance as the availability of, or price for, Spot Instances changes

When a Spot Instance is terminated, the Auto Scaling group attempts to launch a replacement instance to maintain the desired capacity for the group.

If you specified multiple Availability Zones, the desired capacity is distributed across those Availability Zones as instances are launched. If a scaling action occurs, Amazon EC2 Auto Scaling automatically maintains balance across all of the Availability Zones that you specify.

**Topics**
+ [Create Auto Scaling groups using launch templates](create-auto-scaling-groups-launch-template.md)
+ [Create Auto Scaling groups using launch configurations](create-auto-scaling-groups-launch-configuration.md)
+ [Launch instances synchronously](launch-instances-synchronously.md)
+ [Update an Auto Scaling group](update-auto-scaling-group.md)
+ [Tag Auto Scaling groups and instances](ec2-auto-scaling-tagging.md)
+ [Instance maintenance policies](ec2-auto-scaling-instance-maintenance-policy.md)
+ [Amazon EC2 Auto Scaling lifecycle hooks](lifecycle-hooks.md)
+ [Decrease latency for applications with long boot times using warm pools](ec2-auto-scaling-warm-pools.md)
+ [Auto Scaling group zonal shift](ec2-auto-scaling-zonal-shift.md)
+ [Auto Scaling group Availability Zone distribution](ec2-auto-scaling-availability-zone-balanced.md)
+ [Detach or attach instances from your Auto Scaling group](ec2-auto-scaling-detach-attach-instances.md)
+ [Temporarily remove instances from your Auto Scaling group](as-enter-exit-standby.md)
+ [Delete your Auto Scaling infrastructure](as-process-shutdown.md)

# Create Auto Scaling groups using launch templates

If you have created a launch template, you can create an Auto Scaling group that uses a launch template as a configuration template for its EC2 instances. The launch template specifies information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances. For information about creating launch templates, see [Create a launch template for an Auto Scaling group](create-launch-template.md).

You must have sufficient permissions to create an Auto Scaling group. You must also have sufficient permissions to create the service-linked role that Amazon EC2 Auto Scaling uses to perform actions on your behalf if it does not yet exist. For examples of IAM policies that an administrator can use as a reference for granting you permissions, see [Identity-based policy examples](security_iam_id-based-policy-examples.md) and [Control Amazon EC2 launch template usage in Auto Scaling groups](ec2-auto-scaling-launch-template-permissions.md).

**Topics**
+ [Create an Auto Scaling group using a launch template](create-asg-launch-template.md)
+ [Create an Auto Scaling group using the Amazon EC2 launch wizard](create-asg-ec2-wizard.md)
+ [Auto Scaling groups with multiple instance types and purchase options](ec2-auto-scaling-mixed-instances-groups.md)

# Create an Auto Scaling group using a launch template

When you create an Auto Scaling group, you must specify the necessary information to configure the Amazon EC2 instances, the Availability Zones and VPC subnets for the instances, the desired capacity, and the minimum and maximum capacity limits. 

To configure Amazon EC2 instances that are launched by your Auto Scaling group, you can specify a launch template or a launch configuration. The following procedure demonstrates how to create an Auto Scaling group using a launch template. 

**Prerequisites**
+ You must have created a launch template. For more information, see [Create a launch template for an Auto Scaling group](create-launch-template.md).

**To create an Auto Scaling group using a launch template (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the same AWS Region that you used when you created the launch template.

1. Choose **Create an Auto Scaling group**.

1. On the **Choose launch template or configuration** page, do the following:

   1. For **Auto Scaling group name**, enter a name for your Auto Scaling group.

   1. For **Launch template**, choose an existing launch template.

   1. For **Launch template version**, choose whether the Auto Scaling group uses the default, the latest, or a specific version of the launch template when scaling out. 

   1. Verify that your launch template supports all of the options that you are planning to use, and then choose **Next**.

1. On the **Choose instance launch options** page, if you're not using multiple instance types, you can skip the **Instance type requirements** section to use the EC2 instance type that is specified in the launch template.

   To use multiple instance types, see [Auto Scaling groups with multiple instance types and purchase options](ec2-auto-scaling-mixed-instances-groups.md).

1. Under **Network**, for **VPC**, choose a VPC. The Auto Scaling group must be created in the same VPC as the security group you specified in your launch template.

1. For **Availability Zones and subnets**, choose one or more subnets in the specified VPC. Use subnets in multiple Availability Zones for high availability. For more information, see [Considerations when choosing VPC subnets](asg-in-vpc.md#as-vpc-considerations).

1. For **Availability Zone distribution**, select a distribution strategy. For more information, see [Auto Scaling group Availability Zone distribution](ec2-auto-scaling-availability-zone-balanced.md).

1. If you created a launch template with an instance type specified, then you can continue to the next step to create an Auto Scaling group that uses the instance type in the launch template. 

   Alternatively, you can choose the **Override launch template** option if no instance type is specified in your launch template or if you want to use multiple instance types for auto scaling. For more information, see [Auto Scaling groups with multiple instance types and purchase options](ec2-auto-scaling-mixed-instances-groups.md).

1. Choose **Next** to continue to the next step. 

   Or, you can accept the rest of the defaults, and choose **Skip to review**. 

1. (Optional) On the **Integrate with other services** page, configure the following options, and then choose **Next**:

   1. For **Load balancing**, choose whether to attach your Auto Scaling group to a load balancer. For more information, see [Elastic Load Balancing](autoscaling-load-balancer.md).

   1. For **VPC Lattice integration options**, choose whether to use VPC Lattice. For more information, see [Manage traffic flow with a VPC Lattice target group](ec2-auto-scaling-vpc-lattice.md).

   1. For **Amazon Application Recovery Controller (ARC) zonal shift**, select the checkbox to enable zonal shift. For more information, see [Auto Scaling group zonal shift](ec2-auto-scaling-zonal-shift.md).

      1. If you enable zonal shift, for **Health check behavior**, select **Ignore unhealthy** or **Replace unhealthy**. For more information, see [How zonal shift works for Auto Scaling groups](ec2-auto-scaling-zonal-shift.md#asg-zonal-shift-how-it-works).

   1. Under **Health checks**, for **Additional health check types**, select **Turn on Amazon EBS health checks**. For more information, see [Monitor Auto Scaling instances with impaired Amazon EBS volumes using health checks](monitor-and-replace-instances-with-impaired-ebs-volumes.md).

   1. For **Health check grace period**, enter the amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an instance after it enters the `InService` state. For more information, see [Set the health check grace period for an Auto Scaling group](health-check-grace-period.md). 

1. (Optional) On the **Configure group size and scaling** page, configure the following options, and then choose **Next**:

   1. Under **Group size**, for **Desired capacity**, enter the initial number of instances to launch. 

   1. Under **Scaling**, in the **Scaling limits** section, if your new value for **Desired capacity** is greater than **Max desired capacity**, then **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed. For more information, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

   1. For **Automatic scaling**, choose whether you want to create a target tracking scaling policy. You can also create this policy after you create your Auto Scaling group.

      If you choose **Target tracking scaling policy**, follow the directions in [Create a target tracking scaling policy](policy_creating.md) to create the policy.

   1. Under **Instance maintenance policy**, choose whether you want to create an instance maintenance policy. You can also create this policy after you create your Auto Scaling group. Follow the directions in [Set an instance maintenance policy](set-instance-maintenance-policy.md) to create the policy.

   1. Under **Additional capacity settings**, **Capacity Reservation preference**, choose whether you want to use a Capacity Reservation preference. For more information, see [Reserve capacity in specific Availability Zones with Capacity Reservations](use-ec2-capacity-reservations.md).

   1. Under **Additional settings**, **Instance scale-in protection**, choose whether to enable instance scale-in protection. For more information, see [Use instance scale-in protection to control instance termination](ec2-auto-scaling-instance-protection.md).

   1. For **Monitoring**, choose whether to enable CloudWatch group metrics collection. These metrics provide measurements that can be indicators of a potential issue, such as number of terminating instances or number of pending instances. For more information, see [Monitor CloudWatch metrics for your Auto Scaling groups and instances](ec2-auto-scaling-cloudwatch-monitoring.md).

   1. For **Default instance warmup**, choose whether to enable the option and set the warmup time for your application. If you are creating an Auto Scaling group that has a scaling policy, the default instance warmup feature improves the Amazon CloudWatch metrics used for dynamic scaling. For more information, see [Set the default instance warmup for an Auto Scaling group](ec2-auto-scaling-default-instance-warmup.md).

1. (Optional) On the **Add notifications** page, configure the notification, and then choose **Next**. For more information, see [Amazon SNS notification options for Amazon EC2 Auto Scaling](ec2-auto-scaling-sns-notifications.md).

1. (Optional) On the **Add tags** page, choose **Add tag**, provide a tag key and value for each tag, and then choose **Next**. For more information, see [Tag Auto Scaling groups and instances](ec2-auto-scaling-tagging.md).

1. On the **Review** page, choose **Create Auto Scaling group**.

**To create an Auto Scaling group using the command line**

You can use one of the following commands:
+ [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) (AWS CLI)
+ [New-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/New-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)
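
As a sketch of how the console steps above translate to the AWS CLI, the following example creates a group from an existing launch template. The group name, launch template name, subnet IDs, and capacity values are placeholders that you would replace with your own:

```shell
# Create an Auto Scaling group from a launch template (placeholder names).
# --vpc-zone-identifier lists subnets in multiple Availability Zones
# for high availability; --health-check-grace-period is in seconds.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-template "LaunchTemplateName=my-template,Version=\$Default" \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --min-size 1 \
  --desired-capacity 2 \
  --max-size 4 \
  --health-check-grace-period 300
```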

# Create an Auto Scaling group using the Amazon EC2 launch wizard

The following procedure shows how to create an Auto Scaling group by using the **Launch instance** wizard in the Amazon EC2 console. This option automatically populates a launch template with certain configuration details from the **Launch instance** wizard.

**Note**  
The wizard does not populate the Auto Scaling group with the number of instances you specify; it only populates the launch template with the Amazon Machine Image (AMI) ID and instance type. Use the **Create Auto Scaling group** wizard to specify the number of instances to launch.   
An AMI provides the information required to configure an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. We recommend using a custom AMI that already has your application installed on it to avoid having your instances terminated if you reboot an instance belonging to an Auto Scaling group. To use a custom AMI with Amazon EC2 Auto Scaling, you must first create your AMI from a customized instance, and then use the AMI to create a launch template for your Auto Scaling group.

**Prerequisites**
+ You must have created a custom AMI in the same AWS Region where you plan to create the Auto Scaling group. For more information, see [Create an AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html) in the *Amazon EC2 User Guide*.

## Use a custom AMI as a template


In this section, you use the Amazon EC2 launch wizard to automatically populate a launch template with your custom AMI. Alternatively, to set up the launch template from scratch or for more description of the parameters you can configure for your launch template, see [Create your launch template (console)](create-launch-template.md#create-launch-template-for-auto-scaling).

**To use a custom AMI as a template**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. On the navigation bar at the top of the screen, the current AWS Region is displayed. Select a Region in which to launch your Auto Scaling group.

1. In the navigation pane, choose **Instances**.

1. Choose **Launch instance**, and then do the following:

   1. Under **Name and tags**, leave **Name** blank. The name isn't part of the data that's used to create a launch template. 

   1. Under **Application and OS Images (Amazon Machine Image)**, choose **Browse more AMIs** to browse the full AMI catalog.

   1. Choose **My AMIs**, find the AMI that you created, and then choose **Select**. 

   1. Under **Instance type**, choose an instance type. 
**Note**  
Choose the same instance type that you used when you created the AMI or a more powerful one.

   1. On the right side of the screen, under **Summary**, for **Number of instances**, enter any number. The number that you enter here isn't important. You will specify the number of instances that you want to launch when you create the Auto Scaling group.

      Under the **Number of instances** field, a message displays that says **When launching more than 1 instance, consider EC2 Auto Scaling**. 

   1. Choose the **consider EC2 Auto Scaling** hyperlink text.

   1. On the **Launch into Auto Scaling Group** confirmation dialog, choose **Continue** to go to the **Create launch template** page with the AMI and instance type that you selected in the launch instance wizard already populated.

After you choose **Continue**, the **Create launch template** page opens. Follow this procedure to finish creating a launch template. 

**To create a launch template**

1. Under **Launch template name and description**, enter a name and description for the new launch template.

1. (Optional) Under **Key pair (login)**, for **Key pair name**, choose the name of the previously created key pair to use when connecting to instances, for example, using SSH.

1. (Optional) Under **Network settings**, for **Security groups**, choose one or more previously created [security groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html).

1. (Optional) Under **Configure storage**, update the storage configuration. The default storage configuration is determined by the AMI and the instance type. 

1. When you are done configuring the launch template, choose **Create launch template**.

1. On the confirmation page, choose **Create Auto Scaling group**.

## Create an Auto Scaling group


**Note**  
The rest of this topic describes the basic procedure for creating an Auto Scaling group. For more description of the parameters you can configure for your Auto Scaling group, see [Create an Auto Scaling group using a launch template](create-asg-launch-template.md).

After you choose **Create Auto Scaling group**, the **Create Auto Scaling group** wizard opens. Follow this procedure to create an Auto Scaling group.

**To create an Auto Scaling group**

1. On the **Choose launch template or configuration** page, enter a name for the Auto Scaling group.

1. The launch template that you created is already selected for you. 

   For **Launch template version**, choose whether the Auto Scaling group uses the default, the latest, or a specific version of the launch template when scaling out.

1. Choose **Next** to continue to the next step.

1. On the **Choose instance launch options** page, if you're not using multiple instance types, you can skip the **Instance type requirements** section to use the EC2 instance type that is specified in the launch template.

   To use multiple instance types, see [Auto Scaling groups with multiple instance types and purchase options](ec2-auto-scaling-mixed-instances-groups.md).

1. Under **Network**, for **VPC**, choose a VPC. The Auto Scaling group must be created in the same VPC as the security group you specified in your launch template.
**Tip**  
If you didn't specify a security group in your launch template, your instances are launched with a default security group from the VPC that you specify. By default, this security group doesn't allow inbound traffic from external networks.

1. For **Availability Zones and subnets**, choose one or more subnets in the specified VPC.

1. For **Availability Zone distribution**, select a distribution strategy. For more information, see [Auto Scaling group Availability Zone distribution](ec2-auto-scaling-availability-zone-balanced.md).

1. Choose **Next** twice to go to the **Configure group size and scaling policies** page.

1. Under **Group size**, define the **Desired capacity** (initial number of instances to launch immediately after the Auto Scaling group is created).

1. In the **Scaling** section, under **Scaling limits**, if your new value for **Desired capacity** is greater than **Max desired capacity**, then **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed. For more information, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

1. Choose **Skip to review**. 

1. On the **Review** page, choose **Create Auto Scaling group**.

## Next steps


You can check that the Auto Scaling group has been created correctly by viewing the activity history. On the **Activity** tab, under **Activity history**, the **Status** column shows whether your Auto Scaling group has successfully launched instances. If the instances fail to launch or they launch but then immediately terminate, see the following topics for possible causes and resolutions:
+ [Troubleshoot Amazon EC2 Auto Scaling: EC2 instance launch failures](ts-as-instancelaunchfailure.md)
+ [Troubleshoot Amazon EC2 Auto Scaling: AMI issues](ts-as-ami.md)
+ [Troubleshoot unhealthy instances in Amazon EC2 Auto Scaling](ts-as-healthchecks.md)

You can now attach a load balancer in the same Region as your Auto Scaling group, if desired. For more information, see [Use Elastic Load Balancing to distribute incoming application traffic in your Auto Scaling group](autoscaling-load-balancer.md).
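
If you prefer the AWS CLI for this step, an existing Application Load Balancer target group can be attached to the group as sketched below. The group name and target group ARN are placeholders:

```shell
# Attach an existing ALB target group to the Auto Scaling group.
# Instances launched by the group are automatically registered with it.
aws autoscaling attach-load-balancer-target-groups \
  --auto-scaling-group-name my-asg \
  --target-group-arns \
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/1234567890123456"
```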

# Auto Scaling groups with multiple instance types and purchase options

You can launch and automatically scale a fleet of On-Demand Instances and Spot Instances within a single Auto Scaling group. In addition to receiving discounts for using Spot Instances, you can use Reserved Instances or Savings Plans to receive discounts on the regular On-Demand Instance price. These factors help you optimize your cost savings for EC2 instances and get the desired scale and performance for your application.

Spot Instances are spare capacity available at steep discounts compared to the EC2 On-Demand price. Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. They can be used for various fault-tolerant and flexible applications. Examples include stateless web servers, API endpoints, big data and analytics applications, containerized workloads, CI/CD pipelines, high performance and high throughput computing (HPC/HTC), rendering workloads, and other flexible workloads.

For more information, see [Instance purchasing options](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-purchasing-options.html) in the *Amazon EC2 User Guide*.

**Topics**
+ [Setup overview for creating a mixed instances group](mixed-instances-groups-set-up-overview.md)
+ [Allocation strategies for multiple instance types](allocation-strategies.md)
+ [Create mixed instances group using attribute-based instance type selection](create-mixed-instances-group-attribute-based-instance-type-selection.md)
+ [Create a mixed instances group by manually choosing instance types](create-mixed-instances-group-manual-instance-type-selection.md)
+ [Configure an Auto Scaling group to use instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md)
+ [Use multiple launch templates](ec2-auto-scaling-mixed-instances-groups-launch-template-overrides.md)

# Setup overview for creating a mixed instances group


This topic provides an overview and best practices for creating an Auto Scaling mixed instances group.

**Topics**
+ [Overview](#mixed-instances-groups-overview)
+ [Instance type flexibility](#mixed-instances-group-instance-flexibility)
+ [Availability Zone flexibility](#mixed-instances-group-az-flexibility)
+ [Spot max price](#mixed-instances-group-spot-max-price)
+ [Proactive capacity rebalancing](#use-capacity-rebalancing)
+ [Scaling behavior](#mixed-instances-group-scaling-behavior)
+ [Regional availability of instance types](#setup-overview-regional-availability-of-instance-types)
+ [Related resources](#setup-overview-related-resources)
+ [Limitations](#setup-overview-limitations)

## Overview


To create a mixed instances group, you have two options:
+ [Attribute-based instance type selection](create-mixed-instances-group-attribute-based-instance-type-selection.md) – Define your compute requirements to choose your instance types automatically based on their specific instance attributes.
+ [Manual instance type selection](create-mixed-instances-group-manual-instance-type-selection.md) – Manually choose the instance types that suit your workload.

------
#### [ Manual selection ]

The following steps describe how to create a mixed instances group by manually choosing instance types: 

1. Choose a launch template that has the parameters to launch an EC2 instance. Parameters in launch templates are optional, but Amazon EC2 Auto Scaling can't launch an instance if the Amazon Machine Image (AMI) ID is missing from the launch template.

1. Choose the option to override the launch template.

1. Manually choose the instance types that suit your workload.

1. Specify the percentages of On-Demand Instances and Spot Instances to launch.

1. Choose allocation strategies that determine how Amazon EC2 Auto Scaling fulfills your On-Demand and Spot capacities from the possible instance types.

1. Choose the Availability Zones and VPC subnets to launch your instances in.

1. Specify the initial size of the group (the desired capacity) and the minimum and maximum size of the group.

Overrides are necessary to override the instance type declared in the launch template and use multiple instance types that are embedded in the Auto Scaling group's own resource definition. For more information about the instance types that are available, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the *Amazon EC2 User Guide*. 

You can also configure the following optional parameters for each instance type:
+ `LaunchTemplateSpecification` – You can assign a different launch template to an instance type as needed. This option is currently not available from the console. For more information, see [Use multiple launch templates](ec2-auto-scaling-mixed-instances-groups-launch-template-overrides.md).
+ `WeightedCapacity` – You decide how much the instance counts toward the desired capacity relative to the rest of the instances in your group. If you specify a `WeightedCapacity` value for one instance type, you must specify a `WeightedCapacity` value for all of them. By default, each instance counts as one toward your desired capacity. For more information, see [Configure an Auto Scaling group to use instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md).
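
The manual-selection steps above can be sketched as a single AWS CLI call with a mixed instances policy. The group name, launch template name, instance types, weights, and distribution percentages below are placeholder examples, not recommendations:

```shell
# Mixed instances policy: override the launch template with three
# instance types and weights, and split capacity between a base of
# On-Demand Instances and a 50/50 On-Demand/Spot mix above the base.
cat > mixed-policy.json <<'EOF'
{
  "LaunchTemplate": {
    "LaunchTemplateSpecification": {
      "LaunchTemplateName": "my-template",
      "Version": "$Default"
    },
    "Overrides": [
      { "InstanceType": "c5.large",   "WeightedCapacity": "1" },
      { "InstanceType": "c5.xlarge",  "WeightedCapacity": "2" },
      { "InstanceType": "c6i.xlarge", "WeightedCapacity": "2" }
    ]
  },
  "InstancesDistribution": {
    "OnDemandBaseCapacity": 1,
    "OnDemandPercentageAboveBaseCapacity": 50,
    "SpotAllocationStrategy": "price-capacity-optimized"
  }
}
EOF

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-mixed-asg \
  --mixed-instances-policy file://mixed-policy.json \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --min-size 1 --desired-capacity 4 --max-size 8
```

Because weights are specified, the desired capacity of 4 is measured in capacity units rather than instance counts; a `c5.xlarge` contributes 2 units toward it.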

------
#### [ Attribute-based selection ]

To let Amazon EC2 Auto Scaling choose your instance types automatically based on their specific instance attributes, use the following steps to create a mixed instances group by specifying your compute requirements:

1. Choose a launch template that has the parameters to launch an EC2 instance. Parameters in launch templates are optional, but Amazon EC2 Auto Scaling can't launch an instance if the Amazon Machine Image (AMI) ID is missing from the launch template.

1. Choose the option to override the launch template.

1. Specify instance attributes that match your compute requirements, such as vCPUs and memory requirements.

1. Specify the percentages of On-Demand Instances and Spot Instances to launch.

1. Choose allocation strategies that determine how Amazon EC2 Auto Scaling fulfills your On-Demand and Spot capacities from the possible instance types.

1. Choose the Availability Zones and VPC subnets to launch your instances in.

1. Specify the initial size of the group (the desired capacity) and the minimum and maximum size of the group.

Overrides are necessary to override the instance type declared in the launch template and use a set of instance attributes that describe your compute requirements. For supported attributes, see [InstanceRequirements](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_InstanceRequirements.html) in the *Amazon EC2 Auto Scaling API Reference*. Alternatively, you can use a launch template that already has your instance attributes definition. 

You can also configure the `LaunchTemplateSpecification` parameter within the overrides structure to assign a different launch template to a set of instance requirements as needed. This option is currently not available from the console. For more information, see [LaunchTemplateOverrides](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_LaunchTemplateOverrides.html) in the *Amazon EC2 Auto Scaling API Reference*.

By default, you set the number of instances as the desired capacity of your Auto Scaling group. 

Alternatively, you can set the value for desired capacity to the number of vCPUs or the amount of memory. To do so, use the `DesiredCapacityType` property in the `CreateAutoScalingGroup` API operation or the **Desired capacity type** dropdown field in the AWS Management Console. This is a useful alternative to [instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md).
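
Attribute-based selection with a vCPU-based desired capacity can be sketched as follows. The attribute ranges, group name, and capacity values are placeholders chosen for illustration:

```shell
# Select instance types by attributes instead of listing them: any type
# with 2-8 vCPUs and at least 4 GiB of memory qualifies.
cat > requirements-policy.json <<'EOF'
{
  "LaunchTemplate": {
    "LaunchTemplateSpecification": {
      "LaunchTemplateName": "my-template",
      "Version": "$Default"
    },
    "Overrides": [
      {
        "InstanceRequirements": {
          "VCpuCount": { "Min": 2, "Max": 8 },
          "MemoryMiB": { "Min": 4096 }
        }
      }
    ]
  }
}
EOF

# --desired-capacity-type vcpu expresses min, max, and desired capacity
# in vCPUs rather than instance counts.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-attr-asg \
  --mixed-instances-policy file://requirements-policy.json \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222" \
  --desired-capacity-type vcpu \
  --min-size 8 --desired-capacity 16 --max-size 64
```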

------

## Instance type flexibility


To enhance availability, deploy your application across multiple instance types. It's a best practice to use multiple instance types to satisfy capacity requirements. This way, Amazon EC2 Auto Scaling can launch another instance type if there is insufficient instance capacity in your chosen Availability Zones.

If there is insufficient instance capacity with Spot Instances, Amazon EC2 Auto Scaling keeps trying to launch from other Spot Instance pools. (The pools it uses are determined by your choice of instance types and allocation strategy.) Amazon EC2 Auto Scaling helps you leverage the cost savings of Spot Instances by launching them instead of On-Demand Instances.

We recommend being flexible across at least 10 instance types for each workload. When choosing your instance types, don't limit yourself to the most popular new instance types. Choosing earlier generation instance types tends to result in fewer Spot interruptions because they are less in demand from On-Demand customers.

## Availability Zone flexibility


We strongly recommend that you span your Auto Scaling group across multiple Availability Zones. With multiple Availability Zones, you can design applications that automatically fail over between zones for greater resiliency. 

As an added benefit, you can access a deeper Amazon EC2 capacity pool when compared to groups in a single Availability Zone. Because capacity fluctuates independently for each instance type in each Availability Zone, you can often get more compute capacity with flexibility for both the instance type and the Availability Zone. 

For more information about using multiple Availability Zones, see [Example: Distribute instances across Availability Zones](auto-scaling-benefits.md#arch-AutoScalingMultiAZ).

## Spot max price


When you create your Auto Scaling group using the AWS CLI or an SDK, you can specify the `SpotMaxPrice` parameter. The `SpotMaxPrice` parameter determines the maximum price that you're willing to pay for a Spot Instance hour. 

When you specify the `WeightedCapacity` parameter in your overrides (or `"DesiredCapacityType": "vcpu"` or `"DesiredCapacityType": "memory-mib"` at the group level), the maximum price represents the maximum unit price, not the maximum price for a whole instance. 

We strongly recommend that you do not specify a maximum price. If your maximum price is too low, you might not receive any Spot Instances, and your application might not run. If you don't specify a maximum price, the default maximum price is the On-Demand price, but you pay only the Spot price for the Spot Instances that you launch. You still receive the steep discounts provided by Spot Instances. These discounts are possible because of the stable Spot pricing that's available with the [Spot pricing model](https://aws.amazon.com/blogs/compute/new-amazon-ec2-spot-pricing/). For more information, see [Pricing and savings](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html#spot-pricing) in the *Amazon EC2 User Guide*. 
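The effect of the maximum price can be sketched in Python. This is a hypothetical helper illustrating the behavior described above, not part of any AWS SDK:

```python
def can_launch_spot(spot_price, max_price=None, on_demand_price=None):
    """Return whether a Spot request can be fulfilled at the current Spot price.

    Without an explicit maximum price, the On-Demand price acts as the cap.
    You always pay the current Spot price, never the cap itself.
    """
    cap = max_price if max_price is not None else on_demand_price
    return spot_price <= cap

# No explicit maximum: the On-Demand price is the default cap.
assert can_launch_spot(spot_price=0.035, on_demand_price=0.096) is True
# A maximum price that's set too low risks receiving no Spot Instances.
assert can_launch_spot(spot_price=0.035, max_price=0.02) is False
```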

## Proactive capacity rebalancing


If your use case allows, we recommend Capacity Rebalancing. Capacity Rebalancing helps you maintain your workload availability by proactively replacing Spot Instances at risk of interruption.

When Capacity Rebalancing is enabled, Amazon EC2 Auto Scaling attempts to proactively replace Spot Instances that have received an EC2 instance rebalance recommendation. This provides an opportunity to rebalance your workload to new Spot Instances that are not at an elevated risk of interruption. 

For more information, see [Capacity Rebalancing in Auto Scaling to replace at-risk Spot Instances](ec2-auto-scaling-capacity-rebalancing.md).

## Scaling behavior


When you create a mixed instances group, it uses On-Demand Instances by default. To use Spot Instances, you must modify the percentage of the group to be launched as On-Demand Instances. You can specify any number from 0 to 100 for the On-Demand percentage.

Optionally, you can also designate a base number of On-Demand Instances to start with. If you do so, Amazon EC2 Auto Scaling waits to launch Spot Instances until after it launches the base capacity of On-Demand Instances when the group scales out. Anything beyond the base capacity uses the On-Demand percentage to determine how many On-Demand Instances and Spot Instances to launch. 

Amazon EC2 Auto Scaling converts the percentage to the equivalent number of instances. If the result creates a fractional number, it rounds up to the next integer in favor of On-Demand Instances.

The following table demonstrates the behavior of the Auto Scaling group as it increases and decreases in size.


**Example: Scaling behavior**  

| Purchase options | Group size: 10 | Group size: 20 | Group size: 30 | Group size: 40 | 
| --- | --- | --- | --- | --- | 
| **Example 1**: base of 10, 50/50% On-Demand/Spot |  |  |  |  | 
| On-Demand Instances (base amount) | 10 | 10 | 10 | 10 | 
| On-Demand Instances | 0 | 5 | 10 | 15 | 
| Spot Instances | 0 | 5 | 10 | 15 | 
| **Example 2**: base of 0, 0/100% On-Demand/Spot |  |  |  |  | 
| On-Demand Instances (base amount) | 0 | 0 | 0 | 0 | 
| On-Demand Instances | 0 | 0 | 0 | 0 | 
| Spot Instances | 10 | 20 | 30 | 40 | 
| **Example 3**: base of 0, 60/40% On-Demand/Spot |  |  |  |  | 
| On-Demand Instances (base amount) | 0 | 0 | 0 | 0 | 
| On-Demand Instances | 6 | 12 | 18 | 24 | 
| Spot Instances | 4 | 8 | 12 | 16 | 
| **Example 4**: base of 0, 100/0% On-Demand/Spot |  |  |  |  | 
| On-Demand Instances (base amount) | 0 | 0 | 0 | 0 | 
| On-Demand Instances | 10 | 20 | 30 | 40 | 
| Spot Instances | 0 | 0 | 0 | 0 | 
| **Example 5**: base of 12, 0/100% On-Demand/Spot |  |  |  |  | 
| On-Demand Instances (base amount) | 10 | 12 | 12 | 12 | 
| On-Demand Instances | 0 | 0 | 0 | 0 | 
| Spot Instances | 0 | 8 | 18 | 28 | 
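The distribution in the examples above can be sketched with a small Python helper. The function name and signature are hypothetical; the logic follows the documented base-capacity and round-up-to-On-Demand behavior:

```python
import math

def split_capacity(desired, base, on_demand_pct):
    """Split a group's desired capacity into base On-Demand, additional
    On-Demand, and Spot Instances. The base is fulfilled first; fractional
    results round up in favor of On-Demand Instances."""
    base_on_demand = min(base, desired)
    remaining = desired - base_on_demand
    on_demand = math.ceil(remaining * on_demand_pct / 100)
    spot = remaining - on_demand
    return base_on_demand, on_demand, spot

# Example 1: base of 10, 50/50% On-Demand/Spot, group size 20
assert split_capacity(20, 10, 50) == (10, 5, 5)
# Example 3: base of 0, 60/40% On-Demand/Spot, group size 30
assert split_capacity(30, 0, 60) == (0, 18, 12)
# Example 5: base of 12, 0/100% On-Demand/Spot, group size 10
assert split_capacity(10, 12, 0) == (10, 0, 0)
# A fractional result (50% of 15) rounds up to On-Demand: 8 On-Demand, 7 Spot
assert split_capacity(15, 0, 50) == (0, 8, 7)
```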

When the size of the group *increases*, Amazon EC2 Auto Scaling attempts to balance your capacity evenly across your specified Availability Zones. Then, it launches instance types according to the specified allocation strategy. 

When the size of the group *decreases*, Amazon EC2 Auto Scaling first identifies which of the two types (Spot or On-Demand) should be terminated. Then, it tries to terminate instances in a balanced way across your specified Availability Zones. It also favors terminating instances in a way that aligns closer to your allocation strategies. For information about termination policies, see [Configure termination policies for Amazon EC2 Auto Scaling](ec2-auto-scaling-termination-policies.md).

## Regional availability of instance types


The availability of EC2 instance types varies by AWS Region. For example, the newest generation instance types might not yet be available in a given Region. If multiple instance types in your overrides are not available in your Region, programmatic requests might fail entirely. To solve the issue, retry the request with different instance types, making sure that each instance type is available in the Region. To search for instance types offered by location, use the [describe-instance-type-offerings](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-instance-type-offerings.html) command. For more information, see [Finding an Amazon EC2 instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-discovery.html) in the *Amazon EC2 User Guide*. 

## Related resources


For more best practices for Spot Instances, see [Best practices for EC2 Spot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-best-practices.html) in the *Amazon EC2 User Guide*. 

## Limitations


After you add overrides to an Auto Scaling group using a [mixed instances policy](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_MixedInstancesPolicy.html), you can update the overrides with the `UpdateAutoScalingGroup` API call but not delete them. To completely remove the overrides, you must first switch the Auto Scaling group to use a launch template or launch configuration instead of a mixed instances policy. Then, you can add a mixed instances policy again without any overrides.

# Allocation strategies for multiple instance types


When you use multiple instance types, you manage how Amazon EC2 Auto Scaling fulfills your On-Demand and Spot capacities from the possible instance types. To do this, you specify allocation strategies. 

To review the best practices for a mixed instances group, see [Setup overview for creating a mixed instances group](mixed-instances-groups-set-up-overview.md).

**Topics**
+ [Spot Instances](#spot-allocation-strategy)
+ [On-Demand Instances](#on-demand-allocation-strategy)
+ [How the allocation strategies work with weights](#lowest-price-allocation-strategy)

## Spot Instances


Amazon EC2 Auto Scaling provides the following allocation strategies for Spot Instances: 

`price-capacity-optimized` (recommended)  
The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.  
We recommend this strategy when you're getting started. For more information, see [Introducing the price-capacity-optimized allocation strategy for EC2 Spot Instances](https://aws.amazon.com/blogs/compute/introducing-price-capacity-optimized-allocation-strategy-for-ec2-spot-instances/) in the AWS blog.

`capacity-optimized`  
Amazon EC2 Auto Scaling requests your Spot Instances from the pools with optimal capacity for the number of instances that are launching.  
With Spot Instances, the pricing changes slowly over time based on long-term trends in supply and demand. However, capacity fluctuates in real time. The `capacity-optimized` strategy automatically launches Spot Instances into the most available pools by looking at real-time capacity data and predicting which are the most available. This helps to minimize possible disruptions for workloads that might have a higher cost of interruption associated with restarting work and checkpointing. To give certain instance types a higher chance of launching first, use `capacity-optimized-prioritized`. 

`capacity-optimized-prioritized`  
You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best-effort basis but optimizes for capacity first. This is a good option for workloads where the possibility of disruption must be minimized, but the preference for certain instance types matters, too. If the On-Demand allocation strategy is set to `prioritized`, the same priority is applied when fulfilling On-Demand capacity. 

`lowest-price` (not recommended)  
We don't recommend the `lowest-price` strategy because it has the highest risk of interruption for your Spot Instances.
Amazon EC2 Auto Scaling requests your Spot Instances using the lowest priced pools within an Availability Zone, across the N number of Spot pools that you specify for the **Lowest priced pools** setting. For example, if you specify four instance types and four Availability Zones, your Auto Scaling group can access up to 16 Spot pools. (Four in each Availability Zone.) If you specify two Spot pools (N=2) for the allocation strategy, your Auto Scaling group can draw on the two lowest priced pools per Availability Zone to fulfill your Spot capacity.  
The `lowest-price` strategy is only available when using the AWS CLI.  
Amazon EC2 Auto Scaling makes an effort to draw Spot Instances from the N number of pools that you specify. However, if a pool runs out of Spot capacity before fulfilling your desired capacity, Amazon EC2 Auto Scaling continues to fulfill your request by drawing from the next lowest priced pool. To meet your desired capacity, you might receive Spot Instances from more pools than your specified N number. Likewise, if most of the pools have no Spot capacity, you might receive your full desired capacity from fewer pools than your specified N number.
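The pool selection for `lowest-price` can be sketched as follows. This is a simplified illustration with hypothetical names and prices; it does not model the fallback to additional pools that occurs when a pool runs out of capacity:

```python
def lowest_priced_pools(pools, n):
    """Pick the N lowest-priced Spot pools per Availability Zone.

    `pools` maps (availability_zone, instance_type) -> current Spot price.
    """
    by_az = {}
    for (az, instance_type), price in pools.items():
        by_az.setdefault(az, []).append((price, instance_type))
    return {az: [t for _, t in sorted(entries)[:n]]
            for az, entries in by_az.items()}

pools = {
    ("us-east-1a", "c5.large"): 0.035, ("us-east-1a", "c4.large"): 0.030,
    ("us-east-1a", "m5.large"): 0.040, ("us-east-1b", "c5.large"): 0.032,
    ("us-east-1b", "c4.large"): 0.038, ("us-east-1b", "m5.large"): 0.031,
}
# With N=2, the group draws on the two cheapest pools in each zone.
assert lowest_priced_pools(pools, 2) == {
    "us-east-1a": ["c4.large", "c5.large"],
    "us-east-1b": ["m5.large", "c5.large"],
}
```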

**Note**  
If you configure your Spot Instances to launch with [AMD SEV-SNP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sev-snp.html) turned on, you are charged an additional hourly usage fee that is equivalent to 10% of the [On-Demand hourly rate](https://aws.amazon.com/ec2/pricing/on-demand/) of the selected instance type. If the allocation strategy uses price as an input, Amazon EC2 Auto Scaling does not include this additional fee; only the Spot price is used.

## On-Demand Instances


Amazon EC2 Auto Scaling provides the following allocation strategies that can be used for On-Demand Instances: 

`lowest-price`  
Amazon EC2 Auto Scaling automatically deploys the lowest priced instance type in each Availability Zone based on the current On-Demand price.  
To meet your desired capacity, you might receive On-Demand Instances of more than one instance type in each Availability Zone. This depends on how much capacity you request.

`prioritized`  
When fulfilling On-Demand capacity, Amazon EC2 Auto Scaling determines which instance type to use first based on the order of instance types in the list of launch template overrides. For example, let's say that you specify three launch template overrides in the following order: `c5.large`, `c4.large`, and `c3.large`. When your On-Demand Instances launch, the Auto Scaling group fulfills On-Demand capacity in the following order: `c5.large`, `c4.large`, and then `c3.large`.   
Consider the following when managing the priority order of your On-Demand Instances:  
+ You can pay for usage upfront to get significant discounts for On-Demand Instances by using Savings Plans or Reserved Instances. For more information, see the [Amazon EC2 pricing](https://aws.amazon.com/ec2/pricing/) page. 
+ With Reserved Instances, your discounted rate of the regular On-Demand Instance pricing applies if Amazon EC2 Auto Scaling launches matching instance types. Therefore, if you have unused Reserved Instances for `c4.large`, you can set the instance type priority to give the highest priority for your Reserved Instances to a `c4.large` instance type. When a `c4.large` instance launches, you receive the Reserved Instance pricing. 
+ With Savings Plans, your discounted rate of the regular On-Demand Instance pricing applies when using Amazon EC2 Instance Savings Plans or Compute Savings Plans. With Savings Plans, you have more flexibility when prioritizing your instance types. As long as you use instance types that are covered by your Savings Plans, you can set them in any priority order. You can also occasionally change the entire order of your instance types, while still receiving the Savings Plans discounted rate. For more information about Savings Plans, see the [Savings Plans User Guide](https://docs.aws.amazon.com/savingsplans/latest/userguide/).
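The `prioritized` fulfillment order described above can be sketched in Python. The helper is hypothetical; the real service also balances across Availability Zones:

```python
def fulfill_prioritized(overrides, available, needed):
    """Fulfill On-Demand capacity from the highest-priority instance types first.

    `overrides` is the launch template override list, in priority order.
    `available` maps instance type -> launchable capacity in that pool.
    """
    launched = {}
    for instance_type in overrides:
        if needed == 0:
            break
        count = min(available.get(instance_type, 0), needed)
        if count:
            launched[instance_type] = count
            needed -= count
    return launched

# c5.large capacity runs out after 6 instances, so c4.large fills the rest.
assert fulfill_prioritized(
    ["c5.large", "c4.large", "c3.large"],
    {"c5.large": 6, "c4.large": 10, "c3.large": 10},
    needed=10,
) == {"c5.large": 6, "c4.large": 4}
```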

## How the allocation strategies work with weights


When you specify the `WeightedCapacity` parameter in your overrides (or `"DesiredCapacityType": "vcpu"` or `"DesiredCapacityType": "memory-mib"` at the group level), the allocation strategies work exactly like they do for other Auto Scaling groups. 

Suppose you have an Auto Scaling group with several instance types that have varying amounts of vCPUs. You use `lowest-price` for your Spot and On-Demand allocation strategies. If you choose to assign weights based on the vCPU count of each instance type, Amazon EC2 Auto Scaling launches whichever instance types have the lowest price per your assigned weight values (for example, per vCPU) at the time of fulfillment. If it's a Spot Instance, then this means the lowest Spot price per vCPU. If it's an On-Demand Instance, then this means the lowest On-Demand price per vCPU.
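That per-unit comparison can be sketched as follows, with hypothetical prices and a hypothetical helper name:

```python
def lowest_price_per_unit(candidates):
    """Pick the instance type with the lowest price per weight unit (e.g. per vCPU).

    `candidates` maps instance type -> (hourly price, weight in units).
    """
    return min(candidates, key=lambda t: candidates[t][0] / candidates[t][1])

# m5.2xlarge wins on price per vCPU despite a higher per-instance price.
candidates = {
    "m5.xlarge": (0.192, 4),   # 0.048 per vCPU
    "m5.2xlarge": (0.368, 8),  # 0.046 per vCPU
}
assert lowest_price_per_unit(candidates) == "m5.2xlarge"
```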

 For more information, see [Configure an Auto Scaling group to use instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md).

# Create mixed instances group using attribute-based instance type selection

Instead of manually choosing instance types for your mixed instances group, you can specify a set of instance attributes that describe your compute requirements. As Amazon EC2 Auto Scaling launches instances, any instance types used by the Auto Scaling group must match your required instance attributes. This is known as *attribute-based instance type selection*.

This approach is ideal for workloads and frameworks that can be flexible about which instance types they use, such as containers, big data, and CI/CD.

The following are benefits of attribute-based instance type selection:
+ **Optimal flexibility for Spot Instances** – Amazon EC2 Auto Scaling can select from a wide range of instance types for launching Spot Instances. This meets the Spot best practice of being flexible about instance types, which gives the Amazon EC2 Spot service a better chance of finding and allocating your required amount of compute capacity.
+ **Easily use the right instance types** – With so many instance types available, finding the right instance types for your workload can be time consuming. When you specify instance attributes, Amazon EC2 Auto Scaling automatically selects instance types that have the required attributes for your workload.
+ **Automatic use of new instance types** – Your Auto Scaling groups can use newer generation instance types as they're released. Newer generation instance types are automatically used when they match your requirements and align with the allocation strategies you choose for your Auto Scaling group. 

**Topics**
+ [How attribute-based instance type selection works](#how-attribute-based-instance-type-selection-works)
+ [Price protection](#understand-price-protection)
+ [Performance protection](#understand-performance-protection)
+ [Prerequisites](#attribute-based-instance-type-selection-prerequisites)
+ [Create a mixed instances group with attribute-based instance type selection (console)](#attribute-based-instance-type-selection-console)
+ [Create a mixed instances group with attribute-based instance type selection (AWS CLI)](#attribute-based-instance-type-selection-aws-cli)
+ [Example configuration](#attribute-based-instance-type-selection-example-configurations)
+ [Preview your instance types](#attribute-based-instance-type-selection-preview)
+ [Related resources](#attribute-based-instance-type-selection-related-resources)

## How attribute-based instance type selection works


With attribute-based instance type selection, instead of providing a list of specific instance types, you provide a list of instance attributes that your instances require, such as:
+ **vCPU count** – The minimum and maximum number of vCPUs per instance.
+ **Memory** – The minimum and maximum amount of memory per instance, in GiB.
+ **Local storage** – Whether to use EBS or instance store volumes for local storage.
+ **Burstable performance** – Whether to use the T instance family, including T4g, T3a, T3, and T2 types. 

There are many options available for defining your instance requirements. For a description of each option and the default values, see [InstanceRequirements](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_InstanceRequirements.html) in the *Amazon EC2 Auto Scaling API Reference*.

When your Auto Scaling group needs to launch an instance, it searches for instance types that match your specified attributes and are available in that Availability Zone. The allocation strategy then determines which of the matching instance types to launch. By default, attribute-based instance type selection has a price protection feature enabled to prevent your Auto Scaling group from launching instance types that exceed your budget thresholds.
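Conceptually, the matching step filters the instance type catalog against your attribute ranges. The following is a simplified Python sketch with hypothetical data; the real service evaluates many more attributes:

```python
def matches(instance, vcpu_min, vcpu_max, mem_min_mib):
    """Return whether an instance type satisfies the required attribute ranges."""
    return (vcpu_min <= instance["vcpus"] <= vcpu_max
            and instance["memory_mib"] >= mem_min_mib)

catalog = [
    {"type": "c5.large", "vcpus": 2, "memory_mib": 4096},
    {"type": "m5.xlarge", "vcpus": 4, "memory_mib": 16384},
    {"type": "r5.2xlarge", "vcpus": 8, "memory_mib": 65536},
]
# Require 4-8 vCPUs and at least 16,384 MiB of memory.
matching = [i["type"] for i in catalog if matches(i, 4, 8, 16384)]
assert matching == ["m5.xlarge", "r5.2xlarge"]
```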

By default, you use the number of instances as the unit of measurement when setting the desired capacity of your Auto Scaling group, meaning each instance counts as one unit. 

Alternatively, you can set the value for desired capacity to the number of vCPUs or the amount of memory. To do so, use the **Desired capacity type** dropdown field in the AWS Management Console or the `DesiredCapacityType` property in the `CreateAutoScalingGroup` or `UpdateAutoScalingGroup` API operation. Amazon EC2 Auto Scaling then launches the number of instances required to meet the desired vCPU or memory capacity. For example, if you use vCPUs as the desired capacity type and use instances with 2 vCPUs each, a desired capacity of 10 vCPUs would launch 5 instances. This is a useful alternative to [instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md).
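The arithmetic in the example above can be sketched as follows (a hypothetical helper illustrating the calculation, not an AWS API):

```python
import math

def instances_needed(desired_capacity, units_per_instance):
    """Number of instances required to meet or exceed a vCPU or memory target."""
    return math.ceil(desired_capacity / units_per_instance)

# 10 desired vCPUs with instances of 2 vCPUs each launches 5 instances.
assert instances_needed(10, 2) == 5
# The target is met or exceeded: 10 vCPUs with 4-vCPU instances launches 3.
assert instances_needed(10, 4) == 3
```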

## Price protection


With price protection, you can specify the maximum price you are willing to pay for EC2 instances launched by your Auto Scaling group. Price protection is a feature that prevents your Auto Scaling group from using instance types that you would consider too expensive even if they happen to fit the attributes that you specified. 

Price protection is enabled by default and has separate price thresholds for On-Demand Instances and Spot Instances. When Amazon EC2 Auto Scaling needs to launch new instances, any instance types priced above the relevant threshold are not launched.

**Topics**
+ [On-Demand price protection](#on-demand-price-price-protection)
+ [Spot price protection](#spot-price-price-protection)
+ [Customize price protection](#customize-price-price-protection)

### On-Demand price protection


For On-Demand Instances, you define the maximum On-Demand price you're willing to pay as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. 

If you don't explicitly define an On-Demand price protection value, the default maximum On-Demand price is 20 percent higher than the identified On-Demand price.

### Spot price protection


By default, Amazon EC2 Auto Scaling automatically applies optimal Spot Instance price protection to consistently select from a wide range of instance types. You can also set the price protection manually. However, letting Amazon EC2 Auto Scaling do it for you can improve the likelihood that your Spot capacity is fulfilled.

You can manually specify the price protection using one of the following options. If you manually set the price protection, we recommend using the first option.
+ **A percentage of an identified *On-Demand price*** – The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes.
+ **A percentage higher than an identified *Spot price*** – The identified Spot price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. We do not recommend using this option because Spot prices can fluctuate, and therefore your price protection threshold might also fluctuate.

### Customize price protection


You can customize the price protection thresholds in the Amazon EC2 Auto Scaling console or using the AWS CLI or SDKs. 
+ In the console, use the **On-Demand price protection** and **Spot price protection** settings in **Additional instance attributes**. 
+ In the [InstanceRequirements](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_InstanceRequirements.html) structure, to specify the On-Demand Instance price protection threshold, use the `OnDemandMaxPricePercentageOverLowestPrice` property. To specify the Spot Instance price protection threshold, use either the `MaxSpotPriceAsPercentageOfOptimalOnDemandPrice` or the `SpotMaxPricePercentageOverLowestPrice` property. 

If you set **Desired capacity type** (`DesiredCapacityType`) to **vCPUs** or **Memory GiB**, the price protection applies based on the per vCPU or per memory price instead of the per instance price. 

You can also turn off price protection. To indicate no price protection threshold, specify a high percentage value, such as `999999`.

**Note**  
If no current generation C, M, or R instance types match your specified attributes, price protection is still applicable. When no match is found, the identified price is from the lowest priced current generation instance types, or failing that, the lowest priced previous generation instance types, that match your attributes. 

## Performance protection


*Performance protection* is a feature that ensures your Auto Scaling group uses instance types whose performance is similar to or exceeds a baseline that you specify. To use performance protection, you specify an instance family as a baseline reference; the capabilities of that instance family establish the lowest acceptable level of performance. When Amazon EC2 Auto Scaling selects instance types, it considers both your specified attributes and the performance baseline, and automatically excludes instance types that fall below the baseline, even if they match your other attributes. Amazon EC2 Auto Scaling uses the baseline to guide instance type selection, but there is no guarantee that the selected instance types will always exceed the baseline for every application.

Currently, this feature only supports CPU performance as a baseline performance factor. The CPU performance of the specified instance family serves as the performance baseline, ensuring that selected instance types are similar to or exceed this baseline. Instance families with the same CPU processors lead to the same filtering results, even if their network or disk performance differs. For example, specifying either `c6in` or `c6i` as the baseline reference would produce identical performance-based filtering results because both instance families use the same CPU processor.

**Unsupported instance families**  
The following instance families are not supported for performance protection:
+ `c1`
+ `g3` \| `g3s`
+ `hpc7g`
+ `m1` \| `m2`
+ `mac1` \| `mac2` \| `mac2-m1ultra` \| `mac2-m2` \| `mac2-m2pro`
+ `p3dn` \| `p4d` \| `p5`
+ `t1`
+ `u-12tb1` \| `u-18tb1` \| `u-24tb1` \| `u-3tb1` \| `u-6tb1` \| `u-9tb1` \| `u7i-12tb` \| `u7in-16tb` \| `u7in-24tb` \| `u7in-32tb`

If you enable performance protection by specifying a supported instance family, the returned instance types will exclude the above unsupported instance families.

**Example: Set a CPU performance baseline**  
In the following example, the instance requirement is to launch with instance types that have CPU cores that are as performant as the `c6i` instance family. This filters out instance types with less performant CPU processors, even if they meet your other specified instance requirements such as the number of vCPUs. For example, if your specified instance attributes include 4 vCPUs and 16 GiB of memory, an instance type with these attributes but with lower CPU performance than `c6i` is excluded from selection.

```
"BaselinePerformanceFactors": {
    "Cpu": {
        "References": [
            {
                "InstanceFamily": "c6i"
            }
        ]
    }
}
```

**Considerations**  
Consider the following when using performance protection:
+ You can specify either instance types or instance attributes, but not both at the same time.
+ You can specify a maximum of four `InstanceRequirements` structures in a request configuration.

## Prerequisites

+ Create a launch template. For more information, see [Create a launch template for an Auto Scaling group](create-launch-template.md).
+ Verify that the launch template doesn't already request Spot Instances. 

## Create a mixed instances group with attribute-based instance type selection (console)


Use the following procedure to create a mixed instances group by using attribute-based instance type selection. To help you move through the steps efficiently, some optional sections are skipped.

For most general purpose workloads, it's enough to specify the number of vCPUs and memory that you need. For advanced use cases, you can specify attributes like storage type, network interfaces, CPU manufacturer, and accelerator type.

To review the best practices for a mixed instances group, see [Setup overview for creating a mixed instances group](mixed-instances-groups-set-up-overview.md).

**To create a mixed instances group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the same AWS Region that you used when you created the launch template.

1. Choose **Create an Auto Scaling group**. 

1. On the **Choose launch template or configuration** page, for **Auto Scaling group name**, enter a name for your Auto Scaling group.

1. To choose your launch template, do the following:

   1. For **Launch template**, choose an existing launch template.

   1. For **Launch template version**, choose whether the Auto Scaling group uses the default, the latest, or a specific version of the launch template when scaling out. 

   1. Verify that your launch template supports all of the options that you are planning to use, and then choose **Next**.

1. On the **Choose instance launch options** page, do the following:

   1. For **Instance type requirements**, choose **Override launch template**.
**Note**  
If you chose a launch template that already contains a set of instance attributes, such as vCPUs and memory, then the instance attributes are displayed. These attributes are added to the Auto Scaling group properties, where you can update them from the Amazon EC2 Auto Scaling console at any time.

   1. Under **Specify instance attributes**, start by entering your vCPUs and memory requirements.
      + For **vCPUs**, enter the desired minimum and maximum number of vCPUs. To specify no limit, select **No minimum**, **No maximum**, or both.
      + For **Memory (GiB)**, enter the desired minimum and maximum amount of memory. To specify no limit, select **No minimum**, **No maximum**, or both.

   1. (Optional) For **Additional instance attributes**, you can optionally specify one or more attributes to express your compute requirements in more detail. Each additional attribute adds further constraints to your request.

   1. Expand **Preview matching instance types** to view the instance types that have your specified attributes.

   1. Under **Instance purchase options**, for **Instances distribution**, specify the percentages of the group to launch as On-Demand Instances and as Spot Instances. If your application is stateless, fault tolerant, and can handle an instance being interrupted, you can specify a higher percentage of Spot Instances.

   1. (Optional) When you specify a percentage for Spot Instances, select **Include On-Demand base capacity** and then specify the minimum amount of the Auto Scaling group's initial capacity that must be fulfilled by On-Demand Instances. Anything beyond the base capacity uses the **Instances distribution** settings to determine how many On-Demand Instances and Spot Instances to launch. 

   1. Under **Allocation strategies**, **Lowest price** is automatically selected for the **On-Demand allocation strategy** and cannot be changed.

   1. For **Spot allocation strategy**, choose an allocation strategy. **Price capacity optimized** is selected by default.

   1. For **Capacity Rebalancing**, choose whether to enable or disable Capacity Rebalancing. Use Capacity Rebalancing to automatically respond when your Spot Instances approach termination from a Spot interruption. For more information, see [Capacity Rebalancing in Auto Scaling to replace at-risk Spot Instances](ec2-auto-scaling-capacity-rebalancing.md). 

   1. Under **Network**, for **VPC**, choose a VPC. The Auto Scaling group must be created in the same VPC as the security group you specified in your launch template.

   1. For **Availability Zones and subnets**, choose one or more subnets in the specified VPC. Use subnets in multiple Availability Zones for high availability. For more information, see [Considerations when choosing VPC subnets](asg-in-vpc.md#as-vpc-considerations).

   1. Choose **Next**, **Next**.

1. For the **Configure group size and scaling policies** step, do the following:

   1. To measure your desired capacity in units other than instances, choose the appropriate option for **Group size**, **Desired capacity type**. **Units**, **vCPUs**, and **Memory GiB** are supported. By default, Amazon EC2 Auto Scaling specifies **Units**, which translates into number of instances.

   1. For **Desired capacity**, enter the initial size of your Auto Scaling group. 

   1. In the **Scaling** section, under **Scaling limits**, if your new value for **Desired capacity** is greater than **Min desired capacity** and **Max desired capacity**, the **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed. For more information, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

1. Choose **Skip to review**.

1. On the **Review** page, choose **Create Auto Scaling group**.

## Create a mixed instances group with attribute-based instance type selection (AWS CLI)


**To create a mixed instances group using the command line**  
Use one of the following commands:
+ [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) (AWS CLI)
+ [New-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/New-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

## Example configuration


To create an Auto Scaling group with attribute-based instance type selection by using the AWS CLI, use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command. 

The following instance attributes are specified:
+ `VCpuCount` – The instance types must have a minimum of four vCPUs and a maximum of eight vCPUs. 
+ `MemoryMiB` – The instance types must have a minimum of 16,384 MiB of memory. 
+ `CpuManufacturers` – The instance types must have an Intel manufactured CPU. 

### JSON


```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The following is an example `config.json` file. 

```
{
    "AutoScalingGroupName": "my-asg",
    "DesiredCapacityType": "units",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Default"
            },
            "Overrides": [{
                "InstanceRequirements": {
                    "VCpuCount": {"Min": 4, "Max": 8},
                    "MemoryMiB": {"Min": 16384},
                    "CpuManufacturers": ["intel"]
                }
            }]
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 50,
            "SpotAllocationStrategy": "price-capacity-optimized"
        }
    },
    "MinSize": 0,
    "MaxSize": 100,
    "DesiredCapacity": 4,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782"
}
```

To set the value for desired capacity as the number of vCPUs or the amount of memory, specify `"DesiredCapacityType": "vcpu"` or `"DesiredCapacityType": "memory-mib"` in the file. The default desired capacity type is `units`, which sets the value for desired capacity as the number of instances.
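For instance, a minimal, hypothetical fragment of the file above with the desired capacity measured in vCPUs might look like the following (the values shown are illustrative, and the `MixedInstancesPolicy` section is unchanged):

```
{
    "AutoScalingGroupName": "my-asg",
    "DesiredCapacityType": "vcpu",
    "DesiredCapacity": 16,
    "MinSize": 0,
    "MaxSize": 100
}
```

With this setting, two running instances with four vCPUs each would contribute 8 of the 16 desired vCPUs.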

### YAML


Alternatively, you can use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create the Auto Scaling group. This references a YAML file as the sole parameter for your Auto Scaling group.

```
aws autoscaling create-auto-scaling-group --cli-input-yaml file://~/config.yaml
```

The following is an example `config.yaml` file. 

```
---
AutoScalingGroupName: my-asg
DesiredCapacityType: units
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateName: my-launch-template
      Version: $Default
    Overrides:
    - InstanceRequirements:
        VCpuCount:
          Min: 4
          Max: 8
        MemoryMiB:
          Min: 16384
        CpuManufacturers:
        - intel
  InstancesDistribution:
    OnDemandPercentageAboveBaseCapacity: 50
    SpotAllocationStrategy: price-capacity-optimized
MinSize: 0
MaxSize: 100
DesiredCapacity: 4
VPCZoneIdentifier: subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782
```

To set the value for desired capacity as the number of vCPUs or the amount of memory, specify `DesiredCapacityType: vcpu` or `DesiredCapacityType: memory-mib` in the file. The default desired capacity type is `units`, which sets the value for desired capacity as the number of instances.

For an example of how to use multiple launch templates with attribute-based instance type selection, see [Use multiple launch templates](ec2-auto-scaling-mixed-instances-groups-launch-template-overrides.md).

## Preview your instance types


You can preview the instance types that match your compute requirements without launching them and adjust your requirements if necessary. When creating your Auto Scaling group in the Amazon EC2 Auto Scaling console, a preview of the instance types appears in the **Preview matching instance types** section on the **Choose instance launch options** page.

Alternatively, you can preview the instance types by making an Amazon EC2 [GetInstanceTypesFromInstanceRequirements](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_GetInstanceTypesFromInstanceRequirements.html) API call using the AWS CLI or an SDK. Pass the `InstanceRequirements` parameters in the request in the exact format that you would use to create or update an Auto Scaling group. For more information, see [Preview instance types with specified attributes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-fleet-attribute-based-instance-type-selection.html#ec2fleet-get-instance-types-from-instance-requirements) in the *Amazon EC2 User Guide*.
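For example, the following AWS CLI sketch previews the instance types that match the attributes used in the examples in this topic. The `--architecture-types` and `--virtualization-types` values are assumptions here; set them to match your AMI.

```
aws ec2 get-instance-types-from-instance-requirements \
  --architecture-types x86_64 \
  --virtualization-types hvm \
  --instance-requirements "VCpuCount={Min=4,Max=8},MemoryMiB={Min=16384},CpuManufacturers=intel" \
  --output table
```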

## Related resources


To learn more about attribute-based instance type selection, see [Attribute-Based Instance Type Selection for EC2 Auto Scaling and EC2 Fleet](https://aws.amazon.com/blogs/aws/new-attribute-based-instance-type-selection-for-ec2-auto-scaling-and-ec2-fleet/) on the AWS Blog.

You can declare attribute-based instance type selection when you create an Auto Scaling group using CloudFormation. For more information, see the example snippet in the [Auto scaling template snippets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/quickref-autoscaling.html#scenario-mixed-instances-group-template-examples) section of the *CloudFormation User Guide*.

# Create a mixed instances group by manually choosing instance types

This topic shows you how to launch multiple instance types in a single Auto Scaling group by manually choosing your instance types. 

If you prefer to use instance attributes as criteria for selecting instance types, see [Create mixed instances group using attribute-based instance type selection](create-mixed-instances-group-attribute-based-instance-type-selection.md).

**Topics**
+ [Prerequisites](#manual-instance-type-selection-prerequisites)
+ [Create a mixed instances group (console)](#manual-instance-type-selection-console)
+ [Create a mixed instances group (AWS CLI)](#manual-instance-type-selection-aws-cli)
+ [Example configurations](#manual-instance-type-selection-example-configurations)

## Prerequisites

+ Create a launch template. For more information, see [Create a launch template for an Auto Scaling group](create-launch-template.md).
+ Verify that the launch template doesn't already request Spot Instances. 

## Create a mixed instances group (console)


Use the following procedure to create a mixed instances group by manually choosing which instance types your group can launch. To help you move through the steps efficiently, some optional sections are skipped.

To review the best practices for a mixed instances group, see [Setup overview for creating a mixed instances group](mixed-instances-groups-set-up-overview.md).

**To create a mixed instances group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the same AWS Region that you used when you created the launch template.

1. Choose **Create an Auto Scaling group**. 

1. On the **Choose launch template or configuration** page, for **Auto Scaling group name**, enter a name for your Auto Scaling group.

1. To choose your launch template, do the following:

   1. For **Launch template**, choose an existing launch template.

   1. For **Launch template version**, choose whether the Auto Scaling group uses the default, the latest, or a specific version of the launch template when scaling out. 

   1. Verify that your launch template supports all of the options that you plan to use, and then choose **Next**.

1. On the **Choose instance launch options** page, do the following:

   1. For **Instance type requirements**, choose **Override launch template**, and then choose **Manually add instance types**. 

   1. Choose your instance types. You can use our recommendations as a starting point. **Family and generation flexible** is selected by default.
      + To change the order of the instance types, use the arrows. If you choose an allocation strategy that supports prioritization, the instance type order sets their launch priority.
      + To remove an instance type, choose **X**.
      + (Optional) For the boxes in the **Weight** column, assign each instance type a relative weight. To do so, enter the number of units that an instance of that type counts toward the desired capacity of the group. Doing so might be useful if the instance types offer different vCPU, memory, storage, or network bandwidth capabilities. For more information, see [Configure an Auto Scaling group to use instance weights](ec2-auto-scaling-mixed-instances-groups-instance-weighting.md). 

        Note that if you choose to use **Size flexible** recommendations, then all instance types that are part of this section automatically have a weight value. If you don't want to specify any weights, clear the boxes in the **Weight** column for all instance types.

   1. Under **Instance purchase options**, for **Instances distribution**, specify the percentages of the group to be launched as On-Demand Instances and Spot Instances respectively. If your application is stateless, fault tolerant, and can handle an instance being interrupted, you can specify a higher percentage of Spot Instances.

   1. (Optional) When you specify a percentage for Spot Instances, select **Include On-Demand base capacity** and then specify the minimum amount of the Auto Scaling group's initial capacity that must be fulfilled by On-Demand Instances. Anything beyond the base capacity uses the **Instances distribution** settings to determine how many On-Demand Instances and Spot Instances to launch. 

   1. Under **Allocation strategies**, for **On-Demand allocation strategy**, choose an allocation strategy. When you manually choose your instance types, **Prioritized** is selected by default.

   1. For **Spot allocation strategy**, choose an allocation strategy. **Price capacity optimized** is selected by default.

      If you choose **Capacity optimized**, you can optionally select the **Prioritize instance types** check box to let Amazon EC2 Auto Scaling choose which instance type to launch first based on the order in which your instance types are listed.

   1. For **Capacity Rebalancing**, choose whether to enable or disable Capacity Rebalancing. Use Capacity Rebalancing to automatically respond when your Spot Instances approach termination from a Spot interruption. For more information, see [Capacity Rebalancing in Auto Scaling to replace at-risk Spot Instances](ec2-auto-scaling-capacity-rebalancing.md). 

   1. Under **Network**, for **VPC**, choose a VPC. The Auto Scaling group must be created in the same VPC as the security group you specified in your launch template.

   1. For **Availability Zones and subnets**, choose one or more subnets in the specified VPC. Use subnets in multiple Availability Zones for high availability. For more information, see [Considerations when choosing VPC subnets](asg-in-vpc.md#as-vpc-considerations).

   1. Choose **Next**, **Next**.

1. For the **Configure group size and scaling policies** step, do the following:

   1. Under **Group size**, for **Desired capacity**, enter the initial number of instances to launch. 

      By default, the desired capacity is expressed as the number of instances. If you assigned weights to your instance types, you must convert this value to the same unit of measurement that you used to assign weights, such as the number of vCPUs. 

   1. In the **Scaling** section, under **Scaling limits**, if your new value for **Desired capacity** is greater than **Max desired capacity**, the **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed. For more information, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

1. Choose **Skip to review**.

1. On the **Review** page, choose **Create Auto Scaling group**.

## Create a mixed instances group (AWS CLI)


**To create a mixed instances group using the command line**  
Use one of the following commands:
+ [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) (AWS CLI)
+ [New-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/New-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

## Example configurations


The following example configurations show how to create mixed instances groups using different Spot allocation strategies.

**Note**  
These examples show how to use a configuration file formatted in JSON or YAML. If you use AWS CLI version 1, you must specify a JSON-formatted configuration file. If you use AWS CLI version 2, you can specify a configuration file formatted in either YAML or JSON.

**Topics**
+ [Example 1: Launch Spot Instances using the `capacity-optimized` allocation strategy](#capacity-optimized-aws-cli)
+ [Example 2: Launch Spot Instances using the `capacity-optimized-prioritized` allocation strategy](#capacity-optimized-prioritized-aws-cli)
+ [Example 3: Launch Spot Instances using the `lowest-price` allocation strategy diversified over two pools](#lowest-price-aws-cli)
+ [Example 4: Launch Spot Instances using the `price-capacity-optimized` allocation strategy](#price-capacity-optimized-aws-cli)

### Example 1: Launch Spot Instances using the `capacity-optimized` allocation strategy


The following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command creates an Auto Scaling group that specifies the following:
+ The percentage of the group to launch as On-Demand Instances (`0`) and a base number of On-Demand Instances to start with (`1`).
+ The instance types to launch in priority order (`c5.large`, `c5a.large`, `m5.large`, `m5a.large`, `c4.large`, `m4.large`, `c3.large`, `m3.large`).
+ The subnets in which to launch the instances (`subnet-5ea0c127`, `subnet-6194ea3b`, `subnet-c934b782`). Each corresponds to a different Availability Zone.
+ The launch template (`my-launch-template`) and the launch template version (`$Default`).

When Amazon EC2 Auto Scaling attempts to fulfill your On-Demand capacity, it launches the `c5.large` instance type first. The Spot Instances come from the optimal Spot pool in each Availability Zone based on Spot Instance capacity.

#### JSON


```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
    "AutoScalingGroupName": "my-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Default"
            },
            "Overrides": [
                {
                    "InstanceType": "c5.large"
                },
                {
                    "InstanceType": "c5a.large"
                },
                {
                    "InstanceType": "m5.large"
                },
                {
                    "InstanceType": "m5a.large"
                },
                {
                    "InstanceType": "c4.large"
                },
                {
                    "InstanceType": "m4.large"
                },
                {
                    "InstanceType": "c3.large"
                },
                {
                    "InstanceType": "m3.large"
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized"
        }
    },
    "MinSize": 1,
    "MaxSize": 5,
    "DesiredCapacity": 3,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782"
}
```

#### YAML


Alternatively, you can use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create the Auto Scaling group. This references a YAML file as the sole parameter for your Auto Scaling group.

```
aws autoscaling create-auto-scaling-group --cli-input-yaml file://~/config.yaml
```

The `config.yaml` file contains the following content.

```
---
AutoScalingGroupName: my-asg
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateName: my-launch-template
      Version: $Default
    Overrides:
    - InstanceType: c5.large
    - InstanceType: c5a.large
    - InstanceType: m5.large
    - InstanceType: m5a.large
    - InstanceType: c4.large
    - InstanceType: m4.large
    - InstanceType: c3.large
    - InstanceType: m3.large
  InstancesDistribution:
    OnDemandBaseCapacity: 1
    OnDemandPercentageAboveBaseCapacity: 0
    SpotAllocationStrategy: capacity-optimized
MinSize: 1
MaxSize: 5
DesiredCapacity: 3
VPCZoneIdentifier: subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782
```
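After the group is created, you can check which instance types were actually launched. The following AWS CLI sketch (assuming the group name `my-asg` from the example above) lists each instance with its type and lifecycle state:

```
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query "AutoScalingGroups[0].Instances[*].[InstanceId,InstanceType,LifecycleState]" \
  --output table
```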

### Example 2: Launch Spot Instances using the `capacity-optimized-prioritized` allocation strategy


The following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command creates an Auto Scaling group that specifies the following:
+ The percentage of the group to launch as On-Demand Instances (`0`) and a base number of On-Demand Instances to start with (`1`).
+ The instance types to launch in priority order (`c5.large`, `c5a.large`, `m5.large`, `m5a.large`, `c4.large`, `m4.large`, `c3.large`, `m3.large`).
+ The subnets in which to launch the instances (`subnet-5ea0c127`, `subnet-6194ea3b`, `subnet-c934b782`). Each corresponds to a different Availability Zone.
+ The launch template (`my-launch-template`) and the launch template version (`$Latest`).

When Amazon EC2 Auto Scaling attempts to fulfill your On-Demand capacity, it launches the `c5.large` instance type first. When Amazon EC2 Auto Scaling attempts to fulfill your Spot capacity, it honors the instance type priorities on a best-effort basis. However, it optimizes for capacity first.

#### JSON


```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content. 

```
{
    "AutoScalingGroupName": "my-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest"
            },
            "Overrides": [
                {
                    "InstanceType": "c5.large"
                },
                {
                    "InstanceType": "c5a.large"
                },
                {
                    "InstanceType": "m5.large"
                },
                {
                    "InstanceType": "m5a.large"
                },
                {
                    "InstanceType": "c4.large"
                },
                {
                    "InstanceType": "m4.large"
                },
                {
                    "InstanceType": "c3.large"
                },
                {
                    "InstanceType": "m3.large"
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized-prioritized"
        }
    },
    "MinSize": 1,
    "MaxSize": 5,
    "DesiredCapacity": 3,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782"
}
```

#### YAML


Alternatively, you can use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create the Auto Scaling group. This references a YAML file as the sole parameter for your Auto Scaling group. 

```
aws autoscaling create-auto-scaling-group --cli-input-yaml file://~/config.yaml
```

The `config.yaml` file contains the following content. 

```
---
AutoScalingGroupName: my-asg
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateName: my-launch-template
      Version: $Latest
    Overrides:
    - InstanceType: c5.large
    - InstanceType: c5a.large
    - InstanceType: m5.large
    - InstanceType: m5a.large
    - InstanceType: c4.large
    - InstanceType: m4.large
    - InstanceType: c3.large
    - InstanceType: m3.large
  InstancesDistribution:
    OnDemandBaseCapacity: 1
    OnDemandPercentageAboveBaseCapacity: 0
    SpotAllocationStrategy: capacity-optimized-prioritized
MinSize: 1
MaxSize: 5
DesiredCapacity: 3
VPCZoneIdentifier: subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782
```

### Example 3: Launch Spot Instances using the `lowest-price` allocation strategy diversified over two pools


The following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command creates an Auto Scaling group that specifies the following:
+ The percentage of the group to launch as On-Demand Instances (`50`). (This does not specify a base number of On-Demand Instances to start with.)
+ The instance types to launch in priority order (`c5.large`, `c5a.large`, `m5.large`, `m5a.large`, `c4.large`, `m4.large`, `c3.large`, `m3.large`). 
+ The subnets in which to launch the instances (`subnet-5ea0c127`, `subnet-6194ea3b`, `subnet-c934b782`). Each corresponds to a different Availability Zone.
+ The launch template (`my-launch-template`) and the launch template version (`$Latest`).

When Amazon EC2 Auto Scaling attempts to fulfill your On-Demand capacity, it launches the `c5.large` instance type first. For your Spot capacity, Amazon EC2 Auto Scaling attempts to launch the Spot Instances evenly across the two lowest-priced pools in each Availability Zone.

#### JSON


```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content. 

```
{
    "AutoScalingGroupName": "my-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest"
            },
            "Overrides": [
                {
                    "InstanceType": "c5.large"
                },
                {
                    "InstanceType": "c5a.large"
                },
                {
                    "InstanceType": "m5.large"
                },
                {
                    "InstanceType": "m5a.large"
                },
                {
                    "InstanceType": "c4.large"
                },
                {
                    "InstanceType": "m4.large"
                },
                {
                    "InstanceType": "c3.large"
                },
                {
                    "InstanceType": "m3.large"
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 50,
            "SpotAllocationStrategy": "lowest-price",
            "SpotInstancePools": 2
        }
    },
    "MinSize": 1,
    "MaxSize": 5,
    "DesiredCapacity": 3,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782"
}
```

#### YAML


Alternatively, you can use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create the Auto Scaling group. This references a YAML file as the sole parameter for your Auto Scaling group. 

```
aws autoscaling create-auto-scaling-group --cli-input-yaml file://~/config.yaml
```

The `config.yaml` file contains the following content. 

```
---
AutoScalingGroupName: my-asg
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateName: my-launch-template
      Version: $Latest
    Overrides:
    - InstanceType: c5.large
    - InstanceType: c5a.large
    - InstanceType: m5.large
    - InstanceType: m5a.large
    - InstanceType: c4.large
    - InstanceType: m4.large
    - InstanceType: c3.large
    - InstanceType: m3.large
  InstancesDistribution:
    OnDemandPercentageAboveBaseCapacity: 50
    SpotAllocationStrategy: lowest-price
    SpotInstancePools: 2
MinSize: 1
MaxSize: 5
DesiredCapacity: 3
VPCZoneIdentifier: subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782
```

### Example 4: Launch Spot Instances using the `price-capacity-optimized` allocation strategy


The following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command creates an Auto Scaling group that specifies the following:
+ The percentage of the group to launch as On-Demand Instances (`30`). (This does not specify a base number of On-Demand Instances to start with.)
+ The instance types to launch in priority order (`c5.large`, `c5a.large`, `m5.large`, `m5a.large`, `c4.large`, `m4.large`, `c3.large`, `m3.large`). 
+ The subnets in which to launch the instances (`subnet-5ea0c127`, `subnet-6194ea3b`, `subnet-c934b782`). Each corresponds to a different Availability Zone.
+ The launch template (`my-launch-template`) and the launch template version (`$Latest`).

When Amazon EC2 Auto Scaling attempts to fulfill your On-Demand capacity, it launches the `c5.large` instance type first. For your Spot capacity, Amazon EC2 Auto Scaling attempts to launch the Spot Instances from Spot Instance pools with the lowest price possible, but also with optimal capacity for the number of instances that are launching.

#### JSON


```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content. 

```
{
    "AutoScalingGroupName": "my-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest"
            },
            "Overrides": [
                {
                    "InstanceType": "c5.large"
                },
                {
                    "InstanceType": "c5a.large"
                },
                {
                    "InstanceType": "m5.large"
                },
                {
                    "InstanceType": "m5a.large"
                },
                {
                    "InstanceType": "c4.large"
                },
                {
                    "InstanceType": "m4.large"
                },
                {
                    "InstanceType": "c3.large"
                },
                {
                    "InstanceType": "m3.large"
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 30,
            "SpotAllocationStrategy": "price-capacity-optimized"
        }
    },
    "MinSize": 1,
    "MaxSize": 5,
    "DesiredCapacity": 3,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782"
}
```

#### YAML


Alternatively, you can use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create the Auto Scaling group. This references a YAML file as the sole parameter for your Auto Scaling group. 

```
aws autoscaling create-auto-scaling-group --cli-input-yaml file://~/config.yaml
```

The `config.yaml` file contains the following content. 

```
---
AutoScalingGroupName: my-asg
MixedInstancesPolicy:
  LaunchTemplate:
    LaunchTemplateSpecification:
      LaunchTemplateName: my-launch-template
      Version: $Latest
    Overrides:
    - InstanceType: c5.large
    - InstanceType: c5a.large
    - InstanceType: m5.large
    - InstanceType: m5a.large
    - InstanceType: c4.large
    - InstanceType: m4.large
    - InstanceType: c3.large
    - InstanceType: m3.large
  InstancesDistribution:
    OnDemandPercentageAboveBaseCapacity: 30
    SpotAllocationStrategy: price-capacity-optimized
MinSize: 1
MaxSize: 5
DesiredCapacity: 3
VPCZoneIdentifier: subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782
```

# Configure an Auto Scaling group to use instance weights

When you use multiple instance types, you can specify how many units to associate with each instance type, and then specify the capacity of your group with the same unit of measurement. This capacity specification option is known as weights.

For example, let's say that you run a compute-intensive application that performs best with at least 8 vCPUs and 15 GiB of RAM. If you use `c5.2xlarge` as your base unit, any of the following EC2 instance types would meet your application needs. 


**Instance types example**  

| Instance type | vCPU | Memory (GiB) | 
| --- | --- | --- | 
| c5.2xlarge  |  8  | 16 | 
| c5.4xlarge | 16 | 32 | 
| c5.12xlarge | 48 | 96 | 
| c5.18xlarge  | 72 | 144 | 
| c5.24xlarge | 96 | 192 | 

By default, all instance types have equal weight regardless of size. In other words, whether Amazon EC2 Auto Scaling launches a large or small instance type, each instance counts the same toward the desired capacity of the Auto Scaling group.

With weights, however, you assign a number value that specifies how many units to associate with each instance type. For example, if the instances are of different sizes, a `c5.2xlarge` instance could have the weight of 2, and a `c5.4xlarge` (which is two times bigger) could have the weight of 4, and so on. Then, when Amazon EC2 Auto Scaling scales the group, these weights translate into the number of units that each instance counts toward your desired capacity. 
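As a hypothetical sketch of the arithmetic only (not an AWS API call), a desired capacity of 12 units could be fulfilled by any of the following combinations, given the weights above:

```
# Illustrative only: assumes weights of 2 (c5.2xlarge) and 4 (c5.4xlarge).
# Each combination totals 12 units of desired capacity.
echo $(( 6 * 2 ))          # six c5.2xlarge instances
echo $(( 3 * 4 ))          # three c5.4xlarge instances
echo $(( 2 * 2 + 2 * 4 ))  # two of each type
```

Amazon EC2 Auto Scaling picks the actual combination based on the allocation strategy; this sketch only shows how the units add up.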

The weights do not change which instance types Amazon EC2 Auto Scaling chooses to launch; instead, the allocation strategies do that. For more information, see [Allocation strategies for multiple instance types](allocation-strategies.md).

**Important**  
To configure an Auto Scaling group to fulfill its desired capacity using the number of vCPUs or the amount of memory of each instance type, we recommend using attribute-based instance type selection. Setting the `DesiredCapacityType` parameter automatically specifies the number of units to associate with each instance type based on the value that you set for this parameter. For more information, see [Create mixed instances group using attribute-based instance type selection](create-mixed-instances-group-attribute-based-instance-type-selection.md).

**Topics**
+ [Considerations](#weights-considerations)
+ [Instance weight behaviors](#instance-weighting-behaviors)
+ [Configure an Auto Scaling group to use weights](configue-auto-scaling-group-to-use-weights.md)
+ [Spot price per unit hour example](weights-spot-price-per-unit-hour-example.md)

## Considerations


This section discusses key considerations for effectively implementing weights.
+ Choose a few instance types that match your application's performance needs. Decide the weight each instance type should count toward the desired capacity of your Auto Scaling group based on its capabilities. These weights apply to current and future instances.
+ Avoid large ranges between weights. For example, don't specify a weight of 1 for an instance type when the next larger instance type has a weight of 200. The difference between the smallest and largest weights shouldn't be extreme, either. Extreme weight differences can negatively impact cost-performance optimization.
+ Specify the group's desired capacity in units, not instances. For example, if you use vCPU-based weights, set your desired number of cores and also the minimum and maximum.
+ Set your weights and desired capacity so that the desired capacity is at least two to three times larger than your largest weight.

Note the following when updating existing groups:
+ When you add weights to an existing group, include weights for all instance types currently in use.
+ When you add or change weights, Amazon EC2 Auto Scaling will launch or terminate instances to reach the desired capacity based on the new weight values.
+ If you remove an instance type, running instances of that type keep their last assigned weight, even though the weight is no longer defined.

## Instance weight behaviors


When you use instance weights, Amazon EC2 Auto Scaling behaves in the following way:
+ Current capacity will either be at the desired capacity or above it. Current capacity can exceed the desired capacity when the weight of a newly launched instance exceeds the remaining desired capacity units. For example, suppose that you specify two instance types, `c5.2xlarge` and `c5.12xlarge`, and you assign instance weights of 2 for `c5.2xlarge` and 12 for `c5.12xlarge`. If five units remain to fulfill the desired capacity, and Amazon EC2 Auto Scaling provisions a `c5.12xlarge`, the desired capacity is exceeded by seven units. 
+ When launching instances, Amazon EC2 Auto Scaling prioritizes distributing capacity across Availability Zones and respecting allocation strategies over exceeding the desired capacity.
+ Amazon EC2 Auto Scaling can exceed the maximum capacity limit to maintain balance across Availability Zones, using your preferred allocation strategies. The hard limit enforced by Amazon EC2 Auto Scaling is your desired capacity plus your largest weight.
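
The overshoot behavior in the first bullet can be sketched numerically. In this illustrative Python sketch (the desired and current capacity values are hypothetical, chosen so that five units remain when a 12-unit instance launches):

```python
# Illustrative sketch only: the overshoot behavior described above.
# Desired and current capacity values are hypothetical, chosen so that
# five units remain when a c5.12xlarge (weight 12) launches.

weights = {"c5.2xlarge": 2, "c5.12xlarge": 12}
desired = 17   # desired capacity, in units
current = 12   # units already running

remaining = desired - current          # 5 units still needed
current += weights["c5.12xlarge"]      # launch a 12-unit instance
overshoot = current - desired          # exceeded by 7 units

# The overshoot can never exceed the largest weight in the group.
assert overshoot <= max(weights.values())
print(remaining, overshoot)  # 5 7
```

This is why the hard limit is expressed as the desired capacity plus the largest weight: the final launch can overshoot by at most one instance's weight.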

# Configure an Auto Scaling group to use weights


You can configure an Auto Scaling group to use weights, as shown in the following AWS CLI examples. For instructions on using the console, see [Create a mixed instances group by manually choosing instance types](create-mixed-instances-group-manual-instance-type-selection.md).

**To configure a new Auto Scaling group to use weights (AWS CLI)**  
Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command. For example, the following command creates a new Auto Scaling group and assigns weights by specifying the following:
+ The percentage of the group to launch as On-Demand Instances (`0`) 
+ The allocation strategy for Spot Instances in each Availability Zone (`capacity-optimized`)
+ The instance types to launch in priority order (`m4.16xlarge`, `m5.24xlarge`)
+ The instance weights that correspond to the relative size difference (vCPUs) between instance types (`16`, `24`)
+ The subnets in which to launch the instances (`subnet-5ea0c127`, `subnet-6194ea3b`, `subnet-c934b782`), each corresponding to a different Availability Zone
+ The launch template (`my-launch-template`) and the launch template version (`$Latest`)

```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
    "AutoScalingGroupName": "my-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest"
            },
            "Overrides": [
                {
                    "InstanceType": "m4.16xlarge",
                    "WeightedCapacity": "16"
                },
                {
                    "InstanceType": "m5.24xlarge",
                    "WeightedCapacity": "24"
                }
            ]
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,
            "SpotAllocationStrategy": "capacity-optimized"
        }
    },
    "MinSize": 160,
    "MaxSize": 720,
    "DesiredCapacity": 480,
    "VPCZoneIdentifier": "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782",
    "Tags": []
}
```

**To configure an existing Auto Scaling group to use weights (AWS CLI)**  
Use the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command. For example, the following command assigns weights to instance types in an existing Auto Scaling group by specifying the following:
+ The instance types to launch in priority order (`c5.18xlarge`, `c5.24xlarge`, `c5.2xlarge`, `c5.4xlarge`)
+ The instance weights that correspond to the relative size difference (vCPUs) between instance types (`18`, `24`, `2`, `4`)
+ The new, increased desired capacity, which is larger than the largest weight

```
aws autoscaling update-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
    "AutoScalingGroupName": "my-existing-asg",
    "MixedInstancesPolicy": {
        "LaunchTemplate": {
            "Overrides": [
                {
                    "InstanceType": "c5.18xlarge",
                    "WeightedCapacity": "18"
                },
                {
                    "InstanceType": "c5.24xlarge",
                    "WeightedCapacity": "24"
                },
                {
                    "InstanceType": "c5.2xlarge",
                    "WeightedCapacity": "2"
                },
                {
                    "InstanceType": "c5.4xlarge",
                    "WeightedCapacity": "4"
                }
            ]
        }
    },
    "MinSize": 0,
    "MaxSize": 100,
    "DesiredCapacity": 100
}
```

**To verify the weights using the command line**  
Use one of the following commands:
+ [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) (AWS CLI)
+ [Get-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

# Spot price per unit hour example


The following table compares the hourly price for Spot Instances in different Availability Zones in US East (Northern Virginia) with the price for On-Demand Instances in the same Region. The prices shown are example pricing and not current pricing. These are your costs per *instance* hour. 


**Example: Spot pricing per instance hour**  

| Instance type | us-east-1a | us-east-1b | us-east-1c | On-Demand pricing | 
| --- | --- | --- | --- | --- | 
| c5.2xlarge  | \$0.180 | \$0.191 | \$0.170 | \$0.34  | 
| c5.4xlarge | \$0.341 | \$0.361 | \$0.318 | \$0.68 | 
| c5.12xlarge  | \$0.779 | \$0.777  | \$0.777  | \$2.04 | 
| c5.18xlarge  | \$1.207 | \$1.475 | \$1.357 | \$3.06 | 
| c5.24xlarge | \$1.555 | \$1.555 | \$1.555 | \$4.08 | 

With instance weights, you can evaluate your costs based on what you use per *unit* hour. You can determine the price per unit hour by dividing the price for an instance type by the number of units that it represents. For On-Demand Instances, the price per unit hour is the same across different sizes of the same instance type. In contrast, the Spot price per unit hour varies by Spot pool. 

The following example shows how the Spot price per unit hour calculation works with instance weights. For ease of calculation, let's say you want to launch Spot Instances only in `us-east-1a`. The per unit hour price is captured in the following table.


**Example: Spot Price per unit hour**  

| Instance type | us-east-1a | Instance weight | Price per unit hour  | 
| --- | --- | --- | --- | 
| c5.2xlarge  | \$0.180 | 2 | \$0.090 | 
| c5.4xlarge | \$0.341 | 4 | \$0.085 | 
| c5.12xlarge  | \$0.779 | 12 | \$0.065 | 
| c5.18xlarge  | \$1.207 | 18 | \$0.067 | 
| c5.24xlarge | \$1.555 | 24 | \$0.065 | 
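
The price per unit hour column can be reproduced with a short calculation. The Python sketch below is illustrative only and assumes the example (not current) `us-east-1a` Spot prices of $0.180 through $1.555 used in this section:

```python
# Illustrative sketch only: reproduce the "Price per unit hour" column
# by dividing each example us-east-1a Spot price by its instance weight.
# Prices are the example (not current) prices used in this section.

spot_price = {
    "c5.2xlarge": 0.180,
    "c5.4xlarge": 0.341,
    "c5.12xlarge": 0.779,
    "c5.18xlarge": 1.207,
    "c5.24xlarge": 1.555,
}
weight = {
    "c5.2xlarge": 2,
    "c5.4xlarge": 4,
    "c5.12xlarge": 12,
    "c5.18xlarge": 18,
    "c5.24xlarge": 24,
}

for itype, price in spot_price.items():
    print(f"{itype}: ${price / weight[itype]:.3f} per unit hour")
```

Note that the larger instance types here have a lower price per unit hour, which is why evaluating Spot costs per unit rather than per instance can change which pools look cheapest.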

# Use multiple launch templates


In addition to using multiple instance types, you can also use multiple launch templates.

For example, say that you configure an Auto Scaling group for compute-intensive applications and want to include a mix of C5, C5a, and C6g instance types. However, C6g instances feature an AWS Graviton processor based on 64-bit Arm architecture, while the C5 and C5a instances run on 64-bit Intel x86 processors. The AMIs for C5 and C5a instances both work on each of those instances, but not on C6g instances. To solve this problem, use a different launch template for C6g instances. You can still use the same launch template for C5 and C5a instances.
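
The idea above amounts to a per-architecture lookup. This illustrative Python sketch (the template names match the CLI examples in this section; the lookup itself is not a service feature) shows the mapping:

```python
# Illustrative sketch only: choose a launch template by CPU architecture,
# as described above. Template names match the CLI examples in this
# section; the lookup itself is not a service feature.

ARM_FAMILIES = {"c6g"}  # Graviton (Arm) instance families in this group

def launch_template_for(instance_type: str) -> str:
    family = instance_type.split(".")[0]
    if family in ARM_FAMILIES:
        return "my-launch-template-for-arm"
    return "my-launch-template-for-x86"

print(launch_template_for("c6g.large"))  # my-launch-template-for-arm
print(launch_template_for("c5.large"))   # my-launch-template-for-x86
print(launch_template_for("c5a.large"))  # my-launch-template-for-x86
```

In the actual Auto Scaling group configuration, this mapping is expressed by attaching a `LaunchTemplateSpecification` to the `c6g.large` override while the other instance types inherit the group-level launch template.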

This section contains procedures for using the AWS CLI to perform tasks related to using multiple launch templates. Currently, this feature is available only if you use the AWS CLI or an SDK, and is not available from the console. 

**Topics**
+ [Configure an Auto Scaling group to use multiple launch templates](#configue-auto-scaling-group-to-use-multiple-launch-templates)
+ [Related resources](#multiple-launch-templates-related-resources)

## Configure an Auto Scaling group to use multiple launch templates


You can configure an Auto Scaling group to use multiple launch templates, as shown in the following examples. 

**To configure a new Auto Scaling group to use multiple launch templates (AWS CLI)**  
Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command. For example, the following command creates a new Auto Scaling group. It specifies the `c5.large`, `c5a.large`, and `c6g.large` instance types and defines a new launch template for the `c6g.large` instance type to ensure that an appropriate AMI is used to launch Arm instances. Amazon EC2 Auto Scaling uses the order of instance types to determine which instance type to use first when fulfilling On-Demand capacity.

```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
  "AutoScalingGroupName":"my-asg",
  "MixedInstancesPolicy":{
    "LaunchTemplate":{
      "LaunchTemplateSpecification":{
        "LaunchTemplateName":"my-launch-template-for-x86",
        "Version":"$Latest"
      },
      "Overrides":[
        {
          "InstanceType":"c6g.large",
          "LaunchTemplateSpecification": {
            "LaunchTemplateName": "my-launch-template-for-arm",
            "Version": "$Latest"
          }
        },
        {
          "InstanceType":"c5.large"
        },
        {
          "InstanceType":"c5a.large"
        }
      ]
    },
    "InstancesDistribution":{
      "OnDemandBaseCapacity": 1,
      "OnDemandPercentageAboveBaseCapacity": 50,
      "SpotAllocationStrategy": "capacity-optimized"
    }
  },
  "MinSize":1,
  "MaxSize":5,
  "DesiredCapacity":3,
  "VPCZoneIdentifier":"subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782",
  "Tags":[ ]
}
```

**To configure an existing Auto Scaling group to use multiple launch templates (AWS CLI)**  
Use the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command. For example, the following command assigns the launch template named `my-launch-template-for-arm` to the `c6g.large` instance type for the Auto Scaling group named *`my-asg`*.

```
aws autoscaling update-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
  "AutoScalingGroupName":"my-asg",
  "MixedInstancesPolicy":{
    "LaunchTemplate":{
      "Overrides":[
        {
          "InstanceType":"c6g.large",
          "LaunchTemplateSpecification": {
            "LaunchTemplateName": "my-launch-template-for-arm",
            "Version": "$Latest"
          }
        },
        {
          "InstanceType":"c5.large"
        },
        {
          "InstanceType":"c5a.large"
        }
      ]
    }
  }
}
```

**To configure a new Auto Scaling group to use multiple launch templates with attribute-based instance type selection (AWS CLI)**  
Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command. For example, the following command creates a new Auto Scaling group by specifying a launch template for AWS Graviton instances with an Arm AMI and an additional launch template for AMD- or Intel-based instances with an x86 AMI. Then, it uses [attribute-based instance type selection](create-mixed-instances-group-attribute-based-instance-type-selection.md) twice to select from a wide range of instance types for each CPU architecture. You can add a similar configuration to an existing Auto Scaling group with the [update-auto-scaling-group](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/update-auto-scaling-group.html) command.

```
aws autoscaling create-auto-scaling-group --cli-input-json file://~/config.json
```

The `config.json` file contains the following content.

```
{
  "AutoScalingGroupName":"my-asg",
  "MixedInstancesPolicy":{
    "LaunchTemplate":{
      "LaunchTemplateSpecification":{
        "LaunchTemplateName":"my-launch-template-for-arm",
        "Version":"$Latest"
      },
      "Overrides":[
        {
          "InstanceRequirements": {
            "VCpuCount": {"Min": 2},
            "MemoryMiB": {"Min": 2048},
            "CpuManufacturers": ["amazon-web-services"]
          }
         },
         {
           "InstanceRequirements": {
            "VCpuCount": {"Min": 2},
            "MemoryMiB": {"Min": 2048},
            "CpuManufacturers": ["intel", "amd"]
          },
          "LaunchTemplateSpecification": {
            "LaunchTemplateName": "my-launch-template-for-x86",
            "Version": "$Latest"
          }
         }
      ]
    },
    "InstancesDistribution":{
      "OnDemandPercentageAboveBaseCapacity": 0, 
      "SpotAllocationStrategy": "price-capacity-optimized"
    }
  },
  "MinSize":1,
  "MaxSize":10,
  "DesiredCapacity":6,
  "VPCZoneIdentifier":"subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782",
  "Tags":[ ]
}
```

**To verify the launch templates for an Auto Scaling group**  
Use one of the following commands:
+ [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) (AWS CLI)
+ [Get-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/Get-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

## Related resources


You can find an example of specifying multiple launch templates using attribute-based instance type selection in a CloudFormation template on [AWS re:Post](https://repost.aws/articles/ARQeKDQX68TcqipYaaisl6bA/cloudformation-auto-scaling-group-sample-template-for-mixed-x86-intel-amd-and-aws-graviton-instances).

# Create Auto Scaling groups using launch configurations

**Important**  
Limitations:  
As of **January 1, 2023**, new Amazon EC2 instance types are no longer supported in launch configurations. This includes support for any instance types added to an AWS Region after the initial Region launch.
Accounts created on or after **June 1, 2023** cannot create new launch configurations using the console.
Accounts created on or after **October 1, 2024** cannot create new launch configurations using any method (console, API, AWS CLI, or CloudFormation).
Migrate to launch templates to make sure that you don’t need to create new launch configurations now or in the future. For information about migrating your Auto Scaling groups to launch templates, see [Migrate your Auto Scaling groups to launch templates](migrate-to-launch-templates.md).

If you have created a launch configuration or an EC2 instance, you can create an Auto Scaling group that uses a launch configuration as a configuration template for its EC2 instances. The launch configuration specifies information such as the AMI ID, instance type, key pair, security groups, and block device mapping for your instances. For information about creating launch configurations, see [Create a launch configuration](create-launch-config.md).

You must have sufficient permissions to create an Auto Scaling group. You must also have sufficient permissions to create the service-linked role that Amazon EC2 Auto Scaling uses to perform actions on your behalf if it does not yet exist. For examples of IAM policies that an administrator can use as a reference for granting you permissions, see [Identity-based policy examples](security_iam_id-based-policy-examples.md).

**Topics**
+ [Create an Auto Scaling group using a launch configuration](create-asg-launch-configuration.md)
+ [Create an Auto Scaling group from existing instance using the AWS CLI](create-asg-from-instance.md)

# Create an Auto Scaling group using a launch configuration

**Important**  
We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates. For information about migrating your Auto Scaling groups to launch templates, see [Migrate your Auto Scaling groups to launch templates](migrate-to-launch-templates.md).

When you create an Auto Scaling group, you must specify the necessary information to configure the Amazon EC2 instances, the Availability Zones and VPC subnets for the instances, the desired capacity, and the minimum and maximum capacity limits.

The following procedure demonstrates how to create an Auto Scaling group using a launch configuration. You cannot modify a launch configuration after it is created, but you can replace the launch configuration for an Auto Scaling group. For more information, see [Change the launch configuration for an Auto Scaling group](change-launch-config.md). 

**Prerequisites**
+ You must have created a launch configuration. For more information, see [Create a launch configuration](create-launch-config.md).

**To create an Auto Scaling group using a launch configuration (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the same AWS Region that you used when you created the launch configuration.

1. Choose **Create an Auto Scaling group**.

1. On the **Choose launch template or configuration** page, for **Auto Scaling group name**, enter a name for your Auto Scaling group.

1. To choose a launch configuration, do the following:

   1. For **Launch template**, choose **Switch to launch configuration**.

   1. For **Launch configuration**, choose an existing launch configuration.

   1. Verify that your launch configuration supports all of the options that you are planning to use, and then choose **Next**.

1. On the **Configure instance launch options** page, under **Network**, for **VPC**, choose a VPC. The Auto Scaling group must be created in the same VPC as the security group you specified in your launch configuration.

1. For **Availability Zones and subnets**, choose one or more subnets in the specified VPC. Use subnets in multiple Availability Zones for high availability. For more information, see [Considerations when choosing VPC subnets](asg-in-vpc.md#as-vpc-considerations).

1. Choose **Next**. 

   Or, you can accept the rest of the defaults, and choose **Skip to review**. 

1. (Optional) On the **Configure advanced options** page, configure the following options, and then choose **Next**:

   1. (Optional) For **Health checks**, **Additional health check types**, select **Turn on Amazon EBS health checks**. For more information, see [Monitor Auto Scaling instances with impaired Amazon EBS volumes using health checks](monitor-and-replace-instances-with-impaired-ebs-volumes.md).

   1. (Optional) For **Health check grace period**, enter the amount of time, in seconds, that Amazon EC2 Auto Scaling waits before checking the health status of an instance after it enters the `InService` state. For more information, see [Set the health check grace period for an Auto Scaling group](health-check-grace-period.md). 

   1. Under **Additional settings**, **Monitoring**, choose whether to enable CloudWatch group metrics collection. These metrics provide measurements that can be indicators of a potential issue, such as number of terminating instances or number of pending instances. For more information, see [Monitor CloudWatch metrics for your Auto Scaling groups and instances](ec2-auto-scaling-cloudwatch-monitoring.md).

   1. For **Enable default instance warmup**, select this option and choose the warmup time for your application. If you are creating an Auto Scaling group that has a scaling policy, the default instance warmup feature improves the Amazon CloudWatch metrics used for dynamic scaling. For more information, see [Set the default instance warmup for an Auto Scaling group](ec2-auto-scaling-default-instance-warmup.md).

1. (Optional) On the **Configure group size and scaling policies** page, configure the following options, and then choose **Next**:

   1. Under **Group size**, for **Desired capacity**, enter the initial number of instances to launch. 

   1. In the **Scaling** section, under **Scaling limits**, if your new value for **Desired capacity** is greater than **Min desired capacity** and **Max desired capacity**, the **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed. For more information, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

   1. For **Automatic scaling**, choose whether you want to create a target tracking scaling policy. You can also create this policy after you create your Auto Scaling group.

      If you choose **Target tracking scaling policy**, follow the directions in [Create a target tracking scaling policy](policy_creating.md) to create the policy.

   1. For **Instance maintenance policy**, choose whether you want to create an instance maintenance policy. You can also create this policy after you create your Auto Scaling group. Follow the directions in [Set an instance maintenance policy](set-instance-maintenance-policy.md) to create the policy.

   1. Under **Instance scale-in protection**, choose whether to enable instance scale-in protection. For more information, see [Use instance scale-in protection to control instance termination](ec2-auto-scaling-instance-protection.md).

1. (Optional) To receive notifications, for **Add notification**, configure the notification, and then choose **Next**. For more information, see [Amazon SNS notification options for Amazon EC2 Auto Scaling](ec2-auto-scaling-sns-notifications.md).

1. (Optional) To add tags, choose **Add tag**, provide a tag key and value for each tag, and then choose **Next**. For more information, see [Tag Auto Scaling groups and instances](ec2-auto-scaling-tagging.md).

1. On the **Review** page, choose **Create Auto Scaling group**.

**To create an Auto Scaling group using the command line**

You can use one of the following commands:
+ [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) (AWS CLI)
+ [New-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/New-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

# Create an Auto Scaling group from existing instance using the AWS CLI

**Important**  
We provide information about launch configurations for customers who have not yet migrated from launch configurations to launch templates. For information about migrating your Auto Scaling groups to launch templates, see [Migrate your Auto Scaling groups to launch templates](migrate-to-launch-templates.md).

If this is your first time creating an Auto Scaling group, we recommend you use the console to create a launch template from an existing EC2 instance. Then use the launch template to create a new Auto Scaling group. For this procedure, see [Create an Auto Scaling group using the Amazon EC2 launch wizard](create-asg-ec2-wizard.md).

The following procedure shows how to create an Auto Scaling group by specifying an existing instance to use as a base for launching other instances. Multiple parameters are required to create an EC2 instance, such as the Amazon Machine Image (AMI) ID, instance type, key pair, and security group. All of this information is also used by Amazon EC2 Auto Scaling to launch instances on your behalf when there is a need to scale. This information is stored in either a launch template or a launch configuration. 

When you use an existing instance, Amazon EC2 Auto Scaling creates an Auto Scaling group that launches instances based on a launch configuration that's created at the same time. The new launch configuration has the same name as the Auto Scaling group, and it includes certain configuration details from the identified instance.

The following configuration details are copied from the identified instance into the launch configuration: 
+ AMI ID
+ Instance type
+ Key pair
+ Security groups
+ IP address type (public or private)
+ IAM instance profile, if applicable
+ Monitoring (true or false)
+ EBS optimized (true or false)
+ Tenancy setting, if launching into a VPC (shared or dedicated)
+ Kernel ID and RAM disk ID, if applicable
+ User data, if specified 
+ Spot (maximum) price

The VPC subnet and Availability Zone are copied from the identified instance to the Auto Scaling group's own resource definition. 

If the identified instance is in a placement group, the new Auto Scaling group launches instances into the same placement group as the identified instance. Because the launch configuration settings do not allow a placement group to be specified, the placement group is copied to the `PlacementGroup` attribute of the new Auto Scaling group.

The following configuration details are not copied from your identified instance:
+ Storage: The block devices (EBS volumes and instance store volumes) are not copied from the identified instance. Instead, the block device mapping created as part of creating the AMI determines which devices are used.
+ Number of network interfaces: The network interfaces are not copied from your identified instance. Instead, Amazon EC2 Auto Scaling uses its default settings to create one network interface, which is the primary network interface (eth0).
+ Instance metadata options: The metadata accessible, metadata version, and token response hop limit settings are not copied from the identified instance. Instead, Amazon EC2 Auto Scaling uses its default settings. For more information, see [Configure the instance metadata options](create-launch-config.md#launch-configurations-imds).
+ Load balancers: If the identified instance is registered with one or more load balancers, the information about the load balancer is not copied to the load balancer or target group attribute of the new Auto Scaling group.
+ Tags: If the identified instance has tags, the tags are not copied to the `Tags` attribute of the new Auto Scaling group.

## Prerequisites


The EC2 instance must meet the following criteria:
+ The instance is not a member of another Auto Scaling group.
+ The instance is in the `running` state.
+ The AMI that was used to launch the instance must still exist.

## Create an Auto Scaling group from an EC2 instance (AWS CLI)


The following procedure shows you how to use a CLI command to create an Auto Scaling group from an EC2 instance.

This procedure does not add the instance to the Auto Scaling group. For the instance to be attached, you must run the [attach-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/attach-instances.html) command after the Auto Scaling group has been created.

Before you begin, find the ID of the EC2 instance using the Amazon EC2 console or the [describe-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/describe-instances.html) command.

**To use your current instance as a template**
+ Use the following [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create an Auto Scaling group, `my-asg-from-instance`, from the EC2 instance `i-123456789abcdefg0`.

  ```
  aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg-from-instance \
    --instance-id i-123456789abcdefg0 --min-size 1 --max-size 2 --desired-capacity 2
  ```

**To verify that your Auto Scaling group has launched instances**
+ Use the following [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) command to verify that the Auto Scaling group was created successfully.

  ```
  aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-asg-from-instance
  ```

  The following example response shows that the desired capacity of the group is 2, the group has 2 running instances, and the launch configuration is named `my-asg-from-instance`.

  ```
  {
    "AutoScalingGroups":[
      {
        "AutoScalingGroupName":"my-asg-from-instance",
        "AutoScalingGroupARN":"arn",
        "LaunchConfigurationName":"my-asg-from-instance",
        "MinSize":1,
        "MaxSize":2,
        "DesiredCapacity":2,
        "DefaultCooldown":300,
        "AvailabilityZones":[
          "us-west-2a"
        ],
        "LoadBalancerNames":[],
        "TargetGroupARNs":[],
        "HealthCheckType":"EC2",
        "HealthCheckGracePeriod":0,
        "Instances":[
          {
            "InstanceId":"i-34567890abcdef012",
            "InstanceType":"t2.micro",
            "AvailabilityZone":"us-west-2a",
            "LifecycleState":"InService",
            "HealthStatus":"Healthy",
            "LaunchConfigurationName":"my-asg-from-instance",
            "ProtectedFromScaleIn":false
          },
          {
            "InstanceId":"i-012345abcdefg6789",
            "InstanceType":"t2.micro",
            "AvailabilityZone":"us-west-2a",
            "LifecycleState":"InService",
            "HealthStatus":"Healthy",
            "LaunchConfigurationName":"my-asg-from-instance",
            "ProtectedFromScaleIn":false
          }
        ],
        "CreatedTime":"2020-10-28T02:39:22.152Z",
        "SuspendedProcesses":[ ],
        "VPCZoneIdentifier":"subnet-0abc1234",
        "EnabledMetrics":[ ],
        "Tags":[ ],
        "TerminationPolicies":[
          "Default"
        ],
        "NewInstancesProtectedFromScaleIn":false,
        "ServiceLinkedRoleARN":"arn",
        "TrafficSources":[]
      }
    ]
  }
  ```

**To view the launch configuration**
+ Use the following [describe-launch-configurations](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-launch-configurations.html) command to view the details of the launch configuration.

  ```
  aws autoscaling describe-launch-configurations --launch-configuration-names my-asg-from-instance
  ```

  The following is example output:

  ```
  {
    "LaunchConfigurations":[
      {
        "LaunchConfigurationName":"my-asg-from-instance",
        "LaunchConfigurationARN":"arn",
        "ImageId":"ami-234567890abcdefgh",
        "KeyName":"my-key-pair-uswest2",
        "SecurityGroups":[
          "sg-12abcdefgh3456789"
        ],
        "ClassicLinkVPCSecurityGroups":[ ],
        "UserData":"",
        "InstanceType":"t2.micro",
        "KernelId":"",
        "RamdiskId":"",
        "BlockDeviceMappings":[ ],
        "InstanceMonitoring":{
          "Enabled":true
        },
        "CreatedTime":"2020-10-28T02:39:22.321Z",
        "EbsOptimized":false,
        "AssociatePublicIpAddress":true
      }
    ]
  }
  ```

**To terminate the instance**
+ If you no longer need the instance, you can terminate it. The following [terminate-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/terminate-instances.html) command terminates the instance `i-123456789abcdefg0`. 

  ```
  aws ec2 terminate-instances --instance-ids i-123456789abcdefg0
  ```

  After you terminate an Amazon EC2 instance, you can't restart the instance. After termination, its data is gone and the volume can't be attached to any instance. To learn more about terminating instances, see [Terminate an instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/terminating-instances.html#terminating-instances-console) in the *Amazon EC2 User Guide*.

# Launch instances synchronously

Amazon EC2 Auto Scaling provides two methods for launching instances in your Auto Scaling group: asynchronous scaling behavior and synchronous provisioning using the LaunchInstances API.

With synchronous provisioning, you use the LaunchInstances API to request a specific number of instances in a particular Availability Zone. Synchronous provisioning provides the following benefits:
+ Immediate feedback on capacity availability in specific Availability Zones
+ Precise control over which Availability Zone instances launch in
+ Deterministic instance IDs for immediate use in orchestration systems
+ Real-time scaling decisions based on actual capacity constraints
+ Faster scaling by eliminating wait times for asynchronous Auto Scaling launches

With asynchronous Auto Scaling, when you change the desired capacity or when a scaling policy triggers, Amazon EC2 Auto Scaling processes the scaling request and launches instances in the background. You must monitor scaling activities or describe your Auto Scaling group to determine when instances are successfully launched.
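In a script, this monitoring step usually means describing the group and counting `InService` instances until they match the desired capacity. The following sketch parses a canned response (in the shape returned by `describe-auto-scaling-groups` elsewhere in this guide) so the counting logic runs on its own; the comment shows the live call it stands in for:

```
# Hypothetical polling step: count InService instances in a
# describe-auto-scaling-groups response. In practice, produce the JSON with:
#   aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-asg
# A canned response is used here so the parsing is self-contained.
cat > /tmp/asg-response.json <<'EOF'
{"AutoScalingGroups": [{"DesiredCapacity": 2,
  "Instances": [
    {"InstanceId": "i-34567890abcdef012", "LifecycleState": "InService"},
    {"InstanceId": "i-012345abcdefg6789", "LifecycleState": "Pending"}]}]}
EOF
in_service=$(python3 -c '
import json
group = json.load(open("/tmp/asg-response.json"))["AutoScalingGroups"][0]
print(sum(1 for i in group["Instances"] if i["LifecycleState"] == "InService"))')
desired=$(python3 -c '
import json
print(json.load(open("/tmp/asg-response.json"))["AutoScalingGroups"][0]["DesiredCapacity"])')
echo "InService: $in_service of $desired desired"   # repeat until they match
```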

**Note**  
The LaunchInstances API only works with Auto Scaling groups that use launch templates. Auto Scaling groups that use launch configurations are not supported. If your Auto Scaling group uses a launch configuration, you must migrate to a launch template before using synchronous provisioning.
The LaunchInstances API supports mixed instances policies with fully On-Demand or fully Spot purchasing options only. Mixed policies combining both On-Demand and Spot Instances are not supported.
For Auto Scaling groups covering multiple Availability Zones, you must specify the target Availability Zone or subnet. For single-AZ groups, this parameter is optional.

## Synchronous provisioning and asynchronous scaling


### Synchronous provisioning


When you use the LaunchInstances API, Amazon EC2 Auto Scaling:
+ Immediately attempts to launch the requested instances using CreateFleet
+ Waits for CreateFleet to return instance IDs before responding
+ Returns instance IDs, instance types, and Availability Zone information on success
+ Returns specific error codes and details on failure
+ Provides immediate feedback, enabling real-time scaling decisions

### Asynchronous scaling


When you use asynchronous Auto Scaling methods such as changing the desired capacity or using scaling policies, Amazon EC2 Auto Scaling:
+ Updates the group's desired capacity but doesn't return instance IDs immediately
+ Plans instance launches across Availability Zones automatically
+ Launches instances through background workflows
+ Automatically distributes capacity across multiple Availability Zones for balance
+ Handles launch failures with built-in retry logic

You must poll scaling activities or describe your Auto Scaling group to check the status of launch operations.

## Limitations and considerations


When working with synchronous provisioning, keep in mind the following notes and limitations:
+ **Instance state after launch** – Instances returned by the API are in pending state. They may still fail during subsequent workflow processes or lifecycle hooks. A successful API response means that EC2 has accepted the launch request and returned the instance IDs. Instances aren't automatically considered fully ready for workloads and must complete standard EC2 and Auto Scaling lifecycle processes.
+ **Warm pool limitation** – Auto Scaling groups with warm pools are currently not supported. If you attempt to call the LaunchInstances API on an Auto Scaling group that has a warm pool configured, the API performs a cold start instead of using warm pool instances and returns an UnsupportedOperation error. For more information about cold starts, see [Limitations of warm pools](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html#warm-pools-limitations).
+ **API timeout and retries** – If the underlying CreateFleet operation takes longer than expected, the API call may time out and return an idempotency token. You can retry with the same ClientToken to track the original launch operation, or use `describe-instances` with the client token to check which instances launched.
+ **Availability Zone constraints** – If your Auto Scaling group spans multiple Availability Zones and has Availability Zone rebalancing enabled, launching instances synchronously can cause operational conflicts:
  + Single AZ limitation per call – Each LaunchInstances API call can only target one Availability Zone, even if your Auto Scaling group spans multiple zones.
  + AZ rebalancing conflicts – If your Auto Scaling group has AZ rebalancing enabled, sequential calls across different AZs may trigger additional asynchronous launches, resulting in more instances than intended. Consider suspending AZ rebalancing for precise capacity control. For more information, see [Suspend and resume Amazon EC2 Auto Scaling processes](as-suspend-resume-processes.md).
+ **Partial success scenarios** – The `LaunchInstances` API may return partial success if only some of the requested capacity is available, which is normal EC2 behavior. The API returns successfully launched instances along with error details for failed launches. For use cases requiring all instances to launch together (such as applications needing all instances in the same AZ for low latency), you'll need to terminate partially launched instances and retry in a different AZ. Consider this behavior when designing retry logic for capacity-sensitive workloads.
+ **Instance weights** – If your Auto Scaling group uses instance weights, the RequestedCapacity parameter represents weighted capacity units, not the number of instances. The actual number of instances launched depends on the instance types selected and their configured weights. EC2 Auto Scaling limits launches to 100 instances per API call, regardless of the weighted capacity requested.
+ **Mixed instance types** – The LaunchInstances API uses your Auto Scaling group's existing mixed instances policy to determine which instance types to launch. The API launches instances according to your group's allocation strategy and instance type priorities.
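To make the **Instance weights** note above concrete, here is an illustrative calculation, assuming the request is fulfilled by a single instance type (the service's actual fulfillment can differ):

```
# Illustrative only: translating weighted capacity units into an instance count.
requested_capacity=10   # RequestedCapacity in weighted units, not instances
weight=4                # configured weight of the fulfilling instance type
# Ceiling division: launch enough instances to cover the requested units
instances=$(( (requested_capacity + weight - 1) / weight ))
echo "$instances instances supply $((instances * weight)) weighted units"
```

Note that the launched capacity (12 units here) can exceed the requested units when the weight doesn't divide evenly.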

# Launching instances with synchronous provisioning


You can use the LaunchInstances API to synchronously launch a specific number of instances in your Auto Scaling group. The API launches instances in the Availability Zone or subnet that you specify and immediately returns the instance IDs or error information.

## Prerequisites


Before you can use the LaunchInstances API, you must have:
+ An Auto Scaling group that uses a launch template (launch configurations are not supported)
+ Permissions for the following IAM actions:
  + `autoscaling:LaunchInstances`
  + `ec2:CreateFleet`
  + `ec2:DescribeLaunchTemplateVersions`

## Launch instances with synchronous provisioning


You can launch instances with synchronous provisioning through the AWS CLI.

### AWS CLI


To launch instances with synchronous provisioning:

```
aws autoscaling launch-instances \
        --auto-scaling-group-name group-name \
        --requested-capacity number \
        [--availability-zones zone-name] \
        [--subnet-ids subnet-id] \
        [--availability-zone-ids zone-id] \
        [--retry-strategy none|retry-with-group-configuration] \
        [--client-token token]
```

#### Examples


**Launching instances in a specific Availability Zone**

```
aws autoscaling launch-instances \
        --auto-scaling-group-name my-asg \
        --requested-capacity 3 \
        --availability-zones us-east-1a \
        --retry-strategy retry-with-group-configuration
```

**Launching instances in a specific subnet**

```
aws autoscaling launch-instances \
        --auto-scaling-group-name my-asg \
        --requested-capacity 2 \
        --subnet-ids subnet-12345678 \
        --retry-strategy none \
        --client-token my-unique-token-123
```

#### Handling responses


**Example of a successful response:**

```
{
    "AutoScalingGroupName": "my-asg",
    "ClientToken": "my-unique-token-123",
    "Instances": [
        {
            "InstanceType": "m5.xlarge",
            "AvailabilityZone": "us-east-1a",
            "AvailabilityZoneId": "use1-az1",
            "SubnetId": "subnet-12345678",
            "MarketType": "OnDemand",
            "InstanceIds": ["i-0123456789abcdef0", "i-0fedcba9876543210"]
        }
    ],
    "Errors": []
}
```

**Example of a response with errors**

```
{
    "AutoScalingGroupName": "my-asg",
    "ClientToken": "my-unique-token-123",
    "Instances": [],
    "Errors": [
       {
        "InstanceType": "m5.large",
        "AvailabilityZone": "us-east-1a",
        "AvailabilityZoneId": "use1-az1",
        "SubnetId": "subnet-12345678",
        "MarketType": "OnDemand",
        "ErrorCode": "InsufficientInstanceCapacity",
        "ErrorMessage": "There is not enough capacity to fulfill your request for instance type 'm5.large' in 'us-east-1a'"
        }
    ]
}
```
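When scripting against these responses, check both the `Instances` and `Errors` arrays, because a single call can return entries in each. A minimal sketch that parses the shapes shown above, using a canned response in place of a live call:

```
# Parse a LaunchInstances-style response: collect instance IDs and error codes.
# In practice, capture the real output with:
#   aws autoscaling launch-instances ... > /tmp/launch-result.json
cat > /tmp/launch-result.json <<'EOF'
{"AutoScalingGroupName": "my-asg",
 "Instances": [{"AvailabilityZone": "us-east-1a",
                "InstanceIds": ["i-0123456789abcdef0", "i-0fedcba9876543210"]}],
 "Errors": [{"ErrorCode": "InsufficientInstanceCapacity",
             "AvailabilityZone": "us-east-1a"}]}
EOF
instance_ids=$(python3 -c '
import json
r = json.load(open("/tmp/launch-result.json"))
print(" ".join(i for entry in r["Instances"] for i in entry["InstanceIds"]))')
error_codes=$(python3 -c '
import json
print(" ".join(e["ErrorCode"] for e in json.load(open("/tmp/launch-result.json"))["Errors"]))')
echo "launched: $instance_ids"
echo "errors: $error_codes"   # both non-empty means a partial success
```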

## Handle launch failures and retries


When the LaunchInstances API encounters failures, you can implement retry strategies using idempotency tokens and appropriate retry policies.

You can use the client-token parameter to retry requests. You can also use the following retry strategies:
+ `RetryStrategy: none` (default) – If the API call fails, the Auto Scaling group's desired capacity remains unchanged and no automatic retry occurs.
+ `RetryStrategy: retry-with-group-configuration` – If the API call fails, the Auto Scaling group's desired capacity is increased by the requested amount, and Auto Scaling will automatically retry launching instances using the group's standard configuration and processes.

Retry behavior for `RetryStrategy: retry-with-group-configuration` depends on the failure type:
+ **Validation errors** (for example, invalid parameters or unsupported configurations): Desired capacity is not increased because the operation can't proceed.
+ **Capacity errors**: Desired capacity is increased and Auto Scaling will retry launching instances asynchronously using the group's normal scaling processes.

### Using client tokens for idempotency


The `client-token` parameter ensures idempotent operations and enables safe retries of launch requests.

Key behaviors:
+ Client tokens have an 8-hour lifetime from the initial request
+ Retrying with the same client token within 8 hours returns the cached response instead of launching new instances
+ After 8 hours, the same client token will initiate a new launch operation
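This lifetime implies that a client-side retry loop should generate the token once and reuse it on every attempt. The sketch below mocks out the launch call (the real invocation, shown in the comment, needs live AWS access), so only the retry structure is demonstrated:

```
# Retry with a stable client token. launch_once is a stand-in for:
#   aws autoscaling launch-instances --auto-scaling-group-name my-asg \
#     --requested-capacity 2 --retry-strategy none --client-token "$token"
attempt=0
launch_once() {
  attempt=$((attempt + 1))
  [ "$attempt" -ge 3 ]   # mock: fail twice, then succeed
}
token="my-unique-token-$(date +%s)"   # generate ONCE, reuse on every retry
tries=0
until launch_once; do
  tries=$((tries + 1))
  if [ "$tries" -ge 5 ]; then break; fi
  sleep 1   # back off, then retry with the SAME token for idempotency
done
echo "finished after $attempt attempts"
```

Because the token is reused, a retry that races a slow-but-successful earlier attempt returns the cached result instead of launching duplicate instances.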

# Update an Auto Scaling group

You can update most of your Auto Scaling group's details. You can't update the name of an Auto Scaling group or change its AWS Region. 

**To update an Auto Scaling group (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Choose your Auto Scaling group to display information about the group, with tabs for **Details**, **Activity**, **Automatic scaling**, **Instance management**, **Monitoring**, and **Instance refresh**.

1. Choose the tabs for the configuration areas that you're interested in and update the settings as needed. For each setting that you edit, choose **Update** to save your changes to the Auto Scaling group's configuration. 
   + **Details** tab

     These are the general settings for your Auto Scaling group. You can edit and manage these in the same way as during Auto Scaling group creation. 

     The **Advanced configurations** section has some options that are not available when creating the group, such as [termination policies](ec2-auto-scaling-termination-policies.md), [cooldown](ec2-auto-scaling-scaling-cooldowns.md), [suspended processes](as-suspend-resume-processes.md), and [maximum instance lifetime](asg-max-instance-lifetime.md). You can also view, but not edit, the placement group and [service-linked role](autoscaling-service-linked-role.md) of the Auto Scaling group.
   + **Integrations** tab
     + **Load balancing** – [Elastic Load Balancing](autoscaling-load-balancer.md)

       If the group is associated with Elastic Load Balancing resources, see [Add and remove Availability Zones](as-add-az-console.md) before changing Availability Zones. Some restrictions on the load balancer might prevent changes to your group's Availability Zones from being applied to your load balancer's Availability Zones.
     + **VPC Lattice integration options** – [VPC Lattice](ec2-auto-scaling-vpc-lattice.md) 
     + **ARC zonal shift** – [Auto Scaling group zonal shift](ec2-auto-scaling-zonal-shift.md) 
   + **Automatic scaling** tab
     + **Dynamic scaling policies** – [Dynamic scaling policies](as-scale-based-on-demand.md)
     + **Predictive scaling policies** – [Predictive scaling policies](ec2-auto-scaling-predictive-scaling.md)
     + **Scheduled actions** – [Scheduled actions](ec2-auto-scaling-scheduled-scaling.md)
   + **Instance management** tab
     + **Lifecycle hooks** – [Lifecycle hooks](lifecycle-hooks.md)
     + **Warm pool** – [Warm pools](ec2-auto-scaling-warm-pools.md)
   + **Activity** tab
     + **Activity notifications** – [Amazon SNS notifications](ec2-auto-scaling-sns-notifications.md)
   + **Monitoring** tab
     + There is just a single option in this tab, which lets you enable or disable [CloudWatch group metrics collection](ec2-auto-scaling-metrics.md).

**To update an Auto Scaling group using the command line**

You can use one of the following commands:
+ [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) (AWS CLI)
+ [Update-ASAutoScalingGroup](https://docs.aws.amazon.com/powershell/latest/reference/items/Update-ASAutoScalingGroup.html) (AWS Tools for Windows PowerShell)

## Update Auto Scaling instances


If you associate a new launch template or launch configuration with an Auto Scaling group, all new instances will get the updated configuration. Existing instances continue to run with the configuration that they were originally launched with. To apply your changes to existing instances, you have the following options:
+ Start an instance refresh to replace the older instances. For more information, see [Use an instance refresh to update instances in an Auto Scaling group](asg-instance-refresh.md). 
+ Wait for scaling activities to gradually replace older instances with newer instances based on your [termination policies](as-instance-termination.md).
+ Manually terminate them so that they are replaced by your Auto Scaling group.

**Note**  
You can change the following instance attributes by specifying them as part of the launch template or launch configuration: Amazon Machine Image (AMI), block devices, key pair, instance type, security groups, user data, monitoring, IAM instance profile, placement tenancy, kernel, ramdisk, whether the instance has a public IP address, and the Availability Zone distribution strategy.

## Auto Scaling group allocation strategy and capacity changes


When you change an Auto Scaling group's allocation strategy, existing instances are not replaced. Any new instances that are launched because of scale out events will follow the new allocation strategy. Any future scale in events will follow the [termination policy](ec2-auto-scaling-termination-policies.md) and use the new allocation strategy if the termination policy is set to `Default` or `AllocationStrategy`. For example, if you change the allocation strategy from `lowest-price` to `price-capacity-optimized`, there might not be any instance terminations, but any new instances will be launched with the new allocation strategy. Instance type changes do not affect existing instances.

When you change certain parameters, such as the [OnDemandBaseCapacity](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_InstancesDistribution.html) or the [OnDemandPercentageAboveBaseCapacity](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_InstancesDistribution.html), Auto Scaling will automatically rebalance if the percentages of On-Demand Instances and Spot Instances don't match the new specifications. For example, suppose an Auto Scaling group has the `OnDemandPercentageAboveBaseCapacity` set to 50 percent On-Demand Instances and 50 percent Spot Instances. Then, the `OnDemandPercentageAboveBaseCapacity` is increased to 100 percent On-Demand Instances. The Auto Scaling group will proactively rebalance by launching new On-Demand Instances and terminating Spot Instances. The [instance maintenance policy](instance-maintenance-policy-overview-and-considerations.md) that you defined determines the order of the launch and termination activities. 
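As a rough illustration of how these parameters translate into an instance split (assuming fractional results round up in favor of On-Demand Instances, which is an assumption made here for illustration), consider a desired capacity of 10 with a base of 2 and 50 percent above base:

```
# Illustrative On-Demand/Spot split calculation.
desired=10
base=2      # OnDemandBaseCapacity
pct=50      # OnDemandPercentageAboveBaseCapacity
above=$((desired - base))                 # capacity above the base: 8
od_above=$(( (above * pct + 99) / 100 ))  # ceiling of 8 * 50% = 4
on_demand=$((base + od_above))
spot=$((desired - on_demand))
echo "On-Demand: $on_demand, Spot: $spot"
```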

# Tag Auto Scaling groups and instances

A *tag* is a custom attribute label that you assign or that AWS assigns to an AWS resource. Each tag has two parts: 
+ A tag key (for example, `costcenter`, `environment`, or `project`)
+ An optional field known as a tag value (for example, `111122223333` or `production`)

Tags help you do the following:
+ Track your AWS costs. You activate these tags on the AWS Billing and Cost Management dashboard. AWS uses the tags to categorize your costs and deliver a monthly cost allocation report to you. For more information, see [Using cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing User Guide*.
+ Control access to Auto Scaling groups based on tags. You can use conditions in your IAM policies to control access to Auto Scaling groups based on the tags on that group. For more information, see [Tags for security](tag-security.md).
+ Filter and search for Auto Scaling groups based on the tags that you add. For more information, see [Use tags to filter Auto Scaling groups](use-tag-filters-aws-cli.md).
+ Identify and organize your AWS resources. Many AWS services support tagging, so you can assign the same tag to resources from different services to indicate that the resources are related.

You can tag new or existing Auto Scaling groups. You can also propagate tags from an Auto Scaling group to the EC2 instances that it launches. 

Tags are not propagated to Amazon EBS volumes. To add tags to Amazon EBS volumes, specify the tags in a launch template. For more information, see [Create a launch template for an Auto Scaling group](create-launch-template.md).
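For reference, the launch template mechanism for volume tags is a `TagSpecifications` entry with `ResourceType` set to `volume` in the launch template data; the key and value below are illustrative:

```
{
  "TagSpecifications": [
    {
      "ResourceType": "volume",
      "Tags": [
        {"Key": "environment", "Value": "production"}
      ]
    }
  ]
}
```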

You can create and manage tags through the AWS Management Console, AWS CLI, or SDKs. 

**Topics**
+ [Tag naming and usage restrictions](#tag_restrictions)
+ [EC2 instance tagging lifecycle](#tag-lifecycle)
+ [Tag your Auto Scaling groups](add-tags.md)
+ [Delete tags](delete-tag.md)
+ [Tags for security](tag-security.md)
+ [Control access to tags](tag-permissions.md)
+ [Use tags to filter Auto Scaling groups](use-tag-filters-aws-cli.md)

## Tag naming and usage restrictions


The following basic restrictions apply to tags:
+ The maximum number of tags per resource is 50.
+ The maximum number of tags that you can add or remove using a single call is 25.
+ The maximum key length is 128 Unicode characters.
+ The maximum value length is 256 Unicode characters.
+ Tag keys and values are case-sensitive. As a best practice, decide on a strategy for capitalizing tags, and consistently implement that strategy across all resource types. 
+ Do not use the `aws:` prefix in your tag names or values, because it is reserved for AWS use. You can't edit or delete tag names or values with this prefix, and they do not count toward your tags per resource quota.

## EC2 instance tagging lifecycle


If you have opted to propagate tags to your EC2 instances, the tags are managed as follows:
+ When an Auto Scaling group launches instances, it adds tags to the instances during resource creation rather than after the resource is created. 
+ The Auto Scaling group automatically adds a tag to instances with a key of `aws:autoscaling:groupName` and a value of the Auto Scaling group name. 
+ If you specify instance tags in your launch template and you opted to propagate your group's tags to its instances, all the tags are merged. If the same tag key is specified for a tag in your launch template and a tag in your Auto Scaling group, then the tag value from the group takes precedence.
+ When you attach existing instances, the Auto Scaling group adds the tags to the instances, overwriting any existing tags with the same tag key. It also adds a tag with a key of `aws:autoscaling:groupName` and a value of the Auto Scaling group name.
+ When you detach an instance from an Auto Scaling group, it removes only the `aws:autoscaling:groupName` tag.
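The tag merge rule above can be sketched as a dictionary merge in which the group's tags are applied last (the tag keys and values here are hypothetical):

```
# Group tags win over launch template tags on a duplicate key.
merged=$(python3 -c '
template_tags = {"team": "web", "environment": "staging"}   # from the launch template
group_tags = {"environment": "production"}                  # from the Auto Scaling group
merged = {**template_tags, **group_tags}                    # group applied last, so it wins
print(merged["environment"] + " " + merged["team"])')
echo "$merged"
```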

# Tag your Auto Scaling groups


When you add a tag to your Auto Scaling group, you can specify whether it should be added to instances launched in the Auto Scaling group. If you modify a tag, the updated version of the tag is added to instances launched in the Auto Scaling group after the change. If you create or modify a tag for an Auto Scaling group, these changes are not made to instances that are already running in the Auto Scaling group.

**Topics**
+ [Add or modify tags (console)](#add-tags-console)
+ [Add or modify tags (AWS CLI)](#add-tags-aws-cli)

## Add or modify tags (console)


**To tag an Auto Scaling group on creation**  
When you use the Amazon EC2 console to create an Auto Scaling group, you can specify tag keys and values on the **Add tags** page of the Create Auto Scaling group wizard. To propagate a tag to the instances launched in the Auto Scaling group, make sure that you keep the **Tag new instances** option for that tag selected. Otherwise, you can deselect it. 

**To add or modify tags for an existing Auto Scaling group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to the Auto Scaling group.

   A split pane opens up in the bottom of the **Auto Scaling groups** page. 

1. On the **Details** tab, choose **Tags**, **Edit**.

1. To modify existing tags, edit **Key** and **Value**.

1. To add a new tag, choose **Add tag** and edit **Key** and **Value**. You can keep **Tag new instances** selected to add the tag to the instances launched in the Auto Scaling group automatically, and deselect it otherwise.

1. When you have finished adding tags, choose **Update**.

## Add or modify tags (AWS CLI)


The following examples show how to use the AWS CLI to add tags when you create Auto Scaling groups, and to add or modify tags for existing Auto Scaling groups. 

**To tag an Auto Scaling group on creation**  
Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command to create a new Auto Scaling group and add a tag, for example, **environment=production**, to the Auto Scaling group. The tag is also added to any instances launched in the Auto Scaling group.

```
aws autoscaling create-auto-scaling-group --auto-scaling-group-name my-asg \
  --launch-configuration-name my-launch-config --min-size 1 --max-size 3 \
  --vpc-zone-identifier "subnet-5ea0c127,subnet-6194ea3b,subnet-c934b782" \
  --tags Key=environment,Value=production,PropagateAtLaunch=true
```

**To create or modify tags for an existing Auto Scaling group**  
Use the [create-or-update-tags](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-or-update-tags.html) command to create or modify a tag. For example, the following command adds the `Name=my-asg` and `costcenter=cc123` tags. The tags are also added to any instances launched in the Auto Scaling group after this change. If a tag with either key already exists, the existing tag is replaced. The Amazon EC2 console associates the display name for each instance with the name that is specified for the `Name` key (case-sensitive).

```
aws autoscaling create-or-update-tags \
  --tags ResourceId=my-asg,ResourceType=auto-scaling-group,Key=Name,Value=my-asg,PropagateAtLaunch=true \
  ResourceId=my-asg,ResourceType=auto-scaling-group,Key=costcenter,Value=cc123,PropagateAtLaunch=true
```

### Describe the tags for an Auto Scaling group (AWS CLI)


If you want to view the tags that are applied to a specific Auto Scaling group, you can use either of the following commands: 
+ [describe-tags](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-tags.html) – You supply your Auto Scaling group name to view a list of the tags for the specified group.

  ```
  aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=my-asg
  ```

  The following is an example response.

  ```
  {
      "Tags": [
          {
              "ResourceType": "auto-scaling-group",
              "ResourceId": "my-asg",
              "PropagateAtLaunch": true,
              "Value": "production",
              "Key": "environment"
          }
      ]
  }
  ```
+ [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) – You supply your Auto Scaling group name to view the attributes of the specified group, including any tags.

  ```
  aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name my-asg
  ```

  The following is an example response.

  ```
  {
      "AutoScalingGroups": [
          {
              "AutoScalingGroupName": "my-asg",
              "AutoScalingGroupARN": "arn",
              "LaunchTemplate": {
                  "LaunchTemplateId": "lt-0b97f1e282EXAMPLE",
                  "LaunchTemplateName": "my-launch-template",
                  "Version": "$Latest"
              },
              "MinSize": 1,
              "MaxSize": 5,
              "DesiredCapacity": 1,
              ...
              "Tags": [
                  {
                      "ResourceType": "auto-scaling-group",
                      "ResourceId": "my-asg",
                      "PropagateAtLaunch": true,
                      "Value": "production",
                      "Key": "environment"
                  }
              ],
              ...
          }
      ]
  }
  ```

# Delete tags


You can delete a tag associated with your Auto Scaling group at any time.

**Topics**
+ [Delete tags (console)](#delete-tag-console)
+ [Delete tags (AWS CLI)](#delete-tag-aws-cli)

## Delete tags (console)


**To delete a tag**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to an existing group.

   A split pane opens up in the bottom of the **Auto Scaling groups** page.

1. On the **Details** tab, choose **Tags**, **Edit**.

1. Choose **Remove** next to the tag.

1. Choose **Update**.

## Delete tags (AWS CLI)


Use the [delete-tags](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-tags.html) command to delete a tag. For example, the following command deletes a tag with a key of `environment`.

```
aws autoscaling delete-tags --tags "ResourceId=my-asg,ResourceType=auto-scaling-group,Key=environment"
```

You must specify the tag key, but you don't have to specify the value. If you specify a value and the value is incorrect, the tag is not deleted.

# Tags for security


Use tags to verify that the requester (such as an IAM user or role) has permissions to create, modify, or delete specific Auto Scaling groups. Provide tag information in the condition element of an IAM policy by using one or more of the following condition keys:
+ Use `autoscaling:ResourceTag/tag-key: tag-value` to allow (or deny) user actions on Auto Scaling groups with specific tags. 
+ Use `aws:RequestTag/tag-key: tag-value` to require that a specific tag be present (or not present) in a request. 
+ Use `aws:TagKeys [tag-key, ...]` to require that specific tag keys be present (or not present) in a request. 

For example, you could deny access to all Auto Scaling groups that include a tag with the key `environment` and the value `production`, as shown in the following example.


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [        
                "autoscaling:CreateAutoScalingGroup",
                "autoscaling:UpdateAutoScalingGroup",
                "autoscaling:DeleteAutoScalingGroup"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"autoscaling:ResourceTag/environment": "production"}
            }
        }
    ]
}
```


For more information about using condition keys to control access to Auto Scaling groups, see [How Amazon EC2 Auto Scaling works with IAM](control-access-using-iam.md).

# Control access to tags


Use tags to verify that the requester (such as an IAM user or role) has permissions to add, modify, or delete tags for Auto Scaling groups. 

The following example IAM policy gives the principal permission to remove only the tag with the `temporary` key from Auto Scaling groups.

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "autoscaling:DeleteTags",
            "Resource": "*",
            "Condition": {
                "ForAllValues:StringEquals": { "aws:TagKeys": ["temporary"] }
            }
        }
    ]
}
```

------

For more examples of IAM policies that enforce constraints on the tags specified for Auto Scaling groups, see [Control which tag keys and tag values can be used](security_iam_id-based-policy-examples.md#policy-example-tags).

**Note**  
Even if you have a policy that restricts your users from performing a tagging (or untagging) operation on an Auto Scaling group, this does not prevent them from manually changing the tags on the instances after they have launched. For examples that control access to tags on EC2 instances, see [Example: Tagging resources](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ExamplePolicies_EC2.html#iam-example-taggingresources) in the *Amazon EC2 User Guide*.
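The `ForAllValues:StringEquals` condition in the preceding policy uses set semantics: the request matches only if every tag key it carries appears in the listed set. As a rough sketch of that evaluation (a hypothetical helper for illustration, not the actual IAM policy engine), the check behaves like a subset test:

```python
def for_all_values_string_equals(request_tag_keys, allowed_keys):
    """Sketch of IAM's ForAllValues:StringEquals set semantics.

    Returns True only if every tag key in the request appears in the
    allowed set. An empty request key set matches vacuously, which is
    how IAM evaluates ForAllValues conditions.
    """
    return set(request_tag_keys).issubset(set(allowed_keys))

# A DeleteTags request that removes only the "temporary" tag matches.
print(for_all_values_string_equals(["temporary"], ["temporary"]))  # True

# A request that also removes "environment" does not match, so the
# Allow statement doesn't apply and the request is implicitly denied.
print(for_all_values_string_equals(["temporary", "environment"], ["temporary"]))  # False
```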

# Use tags to filter Auto Scaling groups


The following examples show you how to use filters with the [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) command to describe Auto Scaling groups with specific tags. Filtering by tags is limited to the AWS CLI or an SDK, and is not available from the console.

**Filtering considerations**
+ You can specify multiple filters and multiple filter values in a single request.
+ You cannot use wildcards with the filter values. 
+ Filter values are case-sensitive.

**Example: Describe Auto Scaling groups with a specific tag key and value pair**  
The following command shows how to filter results to show only Auto Scaling groups with the tag key and value pair of **`environment=production`**.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-key,Values=environment Name=tag-value,Values=production
```

The following is an example response.

```
{
    "AutoScalingGroups": [
        {
            "AutoScalingGroupName": "my-asg",
            "AutoScalingGroupARN": "arn",
            "LaunchTemplate": {
                "LaunchTemplateId": "lt-0b97f1e282EXAMPLE",
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest"
            },
            "MinSize": 1,
            "MaxSize": 5,
            "DesiredCapacity": 1,
            ...
            "Tags": [
                {
                    "ResourceType": "auto-scaling-group",
                    "ResourceId": "my-asg",
                    "PropagateAtLaunch": true,
                    "Value": "production",
                    "Key": "environment"
                }
            ],
            ...
        },

    ... additional groups ...

    ]
}
```

Alternatively, you can specify tags using a `tag:<key>` filter. For example, the following command shows how to filter results to show only Auto Scaling groups with a tag key and value pair of **`environment=production`**. This filter is formatted as follows: `Name=tag:<key>,Values=<value>`, with **<key>** and **<value>** representing a tag key and value pair. 

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag:environment,Values=production
```

You can also filter AWS CLI output by using the `--query` option. The following example shows how to limit AWS CLI output for the previous command to the group name, minimum size, maximum size, and desired capacity attributes only.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag:environment,Values=production \
  --query "AutoScalingGroups[].{AutoScalingGroupName: AutoScalingGroupName, MinSize: MinSize, MaxSize: MaxSize, DesiredCapacity: DesiredCapacity}"
```

The following is an example response.

```
[
    {
        "AutoScalingGroupName": "my-asg",
        "MinSize": 0,
        "MaxSize": 10,
        "DesiredCapacity": 1
    },

    ... additional groups ...

]
```

For more information about filtering, see [Filtering AWS CLI output](https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html) in the *AWS Command Line Interface User Guide*.

**Example: Describe Auto Scaling groups with tags that match the tag key specified**  
The following command shows how to filter results to show only Auto Scaling groups with the `environment` tag, regardless of the tag value.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-key,Values=environment
```

**Example: Describe Auto Scaling groups with tags that match the set of tag keys specified**  
The following command shows how to filter results to show only Auto Scaling groups with tags for `environment` and `project`, regardless of the tag values.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-key,Values=environment Name=tag-key,Values=project
```

**Example: Describe Auto Scaling groups with tags that match at least one of the tag keys specified**  
The following command shows how to filter results to show only Auto Scaling groups with tags for `environment` or `project`, regardless of the tag values.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-key,Values=environment,project
```

**Example: Describe Auto Scaling groups with the specified tag value**  
The following command shows how to filter results to show only Auto Scaling groups with a tag value of `production`, regardless of the tag key.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-value,Values=production
```

**Example: Describe Auto Scaling groups with the set of tag values specified**  
The following command shows how to filter results to show only Auto Scaling groups with the tag values `production` and `development`, regardless of the tag key.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-value,Values=production Name=tag-value,Values=development
```

**Example: Describe Auto Scaling groups with tags that match at least one of the tag values specified**  
The following command shows how to filter results to show only Auto Scaling groups with a tag value of `production` or `development`, regardless of the tag key.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag-value,Values=production,development
```

**Example: Describe Auto Scaling groups with tags that match multiple tag keys and values**  
You can also combine filters to create custom AND and OR logic to do more complex filtering.

The following command shows how to filter results to show only Auto Scaling groups with a specific set of tags. One tag key is `environment` AND the tag value is (`production` OR `development`) AND the other tag key is `costcenter` AND the tag value is `cc123`.

```
aws autoscaling describe-auto-scaling-groups \
  --filters Name=tag:environment,Values=production,development Name=tag:costcenter,Values=cc123
```
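The AND/OR semantics of these filters can be mimicked client-side. The following sketch (the `matches_filters` helper is hypothetical, not part of the AWS CLI or an SDK) treats each `tag:<key>` filter as an OR over its values and ANDs the filters together, which mirrors the behavior of the command above. Key-only (`tag-key`) and value-only (`tag-value`) filters are not modeled here:

```python
def matches_filters(group_tags, filters):
    """Client-side sketch of describe-auto-scaling-groups tag filtering.

    group_tags: dict mapping one group's tag keys to values.
    filters: list of (tag_key, allowed_values) pairs. Filters are ANDed
    together; the values inside a single filter are ORed, matching the
    documented CLI behavior. Comparison is case-sensitive, as with the API.
    """
    return all(
        group_tags.get(key) in allowed_values
        for key, allowed_values in filters
    )

tags = {"environment": "development", "costcenter": "cc123"}

# environment is (production OR development) AND costcenter is cc123
filters = [("environment", ["production", "development"]),
           ("costcenter", ["cc123"])]
print(matches_filters(tags, filters))  # True
```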

# Instance maintenance policies

You can configure an instance maintenance policy for your Auto Scaling group to meet specific capacity requirements during events that cause instances to be replaced, such as an instance refresh or the health check process. 

For example, suppose you have an Auto Scaling group that has a small number of instances. You want to avoid the potential disruptions from terminating and then replacing an instance when health checks indicate an impaired instance. With an instance maintenance policy, you can make sure that Amazon EC2 Auto Scaling first launches a new instance and then waits for it to be fully ready before terminating the unhealthy instance. 

An instance maintenance policy also helps you minimize any potential disruptions in cases where multiple instances are replaced at the same time. You set the minimum and maximum healthy percentage parameters for the policy, and your Auto Scaling group can only increase and decrease capacity within that minimum-maximum range when replacing instances. A larger range increases the number of instances that can be replaced at the same time.

**Topics**
+ [Instance maintenance policy for Auto Scaling group](instance-maintenance-policy-overview-and-considerations.md)
+ [Set an instance maintenance policy on your Auto Scaling group](set-instance-maintenance-policy-on-group.md)

# Instance maintenance policy for Auto Scaling group

This topic provides an overview of the options available and describes what to consider when you create an instance maintenance policy.

**Topics**
+ [Overview](#instance-maintenance-policy-overview)
+ [Core concepts](#instance-maintenance-policy-core-concepts)
+ [Instance warmup](#instance-maintenance-policy-instance-warm-up)
+ [Health check grace period](#instance-maintenance-policy-health-check-grace-period)
+ [Scale your Auto Scaling group](#instance-maintenance-policy-scaling-limits)
+ [Example scenarios](#instance-maintenance-policy-scenarios)

## Overview


When you create an instance maintenance policy for your Auto Scaling group, the policy affects Amazon EC2 Auto Scaling events that cause instances to be replaced. This results in more consistent replacement behaviors within the same Auto Scaling group. It also lets you optimize your group for availability or cost depending on your needs.

In the console, the following configuration options are available:
+ **Launch before terminating** – A new instance must be provisioned first before an existing instance can be terminated. This approach is a good choice for applications that favor availability over cost savings.
+ **Terminate and launch** – New instances are provisioned at the same time your existing instances are terminated. This approach is a good choice for applications that favor cost savings over availability. It's also a good choice for applications that should not launch more capacity than is currently available, even when replacing instances.
+ **Custom policy** – This option lets you set up your policy with a custom minimum and maximum range for the amount of capacity that you want available when replacing instances. This approach can help you achieve the right balance between cost and availability.

By default, an Auto Scaling group has no instance maintenance policy and responds to instance maintenance events with the default behaviors described in the following table.


**Instance maintenance event default behaviors**  

|  Event  |  Description  |  Default behavior  | 
| --- | --- | --- | 
|  Health check failure  |  Happens automatically when instances fail their health checks. Amazon EC2 Auto Scaling replaces instances that fail their health checks. To understand the causes of health check failures, see [Health checks for instances in an Auto Scaling group](ec2-auto-scaling-health-checks.md).  |  Terminate and launch.  | 
|  Instance refresh  |  Happens when you start an instance refresh. Depending on your configuration, an instance refresh replaces instances one at a time, several at a time, or all at once. For more information, see [Use an instance refresh to update instances in an Auto Scaling group](asg-instance-refresh.md).  |  Terminate and launch.  | 
|  Maximum instance lifetime  |  Happens automatically when instances reach the maximum instance lifetime that you specify for your Auto Scaling group. Amazon EC2 Auto Scaling replaces instances that reach their maximum instance lifetime. For more information, see [Replace Auto Scaling instances based on maximum instance lifetime](asg-max-instance-lifetime.md).  |  Terminate and launch.  | 
|  Rebalancing  |  Happens automatically if there are underlying changes that cause the group to become unbalanced. Amazon EC2 Auto Scaling rebalances the group in the following situations: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/instance-maintenance-policy-overview-and-considerations.html)  |  Launch before terminating. Amazon EC2 Auto Scaling can exceed your group's size limits by up to 10 percent of its *maximum capacity*. However, if you're using Capacity Rebalancing, it can only exceed these limits by up to 10 percent of the *desired capacity*.  | 

Amazon EC2 Auto Scaling continues to default to terminate and launch in the following situations. When one of these situations occurs, your group's capacity might fall below the lower threshold of your instance maintenance policy.
+ When an instance terminates unexpectedly, for example, because of human action. Amazon EC2 Auto Scaling immediately replaces instances that are no longer running. For more information, see [Amazon EC2 health checks](health-checks-overview.md#instance-health-detection).
+ When Amazon EC2 reboots, stops, or retires an instance as part of a scheduled event before Amazon EC2 Auto Scaling can launch the replacement instance. For more information about these events, see [Scheduled events for your instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-instances-status-check_sched.html) in the *Amazon EC2 User Guide*.
+ When the Amazon EC2 Spot Service initiates a Spot Instance interruption and a Spot Instance is then forcibly terminated.

With Spot Instances, if you enabled Capacity Rebalancing on your Auto Scaling group, the group might already have a pending replacement instance from a different Spot pool, launched before the Spot interruption was initiated. For details about how Capacity Rebalancing works, see [Capacity Rebalancing in Auto Scaling to replace at-risk Spot Instances](ec2-auto-scaling-capacity-rebalancing.md).

However, because Spot Instances are not guaranteed to remain available and can be terminated with a two-minute Spot Instance interruption notice, your instance maintenance policy's lower threshold can be exceeded if instances are interrupted before your new instances have launched. 

## Core concepts


Before you get started, familiarize yourself with the following core concepts and terms:

**Desired capacity**  
The *desired capacity* is the capacity of the Auto Scaling group at the time of creation. It is also the capacity the group attempts to maintain when there are no scaling conditions attached to the group. 

**Instance maintenance policy**  
An *instance maintenance policy* controls whether a new instance is provisioned before an existing instance is terminated during instance maintenance events. It also determines how far below and above its desired capacity your Auto Scaling group can go when replacing multiple instances at the same time. 

**Maximum healthy percentage**  
The *maximum healthy percentage* is the percentage of its desired capacity that your Auto Scaling group can increase to when replacing instances. It represents the maximum percentage of the group that can be in service and healthy, or pending, to support your workload. In the console, you can set the maximum healthy percentage when you use either the **Launch before terminating** option or the **Custom policy** option. The valid values are 100–200 percent.

**Minimum healthy percentage**  
The *minimum healthy percentage* is the percentage of the desired capacity to keep in service, healthy, and ready to use to support your workload when replacing instances. An instance is considered healthy and ready to use after it successfully completes its first health check and the specified warmup time passes. In the console, you can set the minimum healthy percentage when you use either the **Terminate and launch** option or the **Custom policy** option. The valid values are 0–100 percent.   
To replace instances faster, you can specify a low minimum healthy percentage. However, if there aren't enough healthy instances running, it can reduce availability. We recommend selecting a reasonable value to maintain availability in situations where multiple instances will be replaced.

## Instance warmup


If your instances need time to initialize after they enter the `InService` state, enable the default instance warmup for your Auto Scaling group. With the default instance warmup, you can prevent instances from being counted toward the minimum healthy percentage before they are ready. This ensures that Amazon EC2 Auto Scaling considers how long it takes to have enough capacity in place to support the workload before it terminates existing instances.

As an added benefit, you can improve the Amazon CloudWatch metrics used for dynamic scaling when you enable the default instance warmup. If your Auto Scaling group has any scaling policies, when the group scales out, it uses the same default warmup period to prevent instances from being counted toward CloudWatch metrics before they have finished initializing.

For more information, see [Set the default instance warmup for an Auto Scaling group](ec2-auto-scaling-default-instance-warmup.md).

## Health check grace period


Amazon EC2 Auto Scaling determines whether an instance is healthy based on the status of the health checks that your Auto Scaling group uses. For more information, see [Health checks for instances in an Auto Scaling group](ec2-auto-scaling-health-checks.md). 

To make sure that these health checks start as soon as possible, set the group's health check grace period no higher than needed, but high enough for your Elastic Load Balancing health checks to determine whether a target is available to handle requests. For more information, see [Set the health check grace period for an Auto Scaling group](health-check-grace-period.md).

## Scale your Auto Scaling group


An instance maintenance policy only applies to instance maintenance events and doesn't prevent the group from being manually or automatically scaled.

Scaling policies and scheduled actions attached to your Auto Scaling group can run in parallel with instance maintenance events. In that case, they can increase or decrease the group's desired capacity, but only within the scaling limits that you defined. For more information about these limits, see [Set scaling limits for your Auto Scaling group](asg-capacity-limits.md).

## Example scenarios


In a typical scenario, your instance maintenance policy and desired capacity might look something like this:
+ Minimum healthy percentage = 90 percent
+ Maximum healthy percentage = 120 percent
+ Desired capacity = 100

During any instance maintenance event, your Auto Scaling group might have as few as 90 instances and as many as 120. After the event, the group goes back to having 100 instances. 

When you use an instance maintenance policy with an Auto Scaling group that has a warm pool, the minimum and maximum healthy percentages are applied separately to the Auto Scaling group and the warm pool. 

For example, assume this is your configuration:
+ Minimum healthy percentage = 90 percent
+ Maximum healthy percentage = 120 percent
+ Desired capacity = 100
+ Warm pool size = 10

If you start an instance refresh to recycle the group's instances, Amazon EC2 Auto Scaling replaces instances in the Auto Scaling group first, and then instances in the warm pool. While Amazon EC2 Auto Scaling is still working on replacing instances in the Auto Scaling group, the group might have as few as 90 instances and as many as 120. After finishing with the group, Amazon EC2 Auto Scaling can work on replacing instances in the warm pool. While this is happening, the warm pool might have as few as 9 instances and as many as 12.
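The capacity bounds in these scenarios can be derived directly from the percentages. The following sketch computes them for both the group and the warm pool; the rounding behavior shown is an assumption for illustration, not a documented guarantee:

```python
import math

def replacement_capacity_range(desired, min_healthy_pct, max_healthy_pct):
    """Capacity bounds while instances are being replaced.

    Uses the simple interpretation from the scenarios above: the group
    can shrink to min_healthy_pct and grow to max_healthy_pct of its
    desired capacity. Floor/ceil rounding here is an assumption.
    """
    low = math.floor(desired * min_healthy_pct / 100)
    high = math.ceil(desired * max_healthy_pct / 100)
    return low, high

# Auto Scaling group: desired capacity 100, policy 90-120 percent
print(replacement_capacity_range(100, 90, 120))  # (90, 120)

# Warm pool of 10 instances with the same policy, applied separately
print(replacement_capacity_range(10, 90, 120))   # (9, 12)
```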

# Set an instance maintenance policy on your Auto Scaling group

You can create an instance maintenance policy when you create an Auto Scaling group. You can also create it for existing groups.

By setting an instance maintenance policy on your Auto Scaling group, you no longer have to specify values for minimum and maximum healthy percentage parameters for the instance refresh feature unless you want to override the instance maintenance policy.

In the console, Amazon EC2 Auto Scaling provides options to help you get started. 

**Topics**
+ [Set an instance maintenance policy](set-instance-maintenance-policy.md)
+ [Remove an instance maintenance policy](remove-instance-maintenance-policy.md)

# Set an instance maintenance policy


To set an instance maintenance policy on an Auto Scaling group, use one of the following methods:

------
#### [ Console ]

**To set an instance maintenance policy on a new group (console)**

1. Follow the instructions in [Create an Auto Scaling group using a launch template](create-asg-launch-template.md) and complete each step in the procedure, up to step 11.

1. On the **Configure group size and scaling policies** page, for **Desired capacity**, enter the initial number of instances to launch. 

1. In the **Scaling** section, under **Scaling limits**, if your new value for **Desired capacity** is greater than **Min desired capacity** and **Max desired capacity**, the **Max desired capacity** is automatically increased to the new desired capacity value. You can change these limits as needed.

1. For **Automatic scaling**, choose whether you want to create a target tracking scaling policy. You can also create this policy after you create your Auto Scaling group.

   If you choose **Target tracking scaling policy**, follow the directions in [Create a target tracking scaling policy](policy_creating.md) to create the policy.

1. In the **Instance maintenance policy** section, choose one of the available options: 
   + **Launch before terminating**: A new instance must be provisioned first before an existing instance can be terminated. This is a good choice for applications that favor availability over cost savings.
   + **Terminate and launch**: New instances are provisioned at the same time your existing instances are terminated. This is a good choice for applications that favor cost savings over availability. It's also a good choice for applications that should not launch more capacity than is currently available.
   + **Custom policy**: This option lets you set up your policy with a custom minimum and maximum range for the amount of capacity that you want available when replacing instances. This can help you achieve the right balance between cost and availability.

1. For **Set healthy percentage**, enter values for one or both of the following fields. The enabled fields vary depending on the option that you chose in the preceding step.
   + **Min**: Sets the minimum healthy percentage that's required to proceed with replacing instances.
   + **Max**: Sets the maximum healthy percentage that's possible when replacing instances.

1. Expand the **View capacity during replacements based on your desired capacity** section to confirm how the values for **Min** and **Max** apply to your group. The exact values used depend on the desired capacity value, which will change if the group scales.

1. Continue with the steps in [Create an Auto Scaling group using a launch template](create-asg-launch-template.md).

------
#### [ AWS CLI ]

**To set an instance maintenance policy on a new group (AWS CLI)**  
Add the `--instance-maintenance-policy` option to the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command. The following example sets an instance maintenance policy on a new Auto Scaling group named `my-asg`.

```
aws autoscaling create-auto-scaling-group \
  --launch-template LaunchTemplateName=my-launch-template,Version='1' \
  --auto-scaling-group-name my-asg \
  --min-size 1 \
  --max-size 10 \
  --desired-capacity 5 \
  --default-instance-warmup 20 \
  --instance-maintenance-policy '{
      "MinHealthyPercentage": 90,
      "MaxHealthyPercentage": 120       
    }' \
  --vpc-zone-identifier "subnet-5e6example,subnet-613example,subnet-c93example"
```

------

------
#### [ Console ]

**To set an instance maintenance policy on an existing group (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the AWS Region that you created your Auto Scaling group in.

1. Select the check box next to the Auto Scaling group.

   A split pane opens up in the bottom of the page. 

1. On the **Details** tab, choose **Instance maintenance policy**, **Edit**.

1. To set an instance maintenance policy on the group, choose one of the available options: 
   + **Launch before terminating**: A new instance must be provisioned first before an existing instance can be terminated. This is a good choice for applications that favor availability over cost savings.
   + **Terminate and launch**: New instances are provisioned at the same time your existing instances are terminated. This is a good choice for applications that favor cost savings over availability. It's also a good choice for applications that should not launch more capacity than is currently available.
   + **Custom policy**: This option lets you set up your policy with a custom minimum and maximum range for the amount of capacity that you want available when replacing instances. This can help you achieve the right balance between cost and availability.

1. For **Set healthy percentage**, enter values for one or both of the following fields. The enabled fields vary depending on the option that you chose in the preceding step.
   + **Min**: Sets the minimum healthy percentage that's required to proceed with replacing instances.
   + **Max**: Sets the maximum healthy percentage that's possible when replacing instances.

1. Expand the **View capacity during replacements based on your desired capacity** section to confirm how the values for **Min** and **Max** apply to your group. The exact values used depend on the desired capacity value, which will change if the group scales.

1. Choose **Update**.

------
#### [ AWS CLI ]

**To set an instance maintenance policy on an existing group (AWS CLI)**  
Add the `--instance-maintenance-policy` option to the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command. The following example sets an instance maintenance policy on the specified Auto Scaling group.

```
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
  --instance-maintenance-policy '{
      "MinHealthyPercentage": 90,
      "MaxHealthyPercentage": 120       
    }'
```

------

# Remove an instance maintenance policy


If you want to stop using an instance maintenance policy with your Auto Scaling group, you can remove it. 

------
#### [ Console ]

**To remove an instance maintenance policy (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the AWS Region that you created your Auto Scaling group in.

1. Select the check box next to the Auto Scaling group.

   A split pane opens up in the bottom of the page. 

1. On the **Details** tab, choose **Instance maintenance policy**, **Edit**.

1. Choose **No instance maintenance policy**.

1. Choose **Update**.

------
#### [ AWS CLI ]

**To remove an instance maintenance policy (AWS CLI)**  
Use the `--instance-maintenance-policy` option with the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command, setting both `MinHealthyPercentage` and `MaxHealthyPercentage` to `-1`. The following example removes the instance maintenance policy from the specified Auto Scaling group. 

```
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
  --instance-maintenance-policy '{
      "MinHealthyPercentage": -1,
      "MaxHealthyPercentage": -1       
    }'
```

------

# Amazon EC2 Auto Scaling lifecycle hooks

Amazon EC2 Auto Scaling offers the ability to add lifecycle hooks to your Auto Scaling groups. These hooks let you create solutions that are aware of events in the Auto Scaling instance lifecycle, and then perform a custom action on instances when the corresponding lifecycle event occurs. A lifecycle hook provides a specified amount of time (one hour by default) to wait for the action to complete before the instance transitions to the next state.

As an example of using lifecycle hooks with Auto Scaling instances: 
+ When a scale-out event occurs, your newly launched instance completes its startup sequence and transitions to a wait state. While the instance is in a wait state, it runs a script to download and install the needed software packages for your application, making sure that your instance is fully ready before it starts receiving traffic. When the script is finished installing software, it sends the **complete-lifecycle-action** command to continue.
+ When a scale-in event occurs, a lifecycle hook pauses the instance before it is terminated and sends you a notification using Amazon EventBridge. While the instance is in the wait state, you can invoke an AWS Lambda function or connect to the instance to download logs or other data before the instance is fully terminated. 

A popular use of lifecycle hooks is to control when instances are registered with Elastic Load Balancing. By adding a launch lifecycle hook to your Auto Scaling group, you can ensure that your bootstrap scripts have completed successfully and the applications on the instances are ready to accept traffic before they are registered to the load balancer at the end of the lifecycle hook.

**Topics**
+ [Lifecycle hook availability](#lifecycle-hooks-availability)
+ [Considerations and limitations](#lifecycle-hook-considerations)
+ [Related resources](#lifecycle-hook-related-resources)
+ [How lifecycle hooks work in Auto Scaling groups](lifecycle-hooks-overview.md)
+ [Prepare to add a lifecycle hook](prepare-for-lifecycle-notifications.md)
+ [Control instance retention with instance lifecycle policies](instance-lifecycle-policy.md)
+ [Retrieve the target lifecycle state](retrieving-target-lifecycle-state-through-imds.md)
+ [Add lifecycle hooks to your Auto Scaling group](adding-lifecycle-hooks.md)
+ [Complete a lifecycle action in an Auto Scaling group](completing-lifecycle-hooks.md)
+ [Tutorial: Use instance metadata to retrieve lifecycle state](tutorial-lifecycle-hook-instance-metadata.md)
+ [Tutorial: Configure a lifecycle hook that invokes a Lambda function](tutorial-lifecycle-hook-lambda.md)

## Lifecycle hook availability


The following table lists the lifecycle hooks available for various scenarios.


| Event | Instance launch or termination¹ | [Maximum Instance Lifetime](asg-max-instance-lifetime.md): Replacement instances | [Instance Refresh](asg-instance-refresh.md): Replacement instances | [Capacity Rebalancing](ec2-auto-scaling-capacity-rebalancing.md): Replacement instances | [Warm Pools](ec2-auto-scaling-warm-pools.md): Instances entering and leaving the warm pool | 
| --- | --- | --- | --- | --- | --- | 
| Instance launching | ✓ | ✓ | ✓ | ✓ | ✓ | 
| Instance terminating | ✓ | ✓ | ✓ | ✓ | ✓ | 

¹ Applies to all launches and terminations, whether they are initiated automatically or manually, such as when you call the `SetDesiredCapacity` or `TerminateInstanceInAutoScalingGroup` operations. Does not apply when you attach or detach instances, move instances in and out of standby mode, or delete the group with the force delete option.

## Considerations and limitations for lifecycle hooks

When working with lifecycle hooks, keep in mind the following notes and limitations:
+ Amazon EC2 Auto Scaling provides its own lifecycle to help with the management of Auto Scaling groups. This lifecycle differs from that of other EC2 instances. For more information, see [Amazon EC2 Auto Scaling instance lifecycle](ec2-auto-scaling-lifecycle.md). Instances in a warm pool also have their own lifecycle, as described in [Lifecycle state transitions for instances in a warm pool](warm-pool-instance-lifecycle.md#lifecycle-state-transitions).
+  By default, termination lifecycle hooks operate on a best-effort basis. If a termination lifecycle hook times out, or is abandoned, Amazon EC2 Auto Scaling proceeds with terminating the instance immediately. You can combine termination lifecycle hooks with an instance lifecycle policy for instance retention. For more information, see [Control instance retention with instance lifecycle policies](instance-lifecycle-policy.md). 
+ You can use lifecycle hooks with Spot Instances, but a lifecycle hook does not prevent an instance from terminating in the event that capacity is no longer available, which can happen at any time with a two-minute interruption notice. For more information, see [Spot Instance interruptions](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/spot-interruptions.html) in the *Amazon EC2 User Guide*. However, you can enable Capacity Rebalancing to proactively replace Spot Instances that have received a rebalance recommendation from the Amazon EC2 Spot service, a signal that is sent when a Spot Instance is at elevated risk of interruption. For more information, see [Capacity Rebalancing in Auto Scaling to replace at-risk Spot Instances](ec2-auto-scaling-capacity-rebalancing.md).
+ Instances can remain in a wait state for a finite period of time. The default timeout for a lifecycle hook is one hour (heartbeat timeout). There is also a global timeout that specifies the maximum amount of time that you can keep an instance in a wait state. The global timeout is 48 hours or 100 times the heartbeat timeout, whichever is smaller.
+ The result of the lifecycle hook can be either `CONTINUE` or `ABANDON`. If an instance is launching, `CONTINUE` indicates that your actions were successful, and that Amazon EC2 Auto Scaling can put the instance into service. Otherwise, `ABANDON` indicates that your custom actions were unsuccessful, and that Amazon EC2 Auto Scaling can terminate and replace the instance. If an instance is terminating, both `ABANDON` and `CONTINUE` allow the instance to terminate. However, `ABANDON` stops any remaining actions, such as other lifecycle hooks, while `CONTINUE` allows any other lifecycle hooks to complete.
+ Amazon EC2 Auto Scaling limits the rate at which it allows instances to launch if the lifecycle hooks are failing consistently, so make sure to test and fix any permanent errors in your lifecycle actions. 
+ Creating and updating lifecycle hooks using the AWS CLI, CloudFormation, or an SDK provides options not available when creating a lifecycle hook from the AWS Management Console. For example, the field to specify the ARN of an SNS topic or SQS queue doesn't appear in the console, because Amazon EC2 Auto Scaling already sends events to Amazon EventBridge. These events can be filtered and redirected to AWS services such as Lambda, Amazon SNS, and Amazon SQS as needed.
+ You can add multiple lifecycle hooks to an Auto Scaling group while you are creating it, by calling the [CreateAutoScalingGroup](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CreateAutoScalingGroup.html) API using the AWS CLI, CloudFormation, or an SDK. However, each hook must have the same notification target and IAM role, if specified. To create lifecycle hooks with different notification targets and different roles, create the lifecycle hooks one at a time in separate calls to the [PutLifecycleHook](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_PutLifecycleHook.html) API. 
+ If you add a lifecycle hook for instance launch, the health check grace period starts as soon as the instance reaches the `InService` state. For more information, see [Set the health check grace period for an Auto Scaling group](health-check-grace-period.md).
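As noted above, creating lifecycle hooks with the AWS CLI exposes options that the console does not. The following is a minimal sketch of adding a launch hook with [put-lifecycle-hook](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-lifecycle-hook.html); the hook name, group name, and timeout are illustrative placeholders:

```shell
# Sketch: add a launch lifecycle hook to an existing Auto Scaling group.
# "my-launch-hook" and "my-asg" are placeholder names.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name my-launch-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
  --heartbeat-timeout 300 \
  --default-result ABANDON
```

With `--default-result ABANDON`, an instance whose custom action never completes is terminated and replaced when the 300-second heartbeat timeout expires, rather than being put into service.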

**Scaling considerations**
+ Dynamic scaling policies scale in and out in response to CloudWatch metric data, such as CPU and network I/O, that's aggregated across multiple instances. When scaling out, Amazon EC2 Auto Scaling doesn't immediately count a new instance towards the aggregated instance metrics of the Auto Scaling group. It waits until the instance reaches the `InService` state and the instance warmup has finished. For more information, see [Scaling performance considerations](ec2-auto-scaling-default-instance-warmup.md#scaling-performance-considerations) in the default instance warmup topic. 
+ On scale in, the aggregated instance metrics might not instantly reflect the removal of a terminating instance. The terminating instance stops counting toward the group's aggregated instance metrics shortly after the Amazon EC2 Auto Scaling termination workflow begins. 
+ In most cases when lifecycle hooks are invoked, scaling activities due to simple scaling policies are paused until the lifecycle actions have completed and the cooldown period has expired. Setting a long interval for the cooldown period means that it will take longer for scaling to resume. For more information, see [Lifecycle hooks can cause additional delays](ec2-auto-scaling-scaling-cooldowns.md#cooldowns-lifecycle-hooks) in the cooldown topic. In general, we recommend against using simple scaling policies if you can use either step scaling or target tracking scaling policies instead.

## Related resources


For an introduction video, see [AWS re:Invent 2018: Capacity Management Made Easy with Amazon EC2 Auto Scaling](https://youtu.be/PideBMIcwBQ?t=469) on *YouTube*.

We provide a few JSON and YAML template snippets that you can use to understand how to declare lifecycle hooks in your CloudFormation stack templates. For more information, see the [AWS::AutoScaling::LifecycleHook](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-autoscaling-lifecyclehook.html) reference in the *AWS CloudFormation User Guide*.

You can also visit our [GitHub repository](https://github.com/aws-samples/amazon-ec2-auto-scaling-group-examples) to download example templates and user data scripts for lifecycle hooks.

For examples of the use of lifecycle hooks, see the following blog posts. 
+ [Building a Backup System for Scaled Instances using Lambda and Amazon EC2 Run Command](https://aws.amazon.com/blogs/compute/building-a-backup-system-for-scaled-instances-using-aws-lambda-and-amazon-ec2-run-command/)
+ [Run code before terminating an EC2 Auto Scaling instance](https://aws.amazon.com/blogs/infrastructure-and-automation/run-code-before-terminating-an-ec2-auto-scaling-instance/)

# How lifecycle hooks work in Auto Scaling groups


An Amazon EC2 instance transitions through different states from the time it launches until it is terminated. You can create custom actions for your Auto Scaling group to act when an instance transitions into a wait state due to a lifecycle hook.

The following illustration shows the transitions between Auto Scaling instance states when you use lifecycle hooks for scale out and scale in. 

![\[The transitions between Auto Scaling instance states when you use lifecycle hooks for scale out and scale in.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/how-lifecycle-hooks-work.png)


As shown in the preceding diagram:

1. The Auto Scaling group responds to a scale-out event and begins launching an instance.

1. The lifecycle hook puts the instance into a wait state (`Pending:Wait`) and then performs a custom action.

   The instance remains in a wait state until you either complete the lifecycle action or the timeout period ends. By default, the instance remains in a wait state for one hour, and then the Auto Scaling group continues the launch process (`Pending:Proceed`). If you need more time, you can restart the timeout period by recording a heartbeat. If your custom action finishes before the timeout period expires, complete the lifecycle action; the wait ends and the Auto Scaling group continues the launch process.

1. The instance enters the `InService` state and the health check grace period starts. However, before the instance reaches the `InService` state, if the Auto Scaling group is associated with an Elastic Load Balancing load balancer, the instance is registered with the load balancer, and the load balancer starts checking its health. After the health check grace period ends, Amazon EC2 Auto Scaling begins checking the health state of the instance.

1. The Auto Scaling group responds to a scale-in event and begins terminating an instance. If the Auto Scaling group is being used with Elastic Load Balancing, the terminating instance is first deregistered from the load balancer. If connection draining is enabled for the load balancer, the instance stops accepting new connections and waits for existing connections to drain before completing the deregistration process.

1. The lifecycle hook puts the instance into a wait state (`Terminating:Wait`) and then performs a custom action.

   The instance remains in a wait state either until you complete the lifecycle action, or until the timeout period ends (one hour by default). After you complete the lifecycle hook or the timeout period expires, the instance transitions to the next state (`Terminating:Proceed`).

1. The instance is terminated.

**Important**  
Instances in a warm pool also have their own lifecycle with corresponding wait states, as described in [Lifecycle state transitions for instances in a warm pool](warm-pool-instance-lifecycle.md#lifecycle-state-transitions).
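When a custom action needs more than the default one-hour window, the timeout can be restarted by recording a heartbeat, and the action completed explicitly once the work is done. A hedged AWS CLI sketch of both calls; the hook name, group name, and instance ID are placeholders:

```shell
# Sketch: keep an instance in Terminating:Wait while work is in progress.
# Each heartbeat restarts the timeout period for the lifecycle action.
aws autoscaling record-lifecycle-action-heartbeat \
  --lifecycle-hook-name my-termination-hook \
  --auto-scaling-group-name my-asg \
  --instance-id i-1234567890abcdef0

# When the custom action finishes, let the termination proceed.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name my-termination-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id i-1234567890abcdef0
```

Remember that heartbeats cannot extend the wait indefinitely: the global timeout of 48 hours (or 100 times the heartbeat timeout, whichever is smaller) still applies.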

## Lifecycle state transitions for instances undergoing root volume replacement


The following diagram shows the transition between Auto Scaling instance states when you use lifecycle hooks for replace root volume:

![\[The transitions between Auto Scaling instance states when you use lifecycle hooks for replace root volume.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/root-volume-replacement-lifecycle-states.png)


As shown in the preceding diagram:

1. The Auto Scaling group responds to an instance refresh and selects an instance for root volume replacement. The instance enters the `ReplacingRootVolume` state. If the instance is registered with a load balancer, it is deregistered from the load balancer.

1. The lifecycle hook puts the instance into a wait state (`ReplacingRootVolume:Wait`) and then performs a custom action. The instance remains in a wait state until you either complete the lifecycle action or the timeout period ends. If your custom action finishes before the timeout period expires, complete the lifecycle action; the wait ends and the Auto Scaling group continues the root volume replacement process.

1. The instance completes its root volume replacement and enters the `RootVolumeReplaced` state.

1. The instance enters the `Pending` state.

1. The lifecycle hook puts the instance into a wait state (`Pending:Wait`) and then performs a custom action. The instance remains in a wait state either until you complete the lifecycle action, or until the timeout period ends. After you complete the lifecycle hook or the timeout period expires, the instance transitions to the next state (`Pending:Proceed`).

1. The instance enters the `InService` state. However, before the instance reaches the `InService` state, if the Auto Scaling group is associated with an Elastic Load Balancing load balancer, the instance is registered with the load balancer.

# Prepare to add a lifecycle hook to your Auto Scaling group

Before you add a lifecycle hook to your Auto Scaling group, be sure that your user data script or notification target is set up correctly.
+ To use a user data script to perform custom actions on your instances as they are launching, you do not need to configure a notification target. However, you must have already created the launch template or launch configuration that specifies your user data script and associated it with your Auto Scaling group. For more information about user data scripts, see [Run commands on your Linux instance at launch](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html) in the *Amazon EC2 User Guide*. 
+ To signal Amazon EC2 Auto Scaling when the lifecycle action is complete, you must add the [CompleteLifecycleAction](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CompleteLifecycleAction.html) API call to the script, and you must manually create an IAM role with a policy that allows Auto Scaling instances to call this API. Your launch template or launch configuration must specify this role using an IAM instance profile that gets attached to your Amazon EC2 instances at launch. For more information, see [Complete a lifecycle action in an Auto Scaling group](completing-lifecycle-hooks.md) and [IAM role for applications that run on Amazon EC2 instances](us-iam-role.md).
+ To allow Lambda to signal Amazon EC2 Auto Scaling when the lifecycle action is complete, you must add the [CompleteLifecycleAction](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CompleteLifecycleAction.html) API call to the function code. You must also have attached an IAM policy to the function's execution role that gives Lambda permission to complete lifecycle actions. For more information, see [Tutorial: Configure a lifecycle hook that invokes a Lambda function](tutorial-lifecycle-hook-lambda.md).
+ To use a service such as Amazon SNS or Amazon SQS to perform a custom action, you must have already created the SNS topic or SQS queue and have its Amazon Resource Name (ARN) ready. You must also have already created the IAM role that gives Amazon EC2 Auto Scaling access to your SNS topic or SQS queue and have its ARN ready. For more information, see [Configure a notification target for lifecycle notifications](#lifecycle-hook-notification-target). 
**Note**  
By default, when you add a lifecycle hook in the console, Amazon EC2 Auto Scaling sends lifecycle event notifications to Amazon EventBridge. Using EventBridge or a user data script is a recommended best practice. To create a lifecycle hook that sends notifications directly to Amazon SNS, Amazon SQS, or AWS Lambda, use the AWS CLI, AWS CloudFormation, or an SDK to add the lifecycle hook.
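For the user data approach, the script itself must signal completion. The following is a minimal sketch of the final lines of such a script, assuming the AWS CLI is installed and the instance profile allows `autoscaling:CompleteLifecycleAction`; the hook and group names are placeholders:

```shell
# Sketch: signal lifecycle completion from a user data script.
# Assumes IMDSv2 and an instance profile that permits CompleteLifecycleAction.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
INSTANCE_ID=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/instance-id)

# Placeholder hook and group names; you may also need to pass --region.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name my-launch-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result CONTINUE \
  --instance-id "$INSTANCE_ID"
```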

## Configure a notification target for lifecycle notifications

You can add lifecycle hooks to an Auto Scaling group to perform custom actions when an instance enters a wait state. You can choose a target service to perform these actions depending on your preferred development approach.

There are four different approaches for implementing notification targets for lifecycle hooks:
+ **Amazon EventBridge** — Receive the notifications and perform the actions that you want.
+ **Amazon Simple Notification Service (Amazon SNS)** — Create a topic for publishing notifications. Clients can subscribe to the SNS topic and receive published messages using a supported protocol.
+ **Amazon Simple Queue Service (Amazon SQS)** — Exchange messages through a polling model.
+ **AWS Lambda** — Invoke a Lambda function that performs the action you want.

As a best practice, we recommend that you use EventBridge. The notifications sent to Amazon SNS and Amazon SQS contain the same information as the notifications that Amazon EC2 Auto Scaling sends to EventBridge. Before EventBridge, the standard practice was to send a notification to SNS or SQS and integrate another service with SNS or SQS to perform programmatic actions. Today, EventBridge gives you more options for which services you can target and makes it easier to handle events using serverless architecture. 

Remember, if you have a user data script in your launch template or launch configuration that configures your instances when they launch, you do not need to receive notifications to perform custom actions on your instances.

The following procedures cover how to set up your notification target.

**Topics**
+ [Route notifications to Lambda using EventBridge](#cloudwatch-events-notification)
+ [Receive notifications using Amazon SNS](#sns-notifications)
+ [Receive notifications using Amazon SQS](#sqs-notifications)
+ [Route notifications to AWS Lambda directly](#lambda-notification)
+ [Notification message example](#notification-message-example)

**Important**  
The EventBridge rule, Lambda function, Amazon SNS topic, and Amazon SQS queue that you use with lifecycle hooks must always be in the same Region where you created your Auto Scaling group.

### Route notifications to Lambda using EventBridge


You can configure an EventBridge rule to invoke a Lambda function when an instance enters a wait state. Amazon EC2 Auto Scaling emits a lifecycle event notification to EventBridge about the instance that is launching or terminating and a token that you can use to control the lifecycle action. For examples of these events, see [Amazon EC2 Auto Scaling event reference](ec2-auto-scaling-event-reference.md).

**Note**  
When you use the AWS Management Console to create an event rule, the console automatically adds the IAM permissions necessary to grant EventBridge permission to call your Lambda function. If you are creating an event rule using the AWS CLI, you need to grant this permission explicitly.   
For information about how to create event rules in the EventBridge console, see [Creating Amazon EventBridge rules that react to events](https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html) in the *Amazon EventBridge User Guide*.  
– or –   
For an introductory tutorial that is directed towards console users, see [Tutorial: Configure a lifecycle hook that invokes a Lambda function](tutorial-lifecycle-hook-lambda.md). This tutorial shows you how to create a simple Lambda function that listens for launch events and writes them out to a CloudWatch Logs log.

**To create an EventBridge rule that invokes a Lambda function**

1. Create a Lambda function by using the [Lambda console](https://console.aws.amazon.com/lambda/home#/functions) and note its Amazon Resource Name (ARN). For example, `arn:aws:lambda:region:123456789012:function:my-function`. You need the ARN to create an EventBridge target. For more information, see [Getting started with Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) in the *AWS Lambda Developer Guide*.

1. To create a rule that matches events for instance launch, use the following [put-rule](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/put-rule.html) command.

   ```
   aws events put-rule --name my-rule --event-pattern file://pattern.json --state ENABLED
   ```

   The following example shows the `pattern.json` for an instance launch lifecycle action. Replace *my-asg* with the name of your Auto Scaling group.

   ```
   {
     "source": [ "aws.autoscaling" ],
     "detail-type": [ "EC2 Instance-launch Lifecycle Action" ],
     "detail": {
         "AutoScalingGroupName": [ "my-asg" ]
      }
   }
   ```

   If the command runs successfully, EventBridge responds with the ARN of the rule. Note this ARN. You'll need to enter it in step 4.

   To create a rule that matches for other events, modify the event pattern. For more information, see [Use EventBridge to handle Auto Scaling events](automating-ec2-auto-scaling-with-eventbridge.md).

1. To specify the Lambda function to use as a target for the rule, use the following [put-targets](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/events/put-targets.html) command.

   ```
   aws events put-targets --rule my-rule --targets Id=1,Arn=arn:aws:lambda:region:123456789012:function:my-function
   ```

   In the preceding command, *my-rule* is the name that you specified for the rule in step 2, and the value for the `Arn` parameter is the ARN of the function that you created in step 1.

1. To add permissions that allow the rule to invoke your Lambda function, use the following Lambda [add-permission](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/lambda/add-permission.html) command. This command trusts the EventBridge service principal (`events.amazonaws.com`) and scopes permissions to the specified rule.

   ```
   aws lambda add-permission --function-name my-function --statement-id my-unique-id \
     --action 'lambda:InvokeFunction' --principal events.amazonaws.com --source-arn arn:aws:events:region:123456789012:rule/my-rule
   ```

   In the preceding command:
   + *my-function* is the name of the Lambda function that you want the rule to use as a target.
   + *my-unique-id* is a unique identifier that you define to describe the statement in the Lambda function policy.
   + `source-arn` is the ARN of the EventBridge rule.

   If the command runs successfully, you receive output similar to the following.

   ```
   {
     "Statement": "{\"Sid\":\"my-unique-id\",
       \"Effect\":\"Allow\",
       \"Principal\":{\"Service\":\"events.amazonaws.com\"},
       \"Action\":\"lambda:InvokeFunction\",
       \"Resource\":\"arn:aws:lambda:us-west-2:123456789012:function:my-function\",
       \"Condition\":
         {\"ArnLike\":
           {\"AWS:SourceArn\":
            \"arn:aws:events:us-west-2:123456789012:rule/my-rule\"}}}"
   }
   ```

   The `Statement` value is a JSON string version of the statement that was added to the Lambda function policy.

1. After you have followed these instructions, continue on to [Add lifecycle hooks to your Auto Scaling group](adding-lifecycle-hooks.md) as a next step.

### Receive notifications using Amazon SNS


You can use Amazon SNS to set up a notification target (an SNS topic) to receive notifications when a lifecycle action occurs. Amazon SNS then sends the notifications to the subscribed recipients. Until the subscription is confirmed, no notifications published to the topic are sent to the recipients. 

**To set up notifications using Amazon SNS**

1. Create an Amazon SNS topic by using either the [Amazon SNS console](https://console.aws.amazon.com/sns/) or the following [create-topic](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/sns/create-topic.html) command. Ensure that the topic is in the same Region as the Auto Scaling group that you're using. For more information, see [Getting started with Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-getting-started.html) in the *Amazon Simple Notification Service Developer Guide*. 

   ```
   aws sns create-topic --name my-sns-topic
   ```

1. Note the topic Amazon Resource Name (ARN), for example, `arn:aws:sns:region:123456789012:my-sns-topic`. You need it to create the lifecycle hook.

1. Create an IAM service role to give Amazon EC2 Auto Scaling access to your Amazon SNS notification target.

    **To give Amazon EC2 Auto Scaling access to your SNS topic** 

   1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. In the navigation pane on the left, choose **Roles**.

   1. Choose **Create role**.

   1. For **Select trusted entity**, choose **AWS service**.

   1. For your use case, under **Use cases for other AWS services**, choose **EC2 Auto Scaling** and then **EC2 Auto Scaling Notification Access**.

   1. Choose **Next** twice to go to the **Name, review, and create** page.

   1. For **Role name**, enter a name for the role (for example, **my-notification-role**) and choose **Create role**.

   1. On the **Roles** page, choose the role that you just created to open the **Summary** page. Make a note of the role **ARN**. For example, `arn:aws:iam::123456789012:role/my-notification-role`. You need it to create the lifecycle hook.

1. After you have followed these instructions, continue on to [Add lifecycle hooks (AWS CLI)](adding-lifecycle-hooks.md#adding-lifecycle-hooks-aws-cli) as a next step.

### Receive notifications using Amazon SQS


You can use Amazon SQS to set up a notification target to receive messages when a lifecycle action occurs. A queue consumer must then poll an SQS queue to act on these notifications.

**Important**  
FIFO queues are not compatible with lifecycle hooks.

**To set up notifications using Amazon SQS**

1. Create an Amazon SQS queue by using the [Amazon SQS console](https://console.aws.amazon.com/sqs/). Ensure that the queue is in the same Region as the Auto Scaling group that you're using. For more information, see [Getting started with Amazon SQS](https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-getting-started.html) in the *Amazon Simple Queue Service Developer Guide*. 

1. Note the queue ARN, for example, `arn:aws:sqs:us-west-2:123456789012:my-sqs-queue`. You need it to create the lifecycle hook.

1. Create an IAM service role to give Amazon EC2 Auto Scaling access to your Amazon SQS notification target.

    **To give Amazon EC2 Auto Scaling access to your SQS queue** 

   1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

   1. In the navigation pane on the left, choose **Roles**.

   1. Choose **Create role**.

   1. For **Select trusted entity**, choose **AWS service**.

   1. For your use case, under **Use cases for other AWS services**, choose **EC2 Auto Scaling** and then **EC2 Auto Scaling Notification Access**.

   1. Choose **Next** twice to go to the **Name, review, and create** page.

   1. For **Role name**, enter a name for the role (for example, **my-notification-role**) and choose **Create role**.

   1. On the **Roles** page, choose the role that you just created to open the **Summary** page. Make a note of the role **ARN**. For example, `arn:aws:iam::123456789012:role/my-notification-role`. You need it to create the lifecycle hook.

1. After you have followed these instructions, continue on to [Add lifecycle hooks (AWS CLI)](adding-lifecycle-hooks.md#adding-lifecycle-hooks-aws-cli) as a next step.
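After the queue, role, and lifecycle hook are in place, a consumer must poll the queue to act on notifications. A rough polling loop using the AWS CLI; the queue URL is a placeholder, and a production consumer would also parse each message, complete the lifecycle action, and delete the processed message:

```shell
# Sketch: poll an SQS queue for lifecycle notifications. The queue URL is a
# placeholder; long polling (wait-time-seconds) reduces empty responses.
QUEUE_URL=https://sqs.us-west-2.amazonaws.com/123456789012/my-sqs-queue
while true; do
  aws sqs receive-message \
    --queue-url "$QUEUE_URL" \
    --wait-time-seconds 20 \
    --max-number-of-messages 1
  # Parse the message body, perform the custom action, then call
  # complete-lifecycle-action and delete-message for the processed message.
done
```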

### Route notifications to AWS Lambda directly


You can use a Lambda function as a notification target when a lifecycle action occurs. 

**To route notifications to AWS Lambda directly**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) on the Lambda console.

1. Choose the Lambda function you want.

   If you want to create a new Lambda function, see [Create the Lambda function](lambda-custom-termination-policy.md#lambda-custom-termination-policy-create-function).

1. Choose the **Configuration** tab and then **Permissions**. 

1. Scroll down to **Resource-based policy** and then choose **Add permissions**. A resource-based policy grants the principal specified in the policy permission to invoke your function. In this case, the principal is the [Amazon EC2 Auto Scaling service-linked role](https://docs.aws.amazon.com/autoscaling/ec2/userguide/autoscaling-service-linked-role.html) that is associated with the Auto Scaling group.

1. In the **Policy statement** section, configure your permissions: 

   1. Choose **AWS account**.

   1. For **Principal**, enter the ARN of the calling service-linked role, for example, **arn:aws:iam::<aws-account-id>:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling**.

   1. For **Action**, choose **lambda:InvokeFunction**.

   1. For **Statement ID**, enter a unique statement ID, such as **AllowInvokeByAutoScaling**.

   1. Choose **Save**. 

1. After you have followed these instructions, continue on to [Add lifecycle hooks (AWS CLI)](adding-lifecycle-hooks.md#adding-lifecycle-hooks-aws-cli) as a next step.

### Notification message example


This section provides an example of the notification message that is sent to an Amazon SNS, Amazon SQS, or AWS Lambda notification target.

While the instance is in a wait state, a message is published to the configured Amazon SNS, Amazon SQS, or AWS Lambda notification target. 

The message includes the following information:
+ `Origin` — Where the EC2 instance is coming from (for example, `EC2`, `AutoScalingGroup`, or `WarmPool`).
+ `Destination` — Where the EC2 instance is going to (for example, `EC2`, `AutoScalingGroup`, or `WarmPool`).
+ `LifecycleActionToken` — The lifecycle action token.
+ `AccountId` — The AWS account ID.
+ `AutoScalingGroupName` — The name of the Auto Scaling group.
+ `LifecycleHookName` — The name of the lifecycle hook.
+ `EC2InstanceId` — The ID of the EC2 instance.
+ `LifecycleTransition` — The lifecycle hook type.
+ `NotificationMetadata` — The notification metadata.

The following is a notification message example.

```
Service: AWS Auto Scaling
Time: 2021-01-19T00:36:26.533Z
RequestId: 18b2ec17-3e9b-4c15-8024-ff2e8ce8786a
Origin: EC2
Destination: AutoScalingGroup
LifecycleActionToken: 71514b9d-6a40-4b26-8523-05e7ee35fa40
AccountId: 123456789012
AutoScalingGroupName: my-asg
LifecycleHookName: my-hook
EC2InstanceId: i-0598c7d356eba48d7
LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
NotificationMetadata: hook message metadata
```
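Each line of the plain-text form shown above is a `Key: Value` pair, so a consumer can pull out individual fields with standard text tools. A small illustrative sketch using abbreviated sample data:

```shell
# Sketch: extract a single field from the plain-text notification format
# shown above. The message content here is abbreviated sample data.
msg='AutoScalingGroupName: my-asg
LifecycleHookName: my-hook
EC2InstanceId: i-0598c7d356eba48d7
LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING'

instance_id=$(printf '%s\n' "$msg" | awk -F': ' '$1 == "EC2InstanceId" {print $2}')
echo "$instance_id"   # prints i-0598c7d356eba48d7
```

Note that notifications delivered through Amazon SQS or Lambda typically arrive wrapped in JSON, in which case a JSON-aware tool is a better fit than line-oriented parsing.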

#### Test notification message example


When you first add a lifecycle hook, a test notification message is published to the notification target. The following is a test notification message example.

```
Service: AWS Auto Scaling
Time: 2021-01-19T00:35:52.359Z
RequestId: 18b2ec17-3e9b-4c15-8024-ff2e8ce8786a
Event: autoscaling:TEST_NOTIFICATION
AccountId: 123456789012
AutoScalingGroupName: my-asg
AutoScalingGroupARN: arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:042cba90-ad2f-431c-9b4d-6d9055bcc9fb:autoScalingGroupName/my-asg
```

**Note**  
For examples of the events delivered from Amazon EC2 Auto Scaling to EventBridge, see [Amazon EC2 Auto Scaling event reference](ec2-auto-scaling-event-reference.md).

# Control instance retention with instance lifecycle policies


 Instance lifecycle policies provide protection against Amazon EC2 Auto Scaling terminations when a termination lifecycle action is abandoned. Unlike lifecycle hooks alone, instance lifecycle policies are designed to ensure that instances move to a retained state when graceful shutdown procedures don't complete successfully. 

## When to use instance lifecycle policies


 Use instance lifecycle policies when graceful shutdown of your application is not optional but mandatory and failed shutdowns require manual intervention. Common use cases include: 
+  Stateful applications that must complete data persistence before termination. 
+  Applications requiring extended draining periods that may exceed the maximum lifecycle hook timeout of 48 hours. 
+  Workloads handling sensitive data where failed or incomplete cleanup could result in data loss or corruption. 
+  Mission-critical services where abrupt shutdown causes availability impact. 

 For more information on how to gracefully handle instance termination, see [Design your applications to gracefully handle instance termination](gracefully-handle-instance-termination.md). 

## How instance lifecycle policies work with termination lifecycle hooks


 Instance lifecycle policies work in combination with termination lifecycle hooks, not as a replacement. The process follows several stages: 

1.  **Termination lifecycle actions execute.** When Amazon EC2 Auto Scaling selects an instance for termination, your termination lifecycle hooks are invoked and the instance enters the `Terminating:Wait` state to begin executing the termination lifecycle actions. 

1.  **Graceful shutdown attempt begins.** Your application, either running on the instance or via a control plane, receives the termination lifecycle action notification and begins graceful shutdown procedures such as draining connections, completing in-progress work, or transferring data. 

1.  **Termination lifecycle actions complete.** A termination lifecycle action can complete with `CONTINUE` or `ABANDON` result. 

1.  **The instance lifecycle policy evaluates the situation.** Without an instance lifecycle policy configured, the instance proceeds to termination immediately even if the termination lifecycle action was completed with `ABANDON` result. With an instance lifecycle policy configured to retain instances on `TerminateHookAbandon`, the instance moves to a retained state if the termination lifecycle action was completed with `ABANDON` result. 

1.  **Retained instances await manual action.** Instances in retained states continue to incur standard Amazon EC2 charges. These instances don't count toward your Auto Scaling group's desired capacity, so Auto Scaling launches replacement instances to maintain the desired size. Auto Scaling features such as instance refresh and max instance lifetime will also ignore retained instances. This allows you to complete cleanup procedures manually, recover data, or investigate why automated shutdown failed before manually terminating the instance. 

1.  **Manual termination occurs.** After you complete the necessary actions on the retained instance, you need to call the `TerminateInstanceInAutoScalingGroup` API to terminate the instance. 

# Configure instance retention


Set up your Amazon EC2 Auto Scaling group to retain instances when termination lifecycle actions fail.

 To use instance lifecycle policies in your Auto Scaling group, you must also configure a termination lifecycle hook. If you configure an instance lifecycle policy but don't have any termination lifecycle hooks, the policy has no effect. Instance lifecycle policies apply only when termination lifecycle actions are abandoned, not when they complete successfully with the `CONTINUE` result. 

 Instance lifecycle policies use retention triggers to determine when to retain an instance. The `TerminateHookAbandon` trigger causes retention in several scenarios: 
+  When you explicitly call the [ CompleteLifecycleAction ](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_CompleteLifecycleAction.html) API with the `ABANDON` result. 
+  When a termination lifecycle action with default result `ABANDON` times out because the heartbeat timeout is reached without receiving a heartbeat. 
+  When the global timeout is reached on a termination lifecycle action with default result `ABANDON`. The global timeout is 48 hours or 100 times the heartbeat timeout, whichever is smaller. 
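For example, if your shutdown automation detects an unrecoverable failure, it can explicitly abandon the termination lifecycle action from the AWS CLI. The hook name, group name, and instance ID below are placeholders:

```
# Abandon the termination lifecycle action. With an instance lifecycle
# policy that retains on TerminateHookAbandon, the instance then moves
# to a retained state instead of terminating immediately.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name my-termination-hook \
  --auto-scaling-group-name my-asg \
  --lifecycle-action-result ABANDON \
  --instance-id i-1234567890abcdef0
```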

------
#### [ Console ]

**To configure instance retention**

1. Open the Amazon EC2 Auto Scaling console.

1. Create your Auto Scaling group. (The instance lifecycle policy defaults to **Terminate**.)

1. Go to your Auto Scaling group details page and choose the **Instance management** tab.

1. In **Instance lifecycle policy for lifecycle hooks**, choose **Retain**.

1. Create your termination lifecycle hooks with:
   + Lifecycle transition set to **Instance terminate**
   + Default result set to **Abandon**

------
#### [ AWS CLI ]

**To configure instance retention**  
 Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command with an instance lifecycle policy: 

```
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name my-asg \
--launch-template LaunchTemplateName=my-template,Version='$Latest' \
--min-size 1 \
--max-size 3 \
--desired-capacity 2 \
--vpc-zone-identifier subnet-12345678 \
--instance-lifecycle-policy file://lifecycle-policy.json
```

Contents of lifecycle-policy.json:

```
{
    "RetentionTriggers": {
        "TerminateHookAbandon": "retain"
    }
}
```

**To add a termination lifecycle hook**  
Use the [put-lifecycle-hook](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-lifecycle-hook.html) command:

```
aws autoscaling put-lifecycle-hook \
--lifecycle-hook-name my-termination-hook \
--auto-scaling-group-name my-asg \
--lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
--default-result ABANDON \
--heartbeat-timeout 300
```
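To confirm the configuration after creating the group, you can describe it. This is a sketch; it assumes the instance lifecycle policy appears in the output under a field that mirrors the request parameter name:

```
# Inspect the group's configured instance lifecycle policy
# (the InstanceLifecyclePolicy output field name is an assumption).
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].InstanceLifecyclePolicy'
```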

------

# Manage retained instances


 Monitor and control Amazon EC2 instances that have been moved to a retained state. Use CloudWatch metrics to track retained instances, then manually terminate retained instances after completing your custom actions. 

 Retained instances do not count toward your Amazon EC2 Auto Scaling group's desired capacity. When an instance enters a retained state, Auto Scaling launches a replacement instance to maintain the desired capacity. For example, suppose your Auto Scaling group has a desired capacity of 10. When an instance enters the `Terminating:Retained` state, Auto Scaling launches a replacement instance to maintain the desired capacity of 10. You now have 11 running instances in total: 10 in your active group plus 1 retained instance. Standard Amazon EC2 charges for all 11 instances will apply until you manually terminate the retained instance. 

## Instance lifecycle states of retained instances


 Understand how instances transition through lifecycle states when instance lifecycle policies are used. Instances follow a specific path from normal termination through retention to final termination. 

*When retention is triggered, instances transition through these states:*

1. `Terminating` - Normal termination begins

1. `Terminating:Wait` - Lifecycle hook executes

1. `Terminating:Proceed` - Lifecycle actions wrap up (whether they succeeded or failed)

1. `Terminating:Retained` - Hook fails, instance retained for manual intervention

Warm pool instances take different lifecycle state paths depending on the scenario:

*Instances scaling back into the warm pool:*

1. `Warmed:Pending` - Normal warm pool transition begins

1. `Warmed:Pending:Wait` - Lifecycle hook executes

1. `Warmed:Pending:Proceed` - Lifecycle actions wrap up (whether they succeeded or failed)

1. `Warmed:Pending:Retained` - Hook fails, instance retained for manual intervention

*Instances being terminated from the warm pool:*

1. `Warmed:Terminating` - Normal termination begins

1. `Warmed:Terminating:Wait` - Lifecycle hook executes

1. `Warmed:Terminating:Proceed` - Lifecycle actions wrap up (whether they succeeded or failed)

1. `Warmed:Terminating:Retained` - Hook fails, instance retained for manual intervention

## Monitor retained instances


 Because retained Amazon EC2 instances incur costs and require manual intervention, monitoring them is essential. Amazon EC2 Auto Scaling provides several CloudWatch metrics to track retained instances. 

Enable group metrics to track retained instances:

```
aws autoscaling enable-metrics-collection \
--auto-scaling-group-name my-asg \
--metrics GroupTerminatingRetainedInstances \
--granularity "1Minute"
```

The available metrics are:
+  `GroupTerminatingRetainedInstances` shows the number of instances in the `Terminating:Retained` state. 
+  `GroupTerminatingRetainedCapacity` shows the capacity units represented by instances in the `Terminating:Retained` state. 
+  `WarmPoolTerminatingRetainedCapacity` tracks retained instances terminating from the warm pool. 
+  `WarmPoolPendingRetainedCapacity` tracks retained instances returning to the warm pool. 
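If your group uses a warm pool, you can enable all four retained-instance metrics in a single call. This is a sketch; `1Minute` is the only supported granularity:

```
# Enable group metrics for retained instances in one call.
aws autoscaling enable-metrics-collection \
  --auto-scaling-group-name my-asg \
  --granularity "1Minute" \
  --metrics GroupTerminatingRetainedInstances GroupTerminatingRetainedCapacity \
    WarmPoolTerminatingRetainedCapacity WarmPoolPendingRetainedCapacity
```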

 You can also check your Amazon EC2 Auto Scaling group's scaling activities to understand why instances were retained. Look for termination activities with `StatusCode: Cancelled` and status reason messages indicating lifecycle hook failures: 

```
aws autoscaling describe-scaling-activities \
--auto-scaling-group-name my-asg
```

 We recommend creating CloudWatch alarms on these metrics to alert you when instances enter a retained state. This helps you track cost implications and ensures you don't forget to clean up instances that require manual intervention. 
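As a sketch, the following alarm notifies an SNS topic whenever at least one instance is retained. The alarm name and topic ARN are placeholders:

```
# Alarm when GroupTerminatingRetainedInstances rises above zero.
aws cloudwatch put-metric-alarm \
  --alarm-name my-asg-retained-instances \
  --namespace AWS/AutoScaling \
  --metric-name GroupTerminatingRetainedInstances \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Maximum \
  --period 60 \
  --evaluation-periods 1 \
  --threshold 0 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-west-2:123456789012:my-alerts-topic
```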

## Terminate retained instances


After completing your custom actions, terminate your retained instances by calling the [ TerminateInstanceInAutoScalingGroup ](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_TerminateInstanceInAutoScalingGroup.html) API: 

```
aws autoscaling terminate-instance-in-auto-scaling-group \
--instance-id i-1234567890abcdef0 \
--no-should-decrement-desired-capacity
```
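To find the instances that are currently retained before terminating them, you can filter the group's instances by lifecycle state. This is a sketch using a JMESPath query:

```
# List the instance IDs in the Terminating:Retained state.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query "AutoScalingGroups[0].Instances[?LifecycleState=='Terminating:Retained'].InstanceId"
```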

# Retrieve the target lifecycle state through instance metadata

Each Auto Scaling instance that you launch goes through several lifecycle states. To invoke custom actions from within an instance that act on specific lifecycle state transitions, you must retrieve the target lifecycle state through instance metadata. 

For example, you might need a mechanism to detect instance termination from inside the instance to run some code on the instance before it's terminated. You can do this by writing code that polls for the lifecycle state of an instance directly from the instance. You can then add a lifecycle hook to the Auto Scaling group to keep the instance running until your code sends the **complete-lifecycle-action** command to continue. 

The Auto Scaling instance lifecycle has two primary steady states—`InService` and `Terminated`—and two side steady states—`Detached` and `Standby`. If you use a warm pool, the lifecycle has four additional steady states—`Warmed:Hibernated`, `Warmed:Running`, `Warmed:Stopped`, and `Warmed:Terminated`.

When an instance prepares to transition to one of the preceding steady states, Amazon EC2 Auto Scaling updates the value of the instance metadata item `autoscaling/target-lifecycle-state`. To get the target lifecycle state from within the instance, you must use the Instance Metadata Service to retrieve it from the instance metadata. 

**Note**  
*Instance metadata* is data about an Amazon EC2 instance that applications can use to query instance information. The *Instance Metadata Service* is an on-instance component that local code uses to access instance metadata. Local code can include user data scripts or applications running on the instance.

Local code can access instance metadata from a running instance using one of two methods: Instance Metadata Service Version 1 (IMDSv1) or Instance Metadata Service Version 2 (IMDSv2). IMDSv2 uses session-oriented requests and mitigates several types of vulnerabilities that could be used to try to access the instance metadata. For details about these two methods, see [Use IMDSv2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html) in the *Amazon EC2 User Guide*.

------
#### [ IMDSv2 ]

```
[ec2-user ~]$ TOKEN=`curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600"` \
&& curl -H "X-aws-ec2-metadata-token: $TOKEN" -v http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state
```

------
#### [ IMDSv1 ]

```
[ec2-user ~]$ curl http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state
```

------

The following is example output.

```
InService
```

The target lifecycle state is the state that the instance is transitioning to. The current lifecycle state is the state that the instance is in. These can be the same after the lifecycle action is complete and the instance finishes its transition to the target lifecycle state. You cannot retrieve the current lifecycle state of the instance from the instance metadata.

Amazon EC2 Auto Scaling started generating the target lifecycle state on March 10, 2022. If your instance transitions to one of the target lifecycle states after that date, the target lifecycle state item is present in your instance metadata. Otherwise, it is not present, and you receive an HTTP 404 error.
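You can check whether the item is present by inspecting the HTTP status code. This is a sketch using IMDSv2:

```
# Prints 200 if the target-lifecycle-state item exists, 404 otherwise.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state
```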

For more information about retrieving instance metadata, see [Retrieve instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html) in the *Amazon EC2 User Guide*.

For a tutorial that shows you how to create a lifecycle hook with a custom action in a user data script that uses the target lifecycle state, see [Tutorial: Use data script and instance metadata to retrieve lifecycle state](tutorial-lifecycle-hook-instance-metadata.md).

**Important**  
To ensure that you can invoke a custom action as soon as possible, your local code should poll IMDS frequently and retry on errors.
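A minimal polling loop from local code might look like the following. This is a sketch assuming IMDSv2; the hook and group names are placeholders, and the graceful shutdown tasks are up to you:

```
#!/bin/bash
# Poll the target lifecycle state and complete the termination
# lifecycle action once shutdown tasks finish.
while true; do
  TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
  state=$(curl -s --retry 3 -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state)
  if [ "$state" = "Terminated" ]; then
    # Run graceful shutdown tasks here, then send the callback.
    instance_id=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/instance-id)
    aws autoscaling complete-lifecycle-action \
      --lifecycle-action-result CONTINUE \
      --lifecycle-hook-name my-termination-hook \
      --auto-scaling-group-name my-asg \
      --instance-id "$instance_id"
    break
  fi
  sleep 5
done
```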

# Add lifecycle hooks to your Auto Scaling group


To put your Auto Scaling instances into a wait state and perform custom actions on them, you can add lifecycle hooks to your Auto Scaling group. Custom actions are performed as the instances launch or before they terminate. Instances remain in a wait state until you either complete the lifecycle action, or the timeout period ends.

After you create an Auto Scaling group from the AWS Management Console, you can add one or more lifecycle hooks to it, up to a total of 50 lifecycle hooks. You can also use the AWS CLI, CloudFormation, or an SDK to add lifecycle hooks to an Auto Scaling group as you are creating it.

By default, when you add a lifecycle hook in the console, Amazon EC2 Auto Scaling sends lifecycle event notifications to Amazon EventBridge. Using EventBridge or a user data script is a recommended best practice. To create a lifecycle hook that sends notifications directly to Amazon SNS, Amazon SQS, or AWS Lambda, you can use the [put-lifecycle-hook](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-lifecycle-hook.html) command, as shown in the examples in this topic.
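For example, with the default EventBridge integration you can route terminate lifecycle events to a target with a rule like the following. The rule name and the Lambda function ARN are placeholders:

```
# Match terminate lifecycle action events for one group.
aws events put-rule --name my-asg-terminate-hook-rule \
  --event-pattern '{
    "source": ["aws.autoscaling"],
    "detail-type": ["EC2 Instance-terminate Lifecycle Action"],
    "detail": {"AutoScalingGroupName": ["my-asg"]}
  }'

# Send matched events to a Lambda function.
aws events put-targets --rule my-asg-terminate-hook-rule \
  --targets Id=1,Arn=arn:aws:lambda:us-west-2:123456789012:function:my-hook-function
```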

**Topics**
+ [Add lifecycle hooks (console)](#adding-lifecycle-hooks-console)
+ [Add lifecycle hooks (AWS CLI)](#adding-lifecycle-hooks-aws-cli)

## Add lifecycle hooks (console)


Follow these steps to add lifecycle hooks to your Auto Scaling group. To add lifecycle hooks for scaling out (instances launching) and scaling in (instances terminating or returning to a warm pool), you must create two separate hooks. 

Before you begin, confirm that you have set up a custom action, as needed, as described in [Prepare to add a lifecycle hook to your Auto Scaling group](prepare-for-lifecycle-notifications.md).

**To add a lifecycle hook for scale out**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page. 

1. On the **Instance management** tab, in **Lifecycle hooks**, choose **Create lifecycle hook**.

1. To define a lifecycle hook for scale out (instances launching), do the following:

   1. For **Lifecycle hook name**, specify a name for the lifecycle hook.

   1. For **Lifecycle transition**, choose **Instance launch**.

   1. For **Heartbeat timeout**, specify the amount of time, in seconds, for instances to remain in a wait state when scaling out before the hook times out. The range is from `30` to `7200` seconds. Setting a long timeout period provides more time for your custom action to complete. Then, if you finish before the timeout period ends, send the [complete-lifecycle-action](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/complete-lifecycle-action.html) command to allow the instance to proceed to the next state. 

   1. For **Default result**, specify the action to take when the lifecycle hook timeout elapses or when an unexpected failure occurs. You can choose to either **CONTINUE** or **ABANDON**.
      + If you choose **CONTINUE**, the Auto Scaling group can proceed with any other lifecycle hooks and then put the instance into service.
      + If you choose **ABANDON**, the Auto Scaling group stops any remaining actions and terminates the instance immediately.

   1. (Optional) For **Notification metadata**, specify other information that you want to include when Amazon EC2 Auto Scaling sends a message to the notification target. 

1. Choose **Create**.

**To add a lifecycle hook for scale in**

1. Choose **Create lifecycle hook** to continue where you left off after creating a lifecycle hook for scale out.

1. To define a lifecycle hook for scale in (instances terminating or returning to a warm pool), do the following:

   1. For **Lifecycle hook name**, specify a name for the lifecycle hook.

   1. For **Lifecycle transition**, choose **Instance terminate**. 

   1. For **Heartbeat timeout**, specify the amount of time, in seconds, for instances to remain in a wait state when scaling in before the hook times out. We recommend a short timeout period of `30` to `120` seconds, depending on how much time you need to perform any final tasks, such as pulling EC2 logs from CloudWatch.

   1. For **Default result**, specify the action that the Auto Scaling group takes when the timeout elapses or if an unexpected failure occurs. Both **ABANDON** and **CONTINUE** let the instance terminate. 
      + If you choose **CONTINUE**, the Auto Scaling group can proceed with any remaining actions, such as other lifecycle hooks, before termination. 
      + If you choose **ABANDON**, the Auto Scaling group terminates the instance immediately. 

   1. (Optional) For **Notification metadata**, specify other information that you want to include when Amazon EC2 Auto Scaling sends a message to the notification target.

1. Choose **Create**.

## Add lifecycle hooks (AWS CLI)


Create and update lifecycle hooks using the [put-lifecycle-hook](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-lifecycle-hook.html) command.

To perform an action on scale out, use the following command.

```
aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-launch-hook  \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING
```

To perform an action on scale in, use the following command instead.

```
aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-termination-hook  \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING
```

To receive notifications using Amazon SNS or Amazon SQS, add the `--notification-target-arn` and `--role-arn` options. To receive notifications using AWS Lambda, add the `--notification-target-arn` option.

The following example creates a lifecycle hook that specifies an SNS topic named `my-sns-topic` as the notification target.

```
aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-termination-hook  \
  --auto-scaling-group-name my-asg \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --notification-target-arn arn:aws:sns:region:123456789012:my-sns-topic \
  --role-arn arn:aws:iam::123456789012:role/my-notification-role
```

The topic receives a test notification with the following key-value pair.

```
"Event": "autoscaling:TEST_NOTIFICATION"
```

By default, the [put-lifecycle-hook](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-lifecycle-hook.html) command creates a lifecycle hook with a heartbeat timeout of `3600` seconds (one hour). 

To change the heartbeat timeout for an existing lifecycle hook, add the `--heartbeat-timeout` option, as shown in the following example.

```
aws autoscaling put-lifecycle-hook --lifecycle-hook-name my-termination-hook \
  --auto-scaling-group-name my-asg --heartbeat-timeout 120
```

If an instance is already in a wait state, you can prevent the lifecycle hook from timing out by recording a heartbeat, using the [record-lifecycle-action-heartbeat](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/record-lifecycle-action-heartbeat.html) CLI command. This extends the timeout period by the timeout value specified when you created the lifecycle hook. If you finish before the timeout period ends, you can send the [complete-lifecycle-action](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/complete-lifecycle-action.html) CLI command to allow the instance to proceed to the next state. For more information and examples, see [Complete a lifecycle action in an Auto Scaling group](completing-lifecycle-hooks.md).

# Complete a lifecycle action in an Auto Scaling group


When an Auto Scaling group responds to a lifecycle event, it puts the instance in a wait state and sends an event notification. You can perform a custom action while the instance is in a wait state.

Completing the lifecycle action with a result of `CONTINUE` is helpful if you finish before the timeout period has expired. If you don't complete the lifecycle action, the lifecycle hook goes to the status that you specified for **Default result** after the timeout period ends.

**Topics**
+ [Complete a lifecycle action (manual)](#completing-lifecycle-hooks-aws-cli)
+ [Complete a lifecycle action (automatic)](#completing-lifecycle-hooks-automatic)

## Complete a lifecycle action (manual)


The following procedure is for the command line interface and is not supported in the console. Information that must be replaced, such as the instance ID or the name of an Auto Scaling group, is shown in italics. 

**To complete a lifecycle action (AWS CLI)**

1. If you need more time to complete the custom action, use the [record-lifecycle-action-heartbeat](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/record-lifecycle-action-heartbeat.html) command to restart the timeout period and keep the instance in a wait state. For example, if the timeout period is one hour, and you call this command after 30 minutes, the instance remains in a wait state for an additional hour, or a total of 90 minutes. 

   You can specify the lifecycle action token that you received with the [notification](prepare-for-lifecycle-notifications.md#notification-message-example), as shown in the following command.

   ```
   aws autoscaling record-lifecycle-action-heartbeat --lifecycle-hook-name my-launch-hook \
     --auto-scaling-group-name my-asg --lifecycle-action-token bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635
   ```

   Alternatively, you can specify the ID of the instance that you received with the [notification](prepare-for-lifecycle-notifications.md#notification-message-example), as shown in the following command.

   ```
   aws autoscaling record-lifecycle-action-heartbeat --lifecycle-hook-name my-launch-hook \
     --auto-scaling-group-name my-asg --instance-id i-1a2b3c4d
   ```

1. If you finish the custom action before the timeout period ends, use the [complete-lifecycle-action](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/complete-lifecycle-action.html) command so that the Auto Scaling group can continue launching or terminating the instance. You can specify the lifecycle action token, as shown in the following command.

   ```
   aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE \
     --lifecycle-hook-name my-launch-hook --auto-scaling-group-name my-asg \
     --lifecycle-action-token bcd2f1b8-9a78-44d3-8a7a-4dd07d7cf635
   ```

   Alternatively, you can specify the ID of the instance, as shown in the following command.

   ```
   aws autoscaling complete-lifecycle-action --lifecycle-action-result CONTINUE \
     --instance-id i-1a2b3c4d --lifecycle-hook-name my-launch-hook \
     --auto-scaling-group-name my-asg
   ```

## Complete a lifecycle action (automatic)


If you have a user data script that configures your instances after they launch, you do not need to manually complete lifecycle actions. You can add the [complete-lifecycle-action](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/complete-lifecycle-action.html) command to the script. The script can retrieve the instance ID from the instance metadata and signal Amazon EC2 Auto Scaling when the bootstrap scripts have completed successfully. 

If you are not doing so already, update your script to retrieve the instance ID of the instance from the instance metadata. For more information, see [Retrieve instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html) in the *Amazon EC2 User Guide*.

If you use Lambda, you can also set up a callback in your function's code to let the lifecycle of the instance proceed if the custom action is successful. For more information, see [Tutorial: Configure a lifecycle hook that invokes a Lambda function](tutorial-lifecycle-hook-lambda.md).

# Tutorial: Use data script and instance metadata to retrieve lifecycle state

A common way to create custom actions for lifecycle hooks is to use notifications that Amazon EC2 Auto Scaling sends to other services, such as Amazon EventBridge. However, you can avoid having to create additional infrastructure by instead using a user data script to move the code that configures instances and completes the lifecycle action into the instances themselves. 

The following tutorial shows you how to get started using a user data script and instance metadata. You create a basic Auto Scaling group configuration with a user data script that reads the [target lifecycle state](retrieving-target-lifecycle-state-through-imds.md) of the instances in your group and performs a callback action at a specific phase of an instance's lifecycle to continue the launch process.

The following illustration summarizes the flow for a scale-out event when you use a user data script to perform a custom action. After an instance launches, the lifecycle of the instance is paused until the lifecycle hook is completed, either by timing out or by Amazon EC2 Auto Scaling receiving a signal to continue. 

![\[The flow for a scale-out event when you use a user data script to perform a custom action.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/lifecycle-hook-user-data-script.png)


**Topics**
+ [Step 1: Create an IAM role with permissions to complete lifecycle actions](#instance-metadata-create-iam-role)
+ [Step 2: Create a launch template and include the IAM role and a user data script](#instance-metadata-create-hello-world-function)
+ [Step 3: Create an Auto Scaling group](#instance-metadata-create-auto-scaling-group)
+ [Step 4: Add a lifecycle hook](#instance-metadata-add-lifecycle-hook)
+ [Step 5: Test and verify the functionality](#instance-metadata-testing-hook)
+ [Step 6: Clean up](#instance-metadata-lifecycle-hooks-tutorial-cleanup)
+ [Related resources](#instance-metadata-lifecycle-hooks-tutorial-related-resources)

## Step 1: Create an IAM role with permissions to complete lifecycle actions


When you use the AWS CLI or an AWS SDK to send a callback to complete lifecycle actions, you must use an IAM role with permissions to complete lifecycle actions. 

**To create the policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home?#/policies) of the IAM console, and then choose **Create policy**.

1. Choose the **JSON** tab.

1. In the **Policy Document** box, copy and paste the following policy document into the box. Replace the **sample text** with your account number and the name of the Auto Scaling group that you want to create (**TestAutoScalingEvent-group**).

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "autoscaling:CompleteLifecycleAction"
         ],
         "Resource": "arn:aws:autoscaling:*:123456789012:autoScalingGroup:*:autoScalingGroupName/TestAutoScalingEvent-group"
       }
     ]
   }
   ```

------

1. Choose **Next**. 

1. For **Policy name**, enter **TestAutoScalingEvent-policy**. Choose **Create policy**.

When you finish creating the policy, you can create a role that uses it.

**To create the role**

1. In the navigation pane on the left, choose **Roles**.

1. Choose **Create role**.

1. For **Select trusted entity**, choose **AWS service**.

1. For your use case, choose **EC2** and then choose **Next**. 

1. Under **Add permissions**, choose the policy that you created (**TestAutoScalingEvent-policy**). Then, choose **Next**. 

1. On the **Name, review, and create** page, for **Role name**, enter **TestAutoScalingEvent-role** and choose **Create role**. 

## Step 2: Create a launch template and include the IAM role and a user data script


Create a launch template to use with your Auto Scaling group. Include the IAM role you created and the provided sample user data script.

**To create a launch template**

1. Open the [Launch templates page](https://console.aws.amazon.com/ec2/v2/#LaunchTemplates) of the Amazon EC2 console.

1. Choose **Create launch template**.

1. For **Launch template name**, enter **TestAutoScalingEvent-template**.

1. Under **Auto Scaling guidance**, select the check box. 

1. For **Application and OS Images (Amazon Machine Image)**, choose Amazon Linux 2 (HVM), SSD Volume Type, 64-bit (x86) from the **Quick Start** list. 

1. For **Instance type**, choose a type of Amazon EC2 instance (for example, "t2.micro").

1. For **Advanced details**, expand the section to view the fields. 

1. For **IAM instance profile**, choose the IAM instance profile name of your IAM role (**TestAutoScalingEvent-role**). An instance profile is a container for an IAM role that allows Amazon EC2 to pass the IAM role to an instance when the instance is launched.

   When you used the IAM console to create an IAM role, the console automatically created an instance profile with the same name as its corresponding role.

1. For **User data**, copy and paste the following sample user data script into the field. Replace the sample text for `group_name` with the name of the Auto Scaling group that you want to create and `region` with the AWS Region you want your Auto Scaling group to use.

   ```
   #!/bin/bash
   
   group_name='TestAutoScalingEvent-group'
   region='us-west-2'
   
   function token {
       echo "X-aws-ec2-metadata-token: $(curl -s -X PUT 'http://169.254.169.254/latest/api/token' -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')"
   }
   
   function get_target_state {
       curl -H "$(token)" -s http://169.254.169.254/latest/meta-data/autoscaling/target-lifecycle-state
   }
   
   function get_instance_id {
       curl -H "$(token)" -s http://169.254.169.254/latest/meta-data/instance-id
   }
   
   function complete_lifecycle_action {
       echo "$instance_id"
       echo "$region"
       aws autoscaling complete-lifecycle-action \
         --lifecycle-hook-name TestAutoScalingEvent-hook \
         --auto-scaling-group-name "$group_name" \
         --lifecycle-action-result CONTINUE \
         --instance-id "$instance_id" \
         --region "$region"
   }
   
   function main {
       instance_id=$(get_instance_id)
       while true
       do
           target_state=$(get_target_state)
           if [ "$target_state" = "InService" ]; then
               # Change the hostname
               hostname "${group_name}-${instance_id}"
               # Send the callback
               complete_lifecycle_action
               break
           fi
           echo "$target_state"
           sleep 5
       done
   }
   
   main
   ```

   This simple user data script does the following:
   + Retrieves the instance ID and the target lifecycle state from the instance metadata
   + Polls the instance metadata repeatedly until the target lifecycle state changes to `InService`
   + When the target lifecycle state is `InService`, changes the hostname of the instance to the name of the Auto Scaling group followed by the instance ID
   + Sends a callback by calling the **complete-lifecycle-action** CLI command to signal Amazon EC2 Auto Scaling to `CONTINUE` the EC2 launch process
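
   You can sanity-check the polling logic locally before launching an instance by stubbing the metadata lookup with a fixed sequence of states. This is an illustrative sketch only; the `states` list stands in for the IMDS responses:

   ```shell
   # Stub: a fixed sequence of target lifecycle states (illustrative values).
   states="Pending InService"
   poll=0
   target_state=""
   while [ "$target_state" != "InService" ]; do
       poll=$((poll + 1))
       # In the real script, this line is a curl call to the
       # target-lifecycle-state instance metadata category.
       target_state=$(echo "$states" | cut -d' ' -f"$poll")
   done
   echo "reached InService after $poll polls"
   ```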

1. Choose **Create launch template**.

1. On the confirmation page, choose **Create Auto Scaling group**.

**Note**  
For other examples that you can use as a reference for developing your user data script, see the [GitHub repository](https://github.com/aws-samples/amazon-ec2-auto-scaling-group-examples) for Amazon EC2 Auto Scaling.

## Step 3: Create an Auto Scaling group


After you create your launch template, create an Auto Scaling group.

**To create an Auto Scaling group**

1. On the **Choose launch template or configuration** page, for **Auto Scaling group name**, enter a name for your Auto Scaling group (**TestAutoScalingEvent-group**).

1. Choose **Next** to go to the **Choose instance launch options** page. 

1. For **Network**, choose a VPC.

1. For **Availability Zones and subnets**, choose one or more subnets from one or more Availability Zones.

1. In the **Instance type requirements** section, use the default setting to simplify this step. (Do not override the launch template.) For this tutorial, you will launch only one On-Demand Instance using the instance type specified in your launch template. 

1. Choose **Skip to review** at the bottom of the screen. 

1. On the **Review** page, review the details of your Auto Scaling group, and then choose **Create Auto Scaling group**.
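
If you prefer the AWS CLI, the console steps above roughly correspond to the following call. This is a sketch: the subnet ID is a placeholder for a subnet in your VPC, and the capacity values are illustrative choices for this tutorial. It is wrapped in a function so that nothing runs until you invoke it with valid AWS credentials.

```shell
# Sketch of the equivalent CLI call. Replace the placeholder subnet ID
# with a subnet from your VPC before invoking create_test_group.
create_test_group() {
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name TestAutoScalingEvent-group \
      --launch-template LaunchTemplateName=TestAutoScalingEvent-template \
      --min-size 0 \
      --max-size 1 \
      --desired-capacity 0 \
      --vpc-zone-identifier subnet-0123456789abcdef0 \
      --region us-west-2
}
```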

## Step 4: Add a lifecycle hook


Add a lifecycle hook to hold the instance in a wait state until your lifecycle action is complete.

**To add a lifecycle hook**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page. 

1. In the lower pane, on the **Instance management** tab, in **Lifecycle hooks**, choose **Create lifecycle hook**.

1. To define a lifecycle hook for scale out (instances launching), do the following:

   1. For **Lifecycle hook name**, enter **TestAutoScalingEvent-hook**.

   1. For **Lifecycle transition**, choose **Instance launch**.

   1. For **Heartbeat timeout**, enter **300** for the number of seconds to wait for a callback from your user data script.

   1. For **Default result**, choose **ABANDON**. If the hook times out without receiving a callback from your user data script, the Auto Scaling group terminates the new instance.

   1. (Optional) Keep **Notification metadata** blank.

1. Choose **Create**.
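
The same hook can also be created with the AWS CLI. A sketch using the names and values from this tutorial, wrapped in a function so it only runs when you call it in an environment with AWS credentials:

```shell
# Create the launch lifecycle hook described above.
create_launch_hook() {
    aws autoscaling put-lifecycle-hook \
      --lifecycle-hook-name TestAutoScalingEvent-hook \
      --auto-scaling-group-name TestAutoScalingEvent-group \
      --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
      --heartbeat-timeout 300 \
      --default-result ABANDON
}
```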

## Step 5: Test and verify the functionality


To test the functionality, update the Auto Scaling group by increasing the desired capacity of the Auto Scaling group by 1. The user data script runs and starts to check the instance's target lifecycle state soon after the instance launches. The script changes the hostname and sends a callback action when the target lifecycle state is `InService`. This usually takes only a few seconds to finish.

**To increase the size of the Auto Scaling group**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page, while the top rows of the upper pane remain visible. 

1. In the lower pane, on the **Details** tab, choose **Group details**, **Edit**.

1. For **Desired capacity**, increase the current value by 1.

1. Choose **Update**. While the instance is being launched, the **Status** column in the upper pane displays a status of *Updating capacity*. 
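
As an alternative to the console, you can increase the desired capacity from the AWS CLI. A sketch (function-wrapped so it runs only when you invoke it with AWS credentials configured):

```shell
# Read the group's current desired capacity, then increase it by 1.
scale_out_by_one() {
    current=$(aws autoscaling describe-auto-scaling-groups \
      --auto-scaling-group-names TestAutoScalingEvent-group \
      --query 'AutoScalingGroups[0].DesiredCapacity' \
      --output text)
    aws autoscaling set-desired-capacity \
      --auto-scaling-group-name TestAutoScalingEvent-group \
      --desired-capacity $((current + 1))
}
```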

After increasing the desired capacity, you can verify from the description of scaling activities that your instance launched successfully and was not terminated. 

**To view the scaling activity**

1. Return to the **Auto Scaling groups** page and select your group.

1. On the **Activity** tab, under **Activity history**, the **Status** column shows whether your Auto Scaling group has successfully launched an instance. 

1. If the user data script fails, after the timeout period passes, you see a scaling activity with a status of `Cancelled` and a status message of `Instance failed to complete user's Lifecycle Action: Lifecycle Action with token e85eb647-4fe0-4909-b341-a6c42EXAMPLE was abandoned: Lifecycle Action Completed with ABANDON Result`.
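
You can also list recent scaling activities from the AWS CLI; a sketch (invoke the function where AWS credentials are configured):

```shell
# Show the status and description of the most recent scaling activities.
show_recent_activities() {
    aws autoscaling describe-scaling-activities \
      --auto-scaling-group-name TestAutoScalingEvent-group \
      --max-items 5 \
      --query 'Activities[].[StatusCode,Description]' \
      --output table
}
```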

## Step 6: Clean up


If you are done working with the resources that you created for this tutorial, use the following steps to delete them.

**To delete the lifecycle hook**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group.

1. On the **Instance management** tab, in **Lifecycle hooks**, choose the lifecycle hook (`TestAutoScalingEvent-hook`).

1. Choose **Actions**, **Delete**.

1. Choose **Delete** again to confirm.

**To delete the launch template**

1. Open the [Launch templates page](https://console.aws.amazon.com/ec2/v2/#LaunchTemplates) of the Amazon EC2 console.

1. Select your launch template (`TestAutoScalingEvent-template`) and then choose **Actions**, **Delete template**.

1. When prompted for confirmation, type **Delete** to confirm deleting the specified launch template and then choose **Delete**.

If you are done working with the example Auto Scaling group, delete it. You can also delete the IAM role and permissions policy that you created.

**To delete the Auto Scaling group**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group (`TestAutoScalingEvent-group`) and choose **Delete**. 

1. When prompted for confirmation, type **delete** to confirm deleting the specified Auto Scaling group and then choose **Delete**.

   A loading icon in the **Name** column indicates that the Auto Scaling group is being deleted. It takes a few minutes to terminate the instances and delete the group. 
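
If you prefer the AWS CLI for cleanup, a sketch of the equivalent call (function-wrapped so it runs only when invoked with AWS credentials):

```shell
# Delete the tutorial group and terminate its instances in one call.
# The --force-delete option deletes the group without waiting for the
# instances to terminate first.
delete_test_group() {
    aws autoscaling delete-auto-scaling-group \
      --auto-scaling-group-name TestAutoScalingEvent-group \
      --force-delete
}
```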

**To delete the IAM role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home?#/roles) of the IAM console.

1. Select the role that you created (`TestAutoScalingEvent-role`).

1. Choose **Delete**.

1. When prompted for confirmation, type the name of the role and then choose **Delete**.

**To delete the IAM policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home?#/policies) of the IAM console.

1. Select the policy that you created (`TestAutoScalingEvent-policy`).

1. Choose **Actions**, **Delete**.

1. When prompted for confirmation, type the name of the policy and then choose **Delete**.

## Related resources


The following related topics can be helpful as you develop code that invokes actions on instances based on data available in the instance metadata.
+ [Retrieve the target lifecycle state through instance metadata](retrieving-target-lifecycle-state-through-imds.md). This section describes the lifecycle state for other use cases, such as instance termination.
+ [Add lifecycle hooks (console)](adding-lifecycle-hooks.md#adding-lifecycle-hooks-console). This procedure shows you how to add lifecycle hooks for both scale out (instances launching) and scale in (instances terminating or returning to a warm pool).
+ [Instance metadata categories](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories) in the *Amazon EC2 User Guide*. This topic lists all categories of instance metadata that you can use to invoke actions on EC2 instances.

For a tutorial that shows you how to use Amazon EventBridge to create rules that invoke Lambda functions based on events that happen to the instances in your Auto Scaling group, see [Tutorial: Configure a lifecycle hook that invokes a Lambda function](tutorial-lifecycle-hook-lambda.md).

# Tutorial: Configure a lifecycle hook that invokes a Lambda function


In this exercise, you create an Amazon EventBridge rule that includes a filter pattern that, when matched, invokes an AWS Lambda function as the rule target. We provide the filter pattern and sample function code to use. 

If everything is configured correctly, at the end of this tutorial, the Lambda function performs a custom action when instances launch. The custom action simply logs the event in the CloudWatch Logs log stream associated with the Lambda function.

The Lambda function also performs a callback to let the lifecycle of the instance proceed if this action is successful, but lets the instance abandon the launch and terminate if the action fails.

The following illustration summarizes the flow for a scale-out event when you use a Lambda function to perform a custom action. After an instance launches, the lifecycle of the instance is paused until the lifecycle hook is completed, either by timing out or by Amazon EC2 Auto Scaling receiving a signal to continue. 

![\[The flow for a scale-out event when you use a Lambda function to perform a custom action.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/lifecycle-hook-lambda-function.png)


**Note**  
Depending on your use case, you can configure a lifecycle hook by following the steps below and creating an EventBridge rule. Or, you can use a Lambda function to configure a lifecycle hook directly without creating an EventBridge rule.

**Topics**
+ [Prerequisites](#lambda-hello-world-tutorial-prerequisites)
+ [Step 1: Create an IAM role with permissions to complete lifecycle actions](#lambda-create-iam-role)
+ [Step 2: Create a Lambda function](#lambda-create-hello-world-function)
+ [Step 3: Create an EventBridge rule](#lambda-create-rule)
+ [Step 4: Add a lifecycle hook](#lambda-add-lifecycle-hook)
+ [Step 5: Test and verify the event](#lambda-testing-hook-notifications)
+ [Step 6: Clean up](#lambda-lifecycle-hooks-tutorial-cleanup)
+ [Related resources](#lambda-lifecycle-hooks-tutorial-related-resources)

## Prerequisites


Before you begin this tutorial, create an Auto Scaling group, if you don't have one already. To create an Auto Scaling group, open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console and choose **Create Auto Scaling group**.

## Step 1: Create an IAM role with permissions to complete lifecycle actions


Before you create a Lambda function, you must first create an execution role and a permissions policy to allow Lambda to complete lifecycle hooks.

**To create the policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home?#/policies) of the IAM console, and then choose **Create policy**.

1. Choose the **JSON** tab.

1. In the **Policy Document** box, paste the following policy document into the box, replacing the sample account number (`123456789012`) and Auto Scaling group name (`my-asg`) with your own.


   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "autoscaling:CompleteLifecycleAction"
         ],
         "Resource": "arn:aws:autoscaling:*:123456789012:autoScalingGroup:*:autoScalingGroupName/my-asg"
       }
     ]
   }
   ```


1. Choose **Next**. 

1. For **Policy name**, enter **LogAutoScalingEvent-policy**. Choose **Create policy**.
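
If you prefer the AWS CLI, a sketch of the equivalent call. It assumes you saved the policy document above to a local file named `policy.json` (a name chosen for this example), and it is wrapped in a function so it runs only when you invoke it with AWS credentials:

```shell
# Create the permissions policy from a local JSON file.
create_lifecycle_policy() {
    aws iam create-policy \
      --policy-name LogAutoScalingEvent-policy \
      --policy-document file://policy.json
}
```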

When you finish creating the policy, you can create a role that uses it.

**To create the role**

1. In the navigation pane on the left, choose **Roles**.

1. Choose **Create role**.

1. For **Select trusted entity**, choose **AWS service**.

1. For your use case, choose **Lambda** and then choose **Next**. 

1. Under **Add permissions**, choose the policy that you created (**LogAutoScalingEvent-policy**) and the policy named **AWSLambdaBasicExecutionRole**. Then, choose **Next**. 
**Note**  
The **AWSLambdaBasicExecutionRole** policy has the permissions that the function needs to write logs to CloudWatch Logs.

1. On the **Name, review, and create** page, for **Role name**, enter **LogAutoScalingEvent-role** and choose **Create role**.

## Step 2: Create a Lambda function


Create a Lambda function to serve as the target for events. The sample Lambda function, written in Node.js, is invoked by EventBridge when a matching event is emitted by Amazon EC2 Auto Scaling.

**To create a Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) on the Lambda console.

1. Choose **Create function**, **Author from scratch**.

1. Under **Basic information**, for **Function name**, enter **LogAutoScalingEvent**.

1. For **Runtime**, choose **Node.js 18.x**.

1. Scroll down and choose **Change default execution role**, and then for **Execution role**, choose **Use an existing role**.

1. For **Existing role**, choose **LogAutoScalingEvent-role**.

1. Leave the other default values.

1. Choose **Create function**. You are returned to the function's code and configuration. 

1. With your `LogAutoScalingEvent` function still open in the console, under **Code source**, in the editor, paste the following sample code into the file named `index.mjs`.

   ```
   import { AutoScalingClient, CompleteLifecycleActionCommand } from "@aws-sdk/client-auto-scaling";
   export const handler = async(event) => {
     console.log('LogAutoScalingEvent');
     console.log('Received event:', JSON.stringify(event, null, 2));
     var autoscaling = new AutoScalingClient({ region: event.region });
     var eventDetail = event.detail;
     var params = {
       AutoScalingGroupName: eventDetail['AutoScalingGroupName'], /* required */
       LifecycleActionResult: 'CONTINUE', /* required */
       LifecycleHookName: eventDetail['LifecycleHookName'], /* required */
       InstanceId: eventDetail['EC2InstanceId'],
       LifecycleActionToken: eventDetail['LifecycleActionToken']
     };
     var response;
     const command = new CompleteLifecycleActionCommand(params);
     try {
       var data = await autoscaling.send(command);
       console.log(data); // successful response
       response = {
         statusCode: 200,
         body: JSON.stringify('SUCCESS'),
       };
     } catch (err) {
       console.log(err, err.stack); // an error occurred
       response = {
         statusCode: 500,
         body: JSON.stringify('ERROR'),
       };
     }
     return response;
   };
   ```

   This code logs the event and then completes the lifecycle action, so that at the end of this tutorial you can see the event appear in the CloudWatch Logs log stream that's associated with this Lambda function. 

1. Choose **Deploy**. 
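
For reference, the event that EventBridge delivers to the function has roughly the following shape. All values here are illustrative; the fields under `detail` are the ones the function reads.

```
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "EC2 Instance-launch Lifecycle Action",
  "source": "aws.autoscaling",
  "account": "123456789012",
  "time": "2021-08-26T01:13:23Z",
  "region": "us-west-2",
  "resources": [
    "arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:870a54cc-EXAMPLE:autoScalingGroupName/my-asg"
  ],
  "detail": {
    "LifecycleActionToken": "e85eb647-4fe0-4909-b341-a6c42EXAMPLE",
    "AutoScalingGroupName": "my-asg",
    "LifecycleHookName": "LogAutoScalingEvent-hook",
    "EC2InstanceId": "i-1234567890abcdef0",
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING"
  }
}
```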

## Step 3: Create an EventBridge rule


Create an EventBridge rule to run your Lambda function. For more information about using EventBridge, see [Use EventBridge to handle Auto Scaling events](automating-ec2-auto-scaling-with-eventbridge.md).

**To create a rule using the console**

1. Open the [EventBridge console](https://console.aws.amazon.com/events/).

1. In the navigation pane, choose **Rules**.

1. Choose **Create rule**.

1. For **Define rule detail**, do the following:

   1. For **Name**, enter **LogAutoScalingEvent-rule**.

   1. For **Event bus**, choose **default**. When an AWS service in your account generates an event, it always goes to your account's default event bus.

   1. For **Rule type**, choose **Rule with an event pattern**.

   1. Choose **Next**.

1. For **Build event pattern**, do the following:

   1. For **Event source**, choose **AWS events or EventBridge partner events**.

   1. Scroll down to **Event pattern**, and do the following:

      1. For **Event source**, choose **AWS services**.

      1. For **AWS service**, choose **Auto Scaling**.

      1. For **Event type**, choose **Instance Launch and Terminate**.

      1. By default, the rule matches any scale-in or scale-out event. To create a rule that notifies you when there is a scale-out event and an instance is put into a wait state due to a lifecycle hook, choose **Specific instance event(s)** and select **EC2 Instance-launch Lifecycle Action**.

      1. By default, the rule matches any Auto Scaling group in the Region. To make the rule match a specific Auto Scaling group, choose **Specific group name(s)** and select the group.

      1. Choose **Next**.

1. For **Select target(s)**, do the following:

   1. For **Target types**, choose **AWS service**.

   1. For **Select a target**, choose **Lambda function**.

   1. For **Function**, choose **LogAutoScalingEvent**.

   1. Choose **Next** twice.

1. On the **Review and create** page, choose **Create rule**.
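
The console selections for **Build event pattern** above generate an event pattern equivalent to the following JSON, shown here for reference. If you chose a specific Auto Scaling group, the console also adds a `detail.AutoScalingGroupName` entry with the group name.

```
{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance-launch Lifecycle Action"]
}
```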

## Step 4: Add a lifecycle hook


In this section, you add a lifecycle hook so that Lambda runs your function on instances at launch.

**To add a lifecycle hook**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page. 

1. In the lower pane, on the **Instance management** tab, in **Lifecycle hooks**, choose **Create lifecycle hook**.

1. To define a lifecycle hook for scale out (instances launching), do the following:

   1. For **Lifecycle hook name**, enter **LogAutoScalingEvent-hook**.

   1. For **Lifecycle transition**, choose **Instance launch**.

   1. For **Heartbeat timeout**, enter **300** for the number of seconds to wait for a callback from your Lambda function.

   1. For **Default result**, choose **ABANDON**. This means that the Auto Scaling group will terminate a new instance if the hook times out without receiving a callback from your Lambda function.

   1. (Optional) Leave **Notification metadata** empty. The event data that we pass to EventBridge contains all of the necessary information to invoke the Lambda function.

1. Choose **Create**.

## Step 5: Test and verify the event


To test the event, update the Auto Scaling group by increasing the desired capacity of the Auto Scaling group by 1. Your Lambda function is invoked within a few seconds after increasing the desired capacity.

**To increase the size of the Auto Scaling group**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group. A split pane opens up in the bottom of the page, while the top rows of the upper pane remain visible. 

1. In the lower pane, on the **Details** tab, choose **Group details**, **Edit**.

1. For **Desired capacity**, increase the current value by 1.

1. Choose **Update**. While the instance is being launched, the **Status** column in the upper pane displays a status of *Updating capacity*. 

After increasing the desired capacity, you can verify that your Lambda function was invoked.

**To view the output from your Lambda function**

1. Open the [Log groups page](https://console.aws.amazon.com/cloudwatch/home#logs:) of the CloudWatch console.

1. Select the name of the log group for your Lambda function (`/aws/lambda/LogAutoScalingEvent`).

1. Select the name of the log stream to view the data provided by the function for the lifecycle action.

Next, you can verify that your instance has successfully launched from the description of scaling activities.

**To view the scaling activity**

1. Return to the **Auto Scaling groups** page and select your group.

1. On the **Activity** tab, under **Activity history**, the **Status** column shows whether your Auto Scaling group has successfully launched an instance. 
   + If the action was successful, the scaling activity will have a status of "Successful".
   + If it failed, after waiting a few minutes, you will see a scaling activity with a status of "Cancelled" and a status message of "Instance failed to complete user's Lifecycle Action: Lifecycle Action with token e85eb647-4fe0-4909-b341-a6c42EXAMPLE was abandoned: Lifecycle Action Completed with ABANDON Result".

**To decrease the size of the Auto Scaling group**  
If you do not need the additional instance that you launched for this test, you can open the **Details** tab and decrease **Desired capacity** by 1.

## Step 6: Clean up


If you are done working with the resources that you created just for this tutorial, use the following steps to delete them.

**To delete the lifecycle hook**

1. Open the [Auto Scaling groups page](https://console.aws.amazon.com/ec2/v2/home?#AutoScalingGroups) of the Amazon EC2 console.

1. Select the check box next to your Auto Scaling group.

1. On the **Instance management** tab, in **Lifecycle hooks**, choose the lifecycle hook (`LogAutoScalingEvent-hook`).

1. Choose **Actions**, **Delete**.

1. Choose **Delete** again to confirm.

**To delete the Amazon EventBridge rule**

1. Open the [Rules page](https://console.aws.amazon.com/events/home?#/rules) in the Amazon EventBridge console.

1. Under **Event bus**, choose the event bus that is associated with the rule (`default`).

1. Select the check box next to your rule (`LogAutoScalingEvent-rule`).

1. Choose **Delete**.

1. When prompted for confirmation, type the name of the rule and then choose **Delete**.

If you are done working with the example function, delete it. You can also delete the log group that stores the function's logs, and the execution role and permissions policy that you created.

**To delete a Lambda function**

1. Open the [Functions page](https://console.aws.amazon.com/lambda/home#/functions) on the Lambda console.

1. Choose the function (`LogAutoScalingEvent`).

1. Choose **Actions**, **Delete**.

1. When prompted for confirmation, type **delete** to confirm deleting the specified function and then choose **Delete**.

**To delete the log group**

1. Open the [Log groups page](https://console.aws.amazon.com/cloudwatch/home#logs:) of the CloudWatch console.

1. Select the function's log group (`/aws/lambda/LogAutoScalingEvent`).

1. Choose **Actions**, **Delete log group(s)**.

1. In the **Delete log group(s)** dialog box, choose **Delete**.

**To delete the execution role**

1. Open the [Roles page](https://console.aws.amazon.com/iam/home?#/roles) of the IAM console.

1. Select the function's role (`LogAutoScalingEvent-role`).

1. Choose **Delete**.

1. When prompted for confirmation, type the name of the role and then choose **Delete**.

**To delete the IAM policy**

1. Open the [Policies page](https://console.aws.amazon.com/iam/home?#/policies) of the IAM console.

1. Select the policy that you created (`LogAutoScalingEvent-policy`).

1. Choose **Actions**, **Delete**.

1. When prompted for confirmation, type the name of the policy and then choose **Delete**.

## Related resources


The following related topics can be helpful as you create EventBridge rules based on events that happen to the instances in your Auto Scaling group.
+ [Use EventBridge to handle Auto Scaling events](automating-ec2-auto-scaling-with-eventbridge.md). This section shows you examples of events for other use cases, including events for scale in.
+ [Add lifecycle hooks (console)](adding-lifecycle-hooks.md#adding-lifecycle-hooks-console). This procedure shows you how to add lifecycle hooks for both scale out (instances launching) and scale in (instances terminating or returning to a warm pool).

For a tutorial that shows you how to use the Instance Metadata Service (IMDS) to invoke an action from within the instance itself, see [Tutorial: Use data script and instance metadata to retrieve lifecycle state](tutorial-lifecycle-hook-instance-metadata.md).

# Decrease latency for applications with long boot times using warm pools

A warm pool gives you the ability to decrease latency for applications that have exceptionally long boot times, for example, because instances need to write massive amounts of data to disk. With warm pools, you no longer have to over-provision your Auto Scaling groups to keep application latency low. For more information, see the blog post [Scaling your applications faster with EC2 Auto Scaling Warm Pools](https://aws.amazon.com/blogs/compute/scaling-your-applications-faster-with-ec2-auto-scaling-warm-pools/).

**Important**  
Creating a warm pool when it's not required can lead to unnecessary costs. If your first boot time does not cause noticeable latency issues for your application, there probably isn't a need for you to use a warm pool.

**Topics**
+ [Core concepts](#warm-pool-core-concepts)
+ [Prerequisites](#warm-pool-prerequisites)
+ [Update the instances in a warm pool](#update-warm-pool)
+ [Related resources](#warm-pools-related-resources)
+ [Limitations](#warm-pools-limitations)
+ [Use lifecycle hooks](warm-pool-instance-lifecycle.md)
+ [Create a warm pool for an Auto Scaling group](create-warm-pool.md)
+ [View health check status](warm-pools-health-checks-monitor-view-status.md)
+ [AWS CLI examples for working with warm pools](examples-warm-pools-aws-cli.md)

## Core concepts


Before you get started, familiarize yourself with the following core concepts:

**Warm pool**  
A warm pool is a pool of pre-initialized EC2 instances that sits alongside an Auto Scaling group. Whenever your application needs to scale out, the Auto Scaling group can draw on the warm pool to meet its new desired capacity. This helps you to ensure that instances are ready to quickly start serving application traffic, accelerating the response to a scale-out event. As instances leave the warm pool, they count toward the desired capacity of the group. This is known as a *warm start*.   
While instances are in the warm pool, your scaling policies only scale out if the metric value from instances that are in the `InService` state is greater than the scaling policy's alarm high threshold (which is the same as the target utilization of a target tracking scaling policy).

**Warm pool size**  
By default, the size of the warm pool is calculated as the difference between the Auto Scaling group's maximum capacity and its desired capacity. For example, if the desired capacity of your Auto Scaling group is 6 and the maximum capacity is 10, the size of your warm pool will be 4 when you first set up the warm pool and the pool is initializing.   
To specify the warm pool's maximum capacity separately, use the custom specification (`MaxGroupPreparedCapacity`) option and set a custom value for it that is greater than the current capacity of the group. If you provide a custom value, the size of the warm pool is calculated as the difference between the custom value and the current desired capacity of the group. For example, if the desired capacity of your Auto Scaling group is 6, if the maximum capacity is 20, and if the custom value is 8, the size of your warm pool will be 2 when you first set up the warm pool and the pool is initializing.   
You might only need to use the custom specification (`MaxGroupPreparedCapacity`) option when working with large Auto Scaling groups to manage the cost benefits of having a warm pool. For example, an Auto Scaling group with 1,000 instances, a maximum capacity of 1,500 (to provide extra capacity for emergency traffic spikes), and a warm pool of 100 instances might help you achieve your goals better than keeping 500 instances reserved for future use inside the warm pool.
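
The two sizing examples above reduce to simple arithmetic; a minimal sketch using the same numbers:

```shell
# Warm pool size with the default specification: MaxSize - DesiredCapacity
desired=6
max_size=10
default_pool_size=$((max_size - desired))    # 4 in the first example

# With a custom MaxGroupPreparedCapacity of 8: custom value - DesiredCapacity
max_prepared=8
custom_pool_size=$((max_prepared - desired)) # 2 in the second example
```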

**Minimum warm pool size**  
Consider using the minimum size setting (`MinSize`) to statically set the minimum number of instances to maintain in the warm pool. There is no minimum size set by default. The `MinSize` setting is useful when you specify `MaxGroupPreparedCapacity` to ensure that a minimum number of instances are maintained in the warm pool even when the desired capacity of the Auto Scaling group is higher than the `MaxGroupPreparedCapacity`.

**Warm pool instance state**  
You can keep instances in the warm pool in one of three states: `Stopped`, `Running`, or `Hibernated`. Keeping instances in a `Stopped` state is an effective way to minimize costs. With stopped instances, you pay only for the volumes that you use and the Elastic IP addresses attached to the instances.  
Alternatively, you can keep instances in a `Hibernated` state to stop instances without deleting their memory contents (RAM). When an instance is hibernated, this signals the operating system to save the contents of your RAM to your Amazon EBS root volume. When the instance is started again, the root volume is restored to its previous state and the RAM contents are reloaded. While the instances are in hibernation, you pay only for the EBS volumes, including storage for the RAM contents, and the Elastic IP addresses attached to the instances.  
Keeping instances in a `Running` state inside the warm pool is also possible, but we discourage it because it incurs unnecessary charges. When instances are stopped or hibernated, you save the cost of the instances themselves, and you pay for them only when they are running.

**Lifecycle hooks**  
You use [lifecycle hooks](warm-pool-instance-lifecycle.md) to put instances into a wait state so that you can perform custom actions on the instances. Custom actions are performed as the instances launch or before they terminate.  
In a warm pool configuration, lifecycle hooks delay instances from being stopped or hibernated and from being put in service during a scale-out event until they have finished initializing. If you add a warm pool to your Auto Scaling group without a lifecycle hook, instances that take a long time to finish initializing could be stopped or hibernated and then put in service during a scale-out event before they are ready.

**Instance reuse policy**  
By default, Amazon EC2 Auto Scaling terminates your instances when your Auto Scaling group scales in. Then, it launches new instances into the warm pool to replace the instances that were terminated.   
If you want to return instances to the warm pool instead, you can specify an instance reuse policy. This lets you reuse instances that are already configured to serve application traffic. To make sure that your warm pool is not over-provisioned, Amazon EC2 Auto Scaling can terminate instances in the warm pool to reduce its size when it is larger than necessary based on its settings. When terminating instances in the warm pool, it uses the [default termination policy](ec2-auto-scaling-termination-policies.md#default-termination-policy) to choose which instances to terminate first.   
If you want to hibernate instances on scale in and there are existing instances in the Auto Scaling group, they must meet the requirements for instance hibernation. If they don't, when instances return to the warm pool, they fall back to being stopped instead of hibernated.
Currently, you can only specify an instance reuse policy by using the AWS CLI or an SDK. This feature is not available from the console.

## Prerequisites


Before you create a warm pool for your Auto Scaling group, decide how you will use lifecycle hooks to initialize new instances with an appropriate initial state.

To perform custom actions on instances while they are in a wait state due to a lifecycle hook, you have two options:
+ For simple scenarios where you want to run commands on your instances at launch, you can include a user data script when you create a launch template or launch configuration for your Auto Scaling group. User data scripts are just normal shell scripts or cloud-init directives that are run by cloud-init when your instances start. The script can also control when your instances transition to the next state by using the ID of the instance on which it runs. If you are not doing so already, update your script to retrieve the instance ID of the instance from the instance metadata. For more information, see [Access instance metadata](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-retrieval.html) in the *Amazon EC2 User Guide*.
**Tip**  
To run user data scripts when an instance restarts, the user data must be in the MIME multi-part format and specify the following in the `#cloud-config` section of the user data:  

  ```
  #cloud-config
  cloud_final_modules:
   - [scripts-user, always]
  ```
+ For advanced scenarios where you need a service such as AWS Lambda to do something as instances are entering or leaving the warm pool, you can create a lifecycle hook for your Auto Scaling group and configure the target service to perform custom actions based on lifecycle notifications. For more information, see [Supported notification targets](warm-pool-instance-lifecycle.md#warm-pools-supported-notification-targets).
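Put together, the complete MIME multi-part user data mentioned in the tip above might look like the following sketch, which combines the `#cloud-config` section with a hypothetical shell script (the script contents and file names are placeholders):

```
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
 - [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
# Placeholder bootstrap step; replace with your own initialization logic.
/bin/echo "bootstrapping" >> /tmp/bootstrap.log
--//--
```

The `[scripts-user, always]` setting is what makes cloud-init rerun the shell script each time the instance starts, which matters for warm pool instances that stop and restart.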

**Prepare instances for hibernation**  
To prepare Auto Scaling instances to use the `Hibernated` pool state, create a new launch template or launch configuration that is set up correctly to support instance hibernation, as described in the [Hibernation prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hibernating-prerequisites.html) topic in the *Amazon EC2 User Guide*. Then, associate the new launch template or launch configuration with the Auto Scaling group and start an instance refresh to replace the instances associated with a previous launch template or launch configuration. For more information, see [Use an instance refresh to update instances in an Auto Scaling group](asg-instance-refresh.md).
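As an unverified sketch, the launch template data for hibernation support might include settings like the following (the instance type, device name, and volume values are placeholders; consult the hibernation prerequisites for the actual requirements, such as an encrypted root volume large enough to store the contents of RAM):

```
{
    "InstanceType": "t3.medium",
    "HibernationOptions": {
        "Configured": true
    },
    "BlockDeviceMappings": [
        {
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "VolumeSize": 30,
                "VolumeType": "gp3",
                "Encrypted": true
            }
        }
    ]
}
```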

## Update the instances in a warm pool


To update the instances in a warm pool, you create a new launch template or launch configuration and associate it with the Auto Scaling group. Any new instances are launched using the new AMI and other updates that are specified in the launch template or launch configuration, but existing instances are not affected.

To force the launch of replacement warm pool instances that use the new launch template or launch configuration, you can start an instance refresh to perform a rolling update of your group. An instance refresh first replaces `InService` instances. Then it replaces instances in the warm pool. For more information, see [Use an instance refresh to update instances in an Auto Scaling group](asg-instance-refresh.md).

## Related resources


You can visit our [GitHub repository](https://github.com/aws-samples/amazon-ec2-auto-scaling-group-examples) for examples of lifecycle hooks for warm pools. 

## Limitations

+ Warm pool limitations for an Auto Scaling group with mixed instances types:
  + Warm pools aren’t supported with weighted mixed instance groups. If your Auto Scaling group uses instance weighting, you can’t add a warm pool.
  + Warm pools don’t support Spot Instances within mixed instances groups. Your mixed instances policy must be configured for On-Demand Instances only when using warm pools.
  + When using warm pools with mixed instances groups and the `Hibernated` pool state, you must configure `HibernationOptions` in your launch template. 
+ Amazon EC2 Auto Scaling can put an instance in a `Stopped` or `Hibernated` state only if it has an Amazon EBS volume as its root device. Instances that use instance stores for the root device cannot be stopped or hibernated.
+ Amazon EC2 Auto Scaling can put an instance in a `Hibernated` state only if it meets all of the requirements listed in the [Hibernation prerequisites](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/hibernating-prerequisites.html) topic in the *Amazon EC2 User Guide*. 
+ If your warm pool is depleted when there is a scale-out event, instances will launch directly into the Auto Scaling group (a *cold start*). You could also experience cold starts if an Availability Zone is out of capacity.
+ If an instance within the warm pool encounters an issue during the launch process that prevents it from reaching the `InService` state, the instance is considered a failed launch and is terminated, regardless of the underlying cause, such as an insufficient capacity error.
+ If you try using a warm pool with an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group, instances that are still initializing might register with your Amazon EKS cluster. As a result, the cluster might schedule jobs on an instance as it is preparing to be stopped or hibernated.
+ Likewise, if you try using a warm pool with an Amazon ECS cluster, instances might register with the cluster before they finish initializing. To solve this problem, you must configure a launch template or launch configuration that includes a special agent configuration variable in the user data. For more information, see [Using a warm pool for your Auto Scaling group](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/asg-capacity-providers.html#using-warm-pool) in the *Amazon Elastic Container Service Developer Guide*.

# Use lifecycle hooks with a warm pool in an Auto Scaling group

Instances in a warm pool maintain their own independent lifecycle to help you create the appropriate custom action for each transition. This lifecycle is designed to help you to invoke actions in a target service (for example, a Lambda function) while an instance is still initializing and before it is put in service. 

**Note**  
The API operations that you use to add and manage lifecycle hooks and complete lifecycle actions are not changed. Only the instance lifecycle is changed. 

For more information about adding a lifecycle hook, see [Add lifecycle hooks to your Auto Scaling group](adding-lifecycle-hooks.md). For more information about completing a lifecycle action, see [Complete a lifecycle action in an Auto Scaling group](completing-lifecycle-hooks.md).

For instances entering the warm pool, you might need a lifecycle hook for one of the following reasons:
+ You want to launch EC2 instances from an AMI that takes a long time to finish initializing.
+ You want to run user data scripts to bootstrap the EC2 instances.

For instances leaving the warm pool, you might need a lifecycle hook for one of the following reasons:
+ You can use some extra time to prepare EC2 instances for use. For example, you might have services that must start when an instance restarts before your application can work correctly.
+ You want to pre-populate cache data so that a new server doesn't launch with an empty cache.
+ You want to register new instances as managed instances with your configuration management service.

## Lifecycle state transitions for instances in a warm pool

An Auto Scaling instance can transition through many states as part of its lifecycle.

The following diagram shows the transition between Auto Scaling states when you use a warm pool:

![\[The lifecycle state transitions for instances in a warm pool.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/warm-pools-lifecycle-diagram.png)


¹ This state varies based on the warm pool's pool state setting. If the pool state is set to `Running`, then this state is `Warmed:Running` instead. If the pool state is set to `Hibernated`, then this state is `Warmed:Hibernated` instead.

When you add lifecycle hooks, consider the following:
+ When a lifecycle hook is configured for the `autoscaling:EC2_INSTANCE_LAUNCHING` lifecycle action, a newly launched instance first pauses to perform a custom action when it reaches the `Warmed:Pending:Wait` state, and then again when the instance restarts and reaches the `Pending:Wait` state.
+ When a lifecycle hook is configured for the `EC2_INSTANCE_TERMINATING` lifecycle action, a terminating instance pauses to perform a custom action when it reaches the `Terminating:Wait` state. However, if you specify an instance reuse policy to return instances to the warm pool on scale in instead of terminating them, then an instance that is returning to the warm pool pauses to perform a custom action at the `Warmed:Pending:Wait` state for the `EC2_INSTANCE_TERMINATING` lifecycle action.
+ If the demand on your application depletes the warm pool, Amazon EC2 Auto Scaling can launch instances directly into the Auto Scaling group as long as the group isn't at its maximum capacity yet. If the instances launch directly into the group, they are only paused to perform a custom action at the `Pending:Wait` state.
+ To control how long an instance stays in a wait state before it transitions to the next state, configure your custom action to use the **complete-lifecycle-action** command. With lifecycle hooks, instances remain in a wait state either until you notify Amazon EC2 Auto Scaling that the specified lifecycle action is complete, or until the timeout period ends (one hour by default). 

The following summarizes the flow for a scale-out event.

![\[A flow diagram of a scale-out event.\]](http://docs.aws.amazon.com/autoscaling/ec2/userguide/images/warm-pools-scale-out-event-diagram.png)


When instances reach a wait state, Amazon EC2 Auto Scaling sends a notification. Examples of these notifications are available in the EventBridge section of this guide. For more information, see [Warm pool example events and patterns](warm-pools-eventbridge-events.md).

## Supported notification targets


Amazon EC2 Auto Scaling provides support for defining any of the following as notification targets for lifecycle notifications:
+ EventBridge rules
+ Amazon SNS topics 
+ Amazon SQS queues
+ AWS Lambda functions

**Important**  
If you have a user data (cloud-init) script in your launch template or launch configuration that configures your instances when they launch, you do not need to receive notifications to perform custom actions on instances that are starting or restarting.

The following sections contain links to documentation that describes how to configure notification targets:

**EventBridge rules** — To run code when Amazon EC2 Auto Scaling puts an instance into a wait state, you can create an EventBridge rule and specify a Lambda function as its target. To invoke different Lambda functions based on different lifecycle notifications, you can create multiple rules and associate each rule with a specific event pattern and Lambda function. For more information, see [Create EventBridge rules for warm pool events](warm-pool-events-eventbridge-rules.md).
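For example, a rule that matches the lifecycle notification sent when an instance launches might use an event pattern like the following sketch:

```
{
  "source": ["aws.autoscaling"],
  "detail-type": ["EC2 Instance-launch Lifecycle Action"]
}
```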

**Amazon SNS topics** — To receive a notification when an instance is put into a wait state, you create an Amazon SNS topic and then set up Amazon SNS message filtering to deliver lifecycle notifications differently based on a message attribute. For more information, see [Receive notifications using Amazon SNS](prepare-for-lifecycle-notifications.md#sns-notifications).

**Amazon SQS queues** — To set up a delivery point for lifecycle notifications where a relevant consumer can pick them up and process them, you can create an Amazon SQS queue and a queue consumer that processes messages from the SQS queue. If you want the queue consumer to process lifecycle notifications differently based on a message attribute, you must also set up the queue consumer to parse the message and then act on the message when a specific attribute matches the desired value. For more information, see [Receive notifications using Amazon SQS](prepare-for-lifecycle-notifications.md#sqs-notifications).

**AWS Lambda functions** — To run custom code when Amazon EC2 Auto Scaling puts an instance into a wait state, you can specify a Lambda function as the notification target. The Lambda function is invoked with lifecycle notification data, allowing you to perform custom actions such as instance configuration, application setup, or integration with other AWS services. You must configure the Lambda function's resource-based policy to allow the Auto Scaling service-linked role to invoke the function. For more information, see [Route notifications to AWS Lambda directly](prepare-for-lifecycle-notifications.md#lambda-notification).

# Create a warm pool for an Auto Scaling group


This topic describes how to create a warm pool for your Auto Scaling group. 

**Important**  
Before you continue, complete the [prerequisites](ec2-auto-scaling-warm-pools.md#warm-pool-prerequisites) for creating a warm pool and confirm that you have created a lifecycle hook for your Auto Scaling group.

## Create a warm pool


Use the following procedure to create a warm pool for your Auto Scaling group.

**To create a warm pool (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to an existing group.

   A split pane opens up at the bottom of the page. 

1. Choose the **Instance management** tab. 

1. Under **Warm pool**, choose **Create warm pool**. 

1. To configure a warm pool, do the following:

   1. For **Warm pool instance state**, choose which state you want to transition your instances to when they enter the warm pool. The default is `Stopped`. 

   1. For **Minimum warm pool size**, enter the minimum number of instances to maintain in the warm pool.

   1. For **Instance reuse**, select the **Reuse on scale in** check box to allow instances in the Auto Scaling group to return to the warm pool on scale in. 

   1. For **Warm pool size**, choose one of the available options: 
      + **Default specification**: The size of the warm pool is determined by the difference between the maximum and desired capacity of the Auto Scaling group. This option streamlines warm pool management. After you create the warm pool, its size can be easily updated just by adjusting the maximum capacity of the group.
      + **Custom specification**: The size of the warm pool is determined by the difference between a custom value and the desired capacity of the Auto Scaling group. This option gives you flexibility to manage the size of your warm pool independently from the maximum capacity of the group. 

1. View the **Estimated warm pool size based on current settings** section to confirm how the default or custom specification applies to the size of the warm pool. Remember, the warm pool size depends on the desired capacity of the Auto Scaling group, which will change if the group scales.

1. Choose **Create**. 

## Instance type selection with mixed instance groups


When your group is configured with a mixed instances policy, Auto Scaling prioritizes instance types that are already in the warm pool during scaling events. The launch behavior is as follows:

1. Auto Scaling attempts to launch instances using instance types available in the warm pool.

1. If warm launch fails, Auto Scaling attempts cold launch using all remaining instance types in your mixed instances policy.

**Example**  
Suppose you configure your Auto Scaling group with 10 instance types and your warm pool contains 6 of those instance types. During scale-out, Auto Scaling first tries the 6 instance types from the warm pool. If unsuccessful, it tries all configured instance types through cold launch.

This gives you warm pool performance benefits when possible while maintaining the flexibility of your full mixed instances configuration.
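The prioritization described above can be sketched as follows. This is an illustrative Python model of the ordering, not the service's actual implementation; the function name and inputs are hypothetical.

```python
def launch_order(configured_types, warm_pool_types):
    """Return the order in which instance types are tried during scale-out:
    types already present in the warm pool first (warm launch), then the
    remaining configured types as cold-launch fallbacks."""
    warm = set(warm_pool_types)
    warm_first = [t for t in configured_types if t in warm]
    cold_fallback = [t for t in configured_types if t not in warm]
    return warm_first + cold_fallback

# Group configured with four types; two of them are present in the warm pool.
order = launch_order(
    ["c5.large", "m5.large", "r5.large", "c6i.large"],
    ["m5.large", "c6i.large"],
)
print(order)  # ['m5.large', 'c6i.large', 'c5.large', 'r5.large']
```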

## Delete a warm pool


When you no longer need the warm pool, use the following procedure to delete it.

**To delete your warm pool (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to an existing group.

   A split pane opens up at the bottom of the page. 

1. Choose the **Instance management** tab. 

1. For **Warm pool**, choose **Actions**, **Delete**.

1. When prompted for confirmation, choose **Delete**. 

# View health check status and the reason for health check failures

Health checks allow Amazon EC2 Auto Scaling to determine when an instance is unhealthy and should be terminated. For warm pool instances kept in a `Stopped` state, Amazon EC2 Auto Scaling uses the information that Amazon EBS has about the availability of a `Stopped` instance to identify unhealthy instances. It does this by calling the `DescribeVolumeStatus` API to determine the status of the EBS volume that's attached to the instance. For warm pool instances kept in a `Running` state, it relies on EC2 status checks to determine instance health. While there is no health check grace period for warm pool instances, Amazon EC2 Auto Scaling doesn't start checking instance health until the lifecycle hook finishes. 

When an instance is found to be unhealthy, Amazon EC2 Auto Scaling automatically terminates the unhealthy instance and launches a new one to replace it. Instances are usually terminated within a few minutes after failing their health check. For more information, see [View the reason for health check failures](replace-unhealthy-instance.md).

Custom health checks are also supported. This can be helpful if you have your own health check system that can detect an instance's health and send this information to Amazon EC2 Auto Scaling. For more information, see [Set up a custom health check for your Auto Scaling group](set-up-a-custom-health-check.md).

On the Amazon EC2 Auto Scaling console, you can view the status (healthy or unhealthy) of your warm pool instances. You can also view their health status using the AWS CLI or one of the SDKs. 

**To view the status of your warm pool instances (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to the Auto Scaling group. 

   A split pane opens up in the bottom of the **Auto Scaling groups** page. 

1. On the **Instance management** tab, under **Warm pool instances**, the **Lifecycle** column contains the state of your instances.

   The **Health status** column shows the assessment that Amazon EC2 Auto Scaling has made of instance health.
**Note**  
New instances start healthy. Until the lifecycle hook is finished, an instance's health is not checked.

**To view the reason for health check failures (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to the Auto Scaling group. 

   A split pane opens up in the bottom of the **Auto Scaling groups** page. 

1. On the **Activity** tab, under **Activity history**, the **Status** column shows whether your Auto Scaling group has successfully launched or terminated instances.

   If it terminated any unhealthy instances, the **Cause** column shows the date and time of the termination and the reason for the health check failure. For example, "At 2021-04-01T21:48:35Z an instance was taken out of service in response to EBS volume health check failure". 

**To view the status of your warm pool instances (AWS CLI)**  
View the warm pool for an Auto Scaling group by using the following [describe-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-warm-pool.html) command.

```
aws autoscaling describe-warm-pool --auto-scaling-group-name my-asg
```

The following is example output.

```
{
    "WarmPoolConfiguration": {
        "MinSize": 0,
        "PoolState": "Stopped"
    },
    "Instances": [
        {
            "InstanceId": "i-0b5e5e7521cfaa46c",
            "InstanceType": "t2.micro",
            "AvailabilityZone": "us-west-2a",
            "LifecycleState": "Warmed:Stopped",
            "HealthStatus": "Healthy",
            "LaunchTemplate": {
                "LaunchTemplateId": "lt-08c4cd42f320d5dcd",
                "LaunchTemplateName": "my-template-for-auto-scaling",
                "Version": "1"
            }
        },
        {
            "InstanceId": "i-0e21af9dcfb7aa6bf",
            "InstanceType": "t2.micro",
            "AvailabilityZone": "us-west-2a",
            "LifecycleState": "Warmed:Stopped",
            "HealthStatus": "Healthy",
            "LaunchTemplate": {
                "LaunchTemplateId": "lt-08c4cd42f320d5dcd",
                "LaunchTemplateName": "my-template-for-auto-scaling",
                "Version": "1"
            }
        }
    ]
}
```
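If you script against this output, you can filter it for instances that need attention. The following is a minimal Python sketch that parses an abbreviated, hypothetical copy of the response and collects the IDs of any unhealthy instances:

```python
import json

# Abbreviated, hypothetical describe-warm-pool response for illustration;
# the second instance is marked Unhealthy to show the filtering.
response = json.loads("""
{
    "WarmPoolConfiguration": {"MinSize": 0, "PoolState": "Stopped"},
    "Instances": [
        {"InstanceId": "i-0b5e5e7521cfaa46c",
         "LifecycleState": "Warmed:Stopped", "HealthStatus": "Healthy"},
        {"InstanceId": "i-0e21af9dcfb7aa6bf",
         "LifecycleState": "Warmed:Stopped", "HealthStatus": "Unhealthy"}
    ]
}
""")

# Collect instance IDs whose health status is anything other than Healthy.
unhealthy = [
    inst["InstanceId"]
    for inst in response["Instances"]
    if inst["HealthStatus"] != "Healthy"
]
print(unhealthy)  # ['i-0e21af9dcfb7aa6bf']
```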

**To view the reason for health check failures (AWS CLI)**  
Use the following [describe-scaling-activities](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-scaling-activities.html) command. 

```
aws autoscaling describe-scaling-activities --auto-scaling-group-name my-asg
```

The following is an example response, where `Description` indicates that your Auto Scaling group has terminated an instance and `Cause` indicates the reason for the health check failure. 

Scaling activities are ordered by start time. Activities still in progress are described first. 

```
{
  "Activities": [
    {
      "ActivityId": "4c65e23d-a35a-4e7d-b6e4-2eaa8753dc12",
      "AutoScalingGroupName": "my-asg",
      "Description": "Terminating EC2 instance: i-04925c838b6438f14",
      "Cause": "At 2021-04-01T21:48:35Z an instance was taken out of service in response to EBS volume health check failure.",
      "StartTime": "2021-04-01T21:48:35.859Z",
      "EndTime": "2021-04-01T21:49:18Z",
      "StatusCode": "Successful",
      "Progress": 100,
      "Details": "{\"Subnet ID\":\"subnet-5ea0c127\",\"Availability Zone\":\"us-west-2a\"...}",
      "AutoScalingGroupARN": "arn:aws:autoscaling:us-west-2:123456789012:autoScalingGroup:283179a2-f3ce-423d-93f6-66bb518232f7:autoScalingGroupName/my-asg"
    },
...
  ]
}
```

# Examples for creating and managing warm pools with the AWS CLI

You can create and manage warm pools using the AWS Management Console, AWS Command Line Interface (AWS CLI), or SDKs.

The following examples show you how to create and manage warm pools using the AWS CLI.

**Topics**
+ [Example 1: Keep instances in the `Stopped` state](#warm-pool-configuration-ex1)
+ [Example 2: Keep instances in the `Running` state](#warm-pool-configuration-ex2)
+ [Example 3: Keep instances in the `Hibernated` state](#warm-pool-configuration-ex3)
+ [Example 4: Return instances to the warm pool when scaling in](#warm-pool-configuration-ex4)
+ [Example 5: Specify the minimum number of instances in the warm pool](#warm-pool-configuration-ex5)
+ [Example 6: Define the warm pool size using a custom specification](#warm-pool-configuration-ex6)
+ [Example 7: Define an absolute warm pool size](#warm-pool-configuration-ex7)
+ [Example 8: Delete a warm pool](#delete-warm-pool-cli)

## Example 1: Keep instances in the `Stopped` state


The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that keeps instances in a `Stopped` state.

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped
```

## Example 2: Keep instances in the `Running` state


The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that keeps instances in a `Running` state instead of a `Stopped` state. 

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Running
```

## Example 3: Keep instances in the `Hibernated` state


The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that keeps instances in a `Hibernated` state instead of a `Stopped` state. This lets you stop instances without deleting their memory contents (RAM).

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Hibernated
```

## Example 4: Return instances to the warm pool when scaling in


The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that keeps instances in a `Stopped` state and includes the `--instance-reuse-policy` option. The instance reuse policy value `'{"ReuseOnScaleIn": true}'` tells Amazon EC2 Auto Scaling to return instances to the warm pool when your Auto Scaling group scales in.

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped --instance-reuse-policy '{"ReuseOnScaleIn": true}'
```

## Example 5: Specify the minimum number of instances in the warm pool


The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that maintains a minimum of 4 instances, so that there are at least 4 instances available to handle traffic spikes. 

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped --min-size 4
```

## Example 6: Define the warm pool size using a custom specification


By default, Amazon EC2 Auto Scaling manages the size of your warm pool as the difference between the maximum and desired capacity of the Auto Scaling group. However, you can manage the size of the warm pool independently from the group's maximum capacity by using the `--max-group-prepared-capacity` option.

The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool and sets the maximum number of instances that can exist concurrently in both the warm pool and Auto Scaling group. If the group has a desired capacity of 800, the warm pool will initially have a size of 100 as it initializes after running this command. 

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped --max-group-prepared-capacity 900
```

To maintain a minimum number of instances in the warm pool, include the `--min-size` option with the command, as follows. 

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped --max-group-prepared-capacity 900 --min-size 25
```
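As an illustration, the arithmetic behind these settings can be approximated with the following Python sketch. The function name and the exact flooring behavior are assumptions based on the descriptions above, not the service's actual implementation:

```python
def warm_pool_size(desired, max_capacity, min_size=0,
                   max_group_prepared_capacity=None):
    """Approximate the warm pool size Amazon EC2 Auto Scaling maintains.
    By default the ceiling is the group's maximum capacity; with a custom
    specification it is --max-group-prepared-capacity instead. The pool
    never drops below --min-size."""
    ceiling = (max_group_prepared_capacity
               if max_group_prepared_capacity is not None
               else max_capacity)
    return max(min_size, ceiling - desired)

# Example 6 above: desired capacity 800 with --max-group-prepared-capacity 900
print(warm_pool_size(desired=800, max_capacity=1000,
                     max_group_prepared_capacity=900))  # 100
```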

## Example 7: Define an absolute warm pool size


If you set the same values for the `--max-group-prepared-capacity` and `--min-size` options, the warm pool has an absolute size. The following [put-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/put-warm-pool.html) example creates a warm pool that maintains a constant warm pool size of 10 instances.

```
aws autoscaling put-warm-pool --auto-scaling-group-name my-asg \
  --pool-state Stopped --min-size 10 --max-group-prepared-capacity 10
```

## Example 8: Delete a warm pool


Use the following [delete-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-warm-pool.html) command to delete a warm pool. 

```
aws autoscaling delete-warm-pool --auto-scaling-group-name my-asg
```

If there are instances in the warm pool, or if scaling activities are in progress, use the [delete-warm-pool](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-warm-pool.html) command with the `--force-delete` option. This option also terminates the Amazon EC2 instances and any outstanding lifecycle actions.

```
aws autoscaling delete-warm-pool --auto-scaling-group-name my-asg --force-delete
```

# Auto Scaling group zonal shift


Zonal shift is a capability in the Amazon Application Recovery Controller (ARC). With zonal shift, you can quickly recover from application impairments in an Availability Zone with a single action. When you enable zonal shift for an Auto Scaling group, the group is registered with the ARC zonal shift service. Then, you can start a zonal shift using the AWS Management Console, AWS CLI, or API and the Auto Scaling group treats the Availability Zone with an active zonal shift as impaired. 

## Auto Scaling group zonal shift concepts


Before proceeding, make sure that you are familiar with the following core concepts related to the integration with ARC zonal shift. 

**ARC zonal shift**  
Auto Scaling can register Auto Scaling groups with ARC zonal shift when you enable this feature. After registration, you can view your resources with the [ARC `ListManagedResources`](https://docs.aws.amazon.com/arc-zonal-shift/latest/api/API_ListManagedResources.html) API. For more information, see [Zonal shift in ARC](https://docs.aws.amazon.com/r53recovery/latest/dg/arc-zonal-shift.html) in the *Amazon Application Recovery Controller (ARC) Developer Guide*.

**Availability Zone rebalancing**  
Auto Scaling attempts to keep capacity in each Availability Zone balanced. When an imbalance occurs between Availability Zones, Auto Scaling automatically attempts to fix the imbalance. For more information, see [Instance distribution](auto-scaling-benefits.md#AutoScalingBehavior.Rebalancing).

**Dynamic scaling**  
Dynamic scaling scales the desired capacity of your Auto Scaling group based on metrics that you choose with scaling policies. For more information, see [Dynamic scaling for Amazon EC2 Auto Scaling](as-scale-based-on-demand.md).

**Health checks**  
Auto Scaling periodically checks the health status of all instances within an Auto Scaling group to make sure they're running and in good condition. When an unhealthy instance is detected, Auto Scaling marks it for replacement. For more information, see [Health checks for instances in an Auto Scaling group](ec2-auto-scaling-health-checks.md).

**Instance refresh**  
You can use an instance refresh to update the instances in your Auto Scaling group. After an instance refresh is started, Auto Scaling attempts to replace all instances in your Auto Scaling group. For more information, see [Use an instance refresh to update instances in an Auto Scaling group](asg-instance-refresh.md).

**Prescaled**  
A prescaled group maintains enough capacity in the remaining Availability Zones to tolerate the loss of a single Availability Zone for your application.

**Scaling out**  
When you increase the desired capacity of an Auto Scaling group, Auto Scaling attempts to launch additional instances to meet the new desired capacity. By default, Auto Scaling launches instances in a balanced way to maintain equal capacity across each enabled Availability Zone in an Auto Scaling group.

## How zonal shift works for Auto Scaling groups


Suppose you have an Auto Scaling group with the following Availability Zones: 
+ `us-east-1a`
+ `us-east-1b`
+ `us-east-1c`

You have zonal shift enabled in all Availability Zones. You notice failures in `us-east-1a`, so you start a zonal shift. The following behaviors occur while the zonal shift is active in `us-east-1a`.
+ **Scaling out** – Auto Scaling will launch all new capacity requests in the healthy Availability Zones (`us-east-1b` and `us-east-1c`).
+ **Dynamic scaling** – Auto Scaling will block scaling policies from decreasing the desired capacity in all Availability Zones, but will not block scaling policies from increasing it.
+ **Instance refreshes** – Auto Scaling will extend the timeout for any instance refresh process that is delayed while a zonal shift is active.
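The scale-out behavior described above can be sketched as follows. This is a simplified illustration, not the actual service logic; the function name and the fewest-instances selection rule are assumptions made for the example.

```python
def choose_launch_zone(enabled_azs, impaired_azs, instance_counts):
    """Pick the enabled Availability Zone with the fewest instances,
    skipping any zone with an active zonal shift (simplified sketch)."""
    healthy = [az for az in enabled_azs if az not in impaired_azs]
    if not healthy:
        raise RuntimeError("no healthy Availability Zones available")
    return min(healthy, key=lambda az: instance_counts.get(az, 0))

# During a zonal shift in us-east-1a, new launches go to us-east-1b or
# us-east-1c only; the zone with the least capacity is chosen first.
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
counts = {"us-east-1a": 2, "us-east-1b": 2, "us-east-1c": 1}
print(choose_launch_zone(azs, {"us-east-1a"}, counts))  # us-east-1c
```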

The following table describes the health check behavior for each option when a zonal shift is triggered in `us-east-1a`.


| Impaired Availability Zone health check behavior selection | Health check behavior | 
| --- | --- | 
|  Replace unhealthy  |  Instances that appear unhealthy will be replaced in all Availability Zones (`us-east-1a`, `us-east-1b`, and `us-east-1c`).  | 
|  Ignore unhealthy  |  Instances that appear unhealthy will be replaced in `us-east-1b` and `us-east-1c`. Instances will not be replaced in the Availability Zone with the active zonal shift (`us-east-1a`).  | 

## Best practices for using zonal shift


To maintain high availability for your applications when using zonal shift, we recommend the following best practices:
+ Monitor EventBridge notifications to determine when there is an ongoing Availability Zone impairment event. For more information, see [Use EventBridge to handle Auto Scaling events](automating-ec2-auto-scaling-with-eventbridge.md).
+ Use scaling policies with appropriate thresholds to make sure that you have enough capacity to tolerate the loss of an Availability Zone.
+ Set an instance maintenance policy with a minimum healthy percentage of 100. With this setting, Auto Scaling waits for a new instance to be ready to use before terminating an unhealthy instance.

For prescaled customers, we also recommend the following:
+ Select **Ignore unhealthy** as the health check behavior for the impaired Availability Zone because you don't need to replace the unhealthy instance during the impairment event.
+ Use zonal autoshift in ARC for your Auto Scaling groups. The zonal autoshift capability in ARC allows AWS to shift traffic for a resource away from an Availability Zone when AWS detects an impairment in an Availability Zone. For more information, see [Zonal autoshift in ARC](https://docs.aws.amazon.com/r53recovery/latest/dg/arc-zonal-autoshift.html) in the *Amazon Application Recovery Controller (ARC) Developer Guide*.

For customers with cross-zone disabled load balancers, we also recommend the following:
+ Use **balanced only** for your Availability Zone distribution.
+ If you are using zonal shift on both Auto Scaling groups and load balancers, cancel the zonal shift on your Auto Scaling group first. Then, wait for capacity to balance across all Availability Zones before canceling the zonal shift on the load balancer.
+ Because capacity can become imbalanced when you enable zonal shift and use a cross-zone disabled load balancer, Auto Scaling includes an extra validation step. If you are following best practices, you can acknowledge this possibility by selecting the checkbox in the AWS Management Console or by using the `SkipZonalShiftValidation` parameter in `CreateAutoScalingGroup`, `UpdateAutoScalingGroup`, or `AttachTrafficSources`.

For more information about using zonal shift with Auto Scaling groups, see the AWS Compute Blog [Using zonal shift with Amazon EC2 Auto Scaling](https://aws.amazon.com/blogs/compute/using-zonal-shift-with-amazon-ec2-auto-scaling/).

# Enable zonal shift using the AWS Management Console or the AWS CLI


To enable zonal shift, use one of the following methods.

------
#### [ Console ]

**To enable zonal shift on a new group (console)**

1. Follow the instructions in [Create an Auto Scaling group using a launch template](create-asg-launch-template.md) and complete each step in the procedure, up to step 10.

1. On the **Integrate with other services** page, for **Application Recovery Controller (ARC) zonal shift**, select the checkbox to enable zonal shift.

1. For **Health check behavior**, choose **Ignore unhealthy** or **Replace unhealthy**. For more information, see [How zonal shift works for Auto Scaling groups](ec2-auto-scaling-zonal-shift.md#asg-zonal-shift-how-it-works).

1. Continue with the steps in [Create an Auto Scaling group using a launch template](create-asg-launch-template.md).

------
#### [ AWS CLI ]

**To enable zonal shift on a new group (AWS CLI)**  
Add the `--availability-zone-impairment-policy` parameter to the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command.

The `--availability-zone-impairment-policy` parameter has two options:
+ **ZonalShiftEnabled** – If set to `true`, Auto Scaling registers the Auto Scaling group with ARC zonal shift, and you can [start, update, or cancel a zonal shift](https://docs.aws.amazon.com/r53recovery/latest/dg/arc-zonal-shift.start-cancel.html) on the ARC console. If set to `false`, Auto Scaling deregisters the Auto Scaling group from ARC zonal shift. Zonal shift must already be enabled before you can set this option to `false`.
+ **ImpairedZoneHealthCheckBehavior** – If set to `ReplaceUnhealthy`, unhealthy instances will be replaced in the Availability Zone with the active zonal shift. If set to `IgnoreUnhealthy`, unhealthy instances will not be replaced in the Availability Zone with the active zonal shift. For more information, see [How zonal shift works for Auto Scaling groups](ec2-auto-scaling-zonal-shift.md#asg-zonal-shift-how-it-works).

The following example enables zonal shift on a new Auto Scaling group named `my-asg`.

```
aws autoscaling create-auto-scaling-group \
  --launch-template LaunchTemplateName=my-launch-template,Version='1' \
  --auto-scaling-group-name my-asg \
  --min-size 1 \
  --max-size 10 \
  --desired-capacity 5 \
  --availability-zones us-east-1a us-east-1b us-east-1c \
  --availability-zone-impairment-policy '{
      "ZonalShiftEnabled": true,
      "ImpairedZoneHealthCheckBehavior": "IgnoreUnhealthy"
    }'
```

------

------
#### [ Console ]

**To enable zonal shift on an existing group (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. On the navigation bar at the top of the screen, choose the AWS Region that you created your Auto Scaling group in.

1. Select the check box next to the Auto Scaling group.

   A split pane opens up in the bottom of the page. 

1. On the **Integrations** tab, under **Application Recovery Controller (ARC) zonal shift**, choose **Edit**.

1. Select the checkbox to enable zonal shift.

1. For **Health check behavior**, choose **Ignore unhealthy** or **Replace unhealthy**. For more information, see [How zonal shift works for Auto Scaling groups](ec2-auto-scaling-zonal-shift.md#asg-zonal-shift-how-it-works).

1. Choose **Update**.

------
#### [ AWS CLI ]

**To enable zonal shift on an existing group (AWS CLI)**  
Add the `--availability-zone-impairment-policy` parameter to the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command.

The `--availability-zone-impairment-policy` parameter has two options:
+ **ZonalShiftEnabled** – If set to `true`, Auto Scaling registers the Auto Scaling group with ARC zonal shift, and you can [start, update, or cancel a zonal shift](https://docs.aws.amazon.com/r53recovery/latest/dg/arc-zonal-shift.start-cancel.html) on the ARC console. If set to `false`, Auto Scaling deregisters the Auto Scaling group from ARC zonal shift. Zonal shift must already be enabled before you can set this option to `false`.
+ **ImpairedZoneHealthCheckBehavior** – If set to `ReplaceUnhealthy`, unhealthy instances will be replaced in the Availability Zone with the active zonal shift. If set to `IgnoreUnhealthy`, unhealthy instances will not be replaced in the Availability Zone with the active zonal shift. For more information, see [How zonal shift works for Auto Scaling groups](ec2-auto-scaling-zonal-shift.md#asg-zonal-shift-how-it-works).

The following example enables zonal shift on the specified Auto Scaling group.

```
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
  --availability-zone-impairment-policy '{
      "ZonalShiftEnabled": true,
      "ImpairedZoneHealthCheckBehavior": "IgnoreUnhealthy"
    }'
```

------

# Auto Scaling group Availability Zone distribution

The following information describes Auto Scaling group Availability Zone strategies.

**Balanced best effort**  
Auto Scaling maintains an equal number of instances across enabled Availability Zones. If launch attempts fail in an Availability Zone, Auto Scaling attempts to launch instances in another healthy Availability Zone. This strategy is important for applications that need Availability Zone redundancy and are not impacted by imbalanced groups.

**Balanced only**  
Auto Scaling maintains an equal number of instances across enabled Availability Zones. If launch attempts fail in an Availability Zone, Auto Scaling will continue to attempt to launch instances in the Availability Zone. This strategy is important to meet certain requirements such as quorum-based workloads or if your Auto Scaling group can tolerate the loss of an Availability Zone because you have sufficient capacity in the remaining Availability Zones.

You can select the Availability Zone distribution strategy in the **Network** section of the AWS Management Console, or by using the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) or [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) commands.

For more information, see [Create Auto Scaling groups using launch templates](create-auto-scaling-groups-launch-template.md).

# Detach or attach instances from your Auto Scaling group

You can detach instances from your Auto Scaling group. After an instance is detached, that instance becomes independent and can either be managed on its own or attached to a different Auto Scaling group, separate from the original group it belonged to. This can be useful, for example, when you want to perform testing using existing instances that are already running your application.

This topic provides instructions for detaching and attaching instances. When you attach an instance, it doesn't have to be one that you previously detached; you can attach any existing instance that meets the criteria.

Instead of detaching and re-attaching an instance to the same group, we recommend using the standby procedure to temporarily remove the instance from the group. For more information, see [Temporarily remove instances from your Auto Scaling group](as-enter-exit-standby.md). 

**Topics**
+ [Considerations for detaching instances](#detach-instances-considerations)
+ [Considerations for attaching instances](#attach-instances-considerations)
+ [Move an instance to a different group using detach and attach](#detach-attach-instances)

## Considerations for detaching instances


When you detach instances, keep these points in mind:
+ You can detach an instance only when it is in the `InService` or `Standby` state. Exercise caution when detaching instances that are in the `Standby` state. If you include the `ShouldDecrementDesiredCapacity` flag in the API call when detaching instances that you put into the `Standby` state, other instances might terminate unexpectedly.
+ After you detach an instance, it continues running and incurring charges. To avoid unnecessary charges, make sure to reattach or terminate detached instances when they're no longer needed.
+ You can choose to decrement the desired capacity by the number of instances that you are detaching. If you choose not to decrement the capacity, Amazon EC2 Auto Scaling launches new instances to replace the detached ones to maintain the desired capacity. 
+ If the number of instances that you are detaching will bring the Auto Scaling group below its minimum capacity, you must decrement the minimum capacity.
+ If you detach multiple instances from the same Availability Zone without decrementing the desired capacity, the group will rebalance itself unless you suspend the `AZRebalance` process. For more information, see [Suspend and resume Amazon EC2 Auto Scaling processes](as-suspend-resume-processes.md).
+ If you detach an instance from an Auto Scaling group that has an attached load balancer target group or Classic Load Balancer, the instance is deregistered from the load balancer. If connection draining (deregistration delay) is enabled for your load balancer, Amazon EC2 Auto Scaling waits for in-flight requests to complete.
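The desired-capacity arithmetic from the considerations above can be sketched as follows. This is an illustrative model, not the service's implementation; the function name and parameters are assumptions for the example.

```python
def capacity_after_detach(desired, min_size, detaching, decrement):
    """Desired capacity of the group after detaching instances (sketch).

    Without the decrement, Amazon EC2 Auto Scaling launches replacements,
    so the desired capacity stays the same."""
    if not decrement:
        return desired  # replacements restore the group to its desired size
    new_desired = desired - detaching
    if new_desired < min_size:
        # Mirrors the rule above: decrement the minimum capacity first.
        raise ValueError("detach would bring the group below its minimum capacity")
    return new_desired

print(capacity_after_detach(desired=4, min_size=1, detaching=1, decrement=True))   # 3
print(capacity_after_detach(desired=4, min_size=1, detaching=1, decrement=False))  # 4
```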

## Considerations for attaching instances


Note the following when attaching instances:
+ Amazon EC2 Auto Scaling treats attached instances the same as instances launched by the group itself. This means that attached instances can be terminated during scale-in events if they're selected. The permissions granted by the `AWSServiceRoleForAutoScaling` service-linked role allow Amazon EC2 Auto Scaling to do so.
+ When you attach instances, the desired capacity of the group increases by the number of instances being attached. If the desired capacity after adding the new instances exceeds the maximum size of the group, the request to attach more instances fails.
+ If you add instances to your group causing uneven distribution across Availability Zones, Amazon EC2 Auto Scaling rebalances the group to re-establish an even distribution unless you suspend the `AZRebalance` process. For more information, see [Suspend and resume Amazon EC2 Auto Scaling processes](as-suspend-resume-processes.md).
+ If you attach an instance to an Auto Scaling group that has an attached load balancer target group or Classic Load Balancer, the instance is registered with the load balancer.

For an instance to be attached, it must meet the following criteria:
+ The instance is in the `running` state with Amazon EC2.
+ The AMI used to launch the instance must still exist.
+ The instance is not a member of another Auto Scaling group.
+ The instance is launched into one of the Availability Zones defined in the Auto Scaling group.
+ If the Auto Scaling group has an attached load balancer target group or Classic Load Balancer, the instance and the load balancer must both be in the same VPC.

## Move an instance to a different group using detach and attach


Use one of the following procedures to detach an instance from your Auto Scaling group and attach it to a different Auto Scaling group.

To create a new Auto Scaling group from a detached instance, see [Create an Auto Scaling group from existing instance using the AWS CLI](create-asg-from-instance.md). This approach is not recommended because it creates a launch configuration.

------
#### [ Console ]

**To detach an instance from an Auto Scaling group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to your Auto Scaling group.

   A split pane opens up in the bottom of the page. 

1. On the **Instance management** tab, in **Instances**, select an instance and choose **Actions**, **Detach**.

1. In the **Detach instance** dialog box, keep the **Replace instance** check box selected to launch a replacement instance. Clear the check box to decrement the desired capacity. 

1. When prompted for confirmation, type **detach** to confirm removing the specified instance from the Auto Scaling group, and then choose **Detach instance**.

You can now attach the instance to a different Auto Scaling group.

**To attach an instance to an Auto Scaling group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. (Optional) On the navigation pane, under **Auto Scaling**, choose **Auto Scaling Groups**. Select the Auto Scaling group and verify that the maximum size of the Auto Scaling group is large enough that you can add another instance. Otherwise, on the **Details** tab, increase the maximum capacity. 

1. On the navigation pane, under **Instances**, choose **Instances**, and then select an instance.

1. Choose **Actions**, **Instance settings**, **Attach to Auto Scaling Group**.

1. On the **Attach to Auto Scaling group** page, for **Auto Scaling Group**, select the Auto Scaling group, and then choose **Attach**.

1. If the instance doesn't meet the criteria, you get an error message with the details. For example, the instance might not be in the same Availability Zone as the Auto Scaling group. Choose **Close** and try again with an Auto Scaling group that meets the criteria.

------
#### [ AWS CLI ]

To detach and attach an instance, use the following example commands. Replace each *user input placeholder* with your own information.

**To detach an instance from an Auto Scaling group**

1. To describe the current instances, use the following [describe-auto-scaling-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-instances.html) command.

   ```
   aws autoscaling describe-auto-scaling-instances \
     --query 'AutoScalingInstances[?AutoScalingGroupName==`my-asg`]'
   ```

   The following example shows the output produced when you run this command. 

   Take note of the ID of the instance that you intend to remove from the group. You need this ID in the next step.

   ```
   {
       "AutoScalingInstances": [
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-05b4f7d5be44822a6",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           },
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-0c20ac468fa3049e8",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           },
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-0787762faf1c28619",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           },
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-0f280a4c58d319a8a",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           }
       ]
   }
   ```

1. To detach an instance without decrementing the desired capacity, use the following [detach-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/detach-instances.html) command.

   ```
   aws autoscaling detach-instances --instance-ids i-05b4f7d5be44822a6 \
     --auto-scaling-group-name my-asg
   ```

   To detach an instance and decrement the desired capacity, include the `--should-decrement-desired-capacity` option.

   ```
   aws autoscaling detach-instances --instance-ids i-05b4f7d5be44822a6 \
     --auto-scaling-group-name my-asg --should-decrement-desired-capacity
   ```

You can now attach the instance to a different Auto Scaling group.

**To attach an instance to an Auto Scaling group**

1. To attach the instance to a different Auto Scaling group, use the following [attach-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/attach-instances.html) command.

   ```
   aws autoscaling attach-instances --instance-ids i-05b4f7d5be44822a6 --auto-scaling-group-name my-asg-for-testing
   ```

1. To verify the size of the Auto Scaling group after attaching an instance, use the following [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) command.

   ```
   aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg-for-testing
   ```

   The following example response shows that the group has two running instances, one of which is the instance you attached.

   ```
   {
       "AutoScalingGroups": [
           {
               "AutoScalingGroupName": "my-asg-for-testing",
               "AutoScalingGroupARN": "arn",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "2",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "MinSize": 1,
               "MaxSize": 5,
               "DesiredCapacity": 2,
               ...
               "Instances": [
                   {
                       "ProtectedFromScaleIn": false,
                       "AvailabilityZone": "us-west-2a",
                       "LaunchTemplate": {
                           "LaunchTemplateName": "my-launch-template",
                           "Version": "1",
                           "LaunchTemplateId": "lt-050555ad16a3f9c7f"
                       },
                       "InstanceId": "i-05b4f7d5be44822a6",
                       "InstanceType": "t3.micro",
                       "HealthStatus": "Healthy",
                       "LifecycleState": "InService"
                   },
                   {
                       "ProtectedFromScaleIn": false,
                       "AvailabilityZone": "us-west-2a",
                       "LaunchTemplate": {
                           "LaunchTemplateName": "my-launch-template",
                           "Version": "2",
                           "LaunchTemplateId": "lt-050555ad16a3f9c7f"
                       },
                       "InstanceId": "i-00dcdfffdf5175890",
                       "InstanceType": "t3.micro",
                       "HealthStatus": "Healthy",
                       "LifecycleState": "InService"
                   }
               ],
               ...
           }
       ]
   }
   ```

------

# Temporarily remove instances from your Auto Scaling group

You can put an instance that is in the `InService` state into the `Standby` state, update or troubleshoot the instance, and then return the instance to service. Instances that are on standby are still part of the Auto Scaling group, but they do not actively handle load balancer traffic.

This feature helps you stop and start the instances or reboot them without worrying about Amazon EC2 Auto Scaling terminating the instances as part of its health checks or during scale-in events.

For example, you can change the Amazon Machine Image (AMI) for an Auto Scaling group at any time by changing the launch template or launch configuration. Any subsequent instances that the Auto Scaling group launches use this AMI. However, the Auto Scaling group does not update the instances that are currently in service. You can terminate these instances and let Amazon EC2 Auto Scaling replace them, or use the instance refresh feature to terminate and replace the instances. Or, you can put the instances on standby, update the software, and then put the instances back in service.

Detaching instances from an Auto Scaling group is similar to putting instances on standby. Detaching instances might be useful if you want to attach them to a different group or manage the instances like standalone EC2 instances and possibly terminate them. For more information, see [Detach or attach instances from your Auto Scaling group](ec2-auto-scaling-detach-attach-instances.md).

**Topics**
+ [How the standby state works](#standby-state)
+ [Considerations](#standby-instance-considerations)
+ [Health status of an instance in a standby state](#standby-instance-health-status)
+ [Temporarily remove an instance by setting it to standby](#standby-state)

## How the standby state works


The standby state works as follows to help you temporarily remove an instance from your Auto Scaling group:

1. You put an instance into the standby state. The instance remains in this state until you exit the standby state.

1. If there is a load balancer target group or Classic Load Balancer attached to your Auto Scaling group, the instance is deregistered from the load balancer. If connection draining is enabled for the load balancer, Elastic Load Balancing waits 300 seconds by default before completing the deregistration process, which helps in-flight requests to complete. 

1. You can update or troubleshoot the instance.

1. You return the instance to service by exiting the standby state.

1. If there is a load balancer target group or Classic Load Balancer attached to your Auto Scaling group, the instance is registered with the load balancer.

For more information about the lifecycle of instances in an Auto Scaling group, see [Amazon EC2 Auto Scaling instance lifecycle](ec2-auto-scaling-lifecycle.md).

## Considerations


The following are considerations when moving instances in and out of the standby state:
+ When you put an instance on standby, you can either decrement the desired capacity through this operation, or keep it the same.
  + If you choose not to decrement the desired capacity of the Auto Scaling group, Amazon EC2 Auto Scaling launches an instance to replace the one on standby. The intention is to help you maintain capacity for your application while one or more instances are on standby.
  + If you choose to decrement the desired capacity of the Auto Scaling group, this prevents the launch of an instance to replace the one on standby.
+ After you put the instance back in service, the desired capacity is incremented to reflect how many instances are in the Auto Scaling group.
+ For the increment (or decrement) to succeed, the new desired capacity must be between the minimum and maximum size of the group. Otherwise, the operation fails.
+ If at any time after you put an instance on standby, or return it to service by exiting the standby state, your Auto Scaling group is not balanced across Availability Zones, Amazon EC2 Auto Scaling compensates by rebalancing the Availability Zones unless you suspend the `AZRebalance` process. For more information, see [Suspend and resume Amazon EC2 Auto Scaling processes](as-suspend-resume-processes.md).
+ You are billed for instances that are in a standby state.

## Health status of an instance in a standby state


Amazon EC2 Auto Scaling does not perform health checks on instances that are in a standby state. While the instance is in a standby state, its health status reflects the status that it had before you put it on standby. Amazon EC2 Auto Scaling does not perform a health check on the instance until you put it back in service.

For example, if you put a healthy instance on standby and then terminate it, Amazon EC2 Auto Scaling continues to report the instance as healthy. If you attempt to put a terminated instance that was on standby back in service, Amazon EC2 Auto Scaling performs a health check on the instance, determines that it is terminating and unhealthy, and launches a replacement instance. For more information, see [Health checks for instances in an Auto Scaling group](ec2-auto-scaling-health-checks.md).

## Temporarily remove an instance by setting it to standby


Use one of the following procedures to take an instance out of service temporarily by placing it into standby state.

------
#### [ Console ]

**To temporarily remove an instance**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to the Auto Scaling group.

   A split pane opens up in the bottom of the page. 

1. On the **Instance management** tab, in **Instances**, select an instance.

1. Choose **Actions**, **Set to Standby**.

1. In the **Set to Standby** dialog box, keep the **Replace instance** check box selected to launch a replacement instance. Clear the check box to decrement the desired capacity. 

1. When prompted for confirmation, type **standby** to confirm putting the specified instance into the `Standby` state, and then choose **Set to Standby**.

1. You can update or troubleshoot your instance as needed. When you have finished, continue with the next step to return the instance to service.

1. Select the instance, choose **Actions**, **Set to InService**. In the **Set to InService** dialog box, choose **Set to InService**.

------
#### [ AWS CLI ]

To temporarily remove an instance from your Auto Scaling group, use the following example commands. Replace each *user input placeholder* with your own information.

**To temporarily remove an instance**

1. Use the following [describe-auto-scaling-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-instances.html) command to identify the instance to update.

   ```
   aws autoscaling describe-auto-scaling-instances \
     --query 'AutoScalingInstances[?AutoScalingGroupName==`my-asg`]'
   ```

   The following example shows the output produced when you run this command. 

   Take note of the ID of the instance that you intend to remove from the group. You need this ID in the next step.

   ```
   {
       "AutoScalingInstances": [
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-05b4f7d5be44822a6",
               "InstanceId": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           },
          ...
       ]
   }
   ```

1. Move the instance into a `Standby` state using the following [enter-standby](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/enter-standby.html) command. The `--should-decrement-desired-capacity` option decreases the desired capacity so that the Auto Scaling group does not launch a replacement instance.

   ```
   aws autoscaling enter-standby --instance-ids i-05b4f7d5be44822a6 \
     --auto-scaling-group-name my-asg --should-decrement-desired-capacity
   ```

   The following is an example response.

   ```
   {
       "Activities": [
           {
               "ActivityId": "3b1839fe-24b0-40d9-80ae-bcd883c2be32",
               "AutoScalingGroupName": "my-asg",
               "Description": "Moving EC2 instance to Standby: i-05b4f7d5be44822a6",
               "Cause": "At 2023-12-15T21:31:26Z instance i-05b4f7d5be44822a6 was moved to standby 
                 in response to a user request, shrinking the capacity from 4 to 3.",
               "StartTime": "2023-12-15T21:31:26.150Z",
               "StatusCode": "InProgress",
               "Progress": 50,
               "Details": "{\"Subnet ID\":\"subnet-c934b782\",\"Availability Zone\":\"us-west-2a\"}"
           }
       ]
   }
   ```

1. (Optional) Verify that the instance is in `Standby` using the following [describe-auto-scaling-instances](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-instances.html) command.

   ```
   aws autoscaling describe-auto-scaling-instances --instance-ids i-05b4f7d5be44822a6
   ```

   The following is an example response. Notice that the status of the instance is now `Standby`.

   ```
   {
       "AutoScalingInstances": [
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-05b4f7d5be44822a6",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "Standby"
           },
          ...
       ]
   }
   ```

1. You can update or troubleshoot your instance as needed. When you have finished, continue with the next step to return the instance to service.

1. Put the instance back in service using the following [exit-standby](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/exit-standby.html) command.

   ```
   aws autoscaling exit-standby --instance-ids i-05b4f7d5be44822a6 --auto-scaling-group-name my-asg
   ```

   The following is an example response.

   ```
   {
       "Activities": [
           {
               "ActivityId": "db12b166-cdcc-4c54-8aac-08c5935f8389",
               "AutoScalingGroupName": "my-asg",
               "Description": "Moving EC2 instance out of Standby: i-05b4f7d5be44822a6",
               "Cause": "At 2023-12-15T21:46:14Z instance i-05b4f7d5be44822a6 was moved out of standby in
                  response to a user request, increasing the capacity from 3 to 4.",
               "StartTime": "2023-12-15T21:46:14.678Z",
               "StatusCode": "PreInService",
               "Progress": 30,
               "Details": "{\"Subnet ID\":\"subnet-c934b782\",\"Availability Zone\":\"us-west-2a\"}"
           }
       ]
   }
   ```

1. (Optional) Verify that the instance is back in service using the following `describe-auto-scaling-instances` command.

   ```
   aws autoscaling describe-auto-scaling-instances --instance-ids i-05b4f7d5be44822a6
   ```

   The following is an example response. Notice that the status of the instance is `InService`.

   ```
   {
       "AutoScalingInstances": [
           {
               "ProtectedFromScaleIn": false,
               "AvailabilityZone": "us-west-2a",
               "LaunchTemplate": {
                   "LaunchTemplateName": "my-launch-template",
                   "Version": "1",
                   "LaunchTemplateId": "lt-050555ad16a3f9c7f"
               },
               "InstanceId": "i-05b4f7d5be44822a6",
               "InstanceType": "t3.micro",
               "AutoScalingGroupName": "my-asg",
               "HealthStatus": "HEALTHY",
               "LifecycleState": "InService"
           },
          ...
       ]
   }
   ```

------

# Delete your Auto Scaling infrastructure


To completely delete your scaling infrastructure, complete the following tasks.

**Topics**
+ [Delete your Auto Scaling group](#as-shutdown-lbs-delete-asg-cli)
+ [(Optional) Delete the launch configuration](#as-shutdown-lbs-delete-lc-cli)
+ [(Optional) Delete the launch template](#as-shutdown-lbs-delete-lt-cli)
+ [(Optional) Delete the load balancer and target groups](#as-shutdown-lbs-delete-lbs-cli)
+ [(Optional) Delete CloudWatch alarms](#as-shutdown-delete-alarms-cli)
+ [Configure deletion protection for your Amazon EC2 Auto Scaling resources](resource-deletion-protection.md)

## Delete your Auto Scaling group


When you delete an Auto Scaling group, its desired, minimum, and maximum capacity values are set to 0. As a result, the instances are terminated. Terminating an instance also deletes any associated logs or data, and any volumes on the instance. If you do not want to terminate one or more instances, detach them before you delete the Auto Scaling group. If the group has scaling policies, deleting the group deletes the policies, the underlying alarm actions, and any alarm that no longer has an associated action.

**To delete your Auto Scaling group (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to your Auto Scaling group and choose **Actions**, **Delete**. 

1. When prompted for confirmation, type **delete** to confirm deleting the specified Auto Scaling group and then choose **Delete**.

   A loading icon in the **Name** column indicates that the Auto Scaling group is being deleted. The **Desired**, **Min**, and **Max** columns show `0` instances for the Auto Scaling group. It takes a few minutes to terminate the instance and delete the group. Refresh the list to see the current state. 

**To delete your Auto Scaling group (AWS CLI)**  
Use the following [delete-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-auto-scaling-group.html) command to delete the Auto Scaling group. This operation fails if the group contains any EC2 instances; it works only on groups with zero instances. 

```
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-asg
```

If the group has instances or scaling activities in progress, use the [delete-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-auto-scaling-group.html) command with the `--force-delete` option. This will also terminate the EC2 instances. When you delete an Auto Scaling group from the Amazon EC2 Auto Scaling console, the console uses this operation to terminate any EC2 instances and delete the group at the same time.

```
aws autoscaling delete-auto-scaling-group --auto-scaling-group-name my-asg --force-delete
```
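
In automation, a common pattern is to check whether the group is empty before choosing between a plain delete and a force delete. The following Python sketch shows that decision using a sample response shape from `describe-auto-scaling-groups` (the group name and instance ID are placeholders, not a live call):

```
# Abbreviated shape of one group from a describe-auto-scaling-groups response.
group = {
    "AutoScalingGroupName": "my-asg",
    "Instances": [{"InstanceId": "i-05b4f7d5be44822a6"}],
}

def delete_flags(group: dict) -> list:
    # Use --force-delete only when instances remain; an empty group deletes cleanly.
    flags = ["--auto-scaling-group-name", group["AutoScalingGroupName"]]
    if group["Instances"]:
        flags.append("--force-delete")
    return flags

print(delete_flags(group))
```

An empty `Instances` list produces only the group-name flag, so the safer plain delete is used whenever possible.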

## (Optional) Delete the launch configuration


You can skip this step to keep the launch configuration for future use.

**To delete the launch configuration (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. On the left navigation pane, under **Auto Scaling**, choose **Auto Scaling Groups**. 

1. Choose **Launch configurations** near the top of the page. When prompted for confirmation, choose **View launch configurations** to confirm that you want to view the **Launch configurations** page. 

1. Select your launch configuration and choose **Actions**, **Delete launch configuration**.

1. When prompted for confirmation, choose **Delete**.

**To delete the launch configuration (AWS CLI)**  
Use the following [delete-launch-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/delete-launch-configuration.html) command.

```
aws autoscaling delete-launch-configuration --launch-configuration-name my-launch-config
```

## (Optional) Delete the launch template


You can delete your launch template or just one version of your launch template. When you delete a launch template, all its versions are deleted.

You can skip this step to keep the launch template for future use. 

**To delete your launch template (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. On the navigation pane, under **Instances**, choose **Launch Templates**.

1. Select your launch template and then do one of the following: 
   + Choose **Actions**, **Delete template**. When prompted for confirmation, type **Delete** to confirm deleting the specified launch template and then choose **Delete**.
   + Choose **Actions**, **Delete template version**. Select the version to delete and choose **Delete**.

**To delete the launch template (AWS CLI)**  
Use the following [delete-launch-template](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/delete-launch-template.html) command to delete your template and all its versions.

```
aws ec2 delete-launch-template --launch-template-id lt-068f72b72934aff71
```

Alternatively, you can use the [delete-launch-template-versions](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/delete-launch-template-versions.html) command to delete a specific version of a launch template. 

```
aws ec2 delete-launch-template-versions --launch-template-id lt-068f72b72934aff71 --versions 1
```

## (Optional) Delete the load balancer and target groups


Skip this step if your Auto Scaling group is not associated with an Elastic Load Balancing load balancer, or if you want to keep the load balancer for future use. 

**To delete your load balancer (console)**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. On the navigation pane, under **Load Balancing**, choose **Load Balancers**.

1. Choose the load balancer and choose **Actions**, **Delete**.

1. When prompted for confirmation, choose **Yes, Delete**.

**To delete your target group (console)**

1. On the navigation pane, under **Load Balancing**, choose **Target Groups**.

1. Choose the target group and choose **Actions**, **Delete**.

1. When prompted for confirmation, choose **Yes, Delete**.

**To delete the load balancer associated with the Auto Scaling group (AWS CLI)**  
For Application Load Balancers and Network Load Balancers, use the following [delete-load-balancer](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/elbv2/delete-load-balancer.html) and [delete-target-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/elbv2/delete-target-group.html) commands.

```
aws elbv2 delete-load-balancer --load-balancer-arn my-load-balancer-arn
aws elbv2 delete-target-group --target-group-arn my-target-group-arn
```

For Classic Load Balancers, use the following [delete-load-balancer](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/elb/delete-load-balancer.html) command.

```
aws elb delete-load-balancer --load-balancer-name my-load-balancer
```

## (Optional) Delete CloudWatch alarms


To delete the CloudWatch alarms associated with your Auto Scaling group, complete the following steps. For example, you might have alarms associated with step scaling or simple scaling policies.

**Note**  
Deleting an Auto Scaling group automatically deletes the CloudWatch alarms that Amazon EC2 Auto Scaling manages for a target tracking scaling policy. 

You can skip this step if your Auto Scaling group is not associated with any CloudWatch alarms, or if you want to keep the alarms for future use.

**To delete the CloudWatch alarms (console)**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. On the navigation pane, choose **Alarms**.

1. Choose the alarms and choose **Action**, **Delete**.

1. When prompted for confirmation, choose **Delete**.

**To delete the CloudWatch alarms (AWS CLI)**  
Use the [delete-alarms](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudwatch/delete-alarms.html) command. You can delete one or more alarms at a time. For example, use the following command to delete the `Step-Scaling-AlarmHigh-AddCapacity` and `Step-Scaling-AlarmLow-RemoveCapacity` alarms.

```
aws cloudwatch delete-alarms --alarm-names Step-Scaling-AlarmHigh-AddCapacity Step-Scaling-AlarmLow-RemoveCapacity
```

# Configure deletion protection for your Amazon EC2 Auto Scaling resources

Protect your Amazon EC2 Auto Scaling infrastructure from accidental deletion by configuring multiple layers of protection. Amazon EC2 Auto Scaling provides several approaches to prevent unwanted deletion of your Auto Scaling groups and the Amazon EC2 instances that they manage. 

**Topics**
+ [Configure Auto Scaling group deletion protection](#asg-deletion-protection)
+ [Control deletion permissions with IAM policies](#deletion-protection-iam-policies)

## Configure Auto Scaling group deletion protection


Deletion protection is a resource-level setting that prevents accidental deletion of your Amazon EC2 Auto Scaling group. When enabled, deletion protection blocks the [DeleteAutoScalingGroup](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DeleteAutoScalingGroup.html) API operation from succeeding. To delete the group, you must first update the deletion protection setting to a less restrictive level. 

Amazon EC2 Auto Scaling offers three levels of deletion protection:

**None** (default)  
 No deletion protection is enabled, meaning your Auto Scaling group can be deleted with or without using the `ForceDelete` option. When `ForceDelete` is used, all Amazon EC2 instances managed by your Auto Scaling group will also be forcibly terminated without executing termination lifecycle hooks. 

**Prevent force deletion**  
 Your Auto Scaling group can't be deleted when using the `ForceDelete` option. This configuration allows deletion of empty Auto Scaling groups (groups with no instances). This option is recommended for production workloads where you want to prevent mass instance termination but allow cleanup of empty groups. 

**Prevent all deletion**  
 Your Auto Scaling group can't be deleted regardless of whether the `ForceDelete` option is used. This option provides the strongest protection against accidental deletion. It requires explicitly disabling deletion protection before your Auto Scaling group can be deleted. This is recommended for mission-critical Auto Scaling groups that should rarely or never be deleted. 
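
The three levels can be summarized as a simple decision rule. The following Python sketch is a hypothetical local model, not an AWS API; it only illustrates which delete requests each level blocks:

```
# Hypothetical model of the three deletion protection levels.
def delete_allowed(protection, force_delete):
    if protection == "none":
        return True                      # any delete succeeds
    if protection == "prevent-force-deletion":
        return not force_delete          # plain deletes of empty groups still work
    if protection == "prevent-all-deletion":
        return False                     # every delete is rejected
    raise ValueError(f"unknown protection level: {protection}")

assert delete_allowed("none", force_delete=True)
assert not delete_allowed("prevent-force-deletion", force_delete=True)
assert delete_allowed("prevent-force-deletion", force_delete=False)
assert not delete_allowed("prevent-all-deletion", force_delete=False)
```

Note that `prevent-force-deletion` still permits deleting an empty group, while `prevent-all-deletion` rejects every delete request.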

### How deletion protection works


When you call the [DeleteAutoScalingGroup](https://docs.aws.amazon.com/autoscaling/ec2/APIReference/API_DeleteAutoScalingGroup.html) API operation with deletion protection enabled: 

1.  Amazon EC2 Auto Scaling validates the deletion protection setting before processing the request. 

1.  If the configured deletion protection level blocks the deletion attempt, Amazon EC2 Auto Scaling returns a `ValidationError`. 

1.  Your Auto Scaling group and its Amazon EC2 instances remain unchanged. 

1.  You must update the deletion protection setting to a less restrictive level before you can delete your Auto Scaling group. 

 Deletion protection does not prevent other operations such as: 
+  Updating the Auto Scaling group configuration. 
+  Terminating individual instances. 
+  Scaling operations (manual or automatic). 
+  Suspending or resuming processes. 

 For more information on how to gracefully handle instance termination, see [Design your applications to gracefully handle instance termination](gracefully-handle-instance-termination.md). 

### Configure deletion protection


 You can set deletion protection when you create an Auto Scaling group or update the setting on an existing Auto Scaling group. 

------
#### [ Console ]

**To create an Auto Scaling group with deletion protection**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Choose **Create Auto Scaling group**.

1. Complete the configuration steps for your Auto Scaling group.

1. On the **Configure group size and scaling** page, expand **Additional settings**.

1. For **Auto Scaling group deletion protection**, choose your desired protection level:
   + **None** - No deletion protection (default)
   + **Prevent force deletion** - Block force delete operations
   + **Prevent all deletion** - Block all delete operations

1. Complete the remaining steps to create your Auto Scaling group.

**To update deletion protection on an existing Auto Scaling group**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/), and choose **Auto Scaling Groups** from the navigation pane.

1. Select the check box next to your Auto Scaling group.

1. Choose **Actions**, **Edit**.

1. Under **Additional settings**, update the **Auto Scaling group deletion protection** setting.

1. Choose **Update**.

------
#### [ AWS CLI ]

**To create an Auto Scaling group with deletion protection**  
Use the [create-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/create-auto-scaling-group.html) command with the `--deletion-protection` parameter:

```
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-template LaunchTemplateName=my-template,Version='$Latest' \
    --min-size 1 \
    --max-size 5 \
    --desired-capacity 2 \
    --vpc-zone-identifier "subnet-12345678,subnet-87654321" \
    --deletion-protection prevent-force-deletion
```

Valid values for `--deletion-protection` are `none`, `prevent-force-deletion`, and `prevent-all-deletion`.

**To update deletion protection on an existing Auto Scaling group**  
Use the [update-auto-scaling-group](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/update-auto-scaling-group.html) command:

```
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --deletion-protection prevent-all-deletion
```

**To disable deletion protection**  
Set deletion protection to `none`:

```
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --deletion-protection none
```

**To verify deletion protection status**  
Use the [describe-auto-scaling-groups](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/autoscaling/describe-auto-scaling-groups.html) command:

```
aws autoscaling describe-auto-scaling-groups \
    --auto-scaling-group-names my-asg
```

------

## Control deletion permissions with IAM policies


 Use AWS Identity and Access Management (IAM) policies to control which users and roles can delete Auto Scaling groups. IAM-based controls provide an additional layer of security by restricting permissions at the identity level. 

IAM policies are particularly useful when you want to:
+  Allow different users different levels of access to Auto Scaling operations. 
+  Prevent specific users from using the `ForceDelete` option even if they can perform other Auto Scaling operations. 
+  Restrict deletion permissions to specific Auto Scaling groups. 

 The following policy allows deletion of an Auto Scaling group only if the group has the tag `environment=development`. 

------
#### [ JSON ]


```
{
   "Version":"2012-10-17",		 	 	 
   "Statement": [{
      "Effect": "Allow",
      "Action": "autoscaling:DeleteAutoScalingGroup",
      "Resource": "*",
      "Condition": {
          "StringEquals": { "aws:ResourceTag/environment": "development" }
      }
   }]
}
```

------
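
When you hand-edit a policy like the one above, a quick parse check catches JSON mistakes before you attach it. The following Python sketch loads a copy of the example policy and confirms the expected action and tag condition are present:

```
import json

# Copy of the example tag-condition policy above.
policy = json.loads("""
{
   "Version": "2012-10-17",
   "Statement": [{
      "Effect": "Allow",
      "Action": "autoscaling:DeleteAutoScalingGroup",
      "Resource": "*",
      "Condition": {
          "StringEquals": { "aws:ResourceTag/environment": "development" }
      }
   }]
}
""")

stmt = policy["Statement"][0]
assert stmt["Effect"] == "Allow"
assert stmt["Action"] == "autoscaling:DeleteAutoScalingGroup"
assert stmt["Condition"]["StringEquals"]["aws:ResourceTag/environment"] == "development"
```

A parse check like this verifies only structure; use the IAM policy simulator or validator to test the policy's actual effect.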

 The following policy uses the `autoscaling:ForceDelete` condition key to control access to the `DeleteAutoScalingGroup` API action. It prevents the affected users from using the `ForceDelete` option, which terminates all Amazon EC2 instances within an Auto Scaling group. 

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [{
        "Effect": "Deny",
        "Action": "autoscaling:DeleteAutoScalingGroup",
        "Resource": "*",
        "Condition": {
            "Bool": {
                "autoscaling:ForceDelete": "true"
            }
        }
    }]
}
```

------

 Alternatively, instead of using condition keys, you can control access to specific Auto Scaling groups by specifying their ARNs in the `Resource` element. 

 The following policy gives users permissions to use the `DeleteAutoScalingGroup` API action, but only for Auto Scaling groups whose name begins with `devteam-`. 

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "autoscaling:DeleteAutoScalingGroup",
            "Resource": "arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:*:autoScalingGroupName/devteam-*"
        }
    ]
}
```

------

 You can also specify multiple ARNs by enclosing them in a list. Including the UUID ensures that access is granted to the specific Auto Scaling group. The UUID for a new group is different from the UUID for a deleted group with the same name. 

```
"Resource": [
    "arn:aws:autoscaling:region:account-id:autoScalingGroup:uuid:autoScalingGroupName/devteam-1",
    "arn:aws:autoscaling:region:account-id:autoScalingGroup:uuid:autoScalingGroupName/devteam-2",
    "arn:aws:autoscaling:region:account-id:autoScalingGroup:uuid:autoScalingGroupName/devteam-3"
]
```
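
If you generate these `Resource` lists from a script, the ARN has a fixed shape that can be assembled from its parts. The following Python sketch shows that shape; all values passed in are placeholders:

```
def asg_arn(region, account_id, uuid, group_name):
    # Shape: arn:aws:autoscaling:region:account-id:autoScalingGroup:uuid:autoScalingGroupName/name
    return (f"arn:aws:autoscaling:{region}:{account_id}:"
            f"autoScalingGroup:{uuid}:autoScalingGroupName/{group_name}")

print(asg_arn("us-east-1", "111122223333", "uuid", "devteam-1"))
```

Building ARNs from one helper keeps the separator placement consistent across a list of groups.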

 For additional examples of IAM policies for Amazon EC2 Auto Scaling, including policies that control deletion permissions, see [Identity-based policy examples](security_iam_id-based-policy-examples.md). 