

# Getting started with scaling plans
<a name="getting-started-with-scaling-plans"></a>

Before you create a scaling plan for use with your application, review your application thoroughly as it runs in the AWS Cloud. Take note of the following:
+ Whether you have existing scaling policies created from other consoles. When you create your scaling plan, you can replace the existing scaling policies, or you can keep them (although you won't be able to change their values).
+ The target utilization that makes sense for each scalable resource in your application based on the resource as a whole. For example, the amount of CPU that the EC2 instances in an Auto Scaling group are expected to use compared to their available CPU. Or for a service like DynamoDB that uses a provisioned throughput model, the amount of read and write activity that a table or index is expected to use compared to the available throughput. In other words, the ratio of consumed to provisioned capacity. You can change the target utilization at any time after you create your scaling plan.
+ How long it takes to launch and configure a server. Knowing this helps you configure a warmup window for each EC2 instance after launch, so that a new server isn't launched while the previous one is still initializing.
+ Whether the metric history is sufficiently long to use with predictive scaling (if using newly created Auto Scaling groups). In general, having a full 14 days of historical data translates into more accurate forecasts. The minimum is 24 hours.
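The target utilization described above is just the ratio of consumed to provisioned capacity, expressed as a percentage. A minimal illustrative sketch (the numbers are hypothetical):

```python
def utilization_percent(consumed, provisioned):
    """Return utilization as a percentage of provisioned capacity."""
    if provisioned <= 0:
        raise ValueError("provisioned capacity must be positive")
    return 100.0 * consumed / provisioned

# For example, a DynamoDB table consuming 350 read capacity units out of
# 500 provisioned units is running at 70 percent utilization.
print(utilization_percent(350, 500))  # 70.0
```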

The better you understand your application, the more effective you can make your scaling plan. 

The following tasks help you become familiar with scaling plans. You will create a scaling plan for a single Auto Scaling group and enable predictive scaling and dynamic scaling. 

**Topics**
+ [Step 1: Find your scalable resources](gs-select-application.md)
+ [Step 2: Specify the scaling strategy](gs-configure-scaling-plan.md)
+ [Step 3: Configure advanced settings (optional)](gs-specify-custom-settings.md)
+ [Step 4: Create your scaling plan](gs-create-scaling-plan.md)
+ [Step 5: Clean up](gs-delete-scaling-plan.md)
+ [Step 6: Next steps](gs-next-steps.md)

# Step 1: Find your scalable resources
<a name="gs-select-application"></a>

This section includes a hands-on introduction to creating scaling plans in the AWS Auto Scaling console. If this is your first scaling plan, we recommend that you start by creating a sample scaling plan using an Amazon EC2 Auto Scaling group. 

## Prerequisites
<a name="gs-select-application-prereq"></a>

To practice using a scaling plan, create an Auto Scaling group. Launch at least one Amazon EC2 instance in the Auto Scaling group. For more information, see [Get started with Amazon EC2 Auto Scaling](https://docs.aws.amazon.com/autoscaling/ec2/userguide/get-started-with-ec2-auto-scaling.html) in the *Amazon EC2 Auto Scaling User Guide*.

Use an Auto Scaling group with CloudWatch metrics enabled to have capacity data on the graphs that are available when you complete the **Create Scaling Plan** wizard. For more information, see [Monitor CloudWatch metrics for your Auto Scaling groups and instances](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-cloudwatch-monitoring.html) in the *Amazon EC2 Auto Scaling User Guide*.

If possible, generate some load for a few days or more so that CloudWatch metric data is available for the predictive scaling feature.

Verify that you have the permissions required to work with scaling plans. For more information, see [Identity and access management for scaling plans](auth-and-access-control.md).

## Add your Auto Scaling group to your new scaling plan
<a name="gs-add-auto-scaling-group"></a>

When you create a scaling plan from the console, it helps you find your scalable resources as a first step. Before you begin, confirm that you meet the following requirements:
+ You created an Auto Scaling group and launched at least one EC2 instance, as described in the previous section.
+ The Auto Scaling group you created has existed for at least 24 hours.

**To start creating a scaling plan**

1. Open the AWS Auto Scaling console at [https://console.aws.amazon.com/awsautoscaling/](https://console.aws.amazon.com/awsautoscaling/).

1. On the navigation bar at the top of the screen, choose the same Region that you used when you created your Auto Scaling group. 

1. From the welcome page, choose **Get started**.

1. On the **Find scalable resources** page, do one of the following:
   + Choose **Search by CloudFormation stack**, and then choose the CloudFormation stack to use. 
   + Choose **Search by tag**. Then, for each tag, choose a tag key from **Key** and tag values from **Value**. To add tags, choose **Add another row**. To remove tags, choose **Remove**.
   + Choose **Choose EC2 Auto Scaling groups**, and then choose one or more Auto Scaling groups.
**Note**  
For an introductory tutorial, choose **Choose EC2 Auto Scaling groups**, and then choose the Auto Scaling group you created.  
![\[Console options for finding scalable resources.\]](http://docs.aws.amazon.com/autoscaling/plans/userguide/images/aws-as-gs-choose-asg.PNG)

1. Choose **Next** to continue with the scaling plan creation process.

## Learn more about discovering your scalable resources
<a name="gs-choose-discovery-method"></a>

If you have already created a sample scaling plan and would like to create more, see the following scenarios for using a CloudFormation stack or a set of tags in more detail. You can use this section to decide whether to choose the **Search by CloudFormation stack** or **Search by tag** option to discover your scalable resources when using the console to create your scaling plan.

Choosing the **Search by CloudFormation stack** or **Search by tag** option in step 1 of the **Create Scaling Plan** wizard makes the scalable resources associated with the stack or set of tags available to the scaling plan. As you define your scaling plan, you can then choose which of these resources to include or exclude. 

**Discovering scalable resources using a CloudFormation stack**  
When you use CloudFormation, you work with stacks to provision resources. All of the resources in a stack are defined by the stack's template. Your scaling plan adds an orchestration layer on top of the stack that makes it easier to configure scaling for multiple resources. Without a scaling plan, you would need to set up scaling for each scalable resource individually. This means figuring out the order for provisioning resources and scaling policies, and understanding the subtleties of how these dependencies work. 

In the AWS Auto Scaling console, you can select an existing stack to scan it for resources that can be configured for automatic scaling. AWS Auto Scaling only finds resources that are defined in the selected stack. It does not traverse through nested stacks. 

For your ECS services to be discoverable in a CloudFormation stack, the AWS Auto Scaling console must know which ECS cluster is running the service. This requires that your ECS services be in the same CloudFormation stack as the ECS cluster that is running the service. Otherwise, they must be part of the default cluster. To be identified correctly, the ECS service name must also be unique across each of these ECS clusters.

For more information about CloudFormation, see [What is CloudFormation?](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) in the *AWS CloudFormation User Guide*. 

**Discovering scalable resources using tags**  
Tags provide metadata that can be used to discover related scalable resources in the AWS Auto Scaling console, using tag filters.

Use tags to find any of the following resources: 
+ Aurora DB clusters
+ Auto Scaling groups
+ DynamoDB tables and global secondary indexes

When you search by more than one tag, each resource must have all of the listed tags to be discovered. 
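The matching rule above is a logical AND across tag filters: a resource is discovered only if it carries every listed key-value pair. A small sketch of that rule (the tag keys and values are hypothetical):

```python
def matches_all_tags(resource_tags, tag_filters):
    """True only if the resource has every key-value pair in tag_filters."""
    return all(resource_tags.get(key) == value
               for key, value in tag_filters.items())

filters = {"environment": "production", "team": "web"}

# Has both tags: discovered.
print(matches_all_tags({"environment": "production", "team": "web"}, filters))  # True

# Missing the "team" tag: not discovered.
print(matches_all_tags({"environment": "production"}, filters))  # False
```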

For more information about tagging, read the following documentation.
+ Learn how to [tag Aurora clusters](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_Tagging.html) in the *Amazon Aurora User Guide*.
+ Learn how to [tag Auto Scaling groups](https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-tagging.html) in the *Amazon EC2 Auto Scaling User Guide*.
+ Learn how to [tag DynamoDB resources](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tagging.html) in the *Amazon DynamoDB Developer Guide*.

# Step 2: Specify the scaling strategy
<a name="gs-configure-scaling-plan"></a>

Use the following procedure to specify scaling strategies for the resources that were found in the previous step. 

For each type of resource, AWS Auto Scaling chooses the metric that is most commonly used for determining how much of the resource is in use at any given time. You choose the most appropriate scaling strategy to optimize performance of your application based on this metric. When you enable the dynamic scaling feature and the predictive scaling feature, the scaling strategy is shared between them. For more information, see [How scaling plans work](how-it-works.md).

The following scaling strategies are available:
+ **Optimize for availability**—AWS Auto Scaling scales the resource out and in automatically to maintain resource utilization at 40 percent. This option is useful when your application has urgent and sometimes unpredictable scaling needs.
+ **Balance availability and cost**—AWS Auto Scaling scales the resource out and in automatically to maintain resource utilization at 50 percent. This option helps you maintain high availability while also reducing costs.
+ **Optimize for cost**—AWS Auto Scaling scales the resource out and in automatically to maintain resource utilization at 70 percent. This option is useful for lowering costs if your application can handle having reduced buffer capacity when there are unexpected changes in demand.
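The three built-in strategies differ only in the target utilization value they maintain, per the list above. A simple sketch of that mapping:

```python
# Target utilization (percent) maintained by each built-in scaling strategy.
STRATEGY_TARGETS = {
    "Optimize for availability": 40,
    "Balance availability and cost": 50,
    "Optimize for cost": 70,
}

# A lower target keeps more headroom (availability); a higher target
# runs resources hotter (cost savings).
print(STRATEGY_TARGETS["Balance availability and cost"])  # 50
```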

For example, the scaling plan configures your Auto Scaling group to add or remove Amazon EC2 instances based on how much of the CPU is used on average for all instances in the group. You choose whether to optimize utilization for availability, cost, or a combination of the two by changing the scaling strategy. 

Alternatively, you can configure a custom strategy if an existing strategy doesn't meet your needs. With a custom strategy, you can change the target utilization value, choose a different metric, or both. 

**Important**  
For the introductory tutorial, complete only the first step of the following procedure and then choose **Next** to continue. 

**To specify a scaling strategy**

1. On the **Specify scaling strategy** page, for **Scaling plan details**, **Name**, enter a name for your scaling plan. The name of your scaling plan must be unique within your set of scaling plans for the Region. It can have a maximum of 128 characters, and it must not contain pipes "|", forward slashes "/", or colons ":".

1. All included resources are listed by resource type. For **Auto Scaling groups**, do the following:  
![\[Overview of scaling strategies for Auto Scaling group.\]](http://docs.aws.amazon.com/autoscaling/plans/userguide/images/aws-as-gs-choose-scaling-strategy.PNG)

   1. Skip this step to use the default scaling strategy and metrics. To use a different scaling strategy or metrics instead, proceed with the following steps:

      1. For the **Scaling strategy**, choose the desired scaling strategy. 

         For the introductory tutorial, make sure to choose **Optimize for availability**. This specifies that the average CPU utilization of your Auto Scaling group will be maintained at 40 percent.

      1. If you chose **Custom**, expand the **Configuration details** to choose the desired metrics and target value. 
         + For **Scaling metric**, choose the desired scaling metric.
         + For **Target value**, choose the desired target value, such as the target utilization or the target throughput during any one-minute interval. 
         + For **Load metric** [Auto Scaling groups only], choose the desired load metric to use for predictive scaling. 
         + Select **Replace external scaling policies** to specify that AWS Auto Scaling can delete scaling policies previously created from outside of the scaling plan (such as from other consoles) and replace them with new target tracking scaling policies created by the scaling plan.

   1. (Optional) By default, predictive scaling is enabled for Auto Scaling groups. To turn off predictive scaling for the Auto Scaling groups, clear **Enable predictive scaling**. 

   1. (Optional) By default, dynamic scaling is enabled for each resource type. To turn off dynamic scaling for the resource type, clear **Enable dynamic scaling**. 

   1. (Optional) By default, when you specify an application source from which multiple scalable resources are discovered, all resource types are automatically included in your scaling plan. To omit a type of resource from your scaling plan, clear **Include in scaling plan**.

1. (Optional) To specify a scaling strategy for another resource type, repeat the preceding steps.

1. When you are finished, choose **Next** to continue with the scaling plan creation process.

# Step 3: Configure advanced settings (optional)
<a name="gs-specify-custom-settings"></a>

Now that you have specified the scaling strategy to use for each resource type, you can choose to customize any of the default settings on a per-resource basis using the **Configure advanced settings** step. For each resource type, there are multiple groups of settings that you can customize. In most cases, however, the default settings are sufficient, with the possible exception of the values for minimum capacity and maximum capacity, which you should adjust carefully.

Skip this procedure if you would like to keep the default settings. You can change these settings anytime by editing the scaling plan.

**Important**  
For the introductory tutorial, let's make a few changes to update the maximum capacity of your Auto Scaling group and enable predictive scaling in forecast only mode. Although you do not need to customize all of the settings for the tutorial, let's also briefly examine the settings in each section. 

## General settings
<a name="gs-customize-general-scaling"></a>

Use this procedure to view and customize the settings you specified in the previous step, on a per resource basis. You can also customize the minimum capacity and maximum capacity for each resource. 

**To view and customize the general settings**

1. On the **Configure advanced settings** page, choose the arrow to the left of any of the section headings to expand the section. For the tutorial, expand the **Auto Scaling groups** section.

1. From the table that's displayed, choose the Auto Scaling group that you are using in this tutorial. 

1. Leave the **Include in scaling plan** option selected. If this option is not selected, the resource is omitted from the scaling plan. If you do not include at least one resource, the scaling plan cannot be created. 

1. To expand the view and see the details of the **General Settings** section, choose the arrow to the left of the section heading.

1. You can make choices for any of the following items. For this tutorial, locate the **Maximum capacity** setting and enter a value of `3` in place of the current value. 
   + **Scaling strategy**—Allows you to optimize for availability, cost, or a balance of both, or to specify a custom strategy.
   + **Enable dynamic scaling**—If this setting is cleared, the selected resource cannot scale using a target tracking scaling configuration.
   + **Enable predictive scaling**—[Auto Scaling groups only] If this setting is cleared, the selected group cannot scale using predictive scaling.
   + **Scaling metric**—Specifies the scaling metric to use. If you choose **Custom**, you can specify a custom metric to use instead of the predefined metrics that are available in the console. For more information, see the next topic in this section.
   + **Target value**—Specifies the target utilization value to use.
   + **Load metric**—[Auto Scaling groups only] Specifies the load metric to use. If you choose **Custom**, you can specify a custom metric to use instead of the predefined metrics that are available in the console. For more information, see the next topic in this section.
   + **Minimum capacity**—Specifies the minimum capacity for the resource. AWS Auto Scaling ensures that your resource never goes below this size.
   + **Maximum capacity**—Specifies the maximum capacity for the resource. AWS Auto Scaling ensures that your resource never goes above this size. 
**Note**  
When you use predictive scaling, you can optionally choose a different maximum capacity behavior to use based on the forecast capacity. This setting is in the **Predictive scaling settings** section.
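The minimum and maximum capacity settings described above act as bounds on scaling: AWS Auto Scaling never takes the resource below the minimum or above the maximum. A minimal sketch of that clamping behavior:

```python
def clamp_capacity(desired, min_capacity, max_capacity):
    """Constrain a desired capacity to the [min_capacity, max_capacity] range."""
    return max(min_capacity, min(desired, max_capacity))

# With the tutorial's maximum capacity of 3, a desired capacity of 5 is
# capped at 3, and a desired capacity of 0 is raised to the minimum of 1.
print(clamp_capacity(5, 1, 3))  # 3
print(clamp_capacity(0, 1, 3))  # 1
```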

### Custom metrics
<a name="gs-customized-metric-specification"></a>

AWS Auto Scaling provides the most commonly used metrics for automatic scaling. However, depending on your needs, you might prefer to get data from different metrics instead of the metrics in the console. Amazon CloudWatch has many different metrics to choose from. CloudWatch also lets you publish your own metrics. 

You use JSON to specify a CloudWatch custom metric. Before you follow these instructions, we recommend that you become familiar with the [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/).

To specify a custom metric, you construct a JSON-formatted payload using a set of required parameters from a template. You add the values for each parameter from CloudWatch. We provide the template as part of the custom options for **Scaling metric** and **Load metric** in the advanced settings of your scaling plan. 

JSON represents data in two ways:
+ An *object*, which is an unordered collection of name-value pairs. An object is defined within left ({) and right (}) braces. Each name-value pair begins with the name, followed by a colon, followed by the value. Name-value pairs are comma-separated. 
+ An *array*, which is an ordered collection of values. An array is defined within left ([) and right (]) brackets. Items in the array are comma-separated. 

Here is an example of the JSON template with sample values for each parameter: 

```
 {
   "MetricName": "MyBackendCPU",
   "Namespace": "MyNamespace",
   "Dimensions": [
     {
       "Name": "MyOptionalMetricDimensionName",
       "Value": "MyOptionalMetricDimensionValue"
     }
   ],
   "Statistic": "Sum"
 }
```

For more information, see [Customized scaling metric specification](https://docs.aws.amazon.com/autoscaling/plans/APIReference/API_CustomizedScalingMetricSpecification.html) and [Customized load metric specification](https://docs.aws.amazon.com/autoscaling/plans/APIReference/API_CustomizedLoadMetricSpecification.html) in the *AWS Auto Scaling API Reference*.

## Dynamic scaling settings
<a name="gs-customize-dynamic-scaling"></a>

Use this procedure to view and customize the settings for the target tracking scaling policy that AWS Auto Scaling creates. 

**To view and customize the settings for dynamic scaling**

1. To expand the view and see the details of the **Dynamic scaling settings** section, choose the arrow to the left of the section heading. 

1. You can make choices for the following items. However, the default settings are fine for this tutorial. 
   + **Replace external scaling policies**—If this setting is cleared, it keeps existing scaling policies created from outside of this scaling plan, and does not create new ones. 
   + **Disable scale-in**—If this setting is cleared, automatic scale-in to decrease the current capacity of the resource is allowed when the specified metric is below the target value. 
   + **Cooldown**—Creates scale-out and scale-in cooldown periods. The cooldown period is the amount of time the scaling policy waits for a previous scaling activity to take effect. For more information, see [Cooldown period](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html#target-tracking-cooldown) in the *Application Auto Scaling User Guide*. (This setting is not shown if the resource is an Auto Scaling group.) 
   + **Instance warmup**—[Auto Scaling groups only] Controls the amount of time that elapses before a newly launched instance begins contributing to the CloudWatch metrics. For more information, see [Instance warmup](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html#as-target-tracking-scaling-warmup) in the *Amazon EC2 Auto Scaling User Guide*.
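The settings above correspond to fields of the target tracking configuration in the AWS Auto Scaling Plans API. The following is an illustrative sketch only (the values are examples, not recommendations):

```json
{
  "PredefinedScalingMetricSpecification": {
    "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
  },
  "TargetValue": 50.0,
  "DisableScaleIn": false,
  "ScaleOutCooldown": 300,
  "ScaleInCooldown": 300,
  "EstimatedInstanceWarmup": 300
}
```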

## Predictive scaling settings
<a name="gs-customize-predictive-scaling"></a>

If your resource is an Auto Scaling group, use this procedure to view and customize the settings AWS Auto Scaling uses for predictive scaling. 

**To view and customize the settings for predictive scaling**

1. To expand the view and see the details of the **Predictive scaling settings** section, choose the arrow to the left of the section heading. 

1. You can make choices for the following items. For this tutorial, change the **Predictive scaling mode** to **Forecast only**.
   + **Predictive scaling mode**—Specifies the scaling mode. The default is **Forecast and scale**. If you change it to **Forecast only**, the scaling plan forecasts future capacity but doesn't apply the scaling actions.
   + **Pre-launch instances**—Adjusts the scaling actions to run earlier when scaling out. For example, if the forecast says to add capacity at 10:00 AM and the buffer time is 5 minutes (300 seconds), the corresponding scaling action runs at 9:55 AM. This is helpful for Auto Scaling groups, where it can take a few minutes from the time an instance launches until it comes into service. The actual time can vary because it depends on several factors, such as the size of the instance and whether there are startup scripts to complete. The default is 300 seconds.
   + **Max capacity behavior**—Controls whether the selected resource can scale up above the maximum capacity when the forecast capacity is close to or exceeds the currently specified maximum capacity. The default is **Enforce the maximum capacity setting**. 
     + **Enforce the maximum capacity setting**—AWS Auto Scaling cannot scale resource capacity higher than the maximum capacity. The maximum capacity is enforced as a hard limit. 
     + **Set the maximum capacity to equal forecast capacity**—AWS Auto Scaling can scale resource capacity higher than the maximum capacity to equal but not exceed forecast capacity.
     + **Increase maximum capacity above forecast capacity**—AWS Auto Scaling can scale resource capacity higher than the maximum capacity by a specified buffer value. The intention is to give the target tracking scaling policy extra capacity if unexpected traffic occurs. 
   + **Max capacity behavior buffer**—If you chose **Increase maximum capacity above forecast capacity**, choose the size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the forecast capacity. For example, with a 10 percent buffer, if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55. 
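The buffer arithmetic in the example above can be sketched as follows: when the forecast capacity is close to or exceeds the maximum capacity, the effective maximum is the forecast capacity increased by the buffer percentage.

```python
def effective_max_capacity(forecast_capacity, buffer_percent):
    """Effective maximum capacity when forecast capacity is close to or
    exceeds the configured maximum, given a buffer percentage."""
    # Integer-friendly form of forecast * (1 + buffer/100).
    return forecast_capacity * (100 + buffer_percent) / 100

# Matching the example above: a 10 percent buffer and a forecast
# capacity of 50 give an effective maximum capacity of 55.
print(effective_max_capacity(50, 10))  # 55.0
```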

1. When you are finished customizing settings, choose **Next**.
**Note**  
To revert any of your changes, select the resources and choose **Revert to original**. This resets the selected resources to their last known state within the scaling plan. 

# Step 4: Create your scaling plan
<a name="gs-create-scaling-plan"></a>

On the **Review and create** page, review the details of your scaling plan and choose **Create scaling plan**. You are directed to a page that shows the status of your scaling plan. Creating the scaling plan can take a few moments while your resources are updated. 

With predictive scaling, AWS Auto Scaling analyzes the history of the specified load metric from the past 14 days (minimum of 24 hours of data is required) to generate a forecast for two days ahead. It then schedules scaling actions to adjust the resource capacity to match the forecast for each hour in the forecast period. 

After the creation of the scaling plan is complete, view the scaling plan details by choosing its name from the **Scaling plans** screen. 

## (Optional) View scaling information for a resource
<a name="gs-view-resource"></a>

Use this procedure to view the scaling information created for a resource. 

Data is presented in the following ways:
+ Graphs showing recent metric history data from CloudWatch. 
+ Predictive scaling graphs showing load forecasts and capacity forecasts based on data from AWS Auto Scaling. 
+ A table that lists all the predictive scaling actions scheduled for the resource.

**To view scaling information for a resource**

1. Open the AWS Auto Scaling console at [https://console.aws.amazon.com/awsautoscaling/](https://console.aws.amazon.com/awsautoscaling/).

1. On the **Scaling plans** page, choose the scaling plan.

1. On the **Scaling plan details** page, choose the resource to view. 

### Monitoring and evaluating forecasts
<a name="gs-monitoring-forecasts"></a>

When your scaling plan is up and running, you can monitor the load forecast, the capacity forecast, and scaling actions to examine the performance of predictive scaling. All of this data is available in the AWS Auto Scaling console for all Auto Scaling groups that are enabled for predictive scaling. Keep in mind that your scaling plan requires at least 24 hours of historical load data to make the initial forecast.

In the following example, the left side of each graph shows a historical pattern. The right side shows the forecast that was generated by the scaling plan for the forecast period. Both actual and forecast values (in blue and orange) are plotted. 

![\[Graphs on the Predictive scaling forecasts and scheduled actions page in the console.\]](http://docs.aws.amazon.com/autoscaling/plans/userguide/images/monitoring-forecasts.png)


AWS Auto Scaling learns from your data automatically. First, it makes a load forecast. Then, a capacity forecast calculation determines the minimum number of instances that are required to support the application. Based on the capacity forecast, AWS Auto Scaling schedules scaling actions that scale the Auto Scaling group in advance of predicted load changes. If dynamic scaling is enabled (recommended), the Auto Scaling group can scale out additional capacity (or remove capacity) based on the current utilization of the group of instances.

When evaluating how well predictive scaling performs, monitor how closely the actual and forecast values match *over time*. When you create a scaling plan, AWS Auto Scaling provides graphs based on the most recent actual data. It also provides an initial forecast for the next 48 hours. However, when the scaling plan is created, there is very little forecast data to compare the actual data to. Wait until the scaling plan has obtained forecast values for a few periods before comparing the historical forecast values against the actual values. After a few days of daily forecasts, you'll have a larger sample of forecast values to compare with actual values. 

For patterns that occur on a daily basis, the time interval between creating your scaling plan and evaluating the forecast effectiveness can be as short as a few days. However, this length of time is insufficient to evaluate the forecast based on a recent pattern change. For example, let's say you are looking at the forecast for an Auto Scaling group that started a new marketing campaign in the past week. The campaign significantly increases your web traffic for the same two days each week. In situations like this, we recommend waiting for the group to collect a full week or two of new data before evaluating the effectiveness of the forecast. The same recommendation applies for a brand new Auto Scaling group that has only started to collect metric data. 

If the actual and forecast values don't match after monitoring them over an appropriate length of time, you should also consider your choice of load metric. To be effective, the load metric must represent a reliable and accurate measure of the total load on all instances in the Auto Scaling group. The load metric is core to predictive scaling. If you choose a non-optimal load metric, it can prevent predictive scaling from making accurate load and capacity forecasts and scheduling the correct capacity adjustments for your Auto Scaling group. 

# Step 5: Clean up
<a name="gs-delete-scaling-plan"></a>

After you have completed the getting started tutorial, you can choose to keep your scaling plan. However, if you are not actively using your scaling plan, you should consider deleting it so that your account does not incur unnecessary charges. 

Deleting a scaling plan deletes the target tracking scaling policies, their associated CloudWatch alarms, and the predictive scaling actions that AWS Auto Scaling created on your behalf. 

Deleting a scaling plan does not delete your CloudFormation stack, Auto Scaling group, or other scalable resources. 

**To delete a scaling plan**

1. Open the AWS Auto Scaling console at [https://console.aws.amazon.com/awsautoscaling/](https://console.aws.amazon.com/awsautoscaling/).

1. On the **Scaling plans** page, select the scaling plan that you created for this tutorial and choose **Delete**.

1. When prompted for confirmation, choose **Delete**.

After you delete your scaling plan, your resources do not revert to their original capacity. For example, if your Auto Scaling group is scaled to 10 instances when you delete the scaling plan, your group is still scaled to 10 instances after the scaling plan is deleted. You can update the capacity of specific resources by accessing the console for each individual service.

## Delete your Auto Scaling group
<a name="gs-delete-asg"></a>

To prevent your account from accruing Amazon EC2 charges, you should also delete the Auto Scaling group that you created for this tutorial.

For step-by-step instructions, see [Delete your Auto Scaling group](https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-process-shutdown.html#as-shutdown-lbs-delete-asg-cli) in the *Amazon EC2 Auto Scaling User Guide*.

# Step 6: Next steps
<a name="gs-next-steps"></a>

Now that you have familiarized yourself with scaling plans and some of their features, you may want to try creating your own scaling plan template using CloudFormation. 

A CloudFormation template is a JSON- or YAML-formatted text file that describes the AWS infrastructure needed to run an application or service, along with any interconnections among infrastructure components. With CloudFormation, you deploy and manage an associated collection of resources as a *stack*. CloudFormation is available at no additional charge, and you pay only for the AWS resources needed to run your applications. Resources can consist of any AWS resource you define within the template. For more information, see [How CloudFormation works](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-overview.html) in the *AWS CloudFormation User Guide*.

In the *AWS CloudFormation User Guide*, we provide a simple template to get you started. The sample template is available as an example in the [AWS::AutoScalingPlans::ScalingPlan](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-autoscalingplans-scalingplan.html) section of the CloudFormation template reference documentation. The sample template creates a scaling plan for a single Auto Scaling group and enables predictive scaling and dynamic scaling.
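To give a sense of the template's shape, here is a minimal sketch of the `AWS::AutoScalingPlans::ScalingPlan` resource. It is illustrative only; the tag key, group name (`my-asg`), and capacity values are placeholders, and the sample in the CloudFormation template reference linked above is the authoritative example:

```json
{
  "Resources": {
    "MyScalingPlan": {
      "Type": "AWS::AutoScalingPlans::ScalingPlan",
      "Properties": {
        "ApplicationSource": {
          "TagFilters": [
            { "Key": "environment", "Values": ["production"] }
          ]
        },
        "ScalingInstructions": [
          {
            "ServiceNamespace": "autoscaling",
            "ResourceId": "autoScalingGroup/my-asg",
            "ScalableDimension": "autoscaling:autoScalingGroup:DesiredCapacity",
            "MinCapacity": 1,
            "MaxCapacity": 3,
            "TargetTrackingConfigurations": [
              {
                "PredefinedScalingMetricSpecification": {
                  "PredefinedScalingMetricType": "ASGAverageCPUUtilization"
                },
                "TargetValue": 50.0
              }
            ]
          }
        ]
      }
    }
  }
}
```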

For more information, see [Getting started with CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/GettingStarted.html) in the *AWS CloudFormation User Guide*. 