

# Using Auto Scaling with shards
<a name="AutoScaling-Using-Shards"></a>

With ElastiCache auto scaling, you can use target tracking and scheduled scaling policies with your Valkey or Redis OSS engine. 

The following provides details on target tracking and scheduled policies and how to apply them using the AWS Management Console, the AWS CLI, and the APIs.

**Topics**
+ [Target tracking scaling policies](AutoScaling-Scaling-Policies-Target.md)
+ [Adding a scaling policy](AutoScaling-Scaling-Adding-Policy-Shards.md)
+ [Registering a Scalable Target](AutoScaling-Scaling-Registering-Policy-CLI.md)
+ [Defining a scaling policy](AutoScaling-Scaling-Defining-Policy-API.md)
+ [Disabling scale-in activity](AutoScaling-Scaling-Disabling-Scale-in.md)
+ [Applying a scaling policy](AutoScaling-Scaling-Applying-a-Scaling-Policy.md)
+ [Editing a scaling policy](AutoScaling-Scaling-Editing-a-Scaling-Policy.md)
+ [Deleting a scaling policy](AutoScaling-Scaling-Deleting-a-Scaling-Policy.md)
+ [Use CloudFormation for Auto Scaling policies](AutoScaling-with-Cloudformation-Shards.md)
+ [Scheduled scaling](AutoScaling-with-Scheduled-Scaling-Shards.md)

# Target tracking scaling policies
<a name="AutoScaling-Scaling-Policies-Target"></a>

With target tracking scaling policies, you select a metric and set a target value. ElastiCache for Valkey and Redis OSS Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes shards as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the fluctuations in the metric due to a fluctuating load pattern and minimizes rapid fluctuations in the capacity of the fleet. 

For example, consider a scaling policy that uses the predefined average `ElastiCachePrimaryEngineCPUUtilization` metric with a configured target value. Such a policy can keep CPU utilization at, or close to, the specified target value.

## Predefined metrics
<a name="AutoScaling-Scaling-Criteria-predfined-metrics"></a>

A predefined metric is a structure that refers to a specific name, dimension, and statistic (`average`) of a given CloudWatch metric. Your Auto Scaling policy defines one of the following predefined metrics for your cluster:



| Predefined Metric Name | CloudWatch Metric Name | CloudWatch Metric Dimension | Ineligible Instance Types  | 
| --- | --- | --- | --- | 
| ElastiCachePrimaryEngineCPUUtilization |  `EngineCPUUtilization`  |  ReplicationGroupId, Role = Primary  | None | 
| ElastiCacheDatabaseCapacityUsageCountedForEvictPercentage |  `DatabaseCapacityUsageCountedForEvictPercentage`  |  Valkey or Redis OSS Replication Group Metrics  | None | 
| ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage |  `DatabaseMemoryUsageCountedForEvictPercentage`  |  Valkey or Redis OSS Replication Group Metrics  | R6gd | 

Data-tiered instance types can't use `ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage` because these instance types store data in both memory and SSD. The expected use case for data-tiered instances is 100 percent memory usage, filling up the SSD as needed.

## Auto Scaling criteria for shards
<a name="AutoScaling-Scaling-Criteria"></a>

When the service detects that your predefined metric is equal to or greater than the target setting, it automatically increases your shard capacity. ElastiCache scales out your cluster's shards by a count equal to the larger of two numbers: the percent deviation of the metric from the target, and 20 percent of the current shard count. For scale-in, ElastiCache won't automatically scale in unless the overall metric value is below 75 percent of your defined target. 

For a scale-out example, if you have 50 shards and
+ your target is breached by 30 percent, ElastiCache scales out by 30 percent, which results in 65 shards per cluster. 
+ your target is breached by 10 percent, ElastiCache scales out by the default minimum of 20 percent, which results in 60 shards per cluster. 

For a scale-in example, if you have selected a target value of 60 percent, ElastiCache won't automatically scale in until the metric is less than or equal to 45 percent (25 percent below the 60 percent target).
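The sizing rules above can be sketched in a few lines. This is an illustrative simplification, not the service's exact algorithm; `scale_out_shards` and `should_scale_in` are hypothetical helper names:

```python
import math

def scale_out_shards(current_shards, metric_value, target):
    """Add the larger of the percent deviation from the target and
    20 percent of the current shard count (rounded up)."""
    deviation = (metric_value - target) / target
    growth = max(deviation, 0.20)
    return current_shards + math.ceil(current_shards * growth)

def should_scale_in(metric_value, target):
    """Scale-in triggers only at or below 75 percent of the target."""
    return metric_value <= 0.75 * target

# 50 shards, target 60: a metric of 78 (30 percent over) adds 15 shards,
# while a metric of 66 (10 percent over) still adds the 20 percent minimum.
print(scale_out_shards(50, 78, 60))   # 65
print(scale_out_shards(50, 66, 60))   # 60
print(should_scale_in(45, 60))        # True
```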

## Auto Scaling considerations
<a name="AutoScaling-Scaling-Considerations"></a>

Keep the following considerations in mind:
+ A target tracking scaling policy assumes that it should scale out when the specified metric is above the target value. You can't use a target tracking scaling policy to scale out when the specified metric is below the target value. ElastiCache scales out shards by a minimum of 20 percent of the existing shards in the cluster.
+ A target tracking scaling policy does not perform scaling when the specified metric has insufficient data. It does not perform scale-in because it does not interpret insufficient data as low utilization. 
+ You may see gaps between the target value and the actual metric data points. This is because ElastiCache Auto Scaling always acts conservatively by rounding up or down when it determines how much capacity to add or remove. This prevents it from adding insufficient capacity or removing too much capacity. 
+ To ensure application availability, the service scales out proportionally to the metric as fast as it can, but scales in more conservatively. 
+ You can have multiple target tracking scaling policies for an ElastiCache for Valkey and Redis OSS cluster, provided that each of them uses a different metric. The intention of ElastiCache Auto Scaling is to always prioritize availability, so its behavior differs depending on whether the target tracking policies are ready for scale out or scale in. It will scale out the service if any of the target tracking policies are ready for scale out, but will scale in only if all of the target tracking policies (with the scale-in portion enabled) are ready to scale in. 
+ Do not edit or delete the CloudWatch alarms that ElastiCache Auto Scaling manages for a target tracking scaling policy. ElastiCache Auto Scaling deletes the alarms automatically when you delete the scaling policy. 
+ ElastiCache Auto Scaling doesn't prevent you from manually modifying cluster shards. These manual adjustments don't affect any existing CloudWatch alarms that are attached to the scaling policy but can impact metrics that may trigger these CloudWatch alarms. 
+ The CloudWatch alarms managed by Auto Scaling are defined over the average of the metric across all the shards in the cluster. As a result, hot shards can cause either of the following scenarios:
  + scaling when not required, because load on a few hot shards triggers a CloudWatch alarm
  + not scaling when required, because the average aggregated across all shards keeps the alarm from breaching. 
+ The ElastiCache default limits on nodes per cluster still apply. So if you opt for Auto Scaling and expect the maximum number of nodes to exceed the default limit, request a limit increase at [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 
+ Ensure that you have enough ENIs (Elastic Network Interfaces) available in your VPC, which are required during scale-out. For more information, see [Elastic network interfaces](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ElasticNetworkInterfaces.html).
+ If there is not enough capacity available from EC2, ElastiCache Auto Scaling doesn't scale, and scaling is delayed until capacity becomes available.
+ During scale-in, ElastiCache for Redis OSS Auto Scaling won't remove shards that have slots with an item larger than 256 MB after serialization.
+ During scale-in, it won't remove shards if the resulting shard configuration would have insufficient memory.
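The hot-shard caveat is easy to see numerically (the values here are illustrative only). With a 40 percent target, one shard at 90 percent CPU can be masked by four idle shards because the managed alarms evaluate the cluster-wide average:

```python
def cluster_average(per_shard_values):
    """Average of a metric across all shards, as the managed alarms see it."""
    return sum(per_shard_values) / len(per_shard_values)

# One hot shard (90 percent) plus four idle shards (20 percent each):
# the average is 34 percent, under a 40 percent target, so no scale-out
# occurs even though one shard is overloaded.
print(cluster_average([90, 20, 20, 20, 20]))  # 34.0
```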

# Adding a scaling policy
<a name="AutoScaling-Scaling-Adding-Policy-Shards"></a>

You can add a scaling policy using the AWS Management Console. 

**To add an Auto Scaling policy to an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy to (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Choose **Add dynamic scaling**. 

1. For **Policy name**, enter a policy name. 

1. For **Scalable Dimension**, choose **shards**. 

1. For the target metric, choose one of the following:
   + **Primary CPU Utilization** to create a policy based on the average CPU utilization. 
   + **Memory** to create a policy based on the average database memory. 
   + **Capacity** to create a policy based on average database capacity usage. The Capacity metric includes memory and SSD utilization for data-tiered instances, and memory utilization for all other instance types.

1. For the target value, choose a value greater than or equal to 35 and less than or equal to 70. Auto Scaling maintains this value for the selected target metric across your ElastiCache shards: 
   + **Primary CPU Utilization**: maintains the target value for the `EngineCPUUtilization` metric on primary nodes. 
   + **Memory**: maintains the target value for the `DatabaseMemoryUsageCountedForEvictPercentage` metric. 
   + **Capacity**: maintains the target value for the `DatabaseCapacityUsageCountedForEvictPercentage` metric.

   Cluster shards are added or removed to keep the metric close to the specified value. 

1. (Optional) Scale-in or scale-out cooldown periods are not supported from the console. Use the AWS CLI to modify the cooldown values. 

1. For **Minimum capacity**, type the minimum number of shards that the ElastiCache Auto Scaling policy is required to maintain. 

1. For **Maximum capacity**, type the maximum number of shards that the ElastiCache Auto Scaling policy is required to maintain. This value must be less than or equal to 250.

1. Choose **Create**.

# Registering a Scalable Target
<a name="AutoScaling-Scaling-Registering-Policy-CLI"></a>

Before you can use Auto Scaling with an ElastiCache for Valkey and Redis OSS cluster, you register your cluster with ElastiCache auto scaling. You do so to define the scaling dimension and limits to be applied to that cluster. ElastiCache auto scaling dynamically scales the cluster along the `elasticache:replication-group:NodeGroups` scalable dimension, which represents the number of cluster shards. 

 **Using the AWS CLI** 

To register your ElastiCache for Valkey and Redis OSS cluster, use the [register-scalable-target](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) command with the following parameters: 
+ `--service-namespace` – Set this value to `elasticache`
+ `--resource-id` – The resource identifier for the cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `--scalable-dimension` – Set this value to `elasticache:replication-group:NodeGroups`. 
+ `--max-capacity ` – The maximum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax). 
+ `--min-capacity ` – The minimum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax). 

**Example**  
 In the following example, you register an ElastiCache cluster named `myscalablecluster`. The registration indicates that the cluster should be dynamically scaled to have from one to ten shards.   
For Linux, macOS, or Unix:  

```
aws application-autoscaling register-scalable-target \
    --service-namespace elasticache \
    --resource-id replication-group/myscalablecluster \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --min-capacity 1 \
    --max-capacity 10
```
For Windows:  

```
aws application-autoscaling register-scalable-target ^
    --service-namespace elasticache ^
    --resource-id replication-group/myscalablecluster ^
    --scalable-dimension elasticache:replication-group:NodeGroups ^
    --min-capacity 1 ^
    --max-capacity 10
```

**Using the API**

To register your ElastiCache cluster, use the [RegisterScalableTarget](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) operation with the following parameters: 
+ `ServiceNamespace` – Set this value to `elasticache`. 
+ `ResourceId` – The resource identifier for the ElastiCache cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `ScalableDimension` – Set this value to `elasticache:replication-group:NodeGroups`. 
+ `MinCapacity` – The minimum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).
+ `MaxCapacity` – The maximum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).

**Example**  
In the following example, you register an ElastiCache cluster named `myscalablecluster` with the Application Auto Scaling API. This registration indicates that the cluster should be dynamically scaled to have from one to five shards.   

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.RegisterScalableTarget
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "MinCapacity": 1,
    "MaxCapacity": 5
}
```
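The registration request can also be assembled programmatically. This sketch builds the same parameter set as the examples above; `build_register_request` is a hypothetical helper, and the resulting dictionary is what you would pass to, for example, boto3's `application-autoscaling` client:

```python
def build_register_request(cluster_name, min_shards, max_shards):
    """Build RegisterScalableTarget parameters for a cluster's shard count."""
    if not 1 <= min_shards <= max_shards:
        raise ValueError("require 1 <= min_shards <= max_shards")
    return {
        "ServiceNamespace": "elasticache",
        "ResourceId": f"replication-group/{cluster_name}",
        "ScalableDimension": "elasticache:replication-group:NodeGroups",
        "MinCapacity": min_shards,
        "MaxCapacity": max_shards,
    }

params = build_register_request("myscalablecluster", 1, 5)
# e.g. boto3.client("application-autoscaling").register_scalable_target(**params)
```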

# Defining a scaling policy
<a name="AutoScaling-Scaling-Defining-Policy-API"></a>

A target-tracking scaling policy configuration is represented by a JSON block in which the metrics and target values are defined. You can save a scaling policy configuration as a JSON block in a text file. You use that text file when invoking the AWS CLI or the Application Auto Scaling API. For more information about policy configuration syntax, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 

The following options are available for defining a target-tracking scaling policy configuration: 

**Topics**
+ [Using a predefined metric](#AutoScaling-Scaling-Predefined-Metric)
+ [Using a custom metric](#AutoScaling-Scaling-Custom-Metric)
+ [Using cooldown periods](#AutoScaling-Scaling-Cooldown-periods)

## Using a predefined metric
<a name="AutoScaling-Scaling-Predefined-Metric"></a>

By using predefined metrics, you can quickly define a target-tracking scaling policy for an ElastiCache for Valkey and Redis OSS cluster that works with target tracking in ElastiCache Auto Scaling. 

Currently, ElastiCache supports the following predefined metrics in NodeGroup Auto Scaling: 
+ **ElastiCachePrimaryEngineCPUUtilization** – The average value of the `EngineCPUUtilization` metric in CloudWatch across all primary nodes in the cluster.
+ **ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage** – The average value of the `DatabaseMemoryUsageCountedForEvictPercentage` metric in CloudWatch across all primary nodes in the cluster.
+ **ElastiCacheDatabaseCapacityUsageCountedForEvictPercentage** – The average value of the `DatabaseCapacityUsageCountedForEvictPercentage` metric in CloudWatch across all primary nodes in the cluster.

For more information about the `EngineCPUUtilization`, `DatabaseMemoryUsageCountedForEvictPercentage`, and `DatabaseCapacityUsageCountedForEvictPercentage` metrics, see [Monitoring use with CloudWatch Metrics](CacheMetrics.md). To use a predefined metric in your scaling policy, you create a target tracking configuration for your scaling policy. This configuration must include a `PredefinedMetricSpecification` for the predefined metric and a `TargetValue` for the target value of that metric. 

**Example**  
The following example describes a typical policy configuration for target-tracking scaling for an ElastiCache for Valkey and Redis OSS cluster. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric is used to adjust the cluster based on an average CPU utilization of 40 percent across all primary nodes in the cluster.   

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    }
}
```
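Because the CLI reads this configuration from a text file (for example with `--target-tracking-scaling-policy-configuration file://config.json`), you can generate it with a short script. This sketch writes the same JSON block shown above:

```python
import json

config = {
    "TargetValue": 40.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    },
}

# Save the policy configuration for later use with the AWS CLI or API.
with open("config.json", "w") as f:
    json.dump(config, f, indent=4)
```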

## Using a custom metric
<a name="AutoScaling-Scaling-Custom-Metric"></a>

 By using custom metrics, you can define a target-tracking scaling policy that meets your custom requirements. You can define a custom metric based on any ElastiCache metric that changes in proportion to scaling. Not all ElastiCache metrics work for target tracking. The metric must be a valid utilization metric that describes how busy an instance is. The value of the metric must increase or decrease in proportion to the number of shards in the cluster. This proportional increase or decrease is necessary for using the metric data to proportionally scale the number of shards out or in. 

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, a custom metric adjusts an ElastiCache for Redis OSS cluster based on an average CPU utilization of 50 percent across all shards in a cluster named `my-db-cluster`. 

```
{
    "TargetValue": 50,
    "CustomizedMetricSpecification":
    {
        "MetricName": "EngineCPUUtilization",
        "Namespace": "AWS/ElastiCache",
        "Dimensions": [
            {
                "Name": "ReplicationGroup","Value": "my-db-cluster"
            },
            {
                "Name": "Role","Value": "PRIMARY"
            }
        ],
        "Statistic": "Average",
        "Unit": "Percent"
    }
}
```

## Using cooldown periods
<a name="AutoScaling-Scaling-Cooldown-periods"></a>

You can specify a value, in seconds, for `ScaleOutCooldown` to add a cooldown period for scaling out your cluster. Similarly, you can add a value, in seconds, for `ScaleInCooldown` to add a cooldown period for scaling in your cluster. For more information, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 

 The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric is used to adjust an ElastiCache for Redis OSS cluster based on an average CPU utilization of 40 percent across all primary nodes in that cluster. The configuration provides a scale-in cooldown period of 10 minutes and a scale-out cooldown period of 5 minutes. 

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    },
    "ScaleInCooldown": 600,
    "ScaleOutCooldown": 300
}
```

# Disabling scale-in activity
<a name="AutoScaling-Scaling-Disabling-Scale-in"></a>

You can prevent the target-tracking scaling policy configuration from scaling in your cluster by disabling scale-in activity. Disabling scale-in activity prevents the scaling policy from deleting shards, while still allowing the scaling policy to create them as needed. 

You can specify a Boolean value for `DisableScaleIn` to enable or disable scale-in activity for your cluster. For more information, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 

The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric adjusts an ElastiCache for Valkey and Redis OSS cluster based on an average CPU utilization of 40 percent across all primary nodes in that cluster. The configuration disables scale-in activity for the scaling policy. 

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    },
    "DisableScaleIn": true
}
```

# Applying a scaling policy
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy"></a>

After registering your cluster with ElastiCache for Valkey and Redis OSS auto scaling and defining a scaling policy, you apply the scaling policy to the registered cluster. To apply a scaling policy to an ElastiCache for Valkey or Redis OSS cluster, you can use the AWS CLI or the Application Auto Scaling API. 

## Applying a scaling policy using the AWS CLI
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy-CLI"></a>

To apply a scaling policy to your ElastiCache for Valkey and Redis OSS cluster, use the [put-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) command with the following parameters: 
+ **--policy-name** – The name of the scaling policy. 
+ **--policy-type** – Set this value to `TargetTrackingScaling`. 
+ **--resource-id** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **--service-namespace** – Set this value to `elasticache`. 
+ **--scalable-dimension** – Set this value to `elasticache:replication-group:NodeGroups`. 
+ **--target-tracking-scaling-policy-configuration** – The target-tracking scaling policy configuration to use for the cluster. 

In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an ElastiCache for Valkey and Redis OSS cluster named `myscalablecluster` with ElastiCache auto scaling. To do so, you use a policy configuration saved in a file named `config.json`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --target-tracking-scaling-policy-configuration file://config.json
```

For Windows:

```
aws application-autoscaling put-scaling-policy ^
    --policy-name myscalablepolicy ^
    --policy-type TargetTrackingScaling ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:NodeGroups ^
    --target-tracking-scaling-policy-configuration file://config.json
```

## Applying a scaling policy using the API
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy-API"></a>

To apply a scaling policy to your ElastiCache for Valkey and Redis OSS cluster, use the [PutScalingPolicy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) operation with the following parameters: 
+ `PolicyName` – The name of the scaling policy. 
+ `PolicyType` – Set this value to `TargetTrackingScaling`. 
+ `ResourceId` – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `ServiceNamespace` – Set this value to `elasticache`. 
+ `ScalableDimension` – Set this value to `elasticache:replication-group:NodeGroups`. 
+ `TargetTrackingScalingPolicyConfiguration` – The target-tracking scaling policy configuration to use for the cluster. 

In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an ElastiCache cluster named `myscalablecluster` with ElastiCache auto scaling. You use a policy configuration based on the `ElastiCachePrimaryEngineCPUUtilization` predefined metric. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.PutScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification":
        {
            "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
        }
    }
}
```

# Editing a scaling policy
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy"></a>

You can edit a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API. 

## Editing a scaling policy using the AWS Management Console
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CON"></a>

**To edit an Auto Scaling policy for an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose the appropriate engine. 

1. Choose the cluster whose Auto Scaling policy you want to edit (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the button to the left of the Auto Scaling policy you wish to change, and then choose **Modify**. 

1. Make the requisite changes to the policy.

1. Choose **Modify**.

## Editing a scaling policy using the AWS CLI and API
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CLI"></a>

You can use the AWS CLI or the Application Auto Scaling API to edit a scaling policy in the same way that you apply a scaling policy: 
+ When using the AWS CLI, specify the name of the policy you want to edit in the `--policy-name` parameter. Specify new values for the parameters you want to change. 
+ When using the Application Auto Scaling API, specify the name of the policy you want to edit in the `PolicyName` parameter. Specify new values for the parameters you want to change. 

For more information, see [Applying a scaling policy](AutoScaling-Scaling-Applying-a-Scaling-Policy.md).

# Deleting a scaling policy
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy"></a>

You can delete a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API. 

## Deleting a scaling policy using the AWS Management Console
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CON"></a>

**To delete an Auto Scaling policy for an ElastiCache for Valkey or Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster whose Auto Scaling policy you want to delete (choose the cluster name, not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the Auto Scaling policy, and then choose **Delete**. 

## Deleting a scaling policy using the AWS CLI
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy-CLI"></a>

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [delete-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/delete-scaling-policy.html) AWS CLI command with the following parameters: 
+ **--policy-name** – The name of the scaling policy. 
+ **--resource-id** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **--service-namespace** – Set this value to `elasticache`. 
+ **--scalable-dimension** – Set this value to `elasticache:replication-group:NodeGroups`. 

In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from a cluster named `myscalablecluster`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling delete-scaling-policy \
    --policy-name myscalablepolicy \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:NodeGroups
```

For Windows:

```
aws application-autoscaling delete-scaling-policy ^
    --policy-name myscalablepolicy ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:NodeGroups
```

## Deleting a scaling policy using the API
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy-API"></a>

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [DeleteScalingPolicy](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/delete-scaling-policy.html) operation with the following parameters: 
+ `PolicyName` – The name of the scaling policy. 
+ `ResourceId` – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `ServiceNamespace` – Set this value to `elasticache`. 
+ `ScalableDimension` – Set this value to `elasticache:replication-group:NodeGroups`. 

In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from a cluster named `myscalablecluster`. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.DeleteScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups"
}
```

# Use CloudFormation for Auto Scaling policies
<a name="AutoScaling-with-Cloudformation-Shards"></a>

This snippet shows how to create a target tracking policy and apply it to an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource using the [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) intrinsic function to construct the `ResourceId` property from the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
ScalingTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 3
    MinCapacity: 1
    ResourceId: !Sub replication-group/${logicalName}
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    ServiceNamespace: elasticache
    RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"

ScalingPolicy:
  Type: "AWS::ApplicationAutoScaling::ScalingPolicy"
  Properties:
    ScalingTargetId: !Ref ScalingTarget
    ServiceNamespace: elasticache
    PolicyName: testpolicy
    PolicyType: TargetTrackingScaling
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ElastiCachePrimaryEngineCPUUtilization
      TargetValue: 40
```

# Scheduled scaling
<a name="AutoScaling-with-Scheduled-Scaling-Shards"></a>

Scaling based on a schedule enables you to scale your application in response to predictable changes in demand. To use scheduled scaling, you create scheduled actions, which tell ElastiCache for Valkey and Redis OSS to perform scaling activities at specific times. When you create a scheduled action, you specify an existing cluster, when the scaling activity should occur, minimum capacity, and maximum capacity. You can create scheduled actions that scale one time only or that scale on a recurring schedule. 

 You can only create a scheduled action for clusters that already exist. You can't create a scheduled action at the same time that you create a cluster.

For more information about the commands used to create, manage, and delete scheduled actions, see [Commonly used commands for scheduled action creation, management, and deletion](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html#scheduled-scaling-commonly-used-commands). 

**To create on a recurring schedule:**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy for. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, choose **Add Scaling policy**. In the dialog box that appears, choose **Scheduled scaling**.

1. For **Policy Name**, enter the policy name. 

1. For **Scalable Dimension**, choose **Shards**. 

1. For **Target Shards**, choose the desired number of shards. 

1. For **Recurrence**, choose **Recurring**. 

1. For **Frequency**, choose how often the scaling action should recur. 

1. For **Start Date** and **Start time**, choose the date and time when the policy takes effect. 

1. Choose **Add Policy**. 
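The recurring schedule configured in the steps above corresponds to an Application Auto Scaling `put-scheduled-action` call with a cron expression. The following is a hedged sketch assuming a boto3-style client; the helper function name and capacity values are illustrative:

```python
def put_recurring_shard_action(client, action_name, replication_group,
                               cron_expression, min_shards, max_shards):
    """Register a recurring scheduled action that adjusts shard capacity.

    `client` is assumed to be an Application Auto Scaling client,
    e.g. boto3.client("application-autoscaling").
    """
    return client.put_scheduled_action(
        ServiceNamespace="elasticache",
        ScheduledActionName=action_name,
        ResourceId=f"replication-group/{replication_group}",
        ScalableDimension="elasticache:replication-group:NodeGroups",
        # Recurring schedules use a cron expression, e.g. "0 18 * * ? *".
        Schedule=f"cron({cron_expression})",
        # Shard count is clamped between these bounds when the action fires.
        ScalableTargetAction={"MinCapacity": min_shards, "MaxCapacity": max_shards},
    )
```

For instance, `put_recurring_shard_action(client, "scale-evenings", "myscalablecluster", "0 18 * * ? *", 2, 5)` would scale the cluster to between 2 and 5 shards every day at 18:00 UTC.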

**To create a one-time scheduled action:**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy for. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, choose **Add Scaling policy**. In the dialog box that appears, choose **Scheduled scaling**.

1. For **Policy Name**, enter the policy name. 

1. For **Scalable Dimension**, choose **Shards**. 

1. For **Target Shards**, choose the desired number of shards. 

1. For **Recurrence**, choose **One Time**. 

1. For **Start Date** and **Start time**, choose the date and time when the policy takes effect. 

1. For **End Date**, choose the date until which the policy remains in effect. 

1. Choose **Add Policy**. 
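A one-time scheduled action can be registered the same way, using the `at()` schedule format instead of a cron expression. The following is a sketch under the same assumptions as before (boto3-style client; the helper name is illustrative):

```python
from datetime import datetime


def put_one_time_shard_action(client, action_name, replication_group,
                              run_at, min_shards, max_shards):
    """Register a one-time scheduled action for a specific timestamp.

    `run_at` is a datetime; one-time schedules use the
    at(yyyy-mm-ddThh:mm:ss) format.
    """
    return client.put_scheduled_action(
        ServiceNamespace="elasticache",
        ScheduledActionName=action_name,
        ResourceId=f"replication-group/{replication_group}",
        ScalableDimension="elasticache:replication-group:NodeGroups",
        Schedule=f"at({run_at.strftime('%Y-%m-%dT%H:%M:%S')})",
        ScalableTargetAction={"MinCapacity": min_shards, "MaxCapacity": max_shards},
    )
```

Unlike a recurring action, an `at()` schedule fires exactly once at the given time.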

**To delete a scheduled action**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to delete a policy from. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, choose the auto scaling policy, and then choose **Delete** from the **Actions** dialog.

**To manage scheduled scaling using the AWS CLI**

Use the following application-autoscaling AWS CLI commands:
+ [put-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/put-scheduled-action.html) 
+ [describe-scheduled-actions](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/describe-scheduled-actions.html) 
+ [delete-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/autoscaling/delete-scheduled-action.html) 

## Use CloudFormation to create a scheduled action
<a name="AutoScaling-with-Cloudformation-Declare-Scheduled-Action"></a>

This snippet shows how to declare a scheduled action on an [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource that targets an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) intrinsic function to construct the `ResourceId` property from the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
ScalingTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 3
    MinCapacity: 1
    ResourceId: !Sub replication-group/${logicalName}
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    ServiceNamespace: elasticache
    RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"
    ScheduledActions:
      - EndTime: '2020-12-31T12:00:00.000Z'
        ScalableTargetAction:
          MaxCapacity: '5'
          MinCapacity: '2'
        ScheduledActionName: First
        Schedule: 'cron(0 18 * * ? *)'
```