

# Working with ElastiCache
<a name="WorkingWithElastiCache"></a>

In this section, you can find details about how to manage the various components of your ElastiCache implementation.

**Topics**
+ [Snapshot and restore](backups.md)
+ [Engine versions and upgrading in ElastiCache](engine-versions.md)
+ [ElastiCache best practices and caching strategies](BestPractices.md)
+ [Managing your node-based cluster in ElastiCache](manage-self-designed-cluster.md)
+ [Connecting an EC2 instance and an ElastiCache cache automatically](compute-connection.md)
+ [Scaling ElastiCache](Scaling.md)
+ [Getting started with Bloom filters](BloomFilters.md)
+ [Getting started with Watch in Serverless](ServerlessWatch.md)
+ [Getting started with Vector Search](vector-search.md)
+ [Using Amazon ElastiCache for Valkey for semantic caching](semantic-caching.md)
+ [Using Amazon ElastiCache for Valkey for agentic memory](agentic-memory.md)
+ [Getting started with JSON for Valkey and Redis OSS](json-gs.md)
+ [Tagging your ElastiCache resources](Tagging-Resources.md)
+ [Using the Amazon ElastiCache Well-Architected Lens](WellArchitechtedLens.md)
+ [Common troubleshooting steps and best practices with ElastiCache](wwe-troubleshooting.md)

# Snapshot and restore
<a name="backups"></a>

Amazon ElastiCache caches running Valkey, Redis OSS, or Serverless Memcached can back up their data by creating a snapshot. You can use the backup to restore a cache or seed data to a new cache. The backup consists of the cache’s metadata, along with all of the data in the cache. All backups are written to Amazon Simple Storage Service (Amazon S3), which provides durable storage. At any time, you can restore your data by creating a new Valkey, Redis OSS, or Serverless Memcached cache and populating it with data from a backup. With ElastiCache, you can manage backups using the AWS Management Console, the AWS Command Line Interface (AWS CLI), and the ElastiCache API.

If you plan to delete a cache and it's important to preserve the data, you can take an extra precaution. To do this, create a manual backup first, verify that its status is *available*, and then delete the cache. Doing this makes sure that if the backup fails, you still have the cache data available and can retry making the backup.
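
These steps can be scripted with the AWS CLI. The following is a minimal sketch for a node-based cluster; the cluster name `my-cluster`, the backup name, and the polling interval are illustrative:

```
# Take a manual backup of the cluster.
aws elasticache create-snapshot \
    --cache-cluster-id my-cluster \
    --snapshot-name pre-delete-bkup

# Poll until the backup status changes from "creating" to "available".
until [ "$(aws elasticache describe-snapshots \
    --snapshot-name pre-delete-bkup \
    --query 'Snapshots[0].SnapshotStatus' --output text)" = "available" ]; do
  sleep 30
done

# Only now is it safe to delete the cluster.
aws elasticache delete-cache-cluster --cache-cluster-id my-cluster
```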

**Topics**
+ [Backup constraints](#backups-constraints)
+ [Performance impact of backups of node-based clusters](#backups-performance)
+ [Scheduling automatic backups](backups-automatic.md)
+ [Taking manual backups](backups-manual.md)
+ [Creating a final backup](backups-final.md)
+ [Describing backups](backups-describing.md)
+ [Copying backups](backups-copying.md)
+ [Exporting a backup](backups-exporting.md)
+ [Restoring from a backup into a new cache](backups-restoring.md)
+ [Deleting a backup](backups-deleting.md)
+ [Tagging backups](backups-tagging.md)
+ [Tutorial: Seeding a new node-based cluster with an externally created backup](backups-seeding-redis.md)

## Backup constraints
<a name="backups-constraints"></a>

Consider the following constraints when planning or making backups:
+ Backup and restore are supported only for caches running Valkey, Redis OSS, or Serverless Memcached.
+ For Valkey or Redis OSS (cluster mode disabled) clusters, backup and restore aren't supported on `cache.t1.micro` nodes. All other cache node types are supported.
+ For Valkey or Redis OSS (cluster mode enabled) clusters, backup and restore are supported for all node types.
+ During any contiguous 24-hour period, you can create no more than 24 manual backups per serverless cache. For Valkey and Redis OSS node-based clusters, you can create no more than 20 manual backups per node in the cluster. 
+ Valkey or Redis OSS (cluster mode enabled) only supports taking backups on the cluster level (for the API or CLI, the replication group level). Valkey or Redis OSS (cluster mode enabled) doesn't support taking backups at the shard level (for the API or CLI, the node group level).
+ During the backup process, you can't run any other API or CLI operations on a serverless cache. You can run API or CLI operations on a node-based cluster during backup.
+ If you are using Valkey or Redis OSS caches with data tiering, you cannot export a backup to Amazon S3.
+ You can restore a backup of a cluster using the r6gd node type only to clusters using the r6gd node type.

## Performance impact of backups of node-based clusters
<a name="backups-performance"></a>

Backups on serverless caches are transparent to the application, with no performance impact. However, when creating backups for node-based clusters, there can be some performance impact depending on the available reserved memory. Backups of node-based clusters are not available with ElastiCache for Memcached, but are available for Valkey and Redis OSS node-based clusters.

The following are guidelines for improving backup performance for node-based clusters.
+ Set the `reserved-memory-percent` parameter – To mitigate excessive paging, we recommend that you set the *reserved-memory-percent* parameter. This parameter prevents Valkey and Redis OSS from consuming all of the node's available memory, and can help reduce the amount of paging. You might also see performance improvements by simply using a larger node. For more information about the *reserved-memory* and *reserved-memory-percent* parameters, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).

   
+ Create backups from a read replica – If you are running Valkey or Redis OSS in a node group with more than one node, you can take a backup from the primary node or one of the read replicas. Because of the system resources required during BGSAVE, we recommend that you create backups from one of the read replicas. While the backup is being created from the replica, the primary node remains unaffected by BGSAVE resource requirements. The primary node can continue serving requests without slowing down.

  To do this, see [Creating a manual backup (Console)](backups-manual.md#backups-manual-CON) and in the **Cluster Name** field in the **Create Backup** window, choose a replica instead of the default primary node.

If you delete a replication group and request a final backup, ElastiCache always takes the backup from the primary node. This ensures that you capture the very latest Valkey or Redis OSS data, before the replication group is deleted.
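
As a sketch, the `reserved-memory-percent` parameter can be set with the AWS CLI through a custom parameter group. The group name `my-valkey-params` and the 25 percent value below are illustrative, and the custom parameter group must already be assigned to your cluster:

```
aws elasticache modify-cache-parameter-group \
    --cache-parameter-group-name my-valkey-params \
    --parameter-name-values \
        "ParameterName=reserved-memory-percent, ParameterValue=25"
```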

# Scheduling automatic backups
<a name="backups-automatic"></a>

You can enable automatic backups for any Valkey or Redis OSS serverless cache or node-based cluster, or for any Memcached serverless cache. When automatic backups are enabled, ElastiCache creates a backup of the cache on a daily basis. There is no impact on the cache and the change is immediate. Automatic backups can help guard against data loss. In the event of a failure, you can create a new cache and restore your data from the most recent backup. The result is a warm-started cache, preloaded with your data and ready for use. For more information, see [Restoring from a backup into a new cache](backups-restoring.md).

When you schedule automatic backups, you should plan the following settings:
+ **Backup start time** – A time of day when ElastiCache begins creating a backup. You can set the backup window for any time when it's most convenient. If you don't specify a backup window, ElastiCache assigns one automatically.

   
+ **Backup retention limit** – The number of days the backup is retained in Amazon S3. For example, if you set the retention limit to 5, then a backup taken today is retained for 5 days. When the retention limit expires, the backup is automatically deleted.

  The maximum backup retention limit is 35 days. If the backup retention limit is set to 0, automatic backups are disabled for the cache.


You can enable or disable automatic backups when either creating a new cache or updating an existing cache, by using the ElastiCache console, the AWS CLI, or the ElastiCache API. For Valkey and Redis OSS, this is done by checking the **Enable Automatic Backups** box in the **Advanced Valkey Settings** or **Advanced Redis OSS Settings** section. For Memcached, this is done by checking the **Enable Automatic Backups** box in the **Advanced Memcached Settings** section.
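
On the AWS CLI, these settings correspond to the snapshot parameters of the cache. The following is a minimal sketch for a node-based cluster; the replication group name `my-repl-group`, the window, and the retention value are illustrative:

```
# Retain daily automatic backups for 5 days and start the backup
# window at 04:00 UTC. A retention limit of 0 disables automatic
# backups.
aws elasticache modify-replication-group \
    --replication-group-id my-repl-group \
    --snapshot-retention-limit 5 \
    --snapshot-window 04:00-05:00 \
    --apply-immediately
```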

# Taking manual backups
<a name="backups-manual"></a>

In addition to automatic backups, you can create a *manual* backup at any time. Unlike automatic backups, which are automatically deleted after a specified retention period, manual backups do not have a retention period after which they are automatically deleted. Even if you delete the cache, any manual backups from that cache are retained. If you no longer want to keep a manual backup, you must explicitly delete it yourself.

In addition to directly creating a manual backup, you can create a manual backup in one of the following ways:
+ [Copying backups](backups-copying.md). It does not matter whether the source backup was created automatically or manually.
+ [Creating a final backup](backups-final.md). Create a backup immediately before deleting a cluster or node.

You can create a manual backup of a cache using the AWS Management Console, the AWS CLI, or the ElastiCache API.

You can create manual backups from replicas in both cluster mode enabled and cluster mode disabled clusters.



## Creating a manual backup (Console)
<a name="backups-manual-CON"></a>

**To create a backup of a cache (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey caches**, **Redis OSS caches**, or **Memcached caches**, depending on your preference.

1. Choose the box to the left of the name of the cache you want to back up.

1. Choose **Backup**.

1. In the **Create Backup** dialog, enter a name for your backup in the **Backup Name** box. We recommend that the name indicate which cluster was backed up, along with the date and time the backup was made.

   Backup naming constraints are as follows:
   + Must contain 1–40 alphanumeric characters or hyphens.
   + Must begin with a letter.
   + Can't contain two consecutive hyphens.
   + Can't end with a hyphen.

1. Choose **Create Backup**.

   The status of the cluster changes to *snapshotting*.
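
The naming constraints above can be checked in a shell before you create the backup. The following bash sketch is illustrative, not an official validator:

```
# Return success if the name satisfies the documented constraints:
# 1-40 alphanumeric characters or hyphens, begins with a letter,
# no two consecutive hyphens, and does not end with a hyphen.
valid_backup_name() {
  local name="$1"
  (( ${#name} >= 1 && ${#name} <= 40 )) || return 1
  [[ $name =~ ^[A-Za-z][A-Za-z0-9-]*$ ]] || return 1
  [[ $name != *--* && $name != *- ]] || return 1
}

valid_backup_name "bkup-20150515" && echo "valid"
```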

## Creating a manual backup (AWS CLI)
<a name="backups-manual-CLI"></a>

**Manual backup of a serverless cache with the AWS CLI**

To create a manual backup of a cache using the AWS CLI, use the `create-serverless-snapshot` AWS CLI operation with the following parameters:
+ `--serverless-cache-name` – The name of the serverless cache that you are backing up.
+ `--serverless-cache-snapshot-name` – Name of the snapshot to be created.

For Linux, macOS, or Unix:

```
aws elasticache create-serverless-snapshot \
    --serverless-cache-name CacheName \
    --serverless-cache-snapshot-name bkup-20231127
```

For Windows:

```
aws elasticache create-serverless-snapshot ^
    --serverless-cache-name CacheName ^
    --serverless-cache-snapshot-name bkup-20231127
```

**Manual backup of a node-based cluster with the AWS CLI**

To create a manual backup of a node-based cluster using the AWS CLI, use the `create-snapshot` AWS CLI operation with the following parameters:
+ `--cache-cluster-id`
  + If the cluster you're backing up has no replica nodes, `--cache-cluster-id` is the name of the cluster you are backing up, for example *mycluster*.
  + If the cluster you're backing up has one or more replica nodes, `--cache-cluster-id` is the name of the node in the cluster that you want to use for the backup. For example, the name might be *mycluster-002*.

  Use this parameter only when backing up a Valkey or Redis OSS (cluster mode disabled) cluster.

   
+ `--replication-group-id` – Name of the Valkey or Redis OSS (cluster mode enabled) cluster (CLI/API: a replication group) to use as the source for the backup. Use this parameter when backing up a Valkey or Redis OSS (cluster mode enabled) cluster.

   
+ `--snapshot-name` – Name of the snapshot to be created.

  Backup naming constraints are as follows:
  + Must contain 1–40 alphanumeric characters or hyphens.
  + Must begin with a letter.
  + Can't contain two consecutive hyphens.
  + Can't end with a hyphen.

### Example 1: Backing up a Valkey or Redis OSS (Cluster Mode Disabled) cluster that has no replica nodes
<a name="backups-manual-CLI-example1"></a>

The following AWS CLI operation creates the backup `bkup-20150515` from the Valkey or Redis OSS (cluster mode disabled) cluster `myNonClusteredRedis` that has no read replicas.

For Linux, macOS, or Unix:

```
aws elasticache create-snapshot \
    --cache-cluster-id myNonClusteredRedis \
    --snapshot-name bkup-20150515
```

For Windows:

```
aws elasticache create-snapshot ^
    --cache-cluster-id myNonClusteredRedis ^
    --snapshot-name bkup-20150515
```

### Example 2: Backing up a Valkey or Redis OSS (Cluster Mode Disabled) cluster with replica nodes
<a name="backups-manual-CLI-example2"></a>

The following AWS CLI operation creates the backup `bkup-20150515` from the Valkey or Redis OSS (cluster mode disabled) cluster `myNonClusteredRedis`. This cluster has one or more read replicas.

For Linux, macOS, or Unix:

```
aws elasticache create-snapshot \
    --cache-cluster-id myNonClusteredRedis-001 \
    --snapshot-name bkup-20150515
```

For Windows:

```
aws elasticache create-snapshot ^
    --cache-cluster-id myNonClusteredRedis-001 ^
    --snapshot-name bkup-20150515
```

**Example Output: Backing Up a Valkey or Redis OSS (Cluster Mode Disabled) Cluster with Replica Nodes**

Output from the operation looks something like the following.

```
{
    "Snapshot": {
        "Engine": "redis", 
        "CacheParameterGroupName": "default.redis6.x", 
        "VpcId": "vpc-91280df6", 
        "CacheClusterId": "myNonClusteredRedis-001", 
        "SnapshotRetentionLimit": 0, 
        "NumCacheNodes": 1, 
        "SnapshotName": "bkup-20150515", 
        "CacheClusterCreateTime": "2017-01-12T18:59:48.048Z", 
        "AutoMinorVersionUpgrade": true, 
        "PreferredAvailabilityZone": "us-east-1c", 
        "SnapshotStatus": "creating", 
        "SnapshotSource": "manual", 
        "SnapshotWindow": "08:30-09:30", 
        "EngineVersion": "6.0", 
        "NodeSnapshots": [
            {
                "CacheSize": "", 
                "CacheNodeId": "0001", 
                "CacheNodeCreateTime": "2017-01-12T18:59:48.048Z"
            }
        ], 
        "CacheSubnetGroupName": "default", 
        "Port": 6379, 
        "PreferredMaintenanceWindow": "wed:07:30-wed:08:30", 
        "CacheNodeType": "cache.m3.2xlarge",
        "DataTiering": "disabled"
    }
}
```

### Example 3: Backing up a cluster for Valkey or Redis OSS (Cluster Mode Enabled)
<a name="backups-manual-CLI-example3"></a>

The following AWS CLI operation creates the backup `bkup-20150515` from the Valkey or Redis OSS (cluster mode enabled) cluster `myClusteredRedis`. Note the use of `--replication-group-id` instead of `--cache-cluster-id` to identify the source. Also note that ElastiCache takes the backup using the replica node when present, and will default to the primary node if a replica node is unavailable.

For Linux, macOS, or Unix:

```
aws elasticache create-snapshot \
    --replication-group-id myClusteredRedis \
    --snapshot-name bkup-20150515
```

For Windows:

```
aws elasticache create-snapshot ^
    --replication-group-id myClusteredRedis ^
    --snapshot-name bkup-20150515
```

**Example Output: Backing Up a Valkey or Redis OSS (Cluster Mode Enabled) Cluster**

Output from this operation looks something like the following.

```
{
    "Snapshot": {
        "Engine": "redis", 
        "CacheParameterGroupName": "default.redis6.x.cluster.on", 
        "VpcId": "vpc-91280df6", 
        "NodeSnapshots": [
            {
                "CacheSize": "", 
                "NodeGroupId": "0001"
            }, 
            {
                "CacheSize": "", 
                "NodeGroupId": "0002"
            }
        ], 
        "NumNodeGroups": 2, 
        "SnapshotName": "bkup-20150515", 
        "ReplicationGroupId": "myClusteredRedis", 
        "AutoMinorVersionUpgrade": true, 
        "SnapshotRetentionLimit": 1, 
        "AutomaticFailover": "enabled", 
        "SnapshotStatus": "creating", 
        "SnapshotSource": "manual", 
        "SnapshotWindow": "10:00-11:00", 
        "EngineVersion": "6.0", 
        "CacheSubnetGroupName": "default", 
        "ReplicationGroupDescription": "2 shards 2 nodes each", 
        "Port": 6379, 
        "PreferredMaintenanceWindow": "sat:03:30-sat:04:30", 
        "CacheNodeType": "cache.r3.large",
        "DataTiering": "disabled"
    }
}
```

### Related topics
<a name="backups-manual-CLI-see-also"></a>

For more information, see [create-snapshot](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-snapshot.html) in the *AWS CLI Command Reference*.

## Creating a backup using CloudFormation
<a name="backups-CFN"></a>

You can use AWS CloudFormation to create a backup of your ElastiCache Valkey or Redis OSS cache by using the `AWS::ElastiCache::ServerlessCache` or `AWS::ElastiCache::ReplicationGroup` resource.

**Using the `AWS::ElastiCache::ServerlessCache` resource**

To create a backup using the `AWS::ElastiCache::ServerlessCache` resource:

```
Resources:
  iotCatalog:
    Type: AWS::ElastiCache::ServerlessCache
    Properties:
      ...
      ServerlessCacheName: "your-cache-name"
      Engine: "redis"
      CacheUsageLimits
```

**Using the `AWS::ElastiCache::ReplicationGroup` resource**

To create a backup using the `AWS::ElastiCache::ReplicationGroup` resource:

```
Resources:
  iotCatalog:
    Type: AWS::ElastiCache::ReplicationGroup
    Properties:
      ...
      ReplicationGroupDescription: "Description of your replication group"
      Engine: "redis"
      CacheNodeType
      NumCacheClusters
      AutomaticFailoverEnabled
      AtRestEncryptionEnabled
```

# Creating a final backup
<a name="backups-final"></a>

You can create a final backup using the ElastiCache console, the AWS CLI, or the ElastiCache API.

## Creating a final backup (Console)
<a name="backups-final-CON"></a>

You can create a final backup when you delete a Valkey, Memcached, or Redis OSS serverless cache, or a Valkey or Redis OSS node-based cluster, by using the ElastiCache console.

To create a final backup when deleting a cache, on the delete dialog box choose **Yes** under **Create backup** and give the backup a name.

**Related topics**
+ [Using the AWS Management Console](Clusters.Delete.md#Clusters.Delete.CON)
+ [Deleting a Replication Group (Console)](Replication.DeletingRepGroup.md#Replication.DeletingRepGroup.CON)

## Creating a final backup (AWS CLI)
<a name="backups-final-CLI"></a>

You can create a final backup when deleting a cache using the AWS CLI.

**Topics**
+ [When deleting a Valkey cache, Memcached serverless cache, or Redis OSS cache](#w2aac24b7c29b7b1b7)
+ [When deleting a Valkey or Redis OSS cluster with no read replicas](#w2aac24b7c29b7b1b9)
+ [When deleting a Valkey or Redis OSS cluster with read replicas](#w2aac24b7c29b7b1c11)

### When deleting a Valkey cache, Memcached serverless cache, or Redis OSS cache
<a name="w2aac24b7c29b7b1b7"></a>

To create a final backup, use the `delete-serverless-cache` AWS CLI operation with the following parameters.
+ `--serverless-cache-name` – Name of the cache being deleted.
+ `--final-snapshot-name` – Name of the backup.

The following code creates the final backup `bkup-20231127-final` when deleting the cache `myserverlesscache`.

For Linux, macOS, or Unix:

```
aws elasticache delete-serverless-cache \
        --serverless-cache-name myserverlesscache \
        --final-snapshot-name bkup-20231127-final
```

For Windows:

```
aws elasticache delete-serverless-cache ^
        --serverless-cache-name myserverlesscache ^
        --final-snapshot-name bkup-20231127-final
```

For more information, see [delete-serverless-cache](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-serverless-cache.html) in the *AWS CLI Command Reference*.

### When deleting a Valkey or Redis OSS cluster with no read replicas
<a name="w2aac24b7c29b7b1b9"></a>

To create a final backup for a node-based cluster with no read replicas, use the `delete-cache-cluster` AWS CLI operation with the following parameters.
+ `--cache-cluster-id` – Name of the cluster being deleted.
+ `--final-snapshot-identifier` – Name of the backup.

The following code creates the final backup `bkup-20150515-final` when deleting the cluster `myRedisCluster`.

For Linux, macOS, or Unix:

```
aws elasticache delete-cache-cluster \
        --cache-cluster-id myRedisCluster \
        --final-snapshot-identifier bkup-20150515-final
```

For Windows:

```
aws elasticache delete-cache-cluster ^
        --cache-cluster-id myRedisCluster ^
        --final-snapshot-identifier bkup-20150515-final
```

For more information, see [delete-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-cache-cluster.html) in the *AWS CLI Command Reference*.

### When deleting a Valkey or Redis OSS cluster with read replicas
<a name="w2aac24b7c29b7b1c11"></a>

To create a final backup when deleting a replication group, use the `delete-replication-group` AWS CLI operation, with the following parameters:
+ `--replication-group-id` – Name of the replication group being deleted.
+ `--final-snapshot-identifier` – Name of the final backup.

The following code takes the final backup `bkup-20150515-final` when deleting the replication group `myReplGroup`.

For Linux, macOS, or Unix:

```
aws elasticache delete-replication-group \
        --replication-group-id myReplGroup \
        --final-snapshot-identifier bkup-20150515-final
```

For Windows:

```
aws elasticache delete-replication-group ^
        --replication-group-id myReplGroup ^
        --final-snapshot-identifier bkup-20150515-final
```

For more information, see [delete-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-replication-group.html) in the *AWS CLI Command Reference*.

# Describing backups
<a name="backups-describing"></a>

The following procedures show you how to display a list of your backups. If you want, you can also view the details of a particular backup.

## Describing backups (Console)
<a name="backups-describing-CON"></a>

**To display backups using the AWS Management Console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Backups**.

1. To see the details of a particular backup, choose the box to the left of the backup's name.

## Describing serverless backups (AWS CLI)
<a name="backups-describing-serverless-CLI"></a>

To display a list of serverless backups and optionally details about a specific backup, use the `describe-serverless-cache-snapshots` CLI operation. 

**Examples**

The following operation uses the parameter `--max-records` to list up to 20 backups associated with your account. Omitting the parameter `--max-records` lists up to 50 backups.

```
aws elasticache describe-serverless-cache-snapshots --max-records 20
```

The following operation uses the parameter `--serverless-cache-name` to list only the backups associated with the cache `my-cache`.

```
aws elasticache describe-serverless-cache-snapshots --serverless-cache-name my-cache
```

The following operation uses the parameter `--serverless-cache-snapshot-name` to display the details of the backup `my-backup`.

```
aws elasticache describe-serverless-cache-snapshots --serverless-cache-snapshot-name my-backup
```

For more information, see [describe-serverless-cache-snapshots](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-serverless-cache-snapshots.html) in the *AWS CLI Command Reference*.

## Describing node-based cluster backups (AWS CLI)
<a name="backups-describing-CLI"></a>

To display a list of node-based cluster backups and optionally details about a specific backup, use the `describe-snapshots` CLI operation. 

**Examples**

The following operation uses the parameter `--max-records` to list up to 20 backups associated with your account. Omitting the parameter `--max-records` lists up to 50 backups.

```
aws elasticache describe-snapshots --max-records 20
```

The following operation uses the parameter `--cache-cluster-id` to list only the backups associated with the cluster `my-cluster`.

```
aws elasticache describe-snapshots --cache-cluster-id my-cluster
```

The following operation uses the parameter `--snapshot-name` to display the details of the backup `my-backup`.

```
aws elasticache describe-snapshots --snapshot-name my-backup
```

For more information, see [describe-snapshots](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-snapshots.html) in the *AWS CLI Command Reference*.
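
Beyond these parameters, the AWS CLI's global `--query` option (a JMESPath expression evaluated client-side) can narrow the response. For example, the following sketch lists only the names of backups that are ready to use; the filter expression is illustrative:

```
aws elasticache describe-snapshots \
    --query "Snapshots[?SnapshotStatus=='available'].SnapshotName" \
    --output text
```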

# Copying backups
<a name="backups-copying"></a>

You can create a copy of any backup, whether it was created automatically or manually. You can also export your backup so you can access it from outside ElastiCache. For guidance on exporting your backup, see [Exporting a backup](backups-exporting.md).

The following steps show you how to copy a backup.

## Copying backups (Console)
<a name="backups-copying-CON"></a>

**To copy a backup (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of your backups, from the left navigation pane choose **Backups**.

1. From the list of backups, choose the box to the left of the name of the backup you want to copy.

1. Choose **Actions** then **Copy**.

1. In the **New backup name** box, type a name for your new backup.

1. Choose **Copy**.

## Copying a serverless backup (AWS CLI)
<a name="backups-copying-CLI"></a>

To copy a backup of a serverless cache, use the `copy-serverless-cache-snapshot` operation.

**Parameters**
+ `--source-serverless-cache-snapshot-name` – Name of the backup to be copied.
+ `--target-serverless-cache-snapshot-name` – Name of the backup's copy.

The following example makes a copy of an automatic backup.

For Linux, macOS, or Unix:

```
aws elasticache copy-serverless-cache-snapshot \
    --source-serverless-cache-snapshot-name automatic.my-cache-2023-11-27-03-15 \
    --target-serverless-cache-snapshot-name my-backup-copy
```

For Windows:

```
aws elasticache copy-serverless-cache-snapshot ^
    --source-serverless-cache-snapshot-name automatic.my-cache-2023-11-27-03-15 ^
    --target-serverless-cache-snapshot-name my-backup-copy
```

For more information, see [copy-serverless-cache-snapshot](https://docs.aws.amazon.com/cli/latest/reference/elasticache/copy-serverless-cache-snapshot.html) in the *AWS CLI Command Reference*.

## Copying a node-based cluster backup (AWS CLI)
<a name="backups-copying-self-designed-CLI"></a>

To copy a backup of a node-based cluster, use the `copy-snapshot` operation.

**Parameters**
+ `--source-snapshot-name` – Name of the backup to be copied.
+ `--target-snapshot-name` – Name of the backup's copy.
+ `--target-bucket` – Reserved for exporting a backup. Do not use this parameter when making a copy of a backup. For more information, see [Exporting a backup](backups-exporting.md).

The following example makes a copy of an automatic backup.

For Linux, macOS, or Unix:

```
aws elasticache copy-snapshot \
    --source-snapshot-name automatic.my-redis-primary-2014-03-27-03-15 \
    --target-snapshot-name my-backup-copy
```

For Windows:

```
aws elasticache copy-snapshot ^
    --source-snapshot-name automatic.my-redis-primary-2014-03-27-03-15 ^
    --target-snapshot-name my-backup-copy
```

For more information, see [copy-snapshot](https://docs.aws.amazon.com/cli/latest/reference/elasticache/copy-snapshot.html) in the *AWS CLI Command Reference*.

# Exporting a backup
<a name="backups-exporting"></a>

Amazon ElastiCache supports exporting your backup to an Amazon Simple Storage Service (Amazon S3) bucket, which gives you access to it from outside ElastiCache. You can export a backup using the ElastiCache console, the AWS CLI, or the ElastiCache API.

Exporting a backup can be helpful if you need to launch a cluster in another AWS Region. You can export your data in one AWS Region, copy the .rdb file to the new AWS Region, and then use that .rdb file to seed the new cache instead of waiting for the new cluster to populate through use. For information about seeding a new cluster, see [Tutorial: Seeding a new node-based cluster with an externally created backup](backups-seeding-redis.md). Another reason you might want to export your cache's data is to use the .rdb file for offline processing.

**Important**  
The ElastiCache backup and the Amazon S3 bucket that you want to copy it to must be in the same AWS Region.  
Though backups copied to an Amazon S3 bucket are encrypted, we strongly recommend that you do not grant others access to the Amazon S3 bucket where you want to store your backups.  
Exporting a backup to Amazon S3 is not supported for clusters using data tiering. For more information, see [Data tiering in ElastiCache](data-tiering.md).  
Exporting a backup is available for node-based Valkey clusters, node-based Redis OSS clusters, and Valkey, Memcached, and Redis OSS serverless caches. Exporting a backup is not available for node-based Memcached clusters.

Before you can export a backup to an Amazon S3 bucket, you must have an Amazon S3 bucket in the same AWS Region as the backup, and you must grant ElastiCache access to that bucket. The following two sections show you how to do this.

## Create an Amazon S3 bucket
<a name="backups-exporting-create-s3-bucket"></a>

The following steps use the Amazon S3 console to create an Amazon S3 bucket where you export and store your ElastiCache backup.

**To create an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose **Create Bucket**.

1. In **Create a Bucket - Select a Bucket Name and Region**, do the following:

   1. In **Bucket Name**, type a name for your Amazon S3 bucket.

      The name of your Amazon S3 bucket must be DNS-compliant. Otherwise, ElastiCache can't access your backup file. The rules for DNS compliance are:
      + Names must be at least 3 and no more than 63 characters long.
      + Names must be a series of one or more labels separated by a period (.) where each label:
        + Starts with a lowercase letter or a number.
        + Ends with a lowercase letter or a number.
        + Contains only lowercase letters, numbers, and dashes.
      + Names can't be formatted as an IP address (for example, 192.0.2.0).

   1. From the **Region** list, choose an AWS Region for your Amazon S3 bucket. This AWS Region must be the same AWS Region as the ElastiCache backup you want to export.

   1. Choose **Create**.

For more information about creating an Amazon S3 bucket, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/CreatingaBucket.html) in the *Amazon Simple Storage Service User Guide*.
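
The bucket-name rules listed above can be checked before you create the bucket. The following Python sketch (the function name `is_dns_compliant` is illustrative, not an AWS API) encodes the documented rules:

```python
import re

# One label: starts and ends with a lowercase letter or digit,
# with only lowercase letters, digits, and dashes in between.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_dns_compliant(name: str) -> bool:
    """Check an S3 bucket name against the DNS-compliance rules above."""
    if not (3 <= len(name) <= 63):
        return False
    # Names can't be formatted as an IP address (for example, 192.0.2.0).
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Every period-separated label must be well formed.
    return all(LABEL.fullmatch(label) for label in name.split("."))

print(is_dns_compliant("my-backup-bucket"))  # True
print(is_dns_compliant("192.0.2.0"))         # False: formatted as an IP address
print(is_dns_compliant("My_Bucket"))         # False: uppercase and underscore
```

Running a check like this before bucket creation avoids discovering a non-compliant name only when ElastiCache fails to access the backup file.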

## Grant ElastiCache access to your Amazon S3 bucket
<a name="backups-exporting-grant-access"></a>

ElastiCache requires access to your Amazon S3 bucket to copy a snapshot to it. We recommend granting access by using an Amazon S3 bucket policy rather than access control lists (ACLs).

**Warning**  
Even though backups copied to an Amazon S3 bucket are encrypted, your data can be accessed by anyone with access to your Amazon S3 bucket. Therefore, we strongly recommend that you set up IAM policies to prevent unauthorized access to this Amazon S3 bucket. For more information, see [Managing access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-access-control.html) in the *Amazon S3 User Guide*.

Add the following bucket policy to your Amazon S3 bucket. Replace `amzn-s3-demo-bucket` with the name of your Amazon S3 bucket and `region` with the AWS Region of your bucket (for example, `us-east-1`).

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "ElastiCacheSnapshotExport",
            "Effect": "Allow",
            "Principal": {
                "Service": "region.elasticache-snapshot.amazonaws.com"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketAcl",
                "s3:ListMultipartUploadParts",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::amzn-s3-demo-bucket",
                "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ]
        }
    ]
}
```

------
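
If you script bucket setup rather than pasting the policy by hand, you can generate it for a given bucket and Region. The following is a minimal Python sketch; the helper name `export_bucket_policy` is an assumption, and it only reproduces the policy document shown above:

```python
import json

def export_bucket_policy(bucket: str, region: str) -> str:
    """Build the ElastiCache snapshot-export bucket policy shown above."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ElastiCacheSnapshotExport",
                "Effect": "Allow",
                "Principal": {
                    # Region-qualified ElastiCache snapshot service principal
                    "Service": f"{region}.elasticache-snapshot.amazonaws.com"
                },
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket",
                    "s3:GetBucketAcl",
                    "s3:ListMultipartUploadParts",
                    "s3:ListBucketMultipartUploads",
                ],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(export_bucket_policy("amzn-s3-demo-bucket", "us-east-1"))
```

You could then apply the returned document with, for example, `aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json`.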

**To add the bucket policy using the Amazon S3 console**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the Amazon S3 bucket that you want to copy the backup to. This should be the S3 bucket that you created in [Create an Amazon S3 bucket](#backups-exporting-create-s3-bucket).

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**.

1. Paste the bucket policy into the policy editor. Replace the `region` and `amzn-s3-demo-bucket` placeholders with your values.

1. Choose **Save changes**.

For more information about migrating from ACLs to bucket policies, see [Grant Amazon ElastiCache (Redis OSS) access to your S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-ownership-migrating-acls-prerequisites.html#object-ownership-elasticache-redis) in the *Amazon S3 User Guide*.

## Export an ElastiCache backup
<a name="backups-exporting-procedures"></a>

Now you've created your S3 bucket and granted ElastiCache permissions to access it. Next, you can use the ElastiCache console, the AWS CLI, or the ElastiCache API to export your snapshot to it. 

### Exporting an ElastiCache backup (Console)
<a name="backups-exporting-CON"></a>

The following steps use the ElastiCache console to export a backup to an Amazon S3 bucket so that you can access it from outside ElastiCache. The Amazon S3 bucket must be in the same AWS Region as the ElastiCache backup.

**To export an ElastiCache backup to an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of your backups, from the left navigation pane choose **Backups**.

1. From the list of backups, choose the box to the left of the name of the backup you want to export. 

1. Choose **Copy**.

1. In **Create a Copy of the Backup?**, do the following: 

   1. In the **New backup name** box, type a name for your new backup.

      The name must be from 1 to 1,000 characters long and must be UTF-8 encodable.

      ElastiCache adds an instance identifier and `.rdb` to the value that you enter here. For example, if you enter `my-exported-backup`, ElastiCache creates `my-exported-backup-0001.rdb`.

   1. From the **Target S3 Location** list, choose the name of the Amazon S3 bucket that you want to copy your backup to (the bucket that you created in [Create an Amazon S3 bucket](#backups-exporting-create-s3-bucket)).

      The **Target S3 Location** must be an Amazon S3 bucket in the backup's AWS Region with the following permissions for the export process to succeed.
      + Object access – **Read** and **Write**.
      + Permissions access – **Read**.

      For more information, see [Grant ElastiCache access to your Amazon S3 bucket](#backups-exporting-grant-access). 

   1. Choose **Copy**.

**Note**  
If your S3 bucket does not have the permissions that ElastiCache needs to export a backup to it, you receive one of the following error messages. Return to [Grant ElastiCache access to your Amazon S3 bucket](#backups-exporting-grant-access) to add the permissions specified, and then retry exporting your backup.  
ElastiCache has not been granted READ permissions %s on the S3 Bucket.  
**Solution:** Add **Read** permissions on the bucket.  
ElastiCache has not been granted WRITE permissions %s on the S3 Bucket.  
**Solution:** Add **Write** permissions on the bucket.  
ElastiCache has not been granted READ_ACP permissions %s on the S3 Bucket.  
**Solution:** Add **Read** for Permissions access on the bucket.

If you want to copy your backup to another AWS Region, use Amazon S3 to copy it. For more information, see [Copying an object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/MakingaCopyofanObject.html) in the *Amazon Simple Storage Service User Guide*.
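
If you automate exports, the backup-name constraint above (from 1 to 1,000 characters, UTF-8 encodable) can be validated before you call the API. A minimal Python sketch; the function name is illustrative:

```python
def is_valid_backup_name(name: str) -> bool:
    """Check the documented backup-name constraints:
    1 to 1,000 characters, UTF-8 encodable."""
    if not 1 <= len(name) <= 1000:
        return False
    try:
        name.encode("utf-8")
    except UnicodeEncodeError:
        # Strings containing lone surrogates cannot be UTF-8 encoded.
        return False
    return True

print(is_valid_backup_name("my-exported-backup"))  # True
print(is_valid_backup_name(""))                    # False: empty name
```

Validating the name up front avoids a round trip to the service for a request that would be rejected.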

### Exporting an ElastiCache serverless backup (AWS CLI)
<a name="backups-exporting-CLI"></a>

**Exporting a backup of a serverless cache**

Export the backup to an Amazon S3 bucket using the `export-serverless-cache-snapshot` CLI operation with the following parameters:

**Parameters**
+ `--serverless-cache-snapshot-name` – Name of the backup to be copied.
+ `--s3-bucket-name` – Name of the Amazon S3 bucket where you want to export the backup. A copy of the backup is made in the specified bucket.

  The `--s3-bucket-name` must be an Amazon S3 bucket in the backup's AWS Region with the following permissions for the export process to succeed.
  + Object access – **Read** and **Write**.
  + Permissions access – **Read**.

The following operation copies a backup to the bucket `my-s3-bucket`.

For Linux, macOS, or Unix:

```
aws elasticache export-serverless-cache-snapshot \
    --serverless-cache-snapshot-name automatic.my-redis-2023-11-27 \
    --s3-bucket-name my-s3-bucket
```

For Windows:

```
aws elasticache export-serverless-cache-snapshot ^
    --serverless-cache-snapshot-name automatic.my-redis-2023-11-27 ^
    --s3-bucket-name my-s3-bucket
```

### Exporting a node-based ElastiCache cluster backup (AWS CLI)
<a name="backups-exporting-self-designed-CON"></a>

**Exporting a backup of a node-based cluster**

Export the backup to an Amazon S3 bucket using the `copy-snapshot` CLI operation with the following parameters:

**Parameters**
+ `--source-snapshot-name` – Name of the backup to be copied.
+ `--target-snapshot-name` – Name of the backup's copy.

  The name must be from 1 to 1,000 characters long and must be UTF-8 encodable.

  ElastiCache adds an instance identifier and `.rdb` to the value you enter here. For example, if you enter `my-exported-backup`, ElastiCache creates `my-exported-backup-0001.rdb`.
+ `--target-bucket` – Name of the Amazon S3 bucket where you want to export the backup. A copy of the backup is made in the specified bucket.

  The `--target-bucket` must be an Amazon S3 bucket in the backup's AWS Region with the following permissions for the export process to succeed.
  + Object access – **Read** and **Write**.
  + Permissions access – **Read**.

  For more information, see [Grant ElastiCache access to your Amazon S3 bucket](#backups-exporting-grant-access).

The following operation copies a backup to the bucket `my-s3-bucket`.

For Linux, macOS, or Unix:

```
aws elasticache copy-snapshot \
    --source-snapshot-name automatic.my-redis-primary-2016-06-27-03-15 \
    --target-snapshot-name my-exported-backup \
    --target-bucket my-s3-bucket
```

For Windows:

```
aws elasticache copy-snapshot ^
    --source-snapshot-name automatic.my-redis-primary-2016-06-27-03-15 ^
    --target-snapshot-name my-exported-backup ^
    --target-bucket my-s3-bucket
```
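
As noted for `--target-snapshot-name`, ElastiCache appends an instance identifier and `.rdb` to the name you supply, so a multi-node cluster produces one object per node. The following sketch shows the object keys you might look for in the bucket afterward; the `-000N` numbering follows the documented `my-exported-backup-0001.rdb` example and is illustrative:

```python
def expected_object_keys(target_snapshot_name: str, node_count: int) -> list[str]:
    """Object keys ElastiCache is expected to write, following the
    documented naming example (my-exported-backup -> my-exported-backup-0001.rdb)."""
    return [
        f"{target_snapshot_name}-{i:04d}.rdb" for i in range(1, node_count + 1)
    ]

print(expected_object_keys("my-exported-backup", 2))
# ['my-exported-backup-0001.rdb', 'my-exported-backup-0002.rdb']
```

A script polling the bucket for these keys can confirm that the export finished for every node.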

# Restoring from a backup into a new cache
<a name="backups-restoring"></a>

You can restore an existing backup from Valkey into a new Valkey cache or node-based cluster, and restore an existing Redis OSS backup into a new Redis OSS cache or node-based cluster. You can also restore an existing Memcached serverless cache backup into a new Memcached serverless cache. 

## Restoring a backup into a serverless cache (Console)
<a name="backups-restoring-CON"></a>

**Note**  
ElastiCache Serverless supports RDB files compatible with Valkey 7.2 and above, and Redis OSS versions between 5.0 and the latest version available.

**To restore a backup to a serverless cache (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Backups**.

1. In the list of backups, choose the box to the left of the backup name that you want to restore.

1. Choose **Actions** and then **Restore**.

1. Enter a name for the new serverless cache, and an optional description.

1. Choose **Create** to create your new cache and import data from your backup.

## Restoring a backup into a node-based cluster (Console)
<a name="backups-restoring-self-designedCON"></a>

**To restore a backup to a node-based cluster (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Backups**.

1. In the list of backups, choose the box to the left of the backup name you want to restore from.

1. Choose **Actions** and then **Restore**.

1. Choose **Node-based cache** and customize the cluster settings, such as node type, sizes, number of shards, replicas, AZ placement, and security settings.

1. Choose **Create** to create your new node-based cluster and import data from your backup.

## Restoring a backup into a serverless cache (AWS CLI)
<a name="backups-restoring-CLI"></a>

**Note**  
ElastiCache Serverless supports RDB files compatible with Valkey 7.2 and above, and Redis OSS versions between 5.0 and the latest version available.

**To restore a backup to a new serverless cache (AWS CLI)**

The following AWS CLI example creates a new cache using `create-serverless-cache` and imports data from a backup. 

For Linux, macOS, or Unix:

```
aws elasticache create-serverless-cache \
    --serverless-cache-name CacheName \
    --engine redis \
    --snapshot-arns-to-restore Snapshot-ARN
```

For Windows:

```
aws elasticache create-serverless-cache ^
    --serverless-cache-name CacheName ^
    --engine redis ^
    --snapshot-arns-to-restore Snapshot-ARN
```

# Deleting a backup
<a name="backups-deleting"></a>

An automatic backup is automatically deleted when its retention limit expires. If you delete a cluster, all of its automatic backups are also deleted. If you delete a replication group, all of the automatic backups from the clusters in that group are also deleted.

ElastiCache provides a deletion API operation that lets you delete a backup at any time, regardless of whether the backup was created automatically or manually. Because manual backups don't have a retention limit, manual deletion is the only way to remove them.

You can delete a backup using the ElastiCache console, the AWS CLI, or the ElastiCache API.

## Deleting a backup (Console)
<a name="backups-deleting-CON"></a>

The following procedure deletes a backup using the ElastiCache console.

**To delete a backup**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Backups**.

   The Backups screen appears with a list of your backups.

1. Choose the box to the left of the name of the backup you want to delete.

1. Choose **Delete**.

1. If you want to delete this backup, choose **Delete** on the **Delete Backup** confirmation screen. The status changes to *deleting*.

## Deleting a serverless backup (AWS CLI)
<a name="backups-deleting-serverless-CLI"></a>

Use the `delete-serverless-cache-snapshot` AWS CLI operation with the following parameter to delete a serverless backup.
+ `--serverless-cache-snapshot-name` – Name of the backup to be deleted.

The following code deletes the backup `myBackup`.

```
aws elasticache delete-serverless-cache-snapshot --serverless-cache-snapshot-name myBackup
```

For more information, see [delete-serverless-cache-snapshot](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-serverless-cache-snapshot.html) in the *AWS CLI Command Reference*.

## Deleting a node-based cluster backup (AWS CLI)
<a name="backups-deleting-CLI"></a>

Use the `delete-snapshot` AWS CLI operation with the following parameter to delete a node-based cluster backup.
+ `--snapshot-name` – Name of the backup to be deleted.

The following code deletes the backup `myBackup`.

```
aws elasticache delete-snapshot --snapshot-name myBackup
```

For more information, see [delete-snapshot](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-snapshot.html) in the *AWS CLI Command Reference*.

# Tagging backups
<a name="backups-tagging"></a>

You can assign your own metadata to each backup in the form of tags. Tags enable you to categorize your backups in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. For more information, see [Resources you can tag](Tagging-Resources.md#Tagging-your-resources).

Cost allocation tags are a means of tracking your costs across multiple AWS services by grouping your expenses on invoices by tag values. To learn more about cost allocation tags, see [Use cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html).

Using the ElastiCache console, the AWS CLI, or ElastiCache API you can add, list, modify, remove, or copy cost allocation tags on your backups. For more information, see [Monitoring costs with cost allocation tags](Tagging.md).

# Tutorial: Seeding a new node-based cluster with an externally created backup
<a name="backups-seeding-redis"></a>

When you create a new Valkey or Redis OSS node-based cluster, you can seed it with data from a Valkey or Redis OSS .rdb backup file. Seeding the cluster is useful if you currently manage a Valkey or Redis OSS instance outside of ElastiCache and want to populate your new ElastiCache node-based cluster with your existing data.

To seed a new Valkey or Redis OSS node-based cluster from a Valkey or Redis OSS backup created within Amazon ElastiCache, see [Restoring from a backup into a new cache](backups-restoring.md).

When you use a Valkey or Redis OSS .rdb file to seed a new node-based cluster, you can do the following:
+ Upgrade from a nonpartitioned cluster to a Valkey or Redis OSS (cluster mode enabled) node-based cluster running Redis OSS version 3.2.4.
+ Specify a number of shards (called node groups in the API and CLI) in the new node-based cluster. This number can be different from the number of shards in the node-based cluster that was used to create the backup file.
+ Specify a different node type for the new node-based cluster—larger or smaller than that used in the cluster that made the backup. If you scale to a smaller node type, be sure that the new node type has sufficient memory for your data and Valkey or Redis OSS overhead. For more information, see [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md).
+ Distribute your keys in the slots of the new Valkey or Redis OSS (cluster mode enabled) cluster differently than in the cluster that was used to create the backup file.

**Note**  
You can't seed a Valkey or Redis OSS (cluster mode disabled) cluster from an .rdb file created from a Valkey or Redis OSS (cluster mode enabled) cluster.

**Important**  
You must ensure that your Valkey or Redis OSS backup data doesn't exceed the resources of the node. For example, you can't upload an .rdb file with 5 GB of Valkey or Redis OSS data to a cache.m3.medium node that has 2.9 GB of memory.  
If the backup is too large, the resulting cluster has a status of `restore-failed`. If this happens, you must delete the cluster and start over.  
For a complete listing of node types and specifications, see [Redis OSS node-type specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis.NodeSpecific) and [Amazon ElastiCache product features and details](https://aws.amazon.com/elasticache/details/).
You can encrypt a Valkey or Redis OSS .rdb file with Amazon S3 server-side encryption (SSE-S3) only. For more information, see [Protecting data using server-side encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html).
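
A rough pre-flight check of the sizing constraint in the preceding note can save a failed restore. This Python sketch is an illustration, not an AWS tool; the 25% overhead allowance is an assumption, and you should verify node memory figures against the node-type specifications:

```python
def fits_on_node(rdb_size_gb: float, node_memory_gb: float,
                 overhead_fraction: float = 0.25) -> bool:
    """Rough check that backup data plus engine overhead fits in node memory.

    overhead_fraction is an illustrative allowance for Valkey/Redis OSS
    overhead, not an AWS-published figure.
    """
    return rdb_size_gb * (1 + overhead_fraction) <= node_memory_gb

# The documented failure case: 5 GB of data cannot fit on a
# cache.m3.medium node with 2.9 GB of memory.
print(fits_on_node(5.0, 2.9))   # False
```

If the check fails, pick a larger node type before creating the cluster rather than recovering from a `restore-failed` status afterward.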

Following, you can find topics that walk you through migrating your cluster from outside ElastiCache to ElastiCache for Valkey or Redis OSS.

**Topics**
+ [Step 1: Create a Valkey or Redis OSS backup](#backups-seeding-redis-create-backup)
+ [Step 2: Create an Amazon S3 bucket and folder](#backups-seeding-redis-create-s3-bucket)
+ [Step 3: Upload your backup to Amazon S3](#backups-seeding-redis-upload)
+ [Step 4: Grant ElastiCache read access to the .rdb file](#backups-seeding-redis-grant-access)


## Step 1: Create a Valkey or Redis OSS backup
<a name="backups-seeding-redis-create-backup"></a>

**To create the Valkey or Redis OSS backup to seed your ElastiCache for Redis OSS instance**

1. Connect to your existing Valkey or Redis OSS instance.

1. Run either the `BGSAVE` or the `SAVE` command to create a backup. Note where your .rdb file is located.

   `BGSAVE` is asynchronous and does not block other clients while processing. For more information, see [BGSAVE](https://valkey.io/commands/bgsave) at the Valkey website.

   `SAVE` is synchronous and blocks other processes until finished. For more information, see [SAVE](https://valkey.io/commands/save) at the Valkey website.

For additional information on creating a backup, see [Persistence](https://valkey.io/topics/persistence) at the Valkey website.

## Step 2: Create an Amazon S3 bucket and folder
<a name="backups-seeding-redis-create-s3-bucket"></a>

When you have created the backup file, you need to upload it to a folder within an Amazon S3 bucket. To do that, you must first have an Amazon S3 bucket and folder within that bucket. If you already have an Amazon S3 bucket and folder with the appropriate permissions, you can skip to [Step 3: Upload your backup to Amazon S3](#backups-seeding-redis-upload).

**To create an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Follow the instructions for creating an Amazon S3 bucket in [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket.html) in the *Amazon Simple Storage Service User Guide*.

   The name of your Amazon S3 bucket must be DNS-compliant. Otherwise, ElastiCache can't access your backup file. The rules for DNS compliance are:
   + Names must be at least 3 and no more than 63 characters long.
   + Names must be a series of one or more labels separated by a period (.) where each label:
     + Starts with a lowercase letter or a number.
     + Ends with a lowercase letter or a number.
     + Contains only lowercase letters, numbers, and dashes.
   + Names can't be formatted as an IP address (for example, 192.0.2.0).

   You must create your Amazon S3 bucket in the same AWS Region as your new ElastiCache cluster. This approach ensures the highest data transfer speed when ElastiCache reads your .rdb file from Amazon S3.
**Note**  
To keep your data as secure as possible, make the permissions on your Amazon S3 bucket as restrictive as you can. At the same time, the permissions still need to allow the bucket and its contents to be used to seed your new Valkey or Redis OSS cluster.

**To add a folder to an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the bucket to upload your .rdb file to.

1. Choose **Create folder**.

1. Enter a name for your new folder.

1. Choose **Save**.

   Make note of both the bucket name and the folder name.

## Step 3: Upload your backup to Amazon S3
<a name="backups-seeding-redis-upload"></a>

Now, upload the .rdb file that you created in [Step 1: Create a Valkey or Redis OSS backup](#backups-seeding-redis-create-backup). You upload it to the Amazon S3 bucket and folder that you created in [Step 2: Create an Amazon S3 bucket and folder](#backups-seeding-redis-create-s3-bucket). For more information on this task, see [Add an object to a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html). Between steps 2 and 3 of that procedure, choose the name of the folder that you created.

**To upload your .rdb file to an Amazon S3 folder**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the Amazon S3 bucket you created in Step 2.

1. Choose the name of the folder you created in Step 2.

1. Choose **Upload**.

1. Choose **Add files**.

1. Browse to find the file or files you want to upload, then choose the file or files. To choose multiple files, hold down the Ctrl key while choosing each file name.

1. Choose **Open**.

1. Confirm the correct file or files are listed in the **Upload** dialog box, and then choose **Upload**.

Note the path to your .rdb file. For example, if your bucket name is `myBucket` and the path is `myFolder/redis.rdb`, enter `myBucket/myFolder/redis.rdb`. You need this path to seed the new cluster with the data in this backup.

For additional information, see [Bucket restrictions and limitations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html) in the *Amazon Simple Storage Service User Guide*.

## Step 4: Grant ElastiCache read access to the .rdb file
<a name="backups-seeding-redis-grant-access"></a>

Now, grant ElastiCache read access to your .rdb backup file. How you grant access depends on whether your bucket is in a default AWS Region or an opt-in AWS Region.

AWS Regions introduced before March 20, 2019, are enabled by default. You can begin working in these AWS Regions immediately. Regions introduced after March 20, 2019, such as Asia Pacific (Hong Kong) and Middle East (Bahrain), are disabled by default. You must enable, or opt in to, these Regions before you can use them, as described in [Managing AWS Regions](https://docs.aws.amazon.com/general/latest/gr/rande-manage.html) in the *AWS General Reference*.

Choose your approach depending on your AWS Region:
+ For a default Region, use the procedure in [Grant ElastiCache read access to the .rdb file in a default Region](#backups-seeding-redis-default-region).
+ For an opt-in Region, use the procedure in [Grant ElastiCache read access to the .rdb file in an opt-in Region](#backups-seeding-opt-in-region).

### Grant ElastiCache read access to the .rdb file in a default Region
<a name="backups-seeding-redis-default-region"></a>


**To grant ElastiCache read access to the backup file in an AWS Region enabled by default**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the S3 bucket that contains your .rdb file.

1. Choose the name of the folder that contains your .rdb file.

1. Choose the name of your .rdb backup file. The name of the selected file appears above the tabs at the top of the page.

1. Choose **Permissions**.

1. If **aws-scs-s3-readonly** or one of the canonical IDs in the following list is not listed as a user, do the following:

   1. Under **Access for other AWS accounts**, choose **Add grantee**.

   1. In the box, add the AWS Region's canonical ID as shown following:
      + AWS GovCloud (US-West) Region: 

        ```
        40fa568277ad703bd160f66ae4f83fc9dfdfd06c2f1b5060ca22442ac3ef8be6
        ```
**Important**  
The backup must be located in an S3 bucket in AWS GovCloud (US) for you to download it to a Valkey or Redis OSS cluster in AWS GovCloud (US).
      + AWS Regions enabled by default: 

        ```
        540804c33a284a299d2547575ce1010f2312ef3da9b3a053c8bc45bf233e4353
        ```

   1. Set the permissions on the bucket by choosing **Yes** for the following:
      + **List/write object**
      + **Read/write object ACL permissions**

   1. Choose **Save**.

1. Choose **Overview**, and then choose **Download**.

### Grant ElastiCache read access to the .rdb file in an opt-in Region
<a name="backups-seeding-opt-in-region"></a>


Now, grant ElastiCache read access to your .rdb backup file. 

**To grant ElastiCache read access to the backup file**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Choose the name of the S3 bucket that contains your .rdb file.

1. Choose the name of the folder that contains your .rdb file.

1. Choose the name of your .rdb backup file. The name of the selected file appears above the tabs at the top of the page.

1. Choose the **Permissions** tab.

1. Under **Permissions**, choose **Bucket policy** and then choose **Edit**.

1. Update the policy to grant ElastiCache required permissions to perform operations:
   + Add `[ "Service" : "region-full-name.elasticache-snapshot.amazonaws.com" ]` to `Principal`.
   + Add the following permissions required for exporting a snapshot to the Amazon S3 bucket: 
     + `"s3:GetObject"`
     + `"s3:ListBucket"`
     + `"s3:GetBucketAcl"`

   The following is an example of what the updated policy might look like.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "ElastiCacheSnapshotExport",
               "Effect": "Allow",
               "Principal": {
                   "Service": "region.elasticache-snapshot.amazonaws.com"
               },
               "Action": [
                   "s3:PutObject",
                   "s3:GetObject",
                   "s3:ListBucket",
                   "s3:GetBucketAcl",
                   "s3:ListMultipartUploadParts",
                   "s3:ListBucketMultipartUploads"
               ],
               "Resource": [
                   "arn:aws:s3:::amzn-s3-demo-bucket",
                   "arn:aws:s3:::amzn-s3-demo-bucket/*"
               ]
           }
       ]
   }
   ```

------

1. Choose **Save changes**.

### Seed the ElastiCache cluster with the .rdb file data
<a name="backups-seeding-redis-seed-cluster"></a>

Now you are ready to create an ElastiCache cluster and seed it with the data from the .rdb file. To create the cluster, follow the directions at [Creating a cluster for Valkey or Redis OSS](Clusters.Create.md) or [Creating a Valkey or Redis OSS replication group from scratch](Replication.CreatingReplGroup.NoExistingCluster.md). Be sure to choose Valkey or Redis OSS as your cluster engine.

The method you use to tell ElastiCache where to find the backup you uploaded to Amazon S3 depends on the method you use to create the cluster:

**Seed the ElastiCache for Redis OSS cluster or replication group with the .rdb file data**
+ **Using the ElastiCache console**

  When selecting **Cluster settings**, choose **Restore from backups** as your cluster creation method, then choose **Other backups** as your **Source** in the **Backup source** section. In the **Seed RDB file S3 location** box, type in the Amazon S3 path for the file(s). If you have multiple .rdb files, type in the path for each file in a comma-separated list. The Amazon S3 path looks something like `myBucket/myFolder/myBackupFilename.rdb`.
+ **Using the AWS CLI**

  If you use the `create-cache-cluster` or the `create-replication-group` operation, use the parameter `--snapshot-arns` to specify a fully qualified ARN for each .rdb file. For example, `arn:aws:s3:::myBucket/myFolder/myBackupFilename.rdb`. The ARN must resolve to the backup files you stored in Amazon S3.
+ **Using the ElastiCache API**

  If you use the `CreateCacheCluster` or the `CreateReplicationGroup` ElastiCache API operation, use the parameter `SnapshotArns` to specify a fully qualified ARN for each .rdb file. For example, `arn:aws:s3:::myBucket/myFolder/myBackupFilename.rdb`. The ARN must resolve to the backup files you stored in Amazon S3.

**Important**  
When seeding a Valkey or Redis OSS (cluster mode enabled) cluster, you must configure each node group (shard) in the new cluster or replication group. Use the parameter `--node-group-configuration` (API: `NodeGroupConfiguration`) to do this. For more information, see the following:  
CLI: [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html) in the AWS CLI Reference  
API: [CreateReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateReplicationGroup.html) in the ElastiCache API Reference
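
Putting the pieces together, a CLI call that seeds a two-shard (cluster mode enabled) replication group from an .rdb file might look like the following sketch. The group ID, bucket, node type, and slot ranges are placeholder values, and the command is printed with `echo` rather than executed; remove the `echo` to run it against your account.

```shell
# All names below are placeholders; remove 'echo' to actually run the command.
BUCKET=myBucket
RDB_KEY=myFolder/myBackupFilename.rdb
SNAPSHOT_ARN="arn:aws:s3:::${BUCKET}/${RDB_KEY}"

echo aws elasticache create-replication-group \
  --replication-group-id myNewReplGroup \
  --replication-group-description "Seeded from an S3 backup" \
  --engine redis \
  --cache-node-type cache.r7g.large \
  --num-node-groups 2 \
  --snapshot-arns "$SNAPSHOT_ARN" \
  --node-group-configuration "NodeGroupId=0001,Slots=0-8191" "NodeGroupId=0002,Slots=8192-16383"
```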

During the process of creating your cluster, the data in your Valkey or Redis OSS backup is written to the cluster. You can monitor the progress by viewing the ElastiCache event messages. To do this, open the ElastiCache console and choose **Cache Events**. You can also use the AWS CLI or the ElastiCache API to obtain event messages. For more information, see [Viewing ElastiCache events](ECEvents.Viewing.md).
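
For example, you could poll the event stream for the new cluster with `describe-events`. The identifier below is a placeholder, and the call is echoed rather than executed:

```shell
# Placeholder identifier; remove 'echo' to run the call for real.
CLUSTER_ID=myNewCluster
echo aws elasticache describe-events \
  --source-type cache-cluster \
  --source-identifier "$CLUSTER_ID" \
  --duration 360
```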

# Engine versions and upgrading in ElastiCache
<a name="engine-versions"></a>

This section covers the supported Valkey, Memcached, and Redis OSS engines and how to upgrade. Note that all features available with Redis OSS 7.2 are available in Valkey 7.2 and above by default. You can also upgrade from some existing ElastiCache for Redis OSS engines to a Valkey engine.

# Upgrading engine versions, including cross-engine upgrades
<a name="VersionManagement.HowTo"></a>

**Valkey and Redis OSS**

With Valkey and Redis OSS, you initiate version upgrades to your cluster or replication group by modifying it using the ElastiCache console, the AWS CLI, or the ElastiCache API and specifying a newer engine version. 

You can also cross upgrade from Redis OSS to Valkey. For more information on cross upgrades, see [How to upgrade from Redis OSS to Valkey](#VersionManagement.HowTo.cross-engine-upgrade).

**Topics**
+ [How to upgrade from Redis OSS to Valkey](#VersionManagement.HowTo.cross-engine-upgrade)
+ [Resolving blocked Valkey or Redis OSS engine upgrades](#resolving-blocked-engine-upgrades)


**Memcached**

With Memcached, to start version upgrades to your cluster, you modify it and specify a newer engine version. You can do this by using the ElastiCache console, the AWS CLI, or the ElastiCache API:
+ To use the AWS Management Console, see – [Using the ElastiCache AWS Management Console](Clusters.Modify.md#Clusters.Modify.CON).
+ To use the AWS CLI, see [Using the AWS CLI with ElastiCache](Clusters.Modify.md#Clusters.Modify.CLI).
+ To use the ElastiCache API, see [Using the ElastiCache API](Clusters.Modify.md#Clusters.Modify.API).

## How to upgrade from Redis OSS to Valkey
<a name="VersionManagement.HowTo.cross-engine-upgrade"></a>

Valkey is designed as a drop-in replacement for Redis OSS 7. You can upgrade from Redis OSS to Valkey using the console, API, or CLI by specifying the new engine and major engine version. The endpoint IP address and all other aspects of the application are not changed by the upgrade. When upgrading from Redis OSS 5.0.6 or higher, you will experience no downtime. 

**Note**  
**AWS CLI version requirements for Redis OSS to Valkey upgrades:**  
For AWS CLI v1: minimum required version 1.35.2 (current version: 1.40.22)  
For AWS CLI v2: minimum required version 2.18.2 (current version: 2.27.22)

**Note**  
When upgrading from Redis OSS versions earlier than 5.0.6, you may experience a failover time of 30 to 60 seconds during DNS propagation.  
To upgrade an existing Redis OSS (cluster mode disabled) single-node cluster to the Valkey engine, first follow the steps in [Creating a replication group using an existing cluster](Replication.CreatingReplGroup.ExistingCluster.md). After the single-node cluster has been added to a replication group, you can perform a cross-engine upgrade to Valkey.

### Upgrading a replication group from Redis OSS to Valkey
<a name="cross-engine-upgrades.replication-group"></a>

If you have an existing Redis OSS replication group that uses the default cache parameter group, you can upgrade to Valkey by specifying the new engine and engine version with the `modify-replication-group` operation.

For Linux, macOS, or Unix:

```
aws elasticache modify-replication-group \
   --replication-group-id myReplGroup \
   --engine valkey \
   --engine-version 8.0
```

For Windows:

```
aws elasticache modify-replication-group ^
   --replication-group-id myReplGroup ^
   --engine valkey ^
   --engine-version 8.0
```

If a custom cache parameter group is applied to the existing Redis OSS replication group that you want to upgrade, you must also pass a custom Valkey cache parameter group in the request. The Valkey custom parameter group must have the same static parameter values as the existing Redis OSS custom parameter group.

For Linux, macOS, or Unix:

```
aws elasticache modify-replication-group \
   --replication-group-id myReplGroup \
   --engine valkey \
   --engine-version 8.0 \
   --cache-parameter-group-name myParamGroup
```

For Windows:

```
aws elasticache modify-replication-group ^
   --replication-group-id myReplGroup ^
   --engine valkey ^
   --engine-version 8.0 ^
   --cache-parameter-group-name myParamGroup
```

### Upgrading a Redis OSS serverless cache to Valkey with the CLI
<a name="cross-engine-upgrades.cli"></a>

For Linux, macOS, or Unix:

```
aws elasticache modify-serverless-cache \
   --serverless-cache-name myCluster \
   --engine valkey \
   --major-engine-version 8
```

For Windows:

```
aws elasticache modify-serverless-cache ^
   --serverless-cache-name myCluster ^
   --engine valkey ^
   --major-engine-version 8
```

### Upgrading Redis OSS to Valkey with the Console
<a name="cross-engine-upgrades.console"></a>

**Upgrading from Redis OSS 5 to Valkey**

1. Select the Redis OSS cache to upgrade.

1. An **Upgrade to Valkey** window should appear. Select the **Upgrade to Valkey** button.

1. Go to **Cache settings**, and then select **Engine version**. The most recent version of Valkey is recommended.

1. If this cache is serverless, then you will need to update the parameter group. Go to the **Parameter groups** area of **Cache settings** and select an appropriate parameter group, such as *default.valkey8*.

1. Select **Upgrade**.

This cache will now be listed in the Valkey area of the console.

**Note**  
Upgrading directly from Redis OSS 4 or lower to Valkey may include a longer failover time of 30 to 60 seconds during the DNS propagation.

### How to downgrade from Valkey to Redis OSS
<a name="cross-engine-downgrades.console"></a>

 If for any reason you want to roll back your upgraded cluster, Amazon ElastiCache supports rolling back a Valkey 7.2 cache to Redis OSS 7.1. You can perform a rollback using the same console, API, or CLI steps as an engine upgrade, specifying Redis OSS 7.1 as the target engine version. Rollbacks use the same processes as an upgrade. The endpoint IP address and all other aspects of the application are not changed by the rollback, and you will experience no downtime. 

 Additionally, you can restore a snapshot created from your Valkey 7.2 cache as a Redis OSS 7.1 cache. When you restore from a snapshot, you can specify Redis OSS 7.1 as the target engine version. When using this option, a new cache will be created from the snapshot. Restoring from a snapshot has no effect on the Valkey cache that the snapshot was created from. 

 The following requirements and limitations apply when performing a rollback: 
+  ElastiCache only supports rolling back from Valkey 7.2 to Redis OSS 7.1. This is true even if you upgraded to Valkey 7.2 from an earlier version than Redis OSS 7.1. 
+  Any user group and user associated with the replication group or serverless cache being rolled back must be configured with engine type `REDIS`. 
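
As a sketch, a CLI rollback could reuse the upgrade call shown earlier with Redis OSS 7.1 as the target. The group ID is a placeholder, and the command is echoed rather than executed:

```shell
# Placeholder group ID; remove 'echo' to run the rollback.
TARGET_ENGINE=redis
TARGET_VERSION=7.1
echo aws elasticache modify-replication-group \
  --replication-group-id myReplGroup \
  --engine "$TARGET_ENGINE" \
  --engine-version "$TARGET_VERSION" \
  --apply-immediately
```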

## Resolving blocked Valkey or Redis OSS engine upgrades
<a name="resolving-blocked-engine-upgrades"></a>

As shown in the following table, your Valkey or Redis OSS engine upgrade operation is blocked if you have a pending scale up operation.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/VersionManagement.HowTo.html)

**To resolve a blocked Valkey or Redis OSS engine upgrade**
+ Do one of the following:
  + Schedule your Redis OSS or Valkey engine upgrade operation for the next maintenance window by clearing the **Apply immediately** check box. 

    With the CLI, use `--no-apply-immediately`. With the API, use `ApplyImmediately=false`.
  + Wait until your next maintenance window (or after) to perform your Redis OSS engine upgrade operation.
  + Add the Redis OSS scale-up operation to this cluster modification with the **Apply immediately** check box selected. 

    With the CLI, use `--apply-immediately`. With the API, use `ApplyImmediately=true`. 

    This approach effectively cancels the engine upgrade during the next maintenance window by performing it immediately.
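
For instance, deferring the upgrade to the maintenance window from the CLI might look like the following. The group ID and version are placeholders, and the command is echoed rather than executed:

```shell
# Placeholder values; remove 'echo' to run the deferred upgrade.
MAINTENANCE_FLAG=--no-apply-immediately
echo aws elasticache modify-replication-group \
  --replication-group-id myReplGroup \
  --engine-version 7.1 \
  "$MAINTENANCE_FLAG"
```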

# ElastiCache Extended Support
<a name="extended-support"></a>

With ElastiCache Extended Support, you can continue running your cache on a major engine version past its end of standard support date for an additional cost. If you don't upgrade by the end of standard support date, you will be charged automatically. 

Extended Support provides the following updates and technical support:
+ Security updates for critical and high CVEs for your cache and cache engine
+ Bug fixes and patches for critical issues
+ The ability to open support cases and receive troubleshooting help within the standard ElastiCache service level agreement

This paid offering gives you more time to upgrade to a supported major engine version. 

For example, the ElastiCache end of standard support date for Redis OSS 4.0.10 is January 31, 2026. If you aren't ready to manually upgrade to Valkey or to Redis OSS 6 or later by that date, ElastiCache will automatically enroll your caches in Extended Support and you can continue to run Redis OSS 4.0.10. Starting the first day of the month after the standard support ends, February 1, 2026, ElastiCache automatically charges you for Extended Support.

Extended Support is available for up to 3 years past the end of standard support date for a major engine version. For ElastiCache for Redis OSS versions 4 and 5, that is January 31, 2029. After this date, any caches still running Redis OSS versions 4 and 5 will be automatically upgraded to the latest version of Valkey.

Once an engine’s standard support period ends, caches that continue to run that version automatically transition to Extended Support. You will be notified before the Extended Support pricing start date so that you can upgrade your cache instead. You can also explicitly opt out at any time by upgrading to a supported version.

For more information about the end of standard support dates and the end of Extended Support dates for Valkey, Memcached, and Redis OSS, see [ElastiCache versions for Redis OSS end of life schedule](engine-versions.md#deprecated-engine-versions).

**Topics**
+ [ElastiCache Extended Support charges](extended-support-charges.md)
+ [Versions with ElastiCache Extended Support](extended-support-versions.md)
+ [ElastiCache and customer responsibilities with ElastiCache Extended Support](extended-support-responsibilities.md)

# ElastiCache Extended Support charges
<a name="extended-support-charges"></a>

You will incur charges for all engines enrolled in ElastiCache Extended Support beginning the day after the end of standard support. For the ElastiCache end of standard support date, see [Versions with ElastiCache Extended Support](extended-support-versions.md).

The additional charge for ElastiCache Extended Support automatically stops when you take one of the following actions:
+ Upgrade to an engine version that's covered under standard support.
+ Delete the cache that's running a major version past the ElastiCache end of standard support date.

The charges will restart if your target engine version enters Extended Support in the future.

For example, let’s say ElastiCache version 4 for Redis OSS enters Extended Support on February 1, 2026, and you upgrade your caches on v4 to v6 on January 1, 2027. You will only be charged for 11 months of Extended Support on ElastiCache version 4 for Redis OSS. If you continue running ElastiCache version 6 for Redis OSS past its end of standard support date of January 31, 2027, then those caches will again incur Extended Support charges starting on February 1, 2027.
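
The month count in that example can be checked with simple date arithmetic; charges run from February 2026 through December 2026 inclusive:

```shell
# Extended Support billing starts 2/2026; the upgrade in the example lands 1/2027.
start_year=2026; start_month=2
end_year=2027;   end_month=1
months_charged=$(( (end_year - start_year) * 12 + (end_month - start_month) ))
echo "$months_charged months of Extended Support charges"
```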

You can avoid ElastiCache Extended Support charges by not creating or restoring a cache on an engine version that is past its ElastiCache end of standard support date.

For more information, see [Amazon ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/).

# Versions with ElastiCache Extended Support
<a name="extended-support-versions"></a>

Redis Open Source Software (OSS) versions 4 and 5 reached their community End of Life in 2020 and 2022, respectively. This means no further updates, bug fixes, or security patches are being released by the community. Standard support for ElastiCache Redis OSS versions 4 and 5 on ElastiCache will end on January 31, 2026. Continuing to use unsupported versions of Redis OSS could leave your data vulnerable to known [Common Vulnerabilities and Exposures](https://nvd.nist.gov/vuln-metrics/cvss) (CVEs).

Starting on February 1, 2026, ElastiCache caches still running on Redis OSS versions 4 and 5 will be automatically enrolled in Extended Support, to provide continuous availability and security. Although Extended Support offers flexibility, we recommend treating the end of standard support as a planning milestone for your production workloads. We strongly encourage you to upgrade your Redis OSS v4 and v5 caches to ElastiCache for Valkey or Redis OSS v6 or later, before the end of standard support.

The following table summarizes the Amazon ElastiCache end of standard support date and Extended Support dates.

**Extended support and End of Life schedule**


| Major Engine Version | End of Standard Support | Start of Extended Support Y1 Premium | Start of Extended Support Y2 Premium | Start of Extended Support Y3 Premium | End of Extended Support and version EOL | 
| --- | --- | --- | --- | --- | --- | 
| Redis OSS v4 | 1/31/2026 | 2/1/2026 | 2/1/2027 | 2/1/2028 | 1/31/2029 | 
| Redis OSS v5 | 1/31/2026 | 2/1/2026 | 2/1/2027 | 2/1/2028 | 1/31/2029 | 
| Redis OSS v6 | 1/31/2027 | 2/1/2027 | 2/1/2028 | 2/1/2029 | 1/31/2030 | 

Extended Support will only be offered for the latest supported patch version of each major Redis OSS version. When Extended Support begins on February 1, 2026, if your Redis OSS v4 and v5 clusters are not already on the latest patch versions, they will be automatically upgraded to v4.0.10 for Redis OSS v4, and v5.0.6 for Redis OSS v5, before being enrolled in Extended Support. This ensures that you receive security updates and bug fixes through Extended Support. You do not need to take any action to upgrade to these latest patch versions as part of the Extended Support transition.

# ElastiCache and customer responsibilities with ElastiCache Extended Support
<a name="extended-support-responsibilities"></a>

Following are the responsibilities of Amazon ElastiCache, and your responsibilities with ElastiCache Extended Support.

**Amazon ElastiCache responsibilities**

After the ElastiCache end of standard support date, Amazon ElastiCache will supply patches, bug fixes, and upgrades for engines that are enrolled in ElastiCache Extended Support. This will occur for up to 3 years, or until you stop using the engines in Extended Support, whichever happens first.

**Your responsibilities**

You're responsible for applying the patches, bug fixes, and upgrades given for caches in ElastiCache Extended Support. Amazon ElastiCache reserves the right to change, replace, or withdraw such patches, bug fixes, and upgrades at any time. If a patch is necessary to address security or critical stability issues, Amazon ElastiCache reserves the right to update your caches with the patch, or to require that you install the patch.

You're also responsible for upgrading your engine to a newer engine version before the ElastiCache end of Extended Support date. The ElastiCache end of Extended Support date is typically 3 years after the ElastiCache end of standard support date. 

If you don't upgrade your engine, then after the ElastiCache end of Extended Support date, Amazon ElastiCache will attempt to upgrade your engine to a newer engine version that's supported under ElastiCache standard support. If the upgrade fails, then Amazon ElastiCache reserves the right to delete the cache that's running the engine past the ElastiCache end of standard support date. However, before doing so, Amazon ElastiCache will preserve your data from that engine.

# Version Management for ElastiCache
<a name="VersionManagement"></a>

Manage how and when your ElastiCache caches and node-based clusters are updated for the Valkey, Memcached, and Redis OSS engines.

## Version management for ElastiCache Serverless Cache
<a name="VersionManagement-serverless"></a>

Manage if and when the ElastiCache Serverless cache is upgraded and perform version upgrades on your own terms and timelines.

ElastiCache Serverless automatically applies the latest minor and patch software version to your cache, without any impact or downtime to your application. No action is required on your end. 

When a new major version is available, ElastiCache Serverless will send you a notification in the console and an event in EventBridge. You can choose to upgrade your cache to the latest major version by modifying your cache using the Console, CLI, or API and selecting the latest engine version. Similar to minor and patch upgrades, major version upgrades are performed without downtime to your application.

## Version management for node-based ElastiCache clusters
<a name="VersionManagement-clusters"></a>

When working with node-based ElastiCache clusters, you control if and when the protocol-compliant software powering your cluster is upgraded to new versions that are supported by ElastiCache, including major, minor, and patch versions. This level of control enables you to maintain compatibility with specific versions, test new versions with your application before deploying in production, and perform version upgrades on your own terms and timelines. You initiate engine version upgrades to your cluster or replication group by modifying it and specifying a new engine version.

Because version upgrades might involve some compatibility risk, they don't occur automatically. You must initiate them. 

**Valkey and Redis OSS clusters**

**Note**  
If a Valkey or Redis OSS cluster is replicated across one or more Regions, the engine version is upgraded for secondary Regions and then for the primary Region.
 ElastiCache for Redis OSS versions are identified with a semantic version comprising a major and a minor component. For example, in Redis OSS 6.2 the major version is 6 and the minor version is 2. When operating node-based clusters, ElastiCache for Redis OSS also exposes the patch component; for example, in Redis OSS 6.2.1 the patch version is 1.  
Major versions are for API-incompatible changes, minor versions are for new functionality added in a backwards-compatible way, and patch versions are for backwards-compatible bug fixes and non-functional changes. 
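
A quick illustration of how such a version string splits into its components, using shell parameter expansion (the version value is arbitrary):

```shell
# Split an engine version string into major, minor, and patch components.
version=6.2.1
major=${version%%.*}
rest=${version#*.}
minor=${rest%%.*}
patch=${rest#*.}
echo "major=$major minor=$minor patch=$patch"
```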

With Valkey and Redis OSS, you initiate engine version upgrades to your cluster or replication group by modifying it and specifying a new engine version. For more information, see [Modifying a replication group](Replication.Modify.md).

**Memcached**

With Memcached, to upgrade to a newer version you must modify your cluster and specify the new engine version you want to use. Upgrading to a newer Memcached version is a destructive process – you lose your data and start with a cold cache. For more information, see [Modifying an ElastiCache cluster](Clusters.Modify.md).
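
A sketch of that modification from the CLI follows. The cluster ID and target version are placeholders, and the command is echoed rather than executed:

```shell
# Placeholder values; remove 'echo' to run the upgrade.
NEW_VERSION=1.6.22
echo aws elasticache modify-cache-cluster \
  --cache-cluster-id myMemcachedCluster \
  --engine-version "$NEW_VERSION" \
  --apply-immediately
```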

Be aware of the following requirements when upgrading from an older version of Memcached to Memcached version 1.4.33 or newer. `CreateCacheCluster` and `ModifyCacheCluster` fail under the following conditions:
+ If `slab_chunk_max > max_item_size`.
+ If `max_item_size modulo slab_chunk_max != 0`.
+ If `max_item_size > ((max_cache_memory - memcached_connections_overhead) / 4)`.

  The value `(max_cache_memory - memcached_connections_overhead)` is the node's memory useable for data. For more information, see [Memcached connection overhead](ParameterGroups.Engine.md#ParameterGroups.Memcached.Overhead).
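
The three conditions can be sanity-checked locally before calling the API. The parameter values below are hypothetical, in bytes:

```shell
# Hypothetical parameter values in bytes; mirrors the three failure conditions above.
slab_chunk_max=524288               # 512 KiB
max_item_size=1048576               # 1 MiB
max_cache_memory=6501171200         # hypothetical node memory
memcached_connections_overhead=100663296

usable=$(( max_cache_memory - memcached_connections_overhead ))
if [ "$slab_chunk_max" -gt "$max_item_size" ]; then
  result="fail: slab_chunk_max > max_item_size"
elif [ $(( max_item_size % slab_chunk_max )) -ne 0 ]; then
  result="fail: max_item_size not a multiple of slab_chunk_max"
elif [ "$max_item_size" -gt $(( usable / 4 )) ]; then
  result="fail: max_item_size exceeds usable memory / 4"
else
  result=ok
fi
echo "$result"
```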

## Supported engines and versions
<a name="supported-engine-versions"></a>

ElastiCache serverless caches support ElastiCache version 7.2 for Valkey and above, ElastiCache version 1.6 for Memcached and above, and ElastiCache 7.0 for Redis OSS and above. 

Node-based ElastiCache clusters support ElastiCache version 7.2 for Valkey and above, ElastiCache version 1.4.5 for Memcached and above, and ElastiCache 4.0.10 for Redis OSS and above.

**Topics**
+ [Supported Valkey versions](#supported-engine-versions.valkey)
+ [ElastiCache version 8.2 for Valkey](#valkey-version-8.2)
+ [ElastiCache version 8.1 for Valkey](#valkey-version-8.1)
+ [ElastiCache version 8.0 for Valkey](#valkey-version-8)
+ [ElastiCache version 7.2.6 for Valkey](#valkey-version-7.2.6)

### Supported Valkey versions
<a name="supported-engine-versions.valkey"></a>

The supported Valkey versions are listed below. Note that Valkey supports most features available in ElastiCache version 7.2 for Redis OSS by default.
+ You can also upgrade ElastiCache clusters running versions earlier than 5.0.6. The process is the same but may incur a longer failover time (30 seconds to 1 minute) during DNS propagation. 
+ Beginning with Redis OSS 7, ElastiCache supports switching between Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled).
+ The Amazon ElastiCache for Redis OSS engine upgrade process is designed to make a best effort to retain your existing data and requires successful Redis OSS replication. 
+ When upgrading the engine, ElastiCache will terminate existing client connections. To minimize downtime during engine upgrades, we recommend you implement [best practices for Redis OSS clients](BestPractices.Clients.redis.md) with error retries and exponential backoff and the best practices for [minimizing downtime during maintenance](BestPractices.MinimizeDowntime.md). 
+ You can't upgrade directly from Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled) when you upgrade your engine. The following procedure shows you how to upgrade from Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled).

**To upgrade from a Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled) engine version**

  1. Make a backup of your Valkey or Redis OSS (cluster mode disabled) cluster or replication group. For more information, see [Taking manual backups](backups-manual.md).

  1. Use the backup to create and seed a Valkey or Redis OSS (cluster mode enabled) cluster with one shard (node group). Specify the new engine version and enable cluster mode when creating the cluster or replication group. For more information, see [Tutorial: Seeding a new node-based cluster with an externally created backup](backups-seeding-redis.md).

  1. Delete the old Valkey or Redis OSS (cluster mode disabled) cluster or replication group. For more information, see [Deleting a cluster in ElastiCache](Clusters.Delete.md) or [Deleting a replication group](Replication.DeletingRepGroup.md).

  1. Scale the new Valkey or Redis OSS (cluster mode enabled) cluster or replication group to the number of shards (node groups) that you need. For more information, see [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md).
+ When upgrading major engine versions, for example from 5.0.6 to 6.0, you need to also choose a new parameter group that is compatible with the new engine version.
+ For single Redis OSS clusters and clusters with Multi-AZ disabled, we recommend that sufficient memory be made available to Redis OSS as described in [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md). In these cases, the primary is unavailable to service requests during the upgrade process.
+ For Redis OSS clusters with Multi-AZ enabled, we also recommend that you schedule engine upgrades during periods of low incoming write traffic. When upgrading to Redis OSS 5.0.6 or above, the primary cluster continues to be available to service requests during the upgrade process. 

  Clusters and replication groups with multiple shards are processed and patched as follows:
  + All shards are processed in parallel. Only one upgrade operation is performed on a shard at any time.
  + In each shard, all replicas are processed before the primary is processed. If there are fewer replicas in a shard, the primary in that shard might be processed before the replicas in other shards are finished processing.
  + Across all the shards, primary nodes are processed in series. Only one primary node is upgraded at a time.
+ If encryption is enabled on your current cluster or replication group, you cannot upgrade to an engine version that does not support encryption, such as from 3.2.6 to 3.2.10.
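
The four-step migration from cluster mode disabled to cluster mode enabled described above can be sketched as a sequence of CLI calls. All names, the node type, the parameter group, and the shard count are placeholders, and each command is echoed rather than executed:

```shell
# Placeholder names throughout; remove each 'echo' to actually run the calls.
SRC_CLUSTER=myCMDCluster        # existing cluster mode disabled cluster
BACKUP_NAME=my-migration-backup
NEW_GROUP=myCMEGroup            # new cluster mode enabled replication group

# 1. Back up the existing cluster.
echo aws elasticache create-snapshot \
  --cache-cluster-id "$SRC_CLUSTER" --snapshot-name "$BACKUP_NAME"

# 2. Create and seed a cluster mode enabled group with one shard.
echo aws elasticache create-replication-group \
  --replication-group-id "$NEW_GROUP" \
  --replication-group-description "cluster mode enabled" \
  --engine valkey --engine-version 8.0 \
  --cache-node-type cache.r7g.large \
  --cache-parameter-group-name default.valkey8.cluster.on \
  --num-node-groups 1 \
  --snapshot-name "$BACKUP_NAME"

# 3. Delete the old cluster mode disabled cluster.
echo aws elasticache delete-cache-cluster --cache-cluster-id "$SRC_CLUSTER"

# 4. Scale the new group out to the shard count you need.
echo aws elasticache modify-replication-group-shard-configuration \
  --replication-group-id "$NEW_GROUP" --node-group-count 3 --apply-immediately
```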

**Memcached considerations**

When upgrading a node-based Memcached cluster, consider the following.
+ Engine version management is designed so that you can have as much control as possible over how patching occurs. However, ElastiCache reserves the right to patch your cluster on your behalf in the unlikely event of a critical security vulnerability in the system or cache software.
+ Because the Memcached engine does not support persistence, Memcached engine version upgrades are always a disruptive process that clears all cache data in the cluster.

### ElastiCache version 8.2 for Valkey
<a name="valkey-version-8.2"></a>

Here are some of the new features introduced in Valkey 8.2 (compared to ElastiCache Valkey 8.1):
+ ElastiCache for Valkey v8.2 provides native support for [vector search](vector-search.md), delivering latency as low as microseconds, with the lowest latency, highest throughput, and best price-performance at a 95% recall rate among popular vector databases on AWS.

For more information on Valkey, see [Valkey](https://valkey.io/).

ElastiCache version 8.2 for Valkey enhances Valkey 8.1 with vector search capabilities based on the [valkey-search module](https://github.com/valkey-io/valkey-search). For more information on the Valkey 8.2 release, see the [release notes](https://github.com/valkey-io/valkey-search/blob/main/00-RELEASENOTES) for valkey-search. Note that ElastiCache v8.2 is compatible with Valkey v8.1.

### ElastiCache version 8.1 for Valkey
<a name="valkey-version-8.1"></a>

Here are some of the new features introduced in Valkey 8.1 (compared to ElastiCache Valkey 8.0):
+ A [new hash table](https://valkey.io/blog/new-hash-table/) implementation that reduces memory overhead to lower memory usage by as much as 20% for common key/value patterns.
+ Native support for [Bloom filters](https://valkey.io/topics/bloomfilters/), a new data type allowing you to perform lookups using as much as 98% less memory compared to using the Set data type.
+ New command [COMMANDLOG](https://valkey.io/commands/commandlog-get/) that records slow executions, large requests, and large replies.
+ New conditional update support for the SET command using the IFEQ argument.
+ Performance improvements, including up to 45% lower latency for the ZRANK command, up to 12x faster performance for PFMERGE and PFCOUNT, and up to 514% higher throughput for BITCOUNT. 

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the Valkey 8.1 release, see the [Valkey 8.1 Release Notes](https://github.com/valkey-io/valkey/blob/8.1/00-RELEASENOTES).

### ElastiCache version 8.0 for Valkey
<a name="valkey-version-8"></a>

Here are some of the new features introduced in Valkey 8.0 (compared to ElastiCache Valkey 7.2.6):
+ Memory efficiency improvements, allowing users to store up to 20% more data per node without any application changes.
+ Newly-introduced per-slot metrics infrastructure for node-based clusters, providing detailed visibility into the performance and resource usage of individual slots.
+ ElastiCache Serverless for Valkey 8.0 can double the supported requests per second (RPS) every 2-3 minutes, reaching 5M RPS per cache from zero in under 13 minutes, with consistent sub-millisecond p50 read latency.

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the Valkey 8 release, see the [Valkey 8 Release Notes](https://github.com/valkey-io/valkey/blob/8.0/00-RELEASENOTES).

### ElastiCache version 7.2.6 for Valkey
<a name="valkey-version-7.2.6"></a>

On October 10 2024, ElastiCache version 7.2.6 for Valkey was released. Here are some of the new features introduced in 7.2 (compared to ElastiCache version 7.1 for Redis OSS):
+ Performance and memory optimizations for various data types: memory optimization for list and set type keys, speed optimization for sorted sets commands, performance optimization for commands with multiple keys in cluster mode, pub/sub performance improvements, performance optimization for SCAN, SSCAN, HSCAN, ZSCAN commands and numerous other smaller optimizations.
+ New WITHSCORE option for ZRANK and ZREVRANK commands
+ CLIENT NO-TOUCH for clients to run commands without affecting LRU/LFU of keys.
+ New command CLUSTER MYSHARDID that returns the Shard ID of the node to logically group nodes in cluster mode based on replication.

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the ElastiCache version 7.2 for Valkey release, see the [Valkey 7.2 release notes](https://github.com/valkey-io/valkey/blob/7.2/00-RELEASENOTES) at Valkey on GitHub. ElastiCache version 7.2 for Valkey includes all changes from ElastiCache version 7.1 for Redis OSS up to ElastiCache version 7.2.4 for Redis OSS; see the [Redis OSS 7.2.4 release notes](https://github.com/valkey-io/valkey/blob/d2c8a4b91e8c0e6aefd1f5bc0bf582cddbe046b7/00-RELEASENOTES).

## ElastiCache version 8.2 for Valkey
<a name="valkey-version-8.2.main"></a>

Here are some of the new features introduced in Valkey 8.2 (compared to ElastiCache Valkey 8.1):
+ ElastiCache for Valkey v8.2 provides native support for [vector search](vector-search.md), delivering latency as low as microseconds-the lowest latency vector search with the highest throughput and best price-performance at 95%\$1 recall rate among popular vector databases on AWS.

For more information on Valkey, see [Valkey](https://valkey.io/).

ElastiCache version 8.2 for Valkey enhances Valkey 8.1 with vector search capabilities based on the [valkey-search module](https://github.com/valkey-io/valkey-search). For more information on the Valkey 8.2 release, see the valkey-search [release notes](https://github.com/valkey-io/valkey-search/blob/main/00-RELEASENOTES). Note that ElastiCache v8.2 is compatible with Valkey v8.1.

## ElastiCache version 8.1 for Valkey
<a name="valkey-version-8.1.main"></a>

Here are some of the new features introduced in Valkey 8.1 (compared to ElastiCache Valkey 8.0):
+ A [new hash table](https://valkey.io/blog/new-hash-table/) implementation that reduces memory overhead to lower memory usage by as much as 20% for common key/value patterns.
+ Native support for [Bloom filters](https://valkey.io/topics/bloomfilters/), a new data type allowing you to perform lookups using as much as 98% less memory compared to using the Set data type.
+ New command [COMMANDLOG](https://valkey.io/commands/commandlog-get/) that records slow executions, large requests, and large replies.
+ New conditional update support for the `SET` command using the `IFEQ` argument.
+ Performance improvements, including up to 45% lower latency for the ZRANK command, up to 12x faster performance for PFMERGE and PFCOUNT, and up to 514% higher throughput for BITCOUNT. 
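To illustrate the space/accuracy trade-off behind Bloom filters, here is a minimal pure-Python sketch of the data structure (this is a conceptual analogue, not the Valkey implementation; the class and parameters are illustrative only). A Bloom filter stores only bit positions derived from hashes, so it can answer membership queries with far less memory than a set, at the cost of occasional false positives:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash positions over an m-bit array.
    Lookups can report false positives but never false negatives."""

    def __init__(self, num_bits: int = 8192, num_hashes: int = 4):
        self.m = num_bits
        self.k = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item: str):
        # Derive k bit positions from slices of a single SHA-256 digest.
        digest = hashlib.sha256(item.encode()).digest()
        for i in range(self.k):
            chunk = int.from_bytes(digest[4 * i: 4 * i + 4], "big")
            yield chunk % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str) -> bool:
        # Present only if every derived bit is set.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
for user in ("alice", "bob", "carol"):
    bf.add(user)

print("alice" in bf)   # added items are always found
```

The memory saving comes from storing a fixed-size bit array (1 KB here) regardless of member size, rather than the members themselves.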

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the Valkey 8.1 release, see [Valkey 8.1 Release Notes](https://github.com/valkey-io/valkey/blob/8.1/00-RELEASENOTES).

## ElastiCache version 8.0 for Valkey
<a name="valkey-version-8.main"></a>

Here are some of the new features introduced in Valkey 8.0 (compared to ElastiCache Valkey 7.2.6):
+ Memory efficiency improvements, allowing users to store up to 20% more data per node without any application changes.
+ Newly introduced per-slot metrics infrastructure for node-based clusters, providing detailed visibility into the performance and resource usage of individual slots.
+ ElastiCache Serverless for Valkey 8.0 can double the supported requests per second (RPS) every 2-3 minutes, reaching 5M RPS per cache from zero in under 13 minutes, with consistent sub-millisecond p50 read latency.

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the Valkey 8 release, see [Valkey 8 Release Notes](https://github.com/valkey-io/valkey/blob/8.0/00-RELEASENOTES).

## ElastiCache version 7.2.6 for Valkey
<a name="valkey-version-7.2.6.main"></a>

On October 10, 2024, ElastiCache version 7.2.6 for Valkey was released. Here are some of the new features introduced in 7.2 (compared to ElastiCache version 7.1 for Redis OSS):
+ Performance and memory optimizations for various data types: memory optimization for list and set type keys, speed optimization for sorted set commands, performance optimization for commands with multiple keys in cluster mode, pub/sub performance improvements, performance optimization for the `SCAN`, `SSCAN`, `HSCAN`, and `ZSCAN` commands, and numerous other smaller optimizations.
+ New `WITHSCORE` option for the `ZRANK` and `ZREVRANK` commands.
+ `CLIENT NO-TOUCH`, which lets clients run commands without affecting the LRU/LFU status of keys.
+ New `CLUSTER MYSHARDID` command that returns the shard ID of the node, to logically group nodes in cluster mode based on replication.

For more information on Valkey, see [Valkey](https://valkey.io/).

For more information on the ElastiCache version 7.2 for Valkey release, see the [Redis OSS 7.2.4 Release Notes](https://github.com/valkey-io/valkey/blob/d2c8a4b91e8c0e6aefd1f5bc0bf582cddbe046b7/00-RELEASENOTES) (ElastiCache version 7.2 for Valkey includes all changes from ElastiCache version 7.1 for Redis OSS up to ElastiCache version 7.2.4 for Redis OSS). See also the [Valkey 7.2 release notes](https://github.com/valkey-io/valkey/blob/7.2/00-RELEASENOTES) at Valkey on GitHub.

## Supported Redis OSS engine versions
<a name="supported-engine-versions.redis"></a>

ElastiCache Serverless caches and node-based clusters support Redis OSS version 7.1 and earlier.

**Topics**
+ [ElastiCache version 7.1 for Redis OSS (enhanced)](#redis-version-7.1)
+ [ElastiCache version 7.0 for Redis OSS (enhanced)](#redis-version-7.0)
+ [ElastiCache version 6.2 for Redis OSS (enhanced)](#redis-version-6.2)
+ [ElastiCache version 6.0 for Redis OSS (enhanced)](#redis-version-6.0)
+ [ElastiCache version 5.0.6 for Redis OSS (enhanced)](#redis-version-5-0.6)
+ [ElastiCache version 5.0.5 for Redis OSS (deprecated, use version 5.0.6)](#redis-version-5-0.5)
+ [ElastiCache version 5.0.4 for Redis OSS (deprecated, use version 5.0.6)](#redis-version-5-0.4)
+ [ElastiCache version 5.0.3 for Redis OSS (deprecated, use version 5.0.6)](#redis-version-5-0.3)
+ [ElastiCache version 5.0.0 for Redis OSS (deprecated, use version 5.0.6)](#redis-version-5-0)
+ [ElastiCache version 4.0.10 for Redis OSS (enhanced)](#redis-version-4-0-10)
+ [Past End of Life (EOL) versions (3.x)](#redis-version-3-2-10-scheduled-eol)
+ [Past End of Life (EOL) versions (2.x)](#redis-version-2-x-eol)

### ElastiCache version 7.1 for Redis OSS (enhanced)
<a name="redis-version-7.1"></a>

This release contains performance improvements that enable workloads to drive higher throughput and lower operation latencies. ElastiCache version 7.1 for Redis OSS introduces [two main enhancements](https://aws.amazon.com/blogs/database/achieve-over-500-million-requests-per-second-per-cluster-with-amazon-elasticache-for-redis-7-1/):

+ Extended the enhanced I/O threads functionality to also handle the presentation layer logic. The enhanced I/O threads now not only read client input, but also parse the input into the Redis OSS binary command format, which is then forwarded to the main thread for execution. This provides a performance gain.
+ Improved the Redis OSS memory access pattern. Execution steps from many data structure operations are interleaved to ensure parallel memory access and reduced memory access latency.

When running ElastiCache on Graviton3-based `r7g.4xlarge` or larger nodes, you can achieve over 1 million requests per second per node. With the performance improvements in ElastiCache version 7.1 for Redis OSS, you can achieve up to 100% more throughput and 50% lower P99 latency relative to ElastiCache version 7.0 for Redis OSS. These enhancements are enabled on node sizes with at least 8 physical cores (`2xlarge` and larger on Graviton, `4xlarge` and larger on x86) regardless of CPU type, and require no client changes.

**Note**  
ElastiCache v7.1 is compatible with Redis OSS v7.0.

### ElastiCache version 7.0 for Redis OSS (enhanced)
<a name="redis-version-7.0"></a>

ElastiCache for Redis OSS 7.0 adds a number of improvements and support for new functionality:
+ [Functions](https://valkey.io/topics/functions-intro/): ElastiCache for Redis OSS 7 adds support for Redis OSS Functions, and provides a managed experience that enables developers to run [Lua scripts](https://valkey.io/topics/eval-intro/) with application logic stored on the ElastiCache cluster, without requiring clients to re-send the scripts to the server with every connection. 
+ [ACL improvements](https://valkey.io/topics/acl/): Valkey and Redis OSS 7 add support for the next version of Access Control Lists (ACLs). Clients can now specify multiple sets of permissions on specific keys or keyspaces in Valkey and Redis OSS. 
+ [Sharded Pub/Sub](https://valkey.io/topics/pubsub/): ElastiCache for Valkey and Redis OSS 7 adds support for running Pub/Sub functionality in a sharded way when running ElastiCache in Cluster Mode Enabled (CME). Pub/Sub capabilities enable publishers to issue messages to any number of subscribers on a channel. Channels are bound to a shard in the ElastiCache cluster, eliminating the need to propagate channel information across shards, resulting in improved scalability. 
+ Enhanced I/O multiplexing: ElastiCache for Valkey and Redis OSS 7 introduces enhanced I/O multiplexing, which delivers increased throughput and reduced latency for high-throughput workloads that have many concurrent client connections to an ElastiCache cluster. For example, when using a cluster of r6g.xlarge nodes and running 5200 concurrent clients, you can achieve up to 72% increased throughput (read and write operations per second) and up to 71% decreased P99 latency, compared with ElastiCache version 6 for Redis OSS. 
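Sharded Pub/Sub works because a channel, like a key, maps deterministically to one of the cluster's 16384 slots. The following is a sketch of that mapping as defined by the open-source cluster specification (CRC16/XMODEM modulo 16384); it ignores hash-tag (`{...}`) handling for brevity, and the `channel_slot` helper is illustrative, not an API:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0), the checksum
    that cluster mode uses to map keys and sharded Pub/Sub channels to slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def channel_slot(channel: str) -> int:
    # The channel is bound to one of 16384 slots, so its messages stay
    # within the shard that owns that slot -- no cross-shard propagation.
    return crc16_xmodem(channel.encode()) % 16384

print(channel_slot("orders"))  # a slot in the range [0, 16384)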

For more information on Valkey, see [Valkey](https://valkey.io/). For more information on the Redis OSS 7.0 release, see [Redis OSS 7.0 Release Notes](https://github.com/redis/redis/blob/7.0/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 6.2 for Redis OSS (enhanced)
<a name="redis-version-6.2"></a>

ElastiCache for Redis OSS 6.2 includes performance improvements for TLS-enabled clusters using x86 node types with 8 vCPUs or more, or Graviton2 node types with 4 vCPUs or more. These enhancements improve throughput and reduce client connection establishment time by offloading encryption to other vCPUs. With Redis OSS 6.2, you can also manage access to Pub/Sub channels with Access Control List (ACL) rules.

 With this version, we also introduce support for data tiering on cluster nodes containing locally attached NVMe SSD. For more information, see [Data tiering in ElastiCache](data-tiering.md).

Redis OSS engine version 6.2.6 also introduces support for native JavaScript Object Notation (JSON) format, a simple, schemaless way to encode complex datasets inside Redis OSS clusters. With JSON support, you can leverage the performance and APIs of Redis OSS for applications that operate over JSON. For more information, see [Getting started with JSON](json-gs.md). Also included are the JSON-related metrics `JsonBasedCmds` and `JsonBasedCmdsLatency`, which are incorporated into CloudWatch to monitor the usage of this datatype. For more information, see [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md).

You specify the engine version by using `6.2`. ElastiCache automatically uses the preferred patch version of Redis OSS 6.2 that is available. For example, when you create or modify a cluster, you set the `--engine-version` parameter to `6.2`. The cluster is launched with the preferred patch version of Redis OSS 6.2 available at creation or modification time. Specifying engine version `6.x` in the API results in the latest minor version of Redis OSS 6.

For existing 6.0 clusters, you can opt in to the next auto minor version upgrade by setting the `AutoMinorVersionUpgrade` parameter to `yes` in the `CreateCacheCluster`, `ModifyCacheCluster`, `CreateReplicationGroup` or `ModifyReplicationGroup` APIs. ElastiCache upgrades the minor version of your existing 6.0 clusters to 6.2 using self-service updates. For more information, see [Self-service updates in Amazon ElastiCache](Self-Service-Updates.md).

When calling the DescribeCacheEngineVersions API, the `EngineVersion` parameter value will be set to 6.2 and the actual engine version with the patch version will be returned in the `CacheEngineVersionDescription` field. 

For more information on the Redis OSS 6.2 release, see [Redis OSS 6.2 Release Notes](https://github.com/redis/redis/blob/6.2/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 6.0 for Redis OSS (enhanced)
<a name="redis-version-6.0"></a>

Amazon ElastiCache introduces the next version of ElastiCache for the Redis OSS engine, which includes [Authenticating Users with Role Based Access Control](Clusters.RBAC.md), client-side caching and significant operational improvements. 

 Beginning with Redis OSS 6.0, ElastiCache will offer a single version for each Redis OSS minor release, rather than offering multiple patch versions. ElastiCache will automatically manage the patch version of your running clusters, ensuring improved performance and enhanced security. 

You can also opt in to the next auto minor version upgrade by setting the `AutoMinorVersionUpgrade` parameter to `yes`, and ElastiCache will manage the minor version upgrade through self-service updates. For more information, see [Service updates in ElastiCache](Self-Service-Updates.md). 

You specify the engine version by using `6.0`. ElastiCache automatically uses the preferred patch version of Redis OSS 6.0 that is available. For example, when you create or modify a cluster, you set the `--engine-version` parameter to `6.0`. The cluster is launched with the preferred patch version of Redis OSS 6.0 available at creation or modification time. Any request that specifies a patch version is rejected, an exception is thrown, and the process fails.

When calling the DescribeCacheEngineVersions API, the `EngineVersion` parameter value will be set to 6.0 and the actual engine version with the patch version will be returned in the `CacheEngineVersionDescription` field. 

For more information on the Redis OSS 6.0 release, see [Redis OSS 6.0 Release Notes](https://github.com/redis/redis/blob/6.0/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 5.0.6 for Redis OSS (enhanced)
<a name="redis-version-5-0.6"></a>

Amazon ElastiCache introduces the next version of ElastiCache for the Redis OSS engine, which includes bug fixes and the following cumulative updates: 
+ Engine stability guarantee in special conditions.
+ Improved Hyperloglog error handling.
+ Enhanced handshake commands for reliable replication.
+ Consistent message delivery tracking via `XCLAIM` command.
+ Improved `LFU` field management in objects.
+ Enhanced transaction management when using `ZPOP`. 
+ Ability to rename commands: A parameter called `rename-commands` that allows you to rename potentially dangerous or expensive Redis OSS commands that might cause accidental data loss, such as `FLUSHALL` or `FLUSHDB`. This is similar to the rename-command configuration in open source Redis OSS. However, ElastiCache has improved the experience by providing a fully managed workflow. The command name changes are applied immediately, and automatically propagated across all nodes in the cluster that contain the command list. There is no intervention required on your part, such as rebooting nodes. 

  The following examples demonstrate how to modify existing parameter groups. They include the `rename-commands` parameter, which is a space-separated list of commands you want to rename:

  ```
  aws elasticache modify-cache-parameter-group --cache-parameter-group-name custom_param_group \
    --parameter-name-values "ParameterName=rename-commands, ParameterValue='flushall restrictedflushall'" --region region
  ```

  In this example, the *rename-commands* parameter is used to rename the `flushall` command to `restrictedflushall`.

  To rename multiple commands, use the following:

  ```
  aws elasticache modify-cache-parameter-group --cache-parameter-group-name custom_param_group \
    --parameter-name-values "ParameterName=rename-commands, ParameterValue='flushall restrictedflushall flushdb restrictedflushdb'" --region region
  ```

  To revert any change, re-run the command and exclude any renamed values from the `ParameterValue` list that you want to retain, as shown following:

  ```
  aws elasticache modify-cache-parameter-group --cache-parameter-group-name custom_param_group \
    --parameter-name-values "ParameterName=rename-commands, ParameterValue='flushall restrictedflushall'" --region region
  ```

  In this case, the `flushall` command is renamed to `restrictedflushall` and any other renamed commands revert to their original command names.
**Note**  
When renaming commands, the following limitations apply:  
All renamed commands must be alphanumeric.  
The maximum length of new command names is 20 alphanumeric characters.  
When renaming commands, ensure that you update the parameter group associated with your cluster.  
To prevent a command's use entirely, rename it to the keyword `blocked`, as shown following:  

    ```
    aws elasticache modify-cache-parameter-group --cache-parameter-group-name custom_param_group \
      --parameter-name-values "ParameterName=rename-commands, ParameterValue='flushall blocked'" --region region
    ```

  For more information on the parameter changes and a list of what commands are eligible for renaming, see [Redis OSS 5.0.3 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.5-0-3).
+ Redis OSS Streams: This models a log data structure that allows producers to append new items in real time. It also allows consumers to consume messages either in a blocking or nonblocking fashion. Streams also allow consumer groups, which let a group of clients cooperatively consume different portions of the same stream of messages, similar to [Apache Kafka](https://kafka.apache.org/documentation/). For more information, see [Streams](https://valkey.io/topics/streams-intro).
+ Support for a family of stream commands, such as `XADD`, `XRANGE` and `XREAD`. For more information, see [Streams Commands](https://valkey.io/commands/#stream).
+ A number of new and renamed parameters. For more information, see [Redis OSS 5.0.0 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.5.0).
+ A new Redis OSS metric, `StreamBasedCmds`.
+ Slightly faster snapshot time for Redis OSS nodes.
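The consumer-group semantics described above can be sketched with a hypothetical in-memory analogue (real applications use `XADD` and `XREADGROUP` against the server; `MiniStream` below is purely illustrative). The key property: each entry is delivered to exactly one consumer within a group, while each group independently sees the whole stream:

```python
import itertools
from collections import defaultdict

class MiniStream:
    """In-memory analogue of a stream with consumer groups: entries are
    appended once, and each group hands each entry to exactly one reader."""

    def __init__(self):
        self.entries = []                # (entry_id, payload) pairs
        self.cursors = defaultdict(int)  # per-group read position
        self._seq = itertools.count(1)

    def xadd(self, payload: dict) -> str:
        entry_id = f"0-{next(self._seq)}"
        self.entries.append((entry_id, payload))
        return entry_id

    def xreadgroup(self, group: str):
        """Return the next undelivered entry for this group, or None."""
        pos = self.cursors[group]
        if pos >= len(self.entries):
            return None
        self.cursors[group] += 1
        return self.entries[pos]

stream = MiniStream()
stream.xadd({"event": "click"})
stream.xadd({"event": "view"})

# Two reads in the same group split the entries between them...
first = stream.xreadgroup("analytics")
second = stream.xreadgroup("analytics")
# ...while a different group sees the stream from the beginning.
other = stream.xreadgroup("billing")
```

This sketch omits per-consumer pending-entry tracking and acknowledgment (`XACK`), which the real engine provides for reliable delivery.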

**Important**  
ElastiCache has back-ported two critical bug fixes from [Redis OSS open source version 5.0.1](https://github.com/redis/redis/blob/5.0/00-RELEASENOTES). They are listed following:  
`RESTORE` mismatch reply when certain keys have already expired.  
The `XCLAIM` command can potentially return a wrong entry or desynchronize the protocol.  
Both of these bug fixes are included in ElastiCache support for Redis OSS engine version 5.0.0 and are carried forward in future version updates.

For more information, see [Redis OSS 5.0.6 Release Notes](https://github.com/redis/redis/blob/5.0/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 5.0.5 for Redis OSS (deprecated, use version 5.0.6)
<a name="redis-version-5-0.5"></a>

Amazon ElastiCache introduces the next version of ElastiCache for the Redis OSS engine. It includes online configuration changes for ElastiCache auto-failover clusters during all planned operations. You can now scale your cluster, upgrade the Redis OSS engine version, and apply patches and maintenance updates while the cluster stays online and continues serving incoming requests. It also includes bug fixes.

For more information, see [Redis OSS 5.0.5 Release Notes](https://github.com/redis/redis/blob/5.0/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 5.0.4 for Redis OSS (deprecated, use version 5.0.6)
<a name="redis-version-5-0.4"></a>

Amazon ElastiCache introduces the next version of the Redis OSS engine supported by ElastiCache. It includes the following enhancements:
+ Engine stability guarantee in special conditions.
+ Improved Hyperloglog error handling.
+ Enhanced handshake commands for reliable replication.
+ Consistent message delivery tracking via `XCLAIM` command.
+ Improved `LFU` field management in objects.
+ Enhanced transaction management when using `ZPOP`. 

For more information, see [Redis OSS 5.0.4 Release Notes](https://github.com/redis/redis/blob/5.0/00-RELEASENOTES) at Redis OSS on GitHub.

### ElastiCache version 5.0.3 for Redis OSS (deprecated, use version 5.0.6)
<a name="redis-version-5-0.3"></a>

Amazon ElastiCache introduces the next version of ElastiCache for the Redis OSS engine, which includes bug fixes. 

### ElastiCache version 5.0.0 for Redis OSS (deprecated, use version 5.0.6)
<a name="redis-version-5-0"></a>

Amazon ElastiCache introduces the next major version of ElastiCache for the Redis OSS engine. ElastiCache version 5.0.0 for Redis OSS brings support for the following improvements:
+ Redis OSS Streams: This models a log data structure that allows producers to append new items in real time. It also allows consumers to consume messages either in a blocking or nonblocking fashion. Streams also allow consumer groups, which let a group of clients cooperatively consume different portions of the same stream of messages, similar to [Apache Kafka](https://kafka.apache.org/documentation/). For more information, see [Streams](https://valkey.io/topics/streams-intro).
+ Support for a family of stream commands, such as `XADD`, `XRANGE` and `XREAD`. For more information, see [Streams Commands](https://valkey.io/commands/#stream).
+ A number of new and renamed parameters. For more information, see [Redis OSS 5.0.0 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.5.0).
+ A new Redis OSS metric, `StreamBasedCmds`.
+ Slightly faster snapshot time for Redis OSS nodes.

### ElastiCache version 4.0.10 for Redis OSS (enhanced)
<a name="redis-version-4-0-10"></a>

Amazon ElastiCache introduces the next major version of ElastiCache for the Redis OSS engine. ElastiCache version 4.0.10 for Redis OSS brings support for the following improvements:
+ Both online cluster resizing and encryption in a single ElastiCache version. For more information, see the following:
  + [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)
  + [Online resharding for Valkey or Redis OSS (cluster mode enabled)](scaling-redis-cluster-mode-enabled.md#redis-cluster-resharding-online)
  + [Data security in Amazon ElastiCache](encryption.md)
+ A number of new parameters. For more information, see [Redis OSS 4.0.10 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.4-0-10).
+ Support for a family of memory commands, such as `MEMORY`. For more information, see [Commands](https://valkey.io/commands) (search on MEMORY).
+ Support for online memory defragmentation, allowing more efficient memory utilization and making more memory available for your data.
+ Support for asynchronous flushes and deletes. ElastiCache for Redis OSS supports commands like `UNLINK`, `FLUSHDB` and `FLUSHALL` to run in a different thread from the main thread. Doing this helps improve performance and response times for your applications by freeing memory asynchronously.
+ A new Redis OSS metric, `ActiveDefragHits`. For more information, see [Metrics for Redis OSS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CacheMetrics.Redis.html).
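The benefit of asynchronous deletion (`UNLINK` versus `DEL`) is that reclaiming a large value happens off the command path. Here is a hypothetical sketch of the pattern, background reclamation on a worker thread, not the engine's actual implementation (`AsyncStore` and its members are illustrative names):

```python
import queue
import threading

class AsyncStore:
    """Sketch of UNLINK-style deletion: the key disappears immediately,
    while the (possibly large) value is reclaimed on a worker thread."""

    def __init__(self):
        self.data = {}
        self._garbage = queue.Queue()
        threading.Thread(target=self._reaper, daemon=True).start()

    def _reaper(self):
        while True:
            value = self._garbage.get()
            del value                 # stand-in for expensive deallocation
            self._garbage.task_done()

    def unlink(self, key: str) -> bool:
        # O(1) on the caller's thread: just detach the value and enqueue it.
        value = self.data.pop(key, None)
        if value is None:
            return False
        self._garbage.put(value)
        return True

store = AsyncStore()
store.data["big"] = list(range(1_000_000))
store.unlink("big")        # returns immediately; the key is already gone
store._garbage.join()      # (demo only) wait for background reclamation
```

The caller never blocks on the size of the value being deleted, which is exactly why `UNLINK`, `FLUSHDB ASYNC`, and `FLUSHALL ASYNC` improve response times for large datasets.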

Redis OSS (cluster mode disabled) users running ElastiCache version 3.2.10 for Redis OSS can use the console to upgrade their clusters via online upgrade.


**Comparing ElastiCache cluster resizing and encryption support**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/engine-versions.html)

### Past End of Life (EOL) versions (3.x)
<a name="redis-version-3-2-10-scheduled-eol"></a>

#### ElastiCache version 3.2.10 for Redis OSS (enhanced)
<a name="redis-version-3-2-10"></a>

Amazon ElastiCache introduces the next major version of ElastiCache for the Redis OSS engine. ElastiCache version 3.2.10 for Redis OSS (enhanced) introduces online cluster resizing to add or remove shards from the cluster while it continues to serve incoming I/O requests. ElastiCache for Redis OSS 3.2.10 users have all the functionality of earlier Redis OSS versions except the ability to encrypt their data. This ability is currently available only in version 3.2.6. 


**Comparing ElastiCache versions 3.2.6 and 3.2.10 for Redis OSS**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/engine-versions.html)

For more information, see the following:
+ [Online resharding for Valkey or Redis OSS (cluster mode enabled)](scaling-redis-cluster-mode-enabled.md#redis-cluster-resharding-online)
+ [Online cluster resizing](best-practices-online-resharding.md)

#### ElastiCache version 3.2.6 for Redis OSS (enhanced)
<a name="redis-version-3-2-6"></a>

Amazon ElastiCache introduces the next major version of ElastiCache for the Redis OSS engine. ElastiCache version 3.2.6 for Redis OSS users have access to all the functionality of earlier Redis OSS versions, plus the option to encrypt their data. For more information, see the following:
+ [ElastiCache in-transit encryption (TLS)](in-transit-encryption.md)
+ [At-Rest Encryption in ElastiCache](at-rest-encryption.md)
+ [Compliance validation for Amazon ElastiCache](elasticache-compliance.md)

#### ElastiCache version 3.2.4 for Redis OSS (enhanced)
<a name="redis-version-3-2-4"></a>

Amazon ElastiCache version 3.2.4 introduces the next major version of ElastiCache for the Redis OSS engine. ElastiCache 3.2.4 users have all the functionality of earlier Redis OSS versions available to them, plus the option to run in *cluster mode* or *non-cluster mode*. The following table summarizes the differences.


**Comparing Redis OSS 3.2.4 non-cluster mode and cluster mode**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/engine-versions.html)

**Notes:**
+ **Partitioning** – the ability to split your data across 2 to 500 node groups (shards) with replication support for each node group.
+ **Geospatial indexing** – Redis OSS 3.2.4 introduces support for geospatial indexing via six GEO commands. For more information, see [Commands: GEO](http://valkey.io/commands#geo) on the Valkey Commands page (filtered for GEO).
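A GEODIST-style distance query boils down to great-circle math over stored coordinates. A simplified sketch using the haversine formula follows (the coordinates are the Palermo/Catania pair from the upstream GEO documentation; the Earth-radius constant and the omission of the engine's geohash encoding are simplifying assumptions):

```python
import math

def haversine_m(lon1: float, lat1: float, lon2: float, lat2: float) -> float:
    """Great-circle distance in meters between two (longitude, latitude)
    points, via the haversine formula on a spherical Earth model."""
    earth_radius_m = 6372797.560856          # assumed mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * earth_radius_m * math.asin(math.sqrt(a))

# Classic example coordinates: GEOADD Sicily 13.361389 38.115556 "Palermo"
#                                             15.087269 37.502669 "Catania"
palermo = (13.361389, 38.115556)
catania = (15.087269, 37.502669)
distance = haversine_m(*palermo, *catania)   # roughly 166 km
```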

For information about additional Redis OSS 3 features, see [Redis OSS 3.2 release notes](https://github.com/redis/redis/blob/3.2/00-RELEASENOTES) and [Redis OSS 3.0 release notes](https://github.com/redis/redis/blob/3.0/00-RELEASENOTES).

Currently ElastiCache managed Valkey or Redis OSS (cluster mode enabled) does not support the following Redis OSS 3.2 features:
+ Replica migration
+ Cluster rebalancing
+ Lua debugger

ElastiCache disables the following Redis OSS 3.2 management commands:
+ `cluster meet`
+ `cluster replicate`
+ `cluster flushslots`
+ `cluster addslots`
+ `cluster delslots`
+ `cluster setslot`
+ `cluster saveconfig`
+ `cluster forget`
+ `cluster failover`
+ `cluster bumpepoch`
+ `cluster set-config-epoch`
+ `cluster reset`

For information about Redis OSS 3.2.4 parameters, see [Redis OSS 3.2.4 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.3-2-4).

### Past End of Life (EOL) versions (2.x)
<a name="redis-version-2-x-eol"></a>

#### ElastiCache version 2.8.24 for Redis OSS (enhanced)
<a name="redis-version-2-8-24"></a>

Redis OSS improvements added since version 2.8.23 include bug fixes and logging of bad memory access addresses. For more information, see [Redis OSS 2.8 release notes](https://github.com/redis/redis/blob/2.8/00-RELEASENOTES). 

#### ElastiCache version 2.8.23 for Redis OSS (enhanced)
<a name="redis-version-2-8-23"></a>

Redis OSS improvements added since version 2.8.22 include bug fixes. For more information, see [Redis OSS 2.8 release notes](https://github.com/redis/redis/blob/2.8/00-RELEASENOTES). This release also includes support for the new parameter `close-on-slave-write` which, if enabled, disconnects clients who attempt to write to a read-only replica.

For more information on Redis OSS 2.8.23 parameters, see [Redis OSS 2.8.23 (enhanced) added parameters](ParameterGroups.Engine.md#ParameterGroups.Redis.2-8-23) in the ElastiCache User Guide.

#### ElastiCache version 2.8.22 for Redis OSS (enhanced)
<a name="redis-version-2-8-22"></a>

Redis OSS improvements added since version 2.8.21 include the following:
+ Support for forkless backups and synchronizations, which allows you to allocate less memory for backup overhead and more for your application. For more information, see [How synchronization and backup are implemented](Replication.Redis.Versions.md). The forkless process can impact both latency and throughput; when there is high write throughput and a replica re-syncs, the replica can be unreachable for the entire time it is syncing.
+ If there is a failover, replication groups now recover faster because replicas perform partial syncs with the primary rather than full syncs whenever possible. Additionally, both the primary and replicas no longer use the disk during syncs, providing further speed gains.
+ Support for two new CloudWatch metrics. 
  + `ReplicationBytes` – The number of bytes a replication group's primary cluster is sending to the read replicas.
  + `SaveInProgress` – A binary value that indicates whether or not there is a background save process running.

   For more information, see [Monitoring use with CloudWatch Metrics](CacheMetrics.md).
+ A number of critical bug fixes in replication PSYNC behavior. For more information, see [Redis OSS 2.8 release notes](https://github.com/redis/redis/blob/2.8/00-RELEASENOTES).
+ To maintain enhanced replication performance in Multi-AZ replication groups and for increased cluster stability, non-ElastiCache replicas are no longer supported.
+ To improve data consistency between the primary cluster and replicas in a replication group, the replicas no longer evict keys independent of the primary cluster.
+ Redis OSS configuration variables `appendonly` and `appendfsync` are not supported on Redis OSS version 2.8.22 and later.
+ In low-memory situations, clients with a large output buffer might be disconnected from a replica cluster. If disconnected, the client needs to reconnect. Such situations are most likely to occur for PUBSUB clients.

#### ElastiCache version 2.8.21 for Redis OSS
<a name="redis-version-2-8-21"></a>

Redis OSS improvements added since version 2.8.19 include a number of bug fixes. For more information, see [Redis OSS 2.8 release notes](https://github.com/redis/redis/blob/2.8/00-RELEASENOTES).

#### ElastiCache version 2.8.19 for Redis OSS
<a name="redis-version-2-8-19"></a>

Redis OSS improvements added since version 2.8.6 include the following:
+ Support for HyperLogLog. For more information, see [Redis OSS new data structure: HyperLogLog](http://antirez.com/news/75).
+ The sorted set data type now has support for lexicographic range queries with the new commands `ZRANGEBYLEX`, `ZLEXCOUNT`, and `ZREMRANGEBYLEX`.
+ To prevent a primary node from sending stale data to replica nodes, the master SYNC fails if a background save (`bgsave`) child process is aborted.
+ Support for the *HyperLogLogBasedCommands* CloudWatch metric. For more information, see [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md).
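When all members of a sorted set share the same score, the set is ordered lexicographically, and `ZRANGEBYLEX` returns a contiguous slice of that ordering. A minimal sketch of the idea over a sorted Python list follows (it treats both bounds as inclusive and omits the real command's `[`/`(` bracket syntax and `-`/`+` infinity markers):

```python
import bisect

members = sorted(["apple", "banana", "cherry", "date", "fig", "grape"])

def zrangebylex(items: list, lo: str, hi: str) -> list:
    """Return members in the inclusive lexicographic range [lo, hi],
    using binary search the way a sorted-set index would."""
    left = bisect.bisect_left(items, lo)
    right = bisect.bisect_right(items, hi)
    return items[left:right]

print(zrangebylex(members, "banana", "date"))  # ['banana', 'cherry', 'date']
```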

#### ElastiCache version 2.8.6 for Redis OSS
<a name="redis-version-2-8-6"></a>

Redis OSS improvements added since version 2.6.13 include the following:
+ Improved resiliency and fault tolerance for read replicas.
+ Support for partial resynchronization.
+ Support for user-defined minimum number of read replicas that must be available at all times.
+ Full support for pub/sub—notifying clients of events on the server.
+ Automatic detection of a primary node failure and failover of your primary node to a secondary node.

#### ElastiCache version 2.6.13 for Redis OSS
<a name="redis-version-2-6-13"></a>

ElastiCache version 2.6.13 for Redis OSS was the initial version of ElastiCache that supported Redis OSS. Multi-AZ is not supported on ElastiCache version 2.6.13 for Redis OSS.

## ElastiCache versions for Redis OSS end of life schedule
<a name="deprecated-engine-versions"></a>

This section defines end of life (EOL) dates for older major versions as they are announced. This allows you to make version and upgrade decisions for the future.

**Note**  
ElastiCache versions 5.0.0 to 5.0.5 for Redis OSS are deprecated. Use version 5.0.6 or greater.

The following table shows the schedule of [Extended Support](extended-support.md) for ElastiCache for Redis OSS engines.

**Extended Support and End of Life schedule**


| Major Engine Version | End of Standard Support | Start of Extended Support Y1 Premium | Start of Extended Support Y2 Premium | Start of Extended Support Y3 Premium | End of Extended Support and version EOL | 
| --- | --- | --- | --- | --- | --- | 
| Redis OSS v4 | 1/31/2026 | 2/1/2026 | 2/1/2027 | 2/1/2028 | 1/31/2029 | 
| Redis OSS v5 | 1/31/2026 | 2/1/2026 | 2/1/2027 | 2/1/2028 | 1/31/2029 | 
| Redis OSS v6 | 1/31/2027 | 2/1/2027 | 2/1/2028 | 2/1/2029 | 1/31/2030 | 

The following table summarizes each version and its announced EOL date, as well as the recommended upgrade target version. 

**Past EOL**


| Source Major Version | Source Minor Versions | Recommended Upgrade Target | EOL Date | 
| --- | --- | --- | --- | 
|  Version 3 |  3.2.4, 3.2.6 and 3.2.10  |  Version 6.2 or higher  For US-ISO-EAST-1, US-ISO-WEST-1, and US-ISOB-EAST-1 Regions, we recommend 5.0.6 or higher.   |  July 31, 2023  | 
|  Version 2  |  2.8.24, 2.8.23, 2.8.22, 2.8.21, 2.8.19, 2.8.12, 2.8.6, 2.6.13  |  Version 6.2 or higher  For US-ISO-EAST-1, US-ISO-WEST-1, and US-ISOB-EAST-1 Regions, we recommend 5.0.6 or higher.   |  January 13, 2023  | 

## Supported ElastiCache for Memcached versions
<a name="supported-engine-versions-mc"></a>

ElastiCache supports the following Memcached versions and upgrading to newer versions. When upgrading to a newer version, pay careful attention to the conditions that, if not met, cause your upgrade to fail.

**Topics**
+ [ElastiCache version 1.6.22 for Memcached](#memcached-version-1-6-22)
+ [ElastiCache version 1.6.17 for Memcached](#memcached-version-1-6-17)
+ [ElastiCache version 1.6.12 for Memcached](#memcached-version-1-6-12)
+ [ElastiCache version 1.6.6 for Memcached](#memcached-version-1-6-6)
+ [ElastiCache version 1.5.16 for Memcached](#memcached-version-1-5-16)
+ [ElastiCache version 1.5.10 for Memcached](#memcached-version-1-5-10)
+ [ElastiCache version 1.4.34 for Memcached](#memcached-version-1-4-34)
+ [ElastiCache version 1.4.33 for Memcached](#memcached-version-1-4-33)
+ [ElastiCache version 1.4.24 for Memcached](#memcached-version-1-4-24)
+ [ElastiCache version 1.4.14 for Memcached](#memcached-version-1-4-14)
+ [ElastiCache version 1.4.5 for Memcached](#memcached-version-1-4-5)

### ElastiCache version 1.6.22 for Memcached
<a name="memcached-version-1-6-22"></a>

ElastiCache version 1.6.22 for Memcached adds support for Memcached version 1.6.22. It includes no new features, but does include bug fixes and cumulative updates from [Memcached 1.6.18](https://github.com/memcached/memcached/wiki/ReleaseNotes1618). 

For more information, see [ReleaseNotes1622](https://github.com/memcached/memcached/wiki/ReleaseNotes1622) at Memcached on GitHub.

### ElastiCache version 1.6.17 for Memcached
<a name="memcached-version-1-6-17"></a>

ElastiCache version 1.6.17 for Memcached adds support for Memcached engine version 1.6.17. It includes no new features, but does include bug fixes and cumulative updates from [Memcached 1.6.17](https://github.com/memcached/memcached/wiki/ReleaseNotes1617). 

For more information, see [ReleaseNotes1617](https://github.com/memcached/memcached/wiki/ReleaseNotes1617) at Memcached on GitHub.

### ElastiCache version 1.6.12 for Memcached
<a name="memcached-version-1-6-12"></a>

ElastiCache version 1.6.12 for Memcached adds support for Memcached engine version 1.6.12 and encryption in transit. It also includes bug fixes and cumulative updates from [Memcached 1.6.6](https://github.com/memcached/memcached/wiki/ReleaseNotes166). 

For more information, see [ReleaseNotes1612](https://github.com/memcached/memcached/wiki/ReleaseNotes1612) at Memcached on GitHub.

### ElastiCache version 1.6.6 for Memcached
<a name="memcached-version-1-6-6"></a>

ElastiCache version 1.6.6 for Memcached adds support for Memcached version 1.6.6. It includes no new features, but does include bug fixes and cumulative updates from [Memcached 1.5.16](https://github.com/memcached/memcached/wiki/ReleaseNotes1.5.16). ElastiCache for Memcached does not include support for [Extstore](https://memcached.org/extstore).

For more information, see [ReleaseNotes166](https://github.com/memcached/memcached/wiki/ReleaseNotes166) at Memcached on GitHub.

### ElastiCache version 1.5.16 for Memcached
<a name="memcached-version-1-5-16"></a>

ElastiCache version 1.5.16 for Memcached adds support for Memcached version 1.5.16. It includes no new features, but does include bug fixes and cumulative updates from [Memcached 1.5.14](https://github.com/memcached/memcached/wiki/ReleaseNotes1514) and [Memcached 1.5.15](https://github.com/memcached/memcached/wiki/ReleaseNotes1515).

For more information, see [Memcached 1.5.16 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1516) at Memcached on GitHub.

### ElastiCache version 1.5.10 for Memcached
<a name="memcached-version-1-5-10"></a>

ElastiCache version 1.5.10 for Memcached supports the following Memcached features:
+ Automated slab rebalancing.
+ Faster hash table lookups with the `murmur3` algorithm.
+ Segmented LRU algorithm.
+ LRU crawler to background-reclaim memory.
+ `--enable-seccomp`: A compile-time option.

It also introduces the `no_modern` and `inline_ascii_resp` parameters. For more information, see [Memcached 1.5.10 parameter changes](ParameterGroups.Engine.md#ParameterGroups.Memcached.1-5-10).

Memcached improvements added since ElastiCache version 1.4.34 for Memcached include the following:
+ Cumulative fixes, such as for ASCII multigets, CVE-2017-9951, and limiting crawls for `metadumper`.
+ Better connection management by closing connections at the connection limit.
+ Improved item-size management for items larger than 1 MB.
+ Better performance and lower memory overhead by reducing per-item memory requirements by a few bytes.

For more information, see [Memcached 1.5.10 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1510) at Memcached on GitHub.

### ElastiCache version 1.4.34 for Memcached
<a name="memcached-version-1-4-34"></a>

ElastiCache version 1.4.34 for Memcached adds no new features to version 1.4.33. Version 1.4.34 is a larger-than-usual bug fix release.

For more information, see [Memcached 1.4.34 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1434) at Memcached on GitHub.

### ElastiCache version 1.4.33 for Memcached
<a name="memcached-version-1-4-33"></a>

Improvements added since version 1.4.24 include the following:
+ Ability to dump all of the metadata for a particular slab class, a list of slab classes, or all slab classes. For more information, see [Memcached 1.4.31 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1431).
+ Improved support for large items over the 1 megabyte default. For more information, see [Memcached 1.4.29 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1429).
+ Ability to specify how long a client can be idle before being asked to close.

  Ability to dynamically increase the amount of memory available to Memcached without having to restart the cluster. For more information, see [Memcached 1.4.27 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1427).
+ Logging of `fetchers`, `mutations`, and `evictions` is now supported. For more information, see [Memcached 1.4.26 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1426).
+ Freed memory can be reclaimed back into a global pool and reassigned to new slab classes. For more information, see [Memcached 1.4.25 Release Notes](https://github.com/memcached/memcached/wiki/ReleaseNotes1425).
+ Several bug fixes.
+ Some new commands and parameters. For a list, see [Memcached 1.4.33 added parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached.1-4-33).

### ElastiCache version 1.4.24 for Memcached
<a name="memcached-version-1-4-24"></a>

Improvements added since version 1.4.14 include the following:
+ Least recently used (LRU) management using a background process.
+ Added the option of using *jenkins* or *murmur3* as your hash algorithm.
+ Some new commands and parameters. For a list, see [Memcached 1.4.24 added parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached.1-4-24).
+ Several bug fixes.

### ElastiCache version 1.4.14 for Memcached
<a name="memcached-version-1-4-14"></a>

Improvements added since version 1.4.5 include the following:
+ Enhanced slab rebalancing capability.
+ Performance and scalability improvement.
+ Introduced the *touch* command to update the expiration time of an existing item without fetching it.
+ Auto discovery—the ability for client programs to automatically determine all of the cache nodes in a cluster, and to initiate and maintain connections to all of these nodes.

### ElastiCache version 1.4.5 for Memcached
<a name="memcached-version-1-4-5"></a>

ElastiCache version 1.4.5 for Memcached was the initial engine and version supported by Amazon ElastiCache for Memcached.

# Major engine version behavior and compatibility differences with Valkey
<a name="VersionManagementConsiderations-valkey"></a>

Valkey 7.2.6 has the same compatibility differences with earlier engine versions as Redis OSS 7.2.4. For the most recent supported version of Valkey, see [Supported engines and versions](VersionManagement.md#supported-engine-versions).

For more information on the Valkey 7.2 release, see [Redis OSS 7.2.4 Release Notes](https://github.com/valkey-io/valkey/blob/d2c8a4b91e8c0e6aefd1f5bc0bf582cddbe046b7/00-RELEASENOTES) (Valkey 7.2 includes all changes from Redis OSS up to version 7.2.4) and [Valkey 7.2 release notes](https://github.com/valkey-io/valkey/blob/7.2/00-RELEASENOTES) at Valkey on GitHub.

Here are the potentially breaking behavior changes between Valkey 7.2 and Redis OSS 7.1 (or 7.0):
+ Time sampling is frozen during command execution and in scripts.
+ A blocked stream command that's released when the key no longer exists carries a different error code (`-NOGROUP` or `-WRONGTYPE` instead of `-UNBLOCKED`).
+ Client side tracking for scripts now tracks the keys that are read by the script, instead of the keys that are declared by the caller of EVAL / FCALL.

# Major engine version behavior and compatibility differences with Redis OSS
<a name="VersionManagementConsiderations"></a>

**Important**  
The following page is structured to identify all incompatibility differences between versions and to inform you of any considerations you should make when upgrading to newer versions. This list includes any version incompatibility issues you may encounter when upgrading.  
You can upgrade directly from your current Redis OSS version to the latest Redis OSS version available, without the need for sequential upgrades. For example, you can upgrade directly from Redis OSS version 3.0 to version 7.0.

Redis OSS versions are identified with a semantic version, which comprises a major, minor, and patch component. For example, in Redis OSS 4.0.10, the major version is 4, the minor version is 0, and the patch version is 10. These values are generally incremented based on the following conventions:
+ Major versions are for API incompatible changes
+ Minor versions are for new functionality added in a backwards-compatible way
+ Patch versions are for backwards-compatible bug fixes and non-functional changes
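These conventions can be made concrete with a short sketch that decomposes and compares version strings component by component. The `SemVer` class below is purely illustrative and not part of any ElastiCache or Redis OSS API:

```
// Illustrative holder for a major.minor.patch version string.
// Not part of any ElastiCache API.
class SemVer implements Comparable<SemVer> {
    final int major, minor, patch;

    SemVer(int major, int minor, int patch) {
        this.major = major;
        this.minor = minor;
        this.patch = patch;
    }

    // Parse a version string such as "4.0.10" into its components.
    static SemVer parse(String s) {
        String[] p = s.split("\\.");
        return new SemVer(Integer.parseInt(p[0]),
                          Integer.parseInt(p[1]),
                          Integer.parseInt(p[2]));
    }

    // Order by major, then minor, then patch, matching the
    // conventions described above.
    @Override
    public int compareTo(SemVer o) {
        if (major != o.major) return Integer.compare(major, o.major);
        if (minor != o.minor) return Integer.compare(minor, o.minor);
        return Integer.compare(patch, o.patch);
    }
}
```

For example, `4.0.10` parses into major 4, minor 0, and patch 10, and `5.0.6` compares greater than `5.0.5` because only the patch component differs.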

We recommend always staying on the latest patch version within a given **major.minor** version in order to have the latest performance and stability improvements. Beginning with ElastiCache version 6.0 for Redis OSS, ElastiCache will offer a single version for each Redis OSS minor release rather than offering multiple patch versions. ElastiCache will automatically manage the patch version of your running clusters, ensuring improved performance and enhanced security.

We also recommend periodically upgrading to the latest major version, since most major improvements are not backported to older versions. As ElastiCache expands availability to a new AWS Region, ElastiCache for Redis OSS supports the two most recent **major.minor** versions at that time for the new Region. For example, if a new AWS Region launches and the latest major.minor ElastiCache versions for Redis OSS are **7.0** and **6.2**, ElastiCache will support Redis OSS versions **7.0** and **6.2** in the new AWS Region. As newer major.minor versions of ElastiCache for Redis OSS are released, ElastiCache will continue to add support for the newly released versions. To learn more about choosing Regions for ElastiCache, see [Choosing regions and availability zones](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/RegionsAndAZs.html#SupportedRegions). 

When doing an upgrade that spans major or minor versions, consider the following list, which includes behavior changes and backwards incompatible changes released with Redis OSS over time. 

## Redis OSS 7.0 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis70"></a>

For a full list of changes, see [Redis OSS 7.0 release notes](https://raw.githubusercontent.com/redis/redis/7.0/00-RELEASENOTES). 
+ `SCRIPT LOAD` and `SCRIPT FLUSH` are no longer propagated to replicas. If you need to have some durability for scripts, we recommend you consider using [Redis OSS functions](https://valkey.io/topics/functions-intro/).
+ Pubsub channels are now blocked by default for new ACL users.
+ The `STRALGO` command was replaced with the `LCS` command.
+ The format for `ACL GETUSER` has changed so that all fields show the standard access string pattern. If you had automation using `ACL GETUSER`, you should verify that it will handle either format.
+ The ACL categories for `SELECT`, `WAIT`, `ROLE`, `LASTSAVE`, `READONLY`, `READWRITE`, and `ASKING` have changed.
+ The `INFO` command now shows command stats per sub-command instead of in the top level container commands.
+ The return values of `LPOP`, `RPOP`, `ZPOPMIN` and `ZPOPMAX` commands have changed under certain edge cases. If you use these commands, you should check the release notes and evaluate if you are impacted.
+ The `SORT` and `SORT_RO` commands now require access to the entire keyspace in order to use the `GET` and `BY` arguments. 

## Redis OSS 6.2 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis62"></a>

For a full list of changes, see [Redis OSS 6.2 release notes](https://raw.githubusercontent.com/redis/redis/6.2/00-RELEASENOTES). 
+ The ACL flags of the `TIME`, `ECHO`, `ROLE`, and `LASTSAVE` commands were changed. This may cause commands that were previously allowed to be rejected and vice versa. 
**Note**  
None of these commands modify or give access to data.
+ When upgrading from Redis OSS 6.0, the ordering of key/value pairs returned from a map response to a Lua script is changed. If your scripts use `redis.setresp()` or return a map (new in Redis OSS 6.0), be aware that the script may break on upgrade.

## Redis OSS 6.0 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis60"></a>

For a full list of changes, see [Redis OSS 6.0 release notes](https://raw.githubusercontent.com/redis/redis/6.0/00-RELEASENOTES). 
+ The maximum number of allowed databases has been decreased from 1.2 million to 10 thousand. The default value is 16, and we discourage using values much larger than this because of performance and memory concerns.
+ Set the `AutoMinorVersionUpgrade` parameter to yes, and ElastiCache will manage the minor version upgrade through self-service updates. These are handled through standard customer-notification channels via a self-service update campaign. For more information, see [Self-service updates in ElastiCache](Self-Service-Updates.md).

## Redis OSS 5.0 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis50"></a>

For a full list of changes, see [Redis OSS 5.0 release notes](https://raw.githubusercontent.com/redis/redis/5.0/00-RELEASENOTES). 
+ Scripts are replicated by effects instead of re-executing the script on the replica. This generally improves performance but may increase the amount of data replicated between primaries and replicas. An option to revert to the previous behavior is available only in ElastiCache version 5.0 for Redis OSS.
+ If you are upgrading from Redis OSS 4.0, some commands in Lua scripts will return arguments in a different order than they did in earlier versions. In Redis OSS 4.0, some responses were ordered lexicographically to make them deterministic; this ordering is not applied when scripts are replicated by effects.
+ In Redis OSS 5.0.3 and above, ElastiCache for Redis OSS offloads some I/O work to background cores on instance types with more than 4 vCPUs. This may change the performance characteristics of Redis OSS and the values of some metrics. For more information, see [Which Metrics Should I Monitor?](CacheMetrics.WhichShouldIMonitor.md) to understand whether you need to change which metrics you watch.

## Redis OSS 4.0 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis40"></a>

For a full list of changes, see [Redis OSS 4.0 release notes](https://raw.githubusercontent.com/redis/redis/4.0/00-RELEASENOTES). 
+ Slow log now logs two additional arguments: the client name and address. This change should be backwards compatible unless you explicitly rely on each slow log entry containing three values.
+ The `CLUSTER NODES` command now returns a slightly different format, which is not backwards compatible. We recommend that clients don't use this command to learn about the nodes in a cluster; instead, use `CLUSTER SLOTS`.

## Past EOL
<a name="VersionManagementConsiderations-redis3x-scheduled"></a>

### Redis OSS 3.2 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis32"></a>

For a full list of changes, see [Redis OSS 3.2 release notes](https://raw.githubusercontent.com/redis/redis/3.2/00-RELEASENOTES). 
+ There are no compatibility changes to call out for this version.

For more information, see [ElastiCache versions for Redis OSS end of life schedule](engine-versions.md#deprecated-engine-versions).

### Redis OSS 2.8 behavior and backwards incompatible changes
<a name="VersionManagementConsiderations-redis28"></a>

For a full list of changes, see [Redis OSS 2.8 release notes](https://raw.githubusercontent.com/redis/redis/2.8/00-RELEASENOTES). 
+ Starting in Redis OSS 2.8.22, Redis OSS AOF is no longer supported in ElastiCache for Redis OSS. We recommend using MemoryDB when data needs to be persisted durably.
+ Starting in Redis OSS 2.8.22, ElastiCache for Redis OSS no longer supports attaching replicas to primaries hosted within ElastiCache. While upgrading, external replicas are disconnected and unable to reconnect. We recommend using client-side caching, made available in Redis OSS 6.0, as an alternative to external replicas.
+ The `TTL` and `PTTL` commands now return -2 if the key does not exist and -1 if it exists but has no associated expire. Redis OSS 2.6 and previous versions used to return -1 for both the conditions.
+ `SORT` with `ALPHA` now sorts according to local collation locale if no `STORE` option is used.
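The changed `TTL` return contract can be modeled with a toy in-memory sketch. This is not a real cache client; the `TtlContract` class and its map-backed store exist only to illustrate the return values:

```
import java.util.HashMap;
import java.util.Map;

// Toy model of the Redis OSS 2.8+ TTL contract: -2 for a missing key,
// -1 for a key with no expiry, otherwise the remaining time to live.
// Purely illustrative; not a real cache client.
class TtlContract {
    // Remaining TTL in seconds per key; a null value means "no expiry".
    private final Map<String, Long> store = new HashMap<>();

    void set(String key, Long ttlSeconds) {
        store.put(key, ttlSeconds);
    }

    long ttl(String key) {
        if (!store.containsKey(key)) return -2; // key does not exist
        Long ttl = store.get(key);
        if (ttl == null) return -1;             // exists, no expiry set
        return ttl;
    }
}
```

Under Redis OSS 2.6 and earlier, both the missing-key and no-expiry cases returned -1, so client code that distinguishes the two cases must target the newer contract.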

For more information, see [ElastiCache versions for Redis OSS end of life schedule](engine-versions.md#deprecated-engine-versions).

# Upgrade considerations when working with node-based clusters
<a name="VersionManagement-upgrade-considerations"></a>

**Note**  
The following considerations only apply when upgrading node-based clusters. They do not apply to ElastiCache Serverless.

**Valkey and Redis OSS considerations**

When upgrading a node-based Valkey or Redis OSS cluster, consider the following.
+ Engine version management is designed so that you can have as much control as possible over how patching occurs. However, ElastiCache reserves the right to patch your cluster on your behalf in the unlikely event of a critical security vulnerability in the system or cache software.
+ Beginning with ElastiCache version 7.2 for Valkey and ElastiCache version 6.0 for Redis OSS, ElastiCache will offer a single version for each minor release, rather than offering multiple patch versions.
+ Starting with Redis OSS engine version 5.0.6, you can upgrade your cluster version with minimal downtime. The cluster is available for reads during the entire upgrade and is available for writes for most of the upgrade duration, except during the failover operation which lasts a few seconds.
+ You can also upgrade ElastiCache clusters with versions earlier than 5.0.6. The process is the same but may incur a longer failover time during DNS propagation (30 seconds to 1 minute). 
+ Beginning with Redis OSS 7, ElastiCache supports switching between Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled).
+ The Amazon ElastiCache for Redis OSS engine upgrade process is designed to make a best effort to retain your existing data and requires successful Redis OSS replication. 
+ When upgrading the engine, ElastiCache will terminate existing client connections. To minimize downtime during engine upgrades, we recommend you implement [best practices for Redis OSS clients](BestPractices.Clients.redis.md) with error retries and exponential backoff and the best practices for [minimizing downtime during maintenance](BestPractices.MinimizeDowntime.md). 
+ You can't upgrade directly from Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled) when you upgrade your engine. The following procedure shows you how to upgrade from Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled).

**To upgrade from a Valkey or Redis OSS (cluster mode disabled) to Valkey or Redis OSS (cluster mode enabled) engine version**

  1. Make a backup of your Valkey or Redis OSS (cluster mode disabled) cluster or replication group. For more information, see [Taking manual backups](backups-manual.md).

  1. Use the backup to create and seed a Valkey or Redis OSS (cluster mode enabled) cluster with one shard (node group). Specify the new engine version and enable cluster mode when creating the cluster or replication group. For more information, see [Tutorial: Seeding a new node-based cluster with an externally created backup](backups-seeding-redis.md).

  1. Delete the old Valkey or Redis OSS (cluster mode disabled) cluster or replication group. For more information, see [Deleting a cluster in ElastiCache](Clusters.Delete.md) or [Deleting a replication group](Replication.DeletingRepGroup.md).

  1. Scale the new Valkey or Redis OSS (cluster mode enabled) cluster or replication group to the number of shards (node groups) that you need. For more information, see [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md).
+ When upgrading major engine versions, for example from 5.0.6 to 6.0, you need to also choose a new parameter group that is compatible with the new engine version.
+ For single Redis OSS clusters and clusters with Multi-AZ disabled, we recommend that sufficient memory be made available to Redis OSS as described in [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md). In these cases, the primary is unavailable to service requests during the upgrade process.
+ For Redis OSS clusters with Multi-AZ enabled, we also recommend that you schedule engine upgrades during periods of low incoming write traffic. When upgrading to Redis OSS 5.0.6 or above, the primary cluster continues to be available to service requests during the upgrade process. 

  Clusters and replication groups with multiple shards are processed and patched as follows:
  + All shards are processed in parallel. Only one upgrade operation is performed on a shard at any time.
  + In each shard, all replicas are processed before the primary is processed. If there are fewer replicas in a shard, the primary in that shard might be processed before the replicas in other shards are finished processing.
  + Across all the shards, primary nodes are processed in series. Only one primary node is upgraded at a time.
+ If encryption is enabled on your current cluster or replication group, you cannot upgrade to an engine version that does not support encryption, such as from 3.2.6 to 3.2.10.
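The retry guidance above (error retries with exponential backoff) can be sketched as backoff with full jitter. The base delay and cap below are illustrative choices, not ElastiCache requirements:

```
import java.util.concurrent.ThreadLocalRandom;

// Sketch of exponential backoff with full jitter for reconnecting
// after an engine upgrade closes client connections. The base and
// cap values are illustrative.
class Backoff {
    static final long BASE_MS = 100;    // first retry around 100 ms
    static final long CAP_MS = 10_000;  // never wait more than 10 s

    // Upper bound for attempt n (0-based): min(cap, base * 2^n).
    static long maxDelayMs(int attempt) {
        return Math.min(CAP_MS, BASE_MS << Math.min(attempt, 30));
    }

    // Full jitter: wait a uniformly random time in [0, maxDelay].
    static long nextDelayMs(int attempt) {
        return ThreadLocalRandom.current().nextLong(maxDelayMs(attempt) + 1);
    }
}
```

Jitter spreads reconnect attempts out in time, which avoids a thundering herd of clients all retrying at the same instant after a failover.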

**Memcached considerations**

When upgrading a node-based Memcached cluster, consider the following.
+ Engine version management is designed so that you can have as much control as possible over how patching occurs. However, ElastiCache reserves the right to patch your cluster on your behalf in the unlikely event of a critical security vulnerability in the system or cache software.
+ Because the Memcached engine does not support persistence, Memcached engine version upgrades are always a disruptive process that clears all cache data in the cluster.

# ElastiCache best practices and caching strategies
<a name="BestPractices"></a>

Below you can find recommended best practices for Amazon ElastiCache. Following these practices improves your cache's performance and reliability. 

**Topics**
+ [Overall best practices](WorkingWithRedis.md)
+ [Best Practices for using Read Replicas](ReadReplicas.md)
+ [Supported and restricted Valkey, Memcached, and Redis OSS commands](SupportedCommands.md)
+ [Valkey and Redis OSS configuration and limits](RedisConfiguration.md)
+ [IPv6 client examples for Valkey, Memcached, and Redis OSS](network-type-best-practices.md)
+ [Best practices for clients (Valkey and Redis OSS)](BestPractices.Clients.redis.md)
+ [Best practices for clients (Memcached)](BestPractices.Clients.memcached.md)
+ [TLS enabled dual stack ElastiCache clusters](#network-type-configuring-tls-enabled-dual-stack)
+ [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md)
+ [Best practices when working with Valkey and Redis OSS node-based clusters](BestPractices.SelfDesigned.md)
+ [Caching database query results](caching-database-query-results.md)
+ [Caching strategies for Memcached](Strategies.md)

# Overall best practices
<a name="WorkingWithRedis"></a>

Below you can find information about best practices for using the Valkey, Memcached, and Redis OSS interfaces within ElastiCache.
+ **Use cluster-mode enabled configurations** – Cluster-mode enabled allows the cache to scale horizontally to achieve higher storage and throughput than a cluster-mode disabled configuration. ElastiCache serverless is only available in a cluster-mode enabled configuration.
+ **Use long-lived connections** – Creating a new connection is expensive and takes time and CPU resources from the cache. Reuse connections when possible (for example, with connection pooling) to amortize this cost over many commands.
+ **Read from replicas** – If you are using ElastiCache serverless or have provisioned read replicas (node-based clusters), direct reads to replicas to achieve better scalability and/or lower latency. Reads from replicas are eventually consistent with the primary.

  In a node-based cluster, avoid directing read requests to a single read replica, because reads may be temporarily unavailable if that node fails. Either configure your client to direct read requests to at least two read replicas, or direct reads to a single replica and the primary.

  In ElastiCache serverless, reading from the replica port (6380) will direct reads to the client's local availability zone when possible, reducing retrieval latency. It will automatically fall back to the other nodes during failures.
+ **Avoid expensive commands** – Avoid running any computationally and I/O intensive operations, such as the `KEYS` and `SMEMBERS` commands. We suggest this approach because these operations increase the load on the cluster and have an impact on the performance of the cluster. Instead, use the `SCAN` and `SSCAN` commands.
+ **Follow Lua best practices** – Avoid long running Lua scripts, and always declare keys used in Lua scripts up front. We recommend this approach to ensure that the Lua script does not use cross-slot commands. Ensure that the keys used in Lua scripts belong to the same slot.
+ **Use sharded pub/sub** – When using Valkey or Redis OSS to support pub/sub workloads with high throughput, we recommend you use [sharded pub/sub](https://valkey.io/topics/pubsub/) (available with Valkey, and with Redis OSS 7 or later). Traditional pub/sub in cluster-mode enabled clusters broadcasts messages to all nodes in the cluster, which can result in high `EngineCPUUtilization`. Note that in ElastiCache serverless, traditional pub/sub commands internally use sharded pub/sub commands.
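The same-slot requirement for Lua script keys can be checked client side with the cluster key-slot calculation: CRC16 (XMODEM variant) of the key's hash tag, modulo 16384. This sketch follows the published cluster specification; the `KeySlot` class name is illustrative:

```
import java.nio.charset.StandardCharsets;

// Computes the cluster hash slot for a key so that callers can verify
// all keys passed to a Lua script map to the same slot. Follows the
// CRC16 (XMODEM) algorithm from the cluster specification.
class KeySlot {
    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Hash only the substring inside the first non-empty {hash tag},
    // if present, so that related keys can be forced into one slot.
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }
}
```

For example, `{user1}.following` and `{user1}.followers` share the hash tag `user1` and therefore the same slot, so a Lua script can safely operate on both keys.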

# Best Practices for using Read Replicas
<a name="ReadReplicas"></a>

Many applications, such as session stores, leaderboards, and recommendation engines, require high availability and handle significantly more read operations than write operations. These applications can often tolerate slightly stale data (eventual consistency), meaning that it's acceptable if different users momentarily see slightly different versions of the same data. For example:
+ Cached query results can often tolerate slightly stale data, especially for cache-aside patterns where the source of truth is external.
+ In a gaming leaderboard, a few seconds delay in updated scores often won't significantly impact the user experience.
+ For session stores, some slight delays in propagating session data across replicas rarely affect application functionality.
+ Recommendation engines typically use historical data analysis, so real-time consistency is less critical.

Eventual consistency means that all replica nodes will eventually return the same data once the replication process is complete, typically within milliseconds. For such use cases, implementing read replicas is an effective strategy to reduce latency when reading from your ElastiCache instance.

Using read replicas in Amazon ElastiCache can provide significant performance benefits through:

**Enhanced Read Scalability**
+ Distributes read operations across multiple replica nodes
+ Offloads read traffic from the primary node
+ Reduces read latency by serving requests from geographically closer replicas

**Optimized Primary Node Performance**
+ Dedicates primary node resources to write operations
+ Reduces connection overhead on the primary node
+ Improves write performance and maintains better response times during peak traffic periods

## Using Read from Replica in ElastiCache Serverless
<a name="ReadReplicas.serverless"></a>

ElastiCache serverless provides two different endpoints for different consistency requirements. The two endpoints use the same DNS name but different ports. To use the read-from-replica port, you must authorize access to both ports from your client application by [configuring the security groups and network access control lists of your VPC](set-up.md#elasticache-install-grant-access-VPN).

**Primary endpoint (Port 6379)**
+ Use for operations requiring strong consistency
+ Guarantees reading the most up-to-date data
+ Best for critical transactions and write operations
+ Necessary for write operations
+ Example: `test-12345.serverless.use1.cache.amazonaws.com:6379`

**Read-optimized endpoint (Port 6380)**
+ Optimized for read operations that can tolerate eventual consistency
+ When possible, ElastiCache serverless automatically routes read requests to a replica node in the client's local Availability Zone. This optimization provides lower latency by avoiding the additional network latency incurred when retrieving data from a node in a different availability zone.
+ ElastiCache serverless automatically selects available nodes in other zones if a local node is unavailable
+ Example: `test-12345.serverless.use1.cache.amazonaws.com:6380`
+ Clients like Glide and Lettuce will automatically detect and route reads to the latency-optimized endpoint if you provide the read-from-replica configuration. If your client doesn't support routing configuration (for example, valkey-java and older Jedis versions), you must configure the correct port and client settings to read from replicas.

## Connecting to read replicas in ElastiCache Serverless - Valkey and Glide
<a name="ReadReplicas.connecting-primary"></a>

The following code snippet shows how you can configure read from replica for ElastiCache Serverless in the Valkey GLIDE library. You don't need to specify a separate port for read from replica; instead, you configure the routing option `ReadFrom.PREFER_REPLICA`.

```
package glide.examples;

import glide.api.GlideClusterClient;
import glide.api.logging.Logger;
import glide.api.models.configuration.GlideClusterClientConfiguration;
import glide.api.models.configuration.NodeAddress;
import glide.api.models.exceptions.ClosingException;
import glide.api.models.exceptions.ConnectionException;
import glide.api.models.exceptions.TimeoutException;
import glide.api.models.configuration.ReadFrom;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class ClusterExample {

    public static void main(String[] args) {
        // Set logger configuration
        Logger.setLoggerConfig(Logger.Level.INFO);

        GlideClusterClient client = null;

        try {
            System.out.println("Connecting to Valkey Glide...");

            // Configure the Glide Client
            GlideClusterClientConfiguration config = GlideClusterClientConfiguration.builder()
                .address(NodeAddress.builder()
                    .host("your-endpoint")
                    .port(6379)
                    .build())
                .useTLS(true)
                .readFrom(ReadFrom.PREFER_REPLICA)
                .build();

            // Create the GlideClusterClient
            client = GlideClusterClient.createClient(config).get();
            System.out.println("Connected successfully.");

            // Perform SET operation
            CompletableFuture<String> setResponse = client.set("key", "value");
            System.out.println("Set key 'key' to 'value': " + setResponse.get());

            // Perform GET operation
            CompletableFuture<String> getResponse = client.get("key");
            System.out.println("Get response for 'key': " + getResponse.get());

            // Perform PING operation
            CompletableFuture<String> pingResponse = client.ping();
            System.out.println("PING response: " + pingResponse.get());

        } catch (ClosingException | ConnectionException | TimeoutException | ExecutionException e) {
            System.err.println("An exception occurred: ");
            e.printStackTrace();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            // Close the client connection
            if (client != null) {
                try {
                    client.close();
                    System.out.println("Client connection closed.");
                } catch (ClosingException | ExecutionException e) {
                    System.err.println("Error closing client: " + e.getMessage());
                }
            }
        }
    }
}
```

# Supported and restricted Valkey, Memcached, and Redis OSS commands
<a name="SupportedCommands"></a>

## Supported Valkey and Redis OSS commands
<a name="SupportedCommandsRedis"></a>

The following Valkey and Redis OSS commands are supported by serverless caches. In addition to these commands, the [JSON commands](json-list-commands.md) are also supported.

For information on Bloom filter commands, see [Bloom filter commands](BloomFilters.md#SupportedCommandsBloom).

**Bitmap Commands**
+ `BITCOUNT`

  Counts the number of set bits (population counting) in a string.

  [Learn more](https://valkey.io/commands/bitcount/)
+ `BITFIELD`

  Performs arbitrary bitfield integer operations on strings.

  [Learn more](https://valkey.io/commands/bitfield/)
+ `BITFIELD_RO`

  Performs arbitrary read-only bitfield integer operations on strings.

  [Learn more](https://valkey.io/commands/bitfield_ro/)
+ `BITOP`

  Performs bitwise operations on multiple strings, and stores the result.

  [Learn more](https://valkey.io/commands/bitop/)
+ `BITPOS`

  Finds the first set (1) or clear (0) bit in a string.

  [Learn more](https://valkey.io/commands/bitpos/)
+ `GETBIT`

  Returns a bit value by offset.

  [Learn more](https://valkey.io/commands/getbit/)
+ `SETBIT`

  Sets or clears the bit at offset of the string value. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/setbit/)
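
The bitmap commands all treat a string value as a bit array where offset 0 is the most significant bit of the first byte. The following local sketch mirrors the `SETBIT`/`GETBIT`/`BITCOUNT` semantics (an illustration only, not client code):

```java
import java.nio.charset.StandardCharsets;

public class BitmapSemantics {
    // SETBIT: bit offset 0 is the most significant bit of the first byte;
    // the value grows as needed, padded with zero bits.
    static byte[] setBit(byte[] value, int offset, boolean on) {
        int byteIndex = offset >> 3;
        byte[] out = new byte[Math.max(value.length, byteIndex + 1)];
        System.arraycopy(value, 0, out, 0, value.length);
        int mask = 1 << (7 - (offset & 7));
        if (on) out[byteIndex] |= mask; else out[byteIndex] &= ~mask;
        return out;
    }

    // GETBIT: bits beyond the end of the string read as 0.
    static int getBit(byte[] value, int offset) {
        int byteIndex = offset >> 3;
        if (byteIndex >= value.length) return 0;
        return (value[byteIndex] >> (7 - (offset & 7))) & 1;
    }

    // BITCOUNT: population count over the whole string.
    static int bitCount(byte[] value) {
        int total = 0;
        for (byte b : value) total += Integer.bitCount(b & 0xFF);
        return total;
    }

    public static void main(String[] args) {
        byte[] foobar = "foobar".getBytes(StandardCharsets.US_ASCII);
        System.out.println(bitCount(foobar)); // 26, the BITCOUNT documentation example
        byte[] v = setBit(new byte[0], 7, true); // SETBIT key 7 1 -> "\x01"
        System.out.println(getBit(v, 7)); // 1
    }
}
```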

**Cluster Management Commands**
+ `CLUSTER COUNTKEYSINSLOT`

  Returns the number of keys in a hash slot.

  [Learn more](https://valkey.io/commands/cluster-countkeysinslot/)
+ `CLUSTER GETKEYSINSLOT`

  Returns the key names in a hash slot.

  [Learn more](https://valkey.io/commands/cluster-getkeysinslot/)
+ `CLUSTER INFO`

  Returns information about the state of a node. In a serverless cache, returns state about the single virtual “shard” exposed to the client.

  [Learn more](https://valkey.io/commands/cluster-info/)
+ `CLUSTER KEYSLOT`

  Returns the hash slot for a key.

  [Learn more](https://valkey.io/commands/cluster-keyslot/)
+ `CLUSTER MYID`

  Returns the ID of a node. In a serverless cache, returns state about the single virtual “shard” exposed to the client. 

  [Learn more](https://valkey.io/commands/cluster-myid/)
+ `CLUSTER NODES`

  Returns the cluster configuration for a node. In a serverless cache, returns state about the single virtual “shard” exposed to the client. 

  [Learn more](https://valkey.io/commands/cluster-nodes/)
+ `CLUSTER REPLICAS`

  Lists the replica nodes of a master node. In a serverless cache, returns state about the single virtual “shard” exposed to the client. 

  [Learn more](https://valkey.io/commands/cluster-replicas/)
+ `CLUSTER SHARDS`

  Returns the mapping of cluster slots to shards. In a serverless cache, returns state about the single virtual “shard” exposed to the client. 

  [Learn more](https://valkey.io/commands/cluster-shards/)
+ `CLUSTER SLOTS`

  Returns the mapping of cluster slots to nodes. In a serverless cache, returns state about the single virtual “shard” exposed to the client. 

  [Learn more](https://valkey.io/commands/cluster-slots/)
+ `CLUSTER SLOT-STATS`

  Allows tracking of per slot metrics for key count, CPU utilization, network bytes in, and network bytes out. 

  [Learn more](https://valkey.io/commands/cluster-slot-stats/)
+ `READONLY`

  Enables read-only queries for a connection to a Valkey or Redis OSS Cluster replica node.

  [Learn more](https://valkey.io/commands/readonly/)
+ `READWRITE`

  Enables read-write queries for a connection to a Valkey or Redis OSS Cluster replica node.

  [Learn more](https://valkey.io/commands/readwrite/)
+ `SCRIPT SHOW`

  Returns the original source code of a script in the script cache.

  [Learn more](https://valkey.io/commands/script-show/)
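
`CLUSTER KEYSLOT` maps a key to one of 16384 slots by taking the CRC16 (XMODEM variant) of the key modulo 16384, hashing only the first non-empty `{...}` hash tag when one is present. A standard-library sketch of that mapping:

```java
import java.nio.charset.StandardCharsets;

public class HashSlot {
    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Same slot mapping that CLUSTER KEYSLOT reports: hash only the
    // first non-empty {...} hash tag if one is present.
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open != -1) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close);
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slot("123456789")); // 12739 (0x31C3)
        // Keys sharing a hash tag land in the same slot:
        System.out.println(slot("{user1000}.following") == slot("{user1000}.followers")); // true
    }
}
```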

**Connection Management Commands**
+ `AUTH`

  Authenticates the connection.

  [Learn more](https://valkey.io/commands/auth/)
+ `CLIENT GETNAME`

  Returns the name of the connection.

  [Learn more](https://valkey.io/commands/client-getname/)
+ `CLIENT REPLY`

  Instructs the server whether to reply to commands.

  [Learn more](https://valkey.io/commands/client-reply/)
+ `CLIENT SETNAME`

  Sets the connection name.

  [Learn more](https://valkey.io/commands/client-setname/)
+ `ECHO`

  Returns the given string.

  [Learn more](https://valkey.io/commands/echo/)
+ `HELLO`

  Handshakes with the Valkey or Redis OSS server.

  [Learn more](https://valkey.io/commands/hello/)
+ `PING`

  Returns the server's liveliness response.

  [Learn more](https://valkey.io/commands/ping/)
+ `QUIT`

  Closes the connection.

  [Learn more](https://valkey.io/commands/quit/)
+ `RESET`

  Resets the connection.

  [Learn more](https://valkey.io/commands/reset/)
+ `SELECT`

  Changes the selected database.

  [Learn more](https://valkey.io/commands/select/)

**Generic Commands**
+ `COPY`

  Copies the value of a key to a new key.

  [Learn more](https://valkey.io/commands/copy/)
+ `DEL`

  Deletes one or more keys.

  [Learn more](https://valkey.io/commands/del/)
+ `DUMP`

  Returns a serialized representation of the value stored at a key.

  [Learn more](https://valkey.io/commands/dump/)
+ `EXISTS`

  Determines whether one or more keys exist.

  [Learn more](https://valkey.io/commands/exists/)
+ `EXPIRE`

  Sets the expiration time of a key in seconds.

  [Learn more](https://valkey.io/commands/expire/)
+ `EXPIREAT`

  Sets the expiration time of a key to a Unix timestamp.

  [Learn more](https://valkey.io/commands/expireat/)
+ `EXPIRETIME`

  Returns the expiration time of a key as a Unix timestamp.

  [Learn more](https://valkey.io/commands/expiretime/)
+ `PERSIST`

  Removes the expiration time of a key.

  [Learn more](https://valkey.io/commands/persist/)
+ `PEXPIRE`

  Sets the expiration time of a key in milliseconds.

  [Learn more](https://valkey.io/commands/pexpire/)
+ `PEXPIREAT`

  Sets the expiration time of a key to a Unix milliseconds timestamp.

  [Learn more](https://valkey.io/commands/pexpireat/)
+ `PEXPIRETIME`

  Returns the expiration time of a key as a Unix milliseconds timestamp.

  [Learn more](https://valkey.io/commands/pexpiretime/)
+ `PTTL`

  Returns the expiration time in milliseconds of a key.

  [Learn more](https://valkey.io/commands/pttl/)
+ `RANDOMKEY`

  Returns a random key name from the database.

  [Learn more](https://valkey.io/commands/randomkey/)
+ `RENAME`

  Renames a key and overwrites the destination.

  [Learn more](https://valkey.io/commands/rename/)
+ `RENAMENX`

  Renames a key only when the target key name doesn't exist.

  [Learn more](https://valkey.io/commands/renamenx/)
+ `RESTORE`

  Creates a key from the serialized representation of a value.

  [Learn more](https://valkey.io/commands/restore/)
+ `SCAN`

  Iterates over the key names in the database.

  [Learn more](https://valkey.io/commands/scan/)
+ `SORT`

  Sorts the elements in a list, a set, or a sorted set, optionally storing the result.

  [Learn more](https://valkey.io/commands/sort/)
+ `SORT_RO`

  Returns the sorted elements of a list, a set, or a sorted set.

  [Learn more](https://valkey.io/commands/sort_ro/)
+ `TOUCH`

  Returns the number of existing keys out of those specified after updating the time they were last accessed.

  [Learn more](https://valkey.io/commands/touch/)
+ `TTL`

  Returns the expiration time in seconds of a key.

  [Learn more](https://valkey.io/commands/ttl/)
+ `TYPE`

  Determines the type of value stored at a key.

  [Learn more](https://valkey.io/commands/type/)
+ `UNLINK`

  Asynchronously deletes one or more keys.

  [Learn more](https://valkey.io/commands/unlink/)
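
The seconds/milliseconds pairs above (`EXPIRE`/`PEXPIRE`, `EXPIREAT`/`PEXPIREAT`) differ only in resolution. A small sketch of computing the timestamp arguments with the standard library (the key name is illustrative):

```java
import java.time.Duration;
import java.time.Instant;

public class ExpireTimestamps {
    // EXPIREAT takes a Unix timestamp in seconds; PEXPIREAT takes milliseconds.
    static long expireAtArg(Instant deadline) { return deadline.getEpochSecond(); }
    static long pexpireAtArg(Instant deadline) { return deadline.toEpochMilli(); }

    public static void main(String[] args) {
        Instant deadline = Instant.now().plus(Duration.ofHours(1));
        System.out.println("EXPIREAT mykey " + expireAtArg(deadline));
        System.out.println("PEXPIREAT mykey " + pexpireAtArg(deadline));
        // Same instant, two resolutions:
        System.out.println(pexpireAtArg(deadline) / 1000 == expireAtArg(deadline)); // true
    }
}
```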

**Geospatial Commands**
+ `GEOADD`

  Adds one or more members to a geospatial index. The key is created if it doesn't exist.

  [Learn more](https://valkey.io/commands/geoadd/)
+ `GEODIST`

  Returns the distance between two members of a geospatial index.

  [Learn more](https://valkey.io/commands/geodist/)
+ `GEOHASH`

  Returns members from a geospatial index as geohash strings.

  [Learn more](https://valkey.io/commands/geohash/)
+ `GEOPOS`

  Returns the longitude and latitude of members from a geospatial index.

  [Learn more](https://valkey.io/commands/geopos/)
+ `GEORADIUS`

  Queries a geospatial index for members within a distance from a coordinate, optionally stores the result.

  [Learn more](https://valkey.io/commands/georadius/)
+ `GEORADIUS_RO`

  Returns members from a geospatial index that are within a distance from a coordinate.

  [Learn more](https://valkey.io/commands/georadius_ro/)
+ `GEORADIUSBYMEMBER`

  Queries a geospatial index for members within a distance from a member, optionally stores the result.

  [Learn more](https://valkey.io/commands/georadiusbymember/)
+ `GEORADIUSBYMEMBER_RO`

  Returns members from a geospatial index that are within a distance from a member.

  [Learn more](https://valkey.io/commands/georadiusbymember_ro/)
+ `GEOSEARCH`

  Queries a geospatial index for members inside an area of a box or a circle.

  [Learn more](https://valkey.io/commands/geosearch/)
+ `GEOSEARCHSTORE`

  Queries a geospatial index for members inside an area of a box or a circle, optionally stores the result.

  [Learn more](https://valkey.io/commands/geosearchstore/)

**Hash Commands**
+ `HDEL`

  Deletes one or more fields and their values from a hash. Deletes the hash if no fields remain.

  [Learn more](https://valkey.io/commands/hdel/)
+ `HEXISTS`

  Determines whether a field exists in a hash.

  [Learn more](https://valkey.io/commands/hexists/)
+ `HGET`

  Returns the value of a field in a hash.

  [Learn more](https://valkey.io/commands/hget/)
+ `HGETALL`

  Returns all fields and values in a hash.

  [Learn more](https://valkey.io/commands/hgetall/)
+ `HINCRBY`

  Increments the integer value of a field in a hash by a number. Uses 0 as initial value if the field doesn't exist.

  [Learn more](https://valkey.io/commands/hincrby/)
+ `HINCRBYFLOAT`

  Increments the floating point value of a field by a number. Uses 0 as initial value if the field doesn't exist.

  [Learn more](https://valkey.io/commands/hincrbyfloat/)
+ `HKEYS`

  Returns all fields in a hash.

  [Learn more](https://valkey.io/commands/hkeys/)
+ `HLEN`

  Returns the number of fields in a hash.

  [Learn more](https://valkey.io/commands/hlen/)
+ `HMGET`

  Returns the values of one or more given fields in a hash.

  [Learn more](https://valkey.io/commands/hmget/)
+ `HMSET`

  Sets the values of multiple fields in a hash. Deprecated in favor of `HSET`.

  [Learn more](https://valkey.io/commands/hmset/)
+ `HRANDFIELD`

  Returns one or more random fields from a hash.

  [Learn more](https://valkey.io/commands/hrandfield/)
+ `HSCAN`

  Iterates over fields and values of a hash.

  [Learn more](https://valkey.io/commands/hscan/)
+ `HSET`

  Creates or modifies the value of a field in a hash.

  [Learn more](https://valkey.io/commands/hset/)
+ `HSETNX`

  Sets the value of a field in a hash only when the field doesn't exist.

  [Learn more](https://valkey.io/commands/hsetnx/)
+ `HSTRLEN`

  Returns the length of the value of a field.

  [Learn more](https://valkey.io/commands/hstrlen/)
+ `HVALS`

  Returns all values in a hash.

  [Learn more](https://valkey.io/commands/hvals/)
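
`HINCRBY` treats a missing field as 0 before the increment, and `HSETNX` writes only when the field is absent. Those semantics map directly onto Java's `Map` API; a local illustration (`hincrBy` is a helper defined here, not a client method):

```java
import java.util.HashMap;
import java.util.Map;

public class HashIncrSemantics {
    // HINCRBY semantics: a missing field counts as 0 before the increment.
    static long hincrBy(Map<String, Long> hash, String field, long delta) {
        return hash.merge(field, delta, Long::sum);
    }

    public static void main(String[] args) {
        Map<String, Long> hash = new HashMap<>();
        System.out.println(hincrBy(hash, "counter", 5));  // 5
        System.out.println(hincrBy(hash, "counter", -2)); // 3
        // HSETNX semantics: write only when the field is absent.
        hash.putIfAbsent("counter", 100L);
        System.out.println(hash.get("counter")); // 3
    }
}
```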

**HyperLogLog Commands**
+ `PFADD`

  Adds elements to a HyperLogLog key. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/pfadd/)
+ `PFCOUNT`

  Returns the approximated cardinality of the set(s) observed by the HyperLogLog key(s).

  [Learn more](https://valkey.io/commands/pfcount/)
+ `PFMERGE`

  Merges one or more HyperLogLog values into a single key.

  [Learn more](https://valkey.io/commands/pfmerge/)

**List Commands**
+ `BLMOVE`

  Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was moved.

  [Learn more](https://valkey.io/commands/blmove/)
+ `BLMPOP`

  Pops the first element from one of multiple lists. Blocks until an element is available otherwise. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/blmpop/)
+ `BLPOP`

  Removes and returns the first element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/blpop/)
+ `BRPOP`

  Removes and returns the last element in a list. Blocks until an element is available otherwise. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/brpop/)
+ `BRPOPLPUSH`

  Pops an element from a list, pushes it to another list and returns it. Blocks until an element is available otherwise. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/brpoplpush/)
+ `LINDEX`

  Returns an element from a list by its index.

  [Learn more](https://valkey.io/commands/lindex/)
+ `LINSERT`

  Inserts an element before or after another element in a list.

  [Learn more](https://valkey.io/commands/linsert/)
+ `LLEN`

  Returns the length of a list.

  [Learn more](https://valkey.io/commands/llen/)
+ `LMOVE`

  Returns an element after popping it from one list and pushing it to another. Deletes the list if the last element was moved.

  [Learn more](https://valkey.io/commands/lmove/)
+ `LMPOP`

  Returns multiple elements from a list after removing them. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/lmpop/)
+ `LPOP`

  Returns the first element of a list after removing it. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/lpop/)
+ `LPOS`

  Returns the index of matching elements in a list.

  [Learn more](https://valkey.io/commands/lpos/)
+ `LPUSH`

  Prepends one or more elements to a list. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/lpush/)
+ `LPUSHX`

  Prepends one or more elements to a list only when the list exists.

  [Learn more](https://valkey.io/commands/lpushx/)
+ `LRANGE`

  Returns a range of elements from a list.

  [Learn more](https://valkey.io/commands/lrange/)
+ `LREM`

  Removes elements from a list. Deletes the list if the last element was removed.

  [Learn more](https://valkey.io/commands/lrem/)
+ `LSET`

  Sets the value of an element in a list by its index.

  [Learn more](https://valkey.io/commands/lset/)
+ `LTRIM`

  Removes elements from both ends of a list. Deletes the list if all elements were trimmed.

  [Learn more](https://valkey.io/commands/ltrim/)
+ `RPOP`

  Returns and removes the last element of a list. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/rpop/)
+ `RPOPLPUSH`

  Returns the last element of a list after removing and pushing it to another list. Deletes the list if the last element was popped.

  [Learn more](https://valkey.io/commands/rpoplpush/)
+ `RPUSH`

  Appends one or more elements to a list. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/rpush/)
+ `RPUSHX`

  Appends an element to a list only when the list exists.

  [Learn more](https://valkey.io/commands/rpushx/)
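
`LMOVE` and its blocking variants perform a pop and a push as a single atomic step on the server. A local sketch of the `LEFT RIGHT` case using deques (an illustration of the semantics, not client code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ListMoveSemantics {
    // LMOVE src dst LEFT RIGHT: pop from the head of src, push to the tail of dst,
    // performed as one atomic step on the server.
    static String lmoveLeftRight(Deque<String> src, Deque<String> dst) {
        String element = src.pollFirst();
        if (element != null) {
            dst.addLast(element);
        }
        return element;
    }

    public static void main(String[] args) {
        Deque<String> src = new ArrayDeque<>(List.of("a", "b", "c"));
        Deque<String> dst = new ArrayDeque<>();
        System.out.println(lmoveLeftRight(src, dst)); // a
        System.out.println(src); // [b, c]
        System.out.println(dst); // [a]
    }
}
```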

**Pub/Sub Commands**

**Note**  
PUBSUB commands internally use sharded PUBSUB, so channel names from sharded and unsharded channels will be mixed.
+ `PUBLISH`

  Posts a message to a channel.

  [Learn more](https://valkey.io/commands/publish/)
+ `PUBSUB CHANNELS`

  Returns the active channels.

  [Learn more](https://valkey.io/commands/pubsub-channels/)
+ `PUBSUB NUMSUB`

  Returns a count of subscribers to channels.

  [Learn more](https://valkey.io/commands/pubsub-numsub/)
+ `PUBSUB SHARDCHANNELS`

  Returns the active shard channels.

  [Learn more](https://valkey.io/commands/pubsub-shardchannels/)
+ `PUBSUB SHARDNUMSUB`

  Returns the count of subscribers of shard channels.

  [Learn more](https://valkey.io/commands/pubsub-shardnumsub/)
+ `SPUBLISH`

  Posts a message to a shard channel.

  [Learn more](https://valkey.io/commands/spublish/)
+ `SSUBSCRIBE`

  Listens for messages published to shard channels.

  [Learn more](https://valkey.io/commands/ssubscribe/)
+ `SUBSCRIBE`

  Listens for messages published to channels.

  [Learn more](https://valkey.io/commands/subscribe/)
+ `SUNSUBSCRIBE`

  Stops listening to messages posted to shard channels.

  [Learn more](https://valkey.io/commands/sunsubscribe/)
+ `UNSUBSCRIBE`

  Stops listening to messages posted to channels.

  [Learn more](https://valkey.io/commands/unsubscribe/)

**Scripting Commands**
+ `EVAL`

  Executes a server-side Lua script.

  [Learn more](https://valkey.io/commands/eval/)
+ `EVAL_RO`

  Executes a read-only server-side Lua script.

  [Learn more](https://valkey.io/commands/eval_ro/)
+ `EVALSHA`

  Executes a server-side Lua script by SHA1 digest.

  [Learn more](https://valkey.io/commands/evalsha/)
+ `EVALSHA_RO`

  Executes a read-only server-side Lua script by SHA1 digest.

  [Learn more](https://valkey.io/commands/evalsha_ro/)
+ `SCRIPT EXISTS`

  Determines whether server-side Lua scripts exist in the script cache.

  [Learn more](https://valkey.io/commands/script-exists/)
+ `SCRIPT FLUSH`

  Currently a no-op; the script cache is managed by the service.

  [Learn more](https://valkey.io/commands/script-flush/)
+ `SCRIPT LOAD`

  Loads a server-side Lua script to the script cache.

  [Learn more](https://valkey.io/commands/script-load/)
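
`SCRIPT LOAD` returns the SHA-1 digest of the script body, and `EVALSHA` takes that same digest. You can compute it client-side with the standard library; a sketch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ScriptDigest {
    // The digest EVALSHA expects is the lowercase hex SHA-1 of the script body.
    static String sha1Hex(String script) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-1");
            byte[] digest = md.digest(script.getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder(digest.length * 2);
            for (byte b : digest) hex.append(String.format("%02x", b));
            return hex.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // 40 lowercase hex characters, the same value SCRIPT LOAD would return.
        System.out.println(sha1Hex("return 1"));
    }
}
```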

**Server Management Commands**

**Note**  
When using node-based ElastiCache clusters for Valkey and Redis OSS, flush commands must be sent to every primary by the client to flush all keys. ElastiCache Serverless for Valkey and Redis OSS works differently, because it abstracts away the underlying cluster topology. The result is that in ElastiCache Serverless, `FLUSHDB` and `FLUSHALL` commands will always flush all keys across the cluster. For this reason, flush commands cannot be included inside a Serverless transaction. 
+ `ACL CAT`

  Lists the ACL categories, or the commands inside a category.

  [Learn more](https://valkey.io/commands/acl-cat/)
+ `ACL GENPASS`

  Generates a pseudorandom, secure password that can be used to identify ACL users.

  [Learn more](https://valkey.io/commands/acl-genpass/)
+ `ACL GETUSER`

  Lists the ACL rules of a user.

  [Learn more](https://valkey.io/commands/acl-getuser/)
+ `ACL LIST`

  Dumps the effective rules in ACL file format.

  [Learn more](https://valkey.io/commands/acl-list/)
+ `ACL USERS`

  Lists all ACL users.

  [Learn more](https://valkey.io/commands/acl-users/)
+ `ACL WHOAMI`

  Returns the authenticated username of the current connection.

  [Learn more](https://valkey.io/commands/acl-whoami/)
+ `DBSIZE`

  Returns the number of keys in the currently selected database. This operation is not guaranteed to be atomic across all slots.

  [Learn more](https://valkey.io/commands/dbsize/)
+ `COMMAND`

  Returns detailed information about all commands.

  [Learn more](https://valkey.io/commands/command/)
+ `COMMAND COUNT`

  Returns a count of commands.

  [Learn more](https://valkey.io/commands/command-count/)
+ `COMMAND DOCS`

  Returns documentary information about one, multiple or all commands.

  [Learn more](https://valkey.io/commands/command-docs/)
+ `COMMAND GETKEYS`

  Extracts the key names from an arbitrary command.

  [Learn more](https://valkey.io/commands/command-getkeys/)
+ `COMMAND GETKEYSANDFLAGS`

  Extracts the key names and access flags for an arbitrary command.

  [Learn more](https://valkey.io/commands/command-getkeysandflags/)
+ `COMMAND INFO`

  Returns information about one, multiple or all commands.

  [Learn more](https://valkey.io/commands/command-info/)
+ `COMMAND LIST`

  Returns a list of command names.

  [Learn more](https://valkey.io/commands/command-list/)
+ `COMMANDLOG`

  A container for command log commands.

  [Learn more](https://valkey.io/commands/commandlog/)
+ `COMMANDLOG GET`

  Returns the specified command log's entries.

  [Learn more](https://valkey.io/commands/commandlog-get/)
+ `COMMANDLOG HELP`

  Shows helpful text about the different subcommands.

  [Learn more](https://valkey.io/commands/commandlog-help/)
+ `COMMANDLOG LEN`

  Returns the number of entries in the specified type of command log.

  [Learn more](https://valkey.io/commands/commandlog-len/)
+ `COMMANDLOG RESET`

  Clears all entries from the specified type of command log.

  [Learn more](https://valkey.io/commands/commandlog-reset/)
+ `FLUSHALL`

  Removes all keys from all databases. This operation is not guaranteed to be atomic across all slots. 

  [Learn more](https://valkey.io/commands/flushall/)
+ `FLUSHDB`

  Removes all keys from the current database. This operation is not guaranteed to be atomic across all slots.

  [Learn more](https://valkey.io/commands/flushdb/)
+ `INFO`

  Returns information and statistics about the server.

  [Learn more](https://valkey.io/commands/info/)
+ `LOLWUT`

  Displays computer art and the Valkey or Redis OSS version.

  [Learn more](https://valkey.io/commands/lolwut/)
+ `ROLE`

  Returns the replication role.

  [Learn more](https://valkey.io/commands/role/)
+ `TIME`

  Returns the server time.

  [Learn more](https://valkey.io/commands/time/)

**Set Commands**
+ `SADD`

  Adds one or more members to a set. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/sadd/)
+ `SCARD`

  Returns the number of members in a set.

  [Learn more](https://valkey.io/commands/scard/)
+ `SDIFF`

  Returns the difference of multiple sets.

  [Learn more](https://valkey.io/commands/sdiff/)
+ `SDIFFSTORE`

  Stores the difference of multiple sets in a key.

  [Learn more](https://valkey.io/commands/sdiffstore/)
+ `SINTER`

  Returns the intersect of multiple sets.

  [Learn more](https://valkey.io/commands/sinter/)
+ `SINTERCARD`

  Returns the number of members of the intersect of multiple sets.

  [Learn more](https://valkey.io/commands/sintercard/)
+ `SINTERSTORE`

  Stores the intersect of multiple sets in a key.

  [Learn more](https://valkey.io/commands/sinterstore/)
+ `SISMEMBER`

  Determines whether a member belongs to a set.

  [Learn more](https://valkey.io/commands/sismember/)
+ `SMEMBERS`

  Returns all members of a set.

  [Learn more](https://valkey.io/commands/smembers/)
+ `SMISMEMBER`

  Determines whether multiple members belong to a set.

  [Learn more](https://valkey.io/commands/smismember/)
+ `SMOVE`

  Moves a member from one set to another.

  [Learn more](https://valkey.io/commands/smove/)
+ `SPOP`

  Returns one or more random members from a set after removing them. Deletes the set if the last member was popped.

  [Learn more](https://valkey.io/commands/spop/)
+ `SRANDMEMBER`

  Returns one or more random members from a set.

  [Learn more](https://valkey.io/commands/srandmember/)
+ `SREM`

  Removes one or more members from a set. Deletes the set if the last member was removed.

  [Learn more](https://valkey.io/commands/srem/)
+ `SSCAN`

  Iterates over members of a set.

  [Learn more](https://valkey.io/commands/sscan/)
+ `SUNION`

  Returns the union of multiple sets.

  [Learn more](https://valkey.io/commands/sunion/)
+ `SUNIONSTORE`

  Stores the union of multiple sets in a key.

  [Learn more](https://valkey.io/commands/sunionstore/)
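
`SDIFF`, `SINTER`, and `SUNION` follow ordinary set algebra, and `SINTERCARD` is the cardinality of the intersection. A local illustration of the semantics (not client code):

```java
import java.util.HashSet;
import java.util.Set;

public class SetAlgebraSemantics {
    // SUNION: members that appear in either set.
    static Set<String> sunion(Set<String> a, Set<String> b) {
        Set<String> out = new HashSet<>(a);
        out.addAll(b);
        return out;
    }

    // SINTER: members that appear in both sets (SINTERCARD is its size).
    static Set<String> sinter(Set<String> a, Set<String> b) {
        Set<String> out = new HashSet<>(a);
        out.retainAll(b);
        return out;
    }

    // SDIFF: members of the first set that are absent from the second.
    static Set<String> sdiff(Set<String> a, Set<String> b) {
        Set<String> out = new HashSet<>(a);
        out.removeAll(b);
        return out;
    }

    public static void main(String[] args) {
        Set<String> a = Set.of("x", "y", "z");
        Set<String> b = Set.of("y", "z", "w");
        System.out.println(sunion(a, b).size()); // 4
        System.out.println(sinter(a, b).size()); // 2, the SINTERCARD result
        System.out.println(sdiff(a, b));         // [x]
    }
}
```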

**Sorted Set Commands**
+ `BZMPOP`

  Removes and returns a member by score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.

  [Learn more](https://valkey.io/commands/bzmpop/)
+ `BZPOPMAX`

  Removes and returns the member with the highest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.

  [Learn more](https://valkey.io/commands/bzpopmax/)
+ `BZPOPMIN`

  Removes and returns the member with the lowest score from one or more sorted sets. Blocks until a member is available otherwise. Deletes the sorted set if the last element was popped.

  [Learn more](https://valkey.io/commands/bzpopmin/)
+ `ZADD`

  Adds one or more members to a sorted set, or updates their scores. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/zadd/)
+ `ZCARD`

  Returns the number of members in a sorted set.

  [Learn more](https://valkey.io/commands/zcard/)
+ `ZCOUNT`

  Returns the count of members in a sorted set that have scores within a range.

  [Learn more](https://valkey.io/commands/zcount/)
+ `ZDIFF`

  Returns the difference between multiple sorted sets.

  [Learn more](https://valkey.io/commands/zdiff/)
+ `ZDIFFSTORE`

  Stores the difference of multiple sorted sets in a key.

  [Learn more](https://valkey.io/commands/zdiffstore/)
+ `ZINCRBY`

  Increments the score of a member in a sorted set.

  [Learn more](https://valkey.io/commands/zincrby/)
+ `ZINTER`

  Returns the intersect of multiple sorted sets.

  [Learn more](https://valkey.io/commands/zinter/)
+ `ZINTERCARD`

  Returns the number of members of the intersect of multiple sorted sets.

  [Learn more](https://valkey.io/commands/zintercard/)
+ `ZINTERSTORE`

  Stores the intersect of multiple sorted sets in a key.

  [Learn more](https://valkey.io/commands/zinterstore/)
+ `ZLEXCOUNT`

  Returns the number of members in a sorted set within a lexicographical range.

  [Learn more](https://valkey.io/commands/zlexcount/)
+ `ZMPOP`

  Returns the highest- or lowest-scoring members from one or more sorted sets after removing them. Deletes the sorted set if the last member was popped.

  [Learn more](https://valkey.io/commands/zmpop/)
+ `ZMSCORE`

  Returns the score of one or more members in a sorted set.

  [Learn more](https://valkey.io/commands/zmscore/)
+ `ZPOPMAX`

  Returns the highest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.

  [Learn more](https://valkey.io/commands/zpopmax/)
+ `ZPOPMIN`

  Returns the lowest-scoring members from a sorted set after removing them. Deletes the sorted set if the last member was popped.

  [Learn more](https://valkey.io/commands/zpopmin/)
+ `ZRANDMEMBER`

  Returns one or more random members from a sorted set.

  [Learn more](https://valkey.io/commands/zrandmember/)
+ `ZRANGE`

  Returns members in a sorted set within a range of indexes.

  [Learn more](https://valkey.io/commands/zrange/)
+ `ZRANGEBYLEX`

  Returns members in a sorted set within a lexicographical range.

  [Learn more](https://valkey.io/commands/zrangebylex/)
+ `ZRANGEBYSCORE`

  Returns members in a sorted set within a range of scores.

  [Learn more](https://valkey.io/commands/zrangebyscore/)
+ `ZRANGESTORE`

  Stores a range of members from sorted set in a key.

  [Learn more](https://valkey.io/commands/zrangestore/)
+ `ZRANK`

  Returns the index of a member in a sorted set ordered by ascending scores.

  [Learn more](https://valkey.io/commands/zrank/)
+ `ZREM`

  Removes one or more members from a sorted set. Deletes the sorted set if all members were removed.

  [Learn more](https://valkey.io/commands/zrem/)
+ `ZREMRANGEBYLEX`

  Removes members in a sorted set within a lexicographical range. Deletes the sorted set if all members were removed.

  [Learn more](https://valkey.io/commands/zremrangebylex/)
+ `ZREMRANGEBYRANK`

  Removes members in a sorted set within a range of indexes. Deletes the sorted set if all members were removed.

  [Learn more](https://valkey.io/commands/zremrangebyrank/)
+ `ZREMRANGEBYSCORE`

  Removes members in a sorted set within a range of scores. Deletes the sorted set if all members were removed.

  [Learn more](https://valkey.io/commands/zremrangebyscore/)
+ `ZREVRANGE`

  Returns members in a sorted set within a range of indexes in reverse order.

  [Learn more](https://valkey.io/commands/zrevrange/)
+ `ZREVRANGEBYLEX`

  Returns members in a sorted set within a lexicographical range in reverse order.

  [Learn more](https://valkey.io/commands/zrevrangebylex/)
+ `ZREVRANGEBYSCORE`

  Returns members in a sorted set within a range of scores in reverse order.

  [Learn more](https://valkey.io/commands/zrevrangebyscore/)
+ `ZREVRANK`

  Returns the index of a member in a sorted set ordered by descending scores.

  [Learn more](https://valkey.io/commands/zrevrank/)
+ `ZSCAN`

  Iterates over members and scores of a sorted set.

  [Learn more](https://valkey.io/commands/zscan/)
+ `ZSCORE`

  Returns the score of a member in a sorted set.

  [Learn more](https://valkey.io/commands/zscore/)
+ `ZUNION`

  Returns the union of multiple sorted sets.

  [Learn more](https://valkey.io/commands/zunion/)
+ `ZUNIONSTORE`

  Stores the union of multiple sorted sets in a key.

  [Learn more](https://valkey.io/commands/zunionstore/)
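
A sorted set orders members by score, breaking ties lexicographically by member name. A local sketch of the ordering behind `ZRANGEBYSCORE` and `ZPOPMIN` (an illustration, not client code):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.TreeSet;

public class SortedSetSemantics {
    record Entry(String member, double score) {}

    // Sorted sets order members by score, breaking ties lexicographically by member.
    static final Comparator<Entry> ORDER =
        Comparator.comparingDouble(Entry::score).thenComparing(Entry::member);

    // ZRANGEBYSCORE: members whose scores fall within [min, max], in order.
    static List<String> zrangeByScore(TreeSet<Entry> zset, double min, double max) {
        List<String> out = new ArrayList<>();
        for (Entry e : zset) {
            if (e.score() >= min && e.score() <= max) out.add(e.member());
        }
        return out;
    }

    public static void main(String[] args) {
        TreeSet<Entry> zset = new TreeSet<>(ORDER);
        zset.add(new Entry("a", 1.0)); // ZADD z 1 a
        zset.add(new Entry("c", 3.0)); // ZADD z 3 c
        zset.add(new Entry("b", 2.0)); // ZADD z 2 b

        System.out.println(zrangeByScore(zset, 1.0, 2.0)); // [a, b]
        System.out.println(zset.pollFirst().member());     // ZPOPMIN z -> a
    }
}
```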

**Stream Commands**
+ `XACK`

  Returns the number of messages that were successfully acknowledged by the consumer group member of a stream.

  [Learn more](https://valkey.io/commands/xack/)
+ `XADD`

  Appends a new message to a stream. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/xadd/)
+ `XAUTOCLAIM`

  Changes, or acquires, ownership of messages in a consumer group, as if the messages were delivered to a consumer group member.

  [Learn more](https://valkey.io/commands/xautoclaim/)
+ `XCLAIM`

  Changes, or acquires, ownership of a message in a consumer group, as if the message was delivered to a consumer group member.

  [Learn more](https://valkey.io/commands/xclaim/)
+ `XDEL`

  Returns the number of messages after removing them from a stream.

  [Learn more](https://valkey.io/commands/xdel/)
+ `XGROUP CREATE`

  Creates a consumer group. 

  [Learn more](https://valkey.io/commands/xgroup-create/)
+ `XGROUP CREATECONSUMER`

  Creates a consumer in a consumer group.

  [Learn more](https://valkey.io/commands/xgroup-createconsumer/)
+ `XGROUP DELCONSUMER`

  Deletes a consumer from a consumer group.

  [Learn more](https://valkey.io/commands/xgroup-delconsumer/)
+ `XGROUP DESTROY`

  Destroys a consumer group.

  [Learn more](https://valkey.io/commands/xgroup-destroy/)
+ `XGROUP SETID`

  Sets the last-delivered ID of a consumer group.

  [Learn more](https://valkey.io/commands/xgroup-setid/)
+ `XINFO CONSUMERS`

  Returns a list of the consumers in a consumer group.

  [Learn more](https://valkey.io/commands/xinfo-consumers/)
+ `XINFO GROUPS`

  Returns a list of the consumer groups of a stream.

  [Learn more](https://valkey.io/commands/xinfo-groups/)
+ `XINFO STREAM`

  Returns information about a stream.

  [Learn more](https://valkey.io/commands/xinfo-stream/)
+ `XLEN`

  Returns the number of messages in a stream.

  [Learn more](https://valkey.io/commands/xlen/)
+ `XPENDING`

  Returns the information and entries from a stream consumer group's pending entries list.

  [Learn more](https://valkey.io/commands/xpending/)
+ `XRANGE`

  Returns the messages from a stream within a range of IDs.

  [Learn more](https://valkey.io/commands/xrange/)
+ `XREAD`

  Returns messages from multiple streams with IDs greater than the ones requested. Blocks until a message is available otherwise.

  [Learn more](https://valkey.io/commands/xread/)
+ `XREADGROUP`

  Returns new or historical messages from a stream for a consumer in a group. Blocks until a message is available otherwise.

  [Learn more](https://valkey.io/commands/xreadgroup/)
+ `XREVRANGE`

  Returns the messages from a stream within a range of IDs in reverse order.

  [Learn more](https://valkey.io/commands/xrevrange/)
+ `XTRIM`

  Deletes messages from the beginning of a stream.

  [Learn more](https://valkey.io/commands/xtrim/)
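
For example, several of the stream commands above work together in a typical consumer group workflow: a producer appends messages with XADD, and consumers read and acknowledge them through a consumer group. The following is an illustrative valkey-cli session (the stream ID shown is server-generated, so your output will differ):

```
127.0.0.1:6379> XADD events * action login user alice
"1681327156273-0"
127.0.0.1:6379> XGROUP CREATE events mygroup 0
OK
127.0.0.1:6379> XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS events >
1) 1) "events"
   2) 1) 1) "1681327156273-0"
         2) 1) "action"
            2) "login"
            3) "user"
            4) "alice"
127.0.0.1:6379> XACK events mygroup 1681327156273-0
(integer) 1
```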

**String Commands**
+ `APPEND`

  Appends a string to the value of a key. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/append/)
+ `DECR`

  Decrements the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.

  [Learn more](https://valkey.io/commands/decr/)
+ `DECRBY`

  Decrements a number from the integer value of a key. Uses 0 as initial value if the key doesn't exist.

  [Learn more](https://valkey.io/commands/decrby/)
+ `GET`

  Returns the string value of a key.

  [Learn more](https://valkey.io/commands/get/)
+ `GETDEL`

  Returns the string value of a key after deleting the key.

  [Learn more](https://valkey.io/commands/getdel/)
+ `GETEX`

  Returns the string value of a key after setting its expiration time.

  [Learn more](https://valkey.io/commands/getex/)
+ `GETRANGE`

  Returns a substring of the string stored at a key.

  [Learn more](https://valkey.io/commands/getrange/)
+ `GETSET`

  Returns the previous string value of a key after setting it to a new value.

  [Learn more](https://valkey.io/commands/getset/)
+ `INCR`

  Increments the integer value of a key by one. Uses 0 as initial value if the key doesn't exist.

  [Learn more](https://valkey.io/commands/incr/)
+ `INCRBY`

  Increments the integer value of a key by a number. Uses 0 as initial value if the key doesn't exist.

  [Learn more](https://valkey.io/commands/incrby/)
+ `INCRBYFLOAT`

  Increments the floating point value of a key by a number. Uses 0 as initial value if the key doesn't exist.

  [Learn more](https://valkey.io/commands/incrbyfloat/)
+ `LCS`

  Finds the longest common substring.

  [Learn more](https://valkey.io/commands/lcs/)
+ `MGET`

  Atomically returns the string values of one or more keys.

  [Learn more](https://valkey.io/commands/mget/)
+ `MSET`

  Atomically creates or modifies the string values of one or more keys.

  [Learn more](https://valkey.io/commands/mset/)
+ `MSETNX`

  Atomically modifies the string values of one or more keys only when all keys don't exist.

  [Learn more](https://valkey.io/commands/msetnx/)
+ `PSETEX`

  Sets both string value and expiration time in milliseconds of a key. The key is created if it doesn't exist.

  [Learn more](https://valkey.io/commands/psetex/)
+ `SET`

  Sets the string value of a key, ignoring its type. The key is created if it doesn't exist.

  [Learn more](https://valkey.io/commands/set/)
+ `SETEX`

  Sets the string value and expiration time of a key. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/setex/)
+ `SETNX`

  Sets the string value of a key only when the key doesn't exist.

  [Learn more](https://valkey.io/commands/setnx/)
+ `SETRANGE`

  Overwrites a part of a string value with another by an offset. Creates the key if it doesn't exist.

  [Learn more](https://valkey.io/commands/setrange/)
+ `STRLEN`

  Returns the length of a string value.

  [Learn more](https://valkey.io/commands/strlen/)
+ `SUBSTR`

  Returns a substring from a string value.

  [Learn more](https://valkey.io/commands/substr/)
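
For example, the following illustrative valkey-cli session combines several of the string commands above (key names are arbitrary):

```
127.0.0.1:6379> SET greeting "Hello"
OK
127.0.0.1:6379> APPEND greeting ", World"
(integer) 12
127.0.0.1:6379> GETRANGE greeting 0 4
"Hello"
127.0.0.1:6379> INCR counter
(integer) 1
127.0.0.1:6379> INCRBY counter 9
(integer) 10
```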

**Transaction Commands**
+ `DISCARD`

  Discards a transaction.

  [Learn more](https://valkey.io/commands/discard/)
+ `EXEC`

  Executes all commands in a transaction.

  [Learn more](https://valkey.io/commands/exec/)
+ `MULTI`

  Starts a transaction.

  [Learn more](https://valkey.io/commands/multi/)
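
For example, the three transaction commands above are used together: MULTI opens the transaction, subsequent commands are queued, and EXEC runs them atomically. The following is an illustrative valkey-cli session:

```
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379(TX)> SET balance 100
QUEUED
127.0.0.1:6379(TX)> INCRBY balance 50
QUEUED
127.0.0.1:6379(TX)> EXEC
1) OK
2) (integer) 150
```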

## Restricted Valkey and Redis OSS commands
<a name="RestrictedCommandsRedis"></a>

To deliver a managed service experience, ElastiCache restricts access to certain cache engine-specific commands that require advanced privileges. For caches running Redis OSS, the following commands are unavailable:
+ `acl setuser`
+ `acl load`
+ `acl save`
+ `acl deluser`
+ `bgrewriteaof`
+ `bgsave`
+ `cluster addslot`
+ `cluster addslotsrange`
+ `cluster bumpepoch`
+ `cluster delslot`
+ `cluster delslotsrange`
+ `cluster failover`
+ `cluster flushslots`
+ `cluster forget`
+ `cluster links`
+ `cluster meet`
+ `cluster setslot`
+ `config`
+ `debug`
+ `migrate`
+ `psync`
+ `replicaof`
+ `save`
+ `slaveof`
+ `shutdown`
+ `sync`

In addition, the following commands are unavailable for serverless caches:
+ `acl log`
+ `client caching`
+ `client getredir`
+ `client id`
+ `client info`
+ `client kill`
+ `client list`
+ `client no-evict`
+ `client pause`
+ `client tracking`
+ `client trackinginfo`
+ `client unblock`
+ `client unpause`
+ `cluster count-failure-reports`
+ `commandlog`
+ `commandlog get`
+ `commandlog help`
+ `commandlog len`
+ `commandlog reset`
+ `fcall`
+ `fcall_ro`
+ `function`
+ `function delete`
+ `function dump`
+ `function flush`
+ `function help`
+ `function kill`
+ `function list`
+ `function load`
+ `function restore`
+ `function stats`
+ `keys`
+ `lastsave`
+ `latency`
+ `latency doctor`
+ `latency graph`
+ `latency help`
+ `latency histogram`
+ `latency history`
+ `latency latest`
+ `latency reset`
+ `memory`
+ `memory doctor`
+ `memory help`
+ `memory malloc-stats`
+ `memory purge`
+ `memory stats`
+ `memory usage`
+ `monitor`
+ `move`
+ `object`
+ `object encoding`
+ `object freq`
+ `object help`
+ `object idletime`
+ `object refcount`
+ `pfdebug`
+ `pfselftest`
+ `psubscribe`
+ `pubsub numpat`
+ `punsubscribe`
+ `script kill`
+ `slowlog`
+ `slowlog get`
+ `slowlog help`
+ `slowlog len`
+ `slowlog reset`
+ `swapdb`
+ `wait`

## Supported Memcached commands
<a name="SupportedCommandsMem"></a>

ElastiCache Serverless for Memcached supports all of the memcached [commands](https://github.com/memcached/memcached/wiki/Commands) in open source memcached 1.6, with the following exceptions: 
+ Client connections require TLS; as a result, the UDP protocol is not supported.
+ The binary protocol is not supported, because it is officially [deprecated](https://github.com/memcached/memcached/wiki/ReleaseNotes160) in memcached 1.6.
+ `GET/GETS` commands are limited to 16 KB to prevent potential denial-of-service attacks that fetch a large number of keys from the server.
+ A delayed `flush_all` command is rejected with `CLIENT_ERROR`.
+ Commands that configure the engine or reveal internal information about engine state or logs are not supported, such as:
  + `stats` – only `stats` and `stats reset` are supported; other variations return `ERROR`
  + `lru / lru_crawler` – modifies LRU and LRU crawler settings
  + `watch` – watches memcached server logs
  + `verbosity` – configures the server log level
  + `me` – the meta debug command

# Valkey and Redis OSS configuration and limits
<a name="RedisConfiguration"></a>

The Valkey and Redis OSS engines each provide a number of configuration parameters. Some of these are modifiable in ElastiCache, while others are not modifiable in order to provide stable performance and reliability.

## Serverless caches
<a name="RedisConfiguration.Serverless"></a>

For serverless caches, parameter groups are not used, and the Valkey or Redis OSS configuration is not modifiable. The following Valkey or Redis OSS parameter values are in place:



|  Name  |  Details  |  Description  | 
| --- | --- | --- | 
| acl-pubsub-default | `allchannels` | Default pubsub channel permissions for ACL users on the cache. | 
| client-output-buffer-limit | `normal 0 0 0` `pubsub 32mb 8mb 60` | Normal clients have no buffer limit. PUB/SUB clients will be disconnected if they breach 32MiB backlog, or breach 8MiB backlog for 60s. | 
| client-query-buffer-limit | 1 GiB | The maximum size of a single client query buffer. Additionally, clients cannot issue a request with more than 3,999 arguments. | 
| cluster-allow-pubsubshard-when-down | yes | This allows the cache to serve pubsub traffic while the cache is partially down. | 
| cluster-allow-reads-when-down | yes | This allows the cache to serve read traffic while the cache is partially down. | 
| cluster-enabled | yes | All serverless caches are cluster mode enabled, which allows them to transparently partition their data across multiple backend shards. All slots are surfaced to clients as being owned by a single virtual node. | 
| cluster-require-full-coverage | no | When the keyspace is partially down (i.e. at least one hash slot is inaccessible), the cache will continue accepting queries for the part of the keyspace that is still covered. The entire keyspace will always be "covered" by a single virtual node in cluster slots. | 
| lua-time-limit | 5000 | The maximum execution time for a Lua script, in milliseconds, before ElastiCache takes action to stop the script. If `lua-time-limit` is exceeded, all Valkey or Redis OSS commands may return an error of the form *-BUSY*. Since this state can cause interference with many essential Valkey or Redis OSS operations, ElastiCache will first issue a *SCRIPT KILL* command. If this is unsuccessful, ElastiCache will forcibly restart Valkey or Redis OSS. | 
| maxclients | 65000 | The maximum number of clients that can be connected to the cache at one time. Further connections established may or may not succeed. | 
| maxmemory-policy | volatile-lru | Items with a TTL set are evicted following least-recently-used (LRU) estimation when a cache's memory limit is reached. | 
| notify-keyspace-events | (an empty string) | Keyspace events are currently not supported on serverless caches. | 
| port | Primary port: 6379 Read port: 6380 | Serverless caches advertise two ports with the same hostname. The primary port allows writes and reads, whereas the read port allows lower-latency eventually-consistent reads using the READONLY command. | 
| proto-max-bulk-len | 512 MiB | The maximum size of a single element request. | 
| timeout | 0 | Clients are not forcibly disconnected at a specific idle time, but they may be disconnected during steady-state for load balancing purposes. | 
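
For example, the two advertised ports described above can be used as follows from valkey-cli. The hostname shown is a placeholder; issuing READONLY on the read port enables lower-latency, eventually consistent reads:

```
# writes and strongly consistent reads use the primary port (6379)
valkey-cli -h my-cache-abcd12.serverless.use1.cache.amazonaws.com -p 6379 --tls
my-cache-abcd12.serverless.use1.cache.amazonaws.com:6379> SET mykey myvalue
OK

# eventually consistent reads use the read port (6380) after READONLY
valkey-cli -h my-cache-abcd12.serverless.use1.cache.amazonaws.com -p 6380 --tls
my-cache-abcd12.serverless.use1.cache.amazonaws.com:6380> READONLY
OK
my-cache-abcd12.serverless.use1.cache.amazonaws.com:6380> GET mykey
"myvalue"
```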

Additionally, the following limits are in place:



|  Name  |  Details  |  Description  | 
| --- | --- | --- | 
| Size per cache | 5,000 GiB | Maximum amount of data that can be stored per serverless cache. | 
| Size per slot | 32 GiB | The maximum size of a single Valkey or Redis OSS hash slot. Clients trying to set more data than this on a single Valkey or Redis OSS slot will trigger the eviction policy on the slot, and if no keys are evictable, will receive an out of memory (OOM) error. | 
| ECPU per cache | 15,000,000 ECPU/second | ElastiCache Processing Units (ECPU) metric. The number of ECPUs consumed by your requests depends on the vCPU time taken and the amount of data transferred. | 
| ECPU per slot | 30K - 90K ECPU/second | Maximum of 30K ECPUs/second per slot or 90K ECPUs/second when using Read from Replica using READONLY connections. | 
| Arguments per Request | 3,999 | Maximum number of arguments per request. Clients sending more arguments per request will receive an error. | 
| Key name length | 4 KiB | The maximum size for a single Valkey or Redis OSS key or channel name. Clients referencing keys larger than this will receive an error. | 
| Lua script size | 4 MiB | The maximum size of a single Valkey or Redis OSS Lua script. Attempts to load a Lua script larger than this will receive an error. | 

## Node-based clusters
<a name="RedisConfiguration.SelfDesigned"></a>

For node-based clusters, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis) for the default values of configuration parameters and which are configurable. The default values are generally recommended unless you have a specific use case requiring them to be overridden.

# IPv6 client examples for Valkey, Memcached, and Redis OSS
<a name="network-type-best-practices"></a>

ElastiCache is compatible with Valkey, Memcached, and Redis OSS. This means that clients that support IPv6 connections should be able to connect to IPv6-enabled ElastiCache clusters. There are some caveats worth noting when interacting with IPv6-enabled resources.

You can view the [Best practices for Valkey and Redis clients](https://aws.amazon.com/blogs/database/best-practices-redis-clients-and-amazon-elasticache-for-redis/) blog post on the AWS Database Blog for recommendations on configuring Valkey and Redis OSS clients for ElastiCache resources.

Following are best practices for interacting with IPv6 enabled ElastiCache resources with commonly used open-source client libraries. 

## Validated clients with Valkey and Redis OSS
<a name="network-type-validated-clients-redis"></a>

ElastiCache is compatible with Valkey and open-source Redis OSS. This means that Valkey and open source Redis OSS clients that support IPv6 connections should be able to connect to IPv6-enabled ElastiCache for Redis OSS clusters. In addition, several of the most popular Python and Java clients have been specifically tested and validated to work with all supported network type configurations (IPv4 only, IPv6 only, and Dual Stack).

The following clients have specifically been validated to work with all supported network type configurations for Valkey and Redis OSS.

Validated Clients:
+ [redis-py](https://github.com/redis/redis-py) – [Version: 4.1.2](https://github.com/redis/redis-py/tree/v4.1.2)
+ [Lettuce](https://lettuce.io/) – [Version: 6.1.6.RELEASE](https://github.com/lettuce-io/lettuce-core/tree/6.1.6.RELEASE)
+ [Jedis](https://github.com/redis/jedis) – [Version: 3.6.0](https://github.com/redis/jedis/tree/jedis-3.6.0)

# Best practices for clients (Valkey and Redis OSS)
<a name="BestPractices.Clients.redis"></a>

Learn best practices for common scenarios and follow along with code examples of some of the most popular open source Valkey and Redis OSS client libraries (redis-py, PHPRedis, and Lettuce), as well as best practices for interacting with ElastiCache resources with commonly used open-source Memcached client libraries.

**Topics**
+ [Large number of connections (Valkey and Redis OSS)](BestPractices.Clients.Redis.Connections.md)
+ [Cluster client discovery and exponential backoff (Valkey and Redis OSS)](BestPractices.Clients.Redis.Discovery.md)
+ [Configure a client-side timeout (Valkey and Redis OSS)](BestPractices.Clients.Redis.ClientTimeout.md)
+ [Configure a server-side idle timeout (Valkey and Redis OSS)](BestPractices.Clients.Redis.ServerTimeout.md)
+ [Lua scripts](BestPractices.Clients.Redis.LuaScripts.md)
+ [Storing large composite items (Valkey and Redis OSS)](BestPractices.Clients.Redis.LargeItems.md)
+ [Lettuce client configuration (Valkey and Redis OSS)](BestPractices.Clients-lettuce.md)
+ [Configuring a preferred protocol for dual stack clusters (Valkey and Redis OSS)](#network-type-configuring-dual-stack-redis)

# Large number of connections (Valkey and Redis OSS)
<a name="BestPractices.Clients.Redis.Connections"></a>

Serverless caches and individual ElastiCache for Redis OSS nodes support up to 65,000 concurrent client connections. However, to optimize for performance, we advise that client applications do not constantly operate at that level of connections. Valkey and Redis OSS each have a single-threaded process based on an event loop where incoming client requests are handled sequentially. That means the response time of a given client becomes longer as the number of connected clients increases.

You can take the following set of actions to avoid hitting a connection bottleneck on a Valkey or Redis OSS server:
+ Perform read operations from read replicas. This can be done by using the ElastiCache reader endpoints in cluster mode disabled or by using replicas for reads in cluster mode enabled, including a serverless cache.
+ Distribute write traffic across multiple primary nodes. You can do this in two ways. You can use a multi-sharded Valkey or Redis OSS cluster with a cluster mode capable client. You could also write to multiple primary nodes in cluster mode disabled with client-side sharding. This is done automatically in a serverless cache.
+ Use a connection pool when available in your client library.

In general, creating a TCP connection is a computationally expensive operation compared to typical Valkey or Redis OSS commands. For example, handling a SET/GET request is an order of magnitude faster when reusing an existing connection. Using a client connection pool with a finite size reduces the overhead of connection management. It also bounds the number of concurrent incoming connections from the client application.

The following code example of PHPRedis shows that a new connection is created for each new user request:

```
$redis = new Redis();
if ($redis->connect($HOST, $PORT) != TRUE) {
	//ERROR: connection failed
	return;
}
$redis->set($key, $value);
unset($redis);
$redis = NULL;
```

We benchmarked this code in a loop on an Amazon Elastic Compute Cloud (Amazon EC2) instance connected to a Graviton2 (m6g.2xlarge) ElastiCache for Redis OSS node. We placed both the client and server in the same Availability Zone. The average latency of the entire operation was 2.82 milliseconds.

When we updated the code and used persistent connections and a connection pool, the average latency of the entire operation was 0.21 milliseconds:

```
$redis = new Redis();
if ($redis->pconnect($HOST, $PORT) != TRUE) {
	// ERROR: connection failed
	return;
}
$redis->set($key, $value);
unset($redis);
$redis = NULL;
```

Required redis.ini configurations:
+ `redis.pconnect.pooling_enabled=1`
+ `redis.pconnect.connection_limit=10`

The following code is an example of a [Redis-py connection pool](https://redis.readthedocs.io/en/stable/):

```
conn = Redis(connection_pool=redis.BlockingConnectionPool(host=HOST, max_connections=10))
conn.set(key, value)
```

The following code is an example of a [Lettuce connection pool](https://lettuce.io/core/release/reference/#_connection_pooling):

```
RedisClient client = RedisClient.create(RedisURI.create(HOST, PORT));
GenericObjectPool<StatefulRedisConnection> pool = ConnectionPoolSupport.createGenericObjectPool(() -> client.connect(), new GenericObjectPoolConfig());
pool.setMaxTotal(10); // Configure max connections to 10
try (StatefulRedisConnection connection = pool.borrowObject()) {
	RedisCommands syncCommands = connection.sync();
	syncCommands.set(key, value);
}
```

# Cluster client discovery and exponential backoff (Valkey and Redis OSS)
<a name="BestPractices.Clients.Redis.Discovery"></a>

When connecting to an ElastiCache Valkey or Redis OSS cluster in cluster mode enabled, the corresponding client library must be cluster aware. The clients must obtain a map of hash slots to the corresponding nodes in the cluster in order to send requests to the right nodes and avoid the performance overhead of handling cluster redirections. As a result, the client must discover a complete list of slots and the mapped nodes in two different situations:
+ The client is initialized and must populate the initial slots configuration
+ A MOVED redirection is received from the server, such as in the situation of a failover when all slots served by the former primary node are taken over by the replica, or re-sharding when slots are being moved from the source primary to the target primary node

Client discovery is usually done by issuing a CLUSTER SLOTS or CLUSTER NODES command to the Valkey or Redis OSS server. We recommend the CLUSTER SLOTS method because it returns the set of slot ranges and the associated primary and replica nodes back to the client. This doesn't require additional parsing from the client and is more efficient.

Depending on the cluster topology, the size of the response for the CLUSTER SLOT command can vary based on the cluster size. Larger clusters with more nodes produce a larger response. As a result, it's important to ensure that the number of clients doing the cluster topology discovery doesn't grow unbounded. For example, when the client application starts up or loses connection from the server and must perform cluster discovery, one common mistake is that the client application fires several reconnection and discovery requests without adding exponential backoff upon retry. This can render the Valkey or Redis OSS server unresponsive for a prolonged period of time, with the CPU utilization at 100%. The outage is prolonged if each CLUSTER SLOT command must process a large number of nodes in the cluster bus. We have observed multiple client outages in the past due to this behavior across a number of different languages including Python (redis-py-cluster) and Java (Lettuce and Redisson).

In a serverless cache, many of the problems are automatically mitigated because the advertised cluster topology is static and consists of two entries: a write endpoint and a read endpoint. Cluster discovery is also automatically spread over multiple nodes when using the cache endpoint. The following recommendations are still useful, however.

To mitigate the impact caused by a sudden influx of connection and discovery requests, we recommend the following:
+ Implement a client connection pool with a finite size to bound the number of concurrent incoming connections from the client application.
+ When the client disconnects from the server due to timeout, retry with exponential backoff with jitter. This helps to avoid multiple clients overwhelming the server at the same time.
+ Use the guide at [Finding connection endpoints in ElastiCache](Endpoints.md) to find the cluster endpoint to perform cluster discovery. In doing so, you spread the discovery load across all nodes in the cluster (up to 90) instead of hitting a few hardcoded seed nodes in the cluster.

The following are some code examples for exponential backoff retry logic in redis-py, PHPRedis, and Lettuce.

**Backoff logic sample 1: redis-py**

redis-py has a built-in retry mechanism that retries one time immediately after a failure. This mechanism can be enabled through the `retry_on_timeout` argument supplied when creating a [Redis](https://redis.readthedocs.io/en/stable/examples/connection_examples.html#redis.Redis) object. Here we demonstrate a custom retry mechanism with exponential backoff and jitter. We've submitted a pull request to natively implement exponential backoff in [redis-py (#1494)](https://github.com/andymccurdy/redis-py/pull/1494). In the future it may not be necessary to implement it manually.

```
import random
from time import sleep

def run_with_backoff(function, retries=5):
    base_backoff = 0.1  # base 100 millisecond backoff
    max_backoff = 10    # sleep for a maximum of 10 seconds
    tries = 0
    while True:
        try:
            return function()
        except (ConnectionError, TimeoutError):
            if tries >= retries:
                raise
            backoff = min(max_backoff, base_backoff * (pow(2, tries) + random.random()))
            print(f"sleeping for {backoff:.2f}s")
            sleep(backoff)
            tries += 1
```

You can then use the following code to set a value:

```
client = redis.Redis(connection_pool=redis.BlockingConnectionPool(host=HOST, max_connections=10))
res = run_with_backoff(lambda: client.set("key", "value"))
print(res)
```

Depending on your workload, you might want to adjust the base backoff value (100 milliseconds in the example above) down to a few tens of milliseconds for latency-sensitive workloads.

**Backoff logic sample 2: PHPRedis**

PHPRedis has a built-in retry mechanism that retries a (non-configurable) maximum of 10 times. There is a configurable delay between tries (with jitter from the second retry onwards). For more information, see the following [sample code](https://github.com/phpredis/phpredis/blob/b0b9dd78ef7c15af936144c1b17df1a9273d72ab/library.c#L335-L368). We've submitted a pull request to natively implement exponential backoff in [PHPRedis (#1986)](https://github.com/phpredis/phpredis/pull/1986) that has since been merged and [documented](https://github.com/phpredis/phpredis/blob/develop/README.md#retry-and-backoff). If you're on the latest release of PHPRedis, you won't need to implement it manually, but we've included the reference here for those on previous versions. For now, the following is a code example that configures the delay of the retry mechanism:

```
$timeout = 0.1; // 100 millisecond connection timeout
$retry_interval = 100; // 100 millisecond retry interval
$client = new Redis();
if($client->pconnect($HOST, $PORT, $timeout, NULL, $retry_interval) != TRUE) {
	return; // ERROR: connection failed
}
$client->set($key, $value);
```

**Backoff logic sample 3: Lettuce**

Lettuce has built-in retry mechanisms based on the exponential backoff strategies described in the post [Exponential Backoff and Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/). The following is a code excerpt showing the full jitter approach:

```
public static void main(String[] args)
{
	ClientResources resources = null;
	RedisClient client = null;
	StatefulRedisConnection<String, String> connection = null;

	try {
		resources = DefaultClientResources.builder()
				.reconnectDelay(Delay.fullJitter(
			Duration.ofMillis(100),     // minimum 100 millisecond delay
			Duration.ofSeconds(5),      // maximum 5 second delay
			100, TimeUnit.MILLISECONDS) // 100 millisecond base
		).build();

		client = RedisClient.create(resources, RedisURI.create(HOST, PORT));
		client.setOptions(ClientOptions.builder()
	.socketOptions(SocketOptions.builder().connectTimeout(Duration.ofMillis(100)).build()) // 100 millisecond connection timeout
	.timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.ofSeconds(5)).build()) // 5 second command timeout
	.build());

	    // use the connection pool from above example
	} finally {
		if (connection != null) {
			connection.close();
		}

		if (client != null){
			client.shutdown();
		}

		if (resources != null){
			resources.shutdown();
		}

	}
}
```

# Configure a client-side timeout (Valkey and Redis OSS)
<a name="BestPractices.Clients.Redis.ClientTimeout"></a>

**Configuring the client-side timeout**

Configure the client-side timeout appropriately to allow the server sufficient time to process the request and generate the response. This also allows the client to fail fast if the connection to the server can't be established. Certain Valkey or Redis OSS commands can be more computationally expensive than others, for example Lua scripts or MULTI/EXEC transactions that contain multiple commands that must be run atomically. In general, a higher client-side timeout is recommended so that the client doesn't time out before the response is received from the server, including in the following cases:
+ Running commands across multiple keys
+ Running MULTI/EXEC transactions or Lua scripts that consist of multiple individual Valkey or Redis OSS commands
+ Reading large values
+ Performing blocking operations such as BLPOP

In case of a blocking operation such as BLPOP, the best practice is to set the command timeout to a number lower than the socket timeout.

The following are code examples for implementing a client-side timeout in redis-py, PHPRedis, and Lettuce.

**Timeout configuration sample 1: redis-py**

The following is a code example with redis-py:

```
# connect to Redis server with a 100 millisecond timeout
# give every Redis command a 2 second timeout
client = redis.Redis(connection_pool=redis.BlockingConnectionPool(host=HOST, max_connections=10, socket_connect_timeout=0.1, socket_timeout=2))

res = client.set("key", "value") # will timeout after 2 seconds
print(res)                       # if there is a connection error

res = client.blpop("list", timeout=1) # will timeout after 1 second
                                      # less than the 2 second socket timeout
print(res)
```

**Timeout config sample 2: PHPRedis**

The following is a code example with PHPRedis:

```
// connect to the Redis server with a 100 millisecond timeout
// give every Redis command a 2 second timeout
$timeout = 0.1;        // 100 millisecond connection timeout
$retry_interval = 100; // 100 millisecond retry interval
$client = new Redis();
if ($client->pconnect($HOST, $PORT, $timeout, NULL, $retry_interval, $read_timeout = 2) != TRUE) {
	return; // ERROR: connection failed
}

$res = $client->set("key", "value"); // will timeout after 2 seconds
print "$res\n";                      // if there is a connection error

$res = $client->blpop("list", 1); // will timeout after 1 second
print "$res\n";                   // less than the 2 second socket timeout
```

**Timeout config sample 3: Lettuce**

The following is a code example with Lettuce:

```
// connect to Redis server and give every command a 2 second timeout
public static void main(String[] args)
{
	RedisClient client = null;
	StatefulRedisConnection<String, String> connection = null;
	try {
		client = RedisClient.create(RedisURI.create(HOST, PORT));
		client.setOptions(ClientOptions.builder()
	.socketOptions(SocketOptions.builder().connectTimeout(Duration.ofMillis(100)).build()) // 100 millisecond connection timeout
	.timeoutOptions(TimeoutOptions.builder().fixedTimeout(Duration.ofSeconds(2)).build()) // 2 second command timeout 
	.build());

		// use the connection pool from the earlier example, or connect directly:
		connection = client.connect();
		RedisCommands<String, String> commands = connection.sync();

		commands.set("key", "value"); // will timeout after 2 seconds
		commands.blpop(1, "list");    // BLPOP with 1 second timeout
	} finally {
		if (connection != null) {
			connection.close();
		}

		if (client != null){
			client.shutdown();
		}
	}
}
```

# Configure a server-side idle timeout (Valkey and Redis OSS)
<a name="BestPractices.Clients.Redis.ServerTimeout"></a>

We have observed cases where a customer's application has a high number of idle clients connected but isn't actively sending commands. In such scenarios, idle clients alone can exhaust all 65,000 connections. To avoid this, configure the `timeout` setting appropriately on the server through [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis). The server then actively disconnects idle clients to prevent connection growth. This setting is not available on serverless caches.
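
As a concrete illustration, the `timeout` parameter can be applied through a custom parameter group. The following is a minimal boto3 sketch; the parameter group name is a placeholder, and the API call itself is commented out because it requires AWS credentials and an existing parameter group:

```python
# Sketch: configure a 300 second server-side idle timeout via a parameter group.
# "my-valkey-params" is a hypothetical parameter group name.
idle_timeout = {"ParameterName": "timeout", "ParameterValue": "300"}

# import boto3
# client = boto3.client("elasticache")
# client.modify_cache_parameter_group(
#     CacheParameterGroupName="my-valkey-params",
#     ParameterNameValues=[idle_timeout],
# )
print(idle_timeout)
```

With this setting, the server closes connections that have been idle for more than 300 seconds.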

# Lua scripts
<a name="BestPractices.Clients.Redis.LuaScripts"></a>

Valkey and Redis OSS support more than 200 commands, including commands to run Lua scripts. However, Lua scripts have several pitfalls that can affect the memory and availability of Valkey or Redis OSS.

**Unparameterized Lua scripts**

Each Lua script is cached on the Valkey or Redis OSS server before it runs. Because every unparameterized Lua script is unique, the server can end up caching a large number of scripts and consuming more memory. To mitigate this, ensure that all Lua scripts are parameterized, and run SCRIPT FLUSH periodically to clean up cached scripts if needed.

Also be aware that input keys must be provided. If no keys are passed for the KEYS parameter, the script fails. For example, this does not work: 

```
serverless-test-lst4hg.serverless.use1.cache.amazonaws.com:6379> eval 'return "Hello World"' 0
(error) ERR Lua scripts without any input keys are not supported.
```

This will work:

```
serverless-test-lst4hg.serverless.use1.cache.amazonaws.com:6379> eval 'return redis.call("get", KEYS[1])' 1 mykey-2
"myvalue-2"
```

The following example shows how to use parameterized scripts. First, we have an example of an unparameterized approach that results in three different cached Lua scripts and is not recommended:

```
eval "return redis.call('set','key1','1')" 0
eval "return redis.call('set','key2','2')" 0
eval "return redis.call('set','key3','3')" 0
```

Instead, use the following pattern to create a single script that can accept passed parameters:

```
eval "return redis.call('set',KEYS[1],ARGV[1])" 1 key1 1 
eval "return redis.call('set',KEYS[1],ARGV[1])" 1 key2 2 
eval "return redis.call('set',KEYS[1],ARGV[1])" 1 key3 3
```

**Long-running Lua scripts**

Lua scripts can run multiple commands atomically, so a script can take longer to complete than a regular Valkey or Redis OSS command. If the script runs only read-only operations, you can stop it in the middle. However, as soon as the script performs a write operation, it becomes unkillable and must run to completion. A long-running Lua script that mutates data can make the Valkey or Redis OSS server unresponsive for a long time. To mitigate this issue, avoid long-running Lua scripts and test your scripts in a pre-production environment.

**Lua script with stealth writes**

There are a few ways a Lua script can continue to write new data into Valkey or Redis OSS even when the server is over `maxmemory`:
+ The script starts when the server is below `maxmemory` and contains multiple write operations.
+ The script's first write command doesn't consume memory (such as DEL), and is followed by more write operations that do.

You can mitigate this problem by configuring an eviction policy other than `noeviction`. This allows the server to evict items and free up memory between Lua scripts.

# Storing large composite items (Valkey and Redis OSS)
<a name="BestPractices.Clients.Redis.LargeItems"></a>

In some scenarios, an application might store large composite items in Valkey or Redis OSS (such as a multi-GB hash dataset). This is not a recommended practice, because it often leads to performance problems. For example, a client can run an HGETALL command to retrieve an entire multi-GB hash collection, which generates significant memory pressure on the server as it buffers the large item in the client output buffer. Also, for slot migration in cluster mode, ElastiCache doesn't migrate slots that contain items with a serialized size larger than 256 MB.

To solve the large item problems, we have the following recommendations:
+ Break up the large composite item into multiple smaller items. For example, break up a large hash collection into individual key-value fields with a key name scheme that appropriately reflects the collection, such as using a common prefix in the key name to identify the collection of items. If you must access multiple fields in the same collection atomically, you can use the MGET command to retrieve multiple key-values in the same command.
+ If you evaluated all options and still can't break up the large collection dataset, try to use commands that operate on a subset of the data in the collection instead of the entire collection. Avoid having a use case that requires you to atomically retrieve the entire multi-GB collection in the same command. One example is using HGET or HMGET commands instead of HGETALL on hash collections.
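
To make the first recommendation concrete, the following redis-py sketch shows one possible key-naming scheme. The prefix, field names, and endpoint are hypothetical, and the calls that need a live cache are shown commented out:

```python
# Hypothetical key-naming scheme: store each field of a large hash as its own
# key under a common prefix, then fetch only the needed subset with MGET.
PREFIX = "user_profiles"  # hypothetical collection name

def field_key(field: str) -> str:
    return f"{PREFIX}:{field}"

fields_needed = ["alice", "bob"]
keys = [field_key(f) for f in fields_needed]

# The calls below need a live cache, so they are commented out:
# import redis
# r = redis.Redis(host="my-cache-endpoint", port=6379)  # hypothetical endpoint
# r.set(field_key("alice"), "...")   # one SET per field instead of one big HSET
# values = r.mget(keys)              # one round trip for just the needed fields
print(keys)
```

Because each field is its own key, the cluster can also distribute the collection across slots instead of pinning it to a single node.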

# Lettuce client configuration (Valkey and Redis OSS)
<a name="BestPractices.Clients-lettuce"></a>

This section describes the recommended Java and Lettuce configuration options, and how they apply to ElastiCache clusters.

The recommendations in this section were tested with Lettuce version 6.2.2.

**Topics**
+ [Example: Lettuce config for cluster mode, TLS enabled](BestPractices.Clients-lettuce-cme.md)
+ [Example: Lettuce config for cluster mode disabled, TLS enabled](BestPractices.Clients-lettuce-cmd.md)

**Java DNS cache TTL**

The Java virtual machine (JVM) caches DNS name lookups. When the JVM resolves a hostname to an IP address, it caches the IP address for a specified period of time, known as the *time-to-live* (TTL).

The choice of TTL value is a trade-off between latency and responsiveness to change. With shorter TTLs, DNS resolvers notice updates in the cluster's DNS faster. This can make your application respond faster to replacements or other workflows that your cluster undergoes. However, if the TTL is too low, it increases the query volume, which can increase the latency of your application. While there is no correct TTL value, it's worth considering the length of time that you can afford to wait for a change to take effect when setting your TTL value.

Because ElastiCache nodes use DNS name entries that might change, we recommend that you configure your JVM with a low TTL of 5 to 10 seconds. This ensures that when a node's IP address changes, your application will be able to receive and use the resource's new IP address by requerying the DNS entry.

On some Java configurations, the JVM default TTL is set so that it will never refresh DNS entries until the JVM is restarted.

For details on how to set your JVM TTL, see [How to set the JVM TTL](https://docs.aws.amazon.com/sdk-for-java/v1/developer-guide/java-dg-jvm-ttl.html#how-to-set-the-jvm-ttl).

**Lettuce version**

We recommend using Lettuce version 6.2.2 or later.

**Endpoints**

When you're using cluster mode enabled clusters, set the `redisUri` to the cluster configuration endpoint. The DNS lookup for this URI returns a list of all available nodes in the cluster, and is randomly resolved to one of them during the cluster initialization. For more details about how topology refresh works, see *dynamicRefreshSources* later in this topic.

**SocketOption**

Enable [KeepAlive](https://lettuce.io/core/release/api/io/lettuce/core/SocketOptions.KeepAliveOptions.html). Enabling this option reduces the need to handle failed connections during command runtime.

Ensure that you set the [Connection timeout](https://lettuce.io/core/release/api/io/lettuce/core/SocketOptions.Builder.html#connectTimeout-java.time.Duration-) based on your application requirements and workload. For more information, see the Timeouts section later in this topic.

**ClusterClientOption: Cluster Mode Enabled client options**

Enable [AutoReconnect](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterClientOptions.Builder.html#autoReconnect-boolean-) when connection is lost.

Set [CommandTimeout](https://lettuce.io/core/release/api/io/lettuce/core/RedisURI.html#getTimeout--). For more details, see the Timeouts section later in this topic.

Set [nodeFilter](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterClientOptions.Builder.html#nodeFilter-java.util.function.Predicate-) to filter out failed nodes from the topology. Lettuce saves all nodes that are found in the 'cluster nodes' output (including nodes with PFAIL/FAIL status) in the client's 'partitions' (also known as shards). During the process of creating the cluster topology, it attempts to connect to all the partition nodes. This Lettuce behavior of adding failed nodes can cause connection errors (or warnings) when nodes are getting replaced for any reason. 

For example, after a failover finishes and the cluster starts the recovery process, while the cluster topology is being refreshed, the cluster bus node map briefly lists the down node as a FAIL node before it's completely removed from the topology. During this period, the Lettuce client considers it a healthy node and continually connects to it, which causes a failure after retries are exhausted. 

For example:

```
final ClusterClientOptions clusterClientOptions = 
    ClusterClientOptions.builder()
    ... // other options
    .nodeFilter(it -> 
        ! (it.is(RedisClusterNode.NodeFlag.FAIL) 
        || it.is(RedisClusterNode.NodeFlag.EVENTUAL_FAIL) 
        || it.is(RedisClusterNode.NodeFlag.HANDSHAKE)
        || it.is(RedisClusterNode.NodeFlag.NOADDR)))
    .validateClusterNodeMembership(false)
    .build();
redisClusterClient.setOptions(clusterClientOptions);
```

**Note**  
Node filtering is best used with DynamicRefreshSources set to true. Otherwise, if the topology view is taken from a single problematic seed node that sees the primary node of some shard as failed, the client filters out that primary node, which results in slots not being covered. Having multiple seed nodes (when DynamicRefreshSources is true) reduces the likelihood of this issue, because at least some of the seed nodes should have an updated topology view with the newly promoted primary after a failover.

**ClusterTopologyRefreshOptions: Options to control the cluster topology refreshing of the Cluster Mode Enabled client**

**Note**  
Cluster mode disabled clusters don't support the cluster discovery commands and aren't compatible with clients' dynamic topology discovery functionality.  
Cluster mode disabled with ElastiCache isn't compatible with Lettuce's `MasterSlaveTopologyRefresh`. Instead, for cluster mode disabled you can configure a `StaticMasterReplicaTopologyProvider` and provide the cluster read and write endpoints.  
For more information on connecting to cluster mode disabled clusters, see [Finding a Valkey or Redis OSS (Cluster Mode Disabled) Cluster's Endpoints (Console)](Endpoints.md#Endpoints.Find.Redis).  
If you want to use Lettuce's dynamic topology discovery functionality, you can create a cluster mode enabled cluster with the same shard configuration as your existing cluster. However, for cluster mode enabled clusters we recommend configuring at least 3 shards with at least one replica to support fast failover.

Enable [enablePeriodicRefresh](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.Builder.html#enablePeriodicRefresh-java.time.Duration-). This enables periodic cluster topology updates so that the client updates the cluster topology in the intervals of the refreshPeriod (default: 60 seconds). When it's disabled, the client updates the cluster topology only when errors occur when it attempts to run commands against the cluster. 

With this option enabled, you can reduce the latency associated with refreshing the cluster topology, because the refresh runs as a background job. Topology refresh can still be somewhat slow for clusters with many nodes, because all nodes are queried for their views to get the most up-to-date cluster view. If you run a large cluster, you might want to increase the period.

Enable [enableAllAdaptiveRefreshTriggers](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.Builder.html#enableAllAdaptiveRefreshTriggers--). This enables adaptive topology refreshing that uses all [triggers](https://lettuce.io/core/6.1.6.RELEASE/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.RefreshTrigger.html): MOVED_REDIRECT, ASK_REDIRECT, PERSISTENT_RECONNECTS, UNCOVERED_SLOT, and UNKNOWN_NODE. Adaptive refresh triggers initiate topology view updates based on events that happen during Valkey or Redis OSS cluster operations. Enabling this option leads to an immediate topology refresh when one of these triggers occurs. Adaptive triggered refreshes are rate-limited using a timeout because events can happen on a large scale (default timeout between updates: 30 seconds).

Enable [closeStaleConnections](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.Builder.html#closeStaleConnections-boolean-). This enables closing stale connections when refreshing the cluster topology. It only comes into effect if [ClusterTopologyRefreshOptions.isPeriodicRefreshEnabled()](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.html#isPeriodicRefreshEnabled--) is true. When it's enabled, the client can close stale connections and create new ones in the background. This reduces the need to handle failed connections during command runtime.

Enable [dynamicRefreshSources](https://lettuce.io/core/release/api/io/lettuce/core/cluster/ClusterTopologyRefreshOptions.Builder.html#dynamicRefreshSources-boolean-). We recommend enabling dynamicRefreshSources for small clusters, and disabling it for large clusters. dynamicRefreshSources enables discovering cluster nodes from the provided seed node (for example, cluster configuration endpoint). It uses all the discovered nodes as sources for refreshing the cluster topology. 

Using dynamic refresh queries all discovered nodes for the cluster topology and attempts to choose the most accurate cluster view. If it's set to false, only the initial seed nodes are used as sources for topology discovery, and the topology view is obtained only from those seed nodes. When it's disabled, if the cluster configuration endpoint resolves to a failed node, trying to refresh the cluster view fails and leads to exceptions. This scenario can happen because it takes some time until a failed node's entry is removed from the cluster configuration endpoint. Therefore, the configuration endpoint can still be randomly resolved to a failed node for a short period of time. 

When it's enabled, however, the client uses all of the cluster nodes that are received from the cluster view to query for their current view. Because failed nodes are filtered out of that view, the topology refresh succeeds. However, when dynamicRefreshSources is true, Lettuce queries all nodes to get the cluster view and then compares the results, so it can be expensive for clusters with many nodes. We suggest that you turn off this feature for clusters with many nodes. 

```
final ClusterTopologyRefreshOptions topologyOptions = 
    ClusterTopologyRefreshOptions.builder()
    .enableAllAdaptiveRefreshTriggers()
    .enablePeriodicRefresh()
    .dynamicRefreshSources(true)
    .build();
```

**ClientResources**

Configure [DnsResolver](https://lettuce.io/core/release/api/io/lettuce/core/resource/DefaultClientResources.Builder.html#dnsResolver-io.lettuce.core.resource.DnsResolver-) with [DirContextDnsResolver](https://lettuce.io/core/release/api/io/lettuce/core/resource/DirContextDnsResolver.html). The DNS resolver is based on Java's com.sun.jndi.dns.DnsContextFactory.

Configure [reconnectDelay](https://lettuce.io/core/release/api/io/lettuce/core/resource/DefaultClientResources.Builder.html#reconnectDelay-io.lettuce.core.resource.Delay-) with exponential backoff and full jitter. Lettuce has built-in retry mechanisms based on exponential backoff strategies. For details, see [Exponential Backoff and Jitter](https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) on the AWS Architecture Blog. For more information about the importance of having a retry backoff strategy, see the backoff logic sections of the [Best practices blog post](https://aws.amazon.com/blogs/database/best-practices-redis-clients-and-amazon-elasticache-for-redis/) on the AWS Database Blog.

```
ClientResources clientResources = DefaultClientResources.builder()
    .addressResolverGroup(new DirContextDnsResolver())
    .reconnectDelay(
        Delay.fullJitter(
            Duration.ofMillis(100),      // minimum 100 millisecond delay
            Duration.ofSeconds(10),      // maximum 10 second delay
            100, TimeUnit.MILLISECONDS)) // 100 millisecond base
    .build();
```
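
For intuition, the full-jitter strategy configured above can be sketched in a few lines of Python. This is illustrative only, not Lettuce's implementation: each reconnect attempt waits a uniformly random time between 0 and an exponentially growing cap.

```python
import random

# Sketch of "full jitter" reconnect delays: uniform in [0, cap], where the cap
# doubles per attempt up to a maximum (illustrative, not Lettuce's code).
BASE_MS = 100      # 100 millisecond base
CAP_MS = 10_000    # maximum 10 second delay

def full_jitter_delay_ms(attempt: int, rng: random.Random) -> float:
    return rng.uniform(0, min(CAP_MS, BASE_MS * (2 ** attempt)))

rng = random.Random(42)  # fixed seed so the sketch is reproducible
delays = [full_jitter_delay_ms(a, rng) for a in range(8)]
print([round(d) for d in delays])
```

The jitter spreads reconnect attempts out in time, so a fleet of clients doesn't reconnect to a recovering node simultaneously.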

**Timeouts**

Use a lower connect timeout value than your command timeout. Lettuce uses lazy connection establishment. So if the connect timeout is higher than the command timeout, you can have a period of persistent failure after a topology refresh if Lettuce tries to connect to an unhealthy node and the command timeout is always exceeded. 

Use a dynamic command timeout for different commands. We recommend that you set the command timeout based on the command's expected duration. For example, use a longer timeout for commands that iterate over several keys, such as FLUSHDB, FLUSHALL, KEYS, SMEMBERS, or Lua scripts. Use shorter timeouts for single-key commands, such as SET, GET, and HSET.

**Note**  
Timeouts that are configured in the following example are for tests that ran SET/GET commands with keys and values up to 20 bytes long. The processing time can be longer when the commands are complex or the keys and values are larger. You should set the timeouts based on the use case of your application. 

```
private static final Duration META_COMMAND_TIMEOUT = Duration.ofMillis(1000);
private static final Duration DEFAULT_COMMAND_TIMEOUT = Duration.ofMillis(250);
// Socket connect timeout should be lower than command timeout for Lettuce
private static final Duration CONNECT_TIMEOUT = Duration.ofMillis(100);
    
SocketOptions socketOptions = SocketOptions.builder()
    .connectTimeout(CONNECT_TIMEOUT)
    .build();
 

class DynamicClusterTimeout extends TimeoutSource {
     private static final Set<ProtocolKeyword> META_COMMAND_TYPES = ImmutableSet.<ProtocolKeyword>builder()
          .add(CommandType.FLUSHDB)
          .add(CommandType.FLUSHALL)
          .add(CommandType.CLUSTER)
          .add(CommandType.INFO)
          .add(CommandType.KEYS)
          .build();

    private final Duration defaultCommandTimeout;
    private final Duration metaCommandTimeout;

    DynamicClusterTimeout(Duration defaultTimeout, Duration metaTimeout)
    {
        defaultCommandTimeout = defaultTimeout;
        metaCommandTimeout = metaTimeout;
    }

    @Override
    public long getTimeout(RedisCommand<?, ?, ?> command) {
        if (META_COMMAND_TYPES.contains(command.getType())) {
            return metaCommandTimeout.toMillis();
        }
        return defaultCommandTimeout.toMillis();
    }
}

// Use a dynamic timeout for commands, to avoid timeouts during
// cluster management and slow operations.
TimeoutOptions timeoutOptions = TimeoutOptions.builder()
    .timeoutSource(
        new DynamicClusterTimeout(DEFAULT_COMMAND_TIMEOUT, META_COMMAND_TIMEOUT))
    .build();
```

# Example: Lettuce config for cluster mode, TLS enabled
<a name="BestPractices.Clients-lettuce-cme"></a>

**Note**  
Timeouts in the following example are for tests that ran SET/GET commands with keys and values up to 20 bytes long. The processing time can be longer when the commands are complex or the keys and values are larger. You should set the timeouts based on the use case of your application. 

```
// Set DNS cache TTL
public void setJVMProperties() {
    java.security.Security.setProperty("networkaddress.cache.ttl", "10");
}

private static final Duration META_COMMAND_TIMEOUT = Duration.ofMillis(1000);
private static final Duration DEFAULT_COMMAND_TIMEOUT = Duration.ofMillis(250);
// Socket connect timeout should be lower than command timeout for Lettuce
private static final Duration CONNECT_TIMEOUT = Duration.ofMillis(100);

// Create RedisURI from the cluster configuration endpoint
String clusterConfigurationEndpoint = "<cluster-configuration-endpoint>"; // TODO: add your cluster configuration endpoint
final RedisURI redisUriCluster =
    RedisURI.Builder.redis(clusterConfigurationEndpoint)
        .withPort(6379)
        .withSsl(true)
        .build();

// Configure the client's resources                
ClientResources clientResources = DefaultClientResources.builder()
    .reconnectDelay(
        Delay.fullJitter(
            Duration.ofMillis(100),     // minimum 100 millisecond delay
            Duration.ofSeconds(10),      // maximum 10 second delay
            100, TimeUnit.MILLISECONDS)) // 100 millisecond base
    .addressResolverGroup(new DirContextDnsResolver())
    .build(); 

// Create a cluster client instance with the URI and resources
RedisClusterClient redisClusterClient = 
    RedisClusterClient.create(clientResources, redisUriCluster);

// Use a dynamic timeout for commands, to avoid timeouts during
// cluster management and slow operations.
class DynamicClusterTimeout extends TimeoutSource {
     private static final Set<ProtocolKeyword> META_COMMAND_TYPES = ImmutableSet.<ProtocolKeyword>builder()
          .add(CommandType.FLUSHDB)
          .add(CommandType.FLUSHALL)
          .add(CommandType.CLUSTER)
          .add(CommandType.INFO)
          .add(CommandType.KEYS)
          .build();

    private final Duration metaCommandTimeout;
    private final Duration defaultCommandTimeout;

    DynamicClusterTimeout(Duration defaultTimeout, Duration metaTimeout)
    {
        defaultCommandTimeout = defaultTimeout;
        metaCommandTimeout = metaTimeout;
    }

    @Override
    public long getTimeout(RedisCommand<?, ?, ?> command) {
        if (META_COMMAND_TYPES.contains(command.getType())) {
            return metaCommandTimeout.toMillis();
        }
        return defaultCommandTimeout.toMillis();
    }
}

TimeoutOptions timeoutOptions = TimeoutOptions.builder()
    .timeoutSource(new DynamicClusterTimeout(DEFAULT_COMMAND_TIMEOUT, META_COMMAND_TIMEOUT))
    .build();

// Configure the topology refreshment options
final ClusterTopologyRefreshOptions topologyOptions = 
    ClusterTopologyRefreshOptions.builder()
    .enableAllAdaptiveRefreshTriggers()
    .enablePeriodicRefresh()
    .dynamicRefreshSources(true)
    .build();

// Configure the socket options
final SocketOptions socketOptions = 
    SocketOptions.builder()
    .connectTimeout(CONNECT_TIMEOUT) 
    .keepAlive(true)
    .build();

// Configure the client's options
final ClusterClientOptions clusterClientOptions = 
    ClusterClientOptions.builder()
    .topologyRefreshOptions(topologyOptions)
    .socketOptions(socketOptions)
    .autoReconnect(true)
    .timeoutOptions(timeoutOptions) 
    .nodeFilter(it -> 
        ! (it.is(RedisClusterNode.NodeFlag.FAIL) 
        || it.is(RedisClusterNode.NodeFlag.EVENTUAL_FAIL) 
        || it.is(RedisClusterNode.NodeFlag.NOADDR))) 
    .validateClusterNodeMembership(false)
    .build();
    
redisClusterClient.setOptions(clusterClientOptions);

// Get a connection
final StatefulRedisClusterConnection<String, String> connection = 
    redisClusterClient.connect();

// Get cluster sync/async commands   
RedisAdvancedClusterCommands<String, String> sync = connection.sync();
RedisAdvancedClusterAsyncCommands<String, String> async = connection.async();
```

# Example: Lettuce config for cluster mode disabled, TLS enabled
<a name="BestPractices.Clients-lettuce-cmd"></a>

**Note**  
Timeouts in the following example are for tests that ran SET/GET commands with keys and values up to 20 bytes long. The processing time can be longer when the commands are complex or the keys and values are larger. You should set the timeouts based on the use case of your application. 

```
// Set DNS cache TTL
public void setJVMProperties() {
    java.security.Security.setProperty("networkaddress.cache.ttl", "10");
}

private static final Duration META_COMMAND_TIMEOUT = Duration.ofMillis(1000);
private static final Duration DEFAULT_COMMAND_TIMEOUT = Duration.ofMillis(250);
// Socket connect timeout should be lower than command timeout for Lettuce
private static final Duration CONNECT_TIMEOUT = Duration.ofMillis(100);

// Create RedisURI from the primary/reader endpoint
String clusterEndpoint = "<primary/reader-endpoint>"; // TODO: add your node endpoint
RedisURI redisUriStandalone =
    RedisURI.Builder.redis(clusterEndpoint).withPort(6379).withSsl(true).withDatabase(0).build();

ClientResources clientResources =
    DefaultClientResources.builder()
        .addressResolverGroup(new DirContextDnsResolver())
        .reconnectDelay(
            Delay.fullJitter(
                Duration.ofMillis(100), // minimum 100 millisecond delay
                Duration.ofSeconds(10), // maximum 10 second delay
                100,
                TimeUnit.MILLISECONDS)) // 100 millisecond base
        .build();

// Use a dynamic timeout for commands, to avoid timeouts during
// slow operations.
class DynamicTimeout extends TimeoutSource {
     private static final Set<ProtocolKeyword> META_COMMAND_TYPES = ImmutableSet.<ProtocolKeyword>builder()
          .add(CommandType.FLUSHDB)
          .add(CommandType.FLUSHALL)
          .add(CommandType.INFO)
          .add(CommandType.KEYS)
          .build();

    private final Duration metaCommandTimeout;
    private final Duration defaultCommandTimeout;

    DynamicTimeout(Duration defaultTimeout, Duration metaTimeout)
    {
        defaultCommandTimeout = defaultTimeout;
        metaCommandTimeout = metaTimeout;
    }

    @Override
    public long getTimeout(RedisCommand<?, ?, ?> command) {
        if (META_COMMAND_TYPES.contains(command.getType())) {
            return metaCommandTimeout.toMillis();
        }
        return defaultCommandTimeout.toMillis();
    }
}

TimeoutOptions timeoutOptions = TimeoutOptions.builder()
    .timeoutSource(new DynamicTimeout(DEFAULT_COMMAND_TIMEOUT, META_COMMAND_TIMEOUT))
    .build();
                                    
final SocketOptions socketOptions =
    SocketOptions.builder().connectTimeout(CONNECT_TIMEOUT).keepAlive(true).build();

ClientOptions clientOptions =
    ClientOptions.builder().timeoutOptions(timeoutOptions).socketOptions(socketOptions).build();

RedisClient redisClient = RedisClient.create(clientResources, redisUriStandalone);
redisClient.setOptions(clientOptions);
```

## Configuring a preferred protocol for dual stack clusters (Valkey and Redis OSS)
<a name="network-type-configuring-dual-stack-redis"></a>

For cluster mode enabled Valkey or Redis OSS clusters, you can control the protocol clients will use to connect to the nodes in the cluster with the IP Discovery parameter. The IP Discovery parameter can be set to either IPv4 or IPv6. 

For Valkey or Redis OSS clusters, the IP discovery parameter sets the IP protocol used in the [cluster slots](https://valkey.io/commands/cluster-slots/), [cluster shards](https://valkey.io/commands/cluster-shards/), and [cluster nodes](https://valkey.io/commands/cluster-nodes/) output. Clients use these commands to discover the cluster topology, and they use the IPs in the output to connect to the other nodes in the cluster. 

Changing the IP discovery setting doesn't cause any downtime for connected clients, but the change takes some time to propagate. To determine when it has completely propagated for a Valkey or Redis OSS cluster, monitor the output of `cluster slots`. Once all of the nodes returned by the `cluster slots` command report IPs with the new protocol, the change has finished propagating. 

Example with Redis-Py:

```
import time
from ipaddress import ip_address, IPv6Address

from redis.cluster import RedisCluster

cluster = RedisCluster(host="xxxx", port=6379)
target_type = IPv6Address  # Or IPv4Address if changing to IPv4

nodes = set()
while len(nodes) == 0 or not all(type(ip_address(host)) is target_type for host in nodes):
    nodes = set()

    # This refreshes the cluster topology and discovers any node updates.
    # Under the hood it calls cluster slots.
    cluster.nodes_manager.initialize()
    for node in cluster.get_nodes():
        nodes.add(node.host)
    print(nodes)

    time.sleep(1)
```

Example with Lettuce:

```
RedisClusterClient clusterClient = RedisClusterClient.create(RedisURI.create("xxxx", 6379));

Class<? extends InetAddress> targetProtocolType = Inet6Address.class; // Or Inet4Address.class if you're switching to IPv4

Set<String> nodes = new HashSet<>();

do {
    // Check for any changes in the cluster topology.
    // Under the hood this calls cluster slots.
    clusterClient.refreshPartitions();
    nodes.clear();

    for (RedisClusterNode node : clusterClient.getPartitions().getPartitions()) {
        nodes.add(node.getUri().getHost());
    }

    Thread.sleep(1000);
} while (!nodes.stream().allMatch(node -> {
            try {
                return targetProtocolType.isInstance(InetAddress.getByName(node));
            } catch (UnknownHostException ignored) {}
            return false;
}));
```

# Best practices for clients (Memcached)
<a name="BestPractices.Clients.memcached"></a>

Learn best practices for common scenarios with ElastiCache for Memcached clusters.

**Topics**
+ [Configuring your ElastiCache client for efficient load balancing (Memcached)](BestPractices.LoadBalancing.md)
+ [Validated clients with Memcached](network-type-validated-clients-memcached.md)
+ [Configuring a preferred protocol for dual stack clusters (Memcached)](network-type-configuring-dual-stack-memcached.md)

# Configuring your ElastiCache client for efficient load balancing (Memcached)
<a name="BestPractices.LoadBalancing"></a>

**Note**  
This section applies to node-based multi-node Memcached clusters.

To effectively use multiple ElastiCache Memcached nodes, you need to be able to spread your cache keys across the nodes. A simple way to load balance a cluster with *n* nodes is to calculate the hash of the object's key and mod the result by *n*: `hash(key) mod n`. The resulting value (0 through *n*–1) is the number of the node where you place the object. 

This approach is simple and works well as long as the number of nodes (*n*) is constant. However, whenever you add or remove a node from the cluster, the fraction of keys that need to be moved is roughly *(n – 1) / n* (where *n* is the new number of nodes). Thus, this approach moves a large number of keys, which translates to a large number of initial cache misses, especially as the number of nodes gets large. Scaling from 1 to 2 nodes moves (2–1)/2 (50 percent) of the keys, which is the best case. Scaling from 9 to 10 nodes moves (10–1)/10 (90 percent) of the keys. If you're scaling out due to a spike in traffic, you don't want a large number of cache misses, because those misses become hits to the database, which is already overloaded by the spike.
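
You can verify the cost of the naive approach with a short simulation. This sketch uses MD5 purely as a stand-in hash function:

```python
import hashlib

# Simulate naive "hash(key) mod n" placement: count how many keys land on a
# different node when scaling from 9 to 10 nodes.
def node_for(key: str, n: int) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % n

keys = [f"key:{i}" for i in range(10_000)]
moved = sum(node_for(k, 9) != node_for(k, 10) for k in keys)
print(f"{moved / len(keys):.0%} of keys moved")  # (10-1)/10 = 90 percent in expectation
```

A key stays put only when its hash gives the same remainder mod 9 and mod 10, which happens for roughly 1 in 10 keys.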

The solution to this dilemma is consistent hashing. Consistent hashing uses an algorithm such that whenever a node is added or removed from a cluster, the number of keys that must be moved is roughly *1 / n* (where *n* is the new number of nodes). Scaling from 1 to 2 nodes results in 1/2 (50 percent) of the keys being moved, the worst case. Scaling from 9 to 10 nodes results in 1/10 (10 percent) of the keys being moved.
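
A minimal consistent-hash ring with virtual nodes illustrates the roughly *1 / n* movement. This is a sketch, not a production implementation; MD5 is again a stand-in hash:

```python
import bisect
import hashlib

# Minimal consistent-hash ring with virtual nodes: adding a tenth node should
# move only about 1/10 of the keys.
def ring_hash(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(node_names, vnodes=100):
    # Each node owns many points on the ring to smooth out the distribution.
    points = sorted((ring_hash(f"{n}#{v}"), n) for n in node_names for v in range(vnodes))
    return [p[0] for p in points], [p[1] for p in points]

def owner(ring, key):
    hashes, owners = ring
    i = bisect.bisect_right(hashes, ring_hash(key)) % len(hashes)  # wrap around
    return owners[i]

sample_keys = [f"key:{i}" for i in range(10_000)]
ring9 = build_ring([f"node{i}" for i in range(9)])
ring10 = build_ring([f"node{i}" for i in range(10)])
moved_ring = sum(owner(ring9, k) != owner(ring10, k) for k in sample_keys)
print(f"{moved_ring / len(sample_keys):.0%} of keys moved")  # roughly 1/10 in expectation
```

Only the keys that fall on the arcs claimed by the new node's points move; every other key keeps its owner, which is the property the mod-*n* scheme lacks.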

As the user, you control which hashing algorithm is used for multi-node clusters. We recommend that you configure your clients to use consistent hashing. Fortunately, there are many Memcached client libraries in most popular languages that implement consistent hashing. Check the documentation for the library you are using to see if it supports consistent hashing and how to implement it.

If you are working in Java, PHP, or .NET, we recommend you use one of the Amazon ElastiCache client libraries.

## Consistent Hashing Using Java
<a name="BestPractices.LoadBalancing.Java"></a>

The ElastiCache Memcached Java client is based on the open-source spymemcached Java client, which has consistent hashing capabilities built in. The library includes a KetamaConnectionFactory class that implements consistent hashing. By default, consistent hashing is turned off in spymemcached.

For more information, see the KetamaConnectionFactory documentation at [KetamaConnectionFactory](https://github.com/RTBHOUSE/spymemcached/blob/master/src/main/java/net/spy/memcached/KetamaConnectionFactory.java).

## Consistent hashing using PHP with Memcached
<a name="BestPractices.LoadBalancing.PHP"></a>

The ElastiCache Memcached PHP client is a wrapper around the built-in Memcached PHP library. By default, consistent hashing is turned off by the Memcached PHP library.

Use the following code to turn on consistent hashing.

```
$m = new Memcached();
$m->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
```

In addition to the preceding code, we recommend that you also turn `memcached.sess_consistent_hash` on in your php.ini file.

 For more information, see the run-time configuration documentation for Memcached PHP at [http://php.net/manual/en/memcached.configuration.php](http://php.net/manual/en/memcached.configuration.php). Note specifically the `memcached.sess_consistent_hash` parameter.

## Consistent hashing using .NET with Memcached
<a name="BestPractices.LoadBalancing.dotNET"></a>

The ElastiCache Memcached .NET client is a wrapper around Enyim Memcached. By default, consistent hashing is turned on by the Enyim Memcached client.

 For more information, see the `memcached/locator` documentation at [https://github.com/enyim/EnyimMemcached/wiki/MemcachedClient-Configuration#user-content-memcachedlocator](https://github.com/enyim/EnyimMemcached/wiki/MemcachedClient-Configuration#user-content-memcachedlocator).

# Validated clients with Memcached
<a name="network-type-validated-clients-memcached"></a>

The following clients have specifically been validated to work with all supported network type configurations for Memcached.

Validated Clients:
+ [AWS ElastiCache Cluster Client Memcached for PHP](https://github.com/awslabs/aws-elasticache-cluster-client-memcached-for-php) – [Version 3.2.0](https://github.com/awslabs/aws-elasticache-cluster-client-memcached-for-php/tree/v3.2.0)
+ [AWS ElastiCache Cluster Client Memcached for Java](https://github.com/awslabs/aws-elasticache-cluster-client-memcached-for-java) – Latest master on Github

# Configuring a preferred protocol for dual stack clusters (Memcached)
<a name="network-type-configuring-dual-stack-memcached"></a>

For Memcached clusters, you can control which IP protocol clients use to connect to the nodes in the cluster with the IP Discovery parameter. The IP Discovery parameter can be set to either IPv4 or IPv6. 

The IP Discovery parameter controls the IP protocol used in the `config get cluster` output, which in turn determines the IP protocol used by clients that support auto-discovery for ElastiCache for Memcached clusters.

Changing the IP Discovery will not result in any downtime for connected clients. However, the changes will take some time to propagate. 

Monitor the output of `getAvailableNodeEndPoints` for Java, and monitor the output of `getServerList` for PHP. Once these functions report IPs that use the updated protocol for all of the nodes in the cluster, the changes have finished propagating.

Java Example:

```
MemcachedClient client = new MemcachedClient(new InetSocketAddress("xxxx", 11211));

// Use Inet4Address.class if you're switching to IPv4
Class<?> targetProtocolType = Inet6Address.class;

Set<String> nodes;

do {
    nodes = client.getAvailableNodeEndPoints().stream()
            .map(NodeEndPoint::getIpAddress)
            .collect(Collectors.toSet());

    Thread.sleep(1000);
} while (!nodes.stream().allMatch(node -> {
    try {
        return targetProtocolType.isInstance(InetAddress.getByName(node));
    } catch (UnknownHostException ignored) {
        return false;
    }
}));
```

Php Example:

```
$client = new Memcached();
$client->setOption(Memcached::OPT_CLIENT_MODE, Memcached::DYNAMIC_CLIENT_MODE);
$client->addServer("xxxx", 11211);

$nodes = [];
$target_ips_count = 0;
do {
    // The PHP memcached client only updates the server list if the polling
    // interval has expired and a command is sent
    $client->get('test');

    $nodes = $client->getServerList();

    sleep(1);

    // For IPv4, use FILTER_FLAG_IPV4
    $target_ips_count = count(array_filter($nodes, function($node) {
        return filter_var($node["ipaddress"], FILTER_VALIDATE_IP, FILTER_FLAG_IPV6);
    }));

} while (count($nodes) !== $target_ips_count);
```

Any existing client connections that were created before the IP Discovery parameter was updated remain connected using the old protocol. All of the validated clients automatically reconnect to the cluster using the new IP protocol once they detect the change in the output of the cluster discovery commands. For other clients, this behavior depends on the client's implementation.

## TLS enabled dual stack ElastiCache clusters
<a name="network-type-configuring-tls-enabled-dual-stack"></a>

When TLS is enabled for ElastiCache clusters, the cluster discovery functions (`cluster slots`, `cluster shards`, and `cluster nodes` for Valkey and Redis OSS, or `config get cluster` for Memcached) return hostnames instead of IPs. The hostnames are then used instead of IPs to connect to the ElastiCache cluster and perform a TLS handshake. This means that clients aren't affected by the IP Discovery parameter. For TLS-enabled clusters, the IP Discovery parameter has no effect on the preferred IP protocol. Instead, the IP protocol used is determined by which IP protocol the client prefers when resolving DNS hostnames.

**Java clients**

When connecting from a Java environment that supports both IPv4 and IPv6, Java by default prefers IPv4 over IPv6 for backward compatibility. However, the IP protocol preference is configurable through JVM arguments. To prefer IPv4, pass `-Djava.net.preferIPv4Stack=true`; to prefer IPv6, pass `-Djava.net.preferIPv6Stack=true`. Setting `-Djava.net.preferIPv4Stack=true` means that the JVM no longer makes any IPv6 connections at all. **For Valkey or Redis OSS, this includes connections to applications other than Valkey or Redis OSS.**

**Host Level Preferences**

In general, if the client or client runtime doesn't provide configuration options for setting an IP protocol preference, the IP protocol used when performing DNS resolution depends on the host's configuration. By default, most hosts prefer IPv6 over IPv4, but this preference can be configured at the host level. This affects all DNS requests from that host, not just those to ElastiCache clusters.

**Linux hosts**

For Linux, an IP protocol preference can be configured by modifying the `gai.conf` file, found at `/etc/gai.conf`. If there is no `gai.conf`, an example should be available under `/usr/share/doc/glibc-common-x.xx/gai.conf`; copy it to `/etc/gai.conf` and uncomment the default configuration. To prefer IPv4 when connecting to an ElastiCache cluster, raise the precedence of the CIDR range encompassing the cluster IPs above the precedence for default IPv6 connections, which is 40 by default. For example, assuming the cluster is located in a subnet with the CIDR 172.31.0.0/16, the configuration below would cause clients to prefer IPv4 connections to that cluster.

```
label ::1/128       0
label ::/0          1
label 2002::/16     2
label ::/96         3
label ::ffff:0:0/96 4
label fec0::/10     5
label fc00::/7      6
label 2001:0::/32   7
label ::ffff:172.31.0.0/112 8
#
#    This default differs from the tables given in RFC 3484 by handling
#    (now obsolete) site-local IPv6 addresses and Unique Local Addresses.
#    The reason for this difference is that these addresses are never
#    NATed while IPv4 site-local addresses most probably are.  Given
#    the precedence of IPv6 over IPv4 (see below) on machines having only
#    site-local IPv4 and IPv6 addresses a lookup for a global address would
#    see the IPv6 be preferred.  The result is a long delay because the
#    site-local IPv6 addresses cannot be used while the IPv4 address is
#    (at least for the foreseeable future) NATed.  We also treat Teredo
#    tunnels special.
#
# precedence  <mask>   <value>
#    Add another rule to the RFC 3484 precedence table.  See section 2.1
#    and 10.3 in RFC 3484.  The default is:
#
precedence  ::1/128       50
precedence  ::/0          40
precedence  2002::/16     30
precedence ::/96          20
precedence ::ffff:0:0/96  10
precedence ::ffff:172.31.0.0/112 100
```

More details on `gai.conf` are available on the [Linux man page](https://man7.org/linux/man-pages/man5/gai.conf.5.html).

**Windows hosts**

The process for Windows hosts is similar. On Windows hosts, you can run `netsh interface ipv6 set prefix CIDR_CONTAINING_CLUSTER_IPS PRECEDENCE LABEL`, which has the same effect as modifying the `gai.conf` file on Linux hosts.

This updates the preference policies to prefer IPv4 connections over IPv6 connections for the specified CIDR range. For example, assuming that the cluster is in a subnet with the CIDR 172.31.0.0/16, executing `netsh interface ipv6 set prefix ::ffff:172.31.0.0/112 100 15` would result in the following precedence table, which would cause clients to prefer IPv4 when connecting to the cluster. 

```
C:\Users\Administrator>netsh interface ipv6 show prefixpolicies
Querying active state...

Precedence Label Prefix
---------- ----- --------------------------------
100 15 ::ffff:172.31.0.0/112
20 4 ::ffff:0:0/96
50 0 ::1/128
40 1 ::/0
30 2 2002::/16
5 5 2001::/32
3 13 fc00::/7
1 11 fec0::/10
1 12 3ffe::/16
1 3 ::/96
```

# Managing reserved memory for Valkey and Redis OSS
<a name="redis-memory-management"></a>

Reserved memory is memory set aside for nondata use. When performing a backup or failover, Valkey and Redis OSS use available memory to record write operations to your cluster while the cluster's data is being written to the .rdb file. If you don't have sufficient memory available for all the writes, the process fails. Following, you can find information on options for managing reserved memory for Valkey and Redis OSS in ElastiCache and how to apply those options.

**Topics**
+ [How Much Reserved Memory Do You Need?](#redis-memory-management-need)
+ [Parameters to Manage Reserved Memory](#redis-memory-management-parameters)
+ [Specifying Your Reserved Memory Management Parameter](#redis-reserved-memory-management-change)

## How Much Reserved Memory Do You Need?
<a name="redis-memory-management-need"></a>

If you are running a version of Redis OSS before 2.8.22, reserve more memory for backups and failovers than if you are running Redis OSS 2.8.22 or later. This requirement is due to the different ways that ElastiCache for Redis OSS implements the backup process. The rule of thumb is to reserve half of a node type's `maxmemory` value for Redis OSS overhead for versions before 2.8.22, and one-fourth for Redis OSS versions 2.8.22 and later. 

For Valkey and Redis OSS 2.8.22 and later, the recommended way to apply this rule of thumb is to set the `reserved-memory-percent` parameter to reserve 25 percent of a node type's `maxmemory` value. This is the default value and is recommended for most cases.

When burstable micro and small instance types are operating near the `maxmemory` limits, they may experience swap usage. To improve the operational reliability on these instance types during backup, replication and high traffic, we recommend increasing the value of the `reserved-memory-percent` parameter up to 30% on small instance types, and up to 50% on micro instance types.

For write-heavy workloads on ElastiCache clusters with data tiering, we recommend increasing the `reserved-memory-percent` to up to 50% of the node's available memory.
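The percentages above translate into bytes as follows. This is a quick Python sketch for illustration, not an ElastiCache API; the `maxmemory` value shown is the one implied by the `cache.m3.xlarge` examples later in this section, and you should look up the value for your own node type.

```python
def reserved_bytes(maxmemory, percent):
    """Bytes set aside for nondata use at a given reserved-memory-percent."""
    return maxmemory * percent // 100

maxmemory = 14260633600  # bytes; implied by the cache.m3.xlarge examples

print(reserved_bytes(maxmemory, 25))  # 3565158400 -- the 25 percent default
print(reserved_bytes(maxmemory, 50))  # 7130316800 -- write-heavy data-tiering guidance
```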

For more information, see the following:
+ [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md)
+ [How synchronization and backup are implemented](Replication.Redis.Versions.md)
+ [Data tiering in ElastiCache](data-tiering.md)

## Parameters to Manage Reserved Memory
<a name="redis-memory-management-parameters"></a>

As of March 16, 2017, Amazon ElastiCache provides two mutually exclusive parameters for managing your Valkey or Redis OSS memory, `reserved-memory` and `reserved-memory-percent`. Neither of these parameters is part of the Valkey or Redis OSS distribution. 

Depending upon when you became an ElastiCache customer, one or the other of these parameters is the default memory management parameter. This parameter applies when you create a new Valkey or Redis OSS cluster or replication group and use a default parameter group. 
+ For customers who started before March 16, 2017 – When you create a Redis OSS cluster or replication group using the default parameter group, your memory management parameter is `reserved-memory`. In this case, zero (0) bytes of memory are reserved. 
+ For customers who started on or after March 16, 2017 – When you create a Valkey or Redis OSS cluster or replication group using the default parameter group, your memory management parameter is `reserved-memory-percent`. In this case, 25 percent of your node's `maxmemory` value is reserved for nondata purposes.

After reading about the two Valkey or Redis OSS memory management parameters, you might prefer to use the one that isn't your default or with nondefault values. If so, you can change to the other reserved memory management parameter. 

To change the value of that parameter, you can create a custom parameter group and modify it to use your preferred memory management parameter and value. You can then use the custom parameter group whenever you create a new Valkey or Redis OSS cluster or replication group. For existing clusters or replication groups, you can modify them to use your custom parameter group.

 For more information, see the following: 
+ [Specifying Your Reserved Memory Management Parameter](#redis-reserved-memory-management-change)
+ [Creating an ElastiCache parameter group](ParameterGroups.Creating.md)
+ [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md)
+ [Modifying an ElastiCache cluster](Clusters.Modify.md)
+ [Modifying a replication group](Replication.Modify.md)

### The reserved-memory Parameter
<a name="redis-memory-management-parameters-reserved-memory"></a>

Before March 16, 2017, all ElastiCache for Redis OSS reserved memory management was done using the parameter `reserved-memory`. The default value of `reserved-memory` is 0. This default reserves no memory for Valkey or Redis OSS overhead and allows Valkey or Redis OSS to consume all of a node's memory with data. 

Changing `reserved-memory` so you have sufficient memory available for backups and failovers requires you to create a custom parameter group. In this custom parameter group, you set `reserved-memory` to a value appropriate for the Valkey or Redis OSS version running on your cluster and cluster's node type. For more information, see [How Much Reserved Memory Do You Need?](#redis-memory-management-need)

The parameter `reserved-memory` is specific to ElastiCache and isn't part of the general Redis OSS distribution.

The following procedure shows how to use `reserved-memory` to manage the memory on your Valkey or Redis OSS cluster.

**To reserve memory using reserved-memory**

1. Create a custom parameter group specifying the parameter group family matching the engine version you’re running—for example, specifying the `redis2.8` parameter group family. For more information, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).

   ```
   aws elasticache create-cache-parameter-group \
      --cache-parameter-group-name redis28-m3xl \
      --description "Redis OSS 2.8.x for m3.xlarge node type" \
      --cache-parameter-group-family redis2.8
   ```

1. Calculate how many bytes of memory to reserve for Valkey or Redis OSS overhead. You can find the value of `maxmemory` for your node type at [Redis OSS node-type specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis.NodeSpecific).

1. Modify the custom parameter group so that the parameter `reserved-memory` is the number of bytes you calculated in the previous step. The following AWS CLI example assumes you’re running a version of Redis OSS before 2.8.22 and need to reserve half of the node’s `maxmemory`. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

   ```
   aws elasticache modify-cache-parameter-group \
      --cache-parameter-group-name redis28-m3xl \
      --parameter-name-values "ParameterName=reserved-memory, ParameterValue=7130316800"
   ```

   You need a separate custom parameter group for each node type that you use, because each node type has a different `maxmemory` value. Thus, each node type needs a different value for `reserved-memory`.

1. Modify your Redis OSS cluster or replication group to use your custom parameter group.

   The following CLI example modifies the cluster `my-redis-cluster` to use the custom parameter group `redis28-m3xl` beginning immediately. For more information, see [Modifying an ElastiCache cluster](Clusters.Modify.md).

   ```
   aws elasticache modify-cache-cluster \
      --cache-cluster-id my-redis-cluster \
      --cache-parameter-group-name redis28-m3xl \
      --apply-immediately
   ```

   The following CLI example modifies the replication group `my-redis-repl-grp` to use the custom parameter group `redis28-m3xl` beginning immediately. For more information, see [Modifying a replication group](Replication.Modify.md).

   ```
   aws elasticache modify-replication-group \
      --replication-group-id my-redis-repl-grp \
      --cache-parameter-group-name redis28-m3xl \
      --apply-immediately
   ```
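The `reserved-memory` value used in the preceding procedure can be sanity-checked with a short sketch. This is Python for illustration only; 14,260,633,600 bytes is the `cache.m3.xlarge` `maxmemory` implied by the example value, and in practice you look the number up for your node type.

```python
# Pre-2.8.22 rule of thumb: reserve half of the node type's maxmemory.
maxmemory = 14260633600           # bytes, from the node-type parameter table
reserved_memory = maxmemory // 2
print(reserved_memory)            # 7130316800, the value passed to the CLI above
```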

### The reserved-memory-percent parameter
<a name="redis-memory-management-parameters-reserved-memory-percent"></a>

On March 16, 2017, Amazon ElastiCache introduced the parameter `reserved-memory-percent` and made it available on all versions of ElastiCache for Redis OSS. The purpose of `reserved-memory-percent` is to simplify reserved memory management across all your clusters. It does so by enabling you to have a single parameter group for each parameter group family (such as `redis2.8`) to manage your clusters' reserved memory, regardless of node type. The default value for `reserved-memory-percent` is 25 (25 percent).

The parameter `reserved-memory-percent` is specific to ElastiCache and isn't part of the general Redis OSS distribution.

If your cluster is using a node type from the r6gd family and your memory usage reaches 75 percent, data-tiering will automatically be triggered. For more information, see [Data tiering in ElastiCache](data-tiering.md).

**To reserve memory using reserved-memory-percent**  
To use `reserved-memory-percent` to manage the memory on your ElastiCache for Redis OSS cluster, do one of the following:
+ If you are running Redis OSS 2.8.22 or later, assign the default parameter group to your cluster. The default 25 percent should be adequate. If not, take the steps described following to change the value.
+ If you are running a version of Redis OSS before 2.8.22, you probably need to reserve more memory than `reserved-memory-percent`'s default 25 percent. To do so, use the following procedure. 

**To change the percent value of reserved-memory-percent**

1. Create a custom parameter group specifying the parameter group family matching the engine version you’re running—for example, specifying the `redis2.8` parameter group family. A custom parameter group is necessary because you can't modify a default parameter group. For more information, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).

   ```
   aws elasticache create-cache-parameter-group \
      --cache-parameter-group-name redis28-50 \
      --description "Redis OSS 2.8.x 50% reserved" \
      --cache-parameter-group-family redis2.8
   ```

   Because `reserved-memory-percent` reserves memory as a percent of a node’s `maxmemory`, you don't need a custom parameter group for each node type.

1. Modify the custom parameter group so that `reserved-memory-percent` is 50 (50 percent). For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

   ```
   aws elasticache modify-cache-parameter-group \
      --cache-parameter-group-name redis28-50 \
      --parameter-name-values "ParameterName=reserved-memory-percent, ParameterValue=50"
   ```

1. Use this custom parameter group for any Redis OSS clusters or replication groups running a version of Redis OSS older than 2.8.22.

   The following CLI example modifies the Redis OSS cluster `my-redis-cluster` to use the custom parameter group `redis28-50` beginning immediately. For more information, see [Modifying an ElastiCache cluster](Clusters.Modify.md).

   ```
   aws elasticache modify-cache-cluster \
      --cache-cluster-id my-redis-cluster \
      --cache-parameter-group-name redis28-50 \
      --apply-immediately
   ```

   The following CLI example modifies the Redis OSS replication group `my-redis-repl-grp` to use the custom parameter group `redis28-50` beginning immediately. For more information, see [Modifying a replication group](Replication.Modify.md).

   ```
   aws elasticache modify-replication-group \
      --replication-group-id my-redis-repl-grp \
      --cache-parameter-group-name redis28-50 \
      --apply-immediately
   ```

## Specifying Your Reserved Memory Management Parameter
<a name="redis-reserved-memory-management-change"></a>

If you were already an ElastiCache customer on March 16, 2017, your default reserved memory management parameter is `reserved-memory`, with zero (0) bytes of memory reserved. If you became an ElastiCache customer after March 16, 2017, your default reserved memory management parameter is `reserved-memory-percent`, with 25 percent of the node's memory reserved. This is true no matter when you created your ElastiCache for Redis OSS cluster or replication group; the default depends on when you became a customer. However, you can change your reserved memory management parameter using either the AWS CLI or the ElastiCache API.

The parameters `reserved-memory` and `reserved-memory-percent` are mutually exclusive. A parameter group always has one but never both. You can change which parameter a parameter group uses for reserved memory management by modifying the parameter group. The parameter group must be a custom parameter group, because you can't modify default parameter groups. For more information, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).

**To specify reserved-memory-percent**  
To use `reserved-memory-percent` as your reserved memory management parameter, modify a custom parameter group using the `modify-cache-parameter-group` command. Use the `parameter-name-values` parameter to specify `reserved-memory-percent` and a value for it.

The following CLI example modifies the custom parameter group `redis32-cluster-on` so that it uses `reserved-memory-percent` to manage reserved memory. A value must be assigned to `ParameterValue` for the parameter group to use the `ParameterName` parameter for reserved memory management. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

```
aws elasticache modify-cache-parameter-group \
   --cache-parameter-group-name redis32-cluster-on \
   --parameter-name-values "ParameterName=reserved-memory-percent, ParameterValue=25"
```

**To specify reserved-memory**  
To use `reserved-memory` as your reserved memory management parameter, modify a custom parameter group using the `modify-cache-parameter-group` command. Use the `parameter-name-values` parameter to specify `reserved-memory` and a value for it.

The following CLI example modifies the custom parameter group `redis32-m3xl` so that it uses `reserved-memory` to manage reserved memory. A value must be assigned to `ParameterValue` for the parameter group to use the `ParameterName` parameter for reserved memory management. Because the engine version is newer than 2.8.22, we set the value to `3565158400` which is 25 percent of a `cache.m3.xlarge`’s `maxmemory`. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

```
aws elasticache modify-cache-parameter-group \
   --cache-parameter-group-name redis32-m3xl \
   --parameter-name-values "ParameterName=reserved-memory, ParameterValue=3565158400"
```

# Best practices when working with Valkey and Redis OSS node-based clusters
<a name="BestPractices.SelfDesigned"></a>

Using Multi-AZ, provisioning sufficient memory, resizing clusters, and minimizing downtime are all useful concepts to keep in mind when working with node-based Valkey or Redis OSS clusters. We recommend that you review and follow these best practices.

**Topics**
+ [Minimizing downtime with Multi-AZ](multi-az.md)
+ [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md)
+ [Online cluster resizing](best-practices-online-resharding.md)
+ [Minimizing downtime during maintenance](BestPractices.MinimizeDowntime.md)

# Minimizing downtime with Multi-AZ
<a name="multi-az"></a>

There are a number of situations in which ElastiCache for Valkey or Redis OSS might need to replace a primary node. These include certain types of planned maintenance and the unlikely event of a primary node or Availability Zone failure.

This replacement results in some downtime for the cluster, but if Multi-AZ is enabled, the downtime is minimized. The primary node role automatically fails over to one of the read replicas. There is no need to create and provision a new primary node, because ElastiCache handles this transparently. This failover and replica promotion ensure that you can resume writing to the new primary as soon as promotion is complete.

To learn more about Multi-AZ and minimizing downtime, see [Minimizing downtime in ElastiCache by using Multi-AZ with Valkey and Redis OSS](AutoFailover.md).

# Ensuring you have enough memory to make a Valkey or Redis OSS snapshot
<a name="BestPractices.BGSAVE"></a>

**Snapshots and synchronizations in Valkey 7.2 and later, and Redis OSS version 2.8.22 and later**  
Redis OSS 2.8.22 introduced a forkless save process that allows you to allocate more of your memory to your application's use without incurring increased swap usage during synchronizations and saves; Valkey includes this support by default. For more information, see [How synchronization and backup are implemented](Replication.Redis.Versions.md).

**Redis OSS snapshots and synchronizations before version 2.8.22**

When you work with ElastiCache for Redis OSS, Redis OSS calls a background write command in a number of cases:
+ When creating a snapshot for a backup.
+ When synchronizing replicas with the primary in a replication group.
+ When enabling the append-only file feature (AOF) for Redis OSS.
+ When promoting a replica to primary (which causes a primary/replica sync).

Whenever Redis OSS executes a background write process, you must have sufficient available memory to accommodate the process overhead. Failure to have sufficient memory available causes the process to fail. Because of this, it is important to choose a node instance type that has sufficient memory when creating your Redis OSS cluster.

## Background Write Process and Memory Usage with Valkey and Redis OSS
<a name="BestPractices.BGSAVE.Process"></a>

Whenever a background write process is called, the engine forks its process (remember, Valkey and Redis OSS are single threaded). One fork persists your data to disk in an .rdb snapshot file. The other fork services all read and write operations. To ensure that your snapshot is a point-in-time snapshot, all data updates and additions are written to an area of available memory separate from the data area.

As long as you have sufficient memory available to record all write operations while the data is being persisted to disk, you should have no insufficient memory issues. You are likely to experience insufficient memory issues if any of the following are true:
+ Your application performs many write operations, thus requiring a large amount of available memory to accept the new or updated data.
+ You have very little memory available in which to write new or updated data.
+ You have a large dataset that takes a long time to persist to disk, thus requiring a large number of write operations.
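To make the risk concrete, the headroom you need is roughly the volume of writes the cache accepts while the dataset is being persisted. The following back-of-the-envelope sketch uses made-up numbers purely for illustration:

```python
# Hypothetical workload: memory needed to absorb writes during a background save.
write_rate_bytes = 50 * 1024 * 1024   # ~50 MiB/s of new or updated data
save_duration_s = 120                 # seconds to persist the dataset to disk

headroom_bytes = write_rate_bytes * save_duration_s
print(headroom_bytes / 2**30)         # about 5.9 GiB held in the separate memory area
```

If the available memory on the node is smaller than this product, the background write process is at risk of failing.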

The following diagram illustrates memory use when executing a background write process.

![\[Image: Diagram of memory use during a background write.\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-bgsaveMemoryUseage.png)


For information on the impact of doing a backup on performance, see [Performance impact of backups of node-based clusters](backups.md#backups-performance).

For more information on how Valkey and Redis OSS perform snapshots, see [http://valkey.io](http://valkey.io).

For more information on regions and Availability Zones, see [Choosing regions and availability zones for ElastiCache](RegionsAndAZs.md). 

## Avoiding running out of memory when executing a background write
<a name="BestPractices.BGSAVE.memoryFix"></a>

Whenever a background write process such as `BGSAVE` or `BGREWRITEAOF` is called, to keep the process from failing, you must have more memory available than will be consumed by write operations during the process. The worst-case scenario is that during the background write operation every record is updated and some new records are added to the cache. Because of this, we recommend that you set `reserved-memory-percent` to 50 (50 percent) for Redis OSS versions before 2.8.22, or 25 (25 percent) for Valkey and all Redis OSS versions 2.8.22 and later. 

The `maxmemory` value indicates the memory available to you for data and operational overhead. Because you cannot modify the `reserved-memory` parameter in the default parameter group, you must create a custom parameter group for the cluster. The default value for `reserved-memory` is 0, which allows Redis OSS to consume all of *maxmemory* with data, potentially leaving too little memory for other uses, such as a background write process. For `maxmemory` values by node instance type, see [Redis OSS node-type specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis.NodeSpecific).

You can also use the `reserved-memory` parameter to reduce the amount of memory used on the node.

For more information on Valkey and Redis-specific parameters in ElastiCache, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).

For information on creating and modifying parameter groups, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md) and [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

# Online cluster resizing
<a name="best-practices-online-resharding"></a>

*Resharding* involves adding or removing shards or nodes in your cluster and redistributing key spaces. As a result, several factors affect the resharding operation, such as the load on the cluster, memory utilization, and the overall size of the data. For the best experience, we recommend that you follow overall cluster best practices for uniform workload pattern distribution. In addition, we recommend taking the following steps.

Before initiating resharding, we recommend the following:
+ **Test your application** – Test your application behavior during resharding in a staging environment if possible.
+ **Get early notification for scaling issues** – Resharding is a compute-intensive operation. Because of this, we recommend keeping CPU utilization under 80 percent on multicore instances and less than 50 percent on single core instances during resharding. Monitor ElastiCache for Redis OSS metrics and initiate resharding before your application starts observing scaling issues. Useful metrics to track are `CPUUtilization`, `NetworkBytesIn`, `NetworkBytesOut`, `CurrConnections`, `NewConnections`, `FreeableMemory`, `SwapUsage`, and `BytesUsedForCacheItems`.
+ **Ensure sufficient free memory is available before scaling in** – If you're scaling in, ensure that free memory available on the shards to be retained is at least 1.5 times the memory used on the shards you plan to remove.
+ **Initiate resharding during off-peak hours** – This practice helps to reduce the latency and throughput impact on the client during the resharding operation. It also helps to complete resharding faster as more resources can be used for slot redistribution.
+ **Review client timeout behavior** – Some clients might observe higher latency during online cluster resizing. Configuring your client library with a higher timeout can help by giving the system time to connect even under higher load conditions on server. In some cases, you might open a large number of connections to the server. In these cases, consider adding exponential backoff to reconnect logic. Doing this can help prevent a burst of new connections hitting the server at the same time.
+ **Load your Functions on every shard** – When scaling out your cluster, ElastiCache will automatically replicate the Functions loaded in one of the existing nodes (selected at random) to the new node(s). If your cluster has Valkey 7.2 and above, or Redis OSS 7.0 or above, and your application uses [Functions](https://valkey.io/topics/functions-intro/), we recommend loading all of your functions to all the shards before scaling out so that your cluster does not end up with different functions on different shards.
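
The reconnect guidance above can be sketched as exponential backoff with jitter. This is a generic illustration, and `connect` stands in for whatever connection call your client library provides:

```python
import random
import time

def connect_with_backoff(connect, max_attempts=5, base_delay=0.1, cap=5.0):
    """Retry a failing connection attempt, sleeping a random amount up to
    an exponentially growing bound so reconnects don't arrive in a burst."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            bound = min(cap, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, bound))  # full jitter
```

Spreading reconnects out this way helps prevent a burst of new connections hitting the server when many clients lose their connections at the same time.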

After resharding, note the following:
+ Scale-in might be partially successful if insufficient memory is available on target shards. If such a result occurs, review available memory and retry the operation, if necessary. The data on the target shards will not be deleted.
+ `FLUSHALL` and `FLUSHDB` commands are not supported inside Lua scripts during a resharding operation. Prior to Redis OSS 6, the `BRPOPLPUSH` command is not supported if it operates on the slot being migrated.

# Minimizing downtime during maintenance
<a name="BestPractices.MinimizeDowntime"></a>

Cluster mode configuration has the best availability during managed or unmanaged operations. We recommend that you use a cluster mode supported client that connects to the cluster discovery endpoint. For cluster mode disabled, we recommend that you use the primary endpoint for all write operations. 

For read activity, applications can also connect to any node in the cluster. Unlike the primary endpoint, node endpoints resolve to specific nodes. If you make a change in your cluster, such as adding or deleting a replica, you must update the node endpoints in your application. This is why, for cluster mode disabled, we recommend that you use the reader endpoint for read activity.

If AutoFailover is enabled in the cluster, the primary node might change. Therefore, the application should confirm the role of the node and update all the read endpoints. Doing this helps ensure that you aren't causing a major load on the primary. With AutoFailover disabled, the role of the node doesn't change. However, the downtime in managed or unmanaged operations is higher as compared to clusters with AutoFailover enabled.

Avoid directing read requests to a single read replica node, because its unavailability could lead to a read outage. Either fall back to reading from the primary, or ensure that you have at least two read replicas to avoid any read interruption during maintenance. 

# Caching database query results
<a name="caching-database-query-results"></a>

A common pattern to reduce database query latencies is query caching. Applications implement query caching by querying the cache for results associated with a database query, returning those results if they are cached. If no cached results are found, the query is executed on the database, the results are populated in the cache, and the results are then returned to the application initiating the query. Subsequent database queries will then return cached results as long as they remain in the cache.

## When to use query caching
<a name="caching-database-query-results-when-to-use"></a>

Query caching with ElastiCache is most effective for the following workload types:
+ **Read-heavy applications** where the same queries are executed repeatedly with data that changes infrequently.
+ **Expensive queries** such as non-indexed lookups, aggregations, or multi-table joins where query execution times are long.
+ **High-concurrency scenarios** where offloading reads to ElastiCache reduces database CPU pressure and improves overall throughput.

Query caching is not recommended for queries where strong consistency is required, or for queries inside multi-statement transactions that require read-after-write consistency.

## Using the AWS Advanced JDBC Wrapper
<a name="caching-database-query-results-jdbc-wrapper"></a>

If your application is using a JDBC driver to query a relational database, you can implement query caching with the [Remote Query Cache Plugin](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheRemoteQueryCachePlugin.md) in the [AWS Advanced JDBC Wrapper](https://github.com/aws/aws-advanced-jdbc-wrapper). The plugin automatically caches selected SQL query result sets in ElastiCache, returning the result set from the cache instead of the database for future queries. Caching query results can reduce database load and lower average query latencies with minimal application code changes.

## How query caching works with the plugin
<a name="caching-database-query-results-how-it-works"></a>

The Remote Query Cache Plugin makes it easy for Java applications that query PostgreSQL, MySQL, or MariaDB databases to automatically cache query results in ElastiCache. You configure the plugin with your cache endpoint information and indicate which queries in your code to cache using query hints. When the plugin detects a hinted query, it returns the query result from the cache if present (a cache hit). If the results are not in the cache (a cache miss), the plugin executes the query on the database, stores the results in the cache, and returns the results to your application so the next time the query is executed the results can be served from the cache.

## Caching hints
<a name="caching-database-query-results-hints"></a>

You control which queries to cache by setting hints on each query. You can apply query hints directly to query strings in your application code with a comment prefix:

```
/* CACHE_PARAM(ttl=300s) */ SELECT * FROM my_table WHERE id = 42
```

where `ttl` is the time-to-live in seconds. You can also set query hints in prepared statements using common frameworks like Hibernate and Spring Boot.

## Prerequisites
<a name="caching-database-query-results-prerequisites"></a>

To use the Remote Query Cache Plugin in your application, you need an ElastiCache for Valkey or Redis OSS cache (both Serverless and node-based are supported) along with the following dependencies:
+ [AWS Advanced JDBC Wrapper](https://github.com/aws/aws-advanced-jdbc-wrapper) version 3.3.0 or later.
+ [Apache Commons Pool](https://commons.apache.org/proper/commons-pool/) version 2.11.1 or later.
+ [Valkey Glide](https://glide.valkey.io/) version 2.3.0 or later.

## Example: Caching a query with the plugin
<a name="caching-database-query-results-example"></a>

The following example shows how to enable the plugin and cache a query result for 300 seconds (5 minutes) with an ElastiCache for Valkey serverless cache:

```
import java.sql.*;
import java.util.Properties;

public class QueryCacheExample {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "myuser");
        props.setProperty("password", "mypassword");

        // Enable the remote query cache plugin
        props.setProperty("wrapperPlugins", "remoteQueryCache");

        // Point to your ElastiCache endpoint
        props.setProperty("cacheEndpointAddrRw", "my-cache.serverless.use1.cache.amazonaws.com:6379");

        Connection conn = DriverManager.getConnection(
            "jdbc:aws-wrapper:postgresql://my-database.cluster-abc123.us-east-1.rds.amazonaws.com:5432/mydb",
            props
        );

        Statement stmt = conn.createStatement();

        // The SQL comment hint tells the plugin to cache this query for 300 seconds
        ResultSet rs = stmt.executeQuery(
            "/* CACHE_PARAM(ttl=300s) */ SELECT product_name, price FROM products WHERE category = 'electronics'"
        );

        while (rs.next()) {
            System.out.println(rs.getString("product_name") + ": $" + rs.getBigDecimal("price"));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}
```

The first time this query runs, the result is returned from the database and cached in ElastiCache. For the next 300 seconds, subsequent executions of that query are served directly from the cache.

## Related resources
<a name="caching-database-query-results-related"></a>

You can find more extensive examples and detailed information about plugin configuration in the [Remote Query Cache Plugin documentation](https://github.com/aws/aws-advanced-jdbc-wrapper/blob/main/docs/using-the-jdbc-driver/using-plugins/UsingTheRemoteQueryCachePlugin.md).

# Caching strategies for Memcached
<a name="Strategies"></a>

In the following topic, you can find strategies for populating and maintaining your Memcached cache.

What strategies to implement for populating and maintaining your cache depend upon what data you cache and the access patterns to that data. For example, you likely don't want to use the same strategy for both a top-10 leaderboard on a gaming site and trending news stories. In the rest of this section, we discuss common cache maintenance strategies and their advantages and disadvantages.

**Topics**
+ [Read replicas](#Strategies.ReadReplicas)
+ [Lazy loading](#Strategies.LazyLoading)
+ [Write-through](#Strategies.WriteThrough)
+ [Adding TTL](#Strategies.WithTTL)
+ [Related topics](#Strategies.SeeAlso)

## Read replicas
<a name="Strategies.ReadReplicas"></a>

You can often significantly improve performance for ElastiCache serverless caches by reading from replicas instead of the primary cache node. For more information, see [Best Practices for using Read Replicas](ReadReplicas.md).

## Lazy loading
<a name="Strategies.LazyLoading"></a>

As the name implies, *lazy loading* is a caching strategy that loads data into the cache only when necessary. It works as follows. 

Amazon ElastiCache is an in-memory key-value store that sits between your application and the data store (database) that it accesses. Whenever your application requests data, it first makes the request to the ElastiCache cache. If the data exists in the cache and is current, ElastiCache returns the data to your application. If the data doesn't exist in the cache or has expired, your application requests the data from your data store. Your data store then returns the data to your application. Your application next writes the data received from the store to the cache. This way, it can be more quickly retrieved the next time it's requested.

A *cache hit* occurs when data is in the cache and isn't expired:

1. Your application requests data from the cache.

1. The cache returns the data to the application.

A *cache miss* occurs when data isn't in the cache or is expired:

1. Your application requests data from the cache.

1. The cache doesn't have the requested data, so it returns `null`.

1. Your application requests and receives the data from the database.

1. Your application updates the cache with the new data.

### Advantages and disadvantages of lazy loading
<a name="Strategies.LazyLoading.Evaluation"></a>

The advantages of lazy loading are as follows:
+ Only requested data is cached.

  Because most data is never requested, lazy loading avoids filling up the cache with data that isn't requested.
+ Node failures aren't fatal for your application.

  When a node fails and is replaced by a new, empty node, your application continues to function, though with increased latency. As requests are made to the new node, each cache miss results in a query of the database. At the same time, the data copy is added to the cache so that subsequent requests are retrieved from the cache.

The disadvantages of lazy loading are as follows:
+ There is a cache miss penalty. Each cache miss results in three trips: 

  1. Initial request for data from the cache

  1. Query of the database for the data

  1. Writing the data to the cache

   These misses can cause a noticeable delay in data getting to the application.
+ Stale data.

  If data is written to the cache only when there is a cache miss, data in the cache can become stale. This result occurs because there are no updates to the cache when data is changed in the database. To address this issue, you can use the [Write-through](#Strategies.WriteThrough) and [Adding TTL](#Strategies.WithTTL) strategies.

### Lazy loading pseudocode example
<a name="Strategies.LazyLoading.CodeExample"></a>

The following is a pseudocode example of lazy loading logic.

```
// *****************************************
// function that returns a customer's record.
// Attempts to retrieve the record from the cache.
// If it is retrieved, the record is returned to the application.
// If the record is not retrieved from the cache, it is
//    retrieved from the database, 
//    added to the cache, and 
//    returned to the application
// *****************************************
get_customer(customer_id)

    customer_record = cache.get(customer_id)
    if (customer_record == null)
    
        customer_record = db.query("SELECT * FROM Customers WHERE id = {0}", customer_id)
        cache.set(customer_id, customer_record)
    
    return customer_record
```

For this example, the application code that gets the data is the following.

```
customer_record = get_customer(12345)
```
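
The same lazy loading flow as a minimal runnable Python sketch. A plain dict stands in for both the ElastiCache client and the database here, which is purely an assumption for illustration:

```python
class LazyLoadingCache:
    """Read from the cache first; on a miss, query the database
    and populate the cache with the result."""

    def __init__(self, db):
        self.cache = {}  # stand-in for an ElastiCache client
        self.db = db     # stand-in for a database client

    def get_customer(self, customer_id):
        record = self.cache.get(customer_id)
        if record is None:                    # cache miss
            record = self.db[customer_id]     # query the database
            self.cache[customer_id] = record  # populate the cache
        return record
```

Note that a later change in the database is not reflected in the cache, which is the staleness disadvantage described above.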

## Write-through
<a name="Strategies.WriteThrough"></a>

The write-through strategy adds data or updates data in the cache whenever data is written to the database.

### Advantages and disadvantages of write-through
<a name="Strategies.WriteThrough.Evaluation"></a>

The advantages of write-through are as follows:
+ Data in the cache is never stale.

  Because the data in the cache is updated every time it's written to the database, the data in the cache is always current.
+ Write penalty vs. read penalty.

  Every write involves two trips: 

  1. A write to the cache

  1. A write to the database

   This adds latency to the process. That said, end users are generally more tolerant of latency when updating data than when retrieving it. There is an inherent sense that updates are more work and thus take longer.

The disadvantages of write-through are as follows:
+ Missing data.

  If you spin up a new node, whether due to a node failure or scaling out, there is missing data. This data continues to be missing until it's added or updated on the database. You can minimize this by implementing [lazy loading](#Strategies.LazyLoading) with write-through.
+ Cache churn.

  Most data is never read, which is a waste of resources. By [adding a time to live (TTL) value](#Strategies.WithTTL), you can minimize wasted space.

### Write-through pseudocode example
<a name="Strategies.WriteThrough.CodeExample"></a>

The following is a pseudocode example of write-through logic.

```
// *****************************************
// function that saves a customer's record.
// *****************************************
save_customer(customer_id, values)

    customer_record = db.query("UPDATE Customers WHERE id = {0}", customer_id, values)
    cache.set(customer_id, customer_record)
    return success
```

For this example, the application code that gets the data is the following.

```
save_customer(12345,{"address":"123 Main"})
```
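
The same write-through flow as a minimal runnable Python sketch (again using dicts as stand-ins for the cache and database clients):

```python
class WriteThroughCache:
    """Every write goes to the database and to the cache,
    so cached data is never stale."""

    def __init__(self, db):
        self.cache = {}  # stand-in for an ElastiCache client
        self.db = db     # stand-in for a database client

    def save_customer(self, customer_id, values):
        self.db[customer_id] = values     # write to the database
        self.cache[customer_id] = values  # keep the cache in sync
        return True
```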

## Adding TTL
<a name="Strategies.WithTTL"></a>

Lazy loading allows for stale data but doesn't fail with empty nodes. Write-through ensures that data is always fresh, but can fail with empty nodes and can populate the cache with superfluous data. By adding a time to live (TTL) value to each write, you can get the advantages of each strategy while largely avoiding cluttering the cache with extra data.

*Time to live (TTL)* is an integer value that specifies the number of seconds until the key expires. Valkey and Redis OSS can express this value in seconds or milliseconds; Memcached specifies it in seconds. When an application attempts to read an expired key, it is treated as though the key was not found. The database is queried for the key and the cache is updated. This approach doesn't guarantee that a value is never stale. However, it keeps data from getting too stale and ensures that values in the cache are occasionally refreshed from the database.

For more information, see the [Valkey and Redis OSS commands](https://valkey.io/commands) or the [Memcached `set` commands](https://www.tutorialspoint.com/memcached/memcached_set_data.htm).

### TTL pseudocode examples
<a name="Strategies.WithTTL.CodeExample"></a>

The following is a pseudocode example of write-through logic with TTL.

```
// *****************************************
// function that saves a customer's record.
// The TTL value of 300 means that the record expires
//    300 seconds (5 minutes) after the set command 
//    and future reads will have to query the database.
// *****************************************
save_customer(customer_id, values)

    customer_record = db.query("UPDATE Customers WHERE id = {0}", customer_id, values)
    cache.set(customer_id, customer_record, 300)

    return success
```

The following is a pseudocode example of lazy loading logic with TTL.

```
// *****************************************
// function that returns a customer's record.
// Attempts to retrieve the record from the cache.
// If it is retrieved, the record is returned to the application.
// If the record is not retrieved from the cache, it is 
//    retrieved from the database, 
//    added to the cache, and 
//    returned to the application.
// The TTL value of 300 means that the record expires
//    300 seconds (5 minutes) after the set command 
//    and subsequent reads will have to query the database.
// *****************************************
get_customer(customer_id)

    customer_record = cache.get(customer_id)
    
    if (customer_record != null)
        if (customer_record.TTL < 300)
            return customer_record        // return the record and exit function
            
    // do this only if the record did not exist in the cache OR
    //    the TTL was >= 300, i.e., the record in the cache had expired.
    customer_record = db.query("SELECT * FROM Customers WHERE id = {0}", customer_id)
    cache.set(customer_id, customer_record, 300)  // update the cache
    return customer_record                // return the newly retrieved record and exit function
```

For this example, the application code that gets the data is the following.

```
save_customer(12345,{"address":"123 Main"})
```

```
customer_record = get_customer(12345)
```
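
Combining the two strategies with a TTL can be sketched in runnable Python as follows; the per-key expiry timestamp mimics what the cache engine's TTL handling does for you (dicts again stand in for the cache and database):

```python
import time

class TTLCache:
    """Write-through plus lazy loading with a per-key TTL:
    entries are reloaded from the database once they expire."""

    def __init__(self, db, ttl_seconds=300):
        self.cache = {}  # key -> (record, expiry timestamp)
        self.db = db
        self.ttl = ttl_seconds

    def save_customer(self, customer_id, values):
        self.db[customer_id] = values
        self.cache[customer_id] = (values, time.time() + self.ttl)

    def get_customer(self, customer_id):
        entry = self.cache.get(customer_id)
        if entry is not None and entry[1] > time.time():
            return entry[0]                   # fresh cache hit
        record = self.db[customer_id]         # expired or missing: reload
        self.cache[customer_id] = (record, time.time() + self.ttl)
        return record
```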

## Related topics
<a name="Strategies.SeeAlso"></a>
+ [In-Memory Data Store](elasticache-use-cases.md#elasticache-use-cases-data-store)
+ [Choosing an engine and version](SelectEngine.md)
+ [Scaling ElastiCache](Scaling.md)

# Managing your node-based cluster in ElastiCache
<a name="manage-self-designed-cluster"></a>

ElastiCache offers two deployment options: serverless caches and node-based clusters. Each has its own capabilities and requirements.

This section contains topics to help you manage your node-based clusters. 

**Note**  
These topics do not apply to ElastiCache Serverless.

**Topics**
+ [Auto Scaling Valkey and Redis OSS clusters](AutoScaling.md)
+ [Modifying cluster mode](modify-cluster-mode.md)
+ [Replication across AWS Regions using global datastores](Redis-Global-Datastore.md)
+ [High availability using replication groups](Replication.md)
+ [Managing ElastiCache cluster maintenance](maintenance-window.md)
+ [Configuring engine parameters using ElastiCache parameter groups](ParameterGroups.md)

# Auto Scaling Valkey and Redis OSS clusters
<a name="AutoScaling"></a>

## Prerequisites
<a name="AutoScaling-Prerequisites"></a>

ElastiCache Auto Scaling is limited to the following:
+ Valkey or Redis OSS (cluster mode enabled) clusters running Valkey 7.2 onwards, or running Redis OSS 6.0 onwards
+ Data tiering (cluster mode enabled) clusters running Valkey 7.2 onwards, or running Redis OSS 7.0.7 onwards 
+ Instance sizes - Large, XLarge, 2XLarge
+ Instance type families - R7g, R6g, R6gd, R5, M7g, M6g, M5, C7gn
+ Auto Scaling in ElastiCache is not supported for clusters running in Global datastores, Outposts, or Local Zones.

## Managing Capacity Automatically with ElastiCache Auto Scaling for Valkey or Redis OSS
<a name="AutoScaling-Managing"></a>

With ElastiCache auto scaling for Valkey or Redis OSS, you can automatically increase or decrease the desired shards or replicas in your ElastiCache service. ElastiCache uses the Application Auto Scaling service to provide this functionality. For more information, see [Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html). To use automatic scaling, you define and apply a scaling policy that uses CloudWatch metrics and target values that you assign. ElastiCache auto scaling uses the policy to increase or decrease the number of instances in response to actual workloads. 

You can use the AWS Management Console to apply a scaling policy based on a predefined metric. A *predefined metric* is defined in an enumeration so that you can specify it by name in code or use it in the AWS Management Console. Custom metrics are not available for selection in the AWS Management Console. Alternatively, you can use either the AWS CLI or the Application Auto Scaling API to apply a scaling policy based on a predefined or custom metric. 

ElastiCache for Valkey and Redis OSS supports scaling for the following dimensions:
+ **Shards** – Automatically add/remove shards in the cluster similar to manual online resharding. In this case, ElastiCache auto scaling triggers scaling on your behalf.
+ **Replicas** – Automatically add/remove replicas in the cluster similar to manual Increase/Decrease replica operations. ElastiCache auto scaling for Valkey and Redis OSS adds/removes replicas uniformly across all shards in the cluster.

ElastiCache for Valkey and Redis OSS supports the following types of automatic scaling policies:
+ [Target tracking scaling policies](AutoScaling-Scaling-Policies-Target.md) – Increase or decrease the number of shards/replicas that your service runs based on a target value for a specific metric. This is similar to the way that your thermostat maintains the temperature of your home. You select a temperature and the thermostat does the rest.
+ [Scheduled scaling for your application](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html) – ElastiCache for Valkey and Redis OSS auto scaling can increase or decrease the number of shards/replicas that your service runs based on the date and time.

![\[Image of auto scaling for ElastiCache for Valkey and Redis OSS\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/Auto-scaling.png)


The following steps summarize the ElastiCache for Valkey and Redis OSS auto scaling process as shown in the previous diagram: 

1. You create an ElastiCache auto scaling policy for your Replication Group.

1. ElastiCache auto scaling creates a pair of CloudWatch alarms on your behalf. Each pair represents your upper and lower boundaries for metrics. These CloudWatch alarms are triggered when the cluster's actual utilization deviates from your target utilization for a sustained period of time. You can view the alarms in the console.

1. If the configured metric value exceeds your target utilization (or falls below the target) for a specific length of time, CloudWatch triggers an alarm that invokes auto scaling to evaluate your scaling policy.

1. ElastiCache auto scaling issues a Modify request to adjust your cluster capacity. 

1. ElastiCache processes the Modify request, dynamically increasing (or decreasing) the cluster Shards/Replicas capacity so that it approaches your target utilization. 

To understand how ElastiCache Auto Scaling works, suppose that you have a cluster named `UsersCluster`. By monitoring the CloudWatch metrics for `UsersCluster`, you determine the maximum number of shards that the cluster requires when traffic is at its peak and the minimum number when traffic is at its lowest point. You also decide on a target value for CPU utilization for the `UsersCluster` cluster. ElastiCache auto scaling uses its target-tracking algorithm to ensure that the provisioned shards of `UsersCluster` are adjusted as required so that utilization remains at or near the target value. 

**Note**  
Scaling can take a noticeable amount of time and requires extra cluster resources for shards to rebalance. ElastiCache Auto Scaling modifies resource settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The auto scaling target-tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term. 

# Auto Scaling policies
<a name="AutoScaling-Policies"></a>

A scaling policy has the following components:
+ A target metric – The CloudWatch metric that ElastiCache for Valkey and Redis OSS Auto Scaling uses to determine when and how much to scale. 
+ Minimum and maximum capacity – The minimum and maximum number of shards or replicas to use for scaling. 
**Important**  
When you create an Auto Scaling policy, if the current capacity is higher than the configured maximum capacity, ElastiCache scales in to the maximum capacity during policy creation. Similarly, if the current capacity is lower than the configured minimum capacity, ElastiCache scales out to the minimum capacity. 
+ A cooldown period – The amount of time, in seconds, after a scale-in or scale-out activity completes before another scale-out activity can start. 
+ A service-linked role – An AWS Identity and Access Management (IAM) role that is linked to a specific AWS service. A service-linked role includes all of the permissions that the service requires to call other AWS services on your behalf. ElastiCache Auto Scaling automatically generates this role, `AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG`, for you. 
+ Enable or disable scale-in activities - Ability to enable or disable scale-in activities for a policy.
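
The capacity clamping described in the Important note above can be sketched as:

```python
def initial_capacity(current, min_capacity, max_capacity):
    """When a scaling policy is created, capacity outside [min, max] is
    brought back inside: scale in if above max, scale out if below min."""
    return max(min_capacity, min(current, max_capacity))

# A cluster with 12 shards and a policy of min=2, max=10 scales in to 10.
print(initial_capacity(12, 2, 10))  # 10
```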

**Topics**
+ [Target metric for Auto Scaling](#AutoScaling-TargetMetric)
+ [Minimum and maximum capacity](#AutoScaling-MinMax)
+ [Cool down period](#AutoScaling-Cooldown)
+ [Enable or disable scale-in activities](#AutoScaling-enable-disable-scale-in)

## Target metric for Auto Scaling
<a name="AutoScaling-TargetMetric"></a>

In this type of policy, a predefined or custom metric and a target value for the metric is specified in a target-tracking scaling policy configuration. ElastiCache for Valkey and Redis OSS Auto Scaling creates and manages CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and target value. The scaling policy adds or removes shards/replicas as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target-tracking scaling policy also adjusts to fluctuations in the metric due to a changing workload. Such a policy also minimizes rapid fluctuations in the number of available shards/replicas for your cluster. 

For example, consider a scaling policy that uses the predefined average `ElastiCachePrimaryEngineCPUUtilization` metric. Such a policy can keep CPU utilization at, or close to, a specified percentage of utilization, such as 70 percent. 

**Note**  
For each cluster, you can create only one Auto Scaling policy for each target metric. 
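
For illustration, the target-tracking configuration that you pass to the Application Auto Scaling `put_scaling_policy` API can be built as a plain dictionary. This is a sketch; the default cooldown values used here are an assumption matching the scale-out (600 seconds) and scale-in (900 seconds) defaults:

```python
def target_tracking_config(metric_type, target_value,
                           scale_in_cooldown=900, scale_out_cooldown=600):
    """Build a TargetTrackingScalingPolicyConfiguration payload for the
    Application Auto Scaling put_scaling_policy API."""
    return {
        "PredefinedMetricSpecification": {"PredefinedMetricType": metric_type},
        "TargetValue": target_value,
        "ScaleInCooldown": scale_in_cooldown,
        "ScaleOutCooldown": scale_out_cooldown,
    }

# Keep average primary-engine CPU utilization at or near 70 percent.
config = target_tracking_config("ElastiCachePrimaryEngineCPUUtilization", 70.0)
```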

## Minimum and maximum capacity
<a name="AutoScaling-MinMax"></a>

**Shards**

You can specify the maximum number of shards that ElastiCache for Valkey and Redis OSS auto scaling can scale to. This value must be between 1 and 250, inclusive. You can also specify the minimum number of shards to be managed by auto scaling. This value must be at least 1, and less than or equal to the value specified for the maximum number of shards. 

**Replicas**

You can specify the maximum number of replicas to be managed by ElastiCache for Valkey and Redis OSS auto scaling. This value must be less than or equal to 5. You can also specify the minimum number of replicas to be managed by auto scaling. This value must be at least 1, and less than or equal to the value specified for the maximum number of replicas.

To determine the minimum and maximum number of shards/replicas that you need for typical traffic, test your Auto Scaling configuration with the expected rate of traffic to your model. 

**Note**  
ElastiCache auto scaling policies increase cluster capacity until it reaches your defined maximum size or until service limits apply. To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 

**Important**  
Scale-in occurs when there is no traffic. If traffic drops to zero, ElastiCache automatically scales in to the minimum capacity that you specified.

## Cool down period
<a name="AutoScaling-Cooldown"></a>

You can tune the responsiveness of a target-tracking scaling policy by adding cooldown periods that affect scaling your cluster. A cooldown period blocks subsequent scale-in or scale-out requests until the period expires. This slows the deletions of shards/replicas in your ElastiCache for Valkey and Redis OSS cluster for scale-in requests, and the creation of shards/replicas for scale-out requests. You can specify the following cooldown periods:
+ A scale-in activity reduces the number of shards/replicas in your cluster. A scale-in cooldown period specifies the amount of time, in seconds, after a scale-in activity completes before another scale-in activity can start.
+ A scale-out activity increases the number of shards/replicas in your cluster. A scale-out cooldown period specifies the amount of time, in seconds, after a scale-out activity completes before another scale-out activity can start. 

When a scale-in or a scale-out cooldown period is not specified, the defaults are 600 seconds for scale-out and 900 seconds for scale-in. 
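The cooldown behavior can be sketched as a simple time check (an illustrative model only; names are hypothetical):

```python
SCALE_OUT_COOLDOWN_DEFAULT = 600  # seconds, used when no value is specified
SCALE_IN_COOLDOWN_DEFAULT = 900   # seconds, used when no value is specified

def can_start_activity(now, last_activity_completed_at, cooldown_seconds):
    """A new scaling activity in the same direction can start only after
    the cooldown period has elapsed since the previous one completed."""
    return now - last_activity_completed_at >= cooldown_seconds

# A scale-out request 5 minutes after the last scale-out completed is still
# blocked by the 600-second default cooldown.
print(can_start_activity(now=1300, last_activity_completed_at=1000,
                         cooldown_seconds=SCALE_OUT_COOLDOWN_DEFAULT))  # False
```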

## Enable or disable scale-in activities
<a name="AutoScaling-enable-disable-scale-in"></a>

You can enable or disable scale-in activities for a policy. Enabling scale-in activities allows the scaling policy to delete shards/replicas. When scale-in activities are enabled, the scale-in cooldown period in the scaling policy applies to scale-in activities. Disabling scale-in activities prevents the scaling policy from deleting shards/replicas. 

**Note**  
Scale-out activities are always enabled so that the scaling policy can create ElastiCache shards or replicas as needed.

## IAM Permissions Required for Auto Scaling
<a name="AutoScaling-IAM-permissions"></a>

ElastiCache for Valkey and Redis OSS Auto Scaling is made possible by a combination of the ElastiCache, CloudWatch, and Application Auto Scaling APIs. Clusters are created and updated with ElastiCache, alarms are created with CloudWatch, and scaling policies are created with Application Auto Scaling. In addition to the standard IAM permissions for creating and updating clusters, the IAM user that accesses ElastiCache Auto Scaling settings must have the appropriate permissions for the services that support dynamic scaling. The most recent version of this policy adds support for Memcached vertical scaling through the `elasticache:ModifyCacheCluster` action. IAM users must have permissions to use the actions shown in the following example policy: 

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "application-autoscaling:*",
                "elasticache:DescribeReplicationGroups",
                "elasticache:ModifyReplicationGroupShardConfiguration",
                "elasticache:ModifyCacheCluster",
                "elasticache:IncreaseReplicaCount",
                "elasticache:DecreaseReplicaCount",
                "elasticache:DescribeCacheClusters",
                "elasticache:DescribeCacheParameters",
                "cloudwatch:DeleteAlarms",
                "cloudwatch:DescribeAlarmHistory",
                "cloudwatch:DescribeAlarms",
                "cloudwatch:DescribeAlarmsForMetric",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:ListMetrics",
                "cloudwatch:PutMetricAlarm",
                "cloudwatch:DisableAlarmActions",
                "cloudwatch:EnableAlarmActions",
                "iam:CreateServiceLinkedRole",
                "sns:CreateTopic",
                "sns:Subscribe",
                "sns:Get*",
                "sns:List*"
            ],
            "Resource": "arn:aws:iam::123456789012:role/autoscaling-roles-for-cluster"
        }
    ]
}
```

------

## Service-linked role
<a name="AutoScaling-SLR"></a>

The ElastiCache for Valkey and Redis OSS auto scaling service also needs permission to describe your clusters and CloudWatch alarms, and permission to modify your ElastiCache target capacity on your behalf. If you enable Auto Scaling for your cluster, it creates a service-linked role named `AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG`. This service-linked role grants ElastiCache auto scaling permission to describe the alarms for your policies, to monitor the current capacity of the fleet, and to modify the capacity of the fleet. The service-linked role is the default role for ElastiCache auto scaling. For more information, see [Service-linked roles for ElastiCache for Redis OSS auto scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-service-linked-roles.html) in the Application Auto Scaling User Guide.

## Auto Scaling Best Practices
<a name="AutoScaling-best-practices"></a>

Before registering your cluster with Auto Scaling, we recommend the following:

1. **Use just one tracking metric** – Identify whether your cluster has CPU-intensive or data-intensive workloads, and use the corresponding predefined metric to define your scaling policy. 
   + Engine CPU: `ElastiCachePrimaryEngineCPUUtilization` (shard dimension) or `ElastiCacheReplicaEngineCPUUtilization` (replica dimension)
   + Database usage: `ElastiCacheDatabaseCapacityUsageCountedForEvictPercentage`. This scaling policy works best with the `maxmemory-policy` parameter set to `noeviction` on the cluster.

   We recommend that you avoid multiple policies per dimension on the cluster. ElastiCache for Valkey and Redis OSS Auto Scaling scales out the scalable target if any target tracking policies are ready for scale-out, but scales in only if all target tracking policies (with the scale-in portion enabled) are ready to scale in. If multiple policies instruct the scalable target to scale out or in at the same time, it scales based on the policy that provides the largest capacity for both scale-in and scale-out.

1. **Customized Metrics for Target Tracking** – Be cautious when using customized metrics for Target Tracking, because auto scaling works best when the chosen metric changes in proportion to scaling actions. If the metric doesn't change proportionally to the scaling actions used for policy creation, continuous scale-out or scale-in actions can result, which can affect availability or cost. 

    For data-tiering clusters (r6gd family instance types), avoid using memory-based metrics for scaling.

1. **Scheduled Scaling** – If you identify that your workload is deterministic (reaching highs and lows at specific times), we recommend using Scheduled Scaling and configuring your target capacity according to need. Target Tracking is best suited for non-deterministic workloads, where the cluster operates at the required target metric by scaling out when you need more resources and scaling in when you need fewer. 

1. **Disable Scale-In** – Auto scaling on Target Tracking is best suited for clusters with gradual increases and decreases in workload, because spikes and dips in metrics can trigger consecutive scale-out/in oscillations. To avoid such oscillations, you can start with scale-in disabled and manually scale in later as needed. 

1. **Test your application** – We recommend that you test your application with your estimated minimum and maximum workloads to determine the absolute minimum and maximum shards/replicas required for the cluster when creating scaling policies, to avoid availability issues. Auto scaling can scale out to the maximum and scale in to the minimum threshold configured for the target.

1. **Defining Target Value** – You can analyze the corresponding CloudWatch metrics for cluster utilization over a four-week period to determine the target value threshold. If you are still not sure what value to choose, we recommend starting with the minimum supported predefined metric value.

1. Auto Scaling on Target Tracking is best suited for clusters with a uniform distribution of workloads across the shards/replicas dimension. A non-uniform distribution can lead to:
   + Scaling when not required, due to a workload spike or dip on a few hot shards/replicas.
   + Not scaling when required, because the overall average stays close to the target even though some shards/replicas are hot.

**Note**  
When scaling out your cluster, ElastiCache will automatically replicate the Functions loaded in one of the existing nodes (selected at random) to the new node(s). If your cluster has Valkey or Redis OSS 7.0 or above and your application uses [Functions](https://valkey.io/topics/functions-intro/), we recommend loading all of your functions to all the shards before scaling out so that your cluster does not end up with different functions on different shards.

After registering with Auto Scaling, note the following:
+ There are limitations on the configurations that Auto Scaling supports, so we recommend that you not change the configuration of a replication group that is registered for Auto Scaling. The following are examples:
  + Manually modifying instance type to unsupported types.
  + Associating the replication group to a Global datastore.
  + Changing the `ReservedMemoryPercent` parameter.
  + Manually increasing/decreasing shards/replicas beyond the Min/Max capacity configured during policy creation.

# Using Auto Scaling with shards
<a name="AutoScaling-Using-Shards"></a>

With ElastiCache Auto Scaling, you can use target tracking and scheduled policies with your Valkey or Redis OSS engine. 

The following provides details on target tracking and scheduled policies and how to apply them using the AWS Management Console, AWS CLI, and API.

**Topics**
+ [Target tracking scaling policies](AutoScaling-Scaling-Policies-Target.md)
+ [Adding a scaling policy](AutoScaling-Scaling-Adding-Policy-Shards.md)
+ [Registering a Scalable Target](AutoScaling-Scaling-Registering-Policy-CLI.md)
+ [Defining a scaling policy](AutoScaling-Scaling-Defining-Policy-API.md)
+ [Disabling scale-in activity](AutoScaling-Scaling-Disabling-Scale-in.md)
+ [Applying a scaling policy](AutoScaling-Scaling-Applying-a-Scaling-Policy.md)
+ [Editing a scaling policy](AutoScaling-Scaling-Editing-a-Scaling-Policy.md)
+ [Deleting a scaling policy](AutoScaling-Scaling-Deleting-a-Scaling-Policy.md)
+ [Use CloudFormation for Auto Scaling policies](AutoScaling-with-Cloudformation-Shards.md)
+ [Scheduled scaling](AutoScaling-with-Scheduled-Scaling-Shards.md)

# Target tracking scaling policies
<a name="AutoScaling-Scaling-Policies-Target"></a>

With target tracking scaling policies, you select a metric and set a target value. ElastiCache for Valkey and Redis OSS Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes shards as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the fluctuations in the metric due to a fluctuating load pattern and minimizes rapid fluctuations in the capacity of the fleet. 

For example, consider a scaling policy that uses the predefined average `ElastiCachePrimaryEngineCPUUtilization` metric with a configured target value. Such a policy can keep CPU utilization at, or close to, the specified target value.

## Predefined metrics
<a name="AutoScaling-Scaling-Criteria-predfined-metrics"></a>

A predefined metric is a structure that refers to a specific name, dimension, and statistic (`average`) of a given CloudWatch metric. Your Auto Scaling policy defines one of the following predefined metrics for your cluster:


****  

| Predefined Metric Name | CloudWatch Metric Name | CloudWatch Metric Dimension | Ineligible Instance Types  | 
| --- | --- | --- | --- | 
| ElastiCachePrimaryEngineCPUUtilization |  `EngineCPUUtilization`  |  ReplicationGroupId, Role = Primary  | None | 
| ElastiCacheDatabaseCapacityUsageCountedForEvictPercentage |  `DatabaseCapacityUsageCountedForEvictPercentage`  |  Valkey or Redis OSS Replication Group Metrics  | None | 
| ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage |  `DatabaseMemoryUsageCountedForEvictPercentage`  |  Valkey or Redis OSS Replication Group Metrics  | R6gd | 

Data-tiered instance types cannot use `ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage`, as these instance types store data in both memory and SSD. The expected use case for data-tiered instances is to have 100 percent memory usage and fill up SSD as needed.

## Auto Scaling criteria for shards
<a name="AutoScaling-Scaling-Criteria"></a>

When the service detects that your predefined metric is equal to or greater than the target setting, it increases your shard capacity automatically. ElastiCache for Valkey and Redis OSS scales out your cluster shards by a count equal to the larger of two numbers: the percent deviation from the target, and 20 percent of the current shards. For scale-in, ElastiCache won't automatically scale in unless the overall metric value is below 75 percent of your defined target. 

For a scale-out example, if you have 50 shards and
+ your target is breached by 30 percent, ElastiCache scales out by 30 percent, which results in 65 shards in the cluster. 
+ your target is breached by 10 percent, ElastiCache scales out by the default minimum of 20 percent, which results in 60 shards in the cluster. 

For a scale-in example, if you have selected a target value of 60 percent, ElastiCache won't automatically scale in until the metric is less than or equal to 45 percent (25 percent below the 60 percent target).
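The scale-out and scale-in arithmetic above can be sketched in a few lines (an illustrative model only; the function names are hypothetical):

```python
import math

def shards_after_scale_out(current_shards, observed, target):
    """New shard count after a scale-out: grow by the larger of the percent
    deviation above target and the default minimum of 20 percent."""
    breach = (observed - target) / target      # fractional deviation above target
    growth = max(breach, 0.20)
    return current_shards + math.ceil(current_shards * growth)

def scale_in_threshold(target):
    """Auto scaling won't scale in until the metric drops below
    75 percent of the configured target."""
    return 0.75 * target

print(shards_after_scale_out(50, 78, 60))  # 30 percent breach of a 60 target -> 65
print(shards_after_scale_out(50, 66, 60))  # 10 percent breach -> default 20 percent -> 60
print(scale_in_threshold(60))              # 45.0
```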

## Auto Scaling considerations
<a name="AutoScaling-Scaling-Considerations"></a>

Keep the following considerations in mind:
+ A target tracking scaling policy assumes that it should scale out when the specified metric is above the target value. You can't use a target tracking scaling policy to scale out when the specified metric is below the target value. ElastiCache for Valkey and Redis OSS scales out shards by a minimum of 20 percent of the existing shards in the cluster.
+ A target tracking scaling policy does not perform scaling when the specified metric has insufficient data. It does not perform scale-in because it does not interpret insufficient data as low utilization. 
+ You may see gaps between the target value and the actual metric data points. This is because ElastiCache Auto Scaling always acts conservatively by rounding up or down when it determines how much capacity to add or remove. This prevents it from adding insufficient capacity or removing too much capacity. 
+ To ensure application availability, the service scales out proportionally to the metric as fast as it can, but scales in more conservatively. 
+ You can have multiple target tracking scaling policies for an ElastiCache for Valkey and Redis OSS cluster, provided that each of them uses a different metric. The intention of ElastiCache Auto Scaling is to always prioritize availability, so its behavior differs depending on whether the target tracking policies are ready for scale out or scale in. It will scale out the service if any of the target tracking policies are ready for scale out, but will scale in only if all of the target tracking policies (with the scale-in portion enabled) are ready to scale in. 
+ Do not edit or delete the CloudWatch alarms that ElastiCache Auto Scaling manages for a target tracking scaling policy. ElastiCache Auto Scaling deletes the alarms automatically when you delete the scaling policy. 
+ ElastiCache Auto Scaling doesn't prevent you from manually modifying cluster shards. These manual adjustments don't affect any existing CloudWatch alarms that are attached to the scaling policy but can impact metrics that may trigger these CloudWatch alarms. 
+ The CloudWatch alarms managed by Auto Scaling are defined over the average of the metric across all the shards in the cluster. So, hot shards can result in either of the following scenarios:
  + Scaling when not required, because the load on a few hot shards triggers a CloudWatch alarm.
  + Not scaling when required, because the aggregated average across all shards keeps the alarm from breaching.
+ The ElastiCache default limit on nodes per cluster still applies. So, if you opt in to Auto Scaling and expect the maximum number of nodes to exceed the default limit, request a limit increase at [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 
+ Ensure that you have enough ENIs (Elastic Network Interfaces) available in your VPC, which are required during scale-out. For more information, see [Elastic network interfaces](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ElasticNetworkInterfaces.html).
+ If there is not enough capacity available from EC2, ElastiCache Auto Scaling doesn't scale and is delayed until the capacity becomes available.
+ During scale-in, ElastiCache for Redis OSS Auto Scaling won't remove shards that have slots with an item size larger than 256 MB after serialization.
+ During scale-in, it won't remove shards if the resulting shard configuration would have insufficient memory.
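The multiple-policy behavior described in these considerations can be sketched as follows (an illustrative model, not the service implementation; the function name is hypothetical):

```python
def resolve_desired_capacity(current, desired_by_policy):
    """Combine the desired capacities from several target tracking policies.
    Scale out if any policy asks for more capacity; scale in only if every
    policy asks for less. In both cases the largest desired capacity wins."""
    if any(d > current for d in desired_by_policy):
        return max(desired_by_policy)
    if all(d < current for d in desired_by_policy):
        return max(desired_by_policy)
    return current

print(resolve_desired_capacity(10, [8, 12]))  # one policy wants to scale out -> 12
print(resolve_desired_capacity(10, [8, 9]))   # all policies agree to scale in -> 9
print(resolve_desired_capacity(10, [8, 10]))  # no consensus to scale in -> stays at 10
```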

# Adding a scaling policy
<a name="AutoScaling-Scaling-Adding-Policy-Shards"></a>

You can add a scaling policy using the AWS Management Console. 

**To add an Auto Scaling policy to an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy to (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Choose **Add dynamic scaling**. 

1. For **Policy name**, enter a policy name. 

1. For **Scalable Dimension**, choose **shards**. 

1. For the target metric, choose one of the following:
   + **Primary CPU Utilization** to create a policy based on the average CPU utilization. 
   + **Memory** to create a policy based on the average database memory. 
   + **Capacity** to create a policy based on average database capacity usage. The Capacity metric includes memory and SSD utilization for data-tiered instances, and memory utilization for all other instance types.

1. For the target value, choose a value greater than or equal to 35 and less than or equal to 70. Auto Scaling maintains this value for the selected target metric across your ElastiCache shards: 
   + **Primary CPU Utilization**: maintains the target value for the `EngineCPUUtilization` metric on primary nodes. 
   + **Memory**: maintains the target value for the `DatabaseMemoryUsageCountedForEvictPercentage` metric. 
   + **Capacity**: maintains the target value for the `DatabaseCapacityUsageCountedForEvictPercentage` metric.

   Cluster shards are added or removed to keep the metric close to the specified value. 

1. (Optional) Scale-in or scale-out cooldown periods are not supported from the console. Use the AWS CLI to modify the cooldown values. 

1. For **Minimum capacity**, type the minimum number of shards that the ElastiCache Auto Scaling policy is required to maintain. 

1. For **Maximum capacity**, type the maximum number of shards that the ElastiCache Auto Scaling policy is required to maintain. This value must be less than or equal to 250.

1. Choose **Create**.

# Registering a Scalable Target
<a name="AutoScaling-Scaling-Registering-Policy-CLI"></a>

Before you can use Auto Scaling with an ElastiCache for Valkey and Redis OSS cluster, you register your cluster with ElastiCache auto scaling. You do so to define the scaling dimension and limits to be applied to that cluster. ElastiCache auto scaling dynamically scales the cluster along the `elasticache:replication-group:NodeGroups` scalable dimension, which represents the number of cluster shards. 

 **Using the AWS CLI** 

To register your ElastiCache for Valkey and Redis OSS cluster, use the [register-scalable-target](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) command with the following parameters: 
+ `--service-namespace` – Set this value to `elasticache`.
+ `--resource-id` – The resource identifier for the cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `--scalable-dimension` – Set this value to `elasticache:replication-group:NodeGroups`. 
+ `--max-capacity` – The maximum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax). 
+ `--min-capacity` – The minimum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax). 

**Example**  
 In the following example, you register an ElastiCache cluster named `myscalablecluster`. The registration indicates that the cluster should be dynamically scaled to have from one to ten shards.   
For Linux, macOS, or Unix:  

```
aws application-autoscaling register-scalable-target \
    --service-namespace elasticache \
    --resource-id replication-group/myscalablecluster \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --min-capacity 1 \
    --max-capacity 10
```
For Windows:  

```
aws application-autoscaling register-scalable-target ^
    --service-namespace elasticache ^
    --resource-id replication-group/myscalablecluster ^
    --scalable-dimension elasticache:replication-group:NodeGroups ^
    --min-capacity 1 ^
    --max-capacity 10
```

**Using the API**

To register your ElastiCache cluster, use the [RegisterScalableTarget](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_RegisterScalableTarget.html) API operation with the following parameters: 
+ `ServiceNamespace` – Set this value to `elasticache`. 
+ `ResourceId` – The resource identifier for the ElastiCache cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ `ScalableDimension` – Set this value to `elasticache:replication-group:NodeGroups`. 
+ `MinCapacity` – The minimum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).
+ `MaxCapacity` – The maximum number of shards to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of shards in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).

**Example**  
In the following example, you register an ElastiCache cluster named `myscalablecluster` with the Application Auto Scaling API. This registration indicates that the cluster should be dynamically scaled to have from one to five shards.   

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.RegisterScalableTarget
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "MinCapacity": 1,
    "MaxCapacity": 5
}
```

# Defining a scaling policy
<a name="AutoScaling-Scaling-Defining-Policy-API"></a>

A target-tracking scaling policy configuration is represented by a JSON block in which the metrics and target values are defined. You can save a scaling policy configuration as a JSON block in a text file, and use that text file when invoking the AWS CLI or the Application Auto Scaling API. For more information about policy configuration syntax, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 
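For example, you might save a configuration to a file named `config.json` (the name used in the CLI examples later in this section) and confirm that it parses before passing it to the CLI (a minimal sketch; the metric and target value are illustrative):

```shell
# Write a target-tracking configuration to config.json, then confirm it is
# valid JSON before passing it to the CLI as file://config.json.
cat > config.json <<'EOF'
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    }
}
EOF
python3 -m json.tool config.json
```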

The following options are available for defining a target-tracking scaling policy configuration: 

**Topics**
+ [Using a predefined metric](#AutoScaling-Scaling-Predefined-Metric)
+ [Using a custom metric](#AutoScaling-Scaling-Custom-Metric)
+ [Using cooldown periods](#AutoScaling-Scaling-Cooldown-periods)

## Using a predefined metric
<a name="AutoScaling-Scaling-Predefined-Metric"></a>

By using predefined metrics, you can quickly define a target-tracking scaling policy for an ElastiCache for Valkey and Redis OSS cluster that works with target tracking in ElastiCache Auto Scaling. 

Currently, ElastiCache supports the following predefined metrics in NodeGroup Auto Scaling: 
+ **ElastiCachePrimaryEngineCPUUtilization** – The average value of the `EngineCPUUtilization` metric in CloudWatch across all primary nodes in the cluster.
+ **ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage** – The average value of the `DatabaseMemoryUsageCountedForEvictPercentage` metric in CloudWatch across all primary nodes in the cluster.
+ **ElastiCacheDatabaseCapacityUsageCountedForEvictPercentage** – The average value of the `DatabaseCapacityUsageCountedForEvictPercentage` metric in CloudWatch across all primary nodes in the cluster.

For more information about the `EngineCPUUtilization`, `DatabaseMemoryUsageCountedForEvictPercentage`, and `DatabaseCapacityUsageCountedForEvictPercentage` metrics, see [Monitoring use with CloudWatch Metrics](CacheMetrics.md). To use a predefined metric in your scaling policy, you create a target tracking configuration for your scaling policy. This configuration must include a `PredefinedMetricSpecification` for the predefined metric and a `TargetValue` for the target value of that metric. 

**Example**  
The following example describes a typical policy configuration for target-tracking scaling for an ElastiCache for Valkey and Redis OSS cluster. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric is used to adjust the cluster based on an average CPU utilization of 40 percent across all primary nodes in the cluster.   

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    }
}
```

## Using a custom metric
<a name="AutoScaling-Scaling-Custom-Metric"></a>

By using custom metrics, you can define a target-tracking scaling policy that meets your custom requirements. You can define a custom metric based on any ElastiCache metric that changes in proportion to scaling. Not all ElastiCache metrics work for target tracking. The metric must be a valid utilization metric that describes how busy an instance is. The value of the metric must increase or decrease in proportion to the number of shards in the cluster. This proportional change is necessary for the metric data to scale the number of shards out or in proportionally. 

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, a custom metric adjusts an ElastiCache for Redis OSS cluster based on an average CPU utilization of 50 percent across all shards in a cluster named `my-db-cluster`. 

```
{
    "TargetValue": 50,
    "CustomizedMetricSpecification":
    {
        "MetricName": "EngineCPUUtilization",
        "Namespace": "AWS/ElastiCache",
        "Dimensions": [
            {
                "Name": "ReplicationGroup","Value": "my-db-cluster"
            },
            {
                "Name": "Role","Value": "PRIMARY"
            }
        ],
        "Statistic": "Average",
        "Unit": "Percent"
    }
}
```

## Using cooldown periods
<a name="AutoScaling-Scaling-Cooldown-periods"></a>

You can specify a value, in seconds, for `ScaleOutCooldown` to add a cooldown period for scaling out your cluster. Similarly, you can add a value, in seconds, for `ScaleInCooldown` to add a cooldown period for scaling in your cluster. For more information, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 

 The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric is used to adjust an ElastiCache for Redis OSS cluster based on an average CPU utilization of 40 percent across all primary nodes in that cluster. The configuration provides a scale-in cooldown period of 10 minutes and a scale-out cooldown period of 5 minutes. 

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    },
    "ScaleInCooldown": 600,
    "ScaleOutCooldown": 300
}
```

# Disabling scale-in activity
<a name="AutoScaling-Scaling-Disabling-Scale-in"></a>

You can prevent the target-tracking scaling policy configuration from scaling in your cluster by disabling scale-in activity. Disabling scale-in activity prevents the scaling policy from deleting shards, while still allowing the scaling policy to create them as needed. 

You can specify a Boolean value for `DisableScaleIn` to enable or disable scale-in activity for your cluster. For more information, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the Application Auto Scaling API Reference. 

The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCachePrimaryEngineCPUUtilization` predefined metric adjusts an ElastiCache for Valkey and Redis OSS cluster based on an average CPU utilization of 40 percent across all primary nodes in that cluster. The configuration disables scale-in activity for the scaling policy. 

```
{
    "TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
    },
    "DisableScaleIn": true
}
```

# Applying a scaling policy
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy"></a>

After registering your cluster with ElastiCache for Valkey and Redis OSS auto scaling and defining a scaling policy, you apply the scaling policy to the registered cluster. To apply a scaling policy to an ElastiCache for Redis OSS cluster, you can use the AWS CLI or the Application Auto Scaling API. 

## Applying a scaling policy using the AWS CLI
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy-CLI"></a>

To apply a scaling policy to your ElastiCache for Valkey and Redis OSS cluster, use the [put-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) command with the following parameters: 
+ **--policy-name** – The name of the scaling policy. 
+ **--policy-type** – Set this value to `TargetTrackingScaling`. 
+ **--resource-id** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **--service-namespace** – Set this value to `elasticache`. 
+ **--scalable-dimension** – Set this value to `elasticache:replication-group:NodeGroups`. 
+ **--target-tracking-scaling-policy-configuration** – The target-tracking scaling policy configuration to use for the cluster. 

In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an ElastiCache for Valkey and Redis OSS cluster named `myscalablecluster` with ElastiCache auto scaling. To do so, you use a policy configuration saved in a file named `config.json`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --target-tracking-scaling-policy-configuration file://config.json
```

For Windows:

```
aws application-autoscaling put-scaling-policy ^
    --policy-name myscalablepolicy ^
    --policy-type TargetTrackingScaling ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:NodeGroups ^
    --target-tracking-scaling-policy-configuration file://config.json
```

## Applying a scaling policy using the API
<a name="AutoScaling-Scaling-Applying-a-Scaling-Policy-API"></a>

To apply a scaling policy to your ElastiCache for Valkey and Redis OSS cluster, use the [PutScalingPolicy](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_PutScalingPolicy.html) Application Auto Scaling API operation with the following parameters: 
+ **PolicyName** – The name of the scaling policy. 
+ **PolicyType** – Set this value to `TargetTrackingScaling`. 
+ **ResourceId** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **ServiceNamespace** – Set this value to `elasticache`. 
+ **ScalableDimension** – Set this value to `elasticache:replication-group:NodeGroups`. 
+ **TargetTrackingScalingPolicyConfiguration** – The target-tracking scaling policy configuration to use for the cluster. 

In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to an ElastiCache cluster named `myscalablecluster` with ElastiCache auto scaling. You use a policy configuration based on the `ElastiCachePrimaryEngineCPUUtilization` predefined metric. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.PutScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification":
        {
            "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
        }
    }
}
```
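Programmatic callers typically go through an SDK rather than hand-rolled HTTP. The following is a minimal sketch, assuming the boto3 SDK is installed and AWS credentials are configured; it builds the same request shown above as a plain dict and sanity-checks it before sending (the actual call is commented out so the sketch stays self-contained):

```python
import json

# Sketch only: build the PutScalingPolicy request shown above as a Python
# dict so it can be inspected or serialized. The boto3 call at the end is
# commented out because it needs AWS credentials and a real replication
# group.
request = {
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
        },
    },
}

# Basic sanity check before sending: the target is a percentage.
config = request["TargetTrackingScalingPolicyConfiguration"]
assert 0 < config["TargetValue"] <= 100

print(json.dumps(request, indent=4))

# import boto3
# boto3.client("application-autoscaling").put_scaling_policy(**request)
```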

# Editing a scaling policy
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy"></a>

You can edit a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API. 

## Editing a scaling policy using the AWS Management Console
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CON"></a>

**To edit an Auto Scaling policy for an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose the appropriate engine. 

1. Choose the cluster that you want to add a policy to (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the button to the left of the Auto Scaling policy you wish to change, and then choose **Modify**. 

1. Make the requisite changes to the policy.

1. Choose **Modify**.

## Editing a scaling policy using the AWS CLI and API
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CLI"></a>

You can use the AWS CLI or the Application Auto Scaling API to edit a scaling policy in the same way that you apply a scaling policy: 
+ When using the AWS CLI, specify the name of the policy you want to edit in the `--policy-name` parameter. Specify new values for the parameters you want to change. 
+ When using the Application Auto Scaling API, specify the name of the policy you want to edit in the `PolicyName` parameter. Specify new values for the parameters you want to change. 

For more information, see [Applying a scaling policy](AutoScaling-Scaling-Applying-a-Scaling-Policy.md).

# Deleting a scaling policy
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy"></a>

You can delete a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API. 

## Deleting a scaling policy using the AWS Management Console
<a name="AutoScaling-Scaling-Editing-a-Scaling-Policy-CON"></a>

**To delete an Auto Scaling policy for an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster whose Auto Scaling policy you want to delete (choose the cluster name, not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the Auto Scaling policy, and then choose **Delete**. 

## Deleting a scaling policy using the AWS CLI
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy-CLI"></a>

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [delete-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scaling-policy.html) AWS CLI command with the following parameters: 
+ **--policy-name** – The name of the scaling policy. 
+ **--resource-id** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **--service-namespace** – Set this value to `elasticache`. 
+ **--scalable-dimension** – Set this value to `elasticache:replication-group:NodeGroups`. 

In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from a cluster named `myscalablecluster`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling delete-scaling-policy \
    --policy-name myscalablepolicy \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:NodeGroups
```

For Windows:

```
aws application-autoscaling delete-scaling-policy ^
    --policy-name myscalablepolicy ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:NodeGroups
```

## Deleting a scaling policy using the API
<a name="AutoScaling-Scaling-Deleting-a-Scaling-Policy-API"></a>

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [DeleteScalingPolicy](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_DeleteScalingPolicy.html) Application Auto Scaling API operation with the following parameters: 
+ **PolicyName** – The name of the scaling policy. 
+ **ResourceId** – The resource identifier. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **ServiceNamespace** – Set this value to `elasticache`. 
+ **ScalableDimension** – Set this value to `elasticache:replication-group:NodeGroups`. 

In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from a cluster named `myscalablecluster`. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.DeleteScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:NodeGroups"
}
```

# Use CloudFormation for Auto Scaling policies
<a name="AutoScaling-with-Cloudformation-Shards"></a>

This snippet shows how to create a target tracking policy and apply it to an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource using the [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) and [Ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html) intrinsic functions to construct the `ResourceId` property with the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
ScalingTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 3
    MinCapacity: 1
    ResourceId: !Sub replication-group/${logicalName}
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    ServiceNamespace: elasticache
    RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"

ScalingPolicy:
  Type: 'AWS::ApplicationAutoScaling::ScalingPolicy'
  Properties:
    ScalingTargetId: !Ref ScalingTarget
    ServiceNamespace: elasticache
    PolicyName: testpolicy
    PolicyType: TargetTrackingScaling
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    TargetTrackingScalingPolicyConfiguration:
      PredefinedMetricSpecification:
        PredefinedMetricType: ElastiCachePrimaryEngineCPUUtilization
      TargetValue: 40
```

# Scheduled scaling
<a name="AutoScaling-with-Scheduled-Scaling-Shards"></a>

Scaling based on a schedule enables you to scale your application in response to predictable changes in demand. To use scheduled scaling, you create scheduled actions, which tell ElastiCache for Valkey and Redis OSS to perform scaling activities at specific times. When you create a scheduled action, you specify an existing cluster, when the scaling activity should occur, minimum capacity, and maximum capacity. You can create scheduled actions that scale one time only or that scale on a recurring schedule. 

 You can only create a scheduled action for clusters that already exist. You can't create a scheduled action at the same time that you create a cluster.

For more information on terminology for scheduled action creation, management, and deletion, see [Commonly used commands for scheduled action creation, management, and deletion](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html#scheduled-scaling-commonly-used-commands). 

**To create on a recurring schedule:**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy for. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, in the **Add Scaling policy** dialog box, choose **Scheduled scaling**.

1. For **Policy Name**, enter the policy name. 

1. For **Scalable Dimension**, choose **Shards**. 

1. For **Target Shards**, choose the value. 

1. For **Recurrence**, choose **Recurring**. 

1. For **Frequency**, choose the respective value. 

1. For **Start Date** and **Start time**, choose the time from when the policy will go into effect. 

1. Choose **Add Policy**. 

**To create a one-time scheduled action:**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy for. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, in the **Add Scaling policy** dialog box, choose **Scheduled scaling**.

1. For **Policy Name**, enter the policy name. 

1. For **Scalable Dimension**, choose **Shards**. 

1. For **Target Shards**, choose the value. 

1. For **Recurrence**, choose **One Time**. 

1. For **Start Date** and **Start time**, choose the time from when the policy will go into effect. 

1. For **End Date** choose the date until when the policy would be in effect. 

1. Choose **Add Policy**. 

**To delete a scheduled action**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster whose scheduled action you want to delete. 

1. Choose **Manage Auto Scaling policies** from the **Actions** dropdown. 

1. Choose the **Auto Scaling policies** tab.

1. In the **Auto scaling policies** section, choose the auto scaling policy, and then choose **Delete** from the **Actions** dropdown.

**To manage scheduled scaling using the AWS CLI**

Use the following application-autoscaling AWS CLI commands:
+ [put-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scheduled-action.html) 
+ [describe-scheduled-actions](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scheduled-actions.html) 
+ [delete-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scheduled-action.html) 
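As a sketch, a recurring scheduled action that scales the cluster to between 2 and 5 shards every day at 18:00 UTC might look like the following (the action name and capacity values are illustrative):

```
aws application-autoscaling put-scheduled-action \
    --service-namespace elasticache \
    --resource-id replication-group/myscalablecluster \
    --scalable-dimension elasticache:replication-group:NodeGroups \
    --scheduled-action-name myscheduledaction \
    --schedule "cron(0 18 * * ? *)" \
    --scalable-target-action MinCapacity=2,MaxCapacity=5
```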

## Use CloudFormation to create a scheduled action
<a name="AutoScaling-with-Cloudformation-Declare-Scheduled-Action"></a>

This snippet shows how to create a scheduled action and apply it to an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource using the `ScheduledActions` property of the [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) intrinsic function to construct the `ResourceId` property with the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
ScalingTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 3
    MinCapacity: 1
    ResourceId: !Sub replication-group/${logicalName}
    ScalableDimension: 'elasticache:replication-group:NodeGroups'
    ServiceNamespace: elasticache
    RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"
    ScheduledActions:
      - EndTime: '2020-12-31T12:00:00.000Z'
        ScalableTargetAction:
          MaxCapacity: '5'
          MinCapacity: '2'
        ScheduledActionName: First
        Schedule: 'cron(0 18 * * ? *)'
```

# Using Auto Scaling with replicas
<a name="AutoScaling-Using-Replicas"></a>

An ElastiCache replication group sets up one or more nodes to work together as a single logical cache. 

The following sections provide details on target tracking and scheduled scaling policies and how to apply them using the AWS Management Console, the AWS CLI, and the APIs.

# Target tracking scaling policies
<a name="AutoScaling-Scaling-Policies-Replicas-Replicas"></a>

With target tracking scaling policies, you select a metric and set a target value. ElastiCache for Valkey and Redis OSS AutoScaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes replicas uniformly across all shards as required to keep the metric at, or close to, the specified target value. In addition to keeping the metric close to the target value, a target tracking scaling policy also adjusts to the fluctuations in the metric due to a fluctuating load pattern and minimizes rapid fluctuations in the capacity of the fleet. 

## Auto Scaling criteria for replicas
<a name="AutoScaling-Scaling-Criteria-Replicas"></a>

Your Auto Scaling policy defines the following predefined metric for your cluster:

`ElastiCacheReplicaEngineCPUUtilization`: The average `EngineCPUUtilization` threshold, aggregated across all replicas, that ElastiCache uses to trigger an auto scaling operation. You can set the utilization target between 35 percent and 70 percent.

When the service detects that your `ElastiCacheReplicaEngineCPUUtilization` metric is equal to or greater than the target setting, it automatically increases replicas across your shards. ElastiCache scales out your cluster replicas by a count equal to the larger of two numbers: the percentage variation from the target and one replica. For scale-in, ElastiCache won't automatically scale in unless the overall metric value is below 75 percent of your defined target. 

For a scale-out example, suppose that you have 5 shards with 1 replica each. If the metric breaches your target by 30 percent, ElastiCache for Valkey and Redis OSS scales out by 1 replica (max(0.3, 1) rounded to a whole replica) across all shards, which results in 5 shards with 2 replicas each.

For a scale-in example, if you have selected a target value of 60 percent, ElastiCache for Valkey and Redis OSS won't automatically scale in until the metric is less than or equal to 45 percent (25 percent below the 60 percent target).
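The scale-out and scale-in rules above reduce to simple arithmetic. The following minimal sketch (the function names are illustrative and not part of any AWS API) reproduces the two worked examples:

```python
# Illustrative helpers that reproduce the scale-out and scale-in
# arithmetic described above; these names are not part of any AWS API.

def scale_in_threshold(target: float) -> float:
    """Metric value at or below which auto scaling scales in:
    75 percent of the configured target."""
    return 0.75 * target

def scale_out_replicas(deviation_fraction: float) -> int:
    """Replicas added per shard on scale-out: the larger of the rounded
    deviation from the target and one replica."""
    return max(round(deviation_fraction), 1)

# A 60 percent target scales in only at or below 45 percent.
print(scale_in_threshold(60))   # 45.0
# A 30 percent breach above target still adds the minimum of 1 replica.
print(scale_out_replicas(0.3))  # 1
```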

### Auto Scaling considerations
<a name="AutoScaling-Scaling-Considerations-Replicas"></a>

Keep the following considerations in mind:
+ A target tracking scaling policy assumes that it should scale out when the specified metric is above the target value. You cannot use a target tracking scaling policy to scale out when the specified metric is below the target value. ElastiCache for Valkey and Redis OSS scales out replicas by the maximum of the percentage deviation from the target (rounded) and 1, applied to the existing replicas across all shards in the cluster.
+ A target tracking scaling policy does not perform scaling when the specified metric has insufficient data. It does not perform scale in because it does not interpret insufficient data as low utilization. 
+ You may see gaps between the target value and the actual metric data points. This is because ElastiCache Auto Scaling always acts conservatively by rounding up or down when it determines how much capacity to add or remove. This prevents it from adding insufficient capacity or removing too much capacity. 
+ To ensure application availability, the service scales out proportionally to the metric as fast as it can, but scales in more gradually, with a maximum scale-in of 1 replica across the shards in the cluster. 
+ You can have multiple target tracking scaling policies for an ElastiCache for Valkey and Redis OSS cluster, provided that each of them uses a different metric. The intention of Auto Scaling is to always prioritize availability, so its behavior differs depending on whether the target tracking policies are ready for scale out or scale in. It will scale out the service if any of the target tracking policies are ready for scale out, but will scale in only if all of the target tracking policies (with the scale-in portion enabled) are ready to scale in. 
+ Do not edit or delete the CloudWatch alarms that ElastiCache Auto Scaling manages for a target tracking scaling policy. Auto Scaling deletes the alarms automatically when you delete the scaling policy or delete the cluster. 
+ ElastiCache Auto Scaling doesn't prevent you from manually modifying replicas across shards. These manual adjustments don't affect any existing CloudWatch alarms that are attached to the scaling policy but can impact metrics that may trigger these CloudWatch alarms. 
+ The CloudWatch alarms managed by Auto Scaling are defined over the average of the metric across all the shards in the cluster. As a result, hot shards can cause either of the following scenarios:
  + Scaling when it isn't required, because load on a few hot shards triggers a CloudWatch alarm.
  + Not scaling when it is required, because the average aggregated across all shards keeps the alarm from breaching.
+ ElastiCache default limits on nodes per cluster still apply. If you opt for Auto Scaling and expect the maximum number of nodes to exceed the default limit, request a limit increase at [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 
+ Ensure that you have enough ENIs (Elastic Network Interfaces) available in your VPC, which are required during scale-out. For more information, see [Elastic network interfaces](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_ElasticNetworkInterfaces.html).
+ If there is not enough capacity available from EC2, ElastiCache Auto Scaling will not scale out until either the capacity becomes available or you manually modify the cluster to use instance types that have enough capacity.
+ ElastiCache Auto Scaling doesn't support scaling of replicas with a cluster having `ReservedMemoryPercent` less than 25 percent. For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md). 

# Adding a scaling policy
<a name="AutoScaling-Adding-Policy-Replicas"></a>

You can add a scaling policy using the AWS Management Console. 

**Adding a scaling policy using the AWS Management Console**

**To add an Auto Scaling policy to an ElastiCache for Valkey and Redis OSS cluster**

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**. 

1. Choose the cluster that you want to add a policy to (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose **Add dynamic scaling**.

1. For **Policy Name**, enter the policy name. 

1. For **Scalable Dimension**, select **Replicas** from the dialog box. 

1. For the target value, enter the average percentage of CPU utilization that you want to maintain on ElastiCache replicas. This value must be between 35 and 70, inclusive. Cluster replicas are added or removed to keep the metric close to the specified value.

1. (Optional) Scale-in and scale-out cooldown periods are not supported in the console. Use the AWS CLI to modify the cooldown values. 

1. For **Minimum capacity**, type the minimum number of replicas that the ElastiCache Auto Scaling policy is required to maintain. 

1. For **Maximum capacity**, type the maximum number of replicas that the ElastiCache Auto Scaling policy is required to maintain. This value must be less than or equal to 5. 

1. Choose **Create**.

# Registering a Scalable Target
<a name="AutoScaling-Register-Policy"></a>

You can apply a scaling policy based on either a predefined or custom metric. To do so, you can use the AWS CLI or the Application Auto Scaling API. The first step is to register your ElastiCache for Valkey and Redis OSS replication group with Auto Scaling. 

Before you can use ElastiCache auto scaling with a cluster, you must register your cluster with ElastiCache auto scaling. You do so to define the scaling dimension and limits to be applied to that cluster. ElastiCache auto scaling dynamically scales the cluster along the `elasticache:replication-group:Replicas` scalable dimension, which represents the number of cluster replicas per shard. 

**Using the CLI** 

To register your ElastiCache cluster, use the [register-scalable-target](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/register-scalable-target.html) command with the following parameters: 
+ **--service-namespace** – Set this value to `elasticache`. 
+ **--resource-id** – The resource identifier for the ElastiCache cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **--scalable-dimension** – Set this value to `elasticache:replication-group:Replicas`. 
+ **--min-capacity** – The minimum number of replicas to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of replicas in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).
+ **--max-capacity** – The maximum number of replicas to be managed by ElastiCache auto scaling. For information about the relationship between `--min-capacity`, `--max-capacity`, and the number of replicas in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).

**Example**  
In the following example, you register an ElastiCache cluster named `myscalablecluster`. The registration indicates that the cluster should be dynamically scaled to have from 1 to 5 replicas.   
For Linux, macOS, or Unix:  

```
aws application-autoscaling register-scalable-target \
    --service-namespace elasticache \
    --resource-id replication-group/myscalablecluster \
    --scalable-dimension elasticache:replication-group:Replicas \
    --min-capacity 1 \
    --max-capacity 5
```
For Windows:  

```
aws application-autoscaling register-scalable-target ^
    --service-namespace elasticache ^
    --resource-id replication-group/myscalablecluster ^
    --scalable-dimension elasticache:replication-group:Replicas ^
    --min-capacity 1 ^
    --max-capacity 5
```

**Using the API**

To register your ElastiCache cluster, use the [RegisterScalableTarget](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_RegisterScalableTarget.html) Application Auto Scaling API operation with the following parameters: 
+ **ServiceNamespace** – Set this value to `elasticache`. 
+ **ResourceId** – The resource identifier for the ElastiCache cluster. For this parameter, the resource type is `ReplicationGroup` and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ **ScalableDimension** – Set this value to `elasticache:replication-group:Replicas`. 
+ **MinCapacity** – The minimum number of replicas to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of replicas in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).
+ **MaxCapacity** – The maximum number of replicas to be managed by ElastiCache auto scaling. For information about the relationship between `MinCapacity`, `MaxCapacity`, and the number of replicas in your cluster, see [Minimum and maximum capacity](AutoScaling-Policies.md#AutoScaling-MinMax).

**Example**  
In the following example, you register a cluster named `myscalablecluster` with the Application Auto Scaling API. This registration indicates that the cluster should be dynamically scaled to have from 1 to 5 replicas. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.RegisterScalableTarget
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:Replicas",
    "MinCapacity": 1,
    "MaxCapacity": 5
}
```

# Defining a scaling policy
<a name="AutoScaling-Defining-Policy"></a>

A target-tracking scaling policy configuration is represented by a JSON block in which the metrics and target values are defined. You can save a scaling policy configuration as a JSON block in a text file. You use that text file when invoking the AWS CLI or the Application Auto Scaling API. For more information about policy configuration syntax, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*. 

The following options are available for defining a target-tracking scaling policy configuration:

**Topics**
+ [Using a predefined metric](#AutoScaling-Predefined-Metric)
+ [Editing a scaling policy](AutoScaling-Editing-Policy.md)
+ [Deleting a scaling policy](AutoScaling-Deleting-Policy.md)
+ [Use CloudFormation for Auto Scaling policies](AutoScaling-with-Cloudformation.md)
+ [Scheduled scaling](AutoScaling-with-Scheduled-Scaling-Replicas.md)

## Using a predefined metric
<a name="AutoScaling-Predefined-Metric"></a>


The following options are available for defining a target-tracking scaling policy configuration:

**Topics**
+ [Using a predefined metric](#AutoScaling-Predefined-Metric)
+ [Using a custom metric](#AutoScaling-Custom-Metric)
+ [Using cooldown periods](#AutoScaling-Using-Cooldowns)
+ [Disabling scale-in activity](#AutoScaling-Disabling-Scalein)
+ [Applying a scaling policy to an ElastiCache for Valkey and Redis OSS cluster](#AutoScaling-Applying-Policy)

### Using a predefined metric
<a name="AutoScaling-Predefined-Metric"></a>

By using predefined metrics, you can quickly define a target-tracking scaling policy for an ElastiCache for Valkey and Redis OSS cluster that works with target tracking in ElastiCache Auto Scaling. Currently, ElastiCache supports the following predefined metric in ElastiCache Replicas Auto Scaling: 

`ElastiCacheReplicaEngineCPUUtilization` – The average value of the `EngineCPUUtilization` metric in CloudWatch across all replicas in the cluster. You can find the aggregated metric value in CloudWatch under the ElastiCache dimensions `ReplicationGroupId, Role`, for the required replication group ID with the role set to `Replica`. 

To use a predefined metric in your scaling policy, you create a target tracking configuration for your scaling policy. This configuration must include a `PredefinedMetricSpecification` for the predefined metric and a `TargetValue` for the target value of that metric. 
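As a sketch, a target tracking configuration that uses this predefined metric might look like the following JSON (the 60 percent target value is illustrative; it must fall between 35 and 70):

```
{
    "TargetValue": 60.0,
    "PredefinedMetricSpecification":
    {
        "PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"
    }
}
```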

### Using a custom metric
<a name="AutoScaling-Custom-Metric"></a>

By using custom metrics, you can define a target-tracking scaling policy that meets your custom requirements. You can define a custom metric based on any ElastiCache for Valkey and Redis OSS metric that changes in proportion to scaling. Not all ElastiCache metrics work for target tracking. The metric must be a valid utilization metric and describe how busy an instance is. The value of the metric must increase or decrease in proportion to the number of replicas in the cluster. This proportional increase or decrease is necessary to use the metric data to proportionally increase or decrease the number of replicas. 

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, a custom metric adjusts a cluster based on an average CPU utilization of 50 percent across all replicas in a cluster named `my-db-cluster`.   

```
{"TargetValue": 50,
    "CustomizedMetricSpecification":
    {"MetricName": "EngineCPUUtilization",
        "Namespace": "AWS/ElastiCache",
        "Dimensions": [
            {"Name": "ReplicationGroup","Value": "my-db-cluster"},
            {"Name": "Role","Value": "REPLICA"}
        ],
        "Statistic": "Average",
        "Unit": "Percent"
    }
}
```
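A malformed custom-metric specification is rejected only when the policy is applied, so a quick local check can save a round trip. The required field names here follow the `CustomizedMetricSpecification` shape in the *Application Auto Scaling API Reference*; the helper itself is an illustrative sketch:

```python
def check_custom_metric(spec: dict) -> list:
    """Return a list of problems found in a CustomizedMetricSpecification dict."""
    problems = [f"missing required field: {k}"
                for k in ("MetricName", "Namespace", "Statistic") if k not in spec]
    # Each dimension, if present, must be a {"Name": ..., "Value": ...} pair.
    for dim in spec.get("Dimensions", []):
        if set(dim) != {"Name", "Value"}:
            problems.append(f"bad dimension: {dim}")
    return problems

spec = {
    "MetricName": "EngineCPUUtilization",
    "Namespace": "AWS/ElastiCache",
    "Dimensions": [
        {"Name": "ReplicationGroup", "Value": "my-db-cluster"},
        {"Name": "Role", "Value": "REPLICA"},
    ],
    "Statistic": "Average",
    "Unit": "Percent",
}
print(check_custom_metric(spec))  # []
```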

### Using cooldown periods
<a name="AutoScaling-Using-Cooldowns"></a>

You can specify a value, in seconds, for `ScaleOutCooldown` to add a cooldown period for scaling out your cluster. Similarly, you can add a value, in seconds, for `ScaleInCooldown` to add a cooldown period for scaling in your cluster. For more information about `ScaleInCooldown` and `ScaleOutCooldown`, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*.

The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCacheReplicaEngineCPUUtilization` predefined metric is used to adjust a cluster based on an average CPU utilization of 40 percent across all replicas in that cluster. The configuration provides a scale-in cooldown period of 10 minutes and a scale-out cooldown period of 5 minutes. 

```
{"TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {"PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"
    },
    "ScaleInCooldown": 600,
    "ScaleOutCooldown": 300
}
```
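Conceptually, a cooldown simply suppresses further scaling activity until the configured number of seconds has elapsed since the last activity. A rough sketch of that gating logic (illustrative only; the real bookkeeping happens inside Application Auto Scaling):

```python
def scaling_allowed(now: float, last_scale_at: float, cooldown_seconds: int) -> bool:
    """True once at least cooldown_seconds have passed since the last scaling activity."""
    return (now - last_scale_at) >= cooldown_seconds

# With ScaleOutCooldown=300, a scale-out at t=0 blocks another scale-out until t=300.
print(scaling_allowed(now=200, last_scale_at=0, cooldown_seconds=300))  # False
print(scaling_allowed(now=300, last_scale_at=0, cooldown_seconds=300))  # True
```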

### Disabling scale-in activity
<a name="AutoScaling-Disabling-Scalein"></a>

You can prevent the target-tracking scaling policy configuration from scaling in your ElastiCache for Valkey and Redis OSS cluster by disabling scale-in activity. Disabling scale-in activity prevents the scaling policy from deleting replicas, while still allowing the scaling policy to add them as needed. 

You can specify a Boolean value for `DisableScaleIn` to enable or disable scale-in activity for your cluster. For more information about `DisableScaleIn`, see [TargetTrackingScalingPolicyConfiguration](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_TargetTrackingScalingPolicyConfiguration.html) in the *Application Auto Scaling API Reference*. 

**Example**  
The following example describes a target-tracking configuration for a scaling policy. In this configuration, the `ElastiCacheReplicaEngineCPUUtilization` predefined metric adjusts a cluster based on an average CPU utilization of 40 percent across all replicas in that cluster. The configuration disables scale-in activity for the scaling policy. 

```
{"TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {"PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"
    },
    "DisableScaleIn": true
}
```

### Applying a scaling policy to an ElastiCache for Valkey and Redis OSS cluster
<a name="AutoScaling-Applying-Policy"></a>

After registering your cluster with ElastiCache for Valkey and Redis OSS auto scaling and defining a scaling policy, you apply the scaling policy to the registered cluster. To apply a scaling policy to an ElastiCache for Valkey and Redis OSS cluster, you can use the AWS CLI or the Application Auto Scaling API. 

**Using the AWS CLI**

To apply a scaling policy to your ElastiCache for Valkey and Redis OSS cluster, use the [put-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scaling-policy.html) command with the following parameters: 
+ --policy-name – The name of the scaling policy. 
+ --policy-type – Set this value to `TargetTrackingScaling`. 
+ --resource-id – The resource identifier for the cluster. For this parameter, the resource type is ReplicationGroup and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ --service-namespace – Set this value to `elasticache`. 
+ --scalable-dimension – Set this value to `elasticache:replication-group:Replicas`. 
+ --target-tracking-scaling-policy-configuration – The target-tracking scaling policy configuration to use for the cluster. 

**Example**  
In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to a cluster named `myscalablecluster` with ElastiCache auto scaling. To do so, you use a policy configuration saved in a file named `config.json`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling put-scaling-policy \
    --policy-name myscalablepolicy \
    --policy-type TargetTrackingScaling \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:Replicas \
    --target-tracking-scaling-policy-configuration file://config.json
```

In this example, the `config.json` file contains the following configuration:

```
{"TargetValue": 40.0,
    "PredefinedMetricSpecification":
    {"PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"
    },
    "DisableScaleIn": true
}
```

For Windows:

```
aws application-autoscaling put-scaling-policy ^
    --policy-name myscalablepolicy ^
    --policy-type TargetTrackingScaling ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:Replicas ^
    --target-tracking-scaling-policy-configuration file://config.json
```

**Using the API**

To apply a scaling policy to your ElastiCache cluster with the Application Auto Scaling API, use the [PutScalingPolicy](https://docs.aws.amazon.com/autoscaling/application/APIReference/API_PutScalingPolicy.html) Application Auto Scaling API operation with the following parameters: 
+ PolicyName – The name of the scaling policy. 
+ PolicyType – Set this value to `TargetTrackingScaling`. 
+ ResourceId – The resource identifier for the cluster. For this parameter, the resource type is ReplicationGroup and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ ServiceNamespace – Set this value to `elasticache`. 
+ ScalableDimension – Set this value to `elasticache:replication-group:Replicas`. 
+ TargetTrackingScalingPolicyConfiguration – The target-tracking scaling policy configuration to use for the cluster. 

**Example**  
In the following example, you apply a target-tracking scaling policy named `myscalablepolicy` to a cluster named `myscalablecluster` with ElastiCache auto scaling. You use a policy configuration based on the `ElastiCacheReplicaEngineCPUUtilization` predefined metric. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.PutScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:Replicas",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 40.0,
        "PredefinedMetricSpecification":
        {
            "PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"
        }
    }
}
```

# Editing a scaling policy
<a name="AutoScaling-Editing-Policy"></a>

You can edit a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API. 

**Editing a scaling policy using the AWS Management Console**

Using the AWS Management Console, you can edit only policies that use predefined metrics.

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**.

1. Choose the cluster that you want to add a policy to (choose the cluster name and not the button to its left). 

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the button to the left of the Auto Scaling policy you wish to change, and then choose **Modify**. 

1. Make the requisite changes to the policy.

1. Choose **Modify**.


**Editing a scaling policy using the AWS CLI or the Application Auto Scaling API**

You can use the AWS CLI or the Application Auto Scaling API to edit a scaling policy in the same way that you apply a scaling policy: 
+ Specify the name of the policy that you want to edit in the `--policy-name` (CLI) or `PolicyName` (API) parameter. Specify new values for the parameters that you want to change. 

For more information, see [Applying a scaling policy to an ElastiCache for Valkey and Redis OSS cluster](AutoScaling-Defining-Policy.md#AutoScaling-Applying-Policy).
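With the CLI, editing a target-tracking policy amounts to re-running `put-scaling-policy` with the same policy name and an updated configuration file. A minimal sketch of updating a saved configuration (the file name and target values are illustrative):

```python
import json

# Seed a saved configuration for this sketch (normally this file already exists).
with open("config.json", "w") as f:
    json.dump({"TargetValue": 40.0,
               "PredefinedMetricSpecification":
                   {"PredefinedMetricType": "ElastiCacheReplicaEngineCPUUtilization"}}, f)

# Load, change the target, and save it back.
with open("config.json") as f:
    config = json.load(f)

config["TargetValue"] = 50.0  # new target: 50% average EngineCPUUtilization

with open("config.json", "w") as f:
    json.dump(config, f, indent=4)

# Re-apply under the same policy name to edit the policy, for example:
#   aws application-autoscaling put-scaling-policy --policy-name myscalablepolicy ... \
#       --target-tracking-scaling-policy-configuration file://config.json
```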

# Deleting a scaling policy
<a name="AutoScaling-Deleting-Policy"></a>

You can delete a scaling policy using the AWS Management Console, the AWS CLI, or the Application Auto Scaling API.

**Deleting a scaling policy using the AWS Management Console**

Using the AWS Management Console, you can delete only policies that use predefined metrics.

1. Sign in to the AWS Management Console and open the Amazon ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**.

1. Choose the cluster whose auto scaling policy you want to delete.

1. Choose the **Auto Scaling policies** tab. 

1. Under **Scaling policies**, choose the auto scaling policy, and then choose **Delete**. 

**Deleting a scaling policy using the AWS CLI or the Application Auto Scaling API**

You can use the AWS CLI or the Application Auto Scaling API to delete a scaling policy from an ElastiCache cluster. 

**CLI**

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [delete-scaling-policy](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scaling-policy.html) command with the following parameters: 
+ --policy-name – The name of the scaling policy. 
+ --resource-id – The resource identifier for the cluster. For this parameter, the resource type is ReplicationGroup and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ --service-namespace – Set this value to `elasticache`. 
+ --scalable-dimension – Set this value to `elasticache:replication-group:Replicas`. 

**Example**  
In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from an ElastiCache cluster named `myscalablecluster`. 

For Linux, macOS, or Unix:

```
aws application-autoscaling delete-scaling-policy \
    --policy-name myscalablepolicy \
    --resource-id replication-group/myscalablecluster \
    --service-namespace elasticache \
    --scalable-dimension elasticache:replication-group:Replicas
```

For Windows:

```
aws application-autoscaling delete-scaling-policy ^
    --policy-name myscalablepolicy ^
    --resource-id replication-group/myscalablecluster ^
    --service-namespace elasticache ^
    --scalable-dimension elasticache:replication-group:Replicas
```

**API**

To delete a scaling policy from your ElastiCache for Valkey and Redis OSS cluster, use the [DeleteScalingPolicy](https://docs.aws.amazon.com/ApplicationAutoScaling/latest/APIReference/API_DeleteScalingPolicy.html) Application Auto Scaling API operation with the following parameters: 
+ PolicyName – The name of the scaling policy. 
+ ResourceId – The resource identifier for the cluster. For this parameter, the resource type is ReplicationGroup and the unique identifier is the name of the cluster, for example `replication-group/myscalablecluster`. 
+ ServiceNamespace – Set this value to `elasticache`. 
+ ScalableDimension – Set this value to `elasticache:replication-group:Replicas`. 

In the following example, you delete a target-tracking scaling policy named `myscalablepolicy` from a cluster named `myscalablecluster` with the Application Auto Scaling API. 

```
POST / HTTP/1.1
Host: autoscaling.us-east-2.amazonaws.com
Accept-Encoding: identity
Content-Length: 219
X-Amz-Target: AnyScaleFrontendService.DeleteScalingPolicy
X-Amz-Date: 20160506T182145Z
User-Agent: aws-cli/1.10.23 Python/2.7.11 Darwin/15.4.0 botocore/1.4.8
Content-Type: application/x-amz-json-1.1
Authorization: AUTHPARAMS
{
    "PolicyName": "myscalablepolicy",
    "ServiceNamespace": "elasticache",
    "ResourceId": "replication-group/myscalablecluster",
    "ScalableDimension": "elasticache:replication-group:Replicas"
}
```

# Use CloudFormation for Auto Scaling policies
<a name="AutoScaling-with-Cloudformation"></a>

This snippet shows how to create a target-tracking scaling policy and apply it to an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource using the [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) and [Ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html) intrinsic functions to construct the `ResourceId` property with the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
  ScalingTarget:
    Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
    Properties:
      MaxCapacity: 0
      MinCapacity: 0
      ResourceId: !Sub replication-group/${logicalName}
      ScalableDimension: 'elasticache:replication-group:Replicas'
      ServiceNamespace: elasticache
      RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"

  ScalingPolicy:
    Type: "AWS::ApplicationAutoScaling::ScalingPolicy"
    Properties:
      ScalingTargetId: !Ref ScalingTarget
      ServiceNamespace: elasticache
      PolicyName: testpolicy
      PolicyType: TargetTrackingScaling
      ScalableDimension: 'elasticache:replication-group:Replicas'
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ElastiCacheReplicaEngineCPUUtilization
        TargetValue: 40
```

# Scheduled scaling
<a name="AutoScaling-with-Scheduled-Scaling-Replicas"></a>

Scaling based on a schedule enables you to scale your application in response to predictable changes in demand. To use scheduled scaling, you create scheduled actions, which tell ElastiCache for Valkey and Redis OSS to perform scaling activities at specific times. When you create a scheduled action, you specify an existing ElastiCache cluster, when the scaling activity should occur, minimum capacity, and maximum capacity. You can create scheduled actions that scale one time only or that scale on a recurring schedule. 

 You can only create a scheduled action for ElastiCache clusters that already exist. You can't create a scheduled action at the same time that you create a cluster.

For more information about commonly used commands for creating, managing, and deleting scheduled actions, see [Commonly used commands for scheduled action creation, management, and deletion](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html#scheduled-scaling-commonly-used-commands) in the *Application Auto Scaling User Guide*. 

**To create a one-time scheduled action**

The procedure is similar to that for the shard dimension. See [Scheduled scaling](AutoScaling-with-Scheduled-Scaling-Shards.md).

**To delete a scheduled action**

The procedure is similar to that for the shard dimension. See [Scheduled scaling](AutoScaling-with-Scheduled-Scaling-Shards.md).

**To manage scheduled scaling using the AWS CLI **

Use the following application-autoscaling APIs:
+ [put-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/put-scheduled-action.html) 
+ [describe-scheduled-actions](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/describe-scheduled-actions.html) 
+ [delete-scheduled-action](https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scheduled-action.html) 
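The scheduled-action parameters can also be supplied as a single JSON document, since most AWS CLI commands accept `--cli-input-json`. A sketch (the cluster name, action name, capacities, and schedule below are hypothetical values):

```python
import json

# Illustrative input for put-scheduled-action; all values here are hypothetical.
action = {
    "ServiceNamespace": "elasticache",
    "ScalableDimension": "elasticache:replication-group:Replicas",
    "ResourceId": "replication-group/myscalablecluster",
    "ScheduledActionName": "scale-up-evenings",
    "Schedule": "cron(0 18 * * ? *)",  # every day at 18:00 UTC
    "ScalableTargetAction": {"MinCapacity": 2, "MaxCapacity": 5},
}

# Save the JSON block to a text file for use with the AWS CLI.
with open("scheduled-action.json", "w") as f:
    json.dump(action, f, indent=4)

# The saved file can then be passed to the AWS CLI, for example:
#   aws application-autoscaling put-scheduled-action --cli-input-json file://scheduled-action.json
```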

## Use CloudFormation to create scheduled actions
<a name="AutoScaling-with-Cloudformation-Update-Action"></a>

This snippet shows how to create a scheduled action and apply it to an [AWS::ElastiCache::ReplicationGroup](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-elasticache-replicationgroup.html) resource using the [AWS::ApplicationAutoScaling::ScalableTarget](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-applicationautoscaling-scalabletarget.html) resource. It uses the [Fn::Sub](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html) and [Ref](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-ref.html) intrinsic functions to construct the `ResourceId` property with the logical name of the `AWS::ElastiCache::ReplicationGroup` resource that is specified in the same template. 

```
  ScalingTarget:
    Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
    Properties:
      MaxCapacity: 0
      MinCapacity: 0
      ResourceId: !Sub replication-group/${logicalName}
      ScalableDimension: 'elasticache:replication-group:Replicas'
      ServiceNamespace: elasticache
      RoleARN: !Sub "arn:aws:iam::${AWS::AccountId}:role/aws-service-role/elasticache.application-autoscaling.amazonaws.com/AWSServiceRoleForApplicationAutoScaling_ElastiCacheRG"
      ScheduledActions:
        - EndTime: '2020-12-31T12:00:00.000Z'
          ScalableTargetAction:
            MaxCapacity: '5'
            MinCapacity: '2'
          ScheduledActionName: First
          Schedule: 'cron(0 18 * * ? *)'
```

# Modifying cluster mode
<a name="modify-cluster-mode"></a>

Valkey and Redis OSS are distributed in-memory databases that support sharding and replication. ElastiCache Valkey and Redis OSS clusters are distributed implementations that allow data to be partitioned across multiple nodes. An ElastiCache cluster has two modes of operation: cluster mode enabled (CME) and cluster mode disabled (CMD). In CME, the engine works as a distributed database with multiple shards and nodes, while in CMD, it works as a single node.

Before migrating from CMD to CME, the following conditions must be met:

**Important**  
Cluster mode configuration can only be changed from cluster mode disabled to cluster mode enabled. Reverting this configuration is not possible.
+ The cluster can have keys in database 0 only.
+ Applications must use a Valkey or Redis OSS client that is capable of using Cluster protocol and use a configuration endpoint.
+ Auto-failover must be enabled on the cluster, with at least one replica.
+ The minimum engine version required for migration is Valkey 7.2 or later, or Redis OSS 7.0 or later.

To migrate from CMD to CME, the cluster mode configuration must be changed from cluster mode disabled to cluster mode enabled. This is a two-step procedure that ensures cluster availability during the migration process.

**Note**  
You need to provide a parameter group with cluster-enabled configuration, that is, the cluster-enabled parameter is set as `yes`. If you are using a default parameter group, ElastiCache for Redis OSS will automatically pick the corresponding default parameter group with a cluster-enabled configuration. The cluster-enabled parameter value is set to `no` for a CMD cluster. As the cluster moves to the compatible mode, the cluster-enabled parameter value is updated to `yes` as part of the modification action.   
For more information, see [Configuring engine parameters using ElastiCache parameter groups](ParameterGroups.md)

1. **Prepare** – Create a test CME cluster and make sure your stack is ready to work with it. ElastiCache for Redis OSS has no way to verify your readiness. For more information, see [Creating a cluster for Valkey or Redis OSS](Clusters.Create.md).

1. **Modify existing CMD cluster configuration to cluster mode compatible** – In this mode, a single shard is deployed, and the cluster works as a single node but also as a single-shard cluster. Compatible mode means the client application can use either protocol to communicate with the cluster. In this mode, applications must be reconfigured to start using the Valkey or Redis OSS Cluster protocol and the configuration endpoint. To change the cluster mode to cluster mode compatible, follow the steps below:
**Note**  
In compatible mode, other modification operations such as scaling and engine version are not allowed for the cluster. Additionally, parameters (excluding `cacheParameterGroupName`) cannot be modified when defining cluster-mode parameter within the [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) request. 

   1. Using the AWS Management Console, see [Modifying a replication group](Replication.Modify.md) and set the cluster mode to **Compatible**

   1. Using the API, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) and update the `ClusterMode` parameter to `compatible`.

   1. Using the AWS CLI, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) and update the `cluster-mode` parameter to `compatible`.

   After changing the Valkey or Redis OSS cluster mode to cluster mode compatible, the [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) API will return the ElastiCache for Redis OSS cluster configuration endpoint. The cluster configuration endpoint is a single endpoint that can be used by applications to connect to the cluster. For more information, see [Finding connection endpoints in ElastiCache](Endpoints.md).

1. **Modify Cluster Configuration to cluster mode enabled** – Once the cluster mode is set to cluster mode compatible, the second step is to modify the cluster configuration to cluster mode enabled. In this mode, a single shard is running, and customers can now scale their clusters or modify other cluster configurations.

   To change the cluster mode to enabled, follow the steps below:

   Before you begin, make sure your Valkey or Redis OSS clients have migrated to using cluster protocol and that the cluster's configuration endpoint is not in use.

   1. Using the AWS Management Console, see [Modifying a replication group](Replication.Modify.md) and set the cluster mode to **Enabled**.

   1. Using the API, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) and update the `ClusterMode` parameter to `enabled`.

   1. Using the AWS CLI, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) and update the `cluster-mode` parameter to `enabled`.

   After changing the cluster mode to enabled, the endpoints will be configured as per the Valkey or Redis OSS cluster specification. The [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) API will return the cluster mode parameter as `enabled` and the cluster endpoints that are now available to be used by applications to connect to the cluster.

   Note that the cluster endpoints will change once the cluster mode is changed to enabled. Make sure to update your applications with the new endpoints.

You can also choose to revert to cluster mode disabled (CMD) from cluster mode compatible and preserve the original configuration.

**Modify Cluster Configuration to cluster mode disabled from cluster mode compatible**

1. Using the AWS Management Console, see [Modifying a replication group](Replication.Modify.md) and set the cluster mode to **Disabled**

1. Using the API, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) and update the `ClusterMode` parameter to `disabled`. 

1. Using the AWS CLI, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) and update the `cluster-mode` parameter to `disabled`.

After changing the cluster mode to disabled, the [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) API will return the cluster mode parameter as `disabled`.
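The allowed transitions described above (disabled to compatible to enabled, with compatible to disabled as the only way back) can be captured in a small lookup table, which can be handy when guarding automation scripts. This sketch only encodes the rules stated in this section:

```python
# Valid cluster-mode transitions per the migration procedure above:
# disabled -> compatible -> enabled, and compatible -> disabled to revert.
ALLOWED_TRANSITIONS = {
    "disabled": {"compatible"},
    "compatible": {"enabled", "disabled"},
    "enabled": set(),  # enabling cluster mode cannot be reverted
}

def can_transition(current: str, target: str) -> bool:
    """Return whether a cluster-mode change is permitted."""
    return target in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("disabled", "compatible"))  # True
print(can_transition("enabled", "disabled"))     # False
```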

# Replication across AWS Regions using global datastores
<a name="Redis-Global-Datastore"></a>

**Note**  
Global Datastore is currently available for node-based clusters only.

By using the Global Datastore feature, you can work with fully managed, fast, reliable, and secure Valkey or Redis OSS cluster replication across AWS Regions. Using this feature, you can create cross-Region read replica clusters to enable low-latency reads and disaster recovery across AWS Regions.

In the following sections, you can find a description of how to work with global datastores.

**Topics**
+ [Overview](#Redis-Global-Data-Stores-Overview)
+ [Prerequisites and limitations](Redis-Global-Datastores-Getting-Started.md)
+ [Using global datastores (console)](Redis-Global-Datastores-Console.md)
+ [Using global datastores (CLI)](Redis-Global-Datastores-CLI.md)

## Overview
<a name="Redis-Global-Data-Stores-Overview"></a>

Each *global datastore* is a collection of one or more clusters that replicate to one another. 

A global datastore consists of the following:
+ **Primary (active) cluster** – A primary cluster accepts writes that are replicated to all clusters within the global datastore. A primary cluster also accepts read requests. 
+ **Secondary (passive) cluster** – A secondary cluster only accepts read requests and replicates data updates from a primary cluster. A secondary cluster needs to be in a different AWS Region than the primary cluster. 

When you create a global datastore in ElastiCache for Valkey or Redis OSS, it automatically replicates your data from the primary cluster to the secondary cluster. You choose the AWS Region where the Valkey or Redis OSS data should be replicated and then create a secondary cluster in that AWS Region. ElastiCache then sets up and manages automatic, asynchronous replication of data between the two clusters. 

Using a global datastore for Valkey or Redis OSS provides the following advantages: 
+ **Geolocal performance** – By setting up remote replica clusters in additional AWS Regions and synchronizing your data between them, you can reduce latency of data access in that AWS Region. A global datastore can help increase the responsiveness of your application by serving low-latency, geolocal reads across AWS Regions. 
+ **Disaster recovery** – If your primary cluster in a global datastore experiences degradation, you can promote a secondary cluster as your new primary cluster. You can do so by connecting to any AWS Region that contains a secondary cluster.

The following diagram shows how global datastores can work.

![\[global datastore\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/Global-DataStore.png)


# Prerequisites and limitations
<a name="Redis-Global-Datastores-Getting-Started"></a>

Before getting started with global datastores, be aware of the following:
+ Global datastores are supported in the following AWS Regions:
  + **Africa** - Cape Town
  + **Asia Pacific** - Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Thailand, and Tokyo 
  + **Canada** - Canada Central and Canada West (Calgary)
  + **China** - Beijing and Ningxia
  + **Europe** - Frankfurt, London, Ireland, Milan, Paris, Spain, Stockholm, and Zurich
  + **AWS GovCloud** - US-West and US-East
  + **Israel** - Tel Aviv
  + **Middle East** - Bahrain and UAE
  + **US** - US East (N. Virginia and Ohio) and US West (N. California and Oregon)
  + **South America** - Mexico (Central) and São Paulo
+ All clusters—primary and secondary—in your global datastore should have the same number of primary nodes, node type, engine version, and number of shards (in the case of cluster mode enabled). Each cluster in your global datastore can have a different number of read replicas to accommodate the read traffic local to that cluster. 

  Replication must be enabled if you plan to use an existing single-node cluster.
+ Global datastores are supported on the following instance families in size large and above: M5, M6g, M7g, R5, R6g, R6gd, R7g, and C7gn. Previous generation instance types (such as M4 and R4) are not supported.
+ You can set up replication for a primary cluster from one AWS Region to a secondary cluster in up to two other AWS Regions. 
**Note**  
The exceptions are the China (Beijing) and China (Ningxia) Regions, where replication can occur only between those two Regions. 
+ You can work with global datastores only in VPC clusters. For more information, see [Access Patterns for Accessing an ElastiCache Cache in an Amazon VPC](elasticache-vpc-accessing.md). Global datastores aren't supported when you use EC2-Classic. For more information, see [EC2-Classic](https://docs.aws.amazon.com//AWSEC2/latest/UserGuide/ec2-classic-platform.html) in the *Amazon EC2 User Guide.*
**Note**  
At this time, you can't use global datastores in [Using local zones with ElastiCache](Local_zones.md).
+ ElastiCache doesn't support autofailover from one AWS Region to another. When needed, you can promote a secondary cluster manually. For an example, see [Promoting the secondary cluster to primary](Redis-Global-Datastores-Console.md#Redis-Global-Datastores-Console-Promote-Secondary). 
+ To bootstrap from existing data, use an existing cluster as primary to create a global datastore. We don't support adding an existing cluster as secondary. The process of adding the cluster as secondary wipes data, which may result in data loss. 
+ Parameter updates are applied to all clusters when you modify a local parameter group of a cluster belonging to a global datastore. 
+ You can scale regional clusters both vertically (scaling up and down) and horizontally (scaling in and out). You can scale the clusters by modifying the global datastore. All the regional clusters in the global datastore are then scaled without interruption. For more information, see [Scaling ElastiCache](Scaling.md).
+ Global datastores support [encryption at rest](at-rest-encryption.md), [encryption in transit](in-transit-encryption.md), and [AUTH](auth.md). 
+ Global datastores don't support Internet Protocol version 6 (IPv6).
+  Global datastores support AWS KMS keys. For more information, see [AWS key management service concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys) in the *AWS Key Management Service Developer Guide.* 

**Note**  
Global datastores support [pub/sub messaging](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/elasticache-use-cases.html#elasticache-for-redis-use-cases-messaging) with the following stipulations:  
For cluster-mode disabled, pub/sub is fully supported. Events published on the primary cluster of the primary AWS Region are propagated to secondary AWS Regions.
For cluster mode enabled, the following applies:  
For published events that aren't in a keyspace, only subscribers in the same AWS Region receive the events.
For published keyspace events, subscribers in all AWS Regions receive the events.
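
To observe the keyspace-event behavior described above, you can enable keyspace notifications on the primary cluster and subscribe from a client connected to a secondary Region. The following is a minimal sketch, assuming `valkey-cli` is installed; the endpoint names are hypothetical placeholders for your own cluster endpoints:

```shell
# Hypothetical endpoints; substitute your own cluster configuration endpoints.
# Enable keyspace notifications on the primary ("KEA" = keyspace events,
# keyevent events, all event classes):
valkey-cli -h primary.example.use1.cache.amazonaws.com -p 6379 \
  CONFIG SET notify-keyspace-events KEA

# From a client in a secondary Region, subscribe to SET key events in
# database 0; keyspace events are received by subscribers in all Regions:
valkey-cli -h secondary.example.euw1.cache.amazonaws.com -p 6379 \
  SUBSCRIBE "__keyevent@0__:set"
```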

# Using global datastores (console)
<a name="Redis-Global-Datastores-Console"></a>

To create a global datastore using the console, follow this two-step process:

1. Create a primary cluster, either by using an existing cluster or creating a new cluster. The engine must be Valkey 7.2 or later, or Redis OSS 5.0.6 or later.

1. Add up to two secondary clusters in different AWS Regions, again using Valkey 7.2 or later, or Redis OSS 5.0.6 or later.

The following procedures guide you on how to create a global datastore for Valkey or Redis OSS and perform other operations using the ElastiCache console.

**Topics**
+ [Creating a global datastore using an existing cluster](#Redis-Global-Datastores-Console-Create-Primary)
+ [Creating a new global datastore using a new primary cluster](#Redis-Global-Datastores-Create-From-Scratch)
+ [Viewing global datastore details](#Redis-Global-Datastores-Console-Details)
+ [Adding a Region to a global datastore](#Redis-Global-Datastores-Console-Create-Secondary)
+ [Modifying a global datastore](#Redis-Global-Datastores-Console-Modify-Regional-Clusters)
+ [Promoting the secondary cluster to primary](#Redis-Global-Datastores-Console-Promote-Secondary)
+ [Removing a Region from a global datastore](#Redis-Global-Datastore-Console-Remove-Region)
+ [Deleting a global datastore](#Redis-Global-Datastores-Console-Delete-GlobalDatastore)

## Creating a global datastore using an existing cluster
<a name="Redis-Global-Datastores-Console-Create-Primary"></a>

In this scenario, you use an existing cluster to serve as the primary of the new global datastore. You then create a secondary, read-only cluster in a separate AWS Region. This secondary cluster receives automatic and asynchronous updates from the primary cluster. 

**Important**  
The existing cluster must use an engine that is Valkey 7.2 or later or Redis OSS 5.0.6 or later.

**To create a global datastore using an existing cluster**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores** and then choose **Create global datastore**.

1. On the **Primary cluster settings** page, do the following:
   + In the **Global Datastore info** field, enter a name for the new global datastore. 
   + (Optional) Enter a **Description** value. 

1. Under **Regional cluster**, select **Use existing regional cluster**.

1. Under **Existing cluster**, select the existing cluster you want to use.

1. Keep the following options as they are. They're prepopulated to match the primary cluster configuration, and you can't change them.
   + Engine version
   + Node type
   + Parameter group
**Note**  
ElastiCache autogenerates a new parameter group from values of the provided parameter group and applies the new parameter group to the cluster. Use this new parameter group to modify parameters on a global datastore. Each autogenerated parameter group is associated with one and only one cluster and, therefore, only one global datastore.
   + Number of shards
   + Encryption at rest – Enables encryption of data stored on disk. For more information, see [Encryption at rest](at-rest-encryption.md).
**Note**  
You can supply a different encryption key by choosing **Customer Managed AWS KMS key** and choosing the key. For more information, see [Using Customer Managed AWS KMS keys](at-rest-encryption.md#using-customer-managed-keys-for-elasticache-security).
   + Encryption in-transit – Enables encryption of data on the wire. For more information, see [Encryption in transit](in-transit-encryption.md). For Valkey 7.2 and onwards and Redis OSS 6.0 onwards, if you enable encryption in-transit you are prompted to specify one of the following **Access Control** options:
     + **No Access Control** – This is the default setting. This indicates no restrictions.
     + **User Group Access Control List** – Choose a user group with a defined set of users and permissions on available operations. For more information, see [Managing User Groups with the Console and CLI](Clusters.RBAC.md#User-Groups).
     + **AUTH Default User** – An authentication mechanism for a Valkey or Redis OSS server. For more information, see [AUTH](auth.md).

1. (Optional) As needed, update the remaining secondary cluster settings. These are prepopulated with the same values as the primary cluster, but you can update them to meet specific requirements for that cluster.
   + Port
   + Number of replicas
   + Subnet group
   + Preferred Availability Zone(s)
   + Security groups
   + Customer Managed (AWS KMS key)
   + AUTH Token
   + Enable automatic backups
   + Backup retention period
   + Backup window
   + Maintenance window
   + Topic for SNS notification

1. Choose **Create**. Doing this sets the status of the global datastore to **Creating**. The status transitions to **Modifying** after the primary cluster is associated to the global datastore and the secondary cluster is in **Associating** status.

   After the primary cluster and secondary clusters are associated with the global datastore, the status changes to **Available**. At this point, you have a primary cluster that accepts reads and writes and secondary clusters that accept reads replicated from the primary cluster.

   The page is updated to indicate whether a cluster is part of a global datastore, including:
   + **Global Datastore** – The name of the global datastore to which the cluster belongs.
   + **Global Datastore Role** – The role of the cluster, either primary or secondary.

You can add up to one additional secondary cluster in a different AWS Region. For more information, see [Adding a Region to a global datastore](#Redis-Global-Datastores-Console-Create-Secondary).
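
If you prefer the AWS CLI, the same operation is available through the `create-global-replication-group` command. The following is a minimal sketch; the name suffix, cluster name, and Region are hypothetical placeholders:

```shell
# Create a global datastore using an existing replication group as primary.
# ElastiCache prepends an auto-generated prefix to the suffix you supply.
aws elasticache create-global-replication-group \
  --global-replication-group-id-suffix my-gds-suffix \
  --global-replication-group-description "My global datastore" \
  --primary-replication-group-id my-existing-cluster \
  --region us-east-1
```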

## Creating a new global datastore using a new primary cluster
<a name="Redis-Global-Datastores-Create-From-Scratch"></a>

If you choose to create a global datastore with a new cluster, use the following procedure. 

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores** and then choose **Create global datastore**.

1. Under **Primary cluster settings**, do the following:

   1. For **Cluster mode**, choose **Enabled** or **Disabled**.

   1. For **Global Datastore info**, enter a value for **Name**. ElastiCache uses this name as a suffix to generate a unique name for the global datastore. You can search for the global datastore by using the suffix that you specify here.

   1. (Optional) Enter a value for **Global Datastore Description**.

1. Under **Regional cluster**:

   1. For **Region**, choose an available AWS Region.

   1. Choose **Create new regional cluster** or **Use existing regional cluster**.

   1. If you choose **Create new regional cluster**, under **Cluster info**, enter a name and optional description of the cluster.

   1. Under **Location**, we recommend you accept the default settings for **Multi-AZ** and **Auto-failover**.

1. Under **Cluster settings**, do the following:

   1. For **Engine version**, choose an available version: Valkey 7.2 or later, or Redis OSS 5.0.6 or later.

   1. For **Port**, use the default port, 6379. If you have a reason to use a different port, enter the port number.

   1. For **Parameter group**, choose a parameter group or create a new one. Parameter groups control the runtime parameters of your cluster. For more information on parameter groups, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis) and [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).
**Note**  
When you select a parameter group to set the engine configuration values, that parameter group is applied to all clusters in the global datastore. On the **Parameter Groups** page, the yes/no **Global** attribute indicates whether a parameter group is part of a global datastore.

   1. For **Node type**, choose the down arrow (![\[Downward-pointing triangle icon, typically used to indicate a dropdown menu.\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-DnArrow.png)). In the **Change node type** dialog box, choose a value for **Instance family** for the node type that you want. Then choose the node type that you want to use for this cluster, and then choose **Save**.

      For more information, see [Choosing your node size](CacheNodes.SelectSize.md).

      If you choose an r6gd node type, data-tiering is automatically enabled. For more information, see [Data tiering in ElastiCache](data-tiering.md).

   1. If you are creating a Valkey or Redis OSS (cluster mode disabled) cluster:

      For **Number of replicas**, choose the number of replicas that you want for this cluster.

   1. If you are creating a Valkey or Redis OSS (cluster mode enabled) cluster:

      1. For **Number of shards**, choose the number of shards (partitions/node groups) that you want for this Valkey or Redis OSS (cluster mode enabled) cluster.

         For some versions of Valkey or Redis OSS (cluster mode enabled), you can change the number of shards in your cluster dynamically:
         + **Redis OSS 3.2.10 and later** – If your cluster is running Redis OSS 3.2.10 or later versions, you can change the number of shards in your cluster dynamically. For more information, see [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md).
         + **Other Redis OSS versions** – If your cluster is running a version of Redis OSS before version 3.2.10, there's another approach. To change the number of shards in your cluster in this case, create a new cluster with the new number of shards. For more information, see [Restoring from a backup into a new cache](backups-restoring.md).

      1. For **Replicas per shard**, choose the number of read replica nodes that you want in each shard.

         The following restrictions exist for Valkey or Redis OSS (cluster mode enabled).
         + If you have Multi-AZ enabled, make sure that you have at least one replica per shard.
         + The number of replicas is the same for each shard when creating the cluster using the console.
         + The number of read replicas per shard is fixed and cannot be changed. If you find you need more or fewer replicas per shard (API/CLI: node group), you must create a new cluster with the new number of replicas. For more information, see [Tutorial: Seeding a new node-based cluster with an externally created backup](backups-seeding-redis.md).

1. For **Subnet group settings**, choose the subnet that you want to apply to this cluster. ElastiCache provides a default IPv4 subnet group or you can choose to create a new one. For IPv6, you need to create a subnet group with an IPv6 CIDR block. If you choose **dual stack**, you then must select a Discovery IP type, either IPv6 or IPv4.

   For more information, see [Create a subnet in your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-vpcs.html#AddaSubnet).

1. For **Availability zone placements**, you have two options:
   + **No preference** – ElastiCache chooses the Availability Zone.
   + **Specify availability zones** – You specify the Availability Zone for each cluster.

     If you chose to specify the Availability Zones, for each cluster in each shard, choose the Availability Zone from the list.

   For more information, see [Choosing regions and availability zones for ElastiCache](RegionsAndAZs.md).  
![\[Image: Specifying Keyspaces and Availability Zones\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-ClusterOn-Slots-AZs.png)

   *Specifying Keyspaces and Availability Zones*

1. Choose **Next**.

1. Under **Advanced Valkey and Redis OSS settings**:

   1. For **Security**: 

     1. To encrypt your data, you have the following options:
        + **Encryption at rest** – Enables encryption of data stored on disk. For more information, see [Encryption at Rest](at-rest-encryption.md).
**Note**  
You have the option to supply a different encryption key by choosing **Customer Managed AWS KMS key** and choosing the key. For more information, see [Using customer managed keys from AWS KMS](at-rest-encryption.md#using-customer-managed-keys-for-elasticache-security).
        + **Encryption in-transit** – Enables encryption of data on the wire. For more information, see [encryption in transit](in-transit-encryption.md). For Valkey 7.2 and above and Redis OSS 6.0 and above, if you enable Encryption in-transit you will be prompted to specify one of the following **Access Control** options:
          + **No Access Control** – This is the default setting. This indicates no restrictions on user access to the cluster.
          + **User Group Access Control List** – Select a user group with a defined set of users that can access the cluster. For more information, see [Managing User Groups with the Console and CLI](Clusters.RBAC.md#User-Groups).
          + **AUTH Default User** – An authentication mechanism for a Valkey or Redis OSS server. For more information, see [AUTH](auth.md).
        + **AUTH** – An authentication mechanism for a Valkey or Redis OSS server. For more information, see [AUTH](auth.md).
**Note**  
For Redis OSS versions from 3.2.6 onward, excluding version 3.2.10, AUTH is the sole option.

     1. For **Security groups**, choose the security groups that you want for this cluster. A *security group* acts as a firewall to control network access to your cluster. You can use the default security group for your VPC or create a new one.

        For more information on security groups, see [Security groups for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) in the *Amazon VPC User Guide.*

1. For regularly scheduled automatic backups, select **Enable automatic backups** and then enter the number of days that you want each automatic backup retained before it is automatically deleted. If you don't want regularly scheduled automatic backups, clear the **Enable automatic backups** check box. In either case, you always have the option to create manual backups.

   For more information on backup and restore, see [Snapshot and restore](backups.md).

1. (Optional) Specify a maintenance window. The *maintenance window* is the time, generally an hour in length, each week when ElastiCache schedules system maintenance for your cluster. You can allow ElastiCache to choose the day and time for your maintenance window (*No preference*), or you can choose the day, time, and duration yourself (*Specify maintenance window*). If you choose *Specify maintenance window* from the lists, choose the *Start day*, *Start time*, and *Duration* (in hours) for your maintenance window. All times are UTC.

   For more information, see [Managing ElastiCache cluster maintenance](maintenance-window.md).

1. (Optional) For **Logs**:
   + Under **Log format**, choose either **Text** or **JSON**.
   + Under **Destination Type**, choose either **CloudWatch Logs** or **Kinesis Firehose**.
   + Under **Log destination**, choose **Create new** and enter your CloudWatch Logs log group name or Firehose stream name, or choose **Select existing** and then choose your existing CloudWatch Logs log group or Firehose stream.

1. For **Tags**, to help you manage your clusters and other ElastiCache resources, you can assign your own metadata to each resource in the form of tags. For more information, see [Tagging your ElastiCache resources](Tagging-Resources.md).

1. Review all your entries and choices, then make any needed corrections. When you're ready, choose **Next**.

1. After you have configured the primary cluster in the previous steps, configure your secondary cluster details.

1. Under **Regional cluster**, choose the AWS Region where the cluster is located.

1. Under **Cluster info**, enter a name and optional description of the cluster.

1. The following options are prepopulated to match the primary cluster configuration and cannot be changed:
   + Location
   + Engine version
   + Instance type
   + Node type
   + Number of shards
   + Parameter group
**Note**  
ElastiCache autogenerates a new parameter group from values of the provided parameter group and applies the new parameter group to the cluster. Use this new parameter group to modify parameters on a global datastore. Each autogenerated parameter group is associated with one and only one cluster and, therefore, only one global datastore.
   + Encryption at rest – Enables encryption of data stored on disk. For more information, see [Encryption at rest](at-rest-encryption.md).
**Note**  
You can supply a different encryption key by choosing **Customer Managed AWS KMS key** and choosing the key. For more information, see [Using Customer Managed AWS KMS keys](at-rest-encryption.md#using-customer-managed-keys-for-elasticache-security).
   + Encryption in-transit – Enables encryption of data on the wire. For more information, see [Encryption in transit](in-transit-encryption.md). For Valkey 7.2 and above and Redis OSS 6.0 and above, if you enable encryption in-transit you are prompted to specify one of the following **Access Control** options:
     + **No Access Control** – This is the default setting. This indicates no restrictions on user access to the cluster.
     + **User Group Access Control List** – Choose a user group with a defined set of users that can access the cluster. For more information, see [Managing User Groups with the Console and CLI](Clusters.RBAC.md#User-Groups).
     + **AUTH Default User** – An authentication mechanism for a Valkey or Redis OSS server. For more information, see [AUTH](auth.md).
**Note**  
For Redis OSS versions from 4.0.2 (when encryption in transit was first supported) through 6.0.4, AUTH is the sole option.

   The remaining secondary cluster settings are pre-populated with the same values as the primary cluster, but the following can be updated to meet specific requirements for that cluster:
   + Port
   + Number of replicas
   + Subnet group
   + Preferred Availability Zone(s) 
   + Security groups
   + Customer Managed (AWS KMS key) 
   + AUTH Token
   + Enable automatic backups
   + Backup retention period
   + Backup window
   + Maintenance window
   + Topic for SNS notification

1. Choose **Create**. This sets the status of the global datastore to **Creating**. After the primary cluster and secondary clusters are associated with the global datastore, the status changes to **Available**. You have a primary cluster that accepts reads and writes and a secondary cluster that accepts reads replicated from the primary cluster.

   The page is also updated to indicate whether a cluster is part of a global datastore, including the following:
   + **Global Datastore** – The name of the global datastore to which the cluster belongs.
   + **Global Datastore Role** – The role of the cluster, either primary or secondary.

You can add up to one additional secondary cluster in a different AWS Region. For more information, see [Adding a Region to a global datastore](#Redis-Global-Datastores-Console-Create-Secondary).
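
The equivalent AWS CLI flow first creates the new primary replication group and then wraps it in a global datastore. The following is a minimal sketch; the names, node type, and Regions are hypothetical placeholders:

```shell
# Step 1: create a new replication group to serve as primary.
aws elasticache create-replication-group \
  --replication-group-id my-primary-cluster \
  --replication-group-description "Primary for global datastore" \
  --engine valkey \
  --engine-version 7.2 \
  --cache-node-type cache.r6g.large \
  --num-node-groups 2 \
  --replicas-per-node-group 1 \
  --region us-east-1

# Step 2: once the replication group is available, create the global datastore.
aws elasticache create-global-replication-group \
  --global-replication-group-id-suffix my-gds-suffix \
  --primary-replication-group-id my-primary-cluster \
  --region us-east-1
```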

## Viewing global datastore details
<a name="Redis-Global-Datastores-Console-Details"></a>

You can view the details of existing global datastores and also modify them on the **Global Datastores** page.

**To view global datastore details**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores** and then choose an available global datastore.

You can then examine the following global datastore properties:
+ **Global Datastore Name:** The name of the global datastore
+ **Description:** A description of the global datastore
+ **Status:** Options include:
  + Creating
  + Modifying
  + Available
  + Deleting
  + Primary-Only - This status indicates that the global datastore contains only a primary cluster. Either all secondary clusters have been deleted, or none were successfully created.
+ **Cluster Mode:** Either enabled or disabled
+ **Engine Version:** The Valkey or Redis OSS engine version running the global datastore
+ **Instance Node Type:** The node type used for the global datastore
+ **Encryption at-rest:** Either enabled or disabled
+ **Encryption in-transit:** Either enabled or disabled
+ **AUTH:** Either enabled or disabled

You can make the following changes to the global datastore:
+ [Adding a Region to a global datastore](#Redis-Global-Datastores-Console-Create-Secondary) 
+ [Removing a Region from a global datastore](#Redis-Global-Datastore-Console-Remove-Region) 
+ [Promoting the secondary cluster to primary](#Redis-Global-Datastores-Console-Promote-Secondary)
+ [Modifying a global datastore](#Redis-Global-Datastores-Console-Modify-Regional-Clusters)

The Global Datastore page also lists the individual clusters that make up the global datastore and the following properties for each:
+ **Region** - The AWS Region where the cluster is stored
+ **Role** - Either primary or secondary
+ **Cluster name** - The name of the cluster
+ **Status** - Options include:
  + **Associating** - The cluster is in the process of being associated to the global datastore
  + **Associated** - The cluster is associated to the global datastore
  + **Disassociating** - The process of removing a secondary cluster from the global datastore using the global datastore name. After this, the secondary cluster no longer receives updates from the primary cluster but it remains as a standalone cluster in that AWS Region.
  + **Disassociated** - The secondary cluster has been removed from the global datastore and is now a standalone cluster in its AWS Region.
+ **Global Datastore Replica lag** – Shows one value per secondary AWS Region in the global datastore. This is the lag between the secondary Region's primary node and the primary Region's primary node. For cluster mode enabled Valkey or Redis OSS, the lag indicates the maximum delay in seconds among the shards. 
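
You can retrieve the same details from the AWS CLI with `describe-global-replication-groups`; the `--show-member-info` flag includes the per-Region member clusters and their roles. The datastore ID below is a hypothetical placeholder (real IDs carry an auto-generated prefix):

```shell
# Show a global datastore and its member clusters, including each
# member's AWS Region, role (primary or secondary), and status.
aws elasticache describe-global-replication-groups \
  --global-replication-group-id sgaui4-my-gds-suffix \
  --show-member-info \
  --region us-east-1
```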

## Adding a Region to a global datastore
<a name="Redis-Global-Datastores-Console-Create-Secondary"></a>

You can add up to one additional AWS Region to an existing global datastore. In this scenario, you are creating a read-only cluster in a separate AWS Region that receives automatic and asynchronous updates from the primary cluster.

**To add an AWS Region to a global datastore**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores**, and then select an existing global datastore.

1. Choose **Add Regional cluster**, and choose the AWS Region where the secondary cluster is to reside.

1. Under **Cluster info**, enter a value for **Name** and, optionally, for **Description** for the cluster.

1. Keep the following options as they are. They're prepopulated to match the primary cluster configuration, and you can't change them.
   + Engine version
   + Instance type
   + Node type
   + Number of shards
   + Parameter group
**Note**  
ElastiCache autogenerates a new parameter group from values of the provided parameter group and applies the new parameter group to the cluster. Use this new parameter group to modify parameters on a global datastore. Each autogenerated parameter group is associated with one and only one cluster and, therefore, only one global datastore.
   + Encryption at rest
**Note**  
You can supply a different encryption key by choosing **Customer Managed AWS KMS key** and choosing the key.
   + Encryption in transit
   + AUTH

1. (Optional) Update the remaining secondary cluster settings. These are prepopulated with the same values as the primary cluster, but you can update them to meet specific requirements for that cluster:
   + Port
   + Number of replicas
   + Subnet group
   + Preferred Availability Zone(s)
   + Security groups
   + Customer Managed AWS KMS key
   + AUTH Token
   + Enable automatic backups
   + Backup retention period
   + Backup window
   + Maintenance window
   + Topic for SNS notification

1. Choose **Add**.
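
From the AWS CLI, you add a secondary Region by creating a replication group in that Region and passing the global datastore ID. A minimal sketch, with hypothetical names:

```shell
# Create a secondary cluster in another Region by referencing the global
# datastore; engine version, node type, and shard count are inherited
# from the primary cluster.
aws elasticache create-replication-group \
  --replication-group-id my-secondary-cluster \
  --replication-group-description "Secondary in eu-west-1" \
  --global-replication-group-id sgaui4-my-gds-suffix \
  --region eu-west-1
```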

## Modifying a global datastore
<a name="Redis-Global-Datastores-Console-Modify-Regional-Clusters"></a>

You can modify properties of regional clusters. Only one modify operation can be in progress on a global datastore, with the exception of promoting a secondary cluster to primary. For more information, see [Promoting the secondary cluster to primary](#Redis-Global-Datastores-Console-Promote-Secondary).

**To modify a global datastore**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores**, and then for **Global Datastore Name**, choose a global datastore.

1. Choose **Modify** and choose among the following options:
   + **Modify description** – Update the description of the global datastore
   + **Modify engine version** – Only Valkey 7.2 and later or Redis OSS 5.0.6 and later are available.
   + **Modify node type** – Scale regional clusters both vertically (scaling up and down) and horizontally (scaling in and out). Options include the R5 and M5 node families. For more information on node types, see [Supported node types](CacheNodes.SupportedTypes.md).
   + **Modify Automatic Failover** – Enable or disable Automatic Failover. When you enable failover and primary nodes in regional clusters shut down unexpectedly, ElastiCache fails over to one of the regional replicas. For more information, see [Auto failover](AutoFailover.md).

   For Valkey or Redis OSS clusters with cluster-mode enabled:
   + **Add shards** – Enter the number of shards to add and optionally specify one or more Availability Zones.
   + **Delete shards** – Choose shards to be deleted in each AWS Region.
   + **Rebalance shards** – Rebalance the slot distribution to ensure uniform distribution across existing shards in the cluster. 

To modify a global datastore's parameters, modify the parameter group of any member cluster; ElastiCache automatically applies the change to all clusters within that global datastore. To modify the parameter group of a cluster, use the console or the [ModifyCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html) API operation. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

To reset an entire parameter group or specific parameters, use the [ResetCacheParameterGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ResetCacheParameterGroup.html) API operation.
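
From the AWS CLI, the corresponding operations are `modify-global-replication-group` (for vertical scaling, engine version, and failover settings) and, for cluster mode enabled, `increase-node-groups-in-global-replication-group`. A minimal sketch with hypothetical IDs:

```shell
# Scale all regional clusters vertically by changing the node type,
# applying the change immediately.
aws elasticache modify-global-replication-group \
  --global-replication-group-id sgaui4-my-gds-suffix \
  --cache-node-type cache.r6g.xlarge \
  --apply-immediately \
  --region us-east-1

# Scale horizontally (cluster mode enabled) by raising the shard count
# across all Regions in the global datastore.
aws elasticache increase-node-groups-in-global-replication-group \
  --global-replication-group-id sgaui4-my-gds-suffix \
  --node-group-count 3 \
  --apply-immediately \
  --region us-east-1
```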

## Promoting the secondary cluster to primary
<a name="Redis-Global-Datastores-Console-Promote-Secondary"></a>

If the primary cluster or AWS Region becomes unavailable or is experiencing performance issues, you can promote a secondary cluster to primary. Promotion is allowed anytime, even if other modifications are in progress. You can also issue multiple promotions in parallel and the global datastore resolves to one primary eventually. If you promote multiple secondary clusters simultaneously, ElastiCache doesn't guarantee which one ultimately resolves to primary.

**To promote a secondary cluster to primary**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores**.

1. Choose the global datastore name to view the details.

1. Choose the **Secondary** cluster.

1. Choose **Promote to primary**.

   You're then prompted to confirm your decision with the following warning: `Promoting a region to primary will make the cluster in this region as read/writable. Are you sure you want to promote the secondary cluster to primary?`

   `The current primary cluster in primary region will become secondary and will stop accepting writes after this operation completes. Please ensure you update your application stack to direct traffic to the new primary region.`

1. Choose **Confirm** if you want to continue the promotion or **Cancel** if you don't.

If you choose to confirm, your global datastore moves to a **Modifying** state and is unavailable until the promotion is complete.
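
The CLI equivalent of promotion is `failover-global-replication-group`, which names the Region and replication group that should become primary. A minimal sketch with hypothetical IDs:

```shell
# Promote the replication group in eu-west-1 to primary. The former
# primary becomes a secondary and stops accepting writes.
aws elasticache failover-global-replication-group \
  --global-replication-group-id sgaui4-my-gds-suffix \
  --primary-region eu-west-1 \
  --primary-replication-group-id my-secondary-cluster \
  --region us-east-1
```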

## Removing a Region from a global datastore
<a name="Redis-Global-Datastore-Console-Remove-Region"></a>

You can remove an AWS Region from a global datastore by using the following procedure.

**To remove an AWS Region from a global datastore**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores**.

1. Choose a global datastore.

1. Choose the **Region** you want to remove.

1. Choose **Remove region**.
**Note**  
This option is only available for secondary clusters. 

   You're then prompted to confirm your decision with the following warning: `Removing the region will remove your only available cross region replica for the primary cluster. Your primary cluster will no longer be set up for disaster recovery and improved read latency in remote region. Are you sure you want to remove the selected region from the global datastore?`

1. Choose **Confirm** if you want to continue the removal or **Cancel** if you don't.

If you choose **Confirm**, the AWS Region is removed and the secondary cluster no longer receives replication updates.

## Deleting a global datastore
<a name="Redis-Global-Datastores-Console-Delete-GlobalDatastore"></a>

To delete a global datastore, first remove all secondary clusters. For more information, see [Removing a Region from a global datastore](#Redis-Global-Datastore-Console-Remove-Region). Doing this leaves the global datastore in **primary-only** status. 

**To delete a global datastore**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. On the navigation pane, choose **Global Datastores**.

1. Under **Global Datastore Name** choose the global datastore you want to delete and then choose **Delete**.

   You're then prompted to confirm your decision with the following warning: `Are you sure you want to delete this Global Datastore?`

1. Choose **Delete**.

The global datastore transitions to **Deleting** status.

# Using global datastores (CLI)
<a name="Redis-Global-Datastores-CLI"></a>

You can use the AWS Command Line Interface (AWS CLI) to control multiple AWS services from the command line and automate them through scripts. You can use the AWS CLI for ad hoc (one-time) operations. 

## Downloading and configuring the AWS CLI
<a name="Redis-Global-Datastores-Downloading-CLI"></a>

The AWS CLI runs on Windows, macOS, or Linux. Use the following procedure to download and configure it.

**To download, install, and configure the CLI**

1. Download the AWS CLI on the [AWS command line interface](http://aws.amazon.com/cli) webpage.

1. Follow the instructions for Installing the AWS CLI and Configuring the AWS CLI in the *AWS Command Line Interface User Guide*.

## Using the AWS CLI with global datastores
<a name="Redis-Global-Datastores-Using-CLI"></a>

Use the following CLI operations to work with global datastores: 
+ [create-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-global-replication-group.html)

  ```
  aws elasticache create-global-replication-group \
     --global-replication-group-id-suffix my-global-datastore \
     --primary-replication-group-id sample-repl-group \
     --global-replication-group-description "An optional description of the global datastore"
  ```

  Amazon ElastiCache automatically applies a prefix to the global datastore ID when it is created. Each AWS Region has its own prefix. For instance, a global datastore ID created in the US West (N. California) Region begins with "virxk" along with the suffix name that you provide. The suffix, combined with the autogenerated prefix, guarantees uniqueness of the global datastore name across multiple Regions. 

  The following table lists each AWS Region and its global datastore ID prefix.

    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Redis-Global-Datastores-CLI.html)
+  [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html) – Use this operation to create secondary clusters for a global datastore by supplying the name of the global datastore to the `--global-replication-group-id` parameter.

  ```
  aws elasticache create-replication-group \
    --replication-group-id my-secondary-repl-group \
    --replication-group-description "Replication group description" \
    --global-replication-group-id my-global-datastore
  ```

  When you call this operation and pass a `--global-replication-group-id` value, ElastiCache infers the values for the following parameters from the primary replication group of the global replication group. Do not pass values for these parameters:

  + `PrimaryClusterId`
  + `AutomaticFailoverEnabled`
  + `NumNodeGroups`
  + `CacheParameterGroupName`
  + `CacheNodeType`
  + `Engine`
  + `EngineVersion`
  + `CacheSecurityGroupNames`
  + `EnableTransitEncryption`
  + `AtRestEncryptionEnabled`
  + `SnapshotArns`
  + `SnapshotName`
+ [describe-global-replication-groups](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-global-replication-groups.html)

  ```
  aws elasticache describe-global-replication-groups \
     --global-replication-group-id my-global-datastore \
     --show-member-info
  ```

  The optional `--show-member-info` flag returns a list of the primary and secondary clusters that make up the global datastore.
+ [modify-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-global-replication-group.html)

  ```
  aws elasticache modify-global-replication-group \
     --global-replication-group-id my-global-datastore \
     --automatic-failover-enabled \
     --cache-node-type node-type \
     --cache-parameter-group-name parameter-group-name \
     --engine-version engine-version \
     --apply-immediately \
     --global-replication-group-description "description"
  ```

  **Redis OSS to Valkey cross-engine upgrade for ElastiCache Global Datastore**

  You can upgrade an existing Redis OSS global replication group to Valkey by using the console, the API, or the AWS CLI.

  To upgrade, specify the new engine and engine version in a modify-global-replication-group request.

  For Linux, macOS, or Unix:

  ```
  aws elasticache modify-global-replication-group \
     --global-replication-group-id myGlobalReplGroup \
     --engine valkey \
     --apply-immediately \
     --engine-version 8.0
  ```

  For Windows:

  ```
  aws elasticache modify-global-replication-group ^
     --global-replication-group-id myGlobalReplGroup ^
     --engine valkey ^
     --apply-immediately ^
     --engine-version 8.0
  ```

  If a custom cache parameter group is applied to the existing Redis OSS global replication group that you want to upgrade, you must also pass a custom Valkey cache parameter group in the request. The Valkey parameter group must have the same static Redis OSS parameter values as the existing Redis OSS custom parameter group.

  For Linux, macOS, or Unix:

  ```
  aws elasticache modify-global-replication-group \
     --global-replication-group-id myGlobalReplGroup \
     --engine valkey \
     --engine-version 8.0 \
     --apply-immediately \
     --cache-parameter-group-name myParamGroup
  ```

  For Windows:

  ```
  aws elasticache modify-global-replication-group ^
     --global-replication-group-id myGlobalReplGroup ^
     --engine valkey ^
     --engine-version 8.0 ^
     --apply-immediately ^
     --cache-parameter-group-name myParamGroup
  ```
+ [delete-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-global-replication-group.html)

  ```
  aws elasticache delete-global-replication-group \
     --global-replication-group-id my-global-datastore \
     --retain-primary-replication-group
  ```

  The `--retain-primary-replication-group` flag defaults to true, which keeps the primary replication group when the global datastore is deleted.
+ [disassociate-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/disassociate-global-replication-group.html)

  ```
  aws elasticache disassociate-global-replication-group \
     --global-replication-group-id my-global-datastore \
     --replication-group-id my-secondary-cluster \
     --replication-group-region secondary-cluster-region
  ```
+ [failover-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/failover-global-replication-group.html)

  ```
  aws elasticache failover-global-replication-group \
     --global-replication-group-id my-global-datastore \
     --primary-region primary-region \
     --primary-replication-group-id new-primary-replication-group-id
  ```
+ [increase-node-groups-in-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/increase-node-groups-in-global-replication-group.html)

  ```
  aws elasticache increase-node-groups-in-global-replication-group \
     --apply-immediately \
     --global-replication-group-id global-replication-group-name \
     --node-group-count 3
  ```
+ [decrease-node-groups-in-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/decrease-node-groups-in-global-replication-group.html)

  ```
  aws elasticache decrease-node-groups-in-global-replication-group \
     --apply-immediately \
     --global-replication-group-id global-replication-group-name \
     --node-group-count 3
  ```
+ [rebalance-slots-in-global-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/rebalance-slots-in-global-replication-group.html)

  ```
  aws elasticache rebalance-slots-in-global-replication-group \
     --apply-immediately \
     --global-replication-group-id global-replication-group-name
  ```
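The inferred-parameter rule for create-replication-group described above can also be enforced client-side before a request is sent. The following Python sketch does that with plain dictionaries; the helper name and request values are illustrative assumptions, not part of the ElastiCache API:

```python
# Parameters that ElastiCache infers from the primary replication group
# when GlobalReplicationGroupId is supplied; they must not be passed.
INFERRED_PARAMETERS = {
    "PrimaryClusterId", "AutomaticFailoverEnabled", "NumNodeGroups",
    "CacheParameterGroupName", "CacheNodeType", "Engine", "EngineVersion",
    "CacheSecurityGroupNames", "EnableTransitEncryption",
    "AtRestEncryptionEnabled", "SnapshotArns", "SnapshotName",
}

def build_secondary_request(params: dict) -> dict:
    """Drop inferred parameters from a CreateReplicationGroup request
    that joins an existing global datastore."""
    if "GlobalReplicationGroupId" not in params:
        return params  # standalone replication group: nothing is inferred
    return {k: v for k, v in params.items() if k not in INFERRED_PARAMETERS}

# Hypothetical request: the Engine value would be rejected, so it is stripped.
request = build_secondary_request({
    "ReplicationGroupId": "my-secondary-group",
    "ReplicationGroupDescription": "Secondary cluster in a remote Region",
    "GlobalReplicationGroupId": "virxk-my-global-datastore",
    "Engine": "valkey",
})
```

The same filtering could sit in front of a boto3 or CLI wrapper so that disallowed parameters never reach the service.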

Use `help` to list all available commands for ElastiCache for Valkey or Redis OSS.

```
aws elasticache help
```

You can also use help to describe a specific command and learn more about its usage: 

```
aws elasticache create-global-replication-group help
```

# High availability using replication groups
<a name="Replication"></a>

Single-node Amazon ElastiCache Valkey and Redis OSS clusters are in-memory entities with limited data protection services (AOF). If your cluster fails for any reason, you lose all of the cluster's data. However, if you're running a Valkey or Redis OSS engine, you can group 2 to 6 nodes into a cluster with replicas, where 1 to 5 read-only nodes contain replicas of the data on the group's single read/write primary node. In this scenario, if one node fails for any reason, you do not lose all your data because it is replicated to one or more other nodes. However, due to replication latency, some data may be lost if the primary read/write node is the one that fails.

As shown in the following graphic, the replication structure is contained within a shard (called a *node group* in the API and CLI), which is contained within a Valkey or Redis OSS cluster. Valkey or Redis OSS (cluster mode disabled) clusters always have one shard. Valkey or Redis OSS (cluster mode enabled) clusters can have up to 500 shards, with the cluster's data partitioned across the shards. You can create a cluster with a higher number of shards and a lower number of replicas, totaling up to 90 nodes per cluster. This cluster configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed.
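The shard and replica arithmetic above reduces to a simple check: a layout fits when shards × (1 primary + replicas per shard) stays within the node limit, and the replica count stays between 0 and 5. A minimal Python sketch (the helper name is illustrative):

```python
def is_valid_config(shards: int, replicas_per_shard: int, node_limit: int = 90) -> bool:
    """Check a shard/replica layout against the per-cluster node limit.
    Each shard contributes one primary plus its replicas."""
    if not 0 <= replicas_per_shard <= 5:
        return False
    return shards * (1 + replicas_per_shard) <= node_limit

# The two extremes named above, at the default 90-node limit:
assert is_valid_config(90, 0)   # 90 shards, no replicas -> 90 nodes
assert is_valid_config(15, 5)   # 15 shards, 5 replicas  -> 90 nodes
```

With a raised limit of 500 nodes, the same check admits 83 shards with 5 replicas each (498 nodes), matching the example in the next paragraph.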

The node or shard limit can be increased to a maximum of 500 per cluster with ElastiCache for Valkey, and with ElastiCache version 5.0.6 or higher for Redis OSS. For example, you can configure a 500-node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (a single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include subnets in the subnet group that have too small a CIDR range, and subnets that are shared and heavily used by other clusters. For more information, see [Creating a subnet group](SubnetGroups.Creating.md).
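Because each node consumes one IP address from the subnet group, you can sanity-check capacity before requesting a limit increase. The following sketch uses Python's standard `ipaddress` module and subtracts the five addresses that AWS reserves in each VPC subnet; treat the result as a rough upper bound, since other resources in the subnets also consume addresses:

```python
import ipaddress

def usable_addresses(cidrs):
    """Upper bound on IP addresses available across a subnet group's
    CIDR blocks, after the 5 addresses AWS reserves per subnet."""
    return sum(max(ipaddress.ip_network(c).num_addresses - 5, 0) for c in cidrs)

def can_fit_cluster(cidrs, node_count, already_used=0):
    """True if the subnet group can still hold node_count cache nodes."""
    return usable_addresses(cidrs) - already_used >= node_count

# A /27 per Availability Zone is far too small for a 500-node cluster;
# a single /23 is comfortable.
print(can_fit_cluster(["10.0.0.0/27", "10.0.0.32/27"], 500))  # False
print(can_fit_cluster(["10.0.0.0/23"], 500))                  # True
```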

 For versions below 5.0.6, the limit is 250 per cluster.

To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 

![\[Image: Valkey or Redis OSS (cluster mode disabled) cluster has one shard and 0 to 5 replica nodes\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCacheClusters-CSN-Redis-Replicas.png)


*Valkey or Redis OSS (cluster mode disabled) cluster has one shard and 0 to 5 replica nodes*

If the cluster with replicas has Multi-AZ enabled and the primary node fails, the primary fails over to a read replica. Because the data is updated on the replica nodes asynchronously, there may be some data loss due to latency in updating the replica nodes. For more information, see [Mitigating Failures when Running Valkey or Redis OSS](disaster-recovery-resiliency.md#FaultTolerance.Redis).

**Topics**
+ [Understanding Valkey and Redis OSS replication](Replication.Redis.Groups.md)
+ [Replication: Valkey and Redis OSS Cluster Mode Disabled vs. Enabled](Replication.Redis-RedisCluster.md)
+ [Minimizing downtime in ElastiCache by using Multi-AZ with Valkey and Redis OSS](AutoFailover.md)
+ [How synchronization and backup are implemented](Replication.Redis.Versions.md)
+ [Creating a Valkey or Redis OSS replication group](Replication.CreatingRepGroup.md)
+ [Viewing a replication group's details](Replication.ViewDetails.md)
+ [Finding replication group endpoints](Replication.Endpoints.md)
+ [Modifying a replication group](Replication.Modify.md)
+ [Deleting a replication group](Replication.DeletingRepGroup.md)
+ [Changing the number of replicas](increase-decrease-replica-count.md)
+ [Promoting a read replica to primary, for Valkey or Redis OSS (cluster mode disabled) replication groups](Replication.PromoteReplica.md)

# Understanding Valkey and Redis OSS replication
<a name="Replication.Redis.Groups"></a>

Redis OSS implements replication in two ways: 
+ With a single shard that contains all of the cluster's data in each node—Valkey or Redis OSS (cluster mode disabled)
+ With data partitioned across up to 500 shards—Valkey or Redis OSS (cluster mode enabled)

Each shard in a replication group has a single read/write primary node and up to 5 read-only replica nodes. You can create a cluster with a higher number of shards and a lower number of replicas, totaling up to 90 nodes per cluster. This cluster configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed.

The node or shard limit can be increased to a maximum of 500 per cluster if the Redis OSS engine version is 5.0.6 or higher. For example, you can configure a 500-node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (a single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include subnets in the subnet group that have too small a CIDR range, and subnets that are shared and heavily used by other clusters. For more information, see [Creating a subnet group](SubnetGroups.Creating.md).

 For versions below 5.0.6, the limit is 250 per cluster.

To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 

**Topics**
+ [Valkey or Redis OSS (Cluster Mode Disabled)](#Replication.Redis.Groups.Classic)
+ [Valkey or Redis OSS (cluster mode enabled)](#Replication.Redis.Groups.Cluster)

## Valkey or Redis OSS (Cluster Mode Disabled)
<a name="Replication.Redis.Groups.Classic"></a>

A Valkey or Redis OSS (cluster mode disabled) cluster has a single shard, inside of which is a collection of nodes; one primary read/write node and up to five secondary, read-only replica nodes. Each read replica maintains a copy of the data from the cluster's primary node. Asynchronous replication mechanisms are used to keep the read replicas synchronized with the primary. Applications can read from any node in the cluster. Applications can write only to the primary node. Read replicas improve read throughput and guard against data loss in cases of a node failure.

![\[Image: Valkey or Redis OSS (cluster mode disabled) cluster with a single shard and replica nodes\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCacheClusters-CSN-Redis-Replicas.png)


*Valkey or Redis OSS (cluster mode disabled) cluster with a single shard and replica nodes*

You can use Valkey or Redis OSS (cluster mode disabled) clusters with replica nodes to scale your solution for ElastiCache to handle applications that are read-intensive or to support large numbers of clients that simultaneously read from the same cluster.

All of the nodes in a Valkey or Redis OSS (cluster mode disabled) cluster must reside in the same region. 

When you add a read replica to a cluster, all of the data from the primary is copied to the new node. From that point on, whenever data is written to the primary, the changes are asynchronously propagated to all the read replicas.

To improve fault tolerance and reduce write downtime, enable Multi-AZ with Automatic Failover for your Valkey or Redis OSS (cluster mode disabled) cluster with replicas. For more information, see [Minimizing downtime in ElastiCache by using Multi-AZ with Valkey and Redis OSS](AutoFailover.md).

You can change the roles of the nodes within the Valkey or Redis OSS (cluster mode disabled) cluster, with the primary and one of the replicas exchanging roles. You might decide to do this for performance tuning reasons. For example, with a web application that has heavy write activity, you can choose the node that has the lowest network latency. For more information, see [Promoting a read replica to primary, for Valkey or Redis OSS (cluster mode disabled) replication groups](Replication.PromoteReplica.md).

## Valkey or Redis OSS (cluster mode enabled)
<a name="Replication.Redis.Groups.Cluster"></a>

A Valkey or Redis OSS (cluster mode enabled) cluster is composed of 1 to 500 shards (API/CLI: node groups). Each shard has a primary node and up to five read-only replica nodes. The configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed. 

The node or shard limit can be increased to a maximum of 500 per cluster if the engine version is Valkey 7.2 or higher, or Redis OSS 5.0.6 or higher. For example, you can configure a 500-node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (a single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include subnets in the subnet group that have too small a CIDR range, and subnets that are shared and heavily used by other clusters. For more information, see [Creating a subnet group](SubnetGroups.Creating.md).

 For versions below 5.0.6, the limit is 250 per cluster.

To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 

 Each read replica in a shard maintains a copy of the data from the shard's primary. Asynchronous replication mechanisms are used to keep the read replicas synchronized with the primary. Applications can read from any node in the cluster. Applications can write only to the primary nodes. Read replicas enhance read scalability and guard against data loss. Data is partitioned across the shards in a Valkey or Redis OSS (cluster mode enabled) cluster.
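Partitioning works by hashing each key into one of 16,384 slots that are distributed across the shards. The following sketch shows the slot calculation used by the cluster protocol (CRC16-XMODEM modulo 16384); for brevity it omits hash-tag handling, where only the part of the key inside `{...}` is hashed, and the even-split shard mapping is a simplifying assumption:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the cluster's 16384 hash slots."""
    return crc16_xmodem(key.encode()) % 16384

def slot_owner(key: str, num_shards: int) -> int:
    """Which shard serves this key, assuming slots are split evenly."""
    return key_slot(key) * num_shards // 16384

print(key_slot("user:1001"), slot_owner("user:1001", 3))
```

Clients that speak the cluster protocol perform this mapping internally, which is why applications normally connect through the configuration endpoint rather than computing slots themselves.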

Applications use the Valkey or Redis OSS (cluster mode enabled) cluster's *configuration endpoint* to connect with the nodes in the cluster. For more information, see [Finding connection endpoints in ElastiCache](Endpoints.md).

![\[Image: Valkey or Redis OSS (cluster mode enabled) cluster with multiple shards and replica nodes\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCacheClusters-CSN-RedisClusters.png)


*Valkey or Redis OSS (cluster mode enabled) cluster with multiple shards and replica nodes*

All of the nodes in a Valkey or Redis OSS (cluster mode enabled) cluster must reside in the same region. To improve fault tolerance, you can provision both primaries and read replicas in multiple Availability Zones within that region.

Currently, Valkey or Redis OSS (cluster mode enabled) features have some limitations.
+ You cannot manually promote any of the replica nodes to primary.

# Replication: Valkey and Redis OSS Cluster Mode Disabled vs. Enabled
<a name="Replication.Redis-RedisCluster"></a>

Beginning with Valkey 7.2 and Redis OSS version 3.2, you can create one of two distinct types of clusters (API/CLI: replication groups). A Valkey or Redis OSS (cluster mode disabled) cluster always has a single shard (API/CLI: node group) with up to 5 read replica nodes. A Valkey or Redis OSS (cluster mode enabled) cluster has up to 500 shards, with up to 5 read replica nodes in each.

![\[Image: Valkey or Redis OSS (cluster mode disabled), and Valkey or Redis OSS (cluster mode enabled) clusters\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-NodeGroups.png)


*Valkey or Redis OSS (cluster mode disabled), and Valkey or Redis OSS (cluster mode enabled) clusters*

The following table summarizes important differences between Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled) clusters.


**Comparing Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled) Clusters**  

| Feature | Valkey or Redis OSS (cluster mode disabled) | Valkey or Redis OSS (cluster mode enabled) | 
| --- | --- | --- | 
| Modifiable | Yes. Supports adding and deleting replica nodes, and scaling up node type. | Limited. For more information, see [Version Management for ElastiCache](VersionManagement.md) and [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md). | 
| Data Partitioning | No | Yes | 
| Shards | 1 | 1 to 500  | 
| Read replicas | 0 to 5 If you have no replicas and the node fails, you experience total data loss. | 0 to 5 per shard. If you have no replicas and a node fails, you experience loss of all data in that shard. | 
| Multi-AZ  | Yes, with at least 1 replica. Optional. On by default. | Yes. Optional. On by default. | 
| Snapshots (Backups) | Yes, creating a single .rdb file. | Yes, creating a unique .rdb file for each shard. | 
| Restore | Yes, using a single .rdb file from a Valkey or Redis OSS (cluster mode disabled) cluster. | Yes, using .rdb files from either a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) cluster. | 
| Supported by | All Valkey and Redis OSS versions | All Valkey versions, and Redis OSS 3.2 and following | 
| Engine upgradeable | Yes, with some limits. For more information, see [Version Management for ElastiCache](VersionManagement.md). | Yes, with some limits. For more information, see [Version Management for ElastiCache](VersionManagement.md). | 
| Encryption | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | 
| HIPAA Eligible | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | 
| PCI DSS Compliant | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | Versions 3.2.6 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and 4.0.10 and later. | 
| Online resharding | N/A | Version 3.2.10 (scheduled for EOL, see [Redis OSS versions end of life schedule](engine-versions.md#deprecated-engine-versions)) and later. | 

## Which should I choose?
<a name="Replication.Redis-RedisCluster.Choose"></a>

When choosing between Valkey or Redis OSS (cluster mode disabled) or Valkey or Redis OSS (cluster mode enabled), consider the following factors:
+ **Scaling v. partitioning** – Business needs change. You need to either provision for peak demand or scale as demand changes. Valkey or Redis OSS (cluster mode disabled) supports scaling. You can scale read capacity by adding or deleting replica nodes, or you can scale capacity by scaling up to a larger node type. Both of these operations take time. For more information, see [Scaling replica nodes for Valkey or Redis OSS (Cluster Mode Disabled)](Scaling.RedisReplGrps.md).

   

  Valkey or Redis OSS (cluster mode enabled) supports partitioning your data across up to 500 node groups. You can dynamically change the number of shards as your business needs change. One advantage of partitioning is that you spread your load over a greater number of endpoints, which reduces access bottlenecks during peak demand. Additionally, you can accommodate a larger data set since the data can be spread across multiple servers. For information on scaling your partitions, see [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md).

   
+ **Node size v. number of nodes** – Because a Valkey or Redis OSS (cluster mode disabled) cluster has only one shard, the node type must be large enough to accommodate all the cluster's data plus necessary overhead. On the other hand, because you can partition your data across several shards when using a Valkey or Redis OSS (cluster mode enabled) cluster, the node types can be smaller, though you need more of them. For more information, see [Choosing your node size](CacheNodes.SelectSize.md).

   
+ **Reads v. writes** – If the primary load on your cluster is applications reading data, you can scale a Valkey or Redis OSS (cluster mode disabled) cluster by adding and deleting read replicas. However, there is a maximum of 5 read replicas. If the load on your cluster is write-heavy, you can benefit from the additional write endpoints of a Valkey or Redis OSS (cluster mode enabled) cluster with multiple shards.

Whichever type of cluster you choose to implement, be sure to choose a node type that is adequate for your current and future needs.

# Minimizing downtime in ElastiCache by using Multi-AZ with Valkey and Redis OSS
<a name="AutoFailover"></a>

There are a number of instances where ElastiCache for Valkey and Redis OSS may need to replace a primary node; these include certain types of planned maintenance and the unlikely event of a primary node or Availability Zone failure. 

This replacement results in some downtime for the cluster, but if Multi-AZ is enabled, the downtime is minimized. The role of primary node will automatically fail over to one of the read replicas. There is no need to create and provision a new primary node, because ElastiCache will handle this transparently. This failover and replica promotion ensure that you can resume writing to the new primary as soon as promotion is complete. 

ElastiCache also propagates the Domain Name Service (DNS) name of the promoted replica. As a result, if your application writes to the primary endpoint, no endpoint change is required in your application. If you read from individual endpoints, make sure that you change the read endpoint from the promoted replica to the new replica's endpoint.

In case of planned node replacements initiated due to maintenance updates or self-service updates, be aware of the following:
+ For Valkey and Redis OSS clusters, the planned node replacements complete while the cluster serves incoming write requests. 
+ For Valkey and Redis OSS cluster mode disabled clusters with Multi-AZ enabled that run on the 5.0.6 or later engine, the planned node replacements complete while the cluster serves incoming write requests. 
+ For Valkey and Redis OSS cluster mode disabled clusters with Multi-AZ enabled that run on the 4.0.10 or earlier engine, you might notice a brief write interruption associated with DNS updates. This interruption might take up to a few seconds. This process is much faster than recreating and provisioning a new primary, which is what occurs if you don't enable Multi-AZ. 
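Applications can ride out these brief write interruptions with a short retry loop around write operations. The following Python sketch shows the pattern with a simulated flaky write; the function names are illustrative, and the `write` callable stands in for your client library's write call:

```python
import time

def write_with_retry(write, retries=5, base_delay=0.1):
    """Retry a write through a brief failover window with exponential backoff."""
    for attempt in range(retries):
        try:
            return write()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # interruption outlasted the retry budget
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Simulate a write that fails twice (as during a short DNS cutover), then succeeds.
state = {"failures": 2}
def flaky_write():
    if state["failures"] > 0:
        state["failures"] -= 1
        raise ConnectionError("primary not reachable")
    return "OK"

print(write_with_retry(flaky_write, base_delay=0.01))  # prints OK
```

Production clients often get this behavior from their library's built-in retry settings; the sketch only illustrates why a few seconds of retry budget covers a DNS-based failover.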

You can enable Multi-AZ using the ElastiCache Management Console, the AWS CLI, or the ElastiCache API.

Enabling ElastiCache Multi-AZ on your Valkey or Redis OSS cluster (in the API and CLI, replication group) improves your fault tolerance. This is particularly true in cases where your cluster's read/write primary node becomes unreachable or fails for any reason. Multi-AZ is only supported on Valkey and Redis OSS clusters with more than one node in each shard.

**Topics**
+ [Enabling Multi-AZ](#AutoFailover.Enable)
+ [Failure scenarios with Multi-AZ responses](#AutoFailover.Scenarios)
+ [Testing automatic failover](#auto-failover-test)
+ [Limitations on Multi-AZ](#AutoFailover.Limitations)

## Enabling Multi-AZ
<a name="AutoFailover.Enable"></a>

You can enable Multi-AZ when you create or modify a cluster (API or CLI, replication group) using the ElastiCache console, AWS CLI, or the ElastiCache API.

You can enable Multi-AZ only on Valkey or Redis OSS (cluster mode disabled) clusters that have at least one available read replica. Clusters without read replicas do not provide high availability or fault tolerance. For information about creating a cluster with replication, see [Creating a Valkey or Redis OSS replication group](Replication.CreatingRepGroup.md). For information about adding a read replica to a cluster with replication, see [Adding a read replica for Valkey or Redis OSS (Cluster Mode Disabled)](Replication.AddReadReplica.md).

**Topics**
+ [Enabling Multi-AZ (Console)](#AutoFailover.Enable.Console)
+ [Enabling Multi-AZ (AWS CLI)](#AutoFailover.Enable.CLI)
+ [Enabling Multi-AZ (ElastiCache API)](#AutoFailover.Enable.API)

### Enabling Multi-AZ (Console)
<a name="AutoFailover.Enable.Console"></a>

You can enable Multi-AZ using the ElastiCache console when you create a new Valkey or Redis OSS cluster or by modifying an existing cluster with replication.

Multi-AZ is enabled by default on Valkey or Redis OSS (cluster mode enabled) clusters.

**Important**  
ElastiCache will automatically enable Multi-AZ only if the cluster contains at least one replica in a different Availability Zone from the primary in all shards.

#### Enabling Multi-AZ when creating a cluster using the ElastiCache console
<a name="AutoFailover.Enable.Console.NewCacheCluster"></a>

For more information on this process, see [Creating a Valkey (cluster mode disabled) cluster (Console)](SubnetGroups.designing-cluster-pre.valkey.md#Clusters.Create.CON.valkey-gs). Be sure to have one or more replicas and enable Multi-AZ.

#### Enabling Multi-AZ on an existing cluster (Console)
<a name="AutoFailover.Enable.Console.ReplGrp"></a>

For more information on this process, see Modifying a Cluster [Using the ElastiCache AWS Management Console](Clusters.Modify.md#Clusters.Modify.CON).

### Enabling Multi-AZ (AWS CLI)
<a name="AutoFailover.Enable.CLI"></a>

The following code example uses the AWS CLI to enable Multi-AZ for the replication group `redis12`.

**Important**  
The replication group `redis12` must already exist and have at least one available read replica.

For Linux, macOS, or Unix:

```
aws elasticache modify-replication-group \
    --replication-group-id redis12 \
    --automatic-failover-enabled \
    --multi-az-enabled \
    --apply-immediately
```

For Windows:

```
aws elasticache modify-replication-group ^
    --replication-group-id redis12 ^
    --automatic-failover-enabled ^
    --multi-az-enabled ^
    --apply-immediately
```

The JSON output from this command should look something like the following.

```
{
    "ReplicationGroup": {
        "Status": "modifying", 
        "Description": "One shard, two nodes", 
        "NodeGroups": [
            {
                "Status": "modifying", 
                "NodeGroupMembers": [
                    {
                        "CurrentRole": "primary", 
                        "PreferredAvailabilityZone": "us-west-2b", 
                        "CacheNodeId": "0001", 
                        "ReadEndpoint": {
                            "Port": 6379, 
                            "Address": "redis12-001.v5r9dc.0001.usw2.cache.amazonaws.com"
                        }, 
                        "CacheClusterId": "redis12-001"
                    }, 
                    {
                        "CurrentRole": "replica", 
                        "PreferredAvailabilityZone": "us-west-2a", 
                        "CacheNodeId": "0001", 
                        "ReadEndpoint": {
                            "Port": 6379, 
                            "Address": "redis12-002.v5r9dc.0001.usw2.cache.amazonaws.com"
                        }, 
                        "CacheClusterId": "redis12-002"
                    }
                ], 
                "NodeGroupId": "0001", 
                "PrimaryEndpoint": {
                    "Port": 6379, 
                    "Address": "redis12.v5r9dc.ng.0001.usw2.cache.amazonaws.com"
                }
            }
        ], 
        "ReplicationGroupId": "redis12", 
        "SnapshotRetentionLimit": 1, 
        "AutomaticFailover": "enabling", 
        "MultiAZ": "enabled", 
        "SnapshotWindow": "07:00-08:00", 
        "SnapshottingClusterId": "redis12-002", 
        "MemberClusters": [
            "redis12-001", 
            "redis12-002"
        ], 
        "PendingModifiedValues": {}
    }
}
```

For more information, see these topics in the *AWS CLI Command Reference*:
+ [create-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-cache-cluster.html)
+ [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html)
+ [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html)
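Because `modify-replication-group` returns while the group is still in the `modifying` state, you may want to poll until the change takes effect. The following is a minimal sketch, assuming `boto3` is installed and credentials are configured; the client is passed in rather than created inside the function, which keeps it easy to test:

```python
import time

def wait_for_multi_az(client, group_id, delay=15, max_attempts=40):
    """Poll DescribeReplicationGroups until Multi-AZ is enabled.

    `client` is a boto3 ElastiCache client (or any stub with the same
    interface). Returns the replication group description once it
    reports MultiAZ "enabled" and status "available".
    """
    for _ in range(max_attempts):
        resp = client.describe_replication_groups(ReplicationGroupId=group_id)
        group = resp["ReplicationGroups"][0]
        if group.get("MultiAZ") == "enabled" and group.get("Status") == "available":
            return group
        time.sleep(delay)
    raise TimeoutError(f"{group_id} did not finish enabling Multi-AZ")
```

The `delay` and `max_attempts` values are illustrative; tune them to your cluster size, since large groups take longer to finish modifying.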

### Enabling Multi-AZ (ElastiCache API)
<a name="AutoFailover.Enable.API"></a>

The following code example uses the ElastiCache API to enable Multi-AZ for the replication group `redis12`.

**Note**  
To use this example, the replication group `redis12` must already exist and have at least one available read replica.

```
https://elasticache.us-west-2.amazonaws.com/
    ?Action=ModifyReplicationGroup
    &ApplyImmediately=true
    &AutomaticFailoverEnabled=true
    &MultiAZEnabled=true
    &ReplicationGroupId=redis12
    &Version=2015-02-02
    &SignatureVersion=4
    &SignatureMethod=HmacSHA256
    &Timestamp=20140401T192317Z
    &X-Amz-Credential=<credential>
```

For more information, see these topics in the *ElastiCache API Reference*:
+ [CreateCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateCacheCluster.html)
+ [CreateReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateReplicationGroup.html)
+ [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html)

## Failure scenarios with Multi-AZ responses
<a name="AutoFailover.Scenarios"></a>

Before the introduction of Multi-AZ, ElastiCache detected and replaced a cluster's failed nodes by recreating and reprovisioning them. If you enable Multi-AZ, a failed primary node fails over to the replica with the least replication lag. The selected replica is automatically promoted to primary, which is much faster than creating and reprovisioning a new primary node. This process usually takes just a few seconds until you can write to the cluster again.

When Multi-AZ is enabled, ElastiCache continually monitors the state of the primary node. If the primary node fails, one of the following actions is performed depending on the type of failure.

**Topics**
+ [Failure scenarios when only the primary node fails](#AutoFailover.Scenarios.PrimaryOnly)
+ [Failure scenarios when the primary node and some read replicas fail](#AutoFailover.Scenarios.PrimaryAndReplicas)
+ [Failure scenarios when the entire cluster fails](#AutoFailover.Scenarios.AllFail)

### Failure scenarios when only the primary node fails
<a name="AutoFailover.Scenarios.PrimaryOnly"></a>

If only the primary node fails, the read replica with the least replication lag is promoted to primary. A replacement read replica is then created and provisioned in the same Availability Zone as the failed primary.

When only the primary node fails, ElastiCache Multi-AZ does the following:

1. The failed primary node is taken offline.

1. The read replica with the least replication lag is promoted to primary.

   Writes can resume as soon as the promotion process is complete, typically just a few seconds. If your application is writing to the primary endpoint, you don't need to change the endpoint for writes or reads. ElastiCache propagates the DNS name of the promoted replica.

1. A replacement read replica is launched and provisioned.

   The replacement read replica is launched in the Availability Zone that the failed primary node was in so that the distribution of nodes is maintained.

1. The replicas sync with the new primary node.

After the new replica is available, be aware of these effects:
+ **Primary endpoint** – You don't need to make any changes to your application, because the DNS name of the new primary node is propagated to the primary endpoint.
+ **Read endpoint** – The read endpoint is automatically updated to point to the new replica nodes.

For information about finding the endpoints of a cluster, see the following topics:
+ [Finding a Valkey or Redis OSS (Cluster Mode Disabled) Cluster's Endpoints (Console)](Endpoints.md#Endpoints.Find.Redis)
+ [Finding the Endpoints for Valkey or Redis OSS Replication Groups (AWS CLI)](Endpoints.md#Endpoints.Find.CLI.ReplGroups)
+ [Finding Endpoints for Valkey or Redis OSS Replication Groups (ElastiCache API)](Endpoints.md#Endpoints.Find.API.ReplGroups)
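During the few seconds of promotion, in-flight writes fail and open connections to the primary endpoint drop. One common client-side pattern is to retry writes with a short backoff so the application rides out the promotion and DNS propagation. A minimal sketch, where `write` is any hypothetical callable that raises `ConnectionError` while the endpoint is unavailable (the retry counts and backoff are illustrative, not ElastiCache requirements):

```python
import time

def write_with_retry(write, *args, retries=5, backoff=0.5):
    """Retry a write through a brief failover window.

    Each retry waits a little longer than the last (linear backoff),
    giving DNS time to propagate the promoted replica's address.
    """
    for attempt in range(retries):
        try:
            return write(*args)
        except ConnectionError:
            if attempt == retries - 1:
                raise  # give up after the final attempt
            time.sleep(backoff * (attempt + 1))
```

Most Valkey and Redis OSS client libraries offer built-in retry or reconnect options that accomplish the same thing; prefer those where available.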

 

### Failure scenarios when the primary node and some read replicas fail
<a name="AutoFailover.Scenarios.PrimaryAndReplicas"></a>

If the primary and at least one read replica fail, the available replica with the least replication lag is promoted to primary. New read replicas are also created and provisioned in the same Availability Zones as the failed nodes and the replica that was promoted to primary.

When the primary node and some read replicas fail, ElastiCache Multi-AZ does the following:

1. The failed primary node and failed read replicas are taken offline.

1. The available replica with the least replication lag is promoted to primary.

   Writes can resume as soon as the promotion process is complete, typically just a few seconds. If your application is writing to the primary endpoint, there is no need to change the endpoint for writes. ElastiCache propagates the DNS name of the promoted replica.

1. Replacement replicas are created and provisioned.

   The replacement replicas are created in the Availability Zones of the failed nodes so that the distribution of nodes is maintained.

1. All clusters sync with the new primary node.

After the new nodes are available, be aware of these effects:
+ **Primary endpoint** – You don't need to make any changes to your application, because the DNS name of the new primary node is propagated to the primary endpoint.
+ **Read endpoint** – The read endpoint is automatically updated to point to the new replica nodes.

For information about finding the endpoints of a replication group, see the following topics:
+ [Finding a Valkey or Redis OSS (Cluster Mode Disabled) Cluster's Endpoints (Console)](Endpoints.md#Endpoints.Find.Redis)
+ [Finding the Endpoints for Valkey or Redis OSS Replication Groups (AWS CLI)](Endpoints.md#Endpoints.Find.CLI.ReplGroups)
+ [Finding Endpoints for Valkey or Redis OSS Replication Groups (ElastiCache API)](Endpoints.md#Endpoints.Find.API.ReplGroups)

 

### Failure scenarios when the entire cluster fails
<a name="AutoFailover.Scenarios.AllFail"></a>

If the entire cluster fails, all the nodes are recreated and provisioned in the same Availability Zones as the original nodes. 

In this scenario, all the data in the cluster is lost due to the failure of every node in the cluster. This occurrence is rare.

When the entire cluster fails, ElastiCache Multi-AZ does the following:

1. The failed primary node and read replicas are taken offline.

1. A replacement primary node is created and provisioned.

1. Replacement replicas are created and provisioned.

   The replacements are created in the Availability Zones of the failed nodes so that the distribution of nodes is maintained.

   Because the entire cluster failed, data is lost and all the new nodes start cold.

Because each of the replacement nodes has the same endpoint as the node it's replacing, you don't need to make any endpoint changes in your application.

For information about finding the endpoints of a replication group, see the following topics:
+ [Finding a Valkey or Redis OSS (Cluster Mode Disabled) Cluster's Endpoints (Console)](Endpoints.md#Endpoints.Find.Redis)
+ [Finding the Endpoints for Valkey or Redis OSS Replication Groups (AWS CLI)](Endpoints.md#Endpoints.Find.CLI.ReplGroups)
+ [Finding Endpoints for Valkey or Redis OSS Replication Groups (ElastiCache API)](Endpoints.md#Endpoints.Find.API.ReplGroups)

We recommend that you create the primary node and read replicas in different Availability Zones to raise your fault tolerance level.

## Testing automatic failover
<a name="auto-failover-test"></a>

After you enable automatic failover, you can test it using the ElastiCache console, the AWS CLI, and the ElastiCache API.

When testing, note the following:
+ You can use this operation to test automatic failover on up to 15 shards (called node groups in the ElastiCache API and AWS CLI) in any rolling 24-hour period.
+ If you call this operation on shards in different clusters (called replication groups in the API and CLI), you can make the calls concurrently.
+ In some cases, you might call this operation multiple times on different shards in the same Valkey or Redis OSS (cluster mode enabled) replication group. In such cases, the first node replacement must complete before a subsequent call can be made.
+ To determine whether the node replacement is complete, check events using the Amazon ElastiCache console, the AWS CLI, or the ElastiCache API. Look for the following events related to automatic failover, listed here in order of likely occurrence:

  1. Replication group message: `Test Failover API called for node group <node-group-id>`

  1. Cache cluster message: `Failover from primary node <primary-node-id> to replica node <node-id> completed`

  1. Replication group message: `Failover from primary node <primary-node-id> to replica node <node-id> completed`

  1. Cache cluster message: `Recovering cache nodes <node-id>`

  1. Cache cluster message: `Finished recovery for cache nodes <node-id>`

  For more information, see the following:
  + [Viewing ElastiCache events](ECEvents.Viewing.md) in the *ElastiCache User Guide*
  + [DescribeEvents](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeEvents.html) in the *ElastiCache API Reference*
  + [describe-events](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-events.html) in the *AWS CLI Command Reference.*
+ This API is designed for testing the behavior of your application in case of ElastiCache failover. It is not designed to be an operational tool for initiating a failover to address an issue with the cluster. Moreover, in certain conditions such as large-scale operational events, AWS may block this API.
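You can also watch for the events listed above programmatically. The following is a minimal sketch, assuming `boto3` (the client is passed in so the check is easy to test); it keys on the `Finished recovery` message wording shown in this section:

```python
def failover_finished(client, cluster_id):
    """Return True once the cache cluster's recovery event appears.

    `client` is a boto3 ElastiCache client or a stub with the same
    interface. DescribeEvents returns recent events for the given
    source; we scan their messages for the completion text.
    """
    resp = client.describe_events(
        SourceIdentifier=cluster_id,
        SourceType="cache-cluster",
    )
    return any(
        "Finished recovery" in event["Message"] for event in resp["Events"]
    )
```

Call this in a polling loop after starting a failover test; by default `DescribeEvents` covers the last hour, which is ample for a failover that completes in seconds.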

**Topics**
+ [Testing automatic failover using the AWS Management Console](#auto-failover-test-con)
+ [Testing automatic failover using the AWS CLI](#auto-failover-test-cli)
+ [Testing automatic failover using the ElastiCache API](#auto-failover-test-api)

### Testing automatic failover using the AWS Management Console
<a name="auto-failover-test-con"></a>

Use the following procedure to test automatic failover with the console.

**To test automatic failover**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**.

1. From the list of clusters, choose the box to the left of the cluster you want to test. This cluster must have at least one read replica node.

1. In the **Details** area, confirm that this cluster is Multi-AZ enabled. If the cluster isn't Multi-AZ enabled, either choose a different cluster or modify this cluster to enable Multi-AZ. For more information, see [Using the ElastiCache AWS Management Console](Clusters.Modify.md#Clusters.Modify.CON).  
![\[Image: Details area of a Multi-AZ enabled cluster\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-AutoFailover-MultiAZ-Enabled.png)

1. For Valkey or Redis OSS (cluster mode disabled), choose the cluster's name.

   For Valkey or Redis OSS (cluster mode enabled), do the following:

   1. Choose the cluster's name. 

   1. On the **Shards** page, for the shard (called node group in the API and CLI) on which you want to test failover, choose the shard's name. 

1. On the **Nodes** page, choose **Failover Primary**.

1. Choose **Continue** to fail over the primary, or **Cancel** to cancel the operation and not fail over the primary node.

   During the failover process, the console continues to show the node's status as *available*. To track the progress of your failover test, choose **Events** from the console navigation pane. On the **Events** tab, watch for events that indicate your failover has started (`Test Failover API called`) and completed (`Recovery completed`).

 

### Testing automatic failover using the AWS CLI
<a name="auto-failover-test-cli"></a>

You can test automatic failover on any Multi-AZ enabled cluster using the AWS CLI operation `test-failover`.

**Parameters**
+ `--replication-group-id` – Required. The replication group (on the console, cluster) that is to be tested.
+ `--node-group-id` – Required. The name of the node group you want to test automatic failover on. You can test a maximum of 15 node groups in a rolling 24-hour period.

The following example uses the AWS CLI to test automatic failover on the node group `redis00-0003` in the Valkey or Redis OSS (cluster mode enabled) cluster `redis00`.

**Example Test automatic failover**  
For Linux, macOS, or Unix:  

```
aws elasticache test-failover \
   --replication-group-id redis00 \
   --node-group-id redis00-0003
```
For Windows:  

```
aws elasticache test-failover ^
   --replication-group-id redis00 ^
   --node-group-id redis00-0003
```

Output from the preceding command looks something like the following.

```
{
    "ReplicationGroup": {
        "Status": "available", 
        "Description": "1 shard, 3 nodes (1 + 2 replicas)", 
        "NodeGroups": [
            {
                "Status": "available", 
                "NodeGroupMembers": [
                    {
                        "CurrentRole": "primary", 
                        "PreferredAvailabilityZone": "us-west-2c", 
                        "CacheNodeId": "0001", 
                        "ReadEndpoint": {
                            "Port": 6379, 
                            "Address": "redis1x3-001.7ekv3t.0001.usw2.cache.amazonaws.com"
                        }, 
                        "CacheClusterId": "redis1x3-001"
                    }, 
                    {
                        "CurrentRole": "replica", 
                        "PreferredAvailabilityZone": "us-west-2a", 
                        "CacheNodeId": "0001", 
                        "ReadEndpoint": {
                            "Port": 6379, 
                            "Address": "redis1x3-002.7ekv3t.0001.usw2.cache.amazonaws.com"
                        }, 
                        "CacheClusterId": "redis1x3-002"
                    }, 
                    {
                        "CurrentRole": "replica", 
                        "PreferredAvailabilityZone": "us-west-2b", 
                        "CacheNodeId": "0001", 
                        "ReadEndpoint": {
                            "Port": 6379, 
                            "Address": "redis1x3-003.7ekv3t.0001.usw2.cache.amazonaws.com"
                        }, 
                        "CacheClusterId": "redis1x3-003"
                    }
                ], 
                "NodeGroupId": "0001", 
                "PrimaryEndpoint": {
                    "Port": 6379, 
                    "Address": "redis1x3.7ekv3t.ng.0001.usw2.cache.amazonaws.com"
                }
            }
        ], 
        "ClusterEnabled": false, 
        "ReplicationGroupId": "redis1x3", 
        "SnapshotRetentionLimit": 1, 
        "AutomaticFailover": "enabled", 
        "MultiAZ": "enabled",
        "SnapshotWindow": "11:30-12:30", 
        "SnapshottingClusterId": "redis1x3-002", 
        "MemberClusters": [
            "redis1x3-001", 
            "redis1x3-002", 
            "redis1x3-003"
        ], 
        "CacheNodeType": "cache.m3.medium", 
        "DataTiering": "disabled",
        "PendingModifiedValues": {}
    }
}
```

To track the progress of your failover, use the AWS CLI `describe-events` operation.

For more information, see the following:
+ [test-failover](https://docs.aws.amazon.com/cli/latest/reference/elasticache/test-failover.html) in the *AWS CLI Command Reference.*
+ [describe-events](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-events.html) in the *AWS CLI Command Reference.*

 

### Testing automatic failover using the ElastiCache API
<a name="auto-failover-test-api"></a>

You can test automatic failover on any cluster enabled with Multi-AZ using the ElastiCache API operation `TestFailover`.

**Parameters**
+ `ReplicationGroupId` – Required. The replication group (on the console, cluster) to be tested.
+ `NodeGroupId` – Required. The name of the node group that you want to test automatic failover on. You can test a maximum of 15 node groups in a rolling 24-hour period.

The following example tests automatic failover on the node group `redis00-0003` in the replication group (on the console, cluster) `redis00`.

**Example Testing automatic failover**  

```
https://elasticache.us-west-2.amazonaws.com/
    ?Action=TestFailover
    &NodeGroupId=redis00-0003
    &ReplicationGroupId=redis00
    &Version=2015-02-02
    &SignatureVersion=4
    &SignatureMethod=HmacSHA256
    &Timestamp=20140401T192317Z
    &X-Amz-Credential=<credential>
```

To track the progress of your failover, use the ElastiCache `DescribeEvents` API operation.

For more information, see the following:
+ [TestFailover](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_TestFailover.html) in the *ElastiCache API Reference*
+ [DescribeEvents](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeEvents.html) in the *ElastiCache API Reference*

 

## Limitations on Multi-AZ
<a name="AutoFailover.Limitations"></a>

Be aware of the following limitations for Multi-AZ:
+ Multi-AZ is supported on Valkey, and on Redis OSS version 2.8.6 and later.
+ Multi-AZ isn't supported on T1 node types.
+ Valkey and Redis OSS replication is asynchronous. Therefore, when a primary node fails over to a replica, a small amount of data might be lost due to replication lag.

  When choosing the replica to promote to primary, ElastiCache chooses the replica with the least replication lag. In other words, it chooses the replica that is most current. Doing so helps minimize the amount of lost data. The replica with the least replication lag can be in the same or different Availability Zone from the failed primary node.
+ When you manually promote read replicas to primary on Valkey or Redis OSS clusters with cluster mode disabled, you can do so only when Multi-AZ and automatic failover are disabled. To promote a read replica to primary, take the following steps:

  1. Disable Multi-AZ on the cluster.

  1. Disable automatic failover on the cluster. You can do this through the console by clearing the **Auto failover** check box for the replication group. You can also do this using the AWS CLI by setting the `AutomaticFailoverEnabled` property to `false` when calling the `ModifyReplicationGroup` operation.

  1. Promote the read replica to primary.

  1. Re-enable Multi-AZ.
+ ElastiCache for Redis OSS Multi-AZ and append-only file (AOF) are mutually exclusive. If you enable one, you can't enable the other.
+ A node's failure can be caused by the rare event of an entire Availability Zone failing. In this case, the replica replacing the failed primary is created only when the Availability Zone is back up. For example, consider a replication group with the primary in AZ-a and replicas in AZ-b and AZ-c. If the primary fails, the replica with the least replication lag is promoted to primary cluster. Then, ElastiCache creates a new replica in AZ-a (where the failed primary was located) only when AZ-a is back up and available.
+ A customer-initiated reboot of a primary doesn't trigger automatic failover. Other reboots and failures do trigger automatic failover.
+ When the primary is rebooted, it's cleared of data when it comes back online. When the read replicas see the cleared primary cluster, they clear their copy of the data, which causes data loss.
+ After a read replica has been promoted, the other replicas sync with the new primary. When this sync begins, each replica's content is deleted and it loads the data from the new primary. This sync process causes a brief interruption, during which the replicas are not accessible. The sync process also causes a temporary load increase on the primary while syncing with the replicas. This behavior is native to Valkey and Redis OSS and isn't unique to ElastiCache Multi-AZ. For details about this behavior, see [Replication](http://valkey.io/topics/replication) on the Valkey website.

**Important**  
For Valkey 7.2.6 and later or Redis OSS version 2.8.22 and later, you can't create external replicas.  
For Redis OSS versions before 2.8.22, we recommend that you don't connect an external replica to an ElastiCache cluster that is Multi-AZ enabled. This unsupported configuration can create issues that prevent ElastiCache from properly performing failover and recovery. To connect an external replica to an ElastiCache cluster, make sure that Multi-AZ isn't enabled before you make the connection.
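The replica-promotion steps listed earlier in this section (disable Multi-AZ, disable automatic failover, promote, re-enable) map onto a sequence of `ModifyReplicationGroup` calls. The following is a minimal sketch, assuming `boto3`; the client is passed in so the sequence is easy to test, and the identifiers are illustrative:

```python
def promote_replica(client, group_id, new_primary_id):
    """Promote a read replica on a cluster-mode-disabled group.

    Follows the required order: disable Multi-AZ and automatic
    failover first, then promote, then re-enable both.
    """
    client.modify_replication_group(
        ReplicationGroupId=group_id,
        MultiAZEnabled=False,
        AutomaticFailoverEnabled=False,
        ApplyImmediately=True,
    )
    client.modify_replication_group(
        ReplicationGroupId=group_id,
        PrimaryClusterId=new_primary_id,  # the replica to promote
        ApplyImmediately=True,
    )
    client.modify_replication_group(
        ReplicationGroupId=group_id,
        MultiAZEnabled=True,
        AutomaticFailoverEnabled=True,
        ApplyImmediately=True,
    )
```

In practice, wait for each modification to reach the *available* status before issuing the next call; that waiting is omitted here for brevity.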

# How synchronization and backup are implemented
<a name="Replication.Redis.Versions"></a>

All supported versions of Valkey and Redis OSS support backup and synchronization between the primary and replica nodes. However, the way that backup and synchronization is implemented varies depending on the version.

## Redis OSS Version 2.8.22 and Later
<a name="Replication.Redis.Version2-8-22"></a>

In versions 2.8.22 and later, Redis OSS replication chooses between two methods: a forkless method and the fork-based method used in earlier versions. For more information, see [Redis OSS Versions Before 2.8.22](#Replication.Redis.Earlier2-8-22) and [Snapshot and restore](backups.md).

During the forkless process, if the write loads are heavy, writes to the cluster are delayed to ensure that you don't accumulate too many changes and thus prevent a successful snapshot. 

## Redis OSS Versions Before 2.8.22
<a name="Replication.Redis.Earlier2-8-22"></a>

Redis OSS backup and synchronization in versions before 2.8.22 is a three-step process.

1. Fork, and in the background process, serialize the cluster data to disk. This creates a point-in-time snapshot.

1. In the foreground, accumulate a change log in the *client output buffer*.
**Important**  
If the change log exceeds the *client output buffer* size, the backup or synchronization fails. For more information, see [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md).

1. Finally, transmit the cache data and then the change log to the replica node.
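Whether the change log in step 2 fits in the *client output buffer* is simple arithmetic: the buffer must hold every change written while the snapshot is being taken. A back-of-the-envelope check, with purely illustrative numbers (not ElastiCache defaults):

```python
def buffer_headroom(write_rate_mb_s, snapshot_seconds, buffer_mb):
    """Return the spare client-output-buffer space (MB) after a snapshot.

    A negative result means the accumulated change log would overflow
    the buffer, and the backup or synchronization would fail.
    """
    change_log_mb = write_rate_mb_s * snapshot_seconds
    return buffer_mb - change_log_mb

# e.g. 2 MB/s of writes during a 120 s snapshot accumulates 240 MB,
# so a 512 MB buffer leaves 272 MB of headroom.
```

If headroom is tight, reduce write load during the backup window or increase the buffer, as discussed in the linked best-practices topic.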

# Creating a Valkey or Redis OSS replication group
<a name="Replication.CreatingRepGroup"></a>

You have two options for creating a cluster with replica nodes. The first applies when you already have an available Valkey or Redis OSS (cluster mode disabled) cluster, not associated with any replication group, to use as the primary node. The second applies when you need to create the primary node and read replicas together. Currently, a Valkey or Redis OSS (cluster mode enabled) cluster must be created from scratch.

**Option 1: [Creating a replication group using an existing cluster](Replication.CreatingReplGroup.ExistingCluster.md)**  
Use this option to leverage an existing single-node Valkey or Redis OSS (cluster mode disabled) cluster. You specify this existing node as the primary node in the new cluster, and then individually add 1 to 5 read replicas to the cluster. If the existing cluster is active, read replicas synchronize with it as they are created. See [Creating a replication group using an existing cluster](Replication.CreatingReplGroup.ExistingCluster.md).  
You cannot create a Valkey or Redis OSS (cluster mode enabled) cluster using an existing cluster. To create a Valkey or Redis OSS (cluster mode enabled) cluster (API/CLI: replication group) using the ElastiCache console, see [Creating a Valkey or Redis OSS (cluster mode enabled) cluster (Console)](Clusters.Create.md#Clusters.Create.CON.RedisCluster).

**Option 2: [Creating a Valkey or Redis OSS replication group from scratch](Replication.CreatingReplGroup.NoExistingCluster.md)**  
Use this option if you don't already have an available Valkey or Redis OSS (cluster mode disabled) cluster to use as the cluster's primary node, or if you want to create a Valkey or Redis OSS (cluster mode enabled) cluster. See [Creating a Valkey or Redis OSS replication group from scratch](Replication.CreatingReplGroup.NoExistingCluster.md).

# Creating a replication group using an existing cluster
<a name="Replication.CreatingReplGroup.ExistingCluster"></a>

The following procedure adds a replication group to your Valkey or Redis OSS (cluster mode disabled) single-node cluster, which is necessary to upgrade your cluster to the latest version of Valkey. This is an in-place procedure that involves zero downtime and zero data loss. When you create a replication group for your single-node cluster, the cluster's node becomes the primary node in the new cluster. If you do not have a Valkey or Redis OSS (cluster mode disabled) cluster that you can use as the new cluster's primary, see [Creating a Valkey or Redis OSS replication group from scratch](Replication.CreatingReplGroup.NoExistingCluster.md).

An available cluster is an existing single-node Valkey or Redis OSS cluster. Currently, Valkey or Redis OSS (cluster mode enabled) does not support creating a cluster with replicas using an available single-node cluster. If you want to create a Valkey or Redis OSS (cluster mode enabled) cluster, see [Creating a Valkey or Redis OSS (Cluster Mode Enabled) cluster (Console)](Replication.CreatingReplGroup.NoExistingCluster.Cluster.md#Replication.CreatingReplGroup.NoExistingCluster.Cluster.CON).

## Creating a replication group using an existing cluster (Console)
<a name="Replication.CreatingReplGroup.ExistingCluster.CON"></a>

See the topic [Using the ElastiCache AWS Management Console](Clusters.AddNode.md#Clusters.AddNode.CON).

## Creating a replication group using an available Valkey or Redis OSS cluster (AWS CLI)
<a name="Replication.CreatingReplGroup.ExistingCluster.CLI"></a>

When you use an available Valkey or Redis OSS cache cluster as the primary, creating a replication group with read replicas using the AWS CLI takes two steps.

First, create the replication group with the `create-replication-group` command, specifying the available standalone node as the cluster's primary node (`--primary-cluster-id`) and the number of nodes you want in the cluster. Include the following parameters.

**--replication-group-id**  
The name of the replication group you are creating. The value of this parameter is used as the basis for the names of the added nodes with a sequential 3-digit number added to the end of the `--replication-group-id`. For example, `sample-repl-group-001`.  
Valkey or Redis OSS (cluster mode disabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.

**--replication-group-description**  
Description of the replication group.

**--num-cache-clusters**  
The total number of nodes you want in this cluster, including the primary node. This parameter has a maximum value of six (the primary plus up to five read replicas).

**--primary-cluster-id**  
The name of the available Valkey or Redis OSS (cluster mode disabled) cluster's node that you want to be the primary node in this replication group.

The following command creates the replication group `sample-repl-group` using the available Valkey or Redis OSS (cluster mode disabled) cluster `redis01` as the replication group's primary node. It creates two new nodes, which are read replicas. The settings of `redis01` (parameter group, security group, node type, engine version, and so on) are applied to all nodes in the replication group.

For Linux, macOS, or Unix:

```
aws elasticache create-replication-group \
   --replication-group-id sample-repl-group \
   --replication-group-description "demo cluster with replicas" \
   --num-cache-clusters 3 \
   --primary-cluster-id redis01
```

For Windows:

```
aws elasticache create-replication-group ^
   --replication-group-id sample-repl-group ^
   --replication-group-description "demo cluster with replicas" ^
   --num-cache-clusters 3 ^
   --primary-cluster-id redis01
```

For additional information and parameters you might want to use, see the AWS CLI topic [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html).

**Next, add read replicas to the replication group**  
After the replication group is created, add one to five read replicas to it using the `create-cache-cluster` command, being sure to include the following parameters. 

**--cache-cluster-id**  
The name of the cluster you are adding to the replication group.  
Cluster naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.


**--replication-group-id**  
The name of the replication group to which you are adding this cluster.

Repeat this command for each read replica you want to add to the replication group, changing only the value of the `--cache-cluster-id` parameter.

**Note**  
Remember, a replication group cannot have more than five read replicas. Attempting to add a read replica to a replication group that already has five read replicas causes the operation to fail.

The following code adds the read replica `my-replica01` to the replication group `sample-repl-group`. The settings of the primary cluster (parameter group, security group, node type, and so on) are applied to nodes as they are added to the replication group.

For Linux, macOS, or Unix:

```
aws elasticache create-cache-cluster \
   --cache-cluster-id my-replica01 \
   --replication-group-id sample-repl-group
```

For Windows:

```
aws elasticache create-cache-cluster ^
   --cache-cluster-id my-replica01 ^
   --replication-group-id sample-repl-group
```

Output from this command will look something like this.

```
{
    "ReplicationGroup": {
        "Status": "creating",
        "Description": "demo cluster with replicas",
        "ClusterEnabled": false,
        "ReplicationGroupId": "sample-repl-group",
        "SnapshotRetentionLimit": 1,
        "AutomaticFailover": "disabled",
        "SnapshotWindow": "00:00-01:00",
        "SnapshottingClusterId": "redis01",
        "MemberClusters": [
            "sample-repl-group-001",
            "sample-repl-group-002",
            "redis01"
        ],
        "CacheNodeType": "cache.m4.large",
        "DataTiering": "disabled",
        "PendingModifiedValues": {}
    }
}
```

For additional information, see the AWS CLI topics:
+ [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html)
+ [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html)
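The repeat-per-replica step above can be sketched as a loop. The replica names `my-replicaNN` are hypothetical, and the loop prints the commands rather than running them, so it needs no AWS credentials; it also guards against exceeding the five-replica limit noted earlier.

```shell
# Sketch: generate one create-cache-cluster command per new read replica,
# refusing to exceed the five-replica-per-group limit. Replica names are
# illustrative; the commands are printed, not executed.
group_id="sample-repl-group"
existing_replicas=2   # the group was created with --num-cache-clusters 3
new_replicas=2

if [ $((existing_replicas + new_replicas)) -gt 5 ]; then
  echo "error: a replication group cannot have more than five read replicas" >&2
  exit 1
fi

cmds=$(i=1; while [ "$i" -le "$new_replicas" ]; do
  printf 'aws elasticache create-cache-cluster --cache-cluster-id my-replica%02d --replication-group-id %s\n' "$i" "$group_id"
  i=$((i + 1))
done)
echo "$cmds"
```

Piping the printed commands to `sh` (with credentials configured) would run them one at a time, matching the "changing only `--cache-cluster-id`" guidance above.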

## Adding replicas to a standalone Valkey or Redis OSS (Cluster Mode Disabled) cluster (ElastiCache API)
<a name="Replication.CreatingReplGroup.ExistingCluster.API"></a>

When using the ElastiCache API, you create a replication group with the `CreateReplicationGroup` operation, specifying the available standalone node as the replication group's primary node (`PrimaryClusterId`) and the number of nodes you want in the replication group (`NumCacheClusters`). Include the following parameters.

**ReplicationGroupId**  
The name of the replication group you are creating. The value of this parameter is used as the basis for the names of the added nodes with a sequential 3-digit number added to the end of the `ReplicationGroupId`. For example, `sample-repl-group-001`.  
Valkey or Redis OSS (cluster mode disabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.

**ReplicationGroupDescription**  
Description of the cluster with replicas.

**NumCacheClusters**  
The number of nodes you want in this cluster. This value includes the primary node. This parameter has a maximum value of six.

**PrimaryClusterId**  
The name of the available Valkey or Redis OSS (cluster mode disabled) cluster that you want to be the primary node in this replication group.

The following command creates the replication group `sample-repl-group` using the available Valkey or Redis OSS (cluster mode disabled) cluster `redis01` as the replication group's primary node. It creates two new nodes, which are read replicas. The settings of `redis01` (parameter group, security group, node type, engine version, and so on) are applied to all nodes in the replication group.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=CreateReplicationGroup 
   &Engine=redis
   &EngineVersion=6.0
   &NumCacheClusters=3
   &ReplicationGroupDescription=Demo%20cluster%20with%20replicas
   &ReplicationGroupId=sample-repl-group
   &PrimaryClusterId=redis01
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

For additional information, see the ElastiCache API topics:
+ [CreateReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateReplicationGroup.html)
+ [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html)

**Next, add read replicas to the replication group**  
After the replication group is created, add one to five read replicas to it using the `CreateCacheCluster` operation, being sure to include the following parameters. 

**CacheClusterId**  
The name of the cluster you are adding to the replication group.  
Cluster naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.


**ReplicationGroupId**  
The name of the replication group to which you are adding this cluster.

Repeat this operation for each read replica you want to add to the replication group, changing only the value of the `CacheClusterId` parameter.

The following code adds the read replica `myReplica01` to the replication group `myReplGroup`. The settings of the primary cluster (parameter group, security group, node type, and so on) are applied to nodes as they are added to the replication group.

```
https://elasticache.us-west-2.amazonaws.com/
	?Action=CreateCacheCluster
	&CacheClusterId=myReplica01
	&ReplicationGroupId=myReplGroup
	&SignatureMethod=HmacSHA256
	&SignatureVersion=4
	&Version=2015-02-02
	&X-Amz-Algorithm=AWS4-HMAC-SHA256
	&X-Amz-Credential=[your-access-key-id]/20150202/us-west-2/elasticache/aws4_request
	&X-Amz-Date=20150202T170651Z
	&X-Amz-SignedHeaders=content-type;host;user-agent;x-amz-content-sha256;x-amz-date
	&X-Amz-Signature=[signature-value]
```

For additional information and parameters you might want to use, see the ElastiCache API topic [CreateCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateCacheCluster.html).

# Creating a Valkey or Redis OSS replication group from scratch
<a name="Replication.CreatingReplGroup.NoExistingCluster"></a>

Following, you can find how to create a Valkey or Redis OSS replication group without using an existing Valkey or Redis OSS cluster as the primary. You can create a Valkey or Redis OSS (cluster mode disabled) or Valkey or Redis OSS (cluster mode enabled) replication group from scratch using the ElastiCache console, the AWS CLI, or the ElastiCache API.

Before you continue, decide whether you want to create a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group. For guidance in deciding, see [Replication: Valkey and Redis OSS Cluster Mode Disabled vs. Enabled](Replication.Redis-RedisCluster.md).

**Topics**
+ [Creating a Valkey or Redis OSS (Cluster Mode Disabled) replication group from scratch](Replication.CreatingReplGroup.NoExistingCluster.Classic.md)
+ [Creating a replication group in Valkey or Redis OSS (Cluster Mode Enabled) from scratch](Replication.CreatingReplGroup.NoExistingCluster.Cluster.md)

# Creating a Valkey or Redis OSS (Cluster Mode Disabled) replication group from scratch
<a name="Replication.CreatingReplGroup.NoExistingCluster.Classic"></a>

You can create a Valkey or Redis OSS (cluster mode disabled) replication group from scratch using the ElastiCache console, the AWS CLI, or the ElastiCache API. A Valkey or Redis OSS (cluster mode disabled) replication group always has one node group, a primary cluster, and up to five read replicas. Valkey or Redis OSS (cluster mode disabled) replication groups don't support partitioning your data.

**Note**  
The node/shard limit can be increased to a maximum of 500 per cluster. To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and include the instance type in the request.

To create a Valkey or Redis OSS (cluster mode disabled) replication group from scratch, take one of the following approaches:

## Creating a Valkey or Redis OSS (Cluster Mode Disabled) replication group from scratch (AWS CLI)
<a name="Replication.CreatingReplGroup.NoExistingCluster.Classic.CLI"></a>

The following procedure creates a Valkey or Redis OSS (cluster mode disabled) replication group using the AWS CLI.

When you create a Valkey or Redis OSS (cluster mode disabled) replication group from scratch, you create the replication group and all its nodes with a single call to the AWS CLI `create-replication-group` command. Include the following parameters.

**--replication-group-id**  
The name of the replication group you are creating.  
Valkey or Redis OSS (cluster mode disabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.
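The naming constraints above can be expressed as a small check. This is an illustrative sketch only; the function name is ours and is not part of any AWS tooling.

```shell
# Sketch: validate a replication group (or cluster) name against the
# constraints listed above. Helper name is illustrative only.
validate_rg_name() {
  name=$1
  # Must begin with a letter
  case $name in
    [A-Za-z]*) ;;
    *) echo invalid; return ;;
  esac
  # Must contain 1-40 characters
  [ ${#name} -le 40 ] || { echo invalid; return; }
  case $name in
    *[!A-Za-z0-9-]*) echo invalid; return ;;   # only alphanumerics or hyphens
    *--*)            echo invalid; return ;;   # no two consecutive hyphens
    *-)              echo invalid; return ;;   # can't end with a hyphen
  esac
  echo valid
}

validate_rg_name sample-repl-group   # prints "valid"
validate_rg_name 1st-group           # prints "invalid" (must begin with a letter)
validate_rg_name bad--name           # prints "invalid" (consecutive hyphens)
```

Checking names up front avoids a round trip to the API for a request that would be rejected anyway.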

**--replication-group-description**  
Description of the replication group.

**--num-cache-clusters**  
The number of nodes you want created with this replication group, primary and read replicas combined.  
If you enable Multi-AZ (`--automatic-failover-enabled`), the value of `--num-cache-clusters` must be at least 2.

**--cache-node-type**  
The node type for each node in the replication group.  
ElastiCache supports a range of node types. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.  
For more information on performance details for each node type, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).

**--data-tiering-enabled**  
Set this parameter if you are using an r6gd node type. If you don't want data tiering, set `--no-data-tiering-enabled`. For more information, see [Data tiering in ElastiCache](data-tiering.md).

**--cache-parameter-group**  
Specify a parameter group that corresponds to your engine version. If you are running Redis OSS 3.2.4 or later, specify the `default.redis3.2` parameter group or a parameter group derived from `default.redis3.2` to create a Valkey or Redis OSS (cluster mode disabled) replication group. For more information, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).

**--network-type**  
Either `ipv4`, `ipv6`, or `dual-stack`. If you choose dual-stack, you must set the `--ip-discovery` parameter to either `ipv4` or `ipv6`.

**--engine**  
redis

**--engine-version**  
To have the richest set of features, choose the latest engine version.

The names of the nodes are derived from the replication group name by appending a sequential three-digit number (`-001` through `-006`) to the replication group name. For example, using the replication group name `myReplGroup`, the name of the primary is `myReplGroup-001` and the read replicas are `myReplGroup-002` through `myReplGroup-006`.
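That suffix scheme can be sketched in a few lines of pure string formatting (no AWS calls), which is handy when scripting follow-up commands against the member nodes:

```shell
# Sketch: derive member node names by appending a sequential three-digit
# suffix to the replication group name, as described above.
rg_name="myReplGroup"
num_nodes=6
node_names=$(i=1; while [ "$i" -le "$num_nodes" ]; do
  printf '%s-%03d\n' "$rg_name" "$i"
  i=$((i + 1))
done)
echo "$node_names"   # myReplGroup-001 through myReplGroup-006
```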

If you want to enable in-transit or at-rest encryption on this replication group, add the `--transit-encryption-enabled` parameter, the `--at-rest-encryption-enabled` parameter, or both, and meet the following conditions.
+ Your replication group must be running Redis OSS version 3.2.6 or 4.0.10.
+ The replication group must be created in an Amazon VPC.
+ You must also include the parameter `--cache-subnet-group`.
+ You must also include the parameter `--auth-token` with the customer specified string value for your AUTH token (password) needed to perform operations on this replication group.

The following operation creates a Valkey or Redis OSS (cluster mode disabled) replication group `sample-repl-group` with three nodes, a primary and two replicas.

For Linux, macOS, or Unix:

```
aws elasticache create-replication-group \
   --replication-group-id sample-repl-group \
   --replication-group-description "Demo cluster with replicas" \
   --num-cache-clusters 3 \
   --cache-node-type cache.m4.large \
   --engine redis
```

For Windows:

```
aws elasticache create-replication-group ^
   --replication-group-id sample-repl-group ^
   --replication-group-description "Demo cluster with replicas" ^
   --num-cache-clusters 3 ^
   --cache-node-type cache.m4.large ^
   --engine redis
```

Output from this command looks something like the following.

```
{
    "ReplicationGroup": {
        "Status": "creating",
        "Description": "Demo cluster with replicas",
        "ClusterEnabled": false,
        "ReplicationGroupId": "sample-repl-group",
        "SnapshotRetentionLimit": 0,
        "AutomaticFailover": "disabled",
        "SnapshotWindow": "01:30-02:30",
        "MemberClusters": [
            "sample-repl-group-001",
            "sample-repl-group-002",
            "sample-repl-group-003"
        ],
        "CacheNodeType": "cache.m4.large",
        "DataTiering": "disabled",
        "PendingModifiedValues": {}
    }
}
```

For additional information and parameters you might want to use, see the AWS CLI topic [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html).

## Creating a Valkey or Redis OSS (cluster mode disabled) replication group from scratch (ElastiCache API)
<a name="Replication.CreatingReplGroup.NoExistingCluster.Classic.API"></a>

The following procedure creates a Valkey or Redis OSS (cluster mode disabled) replication group using the ElastiCache API.

When you create a Valkey or Redis OSS (cluster mode disabled) replication group from scratch, you create the replication group and all its nodes with a single call to the ElastiCache API `CreateReplicationGroup` operation. Include the following parameters.

**ReplicationGroupId**  
The name of the replication group you are creating.  
Valkey or Redis OSS (cluster mode disabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.

**ReplicationGroupDescription**  
Your description of the replication group.

**NumCacheClusters**  
The total number of nodes you want created with this replication group, primary and read replicas combined.  
If you enable Multi-AZ (`AutomaticFailoverEnabled=true`), the value of `NumCacheClusters` must be at least 2.

**CacheNodeType**  
The node type for each node in the replication group.  
ElastiCache supports a range of node types. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.  
For more information on performance details for each node type, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).

**DataTieringEnabled**  
Set this parameter to `true` if you are using an r6gd node type; set it to `false` if you don't want data tiering. For more information, see [Data tiering in ElastiCache](data-tiering.md).

**CacheParameterGroup**  
Specify a parameter group that corresponds to your engine version. If you are running Redis OSS 3.2.4 or later, specify the `default.redis3.2` parameter group or a parameter group derived from `default.redis3.2` to create a Valkey or Redis OSS (cluster mode disabled) replication group. For more information, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).

**NetworkType**  
Either `ipv4`, `ipv6`, or `dual-stack`. If you choose dual-stack, you must set the `IpDiscovery` parameter to either `ipv4` or `ipv6`.

**Engine**  
redis

**EngineVersion**  
6.0

The names of the nodes are derived from the replication group name by appending a sequential three-digit number (`-001` through `-006`) to the replication group name. For example, using the replication group name `myReplGroup`, the name of the primary is `myReplGroup-001` and the read replicas are `myReplGroup-002` through `myReplGroup-006`.

If you want to enable in-transit or at-rest encryption on this replication group, add `TransitEncryptionEnabled=true`, `AtRestEncryptionEnabled=true`, or both, and meet the following conditions.
+ Your replication group must be running Redis OSS version 3.2.6 or 4.0.10.
+ The replication group must be created in an Amazon VPC.
+ You must also include the parameter `CacheSubnetGroup`.
+ You must also include the parameter `AuthToken` with the customer specified string value for your AUTH token (password) needed to perform operations on this replication group.

The following operation creates the Valkey or Redis OSS (cluster mode disabled) replication group `myReplGroup` with three nodes, a primary and two replicas.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=CreateReplicationGroup 
   &CacheNodeType=cache.m4.large
   &CacheParameterGroup=default.redis6.x
   &Engine=redis
   &EngineVersion=6.0
   &NumCacheClusters=3
   &ReplicationGroupDescription=test%20group
   &ReplicationGroupId=myReplGroup
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

For additional information and parameters you might want to use, see the ElastiCache API topic [CreateReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateReplicationGroup.html).

# Creating a replication group in Valkey or Redis OSS (Cluster Mode Enabled) from scratch
<a name="Replication.CreatingReplGroup.NoExistingCluster.Cluster"></a>

You can create a Valkey or Redis OSS (cluster mode enabled) cluster (API/CLI: *replication group*) using the ElastiCache console, the AWS CLI, or the ElastiCache API. A Valkey or Redis OSS (cluster mode enabled) replication group has from 1 to 500 shards (API/CLI: node groups), a primary node in each shard, and up to 5 read replicas in each shard. You can create a cluster with a higher number of shards and a lower number of replicas, totaling up to 90 nodes per cluster. This cluster configuration can range from 90 shards and 0 replicas to 15 shards and 5 replicas, which is the maximum number of replicas allowed.

The node or shard limit can be increased to a maximum of 500 per cluster if the Valkey or Redis OSS engine version is 5.0.6 or higher. For example, you can choose to configure a 500-node cluster that ranges between 83 shards (one primary and 5 replicas per shard) and 500 shards (single primary and no replicas). Make sure there are enough available IP addresses to accommodate the increase. Common pitfalls include subnets in the subnet group that have too small a CIDR range, or subnets that are shared with and heavily used by other clusters. For more information, see [Creating a subnet group](SubnetGroups.Creating.md).

 For versions below 5.0.6, the limit is 250 per cluster.

To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and choose the limit type **Nodes per cluster per instance type**. 
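The total node count for any layout is simply shards multiplied by (1 primary + replicas per shard). A quick sketch of that arithmetic, checked against the limits described above (the helper name is ours):

```shell
# Sketch: total node count for a proposed shard/replica layout, for
# comparison against the per-cluster limits (90 by default, 250 below
# engine version 5.0.6, 500 for 5.0.6 and higher after a limit increase).
total_nodes() {
  shards=$1
  replicas_per_shard=$2
  echo $(( shards * (replicas_per_shard + 1) ))
}

total_nodes 15 5    # 90: 15 shards x (1 primary + 5 replicas), the default limit
total_nodes 83 5    # 498: fits within the raised 500-node limit
total_nodes 500 0   # 500: maximum shard count, single-primary shards
```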

**Topics**
+ [Using the ElastiCache Console](#Replication.CreatingReplGroup.NoExistingCluster.Cluster.CON)
+ [Creating a Valkey or Redis OSS (Cluster Mode Enabled) replication group from scratch (AWS CLI)](#Replication.CreatingReplGroup.NoExistingCluster.Cluster.CLI)
+ [Creating a replication group in Valkey or Redis OSS (Cluster Mode Enabled) from scratch (ElastiCache API)](#Replication.CreatingReplGroup.NoExistingCluster.Cluster.API)

## Creating a Valkey or Redis OSS (Cluster Mode Enabled) cluster (Console)
<a name="Replication.CreatingReplGroup.NoExistingCluster.Cluster.CON"></a>

To create a Valkey or Redis OSS (cluster mode enabled) cluster, see [Creating a Valkey or Redis OSS (cluster mode enabled) cluster (Console)](Clusters.Create.md#Clusters.Create.CON.RedisCluster). Be sure to enable cluster mode, **Cluster Mode enabled (Scale Out)**, and specify at least two shards and one replica node in each.

## Creating a Valkey or Redis OSS (Cluster Mode Enabled) replication group from scratch (AWS CLI)
<a name="Replication.CreatingReplGroup.NoExistingCluster.Cluster.CLI"></a>

The following procedure creates a Valkey or Redis OSS (cluster mode enabled) replication group using the AWS CLI.

When you create a Valkey or Redis OSS (cluster mode enabled) replication group from scratch, you create the replication group and all its nodes with a single call to the AWS CLI `create-replication-group` command. Include the following parameters.

**--replication-group-id**  
The name of the replication group you are creating.  
Valkey or Redis OSS (cluster mode enabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.

**--replication-group-description**  
Description of the replication group.

**--cache-node-type**  
The node type for each node in the replication group.  
ElastiCache supports a range of node types. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.  
For more information on performance details for each node type, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).

**--data-tiering-enabled**  
Set this parameter if you are using an r6gd node type. If you don't want data tiering, set `--no-data-tiering-enabled`. For more information, see [Data tiering in ElastiCache](data-tiering.md).

**--cache-parameter-group**  
Specify the `default.redis6.x.cluster.on` parameter group or a parameter group derived from `default.redis6.x.cluster.on` to create a Valkey or Redis OSS (cluster mode enabled) replication group. For more information, see [Redis OSS 6.x parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.6-x).

**--engine**  
redis

**--engine-version**  
6.0

**--num-node-groups**  
The number of node groups in this replication group. Valid values are 1 to 500.  
The node/shard limit can be increased to a maximum of 500 per cluster. To request a limit increase, see [AWS Service Limits](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) and select the limit type **Nodes per cluster per instance type**. 

**--replicas-per-node-group**  
The number of replica nodes in each node group. Valid values are 0 to 5.

**--network-type**  
Either `ipv4`, `ipv6`, or `dual-stack`. If you choose dual-stack, you must set the `--ip-discovery` parameter to either `ipv4` or `ipv6`.

If you want to enable in-transit or at-rest encryption on this replication group, add the `--transit-encryption-enabled` parameter, the `--at-rest-encryption-enabled` parameter, or both, and meet the following conditions.
+ Your replication group must be running Redis OSS version 3.2.6 or 4.0.10.
+ The replication group must be created in an Amazon VPC.
+ You must also include the parameter `--cache-subnet-group`.
+ You must also include the parameter `--auth-token` with the customer specified string value for your AUTH token (password) needed to perform operations on this replication group.

The following operation creates the Valkey or Redis OSS (cluster mode enabled) replication group `sample-repl-group` with three node groups/shards (`--num-node-groups`), each with three nodes, a primary and two read replicas (`--replicas-per-node-group`).

For Linux, macOS, or Unix:

```
aws elasticache create-replication-group \
   --replication-group-id sample-repl-group \
   --replication-group-description "Demo cluster with replicas" \
   --num-node-groups 3 \
   --replicas-per-node-group 2 \
   --cache-node-type cache.m4.large \
   --engine redis \
   --security-group-ids SECURITY_GROUP_ID \
   --cache-subnet-group-name SUBNET_GROUP_NAME
```

For Windows:

```
aws elasticache create-replication-group ^
   --replication-group-id sample-repl-group ^
   --replication-group-description "Demo cluster with replicas" ^
   --num-node-groups 3 ^
   --replicas-per-node-group 2 ^
   --cache-node-type cache.m4.large ^
   --engine redis ^
   --security-group-ids SECURITY_GROUP_ID ^
   --cache-subnet-group-name SUBNET_GROUP_NAME
```

The preceding command generates the following output.

```
{
    "ReplicationGroup": {
        "Status": "creating", 
        "Description": "Demo cluster with replicas", 
        "ReplicationGroupId": "sample-repl-group", 
        "SnapshotRetentionLimit": 0, 
        "AutomaticFailover": "enabled", 
        "SnapshotWindow": "05:30-06:30", 
        "MemberClusters": [
            "sample-repl-group-0001-001", 
            "sample-repl-group-0001-002", 
            "sample-repl-group-0001-003", 
            "sample-repl-group-0002-001", 
            "sample-repl-group-0002-002", 
            "sample-repl-group-0002-003", 
            "sample-repl-group-0003-001", 
            "sample-repl-group-0003-002", 
            "sample-repl-group-0003-003"
        ], 
        "PendingModifiedValues": {}
    }
}
```

When you create a Valkey or Redis OSS (cluster mode enabled) replication group from scratch, you can configure each shard in the cluster using the `--node-group-configuration` parameter, as shown in the following example, which configures two node groups (Console: shards). The first shard has two nodes, a primary and one read replica. The second shard has three nodes, a primary and two read replicas.

**--node-group-configuration**  
The configuration for each node group. The `--node-group-configuration` parameter consists of the following fields.  
+ `PrimaryAvailabilityZone` – The Availability Zone where the primary node of this node group is located. If this parameter is omitted, ElastiCache chooses the Availability Zone for the primary node.

  **Example:** us-west-2a.
+ `ReplicaAvailabilityZones` – A comma-separated list of Availability Zones where the read replicas are located. The number of Availability Zones in this list must match the value of `ReplicaCount`. If this parameter is omitted, ElastiCache chooses the Availability Zones for the replica nodes.

  **Example:** "us-west-2a,us-west-2b,us-west-2c"
+ `ReplicaCount` – The number of replica nodes in this node group.
+ `Slots` – A string that specifies the keyspace for the node group. The string is in the format `startKey-endKey`. If this parameter is omitted, ElastiCache allocates keys equally among the node groups.

  **Example:** "0-4999"
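The equal-allocation behavior you get when `Slots` is omitted can be approximated with simple integer arithmetic over the 16,384-slot keyspace (the keyspace size is fixed by the engine; the helper name below is ours):

```shell
# Sketch: compute equal Slots ranges over the 16,384-slot keyspace for a
# given number of shards, roughly what ElastiCache does when the Slots
# field is omitted.
slot_ranges() {
  shards=$1
  total=16384
  start=0
  i=1
  while [ "$i" -le "$shards" ]; do
    end=$(( total * i / shards - 1 ))
    printf '%d-%d\n' "$start" "$end"
    start=$((end + 1))
    i=$((i + 1))
  done
}

slot_ranges 2   # 0-8191 and 8192-16383
```

Computing the ranges this way makes it easy to build a `--node-group-configuration` string per shard; the example below instead assigns uneven ranges (`0-8999` and `9000-16383`) deliberately.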

   

The following operation creates the Valkey or Redis OSS (cluster mode enabled) replication group `new-group` with two node groups/shards (`--num-node-groups`). Unlike the preceding example, each node group is configured differently from the other node group (`--node-group-configuration`).

For Linux, macOS, or Unix:

```
aws elasticache create-replication-group \
  --replication-group-id new-group \
  --replication-group-description "Sharded replication group" \
  --engine redis \
  --snapshot-retention-limit 8 \
  --cache-node-type cache.m4.medium \
  --num-node-groups 2 \
  --node-group-configuration \
      "ReplicaCount=1,Slots=0-8999,PrimaryAvailabilityZone='us-east-1c',ReplicaAvailabilityZones='us-east-1b'" \
      "ReplicaCount=2,Slots=9000-16383,PrimaryAvailabilityZone='us-east-1a',ReplicaAvailabilityZones='us-east-1a','us-east-1c'"
```

For Windows:

```
aws elasticache create-replication-group ^
  --replication-group-id new-group ^
  --replication-group-description "Sharded replication group" ^
  --engine redis ^
  --snapshot-retention-limit 8 ^
  --cache-node-type cache.m4.medium ^
  --num-node-groups 2 ^
  --node-group-configuration ^
      "ReplicaCount=1,Slots=0-8999,PrimaryAvailabilityZone='us-east-1c',ReplicaAvailabilityZones='us-east-1b'" ^
      "ReplicaCount=2,Slots=9000-16383,PrimaryAvailabilityZone='us-east-1a',ReplicaAvailabilityZones='us-east-1a','us-east-1c'"
```

The preceding operation generates the following output.

```
{
    "ReplicationGroup": {
        "Status": "creating", 
        "Description": "Sharded replication group", 
        "ReplicationGroupId": "rc-rg", 
        "SnapshotRetentionLimit": 8, 
        "AutomaticFailover": "enabled", 
        "SnapshotWindow": "10:00-11:00", 
        "MemberClusters": [
            "rc-rg-0001-001", 
            "rc-rg-0001-002", 
            "rc-rg-0002-001", 
            "rc-rg-0002-002", 
            "rc-rg-0002-003"
        ], 
        "PendingModifiedValues": {}
    }
}
```

For additional information and parameters you might want to use, see the AWS CLI topic [create-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-replication-group.html).

## Creating a replication group in Valkey or Redis OSS (Cluster Mode Enabled) from scratch (ElastiCache API)
<a name="Replication.CreatingReplGroup.NoExistingCluster.Cluster.API"></a>

The following procedure creates a Valkey or Redis OSS (cluster mode enabled) replication group using the ElastiCache API.

When you create a Valkey or Redis OSS (cluster mode enabled) replication group from scratch, you create the replication group and all its nodes with a single call to the ElastiCache API `CreateReplicationGroup` operation. Include the following parameters.

**ReplicationGroupId**  
The name of the replication group you are creating.  
Valkey or Redis OSS (cluster mode enabled) replication group naming constraints are as follows:  
+ Must contain 1–40 alphanumeric characters or hyphens.
+ Must begin with a letter.
+ Can't contain two consecutive hyphens.
+ Can't end with a hyphen.

**ReplicationGroupDescription**  
Description of the replication group.

**NumNodeGroups**  
The number of node groups you want created with this replication group. Valid values are 1 to 500.

**ReplicasPerNodeGroup**  
The number of replica nodes in each node group. Valid values are 0 to 5.

**NodeGroupConfiguration**  
The configuration for each node group. The `NodeGroupConfiguration` parameter consists of the following fields.  
+ `PrimaryAvailabilityZone` – The Availability Zone where the primary node of this node group is located. If this parameter is omitted, ElastiCache chooses the Availability Zone for the primary node.

  **Example:** us-west-2a.
+ `ReplicaAvailabilityZones` – A list of Availability Zones where the read replicas are located. The number of Availability Zones in this list must match the value of `ReplicaCount`. If this parameter is omitted, ElastiCache chooses the Availability Zones for the replica nodes.
+ `ReplicaCount` – The number of replica nodes in this node group.
+ `Slots` – A string that specifies the keyspace for the node group. The string is in the format `startKey-endKey`. If this parameter is omitted, ElastiCache allocates keys equally among the node groups.

  **Example:** "0-4999"

   

**CacheNodeType**  
The node type for each node in the replication group.  
ElastiCache supports a range of node types. Generally speaking, the current generation types provide more memory and computational power at lower cost when compared to their equivalent previous generation counterparts.  
For more information on performance details for each node type, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).

**DataTieringEnabled**  
Set this parameter to `true` if you are using an r6gd node type; set it to `false` if you don't want data tiering. For more information, see [Data tiering in ElastiCache](data-tiering.md).

**CacheParameterGroup**  
Specify the `default.redis6.x.cluster.on` parameter group or a parameter group derived from `default.redis6.x.cluster.on` to create a Valkey or Redis OSS (cluster mode enabled) replication group. For more information, see [Redis OSS 6.x parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.6-x).

**--network-type**  
Either `ipv4`, `ipv6`, or `dual-stack`. If you choose `dual-stack`, you must set the `--ip-discovery` parameter to either `ipv4` or `ipv6`.

**Engine**  
redis

**EngineVersion**  
6.0

If you want to enable in-transit or at-rest encryption on this replication group, add either or both of the `TransitEncryptionEnabled=true` or `AtRestEncryptionEnabled=true` parameters and meet the following conditions.
+ Your replication group must be running Redis OSS version 3.2.6 or 4.0.10.
+ The replication group must be created in an Amazon VPC.
+ You must also include the parameter `CacheSubnetGroup`.
+ You must also include the parameter `AuthToken` with the customer specified string value for your AUTH token (password) needed to perform operations on this replication group.

Line breaks are added for ease of reading.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=CreateReplicationGroup 
   &CacheNodeType=cache.m4.large
   &CacheParameterGroupName=default.redis6.x.cluster.on
   &Engine=redis
   &EngineVersion=6.0
   &NumNodeGroups=3
   &ReplicasPerNodeGroup=2
   &ReplicationGroupDescription=test%20group
   &ReplicationGroupId=myReplGroup
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```
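The same request can be expressed through an AWS SDK. The following is a sketch using boto3 (assumed to be installed); the client call is commented out so that running the sketch does not create any resources, and the parameter names mirror the query string above.

```python
# Parameters mirroring the CreateReplicationGroup query shown above.
params = {
    "ReplicationGroupId": "myReplGroup",
    "ReplicationGroupDescription": "test group",
    "Engine": "redis",
    "EngineVersion": "6.0",
    "CacheNodeType": "cache.m4.large",
    "CacheParameterGroupName": "default.redis6.x.cluster.on",
    "NumNodeGroups": 3,
    "ReplicasPerNodeGroup": 2,
}

# Uncomment to actually create the replication group (requires AWS credentials):
# import boto3
# client = boto3.client("elasticache", region_name="us-west-2")
# response = client.create_replication_group(**params)

print(sorted(params))
```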

For additional information and parameters you might want to use, see the ElastiCache API topic [CreateReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateReplicationGroup.html).

# Viewing a replication group's details
<a name="Replication.ViewDetails"></a>

There are times you may want to view the details of a replication group. You can use the ElastiCache console, the AWS CLI for ElastiCache, or the ElastiCache API. The console process is different for Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled).

**Contents**
+ [Viewing a Valkey or Redis OSS (Cluster Mode Disabled) with replicas](Replication.ViewDetails.Redis.md)
  + [Using the ElastiCache Console](Replication.ViewDetails.Redis.md#Replication.ViewDetails.Redis.CON)
  + [Using the AWS CLI](Replication.ViewDetails.Redis.md#Replication.ViewDetails.Redis.CLI)
  + [Using the ElastiCache API](Replication.ViewDetails.Redis.md#Replication.ViewDetails.Redis.API)
+ [Viewing a replication group: Valkey or Redis OSS (Cluster Mode Enabled)](Replication.ViewDetails.RedisCluster.md)
  + [Using the ElastiCache Console](Replication.ViewDetails.RedisCluster.md#Replication.ViewDetails.RedisCluster.CON)
  + [Using the AWS CLI](Replication.ViewDetails.RedisCluster.md#Replication.ViewDetails.RedisCluster.CLI)
  + [Using the ElastiCache API](Replication.ViewDetails.RedisCluster.md#Replication.ViewDetails.RedisCluster.API)
+ [Viewing a replication group's details (AWS CLI)](Replication.ViewDetails.CLI.md)
+ [Viewing a replication group's details (ElastiCache API)](Replication.ViewDetails.API.md)

# Viewing a Valkey or Redis OSS (Cluster Mode Disabled) with replicas
<a name="Replication.ViewDetails.Redis"></a>

You can view the details of a Valkey or Redis OSS (cluster mode disabled) cluster with replicas (API/CLI: *replication group*) using the ElastiCache console, the AWS CLI for ElastiCache, or the ElastiCache API.

**Contents**
+ [Using the ElastiCache Console](#Replication.ViewDetails.Redis.CON)
+ [Using the AWS CLI](#Replication.ViewDetails.Redis.CLI)
+ [Using the ElastiCache API](#Replication.ViewDetails.Redis.API)

## Viewing a Valkey or Redis OSS (Cluster Mode Disabled) Replication Group (Console)
<a name="Replication.ViewDetails.Redis.CON"></a>

To view the details of a Valkey or Redis OSS (cluster mode disabled) cluster with replicas using the ElastiCache console, see the topic [Viewing Valkey or Redis OSS (Cluster Mode Disabled) details (Console)](Clusters.ViewDetails.md#Clusters.ViewDetails.CON.Redis).

## Viewing a Valkey or Redis OSS (Cluster Mode Disabled) replication group (AWS CLI)
<a name="Replication.ViewDetails.Redis.CLI"></a>

For an AWS CLI example that displays a Valkey or Redis OSS (cluster mode disabled) replication group's details, see [Viewing a replication group's details (AWS CLI)](Replication.ViewDetails.CLI.md).

## Viewing a Valkey or Redis OSS (Cluster Mode Disabled) Replication Group (ElastiCache API)
<a name="Replication.ViewDetails.Redis.API"></a>

For an ElastiCache API example that displays a Valkey or Redis OSS (cluster mode disabled) replication group's details, see [Viewing a replication group's details (ElastiCache API)](Replication.ViewDetails.API.md).

# Viewing a replication group: Valkey or Redis OSS (Cluster Mode Enabled)
<a name="Replication.ViewDetails.RedisCluster"></a>

## Viewing a Valkey or Redis OSS (Cluster Mode Enabled) cluster (Console)
<a name="Replication.ViewDetails.RedisCluster.CON"></a>

To view the details of a Valkey or Redis OSS (cluster mode enabled) cluster using the ElastiCache console, see [Viewing details for a Valkey or Redis OSS (Cluster Mode Enabled) cluster (Console)](Clusters.ViewDetails.md#Clusters.ViewDetails.CON.RedisCluster).

## Viewing a Valkey or Redis OSS (Cluster Mode Enabled) cluster (AWS CLI)
<a name="Replication.ViewDetails.RedisCluster.CLI"></a>

For an ElastiCache CLI example that displays a Valkey or Redis OSS (cluster mode enabled) replication group's details, see [Viewing a replication group's details (AWS CLI)](Replication.ViewDetails.CLI.md).

## Viewing a Valkey or Redis OSS (Cluster Mode Enabled) Cluster (ElastiCache API)
<a name="Replication.ViewDetails.RedisCluster.API"></a>

For an ElastiCache API example that displays a Valkey or Redis OSS (cluster mode enabled) replication group's details, see [Viewing a replication group's details (ElastiCache API)](Replication.ViewDetails.API.md).

# Viewing a replication group's details (AWS CLI)
<a name="Replication.ViewDetails.CLI"></a>

You can view the details for a replication group using the AWS CLI `describe-replication-groups` command. Use the following optional parameters to refine the listing. Omitting the parameters returns the details for up to 100 replication groups.

**Optional Parameters**
+ `--replication-group-id` – Use this parameter to list the details of a specific replication group. If the specified replication group has more than one node group, results are returned grouped by node group.
+ `--max-items` – Use this parameter to limit the number of replication groups listed. The value of `--max-items` cannot be less than 20 or greater than 100.

**Example**  
The following code lists the details for up to 100 replication groups.  

```
aws elasticache describe-replication-groups
```
The following code lists the details for `sample-repl-group`.  

```
aws elasticache describe-replication-groups --replication-group-id sample-repl-group
```
The following code lists the details for up to 25 replication groups.  

```
aws elasticache describe-replication-groups --max-items 25
```
Output from this operation should look something like this (JSON format).  

```
{
   "ReplicationGroups": [
     {
       "Status": "available", 
       "Description": "test", 
       "NodeGroups": [
         {
            "Status": "available", 
               "NodeGroupMembers": [
                  {
                     "CurrentRole": "primary", 
                     "PreferredAvailabilityZone": "us-west-2a", 
                     "CacheNodeId": "0001", 
                     "ReadEndpoint": {
                        "Port": 6379, 
                        "Address": "rg-name-001.1abc4d.0001.usw2.cache.amazonaws.com"
                     }, 
                     "CacheClusterId": "rg-name-001"
                  }, 
                  {
                     "CurrentRole": "replica", 
                     "PreferredAvailabilityZone": "us-west-2b", 
                     "CacheNodeId": "0001", 
                     "ReadEndpoint": {
                        "Port": 6379, 
                        "Address": "rg-name-002.1abc4d.0001.usw2.cache.amazonaws.com"
                     }, 
                     "CacheClusterId": "rg-name-002"
                  }, 
                  {
                     "CurrentRole": "replica", 
                     "PreferredAvailabilityZone": "us-west-2c", 
                     "CacheNodeId": "0001", 
                     "ReadEndpoint": {
                        "Port": 6379, 
                        "Address": "rg-name-003.1abc4d.0001.usw2.cache.amazonaws.com"
                     }, 
                     "CacheClusterId": "rg-name-003"
                  }
               ], 
               "NodeGroupId": "0001", 
               "PrimaryEndpoint": {
                  "Port": 6379, 
                  "Address": "rg-name.1abc4d.ng.0001.usw2.cache.amazonaws.com"
               }
            }
         ], 
         "ReplicationGroupId": "rg-name", 
         "AutomaticFailover": "enabled", 
         "SnapshottingClusterId": "rg-name-002", 
         "MemberClusters": [
            "rg-name-001", 
            "rg-name-002", 
            "rg-name-003"
         ], 
         "PendingModifiedValues": {}
      }, 
      {
      ... some output omitted for brevity
      }
   ]
}
```

For more information, see the AWS CLI for ElastiCache topic [describe-replication-groups](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-replication-groups.html).

# Viewing a replication group's details (ElastiCache API)
<a name="Replication.ViewDetails.API"></a>

You can view the details for a replication group using the ElastiCache API `DescribeReplicationGroups` operation. Use the following optional parameters to refine the listing. Omitting the parameters returns the details for up to 100 replication groups.

**Optional Parameters**
+ `ReplicationGroupId` – Use this parameter to list the details of a specific replication group. If the specified replication group has more than one node group, results are returned grouped by node group.
+ `MaxRecords` – Use this parameter to limit the number of replication groups listed. The value of `MaxRecords` cannot be less than 20 or greater than 100. The default is 100.

**Example**  
The following code lists the details for up to 100 replication groups.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeReplicationGroups
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```
The following code lists the details for `myReplGroup`.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeReplicationGroups
   &ReplicationGroupId=myReplGroup
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```
The following code lists the details for up to 25 replication groups.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeReplicationGroups
   &MaxRecords=25
   &Version=2015-02-02
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

For more information, see the ElastiCache API reference topic [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html).

# Finding replication group endpoints
<a name="Replication.Endpoints"></a>

An application can connect to any node in a replication group, provided that it has the DNS endpoint and port number for that node. Depending upon whether you are running a Valkey or Redis OSS (cluster mode disabled) or a Valkey or Redis OSS (cluster mode enabled) replication group, you will be interested in different endpoints.

**Valkey or Redis OSS (cluster mode disabled)**  
Valkey or Redis OSS (cluster mode disabled) clusters with replicas have three types of endpoints; the *primary endpoint*, the *reader endpoint* and the *node endpoints*. The primary endpoint is a DNS name that always resolves to the primary node in the cluster. The primary endpoint is immune to changes to your cluster, such as promoting a read replica to the primary role. For write activity, we recommend that your applications connect to the primary endpoint.

A reader endpoint will evenly split incoming connections to the endpoint between all read replicas in an ElastiCache cluster. Additional factors such as when the application creates the connections or how the application (re)-uses the connections will determine the traffic distribution. Reader endpoints keep up with cluster changes in real-time as replicas are added or removed. You can place your ElastiCache for Redis OSS cluster’s multiple read replicas in different AWS Availability Zones (AZ) to ensure high availability of reader endpoints. 

**Note**  
A reader endpoint is not a load balancer. It is a DNS record that will resolve to an IP address of one of the replica nodes in a round robin fashion.

For read activity, applications can also connect to any node in the cluster. Unlike the primary endpoint, node endpoints resolve to specific endpoints. If you make a change in your cluster, such as adding or deleting a replica, you must update the node endpoints in your application.

**Valkey or Redis OSS (cluster mode enabled)**  
Valkey or Redis OSS (cluster mode enabled) clusters with replicas have a different endpoint structure than Valkey or Redis OSS (cluster mode disabled) clusters, because they have multiple shards (API/CLI: node groups) and therefore multiple primary nodes. A Valkey or Redis OSS (cluster mode enabled) cluster has a *configuration endpoint* that "knows" all the primary and node endpoints in the cluster. Your application connects to the configuration endpoint. Whenever your application writes to or reads from the cluster's configuration endpoint, Valkey and Redis OSS determine, behind the scenes, which shard the key belongs to and which endpoint in that shard to use. This is all transparent to your application.
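The shard a key belongs to is determined by the hash-slot rule from the Redis OSS and Valkey cluster specification: `CRC16(key) mod 16384`, where only the substring inside a `{...}` hash tag is hashed if one is present. A minimal sketch of that calculation:

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XModem), the variant the cluster specification uses.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # Honor hash tags: only the substring between the first '{' and the
    # next '}' is hashed, so related keys can be forced onto one slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

print(hash_slot("user:1000"))  # a slot in 0..16383
print(hash_slot("{user:1000}.profile") == hash_slot("{user:1000}.cart"))  # True
```

Because the configuration endpoint handles this routing for you, your application normally never needs to compute slots itself; the sketch only shows why keys sharing a hash tag land on the same shard.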

You can find the endpoints for a cluster using the ElastiCache console, the AWS CLI, or the ElastiCache API.

**Finding Replication Group Endpoints**

To find the endpoints for your replication group, see one of the following topics:
+ [Finding a Valkey or Redis OSS (Cluster Mode Disabled) Cluster's Endpoints (Console)](Endpoints.md#Endpoints.Find.Redis)
+ [Finding Endpoints for a Valkey or Redis OSS (Cluster Mode Enabled) Cluster (Console)](Endpoints.md#Endpoints.Find.RedisCluster)
+ [Finding the Endpoints for Valkey or Redis OSS Replication Groups (AWS CLI)](Endpoints.md#Endpoints.Find.CLI.ReplGroups)
+ [Finding Endpoints for Valkey or Redis OSS Replication Groups (ElastiCache API)](Endpoints.md#Endpoints.Find.API.ReplGroups)

# Modifying a replication group
<a name="Replication.Modify"></a>

**Important Constraints**  
Currently, ElastiCache supports limited modifications of a Valkey or Redis OSS (cluster mode enabled) replication group, for example changing the engine version, using the API operation `ModifyReplicationGroup` (CLI: `modify-replication-group`). You can modify the number of shards (node groups) in a Valkey or Redis OSS (cluster mode enabled) cluster with the API operation [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroupShardConfiguration.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroupShardConfiguration.html) (CLI: [https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group-shard-configuration.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group-shard-configuration.html)). For more information, see [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md).  
Other modifications to a Valkey or Redis OSS (cluster mode enabled) cluster require that you create a new cluster that incorporates the changes.
You can upgrade Valkey or Redis OSS (cluster mode disabled) and Valkey or Redis OSS (cluster mode enabled) clusters and replication groups to newer engine versions. However, you can't downgrade to earlier engine versions except by deleting the existing cluster or replication group and creating it again. For more information, see [Version Management for ElastiCache](VersionManagement.md).
You can upgrade an existing ElastiCache for Valkey or Redis OSS cluster that uses cluster mode disabled to use cluster mode enabled, using the console, [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) API or the [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) CLI command, as shown in the example below. Or you can follow the steps at [Modifying cluster mode](modify-cluster-mode.md).

You can modify a Valkey or Redis OSS (cluster mode disabled) cluster's settings using the ElastiCache console, the AWS CLI, or the ElastiCache API. Currently, ElastiCache supports a limited number of modifications on a Valkey or Redis OSS (cluster mode enabled) replication group. Other modifications require that you create a backup of the current replication group and then use that backup to seed a new Valkey or Redis OSS (cluster mode enabled) replication group.

**Topics**
+ [Using the AWS Management Console](#Replication.Modify.CON)
+ [Using the AWS CLI](#Replication.Modify.CLI)
+ [Using the ElastiCache API](#Replication.Modify.API)

## Using the AWS Management Console
<a name="Replication.Modify.CON"></a>

To modify a Valkey or Redis OSS (cluster mode disabled) cluster, see [Modifying an ElastiCache cluster](Clusters.Modify.md).

## Using the AWS CLI
<a name="Replication.Modify.CLI"></a>

The following are AWS CLI examples of the `modify-replication-group` command. You can use the same command to make other modifications to a replication group.

**Enable Multi-AZ on an existing Valkey or Redis OSS replication group:**

For Linux, macOS, or Unix:

```
aws elasticache modify-replication-group \
   --replication-group-id myReplGroup \
   --multi-az-enabled
```

For Windows:

```
aws elasticache modify-replication-group ^
   --replication-group-id myReplGroup ^
   --multi-az-enabled
```

**Modify cluster mode from disabled to enabled:**

To modify cluster mode from *disabled* to *enabled*, you must first set the cluster mode to *compatible*. Compatible mode allows your Valkey or Redis OSS clients to connect using both cluster mode enabled and cluster mode disabled. After you migrate all Valkey or Redis OSS clients to use cluster mode enabled, you can then complete cluster mode configuration and set the cluster mode to *enabled*.

For Linux, macOS, or Unix:

Set the cluster mode to *compatible*.

```
aws elasticache modify-replication-group \
   --replication-group-id myReplGroup \
   --cache-parameter-group-name myParameterGroupName \
   --cluster-mode compatible
```

Set the cluster mode to *enabled*.

```
aws elasticache modify-replication-group \
   --replication-group-id myReplGroup \
   --cluster-mode enabled
```

For Windows:

Set the cluster mode to *compatible*.

```
aws elasticache modify-replication-group ^
   --replication-group-id myReplGroup ^
   --cache-parameter-group-name myParameterGroupName ^
   --cluster-mode compatible
```

Set the cluster mode to *enabled*.

```
aws elasticache modify-replication-group ^
   --replication-group-id myReplGroup ^
   --cluster-mode enabled
```

For more information on the AWS CLI `modify-replication-group` command, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) or [Modifying cluster mode]() in the *ElastiCache for Redis OSS User Guide*.

## Using the ElastiCache API
<a name="Replication.Modify.API"></a>

The following ElastiCache API operation enables Multi-AZ on an existing Valkey or Redis OSS replication group. You can use the same operation to make other modifications to a replication group.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ModifyReplicationGroup
   &AutomaticFailoverEnabled=true  
   &MultiAZEnabled=true
   &ReplicationGroupId=myReplGroup
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20141201T220302Z
   &Version=2014-12-01
   &X-Amz-Algorithm=AWS4-HMAC-SHA256
   &X-Amz-Date=20141201T220302Z
   &X-Amz-SignedHeaders=Host
   &X-Amz-Expires=20141201T220302Z
   &X-Amz-Credential=<credential>
   &X-Amz-Signature=<signature>
```

For more information on the ElastiCache API `ModifyReplicationGroup` operation, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html).

# Deleting a replication group
<a name="Replication.DeletingRepGroup"></a>

If you no longer need one of your clusters with replicas (called *replication groups* in the API/CLI), you can delete it. When you delete a replication group, ElastiCache deletes all of the nodes in that group.

After you have begun this operation, it cannot be interrupted or canceled.

**Warning**  
When you delete an ElastiCache for Redis OSS cluster, your manual snapshots are retained. You will also have an option to create a final snapshot before the cluster is deleted. Automatic cache snapshots are not retained.
`CreateSnapshot` permission is required to create a final snapshot. Without this permission, the API call will fail with an `Access Denied` exception.

## Deleting a Replication Group (Console)
<a name="Replication.DeletingRepGroup.CON"></a>

To delete a cluster that has replicas, see [Deleting a cluster in ElastiCache](Clusters.Delete.md).

## Deleting a Replication Group (AWS CLI)
<a name="Replication.DeletingRepGroup.CLI"></a>

Use the command [delete-replication-group](https://docs.aws.amazon.com/AmazonElastiCache/latest/CommandLineReference/CLIReference-cmd-DeleteReplicationGroup.html) to delete a replication group.

```
aws elasticache delete-replication-group --replication-group-id my-repgroup 
```

A prompt asks you to confirm your decision. Enter *y* (yes) to start the operation immediately. After the process starts, it is irreversible.

```
   After you begin deleting this replication group, all of its nodes will be deleted as well.
   Are you sure you want to delete this replication group? [Ny]y

REPLICATIONGROUP  my-repgroup  My replication group  deleting
```
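If you manage clusters from an AWS SDK instead of the CLI, the same delete can be issued through boto3 (assumed to be installed). The sketch below keeps the client call commented out so running it deletes nothing; `FinalSnapshotIdentifier` requests the final snapshot described in the warning above, and `RetainPrimaryCluster` controls whether only the replicas are removed.

```python
# Parameters for DeleteReplicationGroup, mirroring the CLI example above.
params = {
    "ReplicationGroupId": "my-repgroup",
    # Take a final snapshot before the group is deleted.
    "FinalSnapshotIdentifier": "my-repgroup-final",
    # False deletes the whole group; True keeps the primary cluster
    # and deletes only the read replicas.
    "RetainPrimaryCluster": False,
}

# Uncomment to actually delete the replication group (irreversible once started):
# import boto3
# client = boto3.client("elasticache", region_name="us-west-2")
# response = client.delete_replication_group(**params)

print(sorted(params))
```

The snapshot name `my-repgroup-final` is an illustrative choice, not a required value.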

## Deleting a replication group (ElastiCache API)
<a name="Replication.DeletingRepGroup.API"></a>

Call [DeleteReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DeleteReplicationGroup.html) with the `ReplicationGroupId` parameter. 

**Example**  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DeleteReplicationGroup
   &ReplicationGroupId=my-repgroup
   &Version=2014-12-01
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20141201T220302Z
   &X-Amz-Algorithm=AWS4-HMAC-SHA256
   &X-Amz-Date=20141201T220302Z
   &X-Amz-SignedHeaders=Host
   &X-Amz-Expires=20141201T220302Z
   &X-Amz-Credential=<credential>
   &X-Amz-Signature=<signature>
```

**Note**  
If you set the `RetainPrimaryCluster` parameter to `true`, all of the read replicas will be deleted, but the primary cluster will be retained.

# Changing the number of replicas
<a name="increase-decrease-replica-count"></a>

You can dynamically increase or decrease the number of read replicas in your Valkey or Redis OSS replication group using the AWS Management Console, the AWS CLI, or the ElastiCache API. If your replication group is a Valkey or Redis OSS (cluster mode enabled) replication group, you can choose which shards (node groups) to increase or decrease the number of replicas.

To dynamically change the number of replicas in your replication group, choose the operation from the following table that fits your situation.


| To Do This | For Valkey or Redis OSS (cluster mode enabled) | For Valkey or Redis OSS (cluster mode disabled) | 
| --- | --- | --- | 
|  Add replicas  |  [Increasing the number of replicas in a shard](increase-replica-count.md)  |  [Increasing the number of replicas in a shard](increase-replica-count.md) [Adding a read replica for Valkey or Redis OSS (Cluster Mode Disabled)](Replication.AddReadReplica.md)  | 
|  Delete replicas  |  [Decreasing the number of replicas in a shard](decrease-replica-count.md)  |  [Decreasing the number of replicas in a shard](decrease-replica-count.md) [Deleting a read replica for Valkey or Redis OSS (Cluster Mode Disabled)](Replication.RemoveReadReplica.md)  | 

# Increasing the number of replicas in a shard
<a name="increase-replica-count"></a>

You can increase the number of replicas in a Valkey or Redis OSS (cluster mode enabled) shard or Valkey or Redis OSS (cluster mode disabled) replication group up to a maximum of five. You can do so using the AWS Management Console, the AWS CLI, or the ElastiCache API.

**Topics**
+ [Using the AWS Management Console](#increase-replica-count-con)
+ [Using the AWS CLI](#increase-replica-count-cli)
+ [Using the ElastiCache API](#increase-replica-count-api)

## Using the AWS Management Console
<a name="increase-replica-count-con"></a>

The following procedure uses the console to increase the number of replicas in a Valkey or Redis OSS (cluster mode enabled) replication group.

**To increase the number of replicas in shards**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**, and then choose the name of the replication group that you want to add replicas to.

1. Choose the box for each shard that you want to add replicas to.

1. Choose **Add replicas**.

1. Complete the **Add Replicas to Shards** page:
   + For **New number of replicas/shard**, enter the number of replicas that you want all of your selected shards to have. This value must be greater than or equal to **Current Number of Replicas per shard** and less than or equal to five. We recommend at least two replicas as a working minimum.
   + For **Availability Zones**, choose either **No preference** to have ElastiCache choose an Availability Zone for each new replica, or **Specify Availability Zones** to choose an Availability Zone for each new replica.

     If you choose **Specify Availability Zones**, for each new replica specify an Availability Zone using the list.

1. Choose **Add** to add the replicas or **Cancel** to cancel the operation.

## Using the AWS CLI
<a name="increase-replica-count-cli"></a>

To increase the number of replicas in a Valkey or Redis OSS shard, use the `increase-replica-count` command with the following parameters:
+ `--replication-group-id` – Required. Identifies which replication group you want to increase the number of replicas in.
+ `--apply-immediately` or `--no-apply-immediately` – Required. Specifies whether to increase the replica count immediately (`--apply-immediately`) or at the next maintenance window (`--no-apply-immediately`). Currently, `--no-apply-immediately` is not supported.
+ `--new-replica-count` – Optional. Specifies the number of replica nodes you want when finished, up to a maximum of five. Use this parameter for Valkey or Redis OSS (cluster mode disabled) replication groups, where there is only one node group, or for Valkey or Redis OSS (cluster mode enabled) groups where you want all node groups to have the same number of replicas. If this value is not larger than the current number of replicas in the node group, the call fails with an exception.
+ `--replica-configuration` – Optional. Allows you to set the number of replicas and Availability Zones for each node group independently. Use this parameter for Valkey or Redis OSS (cluster mode enabled) groups where you want to configure each node group independently. 

  `--replica-configuration` has three optional members:
  + `NodeGroupId` – The four-digit ID for the node group that you are configuring. For Valkey or Redis OSS (cluster mode disabled) replication groups, the shard ID is always `0001`. To find a Valkey or Redis OSS (cluster mode enabled) node group's (shard's) ID, see [Finding a shard's ID](Shards.md#shard-find-id).
  + `NewReplicaCount` – The number of replicas that you want in this node group at the end of this operation. The value must be more than the current number of replicas, up to a maximum of five. If this value is not larger than the current number of replicas in the node group, the call fails with an exception.
  + `PreferredAvailabilityZones` – A list of `PreferredAvailabilityZone` strings that specify which Availability Zones the replication group's nodes are to be in. The number of `PreferredAvailabilityZone` values must equal the value of `NewReplicaCount` plus 1 to account for the primary node. If this member of `--replica-configuration` is omitted, ElastiCache for Redis OSS chooses the Availability Zone for each of the new replicas.

**Important**  
You must include either the `--new-replica-count` or `--replica-configuration` parameter, but not both, in your call.

**Example**  
The following example increases the number of replicas in the replication group `sample-repl-group` to three. When the example is finished, there are three replicas in each node group. This number applies whether this is a Valkey or Redis OSS (cluster mode disabled) group with a single node group or a Valkey or Redis OSS (cluster mode enabled) group with multiple node groups.  
For Linux, macOS, or Unix:  

```
aws elasticache increase-replica-count \
    --replication-group-id sample-repl-group \
    --new-replica-count 3 \
    --apply-immediately
```
For Windows:  

```
aws elasticache increase-replica-count ^
    --replication-group-id sample-repl-group ^
    --new-replica-count 3 ^
    --apply-immediately
```
The following example increases the number of replicas in the replication group `sample-repl-group` to the value specified for the two specified node groups. Given that there are multiple node groups, this is a Valkey or Redis OSS (cluster mode enabled) replication group. When you specify the optional `PreferredAvailabilityZones`, the number of Availability Zones listed must equal the value of `NewReplicaCount` plus one, to account for the primary node of the node group identified by `NodeGroupId`.  
For Linux, macOS, or Unix:  

```
aws elasticache increase-replica-count \
    --replication-group-id sample-repl-group \
    --replica-configuration \
        NodeGroupId=0001,NewReplicaCount=2,PreferredAvailabilityZones=us-east-1a,us-east-1c,us-east-1b \
        NodeGroupId=0003,NewReplicaCount=3,PreferredAvailabilityZones=us-east-1a,us-east-1b,us-east-1c,us-east-1c \
    --apply-immediately
```
For Windows:  

```
aws elasticache increase-replica-count ^
    --replication-group-id sample-repl-group ^
    --replica-configuration ^
        NodeGroupId=0001,NewReplicaCount=2,PreferredAvailabilityZones=us-east-1a,us-east-1c,us-east-1b ^
        NodeGroupId=0003,NewReplicaCount=3,PreferredAvailabilityZones=us-east-1a,us-east-1b,us-east-1c,us-east-1c ^
    --apply-immediately
```

For more information about increasing the number of replicas using the CLI, see [increase-replica-count](https://docs.aws.amazon.com/cli/latest/reference/elasticache/increase-replica-count.html) in the *Amazon ElastiCache Command Line Reference.*
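The parameter constraints listed above can be checked locally before you call the API. The following is a hypothetical helper (the function name and input shapes are illustrative, not part of the ElastiCache API) that validates a `--replica-configuration` request against the documented rules.

```python
def validate_replica_config(node_groups, config):
    """Check an increase-replica-count request against the documented rules.

    node_groups: {node_group_id: current_replica_count}
    config: list of dicts shaped like --replica-configuration entries.
    Returns a list of error strings; an empty list means the request looks valid.
    """
    errors = []
    for entry in config:
        ng = entry["NodeGroupId"]
        new = entry["NewReplicaCount"]
        current = node_groups[ng]
        # NewReplicaCount must exceed the current count, up to a maximum of 5.
        if not new > current:
            errors.append(f"{ng}: NewReplicaCount {new} must exceed current {current}")
        if new > 5:
            errors.append(f"{ng}: NewReplicaCount {new} exceeds the maximum of 5")
        # PreferredAvailabilityZones, if given, needs one entry per replica
        # plus one for the primary node.
        azs = entry.get("PreferredAvailabilityZones")
        if azs is not None and len(azs) != new + 1:
            errors.append(f"{ng}: expected {new + 1} Availability Zones "
                          f"(replicas plus the primary), got {len(azs)}")
    return errors

current = {"0001": 1, "0003": 2}
request = [
    {"NodeGroupId": "0001", "NewReplicaCount": 2,
     "PreferredAvailabilityZones": ["us-east-1a", "us-east-1c", "us-east-1b"]},
    {"NodeGroupId": "0003", "NewReplicaCount": 3,
     "PreferredAvailabilityZones": ["us-east-1a", "us-east-1b", "us-east-1c", "us-east-1c"]},
]
print(validate_replica_config(current, request))  # [] (the request is valid)
```

The sample request matches the `increase-replica-count` example shown above, so it passes every check.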

## Using the ElastiCache API
<a name="increase-replica-count-api"></a>

To increase the number of replicas in a Valkey or Redis OSS shard, use the `IncreaseReplicaCount` action with the following parameters:
+ `ReplicationGroupId` – Required. Identifies which replication group you want to increase the number of replicas in.
+ `ApplyImmediately` – Required. Specifies whether to increase the replica count immediately (`ApplyImmediately=True`) or at the next maintenance window (`ApplyImmediately=False`). Currently, `ApplyImmediately=False` is not supported.
+ `NewReplicaCount` – Optional. Specifies the number of replica nodes you want when finished, up to a maximum of five. Use this parameter for Valkey or Redis OSS (cluster mode disabled) replication groups where there is only one node group, or Valkey or Redis OSS (cluster mode enabled) groups where you want all node groups to have the same number of replicas. If this value is not larger than the current number of replicas in the node group, the call fails with an exception.
+ `ReplicaConfiguration` – Optional. Allows you to set the number of replicas and Availability Zones for each node group independently. Use this parameter for Valkey or Redis OSS (cluster mode enabled) groups where you want to configure each node group independently. 

  `ReplicaConfiguration` has three optional members:
  + `NodeGroupId` – The four-digit ID for the node group you are configuring. For Valkey or Redis OSS (cluster mode disabled) replication groups, the node group (shard) ID is always `0001`. To find a Valkey or Redis OSS (cluster mode enabled) node group's (shard's) ID, see [Finding a shard's ID](Shards.md#shard-find-id).
  + `NewReplicaCount` – The number of replicas that you want in this node group at the end of this operation. The value must be greater than the current number of replicas, up to a maximum of five. If this value is not larger than the current number of replicas in the node group, the call fails with an exception.
  + `PreferredAvailabilityZones` – A list of `PreferredAvailabilityZone` strings that specify which Availability Zones the replication group's nodes are to be in. The number of `PreferredAvailabilityZone` values must equal the value of `NewReplicaCount` plus 1 to account for the primary node. If this member of `ReplicaConfiguration` is omitted, ElastiCache for Redis OSS chooses the Availability Zone for each of the new replicas.

**Important**  
You must include either the `NewReplicaCount` or `ReplicaConfiguration` parameter, but not both, in your call.
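The parameter rules above (exactly one of `NewReplicaCount` or `ReplicaConfiguration`, a maximum of five replicas, and one Availability Zone per replica plus one for the primary) can be sketched as a client-side pre-flight check. The helper below is hypothetical, not part of any AWS SDK; it only mirrors the constraints this section describes:

```python
def validate_increase_replica_count(new_replica_count=None, replica_configuration=None):
    """Pre-flight check mirroring the IncreaseReplicaCount rules described above.

    Exactly one of new_replica_count / replica_configuration must be given, and
    each shard's PreferredAvailabilityZones list (if present) must hold
    NewReplicaCount + 1 entries to account for the primary node.
    """
    if (new_replica_count is None) == (replica_configuration is None):
        raise ValueError("Specify exactly one of NewReplicaCount or ReplicaConfiguration")
    if new_replica_count is not None and not 1 <= new_replica_count <= 5:
        raise ValueError("NewReplicaCount must be between 1 and 5")
    for shard in replica_configuration or []:
        azs = shard.get("PreferredAvailabilityZones")
        if azs is not None and len(azs) != shard["NewReplicaCount"] + 1:
            raise ValueError(
                f"Shard {shard['NodeGroupId']}: expected "
                f"{shard['NewReplicaCount'] + 1} Availability Zones, got {len(azs)}"
            )
```

Running such a check before calling the API turns the "call fails with an exception" cases into errors you can catch locally.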

**Example**  
The following example increases the number of replicas in the replication group `sample-repl-group` to three. When the example is finished, there are three replicas in each node group. This number applies whether this is a Valkey or Redis OSS (cluster mode disabled) group with a single node group, or a Valkey or Redis OSS (cluster mode enabled) group with multiple node groups.  

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=IncreaseReplicaCount
      &ApplyImmediately=True
      &NewReplicaCount=3
      &ReplicationGroupId=sample-repl-group
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```
The following example increases the number of replicas in the replication group `sample-repl-group` to the value specified for the two specified node groups. Because there are multiple node groups, this is a Valkey or Redis OSS (cluster mode enabled) replication group. When specifying the optional `PreferredAvailabilityZones`, the number of Availability Zones listed must equal the value of `NewReplicaCount` plus 1, to account for the primary node of the group identified by `NodeGroupId`.  

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=IncreaseReplicaCount
      &ApplyImmediately=True
      &ReplicaConfiguration.ConfigureShard.1.NodeGroupId=0001
      &ReplicaConfiguration.ConfigureShard.1.NewReplicaCount=2
      &ReplicaConfiguration.ConfigureShard.1.PreferredAvailabilityZones.PreferredAvailabilityZone.1=us-east-1a
      &ReplicaConfiguration.ConfigureShard.1.PreferredAvailabilityZones.PreferredAvailabilityZone.2=us-east-1c
      &ReplicaConfiguration.ConfigureShard.1.PreferredAvailabilityZones.PreferredAvailabilityZone.3=us-east-1b
      &ReplicaConfiguration.ConfigureShard.2.NodeGroupId=0003
      &ReplicaConfiguration.ConfigureShard.2.NewReplicaCount=3
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.1=us-east-1a
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.2=us-east-1b
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.3=us-east-1c
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.4=us-east-1c
      &ReplicationGroupId=sample-repl-group
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```
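The Query API example above flattens each `ReplicaConfiguration` entry into dotted, 1-indexed parameter names (`ReplicaConfiguration.ConfigureShard.N.…`). The helper below is a hypothetical illustration of that flattening, taking the same structured shape the CLI accepts and producing the query-string keys shown above:

```python
def flatten_replica_configuration(shards):
    """Flatten a list of shard configurations into the dotted query-string
    keys used by the Query API examples above (illustrative helper only)."""
    params = {}
    for i, shard in enumerate(shards, start=1):
        prefix = f"ReplicaConfiguration.ConfigureShard.{i}"
        params[f"{prefix}.NodeGroupId"] = shard["NodeGroupId"]
        params[f"{prefix}.NewReplicaCount"] = str(shard["NewReplicaCount"])
        # List members are numbered starting at 1, one key per Availability Zone.
        for j, az in enumerate(shard.get("PreferredAvailabilityZones", []), start=1):
            key = f"{prefix}.PreferredAvailabilityZones.PreferredAvailabilityZone.{j}"
            params[key] = az
    return params
```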

For more information about increasing the number of replicas using the API, see [IncreaseReplicaCount](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_IncreaseReplicaCount.html) in the *Amazon ElastiCache API Reference.*

# Decreasing the number of replicas in a shard
<a name="decrease-replica-count"></a>

You can decrease the number of replicas in a shard for Valkey or Redis OSS (cluster mode enabled), or in a replication group for Valkey or Redis OSS (cluster mode disabled):
+ For Valkey or Redis OSS (cluster mode disabled), you can decrease the number of replicas to one if Multi-AZ is enabled, and to zero if it isn't enabled.
+ For Valkey or Redis OSS (cluster mode enabled), you can decrease the number of replicas to zero. However, you can't fail over to a replica if your primary node fails.

You can use the AWS Management Console, the AWS CLI or the ElastiCache API to decrease the number of replicas in a node group (shard) or replication group.

**Topics**
+ [Using the AWS Management Console](#decrease-replica-count-con)
+ [Using the AWS CLI](#decrease-replica-count-cli)
+ [Using the ElastiCache API](#decrease-replica-count-api)

## Using the AWS Management Console
<a name="decrease-replica-count-con"></a>

The following procedure uses the console to decrease the number of replicas in a Valkey or Redis OSS (cluster mode enabled) replication group.

**To decrease the number of replicas in a Valkey or Redis OSS shard**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Valkey** or **Redis OSS**, then choose the name of the replication group from which you want to delete replicas.

1. Choose the box for each shard you want to remove a replica node from.

1. Choose **Delete replicas**.

1. Complete the **Delete Replicas from Shards** page:

   1. For **New number of replicas/shard**, enter the number of replicas that you want the selected shards to have. This number must be greater than or equal to 1. We recommend at least two replicas per shard as a working minimum.

   1. Choose **Delete** to delete the replicas or **Cancel** to cancel the operation.

**Important**  
If you don’t specify the replica nodes to be deleted, ElastiCache for Redis OSS automatically selects replica nodes for deletion. In doing so, ElastiCache for Redis OSS first attempts to retain the Multi-AZ architecture of your replication group, and then retains the replicas with the lowest replication lag relative to the primary.
You can't delete the primary node in a replication group. If you specify a primary node for deletion, the operation fails with an error event indicating that the primary node was selected for deletion. 
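The automatic selection policy in the note above (preserve Multi-AZ spread first, then prefer removing the laggiest replicas) can be sketched as a simple ranking. This is only an illustration of the stated preference order, not the actual ElastiCache implementation, and the `id`/`az`/`lag_seconds` record shape is hypothetical:

```python
from collections import Counter

def pick_replicas_to_remove(replicas, count):
    """Illustrative sketch: prefer removing replicas from Availability Zones
    that hold more than one node (preserving AZ spread), and among those,
    the ones with the highest replication lag."""
    chosen = []
    pool = list(replicas)
    for _ in range(count):
        az_counts = Counter(r["az"] for r in pool)
        # Nodes in crowded AZs, then nodes with the largest lag, sort first.
        pool.sort(key=lambda r: (-az_counts[r["az"]], -r["lag_seconds"]))
        chosen.append(pool.pop(0)["id"])
    return chosen
```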

## Using the AWS CLI
<a name="decrease-replica-count-cli"></a>

To decrease the number of replicas in a Valkey or Redis OSS shard, use the `decrease-replica-count` command with the following parameters:
+ `--replication-group-id` – Required. Identifies which replication group you want to decrease the number of replicas in.
+ `--apply-immediately` or `--no-apply-immediately` – Required. Specifies whether to decrease the replica count immediately (`--apply-immediately`) or at the next maintenance window (`--no-apply-immediately`). Currently, `--no-apply-immediately` is not supported.
+ `--new-replica-count` – Optional. Specifies the number of replica nodes that you want. The value of `--new-replica-count` must be a valid value less than the current number of replicas in the node groups. For minimum permitted values, see [Decreasing the number of replicas in a shard](#decrease-replica-count). If the value of `--new-replica-count` doesn't meet this requirement, the call fails.
+ `--replicas-to-remove` – Optional. Contains a list of node IDs specifying the replica nodes to remove.
+ `--replica-configuration` – Optional. Allows you to set the number of replicas and Availability Zones for each node group independently. Use this parameter for Valkey or Redis OSS (cluster mode enabled) groups where you want to configure each node group independently. 

  `--replica-configuration` has three optional members:
  + `NodeGroupId` – The four-digit ID for the node group that you are configuring. For Valkey or Redis OSS (cluster mode disabled) replication groups, the shard ID is always `0001`. To find a Valkey or Redis OSS (cluster mode enabled) node group's (shard's) ID, see [Finding a shard's ID](Shards.md#shard-find-id).
  + `NewReplicaCount` – An optional parameter that specifies the number of replica nodes you want. The value of `NewReplicaCount` must be a valid value less than the current number of replicas in the node groups. For minimum permitted values, see [Decreasing the number of replicas in a shard](#decrease-replica-count). If the value of `NewReplicaCount` doesn't meet this requirement, the call fails.
  + `PreferredAvailabilityZones` – A list of `PreferredAvailabilityZone` strings that specify which Availability Zones the replication group's nodes are in. The number of `PreferredAvailabilityZone` values must equal the value of `NewReplicaCount` plus 1 to account for the primary node. If this member of `--replica-configuration` is omitted, ElastiCache for Redis OSS chooses the Availability Zone for each of the new replicas.

**Important**  
You must include one and only one of the `--new-replica-count`, `--replicas-to-remove`, or `--replica-configuration` parameters.
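The mutual-exclusivity rule above, together with the minimum replica counts from [Decreasing the number of replicas in a shard](#decrease-replica-count) (one replica when Multi-AZ is enabled, zero otherwise), can be sketched as a client-side check. The helper is hypothetical and only mirrors the constraints this section states:

```python
def validate_decrease_params(new_replica_count=None, replicas_to_remove=None,
                             replica_configuration=None, multi_az=True):
    """Pre-flight check for decrease-replica-count: exactly one selector
    parameter, and the target count may not drop below 1 when Multi-AZ is
    enabled (0 otherwise)."""
    given = [p for p in (new_replica_count, replicas_to_remove, replica_configuration)
             if p is not None]
    if len(given) != 1:
        raise ValueError("Specify exactly one of NewReplicaCount, "
                         "ReplicasToRemove, or ReplicaConfiguration")
    minimum = 1 if multi_az else 0
    if new_replica_count is not None and new_replica_count < minimum:
        raise ValueError(f"NewReplicaCount must be at least {minimum}")
```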

**Example**  
The following example uses `--new-replica-count` to decrease the number of replicas in the replication group `sample-repl-group` to one. When the example is finished, there is one replica in each node group. This number applies whether this is a Valkey or Redis OSS (cluster mode disabled) group with a single node group or a Valkey or Redis OSS (cluster mode enabled) group with multiple node groups.  
For Linux, macOS, or Unix:  

```
aws elasticache decrease-replica-count \
    --replication-group-id sample-repl-group \
    --new-replica-count 1 \
    --apply-immediately
```
For Windows:  

```
aws elasticache decrease-replica-count ^
    --replication-group-id sample-repl-group ^
    --new-replica-count 1 ^
    --apply-immediately
```
The following example decreases the number of replicas in the replication group `sample-repl-group` by removing two specified replicas (`0001` and `0003`) from the node group.  
For Linux, macOS, or Unix:  

```
aws elasticache decrease-replica-count \
    --replication-group-id sample-repl-group \
    --replicas-to-remove 0001,0003 \
    --apply-immediately
```
For Windows:  

```
aws elasticache decrease-replica-count ^
    --replication-group-id sample-repl-group ^
    --replicas-to-remove 0001,0003 ^
    --apply-immediately
```
The following example uses `--replica-configuration` to decrease the number of replicas in the replication group `sample-repl-group` to the value specified for the two specified node groups. Because there are multiple node groups, this is a Valkey or Redis OSS (cluster mode enabled) replication group. When specifying the optional `PreferredAvailabilityZones`, the number of Availability Zones listed must equal the value of `NewReplicaCount` plus 1, to account for the primary node of the group identified by `NodeGroupId`.  
For Linux, macOS, or Unix:  

```
aws elasticache decrease-replica-count \
    --replication-group-id sample-repl-group \
    --replica-configuration \
        NodeGroupId=0001,NewReplicaCount=1,PreferredAvailabilityZones=us-east-1a,us-east-1c \
        NodeGroupId=0003,NewReplicaCount=2,PreferredAvailabilityZones=us-east-1a,us-east-1b,us-east-1c \
    --apply-immediately
```
For Windows:  

```
aws elasticache decrease-replica-count ^
    --replication-group-id sample-repl-group ^
    --replica-configuration ^
        NodeGroupId=0001,NewReplicaCount=1,PreferredAvailabilityZones=us-east-1a,us-east-1c ^
        NodeGroupId=0003,NewReplicaCount=2,PreferredAvailabilityZones=us-east-1a,us-east-1b,us-east-1c ^
    --apply-immediately
```

For more information about decreasing the number of replicas using the CLI, see [decrease-replica-count](https://docs.aws.amazon.com/cli/latest/reference/elasticache/decrease-replica-count.html) in the *Amazon ElastiCache Command Line Reference.*

## Using the ElastiCache API
<a name="decrease-replica-count-api"></a>

To decrease the number of replicas in a Valkey or Redis OSS shard, use the `DecreaseReplicaCount` action with the following parameters:
+ `ReplicationGroupId` – Required. Identifies which replication group you want to decrease the number of replicas in.
+ `ApplyImmediately` – Required. Specifies whether to decrease the replica count immediately (`ApplyImmediately=True`) or at the next maintenance window (`ApplyImmediately=False`). Currently, `ApplyImmediately=False` is not supported.
+ `NewReplicaCount` – Optional. Specifies the number of replica nodes you want. The value of `NewReplicaCount` must be a valid value less than the current number of replicas in the node groups. For minimum permitted values, see [Decreasing the number of replicas in a shard](#decrease-replica-count). If the value of `NewReplicaCount` doesn't meet this requirement, the call fails.
+ `ReplicasToRemove` – Optional. Contains a list of node IDs specifying the replica nodes to remove.
+ `ReplicaConfiguration` – Optional. Contains a list of node groups that allows you to set the number of replicas and Availability Zones for each node group independently. Use this parameter for Valkey or Redis OSS (cluster mode enabled) groups where you want to configure each node group independently. 

  `ReplicaConfiguration` has three optional members:
  + `NodeGroupId` – The four-digit ID for the node group you are configuring. For Valkey or Redis OSS (cluster mode disabled) replication groups, the node group ID is always `0001`. To find a Valkey or Redis OSS (cluster mode enabled) node group's (shard's) ID, see [Finding a shard's ID](Shards.md#shard-find-id).
  + `NewReplicaCount` – The number of replicas that you want in this node group at the end of this operation. The value must be less than the current number of replicas down to a minimum of 1 if Multi-AZ is enabled or 0 if Multi-AZ with Automatic Failover isn't enabled. If this value is not less than the current number of replicas in the node group, the call fails with an exception.
  + `PreferredAvailabilityZones` – A list of `PreferredAvailabilityZone` strings that specify which Availability Zones the replication group's nodes are in. The number of `PreferredAvailabilityZone` values must equal the value of `NewReplicaCount` plus 1 to account for the primary node. If this member of `ReplicaConfiguration` is omitted, ElastiCache for Redis OSS chooses the Availability Zone for each of the new replicas.

**Important**  
You must include one and only one of the `NewReplicaCount`, `ReplicasToRemove`, or `ReplicaConfiguration` parameters.

**Example**  
The following example uses `NewReplicaCount` to decrease the number of replicas in the replication group `sample-repl-group` to one. When the example is finished, there is one replica in each node group. This number applies whether this is a Valkey or Redis OSS (cluster mode disabled) group with a single node group or a Valkey or Redis OSS (cluster mode enabled) group with multiple node groups.  

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=DecreaseReplicaCount
      &ApplyImmediately=True
      &NewReplicaCount=1
      &ReplicationGroupId=sample-repl-group
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```
The following example decreases the number of replicas in the replication group `sample-repl-group` by removing two specified replicas (`0001` and `0003`) from the node group.  

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=DecreaseReplicaCount
      &ApplyImmediately=True
      &ReplicasToRemove.ReplicaToRemove.1=0001
      &ReplicasToRemove.ReplicaToRemove.2=0003
      &ReplicationGroupId=sample-repl-group
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```
The following example uses `ReplicaConfiguration` to decrease the number of replicas in the replication group `sample-repl-group` to the value specified for the two specified node groups. Because there are multiple node groups, this is a Valkey or Redis OSS (cluster mode enabled) replication group. When specifying the optional `PreferredAvailabilityZones`, the number of Availability Zones listed must equal the value of `NewReplicaCount` plus 1, to account for the primary node of the group identified by `NodeGroupId`.  

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=DecreaseReplicaCount
      &ApplyImmediately=True
      &ReplicaConfiguration.ConfigureShard.1.NodeGroupId=0001
      &ReplicaConfiguration.ConfigureShard.1.NewReplicaCount=1
      &ReplicaConfiguration.ConfigureShard.1.PreferredAvailabilityZones.PreferredAvailabilityZone.1=us-east-1a
      &ReplicaConfiguration.ConfigureShard.1.PreferredAvailabilityZones.PreferredAvailabilityZone.2=us-east-1c
      &ReplicaConfiguration.ConfigureShard.2.NodeGroupId=0003
      &ReplicaConfiguration.ConfigureShard.2.NewReplicaCount=2
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.1=us-east-1a
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.2=us-east-1b
      &ReplicaConfiguration.ConfigureShard.2.PreferredAvailabilityZones.PreferredAvailabilityZone.3=us-east-1c
      &ReplicationGroupId=sample-repl-group
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```

For more information about decreasing the number of replicas using the API, see [DecreaseReplicaCount](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DecreaseReplicaCount.html) in the *Amazon ElastiCache API Reference.*

# Adding a read replica for Valkey or Redis OSS (Cluster Mode Disabled)
<a name="Replication.AddReadReplica"></a>

Information in the following topic applies to Valkey or Redis OSS (cluster mode disabled) replication groups only.

As your read traffic increases, you might want to spread those reads across more nodes and reduce the read pressure on any one node. In this topic, you can find how to add a read replica to a Valkey or Redis OSS (cluster mode disabled) cluster. 

A Valkey or Redis OSS (cluster mode disabled) replication group can have a maximum of five read replicas. If you attempt to add a read replica to a replication group that already has five, the operation fails.
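If you automate replica creation, the five-replica limit above is worth guarding against before issuing the call, so the failure surfaces in your own code rather than as a failed API operation. A minimal sketch (the constant and helper are illustrative, not part of any SDK):

```python
MAX_READ_REPLICAS = 5  # cluster mode disabled limit described above

def can_add_replica(current_replica_count):
    """Return True if a cluster mode disabled replication group with the
    given number of read replicas can accept one more."""
    return current_replica_count < MAX_READ_REPLICAS
```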

For information about adding replicas to a Valkey or Redis OSS (cluster mode enabled) replication group, see the following:
+ [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)
+ [Increasing the number of replicas in a shard](increase-replica-count.md)

You can add a read replica to a Valkey or Redis OSS (cluster mode disabled) cluster using the ElastiCache Console, the AWS CLI, or the ElastiCache API.

**Related topics**
+ [Adding nodes to an ElastiCache cluster](Clusters.AddNode.md)
+ [Adding a read replica to a replication group (AWS CLI)](#Replication.AddReadReplica.CLI)
+ [Adding a read replica to a replication group using the API](#Replication.AddReadReplica.API)

## Adding a read replica to a replication group (AWS CLI)
<a name="Replication.AddReadReplica.CLI"></a>

To add a read replica to a Valkey or Redis OSS (cluster mode disabled) replication group, use the AWS CLI `create-cache-cluster` command, with the parameter `--replication-group-id` to specify which replication group to add the cluster (node) to.

The following example creates the cluster `my-read-replica` and adds it to the replication group `my-replication-group`. The node types, parameter groups, security groups, maintenance window, and other settings for the read replica are the same as for the other nodes in `my-replication-group`. 

For Linux, macOS, or Unix:

```
aws elasticache create-cache-cluster \
      --cache-cluster-id my-read-replica \
      --replication-group-id my-replication-group
```

For Windows:

```
aws elasticache create-cache-cluster ^
      --cache-cluster-id my-read-replica ^
      --replication-group-id my-replication-group
```

For more information on adding a read replica using the CLI, see [create-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-cache-cluster.html) in the *Amazon ElastiCache Command Line Reference.*

## Adding a read replica to a replication group using the API
<a name="Replication.AddReadReplica.API"></a>

To add a read replica to a Valkey or Redis OSS (cluster mode disabled) replication group, use the ElastiCache `CreateCacheCluster` operation, with the parameter `ReplicationGroupId` to specify which replication group to add the cluster (node) to.

The following example creates the cluster `myReadReplica` and adds it to the replication group `myReplicationGroup`. The node types, parameter groups, security groups, maintenance window, and other settings for the read replica are the same as for the other nodes in `myReplicationGroup`.

```
https://elasticache.us-west-2.amazonaws.com/
      ?Action=CreateCacheCluster
      &CacheClusterId=myReadReplica
      &ReplicationGroupId=myReplicationGroup
      &Version=2015-02-02
      &SignatureVersion=4
      &SignatureMethod=HmacSHA256
      &Timestamp=20150202T192317Z
      &X-Amz-Credential=<credential>
```

For more information on adding a read replica using the API, see [CreateCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateCacheCluster.html) in the *Amazon ElastiCache API Reference.*

# Deleting a read replica for Valkey or Redis OSS (Cluster Mode Disabled)
<a name="Replication.RemoveReadReplica"></a>

Information in the following topic applies to Valkey or Redis OSS (cluster mode disabled) replication groups only.

As read traffic on your Valkey or Redis OSS replication group changes, you might want to add or remove read replicas. Removing a node from a replication group is the same as deleting a cluster, though the following restrictions apply:
+ You cannot remove the primary from a replication group. If you want to delete the primary, do the following:

  1. Promote a read replica to primary. For more information on promoting a read replica to primary, see [Promoting a read replica to primary, for Valkey or Redis OSS (cluster mode disabled) replication groups](Replication.PromoteReplica.md).

  1. Delete the old primary. For a restriction on this method, see the next point.
+ If Multi-AZ is enabled on a replication group, you can't remove the last read replica from the replication group. In this case, do the following:

  1. Modify the replication group by disabling Multi-AZ. For more information, see [Modifying a replication group](Replication.Modify.md).

  1. Delete the read replica.

You can remove a read replica from a Valkey or Redis OSS (cluster mode disabled) replication group using the ElastiCache console, the AWS CLI for ElastiCache, or the ElastiCache API.

For directions on deleting a cluster from a Valkey or Redis OSS replication group, see the following:
+ [Using the AWS Management Console](Clusters.Delete.md#Clusters.Delete.CON)
+ [Using the AWS CLI to delete an ElastiCache cluster](Clusters.Delete.md#Clusters.Delete.CLI)
+ [Using the ElastiCache API](Clusters.Delete.md#Clusters.Delete.API)
+ [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)
+ [Decreasing the number of replicas in a shard](decrease-replica-count.md)

# Promoting a read replica to primary, for Valkey or Redis OSS (cluster mode disabled) replication groups
<a name="Replication.PromoteReplica"></a>

Information in the following topic applies to only Valkey or Redis OSS (cluster mode disabled) replication groups.

You can promote a Valkey or Redis OSS (cluster mode disabled) read replica to primary using the AWS Management Console, the AWS CLI, or the ElastiCache API. You can't promote a read replica to primary while Multi-AZ with Automatic Failover is enabled on the replication group. To promote a Valkey or Redis OSS (cluster mode disabled) replica to primary on a Multi-AZ enabled replication group, do the following:

1. Modify the replication group to disable Multi-AZ (doing this doesn't require that all your clusters be in the same Availability Zone). For more information, see [Modifying a replication group](Replication.Modify.md).

1. Promote the read replica to primary.

1. Modify the replication group to re-enable Multi-AZ.

Multi-AZ is not available on replication groups running Redis OSS 2.6.13 or earlier.
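The three-step flow above can be sketched as a function over any client object exposing `modify_replication_group` (the boto3 ElastiCache client has a method of this name; here only the keyword arguments shown are assumed, so the client is easy to stub). This is an illustrative sketch: real automation should also wait for the replication group to return to *available* between steps:

```python
def promote_replica(client, group_id, replica_id):
    """Disable Multi-AZ, promote a replica to primary, then re-enable Multi-AZ.
    `client` is any object with a modify_replication_group(**kwargs) method."""
    # 1. Disable Multi-AZ so the promotion is allowed.
    client.modify_replication_group(
        ReplicationGroupId=group_id, MultiAZEnabled=False, ApplyImmediately=True)
    # 2. Promote the replica by naming it as the new primary.
    #    (In production, wait for the group to become available first.)
    client.modify_replication_group(
        ReplicationGroupId=group_id, PrimaryClusterId=replica_id, ApplyImmediately=True)
    # 3. Re-enable Multi-AZ once the group is available again.
    client.modify_replication_group(
        ReplicationGroupId=group_id, MultiAZEnabled=True, ApplyImmediately=True)
```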

## Using the AWS Management Console
<a name="Replication.PromoteReplica.CON"></a>

The following procedure uses the console to promote a replica node to primary. 

**To promote a read replica to primary (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. If the replica you want to promote is a member of a Valkey or Redis OSS (cluster mode disabled) replication group where Multi-AZ is enabled, modify the replication group to disable Multi-AZ before you proceed. For more information, see [Modifying a replication group](Replication.Modify.md).

1. Choose **Valkey** or **Redis OSS**, then from the list of clusters, choose the replication group that you want to modify. This must be a cluster mode disabled replication group with two or more nodes.

1. From the list of nodes, choose the replica node you want to promote to primary, then for **Actions**, choose **Promote**.

1. In the **Promote Read Replica** dialog box, do the following:

   1. For **Apply Immediately**, choose **Yes** to promote the read replica immediately, or **No** to promote it at the cluster's next maintenance window.

   1. Choose **Promote** to promote the read replica or **Cancel** to cancel the operation.

1. If the cluster was Multi-AZ enabled before you began the promotion process, wait until the replication group's status is **available**, then modify the cluster to re-enable Multi-AZ. For more information, see [Modifying a replication group](Replication.Modify.md).

## Using the AWS CLI
<a name="Replication.PromoteReplica.CLI"></a>

You can't promote a read replica to primary if the replication group is Multi-AZ enabled. In some cases, the replica that you want to promote might be a member of a replication group where Multi-AZ is enabled. In these cases, you must modify the replication group to disable Multi-AZ before you proceed. Doing this doesn't require that all your clusters be in the same Availability Zone. For more information on modifying a replication group, see [Modifying a replication group](Replication.Modify.md).

The following AWS CLI command modifies the replication group `sample-repl-group`, making the read replica `my-replica-1` the primary in the replication group.

For Linux, macOS, or Unix:

```
aws elasticache modify-replication-group \
   --replication-group-id sample-repl-group \
   --primary-cluster-id my-replica-1
```

For Windows:

```
aws elasticache modify-replication-group ^
   --replication-group-id sample-repl-group ^
   --primary-cluster-id my-replica-1
```

For more information on modifying a replication group, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) in the *Amazon ElastiCache Command Line Reference.*

## Using the ElastiCache API
<a name="Replication.PromoteReplica.API"></a>

You can't promote a read replica to primary if the replication group is Multi-AZ enabled. In some cases, the replica that you want to promote might be a member of a replication group where Multi-AZ is enabled. In these cases, you must modify the replication group to disable Multi-AZ before you proceed. Doing this doesn't require that all your clusters be in the same Availability Zone. For more information on modifying a replication group, see [Modifying a replication group](Replication.Modify.md).

The following ElastiCache API action modifies the replication group `myReplGroup`, making the read replica `myReplica-1` the primary in the replication group.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ModifyReplicationGroup
   &ReplicationGroupId=myReplGroup
   &PrimaryClusterId=myReplica-1  
   &Version=2014-12-01
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20141201T220302Z
   &X-Amz-Algorithm=AWS4-HMAC-SHA256
   &X-Amz-Date=20141201T220302Z
   &X-Amz-SignedHeaders=Host
   &X-Amz-Expires=20141201T220302Z
   &X-Amz-Credential=<credential>
   &X-Amz-Signature=<signature>
```

For more information on modifying a replication group, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) in the *Amazon ElastiCache API Reference.*

# Managing ElastiCache cluster maintenance
<a name="maintenance-window"></a>

Every cluster has a weekly maintenance window during which any system changes are applied. With Valkey and Redis OSS, replication groups have this same weekly maintenance window. If you don't specify a preferred maintenance window when you create or modify a cluster or replication group, ElastiCache assigns a 60-minute maintenance window within your region's maintenance window on a randomly chosen day of the week.

The 60-minute maintenance window is chosen at random from an 8-hour block of time per region. The following table lists the time blocks for each region from which the default maintenance windows are assigned. You may choose a preferred maintenance window outside the region's maintenance window block.
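The default assignment described above can be sketched as follows: pick a random day, then a random 60-minute slot that fits entirely inside the region's 8-hour block. The helper and its `ddd:hh:mm-ddd:hh:mm` output format (matching the CLI's `--preferred-maintenance-window` style) are illustrative, and for simplicity the sketch ignores day rollover for blocks that cross midnight:

```python
import random

def default_maintenance_window(block_start_hour, day=None, rng=random):
    """Pick a random 60-minute default window inside a region's 8-hour
    maintenance block (UTC). Illustrative sketch of the assignment only."""
    days = ["sun", "mon", "tue", "wed", "thu", "fri", "sat"]
    day = day or rng.choice(days)
    # A 60-minute slot starting in the first 7 hours fits inside the 8-hour block.
    start = block_start_hour * 60 + rng.randrange(7 * 60)
    end = start + 60
    fmt = lambda m: f"{(m // 60) % 24:02d}:{m % 60:02d}"
    return f"{day}:{fmt(start)}-{day}:{fmt(end)}"
```

For example, with us-east-1's 03:00–11:00 UTC block, every returned window starts between 03:00 and 09:59 and lasts exactly one hour.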


| Region Code | Region Name | Region Maintenance Window | 
| --- | --- | --- | 
| af-south-1 | Africa (Cape Town) Region | 13:00–21:00 UTC | 
| ap-east-1 | Asia Pacific (Hong Kong) Region | 13:00–21:00 UTC | 
| ap-east-2 | Asia Pacific (Taipei) Region | 09:00–17:00 UTC | 
| ap-northeast-1 | Asia Pacific (Tokyo) Region | 13:00–21:00 UTC | 
| ap-northeast-2 | Asia Pacific (Seoul) Region | 12:00–20:00 UTC | 
| ap-northeast-3 | Asia Pacific (Osaka) Region | 12:00–20:00 UTC | 
| ap-south-1 | Asia Pacific (Mumbai) Region | 17:30–01:30 UTC | 
| ap-south-2 | Asia Pacific (Hyderabad) Region | 06:30–14:30 UTC | 
| ap-southeast-1 | Asia Pacific (Singapore) Region | 14:00–22:00 UTC | 
| ap-southeast-2 | Asia Pacific (Sydney) Region | 12:00–20:00 UTC | 
| ap-southeast-3 | Asia Pacific (Jakarta) Region | 14:00–22:00 UTC | 
| ap-southeast-4 | Asia Pacific (Melbourne) Region | 11:00–19:00 UTC | 
| ap-southeast-5 | Asia Pacific (Malaysia) Region | 09:00–17:00 UTC | 
| ap-southeast-7 | Asia Pacific (Thailand) Region | 08:00–16:00 UTC | 
| ca-central-1 | Canada (Central) Region | 03:00–11:00 UTC | 
| ca-west-1 | Canada West (Calgary) Region | 18:00–02:00 UTC | 
| cn-north-1 | China (Beijing) Region | 14:00–22:00 UTC | 
| cn-northwest-1 | China (Ningxia) Region | 14:00–22:00 UTC | 
| eu-central-1 | Europe (Frankfurt) Region | 23:00–07:00 UTC | 
| eu-central-2 | Europe (Zurich) Region | 02:00–10:00 UTC | 
| eu-north-1 | Europe (Stockholm) Region | 23:00–07:00 UTC | 
| eu-south-1 | Europe (Milan) Region | 21:00–05:00 UTC | 
| eu-south-2 | Europe (Spain) Region | 02:00–10:00 UTC | 
| eu-west-1 | Europe (Ireland) Region | 22:00–06:00 UTC | 
| eu-west-2 | Europe (London) Region | 23:00–07:00 UTC | 
| eu-west-3 | Europe (Paris) Region | 23:59–07:29 UTC | 
| il-central-1 | Israel (Tel Aviv) Region | 03:00–11:00 UTC | 
| me-central-1 | Middle East (UAE) Region | 13:00–21:00 UTC | 
| me-south-1 | Middle East (Bahrain) Region | 13:00–21:00 UTC | 
| mx-central-1 | Mexico (Central) Region | 19:00–03:00 UTC | 
| sa-east-1 | South America (São Paulo) Region | 01:00–09:00 UTC | 
| us-east-1 | US East (N. Virginia) Region | 03:00–11:00 UTC | 
| us-east-2 | US East (Ohio) Region | 04:00–12:00 UTC | 
| us-gov-east-1 | AWS GovCloud (US-East) | 17:00–01:00 UTC | 
| us-gov-west-1 | AWS GovCloud (US-West) | 06:00–14:00 UTC | 
| us-west-1 | US West (N. California) Region | 06:00–14:00 UTC | 
| us-west-2 | US West (Oregon) Region | 06:00–14:00 UTC | 

**Changing your cluster's or replication group's maintenance window**  
The maintenance window should fall during the time of lowest usage and thus might need to be modified from time to time. You can modify your cluster or replication group to specify a time range of up to 24 hours during which any maintenance activities that you have requested occur. Any deferred or pending cluster modifications that you requested also occur during this time.
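If you set the window from a script, it can help to validate the value before calling `modify-cache-cluster` or `modify-replication-group`. The following sketch is a hypothetical helper: it assumes the `ddd:hh24:mi-ddd:hh24:mi` format (for example, `sun:23:00-mon:01:30`) and a 60-minute minimum window; the service itself remains the authority on what it accepts.

```python
import re

_DAYS = ["mon", "tue", "wed", "thu", "fri", "sat", "sun"]
_WINDOW_RE = re.compile(
    r"^(?P<d1>mon|tue|wed|thu|fri|sat|sun):(?P<h1>[01]\d|2[0-3]):(?P<m1>[0-5]\d)"
    r"-(?P<d2>mon|tue|wed|thu|fri|sat|sun):(?P<h2>[01]\d|2[0-3]):(?P<m2>[0-5]\d)$"
)

def window_minutes(window: str) -> int:
    """Return the window length in minutes, raising ValueError on bad input."""
    m = _WINDOW_RE.match(window.lower())
    if not m:
        raise ValueError(f"malformed maintenance window: {window!r}")
    start = (_DAYS.index(m["d1"]) * 24 + int(m["h1"])) * 60 + int(m["m1"])
    end = (_DAYS.index(m["d2"]) * 24 + int(m["h2"])) * 60 + int(m["m2"])
    week_minutes = 7 * 24 * 60
    # The modulo lets a window wrap across the end of the week.
    return (end - start) % week_minutes

def is_valid_window(window: str) -> bool:
    """True when the string parses and the window is at least 60 minutes."""
    try:
        return window_minutes(window) >= 60
    except ValueError:
        return False
```

For example, `is_valid_window("sun:23:00-mon:01:30")` is true (a 150-minute window), while a 30-minute window or a malformed string is rejected.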

**Note**  
If you want to apply node type modifications or engine upgrades immediately using the AWS Management Console, select the **Apply now** box. Otherwise, these modifications are applied during your next scheduled maintenance window. To use the AWS CLI instead, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) or [modify-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-cluster.html).

**More information**  
For information on your maintenance window and node replacement, see the following:
+ [ElastiCache Maintenance](https://aws.amazon.com/elasticache/elasticache-maintenance/)—FAQ on maintenance and node replacement
+ [Replacing nodes (Memcached)](CacheNodes.NodeReplacement-mc.md)—Managing node replacement for Memcached
+ [Modifying an ElastiCache cluster](Clusters.Modify.md)—Changing a cluster's maintenance window
+ [Replacing nodes (Valkey and Redis OSS)](CacheNodes.NodeReplacement.md)—Managing node replacement
+ [Modifying a replication group](Replication.Modify.md)—Changing a replication group's maintenance window

# Configuring engine parameters using ElastiCache parameter groups
<a name="ParameterGroups"></a>

Amazon ElastiCache uses parameters to control the runtime properties of your nodes and clusters. Generally, newer engine versions include additional parameters to support the newer functionality. For tables of Memcached parameters, see [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached). For tables of Valkey and Redis OSS parameters, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis). 

As you would expect, some parameter values, such as `maxmemory`, are determined by the engine and node type. For a table of these Memcached parameter values by node type, see [Memcached node-type specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached.NodeSpecific). For a table of these Valkey and Redis OSS parameter values by node type, see [Redis OSS node-type specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis.NodeSpecific).

**Note**  
For a list of Memcached-specific parameters, see [Memcached Specific Parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached).

**Topics**
+ [Parameter management in ElastiCache](ParameterGroups.Management.md)
+ [Cache parameter group tiers in ElastiCache](ParameterGroups.Tiers.md)
+ [Creating an ElastiCache parameter group](ParameterGroups.Creating.md)
+ [Listing ElastiCache parameter groups by name](ParameterGroups.ListingGroups.md)
+ [Listing an ElastiCache parameter group's values](ParameterGroups.ListingValues.md)
+ [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md)
+ [Deleting an ElastiCache parameter group](ParameterGroups.Deleting.md)
+ [Engine specific parameters](ParameterGroups.Engine.md)

# Parameter management in ElastiCache
<a name="ParameterGroups.Management"></a>

ElastiCache parameters are grouped together into named parameter groups for easier parameter management. A parameter group represents a combination of specific values for the parameters that are passed to the engine software during startup. These values determine how the engine processes on each node behave at runtime. The parameter values on a specific parameter group apply to all nodes that are associated with the group, regardless of which cluster they belong to.

To fine-tune your cluster's performance, you can modify some parameter values or change the cluster's parameter group.
+ You cannot modify or delete the default parameter groups. If you need custom parameter values, you must create a custom parameter group.
+ For Memcached, the parameter group family and the cluster you're assigning it to must be compatible. For example, if your cluster is running Memcached version 1.4.8, you can only use parameter groups, default or custom, from the Memcached 1.4 family.

  For Redis OSS, the parameter group family and the cluster you're assigning it to must be compatible. For example, if your cluster is running Redis OSS version 3.2.10, you can only use parameter groups, default or custom, from the Redis OSS 3.2 family.
+ If you change a cluster's parameter group, the values for any conditionally modifiable parameter must be the same in both the current and new parameter groups.
+ For Memcached, when you change a cluster's parameters the change is applied to the cluster immediately. This is true whether you change the cluster's parameter group itself or a parameter value within the cluster's parameter group. To determine when a particular parameter change is applied, see the **Changes Take Effect** column in the tables for [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached). For information on rebooting a cluster's nodes, see [Rebooting clusters](Clusters.html#Rebooting).
+ For Redis OSS, when you change a cluster's parameters, the change is applied to the cluster either immediately or, with the exceptions noted following, after the cluster nodes are rebooted. This is true whether you change the cluster's parameter group itself or a parameter value within the cluster's parameter group. To determine when a particular parameter change is applied, see the **Changes Take Effect** column in the tables for [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis). 

  For more information on rebooting Valkey or Redis OSS nodes, see [Rebooting nodes](nodes.rebooting.md).
**Valkey or Redis OSS (Cluster Mode Enabled) parameter changes**  
If you change either of the following parameters on a Valkey or Redis OSS (cluster mode enabled) cluster, follow these steps:
+ `activerehashing`
+ `databases`

1. Create a manual backup of your cluster. See [Taking manual backups](backups-manual.md).
1. Delete the cluster. See [Deleting clusters](Clusters.html#Delete).
1. Restore the cluster using the altered parameter group, seeding the new cluster from the backup. See [Restoring from a backup into a new cache](backups-restoring.md).

Changes to other parameters don't require this.
+ You can associate parameter groups with Valkey and Redis OSS global datastores. *Global datastores* are a collection of one or more clusters that span AWS Regions. In this case, the parameter group is shared by all clusters that make up the global datastore. Any modifications to the parameter group of the primary cluster are replicated to all remaining clusters in the global datastore. For more information, see [Replication across AWS Regions using global datastores](Redis-Global-Datastore.md).

  You can check if a parameter group is part of a global datastore by looking in these locations:
  + On the ElastiCache console on the **Parameter Groups** page, the yes/no **Global** attribute 
  + The yes/no `IsGlobal` property of the [CacheParameterGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CacheParameterGroup.html) API operation
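Because the `IsGlobal` attribute appears in `describe-cache-parameter-groups` output, a script can pick out the global-datastore groups after the fact. The following sketch is illustrative (the helper name and trimmed sample response are not part of the API); it filters the parsed JSON from the CLI:

```python
def global_parameter_groups(response: dict) -> list:
    """Return names of parameter groups whose IsGlobal attribute is set.

    `response` is the parsed JSON from
    `aws elasticache describe-cache-parameter-groups`; groups without the
    attribute are treated as not global.
    """
    return [
        g["CacheParameterGroupName"]
        for g in response.get("CacheParameterGroups", [])
        if str(g.get("IsGlobal", "no")).lower() in ("yes", "true")
    ]

# Trimmed sample response, shaped like the CLI output shown later in this guide.
sample = {
    "CacheParameterGroups": [
        {"CacheParameterGroupName": "myRed56",
         "CacheParameterGroupFamily": "redis5.0",
         "IsGlobal": "yes"},
        {"CacheParameterGroupName": "myMem14",
         "CacheParameterGroupFamily": "memcached1.4"},
    ]
}
print(global_parameter_groups(sample))  # ['myRed56']
```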

# Cache parameter group tiers in ElastiCache
<a name="ParameterGroups.Tiers"></a>

Amazon ElastiCache has three tiers of cache parameter groups as shown following.

![\[Image: Amazon ElastiCache parameter group tiers\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-ParameterGroups-Tiers.png)


*Amazon ElastiCache parameter group tiers*

**Global Default**

The top-level root parameter group for all Amazon ElastiCache customers in the region.

The global default cache parameter group:
+ Is reserved for ElastiCache and not available to the customer.

**Customer Default**

A copy of the Global Default cache parameter group, which is created for the customer's use.

The Customer Default cache parameter group:
+ Is created and owned by ElastiCache.
+ Is available to the customer for use as a cache parameter group for any clusters running an engine version supported by this cache parameter group.
+ Cannot be edited by the customer.

**Customer Owned**

A copy of the Customer Default cache parameter group. A Customer Owned cache parameter group is created whenever the customer creates a cache parameter group.

The Customer Owned cache parameter group:
+ Is created and owned by the customer.
+ Can be assigned to any of the customer's compatible clusters.
+ Can be modified by the customer to create a custom cache parameter group. 

   Not all parameter values can be modified. For more information on Memcached values, see [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached). For more information on Valkey and Redis OSS values, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).

# Creating an ElastiCache parameter group
<a name="ParameterGroups.Creating"></a>

You need to create a new parameter group if there are one or more parameter values that you want to change from their default values. You can create a parameter group using the ElastiCache console, the AWS CLI, or the ElastiCache API.

## Creating an ElastiCache parameter group (Console)
<a name="ParameterGroups.Creating.CON"></a>

The following procedure shows how to create a parameter group using the ElastiCache console.

**To create a parameter group using the ElastiCache console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of all available parameter groups, in the left hand navigation pane choose **Parameter Groups**.

1. To create a parameter group, choose **Create Parameter Group**.

   The **Create Parameter Group** screen appears.

1. From the **Family** list, choose the parameter group family that will be the template for your parameter group.

   The parameter group family, such as *memcached1.4* or *redis3.2*, defines the actual parameters in your parameter group and their initial values. The parameter group family must match the cluster's engine and version.

1. In the **Name** box, type in a unique name for this parameter group.

   When creating a cluster or modifying a cluster's parameter group, you will choose the parameter group by its name. Therefore, we recommend that the name be informative and somehow identify the parameter group's family.

   Parameter group naming constraints are as follows:
   + Must begin with an ASCII letter.
   + Can only contain ASCII letters, digits, and hyphens.
   + Must be 1–255 characters long.
   + Can't contain two consecutive hyphens.
   + Can't end with a hyphen.

1. In the **Description** box, type in a description for the parameter group.

1. To create the parameter group, choose **Create**.

   To terminate the process without creating the parameter group, choose **Cancel**.

1. When the parameter group is created, it will have the family's default values. To change the default values you must modify the parameter group. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

## Creating an ElastiCache parameter group (AWS CLI)
<a name="ParameterGroups.Creating.CLI"></a>

To create a parameter group using the AWS CLI, use the command `create-cache-parameter-group` with these parameters.
+ `--cache-parameter-group-name` — The name of the parameter group.

  Parameter group naming constraints are as follows:
  + Must begin with an ASCII letter.
  + Can only contain ASCII letters, digits, and hyphens.
  + Must be 1–255 characters long.
  + Can't contain two consecutive hyphens.
  + Can't end with a hyphen.
+ `--cache-parameter-group-family` — The engine and version family for the parameter group.
+ `--description` — A user-supplied description for the parameter group.
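If you generate parameter group names in a script, you can check them against the constraints above before calling `create-cache-parameter-group`. This validator is a hypothetical client-side convenience; the service remains the authority on accepted names.

```python
import re

# One letter to start, then letters/digits, with single hyphens allowed only
# between alphanumeric characters (so no "--" and no trailing hyphen).
_NAME_RE = re.compile(r"^[A-Za-z](?:-?[A-Za-z0-9])*$")

def is_valid_parameter_group_name(name: str) -> bool:
    """Mirror the documented naming constraints: ASCII letter first; only
    ASCII letters, digits, and hyphens; 1-255 characters; no consecutive
    hyphens; no trailing hyphen."""
    return 1 <= len(name) <= 255 and bool(_NAME_RE.match(name))
```

For example, `myMem14` passes, while `1group`, `my--group`, and `group-` are rejected.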

**Example**  
The following example creates a parameter group named *myMem14* using the memcached1.4 family as the template.   
For Linux, macOS, or Unix:  

```
aws elasticache create-cache-parameter-group \
    --cache-parameter-group-name myMem14  \
    --cache-parameter-group-family memcached1.4 \
    --description "My first parameter group"
```
For Windows:  

```
aws elasticache create-cache-parameter-group ^
    --cache-parameter-group-name myMem14  ^
    --cache-parameter-group-family memcached1.4 ^
    --description "My first parameter group"
```
The output from this command should look something like this.  

```
{
    "CacheParameterGroup": {
        "CacheParameterGroupName": "myMem14", 
        "CacheParameterGroupFamily": "memcached1.4", 
        "Description": "My first parameter group"
    }
}
```

**Example**  
The following example creates a parameter group named *myRed28* using the redis2.8 family as the template.   
For Linux, macOS, or Unix:  

```
aws elasticache create-cache-parameter-group \
    --cache-parameter-group-name myRed28  \
    --cache-parameter-group-family redis2.8 \
    --description "My first parameter group"
```
For Windows:  

```
aws elasticache create-cache-parameter-group ^
    --cache-parameter-group-name myRed28  ^
    --cache-parameter-group-family redis2.8 ^
    --description "My first parameter group"
```
The output from this command should look something like this.  

```
{
    "CacheParameterGroup": {
        "CacheParameterGroupName": "myRed28", 
        "CacheParameterGroupFamily": "redis2.8", 
        "Description": "My first parameter group"
    }
}
```

When the parameter group is created, it will have the family's default values. To change the default values you must modify the parameter group. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

For more information, see [https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-cache-parameter-group.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-cache-parameter-group.html).

## Creating an ElastiCache parameter group (ElastiCache API)
<a name="ParameterGroups.Creating.API"></a>

To create a parameter group using the ElastiCache API, use the `CreateCacheParameterGroup` action with these parameters.
+ `ParameterGroupName` — The name of the parameter group.

  Parameter group naming constraints are as follows:
  + Must begin with an ASCII letter.
  + Can only contain ASCII letters, digits, and hyphens.
  + Must be 1–255 characters long.
  + Can't contain two consecutive hyphens.
  + Can't end with a hyphen.
+ `CacheParameterGroupFamily` — The engine and version family for the parameter group. For example, `memcached1.4` or `redis2.8`.
+ `Description` — A user-supplied description for the parameter group.

**Example**  
The following example creates a parameter group named *myMem14* using the memcached1.4 family as the template.   

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=CreateCacheParameterGroup
   &CacheParameterGroupFamily=memcached1.4
   &CacheParameterGroupName=myMem14
   &Description=My%20first%20parameter%20group
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action should look something like this.  

```
<CreateCacheParameterGroupResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <CreateCacheParameterGroupResult>
    <CacheParameterGroup>
      <CacheParameterGroupName>myMem14</CacheParameterGroupName>
      <CacheParameterGroupFamily>memcached1.4</CacheParameterGroupFamily>
      <Description>My first parameter group</Description>
    </CacheParameterGroup>
  </CreateCacheParameterGroupResult>
  <ResponseMetadata>
    <RequestId>d8465952-af48-11e0-8d36-859edca6f4b8</RequestId>
  </ResponseMetadata>
</CreateCacheParameterGroupResponse>
```

**Example**  
The following example creates a parameter group named *myRed28* using the redis2.8 family as the template.   

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=CreateCacheParameterGroup
   &CacheParameterGroupFamily=redis2.8
   &CacheParameterGroupName=myRed28
   &Description=My%20first%20parameter%20group
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action should look something like this.  

```
<CreateCacheParameterGroupResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <CreateCacheParameterGroupResult>
    <CacheParameterGroup>
      <CacheParameterGroupName>myRed28</CacheParameterGroupName>
      <CacheParameterGroupFamily>redis2.8</CacheParameterGroupFamily>
      <Description>My first parameter group</Description>
    </CacheParameterGroup>
  </CreateCacheParameterGroupResult>
  <ResponseMetadata>
    <RequestId>d8465952-af48-11e0-8d36-859edca6f4b8</RequestId>
  </ResponseMetadata>
</CreateCacheParameterGroupResponse>
```

When the parameter group is created, it will have the family's default values. To change the default values you must modify the parameter group. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).

For more information, see [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateCacheParameterGroup.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_CreateCacheParameterGroup.html).

# Listing ElastiCache parameter groups by name
<a name="ParameterGroups.ListingGroups"></a>

You can list the parameter groups using the ElastiCache console, the AWS CLI, or the ElastiCache API.

## Listing parameter groups by name (Console)
<a name="ParameterGroups.ListingGroups.CON"></a>

The following procedure shows how to view a list of the parameter groups using the ElastiCache console.

**To list parameter groups using the ElastiCache console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of all available parameter groups, in the left hand navigation pane choose **Parameter Groups**.

## Listing ElastiCache parameter groups by name (AWS CLI)
<a name="ParameterGroups.ListingGroups.CLI"></a>

To generate a list of parameter groups using the AWS CLI, use the command `describe-cache-parameter-groups`. If you provide a parameter group's name, only that parameter group will be listed. If you do not provide a parameter group's name, up to `--max-records` parameter groups will be listed. In either case, the parameter group's name, family, and description are listed.

**Example**  
The following sample code lists the parameter group *myMem14*.  
For Linux, macOS, or Unix:  

```
aws elasticache describe-cache-parameter-groups \
    --cache-parameter-group-name myMem14
```
For Windows:  

```
aws elasticache describe-cache-parameter-groups ^
    --cache-parameter-group-name myMem14
```
The output of this command will look something like this, listing the name, family, and description for the parameter group.  

```
{
    "CacheParameterGroups": [
	    {
	        "CacheParameterGroupName": "myMem14", 
	        "CacheParameterGroupFamily": "memcached1.4", 
	        "Description": "My first parameter group"
	    }
    ]
}
```

**Example**  
The following sample code lists the parameter group *myRed28*.  
For Linux, macOS, or Unix:  

```
aws elasticache describe-cache-parameter-groups \
    --cache-parameter-group-name myRed28
```
For Windows:  

```
aws elasticache describe-cache-parameter-groups ^
    --cache-parameter-group-name myRed28
```
The output of this command will look something like this, listing the name, family, and description for the parameter group.  

```
{
    "CacheParameterGroups": [
	    {
	        "CacheParameterGroupName": "myRed28", 
	        "CacheParameterGroupFamily": "redis2.8", 
	        "Description": "My first parameter group"
	    }
    ]
}
```

**Example**  
The following sample code lists the parameter group *myRed56*, which runs on Redis OSS engine version 5.0.6 or later. If the parameter group is part of a global datastore (see [Replication across AWS Regions using global datastores](Redis-Global-Datastore.md)), the `IsGlobal` property in the output is `yes`.  
For Linux, macOS, or Unix:  

```
aws elasticache describe-cache-parameter-groups \
    --cache-parameter-group-name myRed56
```
For Windows:  

```
aws elasticache describe-cache-parameter-groups ^
    --cache-parameter-group-name myRed56
```
The output of this command will look something like this, listing the name, family, description, and `IsGlobal` attribute for the parameter group.  

```
{
    "CacheParameterGroups": [
	    {
	        "CacheParameterGroupName": "myRed56", 
	        "CacheParameterGroupFamily": "redis5.0", 	        
	        "Description": "My first parameter group",
	        "IsGlobal": "yes"	        
	    }
    ]
}
```

**Example**  
The following sample code lists up to 10 parameter groups.  

```
aws elasticache describe-cache-parameter-groups --max-records 10
```
The JSON output of this command will look something like this, listing the name, family, and description for each parameter group. Parameter groups that belong to a global datastore also include a flag indicating this.  

```
{
    "CacheParameterGroups": [
        {
            "CacheParameterGroupName": "custom-redis32", 
            "CacheParameterGroupFamily": "redis3.2", 
            "Description": "custom parameter group with reserved-memory > 0"
        }, 
        {
            "CacheParameterGroupName": "default.memcached1.4", 
            "CacheParameterGroupFamily": "memcached1.4", 
            "Description": "Default parameter group for memcached1.4"
        }, 
        {
            "CacheParameterGroupName": "default.redis2.6", 
            "CacheParameterGroupFamily": "redis2.6", 
            "Description": "Default parameter group for redis2.6"
        }, 
        {
            "CacheParameterGroupName": "default.redis2.8", 
            "CacheParameterGroupFamily": "redis2.8", 
            "Description": "Default parameter group for redis2.8"
        }, 
        {
            "CacheParameterGroupName": "default.redis3.2", 
            "CacheParameterGroupFamily": "redis3.2", 
            "Description": "Default parameter group for redis3.2"
        }, 
        {
            "CacheParameterGroupName": "default.redis3.2.cluster.on", 
            "CacheParameterGroupFamily": "redis3.2", 
            "Description": "Customized default parameter group for redis3.2 with cluster mode on"
        },
        {
            "CacheParameterGroupName": "default.redis5.6.cluster.on", 
            "CacheParameterGroupFamily": "redis5.0", 
            "Description": "Customized default parameter group for redis5.6 with cluster mode on",
            "isGlobal": "yes"
        }
    ]
}
```
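As the listing shows, the ElastiCache-owned default groups follow a `default.` naming convention. If you want to separate those from your own groups in a script, a small helper can partition the parsed output. The prefix check is a convention observed in the listing, not a documented contract:

```python
def split_parameter_groups(response: dict) -> tuple:
    """Partition group names into ElastiCache-owned defaults and
    customer-created groups, based on the `default.` prefix seen in the
    listing above."""
    defaults, custom = [], []
    for group in response.get("CacheParameterGroups", []):
        name = group["CacheParameterGroupName"]
        (defaults if name.startswith("default.") else custom).append(name)
    return defaults, custom

# Trimmed sample shaped like the CLI output above.
sample = {
    "CacheParameterGroups": [
        {"CacheParameterGroupName": "custom-redis32"},
        {"CacheParameterGroupName": "default.redis3.2"},
    ]
}
print(split_parameter_groups(sample))  # (['default.redis3.2'], ['custom-redis32'])
```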

For more information, see [https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-parameter-groups.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-parameter-groups.html).

## Listing ElastiCache parameter groups by name (ElastiCache API)
<a name="ParameterGroups.ListingGroups.API"></a>

To generate a list of parameter groups using the ElastiCache API, use the `DescribeCacheParameterGroups` action. If you provide a parameter group's name, only that parameter group will be listed. If you do not provide a parameter group's name, up to `MaxRecords` parameter groups will be listed. In either case, the parameter group's name, family, and description are listed.

**Example**  
The following sample code lists the parameter group *myMem14*.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameterGroups
   &CacheParameterGroupName=myMem14
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this, listing the name, family, and description for each parameter group.  

```
<DescribeCacheParameterGroupsResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParameterGroupsResult>
    <CacheParameterGroups>
      <CacheParameterGroup>
        <CacheParameterGroupName>myMem14</CacheParameterGroupName>
        <CacheParameterGroupFamily>memcached1.4</CacheParameterGroupFamily>
        <Description>My custom Memcached 1.4 parameter group</Description>
      </CacheParameterGroup>
    </CacheParameterGroups>
  </DescribeCacheParameterGroupsResult>
  <ResponseMetadata>
    <RequestId>3540cc3d-af48-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParameterGroupsResponse>
```

**Example**  
The following sample code lists up to 10 parameter groups.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameterGroups
   &MaxRecords=10
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this, listing the name, family, and description for each parameter group. Parameter groups that belong to a global datastore also include the `isGlobal` element.  

```
<DescribeCacheParameterGroupsResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParameterGroupsResult>
    <CacheParameterGroups>
      <CacheParameterGroup>
        <CacheParameterGroupName>myRedis28</CacheParameterGroupName>
        <CacheParameterGroupFamily>redis2.8</CacheParameterGroupFamily>
        <Description>My custom Redis 2.8 parameter group</Description>
      </CacheParameterGroup>
      <CacheParameterGroup>
        <CacheParameterGroupName>myMem14</CacheParameterGroupName>
        <CacheParameterGroupFamily>memcached1.4</CacheParameterGroupFamily>
        <Description>My custom Memcached 1.4 parameter group</Description>
      </CacheParameterGroup>
       <CacheParameterGroup>
        <CacheParameterGroupName>myRedis56</CacheParameterGroupName>
        <CacheParameterGroupFamily>redis5.0</CacheParameterGroupFamily>
        <Description>My custom redis 5.6 parameter group</Description>
        <isGlobal>yes</isGlobal>
      </CacheParameterGroup>
    </CacheParameterGroups>
  </DescribeCacheParameterGroupsResult>
  <ResponseMetadata>
    <RequestId>3540cc3d-af48-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParameterGroupsResponse>
```

**Example**  
The following sample code lists the parameter group *myRed28*.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameterGroups
   &CacheParameterGroupName=myRed28
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this, listing the name, family, and description.  

```
<DescribeCacheParameterGroupsResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParameterGroupsResult>
    <CacheParameterGroups>
      <CacheParameterGroup>
        <CacheParameterGroupName>myRed28</CacheParameterGroupName>
        <CacheParameterGroupFamily>redis2.8</CacheParameterGroupFamily>
        <Description>My custom Redis 2.8 parameter group</Description>
      </CacheParameterGroup>
    </CacheParameterGroups>
  </DescribeCacheParameterGroupsResult>
  <ResponseMetadata>
    <RequestId>3540cc3d-af48-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParameterGroupsResponse>
```

**Example**  
The following sample code lists the parameter group *myRed56*.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameterGroups
   &CacheParameterGroupName=myRed56
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this, listing the name, family, description, and whether the parameter group is part of a global datastore (`isGlobal`).  

```
<DescribeCacheParameterGroupsResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParameterGroupsResult>
    <CacheParameterGroups>
      <CacheParameterGroup>
        <CacheParameterGroupName>myRed56</CacheParameterGroupName>
        <CacheParameterGroupFamily>redis5.0</CacheParameterGroupFamily>
        <Description>My custom Redis 5.6 parameter group</Description>
        <isGlobal>yes</isGlobal>
      </CacheParameterGroup>
    </CacheParameterGroups>
  </DescribeCacheParameterGroupsResult>
  <ResponseMetadata>
    <RequestId>3540cc3d-af48-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParameterGroupsResponse>
```
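
If you call the API directly, you can pull the interesting fields out of the XML response with a short parser. The following sketch (standard-library Python, using an abridged copy of the response above as sample input) extracts the name, family, and description of each group; the `{*}` namespace wildcard keeps the lookups working even if the document's namespace version changes. It is a minimal illustration, not an AWS SDK.

```python
import xml.etree.ElementTree as ET

# Abridged DescribeCacheParameterGroups response, as shown above.
RESPONSE = """\
<DescribeCacheParameterGroupsResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParameterGroupsResult>
    <CacheParameterGroups>
      <CacheParameterGroup>
        <CacheParameterGroupName>myRed56</CacheParameterGroupName>
        <CacheParameterGroupFamily>redis5.0</CacheParameterGroupFamily>
        <Description>My custom parameter group</Description>
      </CacheParameterGroup>
    </CacheParameterGroups>
  </DescribeCacheParameterGroupsResult>
</DescribeCacheParameterGroupsResponse>
"""

def parameter_groups(xml_body: str) -> list:
    """Extract name, family, and description for each group in the response."""
    root = ET.fromstring(xml_body)
    groups = []
    # The response uses a default XML namespace, so match tags with the
    # {*} wildcard instead of hard-coding the namespace URI.
    for group in root.findall(".//{*}CacheParameterGroup"):
        groups.append({
            "name": group.findtext("{*}CacheParameterGroupName"),
            "family": group.findtext("{*}CacheParameterGroupFamily"),
            "description": group.findtext("{*}Description"),
        })
    return groups

print(parameter_groups(RESPONSE))
```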

For more information, see [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheParameterGroups.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheParameterGroups.html).

# Listing an ElastiCache parameter group's values
<a name="ParameterGroups.ListingValues"></a>

You can list the parameters and their values for a parameter group using the ElastiCache console, the AWS CLI, or the ElastiCache API.

## Listing an ElastiCache parameter group's values (Console)
<a name="ParameterGroups.ListingValues.CON"></a>

The following procedure shows how to list the parameters and their values for a parameter group using the ElastiCache console.

**To list a parameter group's parameters and their values using the ElastiCache console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of all available parameter groups, in the left-hand navigation pane choose **Parameter Groups**.

1. Choose the parameter group for which you want to list the parameters and values by choosing the box to the left of the parameter group's name.

   The parameters and their values will be listed at the bottom of the screen. Due to the number of parameters, you may have to scroll up and down to find the parameter you're interested in.

## Listing a parameter group's values (AWS CLI)
<a name="ParameterGroups.ListingValues.CLI"></a>

To list a parameter group's parameters and their values using the AWS CLI, use the command `describe-cache-parameters`.

**Example**  
The following sample code lists all the Memcached parameters and their values for the parameter group *myMem14*.  
For Linux, macOS, or Unix:  

```
aws elasticache describe-cache-parameters \
    --cache-parameter-group-name myMem14
```
For Windows:  

```
aws elasticache describe-cache-parameters ^
    --cache-parameter-group-name myMem14
```

**Example**  
The following sample code lists all the parameters and their values for the parameter group *myRed28*.  
For Linux, macOS, or Unix:  

```
aws elasticache describe-cache-parameters \
    --cache-parameter-group-name myRed28
```
For Windows:  

```
aws elasticache describe-cache-parameters ^
    --cache-parameter-group-name myRed28
```

For more information, see [https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-parameters.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-parameters.html).
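
Because the CLI returns JSON, you can also filter the output client-side. The sketch below uses a hand-abridged response fragment as sample data (real responses contain many more entries, plus a `CacheNodeTypeSpecificParameters` list) and picks out the parameters whose `IsModifiable` flag permits changes. It is an illustration of working with the response shape, not a complete tool.

```python
import json

# Abridged describe-cache-parameters output (illustrative sample data only).
CLI_OUTPUT = json.loads("""
{
  "Parameters": [
    {"ParameterName": "activerehashing", "ParameterValue": "yes",
     "IsModifiable": false, "ChangeType": "requires-reboot"},
    {"ParameterName": "maxmemory-samples", "ParameterValue": "3",
     "IsModifiable": true, "ChangeType": "immediate"}
  ]
}
""")

def modifiable(response: dict) -> list:
    """Return the names of the parameters you are allowed to change."""
    return [p["ParameterName"] for p in response["Parameters"] if p["IsModifiable"]]

print(modifiable(CLI_OUTPUT))  # ['maxmemory-samples']
```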

## Listing a parameter group's values (ElastiCache API)
<a name="ParameterGroups.ListingValues.API"></a>

To list a parameter group's parameters and their values using the ElastiCache API, use the `DescribeCacheParameters` action.

**Example**  
The following sample code lists all the Memcached parameters for the parameter group *myMem14*.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameters
   &CacheParameterGroupName=myMem14
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this. This response has been truncated.  

```
<DescribeCacheParametersResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParametersResult>
    <CacheClusterClassSpecificParameters>
      <CacheNodeTypeSpecificParameter>
        <DataType>integer</DataType>
        <Source>system</Source>
        <IsModifiable>false</IsModifiable>
        <Description>The maximum configurable amount of memory to use to store items, in megabytes.</Description>
        <CacheNodeTypeSpecificValues>
          <CacheNodeTypeSpecificValue>
            <Value>1000</Value>
            <CacheClusterClass>cache.c1.medium</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>6000</Value>
            <CacheClusterClass>cache.c1.xlarge</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>7100</Value>
            <CacheClusterClass>cache.m1.large</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>1300</Value>
            <CacheClusterClass>cache.m1.small</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          
...output omitted...

    </CacheClusterClassSpecificParameters>
  </DescribeCacheParametersResult>
  <ResponseMetadata>
    <RequestId>6d355589-af49-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParametersResponse>
```

**Example**  
The following sample code lists all the parameters for the parameter group *myRed28*.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DescribeCacheParameters
   &CacheParameterGroupName=myRed28
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
The response from this action will look something like this. This response has been truncated.  

```
<DescribeCacheParametersResponse xmlns="http://elasticache.amazonaws.com/doc/2013-06-15/">
  <DescribeCacheParametersResult>
    <CacheClusterClassSpecificParameters>
      <CacheNodeTypeSpecificParameter>
        <DataType>integer</DataType>
        <Source>system</Source>
        <IsModifiable>false</IsModifiable>
        <Description>The maximum configurable amount of memory to use to store items, in megabytes.</Description>
        <CacheNodeTypeSpecificValues>
          <CacheNodeTypeSpecificValue>
            <Value>1000</Value>
            <CacheClusterClass>cache.c1.medium</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>6000</Value>
            <CacheClusterClass>cache.c1.xlarge</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>7100</Value>
            <CacheClusterClass>cache.m1.large</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          <CacheNodeTypeSpecificValue>
            <Value>1300</Value>
            <CacheClusterClass>cache.m1.small</CacheClusterClass>
          </CacheNodeTypeSpecificValue>
          
...output omitted...

    </CacheClusterClassSpecificParameters>
  </DescribeCacheParametersResult>
  <ResponseMetadata>
    <RequestId>6d355589-af49-11e0-97f9-279771c4477e</RequestId>
  </ResponseMetadata>
</DescribeCacheParametersResponse>
```

For more information, see [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheParameters.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheParameters.html).

# Modifying an ElastiCache parameter group
<a name="ParameterGroups.Modifying"></a>

**Important**  
You cannot modify any default parameter group.

You can modify some parameter values in a parameter group. These parameter values are applied to clusters associated with the parameter group. For more information on when a parameter value change is applied to a parameter group, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis) and [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached).

## Modifying a parameter group (Console)
<a name="ParameterGroups.Modifying.CON"></a>

The following procedure shows how to change the `binding_protocol` parameter's value using the ElastiCache console. You can use the same procedure to change the value of any parameter.

**To change a parameter's value using the ElastiCache console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of all available parameter groups, in the left-hand navigation pane choose **Parameter Groups**.

1. Choose the parameter group you want to modify by choosing the box to the left of the parameter group's name.

   The parameter group's parameters will be listed at the bottom of the screen. You may need to page through the list to see all the parameters.

1. To modify one or more parameters, choose **Edit Parameters**.

1. In the **Edit Parameter Group:** screen, scroll using the left and right arrows until you find the `binding_protocol` parameter, then type `ascii` in the **Value** column.

1. Choose **Save Changes**.

1. For Memcached, to find the name of the parameter you changed, see [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached). If changes to the parameter take place *After restart*, reboot every cluster that uses this parameter group. For more information, see [Rebooting clusters](Clusters.html#Rebooting).

1. With Valkey and Redis OSS, to find the name of the parameter you changed, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis). If you have a Valkey or Redis OSS (cluster mode disabled) cluster and make changes to the following parameters, you must reboot the nodes in the cluster:
   + activerehashing
   + databases

    For more information, see [Rebooting nodes](nodes.rebooting.md).
**Valkey or Redis OSS (Cluster Mode Enabled) parameter changes**  
If you make changes to the following parameters on a Valkey or Redis OSS (cluster mode enabled) cluster, follow these steps:
+ activerehashing
+ databases

1. Create a manual backup of your cluster. See [Taking manual backups](backups-manual.md).

1. Delete the cluster. See [Deleting a cluster in ElastiCache](Clusters.Delete.md).

1. Restore the cluster, using the altered parameter group and the backup to seed the new cluster. See [Restoring from a backup into a new cache](backups-restoring.md).

Changes to other parameters do not require this.



## Modifying a parameter group (AWS CLI)
<a name="ParameterGroups.Modifying.CLI"></a>

To change a parameter's value using the AWS CLI, use the command `modify-cache-parameter-group`.

**Example**  
With Memcached, to find the name and permitted values of the parameter you want to change, see [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached).  
The following sample code sets the value of two parameters, *chunk\_size* and *chunk\_size\_growth\_fact*, on the parameter group `myMem14`.  
For Linux, macOS, or Unix:  

```
aws elasticache modify-cache-parameter-group \
    --cache-parameter-group-name myMem14 \
    --parameter-name-values \
        ParameterName=chunk_size,ParameterValue=96 \
        ParameterName=chunk_size_growth_fact,ParameterValue=1.5
```
For Windows:  

```
aws elasticache modify-cache-parameter-group ^
    --cache-parameter-group-name myMem14 ^
    --parameter-name-values ^
        ParameterName=chunk_size,ParameterValue=96 ^
        ParameterName=chunk_size_growth_fact,ParameterValue=1.5
```
Output from this command will look something like this.  

```
{
    "CacheParameterGroupName": "myMem14"
}
```

**Example**  
With Valkey and Redis OSS, to find the name and permitted values of the parameter you want to change, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).  
The following sample code sets the value of two parameters, *reserved-memory-percent* and *cluster-enabled* on the parameter group `myredis32-on-30`. We set *reserved-memory-percent* to `30` (30 percent) and *cluster-enabled* to `yes` so that the parameter group can be used with Valkey or Redis OSS (cluster mode enabled) clusters (replication groups).  
For Linux, macOS, or Unix:  

```
aws elasticache modify-cache-parameter-group \
    --cache-parameter-group-name myredis32-on-30 \
    --parameter-name-values \
        ParameterName=reserved-memory-percent,ParameterValue=30 \
        ParameterName=cluster-enabled,ParameterValue=yes
```
For Windows:  

```
aws elasticache modify-cache-parameter-group ^
    --cache-parameter-group-name myredis32-on-30 ^
    --parameter-name-values ^
        ParameterName=reserved-memory-percent,ParameterValue=30 ^
        ParameterName=cluster-enabled,ParameterValue=yes
```
Output from this command will look something like this.  

```
{
    "CacheParameterGroupName": "myredis32-on-30"
}
```

For more information, see [https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-parameter-group.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-parameter-group.html).

To find the name of the parameter you changed, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis). 

 If you have a Valkey or Redis OSS (cluster mode disabled) cluster and make changes to the following parameters, you must reboot the nodes in the cluster:
+ activerehashing
+ databases

 For more information, see [Rebooting nodes](nodes.rebooting.md).

**Valkey or Redis OSS (Cluster Mode Enabled) parameter changes**  
If you make changes to the following parameters on a Valkey or Redis OSS (cluster mode enabled) cluster, follow these steps:
+ activerehashing
+ databases

1. Create a manual backup of your cluster. See [Taking manual backups](backups-manual.md).

1. Delete the cluster. See [Deleting a cluster in ElastiCache](Clusters.Delete.md).

1. Restore the cluster, using the altered parameter group and the backup to seed the new cluster. See [Restoring from a backup into a new cache](backups-restoring.md).

Changes to other parameters do not require this.

## Modifying a parameter group (ElastiCache API)
<a name="ParameterGroups.Modifying.API"></a>

To change a parameter group's parameter values using the ElastiCache API, use the `ModifyCacheParameterGroup` action.

**Example**  
With Memcached, to find the name and permitted values of the parameter you want to change, see [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached).  
The following sample code sets the value of two parameters, *chunk\_size* and *chunk\_size\_growth\_fact*, on the parameter group `myMem14`.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ModifyCacheParameterGroup
   &CacheParameterGroupName=myMem14
   &ParameterNameValues.member.1.ParameterName=chunk_size
   &ParameterNameValues.member.1.ParameterValue=96
   &ParameterNameValues.member.2.ParameterName=chunk_size_growth_fact
   &ParameterNameValues.member.2.ParameterValue=1.5
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```

**Example**  
With Valkey and Redis OSS, to find the name and permitted values of the parameter you want to change, see [Valkey and Redis OSS parameters](ParameterGroups.Engine.md#ParameterGroups.Redis).  
The following sample code sets the value of two parameters, *reserved-memory-percent* and *cluster-enabled* on the parameter group `myredis32-on-30`. We set *reserved-memory-percent* to `30` (30 percent) and *cluster-enabled* to `yes` so that the parameter group can be used with Valkey or Redis OSS (cluster mode enabled) clusters (replication groups).  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ModifyCacheParameterGroup
   &CacheParameterGroupName=myredis32-on-30
   &ParameterNameValues.member.1.ParameterName=reserved-memory-percent
   &ParameterNameValues.member.1.ParameterValue=30
   &ParameterNameValues.member.2.ParameterName=cluster-enabled
   &ParameterNameValues.member.2.ParameterValue=yes
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```
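
The indexed `ParameterNameValues.member.N.*` query parameters are easy to get wrong by hand. As a sketch (the helper name and structure are my own, not part of any AWS SDK), the standard-library Python below flattens a plain mapping of parameter names to values into that indexed form and assembles the query string:

```python
from urllib.parse import urlencode

def parameter_name_values(changes: dict) -> dict:
    """Flatten {name: value} into the indexed ParameterNameValues.member.N.*
    query parameters that the ModifyCacheParameterGroup action expects."""
    flat = {}
    for i, (name, value) in enumerate(changes.items(), start=1):
        flat[f"ParameterNameValues.member.{i}.ParameterName"] = name
        flat[f"ParameterNameValues.member.{i}.ParameterValue"] = str(value)
    return flat

# Build the (unsigned) query string for the example above; a real request
# still needs the SigV4 signature parameters.
query = {
    "Action": "ModifyCacheParameterGroup",
    "CacheParameterGroupName": "myredis32-on-30",
    **parameter_name_values({"reserved-memory-percent": 30, "cluster-enabled": "yes"}),
}
print(urlencode(query))
```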

For more information, see [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheParameterGroup.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheParameterGroup.html).

If you have a Valkey or Redis OSS (cluster mode disabled) cluster and make changes to the following parameters, you must reboot the nodes in the cluster:
+ activerehashing
+ databases

 For more information, see [Rebooting nodes](nodes.rebooting.md).

**Valkey or Redis OSS (Cluster Mode Enabled) parameter changes**  
If you make changes to the following parameters on a Valkey or Redis OSS (cluster mode enabled) cluster, follow these steps:
+ activerehashing
+ databases

1. Create a manual backup of your cluster. See [Taking manual backups](backups-manual.md).

1. Delete the cluster. See [Deleting a cluster in ElastiCache](Clusters.Delete.md).

1. Restore the cluster, using the altered parameter group and the backup to seed the new cluster. See [Restoring from a backup into a new cache](backups-restoring.md).

Changes to other parameters do not require this.

# Deleting an ElastiCache parameter group
<a name="ParameterGroups.Deleting"></a>

You can delete a custom parameter group using the ElastiCache console, the AWS CLI, or the ElastiCache API.

You cannot delete a parameter group if it is associated with any clusters. Nor can you delete any of the default parameter groups.

## Deleting a parameter group (Console)
<a name="ParameterGroups.Deleting.CON"></a>

The following procedure shows how to delete a parameter group using the ElastiCache console.

**To delete a parameter group using the ElastiCache console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. To see a list of all available parameter groups, in the left-hand navigation pane choose **Parameter Groups**.

1. Choose the parameter groups you want to delete by choosing the box to the left of the parameter group's name.

   The **Delete** button will become active.

1. Choose **Delete**.

   The **Delete Parameter Groups** confirmation screen will appear.

1. To delete the parameter groups, on the **Delete Parameter Groups** confirmation screen, choose **Delete**.

   To keep the parameter groups, choose **Cancel**.

## Deleting a parameter group (AWS CLI)
<a name="ParameterGroups.Deleting.CLI"></a>

To delete a parameter group using the AWS CLI, use the command `delete-cache-parameter-group`. For the parameter group to delete, the parameter group specified by `--cache-parameter-group-name` cannot have any clusters associated with it, nor can it be a default parameter group.
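
If you script cleanup, it can help to skip groups the service will refuse to delete before issuing the call. The sketch below is a client-side convenience only (the service enforces these rules regardless); it relies on the fact that default parameter group names begin with `default.`, and it assumes the caller supplies the set of group names currently associated with clusters.

```python
def is_deletable(group_name: str, associated_groups: set) -> bool:
    """A parameter group can be deleted only if it is not a service-provided
    default (names start with "default.") and no cluster currently uses it."""
    return (not group_name.startswith("default.")
            and group_name not in associated_groups)

in_use = {"myMem14"}  # hypothetical: groups referenced by existing clusters
print(is_deletable("default.redis7", in_use))  # False: default group
print(is_deletable("myMem14", in_use))         # False: still associated
print(is_deletable("myRed28", in_use))         # True
```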

The following sample code deletes the *myRed28* parameter group.

**Example**  
For Linux, macOS, or Unix:  

```
aws elasticache delete-cache-parameter-group \
    --cache-parameter-group-name myRed28
```
For Windows:  

```
aws elasticache delete-cache-parameter-group ^
    --cache-parameter-group-name myRed28
```

For more information, see [https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-cache-parameter-group.html](https://docs.aws.amazon.com/cli/latest/reference/elasticache/delete-cache-parameter-group.html).

## Deleting a parameter group (ElastiCache API)
<a name="ParameterGroups.Deleting.API"></a>

To delete a parameter group using the ElastiCache API, use the `DeleteCacheParameterGroup` action. For the parameter group to delete, the parameter group specified by `CacheParameterGroupName` cannot have any clusters associated with it, nor can it be a default parameter group.

**Example**  
With Memcached, the following sample code deletes the *myMem14* parameter group.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DeleteCacheParameterGroup
   &CacheParameterGroupName=myMem14
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```

**Example**  
The following sample code deletes the *myRed28* parameter group.  

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=DeleteCacheParameterGroup
   &CacheParameterGroupName=myRed28
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Timestamp=20150202T192317Z
   &Version=2015-02-02
   &X-Amz-Credential=<credential>
```

For more information, see [https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DeleteCacheParameterGroup.html](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DeleteCacheParameterGroup.html).

# Engine specific parameters
<a name="ParameterGroups.Engine"></a>

**Valkey and Redis OSS**

Most Valkey 8 parameters are compatible with Redis OSS 7.1 parameters. Valkey 7.2 parameters are the same as Redis OSS 7 parameters.

If you do not specify a parameter group for your Valkey or Redis OSS cluster, then a default parameter group appropriate to your engine version will be used. You can't change the values of any parameters in the default parameter group. However, you can create a custom parameter group and assign it to your cluster at any time as long as the values of conditionally modifiable parameters are the same in both parameter groups. For more information, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).

**Topics**
+ [Valkey and Redis OSS parameters](#ParameterGroups.Redis)
+ [Memcached specific parameters](#ParameterGroups.Memcached)

## Valkey and Redis OSS parameters
<a name="ParameterGroups.Redis"></a>

**Topics**
+ [Valkey 8.2 parameter changes](#ParameterGroups.Valkey.8.2)
+ [Valkey 8.1 parameter changes](#ParameterGroups.Valkey.8.1)
+ [Valkey 8.0 parameter changes](#ParameterGroups.Valkey.8)
+ [Valkey 7.2 and Redis OSS 7 parameter changes](#ParameterGroups.Redis.7)
+ [Redis OSS 6.x parameter changes](#ParameterGroups.Redis.6-x)
+ [Redis OSS 5.0.3 parameter changes](#ParameterGroups.Redis.5-0-3)
+ [Redis OSS 5.0.0 parameter changes](#ParameterGroups.Redis.5.0)
+ [Redis OSS 4.0.10 parameter changes](#ParameterGroups.Redis.4-0-10)
+ [Redis OSS 3.2.10 parameter changes](#ParameterGroups.Redis.3-2-10)
+ [Redis OSS 3.2.6 parameter changes](#ParameterGroups.Redis.3-2-6)
+ [Redis OSS 3.2.4 parameter changes](#ParameterGroups.Redis.3-2-4)
+ [Redis OSS 2.8.24 (enhanced) added parameters](#ParameterGroups.Redis.2-8-24)
+ [Redis OSS 2.8.23 (enhanced) added parameters](#ParameterGroups.Redis.2-8-23)
+ [Redis OSS 2.8.22 (enhanced) added parameters](#ParameterGroups.Redis.2-8-22)
+ [Redis OSS 2.8.21 added parameters](#ParameterGroups.Redis.2-8-21)
+ [Redis OSS 2.8.19 added parameters](#ParameterGroups.Redis.2-8-19)
+ [Redis OSS 2.8.6 added parameters](#ParameterGroups.Redis.2-8-6)
+ [Redis OSS 2.6.13 parameters](#ParameterGroups.Redis.2-6-13)
+ [Redis OSS node-type specific parameters](#ParameterGroups.Redis.NodeSpecific)

### Valkey 8.2 parameter changes
<a name="ParameterGroups.Valkey.8.2"></a>

**Parameter group family:** valkey8

**Note**  
Valkey 8.2 parameter changes don't apply to Valkey 8.1.
Valkey 8.0 and above parameter groups are incompatible with Redis OSS 7.2.4.
In Valkey 8.2, the following commands are unavailable for serverless caches: `commandlog`, `commandlog get`, `commandlog help`, `commandlog len`, and `commandlog reset`.


**New parameter groups in Valkey 8.2**  

| Name | Details | Description | 
| --- | --- | --- | 
| search-fanout-target-mode (added in 8.2) | Default: client Type: string Modifiable: Yes Changes Take Effect: Immediately |   Controls how search queries are distributed across nodes in a Valkey cluster environment. This setting accepts two values: "throughput", which optimizes for maximum throughput by randomly distributing search queries across all cluster nodes regardless of client type or READONLY status, and "client", which respects client connection characteristics by routing non-READONLY clients to primary nodes only, READONLY clients on replica connections to replica nodes only, and READONLY clients on primary connections randomly across all nodes. The default is "client" mode, meaning the system respects client connection types and READONLY status when routing queries. Use "throughput" mode for high-volume search workloads where maximum cluster resource utilization is desired, and "client" mode when you want to maintain read/write separation and respect application-level READONLY connection patterns. | 
| search-default-timeout-ms |  Default: 50000 Permitted values: 1 to 60000 Type: integer Modifiable: Yes Changes Take Effect: Immediately | The default Valkey search query timeout (in milliseconds). | 
| search-enable-partial-results | Default: yes Permitted values: yes, no Type: boolean Modifiable: Yes Changes Take Effect: Immediately | Configures the query failure behavior for Valkey search. When enabled, search queries will return partial results if timeouts occur on one or more shards. When disabled, any shard timeout will cause the entire search query to fail and return an error. | 

### Valkey 8.1 parameter changes
<a name="ParameterGroups.Valkey.8.1"></a>

**Parameter group family:** valkey8

**Note**  
Valkey 8.1 parameter changes don't apply to Valkey 8.0.
Valkey 8.0 and above parameter groups are incompatible with Redis OSS 7.2.4.
In Valkey 8.1, the following commands are unavailable for serverless caches: `commandlog`, `commandlog get`, `commandlog help`, `commandlog len`, and `commandlog reset`.


**New parameter groups in Valkey 8.1**  

| Name | Details | Description | 
| --- | --- | --- | 
|  commandlog-request-larger-than (added in 8.1)  |  Default: 1048576 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  The maximum size, in bytes, for requests to be logged by the Valkey Command Log feature.  | 
|  commandlog-large-request-max-len (added in 8.1)  |  Default: 128 Permitted values: 0-1024 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  The maximum length of the Valkey Command Log for requests.  | 
|  commandlog-reply-larger-than (added in 8.1)  |  Default: 1048576 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  The maximum size, in bytes, for responses to be logged by the Valkey Command Log feature.  | 
|  commandlog-large-reply-max-len (added in 8.1)  |  Default: 128 Permitted values: 0-1024 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  The maximum length of the Valkey Command Log for responses.  | 

### Valkey 8.0 parameter changes
<a name="ParameterGroups.Valkey.8"></a>

**Parameter group family:** valkey8

**Note**  
Redis OSS 7.2.4 is incompatible with Valkey 8 and above parameter groups.


**Specific parameter changes in Valkey 8.0**  

| Name | Details | Description | 
| --- | --- | --- | 
|  repl-backlog-size  |  Default: 10485760 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  The size, in bytes, of the primary node backlog buffer. The backlog is used for recording updates to data at the primary node. When a read replica connects to the primary, it attempts to perform a partial sync (psync), where it applies data from the backlog to catch up with the primary node. If the psync fails, then a full sync is required. The minimum value for this parameter is 16384. Note: Beginning with Redis OSS 2.8.22, this parameter applies to the primary cluster as well as the read replicas.  | 
|  maxmemory-samples  |  Default: 3 Permitted values: 1 to 64 Type: integer Modifiable: Yes Changes Take Effect: Immediately  |  For least-recently-used (LRU) and time-to-live (TTL) calculations, this parameter represents the sample size of keys to check. By default, Redis OSS chooses 3 keys and uses the one that was used least recently.  | 
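
The sampling behavior behind `maxmemory-samples` can be illustrated with a toy simulation: pick a few random keys and evict the least recently used among them. Larger sample sizes approximate true LRU more closely at higher CPU cost. This sketch mirrors the idea only; it is not the engine's actual eviction code.

```python
import random

def sample_eviction_candidate(last_used: dict, samples: int = 3):
    """Approximate-LRU sketch: sample `samples` random keys and return the
    least recently used among them (the smallest last-used timestamp)."""
    candidates = random.sample(list(last_used), min(samples, len(last_used)))
    return min(candidates, key=last_used.__getitem__)

# Key -> last-used timestamp (illustrative data).
last_used = {"a": 100, "b": 5, "c": 50, "d": 80}
# When the sample covers every key, the choice is exact LRU:
print(sample_eviction_candidate(last_used, samples=4))  # b
```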


**New parameter groups in Valkey 8.0**  

| Name | Details | Description | 
| --- | --- | --- | 
|  extended-redis-compatibility  |  Permitted values: yes, no Default: yes Type: boolean Modifiable: Yes Changes take place: immediately  |  Extended Redis OSS compatibility mode makes Valkey pretend to be Redis OSS 7.2. Enable this only if you have problems with tools or clients. Customer-facing impacts: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html)  | 


**Removed parameter groups in Valkey 8.0**  

| Name | Details | Description | 
| --- | --- | --- | 
|  lazyfree-lazy-eviction  |  Permitted values: yes, no Default: no Type: boolean Modifiable: Yes Changes take place: immediately  |  Performs an asynchronous delete on evictions.  | 
|  lazyfree-lazy-expire  |  Permitted values: yes, no Default: no Type: boolean Modifiable: Yes Changes take place: immediately  |  Performs an asynchronous delete on expired keys.  | 
|  lazyfree-lazy-server-del  |  Permitted values: yes, no Default: no Type: boolean Modifiable: Yes Changes take place: immediately  |  Performs an asynchronous delete for commands which update values.  | 
|  lazyfree-lazy-user-del  |  Default: no Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster  |   When the value is set to yes, the DEL command acts the same as UNLINK.  | 
|  replica-lazy-flush  |  Default: yes Type: boolean Modifiable: No Former name: slave-lazy-flush  |  Performs an asynchronous flushDB during replica sync.  | 

### Valkey 7.2 and Redis OSS 7 parameter changes
<a name="ParameterGroups.Redis.7"></a>

**Parameter group family:** valkey7

Valkey 7.2 default parameter groups are as follows:
+ `default.valkey7` – Use this parameter group, or one derived from it, for Valkey (cluster mode disabled) clusters and replication groups.
+ `default.valkey7.cluster.on` – Use this parameter group, or one derived from it, for Valkey (cluster mode enabled) clusters and replication groups.

**Parameter group family:** redis7

Redis OSS 7 default parameter groups are as follows:
+ `default.redis7` – Use this parameter group, or one derived from it, for Redis OSS (cluster mode disabled) clusters and replication groups.
+ `default.redis7.cluster.on` – Use this parameter group, or one derived from it, for Redis OSS (cluster mode enabled) clusters and replication groups.

**Specific parameter changes**

Parameters added in Redis OSS 7 are as follows. Valkey 7.2 also supports these parameters.


|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| cluster-allow-pubsubshard-when-down |  Permitted values: `yes`, `no` Default: `yes` Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | When set to the default of yes, allows nodes to serve pubsub shard traffic while the cluster is in a down state, as long as it believes it owns the slots.  | 
| cluster-preferred-endpoint-type |  Permitted values: `ip`, `tls-dynamic` Default: `tls-dynamic` Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | This value controls what endpoint is returned for MOVED/ASKING requests as well as the endpoint field for `CLUSTER SLOTS` and `CLUSTER SHARDS`. When the value is set to ip, the node will advertise its ip address. When the value is set to tls-dynamic, the node will advertise a hostname when encryption-in-transit is enabled and an ip address otherwise.  | 
| latency-tracking |  Permitted values: `yes`, `no` Default: `no` Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | When set to yes tracks the per command latencies and enables exporting the percentile distribution via the `INFO` latency statistics command, and cumulative latency distributions (histograms) via the `LATENCY` command.  | 
| hash-max-listpack-entries |  Permitted values: `0+` Default: `512` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | The maximum number of entries a hash can contain and still use the compact listpack encoding.  | 
| hash-max-listpack-value |  Permitted values: `0+` Default: `64` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | The maximum size of a hash entry for the hash to still use the compact listpack encoding.  | 
| zset-max-listpack-entries |  Permitted values: `0+` Default: `128` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | The maximum number of entries a sorted set can contain and still use the compact listpack encoding.  | 
| zset-max-listpack-value |  Permitted values: `0+` Default: `64` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | The maximum size of a sorted set entry for the set to still use the compact listpack encoding.  | 
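The entry-count and value-size thresholds work together: a hash or sorted set keeps the compact listpack encoding only while both limits hold. A minimal sketch of that decision using the default values above (the function name is illustrative; the engine applies this check internally):

```python
def uses_listpack(entries, max_value_len, max_entries=512, max_value=64):
    """Return True if a hash would keep the compact listpack encoding
    under the hash-max-listpack-entries/-value defaults (a sketch,
    not engine code)."""
    return entries <= max_entries and max_value_len <= max_value

# A 100-field hash with short values stays compact...
assert uses_listpack(100, 32)
# ...but one oversized value forces the general hashtable encoding.
assert not uses_listpack(100, 128)
```

Exceeding either threshold, not just both, converts the structure to its general encoding.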

Parameters changed in Redis OSS 7 are as follows. 


|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| activerehashing |  Modifiable: `no`. In Redis OSS 7, this parameter is hidden and enabled by default. To disable it, you need to create a [support case](https://console.aws.amazon.com/support/home).  | Modifiable was yes.  | 

Parameters removed in Redis OSS 7 are as follows. 


|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| hash-max-ziplist-entries |  Permitted values: `0+` Default: `512` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | Replaced by `hash-max-listpack-entries`; small hashes are now represented with the `listpack` encoding instead of `ziplist`.  | 
| hash-max-ziplist-value |  Permitted values: `0+` Default: `64` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | Replaced by `hash-max-listpack-value`; small hashes are now represented with the `listpack` encoding instead of `ziplist`.  | 
| zset-max-ziplist-entries |  Permitted values: `0+` Default: `128` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | Replaced by `zset-max-listpack-entries`; small sorted sets are now represented with the `listpack` encoding instead of `ziplist`.  | 
| zset-max-ziplist-value |  Permitted values: `0+` Default: `64` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | Replaced by `zset-max-listpack-value`; small sorted sets are now represented with the `listpack` encoding instead of `ziplist`.  | 
| list-max-ziplist-size |  Default: `-2` Type: integer Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster. | The number of entries allowed per internal list node.  | 

### Redis OSS 6.x parameter changes
<a name="ParameterGroups.Redis.6-x"></a>

**Parameter group family:** redis6.x

Redis OSS 6.x default parameter groups are as follows:
+ `default.redis6.x` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode disabled) clusters and replication groups.
+ `default.redis6.x.cluster.on` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode enabled) clusters and replication groups.

**Note**  
 Beginning with Redis OSS engine version 6.2, when the r6gd node family was introduced for use with [Data tiering in ElastiCache](data-tiering.md), only the *noeviction*, *volatile-lru*, and *allkeys-lru* maxmemory policies are supported on r6gd node types. 

For more information, see [ElastiCache version 6.2 for Redis OSS (enhanced)](engine-versions.md#redis-version-6.2) and [ElastiCache version 6.0 for Redis OSS (enhanced)](engine-versions.md#redis-version-6.0). 

Parameters added in Redis OSS 6.x are as follows. 


|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| acl-pubsub-default (added in 6.2) |  Permitted values: `resetchannels`, `allchannels` Default: `allchannels` Type: string Modifiable: Yes Changes take effect: The existing Redis OSS users associated with the cluster will continue to have their existing permissions. Either update the users or reboot the cluster to update the existing Redis OSS users. | Default pubsub channel permissions for ACL users deployed to this cluster.   | 
| cluster-allow-reads-when-down (added in 6.0) |  Default: no Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | When set to yes, a Redis OSS (cluster mode enabled) replication group continues to process read commands even when a node is not able to reach a quorum of primaries.  When set to the default of no, the replication group rejects all commands. We recommend setting this value to yes if you are using a cluster with fewer than three node groups or your application can safely handle stale reads.   | 
| tracking-table-max-keys (added in 6.0) |  Default: 1,000,000 Type: number Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | To assist client-side caching, Redis OSS supports tracking which clients have accessed which keys.  When the tracked key is modified, invalidation messages are sent to all clients to notify them their cached values are no longer valid. This value enables you to specify the upper bound of this table. After this parameter value is exceeded, clients are sent invalidation randomly. This value should be tuned to limit memory usage while still keeping track of enough keys. Keys are also invalidated under low memory conditions.   | 
| acllog-max-len (added in 6.0) |  Default: 128 Type: number Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | This value corresponds to the max number of entries in the ACL log.   | 
| active-expire-effort (added in 6.0) |  Default: 1 Type: number Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | Redis OSS deletes keys that have exceeded their time to live by two mechanisms. In one, a key is accessed and is found to be expired. In the other, a periodic job samples keys and causes those that have exceeded their time to live to expire. This parameter defines the amount of effort that Redis OSS uses to expire items in the periodic job.  The default value of 1 tries to avoid having more than 10 percent of expired keys still in memory. It also tries to avoid consuming more than 25 percent of total memory and to avoid adding latency to the system. You can increase this value up to 10 to increase the amount of effort spent on expiring keys. The tradeoff is higher CPU and potentially higher latency. We recommend a value of 1 unless you are seeing high memory usage and can tolerate an increase in CPU utilization.   | 
| lazyfree-lazy-user-del (added in 6.0) |  Default: no Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | When the value is set to yes, the `DEL` command acts the same as `UNLINK`.   | 
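The bounded invalidation table behind `tracking-table-max-keys` can be sketched as follows: once the cap is reached, tracking a new key randomly drops (invalidates) an existing entry to make room. This is an illustrative model, not the engine's implementation:

```python
import random

def track_key(table, client_id, key, max_keys=1_000_000):
    """Sketch of the bounded key-tracking table controlled by
    tracking-table-max-keys: at the cap, a random tracked key is
    invalidated (dropped) before the new key is recorded."""
    if key not in table and len(table) >= max_keys:
        evicted = random.choice(list(table))
        del table[evicted]    # clients caching it receive an invalidation
    table.setdefault(key, set()).add(client_id)

table = {}
for i in range(5):
    track_key(table, client_id=1, key=f"k{i}", max_keys=3)
assert len(table) == 3        # the table never exceeds the configured bound
```

Sizing the parameter is a tradeoff between memory used by the table and how often clients see spurious invalidations.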

Parameters removed in Redis OSS 6.x are as follows. 


|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| lua-replicate-commands |  Permitted values: yes/no Default: yes Type: boolean Modifiable: Yes Changes take effect: Immediately | Determines whether effect-based replication is always enabled for Lua scripts.  | 

### Redis OSS 5.0.3 parameter changes
<a name="ParameterGroups.Redis.5-0-3"></a>

**Parameter group family:** redis5.0

Redis OSS 5.0 default parameter groups
+ `default.redis5.0` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode disabled) clusters and replication groups.
+ `default.redis5.0.cluster.on` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode enabled) clusters and replication groups.


**Parameters added in Redis OSS 5.0.3**  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| rename-commands |  Default: none Type: string Modifiable: Yes Changes take effect: Immediately across all nodes in the cluster | A space-separated list of renamed Redis OSS commands. The following is a restricted list of commands available for renaming:  `APPEND AUTH BITCOUNT BITFIELD BITOP BITPOS BLPOP BRPOP BRPOPLPUSH BZPOPMIN BZPOPMAX CLIENT CLUSTER COMMAND DBSIZE DECR DECRBY DEL DISCARD DUMP ECHO EVAL EVALSHA EXEC EXISTS EXPIRE EXPIREAT FLUSHALL FLUSHDB GEOADD GEOHASH GEOPOS GEODIST GEORADIUS GEORADIUSBYMEMBER GET GETBIT GETRANGE GETSET HDEL HEXISTS HGET HGETALL HINCRBY HINCRBYFLOAT HKEYS HLEN HMGET HMSET HSET HSETNX HSTRLEN HVALS INCR INCRBY INCRBYFLOAT INFO KEYS LASTSAVE LINDEX LINSERT LLEN LPOP LPUSH LPUSHX LRANGE LREM LSET LTRIM MEMORY MGET MONITOR MOVE MSET MSETNX MULTI OBJECT PERSIST PEXPIRE PEXPIREAT PFADD PFCOUNT PFMERGE PING PSETEX PSUBSCRIBE PUBSUB PTTL PUBLISH PUNSUBSCRIBE RANDOMKEY READONLY READWRITE RENAME RENAMENX RESTORE ROLE RPOP RPOPLPUSH RPUSH RPUSHX SADD SCARD SCRIPT SDIFF SDIFFSTORE SELECT SET SETBIT SETEX SETNX SETRANGE SINTER SINTERSTORE SISMEMBER SLOWLOG SMEMBERS SMOVE SORT SPOP SRANDMEMBER SREM STRLEN SUBSCRIBE SUNION SUNIONSTORE SWAPDB TIME TOUCH TTL TYPE UNSUBSCRIBE UNLINK UNWATCH WAIT WATCH ZADD ZCARD ZCOUNT ZINCRBY ZINTERSTORE ZLEXCOUNT ZPOPMAX ZPOPMIN ZRANGE ZRANGEBYLEX ZREVRANGEBYLEX ZRANGEBYSCORE ZRANK ZREM ZREMRANGEBYLEX ZREMRANGEBYRANK ZREMRANGEBYSCORE ZREVRANGE ZREVRANGEBYSCORE ZREVRANK ZSCORE ZUNIONSTORE SCAN SSCAN HSCAN ZSCAN XINFO XADD XTRIM XDEL XRANGE XREVRANGE XLEN XREAD XGROUP XREADGROUP XACK XCLAIM XPENDING GEORADIUS_RO GEORADIUSBYMEMBER_RO LOLWUT XSETID SUBSTR`  | 
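You apply `rename-commands` like any other parameter, as a space-separated `old-name new-name` list. Below is a sketch of the request payload for boto3's `modify_cache_parameter_group`; the parameter group name and the rename itself are hypothetical, and the call is commented out because it requires AWS credentials:

```python
# Rename FLUSHALL to an operator-only name; both names are examples.
renames = {"FLUSHALL": "ADMINFLUSHALL"}
value = " ".join(f"{old} {new}" for old, new in renames.items())

request = {
    "CacheParameterGroupName": "my-redis5-params",   # hypothetical group
    "ParameterNameValues": [
        {"ParameterName": "rename-commands", "ParameterValue": value}
    ],
}
# import boto3
# boto3.client("elasticache").modify_cache_parameter_group(**request)
assert request["ParameterNameValues"][0]["ParameterValue"] == "FLUSHALL ADMINFLUSHALL"
```

Clients that still issue the old command name receive an unknown-command error after the change takes effect.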

For more information, see [ElastiCache version 5.0.6 for Redis OSS (enhanced)](engine-versions.md#redis-version-5-0.6). 

### Redis OSS 5.0.0 parameter changes
<a name="ParameterGroups.Redis.5.0"></a>

**Parameter group family:** redis5.0

Redis OSS 5.0 default parameter groups
+ `default.redis5.0` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode disabled) clusters and replication groups.
+ `default.redis5.0.cluster.on` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode enabled) clusters and replication groups.


**Parameters added in Redis OSS 5.0**  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| stream-node-max-bytes |  Permitted values: `0+` Default: 4096 Type: integer Modifiable: Yes Changes take effect: Immediately | The stream data structure is a radix tree of nodes that encode multiple items inside. Use this configuration to specify the maximum size of a single node in the radix tree, in bytes. If set to 0, the size of the tree node is unlimited.  | 
| stream-node-max-entries |  Permitted values: `0+` Default: 100 Type: integer Modifiable: Yes Changes take effect: Immediately | The stream data structure is a radix tree of nodes that encode multiple items inside. Use this configuration to specify the maximum number of items a single node can contain before switching to a new node when appending new stream entries. If set to 0, the number of items in the tree node is unlimited.  | 
| active-defrag-max-scan-fields |  Permitted values: 1 to 1000000 Default: 1000 Type: integer Modifiable: Yes Changes take effect: Immediately | Maximum number of set/hash/zset/list fields that will be processed from the main dictionary scan.  | 
| lua-replicate-commands |  Permitted values: yes/no Default: yes Type: boolean Modifiable: Yes Changes take effect: Immediately | Determines whether effect-based replication is always enabled for Lua scripts.  | 
| replica-ignore-maxmemory |  Default: yes Type: boolean Modifiable: No  | Determines whether a replica ignores the maxmemory setting and does not evict items independently of the primary.  | 
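The two stream limits interact: appending starts a new radix-tree node as soon as the current node would exceed either `stream-node-max-entries` or `stream-node-max-bytes` (0 disables a limit). An illustrative packing sketch, not the engine's actual data structure:

```python
def pack_entries(entry_sizes, max_entries=100, max_bytes=4096):
    """Group stream entries (by byte size) into nodes, starting a new
    node when either limit would be exceeded; 0 means unlimited."""
    nodes, count, size = [[]], 0, 0
    for s in entry_sizes:
        over_entries = max_entries and count + 1 > max_entries
        over_bytes = max_bytes and size + s > max_bytes
        if nodes[-1] and (over_entries or over_bytes):
            nodes.append([])      # start a fresh node
            count = size = 0
        nodes[-1].append(s)
        count += 1
        size += s
    return nodes

# 250 equally sized small entries under the defaults: the entry-count
# limit binds first, capping each node at 100 items.
assert [len(n) for n in pack_entries([16] * 250)] == [100, 100, 50]
```

Whichever limit is hit first wins, so small entries tend to be bounded by the entry count and large entries by the byte size.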

Redis OSS has renamed several parameters in engine version 5.0 in response to community feedback. For more information, see [What's New in Redis OSS 5?](https://aws.amazon.com/redis/Whats_New_Redis5/). The following table lists the new names and how they map to previous versions.


**Parameters renamed in Redis OSS 5.0**  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| replica-lazy-flush |  Default: yes Type: boolean Modifiable: No Former name: slave-lazy-flush  | Performs an asynchronous flushDB during replica sync. | 
| client-output-buffer-limit-replica-hard-limit | Default: For values see [Redis OSS node-type specific parameters](#ParameterGroups.Redis.NodeSpecific) Type: integer Modifiable: No Former name: client-output-buffer-limit-slave-hard-limit | For Redis OSS read replicas: If a client's output buffer reaches the specified number of bytes, the client will be disconnected. | 
| client-output-buffer-limit-replica-soft-limit | Default: For values see [Redis OSS node-type specific parameters](#ParameterGroups.Redis.NodeSpecific) Type: integer Modifiable: No Former name: client-output-buffer-limit-slave-soft-limit | For Redis OSS read replicas: If a client's output buffer reaches the specified number of bytes, the client will be disconnected, but only if this condition persists for client-output-buffer-limit-replica-soft-seconds. | 
| client-output-buffer-limit-replica-soft-seconds | Default: 60 Type: integer Modifiable: No Former name: client-output-buffer-limit-slave-soft-seconds  | For Redis OSS read replicas: If a client's output buffer remains at client-output-buffer-limit-replica-soft-limit bytes for longer than this number of seconds, the client will be disconnected. | 
| replica-allow-chaining | Default: no Type: string Modifiable: No Former name: slave-allow-chaining | Determines whether a read replica in Redis OSS can have read replicas of its own. | 
| min-replicas-to-write | Default: 0 Type: integer Modifiable: Yes Former name: min-slaves-to-write Changes Take Effect: Immediately | The minimum number of read replicas which must be available in order for the primary node to accept writes from clients. If the number of available replicas falls below this number, then the primary node will no longer accept write requests. If either this parameter or min-replicas-max-lag is 0, then the primary node will always accept write requests, even if no replicas are available. | 
| min-replicas-max-lag  | Default: 10 Type: integer Modifiable: Yes Former name: min-slaves-max-lag Changes Take Effect: Immediately | The number of seconds within which the primary node must receive a ping request from a read replica. If this amount of time passes and the primary does not receive a ping, then the replica is no longer considered available. If the number of available replicas drops below min-replicas-to-write, then the primary will stop accepting writes at that point. If either this parameter or min-replicas-to-write is 0, then the primary node will always accept write requests, even if no replicas are available. | 
| close-on-replica-write  | Default: yes Type: boolean Modifiable: Yes Former name: close-on-slave-write Changes Take Effect: Immediately | If enabled, clients who attempt to write to a read-only replica will be disconnected. | 
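The renamed `min-replicas-*` pair gates writes together, exactly as the descriptions above state: the primary accepts writes while enough replicas have pinged within the lag window, and a 0 in either parameter disables the check. A sketch of that decision rule under those semantics:

```python
def accepts_writes(replica_lags, min_to_write=0, max_lag=10):
    """replica_lags: seconds since each replica's last ping.
    Models the min-replicas-to-write / min-replicas-max-lag rule;
    either parameter at 0 disables the check entirely."""
    if min_to_write == 0 or max_lag == 0:
        return True
    healthy = sum(1 for lag in replica_lags if lag <= max_lag)
    return healthy >= min_to_write

assert accepts_writes([], min_to_write=0)            # check disabled
assert accepts_writes([5, 30], min_to_write=1)       # one healthy replica
assert not accepts_writes([30, 40], min_to_write=1)  # all replicas stale
```

Raising `min-replicas-to-write` trades write availability for stronger durability guarantees when replicas fall behind.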


**Parameters removed in Redis OSS 5.0**  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| repl-timeout |  Default: 60 Modifiable: No  | Parameter is not available in this version. | 

### Redis OSS 4.0.10 parameter changes
<a name="ParameterGroups.Redis.4-0-10"></a>

**Parameter group family:** redis4.0

Redis OSS 4.0.x default parameter groups
+ `default.redis4.0` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode disabled) clusters and replication groups.
+ `default.redis4.0.cluster.on` – Use this parameter group, or one derived from it, for Valkey or Redis OSS (cluster mode enabled) clusters and replication groups.


**Parameters changed in Redis OSS 4.0.10**  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| maxmemory-policy |  Permitted values: `allkeys-lru`, `volatile-lru`, `allkeys-lfu`, `volatile-lfu`, `allkeys-random`, `volatile-random`, `volatile-ttl`, `noeviction` Default: volatile-lru Type: string Modifiable: Yes Changes take place: immediately | maxmemory-policy was added in version 2.6.13. Version 4.0.10 adds two new permitted values: `allkeys-lfu`, which evicts any key using approximated LFU, and `volatile-lfu`, which evicts using approximated LFU among the keys with an expire set. In version 6.2, when the r6gd node family was introduced for use with data tiering, only the noeviction, volatile-lru, and allkeys-lru maxmemory policies are supported with r6gd node types.  | 


**Parameters added in Redis OSS 4.0.10**  

|  Name  |  Details |  Description  | 
| --- |--- |--- |
| **Async deletion parameters** | 
| --- |
| lazyfree-lazy-eviction |  Permitted values: yes/no Default: no Type: boolean Modifiable: Yes Changes take place: immediately | Performs an asynchronous delete on evictions. | 
| lazyfree-lazy-expire |  Permitted values: yes/no Default: no Type: boolean Modifiable: Yes Changes take place: immediately | Performs an asynchronous delete on expired keys. | 
| lazyfree-lazy-server-del |  Permitted values: yes/no Default: no Type: boolean Modifiable: Yes Changes take place: immediately | Performs an asynchronous delete for commands which update values. | 
| slave-lazy-flush |  Permitted values: N/A Default: no Type: boolean Modifiable: No Changes take place: N/A | Performs an asynchronous flushDB during slave sync. | 
| **LFU parameters** | 
| --- |
| lfu-log-factor |  Permitted values: any integer > 0 Default: 10 Type: integer Modifiable: Yes Changes take place: immediately | Set the log factor, which determines the number of key hits to saturate the key counter. | 
| lfu-decay-time |  Permitted values: any integer Default: 1 Type: integer Modifiable: Yes Changes take place: immediately | The amount of time in minutes to decrement the key counter. | 
| **Active defragmentation parameters** | 
| --- |
| activedefrag |  Permitted values: yes/no Default: no Type: boolean Modifiable: Yes Changes take place: immediately | Enables active defragmentation. In Valkey and Redis OSS versions 7.0 and above, AWS may automatically perform defragmentation when operationally necessary, regardless of this setting.  | 
| active-defrag-ignore-bytes |  Permitted values: 10485760-104857600 Default: 104857600 Type: integer Modifiable: Yes Changes take place: immediately | Minimum amount of fragmentation waste to start active defrag. | 
| active-defrag-threshold-lower |  Permitted values: 1-100 Default: 10 Type: integer Modifiable: Yes Changes take place: immediately | Minimum percentage of fragmentation to start active defrag. | 
| active-defrag-threshold-upper |  Permitted values: 1-100 Default: 100 Type: integer Modifiable: Yes Changes take place: immediately | Maximum percentage of fragmentation at which we use maximum effort. | 
| active-defrag-cycle-min |  Permitted values: 1-75 Default: 25 Type: integer Modifiable: Yes Changes take place: immediately | Minimal effort for defrag in CPU percentage. | 
| active-defrag-cycle-max |  Permitted values: 1-75 Default: 75 Type: integer Modifiable: Yes Changes take place: immediately | Maximal effort for defrag in CPU percentage. | 
| **Client output buffer parameters** | 
| --- |
| client-query-buffer-limit |  Permitted values: 1048576-1073741824 Default: 1073741824 Type: integer Modifiable: Yes Changes take place: immediately | Max size of a single client query buffer. | 
| proto-max-bulk-len |  Permitted values: 1048576-536870912 Default: 536870912 Type: integer Modifiable: Yes Changes take place: immediately | Max size of a single element request. | 
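The two LFU parameters shape the approximated frequency counter: `lfu-log-factor` makes increments progressively less likely as the counter grows, and `lfu-decay-time` decrements the counter once per elapsed period of idle time. A simplified sketch of that behavior, modeled on the open-source implementation (this is illustrative, not ElastiCache code):

```python
import random

LFU_INIT_VAL = 5   # new keys start with this counter value

def lfu_incr(counter, log_factor=10, rng=random.random):
    """Probabilistic increment: the higher the counter, the less
    likely it grows, saturating at the 8-bit maximum of 255."""
    if counter >= 255:
        return counter
    base = max(counter - LFU_INIT_VAL, 0)
    if rng() < 1.0 / (base * log_factor + 1):
        counter += 1
    return counter

def lfu_decay(counter, idle_minutes, decay_time=1):
    """Decrement the counter once per elapsed decay-time period."""
    periods = idle_minutes // decay_time if decay_time else 0
    return max(counter - periods, 0)

# A fresh key always gains its first hits (base == 0 -> probability 1)...
assert lfu_incr(LFU_INIT_VAL, rng=lambda: 0.99) == LFU_INIT_VAL + 1
# ...and an idle key's counter decays by one per decay-time minutes.
assert lfu_decay(10, idle_minutes=3, decay_time=1) == 7
```

A larger `lfu-log-factor` spreads the counter over a wider hit range; a larger `lfu-decay-time` makes frequencies fade more slowly.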

### Redis OSS 3.2.10 parameter changes
<a name="ParameterGroups.Redis.3-2-10"></a>

**Parameter group family:** redis3.2

For Redis OSS 3.2.10 there are no additional parameters supported.

### Redis OSS 3.2.6 parameter changes
<a name="ParameterGroups.Redis.3-2-6"></a>

**Parameter group family:** redis3.2

For Redis OSS 3.2.6 there are no additional parameters supported.

### Redis OSS 3.2.4 parameter changes
<a name="ParameterGroups.Redis.3-2-4"></a>

**Parameter group family:** redis3.2

Beginning with Redis OSS 3.2.4 there are two default parameter groups.
+ `default.redis3.2` – When running Redis OSS 3.2.4, specify this parameter group or one derived from it, if you want to create a Valkey or Redis OSS (cluster mode disabled) replication group and still use the additional features of Redis OSS 3.2.4.
+ `default.redis3.2.cluster.on` – Specify this parameter group or one derived from it, when you want to create a Valkey or Redis OSS (cluster mode enabled) replication group.

**Topics**
+ [New parameters for Redis OSS 3.2.4](#ParameterGroups.Redis.3-2-4.New)
+ [Parameters changed in Redis OSS 3.2.4 (enhanced)](#ParameterGroups.Redis.3-2-4.Changed)

#### New parameters for Redis OSS 3.2.4
<a name="ParameterGroups.Redis.3-2-4.New"></a>

**Parameter group family:** redis3.2

For Redis OSS 3.2.4 the following additional parameters are supported.


****  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| list-max-ziplist-size | Default: -2 Type: integer Modifiable: No  | Lists are encoded in a special way to save space. The number of entries allowed per internal list node can be specified as a fixed maximum size or a maximum number of elements. For a fixed maximum size, use -5 through -1, meaning: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html) | 
| list-compress-depth | Default: 0 Type: integer Modifiable: Yes Changes Take Effect: Immediately | Lists may also be compressed. Compress depth is the number of quicklist ziplist nodes from each side of the list to exclude from compression. The head and tail of the list are always uncompressed for fast push and pop operations. Settings are: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html) | 
| cluster-enabled |  Default: `no`/`yes` Type: string Modifiable: No | Indicates whether this replication group is running in cluster mode (yes) or non-cluster mode (no). Valkey or Redis OSS (cluster mode enabled) replication groups in cluster mode can partition their data across up to 500 node groups. Redis OSS 3.2.*x* has two default parameter groups. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html). | 
| cluster-require-full-coverage | Default: no Type: boolean Modifiable: yes Changes Take Effect: Immediately |  When set to `yes`, Valkey or Redis OSS (cluster mode enabled) nodes in cluster mode stop accepting queries if they detect there is at least one hash slot uncovered (no available node is serving it). This way if the cluster is partially down, the cluster becomes unavailable. It automatically becomes available again as soon as all the slots are covered again. However, sometimes you want the subset of the cluster which is working to continue to accept queries for the part of the key space that is still covered. To do so, just set the `cluster-require-full-coverage` option to `no`. | 
| hll-sparse-max-bytes | Default: 3000 Type: integer Modifiable: Yes Changes Take Effect: Immediately | HyperLogLog sparse representation bytes limit. The limit includes the 16-byte header. When a HyperLogLog using the sparse representation crosses this limit, it is converted into the dense representation. A value greater than 16000 is not recommended, because at that point the dense representation is more memory efficient. We recommend a value of about 3000 to have the benefits of the space-efficient encoding without slowing down PFADD too much, which is O(N) with the sparse encoding. The value can be raised to about 10,000 when CPU is not a concern, but space is, and the data set is composed of many HyperLogLogs with cardinality in the 0 - 15000 range. | 
| reserved-memory-percent | Default: 25 Type: integer Modifiable: Yes Changes Take Effect: Immediately |  The percent of a node's memory reserved for nondata use. By default, the Redis OSS data footprint grows until it consumes all of the node's memory. If this occurs, then node performance will likely suffer due to excessive memory paging. By reserving memory, you can set aside some of the available memory for non-Redis OSS purposes to help reduce the amount of paging. This parameter is specific to ElastiCache, and is not part of the standard Redis OSS distribution. For more information, see `reserved-memory` and [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md). | 
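The arithmetic behind `reserved-memory-percent` is a straight carve-out from the node's `maxmemory`: the data set effectively grows only into the remainder. A quick sketch with an arbitrary illustrative value:

```python
def usable_bytes(maxmemory, reserved_percent=25):
    """Bytes left for data after the reserved-memory-percent
    carve-out (a sketch of the arithmetic, not an API)."""
    return maxmemory * (100 - reserved_percent) // 100

maxmemory = 14_037_181_030        # illustrative node maxmemory, in bytes
assert usable_bytes(maxmemory) == 10_527_885_772
assert usable_bytes(maxmemory, reserved_percent=0) == maxmemory
```

Reserving headroom like this leaves room for replication buffers and background saves, which is why the 25 percent default exists.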

#### Parameters changed in Redis OSS 3.2.4 (enhanced)
<a name="ParameterGroups.Redis.3-2-4.Changed"></a>

**Parameter group family:** redis3.2

For Redis OSS 3.2.4 the following parameters were changed.


****  

|  Name  |  Details |  Change  | 
| --- | --- | --- | 
| activerehashing | Modifiable: Yes if the parameter group is not associated with any clusters. Otherwise, no. | Modifiable was No. | 
| databases | Modifiable: Yes if the parameter group is not associated with any clusters. Otherwise, no. | Modifiable was No. | 
| appendonly | Default: off Modifiable: No | If you want to upgrade from an earlier Redis OSS version, you must first turn `appendonly` off. | 
| appendfsync | Default: off Modifiable: No | If you want to upgrade from an earlier Redis OSS version, you must first turn `appendfsync` off. | 
| repl-timeout | Default: 60 Modifiable: No | Is now unmodifiable with a default of 60. | 
| tcp-keepalive | Default: 300 | Default was 0. | 
| list-max-ziplist-entries |  | Parameter is no longer available. | 
| list-max-ziplist-value |  | Parameter is no longer available. | 

### Redis OSS 2.8.24 (enhanced) added parameters
<a name="ParameterGroups.Redis.2-8-24"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.24 there are no additional parameters supported.

### Redis OSS 2.8.23 (enhanced) added parameters
<a name="ParameterGroups.Redis.2-8-23"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.23 the following additional parameter is supported.


****  

|  Name  |  Details |  Description  | 
| --- | --- | --- | 
| close-on-slave-write  | Default: yes Type: string (yes/no) Modifiable: Yes Changes Take Effect: Immediately | If enabled, clients who attempt to write to a read-only replica will be disconnected. | 

#### How close-on-slave-write works
<a name="w2aac24c16c30c49c15c39b9"></a>

The `close-on-slave-write` parameter is introduced by Amazon ElastiCache to give you more control over how your cluster responds when a primary node and a read replica node swap roles due to promoting a read replica to primary.

![\[Image: close-on-replica-write, everything working fine\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-close-on-slave-write-01.png)


If the read-replica cluster is promoted to primary for any reason other than a Multi-AZ enabled replication group failing over, the client will continue trying to write to endpoint A. Because endpoint A is now the endpoint for a read replica, these writes will fail. This is the behavior of Redis OSS before ElastiCache introduced `close-on-replica-write`, and the behavior if you disable `close-on-replica-write`.

![\[Image: close-on-slave-write, writes failing\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-close-on-slave-write-02.png)


With `close-on-replica-write` enabled, any time a client attempts to write to a read-replica, the client connection to the cluster is closed. Your application logic should detect the disconnection, check the DNS table, and reconnect to the primary endpoint, which now would be endpoint B.

![\[Image: close-on-slave-write, writing to new primary cluster\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ElastiCache-close-on-slave-write-03.png)


#### When you might disable close-on-replica-write
<a name="w2aac24c16c30c49c15c39c11"></a>

If disabling `close-on-replica-write` results in failed writes, why would you ever disable it?

As previously mentioned, with `close-on-replica-write` enabled, any time a client attempts to write to a read-replica the client connection to the cluster is closed. Establishing a new connection to the node takes time. Thus, disconnecting and reconnecting as a result of a write request to the replica also affects the latency of read requests that are served through the same connection. This effect remains in place until a new connection is established. If your application is especially read-heavy or very latency-sensitive, you might keep your clients connected to avoid degrading read performance. 
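The reconnect handling described above can be sketched as a client-side loop: on a forced disconnect, re-resolve the endpoint's DNS entry and retry against the new primary rather than the stale address. Every name here (the connection class, the resolver) is a stand-in for your actual client library:

```python
class ReplicaWriteError(ConnectionError):
    """Stand-in for the disconnect a replica forces on writers."""

def write_with_reconnect(connect, resolve_primary, key, value, retries=2):
    """Sketch of close-on-replica-write handling: on disconnect,
    re-resolve DNS and reconnect to the current primary."""
    for _ in range(retries + 1):
        conn = connect(resolve_primary())   # fresh DNS lookup each attempt
        try:
            return conn.set(key, value)
        except ReplicaWriteError:
            continue                        # roles swapped; resolve again
    raise ConnectionError("no writable primary found")

# Simulated failover: endpoint A became a replica, endpoint B is primary.
class FakeConn:
    def __init__(self, endpoint):
        self.endpoint = endpoint
    def set(self, key, value):
        if self.endpoint == "endpoint-a":
            raise ReplicaWriteError         # replica closes writer connections
        return "OK"

dns = iter(["endpoint-a", "endpoint-b"])    # DNS flips after the failover
assert write_with_reconnect(FakeConn, lambda: next(dns), "k", "v") == "OK"
```

The cost the section describes is visible here: every disconnect forces a resolve-and-reconnect cycle, which also delays reads sharing that connection.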

### Redis OSS 2.8.22 (enhanced) added parameters
<a name="ParameterGroups.Redis.2-8-22"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.22 there are no additional parameters supported.

**Important**  
Beginning with Redis OSS version 2.8.22, `repl-backlog-size` applies to the primary cluster as well as to replica clusters.
Beginning with Redis OSS version 2.8.22, the `repl-timeout` parameter is not supported. If it is changed, ElastiCache overwrites it with the default (60 seconds), as it does with `appendonly`.

The following parameters are no longer supported.
+ *appendonly*
+ *appendfsync*
+ *repl-timeout*

### Redis OSS 2.8.21 added parameters
<a name="ParameterGroups.Redis.2-8-21"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.21, there are no additional parameters supported.

### Redis OSS 2.8.19 added parameters
<a name="ParameterGroups.Redis.2-8-19"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.19 there are no additional parameters supported.

### Redis OSS 2.8.6 added parameters
<a name="ParameterGroups.Redis.2-8-6"></a>

**Parameter group family:** redis2.8

For Redis OSS 2.8.6 the following additional parameters are supported.


****  

|  Name  |  Details  |  Description  | 
| --- | --- | --- | 
| min-slaves-max-lag  | Default: 10 Type: integer Modifiable: Yes Changes Take Effect: Immediately | The number of seconds within which the primary node must receive a ping request from a read replica. If this amount of time passes and the primary does not receive a ping, then the replica is no longer considered available. If the number of available replicas drops below min-slaves-to-write, then the primary will stop accepting writes at that point. If either this parameter or min-slaves-to-write is 0, then the primary node will always accept write requests, even if no replicas are available. | 
| min-slaves-to-write | Default: 0 Type: integer Modifiable: Yes Changes Take Effect: Immediately | The minimum number of read replicas which must be available in order for the primary node to accept writes from clients. If the number of available replicas falls below this number, then the primary node will no longer accept write requests. If either this parameter or min-slaves-max-lag is 0, then the primary node will always accept write requests, even if no replicas are available. | 
| notify-keyspace-events | Default: (an empty string) Type: string Modifiable: Yes Changes Take Effect: Immediately | The types of keyspace events that Redis OSS can notify clients of. Each event type is represented by a single letter: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html) You can have any combination of these event types. For example, *AKE* means that Redis OSS can publish notifications of all event types. Do not use any characters other than those listed above; attempts to do so will result in error messages. By default, this parameter is set to an empty string, meaning that keyspace event notification is disabled. | 
| repl-backlog-size | Default: 1048576 Type: integer Modifiable: Yes Changes Take Effect: Immediately | The size, in bytes, of the primary node backlog buffer. The backlog is used for recording updates to data at the primary node. When a read replica connects to the primary, it attempts to perform a partial sync (`psync`), where it applies data from the backlog to catch up with the primary node. If the `psync` fails, then a full sync is required. The minimum value for this parameter is 16384.  Beginning with Redis OSS 2.8.22, this parameter applies to the primary cluster as well as the read replicas.  | 
| repl-backlog-ttl | Default: 3600 Type: integer Modifiable: Yes Changes Take Effect: Immediately | The number of seconds that the primary node will retain the backlog buffer. Starting from the time the last replica node disconnected, the data in the backlog will remain intact until `repl-backlog-ttl` expires. If the replica has not connected to the primary within this time, then the primary will release the backlog buffer. When the replica eventually reconnects, it will have to perform a full sync with the primary. If this parameter is set to 0, then the backlog buffer will never be released. | 
| repl-timeout | Default: 60 Type: integer Modifiable: Yes Changes Take Effect: Immediately | Represents the timeout period, in seconds, for: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html) | 
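
The interaction between `min-slaves-to-write` and `min-slaves-max-lag` can be sketched as a small decision function. This is a simplified illustration only; the actual check runs inside the Redis OSS server:

```python
def primary_accepts_writes(replica_lags, min_slaves_to_write, min_slaves_max_lag):
    """Simplified model of the min-slaves-* write gate.

    replica_lags: seconds since each replica's last ping to the primary.
    Returns True if the primary would accept write requests.
    """
    # A value of 0 for either parameter disables the check entirely.
    if min_slaves_to_write == 0 or min_slaves_max_lag == 0:
        return True
    # A replica counts as "available" if its last ping is recent enough.
    available = sum(1 for lag in replica_lags if lag <= min_slaves_max_lag)
    return available >= min_slaves_to_write

# With the default min-slaves-to-write of 0, writes are always accepted.
print(primary_accepts_writes([5, 30], 0, 10))   # True
# Require 2 replicas fresh within 10 seconds: only one qualifies here.
print(primary_accepts_writes([5, 30], 2, 10))   # False
```

With the defaults shown in the table (`min-slaves-to-write` of 0), the gate never blocks writes; the check only becomes active when both parameters are set to nonzero values.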

### Redis OSS 2.6.13 parameters
<a name="ParameterGroups.Redis.2-6-13"></a>

**Parameter group family:** redis2.6

Redis OSS 2.6.13 was the first version of Redis OSS supported by ElastiCache. The following table shows the Redis OSS 2.6.13 parameters that ElastiCache supports.


****  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/ParameterGroups.Engine.html)

**Note**  
If you do not specify a parameter group for your Redis OSS 2.6.13 cluster, then a default parameter group (`default.redis2.6`) will be used. You cannot change the values of any parameters in the default parameter group; however, you can always create a custom parameter group and assign it to your cluster at any time.

### Redis OSS node-type specific parameters
<a name="ParameterGroups.Redis.NodeSpecific"></a>

Although most parameters have a single value, some parameters have different values depending on the node type used. The following table shows the default values for the `maxmemory`, `client-output-buffer-limit-slave-hard-limit`, and `client-output-buffer-limit-slave-soft-limit` parameters for each node type. The value of `maxmemory` is the maximum number of bytes available on the node for your use, for data and other purposes. For more information, see [Available memory](https://aws.amazon.com/premiumsupport/knowledge-center/available-memory-elasticache-redis-node/).

**Note**  
The `maxmemory` parameter cannot be modified.


|  Node type  | Maxmemory  | Client-output-buffer-limit-slave-hard-limit | Client-output-buffer-limit-slave-soft-limit | 
| --- | --- | --- | --- | 
| cache.t1.micro | 142606336 | 14260633 | 14260633 | 
| cache.t2.micro | 581959680 | 58195968 | 58195968 | 
| cache.t2.small | 1665138688 | 166513868 | 166513868 | 
| cache.t2.medium | 3461349376 | 346134937 | 346134937 | 
| cache.t3.micro | 536870912 | 53687091 | 53687091 | 
| cache.t3.small | 1471026299 | 147102629 | 147102629 | 
| cache.t3.medium | 3317862236 | 331786223 | 331786223 | 
| cache.t4g.micro | 536870912 | 53687091 | 53687091 | 
| cache.t4g.small | 1471026299 | 147102629 | 147102629 | 
| cache.t4g.medium | 3317862236 | 331786223 | 331786223 | 
| cache.m1.small | 943718400 | 94371840 | 94371840 | 
| cache.m1.medium | 3093299200 | 309329920 | 309329920 | 
| cache.m1.large | 7025459200 | 702545920 | 702545920 | 
| cache.m1.xlarge | 14889779200 | 1488977920 | 1488977920 | 
| cache.m2.xlarge | 17091788800 | 1709178880 | 1709178880 | 
| cache.m2.2xlarge | 35022438400 | 3502243840 | 3502243840 | 
| cache.m2.4xlarge | 70883737600 | 7088373760 | 7088373760 | 
| cache.m3.medium | 2988441600 | 309329920 | 309329920 | 
| cache.m3.large | 6501171200 | 650117120 | 650117120 | 
| cache.m3.xlarge | 14260633600 | 1426063360 | 1426063360 | 
| cache.m3.2xlarge | 29989273600 | 2998927360 | 2998927360 | 
| cache.m4.large | 6892593152 | 689259315 | 689259315 | 
| cache.m4.xlarge | 15328501760 | 1532850176 | 1532850176 | 
| cache.m4.2xlarge | 31889126359 | 3188912636 | 3188912636 | 
| cache.m4.4xlarge | 65257290629 | 6525729063 | 6525729063 | 
| cache.m4.10xlarge | 166047614239 | 16604761424 | 16604761424 | 
| cache.m5.large | 6854542746 | 685454275  | 685454275 | 
| cache.m5.xlarge | 13891921715 | 1389192172 | 1389192172 | 
| cache.m5.2xlarge | 27966669210 | 2796666921 | 2796666921 | 
| cache.m5.4xlarge | 56116178125 | 5611617812 | 5611617812 | 
| cache.m5.12xlarge | 168715971994 | 16871597199 | 16871597199 | 
| cache.m5.24xlarge | 337500562842 | 33750056284 | 33750056284 | 
| cache.m6g.large | 6854542746 | 685454275 | 685454275 | 
| cache.m6g.xlarge | 13891921715 | 1389192172 | 1389192172 | 
| cache.m6g.2xlarge | 27966669210 | 2796666921 | 2796666921 | 
| cache.m6g.4xlarge | 56116178125 | 5611617812 | 5611617812 | 
| cache.m6g.8xlarge | 111325552312 | 11132555231 | 11132555231 | 
| cache.m6g.12xlarge | 168715971994 | 16871597199 | 16871597199 | 
| cache.m6g.16xlarge | 225000375228 | 22500037523 | 22500037523 | 
| cache.c1.xlarge | 6501171200 | 650117120 | 650117120 | 
| cache.r3.large | 14470348800 | 1468006400 | 1468006400 | 
| cache.r3.xlarge | 30513561600 | 3040870400 | 3040870400 | 
| cache.r3.2xlarge | 62495129600 | 6081740800 | 6081740800 | 
| cache.r3.4xlarge | 126458265600 | 12268339200 | 12268339200 | 
| cache.r3.8xlarge | 254384537600 | 24536678400 | 24536678400 | 
| cache.r4.large | 13201781556 | 1320178155 | 1320178155 | 
| cache.r4.xlarge | 26898228839 | 2689822883 | 2689822883 | 
| cache.r4.2xlarge | 54197537997 | 5419753799 | 5419753799 | 
| cache.r4.4xlarge | 108858546586 | 10885854658 | 10885854658 | 
| cache.r4.8xlarge | 218255432090 | 21825543209 | 21825543209 | 
| cache.r4.16xlarge | 437021573120 | 43702157312 | 43702157312 | 
| cache.r5.large | 14037181030 | 1403718103 | 1403718103 | 
| cache.r5.xlarge | 28261849702 | 2826184970 | 2826184970 | 
| cache.r5.2xlarge | 56711183565 | 5671118356 | 5671118356 | 
| cache.r5.4xlarge | 113609865216 | 11360986522 | 11360986522 | 
| cache.r5.12xlarge | 341206346547 | 34120634655 | 34120634655 | 
| cache.r5.24xlarge | 682485973811 | 68248597381 | 68248597381 | 
| cache.r6g.large | 14037181030 | 1403718103 | 1403718103 | 
| cache.r6g.xlarge | 28261849702 | 2826184970 | 2826184970 | 
| cache.r6g.2xlarge | 56711183565 | 5671118356 | 5671118356 | 
| cache.r6g.4xlarge | 113609865216 | 11360986522 | 11360986522 | 
| cache.r6g.8xlarge | 225000375228 | 22500037523 | 22500037523 | 
| cache.r6g.12xlarge | 341206346547 | 34120634655 | 34120634655 | 
| cache.r6g.16xlarge | 450000750456 | 45000075046 | 45000075046 | 
| cache.r6gd.xlarge | 28261849702 | 2826184970 | 2826184970 | 
| cache.r6gd.2xlarge | 56711183565 | 5671118356 | 5671118356 | 
| cache.r6gd.4xlarge | 113609865216 | 11360986522 | 11360986522 | 
| cache.r6gd.8xlarge | 225000375228 | 22500037523 | 22500037523 | 
| cache.r6gd.12xlarge | 341206346547 | 34120634655 | 34120634655 | 
| cache.r6gd.16xlarge | 450000750456 | 45000075046 | 45000075046 | 
| cache.r7g.large | 14037181030 | 1403718103 | 1403718103 | 
| cache.r7g.xlarge | 28261849702 | 2826184970 | 2826184970 | 
| cache.r7g.2xlarge | 56711183565 | 5671118356 | 5671118356 | 
| cache.r7g.4xlarge | 113609865216 | 11360986522 | 11360986522 | 
| cache.r7g.8xlarge | 225000375228 | 22500037523 | 22500037523 | 
| cache.r7g.12xlarge | 341206346547 | 34120634655 | 34120634655 | 
| cache.r7g.16xlarge | 450000750456 | 45000075046 | 45000075046 | 
| cache.m7g.large | 6854542746 | 685454275 | 685454275 | 
| cache.m7g.xlarge | 13891921715 | 1389192172 | 1389192172 | 
| cache.m7g.2xlarge | 27966669210 | 2796666921 | 2796666921 | 
| cache.m7g.4xlarge | 56116178125 | 5611617812 | 5611617812 | 
| cache.m7g.8xlarge | 111325552312 | 11132555231 | 11132555231 | 
| cache.m7g.12xlarge | 168715971994 | 16871597199 | 16871597199 | 
| cache.m7g.16xlarge | 225000375228 | 22500037523 | 22500037523 | 
| cache.c7gn.large | 3317862236 | 1403718103 | 1403718103 | 
| cache.c7gn.xlarge | 6854542746 | 2826184970 | 2826184970 | 
| cache.c7gn.2xlarge | 13891921715 | 5671118356 | 5671118356 | 
| cache.c7gn.4xlarge | 27966669210 | 11360986522 | 11360986522 | 
| cache.c7gn.8xlarge | 56116178125 | 22500037523 | 22500037523 | 
| cache.c7gn.12xlarge | 84357985997 | 34120634655 | 34120634655 | 
| cache.c7gn.16xlarge | 113609865216 | 45000075046 | 45000075046 | 

**Note**  
All current generation instance types are created in an Amazon Virtual Private Cloud (Amazon VPC) by default.  
T1 instances do not support Multi-AZ.  
T1 and T2 instances do not support Redis OSS AOF.  
Redis OSS configuration variables `appendonly` and `appendfsync` are not supported on Redis OSS version 2.8.22 and later.
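
As the table shows, the default replica output-buffer limits for each node type are approximately 10 percent of that node type's `maxmemory`. A quick sanity check of that relationship (an illustration only; the authoritative values are the ones in the table above):

```python
# Default maxmemory values for two node types, copied from the table above.
maxmemory_bytes = {
    "cache.t2.micro": 581_959_680,
    "cache.m2.xlarge": 17_091_788_800,
}

def approx_replica_buffer_limit(maxmemory: int) -> int:
    """Rule of thumb: the slave hard/soft output-buffer limits default to
    roughly 10 percent of maxmemory (some rows in the table round up)."""
    return maxmemory // 10

for node, mem in maxmemory_bytes.items():
    print(node, approx_replica_buffer_limit(mem))
```

For these two node types the estimate matches the table exactly; for a few others (for example, `cache.m5.large`) the published limit is rounded, so treat this as an approximation rather than a formula.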

## Memcached specific parameters
<a name="ParameterGroups.Memcached"></a>

**Memcached**

If you do not specify a parameter group for your Memcached cluster, then a default parameter group appropriate to your engine version will be used. You can't change the values of any parameters in a default parameter group. However, you can create a custom parameter group and assign it to your cluster at any time. For more information, see [Creating an ElastiCache parameter group](ParameterGroups.Creating.md).

**Topics**
+ [Memcached 1.6.17 changes](#ParameterGroups.Memcached.1.6.17)
+ [Memcached 1.6.6 added parameters](#ParameterGroups.Memcached.1-6-6)
+ [Memcached 1.5.10 parameter changes](#ParameterGroups.Memcached.1-5-10)
+ [Memcached 1.4.34 added parameters](#ParameterGroups.Memcached.1-4-34)
+ [Memcached 1.4.33 added parameters](#ParameterGroups.Memcached.1-4-33)
+ [Memcached 1.4.24 added parameters](#ParameterGroups.Memcached.1-4-24)
+ [Memcached 1.4.14 added parameters](#ParameterGroups.Memcached.1-4-14)
+ [Memcached 1.4.5 supported parameters](#ParameterGroups.Memcached.1-4-5)
+ [Memcached connection overhead](#ParameterGroups.Memcached.Overhead)
+ [Memcached node-type specific parameters](#ParameterGroups.Memcached.NodeSpecific)

### Memcached 1.6.17 changes
<a name="ParameterGroups.Memcached.1.6.17"></a>

Starting with Memcached 1.6.17, the administrative commands `lru_crawler`, `lru`, and `slabs` are no longer supported. As a result, you can no longer enable or disable `lru_crawler` at runtime by using commands. Instead, enable or disable `lru_crawler` by modifying your custom parameter group.

### Memcached 1.6.6 added parameters
<a name="ParameterGroups.Memcached.1-6-6"></a>

For Memcached 1.6.6, no additional parameters are supported.

**Parameter group family:** memcached1.6

### Memcached 1.5.10 parameter changes
<a name="ParameterGroups.Memcached.1-5-10"></a>

For Memcached 1.5.10, the following additional parameters are supported.

**Parameter group family:** memcached1.5


| Name | Details | Description | 
| --- | --- | --- | 
| no\_modern  | Default: 1 Type: boolean Modifiable: Yes Allowed Values: 0,1 Changes Take Effect: At launch  |  An alias for disabling the `slab_reassign`, `lru_maintainer_thread`, `lru_segmented`, and `maxconns_fast` commands. When using Memcached 1.5 and higher, `no_modern` also sets the hash algorithm to `jenkins`. In addition, when using Memcached 1.5.10, `inline_ascii_response` is controlled in parallel by this parameter. This means that if `no_modern` is disabled, then `inline_ascii_response` is disabled. From Memcached engine 1.5.16 onward, the `inline_ascii_response` parameter no longer applies, so enabling or disabling `no_modern` has no effect on `inline_ascii_response`. If `no_modern` is disabled, then `slab_reassign`, `lru_maintainer_thread`, `lru_segmented`, and `maxconns_fast` will be enabled. Because `slab_automove` and `hash_algorithm` are not switch parameters, their settings are based on the configuration in the parameter group. If you want to disable `no_modern` and revert to `modern`, you must configure a custom parameter group to disable this parameter and then reboot for the change to take effect.   The default value for this parameter changed from 0 to 1 as of August 20, 2021. The updated default is automatically picked up by new ElastiCache users in each Region after August 20, 2021. Existing ElastiCache users in a Region before August 20, 2021 must manually modify their custom parameter groups to pick up this change.   | 
| inline\_ascii\_resp  | Default: 0 Type: boolean Modifiable: Yes Allowed Values: 0,1 Changes Take Effect: At launch  |  Stores numbers from the `VALUE` response inside an item, using up to 24 bytes. Small slowdown for ASCII `get`, faster sets.  | 
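
Because `no_modern` is an alias over several individual features, it can help to model it as a toggle that overrides their values. The helper below is hypothetical and simplified; the actual resolution happens inside the Memcached engine:

```python
def effective_features(no_modern: bool) -> dict:
    """Hypothetical helper: resolve the features that no_modern aliases."""
    aliased = ("slab_reassign", "lru_maintainer_thread",
               "lru_segmented", "maxconns_fast")
    # Enabling no_modern disables each aliased feature, and vice versa.
    features = {name: not no_modern for name in aliased}
    # On Memcached 1.5 and higher, no_modern also selects the jenkins hash;
    # modern mode uses murmur3.
    features["hash_algorithm"] = "jenkins" if no_modern else "murmur3"
    return features

# With the post-August-2021 default (no_modern = 1), the aliased features
# are disabled and the jenkins hash algorithm is used.
print(effective_features(True))
```

Note that `slab_automove` is deliberately absent from this model: as the table states, it is not a switch parameter, so its value always comes from the parameter group.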

For Memcached 1.5.10, the following parameters are removed.


| Name | Details | Description | 
| --- | --- | --- | 
| expirezero\_does\_not\_evict  | Default: 0 Type: boolean Modifiable: Yes Allowed Values: 0,1 Changes Take Effect: At launch  |  No longer supported in this version. | 
| modern  | Default: 1 Type: boolean Modifiable: Yes (requires re-launch if set to `no_modern`) Allowed Values: 0,1 Changes Take Effect: At launch  |  No longer supported in this version. Starting with this version, `no_modern` is enabled by default with every launch or re-launch.  | 

### Memcached 1.4.34 added parameters
<a name="ParameterGroups.Memcached.1-4-34"></a>

For Memcached 1.4.34, no additional parameters are supported.

**Parameter group family:** memcached1.4

### Memcached 1.4.33 added parameters
<a name="ParameterGroups.Memcached.1-4-33"></a>

For Memcached 1.4.33, the following additional parameters are supported.

**Parameter group family:** memcached1.4


| Name | Details | Description | 
| --- | --- | --- | 
|  modern  | Default: enabled Type: boolean Modifiable: Yes Changes Take Effect: At launch  |  An alias for multiple features. Enabling `modern` is equivalent to turning the following commands on and using a murmur3 hash algorithm: `slab_reassign`, `slab_automove`, `lru_crawler`, `lru_maintainer`, `maxconns_fast`, and `hash_algorithm=murmur3`. | 
|  watch  | Default: enabled Type: boolean Modifiable: Yes Changes Take Effect: Immediately Logs can get dropped if the user hits their `watcher_logbuf_size` and `worker_logbuf_size` limits.  |  Logs fetches, evictions, or mutations. For example, when a user turns `watch` on, they can see logs when `get`, `set`, `delete`, or `update` operations occur. | 
|  idle\_timeout  | Default: 0 (disabled) Type: integer Modifiable: Yes Changes Take Effect: At launch  |  The minimum number of seconds a client is allowed to idle before being asked to close. Range of values: 0 to 86400. | 
|  track\_sizes  | Default: disabled Type: boolean Modifiable: Yes Changes Take Effect: At launch  |  Shows the sizes each slab group has consumed. Enabling `track_sizes` lets you run `stats sizes` without the need to run `stats sizes_enable`. | 
|  watcher\_logbuf\_size  | Default: 256 (KB) Type: integer Modifiable: Yes Changes Take Effect: At launch  |  The `watch` command turns on stream logging for Memcached. However, `watch` can drop logs if the rate of evictions, mutations, or fetches is high enough to fill the logging buffer. In such situations, you can increase the buffer size to reduce the chance of log loss. | 
|  worker\_logbuf\_size  | Default: 64 (KB) Type: integer Modifiable: Yes Changes Take Effect: At launch  |  The `watch` command turns on stream logging for Memcached. However, `watch` can drop logs if the rate of evictions, mutations, or fetches is high enough to fill the logging buffer. In such situations, you can increase the buffer size to reduce the chance of log loss. | 
|  slab\_chunk\_max  | Default: 524288 (bytes)  Type: integer Modifiable: Yes Changes Take Effect: At launch  |  Specifies the maximum size of a slab. Setting a smaller slab size uses memory more efficiently. Items larger than `slab_chunk_max` are split over multiple slabs. | 
|  lru\_crawler metadump [all\|1\|2\|3] | Default: disabled  Type: boolean Modifiable: Yes Changes Take Effect: Immediately  |  If `lru_crawler` is enabled, this command dumps all keys. `all\|1\|2\|3` - dump all slabs, or specify a particular slab number. | 

### Memcached 1.4.24 added parameters
<a name="ParameterGroups.Memcached.1-4-24"></a>

For Memcached 1.4.24, the following additional parameters are supported.

**Parameter group family:** memcached1.4


| Name | Details | Description | 
| --- | --- | --- | 
|  disable\_flush\_all  | Default: 0 (disabled) Type: boolean Modifiable: Yes Changes Take Effect: At launch  |  Adds a parameter (`-F`) to disable `flush_all`. Useful if you never want to be able to run a full flush on production instances. Values: 0, 1 (a user can run `flush_all` when the value is 0). | 
|  hash\_algorithm  | Default: jenkins Type: string Modifiable: Yes Changes Take Effect: At launch  | The hash algorithm to be used. Permitted values: murmur3 and jenkins. | 
|  lru\_crawler  | Default: 0 (disabled) Type: boolean Modifiable: Yes Changes Take Effect: After restart  You can temporarily enable `lru_crawler` at runtime from the command line. For more information, see the Description column.   |  Cleans slab classes of items that have expired. This is a low-impact process that runs in the background. Currently requires initiating a crawl using a manual command. To temporarily enable, run `lru_crawler enable` at the command line. `lru_crawler 1,3,5` crawls slab classes 1, 3, and 5 looking for expired items to add to the freelist. Values: 0,1  Enabling `lru_crawler` at the command line enables the crawler until it is either disabled at the command line or the next reboot occurs. To enable it permanently, you must modify the parameter value. For more information, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md).   | 
|  lru\_maintainer  | Default: 0 (disabled) Type: boolean Modifiable: Yes Changes Take Effect: At launch  |  A background thread that shuffles items between the LRUs as capacities are reached. Values: 0, 1.  | 
|  expirezero\_does\_not\_evict  | Default: 0 (disabled) Type: boolean Modifiable: Yes Changes Take Effect: At launch  |  When used with `lru_maintainer`, makes items with an expiration time of 0 unevictable.   This can crowd out memory available for other evictable items.   Can be set to disregard `lru_maintainer`. | 

### Memcached 1.4.14 added parameters
<a name="ParameterGroups.Memcached.1-4-14"></a>

For Memcached 1.4.14, the following additional parameters are supported.

**Parameter group family:** memcached1.4


**Parameters added in Memcached 1.4.14**  

|  Name  |  Details  |  Description  | 
| --- | --- | --- | 
| config\_max | Default: 16 Type: integer Modifiable: No | The maximum number of ElastiCache configuration entries. | 
| config\_size\_max | Default: 65536 Type: integer Modifiable: No | The maximum size of the configuration entries, in bytes. | 
| hashpower\_init | Default: 16 Type: integer Modifiable: No | The initial size of the ElastiCache hash table, expressed as a power of two. The default is 16 (2^16), or 65536 keys. | 
| maxconns\_fast | Default: 0 (false) Type: Boolean Modifiable: Yes Changes Take Effect: After restart | Changes the way in which new connection requests are handled when the maximum connection limit is reached. If this parameter is set to 0 (zero), new connections are added to the backlog queue and wait until other connections are closed. If the parameter is set to 1, ElastiCache sends an error to the client and immediately closes the connection. | 
| slab\_automove | Default: 0 Type: integer Modifiable: Yes Changes Take Effect: After restart | Adjusts the slab automove algorithm: If this parameter is set to 0 (zero), the automove algorithm is disabled. If it is set to 1, ElastiCache takes a slow, conservative approach to automatically moving slabs. If it is set to 2, ElastiCache aggressively moves slabs whenever there is an eviction. (This mode is not recommended except for testing purposes.) | 
| slab\_reassign | Default: 0 (false) Type: Boolean Modifiable: Yes Changes Take Effect: After restart | Enables or disables slab reassignment. If this parameter is set to 1, you can use the "slabs reassign" command to manually reassign memory. | 

### Memcached 1.4.5 supported parameters
<a name="ParameterGroups.Memcached.1-4-5"></a>

**Parameter group family:** memcached1.4

For Memcached 1.4.5, the following parameters are supported.


**Parameters added in Memcached 1.4.5**  

|  Name  |  Details  |  Description  | 
| --- | --- | --- | 
| backlog\_queue\_limit | Default: 1024 Type: integer Modifiable: No | The backlog queue limit. | 
| binding\_protocol | Default: auto Type: string Modifiable: Yes Changes Take Effect: After restart | The binding protocol. Permissible values are: `ascii` and `auto`. For guidance on modifying the value of `binding_protocol`, see [Modifying an ElastiCache parameter group](ParameterGroups.Modifying.md). | 
| cas\_disabled | Default: 0 (false) Type: Boolean Modifiable: Yes Changes Take Effect: After restart | If 1 (true), check and set (CAS) operations are disabled, and items stored consume 8 fewer bytes than with CAS enabled. | 
| chunk\_size | Default: 48 Type: integer Modifiable: Yes Changes Take Effect: After restart | The minimum amount, in bytes, of space to allocate for the smallest item's key, value, and flags. | 
| chunk\_size\_growth\_factor | Default: 1.25 Type: float Modifiable: Yes Changes Take Effect: After restart | The growth factor that controls the size of each successive Memcached chunk; each chunk is `chunk_size_growth_factor` times larger than the previous chunk. | 
| error\_on\_memory\_exhausted | Default: 0 (false) Type: Boolean Modifiable: Yes Changes Take Effect: After restart | If 1 (true), when there is no more memory to store items, Memcached returns an error rather than evicting items. | 
| large\_memory\_pages | Default: 0 (false) Type: Boolean Modifiable: No | If 1 (true), ElastiCache tries to use large memory pages. | 
| lock\_down\_paged\_memory | Default: 0 (false) Type: Boolean Modifiable: No | If 1 (true), ElastiCache locks down all paged memory. | 
| max\_item\_size | Default: 1048576 Type: integer Modifiable: Yes Changes Take Effect: After restart | The size, in bytes, of the largest item that can be stored in the cluster. | 
| max\_simultaneous\_connections | Default: 65000 Type: integer Modifiable: No | The maximum number of simultaneous connections. | 
| maximize\_core\_file\_limit | Default: 0 (false) Type: Boolean Modifiable:  Changes Take Effect: After restart | If 1 (true), ElastiCache maximizes the core file limit. | 
| memcached\_connections\_overhead | Default: 100 Type: integer Modifiable: Yes Changes Take Effect: After restart | The amount of memory to be reserved for Memcached connections and other miscellaneous overhead. For information about this parameter, see [Memcached connection overhead](#ParameterGroups.Memcached.Overhead). | 
| requests\_per\_event | Default: 20 Type: integer Modifiable: No | The maximum number of requests per event for a given connection. This limit is required to prevent resource starvation. | 

### Memcached connection overhead
<a name="ParameterGroups.Memcached.Overhead"></a>

On each node, the memory made available for storing items is the total available memory on that node (which is stored in the `max_cache_memory` parameter) minus the memory used for connections and other overhead (which is stored in the `memcached_connections_overhead` parameter). For example, a node of type `cache.m1.small` has a `max_cache_memory` of 1300MB. With the default `memcached_connections_overhead` value of 100MB, the Memcached process will have 1200MB available to store items.

The default values for the `memcached_connections_overhead` parameter satisfy most use cases; however, the required amount of allocation for connection overhead can vary depending on multiple factors, including request rate, payload size, and the number of connections.

You can change the value of the `memcached_connections_overhead` to better suit the needs of your application. For example, increasing the value of the `memcached_connections_overhead` parameter will reduce the amount of memory available for storing items and provide a larger buffer for connection overhead. Decreasing the value of the `memcached_connections_overhead` parameter will give you more memory to store items, but can increase your risk of swap usage and degraded performance. If you observe swap usage and degraded performance, try increasing the value of the `memcached_connections_overhead` parameter.
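
The arithmetic above can be expressed directly. This sketch uses the `cache.m1.small` figures from this section; per-node `max_cache_memory` values appear in the node-type table later in this section:

```python
def item_memory_mb(max_cache_memory_mb: int,
                   memcached_connections_overhead_mb: int = 100) -> int:
    """Memory left for storing items after reserving connection overhead."""
    return max_cache_memory_mb - memcached_connections_overhead_mb

# cache.m1.small: max_cache_memory is 1300 MB, so with the default
# 100 MB overhead, 1200 MB remains available for items.
print(item_memory_mb(1300))  # 1200

# Raising the overhead to 200 MB trades item capacity for connection headroom.
print(item_memory_mb(1300, 200))  # 1100
```

Increasing the overhead value shrinks item storage but leaves more room for connections; decreasing it does the reverse, at the risk of swap usage under load.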

**Important**  
For the `cache.t1.micro` node type, the value for `memcached_connections_overhead` is determined as follows:  
If your cluster is using the default parameter group, ElastiCache will set the value for `memcached_connections_overhead` to 13MB.
If your cluster is using a parameter group that you have created yourself, you can set the value of `memcached_connections_overhead` to a value of your choice.

### Memcached node-type specific parameters
<a name="ParameterGroups.Memcached.NodeSpecific"></a>

Although most parameters have a single value, some parameters have different values depending on the node type used. The following table shows the default values for the `max_cache_memory` and `num_threads` parameters for each node type. The values of these parameters cannot be modified.


|  Node type  | max\_cache\_memory (in megabytes)  | num\_threads  | 
| --- | --- | --- | 
| cache.t1.micro | 213  | 1 | 
| cache.t2.micro | 555 | 1 | 
| cache.t2.small | 1588 | 1 | 
| cache.t2.medium | 3301 | 2 | 
| cache.t3.micro | 512 | 2 | 
| cache.t3.small | 1402 | 2 | 
| cache.t3.medium | 3364 | 2 | 
| cache.t4g.micro | 512 | 2 | 
| cache.t4g.small | 1402 | 2 | 
| cache.t4g.medium | 3164 | 2 | 
| cache.m1.small | 1301 | 1 | 
| cache.m1.medium | 3350 | 1 | 
| cache.m1.large | 7100 | 2 | 
| cache.m1.xlarge | 14600  | 4 | 
| cache.m2.xlarge | 33800 | 2 | 
| cache.m2.2xlarge | 30412 | 4 | 
| cache.m2.4xlarge | 68000  | 16 | 
| cache.m3.medium | 2850 | 1 | 
| cache.m3.large | 6200 | 2 | 
| cache.m3.xlarge | 13600 | 4 | 
| cache.m3.2xlarge | 28600 | 8 | 
| cache.m4.large | 6573 | 2 | 
| cache.m4.xlarge | 11496  | 4 | 
| cache.m4.2xlarge | 30412 | 8 | 
| cache.m4.4xlarge | 62234 | 16 | 
| cache.m4.10xlarge | 158355 | 40 | 
| cache.m5.large | 6537 | 2 | 
| cache.m5.xlarge | 13248 | 4 | 
| cache.m5.2xlarge | 26671 | 8 | 
| cache.m5.4xlarge | 53516 | 16 | 
| cache.m5.12xlarge | 160900 | 48 | 
| cache.m5.24xlarge | 321865  | 96 | 
| cache.m6g.large | 6537 | 2 | 
| cache.m6g.xlarge | 13248 | 4 | 
| cache.m6g.2xlarge | 26671 | 8 | 
| cache.m6g.4xlarge | 53516 | 16 | 
| cache.m6g.8xlarge | 107000 | 32 | 
| cache.m6g.12xlarge | 160900 | 48 | 
| cache.m6g.16xlarge | 214577 | 64 | 
| cache.c1.xlarge | 6600 | 8 | 
| cache.r3.large | 13800 | 2 | 
| cache.r3.xlarge | 29100 | 4 | 
| cache.r3.2xlarge | 59600 | 8 | 
| cache.r3.4xlarge | 120600 | 16 | 
| cache.r3.8xlarge | 120600 | 32 | 
| cache.r4.large | 12590 | 2 | 
| cache.r4.xlarge | 25652 | 4 | 
| cache.r4.2xlarge | 51686 | 8 | 
| cache.r4.4xlarge | 103815 | 16 | 
| cache.r4.8xlarge | 208144 | 32 | 
| cache.r4.16xlarge | 416776 | 64 | 
| cache.r5.large | 13387 | 2 | 
| cache.r5.xlarge | 26953 | 4 | 
| cache.r5.2xlarge | 54084 | 8 | 
| cache.r5.4xlarge | 108347 | 16 | 
| cache.r5.12xlarge | 325400 | 48 | 
| cache.r5.24xlarge | 650869 | 96 | 
| cache.r6g.large | 13387 | 2 | 
| cache.r6g.xlarge | 26953 | 4 | 
| cache.r6g.2xlarge | 54084 | 8 | 
| cache.r6g.4xlarge | 108347 | 16 | 
| cache.r6g.8xlarge | 214577 | 32 | 
| cache.r6g.12xlarge | 325400 | 48 | 
| cache.r6g.16xlarge | 429154 | 64 | 
| cache.c7gn.large | 3164 | 2 | 
| cache.c7gn.xlarge | 6537 | 4 | 
| cache.c7gn.2xlarge | 13248 | 8 | 
| cache.c7gn.4xlarge | 26671 | 16 | 
| cache.c7gn.8xlarge | 53516 | 32 | 
| cache.c7gn.12xlarge | 325400 | 48 | 
| cache.c7gn.16xlarge | 108347 | 64 | 

**Note**  
All T2 instances are created in an Amazon Virtual Private Cloud (Amazon VPC).

# Connecting an EC2 instance and an ElastiCache cache automatically
<a name="compute-connection"></a>

You can use the ElastiCache console to simplify setting up a connection between an Amazon Elastic Compute Cloud (Amazon EC2) instance and an ElastiCache cache. Often, your cache is in a private subnet and your EC2 instance is in a public subnet within a VPC. You can use a cache client on your EC2 instance to connect to your ElastiCache cache. The EC2 instance can also run web servers or applications that access your private ElastiCache cache. 

![\[Automatically connect an ElastiCache cache with an EC2 instance.\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ec2-elasticache-connect-network_diagram.png)


**Topics**
+ [Automatic connectivity with an EC2 instance](#ec2-elc-connect-overview)
+ [Viewing connected compute resources](#ec2-elc-connect-viewing)

## Automatic connectivity with an EC2 instance
<a name="ec2-elc-connect-overview"></a>

When you set up a connection between an EC2 instance and an ElastiCache cache, ElastiCache automatically configures the VPC security group for your EC2 instance and for your ElastiCache cache.

The following are requirements for connecting an EC2 instance with an ElastiCache cache:
+ The EC2 instance must exist in the same VPC as the ElastiCache cache.

  If no EC2 instances exist in the same VPC, then the console provides a link to create one.
+ The user who sets up connectivity must have permissions to perform the following Amazon EC2 operations. These permissions are generally added to EC2 accounts when they're created. For more information on EC2 permissions, see [Granting required permissions for Amazon EC2 resources](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/ec2-api-permissions.html). 
  + `ec2:AuthorizeSecurityGroupEgress` 
  + `ec2:AuthorizeSecurityGroupIngress` 
  + `ec2:CreateSecurityGroup` 
  + `ec2:DescribeInstances` 
  + `ec2:DescribeNetworkInterfaces` 
  + `ec2:DescribeSecurityGroups` 
  + `ec2:ModifyNetworkInterfaceAttribute` 
  + `ec2:RevokeSecurityGroupEgress` 

When you set up a connection to an EC2 instance, ElastiCache acts according to the current configuration of the security groups associated with the ElastiCache cache and EC2 instance, as described in the following table.


****  

| Current ElastiCache security group configuration | Current EC2 security group configuration | ElastiCache action | 
| --- | --- | --- | 
|  There are one or more security groups associated with the ElastiCache cache with a name that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the ElastiCache cache as the source.  |  ElastiCache takes no action. A connection was already configured automatically between the EC2 instance and the ElastiCache cache. Because a connection already exists between the EC2 instance and the ElastiCache cache, the security groups aren't modified.  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/compute-connection.html)  |  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/compute-connection.html)  |  [ELC action: create new security groups](#elc-action-create-new-security-groups)  | 
|  There are one or more security groups associated with the ElastiCache cache with a name that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. However, ElastiCache can't use any of these security groups for the connection with the ElastiCache cache. ElastiCache can't use a security group that doesn't have one outbound rule with the VPC security group of the ElastiCache cache as the source. ElastiCache also can't use a security group that has been modified.  |  [ELC action: create new security groups](#elc-action-create-new-security-groups)  | 
|  There are one or more security groups associated with the ElastiCache cache with a name that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. A security group that matches the pattern hasn't been modified. This security group has only one inbound rule with the VPC security group of the EC2 instance as the source.  |  A valid EC2 security group for the connection exists, but it is not associated with the EC2 instance. This security group has a name that matches the pattern `ec2-elasticache-${ec2InstanceId}:${cacheId}`. It hasn't been modified. It has only one outbound rule with the VPC security group of the ElastiCache cache as the source.  |  [ELC action: associate EC2 security group](#elc-action-associate-ec2-security-group)  | 
|  Either of the following conditions apply: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/compute-connection.html)  |  There are one or more security groups associated with the EC2 instance with a name that matches the pattern `ec2-elasticache-${ec2InstanceId}:${cacheId}`. A security group that matches the pattern hasn't been modified. This security group has only one outbound rule with the VPC security group of the ElastiCache cache as the source.  |  [ELC action: create new security groups](#elc-action-create-new-security-groups)  | 

**ElastiCache action: create new security groups**  
ElastiCache takes the following actions:
+ Creates a new security group that matches the pattern `elasticache-ec2-${cacheId}:${ec2InstanceId}`. This security group has an inbound rule with the VPC security group of the EC2 instance as the source. This security group is associated with the ElastiCache cache and allows the EC2 instance to access it.
+ Creates a new security group that matches the pattern `ec2-elasticache-${ec2InstanceId}:${cacheId}`. This security group has an outbound rule with the VPC security group of the ElastiCache cache as the target. This security group is associated with the EC2 instance and allows the EC2 instance to send traffic to the ElastiCache cache.

**ElastiCache action: associate EC2 security group**  
ElastiCache associates the valid, existing EC2 security group with the EC2 instance. This security group allows the EC2 instance to send traffic to the ElastiCache cache.
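For reference, the two naming patterns can be distinguished with a simple check. This is an illustrative sketch; the cache and instance IDs are hypothetical:

```python
import re

# Patterns ElastiCache uses for the auto-created security groups described
# above: one associated with the cache, one associated with the EC2 instance.
CACHE_SG = re.compile(r"^elasticache-ec2-(?P<cache>[\w-]+):(?P<ec2>[\w-]+)$")
EC2_SG = re.compile(r"^ec2-elasticache-(?P<ec2>[\w-]+):(?P<cache>[\w-]+)$")

def classify_sg(name):
    """Return which side of the connection a security group name belongs to."""
    if CACHE_SG.match(name):
        return "cache-side"
    if EC2_SG.match(name):
        return "ec2-side"
    return "unmanaged"

print(classify_sg("elasticache-ec2-my-cache:i-0abc123"))  # cache-side
print(classify_sg("ec2-elasticache-i-0abc123:my-cache"))  # ec2-side
print(classify_sg("my-custom-sg"))                        # unmanaged
```

Security groups that don't match either pattern (or that have been modified) are left alone, as the table above describes.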

## Viewing connected compute resources
<a name="ec2-elc-connect-viewing"></a>

You can use the AWS Management Console to view the compute resources that are connected to an ElastiCache cache. The list includes only compute resource connections that were set up automatically. For example, if you allow a compute resource to access a cache manually by adding a rule to the VPC security group associated with the cache, that resource does not appear in the connected compute resources list.

For a compute resource to be listed, the same conditions must apply as when automatically connecting an EC2 instance and an ElastiCache cache.

**To view compute resources connected to an ElastiCache cache**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose **Caches**, and then choose a Valkey or Redis OSS cache.

1. On the **Connectivity & security** tab, view the compute resources in the **Set up compute connection** section.  
![\[Connected compute resources.\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/images/ec2-elasticache-connected_resources.png)

# Scaling ElastiCache
<a name="Scaling"></a>

You can scale your ElastiCache cache to suit your needs. Serverless caches and node-based clusters offer several different scaling options.
+ [Considerations](Scaling-considerations.md)
+ [Scaling ElastiCache Serverless clusters](Scaling-serverless.md)
+ [Scaling node-based clusters](Scaling-self-designed.md)

# Considerations
<a name="Scaling-considerations"></a>

## Potential impact on CPU utilization when scaling
<a name="Scaling-considerations-cpu"></a>

When scaling up or down between node types, be aware of the potential impact on CPU utilization related to enhanced I/O features. For [supported node types](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/CacheNodes.SupportedTypes.html#CacheNodes.CurrentGen), ElastiCache offloads network I/O and TLS operations to dedicated threads by default, which utilize the extra CPU cores available on the node. The availability of these features depends on your engine version and node types:
+ **Enhanced I/O (Redis OSS 5.0.6 and later):** Network I/O is handled on dedicated threads, leveraging additional CPU cores on supported node types.
+ **TLS offloading (Redis OSS 6.2.5 and later):** TLS operations are offloaded to the I/O threads, further utilizing available CPU cores.
+ **Enhanced I/O Multiplexing (Redis OSS 7.0.4 and later, or Valkey 7.2.6 and later):** Multiple client connections are multiplexed onto I/O threads, increasing throughput and optimizing CPU usage across available cores.

These features distribute processing across the extra CPU cores available on the node, which affects CPU metrics in the following ways:

**Impact on CPUUtilization metric**  
`CPUUtilization` reflects the aggregate CPU usage across all cores on the node, including the dedicated I/O threads. Because enhanced I/O features consume CPU on these additional cores, `CPUUtilization` is not a reliable indicator of your engine's actual capacity and loads.

**Impact on EngineCPUUtilization metric**  
`EngineCPUUtilization` measures only the main Redis or Valkey engine thread. When enhanced I/O features are active, operations such as network I/O and TLS processing are offloaded from the main thread to dedicated I/O threads. This means `EngineCPUUtilization` may decrease because the main thread is doing less work. `EngineCPUUtilization` accurately reflects your actual workload capacity and whether your instance is approaching its processing limits.

**Scaling scenarios**
+ **Scaling from a non-supported to a supported node type:** When enhanced I/O features become active on the new node type, `CPUUtilization` may increase as dedicated I/O threads begin utilizing additional CPU cores. At the same time, `EngineCPUUtilization` may decrease as operations are offloaded from the main engine thread.
+ **Scaling up within supported node types:** Additional CPU cores become available, which may reduce `CPUUtilization` as I/O operations are distributed across more resources.
+ **Scaling down within supported node types:** Fewer CPU cores are available to handle I/O operations, which may increase `CPUUtilization` as network I/O, TLS processing, and connection handling compete for limited resources.

**Recommended monitoring approach**

We recommend using `EngineCPUUtilization` rather than `CPUUtilization` for monitoring. `EngineCPUUtilization` measures the main engine thread's performance and accurately reflects whether your instance is approaching its processing limits. `CPUUtilization` may vary across engine versions and node types due to changes in how enhanced I/O features utilize available cores, making it an unreliable metric for capacity planning.
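A small numeric illustration of why the two metrics diverge. The per-core numbers are invented for a hypothetical 4-vCPU node where core 0 runs the main engine thread and the remaining cores run dedicated I/O threads:

```python
# Hypothetical per-core utilization on a 4-vCPU node.
# Core 0 runs the main engine thread; cores 1-3 run dedicated I/O threads.
per_core = [92.0, 20.0, 15.0, 13.0]

cpu_utilization = sum(per_core) / len(per_core)   # aggregate across all cores
engine_cpu_utilization = per_core[0]              # main engine thread only

print(f"CPUUtilization:       {cpu_utilization:.1f}%")        # 35.0%
print(f"EngineCPUUtilization: {engine_cpu_utilization:.1f}%")  # 92.0%
```

In this example the aggregate `CPUUtilization` looks comfortable at 35%, while `EngineCPUUtilization` shows the engine thread near saturation — which is why the latter is the better capacity signal.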

# Scaling ElastiCache Serverless clusters
<a name="Scaling-serverless"></a>

ElastiCache Serverless automatically accommodates your workload traffic as it ramps up or down. For each ElastiCache Serverless cache, ElastiCache continuously tracks the utilization of resources such as CPU, memory, and network. When any of these resources are constrained, ElastiCache Serverless scales out by adding a new shard and redistributing data to the new shard, without any downtime to your application. You can monitor the resources being consumed by your cache in CloudWatch by monitoring the `BytesUsedForCache` metric for cache data storage and `ElastiCacheProcessingUnits` (ECPU) for compute usage. 

## Setting scaling limits to manage costs
<a name="Pre-Scaling"></a>

You can configure a maximum on cache data storage, on ECPUs per second, or on both for your cache to control costs. Doing so ensures that your cache usage never exceeds the configured maximum. 

If you set a scaling maximum then your application may experience decreased cache performance when the cache hits the maximum. When you set a cache data storage maximum and your cache data storage hits the maximum, ElastiCache will begin evicting data in your cache that has a Time-To-Live (TTL) set, using the LRU logic. If there is no data that can be evicted, then requests to write additional data will receive an Out Of Memory (OOM) error message. When you set an ECPU/second maximum and the compute utilization of your workload exceeds this value, ElastiCache will begin throttling requests. 
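The limit-hit behavior described above can be modeled in a few lines. This is only an illustrative sketch of the decision order (evict TTL'd data in LRU order first, otherwise return OOM), not the actual engine logic:

```python
def on_write_at_storage_max(keys_with_ttl_lru):
    """Illustrative model of a write arriving when cache storage is at the
    configured maximum: evict the least recently used key that has a TTL,
    or reject the write with an OOM error if nothing is evictable."""
    if keys_with_ttl_lru:
        evicted = keys_with_ttl_lru.pop(0)  # LRU order: front is least recent
        return f"evicted {evicted}"
    return "OOM error"

print(on_write_at_storage_max(["session:42", "session:7"]))  # evicted session:42
print(on_write_at_storage_max([]))                           # OOM error
```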

If you set up a maximum limit on `BytesUsedForCache` or `ElastiCacheProcessingUnits`, we highly recommend setting a CloudWatch alarm at a value lower than the maximum limit, so that you are notified when your cache is operating close to it. We recommend setting the alarm at 75% of the maximum limit you set. See the CloudWatch documentation for how to set up alarms.
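The 75% guideline translates directly into alarm thresholds. A minimal sketch, using example limits:

```python
def alarm_threshold(maximum, fraction=0.75):
    """CloudWatch alarm threshold at a fraction of the configured maximum."""
    return maximum * fraction

max_storage_gb = 100           # example DataStorage maximum
max_ecpu_per_second = 100_000  # example ECPUPerSecond maximum

print(alarm_threshold(max_storage_gb))       # 75.0 (GB)
print(alarm_threshold(max_ecpu_per_second))  # 75000.0 (ECPUs/second)
```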

## Pre-scaling with ElastiCache Serverless
<a name="Pre-Scaling"></a>

With pre-scaling, also called pre-warming, you can set minimum supported limits for your ElastiCache cache. You can set these minimums for ElastiCache Processing Units (ECPUs) per second or data storage. This can be useful in preparation for anticipated scaling events. For example, if a gaming company expects a 5x increase in logins within the first minute that their new game launches, they can ready their cache for this significant spike in usage. 

You can perform pre-scaling using the ElastiCache console, CLI, or API. ElastiCache Serverless updates the available ECPUs/second on the cache within 60 minutes, and sends an event notification when the minimum limit update is completed. 

**How pre-scaling works**

When the minimum limit for ECPUs/second or data storage is updated via the console, CLI, or API, that new limit is available within 1 hour. ElastiCache Serverless supports 30K ECPUs/second on an empty cache, and up to 90K ECPUs/sec when using the Read from Replica feature. ElastiCache Serverless for Valkey 8.0 can double the supported requests per second (RPS) every 2-3 minutes, reaching 5M RPS per cache from zero in under 13 minutes, with consistent sub-millisecond p50 read latency. If you anticipate that an upcoming scaling event might exceed this rate, then we recommend setting the minimum ECPUs/second to the peak ECPUs/sec you expect at least 60 minutes before the peak event. Otherwise, the application may experience elevated latency and throttling of requests. 
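Because a new minimum can take up to 60 minutes to apply, the latest safe time to request it is one hour before the anticipated peak. A small sketch (the times are examples):

```python
from datetime import datetime, timedelta

# ElastiCache Serverless applies a new minimum within 60 minutes, so request
# the pre-scale at least an hour before the anticipated peak.
PROPAGATION = timedelta(minutes=60)

def latest_prescale_request(peak_time):
    """Latest time to request the new minimum so it is active by the peak."""
    return peak_time - PROPAGATION

peak = datetime(2024, 6, 1, 18, 0)  # example: game launch at 18:00
print(latest_prescale_request(peak))  # 2024-06-01 17:00:00
```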

Once the minimum limit update is complete, ElastiCache Serverless will start metering you for the new minimum ECPUs per second or the new minimum storage. This occurs even if your application is not executing requests on the cache, or if your data storage usage is below the minimum. When you lower the minimum limit from its current setting, the update is immediate so ElastiCache Serverless will begin metering at the new minimum limit immediately. 

**Note**  
When you set a minimum usage limit, you are charged for that limit even if your actual usage is lower than the minimum usage limit. ECPU or data storage usage that exceeds the minimum usage limit are charged the regular rate. For example, if you set a minimum usage limit of 100,000 ECPUs/second then you will be charged at least \$11.224 per hour (using ECPU prices in us-east-1), even if your usage is lower than that set minimum.
ElastiCache Serverless supports the requested minimum scale at an aggregate level on the cache. ElastiCache Serverless also supports a maximum of 30K ECPUs/second per slot (90K ECPUs/second when using Read from Replica using READONLY connections). As a best practice, your application should ensure that key distribution across Valkey or Redis OSS slots and traffic across keys is as uniform as possible.

## Setting scaling limits using the console and AWS CLI
<a name="Pre-Scaling.console"></a>

**Setting scaling limits using the AWS Management Console**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. In the navigation pane, choose the engine that runs on the cache that you want to modify. A list of caches running the chosen engine appears.

1. Choose the cache to modify by choosing the radio button to the left of the cache’s name.

1. Choose **Actions** and then choose **Modify**.

1. Under **Usage limits**, set appropriate **Memory** or **Compute** limits.

1. Choose **Preview changes** and then **Save changes**.

**Setting scaling limits using the AWS CLI**

To change scaling limits using the CLI, use the `modify-serverless-cache` command.

**Linux:**

```
aws elasticache modify-serverless-cache --serverless-cache-name <cache name> \
--cache-usage-limits 'DataStorage={Minimum=10,Maximum=100,Unit=GB}, ECPUPerSecond={Minimum=1000,Maximum=100000}'
```

**Windows:**

```
aws elasticache modify-serverless-cache --serverless-cache-name <cache name> ^
--cache-usage-limits 'DataStorage={Minimum=10,Maximum=100,Unit=GB}, ECPUPerSecond={Minimum=1000,Maximum=100000}'
```

**Removing scaling limits using the CLI**

To remove scaling limits using the CLI, set the Minimum and Maximum limit parameters to 0.

**Linux:**

```
aws elasticache modify-serverless-cache --serverless-cache-name <cache name> \
--cache-usage-limits 'DataStorage={Minimum=0,Maximum=0,Unit=GB}, ECPUPerSecond={Minimum=0,Maximum=0}'
```

**Windows:**

```
aws elasticache modify-serverless-cache --serverless-cache-name <cache name> ^
--cache-usage-limits 'DataStorage={Minimum=0,Maximum=0,Unit=GB}, ECPUPerSecond={Minimum=0,Maximum=0}'
```

# Scaling node-based clusters
<a name="Scaling-self-designed"></a>

The amount of data your application needs to process is seldom static. It increases and decreases as your business grows or experiences normal fluctuations in demand. If you self-manage your cache, you need to provision sufficient hardware for your demand peaks, which can be expensive. By using Amazon ElastiCache you can scale to meet current demand, paying only for what you use. ElastiCache enables you to scale your cache to match demand.

**Note**  
If a Valkey or Redis OSS cluster is replicated across one or more Regions, then those Regions are scaled in order. When scaling up, secondary Regions are scaled first and then the primary Region. When scaling down, the primary Region is first and then any secondary Regions follow.  
When updating the engine version, the order is secondary Region and then primary Region.

**Topics**
+ [On-demand scaling for Memcached clusters](Scaling-self-designed.mem-heading.md)
+ [Manual scaling for Memcached clusters](Scaling.Memcached.manually.md)
+ [Scaling for Valkey or Redis OSS (Cluster Mode Disabled) clusters](scaling-redis-classic.md)
+ [Scaling replica nodes for Valkey or Redis OSS (Cluster Mode Disabled)](Scaling.RedisReplGrps.md)
+ [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)

# On-demand scaling for Memcached clusters
<a name="Scaling-self-designed.mem-heading"></a>

ElastiCache for Memcached offers a fully managed, in-memory caching service that deploys, operates, and vertically scales Memcached in the AWS cloud. 

**On-demand vertical scaling**

Memcached is a high-performance, distributed memory caching system widely used to speed up dynamic applications by alleviating database load. It stores data and objects in RAM, reducing the need to read from external data sources. With on-demand vertical scaling, you can change the node type of a Memcached cluster to match demand.

You can apply vertical scaling to existing node-based clusters as well as new ones. This flexibility in resource allocation lets you adapt efficiently to changing workloads without altering cluster architecture. Scaling improves performance by increasing cache capacity during high-demand periods, and it optimizes costs by scaling down during low-demand periods. It also simplifies operations, eliminates the need to create new clusters for shifting resource requirements, and enables quick responses to traffic fluctuations. Overall, vertical scaling for Memcached node-based clusters enhances cost efficiency, improves resource utilization, and lets you change your Memcached instance type, all of which makes it easier to align your caching infrastructure with actual application needs. 

**Note**  
Node type modifications are only available for node-based Memcached clusters with engine version 1.5 or later.
Auto Discovery must be enabled to use vertical scaling. 

## Setting up on-demand vertical scaling for node-based Memcached clusters
<a name="Scaling.Memcached.automatically.setup.cli"></a>

You can configure on-demand vertical scaling for Memcached with `scale-config`, which contains two parameters: 

1. **ScaleIntervalMinutes:** Time (in minutes) between scaling batches during the Memcached upgrade process

1. **ScalePercentage:** Percentage of nodes to scale concurrently during the Memcached upgrade process

**Converting an existing Memcached cluster to a cache that can vertically scale via the CLI**

To convert an existing Memcached node-based cluster to a cache that can vertically scale, you can use `elasticache modify-cache-cluster` via the CLI.

```
aws elasticache modify-cache-cluster \
    --cache-cluster-id <your-cluster-id> \
    --cache-node-type <new-node-type> \
    --scale-config <scale-config> \
    --apply-immediately
```

**Setting up vertical scaling with the CLI**

To set up vertical scaling for a node-based Memcached cluster via the CLI, use `elasticache modify-cache-cluster` with `scale-config` and its parameters `ScalePercentage` and `ScaleIntervalMinutes`. 
+ **scale-interval-minutes:** This defines the time (in minutes) between scaling batches. Valid values range from 2 to 30 minutes. If no value is specified, the default of 5 minutes is applied.
+ **scale-percentage:** This specifies the percentage of nodes to scale concurrently in each batch. Valid values range from 10 to 100. The result is rounded up when dividing; for example, a result of 49.5 is treated as 50. If no value is specified, the default of 20 is applied.

These configuration options will enable you to fine-tune the scaling process according to your specific needs, balancing between minimizing cluster disruption and optimizing scaling speed. The scale-config parameter will only be applicable for Memcached engine types and will be ignored for other cache engines, ensuring backward compatibility with existing API usage for other clusters.
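Assuming the batching works as described (a percentage of nodes per batch, rounded up, with a fixed wait between batches), the implied rollout arithmetic looks like this. The exact internal process may differ; this only illustrates how the two parameters interact:

```python
import math

def scaling_plan(node_count, scale_percentage=20, scale_interval_minutes=5):
    """Nodes per batch (rounded up), number of batches, and total wait time
    between batches implied by the scale-config parameters."""
    per_batch = math.ceil(node_count * scale_percentage / 100)
    batches = math.ceil(node_count / per_batch)
    wait_minutes = (batches - 1) * scale_interval_minutes
    return per_batch, batches, wait_minutes

# A 10-node cluster with the defaults: 2 nodes per batch, 5 batches,
# 5-minute waits between them (20 minutes of waiting overall).
print(scaling_plan(10))  # (2, 5, 20)

# Raising scale-percentage to 30 and shortening the interval speeds it up.
print(scaling_plan(10, scale_percentage=30, scale_interval_minutes=2))  # (3, 4, 6)
```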

**API call**

```
aws elasticache modify-cache-cluster \
    --cache-cluster-id <your-cluster-id> \
    --cache-node-type <new-node-type> \
    --scale-config '{
            "ScalePercentage": 30,
            "ScaleIntervalMinutes": 2
          }' \
    --apply-immediately
```

**Result:**

Returns the cluster ID and the pending change.

```
{
    "CacheCluster": {
        "CacheNodeType": "old_instance_type",
         ...
         ...
         "PendingModifiedValues": {
            "CacheNodeType": "new_instance_type"
         },
    }
}
```

**List your Memcached cache vertical scaling setting**

You can retrieve scaling options for your Memcached caches, and see what their current options are for vertical scaling. 

**API call**

```
aws elasticache list-allowed-node-type-modifications --cache-cluster-id <your-cluster-id>
```

**Result:**

```
{
  "ScaleUpModifications": [
      "cache.x.xxxx",
      "cache.x.xxxx"
  ],
  "ScaleDownModifications": [
      "cache.x.xxxx",
      "cache.x.xxxx",
      "cache.x.xxxx"
  ]
}
```

**Vertical scaling for Memcached with the AWS Management Console**

Follow these steps to use the AWS Management Console to convert a node-based Memcached cluster to a vertically scalable cluster.

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. Select the Memcached cluster to convert.

1. Select the **Modify** tab.

1. Go to the **Cache settings** section, and select the desired **Node type**.

1. Select **Preview changes**, and review the changes.

1. Select **Modify**.

## Automated horizontal scaling for Memcached
<a name="Scaling-self-designed.mem-heading.horizontal"></a>

ElastiCache integrates with AWS Application Auto Scaling to provide automated horizontal scaling for Memcached clusters. You can define scaling policies through Application Auto Scaling that automatically adjust the number of nodes in a Memcached cluster, based on predefined metrics or schedules.

**Note**  
Automated horizontal scaling is not currently available in the Beijing and Ningxia Regions. 

These are the available methods for automatically horizontally scaling your node-based clusters.
+ **Scheduled Scaling:** Scaling based on a schedule allows you to set your own scaling schedule for predictable load changes. For example, every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can configure Auto Scaling to increase capacity on Wednesday and decrease capacity on Friday. 
+ **Target Tracking:** With target tracking scaling policies, you choose a scaling metric and set a target value. Application Auto Scaling creates and manages the CloudWatch alarms that trigger the scaling policy and calculates the scaling adjustment based on the metric and the target value. The scaling policy adds or removes capacity as required to keep the metric at, or close to, the specified target value. 
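As a rough mental model of target tracking (not the exact Application Auto Scaling algorithm), capacity converges toward a value proportional to the ratio of the observed metric to the target:

```python
import math

def target_tracking_capacity(current_nodes, metric_value, target_value):
    """Approximate capacity a target tracking policy converges toward:
    capacity scales proportionally with metric/target (simplified model)."""
    return max(1, math.ceil(current_nodes * metric_value / target_value))

# CPU at 75% against a 50% target on a 10-node cluster -> scale out to ~15.
print(target_tracking_capacity(10, 75, 50))  # 15

# CPU at 25% against the same target -> scale in toward ~5 nodes.
print(target_tracking_capacity(10, 25, 50))  # 5
```

The real service applies cooldowns and scales in more conservatively than this sketch, as described later in this section.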

**How to set up horizontal scaling for a node-based Memcached cluster via the CLI**

When horizontal scaling a node-based Memcached cluster, you can use a target tracking policy, a scheduled policy, or both.

1. **Register a resource as scalable target**

   Call the `RegisterScalableTarget` API in AWS Application Auto Scaling to register the target for the scalable dimension `elasticache:cache-cluster:Nodes`. 

   **API: ApplicationAutoScaling.RegisterScalableTarget**

   Input:

   ```
   {
   	"ScalableDimension": "elasticache:cache-cluster:Nodes",
   	"ResourceId": "cache-cluster/test-cluster-1",
   	"ServiceNamespace": "elasticache",
   	"MinCapacity": 20,  
   	"MaxCapacity": 50 
   }
   ```

1. **Create a Target tracking scaling policy**

   Next, you can create a target tracking scaling policy for the resource by calling the `PutScalingPolicy` API. 

1. **Predefined Metric**

   Following is a policy that scales along the cache node dimension, using the predefined metric `ElastiCacheCPUUtilization` with a target value of 50 for the cluster `test-cluster-1`. When nodes are deleted during scale-in, the last *n* nodes added are removed.

   **API: ApplicationAutoScaling.PutScalingPolicy**

   Input:

   ```
   {
   	"PolicyName": "cpu50-target-tracking-scaling-policy",
   	"PolicyType": "TargetTrackingScaling",
   	"TargetTrackingScalingPolicyConfiguration": {
   		"TargetValue": 50,
   		"PredefinedMetricSpecification": {
   			"PredefinedMetricType": "ElastiCacheCPUUtilization"
   			},
   		"ScaleOutCooldown": 600,
   		"ScaleInCooldown": 600
   			},
   	"ServiceNamespace": "elasticache",
   	"ScalableDimension": "elasticache:cache-cluster:Nodes",
   	"ResourceId": "cache-cluster/test-cluster-1"
   }
   ```

   Output:

   ```
   {
   	"PolicyARN": "arn:aws:autoscaling:us-west-2:012345678910:scalingPolicy:6d8972f3-efc8-437c-92d1-6270f29a66e7:resource/elasticache/cache-cluster/test-cluster-1:policyName/cpu50-target-tracking-scaling-policy",
   	"Alarms": [
   		{
   		"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca",
   		"AlarmName": "TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca"
   		},
   		{
   		"AlarmARN": "arn:aws:cloudwatch:us-west-2:012345678910:alarm:TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d",
   		"AlarmName": "TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmLow-1b437334-d19b-4a63-a812-6c67aaf2910d"
   		}
   	]
   }
   ```

1. **Custom Metric**

   You can also set a scaling policy on the dimension by using a custom CloudWatch metric.

   Input:

   ```
   {
   	"PolicyName": "cpu50-target-tracking-scaling-policy",
   	"PolicyType": "TargetTrackingScaling",
   	"TargetTrackingScalingPolicyConfiguration": {
   		"CustomizedMetricSpecification": { 
   			"Dimensions": [ 
   				{ 
   				"Name": "MyMetricDimension",
   				"Value": "DimensionValue"
   				}
   				],
   			"MetricName": "MyCustomMetric",
   			"Namespace": "MyNamespace",
   			"Statistic": "Average",
   			"Unit": "Percent"
   			},
   		"TargetValue": 40,
   		"ScaleOutCooldown": 600,
   		"ScaleInCooldown": 600
   		},
   	"ServiceNamespace": "elasticache",
   	"ScalableDimension": "elasticache:cache-cluster:Nodes",
   	"ResourceId": "cache-cluster/test-cluster-1"
   }
   ```

1. **Scheduled Actions**

   When you need to scale out for a particular event and then scale in after the event, you can create two scheduled actions by calling the `PutScheduledAction` API. 

   **Policy 1: Scaling out**

   The `at` expression in the `Schedule` field schedules the action to run once at a specified date and time in the future. The field also supports `rate` expressions (a fixed interval of minutes, hours, or days) and `cron` expressions.

   At the date and time specified, Application Auto Scaling updates the `MinCapacity` and `MaxCapacity` values, and scales out to the new `MinCapacity`, bringing the cluster to 70 cache nodes. 

   **API: ApplicationAutoScaling.PutScheduledAction**

   Input:

   ```
   {
   	"ResourceId": "elasticache:cache-cluster:test-cluster-1",
   	"ScalableDimension": "elasticache:cache-cluster:Nodes",
   		"ScalableTargetAction": { 
   			"MaxCapacity": 100,
   			"MinCapacity": 70
   			},
   	"Schedule": "at(2020-05-20T17:05:00)",
   	"ScheduledActionName": "ScalingOutScheduledAction",
   	"ServiceNamespace": "elasticache"
   }
   ```

   **Policy 2: Scaling in**

   At the date and time specified, Application Auto Scaling updates the `MinCapacity` and `MaxCapacity` values, and scales in to the new `MaxCapacity`, returning the cluster to 60 cache nodes.

   **API: ApplicationAutoScaling.PutScheduledAction**

   Input:

   ```
   {
   	"ResourceId": "elasticache:cache-cluster:test-cluster-1",
   	"ScalableDimension": "elasticache:cache-cluster:Nodes",
   	"ScalableTargetAction": { 
   		"MaxCapacity": 60,
   		"MinCapacity": 40
   		},
   	"Schedule": "at(2020-05-21T17:05:00)",
   	"ScheduledActionName": "ScalingInScheduledAction",
   	"ServiceNamespace": "elasticache"
   }
   ```

1. **View the Scaling Activities**

   You can view the scaling activities using the `DescribeScalingActivities` API. 

   **API: ApplicationAutoScaling.DescribeScalingActivities**

   Output:

   ```
   {
   	"ScalingActivities": [
   		{
   		"ScalableDimension": "elasticache:cache-cluster:Nodes",
   		"Description": "Setting desired count to 30.",
   		"ResourceId": "elasticache/cache-cluster/test-cluster-1",
   		"ActivityId": "4d759079-a31f-4d0c-8468-504c56e2eecf",
   		"StartTime": 1462574194.658,
   		"ServiceNamespace": "elasticache",
   		"EndTime": 1462574276.686,
   		"Cause": "monitor alarm TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca in state ALARM triggered policy cpu50-target-tracking-scaling-policy",
   		"StatusMessage": "Failed to set desired count to 30",
   		"StatusCode": "Failed"
   		},
   		{
   		"ScalableDimension": "elasticache:cache-cluster:Nodes",
   		"Description": "Setting desired count to 25.",
   		"ResourceId": "elasticache/cache-cluster/test-cluster-1",
   		"ActivityId": "90aff0eb-dd6a-443c-889b-b809e78061c1",
   		"StartTime": 1462574254.223,
   		"ServiceNamespace": "elasticache",
   		"EndTime": 1462574333.492,
   		"Cause": "monitor alarm TargetTracking-elasticache/cache-cluster/test-cluster-1-AlarmHigh-d4f0770c-b46e-434a-a60f-3b36d653feca in state ALARM triggered policy cpu50-target-tracking-scaling-policy",
   		"StatusMessage": "Successfully set desired count to 25. Change successfully fulfilled by elasticache.",
   		"StatusCode": "Successful"
   		}
   	]
   }
   ```

1. **Edit/Delete Scaling Policy**

   You can edit or delete policies by calling the `PutScalingPolicy` API again, or by calling the `DeleteScalingPolicy` or `DeleteScheduledAction` APIs. 

1. **De-register scalable targets**

   You can de-register the scalable target through the `DeregisterScalableTarget` API. Deregistering a scalable target deletes the scaling policies and the scheduled actions that are associated with it. 

   **API: ApplicationAutoScaling.DeregisterScalableTarget**

   Input:

   ```
   {
   	"ResourceId": "elasticache/cache-cluster/test-cluster-1",
   	"ServiceNamespace": "elasticache",
   	"ScalableDimension": "elasticache:cache-cluster:Nodes"
   }
   ```

1. **Multiple Scaling Policies**

   You can create multiple scaling policies. Following are key callouts on behavior from [Auto scaling target tracking](https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-target-tracking.html). 
   + You can have multiple target tracking scaling policies for a scalable target, provided that each of them uses a different metric.
   + The intention of Application Auto Scaling is to always prioritize availability, so its behavior differs depending on whether the target tracking policies are ready for scale out or scale in. It will scale out the scalable target if any of the target tracking policies are ready for scale out, but will scale in only if all of the target tracking policies (with the scale-in portion enabled) are ready to scale in. 
   + If multiple policies instruct the scalable target to scale out or in at the same time, Application Auto Scaling scales based on the policy that provides the largest capacity for both scale in and scale out. This provides greater flexibility to cover multiple scenarios and ensures that there is always enough capacity to process your application workloads. 
**Note**  
AWS Application Auto Scaling does not queue scaling policies. Application Auto Scaling will wait for the first scaling to complete, then cooldown, and then repeat the above algorithm.
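Following the largest-capacity rule above, conflict resolution between simultaneous policies can be sketched as follows. This is illustrative only; the actual Application Auto Scaling implementation is internal to the service:

```python
def resolve_desired_capacity(current, policy_targets):
    """When several policies fire at once, Application Auto Scaling honors
    the one that yields the largest capacity, prioritizing availability.
    With no active policy recommendation, capacity is unchanged."""
    return max(policy_targets) if policy_targets else current

# Two target tracking policies disagree: one wants 25 nodes, one wants 30.
print(resolve_desired_capacity(current=20, policy_targets=[25, 30]))  # 30

# During scale-in, the largest (least aggressive) target also wins.
print(resolve_desired_capacity(current=40, policy_targets=[25, 35]))  # 35
```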

**Automatically horizontally scale a node-based Memcached cluster via the AWS Management Console**

Follow these steps to use the AWS Management Console to convert an existing node-based Memcached cluster to a horizontally scalable cluster.

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. Select the Memcached cache to convert.

1. Go to the **Autoscaling** tab.

1. Select the scaling policy to apply, by selecting either **Add dynamic scaling** or **Add scheduled scaling**.

1. Fill in the details for the selected policy as needed.

1. Choose **Create**.

# Manual scaling for Memcached clusters
<a name="Scaling.Memcached.manually"></a>

Manually horizontally scaling a Memcached cluster in or out is as easy as adding or removing nodes from the cluster. Memcached clusters are composed of 1 to 60 nodes. 

Because you can partition your data across all the nodes in a Memcached cluster, scaling up to a node type with greater memory is seldom required. However, because the Memcached engine does not persist data, if you do scale to a different node type then your new cluster starts out empty unless your application populates it.

To manually vertically scale your Memcached cluster, you must create a new cluster. Memcached clusters always start out empty unless your application populates them. 


**Manually scaling Memcached clusters**  

| Action | Topic | 
| --- | --- | 
|  Scaling out  |  [Adding nodes to a cluster](Clusters.html#AddNode)  | 
|  Scaling in  |  [Deleting nodes from a cluster](Clusters.html#DeleteNode)  | 
|  Changing node types  |  [Manually scaling node-based Memcached clusters vertically](#Scaling.Memcached.Vertically)  | 

**Topics**
+ [Manually scaling a node-based Memcached cluster horizontally](#Scaling.Memcached.Horizontally)
+ [Manually scaling node-based Memcached clusters vertically](#Scaling.Memcached.Vertically)

## Manually scaling a node-based Memcached cluster horizontally
<a name="Scaling.Memcached.Horizontally"></a>

The Memcached engine supports partitioning your data across multiple nodes. Because of this, Memcached clusters are easy to scale horizontally. To horizontally scale your Memcached cluster, simply add or remove nodes.

The following topics detail how to scale your Memcached cluster out or in by adding or removing nodes.
+ [Adding nodes to a cluster](Clusters.html#AddNode)
+ [Deleting nodes from your cluster](Clusters.html#DeleteNode)

Each time you change the number of nodes in your Memcached cluster, you must re-map at least some of your keyspace so it maps to the correct node. For more detailed information on load balancing your Memcached cluster, see [Configuring your ElastiCache client for efficient load balancing (Memcached)](BestPractices.LoadBalancing.md).
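To see why re-mapping is needed, here is a minimal sketch using plain modulo hashing (a common default in Memcached clients). It shows how much of the keyspace maps to a different node after the node count changes; consistent hashing, covered in the load-balancing topic, reduces this churn.

```python
# Sketch: with naive modulo hashing, changing the node count remaps most of
# the keyspace. This example only illustrates the problem; it does not
# connect to any cluster.
import hashlib

def node_for(key, num_nodes):
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % num_nodes

keys = [f"user:{i}" for i in range(10000)]
moved = sum(1 for k in keys if node_for(k, 4) != node_for(k, 5))
print(f"{moved / len(keys):.0%} of keys map to a different node")  # typically around 80%
```

Going from 4 to 5 nodes, only keys whose hash is congruent modulo both node counts stay put, so roughly four out of five keys move.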

If you use auto discovery on your Memcached cluster, you do not need to change the endpoints in your application as you add or remove nodes. For more information on auto discovery, see [Automatically identify nodes in your cluster (Memcached)](AutoDiscovery.md). If you do not use auto discovery, each time you change the number of nodes in your Memcached cluster you must update the endpoints in your application.

## Manually scaling node-based Memcached clusters vertically
<a name="Scaling.Memcached.Vertically"></a>

When you manually scale your Memcached cluster up or down, you must create a new cluster. Memcached clusters always start out empty unless your application populates them. 

**Important**  
If you are scaling down to a smaller node type, be sure that the smaller node type is adequate for your data and overhead. For more information, see [Choosing your node size](CacheNodes.SelectSize.md).

**Topics**
+ [Scaling a node-based Memcached cluster vertically (Console)](#Scaling.Memcached.Vertically.CON)
+ [Scaling a node-based Memcached cluster vertically (AWS CLI)](#Scaling.Memcached.Vertically.CLI)
+ [Scaling a node-based Memcached cluster vertically (ElastiCache API)](#Scaling.Memcached.Vertically.API)

### Scaling a node-based Memcached cluster vertically (Console)
<a name="Scaling.Memcached.Vertically.CON"></a>

The following procedure walks you through scaling a node-based Memcached cluster vertically using the AWS Management Console.

1. Create a new cluster with the new node type. For more information, see [Creating a Memcached cluster (console)](Clusters.Create-mc.md#Clusters.Create.CON.Memcached).

1. In your application, update the endpoints to the new cluster's endpoints. For more information, see [Finding a Cluster's Endpoints (Console) (Memcached)](Endpoints.md#Endpoints.Find.Memcached).

1. Delete the old cluster. For more information, see [Deleting a Memcached cluster (console)](Clusters.html#Delete.CON.Memcached).

### Scaling a node-based Memcached cluster vertically (AWS CLI)
<a name="Scaling.Memcached.Vertically.CLI"></a>

The following procedure walks you through scaling a node-based Memcached cluster vertically using the AWS CLI.

1. Create a new cluster with the new node type. For more information, see [Creating a cluster (AWS CLI)](Clusters.Create.md#Clusters.Create.CLI).

1. In your application, update the endpoints to the new cluster's endpoints. For more information, see [Finding Endpoints (AWS CLI)](Endpoints.md#Endpoints.Find.CLI).

1. Delete the old cluster. For more information, see [Using the AWS CLI to delete an ElastiCache cluster](Clusters.Delete.md#Clusters.Delete.CLI).
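The three steps above can also be scripted. The sketch below captures only the order of operations, with a pluggable client object (a hypothetical interface using boto3-style method names) so the sequence can be exercised without AWS access; with boto3 you would pass `boto3.client("elasticache")` and real parameters.

```python
# Sketch of the vertical-scaling sequence for Memcached: create the new
# cluster, repoint the application, then delete the old cluster. The client
# is injected so this can run against a stub.

def scale_vertically(client, old_cluster_id, new_cluster_id, new_node_type,
                     repoint_app):
    client.create_cache_cluster(CacheClusterId=new_cluster_id,
                                CacheNodeType=new_node_type,
                                Engine="memcached")
    repoint_app(new_cluster_id)          # update endpoints in your application
    client.delete_cache_cluster(CacheClusterId=old_cluster_id)

class StubClient:
    def __init__(self): self.calls = []
    def create_cache_cluster(self, **kw): self.calls.append(("create", kw))
    def delete_cache_cluster(self, **kw): self.calls.append(("delete", kw))

stub = StubClient()
scale_vertically(stub, "old-mc", "new-mc", "cache.m6g.large",
                 repoint_app=lambda cid: stub.calls.append(("repoint", cid)))
print([c[0] for c in stub.calls])  # ['create', 'repoint', 'delete']
```

In a real script you would also wait for the new cluster to become *available* before repointing, and only delete the old cluster after verifying the application is healthy.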

### Scaling a node-based Memcached cluster vertically (ElastiCache API)
<a name="Scaling.Memcached.Vertically.API"></a>

The following procedure walks you through scaling a node-based Memcached cluster vertically using the ElastiCache API.

1. Create a new cluster with the new node type. For more information, see [Creating a cluster for Memcached (ElastiCache API)](Clusters.Create-mc.md#Clusters.Create.API.mem-heading).

1. In your application, update the endpoints to the new cluster's endpoints. For more information, see [Finding Endpoints (ElastiCache API)](Endpoints.md#Endpoints.Find.API).

1. Delete the old cluster. For more information, see [Using the ElastiCache API](Clusters.Delete.md#Clusters.Delete.API).

# Scaling for Valkey or Redis OSS (Cluster Mode Disabled) clusters
<a name="scaling-redis-classic"></a>

Valkey or Redis OSS (cluster mode disabled) clusters are either single-node clusters or multi-node clusters with a single shard. Single-node clusters use the one node for both reads and writes. Multi-node clusters always have one node as the read/write primary node with 0 to 5 read-only replica nodes.

**Topics**
+ [Scaling for Valkey or Redis OSS (Cluster Mode Disabled) clusters](#Scaling.RedisStandalone)


**Scaling Valkey or Redis OSS clusters**  

| Action | Valkey or Redis OSS (cluster mode disabled) | Valkey or Redis OSS (cluster mode enabled) | 
| --- | --- | --- | 
|  Scaling in  |  [Removing nodes from an ElastiCache cluster](Clusters.DeleteNode.md)  |  [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)  | 
|  Scaling out  |  [Adding nodes to a cluster](Clusters.html#AddNode)  |  [Online resharding for Valkey or Redis OSS (cluster mode enabled)](scaling-redis-cluster-mode-enabled.md#redis-cluster-resharding-online)  | 
|  Changing node types  |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/scaling-redis-classic.html) [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/scaling-redis-classic.html)  |  [Online vertical scaling by modifying node type](redis-cluster-vertical-scaling.md)  | 
|  Changing the number of node groups  |  Not supported for Valkey or Redis OSS (cluster mode disabled) clusters  |  [Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters](scaling-redis-cluster-mode-enabled.md)  | 

**Contents**
+ [Scaling for Valkey or Redis OSS (Cluster Mode Disabled) clusters](#Scaling.RedisStandalone)
  + [Scaling up single-node Valkey or Redis OSS clusters](#Scaling.RedisStandalone.ScaleUp)
    + [Scaling up single-node Valkey or Redis OSS (Cluster Mode Disabled) (Console) clusters](#Scaling.RedisStandalone.ScaleUp.CON)
    + [Scaling up single-node Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleUp.CLI)
    + [Scaling up single-node Valkey or Redis OSS clusters (ElastiCache API)](#Scaling.RedisStandalone.ScaleUp.API)
  + [Scaling down single-node Valkey or Redis OSS clusters](#Scaling.RedisStandalone.ScaleDown)
    + [Scaling down a single-node Valkey or Redis OSS cluster (Console)](#Scaling.RedisStandalone.ScaleDown.CON)
    + [Scaling down single-node Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleUpDown-Modify.CLI)
    + [Scaling down single-node Valkey or Redis OSS clusters (ElastiCache API)](#Scaling.RedisStandalone.ScaleDown.API)

## Scaling for Valkey or Redis OSS (Cluster Mode Disabled) clusters
<a name="Scaling.RedisStandalone"></a>

Valkey or Redis OSS (cluster mode disabled) nodes must be large enough to contain all the cache's data plus the Valkey or Redis OSS overhead. To change the data capacity of your Valkey or Redis OSS (cluster mode disabled) cluster, you must scale vertically; scaling up to a larger node type to increase data capacity, or scaling down to a smaller node type to reduce data capacity.

The ElastiCache scaling up process is designed to make a best effort to retain your existing data and requires successful Valkey or Redis OSS replication. For Valkey or Redis OSS (cluster mode disabled) clusters, we recommend that sufficient memory be made available to Valkey or Redis OSS. 

You cannot partition your data across multiple Valkey or Redis OSS (cluster mode disabled) clusters. However, if you only need to increase or decrease your cluster's read capacity, you can create a Valkey or Redis OSS (cluster mode disabled) cluster with replica nodes and add or remove read replicas. To create a Valkey or Redis OSS (cluster mode disabled) cluster with replica nodes using your single-node Valkey or Redis OSS cluster as the primary cluster, see [Creating a Valkey (cluster mode disabled) cluster (Console)](SubnetGroups.designing-cluster-pre.valkey.md#Clusters.Create.CON.valkey-gs).

After you create the cluster with replicas, you can increase read capacity by adding read replicas. Later, if you need to, you can reduce read capacity by removing read replicas. For more information, see [Increasing read capacity](Scaling.RedisReplGrps.md#Scaling.RedisReplGrps.ScaleOut) or [Decreasing read capacity](Scaling.RedisReplGrps.md#Scaling.RedisReplGrps.ScaleIn).

In addition to being able to scale read capacity, Valkey or Redis OSS (cluster mode disabled) clusters with replicas provide other business advantages. For more information, see [High availability using replication groups](Replication.md).

**Important**  
If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, before you begin scaling be sure that you have a custom parameter group that reserves the correct amount of memory for your new node type. Alternatively, you can modify a custom parameter group so that it uses `reserved-memory-percent` and use that parameter group for your new cluster.  
If you're using `reserved-memory-percent`, doing this is not necessary.   
For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).
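The practical difference between the two parameters is easy to miss: `reserved-memory` is an absolute byte count that must be re-tuned for every node type, while `reserved-memory-percent` scales with the node's memory automatically. A quick sketch of the arithmetic (node sizes are illustrative):

```python
# Sketch: why reserved-memory-percent carries over between node types while a
# fixed reserved-memory value does not. Sizes are illustrative only.

def usable_bytes_fixed(node_bytes, reserved_bytes):
    return node_bytes - reserved_bytes

def usable_bytes_percent(node_bytes, reserved_percent):
    return node_bytes - node_bytes * reserved_percent // 100

small = 6 * 1024**3    # e.g. a ~6 GiB node
large = 13 * 1024**3   # e.g. a ~13 GiB node

# A 1.5 GiB fixed reservation is 25% of the small node but only ~11.5% of the
# large one, so the same parameter group under-reserves after scaling up.
fixed = 15 * 1024**3 // 10
print(usable_bytes_fixed(small, fixed), usable_bytes_percent(large, 25))
```

This is why a `reserved-memory`-based parameter group must be replaced or re-tuned before scaling, while a `reserved-memory-percent`-based one can be reused as-is.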

**Topics**
+ [Scaling up single-node Valkey or Redis OSS clusters](#Scaling.RedisStandalone.ScaleUp)
+ [Scaling down single-node Valkey or Redis OSS clusters](#Scaling.RedisStandalone.ScaleDown)

### Scaling up single-node Valkey or Redis OSS clusters
<a name="Scaling.RedisStandalone.ScaleUp"></a>

When you scale up a single-node Valkey or Redis OSS cluster, ElastiCache performs the following process, whether you use the ElastiCache console, the AWS CLI, or the ElastiCache API.

1. A new cluster with the new node type is spun up in the same Availability Zone as the existing cluster.

1. The cache data in the existing cluster is copied to the new cluster. How long this process takes depends upon your node type and how much data is in the cluster.

1. Reads and writes are now served using the new cluster. Because the new cluster's endpoints are the same as they were for the old cluster, you do not need to update the endpoints in your application. You will notice a brief interruption (a few seconds) of reads and writes from the primary node while the DNS entry is updated.

1. ElastiCache deletes the old cluster. You will notice a brief interruption (a few seconds) of reads and writes from the old node because the connections to the old node will be disconnected. 

**Note**  
For clusters running the r6gd node type, you can only scale to node sizes within the r6gd node family.

As shown in the following table, your Valkey or Redis OSS scale-up operation is blocked if you have an engine upgrade scheduled for the next maintenance window. For more information on maintenance windows, see [Managing ElastiCache cluster maintenance](maintenance-window.md).


**Blocked Valkey or Redis OSS operations**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/scaling-redis-classic.html)

If you have a pending operation that is blocking you, you can do one of the following.
+ Schedule your Valkey or Redis OSS scale-up operation for the next maintenance window by clearing the **Apply immediately** check box (CLI use: `--no-apply-immediately`, API use: `ApplyImmediately=false`).
+ Wait until your next maintenance window (or after) to perform your Valkey or Redis OSS scale up operation.
+ Add the Valkey or Redis OSS engine upgrade to this cluster modification with the **Apply Immediately** check box chosen (CLI use: `--apply-immediately`, API use: `ApplyImmediately=true`). This unblocks your scale up operation by causing the engine upgrade to be performed immediately.

You can scale up a single-node Valkey or Redis OSS (cluster mode disabled) cluster using the ElastiCache console, the AWS CLI, or ElastiCache API.

**Important**  
If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, before you begin scaling be sure that you have a custom parameter group that reserves the correct amount of memory for your new node type. Alternatively, you can modify a custom parameter group so that it uses `reserved-memory-percent` and use that parameter group for your new cluster.  
If you're using `reserved-memory-percent`, doing this is not necessary.   
For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).

#### Scaling up single-node Valkey or Redis OSS (Cluster Mode Disabled) (Console) clusters
<a name="Scaling.RedisStandalone.ScaleUp.CON"></a>

The following procedure describes how to scale up a single-node Valkey or Redis OSS cluster using the ElastiCache Management Console. During this process, your Valkey or Redis OSS cluster will continue to serve requests with minimal downtime.

**To scale up a single-node Valkey or Redis OSS cluster (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey or Redis OSS clusters**.

1. From the list of clusters, choose the cluster you want to scale up (it must be running the Valkey or Redis OSS engine, not the clustered Valkey or Redis OSS engine). 

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale to from the **Node type** list.

   1. If you're using `reserved-memory` to manage your memory, from the **Parameter Group** list, choose the custom parameter group that reserves the correct amount of memory for your new node type.

1. If you want to perform the scale-up process right away, select the **Apply immediately** check box. If the **Apply immediately** check box is not selected, the scale-up process is performed during this cluster's next maintenance window.

1. Choose **Modify**.

   If you chose **Apply immediately** in the previous step, the cluster's status changes to *modifying*. When the status changes to *available*, the modification is complete and you can begin using the new cluster.

#### Scaling up single-node Valkey or Redis OSS clusters (AWS CLI)
<a name="Scaling.RedisStandalone.ScaleUp.CLI"></a>

The following procedure describes how to scale up a single-node Valkey or Redis OSS cluster using the AWS CLI. During this process, your Valkey or Redis OSS cluster will continue to serve requests with minimal downtime.

**To scale up a single-node Valkey or Redis OSS cluster (AWS CLI)**

1. Determine the node types you can scale up to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.
   + `--cache-cluster-id`

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --cache-cluster-id my-cache-cluster-id
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --cache-cluster-id my-cache-cluster-id
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
       "ScaleUpModifications": [
           "cache.m3.2xlarge",
           "cache.m3.large",
           "cache.m3.xlarge",
           "cache.m4.10xlarge",
           "cache.m4.2xlarge",
           "cache.m4.4xlarge",
           "cache.m4.large",
           "cache.m4.xlarge",
           "cache.r3.2xlarge",
           "cache.r3.4xlarge",
           "cache.r3.8xlarge",
           "cache.r3.large",
           "cache.r3.xlarge"
       ],
       "ScaleDownModifications": [
           "cache.t2.micro",
           "cache.t2.small",
           "cache.t2.medium",
           "cache.t1.small"
       ]
   }
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Modify your existing cluster specifying the cluster to scale up and the new, larger node type, using the AWS CLI `modify-cache-cluster` command and the following parameters.
   + `--cache-cluster-id` – The name of the cluster you are scaling up. 
   + `--cache-node-type` – The new node type to which you want to scale the cluster. This value must be one of the node types returned by the `list-allowed-node-type-modifications` command in step 1.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-up process to be applied immediately. To postpone the scale-up process to the cluster's next maintenance window, use the `--no-apply-immediately` parameter.

   For Linux, macOS, or Unix:

   ```
   aws elasticache modify-cache-cluster \
   	    --cache-cluster-id my-redis-cache-cluster \
   	    --cache-node-type cache.m3.xlarge \
   	    --cache-parameter-group-name redis32-m2-xl \
   	    --apply-immediately
   ```

   For Windows:

   ```
   aws elasticache modify-cache-cluster ^
   	    --cache-cluster-id my-redis-cache-cluster ^
   	    --cache-node-type cache.m3.xlarge ^
   	    --cache-parameter-group-name redis32-m2-xl ^
   	    --apply-immediately
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
   	    "CacheCluster": {
   	        "Engine": "redis", 
   	        "CacheParameterGroup": {
   	            "CacheNodeIdsToReboot": [], 
   	            "CacheParameterGroupName": "default.redis6.x", 
   	            "ParameterApplyStatus": "in-sync"
   	        }, 
   	        "SnapshotRetentionLimit": 1, 
   	        "CacheClusterId": "my-redis-cache-cluster", 
   	        "CacheSecurityGroups": [], 
   	        "NumCacheNodes": 1, 
   	        "SnapshotWindow": "00:00-01:00", 
   	        "CacheClusterCreateTime": "2017-02-21T22:34:09.645Z", 
   	        "AutoMinorVersionUpgrade": true, 
   	        "CacheClusterStatus": "modifying", 
   	        "PreferredAvailabilityZone": "us-west-2a", 
   	        "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
   	        "CacheSubnetGroupName": "default", 
   	        "EngineVersion": "6.0", 
   	        "PendingModifiedValues": {
   	            "CacheNodeType": "cache.m3.2xlarge"
   	        }, 
   	        "PreferredMaintenanceWindow": "tue:11:30-tue:12:30", 
   	        "CacheNodeType": "cache.m3.medium",
   	         "DataTiering": "disabled"
   	    }
   	}
   ```

   For more information, see [modify-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-cluster.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, check the status of the new cluster using the AWS CLI `describe-cache-clusters` command with the following parameter. When the status changes to *available*, you can begin using the new, larger cluster.
   + `--cache-cluster-id` – The name of your single-node Valkey or Redis OSS cluster. Use this parameter to describe a particular cluster rather than all clusters.

   ```
   aws elasticache describe-cache-clusters --cache-cluster-id my-redis-cache-cluster
   ```

   For more information, see [describe-cache-clusters](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-clusters.html) in the *AWS CLI Reference*.
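Step 3, waiting for the status to change from *modifying* to *available*, is commonly automated as a polling loop. Below is a minimal sketch with the status lookup injected so it runs without AWS access; with boto3 you could supply `lambda: client.describe_cache_clusters(CacheClusterId=cid)["CacheClusters"][0]["CacheClusterStatus"]`, or consider the SDK's built-in waiters instead.

```python
# Sketch: poll cluster status until it reports "available". fetch_status is
# injected so the loop itself is testable; delay would normally be 15-30
# seconds rather than 0.
import time

def wait_until_available(fetch_status, attempts=60, delay=0.0):
    for _ in range(attempts):
        if fetch_status() == "available":
            return True
        time.sleep(delay)
    return False

statuses = iter(["modifying", "modifying", "available"])
print(wait_until_available(lambda: next(statuses)))  # True
```

Bound the number of attempts so a stuck modification surfaces as an error in your automation rather than an infinite wait.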

#### Scaling up single-node Valkey or Redis OSS clusters (ElastiCache API)
<a name="Scaling.RedisStandalone.ScaleUp.API"></a>

The following procedure describes how to scale up a single-node Valkey or Redis OSS cluster using the ElastiCache API. During this process, your Valkey or Redis OSS cluster will continue to serve requests with minimal downtime.

**To scale up a single-node Valkey or Redis OSS cluster (ElastiCache API)**

1. Determine the node types you can scale up to by running the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `CacheClusterId` – The name of the single-node Valkey or Redis OSS cluster you want to scale up.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &CacheClusterId=MyRedisCacheCluster
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Modify your existing cluster specifying the cluster to scale up and the new, larger node type, using the `ModifyCacheCluster` ElastiCache API action and the following parameters.
   + `CacheClusterId` – The name of the cluster you are scaling up.
   + `CacheNodeType` – The new, larger node type you want to scale the cluster up to. This value must be one of the node types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to cause the scale-up process to be performed immediately. To postpone the scale-up process to the cluster's next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyCacheCluster
   	   &ApplyImmediately=true
   	   &CacheClusterId=MyRedisCacheCluster
   	   &CacheNodeType=cache.m3.xlarge
   	   &CacheParameterGroupName=redis32-m2-xl
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ModifyCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html) in the *Amazon ElastiCache API Reference*.

1. If you used `ApplyImmediately=true`, check the status of the new cluster using the ElastiCache API `DescribeCacheClusters` action with the following parameter. When the status changes to *available*, you can begin using the new, larger cluster.
   + `CacheClusterId` – The name of your single-node Valkey or Redis OSS cluster. Use this parameter to describe a particular cluster rather than all clusters.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=DescribeCacheClusters
   	   &CacheClusterId=MyRedisCacheCluster
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [DescribeCacheClusters](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheClusters.html) in the *Amazon ElastiCache API Reference*.

### Scaling down single-node Valkey or Redis OSS clusters
<a name="Scaling.RedisStandalone.ScaleDown"></a>

The following sections walk you through how to scale a single-node Valkey or Redis OSS cluster down to a smaller node type. Ensuring that the new, smaller node type is large enough to accommodate all the data and Valkey or Redis OSS overhead is important to the long-term success of your new Valkey or Redis OSS cluster. For more information, see [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md).
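A rough pre-flight check before scaling down: the smaller node must hold your current dataset plus the memory reserved for engine overhead such as backups and replication. A sketch of the arithmetic follows; the 25% reserve mirrors a typical `reserved-memory-percent` setting and the sizes are illustrative, so see the linked topics for the actual guidance.

```python
# Sketch: rough check that a smaller node type can hold the current dataset.
# used_bytes could come from the Valkey/Redis OSS INFO command (used_memory);
# reserved_percent mirrors the reserved-memory-percent parameter.

def fits(used_bytes, node_max_bytes, reserved_percent=25):
    usable = node_max_bytes * (100 - reserved_percent) // 100
    return used_bytes <= usable

# A ~1.3 GiB dataset on a node with ~1.5 GiB of memory fails with a 25% reserve:
print(fits(1300 * 1024**2, 1536 * 1024**2))  # False
```

If the check fails, either evict or expire data before scaling down, or choose a larger target node type.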

**Note**  
For clusters running the r6gd node type, you can only scale to node sizes within the r6gd node family.

**Topics**
+ [Scaling down a single-node Valkey or Redis OSS cluster (Console)](#Scaling.RedisStandalone.ScaleDown.CON)
+ [Scaling down single-node Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleUpDown-Modify.CLI)
+ [Scaling down single-node Valkey or Redis OSS clusters (ElastiCache API)](#Scaling.RedisStandalone.ScaleDown.API)

#### Scaling down a single-node Valkey or Redis OSS cluster (Console)
<a name="Scaling.RedisStandalone.ScaleDown.CON"></a>

The following procedure walks you through scaling your single-node Valkey or Redis OSS cluster down to a smaller node type using the ElastiCache console.

**Important**  
If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, before you begin scaling be sure that you have a custom parameter group that reserves the correct amount of memory for your new node type. Alternatively, you can modify a custom parameter group so that it uses `reserved-memory-percent` and use that parameter group for your new cluster.  
If you're using `reserved-memory-percent`, doing this is not necessary.   
For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).

**To scale down your single-node Valkey or Redis OSS cluster (console)**

1. Ensure that the smaller node type is adequate for your data and overhead needs. 

1. If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, ensure that you have a custom parameter group to set aside the correct amount of memory for your new node type.

   Alternatively, you can modify your custom parameter group to use `reserved-memory-percent`. For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the list of clusters, choose the cluster you want to scale down. This cluster must be running the Valkey or Redis OSS engine and not the clustered Valkey or Redis OSS engine.

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale down to from the **Node type** list.

   1. If you're using `reserved-memory` to manage your memory, from the **Parameter Group** list, choose the custom parameter group that reserves the correct amount of memory for your new node type.

1. If you want to perform the scale-down process right away, select the **Apply immediately** check box. If the **Apply immediately** check box is not selected, the scale-down process is performed during this cluster's next maintenance window.

1. Choose **Modify**.

1. When the cluster’s status changes from *modifying* to *available*, your cluster has scaled to the new node type. There is no need to update the endpoints in your application.

#### Scaling down single-node Valkey or Redis OSS clusters (AWS CLI)
<a name="Scaling.RedisStandalone.ScaleUpDown-Modify.CLI"></a>

The following procedure describes how to scale down a single-node Valkey or Redis OSS cluster using the AWS CLI. 

**To scale down a single-node Valkey or Redis OSS cluster (AWS CLI)**

1. Determine the node types you can scale down to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.
   + `--cache-cluster-id`

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --cache-cluster-id my-cache-cluster-id
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --cache-cluster-id my-cache-cluster-id
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
       "ScaleUpModifications": [
           "cache.m3.2xlarge",
           "cache.m3.large",
           "cache.m3.xlarge",
           "cache.m4.10xlarge",
           "cache.m4.2xlarge",
           "cache.m4.4xlarge",
           "cache.m4.large",
           "cache.m4.xlarge",
           "cache.r3.2xlarge",
           "cache.r3.4xlarge",
           "cache.r3.8xlarge",
           "cache.r3.large",
           "cache.r3.xlarge"
       ],
       "ScaleDownModifications": [
           "cache.t2.micro",
           "cache.t2.small",
           "cache.t2.medium",
           "cache.t1.small"
       ]
   }
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Modify your existing cluster specifying the cluster to scale down and the new, smaller node type, using the AWS CLI `modify-cache-cluster` command and the following parameters.
   + `--cache-cluster-id` – The name of the cluster you are scaling down. 
   + `--cache-node-type` – The new node type to which you want to scale the cluster. This value must be one of the node types returned by the `list-allowed-node-type-modifications` command in step 1.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-down process to be applied immediately. To postpone the scale-down process to the cluster's next maintenance window, use the `--no-apply-immediately` parameter.

   For Linux, macOS, or Unix:

   ```
   aws elasticache modify-cache-cluster \
   	    --cache-cluster-id my-redis-cache-cluster \
   	    --cache-node-type cache.m3.xlarge \
   	    --cache-parameter-group-name redis32-m2-xl \
   	    --apply-immediately
   ```

   For Windows:

   ```
   aws elasticache modify-cache-cluster ^
   	    --cache-cluster-id my-redis-cache-cluster ^
   	    --cache-node-type cache.m3.xlarge ^
   	    --cache-parameter-group-name redis32-m2-xl ^
   	    --apply-immediately
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
   	    "CacheCluster": {
   	        "Engine": "redis", 
   	        "CacheParameterGroup": {
   	            "CacheNodeIdsToReboot": [], 
   	            "CacheParameterGroupName": "default.redis6.x", 
   	            "ParameterApplyStatus": "in-sync"
   	        }, 
   	        "SnapshotRetentionLimit": 1, 
   	        "CacheClusterId": "my-redis-cache-cluster", 
   	        "CacheSecurityGroups": [], 
   	        "NumCacheNodes": 1, 
   	        "SnapshotWindow": "00:00-01:00", 
   	        "CacheClusterCreateTime": "2017-02-21T22:34:09.645Z", 
   	        "AutoMinorVersionUpgrade": true, 
   	        "CacheClusterStatus": "modifying", 
   	        "PreferredAvailabilityZone": "us-west-2a", 
   	        "ClientDownloadLandingPage": "https://console.aws.amazon.com/elasticache/home#client-download:", 
   	        "CacheSubnetGroupName": "default", 
   	        "EngineVersion": "6.0", 
   	        "PendingModifiedValues": {
   	            "CacheNodeType": "cache.m3.xlarge"
   	        }, 
   	        "PreferredMaintenanceWindow": "tue:11:30-tue:12:30", 
   	        "CacheNodeType": "cache.m3.medium",
   	         "DataTiering": "disabled"
   	    }
   	}
   ```

   For more information, see [modify-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-cache-cluster.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, check the status of the cluster using the AWS CLI `describe-cache-clusters` command with the following parameter. When the status changes to *available*, you can begin using the new, smaller cluster.
   + `--cache-cluster-id` – The name of your single-node Valkey or Redis OSS cluster. Use this parameter to describe a particular cluster rather than all clusters.

   ```
   aws elasticache describe-cache-clusters --cache-cluster-id my-redis-cache-cluster
   ```

   For more information, see [describe-cache-clusters](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-cache-clusters.html) in the *AWS CLI Reference*.
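
Rather than rerunning `describe-cache-clusters` by hand, you can poll until the modification completes. The following shell sketch assumes a configured AWS CLI; the helper name `wait_for_available` is illustrative, not part of the CLI.

```
wait_for_available() {
  # Poll the cluster status every 30 seconds until it is "available".
  local cluster_id="$1" status
  while true; do
    status=$(aws elasticache describe-cache-clusters \
        --cache-cluster-id "$cluster_id" \
        --query 'CacheClusters[0].CacheClusterStatus' --output text)
    if [ "$status" = "available" ]; then
      echo "Cluster $cluster_id is available."
      return 0
    fi
    sleep 30
  done
}

# Usage: wait_for_available my-redis-cache-cluster
```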

#### Scaling down single-node Valkey or Redis OSS clusters (ElastiCache API)
<a name="Scaling.RedisStandalone.ScaleDown.API"></a>

The following procedure describes how to scale down a single-node Valkey or Redis OSS cluster using the ElastiCache API.

**To scale down a single-node Valkey or Redis OSS cluster (ElastiCache API)**

1. Determine the node types you can scale down to by running the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `CacheClusterId` – The name of the single-node Valkey or Redis OSS cluster you want to scale down.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &CacheClusterId=MyRedisCacheCluster
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Modify your existing cluster, specifying the cluster to scale down and the new, smaller node type, using the `ModifyCacheCluster` ElastiCache API action and the following parameters.
   + `CacheClusterId` – The name of the cluster you are scaling down.
   + `CacheNodeType` – The new, smaller node type you want to scale the cluster down to. This value must be one of the node types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to cause the scale-down process to be performed immediately. To postpone the scale-down process to the cluster's next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyCacheCluster
   	   &ApplyImmediately=true
   	   &CacheClusterId=MyRedisCacheCluster
   	   &CacheNodeType=cache.m3.xlarge
   	   &CacheParameterGroupName=redis32-m2-xl
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ModifyCacheCluster](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyCacheCluster.html) in the *Amazon ElastiCache API Reference*.

1. If you used `ApplyImmediately=true`, check the status of the new cluster using the ElastiCache API `DescribeCacheClusters` action with the following parameter. When the status changes to *available*, you can begin using the new, smaller cluster.
   + `CacheClusterId` – The name of your single-node Valkey or Redis OSS cluster. Use this parameter to describe a particular cluster rather than all clusters.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=DescribeCacheClusters
   	   &CacheClusterId=MyRedisCacheCluster
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [DescribeCacheClusters](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeCacheClusters.html) in the *Amazon ElastiCache API Reference*.

# Scaling replica nodes for Valkey or Redis OSS (Cluster Mode Disabled)
<a name="Scaling.RedisReplGrps"></a>

A Valkey or Redis OSS cluster with replica nodes (called *replication group* in the API/CLI) provides high availability via replication that has Multi-AZ with automatic failover enabled. A cluster with replica nodes is a logical collection of up to six Valkey or Redis OSS nodes where one node, the Primary, is able to serve both read and write requests. All the other nodes in the cluster are read-only replicas of the Primary. Data written to the Primary is asynchronously replicated to all the read replicas in the cluster. Because Valkey or Redis OSS (cluster mode disabled) does not support partitioning your data across multiple clusters, each node in a Valkey or Redis OSS (cluster mode disabled) replication group contains the entire cache dataset. Valkey or Redis OSS (cluster mode enabled) clusters support partitioning your data across up to 500 shards.

To change the data capacity of your cluster you must scale it up to a larger node type, or down to a smaller node type.

To change the read capacity of your cluster, add more read replicas, up to a maximum of 5, or remove read replicas.

The ElastiCache scaling up process is designed to make a best effort to retain your existing data and requires successful Valkey or Redis OSS replication. For Valkey or Redis OSS clusters with replicas, we recommend that sufficient memory be made available to Valkey or Redis OSS. 

**Topics**
+ [Scaling up Valkey or Redis OSS clusters with replicas](#Scaling.RedisReplGrps.ScaleUp)
+ [Scaling down Valkey or Redis OSS clusters with replicas](#Scaling.RedisReplGrps.ScaleDown)
+ [Increasing read capacity](#Scaling.RedisReplGrps.ScaleOut)
+ [Decreasing read capacity](#Scaling.RedisReplGrps.ScaleIn)

**Related Topics**
+ [High availability using replication groups](Replication.md)
+ [Replication: Valkey and Redis OSS Cluster Mode Disabled vs. Enabled](Replication.Redis-RedisCluster.md)
+ [Minimizing downtime in ElastiCache by using Multi-AZ with Valkey and Redis OSS](AutoFailover.md)
+ [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md)


## Scaling up Valkey or Redis OSS clusters with replicas
<a name="Scaling.RedisReplGrps.ScaleUp"></a>

Amazon ElastiCache provides console, CLI, and API support for scaling your Valkey or Redis OSS (cluster mode disabled) replication group up. 

When the scale-up process is initiated, ElastiCache does the following:

1. Launches a replication group using the new node type.

1. Copies all the data from the current primary node to the new primary node.

1. Syncs the new read replicas with the new primary node.

1. Updates the DNS entries so they point to the new nodes. Because of this, you don't have to update the endpoints in your application. For Valkey 7.2 and above or Redis OSS 5.0.5 and above, you can scale auto failover enabled clusters while the cluster continues to stay online and serve incoming requests. On Redis OSS version 4.0.10 and below, you might notice a brief interruption of reads and writes from the primary node while the DNS entry is updated. 

1. Deletes the old nodes (CLI/API: replication group). You will notice a brief interruption (a few seconds) of reads and writes from the old nodes because the connections to the old nodes will be disconnected.

How long this process takes depends on your node type and how much data is in your cluster.

As shown in the following table, your Valkey or Redis OSS scale-up operation is blocked if you have an engine upgrade scheduled for the cluster’s next maintenance window.


**Blocked Valkey or Redis OSS operations**  
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonElastiCache/latest/dg/Scaling.RedisReplGrps.html)

If you have a pending operation that is blocking you, you can do one of the following.
+ Schedule your Valkey or Redis OSS scale-up operation for the next maintenance window by clearing the **Apply immediately** check box (CLI use: `--no-apply-immediately`, API use: `ApplyImmediately=false`).
+ Wait until your next maintenance window (or after) to perform your Valkey or Redis OSS scale-up operation.
+ Add the Valkey or Redis OSS engine upgrade to this cluster modification with the **Apply Immediately** check box chosen (CLI use: `--apply-immediately`, API use: `ApplyImmediately=true`). This unblocks your scale-up operation by causing the engine upgrade to be performed immediately.

The following sections describe how to scale your Valkey or Redis OSS cluster with replicas up using the ElastiCache console, the AWS CLI, and the ElastiCache API.

**Important**  
If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, before you begin scaling be sure that you have a custom parameter group that reserves the correct amount of memory for your new node type. Alternatively, you can modify a custom parameter group so that it uses `reserved-memory-percent` and use that parameter group for your new cluster.  
If you're using `reserved-memory-percent`, doing this is not necessary.   
For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).
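
For example, switching a custom parameter group to percentage-based reservation can be sketched with the `modify-cache-parameter-group` command. The group name and the 25 percent value below are illustrative, and the helper name is not part of the CLI.

```
set_reserved_memory_percent() {
  # Set reserved-memory-percent on a custom parameter group.
  # $1 = parameter group name, $2 = percentage to reserve.
  aws elasticache modify-cache-parameter-group \
      --cache-parameter-group-name "$1" \
      --parameter-name-values "ParameterName=reserved-memory-percent,ParameterValue=$2"
}

# Usage: set_reserved_memory_percent my-custom-param-group 25
```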

### Scaling up a Valkey or Redis OSS cluster with replicas (Console)
<a name="Scaling.RedisReplGrps.ScaleUp.CON"></a>

The amount of time it takes to scale up to a larger node type varies, depending upon the node type and the amount of data in your current cluster.

The following process scales your cluster with replicas from its current node type to a new, larger node type using the ElastiCache console. During this process, there might be a brief interruption of reads and writes from the primary node while the DNS entry is updated. You might see less than 1 second of downtime for nodes running version 5.0.6 and above, and a few seconds for older versions.

**To scale up Valkey or Redis OSS cluster with replicas (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. From the list of clusters, choose the cluster you want to scale up. This cluster must be running the Valkey or Redis OSS engine and not the clustered Valkey or Redis OSS engine.

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale up to from the **Node type** list. Note that not all node types are available to scale up to.

   1. If you're using `reserved-memory` to manage your memory, from the **Parameter Group** list, choose the custom parameter group that reserves the correct amount of memory for your new node type.

1. If you want to perform the scale-up process right away, choose the **Apply immediately** check box. If you don't choose this check box, the scale-up process is performed during the cluster's next maintenance window.

1. Choose **Modify**.

1. When the cluster’s status changes from *modifying* to *available*, your cluster has scaled to the new node type. There is no need to update the endpoints in your application.

### Scaling up a Valkey or Redis OSS replication group (AWS CLI)
<a name="Scaling.RedisReplGrps.ScaleUp.CLI"></a>

The following process scales your replication group from its current node type to a new, larger node type using the AWS CLI. During this process, ElastiCache updates the DNS entries so they point to the new nodes. Because of this, you don't have to update the endpoints in your application. For Valkey 7.2 and above or Redis OSS 5.0.5 and above, you can scale auto failover enabled clusters while the cluster continues to stay online and serve incoming requests. On version 4.0.10 and below, you might notice a brief interruption of reads and writes from the primary node while the DNS entry is updated.

The amount of time it takes to scale up to a larger node type varies, depending upon your node type and the amount of data in your current cluster.

**To scale up a Valkey or Redis OSS Replication Group (AWS CLI)**

1. Determine which node types you can scale up to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.
   + `--replication-group-id` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --replication-group-id my-repl-group
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --replication-group-id my-repl-group
   ```

   Output from this operation looks something like this (JSON format).

   ```
   {
   	    "ScaleUpModifications": [
   	        "cache.m3.2xlarge", 
   	        "cache.m3.large", 
   	        "cache.m3.xlarge", 
   	        "cache.m4.10xlarge", 
   	        "cache.m4.2xlarge", 
   	        "cache.m4.4xlarge", 
   	        "cache.m4.large", 
   	        "cache.m4.xlarge", 
   	        "cache.r3.2xlarge", 
   	        "cache.r3.4xlarge", 
   	        "cache.r3.8xlarge", 
   	        "cache.r3.large", 
   	        "cache.r3.xlarge"
   	    ]
   	}
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Scale your current replication group up to the new node type using the AWS CLI `modify-replication-group` command with the following parameters.
   + `--replication-group-id` – the name of the replication group.
   + `--cache-node-type` – the new, larger node type of the clusters in this replication group. This value must be one of the instance types returned by the `list-allowed-node-type-modifications` command in the previous step.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-up process to be applied immediately. To postpone the scale-up operation to the next maintenance window, use `--no-apply-immediately`.

   For Linux, macOS, or Unix:

   ```
   aws elasticache modify-replication-group \
   	    --replication-group-id my-repl-group \
   	    --cache-node-type cache.m3.xlarge \
   	    --cache-parameter-group-name redis32-m3-2xl \
   	    --apply-immediately
   ```

   For Windows:

   ```
   aws elasticache modify-replication-group ^
   	    --replication-group-id my-repl-group ^
   	    --cache-node-type cache.m3.xlarge ^
   	    --cache-parameter-group-name redis32-m3-2xl ^
   	    --apply-immediately
   ```

   Output from this command looks something like this (JSON format).

   ```
   {
   	"ReplicationGroup": {
   		"Status": "available",
   		"Description": "Some description",
   		"NodeGroups": [{
   			"Status": "available",
   			"NodeGroupMembers": [{
   					"CurrentRole": "primary",
   					"PreferredAvailabilityZone": "us-west-2b",
   					"CacheNodeId": "0001",
   					"ReadEndpoint": {
   						"Port": 6379,
   						"Address": "my-repl-group-001.8fdx4s.0001.usw2.cache.amazonaws.com"
   					},
   					"CacheClusterId": "my-repl-group-001"
   				},
   				{
   					"CurrentRole": "replica",
   					"PreferredAvailabilityZone": "us-west-2c",
   					"CacheNodeId": "0001",
   					"ReadEndpoint": {
   						"Port": 6379,
   						"Address": "my-repl-group-002.8fdx4s.0001.usw2.cache.amazonaws.com"
   					},
   					"CacheClusterId": "my-repl-group-002"
   				}
   			],
   			"NodeGroupId": "0001",
   			"PrimaryEndpoint": {
   				"Port": 6379,
   				"Address": "my-repl-group.8fdx4s.ng.0001.usw2.cache.amazonaws.com"
   			}
   		}],
   		"ReplicationGroupId": "my-repl-group",
   		"SnapshotRetentionLimit": 1,
   		"AutomaticFailover": "disabled",
   		"SnapshotWindow": "12:00-13:00",
   		"SnapshottingClusterId": "my-repl-group-002",
   		"MemberClusters": [
   			"my-repl-group-001",
   			"my-repl-group-002"
   		],
   		"PendingModifiedValues": {}
   	}
   }
   ```

   For more information, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, monitor the status of the replication group using the AWS CLI `describe-replication-groups` command with the following parameter. While the status is still *modifying*, you might see less than 1 second of downtime for nodes running version 5.0.6 and above, and a brief interruption of reads and writes from the primary node for older versions while the DNS entry is updated.
   + `--replication-group-id` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   For Linux, macOS, or Unix:

   ```
   aws elasticache describe-replication-groups \
   	    --replication-group-id my-replication-group
   ```

   For Windows:

   ```
   aws elasticache describe-replication-groups ^
   	    --replication-group-id my-replication-group
   ```

   For more information, see [describe-replication-groups](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-replication-groups.html) in the *AWS CLI Reference*.
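
Steps 1 and 2 can be combined into a guard that confirms the target node type is actually a permitted scale-up target before you modify the group. This is a sketch assuming a configured AWS CLI; `can_scale_up_to` is an illustrative helper name.

```
can_scale_up_to() {
  # Succeed only if the target type appears in ScaleUpModifications.
  # $1 = replication group ID, $2 = candidate node type.
  local group_id="$1" target="$2"
  aws elasticache list-allowed-node-type-modifications \
      --replication-group-id "$group_id" \
      --query 'ScaleUpModifications' --output text | tr '\t' '\n' | grep -qx "$target"
}

# Usage:
# can_scale_up_to my-repl-group cache.m3.xlarge &&
#     aws elasticache modify-replication-group \
#         --replication-group-id my-repl-group \
#         --cache-node-type cache.m3.xlarge \
#         --apply-immediately
```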

### Scaling up a Valkey or Redis OSS replication group (ElastiCache API)
<a name="Scaling.RedisReplGrps.ScaleUp.API"></a>

The following process scales your replication group from its current node type to a new, larger node type using the ElastiCache API. For Valkey 7.2 and above or Redis OSS 5.0.5 and above, you can scale auto failover enabled clusters while the cluster continues to stay online and serve incoming requests. On version Redis OSS 4.0.10 and below, you may notice a brief interruption of reads and writes on previous versions from the primary node while the DNS entry is updated.

The amount of time it takes to scale up to a larger node type varies, depending upon your node type and the amount of data in your current cluster.

**To scale up a Valkey or Redis OSS Replication Group (ElastiCache API)**

1. Determine which node types you can scale up to using the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a specific replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Scale your current replication group up to the new node type using the `ModifyReplicationGroup` ElastiCache API action and with the following parameters.
   + `ReplicationGroupId` – the name of the replication group.
   + `CacheNodeType` – the new, larger node type of the clusters in this replication group. This value must be one of the instance types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to cause the scale-up process to be applied immediately. To postpone the scale-up process to the next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyReplicationGroup
   	   &ApplyImmediately=true
   	   &CacheNodeType=cache.m3.2xlarge
   	   &CacheParameterGroupName=redis32-m3-2xl
   	   &ReplicationGroupId=myReplGroup
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20141201T220302Z
   	   &Version=2014-12-01
   	   &X-Amz-Algorithm=AWS4-HMAC-SHA256
   	   &X-Amz-Date=20141201T220302Z
   	   &X-Amz-SignedHeaders=Host
   	   &X-Amz-Expires=20141201T220302Z
   	   &X-Amz-Credential=<credential>
   	   &X-Amz-Signature=<signature>
   ```

   For more information, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) in the *Amazon ElastiCache API Reference*.

1. If you used `ApplyImmediately=true`, monitor the status of the replication group using the ElastiCache API `DescribeReplicationGroups` action with the following parameter. When the status changes from *modifying* to *available*, you can begin writing to your new, scaled-up replication group.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=DescribeReplicationGroups
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) in the *Amazon ElastiCache API Reference*.

## Scaling down Valkey or Redis OSS clusters with replicas
<a name="Scaling.RedisReplGrps.ScaleDown"></a>

The following sections walk you through how to scale a Valkey or Redis OSS (cluster mode disabled) cluster with replica nodes down to a smaller node type. Ensuring that the new, smaller node type is large enough to accommodate all the data and overhead is very important to success. For more information, see [Ensuring you have enough memory to make a Valkey or Redis OSS snapshot](BestPractices.BGSAVE.md).

**Note**  
For clusters running the r6gd node type, you can only scale to node sizes within the r6gd node family.

**Important**  
If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, before you begin scaling be sure that you have a custom parameter group that reserves the correct amount of memory for your new node type. Alternatively, you can modify a custom parameter group so that it uses `reserved-memory-percent` and use that parameter group for your new cluster.  
If you're using `reserved-memory-percent`, doing this is not necessary.   
For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).


### Scaling down a Valkey or Redis OSS cluster with replicas (Console)
<a name="Scaling.RedisReplGrps.ScaleDown.CON"></a>

The following process scales your Valkey or Redis OSS cluster with replica nodes to a smaller node type using the ElastiCache console.

**To scale down a Valkey or Redis OSS cluster with replica nodes (console)**

1. Ensure that the smaller node type is adequate for your data and overhead needs. 

1. If your parameter group uses `reserved-memory` to set aside memory for Valkey or Redis OSS overhead, ensure that you have a custom parameter group to set aside the correct amount of memory for your new node type.

   Alternatively, you can modify your custom parameter group to use `reserved-memory-percent`. For more information, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).

1. Sign in to the AWS Management Console and open the ElastiCache console at [ https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the list of clusters, choose the cluster you want to scale down. This cluster must be running the Valkey or Redis OSS engine and not the clustered Valkey or Redis OSS engine.

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale down to from the **Node type** list.

   1. If you're using `reserved-memory` to manage your memory, from the **Parameter Group** list, choose the custom parameter group that reserves the correct amount of memory for your new node type.

1. If you want to perform the scale-down process right away, choose the **Apply immediately** check box. If you don't choose this check box, the scale-down process is performed during the cluster's next maintenance window.

1. Choose **Modify**.

1. When the cluster’s status changes from *modifying* to *available*, your cluster has scaled to the new node type. There is no need to update the endpoints in your application.

### Scaling down a Valkey or Redis OSS replication group (AWS CLI)
<a name="Scaling.RedisReplGrps.ScaleDown.CLI"></a>

The following process scales your replication group from its current node type to a new, smaller node type using the AWS CLI. During this process, ElastiCache updates the DNS entries so they point to the new nodes. Because of this, you don't have to update the endpoints in your application. For Valkey 7.2 and above or Redis OSS 5.0.5 and above, you can scale auto failover enabled clusters while the cluster continues to stay online and serve incoming requests. On version 4.0.10 and below, you might notice a brief interruption of reads and writes from the primary node while the DNS entry is updated.

However, reads from the read replica clusters continue uninterrupted.

The amount of time it takes to scale down to a smaller node type varies, depending upon your node type and the amount of data in your current cluster.

**To scale down a Valkey or Redis OSS Replication Group (AWS CLI)**

1. Determine which node types you can scale down to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.
   + `--replication-group-id` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --replication-group-id my-repl-group
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --replication-group-id my-repl-group
   ```

   Output from this operation looks something like this (JSON format).

   ```
   {
   	    "ScaleDownModifications": [
   	        "cache.m3.2xlarge", 
   	        "cache.m3.large", 
   	        "cache.m3.xlarge", 
   	        "cache.m4.10xlarge", 
   	        "cache.m4.2xlarge", 
   	        "cache.m4.4xlarge", 
   	        "cache.m4.large", 
   	        "cache.m4.xlarge", 
   	        "cache.r3.2xlarge", 
   	        "cache.r3.4xlarge", 
   	        "cache.r3.8xlarge", 
   	        "cache.r3.large", 
   	        "cache.r3.xlarge"
   	    ]
   	}
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Scale your current replication group down to the new node type using the AWS CLI `modify-replication-group` command with the following parameters.
   + `--replication-group-id` – the name of the replication group.
   + `--cache-node-type` – the new, smaller node type of the clusters in this replication group. This value must be one of the instance types returned by the `list-allowed-node-type-modifications` command in the previous step.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-down process to be applied immediately. To postpone the scale-down operation to the next maintenance window, use `--no-apply-immediately`.

   For Linux, macOS, or Unix:

   ```
   aws elasticache modify-replication-group \
   	    --replication-group-id my-repl-group \
   	    --cache-node-type cache.t2.small  \
   	    --cache-parameter-group-name redis32-m3-2xl \
   	    --apply-immediately
   ```

   For Windows:

   ```
   aws elasticache modify-replication-group ^
   	    --replication-group-id my-repl-group ^
   	    --cache-node-type cache.t2.small  ^
   	    --cache-parameter-group-name redis32-m3-2xl ^
   	    --apply-immediately
   ```

   Output from this command looks something like this (JSON format).

   ```
   {"ReplicationGroup": {
   	        "Status": "available", 
   	        "Description": "Some description", 
   	        "NodeGroups": [
   	            {
   	                "Status": "available", 
   	                "NodeGroupMembers": [
   	                    {
   	                        "CurrentRole": "primary", 
   	                        "PreferredAvailabilityZone": "us-west-2b", 
   	                        "CacheNodeId": "0001", 
   	                        "ReadEndpoint": {
   	                            "Port": 6379, 
   	                            "Address": "my-repl-group-001.8fdx4s.0001.usw2.cache.amazonaws.com"
   	                        }, 
   	                        "CacheClusterId": "my-repl-group-001"
   	                    }, 
   	                    {
   	                        "CurrentRole": "replica", 
   	                        "PreferredAvailabilityZone": "us-west-2c", 
   	                        "CacheNodeId": "0001", 
   	                        "ReadEndpoint": {
   	                            "Port": 6379, 
   	                            "Address": "my-repl-group-002.8fdx4s.0001.usw2.cache.amazonaws.com"
   	                        }, 
   	                        "CacheClusterId": "my-repl-group-002"
   	                    }
   	                ], 
   	                "NodeGroupId": "0001", 
   	                "PrimaryEndpoint": {
   	                    "Port": 6379, 
   	                    "Address": "my-repl-group.8fdx4s.ng.0001.usw2.cache.amazonaws.com"
   	                }
   	            }
   	        ], 
   	        "ReplicationGroupId": "my-repl-group", 
   	        "SnapshotRetentionLimit": 1, 
   	        "AutomaticFailover": "disabled", 
   	        "SnapshotWindow": "12:00-13:00", 
   	        "SnapshottingClusterId": "my-repl-group-002", 
   	        "MemberClusters": [
   	            "my-repl-group-001", 
   	            "my-repl-group-002"
   	        ], 
   	        "PendingModifiedValues": {}
   	    }
   	}
   ```

   For more information, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, monitor the status of the replication group using the AWS CLI `describe-replication-groups` command with the following parameter. When the status changes from *modifying* to *available*, you can begin writing to your new, scaled-down replication group.
   + `--replication-group-id` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   For Linux, macOS, or Unix:

   ```
   aws elasticache describe-replication-groups \
   	    --replication-group-id my-replication-group
   ```

   For Windows:

   ```
   aws elasticache describe-replication-groups ^
   	    --replication-group-id my-replication-group
   ```

   For more information, see [describe-replication-groups](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-replication-groups.html) in the *AWS CLI Reference*.
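The wait-for-*available* step lends itself to a simple polling loop. The following Python sketch shows the idea; `describe_fn` is a placeholder for however you fetch the status (for example, by invoking `describe-replication-groups` through an SDK or subprocess) and is an assumption of this example, not part of any AWS API.

```python
import time

def wait_until_available(describe_fn, replication_group_id,
                         poll_seconds=15, max_attempts=80):
    """Poll until the replication group status is 'available'.

    describe_fn(replication_group_id) -> str is a stand-in for whatever
    client call returns the group's current status (hypothetical hook).
    """
    for _ in range(max_attempts):
        if describe_fn(replication_group_id) == "available":
            return True
        time.sleep(poll_seconds)
    return False

# Example with a stubbed describe function:
statuses = iter(["modifying", "modifying", "available"])
assert wait_until_available(lambda _id: next(statuses),
                            "my-repl-group", poll_seconds=0)
```

With default settings the loop gives up after roughly 20 minutes; tune `poll_seconds` and `max_attempts` to how long your modifications usually take.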

### Scaling down a Valkey or Redis OSS replication group (ElastiCache API)
<a name="Scaling.RedisReplGrps.ScaleDown.API"></a>

The following process scales your replication group from its current node type to a new, smaller node type using the ElastiCache API. During this process, ElastiCache updates the DNS entries so they point to the new nodes, so you don't have to update the endpoints in your application. For Valkey 7.2 and above or Redis OSS 5.0.5 and above, you can scale auto-failover enabled clusters while the cluster stays online and continues to serve incoming requests. On Redis OSS version 4.0.10 and below, you might notice a brief interruption of reads and writes from the primary node while the DNS entry is updated. However, reads from the read replica clusters continue uninterrupted.

The amount of time it takes to scale down to a smaller node type varies, depending upon your node type and the amount of data in your current cluster.

**To scale down a Valkey or Redis OSS Replication Group (ElastiCache API)**

1. Determine which node types you can scale down to using the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a specific replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Scale your current replication group down to the new node type using the `ModifyReplicationGroup` ElastiCache API action with the following parameters.
   + `ReplicationGroupId` – the name of the replication group.
   + `CacheNodeType` – the new, smaller node type of the clusters in this replication group. This value must be one of the instance types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent`, you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to apply the scale-down operation immediately. To postpone the operation to the next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyReplicationGroup
   	   &ApplyImmediately=true
   	   &CacheNodeType=cache.m3.2xlarge
   	   &CacheParameterGroupName=redis32-m3-2xl
   	   &ReplicationGroupId=myReplGroup
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20141201T220302Z
   	   &Version=2014-12-01
   	   &X-Amz-Algorithm=AWS4-HMAC-SHA256
   	   &X-Amz-Date=20141201T220302Z
   	   &X-Amz-SignedHeaders=Host
   	   &X-Amz-Expires=20141201T220302Z
   	   &X-Amz-Credential=<credential>
   	   &X-Amz-Signature=<signature>
   ```

   For more information, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) in the *Amazon ElastiCache API Reference*.

1. If you used `ApplyImmediately=true`, monitor the status of the replication group using the ElastiCache API `DescribeReplicationGroups` action with the following parameter. When the status changes from *modifying* to *available*, you can begin writing to your new, scaled-down replication group.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=DescribeReplicationGroups
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) in the *Amazon ElastiCache API Reference*.
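Before calling `ModifyReplicationGroup`, it can help to validate the requested node type against the list returned by `ListAllowedNodeTypeModifications` in step 1. A minimal guard in Python; the function name and list handling are illustrative, not part of the API:

```python
def choose_scale_down_target(preferred, allowed_scale_down_types):
    """Return `preferred` if it is an allowed scale-down target,
    otherwise raise. `allowed_scale_down_types` is the list of node
    types returned by ListAllowedNodeTypeModifications."""
    if preferred in allowed_scale_down_types:
        return preferred
    raise ValueError(
        f"{preferred} is not an allowed scale-down target; "
        f"allowed types: {sorted(allowed_scale_down_types)}")

# Example (node types are placeholders):
allowed = ["cache.t2.small", "cache.t2.medium"]
target = choose_scale_down_target("cache.t2.small", allowed)
```

Failing fast here avoids submitting a modification that the service would reject.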

## Increasing read capacity
<a name="Scaling.RedisReplGrps.ScaleOut"></a>

To increase read capacity, add read replicas (up to a maximum of five) to your Valkey or Redis OSS replication group.

You can scale your Valkey or Redis OSS cluster’s read capacity using the ElastiCache console, the AWS CLI, or the ElastiCache API. For more information, see [Adding a read replica for Valkey or Redis OSS (Cluster Mode Disabled)](Replication.AddReadReplica.md).

## Decreasing read capacity
<a name="Scaling.RedisReplGrps.ScaleIn"></a>

To decrease read capacity, delete one or more read replicas from your Valkey or Redis OSS cluster with replicas (called *replication group* in the API/CLI). If the cluster is Multi-AZ with automatic failover enabled, you cannot delete the last read replica without first disabling Multi-AZ. For more information, see [Modifying a replication group](Replication.Modify.md).

For more information, see [Deleting a read replica for Valkey or Redis OSS (Cluster Mode Disabled)](Replication.RemoveReadReplica.md).

# Scaling Valkey or Redis OSS (Cluster Mode Enabled) clusters
<a name="scaling-redis-cluster-mode-enabled"></a>

As demand on your clusters changes, you might decide to improve performance or reduce costs by changing the number of shards in your Valkey or Redis OSS (cluster mode enabled) cluster. We recommend using online horizontal scaling to do so, because it allows your cluster to continue serving requests during the scaling process.

Conditions under which you might decide to rescale your cluster include the following:
+ **Memory pressure:**

  If the nodes in your cluster are under memory pressure, you might decide to scale out so that you have more resources to better store data and serve requests.

  You can determine whether your nodes are under memory pressure by monitoring the following metrics: *FreeableMemory*, *SwapUsage*, and *BytesUsedForCache*.
+ **CPU or network bottleneck:**

  If your cluster is experiencing high latency or reduced throughput, you might need to scale out to resolve the issue.

  You can monitor your latency and throughput levels by monitoring the following metrics: *CPUUtilization*, *NetworkBytesIn*, *NetworkBytesOut*, *CurrConnections*, and *NewConnections*.
+ **Your cluster is over-scaled:**

  Current demand on your cluster is such that scaling in doesn't hurt performance and reduces your costs.

  You can monitor your cluster's use to determine whether or not you can safely scale in using the following metrics: *FreeableMemory*, *SwapUsage*, *BytesUsedForCache*, *CPUUtilization*, *NetworkBytesIn*, *NetworkBytesOut*, *CurrConnections*, and *NewConnections*.
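As a rough illustration, a scheduled job could compare recent metric averages against thresholds you choose for your workload and suggest a scaling direction. The metric names match the CloudWatch metrics above; the threshold values and the `MaxMemoryBytes` field are purely hypothetical placeholders:

```python
def scaling_recommendation(metrics, cpu_high=70.0, cpu_low=20.0,
                           mem_high=0.75, mem_low=0.25):
    """Suggest 'scale-out', 'scale-in', or 'no-change' from averaged
    metrics. Thresholds here are placeholders; tune them per workload.
    metrics: dict with CPUUtilization (percent), BytesUsedForCache,
    and MaxMemoryBytes (your node's usable memory, supplied by you)."""
    cpu = metrics["CPUUtilization"]
    mem_fraction = metrics["BytesUsedForCache"] / metrics["MaxMemoryBytes"]
    if cpu > cpu_high or mem_fraction > mem_high:
        return "scale-out"
    if cpu < cpu_low and mem_fraction < mem_low:
        return "scale-in"
    return "no-change"
```

This only suggests a direction; confirm any scale-in decision against the full metric list above before acting.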

**Performance Impact of Scaling**  
When you scale using the offline process, your cluster is offline for a significant portion of the process and thus unable to serve requests. When you scale using the online method, your cluster continues to serve requests throughout the scaling operation. However, because scaling is a compute-intensive operation, there is some degradation in performance. How much degradation you experience depends upon your normal CPU utilization and your data.

There are two ways to scale your Valkey or Redis OSS (cluster mode enabled) cluster: horizontal and vertical scaling.
+ Horizontal scaling allows you to change the number of node groups (shards) in the replication group by adding or removing node groups (shards). The online resharding process allows scaling in/out while the cluster continues serving incoming requests.

  To configure the slots in your new cluster differently than they were in the old cluster, you must use the offline method.
+ Vertical scaling allows you to change the node type to resize the cluster. Online vertical scaling allows scaling up/down while the cluster continues serving incoming requests.

If you are reducing the size and memory capacity of the cluster, by either scaling in or scaling down, ensure that the new configuration has sufficient memory for your data and Valkey or Redis OSS overhead. 
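That sizing check can be sketched as a back-of-the-envelope calculation. The 25% headroom below stands in for engine overhead (for example, a `reserved-memory-percent` setting); it and the byte figures are assumptions to adapt to your own cluster:

```python
def fits_after_scaling(dataset_bytes, node_memory_bytes, shard_count,
                       reserved_fraction=0.25):
    """True if the dataset fits in the scaled configuration after
    setting aside `reserved_fraction` of each node's memory for
    engine overhead (illustrative default)."""
    usable = shard_count * node_memory_bytes * (1 - reserved_fraction)
    return dataset_bytes <= usable

# A 6 GiB dataset on two 4 GiB shards with 25% reserved exactly fits.
GIB = 2**30
assert fits_after_scaling(6 * GIB, 4 * GIB, 2)
assert not fits_after_scaling(7 * GIB, 4 * GIB, 2)
```

Compare `dataset_bytes` against your observed *BytesUsedForCache*, and leave room for growth rather than sizing to the exact current value.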

For more information, see [Choosing your node size](CacheNodes.SelectSize.md).

**Contents**
+ [Offline resharding for Valkey or Redis OSS (cluster mode enabled)](#redis-cluster-resharding-offline)
+ [Online resharding for Valkey or Redis OSS (cluster mode enabled)](#redis-cluster-resharding-online)
  + [Adding shards with online resharding](#redis-cluster-resharding-online-add)
  + [Removing shards with online resharding](#redis-cluster-resharding-online-remove)
    + [Removing shards (Console)](#redis-cluster-resharding-online-remove-console)
    + [Removing shards (AWS CLI)](#redis-cluster-resharding-online-remove-cli)
    + [Removing shards (ElastiCache API)](#redis-cluster-resharding-online-remove-api)
  + [Online shard rebalancing](#redis-cluster-resharding-online-rebalance)
    + [Online Shard Rebalancing (Console)](#redis-cluster-resharding-online-rebalance-console)
    + [Online shard rebalancing (AWS CLI)](#redis-cluster-resharding-online-rebalance-cli)
    + [Online shard rebalancing (ElastiCache API)](#redis-cluster-resharding-online-rebalance-api)
+ [Online vertical scaling by modifying node type](redis-cluster-vertical-scaling.md)
  + [Online scaling up](redis-cluster-vertical-scaling.md#redis-cluster-vertical-scaling-scaling-up)
    + [Scaling up Valkey or Redis OSS clusters (Console)](redis-cluster-vertical-scaling.md#redis-cluster-vertical-scaling-console)
    + [Scaling up Valkey or Redis OSS clusters (AWS CLI)](redis-cluster-vertical-scaling.md#Scaling.RedisStandalone.ScaleUp.CLI)
    + [Scaling up Valkey or Redis OSS clusters (ElastiCache API)](redis-cluster-vertical-scaling.md#VeticalScaling.RedisReplGrps.ScaleUp.API)
  + [Online scaling down](redis-cluster-vertical-scaling.md#redis-cluster-vertical-scaling-scaling-down)
    + [Scaling down Valkey or Redis OSS clusters (Console)](redis-cluster-vertical-scaling.md#redis-cluster-vertical-scaling-down-console)
    + [Scaling down Valkey or Redis OSS clusters (AWS CLI)](redis-cluster-vertical-scaling.md#Scaling.RedisStandalone.ScaleDown.CLI)
    + [Scaling down Valkey or Redis OSS clusters (ElastiCache API)](redis-cluster-vertical-scaling.md#Scaling.Vertical.ScaleDown.API)

## Offline resharding for Valkey or Redis OSS (cluster mode enabled)
<a name="redis-cluster-resharding-offline"></a>

The main advantage you get from offline shard reconfiguration is that you can do more than merely add or remove shards from your replication group. When you reshard and rebalance offline, in addition to changing the number of shards in your replication group, you can do the following:
+ Change the node type of your replication group.
+ Specify the Availability Zone for each node in the replication group.
+ Upgrade to a newer engine version.
+ Specify the number of replica nodes in each shard independently.
+ Specify the keyspace for each shard.

**Note**  
Offline resharding is not supported on Valkey or Redis OSS clusters with data tiering enabled. For more information, see [Data tiering in ElastiCache](data-tiering.md).

The main disadvantage of offline shard reconfiguration is that your cluster is offline beginning with the restore portion of the process and continuing until you update the endpoints in your application. The length of time that your cluster is offline varies with the amount of data in your cluster.

**To reconfigure the shards of your Valkey or Redis OSS (cluster mode enabled) cluster offline**

1. Create a manual backup of your existing Valkey or Redis OSS cluster. For more information, see [Taking manual backups](backups-manual.md).

1. Create a new cluster by restoring from the backup. For more information, see [Restoring from a backup into a new cache](backups-restoring.md).

1. Update the endpoints in your application to the new cluster's endpoints. For more information, see [Finding connection endpoints in ElastiCache](Endpoints.md).

## Online resharding for Valkey or Redis OSS (cluster mode enabled)
<a name="redis-cluster-resharding-online"></a>

By using online resharding and shard rebalancing with ElastiCache Valkey 7.2 or newer, or Redis OSS version 3.2.10 or newer, you can scale your Valkey or Redis OSS (cluster mode enabled) cluster dynamically with no downtime. This approach means that your cluster can continue to serve requests even while scaling or rebalancing is in process.

You can do the following:
+ **Scale out** – Increase read and write capacity by adding shards (node groups) to your Valkey or Redis OSS (cluster mode enabled) cluster (replication group).

  If you add one or more shards to your replication group, the number of nodes in each new shard is the same as the number of nodes in the smallest of the existing shards.
+ **Scale in** – Reduce read and write capacity, and thereby costs, by removing shards from your Valkey or Redis OSS (cluster mode enabled) cluster.
+ **Rebalance** – Move the keyspaces among the shards in your Valkey or Redis OSS (cluster mode enabled) cluster so they are as equally distributed among the shards as possible.

You can't do the following:
+ **Configure shards independently:**

  You can't specify the keyspace for shards independently. To do this, you must use the offline process.

Currently, the following limitations apply to ElastiCache online resharding and rebalancing:
+ These processes require Valkey 7.2 and newer or Redis OSS 3.2.10 or newer. For information on upgrading your engine version, see [Version Management for ElastiCache](VersionManagement.md).
+ There are limitations with slots or keyspaces and large items:

  If any of the keys in a shard contain a large item, that key isn't migrated to a new shard when scaling out or rebalancing. This behavior can result in unbalanced shards.

  If any of the keys in a shard contain a large item (items greater than 256 MB after serialization), that shard isn't deleted when scaling in. This behavior can result in some shards not being deleted.
+ When scaling out, the number of nodes in any new shards equals the number of nodes in the smallest existing shard.
+ When scaling out, any tags that are common to all existing shards are copied to the new shards.
+ When scaling out a Global Data Store cluster, ElastiCache does not automatically replicate Functions from one of the existing nodes to the new node(s). We recommend loading your Functions in the new shard(s) after scaling out your cluster so that every shard has the same functions.

**Note**  
In ElastiCache for Valkey 7.2 and above, and ElastiCache for Redis OSS version 7 and above: When scaling out your cluster, ElastiCache will automatically replicate the Functions loaded in one of the existing nodes (selected at random) to the new node(s). If your application uses [Functions](https://valkey.io/topics/functions-intro/), we recommend loading all of your functions to all the shards before scaling out so that your cluster does not end up with different function definitions on different shards.

For more information, see [Online cluster resizing](best-practices-online-resharding.md).

You can horizontally scale or rebalance your Valkey or Redis OSS (cluster mode enabled) clusters using the AWS Management Console, the AWS CLI, and the ElastiCache API.
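Two of the rules above are easy to express in code: new shards get as many nodes as the smallest existing shard, and rebalancing aims for an even split of the 16,384-slot keyspace. The following sketch illustrates both; the helper names are ours, not ElastiCache APIs:

```python
SLOT_COUNT = 16384  # hash slots in a cluster-mode keyspace

def new_shard_node_count(existing_shard_sizes):
    """Node count ElastiCache gives each new shard on scale-out:
    the same as the smallest existing shard."""
    return min(existing_shard_sizes)

def balanced_slot_ranges(shard_count):
    """An even keyspace split, the target of online rebalancing.
    Returns one inclusive (start, end) slot range per shard."""
    base, extra = divmod(SLOT_COUNT, shard_count)
    ranges, start = [], 0
    for i in range(shard_count):
        size = base + (1 if i < extra else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For example, four shards each receive 4,096 slots, and a replication group whose smallest shard has two nodes gets two-node shards on scale-out.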

### Adding shards with online resharding
<a name="redis-cluster-resharding-online-add"></a>

You can add shards to your Valkey or Redis OSS (cluster mode enabled) cluster using the AWS Management Console, AWS CLI, or ElastiCache API. When you add shards to a Valkey or Redis OSS (cluster mode enabled) cluster, any tags on the existing shards are copied over to the new shards.

**Topics**
+ [Adding shards (Console)](#redis-cluster-resharding-online-add-console)
+ [Adding shards (AWS CLI)](#redis-cluster-resharding-online-add-cli)
+ [Adding shards (ElastiCache API)](#redis-cluster-resharding-online-add-api)

#### Adding shards (Console)
<a name="redis-cluster-resharding-online-add-console"></a>

You can use the AWS Management Console to add one or more shards to your Valkey or Redis OSS (cluster mode enabled) cluster. The following procedure describes the process.

**To add shards to your Valkey or Redis OSS (cluster mode enabled) cluster**

1. Open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. Locate the Valkey or Redis OSS (cluster mode enabled) cluster that you want to add shards to, and choose its name (not the box to the left of the name).
**Tip**  
Valkey or Redis OSS (cluster mode enabled) clusters show **Clustered Valkey** or **Clustered Redis OSS** in the **Mode** column.

1. Choose **Add shard**.

   1. For **Number of shards to be added**, choose the number of shards you want added to this cluster.

   1. For **Availability zone(s)**, choose either **No preference** or **Specify availability zones**.

   1. If you chose **Specify availability zones**, for each node in each shard, select the node's Availability Zone from the list of Availability Zones.

   1. Choose **Add**.

#### Adding shards (AWS CLI)
<a name="redis-cluster-resharding-online-add-cli"></a>

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by adding shards using the AWS CLI.

Use the following parameters with `modify-replication-group-shard-configuration`.

**Parameters**
+ `--apply-immediately` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `--replication-group-id` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `--node-group-count` – Required. Specifies the number of shards (node groups) to exist when the operation is completed. When adding shards, the value of `--node-group-count` must be greater than the current number of shards.

  Optionally, you can specify the Availability Zone for each node in the replication group using `--resharding-configuration`.
+ `--resharding-configuration` – Optional. A list of preferred Availability Zones for each node in each shard in the replication group. Use this parameter only if the value of `--node-group-count` is greater than the current number of shards. If this parameter is omitted when adding shards, Amazon ElastiCache selects the Availability Zones for the new nodes.

The following example reconfigures the keyspaces over four shards in a Valkey or Redis OSS (cluster mode enabled) cluster named `my-cluster`. The example also specifies the Availability Zone for each node in each shard. The operation begins immediately.

**Example - Adding Shards**  
For Linux, macOS, or Unix:  

```
aws elasticache modify-replication-group-shard-configuration \
    --replication-group-id my-cluster \
    --node-group-count 4 \
    --resharding-configuration \
        "PreferredAvailabilityZones=us-east-2a,us-east-2c" \
        "PreferredAvailabilityZones=us-east-2b,us-east-2a" \
        "PreferredAvailabilityZones=us-east-2c,us-east-2d" \
        "PreferredAvailabilityZones=us-east-2d,us-east-2c" \
    --apply-immediately
```
For Windows:  

```
aws elasticache modify-replication-group-shard-configuration ^
    --replication-group-id my-cluster ^
    --node-group-count 4 ^
    --resharding-configuration ^
        "PreferredAvailabilityZones=us-east-2a,us-east-2c" ^
        "PreferredAvailabilityZones=us-east-2b,us-east-2a" ^
        "PreferredAvailabilityZones=us-east-2c,us-east-2d" ^
        "PreferredAvailabilityZones=us-east-2d,us-east-2c" ^
    --apply-immediately
```

For more information, see [modify-replication-group-shard-configuration](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group-shard-configuration.html) in the AWS CLI documentation.
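If you script this call, assembling the argument list programmatically avoids shell-quoting mistakes across platforms. A sketch in Python; the zone lists mirror the example above and are illustrative:

```python
def resharding_cli_args(replication_group_id, zones_per_shard):
    """Build the argv for `aws elasticache
    modify-replication-group-shard-configuration` (scale-out case).
    zones_per_shard: one list of preferred AZs per desired shard."""
    args = ["aws", "elasticache",
            "modify-replication-group-shard-configuration",
            "--replication-group-id", replication_group_id,
            "--node-group-count", str(len(zones_per_shard)),
            "--resharding-configuration"]
    args += ["PreferredAvailabilityZones=" + ",".join(zones)
             for zones in zones_per_shard]
    args.append("--apply-immediately")
    return args
```

Pass the resulting list to `subprocess.run(args)` rather than joining it into a shell string, so Availability Zone values never need manual quoting.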

#### Adding shards (ElastiCache API)
<a name="redis-cluster-resharding-online-add-api"></a>

You can use the ElastiCache API to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster online by using the `ModifyReplicationGroupShardConfiguration` operation.

Use the following parameters with `ModifyReplicationGroupShardConfiguration`.

**Parameters**
+ `ApplyImmediately=true` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `ReplicationGroupId` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `NodeGroupCount` – Required. Specifies the number of shards (node groups) to exist when the operation is completed. When adding shards, the value of `NodeGroupCount` must be greater than the current number of shards.

  Optionally, you can specify the Availability Zone for each node in the replication group using `ReshardingConfiguration`.
+ `ReshardingConfiguration` – Optional. A list of preferred Availability Zones for each node in each shard in the replication group. Use this parameter only if the value of `NodeGroupCount` is greater than the current number of shards. If this parameter is omitted when adding shards, Amazon ElastiCache selects the Availability Zones for the new nodes.

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by adding shards using the ElastiCache API.

**Example - Adding Shards**  
The following example adds node groups to the Valkey or Redis OSS (cluster mode enabled) cluster `my-cluster`, so there are a total of four node groups when the operation completes. The example also specifies the Availability Zone for each node in each shard. The operation begins immediately.  

```
https://elasticache.us-east-2.amazonaws.com/
    ?Action=ModifyReplicationGroupShardConfiguration
    &ApplyImmediately=true
    &NodeGroupCount=4
    &ReplicationGroupId=my-cluster
    &ReshardingConfiguration.ReshardingConfiguration.1.PreferredAvailabilityZones.AvailabilityZone.1=us-east-2a 
    &ReshardingConfiguration.ReshardingConfiguration.1.PreferredAvailabilityZones.AvailabilityZone.2=us-east-2c 
    &ReshardingConfiguration.ReshardingConfiguration.2.PreferredAvailabilityZones.AvailabilityZone.1=us-east-2b 
    &ReshardingConfiguration.ReshardingConfiguration.2.PreferredAvailabilityZones.AvailabilityZone.2=us-east-2a 
    &ReshardingConfiguration.ReshardingConfiguration.3.PreferredAvailabilityZones.AvailabilityZone.1=us-east-2c 
    &ReshardingConfiguration.ReshardingConfiguration.3.PreferredAvailabilityZones.AvailabilityZone.2=us-east-2d 
    &ReshardingConfiguration.ReshardingConfiguration.4.PreferredAvailabilityZones.AvailabilityZone.1=us-east-2d 
    &ReshardingConfiguration.ReshardingConfiguration.4.PreferredAvailabilityZones.AvailabilityZone.2=us-east-2c 
    &Version=2015-02-02
    &SignatureVersion=4
    &SignatureMethod=HmacSHA256
    &Timestamp=20171002T192317Z
    &X-Amz-Credential=<credential>
```

For more information, see [ModifyReplicationGroupShardConfiguration](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroupShardConfiguration.html) in the ElastiCache API Reference.

### Removing shards with online resharding
<a name="redis-cluster-resharding-online-remove"></a>

You can remove shards from your Valkey or Redis OSS (cluster mode enabled) cluster using the AWS Management Console, AWS CLI, or ElastiCache API.

**Topics**
+ [Removing shards (Console)](#redis-cluster-resharding-online-remove-console)
+ [Removing shards (AWS CLI)](#redis-cluster-resharding-online-remove-cli)
+ [Removing shards (ElastiCache API)](#redis-cluster-resharding-online-remove-api)

#### Removing shards (Console)
<a name="redis-cluster-resharding-online-remove-console"></a>

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by removing shards using the AWS Management Console.

Before removing node groups (shards) from your replication group, ElastiCache makes sure that all your data will fit in the remaining shards. If the data will fit, the specified shards are deleted from the replication group as requested. If the data won't fit in the remaining node groups, the process is terminated and the replication group is left with the same node group configuration as before the request was made.

You can use the AWS Management Console to remove one or more shards from your Valkey or Redis OSS (cluster mode enabled) cluster. You cannot remove all the shards in a replication group. Instead, you must delete the replication group. For more information, see [Deleting a replication group](Replication.DeletingRepGroup.md). The following procedure describes the process for deleting one or more shards.
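You can approximate that fit check yourself before requesting the removal, using per-shard *BytesUsedForCache* figures. A rough Python sketch; the 25% overhead reserve and all sizes are assumptions, and this does not replace the validation ElastiCache performs:

```python
def remaining_shards_can_hold_data(bytes_used_per_shard, shards_to_remove,
                                   node_memory_bytes,
                                   reserved_fraction=0.25):
    """Rough pre-check: does all the data fit in the shards that will
    remain after removal? bytes_used_per_shard maps shard ID -> bytes."""
    total = sum(bytes_used_per_shard.values())
    remaining = set(bytes_used_per_shard) - set(shards_to_remove)
    usable = len(remaining) * node_memory_bytes * (1 - reserved_fraction)
    return total <= usable

# Three 1 GiB shards on 4 GiB nodes: removing one shard still fits.
GIB = 2**30
used = {"0001": 1 * GIB, "0002": 1 * GIB, "0003": 1 * GIB}
assert remaining_shards_can_hold_data(used, {"0003"}, 4 * GIB)
```

Running a check like this first saves waiting on a request that the service would terminate for insufficient capacity.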

**To remove shards from your Valkey or Redis OSS (cluster mode enabled) cluster**

1. Open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. Locate the Valkey or Redis OSS (cluster mode enabled) cluster that you want to remove shards from, and choose its name (not the box to the left of the name).
**Tip**  
Valkey or Redis OSS (cluster mode enabled) clusters have a value of 1 or greater in the **Shards** column.

1. From the list of shards, choose the box to the left of the name of each shard that you want to delete.

1. Choose **Delete shard**.

#### Removing shards (AWS CLI)
<a name="redis-cluster-resharding-online-remove-cli"></a>

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by removing shards using the AWS CLI.

**Important**  
Before removing node groups (shards) from your replication group, ElastiCache makes sure that all your data will fit in the remaining shards. If the data will fit, the specified shards (`--node-groups-to-remove`) are deleted from the replication group as requested and their keyspaces mapped into the remaining shards. If the data will not fit in the remaining node groups, the process is terminated and the replication group is left with the same node group configuration as before the request was made.

You can use the AWS CLI to remove one or more shards from your Valkey or Redis OSS (cluster mode enabled) cluster. You cannot remove all the shards in a replication group. Instead, you must delete the replication group. For more information, see [Deleting a replication group](Replication.DeletingRepGroup.md).

Use the following parameters with `modify-replication-group-shard-configuration`.

**Parameters**
+ `--apply-immediately` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `--replication-group-id` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `--node-group-count` – Required. Specifies the number of shards (node groups) to exist when the operation is completed. When removing shards, the value of `--node-group-count` must be less than the current number of shards.
+ `--node-groups-to-remove` – Required when `--node-group-count` is less than the current number of node groups (shards). A list of shard (node group) IDs to remove from the replication group.

The following procedure describes the process for deleting one or more shards.

**Example - Removing Shards**  
The following example removes two node groups from the Valkey or Redis OSS (cluster mode enabled) cluster `my-cluster`, so there are a total of two node groups when the operation completes. The keyspaces from the removed shards are distributed evenly over the remaining shards.  
For Linux, macOS, or Unix:  

```
aws elasticache modify-replication-group-shard-configuration \
    --replication-group-id my-cluster \
    --node-group-count 2 \
    --node-groups-to-remove "0002" "0003" \
    --apply-immediately
```
For Windows:  

```
aws elasticache modify-replication-group-shard-configuration ^
    --replication-group-id my-cluster ^
    --node-group-count 2 ^
    --node-groups-to-remove "0002" "0003" ^
    --apply-immediately
```

#### Removing shards (ElastiCache API)
<a name="redis-cluster-resharding-online-remove-api"></a>

You can use the ElastiCache API to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster online by using the `ModifyReplicationGroupShardConfiguration` operation.

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by removing shards using the ElastiCache API.

**Important**  
Before removing node groups (shards) from your replication group, ElastiCache makes sure that all your data will fit in the remaining shards. If the data will fit, the specified shards (`NodeGroupsToRemove`) are deleted from the replication group as requested and their keyspaces mapped into the remaining shards. If the data will not fit in the remaining node groups, the process is terminated and the replication group is left with the same node group configuration as before the request was made.

You can use the ElastiCache API to remove one or more shards from your Valkey or Redis OSS (cluster mode enabled) cluster. You cannot remove all the shards in a replication group. Instead, you must delete the replication group. For more information, see [Deleting a replication group](Replication.DeletingRepGroup.md).

Use the following parameters with `ModifyReplicationGroupShardConfiguration`.

**Parameters**
+ `ApplyImmediately=true` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `ReplicationGroupId` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `NodeGroupCount` – Required. Specifies the number of shards (node groups) to exist when the operation is completed. When removing shards, the value of `NodeGroupCount` must be less than the current number of shards.
+ `NodeGroupsToRemove` – Required when `NodeGroupCount` is less than the current number of node groups (shards). A list of shard (node group) IDs to remove from the replication group.

The following procedure describes the process for deleting one or more shards.

**Example - Removing Shards**  
The following example removes two node groups from the Valkey or Redis OSS (cluster mode enabled) cluster `my-cluster`, so there are a total of two node groups when the operation completes. The keyspaces from the removed shards are distributed evenly over the remaining shards.  

```
https://elasticache.us-east-2.amazonaws.com/
    ?Action=ModifyReplicationGroupShardConfiguration
    &ApplyImmediately=true
    &NodeGroupCount=2
    &ReplicationGroupId=my-cluster
    &NodeGroupsToRemove.member.1=0002
    &NodeGroupsToRemove.member.2=0003
    &Version=2015-02-02
    &SignatureVersion=4
    &SignatureMethod=HmacSHA256
    &Timestamp=20171002T192317Z
    &X-Amz-Credential=<credential>
```

### Online shard rebalancing
<a name="redis-cluster-resharding-online-rebalance"></a>

You can rebalance shards in your Valkey or Redis OSS (cluster mode enabled) cluster using the AWS Management Console, AWS CLI, or ElastiCache API.

**Topics**
+ [Online Shard Rebalancing (Console)](#redis-cluster-resharding-online-rebalance-console)
+ [Online shard rebalancing (AWS CLI)](#redis-cluster-resharding-online-rebalance-cli)
+ [Online shard rebalancing (ElastiCache API)](#redis-cluster-resharding-online-rebalance-api)

#### Online Shard Rebalancing (Console)
<a name="redis-cluster-resharding-online-rebalance-console"></a>

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by rebalancing shards using the AWS Management Console.

**To rebalance the keyspaces among the shards on your Valkey or Redis OSS (cluster mode enabled) cluster**

1. Open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. Choose the name, not the box to the left of the name, of the Valkey or Redis OSS (cluster mode enabled) cluster that you want to rebalance.
**Tip**  
Valkey or Redis OSS (cluster mode enabled) clusters have a value of 1 or greater in the **Shards** column.

1. Choose **Rebalance**.

1. When prompted, choose **Rebalance**. You might see a message similar to this one: *Slots in the replication group are uniformly distributed. Nothing to do. (Service: AmazonElastiCache; Status Code: 400; Error Code: InvalidReplicationGroupState; Request ID: 2246cebd-9721-11e7-8d5b-e1b0f086c8cf)*. If you do, choose **Cancel**.

#### Online shard rebalancing (AWS CLI)
<a name="redis-cluster-resharding-online-rebalance-cli"></a>

Use the following parameters with `modify-replication-group-shard-configuration`.

**Parameters**
+ `--apply-immediately` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `--replication-group-id` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `--node-group-count` – Required. To rebalance the keyspaces across all shards in the cluster, this value must be the same as the current number of shards.

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by rebalancing shards using the AWS CLI.

**Example - Rebalancing the Shards in a Cluster**  
The following example rebalances the slots in the Valkey or Redis OSS (cluster mode enabled) cluster `my-cluster` so that the slots are distributed as equally as possible. The value of `--node-group-count` (`4`) is the number of shards currently in the cluster.  
For Linux, macOS, or Unix:  

```
aws elasticache modify-replication-group-shard-configuration \
    --replication-group-id my-cluster \
    --node-group-count 4 \
    --apply-immediately
```
For Windows:  

```
aws elasticache modify-replication-group-shard-configuration ^
    --replication-group-id my-cluster ^
    --node-group-count 4 ^
    --apply-immediately
```
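
A rebalance request is distinguished from a resharding request only by the shard count: `--node-group-count` must equal the number of shards already in the cluster. The following Python sketch (the function name is hypothetical) captures that rule.

```python
def build_rebalance_request(replication_group_id, current_shard_count):
    """Build parameters for an in-place rebalance of an existing cluster.

    For a pure rebalance, NodeGroupCount must equal the current number of
    shards; any other value makes the call an add- or remove-shards operation.
    """
    return {
        "ReplicationGroupId": replication_group_id,
        "NodeGroupCount": current_shard_count,
        "ApplyImmediately": True,
    }

# Matches the my-cluster example above, which currently has four shards.
rebalance = build_rebalance_request("my-cluster", 4)
print(rebalance["NodeGroupCount"])  # 4
```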

#### Online shard rebalancing (ElastiCache API)
<a name="redis-cluster-resharding-online-rebalance-api"></a>

You can use the ElastiCache API to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster online by using the `ModifyReplicationGroupShardConfiguration` operation.

Use the following parameters with `ModifyReplicationGroupShardConfiguration`.

**Parameters**
+ `ApplyImmediately=true` – Required. Specifies the shard reconfiguration operation is to be started immediately.
+ `ReplicationGroupId` – Required. Specifies which replication group (cluster) the shard reconfiguration operation is to be performed on.
+ `NodeGroupCount` – Required. To rebalance the keyspaces across all shards in the cluster, this value must be the same as the current number of shards.

The following process describes how to reconfigure the shards in your Valkey or Redis OSS (cluster mode enabled) cluster by rebalancing the shards using the ElastiCache API.

**Example - Rebalancing a Cluster**  
The following example rebalances the slots in the Valkey or Redis OSS (cluster mode enabled) cluster `my-cluster` so that the slots are distributed as equally as possible. The value of `NodeGroupCount` (`4`) is the number of shards currently in the cluster.  

```
https://elasticache.us-east-2.amazonaws.com/
    ?Action=ModifyReplicationGroupShardConfiguration
    &ApplyImmediately=true
    &NodeGroupCount=4
    &ReplicationGroupId=my-cluster
    &Version=2015-02-02
    &SignatureVersion=4
    &SignatureMethod=HmacSHA256
    &Timestamp=20171002T192317Z
    &X-Amz-Credential=<credential>
```

# Online vertical scaling by modifying node type
<a name="redis-cluster-vertical-scaling"></a>

By using online vertical scaling with Valkey version 7.2 or newer, or Redis OSS version 3.2.10 or newer, you can scale your Valkey or Redis OSS clusters dynamically with minimal downtime. This allows your Valkey or Redis OSS cluster to serve requests even while scaling.

**Note**  
Scaling is not supported between a data tiering cluster (for example, a cluster using an r6gd node type) and a cluster that does not use data tiering (for example, a cluster using an r6g node type). For more information, see [Data tiering in ElastiCache](data-tiering.md).

You can do the following:
+ **Scale up** – Increase read and write capacity by adjusting the node type of your Valkey or Redis OSS cluster to use a larger node type.

  ElastiCache dynamically resizes your cluster while remaining online and serving requests.
+ **Scale down** – Reduce read and write capacity by adjusting the node type down to use a smaller node. Again, ElastiCache dynamically resizes your cluster while remaining online and serving requests. In this case, you reduce costs by downsizing the node.

**Note**  
The scale-up and scale-down processes rely on creating clusters with newly selected node types and synchronizing the new nodes with the previous ones. To ensure a smooth scale-up or scale-down flow, do the following:  
Ensure that you have sufficient ENI (Elastic Network Interface) capacity. If scaling down, ensure that the smaller node type has sufficient memory to absorb expected traffic.  
For best practices on memory management, see [Managing reserved memory for Valkey and Redis OSS](redis-memory-management.md).  
Although the vertical scaling process is designed to remain fully online, it relies on synchronizing data between the old node and the new node. We recommend that you initiate scaling during hours when you expect data traffic to be at its minimum.  
Test your application behavior during scaling in a staging environment, if possible.

**Contents**
+ [Online scaling up](#redis-cluster-vertical-scaling-scaling-up)
  + [Scaling up Valkey or Redis OSS clusters (Console)](#redis-cluster-vertical-scaling-console)
  + [Scaling up Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleUp.CLI)
  + [Scaling up Valkey or Redis OSS clusters (ElastiCache API)](#VeticalScaling.RedisReplGrps.ScaleUp.API)
+ [Online scaling down](#redis-cluster-vertical-scaling-scaling-down)
  + [Scaling down Valkey or Redis OSS clusters (Console)](#redis-cluster-vertical-scaling-down-console)
  + [Scaling down Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleDown.CLI)
  + [Scaling down Valkey or Redis OSS clusters (ElastiCache API)](#Scaling.Vertical.ScaleDown.API)

## Online scaling up
<a name="redis-cluster-vertical-scaling-scaling-up"></a>

**Topics**
+ [Scaling up Valkey or Redis OSS clusters (Console)](#redis-cluster-vertical-scaling-console)
+ [Scaling up Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleUp.CLI)
+ [Scaling up Valkey or Redis OSS clusters (ElastiCache API)](#VeticalScaling.RedisReplGrps.ScaleUp.API)

### Scaling up Valkey or Redis OSS clusters (Console)
<a name="redis-cluster-vertical-scaling-console"></a>

The following procedure describes how to scale up a Valkey or Redis OSS cluster using the ElastiCache Management Console. During this process, your cluster will continue to serve requests with minimal downtime.

**To scale up a Valkey or Redis OSS cluster (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. From the list of clusters, choose the cluster. 

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale to from the **Node type** list. To scale up, select a node type larger than your existing node. 

1. If you want to perform the scale-up process right away, choose the **Apply immediately** box. If the **Apply immediately** box is not chosen, the scale-up process is performed during this cluster's next maintenance window.

1. Choose **Modify**.

   If you chose **Apply immediately** in the previous step, the cluster's status changes to *modifying*. When the status changes to *available*, the modification is complete and you can begin using the new cluster.

### Scaling up Valkey or Redis OSS clusters (AWS CLI)
<a name="Scaling.RedisStandalone.ScaleUp.CLI"></a>

The following procedure describes how to scale up a Valkey or Redis OSS cluster using the AWS CLI. During this process, your cluster will continue to serve requests with minimal downtime.

**To scale up a Valkey or Redis OSS cluster (AWS CLI)**

1. Determine the node types you can scale up to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --replication-group-id my-replication-group-id
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --replication-group-id my-replication-group-id
   ```

   Output from the above command looks something like this (JSON format).

   ```
    {
        "ScaleUpModifications": [
            "cache.m3.2xlarge",
            "cache.m3.large",
            "cache.m3.xlarge",
            "cache.m4.10xlarge",
            "cache.m4.2xlarge",
            "cache.m4.4xlarge",
            "cache.m4.large",
            "cache.m4.xlarge",
            "cache.r3.2xlarge",
            "cache.r3.4xlarge",
            "cache.r3.8xlarge",
            "cache.r3.large",
            "cache.r3.xlarge"
        ],
        "ScaleDownModifications": [
            "cache.t2.micro",
            "cache.t2.small",
            "cache.t2.medium",
            "cache.t1.small"
        ]
    }
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Modify your replication group to scale up to the new, larger node type, using the AWS CLI `modify-replication-group` command and the following parameters.
   + `--replication-group-id` – The name of the replication group that you are scaling up.
   + `--cache-node-type` – The node type to which you want to scale the cluster. This value must be one of the node types returned by the `list-allowed-node-type-modifications` command in step 1.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-up process to be applied immediately. To postpone the scale-up process to the cluster's next maintenance window, use the `--no-apply-immediately` parameter.

   For Linux, macOS, or Unix:

   ```
    aws elasticache modify-replication-group \
        --replication-group-id my-redis-cluster \
        --cache-node-type cache.m3.xlarge \
        --apply-immediately
   ```

   For Windows:

   ```
    aws elasticache modify-replication-group ^
        --replication-group-id my-redis-cluster ^
        --cache-node-type cache.m3.xlarge ^
        --apply-immediately
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
   		"ReplicationGroup": {
           "Status": "modifying",
           "Description": "my-redis-cluster",
           "NodeGroups": [
               {
                   "Status": "modifying",
                   "Slots": "0-16383",
                   "NodeGroupId": "0001",
                   "NodeGroupMembers": [
                       {
                           "PreferredAvailabilityZone": "us-east-1f",
                           "CacheNodeId": "0001",
                           "CacheClusterId": "my-redis-cluster-0001-001"
                       },
                       {
                           "PreferredAvailabilityZone": "us-east-1d",
                           "CacheNodeId": "0001",
                           "CacheClusterId": "my-redis-cluster-0001-002"
                       }
                   ]
               }
        ],
        "ConfigurationEndpoint": {
            "Port": 6379,
            "Address": "my-redis-cluster.r7gdfi.clustercfg.use1.cache.amazonaws.com"
        },
        "ClusterEnabled": true,
        "ReplicationGroupId": "my-redis-cluster",
        "SnapshotRetentionLimit": 1,
        "AutomaticFailover": "enabled",
        "SnapshotWindow": "07:30-08:30",
        "MemberClusters": [
            "my-redis-cluster-0001-001",
            "my-redis-cluster-0001-002"
        ],
        "CacheNodeType": "cache.m3.xlarge",
        "DataTiering": "disabled",
        "PendingModifiedValues": {}
    }
}
   ```

   For more information, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, check the status of the cluster using the AWS CLI `describe-cache-clusters` command. When the status changes to *available*, you can begin using the new, larger cluster node.
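
Step 1's output is what makes step 2 safe: the target node type must come from `ScaleUpModifications`. The following Python sketch (hypothetical function name, sample values rather than live API output) applies that guard before building the `modify-replication-group` parameters.

```python
import json

# Sample of the JSON shape returned by list-allowed-node-type-modifications.
allowed = json.loads("""
{
    "ScaleUpModifications": ["cache.m3.xlarge", "cache.m3.2xlarge", "cache.m4.large"],
    "ScaleDownModifications": ["cache.t2.micro", "cache.t2.small"]
}
""")

def build_scale_up_request(replication_group_id, target_node_type, allowed):
    """Reject node types that list-allowed-node-type-modifications didn't offer."""
    if target_node_type not in allowed["ScaleUpModifications"]:
        raise ValueError(f"{target_node_type} is not a valid scale-up target")
    return {
        "ReplicationGroupId": replication_group_id,
        "CacheNodeType": target_node_type,
        "ApplyImmediately": True,
    }

request = build_scale_up_request("my-redis-cluster", "cache.m3.xlarge", allowed)
print(request["CacheNodeType"])  # cache.m3.xlarge
```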

### Scaling up Valkey or Redis OSS clusters (ElastiCache API)
<a name="VeticalScaling.RedisReplGrps.ScaleUp.API"></a>

The following process scales your cluster from its current node type to a new, larger node type using the ElastiCache API. During this process, ElastiCache updates the DNS entries so that they point to the new nodes, so you don't have to update the endpoints in your application. For Valkey 7.2 and above and Redis OSS 5.0.5 and above, you can scale auto-failover enabled clusters while the cluster continues to stay online and serve incoming requests. On Redis OSS 4.0.10 and below, you might notice a brief interruption of reads and writes from the primary node while the DNS entry is updated.

The amount of time it takes to scale up to a larger node type varies, depending upon your node type and the amount of data in your current cluster.

**To scale up a Valkey or Redis OSS Cache Cluster (ElastiCache API)**

1. Determine which node types you can scale up to using the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a specific replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Scale your current replication group up to the new node type using the `ModifyReplicationGroup` ElastiCache API action and with the following parameters.
   + `ReplicationGroupId` – the name of the replication group.
   + `CacheNodeType` – the new, larger node type of the clusters in this replication group. This value must be one of the instance types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to cause the scale-up process to be applied immediately. To postpone the scale-up process to the next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyReplicationGroup
   	   &ApplyImmediately=true
   	   &CacheNodeType=cache.m3.2xlarge
   	   &CacheParameterGroupName=redis32-m3-2xl
   	   &ReplicationGroupId=myReplGroup
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20141201T220302Z
   	   &Version=2014-12-01
       &X-Amz-Algorithm=AWS4-HMAC-SHA256
   	   &X-Amz-Date=20141201T220302Z
   	   &X-Amz-SignedHeaders=Host
   	   &X-Amz-Expires=20141201T220302Z
   	   &X-Amz-Credential=<credential>
   	   &X-Amz-Signature=<signature>
   ```

   For more information, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) in the *Amazon ElastiCache API Reference*.

1. If you used `ApplyImmediately=true`, monitor the status of the replication group using the ElastiCache API `DescribeReplicationGroups` action with the following parameters. When the status changes from *modifying* to *available*, you can begin writing to your new, scaled-up replication group.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a particular replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=DescribeReplicationGroups
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [DescribeReplicationGroups](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeReplicationGroups.html) in the *Amazon ElastiCache API Reference*.
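
Waiting for the *modifying* to *available* transition in the last step is a simple polling loop. The sketch below is generic: `get_status` stands in for whatever call you use to read the replication group status (for example, a wrapper around `DescribeReplicationGroups`), and the interval and timeout values are illustrative.

```python
import time

def wait_until_available(get_status, poll_seconds=15, timeout_seconds=1800):
    """Poll a status callable until it reports 'available' or the timeout expires."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if get_status() == "available":
            return True
        time.sleep(poll_seconds)
    return False

# Stubbed status source standing in for the real API call:
statuses = iter(["modifying", "modifying", "available"])
print(wait_until_available(lambda: next(statuses), poll_seconds=0))  # True
```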

## Online scaling down
<a name="redis-cluster-vertical-scaling-scaling-down"></a>

**Topics**
+ [Scaling down Valkey or Redis OSS clusters (Console)](#redis-cluster-vertical-scaling-down-console)
+ [Scaling down Valkey or Redis OSS clusters (AWS CLI)](#Scaling.RedisStandalone.ScaleDown.CLI)
+ [Scaling down Valkey or Redis OSS clusters (ElastiCache API)](#Scaling.Vertical.ScaleDown.API)

### Scaling down Valkey or Redis OSS clusters (Console)
<a name="redis-cluster-vertical-scaling-down-console"></a>

The following procedure describes how to scale down a Valkey or Redis OSS cluster using the ElastiCache Management Console. During this process, your Valkey or Redis OSS cluster will continue to serve requests with minimal downtime.

**To scale down a Valkey or Redis OSS cluster (console)**

1. Sign in to the AWS Management Console and open the ElastiCache console at [https://console.aws.amazon.com/elasticache/](https://console.aws.amazon.com/elasticache/).

1. From the navigation pane, choose **Valkey clusters** or **Redis OSS clusters**.

1. From the list of clusters, choose your preferred cluster. 

1. Choose **Modify**.

1. In the **Modify Cluster** wizard:

   1. Choose the node type you want to scale to from the **Node type** list. To scale down, select a node type smaller than your existing node. Note that not all node types are available to scale down to.

1. If you want to perform the scale down process right away, choose the **Apply immediately** box. If the **Apply immediately** box is not chosen, the scale-down process is performed during this cluster's next maintenance window.

1. Choose **Modify**.

   If you chose **Apply immediately** in the previous step, the cluster's status changes to *modifying*. When the status changes to *available*, the modification is complete and you can begin using the new cluster.

### Scaling down Valkey or Redis OSS clusters (AWS CLI)
<a name="Scaling.RedisStandalone.ScaleDown.CLI"></a>

The following procedure describes how to scale down a Valkey or Redis OSS cluster using the AWS CLI. During this process, your cluster will continue to serve requests with minimal downtime.

**To scale down a Valkey or Redis OSS cluster (AWS CLI)**

1. Determine the node types you can scale down to by running the AWS CLI `list-allowed-node-type-modifications` command with the following parameter.

   For Linux, macOS, or Unix:

   ```
   aws elasticache list-allowed-node-type-modifications \
   	    --replication-group-id my-replication-group-id
   ```

   For Windows:

   ```
   aws elasticache list-allowed-node-type-modifications ^
   	    --replication-group-id my-replication-group-id
   ```

   Output from the above command looks something like this (JSON format).

   ```
    {
        "ScaleUpModifications": [
            "cache.m3.2xlarge",
            "cache.m3.large",
            "cache.m3.xlarge",
            "cache.m4.10xlarge",
            "cache.m4.2xlarge",
            "cache.m4.4xlarge",
            "cache.m4.large",
            "cache.m4.xlarge",
            "cache.r3.2xlarge",
            "cache.r3.4xlarge",
            "cache.r3.8xlarge",
            "cache.r3.large",
            "cache.r3.xlarge"
        ],
        "ScaleDownModifications": [
            "cache.t2.micro",
            "cache.t2.small",
            "cache.t2.medium",
            "cache.t1.small"
        ]
    }
   ```

   For more information, see [list-allowed-node-type-modifications](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-allowed-node-type-modifications.html) in the *AWS CLI Reference*.

1. Modify your replication group to scale down to the new, smaller node type, using the AWS CLI `modify-replication-group` command and the following parameters.
   + `--replication-group-id` – The name of the replication group that you are scaling down.
   + `--cache-node-type` – The node type to which you want to scale the cluster. This value must be one of the node types returned by the `list-allowed-node-type-modifications` command in step 1.
   + `--cache-parameter-group-name` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `--apply-immediately` – Causes the scale-down process to be applied immediately. To postpone the scale-down process to the cluster's next maintenance window, use the `--no-apply-immediately` parameter.

   For Linux, macOS, or Unix:

   ```
    aws elasticache modify-replication-group \
        --replication-group-id my-redis-cluster \
        --cache-node-type cache.t2.micro \
        --apply-immediately
   ```

   For Windows:

   ```
    aws elasticache modify-replication-group ^
        --replication-group-id my-redis-cluster ^
        --cache-node-type cache.t2.micro ^
        --apply-immediately
   ```

   Output from the above command looks something like this (JSON format).

   ```
   {
   		"ReplicationGroup": {
           "Status": "modifying",
           "Description": "my-redis-cluster",
           "NodeGroups": [
               {
                   "Status": "modifying",
                   "Slots": "0-16383",
                   "NodeGroupId": "0001",
                   "NodeGroupMembers": [
                       {
                           "PreferredAvailabilityZone": "us-east-1f",
                           "CacheNodeId": "0001",
                           "CacheClusterId": "my-redis-cluster-0001-001"
                       },
                       {
                           "PreferredAvailabilityZone": "us-east-1d",
                           "CacheNodeId": "0001",
                           "CacheClusterId": "my-redis-cluster-0001-002"
                       }
                   ]
               }
        ],
        "ConfigurationEndpoint": {
            "Port": 6379,
            "Address": "my-redis-cluster.r7gdfi.clustercfg.use1.cache.amazonaws.com"
        },
        "ClusterEnabled": true,
        "ReplicationGroupId": "my-redis-cluster",
        "SnapshotRetentionLimit": 1,
        "AutomaticFailover": "enabled",
        "SnapshotWindow": "07:30-08:30",
        "MemberClusters": [
            "my-redis-cluster-0001-001",
            "my-redis-cluster-0001-002"
        ],
        "CacheNodeType": "cache.t2.micro",
        "DataTiering": "disabled",
        "PendingModifiedValues": {}
    }
}
   ```

   For more information, see [modify-replication-group](https://docs.aws.amazon.com/cli/latest/reference/elasticache/modify-replication-group.html) in the *AWS CLI Reference*.

1. If you used the `--apply-immediately` parameter, check the status of the cluster using the AWS CLI `describe-cache-clusters` command. When the status changes to *available*, you can begin using the new, smaller cluster node.
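
The note earlier in this section recommends confirming that the smaller node has enough memory for your data before scaling down. A rough pre-check, sketched below, compares current memory usage against the target node's memory minus reserved memory; the figures and the 25 percent reservation are illustrative only, so check the actual specifications and reserved-memory settings for your target node type.

```python
def scale_down_fits(used_memory_bytes, target_node_memory_bytes, reserved_memory_percent=25):
    """Rough pre-check: will the current dataset fit on the smaller node
    after reserved memory is set aside? Illustrative, not authoritative."""
    usable = target_node_memory_bytes * (1 - reserved_memory_percent / 100)
    return used_memory_bytes <= usable

# Illustrative numbers: 300 MiB of data, a target node with 555 MiB of memory.
print(scale_down_fits(300 * 2**20, 555 * 2**20))  # True
```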

### Scaling down Valkey or Redis OSS clusters (ElastiCache API)
<a name="Scaling.Vertical.ScaleDown.API"></a>

The following process scales your replication group from its current node type to a new, smaller node type using the ElastiCache API. During this process, your Valkey or Redis OSS cluster will continue to serve requests with minimal downtime.

The amount of time it takes to scale down to a smaller node type varies, depending upon your node type and the amount of data in your current cluster.

**Scaling down (ElastiCache API)**

1. Determine which node types you can scale down to using the ElastiCache API `ListAllowedNodeTypeModifications` action with the following parameter.
   + `ReplicationGroupId` – the name of the replication group. Use this parameter to describe a specific replication group rather than all replication groups.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ListAllowedNodeTypeModifications
   	   &ReplicationGroupId=MyReplGroup
   	   &Version=2015-02-02
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20150202T192317Z
   	   &X-Amz-Credential=<credential>
   ```

   For more information, see [ListAllowedNodeTypeModifications](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListAllowedNodeTypeModifications.html) in the *Amazon ElastiCache API Reference*.

1. Scale your current replication group down to the new node type using the `ModifyReplicationGroup` ElastiCache API action and with the following parameters.
   + `ReplicationGroupId` – the name of the replication group.
   + `CacheNodeType` – the new, smaller node type of the clusters in this replication group. This value must be one of the instance types returned by the `ListAllowedNodeTypeModifications` action in the previous step.
   + `CacheParameterGroupName` – [Optional] Use this parameter if you are using `reserved-memory` to manage your cluster's reserved memory. Specify a custom cache parameter group that reserves the correct amount of memory for your new node type. If you are using `reserved-memory-percent` you can omit this parameter.
   + `ApplyImmediately` – Set to `true` to cause the scale-down process to be applied immediately. To postpone the scale-down process to the next maintenance window, use `ApplyImmediately=false`.

   ```
   https://elasticache.us-west-2.amazonaws.com/
   	   ?Action=ModifyReplicationGroup
   	   &ApplyImmediately=true
   	   &CacheNodeType=cache.t2.micro
   	   &CacheParameterGroupName=redis32-m3-2xl
   	   &ReplicationGroupId=myReplGroup
   	   &SignatureVersion=4
   	   &SignatureMethod=HmacSHA256
   	   &Timestamp=20141201T220302Z
   	   &Version=2014-12-01
       &X-Amz-Algorithm=AWS4-HMAC-SHA256
   	   &X-Amz-Date=20141201T220302Z
   	   &X-Amz-SignedHeaders=Host
   	   &X-Amz-Expires=20141201T220302Z
   	   &X-Amz-Credential=<credential>
   	   &X-Amz-Signature=<signature>
   ```

   For more information, see [ModifyReplicationGroup](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ModifyReplicationGroup.html) in the *Amazon ElastiCache API Reference*.

# Getting started with Bloom filters
<a name="BloomFilters"></a>

ElastiCache supports the Bloom filter data type, a space-efficient probabilistic data structure for checking whether an element is a member of a set. When using Bloom filters, false positives are possible: a filter can incorrectly indicate that an element exists, even though that element was not added to the set. However, Bloom filters prevent false *negatives*: a filter never indicates that an element does *not* exist if that element was added to the set.

You can tune the false positive rate to suit your workload. You can also configure capacity (the number of items a Bloom filter can hold), scaling and non-scaling properties, and more.

After you create a cluster with a supported engine version, the Bloom data type and associated commands are automatically available. The `bloom` data type is API-compatible with the Bloom filter command syntax of the official Valkey client libraries, including `valkey-py`, `valkey-java`, and `valkey-go`. You can easily migrate existing Bloom-based Valkey and Redis OSS applications to ElastiCache. For a complete list of commands, see [Bloom filter commands](#SupportedCommandsBloom).
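For example, you can reserve a filter with an explicit false positive rate and capacity, then add and check items. The following is a sketch in valkey-cli syntax; the key and item names are illustrative:

```
> BF.RESERVE visitors 0.001 100000
OK
> BF.ADD visitors user-1001
(integer) 1
> BF.EXISTS visitors user-1001
(integer) 1
> BF.EXISTS visitors user-9999
(integer) 0
```

Here `0.001` is the requested false positive rate and `100000` is the capacity. `BF.EXISTS` can occasionally return 1 for an item that was never added; the configured rate bounds how often that false positive occurs.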

The Bloom-related metrics `BloomFilterBasedCmds`, `BloomFilterBasedCmdsLatency`, and `BloomFilterBasedCmdsECPUs` are incorporated into CloudWatch to monitor the usage of this data type. For more information, see [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md).

**Note**  
To use Bloom filters, you must be running ElastiCache for Valkey version 8.1 or later.
The bloom data type is not RDB compatible with other, non-Valkey-based Bloom offerings.

## Bloom filters data type overview
<a name="BloomFilters.datatype"></a>

Bloom filters are a space-efficient probabilistic data structure that supports adding elements and checking whether elements exist. False positives are possible: a filter can incorrectly indicate that an element exists, even though it was not added. However, Bloom filters guarantee that false negatives (incorrectly indicating that an element does not exist, even though it was added) do not occur.

The main source of documentation for Bloom filters is the [valkey.io Bloom filter documentation](https://valkey.io/topics/bloomfilters/), which contains the following information:
+ [Common use cases for bloom filters](https://valkey.io/topics/bloomfilters/#common-use-cases-for-bloom-filters)
  + Advertisement / Event deduplication
  + Fraud detection
  + Filtering harmful content / spam
  + Unique user detection
+ [Differences between scaling and non scaling bloom filters](https://valkey.io/topics/bloomfilters/#scaling-and-non-scaling-bloom-filters)
  + How to decide between scaling and non scaling bloom filters
+ [Bloom properties](https://valkey.io/topics/bloomfilters/#bloom-properties)
  + Learn about the tunable properties of Bloom filters. This includes the false positive rate, capacity, scaling and non scaling properties, and more.
+ [Performance of bloom commands](https://valkey.io/topics/bloomfilters/#performance)
+ [Monitoring overall bloom filter stats](https://valkey.io/topics/bloomfilters/#monitoring)
+ [Handling large bloom filters](https://valkey.io/topics/bloomfilters/#handling-large-bloom-filters)
  + Recommendations and details on how to check if a bloom filter is reaching its memory usage limit, and if it can scale to reach the desired capacity.
  + You can specifically check the amount of memory consumed by a bloom filter document through using the [BF.INFO](https://valkey.io/commands/bf.info/) command.

## Bloom size limit
<a name="BloomFilters.size"></a>

The consumption of memory by a single Bloom filter object is limited to 128 MB. You can check the amount of memory consumed by a Bloom filter by using the `BF.INFO <key> SIZE` command.
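For example, the following sketch checks a filter's memory footprint; the key name and the byte count returned are illustrative:

```
> BF.INFO visitors SIZE
(integer) 384
```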

## Bloom ACLs
<a name="BloomFilters.ACL"></a>

Similar to the existing per-data-type categories (@string, @hash, and so on), a new category, @bloom, is added to simplify managing access to Bloom commands and data. No other existing Valkey or Redis OSS commands are members of the @bloom category.

Three existing ACL categories are updated to include the new Bloom commands: @read, @write, and @fast. The following table shows the mapping of Bloom commands to the appropriate categories.


| Bloom command | @read | @write | @fast | @bloom | 
| --- | --- | --- | --- | --- | 
|  BF.ADD  |    |  y  |  y  |  y  | 
|  BF.CARD  |  y  |    |  y  |  y  | 
|  BF.EXISTS  |  y  |    |  y  |  y  | 
|  BF.INFO  |  y  |    |  y  |  y  | 
|  BF.INSERT  |    |  y  |  y  |  y  | 
|  BF.MADD  |    |  y  |  y  |  y  | 
|  BF.MEXISTS  |  y  |    |  y  |  y  | 
|  BF.RESERVE  |    |  y  |  y  |  y  | 
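As a sketch, standard ACL rules can grant a user access to Bloom data using the new category; the user name, password, and key pattern here are illustrative:

```
> ACL SETUSER bloom-app on >str0ngP@ss ~bf:* +@bloom
OK
```

Because every Bloom command is also a member of @read or @write, users granted those broader categories can run the Bloom commands as well.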

## Bloom filter related metrics
<a name="BloomFilters.Metrics"></a>

The following CloudWatch metrics related to bloom data structures are provided:


| CW Metrics | Unit | Serverless/Node-based | Description | 
| --- | --- | --- | --- | 
|  BloomFilterBasedCmds  |  Count  |  Both  |  The total number of Bloom filter commands, including both read and write commands.  | 
|  BloomFilterBasedCmdsLatency  |  Microseconds  |  Node-based  |  Latency of all Bloom filter commands, including both read and write commands.  | 
|  BloomFilterBasedCmdsECPUs  |  Count  |  Serverless  |  ECPUs consumed by all Bloom filter commands, including both read and write commands.  | 

## Bloom filter commands
<a name="SupportedCommandsBloom"></a>

[Bloom filter commands](https://valkey.io/commands/#bloom) are documented on the [Valkey.io](https://valkey.io/) website. Each command page provides a comprehensive overview of the command, including its syntax, behavior, return values, and potential error conditions.


| Name | Description | 
| --- | --- | 
| [BF.ADD](https://valkey.io/commands/bf.add/) |  Adds a single item to a bloom filter. If the filter doesn't already exist, it is created.  | 
| [BF.CARD](https://valkey.io/commands/bf.card/) | Returns the cardinality of a bloom filter. | 
| [BF.EXISTS](https://valkey.io/commands/bf.exists/) | Determines if the bloom filter contains the specified item.  | 
| [BF.INFO](https://valkey.io/commands/bf.info/) | Returns usage information and properties of a specific bloom filter. | 
| [BF.INSERT](https://valkey.io/commands/bf.insert/) | Creates a bloom filter with zero or more items, or adds items to an existing bloom filter. | 
| [BF.MADD](https://valkey.io/commands/bf.madd/) | Adds one or more items to a bloom filter. | 
| [BF.MEXISTS](https://valkey.io/commands/bf.mexists/) | Determines if the bloom filter contains one or more items. | 
| [BF.RESERVE](https://valkey.io/commands/bf.reserve/) | Creates an empty bloom filter with the specified properties. | 

**Note**  
**BF.LOAD** is not supported by ElastiCache. It is only relevant for AOF usage, which ElastiCache does not support.

# Getting started with Watch in Serverless
<a name="ServerlessWatch"></a>

ElastiCache supports the `WATCH` command, which allows you to monitor keys for changes and execute conditional [transactions](https://valkey.io/topics/transactions/). The `WATCH` command is particularly useful for applications that require optimistic concurrency control, ensuring that transactions are only executed if the monitored keys have not been modified. This includes modifications made by a client, such as write commands, and by Valkey itself, such as expiration or eviction. If any watched key is modified between the time it was set in `WATCH` and the time `EXEC` is received, the entire transaction is aborted.

For ElastiCache Serverless, the following constraints apply:

In ElastiCache Serverless, `WATCH` is scoped to a single hash slot. That means only keys that map to the same hash slot can be watched at the same time by the same connection, and the transaction that follows the watch commands can only operate on that same hash slot. When an application attempts to watch keys from different hash slots, or to execute transaction commands that operate on keys mapped to a different hash slot than the watched keys, a `CROSSSLOT` error is returned. [Hash tags](https://valkey.io/topics/cluster-spec/#hash-tags) can be used to ensure that multiple keys map to the same hash slot.
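Per the Valkey cluster specification, a key's hash slot is CRC-16/XMODEM of the key modulo 16384, and a hash tag restricts hashing to the substring between the first `{` and the following `}`. The following Python sketch (not an ElastiCache API, just the standard slot computation) shows why keys sharing a hash tag map to the same slot:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM, the checksum Valkey cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of 16384 slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag: hash only the tag contents
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing the tag {005119} land in the same slot,
# so they can be watched and mutated in one transaction.
assert key_hash_slot("foo:{005119}") == key_hash_slot("bar:{005119}")
```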

Additionally, the `SCAN` command cannot be executed on a connection with watched keys and returns a `command not supported during watch state` error.

The transaction will be aborted (as if watched keys were touched) when ElastiCache Serverless has no certainty of whether a key was modified. For example, when a slot has been migrated and the watched keys cannot be found on the same node.

**Code Examples**

## Watch and Operate on Keys from Different Slots
<a name="w2aac24c33c15b1"></a>

In the following example, the watched key and the key specified in the `SET` command map to different hash slots. The execution returns a `CROSSSLOT ERROR`.

```
> WATCH foo:{005119} 
OK 
> MULTI 
OK 
> SET bar:{011794} 1234 
QUEUED 
> EXEC 
CROSSSLOT Keys in request don't hash to the same slot
```

## Watch and Operate on Keys from the Same Slot
<a name="w2aac24c33c15b3"></a>

The following example shows a successful transaction, because the key set in `WATCH` wasn't changed.

```
> WATCH foo:{005119} 
OK 
> MULTI 
OK 
> SET bar:{005119} 1234 
QUEUED 
> EXEC 
1) OK
```

## Watch keys from Different Slots
<a name="w2aac24c33c15b5"></a>

In the following example, an attempt to `WATCH` keys from different slots simultaneously within the same client connection returns a `CROSSSLOT ERROR`.

```
> WATCH foo:{005119} 
OK 
> WATCH bar:{123455}  
CROSSSLOT Keys in request don't hash to the same slot
```

## Watch limit
<a name="ServerlessWatch.size"></a>

Every client connection can watch up to 1000 keys at the same time.

## Supported commands related to Watch
<a name="SupportedCommandsWatch"></a>

The [WATCH](https://valkey.io/commands/watch/) and [UNWATCH](https://valkey.io/commands/unwatch/) commands are documented on the [Valkey.io](https://valkey.io/) website. Each command page provides a comprehensive overview of the command, including its syntax, behavior, return values, and potential error conditions.
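For example, `UNWATCH` releases all keys watched on a connection without executing a transaction (the key name is illustrative):

```
> WATCH foo:{005119}
OK
> UNWATCH
OK
```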

# Getting started with Vector Search
<a name="vector-search"></a>

Amazon ElastiCache for Valkey supports vector search, delivering latency as low as microseconds. It offers the lowest latency vector search with the highest throughput and best price-performance at a 95%+ recall rate among popular vector databases on AWS. ElastiCache for Valkey provides capabilities to index, search, and update billions of high-dimensional vector embeddings from popular providers like Amazon Bedrock, Amazon SageMaker, Anthropic, or OpenAI for fast search and retrieval with up to 99% recall. Vector search for Amazon ElastiCache is ideal for use cases where peak performance and scalability are the most important selection criteria. This includes semantic caching, retrieval-augmented generation, real-time recommendations, personalization, and anomaly detection. 

Vector search can be used in conjunction with other ElastiCache features to enhance your applications. Vector search for ElastiCache is available in Valkey version 8.2 on node-based clusters in all [AWS Regions](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/) at no additional cost. To get started, create a new Valkey 8.2 cluster using the [AWS Management Console](https://console.aws.amazon.com/elasticache/), AWS SDK, or AWS CLI. You can also use vector search on your existing cluster by upgrading from any previous version of Valkey or Redis OSS to Valkey 8.2 in a [few clicks with no downtime](VersionManagement.HowTo.md).

# Vector Search Overview
<a name="vector-search-overview"></a>

Amazon ElastiCache for Valkey supports vector search, enabling you to store, search, and update billions of high-dimensional vector embeddings in memory with latencies as low as microseconds and recall up to 99%. Vector search allows you to create, maintain, and use secondary indexes for efficient and scalable search. Each vector search operation applies to a single index, and index operations apply only to the specified index. With the exception of index creation and deletion operations, any number of operations may be issued against any index at any time. At the cluster level, multiple operations against multiple indexes may be in progress simultaneously.

Within this document, the terms key, row, and record are identical and used interchangeably. Similarly, the terms column, field, path, and member are also used interchangeably.

The `FT.CREATE` command can be used to create an index for a subset of keys with the specified field types. `FT.SEARCH` performs queries on the indexes you create, and `FT.DROPINDEX` removes an existing index and all associated data. There are no special commands to add, delete, or modify indexed data. The existing `HASH` or `JSON` commands that modify an indexed key automatically update the index.
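As an illustrative sketch (the index name, prefix, and field attributes are hypothetical, and the attribute count precedes the attribute list), creating and then dropping a small vector index might look like the following:

```
> FT.CREATE doc-idx ON HASH PREFIX 1 doc: SCHEMA embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 4 DISTANCE_METRIC COSINE
OK
> FT.DROPINDEX doc-idx
OK
```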

**Topics**
+ [Indexes and the Valkey OSS keyspace](#indexes-keyspace)
+ [Index field types](#index-field-types)
+ [Vector index algorithms](#vector-index-algorithms)
+ [Vector search security](#vector-search-security)

## Indexes and the Valkey OSS keyspace
<a name="indexes-keyspace"></a>

Indexes are constructed and maintained over a subset of the Valkey OSS keyspace. The keyspace for each index is defined by a list of key prefixes that are provided when the index is created. The list of prefixes is optional and if omitted, the entire keyspace will be part of that index. Multiple indexes may choose disjoint or overlapping subsets of the keyspace without limitation.

Indexes are also typed in that they only cover keys that have a matching type. Currently, indexes are supported only on `JSON` and `HASH` types. A `HASH` index only indexes `HASH` keys covered by its prefix list and similarly a `JSON` index only indexes `JSON` keys that are covered by its prefix list. Keys within an index's keyspace prefix list that do not have the designated type are ignored and do not affect search operations.

An index is updated when a command modifies any key that is within the keyspace of the index. Valkey automatically extracts the declared fields for each index and updates the index with the new value. The update process has three steps. In the first step, the HASH or JSON key is modified and the requesting client is blocked. The second step is performed in the background and updates each of the indexes that contain the modified key. In the third step, the client is unblocked. Thus for query operations performed on the same connection as a mutation, that change is immediately visible in the search results. 

The creation of an index is a multi-step process. The first step is to execute the `FT.CREATE` command, which defines the index. Successful execution of a create automatically initiates the second step: backfilling. The backfill process runs in a background thread and scans the keyspace for keys that are within the new index's prefix list. Each key that is found is added to the index. Eventually the entire keyspace is scanned, completing the index creation process. Note that while the backfill process is running, mutations of indexed keys are permitted without restriction, and the index backfill process doesn't complete until all keys are properly indexed. Query operations attempted while an index is undergoing backfill are not allowed and are terminated with an error. The `FT.INFO` command returns the backfill process status in the `backfill_status` field.

## Index field types
<a name="index-field-types"></a>

Each index has a specific type that is declared when the index is created along with the location of a field (column) to be indexed. For `HASH` keys the location is the field name within the `HASH`. For `JSON` keys the location is a JSON path description. When a key is modified the data associated with the declared fields is extracted, converted to the declared type and stored in the index. If the data is missing or cannot be successfully converted to the declared type, then that field is omitted from the index. There are three types of fields, as explained following:
+ **Vector fields** contain a vector of numbers also known as vector embedding. Vector fields can be used to filter vectors based on specified distance metrics that measure similarity. For `HASH` indexes, the field should contain the entire vector encoded in binary format (little-endian IEEE 754). For `JSON` keys the path should reference an array of the correct size filled with numbers. Note that when a JSON array is used as a vector field, the internal representation of the array within the JSON key is converted into the format required by the selected algorithm, reducing memory consumption and precision. Subsequent read operations using the JSON commands will yield the reduced precision value.
+ **Number fields** contain a single number. Number fields can be used with the range search operator. For `HASH`, the field is expected to contain the ASCII text of a number written in the standard format of fixed- or floating-point numbers. For `JSON` fields, the numeric rules of JSON numbers must be followed. Regardless of the representation within the key, this field is converted to a 64-bit floating point number for storage within the index. Because the underlying numbers are stored in floating point with its precision limitations, the usual rules about numeric comparisons for floating point numbers apply.
+ **Tag fields** contain zero or more tag values coded as a single UTF-8 string. Tag fields can be used to filter queries for tag value equivalence with either case-sensitive or case-insensitive comparison. The string is parsed into tag values using a separator character (default is a comma but can be overridden) with leading and trailing white space removed. Any number of tag values can be contained in a single tag field.
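For `HASH` vector fields, the binary encoding described above can be produced in any client language. The following is a minimal Python sketch; the example vector values are illustrative:

```python
import struct

embedding = [0.25, -0.5, 0.125, 1.0]  # an example 4-dimensional vector

# Pack as little-endian IEEE 754 FLOAT32, the layout HASH vector fields expect.
blob = struct.pack("<4f", *embedding)

# 4 dimensions * 4 bytes per FLOAT32 = 16 bytes.
assert len(blob) == 16

# Decoding recovers the values (exactly here, because these values are
# representable in FLOAT32; arbitrary floats lose precision).
decoded = list(struct.unpack("<4f", blob))
```

The resulting `blob` is what you would store in the hash member that the index's vector field points at.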

## Vector index algorithms
<a name="vector-index-algorithms"></a>

Two vector index algorithms are supported in Valkey:
+ **Flat** – The Flat algorithm is a brute force linear processing of each vector in the index, yielding exact answers within the bounds of the precision of the distance computations. Because of the linear processing of the index, run times for this algorithm can be very high for large indexes. Flat indexes support higher ingestion speeds.
+ **Hierarchical Navigable Small Worlds (HNSW)** – The HNSW algorithm is an alternative that provides approximate closest vector matches in exchange for substantially lower execution times. The algorithm is controlled by three parameters: `M`, `EF_CONSTRUCTION`, and `EF_RUNTIME`. The first two parameters are specified at index creation time and cannot be changed. The `EF_RUNTIME` parameter has a default value that is specified at index creation but can be overridden on any individual query operation afterward. These three parameters interact to balance memory and CPU consumption during ingestion and query operations, as well as to control the quality of the approximation of an exact KNN search (known as recall ratio).

In HNSW, the parameter `M` controls the maximum number of neighbors each node can connect to, shaping the index density. A higher `M`, such as 32 and above, produces a more connected graph, improving recall and query speed because more paths exist to reach relevant neighbors. However, it also increases index size and memory use, and slows down indexing. A lower `M`, such as 8 and below, yields a smaller, faster-to-build index with lower memory use, but recall decreases and queries may take longer due to fewer connections.

The parameter `EF_CONSTRUCTION` dictates how many candidate connections are evaluated when building the index. A higher `EF_CONSTRUCTION`, such as 400 and above, means the indexer considers more paths before selecting neighbors, leading to a graph that improves both recall and query efficiency later, but at the cost of slower indexing and higher CPU and memory use during construction. A low `EF_CONSTRUCTION`, such as 64-120, speeds up indexing and reduces resource use, but the resulting graph may reduce recall and slow queries even if `EF_RUNTIME` is set high.

Finally, `EF_RUNTIME` governs the breadth of the search during querying, controlling how many candidate neighbors are explored at runtime. Setting it high increases recall and accuracy, but at the cost of query latency and CPU use. A low `EF_RUNTIME` makes queries faster and lighter, but with reduced recall. Unlike `M` or `EF_CONSTRUCTION`, this parameter does not affect index size or build time, making it the parameter to tune for recall versus latency trade-offs after an index is built.
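As a sketch, an index creation that sets all three parameters explicitly might look like the following (the index name, prefix, dimension count, and parameter values are illustrative):

```
> FT.CREATE vec-idx ON HASH PREFIX 1 vec: SCHEMA embedding VECTOR HNSW 12 TYPE FLOAT32 DIM 1536 DISTANCE_METRIC L2 M 16 EF_CONSTRUCTION 200 EF_RUNTIME 100
OK
```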

Both vector search algorithms (Flat and HNSW) support an optional `INITIAL_CAP` parameter. When specified, this parameter pre-allocates memory for the indexes, resulting in reduced memory management overhead and increased vector ingestion rates. Flat indexes support better ingestion speeds than HNSW.

Vector search algorithms like HNSW may not efficiently handle deleting or overwriting of previously inserted vectors. Use of these operations can result in excess index memory consumption and/or degraded recall quality. Reindexing is one method for restoring optimal memory usage and/or recall.

## Vector search security
<a name="vector-search-security"></a>

[Valkey ACL (Access Control Lists)](https://valkey.io/topics/acl/) security mechanisms for both command and data access are extended to control the search facility. ACL control of individual search commands is fully supported. A new ACL category, `@search`, is provided and many of the existing categories (`@fast`, `@read`, `@write`, etc.) are updated to include the new commands. Search commands do not modify key data, meaning that the existing ACL machinery for write access is preserved. The access rules for `HASH` and `JSON` operations are not modified by the presence of an index; normal key-level access control is still applied to those commands.

Search commands with an index also have their access controlled through ACL. Access checks are performed at the whole-index level, not at the per-key level. This means that access to an index is granted to a user only if that user has permission to access all possible keys within the keyspace prefix list of that index. In other words, the actual contents of an index don't control the access. Rather, it is the theoretical contents of an index as defined by the prefix list which is used for the security check. Situations where a user has read and/or write access to a key but is unable to access an index containing that key are possible. Note that only read access to the keyspace is required to create or use an index; the presence or absence of write access is not considered.

# Vector search features and limits
<a name="vector-search-features-limits"></a>

## Vector search availability
<a name="vector-search-availability"></a>

Vector search for Amazon ElastiCache is available with Valkey version 8.2 on node-based clusters in all AWS Regions at no additional cost. You can also use vector search on your existing clusters by upgrading from any version of Valkey or Redis OSS to Valkey 8.2 in a [few clicks with no downtime](VersionManagement.HowTo.md).

Vector search is currently available on all ElastiCache instance types except nodes with data tiering. Using vector search on t2, t3, and t4g instances requires increasing the memory reserve to at least 50% for micro and 30% for small instances. See [this page](redis-memory-management.md) for more information.

## Parametric restrictions
<a name="parametric-restrictions"></a>

The following table shows limits for various vector search items:


**Vector search limits**  

| Item | Maximum value | 
| --- | --- | 
| Number of dimensions in a vector | 32768 | 
| Number of indexes that can be created | 10 | 
| Number of fields in an index | 50 | 
| FT.SEARCH TIMEOUT clause (milliseconds) | 60000 | 
| Maximum number of prefixes allowed per index | 16 | 
| Maximum length of a tag field | 10000 | 
| Maximum length of a numeric field | 256 | 
| HNSW M parameter | 2000000 | 
| HNSW EF\_CONSTRUCTION parameter | 4096 | 
| HNSW EF\_RUNTIME parameter | 4096 | 

## Operational restrictions
<a name="operational-restrictions"></a>

### Index Persistence and Backfilling
<a name="index-persistence-backfilling"></a>

The update process has three steps. In the first step, the HASH or JSON key is modified and the requesting client is blocked. The second step is performed in the background and updates each of the indexes that contain the modified key. In the third step, the client is unblocked. Thus, for query operations performed on the same connection as a mutation, that change is immediately visible in the search results. However, an insert or update of a key may not be visible in search results for other clients for a short period of time. During periods of heavy system load and/or heavy mutation of data, the visibility delay can become longer.

The vector search feature persists both the definition of indexes and the content of the indexes. Indexes for vector fields are saved, but the indexes for TAG and NUMERIC fields are not, meaning that they must be rebuilt when loaded externally (a full sync or a reload). This means that during any operational request or event that causes a node to start or restart, the index definition and content for vectors are restored from the latest snapshot. No user action is required to initiate this. However, for TAG and NUMERIC indexes, the rebuild is performed as a backfill operation as soon as data is restored. This is functionally equivalent to the system automatically executing an FT.CREATE command for each defined index. Note that the node becomes available for application operations as soon as the data is restored, but likely before index backfill has completed, meaning that the effects of backfill, such as queries being rejected until it completes, will again be visible to applications.

The completion of index backfill is not synchronized between a primary and a replica. This lack of synchronization can unexpectedly become visible to applications, so it is recommended that applications verify backfill completion on primaries and all replicas before initiating search operations.

### Scaling limits
<a name="scaling-limits"></a>

During scaling events, the index may undergo backfill as data is migrated. This will result in a reduced recall for search queries.

### Snapshot import/export and Live Migration
<a name="snapshot-import-export"></a>

RDB files from one cluster with search indexes can be imported into another ElastiCache Valkey cluster running version 8.2 or higher. The new cluster rebuilds the index content on loading the RDB file. However, the presence of search indexes in an RDB file limits the compatibility of that data with prior versions of Valkey. The format of the search indexes defined by the vector search functionality is only understood by another ElastiCache cluster with Valkey version 8.2 or higher. RDB files that do not contain indexes are not restricted in this fashion.

### Out of Memory during backfill
<a name="out-of-memory-backfill"></a>

Similar to Valkey OSS write operations, an index backfill is subjected to out-of-memory limitations. If engine memory is filled up while a backfill is in progress, all backfills are paused. If memory becomes available, the backfill process is resumed. It is possible to delete an index when backfill is paused due to out of memory.

### Transactions
<a name="transactions"></a>

The commands `FT.CREATE`, `FT.DROPINDEX`, `FT.ALIASADD`, `FT.ALIASDEL`, and `FT.ALIASUPDATE` cannot be executed in a transactional context, that is, within a `MULTI`/`EXEC` block or within a Lua or FUNCTION script.

# Choosing the appropriate configuration
<a name="choosing-configuration"></a>

Within the console experience, ElastiCache offers an easy way to choose the right instance type based on the memory and CPU requirements of your vector workload.

## Memory consumption
<a name="memory-consumption"></a>

Memory consumption is based on the number of vectors, the number of dimensions, the M-value, and the amount of non-vector data, such as metadata associated with the vector or other data stored within the instance. The total memory required is a combination of the space needed for the actual vector data and the space required for the vector indexes. The space required for vector data is calculated by measuring the actual capacity required for storing vectors within `HASH` or `JSON` data structures, plus the allocator overhead from rounding up to the nearest memory slab for optimal memory allocations. Each of the vector indexes uses references to the vector data stored in these data structures, as well as an additional copy of the vector in the index. It is advised to plan for this additional space consumption by the index.

The number of vectors depends on how you decide to represent your data as vectors. For instance, you can choose to split a single document into several chunks, where each chunk is represented by a vector. Alternatively, you can represent the whole document as a single vector. The number of dimensions of your vectors depends on the embedding model you choose. For instance, if you use the Amazon Titan embedding model, the number of dimensions is 1536. Note that you should test the instance type to make sure it fits your requirements.
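As a rough, illustrative lower bound only (real usage adds graph structures, metadata, and allocator overhead), you can estimate the vector-data footprint from the FLOAT32 element size and the index's additional copy of each vector:

```python
def vector_memory_floor(num_vectors: int, dims: int) -> int:
    """Lower-bound bytes: FLOAT32 vector data plus one index copy of each vector."""
    raw = num_vectors * dims * 4   # 4 bytes per FLOAT32 dimension
    return raw * 2                 # the index stores an additional copy

# One million 1536-dimensional vectors (for example, Amazon Titan embeddings):
floor_bytes = vector_memory_floor(1_000_000, 1536)
print(f"{floor_bytes / 2**30:.1f} GiB")  # at least ~11.4 GiB before overhead
```

Treat this as a planning floor, not a sizing formula; always validate against the actual memory metrics of a test cluster.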

## Scaling your workload
<a name="scaling-workload"></a>

Vector search supports all three methods of scaling: horizontal, vertical, and replicas. When scaling for capacity, vector search behaves just like regular Valkey; that is, increasing the memory of individual nodes (vertical scaling) or increasing the number of nodes (horizontal scaling) increases the overall capacity. In cluster mode, the `FT.CREATE` command can be sent to any primary node of the cluster, and the system automatically distributes the new index definition to all cluster members.

However, from a performance perspective, vector search behaves very differently from regular Valkey. The multi-threaded implementation of vector search means that additional CPUs yield up to linear increases in both query and ingestion throughput. Horizontal scaling yields linear increases in ingestion throughput but may reduce query throughput. If additional query throughput is required, scaling through replicas or additional CPUs is required.

# Vector Search Commands
<a name="vector-search-commands"></a>

Following is a list of supported commands for vector search.

**Topics**
+ [FT.CREATE](vector-search-commands-ft.create.md)
+ [FT.SEARCH](vector-search-commands-ft.search.md)
+ [FT.DROPINDEX](vector-search-commands-ft.dropindex.md)
+ [FT.INFO](vector-search-commands-ft.info.md)
+ [FT.\_LIST](vector-search-commands-ft.list.md)

# FT.CREATE
<a name="vector-search-commands-ft.create"></a>

The `FT.CREATE` command creates an empty index and initiates the backfill process. Each index consists of a number of field definitions. Each field definition specifies a field name, a field type, and a path within each indexed key to locate a value of the declared type. Some field type definitions have additional sub-type specifiers.

For indexes on HASH keys, the path is the same as the hash member name. The optional `AS` clause can be used to rename the field if desired. Renaming of fields is especially useful when the member name contains special characters.

For indexes on JSON keys, the path is a JSON path to the data of the declared type. Because the JSON path always contains special characters, the `AS` clause is required.

**Syntax**

```
FT.CREATE <index-name>
ON HASH | JSON
[PREFIX <count> <prefix1> [<prefix2>...]]
SCHEMA
(
    <field-identifier> [AS <alias>]
        ( VECTOR [HNSW | FLAT] <attr_count> [<attribute_name> <attribute_value>]
        | TAG [SEPARATOR <sep>] [CASESENSITIVE]
        | NUMERIC )
)+
```

**<index-name> (required):** This is the name you give to your index. If an index with the same name exists already, an error is returned.

**ON HASH | JSON (optional):** Only keys that match the specified type are included in this index. If omitted, HASH is assumed.

**PREFIX <prefix-count> <prefix> (optional):** If this clause is specified, then only keys that begin with the same bytes as one or more of the specified prefixes will be included in this index. If this clause is omitted, all keys of the correct type will be included. A zero-length prefix would also match all keys of the correct type.

**Field types:**
+ TAG: A tag field is a string that contains one or more tag values. 
  + SEPARATOR <sep> (optional): One of the characters `,.<>{}[]"':;!@#$%^&*()-+=~` used to delimit individual tags. If omitted, the default value is `,`.
  + CASESENSITIVE (optional): If present, tag comparisons will be case-sensitive. The default is that tag comparisons are NOT case-sensitive.
+ NUMERIC: A numeric field contains a number.
+ VECTOR: A vector field contains a vector. Two vector indexing algorithms are currently supported: HNSW (Hierarchical Navigable Small World) and FLAT (brute force). Each algorithm has a set of additional attributes, some required and others optional.
  + FLAT: The Flat algorithm provides exact answers, but has runtime proportional to the number of indexed vectors and thus may not be appropriate for large data sets.
    + DIM <number> (required): Specifies the number of dimensions in a vector.
    + TYPE FLOAT32 (required): Data type, currently only FLOAT32 is supported.
    + DIM <number> (required): Specifies the number of dimensions in a vector.
    + TYPE FLOAT32 (required): Data type, currently only FLOAT32 is supported.
    + DISTANCE_METRIC [L2 | IP | COSINE] (required): Specifies the distance algorithm.
    + INITIAL_CAP <size> (optional): Initial index size.
  + HNSW: The HNSW algorithm provides approximate answers, but operates substantially faster than FLAT.
    + DIM <number> (required): Specifies the number of dimensions in a vector. 
    + TYPE FLOAT32 (required): Data type, currently only FLOAT32 is supported.
    + DISTANCE_METRIC [L2 | IP | COSINE] (required): Specifies the distance algorithm.
    + INITIAL_CAP <size> (optional): Initial index size.
    + M <number> (optional): Number of maximum allowed outgoing edges for each node in the graph in each layer. On layer zero, the maximal number of outgoing edges is 2*M. Default is 16, the maximum is 512.
    + EF_CONSTRUCTION <number> (optional): Controls the number of vectors examined during index construction. Higher values for this parameter improve recall ratio at the expense of longer index creation times. The default value is 200. Maximum value is 4096.
    + EF_RUNTIME <number> (optional): Controls the number of vectors to be examined during a query operation. The default is 10, and the max is 4096. You can set this parameter value for each query you run. Higher values increase query times, but improve query recall.

**RESPONSE:** OK or error.
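For example, the following command creates an index over hashes whose keys begin with `doc:`, with one HNSW vector field, one tag field, and one numeric field. The index name, prefix, and field names here are hypothetical placeholders:

```
FT.CREATE myindex ON HASH PREFIX 1 doc: SCHEMA embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 1024 DISTANCE_METRIC COSINE category TAG year NUMERIC
```

The `6` after `HNSW` is the attribute count: the number of tokens that follow it (`TYPE FLOAT32 DIM 1024 DISTANCE_METRIC COSINE`).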

# FT.SEARCH
<a name="vector-search-commands-ft.search"></a>

Performs a search of the specified index. The keys that match the query expression are returned.

```
FT.SEARCH <index-name> <query>
[NOCONTENT]
[RETURN <token_count> (<field-identifier> [AS <alias>])+]
[TIMEOUT timeout] 
[PARAMS <count> <name> <value> [<name> <value>]]
[LIMIT <offset> <count>]
[DIALECT <dialect>]
```
+ <index-name> (required): The name of the index you want to query.
+ <query> (required): The query string, see below for details.
+ NOCONTENT (optional): When present, only the resulting key names are returned, no key values are included.
+ TIMEOUT <timeout> (optional): Lets you set a timeout value for the search command. This must be an integer in milliseconds.
+ PARAMS <count> <name1> <value1> <name2> <value2> ... (optional): <count> is the number of arguments, that is, twice the number of name/value pairs. Param name/value pairs can be referenced from within the query expression. For more information, see [Vector search query expression](https://docs.aws.amazon.com/memorydb/latest/devguide/vector-search-overview.html#vector-search-query-expression).
+ RETURN <count> <field1> <field2> ... (optional): <count> is the number of fields to return. Specifies the fields you want to retrieve from your documents, along with any aliases for the returned values. By default, all fields are returned unless the NOCONTENT option is set, in which case no fields are returned. If <count> is set to 0, it behaves the same as NOCONTENT.
+ LIMIT <offset> <count> (optional): Lets you choose a portion of the result. The first <offset> keys are skipped and only a maximum of <count> keys are included. The default is LIMIT 0 10, which returns at most 10 keys.
+ DIALECT <dialect> (optional): Specifies your dialect. The only supported dialect is 2.
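As an illustration, the following query returns the key names of the 5 nearest neighbors of a query vector. The index name `myindex`, the field name `embedding`, and the parameter name `query_vec` are hypothetical, and `<vector-bytes>` stands in for the binary-encoded query vector:

```
FT.SEARCH myindex "*=>[KNN 5 @embedding $query_vec]" PARAMS 2 query_vec "<vector-bytes>" NOCONTENT DIALECT 2
```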

**RESPONSE**

The command returns an array if successful, or an error.

On success, the first entry in the response array represents the count of matching keys, followed by one array entry for each matching key. Note that if the `LIMIT` option is specified it will only control the number of returned keys and will not affect the value of the first entry.

When `NOCONTENT` is specified, each entry in the response contains only the matching keyname. Otherwise, each entry includes the matching keyname, followed by an array of the returned fields. The result fields for a key consist of a set of name/value pairs. The first name/value pair is for the distance computed. The name of this pair is constructed from the vector field name prepended with "__" and appended with "_score", and the value is the computed distance. The remaining name/value pairs are the members and values of the key as controlled by the `RETURN` clause. 

The query string conforms to this syntax:

```
<filtering>=>[ KNN <K> @<vector_field_name> $<vector_parameter_name> <query-modifiers> ]
```

Where:
+ <filtering>: Is either a `*` or a filter expression. A `*` indicates no filtering and thus all vectors within the index are searched. A filter expression can be provided to designate a subset of the vectors to be searched.
+ <vector_field_name>: The name of a vector field within the specified index.
+ <K>: The number of nearest neighbor vectors to return.
+ <vector_parameter_name>: A PARAM name whose corresponding value provides the query vector for the KNN algorithm. Note that this parameter must be encoded as a 32-bit IEEE 754 binary floating point in little-endian format.
+ <query-modifiers>: (Optional) A list of keyword/value pairs that modify this particular KNN search. Currently, two keywords are supported:
  + EF_RUNTIME: This keyword is accompanied by an integer value which overrides the default value of EF_RUNTIME specified when the index was created.
  + AS: This keyword is accompanied by a string value which becomes the name of the score field in the result, overriding the default score field name generation algorithm.
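The query vector supplied through PARAMS must be the raw little-endian FLOAT32 bytes. A minimal sketch of producing that encoding in Python (the index, field, and parameter names in the comment are hypothetical):

```
import struct

def encode_vector(values):
    """Pack a list of floats as little-endian 32-bit IEEE 754 floats,
    the format FT.SEARCH expects for a KNN query parameter."""
    return struct.pack(f"<{len(values)}f", *values)

query_vec = encode_vector([0.1, 0.2, 0.3])
# Each FLOAT32 occupies 4 bytes
assert len(query_vec) == 12

# The bytes are then passed as a PARAMS value, e.g. with a Valkey client:
# client.execute_command("FT.SEARCH", "myindex", "*=>[KNN 5 @embedding $query_vec]",
#                        "PARAMS", "2", "query_vec", query_vec, "DIALECT", "2")
```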

**Filter expression**

A filter expression is constructed as a logical combination of Tag and Numeric search operators contained within parentheses.

**Tag**

The tag search operator is specified with one or more strings separated by the `|` character. A key will satisfy the tag search operator if the indicated field contains any one of the specified strings.

```
@<field_name>:{<tag>}
or
@<field_name>:{<tag1> | <tag2>}
or
@<field_name>:{<tag1> | <tag2> | ...}
```

For example, the following query will return documents with blue OR black OR green color.

`@color:{blue | black | green}`

As another example, the following query will return documents containing "hello world" or "hello universe".

`@description:{hello world | hello universe}`

**Numeric range**

The numeric range operator allows filtering queries to only return values that are between a given start and end value. Both inclusive and exclusive range queries are supported. For simple relational comparisons, +inf and -inf can be used with a range query. The syntax for a range search operator is:

```
@<field_name>:[ [(] <bound> [(] <bound> ]
```

...where <bound> is either a number or +inf or -inf. Bounds without a leading open paren are inclusive, whereas bounds with the leading open paren are exclusive.

Use the following table as a guide for mapping mathematical expressions to filtering queries:

```
min <= field <= max         @field:[min max]
min < field <= max          @field:[(min max]
min <= field < max          @field:[min (max]
min < field < max           @field:[(min (max]
field >= min                @field:[min +inf]
field > min                 @field:[(min +inf]
field <= max                @field:[-inf max]
field < max                 @field:[-inf (max]
field == val                @field:[val val]
```

**Logical operators**

Multiple tags and numeric search operators can be used to construct complex queries using logical operators.

**Logical AND**

To set a logical AND, use a space between the predicates. For example:

`query1 query2 query3`

**Logical OR**

To set a logical OR, use a space between the predicates. For example:

`query1 | query2 | query3`

**Logical negation**

Any query can be negated by prepending the `-` character before each query. Negative queries return all entries that don't match the query. This also includes keys that don't have the field.

For example, a negative query on @genre:{comedy} will return all books that are not comedy AND all books that don't have a genre field.

The following query will return all books with "comedy" genre that are not published between 2015 and 2024, or that have no year field: `@genre:{comedy} -@year:[2015 2024]`

**Operator precedence**

Typical operator precedence rules apply, i.e., logical NEGATE is the highest priority, followed by logical AND then logical OR with the lowest priority. Parentheses can be used to override the default precedence rules.

*Examples of combining logical operators*

Logical operators can be combined to form complex filter expressions.

The following query will return all books with "comedy" or "horror" genre (AND) published between 2015 and 2024: `@genre:{comedy|horror} @year:[2015 2024]`

The following query will return all books with "comedy" or "horror" genre (OR) published between 2015 and 2024: `@genre:{comedy|horror} | @year:[2015 2024]`

The following query will return all books that either don't have a genre field, or have a genre field not equal to "comedy", that are published between 2015 and 2024: `-@genre:{comedy} @year:[2015 2024]`

# FT.DROPINDEX
<a name="vector-search-commands-ft.dropindex"></a>

**Syntax**

```
FT.DROPINDEX <index-name>
```

The specified index is deleted. Returns OK on success, or an error if the index doesn't exist.
+ <index-name> (required): The name of the index to delete.

# FT.INFO
<a name="vector-search-commands-ft.info"></a>

**Syntax**

```
FT.INFO <index-name>
```

Vector search augments the [FT.INFO](https://valkey.io/commands/info/) command with several additional sections of statistics and counters. A request to retrieve the section SEARCH will retrieve all of the following statistics:


| Key name | Value type | Description | 
| --- | --- | --- | 
| index_name | string | Name of the index | 
| index_options | string | Reserved. Currently set to "0" | 
| index_definition | array | See below for definition of these array elements. | 
| attributes | array of attribute information | One element in this array for each defined attribute, see below for attribute information definition. | 
| num_docs | integer | Number of keys currently contained in the index | 
| num_terms | integer | Reserved. Currently set to "0". | 
| record_count | integer | The sum of the "size" field for each attribute. | 
| hash_indexing_failures | integer | Number of times that an attribute couldn't be converted to the declared attribute type. Despite the name, this also applies to JSON keys. | 
| backfill_in_progress | integer | If a backfill is currently in progress this will be a '1', otherwise it will be a '0' | 
| backfill_percent_complete | float | Estimate of backfill completion, a fractional number in the range [0..1] | 
| mutation_queue_size | integer | Number of keys waiting to update the index. | 
| recent_mutations_queue_delay | integer | Estimate of delay (in seconds) of index update. 0 if no updates are in progress. | 
| state | string | Backfill state: "ready" indicates that backfill completed successfully. "backfill_in_progress" indicates that backfill is proceeding. "backfill_paused_by_oom" means that backfilling has been paused due to a low memory condition. Once the low memory condition is resolved, backfill will continue. | 

The index_definition structure is an array of key/value pairs defined as:


| Key name | Value type | Description | 
| --- | --- | --- | 
| key_type | string | Either the string 'JSON' or the string 'HASH' | 
| prefixes | array | Each element in the array is a defined prefix for the index. If no prefixes were specified when the index was created, this array will have 0 entries. | 
| default_score | string | Reserved. Currently set to "1" | 

Attribute information: Attribute information is type-specific.

Numeric attributes:


| Key | Value type | Description | 
| --- | --- | --- | 
| identifier | string | Location of the attribute within a key. Hash member name or JSON path | 
| alias | string | Name of attribute used in query descriptions. | 
| type | string | The string "NUMERIC" | 
| size | integer | The number of keys with valid numeric values in this attribute. | 

Tag attributes:


| Key name | Value type | Description | 
| --- | --- | --- | 
| identifier | string | Location of the attribute within a key. Hash member name or JSON path | 
| alias | string | Name of attribute used in query descriptions. | 
| type | string | The string "TAG" | 
| SEPARATOR | character | The separator character defined when the index was created | 
| CASESENSITIVE | n/a | This key has no associated value. It is present only if the attribute was created with this option. | 
| size | integer | The number of keys with valid tag values in this attribute | 

Vector attributes:


| Key name | Value type | Description | 
| --- | --- | --- | 
| identifier | string | Location of the attribute within a key. Hash member name or JSON path | 
| alias | string | Name of attribute used in query descriptions. | 
| type | string | The string "VECTOR" | 
| index | array | For further description of the vector index, see below. | 

Vector index description:


| Key name | Value type | Description | 
| --- | --- | --- | 
| capacity | string | Current capacity of index | 
| dimensions | string | Number of elements in each vector | 
| distance_metric | string | One of "COSINE", "L2" or "IP" | 
| size | array  | Description of vector index, see below. | 
| data\$1type | string | Declared datatype. Only "FLOAT32" is currently supported. | 
| algorithm | array  | Further description of the vector search algorithm. | 

FLAT Vector search algorithm Description:


| Key name | Value type | Description | 
| --- | --- | --- | 
| name | string | Algorithm name: FLAT | 
| block_size | number | Size of a block of the FLAT index. | 

HNSW Vector Index Description:


| Key name | Value type | Description | 
| --- | --- | --- | 
| name | string | Algorithm name: HNSW | 
| m | number | The "M" parameter for HNSW | 
| ef_construction | number | The "ef_construction" parameter for HNSW | 
| ef_runtime | number | The "ef_runtime" parameter for HNSW. | 

# FT._LIST
<a name="vector-search-commands-ft.list"></a>

List all indexes.

**Syntax**

```
FT._LIST 
```

Returns an array of strings that are the names of the currently defined indexes.

# Using Amazon ElastiCache for Valkey for semantic caching
<a name="semantic-caching"></a>

Large language models (LLMs) are the foundation for generative AI and agentic AI applications that power use cases from chatbots and search assistants to code generation tools and recommendation engines. As the use of AI applications in production grows, customers seek ways to optimize cost and performance. Most AI applications invoke the LLM for every user query, even when queries are repeated or semantically similar. Semantic caching is a method to reduce cost and latency in generative AI applications by reusing responses for identical or semantically similar requests using vector embeddings.

This topic explains how to implement a semantic cache using vector search on Amazon ElastiCache for Valkey, including the concepts, architecture, implementation, benchmarks, and best practices.

**Topics**
+ [Overview of semantic caching](semantic-caching-overview.md)
+ [Why ElastiCache for Valkey for semantic caching](semantic-caching-why-elasticache.md)
+ [Solution architecture](semantic-caching-architecture.md)
+ [Prerequisites](semantic-caching-prerequisites.md)
+ [Implementing a semantic cache with ElastiCache for Valkey](semantic-caching-implementation.md)
+ [Impact and benchmarks](semantic-caching-benchmarks.md)
+ [Multi-turn conversation caching](semantic-caching-multi-turn.md)
+ [Best practices](semantic-caching-best-practices.md)
+ [Related resources](semantic-caching-related-resources.md)

# Overview of semantic caching
<a name="semantic-caching-overview"></a>

Unlike traditional caches that rely on exact string matches, a semantic cache retrieves data based on semantic similarity. A semantic cache uses vector embeddings produced by models like Amazon Titan Text Embeddings to capture semantic meaning in a high-dimensional vector space.

In generative AI applications, a semantic cache stores vector representations of queries and their corresponding responses. The system compares the vector embedding of each new query against cached vectors of prior queries to determine if a similar query has been answered before. If the cache contains a similar query above a configured similarity threshold, the system returns the previously generated response instead of invoking the LLM. Otherwise, the system invokes the LLM to generate a response and caches the query embedding and response together for future reuse.
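The similarity comparison at the core of this flow is typically cosine similarity between embedding vectors. The following is a minimal sketch with illustrative three-dimensional vectors and threshold, not tied to any specific embedding model:

```
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.8  # configured similarity threshold

cached_embedding = [0.1, 0.7, 0.7]    # embedding of a cached query
new_embedding = [0.12, 0.69, 0.71]    # embedding of an incoming query

# A score at or above the threshold means the cached response can be reused
if cosine_similarity(cached_embedding, new_embedding) >= THRESHOLD:
    print("cache hit")
```

Real embeddings have hundreds or thousands of dimensions, and the index performs this comparison for you; the sketch only illustrates the thresholding decision.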

## Why semantic, not exact match?
<a name="semantic-caching-why-semantic"></a>

Consider an IT help chatbot where thousands of users ask the same question. The following queries are different strings but carry the same meaning:
+ "How do I install the VPN app on my laptop?"
+ "Can you guide me through setting up the company VPN?"
+ "Steps to get VPN working on my computer"

An exact-match cache treats each query as unique and invokes the LLM three times. A semantic cache recognizes these queries as semantically equivalent and returns the cached response for all three, invoking the LLM only once.

## Key benefits
<a name="semantic-caching-benefits"></a>

Semantic caching provides the following benefits for generative AI and agentic AI applications:
+ **Reduced costs** – Reusing answers for similar questions reduces the number of LLM calls and overall inference spend. In benchmarks, semantic caching reduced LLM inference cost by up to 86%.
+ **Lower latency** – Serving answers from the cache provides faster responses than running LLM inference. Cache hits return responses in milliseconds rather than seconds, achieving up to 88% latency reduction.
+ **Improved scalability** – Reducing LLM calls for similar or repeated queries enables you to serve more requests within the same model throughput limits without increasing capacity.
+ **Improved consistency** – Using the same cached response for semantically similar requests helps deliver a consistent answer for the same underlying question.

## Where semantic caching is effective
<a name="semantic-caching-effective-use-cases"></a>

Semantic caching is particularly effective for the following types of applications:


| Application type | Description | Example | 
| --- | --- | --- | 
| RAG-based assistants and copilots | Many queries are duplicate requests from different users against a shared knowledge base | IT help chatbot, product FAQ bot, documentation assistant | 
| Agentic AI applications | Agents break tasks into multiple small steps that may repeatedly look up similar information | Compliance agent reusing policy lookups, research agent reusing prior findings | 
| Multimodal applications | Matching similar audio segments, images, or video queries | Automated phone systems reusing guidance for repeated requests like store hours | 

# Why ElastiCache for Valkey for semantic caching
<a name="semantic-caching-why-elasticache"></a>

Semantic caching workloads continuously write, search, and evict cache entries to serve the stream of incoming user queries while keeping responses fresh. The cache store must meet the following requirements:
+ **Real-time vector updates** – New queries and responses must be immediately available in the cache to maintain hit rates.
+ **Low-latency lookups** – The cache sits in the online request path of every query, so lookups must not add perceptible delay to end-user response time.
+ **Efficient ephemeral management** – Entries are frequently written, read, and evicted, requiring efficient management of a hot set.

ElastiCache for Valkey meets these requirements:
+ **Lowest latency vector search** – At the time of writing, ElastiCache for Valkey delivers the lowest latency vector search with the highest throughput and best price-performance at a 95%+ recall rate among popular vector databases on AWS. Latency is as low as microseconds with up to 99% recall.
+ **Multithreaded architecture** – Vector search on ElastiCache uses a multithreaded architecture that supports real-time vector updates and high write throughput while maintaining low latency for search requests.
+ **Built-in cache features** – TTL (time to live), eviction policies (`allkeys-lru`), and atomic operations help manage the ephemeral hot set of entries that semantic caching creates.
+ **Vector index support** – ElastiCache supports both HNSW (Hierarchical Navigable Small World) and FLAT index algorithms with COSINE, Euclidean, and inner product distance metrics.
+ **Zero-downtime scalability** – ElastiCache supports scaling without downtime, allowing you to adjust capacity as your cache grows.
+ **Framework integration** – ElastiCache for Valkey integrates with Amazon Bedrock AgentCore through the LangGraph framework, enabling you to implement a Valkey-backed semantic cache for agents built on Amazon Bedrock.

# Solution architecture
<a name="semantic-caching-architecture"></a>

The following architecture implements a read-through semantic cache for an agent on Amazon Bedrock AgentCore. A request follows one of two paths:
+ **Cache hit** – If ElastiCache finds a prior query above the configured similarity threshold, AgentCore returns the cached answer immediately. This path invokes only the embedding model and does not require LLM inference. This path has millisecond-level end-to-end latency and does not incur LLM inference cost.
+ **Cache miss** – If no similar prior query is found, AgentCore invokes the LLM to generate a new answer and returns it to the user. The application then caches the prompt's embedding and answer in ElastiCache so that future similar prompts can be served from the cache.

# Prerequisites
<a name="semantic-caching-prerequisites"></a>

To implement semantic caching with ElastiCache for Valkey, you need:

1. An AWS account with access to Amazon Bedrock, including Amazon Bedrock AgentCore Runtime, Amazon Titan Text Embeddings v2 model, and an LLM such as Amazon Nova Premier enabled in the US East (N. Virginia) Region.

1. The AWS Command Line Interface (AWS CLI) configured, and Python 3.11 or later installed.

1. An Amazon Elastic Compute Cloud (Amazon EC2) instance inside your Amazon VPC with the following packages installed:

   ```
   pip install numpy pandas valkey bedrock-agentcore \
               langchain-aws 'langgraph-checkpoint-aws[valkey]'
   ```

1. An ElastiCache for Valkey cluster running version 8.2 or later, which supports vector search. For instructions on creating a cluster, see [Creating a cluster for Valkey or Redis OSS](Clusters.Create.md).

# Implementing a semantic cache with ElastiCache for Valkey
<a name="semantic-caching-implementation"></a>

The following walkthrough shows how to implement a read-through semantic cache using ElastiCache for Valkey with Amazon Bedrock.

## Step 1: Create an ElastiCache for Valkey cluster
<a name="semantic-caching-step1"></a>

Create an ElastiCache for Valkey cluster with version 8.2 or later using the AWS CLI:

```
aws elasticache create-replication-group \
  --replication-group-id "valkey-semantic-cache" \
  --cache-node-type cache.r7g.large \
  --engine valkey \
  --engine-version 8.2 \
  --num-node-groups 1 \
  --replicas-per-node-group 1
```

## Step 2: Connect to the cluster and configure embeddings
<a name="semantic-caching-step2"></a>

From your application code running on your Amazon EC2 instance, connect to the ElastiCache cluster and set up the embedding model:

```
from valkey.cluster import ValkeyCluster
from langchain_aws import BedrockEmbeddings

# Connect to ElastiCache for Valkey
valkey_client = ValkeyCluster(
    host="mycluster.xxxxxx.clustercfg.use1.cache.amazonaws.com",  # Your cluster endpoint
    port=6379,
    decode_responses=False
)

# Set up Amazon Bedrock Titan embeddings
embeddings = BedrockEmbeddings(
    model_id="amazon.titan-embed-text-v2:0",
    region_name="us-east-1"
)
```

Replace the host value with your ElastiCache cluster's configuration endpoint. For instructions on finding your cluster endpoint, see [Accessing your ElastiCache cluster](accessing-elasticache.md).

## Step 3: Create the vector index for the semantic cache
<a name="semantic-caching-step3"></a>

Configure a ValkeyStore that automatically embeds queries using an HNSW index with COSINE distance for vector search:

```
from langgraph_checkpoint_aws import ValkeyStore
from hashlib import md5

store = ValkeyStore(
    client=valkey_client,
    index={
        "collection_name": "semantic_cache",
        "embed": embeddings,
        "fields": ["query"],           # Fields to vectorize
        "index_type": "HNSW",          # Vector search algorithm
        "distance_metric": "COSINE",   # Similarity metric
        "dims": 1024                   # Titan V2 produces 1024-d vectors
    }
)
store.setup()

def cache_key_for_query(query: str):
    """Generate a deterministic cache key for a query."""
    return md5(query.encode("utf-8")).hexdigest()
```
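The key derivation is deterministic, so the same query string always maps to the same cache entry. For example:

```
from hashlib import md5

def cache_key_for_query(query: str):
    """Generate a deterministic cache key for a query."""
    return md5(query.encode("utf-8")).hexdigest()

# Identical queries always produce the identical key
assert cache_key_for_query("hello") == cache_key_for_query("hello")
print(cache_key_for_query("hello"))  # 5d41402abc4b2a76b9719d911017c592
```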

**Note**  
ElastiCache for Valkey uses an index to provide fast and accurate vector search. The `FT.CREATE` command creates the underlying index. For more information, see [Vector search for ElastiCache](vector-search.md).

## Step 4: Implement cache search and update functions
<a name="semantic-caching-step4"></a>

Create functions to search the cache for semantically similar queries and to store new query-response pairs:

```
def search_cache(user_message: str, k: int = 3, min_similarity: float = 0.8):
    """Look up a semantically similar cached response from ElastiCache."""
    hits = store.search(
        namespace="semantic-cache",
        query=user_message,
        limit=k
    )
    if not hits:
        return None

    # Sort by similarity score (highest first)
    hits = sorted(hits, key=lambda h: h["score"], reverse=True)
    top_hit = hits[0]
    score = top_hit["score"]

    if score < min_similarity:
        return None  # Below similarity threshold

    return top_hit["value"]["answer"]  # Return cached answer


def store_cache(user_message: str, result_message: str):
    """Store a new query-response pair in the semantic cache."""
    key = cache_key_for_query(user_message)
    store.put(
        namespace="semantic-cache",
        key=key,
        value={
            "query": user_message,
            "answer": result_message
        }
    )
```

## Step 5: Implement the read-through cache pattern
<a name="semantic-caching-step5"></a>

Integrate the cache into your application's request handling:

```
import time

def handle_query(user_message: str) -> dict:
    """Handle a user query with read-through semantic cache."""
    start = time.time()

    # Step 1: Search the semantic cache
    cached_response = search_cache(user_message, min_similarity=0.8)

    if cached_response:
        # Cache hit - return cached response
        elapsed = (time.time() - start) * 1000
        return {
            "response": cached_response,
            "source": "cache",
            "latency_ms": round(elapsed, 1),
        }

    # Step 2: Cache miss - invoke LLM
    llm_response = invoke_llm(user_message)  # Your LLM invocation function

    # Step 3: Store the response in cache for future reuse
    store_cache(user_message, llm_response)

    elapsed = (time.time() - start) * 1000
    return {
        "response": llm_response,
        "source": "llm",
        "latency_ms": round(elapsed, 1),
    }
```

## Underlying Valkey commands
<a name="semantic-caching-valkey-commands"></a>

The following table shows the Valkey commands used to implement the semantic cache:


| Operation | Valkey command | Typical latency | 
| --- | --- | --- | 
| Create index | FT.CREATE semantic_cache SCHEMA query TEXT answer TEXT embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 1024 DISTANCE_METRIC COSINE | One-time setup | 
| Cache lookup | FT.SEARCH semantic_cache "*=>[KNN 3 @embedding $query_vec]" PARAMS 2 query_vec [bytes] DIALECT 2 | Microseconds | 
| Store response | HSET cache:<hash> query "..." answer "..." embedding [bytes] | Microseconds | 
| Set TTL | EXPIRE cache:<hash> 82800 | Microseconds | 
| LLM inference (miss) | External API call to Amazon Bedrock | 500–6000 ms | 

# Impact and benchmarks
<a name="semantic-caching-benchmarks"></a>

AWS evaluated the approach on 63,796 real user chatbot queries and their paraphrased variants from the public SemBenchmarkLmArena dataset. This dataset captures user interactions with the Chatbot Arena platform across general assistant use cases such as question answering, writing, and analysis.

The evaluation used the following configuration:
+ ElastiCache `cache.r7g.large` instance as the semantic cache store
+ Amazon Titan Text Embeddings V2 for embeddings
+ Claude 3 Haiku for LLM inference

The cache was started empty, and all 63,796 queries were streamed as random incoming user traffic, simulating real-world application traffic.

## Cost and accuracy at different similarity thresholds
<a name="semantic-caching-cost-accuracy"></a>

The following table summarizes the trade-off between cost reduction, latency improvement, and accuracy across different similarity thresholds:


| Similarity threshold | Cache hit ratio | Accuracy of cached responses | Total daily cost | Cost savings | Average latency (s) | Latency reduction | 
| --- | --- | --- | --- | --- | --- | --- | 
| Baseline (no cache) | – | – | $49.50 | – | 4.35 | – | 
| 0.99 (very strict) | 23.5% | 92.1% | $41.70 | 15.8% | 3.60 | 17.1% | 
| 0.95 (strict) | 56.0% | 92.6% | $23.80 | 51.9% | 1.84 | 57.7% | 
| 0.90 (moderate) | 74.5% | 92.3% | $13.60 | 72.5% | 1.21 | 72.2% | 
| 0.80 (balanced) | 87.6% | 91.8% | $7.60 | 84.6% | 0.60 | 86.1% | 
| 0.75 (relaxed) | 90.3% | 91.2% | $6.80 | 86.3% | 0.51 | 88.3% | 
| 0.50 (very relaxed) | 94.3% | 87.5% | $5.90 | 88.0% | 0.46 | 89.3% | 

At a similarity threshold of 0.75, semantic caching reduced LLM inference cost by up to 86% while maintaining 91% answer accuracy. The choice of LLM, embedding model, and backing store affects both cost and latency. Semantic caching delivers proportionally larger benefits when used with bigger, higher-cost LLMs.
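The cost-savings column follows directly from the daily costs: savings are the fractional reduction relative to the no-cache baseline. A minimal sketch of the arithmetic, using illustrative numbers rather than benchmark figures:

```
def cost_savings(baseline_cost: float, cost_with_cache: float) -> float:
    """Percentage cost reduction from semantic caching."""
    return (1 - cost_with_cache / baseline_cost) * 100

# Illustrative: $100/day without a cache, $48/day with a cache
savings = cost_savings(100.0, 48.0)
print(f"{savings:.1f}% savings")  # 52.0% savings
```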

## Individual query latency improvements
<a name="semantic-caching-latency-improvements"></a>

The following table shows the impact on individual query latency. A cache hit reduced latency by up to 59x, from multiple seconds to a few hundred milliseconds:


| Query intent | Cache miss latency | Cache hit latency | Reduction | 
| --- | --- | --- | --- | 
| "Are there instances where SI prefixes deviate from denoting powers of 10, excluding their application?" → paraphrased variant | 6.51 s | 0.11 s | 59x | 
| "Sally is a girl with 3 brothers, and each of her brothers has 2 sisters. How many sisters are there in Sally's family?" → paraphrased variant | 1.64 s | 0.13 s | 12x | 

# Multi-turn conversation caching
<a name="semantic-caching-multi-turn"></a>

For applications with multi-turn conversations, the same user message can mean different things depending on context. For example, "Tell me more" in a conversation about Valkey means something different from "Tell me more" in a conversation about Python.

## The challenge
<a name="semantic-caching-multi-turn-challenge"></a>

Single-prompt caching works well for stateless queries. In multi-turn conversations, you must cache the full conversation context, not just the last message:

```
# "Tell me more" means nothing without context
# Conversation A: "What is Valkey?" -> "Tell me more"  (about Valkey)
# Conversation B: "What is Python?" -> "Tell me more"  (about Python)
```

## Strategy: context-aware cache keys
<a name="semantic-caching-context-aware-keys"></a>

Instead of embedding only the last user message, embed a summary of the full conversation context. This way, similar follow-up questions in similar conversation flows can reuse cached answers.

```
def build_context_string(messages: list) -> str:
    """Build a cacheable context string from conversation messages."""
    # Use last 3 turns (6 messages: user + assistant pairs)
    recent = messages[-6:]
    parts = []
    for msg in recent:
        role = msg["role"]
        content = msg["content"][:200]  # Truncate long messages
        parts.append(f"{role}: {content}")
    return " | ".join(parts)
```

## Per-user cache isolation with TAG filters
<a name="semantic-caching-tag-filters"></a>

Use TAG fields to isolate cached conversations by user, session, or other dimensions. This prevents one user's cached conversations from being returned for another user:

```
# Create index with TAG field for per-user isolation
valkey_client.execute_command(
    "FT.CREATE", "conv_cache_idx",
    "SCHEMA",
    "context_summary", "TEXT",
    "response", "TEXT",
    "user_id", "TAG",
    "turn_count", "NUMERIC",
    "embedding", "VECTOR", "HNSW", "6",
    "TYPE", "FLOAT32",
    "DIM", "1024",
    "DISTANCE_METRIC", "COSINE",
)
```

Search with hybrid filtering (TAG + KNN):

```
def lookup_conversation_cache(messages: list, user_id: str, threshold: float = 0.12):
    """Search cache for similar conversation contexts, scoped to a user.

    Note: FT.SEARCH with COSINE distance returns a distance score where
    0 = identical and 2 = opposite. A lower score means higher similarity.
    The threshold here is a maximum distance: only return results closer
    than this value.
    """
    context = build_context_string(messages)
    query_vec = get_embedding(context)

    # Hybrid search: filter by user_id TAG + KNN on context embedding
    results = valkey_client.execute_command(
        "FT.SEARCH", "conv_cache_idx",
        f"@user_id:{{{user_id}}}=>[KNN 1 @embedding $query_vec]",
        "PARAMS", "2", "query_vec", query_vec,
        "DIALECT", "2",
    )

    if results[0] > 0:
        fields = results[2]
        field_dict = {fields[j]: fields[j+1] for j in range(0, len(fields), 2)}
        distance = float(field_dict.get("__embedding_score", "999"))
        if distance < threshold:  # Lower distance = more similar
            return {"hit": True, "response": field_dict.get("response", ""), "distance": distance}

    return {"hit": False}
```

**Note**  
The `@user_id:{user_123}` TAG filter ensures that User A's cached conversations don't leak to User B. The hybrid query (TAG + KNN) runs as a single atomic operation — pre-filtering by user, then finding the nearest conversation context.

## Cache isolation strategies
<a name="semantic-caching-isolation-strategies"></a>


| Strategy | TAG filter | Best for | 
| --- | --- | --- | 
| Per-user | `@user_id:{user_123}` | Personalized assistants | 
| Per-session | `@session_id:{sess_abc}` | Short-lived chats | 
| Global (shared) | No filter (`*`) | FAQ bots, common queries | 
| Per-model | `@model:{gpt-4}` | Multi-model deployments | 
| Per-product | `@product_id:{prod_456}` | E-commerce assistants | 
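
As a sketch, the filters in the preceding table can be produced by a small helper. The field names follow the table and are illustrative; use whatever TAG fields your index actually defines:

```
def isolation_filter(strategy: str, value: str = "") -> str:
    """Return the TAG pre-filter for one of the isolation strategies above."""
    filters = {
        "per-user": f"@user_id:{{{value}}}",
        "per-session": f"@session_id:{{{value}}}",
        "global": "*",  # no filter: cache entries are shared by everyone
        "per-model": f"@model:{{{value}}}",
        "per-product": f"@product_id:{{{value}}}",
    }
    return filters[strategy]

def hybrid_query(strategy: str, value: str = "", k: int = 1) -> str:
    """Combine the TAG pre-filter with a KNN clause into one hybrid query string."""
    return f"{isolation_filter(strategy, value)}=>[KNN {k} @embedding $query_vec]"
```

For example, `hybrid_query("per-user", "user_123")` produces the same query string used in `lookup_conversation_cache` earlier.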

# Best practices
<a name="semantic-caching-best-practices"></a>

## Choosing data that can be cached
<a name="semantic-caching-bp-choosing-data"></a>

Semantic caching is well suited for repeated queries whose responses are relatively stable, whereas real-time or highly dynamic responses are often poor candidates for caching.

Use tag and numeric filters derived from existing application context (such as product ID, category, region, or user segment) to decide which queries and responses are eligible for caching and to improve the relevance of cache hits.
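
A minimal eligibility check might gate caching on query intent, assuming your application already classifies queries. The intent names here are hypothetical:

```
# Hypothetical intent labels; real-time or highly dynamic intents are poor
# caching candidates, so they are excluded from the cache entirely
NON_CACHEABLE_INTENTS = {"price_check", "inventory_lookup", "order_status"}

def is_cacheable(intent: str) -> bool:
    """Return True when a query's response is stable enough to cache."""
    return intent not in NON_CACHEABLE_INTENTS
```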

## Similarity threshold tuning
<a name="semantic-caching-bp-threshold"></a>

The similarity threshold controls the trade-off between cache hit rate and answer quality. Choose a threshold that balances cost savings with accuracy for your use case:


| Threshold | Hit rate | Quality risk | Best for | 
| --- | --- | --- | --- | 
| 0.95 (strict) | Low (~25%) | Very low | Medical, legal, financial applications | 
| 0.90 (moderate) | Medium (~55%) | Low | General chatbots | 
| 0.80 (balanced) | High (~75%) | Low–Medium | FAQ bots, IT support | 
| 0.75 (relaxed) | Very high (~90%) | Medium | High-volume repetitive queries | 

**Important**  
Start with a higher threshold (0.90–0.95) and gradually lower it while monitoring accuracy. Use A/B testing to find the optimal balance for your workload.
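
Note that the similarity thresholds in this table are the complement of the cosine distance that `FT.SEARCH` returns (for cosine scores, distance = 1 − similarity), so a threshold sweep can be sketched as:

```
def similarity_to_distance(similarity: float) -> float:
    """Convert a similarity threshold to the maximum cosine distance to accept."""
    return 1.0 - similarity

# Start strict and loosen gradually while monitoring accuracy
for sim in (0.95, 0.90, 0.80, 0.75):
    max_distance = similarity_to_distance(sim)  # ≈ 0.05, 0.10, 0.20, 0.25
```

A strict 0.95 similarity threshold therefore corresponds to accepting only results within a 0.05 distance.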

## Standalone queries versus conversations
<a name="semantic-caching-bp-standalone-vs-conversations"></a>
+ **For standalone queries** – Apply semantic caching directly on the user query text.
+ **For multi-turn conversations** – First use your conversation memory to retrieve the key facts and recent messages needed to answer the current turn. Then apply semantic caching to the combination of the current user message and the retrieved context, instead of embedding the entire raw dialogue.

## Setting cache invalidation periods
<a name="semantic-caching-bp-ttl"></a>

Use TTL to control how long cached responses are served before they are regenerated on a cache miss.


| Data type | Recommended TTL | Rationale | 
| --- | --- | --- | 
| Static facts (documentation, policies) | 24 hours | Facts change infrequently | 
| Product information | 12–24 hours | Updated daily in most catalogs | 
| General assistant responses | 1–4 hours | Balance freshness with hit rate | 
| Real-time data (prices, inventory) | 5–15 minutes | Data changes frequently | 
| Conversation context | 30 minutes | Session-scoped, short-lived | 

```
# Set TTL with random jitter to spread out cache invalidations
import random

base_ttl = 82800  # ~23 hours
jitter = random.randint(0, 3600)  # Up to 1 hour of jitter
valkey_client.expire(cache_key, base_ttl + jitter)
```

**Tip**  
Set TTLs that match your application use case and how often your data or model outputs change. Longer TTLs increase cache hit rates but raise the risk of outdated answers. Shorter TTLs keep responses fresher but lower cache hit rates and require more LLM inference.

## Monitoring and cost tracking
<a name="semantic-caching-bp-monitoring"></a>

Track cache performance metrics to optimize your semantic cache over time:

```
def record_cache_event(valkey_client, event_type: str):
    """Track cache hits and misses using atomic counters."""
    valkey_client.incr(f"cache:metrics:{event_type}")

    # Also track hourly for time-series analysis
    from datetime import datetime
    hour_key = datetime.now().strftime("%Y%m%d%H")
    counter_key = f"cache:metrics:{event_type}:{hour_key}"
    valkey_client.incr(counter_key)
    valkey_client.expire(counter_key, 86400 * 7)  # Keep 7 days

def get_cache_stats(valkey_client) -> dict:
    """Get current cache performance metrics."""
    hits = int(valkey_client.get("cache:metrics:hit") or 0)
    misses = int(valkey_client.get("cache:metrics:miss") or 0)
    total = hits + misses
    hit_rate = hits / total if total > 0 else 0

    avg_cost_per_call = 0.015  # Example: ~$0.015 per LLM call
    savings = hits * avg_cost_per_call

    return {
        "total_requests": total,
        "hits": hits,
        "misses": misses,
        "hit_rate": round(hit_rate, 3),
        "estimated_savings_usd": round(savings, 2),
    }
```

## Memory management
<a name="semantic-caching-bp-memory"></a>
+ **Set maxmemory policy** – Configure `maxmemory-policy allkeys-lru` on your ElastiCache cluster to automatically evict least-recently-used cache entries when the cluster reaches its memory limit.
+ **Plan for capacity** – Each cache entry typically requires approximately 4–6 KB (embedding dimensions × 4 bytes + query text + response text). A 1 GB ElastiCache instance can store approximately 170,000 cached entries.
+ **Use cache invalidation for stale data** – When underlying data changes, use text search to find and invalidate related cache entries:

  ```
  def invalidate_by_topic(valkey_client, topic_keyword: str):
      """Remove cached entries matching a topic after a data update."""
      results = valkey_client.execute_command(
          "FT.SEARCH", "semantic_cache",
          f"@query:{topic_keyword}",
          "NOCONTENT",  # Only return keys, not fields
      )
  
      if results[0] > 0:
          keys = results[1:]
          for key in keys:
              valkey_client.delete(key)
          print(f"Invalidated {len(keys)} cached entries for '{topic_keyword}'")
  ```

# Related resources
<a name="semantic-caching-related-resources"></a>
+ [Vector search for ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/vector-search.html)
+ [Common ElastiCache use cases](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/elasticache-use-cases.html)
+ [Lower cost and latency for AI using Amazon ElastiCache as a semantic cache with Amazon Bedrock](https://aws.amazon.com/blogs/database/lower-cost-and-latency-for-ai-using-amazon-elasticache-as-a-semantic-cache-with-amazon-bedrock/) (AWS Database Blog)
+ [Announcing vector search for Amazon ElastiCache](https://aws.amazon.com/blogs/database/announcing-vector-search-for-amazon-elasticache/) (AWS Database Blog)
+ [Amazon Bedrock AgentCore Runtime](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agents-tools-runtime.html)
+ [LangGraph checkpoint for Valkey](https://pypi.org/project/langgraph-checkpoint-aws/)
+ [Valkey client libraries](https://valkey.io/clients/)

# Using Amazon ElastiCache for Valkey for agentic memory
<a name="agentic-memory"></a>

Agentic AI applications use external tools, APIs, and multi-step reasoning to complete complex tasks. However, by default, agents don't retain memory between conversations, which limits their ability to provide personalized responses or maintain context across sessions. Amazon ElastiCache for Valkey provides the high-performance, low-latency infrastructure that agentic memory systems require to store, retrieve, and manage persistent memory for AI agents.

This topic explains how to use ElastiCache for Valkey as the storage layer for agentic memory, covering the concepts, architecture, implementation, and best practices for building memory-enabled AI agents.

**Topics**
+ [Overview of agentic memory](agentic-memory-overview.md)
+ [Types of agentic memory](agentic-memory-types.md)
+ [Why ElastiCache for Valkey for agentic memory](agentic-memory-why-elasticache.md)
+ [Solution architecture](agentic-memory-architecture.md)
+ [Prerequisites](agentic-memory-prerequisites.md)
+ [Setting up ElastiCache for Valkey as a vector store for agentic memory](agentic-memory-setup.md)
+ [Performance benefits](agentic-memory-performance.md)
+ [Best practices](agentic-memory-best-practices.md)
+ [Related resources](agentic-memory-related-resources.md)

# Overview of agentic memory
<a name="agentic-memory-overview"></a>

An agentic AI application is a system that takes actions and makes decisions based on input. These agents use external tools, APIs, and multi-step reasoning to complete complex tasks. Without persistent memory, agents forget everything between conversations, making it impossible to deliver personalized experiences or complete multi-step tasks effectively.

Agentic memory handles the persistence, encoding, storage, retrieval, and summarization of knowledge gained through user interactions. This memory system is a critical part of the context management component of an agentic AI application, enabling agents to learn from past conversations and apply that knowledge to future interactions.

Consider the following examples where agentic memory provides value:
+ **Customer support agents** – An agent remembers a customer's previous issues, preferences, and account details across support sessions, avoiding repetitive information gathering and delivering faster resolutions.
+ **Research agents** – An agent that researches GitHub repositories remembers previously discovered project metrics, avoiding redundant web searches and reducing token usage and response time.
+ **Personal assistant agents** – An agent retains a user's scheduling preferences, communication style, and recurring tasks to provide increasingly personalized assistance over time.

# Types of agentic memory
<a name="agentic-memory-types"></a>

## Short-term memory
<a name="agentic-memory-short-term"></a>

Short-term memory maintains context within a single session. It tracks the current conversation flow, recent interactions, and intermediate reasoning steps. Short-term memory is essential for multi-turn conversations where the agent needs to reference earlier parts of the dialogue.

ElastiCache for Valkey supports short-term memory through data structures such as lists (for ordered chat history), hashes (for session metadata), and strings (for tool result caching with TTL-based expiration).
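
A minimal sketch of these short-term structures, assuming a connected valkey-py or redis-py client and a hypothetical key-naming scheme:

```
import json
import time

def session_keys(session_id: str) -> dict:
    """Key names for one session's short-term memory (naming is an assumption)."""
    return {
        "history": f"session:{session_id}:history",  # list: ordered chat turns
        "meta": f"session:{session_id}:meta",        # hash: session metadata
    }

def append_turn(client, session_id: str, role: str, content: str, ttl: int = 1800) -> None:
    """Append one chat turn and keep the whole session short-lived via TTL."""
    keys = session_keys(session_id)
    turn = json.dumps({"role": role, "content": content, "ts": time.time()})
    client.rpush(keys["history"], turn)   # ordered chat history
    client.expire(keys["history"], ttl)   # session-scoped expiration
    client.hset(keys["meta"], "last_active", str(int(time.time())))
    client.expire(keys["meta"], ttl)
```

Each call refreshes the TTL, so an active session stays in memory while abandoned sessions expire on their own.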

## Long-term memory
<a name="agentic-memory-long-term"></a>

Long-term memory stores information across multiple sessions. This enables agents to remember user preferences, past decisions, and historical context for future conversations. Long-term memory requires a persistent, searchable store that supports semantic retrieval — finding relevant memories based on meaning rather than exact keyword matches.

ElastiCache for Valkey supports long-term memory through its vector similarity search capabilities (available in Valkey 8.2 and later). Vector search enables semantic memory retrieval, allowing agents to find relevant memories based on meaning by comparing vector embeddings of stored memories against new queries.

## Additional memory types
<a name="agentic-memory-additional-types"></a>


| Memory type | Description | ElastiCache support | 
| --- | --- | --- | 
| Episodic memory | Records of specific past interactions and events | Vector search over stored conversation embeddings | 
| Semantic memory | General knowledge and facts extracted from interactions | Vector similarity search with HNSW or FLAT indexes | 
| Procedural memory | Knowledge about how to perform tasks and use tools | Hash-based storage of tool configurations and workflows | 

# Why ElastiCache for Valkey for agentic memory
<a name="agentic-memory-why-elasticache"></a>

ElastiCache for Valkey provides several capabilities that make it well suited as the storage layer for agentic memory:
+ **Sub-millisecond latency** – ElastiCache for Valkey delivers microsecond-level latency for memory operations, making it suitable for real-time agent interactions where memory lookups must not add perceptible delay to the user experience.
+ **Vector similarity search** – Starting with Valkey version 8.2, ElastiCache supports vector similarity search through the valkey-search module. This enables semantic memory retrieval, where agents can find relevant memories based on meaning rather than exact keyword matches.
+ **Real-time index updates** – New memories become immediately searchable after being written. This is critical for agentic applications where the agent may need to recall information it stored moments ago within the same session.
+ **Built-in cache management** – Features such as TTL (time to live), eviction policies (`allkeys-lru`), and atomic operations help manage the memory lifecycle.
+ **Multiple data structures** – Valkey provides hashes, lists, strings, streams, JSON, and vectors — each optimized for different memory patterns. A single ElastiCache instance can support session state (hashes), conversation history (lists), tool result caching (strings with TTL), event logs (streams), and semantic memory (vectors).
+ **Scalability** – ElastiCache scales to handle millions of requests with consistent low latency, supporting applications with large numbers of concurrent users and agents.

# Solution architecture
<a name="agentic-memory-architecture"></a>

The following architecture implements persistent memory for agentic AI applications using ElastiCache for Valkey as the vector storage component.

**Key components:**
+ **Amazon Bedrock AgentCore Runtime** – Provides the hosting environment for deploying and running agents. It provides access to the LLM and embedding models required for the architecture.
+ **Agent framework (for example, Strands Agents)** – Manages LLM invocations, tool execution, and user conversations. Strands Agents supports multiple LLMs, including models from Amazon Bedrock, Anthropic, Google Gemini, and OpenAI.
+ **Mem0** – The memory orchestration layer that sits between AI agents and storage systems. Mem0 manages the memory lifecycle, from extracting information from agent interactions to storing and retrieving it.
+ **Amazon ElastiCache for Valkey** – The managed in-memory data store that serves as the vector storage component. ElastiCache uses Valkey's vector similarity search capabilities to store high-dimensional vector embeddings, enabling semantic memory retrieval.

# Prerequisites
<a name="agentic-memory-prerequisites"></a>

To implement agentic memory with ElastiCache for Valkey, you need:

1. An AWS account with access to Amazon Bedrock, including Amazon Bedrock AgentCore Runtime and embedding models.

1. An ElastiCache cluster running Valkey 8.2 or later. Valkey 8.2 includes support for vector similarity search. For instructions on creating a cluster, see [Creating a cluster for Valkey or Redis OSS](Clusters.Create.md).

1. An Amazon Elastic Compute Cloud (Amazon EC2) instance or other compute resource within the same Amazon VPC as your ElastiCache cluster.

1. Python 3.11 or later with the following packages:

   ```
   pip install strands-agents strands-agents-tools strands-agents-builder
   pip install mem0ai "mem0ai[vector_stores]"
   ```

# Setting up ElastiCache for Valkey as a vector store for agentic memory
<a name="agentic-memory-setup"></a>

The following walkthrough shows how to build a memory-enabled AI agent using Mem0 with ElastiCache for Valkey as the vector store.

## Step 1: Create a basic agent without memory
<a name="agentic-memory-step1"></a>

First, install Strands Agents and create a basic agent:

```
pip install strands-agents strands-agents-tools strands-agents-builder
```

Initialize a basic agent with an HTTP tool for web browsing:

```
from strands import Agent
from strands_tools import http_request

# Initialize agent with access to the tool to browse the web
agent = Agent(tools=[http_request])

# Format messages as expected by Strands
formatted_messages = [
    {
        "role": "user",
        "content": [{"text": "What is the URL for the project mem0 and its most important metrics?"}]
    }
]

result = agent(formatted_messages)
```

Without memory, the agent performs the same research tasks repeatedly for each request. In testing, the agent makes three tool calls to answer the request, using approximately 70,000 tokens and taking over 9 seconds to complete.

## Step 2: Configure Mem0 with ElastiCache for Valkey
<a name="agentic-memory-step2"></a>

Install the Mem0 library with the Valkey vector store connector:

```
pip install mem0ai "mem0ai[vector_stores]"
```

Configure Valkey as the vector store. ElastiCache for Valkey supports vector search capabilities starting with version 8.2:

```
from mem0 import Memory

# Configure Mem0 with ElastiCache for Valkey
config = {
    "vector_store": {
        "provider": "valkey",
        "config": {
            "valkey_url": "your-elasticache-cluster.cache.amazonaws.com:6379",
            "index_name": "agent_memory",
            "embedding_model_dims": 1024,
            "index_type": "flat"
        }
    }
}

m = Memory.from_config(config)
```

Replace *your-elasticache-cluster.cache.amazonaws.com* with your ElastiCache cluster's endpoint. For instructions on finding your cluster endpoint, see [Accessing your ElastiCache cluster](accessing-elasticache.md).

## Step 3: Add memory tools to the agent
<a name="agentic-memory-step3"></a>

Create memory tools that the agent can use to store and retrieve information. The `@tool` decorator transforms regular Python functions into tools the agent can invoke:

```
from strands import Agent, tool
from strands_tools import http_request

@tool
def store_memory_tool(information: str, user_id: str = "user") -> str:
    """Store important information in long-term memory."""
    memory_message = [{"role": "user", "content": information}]

    # Create new memories using Mem0 and store them in Valkey
    m.add(memory_message, user_id=user_id)

    return f"Stored: {information}"

@tool
def search_memory_tool(query: str, user_id: str = "user") -> str:
    """Search stored memories for relevant information."""

    # Search memories using Mem0 stored in Valkey
    results = m.search(query, user_id=user_id)
    if results['results']:
        return "\n".join([r['memory'] for r in results['results']])
    return "No memories found"

# Initialize Strands agent with memory tools
agent = Agent(tools=[http_request, store_memory_tool, search_memory_tool])
```

## Step 4: Test the memory-enabled agent
<a name="agentic-memory-step4"></a>

With memory enabled, the agent stores information from its interactions and retrieves it in subsequent requests:

```
# First request - agent searches the web and stores results in memory
formatted_messages = [
    {
        "role": "user",
        "content": [{"text": "What is the URL for the project mem0 and its most important metrics?"}]
    }
]
result = agent(formatted_messages)

# Second request (same question) - agent retrieves from memory
result = agent(formatted_messages)
```

On the second request, the agent retrieves the information from memory instead of making web tool calls. In testing, this reduced token usage from approximately 70,000 to 6,300 (a 12x reduction) and improved response time from 9.25 seconds to 2 seconds (more than 3x faster).

## How it works under the hood
<a name="agentic-memory-valkey-commands"></a>

The following table shows the Valkey commands that Mem0 uses internally to implement agentic memory with ElastiCache. Mem0 abstracts these commands through its API — the exact schema and key naming may vary depending on the Mem0 version and configuration:


| Operation | Valkey command | Description | 
| --- | --- | --- | 
| Create vector index | `FT.CREATE agent_memory SCHEMA embedding VECTOR HNSW 6 TYPE FLOAT32 DIM 1024 DISTANCE_METRIC COSINE` | Creates a vector index for semantic memory search | 
| Store memory | `HSET mem:{id} memory "..." embedding [bytes] user_id "user_123" created_at "..."` | Stores a memory with its vector embedding | 
| Search memories | `FT.SEARCH agent_memory "*=>[KNN 5 @embedding $query_vec]" PARAMS 2 query_vec [bytes] DIALECT 2` | Finds the most semantically similar memories | 
| Set expiration | `EXPIRE mem:{id} 86400` | Sets TTL for memory entries | 
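
The store path from the table can be sketched directly against a valkey-py or redis-py client. The key naming and field names here mirror the table and are illustrative; the exact schema Mem0 writes may differ:

```
import struct

def to_float32_blob(embedding: list) -> bytes:
    """Serialize an embedding as the little-endian FLOAT32 blob vector fields expect."""
    return struct.pack(f"<{len(embedding)}f", *embedding)

def store_memory(client, mem_id: str, text: str, embedding: list,
                 user_id: str, ttl: int = 86400) -> None:
    """HSET a memory with its embedding, then EXPIRE it, as in the table above."""
    key = f"mem:{mem_id}"
    client.hset(key, mapping={
        "memory": text,
        "embedding": to_float32_blob(embedding),
        "user_id": user_id,
    })
    client.expire(key, ttl)  # EXPIRE mem:<id> 86400
```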

# Performance benefits
<a name="agentic-memory-performance"></a>

The following table summarizes the performance improvements observed in testing with a memory-enabled agent versus a stateless agent:


| Metric | Without memory | With memory | Improvement | 
| --- | --- | --- | --- | 
| Tool calls per request | 3 | 0 (memory retrieval) | Eliminated redundant tool calls | 
| Token usage | ~70,000 | ~6,300 | 12x reduction | 
| Response time | 9.25 seconds | 2 seconds | 3x+ faster | 
| Memory lookup latency | N/A | Sub-millisecond | Valkey in-memory performance | 

# Best practices
<a name="agentic-memory-best-practices"></a>

## Memory lifecycle management
<a name="agentic-memory-bp-lifecycle"></a>
+ **Use TTL for short-term memory** – Set appropriate TTL values on memory entries to automatically expire transient information. For session context, use TTLs of 30 minutes to 24 hours. For long-term user preferences, use longer TTLs or persist indefinitely.
+ **Implement memory decay** – Mem0 provides built-in decay mechanisms that remove irrelevant information over time. Configure these to prevent memory bloat as the agent accumulates more interactions.
+ **Deduplicate memories** – Before storing a new memory, check if a similar memory already exists using vector similarity search. Update existing memories rather than creating duplicates.
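
The deduplication check can be sketched as a nearest-neighbor lookup before every write, storing only when nothing sufficiently close already exists. The 0.10 distance cutoff is an assumed starting point; in practice you would also include the user's TAG filter in the query:

```
def should_store(nearest_distance, dup_threshold: float = 0.10) -> bool:
    """Treat the new memory as a duplicate if an existing one is closer than dup_threshold."""
    return nearest_distance is None or nearest_distance >= dup_threshold

def nearest_memory_distance(client, index: str, query_vec: bytes):
    """Return the cosine distance of the closest stored memory, or None if none exist."""
    results = client.execute_command(
        "FT.SEARCH", index, "*=>[KNN 1 @embedding $query_vec]",
        "PARAMS", "2", "query_vec", query_vec,
        "DIALECT", "2",
    )
    if results[0] == 0:
        return None
    fields = results[2]
    field_dict = {fields[i]: fields[i + 1] for i in range(0, len(fields), 2)}
    return float(field_dict["__embedding_score"])
```

When `should_store` returns `False`, update the existing memory's fields instead of writing a new key.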

## Vector index configuration
<a name="agentic-memory-bp-index"></a>
+ **Choose the right index type** – Use `FLAT` for smaller memory stores (under 100,000 entries) where exact search is feasible. Use `HNSW` for larger stores where approximate nearest neighbor search provides better performance at scale.
+ **Select appropriate dimensions** – Match the embedding dimensions to your model. Amazon Titan Text Embeddings V2 produces 1024-dimensional vectors. OpenAI's text-embedding-3-small produces 1536-dimensional vectors.
+ **Use COSINE distance metric** – For text embeddings from models like Amazon Titan and OpenAI, COSINE distance is typically the most appropriate metric for measuring semantic similarity.
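
These three choices can be folded into a small index-creation helper. The 100,000-entry cutoff and field names follow the guidance above; adjust them for your workload:

```
def pick_index_type(expected_entries: int) -> str:
    """FLAT gives exact search for smaller stores; HNSW scales to larger ones."""
    return "FLAT" if expected_entries < 100_000 else "HNSW"

def create_memory_index(client, name: str, dims: int, expected_entries: int) -> None:
    """Create a vector index sized for the expected number of memory entries."""
    client.execute_command(
        "FT.CREATE", name,
        "SCHEMA",
        "user_id", "TAG",  # enables per-user isolation filters
        "embedding", "VECTOR", pick_index_type(expected_entries), "6",
        "TYPE", "FLOAT32",
        "DIM", str(dims),               # must match the embedding model
        "DISTANCE_METRIC", "COSINE",    # appropriate for text embeddings
    )
```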

## Multi-user isolation
<a name="agentic-memory-bp-isolation"></a>
+ **Scope memories by user ID** – Always include a `user_id` parameter when storing and searching memories to prevent information leaking between users.
+ **Use TAG filters for efficient isolation** – When querying the vector index, use TAG filters (for example, `@user_id:{user_123}`) to pre-filter results by user before performing KNN search. This runs as a single atomic operation, providing both isolation and performance.

  ```
  # Example: TAG-filtered vector search for user isolation
  results = client.execute_command(
      "FT.SEARCH", "agent_memory",
      f"@user_id:{{{user_id}}}=>[KNN 5 @embedding $query_vec]",
      "PARAMS", "2", "query_vec", query_vec,
      "DIALECT", "2",
  )
  ```

## Memory management at scale
<a name="agentic-memory-bp-scale"></a>
+ **Set maxmemory policy** – Configure `maxmemory-policy allkeys-lru` on your ElastiCache cluster to automatically evict least-recently-used memory entries when the cluster reaches its memory limit.
+ **Monitor memory usage** – Use Amazon CloudWatch metrics to track memory utilization, cache hit rates, and vector search latency. Set alarms for high memory usage to proactively manage capacity.
+ **Plan for capacity** – Each memory entry typically requires approximately 4–6 KB (embedding dimensions × 4 bytes + metadata). A 1 GB ElastiCache instance can store approximately 170,000–250,000 memory entries depending on embedding size and metadata.
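
The capacity figure follows from simple arithmetic, sketched here with an assumed metadata overhead of about 1.5 KB per entry:

```
def entry_size_bytes(dims: int = 1024, metadata_bytes: int = 1500) -> int:
    """Approximate size of one memory entry: FLOAT32 embedding plus metadata."""
    return dims * 4 + metadata_bytes  # embedding = dims × 4 bytes

def entries_per_gib(dims: int = 1024, metadata_bytes: int = 1500) -> int:
    """Approximate number of entries that fit in 1 GiB of memory."""
    return (1 << 30) // entry_size_bytes(dims, metadata_bytes)

# With 1024-dim embeddings and ~1.5 KB of metadata, one entry is ~5.6 KB,
# so 1 GiB holds on the order of 190,000 entries
```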

# Related resources
<a name="agentic-memory-related-resources"></a>
+ [Vector search for ElastiCache](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/vector-search.html)
+ [Common ElastiCache use cases](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/elasticache-use-cases.html)
+ [Build persistent memory for agentic AI applications with Mem0 and Amazon ElastiCache for Valkey](https://aws.amazon.com/blogs/database/build-persistent-memory-for-agentic-ai-applications-with-mem0-open-source-amazon-elasticache-for-valkey-and-amazon-neptune-analytics/) (AWS Database Blog)
+ [Mem0 documentation — Valkey vector store](https://docs.mem0.ai/components/vectordbs/dbs/valkey)
+ [Strands Agents user guide](https://strandsagents.com/latest/documentation/docs/)
+ [Amazon Bedrock AgentCore Runtime](https://docs.aws.amazon.com/bedrock-agentcore/latest/devguide/agents-tools-runtime.html)
+ [Valkey vector search documentation](https://valkey.io/blog/introducing-valkey-search/)

# Getting started with JSON for Valkey and Redis OSS
<a name="json-gs"></a>

ElastiCache supports the native JavaScript Object Notation (JSON) format, which is a simple, schemaless way to encode complex datasets inside Valkey and Redis OSS clusters. You can natively store, access, and update JSON data inside those clusters without writing custom code to serialize and deserialize it.

In addition to using Valkey and Redis OSS API operations for applications that operate over JSON, you can now efficiently retrieve and update specific portions of a JSON document without needing to manipulate the entire object. This can improve performance and reduce cost. You can also search your JSON document contents using the [Goessner-style](https://goessner.net/articles/JsonPath/) `JSONPath` query. 
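
As a sketch, assuming a connected valkey-py or redis-py client and a hypothetical `item:1` key, a document can be stored once and then read or updated one JSONPath at a time:

```
import json

def jsonpath(*keys: str) -> str:
    """Build a simple dotted JSONPath (illustrative helper, not part of the API)."""
    return "$." + ".".join(keys) if keys else "$"

def set_and_update(client) -> None:
    """Store a JSON document, then read and update a single field by path."""
    doc = {"id": 1, "name": "widget", "price": 20.5, "tags": ["new"]}
    client.execute_command("JSON.SET", "item:1", "$", json.dumps(doc))

    # Retrieve only the nested value instead of the whole document
    client.execute_command("JSON.GET", "item:1", jsonpath("price"))

    # Update one field in place, without rewriting the document
    client.execute_command("JSON.SET", "item:1", jsonpath("price"), "22.99")
```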

After you create a cluster with a supported engine version, the JSON data type and associated commands are automatically available. The JSON data type is API compatible and RDB compatible with version 2 of the JSON module, so you can easily migrate existing JSON-based Valkey and Redis OSS applications into ElastiCache. For more information on the supported commands, see [Supported Valkey and Redis OSS commands](json-list-commands.md).

The JSON-related metrics `JsonBasedCmds` and `JsonBasedCmdsLatency` are incorporated into CloudWatch to monitor the usage of this data type. For more information, see [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md).

**Note**  
To use JSON, you must be running Valkey 7.2 or later, or Redis OSS 6.2.6 or later.

**Topics**
+ [JSON data type overview](json-document-overview.md)
+ [Supported Valkey and Redis OSS commands](json-list-commands.md)

# JSON data type overview
<a name="json-document-overview"></a>

ElastiCache supports a number of Valkey and Redis OSS commands for working with the JSON data type. The following is an overview of the JSON data type and a detailed list of commands that are supported.

## Terminology
<a name="json-terminology"></a>



| Term | Description | 
| --- | --- | 
|  JSON document | Refers to the value of a JSON key. | 
|  JSON value | Refers to a subset of a JSON document, including the root that represents the entire document. A value could be a container or an entry within a container. | 
|  JSON element | Equivalent to JSON value. | 

## Supported JSON standard
<a name="Supported-JSON-Standard"></a>

The JSON format is compliant with [RFC 7159](https://www.ietf.org/rfc/rfc7159.txt) and the [ECMA-404](https://ecma-international.org/publications-and-standards/standards/ecma-404/) JSON data interchange standard. UTF-8 [Unicode](https://www.unicode.org/standard/WhatIsUnicode.html) in JSON text is supported.

## Root element
<a name="json-root-element"></a>

The root element can be of any JSON data type. Note that in earlier RFC 4627, only objects or arrays were allowed as root values. Since the update to RFC 7159, the root of a JSON document can be of any JSON data type.

## Document size limit
<a name="json-document-size-limit"></a>

JSON documents are stored internally in a format that's optimized for rapid access and modification. This format typically results in consuming somewhat more memory than the equivalent serialized representation of the same document. 

The consumption of memory by a single JSON document is limited to 64 MB, which is the size of the in-memory data structure, not the JSON string. You can check the amount of memory consumed by a JSON document by using the `JSON.DEBUG MEMORY` command.
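For example, you might check a small document's footprint like this (the byte count shown is illustrative and varies by engine version):

```
127.0.0.1:6379> JSON.SET doc . '{"a": [1, 2, 3], "b": "some string"}'
OK
127.0.0.1:6379> JSON.DEBUG MEMORY doc
(integer) 229
```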

## JSON ACLs
<a name="json-acls"></a>
+ Similar to the existing per-datatype categories (@string, @hash, etc.), a new category @json is added to simplify managing access to JSON commands and data. No other existing Valkey or Redis OSS commands are members of the @json category. All JSON commands enforce any keyspace or command restrictions and permissions.
+ There are five existing Valkey and Redis OSS ACL categories that are updated to include the new JSON commands: @read, @write, @fast, @slow and @admin. The following table indicates the mapping of JSON commands to the appropriate categories.


**ACL**  

| JSON command | @read | @write | @fast | @slow | @admin | 
| --- | --- | --- | --- | --- | --- | 
|  JSON.ARRAPPEND |  | y | y |  |  | 
|  JSON.ARRINDEX | y |  | y |  |  | 
|  JSON.ARRINSERT |  | y | y |  |  | 
|  JSON.ARRLEN | y |  | y |  |  | 
|  JSON.ARRPOP |  | y | y |  |  | 
|  JSON.ARRTRIM |  | y | y |  |  | 
|  JSON.CLEAR |  | y | y |  |  | 
|  JSON.DEBUG | y |  |  | y | y | 
|  JSON.DEL |  | y | y |  |  | 
|  JSON.FORGET |  | y | y |  |  | 
|  JSON.GET | y |  | y |  |  | 
|  JSON.MGET | y |  | y |  |  | 
|  JSON.NUMINCRBY |  | y | y |  |  | 
|  JSON.NUMMULTBY |  | y | y |  |  | 
|  JSON.OBJKEYS | y |  | y |  |  | 
|  JSON.OBJLEN | y |  | y |  |  | 
|  JSON.RESP | y |  | y |  |  | 
|  JSON.SET |  | y |  | y |  | 
|  JSON.STRAPPEND |  | y | y |  |  | 
|  JSON.STRLEN | y |  | y |  |  | 
|  JSON.TOGGLE |  | y | y |  |  | 
|  JSON.TYPE | y |  | y |  |  | 
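For example, you could grant a dedicated application user access to JSON commands with standard ACL syntax (the user name, password, and key pattern here are hypothetical):

```
127.0.0.1:6379> ACL SETUSER json-app on >example-password ~app:* +@json
OK
```

Because all JSON commands also belong to the categories in the preceding table, you can restrict such a user further, for example with `+@json -@write` for read-only JSON access.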

## Nesting depth limit
<a name="json-nesting-depth-limit"></a>

When a JSON object or array has an element that is itself another JSON object or array, that inner object or array is said to “nest” within the outer object or array. The maximum nesting depth limit is 128. Any attempt to create a document that contains a nesting depth greater than 128 will be rejected with an error.
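As a rough client-side guard, you can estimate a document's nesting depth before writing it. The following Python sketch counts one level per value (so a bare scalar has depth 1); the engine's exact counting convention may differ slightly, so treat the 128 limit conservatively:

```python
import json

def json_depth(value):
    """Estimate the nesting depth of a parsed JSON value.
    Scalars count as depth 1; each enclosing object or array adds one level."""
    if isinstance(value, dict):
        return 1 + max((json_depth(v) for v in value.values()), default=0)
    if isinstance(value, list):
        return 1 + max((json_depth(v) for v in value), default=0)
    return 1

doc = json.loads('{"a": {"b": [1, {"c": 2}]}}')
print(json_depth(doc))  # → 5
```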

## Command syntax
<a name="json-command-syntax"></a>

Most commands require a key name as the first argument. Some commands also have a path argument. The path argument defaults to the root if it's optional and not provided.

 Notation:
+ Required arguments are enclosed in angle brackets. For example: <key>
+ Optional arguments are enclosed in square brackets. For example: [path]
+ Additional optional arguments are indicated by an ellipsis ("…"). For example: [json ...]

## Path syntax
<a name="json-path-syntax"></a>

Redis JSON supports two kinds of path syntaxes:
+ **Enhanced syntax** – Follows the JSONPath syntax described by [Goessner](https://goessner.net/articles/JsonPath/), as shown in the following table. We've reordered and modified the descriptions in the table for clarity.
+ **Restricted syntax** – Has limited query capabilities.

**Note**  
Results of some commands are sensitive to which type of path syntax is used.

If a query path starts with '$', it uses the enhanced syntax. Otherwise, the restricted syntax is used.
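The distinction matters in practice. For example, `JSON.GET` wraps its result in an array under the enhanced syntax but returns the bare value under the restricted syntax (a minimal illustration):

```
127.0.0.1:6379> JSON.SET k1 . '{"a": 1}'
OK
127.0.0.1:6379> JSON.GET k1 $.a
"[1]"
127.0.0.1:6379> JSON.GET k1 .a
"1"
```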

**Enhanced syntax**


****  

| Symbol/Expression | Description | 
| --- | --- | 
|  $ | The root element. | 
|  . or [] | Child operator. | 
|  .. | Recursive descent. | 
|  * | Wildcard. All elements in an object or array. | 
|  [] | Array subscript operator. Index is 0-based. | 
|  [,] | Union operator. | 
|  [start:end:step] | Array slice operator. | 
|  ?() | Applies a filter (script) expression to the current array or object. | 
|  () | Filter expression. | 
|  @ | Used in filter expressions that refer to the current node being processed. | 
|  == | Equal to, used in filter expressions. | 
|  != | Not equal to, used in filter expressions. | 
|  > | Greater than, used in filter expressions. | 
|  >= | Greater than or equal to, used in filter expressions.  | 
|  < | Less than, used in filter expressions. | 
|  <= | Less than or equal to, used in filter expressions.  | 
|  && | Logical AND, used to combine multiple filter expressions. | 
|  \|\| | Logical OR, used to combine multiple filter expressions. | 

**Examples**

The following examples are built on [Goessner's](https://goessner.net/articles/JsonPath/) example XML data, which we have modified by adding additional fields.

```
{ "store": {
    "book": [ 
      { "category": "reference",
        "author": "Nigel Rees",
        "title": "Sayings of the Century",
        "price": 8.95,
        "in-stock": true,
        "sold": true
      },
      { "category": "fiction",
        "author": "Evelyn Waugh",
        "title": "Sword of Honour",
        "price": 12.99,
        "in-stock": false,
        "sold": true
      },
      { "category": "fiction",
        "author": "Herman Melville",
        "title": "Moby Dick",
        "isbn": "0-553-21311-3",
        "price": 8.99,
        "in-stock": true,
        "sold": false
      },
      { "category": "fiction",
        "author": "J. R. R. Tolkien",
        "title": "The Lord of the Rings",
        "isbn": "0-395-19395-8",
        "price": 22.99,
        "in-stock": false,
        "sold": false
      }
    ],
    "bicycle": {
      "color": "red",
      "price": 19.95,
      "in-stock": true,
      "sold": false
    }
  }
}
```


****  

| Path | Description | 
| --- | --- | 
|  $.store.book[*].author | The authors of all books in the store. | 
|  $..author | All authors. | 
|  $.store.* | All members of the store. | 
|  $["store"].* | All members of the store. | 
|  $.store..price | The price of everything in the store. | 
|  $..* | All recursive members of the JSON structure. | 
|  $..book[*] | All books. | 
|  $..book[0] | The first book. | 
|  $..book[-1] | The last book. | 
|  $..book[0:2] | The first two books. | 
|  $..book[0,1] | The first two books. | 
|  $..book[0:4] | Books from index 0 to 3 (ending index is not inclusive). | 
|  $..book[0:4:2] | Books at index 0, 2. | 
|  $..book[?(@.isbn)] | All books with an ISBN number. | 
|  $..book[?(@.price<10)] | All books cheaper than $10. | 
|  '$..book[?(@.price < 10)]' | All books cheaper than $10. (The path must be quoted if it contains white spaces.) | 
|  '$..book[?(@["price"] < 10)]' | All books cheaper than $10. | 
|  '$..book[?(@.["price"] < 10)]' | All books cheaper than $10. | 
|  $..book[?(@.price>=10&&@.price<=100)] | All books in the price range of $10 to $100, inclusive. | 
|  '$..book[?(@.price>=10 && @.price<=100)]' | All books in the price range of $10 to $100, inclusive. (The path must be quoted if it contains white spaces.) | 
|  $..book[?(@.sold==true\|\|@.in-stock==false)] | All books sold or out of stock. | 
|  '$..book[?(@.sold == true \|\| @.in-stock == false)]' | All books sold or out of stock. (The path must be quoted if it contains white spaces.) | 
|  '$.store.book[?(@.["category"] == "fiction")]' | All books in the fiction category. | 
|  '$.store.book[?(@.["category"] != "fiction")]' | All books in nonfiction categories. | 

Additional filter expression examples:

```
127.0.0.1:6379> JSON.SET k1 . '{"books": [{"price":5,"sold":true,"in-stock":true,"title":"foo"}, {"price":15,"sold":false,"title":"abc"}]}'
OK
127.0.0.1:6379> JSON.GET k1 $.books[?(@.price>1&&@.price<20&&@.in-stock)]
"[{\"price\":5,\"sold\":true,\"in-stock\":true,\"title\":\"foo\"}]"
127.0.0.1:6379> JSON.GET k1 '$.books[?(@.price>1 && @.price<20 && @.in-stock)]'
"[{\"price\":5,\"sold\":true,\"in-stock\":true,\"title\":\"foo\"}]"
127.0.0.1:6379> JSON.GET k1 '$.books[?((@.price>1 && @.price<20) && (@.sold==false))]'
"[{\"price\":15,\"sold\":false,\"title\":\"abc\"}]"
127.0.0.1:6379> JSON.GET k1 '$.books[?(@.title == "abc")]'
[{"price":15,"sold":false,"title":"abc"}]

127.0.0.1:6379> JSON.SET k2 . '[1,2,3,4,5]'
127.0.0.1:6379> JSON.GET k2 $.*.[?(@>2)]
"[3,4,5]"
127.0.0.1:6379> JSON.GET k2 '$.*.[?(@ > 2)]'
"[3,4,5]"

127.0.0.1:6379> JSON.SET k3 . '[true,false,true,false,null,1,2,3,4]'
OK
127.0.0.1:6379> JSON.GET k3 $.*.[?(@==true)]
"[true,true]"
127.0.0.1:6379> JSON.GET k3 '$.*.[?(@ == true)]'
"[true,true]"
127.0.0.1:6379> JSON.GET k3 $.*.[?(@>1)]
"[2,3,4]"
127.0.0.1:6379> JSON.GET k3 '$.*.[?(@ > 1)]'
"[2,3,4]"
```

**Restricted syntax**


****  

| Symbol/Expression | Description | 
| --- | --- | 
|  . or [] | Child operator. | 
|  [] | Array subscript operator. Index is 0-based. | 

**Examples**


****  

| Path | Description | 
| --- | --- | 
|  .store.book[0].author | The author of the first book. | 
|  .store.book[-1].author | The author of the last book. | 
|  .address.city | City name. | 
|  ["store"]["book"][0]["title"] | The title of the first book. | 
|  ["store"]["book"][-1]["title"] | The title of the last book. | 

**Note**  
All [Goessner](https://goessner.net/articles/JsonPath/) content cited in this documentation is subject to the [Creative Commons License](https://creativecommons.org/licenses/by/2.5/).

## Common error prefixes
<a name="json-error-prefixes"></a>

Each error message has a prefix. The following is a list of common error prefixes.


****  

| Prefix | Description | 
| --- | --- | 
|  ERR | A general error. | 
|  LIMIT | An error that occurs when the size limit is exceeded. For example, the document size limit or nesting depth limit was exceeded. | 
|  NONEXISTENT | A key or path does not exist. | 
|  OUTOFBOUNDARIES | Array index out of bounds. | 
|  SYNTAXERR | Syntax error. | 
|  WRONGTYPE | Wrong value type. | 

## JSON-related metrics
<a name="json-info-metrics"></a>

The following JSON info metrics are provided:


****  

| Info | Description | 
| --- | --- | 
|  json\_total\_memory\_bytes | Total memory allocated to JSON objects. | 
|  json\_num\_documents | Total number of documents in Valkey or Redis OSS. | 

To query core metrics, run the following command:

```
info json_core_metrics
```

## How ElastiCache for Valkey and Redis OSS interacts with JSON
<a name="json-differences"></a>

The following section describes how ElastiCache for Valkey and Redis OSS interacts with the JSON data type.

### Operator precedence
<a name="json-operator-precedence"></a>

When evaluating conditional expressions for filtering, && takes precedence over ||, as is common across most languages. Operations inside parentheses are run first.
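A sketch of how precedence changes a filter's result, following the rules above (the results shown assume the documented semantics):

```
127.0.0.1:6379> JSON.SET k1 . '[1, 2, 3, 4, 5]'
OK
127.0.0.1:6379> JSON.GET k1 '$[?(@==1 || @==2 && @>2)]'
"[1]"
127.0.0.1:6379> JSON.GET k1 '$[?((@==1 || @==2) && @>2)]'
"[]"
```

In the first query, `@==2 && @>2` is evaluated first (always false), leaving only `@==1`. In the second, the parentheses force the OR first, and no element satisfies both sides.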

### Maximum path nesting limit behavior
<a name="json-max-path"></a>

 The maximum path nesting limit in ElastiCache for Valkey and Redis OSS is 128. So a path like `$.a.b.c.d. ...` can only reach 128 levels. 

### Handling numeric values
<a name="json-about-numbers"></a>

JSON doesn't have separate data types for integers and floating point numbers. They are all called numbers.

Numerical representations:

When a JSON number is received on input, it is converted into one of the two internal binary representations: a 64-bit signed integer or a 64-bit IEEE double precision floating point. The original string and all of its formatting are not retained. Thus, when a number is output as part of a JSON response, it is converted from the internal binary representation to a printable string that uses generic formatting rules. These rules might result in a different string being generated than was received.

Arithmetic commands `NUMINCRBY` and `NUMMULTBY`:
+ If both numbers are integers and the result is out of the range of `int64`, it automatically becomes a 64-bit IEEE double precision floating point number.
+ If at least one of the numbers is a floating point, the result is a 64-bit IEEE double precision floating point number.
+ If the result exceeds the range of 64-bit IEEE double, the command returns an `OVERFLOW` error.
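These promotion rules can be modeled in a few lines of Python. This is an illustrative sketch of the described behavior, not the engine's implementation, and `num_incr` is a hypothetical helper name:

```python
# Model of the numeric rules described above (illustrative, not engine code).
INT64_MAX = 2**63 - 1
INT64_MIN = -(2**63)

def num_incr(a, b):
    """Add two JSON numbers the way NUMINCRBY is described:
    integer + integer stays an integer unless it overflows int64,
    in which case the result becomes an IEEE double."""
    result = a + b
    if isinstance(a, int) and isinstance(b, int):
        if INT64_MIN <= result <= INT64_MAX:
            return result          # still representable as int64
        return float(result)       # overflow: promote to IEEE double
    return float(result)           # any float operand yields a double

print(num_incr(1, 2))              # → 3
print(num_incr(1, 0.5))            # → 1.5
print(num_incr(INT64_MAX, 1))      # promoted to a double
```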

For a detailed list of available commands, see [Supported Valkey and Redis OSS commands](json-list-commands.md).

### Direct array filtering
<a name="json-direct-array-filtering"></a>

ElastiCache for Valkey or Redis OSS filters array objects directly.

For data like `[0,1,2,3,4,5,6]` and a path query like `$[?(@<4)]`, or data like `{"my_key":[0,1,2,3,4,5,6]}` and a path query like `$.my_key[?(@<4)]`, ElastiCache returns `[0,1,2,3]` in both cases. 

### Array indexing behavior
<a name="json-direct-array-indexing"></a>

ElastiCache for Valkey or Redis OSS allows both positive and negative indexes for arrays. For an array of length five, 0 would query the first element, 1 the second, and so on. Negative numbers start at the end of the array, so -1 would query the fifth element, -2 the fourth element, and so on.

To ensure predictable behavior for customers, ElastiCache does not round array indexes down or up, so if you have an array with a length of 5, calling index 5 or higher, or -6 or lower, would not produce a result.
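For example, with a five-element array, valid positive and negative indexes return elements, while an out-of-range index selects nothing (an empty array under the enhanced syntax):

```
127.0.0.1:6379> JSON.SET k1 . '[10, 20, 30, 40, 50]'
OK
127.0.0.1:6379> JSON.GET k1 '$[0]'
"[10]"
127.0.0.1:6379> JSON.GET k1 '$[-1]'
"[50]"
127.0.0.1:6379> JSON.GET k1 '$[5]'
"[]"
```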

### Strict syntax evaluation
<a name="json-strict-syntax-evaluation"></a>

ElastiCache does not allow JSON paths with invalid syntax, even if a subset of the path contains a valid path. This is to maintain correct behavior for our customers.

# Supported Valkey and Redis OSS commands
<a name="json-list-commands"></a>

ElastiCache supports the following Valkey and Redis OSS JSON commands:

**Topics**
+ [JSON.ARRAPPEND](json-arrappend.md)
+ [JSON.ARRINDEX](json-arrindex.md)
+ [JSON.ARRINSERT](json-arrinsert.md)
+ [JSON.ARRLEN](json-arrlen.md)
+ [JSON.ARRPOP](json-arrpop.md)
+ [JSON.ARRTRIM](json-arrtrim.md)
+ [JSON.CLEAR](json-clear.md)
+ [JSON.DEBUG](json-debug.md)
+ [JSON.DEL](json-del.md)
+ [JSON.FORGET](json-forget.md)
+ [JSON.GET](json-get.md)
+ [JSON.MGET](json-mget.md)
+ [JSON.MSET](json-mset.md)
+ [JSON.NUMINCRBY](json-numincrby.md)
+ [JSON.NUMMULTBY](json-nummultby.md)
+ [JSON.OBJLEN](json-objlen.md)
+ [JSON.OBJKEYS](json-objkeys.md)
+ [JSON.RESP](json-resp.md)
+ [JSON.SET](json-set.md)
+ [JSON.STRAPPEND](json-strappend.md)
+ [JSON.STRLEN](json-strlen.md)
+ [JSON.TOGGLE](json-toggle.md)
+ [JSON.TYPE](json-type.md)

# JSON.ARRAPPEND
<a name="json-arrappend"></a>

Appends one or more values to the array values at the path.

Syntax

```
JSON.ARRAPPEND <key> <path> <json> [json ...]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ json (required) – The JSON value to be appended to the array.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the new length of the array at each path.
+ If a value is not an array, its corresponding return value is null.
+ `NONEXISTENT` error if the path does not exist.

If the path is restricted syntax:
+ Integer, the array's new length.
+ If multiple array values are selected, the command returns the new length of the first updated array.
+ `WRONGTYPE` error if the value at the path is not an array.
+ `SYNTAXERR` error if one of the input json arguments is not a valid JSON string.
+ `NONEXISTENT` error if the path does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRAPPEND  k1 $[*] '"c"'
1) (integer) 1
2) (integer) 2
3) (integer) 3
127.0.0.1:6379> JSON.GET k1
"[[\"c\"],[\"a\",\"c\"],[\"a\",\"b\",\"c\"]]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRAPPEND  k1 [-1] '"c"'
(integer) 3
127.0.0.1:6379> JSON.GET k1
"[[],[\"a\"],[\"a\",\"b\",\"c\"]]"
```

# JSON.ARRINDEX
<a name="json-arrindex"></a>

Searches for the first occurrence of a scalar JSON value in the arrays at the path.
+ Out-of-range start and end indexes are rounded to the array's start and end, respectively.
+ If start > end, the command returns -1 (not found).

Syntax

```
JSON.ARRINDEX <key> <path> <json-scalar> [start [end]]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ json-scalar (required) – The scalar value to search for. JSON scalar refers to values that are not objects or arrays. That is, string, number, Boolean, and null are scalar values.
+ start (optional) – The start index, inclusive. Defaults to 0 if not provided.
+ end (optional) – The end index, exclusive. Defaults to 0 if not provided. A value of 0 or -1 means that the last element is included.

**Return**

If the path is enhanced syntax:
+ Array of integers. Each value is the index of the matching element in the array at the path. The value is -1 if not found.
+ If a value is not an array, its corresponding return value is null.

If the path is restricted syntax:
+ Integer, the index of matching element, or -1 if not found.
+ `WRONGTYPE` error if the value at the path is not an array.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"], ["a", "b", "c"]]'
OK
127.0.0.1:6379> JSON.ARRINDEX k1 $[*] '"b"'
1) (integer) -1
2) (integer) -1
3) (integer) 1
4) (integer) 1
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"children": ["John", "Jack", "Tom", "Bob", "Mike"]}'
OK
127.0.0.1:6379> JSON.ARRINDEX k1 .children '"Tom"'
(integer) 2
```

# JSON.ARRINSERT
<a name="json-arrinsert"></a>

Inserts one or more values into the array values at the path before the index.

Syntax

```
JSON.ARRINSERT <key> <path> <index> <json> [json ...]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ index (required) – An array index before which values are inserted.
+ json (required) – The JSON value to be appended to the array.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the new length of the array at each path.
+ If a value is an empty array, its corresponding return value is null.
+ If a value is not an array, its corresponding return value is null.
+ `OUTOFBOUNDARIES` error if the index argument is out of bounds.

If the path is restricted syntax:
+ Integer, the new length of the array.
+ `WRONGTYPE` error if the value at the path is not an array.
+ `OUTOFBOUNDARIES` error if the index argument is out of bounds.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRINSERT k1 $[*] 0 '"c"'
1) (integer) 1
2) (integer) 2
3) (integer) 3
127.0.0.1:6379> JSON.GET k1
"[[\"c\"],[\"c\",\"a\"],[\"c\",\"a\",\"b\"]]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRINSERT k1 . 0 '"c"'
(integer) 4
127.0.0.1:6379> JSON.GET k1
"[\"c\",[],[\"a\"],[\"a\",\"b\"]]"
```

# JSON.ARRLEN
<a name="json-arrlen"></a>

Gets the length of the array values at the path.

Syntax

```
JSON.ARRLEN <key> [path] 
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the array length at each path.
+ If a value is not an array, its corresponding return value is null.
+ Null if the document key does not exist.

If the path is restricted syntax:
+ Integer, array length.
+ If multiple objects are selected, the command returns the first array's length.
+ `WRONGTYPE` error if the value at the path is not an array.
+ `NONEXISTENT JSON` error if the path does not exist.
+ Null if the document key does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"], ["a", "b", "c"]]'
OK
127.0.0.1:6379> JSON.ARRLEN k1 $[*]
1) (integer) 0
2) (integer) 1
3) (integer) 2
4) (integer) 3

127.0.0.1:6379> JSON.SET k2 . '[[], "a", ["a", "b"], ["a", "b", "c"], 4]'
OK
127.0.0.1:6379> JSON.ARRLEN k2 $[*]
1) (integer) 0
2) (nil)
3) (integer) 2
4) (integer) 3
5) (nil)
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"], ["a", "b", "c"]]' 
OK 
127.0.0.1:6379> JSON.ARRLEN k1 [*] 
(integer) 0 
127.0.0.1:6379> JSON.ARRLEN k1 [1] 
(integer) 1 
127.0.0.1:6379> JSON.ARRLEN k1 [2] 
(integer) 2

127.0.0.1:6379> JSON.SET k2 . '[[], "a", ["a", "b"], ["a", "b", "c"], 4]' 
OK
127.0.0.1:6379> JSON.ARRLEN k2 [1] 
(error) WRONGTYPE JSON element is not an array 
127.0.0.1:6379> JSON.ARRLEN k2 [0] 
(integer) 0
127.0.0.1:6379> JSON.ARRLEN k2 [6] 
(error) OUTOFBOUNDARIES Array index is out of bounds
127.0.0.1:6379> JSON.ARRLEN k2 a.b 
(error) NONEXISTENT JSON path does not exist
```

# JSON.ARRPOP
<a name="json-arrpop"></a>

Removes and returns the element at the index from the array. Popping an empty array returns null.

Syntax

```
JSON.ARRPOP <key> [path [index]]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.
+ index (optional) – The position in the array to start popping from.
  + Defaults to -1 if not provided, which means the last element.
  + Negative value means position from the last element.
  + Out of boundary indexes are rounded to their respective array boundaries.

**Return**

If the path is enhanced syntax:
+ Array of bulk strings that represent popped values at each path.
+ If a value is an empty array, its corresponding return value is null.
+ If a value is not an array, its corresponding return value is null.

If the path is restricted syntax:
+ Bulk string, which represents the popped JSON value.
+ Null if the array is empty.
+ `WRONGTYPE` error if the value at the path is not an array.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRPOP k1 $[*]
1) (nil)
2) "\"a\""
3) "\"b\""
127.0.0.1:6379> JSON.GET k1
"[[],[],[\"a\"]]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRPOP k1
"[\"a\",\"b\"]"
127.0.0.1:6379> JSON.GET k1
"[[],[\"a\"]]"

127.0.0.1:6379> JSON.SET k2 . '[[], ["a"], ["a", "b"]]'
OK
127.0.0.1:6379> JSON.ARRPOP k2 . 0
"[]"
127.0.0.1:6379> JSON.GET k2
"[[\"a\"],[\"a\",\"b\"]]"
```

# JSON.ARRTRIM
<a name="json-arrtrim"></a>

Trims the arrays at the path so that each becomes a subarray [start, end], both inclusive.
+ If the array is empty, the command does nothing and returns 0.
+ If start < 0, it is treated as 0.
+ If end >= size (the size of the array), it is treated as size-1.
+ If start >= size or start > end, the command empties the array and returns 0.

Syntax

```
JSON.ARRTRIM <key> <path> <start> <end>
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ start (required) – The start index, inclusive.
+ end (required) – The end index, inclusive.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the new length of the array at each path.
+ If a value is an empty array, its corresponding return value is null.
+ If a value is not an array, its corresponding return value is null.
+ `OUTOFBOUNDARIES` error if an index argument is out of bounds.

If the path is restricted syntax:
+ Integer, the new length of the array.
+ Null if the array is empty.
+ `WRONGTYPE` error if the value at the path is not an array.
+ `OUTOFBOUNDARIES` error if an index argument is out of bounds.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[[], ["a"], ["a", "b"], ["a", "b", "c"]]'
OK
127.0.0.1:6379> JSON.ARRTRIM k1 $[*] 0 1
1) (integer) 0
2) (integer) 1
3) (integer) 2
4) (integer) 2
127.0.0.1:6379> JSON.GET k1
"[[],[\"a\"],[\"a\",\"b\"],[\"a\",\"b\"]]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"children": ["John", "Jack", "Tom", "Bob", "Mike"]}'
OK
127.0.0.1:6379> JSON.ARRTRIM k1 .children 0 1
(integer) 2
127.0.0.1:6379> JSON.GET k1 .children
"[\"John\",\"Jack\"]"
```

# JSON.CLEAR
<a name="json-clear"></a>

Clears the arrays or objects at the path.

Syntax

```
JSON.CLEAR <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**
+ Integer, the number of containers cleared.
+ Clearing an empty array or object accounts for 1 container cleared.
+ Clearing a non-container value returns 0.

**Examples**

```
127.0.0.1:6379> JSON.SET k1 . '[[], [0], [0,1], [0,1,2], 1, true, null, "d"]'
OK
127.0.0.1:6379>  JSON.CLEAR k1  $[*]
(integer) 7
127.0.0.1:6379> JSON.CLEAR k1  $[*]
(integer) 4
127.0.0.1:6379> JSON.SET k2 . '{"children": ["John", "Jack", "Tom", "Bob", "Mike"]}'
OK
127.0.0.1:6379> JSON.CLEAR k2 .children
(integer) 1
127.0.0.1:6379> JSON.GET k2 .children
"[]"
```

# JSON.DEBUG
<a name="json-debug"></a>

Reports information. Supported subcommands are:
+ MEMORY <key> [path] – Reports memory usage in bytes of a JSON value. Path defaults to the root if not provided.
+ FIELDS <key> [path] – Reports the number of fields at the specified document path. Path defaults to the root if not provided. Each non-container JSON value counts as one field. Objects and arrays recursively count one field for each of their containing JSON values. Each container value, except the root container, counts as one additional field.
+ HELP – Prints help messages of the command.

Syntax

```
JSON.DEBUG <subcommand & arguments>
```

**Return**

The return value depends on the subcommand:

MEMORY
+ If the path is enhanced syntax:
  + Returns an array of integers that represent memory size (in bytes) of JSON value at each path.
  + Returns an empty array if the Valkey or Redis OSS key does not exist.
+ If the path is restricted syntax:
  + Returns an integer, the memory size of the JSON value in bytes.
  + Returns null if the Valkey or Redis OSS key does not exist.

FIELDS
+ If the path is enhanced syntax:
  + Returns an array of integers that represent the number of fields of JSON value at each path.
  + Returns an empty array if the Valkey or Redis OSS key does not exist.
+ If the path is restricted syntax:
  + Returns an integer, number of fields of the JSON value.
  + Returns null if the Valkey or Redis OSS key does not exist.

HELP – Returns an array of help messages.

**Examples**

Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[1, 2.3, "foo", true, null, {}, [], {"a":1, "b":2}, [1,2,3]]'
OK
127.0.0.1:6379> JSON.DEBUG MEMORY k1 $[*]
1) (integer) 16
2) (integer) 16
3) (integer) 19
4) (integer) 16
5) (integer) 16
6) (integer) 16
7) (integer) 16
8) (integer) 50
9) (integer) 64
127.0.0.1:6379> JSON.DEBUG FIELDS k1 $[*]
1) (integer) 1
2) (integer) 1
3) (integer) 1
4) (integer) 1
5) (integer) 1
6) (integer) 0
7) (integer) 0
8) (integer) 2
9) (integer) 3
```

Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK
127.0.0.1:6379> JSON.DEBUG MEMORY k1
(integer) 632
127.0.0.1:6379> JSON.DEBUG MEMORY k1 .phoneNumbers
(integer) 166

127.0.0.1:6379> JSON.DEBUG FIELDS k1
(integer) 19
127.0.0.1:6379> JSON.DEBUG FIELDS k1 .address
(integer) 4

127.0.0.1:6379> JSON.DEBUG HELP
1) JSON.DEBUG MEMORY <key> [path] - report memory size (bytes) of the JSON element. Path defaults to root if not provided.
2) JSON.DEBUG FIELDS <key> [path] - report number of fields in the JSON element. Path defaults to root if not provided.
3) JSON.DEBUG HELP - print help message.
```

# JSON.DEL
<a name="json-del"></a>

Deletes the JSON values at the path in a document key. If the path is the root, it is equivalent to deleting the key from Valkey or Redis OSS.

Syntax

```
JSON.DEL <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**
+ Number of elements deleted.
+ 0 if the Valkey or Redis OSS key does not exist.
+ 0 if the JSON path is invalid or does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}, "e": [1,2,3,4,5]}'
OK
127.0.0.1:6379> JSON.DEL k1 $.d.*
(integer) 3
127.0.0.1:6379> JSON.GET k1
"{\"a\":{},\"b\":{\"a\":1},\"c\":{\"a\":1,\"b\":2},\"d\":{},\"e\":[1,2,3,4,5]}"
127.0.0.1:6379> JSON.DEL k1 $.e[*]
(integer) 5
127.0.0.1:6379> JSON.GET k1
"{\"a\":{},\"b\":{\"a\":1},\"c\":{\"a\":1,\"b\":2},\"d\":{},\"e\":[]}"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}, "e": [1,2,3,4,5]}'
OK
127.0.0.1:6379> JSON.DEL k1 .d.*
(integer) 3
127.0.0.1:6379> JSON.GET k1
"{\"a\":{},\"b\":{\"a\":1},\"c\":{\"a\":1,\"b\":2},\"d\":{},\"e\":[1,2,3,4,5]}"
127.0.0.1:6379> JSON.DEL k1 .e[*]
(integer) 5
127.0.0.1:6379> JSON.GET k1
"{\"a\":{},\"b\":{\"a\":1},\"c\":{\"a\":1,\"b\":2},\"d\":{},\"e\":[]}"
```

# JSON.FORGET
<a name="json-forget"></a>

An alias of [JSON.DEL](json-del.md).

# JSON.GET
<a name="json-get"></a>

Returns the serialized JSON at one or multiple paths.

Syntax

```
JSON.GET <key>
[INDENT indentation-string]
[NEWLINE newline-string]
[SPACE space-string]
[NOESCAPE]
[path ...]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ INDENT/NEWLINE/SPACE (optional) – Controls the format of the returned JSON string, that is, "pretty print". The default value of each one is an empty string. They can be overridden in any combination. They can be specified in any order.
+ NOESCAPE (optional) – Allowed to be present for legacy compatibility; has no other effect.
+ path (optional) – Zero or more JSON paths, defaults to the root if none is given. The path arguments must be placed at the end.

**Return**

Enhanced path syntax:

 If one path is given:
+ Returns serialized string of an array of values.
+ If no value is selected, the command returns an empty array.

 If multiple paths are given:
+ Returns a stringified JSON object, in which each path is a key.
+ If enhanced and restricted path syntaxes are mixed, the result conforms to the enhanced syntax.
+ If a path does not exist, its corresponding value is an empty array.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK
127.0.0.1:6379> JSON.GET k1 $.address.*
"[\"21 2nd Street\",\"New York\",\"NY\",\"10021-3100\"]"
127.0.0.1:6379> JSON.GET k1 indent "\t" space " " NEWLINE "\n" $.address.*
"[\n\t\"21 2nd Street\",\n\t\"New York\",\n\t\"NY\",\n\t\"10021-3100\"\n]"
127.0.0.1:6379> JSON.GET k1 $.firstName $.lastName $.age
"{\"$.firstName\":[\"John\"],\"$.lastName\":[\"Smith\"],\"$.age\":[27]}"            
127.0.0.1:6379> JSON.SET k2 . '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}}'
OK
127.0.0.1:6379> json.get k2 $..*
"[{},{\"a\":1},{\"a\":1,\"b\":2},1,1,2]"
```

 Restricted path syntax:

```
 127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK
127.0.0.1:6379> JSON.GET k1 .address
"{\"street\":\"21 2nd Street\",\"city\":\"New York\",\"state\":\"NY\",\"zipcode\":\"10021-3100\"}"
127.0.0.1:6379> JSON.GET k1 indent "\t" space " " NEWLINE "\n" .address
"{\n\t\"street\": \"21 2nd Street\",\n\t\"city\": \"New York\",\n\t\"state\": \"NY\",\n\t\"zipcode\": \"10021-3100\"\n}"
127.0.0.1:6379> JSON.GET k1 .firstName .lastName .age
"{\".firstName\":\"John\",\".lastName\":\"Smith\",\".age\":27}"
```

# JSON.MGET
<a name="json-mget"></a>

Gets serialized JSON values at the path from multiple document keys. It returns null for a nonexistent key or JSON path.

**Syntax**

```
JSON.MGET <key> [key ...] <path>
```
+ key (required) – One or more Valkey or Redis OSS keys of document type.
+ path (required) – A JSON path.

**Return**
+ Array of bulk strings. The size of the array is equal to the number of keys in the command. Each element of the array is populated with either (a) the serialized JSON as located by the path or (b) null if the key does not exist, the path does not exist in the document, or the path is invalid (syntax error).
+ If any of the specified keys exists and is not a JSON key, the command returns a `WRONGTYPE` error.
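
The per-key reply shape can be modeled in a few lines of Python. This is an illustration of the semantics above, not the server implementation; the `json_mget` helper and its `getter` callback are hypothetical constructs for the sketch.

```python
def json_mget(store, keys, getter):
    """Sketch of JSON.MGET's reply shape: one array entry per key,
    with null (None) standing in for a missing key or unmatched path.
    `getter` extracts the value at a path or raises. Model only.
    """
    out = []
    for key in keys:
        try:
            out.append(getter(store[key]))
        except (KeyError, LookupError):
            out.append(None)  # a missing key or path yields null, not an error
    return out

store = {"k1": {"address": {"city": "New York"}},
         "k2": {"address": {"city": "Boston"}}}
cities = json_mget(store, ["k1", "k2", "k3"],
                   lambda doc: doc["address"]["city"])
```

The reply always has exactly as many entries as there are keys in the command, so a missing `k3` produces a trailing null rather than shortening the array.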

**Examples**

Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021"}}'
OK
127.0.0.1:6379> JSON.SET k2 . '{"address":{"street":"5 main Street","city":"Boston","state":"MA","zipcode":"02101"}}'
OK
127.0.0.1:6379> JSON.SET k3 . '{"address":{"street":"100 Park Ave","city":"Seattle","state":"WA","zipcode":"98102"}}'
OK
127.0.0.1:6379> JSON.MGET k1 k2 k3 $.address.city
1) "[\"New York\"]"
2) "[\"Boston\"]"
3) "[\"Seattle\"]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021"}}'
OK
127.0.0.1:6379> JSON.SET k2 . '{"address":{"street":"5 main Street","city":"Boston","state":"MA","zipcode":"02101"}}'
OK
127.0.0.1:6379> JSON.SET k3 . '{"address":{"street":"100 Park Ave","city":"Seattle","state":"WA","zipcode":"98102"}}'
OK

127.0.0.1:6379> JSON.MGET k1 k2 k3 .address.city
1) "\"New York\""
2) "\"Boston\""
3) "\"Seattle\""
```

# JSON.MSET
<a name="json-mset"></a>

Supported for Valkey version 8.1 and above.

Sets JSON values for multiple keys. The operation is atomic: either all values are set or none are.

**Syntax**

```
JSON.MSET key path json [ key path json ... ]
```
+ If the path calls for an object member:
  + If the parent element does not exist, the command returns a NONEXISTENT error.
  + If the parent element exists but is not an object, the command returns ERROR.
  + If the parent element exists and is an object:
    + If the member does not exist, a new member is appended to the parent object if and only if the parent object is the last child in the path. Otherwise, the command returns a NONEXISTENT error.
    + If the member exists, its value is replaced by the JSON value.
+ If the path calls for an array index:
  + If the parent element does not exist, the command returns a NONEXISTENT error.
  + If the parent element exists but is not an array, the command returns ERROR.
  + If the parent element exists but the index is out of bounds, the command returns an OUTOFBOUNDARIES error.
  + If the parent element exists and the index is valid, the element is replaced by the new JSON value.
+ If the path calls for an object or array, the value (object or array) is replaced by the new JSON value.
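
The all-or-nothing guarantee can be sketched in pure Python: stage every write on a copy, and commit only if no write fails. This is an illustration of the atomicity described above, not ElastiCache's implementation; the `json_mset` helper and its limited path support are hypothetical.

```python
import copy
import json

def json_mset(store, triples):
    """Atomically apply (key, path, json_text) triples to a dict store.

    Pure-Python sketch of JSON.MSET's all-or-nothing semantics: writes
    are staged on a deep copy and only swapped in if none raises.
    Supports only the root (".") and top-level members (".name").
    """
    staged = copy.deepcopy(store)
    for key, path, raw in triples:
        value = json.loads(raw)          # invalid JSON aborts the whole call
        if path == ".":
            staged[key] = value
        elif path.startswith("."):
            doc = staged[key]            # missing key: NONEXISTENT analogue
            if not isinstance(doc, dict):
                raise TypeError("parent element is not an object")
            doc[path[1:]] = value
        else:
            raise ValueError("unsupported path in this sketch")
    store.clear()
    store.update(staged)                 # commit: all writes or none

store = {}
json_mset(store, [("k1", ".", "[1,2,3]"), ("k2", ".", '{"a":1}')])
try:
    # the second triple fails (k9 is missing), so the first must not apply
    json_mset(store, [("k1", ".", "[9]"), ("k9", ".x", "0")])
except KeyError:
    pass
```

After the failed call, `k1` still holds its original value, mirroring how a partially invalid JSON.MSET leaves every key untouched.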

**Return**
+ Simple string reply: 'OK' if the operation was successful.
+ Simple error reply: If the operation failed.

**Examples**

Enhanced path syntax:

```
127.0.0.1:6379> JSON.MSET k1 . '[1,2,3,4,5]' k2 . '{"a":{"a":1, "b":2, "c":3}}' k3 . '{"a": [1,2,3,4,5]}'
OK
127.0.0.1:6379> JSON.GET k1
"[1,2,3,4,5]"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{\"a\":1,\"b\":2,\"c\":3}}"
127.0.0.1:6379> JSON.MSET k2 $.a.* '0' k3 $.a[*] '0'
OK
127.0.0.1:6379> JSON.GET k2
"{\"a\":{\"a\":0,\"b\":0,\"c\":0}}"
127.0.0.1:6379> JSON.GET k3
"{\"a\":[0,0,0,0,0]}"
```

Restricted path syntax:

```
127.0.0.1:6379> JSON.MSET k1 . '{"name": "John","address": {"street": "123 Main St","city": "Springfield"},"phones": ["555-1234","555-5678"]}'
OK
127.0.0.1:6379> JSON.MSET k1 .address.street '"21 2nd Street"' k1 .address.city '"New York"'
OK
127.0.0.1:6379> JSON.GET k1 .address.street
"\"21 2nd Street\""
127.0.0.1:6379> JSON.GET k1 .address.city
"\"New York\""
```

# JSON.NUMINCRBY
<a name="json-numincrby"></a>

Increments the number values at the path by a given number.

**Syntax**

```
JSON.NUMINCRBY <key> <path> <number>
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ number (required) – A number.

**Return**

If the path is enhanced syntax:
+ Array of bulk strings that represents the resulting value at each path.
+ If a value is not a number, its corresponding return value is null.
+ `WRONGTYPE` error if the number cannot be parsed.
+ `OVERFLOW` error if the result is out of the range of 64-bit IEEE double.
+ `NONEXISTENT` if the document key does not exist.

If the path is restricted syntax:
+ Bulk string that represents the resulting value.
+ If multiple values are selected, the command returns the result of the last updated value.
+ `WRONGTYPE` error if the value at the path is not a number.
+ `WRONGTYPE` error if the number cannot be parsed.
+ `OVERFLOW` error if the result is out of the range of 64-bit IEEE double.
+ `NONEXISTENT` if the document key does not exist.
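
The difference between the two reply shapes for a wildcard path can be sketched in Python. This models only the reply semantics described above (it is not the server implementation), and the `numincrby_members` helper is a hypothetical stand-in for incrementing an object's values via `.*` or `$.*`.

```python
def numincrby_members(doc, by, enhanced=True):
    """Sketch of JSON.NUMINCRBY over an object wildcard path.

    Enhanced syntax replies with one entry per selected value, null
    for non-numbers; restricted syntax replies with only the last
    updated value, or an error when nothing numeric matched.
    """
    results = []
    for key, value in doc.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            doc[key] = value + by
            results.append(doc[key])
        else:
            results.append(None)         # non-number selected by the wildcard
    if enhanced:
        return results
    updated = [r for r in results if r is not None]
    if not updated:
        raise LookupError("NONEXISTENT: no number at path")
    return updated[-1]                   # last updated value only

d1 = {"a": 1, "b": "x", "c": 3}
enh = numincrby_members(dict(d1), 1, enhanced=True)
res = numincrby_members(dict(d1), 1, enhanced=False)
```

With the mixed object above, the enhanced form reports every selection (including the null for `"b"`), while the restricted form collapses to the last updated number.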

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k1 $.d[*] 10
"[11,12,13]"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[1],\"c\":[1,2],\"d\":[11,12,13]}"

127.0.0.1:6379> JSON.SET k1 $ '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k1 $.a[*] 1
"[]"
127.0.0.1:6379> JSON.NUMINCRBY k1 $.b[*] 1
"[2]"
127.0.0.1:6379> JSON.NUMINCRBY k1 $.c[*] 1
"[2,3]"
127.0.0.1:6379> JSON.NUMINCRBY k1 $.d[*] 1
"[2,3,4]"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[2,3],\"d\":[2,3,4]}"

127.0.0.1:6379> JSON.SET k2 $ '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k2 $.a.* 1
"[]"
127.0.0.1:6379> JSON.NUMINCRBY k2 $.b.* 1
"[2]"
127.0.0.1:6379> JSON.NUMINCRBY k2 $.c.* 1
"[2,3]"
127.0.0.1:6379> JSON.NUMINCRBY k2 $.d.* 1
"[2,3,4]"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":2,\"b\":3},\"d\":{\"a\":2,\"b\":3,\"c\":4}}"

127.0.0.1:6379> JSON.SET k3 $ '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"b"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k3 $.a.* 1
"[null]"
127.0.0.1:6379> JSON.NUMINCRBY k3 $.b.* 1
"[null,2]"
127.0.0.1:6379> JSON.NUMINCRBY k3 $.c.* 1
"[null,null]"
127.0.0.1:6379> JSON.NUMINCRBY k3 $.d.* 1
"[2,null,4]"
127.0.0.1:6379> JSON.GET k3
"{\"a\":{\"a\":\"a\"},\"b\":{\"a\":\"a\",\"b\":2},\"c\":{\"a\":\"a\",\"b\":\"b\"},\"d\":{\"a\":2,\"b\":\"b\",\"c\":4}}"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k1 .d[1] 10
"12"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[1],\"c\":[1,2],\"d\":[1,12,3]}"

127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k1 .a[*] 1
(error) NONEXISTENT JSON path does not exist
127.0.0.1:6379> JSON.NUMINCRBY k1 .b[*] 1
"2"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[1,2],\"d\":[1,2,3]}"
127.0.0.1:6379> JSON.NUMINCRBY k1 .c[*] 1
"3"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[2,3],\"d\":[1,2,3]}"
127.0.0.1:6379> JSON.NUMINCRBY k1 .d[*] 1
"4"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[2,3],\"d\":[2,3,4]}"

127.0.0.1:6379> JSON.SET k2 . '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k2 .a.* 1
(error) NONEXISTENT JSON path does not exist
127.0.0.1:6379> JSON.NUMINCRBY k2 .b.* 1
"2"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":1,\"b\":2},\"d\":{\"a\":1,\"b\":2,\"c\":3}}"
127.0.0.1:6379> JSON.NUMINCRBY k2 .c.* 1
"3"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":2,\"b\":3},\"d\":{\"a\":1,\"b\":2,\"c\":3}}"
127.0.0.1:6379> JSON.NUMINCRBY k2 .d.* 1
"4"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":2,\"b\":3},\"d\":{\"a\":2,\"b\":3,\"c\":4}}"

127.0.0.1:6379> JSON.SET k3 . '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"b"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.NUMINCRBY k3 .a.* 1
(error) WRONGTYPE JSON element is not a number
127.0.0.1:6379> JSON.NUMINCRBY k3 .b.* 1
"2"
127.0.0.1:6379> JSON.NUMINCRBY k3 .c.* 1
(error) WRONGTYPE JSON element is not a number
127.0.0.1:6379> JSON.NUMINCRBY k3 .d.* 1
"4"
```

# JSON.NUMMULTBY
<a name="json-nummultby"></a>

Multiplies the number values at the path by a given number.

**Syntax**

```
JSON.NUMMULTBY <key> <path> <number>
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path.
+ number (required) – A number.

**Return**

If the path is enhanced syntax:
+ Array of bulk strings that represent the resulting value at each path.
+ If a value is not a number, its corresponding return value is null.
+ `WRONGTYPE` error if the number cannot be parsed.
+ `OVERFLOW` error if the result is out of the range of a 64-bit IEEE double precision floating point number.
+ `NONEXISTENT` if the document key does not exist.

If the path is restricted syntax:
+ Bulk string that represents the resulting value.
+ If multiple values are selected, the command returns the result of the last updated value.
+ `WRONGTYPE` error if the value at the path is not a number.
+ `WRONGTYPE` error if the number cannot be parsed.
+ `OVERFLOW` error if the result is out of the range of a 64-bit IEEE double.
+ `NONEXISTENT` if the document key does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k1 $.d[*] 2
"[2,4,6]"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[1],\"c\":[1,2],\"d\":[2,4,6]}"

127.0.0.1:6379> JSON.SET k1 $ '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k1 $.a[*] 2
"[]"
127.0.0.1:6379> JSON.NUMMULTBY k1 $.b[*] 2
"[2]"
127.0.0.1:6379> JSON.NUMMULTBY k1 $.c[*] 2
"[2,4]"
127.0.0.1:6379> JSON.NUMMULTBY k1 $.d[*] 2
"[2,4,6]"

127.0.0.1:6379> JSON.SET k2 $ '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k2 $.a.* 2
"[]"
127.0.0.1:6379> JSON.NUMMULTBY k2 $.b.* 2
"[2]"
127.0.0.1:6379> JSON.NUMMULTBY k2 $.c.* 2
"[2,4]"
127.0.0.1:6379> JSON.NUMMULTBY k2 $.d.* 2
"[2,4,6]"

127.0.0.1:6379> JSON.SET k3 $ '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"b"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k3 $.a.* 2
"[null]"
127.0.0.1:6379> JSON.NUMMULTBY k3 $.b.* 2
"[null,2]"
127.0.0.1:6379> JSON.NUMMULTBY k3 $.c.* 2
"[null,null]"
127.0.0.1:6379> JSON.NUMMULTBY k3 $.d.* 2
"[2,null,6]"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k1 .d[1] 2
"4"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[1],\"c\":[1,2],\"d\":[1,4,3]}"

127.0.0.1:6379> JSON.SET k1 . '{"a":[], "b":[1], "c":[1,2], "d":[1,2,3]}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k1 .a[*] 2
(error) NONEXISTENT JSON path does not exist
127.0.0.1:6379> JSON.NUMMULTBY k1 .b[*] 2
"2"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[1,2],\"d\":[1,2,3]}"
127.0.0.1:6379> JSON.NUMMULTBY k1 .c[*] 2
"4"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[2,4],\"d\":[1,2,3]}"
127.0.0.1:6379> JSON.NUMMULTBY k1 .d[*] 2
"6"
127.0.0.1:6379> JSON.GET k1
"{\"a\":[],\"b\":[2],\"c\":[2,4],\"d\":[2,4,6]}"

127.0.0.1:6379> JSON.SET k2 . '{"a":{}, "b":{"a":1}, "c":{"a":1, "b":2}, "d":{"a":1, "b":2, "c":3}}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k2 .a.* 2
(error) NONEXISTENT JSON path does not exist
127.0.0.1:6379> JSON.NUMMULTBY k2 .b.* 2
"2"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":1,\"b\":2},\"d\":{\"a\":1,\"b\":2,\"c\":3}}"
127.0.0.1:6379> JSON.NUMMULTBY k2 .c.* 2
"4"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":2,\"b\":4},\"d\":{\"a\":1,\"b\":2,\"c\":3}}"
127.0.0.1:6379> JSON.NUMMULTBY k2 .d.* 2
"6"
127.0.0.1:6379> JSON.GET k2
"{\"a\":{},\"b\":{\"a\":2},\"c\":{\"a\":2,\"b\":4},\"d\":{\"a\":2,\"b\":4,\"c\":6}}"

127.0.0.1:6379> JSON.SET k3 . '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"b"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.NUMMULTBY k3 .a.* 2
(error) WRONGTYPE JSON element is not a number
127.0.0.1:6379> JSON.NUMMULTBY k3 .b.* 2
"2"
127.0.0.1:6379> JSON.GET k3
"{\"a\":{\"a\":\"a\"},\"b\":{\"a\":\"a\",\"b\":2},\"c\":{\"a\":\"a\",\"b\":\"b\"},\"d\":{\"a\":1,\"b\":\"b\",\"c\":3}}"
127.0.0.1:6379> JSON.NUMMULTBY k3 .c.* 2
(error) WRONGTYPE JSON element is not a number
127.0.0.1:6379> JSON.NUMMULTBY k3 .d.* 2
"6"
127.0.0.1:6379> JSON.GET k3
"{\"a\":{\"a\":\"a\"},\"b\":{\"a\":\"a\",\"b\":2},\"c\":{\"a\":\"a\",\"b\":\"b\"},\"d\":{\"a\":2,\"b\":\"b\",\"c\":6}}"
```

# JSON.OBJLEN
<a name="json-objlen"></a>

Gets the number of keys in the object values at the path.

**Syntax**

```
JSON.OBJLEN <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the object length at each path.
+ If a value is not an object, its corresponding return value is null.
+ Null if the document key does not exist.

If the path is restricted syntax:
+ Integer, number of keys in the object.
+ If multiple objects are selected, the command returns the first object's length.
+ `WRONGTYPE` error if the value at the path is not an object.
+ `NONEXISTENT` error if the path does not exist.
+ Null if the document key does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{}, "b":{"a":"a"}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":{"a":3,"b":4}}, "e":1}'
OK
127.0.0.1:6379> JSON.OBJLEN k1 $.a
1) (integer) 0
127.0.0.1:6379> JSON.OBJLEN k1 $.a.*
(empty array)
127.0.0.1:6379> JSON.OBJLEN k1 $.b
1) (integer) 1
127.0.0.1:6379> JSON.OBJLEN k1 $.b.*
1) (nil)
127.0.0.1:6379> JSON.OBJLEN k1 $.c
1) (integer) 2
127.0.0.1:6379> JSON.OBJLEN k1 $.c.*
1) (nil)
2) (nil)
127.0.0.1:6379> JSON.OBJLEN k1 $.d
1) (integer) 3
127.0.0.1:6379> JSON.OBJLEN k1 $.d.*
1) (nil)
2) (nil)
3) (integer) 2
127.0.0.1:6379> JSON.OBJLEN k1 $.*
1) (integer) 0
2) (integer) 1
3) (integer) 2
4) (integer) 3
5) (nil)
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":{}, "b":{"a":"a"}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":{"a":3,"b":4}}, "e":1}'
OK
127.0.0.1:6379> JSON.OBJLEN k1 .a
(integer) 0
127.0.0.1:6379> JSON.OBJLEN k1 .a.*
(error) NONEXISTENT JSON path does not exist
127.0.0.1:6379> JSON.OBJLEN k1 .b
(integer) 1
127.0.0.1:6379> JSON.OBJLEN k1 .b.*
(error) WRONGTYPE JSON element is not an object
127.0.0.1:6379> JSON.OBJLEN k1 .c
(integer) 2
127.0.0.1:6379> JSON.OBJLEN k1 .c.*
(error) WRONGTYPE JSON element is not an object
127.0.0.1:6379> JSON.OBJLEN k1 .d
(integer) 3
127.0.0.1:6379> JSON.OBJLEN k1 .d.*
(integer) 2
127.0.0.1:6379> JSON.OBJLEN k1 .*
(integer) 0
```

# JSON.OBJKEYS
<a name="json-objkeys"></a>

Gets key names in the object values at the path.

**Syntax**

```
JSON.OBJKEYS <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of array of bulk strings. Each element is an array of keys in a matching object.
+ If a value is not an object, its corresponding return value is an empty array.
+ Null if the document key does not exist.

If the path is restricted syntax:
+ Array of bulk strings. Each element is a key name in the object.
+ If multiple objects are selected, the command returns the keys of the first object.
+ `WRONGTYPE` error if the value at the path is not an object.
+ Null if the document key does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{}, "b":{"a":"a"}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":{"a":3,"b":4}}, "e":1}'
OK
127.0.0.1:6379> JSON.OBJKEYS k1 $.*
1) (empty array)
2) 1) "a"
3) 1) "a"
   2) "b"
4) 1) "a"
   2) "b"
   3) "c"
5) (empty array)
127.0.0.1:6379> JSON.OBJKEYS k1 $.d
1) 1) "a"
   2) "b"
   3) "c"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{}, "b":{"a":"a"}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":{"a":3,"b":4}}, "e":1}'
OK
127.0.0.1:6379> JSON.OBJKEYS k1 .*
1) "a"
127.0.0.1:6379> JSON.OBJKEYS k1 .d
1) "a"
2) "b"
3) "c"
```

# JSON.RESP
<a name="json-resp"></a>

Returns the JSON value at the given path in the Valkey or Redis OSS Serialization Protocol (RESP). If the value is a container, the response is a RESP array or nested array.
+ JSON null is mapped to the RESP Null Bulk String.
+ JSON Boolean values are mapped to the respective RESP Simple Strings.
+ Integer numbers are mapped to RESP Integers.
+ 64-bit IEEE double floating point numbers are mapped to RESP Bulk Strings.
+ JSON strings are mapped to RESP Bulk Strings.
+ JSON arrays are represented as RESP Arrays, where the first element is the simple string [, followed by the array's elements.
+ JSON objects are represented as RESP Arrays, where the first element is the simple string {, followed by key-value pairs, each of which is a RESP bulk string.
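
The mapping rules above can be sketched as a small recursive converter. This is purely illustrative of the listed rules (nested Python lists stand in for RESP arrays, `None` for the null bulk string); the `to_resp` function is a hypothetical model, not the wire protocol.

```python
def to_resp(value):
    """Sketch of the JSON-to-RESP mapping rules listed above.

    Containers become nested lists led by a "[" or "{" marker;
    null maps to None, Booleans to "true"/"false" simple strings,
    doubles to bulk strings, and integers stay integers.
    """
    if value is None:
        return None                          # RESP null bulk string
    if isinstance(value, bool):              # check bool before int
        return "true" if value else "false"  # RESP simple string
    if isinstance(value, int):
        return value                         # RESP integer
    if isinstance(value, float):
        return str(value)                    # RESP bulk string
    if isinstance(value, str):
        return value                         # RESP bulk string
    if isinstance(value, list):
        return ["["] + [to_resp(v) for v in value]
    if isinstance(value, dict):
        return ["{"] + [[k, to_resp(v)] for k, v in value.items()]
    raise TypeError("unsupported JSON value")

resp = to_resp({"age": 27, "weight": 135.25, "tags": []})
```

Note how the integer `27` survives as a RESP integer while the double `135.25` is downgraded to a bulk string, matching the examples further down.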

**Syntax**

```
JSON.RESP <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of arrays. Each array element represents the RESP form of the value at one path.
+ Empty array if the document key does not exist.

If the path is restricted syntax:
+ Array that represents the RESP form of the value at the path.
+ Null if the document key does not exist.

**Examples**

Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK

127.0.0.1:6379> JSON.RESP k1 $.address
1) 1) {
   2) 1) "street"
      2) "21 2nd Street"
   3) 1) "city"
      2) "New York"
   4) 1) "state"
      2) "NY"
   5) 1) "zipcode"
      2) "10021-3100"

127.0.0.1:6379> JSON.RESP k1 $.address.*
1) "21 2nd Street"
2) "New York"
3) "NY"
4) "10021-3100"

127.0.0.1:6379> JSON.RESP k1 $.phoneNumbers
1) 1) [
   2) 1) {
      2) 1) "type"
         2) "home"
      3) 1) "number"
         2) "212 555-1234"
   3) 1) {
      2) 1) "type"
         2) "office"
      3) 1) "number"
         2) "646 555-4567"

127.0.0.1:6379> JSON.RESP k1 $.phoneNumbers[*]
1) 1) {
   2) 1) "type"
      2) "home"
   3) 1) "number"
      2) "212 555-1234"
2) 1) {
   2) 1) "type"
      2) "office"
   3) 1) "number"
      2) "646 555-4567"
```

Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK

127.0.0.1:6379> JSON.RESP k1 .address
1) {
2) 1) "street"
   2) "21 2nd Street"
3) 1) "city"
   2) "New York"
4) 1) "state"
   2) "NY"
5) 1) "zipcode"
   2) "10021-3100"

127.0.0.1:6379> JSON.RESP k1
 1) {
 2) 1) "firstName"
    2) "John"
 3) 1) "lastName"
    2) "Smith"
 4) 1) "age"
    2) (integer) 27
 5) 1) "weight"
    2) "135.25"
 6) 1) "isAlive"
    2) true
 7) 1) "address"
    2) 1) {
       2) 1) "street"
          2) "21 2nd Street"
       3) 1) "city"
          2) "New York"
       4) 1) "state"
          2) "NY"
       5) 1) "zipcode"
          2) "10021-3100"
 8) 1) "phoneNumbers"
    2) 1) [
       2) 1) {
          2) 1) "type"
             2) "home"
          3) 1) "number"
             2) "212 555-1234"
       3) 1) {
          2) 1) "type"
             2) "office"
          3) 1) "number"
             2) "646 555-4567"
 9) 1) "children"
    2) 1) [
10) 1) "spouse"
    2) (nil)
```

# JSON.SET
<a name="json-set"></a>

Sets JSON values at the path.

If the path calls for an object member:
+ If the parent element does not exist, the command returns a NONEXISTENT error.
+ If the parent element exists but is not an object, the command returns ERROR.
+ If the parent element exists and is an object:
  +  If the member does not exist, a new member will be appended to the parent object if and only if the parent object is the last child in the path. Otherwise, the command returns a NONEXISTENT error.
  +  If the member exists, its value will be replaced by the JSON value.

If the path calls for an array index:
+ If the parent element does not exist, the command returns a NONEXISTENT error.
+ If the parent element exists but is not an array, the command returns ERROR.
+ If the parent element exists but the index is out of bounds, the command returns an OUTOFBOUNDARIES error.
+ If the parent element exists and the index is valid, the element will be replaced by the new JSON value.

If the path calls for an object or array, the value (object or array) will be replaced by the new JSON value.

**Syntax**

```
JSON.SET <key> <path> <json> [NX | XX] 
```

Where `[NX | XX]` means that you can specify at most one of the `NX` and `XX` identifiers.
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (required) – A JSON path. For a new key, the JSON path must be the root ".".
+ NX (optional) – If the path is the root, set the value only if the key does not exist. That is, insert a new document. If the path is not the root, set the value only if the path does not exist. That is, insert a value into the document.
+ XX (optional) – If the path is the root, set the value only if the key exists. That is, replace the existing document. If the path is not the root, set the value only if the path exists. That is, update the existing value.
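
For the root path, the NX/XX decision table reduces to a few lines of Python. This is a hedged sketch of the condition logic described above, not the ElastiCache implementation; the `json_set_root` helper is a hypothetical model.

```python
def json_set_root(store, key, value, flag=None):
    """Sketch of JSON.SET's NX/XX conditions at the root path.

    Returns "OK" when the value is written, or None when the NX/XX
    condition is not met -- mirroring the command's null reply.
    """
    exists = key in store
    if flag == "NX" and exists:      # NX: only insert a new document
        return None
    if flag == "XX" and not exists:  # XX: only replace an existing one
        return None
    store[key] = value
    return "OK"

db = {}
r1 = json_set_root(db, "k1", {"a": 1}, "NX")   # key absent: set succeeds
r2 = json_set_root(db, "k1", {"a": 2}, "NX")   # key exists: null reply
r3 = json_set_root(db, "k1", {"a": 3}, "XX")   # key exists: set succeeds
```

The same pattern applies at non-root paths, with path existence standing in for key existence.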

**Return**
+ Simple String 'OK' on success.
+ Null if the NX or XX condition is not met.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":{"a":1, "b":2, "c":3}}'
OK
127.0.0.1:6379> JSON.SET k1 $.a.* '0'
OK
127.0.0.1:6379> JSON.GET k1
"{\"a\":{\"a\":0,\"b\":0,\"c\":0}}"

127.0.0.1:6379> JSON.SET k2 . '{"a": [1,2,3,4,5]}'
OK
127.0.0.1:6379> JSON.SET k2 $.a[*] '0'
OK
127.0.0.1:6379> JSON.GET k2
"{\"a\":[0,0,0,0,0]}"
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"c":{"a":1, "b":2}, "e": [1,2,3,4,5]}'
OK
127.0.0.1:6379> JSON.SET k1 .c.a '0'
OK
127.0.0.1:6379> JSON.GET k1
"{\"c\":{\"a\":0,\"b\":2},\"e\":[1,2,3,4,5]}"
127.0.0.1:6379> JSON.SET k1 .e[-1] '0'
OK
127.0.0.1:6379> JSON.GET k1
"{\"c\":{\"a\":0,\"b\":2},\"e\":[1,2,3,4,0]}"
127.0.0.1:6379> JSON.SET k1 .e[5] '0'
(error) OUTOFBOUNDARIES Array index is out of bounds
```

# JSON.STRAPPEND
<a name="json-strappend"></a>

Appends a string to the JSON strings at the path.

**Syntax**

```
JSON.STRAPPEND <key> [path] <json_string>
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.
+ json_string (required) – The JSON representation of a string. Note that a JSON string must be quoted. For example: '"string example"'.

**Return**

If the path is enhanced syntax:
+ Array of integers that represent the new length of the string at each path.
+ If a value at the path is not a string, its corresponding return value is null.
+ `SYNTAXERR` error if the input json argument is not a valid JSON string.
+ `NONEXISTENT` error if the path does not exist.

If the path is restricted syntax:
+ Integer, the string's new length.
+ If multiple string values are selected, the command returns the new length of the last updated string.
+ `WRONGTYPE` error if the value at the path is not a string.
+ `WRONGTYPE` error if the input json argument is not a valid JSON string.
+ `NONEXISTENT` error if the path does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.STRAPPEND k1 $.a.a '"a"'
1) (integer) 2
127.0.0.1:6379> JSON.STRAPPEND k1 $.a.* '"a"'
1) (integer) 3
127.0.0.1:6379> JSON.STRAPPEND k1 $.b.* '"a"'
1) (integer) 2
2) (nil)
127.0.0.1:6379> JSON.STRAPPEND k1 $.c.* '"a"'
1) (integer) 2
2) (integer) 3
127.0.0.1:6379> JSON.STRAPPEND k1 $.c.b '"a"'
1) (integer) 4
127.0.0.1:6379> JSON.STRAPPEND k1 $.d.* '"a"'
1) (nil)
2) (integer) 2
3) (nil)
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.STRAPPEND k1 .a.a '"a"'
(integer) 2
127.0.0.1:6379> JSON.STRAPPEND k1 .a.* '"a"'
(integer) 3
127.0.0.1:6379> JSON.STRAPPEND k1 .b.* '"a"'
(integer) 2
127.0.0.1:6379> JSON.STRAPPEND k1 .c.* '"a"'
(integer) 3
127.0.0.1:6379> JSON.STRAPPEND k1 .c.b '"a"'
(integer) 4
127.0.0.1:6379> JSON.STRAPPEND k1 .d.* '"a"'
(integer) 2
```

# JSON.STRLEN
<a name="json-strlen"></a>

Gets the lengths of the JSON string values at the path.

**Syntax**

```
JSON.STRLEN <key> [path] 
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of integers that represents the length of the string value at each path.
+ If a value is not a string, its corresponding return value is null.
+ Null if the document key does not exist.

If the path is restricted syntax:
+ Integer, the string's length.
+ If multiple string values are selected, the command returns the first string's length.
+ `WRONGTYPE` error if the value at the path is not a string.
+ `NONEXISTENT` error if the path does not exist.
+ Null if the document key does not exist.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.STRLEN k1 $.a.a
1) (integer) 1
127.0.0.1:6379> JSON.STRLEN k1 $.a.*
1) (integer) 1
127.0.0.1:6379> JSON.STRLEN k1 $.c.*
1) (integer) 1
2) (integer) 2
127.0.0.1:6379> JSON.STRLEN k1 $.c.b
1) (integer) 2
127.0.0.1:6379> JSON.STRLEN k1 $.d.*
1) (nil)
2) (integer) 1
3) (nil)
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 $ '{"a":{"a":"a"}, "b":{"a":"a", "b":1}, "c":{"a":"a", "b":"bb"}, "d":{"a":1, "b":"b", "c":3}}'
OK
127.0.0.1:6379> JSON.STRLEN k1 .a.a
(integer) 1
127.0.0.1:6379> JSON.STRLEN k1 .a.*
(integer) 1
127.0.0.1:6379> JSON.STRLEN k1 .c.*
(integer) 1
127.0.0.1:6379> JSON.STRLEN k1 .c.b
(integer) 2
127.0.0.1:6379> JSON.STRLEN k1 .d.*
(integer) 1
```

# JSON.TOGGLE
<a name="json-toggle"></a>

Toggles Boolean values between true and false at the path.

**Syntax**

```
JSON.TOGGLE <key> [path] 
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of integers (0 - false, 1 - true) that represent the resulting Boolean value at each path.
+ If a value is not a Boolean value, its corresponding return value is null.
+ `NONEXISTENT` if the document key does not exist.

If the path is restricted syntax:
+ String ("true"/"false") that represents the resulting Boolean value.
+ `NONEXISTENT` if the document key does not exist.
+ `WRONGTYPE` error if the value at the path is not a Boolean value.

**Examples**

 Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"a":true, "b":false, "c":1, "d":null, "e":"foo", "f":[], "g":{}}'
OK
127.0.0.1:6379> JSON.TOGGLE k1 $.*
1) (integer) 0
2) (integer) 1
3) (nil)
4) (nil)
5) (nil)
6) (nil)
7) (nil)
127.0.0.1:6379> JSON.TOGGLE k1 $.*
1) (integer) 1
2) (integer) 0
3) (nil)
4) (nil)
5) (nil)
6) (nil)
7) (nil)
```

 Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . true
OK
127.0.0.1:6379> JSON.TOGGLE k1
"false"
127.0.0.1:6379> JSON.TOGGLE k1
"true"

127.0.0.1:6379> JSON.SET k2 . '{"isAvailable": false}'
OK
127.0.0.1:6379> JSON.TOGGLE k2 .isAvailable
"true"
127.0.0.1:6379> JSON.TOGGLE k2 .isAvailable
"false"
```

# JSON.TYPE
<a name="json-type"></a>

Reports the type of values at the given path.

**Syntax**

```
JSON.TYPE <key> [path]
```
+ key (required) – A Valkey or Redis OSS key of JSON document type.
+ path (optional) – A JSON path. Defaults to the root if not provided.

**Return**

If the path is enhanced syntax:
+ Array of strings that represent the type of value at each path. The type is one of "null", "boolean", "string", "number", "integer", "object", and "array".
+ If a path does not exist, its corresponding return value is null.
+ Empty array if the document key does not exist.

If the path is restricted syntax:
+ String, the type of the value.
+ Null if the document key does not exist.
+ Null if the JSON path is invalid or does not exist.
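
The type names, including the split between "integer" and "number", can be sketched for a single value. This is an illustrative model of the mapping described above, not the server implementation; the `json_type` helper is hypothetical.

```python
def json_type(value):
    """Sketch of JSON.TYPE's type names for a single JSON value.

    Notable detail: whole numbers report "integer" while other
    numeric values report "number".
    """
    if value is None:
        return "null"
    if isinstance(value, bool):          # must precede the int check
        return "boolean"
    if isinstance(value, int):
        return "integer"
    if isinstance(value, float):
        return "number"
    if isinstance(value, str):
        return "string"
    if isinstance(value, list):
        return "array"
    if isinstance(value, dict):
        return "object"
    raise TypeError("not a JSON value")

types = [json_type(v) for v in [1, 2.3, "foo", True, None, {}, []]]
```

Applied to the array from the enhanced-syntax example that follows, this yields the same sequence of type names the command reports.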

**Examples**

Enhanced path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '[1, 2.3, "foo", true, null, {}, []]'
OK
127.0.0.1:6379> JSON.TYPE k1 $[*]
1) integer
2) number
3) string
4) boolean
5) null
6) object
7) array
```

Restricted path syntax:

```
127.0.0.1:6379> JSON.SET k1 . '{"firstName":"John","lastName":"Smith","age":27,"weight":135.25,"isAlive":true,"address":{"street":"21 2nd Street","city":"New York","state":"NY","zipcode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}'
OK
127.0.0.1:6379> JSON.TYPE k1
object
127.0.0.1:6379> JSON.TYPE k1 .children
array
127.0.0.1:6379> JSON.TYPE k1 .firstName
string
127.0.0.1:6379> JSON.TYPE k1 .age
integer
127.0.0.1:6379> JSON.TYPE k1 .weight
number
127.0.0.1:6379> JSON.TYPE k1 .isAlive
boolean
127.0.0.1:6379> JSON.TYPE k1 .spouse
null
```

# Tagging your ElastiCache resources
<a name="Tagging-Resources"></a>

To help you manage your clusters and other ElastiCache resources, you can assign your own metadata to each resource in the form of tags. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. This topic describes tags and shows you how to create them.

**Warning**  
As a best practice, we recommend that you do not include sensitive data in your tags.

## Tag basics
<a name="Tagging-basics"></a>

A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize your AWS resources in different ways, for example, by purpose or owner. For example, you could define a set of tags for your account's ElastiCache clusters that helps you track each instance's owner and user group.

We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add. For more information about how to implement an effective resource tagging strategy, see the [AWS whitepaper Tagging Best Practices](https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf).

Tags don't have any semantic meaning to ElastiCache and are interpreted strictly as a string of characters. Also, tags are not automatically assigned to your resources. You can edit tag keys and values, and you can remove tags from a resource at any time. You can set the value of a tag to `null`. If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the old value. If you delete a resource, any tags for the resource are also deleted. Furthermore, if you add or delete tags on a replication group, all nodes in that replication group will also have their tags added or removed.

 You can work with tags using the AWS Management Console, the AWS CLI, and the ElastiCache API.

If you're using IAM, you can control which users in your AWS account have permission to create, edit, or delete tags. For more information, see [Resource-level permissions](IAM.ResourceLevelPermissions.md).

## Resources you can tag
<a name="Tagging-your-resources"></a>

You can tag most ElastiCache resources that already exist in your account. The table below lists the resources that support tagging. If you're using the AWS Management Console, you can apply tags to resources by using the [Tag Editor](https://docs.aws.amazon.com/ARG/latest/userguide/tag-editor.html). Some resource screens enable you to specify tags for a resource when you create the resource; for example, a tag with a key of Name and a value that you specify. In most cases, the console applies the tags immediately after the resource is created (rather than during resource creation). The console may organize resources according to the **Name** tag, but this tag doesn't have any semantic meaning to the ElastiCache service.

 Additionally, some resource-creating actions enable you to specify tags for a resource when the resource is created. If tags cannot be applied during resource creation, we roll back the resource creation process. This ensures that resources are either created with tags or not created at all, and that no resources are left untagged at any time. By tagging resources at the time of creation, you can eliminate the need to run custom tagging scripts after resource creation. 

 If you're using the Amazon ElastiCache API, the AWS CLI, or an AWS SDK, you can use the `Tags` parameter on the following ElastiCache API actions to apply tags:
+ `CreateServerlessCache`
+ `CreateCacheCluster`
+ `CreateReplicationGroup`
+ `CopyServerlessCacheSnapshot`
+ `CopySnapshot`
+ `CreateCacheParameterGroup`
+ `CreateCacheSecurityGroup`
+ `CreateCacheSubnetGroup`
+ `CreateServerlessCacheSnapshot`
+ `CreateSnapshot`
+ `CreateUserGroup`
+ `CreateUser`
+ `PurchaseReservedCacheNodesOffering`

The following table describes the ElastiCache resources that can be tagged, and the resources that can be tagged on creation using the ElastiCache API, the AWS CLI, or an AWS SDK.


**Tagging support for ElastiCache resources**  

| Resource | Supports tags | Supports tagging on creation | 
| --- | --- | --- | 
| serverlesscache | Yes | Yes | 
| parametergroup | Yes | Yes | 
| securitygroup | Yes | Yes | 
| subnetgroup | Yes | Yes | 
| replicationgroup | Yes | Yes | 
| cluster | Yes | Yes | 
| reserved-instance | Yes | Yes | 
| serverlesscachesnapshot | Yes | Yes | 
| snapshot | Yes | Yes | 
| user | Yes | Yes | 
| usergroup | Yes | Yes | 

**Note**  
You cannot tag Global Datastores.

You can apply tag-based resource-level permissions in your IAM policies to the ElastiCache API actions that support tagging on creation, to implement granular control over the users and groups that can tag resources on creation. Because tags are applied to your resources immediately at creation, any tag-based resource-level permissions controlling the use of resources take effect right away, and your resources can be tracked and reported on more accurately. You can enforce the use of tagging on new resources, and control which tag keys and values are set on your resources.

For more information, see [Tagging resources examples](#Tagging-your-resources-example).

 For more information about tagging your resources for billing, see [Monitoring costs with cost allocation tags](Tagging.md).

## Tagging caches and snapshots
<a name="Tagging-replication-groups-snapshots"></a>

The following rules apply to tagging as part of request operations:
+ **CreateReplicationGroup**: 
  + If the `--primary-cluster-id` and `--tags` parameters are included in the request, the request tags will be added to the replication group and propagated to all clusters in the replication group. If the primary cluster has existing tags, they will be overwritten with the request tags so that tags are consistent across all nodes.

    If there are no request tags, the primary cluster tags will be added to the replication group and propagated to all clusters.
  + If the `--snapshot-name` or `--serverless-cache-snapshot-name` is supplied:

    If tags are included in the request, the replication group will be tagged only with those tags. If no tags are included in the request, the snapshot tags will be added to the replication group.
  + If the `--global-replication-group-id` is supplied:

    If tags are included in the request, the request tags will be added to the replication group and propagated to all clusters. 
+ **CreateCacheCluster** : 
  +  If the `--replication-group-id` is supplied:

    If tags are included in the request, the cluster will be tagged only with those tags. If no tags are included in the request, the cluster will inherit the replication group tags instead of the primary cluster's tags.
  + If the `--snapshot-name` is supplied:

    If tags are included in the request, the cluster will be tagged only with those tags. If no tags are included in the request, the snapshot tags will be added to the cluster.
+ **CreateServerlessCache** : 
  + If tags are included in the request, only the request tags will be added to the serverless cache.
+ **CreateSnapshot** : 
  +  If the `--replication-group-id` is supplied:

    If tags are included in the request, only the request tags will be added to the snapshot. If no tags are included in the request, the replication group tags will be added to the snapshot. 
  + If the `--cache-cluster-id` is supplied:

    If tags are included in the request, only the request tags will be added to the snapshot. If no tags are included in the request, the cluster tags will be added to the snapshot. 
  + For automatic snapshots:

    Tags will propagate from the replication group tags. 
+ **CreateServerlessCacheSnapshot** : 
  + If tags are included in the request, only the request tags will be added to the serverless cache snapshot.
+ **CopySnapshot** : 
  + If tags are included in the request, only the request tags will be added to the snapshot. If no tags are included in the request, the source snapshot tags will be added to the copied snapshot.
+ **CopyServerlessCacheSnapshot** : 
  + If tags are included in the request, only the request tags will be added to the serverless cache snapshot.
+ **AddTagsToResource** and **RemoveTagsFromResource** : 
  + Tags will be added/removed from the replication group and the action will be propagated to all clusters in the replication group.
**Note**  
**AddTagsToResource** and **RemoveTagsFromResource** cannot be used for default parameter and security groups.
+ **IncreaseReplicaCount** and **ModifyReplicationGroupShardConfiguration**: 
  + All new clusters added to the replication group will have the same tags applied as the replication group. 
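
The precedence rules above reduce to one pattern: explicit request tags always win, and only when the request carries no tags does the new resource inherit tags from its source (primary cluster, replication group, or snapshot). A minimal sketch of that rule (the function name `resolve_tags` is illustrative, not part of any AWS SDK):

```python
def resolve_tags(request_tags, source_tags):
    """Return the tags a new ElastiCache resource would carry:
    if the request includes any tags, only those are used;
    otherwise the source resource's tags are inherited."""
    return dict(request_tags) if request_tags else dict(source_tags)

# A CreateSnapshot call with request tags ignores the replication group's tags:
print(resolve_tags({"work": "foo"}, {"project": "XYZ"}))  # {'work': 'foo'}
# Without request tags, the replication group's tags are inherited:
print(resolve_tags({}, {"project": "XYZ"}))               # {'project': 'XYZ'}
```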

## Tag restrictions
<a name="Tagging-restrictions"></a>

The following basic restrictions apply to tags:
+ Maximum number of tags per resource – 50
+ For each resource, each tag key must be unique, and each tag key can have only one value.
+ Maximum key length – 128 Unicode characters in UTF-8.
+ Maximum value length – 256 Unicode characters in UTF-8.
+ Although ElastiCache allows for any character in its tags, other services can be restrictive. The allowed characters across services are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @
+ Tag keys and values are case-sensitive.
+ The `aws:` prefix is reserved for AWS use. If a tag has a tag key with this prefix, then you can't edit or delete the tag's key or value. Tags with the `aws:` prefix do not count against your tags per resource limit.

You can't terminate, stop, or delete a resource based solely on its tags; you must specify the resource identifier. For example, to delete snapshots that you tagged with a tag key called `DeleteMe`, you must use the `DeleteSnapshot` action with the resource identifiers of the snapshots, such as `snap-1234567890abcdef0`.
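
The restrictions above are easy to check client-side before calling the API. The following sketch is illustrative only (the helper names are hypothetical) and covers the length, count, and `aws:` prefix rules:

```python
def validate_tag(key, value):
    """Check one tag against the restrictions above.
    Returns a list of problems (empty means the tag is valid)."""
    problems = []
    if not 1 <= len(key) <= 128:
        problems.append("key must be 1-128 characters")
    if len(value) > 256:
        problems.append("value must be at most 256 characters")
    if key.startswith("aws:"):
        problems.append("the aws: prefix is reserved for AWS use")
    return problems

def validate_tag_set(tags):
    """Check a whole tag set (a dict, so keys are unique by construction)."""
    problems = []
    if len(tags) > 50:
        problems.append("at most 50 tags per resource")
    for key, value in tags.items():
        problems.extend(validate_tag(key, value))
    return problems

print(validate_tag_set({"CostCenter": "10010"}))  # [] -- valid
print(validate_tag("aws:internal", "x"))          # reserved-prefix problem
```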

For more information on ElastiCache resources you can tag, see [Resources you can tag](#Tagging-your-resources).

## Tagging resources examples
<a name="Tagging-your-resources-example"></a>
+ Creating a serverless cache using tags. This example uses Memcached as the engine.

  ```
  aws elasticache create-serverless-cache \
      --serverless-cache-name CacheName \
      --engine memcached \
      --tags Key="Cost Center",Value="1110001" Key="project",Value="XYZ"
  ```
+ Adding tags to a serverless cache

  ```
  aws elasticache add-tags-to-resource \
  --resource-name arn:aws:elasticache:us-east-1:111111222233:serverlesscache:my-cache \
  --tags Key="project",Value="XYZ" Key="Elasticache",Value="Service"
  ```
+ Adding tags to a Replication Group.

  ```
  aws elasticache add-tags-to-resource \
  --resource-name arn:aws:elasticache:us-east-1:111111222233:replicationgroup:my-rg \
  --tags Key="project",Value="XYZ" Key="Elasticache",Value="Service"
  ```
+ Creating a cache cluster using tags. This example uses Valkey as the engine.

  ```
  aws elasticache create-cache-cluster \
  --cache-cluster-id testing-tags \
  --cache-subnet-group-name test \
  --cache-node-type cache.t2.micro \
  --engine valkey \
  --tags Key="project",Value="XYZ" Key="Elasticache",Value="Service"
  ```
+ Creating a cache cluster using tags. This example uses Redis OSS as the engine.

  ```
  aws elasticache create-cache-cluster \
  --cache-cluster-id testing-tags \
  --cache-subnet-group-name test \
  --cache-node-type cache.t2.micro \
  --engine redis \
  --tags Key="project",Value="XYZ" Key="Elasticache",Value="Service"
  ```
+ Creating a serverless snapshot with tags. This example uses Memcached as the engine.

  ```
  aws elasticache create-serverless-cache-snapshot \
  --serverless-cache-name testing-tags \
  --serverless-cache-snapshot-name bkp-testing-tags-scs \
  --tags Key="work",Value="foo"
  ```
+ Creating a Snapshot with tags.

  Snapshots of node-based clusters are currently available only for Valkey and Redis OSS. In this case, if you include tags in the request, the snapshot receives only the request tags, even if the replication group has tags of its own. 

  ```
  aws elasticache create-snapshot \
  --replication-group-id testing-tags \
  --snapshot-name bkp-testing-tags-rg \
  --tags Key="work",Value="foo"
  ```

## Tag-Based access control policy examples
<a name="Tagging-access-control"></a>

1. Allowing the `AddTagsToResource` action on a cluster only if the cluster has the tag Project=XYZ.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "elasticache:AddTagsToResource",
               "Resource": [
                   "arn:aws:elasticache:*:*:cluster:*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:ResourceTag/Project": "XYZ"
                   }
               }
           }
       ]
   }
   ```

------

1. Allowing the `RemoveTagsFromResource` action on a replication group only if it has the tags Project=XYZ and Service=Elasticache, and the tag keys being removed are neither Project nor Service.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "elasticache:RemoveTagsFromResource",
               "Resource": [
                   "arn:aws:elasticache:*:*:replicationgroup:*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:ResourceTag/Service": "Elasticache",
                       "aws:ResourceTag/Project": "XYZ"
                   },                
                   "ForAnyValue:StringNotEqualsIgnoreCase": {
                       "aws:TagKeys": [
                           "Project",
                           "Service"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Allowing the `AddTagsToResource` action on any resource only if the tag keys being added are neither Project nor Service.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "elasticache:AddTagsToResource",
               "Resource": [
                   "arn:aws:elasticache:*:*:*:*"
               ],
               "Condition": {
                   "ForAnyValue:StringNotEqualsIgnoreCase": {
                       "aws:TagKeys": [ 
                           "Service", 
                           "Project" 
                       ]
                   }
               }
           }
       ]
   }
   ```

------

1. Denying the `CreateReplicationGroup` action if the request includes the tag Project=Foo.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Deny",
               "Action": "elasticache:CreateReplicationGroup",
               "Resource": [
                   "arn:aws:elasticache:*:*:replicationgroup:*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:RequestTag/Project": "Foo"
                   }
               }
           }
       ]
   }
   ```

------

1. Denying the `CopySnapshot` action if the source snapshot has the tag Project=XYZ and the request includes the tag Service=Elasticache.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Deny",
               "Action": "elasticache:CopySnapshot",
               "Resource": [
                   "arn:aws:elasticache:*:*:snapshot:*"
               ],
               "Condition": {
                   "StringEquals": {
                       "aws:ResourceTag/Project": "XYZ",
                       "aws:RequestTag/Service": "Elasticache"
                   }
               }
           }
       ]
   }
   ```

------

1. Denying the `CreateCacheCluster` action if the request tag `Project` is missing or is not equal to `Dev`, `QA`, or `Prod`.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
             {
               "Effect": "Allow",
               "Action": [
                   "elasticache:CreateCacheCluster"
               ],
               "Resource": [
                   "arn:aws:elasticache:*:*:parametergroup:*",
                   "arn:aws:elasticache:*:*:subnetgroup:*",
                   "arn:aws:elasticache:*:*:securitygroup:*",
                   "arn:aws:elasticache:*:*:replicationgroup:*"
               ]
           },
           {
               "Effect": "Deny",
               "Action": [
                   "elasticache:CreateCacheCluster"
               ],
               "Resource": [
                   "arn:aws:elasticache:*:*:cluster:*"
               ],
               "Condition": {
                   "Null": {
                       "aws:RequestTag/Project": "true"
                   }
               }
           },
           {
               "Effect": "Allow",
               "Action": [
                   "elasticache:CreateCacheCluster",
                   "elasticache:AddTagsToResource"
               ],
               "Resource": "arn:aws:elasticache:*:*:cluster:*",
               "Condition": {
                   "StringEquals": {
                       "aws:RequestTag/Project": [
                           "Dev",
                           "Prod",
                           "QA"
                       ]
                   }
               }
           }
       ]
   }
   ```

------

For related information on condition keys, see [Using condition keys](IAM.ConditionKeys.md).
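
The last policy above can be read as: deny when the request tag `Project` is missing, and allow only when its value is one of the approved values. A sketch of that evaluation logic (illustrative only, not how IAM actually evaluates policies):

```python
APPROVED_PROJECTS = {"Dev", "QA", "Prod"}

def create_cache_cluster_allowed(request_tags):
    """Mimic the policy: the request must carry a Project tag
    whose value is one of the approved values."""
    project = request_tags.get("Project")
    if project is None:        # the "Null" condition in the Deny statement
        return False
    return project in APPROVED_PROJECTS

print(create_cache_cluster_allowed({"Project": "Dev"}))  # True
print(create_cache_cluster_allowed({"Project": "Foo"}))  # False
print(create_cache_cluster_allowed({}))                  # False
```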

# Monitoring costs with cost allocation tags
<a name="Tagging"></a>

When you add cost allocation tags to your resources in Amazon ElastiCache, you can track costs by grouping expenses on your invoices by resource tag values.

An ElastiCache cost allocation tag is a key-value pair that you define and associate with an ElastiCache resource. The key and value are case-sensitive. You can use a tag key to define a category, and the tag value can be an item in that category. For example, you might define a tag key of `CostCenter` and a tag value of `10010`, indicating that the resource is assigned to the 10010 cost center. You can also use tags to designate resources as being used for test or production by using a key such as `Environment` and values such as `test` or `production`. We recommend that you use a consistent set of tag keys to make it easier to track costs associated with your resources.

Use cost allocation tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. 

You can also combine tags to track costs at a greater level of detail. For example, to track your service costs by region you might use the tag keys `Service` and `Region`. On one resource you might have the values `ElastiCache` and `Asia Pacific (Singapore)`, and on another resource the values `ElastiCache` and `Europe (Frankfurt)`. You can then see your total ElastiCache costs broken out by region. For more information, see [Use Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html) in the *AWS Billing User Guide*.
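
For example, grouping hypothetical billing line items by their `Region` tag (the costs and tags below are made up for illustration; a real cost allocation report comes from AWS Billing, not your own code):

```python
from collections import defaultdict

# Hypothetical billing line items: (monthly cost in USD, resource tags).
line_items = [
    (120.0, {"Service": "ElastiCache", "Region": "Asia Pacific (Singapore)"}),
    (80.0,  {"Service": "ElastiCache", "Region": "Europe (Frankfurt)"}),
    (45.0,  {"Service": "ElastiCache", "Region": "Europe (Frankfurt)"}),
]

# Group costs by the Region tag, as a cost allocation report would.
costs_by_region = defaultdict(float)
for cost, tags in line_items:
    costs_by_region[tags["Region"]] += cost

print(dict(costs_by_region))
# {'Asia Pacific (Singapore)': 120.0, 'Europe (Frankfurt)': 125.0}
```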

You can add ElastiCache cost allocation tags to ElastiCache node-based clusters. When you add, list, modify, copy, or remove a tag, the operation is applied only to the specified cluster.

**Characteristics of ElastiCache cost allocation tags**
+ Cost allocation tags are applied to ElastiCache resources, which are specified in CLI and API operations as an ARN. The resource type is `cluster`.

  ARN format: `arn:aws:elasticache:<region>:<customer-id>:<resource-type>:<resource-name>`

  Sample ARN: `arn:aws:elasticache:us-west-2:1234567890:cluster:my-cluster`
+ The tag key is the required name of the tag. The key's string value can be from 1 to 128 Unicode characters long and cannot be prefixed with `aws:`. The string can contain only the set of Unicode letters, digits, blank spaces, underscores ( _ ), periods ( . ), colons ( : ), backslashes ( \ ), equal signs ( = ), plus signs ( + ), hyphens ( - ), or at signs ( @ ).

   
+ The tag value is the optional value of the tag. The value's string value can be from 1 to 256 Unicode characters in length and cannot be prefixed with `aws:`. The string can contain only the set of Unicode letters, digits, blank spaces, underscores ( _ ), periods ( . ), colons ( : ), backslashes ( \ ), equal signs ( = ), plus signs ( + ), hyphens ( - ), or at signs ( @ ).

   
+ An ElastiCache resource can have a maximum of 50 tags.

   
+ Values do not have to be unique in a tag set. For example, you can have a tag set where the keys `Service` and `Application` both have the value `ElastiCache`.
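
The ARN format above can be assembled and parsed with a small helper. This is an illustrative sketch, not an AWS SDK utility:

```python
def build_elasticache_arn(region, account_id, resource_type, resource_name):
    """Assemble an ElastiCache ARN following the format described above."""
    return f"arn:aws:elasticache:{region}:{account_id}:{resource_type}:{resource_name}"

def parse_elasticache_arn(arn):
    """Split an ElastiCache ARN into its components."""
    _arn, _partition, _service, region, account_id, resource_type, resource_name = arn.split(":", 6)
    return {"region": region, "account_id": account_id,
            "resource_type": resource_type, "resource_name": resource_name}

arn = build_elasticache_arn("us-west-2", "1234567890", "cluster", "my-cluster")
print(arn)  # arn:aws:elasticache:us-west-2:1234567890:cluster:my-cluster
print(parse_elasticache_arn(arn)["resource_type"])  # cluster
```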

AWS does not apply any semantic meaning to your tags. Tags are interpreted strictly as character strings. AWS does not automatically set any tags on any ElastiCache resource.

# Managing your cost allocation tags using the AWS CLI
<a name="Tagging.Managing.CLI"></a>

You can use the AWS CLI to add, modify, or remove cost allocation tags.

Cost allocation tags are applied to ElastiCache clusters. The cluster to be tagged is specified using an ARN (Amazon Resource Name).

Sample arn: `arn:aws:elasticache:us-west-2:1234567890:cluster:my-cluster`

**Topics**
+ [Listing tags using the AWS CLI](#Tagging.Managing.CLI.List)
+ [Adding tags using the AWS CLI](#Tagging.Managing.CLI.Add)
+ [Modifying tags using the AWS CLI](#Tagging.Managing.CLI.Modify)
+ [Removing tags using the AWS CLI](#Tagging.Managing.CLI.Remove)

## Listing tags using the AWS CLI
<a name="Tagging.Managing.CLI.List"></a>

You can use the AWS CLI to list tags on an existing ElastiCache resource by using the [list-tags-for-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-tags-for-resource.html) operation.

The following code uses the AWS CLI to list the tags on the Memcached cluster `my-cluster` in the us-west-2 region.

For Linux, macOS, or Unix:

```
aws elasticache list-tags-for-resource \
  --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster
```

For Windows:

```
aws elasticache list-tags-for-resource ^
  --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster
```

The following code uses the AWS CLI to list the tags on the Valkey or Redis OSS node `my-cluster-001` in the `my-cluster` cluster in region us-west-2.

For Linux, macOS, or Unix:

```
aws elasticache list-tags-for-resource \
  --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001
```

For Windows:

```
aws elasticache list-tags-for-resource ^
  --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001
```

Output from this operation will look something like the following, a list of all the tags on the resource.

```
{
   "TagList": [
      {
         "Value": "10110",
         "Key": "CostCenter"
      },
      {
         "Value": "EC2",
         "Key": "Service"
      }
   ]
}
```

If there are no tags on the resource, the output will be an empty TagList.

```
{
   "TagList": []
}
```

For more information, see the AWS CLI for ElastiCache [list-tags-for-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/list-tags-for-resource.html).

## Adding tags using the AWS CLI
<a name="Tagging.Managing.CLI.Add"></a>

You can use the AWS CLI to add tags to an existing ElastiCache resource by using the [add-tags-to-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/add-tags-to-resource.html) CLI operation. If the tag key does not exist on the resource, the key and value are added to the resource. If the key already exists on the resource, the value associated with that key is updated to the new value.

The following code uses the AWS CLI to add the keys `Service` and `Region`, with the values `elasticache` and `us-west-2` respectively, to the Memcached cluster `my-cluster` or to the Valkey or Redis OSS node `my-cluster-001` in the us-west-2 region.

**Memcached**

For Linux, macOS, or Unix:

```
aws elasticache add-tags-to-resource \
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster \
 --tags Key=Service,Value=elasticache \
        Key=Region,Value=us-west-2
```

For Windows:

```
aws elasticache add-tags-to-resource ^
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster ^
 --tags Key=Service,Value=elasticache ^
        Key=Region,Value=us-west-2
```

**Redis**

For Linux, macOS, or Unix:

```
aws elasticache add-tags-to-resource \
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001 \
 --tags Key=Service,Value=elasticache \
        Key=Region,Value=us-west-2
```

For Windows:

```
aws elasticache add-tags-to-resource ^
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001 ^
 --tags Key=Service,Value=elasticache ^
        Key=Region,Value=us-west-2
```

Output from this operation will look something like the following, a list of all the tags on the resource following the operation.

```
{
   "TagList": [
      {
         "Value": "elasticache",
         "Key": "Service"
      },
      {
         "Value": "us-west-2",
         "Key": "Region"
      }
   ]
}
```

For more information, see the AWS CLI for ElastiCache [add-tags-to-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/add-tags-to-resource.html).

You can also use the AWS CLI to add tags to a cluster when you create a new cluster by using the operation [create-cache-cluster](https://docs.aws.amazon.com/cli/latest/reference/elasticache/create-cache-cluster.html). You cannot add tags when creating a cluster using the ElastiCache management console. After the cluster is created, you can then use the console to add tags to the cluster.

## Modifying tags using the AWS CLI
<a name="Tagging.Managing.CLI.Modify"></a>

You can use the AWS CLI to modify the tags on an ElastiCache cluster.

To modify tags:
+ Use [add-tags-to-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/add-tags-to-resource.html) to either add a new tag and value or to change the value associated with an existing tag.
+ Use [remove-tags-from-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/remove-tags-from-resource.html) to remove specified tags from the resource.

Output from either operation will be a list of tags and their values on the specified cluster.

## Removing tags using the AWS CLI
<a name="Tagging.Managing.CLI.Remove"></a>

You can use the AWS CLI to remove tags from an existing ElastiCache for Memcached cluster by using the [remove-tags-from-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/remove-tags-from-resource.html) operation.

For Memcached, the following code uses the AWS CLI to remove the tags with the keys `Service` and `Region` from the cluster `my-cluster` in the us-west-2 region.

For Linux, macOS, or Unix:

```
aws elasticache remove-tags-from-resource \
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster \
 --tag-keys Service Region
```

For Windows:

```
aws elasticache remove-tags-from-resource ^
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster ^
 --tag-keys Service Region
```

For Redis OSS, the following code uses the AWS CLI to remove the tags with the keys `Service` and `Region` from the node `my-cluster-001` in the cluster `my-cluster` in the us-west-2 region.

For Linux, macOS, or Unix:

```
aws elasticache remove-tags-from-resource \
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001 \
 --tag-keys Service Region
```

For Windows:

```
aws elasticache remove-tags-from-resource ^
 --resource-name arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001 ^
 --tag-keys Service Region
```

Output from this operation will look something like the following, a list of all the tags on the resource following the operation.

```
{
   "TagList": []
}
```

For more information, see the AWS CLI for ElastiCache [remove-tags-from-resource](https://docs.aws.amazon.com/cli/latest/reference/elasticache/remove-tags-from-resource.html).

# Managing your cost allocation tags using the ElastiCache API
<a name="Tagging.Managing.API"></a>

You can use the ElastiCache API to add, modify, or remove cost allocation tags.

Cost allocation tags are applied to ElastiCache clusters. The cluster to be tagged is specified using an ARN (Amazon Resource Name).

Sample arn: `arn:aws:elasticache:us-west-2:1234567890:cluster:my-cluster`

**Topics**
+ [Listing tags using the ElastiCache API](#Tagging.Managing.API.List)
+ [Adding tags using the ElastiCache API](#Tagging.Managing.API.Add)
+ [Modifying tags using the ElastiCache API](#Tagging.Managing.API.Modify)
+ [Removing tags using the ElastiCache API](#Tagging.Managing.API.Remove)

## Listing tags using the ElastiCache API
<a name="Tagging.Managing.API.List"></a>

You can use the ElastiCache API to list tags on an existing resource by using the [ListTagsForResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_ListTagsForResource.html) operation.

For Memcached, the following code uses the ElastiCache API to list the tags on the resource `my-cluster` in the us-west-2 region.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ListTagsForResource
   &ResourceName=arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Version=2015-02-02
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

For Redis OSS, the following code uses the ElastiCache API to list the tags on the resource `my-cluster-001` in the us-west-2 region.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=ListTagsForResource
   &ResourceName=arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Version=2015-02-02
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

## Adding tags using the ElastiCache API
<a name="Tagging.Managing.API.Add"></a>

You can use the ElastiCache API to add tags to an existing ElastiCache cluster by using the [AddTagsToResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_AddTagsToResource.html) operation. If the tag key does not exist on the resource, the key and value are added to the resource. If the key already exists on the resource, the value associated with that key is updated to the new value.

The following code uses the ElastiCache API to add the keys `Service` and `Region` with the values `elasticache` and `us-west-2` respectively. For Memcached, this is applied to the resource `my-cluster`. For Redis OSS, this is applied to the resource `my-cluster-001` in the us-west-2 region. 

**Memcached**

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=AddTagsToResource
   &ResourceName=arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Tags.member.1.Key=Service 
   &Tags.member.1.Value=elasticache
   &Tags.member.2.Key=Region
   &Tags.member.2.Value=us-west-2
   &Version=2015-02-02
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

**Redis**

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=AddTagsToResource
   &ResourceName=arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &Tags.member.1.Key=Service 
   &Tags.member.1.Value=elasticache
   &Tags.member.2.Key=Region
   &Tags.member.2.Value=us-west-2
   &Version=2015-02-02
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

For more information, see [AddTagsToResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_AddTagsToResource.html) in the *Amazon ElastiCache API Reference*.
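
The `Tags.member.N` encoding used in these Query API requests can be generated programmatically. A minimal sketch (the helper name is hypothetical):

```python
from urllib.parse import urlencode

def tags_to_query_params(tags):
    """Encode a tag dict as the Tags.member.N query parameters
    used by the ElastiCache Query API, as in the requests above."""
    params = {}
    for i, (key, value) in enumerate(sorted(tags.items()), start=1):
        params[f"Tags.member.{i}.Key"] = key
        params[f"Tags.member.{i}.Value"] = value
    return params

params = tags_to_query_params({"Service": "elasticache", "Region": "us-west-2"})
print(urlencode(params))
```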

## Modifying tags using the ElastiCache API
<a name="Tagging.Managing.API.Modify"></a>

You can use the ElastiCache API to modify the tags on an ElastiCache cluster.

To modify the value of a tag:
+ Use [AddTagsToResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_AddTagsToResource.html) operation to either add a new tag and value or to change the value of an existing tag.
+ Use [RemoveTagsFromResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_RemoveTagsFromResource.html) to remove tags from the resource.

Output from either operation will be a list of tags and their values on the specified resource.

## Removing tags using the ElastiCache API
<a name="Tagging.Managing.API.Remove"></a>

You can use the ElastiCache API to remove tags from an existing ElastiCache for Memcached cluster by using the [RemoveTagsFromResource](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_RemoveTagsFromResource.html) operation.

The following code uses the ElastiCache API to remove the tags with the keys `Service` and `Region` from the node `my-cluster-001` in the cluster `my-cluster` in the us-west-2 region.

```
https://elasticache.us-west-2.amazonaws.com/
   ?Action=RemoveTagsFromResource
   &ResourceName=arn:aws:elasticache:us-west-2:0123456789:cluster:my-cluster-001
   &SignatureVersion=4
   &SignatureMethod=HmacSHA256
   &TagKeys.member.1=Service
   &TagKeys.member.2=Region
   &Version=2015-02-02
   &Timestamp=20150202T192317Z
   &X-Amz-Credential=<credential>
```

# Using the Amazon ElastiCache Well-Architected Lens
<a name="WellArchitechtedLens"></a>

This section describes the Amazon ElastiCache Well-Architected Lens, a collection of design principles and guidance for designing well-architected ElastiCache workloads.
+ The ElastiCache Lens is additive to the [AWS Well-Architected Framework](https://docs.aws.amazon.com/wellarchitected/latest/framework/welcome.html).
+ Each Pillar has a set of questions to help start the discussion around an ElastiCache Architecture.
  + Each question has a number of leading practices along with their scores for reporting.
    + *Required* - Necessary before going to production (absence is a high risk)
    + *Best* - The best possible state a customer can achieve
    + *Good* - What we recommend customers have (absence is a medium risk)
+ Well-Architected terminology
  + [Component](https://docs.aws.amazon.com/wellarchitected/latest/framework/definitions.html) – Code, configuration and AWS Resources that together deliver against a requirement. Components interact with other components, and often equate to a service in microservice architectures.
  + [Workload](https://docs.aws.amazon.com/wellarchitected/latest/framework/definitions.html) - A set of components that together deliver business value. Examples of workloads are marketing websites, e-commerce websites, the back-ends for a mobile app, analytic platforms, etc.

**Note**  
This guide has not been updated to include information on ElastiCache serverless caching and the new Valkey engine.

**Topics**
+ [Amazon ElastiCache Well-Architected Lens Operational Excellence Pillar](OperationalExcellencePillar.md)
+ [Amazon ElastiCache Well-Architected Lens Security Pillar](SecurityPillar.md)
+ [Amazon ElastiCache Well-Architected Lens Reliability Pillar](ReliabilityPillar.md)
+ [Amazon ElastiCache Well-Architected Lens Performance Efficiency Pillar](PerformanceEfficiencyPillar.md)
+ [Amazon ElastiCache Well-Architected Lens Cost Optimization Pillar](CostOptimizationPillar.md)

# Amazon ElastiCache Well-Architected Lens Operational Excellence Pillar
<a name="OperationalExcellencePillar"></a>

The operational excellence pillar focuses on running and monitoring systems to deliver business value, and continually improving processes and procedures. Key topics include automating changes, responding to events, and defining standards to manage daily operations.

**Topics**
+ [OE 1: How do you understand and respond to alerts and events triggered by your ElastiCache cluster?](#OperationalExcellencePillarOE1)
+ [OE 2: When and how do you scale your existing ElastiCache clusters?](#OperationalExcellencePillarOE2)
+ [OE 3: How do you manage your ElastiCache cluster resources and maintain your cluster up-to-date?](#OperationalExcellencePillarOE3)
+ [OE 4: How do you manage clients’ connections to your ElastiCache clusters?](#OperationalExcellencePillarOE4)
+ [OE 5: How do you deploy ElastiCache Components for a Workload?](#OperationalExcellencePillarOE5)
+ [OE 6: How do you plan for and mitigate failures?](#OperationalExcellencePillarOE6)
+ [OE 7: How do you troubleshoot Valkey or Redis OSS engine events?](#OperationalExcellencePillarOE7)

## OE 1: How do you understand and respond to alerts and events triggered by your ElastiCache cluster?
<a name="OperationalExcellencePillarOE1"></a>

**Question-level introduction: **When you operate ElastiCache clusters you can optionally receive notifications and alerts when specific events occur. ElastiCache, by default, logs [events](ECEvents.md) that relate to your resources, such as a failover, node replacement, scaling operation, scheduled maintenance, and more. Each event includes the date and time, the source name and source type, and a description.

**Question-level benefit: **Being able to understand and manage the underlying reasons behind the events that trigger alerts generated by your cluster enables you to operate more effectively and respond to events appropriately.
+ **[Required]** Review the events generated by ElastiCache on the ElastiCache console (after selecting your region) or using the [Amazon Command Line Interface](http://aws.amazon.com/cli) (AWS CLI) [describe-events](https://docs.aws.amazon.com/cli/latest/reference/elasticache/describe-events.html) command and the [ElastiCache API](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_DescribeEvents.html). Configure ElastiCache to send notifications for important cluster events using Amazon Simple Notification Service (Amazon SNS). Using Amazon SNS with your clusters allows you to programmatically take actions upon ElastiCache events.
  + There are two broad categories of events: current and scheduled. Current events include resource creation and deletion, scaling operations, failover, node reboot, snapshot creation, cluster parameter modification, CA certificate renewal, and failure events (cluster provisioning failures due to VPC or ENI issues, scaling failures due to ENI issues, and snapshot failures). Scheduled events include a node scheduled for replacement during the maintenance window and a node replacement being rescheduled.
  + Although you may not need to react immediately to some of these events, it is critical to first look at all failure events:
    + ElastiCache:AddCacheNodeFailed
    + ElastiCache:CacheClusterProvisioningFailed
    + ElastiCache:CacheClusterScalingFailed
    + ElastiCache:CacheNodesRebooted
    + ElastiCache:SnapshotFailed (Valkey or Redis OSS only)
  + **[Resources]:**
    + [Managing ElastiCache Amazon SNS notifications](ECEvents.SNS.md)
    + [Event Notifications and Amazon SNS](ElastiCacheSNS.md)
+ **[Best]** To automate responses to events, leverage AWS services such as Amazon SNS and AWS Lambda. Follow best practices by making small, frequent, reversible changes as code to evolve your operations over time. Use Amazon CloudWatch metrics to monitor your clusters.

  **[Resources]:** [Monitor ElastiCache (cluster mode disabled) read replica endpoints using AWS Lambda, Amazon Route 53, and Amazon SNS](https://aws.amazon.com/blogs/database/monitor-amazon-elasticache-for-redis-cluster-mode-disabled-read-replica-endpoints-using-aws-lambda-amazon-route-53-and-amazon-sns/) for a use case that uses Lambda and SNS. 
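As a sketch of such an automated response, the Lambda handler below classifies SNS-delivered ElastiCache events against the failure list above. The message shape (a JSON object mapping event type to the affected resource) and the handler's behavior are assumptions for illustration, not the service's documented payload.

```python
import json

# Sketch of a Lambda handler reacting to ElastiCache events delivered via
# Amazon SNS. The message format (event type -> resource name) is assumed
# for illustration; check the actual SNS payload your topic receives.
FAILURE_EVENTS = {
    "ElastiCache:AddCacheNodeFailed",
    "ElastiCache:CacheClusterProvisioningFailed",
    "ElastiCache:CacheClusterScalingFailed",
    "ElastiCache:CacheNodesRebooted",
    "ElastiCache:SnapshotFailed",
}

def handler(event, context=None):
    """Return the failure events found in an SNS-triggered invocation."""
    failures = []
    for record in event.get("Records", []):
        message = json.loads(record["Sns"]["Message"])
        for event_type, resource in message.items():
            if event_type in FAILURE_EVENTS:
                # A real handler might page an operator or open a ticket here.
                failures.append((event_type, resource))
    return failures

sample = {"Records": [{"Sns": {"Message":
    json.dumps({"ElastiCache:SnapshotFailed": "my-cluster"})}}]}
print(handler(sample))  # -> [('ElastiCache:SnapshotFailed', 'my-cluster')]
```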

## OE 2: When and how do you scale your existing ElastiCache clusters?
<a name="OperationalExcellencePillarOE2"></a>

**Question-level introduction: **Right-sizing your ElastiCache cluster is a balancing act that needs to be evaluated every time there are changes to the underlying workload types. Your objective is to operate with the right sized environment for your workload.

**Question-level benefit: **Over-utilization of your resources can result in elevated latency and decreased overall performance. Under-utilization, on the other hand, can result in over-provisioned resources and unnecessary cost. By right-sizing your environments you can strike a balance between performance efficiency and cost optimization. To remediate over- or under-utilization of your resources, ElastiCache can scale in two dimensions. You can scale vertically by increasing or decreasing node capacity, and horizontally by adding and removing nodes.
+ **[Required]** CPU and network over-utilization on primary nodes should be addressed by offloading and redirecting the read operations to replica nodes. Use replica nodes for read operations to reduce primary node utilization. This can be configured in your Valkey or Redis OSS client library by connecting to the ElastiCache reader endpoint for cluster mode disabled, or by using the READONLY command for cluster mode enabled.

  **[Resources]:**
  + [Finding connection endpoints in ElastiCache](Endpoints.md)
  + [Cluster Right-Sizing](https://aws.amazon.com/blogs/database/five-workload-characteristics-to-consider-when-right-sizing-amazon-elasticache-redis-clusters/)
  + [READONLY Command](https://valkey.io/commands/readonly)
+ **[Required]** Monitor the utilization of critical cluster resources such as CPU, memory, and network. The utilization of these specific cluster resources needs to be tracked to inform your decision to scale, and the type of scaling operation. For ElastiCache cluster mode disabled, primary and replica nodes can scale vertically. Replica nodes can also scale horizontally from 0 to 5 nodes. For cluster mode enabled, the same applies within each shard of your cluster. In addition, you can increase or reduce the number of shards.

  **[Resources]:**
  + [Monitoring best practices with ElastiCache using Amazon CloudWatch](https://aws.amazon.com/blogs/database/monitoring-best-practices-with-amazon-elasticache-for-redis-using-amazon-cloudwatch/)
  + [Scaling ElastiCache Clusters for Valkey and Redis OSS](Scaling.md)
  + [Scaling ElastiCache Clusters for Memcached](Scaling.md)
+ **[Best]** Monitoring trends over time can help you detect workload changes that would remain unnoticed if monitored at a particular point in time. To detect longer term trends, use CloudWatch metrics to scan for longer time ranges. The learnings from observing extended periods of CloudWatch metrics should inform your forecast around cluster resources utilization. CloudWatch data points and metrics are available for up to 455 days.

  **[Resources]:**
  + [Monitoring ElastiCache with CloudWatch Metrics](CacheMetrics.md)
  + [Monitoring Memcached with CloudWatch Metrics](CacheMetrics.md)
  + [Monitoring best practices with ElastiCache using Amazon CloudWatch](https://aws.amazon.com/blogs/database/monitoring-best-practices-with-amazon-elasticache-for-redis-using-amazon-cloudwatch/)
+ **[Best]** If your ElastiCache resources are created with CloudFormation it is best practice to perform changes using CloudFormation templates to preserve operational consistency and avoid unmanaged configuration changes and stack drifts.

  **[Resources]:**
  + [ElastiCache resource type reference for CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ElastiCache.html)
+ **[Best]** Automate your scaling operations using cluster operational data, and define thresholds in CloudWatch to set up alarms. Use CloudWatch Events and Amazon Simple Notification Service (SNS) to trigger Lambda functions that call the ElastiCache API to scale your clusters automatically. For example, add a shard to your cluster when the `EngineCPUUtilization` metric stays above 80% for an extended period of time, or use `DatabaseMemoryUsagePercentage` for a memory-based threshold.

  **[Resources]:**
  + [Using Amazon CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)
  + [What are Amazon CloudWatch events?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html)
  + [Using AWS Lambda with Amazon Simple Notification Service](https://docs.aws.amazon.com/lambda/latest/dg/with-sns.html)
  + [ElastiCache API Reference](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/Welcome.html)
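The alarm logic described above can be sketched as a pure decision function over recent CloudWatch datapoints. The 80% threshold and the number of sustained datapoints are illustrative values, not recommendations:

```python
# Sketch of the scaling decision described above: recommend adding a shard
# only when EngineCPUUtilization stays above the threshold for several
# consecutive datapoints, rather than reacting to a single spike.

def should_add_shard(cpu_datapoints, threshold=80.0, sustained_points=3):
    """True when the most recent datapoints all exceed the threshold."""
    recent = cpu_datapoints[-sustained_points:]
    return len(recent) == sustained_points and all(p > threshold for p in recent)

print(should_add_shard([60, 85, 90, 92]))   # sustained high CPU -> True
print(should_add_shard([85, 90, 60, 92]))   # dip below threshold -> False
```

In a real pipeline, an alarm on the metric would invoke a Lambda function that runs this kind of check and then calls the ElastiCache API to add the shard.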

## OE 3: How do you manage your ElastiCache cluster resources and maintain your cluster up-to-date?
<a name="OperationalExcellencePillarOE3"></a>

**Question-level introduction: **When operating at scale, it is essential that you are able to pinpoint and identify all your ElastiCache resources. When rolling out new application features you need to create cluster version symmetry across all your ElastiCache environment types: dev, testing, and production. Resource attributes allow you to separate environments for different operational objectives, such as when rolling out new features and enabling new security mechanisms. 

**Question-level benefit: **Separating your development, testing, and production environments is best operational practice. It is also best practice that your clusters and nodes across environments have the latest software patches applied using well understood and documented processes. Taking advantage of native ElastiCache features enables your engineering team to focus on meeting business objectives and not on ElastiCache maintenance.
+ **[Best]** Run on the latest engine version available and apply the Self-Service Updates as quickly as they become available. ElastiCache automatically updates its underlying infrastructure during your specified maintenance window of the cluster. However, the nodes running in your clusters are updated via Self-Service Updates. These updates can be of two types: security patches or minor software updates. Ensure you understand the difference between types of patches and when they are applied.

  **[Resources]:**
  + [Self-Service Updates in Amazon ElastiCache](Self-Service-Updates.md)
  + [Amazon ElastiCache Managed Maintenance and Service Updates Help Page](https://aws.amazon.com/elasticache/elasticache-maintenance/)
+ **[Best]** Organize your ElastiCache resources using tags. Use tags on replication groups and not on individual nodes. You can configure tags to be displayed when you query resources and you can use tags to perform searches and apply filters. You should use Resource Groups to easily create and maintain collections of resources that share common sets of tags.

  **[Resources]:**
  + [Tagging Best Practices](https://d1.awsstatic.com/whitepapers/aws-tagging-best-practices.pdf)
  + [ElastiCache resource type reference for CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ElastiCache.html)
  + [Parameter Groups](ParameterGroups.Engine.md#ParameterGroups.Redis)

## OE 4: How do you manage clients’ connections to your ElastiCache clusters?
<a name="OperationalExcellencePillarOE4"></a>

**Question-level introduction: **When operating at scale you need to understand how your clients connect with the ElastiCache cluster to manage your application operational aspects (such as response times). 

**Question-level benefit: **Choosing the most appropriate connection mechanism ensures that your application does not disconnect due to connectivity errors, such as time-outs.
+ **[Required]** Separate read from write operations and connect to the replica nodes to execute read operations. However, be aware that when you separate writes from reads, you lose the ability to read a key immediately after writing it, due to the asynchronous nature of Valkey and Redis OSS replication. The WAIT command can be leveraged to improve real-world data safety by forcing replicas to acknowledge writes before the server responds to clients, at an overall performance cost. Using replica nodes for read operations can be configured in your ElastiCache client library using the ElastiCache reader endpoint for cluster mode disabled. For cluster mode enabled, use the READONLY command. For many of the ElastiCache client libraries, READONLY is implemented by default or via a configuration setting.

  **[Resources]:**
  + [Finding connection endpoints in ElastiCache](Endpoints.md)
  + [READONLY](https://valkey.io/commands/readonly)
+ **[Required]** Use connection pooling. Establishing a TCP connection costs CPU time on both the client and server sides, and pooling allows you to reuse the TCP connection.

  With a pool of connections, your application can reuse and release connections at will, without the cost of establishing each connection. You can implement connection pooling through your ElastiCache client library (if supported), with a framework available for your application environment, or by building it from the ground up.
+ **[Best]** Ensure that the socket timeout of the client is set to at least one second (vs. the typical “none” default in several clients).
  + Setting the timeout value too low can lead to possible timeouts when the server load is high. Setting it too high can result in your application taking a long time to detect connection issues.
  + Control the volume of new connections by implementing connection pooling in your client application. This reduces latency and CPU utilization needed to open and close connections, and perform a TLS handshake if TLS is enabled on the cluster.

  **[Resources]:** [Configure ElastiCache for higher availability](https://aws.amazon.com/blogs/database/configuring-amazon-elasticache-for-redis-for-higher-availability/)
+ **[Good]** Using pipelining (when your use cases allow it) can significantly boost performance.
  + With pipelining you reduce the round-trip time (RTT) between your application clients and the cluster, and new requests can be processed even if the client has not yet read the previous responses.
  + With pipelining you can send multiple commands to the server without waiting for replies or acknowledgments. The downside of pipelining is that when you eventually fetch all the responses in bulk, there may have been an error that you will not catch until the end.
  + Implement retry logic that, when an error is returned, resends the requests while omitting the one that failed.

  **[Resources]:** [Pipelining](https://valkey.io/topics/pipelining/)
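A toy model of pipelining, assuming a fake in-process server (a real application would use its client library's pipeline support rather than anything like this):

```python
# Toy client illustrating the pipelining idea above: commands are queued
# locally and sent as one batch, so a single round trip covers many commands.

class FakeServer:
    def __init__(self):
        self.round_trips = 0

    def execute_batch(self, commands):
        self.round_trips += 1           # one RTT for the whole batch
        return [f"OK {c}" for c in commands]

class Pipeline:
    def __init__(self, server):
        self.server = server
        self.queue = []

    def send(self, command):
        self.queue.append(command)      # no network traffic yet

    def flush(self):
        replies = self.server.execute_batch(self.queue)
        self.queue = []
        return replies                  # all replies fetched in bulk

server = FakeServer()
pipe = Pipeline(server)
for i in range(5):
    pipe.send(f"SET key:{i} {i}")
pipe.flush()
print(server.round_trips)               # 5 commands, 1 round trip
```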

## OE 5: How do you deploy ElastiCache Components for a Workload?
<a name="OperationalExcellencePillarOE5"></a>

**Question-level introduction: **ElastiCache environments can be deployed manually through the AWS Console, or programmatically through APIs, CLI, toolkits, etc. Operational Excellence best practices suggest automating deployments through code whenever possible. Additionally, ElastiCache clusters can either be isolated by workload or combined for cost optimization purposes.

**Question-level benefit: **Choosing the most appropriate deployment mechanism for your ElastiCache environments can improve Operation Excellence over time. It is recommended to perform operations as code whenever possible to minimize human error and increase repeatability, flexibility, and response time to events.

By understanding the workload isolation requirements, you can choose to have dedicated ElastiCache environments per workload, combine multiple workloads into single clusters, or use combinations thereof. Understanding the tradeoffs can help strike a balance between Operational Excellence and Cost Optimization.
+ **[Required]** Understand the deployment options available to ElastiCache, and automate these procedures whenever possible. Possible avenues of automation include CloudFormation, AWS CLI/SDK, and APIs.

  **[Resources]: **
  + [Amazon ElastiCache resource type reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/AWS_ElastiCache.html)
  + [elasticache](https://docs.aws.amazon.com/cli/latest/reference/elasticache/index.html)
  + [Amazon ElastiCache API Reference](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/Welcome.html)
+ **[Required]** For all workloads determine the level of cluster isolation needed. 
  + **[Best]:** High Isolation – a 1:1 workload to cluster mapping. Allows for finest grained control over access, sizing, scaling, and management of ElastiCache resources on a per workload basis.
  + **[Better]:** Medium Isolation – M:1 isolated by purpose but perhaps shared across multiple workloads (for example a cluster dedicated to caching workloads, and another dedicated for messaging).
  + **[Good]:** Low Isolation – M:1 all purpose, fully shared. Recommended for workloads where shared access is acceptable.

## OE 6: How do you plan for and mitigate failures?
<a name="OperationalExcellencePillarOE6"></a>

**Question-level introduction: **Operational Excellence includes anticipating failures by performing regular "pre-mortem" exercises to identify potential sources of failure so they can be removed or mitigated. ElastiCache offers a Failover API that allows for simulated node failure events, for testing purposes.

**Question-level benefit: **By testing failure scenarios ahead of time you can learn how they impact your workload. This allows for safe testing of response procedures and their effectiveness, as well as gets your team familiar with their execution.

**[Required]** Regularly perform failover testing in dev/test accounts using the [TestFailover](https://docs.aws.amazon.com/AmazonElastiCache/latest/APIReference/API_TestFailover.html) API operation.

## OE 7: How do you troubleshoot Valkey or Redis OSS engine events?
<a name="OperationalExcellencePillarOE7"></a>

**Question-level introduction: **Operational Excellence requires the ability to investigate both service-level and engine-level information to analyze the health and status of your clusters. ElastiCache can emit Valkey or Redis OSS engine logs to both Amazon CloudWatch and Amazon Kinesis Data Firehose.

**Question-level benefit: **Enabling Valkey or Redis OSS engine logs on ElastiCache clusters provides insight into events that impact the health and performance of clusters. Valkey or Redis OSS engine logs provide data directly from the engine that is not available through the ElastiCache events mechanism. Through careful observation of both ElastiCache events (see preceding OE-1) and engine logs, it is possible to determine an order of events when troubleshooting from both the ElastiCache service perspective and engine perspective.
+ **[Required]** Ensure that Redis OSS engine logging is enabled; it is available in ElastiCache version 6.2 for Redis OSS and newer. You can enable it during cluster creation or by modifying the cluster after creation.
  + Determine whether Amazon CloudWatch Logs or Amazon Kinesis Data Firehose is the appropriate target for Redis OSS engine logs.
  + Select an appropriate target log within either CloudWatch or Kinesis Data Firehose to persist the logs. If you have multiple clusters, consider a different target log for each cluster as this will help isolate data when troubleshooting.

  **[Resources]:**
  + Log delivery: [Log delivery](Log_Delivery.md)
  + Logging destinations: [Amazon CloudWatch Logs](Logging-destinations.md#Destination_Specs_CloudWatch_Logs)
  + Amazon CloudWatch Logs introduction: [What is Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html)
  + Amazon Kinesis Data Firehose introduction: [What Is Amazon Kinesis Data Firehose?](https://docs.aws.amazon.com/firehose/latest/dev/what-is-this-service.html)
+ **[Best]** If using Amazon CloudWatch Logs, consider leveraging Amazon CloudWatch Logs Insights to query Valkey or Redis OSS engine log for important information.

  As an example, create a query against the CloudWatch Log group that contains the Valkey or Redis OSS engine logs that will return events with a LogLevel of ‘WARNING’, such as:

  ```
  fields @timestamp, LogLevel, Message
  | sort @timestamp desc
  | filter LogLevel = "WARNING"
  ```

  **[Resources]:** [Analyzing log data with CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html)
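The same filter-and-sort can be expressed in plain Python over already-parsed engine log records; the field names here simply mirror those used in the Logs Insights query:

```python
# Mirror of the Logs Insights query: keep records with LogLevel "WARNING"
# and sort them by @timestamp in descending order.
records = [
    {"@timestamp": "2023-01-01T00:00:02Z", "LogLevel": "WARNING", "Message": "m2"},
    {"@timestamp": "2023-01-01T00:00:01Z", "LogLevel": "NOTICE",  "Message": "m1"},
    {"@timestamp": "2023-01-01T00:00:03Z", "LogLevel": "WARNING", "Message": "m3"},
]

warnings = sorted(
    (r for r in records if r["LogLevel"] == "WARNING"),  # filter LogLevel
    key=lambda r: r["@timestamp"],
    reverse=True,                                        # sort @timestamp desc
)
print([r["Message"] for r in warnings])  # -> ['m3', 'm2']
```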

# Amazon ElastiCache Well-Architected Lens Security Pillar
<a name="SecurityPillar"></a>

The security pillar focuses on protecting information and systems. Key topics include confidentiality and integrity of data, identifying and managing who can do what with privilege-based management, protecting systems, and establishing controls to detect security events.

**Topics**
+ [SEC 1: What steps are you taking in controlling authorized access to ElastiCache data?](#SecurityPillarSEC1)
+ [SEC 2: Do your applications require additional authorization to ElastiCache over and above networking-based controls?](#SecurityPillarSEC2)
+ [SEC 3: Is there a risk that commands can be executed inadvertently, causing data loss or failure?](#SecurityPillarSEC3)
+ [SEC 4: How do you ensure data encryption at rest with ElastiCache](#SecurityPillarSEC4)
+ [SEC 5: How do you encrypt in-transit data with ElastiCache?](#SecurityPillarSEC5)
+ [SEC 6: How do you restrict access to control plane resources?](#SecurityPillarSEC6)
+ [SEC 7: How do you detect and respond to security events?](#SecurityPillarSEC7)

## SEC 1: What steps are you taking in controlling authorized access to ElastiCache data?
<a name="SecurityPillarSEC1"></a>

**Question-level introduction: **All ElastiCache clusters are designed to be accessed from Amazon Elastic Compute Cloud (Amazon EC2) instances in a VPC, serverless functions (AWS Lambda), or containers (Amazon Elastic Container Service). The most common scenario is to access an ElastiCache cluster from an Amazon EC2 instance within the same Amazon Virtual Private Cloud (Amazon VPC). Before you can connect to a cluster from an Amazon EC2 instance, you must authorize the Amazon EC2 instance to access the cluster. To access an ElastiCache cluster running in a VPC, you must grant network ingress to the cluster.

**Question-level benefit: **Network ingress into the cluster is controlled through VPC security groups. A security group acts as a virtual firewall for your Amazon EC2 instances, controlling incoming and outgoing traffic. Inbound rules control the incoming traffic to your instance, and outbound rules control the outgoing traffic from your instance. When you launch an ElastiCache cluster, you must associate a security group with it. This ensures that inbound and outbound traffic rules are in place for all nodes that make up the cluster. Additionally, ElastiCache is deployed exclusively on private subnets, so clusters are accessible only through the VPC's private network.
+ **[Required]** The security group associated with your cluster controls network ingress and access to the cluster. By default, a security group has no inbound rules defined and, therefore, no ingress path to ElastiCache. To enable access, configure an inbound rule on the security group specifying the source IP address or range, TCP traffic, and the port for your ElastiCache cluster (for example, the default port 6379 for ElastiCache for Valkey and Redis OSS). While it is possible to allow a very broad set of ingress sources, such as 0.0.0.0/0 (all IPv4 addresses), it is advisable to define inbound rules as granularly as possible, for example authorizing inbound access only to Valkey or Redis OSS clients running on Amazon EC2 instances associated with a specific security group.

  **[Resources]: **
  + [Subnets and subnet groups](SubnetGroups.md)
  + [Accessing your cluster or replication group](accessing-elasticache.md)
  + [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html#DefaultSecurityGroup)
  + [Amazon Elastic Compute Cloud security groups for Linux instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#creating-your-own-security-groups)
+ **[Required] **AWS Identity and Access Management policies can be assigned to AWS Lambda functions allowing them to access ElastiCache data. To enable this feature, create an IAM execution role with the `AWSLambdaVPCAccessExecutionRole` permission, then assign the role to the AWS Lambda function.

  **[Resources]: **Configuring a Lambda function to access Amazon ElastiCache in an Amazon VPC: [Tutorial: Configuring a Lambda function to access Amazon ElastiCache in an Amazon VPC](https://docs.aws.amazon.com/lambda/latest/dg/services-elasticache-tutorial.html)
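To illustrate why a narrow CIDR is preferable to 0.0.0.0/0, here is a toy model of an inbound-rule check. The rule fields are illustrative only and do not match the real EC2/VPC API shapes:

```python
import ipaddress

# Toy security-group inbound rule: a source is admitted only when its IP
# falls inside the rule's CIDR range and the port is within the rule's range.

def allows(rule, source_ip, port):
    in_range = ipaddress.ip_address(source_ip) in ipaddress.ip_network(rule["cidr"])
    return in_range and rule["from_port"] <= port <= rule["to_port"]

# Narrow rule: only the application subnet may reach the default port 6379.
narrow = {"cidr": "10.0.1.0/24", "from_port": 6379, "to_port": 6379}
print(allows(narrow, "10.0.1.15", 6379))    # client in the subnet -> True
print(allows(narrow, "203.0.113.9", 6379))  # outside the subnet  -> False
```

A rule with `"cidr": "0.0.0.0/0"` would admit both sources, which is why granular rules (or source security-group references) are advised.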

## SEC 2: Do your applications require additional authorization to ElastiCache over and above networking-based controls?
<a name="SecurityPillarSEC2"></a>

**Question-level introduction: **In scenarios where it is necessary to restrict or control access to clusters at an individual client level, it is recommended to authenticate via the AUTH command. ElastiCache authentication tokens, with optional user and user group management, enable ElastiCache to require a password before allowing clients to run commands and access keys, thereby improving data plane security.

**Question-level benefit: **To help keep your data secure, ElastiCache provides mechanisms to safeguard against unauthorized access to your data. These include enforcing that Role-Based Access Control (RBAC) or an AUTH token (password) be used by clients to connect to ElastiCache before running commands.
+ **[Best] **For ElastiCache version 6.x and higher for Redis OSS, and ElastiCache version 7.2 and higher for Valkey, define authentication and authorization controls by defining user groups, users, and access strings. Assign users to user groups, then assign user groups to clusters. To utilize RBAC, it must be selected upon cluster creation, and in-transit encryption must be enabled. Ensure you are using a Valkey or Redis OSS client that supports TLS to be able to leverage RBAC.

  **[Resources]: **
  + [Applying RBAC to a Replication Group for ElastiCache](Clusters.RBAC.md#rbac-using)
  + [Specifying Permissions Using an Access String](Clusters.RBAC.md#Access-string)
  + [ACL](https://valkey.io/topics/acl/)
  + [Supported ElastiCache versions](VersionManagement.md#supported-engine-versions)
+ **[Best] **For ElastiCache versions prior to 6.x for Redis OSS, in addition to setting strong token/password and maintaining a strict password policy for AUTH, it is best practice to rotate the password/token. ElastiCache can manage up to two (2) authentication tokens at any given time. You can also modify the cluster to explicitly require the use of authentication tokens.

  **[Resources]: **[Modifying the AUTH token on an existing ElastiCache cluster](auth.md#auth-modifyng-token)
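As a heavily simplified sketch of how an access string gates a request: in a string such as `on ~app:* +get +set`, key patterns follow `~` and permitted commands follow `+`. Real ACL semantics (command categories, channel patterns, resets) are much richer than this toy evaluator:

```python
import fnmatch

# Toy evaluator for a simplified RBAC access string: a command on a key is
# permitted only if the user is "on", the key matches a "~" pattern, and the
# command appears after a "+". Not a faithful ACL implementation.

def permits(access_string, command, key):
    parts = access_string.split()
    patterns = [p[1:] for p in parts if p.startswith("~")]
    commands = {p[1:] for p in parts if p.startswith("+")}
    key_ok = any(fnmatch.fnmatch(key, pat) for pat in patterns)
    return "on" in parts and key_ok and command in commands

acl = "on ~app:* +get +set"
print(permits(acl, "get", "app:user:1"))  # permitted command and key -> True
print(permits(acl, "del", "app:user:1"))  # command not granted      -> False
```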

## SEC 3: Is there a risk that commands can be executed inadvertently, causing data loss or failure?
<a name="SecurityPillarSEC3"></a>

**Question-level introduction: **There are a number of Valkey or Redis OSS commands that can have adverse impacts on operations if executed by mistake or by malicious actors. These commands can have unintended consequences from a performance and data-safety perspective. For example, a developer may routinely call the FLUSHALL command in a dev environment and, by mistake, inadvertently run the same command on a production system, resulting in accidental data loss.

**Question-level benefit: **Beginning with ElastiCache version 5.0.3 for Redis OSS, you have the ability to rename certain commands that might be disruptive to your workload. Renaming the commands can help prevent them from being inadvertently executed on the cluster. 
+ **[Required]** Rename potentially disruptive commands, such as `FLUSHALL`, using the `rename-commands` cluster parameter so that they cannot be invoked by their default names.

  **[Resources]: **
  + [ElastiCache version 5.0.3 for Redis OSS (deprecated, use version 5.0.6)](engine-versions.md#redis-version-5-0.3)
  + [ElastiCache version 5.0.3 for Redis OSS parameter changes](ParameterGroups.Engine.md#ParameterGroups.Redis.5-0-3)
  + [Redis OSS security](https://redis.io/docs/management/security/)
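As a hedged sketch of the 5.x approach, the following builds the parameters you might pass to `boto3.client("elasticache").modify_cache_parameter_group(**params)` to rename `FLUSHALL` on a custom parameter group. The group name and replacement command name are hypothetical; consult the parameter documentation for the exact value format supported by your engine version.

```python
def build_rename_commands_params(parameter_group, renames):
    """Assemble ModifyCacheParameterGroup parameters that rename commands.

    `renames` maps an original command to its new name; the value is assumed
    to be space-separated "old new" pairs, per the 5.0.3+ rename-commands
    parameter.
    """
    value = " ".join(f"{old} {new}" for old, new in renames.items())
    return {
        "CacheParameterGroupName": parameter_group,
        "ParameterNameValues": [
            {"ParameterName": "rename-commands", "ParameterValue": value}
        ],
    }

params = build_rename_commands_params(
    "my-redis5-params", {"flushall": "restrictedflushall"}
)
```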

## SEC 4: How do you ensure data encryption at rest with ElastiCache?
<a name="SecurityPillarSEC4"></a>

**Question-level introduction: **While ElastiCache is an in-memory data store, it is possible to encrypt any data that may be persisted (on storage) as part of standard operations of the cluster. This includes both scheduled and manual backups written to Amazon S3, as well as data saved to disk storage as a result of sync and swap operations. Instance types in the M6g and R6g families also feature always-on, in-memory encryption.

**Question-level benefit: **ElastiCache provides optional encryption at-rest to increase data security.
+ **[Required] **At-rest encryption can be enabled on an ElastiCache cluster (replication group) only when it is created. An existing cluster cannot be modified to begin encrypting data at-rest. By default, ElastiCache will provide and manage the keys used in at-rest encryption. 

  **[Resources]: **
  + [At-Rest Encryption Constraints](at-rest-encryption.md#at-rest-encryption-constraints)
  + [Enabling At-Rest Encryption](at-rest-encryption.md#at-rest-encryption-enable)
+ **[Best] **Leverage Amazon EC2 instance types that encrypt data while it is in memory (such as M6g or R6g). Where possible, consider managing your own keys for at-rest encryption. For more stringent data security environments, AWS Key Management Service (KMS) can be used to self-manage Customer Master Keys (CMK). Through ElastiCache integration with AWS Key Management Service, you are able to create, own, and manage the keys used for encryption of data at rest for your ElastiCache cluster.

  **[Resources]: **
  + [Using customer managed keys from AWS Key Management Service](at-rest-encryption.md#using-customer-managed-keys-for-elasticache-security)
  + [AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html)
  + [AWS KMS concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#master_keys)
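A minimal sketch of the encryption-related creation parameters follows, assuming a hypothetical replication group name and KMS key identifier. Because at-rest encryption cannot be added later, these settings must be present in the initial `create_replication_group` call.

```python
def build_encrypted_group_params(group_id, kms_key_id, node_type="cache.r6g.large"):
    """Assemble the encryption-related CreateReplicationGroup parameters.

    At-rest encryption must be chosen at creation time; KmsKeyId selects a
    customer managed key instead of the default service-managed key. The
    node type defaults to R6g, which also encrypts data while in memory.
    """
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "encrypted-at-rest example",
        "CacheNodeType": node_type,
        "AtRestEncryptionEnabled": True,
        "KmsKeyId": kms_key_id,        # omit to use the default key
        "TransitEncryptionEnabled": True,
    }

params = build_encrypted_group_params("secure-cache", "alias/example-cache-key")
```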

## SEC 5: How do you encrypt in-transit data with ElastiCache?
<a name="SecurityPillarSEC5"></a>

**Question-level introduction: **It is a common requirement to protect data from being compromised while in transit. This applies to data moving between components of a distributed system, as well as between application clients and cluster nodes. ElastiCache supports this requirement by allowing encryption of data in transit between clients and the cluster, and between cluster nodes themselves. Instance types in the M6g and R6g families also feature always-on, in-memory encryption. 

**Question-level benefit: **Amazon ElastiCache in-transit encryption is an optional feature that allows you to increase the security of your data at its most vulnerable points, when it is in-transit from one location to another.
+ **[Required] **In-transit encryption can only be enabled on a cluster (replication group) when it is created. Because of the additional processing required to encrypt and decrypt data, enabling in-transit encryption has some performance impact. To understand that impact, benchmark your workload before and after enabling in-transit encryption.

  **[Resources]: **
  + [In-transit encryption overview](in-transit-encryption.md#in-transit-encryption-overview)

## SEC 6: How do you restrict access to control plane resources?
<a name="SecurityPillarSEC6"></a>

**Question-level introduction: **IAM policies and ARNs enable fine-grained access controls for ElastiCache for Valkey and Redis OSS, allowing tighter control over the creation, modification, and deletion of clusters.

**Question-level benefit: **Management of Amazon ElastiCache resources, such as replication groups and nodes, can be constrained to AWS accounts that have specific permissions based on IAM policies, improving the security and reliability of resources.
+ **[Required] **Manage access to Amazon ElastiCache resources by assigning specific AWS Identity and Access Management (IAM) policies to AWS users, allowing finer control over which accounts can perform what actions on clusters.

  **[Resources]: **
  + [Overview of managing access permissions to your ElastiCache resources](IAM.Overview.md)
  + [Using identity-based policies (IAM policies) for Amazon ElastiCache](IAM.IdentityBasedPolicies.md)

## SEC 7: How do you detect and respond to security events?
<a name="SecurityPillarSEC7"></a>

**Question-level introduction: **ElastiCache, when deployed with RBAC enabled, exports CloudWatch metrics to notify users of security events. These metrics help identify failed attempts to authenticate, access keys, or run commands that connecting RBAC users are not authorized for.

Additionally, AWS products and services resources help secure your overall workload by automating deployments and logging all actions and modifications for later review/audit.

**Question-level benefit: **By monitoring events, you enable your organization to respond according to your requirements, policies, and procedures. Automating the monitoring and responses to these security events hardens your overall security posture.
+ **[Required] **Familiarize yourself with the published CloudWatch metrics that pertain to RBAC authentication and authorization failures: 
  + `AuthenticationFailures`: failed attempts to authenticate to Valkey or Redis OSS
  + `KeyAuthorizationFailures`: failed attempts by users to access keys they are not authorized for
  + `CommandAuthorizationFailures`: failed attempts by users to run commands they are not authorized for

  **[Resources]: **
  + [Metrics for Valkey or Redis OSS](CacheMetrics.Redis.md)
+ **[Best] **It is recommended to set up alerts and notifications on these metrics and respond as necessary.

  **[Resources]: **
  + [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)
+ **[Best] **Use the Valkey or Redis OSS ACL LOG command to gather further details.

  **[Resources]: **
  + [ACL LOG](https://valkey.io/commands/acl-log/)
+ **[Best] **Familiarize yourself with AWS product and service capabilities as they pertain to monitoring, logging, and analyzing ElastiCache deployments and events.

  **[Resources]: **
  + [Logging Amazon ElastiCache API calls with AWS CloudTrail](logging-using-cloudtrail.md)
  + [elasticache-redis-cluster-automatic-backup-check](https://docs.aws.amazon.com/config/latest/developerguide/elasticache-redis-cluster-automatic-backup-check.html)
  + [Monitoring use with CloudWatch Metrics](CacheMetrics.md)
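As one way to automate the monitoring described above, the following sketch builds `PutMetricAlarm` parameters (for `boto3.client("cloudwatch").put_metric_alarm(**params)`) that alert when any `AuthenticationFailures` are recorded in a one-minute window. The cluster ID and SNS topic ARN are hypothetical placeholders.

```python
def build_auth_failure_alarm(cache_cluster_id, sns_topic_arn):
    """Assemble PutMetricAlarm parameters that notify on any RBAC
    authentication failure within a one-minute window."""
    return {
        "AlarmName": f"{cache_cluster_id}-auth-failures",
        "Namespace": "AWS/ElastiCache",
        "MetricName": "AuthenticationFailures",
        "Dimensions": [{"Name": "CacheClusterId", "Value": cache_cluster_id}],
        "Statistic": "Sum",
        "Period": 60,
        "EvaluationPeriods": 1,
        "Threshold": 0,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],   # hypothetical SNS topic for notification
    }

alarm = build_auth_failure_alarm("my-cache-001", "arn:aws:sns:us-east-1:123456789012:sec-alerts")
```

The same shape works for `KeyAuthorizationFailures` and `CommandAuthorizationFailures` by swapping the metric name.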

# Amazon ElastiCache Well-Architected Lens Reliability Pillar
<a name="ReliabilityPillar"></a>

The reliability pillar focuses on workloads performing their intended functions and how to recover quickly from failure to meet demands. Key topics include distributed system design, recovery planning, and adapting to changing requirements.

**Topics**
+ [REL 1: How are you supporting high availability (HA) architecture deployments?](#ReliabilityPillarREL1)
+ [REL 2: How are you meeting your Recovery Point Objectives (RPOs) with ElastiCache?](#ReliabilityPillarREL2)
+ [REL 3: How do you support disaster recovery (DR) requirements?](#ReliabilityPillarREL3)
+ [REL 4: How do you effectively plan for failovers?](#ReliabilityPillarREL4)
+ [REL 5: Are your ElastiCache components designed to scale?](#ReliabilityPillarREL5)

## REL 1: How are you supporting high availability (HA) architecture deployments?
<a name="ReliabilityPillarREL1"></a>

**Question-level introduction: ** Understanding the high availability architecture of Amazon ElastiCache will enable you to operate in a resilient state during availability events. 

**Question-level benefit: **Architecting your ElastiCache clusters to be resilient to failures ensures higher availability for your ElastiCache deployments. 
+ **[Required] **Determine the level of reliability you require for your ElastiCache cluster. Different workloads have different resiliency standards, from entirely ephemeral to mission critical workloads. Define needs for each type of environment you operate such as dev, test, and production.

  Caching engine: ElastiCache for Memcached vs ElastiCache for Valkey and Redis OSS

  1. ElastiCache for Memcached does not provide any replication mechanism and is used primarily for ephemeral workloads.

  1. ElastiCache for Valkey and Redis OSS offers the HA features discussed below.
+ **[Best] **For workloads that require HA, use ElastiCache in cluster mode with a minimum of two replicas per shard, even for small throughput requirement workloads that require only one shard. 

  1. For cluster mode enabled, multi-AZ is enabled automatically.

     Multi-AZ minimizes downtime by performing automatic failovers from primary node to replicas, in case of any planned or unplanned maintenance as well as mitigating AZ failure.

  1. For sharded workloads, a minimum of three shards provides faster recovery during failover events as the Valkey or Redis OSS Cluster Protocol requires a majority of primary nodes be available to achieve quorum.

  1. Set up two or more replicas across Availability Zones.

     Having two replicas provides improved read scalability and also read availability in scenarios where one replica is undergoing maintenance.

  1. Use Graviton2-based node types (default nodes in most regions).

     ElastiCache has added optimized performance on these nodes. As a result, you get better replication and synchronization performance, resulting in overall improved availability.

  1. Monitor and right-size to deal with anticipated traffic peaks: under heavy load, the engine may become unresponsive, which affects availability. `BytesUsedForCache` and `DatabaseMemoryUsagePercentage` are good indicators of your memory usage, whereas `ReplicationLag` is an indicator of your replication health based on your write rate. You can use these metrics to trigger cluster scaling.

  1. Ensure client-side resiliency by testing with the [Failover API prior to a production failover event](https://docs.amazonaws.cn/en_us/AmazonElastiCache/latest/APIReference/API_TestFailover.html).

  **[Resources]: **
  + [Configure ElastiCache for Redis OSS for higher availability](https://aws.amazon.com/blogs/database/configuring-amazon-elasticache-for-redis-for-higher-availability/)
  + [High availability using replication groups](Replication.md)
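The recommendations above (cluster mode enabled, at least three shards, two replicas per shard, Multi-AZ with automatic failover, Graviton-based nodes) can be sketched as creation parameters for `create_replication_group`; the group name and node type are illustrative assumptions.

```python
def build_ha_cluster_params(group_id, shards=3, replicas=2):
    """Assemble CreateReplicationGroup parameters for a cluster-mode-enabled,
    Multi-AZ deployment following the HA guidance above."""
    return {
        "ReplicationGroupId": group_id,
        "ReplicationGroupDescription": "HA example",
        "Engine": "valkey",
        "CacheNodeType": "cache.r7g.large",   # Graviton-based node type
        "NumNodeGroups": shards,              # >= 3 shards speeds quorum recovery
        "ReplicasPerNodeGroup": replicas,     # >= 2 replicas per shard
        "AutomaticFailoverEnabled": True,
        "MultiAZEnabled": True,
    }

params = build_ha_cluster_params("my-ha-cache")
```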

## REL 2: How are you meeting your Recovery Point Objectives (RPOs) with ElastiCache?
<a name="ReliabilityPillarREL2"></a>

**Question-level introduction: **Understand workload RPO to inform decisions on ElastiCache backup and recovery strategies.

**Question-level benefit: **Having an RPO strategy in place can improve business continuity in a disaster recovery scenario. Designing your backup and restore policies can help you meet your Recovery Point Objectives (RPOs) for your ElastiCache data. ElastiCache offers snapshot capabilities, stored in Amazon S3, along with a configurable retention policy. These snapshots are taken during a defined backup window and handled automatically by the service. If your workload requires additional backup granularity, you can create up to 20 manual backups per day. Manually created backups do not have a service retention policy and can be kept indefinitely.
+ **[Required] **Understand and document the RPO of your ElastiCache deployments.
  + Be aware that Memcached does not offer any backup processes.
  + Review the capabilities of ElastiCache Backup and Restore features.
+ **[Best] **Have a well-communicated process in place for backing up your cluster.
  + Initiate manual backups on an as-needed basis.
  + Review retention policies for automatic backups.
  + Note that manual backups will be retained indefinitely.
  + Schedule your automatic backups during periods of low usage.
  + Perform backup operations against read replicas to minimize the impact on cluster performance.
+ **[Good] **Leverage the scheduled backup feature of ElastiCache to regularly back up your data during a defined window. 
  + Periodically test restores from your backups.
+ **[Resources]: **
  + [Redis OSS](https://aws.amazon.com/elasticache/faqs/#Redis)
  + [Backup and restore for ElastiCache](backups.md)
  + [Making manual backups](backups-manual.md)
  + [Scheduling automatic backups](backups-automatic.md)
  + [Backup and Restore ElastiCache Clusters](https://aws.amazon.com/blogs/aws/backup-and-restore-elasticache-redis-nodes/)
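As a small sketch of an on-demand backup request (the replication group ID and label are hypothetical), note that manual snapshots have no service retention policy, so encoding a purpose or date in the snapshot name makes later cleanup easier. The dictionary would be passed to `boto3.client("elasticache").create_snapshot(**params)`.

```python
def build_manual_backup_params(replication_group_id, label):
    """Assemble CreateSnapshot parameters for an on-demand manual backup.

    Including a label in the name distinguishes, e.g., a pre-deletion
    safety backup from routine scheduled snapshots.
    """
    return {
        "ReplicationGroupId": replication_group_id,
        "SnapshotName": f"{replication_group_id}-{label}",
    }

params = build_manual_backup_params("prod-cache", "pre-delete")
```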

## REL 3: How do you support disaster recovery (DR) requirements?
<a name="ReliabilityPillarREL3"></a>

**Question-level introduction: **Disaster recovery is an important aspect of any workload planning. ElastiCache offers several options to implement disaster recovery based on workload resilience requirements. With Amazon ElastiCache Global Datastore, you can write to your cluster in one Region and have the data available to be read from two other cross-Region replica clusters, thereby enabling low-latency reads and disaster recovery across regions.

**Question-level benefit: **Understanding and planning for a variety of disaster scenarios can ensure business continuity. DR strategies must be balanced against cost, performance impact, and data loss potential.
+ **[Required] **Develop and document DR strategies for all your ElastiCache components based on workload requirements. ElastiCache is unique in that some use cases are entirely ephemeral and don’t require any DR strategy, whereas others are on the opposite end of the spectrum and require an extremely robust DR strategy. All options must be weighed against cost optimization: greater resiliency requires larger amounts of infrastructure.

  Understand the DR options available on a regional and multi-region level.
  + Multi-AZ Deployments are recommended to guard against AZ failure. Be sure to deploy with Cluster-Mode enabled in Multi-AZ architectures, with a minimum of 3 AZs available.
  + Global Datastore is recommended to guard against regional failures.
+ **[Best] **Enable Global Datastore for workloads that require region level resiliency.
  + Have a plan to failover to secondary region in case of primary degradation.
  + Test the multi-region failover process prior to a failover in production.
  + Monitor `ReplicationLag` metric to understand potential impact of data loss during failover events.
+ **[Resources]: **
  + [Mitigating Failures](disaster-recovery-resiliency.md#FaultTolerance)
  + [Replication across AWS Regions using global datastores](Redis-Global-Datastore.md)
  + [Restoring from a backup with optional cluster resizing](backups-restoring.md)
  + [Minimizing downtime in ElastiCache for Valkey and Redis OSS with Multi-AZ](AutoFailover.md)

## REL 4: How do you effectively plan for failovers?
<a name="ReliabilityPillarREL4"></a>

**Question-level introduction: **Enabling multi-AZ with automatic failovers is an ElastiCache best practice. In certain cases, ElastiCache for Valkey and Redis OSS replaces primary nodes as part of service operations. Examples include planned maintenance events and the unlikely case of a node failure or availability zone issue. Successful failovers rely on both ElastiCache and your client library configuration.

**Question-level benefit: **Following best practices for ElastiCache failovers in conjunction with your specific ElastiCache client library helps you minimize potential downtime during failover events. 
+ **[Required] **For cluster mode disabled, use timeouts so that your clients can detect when they need to disconnect from the old primary node and reconnect to the new primary node using the updated primary endpoint IP address. For cluster mode enabled, the client library is responsible for detecting changes in the underlying cluster topology. This is most often accomplished through configuration settings in the ElastiCache client library, which also let you configure the frequency and method of refresh. Each client library offers its own settings; more details are available in the corresponding documentation.

  **[Resources]: **
  + [Minimizing downtime in ElastiCache for Valkey and Redis OSS with Multi-AZ](AutoFailover.md)
  + Review the best practices of your ElastiCache client library. 
+ **[Required] **Successful failovers depend on a healthy replication environment between the primary and the replica nodes. Review and understand the asynchronous nature of Valkey and Redis OSS replication, as well as the available CloudWatch metrics to report on the replication lag between primary and replica nodes. For use cases that require greater data safety, leverage the WAIT command to force replicas to acknowledge writes before responding to connected clients. 

  **[Resources]: **
  + [Metrics for Valkey or Redis OSS](CacheMetrics.Redis.md)
  +  [Monitoring best practices with ElastiCache using Amazon CloudWatch](https://aws.amazon.com/blogs/database/monitoring-best-practices-with-amazon-elasticache-for-redis-using-amazon-cloudwatch/)
+ **[Best] **Regularly validate the responsiveness of your application during failover using the ElastiCache Test Failover API. 

  **[Resources]: **
  + [Testing Automatic Failover to a Read Replica on ElastiCache](https://aws.amazon.com/blogs/database/testing-automatic-failover-to-a-read-replica-on-amazon-elasticache-for-redis/)
  + [Testing automatic failover](AutoFailover.md#auto-failover-test)
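Client-side resiliency during failovers usually pairs the timeouts discussed above with retry logic. The following illustrative sketch (not tied to any particular client library) generates capped exponential-backoff delays with full jitter that a client could sleep between reconnection attempts after a timeout.

```python
import random

def backoff_delays(attempts, base=0.1, cap=5.0, seed=None):
    """Generate capped exponential-backoff delays (full jitter) for
    reconnection attempts during a failover. `base` and `cap` are in
    seconds and are illustrative defaults, not recommendations."""
    rng = random.Random(seed)
    delays = []
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(rng.uniform(0, ceiling))   # jitter avoids reconnect stampedes
    return delays

# Example: six reconnection attempts' worth of sleep intervals.
print(backoff_delays(6, seed=42))
```

Jitter matters here because many clients reconnecting at the same instant after a failover can themselves overload the newly promoted primary.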

## REL 5: Are your ElastiCache components designed to scale?
<a name="ReliabilityPillarREL5"></a>

**Question-level introduction: **By understanding the scaling capabilities and available deployment topologies, your ElastiCache components can adjust over time to meet changing workload requirements. ElastiCache offers 4-way scaling: in/out (horizontal) as well as up/down (vertical).

**Question-level benefit: **Following best practices for ElastiCache deployments provides the greatest amount of scaling flexibility, as well as meeting the Well Architected principle of scaling horizontally to minimize the impact of failures.
+ **[Required] **Understand the difference between Cluster-mode enabled and Cluster-mode disabled topologies. In almost all cases, it is recommended to deploy with Cluster-mode enabled, as it allows for greater scalability over time. Cluster-mode disabled deployments can scale horizontally only by adding read replicas.
+ **[Required] **Understand when and how to scale.
  + For more READIOPS: add replicas.
  + For more WRITEOPS: add shards (scale out).
  + For more network I/O: use network-optimized instances (scale up).
+ **[Best] **Deploy your ElastiCache components with Cluster-mode enabled, with a bias toward more, smaller nodes rather than fewer, larger nodes. This effectively limits the blast radius of a node failure.
+ **[Best] **Include replicas in your clusters for enhanced responsiveness during scaling events.
+ **[Good] **For cluster-mode disabled, leverage read replicas to increase overall read capacity. ElastiCache has support for up to 5 read replicas in cluster-mode disabled, as well as vertical scaling.
+ **[Resources]: **
  + [Scaling ElastiCache clusters](Scaling.md)
  + [Online scaling up](redis-cluster-vertical-scaling.md#redis-cluster-vertical-scaling-scaling-up)
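For the "more READIOPS: add replicas" case above, scaling out reads online can be sketched as `IncreaseReplicaCount` parameters (the replication group ID is a hypothetical placeholder; the dictionary would be passed to `boto3.client("elasticache").increase_replica_count(**params)`):

```python
def build_scale_out_reads_params(replication_group_id, new_replica_count):
    """Assemble IncreaseReplicaCount parameters to add read replicas online."""
    return {
        "ReplicationGroupId": replication_group_id,
        "NewReplicaCount": new_replica_count,   # desired replicas per node group
        "ApplyImmediately": True,
    }

params = build_scale_out_reads_params("my-cache", 3)
```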

# Amazon ElastiCache Well-Architected Lens Performance Efficiency Pillar
<a name="PerformanceEfficiencyPillar"></a>

The performance efficiency pillar focuses on using IT and computing resources efficiently. Key topics include selecting the right resource types and sizes based on workload requirements, monitoring performance, and making informed decisions to maintain efficiency as business needs evolve.

**Topics**
+ [PE 1: How do you monitor the performance of your Amazon ElastiCache cluster?](#PerformanceEfficiencyPillarPE1)
+ [PE 2: How are you distributing work across your ElastiCache Cluster nodes?](#PerformanceEfficiencyPillarPE2)
+ [PE 3: For caching workloads, how do you track and report the effectiveness and performance of your cache?](#PerformanceEfficiencyPillarPE3)
+ [PE 4: How does your workload optimize the use of networking resources and connections?](#PerformanceEfficiencyPillarPE4)
+ [PE 5: How do you manage key deletion and/or eviction?](#PerformanceEfficiencyPillarPE5)
+ [PE 6: How do you model and interact with data in ElastiCache?](#PerformanceEfficiencyPillarPE6)
+ [PE 7: How do you log slow running commands in your Amazon ElastiCache cluster?](#PerformanceEfficiencyPillarPE7)
+ [PE 8: How does Auto Scaling help in increasing the performance of the ElastiCache cluster?](#PerformanceEfficiencyPillarPE8)

## PE 1: How do you monitor the performance of your Amazon ElastiCache cluster?
<a name="PerformanceEfficiencyPillarPE1"></a>

**Question-level introduction: **By understanding the existing monitoring metrics, you can identify current utilization. Proper monitoring can help identify potential bottlenecks impacting the performance of your cluster. 

**Question-level benefit: **Understanding the metrics associated with your cluster can help guide optimization techniques that can lead to reduced latency and increased throughput. 
+ **[Required] **Establish a performance baseline by testing with a subset of your workload.
  + You should monitor performance of the actual workload using mechanisms such as load testing. 
  + Monitor the CloudWatch metrics while running these tests to gain an understanding of metrics available, and to establish a performance baseline. 
+ **[Best] **For ElastiCache for Valkey and Redis OSS workloads, rename computationally expensive commands, such as `KEYS`, to limit the ability of users to run blocking commands on production clusters. 
  + ElastiCache workloads running engine version 6.x for Redis OSS can leverage role-based access control (RBAC) to restrict certain commands. Access to commands can be controlled by creating users and user groups with the AWS Console or CLI, and associating the user groups with a cluster. In Redis OSS 6, when RBAC is enabled, you can include `-@dangerous` in a user's access string to disallow expensive commands such as KEYS, MONITOR, and SORT for that user.
  + For engine version 5.x, rename commands using the `rename-commands` parameter on the cluster parameter group.
+ **[Better] **Analyze slow queries and look for optimization techniques. 
  + For ElastiCache for Valkey and Redis OSS workloads, learn more about your queries by analyzing the Slow Log. For example, you can use the command `valkey-cli slowlog get 10` to show the last 10 commands that exceeded the latency threshold (10 milliseconds by default).
  + Certain queries can be performed more efficiently using complex ElastiCache for Valkey and Redis OSS data structures. As an example, for numerical style range lookups, an application can implement simple numerical indexes with Sorted Sets. Managing these indexes can reduce scans performed on the data set, and return data with greater performance efficiency. 
  + For ElastiCache for Valkey and Redis OSS workloads, `redis-benchmark` provides a simple interface for testing the performance of different commands using user defined inputs like number of clients, and size of data.
  + Since Memcached only supports simple key level commands, consider building additional keys as indexes to avoid iterating through the key space to serve client queries.
+ **[Resources]: **
  + [Monitoring use with CloudWatch Metrics](CacheMetrics.md)
  + [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html)
  + [Valkey and Redis OSS specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis)
  + [SLOWLOG](https://valkey.io/commands/slowlog/)
  + [Valkey benchmark](https://valkey.io/topics/benchmark/)

## PE 2: How are you distributing work across your ElastiCache Cluster nodes?
<a name="PerformanceEfficiencyPillarPE2"></a>

**Question-level introduction: **The way your application connects to Amazon ElastiCache nodes can impact the performance and scalability of the cluster. 

**Question-level benefit: **Making proper use of the available nodes in the cluster will ensure that work is distributed across the available resources. The following techniques help avoid idle resources as well.
+ **[Required] **Have clients connect to the proper ElastiCache endpoint.
  + ElastiCache for Valkey and Redis OSS implements different endpoints based on the cluster mode in use. For cluster mode enabled, ElastiCache provides a configuration endpoint. For cluster mode disabled, ElastiCache provides a primary endpoint, typically used for writes, and a reader endpoint for balancing reads across replicas. Implementing these endpoints correctly results in better performance and easier scaling operations. Avoid connecting to individual node endpoints unless there is a specific requirement to do so. 
  + For multi-node Memcached clusters, ElastiCache provides a configuration endpoint which enables Auto Discovery. It is recommended to use a hashing algorithm to distribute work evenly across the cache nodes. Many Memcached client libraries implement consistent hashing. Check the documentation for the library you are using to see if it supports consistent hashing and how to implement it. You can find more information on implementing these features [here](BestPractices.LoadBalancing.md).
+ **[Better] **Take advantage of ElastiCache for Valkey and Redis OSS cluster mode enabled clusters to improve scalability.
  + ElastiCache for Valkey and Redis OSS (cluster mode enabled) clusters support [online scaling operations](scaling-redis-cluster-mode-enabled.md#redis-cluster-resharding-online) (out/in and up/down) to help distribute data dynamically across shards. Using the Configuration Endpoint will ensure your cluster aware clients can adjust to changes in the cluster topology.
  + You may also rebalance the cluster by moving hashslots between available shards in your ElastiCache for Valkey and Redis OSS (cluster mode enabled) cluster. This helps distribute work more efficiently across available shards. 
+ **[Better] **Implement a strategy for identifying and remediating hot keys in your workload.
  + Consider the impact of multi-dimensional Valkey or Redis OSS data structures such as lists, streams, and sets. These data structures are stored in a single key, which resides on a single node. A very large multi-dimensional key has the potential to use more network capacity and memory than other data types and can cause disproportionate use of that node. If possible, design your workload to spread data access across many discrete keys.
  + Hot keys in the workload can impact performance of the node in use. For ElastiCache for Valkey and Redis OSS workloads, you can detect hot keys using `valkey-cli --hotkeys` if an LFU max-memory policy is in place.
  + Consider replicating hot keys across multiple nodes to distribute access to them more evenly. This approach requires the client to write to multiple primary nodes (the Valkey or Redis OSS node itself will not provide this functionality) and to maintain a list of key names to read from, in addition to the original key name.
  + ElastiCache engine 7.2 for Valkey and above, and ElastiCache version 6 for Redis OSS and above, support server-assisted [client-side caching](https://valkey.io/topics/client-side-caching/). This enables applications to cache results locally and be notified when a key changes, avoiding unnecessary network calls back to ElastiCache. 
+ **[Resources]: **
  + [Configure ElastiCache for Valkey and Redis OSS for higher availability](https://aws.amazon.com/blogs/database/configuring-amazon-elasticache-for-redis-for-higher-availability/)
  + [Finding connection endpoints in ElastiCache](Endpoints.md)
  + [Load balancing best practices](https://docs.aws.amazon.com/AmazonElastiCache/latest/dg/BestPractices.LoadBalancing.html)
  + [Online resharding for Valkey or Redis OSS (cluster mode enabled)](scaling-redis-cluster-mode-enabled.md#redis-cluster-resharding-online)
  + [Client-side caching in Valkey and Redis OSS](https://valkey.io/topics/client-side-caching/)
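Many Memcached client libraries implement the consistent hashing mentioned above. The following minimal, illustrative ring (not any specific library's implementation) shows how keys are mapped to nodes so that topology changes remap only a fraction of the key space.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring: each node is placed on the ring at many
    virtual positions, and a key maps to the first node at or after its own
    hash position (wrapping around)."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (hash, node) pairs
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(key):
        # First 8 bytes of MD5 as an integer; deterministic across runs.
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, key):
        idx = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("user:42"))
```

Because only the virtual positions belonging to an added or removed node change ownership, most keys keep their assignment across topology changes, which keeps cache hit rates high during scaling.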

## PE 3: For caching workloads, how do you track and report the effectiveness and performance of your cache?
<a name="PerformanceEfficiencyPillarPE3"></a>

**Question-level introduction: **Caching is a commonly encountered workload on ElastiCache and it is important that you understand how to manage the effectiveness and performance of your cache.

**Question-level benefit: **Your application may show signs of sluggish performance. Your ability to use cache specific metrics to inform your decision on how to increase app performance is critical for your cache workload.
+ **[Required] **Measure and track the cache hit ratio over time. The efficiency of your cache is determined by its cache hit ratio: the total number of key hits divided by the total of hits and misses. The closer the ratio is to 1, the more effective your cache is. A low cache hit ratio is caused by the volume of cache misses. Cache misses occur when the requested key is not found in the cache, either because it has been evicted or deleted, has expired, or has never existed. Understand why keys are not in the cache and develop appropriate strategies to have them in the cache. 

  **[Resources]: **
  + [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md)
+ **[Required] **Measure and collect your application cache performance in conjunction with latency and CPU utilization values to understand whether you need to make adjustments to your time-to-live or other application components. ElastiCache provides a set of CloudWatch metrics for aggregated latencies for each data structure. These latency metrics are calculated using the commandstats statistic from the INFO command and do not include the network and I/O time. This is only the time consumed by ElastiCache to process the operations.

  **[Resources]: **
  + [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md)
  + [Monitoring best practices with ElastiCache using Amazon CloudWatch](https://aws.amazon.com/blogs/database/monitoring-best-practices-with-amazon-elasticache-for-redis-using-amazon-cloudwatch/)
+ **[Best] **Choose the right caching strategy for your needs. A low cache hit ratio is caused by the volume of cache misses. If your workload is designed to have a low volume of cache misses (such as real-time communication), it is best to review your caching strategies and apply the most appropriate resolutions for your workload, such as query instrumentation to measure memory and performance. The strategies you use to populate and maintain your cache depend on what data your clients need to cache and the access patterns to that data. For example, it is unlikely that you will use the same strategy for both personalized recommendations on a streaming application and for trending news stories. 

  **[Resources]: **
  + [Caching strategies for Memcached](Strategies.md)
  + [Caching Best Practices](https://aws.amazon.com/caching/best-practices/)
  + [Performance at Scale with Amazon ElastiCache Whitepaper](https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf)
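The cache hit ratio described above is a simple computation over the `keyspace_hits` and `keyspace_misses` counters reported by the Valkey or Redis OSS `INFO` command:

```python
def cache_hit_ratio(keyspace_hits, keyspace_misses):
    """Compute the cache hit ratio: hits / (hits + misses).

    The inputs correspond to the keyspace_hits and keyspace_misses
    statistics from INFO; returns 0.0 when no lookups have occurred.
    """
    total = keyspace_hits + keyspace_misses
    return keyspace_hits / total if total else 0.0

print(cache_hit_ratio(9_000, 1_000))  # 0.9
```

Tracking this value over time (rather than a single reading) is what reveals whether TTL changes or caching-strategy adjustments are actually improving effectiveness.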

## PE 4: How does your workload optimize the use of networking resources and connections?
<a name="PerformanceEfficiencyPillarPE4"></a>

**Question-level introduction: **ElastiCache for Valkey, Memcached, and Redis OSS are supported by many application clients, and implementations may vary. You need to understand the networking and connection management in place to analyze potential performance impact.

**Question-level benefit: **Efficient use of networking resources can improve the performance efficiency of your cluster. The following recommendations can reduce networking demands, and improve cluster latency and throughput. 
+ **[Required] **Proactively manage connections to your ElastiCache cluster.
  + Connection pooling in the application reduces the amount of overhead on the cluster created by opening and closing connections. Monitor connection behavior in Amazon CloudWatch using `CurrConnections` and `NewConnections`.
  + Avoid connection leaking by properly closing client connections where appropriate. Connection management strategies include properly closing connections that are not in use, and setting connection time-outs. 
  + For Memcached workloads, there is a configurable amount of memory reserved for handling connections, called `memcached_connections_overhead`. 
+ **[Better] **Compress large objects to reduce memory, and improve network throughput.
  + Data compression can reduce the amount of network throughput required (Gbps), but increases the amount of work on the application to compress and decompress data. 
  + Compression also reduces the amount of memory consumed by keys.
  + Based on your application needs, consider the trade-offs between compression ratio and compression speed.
+ **[Resources]: **
  + [ElastiCache - Global Datastore](https://aws.amazon.com/elasticache/redis/global-datastore/)
  + [Memcached specific parameters](ParameterGroups.Engine.md#ParameterGroups.Memcached)
  + [ElastiCache version 5.0.3 for Redis OSS enhances I/O handling to boost performance](https://aws.amazon.com/about-aws/whats-new/2019/03/amazon-elasticache-for-redis-503-enhances-io-handling-to-boost-performance/)
  + [Metrics for Valkey and Redis OSS](CacheMetrics.Redis.md)
  + [Configure ElastiCache for higher availability](https://aws.amazon.com/blogs/database/configuring-amazon-elasticache-for-redis-for-higher-availability/)
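The compression trade-off described above can be measured on the client side before you adopt it. A minimal sketch using the Python standard-library `zlib` (the payload is illustrative; your own representative data may compress very differently):

```python
import json
import zlib

# Illustrative payload: a repetitive JSON document, typical of cacheable API responses.
payload = json.dumps(
    {"items": [{"sku": f"SKU-{i}", "qty": i % 5, "status": "in-stock"} for i in range(200)]}
).encode("utf-8")

# Compress before SET; the client must decompress after GET.
compressed = zlib.compress(payload, level=6)
ratio = len(compressed) / len(payload)

print(f"original={len(payload)}B compressed={len(compressed)}B ratio={ratio:.2f}")
assert zlib.decompress(compressed) == payload  # round-trips losslessly
```

Measure both the compression ratio and the CPU time on representative data. A ratio near 1.0 (already-compressed or random values) means compression only adds client CPU cost without reducing memory or network throughput.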

## PE 5: How do you manage key deletion and/or eviction?
<a name="PerformanceEfficiencyPillarPE5"></a>

**Question-level introduction: **Workloads have different requirements, and expected behavior when a cluster node is approaching memory consumption limits. ElastiCache has different policies for handling these situations. 

**Question-level benefit: **Proper management of available memory, and understanding of eviction policies will help ensure awareness of cluster behavior when instance memory limits are exceeded. 
+ **[Required] **Instrument the data access to evaluate which policy to apply. Identify an appropriate max-memory policy to control if and how evictions are performed on the cluster.
  + Eviction occurs when the max-memory on the cluster is consumed and a policy is in place to allow eviction. The behavior of the cluster in this situation depends on the eviction policy specified. This policy can be managed using the `maxmemory-policy` on the cluster parameter group. 
  + The default policy `volatile-lru` frees up memory by evicting keys with a set expiration time (TTL value). Least frequently used (LFU) and least recently used (LRU) policies remove keys based on usage. 
  + For Memcached workloads, there is a default LRU policy in place controlling evictions on each node. The number of evictions on your Amazon ElastiCache cluster can be monitored using the Evictions metric on Amazon CloudWatch.
+ **[Better] **Standardize delete behavior to control performance impact on your cluster to avoid unexpected performance bottlenecks.
  + For ElastiCache for Valkey and Redis OSS workloads, when explicitly removing keys from the cluster, `UNLINK` is like `DEL`: it removes the specified keys. However, the command performs the actual memory reclaiming in a different thread, so it is not blocking, while `DEL` is. The actual removal will happen later asynchronously. 
  + For ElastiCache version 6.x for Redis OSS workloads, the behavior of the `DEL` command can be modified in the parameter group using `lazyfree-lazy-user-del` parameter.
+ **[Resources]: **
  + [Configuring engine parameters using ElastiCache parameter groups](ParameterGroups.md)
  + [UNLINK](https://valkey.io/commands/unlink/)
  + [Cloud Financial Management with AWS](https://aws.amazon.com/aws-cost-management/)
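To build intuition for how an LRU policy decides which keys to evict, here is a minimal client-side sketch in Python. This illustrates the concept only; it is not how the engine implements `volatile-lru` or `allkeys-lru`:

```python
from collections import OrderedDict

class LRUCache:
    """Toy cache that evicts the least recently used key when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used key

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" is now most recently used
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
```

The same intuition applies when reading the `Evictions` CloudWatch metric: under an LRU policy, sustained evictions mean the working set no longer fits in memory, and the coldest keys are being removed first.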

## PE 6: How do you model and interact with data in ElastiCache?
<a name="PerformanceEfficiencyPillarPE6"></a>

**Question-level introduction: **How you model data in ElastiCache depends heavily on the application, including the data structures and the data model used, and it also needs to take into account the underlying data store (if present). Understand the data structures available, and make sure that you are using the most appropriate data structures for your needs. 

**Question-level benefit: **Data modeling in ElastiCache has several layers, including application use case, data types, and relationships between data elements. Additionally, each data type and command have their own well documented performance signatures.
+ **[Best] **A best practice is to reduce unintentional overwriting of data. Use a naming convention that minimizes overlapping key names. Conventional naming of your data structures uses a hierarchical method, such as `APPNAME:CONTEXT:ID` (for example, `ORDER-APP:CUSTOMER:123`).

  **[Resources]: **
  + [Key naming](https://docs.gitlab.com/ee/development/redis.html#key-naming)
+ **[Best] **ElastiCache for Valkey and Redis OSS commands have a time complexity defined by Big O notation. The time complexity of a command is an algorithmic/mathematical representation of its impact. When introducing a new data type in your application, carefully review the time complexity of the related commands. Commands with a time complexity of O(1) are constant in time and do not depend on the size of the input. Commands with a time complexity of O(N) are linear in time and are subject to the size of the input. Due to the single-threaded design of ElastiCache for Valkey and Redis OSS, a large volume of high-time-complexity operations results in lower performance and potential operation timeouts.

  **[Resources]: **
  + [Commands](https://valkey.io/commands/)
+ **[Best] **Use APIs to gain GUI visibility into the data model in your cluster.

  **[Resources]: **
  + [Redis OSS Commander](https://www.npmjs.com/package/redis-commander)
  + [Redis OSS Browser](https://github.com/humante/redis-browser)
  + [Redsmin](https://www.redsmin.com/)
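The hierarchical key-naming convention above can be enforced with a small helper in the application layer. A sketch in Python (the function name and validation rule are illustrative, not part of any ElastiCache API):

```python
def cache_key(app: str, context: str, entity_id) -> str:
    """Build a hierarchical key such as ORDER-APP:CUSTOMER:123."""
    parts = (app, context, str(entity_id))
    # Reject the separator inside components so keys stay unambiguous.
    if any(":" in part for part in parts):
        raise ValueError("key components must not contain ':'")
    return ":".join(parts)

print(cache_key("ORDER-APP", "CUSTOMER", 123))  # ORDER-APP:CUSTOMER:123
```

Centralizing key construction like this also makes it easy to audit which contexts exist and to avoid two features accidentally writing to the same key.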

## PE 7: How do you log slow running commands in your Amazon ElastiCache cluster?
<a name="PerformanceEfficiencyPillarPE7"></a>

**Question-level introduction: **Performance tuning benefits from the capture, aggregation, and notification of long-running commands. By understanding how long commands take to execute, you can determine which commands result in poor performance, as well as commands that block the engine from performing optimally. ElastiCache can also forward this information to Amazon CloudWatch or Amazon Kinesis Data Firehose.

**Question-level benefit: **Logging to a dedicated permanent location and providing notification events for slow commands can help with detailed performance analysis and can be used to trigger automated events.
+ **[Required] **For ElastiCache running Valkey engine version 7.2 or newer, or Redis OSS engine version 6.0 or newer, use a properly configured parameter group and enable SLOWLOG logging on the cluster.
  + The required parameters are only available when engine version compatibility is set to Valkey 7.2 and higher, or Redis OSS version 6.0 or higher.
  + SLOWLOG logging occurs when the server execution time of a command takes longer than a specified value. The behavior of the cluster depends on the associated Parameter Group parameters which are `slowlog-log-slower-than` and `slowlog-max-len`.
  + Changes take effect immediately.
+ **[Best] **Take advantage of CloudWatch or Kinesis Data Firehose capabilities.
  + Use the filtering and alarm capabilities of CloudWatch, CloudWatch Logs Insights and Amazon Simple Notification Services to achieve performance monitoring and event notification.
  + Use the streaming capabilities of Kinesis Data Firehose to archive SLOWLOG logs to permanent storage or to trigger automated cluster parameter tuning.
  + Determine if JSON or plain TEXT format suits your needs best.
  + Provide IAM permissions to publish to CloudWatch or Kinesis Data Firehose.
+ **[Better] **Configure `slowlog-log-slower-than` to a value other than the default.
  + This parameter determines how long a command may execute for within the Valkey or Redis OSS engine before it is logged as a slow running command. The default value is 10,000 microseconds (10 milliseconds). The default value may be too high for some workloads.
  + Determine a value that is more appropriate for your workload based on application needs and testing results; however, a value that is too low may generate excessive data.
+ **[Better] **Leave `slowlog-max-len` at the default value.
  + This parameter determines the upper limit for how many slow-running commands are captured in Valkey or Redis OSS memory at any given time. A value of 0 effectively disables the capture. The higher the value, the more entries will be stored in memory, reducing the chance of important information being evicted before it can be reviewed. The default value is 128.
  + The default value is appropriate for most workloads. If there is a need to analyze data in an expanded time window from the valkey-cli via the SLOWLOG command, consider increasing this value. This allows more commands to remain in Valkey or Redis OSS memory.

    If you are emitting the SLOWLOG data to either CloudWatch Logs or Kinesis Data Firehose, the data will be persisted and can be analyzed outside of the ElastiCache system, reducing the need to store large numbers of slow running commands in Valkey or Redis OSS memory.
+ **[Resources]: **
  + [How do I turn on Slow log in a cluster?](https://repost.aws/knowledge-center/elasticache-turn-on-slow-log)
  + [Log delivery](Log_Delivery.md)
  + [Redis OSS-specific parameters](ParameterGroups.Engine.md#ParameterGroups.Redis)
  + [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
  + [Amazon Kinesis Data Firehose](https://aws.amazon.com/kinesis/data-firehose/)
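Once SLOWLOG entries are delivered in JSON format to CloudWatch Logs or Kinesis Data Firehose, they can be filtered programmatically. A sketch in Python — the record field names below (`Id`, `DurationUs`, `Command`) are assumptions for illustration; verify them against the actual schema of your delivered log records:

```python
import json

# Assumed record shape for illustration only; check your delivered log records.
records = [
    '{"Id": 41, "Timestamp": 1700000000, "DurationUs": 850, "Command": "GET user:1"}',
    '{"Id": 42, "Timestamp": 1700000003, "DurationUs": 15000, "Command": "HGETALL big:hash"}',
]

THRESHOLD_US = 10_000  # mirrors the slowlog-log-slower-than default (10 ms)

slow = [json.loads(r) for r in records if json.loads(r)["DurationUs"] > THRESHOLD_US]
for entry in slow:
    print(f'id={entry["Id"]} duration_us={entry["DurationUs"]} command={entry["Command"]}')
```

A filter like this can feed an alarm or a periodic report, which keeps the in-memory `slowlog-max-len` buffer small while still preserving every slow command for later analysis.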

## PE 8: How does Auto Scaling help in increasing the performance of the ElastiCache cluster?
<a name="PerformanceEfficiencyPillarPE8"></a>

**Question-level introduction: **By implementing Valkey or Redis OSS auto scaling, your ElastiCache components can adjust over time to increase or decrease the desired shards or replicas automatically. You can do this by implementing either a target tracking or a scheduled scaling policy.

**Question-level benefit: **Understanding and planning for the spikes in the workload can ensure enhanced caching performance and business continuity. ElastiCache Auto Scaling continually monitors your CPU/Memory utilization to make sure your cluster is operating at your desired performance levels.
+ **[Required] **When launching a cluster for ElastiCache for Valkey or Redis OSS:

  1. Ensure that the Cluster mode is enabled

  1. Make sure that the instance belongs to an instance family, type, and size that supports auto scaling

  1. Ensure the cluster is not running in Global datastores, Outposts or Local Zones

  **[Resources]: **
  + [Scaling clusters in Valkey and Redis OSS (Cluster Mode Enabled)](scaling-redis-cluster-mode-enabled.md)
  + [Using Auto Scaling with shards](AutoScaling-Using-Shards.md)
  + [Using Auto Scaling with replicas](AutoScaling-Using-Replicas.md)
+ **[Best] **Identify if your workload is read-heavy or write-heavy to define scaling policy. For best performance, use just one tracking metric. It is recommended to avoid multiple policies for each dimension, as auto scaling policies scale out when the target is hit, but scale in only when all target tracking policies are ready to scale in.

  **[Resources]: **
  + [Auto Scaling policies](AutoScaling-Policies.md)
  + [Defining a scaling policy](AutoScaling-Scaling-Defining-Policy-API.md)
+ **[Best] **Monitoring performance over time can help you detect workload changes that would remain unnoticed if monitored at a particular point in time. You can analyze corresponding CloudWatch metrics for cluster utilization over a four-week period to determine the target value threshold. If you are still not sure of what value to choose, we recommend starting with a minimum supported predefined metric value.

  **[Resources]: **
  + [Monitoring use with CloudWatch Metrics](CacheMetrics.md)
+ **[Better] **We advise testing your application with expected minimum and maximum workloads, to identify the exact number of shards/replicas required for the cluster to develop scaling policies and mitigate availability issues.

  **[Resources]: **
  + [Registering a Scalable Target](AutoScaling-Register-Policy.md)
  + [Registering a Scalable Target using the AWS CLI](AutoScaling-Scaling-Registering-Policy-CLI.md)
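A target-tracking policy of the kind discussed in this question is expressed as a configuration document passed to the Application Auto Scaling `put-scaling-policy` API. A sketch of one such configuration tracking primary-node engine CPU at 60% — the target value is illustrative, and you should verify the predefined metric type against the Auto Scaling policies documentation linked above:

```json
{
  "TargetValue": 60.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ElastiCachePrimaryEngineCPUUtilization"
  }
}
```

Following the single-metric recommendation above, this configuration tracks one dimension only; scale-out occurs when the target is breached, and scale-in occurs only when the tracked metric indicates sustained headroom.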

# Amazon ElastiCache Well-Architected Lens Cost Optimization Pillar
<a name="CostOptimizationPillar"></a>

The cost optimization pillar focuses on avoiding unnecessary costs. Key topics include understanding and controlling where money is being spent, selecting the most appropriate node type (use instances that support data tiering based on workload needs), the right number of resource types (how many read replicas), analyzing spend over time, and scaling to meet business needs without overspending.

**Topics**
+ [COST 1: How do you identify and track costs associated with your ElastiCache resources? How do you develop mechanisms to enable users to create, manage, and dispose of created resources?](#CostOptimizationPillarCOST1)
+ [COST 2: How do you use continuous monitoring tools to help you optimize the costs associated with your ElastiCache resources?](#CostOptimizationPillarCOST2)
+ [COST 3: Should you use an instance type that supports data tiering? What are the advantages of data tiering? When should you not use data tiering instances?](#CostOptimizationPillarCOST3)

## COST 1: How do you identify and track costs associated with your ElastiCache resources? How do you develop mechanisms to enable users to create, manage, and dispose of created resources?
<a name="CostOptimizationPillarCOST1"></a>

**Question-level introduction: **Understanding cost metrics requires the participation of and collaboration across multiple teams: software engineering, data management, product owners, finance, and leadership. Identifying key cost drivers requires that all involved parties understand service usage control levers and cost management trade-offs, and it is frequently the key difference between successful and less successful cost optimization efforts. Ensuring that you have processes and tools in place to track resources created from development to production and retirement helps you manage the costs associated with ElastiCache.

**Question-level benefit: **Continuous tracking of all costs associated with your workload requires a deep understanding of the architecture that includes ElastiCache as one of its components. Additionally, you should have a cost management plan in place to collect and compare usage against your budget. 
+ **[Required] **Institute a Cloud Center of Excellence (CCoE) with one of its founding charters to own defining, tracking, and taking action on metrics around your organizations’ ElastiCache usage. If a CCoE exists and functions, ensure that it knows how to read and track costs associated with ElastiCache. When resources are created, use IAM roles and policies to validate that only specific teams and groups can instantiate resources. This ensures that costs are associated with business outcomes and a clear line of accountability is established, from a cost perspective.

  1. The CCoE should identify, define, and publish cost metrics that are updated on a regular (monthly) basis around key ElastiCache usage, across categorical data such as: 

     1. Types of nodes used and their attributes: standard vs. memory optimized, on-demand vs. reserved instances, regions and availability zones

     1. Types of environments: free, dev, testing, and production

     1. Backup storage and retention strategies

     1. Data transfer within and across regions

     1. Instances running on Amazon Outposts 

  1. CCoE consists of a cross-functional team with non-exclusive representation from software engineering, data management, product team, finance, and leadership teams in your organization.

  **[Resources]: **
  + [Create a Cloud Center of Excellence](https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-laying-the-foundation/cloud-center-of-excellence.html)
  + [Amazon ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/)
+ **[Required] **Use cost allocation tags to track costs at a low level of granularity. Use AWS Cost Management to visualize, understand, and manage your AWS costs and usage over time. 

  1. Use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs. AWS provides two types of cost allocation tags: AWS-generated tags and user-defined tags. AWS defines, creates, and applies the AWS-generated tags for you, and you define, create, and apply user-defined tags. You must activate both types of tags separately before they can appear in Cost Management or on a cost allocation report.

  1. Use cost allocation tags to organize your AWS bill to reflect your own cost structure. When you add cost allocation tags to your resources in Amazon ElastiCache, you will be able to track costs by grouping expenses on your invoices by resource tag values. You should consider combining tags to track costs at a greater level of detail.

  **[Resources]: **
  + [Using AWS cost allocation tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
  + [Monitoring costs with cost allocation tags](Tagging.md)
  + [AWS Cost Explorer](https://aws.amazon.com/aws-cost-management/aws-cost-explorer/)
+ **[Best] **Connect ElastiCache cost to metrics that reach across the organization.

  1. Consider business metrics as well as operational metrics like latency. Which concepts in your business model are understandable across roles? The metrics need to be understandable by as many roles as possible in the organization. 

  1. Examples: simultaneous served users, max and average latency per operation and user, user engagement scores, user return rates per week, session length per user, abandonment rate, cache hit rate, and keys tracked

  **[Resources]: **
  + [Monitoring use with CloudWatch Metrics](CacheMetrics.md)
+ **[Good] **Maintain up-to-date architectural and operational visibility on metrics and costs across the entire workload that uses ElastiCache.

  1. Understand your entire solution ecosystem. ElastiCache tends to be part of a full ecosystem of AWS services in their technology set, from clients to API Gateway, Redshift, and QuickSight for reporting tools (for example). 

  1. Map components of your solution from clients, connections, security, in-memory operations, storage, resource automation, data access and management, on your architecture diagram. Each layer connects to the entire solution and has its own needs and capabilities that add to and/or help you manage the overall cost.

  1. Your diagram should include the use of compute, networking, storage, lifecycle policies, and metrics gathering, as well as the operational and functional ElastiCache elements of your application.

  1. The requirements of your workload are likely to evolve over time and it is essential that you continue to maintain and document your understanding of the underlying components as well as your primary functional objectives in order to remain proactive in your workload cost management.

  1. Executive support for visibility, accountability, prioritization, and resources is crucial to you having an effective cost management strategy for your ElastiCache.
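The cost-allocation tagging practice described under this question is applied through the ElastiCache `AddTagsToResource` API. A sketch of the tag list shape — the key/value pairs are illustrative examples, not required names:

```json
[
  { "Key": "CostCenter",  "Value": "ecommerce-platform" },
  { "Key": "Environment", "Value": "production" },
  { "Key": "Team",        "Value": "checkout" }
]
```

A list in this shape is passed to `aws elasticache add-tags-to-resource --resource-name <resource-arn> --tags ...`; after the keys are activated as cost allocation tags, Cost Explorer can group spend by any of them.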

## COST 2: How do you use continuous monitoring tools to help you optimize the costs associated with your ElastiCache resources?
<a name="CostOptimizationPillarCOST2"></a>

**Question-level introduction: **You need to aim for a proper balance between your ElastiCache cost and application performance metrics. Amazon CloudWatch provides visibility into key operational metrics that can help you assess whether your ElastiCache resources are over or under utilized, relative to your needs. From a cost optimization perspective, you need to understand when you are overprovisioned and be able to develop appropriate mechanisms to resize your ElastiCache resources while maintaining your operational, availability, resilience, and performance needs. 

**Question-level benefit: **In an ideal state, you will have provisioned sufficient resources to meet your workload operational needs and not have under-utilized resources that can lead to a sub-optimal cost state. You need to be able to both identify and avoid operating oversized ElastiCache resources for long periods of time. 
+ **[Required] **Use CloudWatch to monitor your ElastiCache clusters and analyze how these metrics relate to your AWS Cost Explorer dashboards. 

  1. ElastiCache provides both host-level metrics (for example, CPU usage) and metrics that are specific to the cache engine software (for example, cache gets and cache misses). These metrics are measured and published for each cache node in 60-second intervals.

  1. ElastiCache performance metrics (CPUUtilization, EngineCPUUtilization, SwapUsage, CurrConnections, and Evictions) may indicate that you need to scale up/down (use larger/smaller cache node types) or in/out (add more/fewer shards). Understand the cost implications of scaling decisions by creating a playbook matrix that estimates the additional cost and the min and max lengths of time required to meet your application performance thresholds.

  **[Resources]: **
  + [Monitoring use with CloudWatch Metrics](CacheMetrics.md)
  + [Which Metrics Should I Monitor?](CacheMetrics.WhichShouldIMonitor.md)
  + [Amazon ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/)
+ **[Required] **Understand and document your backup strategy and cost implications.

  1. With ElastiCache, the backups are stored in Amazon S3, which provides durable storage. You need to understand the cost implications in relation to your ability to recover from failures.

  1. Enable automatic backups that will delete backup files that are past the retention limit.

  **[Resources]: **
  + [Scheduling automatic backups](backups-automatic.md)
  + [Amazon Simple Storage Service pricing](https://aws.amazon.com/s3/pricing/)
+ **[Best] **Use Reserved Nodes for your instances as a deliberate strategy to manage costs for workloads that are well understood and documented. Reserved nodes are charged an up front fee that depends upon the node type and the length of reservation—one or three years. This charge is much less than the hourly usage charge that you incur with On-Demand nodes.

  1. You may need to operate your ElastiCache clusters using on-demand nodes until you have gathered sufficient data to estimate the reserved instance requirements. Plan and document the resources needed to meet your needs, and compare expected costs across instance types (on-demand vs. reserved).

  1. Regularly evaluate new cache node types available and assess whether it makes sense, from a cost and operational metrics perspective, to migrate your instance fleet to new cache node types.

## COST 3: Should you use an instance type that supports data tiering? What are the advantages of data tiering? When should you not use data tiering instances?
<a name="CostOptimizationPillarCOST3"></a>

**Question-level introduction: **Selecting the appropriate instance type can have not only performance and service-level impact, but also financial impact. Instance types have different costs associated with them. Selecting one or a few large instance types that can accommodate all storage needs in memory might be a natural decision. However, this could have significant cost impact as the project matures. Ensuring that the correct instance type is selected requires periodic examination of ElastiCache object idle time.

**Question-level benefit: **You should have a clear understanding of how various instance types impact your cost, both now and in the future. Marginal or periodic workload changes should not cause disproportionate cost changes. If the workload permits it, instance types that support data tiering offer a better price per unit of available storage. Because of the SSD storage available on each instance, data tiering instances support a much higher total data capacity per instance.
+ **[Required] **Understand limitations of data tiering instances

  1. Only available for ElastiCache for Valkey or Redis OSS clusters.

  1. Only limited instance types support data tiering.

  1. Only ElastiCache version 6.2 for Redis OSS and above is supported.

  1. Large items are not swapped out to SSD. Objects over 128 MiB are kept in memory.

  **[Resources]: **
  + [Data tiering](data-tiering.md)
  + [Amazon ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/)
+ **[Required] **Understand what percentage of your database is regularly accessed by your workload.

  1. Data tiering instances are ideal for workloads that often access a small portion of the overall dataset but still require fast access to the remaining data. In other words, the ratio of hot to warm data is about 20:80.

  1. Develop cluster-level tracking of object idle time.

  1. Large implementations of over 500 GB of data are good candidates.
+ **[Required] **Understand that data tiering instances are not optimal for certain workloads.

  1. There is a small performance cost for accessing less frequently used objects, because those are swapped out to local SSD. If your application is response-time sensitive, test the impact on your workload.

  1. Not suitable for caches that store mostly large objects over 128 MiB in size.

  **[Resources]: **
  + [Limitations](data-tiering.md#data-tiering-prerequisites)
+ **[Best] **Use reserved instances of the types that support data tiering. This assures the lowest cost in terms of the amount of data storage per instance.

  1. You may need to operate your ElastiCache clusters using non-data tiering instances until you have a better understanding of your requirements.

  1. Analyze your ElastiCache clusters data usage pattern.

  1. Create an automated job that periodically collects object idle time.

  1. If you notice that a large percentage (about 80%) of objects are idle for a period of time deemed appropriate for your workload, document the findings and suggest migrating the cluster to instances that support data tiering.

  1. Regularly evaluate new cache node types available and assess whether it makes sense, from a cost and operational metrics perspective, to migrate your instance fleet to new cache node types.

  **[Resources]: **
  + [OBJECT IDLETIME](https://valkey.io/commands/object-idletime/)
  + [Amazon ElastiCache pricing](https://aws.amazon.com/elasticache/pricing/)
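The idle-time analysis suggested above can be automated: sample `OBJECT IDLETIME` for a set of keys, then compute the share of objects idle beyond a threshold. A minimal offline sketch in Python — the sampled values and the one-day threshold are illustrative assumptions:

```python
# Idle times in seconds, as returned by OBJECT IDLETIME for sampled keys (illustrative values).
idle_seconds = [5, 12, 86_400, 90_000, 100_000, 3, 604_800, 86_401, 99_999, 120_000]

THRESHOLD_S = 86_400  # one day; tune to what "warm" means for your workload

idle_share = sum(1 for s in idle_seconds if s >= THRESHOLD_S) / len(idle_seconds)
print(f"{idle_share:.0%} of sampled objects idle >= 1 day")
if idle_share >= 0.8:
    print("Candidate for data tiering instances")
```

Run the sampling job periodically and trend `idle_share` over time; a value consistently near 80% matches the hot-to-warm ratio at which data tiering instances pay off.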

# Common troubleshooting steps and best practices with ElastiCache
<a name="wwe-troubleshooting"></a>

The following topics provide troubleshooting advice for errors and issues that you could encounter when using ElastiCache. If you find an issue that isn't listed here, you can use the feedback button on this page to report it.

For more troubleshooting advice and answers to common support questions, visit the [AWS Knowledge Center](https://aws.amazon.com/premiumsupport/knowledge-center/).

**Topics**
+ [Connection issues](#wwe-troubleshooting.connection)
+ [Valkey or Redis OSS client errors](#wwe-troubleshooting.clienterrors)
+ [Troubleshooting high latency in ElastiCache Serverless](#wwe-troubleshooting.latency)
+ [Troubleshooting throttling issues in ElastiCache Serverless](#wwe-troubleshooting.throttling)
+ [Persistent connection issues](TroubleshootingConnections.md)
+ [Related Topics](#wwe-troubleshooting.related)

## Connection issues
<a name="wwe-troubleshooting.connection"></a>

If you are unable to connect to your ElastiCache cache, consider one of the following:

1. **Using TLS: **If you are experiencing a hung connection when trying to connect to your ElastiCache endpoint, you may not be using TLS in your client. If you are using ElastiCache Serverless, encryption in transit is always enabled. Make sure that your client is using TLS to connect to the cache. [Learn more about connecting to a TLS enabled cache](connect-tls.md).

1. **VPC:** ElastiCache caches are accessible only from within a VPC. Ensure that the EC2 instance from which you are accessing the cache and the ElastiCache cache are created in the same VPC. Alternatively, you must enable [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) between the VPC where your EC2 instance resides and the VPC where you are creating your cache. 

1. **Security groups: **ElastiCache uses security groups to control access to your cache. Consider the following:

   1. Make sure that the security group used by your ElastiCache cache allows inbound access to it from your EC2 instance. See [here](https://docs.aws.amazon.com/vpc/latest/userguide/security-group-rules.html) to learn how to set up inbound rules in your security group correctly. 

   1. Make sure that the security group used by your ElastiCache cache allows access to your cache’s ports (6379 and 6380 for serverless, and 6379 by default for node-based clusters). ElastiCache uses these ports to accept Valkey or Redis OSS commands. Learn more about how to set up port access [here](set-up.md#elasticache-install-grant-access-VPN).

If you still can't connect, see [Persistent connection issues](TroubleshootingConnections.md) for other steps.

## Valkey or Redis OSS client errors
<a name="wwe-troubleshooting.clienterrors"></a>

ElastiCache Serverless is only accessible using clients that support the Valkey or Redis OSS cluster mode protocol. Node-based clusters can be accessed from clients in either mode, depending on the cluster configuration.

If you are experiencing errors in your client, consider the following:

1. **Cluster mode: **If you are experiencing CROSSLOT errors or errors with the [SELECT](https://valkey.io/commands/select/) command, you may be trying to access a Cluster Mode Enabled cache with a Valkey or Redis OSS client that does not support the Cluster protocol. ElastiCache Serverless only supports clients that support the Valkey or Redis OSS cluster protocol. If you want to use Valkey or Redis OSS in “Cluster Mode Disabled” (CMD), then you must create a node-based cluster. 

1. **CROSSLOT errors: **If you are experiencing the `ERR CROSSLOT Keys in request don't hash to the same slot` error, you may be attempting to access keys that do not belong to the same slot in a Cluster mode cache. As a reminder, ElastiCache Serverless always operates in Cluster Mode. Multi-key operations, transactions, or Lua scripts involving multiple keys are allowed only if all the keys involved are in the same hash slot. 
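CROSSLOT errors can be avoided with hash tags: when a key contains `{...}`, only the tagged substring is hashed, so keys that share a tag land in the same slot. A sketch of the cluster key-slot computation (CRC16/XModem modulo 16384, per the Redis OSS cluster specification):

```python
def crc16(data: bytes) -> int:
    """CRC16/XModem (polynomial 0x1021), as used by the cluster key-slot algorithm."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Hash only the {tag} substring when one is present and non-empty."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # non-empty tag
            key = key[start + 1 : end]
    return crc16(key.encode()) % 16384

# Keys sharing the {user:123} tag hash to the same slot, so multi-key
# operations and transactions across them succeed in cluster mode.
assert key_hash_slot("{user:123}:orders") == key_hash_slot("{user:123}:profile")
```

Use hash tags deliberately: tagging too many keys with the same value concentrates them in one slot and can create a hot shard.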

For additional best practices around configuring Valkey or Redis OSS clients, please review this [ blog post](https://aws.amazon.com/blogs/database/best-practices-redis-clients-and-amazon-elasticache-for-redis/). 

## Troubleshooting high latency in ElastiCache Serverless
<a name="wwe-troubleshooting.latency"></a>

If your workload appears to experience high latency, you can analyze the CloudWatch `SuccessfulReadRequestLatency` and `SuccessfulWriteRequestLatency` metrics to check whether the latency is related to ElastiCache Serverless. These metrics measure latency that is internal to ElastiCache Serverless; client-side latency and network trip times between your client and the ElastiCache Serverless endpoint are not included. 

**Troubleshooting client-side latency**

If you notice elevated latency on the client side but no corresponding increase in the CloudWatch `SuccessfulReadRequestLatency` and `SuccessfulWriteRequestLatency` metrics, which measure server-side latency, consider the following:
+ **Ensure the security group allows access to ports 6379 and 6380:** ElastiCache Serverless uses port 6379 for the primary endpoint and port 6380 for the reader endpoint. Some clients establish connectivity to both ports for every new connection, even if your application is not using the Read from Replica feature. If your security group does not allow inbound access to both ports, then connection establishment can take longer. Learn more about how to set up port access [here](set-up.md#elasticache-install-grant-access-VPN). 

**Troubleshooting server-side latency**

Some variability and occasional spikes are not a cause for concern. However, if the `Average` statistic shows a sharp increase that persists, check the AWS Health Dashboard and your Personal Health Dashboard for more information. If necessary, consider opening a support case with AWS Support. 

Consider the following best practices and strategies to reduce latency:
+ **Enable Read from Replica:** If your application allows it, we recommend enabling the “Read from Replica” feature in your Valkey or Redis OSS client to scale reads and achieve lower latency. When enabled, ElastiCache Serverless attempts to route your read requests to replica cache nodes that are in the same Availability Zone (AZ) as your client, thus avoiding cross-AZ network latency. Note that enabling the Read from Replica feature in your client means that your application accepts eventually consistent data. Your application may receive older data for some time if you read a key shortly after writing to it. 
+ **Ensure your application is deployed in the same AZs as your cache:** You may observe higher client-side latency if your application is not deployed in the same AZs as your cache. When you create a serverless cache, you can provide the subnets from which your application will access the cache, and ElastiCache Serverless creates VPC endpoints in those subnets. Ensure that your application is deployed in the same AZs. Otherwise, your application may incur a cross-AZ hop when accessing the cache, resulting in higher client-side latency. 
+ **Reuse connections:** ElastiCache Serverless requests are made via a TLS-enabled TCP connection using the RESP protocol. Initiating the connection (including authenticating the connection, if configured) takes time, so the latency of the first request is higher than typical. Requests over an already initialized connection deliver ElastiCache’s consistent low latency. For this reason, consider using connection pooling or reusing existing Valkey or Redis OSS connections. 
+ **Scaling speed:** ElastiCache Serverless automatically scales as your request rate grows. A sudden large increase in request rate, faster than the speed at which ElastiCache Serverless scales, may result in elevated latency for some time. ElastiCache Serverless can typically increase its supported request rate quickly, taking up to 10-12 minutes to double the request rate.
+ **Inspect long-running commands:** Some Valkey or Redis OSS commands, including Lua scripts or commands on large data structures, may run for a long time. To identify these commands, ElastiCache publishes command-level metrics. With [ElastiCache Serverless](serverless-metrics-events-redis.md#serverless-metrics) you can use the `BasedECPUs` metrics. 
+ **Throttled requests:** When requests are throttled in ElastiCache Serverless, you may experience an increase in client-side latency in your application. When requests are throttled, you should see an increase in the `ThrottledRequests` [ElastiCache Serverless](serverless-metrics-events-redis.md#serverless-metrics) metric. See the following section for troubleshooting throttled requests.
+ **Uniform distribution of keys and requests:** In ElastiCache for Valkey and Redis OSS, an uneven distribution of keys or requests per slot can result in a hot slot, which can cause elevated latency. ElastiCache Serverless supports up to 30,000 ECPUs/second (90,000 ECPUs/second when using Read from Replica) on a single slot, in a workload that executes simple SET/GET commands. We recommend evaluating your key and request distribution across slots and ensuring a uniform distribution if your request rate exceeds this limit. 
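As a sketch of the connection-reuse point above, the following minimal pool (a generic illustration, not a real client API; most Valkey or Redis OSS clients ship their own pooling) shows how a bounded set of connections can serve many requests while paying connection setup cost only once per pooled connection:

```python
# Minimal connection-pool sketch illustrating connection reuse. The factory
# stands in for a real client connection (TCP + TLS + AUTH setup); the point
# is that many requests reuse a bounded set of connections instead of paying
# setup cost, and first-request latency, on every request.
import queue

class Pool:
    def __init__(self, factory, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(factory())     # pay setup cost once, up front

    def acquire(self):
        return self._idle.get()           # blocks rather than opening more

    def release(self, conn):
        self._idle.put(conn)

created = 0
def make_conn():
    global created
    created += 1                          # stands in for TCP + TLS + AUTH
    return object()

pool = Pool(make_conn, size=4)
for _ in range(100):                      # 100 requests...
    conn = pool.acquire()
    pool.release(conn)
print(created)                            # ...but only 4 connections created
```

Real clients typically expose this behavior through configuration (pool size, timeouts) rather than requiring you to build it yourself.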

## Troubleshooting throttling issues in ElastiCache Serverless
<a name="wwe-troubleshooting.throttling"></a>

In service-oriented architectures and distributed systems, limiting the rate at which API calls are processed by various service components is called throttling. This smooths spikes, controls for mismatches in component throughput, and allows for more predictable recoveries when there's an unexpected operational event. ElastiCache Serverless is designed for these types of architectures, and most Valkey or Redis OSS clients have retries built in for throttled requests. Some degree of throttling is not necessarily a problem for your application, but persistent throttling of a latency-sensitive part of your data workflow can negatively impact user experience and reduce the overall efficiency of the system. 

When requests are throttled in ElastiCache Serverless, you should see an increase in the `ThrottledRequests` [ElastiCache Serverless](serverless-metrics-events-redis.md#serverless-metrics) metric. If you are noticing a high number of throttled requests, consider the following:
+ **Scaling speed:** ElastiCache Serverless automatically scales as you ingest more data or grow your request rate. If your application scales faster than the speed at which ElastiCache Serverless scales, then your requests may get throttled while ElastiCache Serverless scales to accommodate your workload. ElastiCache Serverless can typically increase the storage size quickly, taking up to 10-12 minutes to double the storage size in your cache.
+ **Uniform distribution of keys and requests:** In ElastiCache for Valkey and Redis OSS, an uneven distribution of keys or requests per slot can result in a hot slot. A hot slot can result in throttling of requests if the request rate to a single slot exceeds 30,000 ECPUs/second in a workload that executes simple SET/GET commands. Similarly, with ElastiCache for Memcached, a hot key can result in throttling of requests if the request rate exceeds 30,000 ECPUs/second.
+ **Read from Replica:** If your application allows it, consider using the “Read from Replica” feature. Most Valkey or Redis OSS clients can be configured to “scale reads” by directing reads to replica nodes. This feature enables you to scale read traffic. In addition, ElastiCache Serverless automatically routes read-from-replica requests to nodes in the same Availability Zone as your application, resulting in lower latency. When Read from Replica is enabled, you can achieve up to 90,000 ECPUs/second on a single slot for workloads with simple SET/GET commands. 
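Most clients retry throttled requests automatically; where yours does not, a retry wrapper with exponential backoff and jitter is a common pattern. The sketch below is illustrative: `ThrottledError` and `flaky` are hypothetical stand-ins for whatever exception and call your client exposes, and the computed delays are collected rather than slept, for brevity:

```python
# Hedged sketch of retry with exponential backoff and full jitter for
# throttled requests. ThrottledError and flaky are hypothetical stand-ins;
# a real client would raise its own exception type and time.sleep(delay).
import random

class ThrottledError(Exception):
    pass

def with_backoff(do_request, max_attempts=5, base=0.05, cap=1.0):
    delays = []                            # collected here for illustration
    for attempt in range(max_attempts):
        try:
            return do_request(), delays
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise                      # out of attempts; surface the error
            # full jitter: random delay up to the capped exponential value
            delay = random.uniform(0, min(cap, base * 2 ** attempt))
            delays.append(delay)           # a real client would sleep here

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:                     # throttled twice, then succeeds
        raise ThrottledError()
    return "OK"

result, delays = with_backoff(flaky)
print(result, len(delays))                 # OK 2
```

Capping the delay and adding jitter keeps retrying clients from hammering the cache in lockstep while it scales.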

# Persistent connection issues
<a name="TroubleshootingConnections"></a>

Verify the following items when troubleshooting persistent connectivity issues with ElastiCache:

**Topics**
+ [Security groups](#Security_groups)
+ [Network ACLs](#Network_ACLs)
+ [Route tables](#Route_tables)
+ [DNS resolution](#DNS_Resolution)
+ [Identifying issues with server-side diagnostics](#Diagnostics)
+ [Network connectivity validation](#Connectivity)
+ [Network-related limits](#Network-limits)
+ [CPU Usage](#CPU-Usage)
+ [Connections being terminated from the server side](#Connections-server)
+ [Client-side troubleshooting for Amazon EC2 instances](#Connections-client)
+ [Dissecting the time taken to complete a single request](#Dissecting-time)

## Security groups
<a name="Security_groups"></a>

Security groups are virtual firewalls protecting your ElastiCache client (EC2 instance, AWS Lambda function, Amazon ECS container, and so on) and your ElastiCache cache. Security groups are stateful, meaning that after incoming or outgoing traffic is allowed, the responses for that traffic are automatically authorized in the context of that specific security group.

The stateful feature requires the security group to keep track of all authorized connections, and there is a limit on tracked connections. If the limit is reached, new connections fail. See the troubleshooting section for help identifying whether the limit has been hit on the client or the ElastiCache side.

You can have a single security group assigned to both the client and the ElastiCache cluster, or individual security groups for each.

In both cases, you need to allow TCP outbound traffic on the ElastiCache port from the source and inbound traffic on the same port to ElastiCache. The default port is 11211 for Memcached and 6379 for Valkey or Redis OSS. By default, security groups allow all outbound traffic. In this case, only the inbound rule in the target security group is required.

For more information, see [Access patterns for accessing an ElastiCache cluster in an Amazon VPC](elasticache-vpc-accessing.md).

## Network ACLs
<a name="Network_ACLs"></a>

Network Access Control Lists (ACLs) are stateless rules. The traffic must be allowed in both directions (Inbound and Outbound) to succeed. Network ACLs are assigned to subnets, not specific resources. It is possible to have the same ACL assigned to ElastiCache and the client resource, especially if they are in the same subnet.

By default, network ACLs allow all traffic. However, it is possible to customize them to deny or allow traffic. Additionally, the evaluation of ACL rules is sequential, meaning that the lowest-numbered rule matching the traffic will allow or deny it. The minimum configuration to allow Valkey or Redis OSS traffic is:

Client side Network ACL:
+ **Inbound Rules:**
+ Rule number: preferably lower than any deny rule;
+ Type: Custom TCP Rule;
+ Protocol: TCP
+ Port Range: 1024-65535
+ Source: 0.0.0.0/0 (or create individual rules for the ElastiCache cluster subnets)
+ Allow/Deny: Allow
+ **Outbound Rules:**
+ Rule number: preferably lower than any deny rule;
+ Type: Custom TCP Rule;
+ Protocol: TCP
+ Port Range: 6379
+ Destination: 0.0.0.0/0 (or the ElastiCache cluster subnets. Keep in mind that using specific IPs may create issues in case of failover or scaling of the cluster)
+ Allow/Deny: Allow

ElastiCache Network ACL:
+ **Inbound Rules:**
+ Rule number: preferably lower than any deny rule;
+ Type: Custom TCP Rule;
+ Protocol: TCP
+ Port Range: 6379
+ Source: 0.0.0.0/0 (or create individual rules for the client subnets)
+ Allow/Deny: Allow
+ **Outbound Rules:**
+ Rule number: preferably lower than any deny rule;
+ Type: Custom TCP Rule;
+ Protocol: TCP
+ Port Range: 1024-65535
+ Destination: 0.0.0.0/0 (or the client subnets. Keep in mind that using specific IPs may create issues in case of failover or scaling of the cluster)
+ Allow/Deny: Allow

For more information, see [Network ACLs](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html).

## Route tables
<a name="Route_tables"></a>

Similarly to Network ACLs, each subnet can have different route tables. If clients and the ElastiCache cluster are in different subnets, make sure that their route tables allow them to reach each other.

More complex environments, involving multiple VPCs, dynamic routing, or network firewalls, may become difficult to troubleshoot. See [Network connectivity validation](#Connectivity) to confirm that your network settings are appropriate.

## DNS resolution
<a name="DNS_Resolution"></a>

ElastiCache provides the service endpoints based on DNS names. The endpoints available are `Configuration`, `Primary`, `Reader`, and `Node` endpoints. For more information, see [Finding Connection Endpoints](Endpoints.md).

In case of failover or cluster modification, the address associated with the endpoint name may change; it is updated automatically.

Custom DNS settings (i.e., not using the VPC DNS service) may not be aware of the ElastiCache-provided DNS names. Make sure that your system can successfully resolve the ElastiCache endpoints using system tools like `dig` (as shown following) or `nslookup`.

```
$ dig +short example.xxxxxx.ng.0001.use1.cache.amazonaws.com
example-001.xxxxxx.0001.use1.cache.amazonaws.com.
1.2.3.4
```

You can also force the name resolution through the VPC DNS service:

```
$ dig +short example.xxxxxx.ng.0001.use1.cache.amazonaws.com @169.254.169.253
example-001.tihewd.0001.use1.cache.amazonaws.com.
1.2.3.4
```
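You can also confirm that your application’s own runtime resolves the endpoint, not just the shell tools. In this sketch the hostname is a placeholder (`localhost` is used only so the example runs anywhere); replace it with your ElastiCache endpoint:

```python
# Confirm that the application runtime, not just dig/nslookup, can resolve
# the cache endpoint. "localhost" is a placeholder so the sketch runs
# anywhere; substitute your ElastiCache endpoint name.
import socket

def resolve(hostname):
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror as err:
        return f"resolution failed: {err}"

print(resolve("localhost"))
```

If this fails while `dig` against the VPC DNS service succeeds, the gap is usually a custom resolver configuration on the host or in the container image.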

## Identifying issues with server-side diagnostics
<a name="Diagnostics"></a>

CloudWatch metrics and run-time information from the ElastiCache engine are common sources of information for identifying potential causes of connection issues. A good analysis commonly starts with the following items:
+ CPU usage: Valkey and Redis OSS are multi-threaded applications. However, execution of each command happens in a single (main) thread. For this reason, ElastiCache provides the metrics `CPUUtilization` and `EngineCPUUtilization`. `EngineCPUUtilization` provides the CPU utilization dedicated to the Valkey or Redis OSS process, and `CPUUtilization` the usage across all vCPUs. Nodes with more than one vCPU usually have different values for `CPUUtilization` and `EngineCPUUtilization`, the second being commonly higher. High `EngineCPUUtilization` can be caused by an elevated number of requests or complex operations that take a significant amount of CPU time to complete. You can identify both with the following:
  + Elevated number of requests: Check for increases on other metrics matching the `EngineCPUUtilization` pattern. Useful metrics are:
    + `CacheHits` and `CacheMisses`: the number of successful requests or requests that didn’t find a valid item in the cache. If the ratio of misses compared to hits is high, the application is wasting time and resources with unfruitful requests.
    + `SetTypeCmds` and `GetTypeCmds`: These metrics, correlated with `EngineCPUUtilization`, can help you understand whether the load is significantly higher for write requests, measured by `SetTypeCmds`, or reads, measured by `GetTypeCmds`. If the load is predominantly reads, using multiple read replicas can balance the requests across multiple nodes and spare the primary for writes. In cluster mode-disabled clusters, you can use read replicas by creating an additional connection configuration in the application that uses the ElastiCache reader endpoint. For more information, see [Finding Connection Endpoints](Endpoints.md). Read operations must be submitted through this additional connection; write operations go through the regular primary endpoint. In cluster mode-enabled clusters, it is advisable to use a library that supports read replicas natively. With the right flags, the library automatically discovers the cluster topology and the replica nodes, enables read operations through the [READONLY](https://valkey.io/commands/readonly) Valkey or Redis OSS command, and submits read requests to the replicas.
  + Elevated number of connections:
    + `CurrConnections` and `NewConnections`: `CurrConnections` is the number of established connections at the moment of the datapoint collection, while `NewConnections` shows how many connections were created in the period.

      Creating and handling connections implies significant CPU overhead. Additionally, the TCP three-way handshake required to create new connections will negatively affect the overall response times.

      An ElastiCache node with thousands of `NewConnections` per minute indicates that a connection is created and used by just a few commands, which is not optimal. Keeping connections established and reusing them for new operations is a best practice. This is possible when the client application supports and properly implements connection pooling or persistent connections. With connection pooling, the number of `CurrConnections` does not have big variations, and `NewConnections` should be as low as possible. Valkey and Redis OSS provide optimal performance with a small number of connections. Keeping `CurrConnections` in the order of tens or hundreds minimizes the usage of resources needed to support individual connections, such as client buffers and CPU cycles to serve the connection.
  + Network throughput:
    + Determine the bandwidth: ElastiCache nodes have network bandwidth proportional to the node size. Because applications have different characteristics, the results can vary according to the workload. For example, applications with a high rate of small requests tend to affect CPU usage more than network throughput, while bigger keys cause higher network utilization. For that reason, it is advisable to test the nodes with the actual workload for a better understanding of the limits.

      Simulating the load from the application would provide more accurate results. However, benchmark tools can give a good idea of the limits.
    + For cases where the requests are predominantly reads, using replicas for read operations alleviates the load on the primary node. If the use case is predominantly writes, the use of many replicas amplifies the network usage. For every byte written to the primary node, N bytes are sent to the replicas, N being the number of replicas. The best practice for write-intensive workloads is to use ElastiCache for Redis OSS with cluster mode enabled, so that the writes can be balanced across multiple shards, or to scale up to a node type with more network capability.
    + The CloudWatch metrics `NetworkBytesIn` and `NetworkBytesOut` provide the amount of data coming into or leaving the node, respectively. `ReplicationBytes` is the traffic dedicated to data replication.

    For more information, see [Network-related limits](#Network-limits).
  + Complex commands: Redis OSS commands are served on a single thread, meaning that requests are served sequentially. A single slow command can affect other requests and connections, culminating in time-outs. The use of commands that act upon multiple values, keys, or data types must be done carefully. Connections can be blocked or terminated depending on the number of parameters, or the size of their input or output values.

    A notorious example is the `KEYS` command. It sweeps the entire keyspace searching for a given pattern and blocks the execution of other commands while it runs. Redis OSS uses “Big O” notation to describe the complexity of its commands.

    The `KEYS` command has O(N) time complexity, N being the number of keys in the database. Therefore, the larger the number of keys, the slower the command. `KEYS` can cause trouble in different ways: if no search pattern is used, the command returns all key names available. In databases with thousands or millions of items, a huge output is created and can flood the network buffers.

    If a search pattern is used, only the keys matching the pattern will return to the client. However, the engine still sweeps the entire keyspace searching for it, and the time to complete the command will be the same. 

    An alternative for `KEYS` is the `SCAN` command. It iterates over the keyspace and limits the iterations in a specific number of items, avoiding prolonged blocks on the engine.

    `SCAN` has a `COUNT` parameter, used to set the size of the iteration blocks. The default value is 10 (10 items per iteration).

    Depending on the number of items in the database, small `COUNT` values require more iterations to complete a full scan, while bigger values keep the engine busy for longer at each iteration. Small `COUNT` values make `SCAN` slower on big databases, and larger values can cause the same issues mentioned for `KEYS`.

    As an example, running the `SCAN` command with a `COUNT` value of 10 requires 100,000 iterations on a database with 1 million keys. If the average network round-trip time is 0.5 milliseconds, approximately 50,000 milliseconds (50 seconds) will be spent transferring requests.

    On the other hand, if the `COUNT` value were 1,000,000, a single iteration would be required and only 0.5 ms would be spent transferring it. However, the engine would be entirely blocked for other operations until the command finished sweeping the entire keyspace. 

    Besides `KEYS`, several other commands are potentially harmful if not used correctly. To see a list of all commands and their respective time complexity, go to [Valkey and Redis OSS commands](https://valkey.io/commands).

    Examples of potential issues:
    + Lua scripts: Valkey and Redis OSS provide an embedded Lua interpreter, allowing the execution of scripts on the server side. Lua scripts on Valkey and Redis OSS are executed at the engine level and are atomic by definition, meaning that no other command or script is allowed to run while a script is executing. Lua scripts provide the possibility of running multiple commands, decision-making algorithms, data parsing, and more, directly on the engine. While the atomicity of scripts and the chance to offload application logic are tempting, scripts must be used with care and for small operations. On ElastiCache, the execution time of Lua scripts is limited to 5 seconds. Scripts that haven’t written to the keyspace are automatically terminated after the 5-second period. To avoid data corruption and inconsistencies, the node fails over if the script execution hasn’t completed in 5 seconds and performed any write during its execution. [Transactions](https://valkey.io/topics/transactions) are the alternative to guarantee the consistency of multiple related key modifications. A transaction allows the execution of a block of commands while watching existing keys for modifications. If any of the watched keys change before the completion of the transaction, all modifications are discarded.
    + Mass deletion of items: The `DEL` command accepts multiple parameters, which are the key names to be deleted. Deletion operations are synchronous and take significant CPU time if the list of parameters is big, or if a key contains a big list, set, sorted set, or hash (data structures holding several sub-items). In other words, even the deletion of a single key can take significant time if it has many elements. The alternative to `DEL` is `UNLINK`, an asynchronous command available since Redis OSS 4. `UNLINK` should be preferred over `DEL` whenever possible. Starting with ElastiCache for Redis OSS 6.x, the `lazyfree-lazy-user-del` parameter makes the `DEL` command behave like `UNLINK` when enabled. For more information, see [Redis OSS 6.0 Parameter Changes](ParameterGroups.Engine.md#ParameterGroups.Redis.6-x). 
    + Commands acting upon multiple keys: `DEL` was mentioned before as a command that accepts multiple arguments, and its execution time is directly proportional to the number of arguments. However, Redis OSS provides many more commands that work similarly. For example, `MSET` and `MGET` allow the insertion or retrieval of multiple String keys at once. Their usage may be beneficial to reduce the network latency inherent in multiple individual `SET` or `GET` commands. However, an extensive list of parameters affects CPU usage.

       While CPU utilization alone is not the cause of connectivity issues, spending too much time processing one or a few commands over multiple keys may cause failure of other requests and increase overall CPU utilization.

      The number of keys and their size will affect the command complexity and consequently completion time.

      Other examples of commands that can act upon multiple keys: `HMGET`, `HMSET`, `MSETNX`, `PFCOUNT`, `PFMERGE`, `SDIFF`, `SDIFFSTORE`, `SINTER`, `SINTERSTORE`, `SUNION`, `SUNIONSTORE`, `TOUCH`, `ZDIFF`, `ZDIFFSTORE`, `ZINTER` or `ZINTERSTORE`.
    + Commands acting upon multiple data types: Redis OSS also provides commands that act upon one or multiple keys, regardless of their data type. ElastiCache for Redis OSS provides the metric `KeyBasedCmds` to monitor such commands. This metric sums the execution of the following commands in the selected period:
      + O(N) complexity:
        + `KEYS`
      + O(1)
        + `EXISTS`
        + `OBJECT`
        + `PTTL`
        + `RANDOMKEY`
        + `TTL`
        + `TYPE`
        + `EXPIRE`
        + `EXPIREAT`
        + `MOVE`
        + `PERSIST`
        + `PEXPIRE`
        + `PEXPIREAT`
        + `UNLINK`: O(N) to reclaim memory. However, the memory-reclaiming task happens in a separate thread and does not block the engine
      + Different complexity times depending on the data type:
        + `DEL`
        + `DUMP`
        + `RENAME` is considered a command with O(1) complexity, but executes `DEL` internally. The execution time will vary depending on the size of the renamed key.
        + `RENAMENX`
        + `RESTORE`
        + `SORT`
      + Big hashes: Hash is a data type that allows a single key with multiple key-value sub-items. Each hash can store 4,294,967,295 items, and operations on big hashes can become expensive. Similarly to `KEYS`, hashes have the `HKEYS` command with O(N) time complexity, N being the number of items in the hash. `HSCAN` should be preferred over `HKEYS` to avoid long-running commands. `HDEL`, `HGETALL`, `HMGET`, `HMSET`, and `HVALS` are commands that should be used with caution on big hashes.
    + Other big data-structures: Besides hashes, other data structures can be CPU intensive. Sets, Lists, Sorted Sets, and Hyperloglogs can also take significant time to be manipulated depending on their size and commands used. For more information on those commands, see [Valkey and Redis OSS commands](https://valkey.io/commands).
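The `SCAN` trade-off described above can be put into numbers. The helper below is illustrative arithmetic only: it reproduces the 100,000-iteration, 50-second estimate from the text and shows how a larger `COUNT` shifts cost from network round trips to per-call engine time:

```python
# Illustrative arithmetic for the SCAN COUNT trade-off: more iterations mean
# more network round trips, while a huge COUNT keeps the engine busy longer
# per call. Numbers mirror the worked example in the text above.
import math

def scan_round_trip_cost(total_keys, count, rtt_ms):
    iterations = math.ceil(total_keys / count)
    return iterations, iterations * rtt_ms

# COUNT=10 over 1 million keys: 100,000 iterations, ~50 s of round trips.
iters, ms = scan_round_trip_cost(total_keys=1_000_000, count=10, rtt_ms=0.5)
print(iters, ms / 1000)        # 100000 50.0

# COUNT=1000: only 1,000 iterations (~0.5 s of round-trip time), at the
# cost of more work per call on the engine.
iters2, ms2 = scan_round_trip_cost(1_000_000, 1000, 0.5)
print(iters2, ms2 / 1000)      # 1000 0.5
```

A `COUNT` in the hundreds to low thousands is usually a reasonable middle ground, but the right value depends on key sizes and how latency-sensitive the concurrent traffic is.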

## Network connectivity validation
<a name="Connectivity"></a>

After reviewing the network configurations related to DNS resolution, security groups, network ACLs, and route tables, you can validate connectivity with the VPC Reachability Analyzer and system tools.

Reachability Analyzer tests the network connectivity and confirms whether all requirements and permissions are satisfied. For the tests below, you need the ENI ID (elastic network interface ID) of one of the ElastiCache nodes available in your VPC. You can find it by doing the following:

1. Go to [https://console.aws.amazon.com/ec2/v2/home?#NIC](https://console.aws.amazon.com/ec2/v2/home?#NIC)

1. Filter the interface list by your ElastiCache cluster name or by the IP address obtained from the DNS validations done previously.

1. Write down or otherwise save the ENI ID. If multiple interfaces are shown, review the description to confirm that they belong to the right ElastiCache cluster and choose one of them.

1. Proceed to the next step.

1. Create an analysis path at [https://console.aws.amazon.com/vpc/home?#ReachabilityAnalyzer](https://console.aws.amazon.com/vpc/home?#ReachabilityAnalyzer) and choose the following options:
   + **Source Type**: Choose **Instance** if your ElastiCache client runs on an Amazon EC2 instance, or **Network Interface** if it uses another service (such as AWS Fargate, Amazon ECS with awsvpc network mode, or AWS Lambda), and the respective resource ID (EC2 instance or ENI ID);
   + **Destination Type**: Choose **Network Interface** and select the **ElastiCache ENI** from the list.
   + **Destination port**: Specify 6379 for ElastiCache for Valkey or Redis OSS, or 11211 for ElastiCache for Memcached. Those are the ports defined in the default configuration, and this example assumes that they have not been changed.
   + **Protocol**: TCP

Run the analysis and wait a few moments for the result. If the status is unreachable, open the analysis details and review the **Analysis explorer** for details on where the requests were blocked.

If the reachability tests passed, proceed to the verification on the OS level.

To validate TCP connectivity on the ElastiCache service port on Amazon Linux: `Nping` is available in the package `nmap` and can test TCP connectivity on the ElastiCache port, as well as provide the network round-trip time to establish the connection. Use it to validate network connectivity and the current latency to the ElastiCache cluster, as shown following: 

```
$ sudo nping --tcp -p 6379 example.xxxxxx.ng.0001.use1.cache.amazonaws.com

Starting Nping 0.6.40 ( http://nmap.org/nping ) at 2020-12-30 16:48 UTC
SENT (0.0495s) TCP ...
(Output suppressed )

Max rtt: 0.937ms | Min rtt: 0.318ms | Avg rtt: 0.449ms
Raw packets sent: 5 (200B) | Rcvd: 5 (220B) | Lost: 0 (0.00%)
Nping done: 1 IP address pinged in 4.08 seconds
```

By default, `nping` sends 5 probes with a delay of 1 second between them. You can use the `-c` option to increase the number of probes and `--delay` to change the time between tests. 

If the tests with `nping` fail and the *VPC Reachability Analyzer* tests passed, ask your system administrator to review possible Host-based firewall rules, asymmetric routing rules, or any other possible restriction on the operating system level.

On the ElastiCache console, check if **Encryption in-transit** is enabled in your ElastiCache cluster details. If in-transit encryption is enabled, confirm if the TLS session can be established with the following command:

```
openssl s_client -connect example.xxxxxx.use1.cache.amazonaws.com:6379
```

An extensive output is expected if the connection and TLS negotiation are successful. Check the return code available in the last line; the value must be `0 (ok)`. If openssl returns something different, check the reason for the error at [https://www.openssl.org/docs/man1.0.2/man1/verify.html#DIAGNOSTICS](https://www.openssl.org/docs/man1.0.2/man1/verify.html#DIAGNOSTICS).

If all the infrastructure and operating system tests passed but your application is still unable to connect to ElastiCache, check whether the application configurations are compliant with the ElastiCache settings. Common mistakes are:
+ Your application does not support ElastiCache cluster mode, and ElastiCache has cluster mode enabled;
+ Your application does not support TLS/SSL, and ElastiCache has in-transit encryption enabled; 
+ Your application supports TLS/SSL but does not have the right configuration flags or trusted certificate authorities. 

## Network-related limits
<a name="Network-limits"></a>
+ Maximum number of connections: There are hard limits for simultaneous connections. Each ElastiCache node allows up to 65,000 simultaneous connections across all clients. This limit can be monitored through the `CurrConnections` metric on CloudWatch. However, clients also have their own limits for outbound connections. On Linux, check the allowed ephemeral port range with the command:

  ```
  # sysctl net.ipv4.ip_local_port_range
  net.ipv4.ip_local_port_range = 32768 60999
  ```

  In the previous example, 28231 connections will be allowed from the same source, to the same destination IP (ElastiCache node) and port. The following command shows how many connections exist for a specific ElastiCache node (IP 1.2.3.4):

  ```
  ss --numeric --tcp state connected "dst 1.2.3.4 and dport == 6379" | grep -vE '^State' | wc -l
  ```

  If the number is too high, your system may become overloaded trying to process the connection requests. Consider implementing techniques like connection pooling or persistent connections to better handle the connections. Whenever possible, configure the connection pool to limit the maximum number of connections to a few hundred. Also, back-off logic to handle timeouts or other connection exceptions is advisable to avoid connection churn in case of issues.
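  Such back-off logic can be as simple as the following sketch (plain Python, exponential back-off with full jitter; the `base` and `cap` values are illustrative, not ElastiCache defaults):

  ```python
  import random

  def backoff_delay(attempt: int, base: float = 0.1, cap: float = 5.0) -> float:
      """Exponential back-off with full jitter: wait a random time between
      0 and min(cap, base * 2**attempt) before the next reconnection attempt."""
      return random.uniform(0, min(cap, base * (2 ** attempt)))

  # Example: time.sleep(backoff_delay(attempt)) before retrying a failed connection.
  ```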
+ Network traffic limits: Check the following [CloudWatch metrics for Redis OSS](CacheMetrics.Redis.md) to identify possible network limits being hit on the ElastiCache node:
  + `NetworkBandwidthInAllowanceExceeded` / `NetworkBandwidthOutAllowanceExceeded`: Network packets shaped because the throughput exceeded the aggregated bandwidth limit.

    It is important to note that every byte written to the primary node will be replicated to N replicas, N being the number of replicas. Clusters with small node types, multiple replicas, and intensive write requests may not be able to cope with the replication backlog. For such cases, it's a best practice to scale-up (change node type), scale-out (add shards in cluster-mode enabled clusters), reduce the number of replicas, or minimize the number of writes.
  + `NetworkConntrackAllowanceExceeded`: Packets shaped because the maximum number of connections tracked across all security groups assigned to the node has been exceeded. New connections will likely fail during this period.
  + `NetworkPacketsPerSecondAllowanceExceeded`: Maximum number of packets per second exceeded. Workloads based on a high rate of very small requests may hit this limit before the maximum bandwidth.

  The metrics above are the ideal way to confirm nodes hitting their network limits. However, limits are also identifiable as plateaus on network metrics.

  If the plateaus are observed for extended periods, they are likely followed by replication lag, an increase in `BytesUsedForCache`, a drop in `FreeableMemory`, and high swap and CPU usage. Amazon EC2 instances also have network limits that can be tracked through [ENA driver metrics](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/monitoring-network-performance-ena.html). Linux instances with enhanced networking support and ENA drivers 2.2.10 or newer can review the limit counters with the command:

  ```
  # ethtool -S eth0 | grep "allowance_exceeded"
  ```

## CPU Usage
<a name="CPU-Usage"></a>

The CPU usage metric is the starting point of investigation, and the following items can help to narrow down possible issues on the ElastiCache side:
+ Redis OSS SlowLogs: The ElastiCache default configuration retains the last 128 commands that took over 10 milliseconds to complete. The history of slow commands is kept during the engine runtime and is lost in case of failure or restart. If the list reaches 128 entries, old events are removed to make room for new ones. The size of the list of slow events and the execution time considered slow can be modified via the parameters `slowlog-max-len` and `slowlog-log-slower-than` in a [custom parameter group](ParameterGroups.md). The slowlog list can be retrieved by running `SLOWLOG GET 128` on the engine, 128 being the last 128 slow commands reported. Each entry has the following fields:

  ```
  1) 1) (integer) 1 -----------> Sequential ID
     2) (integer) 1609010767 --> Timestamp (Unix epoch time) of the event
     3) (integer) 4823378 -----> Time in microseconds to complete the command
     4) 1) "keys" -------------> Command
        2) "*" ----------------> Arguments
     5) "1.2.3.4:57004" -------> Source
  ```

  The event above happened on December 26, at 19:26:07 UTC, took 4.8 seconds (4,823,378 microseconds) to complete, and was caused by the `KEYS` command requested from the client 1.2.3.4.

  On Linux, the timestamp can be converted with the `date` command:

  ```
  $ date --date='@1609010767'
  Sat Dec 26 19:26:07 UTC 2020
  ```

  With Python:

  ```
  >>> from datetime import datetime
  >>> datetime.fromtimestamp(1609010767)
  datetime.datetime(2020, 12, 26, 19, 26, 7)
  ```

  Or on Windows with PowerShell:

  ```
  PS D:\Users\user> [datetimeoffset]::FromUnixTimeSeconds('1609010767')
  DateTime      : 12/26/2020 7:26:07 PM
  UtcDateTime   : 12/26/2020 7:26:07 PM
  LocalDateTime : 12/26/2020 2:26:07 PM
  Date          : 12/26/2020 12:00:00 AM
  Day           : 26
  DayOfWeek     : Saturday
  DayOfYear     : 361
  Hour          : 19
  Millisecond   : 0
  Minute        : 26
  Month         : 12
  Offset        : 00:00:00
  Ticks         : 637446075670000000
  UtcTicks      : 637446075670000000
  TimeOfDay     : 19:26:07
  Year          : 2020
  ```

  Many slow commands in a short period of time (the same minute or less) are a reason for concern. Review the nature of the commands and how they can be optimized (see previous examples). If commands with O(1) time complexity are frequently reported, check the other factors for high CPU usage mentioned earlier.
+ Latency metrics: ElastiCache for Redis OSS provides CloudWatch metrics to monitor the average latency for different classes of commands. The datapoint is calculated by dividing the total execution time of commands in the category by the total number of executions in the period. It is important to understand that latency metric results are an aggregate of multiple commands. A single command can cause unexpected results, like timeouts, without significant impact on the metrics. For such cases, the slowlog events would be a more accurate source of information. The following list contains the latency metrics available and the respective commands that affect them.
  + EvalBasedCmdsLatency: related to Lua Script commands, `eval`, `evalsha`;
  + GeoSpatialBasedCmdsLatency: `geodist`, `geohash`, `geopos`, `georadius`, `georadiusbymember`, `geoadd`;
  + GetTypeCmdsLatency: Read commands, regardless of data type;
  + HashBasedCmdsLatency: `hexists`, `hget`, `hgetall`, `hkeys`, `hlen`, `hmget`, `hvals`, `hstrlen`, `hdel`, `hincrby`, `hincrbyfloat`, `hmset`, `hset`, `hsetnx`;
  + HyperLogLogBasedCmdsLatency: `pfselftest`, `pfcount`, `pfdebug`, `pfadd`, `pfmerge`;
  + KeyBasedCmdsLatency: Commands that can act upon different data types: `dump`, `exists`, `keys`, `object`, `pttl`, `randomkey`, `ttl`, `type`, `del`, `expire`, `expireat`, `move`, `persist`, `pexpire`, `pexpireat`, `rename`, `renamenx`, `restore`, `sort`, `unlink`;
  + ListBasedCmdsLatency: `lindex`, `llen`, `lrange`, `blpop`, `brpop`, `brpoplpush`, `linsert`, `lpop`, `lpush`, `lpushx`, `lrem`, `lset`, `ltrim`, `rpop`, `rpoplpush`, `rpush`, `rpushx`;
  + PubSubBasedCmdsLatency: `psubscribe`, `publish`, `pubsub`, `punsubscribe`, `subscribe`, `unsubscribe`;
  + SetBasedCmdsLatency: `scard`, `sdiff`, `sinter`, `sismember`, `smembers`, `srandmember`, `sunion`, `sadd`, `sdiffstore`, `sinterstore`, `smove`, `spop`, `srem`, `sunionstore`; 
  + SetTypeCmdsLatency: Write commands, regardless of data-type;
  + SortedSetBasedCmdsLatency: `zcard`, `zcount`, `zrange`, `zrangebyscore`, `zrank`, `zrevrange`, `zrevrangebyscore`, `zrevrank`, `zscore`, `zrangebylex`, `zrevrangebylex`, `zlexcount`, `zadd`, `zincrby`, `zinterstore`, `zrem`, `zremrangebyrank`, `zremrangebyscore`, `zunionstore`, `zremrangebylex`, `zpopmax`, `zpopmin`, `bzpopmin`, `bzpopmax`;
  + StringBasedCmdsLatency: `bitcount`, `get`, `getbit`, `getrange`, `mget`, `strlen`, `substr`, `bitpos`, `append`, `bitop`, `bitfield`, `decr`, `decrby`, `getset`, `incr`, `incrby`, `incrbyfloat`, `mset`, `msetnx`, `psetex`, `set`, `setbit`, `setex`, `setnx`, `setrange`; 
  + StreamBasedCmdsLatency: `xrange`, `xrevrange`, `xlen`, `xread`, `xpending`, `xinfo`, `xadd`, `xgroup`, `xreadgroup`, `xack`, `xclaim`, `xdel`, `xtrim`, `xsetid`;
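  The averaging behind these datapoints works as in the following sketch (the numbers are illustrative, not real metric data):

  ```python
  def average_latency_usec(total_exec_time_usec: float, call_count: int) -> float:
      """CloudWatch-style datapoint: total execution time in the period
      divided by the number of executions in the category."""
      return total_exec_time_usec / call_count

  # 40,000 GETs taking 1,200,000 microseconds in total average out to 30 µs,
  # even if a single outlier GET took half a second on its own.
  ```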
+ Redis OSS runtime commands:
  + `info commandstats`: Provides a list of commands executed since the engine started, their cumulative number of executions, total execution time, and average execution time per command;
  + `client list`: Provides a list of currently connected clients and relevant information like buffer usage, last command executed, and so on;
+ Backup and replication: ElastiCache for Redis OSS versions earlier than 2.8.22 use a forked process to create backups and process full syncs with the replicas. This method may incur significant memory overhead for write-intensive use cases.

  Starting with ElastiCache Redis OSS 2.8.22, AWS introduced a forkless backup and replication method. The new method may delay writes in order to prevent failures. Both methods can cause periods of higher CPU utilization, lead to higher response times, and consequently cause client timeouts during their execution. Always check whether client failures happen during the backup window or whether the `SaveInProgress` metric was 1 in the period. It is advisable to schedule the backup window for periods of low utilization to minimize the possibility of issues with clients or backup failures.

## Connections being terminated from the server side
<a name="Connections-server"></a>

The default ElastiCache for Redis OSS configuration keeps the client connections established indefinitely. However, in some cases connection termination may be desirable. For example:
+ Bugs in the client application may cause connections to be forgotten and kept established in an idle state. This is called a "connection leak" and the consequence is a steady increase in the number of established connections observed on the `CurrConnections` metric. This behavior can result in saturation on the client or ElastiCache side. When an immediate fix is not possible from the client side, some administrators set a "timeout" value in their ElastiCache parameter group. The timeout is the time in seconds allowed for idle connections to persist. If the client doesn't submit any request in the period, the engine terminates the connection as soon as it reaches the timeout value. Small timeout values may result in unnecessary disconnections, and clients need to handle them properly and reconnect, causing delays.
+ The memory used to store keys is shared with client buffers. Slow clients with big requests or responses may demand a significant amount of memory to handle their buffers. The default ElastiCache for Redis OSS configuration does not restrict the size of regular client output buffers. If the `maxmemory` limit is hit, the engine tries to evict items to fulfill the buffer usage. In extremely low memory conditions, ElastiCache for Redis OSS might choose to disconnect clients that consume large client output buffers in order to free memory and retain the cluster's health.

  It is possible to limit the size of client buffers with custom configurations, and clients hitting the limit will be disconnected. However, clients should be able to handle unexpected disconnections. The parameters to handle buffer sizes for regular clients are the following:
  + client-query-buffer-limit: Maximum size of a single input request;
  + client-output-buffer-limit-normal-soft-limit: Soft limit for client connections. The connection is terminated if it stays above the soft limit for more than the time in seconds defined in client-output-buffer-limit-normal-soft-seconds, or if it hits the hard limit;
  + client-output-buffer-limit-normal-soft-seconds: Time allowed for connections exceeding client-output-buffer-limit-normal-soft-limit;
  + client-output-buffer-limit-normal-hard-limit: A connection hitting this limit is immediately terminated.

  Besides the regular client buffers, the following options control the buffer for replica nodes and Pub/Sub (Publish/Subscribe) clients:
  + client-output-buffer-limit-replica-soft-limit;
  + client-output-buffer-limit-replica-soft-seconds;
  + client-output-buffer-limit-replica-hard-limit;
  + client-output-buffer-limit-pubsub-soft-limit;
  + client-output-buffer-limit-pubsub-soft-seconds;
  + client-output-buffer-limit-pubsub-hard-limit;
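  The soft/hard limit semantics described above can be summarized in this sketch (plain Python; `should_disconnect` is a hypothetical helper illustrating the policy, not engine code):

  ```python
  def should_disconnect(buffer_bytes: int, secs_over_soft: float,
                        soft_limit: int, soft_seconds: float,
                        hard_limit: int) -> bool:
      """Mirror the output-buffer policy: drop the client when the hard
      limit is hit, or when the buffer stays above the soft limit for
      longer than the configured number of seconds."""
      if buffer_bytes >= hard_limit:
          return True
      return buffer_bytes >= soft_limit and secs_over_soft >= soft_seconds
  ```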

## Client-side troubleshooting for Amazon EC2 instances
<a name="Connections-client"></a>

The load and responsiveness on the client side can also affect the requests to ElastiCache. EC2 instance and operating system limits need to be carefully reviewed while troubleshooting intermittent connectivity or timeout issues. Some key points to observe:
+ CPU: 
  + EC2 instance CPU usage: Make sure the CPU hasn't been saturated or near 100 percent. Historical analysis can be done via CloudWatch; however, keep in mind that datapoint granularity is either 1 minute (with detailed monitoring enabled) or 5 minutes;
  + If using [burstable EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-performance-instances.html), make sure that their CPU credit balance hasn’t been depleted. This information is available on the `CPUCreditBalance` CloudWatch metric.
  + Short periods of high CPU usage can cause timeouts without showing as 100 percent utilization on CloudWatch. Such cases require real-time monitoring with operating-system tools like `top`, `ps`, and `mpstat`.
+ Network
  + Check whether the network throughput is within acceptable values according to the instance capabilities. For more information, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).
  + On instances with the `ena` Enhanced Network driver, check the [ena statistics](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-ena.html#statistics-ena) for timeouts or exceeded limits. The following statistics are useful to confirm network limits saturation:
    + `bw_in_allowance_exceeded` / `bw_out_allowance_exceeded`: number of packets shaped due to excessive inbound or outbound throughput;
    + `conntrack_allowance_exceeded`: number of packets dropped due to security groups [connection tracking limits](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/security-group-connection-tracking.html#connection-tracking-throttling). New connections will fail when this limit is saturated;
    + `linklocal_allowance_exceeded`: number of packets dropped due to excessive requests to link-local services such as the instance metadata service, NTP, or VPC DNS. The limit is 1,024 packets per second for all these services combined;
    + `pps_allowance_exceeded`: number of packets dropped due to an excessive packets-per-second rate. The PPS limit can be hit when the network traffic consists of thousands or millions of very small requests per second. ElastiCache traffic can be optimized to make better use of network packets via pipelines or commands that do multiple operations at once, like `MGET` instead of `GET`.
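To see why batching matters for the PPS limit, note that each command costs at least one request/response round trip, so fetching keys one `GET` at a time costs roughly one packet pair per key, while `MGET` or pipelining amortizes them. A back-of-the-envelope sketch (illustrative numbers, not measured limits):

```python
def round_trips(n_keys: int, batch_size: int) -> int:
    """Round trips needed to fetch n_keys when batch_size keys are sent
    per command (batch_size=1 models individual GET calls)."""
    return -(-n_keys // batch_size)  # ceiling division

# 10,000 keys: 10,000 round trips with plain GET, 100 with MGET batches of 100.
```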

## Dissecting the time taken to complete a single request
<a name="Dissecting-time"></a>
+ On the network: `tcpdump` and `Wireshark` (`tshark` on the command line) are handy tools to understand how much time the request took to travel the network, hit the ElastiCache engine, and get a return. The following example highlights a single request created with the following command:

  ```
  $ echo ping | nc example.xxxxxx.ng.0001.use1.cache.amazonaws.com 6379
  +PONG
  ```

  In parallel to the command above, tcpdump was in execution and returned:

  ```
  $ sudo tcpdump -i any -nn port 6379 -tt
  tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
  listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
  1609428918.917869 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [S], seq 177032944, win 26883, options [mss 8961,sackOK,TS val 27819440 ecr 0,nop,wscale 7], length 0
  1609428918.918071 IP 172.31.11.247.6379 > 172.31.11.142.40966: Flags [S.], seq 53962565, ack 177032945, win 28960, options [mss 1460,sackOK,TS val 3788576332 ecr 27819440,nop,wscale 7], length 0
  1609428918.918091 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [.], ack 1, win 211, options [nop,nop,TS val 27819440 ecr 3788576332], length 0
  1609428918.918122 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [P.], seq 1:6, ack 1, win 211, options [nop,nop,TS val 27819440 ecr 3788576332], length 5: RESP "ping"
  1609428918.918132 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [F.], seq 6, ack 1, win 211, options [nop,nop,TS val 27819440 ecr 3788576332], length 0
  1609428918.918240 IP 172.31.11.247.6379 > 172.31.11.142.40966: Flags [.], ack 6, win 227, options [nop,nop,TS val 3788576332 ecr 27819440], length 0
  1609428918.918295 IP 172.31.11.247.6379 > 172.31.11.142.40966: Flags [P.], seq 1:8, ack 7, win 227, options [nop,nop,TS val 3788576332 ecr 27819440], length 7: RESP "PONG"
  1609428918.918300 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [.], ack 8, win 211, options [nop,nop,TS val 27819441 ecr 3788576332], length 0
  1609428918.918302 IP 172.31.11.247.6379 > 172.31.11.142.40966: Flags [F.], seq 8, ack 7, win 227, options [nop,nop,TS val 3788576332 ecr 27819440], length 0
  1609428918.918307 IP 172.31.11.142.40966 > 172.31.11.247.6379: Flags [.], ack 9, win 211, options [nop,nop,TS val 27819441 ecr 3788576332], length 0
  ^C
  10 packets captured
  10 packets received by filter
  0 packets dropped by kernel
  ```

  From the output above, we can confirm that the TCP three-way handshake was completed in 222 microseconds (918091 - 917869) and the ping command was submitted and returned in 173 microseconds (918295 - 918122).

  It took 438 microseconds (918307 - 917869) from requesting to closing the connection. These results confirm that the network and engine response times are good, and the investigation can focus on other components.
+ On the operating system: `strace` can help identify time gaps at the OS level. The analysis of actual applications would be far more extensive, and specialized application profilers or debuggers are advisable. The following example just shows whether the base operating-system components are working as expected; otherwise, further investigation may be required. Using the same Redis OSS `PING` command with `strace`, we get:

  ```
  $ echo ping | strace -f -tttt -r -e trace=execve,socket,open,recvfrom,sendto nc example.xxxxxx.ng.0001.use1.cache.amazonaws.com 6379
  1609430221.697712 (+ 0.000000) execve("/usr/bin/nc", ["nc", "example.xxxxxx.ng.0001.use"..., "6379"], 0x7fffede7cc38 /* 22 vars */) = 0
  1609430221.708955 (+ 0.011231) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
  1609430221.709084 (+ 0.000124) socket(AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
  1609430221.709258 (+ 0.000173) open("/etc/nsswitch.conf", O_RDONLY|O_CLOEXEC) = 3
  1609430221.709637 (+ 0.000378) open("/etc/host.conf", O_RDONLY|O_CLOEXEC) = 3
  1609430221.709923 (+ 0.000286) open("/etc/resolv.conf", O_RDONLY|O_CLOEXEC) = 3
  1609430221.711365 (+ 0.001443) open("/etc/hosts", O_RDONLY|O_CLOEXEC) = 3
  1609430221.713293 (+ 0.001928) socket(AF_INET, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 3
  1609430221.717419 (+ 0.004126) recvfrom(3, "\362|\201\200\0\1\0\2\0\0\0\0\rnotls20201224\6tihew"..., 2048, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("172.31.0.2")}, [28->16]) = 155
  1609430221.717890 (+ 0.000469) recvfrom(3, "\204\207\201\200\0\1\0\1\0\0\0\0\rnotls20201224\6tihew"..., 65536, 0, {sa_family=AF_INET, sin_port=htons(53), sin_addr=inet_addr("172.31.0.2")}, [28->16]) = 139
  1609430221.745659 (+ 0.027772) socket(AF_INET, SOCK_STREAM, IPPROTO_TCP) = 3
  1609430221.747548 (+ 0.001887) recvfrom(0, 0x7ffcf2f2ca50, 8192, 0, 0x7ffcf2f2c9d0, [128]) = -1 ENOTSOCK (Socket operation on non-socket)
  1609430221.747858 (+ 0.000308) sendto(3, "ping\n", 5, 0, NULL, 0) = 5
  1609430221.748048 (+ 0.000188) recvfrom(0, 0x7ffcf2f2ca50, 8192, 0, 0x7ffcf2f2c9d0, [128]) = -1 ENOTSOCK (Socket operation on non-socket)
  1609430221.748330 (+ 0.000282) recvfrom(3, "+PONG\r\n", 8192, 0, 0x7ffcf2f2c9d0, [128->0]) = 7
  +PONG
  1609430221.748543 (+ 0.000213) recvfrom(3, "", 8192, 0, 0x7ffcf2f2c9d0, [128->0]) = 0
  1609430221.752110 (+ 0.003569) +++ exited with 0 +++
  ```

   In the example above, the command took a little more than 54 milliseconds to complete (752110 - 697712 = 54398 microseconds).

  A significant amount of time, approximately 20 ms, was taken to instantiate `nc` and do the name resolution (from 697712 to 717890). After that, 2 ms were required to create the TCP socket (745659 to 747858), and 0.4 ms (747858 to 748330) to submit the request and receive its response.
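The arithmetic in both walkthroughs can be reproduced directly from the captured timestamps (microsecond fractions taken from the `tcpdump` and `strace` outputs above):

```python
# tcpdump capture: TCP handshake, command round trip, full request lifetime
tcp_handshake  = 918091 - 917869   # SYN sent to final ACK: 222 µs
ping_roundtrip = 918295 - 918122   # PING pushed to PONG received: 173 µs
total_request  = 918307 - 917869   # first packet to connection close: 438 µs

# strace run: overall duration and the name-resolution share of it
strace_total    = 752110 - 697712  # 54,398 µs (~54 ms) for the whole nc run
name_resolution = 717890 - 697712  # ~20 ms spent on exec plus DNS lookup
```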

## Related Topics
<a name="wwe-troubleshooting.related"></a>
+ [ElastiCache best practices and caching strategies](BestPractices.md)