

# Using Amazon Aurora Global Database

 With the Amazon Aurora Global Database feature, you set up multiple Aurora DB clusters that span multiple AWS Regions. Aurora automatically synchronizes all changes made in the primary DB cluster to one or more secondary clusters. An Aurora global database has a primary DB cluster in one Region, and up to 10 secondary DB clusters in different Regions. This multi-Region configuration provides fast recovery from the rare outage that might affect an entire AWS Region. Having a full copy of all your data in multiple geographic locations also enables low-latency read operations for applications that connect from widely separated locations around the world. 

**Topics**
+ [Overview of Amazon Aurora Global Database](#aurora-global-database-overview)
+ [Advantages of Amazon Aurora Global Database](#aurora-global-database.advantages)
+ [Region and version availability](#aurora-global-database.Availability)
+ [Limitations of Amazon Aurora Global Database](#aurora-global-database.limitations)
+ [Getting started with Amazon Aurora Global Database](aurora-global-database-getting-started.md)
+ [Managing an Amazon Aurora global database](aurora-global-database-managing.md)
+ [Connecting to Amazon Aurora Global Database](aurora-global-database-connecting.md)
+ [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md)
+ [Using switchover or failover in Amazon Aurora Global Database](aurora-global-database-disaster-recovery.md)
+ [Monitoring an Amazon Aurora global database](aurora-global-database-monitoring.md)
+ [Using Blue/Green Deployments for Amazon Aurora Global Database](aurora-global-database-bluegreen.md)
+ [Using Amazon Aurora global databases with other AWS services](aurora-global-database-interop.md)
+ [Upgrading an Amazon Aurora global database](aurora-global-database-upgrade.md)

## Overview of Amazon Aurora Global Database

By using the Amazon Aurora Global Database feature, you can run your globally distributed applications using a single Aurora database that spans multiple AWS Regions.

An Aurora global database consists of one *primary* AWS Region where your data is written, and up to 10 read-only *secondary* AWS Regions. You issue write operations to the primary DB cluster in the primary AWS Region. The most convenient way to do so is to connect to the Aurora Global Database writer endpoint, which always points to the primary DB cluster, even after a switchover or failover to a different AWS Region. After any write operation, Aurora replicates data to the secondary AWS Regions using dedicated infrastructure, with latency typically under a second. 

In the following diagram, you can find an example Aurora global database that spans two AWS Regions.

![\[An Aurora global database has a single primary and at least one secondary Aurora DB clusters.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-databases-conceptual-illo.png)


You can scale each secondary cluster independently by adding one or more Aurora reader instances to serve read-only workloads. You can use Aurora Serverless v2 for the reader instances for even more granular and flexible scaling.

Only the primary cluster performs write operations. Clients that perform write operations connect to the Aurora Global Database writer endpoint, which always points to the writer DB instance of the primary cluster. As shown in the diagram, Aurora uses the cluster storage volume and not the database engine for fast, low-overhead replication. To learn more, see [Overview of Amazon Aurora storage](Aurora.Overview.StorageReliability.md#Aurora.Overview.Storage). 
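As a quick way to see this topology from the command line, you can list the member clusters of a global database and check which one is currently the writer. This is a sketch; `my-global-db` is a placeholder identifier:

```shell
# List the member clusters of a global database and show which one
# accepts writes. "my-global-db" is a placeholder identifier.
aws rds describe-global-clusters \
    --global-cluster-identifier my-global-db \
    --query 'GlobalClusters[0].GlobalClusterMembers[].{Cluster:DBClusterArn,Writer:IsWriter}'
```

The member whose `Writer` value is `true` is the primary cluster; after a switchover or failover, this output reflects the new topology.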

Aurora Global Database is designed for applications with a worldwide footprint. The read-only secondary DB clusters in multiple AWS Regions help to optimize read operations closer to application users. By using the write forwarding feature, you can also configure your global database so that secondary clusters send write requests to the primary. For more information, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md). 

Aurora Global Database supports two different operations for changing the Region of your primary DB cluster, depending on the scenario: *Aurora Global Database switchover* and *Aurora Global Database failover*.
+ For planned operational procedures such as Regional rotation, use the switchover mechanism (previously called "managed planned failover"). With this feature, you can relocate the primary cluster of a healthy Aurora Global Database to one of its secondary Regions with no data loss. To learn more, see [Performing switchovers for Amazon Aurora global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-disaster-recovery.managed-failover).
+ To recover your Aurora Global Database after an outage in the primary Region, use the failover mechanism. With this feature, you perform a failover from your primary DB cluster to another Region (cross-Region failover). To learn more, see [Performing managed failovers for Aurora global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-failover.managed-unplanned).
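Both operations are also available through the AWS CLI. The following sketch shows the two commands; the global cluster identifier and the target cluster ARN are placeholders:

```shell
# Planned switchover to a secondary Region (no data loss).
# Identifiers and the target cluster ARN are placeholders.
aws rds switchover-global-cluster \
    --global-cluster-identifier my-global-db \
    --target-db-cluster-identifier arn:aws:rds:us-west-2:123456789012:cluster:my-secondary

# Unplanned failover after an outage in the primary Region.
# --allow-data-loss acknowledges that writes not yet replicated
# to the target secondary cluster can be lost.
aws rds failover-global-cluster \
    --global-cluster-identifier my-global-db \
    --target-db-cluster-identifier arn:aws:rds:us-west-2:123456789012:cluster:my-secondary \
    --allow-data-loss
```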

## Advantages of Amazon Aurora Global Database


By using Aurora Global Database, you can get the following advantages: 
+ **Global reads with local latency** – If you have offices around the world, you can use Aurora Global Database to keep your main sources of information updated in the primary AWS Region. Offices in your other Regions can access the information in their own Region, with local latency. 
+ **Scalable secondary Aurora DB clusters** – You can scale your secondary clusters by adding more read-only instances to a secondary AWS Region. The secondary cluster is read-only, so it can support up to 16 read-only DB instances rather than the usual limit of 15 for a single Aurora cluster.
+ **Fast replication from primary to secondary Aurora DB clusters** – The replication performed by Aurora Global Database has little performance impact on the primary DB cluster. The resources of the DB instances are fully devoted to serving application read and write workloads.
+ **Recovery from Region-wide outages** – The secondary clusters allow you to make an Aurora Global Database available in a new primary AWS Region more quickly (lower RTO) and with less data loss (lower RPO) than traditional replication solutions. 

## Region and version availability

Feature availability and support vary across specific versions of each Aurora database engine, and across AWS Regions. For more information on version and Region availability with Aurora Global Database, see [Supported Regions and DB engines for Aurora global databases](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md). 

## Limitations of Amazon Aurora Global Database

The following limitations currently apply to Aurora Global Database:
+ Aurora Global Database is available in certain AWS Regions and for specific Aurora MySQL and Aurora PostgreSQL versions. For more information, see [Supported Regions and DB engines for Aurora global databases](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md).
+ Aurora Global Database has specific configuration requirements for supported Aurora DB instance classes, maximum number of AWS Regions, and so on. For more information, see [Configuration requirements of an Amazon Aurora global database](aurora-global-database.configuration.requirements.md).
+ For Aurora MySQL with MySQL 5.7 compatibility, Aurora Global Database switchovers require version 2.09.1 or a higher minor version.
+ You can perform managed cross-Region switchovers or failovers with Aurora Global Database only if the primary and secondary DB clusters have the same major and minor engine versions. Depending on the engine and engine versions, the patch levels might need to be identical or the patch levels can be different. For a list of engines and engine versions that allow these operations between primary and secondary clusters with different patch levels, see [Patch level compatibility for managed cross-Region switchovers and failovers](aurora-global-database-upgrade.md#aurora-global-database-upgrade.minor.incompatibility). If your engine versions require identical patch levels, you can perform the failover manually by following the steps in [Performing manual failovers for Aurora global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-failover.manual-unplanned). 
+ Aurora Global Database currently doesn't support the following Aurora features: 
  + Aurora Serverless v1
  + Backtracking in Aurora
+ For limitations with using the RDS Proxy feature with Aurora Global Database, see [Limitations for RDS Proxy with global databases](rds-proxy-gdb.md#rds-proxy-gdb.limitations).
+ Automatic minor version upgrade doesn't apply to Aurora MySQL and Aurora PostgreSQL clusters that are part of a global database. Note that you can specify this setting for a DB instance that is part of a global database cluster, but the setting has no effect.
+ Aurora Global Database currently doesn't support Aurora Auto Scaling for secondary DB clusters.
+ To use Database Activity Streams (DAS) on Aurora Global Database running Aurora MySQL 5.7, the engine version must be version 2.08 or higher. For information about DAS, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).
+ The following limitations currently apply to upgrading Aurora Global Database:
  + You can't apply a custom parameter group to the global database cluster while you're performing a major version upgrade of that Aurora global database. You create your custom parameter groups in each Region of the global cluster and you apply them manually to the Regional clusters after the upgrade.
  + With an Aurora global database based on Aurora MySQL, you can't perform an in-place upgrade from Aurora MySQL version 2 to version 3 if the `lower_case_table_names` parameter is turned on. For more information on the methods that you can use, see [Major version upgrades](aurora-global-database-upgrade.md#aurora-global-database-upgrade.major).
  + With Aurora Global Database, you can't perform a major version upgrade of the Aurora PostgreSQL DB engine if the recovery point objective (RPO) feature is turned on. For information about the RPO feature, see [Managing RPOs for Aurora PostgreSQL–based global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-manage-recovery).
  + With an Aurora Global Database, you can't perform a minor version upgrade from Aurora MySQL version 3.01 or 3.02 to 3.03 or higher by using the standard process. For details about the process to use, see [Upgrading Aurora MySQL by modifying the engine version](AuroraMySQL.Updates.Patching.ModifyEngineVersion.md).

  For information about upgrading Aurora Global Database, see [Upgrading an Amazon Aurora global database](aurora-global-database-upgrade.md).
+ You can't stop or start the Aurora DB clusters in your global database individually. To learn more, see [Stopping and starting an Amazon Aurora DB cluster](aurora-cluster-stop-start.md). 
+ Aurora reader DB instances attached to the secondary Aurora DB cluster can restart under certain circumstances. If the primary AWS Region's writer DB instance undergoes a restart or failover, reader DB instances in secondary Regions also restart. The secondary cluster is then unavailable until all reader DB instances are back in sync with the primary DB cluster's writer instance. The behavior of the primary cluster when rebooting or during a failover is the same as for a singular, nonglobal DB cluster. For more information, see [Replication with Amazon Aurora](Aurora.Replication.md).

  Be sure that you understand the impacts to your global database before making changes to your primary DB cluster. To learn more, see [Recovering an Amazon Aurora global database from an unplanned outage](aurora-global-database-disaster-recovery.md#aurora-global-database-failover).
+ Aurora Global Database currently doesn't support the `inaccessible-encryption-credentials-recoverable` status when Amazon Aurora loses access to the AWS KMS key for the DB cluster. In these cases, the encrypted DB cluster goes directly into the terminal `inaccessible-encryption-credentials` state. For more information about these states, see [Viewing DB cluster status](accessing-monitoring.md#Aurora.Status).
+ Secrets Manager doesn't support Aurora Global Database. When you add a Region to a global database, you must first turn off Secrets Manager integration for the DB instance.
+ Aurora PostgreSQL–based DB clusters that use Aurora Global Database have the following limitations:
  + Cluster cache management isn't supported for Aurora PostgreSQL secondary DB clusters that are part of Aurora global databases.
  + If the primary DB cluster of your global database is based on a replica of an Amazon RDS PostgreSQL instance, you can't create a secondary cluster. Don't attempt to create a secondary from that cluster using the AWS Management Console, the AWS CLI, or the `CreateDBCluster` API operation. Attempts to do so time out, and the secondary cluster isn't created.

We recommend that you create secondary DB clusters for your global databases by using the same version of the Aurora DB engine as the primary. For more information, see [Creating an Amazon Aurora global database](aurora-global-database-creating.md).

# Getting started with Amazon Aurora Global Database

To get started with Aurora Global Database, first decide which Aurora DB engine you want to use and in which AWS Regions. Only specific versions of the Aurora MySQL and Aurora PostgreSQL database engines in certain AWS Regions support Aurora Global Database. For the complete list, see [Supported Regions and DB engines for Aurora global databases](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md). 
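You can also check programmatically which engine versions support global databases in a given Region by filtering on the `SupportsGlobalDatabases` field that `describe-db-engine-versions` returns. A sketch, with the Region as a placeholder:

```shell
# List Aurora MySQL engine versions in us-east-1 that support
# global databases. Swap the engine for aurora-postgresql as needed.
aws rds describe-db-engine-versions \
    --region us-east-1 \
    --engine aurora-mysql \
    --query 'DBEngineVersions[?SupportsGlobalDatabases==`true`].EngineVersion'
```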

You can create an Aurora Global Database in one of the following ways:
+ **Create a new Aurora Global Database with new Aurora DB clusters and Aurora DB instances** – You can do this by following the steps in [Creating an Amazon Aurora global database](aurora-global-database-creating.md). After you create the primary Aurora DB cluster, you then add the secondary AWS Region by following the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 
+ **Use an existing Aurora DB cluster that supports the Aurora Global Database feature and add an AWS Region to it** – You can do this only if your existing Aurora DB cluster uses a DB engine version that is global-compatible.

  Check whether you can choose **Add region** for **Action** on the AWS Management Console when your Aurora DB cluster is selected. If you can, you can use that Aurora DB cluster for your Aurora global cluster. For more information, see [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

Before creating an Aurora Global Database, we recommend that you understand all configuration requirements.

**Topics**
+ [Configuration requirements of an Amazon Aurora global database](aurora-global-database.configuration.requirements.md)
+ [Creating an Amazon Aurora global database](aurora-global-database-creating.md)
+ [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md)
+ [Creating a headless Aurora DB cluster in a secondary Region](aurora-global-database-attach.console.headless.md)
+ [Creating an Amazon Aurora global database from an Aurora or Amazon RDS snapshot](aurora-global-database.use-snapshot.md)

# Configuration requirements of an Amazon Aurora global database

Before you start working with your global database, plan the cluster and instance names, the AWS Regions, the number of instances, and the instance classes that you intend to use. Make sure that your choices meet the following configuration requirements.

An Aurora global database spans at least two AWS Regions. The primary AWS Region supports an Aurora DB cluster that has one writer Aurora DB instance. A secondary AWS Region runs a read-only Aurora DB cluster made up entirely of Aurora Replicas. At least one secondary AWS Region is required, but an Aurora global database can have up to 10 secondary AWS Regions. The table lists the maximum Aurora DB clusters, Aurora DB instances, and Aurora Replicas allowed in an Aurora global database. 


| Description | Primary AWS Region | Secondary AWS Regions | 
| --- | --- | --- | 
| Aurora DB clusters | 1 | 10 (maximum)  | 
| Writer instances | 1 | 0 | 
| Read-only instances (Aurora replicas), per Aurora DB cluster | 15 (max) | 16 (total) | 
| Read-only instances (max allowed, given actual number of secondary Regions) | 15 - *s* | *s* = total number of secondary AWS Regions  | 

The Aurora DB clusters that make up an Aurora global database have the following specific requirements:
+ **DB instance class requirements** – An Aurora global database requires DB instance classes that are optimized for memory-intensive applications. For information about the memory optimized DB instance classes, see [DB instance class types](Concepts.DBInstanceClass.Types.md). We recommend that you use a db.r5 or higher instance class.
+ **AWS Region requirements** – An Aurora global database needs a primary Aurora DB cluster in one AWS Region, and at least one secondary Aurora DB cluster in a different Region. You can create up to 10 secondary (read-only) Aurora DB clusters, and each must be in a different Region. In other words, no two Aurora DB clusters in an Aurora global database can be in the same AWS Region.

   For information about which combinations of Aurora DB engine, engine version, and AWS Region you can use with Aurora Global Database, see [Supported Regions and DB engines for Aurora global databases](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md). 
+ **Naming requirements** – The names you choose for each of your Aurora DB clusters must be unique, across all AWS Regions. You can't use the same name for different Aurora DB clusters even though they're in different Regions.
+ **Capacity requirements for Aurora Serverless v2** – For a global database with Aurora Serverless v2, the [minimum recommended capacity](aurora-serverless-v2.setting-capacity.md#aurora-serverless-v2.min_capacity_considerations) for the DB cluster in the primary AWS Region is 8 ACUs.

Before you can follow the procedures in this section, you need an AWS account. Complete the setup tasks for working with Amazon Aurora. For more information, see [Setting up your environment for Amazon Aurora](CHAP_SettingUp_Aurora.md). You also need to complete other preliminary steps for creating any Aurora DB cluster. To learn more, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md). 

 When you are ready to set up your global database, see [Creating an Amazon Aurora global database](aurora-global-database-creating.md) for the procedure to create all the necessary resources. You can also follow the procedure in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md) to create a global database using an existing Aurora cluster as the primary cluster. 

# Creating an Amazon Aurora global database

To create an Aurora global database and its associated resources by using the AWS Management Console, the AWS CLI, or the RDS API, use the following steps.

**Note**  
 If you have an existing Aurora DB cluster running an Aurora database engine that's global-compatible, you can use a shorter form of this procedure. In that case, you can add another AWS Region to the existing DB cluster to create your Aurora global database. To do so, see [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

## Console


The steps for creating an Aurora global database begin by signing in to an AWS Region that supports the Aurora global database feature. For a complete list, see [Supported Regions and DB engines for Aurora global databases](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md).

One of the following steps is choosing a virtual private cloud (VPC) based on Amazon VPC for your Aurora DB cluster. To use your own VPC, we recommend that you create it in advance so it's available for you to choose. At the same time, create any related subnets, and as needed a subnet group and security group. To learn how, see [Tutorial: Create a VPC for use with a DB cluster (IPv4 only)](CHAP_Tutorials.WebServerDB.CreateVPC.md). 

For general information about creating an Aurora DB cluster, see [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

**To create an Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Create database**. On the **Create database** page, do the following: 
   + For the database creation method, choose **Standard create**. (Don't choose Easy create.)
   + For **Engine type** in the **Engine options** section, choose the applicable engine type, **Aurora (MySQL Compatible)** or **Aurora (PostgreSQL Compatible)**.

1. Continue creating your Aurora global database by using the steps from the following procedures.

### Creating a global database using Aurora MySQL


The following steps apply to all versions of Aurora MySQL. 

**To create an Aurora global database using Aurora MySQL**

Complete the **Create database** page.

1. For **Engine options**, choose the following:

   1. For **Engine version**, choose the version of Aurora MySQL that you want to use for your Aurora global database.

1. For **Templates**, choose **Production**. Or you can choose Dev/Test if appropriate for your use case. Don't use Dev/Test in production environments. 

1. For **Settings**, do the following:

   1. Enter a meaningful name for the DB cluster identifier. When you finish creating the Aurora global database, this name identifies the primary DB cluster. 

   1. Enter your own password for the `admin` user account for the DB instance, or have Aurora generate one for you. If you choose to autogenerate a password, you get an option to copy the password.  
![\[Screenshot of Settings choices when creating a global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-ams-3.png)

1. For **DB instance class**, choose `db.r5.large` or another memory optimized DB instance class. We recommend that you use a db.r5 or higher instance class.

1. For **Availability & durability**, we recommend that you choose to have Aurora create an Aurora Replica in a different Availability Zone (AZ) for you. If you don't create an Aurora Replica now, you need to do it later.  
![\[Screenshot of Availability & durability.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-ams-4b.png)

1. For **Connectivity**, choose the virtual private cloud (VPC) based on Amazon VPC that defines the virtual networking environment for this DB instance. You can choose the defaults to simplify this task. 

1. Complete the **Database authentication** settings. To simplify the process, you can choose **Password authentication** now and set up AWS Identity and Access Management (IAM) later.

1. For **Additional configuration**, do the following:

   1. Enter a name for **Initial database name** to create the primary Aurora DB instance for this cluster. This is the writer node for the Aurora primary DB cluster. 

      Leave the defaults selected for the DB cluster parameter group and DB parameter group, unless you have your own custom parameter groups that you want to use. 

   1.  Clear the **Enable backtrack** check box if it's selected. Aurora global databases don't support backtracking. Otherwise, accept the other default settings for **Additional configuration**. 

1. Choose **Create database**.

   It can take several minutes for Aurora to complete the process of creating the Aurora DB instance, its Aurora Replica, and the Aurora DB cluster. The Aurora DB cluster is ready to use as the primary DB cluster in an Aurora global database when the status of the cluster and of its writer and replica nodes is **Available**, as shown following.  
![\[Screenshot of Databases with an Aurora DB cluster ready to use for Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-ams-5.png)

When your primary DB cluster is available, create the Aurora global database by adding a secondary cluster to it. To do this, follow the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

### Creating a global database using Aurora PostgreSQL


**To create an Aurora global database using Aurora PostgreSQL**

Complete the **Create database** page.

1. For **Engine options**, choose the following:

   1. For **Engine version**, choose the version of Aurora PostgreSQL that you want to use for your Aurora global database.

1. For **Templates**, choose **Production**. Or you can choose Dev/Test if appropriate. Don't use Dev/Test in production environments. 

1. For **Settings**, do the following:

   1. Enter a meaningful name for the DB cluster identifier. When you finish creating the Aurora global database, this name identifies the primary DB cluster. 

   1. Enter your own password for the default admin account for the DB cluster, or have Aurora generate one for you. If you choose Auto generate a password, you get an option to copy the password.  
![\[Screenshot of Settings choices when creating a global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-apg-2.png)

1. For **DB instance class**, choose `db.r5.large` or another memory optimized DB instance class. We recommend that you use a db.r5 or higher instance class. 

1. For **Availability & durability**, we recommend that you choose to have Aurora create an Aurora Replica in a different AZ for you. If you don't create an Aurora Replica now, you need to do it later. 

1. For **Connectivity**, choose the virtual private cloud (VPC) based on Amazon VPC that defines the virtual networking environment for this DB instance. You can choose the defaults to simplify this task. 

1. (Optional) Complete the **Database authentication** settings. Password authentication is always enabled. To simplify the process, you can skip this section and set up IAM or password and Kerberos authentication later.

1. For **Additional configuration**, do the following:

   1. Enter a name for **Initial database name** to create the primary Aurora DB instance for this cluster. This is the writer node for the Aurora primary DB cluster. 

      Leave the defaults selected for the DB cluster parameter group and DB parameter group, unless you have your own custom parameter groups that you want to use. 

   1. Accept all other default settings for **Additional configuration**, such as Encryption, Log exports, and so on.

1. Choose **Create database**. 

   It can take several minutes for Aurora to complete the process of creating the Aurora DB instance, its Aurora Replica, and the Aurora DB cluster. When the cluster is ready to use, the Aurora DB cluster and its writer and replica nodes display **Available** status. This becomes the primary DB cluster of your Aurora global database, after you add a secondary.  
![\[Screenshot of Databases with an Aurora DB cluster ready to use for Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-apg-5-add-region.png)

When your primary DB cluster is available, create one or more secondary clusters by following the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

## AWS CLI


The AWS CLI commands in the following procedures accomplish these tasks: 

1. Create an Aurora global database, giving it a name and specifying the Aurora database engine type that you plan to use. 

1. Create an Aurora DB cluster for the Aurora global database. 

1. Create the Aurora DB instance for the cluster. This is the primary Aurora DB cluster for the global database.

1. Create a second DB instance for the Aurora DB cluster. This is a reader instance that completes the Aurora DB cluster. 

1. Create a second Aurora DB cluster in another Region and then add it to your Aurora global database, by following the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

Follow the procedure for your Aurora database engine.

### Creating a global database using Aurora MySQL


**To create an Aurora global database using Aurora MySQL**

1. Use the [`create-global-cluster`](https://docs.aws.amazon.com/cli/latest/reference/rds/create-global-cluster.html) CLI command, passing the name of the AWS Region, the Aurora database engine, and the version.

   For Linux, macOS, or Unix:

   ```
   aws rds create-global-cluster --region primary_region \
       --global-cluster-identifier global_database_id \
       --engine aurora-mysql \
       --engine-version version # optional
   ```

   For Windows:

   ```
   aws rds create-global-cluster --region primary_region ^
       --global-cluster-identifier global_database_id ^
       --engine aurora-mysql ^
       --engine-version version # optional
   ```

   This creates an "empty" Aurora global database with just a name (identifier) and an Aurora database engine. It can take a few minutes for the Aurora global database to become available. Before going to the next step, use the [`describe-global-clusters`](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-global-clusters.html) CLI command to check whether it's available.

   ```
   aws rds describe-global-clusters --region primary_region --global-cluster-identifier global_database_id
   ```

   When the Aurora global database is available, you can create its primary Aurora DB cluster. 

1. To create a primary Aurora DB cluster, use the [`create-db-cluster`](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) CLI command. Include the name of your Aurora global database by using the `--global-cluster-identifier` parameter.

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-cluster \
     --region primary_region \
     --db-cluster-identifier primary_db_cluster_id \
     --master-username userid \
     --master-user-password password \
     --engine aurora-mysql \
     --engine-version version \
     --global-cluster-identifier global_database_id
   ```

   For Windows:

   ```
   aws rds create-db-cluster ^
     --region primary_region ^
     --db-cluster-identifier primary_db_cluster_id ^
     --master-username userid ^
     --master-user-password password ^
     --engine aurora-mysql ^
     --engine-version version ^
     --global-cluster-identifier global_database_id
   ```

   Use the [`describe-db-clusters`](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) AWS CLI command to confirm that the Aurora DB cluster is ready. To single out a specific Aurora DB cluster, use the `--db-cluster-identifier` parameter. Or you can leave out the Aurora DB cluster name to get details about all of your Aurora DB clusters in the given Region. 

   ```
   aws rds describe-db-clusters --region primary_region --db-cluster-identifier primary_db_cluster_id
   ```

   When the response shows `"Status": "available"` for the cluster, it's ready to use.

1. Create the DB instance for your primary Aurora DB cluster. To do so, use the `[create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)` CLI command. Give the command your Aurora DB cluster's name, and specify the configuration details for the instance. You don't need to pass the `--master-username` and `--master-user-password` parameters, because the DB instance inherits them from the Aurora DB cluster.

   For `--db-instance-class`, you can use only memory-optimized DB instance classes, such as `db.r5.large`. We recommend a db.r5 or higher instance class. For information about these classes, see [DB instance class types](Concepts.DBInstanceClass.Types.md). 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-instance \
     --db-cluster-identifier primary_db_cluster_id \
     --db-instance-class instance_class \
     --db-instance-identifier db_instance_id \
     --engine aurora-mysql \
     --engine-version version \
     --region primary_region
   ```

   For Windows:

   ```
   aws rds create-db-instance ^
     --db-cluster-identifier primary_db_cluster_id ^
     --db-instance-class instance_class ^
     --db-instance-identifier db_instance_id ^
     --engine aurora-mysql ^
     --engine-version version ^
     --region primary_region
   ```

    The `create-db-instance` operation might take some time to complete. Check that the Aurora DB instance is available before continuing. 

   ```
   aws rds describe-db-instances --db-instance-identifier db_instance_id
   ```

    When the command returns a status of `available`, you can create another Aurora DB instance for your primary DB cluster. This is the reader instance (the Aurora Replica) for the Aurora DB cluster. 

1.  To create another Aurora DB instance for the cluster, use the `[create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)` CLI command. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-instance \
     --db-cluster-identifier primary_db_cluster_id \
     --db-instance-class instance_class \
     --db-instance-identifier replica_db_instance_id \
     --engine aurora-mysql
   ```

   For Windows:

   ```
   aws rds create-db-instance ^
     --db-cluster-identifier primary_db_cluster_id ^
     --db-instance-class instance_class ^
     --db-instance-identifier replica_db_instance_id ^
     --engine aurora-mysql
   ```

 When the DB instance is available, replication begins from the writer node to the replica. Before continuing, check that the DB instance is available with the `[describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html)` CLI command. 

 At this point, you have an Aurora global database with its primary Aurora DB cluster containing a writer DB instance and an Aurora Replica. You can now add a read-only Aurora DB cluster in a different Region to complete your Aurora global database. To do so, follow the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

### Creating a global database using Aurora PostgreSQL


 When you create Aurora objects for an Aurora global database by using the following commands, it can take a few minutes for each to become available. We recommend that after completing any given command, you check the specific Aurora object's status to make sure that the status is available. 

 To do so, use the `[describe-global-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-global-clusters.html)` CLI command. 

```
aws rds describe-global-clusters --region primary_region \
    --global-cluster-identifier global_database_id
```

**To create an Aurora global database using Aurora PostgreSQL**

1. Use the `[create-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-global-cluster.html)` CLI command. The `--engine-version` parameter is optional.

   For Linux, macOS, or Unix:

   ```
   aws rds create-global-cluster --region primary_region \
       --global-cluster-identifier global_database_id \
       --engine aurora-postgresql \
       --engine-version version
   ```

   For Windows:

   ```
   aws rds create-global-cluster --region primary_region ^
       --global-cluster-identifier global_database_id ^
       --engine aurora-postgresql ^
       --engine-version version
   ```

   When the Aurora global database is available, you can create its primary Aurora DB cluster.

1.  To create a primary Aurora DB cluster, use the `[create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)` CLI command. Include the name of your Aurora global database by using the `--global-cluster-identifier` parameter. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-cluster \
     --region primary_region \
     --db-cluster-identifier primary_db_cluster_id \
     --master-username userid \
     --master-user-password password \
     --engine aurora-postgresql \
     --engine-version version \
     --global-cluster-identifier global_database_id
   ```

   For Windows:

   ```
   aws rds create-db-cluster ^
     --region primary_region ^
     --db-cluster-identifier primary_db_cluster_id ^
     --master-username userid ^
     --master-user-password password ^
     --engine aurora-postgresql ^
     --engine-version version ^
     --global-cluster-identifier global_database_id
   ```

   Check that the Aurora DB cluster is ready. When the response from the following command shows `"Status": "available"` for the Aurora DB cluster, you can continue. 

   ```
   aws rds describe-db-clusters --region primary_region --db-cluster-identifier primary_db_cluster_id
   ```

1. Create the DB instance for your primary Aurora DB cluster. To do so, use the `[create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)` CLI command. 

   Pass the name of your Aurora DB cluster with the `--db-cluster-identifier` parameter.

   You don't need to pass the `--master-username` and `--master-user-password` parameters, because the DB instance inherits them from the Aurora DB cluster.

   For `--db-instance-class`, you can use only memory-optimized DB instance classes, such as `db.r5.large`. We recommend a db.r5 or higher instance class. For information about these classes, see [DB instance class types](Concepts.DBInstanceClass.Types.md). 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-instance \
     --db-cluster-identifier primary_db_cluster_id \
     --db-instance-class instance_class \
     --db-instance-identifier db_instance_id \
     --engine aurora-postgresql \
     --engine-version version \
     --region primary_region
   ```

   For Windows:

   ```
   aws rds create-db-instance ^
     --db-cluster-identifier primary_db_cluster_id ^
     --db-instance-class instance_class ^
     --db-instance-identifier db_instance_id ^
     --engine aurora-postgresql ^
     --engine-version version ^
     --region primary_region
   ```

1.  Check the status of the Aurora DB instance before continuing. 

   ```
   aws rds describe-db-instances --db-instance-identifier db_instance_id
   ```

    If the response shows that the Aurora DB instance status is `available`, you can create another Aurora DB instance for your primary DB cluster. 

1.  To create an Aurora Replica for the Aurora DB cluster, use the `[create-db-instance](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-instance.html)` CLI command. 

   For Linux, macOS, or Unix:

   ```
   aws rds create-db-instance \
     --db-cluster-identifier primary_db_cluster_id \
     --db-instance-class instance_class \
     --db-instance-identifier replica_db_instance_id \
     --engine aurora-postgresql
   ```

   For Windows:

   ```
   aws rds create-db-instance ^
     --db-cluster-identifier primary_db_cluster_id ^
     --db-instance-class instance_class ^
     --db-instance-identifier replica_db_instance_id ^
     --engine aurora-postgresql
   ```

 When the DB instance is available, replication begins from the writer node to the replica. Before continuing, check that the DB instance is available with the `[describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html)` CLI command. 

Your Aurora global database exists, but it has only its primary Region with an Aurora DB cluster made up of a writer DB instance and an Aurora Replica. You can now add a read-only Aurora DB cluster in a different Region to complete your Aurora global database. To do so, follow the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

## RDS API


 To create an Aurora global database with the RDS API, run the [CreateGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateGlobalCluster.html) operation. 

# Adding an AWS Region to an Amazon Aurora global database
Adding a secondary cluster

 You can use the following procedure to add an additional secondary cluster to an existing global database. You can also create a global database from a standalone Aurora DB cluster by using this procedure to add the first secondary AWS Region. 

An Aurora global database needs at least one secondary Aurora DB cluster in a different AWS Region than the primary Aurora DB cluster. You can attach up to 10 secondary DB clusters to your Aurora global database. Repeat the following procedure for each new secondary DB cluster. Each secondary DB cluster that you add reduces by one the number of Aurora Replicas that the primary DB cluster can have.

For example, if your Aurora global database has 10 secondary Regions, your primary DB cluster can have only 5 (rather than 15) Aurora Replicas. For more information, see [Configuration requirements of an Amazon Aurora global database](aurora-global-database.configuration.requirements.md).

The number of Aurora Replicas (reader instances) in the primary DB cluster determines the number of secondary DB clusters you can add. The total number of reader instances in the primary DB cluster plus the number of secondary clusters can't exceed 15. For example, if you have 14 reader instances in the primary DB cluster and 1 secondary cluster, you can't add another secondary cluster to the global database.
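This limit is simple arithmetic, so you can check it before trying to add a Region. A minimal sketch using the example numbers above (hypothetical values):

```shell
# Sketch of the capacity rule: reader instances in the primary cluster
# plus secondary clusters can't exceed 15.
readers_in_primary=14
secondary_clusters=1

remaining=$(( 15 - readers_in_primary - secondary_clusters ))
echo "additional secondary clusters allowed: $remaining"   # 0
```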

**Note**  
For Aurora MySQL version 3, when you create a secondary cluster, make sure that the value of `lower_case_table_names` matches the value in the primary cluster. This setting is a database parameter that affects how the server handles identifier case sensitivity. For more information about database parameters, see [Parameter groups for Amazon Aurora](USER_WorkingWithParamGroups.md).  
We recommend that when you create a secondary cluster, you use the same DB engine version for the primary and secondary. If necessary, upgrade the primary to be the same version as the secondary. For more information, see [Patch level compatibility for managed cross-Region switchovers and failovers](aurora-global-database-upgrade.md#aurora-global-database-upgrade.minor.incompatibility).
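For the `lower_case_table_names` check in the preceding note, one approach is to compare the value in the primary cluster's DB cluster parameter group with the group you plan to use for the secondary. A sketch using abridged, hypothetical `describe-db-cluster-parameters` responses:

```shell
# Abridged sample responses (hypothetical values); the real JSON would
# come from, for example:
#   aws rds describe-db-cluster-parameters \
#     --db-cluster-parameter-group-name my-cluster-params
primary='{"Parameters":[{"ParameterName":"lower_case_table_names","ParameterValue":"1"}]}'
secondary='{"Parameters":[{"ParameterName":"lower_case_table_names","ParameterValue":"1"}]}'

# Extract the lower_case_table_names value from a response.
get_lctn() {
  printf '%s' "$1" | python3 -c 'import sys, json
for p in json.load(sys.stdin)["Parameters"]:
    if p["ParameterName"] == "lower_case_table_names":
        print(p["ParameterValue"])'
}

if [ "$(get_lctn "$primary")" = "$(get_lctn "$secondary")" ]; then
  echo "lower_case_table_names matches"
else
  echo "mismatch: align the parameter groups before adding the Region"
fi
```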

## Console


**To add an AWS Region to an Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane of the AWS Management Console, choose **Databases**. 

1. Choose the Aurora global database that needs a secondary Aurora DB cluster. Ensure that the primary Aurora DB cluster is `Available`.

1.  For **Actions**, choose **Add AWS Region**.   
![\[Screenshot showing provisioned DB cluster with "Add AWS Region" chosen from the Actions menu.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-apg-5-add-region.png)

1. On the **Add a region** page, choose the secondary AWS Region. 

   You can't choose an AWS Region that already has a secondary Aurora DB cluster for the same Aurora global database. Also, it can't be the same Region as the primary Aurora DB cluster.
**Note**  
Babelfish for Aurora PostgreSQL global databases works in secondary Regions only if the parameters that control Babelfish preferences are turned on in those Regions. For more information, see [DB cluster parameter group settings for Babelfish](babelfish-configuration.md).  
![\[The Add a region page for an Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-create-apg-6-add-region.png)

1. Complete the remaining fields for the secondary Aurora cluster in the new AWS Region. These are the same configuration options as for any Aurora DB cluster instance, except for the following option for Aurora MySQL–based Aurora global databases only:
   + **Enable read replica write forwarding** – This optional setting lets your Aurora global database's secondary DB clusters forward write operations to the primary cluster. For more information, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md).   
![\[Screenshot showing the Enable read replica write forwarding option.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-database-enable-write-forwarding.png)

1. Choose **Add AWS Region**. 

After you finish adding the Region to your Aurora global database, you can see it in the list of **Databases** in the AWS Management Console as shown in the screenshot. 

![\[Screenshot showing the secondary cluster is now part of the Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-apg-complete.png)


## AWS CLI


**To add a secondary AWS Region to an Aurora global database**

 To add a secondary cluster to your global database using the CLI, you must already have the global cluster container object. If you haven't already run the `create-global-cluster` command, see the CLI procedure in [Creating an Amazon Aurora global database](aurora-global-database-creating.md). 

1. Use the `[create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)` CLI command with the name (`--global-cluster-identifier`) of your Aurora global database. For the other parameters, do the following: 
   + For `--region`, choose an AWS Region different from the Region of your primary Aurora DB cluster.
   + Choose specific values for the `--engine` and `--engine-version` parameters. These values are the same as those for the primary Aurora DB cluster in your Aurora global database. 
   + For an encrypted cluster, specify your primary AWS Region as the `--source-region` for encryption.

The following example creates a new Aurora DB cluster and attaches it to an Aurora global database as a read-only secondary Aurora DB cluster. In the last step, an Aurora DB instance is added to the new Aurora DB cluster.

For Linux, macOS, or Unix:

```
aws rds --region secondary_region \
  create-db-cluster \
    --db-cluster-identifier secondary_cluster_id \
    --global-cluster-identifier global_database_id \
    --engine aurora-mysql | aurora-postgresql \
    --engine-version version

aws rds --region secondary_region \
  create-db-instance \
    --db-instance-class instance_class \
    --db-cluster-identifier secondary_cluster_id \
    --db-instance-identifier db_instance_id \
    --engine aurora-mysql | aurora-postgresql
```

For Windows:

```
aws rds --region secondary_region ^
  create-db-cluster ^
    --db-cluster-identifier secondary_cluster_id ^
    --global-cluster-identifier global_database_id ^
    --engine aurora-mysql | aurora-postgresql ^
    --engine-version version

aws rds --region secondary_region ^
  create-db-instance ^
    --db-instance-class instance_class ^
    --db-cluster-identifier secondary_cluster_id ^
    --db-instance-identifier db_instance_id ^
    --engine aurora-mysql | aurora-postgresql
```

## RDS API


 To add a new AWS Region to an Aurora global database with the RDS API, run the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) operation. Specify the identifier of the existing global database using the `GlobalClusterIdentifier` parameter. 

# Creating a headless Aurora DB cluster in a secondary Region
Creating a headless secondary cluster

Although an Aurora global database requires at least one secondary Aurora DB cluster in a different AWS Region than the primary, you can use a *headless* configuration for the secondary cluster. A headless secondary Aurora DB cluster is one without a DB instance. This type of configuration can lower expenses for an Aurora global database. In an Aurora DB cluster, compute and storage are decoupled. Without the DB instance, you're not charged for compute, only for storage. If it's set up correctly, a headless secondary's storage volume is kept in sync with the primary Aurora DB cluster. 

You add the secondary cluster as you normally do when creating an Aurora global database. If you are creating all the clusters in the global database, follow the procedure in [Creating an Amazon Aurora global database](aurora-global-database-creating.md). If you already have a DB cluster to use as the primary cluster, follow the procedure in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

 After the primary Aurora DB cluster begins replication to the secondary, you delete the Aurora read-only DB instance from the secondary Aurora DB cluster. This secondary cluster is now considered "headless" because it no longer has a DB instance. Even without any DB instance in the secondary cluster, Aurora keeps the storage volume in sync with the primary Aurora DB cluster.
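One way to confirm that a secondary cluster really is headless is to check that its `DBClusterMembers` list is empty. A sketch using an abridged sample response (hypothetical identifier); the real JSON would come from `aws rds describe-db-clusters`:

```shell
# Abridged sample describe-db-clusters response for a headless cluster
# (hypothetical identifier).
response='{"DBClusters":[{"DBClusterIdentifier":"my-secondary","Status":"available","DBClusterMembers":[]}]}'

# Count the DB instances that belong to the cluster.
members=$(printf '%s' "$response" | python3 -c \
  'import sys, json; print(len(json.load(sys.stdin)["DBClusters"][0]["DBClusterMembers"]))')

if [ "$members" -eq 0 ]; then
  echo "cluster is headless (no DB instances)"
fi
```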

**Warning**  
 With Aurora PostgreSQL, to create a headless cluster in a secondary AWS Region, use the AWS CLI or RDS API to add the secondary AWS Region. Skip the step to create the reader DB instance for the secondary cluster. Currently, creating a headless cluster isn't supported in the RDS Console. For the CLI and API procedures to use, see [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md).  
 If your global database is using an Aurora PostgreSQL engine version lower than 13.4, 12.8, or 11.13, creating a reader DB instance in a secondary Region and subsequently deleting it could lead to an Aurora PostgreSQL vacuum issue on the primary Region's writer DB instance. If you encounter this issue, restart the primary Region's writer DB instance after you delete the secondary Region's reader DB instance.

**To add a headless secondary Aurora DB cluster to your Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane of the AWS Management Console, choose **Databases**. 

1. Choose the Aurora global database that needs a secondary Aurora DB cluster. Ensure that the primary Aurora DB cluster is `Available`.

1. For **Actions**, choose **Add AWS Region**. 

1. On the **Add a region** page, choose the secondary AWS Region. 

   You can't choose an AWS Region that already has a secondary Aurora DB cluster for the same Aurora global database. Also, it can't be the same Region as the primary Aurora DB cluster.

1. Complete the remaining fields for the secondary Aurora cluster in the new AWS Region. These are the same configuration options as for any Aurora DB cluster instance.

   For an Aurora MySQL–based Aurora global database, disregard the **Enable read replica write forwarding** option. This option has no function after you delete the reader instance.

1. Choose **Add AWS Region**. After you finish adding the Region to your Aurora global database, you can see it in the list of **Databases** in the AWS Management Console as shown in the screenshot.   
![\[Screenshot showing the secondary cluster with its reader instance is now part of the Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-headless-stage-1.png)

1. Check the status of the secondary Aurora DB cluster and its reader instance before continuing, by using the AWS Management Console or the AWS CLI. For example:

   ```
   aws rds describe-db-clusters --db-cluster-identifier secondary-cluster-id --query '*[].[Status]' --output text
   ```

   It can take several minutes for the status of a newly added secondary Aurora DB cluster to change from `creating` to `available`. When the Aurora DB cluster is available, you can delete the reader instance.
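   That wait can be scripted as a polling loop. In this sketch, `get_status` is a stand-in so the loop is self-contained; in practice it would run the `describe-db-clusters` query shown above:

   ```shell
   get_status() {
     # Stand-in; in practice, something like:
     #   aws rds describe-db-clusters --db-cluster-identifier secondary-cluster-id \
     #     --query 'DBClusters[0].Status' --output text
     echo "available"
   }

   # Poll until the cluster reports "available".
   until [ "$(get_status)" = "available" ]; do
     sleep 30
   done
   echo "secondary cluster is available"
   ```

   Recent AWS CLI versions also provide a built-in waiter, `aws rds wait db-cluster-available`, that encapsulates the same loop.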

1. Select the reader instance in the secondary Aurora DB cluster, and then choose **Delete**.  
![\[Screenshot showing the reader instance selected and ready to delete.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-headless-stage-2.png)

After deleting the reader instance, the secondary cluster remains part of the Aurora global database. It has no instance associated with it, as shown following.

![\[Screenshot showing the headless secondary DB cluster.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-headless-secondary.png)


You can use this headless secondary Aurora DB cluster to [manually recover your Amazon Aurora global database from an unplanned outage in the primary AWS Region](aurora-global-database-disaster-recovery.md#aurora-global-database-failover) if such an outage occurs. 

# Creating an Amazon Aurora global database from an Aurora or Amazon RDS snapshot
Creating a global database from a snapshot

You can restore a snapshot of an Aurora DB cluster or of an Amazon RDS DB instance to use as the starting point for your Aurora global database. You restore the snapshot and create a new Aurora provisioned DB cluster at the same time. You then add another AWS Region to the restored DB cluster, turning it into an Aurora global database. Any Aurora DB cluster that you create from a snapshot in this way becomes the primary cluster of your Aurora global database.

The snapshot that you use can be from a provisioned or an Aurora Serverless DB cluster.

During the restore process, choose the same DB engine type as the snapshot. For example, suppose that you want to restore a snapshot that was made from an Aurora Serverless DB cluster running Aurora PostgreSQL. In this case, you create an Aurora PostgreSQL DB cluster using that same Aurora DB engine and version. 

 The restored DB cluster assumes the role of primary cluster for the Aurora global database when you add an AWS Region to it. All data contained in this primary cluster is replicated to any secondary clusters that you add to your Aurora global database. 

![\[Screenshot showing the restore snapshot page for an Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-databases-restore-snapshot-01.png)


# Managing an Amazon Aurora global database
Managing an Aurora global database

You perform most management operations on the individual clusters that make up an Aurora global database. When you choose **Group related resources** on the **Databases** page in the console, you see the primary cluster and secondary clusters grouped under the associated global database. To find the AWS Regions where a global database's DB clusters are running, its Aurora DB engine and version, and its identifier, use its **Configuration** tab.

 The cross-Region database failover processes are available to Aurora global databases only, not for a single Aurora DB cluster. To learn more, see [Using switchover or failover in Amazon Aurora Global Database](aurora-global-database-disaster-recovery.md). 

 To recover an Aurora global database from an unplanned outage in its primary Region, see [Recovering an Amazon Aurora global database from an unplanned outage](aurora-global-database-disaster-recovery.md#aurora-global-database-failover). 

**Topics**
+ [

# Modifying an Amazon Aurora global database
](aurora-global-database-modifying.md)
+ [

# Modifying parameters for an Aurora global database
](aurora-global-database-modifying.parameters.md)
+ [

# Removing a cluster from an Amazon Aurora global database
](aurora-global-database-detaching.md)
+ [

# Deleting an Amazon Aurora global database
](aurora-global-database-deleting.md)
+ [

# Tagging Amazon Aurora Global Database resources
](aurora-global-database-tagging.md)

# Modifying an Amazon Aurora global database
Modifying an Aurora global database

The **Databases** page in the AWS Management Console lists all your Aurora global databases, showing the primary cluster and secondary clusters for each one. The Aurora global database has its own configuration settings. Specifically, it has AWS Regions associated with its primary and secondary clusters, as shown in the screenshot following.

![\[Screenshot showing a selected Aurora global database and its configuration settings in the AWS Management Console.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-global-database-configuration.png)


When you make changes to the Aurora global database, you have a chance to cancel changes, as shown in the following screenshot.

![\[Screenshot showing the page to modify settings for an Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-databases-modify-global-01.png)


When you choose **Continue**, you confirm the changes.

# Modifying parameters for an Aurora global database
Modifying global database parameters

You can configure the Aurora DB cluster parameter groups independently for each Aurora cluster within the Aurora global database. Most parameters work the same as for other kinds of Aurora clusters. We recommend that you keep settings consistent among all the clusters in a global database. Doing this helps to avoid unexpected behavior changes if you promote a secondary cluster to be the primary. 

For example, use the same settings for time zones and character sets to avoid inconsistent behavior if a different cluster takes over as the primary cluster. 

In an Aurora global database, the `aurora_enable_repl_bin_log_filtering` and `aurora_enable_replica_log_compression` configuration settings have no effect. 

# Removing a cluster from an Amazon Aurora global database
Removing a cluster from an Aurora global database

You can remove Aurora DB clusters from your Aurora global database for several different reasons. For example, you might want to remove an Aurora DB cluster from an Aurora global database if the primary cluster becomes degraded or isolated. It then becomes a standalone provisioned Aurora DB cluster that could be used to create a new Aurora global database. To learn more, see [Recovering an Amazon Aurora global database from an unplanned outage](aurora-global-database-disaster-recovery.md#aurora-global-database-failover). 

You also might want to remove Aurora DB clusters because you want to delete an Aurora global database that you no longer need. You can't delete the Aurora global database until after you remove (detach) all associated Aurora DB clusters, leaving the primary for last. For more information, see [Deleting an Amazon Aurora global database](aurora-global-database-deleting.md).

When an Aurora DB cluster is detached from the Aurora global database, it's no longer synchronized with the primary. It becomes a standalone provisioned Aurora DB cluster with full read/write capabilities.

You can remove Aurora DB clusters from your Aurora global database using the AWS Management Console, the AWS CLI, or the RDS API. 

## Console


**To remove an Aurora cluster from an Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the cluster on the **Databases** page.

1. For **Actions**, choose **Remove from Global**.   
![\[Screenshot showing selected Aurora DB cluster (secondary) and "Remove from global" Action.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-detach-secondary-01.png)

   You see a prompt to confirm that you want to detach the secondary from the Aurora global database.  
![\[Screenshot showing confirmation prompt to remove a secondary cluster from an Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-detach-secondary-02.png)

1. Choose **Remove and promote** to remove the cluster from the global database. 

The Aurora DB cluster is no longer serving as a secondary in the Aurora global database, and is no longer synchronized with the primary DB cluster. It is a standalone Aurora DB cluster with full read/write capability.

![\[Screenshot showing confirmation prompt to remove a secondary cluster from an Aurora global database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-detach-secondary-03.png)


After you remove or delete all secondary clusters, you can remove the primary cluster the same way. You can't detach (remove) the primary Aurora DB cluster from an Aurora global database until you remove all secondary clusters. 

The Aurora global database might remain in the **Databases** list, with zero Regions and AZs. You can delete it if you no longer want to use this Aurora global database. For more information, see [Deleting an Amazon Aurora global database](aurora-global-database-deleting.md). 

## AWS CLI


 To remove an Aurora cluster from an Aurora global database, run the [remove-from-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/remove-from-global-cluster.html) CLI command with the following parameters: 
+ `--global-cluster-identifier` – The name (identifier) of your Aurora global database. 
+ `--db-cluster-identifier` – The name of each Aurora DB cluster to remove from the Aurora global database. Remove all secondary Aurora DB clusters before removing the primary. 

 The following examples first remove a secondary cluster and then the primary cluster from an Aurora global database. 

For Linux, macOS, or Unix:

```
aws rds --region secondary_region \
  remove-from-global-cluster \
    --db-cluster-identifier secondary_cluster_ARN \
    --global-cluster-identifier global_database_id

aws rds --region primary_region \
  remove-from-global-cluster \
    --db-cluster-identifier primary_cluster_ARN \
    --global-cluster-identifier global_database_id
```

 Repeat the `remove-from-global-cluster --db-cluster-identifier secondary_cluster_ARN ` command for each secondary AWS Region in your Aurora global database. 

For Windows:

```
aws rds --region secondary_region ^
  remove-from-global-cluster ^
    --db-cluster-identifier secondary_cluster_ARN ^
    --global-cluster-identifier global_database_id

aws rds --region primary_region ^
  remove-from-global-cluster ^
    --db-cluster-identifier primary_cluster_ARN ^
    --global-cluster-identifier global_database_id
```

 Repeat the `remove-from-global-cluster --db-cluster-identifier secondary_cluster_ARN` command for each secondary AWS Region in your Aurora global database. 

## RDS API


 To remove an Aurora cluster from an Aurora global database with the RDS API, run the [RemoveFromGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RemoveFromGlobalCluster.html) action. 

# Deleting an Amazon Aurora global database
Deleting an Aurora global database

 Because an Aurora global database typically holds business-critical data, you can't delete the global database and its associated clusters in a single step. To delete an Aurora global database, do the following: 
+ Remove all secondary DB clusters from the Aurora global database. Each cluster becomes a standalone Aurora DB cluster. To learn how, see [Removing a cluster from an Amazon Aurora global database](aurora-global-database-detaching.md).
+ From each standalone Aurora DB cluster, delete all Aurora Replicas.
+ Remove the primary DB cluster from the Aurora global database. The primary cluster then becomes a standalone Aurora DB cluster.
+ From the Aurora primary DB cluster, first delete all Aurora Replicas, then delete the writer DB instance. 

 Deleting the writer instance from the newly standalone Aurora DB cluster also typically removes the Aurora DB cluster and the Aurora global database. 

 For more general information, see [Deleting a DB instance from an Aurora DB cluster](USER_DeleteCluster.md#USER_DeleteInstance). 
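
The deletion order described above can be sketched end to end with the AWS CLI. Everything in this sketch is a placeholder (identifiers, Regions, and instance names), and `--skip-final-snapshot` permanently discards the data, so adapt it carefully before use. Repeat the secondary-cluster commands for each secondary AWS Region:

```
# Detach a secondary cluster from the global database (repeat per secondary Region).
aws rds --region secondary_region remove-from-global-cluster \
  --db-cluster-identifier secondary_cluster_ARN \
  --global-cluster-identifier global_database_id

# Delete the DB instances of the now-standalone secondary cluster, then the cluster itself.
aws rds --region secondary_region delete-db-instance \
  --db-instance-identifier secondary_instance_id
aws rds --region secondary_region delete-db-cluster \
  --db-cluster-identifier secondary_cluster_id --skip-final-snapshot

# Detach the primary cluster, delete its Aurora Replicas, then its writer instance.
aws rds --region primary_region remove-from-global-cluster \
  --db-cluster-identifier primary_cluster_ARN \
  --global-cluster-identifier global_database_id
aws rds --region primary_region delete-db-instance \
  --db-instance-identifier primary_instance_id

# Finally, delete the empty global database container.
aws rds --region primary_region delete-global-cluster \
  --global-cluster-identifier global_database_id
```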

 To delete an Aurora global database, you can use the AWS Management Console, the AWS CLI, or the RDS API. 

## Console


**To delete an Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases** and find the Aurora global database you want to delete in the listing.

1. Confirm that all clusters are removed from the Aurora global database. The Aurora global database should show 0 Regions and AZs and a size of 0 clusters. 

   If the Aurora global database contains any Aurora DB clusters, you can't delete it. If necessary, detach the primary and secondary Aurora DB clusters from the Aurora global database. For more information, see [Removing a cluster from an Amazon Aurora global database](aurora-global-database-detaching.md).

1. Choose your Aurora global database in the list, and then choose **Delete** from the **Actions** menu.  
![\[An Aurora global database based on Aurora MySQL 5.6.10a remains in the AWS Management Console until you delete it, even if it doesn't have any associated Aurora DB clusters.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-ams5610a-delete-empty-cluster.png)

## AWS CLI


To delete an Aurora global database, run the [delete-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/delete-global-cluster.html) CLI command with the name of the AWS Region and the Aurora global database identifier, as shown in the following example.

For Linux, macOS, or Unix:

```
aws rds --region primary_region delete-global-cluster \
   --global-cluster-identifier global_database_id
```

For Windows:

```
aws rds --region primary_region delete-global-cluster ^
   --global-cluster-identifier global_database_id
```

## RDS API


To delete a cluster that is part of an Aurora global database, run the [DeleteGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DeleteGlobalCluster.html) API operation. 

# Tagging Amazon Aurora Global Database resources
Tagging for Aurora Global Database

 With the Aurora Global Database feature, you can apply RDS tags to resources at different levels within a global database. If you aren't familiar with how tags are used with AWS or Aurora resources, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md) before applying tags within your global database. 

**Note**  
 Because AWS processes tag data as part of its cost-reporting mechanisms, don't include any sensitive data or personally identifiable information (PII) in the tag names or values. 

 You can apply tags to the following kinds of resources within a global database: 
+  The container object for your entire global database. This resource is known as the global cluster. 

   After you create the global cluster by performing an **Add AWS Region** operation in the console, you can add tags by using the details page for the global cluster. On the **Tags** tab within the global cluster details page, you can add, remove, or modify tags and their associated values by choosing **Manage tags**. 

   With the AWS CLI and RDS API, you can add tags to the global cluster at the same time you create it. You can also add, remove, or modify tags for an existing global cluster. 
+  The primary cluster. You use the same procedures for working with tags here as for standalone Aurora clusters. You can set up the tags before turning the original Aurora cluster into a global database. You can also add, remove, or modify tags and their associated values by choosing **Manage tags** on the **Tags** tab within the DB cluster details page. 
+  Any secondary clusters. You use the same procedures for working with tags here as for standalone Aurora clusters. You can set up the tags at the same time as you create a secondary Aurora cluster by using the **Add AWS Region** action in the console. You can also add, remove, or modify tags and their associated values by choosing **Manage tags** on the **Tags** tab within the DB cluster details page. 
+  Individual DB instances within the primary or secondary clusters. You use the same procedures for working with tags here as for Aurora or RDS DB instances. You can set up the tags at the same time as you add a new DB instance to the Aurora cluster by using the **Add reader** action in the console. You can also add, remove, or modify tags and their associated values by choosing **Manage tags** on the **Tags** tab within the DB instance details page. 

 Here are some examples of the kinds of tags you might assign within a global database: 
+  You might add tags to the global cluster to record overall information about your application, such as anonymized identifiers representing owners and contacts within your organization. You might use tags to represent properties of the application that uses the global database. 
+  You might add tags to the primary cluster and secondary clusters to track costs for your application at the AWS Region level. For details about that procedure, see [How AWS billing works with tags in Amazon RDS](USER_Tagging.md#Tagging.Billing). 
+  You might add tags to specific DB instances within the Aurora clusters to indicate their special purpose. For example, within the primary cluster, you might have a reader instance with a low failover priority that is used exclusively for report generation. A tag can distinguish this special-purpose DB instance from other instances that are dedicated to high availability within the primary cluster. 
+  You might use tags at all levels of your global database resources to control access through IAM policies. For more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources) in the *AWS Identity and Access Management User Guide*. 
**Tip**  
 In the AWS Management Console, you add tags to the global cluster container as a separate step after you create it. If you want to avoid any time interval when the global cluster exists without access control tags, you can apply the tags during the `CreateGlobalCluster` operation by creating that resource through AWS CLI, RDS API, or a CloudFormation template. 
+  You might use tags at the cluster level, or for the global cluster, to record information about quality assurance and testing of your application. For example, you might specify a tag on a DB cluster to record the last time you performed a switchover to that cluster. You might specify a tag on the global cluster to record the time of the last disaster recovery drill for the entire application. 

## AWS CLI examples of tagging for global databases


 The following AWS CLI examples show how you can specify and examine tags for all the types of Aurora resources in your global database. 

 You can specify tags for the global cluster container with the `create-global-cluster` command. The following example creates a global cluster and assigns two tags to it. The tags have keys `tag1` and `tag2`. 

```
$  aws rds create-global-cluster --global-cluster-identifier my_global_cluster_id \
  --engine aurora-mysql --tags Key=tag1,Value=val1 Key=tag2,Value=val2
```

 You can list the tags on the global cluster container with the `describe-global-clusters` command. When working with tags, you typically run this command first to retrieve the Amazon Resource Name (ARN) of the global cluster. You use the ARN as a parameter in subsequent commands for working with tags. The following command displays the tag information in the `TagList` attribute. It also shows the ARN, which is used as a parameter in the later examples. 

```
$  aws rds describe-global-clusters --global-cluster-identifier my_global_cluster_id
{
    "GlobalClusters": [
        {
            "Status": "available",
            "Engine": "aurora-mysql",
            "GlobalClusterArn": "my_global_cluster_arn",
            ...
            "TagList": [
                {
                    "Value": "val1",
                    "Key": "tag1"
                },
                {
                    "Value": "val2",
                    "Key": "tag2"
                }
            ]
        }
    ]
}
```

 You can add new tags with the `add-tags-to-resource` command. With this command, you specify the Amazon Resource Name (ARN) of the global cluster instead of its identifier. Adding a tag with the same name as an existing tag overwrites the value of that tag. If you include spaces or special characters in the tag values, quote the values as appropriate for your operating system or command shell. The following example modifies the tags of the global cluster from the previous example. Originally, the cluster had tags with keys `tag1` and `tag2`. After the command finishes, the global cluster has a new tag with key `tag3`, and the tag with key `tag1` has a different value. 

```
$  aws rds add-tags-to-resource --resource-name my_global_cluster_arn \
  --tags Key=tag1,Value="new value for tag1" Key=tag3,Value="entirely new tag"

$  aws rds describe-global-clusters --global-cluster-identifier my_global_cluster_id
{
    "GlobalClusters": [
        {
            "Status": "available",
            "Engine": "aurora-mysql",
            ...
            "TagList": [
                {
                    "Value": "new value for tag1",
                    "Key": "tag1"
                },
                {
                    "Value": "val2",
                    "Key": "tag2"
                },
                {
                    "Value": "entirely new tag",
                    "Key": "tag3"
                }
            ]
        }
    ]
}
```

 You can delete a tag from the global cluster with the `remove-tags-from-resource` command. With this command, you specify only a set of tag keys, without any tag values. The following example modifies the tags of the global cluster from the previous example. Originally, the cluster had tags with keys `tag1`, `tag2`, and `tag3`. After the command finishes, only the tag with key `tag1` remains. 

```
$  aws rds remove-tags-from-resource --resource-name my_global_cluster_arn --tag-keys tag2 tag3

$  aws rds describe-global-clusters --global-cluster-identifier my_global_cluster_id
{
    "GlobalClusters": [
        {
            "Status": "available",
            "Engine": "aurora-mysql",
            ...
            "TagList": [
                {
                    "Value": "new value for tag1",
                    "Key": "tag1"
                }
            ]
        }
    ]
}
```

# Connecting to Amazon Aurora Global Database
Connecting to Aurora Global Database

 Each Aurora Global Database comes with a writer endpoint that is automatically updated by Aurora to route requests to the current writer instance of the primary DB cluster. With the writer endpoint, you don't have to modify your connection string after you change the location of the primary Region using the managed Aurora Global Database switchover and failover capabilities. To learn more about using the writer endpoint along with Aurora Global Database switchover and failover, see [Using switchover or failover in Amazon Aurora Global Database](aurora-global-database-disaster-recovery.md). For information about connecting to an Aurora Global Database with RDS Proxy, see [Using RDS Proxy with Aurora global databases](https://docs.aws.amazon.com/AuroraUserGuide/rds-proxy-gdb.html). 

**Topics**
+ [

## Choosing the endpoint that meets your application needs
](#gdb-endpoint-choosing)
+ [

## Viewing the endpoints of an Amazon Aurora global database
](#viewing-endpoints)
+ [

## Considerations for using global writer endpoints
](#global-writer-endpoint-considerations)

## Choosing the endpoint that meets your application needs


 How you connect to an Aurora Global Database depends on whether you need to read from or write to the database, and on the AWS Region to which you want to route your requests. Here are a few typical use cases: 
+  **Routing requests to the writer instance**: Connect to the Aurora Global Database writer endpoint if you need to run data manipulation language (DML) and data definition language (DDL) statements, or if you need strong consistency between reads and writes. That endpoint routes requests to the writer instance in your global database's primary cluster. This endpoint is automatically updated to route requests to the writer instance, eliminating the need to update your application each time you change the writer location in your global cluster. You can also use the global endpoint to send cross-Region read/write requests to your writer. 
**Note**  
 If you set up your global database before the Aurora Global Database writer endpoint was available, your application might connect to the cluster endpoint of the primary cluster. In this case, we recommend switching your connection settings to use the global writer endpoint instead. Doing so avoids the need to change your connection settings after every Aurora Global Database switchover or failover.   
 The first part of the writer endpoint name is the name of your Aurora Global Database. Thus, if you rename your Aurora Global Database, the writer endpoint name changes, and any code that uses it must be updated with the new name. 
+  **Scaling reads closer to your application's Region**: To scale read-only requests in the same AWS Region as your application, or in a nearby Region, connect to the reader endpoint of the primary or secondary Aurora clusters. 
+  **Scaling reads with occasional cross-Region writes**: For occasional DML statements, such as maintenance and data cleanup, connect to the reader endpoint of a secondary cluster that has write forwarding enabled. With write forwarding, Aurora automatically forwards the write statements to the writer in the primary Region of your Aurora global database. Write forwarding provides the following benefits: 
  +  You don't need to do the heavy lifting to establish connectivity between the secondary and primary clusters to send cross-Region writes. 
  +  You don't need to split read and write requests in the application. 
  +  You don't need to develop complex logic to manage consistency for read-after-write requests. 

   However, with write forwarding, you do need to update your application code or configuration to connect to the newly promoted primary Region's reader endpoint after performing a cross-Region failover or switchover. We recommend that you monitor the latency of operations done through write forwarding, to check the overhead of processing the write requests. Finally, write forwarding doesn't support certain MySQL or PostgreSQL operations, such as making data definition language (DDL) changes or `SELECT FOR UPDATE` statements. 

   To learn more about using write forwarding across AWS Regions, see [Using write forwarding in an Amazon Aurora global database](aurora-global-database-write-forwarding.md). 

 For details about the different kinds of Aurora endpoints, see [Connecting to an Amazon Aurora DB cluster](Aurora.Connecting.md). 

## Viewing the endpoints of an Amazon Aurora global database
Viewing the endpoints of a global database

 When you view an Aurora Global Database in the console, you can see all of the endpoints associated with all of its clusters. The following figure shows an example of the types of endpoints you see when you view the details for your primary DB cluster: 
+  Global writer – The single read/write endpoint that always points to the current writer DB instance for the global database cluster. 
+  Writer – The connection endpoint for read/write requests to the primary DB cluster in the global database cluster. 
+  Reader – The connection endpoint for read-only requests to a primary or secondary DB cluster in the global database cluster. To minimize latency, choose whichever reader endpoint is in your AWS Region or the AWS Region closest to you. 

![\[In the RDS console, the Connectivity & security tab for an Aurora Global Database shows the global writer endpoint.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-databases-primary-cluster-connectivity-2.png)


### Console


**To view the endpoints of a global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Databases**. 

1.  In the list, choose the global database, or the primary or secondary DB cluster whose endpoints you want to view. 

1.  Choose the **Connectivity & security** tab to see the endpoint details. The endpoints displayed depend on the type of cluster you selected, as follows: 
   +  Global database – The global writer endpoint. 
   +  Primary DB cluster – The global writer endpoint, and the cluster endpoint and reader endpoint for the primary cluster. 
   +  Secondary DB cluster – The cluster endpoint and reader endpoint for the secondary cluster. On a secondary cluster, the cluster endpoint displays a status of **inactive** because it doesn't handle write requests. You can still connect to the cluster endpoint, but only for read queries. 

### AWS CLI


 To view the global cluster's writer endpoint, use the AWS CLI [describe-global-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-global-clusters.html) command, as in the following example. 

```
aws rds describe-global-clusters --region aws_region
{
    "GlobalClusters": [
        {
            "GlobalClusterIdentifier": "global_cluster_id",
            "GlobalClusterResourceId": "cluster-unique_string",
            "GlobalClusterArn": "arn:aws:rds::123456789012:global-cluster:global_cluster_id",
            "Status": "available",
            "Engine": "aurora-mysql",
            "EngineVersion": "5.7.mysql_aurora.2.11.2",
            "GlobalClusterMembers": [

              ...
            ],
            "Endpoint": "global_cluster_id.global-unique_string.global.rds.amazonaws.com"
        }
    ]
}
```
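
 If you save the `describe-global-clusters` output, you can extract just the writer endpoint with standard shell tools. The following sketch runs against a trimmed, hypothetical sample of that output: 

```
# Save a trimmed sample of describe-global-clusters output (hypothetical values).
cat > /tmp/global-cluster.json <<'EOF'
{
    "GlobalClusters": [
        {
            "GlobalClusterIdentifier": "global_cluster_id",
            "Status": "available",
            "Endpoint": "global_cluster_id.global-unique_string.global.rds.amazonaws.com"
        }
    ]
}
EOF

# Pull out the "Endpoint" value. With a real cluster, you can pipe
# the aws command output into the same filter.
endpoint=$(sed -n 's/.*"Endpoint": "\([^"]*\)".*/\1/p' /tmp/global-cluster.json)
echo "$endpoint"
```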

 To view the cluster and reader endpoints for member DB clusters of the global cluster, use the AWS CLI [describe-db-clusters](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html) command, as in the following example. The values returned for `Endpoint` and `ReaderEndpoint` are the cluster and reader endpoints, respectively. 

```
aws rds describe-db-clusters --region primary_region --db-cluster-identifier db_cluster_id
{
    "DBClusters": [
        {
            "AllocatedStorage": 1,
            "AvailabilityZones": [
                "az_1",
                "az_2",
                "az_3"
            ],
            "BackupRetentionPeriod": 1,
            "DBClusterIdentifier": "db_cluster_id",
            "DBClusterParameterGroup": "default.aurora-mysql5.7",
            "DBSubnetGroup": "default",
            "Status": "available",
            "EarliestRestorableTime": "2023-08-01T18:21:11.301Z",
            "Endpoint": "db_cluster_id.cluster-unique_string.primary_region.rds.amazonaws.com",
            "ReaderEndpoint": "db_cluster_id.cluster-ro-unique_string.primary_region.rds.amazonaws.com",
            "MultiAZ": false,
            "Engine": "aurora-mysql",
            "EngineVersion": "5.7.mysql_aurora.2.11.2",

            "ReadReplicaIdentifiers": [
                "arn:aws:rds:secondary_region:123456789012:cluster:db_cluster_id"
            ],
            "DBClusterMembers": [
                {
                    "DBInstanceIdentifier": "db_instance_id",
                    "IsClusterWriter": true,
                    "DBClusterParameterGroupStatus": "in-sync",
                    "PromotionTier": 1
                }
            ],

            ...
            "TagList": [],
            "GlobalWriteForwardingRequested": false
        }
    ]
}
```

### RDS API


 To view the global cluster's writer endpoint, use the RDS API [DescribeGlobalClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeGlobalClusters.html) operation. To view the cluster and reader endpoints for member DB clusters of the global cluster, use the RDS API [DescribeDBClusters](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusters.html) operation. 

## Considerations for using global writer endpoints


 You can make effective use of the Aurora Global Database writer endpoints by following these guidelines and best practices: 
+  To minimize disruption after a cross-Region failover or switchover, you can set up VPC connectivity between your application compute and your primary and secondary AWS Regions. For example, suppose that you have applications or client systems that are running in the same VPC as the primary cluster. If the secondary cluster is promoted, the global writer endpoint automatically changes to point to that cluster. Although the global writer endpoint lets you avoid changing the connection settings for your application, your applications can't access the IP addresses in the newly promoted primary AWS Region's VPC until you set up networking between the two VPCs. See [Amazon VPC-to-Amazon VPC connectivity options](https://docs.aws.amazon.com/whitepapers/latest/aws-vpc-connectivity-options/amazon-vpc-to-amazon-vpc-connectivity-options.html) to evaluate different options for setting up this connectivity. 
+  The global writer endpoint update after a global database failover or switchover can take a long time, depending on your Domain Name System (DNS) caching duration. To learn more, see the [Amazon Aurora MySQL Database Administrator's Handbook](https://docs.aws.amazon.com/pdfs/whitepapers/latest/amazon-aurora-mysql-db-admin-handbook/amazon-aurora-mysql-db-admin-handbook.pdf). Aurora Global Database emits an RDS event when it detects the DNS change on the global writer endpoint. You can use this event to make sure that your application's DNS cache doesn't persist past the time when the event is generated. For more information, see [DB cluster events](USER_Events.Messages.md#USER_Events.Messages.cluster). 
+  Aurora Global Database replicates data asynchronously. The cross-Region failover methods can result in the loss of some write transaction data that wasn't replicated to the chosen secondary cluster before the failover was initiated. Although Aurora attempts, on a best-effort basis, to block writes in the original primary AWS Region, failover can be susceptible to split-brain issues. The considerations for minimizing data loss and split-brain risk apply to Aurora Global Database writer endpoints as well. For more information, see [Performing managed failovers for Aurora global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-failover.managed-unplanned). 
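
 One way to observe the DNS change yourself is to resolve the global writer endpoint and check which Regional cluster endpoint it currently points to. The endpoint name in this sketch is a placeholder: 

```
# Resolve the global writer endpoint; the CNAME chain shows the cluster
# endpoint of the current primary AWS Region.
dig +short global_database_id.global-unique_string.global.rds.amazonaws.com CNAME
```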

# Using write forwarding in an Amazon Aurora global database
Using write forwarding in an Aurora global database

You can reduce the number of endpoints that you need to manage for applications running on your Aurora global database by using *write forwarding*. With write forwarding enabled, secondary clusters in an Aurora global database forward SQL statements that perform write operations to the primary cluster. The primary cluster updates the source and then propagates the resulting changes back to all secondary AWS Regions. 

The write forwarding configuration saves you from implementing your own mechanism to send write operations from a secondary AWS Region to the primary Region. Aurora handles the cross-Region networking setup. Aurora also transmits all necessary session and transactional context for each statement. The data is always changed first on the primary cluster and then replicated to the secondary clusters in the Aurora global database. This way, the primary cluster is the source of truth and always has an up-to-date copy of all your data.

**Topics**
+ [

# Using write forwarding in an Aurora MySQL global database
](aurora-global-database-write-forwarding-ams.md)
+ [

# Using write forwarding in an Aurora PostgreSQL global database
](aurora-global-database-write-forwarding-apg.md)

# Using write forwarding in an Aurora MySQL global database
Using write forwarding in Aurora MySQL

**Topics**
+ [

## Region and version availability of write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-regions-versions-ams)
+ [

## Enabling write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-enabling-ams)
+ [

## Checking if a secondary cluster has write forwarding enabled in Aurora MySQL
](#aurora-global-database-write-forwarding-describing-ams)
+ [

## Application and SQL compatibility with write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-compatibility-ams)
+ [

## Isolation and consistency for write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-isolation-ams)
+ [

## Running multipart statements with write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-multipart-ams)
+ [

## Transactions with write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-txns-ams)
+ [

## Configuration parameters for write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-params-ams)
+ [

## Amazon CloudWatch metrics for write forwarding in Aurora MySQL
](#aurora-global-database-write-forwarding-cloudwatch-ams)
+ [

## Aurora MySQL status variables for write forwarding
](#aurora-global-database-write-forwarding-status-ams)

## Region and version availability of write forwarding in Aurora MySQL


Write forwarding is supported with Aurora MySQL 2.08.1 and higher versions, in every Region where Aurora MySQL-based global databases are available.

For information on version and Region availability of Aurora MySQL global databases, see [Aurora global databases with Aurora MySQL](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md#Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.amy).

## Enabling write forwarding in Aurora MySQL


By default, write forwarding isn't enabled when you add a secondary cluster to an Aurora global database.

To enable write forwarding using the AWS Management Console, select the **Turn on global write forwarding** check box under **Read replica write forwarding** when you add a Region for a global database. For an existing secondary cluster, modify the cluster to **Turn on global write forwarding**. To turn off write forwarding, clear the **Turn on global write forwarding** check box when adding the Region or modifying the secondary cluster.

 To enable write forwarding using the AWS CLI, use the `--enable-global-write-forwarding` option. This option works when you create a new secondary cluster using the `create-db-cluster` command. It also works when you modify an existing secondary cluster using the `modify-db-cluster` command. It requires that the global database uses an Aurora version that supports write forwarding. You can turn write forwarding off by using the `--no-enable-global-write-forwarding` option with these same CLI commands. 

 To enable write forwarding using the Amazon RDS API, set the `EnableGlobalWriteForwarding` parameter to `true`. This parameter works when you create a new secondary cluster using the `CreateDBCluster` operation. It also works when you modify an existing secondary cluster using the `ModifyDBCluster` operation. It requires that the global database uses an Aurora version that supports write forwarding. You can turn write forwarding off by setting the `EnableGlobalWriteForwarding` parameter to `false`. 

**Note**  
For a database session to use write forwarding, specify a setting for the `aurora_replica_read_consistency` configuration parameter. Do this in every session that uses the write forwarding feature. For information about this parameter, see [Isolation and consistency for write forwarding in Aurora MySQL](#aurora-global-database-write-forwarding-isolation-ams).   
The RDS Proxy feature doesn't support the `SESSION` value for the `aurora_replica_read_consistency` variable. Setting this value can cause unexpected behavior.
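
 For example, a session connected to a secondary cluster's reader endpoint might set the parameter before issuing writes that Aurora forwards to the primary. The endpoint, user name, and table in this sketch are hypothetical: 

```
mysql -h write-forwarding-test-cluster-2.cluster-ro-unique_string.us-east-2.rds.amazonaws.com \
  -u user_name -p

mysql> SET aurora_replica_read_consistency = 'session';
mysql> -- This INSERT runs on the secondary, but Aurora forwards it to the
mysql> -- writer instance in the primary AWS Region.
mysql> INSERT INTO my_table VALUES (1);
```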

The following CLI examples show how you can set up an Aurora global database with write forwarding enabled or disabled. Pay particular attention to the commands and options that you must specify and keep consistent when setting up the infrastructure for an Aurora global database. 

 The following example creates an Aurora global database, a primary cluster, and a secondary cluster with write forwarding enabled. Substitute your own choices for the user name, password, and primary and secondary AWS Regions. 

```
# Create overall global database.
aws rds create-global-cluster --global-cluster-identifier write-forwarding-test \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-1

# Create primary cluster, in the same AWS Region as the global database.
aws rds create-db-cluster --global-cluster-identifier write-forwarding-test \
  --db-cluster-identifier write-forwarding-test-cluster-1 \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --master-username user_name --master-user-password password \
  --region us-east-1

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-1 \
  --db-instance-identifier write-forwarding-test-cluster-1-instance-1 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-1

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-1 \
  --db-instance-identifier write-forwarding-test-cluster-1-instance-2 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-1

# Create secondary cluster, in a different AWS Region than the global database,
# with write forwarding enabled.
aws rds create-db-cluster --global-cluster-identifier write-forwarding-test \
  --db-cluster-identifier write-forwarding-test-cluster-2 \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-2 \
  --enable-global-write-forwarding

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-2 \
  --db-instance-identifier write-forwarding-test-cluster-2-instance-1 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-2

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-2 \
  --db-instance-identifier write-forwarding-test-cluster-2-instance-2 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-east-2
```

 The following example continues from the previous one. It creates another secondary cluster without write forwarding enabled, and then enables write forwarding on that cluster. After this example finishes, all secondary clusters in the global database have write forwarding enabled.

```
# Create another secondary cluster, in a different AWS Region than the global
# database and the first secondary cluster, without write forwarding enabled.
aws rds create-db-cluster --global-cluster-identifier write-forwarding-test \
  --db-cluster-identifier write-forwarding-test-cluster-3 \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-west-1

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-3 \
  --db-instance-identifier write-forwarding-test-cluster-3-instance-1 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-west-1

aws rds create-db-instance --db-cluster-identifier write-forwarding-test-cluster-3 \
  --db-instance-identifier write-forwarding-test-cluster-3-instance-2 \
  --db-instance-class db.r5.large \
  --engine aurora-mysql --engine-version 5.7.mysql_aurora.2.11.1 \
  --region us-west-1

# Enable write forwarding on the new secondary cluster. Specify the same
# AWS Region that the cluster was created in.
aws rds modify-db-cluster --db-cluster-identifier write-forwarding-test-cluster-3 \
  --region us-west-1 \
  --enable-global-write-forwarding
```

## Checking if a secondary cluster has write forwarding enabled in Aurora MySQL


 To determine whether you can use write forwarding from a secondary cluster, you can check whether the cluster has the attribute `"GlobalWriteForwardingStatus": "enabled"`. 

In the AWS Management Console, on the **Configuration** tab of the details page for the cluster, you see the status **Enabled** for **Global read replica write forwarding**.

To see the status of the global write forwarding setting for all of your clusters, run the following AWS CLI command.

A secondary cluster shows the value `"enabled"` or `"disabled"` to indicate whether write forwarding is turned on or off. A value of `null` indicates that write forwarding isn't available for that cluster: either the cluster isn't part of a global database, or it is the primary cluster rather than a secondary cluster. The value can also be `"enabling"` or `"disabling"` if write forwarding is in the process of being turned on or off.

**Example**  

```
aws rds describe-db-clusters \
--query '*[].{DBClusterIdentifier:DBClusterIdentifier,GlobalWriteForwardingStatus:GlobalWriteForwardingStatus}'

[
    {
        "GlobalWriteForwardingStatus": "enabled",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-1"
    },
    {
        "GlobalWriteForwardingStatus": "disabled",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-2"
    },
    {
        "GlobalWriteForwardingStatus": null,
        "DBClusterIdentifier": "non-global-cluster"
    }
]
```

 To find all secondary clusters that have global write forwarding enabled, run the following command. This command also returns the cluster's reader endpoint. You use the secondary cluster's reader endpoint when you use write forwarding from the secondary to the primary in your Aurora global database. 

**Example**  

```
aws rds describe-db-clusters --query 'DBClusters[].{DBClusterIdentifier:DBClusterIdentifier,GlobalWriteForwardingStatus:GlobalWriteForwardingStatus,ReaderEndpoint:ReaderEndpoint} | [?GlobalWriteForwardingStatus == `enabled`]'
[
    {
        "GlobalWriteForwardingStatus": "enabled",
        "ReaderEndpoint": "aurora-write-forwarding-test-replica-1.cluster-ro-cnpexample.us-west-2.rds.amazonaws.com",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-1"
    }
]
```
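The reader endpoint returned by the preceding command is the host name to use for sessions that forward writes. If you save the command's output to a file, you can pull the endpoint out with standard shell tools. The following is a sketch using sample output; the endpoint and identifier values are hypothetical:

```shell
# Sample output saved from the describe-db-clusters query above.
cat > /tmp/fwd-clusters.json <<'EOF'
[
    {
        "GlobalWriteForwardingStatus": "enabled",
        "ReaderEndpoint": "replica-1.cluster-ro-cnpexample.us-west-2.rds.amazonaws.com",
        "DBClusterIdentifier": "replica-1"
    }
]
EOF

# Extract the reader endpoint to use as the host for write-forwarding sessions.
reader_endpoint=$(sed -n 's/.*"ReaderEndpoint": "\(.*\)",/\1/p' /tmp/fwd-clusters.json)
echo "$reader_endpoint"
```

You can then pass the extracted endpoint as the host option of the `mysql` client when you open a session that forwards writes.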

## Application and SQL compatibility with write forwarding in Aurora MySQL


You can use the following kinds of SQL statements with write forwarding:
+ Data manipulation language (DML) statements, such as `INSERT`, `DELETE`, and `UPDATE`. There are some restrictions on the properties of these statements that you can use with write forwarding, as described following.
+ `SELECT ... LOCK IN SHARE MODE` and `SELECT FOR UPDATE` statements.
+ `PREPARE` and `EXECUTE` statements.

 Certain statements aren't allowed or can produce stale results when you use them in a global database with write forwarding. Thus, the `EnableGlobalWriteForwarding` setting is turned off by default for secondary clusters. Before turning it on, check to make sure that your application code isn't affected by any of these restrictions. 

 The following restrictions apply to the SQL statements you use with write forwarding. In some cases, you can use the statements on secondary clusters with write forwarding enabled at the cluster level. This approach works if write forwarding isn't turned on within the session by the `aurora_replica_read_consistency` configuration parameter. Trying to use a statement when it's not allowed because of write forwarding causes an error message with the following format. 

```
ERROR 1235 (42000): This version of MySQL doesn't yet support 'operation with write forwarding'.
```

**Data definition language (DDL)**  
 Connect to the primary cluster to run DDL statements. You can't run them from reader DB instances.

**Updating a permanent table using data from a temporary table**  
 You can use temporary tables on secondary clusters with write forwarding enabled. However, you can't use a DML statement to modify a permanent table if the statement refers to a temporary table. For example, you can't use an `INSERT ... SELECT` statement that takes the data from a temporary table. The temporary table exists on the secondary cluster and isn't available when the statement runs on the primary cluster. 
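The following is a sketch of this restriction, with hypothetical table names `t1` (permanent) and `tmp1` (temporary). The second statement fails with error 1235 when write forwarding is active in the session, because `tmp1` exists only on the secondary cluster and isn't available when the forwarded statement runs on the primary cluster:

```
CREATE TEMPORARY TABLE tmp1 (c1 INT);
INSERT INTO t1 SELECT c1 FROM tmp1; -- fails when forwarded to the primary
```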

**XA transactions**  
 You can't use the following statements on a secondary cluster when write forwarding is turned on within the session. You can use these statements on secondary clusters that don't have write forwarding enabled, or within sessions where the `aurora_replica_read_consistency` setting is empty. Before turning on write forwarding within a session, check if your code uses these statements.   

```
XA {START|BEGIN} xid [JOIN|RESUME]
XA END xid [SUSPEND [FOR MIGRATE]]
XA PREPARE xid
XA COMMIT xid [ONE PHASE]
XA ROLLBACK xid
XA RECOVER [CONVERT XID]
```

**LOAD statements for permanent tables**  
 You can't use the following statements on a secondary cluster with write forwarding enabled.   

```
LOAD DATA INFILE 'data.txt' INTO TABLE t1;
LOAD XML LOCAL INFILE 'test.xml' INTO TABLE t1;
```
 You can load data into a temporary table on a secondary cluster. However, make sure that you run any `LOAD` statements that refer to permanent tables only on the primary cluster. 
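As a sketch of what is allowed (`staging` and `t1` are hypothetical table names), you can stage incoming data in a temporary table on the secondary cluster, because the target of the `LOAD` statement isn't a permanent table:

```
CREATE TEMPORARY TABLE staging LIKE t1;
LOAD DATA LOCAL INFILE 'data.txt' INTO TABLE staging;
```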

**Plugin statements**  
 You can't use the following statements on a secondary cluster with write forwarding enabled.   

```
INSTALL PLUGIN example SONAME 'ha_example.so';
UNINSTALL PLUGIN example;
```

**SAVEPOINT statements**  
 You can't use the following statements on a secondary cluster when write forwarding is turned on within the session. You can use these statements on secondary clusters that don't have write forwarding enabled, or within sessions where the `aurora_replica_read_consistency` setting is empty. Check if your code uses these statements before turning on write forwarding within a session.   

```
SAVEPOINT t1_save;
ROLLBACK TO SAVEPOINT t1_save;
RELEASE SAVEPOINT t1_save;
```

## Isolation and consistency for write forwarding in Aurora MySQL


 In sessions that use write forwarding, you can only use the `REPEATABLE READ` isolation level. Although you can also use the `READ COMMITTED` isolation level with read-only clusters in secondary AWS Regions, that isolation level doesn't work with write forwarding. For information about the `REPEATABLE READ` and `READ COMMITTED` isolation levels, see [Aurora MySQL isolation levels](AuroraMySQL.Reference.IsolationLevels.md). 

 You can control the degree of read consistency on a secondary cluster. The read consistency level determines how long the secondary cluster waits before each read operation to ensure that some or all changes are replicated from the primary cluster. You can adjust the read consistency level to ensure that all forwarded write operations from your session are visible in the secondary cluster before any subsequent queries. You can also use this setting to ensure that queries on the secondary cluster always see the most current updates from the primary cluster, even updates submitted by other sessions or other clusters. To specify this type of behavior for your application, choose a value for the session-level parameter `aurora_replica_read_consistency`. 

**Important**  
Always set the `aurora_replica_read_consistency` parameter for any session for which you want to forward writes. If you don't, Aurora doesn't enable write forwarding for that session. This parameter has an empty value by default, so choose a specific value when you use this parameter. The `aurora_replica_read_consistency` parameter has an effect only on secondary clusters that have write forwarding enabled.  
For Aurora MySQL version 2 and version 3 lower than 3.04, use `aurora_replica_read_consistency` as a session variable. For Aurora MySQL version 3.04 and higher, you can use `aurora_replica_read_consistency` as either a session variable or as a DB cluster parameter.

 For the `aurora_replica_read_consistency` parameter, you can specify the values `EVENTUAL`, `SESSION`, and `GLOBAL`. 

 As you increase the consistency level, your application spends more time waiting for changes to be propagated between AWS Regions. You can choose the balance between fast response time and ensuring that changes made in other locations are fully available before your queries run. 

 With the read consistency set to `EVENTUAL`, queries in a secondary AWS Region that uses write forwarding might see data that is slightly stale due to replication lag. Results of write operations in the same session aren't visible until the write operation is performed on the primary Region and replicated to the current Region. The query doesn't wait for the updated results to be available. Thus, it might retrieve the older data or the updated data, depending on the timing of the statements and the amount of replication lag. 

 With the read consistency set to `SESSION`, all queries in a secondary AWS Region that uses write forwarding see the results of all changes made in that session. The changes are visible regardless of whether the transaction is committed. If necessary, the query waits for the results of forwarded write operations to be replicated to the current Region. It doesn't wait for updated results from write operations performed in other Regions or in other sessions within the current Region. 

 With the read consistency set to `GLOBAL`, a session in a secondary AWS Region sees changes made by that session. It also sees all committed changes from both the primary AWS Region and other secondary AWS Regions. Each query might wait for a period that varies depending on the amount of session lag. The query proceeds when the secondary cluster is up-to-date with all committed data from the primary cluster, as of the time that the query began. 

 For more information about all the parameters involved with write forwarding, see [Configuration parameters for write forwarding in Aurora MySQL](#aurora-global-database-write-forwarding-params-ams). 

### Examples of using write forwarding


These examples use `aurora_replica_read_consistency` as a session variable. For Aurora MySQL version 3.04 and higher, you can use `aurora_replica_read_consistency` as either a session variable or as a DB cluster parameter.

In the following example, the primary cluster is in the US East (N. Virginia) Region, and the secondary cluster is in the US East (Ohio) Region. The example shows the effects of running `INSERT` statements followed by `SELECT` statements. Depending on the `aurora_replica_read_consistency` setting and the timing of the statements, the results can differ. To achieve higher consistency, you might wait briefly before issuing the `SELECT` statement. Or Aurora can automatically wait until the results finish replicating before proceeding with the `SELECT`.

In this example, the read consistency setting is `eventual`. Running an `INSERT` statement immediately followed by a `SELECT` statement returns a `COUNT(*)` value that reflects the number of rows before the new row was inserted. Running the `SELECT` again a short time later returns the updated row count. The `SELECT` statements don't wait.

```
mysql> set aurora_replica_read_consistency = 'eventual';
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)
mysql> insert into t1 values (6); select count(*) from t1;
+----------+
| count(*) |
+----------+
|        5 |
+----------+
1 row in set (0.00 sec)
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.00 sec)
```

With a read consistency setting of `session`, a `SELECT` statement immediately after an `INSERT` waits until the changes from the `INSERT` statement are visible. Subsequent `SELECT` statements don't wait.

```
mysql> set aurora_replica_read_consistency = 'session';
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        6 |
+----------+
1 row in set (0.01 sec)
mysql> insert into t1 values (6); select count(*) from t1; select count(*) from t1;
Query OK, 1 row affected (0.08 sec)
+----------+
| count(*) |
+----------+
|        7 |
+----------+
1 row in set (0.37 sec)
+----------+
| count(*) |
+----------+
|        7 |
+----------+
1 row in set (0.00 sec)
```

 With the read consistency setting still set to `session`, introducing a brief wait after performing an `INSERT` statement makes the updated row count available by the time the next `SELECT` statement runs. 

```
mysql> insert into t1 values (6); select sleep(2); select count(*) from t1;
Query OK, 1 row affected (0.07 sec)
+----------+
| sleep(2) |
+----------+
|        0 |
+----------+
1 row in set (2.01 sec)
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.00 sec)
```

 With a read consistency setting of `global`, each `SELECT` statement waits to ensure that all data changes as of the start time of the statement are visible before performing the query. The amount of waiting for each `SELECT` statement varies, depending on the amount of replication lag between the primary and secondary clusters. 

```
mysql> set aurora_replica_read_consistency = 'global';
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.75 sec)
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.37 sec)
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|        8 |
+----------+
1 row in set (0.66 sec)
```

## Running multipart statements with write forwarding in Aurora MySQL


 A DML statement might consist of multiple parts, such as an `INSERT ... SELECT` statement or a `DELETE ... WHERE` statement. In that case, the entire statement is forwarded to the primary cluster and runs there.

## Transactions with write forwarding in Aurora MySQL


 Whether the transaction is forwarded to the primary cluster depends on the access mode of the transaction. You can specify the access mode for the transaction by using the `SET TRANSACTION` statement or the `START TRANSACTION` statement. You can also specify the transaction access mode by changing the value of the [transaction\_read\_only](https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_transaction_read_only) session variable. You can change this session value only while you're connected to a DB cluster that has write forwarding enabled.
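For example (a sketch; `t1` is a hypothetical table), you can pin an individual transaction's access mode before it starts, or change the session default:

```
-- Applies only to the next transaction: run it read-only on the secondary.
SET TRANSACTION READ ONLY;
START TRANSACTION;
SELECT COUNT(*) FROM t1;
COMMIT;

-- Change the session default so that later transactions can forward writes.
SET SESSION transaction_read_only = OFF;
```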

 If a long-running transaction doesn't issue any statements for a substantial period of time, it might exceed the idle timeout period. This period has a default of one minute; you can increase it up to one day. A transaction that exceeds the idle timeout is canceled by the primary cluster. The next statement that you submit receives a timeout error, and Aurora then rolls back the transaction. 

 This type of error can occur in other cases when write forwarding becomes unavailable. For example, Aurora cancels any transactions that use write forwarding if you restart the primary cluster or if you turn off the write forwarding configuration setting. 

## Configuration parameters for write forwarding in Aurora MySQL


 The Aurora cluster parameter groups include settings for the write forwarding feature. Because these are cluster parameters, all DB instances in each cluster have the same values for these variables. Details about these parameters are summarized in the following table, with usage notes after the table.


| Name | Scope | Type | Default value | Valid values | 
| --- | --- | --- | --- | --- | 
| `aurora_fwd_master_idle_timeout` (Aurora MySQL version 2) | Global | unsigned integer | 60 | 1–86,400 | 
| `aurora_fwd_master_max_connections_pct` (Aurora MySQL version 2) | Global | unsigned long integer | 10 | 0–90 | 
| `aurora_fwd_writer_idle_timeout` (Aurora MySQL version 3) | Global | unsigned integer | 60 | 1–86,400 | 
| `aurora_fwd_writer_max_connections_pct` (Aurora MySQL version 3) | Global | unsigned long integer | 10 | 0–90 | 
| `aurora_replica_read_consistency` | Session for version 2 and for version 3 lower than 3.04; Global for version 3.04 and higher | Enum | '' (empty) | EVENTUAL, SESSION, GLOBAL | 
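Because the timeout and connection-percentage settings are cluster parameters, you change them through a custom DB cluster parameter group attached to the primary cluster. The following is a sketch; `my-primary-cluster-params` is a hypothetical parameter group name, and the Aurora MySQL version 2 parameter name is shown (use `aurora_fwd_writer_idle_timeout` for version 3):

```shell
# Raise the forwarded-connection idle timeout from 60 seconds to one hour.
aws rds modify-db-cluster-parameter-group \
  --db-cluster-parameter-group-name my-primary-cluster-params \
  --parameters "ParameterName=aurora_fwd_master_idle_timeout,ParameterValue=3600,ApplyMethod=immediate"
```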

To control incoming write requests from secondary clusters, use these settings on the primary cluster: 
+  `aurora_fwd_master_idle_timeout`, `aurora_fwd_writer_idle_timeout`: The number of seconds the primary cluster waits for activity on a connection that's forwarded from a secondary cluster before closing it. If the session remains idle beyond this period, Aurora cancels the session. 
+  `aurora_fwd_master_max_connections_pct`, `aurora_fwd_writer_max_connections_pct`: The upper limit on database connections that can be used on a writer DB instance to handle queries forwarded from readers. It's expressed as a percentage of the `max_connections` setting for the writer DB instance in the primary cluster. For example, if `max_connections` is 800 and `aurora_fwd_master_max_connections_pct` or `aurora_fwd_writer_max_connections_pct` is 10, then the writer allows a maximum of 80 simultaneous forwarded sessions. These connections come from the same connection pool managed by the `max_connections` setting. 

   This setting applies only on the primary cluster, when one or more secondary clusters have write forwarding enabled. If you decrease the value, existing connections aren't affected. Aurora takes the new value of the setting into account when attempting to create a new connection from a secondary cluster. The default value is 10, representing 10% of the `max_connections` value. If you enable write forwarding on any of the secondary clusters, this setting must have a nonzero value for write operations from secondary clusters to succeed. If the value is zero, the write operations receive the error code `ER_CON_COUNT_ERROR` with the message `Not enough connections on writer to handle your request`. 
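As a quick arithmetic check of the connection-limit rule described above, the following sketch computes the forwarded-session limit from hypothetical `max_connections` and percentage values:

```shell
# Hypothetical settings: max_connections on the writer DB instance, and
# aurora_fwd_writer_max_connections_pct (or aurora_fwd_master_max_connections_pct).
max_connections=800
fwd_pct=10

# Upper limit on simultaneous forwarded sessions that the writer accepts.
max_forwarded=$(( max_connections * fwd_pct / 100 ))
echo "$max_forwarded"
```

These forwarded connections come out of the same pool governed by `max_connections`, so raising the percentage leaves fewer connections available for direct sessions on the writer.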

Setting the `aurora_replica_read_consistency` parameter enables write forwarding for a session. Specify `EVENTUAL`, `SESSION`, or `GLOBAL` for the read consistency level. To learn more about consistency levels, see [Isolation and consistency for write forwarding in Aurora MySQL](#aurora-global-database-write-forwarding-isolation-ams). The following rules apply to this parameter:
+ The default value is '' (empty).
+ Write forwarding is available in a session only if `aurora_replica_read_consistency` is set to `EVENTUAL`, `SESSION`, or `GLOBAL`. This parameter is relevant only on reader instances of secondary clusters that have write forwarding enabled and that are in an Aurora global database.
+ You can't set this variable (from empty to a value) or unset it (from a value back to empty) inside a multistatement transaction. However, you can change it from one valid value (`EVENTUAL`, `SESSION`, or `GLOBAL`) to another during such a transaction.
+ You can't use `SET` for this variable when write forwarding isn't enabled on the secondary cluster.

## Amazon CloudWatch metrics for write forwarding in Aurora MySQL


 The following Amazon CloudWatch metrics apply to the primary cluster when you use write forwarding on one or more secondary clusters. These metrics are all measured on the writer DB instance in the primary cluster. 


| CloudWatch metric | Unit | Description | 
| --- | --- | --- | 
|  `AuroraDMLRejectedMasterFull`  | Count |  The number of forwarded queries that are rejected because the session is full on the writer DB instance. For Aurora MySQL version 2.  | 
|  `AuroraDMLRejectedWriterFull`  | Count |  The number of forwarded queries that are rejected because the session is full on the writer DB instance. For Aurora MySQL version 3.  | 
|  `ForwardingMasterDMLLatency`  | Milliseconds |  Average time to process each forwarded DML statement on the writer DB instance. It doesn't include the time for the secondary cluster to forward the write request, or the time to replicate changes back to the secondary cluster. For Aurora MySQL version 2.  | 
|  `ForwardingMasterDMLThroughput`  | Count per second |  Number of forwarded DML statements processed each second by this writer DB instance. For Aurora MySQL version 2.  | 
|  `ForwardingMasterOpenSessions`  | Count |  Number of forwarded sessions on the writer DB instance. For Aurora MySQL version 2.  | 
|  `ForwardingWriterDMLLatency`  | Milliseconds |  Average time to process each forwarded DML statement on the writer DB instance. It doesn't include the time for the secondary cluster to forward the write request, or the time to replicate changes back to the secondary cluster. For Aurora MySQL version 3.  | 
|  `ForwardingWriterDMLThroughput`  | Count per second |  Number of forwarded DML statements processed each second by this writer DB instance. For Aurora MySQL version 3.  | 
|  `ForwardingWriterOpenSessions`  | Count |  Number of forwarded sessions on the writer DB instance. For Aurora MySQL version 3.  | 

 The following CloudWatch metrics apply to each secondary cluster. These metrics are measured on each reader DB instance in a secondary cluster with write forwarding enabled. 


| CloudWatch metric | Unit | Description | 
| --- | --- | --- | 
|  `ForwardingReplicaDMLLatency`  | Milliseconds | Average response time of forwarded DMLs on the replica. | 
|  `ForwardingReplicaDMLThroughput`  | Count per second | Number of forwarded DML statements processed each second. | 
|  `ForwardingReplicaOpenSessions`  | Count | Number of sessions that are using write forwarding on a reader DB instance. | 
|  `ForwardingReplicaReadWaitLatency`  | Milliseconds |  Average wait time that a `SELECT` statement on a reader DB instance waits to catch up to the primary cluster. The degree to which the reader DB instance waits before processing a query depends on the `aurora_replica_read_consistency` setting.  | 
|  `ForwardingReplicaReadWaitThroughput`  | Count per second | Total number of `SELECT` statements processed each second in all sessions that are forwarding writes. | 
|  `ForwardingReplicaSelectLatency`  | Milliseconds | Average latency of forwarded `SELECT` statements within the monitoring period. | 
|  `ForwardingReplicaSelectThroughput`  | Count per second | Average number of forwarded `SELECT` statements processed each second within the monitoring period. | 

## Aurora MySQL status variables for write forwarding


 The following Aurora MySQL status variables apply to the primary cluster when you use write forwarding on one or more secondary clusters. These metrics are all measured on the writer DB instance in the primary cluster. 


| Aurora MySQL status variable | Unit | Description | 
| --- | --- | --- | 
| `Aurora_fwd_master_dml_stmt_count` | Count | Total number of DML statements forwarded to this writer DB instance. For Aurora MySQL version 2. | 
| `Aurora_fwd_master_dml_stmt_duration` | Microseconds | Total duration of DML statements forwarded to this writer DB instance. For Aurora MySQL version 2. | 
| `Aurora_fwd_master_open_sessions` | Count | Number of forwarded sessions on the writer DB instance. For Aurora MySQL version 2. | 
| `Aurora_fwd_master_select_stmt_count` | Count | Total number of `SELECT` statements forwarded to this writer DB instance. For Aurora MySQL version 2. | 
| `Aurora_fwd_master_select_stmt_duration` | Microseconds | Total duration of `SELECT` statements forwarded to this writer DB instance. For Aurora MySQL version 2. | 
| `Aurora_fwd_writer_dml_stmt_count` | Count | Total number of DML statements forwarded to this writer DB instance. For Aurora MySQL version 3. | 
| `Aurora_fwd_writer_dml_stmt_duration` | Microseconds | Total duration of DML statements forwarded to this writer DB instance. For Aurora MySQL version 3. | 
| `Aurora_fwd_writer_open_sessions` | Count | Number of forwarded sessions on the writer DB instance. For Aurora MySQL version 3. | 
| `Aurora_fwd_writer_select_stmt_count` | Count | Total number of `SELECT` statements forwarded to this writer DB instance. For Aurora MySQL version 3. | 
| `Aurora_fwd_writer_select_stmt_duration` | Microseconds | Total duration of `SELECT` statements forwarded to this writer DB instance. For Aurora MySQL version 3. | 

 The following Aurora MySQL status variables apply to each secondary cluster. These metrics are measured on each reader DB instance in a secondary cluster with write forwarding enabled. 


| Aurora MySQL status variable | Unit | Description | 
| --- | --- | --- | 
| `Aurora_fwd_replica_dml_stmt_count` | Count | Total number of DML statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_dml_stmt_duration` | Microseconds | Total duration of all DML statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_errors_session_limit` | Count | Number of sessions rejected by the primary cluster due to one of the following error conditions: [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-write-forwarding-ams.html) | 
| `Aurora_fwd_replica_open_sessions` | Count | Number of sessions that are using write forwarding on a reader DB instance. | 
| `Aurora_fwd_replica_read_wait_count` | Count | Total number of read-after-write waits on this reader DB instance. | 
| `Aurora_fwd_replica_read_wait_duration` | Microseconds | Total duration of waits due to the read consistency setting on this reader DB instance. | 
| `Aurora_fwd_replica_select_stmt_count` | Count | Total number of `SELECT` statements forwarded from this reader DB instance. | 
| `Aurora_fwd_replica_select_stmt_duration` | Microseconds | Total duration of `SELECT` statements forwarded from this reader DB instance. | 

# Using write forwarding in an Aurora PostgreSQL global database
Using write forwarding in Aurora PostgreSQL

**Topics**
+ [

## Region and version availability of write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-regions-versions-apg)
+ [

## Enabling write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-enabling-apg)
+ [

## Checking if a secondary cluster has write forwarding enabled in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-describing-apg)
+ [

## Application and SQL compatibility with write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-compatibility-apg)
+ [

## Isolation and consistency for write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-isolation-apg)
+ [

## Transaction access modes with write forwarding
](#aurora-global-database-write-forwarding-txns)
+ [

## Running multipart statements with write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-multipart-apg)
+ [

## Configuration parameters for write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-params-apg)
+ [

## Amazon CloudWatch metrics for write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-cloudwatch-apg)
+ [

## Wait events for write forwarding in Aurora PostgreSQL
](#aurora-global-database-write-forwarding-wait-events-apg)

## Region and version availability of write forwarding in Aurora PostgreSQL


 In Aurora PostgreSQL version 16 and higher major versions, global write forwarding is supported in all minor versions. For earlier Aurora PostgreSQL versions, global write forwarding is supported with version 15.4 and higher minor versions, and version 14.9 and higher minor versions. Write forwarding is available in every AWS Region where Aurora PostgreSQL-based global databases are available. 

For more information on version and Region availability of Aurora PostgreSQL global databases, see [Aurora global databases with Aurora PostgreSQL](Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.md#Concepts.Aurora_Fea_Regions_DB-eng.Feature.GlobalDatabase.apg).

## Enabling write forwarding in Aurora PostgreSQL

By default, write forwarding isn't enabled when you add a secondary cluster to an Aurora global database. You can enable write forwarding for your secondary DB cluster while you're creating it or anytime after you create it. If needed, you can disable it later. Enabling or disabling write forwarding doesn't cause downtime or a reboot.

**Note**  
You can use local write forwarding for your applications that have occasional writes and require read-after-write consistency, which is the ability to read the latest write in a transaction. 

### Console


In the console, you can enable or disable write forwarding when you create or modify a secondary DB cluster.

#### Enabling or disabling write forwarding when creating a secondary DB cluster


When you create a new secondary DB cluster, you enable write forwarding by selecting the **Turn on global write forwarding** check box under **Read replica write forwarding**. Or clear the check box to disable it. To create a secondary DB cluster, follow the instructions for your DB engine in [Creating an Amazon Aurora DB cluster](Aurora.CreateInstance.md).

The following screenshot shows the **Read replica write forwarding** section with the **Turn on global write forwarding** check box selected.

![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-enable-write-forwarding.png)


#### Enabling or disabling write forwarding when modifying a secondary DB cluster


In the console, you can modify a secondary DB cluster to enable or disable write forwarding.

**To enable or disable write forwarding for a secondary DB cluster by using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose **Databases**.

1. Choose the secondary DB cluster, and choose **Modify**.

1. In the **Read replica write forwarding** section, check or clear the **Turn on global write forwarding** check box.

1. Choose **Continue**.

1. For **Schedule modifications**, choose **Apply immediately**. If you choose **Apply during the next scheduled maintenance window**, Aurora ignores this setting and turns on write forwarding immediately.

1. Choose **Modify cluster**.

### AWS CLI


 To enable write forwarding by using the AWS CLI, use the `--enable-global-write-forwarding` option. This option works when you create a new secondary cluster using the [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html) command. It also works when you modify an existing secondary cluster using the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) command. It requires that the global database uses an Aurora version that supports write forwarding. You can disable write forwarding by using the `--no-enable-global-write-forwarding` option with these same CLI commands. 

The following procedures describe how to enable or disable write forwarding for a secondary DB cluster in your global cluster by using the AWS CLI.

**To enable or disable write forwarding for an existing secondary DB cluster**
+ Call the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command and supply the following values:
  + `--db-cluster-identifier` – The name of the DB cluster.
  + `--enable-global-write-forwarding` to turn on or `--no-enable-global-write-forwarding` to turn off.

  The following example enables write forwarding for DB cluster `sample-secondary-db-cluster`.

  For Linux, macOS, or Unix:

  ```
  aws rds modify-db-cluster \
      --db-cluster-identifier sample-secondary-db-cluster \
      --enable-global-write-forwarding
  ```

  For Windows:

  ```
  aws rds modify-db-cluster ^
      --db-cluster-identifier sample-secondary-db-cluster ^
      --enable-global-write-forwarding
  ```

### RDS API


 To enable write forwarding using the Amazon RDS API, set the `EnableGlobalWriteForwarding` parameter to `true`. This parameter works when you create a new secondary cluster using the [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html) operation. It also works when you modify an existing secondary cluster using the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation. It requires that the global database uses an Aurora version that supports write forwarding. You can disable write forwarding by setting the `EnableGlobalWriteForwarding` parameter to `false`. 
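The same parameter is available through the AWS SDKs. The following is a minimal sketch of the request for the AWS SDK for Python (boto3); the cluster identifier is a placeholder, and the call itself is shown commented out because it requires AWS credentials:

```python
# Request parameters for ModifyDBCluster; the cluster name is a placeholder.
# Set EnableGlobalWriteForwarding to False to disable write forwarding instead.
params = {
    "DBClusterIdentifier": "sample-secondary-db-cluster",
    "EnableGlobalWriteForwarding": True,
}

# With boto3, the call would be:
# import boto3
# boto3.client("rds").modify_db_cluster(**params)
print(params)
```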

## Checking if a secondary cluster has write forwarding enabled in Aurora PostgreSQL


 To determine whether you can use write forwarding from a secondary cluster, you can check whether the cluster has the attribute `"GlobalWriteForwardingStatus": "enabled"`. 

 In the AWS Management Console, you see **Read replica write forwarding** on the **Configuration** tab of the details page for the cluster. To see the status of the global write forwarding setting for all of your clusters, run the following AWS CLI command. 

 A secondary cluster shows the value `"enabled"` or `"disabled"` to indicate whether write forwarding is turned on or off. A value of `null` indicates that write forwarding isn't available for that cluster: either the cluster isn't part of a global database, or it is the primary cluster rather than a secondary cluster. The value can also be `"enabling"` or `"disabling"` if write forwarding is in the process of being turned on or off. 

**Example**  

```
aws rds describe-db-clusters --query '*[].{DBClusterIdentifier:DBClusterIdentifier,GlobalWriteForwardingStatus:GlobalWriteForwardingStatus}'
[
    {
        "GlobalWriteForwardingStatus": "enabled",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-1"
    },
    {
        "GlobalWriteForwardingStatus": "disabled",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-2"
    },
    {
        "GlobalWriteForwardingStatus": null,
        "DBClusterIdentifier": "non-global-cluster"
    }
]
```
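Output in that shape can also be filtered locally. The following sketch (plain Python, with the sample output above inlined) collects the clusters that have write forwarding enabled:

```python
import json

# Sample describe-db-clusters query output, in the same shape as above
output = '''
[
    {"GlobalWriteForwardingStatus": "enabled",
     "DBClusterIdentifier": "aurora-write-forwarding-test-replica-1"},
    {"GlobalWriteForwardingStatus": "disabled",
     "DBClusterIdentifier": "aurora-write-forwarding-test-replica-2"},
    {"GlobalWriteForwardingStatus": null,
     "DBClusterIdentifier": "non-global-cluster"}
]
'''

clusters = json.loads(output)
# null (None after parsing) means write forwarding isn't available for the cluster
enabled = [c["DBClusterIdentifier"] for c in clusters
           if c["GlobalWriteForwardingStatus"] == "enabled"]
print(enabled)  # ['aurora-write-forwarding-test-replica-1']
```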

 To find all secondary clusters that have global write forwarding enabled, run the following command. This command also returns the cluster's reader endpoint. You use the secondary cluster's reader endpoint when you use write forwarding from the secondary to the primary in your Aurora global database. 

**Example**  

```
aws rds describe-db-clusters --query 'DBClusters[].{DBClusterIdentifier:DBClusterIdentifier,GlobalWriteForwardingStatus:GlobalWriteForwardingStatus,ReaderEndpoint:ReaderEndpoint} | [?GlobalWriteForwardingStatus == `enabled`]'
[
    {
        "GlobalWriteForwardingStatus": "enabled",
        "ReaderEndpoint": "aurora-write-forwarding-test-replica-1.cluster-ro-cnpexample.us-west-2.rds.amazonaws.com",
        "DBClusterIdentifier": "aurora-write-forwarding-test-replica-1"
    }
]
```

## Application and SQL compatibility with write forwarding in Aurora PostgreSQL


 Certain statements aren't allowed or can produce stale results when you use them in a global database with write forwarding. In addition, user-defined functions and user-defined procedures aren't supported. For these reasons, the `EnableGlobalWriteForwarding` setting is turned off by default for secondary clusters. Before turning it on, check to make sure that your application code isn't affected by any of these restrictions.

 You can use the following kinds of SQL statements with write forwarding: 
+  Data manipulation language (DML) statements, such as `INSERT`, `DELETE`, and `UPDATE`
+  `SELECT FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE }` statements
+  `PREPARE` and `EXECUTE` statements
+  `EXPLAIN` statements with the statements in this list

 The following kinds of SQL statements aren't supported with write forwarding: 
+  Data definition language (DDL) statements 
+  `ANALYZE` 
+  `CLUSTER` 
+  `COPY` 
+ Cursors – Cursors aren't supported, so make sure to close them before using write forwarding.
+  `GRANT`, `REVOKE`, `REASSIGN OWNED`, and `SECURITY LABEL` 
+  `LOCK` 
+  `SAVEPOINT` statements 
+  `SELECT INTO` 
+  `SET CONSTRAINTS` 
+  `TRUNCATE` 
+  `VACUUM` 

## Isolation and consistency for write forwarding in Aurora PostgreSQL


 In sessions that use write forwarding, you can use the `REPEATABLE READ` and `READ COMMITTED` isolation levels. However, the `SERIALIZABLE` isolation level isn't supported.

You can control the degree of read consistency on a secondary cluster. The read consistency level determines how long the secondary cluster waits before each read operation to ensure that some or all changes are replicated from the primary cluster. You can adjust the read consistency level to ensure that all forwarded write operations from your session are visible in the secondary cluster before any subsequent queries. You can also use this setting to ensure that queries on the secondary cluster always see the most current updates from the primary cluster, even those submitted by other sessions or other clusters. To specify this type of behavior for your application, choose the appropriate value for the `apg_write_forward.consistency_mode` parameter. This parameter has an effect only on secondary clusters that have write forwarding enabled.

**Note**  
For the `apg_write_forward.consistency_mode` parameter, you can specify the value `SESSION`, `EVENTUAL`, `GLOBAL`, or `OFF`. By default, the value is set to `SESSION`. Setting the value to `OFF` disables write forwarding in the session.

 As you increase the consistency level, your application spends more time waiting for changes to be propagated between AWS Regions. You can choose the balance between fast response time and ensuring that changes made in other locations are fully available before your queries run.

With each available consistency mode setting, the effect is as follows:
+ `SESSION` – All queries in a secondary AWS Region that uses write forwarding see the results of all changes made in that session. The changes are visible regardless of whether the transaction is committed. If necessary, the query waits for the results of forwarded write operations to be replicated to the current Region. It doesn't wait for updated results from write operations performed in other Regions or in other sessions within the current Region. 
+ `EVENTUAL` – Queries in a secondary AWS Region that uses write forwarding might see data that is slightly stale due to replication lag. Results of write operations in the same session aren't visible until the write operation is performed on the primary Region and replicated to the current Region. The query doesn't wait for the updated results to be available. Thus, it might retrieve the older data or the updated data, depending on the timing of the statements and the amount of replication lag. 
+ `GLOBAL` – A session in a secondary AWS Region sees changes made by that session. It also sees all committed changes from both the primary AWS Region and other secondary AWS Regions. Each query might wait for a period that varies depending on the amount of session lag. The query proceeds when the secondary cluster is up-to-date with all committed data from the primary cluster, as of the time that the query began. 
+ `OFF` – Write forwarding is disabled in the session.

 For more information about all the parameters involved with write forwarding, see [Configuration parameters for write forwarding in Aurora PostgreSQL](#aurora-global-database-write-forwarding-params-apg).

## Transaction access modes with write forwarding


If the transaction access mode is set to read only, write forwarding isn't used. You can set the access mode to read write only while you're connected to a DB cluster and session that have write forwarding enabled.

For more information on the transaction access modes, see [SET TRANSACTION](https://www.postgresql.org/docs/current/sql-set-transaction.html).

## Running multipart statements with write forwarding in Aurora PostgreSQL


 A DML statement might consist of multiple parts, such as an `INSERT ... SELECT` statement or a `DELETE ... WHERE` statement. In this case, the entire statement is forwarded to the primary cluster and run there.

## Configuration parameters for write forwarding in Aurora PostgreSQL


 The Aurora cluster parameter groups include settings for the write forwarding feature. Because these are cluster parameters, all DB instances in each cluster have the same values for these variables. Details about these parameters are summarized in the following table, with usage notes after the table.


|  Name  |  Scope  |  Type  |  Default value  |  Valid values  | 
| --- | --- | --- | --- | --- | 
|  `apg_write_forward.connect_timeout`  |  Session  |  seconds  |  30  |  0–2147483647  | 
|  `apg_write_forward.consistency_mode`  |  Session  |  enum  |  SESSION  |  SESSION, EVENTUAL, GLOBAL, OFF  | 
|  `apg_write_forward.idle_in_transaction_session_timeout`  |  Session  |  milliseconds  |  86400000  |  0–2147483647  | 
|  `apg_write_forward.idle_session_timeout`  |  Session  |  milliseconds  |  300000  |  0–2147483647  | 
|  `apg_write_forward.max_forwarding_connections_percent`  |  Global  |  int  |  25  |  1–100  | 

The `apg_write_forward.max_forwarding_connections_percent` parameter is the upper limit on database connection slots that can be used to handle queries forwarded from readers. It is expressed as a percentage of the `max_connections` setting for the writer DB instance in the primary cluster. For example, if `max_connections` is `800` and `apg_write_forward.max_forwarding_connections_percent` is `10`, then the writer allows a maximum of 80 simultaneous forwarded sessions. These connections come from the same connection pool managed by the `max_connections` setting. This setting applies only on the primary cluster when at least one secondary cluster has write forwarding enabled.
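The arithmetic from the example above can be sketched directly. Note that integer floor rounding is an assumption here; Aurora's exact rounding behavior isn't documented:

```python
def max_forwarded_sessions(max_connections: int, forwarding_percent: int) -> int:
    # Upper limit on connection slots usable for forwarded queries, as a
    # percentage of the writer's max_connections (floor rounding assumed)
    return (max_connections * forwarding_percent) // 100

print(max_forwarded_sessions(800, 10))  # 80, matching the example above
print(max_forwarded_sessions(800, 25))  # 200 with the default of 25 percent
```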

Use the following settings on the secondary cluster:
+ `apg_write_forward.consistency_mode` – A session-level parameter that controls the degree of read consistency on the secondary cluster. Valid values are `SESSION`, `EVENTUAL`, `GLOBAL`, or `OFF`. By default, the value is set to `SESSION`. Setting the value to `OFF` disables write forwarding in the session. To learn more about consistency levels, see [Isolation and consistency for write forwarding in Aurora PostgreSQL](#aurora-global-database-write-forwarding-isolation-apg). This parameter is relevant only in reader instances of secondary clusters that have write forwarding enabled and that are in an Aurora global database.
+ `apg_write_forward.connect_timeout` – The maximum number of seconds the secondary cluster waits when establishing a connection to the primary cluster before giving up. A value of `0` means to wait indefinitely.
+ `apg_write_forward.idle_in_transaction_session_timeout` – The number of milliseconds the primary cluster waits for activity on a forwarded connection from a secondary cluster that has an open transaction before closing it. If the session remains idle in transaction beyond this period, Aurora terminates the session. A value of `0` disables the timeout.
+ `apg_write_forward.idle_session_timeout` – The number of milliseconds the primary cluster waits for activity on a connection that's forwarded from a secondary cluster before closing it. If the session remains idle beyond this period, Aurora terminates the session. A value of `0` disables the timeout.
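The millisecond defaults above are easier to reason about in larger units; a quick conversion:

```python
# Default timeout values from the parameter table, in milliseconds
defaults_ms = {
    "apg_write_forward.idle_in_transaction_session_timeout": 86_400_000,
    "apg_write_forward.idle_session_timeout": 300_000,
}

for name, ms in defaults_ms.items():
    print(f"{name}: {ms / 60_000:g} minutes")
# idle_in_transaction default: 1440 minutes (24 hours); idle_session default: 5 minutes
```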

## Amazon CloudWatch metrics for write forwarding in Aurora PostgreSQL


 The following Amazon CloudWatch metrics apply to the primary cluster when you use write forwarding on one or more secondary clusters. These metrics are all measured on the writer DB instance in the primary cluster.


| CloudWatch Metric | Units and description | 
| --- | --- | 
| `AuroraForwardingWriterDMLThroughput`  | Count (per second). Number of forwarded DML statements processed each second by this writer DB instance. | 
|  `AuroraForwardingWriterOpenSessions`  | Count. Number of open sessions on this writer DB instance processing forwarded queries. | 
|  `AuroraForwardingWriterTotalSessions`  | Count. Total number of forwarded sessions on this writer DB instance. | 

 The following CloudWatch metrics apply to each secondary cluster. These metrics are measured on each reader DB instance in a secondary cluster with write forwarding enabled. 


| CloudWatch Metric | Unit and description | 
| --- | --- | 
|  `AuroraForwardingReplicaCommitThroughput` |  Count (per second). Number of commits in sessions forwarded by this replica each second.  | 
|  `AuroraForwardingReplicaDMLLatency` |  Milliseconds. Average response time in milliseconds of forwarded DMLs on replica.  | 
|  `AuroraForwardingReplicaDMLThroughput` |  Count (per second). Number of forwarded DML statements processed on this replica each second.  | 
|  `AuroraForwardingReplicaErrorSessionsLimit` |  Count. Number of sessions rejected by the primary cluster because the limit for max connections or max write forward connections was reached.  | 
|  `AuroraForwardingReplicaOpenSessions`  |  Count. The number of sessions that are using write forwarding on a replica instance.  | 
|  `AuroraForwardingReplicaReadWaitLatency` | Milliseconds. Average wait time in milliseconds that the replica waits to be consistent with the LSN of the primary cluster. The degree to which the reader DB instance waits depends on the `apg_write_forward.consistency_mode` setting. For information about this setting, see [Configuration parameters for write forwarding in Aurora PostgreSQL](#aurora-global-database-write-forwarding-params-apg).  | 

## Wait events for write forwarding in Aurora PostgreSQL


Amazon Aurora generates the following wait events when you use write forwarding with Aurora PostgreSQL.

**Topics**
+ [

### IPC:AuroraWriteForwardConnect
](#apg-waits.ipcaurorawriteforwardconnect)
+ [

### IPC:AuroraWriteForwardConsistencyPoint
](#apg-waits.ipcaurorawriteforwardconsistencypoint)
+ [

### IPC:AuroraWriteForwardExecute
](#apg-waits.ipc:aurorawriteforwardexecute)
+ [

### IPC:AuroraWriteForwardGetGlobalConsistencyPoint
](#apg-waits.ipc:aurorawriteforwardgetglobalconsistencypoint)
+ [

### IPC:AuroraWriteForwardXactAbort
](#apg-waits.ipc:aurorawriteforwardxactabort)
+ [

### IPC:AuroraWriteForwardXactCommit
](#apg-waits.ipc:aurorawriteforwardxactcommit)
+ [

### IPC:AuroraWriteForwardXactStart
](#apg-waits.ipc:aurorawriteforwardxactstart)

### IPC:AuroraWriteForwardConnect


The `IPC:AuroraWriteForwardConnect` event occurs when a backend process on the secondary DB cluster is waiting for a connection to the writer node of the primary DB cluster to be opened.

**Likely causes of increased waits**

This event increases as the number of connection attempts from a secondary Region's reader node to the writer node of the primary DB cluster increases.

**Actions**

Reduce the number of simultaneous connections from a secondary node to the primary Region's writer node.

### IPC:AuroraWriteForwardConsistencyPoint


The `IPC:AuroraWriteForwardConsistencyPoint` event measures how long a query from a node on the secondary DB cluster waits for the results of forwarded write operations to be replicated to the current Region. This event is generated only if the session-level parameter `apg_write_forward.consistency_mode` is set to one of the following:
+ `SESSION` – queries on a secondary node wait for the results of all changes made in that session.
+ `GLOBAL` – queries on a secondary node wait for the results of changes made by that session, plus all committed changes from both the primary Region and other secondary Regions in the global cluster.

For more information about the `apg_write_forward.consistency_mode` parameter settings, see [Configuration parameters for write forwarding in Aurora PostgreSQL](#aurora-global-database-write-forwarding-params-apg).

**Likely causes of increased waits**

Common causes for longer wait times include the following:
+ Increased replica lag, as measured by the Amazon CloudWatch `ReplicaLag` metric. For more information about this metric, see [Monitoring Aurora PostgreSQL replication](AuroraPostgreSQL.Replication.md#AuroraPostgreSQL.Replication.Monitoring).
+ Increased load on the primary Region's writer node or on the secondary node.

**Actions**

Change your consistency mode, depending on your application's requirements.

### IPC:AuroraWriteForwardExecute


The `IPC:AuroraWriteForwardExecute` event occurs when a backend process on the secondary DB cluster is waiting for a forwarded query to complete and obtain results from the writer node of the primary DB cluster.

**Likely causes of increased waits**

Common causes for increased waits include the following:
+ Fetching a large number of rows from the primary Region's writer node.
+ Increased network latency between the secondary node and primary Region's writer node increases the time it takes the secondary node to receive data from the writer node.
+ Increased load on the secondary node can delay transmission of the query request from the secondary node to the primary Region's writer node.
+ Increased load on the primary Region's writer node can delay transmission of data from the writer node to the secondary node.

**Actions**

We recommend different actions depending on the causes of your wait event.
+ Optimize queries to retrieve only necessary data.
+ Optimize data manipulation language (DML) operations to only modify necessary data.
+ If the secondary node or primary Region's writer node is constrained by CPU or network bandwidth, consider changing it to an instance type with more CPU capacity or more network bandwidth.

### IPC:AuroraWriteForwardGetGlobalConsistencyPoint


The `IPC:AuroraWriteForwardGetGlobalConsistencyPoint` event occurs when a backend process on the secondary DB cluster that's using the GLOBAL consistency mode is waiting to obtain the global consistency point from the writer node before executing a query.

**Likely causes of increased waits**

Common causes for increased waits include the following:
+ Increased network latency between the secondary node and primary Region's writer node increases the time it takes the secondary node to receive data from the writer node.
+ Increased load on the secondary node can delay transmission of the query request from the secondary node to the primary Region's writer node.
+ Increased load on the primary Region's writer node can delay transmission of data from the writer node to the secondary node.

**Actions**

We recommend different actions depending on the causes of your wait event.
+ Change your consistency mode, depending on your application's requirements.
+ If the secondary node or primary Region's writer node is constrained by CPU or network bandwidth, consider changing it to an instance type with more CPU capacity or more network bandwidth.

### IPC:AuroraWriteForwardXactAbort


The `IPC:AuroraWriteForwardXactAbort` event occurs when a backend process on the secondary DB cluster is waiting for the result of a remote cleanup query. Cleanup queries are issued to return the process to the appropriate state after a write-forwarded transaction is aborted. Amazon Aurora performs them either because an error was found or because a user issued an explicit `ABORT` command or canceled a running query.

**Likely causes of increased waits**

Common causes for increased waits include the following:
+ Increased network latency between the secondary node and primary Region's writer node increases the time it takes the secondary node to receive data from the writer node.
+ Increased load on the secondary node can delay transmission of the cleanup query request from the secondary node to the primary Region's writer node.
+ Increased load on the primary Region's writer node can delay transmission of data from the writer node to the secondary node.

**Actions**

We recommend different actions depending on the causes of your wait event.
+ Investigate the cause of the aborted transaction.
+ If the secondary node or primary Region's writer node is constrained by CPU or network bandwidth, consider changing it to an instance type with more CPU capacity or more network bandwidth.

### IPC:AuroraWriteForwardXactCommit


The `IPC:AuroraWriteForwardXactCommit` event occurs when a backend process on the secondary DB cluster is waiting for the result of a forwarded commit transaction command.

**Likely causes of increased waits**

Common causes for increased waits include the following:
+ Increased network latency between the secondary node and primary Region's writer node increases the time it takes the secondary node to receive data from the writer node.
+ Increased load on the secondary node can delay transmission of the query request from the secondary node to the primary Region's writer node.
+ Increased load on the primary Region's writer node can delay transmission of data from the writer node to the secondary node.

**Actions**

If the secondary node or primary Region's writer node is constrained by CPU or network bandwidth, consider changing it to an instance type with more CPU capacity or more network bandwidth.

### IPC:AuroraWriteForwardXactStart


The `IPC:AuroraWriteForwardXactStart` event occurs when a backend process on the secondary DB cluster is waiting for the result of a forwarded start transaction command.

**Likely causes of increased waits**

Common causes for increased waits include the following:
+ Increased network latency between the secondary node and primary Region's writer node increases the time it takes the secondary node to receive data from the writer node.
+ Increased load on the secondary node can delay transmission of the query request from the secondary node to the primary Region's writer node.
+ Increased load on the primary Region's writer node can delay transmission of data from the writer node to the secondary node.

**Actions**

If the secondary node or primary Region's writer node is constrained by CPU or network bandwidth, consider changing it to an instance type with more CPU capacity or more network bandwidth.

# Using switchover or failover in Amazon Aurora Global Database
Switchover or failover for Aurora Global Database

 The Aurora Global Database feature provides more business continuity and disaster recovery (BCDR) protection than the standard [high availability](Concepts.AuroraHighAvailability.md) provided by an Aurora DB cluster in a single AWS Region. By using Aurora Global Database, you can plan for fast recovery from rare, unplanned Regional disasters or complete service-level outages. 

 You can consult the following guidance and procedures to plan, test, and implement your BCDR strategy using the Aurora Global Database feature. 

**Topics**
+ [

## Planning for business continuity and disaster recovery
](#aurora-global-database-bcdr-planning)
+ [

## Performing switchovers for Amazon Aurora global databases
](#aurora-global-database-disaster-recovery.managed-failover)
+ [

## Recovering an Amazon Aurora global database from an unplanned outage
](#aurora-global-database-failover)
+ [

## Managing RPOs for Aurora PostgreSQL–based global databases
](#aurora-global-database-manage-recovery)
+ [

# Cross-Region resiliency for Global Database secondary clusters
](aurora-global-database-secondary-availability.md)

## Planning for business continuity and disaster recovery
Planning for BCDR

 To plan your business continuity and disaster recovery strategy, it's helpful to understand the following industry terminology and how those terms relate to Aurora Global Database features. 

 Recovery from disaster is typically driven by the following two business objectives: 
+  **Recovery time objective (RTO)** – The time it takes a system to return to a working state after a disaster or service outage. In other words, RTO measures downtime. For Aurora Global Database, RTO can be on the order of minutes. 
+  **Recovery point objective (RPO)** – The amount of data that can be lost (measured in time) after a disaster or service outage. This data loss is usually due to asynchronous replication lag. For an Aurora global database, RPO is typically measured in seconds. With an Aurora PostgreSQL–based global database, you can use the `rds.global_db_rpo` parameter to set and track the upper bound on RPO, but doing so might affect transaction processing on the primary cluster's writer node. For more information, see [Managing RPOs for Aurora PostgreSQL–based global databases](#aurora-global-database-manage-recovery). 
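Because data loss at failover is bounded by the replication lag at the moment of failure, a rough worst-case RPO estimate follows directly from a lag measurement. This is an illustrative sketch, not an Aurora API; the lag value is hypothetical:

```python
def worst_case_rpo_seconds(replica_lag_ms: float) -> float:
    # At failover, unreplicated data is bounded by the asynchronous
    # replication lag observed just before the outage
    return replica_lag_ms / 1000.0

print(worst_case_rpo_seconds(800))  # 0.8 seconds of potential data loss
```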

 Performing a switchover or failover with Aurora Global Database involves promoting a secondary DB cluster to be the primary DB cluster. The term "regional outage" is often used to describe a variety of failure scenarios. A worst-case scenario could be a widespread outage from a catastrophic event that affects hundreds of square miles. However, most outages are much more localized, affecting only a small subset of cloud services or customer systems. Consider the full scope of the outage to make sure that cross-Region failover is the proper solution and to choose the appropriate failover method for the situation. Whether you should use the failover or switchover approach depends on the specific outage scenario: 
+  **Failover** – Use this approach to recover from an unplanned outage. With this approach, you perform a cross-Region failover to one of the secondary DB clusters in your Aurora global database. The RPO for this approach is typically a non-zero value measured in seconds. The amount of data loss depends on the Aurora global database replication lag across the AWS Regions at the time of the failure. To learn more, see [Recovering an Amazon Aurora global database from an unplanned outage](#aurora-global-database-failover). 
+  **Switchover** – This operation was previously called "managed planned failover". Use this approach for controlled scenarios, such as operational maintenance and other planned operational procedures where all the Aurora clusters and other services they interact with are in a healthy state. Because this feature synchronizes secondary DB clusters with the primary before making any other changes, RPO is 0 (no data loss). To learn more, see [Performing switchovers for Amazon Aurora global databases](#aurora-global-database-disaster-recovery.managed-failover). 

**Note**  
 Before you can perform a switchover or failover to a headless secondary Aurora DB cluster, you must add a DB instance to it. For more information about headless DB clusters, see [Creating a headless Aurora DB cluster in a secondary Region](aurora-global-database-attach.console.headless.md). 

## Performing switchovers for Amazon Aurora global databases
Performing switchovers<a name="planned_failover"></a><a name="switchover"></a>

**Note**  
 Switchovers were previously called **managed planned failovers**. 

 By using switchovers, you can change the Region of your primary cluster on a routine basis. This approach is intended for controlled scenarios, such as operational maintenance and other planned operational procedures. 

 There are three common use cases for using switchovers. 
+  For "regional rotation" requirements imposed on specific industries. For example, financial services regulations might require tier-0 systems to switch to a different Region for several months to ensure that disaster recovery procedures are regularly exercised. 
+  For multi-Region "follow-the-sun" applications. For example, a business might want to provide lower latency writes in different Regions based on business hours across different time zones. 
+  As a zero-data-loss method to fail back to the original primary Region after a failover. 

**Note**  
 Switchovers are designed to be used on an Aurora global database where all the Aurora clusters and other services they interact with are in a healthy state. To recover from an unplanned outage, follow the appropriate procedure in [Recovering an Amazon Aurora global database from an unplanned outage](#aurora-global-database-failover).   
 You can perform managed cross-Region switchovers with Aurora Global Database only if the primary and secondary DB clusters have the same major and minor engine versions. Depending on the engine and engine version, the patch levels might also need to be identical, or they might be allowed to differ. For a list of engines and engine versions that allow these operations between primary and secondary clusters with different patch levels, see [Patch level compatibility for managed cross-Region switchovers and failovers](aurora-global-database-upgrade.md#aurora-global-database-upgrade.minor.incompatibility). Before you begin the switchover, check the engine versions in your global cluster to make sure that they support managed cross-Region switchover, and upgrade them if needed. 

 During a switchover, Aurora promotes the cluster in your chosen secondary Region to be the primary cluster. The switchover mechanism maintains your global database's existing replication topology: it still has the same number of Aurora clusters in the same Regions. Before Aurora starts the switchover process, it waits for the target secondary cluster to be fully synchronized with the primary Region cluster. Then, the DB cluster in the primary Region becomes read-only. The chosen secondary cluster promotes one of its read-only nodes to full writer status, allowing that cluster to assume the role of primary cluster. Because the target secondary cluster was synchronized with the primary at the beginning of the process, the new primary continues operations for the Aurora global database without losing any data. Your database is unavailable for a short time while the primary and selected secondary clusters assume their new roles. 

**Note**  
To manage replication slots for Aurora PostgreSQL after performing a switchover, see [Managing logical slots for Aurora PostgreSQL](AuroraPostgreSQL.Replication.Logical-monitoring.md#AuroraPostgreSQL.Replication.Logical.Configure.managing-logical-slots).

 To optimize application availability, we recommend that you do the following before using this feature: 
+  Perform this operation during nonpeak hours or at another time when writes to the primary DB cluster are minimal. 
+  Check lag times for all secondary Aurora DB clusters in the Aurora global database. For all Aurora PostgreSQL-based global databases, and for Aurora MySQL-based global databases running engine versions 3.04.0 and higher or 2.12.0 and higher, use Amazon CloudWatch to view the `AuroraGlobalDBRPOLag` metric for all secondary DB clusters. For lower minor versions of Aurora MySQL-based global databases, view the `AuroraGlobalDBReplicationLag` metric instead. These metrics show you how far (in milliseconds) each secondary cluster's replication lags behind the primary DB cluster. This value is directly proportional to the time that Aurora takes to complete the switchover: the larger the lag, the longer the switchover takes. When you examine these metrics, do so from the current primary cluster. 

   For more information about CloudWatch metrics for Aurora, see [Cluster-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.clusters). 
+  The secondary DB cluster that's promoted during a switchover might have different configuration settings than the old primary DB cluster. We recommend that you keep the following types of configuration settings consistent across all the clusters in your Aurora global database. Doing so helps to minimize performance issues, workload incompatibilities, and other anomalous behavior after a switchover. 
  +  **Configure Aurora DB cluster parameter group for the new primary, if necessary** – When you promote a secondary DB cluster to take over the primary role, the parameter group from the secondary might be configured differently than for the primary. If so, modify the promoted secondary DB cluster's parameter group to conform to your primary cluster's settings. To learn how, see [Modifying parameters for an Aurora global database](aurora-global-database-modifying.parameters.md). 
  +  **Configure monitoring tools and options, such as Amazon CloudWatch Events and alarms** – Configure the promoted DB cluster with the same logging ability, alarms, and so on, as needed for the global database. As with parameter groups, configuration for these features isn't inherited from the primary during the switchover process. Some CloudWatch metrics, such as replication lag, are available only for secondary Regions. Thus, a switchover changes how you view those metrics and set alarms on them, and could require changes to any predefined dashboards. For more information about Aurora DB clusters and monitoring, see [Monitoring Amazon Aurora metrics with Amazon CloudWatch](monitoring-cloudwatch.md). 
  +  **Configure integrations with other AWS services** – If your Aurora global database integrates with AWS services, such as AWS Secrets Manager, AWS Identity and Access Management, Amazon S3, and AWS Lambda, make sure to configure your integrations with these services as needed. For more information about integrating Aurora global databases with IAM, Amazon S3, and Lambda, see [Using Amazon Aurora global databases with other AWS services](aurora-global-database-interop.md). To learn more about Secrets Manager, see [How to automate replication of secrets in AWS Secrets Manager across AWS Regions](https://aws.amazon.com/blogs/security/how-to-automate-replication-of-secrets-in-aws-secrets-manager-across-aws-regions/). 

 If you are using the Aurora Global Database writer endpoint, you don't need to change the connection settings in your application. Verify that the DNS changes have propagated and that you can connect and perform write operations on the new primary cluster. Then you can resume full operation of your application. 

 Suppose that your application connections use the cluster endpoint of the old primary cluster instead of the global writer endpoint. In that case, make sure to change your application connection settings to use the cluster endpoint of the new primary cluster. If you accepted the provided names when you created the Aurora global database, you can change the endpoint by removing the `-ro` from the promoted cluster's endpoint string in your application. For example, the secondary cluster's endpoint `my-global.cluster-ro-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` becomes `my-global.cluster-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` when that cluster is promoted to primary. 
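
The endpoint change described above is a simple string rewrite. As a sketch (the endpoint value is the hypothetical example from this section, not one of your actual endpoints):

```python
def promoted_endpoint(reader_endpoint: str) -> str:
    """Derive the writer (cluster) endpoint from a promoted secondary
    cluster's endpoint by dropping the first "-ro" label."""
    # Replace only the first occurrence so a hostname that happens to
    # contain "-ro" elsewhere isn't mangled.
    return reader_endpoint.replace("-ro-", "-", 1)

secondary = "my-global.cluster-ro-aaaaaabbbbbb.us-west-1.rds.amazonaws.com"
print(promoted_endpoint(secondary))
# my-global.cluster-aaaaaabbbbbb.us-west-1.rds.amazonaws.com
```

In practice, prefer the global writer endpoint over rewriting endpoints in application code, because it stays stable across switchovers.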

 If you're using RDS Proxy, make sure to redirect your application's write operations to the appropriate read/write endpoint of the proxy that's associated with the new primary cluster. This proxy endpoint might be the default endpoint or a custom read/write endpoint. For more information, see [How RDS Proxy endpoints work with global databases](rds-proxy-gdb.md#rds-proxy-gdb.endpoints). 

 You can perform an Aurora Global Database switchover using the AWS Management Console, the AWS CLI, or the RDS API. 

### Console


**To perform the switchover on your Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  Choose **Databases** and find the Aurora global database where you intend to perform the switchover. 

1.  Choose **Switch over or fail over global database** from the **Actions** menu.   
![\[The Databases list with the Actions menu open showing the Switch over or fail over global database option.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-switchover-1.png)

1.  Choose **Switchover**.   
![\[The Switch over or fail over global database dialog, with Failover (allow data loss) selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-switchover-2.png)

1.  For **New primary cluster**, choose an active cluster in one of your secondary AWS Regions to be the new primary cluster. 

1.  Choose **Confirm**. 

 When the switchover completes, you can see the Aurora DB clusters and their current roles in the **Databases** list, as shown in the following image. 

![\[Showing the Databases list with the global database selected. The selected secondary cluster now shows as having the primary cluster role and the old primary has the role of a secondary cluster.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-switchover-3.png)


### AWS CLI


 **To perform the switchover on an Aurora global database** 

 Use the `[switchover-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/switchover-global-cluster.html)` CLI command to perform a switchover for Aurora Global Database. With the command, pass values for the following parameters. 
+  `--region` – Specify the AWS Region where the primary DB cluster of the Aurora global database is running. 
+  `--global-cluster-identifier` – Specify the name of your Aurora global database. 
+  `--target-db-cluster-identifier` – Specify the Amazon Resource Name (ARN) of the Aurora DB cluster that you want to promote to be the primary for the Aurora global database. 

For Linux, macOS, or Unix:

```
aws rds --region region_of_primary \
   switchover-global-cluster --global-cluster-identifier global_database_id \
   --target-db-cluster-identifier arn_of_secondary_to_promote
```

For Windows:

```
aws rds --region region_of_primary ^
   switchover-global-cluster --global-cluster-identifier global_database_id ^
   --target-db-cluster-identifier arn_of_secondary_to_promote
```

### RDS API


 To perform a switchover for Aurora Global Database, run the [SwitchoverGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_SwitchoverGlobalCluster.html) API operation. 

## Recovering an Amazon Aurora global database from an unplanned outage
Recovering from an unplanned outage<a name="unplanned_failover"></a><a name="failover"></a>

 On rare occasions, your Aurora global database might experience an unexpected outage in its primary AWS Region. If this happens, your primary Aurora DB cluster and its writer node aren't available, and replication between the primary and secondary DB clusters stops. To minimize both downtime (recovery time objective, or RTO) and data loss (recovery point objective, or RPO), perform a cross-Region failover as quickly as possible. 

 Aurora Global Database has two failover methods that you can use in a disaster recovery situation: 
+  Managed failover – This method is recommended for disaster recovery. When you use this method, Aurora automatically adds back the old primary Region to the global database as a secondary Region when it becomes available again. Thus, the original topology of your global cluster is maintained. To learn how to use this method, see [Performing managed failovers for Aurora global databases](#aurora-global-database-failover.managed-unplanned). 
+  Manual failover – This alternative method can be used when managed failover isn't an option, for example, when your primary and secondary Regions are running incompatible engine versions. To learn how to use this method, see [Performing manual failovers for Aurora global databases](#aurora-global-database-failover.manual-unplanned). 

**Important**  
 Both failover methods can result in a loss of write transaction data that wasn't replicated to the chosen secondary before the failover event occurred. However, the recovery process that promotes a DB instance on the chosen secondary DB cluster to be the primary writer DB instance guarantees that the data is in a transactionally consistent state. Failovers are also susceptible to *split-brain* issues. 

### Performing managed failovers for Aurora global databases
Performing managed failovers

 This approach is intended for business continuity in the event of a true Regional disaster or complete service-level outage. 

 During a managed failover, the secondary cluster in your chosen secondary Region becomes the new primary cluster. The chosen secondary cluster promotes one of its read-only nodes to full writer status. This step allows the cluster to assume the role of primary cluster. Your database is unavailable for a short time while this cluster is assuming its new role. As soon as that old primary Region is healthy and available again, Aurora automatically adds it back to the global cluster as a secondary Region. Thus, your Aurora global database's existing replication topology is maintained. 

**Note**  
To manage replication slots for Aurora PostgreSQL after performing a failover, see [Managing logical slots for Aurora PostgreSQL](AuroraPostgreSQL.Replication.Logical-monitoring.md#AuroraPostgreSQL.Replication.Logical.Configure.managing-logical-slots).

**Note**  
 You can perform managed cross-Region failovers with Aurora Global Database only if the primary and secondary DB clusters have the same major and minor engine versions. Depending on the engine and engine version, the patch levels might also need to be identical, or they might be allowed to differ. For a list of engines and engine versions that allow these operations between primary and secondary clusters with different patch levels, see [Patch level compatibility for managed cross-Region switchovers and failovers](aurora-global-database-upgrade.md#aurora-global-database-upgrade.minor.incompatibility). Before you begin the failover, check the engine versions in your global cluster to make sure that they support managed cross-Region failover, and upgrade them if needed. If your engine versions require identical patch levels but are running different patch levels, you can perform the failover manually by following the steps in [Performing manual failovers for Aurora global databases](#aurora-global-database-failover.manual-unplanned). 

 Managed failover doesn't wait for data to synchronize between the chosen secondary Region and the current primary Region. Because Aurora Global Database replicates data asynchronously, it's possible that not all transactions have been replicated to the chosen secondary AWS Region before it's promoted to accept full read/write traffic. 

 To ensure that the data is in a consistent state, Aurora creates a new storage volume for the old primary Region after it recovers. Before creating the new storage volume in that AWS Region, Aurora attempts to take a snapshot of the old storage volume at the point of failure, so that you can restore the snapshot and recover any missing data from it. If this operation is successful, Aurora saves this snapshot, named `rds:unplanned-global-failover-name-of-old-primary-DB-cluster-timestamp`, in the **Snapshots** section of the AWS Management Console. You can also use the `describe-db-cluster-snapshots` AWS CLI command or the `DescribeDBClusterSnapshots` API operation to see details for the snapshot. 

 When you initiate a managed failover, Aurora also attempts to halt write traffic through the highly available Aurora storage layer. We refer to this mechanism as *write fencing*. If the process succeeds, Aurora emits an RDS event letting you know that writes were stopped. In the unlikely event of multiple Availability Zone failures in a Region, it's possible that the write fencing process doesn't succeed in a timely manner. In that case, Aurora emits an RDS event informing you that the process to stop writes timed out. If the old primary cluster is reachable on the network, Aurora records these events there. If not, Aurora records the events on the new primary cluster. To learn more about these events, see [DB cluster events](USER_Events.Messages.md#USER_Events.Messages.cluster). Because write fencing is a best-effort attempt, writes might be momentarily accepted in the old primary Region, causing split-brain issues. 

 We recommend that you complete the following tasks before you perform a failover with Aurora Global Database. Doing so minimizes the possibility of split-brain issues and reduces the chance of having to recover unreplicated data from the snapshot of the old primary cluster. 
+  To prevent writes from being sent to the primary cluster of Aurora Global Database, take applications offline. 
+  Make sure that any applications that connect to the primary DB cluster are using the global writer endpoint. This endpoint has a value that remains the same even when a new Region becomes the primary cluster because of switchover or failover. Aurora implements additional safeguards to minimize the possibility of data loss for write operations submitted through the global endpoint. For more information about global writer endpoints, see [Connecting to Amazon Aurora Global Database](aurora-global-database-connecting.md). 
+  If you are using the global writer endpoint and your application or networking layers cache DNS values, reduce the time-to-live (TTL) of your DNS cache to a low value such as 5 seconds. That way, your application quickly registers DNS changes with the global writer endpoint. Although Aurora attempts to block writes in the old primary Region, the action is not guaranteed to succeed. Reducing the DNS cache duration further reduces the likelihood of split-brain issues. As an alternative, you can check for the RDS event that informs you when Aurora observed the DNS changes for the global writer endpoint. That way, you can validate that your application also registered the DNS change before restarting your application write traffic. 
+  Check lag times for all secondary Aurora DB clusters in the Aurora Global Database. Choosing the secondary Region with the least replication lag can minimize data loss with the current failed primary Region. 

   For all versions of Aurora PostgreSQL-based global databases, and for Aurora MySQL-based global databases running engine versions 3.04.0 and higher or 2.12.0 and higher, use Amazon CloudWatch to view the `AuroraGlobalDBRPOLag` metric for all secondary DB clusters. For lower minor versions of Aurora MySQL-based global databases, view the `AuroraGlobalDBReplicationLag` metric instead. These metrics show you how far (in milliseconds) each secondary cluster's replication lags behind the primary DB cluster. 

   For more information about CloudWatch metrics for Aurora, see [Cluster-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.clusters). 
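
 The comparison above reduces to picking the Region whose lag metric is smallest. As a sketch, with hypothetical lag readings (in practice you'd collect the `AuroraGlobalDBRPOLag` or `AuroraGlobalDBReplicationLag` values from CloudWatch):

```python
# Hypothetical per-Region lag readings, in milliseconds.
lag_ms = {
    "us-west-2": 850,
    "eu-west-1": 120,
    "ap-southeast-2": 2300,
}

# Choose the secondary Region with the least replication lag
# to minimize data loss during the failover.
target_region = min(lag_ms, key=lag_ms.get)
print(target_region)  # eu-west-1
```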

 During a managed failover, the chosen secondary DB cluster is promoted to its new role as primary. However, it doesn't inherit the various configuration options of the primary DB cluster. A mismatch in configuration can lead to performance issues, workload incompatibilities, and other anomalous behavior. To avoid such issues, we recommend that you resolve differences between your Aurora global database clusters for the following: 
+  **Configure Aurora DB cluster parameter group for the new primary, if necessary** – You can configure your Aurora DB cluster parameter groups independently for each Aurora cluster in your Aurora Global Database. Therefore, when you promote a secondary DB cluster to take over the primary role, the parameter group from the secondary might be configured differently than for the primary. If so, modify the promoted secondary DB cluster's parameter group to conform to your primary cluster's settings. To learn how, see [Modifying parameters for an Aurora global database](aurora-global-database-modifying.parameters.md). 
+  **Configure monitoring tools and options, such as Amazon CloudWatch Events and alarms** – Configure the promoted DB cluster with the same logging ability, alarms, and so on, as needed for the global database. As with parameter groups, configuration for these features isn't inherited from the primary during the failover process. Some CloudWatch metrics, such as replication lag, are available only for secondary Regions. Thus, a failover changes how you view those metrics and set alarms on them, and could require changes to any predefined dashboards. For more information about monitoring Aurora DB clusters, see [Monitoring Amazon Aurora metrics with Amazon CloudWatch](monitoring-cloudwatch.md). 
+  **Configure integrations with other AWS services** – If your Aurora Global Database integrates with other AWS services, such as AWS Secrets Manager, AWS Identity and Access Management, Amazon S3, and AWS Lambda, make sure these are configured as required for access from any secondary Regions. For more information about integrating Aurora global databases with IAM, Amazon S3, and Lambda, see [Using Amazon Aurora global databases with other AWS services](aurora-global-database-interop.md). To learn more about Secrets Manager, see [How to automate replication of secrets in AWS Secrets Manager across AWS Regions](https://aws.amazon.com/blogs/security/how-to-automate-replication-of-secrets-in-aws-secrets-manager-across-aws-regions/). 

 Typically, the chosen secondary cluster assumes the primary role within a few minutes. As soon as the new primary Region's writer DB instance is available, you can connect your applications to it and resume your workloads. After Aurora promotes the new primary cluster, it automatically rebuilds all additional secondary Region clusters. 

 Because Aurora global databases use asynchronous replication, the replication lag in each secondary Region can vary. Aurora rebuilds these secondary Regions to have the exact same point-in-time data as the new primary Region cluster. The complete rebuild can take from a few minutes to several hours, depending on the size of the storage volume and the distance between the Regions. When the secondary Region clusters finish rebuilding from the new primary Region, they become available for read access. 

 As soon as the new primary writer is promoted and available, the new primary Region's cluster can handle read and write operations for the Aurora global database. 

 If you are using the global endpoint, you don't need to change the connection settings in your application. Verify that the DNS changes have propagated and that you can connect and perform write operations on the new primary cluster. Then you can resume full operation of your application. 

 If you aren't using the global endpoint, make sure to change the endpoint for your application to use the cluster endpoint for the newly promoted primary DB cluster. If you accepted the provided names when you created the Aurora global database, you can change the endpoint by removing the `-ro` from the promoted cluster's endpoint string in your application. 

 For example, the secondary cluster's endpoint `my-global.cluster-ro-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` becomes `my-global.cluster-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` when that cluster is promoted to primary. 

 If you are using RDS Proxy, make sure to redirect your application's write operations to the appropriate read/write endpoint of the proxy that's associated with the new primary cluster. This proxy endpoint might be the default endpoint or a custom read/write endpoint. For more information, see [How RDS Proxy endpoints work with global databases](rds-proxy-gdb.md#rds-proxy-gdb.endpoints). 

 To restore the global database cluster's original topology, Aurora monitors the availability of the old primary Region. As soon as that Region is healthy and available again, Aurora automatically adds it back to the global cluster as a secondary Region. Before creating the new storage volume in the old primary Region, Aurora tries to take a snapshot of the old storage volume at the point of failure. It does this so that you can use it to recover any of the missing data. If this operation is successful, Aurora creates a snapshot named `rds:unplanned-global-failover-name-of-old-primary-DB-cluster-timestamp`. You can find this snapshot in the **Snapshots** section of the AWS Management Console. You can also see this snapshot listed in the information returned by the [DescribeDBClusterSnapshots](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_DescribeDBClusterSnapshots.html) API operation. 

**Note**  
 The snapshot of the old storage volume is a system snapshot that's subject to the backup retention period configured on the old primary cluster. To preserve this snapshot outside of the retention period, you can copy it to save it as a manual snapshot. To learn more about copying snapshots, including pricing, see [DB cluster snapshot copying](aurora-copy-snapshot.md). 
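
 As a sketch, you could locate the point-of-failure snapshot in a list of snapshot identifiers by matching the naming pattern above (the identifiers shown are hypothetical examples):

```python
# Aurora names the point-of-failure snapshot:
#   rds:unplanned-global-failover-<old-primary-cluster>-<timestamp>
PREFIX = "rds:unplanned-global-failover-"

def find_failover_snapshots(snapshot_ids, old_primary_cluster):
    """Return the snapshot IDs taken at the point of failure for a cluster."""
    return [
        s for s in snapshot_ids
        if s.startswith(PREFIX + old_primary_cluster + "-")
    ]

snapshot_ids = [
    "rds:my-cluster-2024-06-01-05-10",  # routine automated snapshot
    "rds:unplanned-global-failover-my-cluster-2024-06-02-03-15",
]
print(find_failover_snapshots(snapshot_ids, "my-cluster"))
```

In practice, the snapshot identifiers would come from the `describe-db-cluster-snapshots` AWS CLI command or the `DescribeDBClusterSnapshots` API operation.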

 After the original topology is restored, you can fail back your global database to the original primary Region by performing a switchover operation when it makes the most sense for your business and workload. To do so, follow the steps in [Performing switchovers for Amazon Aurora global databases](#aurora-global-database-disaster-recovery.managed-failover). 

 You can perform a failover with Aurora Global Database using the AWS Management Console, the AWS CLI, or the RDS API. 

#### Console


**To perform the managed failover on your Aurora global database**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  Choose **Databases** and find the Aurora global database where you want to perform the failover. 

1.  Choose **Switch over or fail over global database** from the **Actions** menu.   
![\[The Databases list with the Actions menu open, showing the Switch over or fail over global database option.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-managed-failover-1.png)

1.  Choose **Failover (allow data loss)**.   
![\[The Switch over or fail over global database dialog, with Failover (allow data loss) selected.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-managed-failover-2.png)

1.  For **New primary cluster**, choose an active cluster in one of your secondary AWS Regions to be the new primary cluster. 

1.  Enter **confirm**, and then choose **Confirm**. 

 When the failover completes, you can view the Aurora DB clusters and their current state in the **Databases** list, as shown in the following image. 

![\[Showing the Databases list with the global database selected. The selected secondary cluster now shows as having the primary cluster role and the old primary has the role of a secondary cluster.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-managed-failover-5.png)


#### AWS CLI


 **To perform the managed failover on an Aurora global database** 

 Use the `[failover-global-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/failover-global-cluster.html)` CLI command to perform a failover with Aurora Global Database. With the command, pass values for the following parameters. 
+  `--region` – Specify the AWS Region where the secondary DB cluster that you want to be the new primary for the Aurora global database is running. 
+  `--global-cluster-identifier` – Specify the name of your Aurora global database. 
+  `--target-db-cluster-identifier` – Specify the Amazon Resource Name (ARN) of the Aurora DB cluster that you want to promote to be the new primary for the Aurora global database. 
+  `--allow-data-loss` – Explicitly make this a failover operation instead of a switchover operation. A failover operation can result in some data loss if the asynchronous replication components haven't completed sending all replicated data to the secondary Region. 

For Linux, macOS, or Unix:

```
aws rds --region region_of_selected_secondary \
   failover-global-cluster --global-cluster-identifier global_database_id \
   --target-db-cluster-identifier arn_of_secondary_to_promote \
   --allow-data-loss
```

For Windows:

```
aws rds --region region_of_selected_secondary ^
   failover-global-cluster --global-cluster-identifier global_database_id ^
   --target-db-cluster-identifier arn_of_secondary_to_promote ^
   --allow-data-loss
```

#### RDS API


 To perform a failover with Aurora Global Database, run the [FailoverGlobalCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_FailoverGlobalCluster.html) API operation. 

### Performing manual failovers for Aurora global databases
Performing manual failovers

 In some scenarios, you might not be able to use the managed failover process. One example is if your primary and secondary DB clusters aren't running compatible engine versions. In this case, you can follow this manual process to perform a failover to your target secondary Region. 

**Tip**  
 We recommend that you understand this process before using it. Have a plan ready so that you can proceed quickly at the first sign of a Region-wide issue. Regularly use Amazon CloudWatch to track lag times for the secondary clusters, so that you can identify the secondary Region with the least replication lag. Test your plan to make sure that your procedures are complete and accurate, and that staff are trained to perform a disaster recovery failover, before it really happens. 

**To perform a manual failover to a secondary cluster after an unplanned outage in the primary Region**

1.  Stop issuing DML statements and other write operations to the primary Aurora DB cluster in the AWS Region with the outage. 

1.  Identify an Aurora DB cluster from a secondary AWS Region to use as a new primary DB cluster. If you have two or more secondary AWS Regions in your Aurora global database, choose the secondary cluster that has the least replication lag. 

1.  Detach your chosen secondary DB cluster from the Aurora global database. 

    Removing a secondary DB cluster from an Aurora global database immediately stops the replication from the primary to this secondary and promotes it to a standalone provisioned Aurora DB cluster with full read/write capabilities. Any other secondary Aurora DB clusters associated with the primary cluster in the Region with the outage are still available and can accept calls from your application. They also consume resources. Because you're recreating the Aurora global database, remove the other secondary DB clusters before creating the new Aurora global database in the following steps. Doing this avoids data inconsistencies among the DB clusters in the Aurora global database (*split-brain* issues). 

    For detailed steps for detaching, see [Removing a cluster from an Amazon Aurora global database](aurora-global-database-detaching.md). 

1.  Reconfigure your application to send all write operations to this now standalone Aurora DB cluster using its new endpoint. If you accepted the provided names when you created the Aurora global database, you can change the endpoint by removing the `-ro` from the cluster's endpoint string in your application. 

    For example, the secondary cluster's endpoint `my-global.cluster-ro-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` becomes `my-global.cluster-aaaaaabbbbbb.us-west-1.rds.amazonaws.com` when that cluster is detached from the Aurora global database. 

    This Aurora DB cluster becomes the primary cluster of a new Aurora global database when you start adding Regions to it in the next step. 

    If you are using RDS Proxy, make sure to redirect your application's write operations to the appropriate read/write endpoint of the proxy that's associated with the new primary cluster. This proxy endpoint might be the default endpoint or a custom read/write endpoint. For more information, see [How RDS Proxy endpoints work with global databases](rds-proxy-gdb.md#rds-proxy-gdb.endpoints). 

1.  Add an AWS Region to the DB cluster. When you do this, the replication process from primary to secondary begins. For detailed steps to add a Region, see [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md). 

1.  Add more AWS Regions as needed to recreate the topology needed to support your application. 
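
 The numbered steps above can be summarized as an ordered plan. In this sketch the step strings stand in for the console actions or RDS API calls described earlier; the Region names and lag values are hypothetical:

```python
def manual_failover_plan(secondary_lag_ms):
    """Return the ordered steps of a manual cross-Region failover.

    secondary_lag_ms maps each secondary Region to its replication lag
    in milliseconds; the least-lagged secondary becomes the new primary.
    """
    target = min(secondary_lag_ms, key=secondary_lag_ms.get)
    others = [r for r in secondary_lag_ms if r != target]

    steps = ["stop writes to the old primary"]
    steps.append(f"detach {target} from the global database")
    # Remove the remaining secondaries before rebuilding, to avoid
    # split-brain issues among the DB clusters.
    steps += [f"remove secondary cluster in {r}" for r in others]
    steps.append(f"redirect application writes to {target}")
    steps += [f"add Region {r} to the new global database" for r in others]
    return steps

for step in manual_failover_plan({"us-west-2": 900, "eu-west-1": 150}):
    print("-", step)
```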

 Make sure that application writes are sent to the correct Aurora DB cluster before, during, and after making these changes. Doing this avoids data inconsistencies among the DB clusters in the Aurora global database (*split-brain* issues). 

 If you reconfigured in response to an outage in an AWS Region, you can make that AWS Region the primary again after the outage is resolved. To do so, you add the old AWS Region to your new global database, and then use the switchover process to switch its role. Your Aurora global database must use a version of Aurora PostgreSQL or Aurora MySQL that supports switchovers. For more information, see [Performing switchovers for Amazon Aurora global databases](#aurora-global-database-disaster-recovery.managed-failover). 

## Managing RPOs for Aurora PostgreSQL–based global databases
Managing RPOs (Aurora PostgreSQL)

 With an Aurora PostgreSQL–based global database, you can manage the recovery point objective (RPO) for your Aurora global database by using the `rds.global_db_rpo` parameter. RPO represents the maximum amount of data that can be lost in the event of an outage. 

 When you set an RPO for your Aurora PostgreSQL–based global database, Aurora monitors the *RPO lag time* of all secondary clusters to make sure that at least one secondary cluster stays within the target RPO window. RPO lag time is a time-based metric that measures how far each secondary cluster's data is behind the primary DB cluster. 

 The RPO is used when your database resumes operations in a new AWS Region after a failover. Aurora evaluates RPO and RPO lag times to commit (or block) transactions on the primary as follows: 
+  Commits the transaction if at least one secondary DB cluster has an RPO lag time less than the RPO. 
+  Blocks the transaction if all secondary DB clusters have RPO lag times that are larger than the RPO. It also logs the event to the PostgreSQL log file and emits "wait" events that show the blocked sessions. 

 In other words, if all secondary clusters are behind the target RPO, Aurora pauses transactions on the primary cluster until at least one of the secondary clusters catches up. Paused transactions are resumed and committed as soon as the lag time of at least one secondary DB cluster becomes less than the RPO. The result is that no transactions can commit until the RPO is met. 
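
The commit gate described above can be sketched as a small decision function. This is an illustrative model only, not Aurora's internal implementation:

```python
def commit_allowed(secondary_rpo_lags_sec, rpo_sec):
    """Model of the RPO commit gate: a transaction on the primary can
    commit only if at least one secondary cluster's RPO lag time is
    within the configured rds.global_db_rpo target (in seconds)."""
    return any(lag < rpo_sec for lag in secondary_rpo_lags_sec)

# One secondary within a 600-second RPO target: commits proceed.
print(commit_allowed([45, 1200], 600))   # True
# All secondaries behind the target: commits are paused.
print(commit_allowed([750, 1200], 600))  # False
```

In this model, a paused transaction simply retries the check until a secondary's lag drops below the target, which mirrors how Aurora resumes paused transactions once at least one secondary catches up.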

 The `rds.global_db_rpo` parameter is dynamic. If you decide that you don't want all write transactions to stall until the lag decreases sufficiently, you can reset it quickly. In this case, Aurora recognizes and implements the change after a short delay. 

**Important**  
 In a global database with only two AWS Regions, we recommend keeping the `rds.global_db_rpo` parameter's default value in the secondary Region's parameter group. Otherwise, performing a failover due to a loss of the primary AWS Region could cause Aurora to pause transactions. Instead, wait until Aurora completes rebuilding the cluster in the old failed AWS Region before changing this parameter to enforce a maximum RPO. 

 If you set this parameter as outlined in the following, you can also monitor the metrics that it generates. You can do so by using `psql` or another tool to query the Aurora global database's primary DB cluster and obtain detailed information about your Aurora PostgreSQL–based global database's operations. To learn how, see [Monitoring Aurora PostgreSQL-based global databases](aurora-global-database-monitoring.md#aurora-global-database-monitoring.postgres). 

**Topics**
+ [

### Setting the recovery point objective
](#aurora-global-database-set-rpo)
+ [

### Viewing the recovery point objective
](#aurora-global-database-view-rpo)
+ [

### Disabling the recovery point objective
](#aurora-global-database-disable-rpo)

### Setting the recovery point objective
Setting the RPO

 The `rds.global_db_rpo` parameter controls the RPO setting for an Aurora PostgreSQL DB cluster. Valid values for `rds.global_db_rpo` range from 20 seconds to 2,147,483,647 seconds (68 years). Choose a realistic value that meets your business needs and use case. For example, to allow up to 10 minutes for your RPO, set the value to 600. 

 You can set this value for your Aurora PostgreSQL–based global database by using the AWS Management Console, the AWS CLI, or the RDS API. 

#### Console


**To set the RPO**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  Choose the primary cluster of your Aurora global database and open the **Configuration** tab to find its DB cluster parameter group. For example, the default parameter group for a primary DB cluster running Aurora PostgreSQL 11.7 is `default.aurora-postgresql11`. 

    Default parameter groups can't be edited directly. Instead, do the following: 
   +  Create a custom DB cluster parameter group using the appropriate default parameter group as the starting point. For example, create a custom DB cluster parameter group based on `default.aurora-postgresql11`. 
   +  On your custom DB cluster parameter group, set the value of the **rds.global\_db\_rpo** parameter to meet your use case. Valid values range from 20 seconds up to the maximum integer value of 2,147,483,647 (68 years). 
   +  Apply the modified DB cluster parameter group to your Aurora DB cluster. 

 For more information, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md). 

#### AWS CLI


 To set the `rds.global_db_rpo` parameter, use the [modify-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster-parameter-group.html) CLI command. In the command, specify the name of your primary cluster's parameter group and a value for the RPO parameter. 

 The following example sets the RPO to 600 seconds (10 minutes) for the primary DB cluster's parameter group named `my_custom_global_parameter_group`. 

For Linux, macOS, or Unix:

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my_custom_global_parameter_group \
    --parameters "ParameterName=rds.global_db_rpo,ParameterValue=600,ApplyMethod=immediate"
```

For Windows:

```
aws rds modify-db-cluster-parameter-group ^
    --db-cluster-parameter-group-name my_custom_global_parameter_group ^
    --parameters "ParameterName=rds.global_db_rpo,ParameterValue=600,ApplyMethod=immediate"
```

#### RDS API


 To modify the `rds.global_db_rpo` parameter, use the Amazon RDS [ModifyDBClusterParameterGroup](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBClusterParameterGroup.html) API operation. 

### Viewing the recovery point objective
Viewing the RPO

 The recovery point objective (RPO) of a global database is stored in the `rds.global_db_rpo` parameter for each DB cluster. You can connect to the endpoint for the secondary cluster you want to view and use `psql` to query the instance for this value. 

```
db-name=>show rds.global_db_rpo;
```

 If this parameter isn't set, the query returns the following: 

```
rds.global_db_rpo
-------------------
 -1
(1 row)
```

 This next response is from a secondary DB cluster that has a 1-minute RPO setting. 

```
rds.global_db_rpo
-------------------
 60
(1 row)
```
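
A value of `-1` indicates that no RPO is enforced. The following helper (hypothetical, for illustration only) makes the interpretation of the query result explicit:

```python
def interpret_rpo(value):
    """Interpret the result of `show rds.global_db_rpo`:
    -1 means no RPO is set; any other value is the target in seconds."""
    seconds = int(value)
    return "RPO not set" if seconds == -1 else f"RPO target: {seconds} seconds"

print(interpret_rpo("-1"))  # RPO not set
print(interpret_rpo("60"))  # RPO target: 60 seconds
```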

 You can also find out whether `rds.global_db_rpo` is active on any of the Aurora DB clusters by using the CLI to get the values of all `user` parameters for the cluster. 

For Linux, macOS, or Unix:

```
aws rds describe-db-cluster-parameters \
 --db-cluster-parameter-group-name lab-test-apg-global \
 --source user
```

For Windows:

```
aws rds describe-db-cluster-parameters ^
 --db-cluster-parameter-group-name lab-test-apg-global ^
 --source user
```

 The command returns output similar to the following for all `user` parameters, that is, parameters that aren't `default-engine` or `system` DB cluster parameters. 

```
{
    "Parameters": [
        {
            "ParameterName": "rds.global_db_rpo",
            "ParameterValue": "60",
            "Description": "(s) Recovery point objective threshold, in seconds, that blocks user commits when it is violated.",
            "Source": "user",
            "ApplyType": "dynamic",
            "DataType": "integer",
            "AllowedValues": "20-2147483647",
            "IsModifiable": true,
            "ApplyMethod": "immediate",
            "SupportedEngineModes": [
                "provisioned"
            ]
        }
    ]
}
```
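
The JSON output above can also be checked programmatically. The sketch below (illustrative only; the helper name is hypothetical) scans the `Parameters` array for `rds.global_db_rpo`:

```python
import json

def get_rpo_setting(describe_output):
    """Return the rds.global_db_rpo value in seconds from
    `describe-db-cluster-parameters` JSON output, or None if the
    parameter has no value set in the group."""
    for param in json.loads(describe_output).get("Parameters", []):
        if param.get("ParameterName") == "rds.global_db_rpo":
            value = param.get("ParameterValue")
            return int(value) if value is not None else None
    return None

# Trimmed-down sample shaped like the CLI output shown above.
sample = '''{"Parameters": [{"ParameterName": "rds.global_db_rpo",
                             "ParameterValue": "60", "Source": "user"}]}'''
print(get_rpo_setting(sample))  # 60
```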

 To learn more about viewing parameters of the cluster parameter group, see [Viewing parameter values for a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ViewingCluster.md). 

### Disabling the recovery point objective
Disabling the RPO

 To disable the RPO, reset the `rds.global_db_rpo` parameter. You can reset parameters using the AWS Management Console, the AWS CLI, or the RDS API. 

#### Console


**To disable the RPO**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1.  In the navigation pane, choose **Parameter groups**. 

1.  In the list, choose your primary DB cluster parameter group. 

1.  Choose **Edit parameters**. 

1.  Choose the box next to the **rds.global\_db\_rpo** parameter. 

1.  Choose **Reset**. 

1.  When the screen shows **Reset parameters in DB parameter group**, choose **Reset parameters**. 

 For more information on how to reset a parameter with the console, see [Modifying parameters in a DB cluster parameter group in Amazon Aurora](USER_WorkingWithParamGroups.ModifyingCluster.md). 

#### AWS CLI


 To reset the `rds.global_db_rpo` parameter, use the [reset-db-cluster-parameter-group](https://docs.aws.amazon.com/cli/latest/reference/rds/reset-db-cluster-parameter-group.html) command. 

For Linux, macOS, or Unix:

```
aws rds reset-db-cluster-parameter-group \
    --db-cluster-parameter-group-name global_db_cluster_parameter_group \
    --parameters "ParameterName=rds.global_db_rpo,ApplyMethod=immediate"
```

For Windows:

```
aws rds reset-db-cluster-parameter-group ^
    --db-cluster-parameter-group-name global_db_cluster_parameter_group ^
    --parameters "ParameterName=rds.global_db_rpo,ApplyMethod=immediate"
```

#### RDS API


 To reset the `rds.global_db_rpo` parameter, use the Amazon RDS API [ResetDBClusterParameterGroup](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ResetDBClusterParameterGroup.html) operation. 

# Cross-Region resiliency for Global Database secondary clusters
Cross-Region resiliency for secondary clusters

 Aurora PostgreSQL versions 16.6, 15.10, 14.15, 13.18, and 12.22 and higher, and Aurora MySQL versions 3.09 and higher, contain availability improvements that enable secondary Region read replicas to maintain service continuity during unplanned events such as hardware failures, network disruptions across AWS Regions, and large volumes of data transfer between clusters. 

 Although the read replicas remain available for your application requests, the replication lag may continue to increase until the resolution of the unplanned event. You can monitor the lag between primary and secondary clusters using the `AuroraGlobalDBProgressLag` CloudWatch metric. To measure the end-to-end lag, including any lag between the cluster volume and DB instances of the secondary cluster, add the values of the `AuroraGlobalDBProgressLag` and `AuroraReplicaLag` CloudWatch metrics. For more information about metrics, refer to [Metrics reference for Amazon Aurora](metrics-reference.md). 
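
As a concrete illustration of that calculation, the sketch below sums the most recent datapoint of each metric. Retrieving the datapoints from CloudWatch is out of scope here; the values are passed in directly, and the function name is hypothetical:

```python
def end_to_end_lag_ms(progress_lag_datapoints, replica_lag_datapoints):
    """Approximate end-to-end lag for a secondary cluster by adding the
    latest AuroraGlobalDBProgressLag and AuroraReplicaLag values, each
    given as (timestamp, milliseconds) pairs."""
    def latest(points):
        # Pick the datapoint with the most recent timestamp.
        return max(points, key=lambda p: p[0])[1]
    return latest(progress_lag_datapoints) + latest(replica_lag_datapoints)

progress = [(1, 850), (2, 900)]   # AuroraGlobalDBProgressLag samples
replica = [(1, 120), (2, 100)]    # AuroraReplicaLag samples
print(end_to_end_lag_ms(progress, replica))  # 1000
```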

 For earlier versions of Aurora MySQL and Aurora PostgreSQL, Global Database read availability might be impacted during such unplanned events. 

 For more information about new features in Aurora PostgreSQL 16.6, 15.10, 14.15, 13.18, and 12.22, see [PostgreSQL 16.6](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html#aurorapostgresql-versions-version166x) in the *Aurora PostgreSQL Release Notes*. 

 For more information about new features in Aurora MySQL versions 3.09 and higher, see [Database engine updates for Amazon Aurora MySQL version 3](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraMySQLReleaseNotes/AuroraMySQL.Updates.30Updates.html) in the *Aurora MySQL Release Notes*. 

# Monitoring an Amazon Aurora global database
Monitoring an Aurora global database

When you create the Aurora DB clusters that make up your Aurora global database, you can choose many options that let you monitor your DB cluster's performance. These options include the following:
+ Amazon RDS Performance Insights – Enables performance schema in the underlying Aurora database engine. To learn more about Performance Insights and Aurora global databases, see [Monitoring an Amazon Aurora global database with Amazon RDS Performance Insights](#aurora-global-database-pi).
+ Enhanced monitoring – Generates metrics for process or thread utilization on the CPU. To learn about enhanced monitoring, see [Monitoring OS metrics with Enhanced Monitoring](USER_Monitoring.OS.md).
+ Amazon CloudWatch Logs – Publishes specified log types to CloudWatch Logs. Error logs are published by default, but you can choose other logs specific to your Aurora database engine.
  + For Aurora MySQL–based Aurora DB clusters, you can export the audit log, general log, and slow query log.
  + For Aurora PostgreSQL–based Aurora DB clusters, you can export the PostgreSQL log.
+ For Aurora MySQL–based global databases, you can query specific `information_schema` tables to check the status of your Aurora global database and its instances. To learn how, see [Monitoring Aurora MySQL-based global databases](#aurora-global-database-monitoring.mysql). 
+ For Aurora PostgreSQL–based global databases, you can use specific functions to check the status of your Aurora global database and its instances. To learn how, see [Monitoring Aurora PostgreSQL-based global databases](#aurora-global-database-monitoring.postgres). 

The following screenshot shows some of the options available on the Monitoring tab of a primary Aurora DB cluster in an Aurora global database.

![\[Monitoring tab: Monitoring dropdown showing CloudWatch, Enhanced monitoring, OS process list, and Performance Insights options.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aurora-global-db-monitoring-options.png)


For more information, see [Monitoring metrics in an Amazon Aurora cluster](MonitoringAurora.md).

## Monitoring an Amazon Aurora global database with Amazon RDS Performance Insights
Monitoring an Aurora global database with Performance Insights

You can use Amazon RDS Performance Insights for your Aurora global databases. You enable this feature individually, for each Aurora DB cluster in your Aurora global database. To do so, choose **Enable Performance Insights** in the **Additional configuration** section of the **Create database** page. Or you can modify your Aurora DB clusters to use this feature after they are up and running. You can enable or turn off Performance Insights for each cluster that's part of your Aurora global database. 

The reports created by Performance Insights apply to each cluster in the global database. When you add a new secondary AWS Region to an Aurora global database that's already using Performance Insights, be sure that you enable Performance Insights in the newly added cluster. It doesn't inherit the Performance Insights setting from the existing global database. 

You can switch AWS Regions while viewing the Performance Insights page for a DB instance that's attached to a global database. However, you might not see performance information immediately after switching AWS Regions. Although the DB instances might have identical names in each AWS Region, the associated Performance Insights URL is different for each DB instance. After switching AWS Regions, choose the name of the DB instance again in the Performance Insights navigation pane. 

For DB instances associated with a global database, the factors affecting performance might be different in each AWS Region. For example, the DB instances in each AWS Region might have different capacity.

To learn more about using Performance Insights, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md). 

## Monitoring Aurora global databases with Database Activity Streams


By using the Database Activity Streams feature, you can monitor and set alarms for auditing activity in the DB clusters in your global database. You start a database activity stream on each DB cluster separately. Each cluster delivers audit data to its own Kinesis stream within its own AWS Region. For more information, see [Monitoring Amazon Aurora with Database Activity Streams](DBActivityStreams.md).

## Monitoring Aurora MySQL-based global databases


To view the status of an Aurora MySQL-based global database, query the [information\_schema.aurora\_global\_db\_status](AuroraMySQL.Reference.ISTables.md#AuroraMySQL.Reference.ISTables.aurora_global_db_status) and [information\_schema.aurora\_global\_db\_instance\_status](AuroraMySQL.Reference.ISTables.md#AuroraMySQL.Reference.ISTables.aurora_global_db_instance_status) tables.

**Note**  
The `information_schema.aurora_global_db_status` and `information_schema.aurora_global_db_instance_status` tables are only available with Aurora MySQL version 3.04.0 and higher global databases.

**To monitor an Aurora MySQL-based global database**

1. Connect to the global database primary cluster endpoint using a MySQL client. For more information about how to connect, see [Connecting to Amazon Aurora Global Database](aurora-global-database-connecting.md).

1. Query the `information_schema.aurora_global_db_status` table in a mysql command to list the primary and secondary volumes. This query returns the lag times of the global database secondary DB clusters, as in the following example.

   ```
   mysql> select * from information_schema.aurora_global_db_status;
   ```

   ```
   AWS_REGION | HIGHEST_LSN_WRITTEN | DURABILITY_LAG_IN_MILLISECONDS | RPO_LAG_IN_MILLISECONDS | LAST_LAG_CALCULATION_TIMESTAMP | OLDEST_READ_VIEW_TRX_ID
   -----------+---------------------+--------------------------------+------------------------+---------------------------------+------------------------
   us-east-1  |           183537946 |                            0   |                      0 |  1970-01-01 00:00:00.000000     |               0
   us-west-2  |           183537944 |                            428 |                      0 |  2023-02-18 01:26:41.925000     |               20806982
   (2 rows)
   ```

   The output includes a row for each DB cluster of the global database containing the following columns:
   + **AWS\_REGION** – The AWS Region that this DB cluster is in. For tables listing AWS Regions by engine, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). 
   + **HIGHEST\_LSN\_WRITTEN** – The highest log sequence number (LSN) currently written on this DB cluster. 

     A *log sequence number (LSN)* is a unique sequential number that identifies a record in the database transaction log. LSNs are ordered such that a larger LSN represents a later transaction.
   + **DURABILITY\_LAG\_IN\_MILLISECONDS** – The difference in the timestamp values between the `HIGHEST_LSN_WRITTEN` on a secondary DB cluster and the `HIGHEST_LSN_WRITTEN` on the primary DB cluster. This value is always 0 on the primary DB cluster of the Aurora global database.
   + **RPO\_LAG\_IN\_MILLISECONDS** – The recovery point objective (RPO) lag. The RPO lag is the time it takes for the most recent user transaction COMMIT to be stored on a secondary DB cluster after it's been stored on the primary DB cluster of the Aurora global database. This value is always 0 on the primary DB cluster of the Aurora global database.

     In simple terms, this metric calculates the recovery point objective for each Aurora MySQL DB cluster in the Aurora global database, that is, how much data might be lost if there were an outage. As with lag, RPO is measured in time.
   + **LAST\_LAG\_CALCULATION\_TIMESTAMP** – The timestamp that specifies when values were last calculated for `DURABILITY_LAG_IN_MILLISECONDS` and `RPO_LAG_IN_MILLISECONDS`. A time value such as `1970-01-01 00:00:00+00` means this is the primary DB cluster.
   + **OLDEST\_READ\_VIEW\_TRX\_ID** – The ID of the oldest transaction that the writer DB instance can purge to.

1. Query the `information_schema.aurora_global_db_instance_status` table to list all secondary DB instances for both the primary DB cluster and the secondary DB clusters.

   ```
   mysql> select * from information_schema.aurora_global_db_instance_status;
   ```

   ```
   SERVER_ID            |              SESSION_ID              | AWS_REGION | DURABLE_LSN | HIGHEST_LSN_RECEIVED | OLDEST_READ_VIEW_TRX_ID | OLDEST_READ_VIEW_LSN | VISIBILITY_LAG_IN_MSEC
   ---------------------+--------------------------------------+------------+-------------+----------------------+-------------------------+----------------------+------------------------
   ams-gdb-primary-i2   | MASTER_SESSION_ID                    | us-east-1  | 183537698   |                    0 |                       0 |                    0 |                      0
   ams-gdb-secondary-i1 | cc43165b-bdc6-4651-abbf-4f74f08bf931 | us-west-2  | 183537689   |            183537692 |                20806928 |            183537682 |                      0
   ams-gdb-secondary-i2 | 53303ff0-70b5-411f-bc86-28d7a53f8c19 | us-west-2  | 183537689   |            183537692 |                20806928 |            183537682 |                    677
   ams-gdb-primary-i1   | 5af1e20f-43db-421f-9f0d-2b92774c7d02 | us-east-1  | 183537697   |            183537698 |                20806930 |            183537691 |                     21
   (4 rows)
   ```

   The output includes a row for each DB instance of the global database containing the following columns:
   + **SERVER\_ID** – The server identifier for the DB instance.
   + **SESSION\_ID** – A unique identifier for the current session. A value of `MASTER_SESSION_ID` identifies the writer (primary) DB instance.
   + **AWS\_REGION** – The AWS Region that this DB instance is in. For tables listing AWS Regions by engine, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). 
   + **DURABLE\_LSN** – The LSN made durable in storage.
   + **HIGHEST\_LSN\_RECEIVED** – The highest LSN received by the DB instance from the writer DB instance.
   + **OLDEST\_READ\_VIEW\_TRX\_ID** – The ID of the oldest transaction that the writer DB instance can purge to.
   + **OLDEST\_READ\_VIEW\_LSN** – The oldest LSN used by the DB instance to read from storage.
   + **VISIBILITY\_LAG\_IN\_MSEC** – For readers in the primary DB cluster, how far this DB instance is lagging behind the writer DB instance in milliseconds. For readers in a secondary DB cluster, how far this DB instance is lagging behind the secondary volume in milliseconds.
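
As an illustration of reading this output, the following sketch (a hypothetical helper, not an Aurora API) summarizes the worst visibility lag per AWS Region from rows like those shown above:

```python
def max_visibility_lag_by_region(rows):
    """Given rows from information_schema.aurora_global_db_instance_status
    (as dicts), return the highest VISIBILITY_LAG_IN_MSEC per AWS Region."""
    lags = {}
    for row in rows:
        region = row["AWS_REGION"]
        lags[region] = max(lags.get(region, 0), row["VISIBILITY_LAG_IN_MSEC"])
    return lags

# Values taken from the example query output above.
rows = [
    {"SERVER_ID": "ams-gdb-primary-i2", "AWS_REGION": "us-east-1", "VISIBILITY_LAG_IN_MSEC": 0},
    {"SERVER_ID": "ams-gdb-secondary-i1", "AWS_REGION": "us-west-2", "VISIBILITY_LAG_IN_MSEC": 0},
    {"SERVER_ID": "ams-gdb-secondary-i2", "AWS_REGION": "us-west-2", "VISIBILITY_LAG_IN_MSEC": 677},
    {"SERVER_ID": "ams-gdb-primary-i1", "AWS_REGION": "us-east-1", "VISIBILITY_LAG_IN_MSEC": 21},
]
print(max_visibility_lag_by_region(rows))  # {'us-east-1': 21, 'us-west-2': 677}
```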

To see how these values change over time, consider the following transaction block where a table insert takes an hour.

```
mysql> BEGIN;
mysql> INSERT INTO table1 SELECT Large_Data_That_Takes_1_Hr_To_Insert;
mysql> COMMIT;
```

In some cases, there might be a network disconnect between the primary DB cluster and the secondary DB cluster after the `BEGIN` statement. If so, the secondary DB cluster's **DURABILITY\_LAG\_IN\_MILLISECONDS** value starts increasing. At the end of the `INSERT` statement, the **DURABILITY\_LAG\_IN\_MILLISECONDS** value is 1 hour. However, the **RPO\_LAG\_IN\_MILLISECONDS** value is 0 because all the user data committed on the primary DB cluster and the secondary DB cluster is still the same. As soon as the `COMMIT` statement completes, the **RPO\_LAG\_IN\_MILLISECONDS** value increases.

## Monitoring Aurora PostgreSQL-based global databases


To view the status of an Aurora PostgreSQL-based global database, use the `aurora_global_db_status` and `aurora_global_db_instance_status` functions. 

**Note**  
Only Aurora PostgreSQL supports the `aurora_global_db_status` and `aurora_global_db_instance_status` functions.

**To monitor an Aurora PostgreSQL-based global database**

1. Connect to the global database primary cluster endpoint using a PostgreSQL utility such as psql. For more information about how to connect, see [Connecting to Amazon Aurora Global Database](aurora-global-database-connecting.md).

1. Use the `aurora_global_db_status` function in a psql command to list the primary and secondary volumes. This shows the lag times of the global database secondary DB clusters.

   ```
   postgres=> select * from aurora_global_db_status();
   ```

   ```
   aws_region | highest_lsn_written | durability_lag_in_msec | rpo_lag_in_msec | last_lag_calculation_time  | feedback_epoch | feedback_xmin
   ------------+---------------------+------------------------+-----------------+----------------------------+----------------+---------------
   us-east-1  |         93763984222 |                     -1 |              -1 | 1970-01-01 00:00:00+00     |              0 |             0
   us-west-2  |         93763984222 |                    900 |            1090 | 2020-05-12 22:49:14.328+00 |              2 |    3315479243
   (2 rows)
   ```

   The output includes a row for each DB cluster of the global database containing the following columns:
   + **aws\_region** – The AWS Region that this DB cluster is in. For tables listing AWS Regions by engine, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). 
   + **highest\_lsn\_written** – The highest log sequence number (LSN) currently written on this DB cluster. 

     A *log sequence number (LSN)* is a unique sequential number that identifies a record in the database transaction log. LSNs are ordered such that a larger LSN represents a later transaction.
   + **durability\_lag\_in\_msec** – The timestamp difference between the highest log sequence number written on a secondary DB cluster (`highest_lsn_written`) and the `highest_lsn_written` on the primary DB cluster.
   + **rpo\_lag\_in\_msec** – The recovery point objective (RPO) lag. This lag is the time difference between the most recent user transaction commit stored on a secondary DB cluster and the most recent user transaction commit stored on the primary DB cluster.
   + **last\_lag\_calculation\_time** – The timestamp when values were last calculated for `durability_lag_in_msec` and `rpo_lag_in_msec`.
   + **feedback\_epoch** – The epoch a secondary DB cluster uses when it generates hot standby information.

     *Hot standby* is when a DB cluster can connect and query while the server is in recovery or standby mode. Hot standby feedback is information about the DB cluster when it's in hot standby. For more information, see [Hot standby](https://www.postgresql.org/docs/current/hot-standby.html) in the PostgreSQL documentation.
   + **feedback\_xmin** – The minimum (oldest) active transaction ID used by a secondary DB cluster.

1. Use the `aurora_global_db_instance_status` function to list all secondary DB instances for both the primary DB cluster and secondary DB clusters.

   ```
   postgres=> select * from aurora_global_db_instance_status();
   ```

   ```
   server_id                                   |              session_id              | aws_region | durable_lsn | highest_lsn_rcvd | feedback_epoch | feedback_xmin | oldest_read_view_lsn | visibility_lag_in_msec
   --------------------------------------------+--------------------------------------+------------+-------------+------------------+----------------+---------------+----------------------+------------------------
   apg-global-db-rpo-mammothrw-elephantro-1-n1 | MASTER_SESSION_ID                    | us-east-1  | 93763985102 |                  |                |               |                      |
   apg-global-db-rpo-mammothrw-elephantro-1-n2 | f38430cf-6576-479a-b296-dc06b1b1964a | us-east-1  | 93763985099 |      93763985102 |              2 |    3315479243 |          93763985095 |                     10
   apg-global-db-rpo-elephantro-mammothrw-n1   | 0d9f1d98-04ad-4aa4-8fdd-e08674cbbbfe | us-west-2  | 93763985095 |      93763985099 |              2 |    3315479243 |          93763985089 |                   1017
   (3 rows)
   ```

   The output includes a row for each DB instance of the global database containing the following columns:
   + **server\_id** – The server identifier for the DB instance.
   + **session\_id** – A unique identifier for the current session.
   + **aws\_region** – The AWS Region that this DB instance is in. For tables listing AWS Regions by engine, see [Region availability](Concepts.RegionsAndAvailabilityZones.md#Aurora.Overview.Availability). 
   + **durable\_lsn** – The LSN made durable in storage.
   + **highest\_lsn\_rcvd** – The highest LSN received by the DB instance from the writer DB instance.
   + **feedback\_epoch** – The epoch the DB instance uses when it generates hot standby information.

     *Hot standby* is when a DB instance can connect and query while the server is in recovery or standby mode. Hot standby feedback is information about the DB instance when it's in hot standby. For more information, see the PostgreSQL documentation on [Hot standby](https://www.postgresql.org/docs/current/hot-standby.html).
   + **feedback\_xmin** – The minimum (oldest) active transaction ID used by the DB instance.
   + **oldest\_read\_view\_lsn** – The oldest LSN used by the DB instance to read from storage.
   + **visibility\_lag\_in\_msec** – How far this DB instance is lagging behind the writer DB instance.

To see how these values change over time, consider the following transaction block where a table insert takes an hour.

```
psql> BEGIN;
psql> INSERT INTO table1 SELECT Large_Data_That_Takes_1_Hr_To_Insert;
psql> COMMIT;
```

In some cases, there might be a network disconnect between the primary DB cluster and the secondary DB cluster after the `BEGIN` statement. If so, the secondary DB cluster's `durability_lag_in_msec` value starts increasing. At the end of the `INSERT` statement, the `durability_lag_in_msec` value is 1 hour. However, the `rpo_lag_in_msec` value is 0 because all the user data committed on the primary DB cluster and the secondary DB cluster is still the same. As soon as the `COMMIT` statement completes, the `rpo_lag_in_msec` value increases.

# Using Blue/Green Deployments for Amazon Aurora Global Database
Using Blue/Green Deployments for Aurora Global Database

Amazon RDS Blue/Green Deployments provide a capability for testing database changes safely. For your global database, blue/green deployments let you perform upgrades and maintenance operations with minimal downtime. You can create a fully managed staging environment (green) that mirrors your entire production environment (blue), including the primary cluster and all associated secondary clusters across multiple AWS Regions. The staging environment reflects your production setup, enabling you to reliably test changes before switching over to the new environment. Throughout the process, blue/green deployments keep the staging and production environments in sync.

When you're ready to make the staging environment the new production environment, perform a blue/green switchover. This operation transitions your primary and all secondary Regions to the green environment, with downtime typically under one minute. The service automatically renames clusters, instances, and endpoints to match the original production environment, so your applications can access the new production environment without any configuration changes, minimizing operational overhead.

**Topics**
+ [

## Benefits of using Blue/Green Deployments with Aurora Global Database
](#aurora-blue-green-benefits)
+ [

## How Blue/Green Deployments work with Aurora Global Database
](#aurora-blue-green-howitworks)

## Benefits of using Blue/Green Deployments with Aurora Global Database
Benefits of using Blue/Green Deployments with Aurora Global Database
+ Perform major version, minor version, and system updates, including database patches and OS upgrades, on Aurora global databases while maintaining minimal downtime.
+ Create a production-ready staging (green) environment in both the primary and secondary Regions of the global database to test and implement newer database features.
+ Switch over safely using built-in switchover guardrails, with downtime typically under one minute, depending on your workload.
+ Maintain disaster recovery capabilities during the blue/green switchover process, allowing Global Database failover during the switchover.
+ Direct your traffic to the new production environment without requiring any application changes.

## How Blue/Green Deployments work with Aurora Global Database


 For details on how to create, view, switch over, and delete a blue/green deployment, see [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md). You can use a blue/green deployment for major or minor version upgrades, database performance improvements through parameter updates, and adoption of new database features, all with minimal downtime.

The following diagram shows a blue/green deployment for an Aurora global database with one secondary Region, before and after a blue/green switchover.

![\[An example of a Blue/Green deployment for Aurora Global Database.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/Aurora%20Global%20Database_Blue_Green_example.png)


You create a blue/green deployment from the primary Region of your global database. You select the engine configuration for the green environment, such as the major or minor engine version, DB parameter group, and DB cluster parameter group. Amazon RDS copies the blue environment's topology to the green environment. The AWS Management Console shows a visual representation similar to the following.

![\[Summary of a Blue/Green Deployment.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/auroraglobaldatabase_bluegreen.png)


**Note**  
Global database failover is supported during a blue/green switchover, but global database switchover is not.

When you initiate a global failover during an RDS blue/green switchover, the target region automatically rolls back to the blue environment or rolls forward to the green environment before the global failover occurs.

Follow the same workflow for global databases as for other Aurora DB clusters, with instructions specific to global databases noted in the relevant steps of [Using Amazon Aurora Blue/Green Deployments for database updates](blue-green-deployments.md).
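As a sketch of this workflow, the following AWS CLI commands create a blue/green deployment and later switch over. All identifiers, ARNs, and the target engine version are placeholder assumptions; substitute values from your own environment.

```shell
# Create a blue/green deployment, using the global cluster ARN as the source
# and a newer engine version for the green environment
aws rds create-blue-green-deployment \
    --blue-green-deployment-name my-global-bg \
    --source arn:aws:rds:us-east-1:123456789012:global-cluster:my-global-db \
    --target-engine-version 8.0.mysql_aurora.3.08.0

# When the green environment is ready and in sync, perform the switchover
aws rds switchover-blue-green-deployment \
    --blue-green-deployment-identifier bgd-EXAMPLE1234567890 \
    --switchover-timeout 300
```

The `--switchover-timeout` value (in seconds) bounds how long the switchover waits before rolling back, which is useful for keeping downtime predictable.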

# Using Amazon Aurora global databases with other AWS services

You can use your Aurora global databases with other AWS services, such as Amazon S3 and AWS Lambda. Doing so requires that all Aurora DB clusters in your global database have the same privileges, external functions, and so on in the respective AWS Regions. Because a read-only Aurora secondary DB cluster in an Aurora global database can be promoted to the role of primary, we recommend that you set up write privileges ahead of time, on all Aurora DB clusters for any services you plan to use with your Aurora global database. 

The following procedures summarize the actions to take for each AWS service.

**To invoke AWS Lambda functions from an Aurora global database**

1. For all the Aurora clusters that make up the Aurora global database, perform the procedures in [Invoking a Lambda function from an Amazon Aurora MySQL DB cluster](AuroraMySQL.Integrating.Lambda.md). 

1. For each Aurora cluster in the global database, set the `aws_default_lambda_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. 

1. To permit database users in an Aurora global database to invoke Lambda functions, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with each cluster in the Aurora global database. 

1. Configure each cluster in the Aurora global database to allow outbound connections to Lambda. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 

**To load data from Amazon S3**

1. For all the Aurora clusters that make up the Aurora global database, perform the procedures in [Loading data into an Amazon Aurora MySQL DB cluster from text files in an Amazon S3 bucket](AuroraMySQL.Integrating.LoadFromS3.md). 

1. For each Aurora cluster in the global database, set either the `aurora_load_from_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_load_from_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`. 

1. To permit database users in an Aurora global database to access S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with each Aurora cluster in the global database. 

1.  Configure each Aurora cluster in the global database to allow outbound connections to S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 
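Steps 2 and 3 above can be sketched with the AWS CLI as follows; repeat the commands in each Region of the global database. The cluster identifier, parameter group name, account ID, and role name are placeholder assumptions.

```shell
# Associate the IAM role with the cluster so Aurora can assume it
aws rds add-role-to-db-cluster \
    --db-cluster-identifier my-cluster \
    --role-arn arn:aws:iam::123456789012:role/AuroraS3Role

# Set the default S3 role in the DB cluster parameter group
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=aws_default_s3_role,ParameterValue=arn:aws:iam::123456789012:role/AuroraS3Role,ApplyMethod=pending-reboot"
```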

**To save queried data to Amazon S3**

1. For all the Aurora clusters that make up the Aurora global database, perform the procedures in [Saving data from an Amazon Aurora MySQL DB cluster into text files in an Amazon S3 bucket](AuroraMySQL.Integrating.SaveIntoS3.md) or [Exporting data from an Aurora PostgreSQL DB cluster to Amazon S3](postgresql-s3-export.md). 

1. For each Aurora cluster in the global database, set either the `aurora_select_into_s3_role` or `aws_default_s3_role` DB cluster parameter to the Amazon Resource Name (ARN) of the new IAM role. If an IAM role isn't specified for `aurora_select_into_s3_role`, Aurora uses the IAM role specified in `aws_default_s3_role`. 

1. To permit database users in an Aurora global database to access S3, associate the role that you created in [Creating an IAM role to allow Amazon Aurora to access AWS services](AuroraMySQL.Integrating.Authorizing.IAM.CreateRole.md) with each Aurora cluster in the global database. 

1. Configure each Aurora cluster in the global database to allow outbound connections to S3. For instructions, see [Enabling network communication from Amazon Aurora to other AWS services](AuroraMySQL.Integrating.Authorizing.Network.md). 
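Because a secondary cluster can be promoted to primary at any time, it's worth confirming that the S3 role parameters are set consistently in every Region before you rely on them. A quick check along these lines (the parameter group name is a placeholder) lists the relevant values:

```shell
# Show the S3 role parameters for a Region's cluster parameter group
aws rds describe-db-cluster-parameters \
    --db-cluster-parameter-group-name my-cluster-params \
    --query "Parameters[?ParameterName=='aurora_select_into_s3_role' || ParameterName=='aws_default_s3_role'].{Name:ParameterName,Value:ParameterValue}"
```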

## Using Amazon Application Recovery Controller (ARC) with Aurora Global Database


When planning your business continuity and disaster recovery strategy, you need to orchestrate recovery across application stacks and their dependencies. [Amazon Application Recovery Controller (ARC)](https://docs.aws.amazon.com/r53recovery/latest/dg/region-switch.html) integrates with Aurora Global Database to automate this process through ARC Region Switch, a centralized solution for automated multi-Region application recovery. Region Switch orchestrates failover steps across AWS accounts and Regions, provides real-time recovery dashboards, and generates compliance reports by aggregating data across resources and accounts. Learn more about [using Region Switch for Aurora Global Database](https://docs.aws.amazon.com/r53recovery/latest/dg/aurora-global-database-block.html).

# Upgrading an Amazon Aurora global database

Upgrading an Aurora global database follows the same procedures as upgrading Aurora DB clusters. However, there are some important differences to note before you start the process.

We recommend that you upgrade the primary and secondary DB clusters to the same version. You can perform a managed cross-Region database failover on an Aurora global database only if the primary and secondary DB clusters have the same major and minor engine versions. Depending on the minor engine version, the patch levels can differ. For more information, see [Patch level compatibility for managed cross-Region switchovers and failovers](#aurora-global-database-upgrade.minor.incompatibility).

## Major version upgrades

When you perform a major version upgrade of an Amazon Aurora global database, you upgrade the global database cluster instead of the individual clusters that it contains.

To learn how to upgrade an Aurora PostgreSQL global database to a higher major version, see [Major upgrades for global databases](USER_UpgradeDBInstance.PostgreSQL.MajorVersion.md#USER_UpgradeDBInstance.PostgreSQL.GlobalDB).

**Note**  
With an Aurora global database based on Aurora PostgreSQL, you can't perform a major version upgrade of the Aurora DB engine if the recovery point objective (RPO) feature is turned on. For information about the RPO feature, see [Managing RPOs for Aurora PostgreSQL–based global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-manage-recovery).

To learn how to upgrade an Aurora MySQL global database to a higher major version, see [In-place major upgrades for global databases](AuroraMySQL.Upgrading.Procedure.md#AuroraMySQL.Upgrading.GlobalDB).

**Note**  
With an Aurora global database based on Aurora MySQL, you can perform an in-place upgrade from Aurora MySQL version 2 to version 3 only if you set the `lower_case_table_names` parameter to its default value and you reboot your global database.  
To perform a major version upgrade to Aurora MySQL version 3 when using `lower_case_table_names`, use the following process:  

1. Remove all secondary Regions from the global cluster. Follow the steps in [Removing a cluster from an Amazon Aurora global database](aurora-global-database-detaching.md).

1. Upgrade the engine version of the primary Region to Aurora MySQL version 3. Follow the steps in [How to perform an in-place upgrade](AuroraMySQL.Upgrading.Procedure.md).

1. Add secondary Regions to the global cluster. Follow the steps in [Adding an AWS Region to an Amazon Aurora global database](aurora-global-database-attaching.md).

You can also use the snapshot restore method instead. For more information, see [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).
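The remove-upgrade-add process can be sketched with the AWS CLI as follows. The cluster identifiers, account ID, Regions, and engine version are placeholder assumptions; substitute your own values and run the detach and re-add steps once per secondary Region.

```shell
# 1. Detach a secondary cluster from the global database
aws rds remove-from-global-cluster \
    --global-cluster-identifier my-global-db \
    --db-cluster-identifier arn:aws:rds:us-west-2:123456789012:cluster:my-secondary-cluster \
    --region us-west-2

# 2. Upgrade the primary cluster in place to an Aurora MySQL version 3 release
aws rds modify-db-cluster \
    --db-cluster-identifier my-primary-cluster \
    --engine-version 8.0.mysql_aurora.3.04.0 \
    --allow-major-version-upgrade \
    --apply-immediately \
    --region us-east-1

# 3. Re-add a secondary Region by creating a new cluster in the global database
aws rds create-db-cluster \
    --db-cluster-identifier my-new-secondary-cluster \
    --engine aurora-mysql \
    --engine-version 8.0.mysql_aurora.3.04.0 \
    --global-cluster-identifier my-global-db \
    --region us-west-2
```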

## Minor version upgrades

You can upgrade your Aurora global database to a newer minor engine version across all Regions with a single managed operation and minimal downtime. This eliminates the need to manually upgrade each cluster individually and reduces the operational overhead of managing a global cluster.

### Understanding global database minor version upgrades


You can upgrade the minor version of your global database through the RDS API, AWS CLI, or AWS Management Console. This single operation orchestrates the upgrade across your primary cluster and all secondary (mirror) clusters. If issues occur during the upgrade, the service automatically rolls back to the existing version.

**Note**  
This managed capability is currently supported only for Aurora PostgreSQL-compatible engines.

When you initiate a global database minor version upgrade using the `modify-global-cluster` command, you specify the target engine version, and the service coordinates the upgrade across all clusters. This upgrade is applied immediately.

For Linux, macOS, or Unix:

```
aws rds modify-global-cluster \
    --global-cluster-identifier global_cluster_identifier \
    --engine-version target_engine_version
```

For Windows:

```
aws rds modify-global-cluster ^
    --global-cluster-identifier global_cluster_identifier ^
    --engine-version target_engine_version
```
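After initiating the upgrade, you can track its progress by describing the global cluster. The identifier below is a placeholder; the status returns to `available` once the upgrade completes across all Regions.

```shell
# Check the status and engine version of the global cluster
aws rds describe-global-clusters \
    --global-cluster-identifier global_cluster_identifier \
    --query 'GlobalClusters[0].{Status:Status,EngineVersion:EngineVersion}'
```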

### Considerations for minor version upgrades


When planning a minor version upgrade for your global database, consider the following:
+ The managed capability applies only to minor version upgrades. Patch version upgrades continue to use existing system-update maintenance actions.
+ The managed capability is supported only for Aurora PostgreSQL global clusters.

 You can upgrade each cluster in your global cluster topology individually. If you choose this approach, upgrade all secondary clusters before upgrading the primary cluster. When upgrading, ensure your primary and secondary DB clusters are upgraded to the same minor version and patch level. To update the patch level, apply all pending maintenance actions on the secondary cluster. To learn how to upgrade an Aurora PostgreSQL global database to a higher minor version, see [How to perform minor version upgrades and apply patches](USER_UpgradeDBInstance.PostgreSQL.MinorUpgrade.md#USER_UpgradeDBInstance.PostgreSQL.Minor).
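The per-cluster approach described above can be sketched with the AWS CLI. Upgrade the secondaries first, then the primary, using the same target version throughout; the cluster identifiers, Regions, and engine version are placeholder assumptions.

```shell
# Upgrade each secondary cluster first (run in each secondary Region)
aws rds modify-db-cluster \
    --db-cluster-identifier my-secondary-cluster \
    --engine-version 15.8 \
    --apply-immediately \
    --region eu-west-1

# Then upgrade the primary cluster to the same version
aws rds modify-db-cluster \
    --db-cluster-identifier my-primary-cluster \
    --engine-version 15.8 \
    --apply-immediately \
    --region us-east-1
```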

### Minor version upgrades for Aurora MySQL global database


To learn how to upgrade an Aurora MySQL global database to a higher minor version, see [Upgrading Aurora MySQL by modifying the engine version](AuroraMySQL.Updates.Patching.ModifyEngineVersion.md).

Before you perform the upgrade, review the following considerations:
+ Upgrading the minor version of a secondary cluster doesn't affect availability or usage of the primary cluster in any way.
+ A secondary cluster must have at least one DB instance to perform a minor upgrade.
+ If you upgrade an Aurora MySQL global database to version 2.11.\*, you must upgrade your primary and secondary DB clusters to exactly the same version, including the patch level.
+ To support managed cross-Region switchovers or failovers, you might need to upgrade your primary and secondary DB clusters to the exact same version, including the patch level. This requirement applies to Aurora MySQL and some Aurora PostgreSQL versions. For a list of versions that allow switchovers and failovers between clusters running different patch levels, see [Patch level compatibility for managed cross-Region switchovers and failovers](#aurora-global-database-upgrade.minor.incompatibility).

### Patch level compatibility for managed cross-Region switchovers and failovers

If your Aurora Global Database is running one of the following minor engine versions, you can perform managed cross-Region switchovers or failovers even if the patch levels of your primary and secondary DB clusters don't match. For minor engine versions lower than the ones on this list, your primary and secondary DB clusters must be running the same major, minor, and patch levels to perform managed cross-Region switchovers or failovers. Make sure to review the version information and the notes in the following table when planning upgrades for your primary cluster, secondary clusters, or both.

**Note**  
 For manual cross-Region failovers, you can perform the failover process as long as the target secondary DB cluster is running the same major and minor engine version as the primary DB cluster. In this case, the patch levels don't need to match.   
 If your engine versions require identical patch levels, you can perform the failover manually by following the steps in [Performing manual failovers for Aurora global databases](aurora-global-database-disaster-recovery.md#aurora-global-database-failover.manual-unplanned). 


| Database engine | Minor engine versions | Notes | 
| --- | --- | --- | 
| Aurora MySQL | No minor versions | None of the Aurora MySQL minor versions allow managed cross-Region switchovers or failovers with differing patch levels between the primary and secondary DB clusters.  | 
| Aurora PostgreSQL |  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-upgrade.html)  | With the engine versions listed in the previous column, you can perform managed cross-Region switchovers or failovers from a primary DB cluster with one patch level to a secondary DB cluster with a different patch level. With minor versions lower than these, you can perform managed cross-Region switchovers or failovers only if the patch levels of the primary and secondary DB clusters match. When you update a cluster in your global database to any of the following patch versions, you won't be able to perform cross-Region switchovers or failovers until all of the clusters in your global database are running one of these patch versions or a newer one.  Patch versions 16.1.6, 16.2.4, 16.3.2, and 16.4.2 Patch versions 15.3.8, 15.4.9, 15.5.6, 15.6.4, 15.7.2, and 15.8.2 Patch versions 14.8.8, 14.9.9, 14.10.6, 14.11.4, 14.12.2, and 14.13.2  | 