

# Overview of Amazon Aurora Blue/Green Deployments

By using Amazon Aurora Blue/Green Deployments, you can make and test database changes before implementing them in a production environment. A *blue/green deployment* creates a staging environment that copies the production environment. In a blue/green deployment, the *blue environment* is the current production environment. The *green environment* is the staging environment and stays in sync with the current production environment.

You can make changes to the Aurora DB cluster in the green environment without affecting production workloads. For example, you can upgrade the major or minor DB engine version or change database parameters in the staging environment. You can thoroughly test changes in the green environment. When ready, you can *switch over* the environments to transition the green environment to be the new production environment. The switchover typically takes under a minute with no data loss and no need for application changes.

Because the green environment is a copy of the topology of the production environment, the DB cluster and all of its DB instances are copied in the deployment. The green environment also includes the features used by the DB cluster, such as DB cluster snapshots, Performance Insights, Enhanced Monitoring, and Aurora Serverless v2.

Amazon Aurora Blue/Green Deployments support Amazon RDS Proxy and smart drivers. These solutions reduce writer node upgrade downtime during switchover by detecting the topology change and redirecting connections to the new production environment without waiting for DNS propagation.

**Note**  
Blue/Green Deployments are supported for Aurora MySQL, Aurora PostgreSQL, and Aurora Global Database. For Amazon RDS availability, see [Overview of Amazon RDS Blue/Green Deployments](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/blue-green-deployments-overview.html) in the *Amazon RDS User Guide*.

**Topics**
+ [Region and version availability](#blue-green-deployments-region-version-availability)
+ [Benefits of using Amazon RDS Blue/Green Deployments](#blue-green-deployments-benefits)
+ [Workflow of a blue/green deployment](#blue-green-deployments-major-steps)
+ [Authorizing access to Amazon Aurora blue/green deployment operations](blue-green-deployments-authorizing-access.md)
+ [Limitations and considerations for Amazon Aurora blue/green deployments](blue-green-deployments-considerations.md)
+ [Best practices for Amazon Aurora blue/green deployments](blue-green-deployments-best-practices.md)

## Region and version availability


Feature availability and support vary across specific versions of each database engine, and across AWS Regions. For more information, see [Supported Regions and Aurora DB engines for Blue/Green Deployments](Concepts.Aurora_Fea_Regions_DB-eng.Feature.BlueGreenDeployments.md).

## Benefits of using Amazon RDS Blue/Green Deployments

By using Amazon RDS Blue/Green Deployments, you can stay current on security patches, improve database performance, and adopt newer database features with short, predictable downtime. Blue/green deployments reduce the risks and downtime for database updates, such as major or minor engine version upgrades.

Blue/green deployments provide the following benefits:
+ Easily create a production-ready staging environment.
+ Automatically replicate database changes from the production environment to the staging environment.
+ Test database changes in a safe staging environment without affecting the production environment.
+ Stay current with database patches and system updates.
+ Implement and test newer database features.
+ Switch over your staging environment to be the new production environment without changes to your application.
+ Safely switch over through the use of built-in switchover guardrails.
+ Eliminate data loss during switchover.
+ Switch over quickly, typically under a minute depending on your workload.

## Workflow of a blue/green deployment

Complete the following major steps when you use a blue/green deployment for Aurora DB cluster updates.

1. Identify a production DB cluster that requires updates.

   The following image shows an example of a production DB cluster.  
![\[Production (blue) Aurora DB cluster in a blue/green deployment\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/blue-green-deployment-blue-environment-aurora.png)

1. Create the blue/green deployment. For instructions, see [Creating a blue/green deployment in Amazon Aurora](blue-green-deployments-creating.md).

   The following image shows an example of a blue/green deployment of the production environment from step 1. While creating the blue/green deployment, RDS copies the complete topology and configuration of the Aurora DB cluster to create the green environment. The names of the copied DB cluster and DB instances are appended with `-green-random-characters`. The staging environment in the image contains the DB cluster (auroradb-green-**abc123**). It also contains the three DB instances in the DB cluster (auroradb-instance1-green-**abc123**, auroradb-instance2-green-**abc123**, and auroradb-instance3-green-**abc123**).  
![\[Blue/green deployment for Amazon Aurora\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/blue-green-deployment-aurora.png)

   When you create the blue/green deployment, you can specify a higher DB engine version and a different DB cluster parameter group for the DB cluster in the green environment. You can also specify a different DB parameter group for the DB instances in the DB cluster.

   RDS also configures replication from the primary DB instance in the blue environment to the primary DB instance in the green environment.
**Important**  
For Aurora MySQL version 3, after you create the blue/green deployment, the DB cluster in the green environment does not allow write operations by default. However, this doesn't apply to users who have the `CONNECTION_ADMIN` privilege, including the Aurora master user. Users with this privilege can override the `read_only` behavior. For more information, see [Role-based privilege model](AuroraMySQL.Compare-80-v3.md#AuroraMySQL.privilege-model).

1. Make changes to the staging environment.

   For example, you might change the DB instance class used by one or more DB instances in the green environment.

   For information about modifying a DB cluster, see [Modifying an Amazon Aurora DB cluster](Aurora.Modifying.md).

1. Test your staging environment.

   During testing, we recommend that you keep your databases in the green environment read only. Enable write operations on the green environment with caution because they can result in replication conflicts. They can also result in unintended data in the production databases after switchover. To enable write operations for Aurora MySQL, set the `read_only` parameter to `0`, then reboot the DB instance. For Aurora PostgreSQL, set the `default_transaction_read_only` parameter to `off` at the session level. If you need to test your green environment with Amazon RDS Proxy, create a new RDS Proxy and register the green cluster with it. This allows you to test the green environment independently without affecting traffic to your production blue environment. Delete the test proxy when testing is complete.

1. When ready, switch over to transition the staging environment to be the new production environment. For instructions, see [Switching a blue/green deployment in Amazon Aurora](blue-green-deployments-switching.md).

   The switchover results in downtime. The downtime is usually under one minute, but it can be longer depending on your workload.

   The following image shows the DB clusters after the switchover.  
![\[DB cluster and DB instances after switching over an Amazon Aurora blue/green deployment\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/blue-green-deployment-switchover-aurora.png)

   After the switchover, the Aurora DB cluster in the green environment becomes the new production DB cluster. The names and endpoints in the current production environment are assigned to the newly switched over production environment, requiring no changes to your application. As a result, your production traffic now flows to the new production environment. The DB cluster and DB instances in the blue environment are renamed by appending `-oldn` to the current name, where `n` is a number. For example, assume the name of the DB instance in the blue environment is `auroradb-instance-1`. After switchover, the DB instance name might be `auroradb-instance-1-old1`.

   In the example in the image, the following changes occur during switchover:
   + The green environment DB cluster `auroradb-green-abc123` becomes the production DB cluster named `auroradb`.
   + The green environment DB instance named `auroradb-instance1-green-abc123` becomes the production DB instance `auroradb-instance1`.
   + The green environment DB instance named `auroradb-instance2-green-abc123` becomes the production DB instance `auroradb-instance2`.
   + The green environment DB instance named `auroradb-instance3-green-abc123` becomes the production DB instance `auroradb-instance3`.
   + The blue environment DB cluster named `auroradb` becomes `auroradb-old1`.
   + The blue environment DB instance named `auroradb-instance1` becomes `auroradb-instance1-old1`.
   + The blue environment DB instance named `auroradb-instance2` becomes `auroradb-instance2-old1`.
   + The blue environment DB instance named `auroradb-instance3` becomes `auroradb-instance3-old1`.

1. If you no longer need a blue/green deployment, you can delete it. For instructions, see [Deleting a blue/green deployment in Amazon Aurora](blue-green-deployments-deleting.md).

   After switchover, the previous production environment isn't deleted so that you can use it for regression testing, if necessary.
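The renaming scheme described in the switchover step is mechanical, so tooling that tracks cluster and instance identifiers can anticipate it. The following sketch is illustrative only; the suffix formats are taken from the example names above:

```python
import re

def production_name(green_name: str) -> str:
    """Strip the '-green-<random>' suffix that RDS appends to staged
    resources; after switchover, the green resource takes over the
    original production name."""
    return re.sub(r"-green-[a-z0-9]+$", "", green_name)

def retired_name(blue_name: str, n: int = 1) -> str:
    """After switchover, the old blue resource is renamed by appending
    '-oldn', where n is a number (1 for the first switchover)."""
    return f"{blue_name}-old{n}"

print(production_name("auroradb-instance1-green-abc123"))  # auroradb-instance1
print(retired_name("auroradb"))                            # auroradb-old1
```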

# Authorizing access to Amazon Aurora blue/green deployment operations

Users must have the required permissions to perform operations related to blue/green deployments. You can create IAM policies that grant users and roles permission to perform specific API operations on the specified resources they need. You can then attach those policies to the IAM permission sets or roles that require those permissions. For more information, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).

The user who creates a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:CreateBlueGreenDeployment`
+ `rds:AddTagsToResource` 
+ `rds:CreateDBCluster` 
+ `rds:CreateDBInstance` 
+ `rds:CreateDBClusterEndpoint` 

The user who switches over a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:SwitchoverBlueGreenDeployment`
+ `rds:ModifyDBCluster` 
+ `rds:PromoteReadReplicaDBCluster` 

The user who deletes a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:DeleteBlueGreenDeployment`
+ `rds:DeleteDBCluster` 
+ `rds:DeleteDBInstance` 
+ `rds:DeleteDBClusterEndpoint` 

Aurora provisions and modifies resources in the staging environment on your behalf. These resources include DB instances that use an internally defined naming convention. Therefore, attached IAM policies can't contain partial resource name patterns such as `my-db-prefix-*`. Only wildcards (`*`) are supported. In general, we recommend using resource tags and other supported attributes to control access to these resources, rather than wildcards. For more information, see [Actions, resources, and condition keys for Amazon RDS](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazonrds.html).
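As an illustrative sketch, a single policy granting the create, switchover, and delete permissions listed above might look like the following. The action lists come from this section; `"Resource": "*"` reflects the wildcard requirement just described, and in practice you would scope access further with tags or condition keys:

```python
import json

# Actions listed in this section for each blue/green deployment operation.
ACTIONS = [
    "rds:CreateBlueGreenDeployment",
    "rds:AddTagsToResource",
    "rds:CreateDBCluster",
    "rds:CreateDBInstance",
    "rds:CreateDBClusterEndpoint",
    "rds:SwitchoverBlueGreenDeployment",
    "rds:ModifyDBCluster",
    "rds:PromoteReadReplicaDBCluster",
    "rds:DeleteBlueGreenDeployment",
    "rds:DeleteDBCluster",
    "rds:DeleteDBInstance",
    "rds:DeleteDBClusterEndpoint",
]

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlueGreenDeploymentLifecycle",
            "Effect": "Allow",
            "Action": ACTIONS,
            # Partial resource name patterns aren't supported for staged
            # resources, so a wildcard is used; restrict with tags instead.
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```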

## Additional permissions for Aurora Global Database Blue/Green Deployments

When creating blue/green deployments for Aurora Global Database clusters, users need the following permissions, in addition to those listed above, to manage the global cluster topology.

The user who creates a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:CreateGlobalCluster`

The user who switches over a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:ModifyGlobalCluster`
+ `rds:PromoteReadReplicaDBCluster`

The user who deletes a blue/green deployment must have permissions to perform the following RDS operations:
+ `rds:DeleteGlobalCluster`

# Limitations and considerations for Amazon Aurora blue/green deployments

Blue/green deployments in Amazon RDS require careful consideration of factors such as replication slots, resource management, instance sizing, and potential impacts on database performance. The following sections provide guidance to help you optimize your deployment strategy to ensure minimal downtime, seamless transitions, and effective management of your database environment.

**Topics**
+ [Limitations for blue/green deployments](#blue-green-deployments-limitations)
+ [Aurora Global Database limitations for blue/green deployments](#blue-green-deployments-limitations-agd)
+ [Considerations for blue/green deployments](#blue-green-deployments-consider)

## Limitations for blue/green deployments

The following limitations apply to blue/green deployments.

**Topics**
+ [General limitations for blue/green deployments](#blue-green-deployments-limitations-general)
+ [Aurora MySQL limitations for blue/green deployments](#blue-green-deployments-limitations-mysql)
+ [Aurora PostgreSQL limitations for blue/green deployments](#blue-green-deployments-limitations-postgres-logical)

### General limitations for blue/green deployments

The following general limitations apply to blue/green deployments:
+ You can't stop and start a cluster that is part of a blue/green deployment.
+ Blue/green deployments don't support managing master user passwords with AWS Secrets Manager.
+ If you attempt to force a backtrack on the blue DB cluster, the blue/green deployment breaks and switchover is blocked. 
+ During switchover, the blue and green environments can't have zero-ETL integrations with Amazon Redshift. You must delete the integration, perform the switchover, and then recreate the integration.
+ The Event Scheduler (`event_scheduler` parameter) must be disabled on the green environment when you create a blue/green deployment. This prevents events from being generated in the green environment and causing inconsistencies.
+ Auto Scaling policies configured on the blue DB cluster are not copied to the green environment. You must reconfigure them after switchover, regardless of whether they were initially set up on the blue or green environment.
+ You can't change an unencrypted DB cluster into an encrypted DB cluster. In addition, you can't change an encrypted DB cluster into an unencrypted DB cluster.
+ You can't change a blue DB cluster to a higher engine version than its corresponding green DB cluster.
+ The resources in the blue environment and green environment must be in the same AWS account.
+ If you use Amazon RDS Proxy, you must register your blue cluster with the proxy before creating a blue/green deployment. If a blue/green deployment already exists for a given blue cluster, registering that blue cluster to Amazon RDS Proxy will be blocked.
+ Amazon RDS Proxy with blue/green deployments is not supported for Aurora Global Databases.
+ Blue/green deployments aren't supported for the following features:
  + Cross-Region read replicas
  + Aurora Serverless v1 DB clusters
  + CloudFormation

### Aurora MySQL limitations for blue/green deployments


The following limitations apply to Aurora MySQL blue/green deployments:
+ The source DB cluster can't contain any databases named `tmp`. Databases with this name will not be copied to the green environment.
+ The blue DB cluster can't be an external binlog replica.
+ If the source DB cluster has backtrack enabled, the green DB cluster is created without backtracking support. This is because backtracking doesn't work with binary log (binlog) replication, which is required for blue/green deployments. For more information, see [Backtracking an Aurora DB cluster](AuroraMySQL.Managing.Backtrack.md).
+ Blue/green deployments don't support the AWS JDBC Driver for MySQL. For more information, see [Known Limitations](https://github.com/awslabs/aws-mysql-jdbc?tab=readme-ov-file#known-limitations) on GitHub.

### Aurora PostgreSQL limitations for blue/green deployments


The following limitations apply to Aurora PostgreSQL blue/green deployments:
+ [Unlogged](https://www.postgresql.org/docs/16/sql-createtable.html#SQL-CREATETABLE-UNLOGGED) tables aren't replicated to the green environment unless the `rds.logically_replicate_unlogged_tables` parameter is set to `1` on the blue DB cluster. Don't modify this parameter value after you create a blue/green deployment to avoid possible replication errors on unlogged tables.
+ The blue DB cluster can't be a logical source (publisher) or replica (subscriber).
+ If the blue DB cluster is configured as the foreign server of a foreign data wrapper (FDW) extension, you must use the cluster endpoint name instead of IP addresses. This allows the configuration to remain functional after switchover.
+ In a blue/green deployment, each database requires a logical replication slot. As the number of databases grows, resource overhead increases and can potentially lead to replication lag, especially if the DB cluster isn't sufficiently scaled. The impact depends on factors such as database workload and the number of connections. To mitigate this, consider scaling up your DB instance class or reducing the number of databases on the source cluster.
+ Blue/green deployments are supported for Babelfish for Aurora PostgreSQL only for version 15.7 and higher 15 versions, and 16.3 and higher 16 versions.
+ If you want to capture execution plans in Aurora Replicas, you must provide the blue DB cluster endpoint when calling the `apg_plan_mgmt.create_replica_plan_capture` function. This ensures that plan captures continue to work after switchover. For more information, see [Capturing Aurora PostgreSQL execution plans in Replicas](AuroraPostgreSQL.QPM.Plancapturereplicas.md).
+ The logical replication [apply process](https://www.postgresql.org/docs/current/logical-replication-architecture.html) in the green environment is single-threaded. If the blue environment generates a high volume of write traffic, the green environment might not be able to keep up. This can lead to replication lag or failure, especially for workloads that produce continuous high write throughput. Make sure to test your workloads thoroughly. For scenarios that require major version upgrades and handling high-volume write workloads, consider alternative approaches such as using [AWS Database Migration Service (AWS DMS)](https://docs.aws.amazon.com/dms/latest/userguide/data-migrations.html) or [self-managed logical replication](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraPostgreSQL.MajorVersionUpgrade.html).
+ Creating new partitions on partitioned tables isn't supported during blue/green deployments for Aurora. Creating new partitions involves data definition language (DDL) operations such as `CREATE TABLE`, which aren't replicated from the blue environment to the green environment. However, existing partitioned tables and their data will be replicated to the green environment.
+ The following limitations apply to PostgreSQL extensions:
  + The `pg_partman` extension must be disabled in the blue environment when you create a blue/green deployment. The extension performs DDL operations such as `CREATE TABLE`, which break logical replication from the blue environment to the green environment.
  + The `pg_cron` extension must remain disabled on all green databases after the blue/green deployment is created. The extension has background workers that run as superuser and bypass the read-only setting of the green environment, which might cause replication conflicts.
  + The `apg_plan_mgmt` extension must have the `apg_plan_mgmt.capture_plan_baselines` parameter set to `off` on all green databases to avoid primary key conflicts if an identical plan is captured in the blue environment. For more information, see [Overview of Aurora PostgreSQL query plan management](AuroraPostgreSQL.Optimize.overview.md).
  + The `pglogical` and `pgactive` extensions must be disabled on the blue environment when you create a blue/green deployment. After you switch over the green environment to be the new production environment, you can enable the extensions again. In addition, the blue database can’t be a logical subscriber of an external instance.
  + If you're using the `pgAudit` extension, it must remain in the shared libraries (`shared_preload_libraries`) on the custom DB parameter groups for both the blue and the green DB instances. For more information, see [Setting up the pgAudit extension](Appendix.PostgreSQL.CommonDBATasks.pgaudit.basic-setup.md).

#### Logical replication-specific limitations for blue/green deployments


PostgreSQL has certain restrictions related to logical replication, which translate to limitations when creating blue/green deployments for Aurora PostgreSQL DB clusters.

The following table describes logical replication limitations that apply to blue/green deployments for Aurora PostgreSQL. For more information, see [Restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) in the PostgreSQL logical replication documentation.


| Limitation | Explanation | 
| --- | --- | 
| Data definition language (DDL) statements, such as CREATE TABLE and CREATE SCHEMA, aren't replicated from the blue environment to the green environment. |  If Aurora detects a DDL change in the blue environment, your green databases enter a state of **Replication degraded**. You must delete the blue/green deployment and all green databases, then recreate it.  | 
| Data control language (DCL) statements, such as GRANT and REVOKE, aren't replicated from the blue environment to the green environment. |  If Aurora detects an attempt to execute a DCL statement in the blue environment, you will see a warning message. There is no configuration or API available to change this behavior, as it is a limitation of the blue/green deployment process.  | 
| NEXTVAL operations on sequence objects aren't synchronized between the blue environment and the green environment. |  During switchover, Aurora increments sequence values in the green environment to match those in the blue environment. If you have thousands of sequences, this can delay switchover.  | 
| Large objects in the blue environment aren't replicated to the green environment. This includes both existing large objects and any newly created or modified large objects during the blue/green deployment process. |  If Aurora detects the creation or modification of large objects in the blue environment that are stored in the `pg_largeobject` system table, your green databases enter a state of **Replication degraded**. You must delete the blue/green deployment and all green databases, then recreate it.  | 
|  Refreshing materialized views breaks replication.  |  Refreshing materialized views in the blue environment breaks replication to the green environment. Refrain from refreshing materialized views in the blue environment. After a switchover, you can manually refresh them using the [REFRESH MATERIALIZED VIEW](https://www.postgresql.org/docs/current/sql-refreshmaterializedview.html) command, or schedule a refresh.  | 
|  UPDATE and DELETE operations aren't permitted on tables that don't have a primary key.  |  Before you create a blue/green deployment, make sure that all tables have a primary key or use `REPLICA IDENTITY FULL`. However, only use `REPLICA IDENTITY FULL` if no primary or unique key exists, as it affects replication performance. For more information, see the [PostgreSQL documentation](https://www.postgresql.org/docs/current/logical-replication-restrictions.html).  | 
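Because replicating UPDATE and DELETE operations requires a primary key or `REPLICA IDENTITY FULL`, it's worth scanning the blue databases before creating a deployment. One way to find ordinary tables that have neither is a catalog query like the following, an illustrative sketch shown as a Python string so it can be reused from a script:

```python
# Illustrative query: list ordinary tables outside the system schemas
# that have no primary key and still use the default replica identity
# (which relies on a primary key for UPDATE/DELETE replication).
TABLES_WITHOUT_PK_SQL = """
SELECT n.nspname AS table_schema, c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND c.relreplident = 'd'
  AND NOT EXISTS (
      SELECT 1 FROM pg_index i
      WHERE i.indrelid = c.oid AND i.indisprimary
  );
"""

print(TABLES_WITHOUT_PK_SQL)
```

Run the query against each blue database (for example, with `psql`) and add primary keys, or set `REPLICA IDENTITY FULL` as a last resort, before creating the deployment.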

## Aurora Global Database limitations for blue/green deployments


In addition to the general and engine-specific limitations stated above, the following limitations apply to blue/green deployments for Aurora Global Database:
+ All operations must be initiated from the same Region as the writer cluster of the Global Database.
+ Performing a global switchover or a global failover causes the active blue/green deployment to become invalid. The blue/green deployment must be deleted and recreated from the new primary Region.
+ For Aurora PostgreSQL, if you have global write forwarding enabled in your production environment and create a blue/green deployment, write forwarding is disabled on the green cluster. It is enabled in the green environment only after the blue/green switchover, when the green environment becomes the new production environment. After switchover, write forwarding is disabled on the `-old1` cluster.
+ Modifying the topology of the global database after the blue/green deployment is created causes the active blue/green deployment to become invalid. The blue/green deployment must be deleted and recreated from the new primary Region.
+ Automated snapshots are retained according to the backup retention period originally configured in the old blue environment. Automated snapshots from the old blue cluster aren't copied to the green cluster.
+ Global failover is supported during a blue/green switchover, but a global switchover is not.
+ Ensure that DB cluster and DB parameter groups for the green environment exist in all secondary Regions with identical names. If a parameter group is unavailable in any Region, the default parameter group in that Region is used.
+ Avoid using RDS Proxy on any global database members during a blue/green deployment switchover.

## Considerations for blue/green deployments

Amazon RDS tracks resources in blue/green deployments with the `DbiResourceId` and `DbClusterResourceId` of each resource. This resource ID is an AWS Region-unique, immutable identifier for the resource.

The *resource* ID is separate from the DB *cluster* ID. Each one is listed in the database configuration in the RDS console.

The name (cluster ID) of a resource changes when you switch over a blue/green deployment, but each resource keeps the same resource ID. For example, a DB cluster identifier might have been `mycluster` in the blue environment. After switchover, the same DB cluster might be renamed to `mycluster-old1`. However, the resource ID of the DB cluster doesn't change during switchover. So, when you switch over the green resources to be the new production resources, their resource IDs don't match the blue resource IDs that were previously in production.

After you switch over a blue/green deployment, consider updating the resource IDs to those of the newly transitioned production resources for integrated features and services that you used with the production resources. Specifically, consider the following updates:
+ If you perform filtering using the RDS API and resource IDs, adjust the resource IDs used in filtering after switchover.
+ If you use CloudTrail for auditing resources, adjust the consumers of the CloudTrail to track the new resource IDs after switchover. For more information, see [Monitoring Amazon Aurora API calls in AWS CloudTrail](logging-using-cloudtrail.md).
+ If you use Database Activity Streams for resources in the blue environment, adjust your application to monitor database events for the new stream after switchover. For more information, see [Supported Regions and Aurora DB engines for database activity streams](Concepts.Aurora_Fea_Regions_DB-eng.Feature.DBActivityStreams.md).
+ If you use the Performance Insights API, adjust the resource IDs in calls to the API after switchover. For more information, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md).

  You can monitor a database with the same name after switchover, but it doesn't contain the data from before the switchover.
+ If you use resource IDs in IAM policies, make sure you add the resource IDs of the newly transitioned resources when necessary. For more information, see [Identity and access management for Amazon Aurora](UsingWithRDS.IAM.md).
+ If you have IAM roles associated with your DB cluster, make sure to reassociate them after switchover. Attached roles aren't automatically copied to the green environment.
+ If you authenticate to your DB cluster using [IAM database authentication](UsingWithRDS.IAMDBAuth.md), make sure that the IAM policy used for database access has both the blue and the green databases listed under the `Resource` element of the policy. This is required in order to connect to the green database after switchover. For more information, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md).
+ If you want to restore a manual DB cluster snapshot for a DB cluster that was part of a blue/green deployment, make sure you restore the correct DB cluster snapshot by examining the time when the snapshot was taken. For more information, see [Restoring from a DB cluster snapshot](aurora-restore-snapshot.md).
+ After you switch over, AWS Database Migration Service (AWS DMS) replication tasks can't resume because the checkpoint from the blue environment is invalid in the green environment. You must recreate the DMS task with a new checkpoint to continue replication.
+ Amazon Aurora creates the green environment by *cloning* the underlying Aurora storage volume in the blue environment. The green cluster volume only stores incremental changes made to the green environment. If you delete the DB cluster in the blue environment, the size of the underlying Aurora storage volume in the green environment grows to the full size. For more information, see [Cloning a volume for an Amazon Aurora DB cluster](Aurora.Managing.Clone.md).
+ When you add a DB instance to the DB cluster in the green environment of a blue/green deployment, the new DB instance won't replace a DB instance in the blue environment when you switch over. However, the new DB instance is retained in the DB cluster and becomes a DB instance in the new production environment.
+ When you delete a DB instance in the DB cluster in the green environment of a blue/green deployment, you can't create a new DB instance to replace it in the blue/green deployment.

  If you create a new DB instance with the same name and ARN as the deleted DB instance, it has a different `DbiResourceId`, so it isn't part of the green environment.

  The following behavior results if you delete a DB instance in the DB cluster in the green environment:
  + If the DB instance in the blue environment with the same name exists, it won't be switched over to the DB instance in the green environment. This DB instance won't be renamed by appending `-oldn` to the DB instance name.
  + Any application that points to the DB instance in the blue environment continues to use the same DB instance after switchover.
+ If you use resource tags for access control or operational management, be aware that tag changes aren't synchronized between the blue and green environments until switchover. When you create a blue/green deployment, tags from the blue environment are copied to the green environment. After creation, any tag modifications that you make to either environment aren't automatically synchronized. During switchover, blue environment tags replace all tags in the green environment. Apply all necessary tags to the blue environment before you create the blue/green deployment, or reapply required tags to the new production environment after switchover. For more information about tags, see [Tagging Amazon Aurora and Amazon RDS resources](USER_Tagging.md).
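Because tag changes made after creation aren't synchronized, it can help to diff the two environments' tags before switchover. A minimal sketch follows; retrieving the tags (for example, with the `ListTagsForResource` API) is not shown, and the plain key-value dict format is an assumption for illustration:

```python
def tag_drift(blue_tags: dict, green_tags: dict) -> dict:
    """Return tags that differ between the blue and green environments:
    each differing key maps to a (blue_value, green_value) pair, where
    None means the tag is absent in that environment."""
    drift = {}
    for key in set(blue_tags) | set(green_tags):
        blue_val = blue_tags.get(key)
        green_val = green_tags.get(key)
        if blue_val != green_val:
            drift[key] = (blue_val, green_val)
    return drift

print(tag_drift({"env": "prod", "team": "dba"}, {"env": "prod"}))
# {'team': ('dba', None)}
```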

# Best practices for Amazon Aurora blue/green deployments
Best practices

The following are best practices for blue/green deployments.

**Topics**
+ [General best practices for blue/green deployments](#blue-green-deployments-best-practices-general)
+ [Aurora MySQL best practices for blue/green deployments](#blue-green-deployments-best-practices-mysql)
+ [Aurora PostgreSQL best practices for blue/green deployments](#blue-green-deployments-best-practices-postgres)
+ [Aurora Global Database best practices for blue/green deployments](#blue-green-deployments-best-practices-agd)

## General best practices for blue/green deployments


Consider the following general best practices when you create a blue/green deployment.
+ Thoroughly test the Aurora DB cluster in the green environment before switching over.
+ Keep your databases in the green environment read only. If you enable write operations on the green environment, proceed with caution because writes can cause replication conflicts and can result in unintended data in the production databases after switchover.
+ If you use a blue/green deployment to implement schema changes, make only replication-compatible changes.

  For example, you can add new columns at the end of a table without disrupting replication from the blue environment to the green environment. However, other schema changes, such as renaming columns or renaming tables, break replication to the green environment.

  For more information about replication-compatible changes, see [Replication with Differing Table Definitions on Source and Replica](https://dev.mysql.com/doc/refman/8.0/en/replication-features-differing-tables.html) in the MySQL documentation and [Restrictions](https://www.postgresql.org/docs/current/logical-replication-restrictions.html) in the PostgreSQL logical replication documentation.
+ Use the cluster endpoint, reader endpoint, or custom endpoint for all connections in both environments. Don't use instance endpoints or custom endpoints with static or exclusion lists.
+ When you switch over a blue/green deployment, follow the switchover best practices. For more information, see [Switchover best practices](blue-green-deployments-switching.md#blue-green-deployments-switching-best-practices).
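The schema-change guidance above can be made concrete with a few example statements; the `orders` table and its columns are hypothetical:

```python
# Replication-compatible: appends a new column at the end of the table,
# so replication from the blue to the green environment continues.
safe_change = "ALTER TABLE orders ADD COLUMN note VARCHAR(255);"

# Not replication-compatible: renames break replication to the green
# environment because replicated changes no longer match the schema there.
breaking_changes = [
    "ALTER TABLE orders RENAME COLUMN note TO comment;",
    "ALTER TABLE orders RENAME TO orders_v2;",
]

for stmt in breaking_changes:
    print("avoid before switchover:", stmt)
```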

## Aurora MySQL best practices for blue/green deployments


Consider the following best practices when you create a blue/green deployment from an Aurora MySQL DB cluster.
+ If the green environment experiences replica lag, consider the following:
  + Disable binary logging on the green environment if it's not needed, or disable it temporarily until replication catches up. To do so, set the `binlog_format` DB cluster parameter to `OFF` and reboot the green writer DB instance.
  + Temporarily set the `innodb_flush_log_at_trx_commit` parameter to `0` in the green DB parameter group. After replication catches up, revert to the default value of `1` before switchover. If an unexpected shutdown or crash occurs with the temporary parameter value, rebuild the green environment to avoid undetected data corruption. For more information, see [Configuring how frequently the log buffer is flushed](AuroraMySQL.BestPractices.FeatureRecommendations.md#AuroraMySQL.BestPractices.Flush).

## Aurora PostgreSQL best practices for blue/green deployments


Consider the following best practices when you create a blue/green deployment from an Aurora PostgreSQL DB cluster.
+ Monitor the Aurora PostgreSQL logical replication write-through cache and make adjustments to the cache buffer if necessary. For more information, see [Monitoring the Aurora PostgreSQL logical replication write-through cache](AuroraPostgreSQL.Replication.Logical-monitoring.md#AuroraPostgreSQL.Replication.Logical-write-through-cache).
+ Increase the value of the `logical_decoding_work_mem` DB parameter in the blue environment. Doing so allows for less decoding on disk and instead uses memory. For more information, see [Adjusting working memory for logical decoding](AuroraPostgreSQL.BestPractices.Tuning-memory-parameters.md#AuroraPostgreSQL.BestPractices.Tuning-memory-parameters.logical-decoding-work-mem).
  + You can monitor transaction overflow being written to disk using the `ReplicationSlotDiskUsage` CloudWatch metric. This metric offers insights into the disk usage of replication slots, helping identify when transaction data exceeds memory capacity and is stored on disk. You can monitor freeable memory with the `FreeableMemory` CloudWatch metric. For more information, see [Instance-level metrics for Amazon Aurora](Aurora.AuroraMonitoring.Metrics.md#Aurora.AuroraMySQL.Monitoring.Metrics.instances).
  + In Aurora PostgreSQL version 14 and higher, you can monitor the size of logical overflow files using the [`pg_stat_replication_slots`](https://www.postgresql.org/docs/14/monitoring-stats.html#MONITORING-PG-STAT-REPLICATION-SLOTS-VIEW) system view.
+ Update all of your PostgreSQL extensions to the latest version before you create a blue/green deployment. For more information, see [Upgrading PostgreSQL extensions](USER_UpgradeDBInstance.Upgrading.ExtensionUpgrades.md).
+ If you’re using the `aws_s3` extension, give the green DB cluster access to Amazon S3 through an IAM role after the green environment is created. This allows the import and export commands to continue functioning after switchover. For instructions, see [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md).
+ If you specify a higher engine version for the green environment, run the `ANALYZE` operation on all databases to refresh the `pg_statistic` table. Optimizer statistics aren't transferred during a major version upgrade, so you must regenerate all statistics to avoid performance issues. For additional best practices during major version upgrades, see [Performing a major version upgrade](USER_UpgradeDBInstance.PostgreSQL.MajorVersion.md).
+ Avoid configuring triggers as `ENABLE REPLICA` or `ENABLE ALWAYS` if the trigger manipulates data on the source. Otherwise, the replication system both propagates the changes and executes the trigger, which leads to duplicate data.
+ Long-running transactions can cause significant replica lag. To reduce replica lag, consider doing the following:
  + Delay long-running transactions and subtransactions until after the green environment catches up to the blue environment.
  + Limit bulk operations on the blue environment until after the green environment catches up to the blue environment.
  + Initiate a manual vacuum freeze operation on busy tables prior to creating the blue/green deployment. For instructions, see [Performing a manual vacuum freeze](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Autovacuum.VacuumFreeze.html).
  + In PostgreSQL version 12 and higher, disable the `index_cleanup` parameter on large or busy tables to improve the efficiency of regular maintenance on blue databases. For more information, see [Vacuuming a table as quickly as possible](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.Autovacuum.LargeIndexes.html#Appendix.PostgreSQL.CommonDBATasks.Autovacuum.LargeIndexes.Executing).
**Note**  
Regularly skipping index cleanup during vacuuming can lead to index bloat, which might degrade scan performance. As a best practice, use this approach only while using a blue/green deployment. Once the deployment is complete, we recommend resuming regular index maintenance and cleanup.
  + Replica lag can occur if the blue and green DB instances are undersized for the workload. Ensure that your DB instances are not reaching their resource limits for the instance type. For more information, see [Using Amazon CloudWatch metrics to analyze resource usage for Aurora PostgreSQL](AuroraPostgreSQL_AnayzeResourceUsage.md).
+ Slow replication can cause senders and receivers to restart often, which delays synchronization. To ensure that they remain active, disable timeouts by setting the `wal_sender_timeout` parameter to `0` in the blue environment, and the `wal_receiver_timeout` parameter to `0` in the green environment.
+ Review the performance of your UPDATE and DELETE statements and evaluate whether creating an index on the column used in the WHERE clause can optimize these queries. This can enhance performance when the operations are replayed in the green environment. For more information, see [Check predicate filters for queries that generate waits](apg-waits.iodatafileread.md#apg-waits.iodatafileread.actions.filters).
+ If you're using triggers, make sure that they don't interfere with the creation, update, or deletion of `pg_catalog.pg_publication`, `pg_catalog.pg_subscription`, and `pg_catalog.pg_replication_slots` objects whose names begin with `rds`.
+ If Babelfish is enabled on the source DB cluster, the following parameters must have the same settings in the target DB cluster parameter group for the green environment as in the source DB cluster parameter group:
  + `rds.babelfish_status`
  + `babelfishpg_tds.tds_default_numeric_precision`
  + `babelfishpg_tds.tds_default_numeric_scale`
  + `babelfishpg_tsql.default_locale`
  + `babelfishpg_tsql.migration_mode`
  + `babelfishpg_tsql.server_collation_name`

  For more information about these parameters, see [DB cluster parameter group settings for Babelfish](babelfish-configuration.md).
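One way to reason about the spill statistics mentioned above: any nonzero spill count means a transaction exceeded `logical_decoding_work_mem` and was decoded on disk. A minimal sketch of that check follows; the threshold logic is illustrative, not an official rule, and the sample numbers are hypothetical:

```python
def work_mem_exceeded(spill_txns: int, spill_bytes: int) -> bool:
    """True if logical decoding has spilled transactions to disk,
    which suggests raising logical_decoding_work_mem in the blue
    environment (illustrative check only)."""
    return spill_txns > 0 or spill_bytes > 0

# Hypothetical values as read from the pg_stat_replication_slots view
print(work_mem_exceeded(spill_txns=12, spill_bytes=268435456))
```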

## Aurora Global Database best practices for blue/green deployments


In addition to the general and engine-specific best practices listed above, consider the following best practices for Aurora Global Database.
+ Monitor the following CloudWatch metrics to identify periods of low activity in your production environment:
  + `DatabaseConnections`
  + `ActiveTransactions`

  Schedule the blue/green switchover during your planned maintenance window or during a period of low activity.
+ Blue/green switchover duration varies based on your workload and the number of secondary Regions. When you initiate a switchover, the service waits for replica lag to reach zero before proceeding, so we recommend checking replica lag before initiating a switchover.
+ If you intend to use a DB parameter group or DB cluster parameter group other than the default one for your green environment, create the desired parameter group with the same name in all secondary Regions before initiating the blue/green deployment.
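To turn the metric guidance above into a concrete check, a sketch like the following could pick the quietest window from hourly `DatabaseConnections` samples. The sample values are made up, and fetching real datapoints from CloudWatch is left out:

```python
def quietest_window_start(samples, window=3):
    """Return the index of the start of the contiguous window
    with the lowest total activity."""
    totals = [
        sum(samples[i:i + window])
        for i in range(len(samples) - window + 1)
    ]
    return totals.index(min(totals))

# Hypothetical hourly DatabaseConnections datapoints (midnight to 7 AM)
hourly = [120, 95, 40, 8, 5, 6, 30, 110]
print(quietest_window_start(hourly))  # hours 3-5 are the quietest window
```

The same idea applies to `ActiveTransactions`; schedule the switchover inside the window this identifies, preferably during your planned maintenance window.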