


# Data encryption
<a name="security-encryption"></a>

Data protection refers to protecting data while in transit (as it travels to and from Amazon Redshift) and at rest (while it is stored on disks in Amazon Redshift data centers). You can protect data in transit by using SSL or by using client-side encryption. You have the following options for protecting data at rest in Amazon Redshift.
+ **Use server-side encryption** – You request Amazon Redshift to encrypt your data before saving it on disks in its data centers and to decrypt it when you access it. 
+ **Use client-side encryption** – You can encrypt data client-side and upload the encrypted data to Amazon Redshift. In this case, you manage the encryption process, the encryption keys, and related tools.

# Encryption at rest
<a name="security-server-side-encryption"></a>

Server-side encryption is about data encryption at rest—that is, Amazon Redshift optionally encrypts your data as it writes it in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted data. 

Amazon Redshift protects data at rest through encryption. Optionally, you can protect all data stored on disks within a cluster and all backups in Amazon S3 with Advanced Encryption Standard AES-256. 

To manage the keys used for encrypting and decrypting your Amazon Redshift resources, you use [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/). AWS KMS combines secure, highly available hardware and software to provide a key management system scaled for the cloud. Using AWS KMS, you can create encryption keys and define the policies that control how these keys can be used. AWS KMS supports AWS CloudTrail, so you can audit key usage to verify that keys are being used appropriately. You can use your AWS KMS keys in combination with Amazon Redshift and supported AWS services. For a list of services that support AWS KMS, see [How AWS Services Use AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services.html) in the *AWS Key Management Service Developer Guide*.

If you choose to manage your provisioned cluster or serverless namespace's admin password using AWS Secrets Manager, Amazon Redshift also accepts an additional AWS KMS key that AWS Secrets Manager uses to encrypt your credentials. This additional key can be an automatically generated key from AWS Secrets Manager, or a custom key that you provide. 
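As a sketch, the following AWS CLI command opts into a Secrets Manager-managed admin password when creating a provisioned cluster and names the KMS key that encrypts the secret. The cluster identifier, node settings, and key ARN are placeholders; verify the option names against the current `aws redshift create-cluster` reference for your CLI version:

```shell
# Sketch: create a cluster whose admin password is generated and stored by
# AWS Secrets Manager. The secret is encrypted with the customer managed key
# you specify; omit --master-password-secret-kms-key-id to let Secrets Manager
# use its default key. All identifiers below are placeholders.
aws redshift create-cluster \
  --cluster-identifier my-cluster \
  --node-type ra3.xlplus \
  --number-of-nodes 2 \
  --master-username admin \
  --manage-master-password \
  --master-password-secret-kms-key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id
```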

Amazon Redshift query editor v2 securely stores information entered into the query editor as follows:
+ The Amazon Resource Name (ARN) of the KMS key used to encrypt query editor v2 data.
+ Database connection information.
+ Names and content of files and folders.

Amazon Redshift query editor v2 encrypts information using block-level encryption with either your KMS key or the service account KMS key. The encryption of your Amazon Redshift data is controlled by your Amazon Redshift cluster properties.

**Topics**
+ [Amazon Redshift database encryption](working-with-db-encryption.md)

# Amazon Redshift database encryption
<a name="working-with-db-encryption"></a>

In Amazon Redshift, your database is encrypted by default to protect your data at rest. Database encryption applies to the cluster and also to its snapshots.

You can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption. To do so, you can use either an AWS-owned key or a customer managed key. When you modify your cluster to enable AWS KMS encryption, Amazon Redshift automatically migrates your data to a new encrypted cluster. Snapshots created from the encrypted cluster are also encrypted. You can also migrate an encrypted cluster to an unencrypted cluster by modifying the cluster and changing the **Encrypt database** option. For more information, see [Changing cluster encryption](changing-cluster-encryption.md). 

Though you can change an encrypted cluster to unencrypted after creating it, we recommend that you keep clusters that contain sensitive data encrypted. Additionally, you might be required to use encryption depending on the guidelines or regulations that govern your data. For example, the Payment Card Industry Data Security Standard (PCI DSS), the Sarbanes-Oxley Act (SOX), the Health Insurance Portability and Accountability Act (HIPAA), and other such regulations provide guidelines for handling specific types of data.

Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys. Amazon Redshift automatically integrates with AWS KMS but not with an HSM. When you use an HSM, you must use client and server certificates to configure a trusted connection between Amazon Redshift and your HSM.

**Important**  
 Amazon Redshift can lose access to the KMS key for a provisioned cluster or serverless namespace when you disable the customer managed KMS key. In these cases, Amazon Redshift takes a backup of the data warehouse and puts the warehouse into an `inaccessible-kms-key` state for 14 days. If you restore the KMS key within that period, Amazon Redshift restores access and the warehouse functions normally. If the 14-day period ends without the KMS key being restored, Amazon Redshift deletes the data warehouse. While a warehouse is in the `inaccessible-kms-key` state, it has the following characteristics:   
 + You can't run any queries on the data warehouse. 
 + If the data warehouse is the producer warehouse of a datashare, you can't run datasharing queries against it from consumer warehouses. 
 + You can't create cross-Region snapshot copies. 

For information on restoring a disabled KMS key, see [Enable and disable keys](https://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html) in the *AWS Key Management Service Developer Guide*. If the warehouse's KMS key was deleted, you can use the backup to create a new data warehouse before the warehouse in the `inaccessible-kms-key` state is deleted.
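As a minimal sketch, re-enabling the disabled key and then checking that the warehouse has recovered might look like the following. The key ARN and cluster identifier are placeholders:

```shell
# Sketch: re-enable a disabled customer managed key, then confirm that the
# provisioned cluster has left the inaccessible-kms-key state.
aws kms enable-key --key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id

# Check the cluster status; it should eventually return to "available".
aws redshift describe-clusters \
  --cluster-identifier my-cluster \
  --query 'Clusters[0].ClusterStatus'
```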

## Encryption process improvements for better performance and availability
<a name="resize-classic-encryption"></a>

### Encryption with RA3 nodes
<a name="resize-classic-encryption-ra3"></a>

 Updates to the encryption process for RA3 nodes have substantially improved the experience. Both read and write queries can run during the process, with less performance impact from the encryption, and encryption finishes much more quickly. The updated process includes a restore operation and migration of cluster metadata to a target cluster. The improvements apply to encryption with AWS KMS, for example. For petabyte-scale data volumes, the operation has been reduced from weeks to days. 

Prior to encrypting your cluster, if you plan to continue to run database workloads, you can improve performance and speed up the process by adding nodes with elastic resize. You can't use elastic resize while encryption is in progress, so add the nodes before you encrypt. Note that adding nodes typically results in higher cost.

### Encryption with other node types
<a name="resize-classic-encryption-ds2"></a>

When you encrypt a cluster with DC2 nodes, you can't run write queries as you can with RA3 nodes. Only read queries can run during encryption.

### Usage notes for encryption with RA3 nodes
<a name="resize-classic-encryption-usage"></a>

The following insights and resources help you prepare for encryption and monitor the process.
+ **Running queries after starting encryption** – After encryption is started, reads and writes are available within about fifteen minutes. How long it takes the full encryption process to complete depends on the amount of data on the cluster and the workload levels. 
+ **How long does encryption take?** – The time to encrypt your data depends on several factors, including the number of workloads running, the compute resources being used, and the number and type of nodes. We recommend that you initially perform encryption in a test environment. As a rule of thumb, for data volumes in the petabyte range, encryption can take 1-3 days to complete.
+ **How do I know encryption is finished?** – After you enable encryption, the completion of the first snapshot confirms that encryption is completed.
+ **Rolling back encryption** – If you need to roll back the encryption operation, the best way to do this is to restore from the most recent backup taken prior to when encryption was initiated. You will have to re-apply any new updates (updates/deletes/inserts) following the last-backup. 
+ **Performing a table restore** – Note that you can't restore a table from an unencrypted cluster to an encrypted cluster.
+ **Encrypting a single-node cluster** – Encrypting a single-node cluster has performance limitations. It takes longer than encrypting a multi-node cluster.
+ **Creating a backup after encryption** – When you encrypt the data in your cluster, a backup isn't created until the cluster is fully encrypted. Depending on the cluster size, this can take hours to days, so there can be a delay before you can create a backup.

  Note that because a backup-and-restore operation occurs during the encryption process, any tables or materialized views created with `BACKUP NO` aren't retained. For more information, see [CREATE TABLE](https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_NEW.html) or [CREATE MATERIALIZED VIEW](https://docs.aws.amazon.com/redshift/latest/dg/materialized-view-create-sql-command.html).

**Topics**
+ [Encryption process improvements for better performance and availability](#resize-classic-encryption)
+ [Encryption using AWS KMS](#working-with-aws-kms)
+ [Encryption using hardware security modules](#working-with-HSM)
+ [Encryption key rotation](#working-with-key-rotation)
+ [Changing cluster encryption](changing-cluster-encryption.md)
+ [Migrating to an HSM-encrypted cluster](migrating-to-an-encrypted-cluster.md)
+ [Rotating encryption keys](manage-key-rotation-console.md)

## Encryption using AWS KMS
<a name="working-with-aws-kms"></a>

When you choose AWS KMS for key management with Amazon Redshift, there is a four-tier hierarchy of encryption keys. These keys, in hierarchical order, are the root key, a cluster encryption key (CEK), a database encryption key (DEK), and data encryption keys.

When you launch your cluster, Amazon Redshift returns a list of the AWS KMS keys that Amazon Redshift or your AWS account has created or has permission to use in AWS KMS. You select a KMS key to use as your root key in the encryption hierarchy.

By default, Amazon Redshift selects an automatically generated AWS-owned key as the root key for your AWS account to use in Amazon Redshift. 

If you don't want to use the default key, you must have (or create) a customer managed KMS key separately in AWS KMS before you launch your cluster in Amazon Redshift. Customer managed keys give you more flexibility, including the ability to create, rotate, disable, define access control for, and audit the encryption keys used to help protect your data. For more information about creating KMS keys, see [Creating Keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
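As a sketch, creating a customer managed key to use as the root key might look like the following. The description is a placeholder, and you would note the returned key ARN for use when launching the cluster:

```shell
# Sketch: create a symmetric customer managed key to serve as the root key
# in the Amazon Redshift encryption hierarchy. The response includes the
# key's ARN and key ID.
aws kms create-key --description "Root key for Amazon Redshift encryption"

# List key ARNs in the account to locate the one you just created.
aws kms list-keys --query 'Keys[].KeyArn'
```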

If you want to use an AWS KMS key from another AWS account, you must have permission to use the key and specify its Amazon Resource Name (ARN) in Amazon Redshift. For more information about access to keys in AWS KMS, see [Controlling Access to Your Keys](https://docs.aws.amazon.com/kms/latest/developerguide/control-access.html) in the *AWS Key Management Service Developer Guide*.

After you choose a root key, Amazon Redshift requests that AWS KMS generate a data key and encrypt it using the selected root key. This data key is used as the CEK in Amazon Redshift. AWS KMS exports the encrypted CEK to Amazon Redshift, where it is stored internally on disk in a separate network from the cluster along with the grant to the KMS key and the encryption context for the CEK. Only the encrypted CEK is exported to Amazon Redshift; the KMS key remains in AWS KMS. Amazon Redshift also passes the encrypted CEK over a secure channel to the cluster and loads it into memory. Then, Amazon Redshift calls AWS KMS to decrypt the CEK and loads the decrypted CEK into memory. For more information about grants, encryption context, and other AWS KMS-related concepts, see [Concepts](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) in the *AWS Key Management Service Developer Guide*.

Next, Amazon Redshift randomly generates a key to use as the DEK and loads it into memory in the cluster. The decrypted CEK is used to encrypt the DEK, which is then passed over a secure channel from the cluster to be stored internally by Amazon Redshift on disk in a separate network from the cluster. Like the CEK, both the encrypted and decrypted versions of the DEK are loaded into memory in the cluster. The decrypted version of the DEK is then used to encrypt the individual encryption keys that are randomly generated for each data block in the database.

When the cluster reboots, Amazon Redshift starts with the internally stored, encrypted versions of the CEK and DEK, reloads them into memory, and then calls AWS KMS to decrypt the CEK with the KMS key again so it can be loaded into memory. The decrypted CEK is then used to decrypt the DEK again, and the decrypted DEK is loaded into memory and used to encrypt and decrypt the data block keys as needed.

For more information about creating Amazon Redshift clusters that are encrypted with AWS KMS keys, see [Creating a cluster](create-cluster.md).
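From the AWS CLI, launching a cluster encrypted with a customer managed key might be sketched as follows. The identifiers, node settings, and key ARN are placeholders:

```shell
# Sketch: launch a cluster encrypted under a customer managed KMS key
# chosen as the root key. Omit --kms-key-id to use the default key that
# Amazon Redshift selects for your account.
aws redshift create-cluster \
  --cluster-identifier my-encrypted-cluster \
  --node-type ra3.xlplus \
  --number-of-nodes 2 \
  --master-username admin \
  --master-user-password '<value>' \
  --encrypted \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/example-key-id
```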

### Copying AWS KMS–encrypted snapshots to another AWS Region
<a name="configure-snapshot-copy-grant"></a>

AWS KMS keys are specific to an AWS Region. If you want to enable copying of Amazon Redshift snapshots from an encrypted source cluster to another AWS Region, and you want to use your own AWS KMS key for snapshots in the destination, you need to configure a grant for Amazon Redshift to use a root key in your account in the destination AWS Region. This grant enables Amazon Redshift to encrypt snapshots in the destination AWS Region. If you want snapshots in the destination to be encrypted with an AWS-owned key, you don't need to configure any grants in the destination AWS Region. For more information about cross-Region snapshot copy, see [Copying a snapshot to another AWS Region](cross-region-snapshot-copy.md).

**Note**  
If you enable copying of snapshots from an encrypted cluster and use AWS KMS for your root key, you cannot rename your cluster because the cluster name is part of the encryption context. If you must rename your cluster, you can disable copying of snapshots in the source AWS Region, rename the cluster, and then configure and enable copying of snapshots again.

The process to configure the grant for copying snapshots is as follows. 

1. In the destination AWS Region, create a snapshot copy grant by doing the following:
   +  If you do not already have an AWS KMS key to use, create one. For more information about creating AWS KMS keys, see [Creating Keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*. 
   + Specify a name for the snapshot copy grant. This name must be unique in that AWS Region for your AWS account.
   + Specify the AWS KMS key ID for which you are creating the grant. If you do not specify a key ID, the grant applies to your default key.

1. In the source AWS Region, enable copying of snapshots and specify the name of the snapshot copy grant that you created in the destination AWS Region.

The preceding process is necessary only if you enable copying of snapshots using the AWS CLI, the Amazon Redshift API, or SDKs. If you use the console, Amazon Redshift provides the proper workflow to configure the grant when you enable cross-Region snapshot copy. For more information about configuring cross-Region snapshot copy for AWS KMS-encrypted clusters by using the console, see [Configuring cross-Region snapshot copy for an AWS KMS–encrypted cluster](xregioncopy-kms-encrypted-snapshot.md).
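The two steps above can be sketched with the AWS CLI as follows. The Region names, grant name, cluster identifier, and key ARN are placeholders:

```shell
# Sketch, step 1: in the destination Region, create a snapshot copy grant
# for the KMS key that should encrypt the copied snapshots. Omit --kms-key-id
# to apply the grant to your default key.
aws redshift create-snapshot-copy-grant \
  --region us-west-2 \
  --snapshot-copy-grant-name my-snapshot-grant \
  --kms-key-id arn:aws:kms:us-west-2:111122223333:key/example-key-id

# Sketch, step 2: in the source Region, enable snapshot copy and reference
# the grant created in the destination Region.
aws redshift enable-snapshot-copy \
  --region us-east-1 \
  --cluster-identifier my-encrypted-cluster \
  --destination-region us-west-2 \
  --snapshot-copy-grant-name my-snapshot-grant
```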

Before the snapshot is copied to the destination AWS Region, Amazon Redshift decrypts the snapshot using the root key in the source AWS Region and re-encrypts it temporarily using a randomly generated RSA key that Amazon Redshift manages internally. Amazon Redshift then copies the snapshot over a secure channel to the destination AWS Region, decrypts the snapshot using the internally managed RSA key, and then re-encrypts the snapshot using the root key in the destination AWS Region.

## Encryption using hardware security modules
<a name="working-with-HSM"></a>

If you don't use AWS KMS for key management, you can use a hardware security module (HSM) for key management with Amazon Redshift. 

**Important**  
HSM encryption is not supported for DC2 and RA3 node types.

HSMs are devices that provide direct control of key generation and management. They provide greater security by separating key management from the application and database layers. Amazon Redshift supports AWS CloudHSM Classic for key management. The encryption process is different when you use HSM to manage your encryption keys instead of AWS KMS.

**Important**  
Amazon Redshift supports only AWS CloudHSM Classic. We don't support the newer AWS CloudHSM service.   
AWS CloudHSM Classic is closed to new customers. For more information, see [CloudHSM Classic Pricing](https://aws.amazon.com/cloudhsm/pricing-classic/). AWS CloudHSM Classic isn't available in all AWS Regions. For more information about available AWS Regions, see [AWS Region Table](https://aws.amazon.com/about-aws/global-infrastructure/regional-product-services/). 

When you configure your cluster to use an HSM, Amazon Redshift sends a request to the HSM to generate and store a key to be used as the CEK. However, unlike AWS KMS, the HSM doesn’t export the CEK to Amazon Redshift. Instead, Amazon Redshift randomly generates the DEK in the cluster and passes it to the HSM to be encrypted by the CEK. The HSM returns the encrypted DEK to Amazon Redshift, where it is further encrypted using a randomly-generated, internal root key and stored internally on disk in a separate network from the cluster. Amazon Redshift also loads the decrypted version of the DEK in memory in the cluster so that the DEK can be used to encrypt and decrypt the individual keys for the data blocks.

If the cluster is rebooted, Amazon Redshift decrypts the internally-stored, double-encrypted DEK using the internal root key to return the internally stored DEK to the CEK-encrypted state. The CEK-encrypted DEK is then passed to the HSM to be decrypted and passed back to Amazon Redshift, where it can be loaded in memory again for use with the individual data block keys.

### Configuring a trusted connection between Amazon Redshift and an HSM
<a name="configure-trusted-connection"></a>

When you opt to use an HSM for management of your cluster key, you need to configure a trusted network link between Amazon Redshift and your HSM. Doing this requires configuration of client and server certificates. The trusted connection is used to pass the encryption keys between the HSM and Amazon Redshift during encryption and decryption operations.

Amazon Redshift creates a public client certificate from a randomly generated private and public key pair. These are encrypted and stored internally. You download and register the public client certificate in your HSM, and assign it to the applicable HSM partition.

You provide Amazon Redshift with the HSM IP address, HSM partition name, HSM partition password, and a public HSM server certificate, which is encrypted by using an internal root key. Amazon Redshift completes the configuration process and verifies that it can connect to the HSM. If it cannot, the cluster is put into the INCOMPATIBLE_HSM state and the cluster is not created. In this case, you must delete the incomplete cluster and try again.

**Important**  
When you modify your cluster to use a different HSM partition, Amazon Redshift verifies that it can connect to the new partition, but it does not verify that a valid encryption key exists. Before you use the new partition, you must replicate your keys to the new partition. If the cluster is restarted and Amazon Redshift cannot find a valid key, the restart fails. For more information, see [Replicating Keys Across HSMs](https://docs.aws.amazon.com/cloudhsm/latest/userguide/cli-clone-hapg.html). 

After initial configuration, if Amazon Redshift fails to connect to the HSM, an event is logged. For more information about these events, see [Amazon Redshift Event Notifications](https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-event-notifications.html).

## Encryption key rotation
<a name="working-with-key-rotation"></a>

In Amazon Redshift, you can rotate encryption keys for encrypted clusters. When you start the key rotation process, Amazon Redshift rotates the CEK for the specified cluster and for any automated or manual snapshots of the cluster. Amazon Redshift also rotates the DEK for the specified cluster, but cannot rotate the DEK for the snapshots while they are stored internally in Amazon Simple Storage Service (Amazon S3) and encrypted using the existing DEK. 

While the rotation is in progress, the cluster is put into a ROTATING_KEYS state until completion, at which time the cluster returns to the AVAILABLE state. Amazon Redshift handles decryption and re-encryption during the key rotation process.

**Note**  
You cannot rotate keys for snapshots without a source cluster. Before you delete a cluster, consider whether its snapshots rely on key rotation.

Because the cluster is momentarily unavailable during the key rotation process, you should rotate keys only as often as your data needs require or when you suspect the keys might have been compromised. As a best practice, you should review the type of data that you store and plan how often to rotate the keys that encrypt that data. The frequency for rotating keys varies depending on your corporate policies for data security, and any industry standards regarding sensitive data and regulatory compliance. Ensure that your plan balances security needs with availability considerations for your cluster.

For more information about rotating keys, see [Rotating encryption keys](manage-key-rotation-console.md).
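You can also start rotation from the AWS CLI; a minimal sketch (the cluster identifier is a placeholder) follows:

```shell
# Sketch: start key rotation for an encrypted cluster. The cluster enters
# the ROTATING_KEYS state while the CEK and DEK are rotated.
aws redshift rotate-encryption-key --cluster-identifier my-cluster

# Poll the cluster status until it returns to "available".
aws redshift describe-clusters \
  --cluster-identifier my-cluster \
  --query 'Clusters[0].ClusterStatus'
```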

# Changing cluster encryption
<a name="changing-cluster-encryption"></a>

You can modify an unencrypted cluster to use AWS Key Management Service (AWS KMS) encryption using either an AWS-owned key or a customer managed key. When you modify your cluster to enable AWS KMS encryption, Amazon Redshift automatically migrates your data to a new encrypted cluster. You can also migrate an encrypted cluster to an unencrypted cluster by modifying the cluster with the AWS CLI, but not with the AWS Management Console.

During the migration operation, your cluster is available in read-only mode, and the cluster status appears as **resizing**. 

If your cluster is configured to enable cross-AWS Region snapshot copy, you must disable it before changing encryption. For more information, see [Copying a snapshot to another AWS Region](cross-region-snapshot-copy.md) and [Configuring cross-Region snapshot copy for an AWS KMS–encrypted cluster](xregioncopy-kms-encrypted-snapshot.md). You can't enable hardware security module (HSM) encryption by modifying the cluster. Instead, create a new, HSM-encrypted cluster and migrate your data to the new cluster. For more information, see [Migrating to an HSM-encrypted cluster](migrating-to-an-encrypted-cluster.md). 
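As a sketch, disabling cross-Region snapshot copy before changing encryption might look like this (the cluster identifier is a placeholder):

```shell
# Sketch: disable cross-Region snapshot copy for the cluster before
# modifying its encryption. Re-enable it after the change completes.
aws redshift disable-snapshot-copy --cluster-identifier my-cluster
```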

------
#### [ Amazon Redshift console ]

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster whose encryption you want to modify.

1. Choose **Properties**.

1. In the **Database configurations** section, choose **Edit**, then choose **Edit encryption**. 

1. Choose one of the encryption options and choose **Save changes**.

------
#### [ AWS CLI ]

To modify your unencrypted cluster to use AWS KMS, run the `modify-cluster` CLI command and specify `--encrypted`, as shown following. By default, your default KMS key is used. To specify a customer managed key, include the `--kms-key-id` option.

```
aws redshift modify-cluster --cluster-identifier <value> --encrypted --kms-key-id <value>
```

To remove encryption from your cluster, run the following CLI command.

```
aws redshift modify-cluster --cluster-identifier <value> --no-encrypted
```
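While the migration runs, you can watch the cluster status from the CLI; a sketch (cluster identifier is a placeholder) follows:

```shell
# Sketch: monitor the encryption migration. The status shows "resizing"
# while data migrates to the new encrypted cluster, then "available".
aws redshift describe-clusters \
  --cluster-identifier my-cluster \
  --query 'Clusters[0].[ClusterStatus,Encrypted]'
```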

------

# Migrating to an HSM-encrypted cluster
<a name="migrating-to-an-encrypted-cluster"></a>

To migrate an unencrypted cluster to a cluster encrypted using a hardware security module (HSM), you create a new encrypted cluster and move your data to the new cluster. You can't migrate to an HSM-encrypted cluster by modifying the cluster.

To migrate from an unencrypted cluster to an HSM-encrypted cluster, you first unload your data from the existing, source cluster. Then you reload the data in a new, target cluster with the chosen encryption setting. For more information about launching an encrypted cluster, see [Amazon Redshift database encryption](working-with-db-encryption.md). 

During the migration process, your source cluster is available for read-only queries until the last step. The last step is to rename the target and source clusters, which switches endpoints so all traffic is routed to the new, target cluster. The target cluster is unavailable until you reboot following the rename. Suspend all data loads and other write operations on the source cluster while data is being transferred. <a name="prepare-for-migration"></a>

**To prepare for migration**

1. Identify all the dependent systems that interact with Amazon Redshift, for example business intelligence (BI) tools and extract, transform, and load (ETL) systems.

1. Identify validation queries to test the migration. 

   For example, you can use the following query to find the number of user-defined tables.

   ```
   select count(*)
   from pg_table_def
   where schemaname != 'pg_catalog';
   ```

   The following query returns a list of all user-defined tables and the number of rows in each table.

   ```
   select "table", tbl_rows
   from svv_table_info;
   ```

1. Choose a good time for your migration. To find a time when cluster usage is lowest, monitor cluster metrics such as CPU utilization and number of database connections. For more information, see [Viewing cluster performance data](performance-metrics-perf.md).

1. Drop unused tables. 

   To create a list of tables and the number of the times each table has been queried, run the following query. 

   ```
   select database,
          schema,
          table_id,
          "table",
          round(size::float/(1024*1024)::float,2) as size,
          sortkey1,
          nvl(s.num_qs,0) num_qs
   from svv_table_info t
   left join (select tbl,
                     perm_table_name,
                     count(distinct query) num_qs
              from stl_scan s
              where s.userid > 1
                and s.perm_table_name not in ('Internal worktable','S3')
              group by tbl, perm_table_name) s on s.tbl = t.table_id
   where t."schema" not in ('pg_internal');
   ```

1. Launch a new, encrypted cluster. 

   Use the same port number for the target cluster as for the source cluster. For more information about launching an encrypted cluster, see [Amazon Redshift database encryption](working-with-db-encryption.md). 

1. Set up the unload and load process. 

   You can use the [Amazon Redshift Unload/Copy Utility](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/UnloadCopyUtility) to help you to migrate data between clusters. The utility exports data from the source cluster to a location on Amazon S3. The data is encrypted with AWS KMS. The utility then automatically imports the data into the target. Optionally, you can use the utility to clean up Amazon S3 after migration is complete. 

1. Run a test to verify your process and estimate how long write operations must be suspended. 

   During the unload and load operations, maintain data consistency by suspending data loads and other write operations. Using one of your largest tables, run through the unload and load process to help you estimate timing. 

1. Create database objects, such as schemas, views, and tables. To help you generate the necessary data definition language (DDL) statements, you can use the scripts in [AdminViews](https://github.com/awslabs/amazon-redshift-utils/tree/master/src/AdminViews) in the AWS GitHub repository.<a name="migration-your-cluster"></a>

**To migrate your cluster**

1. Stop all ETL processes on the source cluster. 

   To confirm that there are no write operations in process, use the Amazon Redshift Management Console to monitor write IOPS. For more information, see [Viewing cluster performance data](performance-metrics-perf.md). 

1. Run the validation queries you identified earlier to collect information about the unencrypted source cluster before migration.

1. (Optional) Create one workload management (WLM) queue to use the maximum available resources in both the source and target cluster. For example, create a queue named `data_migrate` and configure the queue with memory of 95 percent and concurrency of 4. For more information, see [Routing Queries to Queues Based on User Groups and Query Groups](https://docs.aws.amazon.com/redshift/latest/dg/tutorial-wlm-routing-queries-to-queues.html) in the *Amazon Redshift Database Developer Guide*.

1. Using the `data_migrate` queue, run the UnloadCopyUtility. 

   Monitor the UNLOAD and COPY process using the Amazon Redshift Console. 

1. Run the validation queries again and verify that the results match the results from the source cluster. 

1. Rename your source and target clusters to swap the endpoints. To avoid disruption, perform this operation outside of business hours.

1. Verify that you can connect to the target cluster using all of your SQL clients, such as ETL and reporting tools.

1. Shut down the unencrypted source cluster.

# Rotating encryption keys
<a name="manage-key-rotation-console"></a>

You can use the following procedure to rotate encryption keys by using the Amazon Redshift console.

**To rotate the encryption keys for a cluster**

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. On the navigation menu, choose **Clusters**, then choose the cluster whose encryption keys you want to rotate.

1. For **Actions**, choose **Rotate encryption** to display the **Rotate encryption keys** page. 

1. On the **Rotate encryption keys** page, choose **Rotate encryption keys**. 
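
The same rotation can be requested programmatically; `RotateEncryptionKey` is the corresponding Amazon Redshift API operation. A minimal boto3 sketch (the cluster identifier is an example):

```python
def rotate_request(cluster_id: str) -> dict:
    """Build the parameters for a RotateEncryptionKey API call."""
    return {"ClusterIdentifier": cluster_id}

def rotate_encryption_key(cluster_id: str) -> None:
    import boto3  # assumed available when running against AWS
    client = boto3.client("redshift")
    client.rotate_encryption_key(**rotate_request(cluster_id))
```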

# Encryption in transit
<a name="security-encryption-in-transit"></a>

You can configure your environment to protect the confidentiality and integrity of data in transit.

The following details apply to encryption of data in transit between an Amazon Redshift cluster and SQL clients over JDBC/ODBC:
+ You can connect to Amazon Redshift clusters from SQL client tools over Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC) connections. 
+ Amazon Redshift supports Secure Sockets Layer (SSL) connections to encrypt data, and server certificates so that the client can validate the server that it connects to. The client connects to the leader node of an Amazon Redshift cluster. For more information, see [Configuring security options for connections](connecting-ssl-support.md).
+ To support SSL connections, Amazon Redshift creates and installs AWS Certificate Manager (ACM) issued certificates on each cluster. For more information, see [Transitioning to ACM certificates for SSL connections](connecting-transitioning-to-acm-certs.md). 
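
To require and verify SSL from a client, the connection properties might look like the following sketch. The endpoint, port, and database are made-up examples; `ssl` and `sslmode` are properties accepted by the Redshift JDBC driver and libpq-style clients respectively (confirm against the connection documentation for your driver version):

```python
# Example endpoint values (placeholders, not a real cluster).
endpoint = "examplecluster.abc123xyz789.us-west-2.redshift.amazonaws.com"
port, database = 5439, "dev"

# JDBC URL that requires SSL and verifies the server certificate.
jdbc_url = (
    f"jdbc:redshift://{endpoint}:{port}/{database}"
    "?ssl=true&sslmode=verify-full"
)

# libpq-style keyword string (psql and many ODBC setups) with verification.
pq_conninfo = f"host={endpoint} port={port} dbname={database} sslmode=verify-full"
```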

The following details apply to encryption of data in transit between an Amazon Redshift cluster and Amazon S3 or DynamoDB:
+ Amazon Redshift uses hardware accelerated SSL to communicate with Amazon S3 or DynamoDB for COPY, UNLOAD, backup, and restore operations. 
+ Redshift Spectrum supports Amazon S3 server-side encryption (SSE) using your account's default key managed by AWS Key Management Service (AWS KMS). 
+ You can encrypt Amazon Redshift loads with Amazon S3 and AWS KMS. For more information, see [Encrypt Your Amazon Redshift Loads with Amazon S3 and AWS KMS](https://aws.amazon.com/blogs/big-data/encrypt-your-amazon-redshift-loads-with-amazon-s3-and-aws-kms/).

The following details apply to encryption and signing of data in transit between AWS CLI, SDK, or API clients and Amazon Redshift endpoints:
+ Amazon Redshift provides HTTPS endpoints for encrypting data in transit. 
+ To protect the integrity of API requests to Amazon Redshift, API calls must be signed by the caller. Calls are signed using an X.509 certificate or the caller's AWS secret access key according to the Signature Version 4 signing process (SigV4). For more information, see [Signature Version 4 Signing Process](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) in the *AWS General Reference*.
+ Use the AWS CLI or one of the AWS SDKs to make requests to AWS. These tools automatically sign the requests for you with the access key that you specify when you configure the tools. 
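
The SigV4 signing-key derivation that the CLI and SDKs perform for you is documented in the *AWS General Reference*; a sketch of just that derivation (the secret key below is a placeholder, not a real credential):

```python
import hashlib
import hmac

def sigv4_signing_key(secret_key: str, date_stamp: str,
                      region: str, service: str) -> bytes:
    """Derive the SigV4 signing key: HMAC-SHA256 chained over the
    date, Region, service, and the literal string aws4_request."""
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

signing_key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20250630",
                                "us-east-1", "redshift")
```

The resulting key is then used to sign the string-to-sign for each request; in practice you let the CLI or an SDK do all of this.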

The following details apply to encryption of data in transit between Amazon Redshift clusters and Amazon Redshift query editor v2:
+ Data is transmitted between query editor v2 and Amazon Redshift clusters over a TLS-encrypted channel. 

# VPC encryption controls with Amazon Redshift
<a name="security-vpc-encryption-controls"></a>

Amazon Redshift supports [VPC encryption controls](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-encryption-controls.html), a security feature that helps you enforce encryption in transit for all traffic within and across VPCs in a Region. This section describes how to use VPC encryption controls with Amazon Redshift clusters and serverless workgroups.

VPC encryption controls provide centralized control to monitor and enforce encryption in transit within your VPCs. When enabled in enforce mode, they ensure that all network traffic is encrypted either at the hardware layer (using the AWS Nitro System) or at the application layer (using TLS/SSL).

Amazon Redshift integrates with VPC encryption controls to help you meet compliance requirements for industries such as healthcare (HIPAA), government (FedRAMP), and finance (PCI DSS).

## How VPC encryption controls work with Amazon Redshift
<a name="security-vpc-encryption-controls-sypnosis"></a>

VPC encryption controls operate in two modes:
+ **Monitor mode** – Provides visibility into the encryption status of traffic flows and helps identify resources that allow unencrypted traffic.
+ **Enforce mode** – Prevents the creation or use of resources that allow unencrypted traffic within the VPC. All traffic must be encrypted either at the hardware layer (Nitro-based instances) or the application layer (TLS/SSL).

## Requirements for using VPC encryption controls
<a name="security-vpc-encryption-controls-requirements"></a>

**Instance type requirements**

Amazon Redshift requires Nitro-based instances to support VPC encryption controls. All modern Redshift instance types support the necessary encryption capabilities.

**SSL/TLS requirements**

When VPC encryption controls are enabled in enforce mode, the `require_ssl` parameter must be set to `true` and cannot be disabled. This ensures that all client connections use encrypted TLS connections.

## Migrating to VPC encryption controls
<a name="security-vpc-encryption-controls-migration"></a>

**For existing clusters and workgroups**

You cannot enable VPC encryption controls in enforce mode on a VPC that contains existing Redshift clusters or serverless workgroups. If you have an existing cluster or workgroup, use the following steps to adopt encryption controls:

1. Create a snapshot of your existing cluster or namespace.

1. Create a new VPC with VPC encryption controls enabled in enforce mode.

1. Restore from the snapshot into the new VPC using one of these operations:
   + For provisioned clusters: Use the `restore-from-cluster-snapshot` operation
   + For serverless: Use the `restore-from-snapshot` operation on your workgroup
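
The restore step above can be sketched with boto3; `restore_from_cluster_snapshot` is the provisioned-cluster API operation, and the subnet group you pass must belong to the new encryption-enforced VPC (all identifiers below are examples):

```python
def restore_params(cluster_id: str, snapshot_id: str,
                   subnet_group: str) -> dict:
    """Parameters for restoring a snapshot into the new VPC; the
    subnet group determines which VPC the restored cluster lands in."""
    return {
        "ClusterIdentifier": cluster_id,
        "SnapshotIdentifier": snapshot_id,
        "ClusterSubnetGroupName": subnet_group,
    }

def restore_into_new_vpc(cluster_id: str, snapshot_id: str,
                         subnet_group: str) -> None:
    import boto3  # assumed available when running against AWS
    boto3.client("redshift").restore_from_cluster_snapshot(
        **restore_params(cluster_id, snapshot_id, subnet_group))
```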

**When creating new clusters or workgroups in a VPC with encryption controls enabled, the `require_ssl` parameter must be set to `true`.**

## Considerations and limitations
<a name="security-vpc-encryption-controls-limitations"></a>

When using VPC encryption controls in Amazon Redshift, consider the following:

**VPC State Restrictions**
+ Cluster and workgroup creation is blocked when VPC encryption controls are in the `enforce-in-progress` state.
+ You must wait until the VPC reaches `enforce` mode before creating new resources.

**SSL configuration**
+ The `require_ssl` parameter must always be `true` for clusters and workgroups created in encryption-enforced VPCs.
+ After a cluster or workgroup is created in an encryption-enforced VPC, `require_ssl` cannot be disabled for its lifetime.

**Region availability**

This feature is not available in enforce mode with Amazon Redshift Serverless in the following Regions:
+ South America (São Paulo)
+ Europe (Zurich)

# Key management
<a name="security-key-management"></a>

You can configure your environment to protect data with keys:
+ Amazon Redshift automatically integrates with AWS Key Management Service (AWS KMS) for key management. AWS KMS uses envelope encryption. For more information, see [Envelope Encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping). 
+ When encryption keys are managed in AWS KMS, Amazon Redshift uses a four-tier, key-based architecture for encryption. The architecture consists of randomly generated AES-256 data encryption keys, a database key, a cluster key, and a root key. For more information, see [How Amazon Redshift Uses AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/services-redshift.html). 
+ You can create your own customer managed key in AWS KMS. For more information, see [Creating Keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html). 
+ You can also import your own key material for new AWS KMS keys. For more information, see [Importing Key Material in AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/importing-keys.html). 
+ Amazon Redshift supports management of encryption keys in external hardware security modules (HSMs). The HSM can be on-premises or can be AWS CloudHSM. When you use an HSM, you must use client and server certificates to configure a trusted connection between Amazon Redshift and your HSM. Amazon Redshift supports only AWS CloudHSM Classic for key management. For more information, see [Encryption using hardware security modules](working-with-db-encryption.md#working-with-HSM). For information about AWS CloudHSM, see [What is AWS CloudHSM?](https://docs.aws.amazon.com/cloudhsm/latest/userguide/introduction.html) 
+ You can rotate encryption keys for encrypted clusters. For more information, see [Encryption key rotation](working-with-db-encryption.md#working-with-key-rotation). 
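
For example, creating a cluster encrypted with a customer managed key pairs `Encrypted` with `KmsKeyId` in the CreateCluster parameters. A hedged boto3 sketch (the node type, names, and password placeholder are examples, not recommendations):

```python
def encrypted_cluster_params(cluster_id: str, kms_key_id: str,
                             master_password: str) -> dict:
    """Parameters for creating a provisioned cluster encrypted with a
    customer managed AWS KMS key."""
    return {
        "ClusterIdentifier": cluster_id,
        "NodeType": "ra3.xlplus",          # example node type
        "MasterUsername": "awsuser",       # example admin user
        "MasterUserPassword": master_password,
        "Encrypted": True,                 # encrypt data at rest
        "KmsKeyId": kms_key_id,            # customer managed key
    }

def create_encrypted_cluster(cluster_id: str, kms_key_id: str,
                             master_password: str) -> None:
    import boto3  # assumed available when running against AWS
    boto3.client("redshift").create_cluster(
        **encrypted_cluster_params(cluster_id, kms_key_id, master_password))
```

If `KmsKeyId` is omitted while `Encrypted` is `true`, Amazon Redshift uses the default key for your account.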