

Amazon Redshift will no longer support the creation of new Python UDFs starting with patch 198. Existing Python UDFs will continue to function until June 30, 2026. For more information, see the [blog post](https://aws.amazon.com/blogs/big-data/amazon-redshift-python-user-defined-functions-will-reach-end-of-support-after-june-30-2026/).

# Getting started with zero-ETL integrations
<a name="zero-etl-using.setting-up"></a>

This set of tasks walks you through setting up your first zero-ETL integration. First, you configure your integration source and set it up with the required parameters and permissions. Then, you continue to the rest of the initial setup from the Amazon Redshift console or AWS CLI. The console provides a **Fix it for me** option to correct some configuration issues.

**Topics**
+ [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md)
+ [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md)
+ [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md)
+ [Create a zero-ETL integration](zero-etl-setting-up.create-integration.md)

# Create and configure a target Amazon Redshift data warehouse
<a name="zero-etl-setting-up.rs-data-warehouse"></a>

In this step, you create and configure a target Amazon Redshift data warehouse, such as a Redshift Serverless workgroup or a provisioned cluster. If you already have an Amazon Redshift data warehouse configured for use with zero-ETL integrations, you can skip this step.

Your target data warehouse must have the following characteristics:
+ Running Amazon Redshift Serverless or a provisioned cluster of an RA3 node type. 
+ Has case sensitivity (`enable_case_sensitive_identifier`) turned on. For more information, see [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
+ Encrypted, if your target data warehouse is an Amazon Redshift provisioned cluster. For more information, see [Amazon Redshift database encryption](working-with-db-encryption.md).
+ Created in the same AWS Region as the integration source.

To create your target data warehouse for your zero-ETL integrations, see one of the following topics depending on your deployment type:
+ To create an Amazon Redshift provisioned cluster, see [Creating a cluster](create-cluster.md).
+ To create an Amazon Redshift Serverless workgroup with a namespace, see [Creating a workgroup with a namespace](serverless-console-workgroups-create-workgroup-wizard.md).

When you create a provisioned cluster, Amazon Redshift also creates a default parameter group. You can't edit the default parameter group. However, you can create a custom parameter group before creating a new cluster and then associate it with the cluster. Or, you can edit the parameter group that will be associated with the created cluster. To use zero-ETL integrations, you must also turn on case sensitivity for the parameter group, either when you create the custom parameter group or when you edit an existing one.

To create a custom parameter group using the Amazon Redshift console or the AWS CLI, see [Creating a parameter group](https://docs.aws.amazon.com/redshift/latest/mgmt/parameter-group-create.html).

# Turn on case sensitivity for your data warehouse
<a name="zero-etl-setting-up.case-sensitivity"></a>

You can attach a parameter group and turn on case sensitivity for a provisioned cluster during creation. For a Redshift Serverless workgroup, however, you can update the configuration only after the workgroup is created, using the AWS Command Line Interface (AWS CLI). This setting is required to support the case sensitivity of source tables and columns. The `enable_case_sensitive_identifier` configuration value determines whether name identifiers of databases, tables, and columns are case sensitive. This parameter must be turned on to create zero-ETL integrations in the data warehouse. For more information, see [enable\_case\_sensitive\_identifier](https://docs.aws.amazon.com/redshift/latest/dg/r_enable_case_sensitive_identifier.html).

For Amazon Redshift Serverless – [Turn on case sensitivity for Amazon Redshift Serverless using the AWS CLI](#case-sensitivity-serverless-cli). Note that you can turn on case sensitivity for Amazon Redshift Serverless only from the AWS CLI.

For Amazon Redshift provisioned clusters, enable case sensitivity for your target cluster using one of the following topics: 
+ [Turn on case sensitivity for Amazon Redshift provisioned clusters using the Amazon Redshift console](#case-sensitivity-cluster-console)
+ [Turn on case sensitivity for Amazon Redshift provisioned clusters using the AWS CLI](#case-sensitivity-cluster-cli)

## Turn on case sensitivity for Amazon Redshift Serverless using the AWS CLI
<a name="case-sensitivity-serverless-cli"></a>

Run the following AWS CLI command to turn on case sensitivity for your workgroup. 

```
aws redshift-serverless update-workgroup \
        --workgroup-name target-workgroup \
        --config-parameters parameterKey=enable_case_sensitive_identifier,parameterValue=true
```

Wait for the workgroup status to be `Active` before proceeding to the next step.
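You can also check the status from the AWS CLI by querying the workgroup directly. The following is a sketch that reuses the example workgroup name from the preceding command; note that the API typically reports the status in uppercase (for example, `AVAILABLE`).

```
# Query the current status of the workgroup.
aws redshift-serverless get-workgroup \
    --workgroup-name target-workgroup \
    --query 'workgroup.status' \
    --output text
```

You can poll this command until the configuration change finishes before proceeding.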

## Turn on case sensitivity for Amazon Redshift provisioned clusters using the Amazon Redshift console
<a name="case-sensitivity-cluster-console"></a>

1. Sign in to the AWS Management Console and open the Amazon Redshift console at [https://console.aws.amazon.com/redshiftv2/](https://console.aws.amazon.com/redshiftv2/).

1. In the left navigation pane, choose **Provisioned clusters dashboard**.

1. Choose the provisioned cluster that you want to replicate data into.

1. In the left navigation pane, choose **Configurations** > **Workload management**.

1. On the workload management page, choose the parameter group.

1. Choose the **Parameters** tab.

1. Choose **Edit parameters**, then change **enable\_case\_sensitive\_identifier** to **true**.

1. Choose **Save**.

## Turn on case sensitivity for Amazon Redshift provisioned clusters using the AWS CLI
<a name="case-sensitivity-cluster-cli"></a>

1. Because you can't edit the default parameter group, from your terminal program, run the following AWS CLI command to create a custom parameter group. Later, you will associate it with the provisioned cluster.

   ```
   aws redshift create-cluster-parameter-group \
       --parameter-group-name zero-etl-params \
       --parameter-group-family redshift-2.0 \
       --description "Param group for zero-ETL integrations"
   ```

1. Run the following AWS CLI command to turn on case sensitivity for the parameter group.

   ```
   aws redshift modify-cluster-parameter-group \
       --parameter-group-name zero-etl-params \
       --parameters ParameterName=enable_case_sensitive_identifier,ParameterValue=true
   ```

1. Run the following command to associate the parameter group with the cluster.

   ```
   aws redshift modify-cluster \
       --cluster-identifier target-cluster \
       --cluster-parameter-group-name zero-etl-params
   ```

1. Wait for the provisioned cluster to be available. You can check the status of the cluster by using the `describe-clusters` command. Then, run the following command to reboot the cluster.

   ```
   aws redshift reboot-cluster \
       --cluster-identifier target-cluster
   ```
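To confirm that the cluster is available before and after the reboot, you can query its status. The following is a sketch using the example cluster identifier from the preceding commands.

```
# Check the cluster status; wait until it reports "available".
aws redshift describe-clusters \
    --cluster-identifier target-cluster \
    --query 'Clusters[0].ClusterStatus' \
    --output text
```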

# Configure authorization for your Amazon Redshift data warehouse
<a name="zero-etl-using.redshift-iam"></a>

To replicate data from your integration source into your Amazon Redshift data warehouse, you must first add the following two entities:
+ *Authorized principal* – identifies the user or role that can create zero-ETL integrations into the data warehouse.
+ *Authorized integration source* – identifies the source database that can update the data warehouse.

You can configure authorized principals and authorized integration sources from the **Resource Policy** tab on the Amazon Redshift console or using the Amazon Redshift `PutResourcePolicy` API operation.

## Add authorized principals
<a name="zero-etl-using.redshift-iam-ap"></a>

To create a zero-ETL integration into your Redshift Serverless workgroup or provisioned cluster, authorize access to the associated namespace or provisioned cluster. 

You can skip this step if both of the following conditions are true:
+ The AWS account that owns the Redshift Serverless workgroup or provisioned cluster also owns the source database.
+ The principal creating the integration is associated with an identity-based IAM policy that has permissions to create zero-ETL integrations into this Redshift Serverless namespace or provisioned cluster.

### Add authorized principals to an Amazon Redshift Serverless namespace
<a name="iam-ap-serverless"></a>

1. In the Amazon Redshift console, in the left navigation pane, choose **Redshift Serverless**.

1. Choose **Namespace configuration**, then choose your namespace, and go to the **Resource Policy** tab.

1. Choose **Add authorized principals**.

1. For each authorized principal that you want to add, enter either the ARN of the AWS user or role, or the ID of the AWS account that you want to grant access to create zero-ETL integrations into the namespace. An account ID is stored as an ARN.

1. Choose **Save changes**.

### Add authorized principals to an Amazon Redshift provisioned cluster
<a name="iam-ap-cluster"></a>

1. In the Amazon Redshift console, in the left navigation pane, choose **Provisioned clusters dashboard**.

1. Choose **Clusters**, then choose the cluster, and go to the **Resource Policy** tab.

1. Choose **Add authorized principals**.

1. For each authorized principal that you want to add, enter either the ARN of the AWS user or role, or the ID of the AWS account that you want to grant access to create zero-ETL integrations into the cluster. An account ID is stored as an ARN.

1. Choose **Save changes**.

## Add authorized integration sources
<a name="zero-etl-using.redshift-iam-air"></a>

To allow your source to update your Amazon Redshift data warehouse, you must add it as an authorized integration source to the namespace.

### Add an authorized integration source to an Amazon Redshift Serverless namespace
<a name="iam-air-serverless"></a>

1. In the Amazon Redshift console, go to **Serverless dashboard**. 

1. Choose the name of the namespace.

1. Go to the **Resource Policy** tab.

1. Choose **Add authorized integration source**.

1. Specify the ARN of the source for the zero-ETL integration.

**Note**  
Removing an authorized integration source stops data from replicating into the namespace. This action deactivates all zero-ETL integrations from that source into this namespace.

### Add an authorized integration source to an Amazon Redshift provisioned cluster
<a name="iam-air-cluster"></a>

1. In the Amazon Redshift console, go to **Provisioned clusters dashboard**. 

1. Choose the name of the provisioned cluster.

1. Go to the **Resource Policy** tab.

1. Choose **Add authorized integration source**.

1. Specify the ARN of the source that's the data source for the zero-ETL integration.

**Note**  
Removing an authorized integration source stops data from replicating into the provisioned cluster. This action deactivates all zero-ETL integrations from that source into this Amazon Redshift provisioned cluster.

## Configure authorization using the Amazon Redshift API
<a name="zero-etl-using.resource-policies"></a>

You can use the Amazon Redshift API operations to configure resource policies that work with zero-ETL integrations.

To control the source that can create an inbound integration into the namespace, create a resource policy and attach it to the namespace. With the resource policy, you can specify the source that has access to the integration. The resource policy is attached to the namespace of your target data warehouse to allow the source to create an inbound integration to replicate live data from the source into Amazon Redshift.

The following is a sample resource policy.

------
#### [ JSON ]


```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "redshift.amazonaws.com"
      },
      "Action": "redshift:AuthorizeInboundIntegration",
      "Resource": "arn:aws:redshift:*:*:integration:*",
      "Condition": {
        "StringEquals": {
          "aws:SourceArn": "arn:aws:rds:*:111122223333:cluster:*"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
       "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": "redshift:CreateInboundIntegration",
      "Resource": "arn:aws:redshift:*:*:integration:*"
    }
  ]
}
```

------

The following summarizes the Amazon Redshift API operations applicable to configuring resource policies for integrations:
+ Use the [PutResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_PutResourcePolicy.html) API operation to persist the resource policy. When you provide another resource policy, the previous resource policy on the resource is replaced. Use the previous example resource policy, which grants permissions for the following actions:
  + `CreateInboundIntegration` – Allows the source principal to create an inbound integration for data to be replicated from the source into the target data warehouse.
  + `AuthorizeInboundIntegration` – Allows Amazon Redshift to continuously validate that the target data warehouse can receive data replicated from the source ARN.
+ Use the [GetResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_GetResourcePolicy.html) API operation to view existing resource policies.
+ Use the [DeleteResourcePolicy](https://docs.aws.amazon.com/redshift/latest/APIReference/API_DeleteResourcePolicy.html) API operation to remove a resource policy from the resource.
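The equivalent AWS CLI commands follow the same pattern. For example, to view and then remove the resource policy on a Redshift Serverless namespace, you might run commands similar to the following sketch, which reuses the example namespace ARN from this topic.

```
# View the resource policy currently attached to the namespace.
aws redshift get-resource-policy \
    --resource-arn "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"

# Remove the resource policy from the namespace.
aws redshift delete-resource-policy \
    --resource-arn "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"
```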

To update a resource policy, you can also use the [put-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/redshift/put-resource-policy.html) AWS CLI command. For example, to put a resource policy on your Amazon Redshift namespace ARN for a DynamoDB source, run an AWS CLI command similar to the following.

```
aws redshift put-resource-policy \
--policy file://rs-rp.json \
--resource-arn "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"
```

Where `rs-rp.json` contains:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "redshift.amazonaws.com"
            },
            "Action": "redshift:AuthorizeInboundIntegration",
            "Resource": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/test_ddb"
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:root"
            },
            "Action": "redshift:CreateInboundIntegration",
            "Resource": "arn:aws:redshift-serverless:us-east-1:123456789012:namespace/cc4ffe56-ad2c-4fd1-a5a2-f29124a56433"
        }
    ]
}
```

------

# Create a zero-ETL integration
<a name="zero-etl-setting-up.create-integration"></a>

First, you create a zero-ETL integration to replicate your source data to Amazon Redshift.

The source of your data determines which type of zero-ETL integration to create.

**Topics**
+ [Create a zero-ETL integration for Aurora](zero-etl-setting-up.create-integration-aurora.md)
+ [Create a zero-ETL integration for Amazon RDS](zero-etl-setting-up.create-integration-rds.md)
+ [Create a zero-ETL integration for DynamoDB](zero-etl-setting-up.create-integration-ddb.md)
+ [Create a zero-ETL integration with applications](zero-etl-setting-up.create-integration-glue.md)

# Create a zero-ETL integration for Aurora
<a name="zero-etl-setting-up.create-integration-aurora"></a>

In this step, you create an Aurora zero-ETL integration with Amazon Redshift.

**To create an Aurora zero-ETL integration with Amazon Redshift**

1. From the Amazon RDS console, [create a custom DB cluster parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.parameters) as described in the *Amazon Aurora User Guide*.

1. From the Amazon RDS console, [create a source Amazon Aurora DB cluster](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.setting-up.html#zero-etl.create-cluster) as described in the *Amazon Aurora User Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon RDS console, [create a zero-ETL integration](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html#zero-etl.create) as described in the *Amazon Aurora User Guide*.

1. From the Amazon Redshift console or the query editor v2, [create an Amazon Redshift database from your integration](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.creating-db.html).

   Then, [query and create materialized views with replicated data](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.querying-and-creating-materialized-views.html).

For detailed information to create Aurora zero-ETL integrations, see [Creating Amazon Aurora zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/zero-etl.creating.html) in the *Amazon Aurora User Guide*.

# Create a zero-ETL integration for Amazon RDS
<a name="zero-etl-setting-up.create-integration-rds"></a>

In this step, you create an Amazon RDS zero-ETL integration with Amazon Redshift. Amazon Redshift supports zero-ETL integrations with RDS for MySQL, RDS for PostgreSQL, and RDS for Oracle.

**To create an Amazon RDS zero-ETL integration with Amazon Redshift**

1. From the Amazon RDS console, [create a custom DB parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.setting-up.html#zero-etl.parameters) as described in the *Amazon RDS User Guide*.

1. From the Amazon RDS console, [create a source Amazon RDS instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.setting-up.html#zero-etl.create-cluster) as described in the *Amazon RDS User Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon RDS console, [create a zero-ETL integration](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html#zero-etl.create) as described in the *Amazon RDS User Guide*.

1. From the Amazon Redshift console or the query editor v2, [create an Amazon Redshift database from your integration](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.creating-db.html).

   Then, [query and create materialized views with replicated data](https://docs.aws.amazon.com/redshift/latest/mgmt/zero-etl-using.querying-and-creating-materialized-views.html).

The Amazon RDS console offers a step-by-step integration creation flow, in which you specify the source database and the target Amazon Redshift data warehouse. If issues occur, then you can choose to have Amazon RDS fix the issues for you instead of manually fixing them on either the Amazon RDS or Amazon Redshift console. 

For detailed instructions to create RDS zero-ETL integrations, see [Creating Amazon RDS zero-ETL integrations with Amazon Redshift](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/zero-etl.creating.html) in the *Amazon RDS User Guide*. 

For detailed instructions to specifically create an Amazon RDS for Oracle zero-ETL integration, see [Setting up a zero-ETL integration](https://docs.aws.amazon.com/odb/latest/UserGuide/setting-up-zero-etl.html) in the *Oracle Database@AWS User Guide*.

# Create a zero-ETL integration for DynamoDB
<a name="zero-etl-setting-up.create-integration-ddb"></a>

Before creating a zero-ETL integration, review the considerations and requirements outlined in [Considerations when using zero-ETL integrations with Amazon Redshift](zero-etl.reqs-lims.md). Then follow this general flow to create a zero-ETL integration from DynamoDB to Amazon Redshift.

**To replicate DynamoDB data to Amazon Redshift with zero-ETL integration**

1. Confirm that your sign-in credentials have the permissions required to work with zero-ETL integrations for Amazon Redshift and DynamoDB. See [IAM policy to work with DynamoDB zero-ETL integrations](#zero-etl-signin-iam-policy) for an example IAM policy.

1. From the DynamoDB console, [configure your DynamoDB table](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/RedshiftforDynamoDB-zero-etl.html#RedshiftforDynamoDB-zero-etl-prereqs) to have point-in-time recovery (PITR), resource policies, identity-based policies, and encryption key permissions as described in the *Amazon DynamoDB Developer Guide*.

1. From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md). 
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the Amazon Redshift console, create the zero-ETL integration as described later in this topic.

1. From the Amazon Redshift console, create the destination database in your Amazon Redshift data warehouse. For more information, see [Creating destination databases in Amazon Redshift](zero-etl-using.creating-db.md).

1. From the Amazon Redshift console, query your replicated data in the Amazon Redshift data warehouse. For more information, see [Querying replicated data in Amazon Redshift](zero-etl-using.querying-and-creating-materialized-views.md).

In this step, you create an Amazon DynamoDB zero-ETL integration with Amazon Redshift.

------
#### [ Amazon Redshift console ]

**To create an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the Amazon Redshift console**

1. From the Amazon Redshift console, choose **Zero-ETL integrations**. On the pane with the list of zero-ETL integrations, choose **Create zero-ETL integration**, **Create DynamoDB integration**.

1. On the pages to create an integration, enter information about the integration as follows:
   + Enter an **Integration name** – A unique name that you can use to reference your integration.
   + Enter a **Description** – A description of the data to be replicated from source to target.
   + Choose the DynamoDB **Source table** – You can choose one DynamoDB table. Point-in-time recovery (PITR) must be enabled on the table. Only tables with a table size up to 100 tebibytes (TiB) are shown. The source DynamoDB table must be encrypted. The source must also have a resource policy with authorized principals and integration sources. If the policy is not correct, you are presented with the option **Fix it for me**.
   + Choose the target **Amazon Redshift data warehouse** – The data warehouse can be an Amazon Redshift provisioned cluster or a Redshift Serverless workgroup. If your target Amazon Redshift data warehouse is in the same account, you can select the target. If the target is in a different account, specify the **Redshift data warehouse ARN**. The target must have a resource policy with authorized principals and an authorized integration source, and the `enable_case_sensitive_identifier` parameter must be set to `true`. If you don't have the correct resource policies on the target and your target is in the same account, you can select the **Fix it for me** option to automatically apply the resource policies during the create integration process. If your target is in a different AWS account, you must apply the resource policy on the Amazon Redshift data warehouse manually. If your target data warehouse doesn't have the parameter group option `enable_case_sensitive_identifier` set to `true`, you can select the **Fix it for me** option to automatically update the parameter group and reboot the warehouse during the create integration process.
   + Enter up to 50 tag **Keys**, each with an optional **Value** – To provide additional metadata about the integration. For more information, see [Tag resources in Amazon Redshift](amazon-redshift-tagging.md).
   + Choose **Encryption** options – To encrypt the integration. For more information, see [Encrypting DynamoDB integrations with a customer managed key](#zero-etl.create-encrypt).

     When you encrypt the integration, you can also add **Additional encryption contexts**. For more information, see [Encryption context](#zero-etl.add-encryption-context).

1. A review page is shown where you can choose **Create DynamoDB integration**.

1. A progress page is shown where you can view the progress of the various tasks to create the zero-ETL integration.

1. After the integration is created and active, on the details page of the integration, choose **Connect to database**. When your Amazon Redshift data warehouse was first created, a database was also created. You need to connect to any database in your target data warehouse to create another database for the integration. In the **Connect to database** page, determine whether you can use a recent connection and choose an **Authentication** method. Depending on your authentication method, enter information to connect to an existing database in your target. This authentication information can include the existing **Database name** (typically, `dev`) and the **Database user** specified when the database was created with the Amazon Redshift data warehouse.

1. After you are connected to a database, choose **Create database from integration** to create the database that receives the data from the source. When you create the database, you provide the **Integration ID**, **Data warehouse name**, and **Database name**.

1. After the integration and its destination database are `Active`, data begins to replicate from your DynamoDB table to the target table. As you add data to the source, it replicates automatically to the target Amazon Redshift data warehouse.

------
#### [ AWS CLI ]

To create an Amazon DynamoDB zero-ETL integration with Amazon Redshift using the AWS CLI, use the `create-integration` command with the following options:
+ `integration-name` – Specify a name for the integration.
+ `source-arn` – Specify the ARN of the DynamoDB source.
+ `target-arn` – Specify the namespace ARN of the Amazon Redshift provisioned cluster or Redshift Serverless workgroup target.

The following example creates an integration by providing the integration name, source ARN, and target ARN. The integration is not encrypted.

```
aws redshift create-integration \
--integration-name ddb-integration \
--source-arn arn:aws:dynamodb:us-east-1:123456789012:table/books \
--target-arn arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222
          
{
    "Status": "creating",
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "Errors": [],
    "ResponseMetadata": {
        "RetryAttempts": 0,
        "HTTPStatusCode": 200,
        "RequestId": "132cbe27-fd10-4f0a-aacb-b68f10bb2bfb",
        "HTTPHeaders": {
            "x-amzn-requestid": "132cbe27-fd10-4f0a-aacb-b68f10bb2bfb",
            "date": "Sat, 24 Aug 2024 05:44:08 GMT",
            "content-length": "934",
            "content-type": "text/xml"
        }
    },
    "Tags": [],
    "CreateTime": "2024-08-24T05:44:08.573Z",
    "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {},
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "IntegrationName": "ddb-integration",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/books"
}
```

The following example creates an integration using a customer managed key for encryption. Before creating the integration:
+ Create a customer managed key (called "CMCMK" in the example) in the same account (called "AccountA" in the example) as the source DynamoDB table.
+ Ensure that the user or role (called "RoleA" in the example) that is being used to create the integration has `kms:CreateGrant` and `kms:DescribeKey` permissions on this KMS key.
+ Add the following to the key policy.

```
{
    "Sid": "Enable RoleA to create grants with key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "RoleA-ARN"
    },
    "Action": "kms:CreateGrant",
    "Resource": "*",
    "Condition": {
        // Add "StringEquals" condition if you plan to provide additional encryption context
        // for the zero-ETL integration. Ensure that the key-value pairs added here match
        // the key-value pair you plan to use while creating the integration.
        // Remove this if you don't plan to use additional encryption context
        "StringEquals": {
            "kms:EncryptionContext:context-key1": "context-value1"
        },
        "ForAllValues:StringEquals": {
            "kms:GrantOperations": [
                "Decrypt",
                "GenerateDataKey",
                "CreateGrant"
            ]
        }
    }
},
{
    "Sid": "Enable RoleA to describe key",
    "Effect": "Allow",
    "Principal": {
        "AWS": "RoleA-ARN"
    },
    "Action": "kms:DescribeKey",
    "Resource": "*"
},
{
    "Sid": "Allow use by RS SP",
    "Effect": "Allow",
    "Principal": {
        "Service": "redshift.amazonaws.com"
    },
    "Action": "kms:CreateGrant",
    "Resource": "*"
}
```

```
aws redshift create-integration \
--integration-name ddb-integration \
--source-arn arn:aws:dynamodb:us-east-1:123456789012:table/books \
--target-arn arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222 \
--kms-key-id arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333 \
--additional-encryption-context key33=value33  # Must match the encryption context condition in the key policy.

{
    "IntegrationArn": "arn:aws:redshift:us-east-1:123456789012:integration:a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    "IntegrationName": "ddb-integration",
    "SourceArn": "arn:aws:dynamodb:us-east-1:123456789012:table/books",
    "SourceType": "dynamodb",
    "TargetArn": "arn:aws:redshift:us-east-1:123456789012:namespace:a1b2c3d4-5678-90ab-cdef-EXAMPLE22222",
    "Status": "creating",
    "Errors": [],
    "CreateTime": "2024-10-02T18:29:26.710Z",
    "KMSKeyId": "arn:aws:kms:us-east-1:123456789012:key/a1b2c3d4-5678-90ab-cdef-EXAMPLE33333",
    "AdditionalEncryptionContext": {
        "key33": "value33"
    },
    "Tags": []
}
```
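As a sketch of how a caller might handle this response, the following Python snippet parses the JSON output and checks the `Status` field. The `active` status value and the abbreviated response shape are assumptions based on the sample output above, not a definitive list of integration states.

```python
import json

# Sketch: the create-integration response above reports "Status": "creating".
# A caller could parse the response and check whether replication has started.
# (Response abbreviated; the "active" status is an assumption for illustration.)
response = json.loads("""
{
    "IntegrationName": "ddb-integration",
    "Status": "creating"
}
""")

def is_ready(resp):
    """Return True once the integration reports an active status."""
    return resp["Status"] == "active"

print(is_ready(response))  # prints False: the integration is still being created
```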

------

## IAM policy to work with DynamoDB zero-ETL integrations
<a name="zero-etl-signin-iam-policy"></a>

When creating zero-ETL integrations, your sign-in credentials must have permissions for both DynamoDB and Amazon Redshift actions, and for the resources involved as the source and target of the integration. Following is an example that demonstrates the minimum permissions required.

------
#### [ JSON ]

****  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:ListTables"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetResourcePolicy",
                "dynamodb:PutResourcePolicy",
                "dynamodb:UpdateContinuousBackups"
            ],
            "Resource": [
            "arn:aws:dynamodb:us-east-1:111122223333:table/my-ddb-table"
            ]
        },
        {
            "Sid": "AllowRedshiftDescribeIntegration",
            "Effect": "Allow",
            "Action": [
                "redshift:DescribeIntegrations"
            ],
            "Resource": "*"
        },
        {
            "Sid": "AllowRedshiftCreateIntegration",
            "Effect": "Allow",
            "Action": "redshift:CreateIntegration",
            "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:*"
        },
        {
            "Sid": "AllowRedshiftModifyDeleteIntegration",
            "Effect": "Allow",
            "Action": [
                "redshift:ModifyIntegration",
                "redshift:DeleteIntegration"
            ],
            "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:<uuid>"
        },
        {
            "Sid": "AllowRedshiftCreateInboundIntegration",
            "Effect": "Allow",
            "Action": "redshift:CreateInboundIntegration",
            "Resource": "arn:aws:redshift:us-east-1:111122223333:namespace:<uuid>"
        }
    ]
}
```
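As a quick sanity check (this is a local sketch, not an AWS API call), you can parse a policy document like the one above and confirm that the Allow statements grant the actions an integration needs. The abbreviated policy text and the `allowed_actions` helper below are illustrative only.

```python
import json

# Sketch: parse an IAM policy document and confirm it allows the actions
# a zero-ETL integration requires. Policy abbreviated; substitute your own.
policy = json.loads("""
{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["dynamodb:ListTables"], "Resource": "*"},
        {"Effect": "Allow", "Action": "redshift:CreateIntegration",
         "Resource": "arn:aws:redshift:us-east-1:111122223333:integration:*"}
    ]
}
""")

def allowed_actions(doc):
    """Collect every action granted by Allow statements in the document."""
    actions = set()
    for stmt in doc["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        action = stmt["Action"]
        actions.update([action] if isinstance(action, str) else action)
    return actions

required = {"dynamodb:ListTables", "redshift:CreateIntegration"}
missing = required - allowed_actions(policy)
print(sorted(missing))  # prints []: the sample grants both required actions
```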

------

## Encrypting DynamoDB integrations with a customer managed key
<a name="zero-etl.create-encrypt"></a>

If you specify a customer managed KMS key rather than an AWS owned key when you create a DynamoDB zero-ETL integration, the key policy must give the Amazon Redshift service principal access to the `CreateGrant` action. In addition, it must allow the requester account or role to run the `DescribeKey` and `CreateGrant` actions.

The following sample key policy statements demonstrate the permissions required in your policy. Some examples include context keys to further reduce the scope of permissions.

### Sample key policy statements
<a name="zero-etl.kms-sample-policy"></a>

The following policy statement allows the requester account or role to retrieve information about a KMS key.

```
{
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::{account-ID}:role/{role-name}"
   },
   "Action":"kms:DescribeKey",
   "Resource":"*"
}
```

The following policy statement allows the requester account or role to add a grant to a KMS key. The [kms:ViaService](https://docs.aws.amazon.com/kms/latest/developerguide/conditions-kms.html#conditions-kms-via-service) condition key limits use of the KMS key to requests from Amazon Redshift.

```
{
   "Effect":"Allow",
   "Principal":{
      "AWS":"arn:aws:iam::{account-ID}:role/{role-name}"
   },
   "Action":"kms:CreateGrant",
   "Resource":"*",
   "Condition":{
      "StringEquals":{
         "kms:EncryptionContext:{context-key}":"{context-value}",
         "kms:ViaService":"redshift.{region}.amazonaws.com"
      },
      "ForAllValues:StringEquals":{
         "kms:GrantOperations":[
            "Decrypt",
            "GenerateDataKey",
            "CreateGrant"
         ]
      }
   }
}
```

The following policy statement allows the Amazon Redshift service principal to add a grant to a KMS key.

```
{
   "Effect":"Allow",
   "Principal":{
      "Service":"redshift.amazonaws.com"
   },
   "Action":"kms:CreateGrant",
   "Resource":"*",
   "Condition":{
      "StringEquals":{
         "kms:EncryptionContext:{context-key}":"{context-value}",
         "aws:SourceAccount":"{account-ID}"
      },
      "ForAllValues:StringEquals":{
         "kms:GrantOperations":[
            "Decrypt",
            "GenerateDataKey",
            "CreateGrant"
         ]
      },
      "ArnLike":{
         "aws:SourceArn":"arn:aws:*:{region}:{account-ID}:integration:*"
      }
   }
}
```

For more information, see [Creating a key policy](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-overview.html) in the *AWS Key Management Service Developer Guide*.

## Encryption context
<a name="zero-etl.add-encryption-context"></a>

When you encrypt a zero-ETL integration, you can add key-value pairs as an **Additional encryption context**. These pairs provide additional contextual information about the data being replicated. For more information, see [Encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/encrypt_context.html) in the *AWS Key Management Service Developer Guide*.

Amazon Redshift adds the following encryption context pairs in addition to any that you add:
+ `aws:redshift:integration:arn` - `IntegrationArn`
+ `aws:servicename:id` - `Redshift`

These service-added pairs reduce the number of pairs that you can add from 8 to 6, and they count toward the overall character limit of the grant constraint. For more information, see [Using grant constraints](https://docs.aws.amazon.com/kms/latest/developerguide/create-grant-overview.html#grant-constraints) in the *AWS Key Management Service Developer Guide*.
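This accounting can be sketched as follows. The two service-added pair names come from the list above; the user-supplied pair names are hypothetical.

```python
# Sketch of the encryption-context pair accounting described above.
# Amazon Redshift always adds these two pairs (example ARN value):
service_pairs = {
    "aws:redshift:integration:arn": "arn:aws:redshift:us-east-1:123456789012:integration:EXAMPLE",
    "aws:servicename:id": "Redshift",
}

# Hypothetical user-supplied pairs; at most 8 pairs total are allowed,
# so at most 6 can come from you.
MAX_PAIRS = 8
user_pairs = {"department": "finance", "data-class": "internal"}
assert len(user_pairs) <= MAX_PAIRS - len(service_pairs)

total = {**service_pairs, **user_pairs}
print(len(total), "pairs;", MAX_PAIRS - len(total), "user slots remaining")
# prints: 4 pairs; 4 user slots remaining
```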

# Create a zero-ETL integration with applications
<a name="zero-etl-setting-up.create-integration-glue"></a>

In this step, you create a zero-ETL integration with applications, using Amazon Redshift as the target data warehouse.

**To create a zero-ETL integration with applications and Amazon Redshift**

1. Complete the following prerequisite tasks:
   + From the Amazon Redshift console: [Create and configure a target Amazon Redshift data warehouse](zero-etl-setting-up.rs-data-warehouse.md).
   + From the AWS CLI or Amazon Redshift console: [Turn on case sensitivity for your data warehouse](zero-etl-setting-up.case-sensitivity.md).
   + From the Amazon Redshift console: [Configure authorization for your Amazon Redshift data warehouse](zero-etl-using.redshift-iam.md).

1. From the AWS Glue console: [Creating an integration](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-common-integration-tasks.html#zero-etl-creating) as described in the *AWS Glue Developer Guide*.

1. After the destination database has been created and data starts replicating, you can query your replicated data and create materialized views on it. For more information, see [Querying replicated data in Amazon Redshift](zero-etl-using.querying-and-creating-materialized-views.md).

For detailed information about creating zero-ETL integrations with applications, see [Zero-ETL integrations](https://docs.aws.amazon.com/glue/latest/dg/zero-etl-using.html) in the *AWS Glue Developer Guide*.