

# Integrating Amazon Aurora PostgreSQL with other AWS services
<a name="AuroraPostgreSQL.Integrating"></a>

Amazon Aurora integrates with other AWS services so that you can extend your Aurora PostgreSQL DB cluster to use additional capabilities in the AWS Cloud. Your Aurora PostgreSQL DB cluster can use AWS services to do the following:
+ Quickly collect, view, and assess performance for your Aurora PostgreSQL DB instances with Amazon RDS Performance Insights. Performance Insights expands on existing Amazon RDS monitoring features to illustrate your database's performance and help you analyze any issues that affect it. With the Performance Insights dashboard, you can visualize the database load and filter the load by waits, SQL statements, hosts, or users. For more information about Performance Insights, see [Monitoring DB load with Performance Insights on Amazon Aurora](USER_PerfInsights.md). 
+ Configure your Aurora PostgreSQL DB cluster to publish log data to Amazon CloudWatch Logs. CloudWatch Logs provide highly durable storage for your log records. With CloudWatch Logs, you can perform real-time analysis of the log data, and use CloudWatch to create alarms and view metrics. For more information, see [Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs](AuroraPostgreSQL.CloudWatch.md).
+ Import data from an Amazon S3 bucket to an Aurora PostgreSQL DB cluster, or export data from an Aurora PostgreSQL DB cluster to an Amazon S3 bucket. For more information, see [Importing data from Amazon S3 into an Aurora PostgreSQL DB cluster](USER_PostgreSQL.S3Import.md) and [Exporting data from an Aurora PostgreSQL DB cluster to Amazon S3](postgresql-s3-export.md).
+ Add machine learning-based predictions to database applications using the SQL language. Aurora machine learning uses a highly optimized integration between the Aurora database and the AWS machine learning (ML) services SageMaker AI and Amazon Comprehend. For more information, see [Using Amazon Aurora machine learning with Aurora PostgreSQL](postgresql-ml.md).
+ Invoke AWS Lambda functions from an Aurora PostgreSQL DB cluster. To do this, use the `aws_lambda` PostgreSQL extension provided with Aurora PostgreSQL. For more information, see [Invoking an AWS Lambda function from an Aurora PostgreSQL DB cluster](PostgreSQL-Lambda.md).
+ Integrate queries from Amazon Redshift and Aurora PostgreSQL. For more information, see [Getting started with using federated queries to PostgreSQL](https://docs.aws.amazon.com/redshift/latest/dg/getting-started-federated.html) in the *Amazon Redshift Database Developer Guide*.

# Importing data from Amazon S3 into an Aurora PostgreSQL DB cluster
<a name="USER_PostgreSQL.S3Import"></a>

You can import data that's been stored in Amazon Simple Storage Service into a table on an Aurora PostgreSQL DB cluster instance. To do this, you first install the Aurora PostgreSQL `aws_s3` extension. This extension provides the functions that you use to import data from an Amazon S3 bucket. A *bucket* is an Amazon S3 container for objects and files. The data can be in a comma-separated value (CSV) file, a text file, or a compressed (gzip) file. Following, you can learn how to install the extension and how to import data from Amazon S3 into a table. 

Your database must be running PostgreSQL version 10.7 or higher to import from Amazon S3 into Aurora PostgreSQL. 

If you don't have data stored on Amazon S3, you need to first create a bucket and store the data. For more information, see the following topics in the *Amazon Simple Storage Service User Guide*. 
+ [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#creating-bucket)
+ [Add an object to a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/GetStartedWithS3.html#uploading-an-object-bucket)

Cross-account import from Amazon S3 is supported. For more information, see [Granting cross-account permissions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example2.html) in the *Amazon Simple Storage Service User Guide*.

You can use a customer managed key for encryption while importing data from Amazon S3. For more information, see [KMS keys stored in AWS KMS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) in the *Amazon Simple Storage Service User Guide*.

**Note**  
Importing data from Amazon S3 isn't supported for Aurora Serverless v1. It is supported for Aurora Serverless v2.

**Topics**
+ [Installing the aws\_s3 extension](USER_PostgreSQL.S3Import.InstallExtension.md)
+ [Overview of importing data from Amazon S3](USER_PostgreSQL.S3Import.Overview.md)
+ [Setting up access to an Amazon S3 bucket](USER_PostgreSQL.S3Import.AccessPermission.md)
+ [Importing data from Amazon S3 to your Aurora PostgreSQL DB cluster](USER_PostgreSQL.S3Import.FileFormats.md)
+ [Function reference](USER_PostgreSQL.S3Import.Reference.md)

# Installing the aws\_s3 extension
<a name="USER_PostgreSQL.S3Import.InstallExtension"></a>

Before you can use Amazon S3 with your Aurora PostgreSQL DB cluster, you need to install the `aws_s3` extension. This extension provides functions for importing data from an Amazon S3 bucket. It also provides functions for exporting data from an instance of an Aurora PostgreSQL DB cluster to an Amazon S3 bucket. For more information, see [Exporting data from an Aurora PostgreSQL DB cluster to Amazon S3](postgresql-s3-export.md). The `aws_s3` extension depends on some of the helper functions in the `aws_commons` extension, which is installed automatically when needed. 

**To install the `aws_s3` extension**

1. Use psql (or pgAdmin) to connect to the writer instance of your Aurora PostgreSQL DB cluster as a user that has `rds_superuser` privileges. If you kept the default name during the setup process, you connect as `postgres`.

   ```
   psql --host=111122223333.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password
   ```

1. To install the extension, run the following command. 

   ```
   postgres=> CREATE EXTENSION aws_s3 CASCADE;
   NOTICE: installing required extension "aws_commons"
   CREATE EXTENSION
   ```

1. To verify that the extension is installed, you can use the psql `\dx` metacommand.

   ```
   postgres=> \dx
          List of installed extensions
       Name     | Version |   Schema   |                 Description
   -------------+---------+------------+---------------------------------------------
    aws_commons | 1.2     | public     | Common data types across AWS services
    aws_s3      | 1.1     | public     | AWS S3 extension for importing data from S3
    plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
   (3 rows)
   ```

The functions for importing data from Amazon S3 and exporting data to Amazon S3 are now available to use.

# Overview of importing data from Amazon S3
<a name="USER_PostgreSQL.S3Import.Overview"></a>

**To import S3 data into Aurora PostgreSQL**

First, gather the details that you need to supply to the function. These include the name of the table on your Aurora PostgreSQL DB cluster's instance, and the bucket name, file path, file type, and AWS Region where the Amazon S3 data is stored. For more information, see [View an object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/OpeningAnObject.html) in the *Amazon Simple Storage Service User Guide*.
**Note**  
Multipart data import from Amazon S3 isn't currently supported.

1. Get the name of the table into which the `aws_s3.table_import_from_s3` function is to import the data. As an example, the following command creates a table `t1` that can be used in later steps. 

   ```
   postgres=> CREATE TABLE t1 
       (col1 varchar(80), 
       col2 varchar(80), 
       col3 varchar(80));
   ```

1. Get the details about the Amazon S3 bucket and the data to import. To do this, open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/), and choose **Buckets**. Find the bucket containing your data in the list. Choose the bucket, open its **Object overview** page, and then choose **Properties**.

   Make a note of the bucket name, path, the AWS Region, and file type. You need the Amazon Resource Name (ARN) later, to set up access to Amazon S3 through an IAM role. For more information, see [Setting up access to an Amazon S3 bucket](USER_PostgreSQL.S3Import.AccessPermission.md). The image following shows an example.   
![\[Image of a file object in an Amazon S3 bucket.\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/aws_s3_import-export_s3_bucket-info.png)

1. You can verify the path to the data on the Amazon S3 bucket by using the AWS CLI command `aws s3 cp`. If the information is correct, this command downloads a copy of the Amazon S3 file. 

   ```
   aws s3 cp s3://amzn-s3-demo-bucket/sample_file_path ./ 
   ```

1. Set up permissions on your Aurora PostgreSQL DB cluster to allow access to the file on the Amazon S3 bucket. To do so, you use either an AWS Identity and Access Management (IAM) role or security credentials. For more information, see [Setting up access to an Amazon S3 bucket](USER_PostgreSQL.S3Import.AccessPermission.md).

1. Supply the path and other Amazon S3 object details gathered (see step 2) to the `create_s3_uri` function to construct an Amazon S3 URI object. To learn more about this function, see [aws\_commons.create\_s3\_uri](USER_PostgreSQL.S3Import.Reference.md#USER_PostgreSQL.S3Import.create_s3_uri). The following is an example of constructing this object during a psql session.

   ```
   postgres=> SELECT aws_commons.create_s3_uri(
      'docs-lab-store-for-rpg',
      'versions_and_jdks_listing.csv',
      'us-west-1'
   ) AS s3_uri \gset
   ```

   In the next step, you pass this object (an `aws_commons._s3_uri_1` structure, saved in the psql variable `s3_uri`) to the `aws_s3.table_import_from_s3` function to import the data to the table. 

1. Invoke the `aws_s3.table_import_from_s3` function to import the data from Amazon S3 into your table. For reference information, see [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3). For examples, see [Importing data from Amazon S3 to your Aurora PostgreSQL DB cluster](USER_PostgreSQL.S3Import.FileFormats.md). 
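The steps above can be sketched end to end as a single SQL statement. The following is a minimal Python illustration that assembles that statement as a string; the bucket, file, and Region values are placeholders, not real resources.

```python
# Sketch: build the aws_s3.table_import_from_s3 call that the steps above
# perform interactively in psql. All values here are placeholders.
bucket = "amzn-s3-demo-bucket"
file_path = "sample_file_path"
region = "us-west-1"

# aws_commons.create_s3_uri takes (bucket, file_path, region).
s3_uri_call = f"aws_commons.create_s3_uri('{bucket}', '{file_path}', '{region}')"

# The import call: table name, optional column list, COPY options, S3 URI.
import_sql = (
    "SELECT aws_s3.table_import_from_s3("
    "'t1', '', '(format csv)', " + s3_uri_call + ");"
)
print(import_sql)
```

Running the printed statement in psql is equivalent to the two-step `\gset` workflow shown earlier.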

# Setting up access to an Amazon S3 bucket
<a name="USER_PostgreSQL.S3Import.AccessPermission"></a>

To import data from an Amazon S3 file, give the Aurora PostgreSQL DB cluster permission to access the Amazon S3 bucket containing the file. You provide access to an Amazon S3 bucket in one of two ways, as described in the following topics.

**Topics**
+ [Using an IAM role to access an Amazon S3 bucket](#USER_PostgreSQL.S3Import.ARNRole)
+ [Using security credentials to access an Amazon S3 bucket](#USER_PostgreSQL.S3Import.Credentials)
+ [Troubleshooting access to Amazon S3](#USER_PostgreSQL.S3Import.troubleshooting)

## Using an IAM role to access an Amazon S3 bucket
<a name="USER_PostgreSQL.S3Import.ARNRole"></a>

Before you load data from an Amazon S3 file, give your Aurora PostgreSQL DB cluster permission to access the Amazon S3 bucket the file is in. This way, you don't have to manage additional credential information or provide it in the [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3) function call.

To do this, create an IAM policy that provides access to the Amazon S3 bucket. Create an IAM role and attach the policy to the role. Then assign the IAM role to your DB cluster. 

**Note**  
You can't associate an IAM role with an Aurora Serverless v1 DB cluster, so the following steps don't apply.

**To give an Aurora PostgreSQL DB cluster access to Amazon S3 through an IAM role**

1. Create an IAM policy. 

   This policy provides the bucket and object permissions that allow your Aurora PostgreSQL DB cluster to access Amazon S3. 

   Include in the policy the following required actions to allow the transfer of files from an Amazon S3 bucket to Aurora PostgreSQL: 
   + `s3:GetObject` 
   + `s3:ListBucket` 

   Include in the policy the following resources to identify the Amazon S3 bucket and objects in the bucket. This shows the Amazon Resource Name (ARN) format for accessing Amazon S3.
   + arn:aws:s3:::*amzn-s3-demo-bucket*
   + arn:aws:s3:::*amzn-s3-demo-bucket*/\*

   For more information on creating an IAM policy for Aurora PostgreSQL, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). See also [Tutorial: Create and attach your first customer managed policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_managed-policies.html) in the *IAM User Guide*.

   The following AWS CLI command creates an IAM policy named `rds-s3-import-policy` with these options. It grants access to a bucket named *amzn-s3-demo-bucket*. 
**Note**  
Make a note of the Amazon Resource Name (ARN) of the policy returned by this command. You need the ARN in a subsequent step when you attach the policy to an IAM role.  
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam create-policy \
      --policy-name rds-s3-import-policy \
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "s3import",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": [
              "arn:aws:s3:::amzn-s3-demo-bucket", 
              "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ] 
          }
        ] 
      }'
   ```

   For Windows:

   ```
   aws iam create-policy ^
      --policy-name rds-s3-import-policy ^
      --policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "s3import",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket"
            ], 
            "Effect": "Allow",
            "Resource": [
              "arn:aws:s3:::amzn-s3-demo-bucket", 
              "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ] 
          }
        ] 
      }'
   ```

1. Create an IAM role. 

   You do this so Aurora PostgreSQL can assume this IAM role to access your Amazon S3 buckets. For more information, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

   We recommend using the `[aws:SourceArn](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn)` and `[aws:SourceAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount)` global condition context keys in resource-based policies to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). 

   If you use both global condition context keys and the `aws:SourceArn` value contains the account ID, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same policy statement.
   + Use `aws:SourceArn` if you want cross-service access for a single resource. 
   + Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

   In the policy, be sure to use the `aws:SourceArn` global condition context key with the full ARN of the resource. The following example shows how to do so using the AWS CLI command to create a role named `rds-s3-import-role`.   
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam create-role \
      --role-name rds-s3-import-role \
      --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
               "Service": "rds.amazonaws.com"
             },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                   "aws:SourceAccount": "111122223333",
                   "aws:SourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:clustername"
                   }
                }
          }
        ] 
      }'
   ```

   For Windows:

   ```
   aws iam create-role ^
      --role-name rds-s3-import-role ^
      --assume-role-policy-document '{
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
               "Service": "rds.amazonaws.com"
             },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                   "aws:SourceAccount": "111122223333",
                   "aws:SourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:clustername"
                   }
                }
          }
        ] 
      }'
   ```

1. Attach the IAM policy that you created to the IAM role that you created.

   The following AWS CLI command attaches the policy created in the previous step to the role named `rds-s3-import-role`. Replace `your-policy-arn` with the policy ARN that you noted in an earlier step.   
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam attach-role-policy \
      --policy-arn your-policy-arn \
      --role-name rds-s3-import-role
   ```

   For Windows:

   ```
   aws iam attach-role-policy ^
      --policy-arn your-policy-arn ^
      --role-name rds-s3-import-role
   ```

1. Add the IAM role to the DB cluster. 

   You do so by using the AWS Management Console or AWS CLI, as described following. 

### Console
<a name="collapsible-section-1"></a>

**To add an IAM role for a PostgreSQL DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the PostgreSQL DB cluster name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the role to add under **Add IAM roles to this cluster**. 

1. Under **Feature**, choose **s3Import**.

1. Choose **Add role**.

### AWS CLI
<a name="collapsible-section-2"></a>

**To add an IAM role for a PostgreSQL DB cluster using the CLI**
+ Use the following command to add the role to the PostgreSQL DB cluster named `my-db-cluster`. Replace *`your-role-arn`* with the role ARN that you noted in a previous step. Use `s3Import` for the value of the `--feature-name` option.   
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-role-to-db-cluster \
     --db-cluster-identifier my-db-cluster \
     --feature-name s3Import \
     --role-arn your-role-arn   \
     --region your-region
  ```

  For Windows:

  ```
  aws rds add-role-to-db-cluster ^
     --db-cluster-identifier my-db-cluster ^
     --feature-name s3Import ^
     --role-arn your-role-arn ^
     --region your-region
  ```

### RDS API
<a name="collapsible-section-3"></a>

To add an IAM role for a PostgreSQL DB cluster using the Amazon RDS API, call the [AddRoleToDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_AddRoleToDBCluster.html) operation. 
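Rather than quoting the JSON inline on the command line, the two policy documents used in the preceding steps can also be generated programmatically. The following is a minimal sketch using only the Python standard library; the account ID, cluster name, and bucket name are placeholders that you replace with your own values.

```python
import json

# Placeholders -- substitute your own account, Region, cluster, and bucket.
account_id = "111122223333"
cluster_arn = f"arn:aws:rds:us-east-1:{account_id}:cluster:clustername"
bucket = "amzn-s3-demo-bucket"

# Permissions policy: s3:GetObject and s3:ListBucket on the bucket
# itself and on every object in it.
import_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "s3import",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
    }],
}

# Trust policy: lets the RDS service assume the role, scoped with
# aws:SourceAccount and aws:SourceArn to guard against the confused
# deputy problem.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "rds.amazonaws.com"},
        "Action": "sts:AssumeRole",
        "Condition": {
            "StringEquals": {
                "aws:SourceAccount": account_id,
                "aws:SourceArn": cluster_arn,
            }
        },
    }],
}

print(json.dumps(import_policy, indent=2))
print(json.dumps(trust_policy, indent=2))
```

You can pass the printed documents to `aws iam create-policy` and `aws iam create-role` via `--policy-document file://...` style arguments instead of pasting them inline.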

## Using security credentials to access an Amazon S3 bucket
<a name="USER_PostgreSQL.S3Import.Credentials"></a>

If you prefer, you can use security credentials to provide access to an Amazon S3 bucket instead of providing access with an IAM role. You do so by specifying the `credentials` parameter in the [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3) function call. 

The `credentials` parameter is a structure of type `aws_commons._aws_credentials_1`, which contains AWS credentials. Use the [aws\_commons.create\_aws\_credentials](USER_PostgreSQL.S3Import.Reference.md#USER_PostgreSQL.S3Import.create_aws_credentials) function to set the access key and secret key in an `aws_commons._aws_credentials_1` structure, as shown following. 

```
postgres=> SELECT aws_commons.create_aws_credentials(
   'sample_access_key', 'sample_secret_key', '')
AS creds \gset
```

After creating the `aws_commons._aws_credentials_1` structure, use the [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3) function with the `credentials` parameter to import the data, as shown following.

```
postgres=> SELECT aws_s3.table_import_from_s3(
   't', '', '(format csv)',
   :'s3_uri', 
   :'creds'
);
```

Or you can include the [aws\_commons.create\_aws\_credentials](USER_PostgreSQL.S3Import.Reference.md#USER_PostgreSQL.S3Import.create_aws_credentials) function call inline within the `aws_s3.table_import_from_s3` function call.

```
postgres=> SELECT aws_s3.table_import_from_s3(
   't', '', '(format csv)',
   :'s3_uri', 
   aws_commons.create_aws_credentials('sample_access_key', 'sample_secret_key', '')
);
```

## Troubleshooting access to Amazon S3
<a name="USER_PostgreSQL.S3Import.troubleshooting"></a>

If you encounter connection problems when attempting to import data from Amazon S3, see the following for recommendations:
+ [Troubleshooting Amazon Aurora identity and access](security_iam_troubleshoot.md)
+ [Troubleshooting Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshooting.html) in the *Amazon Simple Storage Service User Guide*
+ [Troubleshooting Amazon S3 and IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-s3.html) in the *IAM User Guide*

# Importing data from Amazon S3 to your Aurora PostgreSQL DB cluster
<a name="USER_PostgreSQL.S3Import.FileFormats"></a>

You import data from your Amazon S3 bucket by using the `table_import_from_s3` function of the `aws_s3` extension. For reference information, see [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3). 

**Note**  
The following examples use the IAM role method to allow access to the Amazon S3 bucket. Thus, the `aws_s3.table_import_from_s3` function calls don't include credential parameters.

The following shows a typical example.

```
postgres=> SELECT aws_s3.table_import_from_s3(
   't1',
   '', 
   '(format csv)',
   :'s3_uri'
);
```

The parameters are the following:
+ `t1` – The name for the table in the PostgreSQL DB cluster to copy the data into. 
+ `''` – An optional list of columns in the database table. You can use this parameter to indicate which columns of the S3 data go in which table columns. If no columns are specified, all the columns are copied to the table. For an example of using a column list, see [Importing an Amazon S3 file that uses a custom delimiter](#USER_PostgreSQL.S3Import.FileFormats.CustomDelimiter).
+ `(format csv)` – PostgreSQL COPY arguments. The copy process uses the arguments and format of the [PostgreSQL COPY](https://www.postgresql.org/docs/current/sql-copy.html) command to import the data. Choices for format include comma-separated value (CSV) as shown in this example, text, and binary. The default is text. 
+ `s3_uri` – A structure that contains the information identifying the Amazon S3 file. For an example of using the [aws\_commons.create\_s3\_uri](USER_PostgreSQL.S3Import.Reference.md#USER_PostgreSQL.S3Import.create_s3_uri) function to create an `s3_uri` structure, see [Overview of importing data from Amazon S3](USER_PostgreSQL.S3Import.Overview.md).

For more information about this function, see [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3).

The `aws_s3.table_import_from_s3` function returns text. To specify other kinds of files for import from an Amazon S3 bucket, see one of the following examples. 

**Note**  
Importing a 0-byte file causes an error.

**Topics**
+ [Importing an Amazon S3 file that uses a custom delimiter](#USER_PostgreSQL.S3Import.FileFormats.CustomDelimiter)
+ [Importing an Amazon S3 compressed (gzip) file](#USER_PostgreSQL.S3Import.FileFormats.gzip)
+ [Importing an encoded Amazon S3 file](#USER_PostgreSQL.S3Import.FileFormats.Encoded)

## Importing an Amazon S3 file that uses a custom delimiter
<a name="USER_PostgreSQL.S3Import.FileFormats.CustomDelimiter"></a>

The following example shows how to import a file that uses a custom delimiter. It also shows how to control where to put the data in the database table using the `column_list` parameter of the [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3) function. 

For this example, assume that the following information is organized into pipe-delimited columns in the Amazon S3 file.

```
1|foo1|bar1|elephant1
2|foo2|bar2|elephant2
3|foo3|bar3|elephant3
4|foo4|bar4|elephant4
...
```

**To import a file that uses a custom delimiter**

1. Create a table in the database for the imported data.

   ```
   postgres=> CREATE TABLE test (a text, b text, c text, d text, e text);
   ```

1. Use the following form of the [aws\_s3.table\_import\_from\_s3](USER_PostgreSQL.S3Import.Reference.md#aws_s3.table_import_from_s3) function to import data from the Amazon S3 file. 

   You can include the [aws\_commons.create\_s3\_uri](USER_PostgreSQL.S3Import.Reference.md#USER_PostgreSQL.S3Import.create_s3_uri) function call inline within the `aws_s3.table_import_from_s3` function call to specify the file. 

   ```
   postgres=> SELECT aws_s3.table_import_from_s3(
      'test',
      'a,b,d,e',
      'DELIMITER ''|''', 
      aws_commons.create_s3_uri('amzn-s3-demo-bucket', 'pipeDelimitedSampleFile', 'us-east-2')
   );
   ```

The data is now in the table in the following columns.

```
postgres=> SELECT * FROM test;
 a |  b   | c |  d   |     e
---+------+---+------+-----------
 1 | foo1 |   | bar1 | elephant1
 2 | foo2 |   | bar2 | elephant2
 3 | foo3 |   | bar3 | elephant3
 4 | foo4 |   | bar4 | elephant4
(4 rows)
```
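The effect of the `'a,b,d,e'` column list can be previewed locally before running the import. The following is a small sketch using Python's `csv` module with the same pipe-delimited sample rows (hypothetical data, not a real import): each input field lands in columns `a`, `b`, `d`, and `e`, and the skipped column `c` stays empty.

```python
import csv
import io

# The pipe-delimited sample rows shown above (hypothetical data).
raw = "1|foo1|bar1|elephant1\n2|foo2|bar2|elephant2\n"

# With the column list 'a,b,d,e', the four input fields map to
# columns a, b, d, and e; column c is left NULL.
column_list = ["a", "b", "d", "e"]
rows = []
for record in csv.reader(io.StringIO(raw), delimiter="|"):
    row = dict.fromkeys(["a", "b", "c", "d", "e"])
    row.update(zip(column_list, record))
    rows.append(row)

print(rows[0])
# {'a': '1', 'b': 'foo1', 'c': None, 'd': 'bar1', 'e': 'elephant1'}
```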

## Importing an Amazon S3 compressed (gzip) file
<a name="USER_PostgreSQL.S3Import.FileFormats.gzip"></a>

The following example shows how to import a file from Amazon S3 that is compressed with gzip. The file that you import needs to have the following Amazon S3 metadata:
+ Key: `Content-Encoding`
+ Value: `gzip`

If you upload the file using the AWS Management Console, the metadata is typically applied by the system. For information about uploading files to Amazon S3 using the AWS Management Console, the AWS CLI, or the API, see [Uploading objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/upload-objects.html) in the *Amazon Simple Storage Service User Guide*. 

For more information about Amazon S3 metadata and details about system-provided metadata, see [Editing object metadata in the Amazon S3 console](https://docs.aws.amazon.com/AmazonS3/latest/userguide/add-object-metadata.html) in the *Amazon Simple Storage Service User Guide*.

Import the gzip file into your Aurora PostgreSQL DB cluster as shown following.

```
postgres=> CREATE TABLE test_gzip(id int, a text, b text, c text, d text);
postgres=> SELECT aws_s3.table_import_from_s3(
 'test_gzip', '', '(format csv)',
 'amzn-s3-demo-bucket', 'test-data.gz', 'us-east-2'
);
```
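Before uploading and setting the `Content-Encoding: gzip` metadata, you can check locally that the file really is gzip data by round-tripping it with Python's `gzip` module. This is a minimal sketch with made-up sample rows, not the actual `test-data.gz` object.

```python
import gzip

# Compress some sample CSV rows the way a test-data.gz upload might be built.
csv_text = "1,foo1,bar1,elephant1\n2,foo2,bar2,elephant2\n"
compressed = gzip.compress(csv_text.encode("utf-8"))

# gzip streams start with the two magic bytes 0x1f 0x8b.
assert compressed[:2] == b"\x1f\x8b"

# Round-trip: decompressing recovers the original CSV exactly.
assert gzip.decompress(compressed).decode("utf-8") == csv_text
print(len(compressed), "compressed bytes")
```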

## Importing an encoded Amazon S3 file
<a name="USER_PostgreSQL.S3Import.FileFormats.Encoded"></a>

The following example shows how to import a file from Amazon S3 that has Windows-1252 encoding.

```
postgres=> SELECT aws_s3.table_import_from_s3(
 'test_table', '', 'encoding ''WIN1252''',
 aws_commons.create_s3_uri('amzn-s3-demo-bucket', 'SampleFile', 'us-east-2')
);
```
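`WIN1252` in the `COPY` options is PostgreSQL's name for the Windows-1252 character set (`cp1252` in Python). The following sketch shows why declaring the encoding matters: bytes that are valid Windows-1252 can be invalid UTF-8, so importing such a file without the `encoding` option would fail or corrupt text.

```python
# Windows-1252 maps bytes such as 0x93/0x94 to curly quotes; the same
# bytes are not valid UTF-8, which is why the encoding must be declared.
text = "\u201cquoted\u201d caf\u00e9"
data = text.encode("cp1252")

# Decoding with the declared encoding recovers the text exactly.
assert data.decode("cp1252") == text

# Decoding the same bytes as UTF-8 raises UnicodeDecodeError.
try:
    data.decode("utf-8")
    utf8_failed = False
except UnicodeDecodeError:
    utf8_failed = True
assert utf8_failed
```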

# Function reference
<a name="USER_PostgreSQL.S3Import.Reference"></a>

**Topics**
+ [aws\_s3.table\_import\_from\_s3](#aws_s3.table_import_from_s3)
+ [aws\_commons.create\_s3\_uri](#USER_PostgreSQL.S3Import.create_s3_uri)
+ [aws\_commons.create\_aws\_credentials](#USER_PostgreSQL.S3Import.create_aws_credentials)

## aws\_s3.table\_import\_from\_s3
<a name="aws_s3.table_import_from_s3"></a>

Imports Amazon S3 data into an Aurora PostgreSQL table. The `aws_s3` extension provides the `aws_s3.table_import_from_s3` function. The return value is text.

### Syntax
<a name="aws_s3.table_import_from_s3-syntax"></a>

The required parameters are `table_name`, `column_list`, and `options`. These identify the database table and specify how the data is copied into the table. 

You can also use the following parameters: 
+ The `s3_info` parameter specifies the Amazon S3 file to import. When you use this parameter, access to Amazon S3 is provided by an IAM role for the PostgreSQL DB cluster.

  ```
  aws_s3.table_import_from_s3 (
     table_name text, 
     column_list text, 
     options text, 
     s3_info aws_commons._s3_uri_1
  )
  ```
+ The `credentials` parameter specifies the credentials to access Amazon S3. When you use this parameter, you don't use an IAM role.

  ```
  aws_s3.table_import_from_s3 (
     table_name text, 
     column_list text, 
     options text, 
     s3_info aws_commons._s3_uri_1,
     credentials aws_commons._aws_credentials_1
  )
  ```

### Parameters
<a name="aws_s3.table_import_from_s3-parameters"></a>

 *table\_name*   
A required text string containing the name of the PostgreSQL database table to import the data into. 

 *column\_list*   
A required text string containing an optional list of the PostgreSQL database table columns in which to copy the data. If the string is empty, all columns of the table are used. For an example, see [Importing an Amazon S3 file that uses a custom delimiter](USER_PostgreSQL.S3Import.FileFormats.md#USER_PostgreSQL.S3Import.FileFormats.CustomDelimiter).

 *options*   
A required text string containing arguments for the PostgreSQL `COPY` command. These arguments specify how the data is to be copied into the PostgreSQL table. For more details, see the [PostgreSQL COPY documentation](https://www.postgresql.org/docs/current/sql-copy.html).

 *s3\_info*   
An `aws_commons._s3_uri_1` composite type containing the following information about the S3 object:  
+ `bucket` – The name of the Amazon S3 bucket containing the file.
+ `file_path` – The Amazon S3 file name including the path of the file.
+ `region` – The AWS Region that the file is in. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).

 *credentials*   
An `aws_commons._aws_credentials_1` composite type containing the following credentials to use for the import operation:  
+ Access key
+ Secret key
+ Session token
For information about creating an `aws_commons._aws_credentials_1` composite structure, see [aws\_commons.create\_aws\_credentials](#USER_PostgreSQL.S3Import.create_aws_credentials).

### Alternate syntax
<a name="aws_s3.table_import_from_s3-alternative-syntax"></a>

To help with testing, you can use an expanded set of parameters instead of the `s3_info` and `credentials` parameters. Following are additional syntax variations for the `aws_s3.table_import_from_s3` function: 
+ Instead of using the `s3_info` parameter to identify an Amazon S3 file, use the combination of the `bucket`, `file_path`, and `region` parameters. With this form of the function, access to Amazon S3 is provided by an IAM role on the PostgreSQL DB instance.

  ```
  aws_s3.table_import_from_s3 (
     table_name text, 
     column_list text, 
     options text, 
     bucket text, 
     file_path text, 
     region text 
  )
  ```
+ Instead of using the `credentials` parameter to specify Amazon S3 access, use the combination of the `access_key`, `secret_key`, and `session_token` parameters.

  ```
  aws_s3.table_import_from_s3 (
     table_name text, 
     column_list text, 
     options text, 
     bucket text, 
     file_path text, 
     region text, 
     access_key text, 
     secret_key text, 
     session_token text 
  )
  ```
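As an illustration of the expanded form, the following hypothetical psql call imports a CSV file by passing the bucket, file path, AWS Region, and credentials directly. The table name, bucket name, and credential values are placeholders, not values from your account.

```
psql=> SELECT aws_s3.table_import_from_s3(
   'sample_table',           -- target table
   '',                       -- empty column list means import all columns
   '(format csv)',           -- PostgreSQL COPY options
   'amzn-s3-demo-bucket',    -- Amazon S3 bucket name
   'sample-data.csv',        -- file path within the bucket
   'us-west-2',              -- AWS Region of the bucket
   'sample_access_key',      -- access key (placeholder)
   'sample_secret_key',      -- secret key (placeholder)
   ''                        -- session token (empty unless using temporary credentials)
);
```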

### Alternate parameters
<a name="aws_s3.table_import_from_s3-alternative-parameters"></a>

*bucket*  
A text string containing the name of the Amazon S3 bucket that contains the file. 

*file\_path*  
A text string containing the Amazon S3 file name including the path of the file. 

*region*  
A text string identifying the AWS Region location of the file. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).

*access\_key*  
A text string containing the access key to use for the import operation. The default is NULL.

*secret\_key*  
A text string containing the secret key to use for the import operation. The default is NULL.

*session\_token*  
(Optional) A text string containing the session token to use for the import operation. The default is NULL.

## aws\_commons.create\_s3\_uri
<a name="USER_PostgreSQL.S3Import.create_s3_uri"></a>

Creates an `aws_commons._s3_uri_1` structure to hold Amazon S3 file information. Use the results of the `aws_commons.create_s3_uri` function in the `s3_info` parameter of the [aws\_s3.table\_import\_from\_s3](#aws_s3.table_import_from_s3) function. 

### Syntax
<a name="USER_PostgreSQL.S3Import.create_s3_uri-syntax"></a>

```
aws_commons.create_s3_uri(
   bucket text,
   file_path text,
   region text
)
```

### Parameters
<a name="USER_PostgreSQL.S3Import.create_s3_uri-parameters"></a>

*bucket*  
A required text string containing the Amazon S3 bucket name for the file.

*file\_path*  
A required text string containing the Amazon S3 file name including the path of the file.

*region*  
A required text string containing the AWS Region that the file is in. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).

## aws\_commons.create\_aws\_credentials
<a name="USER_PostgreSQL.S3Import.create_aws_credentials"></a>

Sets an access key and secret key in an `aws_commons._aws_credentials_1` structure. Use the results of the `aws_commons.create_aws_credentials` function in the `credentials` parameter of the [aws\_s3.table\_import\_from\_s3](#aws_s3.table_import_from_s3) function. 

### Syntax
<a name="USER_PostgreSQL.S3Import.create_aws_credentials-syntax"></a>

```
aws_commons.create_aws_credentials(
   access_key text,
   secret_key text,
   session_token text
)
```

### Parameters
<a name="USER_PostgreSQL.S3Import.create_aws_credentials-parameters"></a>

*access\_key*  
A required text string containing the access key to use for importing an Amazon S3 file. The default is NULL.

*secret\_key*  
A required text string containing the secret key to use for importing an Amazon S3 file. The default is NULL.

*session\_token*  
An optional text string containing the session token to use for importing an Amazon S3 file. The default is NULL. If you provide an optional `session_token`, you can use temporary credentials.
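For example, the following hypothetical psql session stores temporary credentials in a variable with `\gset` and passes them to an import call, the same pattern the examples in this guide use for `s3_uri_1`. The credential values shown are placeholders.

```
psql=> SELECT aws_commons.create_aws_credentials(
   'sample_access_key', 'sample_secret_key', 'sample_session_token'
) AS creds \gset

psql=> SELECT aws_s3.table_import_from_s3(
   'sample_table', '', '(format csv)',
   aws_commons.create_s3_uri('amzn-s3-demo-bucket', 'sample-data.csv', 'us-west-2'),
   :'creds'
);
```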

# Exporting data from an Aurora PostgreSQL DB cluster to Amazon S3
<a name="postgresql-s3-export"></a>

You can query data from an Aurora PostgreSQL DB cluster and export it directly into files stored in an Amazon S3 bucket. To do this, you first install the Aurora PostgreSQL `aws_s3` extension. This extension provides you with the functions that you use to export the results of queries to Amazon S3. Following, you can find out how to install the extension and how to export data to Amazon S3. 

You can export from a provisioned or an Aurora Serverless v2 DB instance. These steps aren't supported for Aurora Serverless v1. 

**Note**  
Cross-account export to Amazon S3 isn't supported. 

All currently available versions of Aurora PostgreSQL support exporting data to Amazon Simple Storage Service. For detailed version information, see [Amazon Aurora PostgreSQL updates](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraPostgreSQLReleaseNotes/AuroraPostgreSQL.Updates.html) in the *Release Notes for Aurora PostgreSQL*.

If you don't have a bucket set up for your export, see the following topics in the *Amazon Simple Storage Service User Guide*. 
+ [Setting up Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-up-s3.html)
+ [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html)

By default, the data exported from Aurora PostgreSQL to Amazon S3 uses server-side encryption with an AWS managed key. You can alternatively use a customer managed key that you have already created. If you are using bucket encryption, the Amazon S3 bucket must be encrypted with an AWS Key Management Service (AWS KMS) key (SSE-KMS). Currently, buckets encrypted with Amazon S3 managed keys (SSE-S3) are not supported. 

**Note**  
You can save DB and DB cluster snapshot data to Amazon S3 using the AWS Management Console, AWS CLI, or Amazon RDS API. For more information, see [Exporting DB cluster snapshot data to Amazon S3](aurora-export-snapshot.md).

**Topics**
+ [Installing the aws\_s3 extension](#USER_PostgreSQL.S3Export.InstallExtension)
+ [Overview of exporting data to Amazon S3](#postgresql-s3-export-overview)
+ [Specifying the Amazon S3 file path to export to](#postgresql-s3-export-file)
+ [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md)
+ [Exporting query data using the aws\_s3.query\_export\_to\_s3 function](postgresql-s3-export-examples.md)
+ [Function reference](postgresql-s3-export-functions.md)
+ [Troubleshooting access to Amazon S3](postgresql-s3-export-troubleshoot.md)

## Installing the aws\_s3 extension
<a name="USER_PostgreSQL.S3Export.InstallExtension"></a>

Before you can use Amazon Simple Storage Service with your Aurora PostgreSQL DB cluster, you need to install the `aws_s3` extension. This extension provides functions for exporting data from the writer instance of an Aurora PostgreSQL DB cluster to an Amazon S3 bucket. It also provides functions for importing data from Amazon S3. For more information, see [Importing data from Amazon S3 into an Aurora PostgreSQL DB cluster](USER_PostgreSQL.S3Import.md). The `aws_s3` extension depends on some of the helper functions in the `aws_commons` extension, which is installed automatically when needed. 

**To install the `aws_s3` extension**

1. Use psql (or pgAdmin) to connect to the writer instance of your Aurora PostgreSQL DB cluster as a user that has `rds_superuser` privileges. If you kept the default name during the setup process, you connect as `postgres`.

   ```
   psql --host=111122223333.aws-region.rds.amazonaws.com --port=5432 --username=postgres --password
   ```

1. To install the extension, run the following command. 

   ```
   postgres=> CREATE EXTENSION aws_s3 CASCADE;
   NOTICE: installing required extension "aws_commons"
   CREATE EXTENSION
   ```

1. To verify that the extension is installed, you can use the psql `\dx` metacommand.

   ```
   postgres=> \dx
          List of installed extensions
       Name     | Version |   Schema   |                 Description
   -------------+---------+------------+---------------------------------------------
    aws_commons | 1.2     | public     | Common data types across AWS services
    aws_s3      | 1.1     | public     | AWS S3 extension for importing data from S3
    plpgsql     | 1.0     | pg_catalog | PL/pgSQL procedural language
   (3 rows)
   ```

The functions for importing data from Amazon S3 and exporting data to Amazon S3 are now available to use.
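If you want to list the functions that the `aws_s3` schema provides, you can use the psql `\df` metacommand with a schema-qualified pattern.

```
postgres=> \df aws_s3.*
```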

### Verify that your Aurora PostgreSQL version supports exports to Amazon S3
<a name="postgresql-s3-supported"></a>

You can verify that your Aurora PostgreSQL version supports export to Amazon S3 by using the `describe-db-engine-versions` command. The following example checks to see if version 10.14 can export to Amazon S3.

```
aws rds describe-db-engine-versions --region us-east-1 \
--engine aurora-postgresql --engine-version 10.14 | grep s3Export
```

If the output includes the string `"s3Export"`, then the engine supports Amazon S3 exports. Otherwise, the engine doesn't support them.

## Overview of exporting data to Amazon S3
<a name="postgresql-s3-export-overview"></a>

To export data stored in an Aurora PostgreSQL database to an Amazon S3 bucket, use the following procedure.

**To export Aurora PostgreSQL data to S3**

1. Identify an Amazon S3 file path to use for exporting data. For details about this process, see [Specifying the Amazon S3 file path to export to](#postgresql-s3-export-file).

1. Provide permission to access the Amazon S3 bucket.

   To export data to an Amazon S3 file, give the Aurora PostgreSQL DB cluster permission to access the Amazon S3 bucket that the export will use for storage. Doing this includes the following steps:

   1. Create an IAM policy that provides access to an Amazon S3 bucket that you want to export to.

   1. Create an IAM role.

   1. Attach the policy you created to the role you created.

   1. Add this IAM role to your DB cluster.

   For details about this process, see [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md).

1. Identify a database query to get the data. Export the query data by calling the `aws_s3.query_export_to_s3` function. 

   After you complete the preceding preparation tasks, use the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function to export query results to Amazon S3. For details about this process, see [Exporting query data using the aws\_s3.query\_export\_to\_s3 function](postgresql-s3-export-examples.md).

## Specifying the Amazon S3 file path to export to
<a name="postgresql-s3-export-file"></a>

Specify the following information to identify the location in Amazon S3 where you want to export data to:
+ Bucket name – A *bucket* is a container for Amazon S3 objects or files.

  For more information on storing data with Amazon S3, see [Creating a bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html) and [Working with objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/uploading-downloading-objects.html) in the *Amazon Simple Storage Service User Guide*. 
+ File path – The file path identifies where the export is stored in the Amazon S3 bucket. The file path consists of the following:
  + An optional path prefix that identifies a virtual folder path.
  + A file prefix that identifies one or more files to be stored. Larger exports are stored in multiple files, each with a maximum size of approximately 6 GB. The additional file names have the same file prefix but with `_partXX` appended. The `XX` represents 2, then 3, and so on.

  For example, a file path with an `exports` folder and a `query-1-export` file prefix is `/exports/query-1-export`.
+ AWS Region (optional) – The AWS Region where the Amazon S3 bucket is located. If you don't specify an AWS Region value, then Aurora saves your files into Amazon S3 in the same AWS Region as the exporting DB cluster.
**Note**  
Currently, the AWS Region must be the same as the region of the exporting DB cluster.

  For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).

To hold the Amazon S3 file information about where the export is to be stored, you can use the [aws\_commons.create\_s3\_uri](postgresql-s3-export-functions.md#aws_commons.create_s3_uri) function to create an `aws_commons._s3_uri_1` composite structure as follows.

```
psql=> SELECT aws_commons.create_s3_uri(
   'amzn-s3-demo-bucket',
   'sample-filepath',
   'us-west-2'
) AS s3_uri_1 \gset
```

You later provide this `s3_uri_1` value as a parameter in the call to the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function. For examples, see [Exporting query data using the aws\_s3.query\_export\_to\_s3 function](postgresql-s3-export-examples.md).

# Setting up access to an Amazon S3 bucket
<a name="postgresql-s3-export-access-bucket"></a>

To export data to Amazon S3, give your PostgreSQL DB cluster permission to access the Amazon S3 bucket that the files are to go in. 

To do this, use the following procedure.

**To give a PostgreSQL DB cluster access to Amazon S3 through an IAM role**

1. Create an IAM policy. 

   This policy provides the bucket and object permissions that allow your PostgreSQL DB cluster to access Amazon S3. 

   As part of creating this policy, take the following steps:

   1. Include in the policy the following required actions to allow the transfer of files from your PostgreSQL DB cluster to an Amazon S3 bucket: 
      + `s3:PutObject`
      + `s3:AbortMultipartUpload`

   1. Include the Amazon Resource Name (ARN) that identifies the Amazon S3 bucket and objects in the bucket. The ARN format for accessing Amazon S3 is: `arn:aws:s3:::amzn-s3-demo-bucket/*`

   For more information on creating an IAM policy for Aurora PostgreSQL, see [Creating and using an IAM policy for IAM database access](UsingWithRDS.IAMDBAuth.IAMPolicy.md). See also [Tutorial: Create and attach your first customer managed policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_managed-policies.html) in the *IAM User Guide*.

   The following AWS CLI command creates an IAM policy named `rds-s3-export-policy` with these options. It grants access to a bucket named *amzn-s3-demo-bucket*. 
**Warning**  
We recommend that you set up your database within a private VPC that has endpoint policies configured for accessing specific buckets. For more information, see [Using endpoint policies for Amazon S3](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints-s3.html#vpc-endpoints-policies-s3) in the *Amazon VPC User Guide*.  
We strongly recommend that you do not create a policy with all-resource access. This access can pose a threat to data security. If you create a policy that gives `s3:PutObject` access to all resources using `"Resource":"*"`, then a user with export privileges can export data to all buckets in your account. In addition, the user can export data to *any publicly writable bucket within your AWS Region*. 

   After you create the policy, note the Amazon Resource Name (ARN) of the policy. You need the ARN for a subsequent step when you attach the policy to an IAM role. 

   ```
   aws iam create-policy  --policy-name rds-s3-export-policy  --policy-document '{
        "Version": "2012-10-17",		 	 	 
        "Statement": [
          {
            "Sid": "s3export",
            "Action": [
              "s3:PutObject*",
              "s3:ListBucket",
              "s3:GetObject*",
              "s3:DeleteObject*",
              "s3:GetBucketLocation",
              "s3:AbortMultipartUpload"
            ],
            "Effect": "Allow",
            "Resource": [
              "arn:aws:s3:::amzn-s3-demo-bucket/*"
            ] 
          }
        ] 
      }'
   ```

1. Create an IAM role. 

   You do this so Aurora PostgreSQL can assume this IAM role on your behalf to access your Amazon S3 buckets. For more information, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

   We recommend using the `[aws:SourceArn](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourcearn)` and `[aws:SourceAccount](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceaccount)` global condition context keys in resource-based policies to limit the service's permissions to a specific resource. This is the most effective way to protect against the [confused deputy problem](https://docs.aws.amazon.com/IAM/latest/UserGuide/confused-deputy.html). 

   If you use both global condition context keys and the `aws:SourceArn` value contains the account ID, the `aws:SourceAccount` value and the account in the `aws:SourceArn` value must use the same account ID when used in the same policy statement.
   + Use `aws:SourceArn` if you want cross-service access for a single resource. 
   + Use `aws:SourceAccount` if you want to allow any resource in that account to be associated with the cross-service use.

    In the policy, be sure to use the `aws:SourceArn` global condition context key with the full ARN of the resource. The following example shows how to do so using the AWS CLI command to create a role named `rds-s3-export-role`.   
**Example**  

   For Linux, macOS, or Unix:

   ```
   aws iam create-role  \
       --role-name rds-s3-export-role  \
       --assume-role-policy-document '{
        "Version": "2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
               "Service": "rds.amazonaws.com"
             },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                   "aws:SourceAccount": "111122223333",
                   "aws:SourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:dbname"
                   }
                }
          }
        ] 
      }'
   ```

   For Windows:

   ```
   aws iam create-role  ^
       --role-name rds-s3-export-role  ^
       --assume-role-policy-document '{
        "Version": "2012-10-17",		 	 	 
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": {
               "Service": "rds.amazonaws.com"
             },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                   "aws:SourceAccount": "111122223333",
                   "aws:SourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:dbname"
                   }
                }
          }
        ] 
      }'
   ```

1. Attach the IAM policy that you created to the IAM role that you created.

   The following AWS CLI command attaches the policy created earlier to the role named `rds-s3-export-role`. Replace `your-policy-arn` with the policy ARN that you noted in an earlier step. 

   ```
   aws iam attach-role-policy  --policy-arn your-policy-arn  --role-name rds-s3-export-role  
   ```

1. Add the IAM role to the DB cluster. You do so by using the AWS Management Console or AWS CLI, as described following.

## Console
<a name="collapsible-section-1"></a>

**To add an IAM role for a PostgreSQL DB cluster using the console**

1. Sign in to the AWS Management Console and open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. Choose the PostgreSQL DB cluster name to display its details.

1. On the **Connectivity & security** tab, in the **Manage IAM roles** section, choose the role to add under **Add IAM roles to this instance**. 

1. Under **Feature**, choose **s3Export**.

1. Choose **Add role**.

## AWS CLI
<a name="collapsible-section-2"></a>

**To add an IAM role for a PostgreSQL DB cluster using the CLI**
+ Use the following command to add the role to the PostgreSQL DB cluster named `my-db-cluster`. Replace *`your-role-arn`* with the role ARN that you noted in a previous step. Use `s3Export` for the value of the `--feature-name` option.   
**Example**  

  For Linux, macOS, or Unix:

  ```
  aws rds add-role-to-db-cluster \
     --db-cluster-identifier my-db-cluster \
     --feature-name s3Export \
     --role-arn your-role-arn   \
     --region your-region
  ```

  For Windows:

  ```
  aws rds add-role-to-db-cluster ^
     --db-cluster-identifier my-db-cluster ^
     --feature-name s3Export ^
     --role-arn your-role-arn ^
     --region your-region
  ```

# Exporting query data using the aws\_s3.query\_export\_to\_s3 function
<a name="postgresql-s3-export-examples"></a>

Export your PostgreSQL data to Amazon S3 by calling the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function. 

**Topics**
+ [Prerequisites](#postgresql-s3-export-examples-prerequisites)
+ [Calling aws\_s3.query\_export\_to\_s3](#postgresql-s3-export-examples-basic)
+ [Exporting to a CSV file that uses a custom delimiter](#postgresql-s3-export-examples-custom-delimiter)
+ [Exporting to a binary file with encoding](#postgresql-s3-export-examples-encoded)

## Prerequisites
<a name="postgresql-s3-export-examples-prerequisites"></a>

Before you use the `aws_s3.query_export_to_s3` function, be sure to complete the following prerequisites:
+ Install the required PostgreSQL extensions as described in [Overview of exporting data to Amazon S3](postgresql-s3-export.md#postgresql-s3-export-overview).
+ Determine where to export your data to Amazon S3 as described in [Specifying the Amazon S3 file path to export to](postgresql-s3-export.md#postgresql-s3-export-file).
+ Make sure that the DB cluster has export access to Amazon S3 as described in [Setting up access to an Amazon S3 bucket](postgresql-s3-export-access-bucket.md).

The examples following use a database table called `sample_table`. These examples export the data into a bucket called *amzn-s3-demo-bucket*. The example table and data are created with the following SQL statements in psql.

```
psql=> CREATE TABLE sample_table (bid bigint PRIMARY KEY, name varchar(80));
psql=> INSERT INTO sample_table (bid,name) VALUES (1, 'Monday'), (2,'Tuesday'), (3, 'Wednesday');
```

## Calling aws\_s3.query\_export\_to\_s3
<a name="postgresql-s3-export-examples-basic"></a>

The following shows the basic ways of calling the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function. 

These examples use the variable `s3_uri_1` to identify a structure that contains the information identifying the Amazon S3 file. Use the [aws\_commons.create\_s3\_uri](postgresql-s3-export-functions.md#aws_commons.create_s3_uri) function to create the structure.

```
psql=> SELECT aws_commons.create_s3_uri(
   'amzn-s3-demo-bucket',
   'sample-filepath',
   'us-west-2'
) AS s3_uri_1 \gset
```

Although the parameters vary for the following two `aws_s3.query_export_to_s3` function calls, the results are the same for these examples. All rows of the `sample_table` table are exported into a bucket called *amzn-s3-demo-bucket*. 

```
psql=> SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1');

psql=> SELECT * FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1', options :='format text');
```

The parameters are described as follows:
+ `'SELECT * FROM sample_table'` – The first parameter is a required text string containing an SQL query. The PostgreSQL engine runs this query. The results of the query are copied to the S3 bucket identified in other parameters.
+ `:'s3_uri_1'` – This parameter is a structure that identifies the Amazon S3 file. This example uses a variable to identify the previously created structure. You can instead create the structure by including the `aws_commons.create_s3_uri` function call inline within the `aws_s3.query_export_to_s3` function call as follows.

  ```
  SELECT * from aws_s3.query_export_to_s3('select * from sample_table', 
     aws_commons.create_s3_uri('amzn-s3-demo-bucket', 'sample-filepath', 'us-west-2') 
  );
  ```
+ `options :='format text'` – The `options` parameter is an optional text string containing PostgreSQL `COPY` arguments. The copy process uses the arguments and format of the [PostgreSQL COPY](https://www.postgresql.org/docs/current/sql-copy.html) command. 

If the file specified doesn't exist in the Amazon S3 bucket, it's created. If the file already exists, it's overwritten. The syntax for accessing the exported data in Amazon S3 is the following.

```
s3-region://bucket-name[/path-prefix]/file-prefix
```

Larger exports are stored in multiple files, each with a maximum size of approximately 6 GB. The additional file names have the same file prefix but with `_partXX` appended. The `XX` represents 2, then 3, and so on. For example, suppose that you specify the path where you store data files as the following.

```
s3-us-west-2://amzn-s3-demo-bucket/my-prefix
```

If the export has to create three data files, the Amazon S3 bucket contains the following data files.

```
s3-us-west-2://amzn-s3-demo-bucket/my-prefix
s3-us-west-2://amzn-s3-demo-bucket/my-prefix_part2
s3-us-west-2://amzn-s3-demo-bucket/my-prefix_part3
```

For the full reference for this function and additional ways to call it, see [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3). For more about accessing files in Amazon S3, see [View an object](https://docs.aws.amazon.com/AmazonS3/latest/userguide/OpeningAnObject.html) in the *Amazon Simple Storage Service User Guide*. 

## Exporting to a CSV file that uses a custom delimiter
<a name="postgresql-s3-export-examples-custom-delimiter"></a>

The following example shows how to call the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function to export data to a file that uses a custom delimiter. The example uses arguments of the [PostgreSQL COPY](https://www.postgresql.org/docs/current/sql-copy.html) command to specify the comma-separated value (CSV) format and a colon (:) delimiter.

```
SELECT * from aws_s3.query_export_to_s3('select * from basic_test', :'s3_uri_1', options :='format csv, delimiter $$:$$');
```

## Exporting to a binary file with encoding
<a name="postgresql-s3-export-examples-encoded"></a>

The following example shows how to call the [aws\_s3.query\_export\_to\_s3](postgresql-s3-export-functions.md#aws_s3.export_query_to_s3) function to export data to a binary file that has Windows-1253 encoding.

```
SELECT * from aws_s3.query_export_to_s3('select * from basic_test', :'s3_uri_1', options :='format binary, encoding WIN1253');
```

# Function reference
<a name="postgresql-s3-export-functions"></a>

**Topics**
+ [aws\_s3.query\_export\_to\_s3](#aws_s3.export_query_to_s3)
+ [aws\_commons.create\_s3\_uri](#aws_commons.create_s3_uri)

## aws\_s3.query\_export\_to\_s3
<a name="aws_s3.export_query_to_s3"></a>

Exports a PostgreSQL query result to an Amazon S3 bucket. The `aws_s3` extension provides the `aws_s3.query_export_to_s3` function. 

The two required parameters are `query` and `s3_info`. These define the query to be exported and identify the Amazon S3 bucket to export to. An optional parameter called `options` provides for defining various export parameters. For examples of using the `aws_s3.query_export_to_s3` function, see [Exporting query data using the aws\_s3.query\_export\_to\_s3 function](postgresql-s3-export-examples.md).

**Syntax**

```
aws_s3.query_export_to_s3(
    query text,    
    s3_info aws_commons._s3_uri_1,    
    options text,
    kms_key text
)
```

**Input parameters**

*query*  
A required text string containing an SQL query that the PostgreSQL engine runs. The results of this query are copied to an S3 bucket identified in the `s3_info` parameter.

*s3\_info*  
An `aws_commons._s3_uri_1` composite type containing the following information about the S3 object:  
+ `bucket` – The name of the Amazon S3 bucket to contain the file.
+ `file_path` – The Amazon S3 file name and path.
+ `region` – The AWS Region that the bucket is in. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md). 

  Currently, this value must be the same AWS Region as that of the exporting DB cluster. The default is the AWS Region of the exporting DB cluster. 
To create an `aws_commons._s3_uri_1` composite structure, see the [aws\_commons.create\_s3\_uri](#aws_commons.create_s3_uri) function.

*options*  
An optional text string containing arguments for the PostgreSQL `COPY` command. These arguments specify how the data is to be copied when exported. For more details, see the [PostgreSQL COPY documentation](https://www.postgresql.org/docs/current/sql-copy.html).

*kms\_key*  
An optional text string containing the customer managed KMS key of the S3 bucket to export the data to.

### Alternate input parameters
<a name="aws_s3.export_query_to_s3-alternate-parameters"></a>

To help with testing, you can use an expanded set of parameters instead of the `s3_info` parameter. Following are additional syntax variations for the `aws_s3.query_export_to_s3` function. 

Instead of using the `s3_info` parameter to identify an Amazon S3 file, use the combination of the `bucket`, `file_path`, and `region` parameters.

```
aws_s3.query_export_to_s3(
    query text,    
    bucket text,    
    file_path text,    
    region text,    
    options text,
    kms_key text
)
```

*query*  
A required text string containing an SQL query that the PostgreSQL engine runs. The results of this query are copied to the S3 bucket identified by the `bucket`, `file_path`, and `region` parameters.

*bucket*  
A required text string containing the name of the Amazon S3 bucket that contains the file.

*file\_path*  
A required text string containing the Amazon S3 file name including the path of the file.

*region*  
An optional text string containing the AWS Region that the bucket is in. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).  
Currently, this value must be the same AWS Region as that of the exporting DB cluster. The default is the AWS Region of the exporting DB cluster. 

*options*  
An optional text string containing arguments for the PostgreSQL `COPY` command. These arguments specify how the data is to be copied when exported. For more details, see the [PostgreSQL COPY documentation](https://www.postgresql.org/docs/current/sql-copy.html).

*kms\_key*  
An optional text string containing the customer managed KMS key of the S3 bucket to export the data to.
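As an illustration of this expanded form with a customer managed key, the following hypothetical call passes all parameters positionally. The bucket name and key ARN shown are placeholders; replace them with values from your account.

```
SELECT * FROM aws_s3.query_export_to_s3(
   'SELECT * FROM sample_table',                            -- query to export
   'amzn-s3-demo-bucket',                                   -- S3 bucket
   'sample-filepath',                                       -- file path in the bucket
   'us-west-2',                                             -- AWS Region of the bucket
   'format text',                                           -- COPY options
   'arn:aws:kms:us-west-2:111122223333:key/your-kms-key-id' -- customer managed KMS key (placeholder)
);
```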

### Output parameters
<a name="aws_s3.export_query_to_s3-output-parameters"></a>

```
aws_s3.query_export_to_s3(
    OUT rows_uploaded bigint,
    OUT files_uploaded bigint,
    OUT bytes_uploaded bigint
)
```

*rows\_uploaded*  
The number of table rows that were successfully uploaded to Amazon S3 for the given query.

*files\_uploaded*  
The number of files uploaded to Amazon S3. Files are created in sizes of approximately 6 GB. Each additional file created has `_partXX` appended to the name. The `XX` represents 2, then 3, and so on as needed.

*bytes\_uploaded*  
The total number of bytes uploaded to Amazon S3.
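For example, a call such as the following returns a single row containing these three columns. For the three-row `sample_table` shown earlier, you would expect `rows_uploaded` of 3 and `files_uploaded` of 1, with `bytes_uploaded` depending on the data.

```
psql=> SELECT rows_uploaded, files_uploaded, bytes_uploaded
       FROM aws_s3.query_export_to_s3('SELECT * FROM sample_table', :'s3_uri_1');
```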

### Examples
<a name="aws_s3.export_query_to_s3-examples"></a>

```
psql=> SELECT * from aws_s3.query_export_to_s3('select * from sample_table', 'amzn-s3-demo-bucket', 'sample-filepath');
psql=> SELECT * from aws_s3.query_export_to_s3('select * from sample_table', 'amzn-s3-demo-bucket', 'sample-filepath','us-west-2');
psql=> SELECT * from aws_s3.query_export_to_s3('select * from sample_table', 'amzn-s3-demo-bucket', 'sample-filepath','us-west-2','format text');
```

## aws\_commons.create\_s3\_uri
<a name="aws_commons.create_s3_uri"></a>

Creates an `aws_commons._s3_uri_1` structure to hold Amazon S3 file information. You use the results of the `aws_commons.create_s3_uri` function in the `s3_info` parameter of the [aws\_s3.query\_export\_to\_s3](#aws_s3.export_query_to_s3) function. For an example of using the `aws_commons.create_s3_uri` function, see [Specifying the Amazon S3 file path to export to](postgresql-s3-export.md#postgresql-s3-export-file).

**Syntax**

```
aws_commons.create_s3_uri(
   bucket text,
   file_path text,
   region text
)
```

**Input parameters**

*bucket*  
A required text string containing the Amazon S3 bucket name for the file.

*file\_path*  
A required text string containing the Amazon S3 file name including the path of the file.

*region*  
A required text string containing the AWS Region that the file is in. For a listing of AWS Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).
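
As a sketch combining these pieces, you can capture the structure in a psql variable with `\gset` and pass it to the export function's `s3_info` parameter. The bucket, file path, and table names here are placeholders matching the earlier examples.

```
psql=> SELECT aws_commons.create_s3_uri(
   'amzn-s3-demo-bucket',
   'sample-filepath',
   'us-west-2'
) AS s3_uri_1 \gset

psql=> SELECT * FROM aws_s3.query_export_to_s3('select * from sample_table', :'s3_uri_1');
```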

# Troubleshooting access to Amazon S3
<a name="postgresql-s3-export-troubleshoot"></a>

If you encounter connection problems when attempting to export data to Amazon S3, first confirm that the outbound access rules for the VPC security group associated with your DB instance permit network connectivity. Specifically, the security group must have a rule that allows the DB instance to send TCP traffic to port 443 and to any IPv4 addresses (0.0.0.0/0). For more information, see [Provide access to the DB cluster in the VPC by creating a security group](CHAP_SettingUp_Aurora.md#CHAP_SettingUp_Aurora.SecurityGroup).

See also the following for recommendations:
+ [Troubleshooting Amazon Aurora identity and access](security_iam_troubleshoot.md)
+ [Troubleshooting Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/troubleshooting.html) in the *Amazon Simple Storage Service User Guide*
+ [Troubleshooting Amazon S3 and IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-s3.html) in the *IAM User Guide*

# Invoking an AWS Lambda function from an Aurora PostgreSQL DB cluster
<a name="PostgreSQL-Lambda"></a>

AWS Lambda is an event-driven compute service that lets you run code without provisioning or managing servers. It's available for use with many AWS services, including Aurora PostgreSQL. For example, you can use Lambda functions to process event notifications from a database, or to load data from files whenever a new file is uploaded to Amazon S3. To learn more about Lambda, see [What is AWS Lambda?](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) in the *AWS Lambda Developer Guide*.

**Note**  
Invoking AWS Lambda functions is supported in Aurora PostgreSQL 11.9 and higher (including Aurora Serverless v2). 

Setting up Aurora PostgreSQL to work with Lambda functions is a multi-step process involving AWS Lambda, IAM, your VPC, and your Aurora PostgreSQL DB cluster. Following, you can find summaries of the necessary steps. 

For more information about Lambda functions, see [Getting started with Lambda](https://docs.aws.amazon.com/lambda/latest/dg/getting-started.html) and [AWS Lambda foundations](https://docs.aws.amazon.com/lambda/latest/dg/lambda-foundation.html) in the *AWS Lambda Developer Guide*.

**Topics**
+ [Step 1: Configure your Aurora PostgreSQL DB cluster for outbound connections to AWS Lambda](#PostgreSQL-Lambda-network)
+ [Step 2: Configure IAM for your Aurora PostgreSQL DB cluster and AWS Lambda](#PostgreSQL-Lambda-access)
+ [Step 3: Install the `aws_lambda` extension for an Aurora PostgreSQL DB cluster](#PostgreSQL-Lambda-install-extension)
+ [Step 4: Use Lambda helper functions with your Aurora PostgreSQL DB cluster (Optional)](#PostgreSQL-Lambda-specify-function)
+ [Step 5: Invoke a Lambda function from your Aurora PostgreSQL DB cluster](#PostgreSQL-Lambda-invoke)
+ [Step 6: Grant other users permission to invoke Lambda functions](#PostgreSQL-Lambda-grant-users-permissions)
+ [Examples: Invoking Lambda functions from your Aurora PostgreSQL DB cluster](PostgreSQL-Lambda-examples.md)
+ [Lambda function error messages](PostgreSQL-Lambda-errors.md)
+ [AWS Lambda function and parameter reference](PostgreSQL-Lambda-functions.md)

## Step 1: Configure your Aurora PostgreSQL DB cluster for outbound connections to AWS Lambda
<a name="PostgreSQL-Lambda-network"></a>

Lambda functions always run inside an Amazon VPC that's owned by the AWS Lambda service. Lambda applies network access and security rules to this VPC and it maintains and monitors the VPC automatically. Your Aurora PostgreSQL DB cluster sends network traffic to the Lambda service's VPC. How you configure this depends on whether your Aurora DB cluster's primary DB instance is public or private.
+ **Public Aurora PostgreSQL DB cluster** – A DB cluster's primary DB instance is public if it's located in a public subnet in your VPC and its `PubliclyAccessible` property is `true`. To find the value of this property, you can use the [describe-db-instances](https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-instances.html) AWS CLI command. Or, you can use the AWS Management Console to open the **Connectivity & security** tab and check that **Publicly accessible** is **Yes**. To verify that the instance is in a public subnet of your VPC, you can use the AWS Management Console or the AWS CLI. 

  To set up access to Lambda, you use the AWS Management Console or the AWS CLI to create an outbound rule on your VPC's security group. The outbound rule specifies that TCP can use port 443 to send packets to any IPv4 addresses (0.0.0.0/0).
+ **Private Aurora PostgreSQL DB cluster** – In this case, the instance's `PubliclyAccessible` property is `false`, or the instance is in a private subnet. To allow the instance to work with Lambda, you can use a network address translation (NAT) gateway. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html). Or, you can configure your VPC with a VPC endpoint for Lambda. For more information, see [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) in the *Amazon VPC User Guide*. The endpoint responds to calls made by your Aurora PostgreSQL DB cluster to your Lambda functions. 

Your VPC can now interact with the AWS Lambda VPC at the network level. Next, you configure the permissions using IAM. 
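
For the public case, the outbound rule described above can be created with the AWS CLI along the following lines; the security group ID is a placeholder for your own.

```
aws ec2 authorize-security-group-egress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 0.0.0.0/0
```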

## Step 2: Configure IAM for your Aurora PostgreSQL DB cluster and AWS Lambda
<a name="PostgreSQL-Lambda-access"></a>

Invoking Lambda functions from your Aurora PostgreSQL DB cluster requires certain privileges. To configure the necessary privileges, we recommend that you create an IAM policy that allows invoking Lambda functions, assign that policy to a role, and then apply the role to your DB cluster. This approach gives the DB cluster privileges to invoke the specified Lambda function on your behalf. The following steps show you how to do this using the AWS CLI.

**To configure IAM permissions for using your cluster with Lambda**

1. Use the [create-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-policy.html) AWS CLI command to create an IAM policy that allows your Aurora PostgreSQL DB cluster to invoke the specified Lambda function. (The statement ID (Sid) is an optional description for your policy statement and has no effect on usage.) This policy gives your Aurora DB cluster the minimum permissions needed to invoke the specified Lambda function. 

   ```
   aws iam create-policy  --policy-name rds-lambda-policy --policy-document '{
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
           "Sid": "AllowAccessToExampleFunction",
           "Effect": "Allow",
           "Action": "lambda:InvokeFunction",
           "Resource": "arn:aws:lambda:aws-region:444455556666:function:my-function"
           }
       ]
   }'
   ```

   Alternatively, you can use the predefined `AWSLambdaRole` policy that allows you to invoke any of your Lambda functions. For more information, see [Identity-based IAM policies for Lambda](https://docs.aws.amazon.com/lambda/latest/dg/access-control-identity-based.html#access-policy-examples-aws-managed) in the *AWS Lambda Developer Guide*.

1. Use the [create-role](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/create-role.html) AWS CLI command to create an IAM role that the policy can assume at runtime.

   ```
   aws iam create-role  --role-name rds-lambda-role --assume-role-policy-document '{
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
           "Effect": "Allow",
           "Principal": {
               "Service": "rds.amazonaws.com"
           },
           "Action": "sts:AssumeRole"
           }
       ]
   }'
   ```

1. Apply the policy to the role by using the [attach-role-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/attach-role-policy.html) AWS CLI command.

   ```
   aws iam attach-role-policy \
       --policy-arn arn:aws:iam::444455556666:policy/rds-lambda-policy \
       --role-name rds-lambda-role --region aws-region
   ```

1. Apply the role to your Aurora PostgreSQL DB cluster by using the [add-role-to-db-cluster](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/rds/add-role-to-db-cluster.html) AWS CLI command. This last step allows your DB cluster's database users to invoke Lambda functions. 

   ```
   aws rds add-role-to-db-cluster \
          --db-cluster-identifier my-cluster-name \
          --feature-name Lambda \
          --role-arn  arn:aws:iam::444455556666:role/rds-lambda-role   \
          --region aws-region
   ```
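
   To confirm the association, you can check the cluster's attached roles with the AWS CLI; the `Lambda` feature name should appear in the output. The cluster name here is a placeholder.

   ```
   aws rds describe-db-clusters \
       --db-cluster-identifier my-cluster-name \
       --query 'DBClusters[0].AssociatedRoles'
   ```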

With the VPC and the IAM configurations complete, you can now install the `aws_lambda` extension. (Note that you can install the extension at any time, but until you set up the correct VPC support and IAM privileges, the `aws_lambda` extension adds nothing to your Aurora PostgreSQL DB cluster's capabilities.)

## Step 3: Install the `aws_lambda` extension for an Aurora PostgreSQL DB cluster
<a name="PostgreSQL-Lambda-install-extension"></a>

To use AWS Lambda with your Aurora PostgreSQL DB cluster, add the `aws_lambda` PostgreSQL extension to your Aurora PostgreSQL DB cluster. This extension provides your Aurora PostgreSQL DB cluster with the ability to call Lambda functions from PostgreSQL. 

**To install the `aws_lambda` extension in your Aurora PostgreSQL DB cluster**

Use the PostgreSQL `psql` command line or the pgAdmin tool to connect to your Aurora PostgreSQL DB cluster.

1. Connect to your Aurora PostgreSQL DB cluster instance as a user with `rds_superuser` privileges. The default `postgres` user is shown in the example.

   ```
   psql -h cluster-instance.444455556666.aws-region.rds.amazonaws.com -U postgres -p 5432
   ```

1. Install the `aws_lambda` extension. The `aws_commons` extension is also required. It provides helper functions to `aws_lambda` and many other Aurora PostgreSQL extensions. If it's not already on your Aurora PostgreSQL DB cluster, it's installed with `aws_lambda` as shown following. 

   ```
   CREATE EXTENSION IF NOT EXISTS aws_lambda CASCADE;
   NOTICE:  installing required extension "aws_commons"
   CREATE EXTENSION
   ```

The `aws_lambda` extension is installed in your Aurora PostgreSQL DB cluster's primary DB instance. You can now create convenience structures for invoking your Lambda functions. 
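
If you want to confirm the installation, you can query the PostgreSQL `pg_extension` catalog; both extensions should be listed.

```
SELECT extname, extversion FROM pg_extension
WHERE extname IN ('aws_lambda', 'aws_commons');
```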

## Step 4: Use Lambda helper functions with your Aurora PostgreSQL DB cluster (Optional)
<a name="PostgreSQL-Lambda-specify-function"></a>

You can use the helper functions in the `aws_commons` extension to prepare entities that you can more easily invoke from PostgreSQL. To do this, you need to have the following information about your Lambda functions:
+ **Function name** – The name, Amazon Resource Name (ARN), version, or alias of the Lambda function. The IAM policy created in [Step 2: Configure IAM for your Aurora PostgreSQL DB cluster and AWS Lambda](#PostgreSQL-Lambda-access) requires the ARN, so we recommend that you use your function's ARN.
+ **AWS Region** – (Optional) The AWS Region where the Lambda function is located if it's not in the same Region as your Aurora PostgreSQL DB cluster.

To hold the Lambda function name information, you use the [aws\_commons.create\_lambda\_function\_arn](PostgreSQL-Lambda-functions.md#aws_commons.create_lambda_function_arn) function. This helper function creates an `aws_commons._lambda_function_arn_1` composite structure with the details needed by the invoke function. Following, you can find three alternative approaches to setting up this composite structure.

```
SELECT aws_commons.create_lambda_function_arn(
   'my-function',
   'aws-region'
) AS aws_lambda_arn_1 \gset
```

```
SELECT aws_commons.create_lambda_function_arn(
   '111122223333:function:my-function',
   'aws-region'
) AS lambda_partial_arn_1 \gset
```

```
SELECT aws_commons.create_lambda_function_arn(
   'arn:aws:lambda:aws-region:111122223333:function:my-function'
) AS lambda_arn_1 \gset
```

Any of these values can be used in calls to the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) function. For examples, see [Step 5: Invoke a Lambda function from your Aurora PostgreSQL DB cluster](#PostgreSQL-Lambda-invoke).

## Step 5: Invoke a Lambda function from your Aurora PostgreSQL DB cluster
<a name="PostgreSQL-Lambda-invoke"></a>

The `aws_lambda.invoke` function behaves synchronously or asynchronously, depending on the `invocation_type`. The two alternatives for this parameter are `RequestResponse` (the default) and `Event`, as follows. 
+ **`RequestResponse`** – This invocation type is *synchronous*. It's the default behavior when the call is made without specifying an invocation type. The response payload includes the results of the `aws_lambda.invoke` function. Use this invocation type when your workflow requires receiving results from the Lambda function before proceeding. 
+ **`Event`** – This invocation type is *asynchronous*. The response doesn't include a payload containing results. Use this invocation type when your workflow doesn't need a result from the Lambda function to continue processing.

As a simple test of your setup, you can connect to your DB instance using `psql` and invoke an example function from the command line. Suppose that you have one of the basic functions set up on your Lambda service, such as the simple Python function shown in the following screenshot.

![\[Example Lambda function shown in the AWS CLI for AWS Lambda\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/lambda_simple_function.png)


**To invoke an example function**

1. Connect to your primary DB instance using `psql` or pgAdmin.

   ```
   psql -h cluster.444455556666.aws-region.rds.amazonaws.com -U postgres -p 5432
   ```

1. Invoke the function using its ARN.

   ```
   SELECT * from aws_lambda.invoke(aws_commons.create_lambda_function_arn('arn:aws:lambda:aws-region:444455556666:function:simple'), '{"body": "Hello from Postgres!"}'::json);
   ```

   The response looks as follows.

   ```
   status_code |                        payload                        | executed_version | log_result
   -------------+-------------------------------------------------------+------------------+------------
            200 | {"statusCode": 200, "body": "\"Hello from Lambda!\""} | $LATEST          |
   (1 row)
   ```

If your invocation attempt doesn't succeed, see [Lambda function error messages](PostgreSQL-Lambda-errors.md). 

## Step 6: Grant other users permission to invoke Lambda functions
<a name="PostgreSQL-Lambda-grant-users-permissions"></a>

At this point in the procedures, only you as `rds_superuser` can invoke your Lambda functions. To allow other users to invoke any functions that you create, you need to grant them permission. 

**To grant others permission to invoke Lambda functions**

1. Connect to your primary DB instance using `psql` or pgAdmin.

   ```
   psql -h cluster.444455556666.aws-region.rds.amazonaws.com -U postgres -p 5432
   ```

1. Run the following SQL commands:

   ```
   postgres=>  GRANT USAGE ON SCHEMA aws_lambda TO db_username;
   GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA aws_lambda TO db_username;
   ```
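
   You can check the result afterward with PostgreSQL's privilege inspection functions, for example (`db_username` as above):

   ```
   SELECT has_schema_privilege('db_username', 'aws_lambda', 'USAGE');
   ```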

# Examples: Invoking Lambda functions from your Aurora PostgreSQL DB cluster
<a name="PostgreSQL-Lambda-examples"></a>

Following, you can find several examples of calling the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) function. Most of the examples use the composite structure `aws_lambda_arn_1` that you create in [Step 4: Use Lambda helper functions with your Aurora PostgreSQL DB cluster (Optional)](PostgreSQL-Lambda.md#PostgreSQL-Lambda-specify-function) to simplify passing the function details. For an example of asynchronous invocation, see [Example: Asynchronous (Event) invocation of Lambda functions](#PostgreSQL-Lambda-Event). All the other examples listed use synchronous invocation. 

To learn more about Lambda invocation types, see [Invoking Lambda functions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html) in the *AWS Lambda Developer Guide*. For more information about `aws_lambda_arn_1`, see [aws\_commons.create\_lambda\_function\_arn](PostgreSQL-Lambda-functions.md#aws_commons.create_lambda_function_arn). 

**Topics**
+ [Example: Synchronous (RequestResponse) invocation of Lambda functions](#PostgreSQL-Lambda-RequestResponse)
+ [Example: Asynchronous (Event) invocation of Lambda functions](#PostgreSQL-Lambda-Event)
+ [Example: Capturing the Lambda execution log in a function response](#PostgreSQL-Lambda-log-response)
+ [Example: Including client context in a Lambda function](#PostgreSQL-Lambda-client-context)
+ [Example: Invoking a specific version of a Lambda function](#PostgreSQL-Lambda-function-version)

## Example: Synchronous (RequestResponse) invocation of Lambda functions
<a name="PostgreSQL-Lambda-RequestResponse"></a>

Following are two examples of a synchronous Lambda function invocation. The results of these `aws_lambda.invoke` function calls are the same.

```
SELECT * FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json);
```

```
SELECT * FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json, 'RequestResponse');
```

The parameters are described as follows:
+ `:'aws_lambda_arn_1'` – This parameter identifies the composite structure created in [Step 4: Use Lambda helper functions with your Aurora PostgreSQL DB cluster (Optional)](PostgreSQL-Lambda.md#PostgreSQL-Lambda-specify-function), with the `aws_commons.create_lambda_function_arn` helper function. You can also create this structure inline within your `aws_lambda.invoke` call as follows. 

  ```
  SELECT * FROM aws_lambda.invoke(aws_commons.create_lambda_function_arn('my-function', 'aws-region'),
  '{"body": "Hello from Postgres!"}'::json
  );
  ```
+ `'{"body": "Hello from Postgres!"}'::json` – The JSON payload to pass to the Lambda function.
+ `'RequestResponse'` – The Lambda invocation type.

## Example: Asynchronous (Event) invocation of Lambda functions
<a name="PostgreSQL-Lambda-Event"></a>

Following is an example of an asynchronous Lambda function invocation. The `Event` invocation type schedules the Lambda function invocation with the specified input payload and returns immediately. Use the `Event` invocation type in certain workflows that don't depend on the results of the Lambda function.

```
SELECT * FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json, 'Event');
```

## Example: Capturing the Lambda execution log in a function response
<a name="PostgreSQL-Lambda-log-response"></a>

You can include the last 4 KB of the execution log in the function response by using the `log_type` parameter in your `aws_lambda.invoke` function call. By default, this parameter is set to `None`, but you can specify `Tail` to capture the results of the Lambda execution log in the response, as shown following.

```
SELECT *, convert_from(decode(log_result, 'base64'), 'utf-8') as log FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json, 'RequestResponse', 'Tail');
```

Set the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) function's `log_type` parameter to `Tail` to include the execution log in the response. The default value for the `log_type` parameter is `None`.

The `log_result` that's returned is a `base64` encoded string. You can decode the contents using a combination of the `decode` and `convert_from` PostgreSQL functions.

For more information about `log_type`, see [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke).

## Example: Including client context in a Lambda function
<a name="PostgreSQL-Lambda-client-context"></a>

The `aws_lambda.invoke` function has a `context` parameter that you can use to pass information separate from the payload, as shown following. 

```
SELECT *, convert_from(decode(log_result, 'base64'), 'utf-8') as log FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json, 'RequestResponse', 'Tail', '{"custom": {"key1": "value1"}}'::json);
```

To include client context, use a JSON object for the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) function's `context` parameter.

For more information about the `context` parameter, see the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) reference. 

## Example: Invoking a specific version of a Lambda function
<a name="PostgreSQL-Lambda-function-version"></a>

You can specify a particular version of a Lambda function by including the `qualifier` parameter with the `aws_lambda.invoke` call. Following, you can find an example that does this using `'custom_version'` as an alias for the version.

```
SELECT * FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json, 'RequestResponse', 'None', NULL, 'custom_version');
```

You can also supply a Lambda function qualifier with the function name details instead, as follows.

```
SELECT * FROM aws_lambda.invoke(aws_commons.create_lambda_function_arn('my-function:custom_version', 'us-west-2'),
'{"body": "Hello from Postgres!"}'::json);
```

For more information about `qualifier` and other parameters, see the [aws\_lambda.invoke](PostgreSQL-Lambda-functions.md#aws_lambda.invoke) reference.

# Lambda function error messages
<a name="PostgreSQL-Lambda-errors"></a>

In the following list you can find information about error messages, with possible causes and solutions.
+ **VPC configuration issues**

  VPC configuration issues can raise the following error messages when trying to connect: 

  ```
  ERROR:  invoke API failed
  DETAIL: AWS Lambda client returned 'Unable to connect to endpoint'.
  CONTEXT:  SQL function "invoke" statement 1
  ```

  A common cause for this error is an improperly configured VPC security group. Make sure that your VPC security group has an outbound rule that allows TCP traffic on port 443, so that your VPC can connect to the Lambda VPC.
+ **Lack of permissions needed to invoke Lambda functions**

  If you see either of the following error messages, the user (role) invoking the function doesn't have proper permissions.

  ```
  ERROR:  permission denied for schema aws_lambda
  ```

  ```
  ERROR:  permission denied for function invoke
  ```

  A user (role) must be given specific grants to invoke Lambda functions. For more information, see [Step 6: Grant other users permission to invoke Lambda functions](PostgreSQL-Lambda.md#PostgreSQL-Lambda-grant-users-permissions). 
+ **Improper handling of errors in your Lambda functions**

  If a Lambda function throws an exception during request processing, `aws_lambda.invoke` fails with a PostgreSQL error such as the following.

  ```
  SELECT * FROM aws_lambda.invoke(:'aws_lambda_arn_1', '{"body": "Hello from Postgres!"}'::json);
  ERROR:  lambda invocation failed
  DETAIL:  "arn:aws:lambda:us-west-2:555555555555:function:my-function" returned error "Unhandled", details: "<Error details string>".
  ```

  Be sure to handle errors in your Lambda functions or in your PostgreSQL application.

# AWS Lambda function and parameter reference
<a name="PostgreSQL-Lambda-functions"></a>

Following is the reference for the functions and parameters to use for invoking Lambda with Aurora PostgreSQL.

**Topics**
+ [aws\_lambda.invoke](#aws_lambda.invoke)
+ [aws\_commons.create\_lambda\_function\_arn](#aws_commons.create_lambda_function_arn)
+ [aws\_lambda parameters](#aws_lambda.parameters)

## aws\_lambda.invoke
<a name="aws_lambda.invoke"></a>

Runs a Lambda function for an Aurora PostgreSQL DB cluster.

For more details about invoking Lambda functions, see also [Invoke](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html) in the *AWS Lambda Developer Guide*.

**Syntax**

------
#### [ JSON ]

```
aws_lambda.invoke(
IN function_name TEXT,
IN payload JSON,
IN region TEXT DEFAULT NULL,
IN invocation_type TEXT DEFAULT 'RequestResponse',
IN log_type TEXT DEFAULT 'None',
IN context JSON DEFAULT NULL,
IN qualifier VARCHAR(128) DEFAULT NULL,
OUT status_code INT,
OUT payload JSON,
OUT executed_version TEXT,
OUT log_result TEXT)
```

```
aws_lambda.invoke(
IN function_name aws_commons._lambda_function_arn_1,
IN payload JSON,
IN invocation_type TEXT DEFAULT 'RequestResponse',
IN log_type TEXT DEFAULT 'None',
IN context JSON DEFAULT NULL,
IN qualifier VARCHAR(128) DEFAULT NULL,
OUT status_code INT,
OUT payload JSON,
OUT executed_version TEXT,
OUT log_result TEXT)
```

------
#### [ JSONB ]

```
aws_lambda.invoke(
IN function_name TEXT,
IN payload JSONB,
IN region TEXT DEFAULT NULL,
IN invocation_type TEXT DEFAULT 'RequestResponse',
IN log_type TEXT DEFAULT 'None',
IN context JSONB DEFAULT NULL,
IN qualifier VARCHAR(128) DEFAULT NULL,
OUT status_code INT,
OUT payload JSONB,
OUT executed_version TEXT,
OUT log_result TEXT)
```

```
aws_lambda.invoke(
IN function_name aws_commons._lambda_function_arn_1,
IN payload JSONB,
IN invocation_type TEXT DEFAULT 'RequestResponse',
IN log_type TEXT DEFAULT 'None',
IN context JSONB DEFAULT NULL,
IN qualifier VARCHAR(128) DEFAULT NULL,
OUT status_code INT,
OUT payload JSONB,
OUT executed_version TEXT,
OUT log_result TEXT
)
```

------

**Input parameters**

*function\_name*  
The identifying name of the Lambda function. The value can be the function name, an ARN, or a partial ARN. For a listing of possible formats, see [Lambda function name formats](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_RequestParameters) in the *AWS Lambda Developer Guide*.

*payload*  
The input for the Lambda function. The format can be JSON or JSONB. For more information, see [JSON Types](https://www.postgresql.org/docs/current/datatype-json.html) in the PostgreSQL documentation.

*region*  
(Optional) The Lambda Region for the function. By default, Aurora resolves the AWS Region from the full ARN in the `function_name` or it uses the Aurora PostgreSQL DB instance Region. If this Region value conflicts with the one provided in the `function_name` ARN, an error is raised.

*invocation\$1type*  
The invocation type of the Lambda function. The value is case-sensitive. Possible values include the following:  
+ `RequestResponse` – The default. This type of invocation for a Lambda function is synchronous and returns a response payload in the result. Use the `RequestResponse` invocation type when your workflow depends on receiving the Lambda function result immediately. 
+ `Event` – This type of invocation for a Lambda function is asynchronous and returns immediately without a returned payload. Use the `Event` invocation type when you don't need results of the Lambda function before your workflow moves on.
+ `DryRun` – This type of invocation tests access without running the Lambda function. 

*log\$1type*  
The type of Lambda log to return in the `log_result` output parameter. The value is case-sensitive. Possible values include the following:  
+ `Tail` – The returned `log_result` output parameter will include the last 4 KB of the execution log. 
+ `None` – No Lambda log information is returned.

*context*  
Client context in JSON or JSONB format. Fields to use include `custom` and `env`.

*qualifier*  
A qualifier that identifies a Lambda function's version to be invoked. If this value conflicts with one provided in the `function_name` ARN, an error is raised.

**Output parameters**

*status\_code*  
An HTTP status response code. For more information, see [Lambda Invoke response elements](https://docs.aws.amazon.com/lambda/latest/dg/API_Invoke.html#API_Invoke_ResponseElements) in the *AWS Lambda Developer Guide*.

*payload*  
The information returned from the Lambda function that ran. The format is in JSON or JSONB.

*executed\_version*  
The version of the Lambda function that ran.

*log\_result*  
The execution log information returned if the `log_type` value is `Tail` when the Lambda function was invoked. The result contains the last 4 KB of the execution log encoded in Base64.

## aws\_commons.create\_lambda\_function\_arn
<a name="aws_commons.create_lambda_function_arn"></a>

Creates an `aws_commons._lambda_function_arn_1` structure to hold Lambda function name information. You can use the results of the `aws_commons.create_lambda_function_arn` function in the `function_name` parameter of the [aws\_lambda.invoke](#aws_lambda.invoke) function. 

**Syntax**

```
aws_commons.create_lambda_function_arn(
    function_name TEXT,
    region TEXT DEFAULT NULL
    )
    RETURNS aws_commons._lambda_function_arn_1
```

**Input parameters**

*function\_name*  
A required text string containing the Lambda function name. The value can be a function name, a partial ARN, or a full ARN.

*region*  
An optional text string containing the AWS Region that the Lambda function is in. For a listing of Region names and associated values, see [Regions and Availability Zones](Concepts.RegionsAndAvailabilityZones.md).

## aws\$1lambda parameters
<a name="aws_lambda.parameters"></a>

In this table, you can find the parameters associated with the `aws_lambda` extension.


| Parameter | Description | 
| --- | --- | 
| `aws_lambda.connect_timeout_ms` | This dynamic parameter sets the maximum wait time, in milliseconds, for connecting to AWS Lambda. The default value is `1000`. Allowed values are 1 to 900000. | 
| `aws_lambda.request_timeout_ms` | This dynamic parameter sets the maximum wait time, in milliseconds, for a response from AWS Lambda. The default value is `3000`. Allowed values are 1 to 900000. | 
| `aws_lambda.endpoint_override` | Specifies the endpoint to use to connect to AWS Lambda. An empty string selects the default AWS Lambda endpoint for the AWS Region. You must restart the database for this static parameter change to take effect. | 
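
These settings are typically managed through a DB cluster parameter group rather than with SQL. As a sketch, assuming the parameter is available in your cluster's parameter group (the group name here is a placeholder), a dynamic parameter can be changed with the AWS CLI as follows.

```
aws rds modify-db-cluster-parameter-group \
    --db-cluster-parameter-group-name my-cluster-params \
    --parameters "ParameterName=aws_lambda.connect_timeout_ms,ParameterValue=2000,ApplyMethod=immediate"
```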

# Publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs
<a name="AuroraPostgreSQL.CloudWatch"></a>

You can configure your Aurora PostgreSQL DB cluster to export log data to Amazon CloudWatch Logs on a regular basis. When you do so, events from your Aurora PostgreSQL DB cluster's PostgreSQL log are automatically *published* to Amazon CloudWatch Logs. In CloudWatch, you can find the exported log data in a *log group* for your Aurora PostgreSQL DB cluster. The log group contains one or more *log streams* that contain the PostgreSQL log events from each instance in the cluster. 

Publishing the logs to CloudWatch Logs allows you to keep your cluster's PostgreSQL log records in highly durable storage. With the log data available in CloudWatch Logs, you can evaluate and improve your cluster's operations. You can also use CloudWatch to create alarms and view metrics. To learn more, see [Monitoring log events in Amazon CloudWatch](AuroraPostgreSQL.CloudWatch.Monitor.md).

**Note**  
Publishing your PostgreSQL logs to CloudWatch Logs consumes storage, and you incur charges for that storage. Be sure to delete any CloudWatch Logs that you no longer need. 

Turning the export log option off for an existing Aurora PostgreSQL DB cluster doesn't affect any data that's already held in CloudWatch Logs. Existing logs remain available in CloudWatch Logs based on your log retention settings. To learn more about CloudWatch Logs, see [What is Amazon CloudWatch Logs?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/WhatIsCloudWatchLogs.html) 

Aurora PostgreSQL supports publishing logs to CloudWatch Logs for the following versions. 
+ 16.1 and all higher versions
+ 15.2 and higher 15 versions
+ 14.3 and higher 14 versions
+ 13.3 and higher 13 versions
+ 12.8 and higher 12 versions
+ 11.9 and higher 11 versions

For information about turning on the option to publish logs to CloudWatch Logs, monitoring log events in CloudWatch Logs, and analyzing logs using CloudWatch Logs Insights, see the following topics.

**Topics**
+ [Turning on the option to publish logs to Amazon CloudWatch](AuroraPostgreSQL.CloudWatch.Publishing.md)
+ [Monitoring log events in Amazon CloudWatch](AuroraPostgreSQL.CloudWatch.Monitor.md)
+ [Analyzing PostgreSQL logs using CloudWatch Logs Insights](AuroraPostgreSQL.CloudWatch.Analyzing.md)

# Turning on the option to publish logs to Amazon CloudWatch
<a name="AuroraPostgreSQL.CloudWatch.Publishing"></a>

To publish your Aurora PostgreSQL DB cluster's PostgreSQL log to CloudWatch Logs, choose the **Log export** option for the cluster. You can choose the **Log export** setting when you create your Aurora PostgreSQL DB cluster, or you can modify the cluster later. When you modify an existing cluster, its PostgreSQL logs from each instance are published to CloudWatch Logs from that point on. For Aurora PostgreSQL, the PostgreSQL log (`postgresql.log`) is the only log that gets published to Amazon CloudWatch Logs. 

You can use the AWS Management Console, the AWS CLI, or the RDS API to turn on the Log export feature for your Aurora PostgreSQL DB cluster. 

## Console
<a name="AuroraPostgreSQL.CloudWatch.Console"></a>

You choose the Log exports option to start publishing the PostgreSQL logs from your Aurora PostgreSQL DB cluster to CloudWatch Logs.

**To turn on the Log export feature from the console**

1. Open the Amazon RDS console at [https://console.aws.amazon.com/rds/](https://console.aws.amazon.com/rds/).

1. In the navigation pane, choose **Databases**.

1. Choose the Aurora PostgreSQL DB cluster whose log data you want to publish to CloudWatch Logs.

1. Choose **Modify**.

1. In the **Log exports** section, choose **PostgreSQL log**.

1. Choose **Continue**, and then choose **Modify cluster** on the summary page.

## AWS CLI
<a name="AuroraPostgreSQL.CloudWatch.CLI"></a>

You can turn on the log export option to start publishing Aurora PostgreSQL logs to Amazon CloudWatch Logs with the AWS CLI. To do so, run the [modify-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/modify-db-cluster.html) AWS CLI command with the following options: 
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--cloudwatch-logs-export-configuration`—The configuration setting for the log types to be set for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs by running one of the following AWS CLI commands: 
+ [create-db-cluster](https://docs.aws.amazon.com/cli/latest/reference/rds/create-db-cluster.html)
+ [restore-db-cluster-from-s3](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-s3.html)
+ [restore-db-cluster-from-snapshot](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-from-snapshot.html)
+ [restore-db-cluster-to-point-in-time](https://docs.aws.amazon.com/cli/latest/reference/rds/restore-db-cluster-to-point-in-time.html)

Run one of these AWS CLI commands with the following options:
+ `--db-cluster-identifier`—The DB cluster identifier.
+ `--engine`—The database engine.
+ `--enable-cloudwatch-logs-exports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other options might be required depending on the AWS CLI command that you run.

**Example**  
The following command creates an Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs.  
For Linux, macOS, or Unix:  

```
aws rds create-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --engine aurora-postgresql \
    --enable-cloudwatch-logs-exports postgresql
```
For Windows:  

```
aws rds create-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --engine aurora-postgresql ^
    --enable-cloudwatch-logs-exports postgresql
```

**Example**  
The following command modifies an existing Aurora PostgreSQL DB cluster to publish log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `EnableLogTypes`, and its value is an array of the log types `postgresql` and `instance`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier my-db-cluster \
    --cloudwatch-logs-export-configuration '{"EnableLogTypes":["postgresql","instance"]}'
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier my-db-cluster ^
    --cloudwatch-logs-export-configuration "{\"EnableLogTypes\":[\"postgresql\",\"instance\"]}"
```
When using the Windows command prompt, make sure to escape double quotation marks (") in JSON code by prefixing them with a backslash (\).
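Rather than hand-escaping the JSON for each platform, you can generate the `--cloudwatch-logs-export-configuration` value programmatically. The following sketch is a hypothetical helper (not part of the AWS CLI) that uses Python's `json` module to produce a correctly quoted string:

```python
import json

def logs_export_configuration(enable=(), disable=()):
    """Build the JSON value for --cloudwatch-logs-export-configuration."""
    config = {}
    if enable:
        config["EnableLogTypes"] = list(enable)
    if disable:
        config["DisableLogTypes"] = list(disable)
    return json.dumps(config)

print(logs_export_configuration(enable=["postgresql", "instance"]))
# {"EnableLogTypes": ["postgresql", "instance"]}
```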

**Example**  
The following example modifies an existing Aurora PostgreSQL DB cluster to disable publishing log files to CloudWatch Logs. The `--cloudwatch-logs-export-configuration` value is a JSON object. The key for this object is `DisableLogTypes`, and its value is an array of the log types `postgresql` and `instance`.  
For Linux, macOS, or Unix:  

```
aws rds modify-db-cluster \
    --db-cluster-identifier mydbinstance \
    --cloudwatch-logs-export-configuration '{"DisableLogTypes":["postgresql","instance"]}'
```
For Windows:  

```
aws rds modify-db-cluster ^
    --db-cluster-identifier mydbinstance ^
    --cloudwatch-logs-export-configuration "{\"DisableLogTypes\":[\"postgresql\",\"instance\"]}"
```
When using the Windows command prompt, you must escape double quotation marks (") in JSON code by prefixing them with a backslash (\).

## RDS API
<a name="AuroraPostgreSQL.CloudWatch.API"></a>

You can turn on the log export option to start publishing Aurora PostgreSQL logs with the RDS API. To do so, run the [ModifyDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_ModifyDBCluster.html) operation with the following options: 
+ `DBClusterIdentifier` – The DB cluster identifier.
+ `CloudwatchLogsExportConfiguration` – The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

You can also publish Aurora PostgreSQL logs with the RDS API by running one of the following RDS API operations: 
+ [CreateDBCluster](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_CreateDBCluster.html)
+ [RestoreDBClusterFromS3](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromS3.html)
+ [RestoreDBClusterFromSnapshot](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterFromSnapshot.html)
+ [RestoreDBClusterToPointInTime](https://docs.aws.amazon.com/AmazonRDS/latest/APIReference/API_RestoreDBClusterToPointInTime.html)

Run the RDS API action with the following parameters: 
+ `DBClusterIdentifier`—The DB cluster identifier.
+ `Engine`—The database engine.
+ `EnableCloudwatchLogsExports`—The configuration setting for the log types to be enabled for export to CloudWatch Logs for the DB cluster.

Other parameters might be required depending on the RDS API operation that you run.
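If you call the RDS API through an SDK, the parameters above map directly onto request fields. The sketch below only assembles the request parameters for a `ModifyDBCluster` call; the actual boto3 invocation (shown commented out) requires AWS credentials and is an assumption about your client setup:

```python
def modify_db_cluster_request(cluster_id, enable_log_types):
    """Assemble ModifyDBCluster parameters that enable CloudWatch Logs exports."""
    return {
        "DBClusterIdentifier": cluster_id,
        "CloudwatchLogsExportConfiguration": {
            "EnableLogTypes": list(enable_log_types),
        },
    }

params = modify_db_cluster_request("my-db-cluster", ["postgresql"])
# import boto3
# boto3.client("rds").modify_db_cluster(**params)  # requires AWS credentials
print(params["DBClusterIdentifier"])
# my-db-cluster
```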

# Monitoring log events in Amazon CloudWatch
<a name="AuroraPostgreSQL.CloudWatch.Monitor"></a>

With Aurora PostgreSQL log events published and available as Amazon CloudWatch Logs, you can view and monitor events using Amazon CloudWatch. For more information about monitoring, see [View log data sent to CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/Working-with-log-groups-and-streams.html#ViewingLogData).

When you turn on Log exports, a new log group is automatically created using the prefix `/aws/rds/cluster/` with the name of your Aurora PostgreSQL DB cluster and the log type, as in the following pattern. 

```
/aws/rds/cluster/your-cluster-name/postgresql
```

As an example, suppose that an Aurora PostgreSQL DB cluster named `docs-lab-apg-small` exports its log to Amazon CloudWatch Logs. Its log group name in Amazon CloudWatch is shown following.

```
/aws/rds/cluster/docs-lab-apg-small/postgresql
```

If a log group with the specified name exists, Aurora uses that log group to export log data for the Aurora DB cluster. Each DB instance in the Aurora PostgreSQL DB cluster uploads its PostgreSQL log to the log group as a distinct log stream. You can examine the log group and its log streams using the various graphical and analytical tools available in Amazon CloudWatch.
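The naming pattern is simple enough to reproduce when you need to locate a cluster's log group programmatically. This sketch is based only on the pattern shown above:

```python
def aurora_log_group_name(cluster_name, log_type="postgresql"):
    """Derive the CloudWatch Logs log group name for an exported Aurora log."""
    return f"/aws/rds/cluster/{cluster_name}/{log_type}"

print(aurora_log_group_name("docs-lab-apg-small"))
# /aws/rds/cluster/docs-lab-apg-small/postgresql
```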

For example, you can search for information within the log events from your Aurora PostgreSQL DB cluster, and filter events by using the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API. For more information, see [Searching and filtering log data](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/MonitoringLogData.html) in the *Amazon CloudWatch Logs User Guide*. 

By default, new log groups are created using **Never expire** for their retention period. You can use the CloudWatch Logs console, the AWS CLI, or the CloudWatch Logs API to change the log retention period. To learn more, see [Change log data retention in CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/SettingLogRetention.html) in the *Amazon CloudWatch Logs User Guide*.

**Tip**  
You can use automated configuration, such as AWS CloudFormation, to create log groups with predefined log retention periods, metric filters, and access permissions. 

# Analyzing PostgreSQL logs using CloudWatch Logs Insights
<a name="AuroraPostgreSQL.CloudWatch.Analyzing"></a>

With the PostgreSQL logs from your Aurora PostgreSQL DB cluster published as CloudWatch Logs, you can use CloudWatch Logs Insights to interactively search and analyze your log data in Amazon CloudWatch Logs. CloudWatch Logs Insights includes a query language, sample queries, and other tools for analyzing your log data so that you can identify potential issues and verify fixes. To learn more, see [Analyzing log data with CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) in the *Amazon CloudWatch Logs User Guide*.

**To analyze PostgreSQL logs with CloudWatch Logs Insights**

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, open **Logs** and choose **Logs Insights**.

1. In **Select log group(s)**, select the log group for your Aurora PostgreSQL DB cluster.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-select-log-group.png)

1. In the query editor, delete the query that is currently shown, enter the following, and then choose **Run query**.

   ```
   ## Autovacuum execution time in seconds per 5 minutes
   fields @message
   | parse @message "elapsed: * s" as @duration_sec
   | filter @message like / automatic vacuum /
   | display @duration_sec
   | sort @timestamp
   | stats avg(@duration_sec) as avg_duration_sec, 
   max(@duration_sec) as max_duration_sec 
   by bin(5 min)
   ```  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-query.png)

1. Choose the **Visualization** tab.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-visualization.png)

1. Choose **Add to dashboard**.

1. In **Select a dashboard**, either select a dashboard or enter a name to create a new dashboard.

1. In **Widget type**, choose a widget type for your visualization.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-dashboard.png)

1. (Optional) Add more widgets based on your log query results.

   1. Choose **Add widget**.

   1. Choose a widget type, such as **Line**.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-widget.png)

   1. In the **Add to this dashboard** window, choose **Logs**.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-add-logs-to-dashboard.png)

   1. In **Select log group(s)**, select the log group for your DB cluster.

   1. In the query editor, delete the query that is currently shown, enter the following, and then choose **Run query**.

      ```
      ##Autovacuum tuples statistics per 5 min
      fields @timestamp, @message
      | parse @message "tuples: " as @tuples_temp
      | parse @tuples_temp "* removed," as @tuples_removed
      | parse @tuples_temp "remain, * are dead but not yet removable, " as @tuples_not_removable
      | filter @message like / automatic vacuum /
      | sort @timestamp
      | stats  avg(@tuples_removed) as avg_tuples_removed, 
      avg(@tuples_not_removable) as avg_tuples_not_removable 
      by bin(5 min)
      ```  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-query2.png)

   1. Choose **Create widget**.

      Your dashboard should look similar to the following image.  
![\[\]](http://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/images/apg-cwl-insights-dashboard-two-graphs.png)
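The `parse` expressions in the queries above extract fields with glob-style patterns. You can prototype the same extraction locally with a regular expression before committing a query to a dashboard. The sample message below is abbreviated and illustrative; the exact wording and field order in your PostgreSQL log may differ:

```python
import re

# Abbreviated example of a PostgreSQL autovacuum log message.
message = (
    'automatic vacuum of table "mydb.public.orders": '
    "tuples: 1500 removed, 42000 remain, 120 are dead but not yet removable, "
    "elapsed: 2.75 s"
)

# Mirror the Logs Insights patterns: parse "elapsed: * s" and "tuples: * removed".
elapsed = re.search(r"elapsed: ([\d.]+) s", message)
removed = re.search(r"tuples: (\d+) removed", message)
assert elapsed is not None and removed is not None
print(float(elapsed.group(1)), int(removed.group(1)))
# 2.75 1500
```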