

# Working with Amazon S3 Tables and table buckets
<a name="s3-tables"></a>

Amazon S3 Tables provide S3 storage that’s optimized for analytics workloads, with features designed to continuously improve query performance and reduce storage costs for tables. S3 Tables are purpose-built for storing tabular data, such as daily purchase transactions, streaming sensor data, or ad impressions. Tabular data represents data in columns and rows, like in a database table. 

The data in S3 Tables is stored in a new bucket type: a *table bucket*, which stores tables as subresources. Table buckets support storing tables in the Apache Iceberg format. Using standard SQL statements, you can query your tables with query engines that support Iceberg, such as Amazon Athena, Amazon Redshift, and Apache Spark.

**Topics**
+ [Features of S3 Tables](#s3-tables-features)
+ [Related services](#s3-tables-services)
+ [Tutorial: Getting started with S3 Tables](s3-tables-getting-started.md)
+ [Table buckets](s3-tables-buckets.md)
+ [S3 Tables maintenance](s3-tables-maintenance-overview.md)
+ [Cost optimization for tables with Intelligent-Tiering](tables-intelligent-tiering.md)
+ [Table namespaces](s3-tables-namespace.md)
+ [Tables in S3 table buckets](s3-tables-tables.md)
+ [Accessing table data](s3-tables-access.md)
+ [Working with Apache Iceberg V3](working-with-apache-iceberg-v3.md)
+ [Replicating S3 tables](s3-tables-replication-tables.md)
+ [S3 Tables AWS Regions, endpoints, and service quotas](s3-tables-regions-quotas.md)
+ [Making requests to S3 Tables over IPv6](s3-tables-ipv6.md)
+ [Security for S3 Tables](s3-tables-security-overview.md)
+ [Logging and monitoring for S3 Tables](s3-tables-monitoring-overview.md)

## Features of S3 Tables
<a name="s3-tables-features"></a>

**Purpose-built storage for tables**  
S3 table buckets are specifically designed for tables. Table buckets provide higher transactions per second (TPS) and better query throughput compared to self-managed tables in S3 general purpose buckets. Table buckets deliver the same durability, availability, and scalability as other Amazon S3 bucket types.

**Built-in support for Apache Iceberg**  
Tables in your table buckets are stored in [Apache Iceberg](https://aws.amazon.com//what-is/apache-iceberg/) format. You can query these tables using standard SQL in query engines that support Iceberg. Iceberg has a variety of features to optimize query performance, including schema evolution and partition evolution.  
With Iceberg, you can change how your data is organized so that it can evolve over time without requiring you to rewrite your queries or rebuild your data structures. Iceberg is designed to help ensure data consistency and reliability through its support for transactions. To help you correct issues or perform time travel queries, you can track how data changes over time and roll back to historical versions.

**Automated table optimization**  
To optimize your tables for querying, S3 continuously performs automatic maintenance operations, such as compaction, snapshot management, and unreferenced file removal. These operations increase table performance by compacting smaller objects into fewer, larger files. Maintenance operations also reduce your storage costs by cleaning up unused objects. This automated maintenance streamlines the operation of data lakes at scale by reducing the need for manual table maintenance. For each table and table bucket, you can customize maintenance configurations.

**Access management and security**  
You can manage access for both table buckets and individual tables with AWS Identity and Access Management (IAM) and [Service Control Policies](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_policies_scps.html) in AWS Organizations. S3 Tables uses a different service namespace than Amazon S3: the *s3tables* namespace. Therefore, you can design policies specifically for the S3 Tables service and its resources. You can design policies to grant access to individual tables, all tables within a table namespace, or entire table buckets. All Amazon S3 Block Public Access settings are always enabled for table buckets and cannot be disabled. 

**Integration with AWS analytics services**  
You can automatically integrate your Amazon S3 table buckets with AWS Glue Data Catalog through the S3 console. This integration allows AWS analytics services to automatically discover and access your table data. After the integration, you can work with your tables using analytics services such as Amazon Athena, Amazon Redshift, Amazon QuickSight, and more. For more information about how the integration works, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

## Related services
<a name="s3-tables-services"></a>

You can use the following AWS services with S3 Tables to support your specific analytics applications.


+ [Amazon Athena](https://docs.aws.amazon.com//athena/latest/ug/what-is.html) – Athena is an interactive query service that you can use to analyze data directly in Amazon S3 by using standard SQL. You can also use Athena to interactively run data analytics by using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly.
+ [AWS Glue](https://docs.aws.amazon.com//glue/latest/dg/what-is-glue.html) – AWS Glue is a serverless data-integration service that allows you to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue for analytics, machine learning (ML), and application development. AWS Glue also includes additional productivity and data-operations tooling for authoring, running jobs, and implementing business workflows.
+ [Amazon SageMaker Unified Studio](https://docs.aws.amazon.com//sagemaker-unified-studio/latest/userguide/what-is-sagemaker-unified-studio.html) – SageMaker Unified Studio delivers an integrated experience for analytics and AI with unified access to all your data. Collaborate and build in SageMaker Unified Studio using familiar AWS tools for SQL analytics, data processing, model development, and generative AI, accelerated by [Amazon Q Developer](https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/what-is.html).
+ [Amazon EMR](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-what-is-emr.html) – Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data.
+ [Amazon Redshift](https://docs.aws.amazon.com//redshift/latest/mgmt/welcome.html) – Amazon Redshift is a petabyte-scale data warehouse service in the cloud. You can use Amazon Redshift Serverless to access and analyze data without all of the configurations of a provisioned data warehouse. Resources are automatically provisioned and data warehouse capacity is intelligently scaled to deliver fast performance for even the most demanding and unpredictable workloads. You don't incur charges when the data warehouse is idle, so you only pay for what you use. You can load data and start querying right away in the Amazon Redshift query editor v2 or in your favorite business intelligence (BI) tool.
+ [Amazon QuickSight](https://docs.aws.amazon.com//quicksight/latest/user/welcome.html) – Amazon QuickSight is a business analytics service that you can use to build visualizations, perform ad hoc analysis, and quickly get business insights from your data. QuickSight seamlessly discovers AWS data sources and delivers fast, responsive query performance by using its Super-fast, Parallel, In-Memory Calculation Engine (SPICE).
+ [AWS Lake Formation](https://docs.aws.amazon.com//lake-formation/latest/dg/what-is-lake-formation.html) – Lake Formation is a managed service that streamlines the process to set up, secure, and manage your data lakes. Lake Formation helps you discover your data sources and then catalog, cleanse, and transform the data. With Lake Formation, you can manage fine-grained access control for your data lake data on Amazon S3 and its metadata in AWS Glue Data Catalog.

# Tutorial: Getting started with S3 Tables
<a name="s3-tables-getting-started"></a>

In this tutorial, you create a table bucket and integrate table buckets in your Region with AWS analytics services. Next, you use the AWS CLI or the console to create your first namespace and table in your table bucket. Then, you query your table with Athena.

**Tip**  
If you're migrating tabular data from general purpose buckets to table buckets, the AWS Solutions Library has a guided solution to assist you. This solution automates moving Apache Iceberg and Apache Hive tables that are registered in AWS Glue Data Catalog and stored in general purpose buckets to table buckets by using AWS Step Functions and Amazon EMR with Apache Spark. For more information, see [Guidance for Migrating Tabular Data from Amazon S3 to S3 Tables](https://aws.amazon.com/solutions/guidance/migrating-tabular-data-from-amazon-s3-to-s3-tables/) in the AWS Solutions Library.

**Topics**
+ [Step 1: Create a table bucket and integrate it with AWS analytics services](#s1-tables-tutorial-create-bucket)
+ [Step 2: Create a table namespace and a table](#s2-tables-tutorial-create-namespace-and-table)
+ [Step 3: Query data with SQL in Athena](#s4-query-tables)

## Step 1: Create a table bucket and integrate it with AWS analytics services
<a name="s1-tables-tutorial-create-bucket"></a>

In this step, you use the Amazon S3 console to create your first table bucket. For other ways to create a table bucket, see [Creating a table bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-buckets-create.html).

**Note**  
By default, the Amazon S3 console automatically integrates your table buckets with AWS Glue Data Catalog, which allows AWS analytics services to automatically discover and access your S3 Tables data. If you create your first table bucket programmatically by using the AWS Command Line Interface (AWS CLI), AWS SDKs, or REST API, you must manually complete the AWS analytics services integration. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create the table bucket.

1. In the left navigation pane, choose **Table buckets**.

1. Choose **Create table bucket**.

1. Under **General configuration**, enter a name for your table bucket.

   The table bucket name must: 
   + Be unique within your AWS account in the current Region.
   + Be between 3 and 63 characters long.
   + Consist only of lowercase letters, numbers, and hyphens (`-`).
   + Begin and end with a letter or number.

   After you create the table bucket, you can't change its name. The AWS account that creates the table bucket owns it. For more information about naming table buckets, see [Table bucket naming rules](s3-tables-buckets-naming.md#table-buckets-naming-rules).

1. In the **Integration with AWS analytics services** section, make sure that the **Enable integration** checkbox is selected. 

   If **Enable integration** is selected when you create your first table bucket by using the console, Amazon S3 attempts to integrate your table bucket with AWS analytics services. This integration allows you to use AWS analytics services to access all tables in the current Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

1. Choose **Create bucket**.

## Step 2: Create a table namespace and a table
<a name="s2-tables-tutorial-create-namespace-and-table"></a>

For this step, you create a namespace in your table bucket, and then create a new table under that namespace. You can create a table namespace and a table by using either the console or the AWS CLI. 

**Important**  
When creating tables, make sure that you use all lowercase letters in your table names and table definitions. For example, make sure that your column names are all lowercase. If your table name or table definition contains capital letters, the table isn't supported by AWS Lake Formation or the AWS Glue Data Catalog. In this case, your table won't be visible to AWS analytics services such as Amazon Athena, even if your table buckets are integrated with AWS analytics services.   
If your table definition contains capital letters, you receive the following error message when running a `SELECT` query in Athena: "GENERIC_INTERNAL_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names."

### Using the S3 console and Amazon Athena
<a name="s3-tables-tutorial-create-table-console"></a>

The following procedure uses the Amazon S3 console to create a namespace and a table with Amazon Athena. 

**To create a table namespace and a table**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the table bucket that you want to create a table in.

1. On the table bucket details page, choose **Create table with Athena**. 

1. In the **Create table with Athena** dialog box, choose **Create a namespace**, and then enter a name in the **Namespace name** field. Namespace names must be 1 to 255 characters and unique within the table bucket. Valid characters are a–z, 0–9, and underscores (`_`). Underscores aren't allowed at the start of namespace names.

1. Choose **Create namespace**.

1. Choose **Create table with Athena**.

1. The Amazon Athena console opens and the Athena query editor appears. The query editor is populated with a sample query that you can use to create a table. Modify the query to specify the table name and columns that you want your table to have. 

1. When you're finished modifying the query, choose **Run** to create your table. 

If your table creation was successful, the name of your new table appears in the list of tables in Athena. When you navigate back to the Amazon S3 console, your new table appears in the **Tables** list on the details page for your table bucket after you refresh the list. 

### Using the AWS CLI
<a name="s3-tables-tutorial-create-table-CLI"></a>

To use the following AWS CLI example commands to create a namespace in your table bucket, and then create a new table with a schema under that namespace, replace the `user input placeholder` values with your own.

**Prerequisites**
+ Attach the [AmazonS3TablesFullAccess](https://docs.aws.amazon.com//aws-managed-policy/latest/reference/AmazonS3TablesFullAccess.html) policy to your IAM identity. 
+ Install AWS CLI version 2.23.10 or higher. For more information, see [Installing or updating the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

1. Create a new namespace in your table bucket by running the following command:

   ```
   aws s3tables create-namespace \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
   --namespace my_namespace
   ```

1. Confirm that your namespace was created successfully by running the following command: 

   ```
   aws s3tables list-namespaces \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
   ```

1. Create a new table with a table schema by running the following command:

   ```
   aws s3tables create-table --cli-input-json file://mytabledefinition.json
   ```

   For the `mytabledefinition.json` file, use the following example table definition:

   ```
   {
       "tableBucketARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket",
       "namespace": "my_namespace",
       "name": "my_table",
       "format": "ICEBERG",
       "metadata": {
           "iceberg": {
               "schema": {
                   "fields": [
                        {"name": "id", "type": "int","required": true},
                        {"name": "name", "type": "string"},
                        {"name": "value", "type": "int"}
                   ]
               }
           }
       }
   }
   ```
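
   Instead of writing the JSON file by hand, you can generate it with a short script. The following is a minimal sketch that builds the same example table definition shown above and checks the lowercase-identifier requirement described earlier; all names and values are the illustrative placeholders from this tutorial.

   ```python
   import json

   # Illustrative values matching the example above; replace with your own.
   table_definition = {
       "tableBucketARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket",
       "namespace": "my_namespace",
       "name": "my_table",
       "format": "ICEBERG",
       "metadata": {
           "iceberg": {
               "schema": {
                   "fields": [
                       {"name": "id", "type": "int", "required": True},
                       {"name": "name", "type": "string"},
                       {"name": "value", "type": "int"},
                   ]
               }
           }
       },
   }

   # Guard against the capital-letter pitfall described above: the table name,
   # namespace, and column names must all be lowercase.
   identifiers = [table_definition["name"], table_definition["namespace"]] + [
       f["name"] for f in table_definition["metadata"]["iceberg"]["schema"]["fields"]
   ]
   assert all(s == s.lower() for s in identifiers), "identifiers must be lowercase"

   with open("mytabledefinition.json", "w") as f:
       json.dump(table_definition, f, indent=4)
   ```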

## Step 3: Query data with SQL in Athena
<a name="s4-query-tables"></a>

You can query your table with SQL in Athena. Athena supports Data Definition Language (DDL), Data Manipulation Language (DML), and Data Query Language (DQL) queries for S3 Tables.

You can access the Athena query editor either from the Amazon S3 console or through the Amazon Athena console. 

### Using the S3 console and Amazon Athena
<a name="s4-query-tables-query-table-s3-console"></a>

The following procedure uses the Amazon S3 console to access the Athena query editor so that you can query a table with Amazon Athena. 

**To query a table**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the table bucket that contains the table that you want to query.

1. On the table bucket details page, choose the option button next to the name of the table that you want to query. 

1. Choose **Query table with Athena**.

1. The Amazon Athena console opens and the Athena query editor appears with a sample `SELECT` query loaded for you. Modify this query as needed for your use case. 

1. To run the query, choose **Run**.

### Using the Amazon Athena console
<a name="s4-query-tables-query-table-athena-console"></a>

**To query a table**

1. Open the Athena console at [https://console.aws.amazon.com/athena/](https://console.aws.amazon.com/athena/home).

1. Query your table. The following is a sample query that you can modify. Make sure to replace the `user input placeholders` with your own information.

   ```
   SELECT * FROM "s3tablescatalog/amzn-s3-demo-table-bucket"."my_namespace"."my_table" LIMIT 10
   ```

1. To run the query, choose **Run**. 

# Table buckets
<a name="s3-tables-buckets"></a>

Amazon S3 table buckets are an S3 bucket type that you can use to create and store tables as S3 resources. Table buckets are used to store tabular data and metadata as objects for use in analytics workloads. S3 performs maintenance in your table buckets automatically to help reduce your table storage costs. For more information, see [S3 Tables maintenance](s3-tables-maintenance-overview.md).

To interact with the tables stored inside your table buckets, you can integrate your table buckets with analytics applications that support [Apache Iceberg](https://iceberg.apache.org/docs/latest/). Table buckets integrate with AWS analytics services through the AWS Glue Data Catalog. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md). You can also use open-source query engines to interact with your tables through the Amazon S3 Tables Catalog for Apache Iceberg. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md). 

Each table bucket has a unique Amazon Resource Name (ARN) and resource policy attached to it. Table bucket ARNs follow this format:

```
arn:aws:s3tables:Region:OwnerAccountID:bucket/bucket-name
```
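
The components of this format can be pulled apart mechanically. The following is a small sketch of a hypothetical helper (not part of any AWS SDK) that splits a table bucket ARN according to the format shown above:

```python
# Hypothetical helper, not part of any AWS SDK: splits a table bucket ARN
# of the form arn:aws:s3tables:Region:OwnerAccountID:bucket/bucket-name.
def parse_table_bucket_arn(arn: str) -> dict:
    prefix, resource = arn.split(":bucket/", 1)
    parts = prefix.split(":")
    if len(parts) != 5 or parts[:3] != ["arn", "aws", "s3tables"]:
        raise ValueError(f"not a table bucket ARN: {arn}")
    return {"region": parts[3], "account": parts[4], "bucket": resource}

info = parse_table_bucket_arn(
    "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket"
)
```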

All table buckets and tables are private and can't be made public. These resources can only be accessed by users who are explicitly granted access. To grant access, you can use IAM resource-based policies for table buckets and tables, and IAM identity-based policies for users and roles.

By default, you can create up to 10 table buckets per AWS Region in an AWS account. To request a quota increase for table buckets or tables, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).

## Types of table buckets
<a name="s3-tables-buckets-types"></a>

Amazon S3 supports the following types of table buckets:

**Customer-managed table buckets**  <a name="s3-tables-buckets-customer-managed"></a>
Customer-managed table buckets are table buckets that you create and manage yourself. You create these buckets explicitly, choose their names, and maintain full control over the tables and namespaces within them. For customer-managed table buckets, you can create and delete buckets, set custom default encryption, and configure maintenance options as needed.

**AWS managed table buckets**  <a name="s3-tables-buckets-aws-managed"></a>
AWS managed table buckets are AWS managed resources that automatically store tables created by AWS services, such as the live inventory and journal tables created by S3 Metadata. These buckets provide a centralized location for all system-generated tables. These buckets follow a standard naming convention, use a standard namespace for all tables, and have preset maintenance and default encryption configurations which S3 modifies on your behalf. You have read-only access to query the data, while AWS handles all table creation, updates, and maintenance operations. For more information, see [Working with AWS managed table buckets](s3-tables-aws-managed-buckets.md).

There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see [Buckets](Welcome.md#BasicsBucket).

**Topics**
+ [Types of table buckets](#s3-tables-buckets-types)
+ [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md)
+ [Creating a table bucket](s3-tables-buckets-create.md)
+ [Deleting a table bucket](s3-tables-buckets-delete.md)
+ [Viewing details about an Amazon S3 table bucket](s3-tables-buckets-details.md)
+ [Managing table bucket policies](s3-tables-bucket-policy.md)
+ [Working with AWS managed table buckets](s3-tables-aws-managed-buckets.md)
+ [Using tags with S3 table buckets](table-bucket-tagging.md)

# Amazon S3 table bucket, table, and namespace naming rules
<a name="s3-tables-buckets-naming"></a>

When you create a table bucket, you choose a bucket name and AWS Region. The name must be unique for your account in the chosen Region. After you create a table bucket, you can't change the bucket name or Region. Table bucket names must follow specific naming rules. For more information about naming rules for table buckets and the tables and namespaces within them, see the following topics.

**Topics**
+ [Table bucket naming rules](#table-buckets-naming-rules)
+ [Naming rules for tables and namespaces](#naming-rules-table)

## Table bucket naming rules
<a name="table-buckets-naming-rules"></a>

When you create Amazon S3 table buckets, you specify a table bucket name. Like other bucket types, table buckets can't be renamed. Unlike other bucket types, table buckets aren't in a global namespace, so each bucket name in your account needs to be unique only within your current AWS Region. 

For general purpose bucket naming rules, see [General purpose bucket naming rules](bucketnamingrules.md). For directory bucket naming rules, see [Directory bucket naming rules](directory-bucket-naming-rules.md).

The following naming rules apply for table buckets.
+ Bucket names must be between 3 and 63 characters long.
+ Bucket names can consist only of lowercase letters, numbers, and hyphens (`-`).
+ Bucket names must begin and end with a letter or number.
+ Bucket names must not contain any underscores (`_`) or periods (`.`).
+ Bucket names must not start with any of the following reserved prefixes: 
  + `xn--`
  + `sthree-`
  + `amzn-s3-demo-`
  + `aws`
+ Bucket names must not end with any of the following reserved suffixes:
  + `-s3alias`
  + `--ol-s3`
  + `--x-s3`
  + `--table-s3`

## Naming rules for tables and namespaces
<a name="naming-rules-table"></a>

The following naming rules apply to tables and namespaces within table buckets:
+ Names must be between 1 and 255 characters long.
+ Names can consist only of lowercase letters, numbers, and underscores (`_`).
+ Names must begin with a letter or number.
+ Names must not contain hyphens (`-`) or periods (`.`).
+ A table name must be unique within a namespace.
+ A namespace must be unique within a table bucket.
+ Namespace names must not start with the reserved prefix `aws`.
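
As with bucket names, the character-level rules can be checked locally. The following is a hypothetical sketch based on the rules above; uniqueness within a namespace or table bucket can only be confirmed by the service itself.

```python
import re

def is_valid_table_or_namespace_name(name: str, is_namespace: bool = False) -> bool:
    """Local pre-check of the table/namespace naming rules (hypothetical helper)."""
    if not 1 <= len(name) <= 255:
        return False
    # Lowercase letters, numbers, and underscores only; must begin with a
    # letter or number (which also excludes hyphens and periods).
    if not re.fullmatch(r"[a-z0-9][a-z0-9_]*", name):
        return False
    # Only namespace names carry the reserved "aws" prefix restriction.
    if is_namespace and name.startswith("aws"):
        return False
    return True
```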

**Important**  
When creating tables, make sure that you use all lowercase letters in your table names and table definitions. For example, make sure that your column names are all lowercase. If your table name or table definition contains capital letters, the table isn't supported by AWS Lake Formation or the AWS Glue Data Catalog. In this case, your table won't be visible to AWS analytics services such as Amazon Athena, even if your table buckets are integrated with AWS analytics services.   
If your table definition contains capital letters, you receive the following error message when running a `SELECT` query in Athena: "GENERIC_INTERNAL_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names."

# Creating a table bucket
<a name="s3-tables-buckets-create"></a>

Amazon S3 table buckets are an S3 bucket type that you can use to create and store tables as S3 resources. To start using S3 Tables, you create a table bucket where you store and manage tables. When you create a table bucket, you choose a bucket name and AWS Region. The table bucket name must be unique for your account in the chosen Region. After you create a table bucket, you can't change the bucket name or Region. For information about naming table buckets, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md).

Table buckets have the following Amazon Resource Name (ARN) format:

```
arn:aws:s3tables:region:owner-account-id:bucket/bucket-name
```

By default, you can create up to 10 table buckets per Region in an AWS account. To request a quota increase for table buckets or tables, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).

When you create a table bucket, you can specify the encryption type that will be used to encrypt the tables you create in that bucket. For more information about bucket encryption options, see [Protecting S3 table data with encryption](s3-tables-encryption.md).

**Prerequisites for creating table buckets**

To create a table bucket, you must first do the following: 
+ Make sure that you have AWS Identity and Access Management (IAM) permissions for `s3tables:CreateTableBucket`.

**Note**  
If you choose SSE-KMS as the default encryption type, you must have permissions for `s3tables:PutTableBucketEncryption` and have the `DescribeKey` permission on the chosen AWS KMS key. Additionally, the AWS KMS key that you use must grant S3 Tables permission to perform automatic table maintenance. For more information, see [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md).
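
As a rough illustration of what an identity-based policy covering these prerequisites might look like, the following sketch assembles a policy document using only the two `s3tables` actions named above. The account ID and wildcard `Resource` are placeholders; scope the resource down for production use.

```python
import json

# Minimal policy sketch covering the prerequisite permissions named above.
# The Resource ARN is a placeholder; narrow it to specific buckets as needed.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3tables:CreateTableBucket",
                "s3tables:PutTableBucketEncryption",
            ],
            "Resource": "arn:aws:s3tables:us-east-2:111122223333:bucket/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```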

To create a table bucket, you can use the Amazon S3 console, Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs.

## Using the S3 console
<a name="create-table-bucket-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket.

1. In the left navigation pane, choose **Table buckets**.

1. Choose **Create table bucket** to open the **Create table bucket** page.

1. Under **Properties**, enter a name for your table bucket.

   The table bucket name must: 
   + Be unique within your account in the current Region.
   + Be between 3 and 63 characters long.
   + Consist only of lowercase letters, numbers, and hyphens (`-`).
   + Begin and end with a letter or number.

   After you create the bucket, you can't change its name. The AWS account that creates the bucket owns it. For information about naming table buckets, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md).

1. If you want to integrate your table buckets with AWS analytics services, make sure **Enable integration** is selected under **Integration with AWS analytics services**.
**Note**  
When you create your first table bucket by using the console with the **Enable integration** option selected, Amazon S3 attempts to automatically integrate your table bucket with AWS analytics services. This integration allows you to use AWS analytics services to query all tables in the current Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

1. To configure default encryption, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed key (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service key (SSE-KMS)**

   For more information about encryption options for table data, see [Protecting S3 table data with encryption](s3-tables-encryption.md).

1. Choose **Create bucket**.

## Using the AWS CLI
<a name="create-table-bucket-CLI"></a>

This example shows how to create a table bucket by using the AWS CLI. To use this example, replace the `user input placeholders` with your own information.

```
aws s3tables create-table-bucket \
    --region us-east-2 \
    --name amzn-s3-demo-bucket1
```

By default, S3 table buckets use SSE-S3 as their default encryption setting. However, you can use the optional `--encryption-configuration` parameter to specify a different encryption type. The following example shows how to create a bucket that uses SSE-KMS encryption. For more information about encryption settings for table buckets, see [Protecting S3 table data with encryption](s3-tables-encryption.md).

```
aws s3tables create-table-bucket \
    --region us-east-2 \
    --name amzn-s3-demo-bucket1 \
    --encryption-configuration '{
                    "sseAlgorithm": "aws:kms",
                    "kmsKeyArn": "arn:aws:kms:Region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" }'
```

# Deleting a table bucket
<a name="s3-tables-buckets-delete"></a>

You can use the Amazon S3 APIs, AWS Command Line Interface, or AWS SDKs to delete a table bucket. Before you delete a table bucket, you must first delete all namespaces and tables within the bucket.

**Important**  
When you delete a table bucket, be aware of the following:  
+ Bucket deletion is permanent and can't be undone.
+ All data and configurations associated with the bucket are permanently lost.

## Using the AWS CLI
<a name="delete-table-bucket-CLI"></a>

This example shows how to delete a table bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables delete-table-bucket \
    --region us-east-1 \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1
```
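
If the bucket still contains tables or namespaces, the delete fails. The following sketch (with the hypothetical namespace `my_namespace` and table `my_table`) shows how you might empty the bucket first:

```
# List the tables that remain in the bucket.
aws s3tables list-tables \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1

# Delete each remaining table, and then its namespace.
aws s3tables delete-table \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
    --namespace my_namespace \
    --name my_table

aws s3tables delete-namespace \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
    --namespace my_namespace
```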

# Viewing details about an Amazon S3 table bucket
<a name="s3-tables-buckets-details"></a>

You can view the general details of an Amazon S3 table bucket, such as the bucket owner and type, in the console or programmatically. You can view default encryption settings and maintenance settings programmatically by using the S3 Tables REST API, the AWS CLI, or the AWS SDKs.

## Viewing table bucket details
<a name="table-bucket-details-view"></a>

### Using the AWS CLI
<a name="table-bucket-details-CLI"></a>

This example shows how to get details about a table bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-bucket --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
```

### Using the S3 console
<a name="table-bucket-details-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Select your table bucket.

1. Select the **Properties** tab.

## Viewing table bucket encryption settings
<a name="table-bucket-encryption-view"></a>

For more information about table bucket encryption, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-encryption.md).

### Using the AWS CLI
<a name="table-bucket-encryption-view-CLI"></a>

This example shows how to get details about encryption settings for a table bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-bucket-encryption --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
```

## Viewing table bucket maintenance configurations
<a name="table-bucket-maintenance-view"></a>

For information about maintenance settings, see [Maintenance for table buckets](s3-table-buckets-maintenance.md).

### Using the AWS CLI
<a name="table-bucket-maintenance-view-CLI"></a>

This example shows how to get details about maintenance configuration settings for a table bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-bucket-maintenance-configuration --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
```

# Managing table bucket policies
<a name="s3-tables-bucket-policy"></a>

You can add, delete, update, and view bucket policies for Amazon S3 table buckets by using the Amazon S3 REST API, the AWS SDKs, and the AWS Command Line Interface (AWS CLI). For more information about supported AWS Identity and Access Management (IAM) actions and condition keys for Amazon S3 Tables, see [Access management for S3 Tables](s3-tables-setting-up.md). For example bucket policies for table buckets, see [Resource-based policies for S3 Tables](s3-tables-resource-based-policies.md).

**Note**  
The table bucket policy provides access to the tables stored in the bucket. Table bucket policies don't apply to tables owned by other accounts.

## Adding a table bucket policy
<a name="table-bucket-policy-add"></a>

To add a bucket policy to a table bucket, use the following AWS CLI example. 

### Using the AWS CLI
<a name="table-bucket-policy-add-CLI"></a>

This example shows how to create a table bucket policy by using the AWS CLI. To use the command, replace the `user input placeholders` with your own information.

```
aws s3tables put-table-bucket-policy \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1  \
    --resource-policy your-policy-JSON
```
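
You can pass the policy inline or as a file, for example `--resource-policy file://policy.json`. The following is a sketch of a policy document (with hypothetical account IDs) that lets a second account read tables in the bucket:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadAccessToTables",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Action": [
                "s3tables:GetTable",
                "s3tables:GetTableData"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1/table/*"
        }
    ]
}
```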

### Using the S3 console
<a name="table-bucket-policy-add-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that you want to add a policy to.

1. Choose the **Permissions** tab.

1. Under **Table bucket policy**, choose **Edit**.

1. In the policy editor, enter your policy JSON. 

1. (Optional) Choose **Policy examples** to see sample policies that you can adapt to your needs.

1. After entering your policy, choose **Save changes**.

## Viewing a table bucket policy
<a name="table-bucket-policy-get"></a>

To view the bucket policy that's attached to a table bucket, use the following AWS CLI example. 

### Using the AWS CLI
<a name="table-bucket-policy-get-CLI"></a>

This example shows how to view the policy that's attached to a table bucket by using the AWS CLI. To use the command, replace the `user input placeholders` with your own information.

```
aws s3tables get-table-bucket-policy --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1
```

### Using the S3 console
<a name="get-policy-table-bucket-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that you want to view the policy for.

1. Choose the **Permissions** tab.

## Deleting a table bucket policy
<a name="table-bucket-policy-delete"></a>

To delete a bucket policy that's attached to a table bucket, use the following AWS CLI example. 

### Using the AWS CLI
<a name="table-bucket-policy-delete-CLI"></a>

This example shows how to delete a table bucket policy by using the AWS CLI. To use the command, replace the `user input placeholders` with your own information.

```
aws s3tables delete-table-bucket-policy --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1
```

### Using the S3 console
<a name="table-bucket-policy-delete-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that you want to delete a policy for.

1. Choose the **Permissions** tab.

1. Under **Table bucket policy**, choose **Delete**.

# Working with AWS managed table buckets
<a name="s3-tables-aws-managed-buckets"></a>

AWS managed table buckets are specialized Amazon S3 table buckets designed to store AWS managed tables, such as [S3 Metadata](metadata-tables-overview.md) journal and live inventory tables. Unlike customer-managed table buckets that you create and manage directly, AWS managed table buckets are automatically provisioned by AWS when you configure features that require AWS managed tables. When managed tables are created, they belong to a predefined namespace that's based on the name of the source bucket. This predefined namespace can't be modified. 

Each AWS account has one AWS managed table bucket per Region, named `aws-s3`. This bucket serves as a centralized location for all managed tables associated with your account's resources in that Region.

The following table compares AWS managed table buckets with customer-managed table buckets.


| **Feature** | **AWS managed table buckets** | **Customer-managed table buckets** | 
| --- | --- | --- | 
| Creation | Automatically created by AWS services | You create these manually | 
| Naming | Use a standard naming convention (aws-s3) | You define your own names | 
| Table creation | Only AWS services can create tables | You can create tables | 
| Namespace control | You can't create or delete namespaces (all tables belong to a fixed namespace) | You can create and delete namespaces | 
| Access | Read-only access | Full access | 
| Encryption | You can change the default encryption (SSE-S3) settings only if you encrypted the initial table with a customer managed AWS Key Management Service (AWS KMS) key | You can set bucket-level default encryption and modify it anytime | 
| Maintenance | Managed by AWS services | You can customize automated maintenance at the bucket level | 

## Permissions to work with AWS managed table buckets and to query tables
<a name="aws-managed-buckets-permissions"></a>

To work with AWS managed table buckets, you need permissions to create AWS managed table buckets and tables and to specify encryption settings for AWS managed tables. You also need permissions to query the tables in your AWS managed table buckets.

The following example policy allows you to create an AWS managed table bucket through an S3 Metadata configuration:

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"PermissionsToWorkWithMetadataTables",
         "Effect":"Allow",
         "Action":[
             "s3:CreateBucketMetadataTableConfiguration",
             "s3tables:CreateTableBucket",
             "s3tables:CreateNamespace",
             "s3tables:CreateTable",
             "s3tables:GetTable",
             "s3tables:PutTablePolicy",
             "s3tables:PutTableEncryption",
             "kms:DescribeKey"
         ],
         "Resource":[
            "arn:aws:s3:::bucket/amzn-s3-demo-source-bucket",
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*",
            "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
         ]
       }
    ]
}
```

The following example policy allows you to query tables in AWS managed table buckets:

```
{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Sid":"PermissionsToQueryMetadataTables",
         "Effect":"Allow",
         "Action":[
             "s3tables:GetTable",
             "s3tables:GetTableData",
             "s3tables:GetTableMetadataLocation",
             "kms:Decrypt"
         ],
         "Resource":[
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3",
            "arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3/table/*",
            "arn:aws:kms:us-east-1:111122223333:key/01234567-89ab-cdef-0123-456789abcdef"
         ]
       }
    ]
}
```

## Querying tables in AWS managed table buckets
<a name="querying-tables-in-aws-managed-table-buckets"></a>

You can query AWS managed tables in AWS managed table buckets by using the access methods and query engines that S3 Tables supports. The following are some example queries.

------
#### [ Using standard SQL ]

The following example shows how to query AWS managed tables using standard SQL syntax:

```
SELECT *
FROM "s3tablescatalog/aws-s3"."b_amzn-s3-demo-source-bucket"."inventory"
LIMIT 10;
```

The following example shows how to join AWS managed tables with your own tables:

```
SELECT *
FROM "s3tablescatalog/aws-s3"."b_amzn-s3-demo-source-bucket"."inventory" a
JOIN "s3tablescatalog/amzn-s3-demo-table-bucket"."my_namespace"."my_table" b
ON a.key = b.key
LIMIT 10;
```

------
#### [ Using Spark ]

The following example shows how to query your table with Spark:

```
spark.sql("""
    SELECT *
    FROM ice_catalog.inventory a
    JOIN ice_catalog.my_table b
    ON a.key = b.key
""").show(10, true)
```

The following example shows how to join your AWS managed table with another table:

```
SELECT *
FROM inventory a
JOIN my_table b
ON a.key = b.key
LIMIT 10;
```

------

## Encryption for AWS managed table buckets
<a name="aws-managed-buckets-encryption"></a>

By default, AWS managed table buckets are encrypted with server-side encryption using Amazon S3 managed keys (SSE-S3). After your AWS managed table bucket is created, you can use [PutTableBucketEncryption](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_PutTableBucketEncryption.html) to set the bucket's default encryption setting to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

During creation of your AWS managed tables, you can choose to encrypt them with SSE-KMS. If you choose to use SSE-KMS, you must provide a customer managed KMS key in the same Region as your AWS managed table bucket. You can set the encryption type for your AWS managed tables only during table creation. After an AWS managed table is created, you can't change its encryption setting.

If you want the AWS managed table bucket and the tables stored in it to use the same KMS key, update the bucket's default encryption after it's created to use the same KMS key that you used to encrypt your tables. After you've changed the default encryption settings for your table bucket to use SSE-KMS, those encryption settings are used for any future tables that are created in the bucket.
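
For example, setting the bucket's default encryption to SSE-KMS with the AWS CLI might look like the following sketch (the Region, account ID, and key ARN are placeholders):

```
aws s3tables put-table-bucket-encryption \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/aws-s3 \
    --encryption-configuration '{
                    "sseAlgorithm": "aws:kms",
                    "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab" }'
```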

# Using tags with S3 table buckets
<a name="table-bucket-tagging"></a>

An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 table buckets. You can tag S3 table buckets when you create them or manage tags on existing table buckets. For general information about tags, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

**Note**  
There is no additional charge for using tags on table buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Common ways to use tags with table buckets
<a name="common-ways-to-use-tags-table-bucket"></a>

Use tags on your S3 table buckets for:

**Attribute-based access control (ABAC)** – Scale access permissions and grant access to S3 table buckets based on their tags. For more information, see [Using tags for ABAC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

### ABAC for S3 table buckets
<a name="abac-for-table-buckets"></a>

Amazon S3 table buckets support attribute-based access control (ABAC) using tags. Use tag-based condition keys in your AWS Organizations, AWS Identity and Access Management (IAM), and S3 table bucket policies. ABAC in Amazon S3 supports authorization across multiple AWS accounts. 

In your IAM policies, you can control access to S3 table buckets based on the table bucket's tags by using the `s3tables:TableBucketTag/tag-key` condition key or the [AWS global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-tagkeys): `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys`. 

#### aws:ResourceTag/key-name
<a name="table-bucket-condition-key-resource-tag"></a>

Use this condition key to compare the tag key-value pair that you specify in the policy with the key-value pair attached to the resource. For example, you could require that access to a table bucket is allowed only if the table bucket has the tag key `Department` with the value `Marketing`.

This condition key applies to all table bucket actions that are performed by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the S3 Tables APIs, or the AWS SDKs, except for the `CreateTableBucket` API request.

For an example policy, see [1.1 - table bucket policy to restrict operations on the tables inside of the table bucket using tags](#example-policy-table-bucket-resource-tag). 

For additional example policies and more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources) in the *AWS Identity and Access Management User Guide*.

**Note**  
For actions performed on tables, this condition key acts on the tags applied to the table, not on the tags applied to the table bucket that contains the table. If you want your ABAC policies to act on the tags of the table bucket when performing table actions, use the `s3tables:TableBucketTag/tag-key` condition key instead. 

#### aws:RequestTag/key-name
<a name="table-bucket-condition-key-request-tag"></a>

Use this condition key to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy. For example, you could check whether the request to tag a table bucket includes the tag key `Department` and that it has the value `Accounting`. 

This condition key applies when tag keys are passed in a `TagResource` or `CreateTableBucket` API operation request, or when tagging or creating a table bucket with tags using the Amazon S3 Console, the AWS Command Line Interface (CLI), or the AWS SDKs. 

For an example policy, see [1.2 - IAM policy to create or modify table buckets with specific tags](#example-policy-table-bucket-request-tag).

For additional example policies and more information, see [Controlling access during AWS requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-requests) in the *AWS Identity and Access Management User Guide*.

#### aws:TagKeys
<a name="table-bucket-condition-key-tag-keys"></a>

Use this condition key to compare the tag keys in a request with the keys that you specify in the policy to define what tag keys are allowed for access. For example, to allow tagging during the `CreateTableBucket` action, you must create a policy that allows both the `s3tables:TagResource` and `s3tables:CreateTableBucket` actions. You can then use the `aws:TagKeys` condition key to enforce that only specific tags are used in the `CreateTableBucket` request. 

This condition key applies when tag keys are passed in a `TagResource`, `UntagResource`, or `CreateTableBucket` API operation, or when tagging, untagging, or creating a table bucket with tags using the Amazon S3 Console, the AWS Command Line Interface (CLI), or the AWS SDKs. 

For an example policy, see [1.3 - IAM policy to control the modification of tags on existing resources maintaining tagging governance](#example-policy-table-bucket-tag-keys).

For additional example policies and more information, see [Controlling access based on tag keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys) in the *AWS Identity and Access Management User Guide*.

#### s3tables:TableBucketTag/tag-key
<a name="table-bucket-condition-key"></a>

Use this condition key to grant permissions to specific data in table buckets by using tags. For most S3 Tables actions, this condition key acts on the tags assigned to the table bucket. Even when you create a table with tags, this condition key acts on the tags applied to the table bucket that contains that table. The exception is the following: 
+ When you create a table bucket with tags, this condition key acts on the tags in the request.

For an example policy, see [1.4 - Using the s3tables:TableBucketTag condition key](#example-policy-table-bucket-tag). 

#### Example ABAC policies for table buckets
<a name="example-table-buckets-abac-policies"></a>

See the following example ABAC policies for Amazon S3 table buckets.

**Note**  
If you are using Lake Formation to manage access to your Amazon S3 Tables and you have an IAM or S3 Tables resource-based policy that restricts IAM users and IAM roles based on principal tags, you must attach the same principal tags to the IAM role that Lake Formation uses to access your Amazon S3 data (for example, LakeFormationDataAccessRole) and grant this role the necessary permissions. This is required for your tag-based access control policy to work correctly with your S3 Tables analytics integration.

##### 1.1 - table bucket policy to restrict operations on the tables inside of the table bucket using tags
<a name="example-policy-table-bucket-resource-tag"></a>

In this table bucket policy, the specified IAM principals (users and roles) can perform the `GetTable` action on any table in the table bucket only if the value of the table's `project` tag matches the value of the principal's `project` tag.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetTable",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111122223333"
      },
      "Action": "s3tables:GetTable",
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```

##### 1.2 - IAM policy to create or modify table buckets with specific tags
<a name="example-policy-table-bucket-request-tag"></a>

**Note**  
If you are using AWS Lake Formation to manage access to your Amazon S3 tables, and you are using ABAC with Amazon S3 Tables, make sure that you also give the IAM role that Lake Formation assumes the required access. For more information on setting up the IAM role for Lake Formation, see [Prerequisites for integrating Amazon S3 tables catalog with the Data Catalog and Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/s3tables-catalog-prerequisites.html) in the *AWS Lake Formation Developer Guide*. 

In this IAM policy, users or roles with this policy can create S3 table buckets only if they tag the table bucket with the tag key `project` and the tag value `Trinity` in the creation request. They can also add or modify tags on existing S3 table buckets as long as the `TagResource` request includes the tag key-value pair `project:Trinity`. This policy does not grant read, write, or delete permissions on the table buckets or the tables in them. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateTableBucketWithTags",
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTableBucket",
        "s3tables:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/project": [
            "Trinity"
          ]
        }
      }
    }
  ]
}
```

##### 1.3 - IAM policy to control the modification of tags on existing resources maintaining tagging governance
<a name="example-policy-table-bucket-tag-keys"></a>

In this IAM policy, IAM principals (users or roles) can modify tags on a table bucket only if the value of the table bucket's `project` tag matches the value of the principal's `project` tag. Only the four tags `project`, `environment`, `owner`, and `cost-center` specified in the `aws:TagKeys` condition keys are permitted for these table buckets. This helps enforce tag governance, prevents unauthorized tag modifications, and keeps the tagging schema consistent across your table buckets.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceTaggingRulesOnModification",
      "Effect": "Allow",
      "Action": [
        "s3tables:TagResource",
        "s3tables:UntagResource"
      ],
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "project",
            "environment",
            "owner",
            "cost-center"
          ]
        }
      }
    }
  ]
}
```

##### 1.4 - Using the s3tables:TableBucketTag condition key
<a name="example-policy-table-bucket-tag"></a>

In this IAM policy, the condition statement allows access to the table bucket's data only if the table bucket has the tag key `Environment` and tag value `Production`. The `s3tables:TableBucketTag/<tag-key>` differs from the `aws:ResourceTag/<tag-key>` condition key because, in addition to controlling access to table buckets depending on their tags, it allows you to control access to tables based on the tags on their parent table bucket.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificTables",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3tables:TableBucketTag/Environment": "Production"
        }
      }
    }
  ]
}
```

## Managing tags for table buckets
<a name="table-bucket-working-with-tags"></a>

You can add or manage tags for S3 table buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the S3 Tables APIs: [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_ListTagsForResource.html). For more information, see the following topics:

**Topics**
+ [Common ways to use tags with table buckets](#common-ways-to-use-tags-table-bucket)
+ [Managing tags for table buckets](#table-bucket-working-with-tags)
+ [Creating table buckets with tags](table-bucket-create-tag.md)
+ [Adding a tag to a table bucket](table-bucket-tag-add.md)
+ [Viewing table bucket tags](table-bucket-tag-view.md)
+ [Deleting a tag from a table bucket](table-bucket-tag-delete.md)

# Creating table buckets with tags
<a name="table-bucket-create-tag"></a>

You can tag Amazon S3 table buckets when you create them. There is no additional charge for using tags on table buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging table buckets, see [Using tags with S3 table buckets](table-bucket-tagging.md).

## Permissions
<a name="table-bucket-create-tag-permissions"></a>

To create a table bucket with tags, you must have the following permissions:
+ `s3tables:CreateTableBucket`
+ `s3tables:TagResource`
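
For example, an identity-based policy that grants both permissions might look like the following sketch (scoped to all table buckets here, as in the ABAC examples):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateTableBucketWithTags",
            "Effect": "Allow",
            "Action": [
                "s3tables:CreateTableBucket",
                "s3tables:TagResource"
            ],
            "Resource": "*"
        }
    ]
}
```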

## Troubleshooting errors
<a name="table-bucket-create-tag-troubleshooting"></a>

If you encounter an error when attempting to create a table bucket with tags, you can do the following: 
+ Verify that you have the required [Permissions](#table-bucket-create-tag-permissions) to create the table bucket and apply a tag to it.
+ Check your IAM user policy for any attribute-based access control (ABAC) conditions. Your policy may require you to tag your table buckets with only specific tag keys and values. For more information about ABAC and example table bucket ABAC policies, see [ABAC for S3 table buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/table-bucket-tagging.html#abac-for-table-buckets).

## Steps
<a name="table-bucket-create-tag-steps"></a>

You can create a table bucket with tags applied by using the Amazon S3 Console, the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and the AWS SDKs.

## Using the S3 console
<a name="table-bucket-create-tag-console"></a>

To create a table bucket with tags using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. To create a new table bucket, choose **Create table bucket**.

1. Enter a name for the table bucket. For more information, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md). 

1. On the **Create table bucket** page, scroll to the **Tags** section.

1. Choose **Add new Tag** to open the Tags editor and enter a tag key-value pair. The tag key is required, but the value is optional. 

1. To add another tag, choose **Add new Tag** again. You can enter up to 50 tag key-value pairs.

1. Specify the remaining options for your new table bucket. For more information, see [Creating a table bucket](s3-tables-buckets-create.md).

1. Choose **Create table bucket**.

## Using the REST API
<a name="table-bucket-create-tag-api"></a>

For information about the Amazon S3 Tables REST API support for creating a table bucket with tags, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [CreateTableBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_CreateTableBucket.html)

## Using the AWS CLI
<a name="table-bucket-create-tag-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following example shows how to create a table bucket with tags by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create a table bucket, you must provide configuration details. For more information, see [Creating a table bucket](s3-tables-buckets-create.md). You must also give the table bucket a name that follows the table bucket naming rules. For more information, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md). 

**Request:**

```
aws --region us-west-2 \
s3tables create-table-bucket \
--tags '{"Department":"Engineering"}' \
--name amzn-s3-demo-table-bucket
```

# Adding a tag to a table bucket
<a name="table-bucket-tag-add"></a>

You can add tags to Amazon S3 table buckets and modify these tags. For more information about tagging table buckets, see [Using tags with S3 table buckets](table-bucket-tagging.md).

## Permissions
<a name="table-bucket-tag-add-permissions"></a>

To add a tag to a table bucket, you must have the following permission:
+ `s3tables:TagResource`

## Troubleshooting errors
<a name="table-bucket-tag-add-troubleshooting"></a>

If you encounter an error when attempting to add a tag to a table bucket, you can do the following: 
+ Verify that you have the required [Permissions](#table-bucket-tag-add-permissions) to add a tag to a table bucket.
+ If you attempted to add a tag key that starts with the AWS reserved prefix `aws:`, change the tag key and try again. 
+ The tag key is required. Also, make sure that the tag key and the tag value do not exceed the maximum character length and do not contain restricted characters. For more information, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

## Steps
<a name="table-bucket-tag-add-steps"></a>

You can add tags to table buckets by using the Amazon S3 Console, the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and the AWS SDKs.

## Using the S3 console
<a name="table-bucket-tag-add-console"></a>

To add tags to a table bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and choose **Add new Tag**. 

1. On the **Add Tags** page, enter your tag key-value pairs. You can enter up to 50 tags. If you add a new tag with the same key name as an existing tag, the value of the new tag overrides the value of the existing tag. You can also edit the values of existing tags on this page.

1. After you have added the tags, choose **Save changes**. 

## Using the REST API
<a name="table-bucket-tag-add-api"></a>

For information about the Amazon S3 REST API support for adding tags to a table bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_TagResource.html)

## Using the AWS CLI
<a name="table-bucket-tag-add-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to add tags to a table bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables tag-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket \
--tags '{"Department":"Engineering"}'
```

# Viewing table bucket tags
<a name="table-bucket-tag-view"></a>

You can view or list tags applied to Amazon S3 table buckets. For more information about tagging table buckets, see [Using tags with S3 table buckets](table-bucket-tagging.md).

## Permissions
<a name="table-bucket-tag-view-permissions"></a>

To view tags applied to a table bucket, you must have the following permission: 
+ `s3tables:ListTagsForResource`

## Troubleshooting errors
<a name="table-bucket-tag-view-troubleshooting"></a>

If you encounter an error when attempting to list or view the tags of a table bucket, you can do the following: 
+ Verify that you have the required [Permissions](#table-bucket-tag-view-permissions) to view the tags of the table bucket.

## Steps
<a name="table-bucket-tag-view-steps"></a>

You can view tags applied to table buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and the AWS SDKs.

## Using the S3 console
<a name="table-bucket-tag-view-console"></a>

To view tags applied to a table bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section to view all of the tags applied to the table bucket. 

1. The **Tags** section shows the **User-defined tags** by default. You can select the **AWS-generated tags** tab to view tags applied to your table bucket by AWS services.

## Using the REST API
<a name="table-bucket-tag-view-api"></a>

For information about the Amazon S3 REST API support for viewing the tags applied to a table bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [ListTagsforResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_ListTagsForResource.html)

## Using the AWS CLI
<a name="table-bucket-tag-view-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to view tags applied to a table bucket. To use this command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables list-tags-for-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket
```

**Response - tags present:**

```
{
    "tags": {
        "project": "Trinity",
        "code": "123456"
    }
}
```

**Response - no tags present:**

```
{
    "tags": {}
}
```
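The `tags` object in the response is a JSON map that can be consumed directly. The following Python sketch parses a saved response like the sample output above; the lookup keys are taken from that sample and are otherwise illustrative.

```python
import json

# Parse a saved list-tags-for-resource response and look up one tag.
# The response body matches the "tags present" sample above.
response = json.loads('{"tags": {"project": "Trinity", "code": "123456"}}')
tags = response.get("tags", {})  # empty map when no tags are present

print(tags.get("project"))  # Trinity
print(sorted(tags))         # ['code', 'project']
```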

# Deleting a tag from a table bucket
<a name="table-bucket-tag-delete"></a>

You can remove tags from Amazon S3 table buckets. For more information about tagging table buckets, see [Using tags with S3 table buckets](table-bucket-tagging.md).

**Note**  
If you delete a tag and later learn that it was being used to track costs or for access control, you can add the tag back to the table bucket. 

## Permissions
<a name="table-bucket-tag-delete-permissions"></a>

To delete a tag from a table bucket, you must have the following permission: 
+ `s3tables:UntagResource`

## Troubleshooting errors
<a name="table-bucket-tag-delete-troubleshooting"></a>

If you encounter an error when attempting to delete a tag from a table bucket, you can do the following: 
+ Verify that you have the required [Permissions](#table-bucket-tag-delete-permissions) to delete a tag from a table bucket.

## Steps
<a name="table-bucket-tag-delete-steps"></a>

You can delete tags from table buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and the AWS SDKs.

## Using the S3 console
<a name="table-bucket-tag-delete-console"></a>

To delete tags from a table bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and select the checkbox next to the tag or tags that you would like to delete. 

1. Choose **Delete**. 

1. The **Delete user-defined tags** pop-up appears and asks you to confirm the deletion of the tag or tags you selected. 

1. Choose **Delete** to confirm.

## Using the REST API
<a name="table-bucket-tag-delete-api"></a>

For information about the Amazon S3 REST API support for deleting tags from a table bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_UntagResource.html)

## Using the AWS CLI
<a name="table-bucket-tag-delete-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to delete tags from a table bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables untag-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket \
--tag-keys '["Department"]'
```

# S3 Tables maintenance
<a name="s3-tables-maintenance-overview"></a>

Amazon S3 automatically performs maintenance to enhance the performance of your tables in S3 table buckets. Maintenance is performed at the table bucket and individual table level and includes the following:

**Table bucket-level maintenance:**  
+ **Unreferenced file removal** – Cleans up orphaned files to optimize storage usage and costs.

**Table-level maintenance:**  
+ **File compaction** – Consolidates small files to improve query performance and reduce storage costs.
+ **Snapshot management** – Controls table version history and prevents excessive metadata growth.

These options are enabled by default. You can edit or disable these operations through maintenance configuration files. 

In addition to these options, you can also enable and configure record expiration settings for tables. With this option, Amazon S3 automatically removes records from a table when the records expire.

**Topics**
+ [S3 Tables maintenance job status](s3-tables-maintenance-status.md)
+ [Maintenance for table buckets](s3-table-buckets-maintenance.md)
+ [Maintenance for tables](s3-tables-maintenance.md)
+ [Record expiration for tables](s3-tables-record-expiration.md)
+ [Considerations and limitations for maintenance jobs](s3-tables-considerations.md)

# S3 Tables maintenance job status
<a name="s3-tables-maintenance-status"></a>

S3 Tables maintenance jobs run periodically for your S3 tables or table buckets. You can query the status of these jobs with the `GetTableMaintenanceJobStatus` API.

**To get the status of your maintenance jobs by using the AWS CLI**  
The following example will get the statuses of maintenance jobs using the `GetTableMaintenanceJobStatus` API.  

```
aws s3tables get-table-maintenance-job-status \
   --table-bucket-arn="arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1" \
   --namespace="mynamespace" \
   --name="testtable"
```
For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/get-table-maintenance-job-status.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/get-table-maintenance-job-status.html) in the *AWS CLI Command Reference*.

S3 Tables maintenance jobs can transition between four possible statuses:
+ `Successful`
+ `Failed`
+ `Disabled`
+ `Not_Yet_Run`

Jobs with a failed status will include a failure message. The following list describes possible failure messages. 
+ Encountered Iceberg validation exception when trying to read table. Ensure that your table is readable, adheres to the Iceberg specification, and contains only S3 paths that begin with your S3 Table alias.
+ Iceberg Snapshot management does not currently support user defined tags or references.
+ Iceberg table maintenance configuration is incompatible with 'history.expire.max-snapshot-age-ms' and 'history.expire.min-snapshots-to-keep' table properties.
+ Iceberg snapshot management and unreferenced file removal is not supported when the 'gc.enabled' table property is false. Ensure that this property is unset or explicitly set to true.
+ Failed to commit because of out of date metadata. Maintenance will be retried at the next available opportunity.
+ Insufficient access to perform table maintenance. Ensure that the key used to encrypt the table is active, exists, and has a resource policy granting access to the S3 service principal `maintenance.s3tables.amazonaws.com`.
**Note**  
 For more information on AWS KMS permissions for S3 Tables, see [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md). 
+ Internal error

# Maintenance for table buckets
<a name="s3-table-buckets-maintenance"></a>

Amazon S3 offers maintenance operations to enhance the management and performance of your table buckets. The following option is enabled by default for all table buckets. You can edit or disable this option by specifying a maintenance configuration file for your table bucket.

Editing this configuration requires the `s3tables:PutTableBucketMaintenanceConfiguration` permission.

**Topics**
+ [Unreferenced file removal](#s3-table-bucket-maintenance-unreferenced)
+ [Considerations and limitations](#s3-tables-buckets-considerations-see-more)

## Unreferenced file removal
<a name="s3-table-bucket-maintenance-unreferenced"></a>

Unreferenced file removal identifies and deletes all objects that are not referenced by any table snapshots. As part of your unreferenced file removal policy, you can configure two properties: `unreferencedDays` (3 days by default) and `nonCurrentDays` (10 days by default).

For any object that is not referenced by your table and that is older than the `unreferencedDays` property, S3 marks the object as noncurrent. S3 deletes noncurrent objects after the number of days specified by the `nonCurrentDays` property.
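These two properties combine into a simple timeline: an object stops being referenced, is marked noncurrent after `unreferencedDays`, and is deleted `nonCurrentDays` later. The following Python sketch works through that arithmetic; the dates are purely illustrative.

```python
from datetime import date, timedelta

# Defaults from the unreferenced file removal policy described above.
unreferenced_days = 3   # days before an unreferenced object is marked noncurrent
non_current_days = 10   # days a noncurrent object is kept before deletion

last_referenced = date(2025, 1, 1)                      # illustrative date
marked_noncurrent = last_referenced + timedelta(days=unreferenced_days)
deleted_on = marked_noncurrent + timedelta(days=non_current_days)

print(marked_noncurrent)  # 2025-01-04
print(deleted_on)         # 2025-01-14
```

With the default settings, an object is therefore removed about 13 days after it was last referenced.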

**Note**  
Deletions of noncurrent objects are permanent, and there is no way to recover these objects.

To view or recover objects that have been marked as noncurrent, you must contact AWS Support. For information about contacting AWS Support, see [Contact AWS](https://aws.amazon.com/contact-us/) or the [AWS Support Documentation](https://aws.amazon.com/documentation/aws-support/).

Unreferenced file removal determines the objects to delete from your table with reference only to that table. Any references made to these objects outside of the table will not prevent unreferenced file removal from deleting them.

If you disable unreferenced file removal, any in-progress jobs will not be affected. The new configuration will take effect for the next job after the configuration change. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

You can only configure unreferenced file removal at the table bucket level. This configuration will apply to every table in your bucket.

**To configure unreferenced file removal by using the AWS CLI**  
The following example will set the `unreferencedDays` to 4 days and the `nonCurrentDays` to 10 days using the `PutTableBucketMaintenanceConfiguration` API.  

```
aws s3tables put-table-bucket-maintenance-configuration \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
   --type icebergUnreferencedFileRemoval \
   --value '{"status":"enabled","settings":{"icebergUnreferencedFileRemoval":{"unreferencedDays":4,"nonCurrentDays":10}}}'
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-bucket-maintenance-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-bucket-maintenance-configuration.html) in the *AWS CLI Command Reference*.
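Because the `--value` argument is a JSON document, building it programmatically can help avoid shell-quoting mistakes. The following Python sketch assembles the same configuration shown in the example above; passing the serialized string to the CLI or to an SDK call is left as an exercise.

```python
import json

# Build the maintenance configuration value from the example above.
config = {
    "status": "enabled",
    "settings": {
        "icebergUnreferencedFileRemoval": {
            "unreferencedDays": 4,
            "nonCurrentDays": 10,
        }
    },
}
value = json.dumps(config)  # string suitable for the --value argument
print(value)
```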

## Considerations and limitations
<a name="s3-tables-buckets-considerations-see-more"></a>

To learn more about additional considerations and limitations for unreferenced file removal, see [Considerations and limitations for maintenance jobs](s3-tables-considerations.md).

# Maintenance for tables
<a name="s3-tables-maintenance"></a>

S3 Tables offers maintenance operations to enhance the management and performance of your individual tables. The following options are enabled by default for all tables in table buckets. You can edit or disable these by specifying maintenance configuration files for your S3 table.

Editing this configuration requires the `s3tables:GetTableMaintenanceConfiguration` and `s3tables:PutTableMaintenanceConfiguration` permissions.

**Note**  
You can track S3 Tables automated maintenance operations on your tables through CloudTrail logs. For more information, see [CloudTrail management events for S3 Tables maintenance](s3-tables-logging.md#s3-tables-maintenance-events).

**Topics**
+ [Compaction](#s3-tables-maintenance-compaction)
+ [Snapshot management](#s3-tables-maintenance-snapshot)
+ [Considerations and limitations](#s3-tables-considerations-see-more)

## Compaction
<a name="s3-tables-maintenance-compaction"></a>

Compaction is configured at the table level and combines multiple smaller objects into fewer, larger objects to improve Apache Iceberg query performance. When combining objects, compaction also applies the effects of row-level deletes in your table.

Compaction is enabled by default for all tables, with a default target file size of 512 MB. You can specify a custom target file size between 64 MB and 512 MB. The compacted files are written as the most recent snapshot of your table.

**Note**  
Compaction is supported on Apache Parquet, Avro, and ORC file types.

### Compaction strategies
<a name="s3-tables-maintenance-compaction-strategies"></a>

You can choose from multiple compaction strategies, which can further improve query performance depending on your query patterns and table sort order.

S3 Tables supports these compaction strategies for tables:
+ **Auto (Default)**
  + Amazon S3 selects the best compaction strategy based on your table sort order. This is the default compaction strategy for all tables.
  + For tables with a defined sort order in their metadata, `auto` will automatically apply `sort` compaction.
  + For tables without a sort order, `auto` will default to using `binpack` compaction.
+ **Binpack**
  + Combines small files into larger files, typically targeting sizes over 100 MB, while applying any pending deletes. This is the default compaction strategy for unsorted tables.
+ **Sort**
  + Organizes data based on specified columns which are sorted automatically by hierarchy during compaction, improving query performance for filtered operations. This strategy is recommended when your queries frequently filter on specific columns. When you use this strategy, S3 Tables automatically applies hierarchical sorting on columns when a `sort_order` is defined in table properties.
+ **Z-order**
  + Optimizes data organization by blending multiple attributes into a single scalar value that can be used for sorting, allowing efficient querying across multiple dimensions. This strategy is recommended when you need to query data across multiple dimensions simultaneously. This strategy requires you to define a sort order in your Iceberg table properties using the `sort_order` table property.

Compaction will incur additional costs. The `z-order` and `sort` compaction strategies may incur a higher cost than `binpack`. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).
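The `auto` behavior described above reduces to a single conditional on the table's sort order. The following Python sketch mirrors that documented rule; the function name and the `sort_order` lookup are illustrative and are not part of the S3 Tables API.

```python
# Illustrative sketch of the documented `auto` strategy selection:
# tables with a defined sort order get `sort` compaction, all other
# tables fall back to `binpack`.
def select_compaction_strategy(table_properties: dict) -> str:
    """Return the compaction strategy `auto` would choose."""
    if table_properties.get("sort_order"):
        return "sort"
    return "binpack"

print(select_compaction_strategy({"sort_order": "ts ASC"}))  # sort
print(select_compaction_strategy({}))                        # binpack
```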

### Compaction examples
<a name="tables-compaction-examples"></a>

The following examples show configurations for table compaction.

**To configure the compaction target file size by using the AWS CLI**  
The minimum target compaction file size is 64 MB; the maximum is 512 MB.  
The following example will change the target file size to 256 MB using the `PutTableMaintenanceConfiguration` API.  

```
aws s3tables put-table-maintenance-configuration \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
   --type icebergCompaction \
   --namespace mynamespace \
   --name testtable \
   --value='{"status":"enabled","settings":{"icebergCompaction":{"targetFileSizeMB":256}}}'
```
For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html) in the *AWS CLI Command Reference*.

**To configure the compaction strategy by using the AWS CLI**  
The following example will change the compaction strategy to `sort` using the `PutTableMaintenanceConfiguration` API. When setting compaction, you can choose from the following compaction strategies: `auto`, `binpack`, `sort`, or `z-order`.  
To set the compaction strategy to `sort` or `z-order` you need the following prerequisites:  
+ A sort order defined in your Iceberg table properties.
+ The `s3tables:GetTableData` permission.

```
aws s3tables put-table-maintenance-configuration \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
   --type icebergCompaction \
   --namespace mynamespace \
   --name testtable \
   --value='{"status":"enabled","settings":{"icebergCompaction":{"strategy":"sort"}}}'
```
For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html) in the *AWS CLI Command Reference*.

**To disable compaction by using the AWS CLI**  
The following example will disable compaction using the `PutTableMaintenanceConfiguration` API.  

```
aws s3tables put-table-maintenance-configuration \
   --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
   --type icebergCompaction \
   --namespace mynamespace \
   --name testtable \
   --value='{"status":"disabled","settings":{"icebergCompaction":{"targetFileSizeMB":256}}}'
```
For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html) in the *AWS CLI Command Reference*.

## Snapshot management
<a name="s3-tables-maintenance-snapshot"></a>

Snapshot management determines the number of active snapshots for your table, based on the minimum number of snapshots to keep (`minSnapshotsToKeep`, 1 by default) and the maximum snapshot age (`maxSnapshotAgeHours`, 120 hours by default). Snapshot management expires and removes table snapshots based on these settings.
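The two settings interact: the newest `minSnapshotsToKeep` snapshots are always retained, and older snapshots become eligible for expiration once they exceed `maxSnapshotAgeHours`. The following Python sketch models that retention rule; it is an illustration of the documented behavior, not the service implementation.

```python
# Illustrative model of snapshot retention: a snapshot expires when it
# is older than max_age_hours AND is not among the newest min_keep
# snapshots. Ages are hours since the snapshot was created.
def snapshots_to_expire(ages_hours, min_keep=1, max_age_hours=120):
    """Return the ages of snapshots eligible for expiration."""
    newest_first = sorted(ages_hours)  # smallest age = newest snapshot
    return [age for i, age in enumerate(newest_first)
            if i >= min_keep and age > max_age_hours]

print(snapshots_to_expire([5, 100, 150, 300]))  # [150, 300]
print(snapshots_to_expire([500]))               # [] (newest is protected)
```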

When a snapshot expires, Amazon S3 marks any objects referenced only by that snapshot as noncurrent. These noncurrent objects are deleted after the number of days specified by the `nonCurrentDays` property in your unreferenced file removal policy.

**Note**  
Deletions of noncurrent objects are permanent, and there is no way to recover these objects.

To view or recover objects that have been marked as noncurrent, you must contact AWS Support. For information about contacting AWS Support, see [Contact AWS](https://aws.amazon.com/contact-us/) or the [AWS Support Documentation](https://aws.amazon.com/documentation/aws-support/).

Snapshot management determines the objects to delete from your table with reference only to that table. References to these objects from outside the table will not prevent snapshot management from deleting them.

**Note**  
Snapshot management does not support retention values that you configure as Apache Iceberg table properties in the `metadata.json` file or through an `ALTER TABLE SET TBLPROPERTIES` SQL command. If any of the following conditions exist, snapshot management will fail for the entire table and Amazon S3 will not expire or remove any snapshots:  
**User-defined tags or branches** – If any user-defined tag or branch exists on the table, snapshot management will fail for the entire table. This applies even if the tag or branch has a short retention period. To restore automated snapshot expiration, remove all user-defined tags and branches from the table.
**Iceberg snapshot retention table properties** – If the `history.expire.max-snapshot-age-ms` or `history.expire.min-snapshots-to-keep` property is set as an Apache Iceberg table property, snapshot management will fail for the entire table regardless of the configured value. To restore automated snapshot expiration, remove these properties:  

  ```
  ALTER TABLE mydb.mytable UNSET TBLPROPERTIES ('history.expire.max-snapshot-age-ms');
  ALTER TABLE mydb.mytable UNSET TBLPROPERTIES ('history.expire.min-snapshots-to-keep');
  ```
To diagnose snapshot management failures, use the `GetTableMaintenanceJobStatus` API or run the following AWS CLI command. If snapshot management has failed, the response includes a `FAILED` status with a message that describes the cause of the failure.  

```
aws s3tables get-table-maintenance-job-status \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
    --namespace my_namespace \
    --name my_table
```

You can only configure snapshot management at the table level. For pricing information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

### Snapshot management examples
<a name="tables-snapshot-examples"></a>

The following examples show configurations for table snapshot management.

**To configure the snapshot management by using the AWS CLI**  
The following example will set `minSnapshotsToKeep` to 10 and `maxSnapshotAgeHours` to 2500 hours using the `PutTableMaintenanceConfiguration` API.  

```
aws s3tables put-table-maintenance-configuration \
--table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
--namespace my_namespace \
--name my_table \
--type icebergSnapshotManagement \
--value '{"status":"enabled","settings":{"icebergSnapshotManagement":{"minSnapshotsToKeep":10,"maxSnapshotAgeHours":2500}}}'
```

**To disable snapshot management by using the AWS CLI**  
The following example will disable snapshot management using the `PutTableMaintenanceConfiguration` API.  

```
aws s3tables put-table-maintenance-configuration \
--table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
--namespace my_namespace \
--name my_table \
--type icebergSnapshotManagement \
--value '{"status":"disabled","settings":{"icebergSnapshotManagement":{"minSnapshotsToKeep":1,"maxSnapshotAgeHours":120}}}'
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3tables/put-table-maintenance-configuration.html) in the *AWS CLI Command Reference*.

## Considerations and limitations
<a name="s3-tables-considerations-see-more"></a>

To learn more about additional considerations and limitations for compaction and snapshot management, see [Considerations and limitations for maintenance jobs](s3-tables-considerations.md).

**Note**  
S3 Tables applies the default Parquet row-group size of 128 MB.

# Record expiration for tables
<a name="s3-tables-record-expiration"></a>

By default, records in your S3 tables don't expire. To help minimize storage costs for your tables, you can enable and configure record expiration for the tables. With this option, Amazon S3 automatically removes records from a table when the records expire.

If you enable record expiration for a table, you specify the number of days to retain records in the table before the records expire. This can be any number of days ranging from 1 day through 2,147,483,647 days. For example, to retain table records for one year, specify `365` days. The records then persist for 365 days. After 365 days, the records expire and Amazon S3 automatically removes them.
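The retention arithmetic is straightforward: a record expires once its timestamp is more than the configured number of days in the past. The following Python sketch shows the cutoff calculation; the dates are purely illustrative.

```python
from datetime import datetime, timedelta

retention_days = 365                    # configured record expiration setting
now = datetime(2026, 1, 10)             # illustrative evaluation time
cutoff = now - timedelta(days=retention_days)  # records older than this expire

record_time = datetime(2025, 1, 5)      # illustrative record timestamp
print(cutoff)                # 2025-01-10 00:00:00
print(record_time < cutoff)  # True: older than 365 days, so it expires
```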

You can enable and configure record expiration for AWS managed tables that store specific data sets from certain AWS services, currently Amazon S3 Storage Lens and Amazon SageMaker Catalog. Record expiration options aren't currently available for other AWS managed tables. The exception is Amazon S3 metadata journal tables. Journal tables use distinct record expiration settings that you specify at the service level. For information about configuring record expiration for this type of table, see [Expiring journal table records](metadata-tables-expire-journal-table-records.md). Note that record expiration options aren't available for S3 tables that you create.

After you enable record expiration for a table, you can disable it at any time. Amazon S3 then stops expiring and removing records from the table.

**Topics**
+ [How it works](#s3-tables-record-expiration-how-it-works)
+ [Configuring record expiration](#s3-tables-record-expiration-configure)
+ [Monitoring record expiration](#s3-tables-record-expiration-monitor)
+ [Considerations](#s3-tables-expiration-considerations)

## How record expiration works
<a name="s3-tables-record-expiration-how-it-works"></a>

Record expiration automatically removes records from an S3 table when the records are older than the number of days that you specify in the record expiration settings for the table. To determine when records expire, Amazon S3 uses specific timestamps in the records. The timestamp column is derived directly from the table schema, so you don't need to specify which column to use. Because these tables are managed by AWS, Amazon S3 automatically chooses the appropriate column when you enable record expiration for a table.

You can enable and configure record expiration settings for AWS managed tables that store specific Amazon S3 Storage Lens metrics or specific Amazon SageMaker Catalog metadata. Record expiration options are available for the following AWS managed tables for those services:
+ S3 Storage Lens – `bucket_property_metrics`, `default_activity_metrics`, `default_storage_metrics`, `expanded_prefixes_activity_metrics`, and `expanded_prefixes_storage_metrics`. To determine when records in these tables expire, Amazon S3 uses the `report_time` field in the records.
+ Amazon SageMaker Catalog – `ASSET`. To determine when records in this table expire, Amazon S3 uses the `snapshot_time` field in the records.

After you enable record expiration for a table, Amazon S3 starts running record expiration jobs that perform the following operations for the table:

1. Identify records that are older than the specified expiration setting.

1. Create a new snapshot that excludes references to the expired records.

Removal is also based on the snapshot expiration and unreferenced file removal settings in the maintenance configuration for the table. To learn more about these settings, see [Maintenance for tables](s3-tables-maintenance.md).

**Warning**  
Amazon S3 expires and removes records within 24 to 48 hours after the records become eligible for expiration. Table records are removed from the latest snapshot. Data and storage for the records are removed through table maintenance operations. Table records can't be recovered after they expire.

## Configuring record expiration for a table
<a name="s3-tables-record-expiration-configure"></a>

You can enable, configure, and otherwise manage the record expiration settings for an S3 table by using the Amazon S3 console, Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs.

Before you try to perform these tasks for a table, make sure that you have the following AWS Identity and Access Management (IAM) permissions:
+ `s3tables:GetTableRecordExpirationConfiguration` – This action allows you to access current record expiration settings for tables.
+ `s3tables:PutTableRecordExpirationConfiguration` – This action allows you to enable, configure, and disable record expiration settings for tables.
+ `s3tables:GetTableRecordExpirationJobStatus` – This action allows you to monitor the status of record expiration operations (jobs) for tables and access metrics for the operations.

The following sections explain how to enable, configure, and disable record expiration settings for a table by using the Amazon S3 console and the AWS CLI. To perform these tasks with the Amazon S3 REST API or an AWS SDK, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_PutTableRecordExpirationConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_PutTableRecordExpirationConfiguration.html) operation. For more information, see [Developing with Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/API/developing-s3.html) in the *Amazon Simple Storage Service API Reference*.

### Using the S3 console
<a name="configure-table-record-expiration-console"></a>

To enable and configure record expiration settings for an S3 table by using the console, follow these steps.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that stores the table.

1. On the **Tables** tab, choose the table.

1. On the **Maintenance** tab, in the **Record expiration** section, choose **Edit**.

1. Under **Record expiration**, choose **Enable**.

1. For **Days after which records expire**, enter the number of days to retain records in the table. This can be any whole number ranging from 1 through 2,147,483,647. For example, to retain records for one year, enter **365**.
**Warning**  
As you determine the appropriate retention period for records in the table, note that records can't be recovered after they expire.

1. Choose **Save changes**.

To subsequently change the retention period, repeat the preceding steps.

To subsequently disable record expiration, repeat steps 1 through 5. Then, for step 6, choose **Disable**. When you finish, choose **Save changes**.

### Using the AWS CLI
<a name="configure-table-record-expiration-CLI"></a>

To configure and manage record expiration settings for an S3 table by using the AWS CLI, run the [https://docs.aws.amazon.com/cli/latest/reference/s3tables/put-table-record-expiration-configuration.html](https://docs.aws.amazon.com/cli/latest/reference/s3tables/put-table-record-expiration-configuration.html) command.

You can start by creating a JSON file that contains the record expiration settings to apply to the table. The following example shows the contents of a JSON file that enables record expiration for a table. It also specifies a retention period of 30 days for records in the table. In other words, it specifies that table records should expire after 30 days.

```
{
    "status": "enabled",
    "settings": {
        "days": 30
    }
}
```

To use the preceding example, replace the `user input placeholders` with your own information.

**Warning**  
As you determine the appropriate retention period for records in the table, note that records can't be recovered after they expire.

To disable record expiration for a table, specify `disabled` for the `status` field and omit the `settings` object from the file. For example:

```
{
    "status": "disabled"
}
```
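
If you script this configuration, a small helper can generate either variant of the JSON file. This is an illustrative sketch; the field names mirror the two examples above, and the file name matches the one used in the CLI example that follows:

```python
import json

def build_record_expiration_config(days=None):
    """Return the JSON text for a record expiration configuration file.

    Pass a retention period in days to enable expiration, or None to
    disable it (which omits the settings object entirely).
    """
    if days is None:
        config = {"status": "disabled"}
    else:
        if not 1 <= days <= 2_147_483_647:
            raise ValueError("days must be a whole number from 1 through 2,147,483,647")
        config = {"status": "enabled", "settings": {"days": days}}
    return json.dumps(config, indent=4)

# Write a 30-day configuration to a file for use with the AWS CLI.
with open("record-expiration-config.json", "w") as f:
    f.write(build_record_expiration_config(30))
```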

After you create a JSON file with the settings to apply, run the `put-table-record-expiration-configuration` command. For the `table-arn` parameter, specify the Amazon Resource Name (ARN) of the table. For the `value` parameter, specify the name of the file that stores the settings.

For example, the following command updates the record expiration settings for a table. The settings are specified in a file named *`record-expiration-config.json`*.

```
aws s3tables put-table-record-expiration-configuration \
    --table-arn arn:aws:s3tables:us-east-1:123456789012:bucket/amzn-s3-demo-table-bucket/table/amzn-s3-demo-table \
    --value file://./record-expiration-config.json
```

To use the preceding example, replace the `user input placeholders` with your own information.

## Monitoring record expiration for a table
<a name="s3-tables-record-expiration-monitor"></a>

To monitor the status and results of record expiration operations for your S3 tables, use the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_GetTableRecordExpirationJobStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_GetTableRecordExpirationJobStatus.html) operation or, if you're using the AWS CLI, run the [https://docs.aws.amazon.com/cli/latest/reference/s3tables/get-table-record-expiration-job-status.html](https://docs.aws.amazon.com/cli/latest/reference/s3tables/get-table-record-expiration-job-status.html) command. In your request, specify the Amazon Resource Name (ARN) of the table.

For example, the following AWS CLI command retrieves the status of record expiration operations for a specific table in a table bucket. To use this example, replace the `user input placeholders` with your own information.

```
aws s3tables get-table-record-expiration-job-status \
    --table-arn arn:aws:s3tables:us-east-1:123456789012:bucket/amzn-s3-demo-table-bucket/table/amzn-s3-demo-table
```

If your request succeeds, you receive a response that provides details such as when Amazon S3 most recently ran record expiration operations for the table and the status of that run. If the most recent run succeeded, the response also includes processing metrics—for example, the number of data files and records that were removed, and the total size of the data that was removed. If errors occurred during the most recent run, the response includes a failure message that describes why the run failed.

## Considerations
<a name="s3-tables-expiration-considerations"></a>

As you configure and manage record expiration settings for your AWS managed S3 tables, keep the following in mind:
+ Record expiration is available only for certain AWS managed tables created by supported AWS services: Amazon S3 Storage Lens and Amazon SageMaker Catalog. In addition, record expiration is available only for individual tables, not entire table buckets.
+ To determine when records expire, Amazon S3 uses specific timestamps in the tables. These timestamps represent when the data was created, not when Amazon S3 ingested the records into a table. The timestamp column that's used depends on the service that publishes the table: for S3 Storage Lens metrics, the `report_time` field; for Amazon SageMaker Catalog metadata, the `snapshot_time` field. You can't specify which field to use because these tables are managed by AWS.
+ If there are delays when data is exported to a table, records might become eligible for expiration sooner than you expect. For this reason, we recommend that you account for potential ingestion delays by adding buffer to the retention period in the expiration settings for your tables.
+ Records expire and are removed within 24 to 48 hours after they become eligible for expiration. Amazon S3 doesn't expire and remove records immediately after they become eligible for expiration.
+ Records cannot be recovered after they expire and are removed.
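
To illustrate the timing considerations above, the following sketch checks whether a record's timestamp (for example, a value from the `report_time` or `snapshot_time` column) makes it eligible for expiration, and shows how adding buffer days to the retention period compensates for ingestion delays. The function is illustrative, not part of any AWS SDK:

```python
from datetime import datetime, timedelta, timezone

def is_expiration_eligible(record_time: datetime, retention_days: int,
                           now: datetime) -> bool:
    # Eligibility is based on when the data was created (the report_time
    # or snapshot_time column), not on when Amazon S3 ingested it.
    return now - record_time > timedelta(days=retention_days)

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
record_time = now - timedelta(days=31)

# With a 30-day retention period, a 31-day-old record is already eligible.
print(is_expiration_eligible(record_time, 30, now))      # True

# Adding a few buffer days keeps records that may have been ingested late.
print(is_expiration_eligible(record_time, 30 + 5, now))  # False
```

Keep in mind that eligible records are removed within 24 to 48 hours, not immediately.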

# Considerations and limitations for maintenance jobs
<a name="s3-tables-considerations"></a>

Amazon S3 offers maintenance operations to enhance the performance of your S3 tables or table buckets. These operations are file compaction, snapshot management, and unreferenced file removal. The following are limitations and considerations for these management options.

**Topics**
+ [Considerations for compaction](#s3-tables-compaction-considerations)
+ [Considerations for snapshot management](#s3-tables-snapshot-considerations)
+ [Considerations for unreferenced file removal](#s3-tables-unreferenced-file-removal-considerations)
+ [S3 table and table buckets maintenance operations limits and related APIs](#s3-tables-maintenance-limits)

## Considerations for compaction
<a name="s3-tables-compaction-considerations"></a>

The following considerations apply to compaction. For more information about compaction, see [Maintenance for tables](s3-tables-maintenance.md).
+ Compaction is supported for the Apache Parquet, Avro, and ORC file types.
+ Compaction writes new files in Apache Parquet format by default. To compact files into Avro or ORC format instead, set the `write.format.default` table property to `avro` or `orc`.
+ Compaction doesn’t support the `Fixed` data type.
+ Compaction doesn’t support the `brotli` and `lz4` compression types.
+ Compaction occurs on an automated schedule. If you want to prevent charges associated with compaction, you can manually disable it for a table by using the [PutTableMaintenanceConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_PutTableMaintenanceConfiguration.html) API operation.

**Note**  
Apache Iceberg uses an optimistic concurrency model along with conflict detection to arbitrate write transactions. With optimistic concurrency, user and compaction transactions can conflict, causing transactions to fail. If conflicts occur, compaction jobs retry on failure. We recommend that your pipelines also use retry logic to recover from transactions that fail because of conflicting operations.
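
The retry recommendation above can be sketched as a simple wrapper with exponential backoff around a write commit. `CommitConflictError` and the committed function are hypothetical stand-ins for whatever exception your engine raises and whatever commit call it exposes:

```python
import time

class CommitConflictError(Exception):
    """Hypothetical stand-in for a conflicting-commit failure."""

def commit_with_retry(commit_fn, max_attempts=5, base_delay=0.5):
    """Retry a commit that fails because of a conflicting transaction."""
    for attempt in range(max_attempts):
        try:
            return commit_fn()
        except CommitConflictError:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff before refreshing state and retrying.
            time.sleep(base_delay * (2 ** attempt))
```

In a real pipeline, the retried function should re-read the latest table metadata before each attempt so that the new commit is based on the current snapshot.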

## Considerations for snapshot management
<a name="s3-tables-snapshot-considerations"></a>

The following considerations apply to snapshot management. For more information about snapshot management, see [Maintenance for tables](s3-tables-maintenance.md).
+ A snapshot is expired only when both criteria are satisfied: the snapshot is older than the specified retention period, and expiring it still leaves at least the minimum number of snapshots to keep.
+ Snapshot management deletes expired snapshot metadata from Apache Iceberg, preventing time travel queries for expired snapshots and optionally deleting associated data files.
+ Snapshot management doesn't support retention values that you configure as Iceberg table properties in the `metadata.json` file or through an `ALTER TABLE SET TBLPROPERTIES` SQL command, including branch-based or tag-based retention. Snapshot management is disabled when you configure a branch-based or tag-based retention policy, or when you configure a retention policy in the `metadata.json` file that is longer than the values configured through the `PutTableMaintenanceConfiguration` API. In these cases, S3 won't expire or remove snapshots, and you must manually delete snapshots or remove the properties from your Iceberg table to avoid storage charges.
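
As an illustration of how the two snapshot settings interact, the following sketch computes which snapshots survive, assuming a snapshot is expired only when it's older than the maximum snapshot age and is not among the minimum number of most recent snapshots. This is an interpretation for illustration, not AWS code:

```python
from datetime import datetime, timedelta, timezone

def retained_snapshots(snapshot_times, minimum_snapshots, maximum_age_hours, now):
    """Return the snapshot timestamps that snapshot management keeps.

    A snapshot is kept if it's within the retention period OR among the
    most recent minimum_snapshots; it's expired only when both expiration
    criteria are met.
    """
    cutoff = now - timedelta(hours=maximum_age_hours)
    newest_first = sorted(snapshot_times, reverse=True)
    return sorted(
        t for i, t in enumerate(newest_first)
        if i < minimum_snapshots or t >= cutoff
    )

now = datetime(2025, 6, 10, tzinfo=timezone.utc)
times = [now - timedelta(hours=h) for h in (1, 50, 200, 400)]

# Defaults: keep at least 1 snapshot, expire those older than 120 hours.
kept = retained_snapshots(times, minimum_snapshots=1, maximum_age_hours=120, now=now)
print(len(kept))  # 2 -> the 1-hour-old and 50-hour-old snapshots survive
```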

## Considerations for unreferenced file removal
<a name="s3-tables-unreferenced-file-removal-considerations"></a>

The following considerations apply to unreferenced file removal. For more information about unreferenced file removal, see [Maintenance for table buckets](s3-table-buckets-maintenance.md).
+ Unreferenced file removal deletes data and metadata files that are no longer referenced by Iceberg metadata if their creation time is before the retention period.

## S3 table and table buckets maintenance operations limits and related APIs
<a name="s3-tables-maintenance-limits"></a>




| Maintenance operation | Property | Configurable at table bucket level? | Configurable at table level? | Default value | Minimum value | Related Iceberg maintenance routine | Controlling S3 Tables API | 
| --- | --- | --- | --- | --- | --- | --- | --- | 
| Compaction | targetFileSizeMB | No | Yes | 512 MB | 64 MB | [https://iceberg.apache.org/docs/latest/maintenance/#compact-data-files](https://iceberg.apache.org/docs/latest/maintenance/#compact-data-files) | [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html) | 
| Snapshot management | minimumSnapshots | No | Yes | 1 | 1 | [https://iceberg.apache.org/docs/latest/maintenance/#expire-snapshots](https://iceberg.apache.org/docs/latest/maintenance/#expire-snapshots) retainLast | [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html) | 
| Snapshot management | maximumSnapshotAge | No | Yes | 120 hours | 1 hour | [https://iceberg.apache.org/docs/latest/maintenance/#expire-snapshots](https://iceberg.apache.org/docs/latest/maintenance/#expire-snapshots) expireOlderThan | [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableMaintenanceConfiguration.html) | 
| Unreferenced file removal | unreferencedDays | Yes | No | 3 days | 1 day | [https://iceberg.apache.org/docs/latest/maintenance/#delete-orphan-files](https://iceberg.apache.org/docs/latest/maintenance/#delete-orphan-files) | [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableBucketMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableBucketMaintenanceConfiguration.html) | 
| Unreferenced file removal | nonCurrentDays | Yes | No | 10 days | 1 day | N/A | [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableBucketMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3tables_PutTableBucketMaintenanceConfiguration.html) | 

**Note**  
S3 Tables applies the Parquet default row-group size of 128 MB.

# Cost optimization for tables with Intelligent-Tiering
<a name="tables-intelligent-tiering"></a>

You can automatically optimize storage costs for tables by using S3 Intelligent-Tiering. The S3 Tables Intelligent-Tiering storage class automatically moves data to the most cost-effective access tier when access patterns change. When you use S3 Intelligent-Tiering, data accessed less frequently is automatically moved to lower-cost tiers, and moved back to the Frequent Access tier whenever you access it again.

All data is moved between tiers without retrieval fees, performance impact, or changes to availability. Additionally, table maintenance operations such as compaction are optimized based on access patterns, processing only actively accessed data in the Frequent Access tier while reducing maintenance costs on less frequently accessed data in lower-cost tiers.

**Topics**
+ [S3 Tables Intelligent-Tiering access tiers](#tables-intelligent-tiering-access-tiers)
+ [Auto-tiering behavior with S3 Intelligent-Tiering](#tables-intelligent-tiering-auto-tiering-behavior)
+ [Specifying S3 Intelligent-Tiering as your storage class](#tables-intelligent-tiering-specifying-storage-class)
+ [Monitoring storage usage](#tables-intelligent-tiering-monitoring-storage)

## S3 Tables Intelligent-Tiering access tiers
<a name="tables-intelligent-tiering-access-tiers"></a>

When your table is stored in the S3 Intelligent-Tiering storage class, Amazon S3 continuously monitors access patterns and automatically moves table data between access tiers.

Tiering happens at the individual file level, so a single table can have files in different tiers based on access patterns. Table data is automatically moved to one of the following access tiers based on access patterns:
+ **Frequent Access**: The default tier for all files. Files in other tiers automatically move back to the Frequent Access tier when accessed.
+ **Infrequent Access**: If you do not access a file for 30 consecutive days, it moves to the Infrequent Access tier.
+ **Archive Instant Access**: If you do not access a file for 90 consecutive days, it moves to the Archive Instant Access tier.

All tiers provide millisecond latency, high throughput performance, and are designed for 99.9% availability and 99.999999999% durability.

## Auto-tiering behavior with S3 Intelligent-Tiering
<a name="tables-intelligent-tiering-auto-tiering-behavior"></a>

The following actions constitute access that automatically moves files from the Infrequent Access tier or the Archive Instant Access tier back to the Frequent Access tier:
+ Any read or write operations on table data or metadata files using `GetObject`, `PutObject`, or `CompleteMultipartUpload` actions
+ `LoadTable` or `UpdateTable` actions using [Iceberg REST API Operations](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-tables-integrating-open-source.html#endpoint-supported-api)
+ S3 Tables replication operations

Other actions don't constitute access that automatically moves files from the Infrequent Access tier or the Archive Instant Access tier back to the Frequent Access tier.

**Note**  
Files smaller than 128 KB are not eligible for auto-tiering and remain in the Frequent Access tier. Compaction may combine these files into fewer, larger objects and commit them back to your table as a new snapshot. The newly compacted files become eligible for auto-tiering if the new file is 128 KB or larger.
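
The tiering rules above can be modeled as a small state function: given a file's size and the number of consecutive days since it was last accessed, return the tier it would occupy. The thresholds (128 KB, 30 days, 90 days) come from this section; the function itself is an illustrative sketch, not an AWS API:

```python
def access_tier(size_bytes: int, days_since_access: int) -> str:
    """Return the S3 Tables Intelligent-Tiering tier a file would occupy."""
    if size_bytes < 128 * 1024:
        # Files smaller than 128 KB aren't eligible for auto-tiering.
        return "FREQUENT_ACCESS"
    if days_since_access >= 90:
        return "ARCHIVE_INSTANT_ACCESS"
    if days_since_access >= 30:
        return "INFREQUENT_ACCESS"
    return "FREQUENT_ACCESS"

print(access_tier(1_000_000, 0))    # FREQUENT_ACCESS
print(access_tier(1_000_000, 45))   # INFREQUENT_ACCESS
print(access_tier(1_000_000, 120))  # ARCHIVE_INSTANT_ACCESS
print(access_tier(64 * 1024, 120))  # FREQUENT_ACCESS (below 128 KB)
```

Any qualifying access resets the clock: the file moves back to the Frequent Access tier and `days_since_access` starts over.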

### Table maintenance behavior
<a name="tables-intelligent-tiering-table-maintenance"></a>

Automatic table maintenance operations performed by Amazon S3, such as snapshot management, unreferenced file removal, and record expiration, continue to run on your tables regardless of tier. Compaction runs only on files in the Frequent Access tier, optimizing performance for frequently accessed data while reducing maintenance costs on data in lower-cost tiers.

Maintenance operations do not affect the access tier of files in your table. Reads performed by maintenance operations do not cause files to change tiers. However, if a maintenance operation, such as compaction or record expiration, writes a new file, that file is created in the Frequent Access tier.

**Note**  
Because compaction only processes files in the Frequent Access tier, delete operations on data in lower-cost tiers create delete files that are not automatically compacted. These delete files become eligible for compaction when the associated data files are accessed and move back to the Frequent Access tier. For tables that are not frequently accessed, you can manually run compaction using Amazon EMR to compact these delete files with their associated data files. For more information, see [Maintaining tables by using compaction](https://docs.aws.amazon.com//prescriptive-guidance/latest/apache-iceberg-on-aws/best-practices-compaction.html#compaction-emr-glue). You can monitor file growth in your table using Amazon CloudWatch metrics to determine when manual compaction may be beneficial.

## Specifying S3 Intelligent-Tiering as your storage class
<a name="tables-intelligent-tiering-specifying-storage-class"></a>

By default, all tables are created in the S3 Standard storage class and cannot be moved to S3 Intelligent-Tiering. To use S3 Intelligent-Tiering, you must specify it at table creation. You can also set S3 Intelligent-Tiering as the default storage class for your table bucket to automatically store any new tables created there in the S3 Intelligent-Tiering storage class.

### Specifying S3 Intelligent-Tiering for table buckets
<a name="tables-intelligent-tiering-table-buckets"></a>

You can specify S3 Intelligent-Tiering as the default storage class when creating a new table bucket by using the `storage-class-configuration` header with the `CreateTableBucket` operation.

To check the default storage class on an existing table bucket, use the `GetTableBucketStorageClass` operation. To modify the default storage class of an existing table bucket, use the `PutTableBucketStorageClass` operation.

**Note**  
When you modify the default storage class on a table bucket, that setting applies only to new tables created in that bucket. The storage class for pre-existing tables is not changed.

### Specifying S3 Intelligent-Tiering for tables
<a name="tables-intelligent-tiering-tables"></a>

You can specify S3 Intelligent-Tiering as the storage class when creating a new table using the `storage-class-configuration` header with the `CreateTable` operation.

If you do not specify a storage class at table creation, the table is created in the default storage class configured on the table bucket at that time. Once a table is created, you cannot modify its storage class.

To check the default storage class on an existing table bucket, use the `GetTableBucketStorageClass` operation.

## Monitoring storage usage
<a name="tables-intelligent-tiering-monitoring-storage"></a>

You can view your storage usage breakdown by access tier in the AWS Cost and Usage Reports for your account. For more information, see [Creating Cost and Usage Reports](https://docs.aws.amazon.com//cur/latest/userguide/creating-cur.html) in the *AWS Data Exports User Guide*.

The following usage types are available in your billing reports:


| Usage Type | Unit | Granularity | Description | 
| --- | --- | --- | --- | 
| region-Tables-TimedStorage-INT-FA-ByteHrs | GB-Month | Daily | The number of GB-months that data was stored in the Frequent Access tier of the S3 Intelligent-Tiering storage class | 
| region-Tables-TimedStorage-INT-IA-ByteHrs | GB-Month | Daily | The number of GB-months that data was stored in the Infrequent Access tier of the S3 Intelligent-Tiering storage class | 
| region-Tables-TimedStorage-INT-AIA-ByteHrs | GB-Month | Daily | The number of GB-months that data was stored in the Archive Instant Access tier of the S3 Intelligent-Tiering storage class | 
| region-Tables-Requests-INT-Tier1 | Count | Hourly | The number of PUT, COPY, or POST requests on S3 Tables Intelligent-Tiering objects | 
| region-Tables-Requests-INT-Tier2 | Count | Hourly | The number of GET and all other non-Tier1 requests for S3 Tables Intelligent-Tiering objects | 

# Table namespaces
<a name="s3-tables-namespace"></a>

When you create tables within your Amazon S3 table bucket, you organize them into logical groupings called *namespaces*. Unlike S3 tables and table buckets, namespaces aren't resources. Namespaces are constructs that help you organize and manage your tables in a scalable manner. For example, all the tables belonging to the human resources department in a company could be grouped under a common namespace value of `hr`.

To control access to specific namespaces, you can use table bucket resource policies. For more information, see [Resource-based policies for S3 Tables](s3-tables-resource-based-policies.md).

The following rules apply to table namespaces:
+ Each namespace must be unique within a table bucket.
+ You can create up to 10,000 namespaces per table bucket.
+ Each table name must be unique within a namespace.
+ Each table can have only one level of namespaces. Namespaces can't be nested.
+ Each table belongs to a single namespace.
+ You can move your tables between namespaces.

Table namespaces are referred to as databases in various AWS services and query engines. The following table maps the terminology used for S3 Tables namespaces to some common engines and services.


| **Service or Engine** | **Terminology** | 
| --- | --- | 
| AWS Lake Formation | Database | 
| AWS Glue Data Catalog | Database | 
| Athena | Database | 
| Spark | Namespace | 

**Topics**
+ [Creating a namespace](s3-tables-namespace-create.md)
+ [Delete a namespace](s3-tables-namespace-delete.md)

# Creating a namespace
<a name="s3-tables-namespace-create"></a>

A table namespace is a logical construct that you group tables under within an Amazon S3 table bucket. Each table belongs to a single namespace. Before creating a table in a table bucket, you must create a namespace to group tables under. You can create a namespace by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), Amazon S3 REST API, AWS SDKs, or integrated query engines.

**Namespace names**

The following naming rules apply to namespaces:
+ Names must be between 1 and 255 characters long.
+ Names can consist only of lowercase letters, numbers, and underscores (`_`). Underscores aren't allowed at the start or end of namespace names.
+ Names must begin and end with a letter or number.
+ Names must not contain hyphens (`-`) or periods (`.`).
+ A namespace must be unique within a table bucket.
+ Namespace names must not start with the reserved prefix `aws`.

For more information about valid namespace names, see [Naming rules for tables and namespaces](s3-tables-buckets-naming.md#naming-rules-table).
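
To catch invalid names before calling the service, the rules above can be checked client-side. The following is a minimal sketch; the function is illustrative and not part of any AWS SDK:

```python
import re

# 1-255 chars; lowercase letters, digits, underscores; must begin and
# end with a letter or number (so no leading/trailing underscore).
_NAMESPACE_RE = re.compile(r"^[a-z0-9](?:[a-z0-9_]{0,253}[a-z0-9])?$")

def is_valid_namespace(name: str) -> bool:
    """Check a candidate namespace name against the documented rules."""
    if name.startswith("aws"):
        # Names must not start with the reserved prefix.
        return False
    return bool(_NAMESPACE_RE.fullmatch(name))

print(is_valid_namespace("hr"))            # True
print(is_valid_namespace("_leading"))      # False
print(is_valid_namespace("has-hyphen"))    # False
print(is_valid_namespace("aws_reserved"))  # False
```

Note that this doesn't check uniqueness within the table bucket; only the service can enforce that.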

## Using the S3 console and Amazon Athena
<a name="create-namespace-console"></a>

The following procedure uses the **Create table with Athena** workflow to create a namespace in the Amazon S3 console. If you don't want to also use Amazon Athena to create a table in your namespace, you can cancel the workflow after creating your namespace. 

**To create a namespace**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that you want to create a namespace in.

1. On the bucket details page, choose **Create table with Athena**. 

1. In the **Create table with Athena** dialog box, choose **Create a namespace**, and then choose **Create namespace**.

1. Enter a name in the **Namespace name** field. Namespace names must be 1 to 255 characters and unique within the table bucket. Valid characters are a–z, 0–9, and underscores (`_`). Underscores aren't allowed at the start or end of namespace names.

1. Choose **Create namespace**.

1. If you also want to create a table, choose **Create table with Athena**. For more information about creating a table with Athena, see [Using the S3 console and Amazon Athena](s3-tables-create.md#create-table-console). If you don't want to create a table right now, choose **Cancel**.

## Using the AWS CLI
<a name="create-table-namespace-CLI"></a>

This example shows how to create a table namespace by using the AWS CLI. To use this example, replace the `user input placeholders` with your own information.

```
aws s3tables create-namespace \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
    --namespace example_namespace
```

## Using a query engine
<a name="create-table-namespace-engine"></a>

You can create a namespace in an Apache Spark session connected to your Amazon S3 table buckets.

This example shows you how to create a namespace by using a `CREATE NAMESPACE` statement in a query engine that's integrated with S3 Tables. To use this example, replace the `user input placeholders` with your own information.

```
spark.sql("CREATE NAMESPACE IF NOT EXISTS s3tablesbucket.my_namespace")
```

# Delete a namespace
<a name="s3-tables-namespace-delete"></a>

Before you delete a table namespace from an Amazon S3 table bucket, you must delete all tables within the namespace, or move them under another namespace. You can delete a namespace by using the Amazon S3 REST API, AWS SDKs, AWS Command Line Interface (AWS CLI), or integrated query engines. 

For information about the permissions required to delete a namespace, see [https://docs.aws.amazon.com//AmazonS3/latest/API/API_s3TableBuckets_DeleteNamespace.html](https://docs.aws.amazon.com//AmazonS3/latest/API/API_s3TableBuckets_DeleteNamespace.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS CLI
<a name="delete-table-namespace-CLI"></a>

This example shows you how to delete a table namespace by using the AWS CLI. To use this example, replace the `user input placeholders` with your own information.

```
aws s3tables delete-namespace \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
    --namespace example_namespace
```

# Tables in S3 table buckets
<a name="s3-tables-tables"></a>

An S3 table represents a structured dataset consisting of underlying table data and related metadata. This data is stored inside a table bucket as a subresource. All tables in a table bucket are stored in the [Apache Iceberg](https://iceberg.apache.org/docs/latest/) table format. Amazon S3 manages maintenance of your tables through automatic file compaction and snapshot management. For more information, see [Maintenance for tables](s3-tables-maintenance.md).

To make tables in your account accessible by AWS analytics services, you integrate your Amazon S3 table buckets with AWS Glue Data Catalog. This integration allows AWS analytics services such as Amazon Athena and Amazon Redshift to automatically discover and access your table data. 

When you create a table, Amazon S3 automatically generates a warehouse location for the table. This is a unique S3 location that stores objects associated with the table. The following example shows the format of a warehouse location: 

```
s3://63a8e430-6e0b-46f5-k833abtwr6s8tmtsycedn8s4yc3xhuse1b--table-s3
```

Within your table bucket, you can organize tables into logical groupings called namespaces. For more information, see [Table namespaces](s3-tables-namespace.md).

You can rename tables, but each table has its own unique Amazon Resource Name (ARN) and unique table ID. Each table also has a resource policy attached to it. You can use this policy to manage access to the table.

Table ARNs use the following format:

```
arn:aws:s3tables:region:owner-account-id:bucket/bucket-name/table/table-id
```
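
A table ARN in the format above can be split into its components with ordinary string handling. This sketch assumes the exact format shown; the table ID in the example call is a made-up placeholder:

```python
def parse_table_arn(arn: str) -> dict:
    """Split an S3 Tables table ARN into its named components."""
    prefix, resource = arn.split(":bucket/", 1)
    _, partition, service, region, account_id = prefix.split(":")
    bucket_name, table_id = resource.split("/table/", 1)
    return {
        "partition": partition,
        "service": service,
        "region": region,
        "account_id": account_id,
        "bucket_name": bucket_name,
        "table_id": table_id,
    }

parsed = parse_table_arn(
    "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1/table/5e1f5ddf-example"
)
print(parsed["bucket_name"], parsed["table_id"])
```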

By default, you can create up to 10,000 tables in a table bucket. To request a quota increase for table buckets or tables, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). 

Amazon S3 supports the following types of tables in table buckets:

**Customer tables**  
Customer tables are tables that you can read and write to. You can retrieve data from these tables using integrated query engines. You can insert, update, or delete data within them by using S3 API operations or integrated query engines. 

**AWS tables**  
AWS tables are read-only tables that are generated by an AWS service on your behalf. These tables are managed by Amazon S3 and can't be modified by any IAM principal outside of Amazon S3 itself. You can retrieve information from these tables, but you can't modify the data in them. AWS tables include S3 Metadata tables, which contain metadata that's captured from the objects within an S3 general purpose bucket. For more information, see [Discovering your data with S3 Metadata tables](metadata-tables-overview.md).

**Topics**
+ [Creating an Amazon S3 table](s3-tables-create.md)
+ [Deleting an Amazon S3 table](s3-tables-delete.md)
+ [Viewing details about an Amazon S3 table](s3-tables-table-details.md)
+ [Managing table policies](s3-tables-table-policy.md)
+ [Using tags with S3 tables](table-tagging.md)

# Creating an Amazon S3 table
<a name="s3-tables-create"></a>

An Amazon S3 table is a subresource of a table bucket. Tables are stored in the Apache Iceberg format so that you can work with them by using query engines and other applications that support Apache Iceberg. Amazon S3 continuously optimizes your tables to help reduce storage costs and improve analytics query performance.

When you create a table, Amazon S3 automatically generates a *warehouse location* for the table. A warehouse location is a unique S3 location where you can read and write objects associated with the table. The following example shows the format of a warehouse location:

```
s3://63a8e430-6e0b-46f5-k833abtwr6s8tmtsycedn8s4yc3xhuse1b--table-s3
```

Tables have the following Amazon Resource Name (ARN) format:

```
arn:aws:s3tables:region:owner-account-id:bucket/bucket-name/table/table-id
```

By default, you can create up to 10,000 tables in a table bucket. To request a quota increase for table buckets or tables, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).

You can create a table by using the Amazon S3 console, Amazon S3 REST API, AWS SDKs, AWS Command Line Interface (AWS CLI), or query engines connected to your table buckets.

When you create a table, you can specify the encryption settings for that table, unless you're creating the table with Athena. If you don't specify encryption settings, the table is encrypted with the default settings for the table bucket. For more information, see [Specifying encryption for tables](s3-tables-kms-specify.md#specify-kms-table).

**Prerequisites for creating tables**

To create a table, you must first do the following: 
+ [Create a table bucket](s3-tables-buckets-create.md).
+ [Create a namespace](s3-tables-namespace-create.md) in your table bucket.
+ Make sure that you have AWS Identity and Access Management (IAM) permissions for `s3tables:CreateTable` and `s3tables:PutTableData`.

**Note**  
If you're using SSE-KMS encryption for your table, you need permissions for `s3tables:PutTableEncryption` and the `kms:DescribeKey` permission on the chosen AWS KMS key. Additionally, the AWS KMS key that you use must grant S3 Tables permission to perform automatic table maintenance. For more information, see [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md).

For information about valid table names, see [Naming rules for tables and namespaces](s3-tables-buckets-naming.md#naming-rules-table).

**Important**  
When creating tables, make sure that you use all lowercase letters in your table names and table definitions. For example, make sure that your column names are all lowercase. If your table name or table definition contains capital letters, the table isn't supported by AWS Lake Formation or the AWS Glue Data Catalog. In this case, your table won't be visible to AWS analytics services such as Amazon Athena, even if your table buckets are integrated with AWS analytics services.   
If your table definition contains capital letters, you receive the following error message when running a `SELECT` query in Athena: "GENERIC\_INTERNAL\_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names."
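
To catch this problem before you create a table, you can check names client-side. A minimal Python sketch (the helper name is illustrative, not part of any SDK):

```python
def is_catalog_compatible(*names: str) -> bool:
    # AWS Lake Formation and the AWS Glue Data Catalog require all-lowercase
    # table and column names, so reject any name with an uppercase letter.
    return all(name == name.lower() for name in names)

print(is_catalog_compatible("daily_sales", "order_id", "amount"))  # True
print(is_catalog_compatible("DailySales", "order_id"))             # False
```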

## Using the S3 console and Amazon Athena
<a name="create-table-console"></a>

The following procedure uses the Amazon S3 console to create a table with Amazon Athena. If you haven't already created a namespace in your table bucket, you can do so as part of this process. Before performing the following steps, make sure that you've integrated your table buckets with AWS analytics services in this Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

**Note**  
When you create a table by using Athena, that table inherits the default encryption settings from the table bucket. If you want to use a different encryption type, you must create the table by using another method.

**To create a table**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that you want to create a table in.

1. On the bucket details page, choose **Create table with Athena**. 

1. In the **Create table with Athena** dialog box, do one of the following:
   + Create a new namespace. Choose **Create a namespace**, enter a name in the **Namespace name** field, and then choose **Create namespace**. Namespace names must be 1 to 255 characters and unique within the table bucket. Valid characters are a–z, 0–9, and underscores (`_`). Underscores aren't allowed at the start of namespace names. 
   + Specify an existing namespace. Choose **Specify an existing namespace within this table bucket**. Then choose either **Choose from existing namespaces** or **Enter an existing namespace name**. If you have more than 1,000 namespaces in your bucket, you must enter the namespace name if it doesn't appear in the list. 

1. Choose **Create table with Athena**.

1. The Amazon Athena console opens and the Athena query editor appears. The **Catalog** field should be populated with **s3tablescatalog/** followed by the name of your table bucket, for example, **s3tablescatalog/*amzn-s3-demo-bucket***. The **Database** field should be populated with the namespace that you created or selected earlier. 
**Note**  
If you don't see these values in the **Catalog** and **Database** fields, make sure that you've integrated your table buckets with AWS analytics services in this Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md). 

1. The query editor is populated with a sample query that you can use to create a table. Modify the query to specify the table name and columns that you want your table to have. 

1. When you're finished modifying the query, choose **Run** to create your table. 
**Note**  
If you receive the error "Insufficient permissions to execute the query. Principal does not have any privilege on specified resource" when you try to run a query in Athena, you must be granted the necessary Lake Formation permissions on the table. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table). 
If you receive the error "Iceberg cannot access the requested resource" when you try to run a query in Athena, go to the AWS Lake Formation console and make sure that you've granted yourself permissions on the table bucket catalog and database (namespace) that you created. Don't specify a table when granting these permissions. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table). 
If you receive the following error message when running a `SELECT` query in Athena, it's caused by capital letters in your table name or in the column names in your table definition: "GENERIC\_INTERNAL\_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names." Make sure that your table and column names are all lowercase. 

If your table creation was successful, the name of your new table appears in the list of tables in Athena. When you navigate back to the Amazon S3 console, your new table appears in the **Tables** list on the bucket details page for your table bucket after you refresh the list. 

## Using the AWS CLI
<a name="create-table-CLI"></a>

This example shows how to create a table with a schema by using the AWS CLI and specifying table metadata with JSON. To use this example, replace the `user input placeholders` with your own information.

```
aws s3tables create-table --cli-input-json file://mytabledefinition.json
```

For the `mytabledefinition.json` file, use the following example table definition. To use this example, replace the `user input placeholders` with your own information. 

```
{
    "tableBucketARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket",
    "namespace": "your_namespace",
    "name": "example_table",
    "format": "ICEBERG",
    "metadata": {
        "iceberg": {
            "schema": {
                "fields": [
                     {"name": "id", "type": "int","required": true},
                     {"name": "name", "type": "string"},
                     {"name": "value", "type": "int"}
                ]
            }
        }
    }
}
```
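
If you generate table definitions from code, you can build the same structure as a dictionary and write it out for `--cli-input-json`. The following is a sketch in Python, using the same placeholder values as above:

```python
import json

table_definition = {
    "tableBucketARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket",
    "namespace": "your_namespace",
    "name": "example_table",
    "format": "ICEBERG",
    "metadata": {
        "iceberg": {
            "schema": {
                "fields": [
                    {"name": "id", "type": "int", "required": True},
                    {"name": "name", "type": "string"},
                    {"name": "value", "type": "int"},
                ]
            }
        }
    },
}

# Write the definition for use with:
# aws s3tables create-table --cli-input-json file://mytabledefinition.json
with open("mytabledefinition.json", "w") as f:
    json.dump(table_definition, f, indent=4)
```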

## Using a query engine
<a name="create-table-engine"></a>

You can create a table in a supported query engine that's connected to your table buckets, such as in an Apache Spark session on Amazon EMR.

The following example shows how to create a table with Spark by using `CREATE` statements, and add table data by using `INSERT` statements or by reading data from an existing file. To use this example, replace the `user input placeholders` with your own information.

```
spark.sql(
  """
  CREATE TABLE IF NOT EXISTS s3tablesbucket.example_namespace.`example_table` (
      id INT,
      name STRING,
      value INT
  )
  USING iceberg
  """
)
```

After you create the table, you can load data into the table. Choose from the following methods:
+ Add data into the table by using the `INSERT` statement.

  ```
  spark.sql(
  """
      INSERT INTO s3tablesbucket.my_namespace.my_table 
      VALUES 
          (1, 'ABC', 100), 
          (2, 'XYZ', 200)
  """)
  ```
+ Load an existing data file.

  1. Read the data into Spark:

     ```
     val data_file_location = "Path such as S3 URI to data file"
     val data_file = spark.read.parquet(data_file_location)
     ```

  1. Write the data into an Iceberg table:

     ```
data_file.writeTo("s3tablesbucket.my_namespace.my_table").using("iceberg").tableProperty("format-version", "2").createOrReplace()
     ```

# Deleting an Amazon S3 table
<a name="s3-tables-delete"></a>

You can delete a table by using the Amazon S3 REST API, AWS SDKs, AWS Command Line Interface (AWS CLI), or by using integrated query engines.

**Note**  
S3 Tables doesn't support the `DROP TABLE` operation with `purge=false`. Some versions of Apache Spark always set this flag to `false`, even when running `DROP TABLE PURGE` commands. To delete a table, you can retry `DROP TABLE` with `purge=true`, or use the S3 Tables [DeleteTable](https://docs.aws.amazon.com//AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html) REST API operation.

**Important**  
When you delete a table, be aware of the following:  
Deleting a table is permanent and can't be undone. Before deleting a table, make sure that you've backed up or replicated any important data.  
All data and configurations associated with the table are permanently removed.

## Using the AWS CLI
<a name="delete-table-CLI"></a>

This example shows how to delete a table by using the AWS CLI. To use this command, replace the `user input placeholders` with your own information.

```
aws s3tables delete-table \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
    --namespace example_namespace --name example_table
```

## Using a query engine
<a name="create-table-namespace-engine"></a>

You can delete a table in an Apache Spark session connected to your Amazon S3 table buckets.

This example shows how to delete a table by using the `DROP TABLE PURGE` command. To use the command, replace the `user input placeholders` with your own information.

```
spark.sql(
"DROP TABLE IF EXISTS s3tablesbucket.example_namespace.example_table PURGE")
```

# Viewing details about an Amazon S3 table
<a name="s3-tables-table-details"></a>

You can view the general details of a table in a table bucket, such as creation details, format, and type, in the console or programmatically. You can view table encryption settings and maintenance settings programmatically by using the S3 Tables REST API, the AWS CLI, or the AWS SDKs.

## Viewing table details
<a name="table-details-view"></a>

### Using the AWS CLI
<a name="table-details-CLI"></a>

This example shows how to get details about a table by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket --namespace my-namespace --name my-table
```
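
The command returns a JSON document that you can feed into scripts. The following Python sketch parses a trimmed-down sample response; the field names (`name`, `format`, `warehouseLocation`) are assumptions based on a typical `get-table` result, so verify them against your own output:

```python
import json

# Trimmed sample `get-table` output. Field names here are assumptions --
# check them against the response from your own call.
response_text = """
{
    "name": "my-table",
    "format": "ICEBERG",
    "warehouseLocation": "s3://amzn-s3-demo-warehouse--table-s3"
}
"""

details = json.loads(response_text)
print(f"{details['name']} ({details['format']}) -> {details['warehouseLocation']}")
```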

### Using the S3 console
<a name="table-details-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Select your table bucket, then select your table.

1. Select the **Properties** tab.

1. (Optional) For information about the permission policy attached to the table, select **Permissions**.

## Previewing table data
<a name="table-preview-data"></a>

### Using the S3 console
<a name="table-preview-data-console"></a>

You can preview the data in your table directly from the Amazon S3 console using the following procedure.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that contains the table that you want to query.

1. Select the table that you want to preview, and then choose **Preview**.

**Note**  
The preview shows the first 10 rows and up to 25 columns of your table. Tables larger than 50 MB can't be previewed.

## Encryption details
<a name="table-encryption-view"></a>

For more information about table bucket encryption, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-encryption.md).

### Using the AWS CLI
<a name="table-encryption-view-CLI"></a>

This example shows how to get details about encryption settings for a table by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-encryption --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket --namespace my-namespace --name my-table
```

## Maintenance details
<a name="table-maintenance-view"></a>

For information about maintenance settings, see [Maintenance for table buckets](s3-table-buckets-maintenance.md).

### Using the AWS CLI
<a name="table-maintenance-view-CLI"></a>

This example shows how to get details about maintenance configuration settings for a table by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-maintenance-configuration --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket --namespace my-namespace --name my-table
```

# Managing table policies
<a name="s3-tables-table-policy"></a>

You can add, delete, update, and view table policies for tables by using the Amazon S3 console, the Amazon S3 REST API, the AWS SDKs, and the AWS CLI. For more information, see the following topics. For more information about supported AWS Identity and Access Management (IAM) actions and condition keys for Amazon S3 Tables, see [Access management for S3 Tables](s3-tables-setting-up.md). For example table policies, see [Resource-based policies for S3 Tables](s3-tables-resource-based-policies.md).

## Adding a table policy
<a name="table-policy-add"></a>

To add a table policy to a table, you can use the Amazon S3 REST API, the AWS SDKs, or the AWS CLI. 

### Using the AWS CLI
<a name="table-policy-add-CLI"></a>

This example shows how to add a policy to a table by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3tables put-table-policy \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
    --namespace my-namespace \
    --name my-table \
    --resource-policy file://mytablepolicy.json
```

### Using the S3 console
<a name="table-policy-add-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that contains your table, then select your table from that bucket.

1. Choose the **Permissions** tab.

1. Under **Table policy**, choose **Edit**.

1. In the policy editor, enter your policy JSON. 

1. (Optional) Choose **Policy examples** to see sample policies that you can adapt to your needs.

1. After entering your policy, choose **Save changes**.

## Viewing a table policy
<a name="table-policy-get"></a>

To view the table policy attached to a table, you can use the Amazon S3 REST API, the AWS SDKs, or the AWS CLI. 

### Using the AWS CLI
<a name="table-policy-get-CLI"></a>

This example shows how to view the policy attached to a table by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3tables get-table-policy \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket  \
    --namespace my-namespace \
    --name my-table
```
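
Note that the policy document is typically returned as a JSON *string* inside the response, so it needs a second parse before you can inspect it. A Python sketch (the `resourcePolicy` field name is an assumption; confirm it against your own `get-table-policy` output):

```python
import json

# Sample response shape; the `resourcePolicy` key is an assumption here.
response = {
    "resourcePolicy": "{\"Version\": \"2012-10-17\", \"Statement\": []}"
}

# The policy arrives as a JSON string, so parse it a second time.
policy = json.loads(response["resourcePolicy"])
print(policy["Version"])  # 2012-10-17
```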

### Using the S3 console
<a name="get-policy-table-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that contains your table, then select your table from that bucket.

1. Choose the **Permissions** tab.

## Deleting a table policy
<a name="table-policy-delete"></a>

To delete a policy attached to a table, you can use the Amazon S3 REST API, the AWS SDKs, or the AWS CLI. 

### Using the AWS CLI
<a name="table-policy-delete-CLI"></a>

This example shows how to delete a table policy by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3tables delete-table-policy \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
    --namespace your-namespace \
    --name your-table
```

### Using the S3 console
<a name="table-policy-delete-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Amazon S3**.

1. Choose **Table buckets** and select the table bucket name that contains your table, then select your table from that bucket.

1. Choose the **Permissions** tab.

1. Under **Table policy**, choose **Delete**.

# Using tags with S3 tables
<a name="table-tagging"></a>

An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 tables. You can tag S3 tables when you create them or manage tags on existing tables. For general information about tags, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

**Note**  
There is no additional charge for using tags on tables beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Common ways to use tags with tables
<a name="common-ways-to-use-tags-table"></a>

Use tags on your S3 tables for:

1. **Cost allocation** – Track storage costs by table tag in AWS Billing and Cost Management. For more information, see [Using tags for cost allocation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-cost-allocation).

1. **Attribute-based access control (ABAC)** – Scale access permissions and grant access to S3 tables based on their tags. For more information, see [Using tags for ABAC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

**Note**  
You can use the same tags for both cost allocation and access control.

### ABAC for S3 tables
<a name="abac-for-tables"></a>

Amazon S3 tables support attribute-based access control (ABAC) using tags. Use tag-based condition keys in your AWS Organizations, AWS Identity and Access Management (IAM), and S3 table policies. ABAC in Amazon S3 supports authorization across multiple AWS accounts. 

In your IAM policies, you can control access to S3 tables based on the table's tags by using the `s3tables:TableBucketTag/tag-key` condition key or the [AWS global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-tagkeys): `aws:ResourceTag/key-name`, `aws:RequestTag/key-name`, or `aws:TagKeys`. 

#### aws:ResourceTag/key-name
<a name="table-condition-key-resource-tag"></a>

Use this condition key to compare the tag key-value pair that you specify in the policy with the key-value pair attached to the resource. For example, you could require that access to a table is allowed only if the table has the tag key `Department` with the value `Marketing`.

This condition key applies to table actions performed by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the S3 APIs, or the AWS SDKs.

For an example policy, see [1.1 - table policy to restrict operations on the table using tags](#example-policy-table-resource-tag).

For additional example policies and more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources) in the *AWS Identity and Access Management User Guide*.

**Note**  
For actions performed on tables, this condition key acts on the tags applied to the table, not on the tags applied to the table bucket that contains the table. If you want your ABAC policies to act on the tags of the table bucket when performing table actions, use the `s3tables:TableBucketTag/tag-key` condition key instead. 

#### aws:RequestTag/key-name
<a name="table-condition-key-request-tag"></a>

Use this condition key to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy. For example, you could check whether the request to tag a table includes the tag key `Department` and that it has the value `Accounting`. 

This condition key applies when tag keys are passed in a `TagResource` or `CreateTable` API operation request, or when tagging or creating a table with tags using the Amazon S3 Console, the AWS Command Line Interface (CLI), or the AWS SDKs. 

For an example policy, see [1.2 - IAM policy to create or modify tables with specific tags](#example-policy-table-request-tag).

For additional example policies and more information, see [Controlling access during AWS requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-requests) in the *AWS Identity and Access Management User Guide*.

#### aws:TagKeys
<a name="table-condition-key-tag-keys"></a>

Use this condition key to compare the tag keys in a request with the keys that you specify in the policy to define what tag keys are allowed for access. For example, to allow tagging during the `CreateTable` action, you must create a policy that allows both the `s3tables:TagResource` and `s3tables:CreateTable` actions. You can then use the `aws:TagKeys` condition key to enforce that only specific tags are used in the `CreateTable` request. 

This condition key applies when tag keys are passed in a `TagResource`, `UntagResource`, or `CreateTable` API operations or when tagging, untagging, or creating a table with tags using the AWS Command Line Interface (CLI), or the AWS SDKs. 

For an example policy, see [1.3 - IAM policy to control the modification of tags on existing resources maintaining tagging governance](#example-policy-table-tag-keys).

For additional example policies and more information, see [Controlling access based on tag keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys) in the *AWS Identity and Access Management User Guide*.

#### s3tables:TableBucketTag/tag-key
<a name="table-bucket-tag-condition-key"></a>

Use this condition key to grant permissions to specific data in table buckets by using tags. For most S3 Tables actions, this condition key acts on the tags assigned to the table bucket. Even when you create a table with tags, this condition key acts on the tags applied to the table bucket that contains that table. The exception is the following: 
+ When you create a table bucket with tags, this condition key acts on the tags in the request.

For an example policy, see [1.4 - Using the s3tables:TableBucketTag condition key](#example-policy-table-bucket-tag-tables).

#### Example ABAC policies for tables
<a name="example-table-abac-policies"></a>

See the following example ABAC policies for Amazon S3 tables.

**Note**  
If you have an IAM or S3 Tables resource-based policy that restricts IAM users and IAM roles based on principal tags, you must attach the same principal tags to the IAM role that Lake Formation uses to access your Amazon S3 data (for example, LakeFormationDataAccessRole) and grant this role the necessary permissions. This is required for your tag-based access control policy to work correctly with your S3 Tables analytics integration. 

##### 1.1 - table policy to restrict operations on the table using tags
<a name="example-policy-table-resource-tag"></a>

In this table policy, the specified IAM principals (users and roles) can perform the `GetTable` action only if the value of the table's `project` tag matches the value of the principal's `project` tag.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetTable",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111122223333"
      },
      "Action": "s3tables:GetTable",
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/my_example_table",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```

##### 1.2 - IAM policy to create or modify tables with specific tags
<a name="example-policy-table-request-tag"></a>

In this IAM policy, users or roles with this policy can create S3 tables only if they tag the table with the tag key `project` and tag value `Trinity` in the table creation request. They can also add or modify tags on existing S3 tables as long as the `TagResource` request includes the tag key-value pair `project:Trinity`. This policy doesn't grant read, write, or delete permissions on the tables or their objects. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateTableWithTags",
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTable",
        "s3tables:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/project": [
            "Trinity"
          ]
        }
      }
    }
  ]
}
```

##### 1.3 - IAM policy to control the modification of tags on existing resources maintaining tagging governance
<a name="example-policy-table-tag-keys"></a>

In this IAM policy, IAM principals (users or roles) can modify tags on a table only if the value of the table's `project` tag matches the value of the principal's `project` tag. Only the four tags `project`, `environment`, `owner`, and `cost-center` specified in the `aws:TagKeys` condition keys are permitted for these tables. This helps enforce tag governance, prevents unauthorized tag modifications, and keeps the tagging schema consistent across your tables.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceTaggingRulesOnModification",
      "Effect": "Allow",
      "Action": [
        "s3tables:TagResource",
        "s3tables:UntagResource"
      ],
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/my_example_table",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "project",
            "environment",
            "owner",
            "cost-center"
          ]
        }
      }
    }
  ]
}
```

##### 1.4 - Using the s3tables:TableBucketTag condition key
<a name="example-policy-table-bucket-tag-tables"></a>

In this IAM policy, the condition statement allows access to the table bucket's data only if the table bucket has the tag key `Environment` and tag value `Production`. The `s3tables:TableBucketTag/<tag-key>` condition key differs from the `aws:ResourceTag/<tag-key>` condition key because, in addition to controlling access to table buckets depending on their tags, it allows you to control access to tables based on the tags on their parent table bucket.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificTables",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3tables:TableBucketTag/Environment": "Production"
        }
      }
    }
  ]
}
```

## Managing tags for tables
<a name="table-working-with-tags"></a>

You can add or manage tags for S3 tables by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the S3 Tables REST API operations [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html). For more information, see the following topics.

**Topics**
+ [Common ways to use tags with tables](#common-ways-to-use-tags-table)
+ [Managing tags for tables](#table-working-with-tags)
+ [Creating tables with tags](table-create-tag.md)
+ [Adding a tag to a table](table-tag-add.md)
+ [Viewing table tags](table-tag-view.md)
+ [Deleting a tag from a table](table-tag-delete.md)

# Creating tables with tags
<a name="table-create-tag"></a>

You can tag Amazon S3 tables when you create them. There is no additional charge for using tags on tables beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging tables, see [Using tags with S3 tables](table-tagging.md).

## Permissions
<a name="table-create-tag-permissions"></a>

To create a table with tags, you must have the following permissions:
+ `s3tables:CreateTable`
+ `s3tables:TagResource`
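
An identity-based policy that grants both permissions might look like the following sketch. The resource ARNs and their scope are illustrative; narrow them to your own table bucket and tables:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreateTableWithTags",
      "Effect": "Allow",
      "Action": [
        "s3tables:CreateTable",
        "s3tables:TagResource"
      ],
      "Resource": [
        "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket",
        "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/*"
      ]
    }
  ]
}
```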

## Troubleshooting errors
<a name="table-create-tag-troubleshooting"></a>

If you encounter an error when attempting to create a table with tags, you can do the following: 
+ Verify that you have the required [Permissions](#table-create-tag-permissions) to create the table and apply a tag to it.
+ Check your IAM user policy for any attribute-based access control (ABAC) conditions. Your policy may require you to tag your tables with only specific tag keys and values. For more information about ABAC and example table ABAC policies, see [ABAC for S3 tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/table-tagging.html#abac-for-tables).

## Steps
<a name="table-create-tag-steps"></a>

You can create a table with tags applied by using the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and the AWS SDKs.

## Using the REST API
<a name="table-create-tag-api"></a>

For information about the Amazon S3 Tables REST API support for creating a table with tags, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [CreateTable](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_CreateTable.html)

## Using the AWS CLI
<a name="table-create-tag-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to create a table with tags by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create a table, you must provide configuration details. For more information, see [Creating an Amazon S3 table](s3-tables-create.md). You must also give the table a name that follows the table naming rules. For more information, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md). 

**Request:**

```
aws s3tables create-table \
    --region us-west-2 \
    --table-bucket-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket \
    --tags '{"Department":"Engineering"}' \
    --name my_table_abc \
    --namespace my_namespace_123a \
    --format ICEBERG
```

# Adding a tag to a table
<a name="table-tag-add"></a>



You can add tags to Amazon S3 tables and modify these tags. For more information about tagging tables, see [Using tags with S3 tables](table-tagging.md).

## Permissions
<a name="table-tag-add-permissions"></a>

To add a tag to a table, you must have the following permission:
+ `s3tables:TagResource`

## Troubleshooting errors
<a name="table-tag-add-troubleshooting"></a>

If you encounter an error when attempting to add a tag to a table, you can do the following: 
+ Verify that you have the required [Permissions](#table-tag-add-permissions) to add a tag to a table.
+ If you attempted to add a tag key that starts with the AWS reserved prefix `aws:`, change the tag key and try again. 
+ The tag key is required. Also, make sure that the tag key and the tag value do not exceed the maximum character length and do not contain restricted characters. For more information, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

## Steps
<a name="table-tag-add-steps"></a>

You can add tags to tables by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and AWS SDKs.

## Using the S3 console
<a name="table-tag-add-console"></a>

To add tags to a table using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the table name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and choose **Add new Tag**. The **Add Tags** page opens. 

1. Enter up to 50 tag key-value pairs. If you add a new tag with the same key as an existing tag, the value of the new tag overrides the value of the existing tag. You can also edit the values of existing tags on this page.

1. After you have added your tags, choose **Save changes**. 

## Using the REST API
<a name="table-tag-add-api"></a>

For information about the Amazon S3 REST API support for adding tags to a table, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_TagResource.html)

## Using the AWS CLI
<a name="table-tag-add-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to add tags to a table by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables tag-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/my_example_table \
--tags '{"Department":"engineering"}'
```
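
The `--resource-arn` value for a table follows a fixed pattern. The following snippet (hypothetical names; substitute your own Region, account ID, bucket, and table) shows one way to assemble it in a shell session before calling `tag-resource`:

```shell
# Build the table ARN used by tag-resource from its parts.
# All values below are placeholders; substitute your own.
REGION="us-west-2"
ACCOUNT_ID="111122223333"
TABLE_BUCKET="amzn-s3-demo-table-bucket"
TABLE_NAME="my_example_table"
TABLE_ARN="arn:aws:s3tables:${REGION}:${ACCOUNT_ID}:bucket/${TABLE_BUCKET}/table/${TABLE_NAME}"
echo "${TABLE_ARN}"
```

You can then pass `"$TABLE_ARN"` as the `--resource-arn` argument.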

# Viewing table tags
<a name="table-tag-view"></a>

You can view or list tags applied to Amazon S3 tables. For more information about tags, see [Using tags with S3 tables](table-tagging.md).

## Permissions
<a name="table-tag-view-permissions"></a>

To view tags applied to a table, you must have the following permission: 
+ `s3tables:ListTagsForResource`

## Troubleshooting errors
<a name="table-tag-view-troubleshooting"></a>

If you encounter an error when attempting to list or view the tags of a table, you can do the following: 
+ Verify that you have the required [Permissions](#table-tag-view-permissions) to view or list the tags of the table.

## Steps
<a name="table-tag-view-steps"></a>

You can view tags applied to tables by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="table-tag-view-console"></a>

To view tags applied to a table using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the table name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section to view all of the tags applied to the table. The section shows **User-defined tags** by default. Select the **AWS-generated tags** tab to view tags applied to your table by AWS services.

## Using the REST API
<a name="table-tag-view-api"></a>

For information about the Amazon S3 REST API support for viewing the tags applied to a table, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [ListTagsforResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_ListTagsForResource.html)

## Using the AWS CLI
<a name="table-tag-view-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to view tags applied to a table. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables list-tags-for-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/my_example_table
```
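
A successful request returns the tag set applied to the table. The response has roughly the following shape (illustrative values):

```
{
    "tags": {
        "Department": "engineering"
    }
}
```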

# Deleting a tag from a table
<a name="table-tag-delete"></a>

You can remove tags from Amazon S3 tables. For more information about tagging tables, see [Using tags with S3 tables](table-tagging.md).

**Note**  
If you delete a tag and later learn that it was being used to track costs or for access control, you can add the tag back to the table. 

## Permissions
<a name="table-tag-delete-permissions"></a>

To delete a tag from a table, you must have the following permission: 
+ `s3tables:UntagResource`

## Troubleshooting errors
<a name="table-tag-delete-troubleshooting"></a>

If you encounter an error when attempting to delete a tag from a table, you can do the following: 
+ Verify that you have the required [Permissions](#table-tag-delete-permissions) to delete a tag from a table.

## Steps
<a name="table-tag-delete-steps"></a>

You can delete tags from tables by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 Tables REST API, and AWS SDKs.

## Using the S3 console
<a name="table-tag-delete-console"></a>

To delete tags from a table using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose the table bucket name. 

1. Choose the table name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and select the check box next to each tag that you want to delete. 

1. Choose **Delete**. The **Delete user-defined tags** dialog box asks you to confirm the deletion of the selected tags. 

1. Choose **Delete** to confirm.

## Using the REST API
<a name="table-tag-delete-api"></a>

For information about the Amazon S3 REST API support for deleting tags from a table, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3Buckets_UntagResource.html)

## Using the AWS CLI
<a name="table-tag-delete-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to delete tags from a table by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws --region us-west-2 \
s3tables untag-resource \
--resource-arn arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/my_example_table \
--tag-keys '["Department"]'
```

# Accessing table data
<a name="s3-tables-access"></a>

There are multiple ways to access tables in Amazon S3 table buckets. You can integrate tables with AWS analytics services by using the AWS Glue Data Catalog, or access tables directly by using the Amazon S3 Tables Iceberg REST endpoint or the Amazon S3 Tables Catalog for Apache Iceberg. The access method you use depends on your catalog setup, governance model, and access control needs. The following is an overview of these access methods.

**AWS Glue Data Catalog integration**  
This is the recommended access method for working with tables in S3 table buckets. This integration gives you a unified view of your data estate across multiple AWS analytics services through the AWS Glue Data Catalog. After integration, you can query tables using services such as Athena and Amazon Redshift. Access to tables is managed using IAM permissions. To access tables using this integration, the IAM identity you use needs access to your S3 Tables resources and actions, AWS Glue Data Catalog objects, and the query engine you're using. For more information, see [Access management for S3 Tables](s3-tables-setting-up.md).

**Direct access**  
Use this method if you need to work with AWS Partner Network (APN) catalog implementations, custom catalog implementations, or if you only need to perform basic read/write operations on tables within a single table bucket. Access to tables is managed using IAM permissions. To access tables, the IAM identity you use needs access to your table resources and S3 Tables actions. For more information, see [Access management for S3 Tables](s3-tables-setting-up.md).

## Accessing tables through the AWS Glue Data Catalog integration
<a name="table-access-gdc-integration"></a>

You can integrate S3 table buckets with AWS Glue Data Catalog to access tables from AWS analytics services, such as Amazon Athena, Amazon Redshift, and Quick. The integration populates the AWS Glue Data Catalog with your table resources, and federates access to those resources. For more information on integrating, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

The following AWS analytics services can access tables through this integration:
+ [Amazon Athena](s3-tables-integrating-athena.md)
+ [Amazon Redshift](s3-tables-integrating-redshift.md)
+ [Amazon EMR](s3-tables-integrating-emr.md)
+ [Quick](s3-tables-integrating-quicksight.md)
+ [Amazon Data Firehose](s3-tables-integrating-firehose.md)
+ [AWS Glue ETL](s3-tables-integrating-glue.md)
+ [Querying S3 Tables with SageMaker Unified Studio](s3-tables-integrating-sagemaker.md)

### Accessing tables using the AWS Glue Iceberg REST endpoint
<a name="table-access-glue-irc"></a>

Once your S3 table buckets are integrated with AWS Glue Data Catalog, you can also use the AWS Glue Iceberg REST endpoint to connect to S3 tables from third-party query engines that support Iceberg. For more information, see [Accessing Amazon S3 tables using the AWS Glue Iceberg REST endpoint](s3-tables-integrating-glue-endpoint.md).

We recommend using the AWS Glue Iceberg REST endpoint when you want to access tables from Spark, PyIceberg, or other Iceberg-compatible clients.

The following clients can access tables directly through the AWS Glue Iceberg REST endpoint:
+ Any Iceberg client, including Spark, PyIceberg, and more.

## Accessing tables directly
<a name="table-access-direct"></a>

You can access tables directly from open source query engines through methods that bridge S3 Tables management operations to your Apache Iceberg analytics applications. There are two direct access methods: the Amazon S3 Tables Iceberg REST endpoint and the Amazon S3 Tables Catalog for Apache Iceberg. The REST endpoint is the recommended method.

We recommend direct access if you access tables in self-managed catalog implementations, or only need to perform basic read/write operations on tables in a single table bucket. For other access scenarios, we recommend the AWS Glue Data Catalog integration.

Direct access to tables is managed through either IAM identity-based policies or resource-based policies attached to tables and table buckets.
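
As an illustration, a resource-based policy attached to a table bucket might grant a role read-only access along these lines (a hedged sketch with hypothetical principal and table names; confirm the exact `s3tables` action names in the S3 Tables permissions reference):

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:role/analytics-reader"
            },
            "Action": [
                "s3tables:GetTable",
                "s3tables:GetTableData",
                "s3tables:GetTableMetadataLocation"
            ],
            "Resource": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/*"
        }
    ]
}
```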

### Accessing tables through the Amazon S3 Tables Iceberg REST endpoint
<a name="access-tables-irc"></a>

You can use the Amazon S3 Tables Iceberg REST endpoint to access your tables directly from any Iceberg REST-compatible client through HTTP endpoints. For more information, see [Accessing tables using the Amazon S3 Tables Iceberg REST endpoint](s3-tables-integrating-open-source.md). 

The following AWS analytics services and query engines can access tables directly using the Amazon S3 Tables Iceberg REST endpoint:

**Supported query engines**
+ Any Iceberg client, including Spark, PyIceberg, and more.
+ [Amazon EMR](s3-tables-integrating-emr.md)
+ [AWS Glue ETL](s3-tables-integrating-glue.md)

### Accessing tables directly through the Amazon S3 Tables Catalog for Apache Iceberg
<a name="access-client-catalog"></a>

You can also access tables directly from query engines like Apache Spark by using the S3 Tables client catalog. For more information, see [Accessing Amazon S3 tables with the Amazon S3 Tables Catalog for Apache Iceberg](s3-tables-client-catalog.md). However, we recommend using the Amazon S3 Tables Iceberg REST endpoint for direct access because it supports more applications without requiring language-specific or engine-specific code.

The following query engines can access tables directly using the client catalog:
+ [Apache Spark](s3-tables-client-catalog.md#s3-tables-integrating-open-source-spark)

# Amazon S3 Tables integration with AWS analytics services overview
<a name="s3-tables-integration-overview"></a>

To make tables in your account accessible by AWS analytics services, you integrate your Amazon S3 table buckets with AWS Glue Data Catalog. This integration allows AWS analytics services to automatically discover and access your table data. You can use this integration to work with your tables in these services:
+ [Amazon Athena](s3-tables-integrating-athena.md)
+ [Amazon Redshift](s3-tables-integrating-redshift.md)
+ [Amazon EMR](s3-tables-integrating-emr.md)
+ [Quick](s3-tables-integrating-quicksight.md)
+ [Amazon Data Firehose](s3-tables-integrating-firehose.md)

**Note**  
This integration uses AWS Glue and AWS Lake Formation services and might incur AWS Glue request and storage costs. For more information, see [AWS Glue Pricing](https://aws.amazon.com/glue/pricing/).  
Additional pricing applies for running queries on your S3 tables. For more information, see pricing information for the query engine that you're using.

## How the integration works
<a name="how-table-integration-works"></a>

When you integrate S3 Tables with AWS analytics services, Amazon S3 adds a catalog named `s3tablescatalog` to the AWS Glue Data Catalog in the current Region. Adding the `s3tablescatalog` catalog allows all of your table buckets, namespaces, and tables to be populated in the Data Catalog.

**Note**  
These actions are automated through the Amazon S3 console. If you perform this integration programmatically, you must perform these actions manually.

You integrate your table buckets once per AWS Region. After the integration is completed, all current and future table buckets, namespaces, and tables are added to the AWS Glue Data Catalog in that Region.

The following illustration shows how the `s3tablescatalog` catalog automatically populates table buckets, namespaces, and tables in the current Region as corresponding objects in the Data Catalog. Table buckets are populated as subcatalogs. Namespaces within a table bucket are populated as databases within their respective subcatalogs. Tables are populated as tables in their respective databases.

![\[The ways that table resources are represented in AWS Glue Data Catalog.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/S3Tables-glue-catalog.png)
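
For example, with names like those used elsewhere in this guide, the mapping between S3 Tables resources and Data Catalog objects looks like this (illustrative):

```
table bucket  amzn-s3-demo-table-bucket  ->  subcatalog  s3tablescatalog/amzn-s3-demo-table-bucket
namespace     my_namespace               ->  database    my_namespace
table         my_table                   ->  table       my_table
```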


After integrating with the Data Catalog, you can create Apache Iceberg tables in table buckets and access them through AWS analytics engines such as Amazon Athena and Amazon EMR, as well as through third-party analytics engines.

**How permissions work**  
We recommend integrating your table buckets with AWS analytics services so that you can work with your table data across services that use the AWS Glue Data Catalog as a metadata store. Once the integration is enabled, you can use AWS Identity and Access Management (IAM) permissions to grant access to S3 Tables resources and their associated Data Catalog objects.

Make sure that you follow the steps in [Integrating S3 Tables with AWS analytics services](s3-tables-integrating-aws.md) so that you have the appropriate permissions to access the AWS Glue Data Catalog and your table resources, and to work with AWS analytics services.

## Regions supported
<a name="regions-supported-integration-overview"></a>

S3 Tables integration with AWS analytics services uses the AWS Glue Data Catalog with IAM-based access controls in the following AWS Regions. In all other Regions, the integration also requires AWS Lake Formation.
+ US East (N. Virginia)
+ US East (Ohio)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Taipei)
+ Asia Pacific (Tokyo)
+ Asia Pacific (Seoul)
+ Asia Pacific (Osaka)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Jakarta)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Malaysia)
+ Asia Pacific (New Zealand)
+ Asia Pacific (Thailand)
+ Canada (Central)
+ Canada West (Calgary)
+ Europe (Frankfurt)
+ Europe (Zurich)
+ Europe (Stockholm)
+ Europe (Milan)
+ Europe (Spain)
+ Europe (Ireland)
+ Europe (London)
+ Europe (Paris)
+ Israel (Tel Aviv)
+ Mexico (Central)
+ South America (São Paulo)

## Next steps
<a name="next-steps-integration-overview"></a>
+ [Integrating S3 Tables with AWS analytics services](s3-tables-integrating-aws.md)
+ [Create a namespace](s3-tables-namespace-create.md)
+ [Create a table](s3-tables-create.md)

# Integrating Amazon S3 Tables with AWS analytics services
<a name="s3-tables-integrating-aws"></a>

This topic covers the prerequisites and procedures needed to integrate your Amazon S3 table buckets with AWS analytics services. For an overview of how the integration works, see [S3 Tables integration overview](s3-tables-integration-overview.md).

**Note**  
This integration uses the AWS Glue Data Catalog and might incur AWS Glue request and storage costs. For more information, see [AWS Glue Pricing](https://aws.amazon.com/glue/pricing/).  
Additional pricing applies for running queries on S3 Tables. For more information, see pricing information for the query engine that you're using.

## Prerequisites for integration
<a name="table-integration-prerequisites"></a>

The following prerequisites are required to integrate table buckets with AWS analytics services:
+ [Create a table bucket.](s3-tables-buckets-create.md)
+ Add the following AWS Glue permissions to your AWS Identity and Access Management (IAM) principal:
  + `glue:CreateCatalog` - Required to create the `s3tablescatalog` federated catalog in the Data Catalog.
  + `glue:PassConnection` - Grants the calling principal the right to delegate `aws:s3tables` connection creation to the Amazon S3 service.
+ [Update to the latest version of the AWS Command Line Interface (AWS CLI)](https://docs.aws.amazon.com//cli/latest/userguide/getting-started-install.html#getting-started-install-instructions).

**Important**  
When creating tables, make sure that you use all lowercase letters in your table names and table definitions. For example, make sure that your column names are all lowercase. If your table name or table definition contains capital letters, the table isn't supported by AWS Lake Formation or the AWS Glue Data Catalog. In this case, your table won't be visible to AWS analytics services such as Amazon Athena, even if your table buckets are integrated with AWS analytics services.   
If your table definition contains capital letters, you receive the following error message when running a `SELECT` query in Athena: "GENERIC\_INTERNAL\_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names."
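
As a quick local sanity check (a sketch only; the authoritative validation happens in AWS Glue and Lake Formation), you can scan a proposed table or column name for uppercase characters before creating the table:

```shell
# Illustrative check: Glue and Lake Formation require all-lowercase
# table and column names, so flag uppercase before creating the table.
check_name() {
  case "$1" in
    *[ABCDEFGHIJKLMNOPQRSTUVWXYZ]*) echo "invalid" ;;
    *)                              echo "ok" ;;
  esac
}
check_name "daily_sales"   # prints: ok
check_name "DailySales"    # prints: invalid
```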

## Integrating table buckets with AWS analytics services
<a name="table-integration-procedures"></a>

You can integrate table buckets with Data Catalog and AWS analytics services using IAM access controls by default, or optionally use Lake Formation access controls.

When you integrate using IAM access controls, you need IAM permissions to access your Amazon S3 table buckets and tables, Data Catalog objects, and the query engine that you're using. If you choose to integrate using Lake Formation, then both IAM access controls and Lake Formation grants determine access to Data Catalog resources. To learn more about the Lake Formation integration, see [https://docs.aws.amazon.com/lake-formation/latest/dg/create-s3-tables-catalog.html](https://docs.aws.amazon.com/lake-formation/latest/dg/create-s3-tables-catalog.html).

The following sections describe how to use the Amazon S3 console or the AWS CLI to configure the integration with IAM access controls.

### Using the S3 console
<a name="integrate-console"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. Choose **Create table bucket**.

   The **Create table bucket** page opens.

1. Enter a **Table bucket name** and make sure that the **Enable integration** checkbox is selected.

1. Choose **Create table bucket**. Amazon S3 will attempt to automatically integrate your table buckets in that Region.

### Using the AWS CLI
<a name="integrate-cli"></a>

**To integrate table buckets with IAM access controls using the AWS CLI**

The following steps show how to use the AWS CLI to integrate table buckets. To use these steps, replace the `user input placeholders` with your own information.

1. Create a table bucket.

   ```
   aws s3tables create-table-bucket \
   --region us-east-1 \
   --name amzn-s3-demo-table-bucket
   ```

1. Create a file called `catalog.json` that contains the following catalog:

   ```
   {
      "Name": "s3tablescatalog",
      "CatalogInput": {
         "FederatedCatalog": {
             "Identifier": "arn:aws:s3tables:us-east-1:111122223333:bucket/*",
             "ConnectionName": "aws:s3tables"
          },
          "CreateDatabaseDefaultPermissions":[
          {
                   "Principal": {
                       "DataLakePrincipalIdentifier": "IAM_ALLOWED_PRINCIPALS"
                   },
                   "Permissions": ["ALL"]
               }
          ],
          "CreateTableDefaultPermissions":[
          {
                   "Principal": {
                       "DataLakePrincipalIdentifier": "IAM_ALLOWED_PRINCIPALS"
                   },
                   "Permissions": ["ALL"]
               }
          ],
          "AllowFullTableExternalDataAccess": "True"
      }
   }
   ```

   Create the `s3tablescatalog` catalog by using the following command. Creating this catalog populates the AWS Glue Data Catalog with objects corresponding to table buckets, namespaces, and tables.

   ```
   aws glue create-catalog \
   --region us-east-1 \
   --cli-input-json file://catalog.json
   ```

1. Verify that the `s3tablescatalog` catalog was added in AWS Glue by using the following command:

   ```
   aws glue get-catalog --catalog-id s3tablescatalog
   ```

### Migrating to the updated integration process
<a name="migrate-integrate-console"></a>

The AWS analytics services integration process has been updated to use IAM permissions by default. If you've already set up the integration, you can continue to use your current integration. However, if you want to change your existing integration to use IAM permissions instead, see [https://docs.aws.amazon.com/lake-formation/latest/dg/create-s3-tables-catalog.html](https://docs.aws.amazon.com/lake-formation/latest/dg/create-s3-tables-catalog.html). You can also redo the integration to delete your existing setup in AWS Glue Data Catalog and AWS Lake Formation and re-run the integration. This will remove all existing Lake Formation grants and associated access permissions to the `s3tablescatalog`.

1. Open the AWS Lake Formation console at [https://console.aws.amazon.com/lakeformation/](https://console.aws.amazon.com/lakeformation/), and sign in as a data lake administrator. For more information about how to create a data lake administrator, see [Create a data lake administrator](https://docs.aws.amazon.com/lake-formation/latest/dg/initial-lf-config.html#create-data-lake-admin) in the *AWS Lake Formation Developer Guide*.

1. Delete your `s3tablescatalog` catalog by doing the following: 
   + In the left navigation pane, choose **Catalogs**. 
   + Select the option button next to the `s3tablescatalog` catalog in the **Catalogs** list. On the **Actions** menu, choose **Delete**.

1. Deregister the data location for the `s3tablescatalog` catalog by doing the following:
   + In the left navigation pane, go to the **Administration** section, and choose **Data lake locations**. 
   + Select the option button next to the `s3tablescatalog` data lake location, for example, `s3://tables:region:account-id:bucket/*`. 
   + On the **Actions** menu, choose **Remove**. 
   + In the confirmation dialog box that appears, choose **Remove**. 

1. Now that you've deleted your `s3tablescatalog` catalog and data lake location, you can follow the steps to [integrate your table buckets with AWS analytics services](#table-integration-procedures) by using the updated integration process. 

**Note**  
If you want to work with SSE-KMS encrypted tables in integrated AWS analytics services, the role you use needs to have permission to use your AWS KMS key for encryption operations. For more information, see [Granting IAM principals permissions to work with encrypted tables in integrated AWS analytics services](s3-tables-kms-permissions.md#tables-kms-integration-permissions).

**Next steps**
+ [Create a namespace](s3-tables-namespace-create.md).
+ [Create a table](s3-tables-create.md).

# Accessing Amazon S3 tables using the AWS Glue Iceberg REST endpoint
<a name="s3-tables-integrating-glue-endpoint"></a>

Once your S3 table buckets are integrated with the AWS Glue Data Catalog, you can use the AWS Glue Iceberg REST endpoint to connect to your S3 tables from Apache Iceberg-compatible clients, such as PyIceberg or Spark. The AWS Glue Iceberg REST endpoint implements the [Iceberg REST Catalog Open API specification](https://github.com/apache/iceberg/blob/main/open-api/rest-catalog-open-api.yaml), which provides a standardized interface for interacting with Iceberg tables. To access S3 tables using the endpoint, you need to configure permissions through a combination of IAM policies and AWS Lake Formation grants. The following sections explain how to set up access, including creating the necessary IAM role, defining the required policies, and establishing Lake Formation permissions for both database-level and table-level access. 

For an end-to-end walkthrough using PyIceberg, see [Access data in Amazon S3 Tables using PyIceberg through the AWS Glue Iceberg REST endpoint](https://aws.amazon.com/blogs/storage/access-data-in-amazon-s3-tables-using-pyiceberg-through-the-aws-glue-iceberg-rest-endpoint/).

**Prerequisites**
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md)
+ [Create a table namespace](s3-tables-namespace-create.md)
+ [Have access to a data lake administrator account](https://docs.aws.amazon.com//lake-formation/latest/dg/initial-lf-config.html#create-data-lake-admin)

## Create an IAM role for your client
<a name="glue-endpoint-create-iam-role"></a>

To access tables through AWS Glue endpoints, you need to create an IAM role with permissions to AWS Glue and Lake Formation actions. This procedure explains how to create this role and configure its permissions.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**.

1. Choose **Create policy**, and then choose **JSON** in the policy editor.

1. Add the following inline policy that grants permissions to access AWS Glue and Lake Formation actions:

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "VisualEditor0",
               "Effect": "Allow",
               "Action": [
                   "glue:GetCatalog",
                   "glue:GetDatabase",
                   "glue:GetDatabases",
                   "glue:GetTable",
                   "glue:GetTables",
                   "glue:CreateTable",
                   "glue:UpdateTable"
               ],
               "Resource": [
                   "arn:aws:glue:us-east-1:111122223333:catalog",
                   "arn:aws:glue:us-east-1:111122223333:catalog/s3tablescatalog",
                   "arn:aws:glue:us-east-1:111122223333:catalog/s3tablescatalog/amzn-s3-demo-table-bucket",
                   "arn:aws:glue:us-east-1:111122223333:table/s3tablescatalog/amzn-s3-demo-table-bucket/<namespace>/*",
                   "arn:aws:glue:us-east-1:111122223333:database/s3tablescatalog/amzn-s3-demo-table-bucket/<namespace>"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "lakeformation:GetDataAccess"
               ],
               "Resource": "*"
           }
       ]
   }
   ```


1. After you create the policy, create an IAM role and choose **Custom trust policy** as the **Trusted entity type**.

1. Enter the following for the **Custom trust policy**.

   ```
   {
    "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/Admin_role"
               },
               "Action": "sts:AssumeRole",
               "Condition": {}
           }
       ]
   }
   ```


## Define access in Lake Formation
<a name="define-access-lakeformation"></a>

Lake Formation provides fine-grained access control for your data lake tables. When you integrated your S3 table buckets with the AWS Glue Data Catalog, your tables were automatically registered as resources in Lake Formation. To access these tables, you must grant specific Lake Formation permissions to your IAM identity, in addition to its IAM policy permissions.

The following steps explain how to apply Lake Formation access controls to allow your Iceberg client to connect to your tables. You must sign in as a data lake administrator to apply these permissions.

### Allow external engines to access table data
<a name="allow-external-engines"></a>

In Lake Formation, you must enable full table access for external engines to access data. This allows third-party applications to get temporary credentials from Lake Formation when using an IAM role that has full permissions on the requested table.

1. Open the Lake Formation console at [https://console.aws.amazon.com/lakeformation/](https://console.aws.amazon.com/lakeformation/), and sign in as a data lake administrator.

1. In the navigation pane under **Administration**, choose **Application integration settings**.

1. Select **Allow external engines to access data in Amazon S3 locations with full table access**. Then choose **Save**.

### Grant Lake Formation permissions on your table resources
<a name="grant-lakeformation-permissions"></a>

Next, grant Lake Formation permissions to the IAM role you created for your Iceberg-compatible client. These permissions will allow the role to create and manage tables in your namespace. You need to provide both database and table-level permissions. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table).

## Set up your environment to use the endpoint
<a name="setup-client-glue-irc"></a>

After you have set up the IAM role with the permissions required for table access, you can use it to run Iceberg clients from your local machine by configuring the AWS CLI with your role, using the following command:

```
aws sts assume-role --role-arn "arn:aws:iam::<accountid>:role/<glue-irc-role>" --role-session-name <glue-irc-role>
```
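The `assume-role` command returns temporary credentials as JSON. As a minimal sketch of how you might export them (the `Credentials` field names below follow the documented `assume-role` response shape; the values are placeholders):

```python
import json
import os

def creds_to_env(assume_role_json: str) -> dict:
    """Map the Credentials block of `aws sts assume-role` output to the
    environment variables that the AWS CLI and SDKs read."""
    creds = json.loads(assume_role_json)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

# Placeholder values standing in for a real assume-role response:
sample = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secret-example",
        "SessionToken": "token-example",
        "Expiration": "2025-01-01T00:00:00+00:00",
    }
})

env = creds_to_env(sample)
os.environ.update(env)  # later SDK calls in this process now use the role
```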

To access tables through the AWS Glue Iceberg REST endpoint, you need to initialize a catalog in your Iceberg-compatible client. This initialization requires you to specify custom properties, including SigV4 properties, the endpoint URI, and the warehouse location. Specify these properties as follows:
+ SigV4 properties - SigV4 signing must be enabled, and the signing name is `glue`
+ Warehouse location - This is your table bucket, specified in this format: `<accountid>:s3tablescatalog/<table-bucket-name>`
+ Endpoint URI - Refer to the AWS Glue service endpoints reference guide for the Region-specific endpoint

The following example shows how to initialize a PyIceberg catalog.

```
from pyiceberg.catalog import load_catalog

rest_catalog = load_catalog(
    "s3tablescatalog",
    **{
        "type": "rest",
        "warehouse": "<accountid>:s3tablescatalog/<table-bucket-name>",
        "uri": "https://glue.<region>.amazonaws.com/iceberg",
        "rest.sigv4-enabled": "true",
        "rest.signing-name": "glue",
        "rest.signing-region": "<region>"
    }
)
```

For additional information about the AWS Glue Iceberg REST endpoint implementation, see [Connecting to the Data Catalog using AWS Glue Iceberg REST endpoint](https://docs.aws.amazon.com/glue/latest/dg/connect-glu-iceberg-rest.html) in the *AWS Glue User Guide*.

# Accessing tables using the Amazon S3 Tables Iceberg REST endpoint
<a name="s3-tables-integrating-open-source"></a>

You can connect your Iceberg REST client to the Amazon S3 Tables Iceberg REST endpoint and make REST API calls to create, update, or query tables in S3 table buckets. The endpoint implements a set of standardized Iceberg REST APIs specified in the [Apache Iceberg REST Catalog Open API specification](https://github.com/apache/iceberg/blob/main/open-api/rest-catalog-open-api.yaml). The endpoint works by translating Iceberg REST API operations into corresponding S3 Tables operations.

**Note**  
The Amazon S3 Tables Iceberg REST endpoint can be used to access tables in AWS Partner Network (APN) catalog implementations or custom catalog implementations. It can also be used if you only need basic read/write access to a single table bucket. For other access scenarios, we recommend using the AWS Glue Iceberg REST endpoint to connect to tables, which provides unified table management, centralized governance, and fine-grained access control. For more information, see [Accessing Amazon S3 tables using the AWS Glue Iceberg REST endpoint](s3-tables-integrating-glue-endpoint.md).

## Configuring the endpoint
<a name="configure-endpoint"></a>

You connect to the Amazon S3 Tables Iceberg REST endpoint using the service endpoint. S3 Tables Iceberg REST endpoints have the following format:

```
https://s3tables.<REGION>.amazonaws.com/iceberg
```

Refer to [S3 Tables AWS Regions and endpoints](s3-tables-regions-quotas.md#s3-tables-regions) for the Region-specific endpoints.

**Catalog configuration properties**

When using an Iceberg client to connect an analytics engine to the service endpoint, you must specify the following configuration properties when you initialize the catalog. Replace the *placeholder values* with the information for your Region and table bucket.
+ The Region-specific endpoint as the endpoint URI: `https://s3tables.<REGION>.amazonaws.com/iceberg`
+ Your table bucket ARN as the warehouse location: `arn:aws:s3tables:<region>:<accountID>:bucket/<bucketname>`
+ SigV4 properties for authentication. The SigV4 signing name for service endpoint requests is `s3tables`

The following examples show you how to configure different clients to use the Amazon S3 Tables Iceberg REST endpoint.

------
#### [ PyIceberg ]

To use the Amazon S3 Tables Iceberg REST endpoint with PyIceberg, specify the following application configuration properties:

```
rest_catalog = load_catalog(
  catalog_name,
  **{
    "type": "rest",    
    "warehouse":"arn:aws:s3tables:<Region>:<accountID>:bucket/<bucketname>",
    "uri": "https://s3tables.<Region>.amazonaws.com/iceberg",
    "rest.sigv4-enabled": "true",
    "rest.signing-name": "s3tables",
    "rest.signing-region": "<Region>"
  }
)
```

------
#### [ Apache Spark ]

To use the Amazon S3 Tables Iceberg REST endpoint with Spark, specify the following application configuration properties, replacing the *placeholder values* with the information for your Region and table bucket.

```
spark-shell \
  --packages "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.4.1,software.amazon.awssdk:bundle:2.20.160,software.amazon.awssdk:url-connection-client:2.20.160" \
  --master "local[*]" \
  --conf "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions" \
  --conf "spark.sql.defaultCatalog=spark_catalog" \
   --conf "spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog" \
  --conf "spark.sql.catalog.spark_catalog.type=rest" \
  --conf "spark.sql.catalog.spark_catalog.uri=https://s3tables.<Region>.amazonaws.com/iceberg" \
  --conf "spark.sql.catalog.spark_catalog.warehouse=arn:aws:s3tables:<Region>:<accountID>:bucket/<bucketname>" \
  --conf "spark.sql.catalog.spark_catalog.rest.sigv4-enabled=true" \
  --conf "spark.sql.catalog.spark_catalog.rest.signing-name=s3tables" \
  --conf "spark.sql.catalog.spark_catalog.rest.signing-region=<Region>" \
  --conf "spark.sql.catalog.spark_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO" \
  --conf "spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.SimpleAWSCredentialProvider" \
  --conf "spark.sql.catalog.spark_catalog.rest-metrics-reporting-enabled=false"
```

------

## Authenticating and authorizing access to the endpoint
<a name="tables-endpoint-auth"></a>

API requests to the S3 Tables service endpoints are authenticated using AWS Signature Version 4 (SigV4). See [AWS Signature Version 4 for API requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_sigv.html) to learn more about AWS SigV4.

The SigV4 signing name for Amazon S3 Tables Iceberg REST endpoint requests is: `s3tables`

Requests to the Amazon S3 Tables Iceberg REST endpoint are authorized using `s3tables` IAM actions corresponding to the REST API operations. These permissions can be defined in either IAM identity-based policies or resource-based policies attached to tables and table buckets. For more information, see [Access management for S3 Tables](s3-tables-setting-up.md).

You can track requests made to your tables through the REST endpoint with AWS CloudTrail. Requests are logged as their corresponding S3 Tables IAM actions. For example, a `LoadTable` API call generates a management event for the `GetTableMetadataLocation` operation and a data event for the `GetTableData` operation. For more information, see [Logging with AWS CloudTrail for S3 Tables](s3-tables-logging.md).

## Prefix and path parameters
<a name="endpoint-parameter"></a>

Iceberg REST catalog APIs have a free-form prefix in their request URLs. For example, the `ListNamespaces` API call uses the `GET /v1/{prefix}/namespaces` URL format. For S3 Tables, the REST path `{prefix}` is always your URL-encoded table bucket ARN.

For example, for the table bucket ARN `arn:aws:s3tables:us-east-1:111122223333:bucket/bucketname`, the prefix is `arn%3Aaws%3As3tables%3Aus-east-1%3A111122223333%3Abucket%2Fbucketname`.
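You can compute this prefix with any standard URL-encoding routine. For example, in Python (illustrative sketch using the sample ARN above):

```python
from urllib.parse import quote

table_bucket_arn = "arn:aws:s3tables:us-east-1:111122223333:bucket/bucketname"

# safe="" ensures ':' and '/' are percent-encoded too
prefix = quote(table_bucket_arn, safe="")
print(prefix)
# arn%3Aaws%3As3tables%3Aus-east-1%3A111122223333%3Abucket%2Fbucketname

# The prefix is then substituted into REST paths such as:
list_namespaces_path = f"/v1/{prefix}/namespaces"
```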

### Namespace path parameter
<a name="endpoint-parameter-namespace"></a>

Namespaces in an Iceberg REST catalog API path can have multiple levels. However, S3 Tables supports only single-level namespaces. To reach a namespace in a multi-level catalog hierarchy, connect to the multi-level catalog above the namespace and reference the namespace from there. This lets any query engine that supports the three-part `catalog.namespace.table` notation access objects in the S3 Tables catalog hierarchy without the compatibility issues of multi-level namespaces.

## Supported Iceberg REST API operations
<a name="endpoint-supported-api"></a>

The following table contains the supported Iceberg REST APIs and how they correspond to S3 Tables actions. 


| Iceberg REST operation | REST path | S3 Tables IAM action | CloudTrail EventName | 
| --- | --- | --- | --- | 
|  `getConfig`  |  `GET /v1/config`  |  `s3tables:GetTableBucket`  |  `s3tables:GetTableBucket`  | 
|  `listNamespaces`  |  `GET /v1/{prefix}/namespaces`  |  `s3tables:ListNamespaces`  |  `s3tables:ListNamespaces`  | 
|  `createNamespace`  |  `POST /v1/{prefix}/namespaces`  |  `s3tables:CreateNamespace`  |  `s3tables:CreateNamespace`  | 
|  `loadNamespaceMetadata`  |  `GET /v1/{prefix}/namespaces/{namespace}`  |  `s3tables:GetNamespace`  |  `s3tables:GetNamespace`  | 
|  `dropNamespace`  |  `DELETE /v1/{prefix}/namespaces/{namespace}`  |  `s3tables:DeleteNamespace`  |  `s3tables:DeleteNamespace`  | 
|  `listTables`  |  `GET /v1/{prefix}/namespaces/{namespace}/tables`  |  `s3tables:ListTables`  |  `s3tables:ListTables`  | 
|  `createTable`  |  `POST /v1/{prefix}/namespaces/{namespace}/tables`  |  `s3tables:CreateTable`, `s3tables:PutTableData`  |  `s3tables:CreateTable`, `s3tables:PutObject`  | 
|  `loadTable`  |  `GET /v1/{prefix}/namespaces/{namespace}/tables/{table}`  |  `s3tables:GetTableMetadataLocation`, `s3tables:GetTableData`  |  `s3tables:GetTableMetadataLocation`, `s3tables:GetObject`  | 
|  `updateTable`  |  `POST /v1/{prefix}/namespaces/{namespace}/tables/{table}`  |  `s3tables:UpdateTableMetadataLocation`, `s3tables:PutTableData`, `s3tables:GetTableData`  |  `s3tables:UpdateTableMetadataLocation`, `s3tables:PutObject`, `s3tables:GetObject`  | 
|  `dropTable`  |  `DELETE /v1/{prefix}/namespaces/{namespace}/tables/{table}`  |  `s3tables:DeleteTable`  |  `s3tables:DeleteTable`  | 
|  `renameTable`  |  `POST /v1/{prefix}/tables/rename`  |  `s3tables:RenameTable`  |  `s3tables:RenameTable`  | 
|  `tableExists`  |  `HEAD /v1/{prefix}/namespaces/{namespace}/tables/{table}`  |  `s3tables:GetTable`  |  `s3tables:GetTable`  | 
|  `namespaceExists`  |  `HEAD /v1/{prefix}/namespaces/{namespace}`  |  `s3tables:GetNamespace`  |  `s3tables:GetNamespace`  | 
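One way to use this mapping in practice is to check which IAM actions a policy still needs before issuing a given REST operation. The following sketch hard-codes a few rows from the table above (`REQUIRED_ACTIONS` and `missing_actions` are illustrative helpers, not part of any AWS SDK):

```python
# The IAM actions each Iceberg REST operation needs, taken from the
# mapping table above (write operations also need table-data actions).
REQUIRED_ACTIONS = {
    "listNamespaces": ["s3tables:ListNamespaces"],
    "createTable": ["s3tables:CreateTable", "s3tables:PutTableData"],
    "loadTable": ["s3tables:GetTableMetadataLocation", "s3tables:GetTableData"],
    "updateTable": ["s3tables:UpdateTableMetadataLocation",
                    "s3tables:PutTableData", "s3tables:GetTableData"],
    "dropTable": ["s3tables:DeleteTable"],
}

def missing_actions(operation: str, allowed: set) -> list:
    """Return the actions a policy would still need for the given operation."""
    return [a for a in REQUIRED_ACTIONS[operation] if a not in allowed]

# A policy that grants only read actions cannot run updateTable:
allowed = {"s3tables:GetTableMetadataLocation", "s3tables:GetTableData"}
missing = missing_actions("updateTable", allowed)
print(missing)
# ['s3tables:UpdateTableMetadataLocation', 's3tables:PutTableData']
```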

## Considerations and limitations
<a name="endpoint-considerations"></a>

The following are considerations and limitations for using the Amazon S3 Tables Iceberg REST endpoint.

**Considerations**
+ **CreateTable API behavior** – The `stage-create` option is not supported for this operation, and results in a `400 Bad Request` error. This means you cannot create a table from query results using `CREATE TABLE AS SELECT` (CTAS).
+ **DeleteTable API behavior** – You can drop tables only with purge enabled. Dropping tables with `purge=false` is not supported and results in a `400 Bad Request` error. Some versions of Spark set this flag to `false` even when you run `DROP TABLE PURGE` commands. Try `DROP TABLE PURGE`, or use the S3 Tables [DeleteTable](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html) operation to delete the table.
+  The endpoint only supports standard table metadata operations. For table maintenance, such as snapshot management and compaction, use S3 Tables maintenance API operations. For more information, see [S3 Tables maintenance](s3-tables-maintenance-overview.md). 

**Limitations**
+ Multilevel namespaces are not supported.
+ OAuth-based authentication is not supported.
+ Only the `owner` property is supported for namespaces.
+ View-related APIs defined in the [Apache Iceberg REST Open API specification](https://github.com/apache/iceberg/blob/main/open-api/rest-catalog-open-api.yaml) are not supported.
+ Running operations on a table with a `metadata.json` file larger than 50 MB is not supported and returns a `400 Bad Request` error. To control the size of your `metadata.json` files, use table maintenance operations. For more information, see [S3 Tables maintenance](s3-tables-maintenance-overview.md).
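If you keep local copies of table metadata, you can screen for the size limit before issuing operations. A minimal sketch, assuming the limit means 50 binary megabytes (the exact byte count is not specified in this guide):

```python
import json
import os
import tempfile

METADATA_LIMIT_BYTES = 50 * 1024 * 1024  # assumed interpretation of the 50 MB cap

def metadata_within_limit(path: str) -> bool:
    """Check a local metadata.json file against the endpoint's size limit."""
    return os.path.getsize(path) <= METADATA_LIMIT_BYTES

# Illustrative check against a tiny local file:
with tempfile.NamedTemporaryFile("w", suffix=".metadata.json", delete=False) as f:
    json.dump({"format-version": 2, "snapshots": []}, f)
    path = f.name

ok = metadata_within_limit(path)
os.remove(path)
print(ok)  # True for this tiny file
```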

# Accessing Amazon S3 tables with the Amazon S3 Tables Catalog for Apache Iceberg
<a name="s3-tables-client-catalog"></a>

You can access S3 tables from open source query engines like Apache Spark by using the Amazon S3 Tables Catalog for Apache Iceberg client catalog. Amazon S3 Tables Catalog for Apache Iceberg is an open source library hosted by AWS Labs. It works by translating Apache Iceberg operations in your query engines (such as table discovery, metadata updates, and adding or removing tables) into S3 Tables API operations.

Amazon S3 Tables Catalog for Apache Iceberg is distributed as a Maven JAR called `s3-tables-catalog-for-iceberg.jar`. You can build the client catalog JAR from the [AWS Labs GitHub repository](https://github.com/awslabs/s3-tables-catalog) or download it from [Maven](https://mvnrepository.com/artifact/software.amazon.s3tables/s3-tables-catalog-for-iceberg). When connecting to tables, the client catalog JAR is used as a dependency when you initialize a Spark session for Apache Iceberg.

## Using the Amazon S3 Tables Catalog for Apache Iceberg with Apache Spark
<a name="s3-tables-integrating-open-source-spark"></a>

You can use the Amazon S3 Tables Catalog for Apache Iceberg client catalog to connect to tables from open-source applications when you initialize a Spark session. In your session configuration you specify Iceberg and Amazon S3 dependencies, and create a custom catalog that uses your table bucket as the metadata warehouse.

**Prerequisites**
+ An IAM identity with access to your table bucket and S3 Tables actions. For more information, see [Access management for S3 Tables](s3-tables-setting-up.md).

**To initialize a Spark session using the Amazon S3 Tables Catalog for Apache Iceberg**
+ Initialize Spark using the following command. To use the command, replace the Amazon S3 Tables Catalog for Apache Iceberg *version number* with the latest version from the [AWS Labs GitHub repository](https://github.com/awslabs/s3-tables-catalog), and the *table bucket ARN* with your own table bucket ARN.

  ```
  spark-shell \
  --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1,software.amazon.s3tables:s3-tables-catalog-for-iceberg-runtime:0.1.4 \
  --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog \
  --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog \
  --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
  --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
  ```

### Querying S3 tables with Spark SQL
<a name="query-with-spark"></a>

Using Spark, you can run DQL, DML, and DDL operations on S3 tables. When you query tables, you use the fully qualified table name, including the session catalog name, following this pattern:

`CatalogName.NamespaceName.TableName`
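For example, a table named `my_table` in namespace `my_namespace`, reached through a session catalog named `s3tablesbucket`, is addressed as follows (the `qualified_name` helper is illustrative only, not part of any AWS SDK):

```python
def qualified_name(catalog: str, namespace: str, table: str) -> str:
    """Build the CatalogName.NamespaceName.TableName identifier Spark SQL expects."""
    return f"{catalog}.{namespace}.{table}"

fq = qualified_name("s3tablesbucket", "my_namespace", "my_table")
print(fq)  # s3tablesbucket.my_namespace.my_table

query = f"SELECT * FROM {fq} LIMIT 10"
```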

The following example queries show some ways you can interact with S3 tables. To use these example queries in your query engine, replace the *user input placeholder* values with your own.

**To query tables with Spark**
+ Create a namespace

  ```
  spark.sql(" CREATE NAMESPACE IF NOT EXISTS s3tablesbucket.my_namespace")
  ```
+ Create a table

  ```
  spark.sql(" CREATE TABLE IF NOT EXISTS s3tablesbucket.my_namespace.`my_table` 
  ( id INT, name STRING, value INT ) USING iceberg ")
  ```
+ Query a table

  ```
  spark.sql(" SELECT * FROM s3tablesbucket.my_namespace.`my_table` ").show()
  ```
+ Insert data into a table

  ```
  spark.sql(
  """
      INSERT INTO s3tablesbucket.my_namespace.my_table 
      VALUES 
          (1, 'ABC', 100), 
          (2, 'XYZ', 200)
  """)
  ```
+ Load an existing data file into a table

  1. Read the data into Spark.

     ```
     val data_file_location = "Path such as S3 URI to data file"
     val data_file = spark.read.parquet(data_file_location)
     ```

  1. Write the data into an Iceberg table.

     ```
     data_file.writeTo("s3tablesbucket.my_namespace.my_table").using("iceberg").tableProperty("format-version", "2").createOrReplace()
     ```

# Querying Amazon S3 tables with Athena
<a name="s3-tables-integrating-athena"></a>

Amazon Athena is an interactive query service that you can use to analyze data directly in Amazon S3 by using standard SQL. For more information, see [What is Amazon Athena?](https://docs.aws.amazon.com//athena/latest/ug/what-is.html) in the *Amazon Athena User Guide*.

After you integrate your table buckets with AWS analytics services, you can run Data Definition Language (DDL), Data Manipulation Language (DML), and Data Query Language (DQL) queries on S3 tables by using Athena. For more information about how to query tables in a table bucket, see [Register S3 Table bucket catalogs](https://docs.aws.amazon.com//athena/latest/ug/gdc-register-s3-table-bucket-cat.html) in the *Amazon Athena User Guide*.

You can also run queries in Athena from the Amazon S3 console. 

**Important**  
When creating tables, make sure that you use all lowercase letters in your table names and table definitions. For example, make sure that your column names are all lowercase. If your table name or table definition contains capital letters, the table isn't supported by AWS Lake Formation or the AWS Glue Data Catalog. In this case, your table won't be visible to AWS analytics services such as Amazon Athena, even if your table buckets are integrated with AWS analytics services.   
If your table definition contains capital letters, you receive the following error message when running a `SELECT` query in Athena: "GENERIC_INTERNAL_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names."
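Because mixed-case names fail only at query time, it can help to validate names before you create a table. A small illustrative pre-flight check (not an AWS API):

```python
def all_lowercase(*names: str) -> bool:
    """Return True only if every table or column name is entirely lowercase."""
    return all(name == name.lower() for name in names)

ok = all_lowercase("daily_sales", "order_id", "amount")
bad = all_lowercase("daily_sales", "OrderID")
print(ok, bad)  # True False
```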

## Using the S3 console and Amazon Athena
<a name="query-table-console"></a>

The following procedure uses the Amazon S3 console to access the Athena query editor so that you can query a table with Amazon Athena. 

**Note**  
Before performing the following steps, make sure that you've integrated your table buckets with AWS analytics services in this Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

**To query a table**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that contains the table that you want to query.

1. On the bucket details page, choose the option button next to the name of the table that you want to query. 

1. Choose **Query table with Athena**.

1. The Amazon Athena console opens and the Athena query editor appears with a sample `SELECT` query loaded for you. Modify this query as needed for your use case.

   In the query editor, the **Catalog** field should be populated with **s3tablescatalog/** followed by the name of your table bucket, for example, **s3tablescatalog/*amzn-s3-demo-bucket***. The **Database** field should be populated with the namespace where your table is stored. 
**Note**  
If you don't see these values in the **Catalog** and **Database** fields, make sure that you've integrated your table buckets with AWS analytics services in this Region. For more information, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md). 

1. To run the query, choose **Run**.
**Note**  
If you receive the error "Insufficient permissions to execute the query. Principal does not have any privilege on specified resource" when you try to run a query in Athena, you must be granted the necessary Lake Formation permissions on the table. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table).
If you receive the error "Iceberg cannot access the requested resource" when you try to run the query, go to the AWS Lake Formation console and make sure that you've granted yourself permissions on the table bucket catalog and database (namespace) that you created. Don't specify a table when granting these permissions. For more information, see [Granting Lake Formation permission on a table or database](grant-permissions-tables.md#grant-lf-table). 
If you receive the following error message when running a `SELECT` query in Athena, it is caused by capital letters in your table name or in the column names of your table definition: "GENERIC_INTERNAL_ERROR: Get table request failed: com.amazonaws.services.glue.model.ValidationException: Unsupported Federation Resource - Invalid table or column names." Make sure that your table and column names are all lowercase.

# Accessing Amazon S3 tables with Amazon Redshift
<a name="s3-tables-integrating-redshift"></a>

Amazon Redshift is a fast, fully managed, petabyte-scale data warehouse service that makes it simple and cost-effective to efficiently analyze all your data using your existing business intelligence tools. Redshift Serverless lets you access and analyze data without all of the configurations of a provisioned data warehouse. For more information, see [Get started with serverless data warehouses](https://docs.aws.amazon.com//redshift/latest/gsg/new-user-serverless.html) in the *Amazon Redshift Getting Started Guide*.

## Query Amazon S3 tables with Amazon Redshift
<a name="rs-query-table"></a>

**Prerequisites**
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).
  + [Create a namespace](s3-tables-namespace-create.md).
  + [Create a table](s3-tables-create.md).
+ [Managing access to a table or database with Lake Formation](grant-permissions-tables.md).

After you complete the prerequisites, you can begin using Amazon Redshift to query tables in one of the following ways:
+ [Using the Amazon Redshift query editor v2](https://docs.aws.amazon.com//redshift/latest/mgmt/query-editor-v2.html)
+ [Connecting to an Amazon Redshift data warehouse using SQL client tools](https://docs.aws.amazon.com//redshift/latest/mgmt/connecting-to-cluster.html)
+ [Using the Amazon Redshift Data API](https://docs.aws.amazon.com//redshift/latest/mgmt/data-api.html)

# Accessing Amazon S3 tables with Amazon EMR
<a name="s3-tables-integrating-emr"></a>

Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. Using these frameworks and related open-source projects, you can process data for analytics purposes and business intelligence workloads. Amazon EMR also lets you transform and move large amounts of data into and out of other AWS data stores and databases.

You can use Apache Iceberg clusters in Amazon EMR to work with S3 tables by connecting to table buckets in a Spark session. To connect to table buckets in Amazon EMR, you can use the AWS analytics services integration through AWS Glue Data Catalog, or you can use the open source Amazon S3 Tables Catalog for Apache Iceberg client catalog.

**Note**  
S3 Tables is supported on [Amazon EMR version 7.5](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-components.html) or higher.

## Connecting to S3 table buckets with Spark on an Amazon EMR Iceberg cluster
<a name="emr-setup-cluster-spark"></a>

In this procedure, you set up an Amazon EMR cluster configured for Apache Iceberg and then launch a Spark session that connects to your table buckets. You can set this up using the AWS analytics services integration through AWS Glue, or you can use the open source Amazon S3 Tables Catalog for Apache Iceberg client catalog. For information about the client catalog, see [Accessing Amazon S3 tables with the Amazon S3 Tables Catalog for Apache Iceberg](s3-tables-client-catalog.md). 

Choose your method of using tables with Amazon EMR from the following options.

------
#### [ Amazon S3 Tables Catalog for Apache Iceberg ]

The following prerequisites are required to query tables with Spark on Amazon EMR using the Amazon S3 Tables Catalog for Apache Iceberg.

For the latest version of the client catalog JAR, see the [s3-tables-catalog GitHub repository](https://github.com/awslabs/s3-tables-catalog).

**Prerequisites**
+ Attach the `AmazonS3TablesFullAccess` policy to the IAM role you use for Amazon EMR.

**To set up an Amazon EMR cluster to query tables with Spark**

1. Create a cluster with the following configuration. To use this example, replace the `user input placeholders` with your own information.

   ```
   aws emr create-cluster --release-label emr-7.5.0 \
   --applications Name=Spark \
   --configurations file://configurations.json \
   --region us-east-1 \
   --name My_Spark_Iceberg_Cluster \
   --log-uri s3://amzn-s3-demo-bucket/ \
   --instance-type m5.xlarge \
   --instance-count 2 \
   --service-role EMR_DefaultRole \
   --ec2-attributes \
   InstanceProfile=EMR_EC2_DefaultRole,SubnetId=subnet-1234567890abcdef0,KeyName=my-key-pair
   ```

   `configurations.json`:

   ```
   [{
   "Classification":"iceberg-defaults",
   "Properties":{"iceberg.enabled":"true"}
   }]
   ```

1. [Connect to the Spark primary node using SSH](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-connect-master-node-ssh.html#emr-connect-cli).

1. To initialize a Spark session for Iceberg that connects to your table bucket, enter the following command. Replace the `user input placeholders` with your table bucket ARN.

   ```
   spark-shell \
   --packages software.amazon.s3tables:s3-tables-catalog-for-iceberg-runtime:0.1.8 \
   --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog \
   --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog \
   --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1 \
   --conf spark.sql.defaultCatalog=s3tablesbucket \
   --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
   ```

1. Query your tables with Spark SQL. For example queries, see [Querying S3 tables with Spark SQL](s3-tables-client-catalog.md#query-with-spark).

------
#### [ AWS analytics services integration ]

The following prerequisites are required to query tables with Spark on Amazon EMR using the AWS analytics services integration.

**Prerequisites**
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).
+ Create the default service role for Amazon EMR (`EMR_DefaultRole_V2`). For details, see [Service role for Amazon EMR (EMR role) ](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-iam-role.html).
+ Create the Amazon EC2 instance profile for Amazon EMR (`EMR_EC2_DefaultRole`). For details, see [Service role for cluster EC2 instances (EC2 instance profile)](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-iam-role-ec2.html). 
  + Attach the `AmazonS3TablesFullAccess` policy to `EMR_EC2_DefaultRole`.

**To set up an Amazon EMR cluster to query tables with Spark**

1. Create a cluster with the following configuration. To use this example, replace the `user input placeholder` values with your own information.

   ```
   aws emr create-cluster --release-label emr-7.5.0 \
   --applications Name=Spark \
   --configurations file://configurations.json \
   --region us-east-1 \
   --name My_Spark_Iceberg_Cluster \
   --log-uri s3://amzn-s3-demo-bucket/ \
   --instance-type m5.xlarge \
   --instance-count 2 \
   --service-role EMR_DefaultRole \
   --ec2-attributes \
   InstanceProfile=EMR_EC2_DefaultRole,SubnetId=subnet-1234567890abcdef0,KeyName=my-key-pair
   ```

   `configurations.json`:

   ```
   [{
   "Classification":"iceberg-defaults",
   "Properties":{"iceberg.enabled":"true"}
   }]
   ```

1. [Connect to the Spark primary node using SSH](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-connect-master-node-ssh.html#emr-connect-cli).

1. Enter the following command to initialize a Spark session for Iceberg that connects to your tables. Replace the `user input placeholders` for Region, account ID and table bucket name with your own information.

   ```
   spark-shell \
   --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \
   --conf spark.sql.defaultCatalog=s3tables \
   --conf spark.sql.catalog.s3tables=org.apache.iceberg.spark.SparkCatalog \
   --conf spark.sql.catalog.s3tables.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog \
   --conf spark.sql.catalog.s3tables.client.region=us-east-1 \
   --conf spark.sql.catalog.s3tables.glue.id=111122223333:s3tablescatalog/amzn-s3-demo-table-bucket
   ```

1. Query your tables with Spark SQL. For example queries, see [Querying S3 tables with Spark SQL](s3-tables-client-catalog.md#query-with-spark)

------

**Note**  
If you are using the `DROP TABLE PURGE` command with Amazon EMR:  
Amazon EMR version 7.5  
Set the Spark config `spark.sql.catalog.your-catalog-name.cache-enabled` to `false`. If this config is set to `true`, run the command in a new session or application so the table cache is not activated.
Amazon EMR versions higher than 7.5  
`DROP TABLE` is not supported. You can use the S3 Tables `DeleteTable` REST API to delete a table.

# Visualizing table data with Quick
<a name="s3-tables-integrating-quicksight"></a>

Quick is a fast business analytics service that you can use to build visualizations, perform ad hoc analysis, and quickly get business insights from your data. Quick seamlessly discovers AWS data sources, enables organizations to scale to hundreds of thousands of users, and delivers fast and responsive query performance by using the Quick Super-fast, Parallel, In-Memory Calculation Engine (SPICE). For more information, see [What is Quick?](https://docs.aws.amazon.com//quicksight/latest/user/welcome.html) in the *Quick User Guide*.

After you [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md), you can create data sets from your tables and work with them in Quick using SPICE or direct SQL queries from your query engine. Quick supports Athena as a data source for S3 tables.

## Configure permissions for Quick to access tables
<a name="quicksight-permissions-tables"></a>

Before working with S3 table data in Quick, you must grant the Quick service role and the Quick admin user permissions on the tables you want to access. Additionally, if you use AWS Lake Formation, you must also grant Lake Formation permissions to your Quick admin user on those tables.

**Grant permissions to the Quick service role**

When you set up Quick for the first time in your account, AWS creates a service role that allows Quick to access data sources in other AWS services, such as Athena or Amazon Redshift. The default role name is `aws-quicksight-service-role-v0`.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. Choose **Roles** and select the Quick service role. The default name is `aws-quicksight-service-role-v0`.

1. Choose **Add permissions** and then **Create inline policy**.

1. Select **JSON** to open the JSON policy editor, then add the following policy.

------
#### [ JSON ]

****  

   ```
   {
     "Version":"2012-10-17",		 	 	 
     "Statement": [
       {
         "Sid": "VisualEditor0",
         "Effect": "Allow",
         "Action": "glue:GetCatalog",
         "Resource": "*"
       }
     ]
   }
   ```

------

1. Choose **Next**, enter a **Policy name**, and then choose **Create policy**.

**To configure Lake Formation permissions for the Quick admin user**

1. Run the following AWS CLI command to find the ARN of your Quick admin user.

   ```
   aws quicksight list-users --aws-account-id 111122223333 --namespace default --region region
   ```

1. Grant Lake Formation permissions to this ARN. For details, see [Managing access to a table or database with Lake Formation](grant-permissions-tables.md).
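Both steps can also be done programmatically. The following is an illustrative boto3-style sketch: it only builds the request for the Lake Formation `GrantPermissions` API (the actual call is commented out), and the user ARN, catalog ID, namespace, and table name are placeholders.

```python
# Sketch: build a Lake Formation grant giving the Quick admin user SELECT
# on one table. Field names follow the Lake Formation API; all identifiers
# below are placeholders.
def build_lakeformation_grant(principal_arn, catalog_id, database, table):
    return {
        "Principal": {"DataLakePrincipalIdentifier": principal_arn},
        "Resource": {
            "Table": {
                "CatalogId": catalog_id,
                "DatabaseName": database,  # your table namespace
                "Name": table,
            }
        },
        "Permissions": ["SELECT"],
    }

params = build_lakeformation_grant(
    "arn:aws:quicksight:us-east-1:111122223333:user/default/quicksight-admin",
    "111122223333:s3tablescatalog/amzn-s3-demo-table-bucket",
    "namespace",
    "table-name",
)
# boto3.client("lakeformation").grant_permissions(**params)  # needs credentials
```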

## Using table data in Quick
<a name="quicksight-connect-tables"></a>

You can connect to table data using Athena as a data source.

**Prerequisites**
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).
  + [Create a namespace](s3-tables-namespace-create.md).
  + [Create a table](s3-tables-create.md).
  + [Configure permissions for Quick to access tables](#quicksight-permissions-tables).
+ [Sign up for Quick](https://docs.aws.amazon.com/quicksight/latest/user/signing-up.html).

1. Sign in to your Quick account at [https://quicksight.aws.amazon.com/](https://quicksight.aws.amazon.com/).

1. In the dashboard, choose **New analysis**.

1. Choose **New dataset**.

1. Select **Athena**.

1. Enter a **Data source name**, then choose **Create data source**.

1. Choose **Use custom SQL**. You can't select S3 tables directly from the **Choose your table** pane.

1. Enter an Athena SQL query that captures the columns you want to visualize, then choose **Confirm query**. For example, use the following query to select all columns:

   ```
   SELECT * FROM "s3tablescatalog/table-bucket-name".namespace.table-name
   ```

1. Choose **Visualize** to analyze data and start building dashboards. For more information, see [Visualizing data in Quick](https://docs.aws.amazon.com//quicksight/latest/user/working-with-visuals.html) and [Exploring interactive dashboards in Quick](https://docs.aws.amazon.com//quicksight/latest/user/using-dashboards.html).

# Streaming data to tables with Amazon Data Firehose
<a name="s3-tables-integrating-firehose"></a>

Amazon Data Firehose is a fully managed service for delivering real-time [streaming data](https://aws.amazon.com//streaming-data/) to destinations such as Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Splunk, Apache Iceberg tables, and custom HTTP endpoints or HTTP endpoints owned by supported third-party service providers. With Amazon Data Firehose, you don't need to write applications or manage resources. You configure your data producers to send data to Firehose, and it automatically delivers the data to the destination that you specified. You can also configure Firehose to transform your data before delivering it. To learn more about Amazon Data Firehose, see [What is Amazon Data Firehose?](https://docs.aws.amazon.com//firehose/latest/dev/what-is-this-service.html)

Complete these steps to set up Firehose streaming to tables in S3 table buckets:

1.  [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md). 

1. Configure Firehose to deliver data into your S3 tables. To do so, you [create an AWS Identity and Access Management (IAM) service role that allows Firehose to access your tables](#firehose-role-s3tables).

1. Grant the Firehose service role explicit permissions to your table or table's namespace. For more information, see [Grant necessary permissions](https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-prereq.html#s3-tables-prerequisites).

1. [Create a Firehose stream that routes data to your table.](#firehose-stream-tables)

## Creating a role for Firehose to use S3 tables as a destination
<a name="firehose-role-s3tables"></a>

Firehose needs an IAM [service role](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-service.html) with specific permissions to access AWS Glue tables and write data to S3 tables. You provide this IAM role when you create a Firehose stream.

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the left navigation pane, choose **Policies**.

1. Choose **Create policy**, and then choose **JSON** in the policy editor.

1. Add the following inline policy that grants permissions to all databases and tables in your data catalog. If you want, you can give permissions only to specific tables and databases. To use this policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Sid": "S3TableAccessViaGlueFederation",
               "Effect": "Allow",
               "Action": [
                   "glue:GetTable",
                   "glue:GetDatabase",
                   "glue:UpdateTable"
               ],
               "Resource": [
                   "arn:aws:glue:us-east-1:111122223333:catalog/s3tablescatalog/*",
                   "arn:aws:glue:us-east-1:111122223333:catalog/s3tablescatalog",
                   "arn:aws:glue:us-east-1:111122223333:catalog",
                   "arn:aws:glue:us-east-1:111122223333:database/*",
                   "arn:aws:glue:us-east-1:111122223333:table/*/*"
               ]
           },
           {
               "Sid": "S3DeliveryErrorBucketPermission",
               "Effect": "Allow",
               "Action": [
                   "s3:AbortMultipartUpload",
                   "s3:GetBucketLocation",
                   "s3:GetObject",
                   "s3:ListBucket",
                   "s3:ListBucketMultipartUploads",
                   "s3:PutObject"
               ],
               "Resource": [
                   "arn:aws:s3:::error delivery bucket",
                   "arn:aws:s3:::error delivery bucket/*"
               ]
           },
           {
               "Sid": "RequiredWhenUsingKinesisDataStreamsAsSource",
               "Effect": "Allow",
               "Action": [
                   "kinesis:DescribeStream",
                   "kinesis:GetShardIterator",
                   "kinesis:GetRecords",
                   "kinesis:ListShards"
               ],
               "Resource": "arn:aws:kinesis:us-east-1:111122223333:stream/stream-name"
           },
           {
               "Sid": "RequiredWhenDoingMetadataReadsANDDataAndMetadataWriteViaLakeformation",
               "Effect": "Allow",
               "Action": [
                   "lakeformation:GetDataAccess"
               ],
               "Resource": "*"
           },
           {
               "Sid": "RequiredWhenUsingKMSEncryptionForS3ErrorBucketDelivery",
               "Effect": "Allow",
               "Action": [
                   "kms:Decrypt",
                   "kms:GenerateDataKey"
               ],
               "Resource": [
                   "arn:aws:kms:us-east-1:111122223333:key/KMS-key-id"
               ],
               "Condition": {
                   "StringEquals": {
                       "kms:ViaService": "s3.us-east-1.amazonaws.com"
                   },
                   "StringLike": {
                       "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::error delivery bucket/prefix*"
                   }
               }
           },
           {
               "Sid": "LoggingInCloudWatch",
               "Effect": "Allow",
               "Action": [
                   "logs:PutLogEvents"
               ],
               "Resource": [
                   "arn:aws:logs:us-east-1:111122223333:log-group:log-group-name:log-stream:log-stream-name"
               ]
           },
           {
               "Sid": "RequiredWhenAttachingLambdaToFirehose",
               "Effect": "Allow",
               "Action": [
                   "lambda:InvokeFunction",
                   "lambda:GetFunctionConfiguration"
               ],
               "Resource": [
                   "arn:aws:lambda:us-east-1:111122223333:function:function-name:function-version"
               ]
           }
       ]
   }
   ```

------

   This policy has statements that allow access to Kinesis Data Streams, invoking Lambda functions, and access to AWS KMS keys. If you don't use any of these resources, you can remove the corresponding statements.

   If error logging is enabled, Firehose also sends data delivery errors to your CloudWatch log group and log streams, so you must configure log group and log stream names. For details, see [Monitor Amazon Data Firehose Using CloudWatch Logs](https://docs.aws.amazon.com//firehose/latest/dev/controlling-access.html#using-iam-iceberg).

1. After you create the policy, create an IAM role with **AWS service** as the **Trusted entity type**.

1. For **Service or use case**, choose **Kinesis**. For **Use case** choose **Kinesis Firehose**.

1. Choose **Next**, and then select the policy you created earlier.

1. Give your role a name. Review your role details, and choose **Create role**. The role will have the following trust policy.

------
#### [ JSON ]

****  

   ```
   {
       "Version":"2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "sts:AssumeRole"
               ],
               "Principal": {
                   "Service": [
                       "firehose.amazonaws.com"
                   ]
               }
           }
       ]
   }
   ```

------
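Because each optional capability in the permissions policy earlier in this procedure is isolated in its own statement with a unique `Sid`, trimming unused statements is mechanical. The following is an illustrative Python sketch; the `Sid` values match the policy above, but the statement bodies are abbreviated.

```python
import json

def drop_statements(policy, sids_to_remove):
    """Return a copy of an IAM policy without the named statements."""
    trimmed = dict(policy)
    trimmed["Statement"] = [
        s for s in policy["Statement"] if s.get("Sid") not in sids_to_remove
    ]
    return trimmed

# Abbreviated policy -- only the Sids matter for this sketch.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "S3TableAccessViaGlueFederation", "Effect": "Allow"},
        {"Sid": "RequiredWhenUsingKinesisDataStreamsAsSource", "Effect": "Allow"},
        {"Sid": "RequiredWhenAttachingLambdaToFirehose", "Effect": "Allow"},
    ],
}

# A Direct PUT stream with no Lambda transform needs neither of these.
slim = drop_statements(policy, {
    "RequiredWhenUsingKinesisDataStreamsAsSource",
    "RequiredWhenAttachingLambdaToFirehose",
})
print(json.dumps([s["Sid"] for s in slim["Statement"]]))
# ["S3TableAccessViaGlueFederation"]
```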

## Creating a Firehose stream to S3 tables
<a name="firehose-stream-tables"></a>

The following procedure shows how to use the console to create a Firehose stream that delivers data to S3 tables.

**Prerequisites**
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).
  + [Create a namespace](s3-tables-namespace-create.md).
  + [Create a table](s3-tables-create.md).
+ Create the [role for Firehose to access S3 tables](#firehose-role-s3tables).
+ [Grant necessary permissions](https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-prereq.html#s3-tables-prerequisites) to the Firehose service role you created to access tables.

To provide routing information to Firehose when you configure a stream, you use your namespace as the database name and a table in that namespace as the table name. You can use these values in the **Unique keys** section of a Firehose stream configuration to route data to a single table, or route to a table by using JSON Query expressions. For more information, see [Route incoming records to a single Iceberg table](https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-format-input-record.html).
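The routing described above can be pictured with a small local sketch. This is not Firehose's implementation, only an illustration of the idea: explicit routing keys in a record (hypothetical names here) override the configured default database and table.

```python
def route_record(record, default_database, default_table):
    """Resolve a record's destination (database, table).

    Illustrative only: routing keys in the record (hypothetical names
    'destination_database'/'destination_table') override the configured
    defaults, mirroring how Firehose routing rules behave.
    """
    database = record.get("destination_database", default_database)
    table = record.get("destination_table", default_table)
    return database, table

# A record without routing keys lands in the configured default table.
print(route_record({"id": 1}, "my_namespace", "daily_sales"))
# A record can override its destination table.
print(route_record({"id": 2, "destination_table": "returns"},
                   "my_namespace", "daily_sales"))
```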

**To set up a Firehose stream to S3 tables (Console)**

1. Open the Firehose console at [https://console.aws.amazon.com/firehose/](https://console.aws.amazon.com/firehose/).

1. Choose **Create Firehose stream**.

1. For **Source**, choose one of the following sources:
   + Amazon Kinesis Data Streams
   + Amazon MSK
   + Direct PUT

1. For **Destination**, choose **Apache Iceberg Tables**.

1. Enter a **Firehose stream name**.

1. Configure your **Source settings**.

1. For **Destination settings**, choose **Current account** to stream to tables in your account or **Cross-account** for tables in another account.
   + For tables in the **Current account**, select your S3 Tables catalog from the **Catalog** dropdown.
   + For tables in a **Cross-account**, enter the **Catalog ARN** of the catalog in the other account that you want to stream to.

1. Configure database and table names using **Unique Key configuration**, JSONQuery expressions, or in a Lambda function. For more information, refer to [Route incoming records to a single Iceberg table](https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-format-input-record.html) and [Route incoming records to different Iceberg tables](https://docs.aws.amazon.com//firehose/latest/dev/apache-iceberg-format-input-record-different.html) in the *Amazon Data Firehose Developer Guide*.

1. Under **Backup settings**, specify an **S3 backup bucket**.

1. For **Existing IAM roles** under **Advanced settings**, select the IAM role you created for Firehose.

1. Choose **Create Firehose stream**.

For more information about the other settings that you can configure for a stream, see [Set up the Firehose stream](https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-stream.html) in the *Amazon Data Firehose Developer Guide*.
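Putting the pieces together, a stream like the one in this procedure could also be created programmatically. The following boto3-style sketch only builds the request parameters (the call itself is commented out); the field names follow the Firehose `CreateDeliveryStream` API for Iceberg destinations, and all ARNs and names are placeholders, so verify against the current API reference before use.

```python
# Illustrative sketch: build the request for firehose.create_delivery_stream
# with an Apache Iceberg destination. All ARNs and names are placeholders.
def build_iceberg_stream_params(stream_name, role_arn, catalog_arn,
                                namespace, table, error_bucket_arn):
    return {
        "DeliveryStreamName": stream_name,
        "DeliveryStreamType": "DirectPut",
        "IcebergDestinationConfiguration": {
            "RoleARN": role_arn,
            "CatalogConfiguration": {"CatalogARN": catalog_arn},
            "DestinationTableConfigurationList": [{
                "DestinationDatabaseName": namespace,  # your table namespace
                "DestinationTableName": table,
            }],
            "S3Configuration": {  # where failed records are delivered
                "RoleARN": role_arn,
                "BucketARN": error_bucket_arn,
            },
        },
    }

params = build_iceberg_stream_params(
    "s3-tables-stream",
    "arn:aws:iam::111122223333:role/firehose-s3-tables-role",
    "arn:aws:glue:us-east-1:111122223333:catalog/s3tablescatalog/amzn-s3-demo-table-bucket",
    "my_namespace",
    "my_table",
    "arn:aws:s3:::amzn-s3-demo-error-bucket",
)
# boto3.client("firehose").create_delivery_stream(**params)  # needs credentials
```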

# Running ETL jobs on Amazon S3 tables with AWS Glue
<a name="s3-tables-integrating-glue"></a>

AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue jobs to run extract, transform, and load (ETL) pipelines to load data into your data lakes. For more information about AWS Glue, see [What is AWS Glue?](https://docs.aws.amazon.com//glue/latest/dg/what-is-glue.html) in the *AWS Glue Developer Guide*.

An AWS Glue job encapsulates a script that connects to your source data, processes it, and then writes it out to your data target. Typically, a job runs extract, transform, and load (ETL) scripts. Jobs can run scripts designed for Apache Spark runtime environments. You can monitor job runs to understand runtime metrics such as completion status, duration, and start time.

You can use AWS Glue jobs to process data in your S3 tables by connecting to your tables through the integration with AWS analytics services, or by connecting directly using the Amazon S3 Tables Iceberg REST endpoint or the Amazon S3 Tables Catalog for Apache Iceberg. This guide covers the basic steps to get started using AWS Glue with S3 Tables:

**Topics**
+ [Step 1 – Prerequisites](#glue-etl-prereqs)
+ [Step 2 – Create a script to connect to table buckets](#glue-etl-script)
+ [Step 3 – Create an AWS Glue job that queries tables](#glue-etl-job)

Choose your access method based on your specific AWS Glue ETL job requirements:
+ **AWS analytics services integration (Recommended)** – Recommended when you need centralized metadata management across multiple AWS analytics services, need to leverage existing AWS Glue Data Catalog permissions and optionally Lake Formation, or are building production ETL pipelines that integrate with other AWS services like Athena or Amazon EMR.
+ **Amazon S3 Tables Iceberg REST endpoint** – Recommended when you need to connect to S3 tables from third-party query engines that support Apache Iceberg, build custom ETL applications that need direct REST API access, or when you require control over catalog operations without dependencies on AWS Glue Data Catalog.
+ **Amazon S3 Tables Catalog for Apache Iceberg** – Use only for legacy applications or specific programmatic scenarios that require the Java client library. This method is not recommended for new AWS Glue ETL job implementations due to additional `JAR` dependency management and complexity.

**Note**  
S3 Tables is supported on [AWS Glue version 5.0 or higher](https://docs.aws.amazon.com//glue/latest/dg/release-notes.html).

## Step 1 – Prerequisites
<a name="glue-etl-prereqs"></a>

Before you can query tables from an AWS Glue job, you must configure an IAM role that AWS Glue can use to run the job. Choose your access method to see the specific prerequisites for that method.

------
#### [ AWS analytics services integration (Recommended) ]

The following prerequisites are required to use the S3 Tables AWS analytics services integration to run AWS Glue jobs.
+ [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).
+ [Create an IAM role for AWS Glue](https://docs.aws.amazon.com//glue/latest/dg/create-an-iam-role.html).
  + Attach the `AmazonS3TablesFullAccess` managed policy to the role.
  + Attach the `AmazonS3FullAccess` managed policy to the role.

------
#### [ Amazon S3 Tables Iceberg REST endpoint ]

The following prerequisites are required to use the Amazon S3 Tables Iceberg REST endpoint to run AWS Glue ETL jobs.
+ [Create an IAM role for AWS Glue](https://docs.aws.amazon.com//glue/latest/dg/create-an-iam-role.html).
  + Attach the `AmazonS3TablesFullAccess` managed policy to the role.
  + Attach the `AmazonS3FullAccess` managed policy to the role.

------
#### [ Amazon S3 Tables Catalog for Apache Iceberg ]

The following prerequisites are required to use the Amazon S3 Tables Catalog for Apache Iceberg to run AWS Glue ETL jobs.
+ [Create an IAM role for AWS Glue](https://docs.aws.amazon.com//glue/latest/dg/create-an-iam-role.html).
  + Attach the `AmazonS3TablesFullAccess` managed policy to the role.
  + Attach the `AmazonS3FullAccess` managed policy to the role.
  + To use the Amazon S3 Tables Catalog for Apache Iceberg, you need to download the client catalog JAR and upload it to an S3 bucket.

**Downloading the catalog JAR**

    1. Check for the latest version on [Maven Central](https://mvnrepository.com/artifact/software.amazon.s3tables/s3-tables-catalog-for-iceberg-runtime). You can download the JAR from Maven Central using your browser, or by using the following command. Make sure to replace the *version number* with the latest version.

       ```
       wget https://repo1.maven.org/maven2/software/amazon/s3tables/s3-tables-catalog-for-iceberg-runtime/0.1.5/s3-tables-catalog-for-iceberg-runtime-0.1.5.jar                       
       ```

    1. Upload the downloaded JAR to an S3 bucket that your AWS Glue IAM role can access. You can use the following AWS CLI command to upload the JAR. Make sure to replace the *version number* with the latest version, and the *bucket name* and *path* with your own.

       ```
       aws s3 cp s3-tables-catalog-for-iceberg-runtime-0.1.5.jar s3://amzn-s3-demo-bucket/jars/
       ```

------

## Step 2 – Create a script to connect to table buckets
<a name="glue-etl-script"></a>

To access your table data when you run an AWS Glue ETL job, you configure a Spark session for Apache Iceberg that connects to your S3 table bucket. You can modify an existing script to connect to your table bucket or create a new script. For more information on creating AWS Glue scripts, see [Tutorial: Writing an AWS Glue for Spark script](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-intro-tutorial.html) in the *AWS Glue Developer Guide*.

You can configure the session to connect to your table buckets through any of the following S3 Tables access methods:
+ S3 Tables AWS analytics services integration (Recommended)
+ Amazon S3 Tables Iceberg REST endpoint
+ Amazon S3 Tables Catalog for Apache Iceberg

Choose from the following access methods to view setup instructions and configuration examples.

------
#### [ AWS analytics services integration (Recommended) ]

As a prerequisite to querying tables with Spark on AWS Glue using the AWS analytics services integration, you must [Integrate your table buckets with AWS analytics services](s3-tables-integrating-aws.md).

You can configure the connection to your table bucket through a Spark session in a job or with AWS Glue Studio magics in an interactive session. To use the following examples, replace the *placeholder values* with the information for your own table bucket.

**Using a PySpark script**  
Use the following code snippet in a PySpark script to configure an AWS Glue job to connect to your table bucket using the integration.  

```
spark = SparkSession.builder.appName("SparkIcebergSQL") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog","s3tables") \
    .config("spark.sql.catalog.s3tables", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tables.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog") \
    .config("spark.sql.catalog.s3tables.glue.id", "111122223333:s3tablescatalog/amzn-s3-demo-table-bucket") \
    .config("spark.sql.catalog.s3tables.warehouse", "s3://amzn-s3-demo-table-bucket/warehouse/") \
    .getOrCreate()
```

**Using an interactive AWS Glue session**  
If you are using an interactive notebook session with AWS Glue 5.0, specify the same configurations using the `%%configure` magic in a cell prior to code execution.  

```
%%configure
{"conf": "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.sql.defaultCatalog=s3tables --conf spark.sql.catalog.s3tables=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.s3tables.catalog-impl=org.apache.iceberg.aws.glue.GlueCatalog --conf spark.sql.catalog.s3tables.glue.id=111122223333:s3tablescatalog/amzn-s3-demo-table-bucket --conf spark.sql.catalog.s3tables.warehouse=s3://amzn-s3-demo-table-bucket/warehouse/"}
```

------
#### [ Amazon S3 Tables Iceberg REST endpoint ]

You can configure the connection to your table bucket through a Spark session in a job or with AWS Glue Studio magics in an interactive session. To use the following examples, replace the *placeholder values* with the information for your own table bucket.

**Using a PySpark script**  
Use the following code snippet in a PySpark script to configure an AWS Glue job to connect to your table bucket using the endpoint.  

```
spark = SparkSession.builder.appName("glue-s3-tables-rest") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog", "s3_rest_catalog") \
    .config("spark.sql.catalog.s3_rest_catalog", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3_rest_catalog.type", "rest") \
    .config("spark.sql.catalog.s3_rest_catalog.uri", "https://s3tables.Region.amazonaws.com/iceberg") \
    .config("spark.sql.catalog.s3_rest_catalog.warehouse", "arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.sigv4-enabled", "true") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.signing-name", "s3tables") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.signing-region", "Region") \
    .config('spark.sql.catalog.s3_rest_catalog.io-impl','org.apache.iceberg.aws.s3.S3FileIO') \
    .config('spark.sql.catalog.s3_rest_catalog.rest-metrics-reporting-enabled','false') \
    .getOrCreate()
```

**Using an interactive AWS Glue session**  
If you are using an interactive notebook session with AWS Glue 5.0, specify the same configurations using the `%%configure` magic in a cell prior to code execution. Replace the placeholder values with the information for your own table bucket.  

```
%%configure
{"conf": "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.sql.defaultCatalog=s3_rest_catalog --conf spark.sql.catalog.s3_rest_catalog=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.s3_rest_catalog.type=rest --conf spark.sql.catalog.s3_rest_catalog.uri=https://s3tables.Region.amazonaws.com/iceberg --conf spark.sql.catalog.s3_rest_catalog.warehouse=arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket --conf spark.sql.catalog.s3_rest_catalog.rest.sigv4-enabled=true --conf spark.sql.catalog.s3_rest_catalog.rest.signing-name=s3tables --conf spark.sql.catalog.s3_rest_catalog.rest.signing-region=Region --conf spark.sql.catalog.s3_rest_catalog.io-impl=org.apache.iceberg.aws.s3.S3FileIO --conf spark.sql.catalog.s3_rest_catalog.rest-metrics-reporting-enabled=false"}
```

------
#### [ Amazon S3 Tables Catalog for Apache Iceberg ]

As a prerequisite to connecting to tables using the Amazon S3 Tables Catalog for Apache Iceberg, you must first download the latest client catalog JAR and upload it to an S3 bucket. Then, when you create your job, you add the path to the client catalog JAR as a special parameter. For more information on job parameters in AWS Glue, see [Special parameters used in AWS Glue jobs](https://docs.aws.amazon.com/glue/latest/dg/aws-glue-programming-etl-glue-arguments.html) in the *AWS Glue Developer Guide*.

You can configure the connection to your table bucket through a Spark session in a job or with AWS Glue Studio magics in an interactive session. To use the following examples, replace the *placeholder values* with the information for your own table bucket.

**Using a PySpark script**  
Use the following code snippet in a PySpark script to configure an AWS Glue job to connect to your table bucket using the JAR. Replace the placeholder values with the information for your own table bucket.  

```
spark = SparkSession.builder.appName("glue-s3-tables") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog", "s3tablesbucket") \
    .config("spark.sql.catalog.s3tablesbucket", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tablesbucket.catalog-impl", "software.amazon.s3tables.iceberg.S3TablesCatalog") \
    .config("spark.sql.catalog.s3tablesbucket.warehouse", "arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket") \
    .getOrCreate()
```

**Using an interactive AWS Glue session**  
If you are using an interactive notebook session with AWS Glue 5.0, specify the same configurations using the `%%configure` magic in a cell prior to code execution. Replace the placeholder values with the information for your own table bucket.  

```
%%configure
{"conf": "spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions --conf spark.sql.defaultCatalog=s3tablesbucket --conf spark.sql.catalog.s3tablesbucket=org.apache.iceberg.spark.SparkCatalog --conf spark.sql.catalog.s3tablesbucket.catalog-impl=software.amazon.s3tables.iceberg.S3TablesCatalog --conf spark.sql.catalog.s3tablesbucket.warehouse=arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket", "extra-jars": "s3://amzn-s3-demo-bucket/jars/s3-tables-catalog-for-iceberg-runtime-0.1.5.jar"}
```

------

### Sample scripts
<a name="w2aac20c25c29c19c13"></a>

The following example PySpark scripts can be used to test querying S3 tables with an AWS Glue job. These scripts connect to your table bucket and run queries that create a new namespace, create a sample table, insert data into the table, and return the table data. To use the scripts, replace the *placeholder values* with the information for your own table bucket.

Choose from the following scripts based on your S3 Tables access method.

------
#### [ S3 Tables integration with AWS analytics services ]

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("SparkIcebergSQL") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog","s3tables")
    .config("spark.sql.catalog.s3tables", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tables.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog") \
    .config("spark.sql.catalog.s3tables.glue.id", "111122223333:s3tablescatalog/amzn-s3-demo-table-bucket") \
    .config("spark.sql.catalog.s3tables.warehouse", "s3://amzn-s3-demo-table-bucket/bucket/amzn-s3-demo-table-bucket") \
    .getOrCreate()

namespace = "new_namespace"
table = "new_table"

spark.sql("SHOW DATABASES").show()

spark.sql(f"DESCRIBE NAMESPACE {namespace}").show()

spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {namespace}.{table} (
       id INT,
       name STRING,
       value INT
    )
""")

spark.sql(f"""
    INSERT INTO {namespace}.{table}
    VALUES 
       (1, 'ABC', 100),
       (2, 'XYZ', 200)
""")

spark.sql(f"SELECT * FROM {namespace}.{table} LIMIT 10").show()
```

------
#### [ Amazon S3 Tables Iceberg REST endpoint ]

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("glue-s3-tables-rest") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog", "s3_rest_catalog") \
    .config("spark.sql.catalog.s3_rest_catalog", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3_rest_catalog.type", "rest") \
    .config("spark.sql.catalog.s3_rest_catalog.uri", "https://s3tables.Region.amazonaws.com/iceberg") \
    .config("spark.sql.catalog.s3_rest_catalog.warehouse", "arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.sigv4-enabled", "true") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.signing-name", "s3tables") \
    .config("spark.sql.catalog.s3_rest_catalog.rest.signing-region", "Region") \
    .config('spark.sql.catalog.s3_rest_catalog.io-impl','org.apache.iceberg.aws.s3.S3FileIO') \
    .config('spark.sql.catalog.s3_rest_catalog.rest-metrics-reporting-enabled','false') \
    .getOrCreate()

namespace = "s3_tables_rest_namespace"
table = "new_table_s3_rest"

spark.sql("SHOW DATABASES").show()

spark.sql(f"DESCRIBE NAMESPACE {namespace}").show()

spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {namespace}.{table} (
       id INT,
       name STRING,
       value INT
    )
""")

spark.sql(f"""
    INSERT INTO {namespace}.{table}
    VALUES 
       (1, 'ABC', 100),
       (2, 'XYZ', 200)
""")

spark.sql(f"SELECT * FROM {namespace}.{table} LIMIT 10").show()
```

------
#### [ Amazon S3 Tables Catalog for Apache Iceberg ]

```
from pyspark.sql import SparkSession

#Spark session configurations
spark = SparkSession.builder.appName("glue-s3-tables") \
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.4_2.12:1.4.2") \
    .config("spark.sql.extensions", "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions") \
    .config("spark.sql.defaultCatalog", "s3tablesbucket") \
    .config("spark.sql.catalog.s3tablesbucket", "org.apache.iceberg.spark.SparkCatalog") \
    .config("spark.sql.catalog.s3tablesbucket.catalog-impl", "software.amazon.s3tables.iceberg.S3TablesCatalog") \
    .config("spark.sql.catalog.s3tablesbucket.warehouse", "arn:aws:s3tables:Region:111122223333:bucket/amzn-s3-demo-table-bucket") \
    .getOrCreate()

#Script
namespace = "new_namespace"
table = "new_table"

spark.sql(f"CREATE NAMESPACE IF NOT EXISTS s3tablesbucket.{namespace}")
spark.sql(f"DESCRIBE NAMESPACE {namespace}").show()

spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {namespace}.{table} (
       id INT,
       name STRING,
       value INT
    )
""")

spark.sql(f"""
    INSERT INTO {namespace}.{table}
    VALUES 
       (1, 'ABC', 100),
       (2, 'XYZ', 200)
""")

spark.sql(f"SELECT * FROM {namespace}.{table} LIMIT 10").show()
```

------

## Step 3 – Create an AWS Glue job that queries tables
<a name="glue-etl-job"></a>

The following procedures show how to set up AWS Glue jobs that connect to your S3 table buckets. You can do this by using the AWS CLI or by using the AWS Glue Studio script editor in the console. For more information, see [Authoring jobs in AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/author-job-glue.html) in the *AWS Glue User Guide*.

### Using AWS Glue Studio script editor
<a name="tables-glue-studio-job"></a>

The following procedure shows how to use the AWS Glue Studio script editor to create an ETL job that queries your S3 tables.

**Prerequisites**
+ [Step 1 – Prerequisites](#glue-etl-prereqs)
+ [Step 2 – Create a script to connect to table buckets](#glue-etl-script)

1. Open the AWS Glue console at [https://console.aws.amazon.com/glue/](https://console.aws.amazon.com/glue/).

1. In the navigation pane, choose **ETL jobs**.

1. Choose **Script editor**, then choose **Upload script** and upload the PySpark script you created to query S3 tables.

1. Select the **Job details** tab and enter the following for **Basic properties**.
   + For **Name**, enter a name for the job.
   + For **IAM Role**, select the role you created for AWS Glue.

1. (Optional) If you are using the Amazon S3 Tables Catalog for Apache Iceberg access method, expand **Advanced properties**, and for **Dependent JARs path**, enter the S3 URI of the client catalog JAR you uploaded to an S3 bucket as a prerequisite. For example, s3://*amzn-s3-demo-bucket1*/*jars*/s3-tables-catalog-for-iceberg-runtime-*0.1.5*.jar.

1. Choose **Save** to create the job.

1. Choose **Run** to start the job, and review the job status under the **Runs** tab.

### Using the AWS CLI
<a name="tables-glue-cli-job"></a>

The following procedure shows how to use the AWS CLI to create an ETL job that queries your S3 tables. To use the commands, replace the *placeholder values* with your own.

**Prerequisites**
+ [Step 1 – Prerequisites](#glue-etl-prereqs)
+ [Step 2 – Create a script to connect to table buckets](#glue-etl-script) and upload it to an S3 bucket.

1. Create an AWS Glue job.

   ```
   aws glue create-job \
   --name etl-tables-job \
   --role arn:aws:iam::111122223333:role/AWSGlueServiceRole \
   --command '{
       "Name": "glueetl",
       "ScriptLocation": "s3://amzn-s3-demo-bucket1/scripts/glue-etl-query.py",
       "PythonVersion": "3"
   }' \
   --default-arguments '{
       "--job-language": "python",
       "--class": "GlueApp"
   }' \
   --glue-version "5.0"
   ```
**Note**  
(Optional) If you are using the Amazon S3 Tables Catalog for Apache Iceberg access method, add the client catalog JAR to `--default-arguments` by using the `--extra-jars` parameter. Replace the *input placeholders* with your own when you add the parameter.  

   ```
                               "--extra-jars": "s3://amzn-s3-demo-bucket/jar-path/s3-tables-catalog-for-iceberg-runtime-0.1.5.jar" 
   ```

1. Start your job.

   ```
   aws glue start-job-run \
   --job-name etl-tables-job
   ```

1. To review your job status, copy the run ID from the previous command and enter it into the following command.

   ```
   aws glue get-job-run --job-name etl-tables-job \
   --run-id jr_ec9a8a302e71f8483060f87b6c309601ea9ee9c1ffc2db56706dfcceb3d0e1ad
   ```

# Getting started querying S3 Tables with Amazon SageMaker Unified Studio
<a name="s3-tables-integrating-sagemaker"></a>

Amazon SageMaker Unified Studio is a comprehensive analytics service that enables you to query and derive insights from your data using SQL, natural language, and interactive notebooks. It supports team collaboration and analysis workflows across AWS data repositories and third-party sources within a unified interface. SageMaker Unified Studio integrates directly with S3 Tables, providing a seamless transition from data storage to analysis within the Amazon S3 console.

You can integrate S3 Tables with SageMaker Unified Studio through the Amazon S3 console or SageMaker Unified Studio console.

For setup through the SageMaker Unified Studio console, see the [SageMaker Unified Studio documentation](https://docs.aws.amazon.com/next-generation-sagemaker/latest/userguide/s3-tables-integration.html).

## Requirements for querying S3 Tables with SageMaker Unified Studio
<a name="sagemaker-unified-studio-requirements"></a>

Using SageMaker Unified Studio with S3 Tables requires the following:
+ Your table buckets have been integrated with AWS analytics services in the current Region. For more information, see [Integrating S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).
+ You are using an IAM role with permissions to create and view resources in SageMaker Unified Studio. For more information, see [Setup IAM-based domains in SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/setup-iam-based-domains.html).
+ You have a SageMaker domain and project. For more information, see [Domains](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/working-with-domains.html) in the *SageMaker Unified Studio Administrator Guide*, and [Projects](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/userguide/projects.html) in the *SageMaker Unified Studio User Guide*.

If you haven't already performed these actions or created these resources, S3 Tables can automatically complete this setup for you so that you can begin querying with SageMaker Unified Studio.

## Getting started querying S3 Tables with SageMaker Unified Studio
<a name="sagemaker-unified-studio-getting-started"></a>

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. On the **Table buckets** page, choose the bucket that contains the table that you want to query.

1. On the bucket details page, select the table that you want to query.

1. Choose **Query**. 

1. Then, choose **Query table in SageMaker Unified Studio**.

   1. If you've already configured SageMaker Unified Studio for your tables, the SageMaker Unified Studio console opens to the query editor with a sample `SELECT` query loaded for you. Modify this query as needed for your use case.

   1. If you haven't already configured SageMaker Unified Studio for S3 Tables, a setup page appears with a single step to enable **Integration with AWS analytics services**, which integrates your tables with services like SageMaker Unified Studio. This step runs automatically, and then you are redirected to a page in the SageMaker Unified Studio console with the following options to configure your account for querying S3 Tables:

      1. In **Setting you up as an administrator**, your current federated IAM role is selected. If your current role does not already have the required permissions, you need to [set up an IAM-based domain in SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/setup-iam-based-domains.html) and assign permissions to your role so that you can sign in to SageMaker Unified Studio.

      1. In **Project data and administrative control**, select **Auto-create a new role with required permissions** to automatically create a role with the required permissions, or select **Use an existing role** and choose a role. If the chosen role does not already have the required permissions, you need to [set up an IAM-based domain in SageMaker Unified Studio](https://docs.aws.amazon.com/sagemaker-unified-studio/latest/adminguide/setup-iam-based-domains.html) and assign permissions to your admin execution role so that you can access data in SageMaker Unified Studio.

      1. In **Data encryption**, select **Use AWS owned key** to let AWS own and manage a key for you, or select **Choose a different AWS KMS key (advanced)** to use an existing key or to create a new one.

      1. Select **Set up SageMaker Unified Studio**.

      1. Next, the SageMaker Unified Studio console opens to the query editor with a sample `SELECT` query loaded for you. Modify this query as needed for your use case.

         In the query editor, the **Catalog** field should be populated with `s3tablescatalog/` followed by the name of your table bucket, for example, `s3tablescatalog/amzn-s3-demo-table-bucket`. The **Database** field is populated with the namespace where your table is stored.

# Working with Apache Iceberg V3
<a name="working-with-apache-iceberg-v3"></a>

Apache Iceberg Version 3 (V3) is the latest version of the Apache Iceberg table format specification, introducing advanced capabilities for building petabyte-scale data lakes with improved performance and reduced operational overhead. V3 addresses common performance bottlenecks encountered with V2, particularly around batch updates and compliance deletes.

AWS provides support for deletion vectors and row lineage as defined in the Apache Iceberg Version 3 (V3) specification. These features are available with Apache Spark on [Amazon EMR 7.12](https://docs.aws.amazon.com/prescriptive-guidance/latest/apache-iceberg-on-aws/iceberg-emr.html), [AWS Glue ETL](https://docs.aws.amazon.com/prescriptive-guidance/latest/apache-iceberg-on-aws/iceberg-glue.html), [Amazon SageMaker Unified Studio Notebooks](https://docs.aws.amazon.com/next-generation-sagemaker/), and Apache Iceberg tables in [AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html), including [Amazon S3 Tables](https://aws.amazon.com/s3/features/tables/).

## Key Features in V3
<a name="key-features-v3"></a>

**Deletion Vectors**  
Replaces V2's positional delete files with an efficient binary format stored as Puffin files. This eliminates write amplification from random batch updates and GDPR compliance deletes, significantly reducing the overhead of maintaining fresh data. Organizations processing high-frequency updates will see immediate improvements in write performance and reduced storage costs from fewer small files.

**Row-lineage**  
Enables precise change tracking at the row level. Your downstream systems can process changes incrementally, speeding up data pipelines and reducing compute costs for change data capture (CDC) workflows. This built-in capability eliminates the need for custom change tracking implementations.

## Version Compatibility
<a name="version-compatibility"></a>

V3 maintains backward compatibility with V2 tables. AWS services support both V2 and V3 tables simultaneously, allowing you to:
+ Run queries across both V2 and V3 tables
+ Upgrade existing V2 tables to V3 without data rewrites
+ Execute time travel queries that span V2 and V3 snapshots
+ Use schema evolution and hidden partitioning across table versions

**Important**  
V3 is a one-way upgrade. Once a table is upgraded from V2 to V3, it cannot be downgraded back to V2 through standard operations.

## Getting Started with V3
<a name="getting-started-v3"></a>

### Prerequisites
<a name="prerequisites"></a>

Before working with V3 tables, ensure you have:
+ An AWS account with appropriate IAM permissions
+ Access to one or more AWS analytics services (EMR, Glue, Amazon SageMaker Unified Studio Notebooks, or S3 Tables)
+ An S3 bucket for storing table data and metadata
+ A table bucket to get started with S3 Tables or a general purpose S3 bucket if you are building your own Iceberg infrastructure
+ AWS Glue catalog configured

### Creating V3 Tables
<a name="creating-v3-tables"></a>

#### Creating New V3 Tables
<a name="creating-new-v3-tables"></a>

To create a new Iceberg V3 table, set the `format-version` table property to `3`.

**Using Spark SQL:**

```
CREATE TABLE IF NOT EXISTS myns.orders_v3 (  
    order_id bigint,  
    customer_id string,  
    order_date date,  
    total_amount decimal(10,2),  
    status string,  
    created_at timestamp  
)  
USING iceberg  
TBLPROPERTIES (  
    'format-version' = '3'  
)
```

#### Upgrading V2 Tables to V3
<a name="upgrading-v2-to-v3"></a>

You can upgrade existing V2 tables to V3 atomically without rewriting data.

**Using Spark SQL:**

```
ALTER TABLE myns.existing_table  
SET TBLPROPERTIES ('format-version' = '3')
```

**Important**  
V3 is a one-way upgrade. Once a table is upgraded from V2 to V3, it cannot be downgraded back to V2 through standard operations.

**What happens during upgrade:**
+ A new metadata snapshot is created atomically
+ Existing Parquet data files are reused
+ Row-lineage fields are added to the table metadata
+ The next compaction will remove old V2 delete files
+ New modifications will use V3's Deletion Vector files
+ The upgrade does not perform a historical backfill of row-lineage change tracking records

### Enabling Deletion Vectors
<a name="enabling-deletion-vectors"></a>

To take advantage of Deletion Vectors for updates, deletes, and merges, configure your write mode.

**Using Spark SQL:**

```
ALTER TABLE myns.orders_v3  
SET TBLPROPERTIES ('format-version' = '3',  
                   'write.delete.mode' = 'merge-on-read',  
                   'write.update.mode' = 'merge-on-read',  
                   'write.merge.mode' = 'merge-on-read'  
                  )
```

These settings ensure that update, delete, and merge operations create Deletion Vector files instead of rewriting entire data files.
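Conceptually, a deletion vector marks deleted row positions within a single data file, so a delete records positions instead of rewriting the file; readers then skip those positions at query time. The following illustrative Python sketch shows only the idea, not the actual Puffin binary encoding:

```python
# Illustrative sketch of the deletion-vector idea (not the real Puffin format):
# a per-data-file set of deleted row positions lets deletes avoid rewriting
# the underlying data file.
class DeletionVector:
    def __init__(self):
        self.deleted = set()  # positions of deleted rows in one data file

    def delete(self, pos):
        # A delete operation only records the position; the data file is untouched.
        self.deleted.add(pos)

    def live_rows(self, rows):
        # On read (merge-on-read), positions marked as deleted are skipped.
        return [r for i, r in enumerate(rows) if i not in self.deleted]

rows = ["row0", "row1", "row2", "row3"]
dv = DeletionVector()
dv.delete(1)
dv.delete(3)
print(dv.live_rows(rows))  # ['row0', 'row2']
```

This is why merge-on-read writes stay small: only the vector changes, and compaction later folds the deletions into rewritten data files.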

### Leveraging Row-lineage for Change Tracking
<a name="leveraging-row-lineage"></a>

V3 automatically adds row-lineage metadata fields to track changes.

**Using Spark SQL:**

```
-- Incremental read: return rows changed since the last processed
-- sequence number (47 in this example)
SELECT
    id,
    data,
    _row_id,
    _last_updated_sequence_number
FROM myns.orders_v3
WHERE _last_updated_sequence_number > 47
```

The `_row_id` field uniquely identifies each row, while `_last_updated_sequence_number` tracks when the row was last modified. Use these fields to:
+ Identify changed rows for incremental processing
+ Track data lineage for compliance
+ Optimize CDC pipelines
+ Reduce compute costs by processing only changes
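
The incremental pattern above can be sketched in plain Python, with dicts standing in for query results; in practice the filter runs in your query engine with the `SELECT` statement shown earlier, and the checkpoint value here is illustrative.

```python
# Sketch of incremental CDC processing using the V3 row-lineage fields.
# Each dict stands in for a row returned by the query engine.
rows = [
    {"id": 1, "_row_id": 101, "_last_updated_sequence_number": 45},
    {"id": 2, "_row_id": 102, "_last_updated_sequence_number": 48},
    {"id": 3, "_row_id": 103, "_last_updated_sequence_number": 50},
]

last_processed_sequence = 47  # checkpoint saved by the previous pipeline run

# Process only rows modified after the checkpoint.
changed = [r for r in rows if r["_last_updated_sequence_number"] > last_processed_sequence]
print([r["id"] for r in changed])  # [2, 3]

# Advance the checkpoint for the next run.
last_processed_sequence = max(r["_last_updated_sequence_number"] for r in changed)
print(last_processed_sequence)  # 50
```

Persisting the checkpoint between runs is what lets the pipeline process only changes instead of rescanning the full table.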

## Best Practices for V3
<a name="best-practices-v3"></a>

### When to Use V3
<a name="when-to-use-v3"></a>

Consider upgrading to or starting with V3 when:
+ You perform frequent batch updates or deletes
+ You need to meet GDPR or compliance delete requirements
+ Your workloads involve high-frequency upserts
+ You require efficient CDC workflows
+ You want to reduce storage costs from small files
+ You need better change tracking capabilities

### Optimizing Write Performance
<a name="optimizing-write-performance"></a>
+ Enable Deletion Vectors for update-heavy workloads:

  ```
  ALTER TABLE myns.orders_v3
  SET TBLPROPERTIES (
      'write.delete.mode' = 'merge-on-read',
      'write.update.mode' = 'merge-on-read',
      'write.merge.mode' = 'merge-on-read'
  )
  ```
+ Configure appropriate file sizes:

  ```
  ALTER TABLE myns.orders_v3
  SET TBLPROPERTIES (
      'write.target-file-size-bytes' = '536870912' -- 512 MB
  )
  ```

### Optimizing Read Performance
<a name="optimizing-read-performance"></a>
+ Leverage row-lineage for incremental processing
+ Use time travel to access historical data without copying
+ Enable statistics collection for better query planning

## Migration Strategy
<a name="migration-strategy"></a>

When migrating from V2 to V3:
+ **Test in non-production first** – Validate the upgrade process and performance
+ **Upgrade during low-activity periods** – Minimize impact on concurrent operations
+ **Monitor initial performance** – Track metrics after the upgrade
+ **Run compaction** – Consolidate delete files after the upgrade
+ **Update documentation** – Reflect V3 features in team documentation

## Compatibility Considerations
<a name="compatibility-considerations"></a>
+ **Engine versions** – Ensure all engines accessing the table support V3
+ **Third-party tools** – Verify V3 compatibility before upgrading
+ **Backup strategy** – Test snapshot-based recovery procedures
+ **Monitoring** – Update monitoring dashboards for V3-specific metrics

## Troubleshooting
<a name="troubleshooting"></a>

### Common Issues
<a name="common-issues"></a>

Error: "format-version 3 is not supported"  
+ Verify your engine version supports V3

  V3 support for AWS services is as follows:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/working-with-apache-iceberg-v3.html)
+ Check catalog compatibility
+ Ensure latest AWS service versions

Performance degradation after upgrade  
+ Verify there are no compaction failures. See [Logging and monitoring for S3 Tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-monitoring-overview.html) for more details.
+ Check if Deletion Vectors are enabled. Ensure the following properties are set:

  ```
  ALTER TABLE myns.orders_v3
  SET TBLPROPERTIES (
      'write.delete.mode' = 'merge-on-read',
      'write.update.mode' = 'merge-on-read',
      'write.merge.mode' = 'merge-on-read'
  )
  ```
+ You can verify table properties with the following code:

  ```
  DESCRIBE FORMATTED myns.orders_v3
  ```
+ Review your partition strategy. Over-partitioning can lead to small files. Run the following query to get the average file size for your table:

  ```
  SELECT avg(file_size_in_bytes) as avg_file_size_bytes   
  FROM myns.orders_v3.files
  ```

Incompatibility with third-party tools  
+ Verify that the tool supports the V3 specification
+ Consider maintaining V2 tables for unsupported tools
+ Contact the tool vendor for a V3 support timeline

### Getting Help
<a name="getting-help"></a>
+ AWS Support: Contact AWS Support for service-specific issues
+ Apache Iceberg Community: Iceberg Slack
+ AWS Documentation: AWS Analytics Documentation

## Pricing
<a name="pricing"></a>
+ Amazon EMR: [Compute and storage pricing](https://aws.amazon.com/emr/pricing/)
+ [Amazon SageMaker pricing](https://aws.amazon.com/sagemaker/pricing/)
+ AWS Glue: [Job run and Data Catalog pricing](https://aws.amazon.com/glue/pricing/)
+ S3 Tables: [Storage and request pricing](https://aws.amazon.com/s3/pricing/)

## Availability
<a name="availability"></a>

Apache Iceberg V3 support is available in all AWS Regions where Amazon EMR, AWS Glue Data Catalog, AWS Glue ETL, and S3 Tables operate.

## Additional Resources
<a name="additional-resources"></a>
+ [Apache Iceberg V3 Documentation](https://docs.aws.amazon.com/prescriptive-guidance/latest/apache-iceberg-on-aws/introduction.html)
+ [Migration Best Practices](https://aws.amazon.com/solutions/guidance/migrating-tabular-data-from-amazon-s3-to-s3-tables/)
+ [Getting Started Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-getting-started.html)

# Replicating S3 tables
<a name="s3-tables-replication-tables"></a>

Amazon S3 Tables supports automatic replication of Apache Iceberg tables stored in S3 table buckets. Replication destinations can be in the same AWS Region or across Regions, and in the same AWS account or in other AWS accounts. By configuring replication for your tables, you can maintain read-only replicas of your data across multiple locations. You can use replicas to enhance data availability, meet compliance requirements, and increase access performance for distributed applications.

S3 Tables replication maintains data consistency by committing all table updates, including snapshots, metadata, and data files, to the destination table in the same order as the source table.

## When to use S3 Tables replication
<a name="s3-tables-replication-tables-when-to-use"></a>

You can use S3 Tables replication for the following purposes:
+ **Minimize latency** – If your customers are in two geographic locations, you can minimize latency when accessing tables by maintaining read replicas in AWS Regions that are geographically closer to your users.
+ **Regulatory compliance** – You can maintain read replicas in specific geographic locations or AWS accounts, which might help you meet certain regulatory or compliance requirements. You can configure the replication destination table bucket to encrypt tables with different AWS KMS keys than the source.
+ **Centralized analytics** – If you have data distributed across multiple AWS Regions, you can replicate Region-specific datasets to a centralized Region for unified reporting, cross-Region analysis, and machine learning model training. This eliminates the need to query across Regions or build custom data aggregation pipelines.
+ **Testing and development environments** – You can create read replicas of production tables in separate AWS accounts or table buckets to provide realistic test data for development and QA teams. This isolates testing workloads from production systems while ensuring test environments have current, production-like data without manual exports or data synchronization processes.

## Features
<a name="s3-tables-replication-tables-features"></a>

S3 Tables replication offers the following features.

**Read-only replicas for S3 Tables**  
S3 Tables replication creates read-only replicas of your Apache Iceberg tables across table buckets. You can query replicas independently using any Iceberg-compatible engine.

**Automatically maintained replicas**  
The S3 Tables replication service automatically maintains replica tables. Replication typically updates replicas within minutes of updates to the source. S3 Tables commits all updates in the same order as the source table to maintain consistency.

**Replication to multiple destinations**  
You can replicate the same table to multiple destination table buckets. Replication destinations can be within the same AWS Region, across multiple AWS Regions, in the same AWS account, or in other AWS accounts.

**Independent snapshot retention**  
Snapshot expiration for replica tables is independent of the source table, allowing you to set different retention periods on replica tables if needed. For example, you can configure your source table to retain snapshots for 30 days while setting a 90-day retention period for replica tables. If you configure a longer retention period on replicas, snapshots that expire at the source remain available and queryable in replicas. This configuration provides extended time-travel capabilities for historical analysis.

**Maintain replica tables in lower-cost storage tiers**  
You can configure destination table buckets to use the S3 Intelligent-Tiering storage class, which automatically optimizes storage costs based on access patterns without performance impact or operational overhead. S3 Intelligent-Tiering is well-suited for replica tables that may be accessed less frequently.

For more information about S3 Tables replication, see the following topics.

**Topics**
+ [When to use S3 Tables replication](#s3-tables-replication-tables-when-to-use)
+ [Features](#s3-tables-replication-tables-features)
+ [How S3 Tables replication works](s3-tables-replication-how-replication-works.md)
+ [Setting up S3 Tables replication](s3-tables-replication-setting-up.md)
+ [Managing S3 Tables replication](s3-tables-replication-managing.md)

# How S3 Tables replication works
<a name="s3-tables-replication-how-replication-works"></a>

S3 Tables replication creates read-only replicas of your Apache Iceberg tables across Regions and AWS accounts. Replica tables are maintained automatically by the S3 Tables service and contain the complete data, metadata, and snapshot history from your source table, making them queryable using any Iceberg-compatible engine for analytics and time-travel operations.

When you configure replication for a table, S3 Tables:
+ Creates a read-only replica table in each destination table bucket with the same name and namespace as the source table
+ Backfills the replica with the latest state of the source table
+ Monitors the source table for new updates
+ Commits all updates to replicas in the same order as the source to maintain consistency

For more information, see the following sections.

**Topics**
+ [What is replicated](#s3-tables-replication-what-is-replicated)
+ [How data is replicated](#s3-tables-replication-how-data-replicated)
+ [Snapshot replication](#s3-tables-replication-snapshot-replication)
+ [Considerations and limitations](#s3-tables-replication-considerations-limitations)

## What is replicated
<a name="s3-tables-replication-what-is-replicated"></a>

The following table components are replicated:
+ **Table snapshots** – All snapshots, including compacted snapshots, are replicated in chronological order, maintaining parent-child relationships and sequence numbers from the source table. This ensures that replica tables provide the same time-travel capabilities as source tables.
+ **Table data** – All data files referenced by table snapshots are replicated to the destination Region. This includes:
  + **Metadata files** – Table metadata.json files, manifests, manifest lists, partition statistics and table statistics.
  + **Delete files** – All delete files are replicated to maintain data accuracy in replica tables.
  + **Data files** – All data files referenced by manifests are replicated.
+ **Table metadata** – Complete metadata replication, including schema information (current and historical), partition specifications, sort orders, and table properties.
  + **Schema Information** – All table schemas are replicated, including the current schema and historical schema versions. This ensures that queries against replica tables use the correct column definitions, data types, and field mappings. The replication process maintains schema evolution history, allowing time travel queries to work correctly on replica tables.
  + **Partition Specifications** – Current and historical partition specifications are replicated, ensuring replica tables maintain the same partitioning strategy as source tables.
  + **Sort Orders** – Table sort orders are replicated to maintain query performance optimizations.

## How data is replicated
<a name="s3-tables-replication-how-data-replicated"></a>

Replication determines a valid state for replica tables by comparing Apache Iceberg table metadata between the source and replica tables. Replication processes metadata in three categories to update your replica table.

### For table metadata
<a name="s3-tables-replication-table-metadata"></a>

For versioned metadata fields, replication merges values from the source table into the replica table's arrays for the following fields:
+ `snapshots` – Merges all snapshots from the source table into the replica table's snapshots array by snapshot-id.
+ `snapshot-log` – Merges snapshot logs from the source table into the replica table's snapshot-log array, sorted by timestamp and snapshot-id.
+ `sort-orders` – Merges sort order definitions from the source table into the replica table's sort-orders array by order-id.
+ `partition-specs` – Merges partition specifications from the source table into the replica table's partition-specs array by spec-id.
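
The merge-by-id behavior described above can be sketched in plain Python (real replication operates on Iceberg metadata JSON; the entries here are simplified placeholders):

```python
# Sketch: merge the source table's versioned metadata entries into the
# replica's array, keyed by an id field such as snapshot-id. Entries are
# immutable, so existing replica entries are kept and new ones are added.
def merge_by_id(replica, source, key="snapshot-id"):
    merged = {entry[key]: entry for entry in replica}
    for entry in source:
        merged.setdefault(entry[key], entry)  # add entries the replica lacks
    return sorted(merged.values(), key=lambda entry: entry[key])

replica_snapshots = [{"snapshot-id": 1}, {"snapshot-id": 2}]
source_snapshots = [{"snapshot-id": 2}, {"snapshot-id": 3}]
print(merge_by_id(replica_snapshots, source_snapshots))
# [{'snapshot-id': 1}, {'snapshot-id': 2}, {'snapshot-id': 3}]
```

The same keyed-merge shape applies to `snapshot-log`, `sort-orders`, and `partition-specs` with their respective id fields.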

### For table configuration
<a name="s3-tables-replication-table-configuration"></a>

For fields that represent table configuration, replication copies values directly from the source table:
+ `properties`
+ `partition-statistics`
+ `statistics`

Current table state is also transferred from the source table:
+ `current-snapshot-id`
+ `current-schema-id`
+ `last-column-id`
+ `last-partition-id`
+ `last-sequence-number`
+ `default-sort-order-id`
+ `next-row-id` (Iceberg V3)
+ `encryption-keys` (Iceberg V3)

### Replica-specific state
<a name="s3-tables-replication-replica-specific-state"></a>

The following fields are calculated from merged data and updated for the replica table:
+ `location` is updated during replication to point to the correct file location in the replica table bucket, ensuring that all file references are valid in the destination environment.
+ `metadata-log` contains all destination metadata filenames and is updated after every successful replication with the current metadata filename.
+ All file paths are modified to point to the replica table locations.
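
As a minimal sketch of the path rewriting above (the warehouse locations are hypothetical placeholders, not real S3 Tables paths):

```python
# Sketch: rewrite file references so they resolve in the replica table's
# location. The source and replica locations below are made-up examples.
SOURCE_LOCATION = "s3://source-table-location"
REPLICA_LOCATION = "s3://replica-table-location"

def rewrite_path(path):
    # Swap the source location prefix for the replica location prefix.
    if path.startswith(SOURCE_LOCATION):
        return REPLICA_LOCATION + path[len(SOURCE_LOCATION):]
    return path

print(rewrite_path("s3://source-table-location/data/00000-0.parquet"))
# s3://replica-table-location/data/00000-0.parquet
```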

## Snapshot replication
<a name="s3-tables-replication-snapshot-replication"></a>

S3 Tables replication maintains complete snapshot history across Regions by replicating all table snapshots in the same commit order as the source table. The parent-child relationships from the source table are preserved in the replica table.

### Snapshot retention
<a name="s3-tables-replication-snapshot-retention"></a>

You can configure a custom snapshot retention period for your replicated tables that differs from the retention period of the source. This means that even if snapshots are expired and no longer available in the source table, they can be preserved in replicas.

For example, if your source table has a 30-day snapshot retention period but your replica table is configured with a 90-day retention period, the replica will maintain snapshots from the previous two months that are no longer available in the source table.

Snapshots that you manually expired at the source table are also preserved in the replica table. For example, if you expired snapshots from February at the source table using a Spark procedure, you can still time travel to the snapshots in the replica table.
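
The retention example above can be sketched as a simple age filter (the snapshot names and ages are illustrative):

```python
# Sketch: which snapshots remain queryable at the source vs. the replica
# when the replica has a longer retention period.
snapshot_ages_days = {"snap-a": 80, "snap-b": 45, "snap-c": 10}
source_retention_days = 30
replica_retention_days = 90

at_source = [s for s, age in snapshot_ages_days.items() if age <= source_retention_days]
at_replica = [s for s, age in snapshot_ages_days.items() if age <= replica_retention_days]

print(at_source)   # ['snap-c']
print(at_replica)  # ['snap-a', 'snap-b', 'snap-c']
```

Here `snap-a` and `snap-b` have expired at the source but remain available for time travel on the replica.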

## Considerations and limitations
<a name="s3-tables-replication-considerations-limitations"></a>

The following considerations apply to replicated tables:
+ S3 Tables replicates both Iceberg V2 and V3 tables. However, replication of upgraded tables (V2 → V3) is not supported.
+ Metadata files larger than 500 MB are not supported.
+ While table updates are typically replicated within minutes, replication can take longer depending on the size of the table update to be replicated, for example, when replication starts backfilling.
+ Tables with tags or branches are not supported.
+ Replication is not supported for Amazon S3 Metadata tables or other AWS-generated system tables.
+ All table snapshots, including compacted snapshots, are replicated from the source table. As a result, compaction is not supported on replica tables.

# Setting up S3 Tables replication
<a name="s3-tables-replication-setting-up"></a>

You can set up replication to automatically create table replicas from a source table to up to five destination table buckets. Replication can be configured at the bucket level (applying to all tables in the bucket) or at the table level (for specific tables). This topic explains how to configure replication using the Amazon S3 console or the AWS Command Line Interface (AWS CLI).

For more information about setting up replication, see the following topics.

**Topics**
+ [Prerequisites for setting up replication](#s3-tables-replication-prerequisites)
+ [Understanding replication configurations](#s3-tables-replication-understanding-configurations)
+ [Choosing between bucket-level and table-level replication](#s3-tables-replication-choosing-configuration)
+ [Setting up replication by using the Amazon S3 console](#s3-tables-replication-console)
+ [Setting up replication by using the AWS CLI](#s3-tables-replication-cli)

## Prerequisites for setting up replication
<a name="s3-tables-replication-prerequisites"></a>

Before you configure replication, ensure you have the following:

### Required resources
<a name="s3-tables-replication-required-resources"></a>
+ **Source table bucket** – The table bucket that contains the tables you want to replicate
+ **Destination table buckets** – One or more table buckets where you want to replicate your tables (up to five destination table buckets)
+ **Source tables** – Existing tables in your source table bucket that you want to replicate
+ **IAM role** – An IAM role that grants Amazon S3 permissions to replicate tables on your behalf

### Required permissions
<a name="s3-tables-replication-required-permissions"></a>

The IAM identity that you use to set up replication must have the following permissions:

**For bucket-level replication:**
+ `s3tables:PutTableBucketReplication` on the source table bucket
+ `s3tables:GetTableBucketReplication` on the source table bucket
+ `iam:PassRole` for the replication IAM role

**For table-level replication:**
+ `s3tables:PutTableReplication` on the source table
+ `s3tables:GetTableReplication` on the source table
+ `iam:PassRole` for the replication IAM role

**For cross-account replication:**
+ Permissions from the destination account's bucket policy
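
Taken together, these permissions can be expressed as an identity policy. The following sketch is an illustration only — the account ID, bucket name, and role name are placeholders — of what an identity needs to configure bucket-level replication:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3tables:PutTableBucketReplication",
                "s3tables:GetTableBucketReplication"
            ],
            "Resource": "arn:aws:s3tables:us-east-2:111122223333:bucket/amzn-s3-demo-table-bucket-source"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::111122223333:role/S3TablesReplicationRole"
        }
    ]
}
```

For table-level replication, the same shape applies with `s3tables:PutTableReplication` and `s3tables:GetTableReplication` scoped to the table ARN.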

### Additional requirements for cross-account replication
<a name="s3-tables-replication-cross-account-requirements"></a>

If your source and destination table buckets are in different AWS accounts, you also need:
+ A bucket policy on the destination table bucket that grants the source account permissions to replicate tables
+ The destination account ID and table bucket Amazon Resource Name (ARN)

### Additional requirements for encrypted tables
<a name="s3-tables-replication-encrypted-requirements"></a>

If you want to encrypt replica tables with AWS KMS:
+ A KMS key in the destination Region
+ Permissions to use the KMS key in your IAM replication role
+ A KMS key policy that allows the replication role to encrypt data

## Understanding replication configurations
<a name="s3-tables-replication-understanding-configurations"></a>

A replication configuration defines how Amazon S3 replicates tables from your source table bucket. Replication can be configured at two levels:

### Bucket-level replication
<a name="s3-tables-replication-bucket-level"></a>

A bucket-level replication configuration applies to all tables in the source table bucket. When you configure bucket-level replication, Amazon S3 automatically replicates any existing tables and any new tables created in the bucket.

Use bucket-level replication when:
+ You want to replicate all tables in a bucket
+ You want consistent replication behavior across all tables
+ You want to simplify management by having a single configuration

### Table-level replication
<a name="s3-tables-replication-table-level"></a>

A table-level replication configuration applies to a specific table. Table-level configurations override bucket-level configurations for that specific table.

Use table-level replication when:
+ You want to replicate only specific tables
+ You need different replication destinations for different tables
+ You want to override a bucket-level configuration for certain tables

### Replication configuration elements
<a name="s3-tables-replication-configuration-elements"></a>

Each replication configuration contains:
+ **IAM role** – The role that Amazon S3 assumes to perform replication operations
+ **Rules** – One or more replication rules (limited to 1 rule at launch). Each rule contains:
  + **Destinations** – List of destination table bucket ARNs (up to 5 destinations)
  + **Status** – Whether the rule is enabled or disabled
+ **Version token** – A token used to prevent write conflicts when updating configurations

## Choosing between bucket-level and table-level replication
<a name="s3-tables-replication-choosing-configuration"></a>

### Configuration precedence
<a name="s3-tables-replication-configuration-precedence"></a>

When both bucket-level and table-level configurations exist:
+ Table-level configuration takes precedence for that specific table.
+ Other tables follow the bucket-level configuration.

## Setting up replication by using the Amazon S3 console
<a name="s3-tables-replication-console"></a>

This procedure shows you how to configure replication using the Amazon S3 console.

### To set up bucket-level replication
<a name="s3-tables-replication-bucket-level-console"></a>

This procedure shows you how to create a table bucket replication configuration using the Amazon S3 console. A table bucket replication configuration applies to all tables in the source table bucket.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Table buckets**.

1. In the **Table buckets** list, choose the name of the table bucket for which you want to configure replication.

1. Choose the **Management** tab.

1. In the **Table bucket replication configuration** section, choose **Create table bucket replication configuration**.

1. In the **Destination** section, configure your replication destinations:

   1. In the **Table bucket ARN** field, enter the ARN of the destination table bucket. The format is: `arn:aws:s3tables:region:account-id:bucket/table-bucket-name`

      Alternatively, choose **Browse S3** to select a table bucket from your account.

   1. (Optional) To add additional destinations, choose **Add destination**. You can add up to 4 more table buckets for a total of 5 destinations.

1. In the **IAM role** section, configure the replication role:

   1. For **IAM role selection method**, choose one of the following options:
      + **Create new IAM role** – Amazon S3 creates a new role with the necessary permissions for replication.
      + **Choose from existing IAM roles** – Select an existing role that has the required replication permissions.
      + **Enter IAM role ARN** – Manually enter the ARN of an existing IAM role.

   1. If you chose **Choose from existing IAM roles**, select a role from the **IAM role** dropdown list.

   1. (Optional) Choose **View** to review the selected role's permissions in the IAM console.

1. Choose **Create replication configuration**.

   After you create the replication configuration, Amazon S3 begins the initial backfill process. You can monitor the replication status in the **Table replication status** section, which displays information about each destination including replication status, destination table ARN, and last replicated metadata.

### To set up table-level replication
<a name="s3-tables-replication-table-level-console"></a>

This procedure shows you how to create a table-level replication configuration using the Amazon S3 console. A table replication configuration applies to a specific table and overrides any bucket-level replication configuration for that table.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Table buckets**.

1. In the **Table buckets** list, choose the name of the table bucket that contains the table you want to replicate.

1. Choose the **Tables** tab.

1. In the **Tables** list, choose the name of the table that you want to replicate.

1. Choose the **Management** tab.

1. In the **Table replication configuration** section, choose **Create table replication configuration**.

1. In the **Destination** section, configure your replication destinations:

   1. In the **Table bucket ARN** field, enter the ARN of the destination table bucket. The format is: `arn:aws:s3tables:region:account-id:bucket/table-bucket-name`

      Alternatively, choose **Browse S3** to select a table bucket from your account.

   1. (Optional) To add additional destinations, choose **Add destination**. You can add up to 4 more table buckets for a total of 5 destinations.

1. In the **IAM role** section, configure the replication role:

   1. For **IAM role selection method**, choose one of the following options:
      + **Create new IAM role** – Amazon S3 creates a new role with the necessary permissions for replication.
      + **Choose from existing IAM roles** – Select an existing role that has the required replication permissions.
      + **Enter IAM role ARN** – Manually enter the ARN of an existing IAM role.

   1. If you chose **Choose from existing IAM roles**, select a role from the **IAM role** list.

   1. (Optional) Choose **View** to review the selected role's permissions in the IAM console.

1. Choose **Create replication configuration**.

### What happens next?
<a name="s3-tables-replication-what-happens-next"></a>

After you create the replication configuration:
+ Amazon S3 begins the initial backfill process, creating replica tables in each destination bucket
+ The replication status changes to **Replicating** once backfill begins
+ You can monitor replication progress on the **Management** tab
+ Initial replication time depends on the size of your source table

## Setting up replication by using the AWS CLI
<a name="s3-tables-replication-cli"></a>

This procedure shows you how to configure replication using the AWS CLI. Replace the account IDs, regions, and bucket names with your actual values. Add all destination buckets to the permissions.

### Step 1: Create an IAM role for replication
<a name="s3-tables-replication-create-iam-role"></a>

First, create an IAM role that Amazon S3 can assume to replicate your tables.

1. Create a trust policy document that allows S3 Tables to assume the role. Save this as `trust-policy.json`:

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Service": "replication.s3tables.amazonaws.com"
         },
         "Action": "sts:AssumeRole"
       }
     ]
   }
   ```

1. Create the IAM role:

   ```
   aws iam create-role \
       --role-name S3TablesReplicationRole \
       --assume-role-policy-document file://trust-policy.json \
       --description "Role for S3 Tables replication"
   ```

1. Create a permissions policy that grants replication permissions. Save this as `replication-permissions.json`:

   ```
   {
       "Version": "2012-10-17",		 	 	 
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3tables:GetTable",
                   "s3tables:GetTableMetadataLocation",
                   "s3tables:GetTableMaintenanceConfiguration",
                   "s3tables:GetTableData"
               ],
               "Resource": "arn:aws:s3tables:us-east-2:111122223333:bucket/amzn-s3-demo-table-bucket-source/table/*"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3tables:ListTables"
               ],
               "Resource": "arn:aws:s3tables:us-east-2:111122223333:bucket/amzn-s3-demo-table-bucket-source"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3tables:CreateTable",
                   "s3tables:CreateNamespace"
               ],
               "Resource": "arn:aws:s3tables:us-east-2:444455556666:bucket/amzn-s3-demo-table-bucket-destination"
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3tables:PutTableData",
                   "s3tables:GetTableData",
                   "s3tables:UpdateTableMetadataLocation",
                   "s3tables:PutTableMaintenanceConfiguration"
               ],
               "Resource": "arn:aws:s3tables:us-east-2:444455556666:bucket/amzn-s3-demo-table-bucket-destination/table/*"
           }
       ]
   }
   ```

1. Attach the permissions policy to the role:

   ```
   aws iam put-role-policy \
       --role-name S3TablesReplicationRole \
       --policy-name S3TablesReplicationPermissions \
       --policy-document file://replication-permissions.json
   ```

1. (Optional) If using KMS encryption, add KMS permissions to your policy:

   ```
   {
     "Effect": "Allow",		 	 	 
     "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
        "kms:Encrypt"
   
     ],
     "Resource": "arn:aws:kms:us-east-1:111122223333:key/SOURCE-KEY-ID"
   },
   {
     "Effect": "Allow",
     "Action": [
       "kms:Decrypt",
       "kms:GenerateDataKey"
     ],
     "Resource": [
       "arn:aws:kms:us-west-2:444455556666:key/DESTINATION-KEY-ID-1"
     ]
   }
   ```

### (Cross-account only) Step 2: Configure destination bucket policy
<a name="s3-tables-replication-cross-account-policy"></a>

If you are replicating to a different AWS account, the destination account must grant permissions to the source account.

1. In the destination account, create a bucket policy for the destination table bucket. Save this as `destination-bucket-policy.json`:

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/S3TablesReplicationRole"
               },
               "Action": [
                   "s3tables:PutTableData",
                   "s3tables:GetTableData",
                   "s3tables:UpdateTableMetadataLocation",
                   "s3tables:PutTableMaintenanceConfiguration"
               ],
               "Resource": "arn:aws:s3tables:us-west-2:444455556666:bucket/amzn-s3-demo-table-bucket-cross-account-destination/table/*"
           },
           {
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/S3TablesReplicationRole"
               },
               "Action": [
                   "s3tables:CreateTable",
                   "s3tables:CreateNamespace"
               ],
               "Resource": "arn:aws:s3tables:us-west-2:444455556666:bucket/amzn-s3-demo-table-bucket-cross-account-destination"
           }
       ]
   }
   ```

1. Apply the policy using the S3 Tables API:

   ```
   aws s3tables put-table-bucket-policy \
       --table-bucket-arn arn:aws:s3tables:us-west-2:444455556666:bucket/amzn-s3-demo-table-bucket-cross-account-destination \
       --policy file://destination-bucket-policy.json \
       --profile destination-account
   ```

1. Modify your source KMS key policy to allow S3 Tables replication and maintenance:

   ```
   {
     "Version": "2012-10-17",		 	 	 
     "Id": "key-consolepolicy-3",
     "Statement": [
           {
               "Sid": "allow replication to decrypt",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "replication_role_arn"
               },
               "Action": [
                   "kms:Decrypt",
                   "kms:GenerateDataKey"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/SOURCE-KEY-ID"
           },
           {
               "Sid": "allow maintenance",
               "Effect": "Allow",
               "Principal": {
                   "Service": "maintenance.s3tables.amazonaws.com"
               },
               "Action": [
                   "kms:Decrypt",
                   "kms:GenerateDataKey"
               ],
               "Resource": "arn:aws:kms:us-east-1:111122223333:key/SOURCE-KEY-ID"
           }
     ]
   }
   ```

1. Similarly, add the following permissions to your destination KMS key policy:

   ```
   {
       "Version": "2012-10-17",		 	 	 
       "Id": "key-policy-3",
       "Statement": [
           {
               "Sid": "allow maintenance",
               "Effect": "Allow",
               "Principal": {
                   "Service": "maintenance.s3tables.amazonaws.com"
               },
               "Action": [
                   "kms:Decrypt",
                   "kms:GenerateDataKey"
               ],
               "Resource": "arn:aws:kms:us-west-2:444455556666:key/DESTINATION-KEY-ID-1"
           },
           {
               "Sid": "allow replication to encrypt/decrypt",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "replication_role_arn"
               },
               "Action": [
                   "kms:Encrypt",
                   "kms:Decrypt",
                   "kms:GenerateDataKey"
               ],
               "Resource": "arn:aws:kms:us-west-2:444455556666:key/DESTINATION-KEY-ID-1"
           }
        ]
    }
   ```

### Step 3: Create a replication configuration
<a name="s3-tables-replication-create-configuration"></a>

You can use the AWS CLI to create a replication configuration at the table bucket level or the table level. For more information, see the following procedures.

#### Create a bucket-level replication configuration
<a name="s3-tables-replication-bucket-level-cli"></a>

Use this approach to replicate all tables in a bucket.

1. Create a replication configuration file. Save this as `bucket-replication-config.json`:  
**Example: Single destination in the same account**  

   ```
   {
     "role": "arn:aws:iam::111122223333:role/S3TablesReplicationRole",
     "rules": [
       {
         "destinations": [
           {
             "destinationTableBucketARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket-dr"
           }
         ]
       }
     ]
   }
   ```  
**Example: Multiple destinations across Regions**  

   ```
   {
     "role": "arn:aws:iam::111122223333:role/S3TablesReplicationRole",
     "rules": [
       {
         "destinations": [
           {
             "destinationTableBucketARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket-dr"
           },
           {
             "destinationTableBucketARN": "arn:aws:s3tables:eu-west-1:111122223333:bucket/amzn-s3-demo-table-bucket-eu"
           },
           {
             "destinationTableBucketARN": "arn:aws:s3tables:ap-south-1:111122223333:bucket/amzn-s3-demo-table-bucket-apac"
           }
         ]
       }
     ]
   }
   ```  
**Example: Cross-account replication**  

   ```
   {
     "role": "arn:aws:iam::111122223333:role/S3TablesReplicationRole",
     "rules": [
       {
         "destinations": [
           {
             "destinationTableBucketARN": "arn:aws:s3tables:us-east-1:444455556666:bucket/amzn-s3-demo-table-bucket-partner"
           }
         ]
       }
     ]
   }
   ```

1. Apply the bucket-level replication configuration:

   ```
   aws s3tables put-table-bucket-replication \
       --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
       --configuration file://bucket-replication-config.json
   ```

   Expected output:

   ```
   {
     "versionToken": "3HL4kqtJl40Nr8X8gdRQBpUMLUo",
     "status": "Success"
   }
   ```

#### Create a table-level replication configuration
<a name="s3-tables-replication-table-level-cli"></a>

Use this approach to replicate specific tables or to override bucket-level replication.

1. Create a replication configuration file. Save this as `table-replication-config.json`:  
**Example: Single table replication**  

   ```
   {
     "role": "arn:aws:iam::111122223333:role/S3TablesReplicationRole",
     "rules": [
       {
         "destinations": [
           {
             "destinationTableBucketARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket-analytics-bucket"
           }
         ]
       }
     ]
   }
   ```  
**Example: Table with multiple destinations**  

   ```
   {
     "role": "arn:aws:iam::111122223333:role/S3TablesReplicationRole",
     "rules": [
       {
         "destinations": [
           {
             "destinationTableBucketARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket-dr"
           },
           {
             "destinationTableBucketARN": "arn:aws:s3tables:eu-west-1:111122223333:bucket/amzn-s3-demo-table-bucket-eu"
           }
         ]
       }
     ]
   }
   ```

1. Apply the table-level replication configuration:

   ```
   aws s3tables put-table-replication \
       --table-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/amzn-s3-demo-table-bucket-sales-data \
       --configuration file://table-replication-config.json
   ```

   Expected output:

   ```
   {
     "versionToken": "xT2LZkFZ0UuTC2h8XqtGLx2Ak6M",
     "status": "Success"
   }
   ```

# Managing S3 Tables replication
<a name="s3-tables-replication-managing"></a>

After you configure S3 Tables replication, you can monitor replica status to verify what has been replicated. You can check replication status in the Amazon S3 console on the source table's **Management** tab, or by using the AWS CLI. For more information, see [Setting up S3 Tables replication](s3-tables-replication-setting-up.md). This topic explains how to monitor replication and understand the status values that indicate whether replication has completed, is in progress, or has failed.

## Monitoring replication status
<a name="s3-tables-replication-monitoring-status"></a>

Replication jobs run continuously for your replicated tables. You can query the status of replication with the GetTableReplicationStatus API or view it in the Amazon S3 console.

### To get the status of replication by using the AWS CLI
<a name="s3-tables-replication-status-cli"></a>

The following example gets the replication status using the GetTableReplicationStatus API.

```
aws s3tables get-table-replication-status \
    --table-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/sales-data
```

Expected output:

```
{
  "sourceTableARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/sales-data",
  "destinations": [
    {
      "replicationStatus": "COMPLETED",
      "destinationBucketARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket",
      "destinationTableARN": "arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-table-bucket/table/sales-data",
      "lastSuccessfulReplicatedUpdate": {
        "metadataLocation": "latest_table_metadata.json",
        "timestamp": "2025-11-15T14:30:00Z"
      }
    },
    {
      "replicationStatus": "PENDING",
      "destinationBucketARN": "arn:aws:s3tables:eu-west-1:111122223333:bucket/amzn-s3-demo-table-bucket-eu-bucket",
      "destinationTableARN": "arn:aws:s3tables:eu-west-1:111122223333:bucket/amzn-s3-demo-table-bucket-eu-bucket/table/sales-data",
      "lastSuccessfulReplicatedUpdate": {
        "metadataLocation": "latest_table_metadata.json",
        "timestamp": "2025-11-15T14:25:00Z"
      }
    }
  ]
}
```

For more information, see [get-table-replication-status](https://docs.aws.amazon.com/cli/latest/reference/s3tables/get-table-replication-status.html) in the *AWS CLI Command Reference*.
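
Because the response has a fixed shape, you can filter it client-side with the AWS CLI `--query` option, which takes a JMESPath expression. The following sketch only assembles the pieces — the table ARN is a placeholder, and the final `aws` invocation is commented out so that the example stands alone:

```
# Placeholder source table ARN; replace with your own.
TABLE_ARN="arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/sales-data"

# JMESPath filter over the documented response: list every destination
# that is not fully caught up, together with its current status.
QUERY="destinations[?replicationStatus!='COMPLETED'].[destinationTableARN,replicationStatus]"

echo "Checking replication status for: ${TABLE_ARN}"

# aws s3tables get-table-replication-status \
#     --table-arn "$TABLE_ARN" \
#     --query "$QUERY" \
#     --output text
```

With `--output text`, destinations whose status is `COMPLETED` are filtered out, so an empty result means every destination is caught up.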

### Understanding the response
<a name="s3-tables-replication-understanding-response"></a>

The response contains the following elements:
+ **sourceTableARN** – The ARN of the source table being replicated.
+ **destinations** – An array of destination status objects, one for each configured replication destination. Each destination object contains:
  + **replicationStatus** – The current replication status for this destination (COMPLETED, PENDING, or FAILED).
  + **destinationBucketARN** – The ARN of the destination table bucket.
  + **destinationTableARN** – The ARN of the replica table in the destination bucket.
  + **lastSuccessfulReplicatedUpdate** – Information about the most recent successful replication:
    + **metadataLocation** – The Iceberg metadata file name that was last successfully replicated. Compare this with the source table's current metadata location to determine if replication is up to date.
    + **timestamp** – The time when this metadata file was replicated to the destination.
  + **failureMessage** (only present when status is FAILED) – A detailed error message describing why replication failed.

### Replication status values
<a name="s3-tables-replication-status-values"></a>

Replication can have three possible statuses for each destination:
+ **COMPLETED** – All source table snapshots have been successfully replicated to the destination. The source table's latest metadata location matches the last replicated metadata location.
+ **PENDING** – Replication is in progress or new commits are waiting to be replicated. The source table's latest metadata location differs from the last replicated metadata location.
+ **FAILED** – The last replication job for this table failed. No new updates are being replicated.
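
A script that acts on these status values might branch as follows. This is a sketch: in practice the `status` value would come from the `get-table-replication-status` call shown earlier in this chapter (commented out here so that the example stands alone), and `PENDING` is used as a sample value:

```
# status=$(aws s3tables get-table-replication-status \
#     --table-arn "$TABLE_ARN" \
#     --query 'destinations[0].replicationStatus' --output text)
status="PENDING"   # sample value for illustration

# Branch on the documented status values.
case "$status" in
  COMPLETED) echo "Replica is up to date with the source table." ;;
  PENDING)   echo "Replication is in progress; new commits are waiting." ;;
  FAILED)    echo "Last replication job failed; check failureMessage." ;;
  *)         echo "Unexpected status: $status" ;;
esac
```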

# S3 Tables AWS Regions, endpoints, and service quotas
<a name="s3-tables-regions-quotas"></a>

The following sections include the supported AWS Regions and service quotas for S3 Tables.

**Topics**
+ [S3 Tables AWS Regions and endpoints](#s3-tables-regions)
+ [S3 Tables quotas](#s3-tables-quotas)

## S3 Tables AWS Regions and endpoints
<a name="s3-tables-regions"></a>

For a list of the AWS Regions where S3 Tables is currently available, see [Amazon S3 endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region). To connect programmatically to an AWS service, you use an endpoint. For more information, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).

S3 Tables supports dual-stack endpoints for public access and for AWS PrivateLink. Dual-stack endpoints allow you to access S3 table buckets over Internet Protocol version 6 (IPv6), in addition to IPv4, depending on what your network supports.

S3 Tables dual-stack endpoints use the following naming convention: `s3tables.<region>.api.aws`

For a complete list of S3 Tables endpoints, see [Amazon S3 endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html#s3_region).

## S3 Tables quotas
<a name="s3-tables-quotas"></a>

Quotas, also referred to as limits, are the maximum number of service resources or operations for your AWS account. The following are the quotas for S3 Tables resources. For more Amazon S3 quota information, see [Amazon S3 quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html#limits_s3).


| Name | Default | Adjustable | Description | 
| --- | --- | --- | --- | 
| Table Buckets | 10 | To request a quota increase, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). | The number of Amazon S3 table buckets that you can create per AWS Region in an account. | 
| Namespaces | 10,000 | To request a quota increase, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). | The number of Amazon S3 table namespaces that you can create per table bucket. | 
| Tables | 10,000 | To request a quota increase, contact [Support](https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). | The number of Amazon S3 tables that you can create per table bucket. | 

# Making requests to S3 Tables over IPv6
<a name="s3-tables-ipv6"></a>

Amazon S3 supports accessing S3 buckets over Internet Protocol version 6 (IPv6), in addition to IPv4, through dual-stack endpoints. Dual-stack endpoints resolve to either an IPv6 or an IPv4 endpoint depending on what your network supports.

The following are some things you should know before trying to access S3 Tables over IPv6:
+ The client and the network accessing the table bucket must be enabled to use IPv6.
+ Your tables client and your S3 client must both have dual-stack enabled.
+ If you use IAM policies that filter by IP address, you must update them to handle IPv6 address formats. For more information about managing access permissions with IAM, see [Identity and Access Management for Amazon S3](security-iam.md).
+ When using IPv6, server access log files output IP addresses in an IPv6 format. You need to update existing tools, scripts, and software that you use to parse Amazon S3 log files so that they can parse the IPv6 formatted `Remote IP` addresses. For more information, see [Logging requests with server access logging](ServerLogs.md).

## Getting started making S3 Tables requests over IPv6
<a name="s3-tables-ipv6-getting-started"></a>

When you make a request to a dual-stack endpoint, the table bucket URL resolves to an IPv6 or an IPv4 address depending on what your network supports. If your network prefers IPv4, requests automatically use IPv4. If your network prefers IPv6, requests use IPv6. No configuration change is required other than updating your client or application to use the dual-stack endpoint.

When using the REST API, you directly access an Amazon S3 endpoint by using the endpoint name (URI). You can access S3 Tables and table buckets through a dual-stack endpoint using the following naming convention:

`s3tables.<region>.api.aws`

For a complete list of endpoints for S3 Tables, see [Amazon Simple Storage Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html).

When using the AWS CLI, AWS SDKs, and Iceberg clients, you can use a parameter or flag to change to a dual-stack endpoint. You can also specify the dual-stack endpoint directly as an override of the Amazon S3 endpoint in the config file.
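
For example, with the AWS CLI you can pass the dual-stack endpoint directly through the `--endpoint-url` option. In this sketch the Region is a placeholder, and the final `aws` call is commented out so that the example stands alone:

```
# Build the dual-stack endpoint from the documented naming convention.
REGION="us-east-1"   # placeholder Region
ENDPOINT="https://s3tables.${REGION}.api.aws"
echo "$ENDPOINT"

# Pass the endpoint override to any S3 Tables command, for example:
# aws s3tables list-table-buckets --endpoint-url "$ENDPOINT"
```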

You can enable dual-stack endpoint resolution in SDKs or clients by setting the dual-stack flag on the client builder, as shown in the following AWS SDK for Java example:

```
S3TablesClient client = S3TablesClient.builder()
    .region(Region.US_EAST_1)
    .dualstackEnabled(true)
    .build();
```

To use the dual-stack endpoint in the AWS CLI, see [Using dual-stack endpoints from the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/dual-stack-endpoints.html#dual-stack-endpoints-cli).

For information about using dual-stack endpoints with AWS PrivateLink, see [Using the dual-stack endpoints to access tables and table buckets](s3-tables-VPC.md#s3-tables-dual-stack-endpoints).

# Security for S3 Tables
<a name="s3-tables-security-overview"></a>

Amazon S3 provides a variety of security features and tools. The following is a list of the features and tools that S3 Tables supports. Applying these tools properly helps ensure that your resources are protected and accessible only to the intended users.

**Identity-based policies**  
[Identity-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) are attached to an IAM user, group, or role. You can use identity-based policies to grant an IAM identity access to your table buckets or tables. By default, users and roles don't have permission to create or modify tables and table buckets, and they can't perform tasks by using the S3 console, the AWS CLI, or the Amazon S3 REST APIs. You can create IAM users, groups, and roles in your account, attach access policies to them, and then grant access to your resources. To create and access table buckets and tables, an IAM administrator must grant the necessary permissions to the AWS Identity and Access Management (IAM) role or users. For more information, see [Access management for S3 Tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-setting-up.html).

**Resource-based policies**  
[Resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) are attached to a resource. You can create resource-based policies for table buckets and tables. You can use a table bucket policy to control table bucket and namespace-level API access permissions. You can also use a table bucket policy to control table-level API permissions on multiple tables in a bucket. Depending on the policy definition, the permissions attached to the bucket can apply to all or specific tables in the bucket. You can also use a table policy to grant table-level API access permissions to individual tables in the bucket.

When S3 Tables receives a request to perform a table bucket operation or a table operation, it first verifies that the requester has the necessary permissions. To decide whether to authorize the request, it evaluates all the relevant policies: the IAM user policy, IAM role policy, table bucket policy, and table policy. With table bucket policies and table policies, you can tailor access to your resources so that only the identities that you have approved can access them and perform actions on them. For more information, see [Access management for S3 Tables](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-tables-setting-up.html).
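
The evaluation order described above can be sketched in a few lines of code. This is an illustrative model only, not the actual AWS authorization engine; the request shape and the policy structure are simplified assumptions:

```
# Illustrative sketch of policy evaluation: an explicit Deny in any relevant
# policy always wins; otherwise the request needs at least one Allow.
def is_authorized(request, policies):
    """Evaluate a request against identity- and resource-based policy statements."""
    allowed = False
    for policy in policies:
        for stmt in policy["Statement"]:
            if request["action"] not in stmt["Action"]:
                continue
            if stmt["Effect"] == "Deny":
                return False  # explicit deny always wins
            allowed = True
    return allowed

request = {"action": "s3tables:CreateTable"}
identity_policy = {"Statement": [
    {"Effect": "Allow", "Action": ["s3tables:CreateTable"]}
]}
bucket_policy = {"Statement": [
    {"Effect": "Deny", "Action": ["s3tables:DeleteTableBucket"]}
]}

print(is_authorized(request, [identity_policy, bucket_policy]))  # True
```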

**AWS Organizations service control policies (SCPs) for S3 Tables**  
You can reference S3 Tables actions in service control policies (SCPs) to manage permissions for users in your organization. As with IAM and resource policies, all table-level and bucket-level actions are referenced through the `s3tables` namespace in these policies. For more information, see [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide.*
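
For example, the following hypothetical SCP denies table and table bucket deletion across all accounts that the policy is attached to. The choice of actions is illustrative; adjust the statement to your own governance requirements:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyTableDeletion",
      "Effect": "Deny",
      "Action": [
        "s3tables:DeleteTable",
        "s3tables:DeleteTableBucket"
      ],
      "Resource": "*"
    }
  ]
}
```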

**Topics**
+ [Protecting S3 table data with encryption](s3-tables-encryption.md)
+ [Access management for S3 Tables](s3-tables-setting-up.md)
+ [VPC connectivity for S3 Tables](s3-tables-VPC.md)
+ [Security considerations and limitations for S3 Tables](s3-tables-restrictions.md)

# Protecting S3 table data with encryption
<a name="s3-tables-encryption"></a>

# Using server-side encryption with AWS KMS keys (SSE-KMS) in table buckets
<a name="s3-tables-kms-encryption"></a>

**Topics**
+ [How SSE-KMS works for tables and table buckets](#kms-tables-how)
+ [Enforcing and scoping SSE-KMS use for tables and table buckets](tables-require-kms.md)
+ [Monitoring and Auditing SSE-KMS encryption for tables and table buckets](#kms-tables-audit)
+ [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md)
+ [Specifying server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-specify.md)

Table buckets have a default encryption configuration that automatically encrypts tables by using server-side encryption with Amazon S3 managed keys (SSE-S3). This encryption applies to all tables in your S3 table buckets, and comes at no cost to you.

If you need more control over your encryption keys, such as managing key rotation and access policy grants, you can configure your table buckets to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). The security controls in AWS KMS can help you meet encryption-related compliance requirements. For more information about SSE-KMS, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

## How SSE-KMS works for tables and table buckets
<a name="kms-tables-how"></a>

SSE-KMS with table buckets differs from SSE-KMS in general purpose buckets in the following ways:
+ You can specify encryption settings for table buckets and individual tables.
+ You can only use customer managed keys with SSE-KMS. AWS managed keys aren't supported.
+ You must grant permissions for certain roles and AWS service principals to access your AWS KMS key. For more information, see [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md). This includes granting access to:
  + The S3 maintenance principal – for performing table maintenance on encrypted tables
  + Your S3 Tables integration role – for working with encrypted tables in AWS analytics services
  + Your client access role – for direct access to encrypted tables from Apache Iceberg clients
  + The S3 Metadata principal – for updating encrypted S3 metadata tables
+ Encrypted tables use table-level keys that minimize the number of requests made to AWS KMS, which makes working with SSE-KMS encrypted tables more cost effective.

**SSE-KMS encryption for table buckets**  
When you create a table bucket, you can choose SSE-KMS as the default encryption type and select a specific KMS key to use for encryption. Any tables created in that bucket automatically inherit these encryption settings from their table bucket. You can use the AWS CLI, S3 API, or AWS SDKs to modify or remove the default encryption settings on a table bucket at any time. When you modify the encryption settings on a table bucket, those settings apply only to new tables created in that bucket. Encryption settings for pre-existing tables are not changed. For more information, see [Specifying encryption for table buckets](s3-tables-kms-specify.md#specify-kms-table-bucket).

**SSE-KMS encryption for tables**  
You also have the option to encrypt an individual table with a different KMS key, regardless of the bucket's default encryption configuration. To set encryption for an individual table, specify the desired encryption key when you create the table. If you want to change the encryption for an existing table, you must create a new table with the desired key and copy the data from the old table to the new one. For more information, see [Specifying encryption for tables](s3-tables-kms-specify.md#specify-kms-table).

When using AWS KMS encryption, S3 Tables automatically creates unique table-level data keys that encrypt new objects associated with each table. These keys are used for a limited time period, minimizing the need for additional AWS KMS requests during encryption operations and reducing the cost of encryption. This is similar to [S3 Bucket Keys for SSE-KMS](bucket-key.md#bucket-key-overview).
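
The effect of table-level keys can be sketched with a simple data-key cache. The following is an illustration of the caching idea only, not the actual S3 Tables implementation; the stub KMS client and the TTL value are assumptions:

```
import time

class StubKMS:
    """Stand-in for AWS KMS that counts GenerateDataKey calls."""
    def __init__(self):
        self.calls = 0

    def generate_data_key(self):
        self.calls += 1
        return f"data-key-{self.calls}"

class TableKeyCache:
    """Reuse a per-table data key for a limited period to reduce KMS requests."""
    def __init__(self, kms, ttl_seconds=300):
        self.kms = kms
        self.ttl = ttl_seconds
        self.cache = {}  # table name -> (key, issued_at)

    def key_for(self, table):
        entry = self.cache.get(table)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # cache hit: no KMS round trip
        key = self.kms.generate_data_key()
        self.cache[table] = (key, time.monotonic())
        return key

kms = StubKMS()
cache = TableKeyCache(kms)
for _ in range(1000):      # 1,000 writes to the same table...
    cache.key_for("orders")
print(kms.calls)           # ...trigger only 1 KMS request
```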

# Enforcing and scoping SSE-KMS use for tables and table buckets
<a name="tables-require-kms"></a>

You can use S3 Tables resource-based policies, KMS key policies, IAM identity-based policies, or any combination of these to enforce the use of SSE-KMS for S3 tables and table buckets. For more information on identity and resource policies for tables, see [Access management for S3 Tables](s3-tables-setting-up.md). For information on writing key policies, see [Key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) in the *AWS Key Management Service Developer Guide*. The following examples show how you can use policies to enforce SSE-KMS.

## Enforcing the use of SSE-KMS for all tables with a table bucket policy
<a name="w2aac20c35c15b3c11b5b1"></a>

The following example table bucket policy prevents users from creating tables in a specific table bucket unless they encrypt the tables with a specific AWS KMS key. To use this policy, replace the *user input placeholders* with your own information:

------
#### [ JSON ]

```
{
  "Version":"2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "EnforceKMSEncryptionAlgorithm",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3tables:CreateTable"
      ],
      "Resource": [
        "arn:aws:s3tables:us-west-2:111122223333:bucket/example-table-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3tables:sseAlgorithm": "aws:kms"
        }
      }
    },
    {
      "Sid": "EnforceKMSEncryptionKey",
      "Effect": "Deny",
      "Principal": "*",
      "Action": [
        "s3tables:CreateTable"
      ],
      "Resource": [
        "arn:aws:s3tables:us-west-2:111122223333:bucket/example-table-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "s3tables:kmsKeyArn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
      }
    }
  ]
}
```

------

## Requiring users to use SSE-KMS encryption with an IAM policy
<a name="w2aac20c35c15b3c11b7b1"></a>

This IAM identity policy requires users to use a specific AWS KMS key for encryption when creating or configuring S3 Tables resources. To use this policy, replace the *user input placeholders* with your own information:

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "RequireSSEKMSOnTables",
      "Action": [
          "s3tables:CreateTableBucket",
          "s3tables:PutTableBucketEncryption",
          "s3tables:CreateTable"
      ],
      "Effect": "Deny",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
            "s3tables:sseAlgorithm": "aws:kms"
        }
      }
    },
    {
      "Sid": "RequireKMSKeyOnTables",
      "Action": [
          "s3tables:CreateTableBucket",
          "s3tables:PutTableBucketEncryption",
          "s3tables:CreateTable"
      ],
      "Effect": "Deny",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
            "s3tables:kmsKeyArn": "<key_arn>"
        }
      }
    }
  ]
}
```

## Restricting the use of a key to a specific table bucket with a KMS key policy
<a name="w2aac20c35c15b3c11b9b1"></a>

This example KMS key policy allows the key to be used by a specific user only for encryption operations in a specific table bucket. This type of policy is useful for limiting access to a key in cross-account scenarios. To use this policy, replace the *user input placeholders* with your own information: 

------
#### [ JSON ]

```
{
  "Version":"2012-10-17",		 	 	 
  "Id": "Id",
  "Statement": [
    {
      "Sid": "AllowPermissionsToKMS",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:root"
      },
      "Action": [
        "kms:GenerateDataKey",
        "kms:Decrypt"
      ],
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "kms:EncryptionContext:aws:s3:arn": "<table-bucket-arn>/*"
        }
      }
    }
  ]
}
```

------

## Monitoring and Auditing SSE-KMS encryption for tables and table buckets
<a name="kms-tables-audit"></a>

To audit the usage of your AWS KMS keys for your SSE-KMS encrypted data, you can use AWS CloudTrail logs. You can get insight into your [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations), such as `GenerateDataKey` and `Decrypt`. CloudTrail supports numerous [attribute values](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_LookupEvents.html) for filtering your search, including event name, user name, and event source.

You can track encryption configuration requests for Amazon S3 tables and table buckets by using CloudTrail events. The following API event names are used in CloudTrail logs:
+ `s3tables:PutTableBucketEncryption`
+ `s3tables:GetTableBucketEncryption`
+ `s3tables:DeleteTableBucketEncryption`
+ `s3tables:GetTableEncryption`
+ `s3tables:CreateTable`
+ `s3tables:CreateTableBucket`
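
For example, CloudTrail delivers log files as JSON documents with a `Records` array, and each record's `eventName` carries the bare operation name (without the `s3tables:` prefix). The following sketch filters a log file locally for the encryption-related events above; the sample record is a minimal assumption, not a full CloudTrail record:

```
import json

# Event names from this section; strip the "s3tables:" prefix for matching
# against the eventName field in CloudTrail records.
ENCRYPTION_EVENTS = {
    name.split(":", 1)[1]
    for name in [
        "s3tables:PutTableBucketEncryption",
        "s3tables:GetTableBucketEncryption",
        "s3tables:DeleteTableBucketEncryption",
        "s3tables:GetTableEncryption",
        "s3tables:CreateTable",
        "s3tables:CreateTableBucket",
    ]
}

def encryption_events(trail_file_json):
    """Return CloudTrail records for S3 Tables encryption configuration events."""
    records = json.loads(trail_file_json)["Records"]
    return [r for r in records if r.get("eventName") in ENCRYPTION_EVENTS]

sample = json.dumps({"Records": [
    {"eventName": "PutTableBucketEncryption", "eventSource": "s3tables.amazonaws.com"},
    {"eventName": "ListTableBuckets", "eventSource": "s3tables.amazonaws.com"},
]})
print([r["eventName"] for r in encryption_events(sample)])  # ['PutTableBucketEncryption']
```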

**Note**  
EventBridge isn't supported for table buckets.

# Permission requirements for S3 Tables SSE-KMS encryption
<a name="s3-tables-kms-permissions"></a>

When you use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) for tables in S3 table buckets, you need to grant permissions to different identities in your account. At minimum, your access identity and the S3 Tables maintenance principal need access to your key. The other permissions that are required depend on your use case.

**Required permissions**  
To access a table encrypted with a KMS key, you need these permissions on that key:  
+ `kms:GenerateDataKey`
+ `kms:Decrypt`
To use SSE-KMS on tables, the Amazon S3 Tables maintenance service principal (`maintenance.s3tables.amazonaws.com`) needs `kms:GenerateDataKey` and `kms:Decrypt` permissions on the key.

**Additional permissions**  
These additional permissions are required depending on your use case:  
+ **Permissions for AWS analytics services and direct access** – If you work with SSE-KMS encrypted tables through AWS analytics services or third-party engines accessing S3 tables directly, the IAM role you use needs permission to use your KMS key.
+ **Permissions with Lake Formation enabled** – If you have opted in to AWS Lake Formation for access control, the Lake Formation service role needs permission to use your KMS key.
+ **Permissions for S3 Metadata tables** – If you use SSE-KMS encryption for S3 Metadata tables, you need to provide the S3 Metadata service principal (`metadata.s3.amazonaws.com`) access to your KMS key. This allows S3 Metadata to update encrypted tables so they will reflect your latest data changes.

**Note**  
For cross-account KMS keys, your IAM role needs both key access permission and explicit authorization in the key policy. For more information about cross-account permissions for KMS keys, see [Allowing external AWS accounts to use a KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*.

**Topics**
+ [Granting the S3 Tables maintenance service principal permissions to your KMS key](#tables-kms-maintenance-permissions)
+ [Granting IAM principals permissions to work with encrypted tables in integrated AWS analytics services](#tables-kms-integration-permissions)
+ [Granting IAM principals permissions to work with encrypted tables when Lake Formation is enabled](#tables-kms-lf-permissions)
+ [Granting the S3 Metadata service principal permissions to use your KMS key](#tables-kms-metadata-permissions)

## Granting the S3 Tables maintenance service principal permissions to your KMS key
<a name="tables-kms-maintenance-permissions"></a>

This permission is required to create SSE-KMS encrypted tables and to allow automatic table maintenance like compaction, snapshot management, and unreferenced file removal on the encrypted tables.

**Note**  
Whenever you make a request to create an SSE-KMS encrypted table, S3 Tables checks that the `maintenance.s3tables.amazonaws.com` principal has access to your KMS key. To perform this check, a zero-byte object is temporarily created in your table bucket. This object is automatically removed by the [unreferenced file removal](s3-table-buckets-maintenance.md#s3-table-bucket-maintenance-unreferenced) maintenance operation. If the KMS key that you specified for encryption doesn't allow maintenance access, the `CreateTable` operation fails.

To grant maintenance access on SSE-KMS encrypted tables, you can use the following example key policy. In this policy, the `maintenance.s3tables.amazonaws.com` service principal is granted permission to use a specific KMS key for encrypting and decrypting tables in a specific table bucket. To use the policy, replace the *user input placeholders* with your own information:

------
#### [ JSON ]

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EnableKeyUsage",
            "Effect": "Allow",
            "Principal": {
                "Service": "maintenance.s3tables.amazonaws.com"            
            },
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/key-id",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn":"<table-or-table-bucket-arn>/*"
                }
            }
        }
    ]
}
```

------

## Granting IAM principals permissions to work with encrypted tables in integrated AWS analytics services
<a name="tables-kms-integration-permissions"></a>

To work with S3 tables in AWS analytics services, you integrate your table buckets with AWS Glue Data Catalog. This integration allows AWS analytics services to automatically discover and access table data. For more information on the integration, see [Integrating Amazon S3 Tables with AWS analytics services](s3-tables-integrating-aws.md).

When you work with SSE-KMS encrypted tables through AWS analytics services or third-party and open-source engines accessing S3 tables directly, the IAM role you use needs permission to use your AWS KMS key for encryption operations.

You can grant KMS key access through an IAM policy attached to your role or through a KMS key policy.

------
#### [ IAM policy ]

Attach this inline policy to the IAM role you use for querying to allow KMS key access. Replace the KMS key ARN with your own.

```
{
    "Version":"2012-10-17",		 	 	 ,                    
    "Statement": [
        {
            "Sid": "AllowKMSKeyUsage",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
    ]
}
```

------
#### [ KMS key policy ]

Alternatively, attach this statement to your KMS key policy to allow the specified IAM role to use the key. Replace the role ARN with the IAM role you use for querying.

```
{
    "Sid": "Allow use of the key",
    "Effect": "Allow",
    "Principal": {
        "AWS": [
            "arn:aws:iam::<catalog-account-id>:role/<role-name>"
        ]
    },
    "Action": [
        "kms:Decrypt",
        "kms:GenerateDataKey",
    ],
    "Resource": "*"
}
```

------

## Granting IAM principals permissions to work with encrypted tables when Lake Formation is enabled
<a name="tables-kms-lf-permissions"></a>

If you have opted in to AWS Lake Formation for access control on your S3 Tables integration, the Lake Formation service role needs permission to use your AWS KMS key for encryption operations. Lake Formation uses this role to vend credentials on behalf of principals accessing your tables.

The following KMS key policy example grants the Lake Formation service role permission to use a specific KMS key in your account for encryption operations. Replace the placeholder values with your own.

```
{
  "Sid": "AllowTableRoleAccess",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::111122223333:role/service-role/S3TablesRoleForLakeFormation"
  },
  "Action": [
      "kms:GenerateDataKey", 
      "kms:Decrypt"
  ],
  "Resource": "<kms-key-arn>"
}
```

## Granting the S3 Metadata service principal permissions to use your KMS key
<a name="tables-kms-metadata-permissions"></a>

To allow Amazon S3 to update SSE-KMS encrypted metadata tables and perform maintenance on those metadata tables, you can use the following example key policy. In this policy, you allow the `metadata.s3.amazonaws.com` and `maintenance.s3tables.amazonaws.com` service principals to encrypt and decrypt tables in a specific table bucket by using a specific key. To use the policy, replace the *user input placeholders* with your own information:

------
#### [ JSON ]

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "EnableKeyUsage",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "maintenance.s3tables.amazonaws.com",
                    "metadata.s3.amazonaws.com"
                ]           
            },
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
            "Condition": {
                "StringLike": {
                    "kms:EncryptionContext:aws:s3:arn":"<table-or-table-bucket-arn>/*"
                }
            }
        }
    ]
}
```

------

# Specifying server-side encryption with AWS KMS keys (SSE-KMS) in table buckets
<a name="s3-tables-kms-specify"></a>

All Amazon S3 table buckets have encryption configured by default, and all new tables created in a table bucket are automatically encrypted at rest. Server-side encryption with Amazon S3 managed keys (SSE-S3) is the default encryption configuration for every table bucket. If you want to specify a different encryption type, you can use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

 You can specify SSE-KMS encryption in your `CreateTableBucket` or `CreateTable` requests, or you can set the default encryption configuration in the table bucket in a `PutTableBucketEncryption` request.

**Important**  
To allow automatic maintenance on SSE-KMS encrypted tables and table buckets, you must grant the `maintenance.s3tables.amazonaws.com` service principal permission to use your KMS key. For more information, see [Permission requirements for S3 Tables SSE-KMS encryption](s3-tables-kms-permissions.md).

## Specifying encryption for table buckets
<a name="specify-kms-table-bucket"></a>

You can specify SSE-KMS as the default encryption type when you create a new table bucket. For examples, see [Creating a table bucket](s3-tables-buckets-create.md). After creating a table bucket, you can set SSE-KMS as the default encryption by using REST API operations, AWS SDKs, or the AWS Command Line Interface (AWS CLI).

**Note**  
 When you specify SSE-KMS as the default encryption type, the key you use for encryption must allow access to the S3 Tables maintenance service principal. If the maintenance service principal does not have access, you will be unable to create tables in that table bucket. For more information, see [Granting the S3 Tables maintenance service principal permissions to your KMS key](s3-tables-kms-permissions.md#tables-kms-maintenance-permissions).

### Using the AWS CLI
<a name="w2aac20c35c15b3c17b9b9b1"></a>

To use the following example AWS CLI command, replace the *user input placeholders* with your own information.

```
aws s3tables put-table-bucket-encryption \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket \
    --encryption-configuration '{
        "sseAlgorithm": "aws:kms",
        "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }' \
    --region us-east-1
```

You can remove the default encryption setting for a table bucket by using the [DeleteTableBucketEncryption](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucketEncryption.html) API operation. When you remove the encryption settings, new tables created in the table bucket use the default SSE-S3 encryption.

## Specifying encryption for tables
<a name="specify-kms-table"></a>

You can apply SSE-KMS encryption to a new table when you create it using query engines, REST API operations, AWS SDKs, and the AWS Command Line Interface (AWS CLI). The encryption settings you specify when creating a table take precedence over the default encryption setting of the table bucket.

**Note**  
When you use SSE-KMS encryption for a table, the key that you use for encryption must allow the S3 Tables maintenance service principal to access it. If the maintenance service principal does not have access, you will be unable to create the table. For more information, see [Granting the S3 Tables maintenance service principal permissions to your KMS key](s3-tables-kms-permissions.md#tables-kms-maintenance-permissions).

**Required permissions**

The following permissions are required to create encrypted tables:
+ `s3tables:CreateTable`
+ `s3tables:PutTableEncryption`

### Using the AWS CLI
<a name="w2aac20c35c15b3c17c13b1"></a>

The following AWS CLI example creates a new table with a basic schema, and encrypts it with a customer managed AWS KMS key. To use the command, replace the *user input placeholders* with your own information.

```
aws s3tables create-table \
  --table-bucket-arn "arn:aws:s3tables:Region:ownerAccountId:bucket/amzn-s3-demo-table-bucket" \
  --namespace "mydataset" \
  --name "orders" \
  --format "ICEBERG" \
  --encryption-configuration '{
    "sseAlgorithm": "aws:kms",
    "kmsKeyArn": "arn:aws:kms:Region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
  }' \
  --metadata '{
    "iceberg": {
      "schema": {
        "fields": [
          {
            "name": "order_id",
            "type": "string",
            "required": true
          },
          {
            "name": "order_date",
            "type": "timestamp",
            "required": true
          },
          {
            "name": "total_amount",
            "type": "decimal(10,2)",
            "required": true
          }
        ]
      }
    }
  }'
```

Data protection refers to protecting data while it's in transit (as it travels to and from Amazon S3) and at rest (while it's stored on disks in Amazon S3 data centers). S3 Tables always protects data in transit by using Transport Layer Security (TLS) 1.2 or later over HTTPS. To protect data at rest in S3 table buckets, you have the following options:

**Server-side encryption with Amazon S3 managed keys (SSE-S3)**  
All Amazon S3 table buckets have encryption configured by default. The default is server-side encryption with Amazon S3 managed keys (SSE-S3). This encryption comes at no cost to you and applies to all tables in your S3 table buckets unless you specify another form of encryption. Each object is encrypted with a unique key. As an additional safeguard, SSE-S3 encrypts the key itself with a root key that it rotates regularly. SSE-S3 uses one of the strongest block ciphers available, 256-bit Advanced Encryption Standard (AES-256), to encrypt your data.

**Server-side encryption with AWS KMS keys (SSE-KMS)**  
You can choose to configure table buckets or tables to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). The security controls in AWS KMS can help you meet encryption-related compliance requirements. SSE-KMS gives you more control over your encryption keys by allowing you to do the following:   
+ Create, view, edit, monitor, enable or disable, rotate, and schedule deletion of KMS keys.
+ Define the policies that control how and by whom KMS keys can be used.
+ Track key usage in AWS CloudTrail to verify your KMS keys are being used correctly.
S3 Tables supports using customer managed keys in SSE-KMS to encrypt tables. AWS managed keys are not supported. For more information on using SSE-KMS for S3 tables and table buckets, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in table buckets](s3-tables-kms-encryption.md).

# Access management for S3 Tables
<a name="s3-tables-setting-up"></a>

In S3 Tables, resources include table buckets and the tables that they contain. The root user of the AWS account that created a resource (the resource owner), and AWS Identity and Access Management (IAM) users within that account who have the necessary permissions, can access that resource. The resource owner specifies who else can access the resource and which actions they are allowed to perform on it. Amazon S3 has various access management tools that you can use to grant others access to your S3 resources. If you've integrated your tables with AWS Lake Formation, you can also manage fine-grained access to your tables and namespaces. The following topics provide an overview of resources, IAM actions, and condition keys for S3 Tables, along with examples of both resource-based and identity-based policies for S3 Tables.

**Topics**
+ [Resources](#s3-tables-resources)
+ [Actions for S3 Tables](#s3-tables-actions)
+ [Condition keys for S3 Tables](#s3-tables-conditionkeys)
+ [IAM identity-based policies for S3 Tables](s3-tables-identity-based-policies.md)
+ [Resource-based policies for S3 Tables](s3-tables-resource-based-policies.md)
+ [AWS managed policies for S3 Tables](s3-tables-security-iam-awsmanpol.md)
+ [Granting access with SQL semantics](s3-tables-sql.md)
+ [Managing access to a table or database with Lake Formation](grant-permissions-tables.md)

## Resources
<a name="s3-tables-resources"></a>

S3 Tables resources include table buckets and the tables that they contain.
+ Table buckets – Table buckets are specifically designed for tables and provide higher transactions per second (TPS) and better query throughput compared to self-managed tables in general purpose S3 buckets. Table buckets deliver the same durability, availability, scalability, and performance characteristics as Amazon S3 general purpose buckets.
+ Tables – Tables in your table buckets are stored in Apache Iceberg format. You can query these tables using standard SQL in query engines that support Iceberg.

Amazon Resource Names (ARNs) for tables and table buckets contain the `s3tables` namespace, the AWS Region, the AWS account ID, and the bucket name. To access and perform actions on your tables and table buckets, you must use the following ARN formats:
+ Table bucket ARN format:

  `arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-bucket`
+ Table ARN format:

  `arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-bucket/table/demo-tableID`
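
Because a table ARN extends its table bucket's ARN, the two formats are straightforward to compose programmatically. The following sketch builds the ARNs shown above from their parts; the helper names are illustrative:

```
# Illustrative helpers that compose S3 Tables ARNs from their parts.
def table_bucket_arn(region, account_id, bucket):
    """Compose a table bucket ARN in the s3tables namespace."""
    return f"arn:aws:s3tables:{region}:{account_id}:bucket/{bucket}"

def table_arn(region, account_id, bucket, table_id):
    """Compose a table ARN as a subresource of its table bucket."""
    return f"{table_bucket_arn(region, account_id, bucket)}/table/{table_id}"

print(table_arn("us-west-2", "111122223333", "amzn-s3-demo-bucket", "demo-tableID"))
# arn:aws:s3tables:us-west-2:111122223333:bucket/amzn-s3-demo-bucket/table/demo-tableID
```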

## Actions for S3 Tables
<a name="s3-tables-actions"></a>

In an identity-based policy or resource-based policy, you define which S3 Tables actions are allowed or denied for specific IAM principals. S3 Tables actions correspond to bucket-level and table-level API operations. All actions are part of a unique IAM namespace: `s3tables`.

When you use an action in a policy, you usually allow or deny access to the API operation with the same name. However, in some cases, a single action controls access to more than one API operation. For example, the `s3tables:GetTableData` action includes permissions for the `GetObject`, `ListParts`, and `ListMultiparts` API operations.

The following are supported actions for table buckets. You can specify the following actions in the `Action` element of an IAM policy or resource policy.


| Action | Description | Access level | Cross-account access | 
| --- | --- | --- | --- | 
| s3tables:CreateTableBucket | Grants permissions to create a table bucket | Write | No | 
| s3tables:GetTableBucket | Grants permission to retrieve a table bucket ARN, table bucket name, and creation date | Read | Yes | 
| s3tables:ListTableBuckets | Grants permission to list all table buckets in this account. | Read | No | 
| s3tables:CreateNamespace | Grants permission to create a namespace in a table bucket | Write | Yes | 
| s3tables:GetNamespace | Grants permission to retrieve namespace details | Read | Yes | 
| s3tables:ListNamespaces | Grants permission to list all namespaces on the table bucket. | Read | Yes | 
| s3tables:DeleteNamespace | Grants permission to delete a namespace in a table bucket | Write | Yes | 
| s3tables:DeleteTableBucket | Grants permission to delete the bucket  | Write | Yes  | 
| s3tables:PutTableBucketPolicy | Grants permission to add or replace a bucket policy | Permissions Management | No | 
| s3tables:GetTableBucketPolicy | Grants permission to retrieve the bucket policy | Read | No | 
| s3tables:DeleteTableBucketPolicy | Grants permission to delete the bucket policy | Permissions Management | No | 
| s3tables:GetTableBucketMaintenanceConfiguration | Grants permission to retrieve the maintenance configuration for a table bucket | Read | Yes  | 
| s3tables:PutTableBucketMaintenanceConfiguration | Grants permission to add or replace the maintenance configuration for a table bucket | Write | Yes | 
| s3tables:PutTableBucketEncryption | Grants permission to add or replace the encryption configuration for a table bucket | Write | No | 
| s3tables:GetTableBucketEncryption | Grants permission to retrieve the encryption configuration for a table bucket | Read | No | 
| s3tables:DeleteTableBucketEncryption | Grants permission to delete the encryption configuration for a table bucket | Write | No | 

The following actions are supported for tables:


| Action | Description | Access level | Cross-account access | 
| --- | --- | --- | --- | 
| s3tables:GetTableMaintenanceConfiguration | Grants permission to retrieve the maintenance configuration for a table | Read | Yes | 
| s3tables:PutTableMaintenanceConfiguration |  Grants permission to add or replace the maintenance configuration for a table | Write | Yes | 
| s3tables:PutTablePolicy | Grants permission to add or replace a table policy | Permissions Management | No | 
| s3tables:GetTablePolicy | Grants permission to retrieve the table policy | Read | No | 
| s3tables:DeleteTablePolicy | Grants permission to delete the table policy | Permissions Management | No | 
| s3tables:CreateTable | Grants permission to create a table in a table bucket | Write | Yes | 
| s3tables:GetTable | Grants permission to retrieve table information | Read | Yes | 
| s3tables:GetTableMetadataLocation | Grants permission to retrieve the table root pointer (metadata file) | Read | Yes  | 
| s3tables:ListTables | Grants permission to list all tables in a table bucket | Read | Yes  | 
| s3tables:RenameTable | Grants permission to change the name of a table | Write | Yes  | 
| s3tables:UpdateTableMetadataLocation | Grants permission to update table root pointer (metadata file) | Write | Yes  | 
| s3tables:GetTableData | Grants permission to read the table metadata and data objects stored in the table bucket | Read | Yes | 
| s3tables:PutTableData | Grants permission to write the table metadata and data objects stored in the table bucket | Write | Yes | 
| s3tables:GetTableEncryption  | Grants permission to retrieve the encryption settings for a table | Read | No | 
| s3tables:PutTableEncryption  | Grants permission to add encryption to a table | Write | No | 
| s3tables:DeleteTable | Grants permission to delete a table from a table bucket | Write | Yes | 

To perform table-level read and write actions, S3 Tables supports Amazon S3 API operations such as `GetObject` and `PutObject`. The following table maps the S3 Tables actions that you use to grant read and write permissions on your tables to the object-level S3 API operations that they allow.


| Action | S3 object APIs | 
| --- | --- | 
| s3tables:GetTableData | GetObject, ListParts, HeadObject | 
| s3tables:PutTableData | PutObject, CreateMultipartUpload, CompleteMultipartUpload,  UploadPart, AbortMultipartUpload | 

For example, if a user has `GetTableData` permissions, then they can read all the files associated with the table, such as its metadata file, manifest files, manifest lists, and Parquet data files.
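
To illustrate, the following identity-based policy sketch grants read-only access to the data of every table in a bucket. The bucket name and account ID are placeholders; adapt them to your resources.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTableDataReads",
            "Effect": "Allow",
            "Action": [
                "s3tables:GetTableData",
                "s3tables:GetTableMetadataLocation"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/*"
        }
    ]
}
```

Because `s3tables:GetTableData` maps to the object-level read operations shown in the table above, a principal with this policy can read a table's files without a separate `s3:GetObject` grant.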

## Condition keys for S3 Tables
<a name="s3-tables-conditionkeys"></a>

S3 Tables supports [AWS global condition context keys](https://docs.aws.amazon.com//IAM/latest/UserGuide/reference_policies_condition-keys.html).

Additionally, S3 Tables defines the following condition keys that you can use in an access policy.


| Condition key | Description | Type | 
| --- | --- | --- | 
|  s3tables:tableName |  Filters access by the name of the tables in the table bucket.  You can use the `s3tables:tableName` condition key to write IAM or table bucket policies that restrict user or application access to only the tables that meet this name condition. *Example value:* `"s3tables:tableName":"department$1"`  It's important to note that if you use the `s3tables:tableName` condition key to control access, then changes to table names could impact these policies.  | String | 
|  s3tables:namespace |  Filters access by the namespaces created in the table bucket.  You can use the `s3tables:namespace` condition key to write IAM, table, or table bucket policies that restrict user or application access to tables that are part of a specific namespace. *Example value:* `"s3tables:namespace":"hr" `  It's important to note that if you use the `s3tables:namespace` condition key to control access, then changes in namespaces could impact these policies.  | String | 
|  s3tables:SSEAlgorithm |  Filters access by the server-side encryption algorithm used to encrypt a table.  You can use the `s3tables:SSEAlgorithm` condition key to write IAM, table, or table bucket policies that restrict user or application access to tables that are encrypted with a certain encryption type. *Example value:* `"s3tables:SSEAlgorithm":"aws:kms" `  It's important to note that if you use the `s3tables:SSEAlgorithm` condition key to control access, then changes in encryption could impact these policies.  | String | 
|  s3tables:KMSKeyArn |  Filters access by the AWS KMS key ARN for the key used to encrypt a table. You can use the `s3tables:KMSKeyArn` condition key to write IAM, table, or table bucket policies that restrict user or application access to tables that are encrypted with a specific KMS key.  It's important to note that if you use the `s3tables:KMSKeyArn` condition key to control access, then changing your KMS key could impact these policies.  | ARN | 
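
For example, a table bucket policy could combine the `s3tables:SSEAlgorithm` condition key with a `Deny` statement to reject any table encryption configuration that doesn't use AWS KMS. The following is a sketch; the bucket name and account ID are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RequireKmsEncryption",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3tables:PutTableEncryption",
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/*",
            "Condition": {
                "StringNotEquals": {
                    "s3tables:SSEAlgorithm": "aws:kms"
                }
            }
        }
    ]
}
```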

# IAM identity-based policies for S3 Tables
<a name="s3-tables-identity-based-policies"></a>

By default, users and roles don't have permission to create or modify tables and table buckets. They also can't perform tasks by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), or Amazon S3 REST APIs. To create and access table buckets and tables, an AWS Identity and Access Management (IAM) administrator must grant the necessary permissions to the IAM roles or users. To learn how to create an IAM identity-based policy by using these example JSON policy documents, see [Creating IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create-console.html) in the *IAM User Guide*. 

The following topic includes examples of IAM identity-based policies. To use the following example policies, replace the *user input placeholders* with your own information.

**Topics**
+ [Example 1: Allow access to create and use table buckets](#example-1-s3-tables-identity-based-policies)
+ [Example 2: Allow access to create and use tables in a table bucket](#example-2-s3-tables-identity-based-policies)

## Example 1: Allow access to create and use table buckets
<a name="example-1-s3-tables-identity-based-policies"></a>

The following example identity-based policy allows access to create table buckets, manage table bucket policies, and list and retrieve table buckets in an account.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketActions",
            "Effect": "Allow",
            "Action": [
                "s3tables:CreateTableBucket",
                "s3tables:PutTableBucketPolicy",
                "s3tables:GetTableBucketPolicy",
                "s3tables:ListTableBuckets",
                "s3tables:GetTableBucket"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/*"
        }
    ]
}
```

------

## Example 2: Allow access to create and use tables in a table bucket
<a name="example-2-s3-tables-identity-based-policies"></a>

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBucketActions",
            "Effect": "Allow",
            "Action": [
                "s3tables:GetTableBucket",
                "s3tables:ListTables",
                "s3tables:CreateTable",
                "s3tables:PutTableData",
                "s3tables:GetTableData",
                "s3tables:GetTable",
                "s3tables:GetTableMetadataLocation",
                "s3tables:UpdateTableMetadataLocation",
                "s3tables:GetNamespace",
                "s3tables:CreateNamespace",
                "s3tables:ListNamespaces"
            ],
            "Resource": [
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/*"
            ]
        }
    ]
}
```

------

# Resource-based policies for S3 Tables
<a name="s3-tables-resource-based-policies"></a>

S3 Tables provides two resource-based policies for managing access to table buckets and tables: table bucket policies and table policies. You can use a table bucket policy to grant API access permissions at the table bucket, namespace, or table level. The permissions attached to the table bucket can apply to all tables in the bucket or to specific tables, depending on the policy definition. You can use a table policy to grant permissions at the table level. 

When S3 Tables receives a request, it first verifies that the requester has the necessary permissions. It evaluates all of the relevant policies (IAM user policies, IAM role policies, table bucket policies, and table policies) when deciding whether to authorize the request. For example, if a table bucket policy grants a user permission to perform all actions on the tables in the bucket (including `DeleteTable`), but an individual table has a table policy that denies the `DeleteTable` action for all users, then the user can't delete that table.

The following topic includes examples of table and table bucket policies. To use these policies, replace the *user input placeholders* with your own information.

**Note**  
Every policy that grants permissions to modify tables should include permissions for `GetTableMetadataLocation` to access the table root file. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMetadataLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMetadataLocation.html).
Any policy that allows write or delete activity on a table must also include permissions for `UpdateTableMetadataLocation`.
We recommend using a table bucket policy to govern access to bucket-level actions and a table policy to govern access to table-level actions. If you want to define the same set of permissions across multiple tables, we recommend using a table bucket policy.

**Topics**
+ [Example 1: Table bucket policy that allows access to `PutTableBucketMaintenanceConfiguration` for buckets in an account](#table-bucket-policy-1)
+ [Example 2: Table bucket policy that allows read (SELECT) access to tables stored in the `hr` namespace](#table-bucket-policy-2)
+ [Example 3: Table policy to allow user to delete a table](#table-bucket-policy-3)

## Example 1: Table bucket policy that allows access to `PutTableBucketMaintenanceConfiguration` for buckets in an account
<a name="table-bucket-policy-1"></a>

The following example table bucket policy allows the IAM role `datasteward` to configure maintenance, such as unreferenced file removal, for all table buckets in an account by allowing access to `PutTableBucketMaintenanceConfiguration`.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/datasteward"
            },
            "Action": [
                "s3tables:PutTableBucketMaintenanceConfiguration"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/*"
        }
    ]
}
```

------

## Example 2: Table bucket policy that allows read (SELECT) access to tables stored in the `hr` namespace
<a name="table-bucket-policy-2"></a>

The following example table bucket policy allows Jane, a user in AWS account `123456789012`, to read tables stored in the `hr` namespace of a table bucket.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Jane"
            },
            "Action": [
                "s3tables:GetTableData",
                "s3tables:GetTableMetadataLocation"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/*",
            "Condition": {
                "StringLike": {
                    "s3tables:namespace": "hr"
                }
            }
        }
    ]
}
```

------

## Example 3: Table policy to allow user to delete a table
<a name="table-bucket-policy-3"></a>

The following example table policy allows the IAM role `datasteward` to delete a table.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Id": "DeleteTable",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/datasteward"
            },
            "Action": [
                "s3tables:DeleteTable",
                "s3tables:UpdateTableMetadataLocation",
                "s3tables:PutTableData",
                "s3tables:GetTableMetadataLocation"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/tableUUID"
        }
    ]
}
```

------

# AWS managed policies for S3 Tables
<a name="s3-tables-security-iam-awsmanpol"></a>

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [ customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

## AWS managed policy: AmazonS3TablesFullAccess
<a name="s3-tables-security-iam-awsmanpol-amazons3tablesfullaccess"></a>

You can attach the `AmazonS3TablesFullAccess` policy to your IAM identities. This policy grants permissions that allow full access to Amazon S3 Tables. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesFullAccess.html).

## AWS managed policy: AmazonS3TablesReadOnlyAccess
<a name="s3-tables-security-iam-awsmanpol-amazons3readonlyaccess"></a>

You can attach the `AmazonS3TablesReadOnlyAccess` policy to your IAM identities. This policy grants permissions that allow read-only access to Amazon S3 Tables. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesReadOnlyAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesReadOnlyAccess.html).

## AWS managed policy: AmazonS3TablesLakeFormationServiceRole
<a name="s3-tables-security-iam-awsmanpol-amazons3tableslakeformationservicerole"></a>

You can attach the `AmazonS3TablesLakeFormationServiceRole` policy to your IAM identities. This policy grants permissions that allow the AWS Lake Formation service role access to S3 Tables. AWS KMS permissions are used to allow Lake Formation to access encrypted tables. For more information about this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesLakeFormationServiceRole.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3TablesLakeFormationServiceRole.html).

## Amazon S3 Tables updates to AWS managed policies
<a name="s3-tables-security-iam-awsmanpol-updates"></a>

View details about updates to AWS managed policies for Amazon S3 Tables since S3 Tables began tracking these changes.


| Change | Description | Date | 
| --- | --- | --- | 
|  Amazon S3 Tables updated `AmazonS3TablesFullAccess`.  |  S3 Tables updated the AWS-managed policy called `AmazonS3TablesFullAccess`. This update grants permission to pass a role to the S3 Tables Replication service.  |  December 2, 2025  | 
|  Amazon S3 Tables added `AmazonS3TablesLakeFormationServiceRole`.  |  S3 Tables added a new AWS-managed policy called `AmazonS3TablesLakeFormationServiceRole`. This policy grants permissions that allow the Lake Formation service role access to S3 Tables.   | May 19, 2025 | 
|  Amazon S3 Tables added `AmazonS3TablesFullAccess`.  |  S3 Tables added a new AWS-managed policy called `AmazonS3TablesFullAccess`. This policy grants permissions that allow full access to Amazon S3 Tables.   | December 03, 2024 | 
|  Amazon S3 Tables added `AmazonS3TablesReadOnlyAccess`.  |  S3 Tables added a new AWS-managed policy called `AmazonS3TablesReadOnlyAccess`. This policy grants permissions to allow read-only access to Amazon S3 Tables.   | December 03, 2024 | 
|  Amazon S3 Tables started tracking changes.  |  Amazon S3 Tables started tracking changes for its AWS managed policies.  | December 03, 2024 | 

# Granting access with SQL semantics
<a name="s3-tables-sql"></a>

You can grant permissions to tables by using SQL semantics in table and table bucket policies. Examples of SQL semantics you can use are `CREATE`, `INSERT`, `DELETE`, `UPDATE`, and `ALTER`. The following table provides a list of API actions associated with SQL semantics that you can use to grant permissions to your users.

S3 Tables partially supports permissions using SQL semantics. For example, the `CreateTable` API only creates an empty table in the table bucket. To set the table schema, you need additional permissions, such as `UpdateTableMetadataLocation`, `PutTableData`, and `GetTableMetadataLocation`. These additional permissions also grant the user access to insert rows into the table. If you want to govern access purely based on SQL semantics, we recommend using [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/what-is-lake-formation.html) or a third-party solution that is integrated with S3 Tables.


| Table-level activity | IAM actions | 
| --- | --- | 
| SELECT | s3tables:GetTableData, s3tables:GetTableMetadataLocation | 
| CREATE | s3tables:CreateTable, s3tables:UpdateTableMetadataLocation, s3tables:PutTableData, s3tables:GetTableMetadataLocation | 
| INSERT | s3tables:UpdateTableMetadataLocation, s3tables:PutTableData, s3tables:GetTableMetadataLocation | 
| UPDATE | s3tables:UpdateTableMetadataLocation, s3tables:PutTableData, s3tables:GetTableMetadataLocation | 
| ALTER,RENAME | s3tables:UpdateTableMetadataLocation, s3tables:PutTableData, s3tables:GetTableMetadataLocation, s3tables:RenameTable  | 
| DELETE,DROP | s3tables:DeleteTable, s3tables:UpdateTableMetadataLocation, s3tables:PutTableData, s3tables:GetTableMetadataLocation  | 
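
Putting the table above to use, the following table policy sketch grants a principal the equivalent of INSERT access on a single table. The role ARN, bucket name, and `tableUUID` are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInsert",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/example-writer"
            },
            "Action": [
                "s3tables:UpdateTableMetadataLocation",
                "s3tables:PutTableData",
                "s3tables:GetTableMetadataLocation"
            ],
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/tableUUID"
        }
    ]
}
```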

**Note**  
The `s3tables:DeleteTable` permission is required to delete a table from a table bucket. This permission allows you to permanently remove a table and all its associated data and metadata. Use this permission carefully as the delete operation cannot be undone.
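
One way to guard against accidental drops is an explicit deny on `s3tables:DeleteTable`, which overrides any allow. The following table bucket policy statement is a sketch with placeholder bucket and account values.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyTableDeletes",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3tables:DeleteTable",
            "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/*"
        }
    ]
}
```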

# Managing access to a table or database with Lake Formation
<a name="grant-permissions-tables"></a>

If your table buckets are integrated with AWS analytics services through Lake Formation, then Lake Formation manages access to your tables and requires that each IAM principal (user or role) be authorized to perform actions on them. Lake Formation uses its own permissions model (Lake Formation permissions) that enables fine-grained access control for Data Catalog resources. 

For more information, see [Overview of Lake Formation permissions](https://docs.aws.amazon.com//lake-formation/latest/dg/lf-permissions-overview.html) in the *AWS Lake Formation Developer Guide*.

There are two main types of permissions in AWS Lake Formation: 

1. Metadata access permissions control the ability to create, read, update, and delete metadata databases and tables in the Data Catalog.

1. Underlying data access permissions control the ability to read and write data to the underlying Amazon S3 locations that the Data Catalog resources point to.

Lake Formation uses a combination of its own permissions model and the IAM permissions model to control access to Data Catalog resources and underlying data:
+ For a request to access Data Catalog resources or underlying data to succeed, the request must pass permission checks by both IAM and Lake Formation.
+ IAM permissions control access to the Lake Formation and AWS Glue APIs and resources, whereas Lake Formation permissions control access to the Data Catalog resources, Amazon S3 locations, and the underlying data.

Lake Formation permissions apply only in the Region in which they were granted, and a principal must be authorized by a data lake administrator or another principal with the necessary permissions in order to be granted Lake Formation permissions. 

**Note**  
If you're the user who performed the table bucket integration, you already have Lake Formation permissions to your tables. If you're the only principal who will access your tables, you can skip this step. You only need to grant Lake Formation permissions on your tables to other IAM principals. This allows other principals to access the table when running queries. For more information, see [Granting Lake Formation permission on a table or database](#grant-lf-table). 

## Granting Lake Formation permission on a table or database
<a name="grant-lf-table"></a>

You can grant a principal Lake Formation permissions on a table or database in a table bucket, either through the Lake Formation console or the AWS CLI.

**Note**  
When you grant Lake Formation permissions on a Data Catalog resource to an external account or directly to an IAM principal in another account, Lake Formation uses the AWS Resource Access Manager (AWS RAM) service to share the resource. If the grantee account is in the same organization as the grantor account, the shared resource is available immediately to the grantee. If the grantee account is not in the same organization, AWS RAM sends an invitation to the grantee account to accept or reject the resource grant. Then, to make the shared resource available, the data lake administrator in the grantee account must use the AWS RAM console or AWS CLI to accept the invitation. For more information about cross-account data sharing, see [Cross-account data sharing in Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/cross-account-permissions.html) in the *AWS Lake Formation Developer Guide*.

------
#### [ Console ]

1. Open the AWS Lake Formation console at [https://console.aws.amazon.com/lakeformation/](https://console.aws.amazon.com/lakeformation/), and sign in as a data lake administrator. For more information about how to create a data lake administrator, see [Create a data lake administrator](https://docs.aws.amazon.com/lake-formation/latest/dg/initial-lf-config.html#create-data-lake-admin) in the *AWS Lake Formation Developer Guide*.

1. In the navigation pane, choose **Data permissions**, and then choose **Grant**. 

1. On the **Grant Permissions** page, under **Principals**, do one of the following:
   + For Amazon Athena or Amazon Redshift, choose **IAM users and roles**, and select the IAM principal you use for queries.
   + For Amazon Data Firehose, choose **IAM users and roles**, and select the service role that you created to stream to tables.
   + For Quick, choose **SAML users and groups**, and enter the Amazon Resource Name (ARN) of your Quick admin user.
   + For AWS Glue Iceberg REST endpoint access, choose **IAM users and roles**, and then select the IAM role that you created for your client. For more information, see [Create an IAM role for your client](s3-tables-integrating-glue-endpoint.md#glue-endpoint-create-iam-role).

1. Under **LF-Tags or catalog resources**, choose **Named Data Catalog resources**.

1. For **Catalogs**, choose the subcatalog that you created when you integrated your table bucket, for example, `account-id:s3tablescatalog/amzn-s3-demo-bucket`.

1. For **Databases**, choose the S3 table bucket namespace that you created.

1. (Optional) For **Tables**, choose the S3 table that you created in your table bucket. 
**Note**  
If you're creating a new table in the Athena query editor, don't select a table. 

1. Do one of the following:
   + If you specified a table in the prior step, for **Table permissions**, choose **Super**.
   + If you didn't specify a table in the prior step, go to **Database permissions**. For cross-account data sharing, you can't choose **Super** to grant the other principal all permissions on your database. Instead, choose more fine-grained permissions, such as **Describe**.

1. Choose **Grant**.

------
#### [ CLI ]

1. Make sure that you're running the following AWS CLI commands as a data lake administrator. For more information, see [Create a data lake administrator](https://docs.aws.amazon.com//lake-formation/latest/dg/initial-lf-config.html#create-data-lake-admin) in the *AWS Lake Formation Developer Guide*.

1. Run the following command to grant an IAM principal Lake Formation permissions on a table in an S3 table bucket so that the principal can access the table. To use this example, replace the *`user input placeholders`* with your own information. 

   ```
   aws lakeformation grant-permissions \
   --region us-east-1 \
   --cli-input-json \
   '{
       "Principal": {
           "DataLakePrincipalIdentifier": "user or role ARN, for example, arn:aws:iam::account-id:role/example-role"
       },
       "Resource": {
           "Table": {
               "CatalogId": "account-id:s3tablescatalog/amzn-s3-demo-bucket",
               "DatabaseName": "S3 table bucket namespace, for example, test_namespace",
               "Name": "S3 table bucket table name, for example test_table"
           }
       },
       "Permissions": [
           "ALL"
       ]
   }'
   ```

------

# VPC connectivity for S3 Tables
<a name="s3-tables-VPC"></a>

All tables in S3 Tables are in the Apache Iceberg format and are made up of two types of S3 objects: data files, which store the data, and metadata files, which track information about the data files at different points in time. All table bucket, namespace, and table operations (for example, `CreateNamespace`, `CreateTable`, and so on) are routed through an S3 Tables endpoint (`s3tables.region.amazonaws.com`), and all object-level operations that read or write the data and metadata files continue to be routed through an S3 service endpoint (`s3.region.amazonaws.com`). 

To access S3 Tables, Amazon S3 supports two types of VPC endpoints by using AWS PrivateLink: gateway endpoints and interface endpoints. A gateway endpoint is a gateway that you specify in your route table to access S3 from your VPC over the AWS network. Interface endpoints extend the functionality of gateway endpoints by using private IP addresses to route requests to Amazon S3 from within your VPC, on premises, or from a VPC in another AWS Region by using VPC peering or AWS Transit Gateway. 

To access S3 Tables from a VPC, we recommend creating two VPC endpoints: an interface endpoint to route bucket-level and table-level operations to S3 Tables, and either a gateway or an interface endpoint to route file (object) level operations to S3. For more information, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the *AWS PrivateLink* User Guide. 

To learn more about using AWS PrivateLink to create and work with endpoints for S3 Tables, see the following topics. To create a VPC interface endpoint, see [Create a VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws) in the *AWS PrivateLink Guide*.

**Topics**
+ [Creating VPC endpoints for S3 Tables](#s3-tables-endpoints)
+ [Accessing table buckets and tables through endpoints using the AWS CLI](#s3-tables-endpoints-cli-sdks)
+ [Configuring a VPC network when using query engines](#s3-tables-query-engine)
+ [Using the dual-stack endpoints to access tables and table buckets](#s3-tables-dual-stack-endpoints)
+ [Using VPC endpoint policy conditions with table bucket policies](#s3-tables-vpc-endpoint-policies)
+ [Restricting access to S3 Tables within the VPC network](#s3-tables-VPC-policy)

## Creating VPC endpoints for S3 Tables
<a name="s3-tables-endpoints"></a>

When you create a VPC endpoint, S3 Tables generates two types of endpoint-specific DNS names: Regional and Zonal. 
+ A Regional DNS name uses the following format: `VPCendpointID.s3tables.AWSregion.vpce.amazonaws.com`. For example, for the VPC endpoint ID `vpce-1a2b3c4d-5e6f`, the generated DNS name is similar to `vpce-1a2b3c4d-5e6f.s3tables.us-east-1.vpce.amazonaws.com`. 
+ A Zonal DNS name uses the following format: `VPCendpointID-AvailabilityZone.s3tables.AWSregion.vpce.amazonaws.com`. For example, for the VPC endpoint ID `vpce-1a2b3c4d-5e6f`, the generated DNS name is similar to `vpce-1a2b3c4d-5e6f-us-east-1a.s3tables.us-east-1.vpce.amazonaws.com`. 

   A Zonal DNS name includes your Availability Zone. You might use Zonal DNS names if your architecture isolates Availability Zones. Endpoint-specific S3 DNS names can be resolved from the S3 public DNS domain. 

You can also use Private DNS options to simplify routing S3 traffic over VPC endpoints and help you take advantage of the lowest-cost network path available to your application. Private DNS maps the public endpoint of S3 Tables, for instance, `s3tables.region.amazonaws.com`, to a private IP in your VPC. You can use private DNS options to route Regional S3 traffic without updating your S3 clients to use the endpoint-specific DNS names of your interface endpoints.

## Accessing table buckets and tables through endpoints using the AWS CLI
<a name="s3-tables-endpoints-cli-sdks"></a>

You can use the AWS Command Line Interface (AWS CLI) to access table buckets and tables through the interface endpoints. With the AWS CLI, `aws s3` commands route traffic through the Amazon S3 endpoint. The `aws s3tables` AWS CLI commands use the Amazon S3 Tables endpoint. 

An example of an `s3tables` VPC endpoint is `vpce-0123456afghjipljw-nmopsqea.s3tables.region.vpce.amazonaws.com`

An `s3tables` VPC endpoint doesn't include a bucket name. You can access the `s3tables` VPC endpoint using the `aws s3tables` AWS CLI commands.

An example of an `s3` VPC endpoint is `amzn-s3-demo-bucket.vpce-0123456afghjipljw-nmopsqea.s3.region.vpce.amazonaws.com`

You can access the `s3` VPC endpoint using the `aws s3` AWS CLI commands.

### Using the AWS CLI
<a name="set-s3tables-vpc-cli"></a>

To access table buckets and tables through interface endpoints using the AWS CLI, use the `--region` and `--endpoint-url` parameters. To perform table bucket and table-level actions, use the S3 Tables endpoint URL. To perform object-level actions, use the Amazon S3 endpoint URL.

In the following examples, replace the *user input placeholders* with your own information.

**Example 1: Use an endpoint URL to list table buckets in your account**

```
aws s3tables list-table-buckets --endpoint-url https://vpce-0123456afghjipljb-aac.s3tables.us-east-1.vpce.amazonaws.com --region us-east-1
```

**Example 2: Use an endpoint URL to list tables in your bucket**

```
aws s3tables list-tables --table-bucket-arn arn:aws:s3tables:us-east-1:123456789301:bucket/amzn-s3-demo-bucket --endpoint-url https://vpce-0123456afghjipljb-aac.s3tables.us-east-1.vpce.amazonaws.com --region us-east-1
```

## Configuring a VPC network when using query engines
<a name="s3-tables-query-engine"></a>

Use the following steps to configure a VPC network when using query engines. 

1. To get started, you can create or update a VPC. For more information, see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html#create-vpc-and-other-resources).

1.  For table and table bucket level operations that route to S3 Tables, create a new interface endpoint. For more information, see [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint-aws).

1. For all object-level operations that route to Amazon S3, create a gateway endpoint or an interface endpoint. For more information about gateway endpoints, see [Create a gateway endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-s3.html#create-gateway-endpoint-s3).

1.  Next, configure your data resources and launch an Amazon EMR cluster. For more information, see [Getting started with Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html).

1. You can then submit a Spark application with additional configuration that points to the DNS name of your interface endpoint. For example, set `spark.sql.catalog.ice_catalog.s3tables.endpoint` to `https://interface-endpoint.s3tables.us-east-1.vpce.amazonaws.com`. For more information, see [Submit work to your Amazon EMR cluster](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-gs.html#emr-getting-started-manage).

## Using the dual-stack endpoints to access tables and table buckets
<a name="s3-tables-dual-stack-endpoints"></a>

S3 Tables supports dual-stack connectivity for AWS PrivateLink. Dual-stack endpoints allow you to access S3 table buckets by using Internet Protocol version 6 (IPv6) in addition to the IPv4 protocol, depending on what your network supports. You can access a table bucket through a dual-stack endpoint that uses the following naming convention: 

```
s3tables.<region>.api.aws
```
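Following this convention, you can derive a Regional dual-stack endpoint host programmatically. The following Python sketch is illustrative only; it simply applies the naming pattern shown above.

```python
def s3tables_dualstack_endpoint(region):
    """Apply the documented dual-stack naming convention for a Region."""
    return f"s3tables.{region}.api.aws"

# For example, the dual-stack endpoint host for us-east-1:
print(s3tables_dualstack_endpoint("us-east-1"))  # s3tables.us-east-1.api.aws
```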

The following are some things you should know before trying to access S3 tables and table buckets over IPv6 in your VPC:
+ The client you use to access tables and your S3 client must both have dual-stack enabled.
+ IPv6 inbound traffic is not enabled by default for VPC security groups. To allow IPv6 access, add a rule to your security group that allows inbound HTTPS (TCP port 443). For more information, see [https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/changing-security-group.html#add-remove-security-group-rules](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/changing-security-group.html#add-remove-security-group-rules) in the *Amazon EC2 User Guide*.
+ If your VPC doesn't have IPv6 CIDRs assigned, you must manually add an IPv6 CIDR block to your VPC. For more information, see [https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) in the *Amazon VPC User Guide*.
+ If you use IAM policies that filter by IP address, you must update them to handle IPv6 addresses. For more information about managing access permissions with IAM, see [Identity and Access Management for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-iam.html).
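To illustrate the last point, the following Python sketch uses the standard `ipaddress` module to show why an IPv4-only IP allow list stops matching once clients connect over IPv6. The CIDR values are hypothetical examples, and this is a local illustration of the matching logic, not how AWS evaluates policies internally.

```python
import ipaddress

# Hypothetical allow list, as it might appear in an aws:SourceIp condition.
ipv4_only = [ipaddress.ip_network("203.0.113.0/24")]
dual_stack = ipv4_only + [ipaddress.ip_network("2001:db8::/32")]

def is_allowed(client_ip, allowed_networks):
    """Return True if the client address falls inside any allowed CIDR block."""
    addr = ipaddress.ip_address(client_ip)
    # Membership checks across mixed IP versions simply return False.
    return any(addr in network for network in allowed_networks)

# An IPv4 client matches the IPv4-only allow list.
print(is_allowed("203.0.113.10", ipv4_only))   # True
# The same caller arriving over IPv6 no longer matches...
print(is_allowed("2001:db8::1", ipv4_only))    # False
# ...until the allow list also includes an IPv6 CIDR block.
print(is_allowed("2001:db8::1", dual_stack))   # True
```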

To create a new VPC endpoint that uses the dual-stack endpoint for S3 Tables, use the following example CLI command:

```
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-id \
  --service-name com.amazonaws.aws-region.s3tables \
  --subnet-ids subnet-1 subnet-2 \
  --vpc-endpoint-type Interface \
  --ip-address-type dualstack \
  --dns-options "DnsRecordIpType=dualstack" \
  --security-group-ids sg-id \
  --region aws-region
```

For more information about creating VPC endpoints, see [https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html) in the *AWS PrivateLink Guide*.

If your network supports IPv6 and you want to update an existing VPC endpoint to use dual-stack addressing, you can use the following CLI command:

```
aws ec2 modify-vpc-endpoint \
  --vpc-endpoint-id vpce-id \
  --ip-address-type dualstack \
  --dns-options "DnsRecordIpType=dualstack" \
  --region aws-region
```

## Using VPC endpoint policy conditions with table bucket policies
<a name="s3-tables-vpc-endpoint-policies"></a>

You can use the `aws:SourceVpce`, `aws:SourceVpc`, and `aws:VpcSourceIp` condition keys in a table bucket policy to restrict access to table bucket resources from a specific VPC or VPC endpoint. For more information about these condition keys, see [AWS global condition context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) in the *IAM User Guide*.

When using the [Iceberg REST APIs](s3-tables-integrating-open-source.md), S3 Tables makes requests to Amazon S3 on your behalf to read and write table metadata. When an Iceberg REST API request is made through a VPC endpoint, the source VPC of the initial request is not preserved in the table metadata requests to Amazon S3. If your table bucket policy restricts access using these condition keys, you must also add a condition for `aws:CalledVia` to allow these table metadata requests from S3 Tables.

The following example table bucket policy denies all S3 Tables actions unless the request comes from the specified VPC endpoint or is a request made by the Amazon S3 Tables Iceberg REST endpoint on your behalf.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "RestrictToVPCe",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3tables:*",
            "Resource": [
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/*"
            ],
            "Condition": {
                "StringNotEqualsIfExists": {
                    "aws:SourceVpce": "vpce-1a2b3c4d",
                    "aws:CalledViaLast": "s3tables.amazonaws.com"
                }
            }
        }
    ]
}
```

------

## Restricting access to S3 Tables within the VPC network
<a name="s3-tables-VPC-policy"></a>

Similar to resource-based policies, you can attach an endpoint policy to your VPC endpoint that controls access to tables and table buckets. In the following example, the interface endpoint policy restricts access to only specific table buckets.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Id": "Policy141511512309",
    "Statement": [
        {
            "Sid": "Access-to-specific-bucket-only",
            "Principal": "*",
            "Action": "s3tables:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket",
                "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/*"
            ]
        }
    ]
}
```

------

# Security considerations and limitations for S3 Tables
<a name="s3-tables-restrictions"></a>

The following list describes which security and access control features and functionality are unsupported or limited for S3 Tables.
+ Public access policies are not supported. Users can't modify bucket or table policies to allow public access.
+ Presigned URLs to access objects associated with a table are not supported.
+ Requests made over HTTP are not supported. Amazon S3 automatically responds with an HTTP redirect for any requests made via HTTP to upgrade the requests to HTTPS.
+ You must use AWS Signature Version 4 when making requests to an access point by using the REST APIs.
+ Requests made over the Internet Protocol version 6 (IPv6) are supported only for object-level actions over table storage endpoints, and not for the table- and bucket-level actions.
+ Table bucket and table access policies are limited to 20 KB in size.
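Because of the 20 KB limit on policy documents, it can help to validate the serialized size of a policy before you attach it. The following Python sketch is an illustrative client-side check, not a feature of any AWS SDK, and the policy contents are hypothetical.

```python
import json

MAX_POLICY_BYTES = 20 * 1024  # documented 20 KB limit for table bucket and table policies

def policy_within_limit(policy):
    """Return True if the serialized policy document fits within the 20 KB limit."""
    return len(json.dumps(policy).encode("utf-8")) <= MAX_POLICY_BYTES

# A small example policy, well under the limit.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetTable",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/myUserName"},
        "Action": "s3tables:GetTable",
        "Resource": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/*"
    }]
}
print(policy_within_limit(policy))  # True
```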

# Logging and monitoring for S3 Tables
<a name="s3-tables-monitoring-overview"></a>

Monitoring is an important part of maintaining the reliability, availability, and performance of Amazon S3 Tables. We recommend collecting monitoring data from your tables in table buckets so that you can more easily debug a multipoint failure if one occurs.

AWS provides several tools for monitoring your S3 Tables resources and responding to potential incidents.

**Amazon CloudWatch Alarms**  
Using Amazon CloudWatch alarms, you can watch a single metric over a time period that you specify. If the metric exceeds a given threshold, a notification is sent to an Amazon SNS topic or AWS Auto Scaling policy. CloudWatch alarms do not invoke actions simply because they are in a particular state. Rather, the state must have changed and been maintained for a specified number of periods. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

**AWS CloudTrail Logs**  
CloudTrail provides a record of actions taken by a user, role, or an AWS service in Amazon S3. Using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, who made the request, when it was made, and additional details. For more information, see [Logging Amazon S3 API calls using AWS CloudTrail](cloudtrail-logging.md).

**Topics**
+ [Logging with AWS CloudTrail for S3 Tables](s3-tables-logging.md)
+ [Monitoring metrics with Amazon CloudWatch](s3-tables-cloudwatch-metrics.md)

# Logging with AWS CloudTrail for S3 Tables
<a name="s3-tables-logging"></a>

Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service. CloudTrail captures all API calls for Amazon S3 as events. Using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, when it was made, and additional details. When a supported event activity occurs in Amazon S3, that activity is recorded in a CloudTrail event. You can use an AWS CloudTrail trail to log management events and data events for S3 Tables. For more information, see [Amazon S3 CloudTrail events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) and [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*.

## CloudTrail management events for S3 Tables
<a name="s3-tables-management-events"></a>

Management events provide information about management operations that are performed on resources in your AWS account. 

By default, CloudTrail logs management events for S3 Tables. The `eventSource` for CloudTrail management events for S3 Tables is `s3tables.amazonaws.com`. The following API actions are tracked by CloudTrail and logged as management events. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateNamespace.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateNamespace.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateTable.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateTable.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateTableBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_CreateTableBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteNamespace.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteNamespace.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTable.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTableBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTablePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_DeleteTablePolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetNamespace.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetNamespace.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTable.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTable.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucketMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucketMaintenanceConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMaintenanceConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMaintenanceJobStatus.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMaintenanceJobStatus.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMetadataLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTableMetadataLocation.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTablePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_GetTablePolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListNamespaces.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListNamespaces.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListTableBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListTableBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListTables.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_ListTables.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTableBucketMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTableBucketMaintenanceConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTableMaintenanceConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTableMaintenanceConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTablePolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_PutTablePolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_RenameTable.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_RenameTable.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_UpdateTableMetadataLocation.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_s3TableBuckets_UpdateTableMetadataLocation.html)

For more information on CloudTrail management events, see [Logging management events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*. 

## CloudTrail management events for S3 Tables maintenance
<a name="s3-tables-maintenance-events"></a>

S3 logs automatic maintenance operations as `TablesMaintenanceEvent` management events in CloudTrail. These events occur during operations like compaction and snapshot expiration. For more information about S3 table maintenance, see [Maintenance for tables](s3-tables-maintenance.md).

### How to identify maintenance events
<a name="identify-maintenance-event"></a>

You can identify S3 Tables maintenance events in CloudTrail logs by these attribute values:
+ `eventSource: s3tables.amazonaws.com`
+ `eventType: AwsServiceEvent`
+ `eventName: TablesMaintenanceEvent`
+ `userAgent: maintenance.s3tables.amazonaws.com`
+ `activityType:`
  + `IcebergCompaction` (for compaction)
  + `IcebergSnapshotManagement` (for snapshot expiration)
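The attribute values above can be turned into a simple log filter. The following Python sketch (an illustrative helper, not an AWS-provided tool) classifies a parsed CloudTrail record by those values; the sample record is trimmed down to only the fields the filter reads.

```python
def is_s3tables_maintenance_event(record):
    """Match the documented attribute values for S3 Tables maintenance events."""
    return (
        record.get("eventSource") == "s3tables.amazonaws.com"
        and record.get("eventType") == "AwsServiceEvent"
        and record.get("eventName") == "TablesMaintenanceEvent"
    )

def maintenance_activity(record):
    """Return the maintenance activity type, or None for other records."""
    if not is_s3tables_maintenance_event(record):
        return None
    return record.get("serviceEventDetails", {}).get("activityType")

# A trimmed-down record shaped like a compaction maintenance event.
record = {
    "eventSource": "s3tables.amazonaws.com",
    "eventType": "AwsServiceEvent",
    "eventName": "TablesMaintenanceEvent",
    "serviceEventDetails": {"activityType": "IcebergCompaction"},
}
print(maintenance_activity(record))  # IcebergCompaction
```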

For an example of a compaction maintenance event, see [Example – CloudTrail log file for a table maintenance management event](s3-tables-log-files.md#example-ct-log-s3tables-3).

## CloudTrail data events for S3 Tables
<a name="s3-tables-data-events"></a>

Data events provide information about the resource operations performed on or in a resource. By default, CloudTrail trails don't log data events, but you can configure trails to log them. 

When you log data events for a trail in CloudTrail, you choose or specify the resource type. S3 Tables has two resource types, `AWS::S3Tables::Table` and `AWS::S3Tables::TableBucket`. 

The following data events are logged to CloudTrail. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)

For more information on CloudTrail data events, see [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*. 

For additional information about CloudTrail events for S3 Tables, see the following topics: 

**Topics**
+ [CloudTrail management events for S3 Tables](#s3-tables-management-events)
+ [CloudTrail management events for S3 Tables maintenance](#s3-tables-maintenance-events)
+ [CloudTrail data events for S3 Tables](#s3-tables-data-events)
+ [AWS CloudTrail data event log file examples for S3 Tables](s3-tables-log-files.md)

# AWS CloudTrail data event log file examples for S3 Tables
<a name="s3-tables-log-files"></a>

An AWS CloudTrail log file includes information about the requested API operation, the date and time of the operation, request parameters, and so on. This topic provides example log files for CloudTrail data events for S3 Tables.

**Topics**
+ [Example – CloudTrail log file for `GetObject` data event](#example-ct-log-s3tables)
+ [Example – CloudTrail log file for a table maintenance management event](#example-ct-log-s3tables-3)
+ [Example – CloudTrail log file for `PutObject` data event](#example-ct-log-s3tables-2)

## Example – CloudTrail log file for `GetObject` data event
<a name="example-ct-log-s3tables"></a>

The following is an example CloudTrail log file that demonstrates the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) API operation. 

```
    {
        "eventVersion": "1.11",
        "userIdentity": {
          "type": "IAMUser",
          "principalId": "123456789012",
          "arn": "arn:aws:iam::111122223333:user/myUserName",
          "accountId": "111122223333",
          "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
          "userName":"myUserName"
        },
        "eventTime": "2024-11-22T17:12:25Z",
        "eventSource": "s3tables.amazonaws.com",
        "eventName": "GetObject",
        "awsRegion": "us-east-1",
        "sourceIPAddress": "192.0.2.0",
        "userAgent": "[aws-cli/2.18.5]",
        "requestParameters": {
            "Host": "tableWarehouseLocation.s3.us-east-1.amazonaws.com",
            "key": "product-info.json"
        },
        "responseElements":  null,
        "additionalEventData": {
            "SignatureVersion": "SigV4",
            "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "bytesTransferredIn": 0,
            "AuthenticationMethod": "AuthHeader",
            "xAmzId2": "q6xhNJYmhg",
            "bytesTransferredOut": 28441
            
          },
          "requestID": "07D681123BD12AED",
          "eventID": "f2b287f3-0df1-1234-a2f4-c4bdfed47657",
          "readOnly": true,
          "resources": [{
              "accountId": "111122223333",
              "type": "AWS::S3Tables::TableBucket",
               "ARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket1"
           }, {
              "accountId": "111122223333",
              "type": "AWS::S3Tables::Table",
              "ARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-bucket/table/111aa1111-22bb-33cc-44dd-5555eee66ffff"

           }],               
           "eventType": "AwsApiCall",
           "managementEvent": false,
           "recipientAccountId": "444455556666",
           "eventCategory": "Data",
           "tlsDetails": {
             "tlsVersion": "TLSv1.2",
             "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
             "clientProvidedHostHeader": "tableWarehouseLocation.s3.us-east-1.amazonaws.com"
          }
    }
```

## Example – CloudTrail log file for a table maintenance management event
<a name="example-ct-log-s3tables-3"></a>

The following is an example CloudTrail log file that demonstrates a maintenance event for table compaction performed by Amazon S3 as part of automatic table maintenance. For more information about events for table maintenance, see [CloudTrail management events for S3 Tables maintenance](s3-tables-logging.md#s3-tables-maintenance-events).

```
{
  "eventVersion": "1.11",
  "userIdentity": {
    "type": "AWSService",
    "invokedBy": "maintenance.s3tables.amazonaws.com"
  },
  "eventTime": "2025-09-18T20:13:14Z",
  "eventSource": "s3tables.amazonaws.com",
  "eventName": "TablesMaintenanceEvent",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "maintenance.s3tables.amazonaws.com",
  "userAgent": "maintenance.s3tables.amazonaws.com",
  "requestParameters": null,
  "responseElements": null,
  "eventID": "b8f96329-ef5c-32b5-94f6-eeed9061ea32",
  "readOnly": false,
  "resources": [
    {
      "accountId": "111122223333",
      "type": "AWS::S3Tables::TableBucket",
      "ARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket"
    },
    {
      "accountId": "111122223333",
      "type": "AWS::S3Tables::Table",
      "ARN": "arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket/table/7ff7750e-23b3-481e-a90a-7d87d423d336"
    }
  ],
  "eventType": "AwsServiceEvent",
  "managementEvent": true,
  "recipientAccountId": "111122223333",
  "sharedEventID": "62a57826-a66e-479b-befa-0e65663ee9e8",
  "serviceEventDetails": {
    "activityType": "icebergCompaction"
  },
  "eventCategory": "Management"
}
```

## Example – CloudTrail log file for `PutObject` data event
<a name="example-ct-log-s3tables-2"></a>

The following is an example CloudTrail log file that demonstrates the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) API operation. 

```
{
        "eventVersion": "1.11",
        "userIdentity": {
          "type": "IAMUser",
          "principalId": "123456789012",
          "arn": "arn:aws:iam::444455556666:user/myUserName",
          "accountId": "444455556666",
          "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
          "userName":"myUserName"
        },
        "eventTime": "2024-11-22T17:12:25Z",
        "eventSource": "s3tables.amazonaws.com",
        "eventName": "PutObject",
        "awsRegion": "us-east-1",
        "sourceIPAddress": "192.0.2.0",
        "userAgent": "[aws-cli/2.18.5]",
        "requestParameters": {
            "Host": "tableWarehouseLocation.s3.us-east-1.amazonaws.com",
            "key": "product-info.json"
        },
        "responseElements":  {
            "x-amz-server-side-encryption": "AES256",
            "x-amz-version-id": "13zAFMdccAjt3MWd6ehxgCCCDRdkAKDw"
        },
        "additionalEventData": {
            "SignatureVersion": "SigV4",
            "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
            "bytesTransferredIn": 28441,
            "AuthenticationMethod": "AuthHeader",
            "xAmzId2": "q6xhCJYmhg",
            "bytesTransferredOut": 0
            
          },
          "requestID": "28d2faaf-1234-4649-997d-EXAMPLE72818",
          "eventID": "694d604a-d190-1234-0dd1-EXAMPLEe20c1",
          "readOnly": false,
          "resources": [{
              "accountId": "444455556666",
              "type": "AWS::S3Tables::TableBucket",
               "ARN": "arn:aws:s3tables:us-east-1:444455556666:bucket/amzn-s3-demo-bucket1"
           }, {
              "accountId": "444455556666",
              "type": "AWS::S3Tables::Table",
              "ARN": "arn:aws:s3tables:us-east-1:444455556666:bucket/amzn-s3-demo-bucket1/table/b89ec883-b1d9-4b37-9cd7-b86f590123f4"
           }],               
           "eventType": "AwsApiCall",
           "managementEvent": false,
           "recipientAccountId": "111122223333",
           "eventCategory": "Data",
           "tlsDetails": {
             "tlsVersion": "TLSv1.2",
             "cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
             "clientProvidedHostHeader": "tableWarehouseLocation.s3.us-east-1.amazonaws.com"
            }
          }
```

# Monitoring metrics with Amazon CloudWatch
<a name="s3-tables-cloudwatch-metrics"></a>

You can use Amazon CloudWatch metrics to track performance, detect anomalies, and monitor the operational health of tables. There are several sets of CloudWatch metrics that you can use with S3 Tables.

**Daily storage metrics for tables and table buckets**  
Monitor the amount of data stored in tables and table buckets, including total size in bytes and number of files. These metrics track total storage bytes per access tier and file counts at the table bucket, table, and namespace level. Storage metrics for S3 Tables are reported once per day and are provided to all customers at no additional cost.

**Table maintenance metrics**  
Monitor automated maintenance operations performed by Amazon S3 on your tables, such as compaction. These metrics track the number of bytes and files processed during maintenance activities. Maintenance metrics for S3 Tables are reported once per day and are provided to all customers at no additional cost.

**Request metrics**  
Monitor S3 Tables requests to quickly identify and act on operational issues. These CloudWatch metrics can be optionally enabled for individual table buckets. Request metrics for S3 Tables are reported once every minute and are billed at the same rate as CloudWatch custom metrics. Request metrics include:  
+ counts of data plane operations (GET, PUT, HEAD, POST)
+ bytes transferred
+ latency measurements
+ error rates

**Note**  
**Best-effort CloudWatch metrics delivery**  
CloudWatch metrics are delivered on a best-effort basis. Most requests for an Amazon S3 object that have request metrics result in a data point being sent to CloudWatch.  
The completeness and timeliness of metrics are not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed. The data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near-real time. It is not meant to be a complete accounting of all requests. It follows from the best-effort nature of this feature that the reports available at the Billing & Cost Management Dashboard might include one or more access requests that do not appear in the bucket metrics.

# Metrics and dimensions
<a name="s3-tables-metrics-dimensions"></a>

The storage metrics and dimensions that S3 Tables sends to Amazon CloudWatch are listed in the following tables.

## Daily storage metrics for table buckets in CloudWatch
<a name="daily-storage-metrics"></a>

The `AWS/S3/Tables` namespace includes the following daily storage metrics, which are available at no additional cost. You can filter these metrics by table bucket, table, or namespace name.


**Daily storage metrics**  

| Metric Name | Description | Units | Statistics | Granularity | 
| --- | --- | --- | --- | --- | 
| Total Bucket Storage | The amount of storage in bytes used by all tables in a table bucket | Bytes | Sum | Daily | 
| Total number of files | The total count of all files stored in a table bucket | Count | Sum | Daily | 
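
These two daily metrics can be combined into a simple derived figure, such as mean file size, which is a useful signal for whether compaction is keeping small files under control. A minimal Python sketch of the arithmetic, using placeholder values rather than live CloudWatch datapoints:

```python
def average_file_size_bytes(total_storage_bytes, total_file_count):
    """Mean file size derived from one day's storage datapoints."""
    if total_file_count == 0:
        return 0.0
    return total_storage_bytes / total_file_count

# Placeholder values standing in for one day's Sum datapoints:
mean = average_file_size_bytes(512 * 1024 * 1024, 4096)
print(f"{mean / 1024:.0f} KiB per file")  # -> 128 KiB per file
```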

## Table maintenance metrics
<a name="table-maintenance-metrics"></a>

The `AWS/S3/Tables` namespace includes the following table maintenance metrics, which are available at no additional cost. You can filter these metrics by table bucket, table, or namespace name.


**Table maintenance metrics**  

| Metric Name | Description | Units | Statistics | Granularity | 
| --- | --- | --- | --- | --- | 
| CompactionBytesProcessed | The number of bytes processed during table compaction operations | Bytes | Sum | Daily | 
| CompactionObjectsCount | The number of objects processed during table compaction operations | Count | Sum | Daily | 

## Request metrics for tables and table buckets in CloudWatch
<a name="request-metrics"></a>

The `AWS/S3/Tables` namespace includes the following request metrics, which are billed at the same rate as CloudWatch custom metrics. You can filter these metrics by table bucket, table, or namespace name.


**Request metrics**  

| Metric Name | Description | Units | Statistics | Granularity | 
| --- | --- | --- | --- | --- | 
| All requests count | The total number of HTTP requests made to a table bucket | Count | Sum | 1-minute | 
| Get requests count | The number of HTTP GET requests made to retrieve objects from tables | Count | Sum | 1-minute | 
| Put requests count | The number of HTTP PUT requests made to add objects to tables | Count | Sum | 1-minute | 
| Head requests count | The number of HTTP HEAD requests made to retrieve metadata from tables | Count | Sum | 1-minute | 
| Post requests count | The number of HTTP POST requests made to tables | Count | Sum | 1-minute | 
| UpdateTableMetadataLocation requests count | The number of requests made to update table metadata locations | Count | Sum | 1-minute | 
| GetTableMetadataLocation requests count | The number of requests made to retrieve table metadata locations | Count | Sum | 1-minute | 
| BytesDownloaded | The number of bytes downloaded for table requests | Bytes | Sum | 1-minute | 
| BytesUploaded | The number of bytes uploaded for table requests | Bytes | Sum | 1-minute | 
| 4xxErrors | The count of HTTP 4xx client error status codes returned | Count | Sum | 1-minute | 
| 5xxErrors | The count of HTTP 5xx server error status codes returned | Count | Sum | 1-minute | 
| FirstByteLatency | The per-request time from the complete request being received to when the response starts being returned | Milliseconds | Sum | 1-minute | 
| TotalRequestLatency | The elapsed per-request time from the first byte received to the last byte sent | Milliseconds | Sum | 1-minute | 
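
Because the latency metrics are published with the Sum statistic, a per-request average has to be derived by dividing by the request count for the same minute, either with CloudWatch metric math or on the client side. A sketch of the client-side arithmetic, with placeholder datapoint values:

```python
def mean_latency_ms(latency_sum_ms, request_count):
    """Per-request mean latency from a Sum latency datapoint and the
    matching request-count datapoint for the same one-minute period."""
    if request_count == 0:
        return 0.0
    return latency_sum_ms / request_count

# Placeholder one-minute datapoints:
print(mean_latency_ms(latency_sum_ms=12500.0, request_count=500))  # -> 25.0
```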

## S3 Tables dimensions in CloudWatch
<a name="s3-tables-dimensions"></a>

The following dimensions are used to filter S3 Tables metrics.


**S3 Tables dimensions**  

| Dimension Name | Description | Example Value | 
| --- | --- | --- | 
| TableBucketName | The name of the Amazon S3 table bucket | my-table-bucket | 
| Namespace | The namespace within the table bucket that contains one or more tables | my-department | 
| TableName | The name of a specific table within a namespace | transactions | 
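
These dimensions compose into progressively narrower scopes: bucket-level, namespace-level, or table-level. A small Python helper sketching the dimension lists you would pass in a CloudWatch query (the exact dimension combinations CloudWatch accepts should be confirmed against the metrics it actually publishes for your bucket):

```python
def tables_metric_dimensions(table_bucket, namespace=None, table=None):
    """Build a CloudWatch-style dimensions list for an S3 Tables metric."""
    dims = [{"Name": "TableBucketName", "Value": table_bucket}]
    if namespace is not None:
        dims.append({"Name": "Namespace", "Value": namespace})
    if table is not None:
        dims.append({"Name": "TableName", "Value": table})
    return dims

# Table-level scope, using the example values from the table above:
print(tables_metric_dimensions("my-table-bucket", "my-department", "transactions"))
```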

# Accessing CloudWatch metrics
<a name="s3-tables-accessing-cloudwatch-metrics"></a>

You can monitor S3 Tables metrics by using the CloudWatch console, the AWS CLI, or the CloudWatch API. This section explains how to access your metrics with each of these methods.

## Using the S3 console
<a name="tables-metrics-using-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. In the buckets list, choose the name of the bucket that contains the tables you want to see metrics for.

1. Choose the **Metrics** tab.

1. Choose **View in CloudWatch** in any metrics pane to open the CloudWatch console and view your available metrics in the `AWS/S3/Tables` namespace.

## Using the AWS CLI
<a name="tables-metrics-using-cli"></a>

To list metrics for S3 Tables using the AWS CLI, use the `list-metrics` command with the `--namespace` parameter set to `AWS/S3/Tables`:

```
aws cloudwatch list-metrics --namespace AWS/S3/Tables
```

To get statistics for a specific S3 Tables metric, use the `get-metric-statistics` command. For example:

```
aws cloudwatch get-metric-statistics \
    --namespace AWS/S3/Tables \
    --metric-name TotalBucketStorage \
    --dimensions Name=TableBucketName,Value=MyTableBucket \
    --start-time 2025-03-01T00:00:00 \
    --end-time 2025-03-02T00:00:00 \
    --period 86400 \
    --statistics Sum
```
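
The same query can be issued through the AWS SDKs. A boto3-flavored sketch that only builds the request parameters locally; the live call is left commented out because it requires AWS credentials, and the bucket name is a placeholder:

```python
from datetime import datetime, timezone

# Parameters mirroring the CLI example above (placeholder bucket name):
params = {
    "Namespace": "AWS/S3/Tables",
    "MetricName": "TotalBucketStorage",
    "Dimensions": [{"Name": "TableBucketName", "Value": "MyTableBucket"}],
    "StartTime": datetime(2025, 3, 1, tzinfo=timezone.utc),
    "EndTime": datetime(2025, 3, 2, tzinfo=timezone.utc),
    "Period": 86400,        # one datapoint per day for daily storage metrics
    "Statistics": ["Sum"],
}

# With credentials configured, the call would be:
# boto3.client("cloudwatch").get_metric_statistics(**params)
print(params["MetricName"])
```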

## Best practices
<a name="best-practices"></a>
+ When retrieving metrics, set the Period value based on the metric's granularity. For daily metrics (like storage metrics), use 86400 seconds (24 hours). For minute-level metrics (like request metrics), use 60 seconds.
+ Use dimensions appropriately to filter metrics to the desired scope (table bucket, namespace, or individual table level).
+ Consider using metric math to create derived metrics that better match your monitoring needs.
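
As one example of a derived metric, the two error-count metrics can be turned into an error-rate percentage against total requests. A Python sketch of the arithmetic; the same expression could be written as a CloudWatch metric math query:

```python
def error_rate_percent(errors_4xx, errors_5xx, all_requests):
    """Share of requests in a minute that returned a 4xx or 5xx status."""
    if all_requests == 0:
        return 0.0
    return 100.0 * (errors_4xx + errors_5xx) / all_requests

# Placeholder one-minute datapoints:
print(error_rate_percent(errors_4xx=3, errors_5xx=1, all_requests=200))  # -> 2.0
```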

## Related resources
<a name="related-resources"></a>
+ [Amazon CloudWatch Concepts](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_concepts.html)
+ [Using Amazon CloudWatch Dashboards](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html)

# Managing CloudWatch metrics
<a name="s3-tables-managing-cloudwatch-metrics"></a>

Storage metrics are enabled by default for all Amazon S3 tables and table buckets. You can enable or disable additional request metrics through the console, the AWS Command Line Interface (AWS CLI), or the AWS SDKs.

## Prerequisites
<a name="prerequisites"></a>
+ Requires the `s3tables:PutTableBucketMetricsConfiguration` IAM permission

**Note**  
S3 Tables Request metrics are billed at the same rate as CloudWatch custom metrics.

## Using the AWS Management Console
<a name="using-console-managing"></a>

**To enable or disable request metrics for a table bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Table buckets**.

1. In the buckets list, choose the name of the table bucket that contains the tables you want to request metrics for.

1. Choose the **Metrics** tab.

1. In the **Request metrics** pane, choose **Edit**.

1. Select **Enabled** or **Disabled**, and then choose **Save changes**.

## Using the AWS CLI
<a name="using-cli-managing"></a>

These examples show how to enable or disable request metrics for table buckets by using the AWS CLI. To use these commands, replace the *user input placeholders* with your own information.

**Example: To enable request metrics for a table bucket**  

```
aws s3tables put-table-bucket-metrics-configuration \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
```

**Example: To disable request metrics for a table bucket**  

```
aws s3tables delete-table-bucket-metrics-configuration \
    --table-bucket-arn arn:aws:s3tables:us-east-1:111122223333:bucket/amzn-s3-demo-table-bucket
```