

# Working with directory buckets
<a name="directory-buckets-overview"></a>

Directory buckets organize data hierarchically into directories as opposed to the flat storage structure of general purpose buckets. There aren't prefix limits for directory buckets, and individual directories can scale horizontally.

You can create up to 100 directory buckets in each of your AWS accounts, with no limit on the number of objects that you can store in a bucket. Your bucket quota is applied to each Region in your AWS account. If your application requires a higher quota, contact AWS Support.

**Important**  
Directory buckets in Availability Zones that have no request activity for a period of at least 90 days transition to an inactive state. While in an inactive state, a directory bucket is temporarily inaccessible for reads and writes. Inactive buckets retain all storage, object metadata, and bucket metadata. Existing storage charges apply to inactive buckets. If you make an access request to an inactive bucket, the bucket transitions to an active state, typically within a few minutes. During this transition period, reads and writes return an HTTP `503 (Service Unavailable)` error code. This doesn't apply to buckets in Local Zones.

There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see [Buckets](Welcome.md#BasicsBucket).

The following topics provide information about directory buckets. For more information about general purpose buckets, see [General purpose buckets overview](UsingBucket.md).
+ [Directory bucket names](#directory-buckets-name)
+ [Directories](#directory-buckets-index)
+ [Key names](#directory-buckets-key-names)
+ [Access management](#directory-buckets-access-management)

## Directory bucket names
<a name="directory-buckets-name"></a>

A directory bucket name consists of a base name that you provide and a suffix that contains the ID of the Zone (Availability Zone or Local Zone) that your bucket is located in. Directory bucket names must use the following format and follow the naming rules for directory buckets:

```
bucket-base-name--zone-id--x-s3
```

For example, the following directory bucket name contains the Availability Zone ID `usw2-az1`:

```
bucket-base-name--usw2-az1--x-s3
```

For more information, see [Directory bucket naming rules](directory-bucket-naming-rules.md).
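As a quick illustration of the format above, a hypothetical helper can check whether a name fits the `bucket-base-name--zone-id--x-s3` shape. The character and length rules sketched here are an assumption; the naming rules page is authoritative.

```python
import re

# Hypothetical helper for the documented format:
#   bucket-base-name--zone-id--x-s3
# The exact character rules are an assumption here; see the
# directory bucket naming rules page for the authoritative list.
DIRECTORY_BUCKET_PATTERN = re.compile(r"^[a-z0-9][a-z0-9-]*--[a-z0-9-]+--x-s3$")

def is_directory_bucket_name(name: str) -> bool:
    """Return True if `name` matches the directory bucket name format."""
    return DIRECTORY_BUCKET_PATTERN.match(name) is not None

print(is_directory_bucket_name("bucket-base-name--usw2-az1--x-s3"))  # True
print(is_directory_bucket_name("my-general-purpose-bucket"))         # False
```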

## Directories
<a name="directory-buckets-index"></a>

Directory buckets organize data hierarchically into directories as opposed to the flat storage structure of general purpose buckets. 

With a hierarchical namespace, the delimiter in the object key is important. The only supported delimiter is a forward slash (`/`). Directories are determined by delimiter boundaries. For example, the object key `dir1/dir2/file1.txt` results in the directories `dir1/` and `dir1/dir2/` being automatically created, and the object `file1.txt` being stored in the `dir1/dir2/` directory.

The directory bucket indexing model returns unsorted results for the `ListObjectsV2` API operation. If you need to limit your results to a subsection of your bucket, you can specify a subdirectory path in the `prefix` parameter, for example, `prefix=dir1/`. 
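The delimiter and prefix behavior described above can be sketched in a few lines of Python. These are illustrative, hypothetical helpers, not S3 API calls.

```python
# Illustrative model only: how a hierarchical namespace derives
# directories from object keys, and how a prefix such as "dir1/"
# restricts a listing. These are hypothetical helpers, not S3 APIs.
def implicit_directories(key: str) -> list[str]:
    """Directories that storing `key` would create, e.g.
    'dir1/dir2/file1.txt' -> ['dir1/', 'dir1/dir2/']."""
    parts = key.split("/")[:-1]  # everything before the final object name
    return ["/".join(parts[: i + 1]) + "/" for i in range(len(parts))]

def list_with_prefix(keys: list[str], prefix: str) -> list[str]:
    """Keys that a prefix-scoped listing would return (order not guaranteed)."""
    return [k for k in keys if k.startswith(prefix)]

keys = ["dir1/dir2/file1.txt", "dir1/file2.txt", "other/file3.txt"]
print(implicit_directories("dir1/dir2/file1.txt"))  # ['dir1/', 'dir1/dir2/']
print(list_with_prefix(keys, "dir1/"))
```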

## Key names
<a name="directory-buckets-key-names"></a>

For directory buckets, subdirectories that are common to multiple object keys are created with the first object key. Additional object keys for the same subdirectory use the previously created subdirectory. This model gives you flexibility in choosing object keys that are best suited to the application, with equal support for sparse and dense directories. 

## Access management
<a name="directory-buckets-access-management"></a>

Directory buckets have all S3 Block Public Access settings enabled by default at the bucket level. S3 Object Ownership is set to bucket owner enforced and access control lists (ACLs) are disabled. These settings can't be modified.

By default, users don't have permissions for directory buckets. To grant access permissions for directory buckets, you can use IAM to create users, groups, or roles and attach permissions to those identities. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). 
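As a sketch of what such a permission grant can look like, the following builds an identity-based policy document that allows session creation on a single directory bucket. The bucket name, account ID, and Region are placeholders, and the exact resource ARN shape is an assumption; see the linked IAM topic for authoritative policy examples.

```python
import json

# Sketch of an identity-based policy granting the s3express:CreateSession
# action on one directory bucket. The bucket name, account ID, and Region
# below are placeholders, not real values.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDirectoryBucketSession",
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": (
                "arn:aws:s3express:us-west-2:111122223333:"
                "bucket/bucket-base-name--usw2-az1--x-s3"
            ),
        }
    ],
}
print(json.dumps(policy, indent=2))
```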

You can also control access to directory buckets through access points. Access points simplify managing data access at scale for shared datasets in Amazon S3. Access points are unique hostnames you create to enforce distinct permissions and network controls for all requests made through an access point. For more information, see [Managing access to shared datasets in directory buckets with access points](access-points-directory-buckets.md).

## Directory buckets quotas
<a name="directory-buckets-quotas"></a>

Quotas, also referred to as limits, are the maximum number of service resources or operations for your AWS account. The following are the quotas for directory buckets. For more information on quotas in Amazon S3, see [Amazon S3 quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html#limits_s3).


| Name | Default | Adjustable | Description | 
| --- | --- | --- | --- | 
| Directory buckets | Each Account: 100 | [Yes](https://console.aws.amazon.com/servicequotas/home/services/s3/quotas/L-775A314D) | The number of Amazon S3 directory buckets that you can create in an account. | 
| Read TPS per directory bucket | Each directory bucket: up to 200,000 read TPS | Yes. To request a quota increase, contact [Support](https://support.console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). | The number of GET/HEAD requests per second per directory bucket. | 
| Write TPS per directory bucket | Each directory bucket: up to 100,000 write TPS | Yes. To request a quota increase, contact [Support](https://support.console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase). | The number of PUT/DELETE requests per second per directory bucket. | 

## Creating and using directory buckets
<a name="directory-buckets-working"></a>

For more information about working with directory buckets, see the following topics.
+ [Use cases for directory buckets](directory-bucket-use-cases.md)
+ [Differences for directory buckets](s3-express-differences.md)
+ [Networking for directory buckets](s3-express-networking.md)
+ [Directory bucket naming rules](directory-bucket-naming-rules.md)
+ [Viewing directory bucket properties using the S3 console](directory-bucket-view.md)
+ [Managing directory bucket policies](directory-bucket-bucket-policy.md)
+ [Emptying a directory bucket](directory-bucket-empty.md)
+ [Deleting a directory bucket](directory-bucket-delete.md)
+ [Listing directory buckets](directory-buckets-objects-ListExamples.md)
+ [Determining whether you can access a directory bucket](directory-buckets-objects-HeadExamples.md)
+ [Working with objects in a directory bucket](directory-buckets-objects.md)
+ [Security for directory buckets](s3-express-security.md)
+ [Managing access to shared datasets in directory buckets with access points](access-points-directory-buckets.md)
+ [Optimizing directory bucket performance](s3-express-optimizing-performance.md)
+ [Developing with directory buckets](s3-express-developing.md)
+ [Using tags with S3 directory buckets](directory-buckets-tagging.md)
+ [Resilience testing in S3 Express One Zone](s3-express-fis.md)

# Use cases for directory buckets
<a name="directory-bucket-use-cases"></a>

Directory buckets support bucket creation in the following bucket location types: Availability Zone or Local Zone. 

For low latency use cases, you can create a directory bucket in a single Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond `PUT` and `GET` latencies. To learn more about creating directory buckets in Availability Zones, see [High performance workloads](directory-bucket-high-performance.md). 

 For data residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see [Data residency workloads](directory-bucket-data-residency.md).

**Topics**
+ [High performance workloads](directory-bucket-high-performance.md)
+ [Data residency workloads](directory-bucket-data-residency.md)

# High performance workloads
<a name="directory-bucket-high-performance"></a>

## S3 Express One Zone
<a name="s3-express-one-zone"></a>

You can use Amazon S3 Express One Zone for high-performance workloads. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone, with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Objects in S3 Express One Zone are stored in directory buckets located in Availability Zones. For more information on directory buckets, see [Directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/directory-buckets-overview.html). 

Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications. S3 Express One Zone is the lowest latency cloud object storage class available today, with data access speeds up to 10x faster and request costs 50 percent lower than S3 Standard. Applications can benefit immediately from requests being completed up to an order of magnitude faster. S3 Express One Zone provides similar performance elasticity to other S3 storage classes and is intended for performance-critical workloads and applications that require consistent single-digit millisecond latency. 

As with other Amazon S3 storage classes, you don't need to plan or provision capacity or throughput requirements in advance. You can scale your storage up or down, based on need, and access your data through the Amazon S3 API.

The Amazon S3 Express One Zone storage class is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed to handle concurrent device failures by quickly detecting and repairing any lost redundancy. If a device fails, S3 Express One Zone automatically shifts requests to new devices within the Availability Zone. This redundancy helps ensure uninterrupted access to your data within an Availability Zone.

S3 Express One Zone is ideal for any application where it's important to minimize the latency required to access an object. Such applications can be human-interactive workflows, like video editing, where creative professionals need responsive access to content from their user interfaces. S3 Express One Zone also benefits analytics and machine learning workloads that have similar responsiveness requirements from their data, especially workloads with lots of smaller accesses or large numbers of random accesses. S3 Express One Zone can be used with other AWS services to support analytics and artificial intelligence and machine learning (AI/ML) workloads, such as Amazon EMR, Amazon SageMaker AI, and Amazon Athena.

![\[Diagram showing how S3 Express One Zone works.\]](https://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3-express-one-zone.png)


For directory buckets that use the S3 Express One Zone storage class, data is stored redundantly across multiple devices within a single Availability Zone, but not across Availability Zones. When you create a directory bucket to use the S3 Express One Zone storage class, we recommend that you specify an AWS Region and an Availability Zone that's local to your Amazon EC2, Amazon Elastic Kubernetes Service (Amazon EKS), or Amazon Elastic Container Service (Amazon ECS) compute instances to optimize performance. 

When using S3 Express One Zone, you can interact with your directory bucket in a virtual private cloud (VPC) by using a gateway VPC endpoint. With a gateway endpoint, you can access S3 Express One Zone directory buckets from your VPC without an internet gateway or NAT device for your VPC, and at no additional cost. 

You can use many of the same Amazon S3 API operations and features with directory buckets that you use with general purpose buckets and other storage classes. These include Mountpoint for Amazon S3, server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), S3 Batch Operations, and S3 Block Public Access. You can access S3 Express One Zone by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, and the Amazon S3 REST API.

For more information about S3 Express One Zone, see the following topics.
+ [Overview](#s3-express-one-zone-overview)
+ [Features of S3 Express One Zone](#s3-express-features)
+ [Related services](#s3-express-related-services)
+ [Next steps](#s3-express-next-steps)

### Overview
<a name="s3-express-one-zone-overview"></a>

To optimize performance and reduce latency, S3 Express One Zone introduces the following new concepts.

#### Availability Zones
<a name="s3-express-overview-az"></a>

An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. When you create a directory bucket, you choose the Availability Zone and AWS Region where your bucket will be located. 

##### Single Availability Zone
<a name="directory-buckets-availability-zone"></a>

When you create a directory bucket, you choose the Availability Zone and AWS Region.

Directory buckets use the S3 Express One Zone storage class, which is built to be used by performance-sensitive applications. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed.

With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](#s3-express-overview-az).

#### Endpoints and gateway VPC endpoints
<a name="s3-express-overview-endpoints"></a>

Bucket-management API operations for directory buckets are available through a Regional endpoint and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`. After you create a directory bucket, you can use Zonal endpoint API operations to upload and manage the objects in your directory bucket. Zonal endpoint API operations are available through a Zonal endpoint. Examples of Zonal endpoint API operations are `PutObject` and `CopyObject`.

You can access S3 Express One Zone from your VPC by using gateway VPC endpoints. After you create a gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to S3 Express One Zone. As with Amazon S3, there is no additional charge for using gateway endpoints. For more information about how to configure gateway VPC endpoints, see [Networking for directory buckets](s3-express-networking.md).
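The Regional versus Zonal split can be pictured as a simple lookup, using only the example operations named above. This is a toy classification for illustration, not an exhaustive mapping of the S3 API.

```python
# Toy lookup showing the split between Regional (bucket management)
# and Zonal (object management) endpoint API operations, using only
# the example operations named in the text. Not exhaustive.
REGIONAL_OPS = {"CreateBucket", "DeleteBucket", "PutBucketPolicy"}
ZONAL_OPS = {"PutObject", "CopyObject", "GetObject"}

def endpoint_type(operation: str) -> str:
    if operation in REGIONAL_OPS:
        return "Regional"
    if operation in ZONAL_OPS:
        return "Zonal"
    return "unknown"

print(endpoint_type("CreateBucket"))  # Regional
print(endpoint_type("PutObject"))     # Zonal
```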

#### Session-based authorization
<a name="s3-express-overview-authorization"></a>

With S3 Express One Zone, you authenticate and authorize requests through a new session-based mechanism that is optimized to provide the lowest latency. You can use `CreateSession` to request temporary credentials that provide low-latency access to your bucket. These temporary credentials are scoped to a specific S3 directory bucket. Session tokens are used only with Zonal (object-level) operations (with the exception of [CopyObject](directory-buckets-objects-copy.md)). For more information, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md). 

The [supported AWS SDKs for S3 Express One Zone](s3-express-SDKs.md#s3-express-getting-started-accessing-sdks) establish and refresh sessions on your behalf. To protect your sessions, temporary security credentials expire after 5 minutes. After you download and install the AWS SDKs and configure the necessary AWS Identity and Access Management (IAM) permissions, you can immediately start using API operations.
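The refresh behavior the SDKs provide can be sketched as a small cache that treats credentials as expired after 5 minutes. `fetch_session_credentials` is a hypothetical stand-in for the `CreateSession` call an SDK would make; this is a simplified model, not SDK code.

```python
import time

# Sketch of SDK-style session refresh: credentials are treated as
# expired after 5 minutes and re-fetched. `fetch_session_credentials`
# is a hypothetical stand-in for the CreateSession call an SDK makes.
SESSION_TTL_SECONDS = 5 * 60

class SessionCache:
    def __init__(self, fetch_session_credentials, clock=time.monotonic):
        self._fetch = fetch_session_credentials
        self._clock = clock
        self._creds = None
        self._fetched_at = None

    def credentials(self):
        now = self._clock()
        if self._creds is None or now - self._fetched_at >= SESSION_TTL_SECONDS:
            self._creds = self._fetch()  # re-establish the session
            self._fetched_at = now
        return self._creds

# Usage with a fake clock to show a refresh after expiry:
fake_now = [0.0]
calls = []
cache = SessionCache(lambda: calls.append(1) or {"token": len(calls)},
                     clock=lambda: fake_now[0])
cache.credentials()   # first fetch
fake_now[0] = 301.0   # past the 5-minute TTL
cache.credentials()   # refreshed
print(len(calls))  # 2
```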

### Features of S3 Express One Zone
<a name="s3-express-features"></a>

The following S3 features are available for S3 Express One Zone. For a complete list of supported API operations and unsupported features, see [Differences for directory buckets](s3-express-differences.md).

#### Access management and security
<a name="s3-express-features-access-management"></a>

You can use the following features to audit and manage access. By default, directory buckets are private and can be accessed only by users who are explicitly granted access. Unlike general purpose buckets, which can set the access control boundary at the bucket, prefix, or object tag level, the access control boundary for directory buckets is set only at the bucket level. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). 
+ [S3 Block Public Access](access-control-block-public-access.md) – All S3 Block Public Access settings are enabled by default at the bucket level. This default setting can't be modified. 
+ [S3 Object Ownership](about-object-ownership.md) (bucket owner enforced by default) – Access control lists (ACLs) are not supported for directory buckets. Directory buckets automatically use the bucket owner enforced setting for S3 Object Ownership. Bucket owner enforced means that ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. This default setting can't be modified. 
+ [AWS Identity and Access Management (IAM)](s3-express-security-iam.md) – IAM helps you securely control access to your directory buckets. You can use IAM to grant access to bucket management (Regional) API operations and object management (Zonal) API operations through the `s3express:CreateSession` action. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). Unlike object-management actions, bucket management actions cannot be cross-account. Only the bucket owner can perform those actions.
+ [Bucket policies](s3-express-security-iam-example-bucket-policies.md) – Use IAM-based policy language to configure resource-based permissions for your directory buckets. You can also use IAM to control access to the `CreateSession` API operation, which allows you to use the Zonal, or object management, API operations. You can grant same-account or cross-account access to Zonal API operations. For more information about S3 Express One Zone permissions and policies, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
+ [IAM Access Analyzer for S3](access-analyzer.md) – Evaluate and monitor your access policies to make sure that the policies provide only the intended access to your S3 resources.

#### Logging and monitoring
<a name="s3-express-features-logging-monitoring"></a>

S3 Express One Zone supports the following S3 logging and monitoring tools, which you can use to monitor and control how your resources are being used:
+ [Amazon CloudWatch metrics](cloudwatch-monitoring.md) – Monitor your AWS resources and applications by using CloudWatch to collect and track metrics. S3 Express One Zone uses the same CloudWatch namespace as other Amazon S3 storage classes (`AWS/S3`) and supports daily storage metrics for directory buckets: `BucketSizeBytes` and `NumberOfObjects`. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
+ [AWS CloudTrail logs](cloudtrail-logging-s3-info.md) – AWS CloudTrail is an AWS service that helps you implement operational and risk auditing, governance, and compliance of your AWS account by recording the actions taken by a user, role, or an AWS service. For S3 Express One Zone, CloudTrail captures Regional endpoint API operations (for example, `CreateBucket` and `PutBucketPolicy`) as management events and Zonal API operations (for example, `GetObject` and `PutObject`) as data events. These events include actions taken in the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDKs, and AWS API operations. For more information, see [Logging with AWS CloudTrail for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html).

**Note**  
Amazon S3 server access logs aren't supported with S3 Express One Zone.

#### Object management
<a name="s3-express-features-object-management"></a>

You can manage your object storage by using the Amazon S3 console, AWS SDKs, and AWS CLI. The following features are available for object management with S3 Express One Zone:
+ [S3 Batch Operations](batch-ops-create-job.md) – Use Batch Operations to perform bulk operations on objects in directory buckets, such as **Copy** and **Invoke AWS Lambda function**. For example, you can use Batch Operations to copy objects between directory buckets and general purpose buckets. With Batch Operations, you can manage billions of objects at scale with a single S3 request by using the AWS SDKs or AWS CLI, or with a few clicks in the Amazon S3 console. 
+ [Import](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job.html) – After you create a directory bucket, you can populate your bucket with objects by using the import feature in the Amazon S3 console. Import is a streamlined method for creating Batch Operations jobs to copy objects from general purpose buckets to directory buckets.

#### AWS SDKs and client libraries
<a name="s3-express-features-client-libraries"></a>

 You can manage your object storage by using the AWS SDKs and client libraries. 
+ [Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md) – Mountpoint for Amazon S3 is an open source file client that delivers high-throughput access, lowering compute costs for data lakes on Amazon S3. Mountpoint for Amazon S3 translates local file system API calls to S3 object API calls like `GET` and `LIST`. It is ideal for read-heavy data lake workloads that process petabytes of data and need the high elastic throughput provided by Amazon S3 to scale up and down across thousands of instances.
+ [Hadoop S3A](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#Introducing_the_Hadoop_S3A_client) – S3A is a recommended Hadoop-compatible interface for accessing data stores in Amazon S3. S3A replaces the S3N Hadoop file system client.
+ [PyTorch on AWS](https://docs.aws.amazon.com//sagemaker/latest/dg/pytorch.html) – PyTorch on AWS is an open source deep-learning framework that makes it easier to develop machine learning models and deploy them to production. 
+ [AWS SDKs](https://aws.amazon.com//developer/tools/) – You can use the AWS SDKs when developing applications with Amazon S3. The AWS SDKs simplify your programming tasks by wrapping the underlying Amazon S3 REST API. For more information about using the AWS SDKs with S3 Express One Zone, see [AWS SDKs](s3-express-SDKs.md#s3-express-getting-started-accessing-sdks).

#### Encryption and data protection
<a name="s3-express-features-encryption"></a>

Objects in S3 Express One Zone are automatically encrypted by server-side encryption with Amazon S3 managed keys (SSE-S3). S3 Express One Zone also supports server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). S3 Express One Zone doesn't support server-side encryption with customer-provided encryption keys (SSE-C), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). For more information, see [Data protection and encryption](s3-express-data-protection.md).

S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during upload or download. You can select one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums are not supported with the S3 Express One Zone storage class. 

For more information, see [S3 additional checksum best practices](s3-express-optimizing-performance.md#s3-express-optimizing-performance-checksums).
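To illustrate the supported algorithms, the following computes two of them over an object payload before upload: CRC32 via the standard library's `zlib`, and SHA-256 via `hashlib` (CRC32C and SHA-1 are also supported; MD5 is not). Base64-encoding the SHA-256 digest, as shown, is how checksums are commonly passed in S3 requests; the exact request parameter names are not shown here.

```python
import base64
import hashlib
import zlib

# Compute two of the supported checksum algorithms over a payload:
# CRC32 (zlib) and SHA-256 (hashlib). CRC32C and SHA-1 are also
# supported by S3 Express One Zone; MD5-based checksums are not.
data = b"example object payload"

crc32_value = zlib.crc32(data) & 0xFFFFFFFF
sha256_b64 = base64.b64encode(hashlib.sha256(data).digest()).decode()

print(f"CRC32:   {crc32_value:08x}")
print(f"SHA-256: {sha256_b64}")
```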

#### AWS Signature Version 4 (SigV4)
<a name="s3-express-features-sigv4"></a>

S3 Express One Zone signs requests by using AWS Signature Version 4 (SigV4), a signing protocol used to authenticate requests to Amazon S3 over HTTPS. For more information, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com//AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.

#### Strong consistency
<a name="s3-express-features-strong-consistency"></a>

S3 Express One Zone provides strong read-after-write consistency for `PUT` and `DELETE` requests of objects in your directory buckets in all AWS Regions. For more information, see [Amazon S3 data consistency model](Welcome.md#ConsistencyModel).

### Related services
<a name="s3-express-related-services"></a>

You can use the following AWS services with the S3 Express One Zone storage class to support your specific low-latency use case.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/index.html) – Amazon EC2 provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 lessens your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that lets you run code without provisioning or managing servers. You configure notification settings on a bucket and grant Amazon S3 permission to invoke a function through the function's resource-based permissions policy. 
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon EKS is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane on AWS. [Kubernetes](https://kubernetes.io/docs/concepts/overview/) is an open source system that automates the management, scaling, and deployment of containerized applications.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) – AWS Key Management Service (AWS KMS) is an AWS managed service that makes it easy for you to create and control the encryption keys that are used to encrypt your data. The AWS KMS keys that you create in AWS KMS are protected by FIPS 140-2 validated hardware security modules (HSM). To use or manage your KMS keys, you interact with AWS KMS.
+ [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) – Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 by using standard [SQL](https://docs.aws.amazon.com/athena/latest/ug/ddl-sql-reference.html). You can also use Athena to interactively run data analytics by using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly.
+ [Amazon SageMaker Training](https://docs.aws.amazon.com//sagemaker/latest/dg/how-it-works-training.html) – Review the options for training models with Amazon SageMaker, including built-in algorithms, custom algorithms, libraries, and models from the AWS Marketplace.
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) – AWS Glue is a serverless data-integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue for analytics, machine learning, and application development. AWS Glue also includes additional productivity and data-ops tooling for authoring, running jobs, and implementing business workflows.
+ [Amazon EMR](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-what-is-emr.html) – Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. 
+ [AWS CloudTrail](https://docs.aws.amazon.com//awscloudtrail/latest/userguide/cloudtrail-user-guide.html) – AWS CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs. 
+ [AWS CloudFormation](https://docs.aws.amazon.com//AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes the AWS resources that you want (such as Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation provisions and configures those resources for you. You don't need to individually create and configure AWS resources or figure out what's dependent on what; CloudFormation handles that.

### Next steps
<a name="s3-express-next-steps"></a>

For more information about working with the S3 Express One Zone storage class and directory buckets, see the following topics:
+ [Tutorial: Getting started with S3 Express One Zone](s3-express-getting-started.md)
+ [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md)
+ [Networking for directory buckets in an Availability Zone](directory-bucket-az-networking.md)
+ [Creating directory buckets in an Availability Zone](directory-bucket-create.md)
+ [Regional and Zonal endpoints for directory buckets in an Availability Zone](endpoint-directory-buckets-AZ.md)
+ [Optimizing S3 Express One Zone performance](s3-express-performance.md)

# Tutorial: Getting started with S3 Express One Zone
<a name="s3-express-getting-started"></a>

Amazon S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone, with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Data in S3 Express One Zone is stored in directory buckets located in Availability Zones. For more information about directory buckets, see [Directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/directory-buckets-overview.html). 

S3 Express One Zone is ideal for any application where it's critical to minimize request latency. Such applications include human-interactive workflows, like video editing, where creative professionals need responsive access to content from their user interfaces. S3 Express One Zone also benefits analytics and machine learning workloads that have similar responsiveness requirements for their data, especially workloads with many smaller accesses or a large number of random accesses. You can use S3 Express One Zone with other AWS services, such as Amazon EMR, Amazon Athena, AWS Glue Data Catalog, and Amazon SageMaker Model Training, to support analytics and artificial intelligence and machine learning (AI/ML) workloads. You can work with the S3 Express One Zone storage class and directory buckets by using the Amazon S3 console, AWS SDKs, AWS Command Line Interface (AWS CLI), and Amazon S3 REST API. For more information, see [What is S3 Express One Zone?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-one-zone.html) and [How is S3 Express One Zone different?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-differences.html). 

![\[This is an S3 Express One Zone workflow diagram.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3-express-one-zone.png)


**Objective**  
In this tutorial, you will learn how to create a gateway endpoint, create and attach an IAM policy, create a directory bucket, and then use the Import action to populate your directory bucket with objects currently stored in your general purpose bucket. Alternatively, you can manually upload objects to your directory bucket. 

**Topics**
+ [Prerequisites](#s3-express-tutorial-prerequisites)
+ [Step 1: Configure a gateway VPC endpoint to reach S3 Express One Zone directory buckets](s3-express-tutorial-endpoints.md)
+ [Step 2: Create an S3 Express One Zone directory bucket](s3-express-tutorial-create-directory-bucket.md)
+ [Step 3: Import data into an S3 Express One Zone directory bucket](s3-express-tutorial-Import.md)
+ [Step 4: Manually upload objects to your S3 Express One Zone directory bucket](s3-express-tutorial-Upload.md)
+ [Step 5: Empty your S3 Express One Zone directory bucket](s3-express-tutoiral-Empty.md)
+ [Step 6: Delete your S3 Express One Zone directory bucket](s3-express-tutoiral-Delete.md)
+ [Next steps](#s3-express-tutoiral-Next)

## Prerequisites
<a name="s3-express-tutorial-prerequisites"></a>

Before you start this tutorial, you must have an AWS account that you can sign in to as an AWS Identity and Access Management (IAM) user with the correct permissions.

**Topics**
+ [Create an AWS account](#s3-express-create-account)
+ [Create an IAM user in your AWS account (console)](#s3-express-tutorial-user)
+ [Create an IAM policy and attach it to an IAM user or role (console)](#s3-express-tutorial-polict)

### Create an AWS account
<a name="s3-express-create-account"></a>

To complete this tutorial, you need an AWS account. When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon S3. You are charged only for the services that you use. For more information about pricing, see [S3 pricing](https://aws.amazon.com/s3/pricing/). 

### Create an IAM user in your AWS account (console)
<a name="s3-express-tutorial-user"></a>

AWS Identity and Access Management (IAM) is an AWS service that helps administrators securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to access objects and use directory buckets in S3 Express One Zone. You can use IAM for no additional charge. 

By default, users don't have permissions to access directory buckets and perform S3 Express One Zone operations. To grant access permissions for directory buckets and S3 Express One Zone operations, you can use IAM to create users or roles and attach permissions to those identities. For more information about how to create an IAM user, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. For more information about how to create an IAM role, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*. 

For simplicity, this tutorial creates and uses an IAM user. After completing this tutorial, remember to [Delete the IAM user](tutorial-s3-object-lambda-uppercase.md#ol-upper-step8-delete-user). For production use, we recommend that you follow the [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. A best practice requires human users to use federation with an identity provider to access AWS with temporary credentials. Another best practice is to require workloads to use temporary credentials with IAM roles to access AWS. To learn more about using AWS IAM Identity Center to create users with temporary credentials, see [Getting started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the *AWS IAM Identity Center User Guide*. 

**Warning**  
IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.

### Create an IAM policy and attach it to an IAM user or role (console)
<a name="s3-express-tutorial-polict"></a>

By default, users don't have permissions for directory buckets and S3 Express One Zone operations. To grant access permissions for directory buckets, you can use IAM to create users, groups, or roles and attach permissions to those identities. Directory buckets are the only resource that you can include in bucket policies or IAM identity policies for S3 Express One Zone access. 

To use Regional endpoint API operations (bucket-level or control plane operations) with S3 Express One Zone, you use the IAM authorization model, which doesn't involve session management. Permissions are granted for actions individually. To use Zonal endpoint API operations (object-level or data plane operations), you use [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) to create and manage sessions that are optimized for low-latency authorization of data requests. To retrieve and use a session token, you must allow the `s3express:CreateSession` action for your directory bucket in an identity-based policy or a bucket policy. If you're accessing S3 Express One Zone in the Amazon S3 console, through the AWS Command Line Interface (AWS CLI), or by using the AWS SDKs, S3 Express One Zone creates a session on your behalf. For more information, see [`CreateSession` authorization](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html) and [AWS Identity and Access Management (IAM) for S3 Express One Zone ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html). 

**To create an IAM policy and attach the policy to an IAM user (or role)**

1. Sign in to the AWS Management Console and open the IAM console.

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. Select **JSON**.

1. Copy the policy below into the **Policy editor** window. Before you can create directory buckets or use S3 Express One Zone, you must grant the necessary permissions to your AWS Identity and Access Management (IAM) role or users. This example policy allows access to the `CreateSession` API operation (for use with other Zonal or object-level API operations) and all of the Regional endpoint (bucket-level) API operations. This policy allows the `CreateSession` API operation for use with all directory buckets, but the Regional endpoint API operations are allowed only for use with the specified directory bucket. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowAccessRegionalEndpointAPIs",
               "Effect": "Allow",
               "Action": [
                   "s3express:DeleteBucket",
                   "s3express:DeleteBucketPolicy",
                   "s3express:CreateBucket",
                   "s3express:PutBucketPolicy",
                   "s3express:GetBucketPolicy",
                   "s3express:ListAllMyDirectoryBuckets"
               ],
               "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3/*"
           },
           {
               "Sid": "AllowCreateSession",
               "Effect": "Allow",
               "Action": "s3express:CreateSession",
               "Resource": "*"
           }
       ]
   }
   ```

------
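If you prefer to create the policy programmatically rather than in the console, you can build the same document in code and pass its JSON serialization to IAM. The following is a minimal Python sketch; the `boto3` call is shown only in a comment so that the example doesn't require AWS credentials, and the account ID, zone ID, and bucket name are placeholders to replace with your own:

```python
import json

# The same statements as the example policy above, built as a Python dict.
# Replace the placeholder account ID, zone ID, and bucket name with your own.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessRegionalEndpointAPIs",
            "Effect": "Allow",
            "Action": [
                "s3express:DeleteBucket",
                "s3express:DeleteBucketPolicy",
                "s3express:CreateBucket",
                "s3express:PutBucketPolicy",
                "s3express:GetBucketPolicy",
                "s3express:ListAllMyDirectoryBuckets",
            ],
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3/*",
        },
        {
            "Sid": "AllowCreateSession",
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": "*",
        },
    ],
}

# Serialize for the IAM API or the console's JSON policy editor.
policy_json = json.dumps(policy, indent=4)

# With credentials configured, you could then create the policy with boto3
# (not executed here):
#   boto3.client("iam").create_policy(
#       PolicyName="S3ExpressTutorialPolicy", PolicyDocument=policy_json)
```

Building the document as a dict and serializing it avoids JSON syntax errors such as trailing commas when you edit the statement list.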

1. Choose **Next**.

1. Name the policy.
**Note**  
Bucket tags are not supported for S3 Express One Zone.

1. Choose **Create policy**.

1.  Now that you've created an IAM policy, you can attach it to an IAM user. In the navigation pane, choose **Policies**.

1. In the **search bar**, enter the name of your policy.

1. From the **Actions** menu, select **Attach**. 

1. Under **Filter by Entity Type**, select **IAM users** or **Roles**. 

1. In the **search field**, enter the name of the user or role that you want to use.

1. Choose **Attach Policy**.


# Step 1: Configure a gateway VPC endpoint to reach S3 Express One Zone directory buckets
<a name="s3-express-tutorial-endpoints"></a>

You can access both Zonal and Regional API operations through gateway virtual private cloud (VPC) endpoints. Gateway endpoints allow traffic to reach S3 Express One Zone without traversing a NAT gateway. We strongly recommend using gateway endpoints because they provide the optimal networking path when working with S3 Express One Zone. You can access S3 Express One Zone directory buckets from your VPC without an internet gateway or NAT device for your VPC, and at no additional cost.

To access S3 Express One Zone, you use Regional and Zonal endpoints that are different from standard Amazon S3 endpoints. Depending on the Amazon S3 API operation that you use, either a Zonal or Regional endpoint is required. For a complete list of supported API operations by endpoint type, see [API operations supported by S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html#s3-express-differences-api-operations). You must access both Zonal and Regional endpoints through a gateway virtual private cloud (VPC) endpoint. 
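As a rough sketch of how these endpoint hostnames are formed, the helpers below construct the Regional (control plane) and Zonal (data plane) hostnames for an Availability Zone. The formats shown are assumptions based on the current endpoint patterns; confirm them in the endpoint reference for your Region before relying on them:

```python
def regional_endpoint(region: str) -> str:
    """Regional (control plane) endpoint used for bucket-level operations."""
    return f"s3express-control.{region}.amazonaws.com"


def zonal_endpoint(az_id: str, region: str) -> str:
    """Zonal (data plane) endpoint used for object-level operations in one AZ."""
    return f"s3express-{az_id}.{region}.amazonaws.com"


print(regional_endpoint("us-west-2"))            # s3express-control.us-west-2.amazonaws.com
print(zonal_endpoint("usw2-az1", "us-west-2"))   # s3express-usw2-az1.us-west-2.amazonaws.com
```

Note that the AWS SDKs and CLI select the correct endpoint for you; constructing hostnames by hand is usually only needed for network allow-lists or debugging.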

 Use the following procedure to create a gateway endpoint that connects to S3 Express One Zone storage class objects and directory buckets.

**To configure a gateway VPC endpoint**

1. Open the Amazon VPC Console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the side navigation pane under **Virtual private cloud**, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Create a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. Under **Services**, search using the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route tables to be used by the endpoint. After the endpoint is created, a route record will be added to each route table that you select in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint.

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6 only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

After creating a gateway endpoint, you can use Regional API endpoints and Zonal API endpoints to access Amazon S3 Express One Zone storage class objects and directory buckets.
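If you manage your network with infrastructure as code, the same gateway endpoint can be declared in an AWS CloudFormation template. The following is a minimal sketch using the `AWS::EC2::VPCEndpoint` resource type; the VPC ID, route table ID, and Region are placeholders to replace with your own values:

```json
{
    "Resources": {
        "S3ExpressGatewayEndpoint": {
            "Type": "AWS::EC2::VPCEndpoint",
            "Properties": {
                "VpcEndpointType": "Gateway",
                "ServiceName": "com.amazonaws.us-west-2.s3express",
                "VpcId": "vpc-1234567890abcdef0",
                "RouteTableIds": ["rtb-1234567890abcdef0"]
            }
        }
    }
}
```

As with the console procedure, you can add a `PolicyDocument` property to the resource to restrict which principals and actions are allowed over the endpoint.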

# Step 2: Create an S3 Express One Zone directory bucket
<a name="s3-express-tutorial-create-directory-bucket"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **Directory buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

   Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the **Bucket type** option disappears, and the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets and the Amazon S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).
After you create the bucket, you can't change the bucket type.
**Note**  
The Availability Zone can't be changed after the bucket is created. 

1. For **Availability Zone**, choose an Availability Zone that is local to your compute services. For a list of Availability Zones that support directory buckets and the S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md). 

   Under **Availability Zone**, select the check box to acknowledge that in the event of an Availability Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Availability Zone, directory buckets don't store data redundantly across Availability Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   The following naming rules apply for directory buckets. Directory bucket names must:
   + Be unique within the chosen Zone (AWS Availability Zone or AWS Local Zone). 
   + Be between 3 (min) and 63 (max) characters long, including the suffix.
   + Consist only of lowercase letters, numbers, and hyphens (-).
   + Begin and end with a letter or number. 
   + Include the following suffix: `--zone-id--x-s3`.
   + Not start with the prefix `xn--`.
   + Not start with the prefix `sthree-`.
   + Not start with the prefix `sthree-configurator`.
   + Not start with the prefix `amzn-s3-demo-`.
   + Not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
   + Not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
   + Not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).

   A suffix is automatically added to the base name that you provide when you create a directory bucket using the console. This suffix includes the Availability Zone ID of the Availability Zone that you chose.

   After you create the bucket, you can't change its name. For more information about naming buckets, see [General purpose bucket naming rules](bucketnamingrules.md). 
**Important**  
Do not include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
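If you generate bucket names programmatically, the naming rules above can be checked before you call `CreateBucket`. The following is a minimal sketch with a hypothetical helper (not part of any AWS SDK) that encodes only the rules listed in this step; it assumes the letter/number and reserved-suffix rules apply to the base name before the zone suffix:

```python
import re

RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap")


def is_valid_directory_bucket_name(name: str, zone_id: str) -> bool:
    """Check a directory bucket name against the rules listed above."""
    suffix = f"--{zone_id}--x-s3"
    if not (3 <= len(name) <= 63):          # length includes the suffix
        return False
    if not name.endswith(suffix):           # must carry the --zone-id--x-s3 suffix
        return False
    base = name[: -len(suffix)]
    # Base name: lowercase letters, digits, hyphens; begins/ends alphanumeric.
    if not re.fullmatch(r"[a-z0-9]([a-z0-9-]*[a-z0-9])?", base):
        return False
    if name.startswith(RESERVED_PREFIXES):  # reserved prefixes
        return False
    if base.endswith(RESERVED_SUFFIXES):    # reserved suffixes on the base name
        return False
    return True


print(is_valid_directory_bucket_name("my-data--usw2-az1--x-s3", "usw2-az1"))  # True
print(is_valid_directory_bucket_name("My-Data--usw2-az1--x-s3", "usw2-az1"))  # False
```

Validating names up front avoids a round trip to the service just to receive an `InvalidBucketName` error.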

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs can't be enabled. 

    **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the bucket. The bucket uses policies exclusively to define access control.

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. To configure default encryption, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed key (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service key (SSE-KMS)**

   For more information about using Amazon S3 server-side encryption to encrypt your data, see [Data protection and encryption](s3-express-data-protection.md).
**Important**  
If you use the SSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.  
When you enable default encryption, you might need to update your bucket policy. For more information, see [Using SSE-KMS encryption for cross-account operations](bucket-encryption.md#bucket-encryption-update-bucket-policy).

1. If you chose **Server-side encryption with Amazon S3 managed keys (SSE-S3)**, under **Bucket Key**, **Enabled** appears. S3 Bucket Keys are always enabled when you configure your directory bucket to use default encryption with SSE-S3. S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

   S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).

1. If you chose **Server-side encryption with AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, specify your AWS Key Management Service key in one of the following ways, or create a new key.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from **Available AWS KMS keys**.

     Only your customer managed keys appear in this list. The AWS managed key (`aws/s3`) isn't supported in directory buckets. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
   + To enter the KMS key ARN or alias, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN or alias in **AWS KMS key ARN**. 
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.  
To identify the customer managed key that you specified for the bucket's SSE-KMS configuration, make a `HeadObject` API operation request and find the value of `x-amz-server-side-encryption-aws-kms-key-id` in the response.
To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that is owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information about cross-account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information about SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md).
When you use an AWS KMS key for server-side encryption in directory buckets, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).

1. Choose **Create bucket**. After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).

# Step 3: Import data into an S3 Express One Zone directory bucket
<a name="s3-express-tutorial-Import"></a>

To complete this step, you must have a general purpose bucket that contains objects and is located in the same AWS Region as your directory bucket.

After you create a directory bucket in Amazon S3, you can populate the new bucket with data by using the Import action in the Amazon S3 console. Import simplifies copying data into directory buckets by letting you choose a prefix or a general purpose bucket to import data from, without having to specify each object to copy individually. Import uses S3 Batch Operations, which copies the objects in the selected prefix or general purpose bucket. You can monitor the progress of the Import copy job through the S3 Batch Operations job details page. 

**To use the Import action**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar at the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the option button next to the name of the bucket that you want to import objects into.

1. Choose **Import**.

1. For **Source**, enter the general purpose bucket (or bucket path including prefix) that contains the objects that you want to import. To choose an existing general purpose bucket from a list, choose **Browse S3**.

1.  In the **Permissions** section, you can choose to have an IAM role auto-generated. Alternatively, you can select an IAM role from a list, or directly enter an IAM role ARN. 
   + To allow Amazon S3 to create a new IAM role on your behalf, choose **Create new IAM role**.
**Note**  
If your source objects are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), don't choose the **Create new IAM role** option. Instead, specify an existing IAM role that has the `kms:Decrypt` permission.  
Amazon S3 will use this permission to decrypt your objects. During the import process, Amazon S3 will then re-encrypt those objects by using server-side encryption with Amazon S3 managed keys (SSE-S3).
   + To choose an existing IAM role from a list, choose **Choose from existing IAM roles**.
   + To specify an existing IAM role by entering its Amazon Resource Name (ARN), choose **Enter IAM role ARN**, then enter the ARN in the corresponding field.

1. Review the information that's displayed in the **Destination** and **Copied object settings** sections. If the information in the **Destination** section is correct, choose **Import** to start the copy job.

   The Amazon S3 console displays the status of your new job on the **Batch Operations** page. For more information about the job, choose the option button next to the job name, and then on the **Actions** menu, choose **View details**. To open the directory bucket that the objects will be imported into, choose **View import destination**.

# Step 4: Manually upload objects to your S3 Express One Zone directory bucket
<a name="s3-express-tutorial-Upload"></a>

You can also manually upload objects to your directory bucket.

**To manually upload objects**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar in the upper-right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the name of the bucket that you want to upload your folders or files to. 
**Note**  
 If you chose the same directory bucket that you used in previous steps of this tutorial, your directory bucket will contain the objects that were uploaded from the Import tool. Notice that these objects are now stored in the S3 Express One Zone storage class. 

1. In the **Objects** list, choose **Upload**.

1. On the **Upload** page, do one of the following: 
   + Drag and drop files and folders to the dotted upload area.
   + Choose **Add files** or **Add folder**, choose the files or folders to upload, and then choose **Open** or **Upload**.

1. Under **Checksums**, choose the **Checksum function** that you want to use. 
**Note**  
 We recommend using CRC32 or CRC32C for the best performance with the S3 Express One Zone storage class. For more information, see [S3 additional checksum best practices](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-optimizing-performance-design-patterns.html#s3-express-optimizing--checksums.html). 

   (Optional) If you're uploading a single object that's less than 16 MB in size, you can also specify a pre-calculated checksum value. When you provide a pre-calculated value, Amazon S3 compares it with the value that it calculates by using the selected checksum function. If the values don't match, the upload won't start. 
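As an illustration, the pre-calculated CRC32 value that S3's additional-checksums feature expects is the base64 encoding of the big-endian 32-bit checksum, which is what the console and the `x-amz-checksum-crc32` header compare against. A minimal Python sketch, assuming that encoding:

```python
import base64
import zlib


def s3_crc32_checksum(data: bytes) -> str:
    """Base64-encoded big-endian CRC32, the format S3 expects for CRC32 checksums."""
    crc = zlib.crc32(data) & 0xFFFFFFFF     # unsigned 32-bit CRC32 of the payload
    return base64.b64encode(crc.to_bytes(4, "big")).decode("ascii")


# Compute the value you would paste as the pre-calculated checksum.
print(s3_crc32_checksum(b"hello world"))
```

If the value you supply doesn't match what S3 computes over the received bytes, the upload is rejected, which protects against corruption in transit.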

1. The options in the **Permissions** and **Properties** sections are automatically set to default settings and can't be modified. Block Public Access is automatically enabled, and S3 Versioning and S3 Object Lock can't be enabled for directory buckets. 

   (Optional) If you want to add metadata in key-value pairs to your objects, expand the **Properties** section, and then in the **Metadata** section, choose **Add metadata**.

1. To upload the listed files and folders, choose **Upload**.

   Amazon S3 uploads your objects and folders. When the upload is finished, you see a success message on the **Upload: status** page.

    You have successfully created a directory bucket and uploaded objects to your bucket. 

# Step 5: Empty your S3 Express One Zone directory bucket
<a name="s3-express-tutoiral-Empty"></a>

You can empty your Amazon S3 directory bucket by using the Amazon S3 console.

**To empty a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the upper right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the option button next to the name of the bucket that you want to empty, and then choose **Empty**.

1. On the **Empty bucket** page, confirm that you want to empty the bucket by entering **permanently delete** in the text field, and then choose **Empty**.

1. Monitor the progress of the bucket emptying process on the **Empty bucket: status** page.

# Step 6: Delete your S3 Express One Zone directory bucket
<a name="s3-express-tutoiral-Delete"></a>

After you empty your directory bucket and abort all in-progress multipart uploads, you can delete your bucket by using the Amazon S3 console.

**To delete a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the upper right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. In the **Directory buckets** list, choose the option button next to the bucket that you want to delete.

1. Choose **Delete**.

1. On the **Delete bucket** page, enter the name of the bucket in the text field to confirm the deletion of your bucket. 
**Important**  
Deleting a directory bucket can't be undone.

1. To delete your directory bucket, choose **Delete bucket**.

## Next steps
<a name="s3-express-tutoiral-Next"></a>

In this tutorial, you have learned how to create a directory bucket and use the S3 Express One Zone storage class. After completing this tutorial, you can explore related AWS services to use with the S3 Express One Zone storage class.

You can use the following AWS services with the S3 Express One Zone storage class to support your specific low-latency use case.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/index.html) – Amazon EC2 provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 lessens your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that lets you run code without provisioning or managing servers. To invoke a Lambda function from Amazon S3, you configure notification settings on a bucket and grant Amazon S3 permission to invoke the function in the function's resource-based permissions policy. 
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon EKS is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane on AWS. [Kubernetes](https://kubernetes.io/docs/concepts/overview/) is an open-source system that automates the management, scaling, and deployment of containerized applications.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
+ [Amazon EMR](https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-express-one-zone.html) – Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. 
+ [Amazon Athena](https://docs.aws.amazon.com//athena/latest/ug/querying-express-one-zone.html) – Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 by using standard [SQL](https://docs.aws.amazon.com/athena/latest/ug/ddl-sql-reference.html). You can also use Athena to interactively run data analytics by using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly.
+ [AWS Glue Data Catalog](https://docs.aws.amazon.com//glue/latest/dg/catalog-and-crawler.html) – AWS Glue is a serverless data-integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue for analytics, machine learning, and application development. AWS Glue Data Catalog is a centralized repository that stores metadata about your organization's data sets. It acts as an index to the location, schema, and run-time metrics of your data sources. 
+ [Amazon SageMaker Model Training](https://docs.aws.amazon.com/sagemaker/latest/dg/model-access-training-data.html) – Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.

For more information about S3 Express One Zone, see [What is S3 Express One Zone?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone.html) and [How is S3 Express One Zone different?](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html).

# S3 Express One Zone Availability Zones and Regions
<a name="s3-express-Endpoints"></a>

An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. To optimize low-latency retrievals, objects in the Amazon S3 Express One Zone storage class are redundantly stored in S3 directory buckets in a single Availability Zone that's local to your compute workload. When you create a directory bucket, you choose the Availability Zone and AWS Region where your bucket will be located. 

AWS maps the physical Availability Zones randomly to the Availability Zone names for each AWS account. This approach helps to distribute resources across the Availability Zones in an AWS Region, instead of resources likely being concentrated in the first Availability Zone for each Region. As a result, the Availability Zone `us-east-1a` for your AWS account might not represent the same physical location as `us-east-1a` for a different AWS account. For more information, see [Regions and Availability Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) in the *Amazon EC2 User Guide*.

To coordinate Availability Zones across accounts, you must use the *AZ ID*, which is a unique and consistent identifier for an Availability Zone. For example, `use1-az1` is an AZ ID for the `us-east-1` Region and it has the same physical location in every AWS account. The following illustration shows how the AZ IDs are the same for every account, even though the Availability Zone names might be mapped differently for each account.

![\[Illustration showing Availability Zone mapping and Regions.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/availability-zone-mapping.png)


With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](directory-bucket-high-performance.md#s3-express-overview-az).

 The following table shows the S3 Express One Zone supported Regions and Availability Zones. 

[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-Endpoints.html)

# Networking for directory buckets in an Availability Zone
<a name="directory-bucket-az-networking"></a>

To reduce the amount of time that your packets spend on the network, configure your virtual private cloud (VPC) with a gateway endpoint. A gateway endpoint lets you access directory buckets in Availability Zones while keeping traffic within the AWS network, at no additional cost.

**Topics**
+ [Endpoints for directory buckets in Availability Zones](#s3-express-endpoints-az)
+ [Configuring VPC gateway endpoints](#s3-express-networking-vpc-gateway)

## Endpoints for directory buckets in Availability Zones
<a name="s3-express-endpoints-az"></a>

The following table shows the Regional and Zonal API endpoints that are available for each Region and Availability Zone.


| Region name | Region | Availability Zone IDs | Regional endpoint | Zonal endpoint | 
| --- | --- | --- | --- | --- | 
|  US East (N. Virginia)  |  `us-east-1`  |  `use1-az4` `use1-az5` `use1-az6`  |  `s3express-control.us-east-1.amazonaws.com` `s3express-control-dualstack.us-east-1.amazonaws.com`  |  `s3express-use1-az4.us-east-1.amazonaws.com` `s3express-use1-az4.dualstack.us-east-1.amazonaws.com` `s3express-use1-az5.us-east-1.amazonaws.com` `s3express-use1-az5.dualstack.us-east-1.amazonaws.com` `s3express-use1-az6.us-east-1.amazonaws.com` `s3express-use1-az6.dualstack.us-east-1.amazonaws.com`  | 
|  US East (Ohio)  |  `us-east-2`  |  `use2-az1` `use2-az2`  |  `s3express-control.us-east-2.amazonaws.com` `s3express-control-dualstack.us-east-2.amazonaws.com`  |  `s3express-use2-az1.us-east-2.amazonaws.com` `s3express-use2-az1.dualstack.us-east-2.amazonaws.com` `s3express-use2-az2.us-east-2.amazonaws.com` `s3express-use2-az2.dualstack.us-east-2.amazonaws.com`  | 
|  US West (Oregon)  |  `us-west-2`  |  `usw2-az1` `usw2-az3` `usw2-az4`  |  `s3express-control.us-west-2.amazonaws.com` `s3express-control-dualstack.us-west-2.amazonaws.com`  |  `s3express-usw2-az1.us-west-2.amazonaws.com` `s3express-usw2-az1.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az3.us-west-2.amazonaws.com` `s3express-usw2-az3.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az4.us-west-2.amazonaws.com` `s3express-usw2-az4.dualstack.us-west-2.amazonaws.com`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  |  `aps1-az1` `aps1-az3`  |  `s3express-control.ap-south-1.amazonaws.com` `s3express-control-dualstack.ap-south-1.amazonaws.com`  |  `s3express-aps1-az1.ap-south-1.amazonaws.com` `s3express-aps1-az1.dualstack.ap-south-1.amazonaws.com` `s3express-aps1-az3.ap-south-1.amazonaws.com` `s3express-aps1-az3.dualstack.ap-south-1.amazonaws.com`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  |  `apne1-az1` `apne1-az4`  |  `s3express-control.ap-northeast-1.amazonaws.com` `s3express-control-dualstack.ap-northeast-1.amazonaws.com`  |  `s3express-apne1-az1.ap-northeast-1.amazonaws.com` `s3express-apne1-az1.dualstack.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.dualstack.ap-northeast-1.amazonaws.com`  | 
|  Europe (Ireland)  |  `eu-west-1`  |  `euw1-az1` `euw1-az3`  |  `s3express-control.eu-west-1.amazonaws.com` `s3express-control-dualstack.eu-west-1.amazonaws.com`  |  `s3express-euw1-az1.eu-west-1.amazonaws.com` `s3express-euw1-az1.dualstack.eu-west-1.amazonaws.com` `s3express-euw1-az3.eu-west-1.amazonaws.com` `s3express-euw1-az3.dualstack.eu-west-1.amazonaws.com`  | 
|  Europe (Stockholm)  |  `eu-north-1`  |  `eun1-az1` `eun1-az2` `eun1-az3`  |  `s3express-control.eu-north-1.amazonaws.com` `s3express-control-dualstack.eu-north-1.amazonaws.com`  |  `s3express-eun1-az1.eu-north-1.amazonaws.com` `s3express-eun1-az1.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az2.eu-north-1.amazonaws.com` `s3express-eun1-az2.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az3.eu-north-1.amazonaws.com` `s3express-eun1-az3.dualstack.eu-north-1.amazonaws.com`  | 

## Configuring VPC gateway endpoints
<a name="s3-express-networking-vpc-gateway"></a>

Use the following procedure to create a gateway endpoint that connects to Amazon S3 Express One Zone storage class objects and directory buckets.

**To configure a gateway VPC endpoint**

1. Open the [Amazon VPC Console](https://console.aws.amazon.com/vpc/). 

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Enter a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. For **Services**, add the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route table in your VPC to be used by the endpoint. After the endpoint is created, a route record will be added to the route table that you select in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint.

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6-only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

After creating a gateway endpoint, you can use Regional API endpoints and Zonal API endpoints to access Amazon S3 Express One Zone storage class objects and directory buckets.

To learn more about gateway VPC endpoints, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the *AWS PrivateLink Guide*. For data residency use cases, we recommend allowing access to your buckets only from your VPC by using gateway VPC endpoints. When access is restricted to a VPC or a VPC endpoint, you can access the objects through the AWS Management Console, the REST API, the AWS CLI, and the AWS SDKs.
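
As a sketch of such a **Custom** endpoint policy, the following example allows S3 Express One Zone actions only on a single directory bucket. This is illustrative only: the account ID and bucket name are placeholders, and you should scope the actions and resources to your own requirements.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOneDirectoryBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3express:*",
      "Resource": [
        "arn:aws:s3express:us-west-2:111122223333:bucket/bucket-base-name--usw2-az1--x-s3"
      ]
    }
  ]
}
```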

**Note**  
To restrict access to a VPC or a VPC endpoint when using the AWS Management Console, you must use AWS Management Console Private Access. For more information, see [AWS Management Console Private Access](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/console-private-access.html) in the *AWS Management Console guide*.

# Creating directory buckets in an Availability Zone
<a name="directory-bucket-create"></a>

To start using the Amazon S3 Express One Zone storage class, you create a directory bucket. The S3 Express One Zone storage class can be used only with directory buckets. The S3 Express One Zone storage class supports low-latency use cases and provides faster data processing within a single Availability Zone. If your application is performance sensitive and benefits from single-digit millisecond `PUT` and `GET` latencies, we recommend creating a directory bucket so that you can use the S3 Express One Zone storage class.

There are two types of Amazon S3 buckets, general purpose buckets and directory buckets. You should choose the bucket type that best fits your application and performance requirements. General purpose buckets are the original S3 bucket type. General purpose buckets are recommended for most use cases and access patterns and allow objects stored across all storage classes, except S3 Express One Zone. For more information about general purpose buckets, see [General purpose buckets overview](UsingBucket.md).

Directory buckets use the S3 Express One Zone storage class, which is designed to be used for workloads or performance-critical applications that require consistent single-digit millisecond latency. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. When you create a directory bucket, you can optionally specify an AWS Region and an Availability Zone that's local to your Amazon EC2, Amazon Elastic Kubernetes Service, or Amazon Elastic Container Service (Amazon ECS) compute instances to optimize performance.

With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](directory-bucket-high-performance.md#s3-express-overview-az).

Directory buckets organize data hierarchically into directories, as opposed to the flat storage structure of general purpose buckets. There aren't prefix limits for directory buckets, and individual directories can scale horizontally. 

For more information about directory buckets, see [Working with directory buckets](directory-buckets-overview.md).

**Directory bucket names**  
Directory bucket names must follow this format and comply with the rules for directory bucket naming:

```
bucket-base-name--zone-id--x-s3
```

For example, the following directory bucket name contains the Availability Zone ID `usw2-az1`:

```
bucket-base-name--usw2-az1--x-s3
```

For more information about directory bucket naming rules, see [Directory bucket naming rules](directory-bucket-naming-rules.md).
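
As an illustration of this format, the following sketch builds a directory bucket name from a base name and a zone ID, and extracts the zone ID back out. The helper names are hypothetical and based only on the format shown above:

```go
package main

import (
	"fmt"
	"strings"
)

const directoryBucketSuffix = "--x-s3"

// directoryBucketName joins a base name and a zone ID into the
// bucket-base-name--zone-id--x-s3 format that directory buckets require.
func directoryBucketName(baseName, zoneID string) string {
	return fmt.Sprintf("%s--%s%s", baseName, zoneID, directoryBucketSuffix)
}

// zoneIDFromBucketName extracts the zone ID embedded in a directory
// bucket name, or returns false if the name doesn't follow the format.
func zoneIDFromBucketName(bucket string) (string, bool) {
	if !strings.HasSuffix(bucket, directoryBucketSuffix) {
		return "", false
	}
	trimmed := strings.TrimSuffix(bucket, directoryBucketSuffix)
	i := strings.LastIndex(trimmed, "--")
	if i < 0 {
		return "", false
	}
	return trimmed[i+2:], true
}

func main() {
	name := directoryBucketName("bucket-base-name", "usw2-az1")
	fmt.Println(name) // bucket-base-name--usw2-az1--x-s3
	az, _ := zoneIDFromBucketName(name)
	fmt.Println(az) // usw2-az1
}
```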

## Using the S3 console
<a name="create-directory-bucket-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **Directory buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

   Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the **Bucket type** option disappears, and the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets and the Amazon S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).
After you create the bucket, you can't change the bucket type.
**Note**  
The Availability Zone can't be changed after the bucket is created. 

1. For **Availability Zone**, choose an Availability Zone local to your compute services. For a list of Availability Zones that support directory buckets and the S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md). 

   Under **Availability Zone**, select the check box to acknowledge that in the event of an Availability Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Availability Zone, directory buckets don't store data redundantly across Availability Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   The following naming rules apply for directory buckets. Bucket names must:
   + Be unique within the chosen Zone (AWS Availability Zone or AWS Local Zone). 
   + Be between 3 (min) and 63 (max) characters long, including the suffix.
   + Consist only of lowercase letters, numbers, and hyphens (-).
   + Begin and end with a letter or number. 
   + Include the following suffix: `--zone-id--x-s3`.
   + Not start with the prefix `xn--`.
   + Not start with the prefix `sthree-`.
   + Not start with the prefix `sthree-configurator`.
   + Not start with the prefix `amzn-s3-demo-`.
   + Not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
   + Not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
   + Not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).

   A suffix is automatically added to the base name that you provide when you create a directory bucket using the console. This suffix includes the Availability Zone ID of the Availability Zone that you chose.

   After you create the bucket, you can't change its name. For more information about naming directory buckets, see [Directory bucket naming rules](directory-bucket-naming-rules.md). 
**Important**  
Do not include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
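
   As an illustration only, the structural rules above (length, character set, reserved prefixes, and the required suffix) can be sketched as a validator. The helper name `isValidDirectoryBucketName` is hypothetical, and this sketch doesn't replace the full naming rules:

   ```go
   package main

   import (
   	"fmt"
   	"regexp"
   	"strings"
   )

   // directoryBucketNamePattern checks the structural rules: lowercase
   // letters, digits, and hyphens only, starting with a letter or digit,
   // and ending in the --zone-id--x-s3 suffix.
   var directoryBucketNamePattern = regexp.MustCompile(`^[a-z0-9][a-z0-9-]*--[a-z0-9-]+--x-s3$`)

   // reservedPrefixes are the prefixes that directory bucket names must not use.
   var reservedPrefixes = []string{"xn--", "sthree-", "amzn-s3-demo-"}

   func isValidDirectoryBucketName(name string) bool {
   	if len(name) < 3 || len(name) > 63 {
   		return false
   	}
   	for _, p := range reservedPrefixes {
   		if strings.HasPrefix(name, p) {
   			return false
   		}
   	}
   	return directoryBucketNamePattern.MatchString(name)
   }

   func main() {
   	fmt.Println(isValidDirectoryBucketName("bucket-base-name--usw2-az1--x-s3")) // true
   	fmt.Println(isValidDirectoryBucketName("xn--bucket--usw2-az1--x-s3"))       // false
   }
   ```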

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs can't be enabled. 

    **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the bucket. The bucket uses policies exclusively to define access control.

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. To configure default encryption, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed key (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service key (SSE-KMS)**

   For more information about using Amazon S3 server-side encryption to encrypt your data, see [Data protection and encryption](s3-express-data-protection.md).
**Important**  
If you use the SSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.  
When you enable default encryption, you might need to update your bucket policy. For more information, see [Using SSE-KMS encryption for cross-account operations](bucket-encryption.md#bucket-encryption-update-bucket-policy).

1. If you chose **Server-side encryption with Amazon S3 managed keys (SSE-S3)**, under **Bucket Key**, **Enabled** appears. S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In these cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

   S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).

1. If you chose **Server-side encryption with AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, specify your AWS KMS key in one of the following ways, or create a new key.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from **Available AWS KMS keys**.

     Only your customer managed keys appear in this list. The AWS managed key (`aws/s3`) isn't supported in directory buckets. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
   + To enter the KMS key ARN or alias, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN or alias in **AWS KMS key ARN**. 
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.  
To identify the customer managed key that you specified for the bucket's SSE-KMS configuration, make a `HeadObject` API operation request and check the value of `x-amz-server-side-encryption-aws-kms-key-id` in the response.
To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that's owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information about cross-account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information about SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md).
When you use an AWS KMS key for server-side encryption in directory buckets, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).

1. Choose **Create bucket**. After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).

## Using the AWS SDKs
<a name="create-directory-bucket-sdks"></a>

------
#### [ SDK for Go ]

This example shows how to create a directory bucket by using the AWS SDK for Go. 

**Example**  

```
package main

import (
    "context"
    "errors"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

var bucket = "..."

func main() {
    cfg, err := config.LoadDefaultConfig(context.Background())
    if err != nil {
        log.Fatal(err)
    }
    runCreateBucket(s3.NewFromConfig(cfg))
}

func runCreateBucket(c *s3.Client) {
    resp, err := c.CreateBucket(context.Background(), &s3.CreateBucketInput{
        Bucket: &bucket,
        CreateBucketConfiguration: &types.CreateBucketConfiguration{
            Location: &types.LocationInfo{
                Name: aws.String("usw2-az1"),
                Type: types.LocationTypeAvailabilityZone,
            },
            Bucket: &types.BucketInfo{
                DataRedundancy: types.DataRedundancySingleAvailabilityZone,
                Type:           types.BucketTypeDirectory,
            },
        },
    })
    var terr *types.BucketAlreadyOwnedByYou
    if errors.As(err, &terr) {
        fmt.Printf("BucketAlreadyOwnedByYou: %s\n", aws.ToString(terr.Message))
        fmt.Printf("noop...\n")
        return
    }
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("bucket created at %s\n", aws.ToString(resp.Location))
}
```

------
#### [ SDK for Java 2.x ]

This example shows how to create a directory bucket by using the AWS SDK for Java 2.x. 

**Example**  

```
public static void createBucket(S3Client s3Client, String bucketName) {

    // Bucket name format is {base-bucket-name}--{az-id}--x-s3.
    // Example: doc-example-bucket--usw2-az1--x-s3 is a valid name for a directory
    // bucket created in Region us-west-2, Availability Zone ID usw2-az1.

    CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
            .location(LocationInfo.builder()
                    .type(LocationType.AVAILABILITY_ZONE)
                    .name("usw2-az1") // must match the Region and Availability Zone in your bucket name
                    .build())
            .bucket(BucketInfo.builder()
                    .type(BucketType.DIRECTORY)
                    .dataRedundancy(DataRedundancy.SINGLE_AVAILABILITY_ZONE)
                    .build())
            .build();

    try {
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .createBucketConfiguration(bucketConfiguration)
                .build();
        CreateBucketResponse response = s3Client.createBucket(bucketRequest);
        System.out.println(response);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ AWS SDK for JavaScript ]

This example shows how to create a directory bucket by using the AWS SDK for JavaScript. 

**Example**  

```
// file.mjs, run with Node.js v16 or higher

import { S3 } from "@aws-sdk/client-s3";

const region = "us-east-1";
const zone = "use1-az4";
const suffix = `${zone}--x-s3`;

const s3 = new S3({ region });

const bucketName = `...--${suffix}`;

const createResponse = await s3.createBucket({
  Bucket: bucketName,
  CreateBucketConfiguration: {
    Location: { Type: "AvailabilityZone", Name: zone },
    Bucket: { Type: "Directory", DataRedundancy: "SingleAvailabilityZone" },
  },
});
```

------
#### [ SDK for .NET ]

This example shows how to create a directory bucket by using the SDK for .NET. 

**Example**  

```
using (var amazonS3Client = new AmazonS3Client())
{
    var putBucketResponse = await amazonS3Client.PutBucketAsync(new PutBucketRequest
    {
        BucketName = "DOC-EXAMPLE-BUCKET--usw2-az1--x-s3",
        PutBucketConfiguration = new PutBucketConfiguration
        {
            BucketInfo = new BucketInfo { DataRedundancy = DataRedundancy.SingleAvailabilityZone, Type = BucketType.Directory },
            Location = new LocationInfo { Name = "usw2-az1", Type = LocationType.AvailabilityZone }
        }
    }).ConfigureAwait(false);
}
```

------
#### [ SDK for PHP ]

This example shows how to create a directory bucket by using the AWS SDK for PHP. 

**Example**  

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'us-east-1',
]);

$result = $s3Client->createBucket([
    'Bucket' => 'doc-example-bucket--use1-az4--x-s3',
    'CreateBucketConfiguration' => [
        'Location' => ['Name' => 'use1-az4', 'Type' => 'AvailabilityZone'],
        'Bucket' => ['DataRedundancy' => 'SingleAvailabilityZone', 'Type' => 'Directory'],
    ],
]);
```

------
#### [ SDK for Python ]

This example shows how to create a directory bucket by using the AWS SDK for Python (Boto3). 

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def create_bucket(s3_client, bucket_name, availability_zone):
    '''
    Create a directory bucket in a specified Availability Zone

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to create; for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param availability_zone: String; Availability Zone ID to create the bucket in, for example, 'usw2-az1'
    :return: True if bucket is created, else False
    '''

    try:
        bucket_config = {
                'Location': {
                    'Type': 'AvailabilityZone',
                    'Name': availability_zone
                },
                'Bucket': {
                    'Type': 'Directory', 
                    'DataRedundancy': 'SingleAvailabilityZone'
                }
            }
        s3_client.create_bucket(
            Bucket=bucket_name,
            CreateBucketConfiguration=bucket_config
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'us-west-2'
    availability_zone = 'usw2-az1'
    s3_client = boto3.client('s3', region_name=region)
    create_bucket(s3_client, bucket_name, availability_zone)
```

------
#### [ SDK for Ruby ]

This example shows how to create a directory bucket by using the AWS SDK for Ruby. 

**Example**  

```
s3 = Aws::S3::Client.new(region:'us-west-2')
s3.create_bucket(
  bucket: "bucket_base_name--az_id--x-s3",
  create_bucket_configuration: {
    location: { name: 'usw2-az1', type: 'AvailabilityZone' },
    bucket: { data_redundancy: 'SingleAvailabilityZone', type: 'Directory' }
  }
)
```

------

## Using the AWS CLI
<a name="create-directory-bucket-cli"></a>

This example shows how to create a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

When you create a directory bucket, you must provide configuration details and use the following naming convention: `bucket-base-name--zone-id--x-s3`

```
aws s3api create-bucket \
--bucket bucket-base-name--zone-id--x-s3 \
--create-bucket-configuration 'Location={Type=AvailabilityZone,Name=usw2-az1},Bucket={DataRedundancy=SingleAvailabilityZone,Type=Directory}' \
--region us-west-2
```

For more information, see [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.
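As a quick illustration of the naming convention above, the following Python sketch (the helper name is hypothetical) assembles a directory bucket name from a base name and an Availability Zone ID:

```python
# Hypothetical helper illustrating the bucket-base-name--zone-id--x-s3
# naming convention for directory buckets.
def directory_bucket_name(base_name: str, zone_id: str) -> str:
    return f"{base_name}--{zone_id}--x-s3"

print(directory_bucket_name("doc-example-bucket", "usw2-az1"))
# doc-example-bucket--usw2-az1--x-s3
```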

# Regional and Zonal endpoints for directory buckets in an Availability Zone
<a name="endpoint-directory-buckets-AZ"></a>

Directory buckets use Regional and Zonal API endpoints. Depending on the Amazon S3 API operation that you use, either a Regional or a Zonal endpoint is required. To access your objects and directory buckets stored in S3 Express One Zone from within a VPC, you can use gateway VPC endpoints. There is no additional charge for using gateway endpoints.

Bucket-level (or control plane) API operations are available through Regional endpoints and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`.

 When you create directory buckets that are stored in S3 Express One Zone, you choose the Availability Zone where your bucket will be located. You can use Zonal endpoint API operations to upload and manage the objects in your directory bucket.

Object-level (or data plane) API operations are available through Zonal endpoints and are referred to as Zonal endpoint API operations. Examples of Zonal endpoint API operations are `CreateSession` and `PutObject`.


| Region name | Region | Availability Zone IDs | Regional endpoint | Zonal endpoint | 
| --- | --- | --- | --- | --- | 
|  US East (N. Virginia)  |  `us-east-1`  |  `use1-az4` `use1-az5` `use1-az6`  |  `s3express-control.us-east-1.amazonaws.com` `s3express-control-dualstack.us-east-1.amazonaws.com`  |  `s3express-use1-az4.us-east-1.amazonaws.com` `s3express-use1-az4.dualstack.us-east-1.amazonaws.com` `s3express-use1-az5.us-east-1.amazonaws.com` `s3express-use1-az5.dualstack.us-east-1.amazonaws.com` `s3express-use1-az6.us-east-1.amazonaws.com` `s3express-use1-az6.dualstack.us-east-1.amazonaws.com`  | 
|  US East (Ohio)  |  `us-east-2`  |  `use2-az1` `use2-az2`  |  `s3express-control.us-east-2.amazonaws.com` `s3express-control-dualstack.us-east-2.amazonaws.com`  |  `s3express-use2-az1.us-east-2.amazonaws.com` `s3express-use2-az1.dualstack.us-east-2.amazonaws.com` `s3express-use2-az2.us-east-2.amazonaws.com` `s3express-use2-az2.dualstack.us-east-2.amazonaws.com`  | 
|  US West (Oregon)  |  `us-west-2`  |  `usw2-az1` `usw2-az3` `usw2-az4`  |  `s3express-control.us-west-2.amazonaws.com` `s3express-control-dualstack.us-west-2.amazonaws.com`  |  `s3express-usw2-az1.us-west-2.amazonaws.com` `s3express-usw2-az1.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az3.us-west-2.amazonaws.com` `s3express-usw2-az3.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az4.us-west-2.amazonaws.com` `s3express-usw2-az4.dualstack.us-west-2.amazonaws.com`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  |  `aps1-az1` `aps1-az3`  |  `s3express-control.ap-south-1.amazonaws.com` `s3express-control-dualstack.ap-south-1.amazonaws.com`  |  `s3express-aps1-az1.ap-south-1.amazonaws.com` `s3express-aps1-az1.dualstack.ap-south-1.amazonaws.com` `s3express-aps1-az3.ap-south-1.amazonaws.com` `s3express-aps1-az3.dualstack.ap-south-1.amazonaws.com`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  |  `apne1-az1` `apne1-az4`  |  `s3express-control.ap-northeast-1.amazonaws.com` `s3express-control-dualstack.ap-northeast-1.amazonaws.com`  |  `s3express-apne1-az1.ap-northeast-1.amazonaws.com` `s3express-apne1-az1.dualstack.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.dualstack.ap-northeast-1.amazonaws.com`  | 
|  Europe (Ireland)  |  `eu-west-1`  |  `euw1-az1` `euw1-az3`  |  `s3express-control.eu-west-1.amazonaws.com` `s3express-control-dualstack.eu-west-1.amazonaws.com`  |  `s3express-euw1-az1.eu-west-1.amazonaws.com` `s3express-euw1-az1.dualstack.eu-west-1.amazonaws.com` `s3express-euw1-az3.eu-west-1.amazonaws.com` `s3express-euw1-az3.dualstack.eu-west-1.amazonaws.com`  | 
|  Europe (Stockholm)  |  `eu-north-1`  |  `eun1-az1` `eun1-az2` `eun1-az3`  |  `s3express-control.eu-north-1.amazonaws.com` `s3express-control-dualstack.eu-north-1.amazonaws.com`  |  `s3express-eun1-az1.eu-north-1.amazonaws.com` `s3express-eun1-az1.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az2.eu-north-1.amazonaws.com` `s3express-eun1-az2.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az3.eu-north-1.amazonaws.com` `s3express-eun1-az3.dualstack.eu-north-1.amazonaws.com`  | 
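The endpoint hostnames follow a predictable pattern: `s3express-control.<region>.amazonaws.com` for Regional endpoints and `s3express-<az-id>.<region>.amazonaws.com` for Zonal endpoints. The following Python helpers (function names are hypothetical) sketch how you might assemble them:

```python
# Hypothetical helpers that assemble endpoint hostnames from a Region
# and an Availability Zone ID, following the patterns shown above.
def regional_endpoint(region: str, dualstack: bool = False) -> str:
    suffix = "-dualstack" if dualstack else ""
    return f"s3express-control{suffix}.{region}.amazonaws.com"

def zonal_endpoint(az_id: str, region: str, dualstack: bool = False) -> str:
    # The dualstack variant inserts ".dualstack" before the Region.
    stack = ".dualstack" if dualstack else ""
    return f"s3express-{az_id}{stack}.{region}.amazonaws.com"

print(zonal_endpoint("usw2-az1", "us-west-2"))
```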

# Optimizing S3 Express One Zone performance
<a name="s3-express-performance"></a>

Amazon S3 Express One Zone is a high-performance, single Availability Zone (AZ) S3 storage class that's purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications. S3 Express One Zone is the first S3 storage class that gives you the option to co-locate high-performance object storage and AWS compute resources, such as Amazon Elastic Compute Cloud, Amazon Elastic Kubernetes Service, and Amazon Elastic Container Service, within a single Availability Zone. Co-locating your storage and compute resources optimizes compute performance and costs and provides increased data-processing speed. 

S3 Express One Zone provides similar performance elasticity to other S3 storage classes, but with consistent single-digit millisecond first-byte read and write request latencies—up to 10x faster than S3 Standard. S3 Express One Zone is designed from the ground up to support burst throughput up to very high aggregate levels. The S3 Express One Zone storage class uses a custom-built architecture to optimize for performance and deliver consistently low request latency by storing data on high-performance hardware. The object protocol for S3 Express One Zone has been enhanced to streamline authentication and metadata overhead. 

To further reduce latency and support up to 2 million reads and up to 200,000 writes per second, S3 Express One Zone stores data in an Amazon S3 directory bucket. By default, each directory bucket supports up to 200,000 reads and up to 100,000 writes per second. If your workload requires higher than the default TPS limits, you can request an increase through [AWS Support](https://support.console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).

The combination of high-performance, purpose-built hardware and software that delivers single-digit millisecond data access speed and directory buckets that scale for large numbers of transactions per second makes S3 Express One Zone the best Amazon S3 storage class for request-intensive operations or performance-critical applications. 

The following topics describe best practice guidelines and design patterns for optimizing performance with applications that use the S3 Express One Zone storage class. 

**Topics**
+ [Best practices to optimize S3 Express One Zone performance](s3-express-optimizing-performance-design-patterns.md)

# Best practices to optimize S3 Express One Zone performance
<a name="s3-express-optimizing-performance-design-patterns"></a>

When building applications that upload and retrieve objects from Amazon S3 Express One Zone, follow our best practice guidelines to optimize performance. To use the S3 Express One Zone storage class, you must create an S3 directory bucket. The S3 Express One Zone storage class isn't supported for use with S3 general purpose buckets.

For performance guidelines for all other Amazon S3 storage classes and S3 general purpose buckets, see [Best practices design patterns: optimizing Amazon S3 performance](optimizing-performance.md).

For optimal performance and scalability with the S3 Express One Zone storage class and directory buckets in high-scale workloads, it's important to understand how directory buckets work differently from general purpose buckets. The following sections describe how directory buckets work and provide best practices to align your applications with them.

## How directory buckets work
<a name="s3-express-how-directory-buckets-work"></a>

The Amazon S3 Express One Zone storage class can support workloads with up to 2,000,000 GET and up to 200,000 PUT transactions per second (TPS) per directory bucket. With S3 Express One Zone, data is stored in S3 directory buckets in Availability Zones. Objects in directory buckets are accessible within a hierarchical namespace, similar to a file system, in contrast to S3 general purpose buckets, which have a flat namespace. Unlike general purpose buckets, directory buckets organize keys hierarchically into directories instead of prefixes. A prefix is a string of characters at the beginning of the object key name. You can use prefixes to organize your data and manage a flat object storage architecture in general purpose buckets. For more information, see [Organizing objects using prefixes](using-prefixes.md).

In directory buckets, objects are organized in a hierarchical namespace using forward slash (`/`) as the only supported delimiter. When you upload an object with a key like `dir1/dir2/file1.txt`, the directories `dir1/` and `dir2/` are automatically created and managed by Amazon S3. Directories are created during `PutObject` or `CreateMultipartUpload` operations and automatically removed when they become empty after `DeleteObject` or `AbortMultipartUpload` operations. There is no upper limit to the number of objects and subdirectories in a directory.
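The automatic directory handling described above can be sketched in Python. The following hypothetical helper derives the directories that would be created for a given object key:

```python
# Hypothetical helper: derives the implicit directories that Amazon S3
# creates automatically for an object key in a directory bucket. Every
# forward slash in the key introduces one directory level.
def implicit_directories(key: str) -> list:
    parts = key.split("/")[:-1]  # drop the object name itself
    return ["/".join(parts[: i + 1]) + "/" for i in range(len(parts))]

print(implicit_directories("dir1/dir2/file1.txt"))
# ['dir1/', 'dir1/dir2/']
```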

The directories that are created when objects are uploaded to directory buckets can scale instantaneously to reduce the chance of HTTP `503 (Slow Down)` errors. This automatic scaling allows your applications to parallelize read and write requests within and across directories as needed. For S3 Express One Zone, individual directories are designed to support the maximum request rate of a directory bucket. There is no need to randomize key prefixes to achieve optimal performance as the system automatically distributes objects for even load distribution, but as a result, keys are not stored lexicographically in directory buckets. This is in contrast to S3 general purpose buckets where keys that are lexicographically closer are more likely to be co-located on the same server. 

For more information about examples of directory bucket operations and directory interactions, see [Directory bucket operation and directory interaction examples](#s3-express-directory-bucket-examples).

## Best practices
<a name="s3-express-best-practices-section"></a>

Follow the best practices to optimize your directory bucket performance and help your workloads scale over time.

### Use directories that contain many entries (objects or subdirectories)
<a name="s3-express-best-practices-use-directories"></a>

Directory buckets deliver high performance by default for all workloads. For even greater performance with certain operations, consolidating more entries (objects or subdirectories) into fewer, denser directories leads to lower latency and higher request rates: 
+ Mutating API operations, such as `PutObject`, `DeleteObject`, `CreateMultipartUpload`, and `AbortMultipartUpload`, achieve optimal performance when implemented with fewer, denser directories that contain thousands of entries, rather than with a large number of smaller directories. 
+ `ListObjectsV2` operations perform better when fewer directories need to be traversed to populate a page of results.

#### Don't use entropy in prefixes
<a name="s3-express-best-practices-dont-use-entropy"></a>

In Amazon S3 operations, entropy refers to the randomness in prefix naming that helps distribute workloads evenly across storage partitions. However, because directory buckets internally manage load distribution, we don't recommend using entropy in prefixes. For directory buckets, entropy can make requests slower because it prevents reuse of directories that have already been created.

A key pattern such as `$HASH/directory/object` can end up creating many intermediate directories. In the following example, all of the `job-1` directories are distinct because their parent directories differ. The directories are sparse, so mutation and list requests are slower. In this example, there are 12 intermediate directories that each have a single entry.

```
s3://my-bucket/0cc175b9c0f1b6a831c399e269772661/job-1/file1
  
s3://my-bucket/92eb5ffee6ae2fec3ad71c777531578f/job-1/file2
  
s3://my-bucket/4a8a08f09d37b73795649038408b5f33/job-1/file3
  
s3://my-bucket/8277e0910d750195b448797616e091ad/job-1/file4
  
s3://my-bucket/e1671797c52e15f763380b45e841ec32/job-1/file5
  
s3://my-bucket/8fa14cdd754f91cc6554c9e71929cce7/job-1/file6
```

Instead, for better performance, you can remove the `$HASH` component so that `job-1` becomes a single directory, improving directory density. In the following example, the single intermediate directory has 6 entries, which can lead to better performance compared with the previous example.

```
s3://my-bucket/job-1/file1
  
s3://my-bucket/job-1/file2
  
s3://my-bucket/job-1/file3
  
s3://my-bucket/job-1/file4
  
s3://my-bucket/job-1/file5
  
s3://my-bucket/job-1/file6
```

This performance advantage occurs because when an object key is initially created and its key name includes a directory, the directory is automatically created for the object. Subsequent object uploads to that same directory do not require the directory to be created, which reduces latency on object uploads to existing directories.
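To see the density difference concretely, the following Python sketch (the helper name is hypothetical) counts the intermediate directories implied by each of the two key layouts shown above:

```python
# Hypothetical helper: counts the unique intermediate directories implied
# by a set of object keys (every '/' introduces a directory level).
def count_directories(keys):
    dirs = set()
    for key in keys:
        parts = key.split("/")[:-1]  # drop the object name itself
        for i in range(len(parts)):
            dirs.add("/".join(parts[: i + 1]))
    return len(dirs)

hashes = [
    "0cc175b9c0f1b6a831c399e269772661", "92eb5ffee6ae2fec3ad71c777531578f",
    "4a8a08f09d37b73795649038408b5f33", "8277e0910d750195b448797616e091ad",
    "e1671797c52e15f763380b45e841ec32", "8fa14cdd754f91cc6554c9e71929cce7",
]
hashed_keys = [f"{h}/job-1/file{i}" for i, h in enumerate(hashes, start=1)]
flat_keys = [f"job-1/file{i}" for i in range(1, 7)]

print(count_directories(hashed_keys))  # 12 sparse directories
print(count_directories(flat_keys))    # 1 dense directory
```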

#### Use a separator other than the delimiter `/` to separate parts of your key if you don't need the ability to logically group objects during `ListObjectsV2` calls
<a name="s3-express-best-practices-use-separator"></a>

Because the `/` delimiter is treated specially for directory buckets, use it with intention. Although directory buckets don't lexicographically order objects, objects within a directory are still grouped together in `ListObjectsV2` output. If you don't need this functionality, you can replace `/` with another character as a separator so that intermediate directories aren't created.

For example, assume that the following keys are in a `YYYY/MM/DD/HH/` prefix pattern:

```
s3://my-bucket/2024/04/00/01/file1
  
s3://my-bucket/2024/04/00/02/file2
  
s3://my-bucket/2024/04/00/03/file3
  
s3://my-bucket/2024/04/01/01/file4
  
s3://my-bucket/2024/04/01/02/file5
  
s3://my-bucket/2024/04/01/03/file6
```

If you don't need to group objects by hour or day in `ListObjectsV2` results, but you do need to group objects by month, the following key pattern of `YYYY/MM/DD-HH-` leads to significantly fewer directories and better performance for the `ListObjectsV2` operation.

```
s3://my-bucket/2024/04/00-01-file1
  
s3://my-bucket/2024/04/00-01-file2
  
s3://my-bucket/2024/04/00-01-file3
  
s3://my-bucket/2024/04/01-02-file4
  
s3://my-bucket/2024/04/01-02-file5
  
s3://my-bucket/2024/04/01-02-file6
```

#### Use delimited list operations where possible
<a name="s3-express-best-practices-use-delimited-list"></a>

A `ListObjectsV2` request without a `delimiter` performs depth-first recursive traversal of all directories. A `ListObjectsV2` request with a `delimiter` retrieves only entries in the directory specified by the `prefix` parameter, reducing request latency and increasing aggregate keys per second. For directory buckets, use delimited list operations where possible. Delimited lists result in directories being visited fewer times, which leads to more keys per second and lower request latency.

For example, for the following directories and objects in your directory bucket:

```
s3://my-bucket/2024/04/12-01-file1
  
s3://my-bucket/2024/04/12-01-file2
  
...
  
s3://my-bucket/2024/05/12-01-file1
  
s3://my-bucket/2024/05/12-01-file2
  
...
  
s3://my-bucket/2024/06/12-01-file1
  
s3://my-bucket/2024/06/12-01-file2
  
...
  
s3://my-bucket/2024/07/12-01-file1
  
s3://my-bucket/2024/07/12-01-file2
  
...
```

For better `ListObjectsV2` performance, use a delimited list to list your subdirectories and objects, if your application's logic allows for it. For example, you can run the following command for a delimited list operation:

```
aws s3api list-objects-v2 --bucket my-bucket --prefix '2024/' --delimiter '/'
```

The output is the list of subdirectories.

```
{
    "CommonPrefixes": [
        {
            "Prefix": "2024/04/"
        },
        {
            "Prefix": "2024/05/"
        },
        {
            "Prefix": "2024/06/"
        },
        {
            "Prefix": "2024/07/"
        }
    ]
}
```

To list each subdirectory with better performance, you can run a command like the following example:

Command:

```
aws s3api list-objects-v2 --bucket my-bucket --prefix '2024/04/' --delimiter '/'
```

Output:

```
{
    "Contents": [
        {
            "Key": "2024/04/12-01-file1"
        },
        {
            "Key": "2024/04/12-01-file2"
        }
    ]
}
```

### Co-locate S3 Express One Zone storage with your compute resources
<a name="s3-express-best-practices-colocate"></a>

With S3 Express One Zone, each directory bucket is located in a single Availability Zone that you select when you create the bucket. You can get started by creating a new directory bucket in an Availability Zone local to your compute workloads or resources. You can then immediately begin very low-latency reads and writes. Directory buckets are a type of S3 bucket that lets you choose the Availability Zone within an AWS Region, reducing latency between compute and storage.

If you access directory buckets across Availability Zones, you may experience slightly increased latency. To optimize performance, we recommend that you access a directory bucket from Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, and Amazon Elastic Compute Cloud instances that are located in the same Availability Zone when possible.

### Use concurrent connections to achieve high throughput with objects over 1 MB
<a name="s3-express-best-practices-concurrent-connections"></a>

You can achieve the best performance by issuing multiple concurrent requests to directory buckets to spread your requests over separate connections and maximize the accessible bandwidth. Like general purpose buckets, S3 Express One Zone doesn't have any limits on the number of connections made to your directory bucket. Individual directories can scale performance horizontally and automatically when they receive large numbers of concurrent writes.

Individual TCP connections to directory buckets have a fixed upper bound on the number of bytes that can be uploaded or downloaded per second. When objects get larger, request times become dominated by byte streaming rather than transaction processing. By using multiple connections to parallelize the upload or download of larger objects, you can reduce end-to-end latency. If you use the AWS SDK for Java 2.x, consider using the [S3 Transfer Manager](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html), which takes advantage of performance improvements such as the [multipart upload API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html) and byte-range fetches to access data in parallel.
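The range-splitting behind parallel byte-range fetches can be sketched as follows. This hypothetical Python helper computes the HTTP `Range` header values for downloading one object in parts over separate connections (the 8 MB part size is an illustrative choice):

```python
# Hypothetical helper: yields HTTP Range header values that together
# cover an object, so each part can be fetched on its own connection.
def byte_ranges(object_size: int, part_size: int):
    for start in range(0, object_size, part_size):
        end = min(start + part_size, object_size) - 1
        yield f"bytes={start}-{end}"

# A 25 MB object fetched in 8 MB parts yields four ranges.
for r in byte_ranges(25_000_000, 8_000_000):
    print(r)
```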

### Use Gateway VPC endpoints
<a name="s3-express-best-practices-vpc-endpoints"></a>

Gateway endpoints provide a direct connection from your VPC to directory buckets, without requiring an internet gateway or a NAT device for your VPC. To reduce the amount of time your packets spend on the network, you should configure your VPC with a gateway VPC endpoint for directory buckets. For more information, see [Networking for directory buckets](s3-express-networking.md).

### Use session authentication and reuse session tokens while they're valid
<a name="s3-express-best-practices-session-auth"></a>

Directory buckets provide a session token authentication mechanism to reduce latency on performance-sensitive API operations. You can make a single call to `CreateSession` to get a session token that is then valid for all requests for the following 5 minutes. To get the lowest latency in your API calls, make sure that you acquire a session token and reuse it for the entire lifetime of that token before refreshing it.

If you use the AWS SDKs, the SDKs handle session token refreshes automatically to avoid service interruptions when a session expires. We recommend that you use the AWS SDKs to initiate and manage requests to the `CreateSession` API operation.

For more information about `CreateSession`, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md).
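As a rough illustration of token reuse, the following Python sketch caches a token and refreshes it shortly before its validity window ends. The class, the 30-second refresh margin, and the injected clock are all assumptions for illustration; in practice, the AWS SDKs manage this for you:

```python
import time

class SessionTokenCache:
    """Hypothetical sketch: caches a CreateSession-style token and
    refreshes it shortly before the 5-minute validity window ends."""

    def __init__(self, fetch, ttl_seconds=300, refresh_margin=30, clock=time.monotonic):
        self._fetch = fetch            # callable that obtains a fresh token
        self._ttl = ttl_seconds
        self._margin = refresh_margin  # refresh this many seconds early
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def token(self):
        # Reuse the cached token until it nears expiry, then refresh.
        if self._token is None or self._clock() >= self._expires_at - self._margin:
            self._token = self._fetch()
            self._expires_at = self._clock() + self._ttl
        return self._token
```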

### Use a CRT-based client
<a name="s3-express-best-practices-crt"></a>

The AWS Common Runtime (CRT) is a set of modular, performant, and efficient libraries written in C and meant to act as the base of the AWS SDKs. The CRT provides improved throughput, enhanced connection management, and faster startup times. The CRT is available through all the AWS SDKs except Go.

For more information about how to configure the CRT for the SDK that you use, see [AWS Common Runtime (CRT) libraries](https://docs.aws.amazon.com/sdkref/latest/guide/common-runtime.html), [Accelerate Amazon S3 throughput with the AWS Common Runtime](https://aws.amazon.com/blogs/storage/improving-amazon-s3-throughput-for-the-aws-cli-and-boto3-with-the-aws-common-runtime/), [Introducing CRT-based S3 client and the S3 Transfer Manager in the AWS SDK for Java 2.x](https://aws.amazon.com/blogs/developer/introducing-crt-based-s3-client-and-the-s3-transfer-manager-in-the-aws-sdk-for-java-2-x/), [Using S3CrtClient for Amazon S3 operations](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/examples-s3-crt.html), and [Configure AWS CRT-based HTTP clients](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-crt.html).

### Use the latest version of the AWS SDKs
<a name="s3-express-best-practices-latest-sdks"></a>

The AWS SDKs provide built-in support for many of the recommended guidelines for optimizing Amazon S3 performance. The SDKs offer a simpler API for taking advantage of Amazon S3 from within an application and are regularly updated to follow the latest best practices. For example, the SDKs automatically retry requests after HTTP `503` errors and handle slow connection responses.

If you use the AWS SDK for Java 2.x, consider using the [S3 Transfer Manager](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html), which automatically scales connections horizontally to achieve thousands of requests per second by using byte-range requests where appropriate. Byte-range requests can improve performance because you can use concurrent connections to S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput than with a single whole-object request. It's important to use the latest version of the AWS SDKs to get the latest performance optimization features.

## Performance troubleshooting
<a name="s3-express-performance-troubleshooting"></a>

### Are you setting retry requests for latency-sensitive applications?
<a name="s3-express-performance-troubleshooting-retry"></a>

S3 Express One Zone is purpose-built to deliver consistent high performance without additional tuning. However, setting aggressive timeout values and retries can further help drive consistent latency and performance. The AWS SDKs have configurable timeout and retry values that you can tune to the tolerances of your specific application.
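For example, with Python (Boto3) you can tune these values through `botocore.config.Config`. The specific timeout and retry values below are illustrative only; tune them to your application's latency tolerance:

```python
import boto3
from botocore.config import Config

# Illustrative values only -- tune timeouts and retry counts to the
# tolerances of your specific application.
config = Config(
    connect_timeout=1,  # seconds
    read_timeout=2,
    retries={"max_attempts": 5, "mode": "adaptive"},
)
s3_client = boto3.client("s3", region_name="us-west-2", config=config)
```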

### Are you using AWS Common Runtime (CRT) libraries and optimal Amazon EC2 instance types?
<a name="s3-express-performance-troubleshooting-crt-ec2"></a>

Applications that perform a large number of read and write operations likely need more memory or computing capacity than applications that don't. When launching your Amazon Elastic Compute Cloud (Amazon EC2) instances for your performance-demanding workload, choose instance types that have the amount of these resources that your application needs. S3 Express One Zone high-performance storage is ideally paired with larger and newer instance types with larger amounts of system memory and more powerful CPUs and GPUs that can take advantage of higher-performance storage. We also recommend using the latest versions of the CRT-enabled AWS SDKs, which can better accelerate read and write requests in parallel.

### Are you using AWS SDKs for session-based authentication?
<a name="s3-express-performance-troubleshooting-session-auth"></a>

With Amazon S3, you can also optimize performance when you're using HTTP REST API requests by following the same best practices that are part of the AWS SDKs. However, with the session-based authorization and authentication mechanism that's used by S3 Express One Zone, we strongly recommend that you use the AWS SDKs to manage `CreateSession` and its managed session token. The AWS SDKs automatically create and refresh tokens on your behalf by using the `CreateSession` API operation. Using `CreateSession` saves on per-request round-trip latency to AWS Identity and Access Management (IAM) to authorize each request.
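
The token lifecycle that the SDKs manage can be pictured as a small cache that refreshes shortly before expiry. The following is an illustrative sketch, not the SDKs' actual implementation; `create_session` here stands in for a call to the `CreateSession` API operation and is assumed to return a token and its lifetime in seconds.

```python
import time

class SessionTokenCache:
    """Illustrative cache that refreshes a session token shortly before it
    expires, similar in spirit to how the AWS SDKs manage CreateSession
    tokens on your behalf."""

    def __init__(self, create_session, refresh_margin=60, clock=time.monotonic):
        self._create_session = create_session  # returns (token, lifetime_seconds)
        self._margin = refresh_margin          # refresh this early, in seconds
        self._clock = clock
        self._token = None
        self._expires_at = 0.0

    def token(self):
        now = self._clock()
        if self._token is None or now >= self._expires_at - self._margin:
            self._token, lifetime = self._create_session()
            self._expires_at = now + lifetime
        return self._token
```

Because the cached token is reused until it nears expiry, each request avoids a round trip to IAM for authorization, which is the latency saving described above.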

## Directory bucket operation and directory interaction examples
<a name="s3-express-directory-bucket-examples"></a>

The following three examples show how directory buckets work.

### Example 1: How S3 `PutObject` requests to a directory bucket interact with directories
<a name="s3-express-directory-bucket-examples-put"></a>

1. When the operation `PUT(<bucket>, "documents/reports/quarterly.txt")` is executed in an empty bucket, the directory `documents/` within the root of the bucket is created, the directory `reports/` within `documents/` is created, and the object `quarterly.txt` within `reports/` is created. For this operation, two directories were created in addition to the object.  
![\[Diagram showing directory structure after PUT operation for documents/reports/quarterly.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-bar-baz.png)

1. Then, when another operation `PUT(<bucket>, "documents/logs/application.txt")` is executed, the directory `documents/` already exists, the directory `logs/` within `documents/` doesn't exist and is created, and the object `application.txt` within `logs/` is created. For this operation, only one directory was created in addition to the object.  
![\[Diagram showing directory structure after PUT operation for documents/logs/application.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-baz-quux.png)

1. Lastly, when a `PUT(<bucket>, "documents/readme.txt")` operation is executed, the directory `documents/` within the root already exists and the object `readme.txt` is created. For this operation, no directories are created.  
![\[Diagram showing directory structure after PUT operation for documents/readme.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-bar.png)
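
The directory-creation behavior in the steps above can be modeled with a small helper. This is a simulation of the semantics only, not an S3 API call; `directories` stands in for the bucket's existing set of directory paths.

```python
def put_object(directories, key):
    """Simulate PUT in a directory bucket: create any missing directories on
    the object's path. Returns the directories that were created, in order."""
    created = []
    path = ""
    for part in key.split("/")[:-1]:  # all path components except the object name
        path += part + "/"
        if path not in directories:
            directories.add(path)
            created.append(path)
    return created
```

Replaying Example 1 with this helper yields two created directories for the first `PUT`, one for the second, and none for the third.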

### Example 2: How S3 `ListObjectsV2` requests to a directory bucket interact with directories
<a name="s3-express-directory-bucket-examples-list"></a>

S3 `ListObjectsV2` requests that don't specify a delimiter traverse a bucket in a depth-first manner. The outputs are returned in a consistent order. However, although this order remains the same between requests, it is not lexicographic. For the bucket and directories created in the previous example:

1. When a `LIST(<bucket>)` is executed, the directory `documents/` is entered and the traversing begins.

1. The subdirectory `logs/` is entered and the traversing begins.

1. The object `application.txt` is found within `logs/`.

1. No more entries exist within `logs/`. The List operation exits from `logs/` and enters `documents/` again.

1. The `documents/` directory continues being traversed and the object `readme.txt` is found.

1. The `documents/` directory continues being traversed and the subdirectory `reports/` is entered and the traversing begins.

1. The object `quarterly.txt` is found within `reports/`.

1. No more entries exist within `reports/`. The List exits from `reports/` and enters `documents/` again.

1. No more entries exist within `documents/` and the List returns.

In this example, `logs/` is ordered before `readme.txt` and `readme.txt` is ordered before `reports/`.
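
The traversal in the steps above amounts to a depth-first walk of the directory tree. The following sketch models a bucket as a nested dictionary (insertion order stands in for the service's internal, non-lexicographic ordering) and reproduces the order from the example:

```python
def list_objects(tree, prefix=""):
    """Depth-first walk of a nested-dict bucket model. Directory names end
    with '/', and objects map to None. Returns full object keys in
    traversal order."""
    keys = []
    for name, child in tree.items():
        if child is None:
            keys.append(prefix + name)         # found an object
        else:
            keys.extend(list_objects(child, prefix + name))  # enter directory
    return keys
```

Running this on the example bucket returns `application.txt` first, then `readme.txt`, then `quarterly.txt`, matching steps 1 through 9.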

### Example 3: How S3 `DeleteObject` requests to a directory bucket interact with directories
<a name="s3-express-directory-bucket-examples-delete"></a>

Continuing with the bucket from the previous examples, the following diagram shows the directory structure before any delete operations.  
![\[Diagram showing initial directory structure before DELETE operations\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete-before.png)

1. In that same bucket, when the operation `DELETE(<bucket>, "documents/reports/quarterly.txt")` is executed, the object `quarterly.txt` is deleted, leaving the directory `reports/` empty and causing it to be deleted immediately. The `documents/` directory is not empty because it has both the directory `logs/` and the object `readme.txt` within it, so it's not deleted. For this operation, only one object and one directory were deleted.  
![\[Diagram showing directory structure after DELETE operation for documents/reports/quarterly.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete1.png)

1. When the operation `DELETE(<bucket>, "documents/readme.txt")` is executed, the object `readme.txt` is deleted. `documents/` is still not empty because it contains the directory `logs/`, so it's not deleted. For this operation, no directories are deleted and only the object is deleted.  
![\[Diagram showing directory structure after DELETE operation for documents/readme.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete2.png)

1. Lastly, when the operation `DELETE(<bucket>, "documents/logs/application.txt")` is executed, `application.txt` is deleted, leaving `logs/` empty and causing it to be deleted immediately. This in turn leaves `documents/` empty, causing it to also be deleted immediately. For this operation, two directories and one object are deleted. The bucket is now empty.  
![\[Diagram showing directory structure after DELETE operation for documents/logs/application.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete3.png)
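
The cascading cleanup of empty directories in these steps can also be simulated. As before, this models the semantics only; the two sets stand in for bucket state.

```python
def delete_object(objects, directories, key):
    """Simulate DELETE in a directory bucket: remove the object, then prune
    any directories on its path that are left empty. Returns the directories
    that were deleted, in the order they were removed."""
    objects.discard(key)
    removed = []
    path = key.rsplit("/", 1)[0] + "/" if "/" in key else ""
    while path:
        # A directory survives if any object or subdirectory still lives under it.
        still_used = any(o.startswith(path) for o in objects) or any(
            d != path and d.startswith(path) for d in directories
        )
        if still_used:
            break
        directories.discard(path)
        removed.append(path)
        parent = path[:-1]
        path = parent.rsplit("/", 1)[0] + "/" if "/" in parent else ""
    return removed
```

Replaying Example 3: the first delete also removes `reports/`, the second removes no directories, and the third removes `logs/` and then `documents/`, leaving the bucket empty.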

# Data residency workloads
<a name="directory-bucket-data-residency"></a>

AWS Dedicated Local Zones (Dedicated Local Zones) are a type of AWS infrastructure that is fully managed by AWS, built for exclusive use by you or your community, and placed in a location or data center that you specify to help you comply with regulatory requirements. Dedicated Local Zones are a type of AWS Local Zones (Local Zones) offering. For more information, see [AWS Dedicated Local Zones](https://aws.amazon.com/dedicatedlocalzones/).

In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support data residency and isolation use cases. Directory buckets in Dedicated Local Zones can support the S3 Express One Zone and S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage classes. Directory buckets are not currently available in other [AWS Local Zones locations](https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations/). 

You can use the AWS Management Console, REST API, AWS Command Line Interface (AWS CLI), and AWS SDKs in Dedicated Local Zones. 



For more information about working with the directory buckets in Local Zones, see the following topics:

**Topics**
+ [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md)
+ [Enable accounts for Local Zones](opt-in-directory-bucket-lz.md)
+ [Private connectivity from your VPC](connectivity-lz-directory-buckets.md)
+ [Creating a directory bucket in a Local Zone](create-directory-bucket-LZ.md)
+ [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md)

# Concepts for directory buckets in Local Zones
<a name="s3-lzs-for-directory-buckets"></a>

Before creating a directory bucket in a Local Zone, you must have the Local Zone ID where you want to create a bucket. You can find all Local Zone information by using the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation. This API operation lists information about Local Zones, including their Local Zone IDs, parent Region names, network border groups, and opt-in status. After you have your Local Zone ID and you are opted in, you can create a directory bucket in the Local Zone. A directory bucket name consists of a base name that you provide and a suffix that contains the Zone ID of your bucket location, followed by `--x-s3`. 
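
As a quick illustration, assembling the full bucket name from a base name and Zone ID is simple string composition. The helper below is illustrative, and the Zone ID in the comment is a sample value; use the ID returned by `DescribeAvailabilityZones` for your zone.

```python
def directory_bucket_name(base_name, zone_id):
    """Compose a directory bucket name: base name, Zone ID, and the
    --x-s3 suffix."""
    return f"{base_name}--{zone_id}--x-s3"

# For example, with a sample Local Zone ID:
# directory_bucket_name("doc-example-bucket", "usw2-lax1-az1")
# -> "doc-example-bucket--usw2-lax1-az1--x-s3"
```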

A Local Zone is connected to its **parent Region** through Amazon's redundant, very high-bandwidth private network. This gives applications running in the Local Zone fast, secure, and seamless access to the rest of the AWS services in the parent Region. The **parent Zone ID** is the ID of the zone that handles the Local Zone control plane operations. A **network border group** is a unique group from which AWS advertises public IP addresses. For more information about Local Zones, parent Regions, and parent Zone IDs, see [AWS Local Zones concepts](https://docs.aws.amazon.com/local-zones/latest/ug/concepts-local-zones.html) in the *AWS Local Zones User Guide*.

All directory buckets use the `s3express` namespace, which is separate from the `s3` namespace for general purpose buckets. For directory buckets, requests are routed to either a **Regional endpoint** or a **Zonal endpoint**. The routing is handled automatically for you if you use the AWS Management Console, AWS CLI, or AWS SDKs. 

Most bucket-level API operations (such as `CreateBucket` and `DeleteBucket`) are routed to Regional endpoints and are referred to as Regional endpoint API operations. Regional endpoints take the format `s3express-control.ParentRegionCode.amazonaws.com`. All object-level API operations (such as `PutObject`) and two bucket-level API operations (`CreateSession` and `HeadBucket`) are routed to Zonal endpoints and are referred to as Zonal endpoint API operations. Zonal endpoints take the format `s3express-LocalZoneID.ParentRegionCode.amazonaws.com`. For a complete list of API operations by endpoint type, see [Directory bucket API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html#s3-express-differences-api-operations).

To access directory buckets in Local Zones from your virtual private cloud (VPC), you can use gateway VPC endpoints. There is no additional charge for using gateway endpoints. To configure gateway VPC endpoints to access directory buckets and objects in Local Zones, see [Private connectivity from your VPC](connectivity-lz-directory-buckets.md). 

# Enable accounts for Local Zones
<a name="opt-in-directory-bucket-lz"></a>

The following topic describes how accounts are enabled for Dedicated Local Zones.

For all the services in AWS Dedicated Local Zones (Dedicated Local Zones), including Amazon S3, your administrator must enable your AWS account before you can create or access any resource in the Dedicated Local Zone. You can use the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation to confirm your account ID access to a Local Zone.

To further protect your data in Amazon S3, by default, you only have access to the S3 resources that you create. Buckets in Local Zones have all S3 Block Public Access settings enabled by default and S3 Object Ownership is set to bucket owner enforced. These settings can't be modified. Optionally, to restrict access to only within the Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` in your IAM policies. For more information, see [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md).

# Private connectivity from your VPC
<a name="connectivity-lz-directory-buckets"></a>

To reduce the amount of time that your packets spend on the network, configure your virtual private cloud (VPC) with a gateway endpoint so that you can access directory buckets in Local Zones while keeping traffic within the AWS network, at no additional cost.

**To configure a gateway VPC endpoint**

1. Open the [Amazon VPC Console](https://console.aws.amazon.com/vpc/). 

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Enter a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. For **Services**, add the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route table in your VPC to be used by the endpoint. After the endpoint is created, a route record will be added to the route table that you select in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint. 

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6 only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

To learn more about gateway VPC endpoints, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the *AWS PrivateLink Guide*. For the data residency use cases, we recommend enabling access to your buckets only from your VPC using gateway VPC endpoints. When access is restricted to a VPC or a VPC endpoint, you can access the objects through the AWS Management Console, the REST API, AWS CLI, and AWS SDKs.

**Note**  
To restrict access to a VPC or a VPC endpoint using the AWS Management Console, you must use the AWS Management Console Private Access. For more information, see [AWS Management Console Private Access](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/console-private-access.html) in the *AWS Management Console guide*.

# Creating a directory bucket in a Local Zone
<a name="create-directory-bucket-LZ"></a>

In Dedicated Local Zones, you can create directory buckets to store and retrieve objects in a specific data perimeter to help meet your data residency and data isolation use cases. S3 directory buckets are the only supported bucket type in Local Zones, and contain a bucket location type called `LocalZone`. A directory bucket name consists of a base name that you provide and a suffix that contains the Zone ID of your bucket location and `--x-s3`. You can obtain a list of Local Zone IDs by using the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation. For more information, see [Directory bucket naming rules](directory-bucket-naming-rules.md).

**Note**  
For all the services in AWS Dedicated Local Zones (Dedicated Local Zones), including S3, your administrator must enable your AWS account before you can create or access any resource in the Dedicated Local Zone. For more information, see [Enable accounts for Local Zones](opt-in-directory-bucket-lz.md).
For the data residency requirements, we recommend enabling access to your buckets only from gateway VPC endpoints. For more information, see [Private connectivity from your VPC](connectivity-lz-directory-buckets.md).
To restrict access to only within the Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` in your IAM policies. For more information, see [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md).

The following describes ways to create a directory bucket in a single Local Zone with the AWS Management Console, AWS CLI, and AWS SDKs. 

## Using the S3 console
<a name="create-directory-bucket-lz-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the parent Region of a Local Zone in which you want to create a directory bucket. 
**Note**  
For more information about the parent Regions, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md).

1. In the left navigation pane, choose **Buckets**.

1. Choose **Create bucket**.

   The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

1.  Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets, see [Regional and Zonal endpoints for directory buckets](s3-express-Regions-and-Zones.md).
After you create the bucket, you can't change the bucket type.

1. Under **Bucket location**, choose a Local Zone that you want to use. 
**Note**  
The Local Zone can't be changed after the bucket is created. 

1. Under **Bucket location**, select the checkbox to acknowledge that in the event of a Local Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Local Zone, directory buckets don't store data redundantly across Local Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   For more information about the naming rules for directory buckets, see [Directory bucket naming rules](directory-bucket-naming-rules.md). A suffix is automatically added to the base name that you provide when you create a directory bucket by using the console. This suffix includes the Zone ID of the Local Zone that you chose.

   After you create the bucket, you can't change its name. 
**Important**  
Don't include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs are disabled and can't be enabled.

   With the **Bucket owner enforced** setting enabled, the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the S3 bucket. The bucket uses policies exclusively to define access control. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. Under **Default encryption**, directory buckets use **Server-side encryption with Amazon S3 managed keys (SSE-S3)** to encrypt data by default. You also have the option to encrypt data in directory buckets with **Server-side encryption with AWS Key Management Service keys (SSE-KMS)**.

1. Choose **Create bucket**.

   After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).

## Using the AWS CLI
<a name="create-directory-bucket-lz-cli"></a>

This example shows how to create a directory bucket in a Local Zone by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create a directory bucket, you must provide configuration details and use the following naming convention: `bucket-base-name--zone-id--x-s3`.

```
aws s3api create-bucket \
--bucket bucket-base-name--zone-id--x-s3 \
--create-bucket-configuration 'Location={Type=LocalZone,Name=local-zone-id},Bucket={DataRedundancy=SingleLocalZone,Type=Directory}' \
--region parent-region-code
```

For more information about Local Zone ID and Parent Region Code, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md). For more information about the AWS CLI command, see [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.

## Using the AWS SDKs
<a name="create-directory-bucket-lz-sdks"></a>

------
#### [ SDK for Go ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Go. 

**Example**  

```
package main

import (
    "context"
    "errors"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)

var bucket = "bucket-base-name--zone-id--x-s3" // The full directory bucket name

func runCreateBucket(c *s3.Client) {
    resp, err := c.CreateBucket(context.Background(), &s3.CreateBucketInput{
        Bucket: &bucket,
        CreateBucketConfiguration: &types.CreateBucketConfiguration{
            Location: &types.LocationInfo{
                Name: aws.String("local-zone-id"),
                Type: types.LocationTypeLocalZone,
            },
            Bucket: &types.BucketInfo{
                DataRedundancy: types.DataRedundancySingleLocalZone,
                Type:           types.BucketTypeDirectory,
            },
        },
    })
    var terr *types.BucketAlreadyOwnedByYou
    if errors.As(err, &terr) {
        fmt.Printf("BucketAlreadyOwnedByYou: %s\n", aws.ToString(terr.Message))
        fmt.Printf("noop...\n") // No operation performed, just printing a message
        return
    }
    if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("bucket created at %s\n", aws.ToString(resp.Location))
}
```

------
#### [ SDK for Java 2.x ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Java 2.x. 

**Example**  

```
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketInfo;
import software.amazon.awssdk.services.s3.model.BucketType;
import software.amazon.awssdk.services.s3.model.CreateBucketConfiguration;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
import software.amazon.awssdk.services.s3.model.CreateBucketResponse;
import software.amazon.awssdk.services.s3.model.DataRedundancy;
import software.amazon.awssdk.services.s3.model.LocationInfo;
import software.amazon.awssdk.services.s3.model.LocationType;
import software.amazon.awssdk.services.s3.model.S3Exception;

public static void createBucket(S3Client s3Client, String bucketName) {

    // Bucket name format is {base-bucket-name}--{local-zone-id}--x-s3.
    // For example, doc-example-bucket--local-zone-id--x-s3 is a valid name
    // for a directory bucket created in a Local Zone.

    CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
            .location(LocationInfo.builder()
                    .type(LocationType.LOCAL_ZONE)
                    .name("local-zone-id") // This must match the Local Zone ID in your bucket name.
                    .build())
            .bucket(BucketInfo.builder()
                    .type(BucketType.DIRECTORY)
                    .dataRedundancy(DataRedundancy.SINGLE_LOCAL_ZONE)
                    .build())
            .build();
    try {
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .createBucketConfiguration(bucketConfiguration)
                .build();
        CreateBucketResponse response = s3Client.createBucket(bucketRequest);
        System.out.println(response);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ AWS SDK for JavaScript ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for JavaScript. 

**Example**  

```
// file.mjs, run with Node.js v16 or higher

import { S3 } from "@aws-sdk/client-s3";

const region = "parent-region-code";
const zone = "local-zone-id";
const suffix = `${zone}--x-s3`;

const s3 = new S3({ region });

const bucketName = `bucket-base-name--${suffix}`; // Full directory bucket name

const createResponse = await s3.createBucket({
  Bucket: bucketName,
  CreateBucketConfiguration: {
    Location: { Type: "LocalZone", Name: zone },
    Bucket: { Type: "Directory", DataRedundancy: "SingleLocalZone" },
  },
});
```

------
#### [ SDK for .NET ]

This example shows how to create a directory bucket in a Local Zone by using the SDK for .NET. 

**Example**  

```
using Amazon.S3;
using Amazon.S3.Model;

using (var amazonS3Client = new AmazonS3Client())
{
    var putBucketResponse = await amazonS3Client.PutBucketAsync(new PutBucketRequest
    {
        BucketName = "bucket-base-name--local-zone-id--x-s3",
        PutBucketConfiguration = new PutBucketConfiguration
        {
            BucketInfo = new BucketInfo { DataRedundancy = DataRedundancy.SingleLocalZone, Type = BucketType.Directory },
            Location = new LocationInfo { Name = "local-zone-id", Type = LocationType.LocalZone }
        }
    }).ConfigureAwait(false);
}
```

------
#### [ SDK for PHP ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for PHP. 

**Example**  

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'parent-region-code',
]);

$result = $s3Client->createBucket([
    'Bucket' => 'bucket-base-name--local-zone-id--x-s3',
    'CreateBucketConfiguration' => [
        'Location' => ['Name' => 'local-zone-id', 'Type' => 'LocalZone'],
        'Bucket' => ['DataRedundancy' => 'SingleLocalZone', 'Type' => 'Directory'],
    ],
]);
```

------
#### [ SDK for Python ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Python (Boto3). 

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def create_bucket(s3_client, bucket_name, local_zone):
    '''
    Create a directory bucket in a specified Local Zone

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to create; for example, 'bucket-base-name--local-zone-id--x-s3'
    :param local_zone: String; Local Zone ID to create the bucket in
    :return: True if bucket is created, else False
    '''

    try:
        bucket_config = {
                'Location': {
                    'Type': 'LocalZone',
                    'Name': local_zone
                },
                'Bucket': {
                    'Type': 'Directory', 
                    'DataRedundancy': 'SingleLocalZone'
                }
            }
        s3_client.create_bucket(
            Bucket = bucket_name,
            CreateBucketConfiguration = bucket_config
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'parent-region-code'
    local_zone = 'local-zone-id'
    s3_client = boto3.client('s3', region_name = region)
    create_bucket(s3_client, bucket_name, local_zone)
```

------
#### [ SDK for Ruby ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Ruby. 

**Example**  

```
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'parent-region-code')
s3.create_bucket(
  bucket: "bucket-base-name--local-zone-id--x-s3",
  create_bucket_configuration: {
    location: { name: 'local-zone-id', type: 'LocalZone' },
    bucket: { data_redundancy: 'SingleLocalZone', type: 'Directory' }
  }
)
```

------

# Authenticating and authorizing for directory buckets in Local Zones
<a name="iam-directory-bucket-LZ"></a>

Directory buckets in Local Zones support both AWS Identity and Access Management (IAM) authorization and session-based authorization. For more information about authentication and authorization for directory buckets, see [Authenticating and authorizing requests](s3-express-authenticating-authorizing.md).

## Resources
<a name="directory-bucket-lz-resources"></a>

Amazon Resource Names (ARNs) for directory buckets contain the `s3express` namespace, the AWS parent Region, the AWS account ID, and the directory bucket name, which includes the Zone ID. To access and perform actions on your directory bucket, you must use the following ARN format:

```
arn:aws:s3express:region-code:account-id:bucket/bucket-base-name--ZoneID--x-s3
```

For directory buckets in a Local Zone, the Zone ID is the ID of the Local Zone. For more information about directory buckets in Local Zones, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md). For more information about ARNs, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) in the *IAM User Guide*. For more information about resources, see [IAM JSON Policy Elements: Resource](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_resource.html) in the *IAM User Guide*.
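
Putting the pieces together, a directory bucket ARN can be composed from its parts as follows. This is an illustrative helper, and the account ID and names used in the example are placeholders.

```python
def directory_bucket_arn(region_code, account_id, base_name, zone_id):
    """Compose a directory bucket ARN in the s3express namespace."""
    return (f"arn:aws:s3express:{region_code}:{account_id}:"
            f"bucket/{base_name}--{zone_id}--x-s3")
```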

## Condition keys for directory buckets in Local Zones
<a name="condition-key-db-lz"></a>

In Local Zones, you can use all of these [condition keys](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3express.html#amazons3express-policy-keys) in your IAM policies. Additionally, to create a data perimeter around your Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` to deny all requests from outside the groups. 

The following condition key can be used to further refine the conditions under which an IAM policy statement applies. For a complete list of API operations, policy actions, and condition keys that are supported by directory buckets, see [Policy actions for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html#s3-express-security-iam-actions).

**Note**  
The following condition key only applies to Local Zones and isn't supported in Availability Zones and AWS Regions.


| API operations | Policy actions | Policy action description | Condition key | Condition key description | Type | 
| --- | --- | --- | --- | --- | --- | 
|  [Zonal endpoint API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-APIs.html)  |  s3express:CreateSession  |  Grants permission to create a session token, which is used for granting access to all Zonal endpoint API operations, such as `CreateSession`, `HeadBucket`, `CopyObject`, `PutObject`, and `GetObject`.  |  s3express:AllAccessRestrictedToLocalZoneGroup  | Filters all access to the bucket unless the request originates from the AWS Local Zone network border groups provided in this condition key.  **Values:** Local Zone network border group value   |  String  | 

## Example policies
<a name="directory-bucket-lz-policies"></a>

To restrict object access to requests from within a data residency boundary that you define (specifically, a Local Zone Group, which is a set of Local Zones parented to the same AWS Region), you can set any of the following policies:
+ The service control policy (SCP). For information about SCPs, see [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ The IAM identity-based policy for the IAM role.
+ The VPC endpoint policy. For more information about the VPC endpoint policies, see [Control access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *AWS PrivateLink Guide*.
+ The S3 bucket policy.

**Note**  
The condition key `s3express:AllAccessRestrictedToLocalZoneGroup` doesn't support access from an on-premises environment. To support the access from an on-premises environment, you must add the source IP to the policies. For more information, see [aws:SourceIp](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceip) in the IAM User Guide. 
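
For example, a bucket policy can combine the Local Zone condition with an on-premises source IP range in a single deny statement. The following sketch uses a placeholder CIDR range (`203.0.113.0/24`); because the conditions within a single statement are ANDed together, the statement denies only requests that match neither the network border group nor the source IP range:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access-from-LocalZone-or-on-premises-only",
            "Principal": "*",
            "Effect": "Deny",
            "Action": "s3express:CreateSession",
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {
                    "s3express:AllAccessRestrictedToLocalZoneGroup": [
                        "local-zone-network-border-group-value"
                    ]
                },
                "NotIpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                }
            }
        }
    ]
}
```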

**Example – SCP policy**  

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Access-to-specific-LocalZones-only",
            "Effect": "Deny",
            "Action": [
                "s3express:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {
                    "s3express:AllAccessRestrictedToLocalZoneGroup": [
                        "local-zone-network-border-group-value"
                    ]
                }
            }
        }
    ]
}
```

**Example – IAM identity-based policy (attached to an IAM role)**  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": {
        "Effect": "Deny",
        "Action": "s3express:CreateSession",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {
                "s3express:AllAccessRestrictedToLocalZoneGroup": [
                    "local-zone-network-border-group-value"
                ]              
            }
        }
    }
}
```

**Example – VPC endpoint policy**  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {       
            "Sid": "Access-to-specific-LocalZones-only",
            "Principal": "*",
            "Action": "s3express:CreateSession",
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                 "StringNotEqualsIfExists": {
                     "s3express:AllAccessRestrictedToLocalZoneGroup": [
                         "local-zone-network-border-group-value"
                     ]
                 }   
            }
        }
    ]
}
```

**Example – bucket policy**  

```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {       
            "Sid": "Access-to-specific-LocalZones-only",
            "Principal": "*",
            "Action": "s3express:CreateSession",
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                 "StringNotEqualsIfExists": {
                     "s3express:AllAccessRestrictedToLocalZoneGroup": [
                         "local-zone-network-border-group-value"
                     ]
                 }   
            }
        }
    ]
}
```

# Differences for directory buckets
<a name="s3-express-differences"></a>

When using Amazon S3, you can choose the bucket type that best fits your application and performance requirements. A directory bucket is a type of bucket that is best used for low latency or data residency use cases. To learn more about directory buckets, see [Working with directory buckets](directory-buckets-overview.md). 

For more information about how directory buckets differ, see the following topics.

**Topics**
+ [Differences for directory buckets](#s3-express-specifications)
+ [API operations supported for directory buckets](#s3-express-differences-api-operations)
+ [Amazon S3 features not supported by directory buckets](#s3-express-differences-unsupported-features)

## Differences for directory buckets
<a name="s3-express-specifications"></a>
+ **Directory bucket names** 
  +  A directory bucket name consists of a base name that you provide and a suffix that contains the ID of the AWS Zone (an Availability Zone or Local Zone) that your bucket is located in, followed by `--x-s3`. For a list of rules and examples of directory bucket names, see [Directory bucket naming rules](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-bucket-naming-rules.html).
+ **`ListObjectsV2` behavior** 
  + For directory buckets, `ListObjectsV2` doesn't return objects in lexicographical (alphabetical) order. Additionally, prefixes must end in a delimiter, and `/` is the only supported delimiter.
  + For directory buckets, the `ListObjectsV2` response includes the prefixes that are related only to in-progress multipart uploads.
+ **Deletion behavior** – When you delete an object in a directory bucket, Amazon S3 recursively deletes any empty directories in the object path. For example, if you delete the object key `dir1/dir2/file1.txt`, Amazon S3 deletes `file1.txt`. If the `dir1/` and `dir2/` directories are empty and contain no other objects, Amazon S3 also deletes those directories.
+ **ETags and checksums** – Entity tags (ETags) for directory buckets are random alphanumeric strings unique to the object and not MD5 checksums. For more information about using additional checksums with directory buckets, see [S3 additional checksum best practices](s3-express-optimizing-performance.md#s3-express-optimizing-performance-checksums).
+ **Object keys in `DeleteObjects` requests** 
  + Object keys in `DeleteObjects` requests must contain at least one non-white space character. Strings of all white space characters aren't supported in `DeleteObjects` requests.
  + Object keys in `DeleteObjects` requests cannot contain Unicode control characters, except for the newline (`\n`), tab (`\t`), and carriage return (`\r`) characters.
+ **Regional and Zonal endpoints** – Bucket-management API operations for directory buckets are available through a Regional endpoint and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`. After you create a directory bucket, you can use Zonal endpoint API operations to upload and manage the objects in your directory bucket. Zonal endpoint API operations are available through a Zonal endpoint. Examples of Zonal endpoint API operations are `PutObject` and `CopyObject`. When using directory buckets, you must specify the Region in all requests. For Regional endpoints, you specify the Region, for example, `s3express-control.us-west-2.amazonaws.com`. For Zonal endpoints, you specify both the Region and the Availability Zone, for example, `s3express-usw2-az1.us-west-2.amazonaws.com`. For more information, see [Regional and Zonal endpoints for directory buckets](s3-express-Regions-and-Zones.md).
+ **Multipart uploads** – You can upload and copy large objects that are stored in directory buckets by using the multipart upload process. However, the following are some differences when using the multipart upload process with objects stored in directory buckets. For more information, see [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md).
  + The object creation date is the completion date of the multipart upload.
  + Part numbers must be consecutive. If you try to complete a multipart upload request with nonconsecutive part numbers, Amazon S3 generates an HTTP `400 (Bad Request)` error.
  + The initiator of a multipart upload can abort the multipart upload request only if they have been granted explicit allow access to `AbortMultipartUpload` through the `s3express:CreateSession` permission. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
+ **Emptying a directory bucket** – The `s3 rm` command through the AWS Command Line Interface (CLI), the `delete` operation through Mountpoint, and the **Empty** bucket option button through the AWS Management Console are unable to delete in-progress multipart uploads in a directory bucket. To delete these in-progress multipart uploads, use the `ListMultipartUploads` operation to list the in-progress multipart uploads in the bucket and use the `AbortMultipartUpload` operation to abort all the in-progress multipart uploads.
+ **AWS Local Zones** – Local Zones are supported only for directory buckets, not general purpose buckets.
  + Appending data to existing objects isn't supported for directory buckets that reside in Local Zones. You can append data only to existing objects in directory buckets that reside in Availability Zones.
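
The recursive deletion behavior described above can be sketched with a small in-memory model. This is illustrative only (Amazon S3 performs this pruning server side); `delete_object` here is a hypothetical helper, not an S3 API call:

```python
def delete_object(objects: set, dirs: set, key: str):
    """Simulate directory bucket deletion: removing an object also removes
    any parent directories that become empty (as in the dir1/dir2/file1.txt
    example above). Returns the updated (objects, dirs) sets."""
    objects = set(objects) - {key}
    dirs = set(dirs)
    parts = key.split("/")[:-1]              # e.g. ['dir1', 'dir2']
    while parts:
        prefix = "/".join(parts) + "/"       # 'dir1/dir2/', then 'dir1/'
        # A directory is still in use if any object or subdirectory
        # remains under it.
        in_use = any(o.startswith(prefix) for o in objects) or any(
            d != prefix and d.startswith(prefix) for d in dirs
        )
        if in_use:
            break                            # stop pruning at a non-empty level
        dirs.discard(prefix)
        parts.pop()
    return objects, dirs
```

For example, deleting `dir1/dir2/file1.txt` from a bucket that contains only that object also removes `dir1/dir2/` and `dir1/`, but if `dir1/other.txt` still exists, `dir1/` is kept.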

## API operations supported for directory buckets
<a name="s3-express-differences-api-operations"></a>

Directory buckets support both Regional (bucket-level, or control plane) and Zonal (object-level, or data plane) endpoint API operations. For more information, see [Networking for directory buckets](s3-express-networking.md) and [Endpoints and gateway VPC endpoints](directory-bucket-high-performance.md#s3-express-overview-endpoints). For a list of supported API operations, see [Directory bucket API operations](s3-express-APIs.md). 

## Amazon S3 features not supported by directory buckets
<a name="s3-express-differences-unsupported-features"></a>

The following Amazon S3 features are not supported by directory buckets: 
+ AWS managed policies
+ AWS PrivateLink for S3
+ MD5 checksums
+ Multi-factor authentication (MFA) delete
+ S3 Object Lock
+ Requester Pays
+ S3 Access Grants
+ Amazon CloudWatch request metrics
+ S3 Event Notifications
+ S3 Lifecycle transition actions
+ S3 Multi-Region Access Points
+ S3 Object Lambda Access Points
+ S3 Versioning
+ S3 Inventory
+ S3 Replication 
+ Object tags
+ S3 Select
+ Server access logs
+ Static website hosting
+ S3 Storage Lens
+ S3 Storage Lens groups
+ S3 Transfer Acceleration
+ Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS)
+ Server-side encryption with customer-provided keys (SSE-C)
+ The option to copy an existing bucket's settings when creating a new bucket in the Amazon S3 console
+ Enhanced access denied (HTTP `403 Forbidden`) error messages

# Networking for directory buckets
<a name="s3-express-networking"></a>

To access directory buckets and the objects in them, you use Regional and Zonal API endpoints that differ from the standard Amazon S3 endpoints. Depending on the S3 API operation that you use, either a Zonal or Regional endpoint is required. For a complete list of API operations by endpoint type, see [Differences for directory buckets](s3-express-differences.md). 

You can access both Zonal and Regional API operations through gateway virtual private cloud (VPC) endpoints.

The following topics describe the networking requirements for accessing S3 Express One Zone by using a gateway VPC endpoint.

**Topics**
+ [Endpoints](#s3-express-endpoints)
+ [Configuring VPC gateway endpoints](#s3-express-networking-vpc-gateway-directory)

## Endpoints
<a name="s3-express-endpoints"></a>

You can access directory buckets and the objects in them from your VPC by using gateway VPC endpoints. Directory buckets use Regional and Zonal API endpoints. Depending on the Amazon S3 API operation that you use, either a Regional or Zonal endpoint is required. There is no additional charge for using gateway endpoints.

Bucket-level (or control plane) API operations are available through Regional endpoints and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`. When you create a directory bucket, you choose a single Zone (Availability Zone or Local Zone) where your directory bucket will be created. After you create a directory bucket, you can use Zonal endpoint API operations to upload and manage the objects in your directory bucket.

Object-level (or data plane) API operations are available through Zonal endpoints and are referred to as Zonal endpoint API operations. Examples of Zonal endpoint API operations are `CreateSession` and `PutObject`.
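
As a rough sketch, the two hostname patterns can be composed from a Region and a Zone ID as follows. This is illustrative string formatting only; treat the endpoint lists referenced below as the authoritative source:

```python
def regional_endpoint(region: str) -> str:
    # Regional endpoint: bucket-level (control plane) operations,
    # such as CreateBucket and DeleteBucket.
    return f"s3express-control.{region}.amazonaws.com"

def zonal_endpoint(zone_id: str, region: str) -> str:
    # Zonal endpoint: object-level (data plane) operations,
    # such as CreateSession and PutObject.
    return f"s3express-{zone_id}.{region}.amazonaws.com"
```

For example, `zonal_endpoint("usw2-az1", "us-west-2")` composes the Zonal endpoint for Availability Zone `usw2-az1` in `us-west-2`.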

For more information about the endpoints and the locations that support directory buckets in Availability Zones, see [Endpoints for directory buckets in Availability Zones](directory-bucket-az-networking.md#s3-express-endpoints-az). 

For more information about the endpoints and the locations that support directory buckets in Local Zones, see [Enable accounts for Local Zones](opt-in-directory-bucket-lz.md). 

## Configuring VPC gateway endpoints
<a name="s3-express-networking-vpc-gateway-directory"></a>

To configure gateway VPC endpoints for accessing directory buckets in Availability Zones, see [Configuring VPC gateway endpoints](directory-bucket-az-networking.md#s3-express-networking-vpc-gateway).

To configure gateway VPC endpoints for accessing directory buckets in Local Zones, see [Private connectivity from your VPC](connectivity-lz-directory-buckets.md).

# Directory bucket naming rules
<a name="directory-bucket-naming-rules"></a>

When you create a directory bucket in Amazon S3, the following bucket naming rules apply. For general purpose bucket naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).

A directory bucket name consists of a base name that you provide and a suffix that contains the ID of the AWS Zone (an Availability Zone or a Local Zone) that your bucket is located in, followed by `--x-s3`. In the following pattern, *zone-id* is the ID of an Availability Zone or a Local Zone.

```
base-name--zoneid--x-s3
```

For example, the following directory bucket name contains the Availability Zone ID `usw2-az1`:

```
bucket-base-name--usw2-az1--x-s3
```

**Note**  
When you create a directory bucket by using the console, a suffix is automatically added to the base name that you provide. This suffix includes the Zone ID of the Zone (Availability Zone or Local Zone) that you chose.  
When you create a directory bucket by using an API, you must provide the full suffix, including the Zone ID, in your request. For a list of Zone IDs, see [Endpoints](s3-express-networking.md#s3-express-endpoints).

The following naming rules apply to directory buckets:
+ Bucket names must be unique within the chosen Zone (AWS Availability Zone or AWS Local Zone). 
+ Bucket names must be between 3 (min) and 63 (max) characters long, including the suffix.
+ Bucket names can consist only of lowercase letters, numbers, and hyphens (-).
+ The base name must begin and end with a letter or number. 
+ Bucket names must include the following suffix: `--zone-id--x-s3`.
+ Bucket names must not start with the prefix `xn--`.
+ Bucket names must not start with the prefix `sthree-`.
+ Bucket names must not start with the prefix `sthree-configurator`.
+ Bucket names must not start with the prefix `amzn-s3-demo-`.
+ Bucket names must not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
+ Bucket names must not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
+ Bucket names must not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).
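
As an illustration, most of these rules can be checked client side before calling `CreateBucket`. The following sketch is a rough approximation, not the authoritative validation that Amazon S3 performs; it combines a regular expression with the reserved prefix and suffix checks listed above:

```python
import re

RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap")

# base-name--zone-id--x-s3: lowercase letters, numbers, and hyphens;
# the base name must begin and end with a letter or number.
NAME_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]*[a-z0-9])?--[a-z0-9-]+--x-s3$")

def is_valid_directory_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:      # length limits include the suffix
        return False
    if any(name.startswith(p) for p in RESERVED_PREFIXES):
        return False
    if any(name.endswith(s) for s in RESERVED_SUFFIXES):
        return False
    return NAME_RE.fullmatch(name) is not None
```

For example, `bucket-base-name--usw2-az1--x-s3` passes, while a name with no Zone ID or with uppercase letters is rejected.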

# Viewing directory bucket properties
<a name="directory-bucket-view"></a>

You can view and configure the properties for an Amazon S3 directory bucket by using the Amazon S3 console. For more information, see [Working with directory buckets](directory-buckets-overview.md).

## Using the S3 console
<a name="directory-bucket-view-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. In the **Directory buckets** list, choose the name of the bucket that you want to view the properties for.

1. Choose the **Properties** tab.

1. On the **Properties** tab, you can view the following properties for the bucket:
   + **Directory bucket overview** – You can see the AWS Region, Zone (Availability Zone or Local Zone), Amazon Resource Name (ARN), and creation date for the bucket.
   + **Server-side encryption settings** – Amazon S3 applies server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption for all S3 buckets. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when you download it. For more information, see [Setting and monitoring default encryption for directory buckets](s3-express-bucket-encryption.md).

     For more information about supported features for directory buckets, see [Creating and using directory buckets](directory-buckets-overview.md#directory-buckets-working).

# Managing directory bucket policies
<a name="directory-bucket-bucket-policy"></a>

You can add, delete, update, and view bucket policies for Amazon S3 directory buckets by using the Amazon S3 console, the AWS SDKs and the AWS CLI. For more information, see the following topics. For more information about supported AWS Identity and Access Management (IAM) actions, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). For example bucket policies for directory buckets, see [Example bucket policies for directory buckets](s3-express-security-iam-example-bucket-policies.md).

**Topics**
+ [Adding a bucket policy](#directory-bucket-bucket-policy-add)
+ [Viewing a bucket policy](#directory-bucket-bucket-policy-view)
+ [Deleting a bucket policy](#directory-bucket-bucket-policy-delete)

## Adding a bucket policy
<a name="directory-bucket-bucket-policy-add"></a>

To add a bucket policy to a directory bucket, you can use the Amazon S3 console, the AWS SDKs, or the AWS CLI.

### Using the S3 console
<a name="directory-bucket-bucket-policy-add-console"></a>

**To create or edit a bucket policy**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. In the **Directory buckets** list, choose the name of the bucket that you want to add a policy to.

1. Choose the **Permissions** tab.

1. Under **Bucket policy**, choose **Edit**. The **Edit bucket policy** page appears.

1. To generate a policy automatically, choose **Policy generator**.

   If you choose **Policy generator**, the AWS Policy Generator opens in a new window.

   If you don't want to use the AWS Policy Generator, you can add or edit JSON statements in the **Policy** section.

   1. On the **AWS Policy Generator** page, for **Select Type of Policy**, choose **S3 Bucket Policy**.

   1. Add a statement by entering the information in the provided fields, and then choose **Add Statement**. Repeat this step for as many statements as you want to add. For more information about these fields, see the [IAM JSON policy elements reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html) in the *IAM User Guide*. 
**Note**  
For your convenience, the **Edit bucket policy** page displays the **Bucket ARN** (Amazon Resource Name) of the current bucket above the **Policy** text field. You can copy this ARN for use in the statements on the **AWS Policy Generator** page. 

   1. After you finish adding statements, choose **Generate Policy**.

   1. Copy the generated policy text, choose **Close**, and return to the **Edit bucket policy** page in the Amazon S3 console.

1. In the **Policy** box, edit the existing policy or paste the bucket policy from the AWS Policy Generator. Make sure to resolve security warnings, errors, general warnings, and suggestions before you save your policy.
**Note**  
Bucket policies are limited to 20 KB in size.

1. Choose **Save changes**, which returns you to the **Permissions** tab. 

### Using the AWS SDKs
<a name="directory-bucket-bucket-policy-add-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
`PutBucketPolicy` AWS SDK for Java 2.x   

```
public static void setBucketPolicy(S3Client s3Client, String bucketName, String policyText) {
    // Sample policy text:
    // {
    //     "Version": "2012-10-17",
    //     "Statement": [
    //         {
    //             "Sid": "AdminPolicy",
    //             "Effect": "Allow",
    //             "Principal": { "AWS": "111122223333" },
    //             "Action": "s3express:*",
    //             "Resource": "arn:aws:s3express:region:111122223333:bucket/bucket-base-name--zone-id--x-s3"
    //         }
    //     ]
    // }
    System.out.println("Setting policy:");
    System.out.println("----");
    System.out.println(policyText);
    System.out.println("----");
    System.out.format("On Amazon S3 bucket: \"%s\"%n", bucketName);

    try {
        PutBucketPolicyRequest policyReq = PutBucketPolicyRequest.builder()
                .bucket(bucketName)
                .policy(policyText)
                .build();
        s3Client.putBucketPolicy(policyReq);
        System.out.println("Done!");
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------

### Using the AWS CLI
<a name="directory-bucket-delete-cli"></a>

This example shows how to add a bucket policy to a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api put-bucket-policy --bucket bucket-base-name--zone-id--x-s3 --policy file://bucket_policy.json
```

bucket_policy.json:

------
#### [ JSON ]


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "AdminPolicy",
            "Effect": "Allow",
            "Principal": {
                "AWS": "111122223333"
            },
            "Action": "s3express*",
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3"
        }
    ]
}
```

------

For more information, see [put-bucket-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-policy.html) in the *AWS CLI Command Reference*.

## Viewing a bucket policy
<a name="directory-bucket-bucket-policy-view"></a>

To view a bucket policy for a directory bucket, use the following examples.

### Using the AWS CLI
<a name="directory-bucket-bucket-policy-view-cli"></a>

This example shows how to view the bucket policy attached to a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api get-bucket-policy --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [get-bucket-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html) in the *AWS CLI Command Reference*.

## Deleting a bucket policy
<a name="directory-bucket-bucket-policy-delete"></a>

To delete a bucket policy for a directory bucket, use the following examples.

### Using the AWS SDKs
<a name="directory-bucket-bucket-policy-delete-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
`DeleteBucketPolicy` AWS SDK for Java 2.x   

```
public static void deleteBucketPolicy(S3Client s3Client, String bucketName) {
    try {
        DeleteBucketPolicyRequest deleteBucketPolicyRequest = DeleteBucketPolicyRequest.builder()
                .bucket(bucketName)
                .build();
        s3Client.deleteBucketPolicy(deleteBucketPolicyRequest);
        System.out.println("Successfully deleted bucket policy");
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------

### Using the AWS CLI
<a name="directory-bucket-delete-cli"></a>

This example shows how to delete a bucket policy for a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api delete-bucket-policy --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [delete-bucket-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-bucket-policy.html) in the *AWS CLI Command Reference*.

# Emptying a directory bucket
<a name="directory-bucket-empty"></a>

You can empty an Amazon S3 directory bucket by using the Amazon S3 console. For more information about directory buckets, see [Working with directory buckets](directory-buckets-overview.md).

Before you empty a directory bucket, note the following:
+ When you empty a directory bucket, you delete all the objects, but you keep the directory bucket.
+ After you empty a directory bucket, the empty action can't be undone.
+ Objects that are added to the directory bucket while the empty bucket action is in progress might be deleted. 

If you also want to delete the bucket, note the following:
+ All objects in the directory bucket must be deleted before the bucket itself can be deleted.
+ In-progress multipart uploads in the directory bucket must be aborted before the bucket itself can be deleted.
**Note**  
The `s3 rm` command through the AWS Command Line Interface (CLI), the `delete` operation through Mountpoint, and the **Empty** bucket option button through the AWS Management Console are unable to delete in-progress multipart uploads in a directory bucket. To delete these in-progress multipart uploads, use the `ListMultipartUploads` operation to list the in-progress multipart uploads in the bucket and use the `AbortMultipartUpload` operation to abort all the in-progress multipart uploads.

To delete a directory bucket, see [Deleting a directory bucket](directory-bucket-delete.md). To abort an in-progress multipart upload, see [Aborting a multipart upload](abort-mpu.md).

To empty a general purpose bucket, see [Emptying a general purpose bucket](empty-bucket.md).

## Using the S3 console
<a name="directory-bucket-empty-console"></a>

**To empty a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the option button next to the name of the bucket that you want to empty, and then choose **Empty**.

1. On the **Empty bucket** page, confirm that you want to empty the bucket by entering **permanently delete** in the text field, and then choose **Empty**.

1. Monitor the progress of the bucket emptying process on the **Empty bucket: status** page.

# Deleting a directory bucket
<a name="directory-bucket-delete"></a>

You can delete only empty Amazon S3 directory buckets. Before you delete your directory bucket, you must delete all objects in the bucket and abort all in-progress multipart uploads.

If the directory bucket is attached to an access point, you must delete the access point first. For more information, see [Delete your access point for directory buckets](access-points-directory-buckets-delete.md).

To empty a directory bucket, see [Emptying a directory bucket](directory-bucket-empty.md). To abort an in-progress multipart upload, see [Aborting a multipart upload](abort-mpu.md).

To delete a general purpose bucket, see [Deleting a general purpose bucket](delete-bucket.md).

## Using the S3 console
<a name="directory-bucket-delete-console"></a>

After you empty your directory bucket and abort all in-progress multipart uploads, you can delete your bucket.

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. In the **Directory buckets** list, choose the option button next to the bucket that you want to delete.

1. Choose **Delete**.

1. On the **Delete bucket** page, enter the name of the bucket in the text field to confirm the deletion of your bucket. 
**Important**  
Deleting a directory bucket can't be undone.

1. To delete your directory bucket, choose **Delete bucket**.

## Using the AWS SDKs
<a name="directory-bucket-delete-sdks"></a>

The following examples delete a directory bucket by using the AWS SDK for Java 2.x and AWS SDK for Python (Boto3).

------
#### [ SDK for Java 2.x ]

**Example**  

```
public static void deleteBucket(S3Client s3Client, String bucketName) {
     
    try {
        DeleteBucketRequest del = DeleteBucketRequest.builder()
                .bucket(bucketName)
                .build();
        s3Client.deleteBucket(del);
        System.out.println("Bucket " + bucketName + " has been deleted");
    } 
    catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def delete_bucket(s3_client, bucket_name):
    '''
    Delete a directory bucket in a specified Region

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to delete; for example, 'doc-example-bucket--usw2-az1--x-s3'
    :return: True if bucket is deleted, else False
    '''

    try:
        s3_client.delete_bucket(Bucket = bucket_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'us-west-2'
    s3_client = boto3.client('s3', region_name=region)
    delete_bucket(s3_client, bucket_name)
```

------

## Using the AWS CLI
<a name="directory-bucket-delete-cli"></a>

This example shows how to delete a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api delete-bucket --bucket bucket-base-name--zone-id--x-s3 --region us-west-2
```

For more information, see [delete-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-bucket.html) in the *AWS CLI Command Reference*.

# Listing directory buckets
<a name="directory-buckets-objects-ListExamples"></a>

The following examples show how to list directory buckets by using the AWS Management Console, AWS SDKs, and AWS CLI. 

## Using the S3 console
<a name="directory-bucket-list-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to view a list of your directory buckets.

1. In the left navigation pane, choose **Directory buckets**. A list of directory buckets appears. To view the objects in the bucket, bucket properties, bucket permissions, metrics, access points associated with the bucket, or to manage the bucket, choose the bucket name.

## Using the AWS SDKs
<a name="directory-bucket-list-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following example lists directory buckets by using the AWS SDK for Java 2.x.   

```
public static void listBuckets(S3Client s3Client) {
    try {
        ListDirectoryBucketsRequest listDirectoryBucketsRequest = ListDirectoryBucketsRequest.builder().build();
        ListDirectoryBucketsResponse response = s3Client.listDirectoryBuckets(listDirectoryBucketsRequest);
        if (response.hasBuckets()) {
            for (Bucket bucket : response.buckets()) {
                System.out.println(bucket.name());
                System.out.println(bucket.creationDate());
            }
        }
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  
The following example lists directory buckets by using the AWS SDK for Python (Boto3).   

```
import logging
import boto3
from botocore.exceptions import ClientError
                                       
def list_directory_buckets(s3_client):
    '''
    Prints a list of all directory buckets in a Region

    :param s3_client: boto3 S3 client
    :return: True if there are buckets in the Region, else False
    '''
    try:
        response = s3_client.list_directory_buckets()
        for bucket in response['Buckets']:
            print(bucket['Name'])
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    region = 'us-east-1'
    s3_client = boto3.client('s3', region_name=region)
    list_directory_buckets(s3_client)
```

------
#### [ SDK for .NET ]

**Example**  
The following example lists directory buckets by using the AWS SDK for .NET.   

```
var listDirectoryBuckets = await amazonS3Client.ListDirectoryBucketsAsync(new ListDirectoryBucketsRequest
{
    MaxDirectoryBuckets = 10
}).ConfigureAwait(false);
```

------
#### [ SDK for PHP ]

**Example**  
The following example lists directory buckets by using the AWS SDK for PHP.   

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'us-east-1',
]);
$result = $s3Client->listDirectoryBuckets();
```

------
#### [ SDK for Ruby ]

**Example**  
The following example lists directory buckets by using the AWS SDK for Ruby.   

```
require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-west-1')
s3.list_directory_buckets
```

------

## Using the AWS CLI
<a name="list-directory-buckets-cli"></a>

The following `list-directory-buckets` example command shows how you can use the AWS CLI to list your directory buckets in the *us-east-1* Region. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api list-directory-buckets --region us-east-1 
```

For more information, see [list-directory-buckets](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-directory-buckets.html) in the *AWS CLI Command Reference*.

# Determining whether you can access a directory bucket
<a name="directory-buckets-objects-HeadExamples"></a>

The following AWS SDK examples show how to use the `HeadBucket` API operation to determine whether an Amazon S3 directory bucket exists and whether you have permission to access it. 

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

The following AWS SDK for Java 2.x example shows how to determine if a bucket exists and if you have permission to access it. 

------
#### [ SDK for Java 2.x ]

**Example**  
 AWS SDK for Java 2.x   

```
public static void headBucket(S3Client s3Client, String bucketName) {
    try {
        HeadBucketRequest headBucketRequest = HeadBucketRequest
                .builder()
                .bucket(bucketName)
                .build();
        s3Client.headBucket(headBucketRequest);
        System.out.format("Amazon S3 bucket: \"%s\" found.", bucketName);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------

## Using the AWS CLI
<a name="directory-head-bucket-cli"></a>

The following `head-bucket` example command shows how you can use the AWS CLI to determine if a directory bucket exists and if you have permission to access it. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api head-bucket --bucket bucket-base-name--zone-id--x-s3 
```

For more information, see [head-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-bucket.html) in the *AWS CLI Command Reference*.
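
As a quick reference, `HeadBucket` distinguishes the possible outcomes by HTTP status code. The following sketch (plain Python, not an S3 API call; the wording of the messages is illustrative) maps the codes to their common interpretation:

```python
def interpret_head_bucket(status_code):
    """Map a HeadBucket HTTP status code to its common interpretation."""
    meanings = {
        200: "Bucket exists and you have permission to access it.",
        403: "Bucket exists, but you don't have permission to access it.",
        404: "Bucket does not exist.",
    }
    return meanings.get(status_code, f"Unexpected status: {status_code}")

print(interpret_head_bucket(403))
# Bucket exists, but you don't have permission to access it.
```

The AWS CLI surfaces the same distinction through its exit status: `head-bucket` succeeds silently for an accessible bucket and reports an error for the 403 and 404 cases.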

# Working with objects in a directory bucket
<a name="directory-buckets-objects"></a>

After you create an Amazon S3 directory bucket, you can work with objects by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), and the AWS SDKs. 

For more information about performing bulk operations, importing, uploading, copying, deleting, and downloading objects in directory buckets, see the following topics.

**Topics**
+ [Importing objects into a directory bucket](create-import-job.md)
+ [Working with S3 Lifecycle for directory buckets](directory-buckets-objects-lifecycle.md)
+ [Using Batch Operations with directory buckets](directory-buckets-objects-Batch-Ops.md)
+ [Appending data to objects in directory buckets](directory-buckets-objects-append.md)
+ [Renaming objects in directory buckets](directory-buckets-objects-rename.md)
+ [Uploading objects to a directory bucket](directory-buckets-objects-upload.md)
+ [Copying objects from or to a directory bucket](directory-buckets-objects-copy.md)
+ [Deleting objects from a directory bucket](directory-bucket-delete-object.md)
+ [Downloading an object from a directory bucket](directory-buckets-objects-GetExamples.md)
+ [Generating presigned URLs to share objects in a directory bucket](directory-buckets-objects-generate-presigned-url-Examples.md)
+ [Retrieving object metadata from directory buckets](directory-buckets-objects-HeadObjectExamples.md)
+ [Listing objects from a directory bucket](directory-buckets-objects-listobjectsExamples.md)

# Importing objects into a directory bucket
<a name="create-import-job"></a>

After you create a directory bucket in Amazon S3, you can populate the new bucket with data by using the import action. Import is a streamlined method for creating S3 Batch Operations jobs to copy objects from general purpose buckets to directory buckets. 

**Note**  
The following limitations apply to import jobs:  
+ The source bucket and the destination bucket must be in the same AWS Region and AWS account.
+ The source bucket can't be a directory bucket.
+ Objects larger than 5 GB aren't supported and are omitted from the copy operation.
+ Objects in the Glacier Flexible Retrieval and Glacier Deep Archive storage classes, and in the Intelligent-Tiering Archive Access and Deep Archive Access tiers, must be restored before they can be imported.
+ Imported objects that use the MD5 checksum algorithm are converted to use the CRC32 checksum algorithm.
+ Imported objects use the S3 Express One Zone storage class, which has a different pricing structure than the storage classes used by general purpose buckets. Consider this cost difference when importing large numbers of objects.
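
To make the size and storage-class limitations concrete, the following sketch triages hypothetical source objects the way an import job treats them. This is plain Python over illustrative records, not an S3 API call; `GLACIER` and `DEEP_ARCHIVE` are the API names for the Glacier Flexible Retrieval and Glacier Deep Archive storage classes:

```python
GIB = 1024 ** 3
ARCHIVE_CLASSES = {"GLACIER", "DEEP_ARCHIVE"}  # must be restored before import

def triage_for_import(objects):
    """Classify source objects: copied, omitted (over 5 GB), or needs restore."""
    result = {"copied": [], "omitted": [], "needs_restore": []}
    for obj in objects:
        if obj["size"] > 5 * GIB:
            result["omitted"].append(obj["key"])        # too large; skipped
        elif obj["storage_class"] in ARCHIVE_CLASSES:
            result["needs_restore"].append(obj["key"])  # restore first
        else:
            result["copied"].append(obj["key"])
    return result

sample = [
    {"key": "small.txt", "size": 1024, "storage_class": "STANDARD"},
    {"key": "huge.bin", "size": 6 * GIB, "storage_class": "STANDARD"},
    {"key": "cold.dat", "size": 2048, "storage_class": "GLACIER"},
]
print(triage_for_import(sample))
```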

When you configure an import job, you specify the source bucket or prefix where the existing objects will be copied from. You also provide an AWS Identity and Access Management (IAM) role that has permissions to access the source objects. Amazon S3 then starts a Batch Operations job that copies the objects and automatically applies appropriate storage class and checksum settings.

To configure import jobs, you use the Amazon S3 console.

## Using the Amazon S3 console
<a name="create-import-job-console-procedure"></a>

**To import objects into a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**, and then choose the **Directory** buckets tab. Choose the option button next to the directory bucket that you want to import objects into.

1. Choose **Import**.

1. For **Source**, enter the general purpose bucket (or bucket path including prefix) that contains the objects that you want to import. To choose an existing general purpose bucket from a list, choose **Browse S3**.

1. For **Permission to access and copy source objects**, do one of the following to specify an IAM role with the permissions necessary to import your source objects:
   + To allow Amazon S3 to create a new IAM role on your behalf, choose **Create new IAM role**.
   + To choose an existing IAM role from a list, choose **Choose from existing IAM roles**.
   + To specify an existing IAM role by entering its Amazon Resource Name (ARN), choose **Enter IAM role ARN**, then enter the ARN in the corresponding field.

1. Review the information that's displayed in the **Destination** and **Copied object settings** sections. If the information in the **Destination** section is correct, choose **Import** to start the copy job.

   The Amazon S3 console displays the status of your new job on the **Batch Operations** page. For more information about the job, choose the option button next to the job name, and then on the **Actions** menu, choose **View details**. To open the directory bucket that the objects will be imported into, choose **View import destination**.

# Working with S3 Lifecycle for directory buckets
<a name="directory-buckets-objects-lifecycle"></a>

S3 Lifecycle helps you store objects in the S3 Express One Zone storage class in directory buckets cost-effectively by deleting expired objects on your behalf. To manage the lifecycle of your objects, create an S3 Lifecycle configuration for your directory bucket. An S3 Lifecycle configuration is a set of rules that define actions that Amazon S3 applies to a group of objects. You can set an S3 Lifecycle configuration on a directory bucket by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, the Amazon S3 REST API, and AWS CloudFormation. 

In your lifecycle configuration, you use rules to define actions that you want Amazon S3 to take on your objects. For objects stored in directory buckets, you can create lifecycle rules to expire objects as they age. You can also create lifecycle rules to delete incomplete multipart uploads in directory buckets at a daily frequency. 

When you add a Lifecycle configuration to a bucket, the configuration rules apply to both existing objects and objects that you add later. For example, if you add a Lifecycle configuration rule today with an expiration action that causes objects with a specific prefix to expire 30 days after creation, S3 will queue for removal any existing objects that are more than 30 days old and that have the specified prefix.
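
The 30-day example above can be sketched as a small simulation. The object keys and dates below are hypothetical, and this is plain Python rather than an S3 API call; it only illustrates which objects a prefix-scoped expiration rule would queue for removal:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical objects and creation times, evaluated against a fixed "now".
NOW = datetime(2025, 6, 30, tzinfo=timezone.utc)

objects = {
    "logs/app-2025-01-01.txt": NOW - timedelta(days=180),
    "logs/app-2025-06-25.txt": NOW - timedelta(days=5),
    "data/report.csv": NOW - timedelta(days=180),
}

def queued_for_removal(objs, prefix, expire_after_days, now):
    """Return keys matching the prefix that are older than the rule allows."""
    cutoff = now - timedelta(days=expire_after_days)
    return sorted(k for k, created in objs.items()
                  if k.startswith(prefix) and created <= cutoff)

print(queued_for_removal(objects, "logs/", 30, NOW))
# ['logs/app-2025-01-01.txt']
```

Note that the rule applies to the 180-day-old existing object even though it predates the configuration, which matches the behavior described above.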

## How S3 Lifecycle for directory buckets is different
<a name="directory-bucket-lifecycle-differences"></a>

For objects in directory buckets, you can create lifecycle rules to expire objects and delete incomplete multipart uploads. However, S3 Lifecycle for directory buckets doesn't support transition actions between storage classes. 

**CreateSession**

S3 Lifecycle uses the public `DeleteObject` and `DeleteObjects` API operations to expire objects in directory buckets. To use these API operations, S3 Lifecycle uses the `CreateSession` API operation to establish temporary security credentials for accessing the objects in the directory buckets. For more information, see [`CreateSession`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. 

If an active policy denies delete permissions to the S3 Lifecycle service principal, S3 Lifecycle can't delete objects on your behalf. 

### Using a bucket policy to grant permissions to the S3 Lifecycle service principal
<a name="lifecycle-directory-bucket-policy"></a>

The following bucket policy grants the S3 Lifecycle service principal permission to create sessions for performing operations such as `DeleteObject` and `DeleteObjects`. When no session mode is specified in a `CreateSession` request, the session is created with the most permissive mode that the bucket policy allows (S3 attempts `ReadWrite` first, then falls back to `ReadOnly` if `ReadWrite` isn't permitted). However, `ReadOnly` sessions are insufficient for lifecycle operations that modify or delete objects. Therefore, this example explicitly requires the `ReadWrite` session mode by using the `s3express:SessionMode` condition key.

**Example – Bucket policy to allow `CreateSession` calls with an explicit `ReadWrite` session mode for lifecycle operations**  

```
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lifecycle.s3.amazonaws.com"
            },
            "Action": "s3express:CreateSession",
            "Condition": {
                "StringEquals": {
                    "s3express:SessionMode": "ReadWrite"
                }
            },
            "Resource": "arn:aws:s3express:us-east-2:412345678921:bucket/amzn-s3-demo-bucket--use2-az2--x-s3"
        }
    ]
}
```

### Monitoring lifecycle rules
<a name="lifecycle-directory-bucket-monitoring"></a>

For objects stored in directory buckets, S3 Lifecycle generates AWS CloudTrail management and data event logs. For more information, see [CloudTrail log file examples for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html). 

For more information about creating lifecycle configurations and troubleshooting S3 Lifecycle-related issues, see the following topics. 

**Topics**
+ [Creating and managing a Lifecycle configuration for your directory bucket](#directory-bucket-create-lc)
+ [Troubleshooting S3 Lifecycle issues for directory buckets](#directory-buckets-lifecycle-troubleshooting)

# Creating and managing a Lifecycle configuration for your directory bucket
<a name="directory-bucket-create-lc"></a>

You can create a lifecycle configuration for directory buckets by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, and the Amazon S3 REST API.

## Using the AWS CLI
<a name="set-lifecycle-config-cli"></a>

You can use the following AWS CLI commands to manage S3 Lifecycle configurations:
+ `put-bucket-lifecycle-configuration`
+ `get-bucket-lifecycle-configuration`
+ `delete-bucket-lifecycle`

For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

The Amazon S3 Lifecycle configuration is an XML file. However, when you're using the AWS CLI, you can't specify the XML format; you must specify the JSON format instead. The following are an example XML lifecycle configuration and the equivalent JSON configuration that you can specify in an AWS CLI command.

The following AWS CLI example puts a lifecycle configuration on a directory bucket. This configuration specifies that all objects that have the `myprefix/` prefix and fall within the defined object size range expire after 7 days. To use this example, replace each *user input placeholder* with your own information.

Save the lifecycle configuration to a JSON file. In this example, the file is named `lc-policy.json`.

**Example**  

```
{
    "Rules": [
        {
        "Expiration": {
            "Days": 7
        },
        "ID": "Lifecycle expiration rule",
        "Filter": {
            "And": {
                "Prefix": "myprefix/",
                "ObjectSizeGreaterThan": 500,
                "ObjectSizeLessThan": 64000
            }
        },
        "Status": "Enabled"
    }
    ]
}
```
Submit the JSON file as part of the `put-bucket-lifecycle-configuration` CLI command. To use this command, replace each *user input placeholder* with your own information.  

```
aws s3api put-bucket-lifecycle-configuration --region us-west-2 --profile default --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 --lifecycle-configuration file://lc-policy.json --checksum-algorithm crc32c 
```

**Example – Equivalent XML lifecycle configuration**  

```
<LifecycleConfiguration>
    <Rule>
        <ID>Lifecycle expiration rule</ID>
        <Filter>
            <And>
                <Prefix>myprefix/</Prefix>
                <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
                <ObjectSizeLessThan>64000</ObjectSizeLessThan>
            </And>
        </Filter>
        <Status>Enabled</Status>     
        <Expiration>
             <Days>7</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>
```
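
The mapping between the two formats is mechanical. The following sketch (standard-library Python; the element names come from the XML example above) converts this XML shape into the JSON that `put-bucket-lifecycle-configuration` accepts. It handles only the rule shape shown here, not the full lifecycle schema:

```python
import json
import xml.etree.ElementTree as ET

XML_CONFIG = """<LifecycleConfiguration>
    <Rule>
        <ID>Lifecycle expiration rule</ID>
        <Filter>
            <And>
                <Prefix>myprefix/</Prefix>
                <ObjectSizeGreaterThan>500</ObjectSizeGreaterThan>
                <ObjectSizeLessThan>64000</ObjectSizeLessThan>
            </And>
        </Filter>
        <Status>Enabled</Status>
        <Expiration>
            <Days>7</Days>
        </Expiration>
    </Rule>
</LifecycleConfiguration>"""

def lifecycle_xml_to_json(xml_text):
    """Convert the XML rule shape above to the JSON the AWS CLI accepts."""
    root = ET.fromstring(xml_text)
    rules = []
    for rule in root.findall("Rule"):
        and_el = rule.find("Filter/And")
        rules.append({
            "Expiration": {"Days": int(rule.findtext("Expiration/Days"))},
            "ID": rule.findtext("ID"),
            "Filter": {
                "And": {
                    "Prefix": and_el.findtext("Prefix"),
                    "ObjectSizeGreaterThan": int(and_el.findtext("ObjectSizeGreaterThan")),
                    "ObjectSizeLessThan": int(and_el.findtext("ObjectSizeLessThan")),
                }
            },
            "Status": rule.findtext("Status"),
        })
    return {"Rules": rules}

print(json.dumps(lifecycle_xml_to_json(XML_CONFIG), indent=4))
```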

## Using the AWS SDKs
<a name="directory-bucket-upload-sdks"></a>

------
#### [ SDK for Java ]

**Example**  

```
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationRequest;
import software.amazon.awssdk.services.s3.model.PutBucketLifecycleConfigurationResponse;
import software.amazon.awssdk.services.s3.model.ChecksumAlgorithm;
import software.amazon.awssdk.services.s3.model.BucketLifecycleConfiguration;
import software.amazon.awssdk.services.s3.model.LifecycleRule;
import software.amazon.awssdk.services.s3.model.LifecycleRuleFilter;
import software.amazon.awssdk.services.s3.model.LifecycleExpiration;
import software.amazon.awssdk.services.s3.model.LifecycleRuleAndOperator;
import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationResponse;
import software.amazon.awssdk.services.s3.model.GetBucketLifecycleConfigurationRequest;
import software.amazon.awssdk.services.s3.model.DeleteBucketLifecycleRequest;
import software.amazon.awssdk.services.s3.model.DeleteBucketLifecycleResponse;
import software.amazon.awssdk.services.s3.model.AbortIncompleteMultipartUpload;

// PUT a Lifecycle policy
LifecycleRuleFilter objectExpirationFilter = LifecycleRuleFilter.builder()
        .and(LifecycleRuleAndOperator.builder()
                .prefix("dir1/")
                .objectSizeGreaterThan(3L)
                .objectSizeLessThan(20L)
                .build())
        .build();
LifecycleRuleFilter mpuExpirationFilter = LifecycleRuleFilter.builder()
        .prefix("dir2/")
        .build();

LifecycleRule objectExpirationRule = LifecycleRule.builder()
        .id("lc")
        .filter(objectExpirationFilter)
        .status("Enabled")
        .expiration(LifecycleExpiration.builder()
                .days(10)
                .build())
        .build();
LifecycleRule mpuExpirationRule = LifecycleRule.builder()
        .id("lc-mpu")
        .filter(mpuExpirationFilter)
        .status("Enabled")
        .abortIncompleteMultipartUpload(AbortIncompleteMultipartUpload.builder()
                .daysAfterInitiation(10)
                .build())
        .build();
        
PutBucketLifecycleConfigurationRequest putLifecycleRequest = PutBucketLifecycleConfigurationRequest.builder()
                .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
                .checksumAlgorithm(ChecksumAlgorithm.CRC32)
                .lifecycleConfiguration(
                        BucketLifecycleConfiguration.builder()
                                .rules(objectExpirationRule, mpuExpirationRule)
                                .build()
                ).build();

PutBucketLifecycleConfigurationResponse resp = client.putBucketLifecycleConfiguration(putLifecycleRequest);

// GET the Lifecycle policy 
GetBucketLifecycleConfigurationResponse getResp = client.getBucketLifecycleConfiguration(GetBucketLifecycleConfigurationRequest.builder().bucket("amzn-s3-demo-bucket--usw2-az1--x-s3").build());

// DELETE the Lifecycle policy
DeleteBucketLifecycleResponse delResp = client.deleteBucketLifecycle(DeleteBucketLifecycleRequest.builder().bucket("amzn-s3-demo-bucket--usw2-az1--x-s3").build());
```

------
#### [ SDK for Go ]

**Example**  

```
package main

import (
    "context"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/aws/aws-sdk-go-v2/service/s3/types"
)
// PUT a Lifecycle policy
func putBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    lifecycleConfig := &s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucketName),
        LifecycleConfiguration: &types.BucketLifecycleConfiguration{
            Rules: []types.LifecycleRule{
                {
                    ID:     aws.String("lc"),
                    Filter: &types.LifecycleRuleFilter{
                        And: &types.LifecycleRuleAndOperator{
                            Prefix: aws.String("foo/"), 
                            ObjectSizeGreaterThan: aws.Int64(1000000), 
                            ObjectSizeLessThan:    aws.Int64(100000000), 
                            },
                        },
                    
                    Status: types.ExpirationStatusEnabled,
                    Expiration: &types.LifecycleExpiration{
                        Days: aws.Int32(int32(1)), 
                    },
                },
                {
                    ID:     aws.String("abortmpu"),
                    Filter: &types.LifecycleRuleFilter{
                        Prefix: aws.String("bar/"), 
                    },
                    Status: types.ExpirationStatusEnabled,
                    AbortIncompleteMultipartUpload: &types.AbortIncompleteMultipartUpload{
                        DaysAfterInitiation: aws.Int32(int32(5)), 
                    },
                },
            },
        },
    }
    _, err := client.PutBucketLifecycleConfiguration(context.Background(), lifecycleConfig)
    return err
}
// Get the Lifecycle policy
func getBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    getLifecycleConfig := &s3.GetBucketLifecycleConfigurationInput{
        Bucket: aws.String(bucketName),
    }

    resp, err := client.GetBucketLifecycleConfiguration(context.Background(), getLifecycleConfig)
    if err != nil {
        return err
    }
    // Use the response so the variable isn't unused; report the rule count.
    log.Printf("lifecycle configuration has %d rule(s)", len(resp.Rules))
    return nil
}
// Delete the Lifecycle policy
func deleteBucketLifecycleConfiguration(client *s3.Client, bucketName string) error {
    deleteLifecycleConfig := &s3.DeleteBucketLifecycleInput{
        Bucket: aws.String(bucketName),
    }
    _, err := client.DeleteBucketLifecycle(context.Background(), deleteLifecycleConfig)
    return err
}
func main() {
    cfg, err := config.LoadDefaultConfig(context.Background(), config.WithRegion("us-west-2")) // Specify your region here
    if err != nil {
        log.Fatalf("unable to load SDK config, %v", err)
    }
    s3Client := s3.NewFromConfig(cfg)
    bucketName := "amzn-s3-demo-bucket--usw2-az1--x-s3" 
    putBucketLifecycleConfiguration(s3Client, bucketName)
    getBucketLifecycleConfiguration(s3Client, bucketName)
    deleteBucketLifecycleConfiguration(s3Client, bucketName)
    getBucketLifecycleConfiguration(s3Client, bucketName)
}
```

------
#### [ SDK for .NET ]

**Example**  

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class LifecycleTest
    {
        private const string bucketName = "amzn-s3-demo-bucket--usw2-az1--x-s3";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 client;
        public static void Main()
        {
            client = new AmazonS3Client(bucketRegion);
            AddUpdateDeleteLifecycleConfigAsync().Wait();
        }

        private static async Task AddUpdateDeleteLifecycleConfigAsync()
        {
            try
            {
                var lifeCycleConfiguration = new LifecycleConfiguration()
                {
                    Rules = new List <LifecycleRule>
                        {
                            new LifecycleRule
                            {
                                 Id = "delete rule",
                                  Filter = new LifecycleFilter()
                                 {
                                     LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                                     {
                                         Prefix = "projectdocs/"
                                     }
                                 },
                                 Status = LifecycleRuleStatus.Enabled,
                                 Expiration = new LifecycleRuleExpiration()
                                 {
                                       Days = 10
                                 }
                            }
                        }
                };

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Retrieve an existing configuration. 
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

                // Add a new rule.
                lifeCycleConfiguration.Rules.Add(new LifecycleRule
                {
                    Id = "mpu abort rule",
                    Filter = new LifecycleFilter()
                    {
                        LifecycleFilterPredicate = new LifecyclePrefixPredicate()
                        {
                            Prefix = "YearlyDocuments/"
                        }
                    },
                    Expiration = new LifecycleRuleExpiration()
                    {
                        Days = 10
                    },
                    AbortIncompleteMultipartUpload = new LifecycleRuleAbortIncompleteMultipartUpload()
                    {
                        DaysAfterInitiation = 10
                    }
                });

                // Add the configuration to the bucket. 
                await AddExampleLifecycleConfigAsync(client, lifeCycleConfiguration);

                // Verify that there are now two rules.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);
                Console.WriteLine("Expected # of rules=2; found: {0}", lifeCycleConfiguration.Rules.Count);

                // Delete the configuration.
                await RemoveLifecycleConfigAsync(client);

                // Retrieve a nonexistent configuration.
                lifeCycleConfiguration = await RetrieveLifecycleConfigAsync(client);

            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("S3 error encountered. Message:'{0}' when updating the lifecycle configuration", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown error encountered on server. Message:'{0}' when updating the lifecycle configuration", e.Message);
            }
        }

        static async Task AddExampleLifecycleConfigAsync(IAmazonS3 client, LifecycleConfiguration configuration)
        {

            PutLifecycleConfigurationRequest request = new PutLifecycleConfigurationRequest
            {
                BucketName = bucketName,
                Configuration = configuration
            };
            var response = await client.PutLifecycleConfigurationAsync(request);
        }

        static async Task <LifecycleConfiguration> RetrieveLifecycleConfigAsync(IAmazonS3 client)
        {
            GetLifecycleConfigurationRequest request = new GetLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            var response = await client.GetLifecycleConfigurationAsync(request);
            var configuration = response.Configuration;
            return configuration;
        }

        static async Task RemoveLifecycleConfigAsync(IAmazonS3 client)
        {
            DeleteLifecycleConfigurationRequest request = new DeleteLifecycleConfigurationRequest
            {
                BucketName = bucketName
            };
            await client.DeleteLifecycleConfigurationAsync(request);
        }
    }
}
```

------
#### [ SDK for Python ]

**Example**  

```
import boto3

client = boto3.client("s3", region_name="us-west-2")
bucket_name = 'amzn-s3-demo-bucket--usw2-az1--x-s3'

client.put_bucket_lifecycle_configuration(
    Bucket=bucket_name,
    ChecksumAlgorithm='CRC32',
    LifecycleConfiguration={
        'Rules': [
            {
                'ID': 'lc',
                'Filter': {
                    'And': {
                        'Prefix': 'foo/',
                        'ObjectSizeGreaterThan': 1000000,
                        'ObjectSizeLessThan': 100000000,
                    }
                },
                'Status': 'Enabled',
                'Expiration': {
                    'Days': 1
                }
            },
            {
                'ID': 'abortmpu',
                'Filter': {
                    'Prefix': 'bar/'
                },
                'Status': 'Enabled',
                'AbortIncompleteMultipartUpload': {
                    'DaysAfterInitiation': 5
                }
            }
        ]
    }
)

result = client.get_bucket_lifecycle_configuration(
    Bucket=bucket_name
)

client.delete_bucket_lifecycle(
    Bucket=bucket_name
)
```

------

# Troubleshooting S3 Lifecycle issues for directory buckets
<a name="directory-buckets-lifecycle-troubleshooting"></a>

**Topics**
+ [I set up my lifecycle configuration but objects in my directory bucket are not expiring](#troubleshoot-directory-bucket-lifecycle-1)
+ [How do I monitor the actions taken by my lifecycle rules?](#troubleshoot-directory-bucket-lifecycle-2)

## I set up my lifecycle configuration but objects in my directory bucket are not expiring
<a name="troubleshoot-directory-bucket-lifecycle-1"></a>

S3 Lifecycle for directory buckets uses public API operations to delete objects in S3 Express One Zone. To use object-level public API operations, you must grant permission to `CreateSession` and allow S3 Lifecycle to delete your objects. If an active policy denies deletes, S3 Lifecycle can't delete objects on your behalf.

It's important to configure your bucket policies correctly to ensure that the objects that you want to delete are eligible for expiration. You can check your AWS CloudTrail logs for `AccessDenied` errors on `CreateSession` API invocations to verify whether access has been denied. Checking your CloudTrail logs can help you troubleshoot access issues and identify the root cause of access denied errors. You can then fix incorrect access controls by updating the relevant policies. 

If you confirm that your bucket policies are set correctly and you are still experiencing issues, we recommend that you review the lifecycle rules to ensure that they are applied to the right subset of objects. 
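
One way to narrow down a policy problem is to scan the bucket policy for explicit `Deny` statements whose actions cover `s3express:CreateSession`. The following sketch (plain Python over a policy document; it checks actions only, not principals, resources, or conditions, so treat it as a first-pass heuristic) illustrates the idea:

```python
import json

# Hypothetical bucket policy used for illustration only.
SAMPLE_POLICY = """{
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "BlockDeletes", "Effect": "Deny",
         "Principal": "*", "Action": "s3express:CreateSession",
         "Resource": "*"},
        {"Sid": "AllowAll", "Effect": "Allow",
         "Principal": "*", "Action": "s3:*", "Resource": "*"}
    ]
}"""

def statements_blocking_create_session(policy_json):
    """Return Sids of Deny statements whose actions cover s3express:CreateSession."""
    blocking = []
    for stmt in json.loads(policy_json).get("Statement", []):
        if stmt.get("Effect") != "Deny":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if any(a in ("s3express:CreateSession", "s3express:*", "*") for a in actions):
            blocking.append(stmt.get("Sid", "<no Sid>"))
    return blocking

print(statements_blocking_create_session(SAMPLE_POLICY))
# ['BlockDeletes']
```

A statement flagged this way is a candidate for blocking S3 Lifecycle; confirm against your CloudTrail logs before changing the policy.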

## How do I monitor the actions taken by my lifecycle rules?
<a name="troubleshoot-directory-bucket-lifecycle-2"></a>

 You can use AWS CloudTrail data event logs to monitor actions taken by S3 Lifecycle in directory buckets. For more information, see [CloudTrail log file examples](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html).

# Using Batch Operations with directory buckets
<a name="directory-buckets-objects-Batch-Ops"></a>

You can use Amazon S3 Batch Operations to perform operations on objects stored in S3 buckets. To learn more about S3 Batch Operations, see [Performing large-scale batch operations on Amazon S3 objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops.html).

The following topics discuss performing batch operations on objects stored in the S3 Express One Zone storage class in directory buckets.

**Topics**
+ [Using Batch Operations with directory buckets](#UsingBOPsDirectoryBuckets)
+ [Key differences](#UsingBOPsDirectoryBucketsKeyDiffs)

## Using Batch Operations with directory buckets
<a name="UsingBOPsDirectoryBuckets"></a>

You can perform the **Copy** and **Invoke AWS Lambda function** operations on objects that are stored in directory buckets. With **Copy**, you can copy objects between buckets of the same type (for example, from a directory bucket to a directory bucket). You can also copy between general purpose buckets and directory buckets. With **Invoke AWS Lambda function**, you can use a Lambda function to perform actions on objects in your directory bucket with code that you define. 

**Copying objects**  
You can copy between the same bucket type or between directory buckets and general purpose buckets. When you copy to a directory bucket, you must use the correct Amazon Resource Name (ARN) format for this bucket type. The ARN format for a directory bucket is `arn:aws:s3express:region:account-id:bucket/bucket-base-name--x-s3`. 
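As a sketch, the ARN can be composed from its parts. The helper name below is hypothetical, and it assumes the fully qualified bucket name, which includes a zone ID as shown in the bucket names used in the examples later in this section:

```python
def directory_bucket_arn(region: str, account_id: str, base_name: str, zone_id: str) -> str:
    """Compose a directory bucket ARN; bucket names carry the --zone-id--x-s3 suffix."""
    return f"arn:aws:s3express:{region}:{account_id}:bucket/{base_name}--{zone_id}--x-s3"

# Hypothetical account and bucket, for illustration only:
arn = directory_bucket_arn("us-west-2", "111122223333", "amzn-s3-demo-bucket", "usw2-az1")
```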

**Note**  
Copying objects across different AWS Regions isn't supported when the source or destination bucket is in an AWS Local Zone. The source and destination buckets must have the same parent AWS Region. The source and destination buckets can be different bucket location types (Availability Zone or Local Zone).

You can also populate your directory bucket with data by using the **Import** action in the S3 console. **Import** is a streamlined method for creating Batch Operations jobs to copy objects from general purpose buckets to directory buckets. For **Import** copy jobs from general purpose buckets to directory buckets, S3 automatically generates a manifest. For more information, see [Importing objects to a directory bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job.html) and [Specifying a manifest](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specify-batchjob-manifest.html).

**Invoking Lambda functions (`LambdaInvoke`)**  
There are special requirements for using Batch Operations to invoke Lambda functions that act on directory buckets. For example, you must structure your Lambda request by using a v2 JSON invocation schema and specify an `InvocationSchemaVersion` of `2.0` when you create the job. For more information, see [Invoke AWS Lambda function](https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-invoke-lambda.html).

## Key differences
<a name="UsingBOPsDirectoryBucketsKeyDiffs"></a>

The following is a list of key differences when you're using Batch Operations to perform bulk operations on objects that are stored in directory buckets with the S3 Express One Zone storage class:
+ For directory buckets, SSE-S3 and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) are supported. If you make a `CopyObject` request that specifies to use server-side encryption with customer-provided keys (SSE-C) on a directory bucket (source or destination), the response returns an HTTP `400 (Bad Request)` error. 

  We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets and how to encrypt new object copies in a directory bucket with SSE-KMS, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

  S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [the Copy operation in Batch Operations](#directory-buckets-objects-Batch-Ops). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information about using SSE-KMS on directory buckets, see [Setting and monitoring default encryption for directory buckets](s3-express-bucket-encryption.md) and [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).
+ Objects in directory buckets can't be tagged. You can only specify an empty tag set. By default, Batch Operations copies tags. If you copy an object that has tags between general purpose buckets and directory buckets, you receive a `501 (Not Implemented)` response.
+ S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during uploads or downloads. You can select one of the following Secure Hash Algorithm (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, or SHA-256. MD5-based checksums aren't supported with the S3 Express One Zone storage class. 
+ By default, all Amazon S3 buckets set the S3 Object Ownership setting to bucket owner enforced and access control lists (ACLs) are disabled. For directory buckets, this setting can't be modified. You can copy an object from general purpose buckets to directory buckets. However, you can't overwrite the default ACL when you copy to or from a directory bucket. 
+ Regardless of how you specify your manifest, the list itself must be stored in a general purpose bucket. Batch Operations can't import existing manifests from (or save generated manifests to) directory buckets. However, objects described within the manifest can be stored in directory buckets. 
+ Batch Operations can't specify a directory bucket as a location in an S3 Inventory report. Inventory reports don't support directory buckets. You can create a manifest file for objects within a directory bucket by using the `ListObjectsV2` API operation to list the objects. You can then insert the list in a CSV file.

### Granting access
<a name="BOPsAccess"></a>

 To perform copy jobs, you must have the following permissions:
+ To copy objects from one directory bucket to another directory bucket, you must have the `s3express:CreateSession` permission.
+ To copy objects from directory buckets to general purpose buckets, you must have the `s3express:CreateSession` permission and the `s3:PutObject` permission to write the object copy to the destination bucket. 
+ To copy objects from general purpose buckets to directory buckets, you must have the `s3express:CreateSession` permission and the `s3:GetObject` permission to read the source object that is being copied. 

   For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.
+ To invoke a Lambda function, you must grant permissions to your resource based on your Lambda function. To determine which permissions are required, check the corresponding API operations. 
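As a hedged sketch of the permissions described above, the following builds a minimal identity-based policy that grants `s3express:CreateSession` on one directory bucket. The account ID, Region, and bucket name are placeholders, and your jobs may need additional statements (for example, `s3:GetObject` or `s3:PutObject` on the general purpose bucket):

```python
import json

# Minimal identity-based policy granting s3express:CreateSession on one
# directory bucket. Account ID, Region, and bucket name are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateSession",
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3",
        }
    ],
}

policy_json = json.dumps(policy, indent=2)
```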

# Appending data to objects in directory buckets
<a name="directory-buckets-objects-append"></a>

You can add data to the end of existing objects stored in the S3 Express One Zone storage class in directory buckets. We recommend that you append data to an object if the data is written continuously over a period of time or if you need to read the object while you are writing to it. Appending data to objects is common for use cases such as adding new log entries to log files or adding new video segments to video files as they are transcoded and then streamed. By appending data to objects, you can simplify applications that previously combined data in local storage before copying the final object to Amazon S3.

There is no minimum size requirement for the data that you can append to an object. However, the maximum size of the data that you can append to an object in a single request is 5 GB. This is the same limit as the largest request size when you upload data by using any Amazon S3 API operation.

With each successful append operation, you create a part of the object, and each object can have up to 10,000 parts. This means you can append data to an object up to 10,000 times. If an object is created by using S3 multipart upload, each uploaded part counts toward the total maximum of 10,000 parts. For example, you can append up to 9,000 times to an object created by a multipart upload comprising 1,000 parts.

**Note**  
 If you reach the part limit, you receive a [TooManyParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject_Errors) error. You can use the `CopyObject` API operation to reset the part count.
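The part arithmetic above, plus the `CopyObject` reset, can be sketched as follows. The helper names are illustrative, and the reset function assumes configured AWS credentials; depending on your workflow, you might instead copy to a new key:

```python
MAX_PARTS = 10_000  # each append operation (or uploaded part) consumes one part


def appends_remaining(existing_parts: int) -> int:
    """Return how many more append operations an object can accept."""
    return MAX_PARTS - existing_parts


def reset_part_count(bucket: str, key: str) -> None:
    """Copy the object over itself so that it becomes a single-part object again.

    Sketch only: assumes configured credentials and CreateSession access.
    """
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    s3.copy_object(Bucket=bucket, Key=key, CopySource={"Bucket": bucket, "Key": key})


# An object created by a 1,000-part multipart upload can be appended to 9,000 more times:
remaining = appends_remaining(1_000)
```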

 If you want to upload parts to an object in parallel and you don’t need to read the parts while the parts are being uploaded, we recommend that you use Amazon S3 multipart upload. For more information, see [Using multipart upload](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-using-multipart-upload.html).

Appending data to objects is only supported for objects in directory buckets that are stored in the S3 Express One Zone storage class. For more information about S3 Express One Zone, see [Getting started with S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-getting-started.html).

To get started appending data to objects in your directory buckets, you can use the AWS SDKs, the AWS CLI, or the `PutObject` API operation. When you make a `PutObject` request, you set the `x-amz-write-offset-bytes` header to the current size of the object that you are appending to. To use the `PutObject` API operation, you must use the `CreateSession` API to establish temporary security credentials to access the objects in your directory buckets. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) and [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. 

Each successful append operation is billed as a `PutObject` request. To learn more about pricing, see [https://aws.amazon.com/s3/pricing/](https://aws.amazon.com/s3/pricing/). 

**Note**  
Starting with the 1.12 release, Mountpoint for Amazon S3 supports appending data to objects stored in S3 Express One Zone. To get started, you must opt in by setting the `--incremental-upload` flag. For more information about Mountpoint, see [Working with Mountpoint](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html). 

 If you use a CRC (Cyclic Redundancy Check) algorithm while uploading the appended data, you can retrieve full object CRC-based checksums by using the `HeadObject` or `GetObject` request. If you use the SHA-1 or SHA-256 algorithm while uploading your appended data, you can retrieve a checksum of the appended parts and verify their integrity by using the SHA checksums returned on prior `PutObject` responses. For more information, see [Data protection and encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-data-protection.html). 
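As a hedged sketch, a full-object CRC can be read back with a `HeadObject` call that enables checksum mode. This assumes that the appended parts were written with CRC32 and that Boto3 credentials are configured; the function name is illustrative:

```python
def get_full_object_crc32(bucket: str, key: str):
    """Return the full-object CRC32 checksum (base64-encoded), if present.

    Sketch only: assumes the object's parts were uploaded with CRC32.
    """
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    response = s3.head_object(Bucket=bucket, Key=key, ChecksumMode="ENABLED")
    return response.get("ChecksumCRC32")
```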

## Appending data to your objects by using the AWS CLI, AWS SDKs, and the REST API
<a name="directory-bucket-append"></a>

You can append data to your objects by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, and the REST API.

### Using the AWS CLI
<a name="set-append--cli"></a>

The following `put-object` example command shows how you can use the AWS CLI to append data to an object. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api put-object --bucket amzn-s3-demo-bucket--azid--x-s3 --key sampleinput/file001.bin --body bucket-seed/file001.bin --write-offset-bytes size-of-sampleinput/file001.bin
```

### Using the AWS SDKs
<a name="directory-bucket-append-sdks"></a>

------
#### [ SDK for Java ]

You can use the AWS SDK for Java to append data to your objects. 

```
var putObjectRequest = PutObjectRequest.builder()
                                       .key(key)
                                       .bucket(bucketName)
                                       .writeOffsetBytes(9L) // offset equals the current size of the object, in bytes
                                       .build();
var response = s3Client.putObject(putObjectRequest, RequestBody.fromString("appended data"));
```

------
#### [ SDK for Python ]

```
s3.put_object(Bucket='amzn-s3-demo-bucket--use2-az2--x-s3', Key='2024-11-05-sdk-test', Body=b'123456789', WriteOffsetBytes=9)
```

------

### Using the REST API
<a name="directory-bucket-append-api"></a>

 You can send REST requests to append data to an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html#API_PutObject). 

# Renaming objects in directory buckets
<a name="directory-buckets-objects-rename"></a>

Using the `RenameObject` operation, you can atomically rename an existing object in a directory bucket that uses the S3 Express One Zone storage class, without any data movement. You can rename an object by specifying the existing object’s name as the source and the new name of the object as the destination within the same directory bucket. The `RenameObject` API operation will not succeed on objects that end with the slash (`/`) delimiter character. For more information, see [Naming Amazon S3 objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-keys.html).

The `RenameObject` operation is typically completed in milliseconds regardless of the size of the object. This capability accelerates applications like log file management, media processing, and data analytics. Additionally, `RenameObject` preserves all object metadata properties, including the storage class, encryption type, creation date, last modified date, and checksum properties.

**Note**  
`RenameObject` is only supported for objects stored in the S3 Express One Zone storage class.

 To grant access to the `RenameObject` operation, we recommend that you use the `CreateSession` operation for session-based authorization. Specifically, you grant the `s3express:CreateSession` permission to the directory bucket in a bucket policy or an identity-based policy. Then, you make the `CreateSession` API call on the directory bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another `CreateSession` API call to generate a new session token. The AWS CLI and AWS SDKs create and manage your session, including automatically refreshing the session token to avoid service interruptions when a session expires. For more information about authorization, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. To learn more about Zonal endpoint API operations, see [Authorizing Zonal endpoint API operations with `CreateSession`](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html). 

 If you don't want to overwrite an existing object, you can add the `If-None-Match` conditional header with the value `*` in the `RenameObject` request. Amazon S3 returns a `412 Precondition Failed` error if the object name already exists. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html) in the *Amazon S3 API Reference*. 

 `RenameObject` is a Zonal endpoint API operation (object-level or data plane operation) that is logged to AWS CloudTrail. You can use CloudTrail to gather information on the `RenameObject` operation performed on your objects in directory buckets. For more information, see [Logging with AWS CloudTrail for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html) and [CloudTrail log file examples for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-log-files.html). 

S3 Express One Zone is the only storage class that supports `RenameObject`, which is priced the same as `PUT`, `COPY`, `POST`, and `LIST` requests (per 1,000 requests) in S3 Express One Zone. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

## Renaming an object
<a name="directory-bucket-rename"></a>

To rename an object in your directory bucket, you can use the Amazon S3 console, the AWS CLI, the AWS SDKs, the REST API, or Mountpoint for Amazon S3 (version 1.19.0 or later).

### Using the S3 console
<a name="set-rename--console"></a>

**To rename an object in a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation pane, choose **Buckets**, and then choose the **Directory buckets** tab. Navigate to the Amazon S3 directory bucket that contains the object that you want to rename.

1. Select the check box for the object that you want to rename.

1. On the **Actions** menu, choose **Rename object**.

1. In the **New object name** box, enter the new name for the object.
**Note**  
If you specify the same object name as an existing object, the operation will fail and Amazon S3 returns a `412 Precondition Failed` error. The object key name length can't exceed 1,024 bytes. Prefixes included in the object name count toward the total length. 

1. Choose **Rename object**. Amazon S3 renames your object. 

### Using the AWS CLI
<a name="set-rename--cli"></a>

The following `rename-object` examples show how you can use the AWS CLI to rename an object. To run these commands, replace the *user input placeholders* with your own information.

The following example shows how to rename an object with a conditional check on the source object's ETag. 

```
aws s3api rename-object \
    --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 \
    --key new-file.txt \
    --rename-source original-file.txt \
    --source-if-match "\"a1b7c3d2e5f6\""
```

This command does the following:
+ Renames an object from *original-file.txt* to *new-file.txt* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Only performs the rename if the source object's ETag matches "*a1b7c3d2e5f6*".

If the ETag doesn't match, the operation will fail with a `412 Precondition Failed` error. 

The following example shows how to rename an object with a conditional check on the new specified object name.

```
aws s3api rename-object \
    --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 \
    --key new-file.txt \
    --rename-source amzn-s3-demo-bucket--usw2-az1--x-s3/original-file.txt \
    --destination-if-none-match "\"e5f3g7h8i9j0\""
```

This command does the following:
+ Renames an object from *original-file.txt* to *new-file.txt* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Only performs the rename operation if no existing object named *new-file.txt* has the ETag "*e5f3g7h8i9j0*".

If an object already exists with the new specified name and the matching ETag, the operation will fail with a `412 Precondition Failed` error. 

### Using the AWS SDKs
<a name="directory-bucket-rename-sdks"></a>

------
#### [ SDK for Java ]

You can use the AWS SDK for Java to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to create a `RenameObjectRequest` by using the AWS SDK for Java.

```
String key = "key";
String newKey = "new-key";
String expectedETag = "e5f3g7h8i9j0";
RenameObjectRequest renameRequest = RenameObjectRequest.builder()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
    .key(newKey)
    .renameSource(key)
    .destinationIfMatch(expectedETag)
    .build();
```

This code does the following:
+ Creates a request to rename an object from "*key*" to "*new-key*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Includes a condition that the rename will only occur if the object's ETag matches "*e5f3g7h8i9j0*". 
+ If the ETag doesn't match or the object doesn't exist, the operation will fail.

The following example shows how to create a `RenameObjectRequest` with a none-match condition using the AWS SDK for Java.

```
String key = "key";
String newKey = "new-key";
String noneMatchETag = "e5f3g7h8i9j0";
RenameObjectRequest renameRequest = RenameObjectRequest.builder()
    .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
    .key(newKey)
    .renameSource(key)
    .destinationIfNoneMatch(noneMatchETag)
    .build();
```

This code does the following:
+ Creates a request to rename an object from "*key*" to "*new-key*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Includes a condition using `.destinationIfNoneMatch(noneMatchETag)` that ensures the rename will only occur if the destination object's ETag doesn't match "*e5f3g7h8i9j0*".

The operation will fail with a `412 Precondition Failed` error if an object exists with the new specified name and has the specified ETag. 

------
#### [ SDK for Python ]

You can use the SDK for Python to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to rename an object using the AWS SDK for Python (Boto3).

```
# Assumes s3 = boto3.client('s3') and from botocore.exceptions import ClientError
def basic_rename(bucket, source_key, destination_key):
    try:
        s3.rename_object(
            Bucket=bucket,
            Key=destination_key,
            RenameSource=source_key
        )
        print(f"Successfully renamed {source_key} to {destination_key}")
    except ClientError as e:
        print(f"Error renaming object: {e}")
```

This code does the following:
+ Renames an object from *source\_key* to *destination\_key* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Prints a success message if the renaming of your object is successful or prints an error message if it fails.

The following example demonstrates how to rename an object with the `SourceIfMatch` and `DestinationIfNoneMatch` conditions by using the AWS SDK for Python (Boto3).

```
# Assumes s3 = boto3.client('s3') and from botocore.exceptions import ClientError
def rename_with_conditions(bucket, source_key, destination_key, source_etag, dest_etag):
    try:
        s3.rename_object(
            Bucket=bucket,
            Key=destination_key,
            RenameSource=f"{bucket}/{source_key}",
            SourceIfMatch=source_etag,
            DestinationIfNoneMatch=dest_etag
        )
        print(f"Successfully renamed {source_key} to {destination_key} with conditions")
    except ClientError as e:
        print(f"Error renaming object: {e}")
```

This code does the following:
+ Performs a conditional rename operation and applies two conditions, `SourceIfMatch` and `DestinationIfNoneMatch`. The combination of these conditions ensures that the object hasn't been modified and that an object doesn't already exist with the new specified name. 
+ Renames an object from *source\_key* to *destination\_key* in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Prints a success message if the renaming of your object is successful, or prints an error message if it fails or if conditions aren't met.

------
#### [ SDK for Rust ]

You can use the SDK for Rust to rename your objects. To use these examples, replace the *user input placeholders* with your own information.

The following example demonstrates how to rename an object in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket using the SDK for Rust.

```
async fn basic_rename_example(client: &Client) -> Result<(), Box<dyn Error>> {
    let _response = client
        .rename_object()
        .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3")
        .key("new-name.txt")  // New name/path for the object
        .rename_source("old-name.txt")  // Original object name/path
        .send()
        .await?;
    Ok(())
}
```

This code does the following:
+ Creates a request to rename an object from "*old-name.txt*" to "*new-name.txt*" in the *amzn-s3-demo-bucket--usw2-az1--x-s3* directory bucket.
+ Returns a `Result` type to handle potential errors. 

------

### Using the REST API
<a name="directory-bucket-rename-api"></a>

 You can send REST requests to rename an object. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html) in the *Amazon S3 API Reference*. 

### Using Mountpoint for Amazon S3
<a name="directory-bucket-rename-mountpoint"></a>

 Starting with version 1.19.0, Mountpoint for Amazon S3 supports renaming objects in S3 Express One Zone. For more information about Mountpoint, see [Working with Mountpoint](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint.html).

# Uploading objects to a directory bucket
<a name="directory-buckets-objects-upload"></a>

After you create an Amazon S3 directory bucket, you can upload objects to it. The following examples show how to upload an object to a directory bucket by using the S3 console and the AWS SDKs. For information about bulk object upload operations with S3 Express One Zone, see [Object management](directory-bucket-high-performance.md#s3-express-features-object-management). 

## Using the S3 console
<a name="directory-bucket-upload-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the name of the bucket that you want to upload your folders or files to.

1. In the **Objects** list, choose **Upload**.

1. On the **Upload** page, do one of the following: 
   + Drag and drop files and folders to the dotted upload area.
   + Choose **Add files** or **Add folder**, choose the files or folders to upload, and then choose **Open** or **Upload**.

1. Under **Checksums**, choose the **Checksum function** that you want to use. 

   (Optional) If you're uploading a single object that's less than 16 MB in size, you can also specify a precalculated checksum value. When you provide a precalculated value, Amazon S3 compares it with the value that it calculates by using the selected checksum function. If the values don't match, the upload won't start. 

1. The options in the **Permissions** and **Properties** sections are automatically set to default settings and can't be modified. Block Public Access is automatically enabled, and S3 Versioning and S3 Object Lock can't be enabled for directory buckets. 

   (Optional) If you want to add metadata in key-value pairs to your objects, expand the **Properties** section, and then in the **Metadata** section, choose **Add metadata**.

1. To upload the listed files and folders, choose **Upload**.

   Amazon S3 uploads your objects and folders. When the upload is finished, you see a success message on the **Upload: status** page.
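The precalculated checksum value mentioned in the console steps above can be computed locally before the upload. The following is a minimal sketch for CRC32; Amazon S3 expects the checksum as the base64 encoding of the big-endian checksum value:

```python
import base64
import zlib


def precalculated_crc32(data: bytes) -> str:
    """Base64-encode the big-endian CRC32 of the data, in the form S3 expects."""
    crc = zlib.crc32(data) & 0xFFFFFFFF
    return base64.b64encode(crc.to_bytes(4, "big")).decode("ascii")


checksum = precalculated_crc32(b"Hello, World!")
```

You can paste the resulting value into the **Precalculated value** field; if it doesn't match what Amazon S3 calculates, the upload won't start.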

## Using the AWS SDKs
<a name="directory-bucket-upload-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
public static void putObject(S3Client s3Client, String bucketName, String objectKey, Path filePath) {
    // Using a file path to avoid loading the whole file into memory
    try {
        PutObjectRequest putObj = PutObjectRequest.builder()
                .bucket(bucketName)
                .key(objectKey)
                //.metadata(metadata)
                .build();
        s3Client.putObject(putObj, filePath);
        System.out.println("Successfully placed " + objectKey + " into bucket " + bucketName);
    } catch (S3Exception e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  

```
import boto3
import botocore
from botocore.exceptions import ClientError
    
    
def put_object(s3_client, bucket_name, key_name, object_bytes):
    """  
    Upload data to a directory bucket.
    :param s3_client: The boto3 S3 client
    :param bucket_name: The bucket that will contain the object
    :param key_name: The key of the object to be uploaded
    :param object_bytes: The data to upload
    """
    try:
        response = s3_client.put_object(Bucket=bucket_name, Key=key_name,
                                        Body=object_bytes)
        print(f"Uploaded object '{key_name}' to bucket '{bucket_name}'.")
        return response
    except ClientError:
        print(f"Couldn't upload object '{key_name}' to bucket '{bucket_name}'.")
        raise

def main():
    # Share the client session with functions and objects to benefit from S3 Express One Zone auth key
    s3_client = boto3.client('s3')
    # Directory bucket name must end with --zone-id--x-s3
    resp = put_object(s3_client, 'doc-bucket-example--use1-az5--x-s3', 'sample.txt', b'Hello, World!')
    print(resp)

if __name__ == "__main__":
    main()
```

------

## Using the AWS CLI
<a name="directory-upload-object-cli"></a>

The following `put-object` example command shows how you can use the AWS CLI to upload an object to a directory bucket. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api put-object --bucket bucket-base-name--zone-id--x-s3 --key sampleinput/file001.bin --body bucket-seed/file001.bin
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-object.html) in the *AWS CLI Command Reference*.

**Topics**
+ [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md)

# Using multipart uploads with directory buckets
<a name="s3-express-using-multipart-upload"></a>

You can use the multipart upload process to upload a single object as a set of parts. Each part is a contiguous portion of the object's data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

Using multipart upload provides the following advantages:
+ **Improved throughput** – You can upload parts in parallel to improve throughput. 
+ **Quick recovery from any network issues** – Smaller part sizes minimize the impact of restarting a failed upload because of a network error.
+ **Pause and resume object uploads** – You can upload object parts over time. After you initiate a multipart upload, there is no expiration date. You must explicitly complete or abort the multipart upload.
+ **Begin an upload before you know the final object size** – You can upload an object as you are creating it. 

We recommend that you use multipart uploads in the following ways:
+ If you're uploading large objects over a stable high-bandwidth network, use multipart uploads to maximize the use of your available bandwidth by uploading object parts in parallel for multi-threaded performance.
+ If you're uploading over a spotty network, use multipart uploads to increase resiliency to network errors by avoiding upload restarts. When using multipart uploads, you need to retry uploading only the parts that are interrupted during the upload. You don't need to restart uploading your object from the beginning.
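The retry behavior described above can be sketched with the SDK for Python (Boto3) client interface. This is a minimal sketch, not the SDK's built-in retry logic: `upload_part_with_retry` is a hypothetical helper, and the bucket, key, and upload ID you pass it are placeholders.

```python
import time

def upload_part_with_retry(s3_client, bucket, key, upload_id,
                           part_number, body, max_attempts=3, base_delay=1.0):
    """Upload a single part, retrying only this part on failure.

    Parts that already uploaded successfully are unaffected; the whole
    object never has to be restarted from the beginning.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            response = s3_client.upload_part(
                Bucket=bucket, Key=key, UploadId=upload_id,
                PartNumber=part_number, Body=body)
            # Record the part number and ETag for the complete request.
            return {'PartNumber': part_number, 'ETag': response['ETag']}
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # simple backoff
```

Because each part is independent, a transient error costs at most one part-sized retransmission.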

When you're using multipart uploads to upload objects to the Amazon S3 Express One Zone storage class in directory buckets, the multipart upload process is similar to the process of using multipart upload to upload objects to general purpose buckets. However, there are some notable differences. 

For more information about using multipart uploads to upload objects to S3 Express One Zone, see the following topics.

**Topics**
+ [The multipart upload process](#s3-express-mpu-process)
+ [Checksums with multipart upload operations](#s3-express-mpuchecksums)
+ [Concurrent multipart upload operations](#s3-express-distributedmpupload)
+ [Multipart uploads and pricing](#s3-express-mpuploadpricing)
+ [Multipart upload API operations and permissions](#s3-express-mpuAndPermissions)
+ [Examples](#directory-buckets-multipart-upload-examples)

## The multipart upload process
<a name="s3-express-mpu-process"></a>

A multipart upload is a three-step process: 
+ You initiate the upload.
+ You upload the object parts.
+ After you have uploaded all of the parts, you complete the multipart upload.



Upon receiving the complete multipart upload request, Amazon S3 constructs the object from the uploaded parts, and you can then access the object as you would any other object in your bucket. 

**Multipart upload initiation**  
When you send a request to initiate a multipart upload, Amazon S3 returns a response with an upload ID, which is a unique identifier for your multipart upload. You must include this upload ID whenever you upload parts, list the parts, complete an upload, or abort an upload. 

**Parts upload**  
When uploading a part, in addition to the upload ID, you must specify a part number. When you're using a multipart upload with S3 Express One Zone, the multipart part numbers must be consecutive part numbers. If you try to complete a multipart upload request with nonconsecutive part numbers, an `HTTP 400 Bad Request` (Invalid Part Order) error is generated. 

A part number uniquely identifies a part and its position in the object that you are uploading. If you upload a new part by using the same part number as a previously uploaded part, the previously uploaded part is overwritten. 

Whenever you upload a part, Amazon S3 returns an entity tag (ETag) header in its response. For each part upload, you must record the part number and the ETag value. The ETag values for all object part uploads will remain the same, but each part will be assigned a different part number. You must include these values in the subsequent request to complete the multipart upload.
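The bookkeeping described above can be sketched as follows, assuming a Boto3-style client: parts are numbered consecutively starting at 1, and each part number and ETag pair is recorded for the final complete request. The bucket, key, and upload ID are placeholders.

```python
def upload_parts(s3_client, bucket, key, upload_id, data,
                 part_size=5 * 1024 * 1024):
    """Upload data as consecutive parts, recording PartNumber/ETag pairs."""
    parts = []
    part_number = 1  # directory buckets require consecutive part numbers
    for offset in range(0, len(data), part_size):
        response = s3_client.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=data[offset:offset + part_size])
        # Record both values; they're required to complete the upload.
        parts.append({'PartNumber': part_number, 'ETag': response['ETag']})
        part_number += 1
    return parts
```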

Amazon S3 automatically encrypts all new objects that are uploaded to an S3 bucket. When doing a multipart upload, if you don't specify encryption information in your request, the encryption setting of the uploaded parts is set to the default encryption configuration of the destination bucket. The default encryption configuration of an Amazon S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3 managed keys (SSE-S3). For directory buckets, SSE-S3 and server-side encryption with AWS KMS keys (SSE-KMS) are supported. For more information, see [Data protection and encryption](s3-express-data-protection.md).

**Multipart upload completion**  
When you complete a multipart upload, Amazon S3 creates the object by concatenating the parts in ascending order based on the part number. After a successful *complete* request, the parts no longer exist. 

Your *complete multipart upload* request must include the upload ID and a list of both part numbers and their corresponding ETag values. The Amazon S3 response includes an ETag that uniquely identifies the combined object data. This ETag is not an MD5 hash of the object data. 

**Multipart upload listings**  
You can list the parts of a specific multipart upload or all in-progress multipart uploads. The list parts operation returns the parts information that you have uploaded for a specific multipart upload. For each list parts request, Amazon S3 returns the parts information for the specified multipart upload, up to a maximum of 1,000 parts. If there are more than 1,000 parts in the multipart upload, you must use pagination to retrieve all the parts. 

The returned list of parts doesn't include parts that haven't finished uploading. Using the *list multipart uploads* operation, you can obtain a list of multipart uploads that are in progress.

An in-progress multipart upload is an upload that you have initiated, but have not yet completed or aborted. Each request returns at most 1,000 multipart uploads. If there are more than 1,000 multipart uploads in progress, you must send additional requests to retrieve the remaining multipart uploads. Use the returned listing only for verification. Do not use the result of this listing when sending a *complete multipart upload* request. Instead, maintain your own list of the part numbers that you specified when uploading parts and the corresponding ETag values that Amazon S3 returns.
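The pagination described above can be sketched as a loop that follows `NextPartNumberMarker` until `IsTruncated` is false. This is a minimal sketch assuming a Boto3-style client; the bucket, key, and upload ID are placeholders.

```python
def list_all_parts(s3_client, bucket, key, upload_id):
    """Collect every part of a multipart upload, paginating past 1,000 parts."""
    parts = []
    marker = 0
    while True:
        page = s3_client.list_parts(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumberMarker=marker, MaxParts=1000)
        parts.extend(page.get('Parts', []))
        if not page.get('IsTruncated'):
            return parts
        # Continue from where the previous page stopped.
        marker = page['NextPartNumberMarker']
```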

For more information about multipart upload listings, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html) in the *Amazon Simple Storage Service API Reference*. 

## Checksums with multipart upload operations
<a name="s3-express-mpuchecksums"></a>

When you upload an object to a directory bucket, you can specify a checksum algorithm to check object integrity. MD5 isn't supported for directory buckets. You can specify one of the following Secure Hash Algorithm (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms:
+ CRC32 
+ CRC32C 
+ SHA-1
+ SHA-256
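As an illustration of the CRC32 option above, you can compute the same checksum locally with Python's standard library and compare it with the base64-encoded value that the API reports for a part. The input bytes here are an arbitrary example.

```python
import base64
import struct
import zlib

def crc32_b64(data: bytes) -> str:
    """CRC32 of a part, base64-encoded as in the S3 checksum response fields."""
    checksum = zlib.crc32(data) & 0xFFFFFFFF
    # S3 reports checksums as base64 of the big-endian checksum bytes.
    return base64.b64encode(struct.pack('>I', checksum)).decode('ascii')

print(crc32_b64(b'hello'))  # NhCmhg==
```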

You can use the Amazon S3 REST API or the AWS SDKs to retrieve the checksum value for individual parts by using `GetObject` or `HeadObject`. If you want to retrieve the checksum values for individual parts of multipart uploads that are still in progress, you can use `ListParts`.

**Important**  
When using the preceding checksum algorithms, the multipart part numbers must use consecutive part numbers. If you try to complete a multipart upload request with nonconsecutive part numbers, Amazon S3 generates an `HTTP 400 Bad Request` (Invalid Part Order) error.

For more information about how checksums work with multipart upload objects, see [Checking object integrity in Amazon S3](checking-object-integrity.md).

## Concurrent multipart upload operations
<a name="s3-express-distributedmpupload"></a>

In a distributed development environment, your application can initiate several updates on the same object at the same time. For example, your application might initiate several multipart uploads by using the same object key. For each of these uploads, your application can then upload parts and send a complete upload request to Amazon S3 to create the object. For S3 Express One Zone, the object creation time is the completion date of the multipart upload.

**Important**  
Versioning isn’t supported for objects that are stored in directory buckets.

## Multipart uploads and pricing
<a name="s3-express-mpuploadpricing"></a>

After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or abort the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for this multipart upload and its associated parts. If you abort the multipart upload, Amazon S3 deletes the upload artifacts and any parts that you have uploaded, and you are no longer billed for them. There are no early delete charges for deleting incomplete multipart uploads, regardless of the storage class specified. For more information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

**Important**  
If the complete multipart upload request isn't sent successfully, the object parts aren't assembled and an object isn't created. You are billed for all storage associated with uploaded parts. It's important that you either complete the multipart upload to have the object created or abort the multipart upload to remove any uploaded parts.   
Before you can delete a directory bucket, you must complete or abort all in-progress multipart uploads. Directory buckets don't support S3 Lifecycle configurations. If needed, you can list your active multipart uploads, then abort the uploads, and then delete your bucket. 
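The sequence described above (list the active uploads, then abort each one) can be sketched as follows, assuming a Boto3-style client. The bucket name is a placeholder, and pagination of the listing is omitted for brevity.

```python
def abort_all_uploads(s3_client, bucket):
    """Abort every in-progress multipart upload in a bucket.

    Uploaded parts accrue storage charges until the upload is completed
    or aborted, and a directory bucket can't be deleted while multipart
    uploads are still in progress.
    """
    response = s3_client.list_multipart_uploads(Bucket=bucket)
    uploads = response.get('Uploads', [])
    for upload in uploads:
        s3_client.abort_multipart_upload(
            Bucket=bucket, Key=upload['Key'], UploadId=upload['UploadId'])
    return len(uploads)
```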

## Multipart upload API operations and permissions
<a name="s3-express-mpuAndPermissions"></a>

To allow access to object management API operations on a directory bucket, you grant the `s3express:CreateSession` permission in a bucket policy or an AWS Identity and Access Management (IAM) identity-based policy.

You must have the necessary permissions to use the multipart upload operations. You can use bucket policies or IAM identity-based policies to grant IAM principals permissions to perform these operations. The following table lists the required permissions for various multipart upload operations. 

You can identify the initiator of a multipart upload through the `Initiator` element. If the initiator is an AWS account, this element provides the same information as the `Owner` element. If the initiator is an IAM user, this element provides the user ARN and display name.


| Action | Required permissions | 
| --- | --- | 
|  Create a multipart upload  |  To create the multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.   | 
|  Initiate a multipart upload  |  To initiate the multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.   | 
| Upload a part |  To upload a part, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to upload a part, the bucket owner must allow the initiator to perform the `s3express:CreateSession` action on the directory bucket.  | 
| Upload a part (copy) |  To upload a part, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to upload a part for an object, the owner of the bucket must allow the initiator to perform the `s3express:CreateSession` action on the object.  | 
| Complete a multipart upload |  To complete a multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  For the initiator to complete a multipart upload, the bucket owner must allow the initiator to perform the `s3express:CreateSession` action on the object.  | 
| Abort a multipart upload |  To abort a multipart upload, you must be allowed to perform the `s3express:CreateSession` action.  For the initiator to abort a multipart upload, the initiator must be explicitly allowed to perform the `s3express:CreateSession` action.   | 
| List parts |  To list the parts in a multipart upload, you must be allowed to perform the `s3express:CreateSession` action on the directory bucket.  | 
| List in-progress multipart uploads |  To list the in-progress multipart uploads to a bucket, you must be allowed to perform the `s3:ListBucketMultipartUploads` action on that bucket.  | 

### API operation support for multipart uploads
<a name="s3-express-mpu-apis"></a>

The following sections in the Amazon Simple Storage Service API Reference describe the Amazon S3 REST API operations for multipart uploads. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)

## Examples
<a name="directory-buckets-multipart-upload-examples"></a>

To use a multipart upload to upload an object to S3 Express One Zone in a directory bucket, see the following examples.

**Topics**
+ [Creating a multipart upload](#directory-buckets-multipart-upload-examples-create)
+ [Uploading the parts of a multipart upload](#directory-buckets-multipart-upload-examples-upload-part)
+ [Completing a multipart upload](#directory-buckets-multipart-upload-examples-complete)
+ [Aborting a multipart upload](#directory-buckets-multipart-upload-examples-abort)
+ [Creating a multipart upload copy operation](#directory-buckets-multipart-upload-examples-upload-part-copy)
+ [Listing in-progress multipart uploads](#directory-buckets-multipart-upload-examples-list)
+ [Listing the parts of a multipart upload](#directory-buckets-multipart-upload-examples-list-parts)

### Creating a multipart upload
<a name="directory-buckets-multipart-upload-examples-create"></a>

**Note**  
For directory buckets, when you perform a `CreateMultipartUpload` operation and an `UploadPartCopy` operation, the bucket's default encryption must use the desired encryption configuration, and the request headers you provide in the `CreateMultipartUpload` request must match the default encryption configuration of the destination bucket. 

The following examples show how to create a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-create-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
/**
 * This method creates a multipart upload request that generates a unique upload ID that is used to track
 * all the upload parts
 *
 * @param s3
 * @param bucketName - for example, 'doc-example-bucket--use1-az4--x-s3'
 * @param key
 * @return
 */
 private static String createMultipartUpload(S3Client s3, String bucketName, String key) {
 
     CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder() 
             .bucket(bucketName)
             .key(key)
             .build();
             
     String uploadId = null;
     
     try {
         CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest);
         uploadId = response.uploadId();
     }
     catch (S3Exception e) {
         System.err.println(e.awsErrorDetails().errorMessage());
         System.exit(1);
     }
     return uploadId;
 }
```

------
#### [ SDK for Python ]

**Example**  

```
def create_multipart_upload(s3_client, bucket_name, key_name):
    '''
    Create a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: The destination bucket for the multipart upload
    :param key_name: The key name for the object to be uploaded
    :return: The UploadId for the multipart upload if created successfully, else None
    '''

    try:
        mpu = s3_client.create_multipart_upload(Bucket = bucket_name, Key = key_name)
        return mpu['UploadId']
    except ClientError as e:
        logging.error(e)
        return None
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-create-cli"></a>

This example shows how to create a multipart upload to a directory bucket by using the AWS CLI. This command starts a multipart upload to the directory bucket *bucket-base-name*--*zone-id*--x-s3 for the object *KEY\_NAME*. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api create-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME
```

For more information, see [create-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-multipart-upload.html) in the *AWS CLI Command Reference*.

### Uploading the parts of a multipart upload
<a name="directory-buckets-multipart-upload-examples-upload-part"></a>

The following examples show how to upload parts of a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-part-sdks"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to break a single object into parts and then upload those parts to a directory bucket by using the SDK for Java 2.x. 

**Example**  

```
/**
 * This method creates part requests and uploads individual parts to S3 and then returns all the completed parts
 *
 * @param s3
 * @param bucketName
 * @param key
 * @param uploadId
 * @throws IOException
 */
 private static List<CompletedPart> multipartUpload(S3Client s3, String bucketName, String key, String uploadId, String filePath) throws IOException {

        int partNumber = 1;
        List<CompletedPart> completedParts = new ArrayList<>();
        ByteBuffer bb = ByteBuffer.allocate(1024 * 1024 * 5); // 5 MB byte buffer

        // read the local file, breakdown into chunks and process
        try (RandomAccessFile file = new RandomAccessFile(filePath, "r")) {
            long fileSize = file.length();
            int position = 0;
            while (position < fileSize) {
                file.seek(position);
                int read = file.getChannel().read(bb);

                bb.flip(); // Swap position and limit before reading from the buffer.
                UploadPartRequest uploadPartRequest = UploadPartRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .partNumber(partNumber)
                        .build();

                UploadPartResponse partResponse = s3.uploadPart(
                        uploadPartRequest,
                        RequestBody.fromByteBuffer(bb));

                CompletedPart part = CompletedPart.builder()
                        .partNumber(partNumber)
                        .eTag(partResponse.eTag())
                        .build();
                completedParts.add(part);

                bb.clear();
                position += read;
                partNumber++;
            }
        } 
        
        catch (IOException e) {
            throw e;
        }
        return completedParts;
    }
```

------
#### [ SDK for Python ]

The following example shows how to break a single object into parts and then upload those parts to a directory bucket by using the SDK for Python. 

**Example**  

```
def multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_size):
    '''
    Break up a file into multiple parts and upload those parts to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name for object to be uploaded and for the local file that's being uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_size: The size parts that the object will be broken into, in bytes. 
                      Minimum 5 MiB, Maximum 5 GiB. There is no minimum size for the last part of your multipart upload.
    :return: part_list for the multipart upload if all parts are uploaded successfully, else None
    '''
    
    part_list = []
    try:
        with open(key_name, 'rb') as file:
            part_counter = 1
            while True:
                file_part = file.read(part_size)
                if not len(file_part):
                    break
                upload_part = s3_client.upload_part(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = mpu_id,
                    Body = file_part,
                    PartNumber = part_counter
                )
                part_list.append({'PartNumber': part_counter, 'ETag': upload_part['ETag']})
                part_counter += 1
    except ClientError as e:
        logging.error(e)
        return None
    return part_list
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-part-cli"></a>

This example shows how to break a single object into parts and then upload those parts to a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api upload-part --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --part-number 1 --body LOCAL_FILE_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA"
```

For more information, see [upload-part](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/upload-part.html) in the *AWS CLI Command Reference*.

### Completing a multipart upload
<a name="directory-buckets-multipart-upload-examples-complete"></a>

The following examples show how to complete a multipart upload. 

#### Using the AWS SDKs
<a name="directory-buckets-multipart-upload-complete-sdks"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to complete a multipart upload by using the SDK for Java 2.x.

**Example**  

```
/**
 * This method completes the multipart upload request by collating all the upload parts
 * @param s3
 * @param bucketName - for example, 'doc-example-bucket--usw2-az1--x-s3'
 * @param key
 * @param uploadId
 * @param uploadParts
 */
 private static void completeMultipartUpload(S3Client s3, String bucketName, String key, String uploadId, List<CompletedPart> uploadParts) {
        CompletedMultipartUpload completedMultipartUpload = CompletedMultipartUpload.builder()
                .parts(uploadParts)
                .build();

        CompleteMultipartUploadRequest completeMultipartUploadRequest =
                CompleteMultipartUploadRequest.builder()
                        .bucket(bucketName)
                        .key(key)
                        .uploadId(uploadId)
                        .multipartUpload(completedMultipartUpload)
                        .build();

        s3.completeMultipartUpload(completeMultipartUploadRequest);
    }

    public static void multipartUploadTest(S3Client s3, String bucketName, String key, String localFilePath)  {
        System.out.println("Starting multipart upload for: " + key);
        try {
            String uploadId = createMultipartUpload(s3, bucketName, key);
            System.out.println(uploadId);
            List<CompletedPart> parts = multipartUpload(s3, bucketName, key, uploadId, localFilePath);
            completeMultipartUpload(s3, bucketName, key, uploadId, parts);
            System.out.println("Multipart upload completed for: " + key);
        } 
        
        catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following examples show how to complete a multipart upload by using the SDK for Python.

**Example**  

```
def complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
    '''
    Completes a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: The destination bucket for the multipart upload
    :param key_name: The key name for the object to be uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_list: The list of uploaded part numbers with their associated ETags 
    :return: True if the multipart upload was completed successfully, else False
    '''
    
    try:
        s3_client.complete_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = mpu_id,
            MultipartUpload = {
                'Parts': part_list
            }
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True
    
if __name__ == '__main__':
    MB = 1024 ** 2
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'OBJECT_NAME'
    part_size = 10 * MB
    s3_client = boto3.client('s3', region_name = region)
    mpu_id = create_multipart_upload(s3_client, bucket_name, key_name)
    if mpu_id is not None:
        part_list = multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_size)
        if part_list is not None:
            if complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
                print (f'{key_name} successfully uploaded through a multipart upload to {bucket_name}')
            else:
                print (f'Could not upload {key_name} through a multipart upload to {bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-complete-cli"></a>

This example shows how to complete a multipart upload for a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api complete-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA" --multipart-upload file://parts.json
```

This example takes a JSON structure that describes the parts of the multipart upload that should be reassembled into the complete file. In this example, the `file://` prefix is used to load the JSON structure from a file named `parts.json` in the local folder.

parts.json:

```
{
  "Parts": [
    {
      "ETag": "6b78c4a64dd641a58dac8d9258b88147",
      "PartNumber": 1
    }
  ]
}
```
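If you recorded the part numbers and ETags programmatically while uploading, you can also write the `parts.json` structure with a short script instead of editing it by hand. The part values below are hypothetical placeholders.

```python
import json

# Hypothetical part list recorded from earlier upload-part responses.
parts = [
    {'ETag': '6b78c4a64dd641a58dac8d9258b88147', 'PartNumber': 1},
]

# Write the structure that complete-multipart-upload expects.
with open('parts.json', 'w') as f:
    json.dump({'Parts': parts}, f, indent=2)
```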

For more information, see [complete-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/complete-multipart-upload.html) in the *AWS CLI Command Reference*.

### Aborting a multipart upload
<a name="directory-buckets-multipart-upload-examples-abort"></a>

The following examples show how to abort a multipart upload. 

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-abort-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to abort a multipart upload by using the SDK for Java 2.x.

**Example**  

```
public static void abortMultiPartUploads( S3Client s3, String bucketName ) {

         try {
             ListMultipartUploadsRequest listMultipartUploadsRequest = ListMultipartUploadsRequest.builder()
                     .bucket(bucketName)
                     .build();

             ListMultipartUploadsResponse response = s3.listMultipartUploads(listMultipartUploadsRequest);
             List<MultipartUpload> uploads = response.uploads();

             AbortMultipartUploadRequest abortMultipartUploadRequest;
             for (MultipartUpload upload: uploads) {
                 abortMultipartUploadRequest = AbortMultipartUploadRequest.builder()
                         .bucket(bucketName)
                         .key(upload.key())
                         .uploadId(upload.uploadId())
                         .build();

                 s3.abortMultipartUpload(abortMultipartUploadRequest);
             }

         } 
         
         catch (S3Exception e) {
             System.err.println(e.getMessage());
             System.exit(1);
         }
     }
```

------
#### [ SDK for Python ]

The following example shows how to abort a multipart upload by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError


def abort_multipart_upload(s3_client, bucket_name, key_name, upload_id):
    '''
    Aborts a partial multipart upload in a directory bucket.
    
    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket where the multipart upload was initiated - for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param key_name: Name of the object for which the multipart upload needs to be aborted
    :param upload_id: Multipart upload ID for the multipart upload to be aborted
    :return: True if the multipart upload was successfully aborted, False if not
    '''
    try:
        s3_client.abort_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = upload_id
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'KEY_NAME'
    upload_id = 'UPLOAD_ID'
    s3_client = boto3.client('s3', region_name = region)
    if abort_multipart_upload(s3_client, bucket_name, key_name, upload_id):
        print (f'Multipart upload for object {key_name} in {bucket_name} bucket has been aborted')
    else:
        print (f'Unable to abort multipart upload for object {key_name} in {bucket_name} bucket')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-abort-cli"></a>

The following example shows how to abort a multipart upload by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

```
aws s3api abort-multipart-upload --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEAX5hFw-MAQAAAAB0OxUFeA7LTbWWFS8WYwhrxDxTIDN-pdEEq_agIHqsbg"
```

For more information, see [abort-multipart-upload](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/abort-multipart-upload.html) in the *AWS CLI Command Reference*.

### Creating a multipart upload copy operation
<a name="directory-buckets-multipart-upload-examples-upload-part-copy"></a>

**Note**  
To encrypt new object part copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Your SSE-KMS configuration can only support one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration. You can't specify server-side encryption settings for new object part copies with SSE-KMS in the [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) request headers. Also, the request headers you provide in the `CreateMultipartUpload` request must match the default encryption configuration of the destination bucket. 
S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

The following examples show how to copy objects from one bucket to another using a multipart upload. 
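
Both SDK examples below split the source object into fixed-size parts and copy each part with a `CopySourceRange` header of the form `bytes=start-end`. The range arithmetic can be sketched on its own (a minimal sketch; the 20 MB part size matches the Java example, and `copy_part_ranges` is an illustrative name, not an SDK function):

```python
def copy_part_ranges(object_size: int, part_size: int = 20 * 1024 * 1024):
    """Return (part_number, 'bytes=start-end') ranges covering object_size bytes.

    Part numbers start at 1, and the last part can be smaller than part_size.
    """
    ranges = []
    byte_position = 0
    part_number = 1
    while byte_position < object_size:
        # The range header is inclusive on both ends, so the last byte of
        # the object is object_size - 1.
        last_byte = min(byte_position + part_size - 1, object_size - 1)
        ranges.append((part_number, f"bytes={byte_position}-{last_byte}"))
        part_number += 1
        byte_position += part_size
    return ranges

# A 50 MB object copied in 20 MB parts yields three ranges; the last range
# covers only the remaining 10 MB.
MB = 1024 ** 2
print(copy_part_ranges(50 * MB))
# → [(1, 'bytes=0-20971519'), (2, 'bytes=20971520-41943039'), (3, 'bytes=41943040-52428799')]
```

Each range produced this way maps to one `UploadPartCopy` request, and the part numbers line up with the `PartNumber` values used when completing the multipart upload.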

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-copy-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following example shows how to use a multipart upload to programmatically copy an object from one bucket to another by using the SDK for Java 2.x.

**Example**  

```
/**
 * This method creates a multipart upload request that generates a unique upload ID that is used to track
 * all the upload parts.
 *
 * @param s3 the S3 client to use
 * @param bucketName the destination bucket
 * @param key the key name of the object to upload
 * @return the upload ID for the new multipart upload
 */
 private static String createMultipartUpload(S3Client s3, String bucketName, String key) {
        CreateMultipartUploadRequest createMultipartUploadRequest = CreateMultipartUploadRequest.builder()
                .bucket(bucketName)
                .key(key)
                .build();
        String uploadId = null;
        try {
            CreateMultipartUploadResponse response = s3.createMultipartUpload(createMultipartUploadRequest);
            uploadId = response.uploadId();
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
        return uploadId;
  }

  /**
   * Creates copy parts based on the source object size and copies over individual parts.
   *
   * @param s3 the S3 client to use
   * @param sourceBucket the bucket that contains the source object
   * @param sourceKey the key name of the source object
   * @param destnBucket the destination bucket
   * @param destnKey the key name of the object copy
   * @param uploadId the upload ID for the multipart upload
   * @return the list of completed parts
   * @throws IOException if a part copy fails
   */
    public static List<CompletedPart> multipartUploadCopy(S3Client s3, String sourceBucket, String sourceKey, String destnBucket, String destnKey, String uploadId) throws IOException {

        // Get the object size to track the end of the copy operation.
        HeadObjectRequest headObjectRequest = HeadObjectRequest
                .builder()
                .bucket(sourceBucket)
                .key(sourceKey)
                .build();
        HeadObjectResponse response = s3.headObject(headObjectRequest);
        Long objectSize = response.contentLength();

        System.out.println("Source Object size: " + objectSize);

        // Copy the object using 20 MB parts.
        long partSize = 20 * 1024 * 1024;
        long bytePosition = 0;
        int partNum = 1;
        List<CompletedPart> completedParts = new ArrayList<>();
        while (bytePosition < objectSize) {
            // The last part might be smaller than partSize, so check to make sure
            // that lastByte isn't beyond the end of the object.
            long lastByte = Math.min(bytePosition + partSize - 1, objectSize - 1);

            System.out.println("part no: " + partNum + ", bytePosition: " + bytePosition + ", lastByte: " + lastByte);

            // Copy this part.
            UploadPartCopyRequest req = UploadPartCopyRequest.builder()
                    .uploadId(uploadId)
                    .sourceBucket(sourceBucket)
                    .sourceKey(sourceKey)
                    .destinationBucket(destnBucket)
                    .destinationKey(destnKey)
                    .copySourceRange("bytes="+bytePosition+"-"+lastByte)
                    .partNumber(partNum)
                    .build();
            UploadPartCopyResponse res = s3.uploadPartCopy(req);
            CompletedPart part = CompletedPart.builder()
                    .partNumber(partNum)
                    .eTag(res.copyPartResult().eTag())
                    .build();
            completedParts.add(part);
            partNum++;
            bytePosition += partSize;
        }
        return completedParts;
    }


    public static void multipartCopyUploadTest(S3Client s3, String srcBucket, String srcKey, String destnBucket, String destnKey)  {
        System.out.println("Starting multipart copy for: " + srcKey);
        try {
            String uploadId = createMultipartUpload(s3, destnBucket, destnKey);
            System.out.println(uploadId);
            List<CompletedPart> parts = multipartUploadCopy(s3, srcBucket, srcKey, destnBucket, destnKey, uploadId);
            completeMultipartUpload(s3, destnBucket, destnKey, uploadId, parts);
            System.out.println("Multipart copy completed for: " + srcKey);
        } catch (Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following example shows how to use a multipart upload to programmatically copy an object from one bucket to another by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def head_object(s3_client, bucket_name, key_name):
    '''
    Returns metadata for an object in a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that contains the object to query for metadata
    :param key_name: Key name to query for metadata
    :return: Metadata for the specified object if successful, else None
    '''

    try:
        response = s3_client.head_object(
            Bucket = bucket_name,
            Key = key_name
        )
        return response
    except ClientError as e:
        logging.error(e)
        return None
    
def create_multipart_upload(s3_client, bucket_name, key_name):
    '''
    Create a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name of the object to be uploaded
    :return: UploadId for the multipart upload if created successfully, else None
    '''
    
    try:
        mpu = s3_client.create_multipart_upload(Bucket = bucket_name, Key = key_name)
        return mpu['UploadId'] 
    except ClientError as e:
        logging.error(e)
        return None

def multipart_copy_upload(s3_client, source_bucket_name, key_name, target_bucket_name, mpu_id, part_size):
    '''
    Copy an object in a directory bucket to another bucket in multiple parts of a specified size
    
    :param s3_client: boto3 S3 client
    :param source_bucket_name: Bucket where the source object exists
    :param key_name: Key name of the object to be copied
    :param target_bucket_name: Destination bucket for copied object
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_size: The size parts that the object will be broken into, in bytes. 
                      Minimum 5 MiB, Maximum 5 GiB. There is no minimum size for the last part of your multipart upload.
    :return: part_list for the multipart copy if all parts are copied successfully, else None
    '''
    
    part_list = []
    copy_source = {
        'Bucket': source_bucket_name,
        'Key': key_name
    }
    try:
        part_counter = 1
        object_response = head_object(s3_client, source_bucket_name, key_name)
        if object_response is None:
            return None
        object_size = object_response['ContentLength']
        while (part_counter - 1) * part_size < object_size:
            bytes_start = (part_counter - 1) * part_size
            bytes_end = (part_counter * part_size) - 1
            upload_copy_part = s3_client.upload_part_copy (
                Bucket = target_bucket_name,
                CopySource = copy_source,
                CopySourceRange = f'bytes={bytes_start}-{bytes_end}',
                Key = key_name,
                PartNumber = part_counter,
                UploadId = mpu_id
            )
            part_list.append({'PartNumber': part_counter, 'ETag': upload_copy_part['CopyPartResult']['ETag']})
            part_counter += 1
    except ClientError as e:
        logging.error(e)
        return None
    return part_list

def complete_multipart_upload(s3_client, bucket_name, key_name, mpu_id, part_list):
    '''
    Completes a multipart upload to a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Destination bucket for the multipart upload
    :param key_name: Key name of the object to be uploaded
    :param mpu_id: The UploadId returned from the create_multipart_upload call
    :param part_list: List of uploaded part numbers with associated ETags 
    :return: True if the multipart upload was completed successfully, else False
    '''
    
    try:
        s3_client.complete_multipart_upload(
            Bucket = bucket_name,
            Key = key_name,
            UploadId = mpu_id,
            MultipartUpload = {
                'Parts': part_list
            }
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True

if __name__ == '__main__':
    MB = 1024 ** 2
    region = 'us-west-2'
    source_bucket_name = 'SOURCE_BUCKET_NAME'
    target_bucket_name = 'TARGET_BUCKET_NAME'
    key_name = 'KEY_NAME'
    part_size = 10 * MB
    s3_client = boto3.client('s3', region_name = region)
    mpu_id = create_multipart_upload(s3_client, target_bucket_name, key_name)
    if mpu_id is not None:
        part_list = multipart_copy_upload(s3_client, source_bucket_name, key_name, target_bucket_name, mpu_id, part_size)
        if part_list is not None:
            if complete_multipart_upload(s3_client, target_bucket_name, key_name, mpu_id, part_list):
                print (f'{key_name} successfully copied through multipart copy from {source_bucket_name} to {target_bucket_name}')
            else:
                print (f'Could not copy {key_name} through multipart copy from {source_bucket_name} to {target_bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-copy-cli"></a>

The following example shows how to use a multipart upload to copy an object from one bucket to a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api upload-part-copy --bucket bucket-base-name--zone-id--x-s3 --key TARGET_KEY_NAME --copy-source SOURCE_BUCKET_NAME/SOURCE_KEY_NAME --part-number 1 --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBnJ4cxKMAQAAAABiNXpOFVZJ1tZcKWib9YKE1C565_hCkDJ_4AfCap2svg"
```

For more information, see [upload-part-copy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/upload-part-copy.html) in the *AWS CLI Command Reference*.

### Listing in-progress multipart uploads
<a name="directory-buckets-multipart-upload-examples-list"></a>

To list in-progress multipart uploads to a directory bucket, you can use the AWS SDKs or the AWS CLI.

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-list-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to list in-progress (incomplete) multipart uploads by using the SDK for Java 2.x.

**Example**  

```
 public static void listMultiPartUploads(S3Client s3, String bucketName) {
        try {
            ListMultipartUploadsRequest listMultipartUploadsRequest = ListMultipartUploadsRequest.builder()
                .bucket(bucketName)
                .build();

            ListMultipartUploadsResponse response = s3.listMultipartUploads(listMultipartUploadsRequest);
            List<MultipartUpload> uploads = response.uploads();
            for (MultipartUpload upload : uploads) {
                System.out.println("Upload in progress: Key = \"" + upload.key() + "\", id = " + upload.uploadId());
            }
        } catch (S3Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following examples show how to list in-progress (incomplete) multipart uploads by using the SDK for Python.

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def list_multipart_uploads(s3_client, bucket_name):
    '''
    List any incomplete multipart uploads in a directory bucket in the specified Region

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to check for incomplete multipart uploads
    :return: List of incomplete multipart uploads if there are any, None if not
    '''
    
    try:
        response = s3_client.list_multipart_uploads(Bucket = bucket_name)
        if 'Uploads' in response.keys():
            return response['Uploads']
        else:
            return None 
    except ClientError as e:
        logging.error(e)

if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'us-west-2'
    s3_client = boto3.client('s3', region_name = region)
    multipart_uploads = list_multipart_uploads(s3_client, bucket_name)
    if multipart_uploads is not None:
        print (f'There are {len(multipart_uploads)} incomplete multipart uploads for {bucket_name}')
    else:
        print (f'There are no incomplete multipart uploads for {bucket_name}')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-list-cli"></a>

The following examples show how to list in-progress (incomplete) multipart uploads by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api list-multipart-uploads --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [list-multipart-uploads](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-multipart-uploads.html) in the *AWS CLI Command Reference*.

### Listing the parts of a multipart upload
<a name="directory-buckets-multipart-upload-examples-list-parts"></a>

The following examples show how to list the parts of a multipart upload to a directory bucket.

#### Using the AWS SDKs
<a name="directory-bucket-multipart-upload-list-parts-sdk"></a>

------
#### [ SDK for Java 2.x ]

The following examples show how to list the parts of a multipart upload to a directory bucket by using the SDK for Java 2.x.

```
public static void listMultiPartUploadsParts(S3Client s3, String bucketName, String objKey, String uploadID) {
        try {
            ListPartsRequest listPartsRequest = ListPartsRequest.builder()
                .bucket(bucketName)
                .uploadId(uploadID)
                .key(objKey)
                .build();

            ListPartsResponse response = s3.listParts(listPartsRequest);
            List<Part> parts = response.parts();
            for (Part part : parts) {
                System.out.println("Upload in progress: Part number = \"" + part.partNumber() + "\", etag = " + part.eTag());
            }
        } catch (S3Exception e) {
            System.err.println(e.getMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

The following examples show how to list the parts of a multipart upload to a directory bucket by using the SDK for Python.

```
import logging
import boto3
from botocore.exceptions import ClientError

def list_parts(s3_client, bucket_name, key_name, upload_id):
    '''
    Lists the parts that have been uploaded for a specific multipart upload to a directory bucket.
    
    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that multipart uploads parts have been uploaded to
    :param key_name: Name of the object that has parts uploaded
    :param upload_id: Multipart upload ID that the parts are associated with
    :return: List of parts associated with the specified multipart upload, None if there are no parts
    '''
    parts_list = []
    next_part_marker = ''
    continuation_flag = True
    try:
        while continuation_flag:
            if next_part_marker == '':
                response = s3_client.list_parts(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = upload_id
                )
            else:
                response = s3_client.list_parts(
                    Bucket = bucket_name,
                    Key = key_name,
                    UploadId = upload_id,
                    PartNumberMarker = next_part_marker
                )
            if 'Parts' in response:
                for part in response['Parts']:
                    parts_list.append(part)
                if response['IsTruncated']:
                    next_part_marker = response['NextPartNumberMarker']
                else:
                    continuation_flag = False
            else:
                continuation_flag = False
        return parts_list
    except ClientError as e:
        logging.error(e)
        return None

if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    key_name = 'KEY_NAME'
    upload_id = 'UPLOAD_ID'
    s3_client = boto3.client('s3', region_name = region)
    parts_list = list_parts(s3_client, bucket_name, key_name, upload_id)
    if parts_list is not None:
        print (f'{key_name} has {len(parts_list)} parts uploaded to {bucket_name}')
    else:
        print (f'There are no multipart uploads with that upload ID for {bucket_name} bucket')
```

------

#### Using the AWS CLI
<a name="directory-bucket-multipart-upload-list-parts-cli"></a>

The following examples show how to list the parts of a multipart upload to a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

```
aws s3api list-parts --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --upload-id "AS_mgt9RaQE9GEaifATue15dAAAAAAAAAAEMAAAAAAAAADQwNzI4MDU0MjUyMBYAAAAAAAAAAA0AAAAAAAAAAAH2AfYAAAAAAAAEBSD0WBKMAQAAAABneY9yBVsK89iFkvWdQhRCcXohE8RbYtc9QvBOG8tNpA"
```

For more information, see [list-parts](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-parts.html) in the *AWS CLI Command Reference*.

# Copying objects from or to a directory bucket
<a name="directory-buckets-objects-copy"></a>

The copy operation creates a copy of an object that is already stored in Amazon S3. You can copy objects between directory buckets and general purpose buckets. You can also copy objects within a bucket and across buckets of the same type, for example, from directory bucket to directory bucket. 

**Note**  
Copying objects across different AWS Regions isn't supported when the source or destination bucket is in an AWS Local Zone. The source and destination buckets must have the same parent AWS Region. The source and destination buckets can be different bucket location types (Availability Zone or Local Zone).

You can create a copy of an object up to 5 GB in a single atomic operation. However, to copy an object that is greater than 5 GB, you must use the multipart upload API operations. For more information, see [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md).
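
The size cutoff above can be expressed as a small planning helper that decides which API operation applies and, for multipart copies, how many parts a given part size produces (a sketch; `plan_copy` is an illustrative name, not an S3 API, and the exact cutoff is the 5 GB limit stated above):

```python
import math

MiB = 1024 ** 2
GiB = 1024 ** 3

# Single-request copies top out at 5 GB (per the text above); larger objects
# must be copied with the multipart upload API operations (UploadPartCopy).
SINGLE_COPY_LIMIT = 5 * GiB

def plan_copy(object_size: int, part_size: int = 100 * MiB):
    """Return ('CopyObject', 1) for small objects, or
    ('UploadPartCopy', number_of_parts) for objects over the limit."""
    if object_size <= SINGLE_COPY_LIMIT:
        return ("CopyObject", 1)
    return ("UploadPartCopy", math.ceil(object_size / part_size))
```

For example, a 10 GiB object copied with the default 100 MiB part size needs 103 `UploadPartCopy` requests, because the final part holds the remainder.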

**Permissions**  
 To copy objects, you must have the following permissions:
+ To copy objects from one directory bucket to another directory bucket, you must have the `s3express:CreateSession` permission.
+ To copy objects from directory buckets to general purpose buckets, you must have the `s3express:CreateSession` permission and the `s3:PutObject` permission to write the object copy to the destination bucket. 
+ To copy objects from general purpose buckets to directory buckets, you must have the `s3express:CreateSession` permission and `s3:GetObject` permission to read the source object that is being copied. 

   For more information, see [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.

**Encryption**  
Amazon S3 automatically encrypts all new objects that are uploaded to an S3 bucket. The default encryption configuration of an S3 bucket is always enabled and is at a minimum set to server-side encryption with Amazon S3 managed keys (SSE-S3). 

For directory buckets, SSE-S3 and server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) are supported. When the destination bucket is a directory bucket, we recommend that the destination bucket's default encryption uses the desired encryption configuration and that you don't override the bucket default encryption. Then, new objects are automatically encrypted with the desired encryption settings. Also, S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

For general purpose buckets, you can use SSE-S3 (the default), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C). 

If you make a copy request that specifies to use DSSE-KMS or SSE-C for a directory bucket (either the source or destination bucket), the response returns an error.

**Tags**  
Directory buckets don't support tags. If you copy an object that has tags from a general purpose bucket to a directory bucket, you receive an HTTP `501 (Not Implemented)` response. For more information, see [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) in the *Amazon Simple Storage Service API Reference*.

**ETags**  
Entity tags (ETags) for S3 Express One Zone are random alphanumeric strings and are not MD5 checksums. To help ensure object integrity, use additional checksums.

**Additional checksums**  
S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during upload or download. You can select one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums are not supported with the S3 Express One Zone storage class. 

For more information, see [S3 additional checksum best practices](s3-express-optimizing-performance.md#s3-express-optimizing-performance-checksums).
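
Amazon S3 reports these additional checksums as base64-encoded values in responses (for example, the `x-amz-checksum-crc32` value), so a locally computed checksum can be compared against what S3 returns. A minimal sketch of the CRC32 encoding using only the standard library, assuming the documented base64-of-big-endian-bytes encoding (the comparison against a real response is left out):

```python
import base64
import zlib

def crc32_b64(data: bytes) -> str:
    """Return the CRC32 of data as a base64 string, matching the encoding
    S3 uses for CRC32 additional checksum values."""
    checksum = zlib.crc32(data)
    # S3 encodes the checksum as the base64 of the 4-byte big-endian value.
    return base64.b64encode(checksum.to_bytes(4, "big")).decode("ascii")
```

Because ETags in directory buckets are not MD5 checksums, comparing a value like this against the checksum returned for an object is the reliable way to verify integrity end to end.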

**Supported features**  
For more information about which Amazon S3 features are supported for S3 Express One Zone, see [Differences for directory buckets](s3-express-differences.md). 

## Using the S3 console (copy to a directory bucket)
<a name="directory-bucket-copy-console"></a>

**Note**  
The restrictions and limitations when you copy an object to a directory bucket with the console are as follows:  
The `Copy` action applies to all objects within the specified folders (prefixes). Objects added to these folders while the action is in progress might be affected.
Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied by using the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Copied objects will not retain the Object Lock settings from the original objects.
If the bucket you are copying objects from uses the bucket owner enforced setting for S3 Object Ownership, object ACLs will not be copied to the specified destination.
If you want to copy objects to a bucket that uses the bucket owner enforced setting for S3 Object Ownership, make sure that the source bucket also uses the bucket owner enforced setting, or remove any object ACL grants to other AWS accounts and groups.
Objects copied from a general purpose bucket to a directory bucket will not retain object tags, ACLs, or ETag values. Checksum values can be copied, but they are not equivalent to an ETag, and the checksum value might change compared to when it was first added.
All objects copied to a directory bucket will use the bucket owner enforced setting for S3 Object Ownership.

**To copy an object from a general purpose bucket or a directory bucket to a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose the bucket type that you want to copy objects from:
   + To copy from a general purpose bucket, choose the **General purpose buckets** tab.
   + To copy from a directory bucket, choose the **Directory buckets** tab.

1. Choose the general purpose bucket or directory bucket that contains the objects that you want to copy.

1. Choose the **Objects** tab. On the **Objects** page, select the check box to the left of the names of the objects that you want to copy.

1. On the **Actions** menu, choose **Copy**.

   The **Copy** page appears.

1. Under **Destination**, choose **Directory bucket** for your destination type. To specify the destination path, choose **Browse S3**, navigate to the destination, and then choose the option button to the left of the destination. Choose **Choose destination** in the lower-right corner. 

   Alternatively, enter the destination path. 

1. Under **Additional copy settings**, choose whether you want to **Copy source settings**, **Don’t specify settings**, or **Specify settings**. **Copy source settings** is the default option. If you only want to copy the object without the source settings attributes, choose **Don’t specify settings**. Choose **Specify settings** to specify settings for server-side encryption, checksums, and metadata.

1. Choose **Copy** in the bottom-right corner. Amazon S3 copies your objects to the destination.

## Using the S3 console (copy to a general purpose bucket)
<a name="directory-bucket-copy-console"></a>

**Note**  
The restrictions and limitations when you copy an object to a general purpose bucket with the console are as follows:  
The `Copy` action applies to all objects within the specified folders (prefixes). Objects added to these folders while the action is in progress might be affected.
Objects encrypted with customer-provided encryption keys (SSE-C) cannot be copied by using the S3 console. To copy objects encrypted with SSE-C, use the AWS CLI, AWS SDK, or the Amazon S3 REST API.
Copied objects will not retain the Object Lock settings from the original objects.
If the bucket you are copying objects from uses the bucket owner enforced setting for S3 Object Ownership, object ACLs will not be copied to the specified destination.
If you want to copy objects to a bucket that uses the bucket owner enforced setting for S3 Object Ownership, make sure that the source bucket also uses the bucket owner enforced setting, or remove any object ACL grants to other AWS accounts and groups.

**To copy an object from a directory bucket to a general purpose bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. Choose the **Directory buckets** tab.

1. Choose the directory bucket that contains the objects that you want to copy.

1. Choose the **Objects** tab. On the **Objects** page, select the check box to the left of the names of the objects that you want to copy.

1. On the **Actions** menu, choose **Copy**.

   The **Copy** page appears.

1. Under **Destination**, choose **General purpose bucket** for your destination type. To specify the destination path, choose **Browse S3**, navigate to the destination, and choose the option button to the left of the destination. Choose **Choose destination** in the lower-right corner. 

   Alternatively, enter the destination path. 

1. Under **Additional copy settings**, choose whether you want to **Copy source settings**, **Don’t specify settings**, or **Specify settings**. **Copy source settings** is the default option. If you only want to copy the object without the source settings attributes, choose **Don’t specify settings**. Choose **Specify settings** to specify settings for storage class, ACLs, object tags, metadata, server-side encryption, and additional checksums.

1. Choose **Copy** in the bottom-right corner. Amazon S3 copies your objects to the destination.

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
 public static void copyBucketObject(S3Client s3, String sourceBucket, String objectKey, String targetBucket) {
      CopyObjectRequest copyReq = CopyObjectRequest.builder()
          .sourceBucket(sourceBucket)
          .sourceKey(objectKey)
          .destinationBucket(targetBucket)
          .destinationKey(objectKey)
          .build();

       try {
           s3.copyObject(copyReq);
           System.out.println("Successfully copied " + objectKey + " from bucket " + sourceBucket + " into bucket " + targetBucket);
       } catch (S3Exception e) {
           System.err.println(e.awsErrorDetails().errorMessage());
           System.exit(1);
       }
 }
```

------

## Using the AWS CLI
<a name="directory-copy-object-cli"></a>

The following `copy-object` example command shows how you can use the AWS CLI to copy an object from one bucket to another bucket. You can copy objects between bucket types. To run this command, replace the *user input placeholders* with your own information.

```
aws s3api copy-object --copy-source SOURCE_BUCKET/SOURCE_KEY_NAME --key TARGET_KEY_NAME --bucket TARGET_BUCKET_NAME
```

For more information, see [copy-object](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/copy-object.html) in the *AWS CLI Command Reference*.

# Deleting objects from a directory bucket
<a name="directory-bucket-delete-object"></a>

You can delete objects from an Amazon S3 directory bucket by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), or AWS SDKs. For more information, see [Working with directory buckets](directory-buckets-overview.md) and [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone).

**Warning**  
Deleting an object can't be undone.
This action deletes all specified objects. When deleting folders, wait for the delete action to finish before adding new objects to the folder. Otherwise, new objects might be deleted as well.

**Note**  
When you programmatically delete multiple objects from a directory bucket, note the following:  
Object keys in `DeleteObjects` requests must contain at least one non-white space character. Strings of all white space characters are not supported.
Object keys in `DeleteObjects` requests cannot contain Unicode control characters, except for newline (`\n`), tab (`\t`), and carriage return (`\r`).
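
Both key-name constraints in this note can be checked client-side before sending a `DeleteObjects` request. A minimal sketch, assuming the constraints exactly as stated above (`is_valid_delete_key` is an illustrative name, not an SDK function):

```python
import unicodedata

# The only control characters permitted in DeleteObjects key names.
ALLOWED_CONTROL_CHARS = {"\n", "\t", "\r"}

def is_valid_delete_key(key: str) -> bool:
    """Check the DeleteObjects key constraints described above: at least one
    non-white-space character, and no Unicode control characters other than
    newline, tab, and carriage return."""
    if key.strip() == "":
        return False
    for ch in key:
        # Unicode category "Cc" identifies control characters.
        if unicodedata.category(ch) == "Cc" and ch not in ALLOWED_CONTROL_CHARS:
            return False
    return True
```

Filtering a key list with a check like this avoids having an entire batch delete request rejected because of a single malformed key.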

## Using the S3 console
<a name="delete-object-directory-bucket-console"></a>

**To delete objects**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the directory bucket that contains the objects that you want to delete.

1. Choose the **Objects** tab. In the **Objects** list, select the check box to the left of the object or objects that you want to delete.

1. Choose **Delete**.

1. On the **Delete objects** page, enter **permanently delete** in the text box.

1. Choose **Delete objects**.

## Using the AWS SDKs
<a name="delete-object-directory-bucket-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following example deletes objects in a directory bucket by using the AWS SDK for Java 2.x.   

```
static void deleteObject(S3Client s3Client, String bucketName, String objectKey) {
        try {
            DeleteObjectRequest del = DeleteObjectRequest.builder()
                    .bucket(bucketName)
                    .key(objectKey)
                    .build();

            s3Client.deleteObject(del);
            System.out.println("Object " + objectKey + " has been deleted");
        } catch (S3Exception e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
```

------
#### [ SDK for Python ]

**Example**  
The following example deletes objects in a directory bucket by using the AWS SDK for Python (Boto3).   

```
import logging
import boto3
from botocore.exceptions import ClientError

def delete_objects(s3_client, bucket_name, objects):
    '''
    Delete a list of objects in a directory bucket

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket that contains objects to be deleted; for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param objects: List of dictionaries that specify the key names to delete
    :return: Response output, else False
    '''

    try:
        response = s3_client.delete_objects(
            Bucket = bucket_name,
            Delete = {
                'Objects': objects
            } 
        )
        return response
    except ClientError as e:
        logging.error(e)
        return False
    

if __name__ == '__main__':
    region = 'us-west-2'
    bucket_name = 'BUCKET_NAME'
    objects = [
        {
            'Key': '0.txt'
        },
        {
            'Key': '1.txt'
        },
        {
            'Key': '2.txt'
        },
        {
            'Key': '3.txt'
        },
        {
            'Key': '4.txt'
        }
    ]
    
    s3_client = boto3.client('s3', region_name = region)
    results = delete_objects(s3_client, bucket_name, objects)
    if results is not None:
        if 'Deleted' in results:
            print (f'Deleted {len(results["Deleted"])} objects from {bucket_name}')
        if 'Errors' in results:
            print (f'Failed to delete {len(results["Errors"])} objects from {bucket_name}')
```

------

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `delete-object` example command shows how you can use the AWS CLI to delete an object from a directory bucket. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api delete-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME 
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-object.html) in the *AWS CLI Command Reference*.

The following `delete-objects` example command shows how you can use the AWS CLI to delete objects from a directory bucket. To run this command, replace the `user input placeholders` with your own information.

The `delete.json` file is as follows: 

```
{
    "Objects": [
        {
            "Key": "0.txt"
        },
        {
            "Key": "1.txt"
        },
        {
            "Key": "2.txt"
        },
        {
            "Key": "3.txt"
        }
    ]
}
```

The `delete-objects` example command is as follows:

```
aws s3api delete-objects --bucket bucket-base-name--zone-id--x-s3 --delete file://delete.json 
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-objects.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/delete-objects.html) in the *AWS CLI Command Reference*.

# Downloading an object from a directory bucket
<a name="directory-buckets-objects-GetExamples"></a>

 The following code examples show how to read data from (download) an object in an Amazon S3 directory bucket by using the `GetObject` API operation. 

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  
The following code example shows how to read data from an object in a directory bucket by using the AWS SDK for Java 2.x.   

```
public static void getObject(S3Client s3Client, String bucketName, String objectKey) {
     try {
         GetObjectRequest objectRequest = GetObjectRequest
            .builder()
            .key(objectKey)
            .bucket(bucketName)
            .build();
            
         ResponseBytes<GetObjectResponse> objectBytes = s3Client.getObjectAsBytes(objectRequest);
         byte[] data = objectBytes.asByteArray();
         
         //Print object contents to console
         String s = new String(data, StandardCharsets.UTF_8);
         System.out.println(s);
    }
    
    catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ SDK for Python ]

**Example**  
The following code example shows how to read data from an object in a directory bucket by using the AWS SDK for Python (Boto3).   

```
import boto3
from botocore.exceptions import ClientError
from botocore.response import StreamingBody

def get_object(s3_client: boto3.client, bucket_name: str, key_name: str) -> StreamingBody:
    """
    Gets the object.
    :param s3_client:
    :param bucket_name: The bucket that contains the object. 
    :param key_name: The key of the object to be downloaded.
    :return: The object data in bytes.
    """
    try:
        response = s3_client.get_object(Bucket=bucket_name, Key=key_name)
        body = response['Body'].read()
        print(f"Got object '{key_name}' from bucket '{bucket_name}'.")
    except ClientError:
        print(f"Couldn't get object '{key_name}' from bucket '{bucket_name}'.")
        raise
    else:
        return body
        
def main():
    s3_client = boto3.client('s3')
    resp = get_object(s3_client, 'doc-example-bucket--use1-az4--x-s3', 'sample.txt')
    print(resp)
    
if __name__ == "__main__":
     main()
```

------

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `get-object` example command shows how you can use the AWS CLI to download an object from Amazon S3. This command gets the object `KEY_NAME` from the directory bucket `bucket-base-name--zone-id--x-s3`. The object will be downloaded to a file named `LOCAL_FILE_NAME`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME LOCAL_FILE_NAME
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html) in the *AWS CLI Command Reference*.

# Generating presigned URLs to share objects from a directory bucket
<a name="directory-buckets-objects-generate-presigned-url-Examples"></a>

 The following code examples show how to generate presigned URLs to share objects from an Amazon S3 directory bucket.

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following example command shows how you can use the AWS CLI to generate a presigned URL for an object from Amazon S3. This command generates a presigned URL for an object `KEY_NAME` from the directory bucket `bucket-base-name--zone-id--x-s3`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3 presign s3://bucket-base-name--zone-id--x-s3/KEY_NAME --expires-in 7200
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/presign.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/presign.html) in the *AWS CLI Command Reference*.

# Retrieving object metadata from directory buckets
<a name="directory-buckets-objects-HeadObjectExamples"></a>

The following AWS SDK and AWS CLI examples show how to use the `HeadObject` and `GetObjectAttributes` API operations to retrieve metadata from an object in an Amazon S3 directory bucket without returning the object itself. 

## Using the AWS SDKs
<a name="directory-bucket-copy-sdks"></a>

------
#### [ SDK for Java 2.x ]

**Example**  

```
public static void headObject(S3Client s3Client, String bucketName, String objectKey) {
     try {
         HeadObjectRequest headObjectRequest = HeadObjectRequest
                 .builder()
                 .bucket(bucketName)
                 .key(objectKey)
                 .build();
         HeadObjectResponse response = s3Client.headObject(headObjectRequest);
         System.out.format("Amazon S3 object: \"%s\" found in bucket: \"%s\" with ETag: \"%s\"", objectKey, bucketName, response.eTag());
     }
     catch (S3Exception e) {
         System.err.println(e.awsErrorDetails().errorMessage());
         System.exit(1);
     }
}
```

------

## Using the AWS CLI
<a name="directory-head-object-cli"></a>

The following `head-object` example command shows how you can use the AWS CLI to retrieve metadata from an object. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api head-object --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-object.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/head-object.html) in the *AWS CLI Command Reference*.

The following `get-object-attributes` example command shows how you can use the AWS CLI to retrieve metadata from an object. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api get-object-attributes --bucket bucket-base-name--zone-id--x-s3 --key KEY_NAME --object-attributes "StorageClass" "ETag" "ObjectSize"
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object-attributes.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object-attributes.html) in the *AWS CLI Command Reference*.

# Listing objects from a directory bucket
<a name="directory-buckets-objects-listobjectsExamples"></a>

 The following code examples show how to list objects in an Amazon S3 directory bucket by using the `ListObjectsV2` API operation. 

## Using the AWS CLI
<a name="directory-download-object-cli"></a>

The following `list-objects-v2` example command shows how you can use the AWS CLI to list objects from Amazon S3. This command lists objects from the directory bucket `bucket-base-name--zone-id--x-s3`. To run this command, replace the `user input placeholders` with your own information.

```
aws s3api list-objects-v2 --bucket bucket-base-name--zone-id--x-s3
```

For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/list-objects-v2.html) in the *AWS CLI Command Reference*.

# Security for directory buckets
<a name="s3-express-security"></a>

Cloud security at AWS is the highest priority. As an AWS customer, you benefit from data centers and network architectures that are built to meet the requirements of the most security-sensitive organizations. Security is a shared responsibility between AWS and you. The shared responsibility model describes this as security of the cloud and security in the cloud: 
+ **Security of the cloud** – AWS is responsible for protecting the infrastructure that runs AWS services in the AWS Cloud. AWS also provides you with services that you can use securely. Third-party auditors regularly test and verify the effectiveness of our security as part of the [https://aws.amazon.com/compliance/programs/](https://aws.amazon.com/compliance/programs/).

  To learn about the compliance programs, see [https://aws.amazon.com/compliance/services-in-scope/](https://aws.amazon.com/compliance/services-in-scope/).
+ **Security in the cloud** – Your responsibility is determined by the AWS service that you use. You are also responsible for other factors, including the sensitivity of your data, your company's requirements, and applicable laws and regulations.

This documentation will help you understand how to apply the shared responsibility model when using directory buckets. The following topics show you how to configure directory buckets to meet your security and compliance objectives. You will also learn how to use other AWS services that can help you monitor and secure your objects in directory buckets. 

# Data protection and encryption
<a name="s3-express-data-protection"></a>

 For more information about how you can configure encryption for directory buckets, see the following topics.

**Topics**
+ [Server-side encryption](#s3-express-ecnryption)
+ [Setting and monitoring default encryption for directory buckets](s3-express-bucket-encryption.md)
+ [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md)
+ [Encryption in transit](#s3-express-ecnryption-transit)
+ [Data deletion](#s3-express-data-deletion)

## Server-side encryption
<a name="s3-express-ecnryption"></a>

All directory buckets have encryption configured by default, and all new objects that are uploaded to directory buckets are automatically encrypted at rest. Server-side encryption with Amazon S3 managed keys (SSE-S3) is the default encryption configuration for every directory bucket. If you want to specify a different encryption type, you can use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) by setting the default encryption configuration of the bucket. For more information about SSE-KMS in directory buckets, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).

We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

SSE-KMS with directory buckets differs from SSE-KMS in general purpose buckets in the following aspects.
+ Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.

  You can identify the customer managed key that you specified for the bucket's SSE-KMS configuration in the following way:
  + Make a `HeadObject` API operation request, and find the value of `x-amz-server-side-encryption-aws-kms-key-id` in the response.

  To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
+ For [Zonal endpoint (object-level) API operations](s3-express-differences.md#s3-express-differences-api-operations) except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), you authenticate and authorize requests through [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) for low latency. We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

  In the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)), you can't override the values of the encryption settings (`x-amz-server-side-encryption`, `x-amz-server-side-encryption-aws-kms-key-id`, `x-amz-server-side-encryption-context`, and `x-amz-server-side-encryption-bucket-key-enabled`) from the `CreateSession` request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the `CreateSession` request to protect new objects in the directory bucket. 
**Note**  
When you use the AWS CLI or the AWS SDKs, the session token for `CreateSession` refreshes automatically to avoid service interruptions when a session expires. The AWS CLI and the AWS SDKs use the bucket's default encryption configuration for the `CreateSession` request. Overriding the encryption settings values in the `CreateSession` request isn't supported. Also, in the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)), overriding the values of the encryption settings from the `CreateSession` request isn't supported. 
+ For [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), to encrypt new object copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). Then, when you specify server-side encryption settings for new object copies with SSE-KMS, you must make sure the encryption key is the same customer managed key that you specified for the directory bucket's default encryption configuration. For [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), to encrypt new object part copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). You can't specify server-side encryption settings for new object part copies with SSE-KMS in the [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) request headers. Also, the encryption settings that you provide in the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) request must match the default encryption configuration of the destination bucket. 
+ S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.
+ When you specify an [AWS KMS customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encryption in your directory bucket, only use the key ID or key ARN. The key alias format of the KMS key isn't supported.

Directory buckets don't support dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS), or server-side encryption with customer-provided encryption keys (SSE-C).

# Setting and monitoring default encryption for directory buckets
<a name="s3-express-bucket-encryption"></a>

Amazon S3 buckets have bucket encryption enabled by default, and new objects are automatically encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3). This encryption applies to all new objects in your Amazon S3 buckets, and comes at no cost to you.

If you need more control over your encryption keys, such as managing key rotation and access policy grants, you can elect to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS).

**Note**  
We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).
To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session.
When you set default bucket encryption to SSE-KMS, S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In this case, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information about how S3 Bucket Keys reduce your AWS KMS request costs, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md). 
When you specify an [AWS KMS customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encryption in your directory bucket, only use the key ID or key ARN. The key alias format of the KMS key isn't supported.
Dual-layer server-side encryption with AWS KMS keys (DSSE-KMS) and server-side encryption with customer-provided keys (SSE-C) aren't supported for default encryption in directory buckets.

For more information about configuring default encryption, see [Configuring default encryption](default-bucket-encryption.md).

For more information about the permissions required for default encryption, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html) in the *Amazon Simple Storage Service API Reference*.

You can configure Amazon S3 default encryption for an S3 bucket by using the Amazon S3 console, the AWS SDKs, the Amazon S3 REST API, and the AWS Command Line Interface (AWS CLI).

## Using the S3 console
<a name="s3-express-bucket-encryption-how-to-set-up-console"></a>

**To configure default encryption on an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Buckets**.

1. In the **Buckets** list, choose the name of the bucket that you want. 

1. Choose the **Properties** tab.

1. Under **Server-side encryption settings**, note that directory buckets use server-side encryption with **Amazon S3 managed keys (SSE-S3)** by default.

1. Choose **Save changes**.

## Using the AWS CLI
<a name="s3-express-default-bucket-encryption-cli"></a>

These examples show you how to configure default encryption by using SSE-S3 or by using SSE-KMS with an S3 Bucket Key.

For more information about default encryption, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). For more information about using the AWS CLI to configure default encryption, see [put-bucket-encryption](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/put-bucket-encryption.html).

**Example – Default encryption with SSE-S3**  
This example configures default bucket encryption with Amazon S3 managed keys. To use the command, replace the *user input placeholders* with your own information.  

```
aws s3api put-bucket-encryption --bucket bucket-base-name--zone-id--x-s3 --server-side-encryption-configuration '{
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "AES256"
            }
        }
    ]
}'
```

**Example – Default encryption with SSE-KMS using an S3 Bucket Key**  
This example configures default bucket encryption with SSE-KMS using an S3 Bucket Key. To use the command, replace the *user input placeholders* with your own information.  

```
aws s3api put-bucket-encryption --bucket bucket-base-name--zone-id--x-s3 --server-side-encryption-configuration '{
    "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "KMS-Key-ARN"
                },
                "BucketKeyEnabled": true
            }
        ]
    }'
```

## Using the REST API
<a name="s3-express-bucket-encryption-how-to-set-up-api"></a>

Use the REST API `PutBucketEncryption` operation to set default encryption with the type of server-side encryption to use: SSE-S3 or SSE-KMS. 

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTencryption.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs
<a name="s3-express-kms-put-bucket-encryption-using-sdks"></a>

When using AWS SDKs, you can request Amazon S3 to use AWS KMS keys for server-side encryption. The following AWS SDKs for Java and .NET examples configure default encryption configuration for a directory bucket with SSE-KMS and an S3 Bucket Key. For information about other SDKs, see [Sample code and libraries](https://aws.amazon.com/code) on the AWS Developer Center.

**Important**  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*.

------
#### [ Java ]

With the AWS SDK for Java 2.x, you can request Amazon S3 to use an AWS KMS key by using the `applyServerSideEncryptionByDefault` method to specify the default encryption configuration of your directory bucket for data encryption with SSE-KMS. You create a symmetric encryption KMS key and specify that in the request.

```
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutBucketEncryptionRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionByDefault;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionConfiguration;
import software.amazon.awssdk.services.s3.model.ServerSideEncryptionRule;

public class Main {
    public static void main(String[] args) {
        S3Client s3 = S3Client.create();
        String bucketName = "bucket-base-name--zone-id--x-s3";
        String kmsKeyId = "your-kms-customer-managed-key-id";

        // AWS managed KMS keys aren't supported. Only customer-managed keys are supported.
        ServerSideEncryptionByDefault serverSideEncryptionByDefault = ServerSideEncryptionByDefault.builder()
                .sseAlgorithm(ServerSideEncryption.AWS_KMS)
                .kmsMasterKeyID(kmsKeyId)
                .build();

        // The bucketKeyEnabled field is enforced to be true.
        ServerSideEncryptionRule rule = ServerSideEncryptionRule.builder()
                .bucketKeyEnabled(true)
                .applyServerSideEncryptionByDefault(serverSideEncryptionByDefault)
                .build();
  
        ServerSideEncryptionConfiguration serverSideEncryptionConfiguration = ServerSideEncryptionConfiguration.builder()
                .rules(rule)
                .build();

        PutBucketEncryptionRequest putRequest = PutBucketEncryptionRequest.builder()
                .bucket(bucketName)
                .serverSideEncryptionConfiguration(serverSideEncryptionConfiguration)
                .build();

        s3.putBucketEncryption(putRequest);
        
    }
}
```

For more information about creating customer managed keys, see [Programming the AWS KMS API](https://docs.aws.amazon.com/kms/latest/developerguide/programming-top.html) in the *AWS Key Management Service Developer Guide*.

For working code examples of uploading an object, see the following topics. To use these examples, update them to provide encryption information as shown in the preceding code fragment.
+ For uploading an object in a single operation, see [Uploading objects to a directory bucket](directory-buckets-objects-upload.md).
+ For multipart upload API operations, see [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md). 

------
#### [ .NET ]

With the AWS SDK for .NET, you can request Amazon S3 to use an AWS KMS key by using the `ServerSideEncryptionByDefault` property to specify the default encryption configuration of your directory bucket for data encryption with SSE-KMS. You create a symmetric encryption customer managed key and specify that in the request.

```
    // Set the bucket's server-side encryption to use AWS KMS with a customer managed key ID.
    // bucketName: Name of the directory bucket, for example "bucket-base-name--zoneid--x-s3".
    // kmsKeyId: The ID of the customer managed KMS key, for example "your-kms-customer-managed-key-id".
    // Returns true if successful.
    public static async Task<bool> SetBucketServerSideEncryption(string bucketName, string kmsKeyId)
    {
        var serverSideEncryptionConfiguration = new ServerSideEncryptionConfiguration
        {
            ServerSideEncryptionRules = new List<ServerSideEncryptionRule>
            {
                new ServerSideEncryptionRule
                {
                    ServerSideEncryptionByDefault = new ServerSideEncryptionByDefault
                    {
                        ServerSideEncryptionAlgorithm = ServerSideEncryptionMethod.AWSKMS,
                        ServerSideEncryptionKeyManagementServiceKeyId = kmsKeyId
                    }
                }
            }
        };
        try
        {
            var encryptionResponse = await _s3Client.PutBucketEncryptionAsync(new PutBucketEncryptionRequest
            {
                BucketName = bucketName,
                ServerSideEncryptionConfiguration = serverSideEncryptionConfiguration,
            });

            return encryptionResponse.HttpStatusCode == HttpStatusCode.OK;
        }
        catch (AmazonS3Exception ex)
        {
            Console.WriteLine(ex.ErrorCode == "AccessDenied"
                ? $"This account does not have permission to set encryption on {bucketName}, please try again."
                : $"Unable to set bucket encryption for bucket {bucketName}, {ex.Message}");
        }
        return false;
    }
```

For more information about creating customer managed keys, see [Programming the AWS KMS API](https://docs.aws.amazon.com/kms/latest/developerguide/programming-top.html) in the *AWS Key Management Service Developer Guide*. 

For working code examples of uploading an object, see the following topics. To use these examples, update them to provide encryption information as shown in the preceding code fragment.
+ For uploading an object in a single operation, see [Uploading objects to a directory bucket](directory-buckets-objects-upload.md).
+ For multipart upload API operations, see [Using multipart uploads with directory buckets](s3-express-using-multipart-upload.md). 

------

## Monitoring default encryption for directory buckets with AWS CloudTrail
<a name="s3-express-bucket-encryption-tracking"></a>

You can track default encryption configuration requests for Amazon S3 directory buckets by using AWS CloudTrail events. The following API event names are used in CloudTrail logs:
+ `PutBucketEncryption`
+ `GetBucketEncryption`
+ `DeleteBucketEncryption`

**Note**  
EventBridge isn't supported in directory buckets.
Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS) or server-side encryption with customer-provided encryption keys (SSE-C) aren't supported in directory buckets.

For more information about monitoring default encryption with AWS CloudTrail, see [Monitoring default encryption with AWS CloudTrail and Amazon EventBridge](bucket-encryption-tracking.md).

# Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets
<a name="s3-express-UsingKMSEncryption"></a>

 The security controls in AWS KMS can help you meet encryption-related compliance requirements. You can choose to configure directory buckets to use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) and use these KMS keys to protect your data in Amazon S3 directory buckets. For more information about SSE-KMS, see [Using server-side encryption with AWS KMS keys (SSE-KMS)](UsingKMSEncryption.md).

**Permissions**  
To upload or download an object encrypted with an AWS KMS key to or from Amazon S3, you need `kms:GenerateDataKey` and `kms:Decrypt` permissions on the key. For more information, see [Allow key users to use a KMS key for cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html#key-policy-users-crypto) in the *AWS Key Management Service Developer Guide*. For information about the AWS KMS permissions that are required for multipart uploads, see [Multipart upload API and permissions](mpuoverview.md#mpuAndPermissions).

For more information about KMS keys for SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS)](specifying-kms-encryption.md).

**Topics**
+ [AWS KMS keys](#s3-express-aws-managed-customer-managed-keys)
+ [Using SSE-KMS for cross-account operations](#s3-express-bucket-encryption-update-bucket-policy)
+ [Amazon S3 Bucket Keys](#s3-express-sse-kms-bucket-keys)
+ [Requiring SSE-KMS](#s3-express-require-sse-kms)
+ [Encryption context](#s3-express-encryption-context)
+ [Sending requests for AWS KMS encrypted objects](#s3-express-aws-signature-version-4-sse-kms)
+ [Auditing SSE-KMS encryption in directory buckets](#s3-express-bucket-encryption-sse-auditing)
+ [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md)

## AWS KMS keys
<a name="s3-express-aws-managed-customer-managed-keys"></a>

Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't change the customer managed key for the bucket's SSE-KMS configuration.

You can identify the customer managed key that you specified for the bucket's SSE-KMS configuration by making a `HeadObject` API operation request and checking the value of the `x-amz-server-side-encryption-aws-kms-key-id` header in the response.

To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
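As a sketch of that `HeadObject` check, the following Python snippet extracts the key ID from a response dictionary. The stubbed response and the key ARN are placeholders; in a real application you would get the response from an SDK call (for example, boto3's `head_object`, which surfaces this header as the `SSEKMSKeyId` response field), as shown in the commented-out lines.

```python
# Sketch: read the KMS key ID from a HeadObject response. The stubbed
# response below stands in for a real SDK call, which is commented out.
def kms_key_id_from_head_object(response):
    # boto3 surfaces the x-amz-server-side-encryption-aws-kms-key-id
    # response header as the SSEKMSKeyId field.
    return response.get("SSEKMSKeyId")

# import boto3
# response = boto3.client("s3").head_object(
#     Bucket="bucket-base-name--zoneid--x-s3", Key="example-object")

stub_response = {
    "ServerSideEncryption": "aws:kms",
    "SSEKMSKeyId": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
}
print(kms_key_id_from_head_object(stub_response))
```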

When you specify an [AWS KMS customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) for encryption in your directory bucket, only use the key ID or key ARN. The key alias format of the KMS key isn't supported.

For more information about KMS keys for SSE-KMS, see [AWS KMS keys](UsingKMSEncryption.md#aws-managed-customer-managed-keys).

## Using SSE-KMS for cross-account operations
<a name="s3-express-bucket-encryption-update-bucket-policy"></a>

When you use encryption for cross-account operations in directory buckets, be aware of the following:
+ If you want to grant cross-account access to your S3 objects, configure the key policy of the customer managed key to allow access from the other account.
+ To specify a customer managed key, you must use a fully qualified KMS key ARN.

## Amazon S3 Bucket Keys
<a name="s3-express-sse-kms-bucket-keys"></a>

S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In these cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

For [Zonal endpoint (object-level) API operations](s3-express-differences.md#s3-express-differences-api-operations) except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), you authenticate and authorize requests through [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) for low latency. We recommend that the bucket's default encryption uses the desired encryption configuration and that you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

S3 Bucket Keys are used for a time-limited period within Amazon S3, further reducing the need for Amazon S3 to make requests to AWS KMS to complete encryption operations. For more information about using S3 Bucket Keys, see [Amazon S3 Bucket Keys](UsingKMSEncryption.md#sse-kms-bucket-keys) and [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).

## Requiring SSE-KMS
<a name="s3-express-require-sse-kms"></a>

To require SSE-KMS for all objects in a particular directory bucket, you can use a bucket policy. For example, the following bucket policy denies the `s3express:CreateSession` permission, which is required to upload a new object (`PutObject`, `CopyObject`, and `CreateMultipartUpload`), to everyone if the `CreateSession` request doesn't include an `x-amz-server-side-encryption-aws-kms-key-id` header that requests SSE-KMS.

------
#### [ JSON ]

****  

```
{
   "Version":"2012-10-17",		 	 	 
   "Id":"UploadObjectPolicy",
   "Statement":[{
         "Sid":"DenyObjectsThatAreNotSSEKMS",
         "Effect":"Deny",
         "Principal":"*",
         "Action":"s3express:CreateSession",
         "Resource":"arn:aws:s3express:us-east-1:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3",
         "Condition":{
            "Null":{
               "s3express:x-amz-server-side-encryption-aws-kms-key-id":"true"
            }
         }
      }
   ]
}
```

------

To require that a particular AWS KMS key be used to encrypt the objects in a bucket, you can use the `s3express:x-amz-server-side-encryption-aws-kms-key-id` condition key. To specify the KMS key, you must use a key Amazon Resource Name (ARN) that is in the `arn:aws:kms:region:acct-id:key/key-id` format. AWS Identity and Access Management (IAM) doesn't validate whether the string for `s3express:x-amz-server-side-encryption-aws-kms-key-id` exists. The AWS KMS key ID that Amazon S3 uses for object encryption must match the AWS KMS key ID in the policy; otherwise, Amazon S3 denies the request.
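As an illustration of that pattern, the following Python sketch builds a deny statement that uses the `StringNotEquals` condition operator with the `s3express:x-amz-server-side-encryption-aws-kms-key-id` condition key. The Region, account ID, bucket name, statement ID, and key ARN are placeholders.

```python
import json

# Placeholder values; replace with your own Region, account ID, bucket,
# and the ARN of the customer managed key that the bucket must use.
required_key_arn = "arn:aws:kms:us-east-1:111122223333:key/KEY-ID"
bucket_resource = ("arn:aws:s3express:us-east-1:111122223333:"
                   "bucket/amzn-s3-demo-bucket--usw2-az1--x-s3")

policy = {
    "Version": "2012-10-17",
    "Id": "UploadObjectPolicy",
    "Statement": [{
        "Sid": "DenyWrongKMSKey",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3express:CreateSession",
        "Resource": bucket_resource,
        # Deny the request unless the header carries exactly this key ARN.
        "Condition": {
            "StringNotEquals": {
                "s3express:x-amz-server-side-encryption-aws-kms-key-id":
                    required_key_arn
            }
        }
    }]
}
print(json.dumps(policy, indent=3))
```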

For more information about how to use SSE-KMS for new object uploads, see [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md).

For a complete list of specific condition keys for directory buckets, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).

## Encryption context
<a name="s3-express-encryption-context"></a>

For directory buckets, an *encryption context* is a set of key-value pairs that contains contextual information about the data. An additional encryption context value is not supported. For more information about the encryption context, see [Encryption context](UsingKMSEncryption.md#encryption-context). 

By default, if you use SSE-KMS on a directory bucket, Amazon S3 uses the bucket Amazon Resource Name (ARN) as the encryption context pair:

```
arn:aws:s3express:region:account-id:bucket/bucket-base-name--zone-id--x-s3
```

Make sure your IAM policies or AWS KMS key policies use your bucket ARN as the encryption context.

You can optionally provide an explicit encryption context pair by using the `x-amz-server-side-encryption-context` header in a Zonal endpoint API request, such as [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html#API_CreateSession_RequestSyntax). The value of this header is a Base64-encoded string of UTF-8-encoded JSON, which contains the encryption context as key-value pairs. For directory buckets, the encryption context must match the default encryption context – the bucket Amazon Resource Name (ARN). Also, because the encryption context is not encrypted, make sure it doesn't include sensitive information.
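As a sketch of how a client might compute that header value, the following Python snippet Base64-encodes the UTF-8 JSON encryption context. The bucket ARN is a placeholder, and the `aws:s3express:arn` context key shown here follows the default context that appears in CloudTrail logs.

```python
import base64
import json

# The bucket ARN is a placeholder; for directory buckets, the encryption
# context must match the default context, which is the bucket ARN.
bucket_arn = ("arn:aws:s3express:us-east-1:111122223333:"
              "bucket/bucket-base-name--use1-az4--x-s3")
encryption_context = {"aws:s3express:arn": bucket_arn}

# Value for the x-amz-server-side-encryption-context request header:
# a Base64-encoded string of UTF-8-encoded JSON.
header_value = base64.b64encode(
    json.dumps(encryption_context).encode("utf-8")).decode("ascii")
print(header_value)
```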

You can use the encryption context to identify and categorize your cryptographic operations. You can also use the default encryption context ARN value to track relevant requests in AWS CloudTrail by viewing which directory bucket ARN was used with which encryption key.

In the `requestParameters` field of a CloudTrail log file, if you use SSE-KMS on a directory bucket, the encryption context value is the ARN of the bucket. 

```
"encryptionContext": {
    "aws:s3express:arn": "arn:aws:s3:::arn:aws:s3express:region:account-id:bucket/bucket-base-name--zone-id--x-s3"
}
```

Also, for object encryption with SSE-KMS in a directory bucket, your AWS KMS CloudTrail events log your bucket ARN instead of your object ARN. 

## Sending requests for AWS KMS encrypted objects
<a name="s3-express-aws-signature-version-4-sse-kms"></a>

Directory buckets can only be accessed through HTTPS (TLS). Also, directory buckets sign requests by using AWS Signature Version 4 (SigV4). For more information about sending requests for AWS KMS encrypted objects, see [Sending requests for AWS KMS encrypted objects](UsingKMSEncryption.md#aws-signature-version-4-sse-kms).

If your object uses SSE-KMS, don't send encryption request headers for `GET` requests and `HEAD` requests. Otherwise, you’ll get an HTTP 400 Bad Request error.

## Auditing SSE-KMS encryption in directory buckets
<a name="s3-express-bucket-encryption-sse-auditing"></a>

To audit the usage of your AWS KMS keys for your SSE-KMS encrypted data, you can use AWS CloudTrail logs. You can get insight into your [cryptographic operations](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#cryptographic-operations), such as [GenerateDataKey](https://docs.aws.amazon.com/kms/latest/developerguide/ct-generatedatakey.html) and [Decrypt](https://docs.aws.amazon.com/kms/latest/developerguide/ct-decrypt.html). CloudTrail supports numerous [attribute values](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_LookupEvents.html) for filtering your search, including event name, user name, and event source.


# Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets
<a name="s3-express-specifying-kms-encryption"></a>

For directory buckets, to encrypt your data with server-side encryption, you can use either server-side encryption with Amazon S3 managed keys (SSE-S3) (the default) or server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). We recommend that the bucket's default encryption uses the desired encryption configuration and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

All Amazon S3 buckets have encryption configured by default, and all new objects that are uploaded to an S3 bucket are automatically encrypted at rest. Server-side encryption with Amazon S3 managed keys (SSE-S3) is the default encryption configuration for every bucket in Amazon S3. If you want to specify a different encryption type for a directory bucket, you can use server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. After you specify a customer managed key for SSE-KMS, you can't change the customer managed key for the bucket's SSE-KMS configuration. When you then specify server-side encryption settings for new objects with SSE-KMS, you must make sure that the encryption key is the same customer managed key that you specified for the directory bucket's default encryption configuration. To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.

You can apply encryption when you are either uploading a new object or copying an existing object. If you change an object's encryption, a new object is created to replace the old one.

You can specify SSE-KMS by using the REST API operations, AWS SDKs, and the AWS Command Line Interface (AWS CLI). 

**Note**  
 For directory buckets, the encryption overriding behaviors are as follows:   
When you use [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) with the REST API to authenticate and authorize Zonal endpoint API requests except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), you can override the encryption settings to SSE-S3 or to SSE-KMS only if you specified the bucket’s default encryption with SSE-KMS previously.
When you use [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) with the AWS CLI or the AWS SDKs to authenticate and authorize Zonal endpoint API requests except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), you can’t override the encryption settings at all.
When you make [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) requests, you can override the encryption settings to SSE-S3 or to SSE-KMS only if you specified the bucket’s default encryption with SSE-KMS previously. When you make [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) requests, you can’t override the encryption settings.
You can use multi-Region AWS KMS keys in Amazon S3. However, Amazon S3 currently treats multi-Region keys as though they were single-Region keys and doesn't use the multi-Region features of the key. For more information, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.
If you want to use a KMS key that's owned by a different account, you must have permission to use the key. For more information about cross-account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. 

## Using the REST API
<a name="s3-express-KMSUsingRESTAPI"></a>

**Note**  
Only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) is supported per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. After you specify SSE-KMS as your bucket's default encryption configuration with a customer managed key, you can't change the customer managed key for the bucket's SSE-KMS configuration.

For [Zonal endpoint (object-level) API operations](s3-express-differences.md#s3-express-differences-api-operations) except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), you authenticate and authorize requests through [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) for low latency. We recommend that the bucket's default encryption uses the desired encryption configurations and you don't override the bucket default encryption in your `CreateSession` requests or `PUT` object requests. Then, new objects are automatically encrypted with the desired encryption settings. To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session. For more information about the encryption overriding behaviors in directory buckets, see [Specifying server-side encryption with AWS KMS for new object uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-specifying-kms-encryption.html).

In the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)) using the REST API, you can't override the values of the encryption settings (`x-amz-server-side-encryption`, `x-amz-server-side-encryption-aws-kms-key-id`, `x-amz-server-side-encryption-context`, and `x-amz-server-side-encryption-bucket-key-enabled`) from the `CreateSession` request. You don't need to explicitly specify these encryption settings values in Zonal endpoint API calls, and Amazon S3 will use the encryption settings values from the `CreateSession` request to protect new objects in the directory bucket. 

**Note**  
When you use the AWS CLI or the AWS SDKs, for `CreateSession`, the session token refreshes automatically to avoid service interruptions when a session expires. The AWS CLI or the AWS SDKs use the bucket's default encryption configuration for the `CreateSession` request. It's not supported to override the encryption settings values in the `CreateSession` request. Also, in the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)), it's not supported to override the values of the encryption settings from the `CreateSession` request. 

For [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), to encrypt new object copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). Then, when you specify server-side encryption settings for new object copies with SSE-KMS, you must make sure the encryption key is the same customer managed key that you specified for the directory bucket's default encryption configuration. For [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), to encrypt new object part copies in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk)). You can't specify server-side encryption settings for new object part copies with SSE-KMS in the [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html) request headers. Also, the encryption settings that you provide in the [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) request must match the default encryption configuration of the destination bucket. 
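To illustrate the `CopyObject` case, the following Python sketch assembles boto3-style copy parameters and checks that the key in the request matches the bucket's configured key. All names and ARNs are placeholders, and the request itself is left commented out.

```python
# Placeholder for the customer managed key in the destination bucket's
# default encryption configuration.
bucket_default_kms_key = "arn:aws:kms:us-east-1:111122223333:key/KEY-ID"

copy_params = {
    "Bucket": "dest-bucket-base-name--zoneid--x-s3",
    "Key": "copied-object",
    "CopySource": {"Bucket": "src-bucket", "Key": "example-object"},
    "ServerSideEncryption": "aws:kms",
    # Must be the same key as the bucket's default encryption configuration.
    "SSEKMSKeyId": bucket_default_kms_key,
}

# A defensive check an application might make before sending the request:
assert copy_params["SSEKMSKeyId"] == bucket_default_kms_key

# import boto3
# boto3.client("s3").copy_object(**copy_params)
print(copy_params["SSEKMSKeyId"])
```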



**Topics**
+ [Amazon S3 REST API operations that support SSE-KMS](#s3-express-sse-request-headers-kms)
+ [Encryption context (`x-amz-server-side-encryption-context`)](#s3-express-s3-kms-encryption-context)
+ [AWS KMS key ID (`x-amz-server-side-encryption-aws-kms-key-id`)](#s3-express-s3-kms-key-id-api)
+ [S3 Bucket Keys (`x-amz-server-side-encryption-bucket-key-enabled`)](#s3-express-bucket-key-api)

### Amazon S3 REST API operations that support SSE-KMS
<a name="s3-express-sse-request-headers-kms"></a>

The following object-level REST API operations in directory buckets accept the `x-amz-server-side-encryption`, `x-amz-server-side-encryption-aws-kms-key-id`, and `x-amz-server-side-encryption-context` request headers.
+ [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) – When you use Zonal endpoint (object-level) API operations (except `CopyObject` and `UploadPartCopy`), you can specify these request headers. 
+ [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html) – When you upload data by using the `PUT` API operation, you can specify these request headers. 
+ [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) – When you copy an object, you have both a source object and a target object. When you pass SSE-KMS headers with the `CopyObject` operation, they're applied only to the target object.
+ [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html) – When you upload large objects by using the multipart upload API operation, you specify these headers in the `CreateMultipartUpload` request.
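As a sketch of how these headers look in an SDK request, the following Python snippet assembles boto3-style `put_object` parameters (in boto3, `ServerSideEncryption` maps to the `x-amz-server-side-encryption` header and `SSEKMSKeyId` maps to `x-amz-server-side-encryption-aws-kms-key-id`). The bucket and key names are placeholders, and the request itself is left commented out.

```python
# Placeholder bucket, object key, and KMS key ID.
put_object_params = {
    "Bucket": "bucket-base-name--zoneid--x-s3",
    "Key": "example-object",
    "Body": b"example data",
    # Maps to the x-amz-server-side-encryption request header.
    "ServerSideEncryption": "aws:kms",
    # Maps to the x-amz-server-side-encryption-aws-kms-key-id request header.
    "SSEKMSKeyId": "your-kms-customer-managed-key-id",
}

# import boto3
# boto3.client("s3").put_object(**put_object_params)
print(sorted(put_object_params))
```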

The response headers of the following REST API operations return the `x-amz-server-side-encryption` header when an object is stored by using server-side encryption.
+ [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html)
+ [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+ [POST Object](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html)
+ [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)
+ [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)

**Important**  
All `GET` and `PUT` requests for an object protected by AWS KMS fail if you don't make these requests by using Transport Layer Security (TLS) or Signature Version 4.  
If your object uses SSE-KMS, don't send encryption request headers for `GET` requests and `HEAD` requests, or you’ll get an HTTP `400 Bad Request` error.

### Encryption context (`x-amz-server-side-encryption-context`)
<a name="s3-express-s3-kms-encryption-context"></a>

If you specify `x-amz-server-side-encryption:aws:kms`, you can optionally provide an explicit encryption context with the `x-amz-server-side-encryption-context` header. For directory buckets, an encryption context is a set of key-value pairs that contain contextual information about the data. The value must match the default encryption context, which is the bucket Amazon Resource Name (ARN). Additional encryption context values aren't supported. 

For information about the encryption context in directory buckets, see [Encryption context](s3-express-UsingKMSEncryption.md#s3-express-encryption-context). For general information about the encryption context, see [AWS Key Management Service Concepts - Encryption context](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context) in the *AWS Key Management Service Developer Guide*. 
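The `x-amz-server-side-encryption-context` header carries the encryption context as base64-encoded JSON. The following is a minimal sketch of that encoding, assuming the context uses the `aws:s3:arn` key that S3 uses for SSE-KMS encryption contexts (verify the exact key name against the API reference; the bucket ARN is a placeholder):

```python
import base64
import json

# Placeholder directory bucket ARN; for directory buckets, the encryption
# context value must match the bucket ARN (the default encryption context).
bucket_arn = "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3"

# Assumed key name, based on the encryption context S3 uses for SSE-KMS.
context = {"aws:s3:arn": bucket_arn}

# The header value is the JSON-serialized context, base64-encoded.
header_value = base64.b64encode(json.dumps(context).encode("utf-8")).decode("ascii")

# Decoding the header value recovers the original context.
decoded = json.loads(base64.b64decode(header_value))
assert decoded == context
```

Because the context must equal the default (the bucket ARN), sending the header is optional; S3 applies the same context if you omit it.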

### AWS KMS key ID (`x-amz-server-side-encryption-aws-kms-key-id`)
<a name="s3-express-s3-kms-key-id-api"></a>

You can use the `x-amz-server-side-encryption-aws-kms-key-id` header to specify the ID of the customer managed key that's used to protect the data.

Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.

To identify the customer managed key that's used in the bucket's SSE-KMS configuration, make a `HeadObject` API operation request and check the value of the `x-amz-server-side-encryption-aws-kms-key-id` response header.

To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.

For more information about AWS KMS keys in directory buckets, see [AWS KMS keys](s3-express-UsingKMSEncryption.md#s3-express-aws-managed-customer-managed-keys). 

### S3 Bucket Keys (`x-amz-server-side-encryption-aws-bucket-key-enabled`)
<a name="s3-express-bucket-key-api"></a>

S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can’t be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In these cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object. For more information, see [Encryption context](s3-express-UsingKMSEncryption.md#s3-express-encryption-context). 

## Using the AWS CLI
<a name="s3-express-KMSUsingCLI"></a>

**Note**  
When you use the AWS CLI, for `CreateSession`, the session token refreshes automatically to avoid service interruptions when a session expires. You can't override the encryption settings in the `CreateSession` request. Also, in the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)), you can't override the encryption settings that are established by the `CreateSession` request.   
To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session.

To use the following example AWS CLI commands, replace the `user input placeholders` with your own information.

When you upload a new object or copy an existing object, you can specify the use of server-side encryption with AWS KMS keys to encrypt your data. To do this, use the `put-bucket-encryption` command to set the directory bucket's default encryption configuration to SSE-KMS (`aws:kms`). Specify `aws:kms` as the server-side encryption algorithm, and use `--ssekms-key-id example-key-id` to provide the [customer managed AWS KMS key](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#customer-cmk) that you created. If you specify `aws:kms`, you must provide the AWS KMS key ID of your customer managed key. Directory buckets don't use an AWS managed key. For an example command, see [Using the AWS CLI](s3-express-bucket-encryption.md#s3-express-default-bucket-encryption-cli). 
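As a sketch of what that default encryption configuration looks like, the following builds the `ServerSideEncryptionConfiguration` payload that `put-bucket-encryption` expects. The KMS key ARN is a placeholder; substitute your own customer managed key.

```python
import json

# Placeholder ARN for a customer managed AWS KMS key.
kms_key_arn = "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Default encryption configuration for a directory bucket: SSE-KMS with a
# customer managed key. S3 Bucket Keys are always enabled for directory buckets.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": kms_key_arn,
            },
            "BucketKeyEnabled": True,
        }
    ]
}

# This JSON is what you would pass to the command's
# --server-side-encryption-configuration parameter.
payload = json.dumps(encryption_config, indent=2)
```

You would then supply `payload` to `aws s3api put-bucket-encryption --bucket bucket-base-name--zone-id--x-s3 --server-side-encryption-configuration '<payload>'`.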

Then, when you upload a new object with the following command, Amazon S3 uses the bucket settings for default encryption to encrypt the object by default.

```
aws s3api put-object --bucket bucket-base-name--zone-id--x-s3 --key example-object-key --body filepath
```

You don't need to add `--bucket-key-enabled` explicitly in your Zonal endpoint API operation commands, because S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can’t be disabled. As noted earlier, S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects to or from directory buckets through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md); in those cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

You can copy an object from a source bucket (for example, a general purpose bucket) to a new bucket (for example, a directory bucket) and use SSE-KMS encryption for the destination objects. To do this, use the `put-bucket-encryption` command to set the default encryption configuration of the destination bucket (for example, a directory bucket) as SSE-KMS (`aws:kms`). For an example command, see [Using the AWS CLI](s3-express-bucket-encryption.md#s3-express-default-bucket-encryption-cli). Then, when you copy an object with the following command, Amazon S3 uses the bucket settings for default encryption to encrypt the object by default.

```
aws s3api copy-object --copy-source amzn-s3-demo-bucket/example-object-key --bucket bucket-base-name--zone-id--x-s3 --key example-object-key  
```

## Using the AWS SDKs
<a name="s3-express-kms-using-sdks"></a>

When using AWS SDKs, you can request Amazon S3 to use AWS KMS keys for server-side encryption. The following examples show how to use SSE-KMS with the AWS SDKs for Java and .NET. For information about other SDKs, see [Sample code and libraries](https://aws.amazon.com/code) on the AWS Developer Center.

**Note**  
When you use the AWS SDKs, for `CreateSession`, the session token refreshes automatically to avoid service interruptions when a session expires. You can't override the encryption settings in the `CreateSession` request. Also, in the Zonal endpoint API calls (except [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html) and [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)), you can't override the encryption settings that are established by the `CreateSession` request.   
To encrypt new objects in a directory bucket with SSE-KMS, you must specify SSE-KMS as the directory bucket's default encryption configuration with a KMS key (specifically, a customer managed key). Then, when a session is created for Zonal endpoint API operations, new objects are automatically encrypted and decrypted with SSE-KMS and S3 Bucket Keys during the session.  
For more information about using AWS SDKs to set the default encryption configuration of a directory bucket as SSE-KMS, see [Using the AWS SDKs](s3-express-bucket-encryption.md#s3-express-kms-put-bucket-encryption-using-sdks).

**Important**  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys. For more information about these keys, see [Symmetric encryption KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#symmetric-cmks) in the *AWS Key Management Service Developer Guide*.

For more information about creating customer managed keys, see [Programming the AWS KMS API](https://docs.aws.amazon.com/kms/latest/developerguide/programming-top.html) in the *AWS Key Management Service Developer Guide*.

## Encryption in transit
<a name="s3-express-ecnryption-transit"></a>

Directory buckets use Regional and Zonal API endpoints. Depending on the Amazon S3 API operation that you use, either a Regional or Zonal endpoint is required. You can access Zonal and Regional endpoints through a gateway virtual private cloud (VPC) endpoint. There is no additional charge for using gateway endpoints. To learn more about Regional and Zonal API endpoints, see [Networking for directory buckets](s3-express-networking.md). 

## Data deletion
<a name="s3-express-data-deletion"></a>

You can delete one or more objects directly from your directory buckets by using the Amazon S3 console, AWS SDKs, AWS Command Line Interface (AWS CLI), or Amazon S3 REST API. Because all objects in your directory buckets incur storage costs, we recommend deleting objects that you no longer need.

Deleting an object that's stored in a directory bucket also recursively deletes any parent directories, if those parent directories don't contain any objects other than the object that's being deleted.

**Note**  
Multi-factor authentication (MFA) delete and S3 Versioning are not supported for S3 Express One Zone. 

# Authenticating and authorizing requests
<a name="s3-express-authenticating-authorizing"></a>

By default, directory buckets are private and can be accessed only by users who are explicitly granted access. The access control boundary for directory buckets is set only at the bucket level. In contrast, the access control boundary for general purpose buckets can be set at the bucket, prefix, or object tag level. This difference means that directory buckets are the only resource that you can include in bucket policies or IAM identity policies for S3 Express One Zone access. 

Amazon S3 Express One Zone supports both AWS Identity and Access Management (AWS IAM) authorization and session-based authorization: 
+ To use Regional endpoint API operations (bucket-level, or control plane, operations) with S3 Express One Zone, you use the IAM authorization model, which doesn't involve session management. Permissions are granted for actions individually. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
+ To use Zonal endpoint API operations (object-level, or data plane, operations), except for `CopyObject` and `HeadBucket`, you use the `CreateSession` API operation to create and manage sessions that are optimized for low-latency authorization of data requests. To retrieve and use a session token, you must allow the `s3express:CreateSession` action for your directory bucket in an identity-based policy or a bucket policy. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). If you're accessing S3 Express One Zone in the Amazon S3 console, through the AWS Command Line Interface (AWS CLI), or by using the AWS SDKs, S3 Express One Zone creates a session on your behalf.

With the `CreateSession` API operation, you authenticate and authorize requests through a new session-based mechanism. You can use `CreateSession` to request temporary credentials that provide low-latency access to your bucket. These temporary credentials are scoped to a specific directory bucket. 

To work with `CreateSession`, we recommend using the latest version of the AWS SDKs or using the AWS Command Line Interface (AWS CLI). The supported AWS SDKs and the AWS CLI handle session establishment, refreshment, and termination on your behalf. 

You use session tokens with only Zonal (object-level) operations (except for `CopyObject` and `HeadBucket`) to distribute the latency that’s associated with authorization over a number of requests in a session. For Regional endpoint API operations (bucket-level operations), you use IAM authorization, which doesn’t involve managing a session. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md) and [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md). 

## How API operations are authenticated and authorized
<a name="s3-express-security-iam-authorization"></a>

The following table lists authentication and authorization information for directory bucket API operations. For each API operation, the table shows the API operation name, IAM policy action, endpoint type (Regional or Zonal), and authorization mechanism (IAM or session-based). This table also indicates whether cross-account access is supported. Access to bucket-level actions can be granted only in IAM identity-based policies (user or role), not bucket policies.


| API | Endpoint type | IAM action | Cross-account access | 
| --- | --- | --- | --- | 
| CreateBucket | Regional | s3express:CreateBucket | No | 
| DeleteBucket | Regional | s3express:DeleteBucket | No | 
| DeleteBucketInventoryConfiguration | Regional | s3express:PutInventoryConfiguration | No | 
| DeleteBucketPolicy | Regional | s3express:DeleteBucketPolicy | No | 
| GetBucketInventoryConfiguration | Regional | s3express:GetInventoryConfiguration | No | 
| GetBucketPolicy | Regional | s3express:GetBucketPolicy | No | 
| ListBucketInventoryConfigurations | Regional | s3express:GetInventoryConfiguration | No | 
| ListDirectoryBuckets | Regional | s3express:ListAllMyDirectoryBuckets | No | 
| PutBucketInventoryConfiguration | Regional | s3express:PutInventoryConfiguration | No | 
| PutBucketPolicy | Regional | s3express:PutBucketPolicy | No | 
| CreateSession | Zonal | s3express:CreateSession | Yes | 
| CopyObject | Zonal | s3express:CreateSession | Yes  | 
| DeleteObject | Zonal | s3express:CreateSession | Yes  | 
| DeleteObjects | Zonal | s3express:CreateSession | Yes  | 
| HeadObject | Zonal | s3express:CreateSession | Yes  | 
| PutObject | Zonal | s3express:CreateSession | Yes | 
| RenameObject | Zonal | s3express:CreateSession | No | 
| GetObjectAttributes | Zonal | s3express:CreateSession | Yes | 
| ListObjectsV2 | Zonal | s3express:CreateSession | Yes  | 
| HeadBucket | Zonal | s3express:CreateSession | Yes  | 
| CreateMultipartUpload | Zonal | s3express:CreateSession | Yes | 
| UploadPart | Zonal | s3express:CreateSession | Yes  | 
| UploadPartCopy | Zonal | s3express:CreateSession | Yes  | 
| CompleteMultipartUpload | Zonal | s3express:CreateSession | Yes  | 
| AbortMultipartUpload | Zonal | s3express:CreateSession | Yes  | 
| ListParts | Zonal | s3express:CreateSession | Yes  | 
| ListMultipartUploads | Zonal | s3express:CreateSession | Yes  | 
| ListAccessPointsForDirectoryBuckets | Zonal | s3express:ListAccessPointsForDirectoryBuckets | Yes | 
| GetAccessPointScope | Zonal | s3express:GetAccessPointScope | Yes | 
| PutAccessPointScope | Zonal | s3express:PutAccessPointScope | Yes | 
| DeleteAccessPointScope | Zonal | s3express:DeleteAccessPointScope | Yes | 

**Topics**
+ [How API operations are authenticated and authorized](#s3-express-security-iam-authorization)
+ [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md)
+ [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md)

# Authorizing Regional endpoint API operations with IAM
<a name="s3-express-security-iam"></a>

AWS Identity and Access Management (IAM) is an AWS service that helps administrators securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to use Amazon S3 resources in directory buckets and S3 Express One Zone operations. You can use IAM for no additional charge. 

By default, users don't have permissions for directory buckets. To grant access permissions for directory buckets, you can use IAM to create users, groups, or roles and attach permissions to those identities. For more information about IAM, see [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. 

To provide access, you can add permissions to your users, groups, or roles through the following means:
+ **Users and groups in AWS IAM Identity Center** – Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-create-a-permission-set.html) in the *AWS IAM Identity Center User Guide*.
+ **Users managed in IAM through an identity provider** – Create a role for identity federation. Follow the instructions in [Creating a role for a third-party identity provider (federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ **IAM roles and users** – Create a role that your user can assume. Follow the instructions in [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.

For more information about IAM for S3 Express One Zone, see the following topics.

**Topics**
+ [Principals](#s3-express-security-iam-principals)
+ [Resources](#s3-express-security-iam-resources)
+ [Actions for directory buckets](#s3-express-security-iam-actions)
+ [IAM identity-based policies for directory buckets](s3-express-security-iam-identity-policies.md)
+ [Example bucket policies for directory buckets](s3-express-security-iam-example-bucket-policies.md)
+ [AWS managed policies for Amazon S3 Express One Zone](s3-express-one-zone-security-iam-awsmanpol.md)

## Principals
<a name="s3-express-security-iam-principals"></a>

When you create a resource-based policy to grant access to your buckets, you must use the `Principal` element to specify the person or application that can make a request for an action or operation on that resource. For directory bucket policies, you can use the following principals:
+ An AWS account
+ An IAM user
+ An IAM role
+ A federated user

For more information, see [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) in the *IAM User Guide*.

## Resources
<a name="s3-express-security-iam-resources"></a>

Amazon Resource Names (ARNs) for directory buckets contain the `s3express` namespace, the AWS Region, the AWS account ID, and the directory bucket name, which includes the zone ID (an Availability Zone ID or Local Zone ID).

To access and perform actions on your directory bucket, you must use the following ARN format:

```
arn:aws:s3express:region:account-id:bucket/base-bucket-name--zone-id--x-s3
```

To access and perform actions on your access point for a directory bucket, you must use the following ARN format:

```
arn:aws:s3express:region:account-id:accesspoint/accesspoint-basename--zone-id--xa-s3
```
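The two formats can be illustrated with a small sketch that assembles the ARNs from their parts (the helper names, region, account ID, and base names below are hypothetical placeholders):

```python
def directory_bucket_arn(region: str, account_id: str, base_name: str, zone_id: str) -> str:
    """Build a directory bucket ARN; the bucket name embeds the zone ID."""
    return f"arn:aws:s3express:{region}:{account_id}:bucket/{base_name}--{zone_id}--x-s3"


def directory_access_point_arn(region: str, account_id: str, base_name: str, zone_id: str) -> str:
    """Build the ARN for an access point attached to a directory bucket."""
    return f"arn:aws:s3express:{region}:{account_id}:accesspoint/{base_name}--{zone_id}--xa-s3"


bucket_arn = directory_bucket_arn("us-west-2", "111122223333", "amzn-s3-demo-bucket", "usw2-az1")
assert bucket_arn == (
    "arn:aws:s3express:us-west-2:111122223333:"
    "bucket/amzn-s3-demo-bucket--usw2-az1--x-s3"
)
```

Note the different suffixes: `--x-s3` for buckets and `--xa-s3` for access points.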

For more information about ARNs, see [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) in the *IAM User Guide*. For more information about resources, see [IAM JSON Policy Elements: Resource](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_resource.html) in the *IAM User Guide*.

## Actions for directory buckets
<a name="s3-express-security-iam-actions"></a>

In an IAM identity-based policy or resource-based policy, you define which S3 actions are allowed or denied. Actions correspond to specific API operations. With directory buckets, you must use the S3 Express One Zone namespace, `s3express`, to grant permissions.

When you allow the `s3express:CreateSession` permission, the `CreateSession` API operation retrieves a temporary session token for all Zonal endpoint (object-level) API operations. The session token provides credentials that are used for all other Zonal endpoint API operations. As a result, you don't grant access permissions to Zonal API operations individually with IAM policies. Instead, `CreateSession` enables access for all object-level operations. For the list of Zonal API operations and permissions, see [Authenticating and authorizing requests](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-authenticating-authorizing.html). 

To learn more about the `CreateSession` API operation, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon Simple Storage Service API Reference*.

You can specify the following actions in the `Action` element of an IAM policy statement. Use policies to grant permissions to perform an operation in AWS. When you use an action in a policy, you usually allow or deny access to the API operation with the same name. However, in some cases, a single action controls access to more than one API operation. Access to bucket-level actions can be granted only in IAM identity-based policies (user or role), not in bucket policies.

For more information about how to configure access point policies, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md).

For more information, see [Actions, resources, and condition keys for Amazon S3 Express](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3express.html). 

# IAM identity-based policies for directory buckets
<a name="s3-express-security-iam-identity-policies"></a>

Before you can create directory buckets, you must grant the necessary permissions to your AWS Identity and Access Management (IAM) role or users. This example policy allows access to the `CreateSession` API operation (for use with Zonal endpoint [object level] API operations) and all of the Regional endpoint (bucket-level) API operations. This policy allows the `CreateSession` API operation for use with all directory buckets, but the Regional endpoint API operations are allowed only for use with the specified directory bucket. To use this example policy, replace the `user input placeholders` with your own information.
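A minimal sketch of such a policy might look like the following. The Region, account ID, bucket name, and the specific Regional actions listed are placeholders and assumptions; verify the full action list against [Actions, resources, and condition keys for Amazon S3 Express](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3express.html).

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCreateSessionOnAllDirectoryBuckets",
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": "*"
        },
        {
            "Sid": "AllowRegionalOperationsOnOneDirectoryBucket",
            "Effect": "Allow",
            "Action": [
                "s3express:CreateBucket",
                "s3express:DeleteBucket",
                "s3express:PutBucketPolicy",
                "s3express:GetBucketPolicy",
                "s3express:DeleteBucketPolicy"
            ],
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3"
        }
    ]
}
```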

# Example bucket policies for directory buckets
<a name="s3-express-security-iam-example-bucket-policies"></a>

This section provides example directory bucket policies. To use these policies, replace the `user input placeholders` with your own information.

The following example bucket policy allows AWS account ID `111122223333` to use the `CreateSession` API operation for the specified directory bucket. When no session mode is specified, the session will be created with the maximum allowable privilege (attempting `ReadWrite` first, then `ReadOnly` if not permitted). This policy grants access to the Zonal endpoint (object level) API operations. 

**Example – Bucket policy to allow `CreateSession` calls**    
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadWriteAccess",
            "Effect": "Allow",
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:root"
                ]
            },
            "Action": [
                "s3express:CreateSession"
            ]
        }
    ]
}
```

**Example – Bucket policy to allow `CreateSession` calls with a `ReadOnly` session**  
The following example bucket policy allows AWS account ID `111122223333` to use the `CreateSession` API operation. This policy uses the `s3express:SessionMode` condition key with the `ReadOnly` value to set a read-only session.     
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "111122223333"
            },
            "Action": "s3express:CreateSession",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "s3express:SessionMode": "ReadOnly"
                }
            }
        }
    ]
}
```

**Example – Bucket policy to allow cross-account access for `CreateSession` calls**  
The following example bucket policy allows AWS account ID `111122223333` to use the `CreateSession` API operation for the specified directory bucket that's owned by AWS account ID *`444455556666`*.    
****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccount",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": [
                "s3express:CreateSession"
            ],
            "Resource": "arn:aws:s3express:us-west-2:444455556666:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3"
        }
    ]
}
```

# AWS managed policies for Amazon S3 Express One Zone
<a name="s3-express-one-zone-security-iam-awsmanpol"></a>

An AWS managed policy is a standalone policy that is created and administered by AWS. AWS managed policies are designed to provide permissions for many common use cases so that you can start assigning permissions to users, groups, and roles.

Keep in mind that AWS managed policies might not grant least-privilege permissions for your specific use cases because they're available for all AWS customers to use. We recommend that you reduce permissions further by defining [customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#customer-managed-policies) that are specific to your use cases.

You cannot change the permissions defined in AWS managed policies. If AWS updates the permissions defined in an AWS managed policy, the update affects all principal identities (users, groups, and roles) that the policy is attached to. AWS is most likely to update an AWS managed policy when a new AWS service is launched or new API operations become available for existing services.

For more information, see [AWS managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html#aws-managed-policies) in the *IAM User Guide*.

## AWS managed policy: AmazonS3ExpressFullAccess
<a name="s3-express-one-zone-security-iam-awsmanpol-amazons3expressfullaccess"></a>

You can attach the `AmazonS3ExpressFullAccess` policy to your IAM identities. This policy grants full access to Amazon S3 Express One Zone directory buckets and operations. It allows all actions under the `s3express` service prefix on all resources.

This policy is intended for users or roles that need unrestricted access to directory buckets. This policy covers only Amazon S3 Express One Zone operations. For standard Amazon S3 operations, you need additional policies.

To view the permissions for this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3ExpressFullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3ExpressFullAccess.html) in the AWS Managed Policy Reference.

## AWS managed policy: AmazonS3ExpressReadOnlyAccess
<a name="s3-express-one-zone-security-iam-awsmanpol-amazons3expressreadonlyaccess"></a>

You can attach the `AmazonS3ExpressReadOnlyAccess` policy to your IAM identities. This policy grants permissions that allow `ReadOnly` access to Amazon S3 Express One Zone directory buckets.

**Note**  
The `CreateSession` action supports the `SessionMode` condition key which can be set to `ReadOnly` or `ReadWrite`. This policy uses `SessionMode` for a `ReadOnly` session.

To view the permissions for this policy, see [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3ExpressReadOnlyAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AmazonS3ExpressReadOnlyAccess.html) in the AWS Managed Policy Reference.

## Amazon S3 Express One Zone updates to AWS managed policies
<a name="s3-express-one-zone-security-iam-awsmanpol-updates"></a>

View details about updates to AWS managed policies for Amazon S3 Express One Zone since this service began tracking these changes.


| Change | Description | Date | 
| --- | --- | --- | 
|  Amazon S3 Express One Zone added `AmazonS3ExpressFullAccess`.  |  Amazon S3 Express One Zone added a new AWS managed policy called `AmazonS3ExpressFullAccess`. This policy grants permissions that allow full access to Amazon S3 Express One Zone directory buckets and operations.  |  April 03, 2026  | 
|  Amazon S3 Express One Zone added `AmazonS3ExpressReadOnlyAccess`.  |  Amazon S3 Express One Zone added a new AWS managed policy called `AmazonS3ExpressReadOnlyAccess`. This policy grants permissions that allow read-only access to Amazon S3 Express One Zone directory buckets.  |  April 03, 2026  | 
|  Amazon S3 Express One Zone started tracking changes.  |  Amazon S3 Express One Zone started tracking changes for its AWS managed policies.  |  April 03, 2026  | 

# Authorizing Zonal endpoint API operations with `CreateSession`
<a name="s3-express-create-session"></a>

To use Zonal endpoint API operations (object-level, or data plane operations), except for `CopyObject` and `HeadBucket`, you use the `CreateSession` API operation to create and manage sessions that are optimized for low-latency authorization of data requests. To retrieve and use a session token, you must allow the `s3express:CreateSession` action for your directory bucket in an identity-based policy or a bucket policy. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). If you're accessing S3 Express One Zone in the Amazon S3 console, through the AWS Command Line Interface (AWS CLI), or by using the AWS SDKs, S3 Express One Zone creates a session on your behalf. However, you can't modify the `SessionMode` parameter when using the AWS CLI or AWS SDKs. 

If you use the Amazon S3 REST API, you can then use the `CreateSession` API operation to obtain temporary security credentials that include an access key ID, a secret access key, a session token, and an expiration time. The temporary credentials provide the same permissions as long-term security credentials, such as IAM user credentials, but temporary security credentials must include a session token.
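
The temporary credentials have a fixed shape. As a minimal sketch (the field names follow the `Credentials` structure in the `CreateSession` API reference; the helper itself is illustrative, not an SDK API), you can confirm that a returned credential set is complete before signing with it:

```python
# Hedged sketch: the field names below match the Credentials structure in the
# CreateSession API reference; the completeness check itself is illustrative.
def has_complete_session_credentials(creds: dict) -> bool:
    """Return True if a CreateSession Credentials dict carries every field
    needed to sign a request: key pair, session token, and expiration."""
    required = ("AccessKeyId", "SecretAccessKey", "SessionToken", "Expiration")
    return all(field in creds for field in required)
```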

**Session Mode**  
Session mode defines the scope of the session. If the session mode isn't specified in the `CreateSession` API request, the `CreateSession` action attempts to create the session with the maximum allowable privilege: it tries `ReadWrite` first, and falls back to `ReadOnly` only if `ReadWrite` isn't permitted by the applicable policies. In your bucket policy, you can specify the `s3express:SessionMode` condition key to explicitly control who can create a `ReadWrite` or `ReadOnly` session. For more information about `ReadWrite` or `ReadOnly` sessions, see the `x-amz-create-session-mode` parameter for [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon S3 API Reference*. For more information about the bucket policy to create, see [Example bucket policies for directory buckets](s3-express-security-iam-example-bucket-policies.md).

**Session Token**  
When you make a call by using temporary security credentials, the call must include a session token. The session token is returned along with the temporary credentials. A session token is scoped to your directory bucket and is used to verify that the security credentials are valid and haven't expired. To protect your sessions, temporary security credentials expire after 5 minutes. 
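
Because the credentials expire after 5 minutes, refresh them shortly before expiry rather than waiting for a failed request. The following sketch shows that refresh logic in isolation; the 30-second safety margin is an arbitrary choice for illustration, not an SDK default.

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch of proactive refresh: session credentials expire after
# 5 minutes, so treat them as stale shortly before that deadline. The
# 30-second margin is an illustrative choice, not an SDK default.
SESSION_LIFETIME = timedelta(minutes=5)
REFRESH_MARGIN = timedelta(seconds=30)

def needs_refresh(issued_at: datetime, now: datetime) -> bool:
    """True once the session is within the safety margin of its expiry."""
    return now >= issued_at + SESSION_LIFETIME - REFRESH_MARGIN
```

The supported AWS SDKs apply equivalent logic for you, so this check matters mainly if you call `CreateSession` directly through the REST API.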

**`CopyObject` and `HeadBucket`**  
Temporary security credentials are scoped to a specific directory bucket and are automatically enabled for all Zonal (object-level) operation API calls to a given directory bucket. Unlike other Zonal endpoint API operations, `CopyObject` and `HeadBucket` don't use `CreateSession` authentication. All `CopyObject` and `HeadBucket` requests must be authenticated and signed by using IAM credentials. However, `CopyObject` and `HeadBucket` are still authorized by `s3express:CreateSession`, like other Zonal endpoint API operations.

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) in the *Amazon Simple Storage Service API Reference*.

# Security best practices for directory buckets
<a name="s3-express-security-best-practices"></a>

There are a number of security features to consider when working with directory buckets. The following best practices are general guidelines and don't represent a complete security solution. Because these best practices might not be appropriate or sufficient for your environment, treat them as helpful recommendations rather than prescriptions.

## Default Block Public Access and Object Ownership settings
<a name="s3-express-security-best-practices-manage-access"></a>

 Directory buckets support S3 Block Public Access and S3 Object Ownership. These S3 features are used to audit and manage access to your buckets and objects. 

By default, all Block Public Access settings for directory buckets are enabled. In addition, Object Ownership is set to bucket owner enforced, which means that access control lists (ACLs) are disabled. These settings can't be modified. For more information about these features, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md) and [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**Note**  
You can't grant access to objects stored in directory buckets. You can grant access only to your directory buckets. The authorization model for S3 Express One Zone is different from the authorization model for Amazon S3. For more information, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md).

## Authentication and authorization
<a name="s3-express-security-best-practices-create-session"></a>

The authentication and authorization mechanisms for directory buckets differ, depending on whether you are making requests to Zonal endpoint API operations or Regional endpoint API operations. Zonal API operations are object-level (data plane) operations. Regional API operations are bucket-level (control plane) operations. 

You authenticate and authorize requests to Zonal endpoint API operations through a new session-based mechanism that is optimized to provide the lowest latency. With session-based authentication, the AWS SDKs use the `CreateSession` API operation to request temporary credentials that provide low-latency access to your directory bucket. These temporary credentials are scoped to a specific directory bucket and expire after 5 minutes. You can use these temporary credentials to sign Zonal (object-level) API calls. For more information, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md).

**Signing requests with session credentials**  
You use your credentials to sign Zonal endpoint (object-level) API requests with AWS Signature Version 4, with `s3express` as the service name. When you sign your requests, use the secret key that's returned from `CreateSession`, and also provide the session token in the `x-amz-s3session-token` header. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html).
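
The signing-key derivation is the standard SigV4 HMAC chain, only with `s3express` as the service name. The following sketch shows just that derivation step; the secret key would come from `CreateSession`, and the session token is sent separately in the request header rather than entering this chain.

```python
import hashlib
import hmac

# Standard SigV4 signing-key derivation, with "s3express" as the service
# name. The secret key comes from CreateSession; the session token travels
# in a request header and is not part of this HMAC chain.
def sigv4_signing_key(secret_key: str, date_stamp: str, region: str,
                      service: str = "s3express") -> bytes:
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date_stamp)  # e.g. "20240115"
    k_region = _hmac(k_date, region)                                   # e.g. "us-east-1"
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")
```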

The [supported AWS SDKs](s3-express-SDKs.md#s3-express-getting-started-accessing-sdks) manage credentials and signing on your behalf. We recommend using the AWS SDKs to refresh credentials and sign requests for you.

**Signing requests with IAM credentials**  
All Regional (bucket-level) API calls must be authenticated and signed by AWS Identity and Access Management (IAM) credentials instead of temporary session credentials. IAM credentials consist of the access key ID and secret access key for the IAM identities. All `CopyObject` and `HeadBucket` requests must also be authenticated and signed by using IAM credentials.

To achieve the lowest latency for your Zonal (object-level) operation calls, we recommend using credentials obtained from calling `CreateSession` to sign your requests, except for requests to `CopyObject` and `HeadBucket`.

## Use AWS CloudTrail
<a name="s3-express-security-best-practices-cloudtrail"></a>

AWS CloudTrail provides a record of the actions taken by a user, a role, or an AWS service in Amazon S3. You can use information collected by CloudTrail to determine the following:
+ The request that was made to Amazon S3
+ The IP address from which the request was made
+ Who made the request
+ When the request was made
+ Additional details about the request

When you set up your AWS account, CloudTrail management events are enabled by default. The following Regional endpoint API operations (bucket-level, or control plane, API operations) are logged to CloudTrail. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)

**Note**  
`ListMultipartUploads` is a Zonal endpoint API operation. However, it is logged to CloudTrail as a management event. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html) in the *Amazon Simple Storage Service API Reference*. 

By default, CloudTrail trails don't log data events, but you can configure trails to log data events for directory buckets that you specify, or to log data events for all the directory buckets in your AWS account. The following Zonal endpoint API operations (object-level, or data plane, API operations) are logged to CloudTrail.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)

For more information about using AWS CloudTrail with directory buckets, see [Logging with AWS CloudTrail for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-cloudtrail-logging.html).

### Implement monitoring by using AWS monitoring tools
<a name="s3-express-security-best-practices-monitoring"></a>

Monitoring is an important part of maintaining the reliability, security, availability, and performance of Amazon S3 and your AWS solutions. AWS provides several tools and services to help you monitor Amazon S3 and your other AWS services. For example, you can monitor Amazon CloudWatch metrics for Amazon S3, particularly the `BucketSizeBytes` and `NumberOfObjects` storage metrics.

Objects stored in directory buckets aren't reflected in the `BucketSizeBytes` and `NumberOfObjects` storage metrics for general purpose buckets. However, CloudWatch does report the `BucketSizeBytes` and `NumberOfObjects` storage metrics for directory buckets. To see the metrics of your choice, you can differentiate between the Amazon S3 storage classes by specifying a `StorageType` dimension. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).

For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md) and [Logging and monitoring in Amazon S3](monitoring-overview.md).

# Managing access to shared datasets in directory buckets with access points
<a name="access-points-directory-buckets"></a>

Amazon S3 Access Points simplify managing data access at scale for shared datasets in Amazon S3. Access points are unique hostnames you create to enforce distinct permissions and network controls for all requests made through an access point. You can create hundreds of access points per bucket, each with a distinct name and permissions customized for each application. Each access point works in conjunction with the bucket policy that is attached to the underlying bucket.

In directory buckets, an access point name consists of a base name that you provide, followed by the Zone ID (AWS Availability Zone or Local Zone) of your directory bucket location and the suffix `--xa-s3`, in the form `accesspointname--zoneID--xa-s3`. After you create an access point, you can't change its name or Zone ID.
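
The name format above can be sketched as a simple string assembly (the helper is illustrative, not an AWS API):

```python
# Minimal sketch: assemble a directory-bucket access point name from a base
# name and the Zone ID of the bucket's location.
def directory_access_point_name(base_name: str, zone_id: str) -> str:
    return f"{base_name}--{zone_id}--xa-s3"
```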

With access points for directory buckets, you can use the access point scope to restrict access to specific prefixes or API operations. You can specify any number of prefixes, but the combined size of all prefixes must be less than 256 bytes.
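
A quick way to check a candidate prefix list against that limit (an illustrative helper, not an AWS API):

```python
# Minimal sketch of the scope-prefix size rule: any number of prefixes is
# allowed, but their combined UTF-8 size must stay under 256 bytes.
def prefixes_within_scope_limit(prefixes: list[str], limit_bytes: int = 256) -> bool:
    return sum(len(p.encode("utf-8")) for p in prefixes) < limit_bytes
```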

You can configure any access point to accept requests only from a virtual private cloud (VPC). This restricts Amazon S3 data access to a private network.

In this section, the topics explain how to use access points for directory buckets. For information about directory buckets, see [Working with directory buckets](directory-buckets-overview.md).

**Topics**
+ [Access points for directory buckets naming rules, restrictions, and limitations](access-points-directory-buckets-restrictions-limitations-naming-rules.md)
+ [Referencing access points for directory buckets](access-points-directory-buckets-naming.md)
+ [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md)
+ [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md)
+ [Monitoring and logging access points for directory buckets](access-points-directory-buckets-monitoring-logging.md)
+ [Creating access points for directory buckets](creating-access-points-directory-buckets.md)
+ [Managing your access points for directory buckets](access-points-directory-buckets-manage.md)
+ [Using tags with S3 Access Points for directory buckets](access-points-db-tagging.md)

# Access points for directory buckets naming rules, restrictions, and limitations
<a name="access-points-directory-buckets-restrictions-limitations-naming-rules"></a>

Access points simplify managing data access at scale for shared datasets in Amazon S3. The following topics provide information about access point naming rules and restrictions and limitations.

**Topics**
+ [Naming rules for access points for directory buckets](#access-points-directory-buckets-names)
+ [Restrictions and limitations for access points for directory buckets](#access-points-directory-buckets-restrictions-limitations)

## Naming rules for access points for directory buckets
<a name="access-points-directory-buckets-names"></a>

An access point must be created in the same zone as its bucket, and the access point name must be unique within that zone.

Access point names must be DNS-compliant and must meet the following conditions:
+ Must begin with a number or lowercase letter
+ The base name you provide must be between 3 and 50 characters long
+ Can't begin or end with a hyphen (`-`)
+ Can't contain underscores (`_`), uppercase letters, spaces, or periods (`.`)
+ Must end with the suffix `--zoneID--xa-s3`
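
The base-name rules above can be captured in a single regular expression, shown here as an illustrative validator (not an AWS API):

```python
import re

# Minimal sketch of the base-name rules: 3-50 characters, lowercase letters,
# numbers, and hyphens only, beginning with a letter or number, and not
# beginning or ending with a hyphen.
def is_valid_base_name(base_name: str) -> bool:
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,48}[a-z0-9]", base_name) is not None
```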

## Restrictions and limitations for access points for directory buckets
<a name="access-points-directory-buckets-restrictions-limitations"></a>

Access points for directory buckets have the following restrictions and limitations:
+ Each access point is associated with one directory bucket. After you create an access point, you can't associate it with a different bucket. However, you can delete an access point and then create a new one with the same name and associate it with a different bucket.
+ After you create an access point, you can't change its virtual private cloud (VPC) configuration.
+ Access point policies are limited to 20 KB in size.
+ Access point scope prefixes are limited to 256 bytes in total size.
+ You can create a maximum of 10,000 access points per AWS account per AWS Region. If you need more than 10,000 access points for a single account in a single Region, you can request a service quota increase. For more information about service quotas and requesting an increase, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *AWS General Reference*.
+ You can only use access points to perform operations on objects. You can't use access points to perform Amazon S3 bucket operations, such as modifying or deleting buckets. For a complete list of supported operations, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).
+ You can refer to access points by name, access point alias, or virtual-hosted-style URI. You cannot address access points by ARN. For more information, see [Referencing access points for directory buckets](access-points-directory-buckets-naming.md).
+ API operations that control access point functionality (for example, `PutAccessPointPolicy` and `GetAccessPointPolicy`) must specify the AWS account that owns the access point.
+ You must use AWS Signature Version 4 when making requests to an access point by using the REST API. For more information about authenticating requests, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.
+ Access points support requests only over HTTPS. Amazon S3 automatically responds to any request made over HTTP with a redirect that upgrades the request to HTTPS.
+ Access points don't support anonymous access.
+ If you create an access point to a bucket that's owned by another account (a cross-account access point), the cross-account access point doesn't grant you access to data until the bucket owner grants you permission to access the bucket. The bucket owner always retains ultimate control over access to the data and must update the bucket policy to authorize requests from the cross-account access point. To view a bucket policy example, see [Configuring IAM policies for using access points for directory buckets](access-points-directory-buckets-policies.md).

# Referencing access points for directory buckets
<a name="access-points-directory-buckets-naming"></a>

After you create an access point, you can use it as an endpoint to perform object operations. For access points for directory buckets, the access point alias is the same as the access point name. You can use the access point name instead of a bucket name for all data operations. For a list of these supported operations, see [Object operations for access points for directory buckets](access-points-directory-buckets-service-api-support.md).

## Referring to access points by virtual-hosted-style URIs
<a name="accessing-directory-bucket-through-s3-access-point"></a>

Access points support only virtual-hosted-style addressing. Access points use the same format as directory bucket endpoints. For more information, see [Regional and Zonal endpoints for directory buckets](s3-express-Regions-and-Zones.md).
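
As an illustration only, the hostname could be assembled as follows. The pattern `name.s3express-zoneID.region.amazonaws.com` is an assumption based on the directory bucket zonal endpoint format; confirm the exact hostname for your Region in the endpoints topic before relying on it.

```python
# Illustrative only: assumes access points follow the directory-bucket zonal
# endpoint pattern <name>.s3express-<zoneID>.<region>.amazonaws.com. Confirm
# the exact hostname format for your Region before relying on it.
def access_point_endpoint(ap_name: str, zone_id: str, region: str) -> str:
    return f"https://{ap_name}.s3express-{zone_id}.{region}.amazonaws.com"
```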

S3 access points don't support access through HTTP. Access points support only secure access through HTTPS.

# Object operations for access points for directory buckets
<a name="access-points-directory-buckets-service-api-support"></a>

You can use access points to access an object using the following S3 data operations.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)

# Configuring IAM policies for using access points for directory buckets
<a name="access-points-directory-buckets-policies"></a>

Access points support AWS Identity and Access Management (IAM) resource policies that allow you to control the use of the access point by resource, user, or other conditions. For an application or user to access objects through an access point, both the access point and the underlying bucket policy must permit the request.

**Important**  
Adding an access point to a directory bucket doesn't change the bucket's behavior when the bucket is accessed directly through the bucket's name. All existing operations against the bucket will continue to work as before. Restrictions that you include in an access point policy or access point scope apply only to requests made through that access point. 

When using IAM resource policies, make sure to resolve security warnings, errors, general warnings, and suggestions from AWS Identity and Access Management Access Analyzer before you save your policy. IAM Access Analyzer runs policy checks to validate your policy against IAM [policy grammar](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html) and [best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html). These checks generate findings and provide recommendations to help you author policies that are functional and conform to security best practices. 

To learn more about validating policies by using IAM Access Analyzer, see [IAM Access Analyzer policy validation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-validation.html) in the *IAM User Guide*. To view a list of the warnings, errors, and suggestions that are returned by IAM Access Analyzer, see [IAM Access Analyzer policy check reference](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html).

## Access points for directory buckets policy examples
<a name="access-points-directory-buckets-policy-examples"></a>

The following access point policies demonstrate how to control requests to a directory bucket. Access point policies require bucket ARNs or access point ARNs. Access point aliases are not supported in policies. Following is an example of an access point ARN:

```
arn:aws:s3express:region:account-id:accesspoint/myaccesspoint--zoneID--xa-s3
```

You can view the access point ARN in the details of an access point. For more information, see [View details for your access points for directory buckets](access-points-directory-buckets-details.md).
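
The ARN format shown in the example can be sketched as a simple string assembly (an illustrative helper, not an AWS API); note that the access point name already carries its `--zoneID--xa-s3` suffix:

```python
# Minimal sketch: assemble a directory-bucket access point ARN. The access
# point name already includes its --zoneID--xa-s3 suffix.
def access_point_arn(region: str, account_id: str, access_point_name: str) -> str:
    return f"arn:aws:s3express:{region}:{account_id}:accesspoint/{access_point_name}"
```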

**Note**  
Permissions granted in an access point policy are effective only if the underlying bucket also allows the same access. You can accomplish this in two ways:  
1. **(Recommended)** Delegate access control from the bucket to the access point, as described in [Delegating access control to access points](#access-points-directory-buckets-delegating-control).
2. Add the same permissions contained in the access point policy to the underlying bucket's policy.

**Example 1 – Service control policy to limit access points to VPC network origins**  
The following service control policy requires that all new access points be created with a virtual private cloud (VPC) network origin. With this policy in place, users in your organization can't create any access point that is accessible from the internet.  

```
{
    "Version": "2012-10-17",
    "Statement": [
    {
        "Effect": "Deny",
        "Action": "s3express:CreateAccessPoint",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {
                "s3express:AccessPointNetworkOrigin": "VPC"
            }
        }
    }
  ]
}
```

**Example 2 – Access point policy to limit bucket access to access points with VPC network origin**  
The following access point policy limits all access to the bucket *amzn-s3-demo-bucket--usw2-az1--x-s3* to access points with a VPC network origin.  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyCreateSessionFromNonVPC",
            "Principal": "*",
            "Action": "s3express:CreateSession",
            "Effect": "Deny",
            "Resource": "arn:aws:s3express:us-east-1:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3",
            "Condition": {
                "StringNotEquals": {
                    "s3express:AccessPointNetworkOrigin": "VPC"
                }
            }
        }
    ]
}
```

## Condition keys
<a name="access-points-directory-buckets-condition-keys"></a>

Access points for directory buckets have condition keys that you can use in IAM policies to control access to your resources. The following condition keys represent only part of an IAM policy. For full policy examples, see [Access points for directory buckets policy examples](#access-points-directory-buckets-policy-examples), [Delegating access control to access points](#access-points-directory-buckets-delegating-control), and [Granting permissions for cross-account access points](#access-points-directory-buckets-cross-account). 

**`s3express:DataAccessPointArn`**  
This example shows how to filter access by the Amazon resource name (ARN) of an access point and matches all access points for AWS account *111122223333* in Region *region*:  

```
"Condition" : {
    "StringLike": {
        "s3express:DataAccessPointArn": "arn:aws:s3express:region:111122223333:accesspoint/*"
    }
}
```

**`s3express:DataAccessPointAccount`**  
This example shows a string operator that you can use to match on the account ID of the owner of an access point. The following example matches all access points that are owned by the AWS account *`111122223333`*.  

```
"Condition" : {
    "StringEquals": {
        "s3express:DataAccessPointAccount": "111122223333"
    }
}
```

**`s3express:AccessPointNetworkOrigin`**  
This example shows a string operator that you can use to match on the network origin, either `Internet` or `VPC`. The following example matches only access points with a VPC origin.  

```
"Condition" : {
    "StringEquals": {
        "s3express:AccessPointNetworkOrigin": "VPC"
    }
}
```

**`s3express:Permissions`**  
You can use `s3express:Permissions` to restrict access to specific API operations in the access point scope. The following API operations are supported:  
+ `PutObject`
+ `GetObject`
+ `DeleteObject`
+ `ListBucket` (required for `ListObjectsV2`)
+ `GetObjectAttributes`
+ `AbortMultipartUpload`
+ `ListBucketMultipartUploads`
+ `ListMultipartUploadParts`
When using multi-value condition keys, we recommend you use `ForAllValues` with `Allow` statements and `ForAnyValue` with `Deny` statements. For more information, see [Multivalued context keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-single-vs-multi-valued-context-keys.html#reference_policies_condition-multi-valued-context-keys) in the IAM User Guide.
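
The recommendation above can be illustrated with a full condition block. The following is a hedged sketch of the condition portion of an `Allow` statement that uses `ForAllValues` with `s3express:Permissions`; the permission values shown are examples, not a required set.

```
"Condition": {
    "ForAllValues:StringEquals": {
        "s3express:Permissions": [
            "GetObject",
            "ListBucket"
        ]
    }
}
```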

For more information about using condition keys with Amazon S3, see [ Actions, resources, and condition keys for Amazon S3](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html) in the *Service Authorization Reference*.

For more information about the required permissions to S3 API operations by S3 resource types, see [Required permissions for Amazon S3 API operations](using-with-s3-policy-actions.md).

## Delegating access control to access points
<a name="access-points-directory-buckets-delegating-control"></a>

You can delegate access control from the bucket policy to the access point policy. The following example bucket policy allows full access to all access points that are owned by the bucket owner's account. After applying the policy, all access to this bucket is controlled by access point policies. We recommend configuring your buckets this way for all use cases that don't require direct access to the bucket.

**Example bucket policy that delegates access control to access points**  

```
{
    "Version": "2012-10-17",
    "Statement" : [
    {
        "Effect": "Allow",
        "Principal" : { "AWS": "*" },
        "Action" : "*",
        "Resource" : "Bucket ARN",
        "Condition": {
            "StringEquals" : { "s3express:DataAccessPointAccount" : "Bucket owner's account ID" }
        }
    }]
}
```

## Granting permissions for cross-account access points
<a name="access-points-directory-buckets-cross-account"></a>

To create an access point to a bucket that's owned by another account, you must first create the access point by specifying the bucket name and account owner ID. Then, the bucket owner must update the bucket policy to authorize requests from the access point. Creating an access point is similar to creating a DNS CNAME in that the access point doesn't provide access to the bucket contents. All bucket access is controlled by the bucket policy. The following example bucket policy allows `GET` and `LIST` requests on the bucket from an access point that's owned by a trusted AWS account.

In the `Resource` element, replace the example bucket ARN with the ARN of your bucket.

**Example of bucket policy delegating permissions to another AWS account**  

```
{
    "Version": "2012-10-17",
    "Statement" : [
    {
        "Sid": "AllowCreateSessionForDirectoryBucket",
        "Effect": "Allow",
        "Principal" : { "AWS": "*" },
        "Action" : "s3express:CreateSession",
        "Resource" : [ "arn:aws:s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3" ],
        "Condition": {
            "ForAllValues:StringEquals": {
                "s3express:Permissions": [
                    "GetObject",
                    "ListBucket"
                ]
            }
        }
    }]
}
```

# Monitoring and logging access points for directory buckets
<a name="access-points-directory-buckets-monitoring-logging"></a>

You can log requests made through access points, as well as requests made to the API operations that manage access points, such as `CreateAccessPoint` and `GetAccessPointPolicy`, by using AWS CloudTrail. CloudTrail log entries for requests made through access points include the access point ARN (which includes the access point name) in the `resources` section of the log.

For example, suppose you have the following configuration: 
+ A bucket named `amzn-s3-demo-bucket--zone-id--x-s3` in Region `region` that contains an object named `my-image.jpg`.
+ An access point named `my-bucket-ap--zoneID--xa-s3` that is associated with `amzn-s3-demo-bucket--zone-id--x-s3`
+ An AWS account ID of `123456789012`

The following example shows the `resources` section of a CloudTrail log entry for the preceding configuration:

```
"resources": [
        {
            "type": "AWS::S3Express::Object",
            "ARN": "arn:aws:s3express:region:123456789012:bucket/amzn-s3-demo-bucket--zone-id--x-s3/my-image.jpg"
        },
        {
            "accountId": "123456789012",
            "type": "AWS::S3Express::DirectoryBucket",
            "ARN": "arn:aws:s3express:region:123456789012:bucket/amzn-s3-demo-bucket--zone-id--x-s3"
        },
        {
            "accountId": "123456789012",
            "type": "AWS::S3::AccessPoint",
            "ARN": "arn:aws:s3express:region:123456789012:accesspoint/my-bucket-ap--zoneID--xa-s3"
        }
    ]
```

For more information about AWS CloudTrail, see [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*.

# Creating access points for directory buckets
<a name="creating-access-points-directory-buckets"></a>

Like directory buckets, access points can be created in Availability Zones or in Dedicated Local Zones. The access point must be created in the same zone as the directory bucket that it's associated with.

An access point is associated with exactly one Amazon S3 directory bucket. If you want to use a directory bucket in your AWS account, you must first create a directory bucket. For more information about creating directory buckets, see [Creating directory buckets in an Availability Zone](directory-bucket-create.md) or [Creating a directory bucket in a Local Zone](create-directory-bucket-LZ.md).
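
As a hedged sketch, creating a directory bucket in an Availability Zone with the AWS CLI might look like the following; the bucket base name, Zone ID (*usw2-az1*), and Region are placeholder example values, and the exact parameters are covered in the linked topics.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket--usw2-az1--x-s3 \
    --create-bucket-configuration 'Location={Type=AvailabilityZone,Name=usw2-az1},Bucket={DataRedundancy=SingleAvailabilityZone,Type=Directory}' \
    --region us-west-2
```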

You can also create a cross-account access point that's associated with a bucket in another AWS account, as long as you know the bucket name and the bucket owner's account ID. However, creating cross-account access points doesn't grant you access to data in the bucket until you are granted permissions from the bucket owner. The bucket owner must grant the access point owner's account (your account) access to the bucket through the bucket policy. For more information, see [Granting permissions for cross-account access points](access-points-policies.md#access-points-cross-account).

You can create an access point for any directory bucket by using the AWS Management Console, AWS CLI, REST API, or AWS SDKs. Each access point is associated with a single directory bucket, and you can create hundreds of access points per bucket. When you create an access point, you choose the name of the access point and the directory bucket to associate it with. The access point name consists of a base name that you provide and a suffix that includes the Zone ID of your bucket location, followed by `--xa-s3` (for example, `myaccesspoint--zoneID--xa-s3`). You can also restrict access to the access point through a virtual private cloud (VPC). After you create the access point, you can immediately begin reading and writing data through it by using its name, just as you would use a directory bucket name.
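
The naming pattern described above can be sketched in shell. The base name *myaccesspoint* and Zone ID *usw2-az1* below are hypothetical example values.

```shell
# Compose a directory bucket access point name: base name + Zone ID + "--xa-s3" suffix.
BASE_NAME="myaccesspoint"
ZONE_ID="usw2-az1"
ACCESS_POINT_NAME="${BASE_NAME}--${ZONE_ID}--xa-s3"
echo "${ACCESS_POINT_NAME}"
```

This prints `myaccesspoint--usw2-az1--xa-s3`, matching the *base-name*`--`*zone-id*`--xa-s3` pattern.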

You can use the access point scope to restrict access through the access point to specific prefixes in the directory bucket or to specific API operations. If you don't add a scope to the access point, access through the access point is allowed for all prefixes in the directory bucket and for all supported API operations. After you create the access point, you can add, modify, or delete the scope by using the AWS CLI, AWS SDKs, or REST API. For more information, see [Manage the scope of your access points for directory buckets](access-points-directory-buckets-manage-scope.md).
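
Adding a scope with the AWS CLI might look like the following hedged sketch; the access point name, account ID, prefix, and permissions are placeholder values, and the `--scope` JSON shape shown here is an assumption based on the scope structure returned by `get-access-point-scope`.

```
aws s3control put-access-point-scope \
    --account-id 111122223333 \
    --name myaccesspoint--usw2-az1--xa-s3 \
    --scope '{"Prefixes":["MyPrefix1*"],"Permissions":["GetObject","ListBucket"]}'
```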

After you create the access point, you can configure your access point IAM resource policy. For more information, see [Viewing, editing or deleting access point policies](access-points-directory-buckets-policy.md).

## Using the S3 console
<a name="access-points-directory-buckets-create-ap"></a>

**Note**  
You can also create an access point for a directory bucket from the directory bucket screen. When you do this, the directory bucket name is provided and you don't need to choose a bucket when creating the access point. For more information, see [Listing directory buckets](directory-buckets-objects-ListExamples.md).

**To create an access point for directory buckets**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create an access point. The access point must be created in the same Region as the associated bucket. 

1. In the left navigation pane, choose **Access points for directory buckets**.

1. On the **Access Points** page, choose **Create access point**.

1. You can create an access point for a directory bucket in your account or in another account. To create an access point for a directory bucket in another account:
**Note**  
If you're using a bucket in a different AWS account, the bucket owner must update the bucket policy to authorize requests from the access point. For an example bucket policy, see [Granting permissions for cross-account access points](access-points-directory-buckets-policies.md#access-points-directory-buckets-cross-account).

   1. In the **Directory bucket** field, choose **Specify a bucket in another account**.

   1. In the **Bucket owner account ID** field, enter the AWS account ID that owns the bucket.

   1. In the **Bucket name** field, enter the name of the bucket, including the base name and the zone ID. For example, ***bucket-base-name*--*zone-id*--x-s3**.

1. To create an access point for a directory bucket in your account:

   1. In the **Directory bucket** field, choose **Choose a bucket in this account**.

   1. In the **Bucket name** field, enter the name of the bucket, including the base name and the zone ID. For example, ***bucket-base-name*--*zone-id*--x-s3**. To choose the bucket from a list, choose **Browse S3** and choose the directory bucket.

1. In **Access point name**, in the **Base name** field, enter the base name for the access point. The zone ID and full access point name appear. For more information about naming access points, see [Naming rules for access points for directory buckets](access-points-directory-buckets-restrictions-limitations-naming-rules.md#access-points-directory-buckets-names).

1. In **Network origin**, choose either **virtual private cloud (VPC)** or **Internet**. If you choose **virtual private cloud (VPC)**, in the **VPC ID** field, enter the ID of the VPC that you want to use with the access point.

1. (Optional) In **Access point scope**, to apply a scope to this access point, choose **Limit the scope of this access point using prefixes or permissions**. 

   1. To limit access to prefixes in the directory bucket, in **Prefixes**, enter one or more prefixes. To add another prefix, choose **Add prefix**. To remove a prefix, choose **Remove**.
**Note**  
An access point scope has a character limit of 512 total characters for all prefixes. You can see the quantity of characters remaining below **Add prefix**.

   1. In **Permissions**, choose one or more API operations that the access point will allow. To remove an API operation, choose the **X** next to its name.

1. To allow access to all prefixes in the directory bucket and all API operations through the access point without applying a scope, in **Access point scope**, choose **Apply access to the entire bucket**.

1. Choose **Create access point for directory bucket**. The access point name and other information about it appear in the **Access points for directory buckets** list.

## Using the AWS CLI
<a name="creating-access-point-cli-directory-bucket"></a>

The following example command creates an access point named *example-ap--zoneID--xa-s3* for the bucket *amzn-s3-demo-bucket--zone-id--x-s3* in the account *111122223333*. 

```
aws s3control create-access-point --name example-ap--zoneID--xa-s3 --account-id 111122223333 --bucket amzn-s3-demo-bucket--zone-id--x-s3
```

To restrict access to the access point through a VPC, include the `--vpc` parameter and the VPC ID.

```
aws s3control create-access-point --name example-ap--zoneID--xa-s3 --account-id 111122223333 --bucket amzn-s3-demo-bucket--zone-id--x-s3 --vpc vpc-id
```

When you create an access point for a cross-account bucket, include the `--bucket-account-id` parameter. The following example command creates an access point in the AWS account *111122223333* for the bucket *amzn-s3-demo-bucket--zone-id--x-s3*, which is owned by the AWS account *444455556666*.

```
aws s3control create-access-point --name example-ap--zoneID--xa-s3 --account-id 111122223333 --bucket amzn-s3-demo-bucket--zone-id--x-s3 --bucket-account-id 444455556666
```

For more information and examples, see [create-access-point](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/create-access-point.html) in the AWS CLI Command Reference.

## Using the REST API
<a name="creating-access-point-directory-bucket-rest-api"></a>

The following example command creates an access point named *example-ap--zoneID--xa-s3* for the bucket *amzn-s3-demo-bucket--zone-id--x-s3* in the account *111122223333*, with access optionally restricted through the VPC *vpc-id*. 

```
PUT /v20180820/accesspoint/example-ap--zoneID--xa-s3 HTTP/1.1
Host: s3express-control.region.amazonaws.com
x-amz-account-id: 111122223333
<?xml version="1.0" encoding="UTF-8"?>
<CreateAccessPointRequest>
   <Bucket>amzn-s3-demo-bucket--zone-id--x-s3</Bucket>
   <BucketAccountId>111122223333</BucketAccountId>
   <VpcConfiguration>
       <VpcId>vpc-id</VpcId>
   </VpcConfiguration>
</CreateAccessPointRequest>
```

Response:

```
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<CreateAccessPointResult>
   <AccessPointArn>
       "arn:aws:s3express:region:111122223333:accesspoint/example-ap--zoneID--xa-s3"
   </AccessPointArn>
   <Alias>example-ap--zoneID--xa-s3</Alias>
</CreateAccessPointResult>
```

## Using the AWS SDKs
<a name="creating-access-point-directory-bucket-sdk"></a>

You can use the AWS SDKs to create an access point. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html#API_control_CreateAccessPoint_SeeAlso) in the Amazon Simple Storage Service API Reference.

# Managing your access points for directory buckets
<a name="access-points-directory-buckets-manage"></a>

This section explains how to manage your access points for directory buckets using the AWS Command Line Interface, Amazon S3 REST API, or AWS SDK.

**Topics**
+ [List your access points for directory buckets](access-points-directory-buckets-list.md)
+ [View details for your access points for directory buckets](access-points-directory-buckets-details.md)
+ [Viewing, editing or deleting access point policies](access-points-directory-buckets-policy.md)
+ [Manage the scope of your access points for directory buckets](access-points-directory-buckets-manage-scope.md)
+ [Delete your access point for directory buckets](access-points-directory-buckets-delete.md)

# List your access points for directory buckets
<a name="access-points-directory-buckets-list"></a>

This section explains how to list access points for a directory bucket using the AWS Management Console, AWS Command Line Interface (AWS CLI), REST API, or AWS SDKs.

## Using the S3 console
<a name="access-points-directory-buckets-list-console"></a>

**To list access points in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

## Using the AWS CLI
<a name="access-points-directory-buckets-list-cli"></a>

The following `list-access-points-for-directory-buckets` example command shows how you can use the AWS CLI to list the access points owned by an AWS account and associated with a directory bucket.

The following command lists access points for AWS account *111122223333* that are attached to the bucket *amzn-s3-demo-bucket--zone-id--x-s3*.

```
aws s3control list-access-points-for-directory-buckets --account-id 111122223333 --directory-bucket amzn-s3-demo-bucket--zone-id--x-s3
```

For more information and examples, see [list-access-points-for-directory-buckets](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/list-access-points-for-directory-buckets.html) in the AWS CLI Command Reference.

## Using the REST API
<a name="access-points-directory-buckets-list-rest"></a>

The following example shows how you can use the REST API to list your access points.

```
GET /v20180820/directoryaccesspoint?directoryBucket=amzn-s3-demo-bucket--zone-id--x-s3
&maxResults=maxResults HTTP/1.1
Host: s3express-control.region.amazonaws.com 
x-amz-account-id: 111122223333
```

**Example of `ListAccessPointsForDirectoryBuckets` response**  

```
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<ListDirectoryAccessPointsResult>
    <AccessPointList>
        <AccessPoint>
            <AccessPointArn>arn:aws:s3express:region:111122223333:accesspoint/example-access-point--zoneID--xa-s3</AccessPointArn>
            <Alias>example-access-point--zoneID--xa-s3</Alias>
            <Bucket>amzn-s3-demo-bucket--zone-id--x-s3</Bucket>
            <BucketAccountId>111122223333</BucketAccountId>
            <Name>example-access-point--zoneID--xa-s3</Name>
            <NetworkOrigin>VPC</NetworkOrigin>
            <VpcConfiguration>
                <VpcId>VPC-1</VpcId>
            </VpcConfiguration>
        </AccessPoint>    
    </AccessPointList>  
</ListDirectoryAccessPointsResult>
```

## Using the AWS SDKs
<a name="access-points-directory-buckets-list-sdk"></a>

You can use the AWS SDKs to list your access points. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForDirectoryBuckets.html#API_control_ListAccessPointsForDirectoryBuckets_SeeAlso) in the Amazon Simple Storage Service API Reference.

# View details for your access points for directory buckets
<a name="access-points-directory-buckets-details"></a>

This section explains how to view details for your access point for directory buckets using the AWS Management Console, AWS CLI, AWS SDKs, or REST API.

## Using the S3 console
<a name="access-points-details-console"></a>

View details of an access point for directory buckets to see the following information about the access point and the associated directory bucket:
+ Properties:
  + Directory bucket name
  + Directory bucket owner account ID
  + AWS Region
  + Directory bucket location type
  + Directory bucket location name
  + Creation date of access point
  + Network origin
  + VPC ID
  + S3 URI
  + Access point ARN
  + Access point alias
+ Permissions:
  + IAM external access analyzer findings
  + Access point scope
  + Access point policy

**To view details for your access point in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

1. Choose the **Properties** tab or the **Permissions** tab.

## Using the AWS CLI
<a name="access-points-directory-buckets-details-cli"></a>

The following `get-access-point` example command shows how you can use the AWS CLI to view details for your access point.

The following command lists details for the access point *my-access-point--zoneID--xa-s3* for AWS account *111122223333*.

```
aws s3control get-access-point --name my-access-point--zoneID--xa-s3 --account-id 111122223333
```

**Example of output of `get-access-point` command**  

```
{
    "Name": "example-access-point--zoneID--xa-s3",
    "Bucket": "amzn-s3-demo-bucket--zone-id--x-s3",
    "NetworkOrigin": "Internet",
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": true,
        "IgnorePublicAcls": true,
        "BlockPublicPolicy": true,
        "RestrictPublicBuckets": true
    },
    "CreationDate": "2025-04-23T18:26:22.146000+00:00",
    "Alias": "example-access-point--zoneID--xa-s3",
    "AccessPointArn": "arn:aws:s3express:region:111122223333:accesspoint/example-access-point--zoneID--xa-s3",
    "BucketAccountId": "111122223333"
}
```

For more information and examples, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point.html) in the *AWS CLI Command Reference*.

## Using the REST API
<a name="access-points-directory-buckets-details-rest"></a>

You can use the REST API to view details for your access point. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs
<a name="access-points-directory-buckets-details-sdk"></a>

You can use the AWS SDKs to view details of your access points. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html#API_control_GetAccessPoint_SeeAlso) in the Amazon Simple Storage Service API Reference.

# Viewing, editing or deleting access point policies
<a name="access-points-directory-buckets-policy"></a>

You can use an AWS Identity and Access Management (IAM) access point policy to control the principal and resource that can access the access point. The access point scope manages the prefixes and API permissions for the access point. You can create, edit, and delete an access point policy using the AWS Command Line Interface, REST API, or AWS SDKs. For more information about access point scope, see [Manage the scope of your access points for directory buckets](access-points-directory-buckets-manage-scope.md).

**Note**  
Because directory buckets use session-based authorization, your policy must always include the `s3express:CreateSession` action.
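
A minimal access point policy that satisfies this requirement might look like the following sketch; the role name, Region, account ID, and access point name are placeholder values.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:role/example-role" },
            "Action": "s3express:CreateSession",
            "Resource": "arn:aws:s3express:us-west-2:111122223333:accesspoint/myaccesspoint--usw2-az1--xa-s3"
        }
    ]
}
```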

## Using the S3 console
<a name="access-point-directory-bucket-edit-policy-console"></a>

**To view, edit, or delete an access point policy**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

1. Choose the **Permissions** tab.

1. To create or edit the access point policy, in **Access point policy**, choose **Edit**. Edit the policy. Choose **Save**.

1. To delete the access point policy, in **Access point policy**, choose **Delete**. In the **Delete access point policy** window, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="access-points-directory-buckets-edit-policy-cli"></a>

You can use the `get-access-point-policy`, `put-access-point-policy`, and `delete-access-point-policy` commands to view, edit, or delete an access point policy. For more information, see [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point-policy.html#get-access-point-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point-policy.html#get-access-point-policy), [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/put-access-point-policy.html#put-access-point-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/put-access-point-policy.html#put-access-point-policy), or [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point-policy.html#delete-access-point-policy](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point-policy.html#delete-access-point-policy) in the AWS CLI Command Reference.

## Using the REST API
<a name="access-points-directory-buckets-edit-policy-rest"></a>

You can use the REST API `GetAccessPointPolicy`, `DeleteAccessPointPolicy`, and `PutAccessPointPolicy` operations to view, delete, or edit an access point policy. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html), [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html), or [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html) in the Amazon Simple Storage Service API Reference.

## Using the AWS SDKs
<a name="access-points-directory-buckets-edit-policy-sdk"></a>

You can use the AWS SDKs to view, delete, or edit an access point policy. For more information, see the list of supported SDKs for [GetAccessPointPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html#API_control_PutAccessPointPolicy_SeeAlso), [DeleteAccessPointPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html#API_control_PutAccessPointPolicy_SeeAlso), and [PutAccessPointPolicy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html#API_control_PutAccessPointPolicy_SeeAlso) in the Amazon Simple Storage Service API Reference.

# Manage the scope of your access points for directory buckets
<a name="access-points-directory-buckets-manage-scope"></a>

This section explains how to view and modify the scope of your access points for directory buckets using the AWS Command Line Interface, REST API, or AWS SDKs. You can use the access point scope to restrict access to specific prefixes or API operations.

**Topics**
+ [View the scope of your access points for directory buckets](#access-points-directory-buckets-view-scope)
+ [Modify the scope of your access point for directory buckets](#access-points-directory-buckets-modify-scope)
+ [Delete the scope of your access points for directory buckets](#access-points-directory-buckets-delete-scope)

## View the scope of your access points for directory buckets
<a name="access-points-directory-buckets-view-scope"></a>

You can use the AWS Management Console, AWS Command Line Interface, REST API, or AWS SDKs to view the scope of your access point for directory buckets.

### Using the S3 console
<a name="access-points-directory-buckets-view-scope-console"></a>

**To view the scope of your access point for directory buckets**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

1. Choose the **Permissions** tab.

1. In **Access point scope**, you can see the prefixes and permissions that are applied to the access point.

### Using the AWS CLI
<a name="access-points-directory-buckets-view-scope-cli"></a>

The following `get-access-point-scope` example command shows how you can use the AWS CLI to view the scope of your access point.

The following command shows the scope of the access point **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

```
aws s3control get-access-point-scope --name my-access-point--zoneID--xa-s3 --account-id 111122223333      
```

For more information and examples, see [get-access-point-scope](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/get-access-point-scope.html) in the AWS CLI Command Reference.

**Example result of `get-access-point-scope`**  

```
{
    "Scope": {
        "Permissions": [
            "ListBucket",
            "PutObject"
        ],
        "Prefixes": [
            "MyPrefix1*",
            "MyObjectName.csv"
        ]
    }
}
```
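If you script against the CLI, you can parse this JSON response to check what a scope allows. The following Python sketch is illustrative only (the response shape follows the example above; `permission_in_scope` is a hypothetical helper, not part of any AWS SDK):

```python
import json

# Example `get-access-point-scope` response (shape follows the example above).
response_text = """
{
    "Scope": {
        "Permissions": ["ListBucket", "PutObject"],
        "Prefixes": ["MyPrefix1*", "MyObjectName.csv"]
    }
}
"""

scope = json.loads(response_text)["Scope"]

def permission_in_scope(permission: str) -> bool:
    """Return True if the access point scope grants the given API operation."""
    return permission in scope.get("Permissions", [])

print(permission_in_scope("PutObject"))     # True
print(permission_in_scope("DeleteObject"))  # False
```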

### Using the REST API
<a name="access-points-directory-buckets-view-scope-rest-api"></a>

The following `GetAccessPointScope` example request shows how you can use the REST API to view the scope of your access point.

The following request shows the scope of the access point **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

```
GET /v20180820/accesspoint/my-access-point--zoneID--xa-s3/scope HTTP/1.1 
Host: s3express-control.region.amazonaws.com 
x-amz-account-id: 111122223333
```

**Example result of `GetAccessPointScope`**  

```
HTTP/1.1 200
<?xml version="1.0" encoding="UTF-8"?>
<GetAccessPointScopeResult>
    <Scope>
        <Prefixes>
            <Prefix>MyPrefix1*</Prefix>
            <Prefix>MyObjectName.csv</Prefix>
        </Prefixes>
        <Permissions>
            <Permission>ListBucket</Permission>
            <Permission>PutObject</Permission>
        </Permissions>
    </Scope>
</GetAccessPointScopeResult>
```

### Using the AWS SDKs
<a name="access-points-directory-buckets-view-scope-sdk"></a>

You can use the AWS SDKs to view the scope of your access point. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointScope.html#API_control_GetAccessPointScope_SeeAlso) in the Amazon Simple Storage Service API Reference.

## Modify the scope of your access point for directory buckets
<a name="access-points-directory-buckets-modify-scope"></a>

You can use the AWS Management Console, AWS Command Line Interface, REST API, or AWS SDKs to modify the scope of your access points for directory buckets. Access point scope is used to restrict access to specific prefixes, API operations, or a combination of both.

You can include one or more of the following API operations as permissions:
+ `PutObject`
+ `GetObject`
+ `DeleteObject`
+ `ListBucket` (required for `ListObjectsV2`)
+ `GetObjectAttributes`
+ `AbortMultipartUpload`
+ `ListBucketMultipartUploads`
+ `ListMultipartUploadParts`

**Note**  
You can specify any number of prefixes, but the total length of all prefixes combined must be less than 256 bytes.
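Because this limit applies to the combined length of all prefixes, it can help to validate the prefix list client-side before sending the request. A minimal sketch of that check (the 256-byte limit is from the note above; `validate_prefixes` itself is a hypothetical helper):

```python
def validate_prefixes(prefixes):
    """Check that the combined length of all prefixes is under 256 bytes."""
    total = sum(len(p.encode("utf-8")) for p in prefixes)
    if total >= 256:
        raise ValueError(f"Prefixes total {total} bytes; the limit is less than 256")
    return total

print(validate_prefixes(["Jane/*", "reports/2024/"]))  # 19
```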

### Using the S3 console
<a name="access-points-directory-buckets-modify-scope-console"></a>

**To modify access point scope**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

1. Choose the **Permissions** tab.

1. In the **Access point scope** section, choose **Edit**.

1. To add or remove prefixes:

   1. To add a prefix, choose **Add prefix**. In the **Prefix** field, enter a prefix of the directory bucket. Repeat to add more prefixes.

   1. To remove a prefix, choose **Remove**.

1. To add or remove permissions:

   1. To add a permission, in the **Choose data operations** field, choose the permission.

   1. To remove a permission, choose the **X** next to the permission.

1. Choose **Save changes**.

### Using the AWS CLI
<a name="access-points-directory-buckets-modify-scope-cli"></a>

The following `put-access-point-scope` example command shows how you can use the AWS CLI to modify the scope of your access point.

The following command modifies the access point scope of **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

**Note**  
You can use wildcards in prefixes by using the asterisk (`*`) character. If you want to use the asterisk character as a literal, add a backslash character (`\`) before it to escape it.  
All prefixes have an implicit `*` ending, meaning all paths within the prefix will be included.  
When you modify the scope of an access point with the AWS CLI, you replace the existing scope.

```
aws s3control put-access-point-scope --name my-access-point--zoneID--xa-s3 --account-id 111122223333 --scope Prefixes=string,Permissions=string
```
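Instead of the shorthand syntax, you can pass the scope as a JSON document (for example, with `--scope file://scope.json`). The following Python sketch builds that document and escapes a literal asterisk as described in the note above (`build_scope` and `literal` are hypothetical helper names; the JSON shape follows the `Scope` structure shown earlier in this topic):

```python
import json

def literal(prefix: str) -> str:
    """Escape '*' so it is matched literally instead of acting as a wildcard."""
    return prefix.replace("*", "\\*")

def build_scope(prefixes, permissions):
    """Build the Scope document passed to put-access-point-scope."""
    return {"Prefixes": list(prefixes), "Permissions": list(permissions)}

scope = build_scope(
    prefixes=["Jane/*", literal("report*.csv")],  # second entry matches a literal '*'
    permissions=["PutObject", "GetObject"],
)
print(json.dumps(scope, indent=2))
```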

For more information and examples, see [put-access-point-scope](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/put-access-point-scope.html) in the AWS CLI Command Reference.

### Using the REST API
<a name="access-points-directory-buckets-modify-scope-rest-api"></a>

The following `PutAccessPointScope` example request shows how you can use the REST API to modify the scope of your access point.

The following request modifies the access point scope of **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

**Note**  
You can use wildcards in prefixes by using the asterisk (`*`) character. If you want to use the asterisk character as a literal, add a backslash character (`\`) before it to escape it.  
All prefixes have an implicit `*` ending, meaning all paths within the prefix will be included.  
When you modify the scope of an access point with the API, you replace the existing scope.

```
PUT /v20180820/accesspoint/my-access-point--zoneID--xa-s3/scope HTTP/1.1 
Host: s3express-control.region.amazonaws.com 
x-amz-account-id: 111122223333
<?xml version="1.0" encoding="UTF-8"?>
<PutAccessPointScopeRequest>
    <Scope>
        <Prefixes>
            <Prefix>Jane/*</Prefix>
        </Prefixes>
        <Permissions>
            <Permission>PutObject</Permission>
            <Permission>GetObject</Permission>
        </Permissions>
    </Scope>
</PutAccessPointScopeRequest>
```
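If you construct the request body programmatically, the XML can be generated rather than hand-written. A sketch using Python's standard library (element names follow the example request above; `scope_request_body` is a hypothetical helper):

```python
import xml.etree.ElementTree as ET

def scope_request_body(prefixes, permissions):
    """Render a PutAccessPointScope request body as XML."""
    root = ET.Element("PutAccessPointScopeRequest")
    scope = ET.SubElement(root, "Scope")
    prefixes_el = ET.SubElement(scope, "Prefixes")
    for prefix in prefixes:
        ET.SubElement(prefixes_el, "Prefix").text = prefix
    permissions_el = ET.SubElement(scope, "Permissions")
    for permission in permissions:
        ET.SubElement(permissions_el, "Permission").text = permission
    return ET.tostring(root, encoding="unicode")

body = scope_request_body(["Jane/*"], ["PutObject", "GetObject"])
print(body)
```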

### Using the AWS SDKs
<a name="access-points-directory-buckets-modify-scope-sdk"></a>

You can use the AWS SDKs to modify the scope of your access point. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointScope.html#API_control_PutAccessPointScope_SeeAlso) in the Amazon Simple Storage Service API Reference.

## Delete the scope of your access points for directory buckets
<a name="access-points-directory-buckets-delete-scope"></a>

You can use the AWS Management Console, AWS Command Line Interface, REST API, or AWS SDKs to delete the scope of your access points for directory buckets.

**Note**  
When you delete the scope of an access point, all prefixes and permissions are deleted.

### Using the S3 console
<a name="access-points-directory-buckets-delete-scope-console"></a>

**To delete access point scope**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for. 

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to manage.

1. Choose the **Permissions** tab.

1. In **Access point scope**, choose **Delete**.

1. In the confirmation field, enter **confirm**.

1. Choose **Delete**.

### Using the AWS CLI
<a name="access-points-directory-buckets-delete-scope-cli"></a>

The following `delete-access-point-scope` example command shows how you can use the AWS CLI to delete the scope of your access point.

The following command deletes the scope of the access point **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

```
aws s3control delete-access-point-scope --name my-access-point--zoneID--xa-s3 --account-id 111122223333
```

For more information and examples, see [delete-access-point-scope](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point-scope.html) in the AWS CLI Command Reference.

### Using the REST API
<a name="access-points-directory-buckets-delete-scope-rest-api"></a>

The following `DeleteAccessPointScope` example request shows how you can use the REST API to delete the scope of your access point.

The following request deletes the scope of the access point **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

```
DELETE /v20180820/accesspoint/my-access-point--zoneID--xa-s3/scope HTTP/1.1 
Host: s3express-control.region.amazonaws.com 
x-amz-account-id: 111122223333
```

### Using the AWS SDKs
<a name="access-points-directory-buckets-delete-scope-sdk"></a>

You can use the AWS SDKs to delete the scope of your access point. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointScope.html#API_control_DeleteAccessPointScope_SeeAlso) in the Amazon Simple Storage Service API Reference.

# Delete your access point for directory buckets
<a name="access-points-directory-buckets-delete"></a>

This section explains how to delete your access point using the AWS Management Console, AWS Command Line Interface, REST API, or AWS SDKs.

**Note**  
Before you can delete a directory bucket attached to an access point, you must delete the access point.

## Using the S3 console
<a name="access-points-directory-buckets-delete-console"></a>

**To delete access points for directory buckets in your AWS account**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region that you want to list access points for.

1. In the navigation pane on the left side of the console, choose **Access points for directory buckets**.

1. (Optional) Search for access points by name. Only access points in your selected AWS Region will appear here.

1. Choose the name of the access point you want to delete.

1. Choose **Delete**.

1. To confirm deletion, type **confirm** and choose **Delete**.

## Using the AWS CLI
<a name="access-points-directory-buckets-delete-cli"></a>

The following `delete-access-point` example command shows how you can use the AWS CLI to delete your access point.

The following command deletes the access point **my-access-point**--*zoneID*--xa-s3 for AWS account *111122223333*.

```
aws s3control delete-access-point --name my-access-point--zoneID--xa-s3 --account-id 111122223333      
```

For more information and examples, see [delete-access-point](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/delete-access-point.html) in the *AWS CLI Command Reference*.

## Using the REST API
<a name="access-points-directory-buckets-delete-rest"></a>

You can use the REST API to delete your access point. For more information, see [DeleteAccessPoint](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS SDKs
<a name="access-points-directory-buckets-delete-sdk"></a>

You can use the AWS SDKs to delete your access points. For more information, see [list of supported SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html#API_control_DeleteAccessPoint_SeeAlso) in the Amazon Simple Storage Service API Reference.

# Using tags with S3 Access Points for directory buckets
<a name="access-points-db-tagging"></a>

An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 Access Points for directory buckets. You can tag access points when you create them or manage tags on existing access points. For general information about tags, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

**Note**  
There is no additional charge for using tags on access points for directory buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Common ways to use tags with access points for directory buckets
<a name="common-ways-to-use-tags-access-points-db"></a>

Attribute-based access control (ABAC) allows you to scale access permissions and grant access to access points for directory buckets based on their tags. For more information about ABAC in Amazon S3, see [Using tags for ABAC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

### ABAC for S3 Access Points
<a name="abac-for-access-points-db"></a>

Amazon S3 Access Points support attribute-based access control (ABAC) using tags. Use tag-based condition keys in your AWS organizations, IAM, and Access Points policies. For enterprises, ABAC in Amazon S3 supports authorization across multiple AWS accounts. 

In your IAM policies, you can control access to access points for directory buckets based on the bucket's tags by using the following [global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-tagkeys):
+ `aws:ResourceTag/key-name`
  + Use this key to compare the tag key-value pair that you specify in the policy with the key-value pair attached to the resource. For example, you could require that access to a resource is allowed only if the resource has the attached tag key `Dept` with the value `Marketing`. For more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).
+ `aws:RequestTag/key-name`
  + Use this key to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy. For example, you could check whether the request includes the tag key `Dept` and that it has the value `Accounting`. For more information, see [Controlling access during AWS requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-requests). You can use this condition key to restrict which tag key-value pairs can be passed during the `TagResource` and `CreateAccessPoint` API operations.
+ `aws:TagKeys`
  + Use this key to compare the tag keys in a request with the keys that you specify in the policy. We recommend that when you use policies to control access using tags, use the `aws:TagKeys` condition key to define what tag keys are allowed. For example policies and more information, see [Controlling access based on tag keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys). You can create an access point for directory buckets with tags. To allow tagging during the `CreateAccessPoint` API operation, you must create a policy that includes both the `s3express:TagResource` and `s3express:CreateAccessPoint` actions. You can then use the `aws:TagKeys` condition key to enforce using specific tags in the `CreateAccessPoint` request.
+ `s3express:AccessPointTag/tag-key`
  + Use this condition key to grant permissions to specific data through access points, based on tags. When you use `aws:ResourceTag/tag-key` in an IAM policy, both the access point and the bucket that the access point is attached to must have the same tag, because both are considered during authorization. If you want to control access to your data through the access point tag only, use the `s3express:AccessPointTag/tag-key` condition key instead.

### Example ABAC policies for access points for directory buckets
<a name="example-access-points-db-abac-policies"></a>

See the following example ABAC policies for access points for directory buckets.

#### 1.1 - IAM policy to create or modify access points with specific tags
<a name="example-access-points-db-user-policy-request-tag"></a>

In this IAM policy, users or roles with this policy can only create access points if they tag the access points with the tag key `project` and tag value `Trinity` in the access point creation request. They can also add or modify tags on existing access points for directory buckets as long as the `TagResource` request includes the tag key-value pair `project:Trinity`. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateAccessPointWithTags",
      "Effect": "Allow",
      "Action": [
        "s3express:CreateAccessPoint",
        "s3express:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/project": [
            "Trinity"
          ]
        }
      }
    }
  ]
}
```

#### 1.2 - Access Point policy to restrict operations on the bucket using tags
<a name="example-access-points-db-user-policy-resource-tag"></a>

In this Access Point policy, IAM principals (users and roles) can perform operations using the `CreateSession` action on the access point only if the value of the access point's `project` tag matches the value of the principal's `project` tag.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowObjectOperations",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111122223333"
      },
      "Action": "s3express:CreateSession",
      "Resource": "arn:aws:s3express:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```
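The condition in this policy compares the access point's `project` tag with the calling principal's `project` tag. As a rough model (a simplification for illustration, not the actual IAM evaluation logic), the `StringEquals` check behaves like this:

```python
def condition_matches(resource_tags: dict, principal_tags: dict, key: str = "project") -> bool:
    """Rough model of StringEquals on aws:ResourceTag vs. ${aws:PrincipalTag}."""
    # If either side lacks the tag, the condition key is missing and the match fails.
    if key not in resource_tags or key not in principal_tags:
        return False
    return resource_tags[key] == principal_tags[key]

print(condition_matches({"project": "Trinity"}, {"project": "Trinity"}))   # True
print(condition_matches({"project": "Trinity"}, {"project": "Morpheus"}))  # False
```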

#### 1.3 - IAM policy to modify tags on existing resources while maintaining tagging governance
<a name="example-access-points-db-user-policy-tag-keys"></a>

In this IAM policy, IAM principals (users or roles) can modify tags on an access point only if the value of the access point's `project` tag matches the value of the principal's `project` tag. Only the four tags `project`, `environment`, `owner`, and `cost-center` specified in the `aws:TagKeys` condition keys are permitted for these access points. This helps enforce tag governance, prevents unauthorized tag modifications, and keeps the tagging schema consistent across your access points.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EnforceTaggingRulesOnModification",
      "Effect": "Allow",
      "Action": [
        "s3express:TagResource"
      ],
      "Resource": "arn:aws:s3express:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "project",
            "environment",
            "owner",
            "cost-center"
          ]
        }
      }
    }
  ]
}
```

#### 1.4 - Using the s3express:AccessPointTag condition key
<a name="example-access-points-db-policy-bucket-tag"></a>

In this IAM policy, the condition statement allows access to the bucket's data only if the access point used to access the bucket has the tag key `Environment` and tag value `Production`. 

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificAccessPoint",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3express:region:111122223333:accesspoint/my-access-point",
      "Condition": {
        "StringEquals": {
          "s3express:AccessPointTag/Environment": "Production"
        }
      }
    }
  ]
}
```

## Working with tags for access points for directory buckets
<a name="working-with-tags-access-points-db"></a>

You can add or manage tags for access points for directory buckets using the Amazon S3 Console, the AWS Command Line Interface (CLI), the AWS SDKs, or the S3 APIs: [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html). For more information, see the following topics:

**Topics**
+ [Common ways to use tags with access points for directory buckets](#common-ways-to-use-tags-access-points-db)
+ [Working with tags for access points for directory buckets](#working-with-tags-access-points-db)
+ [Creating access points for directory buckets with tags](access-points-db-create-tag.md)
+ [Adding a tag to an access point for directory buckets](access-points-db-tag-add.md)
+ [Viewing the tags of an access point for directory buckets](access-points-db-tag-view.md)
+ [Deleting a tag from an access point for directory buckets](access-points-db-tag-delete.md)

# Creating access points for directory buckets with tags
<a name="access-points-db-create-tag"></a>

You can tag Amazon S3 Access Points for directory buckets when you create them. For additional information, see [Using tags with S3 Access Points for directory buckets](access-points-db-tagging.md).

## Permissions
<a name="access-points-db-create-tag-permissions"></a>

To create an access point for directory buckets with tags, you must have the following permissions:
+ `s3express:CreateAccessPoint`
+ `s3express:TagResource`

## Troubleshooting errors
<a name="access-points-db-create-tag-troubleshooting"></a>

If you encounter an error when attempting to create an access point for directory buckets with tags, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-db-create-tag-permissions) to create the access point for directory buckets and add a tag to it.
+ Check your IAM user policy for any attribute-based access control (ABAC) conditions. You may be required to label your access points for directory buckets only with specific tag keys and values. For more information, see [Using tags for attribute-based access control (ABAC)](tagging.md#using-tags-for-abac).

## Steps
<a name="access-points-db-create-tag-steps"></a>

You can create an access point for directory buckets with tags applied by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="access-points-db-create-tag-console"></a>

To create an access point for directory buckets with tags using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (Directory Buckets)**.

1. Choose **create access point** to create a new access point.

1. Enter a name for the access point. For more information, see [Access points for directory buckets naming rules, restrictions, and limitations](access-points-directory-buckets-restrictions-limitations-naming-rules.md). 

1. On the **Create access point** page, you can optionally add **Tags** to the new access point.

1. Choose **Add new Tag** to open the Tags editor and enter a tag key-value pair. The tag key is required, but the value is optional. 

1. To add another tag, select **Add new Tag** again. You can enter up to 50 tag key-value pairs.

1. After you complete specifying the options for your new access point, choose **Create access point**. 

## Using the AWS SDKs
<a name="access-points-db-create-tag-sdks"></a>

------
#### [ SDK for Java 2.x ]

This example shows you how to create an access point with tags by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
CreateAccessPointRequest createAccessPointRequest = CreateAccessPointRequest.builder()
        .accountId("111122223333")
        .name("my-access-point")
        .bucket("amzn-s3-demo-bucket--zone-id--x-s3")
        .tags(Collections.singletonList(Tag.builder().key("key1").value("value1").build()))
        .build();
s3Control.createAccessPoint(createAccessPointRequest);
```

------

## Using the REST API
<a name="access-points-db-create-tag-api"></a>

For information about the Amazon S3 REST API support for creating an access point for directory buckets with tags, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [CreateAccessPoint](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html)

## Using the AWS CLI
<a name="access-points-db-create-tag-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to create an access point for directory buckets with tags by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create an access point for directory buckets, you must provide configuration details and a name that follows the access point naming rules, for example `my-access-point`.

**Request:**

```
aws s3control create-access-point \
--account-id 111122223333 \
--name my-access-point \
--bucket amzn-s3-demo-bucket--zone-id--x-s3 \
--profile personal \
--tags Key=key1,Value=value1 Key=MyKey2,Value=value2 \
--region region
```

# Adding a tag to an access point for directory buckets
<a name="access-points-db-tag-add"></a>



You can add tags to Amazon S3 Access Points for directory buckets and modify these tags. For additional information, see [Using tags with S3 Access Points for directory buckets](access-points-db-tagging.md).

## Permissions
<a name="access-points-db-tag-add-permissions"></a>

To add a tag to an access point for directory buckets, you must have the following permission:
+ `s3express:TagResource`

## Troubleshooting errors
<a name="access-points-db-tag-add-troubleshooting"></a>

If you encounter an error when attempting to add a tag to an access point for directory buckets, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-db-tag-add-permissions) to add a tag to an access point for directory buckets.
+ If you attempted to add a tag key that starts with the AWS reserved prefix `aws:`, change the tag key and try again. 
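The tag rules mentioned in this topic — required keys and the reserved `aws:` prefix, plus the 50-pair limit described elsewhere in this guide — can be checked client-side before you call `TagResource`. A hypothetical validation sketch (not part of any AWS SDK):

```python
def validate_tags(tags: dict) -> None:
    """Client-side checks for access point tags (limits per this topic)."""
    if len(tags) > 50:
        raise ValueError("An access point supports up to 50 tag key-value pairs")
    for key in tags:
        if not key:
            raise ValueError("A tag key is required (the value is optional)")
        if key.lower().startswith("aws:"):
            raise ValueError(f"Tag key {key!r} uses the reserved 'aws:' prefix")

validate_tags({"project": "Trinity", "environment": ""})  # passes: empty value is allowed
```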

## Steps
<a name="access-points-db-tag-add-steps"></a>

You can add tags to access points for directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="access-points-db-tag-add-console"></a>

To add tags to an access point for directory buckets using the Amazon S3 console:

1. Sign in to Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (Directory Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and choose **Add new Tag** to open the **Add Tags** page.

1. Enter a tag key-value pair. You can enter up to 50 pairs. If you add a new tag with the same key name as an existing tag, the value of the new tag overrides the value of the existing tag. You can also edit the values of existing tags on this page.

1. After you have added the tags, choose **Save changes**. 

## Using the AWS SDKs
<a name="access-points-db-tag-add-sdks"></a>

------
#### [ SDK for Java 2.x ]

This example shows you how to add tags to an access point for directory buckets by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.Tag;
import software.amazon.awssdk.services.s3control.model.TagResourceRequest;
import software.amazon.awssdk.services.s3control.model.TagResourceResponse;

public class TagResourceExample {
    public static void tagResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        TagResourceRequest tagResourceRequest = TagResourceRequest.builder()
                .resourceArn("arn:aws:s3express:region:111122223333:accesspoint/my-access-point")
                .accountId("111122223333")
                .tags(Tag.builder().key("key1").value("value1").build())
                .build();

        TagResourceResponse response = s3Control.tagResource(tagResourceRequest);
        System.out.println("Status code (should be 204):");
        System.out.println(response.sdkHttpResponse().statusCode());
    }
}
```

------

## Using the REST API
<a name="access-points-db-tag-add-api"></a>

For information about the Amazon S3 REST API support for adding tags to an access point for directory buckets, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html)

## Using the AWS CLI
<a name="access-points-db-tag-add-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to add tags to an access point for directory buckets by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control tag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3express:region:111122223333:accesspoint/my-access-point \
--tags "Key=key1,Value=value1"
```

**Response:**

```
{
  "ResponseMetadata": {
      "RequestId": "EXAMPLE123456789",
      "HTTPStatusCode": 200,
      "HTTPHeaders": {
          "date": "Wed, 19 Jun 2025 10:30:00 GMT",
          "content-length": "0"
      },
      "RetryAttempts": 0
  }
}
```
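When scripting, you can confirm that the tagging call succeeded by inspecting the response metadata. A small sketch (the response shape follows the example above; treating any 2xx status as success is an assumption, not documented behavior):

```python
response = {
    "ResponseMetadata": {
        "RequestId": "EXAMPLE123456789",
        "HTTPStatusCode": 200,
    }
}

def tagging_succeeded(resp: dict) -> bool:
    """Treat any 2xx HTTP status in the response metadata as success."""
    return 200 <= resp["ResponseMetadata"]["HTTPStatusCode"] < 300

print(tagging_succeeded(response))  # True
```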

# Viewing the tags of an access point for directory buckets
<a name="access-points-db-tag-view"></a>

You can view or list tags applied to Amazon S3 Access Points for directory buckets. For additional information, see [Using tags with S3 Access Points for directory buckets](access-points-db-tagging.md).

## Permissions
<a name="access-points-db-tag-view-permissions"></a>

To view tags applied to an access point, you must have the following permission: 
+ `s3express:ListTagsForResource`

## Troubleshooting errors
<a name="access-points-db-tag-view-troubleshooting"></a>

If you encounter an error when attempting to list or view the tags of an access point for directory buckets, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-db-tag-view-permissions) to view or list the tags of the access point for directory buckets.

## Steps
<a name="access-points-db-tag-view-steps"></a>

You can view tags applied to access points for directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="access-points-db-tag-view-console"></a>

To view tags applied to an access point for directory buckets using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (Directory Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section to view all of the tags applied to the access point for directory buckets. 

1. The **Tags** section shows the **User-defined tags** by default. You can select the **AWS-generated tags** tab to view tags applied to your access point by AWS services.

## Using the AWS SDKs
<a name="access-points-db-tag-view-sdks"></a>

This section provides an example of how to view tags applied to an access point for directory buckets by using the AWS SDKs.

------
#### [ SDK for Java 2.x ]

This example shows you how to view tags applied to an access point for directory buckets by using the AWS SDK for Java 2.x. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceRequest;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceResponse;

public class ListTagsForResourceExample {
    public static void listTagsForResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        ListTagsForResourceRequest listTagsForResourceRequest = ListTagsForResourceRequest.builder()
                .resourceArn("arn:aws:s3:us-west-2:111122223333:accesspoint/my-access-point/*")
                .accountId("111122223333")
                .build();
        ListTagsForResourceResponse response = s3Control.listTagsForResource(listTagsForResourceRequest);
        System.out.println("Tags on my resource:");
        System.out.println(response.toString());
    }
}
```

------

## Using the REST API
<a name="access-points-db-tag-view-api"></a>

For information about the Amazon S3 REST API support for viewing the tags applied to a directory bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html)

## Using the AWS CLI
<a name="access-points-db-tag-view-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to view tags applied to an access point for directory buckets by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control list-tags-for-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3express:region:444455556666:bucket/prefix--use1-az4--x-s3
```

**Response - tags present:**

```
{
  "Tags": [
      {
          "Key": "MyKey1",
          "Value": "MyValue1"
      },
      {
          "Key": "MyKey2",
          "Value": "MyValue2"
      },
      {
          "Key": "MyKey3",
          "Value": "MyValue3"
      }
  ]
}
```

**Response - no tags present:**

```
{
  "Tags": []
}
```
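The JSON response can also be consumed programmatically. The following minimal sketch (the embedded response text is a hypothetical sample of `list-tags-for-resource` output) flattens the `Tags` array into a plain key-value map:

```python
import json

# Hypothetical list-tags-for-resource output, as returned by the AWS CLI.
response_text = """
{
  "Tags": [
      {"Key": "MyKey1", "Value": "MyValue1"},
      {"Key": "MyKey2", "Value": "MyValue2"}
  ]
}
"""

# Flatten the Tags array into an ordinary dictionary. An empty
# "Tags" array (no tags present) yields an empty dictionary.
tags = {t["Key"]: t["Value"] for t in json.loads(response_text).get("Tags", [])}
print(tags)  # → {'MyKey1': 'MyValue1', 'MyKey2': 'MyValue2'}
```

The same pattern works for responses returned by the AWS SDKs, which expose the `Tags` array as a list of key-value objects.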

# Deleting a tag from an access point for directory buckets
<a name="access-points-db-tag-delete"></a>

You can remove tags from Access Points for directory buckets. For additional information, see [Using tags with S3 Access Points for directory buckets](access-points-db-tagging.md).

**Note**  
If you delete a tag and later learn that it was being used to track costs or for access control, you can add the tag back to the access point for directory buckets. 

## Permissions
<a name="access-points-db-tag-delete-permissions"></a>

To delete a tag from an access point for directory buckets, you must have the following permission: 
+ `s3express:UntagResource`

## Troubleshooting errors
<a name="access-points-db-tag-delete-troubleshooting"></a>

If you encounter an error when attempting to delete a tag from an access point for directory buckets, you can do the following: 
+ Verify that you have the required [Permissions](#access-points-db-tag-delete-permissions) to delete a tag from an access point for directory buckets.

## Steps
<a name="access-points-db-tag-delete-steps"></a>

You can delete tags from access points for directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="access-points-db-tag-delete-console"></a>

To delete tags from an access point for directory buckets using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Access Points (Directory Buckets)**.

1. Choose the access point name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and select the checkbox next to the tag or tags that you would like to delete. 

1. Choose **Delete**. 

1. The **Delete user-defined tags** pop-up appears and asks you to confirm the deletion of the tag or tags you selected. 

1. Choose **Delete** to confirm.

## Using the AWS SDKs
<a name="access-points-db-tag-delete-sdks"></a>

------
#### [ SDK for Java 2.x ]

This example shows you how to delete tags from an access point for directory buckets by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.UntagResourceRequest;
import software.amazon.awssdk.services.s3control.model.UntagResourceResponse;

public class UntagResourceExample {
    public static void untagResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        UntagResourceRequest untagResourceRequest = UntagResourceRequest.builder()
                .resourceArn("arn:aws:s3:region:111122223333:accesspoint/my-access-point/*")
                .accountId("111122223333")
                .tagKeys("key1")
                .build();

        UntagResourceResponse response = s3Control.untagResource(untagResourceRequest);
        System.out.println("Status code (should be 204):");
        System.out.println(response.sdkHttpResponse().statusCode());
    }
}
```

------

## Using the REST API
<a name="access-points-db-tag-delete-api"></a>

For information about the Amazon S3 REST API support for deleting tags from an access point, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html)

## Using the AWS CLI
<a name="access-points-db-tag-delete-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following CLI example shows you how to delete tags from an access point for directory buckets by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control untag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3:region:111122223333:accesspoint/my-access-point/* \
--tag-keys "key1" "key2"
```

**Response:**

```
{
  "ResponseMetadata": {
    "RequestId": "EXAMPLE123456789",
    "HTTPStatusCode": 204,
    "HTTPHeaders": {
        "date": "Wed, 19 Jun 2025 10:30:00 GMT",
        "content-length": "0"
    },
    "RetryAttempts": 0
  }
}
```

# Logging with AWS CloudTrail for directory buckets
<a name="s3-express-one-zone-logging"></a>

Amazon S3 is integrated with AWS CloudTrail, a service that provides a record of actions taken by a user, role, or AWS service. CloudTrail captures all API calls for Amazon S3 as events. By using the information collected by CloudTrail, you can determine the request that was made to Amazon S3, the IP address from which the request was made, when it was made, and additional details. When a supported event activity occurs in Amazon S3, that activity is recorded in a CloudTrail event. You can use an AWS CloudTrail trail to log management events and data events for directory buckets. For more information, see [Amazon S3 CloudTrail events](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudtrail-logging-s3-info.html) and [What is AWS CloudTrail?](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) in the *AWS CloudTrail User Guide*.

## CloudTrail management events for directory buckets
<a name="s3-express-management"></a>

By default, CloudTrail logs bucket-level actions for directory buckets as management events. The `eventSource` for CloudTrail management events for directory buckets is `s3express.amazonaws.com`. When you set up your AWS account, CloudTrail management events are enabled by default. The following Regional endpoint API operations (bucket-level, or control plane, API operations) are logged to CloudTrail.
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)

**Note**  
 `ListMultipartUploads` is a Zonal endpoint API operation. However, this API operation is logged to CloudTrail as a management event. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html) in the *Amazon Simple Storage Service API Reference*.

For more information on CloudTrail management events, see [Logging management events ](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide.*

## CloudTrail data events for directory buckets
<a name="s3-express-data-events"></a>

Data events provide information about the resource operations performed on or in a resource (for example, reading or writing to an Amazon S3 object). These are also known as data plane operations. Data events are often high-volume activities. By default, CloudTrail trails don't log data events, but you can configure trails to log data events for objects stored in general purpose buckets and directory buckets. For more information, see [Enable logging for objects in a bucket using the console ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html#enable-cloudtrail-events).

When you log data events for a trail in CloudTrail, you can choose to use advanced event selectors or basic event selectors. To log data events for objects stored in directory buckets, you must use advanced event selectors. When you configure the advanced event selectors, choose or specify the resource type `AWS::S3Express::Object`. 
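As a sketch of what such a configuration looks like, the advanced event selector pairs `eventCategory = Data` with `resources.type = AWS::S3Express::Object`. The selector below is built locally for illustration; the trail name is a hypothetical placeholder, and the actual call is made with the CloudTrail `PutEventSelectors` operation (for example, through boto3, as shown in the comment):

```python
import json

# A minimal advanced event selector that logs data events for objects
# in directory buckets. The trail name is a hypothetical placeholder.
trail_name = "my-trail"
advanced_event_selectors = [
    {
        "Name": "Directory bucket data events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            # The resource type that selects directory bucket objects.
            {"Field": "resources.type", "Equals": ["AWS::S3Express::Object"]},
        ],
    }
]

# With boto3, this would be applied as (not run here):
#   boto3.client("cloudtrail").put_event_selectors(
#       TrailName=trail_name,
#       AdvancedEventSelectors=advanced_event_selectors)
print(json.dumps(advanced_event_selectors, indent=2))
```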

The following Zonal endpoint API operations (object-level, or data plane, API operations) are logged to CloudTrail. 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)

For more information on CloudTrail data events, see [Logging data events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*. 

For additional information about CloudTrail events for directory buckets, see the following topics: 

**Topics**
+ [CloudTrail management events for directory buckets](#s3-express-management)
+ [CloudTrail data events for directory buckets](#s3-express-data-events)
+ [CloudTrail log file examples for directory buckets](s3-express-log-files.md)

# CloudTrail log file examples for directory buckets
<a name="s3-express-log-files"></a>

A CloudTrail log file includes information about the requested API operation, the date and time of the operation, request parameters, and so on. This topic features examples for CloudTrail data events and management events for directory buckets.

**Topics**
+ [CloudTrail data event log file examples for directory buckets](#example-ct-log-s3express)

## CloudTrail data event log file examples for directory buckets
<a name="example-ct-log-s3express"></a>

The following example shows a CloudTrail log file that demonstrates the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) API operation. 

```
{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAIDPPEZS35WEXAMPLE:AssumedRoleSessionName",
        "arn": "arn:aws:sts::111122223333:assumed-role/RoleToBeAssumed/MySessionName",
        "accountId": "111122223333",
        "accessKeyId": "AKIAIOSFODNN7EXAMPLE",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAIDPPEZS35WEXAMPLE",
                "arn": "arn:aws:iam::111122223333:role/RoleToBeAssumed",
                "accountId": "111122223333",
                "userName": "RoleToBeAssumed"
            },
            "attributes": {
                "creationDate": "2024-07-02T00:21:16Z",
                "mfaAuthenticated": "false"
            }
        }
    },
    "eventTime": "2024-07-02T00:22:11Z",
    "eventSource": "s3express.amazonaws.com",
    "eventName": "CreateSession",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "72.21.198.68",
    "userAgent": "aws-sdk-java/2.20.160-SNAPSHOT Linux/5.10.216-225.855.amzn2.x86_64 OpenJDK_64-Bit_Server_VM/11.0.23+9-LTS Java/11.0.23 vendor/Amazon.com_Inc. md/internal exec-env/AWS_Lambda_java11 io/sync http/Apache cfg/retry-mode/standard",
    "requestParameters": {
        "bucketName": "bucket-base-name--usw2-az1--x-s3",
        "host": "bucket-base-name--usw2-az1--x-s3.s3express-usw2-az1.us-west-2.amazonaws.com",
        "x-amz-create-session-mode": "ReadWrite"
    },
    "responseElements": {
        "credentials": {
            "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
            "expiration": "Mar 20, 2024, 11:16:09 PM",
            "sessionToken": "<session token string>"
        }
    },
    "additionalEventData": {
        "SignatureVersion": "SigV4",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "xAmzId2": "q6xhNJYmhg",
        "bytesTransferredOut": 1815,
        "availabilityZone": "usw2-az1"
    },
    "requestID": "28d2faaf-3319-4649-998d-EXAMPLE72818",
    "eventID": "694d604a-d190-4470-8dd1-EXAMPLEe20c1",
    "readOnly": true,
    "resources": [
        {
            "type": "AWS::S3Express::Object",
            "ARNPrefix": "arn:aws:s3express:us-west-2:111122223333:bucket-base-name--usw2-az1--x-s3"
        },
        {
            "accountId": "111122223333",
            "type": "AWS::S3Express::DirectoryBucket",
            "ARN": "arn:aws:s3express:us-west-2:111122223333:bucket-base-name--usw2-az1--x-s3"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": false,
    "recipientAccountId": "111122223333",
    "eventCategory": "Data",
    "tlsDetails": {
        "tlsVersion": "TLSv1.3",
        "cipherSuite": "TLS_AES_128_GCM_SHA256",
        "clientProvidedHostHeader": "bucket-base-name--usw2-az1--x-s3.s3express-usw2-az1.us-west-2.amazonaws.com"
    }
}
```

To use Zonal endpoint API operations (object-level, or data plane, operations), you can use the `CreateSession` API operation to create and manage sessions that are optimized for low-latency authorization of data requests. You can also use `CreateSession` to reduce the amount of logging. To identify which Zonal API operations were performed during a session, you can match the `accessKeyId` under the `responseElements` in your `CreateSession` log file to the `accessKeyId` in the log file of other Zonal API operations. For more information, see [`CreateSession` authorization](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html).

The following example shows a CloudTrail log file that demonstrates the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) API operation that was authenticated by `CreateSession`.

```
    {
        "eventVersion": "1.09",
        "userIdentity": {
          "type": "AssumedRole",
          "principalId": "AROAIDPPEZS35WEXAMPLE:AssumedRoleSessionName",
        "arn": "arn:aws:sts::111122223333:assumed-role/RoleToBeAssumed/MySessionName",
          "accountId": "111122223333",
          "accessKeyId": "AKIAI44QH8DHBEXAMPLE",
          "sessionContext": {
            "attributes": {
              "creationDate": "2024-07-02T00:21:49Z"
            }
          }
        },    
        "eventTime": "2024-07-02T00:22:01Z",
        "eventSource": "s3express.amazonaws.com",
        "eventName": "GetObject",
        "awsRegion": "us-west-2",
        "sourceIPAddress": "72.21.198.68",
        "userAgent": "aws-sdk-java/2.25.66 Linux/5.10.216-225.855.amzn2.x86_64 OpenJDK_64-Bit_Server_VM/17.0.11+9-LTS Java/17.0.11 vendor/Amazon.com_Inc. md/internal exec-env/AWS_Lambda_java17 io/sync http/Apache cfg/retry-mode/legacy",  
        "requestParameters": {
          "bucketName": "bucket-base-name--usw2-az1--x-s3",
          "x-amz-checksum-mode": "ENABLED",
          "Host": "bucket-base-name--usw2-az1--x-s3.s3express-usw2-az1.us-west-2.amazonaws.com",
          "key": "test-get-obj-with-checksum"
        },
        "responseElements": null,
        "additionalEventData": {
          "SignatureVersion": "Sigv4",
          "CipherSuite": "TLS_AES_128_GCM_SHA256",
          "bytesTransferredIn": 0,
          "AuthenticationMethod": "AuthHeader",
          "x-amz-id-2": "oOy6w8K7LFsyFN",
          "bytesTransferredOut": 9,
          "availabilityZone": "usw2-az1",
          "sessionModeApplied": "ReadWrite"
         },
          "requestID": "28d2faaf-3319-4649-998d-EXAMPLE72818",
          "eventID": "694d604a-d190-4470-8dd1-EXAMPLEe20c1",
          "readOnly": true,
          "resources": [
            {
              "type": "AWS::S3Express::Object",
              "ARNPrefix": "arn:aws:s3express:us-west-2:111122223333:bucket-base-name--usw2-az1--x-s3"
            },
            {
              "accountId": "111122223333",  
              "type": "AWS::S3Express::DirectoryBucket",
              "ARN": "arn:aws:s3express:us-west-2:111122223333:bucket-base-name--usw2-az1--x-s3"
             }
           ],               
           "eventType": "AwsApiCall",
           "managementEvent": false,
           "recipientAccountId": "111122223333",
           "eventCategory": "Data",
           "tlsDetails": {
             "tlsVersion": "TLSv1.3",
             "cipherSuite": "TLS_AES_128_GCM_SHA256",
             "clientProvidedHostHeader": "bucket-base-name--usw2-az1--x-s3.s3express-usw2-az1.us-west-2.amazonaws.com"
            }
          }
```

In the `GetObject` log file example above, the `accessKeyId` (`AKIAI44QH8DHBEXAMPLE`) matches the `accessKeyId` under `responseElements` in the `CreateSession` log file example. The matching `accessKeyId` indicates the session in which the `GetObject` operation was performed.
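The matching described above can be automated when you analyze a batch of log records. The following sketch indexes `CreateSession` events by the temporary `accessKeyId` they return and then attributes other Zonal API calls to the session whose credentials they used; the sample records are truncated, hypothetical log entries:

```python
# Group Zonal endpoint data events by the CreateSession credentials
# they were authenticated with.
def group_by_session(records):
    sessions = {}
    # First pass: collect the temporary access keys issued by CreateSession.
    for r in records:
        if r.get("eventName") == "CreateSession":
            key = (r.get("responseElements") or {}).get("credentials", {}).get("accessKeyId")
            if key:
                sessions[key] = []
    # Second pass: attribute other calls to the session that issued their key.
    for r in records:
        key = r.get("userIdentity", {}).get("accessKeyId")
        if r.get("eventName") != "CreateSession" and key in sessions:
            sessions[key].append(r["eventName"])
    return sessions

# Truncated, hypothetical CloudTrail records.
records = [
    {"eventName": "CreateSession",
     "responseElements": {"credentials": {"accessKeyId": "AKIAI44QH8DHBEXAMPLE"}}},
    {"eventName": "GetObject",
     "userIdentity": {"accessKeyId": "AKIAI44QH8DHBEXAMPLE"}},
]
print(group_by_session(records))  # → {'AKIAI44QH8DHBEXAMPLE': ['GetObject']}
```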

The following example shows a CloudTrail log entry that demonstrates a `DeleteObjects` action on a directory bucket, invoked by S3 Lifecycle. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-lifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-lifecycle.html). 

```
{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "AWSService",
        "invokedBy": "lifecycle.s3.amazonaws.com"
    },
    "eventTime": "2024-09-11T00:55:54Z",
    "eventSource": "s3express.amazonaws.com",
    "eventName": "DeleteObjects",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "lifecycle.s3.amazonaws.com",
    "userAgent": "gamma.lifecycle.s3.amazonaws.com",
    "requestParameters": {
        "bucketName": "amzn-s3-demo-bucket--use2-az2--x-s3",
        "x-amz-expected-bucket-owner": "637423581905",
        "Host": "amzn-s3-demo-bucket--use2-az2--x-s3.gamma.use2-az2.express.s3.aws.dev",
        "delete": "",
        "x-amz-sdk-checksum-algorithm": "CRC32C"
    },
    "responseElements": null,
    "additionalEventData": {
        "SignatureVersion": "Sigv4",
        "CipherSuite": "TLS_AES_128_GCM_SHA256",
        "bytesTransferredIn": 41903,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "9H5YWZY0",
        "bytesTransferredOut": 35316,
        "availabilityZone": "use2-az2",
        "sessionModeApplied": "ReadWrite"
    },
    "requestID": "011eeadd04000191",
    "eventID": "d3d8b116-219d-4ee6-a072-5f9950733c74",
    "readOnly": false,
    "resources": [
        {
            "type": "AWS::S3Express::Object",
            "ARNPrefix": "arn:aws:s3express:us-east-2:637423581905:bucket/amzn-s3-demo-bucket--use2-az2--x-s3/"
        },
        {
            "accountId": "637423581905",
            "type": "AWS::S3Express::DirectoryBucket",
            "ARN": "arn:aws:s3express:us-east-2:637423581905:bucket/amzn-s3-demo-bucket--use2-az2--x-s3"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": false,
    "recipientAccountId": "637423581905",
    "sharedEventID": "59f877ac-1dd9-415d-b315-9bb8133289ce",
    "eventCategory": "Data"
}
```

The following example shows a CloudTrail log entry that demonstrates an `Access Denied` request on a `CreateSession` action invoked by S3 Lifecycle. For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html). 

```
{
    "eventVersion": "1.09",
    "userIdentity": {
        "type": "AWSService",
        "invokedBy": "gamma.lifecycle.s3.amazonaws.com"
    },
    "eventTime": "2024-09-11T18:13:08Z",
    "eventSource": "s3express.amazonaws.com",
    "eventName": "CreateSession",
    "awsRegion": "us-east-2",
    "sourceIPAddress": "gamma.lifecycle.s3.amazonaws.com",
    "userAgent": "gamma.lifecycle.s3.amazonaws.com",
    "errorCode": "AccessDenied",
    "errorMessage": "Access Denied",
    "requestParameters": {
        "bucketName": "amzn-s3-demo-bucket--use2-az2--x-s3",
        "Host": "amzn-s3-demo-bucket--use2-az2--x-s3.gamma.use2-az2.express.s3.aws.dev",
        "x-amz-create-session-mode": "ReadWrite",
        "x-amz-server-side-encryption": "AES256"
    },
    "responseElements": null,
    "additionalEventData": {
        "SignatureVersion": "Sigv4",
        "CipherSuite": "TLS_AES_128_GCM_SHA256",
        "bytesTransferredIn": 0,
        "AuthenticationMethod": "AuthHeader",
        "x-amz-id-2": "zuDDC1VNbC4LoNwUIc5",
        "bytesTransferredOut": 210,
        "availabilityZone": "use2-az2"
    },
    "requestID": "010932f174000191e24a0",
    "eventID": "dce7cc46-4cd3-46c0-9a47-d1b8b70e301c",
    "readOnly": true,
    "resources": [{
            "type": "AWS::S3Express::Object",
            "ARNPrefix": "arn:aws:s3express:us-east-2:637423581905:bucket/amzn-s3-demo-bucket--use2-az2--x-s3/"
        },
        {
            "accountId": "637423581905",
            "type": "AWS::S3Express::DirectoryBucket",
            "ARN": "arn:aws:s3express:us-east-2:637423581905:bucket/amzn-s3-demo-bucket--use2-az2--x-s3"
        }
    ],
    "eventType": "AwsApiCall",
    "managementEvent": false,
    "recipientAccountId": "637423581905",
    "sharedEventID": "da96b5bd-6066-4a8d-ad8d-f7f427ca7d58",
    "eventCategory": "Data"
}
```

# Monitoring metrics with Amazon CloudWatch for directory buckets
<a name="cloudwatch-monitoring-directory-buckets"></a>

Amazon CloudWatch metrics for directory buckets can help you understand and improve the performance of applications that use directory buckets. There are several sets of CloudWatch metrics that you can use with directory buckets for the S3 Express One Zone storage class and the S3 One Zone-Infrequent Access (S3 One Zone-IA) storage class in a Local Zone.

**Daily storage metrics**  
Monitor the amount of data stored in directory buckets, including total size in bytes and number of objects. These storage metrics for S3 Express One Zone are reported once per day and are provided to all customers at no additional cost.

**Request metrics**   
Monitor directory bucket requests to quickly identify and act on operational issues. The metrics are available at 1-minute intervals after some latency for processing. These CloudWatch metrics are billed at the same rate as Amazon CloudWatch custom metrics. For information about CloudWatch pricing, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/). To learn how to opt in to getting these metrics, see [Configuring request metrics for directory buckets](metrics-configurations-directory-buckets.md).  
When enabled, request metrics are reported for all object operations. By default, these 1-minute metrics are available at the directory bucket level. You can also define a filter for the metrics by using a shared directory or access point:  
+ **Access point** – Access points are named network endpoints that are attached to directory buckets and simplify managing data access at scale for shared datasets in S3. With the access point filter, you can gain insights into your access point usage. For more information about access points for directory buckets, see [Managing access to shared datasets in directory buckets with access points](access-points-directory-buckets.md).
+ **Directory** – Directory buckets use actual directories to organize objects hierarchically. A directory enables you to group similar objects together in a directory bucket. If you filter by directory, objects that are stored in the same directory are included in the metrics configuration.

Filtering on a shared directory or access point lets you align these metrics with specific business applications, workflows, or internal organizations.
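
For example, a metrics configuration that scopes reporting to a single directory might carry a payload like the following minimal sketch. This assumes the directory filter is expressed through the `Prefix` field of the S3 `MetricsConfiguration` schema; the filter ID and directory name are hypothetical.

```python
# Sketch of a request-metrics configuration payload with a directory filter.
# Assumption: the directory filter maps to the MetricsConfiguration "Prefix"
# field; "logs-directory" and "logs/" below are hypothetical values.
import json

def directory_metrics_config(filter_id: str, directory: str) -> dict:
    """Build a MetricsConfiguration that scopes metrics to one directory."""
    return {
        "Id": filter_id,
        "Filter": {"Prefix": directory},
    }

config = directory_metrics_config("logs-directory", "logs/")
print(json.dumps(config))
```

A configuration without the `Filter` key would apply to all objects in the directory bucket instead.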

All CloudWatch statistics are retained for a period of 15 months so that you can access historical information and gain a better perspective on how your web application or service is performing. For more information about CloudWatch, see [What is Amazon CloudWatch?](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/WhatIsCloudWatch.html) in the *Amazon CloudWatch User Guide*. Depending on your use case, your CloudWatch alarms might need additional configuration. For example, you can use a metric math expression to create an alarm. For more information, see [Use CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html), [Use metric math](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/using-metric-math.html), [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html), and [Create a CloudWatch alarm based on a metric math expression](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) in the *Amazon CloudWatch User Guide*.
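
As one illustration, an error-rate alarm can combine the `4xxErrors` and `AllRequests` request metrics with a metric math expression. The following sketch builds the metric data queries in the shape that the CloudWatch `PutMetricAlarm` API accepts; the directory bucket name and filter ID are hypothetical.

```python
# Sketch: metric math queries for a CloudWatch alarm on the 4xx error rate
# of a directory bucket. The bucket name and filter ID are hypothetical.
def error_rate_queries(bucket: str, filter_id: str) -> list:
    dims = [
        {"Name": "BucketName", "Value": bucket},
        {"Name": "FilterId", "Value": filter_id},
    ]

    def sum_of(name: str, query_id: str) -> dict:
        # One request metric, aggregated as a per-minute Sum.
        return {
            "Id": query_id,
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/S3",
                    "MetricName": name,
                    "Dimensions": dims,
                },
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        }

    return [
        sum_of("4xxErrors", "errors"),
        sum_of("AllRequests", "requests"),
        # Metric math: percentage of requests that returned a 4xx error.
        {
            "Id": "error_rate",
            "Expression": "100 * errors / requests",
            "Label": "4xx error rate (%)",
            "ReturnData": True,
        },
    ]

queries = error_rate_queries("amzn-s3-demo-bucket--use1-az4--x-s3", "EntireBucket")
```

You could pass a structure like this as the `Metrics` parameter of `PutMetricAlarm` and alarm on the `error_rate` query.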

**Best-effort CloudWatch metrics delivery**  
 CloudWatch metrics are delivered on a best-effort basis. Most requests for an Amazon S3 object that have request metrics result in a data point being sent to CloudWatch.

The completeness and timeliness of metrics are not guaranteed. The data point for a particular request might be returned with a timestamp that is later than when the request was actually processed. The data point for a minute might be delayed before being available through CloudWatch, or it might not be delivered at all. CloudWatch request metrics give you an idea of the nature of traffic against your bucket in near-real time. They are not meant to be a complete accounting of all requests.

Because of the best-effort nature of this feature, the reports available in the [Billing & Cost Management Dashboard](https://console.aws.amazon.com/billing/home?#/) might include one or more access requests that don't appear in the bucket metrics.

For more information, see the following topics.

**Topics**
+ [Metrics and dimensions for directory buckets](metrics-dimensions-directory-buckets.md)
+ [Configuring request metrics for directory buckets](metrics-configurations-directory-buckets.md)

# Metrics and dimensions for directory buckets
<a name="metrics-dimensions-directory-buckets"></a>

The metrics and dimensions that S3 Express One Zone sends to Amazon CloudWatch are listed in the following tables.

**Topics**
+ [Amazon S3 daily storage metrics for directory buckets in CloudWatch](#s3-cloudwatch-metrics-directory-buckets)
+ [Amazon S3 request metrics for directory buckets in CloudWatch](#s3-cloudwatch-request-metrics-directory-buckets)
+ [Amazon S3 dimensions for directory buckets](#s3-cloudwatch-dimensions-directory-buckets)

## Amazon S3 daily storage metrics for directory buckets in CloudWatch
<a name="s3-cloudwatch-metrics-directory-buckets"></a>

The `AWS/S3` namespace includes the following daily storage metrics for directory buckets.


| Metric | Description | 
| --- | --- | 
| BucketSizeBytes |  The amount of data in bytes that is stored in a directory bucket. This value is calculated by summing the size of all objects and metadata (such as bucket names) in the bucket, including the size of all parts for all incomplete multipart uploads to the bucket. Units: Bytes Valid statistics: Average  | 
| NumberOfObjects |  The total number of objects stored in a directory bucket. This value is calculated by counting all objects in the bucket and doesn't include incomplete multipart uploads to the bucket. Units: Count Valid statistics: Average  | 

## Amazon S3 request metrics for directory buckets in CloudWatch
<a name="s3-cloudwatch-request-metrics-directory-buckets"></a>

The `AWS/S3` namespace includes the following request metrics for directory buckets.


| Metric | Description | 
| --- | --- | 
| AllRequests |  The total number of HTTP requests made to a directory bucket, regardless of type. If you're using a metrics configuration with a filter, then this metric returns only the HTTP requests that meet the filter's requirements. Units: Count Valid statistics: Sum  | 
| GetRequests |  The number of HTTP `GET` requests made for objects in a directory bucket. This doesn't include list operations. This metric is incremented for the source of each `CopyObject` request. Units: Count Valid statistics: Sum  Paginated list-oriented requests, such as [ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListMPUpload.html), [ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadListParts.html), and others, are not included in this metric.   | 
| PutRequests |  The number of HTTP `PUT` requests made for objects in a directory bucket. This metric is incremented for the destination of each `CopyObject` request. Units: Count Valid statistics: Sum  | 
| DeleteRequests |  The number of HTTP `DELETE` requests made for objects in a directory bucket. This metric also includes [DeleteObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) requests. This metric shows the number of requests made, not the number of objects deleted. Units: Count Valid statistics: Sum  | 
| HeadRequests |  The number of HTTP `HEAD` requests made to a directory bucket. Units: Count Valid statistics: Sum  | 
| PostRequests |  The number of HTTP `POST` requests made to a directory bucket. Units: Count Valid statistics: Sum  [DeleteObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/multiobjectdeleteapi.html) requests are not included in this metric.    | 
| ListRequests |  The number of HTTP requests that list the contents of a directory bucket. Units: Count Valid statistics: Sum  | 
| BytesDownloaded |  The number of bytes downloaded for requests made to a directory bucket, where the response includes a body. Units: Bytes Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| BytesUploaded |  The number of bytes uploaded for requests made to a directory bucket, where the request includes a body. Units: Bytes Valid statistics: Average (bytes per request), Sum (bytes per period), Sample Count, Min, Max (same as p100), any percentile between p0.0 and p99.9  | 
| 4xxErrors |  The number of HTTP 4*xx* client error status code requests made to a directory bucket. Each request reports a value of either 0 or 1, so the Average statistic shows the error rate and the Sum statistic shows the count of that type of error during each period. Units: Count Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| 5xxErrors |  The number of HTTP 5*xx* server error status code requests made to a directory bucket. Each request reports a value of either 0 or 1, so the Average statistic shows the error rate and the Sum statistic shows the count of that type of error during each period. Units: Count Valid statistics: Average (reports per request), Sum (reports per period), Min, Max, Sample Count  | 
| FirstByteLatency |  The per-request time from the complete request being received by a directory bucket to when the response starts to be returned. Units: Milliseconds Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 
| TotalRequestLatency |  The elapsed per-request time from the first byte received to the last byte sent to a directory bucket. This metric includes the time taken to receive the request body and send the response body, which is not included in `FirstByteLatency`. Units: Milliseconds Valid statistics: Average, Sum, Min, Max (same as p100), Sample Count, any percentile between p0.0 and p100  | 
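
To make these statistics concrete, the following sketch combines period sums from the table into derived values such as an error rate and an average download size. The sample numbers are hypothetical.

```python
# Sample period sums (hypothetical) for a directory bucket's request metrics.
period_sums = {
    "AllRequests": 12000,
    "GetRequests": 9000,
    "4xxErrors": 60,
    "BytesDownloaded": 4_500_000_000,
}

# Error rate: Sum of 4xxErrors divided by Sum of AllRequests.
error_rate_pct = 100 * period_sums["4xxErrors"] / period_sums["AllRequests"]

# Rough average bytes per GET: Sum of BytesDownloaded over Sum of GetRequests.
avg_download_bytes = period_sums["BytesDownloaded"] / period_sums["GetRequests"]

print(f"4xx error rate: {error_rate_pct:.2f}%")       # 0.50%
print(f"avg download:   {avg_download_bytes:.0f} B")  # 500000 B
```

Note that `BytesDownloaded` covers any response with a body, so the second ratio is an approximation rather than an exact per-`GET` figure.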

## Amazon S3 dimensions for directory buckets
<a name="s3-cloudwatch-dimensions-directory-buckets"></a>

The following dimensions are used to filter Amazon S3 metrics for directory buckets.


| Dimension | Description | 
| --- | --- | 
| BucketName | This dimension filters the data that you request for the identified directory bucket only. | 
| FilterId | This dimension filters metrics configurations that you specify for request metrics on a directory bucket. You set up the metrics configuration filter when you configure request metrics. For more information, see [Configuring request metrics for directory buckets](metrics-configurations-directory-buckets.md). | 

# Configuring request metrics for directory buckets
<a name="metrics-configurations-directory-buckets"></a>

You can configure request metrics for directory buckets to monitor the operational performance of your S3 Express One Zone storage. Request metrics are available at 1-minute intervals and help you quickly identify and act on operational issues.

You can configure request metrics for directory buckets using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), or the Amazon S3 REST API.

## Using the S3 console
<a name="metrics-configuration-console-directory-buckets"></a>

You can use the Amazon S3 console to configure request metrics for your directory buckets.

**To configure request metrics for a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. In the buckets list, choose the name of the directory bucket that contains the objects you want request metrics for.

1. Choose the **Metrics** tab.

1. Under **Bucket metrics**, choose **View additional charts**.

1. Choose the **Request metrics** tab.

1. Choose **Create filter**.

1. In the **Filter name** box, enter your filter name.

   Names can only contain letters, numbers, periods, dashes, and underscores. We recommend using the name `EntireBucket` for a filter that applies to all objects.

1. Under **Filter scope**, choose one of the following:
   + **This filter applies to all objects in the bucket** – You don't specify a filter. Metrics are collected and reported for all objects in the directory bucket.
   + **Limit the scope of this filter using directory or access point** – You specify a filter by directory or access point so that metrics are collected and reported for only a subset of objects in the directory bucket.

1. If you chose to limit the scope of the filter, specify the filter details:
   + To filter by **directory**, enter a directory name.
   + To filter by **access point**, enter the access point ARN.

1. Choose **Save changes**.

1. On the **Request metrics** tab, under **Filters**, choose the filter that you just created.

   After about 15 minutes, CloudWatch begins tracking these request metrics. You can see them on the **Request metrics** tab. You can see graphs for the metrics on the Amazon S3 or CloudWatch console. Request metrics are billed at the standard CloudWatch rate. For more information, see [Amazon CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

## Using the AWS CLI
<a name="metrics-configuration-cli-directory-buckets"></a>

You can use the AWS CLI to configure request metrics for your directory buckets.

1. Install and set up the AWS CLI. For instructions, see [Installing, updating, and uninstalling the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html) in the *AWS Command Line Interface User Guide*.

1. Open a terminal.

1. Run the following command to add a metrics configuration.

   ```
   aws s3api put-bucket-metrics-configuration \
   --endpoint-url https://s3express-control.region-code.amazonaws.com \
   --bucket directory-bucket-name \
   --id metrics-config-id \
   --metrics-configuration '{"Id":"metrics-config-id"}'
   ```

## Using the REST API
<a name="metrics-configuration-rest-directory-buckets"></a>

You can use the Amazon S3 REST API to configure request metrics for your directory buckets. When you use the REST API with directory buckets, you must use the Regional control endpoint in the format `https://s3express-control.region-code.amazonaws.com`.

For more information about the REST API operations, see the following topics in the *Amazon S3 API Reference*:
+ [PUT Bucket metrics](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTMetricConfiguration.html)
+ [GET Bucket metrics](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETMetricConfiguration.html)
+ [DELETE Bucket metrics](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETEMetricConfiguration.html)
+ [LIST Bucket metrics](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketListMetricConfigurations.html)

# Optimizing directory bucket performance
<a name="s3-express-optimizing-performance"></a>

To obtain the best performance when using directory buckets, we recommend the following guidelines.

For more information about best practices for S3 Express One Zone, see [Best practices to optimize S3 Express One Zone performance](s3-express-optimizing-performance-design-patterns.md).

## Use session-based authentication
<a name="s3-express-optimizing-performance-session-authentication"></a>

Directory buckets support a new session-based authorization mechanism to authenticate and authorize requests to a directory bucket. With session-based authentication, the AWS SDKs automatically use the `CreateSession` API operation to create a temporary session token that can be used for low-latency authorization of data requests to a directory bucket.

The AWS SDKs use the `CreateSession` API operation to request temporary credentials, and then automatically create and refresh tokens on your behalf every 5 minutes. To take advantage of the performance benefits of directory buckets, we recommend that you use the AWS SDKs to initiate and manage the `CreateSession` API request. For more information about this session-based model, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md).
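
The SDKs handle this cadence for you, but the refresh logic is easy to picture. The following is a minimal, illustrative sketch of a 5-minute refresh check; real SDK clients manage session tokens internally and you don't need to implement this yourself.

```python
# Illustrative only: how a client could decide when a 5-minute session
# token needs refreshing. The AWS SDKs do this automatically.
from datetime import datetime, timedelta, timezone

SESSION_LIFETIME = timedelta(minutes=5)

def needs_refresh(issued_at: datetime, now: datetime) -> bool:
    """Return True once a session token has aged past the 5-minute window."""
    return now - issued_at >= SESSION_LIFETIME

issued = datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)
print(needs_refresh(issued, issued + timedelta(minutes=3)))  # False
print(needs_refresh(issued, issued + timedelta(minutes=6)))  # True
```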

## S3 additional checksum best practices
<a name="s3-express-optimizing-performance-checksums"></a>

Directory buckets offer you the option to choose the checksum algorithm that is used to validate your data during upload or download. You can select one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums are not supported with the S3 Express One Zone storage class. 

CRC32 is the default checksum used by the AWS SDKs when transmitting data to or from directory buckets. We recommend using CRC32 and CRC32C for the best performance with directory buckets. 
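
For a quick sense of what a CRC32 checksum looks like, the sketch below computes one locally with Python's standard library. The base64 step reflects how S3 additional checksum values are typically encoded in checksum headers; confirm the exact encoding against the API reference for your SDK.

```python
# Compute a CRC32 checksum locally and base64-encode its 4 big-endian bytes,
# the typical wire format for S3 additional checksum values.
import base64
import zlib

def crc32_checksum(data: bytes) -> str:
    """Return the base64-encoded CRC32 checksum of data."""
    crc = zlib.crc32(data) & 0xFFFFFFFF
    return base64.b64encode(crc.to_bytes(4, "big")).decode("ascii")

# "123456789" is the standard CRC-32 check input; its checksum is 0xCBF43926.
assert zlib.crc32(b"123456789") == 0xCBF43926
print(crc32_checksum(b"123456789"))  # y/Q5Jg==
```

CRC-based algorithms like this are cheap to compute, which is part of why CRC32 and CRC32C are the recommended choices for directory bucket performance.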

## Use the latest version of the AWS SDKs and common runtime libraries
<a name="s3-express-optimizing-performance-aws-sdks"></a>

Several of the AWS SDKs also provide the AWS Common Runtime (CRT) libraries to further accelerate performance in S3 clients. These SDKs include the AWS SDK for Java 2.x, the AWS SDK for C++, and the AWS SDK for Python (Boto3). The CRT-based S3 client transfers objects to and from directory buckets with enhanced performance and reliability by automatically using the multipart upload API operations and byte-range fetches to horizontally scale connections. 

To achieve the highest performance with directory buckets, we recommend using the latest version of the AWS SDKs that include the CRT libraries, or using the AWS Command Line Interface (AWS CLI). 

# Developing with directory buckets
<a name="s3-express-developing"></a>

After you create your directory bucket, you can immediately begin low-latency reads and writes. You can communicate with your directory bucket by using an endpoint connection over a virtual private cloud (VPC), and you can use Zonal and Regional API operations to manage your objects and directory buckets. You can work with directory buckets by using the AWS SDKs, the Amazon S3 console, the AWS Command Line Interface (AWS CLI), and the Amazon S3 REST API.

**Topics**
+ [Regional and Zonal endpoints for directory buckets](s3-express-Regions-and-Zones.md)
+ [Working with directory buckets by using the S3 console, AWS CLI, and AWS SDKs](s3-express-SDKs.md)
+ [Directory bucket API operations](s3-express-APIs.md)

# Regional and Zonal endpoints for directory buckets
<a name="s3-express-Regions-and-Zones"></a>

 To access the Regional and Zonal endpoints for directory buckets from your virtual private cloud (VPC), you can use gateway VPC endpoints. After you create a gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to your bucket. There is no additional charge for using gateway endpoints. For more information about how to configure gateway VPC endpoints, see [Networking for directory buckets](s3-express-networking.md).

Bucket-level (control plane) API operations are available through a Regional endpoint and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`.

You use Zonal (object-level, or data plane) endpoint API operations to upload and manage your objects. Zonal endpoint API operations are available through a Zonal endpoint. Examples of Zonal endpoint API operations are `PutObject` and `CopyObject`.

For more information about Regional and Zonal endpoints for directory buckets in Availability Zones, see [Regional and Zonal endpoints for directory buckets in an Availability Zone](endpoint-directory-buckets-AZ.md).

For more information about Regional and Zonal endpoints for directory buckets in Local Zones, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md).


| Region name | Region | Availability Zone IDs | Regional endpoint | Zonal endpoint | 
| --- | --- | --- | --- | --- | 
|  US East (N. Virginia)  |  `us-east-1`  |  `use1-az4` `use1-az5` `use1-az6`  |  `s3express-control.us-east-1.amazonaws.com` `s3express-control-dualstack.us-east-1.amazonaws.com`  |  `s3express-use1-az4.us-east-1.amazonaws.com` `s3express-use1-az4.dualstack.us-east-1.amazonaws.com` `s3express-use1-az5.us-east-1.amazonaws.com` `s3express-use1-az5.dualstack.us-east-1.amazonaws.com` `s3express-use1-az6.us-east-1.amazonaws.com` `s3express-use1-az6.dualstack.us-east-1.amazonaws.com`  | 
|  US East (Ohio)  |  `us-east-2`  |  `use2-az1` `use2-az2`  |  `s3express-control.us-east-2.amazonaws.com` `s3express-control-dualstack.us-east-2.amazonaws.com`  |  `s3express-use2-az1.us-east-2.amazonaws.com` `s3express-use2-az1.dualstack.us-east-2.amazonaws.com` `s3express-use2-az2.us-east-2.amazonaws.com` `s3express-use2-az2.dualstack.us-east-2.amazonaws.com`  | 
|  US West (Oregon)  |  `us-west-2`  |  `usw2-az1` `usw2-az3` `usw2-az4`  |  `s3express-control.us-west-2.amazonaws.com` `s3express-control-dualstack.us-west-2.amazonaws.com`  |  `s3express-usw2-az1.us-west-2.amazonaws.com` `s3express-usw2-az1.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az3.us-west-2.amazonaws.com` `s3express-usw2-az3.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az4.us-west-2.amazonaws.com` `s3express-usw2-az4.dualstack.us-west-2.amazonaws.com`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  |  `aps1-az1` `aps1-az3`  |  `s3express-control.ap-south-1.amazonaws.com` `s3express-control-dualstack.ap-south-1.amazonaws.com`  |  `s3express-aps1-az1.ap-south-1.amazonaws.com` `s3express-aps1-az1.dualstack.ap-south-1.amazonaws.com` `s3express-aps1-az3.ap-south-1.amazonaws.com` `s3express-aps1-az3.dualstack.ap-south-1.amazonaws.com`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  |  `apne1-az1` `apne1-az4`  |  `s3express-control.ap-northeast-1.amazonaws.com` `s3express-control-dualstack.ap-northeast-1.amazonaws.com`  |  `s3express-apne1-az1.ap-northeast-1.amazonaws.com` `s3express-apne1-az1.dualstack.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.dualstack.ap-northeast-1.amazonaws.com`  | 
|  Europe (Ireland)  |  `eu-west-1`  |  `euw1-az1` `euw1-az3`  |  `s3express-control.eu-west-1.amazonaws.com` `s3express-control-dualstack.eu-west-1.amazonaws.com`  |  `s3express-euw1-az1.eu-west-1.amazonaws.com` `s3express-euw1-az1.dualstack.eu-west-1.amazonaws.com` `s3express-euw1-az3.eu-west-1.amazonaws.com` `s3express-euw1-az3.dualstack.eu-west-1.amazonaws.com`  | 
|  Europe (Stockholm)  |  `eu-north-1`  |  `eun1-az1` `eun1-az2` `eun1-az3`  |  `s3express-control.eu-north-1.amazonaws.com` `s3express-control-dualstack.eu-north-1.amazonaws.com`  |  `s3express-eun1-az1.eu-north-1.amazonaws.com` `s3express-eun1-az1.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az2.eu-north-1.amazonaws.com` `s3express-eun1-az2.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az3.eu-north-1.amazonaws.com` `s3express-eun1-az3.dualstack.eu-north-1.amazonaws.com`  | 

# Working with directory buckets by using the S3 console, AWS CLI, and AWS SDKs
<a name="s3-express-SDKs"></a>

You can work with the S3 Express One Zone storage class and directory buckets by using the AWS SDKs, Amazon S3 console, AWS Command Line Interface (AWS CLI), and Amazon S3 REST API.

## S3 Console
<a name="s3-express-getting-started-console"></a>



To get started using the S3 console, follow these steps:
+ [Creating directory buckets in an Availability Zone](directory-bucket-create.md)
+ [Emptying a directory bucket](directory-bucket-empty.md)
+ [Deleting a directory bucket](directory-bucket-delete.md)

For a full tutorial, see [Tutorial: Getting started with S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-getting-started.html). 

## AWS SDKs
<a name="s3-express-getting-started-accessing-sdks"></a>

S3 Express One Zone supports the following AWS SDKs:
+ AWS SDK for C++
+ AWS SDK for Go v2
+ AWS SDK for Java 2.x
+ AWS SDK for JavaScript v3
+ AWS SDK for .NET
+ AWS SDK for PHP
+ AWS SDK for Python (Boto3)
+ AWS SDK for Ruby
+ AWS SDK for Kotlin
+ AWS SDK for Rust

When you're working with S3 Express One Zone, we recommend using the latest version of the AWS SDKs. The supported AWS SDKs for S3 Express One Zone handle session establishment, refreshment, and termination on your behalf. This means that you can immediately start using API operations after you download and install the AWS SDKs and configure the necessary IAM permissions. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).

For information about the AWS SDKs, including how to download and install them, see [Tools to Build on AWS](https://aws.amazon.com/developer/tools/).

For AWS SDK examples, see the following:
+ [Creating directory buckets in an Availability Zone](directory-bucket-create.md)
+ [Emptying a directory bucket](directory-bucket-empty.md)
+ [Deleting a directory bucket](directory-bucket-delete.md)

## AWS Command Line Interface (AWS CLI)
<a name="s3-express-getting-started-cli"></a>

You can use the AWS Command Line Interface (AWS CLI) to create directory buckets and use supported Regional and Zonal endpoint API operations for S3 Express One Zone. 

To get started with the AWS CLI, see [Get started with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) in the *AWS CLI Command Reference*.

**Note**  
To use directory buckets with the [high-level `aws s3` commands](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html), update your AWS CLI to the latest version. For more information about how to install and configure the AWS CLI, see [Install or update the latest version of the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS CLI Command Reference*.

For AWS CLI examples, see the following:
+ [Creating directory buckets in an Availability Zone](directory-bucket-create.md)
+ [Emptying a directory bucket](directory-bucket-empty.md)
+ [Deleting a directory bucket](directory-bucket-delete.md)

# Directory bucket API operations
<a name="s3-express-APIs"></a>

To manage directory buckets, you can use Regional (bucket level, or control plane) endpoint API operations. To manage objects in your directory buckets, you can use Zonal (object level, or data plane) endpoint API operations. For more information, see [Networking for directory buckets](s3-express-networking.md) and [Endpoints and gateway VPC endpoints](directory-bucket-high-performance.md#s3-express-overview-endpoints).

**Regional endpoint API operations**  
The following Regional endpoint API operations are supported for directory buckets: 
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_CreateAccessPoint.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPoint.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointScope.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_DeleteAccessPointScope.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucket.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketLifecycle.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPoint.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointScope.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetAccessPointScope.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketLifecycleConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForDirectoryBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListAccessPointsForDirectoryBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBucketInventoryConfigurations.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListDirectoryBuckets.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointScope.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_PutAccessPointScope.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketInventoryConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketPolicy.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html)
+ [https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html)

**Zonal endpoint API operations**  
The following Zonal endpoint API operations are supported for directory buckets: 
+ [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html)
+ [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
+ [DeleteObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html)
+ [DeleteObjects](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html)
+ [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
+ [GetObjectAttributes](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
+ [HeadBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html)
+ [HeadObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
+ [ListObjectsV2](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html)
+ [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
+ [RenameObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RenameObject.html)
+ [AbortMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_AbortMultipartUpload.html)
+ [CompleteMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html)
+ [CreateMultipartUpload](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
+ [ListMultipartUploads](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListMultipartUploads.html)
+ [ListParts](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListParts.html)
+ [UploadPart](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)
+ [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html)

# Using tags with S3 directory buckets
<a name="directory-buckets-tagging"></a>

An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 directory buckets. You can tag S3 directory buckets when you create them or manage tags on existing directory buckets. For general information about tags, see [Tagging for cost allocation or attribute-based access control (ABAC)](tagging.md).

**Note**  
There is no additional charge for using tags on directory buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Common ways to use tags with directory buckets
<a name="common-ways-to-use-tags-directory-bucket"></a>

Use tags on your S3 directory buckets for:

1. **Cost allocation** – Track storage costs by bucket tag in AWS Billing and Cost Management. For more information, see [Using tags for cost allocation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-cost-allocation).

1. **Attribute-based access control (ABAC)** – Scale access permissions and grant access to S3 directory buckets based on their tags. For more information, see [Using tags for ABAC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging.html#using-tags-for-abac).

**Note**  
You can use the same tags for both cost allocation and access control.

### ABAC for S3 directory buckets
<a name="abac-for-directory-buckets"></a>

Amazon S3 directory buckets support attribute-based access control (ABAC) using tags. You can use tag-based condition keys in your AWS Organizations, IAM, and S3 directory bucket policies. For enterprises, ABAC in Amazon S3 supports authorization across multiple AWS accounts.

In your IAM policies, you can control access to S3 directory buckets based on the bucket's tags by using the following [global condition keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-tagkeys):
+ `aws:ResourceTag/key-name`
  + Use this key to compare the tag key-value pair that you specify in the policy with the key-value pair attached to the resource. For example, you could require that access to a resource is allowed only if the resource has the attached tag key `Dept` with the value `Marketing`. For more information, see [Controlling access to AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-resources).
+ `aws:RequestTag/key-name`
  + Use this key to compare the tag key-value pair that was passed in the request with the tag pair that you specify in the policy. For example, you could check whether the request includes the tag key `Dept` and that it has the value `Accounting`. For more information, see [Controlling access during AWS requests](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-requests). You can use this condition key to restrict which tag key-value pairs can be passed during the `TagResource` and `CreateBucket` API operations.
+ `aws:TagKeys`
  + Use this key to compare the tag keys in a request with the keys that you specify in the policy. When you use policies to control access with tags, we recommend using the `aws:TagKeys` condition key to define which tag keys are allowed. For example policies and more information, see [Controlling access based on tag keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_tags.html#access_tags_control-tag-keys). You can create an S3 directory bucket with tags. To allow tagging during the `CreateBucket` API operation, you must create a policy that includes both the `s3express:TagResource` and `s3express:CreateBucket` actions. You can then use the `aws:TagKeys` condition key to require specific tags in the `CreateBucket` request.
+ `s3express:BucketTag/tag-key`
  + Use this condition key to grant permissions to specific data in directory buckets based on tags. When you access a directory bucket through an access point, this condition key references the tags on the directory bucket during authorization against both the access point and the directory bucket. In contrast, `aws:ResourceTag/tag-key` references only the tags of the resource that the request is being authorized against.

### Example ABAC policies for directory buckets
<a name="example-directory-buckets-abac-policies"></a>

See the following example ABAC policies for Amazon S3 directory buckets.

#### 1.1 - IAM policy to create or modify buckets with specific tags
<a name="example-user-policy-request-tag"></a>

In this IAM policy, users or roles can create S3 directory buckets only if the bucket creation request tags the bucket with the key `project` and the value `Trinity`. They can also add or modify tags on existing S3 directory buckets, as long as the `TagResource` request includes the tag key-value pair `project:Trinity`. This policy does not grant read, write, or delete permissions on the buckets or their objects.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "CreateBucketWithTags",
      "Effect": "Allow",
      "Action": [
        "s3express:CreateBucket",
        "s3express:TagResource"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestTag/project": [
            "Trinity"
          ]
        }
      }
    }
  ]
}
```

#### 1.2 - Bucket policy to restrict operations on the bucket using tags
<a name="example-user-policy-resource-tag"></a>

In this bucket policy, IAM principals (users and roles) in account `111122223333` can use the `s3express:CreateSession` action on the bucket only if the value of the bucket's `project` tag matches the value of the principal's `project` tag.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowObjectOperations",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111122223333"
      },
      "Action": "s3express:CreateSession",
      "Resource": "arn:aws::s3express:us-west-2:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        }
      }
    }
  ]
}
```

#### 1.3 - IAM policy to modify tags on existing resources while maintaining tagging governance
<a name="example-user-policy-tag-keys"></a>

In this IAM policy, IAM principals (users or roles) can modify tags on a bucket only if the value of the bucket's `project` tag matches the value of the principal's `project` tag. Only the four tag keys `project`, `environment`, `owner`, and `cost-center` specified in the `aws:TagKeys` condition key are permitted for these directory buckets. This helps enforce tag governance, prevents unauthorized tag modifications, and keeps the tagging schema consistent across your buckets.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "EnforceTaggingRulesOnModification",
      "Effect": "Allow",
      "Action": [
        "s3express:TagResource"
      ],
      "Resource": "arn:aws::s3express:us-west-2:111122223333:bucket/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/project": "${aws:PrincipalTag/project}"
        },
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "project",
            "environment",
            "owner",
            "cost-center"
          ]
        }
      }
    }
  ]
}
```

#### 1.4 - Using the s3express:BucketTag condition key
<a name="example-policy-bucket-tag"></a>

In this IAM policy, the condition statement allows access through an access point only if the underlying directory bucket has the tag key `Environment` with the value `Production`.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AllowAccessToSpecificAccessPoint",
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws::s3express:us-west-2:111122223333:accesspoint/*",
      "Condition": {
        "StringEquals": {
          "s3express:BucketTag/Environment": "Production"
        }
      }
    }
  ]
}
```

## Managing tags for directory buckets
<a name="working-with-tags"></a>

You can add or manage tags for S3 directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the following S3 API operations: [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html), [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html), and [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html).

**Topics**
+ [Common ways to use tags with directory buckets](#common-ways-to-use-tags-directory-bucket)
+ [Managing tags for directory buckets](#working-with-tags)
+ [Creating directory buckets with tags](directory-bucket-create-tag.md)
+ [Adding a tag to a directory bucket](directory-bucket-tag-add.md)
+ [Viewing directory bucket tags](directory-bucket-tag-view.md)
+ [Deleting a tag from a directory bucket](directory-bucket-tag-delete.md)

# Creating directory buckets with tags
<a name="directory-bucket-create-tag"></a>

You can tag Amazon S3 directory buckets when you create them. There is no additional charge for using tags on directory buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging directory buckets, see [Using tags with S3 directory buckets](directory-buckets-tagging.md).

## Permissions
<a name="create-tag-permissions"></a>

To create a directory bucket with tags, you must have the following permissions:
+ `s3express:CreateBucket`
+ `s3express:TagResource`
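
For reference, an identity-based policy that grants both permissions might look like the following sketch. This example is illustrative, not a policy from this guide; the Region, account ID, and wildcard resource are placeholder assumptions that you would scope down for your environment.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCreateDirectoryBucketWithTags",
      "Effect": "Allow",
      "Action": [
        "s3express:CreateBucket",
        "s3express:TagResource"
      ],
      "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/*"
    }
  ]
}
```

Both actions are needed together because tagging a bucket during creation is authorized as a `TagResource` operation in addition to the `CreateBucket` operation.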

## Troubleshooting errors
<a name="create-tag-troubleshooting"></a>

If you encounter an error when attempting to create a directory bucket with tags, you can do the following: 
+ Verify that you have the required [Permissions](#create-tag-permissions) to create the directory bucket and add a tag to it.
+ Check your IAM user policy for any attribute-based access control (ABAC) conditions. Your policy might require that you tag your directory buckets only with specific tag keys and values. For more information, see [Using tags for attribute-based access control (ABAC)](tagging.md#using-tags-for-abac).

## Steps
<a name="create-tag-steps"></a>

You can create a directory bucket with tags applied by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="directory-bucket-create-tag-console"></a>

To create a directory bucket with tags using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose **Create bucket** to create a new directory bucket.

1. Choose the type of directory bucket to create: 
   + To support a high performance workload, create a directory bucket in an Availability Zone. For more information, see [High performance workloads](directory-bucket-high-performance.md). 
   + To support a data residency workload, create a directory bucket in a Local Zone. For more information, see [Data residency workloads](directory-bucket-data-residency.md).

1. For both types of directory buckets, the **Create bucket** page includes a **Tags** option.

1. Enter a name for the bucket. For more information, see [Directory bucket naming rules](directory-bucket-naming-rules.md). 

1. Choose **Add new Tag** to open the Tags editor, and then enter a tag key-value pair. The tag key is required, but the value is optional. 

1. To add another tag, choose **Add new Tag** again. You can enter up to 50 tag key-value pairs.

1. After you finish specifying the options for your new directory bucket, choose **Create bucket**. 

## Using the AWS SDKs
<a name="directory-bucket-create-tag-sdks"></a>

------
#### [ SDK for Java 2.x ]

This example shows you how to create a directory bucket with tags by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketInfo;
import software.amazon.awssdk.services.s3.model.BucketType;
import software.amazon.awssdk.services.s3.model.CreateBucketConfiguration;
import software.amazon.awssdk.services.s3.model.CreateBucketRequest;
import software.amazon.awssdk.services.s3.model.CreateBucketResponse;
import software.amazon.awssdk.services.s3.model.DataRedundancy;
import software.amazon.awssdk.services.s3.model.LocationInfo;
import software.amazon.awssdk.services.s3.model.LocationType;
import software.amazon.awssdk.services.s3.model.Tag;

public class CreateBucketWithTagsExample {
    public static void createBucketWithTagsExample() {
        S3Client s3 = S3Client.builder().region(Region.US_WEST_2).build();

        CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
                .location(LocationInfo.builder()
                        .type(LocationType.AVAILABILITY_ZONE)
                        .name("usw2-az1").build())
                .bucket(BucketInfo.builder()
                        .type(BucketType.DIRECTORY)
                        .dataRedundancy(DataRedundancy.SINGLE_AVAILABILITY_ZONE)
                        .build())
                .tags(Tag.builder().key("MyTagKey").value("MyTagValue").build())
                .build();

        CreateBucketRequest createBucketRequest = CreateBucketRequest.builder()
                .bucket("amzn-s3-demo-bucket--usw2-az1--x-s3--usw2-az1--x-s3")
                .createBucketConfiguration(bucketConfiguration)
                .build();

        CreateBucketResponse response = s3.createBucket(createBucketRequest);
        System.out.println("Status code (should be 200):");
        System.out.println(response.sdkHttpResponse().statusCode());
    }
}
```

------

## Using the REST API
<a name="directory-bucket-tag-delete-api"></a>

For information about the Amazon S3 REST API support for creating a directory bucket with tags, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html)

## Using the AWS CLI
<a name="directory-bucket-create-tag-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following example shows you how to create a directory bucket with tags by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

When you create a directory bucket, you must provide configuration details and use the following naming convention: `bucket-base-name--zone-id--x-s3`

**Request:**

```
aws s3api create-bucket \
--bucket bucket-base-name--zone-id--x-s3 \
--create-bucket-configuration "Location={Type=AvailabilityZone,Name=zone-id},Bucket={DataRedundancy=SingleAvailabilityZone,Type=Directory},Tags=[{Key=mykey1,Value=myvalue1}, {Key=mykey2,Value=myvalue2}]"
```

**Response:**

```
{
  "Location": "http://bucket--use1-az4--x-s3.s3express-use1-az4.us-east-1.amazonaws.com/"
}
```

# Adding a tag to a directory bucket
<a name="directory-bucket-tag-add"></a>



You can add tags to Amazon S3 directory buckets and modify these tags. There is no additional charge for using tags on directory buckets beyond the standard S3 API request rates. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). For more information about tagging directory buckets, see [Using tags with S3 directory buckets](directory-buckets-tagging.md).

## Permissions
<a name="tag-add-permissions"></a>

To add a tag to a directory bucket, you must have the following permission:
+ `s3express:TagResource`

## Troubleshooting errors
<a name="tag-add-troubleshooting"></a>

If you encounter an error when attempting to add a tag to a directory bucket, you can do the following: 
+ Verify that you have the required [Permissions](#tag-add-permissions) to add a tag to a directory bucket.
+ If you attempted to add a tag key that starts with the AWS reserved prefix `aws:`, change the tag key and try again. 

## Steps
<a name="tag-add-steps"></a>

You can add tags to directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="directory-bucket-tag-add-console"></a>

To add tags to a directory bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and choose **Add new Tag**. 

1. On the **Add Tags** page, enter a tag key-value pair. You can enter up to 50 tag key-value pairs. 

1. If you add a new tag with the same key name as an existing tag, the value of the new tag overrides the value of the existing tag. You can also edit the values of existing tags on this page.

1. After you have added the tags, choose **Save changes**. 

## Using the AWS SDKs
<a name="directory-bucket-tag-add-sdks"></a>

------
#### [ SDK for Java 2.x ]

This example shows you how to add tags to a directory bucket by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.Tag;
import software.amazon.awssdk.services.s3control.model.TagResourceRequest;
import software.amazon.awssdk.services.s3control.model.TagResourceResponse;

public class TagResourceExample {
    public static void tagResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        TagResourceRequest tagResourceRequest = TagResourceRequest.builder()
                .resourceArn("arn:aws::s3express:us-west-2:111122223333:bucket/my-directory-bucket--usw2-az1--x-s3")
                .accountId("111122223333")
                .tags(Tag.builder().key("MyTagKey").value("MyTagValue").build())
                .build();

        TagResourceResponse response = s3Control.tagResource(tagResourceRequest);
        System.out.println("Status code (should be 204):");
        System.out.println(response.sdkHttpResponse().statusCode());
    }
}
```

------

## Using the REST API
<a name="directory-bucket-tag-add-api"></a>

For information about the Amazon S3 REST API support for adding tags to a directory bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [TagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_TagResource.html)

## Using the AWS CLI
<a name="directory-bucket-tag-add-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following example shows you how to add tags to a directory bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control tag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3express:us-east-1:444455556666:bucket/prefix--use1-az4--x-s3 \
--tags "Key=mykey,Value=myvalue"
```

**Response:**

```
{
  "ResponseMetadata": {
      "RequestId": "EXAMPLE123456789",
      "HTTPStatusCode": 200,
      "HTTPHeaders": {
          "date": "Wed, 19 Jun 2025 10:30:00 GMT",
          "content-length": "0"
      },
      "RetryAttempts": 0
  }
}
```

# Viewing directory bucket tags
<a name="directory-bucket-tag-view"></a>

You can view or list tags applied to S3 directory buckets. For more information about tags, see [Using tags with S3 directory buckets](directory-buckets-tagging.md).

## Permissions
<a name="tag-view-permissions"></a>

To view tags applied to a directory bucket, you must have the following permission: 
+ `s3express:ListTagsForResource`

## Troubleshooting errors
<a name="tag-view-troubleshooting"></a>

If you encounter an error when attempting to list or view the tags of a directory bucket, you can do the following: 
+ Verify that you have the required [Permissions](#tag-view-permissions) to view or list the tags of the directory bucket.

## Steps
<a name="tag-view-steps"></a>

You can view tags applied to directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="directory-bucket-tag-view-console"></a>

To view tags applied to a directory bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section to view all of the tags applied to the directory bucket. 

1. The **Tags** section shows the **User-defined tags** by default. You can select the **AWS-generated tags** tab to view tags applied to your directory bucket by AWS services.

## Using the AWS SDKs
<a name="directory-bucket-tag-view-sdks"></a>

This section provides an example of how to view tags applied to a directory bucket by using the AWS SDKs.

------
#### [ SDK for Java 2.x ]

This example shows you how to view tags applied to a directory bucket by using the AWS SDK for Java 2.x. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceRequest;
import software.amazon.awssdk.services.s3control.model.ListTagsForResourceResponse;

public class ListTagsForResourceExample {
    public static void listTagsForResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        ListTagsForResourceRequest listTagsForResourceRequest = ListTagsForResourceRequest.builder()
                .resourceArn("arn:aws::s3express:us-west-2:111122223333:bucket/my-directory-bucket--usw2-az1--x-s3")
                .accountId("111122223333")
                .build();

        ListTagsForResourceResponse response = s3Control.listTagsForResource(listTagsForResourceRequest);
        System.out.println("Tags on my resource:");
        System.out.println(response.toString());
    }
}
```

------

## Using the REST API
<a name="directory-bucket-tag-view-api"></a>

For information about the Amazon S3 REST API support for viewing the tags applied to a directory bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [ListTagsForResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListTagsForResource.html)

## Using the AWS CLI
<a name="directory-bucket-tag-view-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following example shows you how to view tags applied to a directory bucket by using the AWS CLI. To use this example, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control list-tags-for-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3express:us-east-1:444455556666:bucket/prefix--use1-az4--x-s3
```

**Response - tags present:**

```
{
  "Tags": [
      {
          "Key": "MyKey1",
          "Value": "MyValue1"
      },
      {
          "Key": "MyKey2",
          "Value": "MyValue2"
      },
      {
          "Key": "MyKey3",
          "Value": "MyValue3"
      }
  ]
}
```

**Response - no tags present:**

```
{
  "Tags": []
}
```

# Deleting a tag from a directory bucket
<a name="directory-bucket-tag-delete"></a>

You can remove tags from S3 directory buckets. An AWS tag is a key-value pair that holds metadata about resources, in this case Amazon S3 directory buckets. For more information about tags, see [Using tags with S3 directory buckets](directory-buckets-tagging.md).

**Note**  
If you delete a tag and later learn that it was being used to track costs or for access control, you can add the tag back to the directory bucket. 

## Permissions
<a name="tag-delete-permissions"></a>

To delete a tag from a directory bucket, you must have the following permission: 
+ `s3express:UntagResource`
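
If you also want to limit which tags can be removed (for example, to protect governance tags such as `project` or `cost-center` from accidental deletion), you can combine this permission with the `aws:TagKeys` condition key. The following policy is an illustrative sketch; the allowed tag keys, Region, and account ID are placeholder assumptions:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowRemovingOnlyNonGovernanceTags",
      "Effect": "Allow",
      "Action": "s3express:UntagResource",
      "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/*",
      "Condition": {
        "ForAllValues:StringEquals": {
          "aws:TagKeys": [
            "temp",
            "test-run"
          ]
        }
      }
    }
  ]
}
```

With a policy like this, an `UntagResource` request succeeds only if every tag key in the request appears in the allowed list.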

## Troubleshooting errors
<a name="tag-delete-troubleshooting"></a>

If you encounter an error when attempting to delete a tag from a directory bucket, you can do the following: 
+ Verify that you have the required [Permissions](#tag-delete-permissions) to delete a tag from a directory bucket.

## Steps
<a name="tag-delete-steps"></a>

You can delete tags from directory buckets by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the Amazon S3 REST API, and AWS SDKs.

## Using the S3 console
<a name="directory-bucket-tag-delete-console"></a>

To delete tags from a directory bucket using the Amazon S3 console:

1. Sign in to the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the bucket name. 

1. Choose the **Properties** tab. 

1. Scroll to the **Tags** section and select the checkbox next to the tag or tags that you want to delete. 

1. Choose **Delete**. The **Delete user-defined tags** dialog box appears and prompts you to confirm the deletion of the selected tags. 

1. Choose **Delete** to confirm.

## Using the AWS SDKs
<a name="directory-bucket-tag-delete-sdks"></a>

------
#### [ SDK for Java 2.x ]

The following example shows you how to delete tags from a directory bucket by using the AWS SDK for Java 2.x. To use this example, replace the *user input placeholders* with your own information. 

```
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3control.S3ControlClient;
import software.amazon.awssdk.services.s3control.model.UntagResourceRequest;
import software.amazon.awssdk.services.s3control.model.UntagResourceResponse;

public class UntagResourceExample {
    public static void untagResourceExample() {
        S3ControlClient s3Control = S3ControlClient.builder().region(Region.US_WEST_2).build();

        UntagResourceRequest untagResourceRequest = UntagResourceRequest.builder()
                .resourceArn("arn:aws:s3express:us-west-2:111122223333:bucket/my-directory-bucket--usw2-az1--x-s3")
                .accountId("111122223333")
                .tagKeys("myTagKey")
                .build();

        UntagResourceResponse response = s3Control.untagResource(untagResourceRequest);
        System.out.println("Status code (should be 204):");
        System.out.println(response.sdkHttpResponse().statusCode());
    }
}
```

------

## Using the REST API
<a name="directory-bucket-tag-delete-api"></a>

For information about the Amazon S3 REST API support for deleting tags from a directory bucket, see the following section in the *Amazon Simple Storage Service API Reference*:
+ [UntagResource](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_UntagResource.html)
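
As a sketch (not an exact wire trace), an `UntagResource` call is a `DELETE` request against the S3 Control tagging endpoint, with the resource ARN URL-encoded in the request path and the tag keys passed as a query parameter. The host name, ARN, and values below are placeholders:

```
DELETE /v20180820/tags/arn%3Aaws%3As3express%3Aus-east-1%3A111122223333%3Abucket%2Famzn-s3-demo-bucket--use1-az4--x-s3?tagKeys=MyKey1 HTTP/1.1
Host: 111122223333.s3-control.us-east-1.amazonaws.com
x-amz-account-id: 111122223333
```

A successful call returns an HTTP `204 (No Content)` response. For the authoritative request syntax, see the API reference linked above.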

## Using the AWS CLI
<a name="directory-bucket-tag-delete-cli"></a>

To install the AWS CLI, see [Installing the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html) in the *AWS Command Line Interface User Guide*.

The following example shows you how to delete tags from a directory bucket by using the AWS CLI. To use this command, replace the *user input placeholders* with your own information.

**Request:**

```
aws s3control untag-resource \
--account-id 111122223333 \
--resource-arn arn:aws:s3express:us-east-1:444455556666:bucket/prefix--use1-az4--x-s3 \
--tag-keys "tagkey1" "tagkey2"
```

**Response:**

A successful request returns an HTTP `204 (No Content)` status code, and the response metadata resembles the following:

```
{
  "ResponseMetadata": {
    "RequestId": "EXAMPLE123456789",
    "HTTPStatusCode": 204,
    "HTTPHeaders": {
        "date": "Wed, 19 Jun 2025 10:30:00 GMT",
        "content-length": "0"
    },
    "RetryAttempts": 0
  }
}
```

# Resilience testing in S3 Express One Zone
<a name="s3-express-fis"></a>

The Amazon S3 Express One Zone storage class supports resilience testing with AWS Fault Injection Service (AWS FIS), a fully managed service for performing fault injection experiments on your AWS workloads. With AWS FIS, you can simulate connectivity disruptions to your directory buckets, causing Zonal (object-level, or data plane) endpoint API operations to time out as they would during an Availability Zone disruption.

These experiments can help you:
+ Verify that your monitoring systems can detect S3 Express One Zone access issues
+ Test and strengthen recovery processes
+ Validate that application failover mechanisms work as expected
+ Ensure your application recovery time meets your organization's service level objectives (SLOs) and service level agreements (SLAs)

By testing your application's response to simulated disruptions, you can better prepare for the unlikely event of an actual Availability Zone outage that affects access to your data in S3 Express One Zone.

## How it works
<a name="s3-express-fis-how-it-works"></a>

The test uses the `aws:network:disrupt-connectivity` action with the scope set to `S3 Express`. This action disrupts network connectivity to S3 Express One Zone endpoints, causing requests to directory buckets to time out.

You can target subnets where your applications are running or Gateway VPC endpoints used to access S3 Express One Zone. For more information, see [AZ Availability: Power Interruption](https://docs.aws.amazon.com/fis/latest/userguide/az-availability-power-interruption.html) in the *AWS Fault Injection Service User Guide*.
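
As a hedged sketch, an AWS FIS experiment template for this scenario might resemble the following. The subnet ARN, role, and alarm are placeholders, and the exact `scope` parameter value is an assumption; confirm the template syntax against the *AWS Fault Injection Service User Guide*.

```
{
    "description": "Disrupt connectivity to S3 Express One Zone from application subnets",
    "targets": {
        "app-subnets": {
            "resourceType": "aws:ec2:subnet",
            "resourceArns": ["arn:aws:ec2:us-west-2:111122223333:subnet/subnet-EXAMPLE11111"],
            "selectionMode": "ALL"
        }
    },
    "actions": {
        "disrupt-s3express": {
            "actionId": "aws:network:disrupt-connectivity",
            "parameters": {
                "duration": "PT10M",
                "scope": "s3express"
            },
            "targets": { "Subnets": "app-subnets" }
        }
    },
    "stopConditions": [
        { "source": "aws:cloudwatch:alarm", "value": "arn:aws:cloudwatch:us-west-2:111122223333:alarm:my-stop-alarm" }
    ],
    "roleArn": "arn:aws:iam::111122223333:role/my-fis-role"
}
```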

To test the resilience of applications that store data in S3 Express One Zone, see [Simulate a connectivity event](https://docs.aws.amazon.com/fis/latest/userguide/fis-tutorial-disrupt-connectivity.html) in the *AWS Fault Injection Service User Guide*.

## Considerations and limitations
<a name="s3-express-fis-considerations"></a>

Keep in mind the following considerations and limitations for disrupting connectivity in Amazon S3 Express One Zone storage class:

### Considerations
<a name="s3-express-fis-considerations-general"></a>
+ **IAM Permissions:** To use AWS FIS with S3 Express One Zone, you must configure an IAM role with appropriate permissions. For more information on AWS FIS roles, see [Create an IAM role for AWS FIS](https://docs.aws.amazon.com/fis/latest/userguide/getting-started-iam-service-role.html) in the *AWS Fault Injection Service User Guide*. We recommend scoping these permissions to only the necessary resources.
+ **Target Resolution:** Targets are resolved at the beginning of the experiment. If a target subnet or gateway VPC endpoint is deleted during the experiment, the experiment will fail.
+ **Shared Resources Impact:** If multiple applications share the same subnets or gateway VPC endpoints, all traffic to S3 Express One Zone from these applications will be affected during the experiment.
+ **Rollback Behavior:** When the AWS FIS action ends, connectivity is automatically restored and requests resume normal operation without manual intervention.
+ **Stop Conditions:** Configure appropriate CloudWatch alarms as stop conditions to automatically terminate experiments if unexpected impacts occur.

### Limitations
<a name="s3-express-fis-limitations"></a>
+ **Target Selection:** You can't target specific S3 directory buckets. The action affects all directory buckets accessed through the targeted networking components.
+ **Maximum Targets:** There is a maximum number of subnets you can target per AWS FIS action. For more information, see [Service quotas for AWS Fault Injection Service](https://docs.aws.amazon.com/fis/latest/userguide/fis-quotas.html) in the *AWS FIS User Guide*.
+ **Access Methods:** The AWS FIS action affects only requests made over the internet or through gateway virtual private cloud (VPC) endpoints. Requests made through interface VPC endpoints (AWS PrivateLink) aren't affected.
+ **Regional Availability:** This feature is available only in [AWS Regions where S3 Express One Zone is supported](s3-express-Endpoints.md).