

# Use cases for directory buckets


You can create directory buckets in two location types: Availability Zones and Local Zones.

For low-latency use cases, you can create a directory bucket in a single Availability Zone to store data. Directory buckets in Availability Zones support the S3 Express One Zone storage class. The S3 Express One Zone storage class is recommended if your application is performance sensitive and benefits from single-digit millisecond `PUT` and `GET` latencies. To learn more about creating directory buckets in Availability Zones, see [High performance workloads](directory-bucket-high-performance.md).

For data residency use cases, you can create a directory bucket in a single AWS Dedicated Local Zone (DLZ) to store data. Directory buckets in Local Zones support the S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage class. To learn more about creating directory buckets in Local Zones, see [Data residency workloads](directory-bucket-data-residency.md).
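Directory bucket names follow a fixed pattern: a base name, followed by the zone ID of the Availability Zone or Local Zone, followed by the `--x-s3` suffix (for example, `amzn-s3-demo-bucket--usw2-az1--x-s3`). The following sketch, which is illustrative only and not part of any AWS SDK, checks that shape with a simplified validation of the base name:

```python
import re

# Simplified, illustrative check of the directory bucket naming pattern:
# base-name--zone-id--x-s3 (for example, amzn-s3-demo-bucket--usw2-az1--x-s3).
# The real naming rules have more constraints than this regex captures.
DIRECTORY_BUCKET_RE = re.compile(
    r"^[a-z0-9][a-z0-9-]{1,43}"  # base name (simplified length check)
    r"--[a-z0-9-]+--x-s3$"       # zone ID plus the required --x-s3 suffix
)

def is_directory_bucket_name(name: str) -> bool:
    """Return True if name looks like a directory bucket name."""
    return DIRECTORY_BUCKET_RE.fullmatch(name) is not None
```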

**Topics**
+ [High performance workloads](directory-bucket-high-performance.md)
+ [Data residency workloads](directory-bucket-data-residency.md)

# High performance workloads


## S3 Express One Zone

You can use Amazon S3 Express One Zone for high-performance workloads. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone, with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Objects in S3 Express One Zone are stored in directory buckets located in Availability Zones. For more information on directory buckets, see [Directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/directory-buckets-overview.html).

Amazon S3 Express One Zone is a high-performance, single-zone Amazon S3 storage class that is purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications. S3 Express One Zone is the lowest-latency cloud object storage class available today, with data access speeds up to 10x faster and request costs 50 percent lower than S3 Standard. Applications can benefit immediately from requests being completed up to an order of magnitude faster. S3 Express One Zone provides performance elasticity similar to other S3 storage classes, and it is designed for workloads and performance-critical applications that require consistent single-digit millisecond latency.

As with other Amazon S3 storage classes, you don't need to plan or provision capacity or throughput requirements in advance. You can scale your storage up or down, based on need, and access your data through the Amazon S3 API.

The Amazon S3 Express One Zone storage class is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed to handle concurrent device failures by quickly detecting and repairing any lost redundancy. If a device fails, S3 Express One Zone automatically shifts requests to new devices within the Availability Zone. This redundancy helps ensure uninterrupted access to your data within an Availability Zone.

S3 Express One Zone is ideal for any application where it's important to minimize the latency required to access an object. Such applications can be human-interactive workflows, like video editing, where creative professionals need responsive access to content from their user interfaces. S3 Express One Zone also benefits analytics and machine learning workloads that have similar responsiveness requirements from their data, especially workloads with lots of smaller accesses or large numbers of random accesses. S3 Express One Zone can be used with other AWS services to support analytics and artificial intelligence and machine learning (AI/ML) workloads, such as Amazon EMR, Amazon SageMaker AI, and Amazon Athena.

![\[Diagram showing how S3 Express One Zone works.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3-express-one-zone.png)


For directory buckets that use the S3 Express One Zone storage class, data is stored across multiple devices within a single Availability Zone, but isn't stored redundantly across Availability Zones. When you create a directory bucket to use the S3 Express One Zone storage class, we recommend that you specify an AWS Region and an Availability Zone that's local to your Amazon EC2, Amazon Elastic Kubernetes Service (Amazon EKS), or Amazon Elastic Container Service (Amazon ECS) compute instances to optimize performance.

When using S3 Express One Zone, you can interact with your directory bucket in a virtual private cloud (VPC) by using a gateway VPC endpoint. With a gateway endpoint, you can access S3 Express One Zone directory buckets from your VPC without an internet gateway or NAT device for your VPC, and at no additional cost. 

You can use many of the same Amazon S3 API operations and features with directory buckets that you use with general purpose buckets and other storage classes. These include Mountpoint for Amazon S3, server-side encryption with Amazon S3 managed keys (SSE-S3), server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), S3 Batch Operations, and S3 Block Public Access. You can access S3 Express One Zone by using the Amazon S3 console, AWS Command Line Interface (AWS CLI), AWS SDKs, and the Amazon S3 REST API.

For more information about S3 Express One Zone, see the following topics.
+ [Overview](#s3-express-one-zone-overview)
+ [Features of S3 Express One Zone](#s3-express-features)
+ [Related services](#s3-express-related-services)
+ [Next steps](#s3-express-next-steps)

### Overview


To optimize performance and reduce latency, S3 Express One Zone introduces the following new concepts.

#### Availability Zones



An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. When you create a directory bucket, you choose the Availability Zone and AWS Region where your bucket will be located. 

##### Single Availability Zone


When you create a directory bucket, you choose the Availability Zone and AWS Region.

Directory buckets use the S3 Express One Zone storage class, which is built to be used by performance-sensitive applications. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed.

With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](#s3-express-overview-az).

#### Endpoints and gateway VPC endpoints


Bucket-management API operations for directory buckets are available through a Regional endpoint and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`. After you create a directory bucket, you can use Zonal endpoint API operations to upload and manage the objects in your directory bucket. Zonal endpoint API operations are available through a Zonal endpoint. Examples of Zonal endpoint API operations are `PutObject` and `CopyObject`.
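As an illustration of the two endpoint types, the following sketch builds the hostnames that directory bucket requests resolve to. The patterns shown follow the documented `s3express` endpoint format, but treat them as assumptions and confirm them against the current endpoint reference:

```python
# Illustrative only: hostname patterns for directory bucket endpoints.
# Confirm these patterns against the current AWS endpoint reference.

def regional_endpoint(region: str) -> str:
    """Regional endpoint, used by bucket-level operations such as CreateBucket."""
    return f"s3express-control.{region}.amazonaws.com"

def zonal_endpoint(zone_id: str, region: str) -> str:
    """Zonal endpoint, used by object-level operations such as PutObject."""
    return f"s3express-{zone_id}.{region}.amazonaws.com"
```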

You can access S3 Express One Zone from your VPC by using gateway VPC endpoints. After you create a gateway endpoint, you can add it as a target in your route table for traffic destined from your VPC to S3 Express One Zone. As with Amazon S3, there is no additional charge for using gateway endpoints. For more information about how to configure gateway VPC endpoints, see [Networking for directory buckets](s3-express-networking.md).

#### Session-based authorization


With S3 Express One Zone, you authenticate and authorize requests through a new session-based mechanism that is optimized to provide the lowest latency. You can use `CreateSession` to request temporary credentials that provide low-latency access to your bucket. These temporary credentials are scoped to a specific S3 directory bucket. Session tokens are used only with Zonal (object-level) operations (with the exception of [CopyObject](directory-buckets-objects-copy.md)). For more information, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md). 

The [supported AWS SDKs for S3 Express One Zone](s3-express-SDKs.md#s3-express-getting-started-accessing-sdks) handle session establishment and refresh on your behalf. To protect your sessions, temporary security credentials expire after 5 minutes. After you download and install the AWS SDKs and configure the necessary AWS Identity and Access Management (IAM) permissions, you can immediately start using API operations.
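The following sketch illustrates the kind of per-bucket caching and early refresh that the supported SDKs handle for you. It is a simplified model, not SDK code; `fetch_session` stands in for the actual `CreateSession` request:

```python
import time
from dataclasses import dataclass
from typing import Callable

# Simplified model of SDK session handling: cache CreateSession credentials
# per bucket and refresh them shortly before the roughly five-minute expiry.
# fetch_session is a hypothetical stand-in for the real CreateSession call.

@dataclass
class SessionCredentials:
    token: str
    expires_at: float  # epoch seconds

class SessionCache:
    def __init__(self, fetch_session: Callable[[str], SessionCredentials],
                 refresh_margin: float = 30.0):
        self._fetch = fetch_session
        self._margin = refresh_margin  # refresh this many seconds early
        self._cache: dict[str, SessionCredentials] = {}

    def get(self, bucket: str) -> SessionCredentials:
        creds = self._cache.get(bucket)
        if creds is None or time.time() >= creds.expires_at - self._margin:
            creds = self._fetch(bucket)  # e.g. a CreateSession request
            self._cache[bucket] = creds
        return creds
```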

### Features of S3 Express One Zone


The following S3 features are available for S3 Express One Zone. For a complete list of supported API operations and unsupported features, see [Differences for directory buckets](s3-express-differences.md).

#### Access management and security


You can use the following features to audit and manage access. By default, directory buckets are private and can be accessed only by users who are explicitly granted access. Unlike general purpose buckets, which can set the access control boundary at the bucket, prefix, or object tag level, the access control boundary for directory buckets is set only at the bucket level. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). 
+ [S3 Block Public Access](access-control-block-public-access.md) – All S3 Block Public Access settings are enabled by default at the bucket level. This default setting can't be modified. 
+ [S3 Object Ownership](about-object-ownership.md) (bucket owner enforced by default) – Access control lists (ACLs) are not supported for directory buckets. Directory buckets automatically use the bucket owner enforced setting for S3 Object Ownership. Bucket owner enforced means that ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. This default setting can't be modified. 
+ [AWS Identity and Access Management (IAM)](s3-express-security-iam.md) – IAM helps you securely control access to your directory buckets. You can use IAM to grant access to bucket management (Regional) API operations and object management (Zonal) API operations through the `s3express:CreateSession` action. For more information, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md). Unlike object-management actions, bucket management actions cannot be cross-account. Only the bucket owner can perform those actions.
+ [Bucket policies](s3-express-security-iam-example-bucket-policies.md) – Use IAM-based policy language to configure resource-based permissions for your directory buckets. You can also use IAM to control access to the `CreateSession` API operation, which allows you to use the Zonal, or object management, API operations. You can grant same-account or cross-account access to Zonal API operations. For more information about S3 Express One Zone permissions and policies, see [Authorizing Regional endpoint API operations with IAM](s3-express-security-iam.md).
+ [IAM Access Analyzer for S3](access-analyzer.md) – Evaluate and monitor your access policies to make sure that the policies provide only the intended access to your S3 resources.
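To illustrate the cross-account pattern described above, the following hypothetical helper (not part of any AWS SDK) assembles a minimal bucket policy that grants a principal the `s3express:CreateSession` action on a directory bucket:

```python
import json

# Hypothetical helper: build a minimal directory bucket policy granting
# s3express:CreateSession to one principal. Adapt the ARNs to your account.

def create_session_policy(bucket_arn: str, principal_arn: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCreateSession",
                "Effect": "Allow",
                "Principal": {"AWS": principal_arn},
                "Action": "s3express:CreateSession",
                "Resource": bucket_arn,
            }
        ],
    }
    return json.dumps(policy, indent=4)
```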

#### Logging and monitoring


S3 Express One Zone supports the following S3 logging and monitoring tools, which you can use to monitor and control how your resources are being used:
+ [Amazon CloudWatch metrics](cloudwatch-monitoring.md) – Monitor your AWS resources and applications by using CloudWatch to collect and track metrics. S3 Express One Zone uses the same CloudWatch namespace as other Amazon S3 storage classes (`AWS/S3`) and supports daily storage metrics for directory buckets: `BucketSizeBytes` and `NumberOfObjects`. For more information, see [Monitoring metrics with Amazon CloudWatch](cloudwatch-monitoring.md).
+ [AWS CloudTrail logs](cloudtrail-logging-s3-info.md) – AWS CloudTrail is an AWS service that helps you implement operational and risk auditing, governance, and compliance of your AWS account by recording the actions taken by a user, role, or an AWS service. For S3 Express One Zone, CloudTrail captures Regional endpoint API operations (for example, `CreateBucket` and `PutBucketPolicy`) as management events and Zonal API operations (for example, `GetObject` and `PutObject`) as data events. These events include actions taken in the AWS Management Console, AWS Command Line Interface (AWS CLI), AWS SDKs, and AWS API operations. For more information, see [Logging with AWS CloudTrail for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-one-zone-logging.html).

**Note**  
Amazon S3 server access logs aren't supported with S3 Express One Zone.

#### Object management


You can manage your object storage by using the Amazon S3 console, AWS SDKs, and AWS CLI. The following features are available for object management with S3 Express One Zone:
+ [S3 Batch Operations](batch-ops-create-job.md) – Use Batch Operations to perform bulk operations on objects in directory buckets, such as **Copy** and **Invoke AWS Lambda function**. For example, you can use Batch Operations to copy objects between directory buckets and general purpose buckets. With Batch Operations, you can manage billions of objects at scale with a single S3 request by using the AWS SDKs or the AWS CLI, or with a few clicks in the Amazon S3 console.
+ [Import](https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-import-job.html) – After you create a directory bucket, you can populate your bucket with objects by using the import feature in the Amazon S3 console. Import is a streamlined method for creating Batch Operations jobs to copy objects from general purpose buckets to directory buckets.

#### AWS SDKs and client libraries


 You can manage your object storage by using the AWS SDKs and client libraries. 
+ [Mountpoint for Amazon S3](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md) – Mountpoint for Amazon S3 is an open source file client that delivers high-throughput access, lowering compute costs for data lakes on Amazon S3. Mountpoint for Amazon S3 translates local file system API calls to S3 object API calls like `GET` and `LIST`. It is ideal for read-heavy data lake workloads that process petabytes of data and need the high elastic throughput provided by Amazon S3 to scale up and down across thousands of instances.
+ [Hadoop S3A client](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#Introducing_the_Hadoop_S3A_client) – S3A is a recommended Hadoop-compatible interface for accessing data stores in Amazon S3. S3A replaces the S3N Hadoop file system client.
+ [PyTorch on AWS](https://docs.aws.amazon.com//sagemaker/latest/dg/pytorch.html) – PyTorch on AWS is an open source deep-learning framework that makes it easier to develop machine learning models and deploy them to production. 
+ [AWS SDKs](https://aws.amazon.com//developer/tools/) – You can use the AWS SDKs when developing applications with Amazon S3. The AWS SDKs simplify your programming tasks by wrapping the underlying Amazon S3 REST API. For more information about using the AWS SDKs with S3 Express One Zone, see [AWS SDKs](s3-express-SDKs.md#s3-express-getting-started-accessing-sdks).

#### Encryption and data protection


Objects in S3 Express One Zone are automatically encrypted by server-side encryption with Amazon S3 managed keys (SSE-S3). S3 Express One Zone also supports server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS). S3 Express One Zone doesn't support server-side encryption with customer-provided encryption keys (SSE-C), or dual-layer server-side encryption with AWS KMS keys (DSSE-KMS). For more information, see [Data protection and encryption](s3-express-data-protection.md).

S3 Express One Zone offers you the option to choose the checksum algorithm that is used to validate your data during upload or download. You can select one of the following Secure Hash Algorithms (SHA) or Cyclic Redundancy Check (CRC) data-integrity check algorithms: CRC32, CRC32C, SHA-1, and SHA-256. MD5-based checksums are not supported with the S3 Express One Zone storage class. 

For more information, see [S3 additional checksum best practices](s3-express-optimizing-performance.md#s3-express-optimizing-performance-checksums).
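For example, a CRC32 checksum can be computed locally and compared with the value that S3 returns. The sketch below assumes the `x-amz-checksum-crc32` representation (the base64 encoding of the big-endian 4-byte CRC); SHA-based checksums would base64-encode a `hashlib` digest instead:

```python
import base64
import zlib

# Illustrative sketch: compute a CRC32 checksum in the form assumed for the
# x-amz-checksum-crc32 header -- base64 of the big-endian 4-byte CRC value.

def s3_crc32_checksum(data: bytes) -> str:
    crc = zlib.crc32(data) & 0xFFFFFFFF
    return base64.b64encode(crc.to_bytes(4, "big")).decode("ascii")
```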

#### AWS Signature Version 4 (SigV4)


S3 Express One Zone uses AWS Signature Version 4 (SigV4) to sign requests. SigV4 is a signing protocol used to authenticate requests to Amazon S3 over HTTPS. For more information, see [Authenticating Requests (AWS Signature Version 4)](https://docs.aws.amazon.com//AmazonS3/latest/API/sig-v4-authenticating-requests.html) in the *Amazon Simple Storage Service API Reference*.
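At the core of SigV4 is a signing-key derivation chain of HMAC-SHA256 operations scoped to the date, Region, and service. The sketch below mirrors the derivation described in the signing documentation; for requests to directory buckets, the service name is typically `s3express` (an assumption to confirm for your use case):

```python
import hashlib
import hmac

# Sketch of the SigV4 signing-key derivation chain: the secret key is fed
# through successive HMAC-SHA256 steps scoped to date, Region, and service.

def sigv4_signing_key(secret_key: str, date: str, region: str,
                      service: str) -> bytes:
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")
```

The result is the key used to sign the string-to-sign for each request within that date, Region, and service scope.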

#### Strong consistency


S3 Express One Zone provides strong read-after-write consistency for `PUT` and `DELETE` requests of objects in your directory buckets in all AWS Regions. For more information, see [Amazon S3 data consistency model](Welcome.md#ConsistencyModel).

### Related services


You can use the following AWS services with the S3 Express One Zone storage class to support your specific low-latency use case.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/index.html) – Amazon EC2 provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 lessens your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that lets you run code without provisioning or managing servers. You configure notification settings on a bucket and grant Amazon S3 permission to invoke a function through the function's resource-based permissions policy.
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon EKS is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane on AWS. [Kubernetes](https://kubernetes.io/docs/concepts/overview/) is an open source system that automates the management, scaling, and deployment of containerized applications.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
+ [AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) – AWS KMS is an AWS managed service that makes it easy for you to create and control the encryption keys that are used to encrypt your data. The KMS keys that you create in AWS KMS are protected by FIPS 140-2 validated hardware security modules (HSMs). To use or manage your KMS keys, you interact with AWS KMS.
+ [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html) – Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 by using standard [SQL](https://docs.aws.amazon.com/athena/latest/ug/ddl-sql-reference.html). You can also use Athena to interactively run data analytics by using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly.
+ [Amazon SageMaker Training](https://docs.aws.amazon.com//sagemaker/latest/dg/how-it-works-training.html) – Review the options for training models with Amazon SageMaker, including built-in algorithms, custom algorithms, libraries, and models from the AWS Marketplace.
+ [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/what-is-glue.html) – AWS Glue is a serverless data-integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue for analytics, machine learning, and application development. AWS Glue also includes additional productivity and data-ops tooling for authoring, running jobs, and implementing business workflows.
+ [Amazon EMR](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-what-is-emr.html) – Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark, on AWS to process and analyze vast amounts of data. 
+ [AWS CloudTrail](https://docs.aws.amazon.com//awscloudtrail/latest/userguide/cloudtrail-user-guide.html) – AWS CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.
+ [AWS CloudFormation](https://docs.aws.amazon.com//AWSCloudFormation/latest/UserGuide/Welcome.html) – AWS CloudFormation is a service that helps you model and set up your AWS resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and CloudFormation takes care of provisioning and configuring those resources for you. You don't need to individually create and configure AWS resources and figure out what's dependent on what; CloudFormation handles that.

### Next steps


For more information about working with the S3 Express One Zone storage class and directory buckets, see the following topics:
+ [Tutorial: Getting started with S3 Express One Zone](s3-express-getting-started.md)
+ [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md)
+ [Networking for directory buckets in an Availability Zone](directory-bucket-az-networking.md)
+ [Creating directory buckets in an Availability Zone](directory-bucket-create.md)
+ [Regional and Zonal endpoints for directory buckets in an Availability Zone](endpoint-directory-buckets-AZ.md)
+ [Optimizing S3 Express One Zone performance](s3-express-performance.md)

# Tutorial: Getting started with S3 Express One Zone

Amazon S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone, with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. Data in S3 Express One Zone is stored in directory buckets located in Availability Zones. For more information on directory buckets, see [Directory buckets](https://docs.aws.amazon.com//AmazonS3/latest/userguide/directory-buckets-overview.html).

S3 Express One Zone is ideal for any application where it's critical to minimize request latency. Such applications can be human-interactive workflows, like video editing, where creative professionals need responsive access to content from their user interfaces. S3 Express One Zone also benefits analytics and machine learning workloads that have similar responsiveness requirements from their data, especially workloads with many smaller accesses or a large number of random accesses. S3 Express One Zone can be used with other AWS services, such as Amazon EMR, Amazon Athena, AWS Glue Data Catalog, and Amazon SageMaker Model Training, to support analytics and artificial intelligence and machine learning (AI/ML) workloads. You can work with the S3 Express One Zone storage class and directory buckets by using the Amazon S3 console, AWS SDKs, AWS Command Line Interface (AWS CLI), and Amazon S3 REST API. For more information, see [What is S3 Express One Zone?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-one-zone.html) and [How is S3 Express One Zone different?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-differences.html).

![\[This is an S3 Express One Zone workflow diagram.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/s3-express-one-zone.png)


**Objective**  
In this tutorial, you will learn how to create a gateway endpoint, create and attach an IAM policy, create a directory bucket, and then use the **Import** action to populate your directory bucket with objects currently stored in your general purpose bucket. Alternatively, you can manually upload objects to your directory bucket.

**Topics**
+ [Prerequisites](#s3-express-tutorial-prerequisites)
+ [Step 1: Configure a gateway VPC endpoint to reach S3 Express One Zone directory buckets](s3-express-tutorial-endpoints.md)
+ [Step 2: Create an S3 Express One Zone directory bucket](s3-express-tutorial-create-directory-bucket.md)
+ [Step 3: Import data into an S3 Express One Zone directory bucket](s3-express-tutorial-Import.md)
+ [Step 4: Manually upload objects to your S3 Express One Zone directory bucket](s3-express-tutorial-Upload.md)
+ [Step 5: Empty your S3 Express One Zone directory bucket](s3-express-tutoiral-Empty.md)
+ [Step 6: Delete your S3 Express One Zone directory bucket](s3-express-tutoiral-Delete.md)
+ [Next steps](#s3-express-tutoiral-Next)

## Prerequisites


Before you start this tutorial, you must have an AWS account that you can sign in to as an AWS Identity and Access Management (IAM) user with the correct permissions.

**Topics**
+ [Create an AWS account](#s3-express-create-account)
+ [Create an IAM user in your AWS account (console)](#s3-express-tutorial-user)
+ [Create an IAM policy and attach it to an IAM user or role (console)](#s3-express-tutorial-polict)

### Create an AWS account


To complete this tutorial, you need an AWS account. When you sign up for AWS, your AWS account is automatically signed up for all services in AWS, including Amazon S3. You are charged only for the services that you use. For more information about pricing, see [S3 pricing](https://aws.amazon.com/s3/pricing/). 

### Create an IAM user in your AWS account (console)


AWS Identity and Access Management (IAM) is an AWS service that helps administrators securely control access to AWS resources. IAM administrators control who can be authenticated (signed in) and authorized (have permissions) to access objects and use directory buckets in S3 Express One Zone. You can use IAM for no additional charge. 

By default, users don't have permissions to access directory buckets and perform S3 Express One Zone operations. To grant access permissions for directory buckets and S3 Express One Zone operations, you can use IAM to create users or roles and attach permissions to those identities. For more information about how to create an IAM user, see [Creating IAM users (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html#id_users_create_console) in the *IAM User Guide*. For more information about how to create an IAM role, see [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*. 

For simplicity, this tutorial creates and uses an IAM user. After completing this tutorial, remember to [Delete the IAM user](tutorial-s3-object-lambda-uppercase.md#ol-upper-step8-delete-user). For production use, we recommend that you follow the [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*. A best practice requires human users to use federation with an identity provider to access AWS with temporary credentials. Another best practice is to require workloads to use temporary credentials with IAM roles to access AWS. To learn more about using AWS IAM Identity Center to create users with temporary credentials, see [Getting started](https://docs.aws.amazon.com/singlesignon/latest/userguide/getting-started.html) in the *AWS IAM Identity Center User Guide*. 

**Warning**  
IAM users have long-term credentials, which presents a security risk. To help mitigate this risk, we recommend that you provide these users with only the permissions they require to perform the task and that you remove these users when they are no longer needed.

### Create an IAM policy and attach it to an IAM user or role (console)


By default, users don't have permissions for directory buckets and S3 Express One Zone operations. To grant access permissions for directory buckets, you can use IAM to create users, groups, or roles and attach permissions to those identities. Directory buckets are the only resource that you can include in bucket policies or IAM identity policies for S3 Express One Zone access. 

To use Regional endpoint API operations (bucket-level or control plane operations) with S3 Express One Zone, you use the IAM authorization model, which doesn't involve session management. Permissions are granted for actions individually. To use Zonal endpoint API operations (object-level or data plane operations), you use [CreateSession](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateSession.html) to create and manage sessions that are optimized for low-latency authorization of data requests. To retrieve and use a session token, you must allow the `s3express:CreateSession` action for your directory bucket in an identity-based policy or a bucket policy. If you're accessing S3 Express One Zone in the Amazon S3 console, through the AWS Command Line Interface (AWS CLI), or by using the AWS SDKs, S3 Express One Zone creates a session on your behalf. For more information, see [`CreateSession` authorization](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-create-session.html) and [AWS Identity and Access Management (IAM) for S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html).

**To create an IAM policy and attach the policy to an IAM user (or role)**

1. Sign in to the AWS Management Console and open the IAM Management Console.

1. In the navigation pane, choose **Policies**.

1. Choose **Create policy**.

1. Select **JSON**.

1. Copy the policy below into the **Policy editor** window. Before you can create directory buckets or use S3 Express One Zone, you must grant the necessary permissions to your AWS Identity and Access Management (IAM) role or users. This example policy allows access to the `CreateSession` API operation (for use with other Zonal or object-level API operations) and all of the Regional endpoint (bucket-level) API operations. This policy allows the `CreateSession` API operation for use with all directory buckets, but the Regional endpoint API operations are allowed only for use with the specified directory bucket. To use this example policy, replace the `user input placeholders` with your own information.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowAccessRegionalEndpointAPIs",
               "Effect": "Allow",
               "Action": [
                   "s3express:DeleteBucket",
                   "s3express:DeleteBucketPolicy",
                   "s3express:CreateBucket",
                   "s3express:PutBucketPolicy",
                   "s3express:GetBucketPolicy",
                   "s3express:ListAllMyDirectoryBuckets"
               ],
               "Resource": "arn:aws:s3express:us-east-1:111122223333:bucket/amzn-s3-demo-bucket--usw2-az1--x-s3/*"
           },
           {
               "Sid": "AllowCreateSession",
               "Effect": "Allow",
               "Action": "s3express:CreateSession",
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Next**.

1. Name the policy.
**Note**  
Bucket tags are not supported for S3 Express One Zone.

1. Choose **Create policy**.

1.  Now that you've created an IAM policy, you can attach it to an IAM user. In the navigation pane, choose **Policies**.

1. In the **search bar**, enter the name of your policy.

1. From the **Actions** menu, select **Attach**. 

1. Under **Filter by Entity Type**, select **IAM users** or **Roles**. 

1. In the **search field**, enter the name of the user or role that you want to use.

1. Choose **Attach Policy**.
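If you manage policies as code, the JSON document shown earlier in this procedure can be assembled and validated programmatically before you paste it into the **Policy editor**. The following is a minimal sketch, using the same placeholder account ID, Region, and bucket name:

```python
import json

# Placeholder values -- replace with your own account ID, Region, and bucket name.
ACCOUNT_ID = "111122223333"
REGION = "us-east-1"
BUCKET = "amzn-s3-demo-bucket--usw2-az1--x-s3"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccessRegionalEndpointAPIs",
            "Effect": "Allow",
            "Action": [
                "s3express:DeleteBucket",
                "s3express:DeleteBucketPolicy",
                "s3express:CreateBucket",
                "s3express:PutBucketPolicy",
                "s3express:GetBucketPolicy",
                "s3express:ListAllMyDirectoryBuckets",
            ],
            "Resource": f"arn:aws:s3express:{REGION}:{ACCOUNT_ID}:bucket/{BUCKET}/*",
        },
        {
            "Sid": "AllowCreateSession",
            "Effect": "Allow",
            "Action": "s3express:CreateSession",
            "Resource": "*",
        },
    ],
}

# Serialize to the JSON that you paste into the Policy editor.
policy_json = json.dumps(policy, indent=4)
print(policy_json)
```

Generating the document this way guarantees that it is syntactically valid JSON before the console ever sees it.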

**Topics**
+ [Create an AWS account](#s3-express-create-account)
+ [Create an IAM user in your AWS account (console)](#s3-express-tutorial-user)
+ [Create an IAM policy and attach it to an IAM user or role (console)](#s3-express-tutorial-polict)

# Step 1: Configure a gateway VPC endpoint to reach S3 Express One Zone directory buckets

To access S3 Express One Zone, you use Regional and Zonal endpoints that are different from standard Amazon S3 endpoints. Depending on the Amazon S3 API operation that you use, either a Zonal or a Regional endpoint is required. For a complete list of supported API operations by endpoint type, see [API operations supported by S3 Express One Zone](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html#s3-express-differences-api-operations).

You can access both Zonal and Regional API operations through gateway virtual private cloud (VPC) endpoints. Gateway endpoints allow traffic to reach S3 Express One Zone without traversing a NAT gateway and provide the most direct networking path to S3 Express One Zone, so we strongly recommend using them. With a gateway endpoint, you can access S3 Express One Zone directory buckets from your VPC without an internet gateway or NAT device, at no additional cost.

Use the following procedure to configure a gateway endpoint that connects to S3 Express One Zone directory buckets.

**To configure a gateway VPC endpoint**

1. Open the Amazon VPC Console at [https://console.aws.amazon.com/vpc/](https://console.aws.amazon.com/vpc/).

1. In the side navigation pane under **Virtual private cloud**, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Create a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. Under **Services**, search using the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route table to be used by the endpoint. After the endpoint is created, a route record is added to the route table that you selected in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint.

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6 only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

After creating a gateway endpoint, you can use Regional API endpoints and Zonal API endpoints to access Amazon S3 Express One Zone storage class objects and directory buckets.
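The console steps above correspond to a single `CreateVpcEndpoint` API call. The following sketch builds the call parameters; the VPC ID and route table ID are hypothetical placeholders, and the boto3 call itself is left commented out so the sketch stays self-contained:

```python
# Sketch of CreateVpcEndpoint parameters for a gateway endpoint to
# S3 Express One Zone. The VPC ID and route table ID below are
# hypothetical placeholders -- substitute your own values.
def gateway_endpoint_params(region, vpc_id, route_table_ids):
    return {
        "VpcEndpointType": "Gateway",
        "VpcId": vpc_id,
        # The s3express gateway endpoint service name, distinct from the
        # standard com.amazonaws.<region>.s3 service name.
        "ServiceName": f"com.amazonaws.{region}.s3express",
        "RouteTableIds": route_table_ids,
    }

params = gateway_endpoint_params("us-east-1", "vpc-0abc1234", ["rtb-0def5678"])

# With AWS credentials configured, the endpoint could then be created:
# import boto3
# ec2 = boto3.client("ec2", region_name="us-east-1")
# ec2.create_vpc_endpoint(**params)
print(params["ServiceName"])
```

The same parameters map directly onto the AWS CLI's `aws ec2 create-vpc-endpoint` command.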

# Step 2: Create an S3 Express One Zone directory bucket

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **Directory buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

   Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the **Bucket type** option disappears, and the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets and the Amazon S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).
After you create the bucket, you can't change the bucket type.
**Note**  
The Availability Zone can't be changed after the bucket is created. 

1. For **Availability Zone**, choose an Availability Zone local to your compute services. For a list of Availability Zones that support directory buckets and the S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md). 

   Under **Availability Zone**, select the check box to acknowledge that in the event of an Availability Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Availability Zone, directory buckets don't store data redundantly across Availability Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   The following naming rules apply to directory buckets:
   + Names must be unique within the chosen Zone (AWS Availability Zone or AWS Local Zone).
   + Names must be between 3 and 63 characters long, including the suffix.
   + Names can consist only of lowercase letters, numbers, and hyphens (-).
   + Names must begin and end with a letter or number.
   + Names must include the following suffix: `--zone-id--x-s3`.
   + Names must not start with the prefix `xn--`.
   + Names must not start with the prefix `sthree-`.
   + Names must not start with the prefix `sthree-configurator`.
   + Names must not start with the prefix `amzn-s3-demo-`.
   + Names must not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
   + Names must not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
   + Names must not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).

   A suffix is automatically added to the base name that you provide when you create a directory bucket using the console. This suffix includes the Availability Zone ID of the Availability Zone that you chose.

   After you create the bucket, you can't change its name. For more information about naming buckets, see [General purpose bucket naming rules](bucketnamingrules.md). 
**Important**  
Do not include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs can't be enabled. 

    **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the bucket. The bucket uses policies exclusively to define access control.

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. To configure default encryption, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed key (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service key (SSE-KMS)**

   For more information about using Amazon S3 server-side encryption to encrypt your data, see [Data protection and encryption](s3-express-data-protection.md).
**Important**  
If you use the SSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.  
When you enable default encryption, you might need to update your bucket policy. For more information, see [Using SSE-KMS encryption for cross-account operations](bucket-encryption.md#bucket-encryption-update-bucket-policy).

1. If you chose **Server-side encryption with Amazon S3 managed keys (SSE-S3)**, under **Bucket Key**, **Enabled** appears. S3 Bucket Keys are always enabled when you configure your directory bucket to use default encryption with SSE-S3. S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled. S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [`CopyObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [`UploadPartCopy`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In these cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

   S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).

1. If you chose **Server-side encryption with AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, specify your AWS Key Management Service key in one of the following ways or create a new key.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from **Available AWS KMS keys**.

     Only your customer managed keys appear in this list. The AWS managed key (`aws/s3`) isn't supported in directory buckets. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
   + To enter the KMS key ARN or alias, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN or alias in **AWS KMS key ARN**. 
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.  
You can identify the customer managed key that you specified for the bucket's SSE-KMS configuration by making a `HeadObject` request and checking the value of `x-amz-server-side-encryption-aws-kms-key-id` in the response.
To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that is not listed, you must enter your KMS key ARN. If you want to use a KMS key that is owned by a different account, you must first have permission to use the key and then you must enter the KMS key ARN. For more information on cross account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information on SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md).
When you use an AWS KMS key for server-side encryption in directory buckets, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).

1. Choose **Create bucket**. After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).
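As a rough illustration, the directory bucket naming rules listed above can be checked programmatically before you submit the form. This sketch covers only the rules stated in this procedure, not every rule that Amazon S3 enforces:

```python
import re

def valid_directory_bucket_name(name: str) -> bool:
    """Rough check of a directory bucket name (base name plus suffix)
    against the rules listed in this tutorial. Not exhaustive."""
    # Must end with the --zone-id--x-s3 suffix, e.g. --use1-az5--x-s3.
    if not re.search(r"--[a-z0-9-]+--x-s3$", name):
        return False
    # 3 to 63 characters, including the suffix.
    if not 3 <= len(name) <= 63:
        return False
    # Lowercase letters, numbers, and hyphens only; begin and end
    # with a letter or number.
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]*[a-z0-9]", name):
        return False
    # Reserved prefixes.
    if name.startswith(("xn--", "sthree-", "amzn-s3-demo-")):
        return False
    # Reserved suffixes (access point, Object Lambda, and MRAP aliases).
    if name.endswith(("-s3alias", "--ol-s3", ".mrap")):
        return False
    return True
```

For example, `valid_directory_bucket_name("my-bucket--use1-az5--x-s3")` passes, while a name without the suffix or with uppercase letters fails.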

# Step 3: Import data into an S3 Express One Zone directory bucket

To complete this step, you must have a general purpose bucket that contains objects and is located in the same AWS Region as your directory bucket.

After you create a directory bucket in Amazon S3, you can populate the new bucket with data by using the **Import** action in the Amazon S3 console. Import simplifies copying data into directory buckets by letting you choose a general purpose bucket or prefix to import data from, without having to specify all of the objects to copy individually. Import uses S3 Batch Operations to copy the objects in the selected prefix or general purpose bucket. You can monitor the progress of the import copy job on the S3 Batch Operations job details page. 

**To use the Import action**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the option button next to the name of the bucket that you want to import objects into.

1. Choose **Import**.

1. For **Source**, enter the general purpose bucket (or bucket path including prefix) that contains the objects that you want to import. To choose an existing general purpose bucket from a list, choose **Browse S3**.

1.  In the **Permissions** section, you can choose to have an IAM role auto-generated. Alternatively, you can select an IAM role from a list, or directly enter an IAM role ARN. 
   + To allow Amazon S3 to create a new IAM role on your behalf, choose **Create new IAM role**.
**Note**  
If your source objects are encrypted with server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), don't choose the **Create new IAM role** option. Instead, specify an existing IAM role that has the `kms:Decrypt` permission.  
Amazon S3 will use this permission to decrypt your objects. During the import process, Amazon S3 will then re-encrypt those objects by using server-side encryption with Amazon S3 managed keys (SSE-S3).
   + To choose an existing IAM role from a list, choose **Choose from existing IAM roles**.
   + To specify an existing IAM role by entering its Amazon Resource Name (ARN), choose **Enter IAM role ARN**, then enter the ARN in the corresponding field.

1. Review the information that's displayed in the **Destination** and **Copied object settings** sections. If the information in the **Destination** section is correct, choose **Import** to start the copy job.

   The Amazon S3 console displays the status of your new job on the **Batch Operations** page. For more information about the job, choose the option button next to the job name, and then on the **Actions** menu, choose **View details**. To open the directory bucket that the objects will be imported into, choose **View import destination**.

# Step 4: Manually upload objects to your S3 Express One Zone directory bucket

You can also manually upload objects to your directory bucket.

**To manually upload objects**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the upper right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the name of the bucket that you want to upload your folders or files to. 
**Note**  
 If you chose the same directory bucket that you used in previous steps of this tutorial, your directory bucket will contain the objects that were uploaded from the Import tool. Notice that these objects are now stored in the S3 Express One Zone storage class. 

1. In the **Objects** list, choose **Upload**.

1. On the **Upload** page, do one of the following: 
   + Drag and drop files and folders to the dotted upload area.
   + Choose **Add files** or **Add folder**, choose the files or folders to upload, and then choose **Open** or **Upload**.

1. Under **Checksums**, choose the **Checksum function** that you want to use. 
**Note**  
 We recommend using CRC32 and CRC32C for the best performance with the S3 Express One Zone storage class. For more information, see [S3 additional checksum best practices](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-optimizing-performance-design-patterns.html#s3-express-optimizing--checksums.html). 

   (Optional) If you're uploading a single object that's less than 16 MB in size, you can also specify a pre-calculated checksum value. When you provide a pre-calculated value, Amazon S3 compares it with the value that it calculates by using the selected checksum function. If the values don't match, the upload won't start. 

1. The options in the **Permissions** and **Properties** sections are automatically set to default settings and can't be modified. Block Public Access is automatically enabled, and S3 Versioning and S3 Object Lock can't be enabled for directory buckets. 

   (Optional) If you want to add metadata in key-value pairs to your objects, expand the **Properties** section, and then in the **Metadata** section, choose **Add metadata**.

1. To upload the listed files and folders, choose **Upload**.

   Amazon S3 uploads your objects and folders. When the upload is finished, you see a success message on the **Upload: status** page.

    You have successfully created a directory bucket and uploaded objects to your bucket. 
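If you provide a pre-calculated checksum value during an upload, the S3 API expects it as the base64 encoding of the checksum's big-endian bytes. The following is a standard-library sketch for CRC32 (CRC32C would require a third-party package):

```python
import base64
import struct
import zlib

def crc32_checksum_b64(data: bytes) -> str:
    """Base64 of the big-endian CRC32 of data -- the form that the
    S3 API expects for a pre-calculated CRC32 checksum value."""
    crc = zlib.crc32(data) & 0xFFFFFFFF          # 32-bit CRC of the payload
    return base64.b64encode(struct.pack(">I", crc)).decode("ascii")

# Example: the value you could paste when uploading this payload.
print(crc32_checksum_b64(b"hello world"))
```

If the value you supply doesn't match what Amazon S3 calculates with the same checksum function, the upload won't start, as noted above.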

# Step 5: Empty your S3 Express One Zone directory bucket

You can empty your Amazon S3 directory bucket by using the Amazon S3 console.

**To empty a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the upper right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. Choose the option button next to the name of the bucket that you want to empty, and then choose **Empty**.

1. On the **Empty bucket** page, confirm that you want to empty the bucket by entering **permanently delete** in the text field, and then choose **Empty**.

1. Monitor the progress of the bucket emptying process on the **Empty bucket: status** page.

# Step 6: Delete your S3 Express One Zone directory bucket

After you empty your directory bucket and abort all in-progress multipart uploads, you can delete your bucket by using the Amazon S3 console.

**To delete a directory bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the upper right corner of the page, choose the name of the currently displayed AWS Region. Next, choose the Region associated with the Availability Zone in which your directory bucket is located. 

1. In the left navigation pane, choose **Directory buckets**.

1. In the **Directory buckets** list, choose the option button next to the bucket that you want to delete.

1. Choose **Delete**.

1. On the **Delete bucket** page, enter the name of the bucket in the text field to confirm the deletion of your bucket. 
**Important**  
Deleting a directory bucket can't be undone.

1. To delete your directory bucket, choose **Delete bucket**.

## Next steps


In this tutorial, you have learned how to create a directory bucket and use the S3 Express One Zone storage class. After completing this tutorial, you can explore related AWS services to use with the S3 Express One Zone storage class.

You can use the following AWS services with the S3 Express One Zone storage class to support your specific low-latency use case.
+ [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/ec2/index.html) – Amazon EC2 provides secure and scalable computing capacity in the AWS Cloud. Using Amazon EC2 lessens your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage.
+ [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/welcome.html) – Lambda is a compute service that lets you run code without provisioning or managing servers. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function on the function's resource-based permissions policy. 
+ [Amazon Elastic Kubernetes Service (Amazon EKS)](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html) – Amazon EKS is a managed service that eliminates the need to install, operate, and maintain your own Kubernetes control plane on AWS. [Kubernetes](https://kubernetes.io/docs/concepts/overview/) is an open-source system that automates the management, scaling, and deployment of containerized applications.
+ [Amazon Elastic Container Service (Amazon ECS)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html) – Amazon ECS is a fully managed container orchestration service that helps you easily deploy, manage, and scale containerized applications.
+ [Amazon EMR](https://docs.aws.amazon.com//emr/latest/ManagementGuide/emr-express-one-zone.html) – Amazon EMR is a managed cluster platform that simplifies running big data frameworks, such as Apache Hadoop and Apache Spark on AWS to process and analyze vast amounts of data. 
+ [Amazon Athena](https://docs.aws.amazon.com//athena/latest/ug/querying-express-one-zone.html) – Athena is an interactive query service that makes it easy to analyze data directly in Amazon S3 by using standard [SQL](https://docs.aws.amazon.com/athena/latest/ug/ddl-sql-reference.html). You can also use Athena to interactively run data analytics by using Apache Spark without having to plan for, configure, or manage resources. When you run Apache Spark applications on Athena, you submit Spark code for processing and receive the results directly.
+ [AWS Glue Data Catalog](https://docs.aws.amazon.com//glue/latest/dg/catalog-and-crawler.html) – AWS Glue is a serverless data-integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. You can use AWS Glue for analytics, machine learning, and application development. AWS Glue Data Catalog is a centralized repository that stores metadata about your organization's data sets. It acts as an index to the location, schema, and run-time metrics of your data sources. 
+ [Amazon SageMaker Model Training](https://docs.aws.amazon.com//sagemaker/latest/dg/model-access-training-data.html) – Amazon SageMaker is a fully managed machine learning service. With SageMaker, data scientists and developers can quickly and easily build and train machine learning models, and then directly deploy them into a production-ready hosted environment.

 For more information on S3 Express One Zone, see [What is S3 Express One Zone?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-one-zone.html) and [How is S3 Express One Zone different?](https://docs.aws.amazon.com//AmazonS3/latest/userguide/s3-express-differences.html).

# S3 Express One Zone Availability Zones and Regions


An Availability Zone is one or more discrete data centers with redundant power, networking, and connectivity in an AWS Region. To optimize low-latency retrievals, objects in the Amazon S3 Express One Zone storage class are redundantly stored in S3 directory buckets in a single Availability Zone that's local to your compute workload. When you create a directory bucket, you choose the Availability Zone and AWS Region where your bucket will be located. 

AWS maps the physical Availability Zones randomly to the Availability Zone names for each AWS account. This approach helps to distribute resources across the Availability Zones in an AWS Region, instead of resources likely being concentrated in the first Availability Zone for each Region. As a result, the Availability Zone `us-east-1a` for your AWS account might not represent the same physical location as `us-east-1a` for a different AWS account. For more information, see [Regions and Availability Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) in the *Amazon EC2 User Guide*.

To coordinate Availability Zones across accounts, you must use the *AZ ID*, which is a unique and consistent identifier for an Availability Zone. For example, `use1-az1` is an AZ ID for the `us-east-1` Region and it has the same physical location in every AWS account. The following illustration shows how the AZ IDs are the same for every account, even though the Availability Zone names might be mapped differently for each account.

![\[Illustration showing Availability Zone mapping and Regions.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/availability-zone-mapping.png)


With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](directory-bucket-high-performance.md#s3-express-overview-az).

For the S3 Express One Zone supported Regions and Availability Zones, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).

# Networking for directory buckets in an Availability Zone


To reduce the amount of time your packets spend on the network, configure your virtual private cloud (VPC) with a gateway endpoint to access directory buckets in Availability Zones. Gateway endpoints keep traffic within the AWS network at no additional cost.

**Topics**
+ [Endpoints for directory buckets in Availability Zones](#s3-express-endpoints-az)
+ [Configuring VPC gateway endpoints](#s3-express-networking-vpc-gateway)

## Endpoints for directory buckets in Availability Zones


The following table shows the Regional and Zonal API endpoints that are available for each Region and Availability Zone.


| Region name | Region | Availability Zone IDs | Regional endpoint | Zonal endpoint | 
| --- | --- | --- | --- | --- | 
|  US East (N. Virginia)  |  `us-east-1`  |  `use1-az4` `use1-az5` `use1-az6`  |  `s3express-control.us-east-1.amazonaws.com` `s3express-control-dualstack.us-east-1.amazonaws.com`  |  `s3express-use1-az4.us-east-1.amazonaws.com` `s3express-use1-az4.dualstack.us-east-1.amazonaws.com` `s3express-use1-az5.us-east-1.amazonaws.com` `s3express-use1-az5.dualstack.us-east-1.amazonaws.com` `s3express-use1-az6.us-east-1.amazonaws.com` `s3express-use1-az6.dualstack.us-east-1.amazonaws.com`  | 
|  US East (Ohio)  |  `us-east-2`  |  `use2-az1` `use2-az2`  |  `s3express-control.us-east-2.amazonaws.com` `s3express-control-dualstack.us-east-2.amazonaws.com`  |  `s3express-use2-az1.us-east-2.amazonaws.com` `s3express-use2-az1.dualstack.us-east-2.amazonaws.com` `s3express-use2-az2.us-east-2.amazonaws.com` `s3express-use2-az2.dualstack.us-east-2.amazonaws.com`  | 
|  US West (Oregon)  |  `us-west-2`  |  `usw2-az1` `usw2-az3` `usw2-az4`  |  `s3express-control.us-west-2.amazonaws.com` `s3express-control-dualstack.us-west-2.amazonaws.com`  |  `s3express-usw2-az1.us-west-2.amazonaws.com` `s3express-usw2-az1.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az3.us-west-2.amazonaws.com` `s3express-usw2-az3.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az4.us-west-2.amazonaws.com` `s3express-usw2-az4.dualstack.us-west-2.amazonaws.com`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  |  `aps1-az1` `aps1-az3`  |  `s3express-control.ap-south-1.amazonaws.com` `s3express-control-dualstack.ap-south-1.amazonaws.com`  |  `s3express-aps1-az1.ap-south-1.amazonaws.com` `s3express-aps1-az1.dualstack.ap-south-1.amazonaws.com` `s3express-aps1-az3.ap-south-1.amazonaws.com` `s3express-aps1-az3.dualstack.ap-south-1.amazonaws.com`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  |  `apne1-az1` `apne1-az4`  |  `s3express-control.ap-northeast-1.amazonaws.com` `s3express-control-dualstack.ap-northeast-1.amazonaws.com`  |  `s3express-apne1-az1.ap-northeast-1.amazonaws.com` `s3express-apne1-az1.dualstack.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.dualstack.ap-northeast-1.amazonaws.com`  | 
|  Europe (Ireland)  |  `eu-west-1`  |  `euw1-az1` `euw1-az3`  |  `s3express-control.eu-west-1.amazonaws.com` `s3express-control-dualstack.eu-west-1.amazonaws.com`  |  `s3express-euw1-az1.eu-west-1.amazonaws.com` `s3express-euw1-az1.dualstack.eu-west-1.amazonaws.com` `s3express-euw1-az3.eu-west-1.amazonaws.com` `s3express-euw1-az3.dualstack.eu-west-1.amazonaws.com`  | 
|  Europe (Stockholm)  |  `eu-north-1`  |  `eun1-az1` `eun1-az2` `eun1-az3`  |  `s3express-control.eu-north-1.amazonaws.com` `s3express-control-dualstack.eu-north-1.amazonaws.com`  |  `s3express-eun1-az1.eu-north-1.amazonaws.com` `s3express-eun1-az1.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az2.eu-north-1.amazonaws.com` `s3express-eun1-az2.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az3.eu-north-1.amazonaws.com` `s3express-eun1-az3.dualstack.eu-north-1.amazonaws.com`  | 

## Configuring VPC gateway endpoints


Use the following procedure to create a gateway endpoint that connects to Amazon S3 Express One Zone storage class objects and directory buckets.

**To configure a gateway VPC endpoint**

1. Open the [Amazon VPC Console](https://console.aws.amazon.com/vpc/). 

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Create a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. For **Services**, add the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route table in your VPC to be used by the endpoint. After the endpoint is created, a route record will be added to the route table that you select in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint.

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6 only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

After creating a gateway endpoint, you can use Regional API endpoints and Zonal API endpoints to access Amazon S3 Express One Zone storage class objects and directory buckets.

To learn more about gateway VPC endpoints, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the *AWS PrivateLink Guide*. For data residency use cases, we recommend allowing access to your buckets only from your VPC by using gateway VPC endpoints. When access is restricted to a VPC or a VPC endpoint, you can access the objects through the AWS Management Console, the REST API, the AWS CLI, and the AWS SDKs.

**Note**  
To restrict access to a VPC or a VPC endpoint using the AWS Management Console, you must use AWS Management Console Private Access. For more information, see [AWS Management Console Private Access](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/console-private-access.html) in the *AWS Management Console Getting Started Guide*.
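One common way to restrict a directory bucket to traffic from your VPC is a bucket policy that denies requests arriving from anywhere other than your gateway endpoint. The following is a sketch only: the Region, account ID (`111122223333`), bucket name, and endpoint ID (`vpce-1a2b3c4d`) are placeholders, and you should verify the statement against your own requirements before attaching it.

```python
import json

# Deny all s3express actions unless the request arrives through the
# specified gateway VPC endpoint (all identifiers are placeholders).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3express:*",
            "Resource": "arn:aws:s3express:us-west-2:111122223333:bucket/bucket-base-name--usw2-az1--x-s3",
            "Condition": {"StringNotEquals": {"aws:SourceVpce": "vpce-1a2b3c4d"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```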

# Creating directory buckets in an Availability Zone


To start using the Amazon S3 Express One Zone storage class, you create a directory bucket. The S3 Express One Zone storage class can be used only with directory buckets. The S3 Express One Zone storage class supports low-latency use cases and provides faster data processing within a single Availability Zone. If your application is performance sensitive and benefits from single-digit millisecond `PUT` and `GET` latencies, we recommend creating a directory bucket so that you can use the S3 Express One Zone storage class.

There are two types of Amazon S3 buckets: general purpose buckets and directory buckets. Choose the bucket type that best fits your application and performance requirements. General purpose buckets are the original S3 bucket type. They are recommended for most use cases and access patterns, and they can store objects across all storage classes except S3 Express One Zone. For more information about general purpose buckets, see [General purpose buckets overview](UsingBucket.md).

Directory buckets use the S3 Express One Zone storage class, which is designed to be used for workloads or performance-critical applications that require consistent single-digit millisecond latency. S3 Express One Zone is the first S3 storage class where you can select a single Availability Zone with the option to co-locate your object storage with your compute resources, which provides the highest possible access speed. When you create a directory bucket, you can optionally specify an AWS Region and an Availability Zone that's local to your Amazon EC2, Amazon Elastic Kubernetes Service, or Amazon Elastic Container Service (Amazon ECS) compute instances to optimize performance.

With S3 Express One Zone, your data is redundantly stored on multiple devices within a single Availability Zone. S3 Express One Zone is designed for 99.95 percent availability within a single Availability Zone and is backed by the [Amazon S3 Service Level Agreement](https://aws.amazon.com/s3/sla/). For more information, see [Availability Zones](directory-bucket-high-performance.md#s3-express-overview-az).

Directory buckets organize data hierarchically into directories, as opposed to the flat storage structure of general purpose buckets. There aren't prefix limits for directory buckets, and individual directories can scale horizontally. 

For more information about directory buckets, see [Working with directory buckets](directory-buckets-overview.md).

**Directory bucket names**  
Directory bucket names must follow this format and comply with the rules for directory bucket naming:

```
bucket-base-name--zone-id--x-s3
```

For example, the following directory bucket name contains the Availability Zone ID `usw2-az1`:

```
bucket-base-name--usw2-az1--x-s3
```

For more information about directory bucket naming rules, see [Directory bucket naming rules](directory-bucket-naming-rules.md).
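Assembling and validating the name before calling `CreateBucket` can catch mistakes early. A small sketch in Python; the check covers only the character, length, and start/end rules, not the reserved prefixes and suffixes:

```python
import re

def directory_bucket_name(base_name: str, zone_id: str) -> str:
    """Build a directory bucket name of the form bucket-base-name--zone-id--x-s3."""
    name = f"{base_name}--{zone_id}--x-s3"
    # 3-63 characters total (including the suffix); lowercase letters,
    # numbers, and hyphens only; must begin and end with a letter or number.
    if not re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name):
        raise ValueError(f"invalid directory bucket name: {name}")
    return name

print(directory_bucket_name("doc-example-bucket", "usw2-az1"))
# doc-example-bucket--usw2-az1--x-s3
```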

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **Directory buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

   Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the **Bucket type** option disappears, and the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets and the Amazon S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).
After you create the bucket, you can't change the bucket type.
**Note**  
The Availability Zone can't be changed after the bucket is created. 

1. For **Availability Zone**, choose an Availability Zone local to your compute services. For a list of Availability Zones that support directory buckets and the S3 Express One Zone storage class, see [S3 Express One Zone Availability Zones and Regions](s3-express-Endpoints.md).

   Under **Availability Zone**, select the check box to acknowledge that in the event of an Availability Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Availability Zone, directory buckets don't store data redundantly across Availability Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   The following naming rules apply for directory buckets:
   + Bucket names must be unique within the chosen Zone (AWS Availability Zone or AWS Local Zone).
   + Bucket names must be between 3 and 63 characters long, including the suffix.
   + Bucket names can contain only lowercase letters, numbers, and hyphens (-).
   + Bucket names must begin and end with a letter or number.
   + Bucket names must include the following suffix: `--zone-id--x-s3`.
   + Bucket names must not start with the prefix `xn--`.
   + Bucket names must not start with the prefix `sthree-`.
   + Bucket names must not start with the prefix `sthree-configurator`.
   + Bucket names must not start with the prefix `amzn-s3-demo-`.
   + Bucket names must not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
   + Bucket names must not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
   + Bucket names must not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).

   A suffix is automatically added to the base name that you provide when you create a directory bucket using the console. This suffix includes the Availability Zone ID of the Availability Zone that you chose.

   After you create the bucket, you can't change its name. For more information about naming buckets, see [General purpose bucket naming rules](bucketnamingrules.md). 
**Important**  
Do not include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs can't be enabled. 

    **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the bucket. The bucket uses policies exclusively to define access control.

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. To configure default encryption, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed key (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service key (SSE-KMS)**

   For more information about using Amazon S3 server-side encryption to encrypt your data, see [Data protection and encryption](s3-express-data-protection.md).
**Important**  
If you use the SSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.  
When you enable default encryption, you might need to update your bucket policy. For more information, see [Using SSE-KMS encryption for cross-account operations](bucket-encryption.md#bucket-encryption-update-bucket-policy).

1. If you chose **Server-side encryption with Amazon S3 managed keys (SSE-S3)**, under **Bucket Key**, **Enabled** appears. S3 Bucket Keys are always enabled for `GET` and `PUT` operations in a directory bucket and can't be disabled when you configure default encryption with SSE-S3. However, S3 Bucket Keys aren't supported when you copy SSE-KMS encrypted objects from general purpose buckets to directory buckets, from directory buckets to general purpose buckets, or between directory buckets, through [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), [UploadPartCopy](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html), [the Copy operation in Batch Operations](directory-buckets-objects-Batch-Ops.md), or [import jobs](create-import-job.md). In these cases, Amazon S3 makes a call to AWS KMS every time a copy request is made for a KMS-encrypted object.

   S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md).

1. If you chose **Server-side encryption with AWS Key Management Service key (SSE-KMS)**, under **AWS KMS key**, specify your AWS Key Management Service key in one of the following ways, or create a new key.
   + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from **Available AWS KMS keys**.

     Only your customer managed keys appear in this list. The AWS managed key (`aws/s3`) isn't supported in directory buckets. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
   + To enter the KMS key ARN or alias, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN or alias in **AWS KMS key ARN**. 
   + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

     For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
Your SSE-KMS configuration can support only one [customer managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#customer-cmk) per directory bucket for the lifetime of the bucket. The [AWS managed key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#aws-managed-cmk) (`aws/s3`) isn't supported. Also, after you specify a customer managed key for SSE-KMS, you can't override the customer managed key for the bucket's SSE-KMS configuration.  
You can identify the customer managed key that you specified for the bucket's SSE-KMS configuration by making a `HeadObject` API request and checking the value of the `x-amz-server-side-encryption-aws-kms-key-id` response header.
To use a new customer managed key for your data, we recommend copying your existing objects to a new directory bucket with a new customer managed key.
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that's owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information on cross-account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com/kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information on SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS) for new object uploads in directory buckets](s3-express-specifying-kms-encryption.md).
When you use an AWS KMS key for server-side encryption in directory buckets, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   For more information about using AWS KMS with Amazon S3, see [Using server-side encryption with AWS KMS keys (SSE-KMS) in directory buckets](s3-express-UsingKMSEncryption.md).

1. Choose **Create bucket**. After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).

## Using the AWS SDKs


------
#### [ SDK for Go ]

This example shows how to create a directory bucket by using the AWS SDK for Go. 

**Example**  

```
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

var bucket = "..."

func runCreateBucket(c *s3.Client) {
	resp, err := c.CreateBucket(context.Background(), &s3.CreateBucketInput{
		Bucket: &bucket,
		CreateBucketConfiguration: &types.CreateBucketConfiguration{
			Location: &types.LocationInfo{
				Name: aws.String("usw2-az1"),
				Type: types.LocationTypeAvailabilityZone,
			},
			Bucket: &types.BucketInfo{
				DataRedundancy: types.DataRedundancySingleAvailabilityZone,
				Type:           types.BucketTypeDirectory,
			},
		},
	})
	// Creating a bucket that you already own is treated as a no-op.
	var terr *types.BucketAlreadyOwnedByYou
	if errors.As(err, &terr) {
		fmt.Printf("BucketAlreadyOwnedByYou: %s\n", aws.ToString(terr.Message))
		fmt.Printf("noop...\n")
		return
	}
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("bucket created at %s\n", aws.ToString(resp.Location))
}
```

------
#### [ SDK for Java 2.x ]

This example shows how to create a directory bucket by using the AWS SDK for Java 2.x. 

**Example**  

```
public static void createBucket(S3Client s3Client, String bucketName) {

    // Bucket name format is {base-bucket-name}--{az-id}--x-s3.
    // For example, doc-example-bucket--usw2-az1--x-s3 is a valid name for a
    // directory bucket created in Region us-west-2, Availability Zone ID usw2-az1.

    CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
            .location(LocationInfo.builder()
                    .type(LocationType.AVAILABILITY_ZONE)
                    .name("usw2-az1") // must match the AZ ID in your bucket name
                    .build())
            .bucket(BucketInfo.builder()
                    .type(BucketType.DIRECTORY)
                    .dataRedundancy(DataRedundancy.SINGLE_AVAILABILITY_ZONE)
                    .build())
            .build();

    try {
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .createBucketConfiguration(bucketConfiguration)
                .build();
        CreateBucketResponse response = s3Client.createBucket(bucketRequest);
        System.out.println(response);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ AWS SDK for JavaScript ]

This example shows how to create a directory bucket by using the AWS SDK for JavaScript. 

**Example**  

```
// file.mjs, run with Node.js v16 or higher
// To use with the preview build, place this in a folder
// inside the preview build directory, such as /aws-sdk-js-v3/workspace/

import { S3 } from "@aws-sdk/client-s3";

const region = "us-east-1";
const zone = "use1-az4";
const suffix = `${zone}--x-s3`;

const s3 = new S3({ region });

const bucketName = `...--${suffix}`;

const createResponse = await s3.createBucket({
  Bucket: bucketName,
  CreateBucketConfiguration: {
    Location: { Type: "AvailabilityZone", Name: zone },
    Bucket: { Type: "Directory", DataRedundancy: "SingleAvailabilityZone" },
  },
});
```

------
#### [ SDK for .NET ]

This example shows how to create a directory bucket by using the SDK for .NET. 

**Example**  

```
using (var amazonS3Client = new AmazonS3Client())
{
    var putBucketResponse = await amazonS3Client.PutBucketAsync(new PutBucketRequest
    {

       BucketName = "doc-example-bucket--usw2-az1--x-s3",
       PutBucketConfiguration = new PutBucketConfiguration
       {
         BucketInfo = new BucketInfo { DataRedundancy = DataRedundancy.SingleAvailabilityZone, Type = BucketType.Directory },
         Location = new LocationInfo { Name = "usw2-az1", Type = LocationType.AvailabilityZone }
       }
     }).ConfigureAwait(false);
}
```

------
#### [ SDK for PHP ]

This example shows how to create a directory bucket by using the AWS SDK for PHP. 

**Example**  

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'us-east-1',
]);

$result = $s3Client->createBucket([
    'Bucket' => 'doc-example-bucket--use1-az4--x-s3',
    'CreateBucketConfiguration' => [
        'Location' => ['Name' => 'use1-az4', 'Type' => 'AvailabilityZone'],
        'Bucket' => ['DataRedundancy' => 'SingleAvailabilityZone', 'Type' => 'Directory'],
    ],
]);
```

------
#### [ SDK for Python ]

This example shows how to create a directory bucket by using the AWS SDK for Python (Boto3). 

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def create_bucket(s3_client, bucket_name, availability_zone):
    '''
    Create a directory bucket in a specified Availability Zone

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to create; for example, 'doc-example-bucket--usw2-az1--x-s3'
    :param availability_zone: String; Availability Zone ID to create the bucket in, for example, 'usw2-az1'
    :return: True if bucket is created, else False
    '''

    try:
        bucket_config = {
                'Location': {
                    'Type': 'AvailabilityZone',
                    'Name': availability_zone
                },
                'Bucket': {
                    'Type': 'Directory', 
                    'DataRedundancy': 'SingleAvailabilityZone'
                }
            }
        s3_client.create_bucket(
            Bucket = bucket_name,
            CreateBucketConfiguration = bucket_config
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    bucket_name = 'BUCKET_NAME'
    region = 'us-west-2'
    availability_zone = 'usw2-az1'
    s3_client = boto3.client('s3', region_name = region)
    create_bucket(s3_client, bucket_name, availability_zone)
```

------
#### [ SDK for Ruby ]

This example shows how to create a directory bucket by using the AWS SDK for Ruby. 

**Example**  

```
s3 = Aws::S3::Client.new(region:'us-west-2')
s3.create_bucket(
  bucket: "bucket-base-name--usw2-az1--x-s3", # the AZ ID in the name must match location.name
  create_bucket_configuration: {
    location: { name: 'usw2-az1', type: 'AvailabilityZone' },
    bucket: { data_redundancy: 'SingleAvailabilityZone', type: 'Directory' }
  }
)
```

------

## Using the AWS CLI


This example shows how to create a directory bucket by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create a directory bucket, you must provide configuration details and use the following naming convention: `bucket-base-name--zone-id--x-s3`.

```
aws s3api create-bucket \
--bucket bucket-base-name--zone-id--x-s3 \
--create-bucket-configuration 'Location={Type=AvailabilityZone,Name=usw2-az1},Bucket={DataRedundancy=SingleAvailabilityZone,Type=Directory}' \
--region us-west-2
```

For more information, see [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.

# Regional and Zonal endpoints for directory buckets in an Availability Zone


To access your objects and directory buckets stored in S3 Express One Zone, you use Regional and Zonal API endpoints, optionally through a gateway VPC endpoint. Depending on the Amazon S3 API operation that you use, either a Regional or a Zonal endpoint is required. There is no additional charge for using gateway endpoints.

Bucket-level (or control plane) API operations are available through Regional endpoints and are referred to as Regional endpoint API operations. Examples of Regional endpoint API operations are `CreateBucket` and `DeleteBucket`.

 When you create directory buckets that are stored in S3 Express One Zone, you choose the Availability Zone where your bucket will be located. You can use Zonal endpoint API operations to upload and manage the objects in your directory bucket.

Object-level (or data plane) API operations are available through Zonal endpoints and are referred to as Zonal endpoint API operations. Examples of Zonal endpoint API operations are `CreateSession` and `PutObject`.
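The split can be summarized as a routing rule. A tiny sketch covering only the operations named in this section:

```python
# Bucket-level (control-plane) operations use the Regional endpoint;
# object-level (data-plane) operations use the Zonal endpoint.
REGIONAL_OPS = {"CreateBucket", "DeleteBucket"}
ZONAL_OPS = {"CreateSession", "PutObject"}

def endpoint_type(operation: str) -> str:
    if operation in REGIONAL_OPS:
        return "Regional"
    if operation in ZONAL_OPS:
        return "Zonal"
    raise ValueError(f"operation not classified here: {operation}")

print(endpoint_type("PutObject"))  # Zonal
```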


| Region name | Region | Availability Zone IDs | Regional endpoint | Zonal endpoint | 
| --- | --- | --- | --- | --- | 
|  US East (N. Virginia)  |  `us-east-1`  |  `use1-az4` `use1-az5` `use1-az6`  |  `s3express-control.us-east-1.amazonaws.com` `s3express-control-dualstack.us-east-1.amazonaws.com`  |  `s3express-use1-az4.us-east-1.amazonaws.com` `s3express-use1-az4.dualstack.us-east-1.amazonaws.com` `s3express-use1-az5.us-east-1.amazonaws.com` `s3express-use1-az5.dualstack.us-east-1.amazonaws.com` `s3express-use1-az6.us-east-1.amazonaws.com` `s3express-use1-az6.dualstack.us-east-1.amazonaws.com`  | 
|  US East (Ohio)  |  `us-east-2`  |  `use2-az1` `use2-az2`  |  `s3express-control.us-east-2.amazonaws.com` `s3express-control-dualstack.us-east-2.amazonaws.com`  |  `s3express-use2-az1.us-east-2.amazonaws.com` `s3express-use2-az1.dualstack.us-east-2.amazonaws.com` `s3express-use2-az2.us-east-2.amazonaws.com` `s3express-use2-az2.dualstack.us-east-2.amazonaws.com`  | 
|  US West (Oregon)  |  `us-west-2`  |  `usw2-az1` `usw2-az3` `usw2-az4`  |  `s3express-control.us-west-2.amazonaws.com` `s3express-control-dualstack.us-west-2.amazonaws.com`  |  `s3express-usw2-az1.us-west-2.amazonaws.com` `s3express-usw2-az1.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az3.us-west-2.amazonaws.com` `s3express-usw2-az3.dualstack.us-west-2.amazonaws.com` `s3express-usw2-az4.us-west-2.amazonaws.com` `s3express-usw2-az4.dualstack.us-west-2.amazonaws.com`  | 
|  Asia Pacific (Mumbai)  |  `ap-south-1`  |  `aps1-az1` `aps1-az3`  |  `s3express-control.ap-south-1.amazonaws.com` `s3express-control-dualstack.ap-south-1.amazonaws.com`  |  `s3express-aps1-az1.ap-south-1.amazonaws.com` `s3express-aps1-az1.dualstack.ap-south-1.amazonaws.com` `s3express-aps1-az3.ap-south-1.amazonaws.com` `s3express-aps1-az3.dualstack.ap-south-1.amazonaws.com`  | 
|  Asia Pacific (Tokyo)  |  `ap-northeast-1`  |  `apne1-az1` `apne1-az4`  |  `s3express-control.ap-northeast-1.amazonaws.com` `s3express-control-dualstack.ap-northeast-1.amazonaws.com`  |  `s3express-apne1-az1.ap-northeast-1.amazonaws.com` `s3express-apne1-az1.dualstack.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.ap-northeast-1.amazonaws.com` `s3express-apne1-az4.dualstack.ap-northeast-1.amazonaws.com`  | 
|  Europe (Ireland)  |  `eu-west-1`  |  `euw1-az1` `euw1-az3`  |  `s3express-control.eu-west-1.amazonaws.com` `s3express-control-dualstack.eu-west-1.amazonaws.com`  |  `s3express-euw1-az1.eu-west-1.amazonaws.com` `s3express-euw1-az1.dualstack.eu-west-1.amazonaws.com` `s3express-euw1-az3.eu-west-1.amazonaws.com` `s3express-euw1-az3.dualstack.eu-west-1.amazonaws.com`  | 
|  Europe (Stockholm)  |  `eu-north-1`  |  `eun1-az1` `eun1-az2` `eun1-az3`  |  `s3express-control.eu-north-1.amazonaws.com` `s3express-control-dualstack.eu-north-1.amazonaws.com`  |  `s3express-eun1-az1.eu-north-1.amazonaws.com` `s3express-eun1-az1.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az2.eu-north-1.amazonaws.com` `s3express-eun1-az2.dualstack.eu-north-1.amazonaws.com` `s3express-eun1-az3.eu-north-1.amazonaws.com` `s3express-eun1-az3.dualstack.eu-north-1.amazonaws.com`  | 
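These endpoint names follow a predictable pattern built from the Region code and Availability Zone ID. The following sketch (hypothetical helper names; it assumes the standard dot-separated `amazonaws.com` endpoint pattern) shows how they're composed:

```python
def regional_endpoint(region: str, dualstack: bool = False) -> str:
    """Regional (control plane) endpoint for directory buckets."""
    label = "s3express-control-dualstack" if dualstack else "s3express-control"
    return f"{label}.{region}.amazonaws.com"

def zonal_endpoint(az_id: str, region: str, dualstack: bool = False) -> str:
    """Zonal (data plane) endpoint for a specific Availability Zone."""
    ds = ".dualstack" if dualstack else ""
    return f"s3express-{az_id}{ds}.{region}.amazonaws.com"
```

For example, `zonal_endpoint("use1-az5", "us-east-1")` yields the Zonal endpoint for the `use1-az5` Availability Zone in US East (N. Virginia).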

# Optimizing S3 Express One Zone performance


Amazon S3 Express One Zone is a high-performance, single Availability Zone (AZ) S3 storage class that's purpose-built to deliver consistent, single-digit millisecond data access for your most latency-sensitive applications. S3 Express One Zone is the first S3 storage class that gives you the option to co-locate high-performance object storage and AWS compute resources, such as Amazon Elastic Compute Cloud, Amazon Elastic Kubernetes Service, and Amazon Elastic Container Service, within a single Availability Zone. Co-locating your storage and compute resources optimizes compute performance and costs and provides increased data-processing speed. 

S3 Express One Zone provides similar performance elasticity to other S3 storage classes, but with consistent single-digit millisecond first-byte read and write request latencies—up to 10x faster than S3 Standard. S3 Express One Zone is designed from the ground up to support burst throughput up to very high aggregate levels. The S3 Express One Zone storage class uses a custom-built architecture to optimize for performance and deliver consistently low request latency by storing data on high-performance hardware. The object protocol for S3 Express One Zone has been enhanced to streamline authentication and metadata overhead. 

To further reduce latency and support up to 2 million reads and up to 200,000 writes per second, S3 Express One Zone stores data in an Amazon S3 directory bucket. By default, each directory bucket supports up to 200,000 reads and up to 100,000 writes per second. If your workload requires higher than the default TPS limits, you can request an increase through [AWS Support](https://support.console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase).
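If you manage request rates client-side rather than through a limit increase, a simple throttle can keep a bursty producer under a TPS budget. The following token bucket is an illustrative sketch, not an AWS API:

```python
import time

class TokenBucket:
    """Minimal client-side throttle: allow `rate` requests per second, bursts up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        """Return True if a request may be sent now; refill tokens based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A caller checks `try_acquire()` before each request and backs off briefly when it returns `False`.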

The combination of high-performance, purpose-built hardware and software that delivers single-digit millisecond data access speed and directory buckets that scale for large numbers of transactions per second makes S3 Express One Zone the best Amazon S3 storage class for request-intensive operations or performance-critical applications. 

The following topics describe best practice guidelines and design patterns for optimizing performance with applications that use the S3 Express One Zone storage class. 

**Topics**
+ [Best practices to optimize S3 Express One Zone performance](s3-express-optimizing-performance-design-patterns.md)

# Best practices to optimize S3 Express One Zone performance

When building applications that upload and retrieve objects from Amazon S3 Express One Zone, follow our best practice guidelines to optimize performance. To use the S3 Express One Zone storage class, you must create an S3 directory bucket. The S3 Express One Zone storage class isn't supported for use with S3 general purpose buckets.

For performance guidelines for all other Amazon S3 storage classes and S3 general purpose buckets, see [Best practices design patterns: optimizing Amazon S3 performance](optimizing-performance.md).

For optimal performance and scalability with the S3 Express One Zone storage class and directory buckets in high-scale workloads, it's important to understand how directory buckets work differently from general purpose buckets. The following sections explain how directory buckets work and provide best practices to align your applications with them.

## How directory buckets work


The Amazon S3 Express One Zone storage class can support workloads with up to 2,000,000 GET and up to 200,000 PUT transactions per second (TPS) per directory bucket. With S3 Express One Zone, data is stored in S3 directory buckets in Availability Zones. Objects in directory buckets are accessible within a hierarchical namespace, similar to a file system, in contrast to S3 general purpose buckets, which have a flat namespace. Unlike general purpose buckets, directory buckets organize keys hierarchically into directories instead of prefixes. A prefix is a string of characters at the beginning of the object key name. You can use prefixes to organize your data and manage a flat object storage architecture in general purpose buckets. For more information, see [Organizing objects using prefixes](using-prefixes.md).

In directory buckets, objects are organized in a hierarchical namespace using the forward slash (`/`) as the only supported delimiter. When you upload an object with a key like `dir1/dir2/file1.txt`, Amazon S3 automatically creates and manages the directories `dir1/` and `dir2/`. Directories are created during `PutObject` or `CreateMultipartUpload` operations and automatically removed when they become empty after `DeleteObject` or `AbortMultipartUpload` operations. There is no upper limit to the number of objects and subdirectories in a directory.
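This implicit directory lifecycle can be modeled in a few lines. The following toy model (not the S3 implementation) tracks which directories a `PutObject` creates and a `DeleteObject` removes:

```python
class DirectoryBucketModel:
    """Toy model of implicit directory management in a directory bucket."""

    def __init__(self):
        self.objects = set()
        self.dirs = set()

    def put_object(self, key):
        """Store an object; return the directories implicitly created, shallowest first."""
        created = []
        parts = key.split("/")[:-1]
        for i in range(1, len(parts) + 1):
            d = "/".join(parts[:i]) + "/"
            if d not in self.dirs:
                self.dirs.add(d)
                created.append(d)
        self.objects.add(key)
        return created

    def delete_object(self, key):
        """Delete an object; return the now-empty directories removed, deepest first."""
        self.objects.discard(key)
        removed = []
        parts = key.split("/")[:-1]
        for i in range(len(parts), 0, -1):
            d = "/".join(parts[:i]) + "/"
            if d in self.dirs and self._is_empty(d):
                self.dirs.discard(d)
                removed.append(d)
        return removed

    def _is_empty(self, d):
        has_objects = any(o.startswith(d) for o in self.objects)
        has_subdirs = any(x != d and x.startswith(d) for x in self.dirs)
        return not (has_objects or has_subdirs)
```

For example, putting `dir1/dir2/file1.txt` into an empty model creates `dir1/` and `dir2/`, and deleting it removes both again once they're empty.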

The directories that are created when objects are uploaded to directory buckets can scale instantaneously to reduce the chance of HTTP `503 (Slow Down)` errors. This automatic scaling allows your applications to parallelize read and write requests within and across directories as needed. For S3 Express One Zone, individual directories are designed to support the maximum request rate of a directory bucket. There is no need to randomize key prefixes to achieve optimal performance because the system automatically distributes objects for even load distribution; as a result, keys are not stored lexicographically in directory buckets. This is in contrast to S3 general purpose buckets, where keys that are lexicographically closer are more likely to be co-located on the same server. 

For more information about examples of directory bucket operations and directory interactions, see [Directory bucket operation and directory interaction examples](#s3-express-directory-bucket-examples).

## Best practices


Follow these best practices to optimize your directory bucket performance and help your workloads scale over time.

### Use directories that contain many entries (objects or subdirectories)


Directory buckets deliver high performance by default for all workloads. For even greater optimization of certain operations, consolidating more entries (objects or subdirectories) into fewer directories leads to lower latency and a higher request rate: 
+ Mutating API operations, such as `PutObject`, `DeleteObject`, `CreateMultipartUpload`, and `AbortMultipartUpload`, achieve optimal performance with fewer, denser directories that contain thousands of entries, rather than with a large number of smaller directories. 
+ `ListObjectsV2` operations perform better when fewer directories need to be traversed to populate a page of results.

#### Don't use entropy in prefixes


In Amazon S3 operations, entropy refers to the randomness in prefix naming that helps distribute workloads evenly across storage partitions. However, because directory buckets manage load distribution internally, using entropy in prefixes isn't recommended for the best performance. For directory buckets, entropy can slow down requests because it prevents the reuse of directories that have already been created.

A key pattern such as `$HASH/directory/object` can create many intermediate directories. In the following example, each `job-1` is a different directory because its parent differs. The directories will be sparse, and mutation and list requests will be slower. In this example, there are 12 intermediate directories that each have a single entry.

```
s3://my-bucket/0cc175b9c0f1b6a831c399e269772661/job-1/file1
  
s3://my-bucket/92eb5ffee6ae2fec3ad71c777531578f/job-1/file2
  
s3://my-bucket/4a8a08f09d37b73795649038408b5f33/job-1/file3
  
s3://my-bucket/8277e0910d750195b448797616e091ad/job-1/file4
  
s3://my-bucket/e1671797c52e15f763380b45e841ec32/job-1/file5
  
s3://my-bucket/8fa14cdd754f91cc6554c9e71929cce7/job-1/file6
```

Instead, for better performance, remove the `$HASH` component so that `job-1` becomes a single directory, improving its density. In the following example, the single intermediate directory holds six entries, which leads to better performance than the previous example.

```
s3://my-bucket/job-1/file1
  
s3://my-bucket/job-1/file2
  
s3://my-bucket/job-1/file3
  
s3://my-bucket/job-1/file4
  
s3://my-bucket/job-1/file5
  
s3://my-bucket/job-1/file6
```

This performance advantage occurs because when an object key is initially created and its key name includes a directory, the directory is automatically created for the object. Subsequent object uploads to that same directory don't require the directory to be created, which reduces latency on object uploads to existing directories.
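To make the density difference concrete, the following sketch counts the distinct intermediate directories that each of the two layouts above implicitly creates — 12 for the hashed keys, 1 without the `$HASH` component:

```python
def intermediate_dirs(keys):
    """Return the set of directories implicitly created for the given object keys."""
    dirs = set()
    for key in keys:
        parts = key.split("/")[:-1]
        for i in range(1, len(parts) + 1):
            dirs.add("/".join(parts[:i]) + "/")
    return dirs

hashes = [
    "0cc175b9c0f1b6a831c399e269772661", "92eb5ffee6ae2fec3ad71c777531578f",
    "4a8a08f09d37b73795649038408b5f33", "8277e0910d750195b448797616e091ad",
    "e1671797c52e15f763380b45e841ec32", "8fa14cdd754f91cc6554c9e71929cce7",
]
hashed_keys = [f"{h}/job-1/file{i}" for i, h in enumerate(hashes, start=1)]
flat_keys = [f"job-1/file{i}" for i in range(1, 7)]
```

Each hashed key contributes two sparse directories (the hash directory and its `job-1` child), while the flat layout reuses a single dense `job-1/` directory.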

#### Use a separator other than the delimiter `/` to separate parts of your key if you don't need the ability to logically group objects during `ListObjectsV2` calls


Because the `/` delimiter is treated specially in directory buckets, use it with intention. Although directory buckets don't lexicographically order objects, objects within a directory are still grouped together in `ListObjectsV2` output. If you don't need this functionality, you can replace `/` with another character as a separator so that intermediate directories aren't created.

For example, assume the following keys use a `YYYY/MM/DD/HH/` prefix pattern:

```
s3://my-bucket/2024/04/00/01/file1
  
s3://my-bucket/2024/04/00/02/file2
  
s3://my-bucket/2024/04/00/03/file3
  
s3://my-bucket/2024/04/01/01/file4
  
s3://my-bucket/2024/04/01/02/file5
  
s3://my-bucket/2024/04/01/03/file6
```

If you don't need to group objects by hour or day in `ListObjectsV2` results, but you do need to group them by month, the following `YYYY/MM/DD-HH-` key pattern leads to significantly fewer directories and better performance for the `ListObjectsV2` operation.

```
s3://my-bucket/2024/04/00-01-file1
  
s3://my-bucket/2024/04/00-02-file2
  
s3://my-bucket/2024/04/00-03-file3
  
s3://my-bucket/2024/04/01-01-file4
  
s3://my-bucket/2024/04/01-02-file5
  
s3://my-bucket/2024/04/01-03-file6
```
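The rewrite from the first pattern to the second can be expressed as a small, hypothetical helper; only the year and month remain real directories after the change:

```python
def flatten_hour(key: str) -> str:
    """Rewrite a YYYY/MM/DD/HH/name key to the denser YYYY/MM/DD-HH-name form."""
    yyyy, mm, dd, hh, name = key.split("/")
    return f"{yyyy}/{mm}/{dd}-{hh}-{name}"
```

Applying `flatten_hour` to each key in the first listing produces the keys in the second.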

#### Use delimited list operations where possible


A `ListObjectsV2` request without a `delimiter` performs a depth-first recursive traversal of all directories. A `ListObjectsV2` request with a `delimiter` retrieves only the entries in the directory specified by the `prefix` parameter, which reduces request latency and increases aggregate keys per second. For directory buckets, use delimited list operations where possible, because they visit directories fewer times, which leads to more keys per second and lower request latency.

For example, for the following directories and objects in your directory bucket:

```
s3://my-bucket/2024/04/12-01-file1
  
s3://my-bucket/2024/04/12-01-file2
  
...
  
s3://my-bucket/2024/05/12-01-file1
  
s3://my-bucket/2024/05/12-01-file2
  
...
  
s3://my-bucket/2024/06/12-01-file1
  
s3://my-bucket/2024/06/12-01-file2
  
...
  
s3://my-bucket/2024/07/12-01-file1
  
s3://my-bucket/2024/07/12-01-file2
  
...
```

For better `ListObjectsV2` performance, use a delimited list to list your subdirectories and objects, if your application's logic allows for it. For example, run the following command to perform a delimited list operation:

```
aws s3api list-objects-v2 --bucket my-bucket --prefix '2024/' --delimiter '/'
```

The output is the list of subdirectories.

```
{
    "CommonPrefixes": [
        {
            "Prefix": "2024/04/"
        },
        {
            "Prefix": "2024/05/"
        },
        {
            "Prefix": "2024/06/"
        },
        {
            "Prefix": "2024/07/"
        }
    ]
}
```

To list the contents of each subdirectory with better performance, you can run a command like the following example:

Command:

```
aws s3api list-objects-v2 --bucket my-bucket --prefix '2024/04/' --delimiter '/'
```

Output:

```
{
    "Contents": [
        {
            "Key": "2024/04/12-01-file1"
        },
        {
            "Key": "2024/04/12-01-file2"
        }
    ]
}
```
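The grouping behavior in these responses can be modeled with a small, illustrative function (not the service implementation) that splits matching keys into `Contents` and `CommonPrefixes`:

```python
def simulate_list_v2(keys, prefix="", delimiter=None):
    """Group keys the way ListObjectsV2 does: roll up anything past the first delimiter."""
    contents, common = [], []
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter and delimiter in rest:
            rolled = prefix + rest.split(delimiter, 1)[0] + delimiter
            if rolled not in common:
                common.append(rolled)  # one CommonPrefixes entry per subdirectory
        else:
            contents.append(key)       # no delimiter past the prefix: a direct entry
    return {"Contents": contents, "CommonPrefixes": common}
```

With `prefix="2024/"` and `delimiter="/"`, every key rolls up into a month-level `CommonPrefixes` entry; with `prefix="2024/04/"`, the keys in that directory appear directly in `Contents`.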

### Co-locate S3 Express One Zone storage with your compute resources


With S3 Express One Zone, each directory bucket is located in a single Availability Zone that you select when you create the bucket. To get started, create a new directory bucket in an Availability Zone local to your compute workloads or resources, and then immediately begin performing very low-latency reads and writes. Directory buckets are a type of S3 bucket in which you can choose the Availability Zone within an AWS Region to reduce latency between compute and storage.

If you access directory buckets across Availability Zones, you may experience slightly increased latency. To optimize performance, we recommend that you access a directory bucket from Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, and Amazon Elastic Compute Cloud instances that are located in the same Availability Zone when possible.

### Use concurrent connections to achieve high throughput with objects over 1 MB


You can achieve the best performance by issuing multiple concurrent requests to directory buckets to spread your requests over separate connections to maximize the accessible bandwidth. Like general purpose buckets, S3 Express One Zone doesn't have any limits for the number of connections made to your directory bucket. Individual directories can scale performance horizontally and automatically when large numbers of concurrent writes to the same directory are happening.

Individual TCP connections to directory buckets have a fixed upper bound on the number of bytes that can be uploaded or downloaded per second. When objects get larger, request times become dominated by byte streaming rather than transaction processing. By using multiple connections to parallelize the upload or download of larger objects, you can reduce end-to-end latency. If you use the AWS SDK for Java 2.x, consider using the [S3 Transfer Manager](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html), which takes advantage of performance improvements such as the [multipart upload API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html) and byte-range fetches to access data in parallel.
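For example, splitting a download into fixed-size parts produces the `Range` header values that concurrent `GetObject` requests would use (the sizes here are illustrative, not recommendations):

```python
def byte_ranges(size: int, part_size: int):
    """Yield HTTP Range header values that together cover an object of `size` bytes."""
    for start in range(0, size, part_size):
        end = min(start + part_size, size) - 1  # Range headers use inclusive end offsets
        yield f"bytes={start}-{end}"
```

Each range can then be fetched on its own connection and the parts reassembled in order.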

### Use Gateway VPC endpoints


Gateway endpoints provide a direct connection from your VPC to directory buckets, without requiring an internet gateway or a NAT device for your VPC. To reduce the amount of time your packets spend on the network, you should configure your VPC with a gateway VPC endpoint for directory buckets. For more information, see [Networking for directory buckets](s3-express-networking.md).

### Use session authentication and reuse session tokens while they're valid


Directory buckets provide a session token authentication mechanism to reduce latency on performance-sensitive API operations. You can make a single call to `CreateSession` to get a session token that is valid for all requests during the following 5 minutes. To get the lowest latency in your API calls, acquire a session token and reuse it for the entire lifetime of that token before refreshing it.

If you use AWS SDKs, SDKs handle the session token refreshes automatically to avoid service interruptions when a session expires. We recommend that you use the AWS SDKs to initiate and manage requests to the `CreateSession` API operation.

For more information about `CreateSession`, see [Authorizing Zonal endpoint API operations with `CreateSession`](s3-express-create-session.md).
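If you're not using an SDK, the token-reuse pattern can be sketched as follows; `create_session` is a stand-in for however you invoke the `CreateSession` API, and the 5-minute lifetime matches the behavior described above:

```python
import time

class SessionTokenCache:
    """Reuse a session token for its lifetime; refresh shortly before it expires."""

    def __init__(self, create_session, lifetime_seconds=300, refresh_margin_seconds=15):
        self._create_session = create_session  # callable returning a fresh token
        self._lifetime = lifetime_seconds
        self._margin = refresh_margin_seconds
        self._token = None
        self._expires_at = 0.0

    def token(self) -> str:
        now = time.monotonic()
        if self._token is None or now >= self._expires_at - self._margin:
            self._token = self._create_session()  # only refresh near expiry
            self._expires_at = now + self._lifetime
        return self._token
```

Every request within the token's lifetime reuses the cached value instead of calling `CreateSession` again.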

### Use a CRT-based client


The AWS Common Runtime (CRT) is a set of modular, performant, and efficient libraries written in C and meant to act as the base of the AWS SDKs. The CRT provides improved throughput, enhanced connection management, and faster startup times. The CRT is available through all the AWS SDKs except Go.

For more information on how to configure the CRT for the SDK you use, see [AWS Common Runtime (CRT) libraries](https://docs.aws.amazon.com/sdkref/latest/guide/common-runtime.html), [Accelerate Amazon S3 throughput with the AWS Common Runtime](https://aws.amazon.com/blogs//storage/improving-amazon-s3-throughput-for-the-aws-cli-and-boto3-with-the-aws-common-runtime/), [Introducing CRT-based S3 client and the S3 Transfer Manager in the AWS SDK for Java 2.x](https://aws.amazon.com/blogs//developer/introducing-crt-based-s3-client-and-the-s3-transfer-manager-in-the-aws-sdk-for-java-2-x/), [Using S3CrtClient for Amazon S3 operations](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/examples-s3-crt.html), and [Configure AWS CRT-based HTTP clients](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/http-configuration-crt.html).

### Use the latest version of the AWS SDKs


The AWS SDKs provide built-in support for many of the recommended guidelines for optimizing Amazon S3 performance. The SDKs offer a simpler API for taking advantage of Amazon S3 from within an application and are regularly updated to follow the latest best practices. For example, the SDKs automatically retry requests after HTTP `503` errors and handle slow connection responses.

If you use the AWS SDK for Java 2.x, consider using the [S3 Transfer Manager](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/transfer-manager.html), which automatically scales connections horizontally to achieve thousands of requests per second, using byte-range requests when appropriate. Byte-range requests can improve performance because you can use concurrent connections to S3 to fetch different byte ranges from within the same object. This helps you achieve higher aggregate throughput than a single whole-object request. It's therefore important to use the latest version of the AWS SDKs to obtain the latest performance optimization features.

## Performance troubleshooting


### Are you setting retry requests for latency-sensitive applications?


S3 Express One Zone is purpose-built to deliver consistent high performance without additional tuning. However, setting aggressive timeout values and retries can further help drive consistent latency and performance. The AWS SDKs have configurable timeout and retry values that you can tune to the tolerances of your specific application.
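As an illustration of such tuning, a retry wrapper with capped, jittered exponential backoff (the values here are arbitrary examples, not service recommendations) might look like this:

```python
import random
import time

def call_with_retries(fn, max_attempts=4, base_delay=0.05, max_delay=0.5):
    """Call `fn`, retrying on exceptions with capped, jittered exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # budget exhausted; surface the last error
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.random())  # full jitter spreads out retries
```

In practice, prefer the SDKs' built-in retry configuration; this sketch only shows the shape of the behavior you would tune.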

### Are you using AWS Common Runtime (CRT) libraries and optimal Amazon EC2 instance types?


Applications that perform a large number of read and write operations likely need more memory or computing capacity than applications that don't. When launching your Amazon Elastic Compute Cloud (Amazon EC2) instances for your performance-demanding workload, choose instance types that have the amount of these resources that your application needs. S3 Express One Zone high-performance storage is ideally paired with larger and newer instance types with larger amounts of system memory and more powerful CPUs and GPUs that can take advantage of higher-performance storage. We also recommend using the latest versions of the CRT-enabled AWS SDKs, which can better accelerate read and write requests in parallel.

### Are you using AWS SDKs for session-based authentication?


With Amazon S3, you can also optimize performance when you're using HTTP REST API requests by following the same best practices that are part of the AWS SDKs. However, with the session-based authorization and authentication mechanism that's used by S3 Express One Zone, we strongly recommend that you use the AWS SDKs to manage `CreateSession` and its managed session token. The AWS SDKs automatically create and refresh tokens on your behalf by using the `CreateSession` API operation. Using `CreateSession` saves on per-request round-trip latency to AWS Identity and Access Management (IAM) to authorize each request.

## Directory bucket operation and directory interaction examples


The following three examples show how directory buckets work.

### Example 1: How S3 `PutObject` requests to a directory bucket interact with directories


1. When the operation `PUT(<bucket>, "documents/reports/quarterly.txt")` is executed in an empty bucket, the directory `documents/` within the root of the bucket is created, the directory `reports/` within `documents/` is created, and the object `quarterly.txt` within `reports/` is created. For this operation, two directories were created in addition to the object.  
![\[Diagram showing directory structure after PUT operation for documents/reports/quarterly.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-bar-baz.png)

1. Then, when another operation `PUT(<bucket>, "documents/logs/application.txt")` is executed, the directory `documents/` already exists, the directory `logs/` within `documents/` doesn't exist and is created, and the object `application.txt` within `logs/` is created. For this operation, only one directory was created in addition to the object.  
![\[Diagram showing directory structure after PUT operation for documents/logs/application.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-baz-quux.png)

1. Lastly, when a `PUT(<bucket>, "documents/readme.txt")` operation is executed, the directory `documents/` within the root already exists and the object `readme.txt` is created. For this operation, no directories are created.  
![\[Diagram showing directory structure after PUT operation for documents/readme.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-foo-bar.png)

### Example 2: How S3 `ListObjectsV2` requests to a directory bucket interact with directories


For the S3 `ListObjectsV2` requests without specifying a delimiter, a bucket is traversed in a depth-first manner. The outputs are returned in a consistent order. However, while this order remains the same between requests, the order is not lexicographic. For the bucket and directories created in the previous example:

1. When a `LIST(<bucket>)` is executed, the directory `documents/` is entered and the traversing begins.

1. The subdirectory `logs/` is entered and the traversing begins.

1. The object `application.txt` is found within `logs/`.

1. No more entries exist within `logs/`. The List operation exits from `logs/` and enters `documents/` again.

1. The `documents/` directory continues being traversed and the object `readme.txt` is found.

1. The `documents/` directory continues being traversed and the subdirectory `reports/` is entered and the traversing begins.

1. The object `quarterly.txt` is found within `reports/`.

1. No more entries exist within `reports/`. The List exits from `reports/` and enters `documents/` again.

1. No more entries exist within `documents/` and the List returns.

In this example, `logs/` is ordered before `readme.txt` and `readme.txt` is ordered before `reports/`.
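The nine steps above amount to a depth-first walk. The following toy traversal (a nested dict stands in for directories; it does not reproduce the service's non-lexicographic internal ordering) yields the same sequence of keys:

```python
def depth_first_list(tree, prefix=""):
    """Walk a directory tree depth-first; a dict value is a directory, None is an object."""
    keys = []
    for name, child in tree.items():
        if child is None:
            keys.append(prefix + name)
        else:
            keys.extend(depth_first_list(child, prefix + name + "/"))
    return keys

# The bucket state built in Example 1.
bucket = {
    "documents": {
        "logs": {"application.txt": None},
        "readme.txt": None,
        "reports": {"quarterly.txt": None},
    }
}
```

Traversing `bucket` returns `application.txt` first, then `readme.txt`, then `quarterly.txt`, matching the walkthrough.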

### Example 3: How S3 `DeleteObject` requests to a directory bucket interact with directories


![\[Diagram showing initial directory structure before DELETE operations\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete-before.png)


1. Starting from the same bucket and directory structure created in Example 1, when the operation `DELETE(<bucket>, "documents/reports/quarterly.txt")` is executed, the object `quarterly.txt` is deleted, leaving the directory `reports/` empty and causing it to be deleted immediately. The `documents/` directory is not empty because it has both the directory `logs/` and the object `readme.txt` within it, so it's not deleted. For this operation, only one object and one directory were deleted.  
![\[Diagram showing directory structure after DELETE operation for documents/reports/quarterly.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete1.png)

1. When the operation `DELETE(<bucket>, "documents/readme.txt")` is executed, the object `readme.txt` is deleted. `documents/` is still not empty because it contains the directory `logs/`, so it's not deleted. For this operation, no directories are deleted and only the object is deleted.  
![\[Diagram showing directory structure after DELETE operation for documents/readme.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete2.png)

1. Lastly, when the operation `DELETE(<bucket>, "documents/logs/application.txt")` is executed, `application.txt` is deleted, leaving `logs/` empty and causing it to be deleted immediately. This then leaves `documents/` empty, causing it to also be deleted immediately. For this operation, two directories and one object are deleted. The bucket is now empty.  
![\[Diagram showing directory structure after DELETE operation for documents/logs/application.txt\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/directory-examples-delete3.png)

# Data residency workloads


AWS Dedicated Local Zones (Dedicated Local Zones) are a type of AWS infrastructure that is fully managed by AWS, built for exclusive use by you or your community, and placed in a location or data center that you specify to help you comply with regulatory requirements. Dedicated Local Zones are a type of AWS Local Zones (Local Zones) offering. For more information, see [AWS Dedicated Local Zones](https://aws.amazon.com/dedicatedlocalzones/).

In Dedicated Local Zones, you can create S3 directory buckets to store data in a specific data perimeter, which helps support data residency and isolation use cases. Directory buckets in Dedicated Local Zones can support the S3 Express One Zone and S3 One Zone-Infrequent Access (S3 One Zone-IA; Z-IA) storage classes. Directory buckets are not currently available in other [AWS Local Zones locations](https://aws.amazon.com/about-aws/global-infrastructure/localzones/locations/). 

You can use the AWS Management Console, REST API, AWS Command Line Interface (AWS CLI), and AWS SDKs in Dedicated Local Zones. 



For more information about working with the directory buckets in Local Zones, see the following topics:

**Topics**
+ [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md)
+ [Enable accounts for Local Zones](opt-in-directory-bucket-lz.md)
+ [Private connectivity from your VPC](connectivity-lz-directory-buckets.md)
+ [Creating a directory bucket in a Local Zone](create-directory-bucket-LZ.md)
+ [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md)

# Concepts for directory buckets in Local Zones


Before creating a directory bucket in a Local Zone, you must have the ID of the Local Zone where you want to create the bucket. You can find all Local Zone information by using the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation. This operation lists information about Local Zones, including their Local Zone IDs, parent Region names, network border groups, and opt-in status. After you have your Local Zone ID and your account is opted in, you can create a directory bucket in the Local Zone. A directory bucket name consists of a base name that you provide and a suffix that contains the Zone ID of your bucket location, followed by `--x-s3`. 
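For instance, the full name of a directory bucket can be composed from the base name and Zone ID as follows (the base name and Zone ID in the example are hypothetical):

```python
def directory_bucket_name(base_name: str, zone_id: str) -> str:
    """Compose a directory bucket name: base name, Zone ID, then the --x-s3 suffix."""
    return f"{base_name}--{zone_id}--x-s3"
```

So a base name of `my-data` in the `usw2-lax1-az1` Local Zone would yield the bucket name `my-data--usw2-lax1-az1--x-s3`.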

A Local Zone is connected to its **parent Region** over Amazon's redundant, very high-bandwidth private network. This gives applications running in the Local Zone fast, secure, and seamless access to the rest of the AWS services in the parent Region. The **parent Zone ID** is the ID of the zone that handles the Local Zone control plane operations. A **network border group** is a unique group from which AWS advertises public IP addresses. For more information about Local Zones, the parent Region, and the parent Zone ID, see [AWS Local Zones concepts](https://docs.aws.amazon.com/local-zones/latest/ug/concepts-local-zones.html) in the *AWS Local Zones User Guide*.

All directory buckets use the `s3express` namespace, which is separate from the `s3` namespace for general purpose buckets. For directory buckets, requests are routed to either a **Regional endpoint** or a **Zonal endpoint**. The routing is handled automatically for you if you use the AWS Management Console, AWS CLI, or AWS SDKs. 

Most bucket-level API operations (such as `CreateBucket` and `DeleteBucket`) are routed to Regional endpoints, and are referred to as Regional endpoint API operations. Regional endpoints are in the format of `s3express-control.ParentRegionCode.amazonaws.com`. All object-level API operations (such as `PutObject`) and two bucket-level API operations (`CreateSession` and `HeadBucket`) are routed to Zonal endpoints, and are referred to as Zonal endpoint API operations. Zonal endpoints are in the format of `s3express-LocalZoneID.ParentRegionCode.amazonaws.com`. For a complete list of API operations by endpoint type, see [Directory bucket API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-differences.html#s3-express-differences-api-operations).

To access directory buckets in Local Zones from your virtual private cloud (VPC), you can use gateway VPC endpoints. There is no additional charge for using gateway endpoints. To configure gateway VPC endpoints to access directory buckets and objects in Local Zones, see [Private connectivity from your VPC](connectivity-lz-directory-buckets.md). 

# Enable accounts for Local Zones


This topic describes how accounts are enabled for AWS Dedicated Local Zones.

For all the services in AWS Dedicated Local Zones (Dedicated Local Zones), including Amazon S3, your administrator must enable your AWS account before you can create or access any resources in the Dedicated Local Zone. You can use the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation to confirm that your account has access to a Local Zone.
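The opt-in status in a `DescribeAvailabilityZones` response can be checked with a few lines of Python. This sketch filters a hard-coded, abbreviated sample response rather than calling AWS, and the zone values are illustrative only:

```python
# Abbreviated sample of a DescribeAvailabilityZones response (illustrative values).
response = {
    "AvailabilityZones": [
        {"ZoneId": "usw2-az1", "ZoneType": "availability-zone", "OptInStatus": "opt-in-not-required"},
        {"ZoneId": "usw2-lax1-az1", "ZoneType": "local-zone", "OptInStatus": "opted-in"},
        {"ZoneId": "usw2-den1-az1", "ZoneType": "local-zone", "OptInStatus": "not-opted-in"},
    ]
}

# Keep only the Local Zones that this account is opted in to.
usable_local_zones = [
    zone["ZoneId"]
    for zone in response["AvailabilityZones"]
    if zone["ZoneType"] == "local-zone" and zone["OptInStatus"] == "opted-in"
]
print(usable_local_zones)  # ['usw2-lax1-az1']
```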

To further protect your data in Amazon S3, by default, you only have access to the S3 resources that you create. Buckets in Local Zones have all S3 Block Public Access settings enabled by default and S3 Object Ownership is set to bucket owner enforced. These settings can't be modified. Optionally, to restrict access to only within the Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` in your IAM policies. For more information, see [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md).

# Private connectivity from your VPC


To reduce the time that your packets spend on the network, configure your virtual private cloud (VPC) with a gateway endpoint to access directory buckets in Local Zones. Gateway endpoints keep traffic within the AWS network, at no additional cost.

**To configure a gateway VPC endpoint**

1. Open the [Amazon VPC Console](https://console.aws.amazon.com/vpc/). 

1. In the navigation pane, choose **Endpoints**.

1. Choose **Create endpoint**.

1. Enter a name for your endpoint.

1. For **Service category**, choose **AWS services**. 

1. For **Services**, add the filter **Type=Gateway** and then choose the option button next to **com.amazonaws.*region*.s3express**. 

1. For **VPC**, choose the VPC in which to create the endpoint.

1. For **Route tables**, choose the route table in your VPC to be used by the endpoint. After the endpoint is created, a route record will be added to the route table that you select in this step.

1. For **Policy**, choose **Full access** to allow all operations by all principals on all resources over the VPC endpoint. Otherwise, choose **Custom** to attach a VPC endpoint policy that controls the principals' permissions to perform actions on resources over the VPC endpoint. 

1. For **IP address type**, choose from the following options:
   +  **IPv4** – Assign IPv4 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have IPv4 address ranges and the service accepts IPv4 requests. 
   +  **IPv6** – Assign IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets are IPv6 only subnets and the service accepts IPv6 requests.
   +  **Dualstack** – Assign both IPv4 and IPv6 addresses to the endpoint network interfaces. This option is supported only if all selected subnets have both IPv4 and IPv6 address ranges and the service accepts both IPv4 and IPv6 requests.

1. (Optional) To add a tag, choose **Add new tag**, and enter the tag key and the tag value.

1. Choose **Create endpoint**.

To learn more about gateway VPC endpoints, see [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the *AWS PrivateLink Guide*. For data residency use cases, we recommend enabling access to your buckets only from your VPC by using gateway VPC endpoints. When access is restricted to a VPC or a VPC endpoint, you can access the objects through the AWS Management Console, the REST API, the AWS CLI, and the AWS SDKs.

**Note**  
To restrict access to a VPC or a VPC endpoint using the AWS Management Console, you must use the AWS Management Console Private Access. For more information, see [AWS Management Console Private Access](https://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/console-private-access.html) in the *AWS Management Console guide*.

# Creating a directory bucket in a Local Zone


In Dedicated Local Zones, you can create directory buckets to store and retrieve objects in a specific data perimeter, which helps you meet data residency and data isolation requirements. S3 directory buckets are the only supported bucket type in Local Zones, and they use the bucket location type `LocalZone`. A directory bucket name consists of a base name that you provide and a suffix that contains the Zone ID of your bucket location, followed by `--x-s3`. You can obtain a list of Local Zone IDs by using the [DescribeAvailabilityZones](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeAvailabilityZones.html) API operation. For more information, see [Directory bucket naming rules](directory-bucket-naming-rules.md).

**Note**  
For all the services in AWS Dedicated Local Zones (Dedicated Local Zones), including S3, your administrator must enable your AWS account before you can create or access any resource in the Dedicated Local Zone. For more information, see [Enable accounts for Local Zones](opt-in-directory-bucket-lz.md).
To help meet data residency requirements, we recommend enabling access to your buckets only through gateway VPC endpoints. For more information, see [Private connectivity from your VPC](connectivity-lz-directory-buckets.md).
To restrict access to only within the Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` in your IAM policies. For more information, see [Authenticating and authorizing for directory buckets in Local Zones](iam-directory-bucket-LZ.md).

The following sections describe how to create a directory bucket in a single Local Zone by using the AWS Management Console, AWS CLI, and AWS SDKs. 

## Using the S3 console


1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the parent Region of a Local Zone in which you want to create a directory bucket. 
**Note**  
For more information about the parent Regions, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md).

1. In the left navigation pane, choose **Buckets**.

1. Choose **Create bucket**.

   The **Create bucket** page opens.

1. Under **General configuration**, view the AWS Region where your bucket will be created. 

1.  Under **Bucket type**, choose **Directory**.
**Note**  
If you've chosen a Region that doesn't support directory buckets, the bucket type defaults to a general purpose bucket. To create a directory bucket, you must choose a supported Region. For a list of Regions that support directory buckets, see [Regional and Zonal endpoints for directory buckets](s3-express-Regions-and-Zones.md).
After you create the bucket, you can't change the bucket type.

1. Under **Bucket location**, choose a Local Zone that you want to use. 
**Note**  
The Local Zone can't be changed after the bucket is created. 

1. Under **Bucket location**, select the checkbox to acknowledge that in the event of a Local Zone outage, your data might be unavailable or lost. 
**Important**  
Although directory buckets are stored across multiple devices within a single Local Zone, directory buckets don't store data redundantly across Local Zones.

1. For **Bucket name**, enter a name for your directory bucket.

   For more information about the naming rules for directory buckets, see [Directory bucket naming rules](directory-bucket-naming-rules.md). A suffix is automatically added to the base name that you provide when you create a directory bucket using the console. This suffix includes the Zone ID of the Local Zone that you chose.

   After you create the bucket, you can't change its name. 
**Important**  
Don't include sensitive information, such as account numbers, in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

1. Under **Object Ownership**, the **Bucket owner enforced** setting is automatically enabled, and all access control lists (ACLs) are disabled. For directory buckets, ACLs are disabled and can't be enabled.

   With the **Bucket owner enforced** setting enabled, the bucket owner automatically owns and has full control over every object in the bucket. ACLs no longer affect access permissions to data in the S3 bucket. The bucket uses policies exclusively to define access control. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

1. Under **Block Public Access settings for this bucket**, all Block Public Access settings for your directory bucket are automatically enabled. These settings can't be modified for directory buckets. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).

1. Under **Default encryption**, directory buckets use **Server-side encryption with Amazon S3 managed keys (SSE-S3)** to encrypt data by default. You also have the option to encrypt data in directory buckets with **Server-side encryption with AWS Key Management Service keys (SSE-KMS)**.

1. Choose **Create bucket**.

   After creating the bucket, you can add files and folders to the bucket. For more information, see [Working with objects in a directory bucket](directory-buckets-objects.md).

## Using the AWS CLI


This example shows how to create a directory bucket in a Local Zone by using the AWS CLI. To use the command, replace the *user input placeholders* with your own information.

When you create a directory bucket, you must provide configuration details and use the following naming convention: `bucket-base-name--zone-id--x-s3`.

```
aws s3api create-bucket \
    --bucket bucket-base-name--zone-id--x-s3 \
    --create-bucket-configuration 'Location={Type=LocalZone,Name=local-zone-id},Bucket={DataRedundancy=SingleLocalZone,Type=Directory}' \
    --region parent-region-code
```

For more information about Local Zone ID and Parent Region Code, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md). For more information about the AWS CLI command, see [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.

## Using the AWS SDKs


------
#### [ SDK for Go ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Go. 

**Example**  

```
package main

import (
	"context"
	"errors"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

var bucket = "bucket-base-name--zone-id--x-s3" // The full directory bucket name

func runCreateBucket(c *s3.Client) {
	resp, err := c.CreateBucket(context.Background(), &s3.CreateBucketInput{
		Bucket: &bucket,
		CreateBucketConfiguration: &types.CreateBucketConfiguration{
			Location: &types.LocationInfo{
				Name: aws.String("local-zone-id"),
				Type: types.LocationTypeLocalZone,
			},
			Bucket: &types.BucketInfo{
				DataRedundancy: types.DataRedundancySingleLocalZone,
				Type:           types.BucketTypeDirectory,
			},
		},
	})
	// If the bucket already exists and is owned by this account, treat it as a no-op.
	var terr *types.BucketAlreadyOwnedByYou
	if errors.As(err, &terr) {
		fmt.Printf("BucketAlreadyOwnedByYou: %s\n", aws.ToString(terr.Message))
		return
	}
	if err != nil {
		log.Fatal(err)
	}

	fmt.Printf("bucket created at %s\n", aws.ToString(resp.Location))
}
```

------
#### [ SDK for Java 2.x ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Java 2.x. 

**Example**  

```
public static void createBucket(S3Client s3Client, String bucketName) {

    // Bucket name format is bucket-base-name--local-zone-id--x-s3.
    // Example: doc-example-bucket--local-zone-id--x-s3 is a valid name for a
    // directory bucket created in a Local Zone.
    CreateBucketConfiguration bucketConfiguration = CreateBucketConfiguration.builder()
            .location(LocationInfo.builder()
                    .type(LocationType.LOCAL_ZONE)
                    .name("local-zone-id") // Must match the Local Zone ID in your bucket name.
                    .build())
            .bucket(BucketInfo.builder()
                    .type(BucketType.DIRECTORY)
                    .dataRedundancy(DataRedundancy.SINGLE_LOCAL_ZONE)
                    .build())
            .build();
    try {
        CreateBucketRequest bucketRequest = CreateBucketRequest.builder()
                .bucket(bucketName)
                .createBucketConfiguration(bucketConfiguration)
                .build();
        CreateBucketResponse response = s3Client.createBucket(bucketRequest);
        System.out.println(response);
    } catch (S3Exception e) {
        System.err.println(e.awsErrorDetails().errorMessage());
        System.exit(1);
    }
}
```

------
#### [ AWS SDK for JavaScript ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for JavaScript. 

**Example**  

```
// file.mjs, run with Node.js v16 or higher

import { S3 } from "@aws-sdk/client-s3";

const region = "parent-region-code";
const zone = "local-zone-id";
const suffix = `${zone}--x-s3`;

const s3 = new S3({ region });

const bucketName = `bucket-base-name--${suffix}`; // Full directory bucket name

const createResponse = await s3.createBucket({
  Bucket: bucketName,
  CreateBucketConfiguration: {
    Location: { Type: "LocalZone", Name: zone },
    Bucket: { Type: "Directory", DataRedundancy: "SingleLocalZone" },
  },
});
```

------
#### [ SDK for .NET ]

This example shows how to create a directory bucket in a Local Zone by using the SDK for .NET. 

**Example**  

```
using (var amazonS3Client = new AmazonS3Client())
{
    var putBucketResponse = await amazonS3Client.PutBucketAsync(new PutBucketRequest
    {
        BucketName = "bucket-base-name--local-zone-id--x-s3",
        PutBucketConfiguration = new PutBucketConfiguration
        {
            BucketInfo = new BucketInfo { DataRedundancy = DataRedundancy.SingleLocalZone, Type = BucketType.Directory },
            Location = new LocationInfo { Name = "local-zone-id", Type = LocationType.LocalZone }
        }
    }).ConfigureAwait(false);
}
```

------
#### [ SDK for PHP ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for PHP. 

**Example**  

```
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3Client = new S3Client([
    'region' => 'parent-region-code',
]);

$result = $s3Client->createBucket([
    'Bucket' => 'bucket-base-name--local-zone-id--x-s3',
    'CreateBucketConfiguration' => [
        'Location' => ['Name' => 'local-zone-id', 'Type' => 'LocalZone'],
        'Bucket' => ['DataRedundancy' => 'SingleLocalZone', 'Type' => 'Directory'],
    ],
]);
```

------
#### [ SDK for Python ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Python (Boto3). 

**Example**  

```
import logging
import boto3
from botocore.exceptions import ClientError

def create_bucket(s3_client, bucket_name, local_zone):
    '''
    Create a directory bucket in a specified Local Zone

    :param s3_client: boto3 S3 client
    :param bucket_name: Bucket to create; for example, 'bucket-base-name--local-zone-id--x-s3'
    :param local_zone: String; Local Zone ID to create the bucket in
    :return: True if bucket is created, else False
    '''

    try:
        bucket_config = {
                'Location': {
                    'Type': 'LocalZone',
                    'Name': local_zone
                },
                'Bucket': {
                    'Type': 'Directory', 
                    'DataRedundancy': 'SingleLocalZone'
                }
            }
        s3_client.create_bucket(
            Bucket = bucket_name,
            CreateBucketConfiguration = bucket_config
        )
    except ClientError as e:
        logging.error(e)
        return False
    return True


if __name__ == '__main__':
    bucket_name = 'bucket-base-name--local-zone-id--x-s3'
    region = 'parent-region-code'
    local_zone = 'local-zone-id'
    s3_client = boto3.client('s3', region_name = region)
    create_bucket(s3_client, bucket_name, local_zone)
```

------
#### [ SDK for Ruby ]

This example shows how to create a directory bucket in a Local Zone by using the AWS SDK for Ruby. 

**Example**  

```
s3 = Aws::S3::Client.new(region:'parent-region-code')
s3.create_bucket(
  bucket: "bucket-base-name--local-zone-id--x-s3",
  create_bucket_configuration: {
    location: { name: 'local-zone-id', type: 'LocalZone' },
    bucket: { data_redundancy: 'SingleLocalZone', type: 'Directory' }
  }
)
```

------

# Authenticating and authorizing for directory buckets in Local Zones


Directory buckets in Local Zones support both AWS Identity and Access Management (IAM) authorization and session-based authorization. For more information about authentication and authorization for directory buckets, see [Authenticating and authorizing requests](s3-express-authenticating-authorizing.md).

## Resources


Amazon Resource Names (ARNs) for directory buckets contain the `s3express` namespace, the AWS parent Region, the AWS account ID, and the directory bucket name, which includes the Zone ID. To access and perform actions on your directory bucket, use the following ARN format:

```
arn:aws:s3express:region-code:account-id:bucket/bucket-base-name--ZoneID--x-s3
```

For directory buckets in a Local Zone, the Zone ID is the ID of the Local Zone. For more information about directory buckets in Local Zones, see [Concepts for directory buckets in Local Zones](s3-lzs-for-directory-buckets.md). For more information about ARNs, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference-arns.html) in the *IAM User Guide*. For more information about resources, see [IAM JSON Policy Elements: Resource](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_resource.html) in the *IAM User Guide*.
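As a rough illustration, the ARN format above can be assembled from its parts. The account ID, Region code, and bucket name below are placeholder values:

```python
def directory_bucket_arn(region_code: str, account_id: str, bucket_name: str) -> str:
    """Build a directory bucket ARN; the bucket name already carries the Zone ID suffix."""
    return f"arn:aws:s3express:{region_code}:{account_id}:bucket/{bucket_name}"

# Placeholder account ID and names, for illustration only.
arn = directory_bucket_arn("us-west-2", "111122223333", "doc-example-bucket--usw2-lax1-az1--x-s3")
print(arn)  # arn:aws:s3express:us-west-2:111122223333:bucket/doc-example-bucket--usw2-lax1-az1--x-s3
```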

## Condition keys for directory buckets in Local Zones


In Local Zones, you can use all of these [condition keys](https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3express.html#amazons3express-policy-keys) in your IAM policies. Additionally, to create a data perimeter around your Local Zone network border groups, you can use the condition key `s3express:AllAccessRestrictedToLocalZoneGroup` to deny all requests from outside the groups. 

The following condition key can be used to further refine the conditions under which an IAM policy statement applies. For a complete list of API operations, policy actions, and condition keys that are supported by directory buckets, see [Policy actions for directory buckets](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-security-iam.html#s3-express-security-iam-actions).

**Note**  
The following condition key applies only to Local Zones and isn't supported in Availability Zones or AWS Regions.


| API operations | Policy actions | Policy action description | Condition key | Condition key description | Type | 
| --- | --- | --- | --- | --- | --- | 
|  [Zonal endpoint API operations](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-express-APIs.html)  |  s3express:CreateSession  |  Grants permission to create a session token, which is used for granting access to all Zonal endpoint API operations, such as `CreateSession`, `HeadBucket`, `CopyObject`, `PutObject`, and `GetObject`.  |  s3express:AllAccessRestrictedToLocalZoneGroup  | Filters all access to the bucket unless the request originates from the AWS Local Zone network border groups provided in this condition key.  **Values:** Local Zone network border group value   |  String  | 

## Example policies


To restrict object access to requests from within a data residency boundary that you define (specifically, a Local Zone Group, which is a set of Local Zones parented to the same AWS Region), you can set any of the following policies:
+ The service control policy (SCP). For information about SCPs, see [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) in the *AWS Organizations User Guide*.
+ The IAM identity-based policy for the IAM role.
+ The VPC endpoint policy. For more information about the VPC endpoint policies, see [Control access to VPC endpoints using endpoint policies](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-access.html) in the *AWS PrivateLink Guide*.
+ The S3 bucket policy.

**Note**  
The condition key `s3express:AllAccessRestrictedToLocalZoneGroup` doesn't support access from an on-premises environment. To support access from an on-premises environment, you must add the source IP addresses to your policies. For more information, see [aws:SourceIp](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html#condition-keys-sourceip) in the *IAM User Guide*. 

**Example – SCP policy**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Access-to-specific-LocalZones-only",
            "Effect": "Deny",
            "Action": [
                "s3express:*"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEqualsIfExists": {
                    "s3express:AllAccessRestrictedToLocalZoneGroup": [
                        "local-zone-network-border-group-value"
                    ]
                }
            }
        }
    ]
}
```

**Example – IAM identity-based policy (attached to IAM role)**  

```
{
    "Version": "2012-10-17",
    "Statement": {
        "Effect": "Deny",
        "Action": "s3express:CreateSession",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {
                "s3express:AllAccessRestrictedToLocalZoneGroup": [
                    "local-zone-network-border-group-value"
                ]              
            }
        }
    }
}
```

**Example – VPC endpoint policy**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {       
            "Sid": "Access-to-specific-LocalZones-only",
            "Principal": "*",
            "Action": "s3express:CreateSession",
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                 "StringNotEqualsIfExists": {
                     "s3express:AllAccessRestrictedToLocalZoneGroup": [
                         "local-zone-network-border-group-value"
                     ]
                 }   
            }
        }
    ]
}
```

**Example – bucket policy**  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {       
            "Sid": "Access-to-specific-LocalZones-only",
            "Principal": "*",
            "Action": "s3express:CreateSession",
            "Effect": "Deny",
            "Resource": "*",
            "Condition": {
                 "StringNotEqualsIfExists": {
                     "s3express:AllAccessRestrictedToLocalZoneGroup": [
                         "local-zone-network-border-group-value"
                     ]
                 }   
            }
        }
    ]
}
```