

# Creating, configuring, and working with Amazon S3 general purpose buckets
<a name="creating-buckets-s3"></a>

To store your data in Amazon S3, you work with resources known as buckets and objects. A *bucket* is a container for objects. An *object* is a file and any metadata that describes that file.

To store an object in Amazon S3, you create a bucket and then upload the object to a bucket. When the object is in the bucket, you can open it, download it, and move it. When you no longer need an object or a bucket, you can clean up your resources.

The topics in this section provide an overview of working with general purpose buckets in Amazon S3. They include information about naming, creating, accessing, and deleting general purpose buckets. For more information about viewing or listing objects in a bucket, see [Organizing, listing, and working with your objects](organizing-objects.md).

There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see [Buckets](Welcome.md#BasicsBucket).

**Note**  
For more information about using the Amazon S3 Express One Zone storage class with directory buckets, see [S3 Express One Zone](directory-bucket-high-performance.md#s3-express-one-zone) and [Working with directory buckets](directory-buckets-overview.md).

**Note**  
With Amazon S3, you pay only for what you use. For more information about Amazon S3 features and pricing, see [Amazon S3](https://aws.amazon.com/s3). If you are a new Amazon S3 customer, you can get started with Amazon S3 for free. For more information, see [AWS Free Tier](https://aws.amazon.com/free).

**Topics**
+ [General purpose buckets overview](UsingBucket.md)
+ [Namespaces for general purpose buckets](gpbucketnamespaces.md)
+ [Common general purpose bucket patterns for building applications on Amazon S3](common-bucket-patterns.md)
+ [General purpose bucket naming rules](bucketnamingrules.md)
+ [General purpose bucket quotas, limitations, and restrictions](BucketRestrictions.md)
+ [Accessing an Amazon S3 general purpose bucket](access-bucket-intro.md)
+ [Creating a general purpose bucket](create-bucket-overview.md)
+ [Viewing the properties for an S3 general purpose bucket](view-bucket-properties.md)
+ [Listing Amazon S3 general purpose buckets](list-buckets.md)
+ [Emptying a general purpose bucket](empty-bucket.md)
+ [Deleting a general purpose bucket](delete-bucket.md)
+ [Mount an Amazon S3 bucket as a local file system](mountpoint.md)
+ [Working with Storage Browser for Amazon S3](storage-browser.md)
+ [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md)
+ [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md)

# General purpose buckets overview
<a name="UsingBucket"></a>

To upload your data (photos, videos, documents, etc.) to Amazon S3, you must first create an S3 bucket in one of the AWS Regions. 

There are several types of Amazon S3 buckets. Before creating a bucket, make sure that you choose the bucket type that best fits your application and performance requirements. For more information about the various bucket types and the appropriate use cases for each, see [Buckets](Welcome.md#BasicsBucket).

The following sections provide more information about general purpose buckets, including bucket naming rules, quotas, and bucket configuration details. For a list of restrictions and limitations related to Amazon S3 buckets, see [General purpose bucket quotas, limitations, and restrictions](BucketRestrictions.md).

**Topics**
+ [General purpose buckets overview](#general-purpose-buckets-overview)
+ [Common general purpose bucket patterns](#bucket-patterns-overview)
+ [Permissions](#about-access-permissions-create-bucket)
+ [Managing public access to general purpose buckets](#block-public-access-intro)
+ [Using tags with general purpose buckets](#bucket-tagging-intro)
+ [General purpose buckets configuration options](#bucket-config-options-intro)
+ [General purpose buckets operations](#bucket-operations-limits)
+ [General purpose buckets performance monitoring](#bucket-monitoring-use-cases)

## General purpose buckets overview
<a name="general-purpose-buckets-overview"></a>

Every object is contained in a bucket. For example, if the object named `photos/puppy.jpg` is stored in the `amzn-s3-demo-bucket` general purpose bucket in the US West (Oregon) Region, then it is addressable by using the URL `https://amzn-s3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg`. For more information, see [Accessing a Bucket](access-bucket-intro.md). 
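The virtual-hosted-style address shown above can be assembled from the bucket name, the Region code, and the object key. The following is a minimal sketch of that scheme, using the example bucket and object from this section (other addressing options, such as path-style, are covered in the linked topic):

```python
def virtual_hosted_url(bucket: str, region: str, key: str) -> str:
    """Build a virtual-hosted-style URL for an object in a general purpose bucket."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

url = virtual_hosted_url("amzn-s3-demo-bucket", "us-west-2", "photos/puppy.jpg")
# → https://amzn-s3-demo-bucket.s3.us-west-2.amazonaws.com/photos/puppy.jpg
```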

In terms of implementation, buckets and objects are AWS resources, and Amazon S3 provides APIs for you to manage them. For example, you can create a bucket and upload objects using the Amazon S3 API. You can also use the Amazon S3 console to perform these operations. The console uses the Amazon S3 APIs to send requests to Amazon S3. 

This section describes how to work with general purpose buckets. For information about working with objects, see [Amazon S3 objects overview](UsingObjects.md).

By default, general purpose buckets exist in a global namespace, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has four partitions: `aws` (Standard Regions), `aws-cn` (China Regions), `aws-us-gov` (AWS GovCloud (US)), and `aws-eusc` (European Sovereign Cloud). After a general purpose bucket is created in the shared global namespace, that bucket name is unavailable for anyone else to use within the partition. When a bucket owner deletes their bucket, the bucket name becomes available again in the global namespace for anyone to re-create.

Alternatively, you can create buckets in your reserved account regional namespace, which lets you create predictable bucket names with assurance that the names you want will always be available for you to use. Your account regional namespace is a subdivision of the global namespace that only your account can use. For more information about account regional namespaces, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).

After a general purpose bucket is created, the name of that bucket cannot be used by another AWS account in the same partition until the bucket is deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes. For bucket naming guidelines, see [General purpose bucket naming rules](bucketnamingrules.md).

Amazon S3 creates buckets in the Region that you specify. To reduce latency, minimize costs, or address regulatory requirements, choose an AWS Region that is geographically close to you. For example, if you reside in Europe, you might find it advantageous to create buckets in the Europe (Ireland) or Europe (Frankfurt) Regions. For a list of Amazon S3 Regions, see [Regions and Endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*.

**Note**  
Objects that belong to a bucket that you create in a specific AWS Region never leave that Region, unless you explicitly transfer them to another Region. For example, objects that are stored in the Europe (Ireland) Region never leave it. 

## Common general purpose bucket patterns
<a name="bucket-patterns-overview"></a>

When you build applications on Amazon S3, you can use unique general purpose buckets to separate different datasets or workloads. Depending on your use case, there are different design patterns and best practices for using general purpose buckets. For more information, see [Common general purpose bucket patterns for building applications on Amazon S3](common-bucket-patterns.md).

## Permissions
<a name="about-access-permissions-create-bucket"></a>

You can use your AWS account root user credentials to create a general purpose bucket and perform any other Amazon S3 operation. However, we recommend that you do not use the root user credentials of your AWS account to make requests, such as to create a bucket. Instead, create an AWS Identity and Access Management (IAM) user, and grant that user full access (users by default have no permissions). 

These users are referred to as *administrators*. You can use the administrator user credentials, instead of the root user credentials of your account, to interact with AWS and perform tasks, such as create a bucket, create users, and grant them permissions. 

For more information, see [AWS account root user credentials and IAM user credentials](https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html) in the *AWS General Reference* and [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) in the *IAM User Guide*.

The AWS account that creates a resource owns that resource. For example, if you create an IAM user in your AWS account and grant the user permission to create a bucket, the user can create a bucket. But the user does not own the bucket; the AWS account that the user belongs to owns the bucket. The user needs additional permission from the resource owner to perform any other bucket operations. For more information about managing permissions for your Amazon S3 resources, see [Identity and Access Management for Amazon S3](security-iam.md).

## Managing public access to general purpose buckets
<a name="block-public-access-intro"></a>

Public access is granted to general purpose buckets and objects through bucket policies, access control lists (ACLs), or both. To help you manage public access to Amazon S3 resources, Amazon S3 provides settings to block public access. Amazon S3 Block Public Access settings can override ACLs and bucket policies so that you can enforce uniform limits on public access to these resources. You can apply Block Public Access settings to individual buckets or to all buckets in your account.

To ensure that all of your Amazon S3 general purpose buckets and objects have their public access blocked, all four settings for Block Public Access are enabled by default when you create a new bucket. We recommend that you turn on all four settings for Block Public Access for your account too. These settings block all public access for all current and future buckets.

Before applying these settings, verify that your applications will work correctly without public access. If you require some level of public access to your buckets or objects—for example, to host a static website, as described at [Hosting a static website using Amazon S3](WebsiteHosting.md)—you can customize the individual settings to suit your storage use cases. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
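For reference, the four settings correspond to the `PublicAccessBlockConfiguration` element in the S3 API. The following boto3 sketch applies all four settings to a single bucket; the bucket name is an example, and the account-level equivalent lives in the S3 Control API:

```python
# The four Block Public Access settings. Setting all four to True blocks
# all current and future public access for the bucket.
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,        # reject requests that add public ACLs
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject bucket policies that grant public access
    "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
}

def block_all_public_access(bucket_name: str) -> None:
    """Apply all four Block Public Access settings to one bucket."""
    import boto3  # imported here so the settings above can be inspected without boto3

    s3 = boto3.client("s3")
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK,
    )
```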

However, we highly recommend keeping Block Public Access enabled. If you want to keep all four Block Public Access settings enabled and still host a static website, you can use Amazon CloudFront origin access control (OAC). Amazon CloudFront provides the capabilities required to set up a secure static website. Amazon S3 static websites support only HTTP endpoints. CloudFront uses the durable storage of Amazon S3 while providing additional security features, such as HTTPS, which adds security by encrypting a normal HTTP request and protecting against common cyberattacks. 

For more information, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html) in the *Amazon CloudFront Developer Guide*.

**Note**  
If you see an `Error` when you list your general purpose buckets and their public access settings, you might not have the required permissions. Make sure that you have the following permissions added to your user or role policy:  

```
s3:GetAccountPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:GetBucketPolicyStatus
s3:GetBucketLocation
s3:GetBucketAcl
s3:ListAccessPoints
s3:ListAllMyBuckets
```
In some rare cases, requests can also fail because of an AWS Region outage.

## Using tags with general purpose buckets
<a name="bucket-tagging-intro"></a>

You can add tags to your Amazon S3 buckets to categorize and track your AWS costs or for access control. You can use tags as cost allocation tags to track storage costs in AWS Billing and Cost Management. You can also use tags for attribute-based access control (ABAC), to scale access permissions and grant access to S3 buckets based on their tags.

For more information, see [Using tags with S3 general purpose buckets](buckets-tagging.md).

## General purpose buckets configuration options
<a name="bucket-config-options-intro"></a>

Amazon S3 supports various options for you to configure your general purpose bucket. For example, you can configure your bucket for website hosting, add a configuration to manage the lifecycle of objects in the bucket, and configure the bucket to log all access to the bucket. Amazon S3 supports subresources for you to store and manage the bucket configuration information. You can use the Amazon S3 API to create and manage these subresources. However, you can also use the console or the AWS SDKs. 

**Note**  
There are also object-level configurations. For example, you can configure object-level permissions by configuring an access control list (ACL) specific to that object.

These are referred to as subresources because they exist in the context of a specific bucket or object. The following table lists subresources that enable you to manage bucket-specific configurations. 


| Subresource | Description | 
| --- | --- | 
|   *cors* (cross-origin resource sharing)   |   You can configure your bucket to allow cross-origin requests. For more information, see [Using cross-origin resource sharing (CORS)](cors.md).  | 
|   *event notification*   |  You can enable your bucket to send you notifications of specified bucket events.  For more information, see [Amazon S3 Event Notifications](EventNotifications.md).  | 
|   *lifecycle*   |  You can define lifecycle rules for objects in your bucket that have a well-defined lifecycle. For example, you can define a rule to archive objects one year after creation, or delete an object 10 years after creation.  For more information, see [Managing the lifecycle of objects](object-lifecycle-mgmt.md).   | 
|   *location*   |   When you create a bucket, you specify the AWS Region where you want Amazon S3 to create the bucket. Amazon S3 stores this information in the location subresource and provides an API for you to retrieve this information.   | 
|   *logging*   |  Logging enables you to track requests for access to your bucket. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any. Access log information can be useful in security and access audits. It can also help you learn about your customer base and understand your Amazon S3 bill.   For more information, see [Logging requests with server access logging](ServerLogs.md).   | 
|   *object locking*   |  To use S3 Object Lock, you must enable it for a bucket. You can also optionally configure a default retention mode and period that applies to new objects that are placed in the bucket.  For more information, see [Locking objects with Object Lock](object-lock.md).   | 
|   *policy* and *ACL* (access control list)   |  All your resources (such as buckets and objects) are private by default. Amazon S3 supports both bucket policy and access control list (ACL) options for you to grant and manage bucket-level permissions. Amazon S3 stores the permission information in the *policy* and *acl* subresources. For more information, see [Identity and Access Management for Amazon S3](security-iam.md).  | 
|   *replication*   |  Replication is the automatic, asynchronous copying of objects across buckets in different or the same AWS Regions. For more information, see [Replicating objects within and across Regions](replication.md).  | 
|   *requestPayment*   |  By default, the AWS account that creates the bucket (the bucket owner) pays for downloads from the bucket. Using this subresource, the bucket owner can specify that the person requesting the download will be charged for the download. Amazon S3 provides an API for you to manage this subresource. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md).  | 
|   *tagging*   |  You can add tags to your Amazon S3 buckets to categorize and track your AWS costs or for access control. You can use tags as cost allocation tags to track storage costs in AWS Billing and Cost Management. You can also use tags for attribute-based access control (ABAC), to scale access permissions and grant access to S3 buckets based on their tags. For more information, see [Using tags with S3 general purpose buckets](buckets-tagging.md).   | 
|   *transfer acceleration*   |  Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration takes advantage of the globally distributed edge locations of Amazon CloudFront. For more information, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md).  | 
|   *versioning*   |  Versioning helps you recover objects from accidental overwrites and deletes. We recommend enabling versioning as a best practice.  For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).  | 
|   *website*   |  You can configure your bucket for static website hosting. Amazon S3 stores this configuration by creating a *website* subresource. For more information, see [Hosting a static website using Amazon S3](WebsiteHosting.md).   | 

## General purpose buckets operations
<a name="bucket-operations-limits"></a>

The high availability engineering of Amazon S3 is focused on *get*, *put*, *list*, and *delete* operations. Because general purpose bucket operations work against a centralized, global resource space, we recommend that you don't create, delete, or configure buckets on the high availability code path of your application. It's better to create, delete, or configure buckets in a separate initialization or setup routine that you run less often. 

## General purpose buckets performance monitoring
<a name="bucket-monitoring-use-cases"></a>

When you have critical applications and business processes that rely on AWS resources, it’s important to monitor and get alerts for your system. [Monitoring your data](https://docs.aws.amazon.com/AmazonS3/latest/userguide/monitoring-overview.html) can help maintain the reliability, availability, and performance of Amazon S3 and your AWS solutions. There are several AWS services that you can use to collect and aggregate metrics and logs for your S3 buckets. 

Depending on your use case, you can choose which AWS service best suits your organization’s needs to debug issues, monitor your data, optimize storage costs, or troubleshoot multi-point issues. For example:
+ **To improve the performance of applications that use S3:** [Set up CloudWatch alarms](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cloudwatch-monitoring.html) to monitor your storage data, replication metrics, or request metrics.
+ **To plan for storage usage, optimize storage costs, or find out how much storage you have across your entire organization:** [Use Amazon S3 Storage Lens](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-optimize-storage.html). Alternatively, you can [use S3 Storage Lens to improve your data performance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage-lens-detailed-status-code.html) by enabling advanced metrics and using the detailed status-code metrics to get counts for successful or failed requests.
+ **For a unified view of your operational health:** [Publish S3 Storage Lens usage and activity metrics](https://docs.aws.amazon.com/AmazonS3/latest/userguide/storage_lens_view_metrics_cloudwatch.html) to an [Amazon CloudWatch dashboard](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Dashboards.html).
**Note**  
The Amazon CloudWatch publishing option is available for S3 Storage Lens dashboards upgraded to **Advanced metrics and recommendations**. You can enable the CloudWatch publishing option for a new or existing dashboard configuration in S3 Storage Lens.
+ **To obtain a record of actions taken by a user, role, or an AWS service:** Set up [AWS CloudTrail logs](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-working-with-log-files.html). You can also use AWS CloudTrail logs to review API calls for Amazon S3 as events.
+ **To receive notifications when a certain event happens in your S3 bucket:** [Set up Amazon S3 event notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/EventNotifications.html).
+ **To obtain detailed records for the requests that are made to an S3 bucket:** [Set up S3 access logs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerLogs.html).

For a list of all the different AWS services that you can use to monitor your data, see [Logging and monitoring in Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/monitoring-overview.html).

# Namespaces for general purpose buckets
<a name="gpbucketnamespaces"></a>

By default, general purpose buckets exist in a global namespace. This means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has four partitions: `aws` (Standard Regions), `aws-cn` (China Regions), `aws-us-gov` (AWS GovCloud (US)), and `aws-eusc` (European Sovereign Cloud). When you create a general purpose bucket, you can choose to create a bucket in the shared global namespace. You can also choose to create a bucket in your account regional namespace. Your account regional namespace is a subdivision of the global namespace that only your account can create buckets in.

**Topics**
+ [Global general purpose buckets](#global-gp-buckets)
+ [Account regional namespace general purpose buckets](#account-regional-gp-buckets)
+ [Restrictions and considerations](#namespace-restrictions)
+ [AWS Region Code Format](#region-code-format)
+ [Requiring the creation of buckets in your account regional namespace](#require-account-regional)
+ [Creating a bucket in your account regional namespace](#create-account-regional-bucket)

## Global general purpose buckets
<a name="global-gp-buckets"></a>

By default, you create global general purpose buckets in the shared global namespace. After creating a general purpose bucket in the shared global namespace, that bucket name is unavailable for anyone else to create within the partition. When you delete a global general purpose bucket, the bucket name becomes available again in the global namespace for anyone to re-create.

When creating global general purpose buckets, you can request any name that adheres to the bucket naming rules. These rules include specifying a name between 3 (minimum) and 63 (maximum) characters long. The name can only consist of lowercase letters, numbers, periods (.), and hyphens (-). Bucket names must begin and end with a letter or number and cannot contain two adjacent periods. For more information on bucket naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).
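The rules listed above can be checked mechanically. The following is a sketch of a validator that covers only the rules stated in this paragraph; the complete rule set in the linked naming-rules topic includes additional restrictions not shown here:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Check the naming rules described above: 3-63 characters;
    lowercase letters, numbers, periods, and hyphens only;
    begins and ends with a letter or number; no adjacent periods."""
    if not (3 <= len(name) <= 63):
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if ".." in name:  # no two adjacent periods
        return False
    return True
```

For example, `is_valid_bucket_name("amzn-s3-demo-bucket")` passes, while names with uppercase letters, a leading hyphen, or adjacent periods fail.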

When specifying a global general purpose bucket name, you must select a unique name that is not already in use for the partition. If you attempt to create a bucket that already exists and is owned by someone else, you will receive an HTTP `409 BucketAlreadyExists` error. If you attempt to create a bucket that already exists and is owned by you, you will receive an HTTP `409 BucketAlreadyOwnedByYou` error.

You can create global general purpose buckets to have the most flexibility in selecting your requested bucket names. However, because the namespace is shared, other accounts can create similar bucket names, and other accounts can re-create bucket names that you previously deleted. You should not depend on specific bucket naming conventions for availability or security verification purposes, and you should not write code that assumes your chosen bucket name is available unless you have already created the bucket. One method for creating bucket names that aren't predictable is to append a globally unique identifier (GUID) to your bucket name. For example, `amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111`. For more information, see [Creating a bucket that uses a GUID in the bucket name](bucketnamingrules.md#create-bucket-name-guid).
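The GUID suffix described above can be generated with Python's standard `uuid` module; this is a minimal sketch, and the prefix is an example. Note that the 37-character suffix (a hyphen plus a 36-character GUID) counts toward the 63-character bucket name limit, so the prefix must be 26 characters or fewer:

```python
import uuid

def bucket_name_with_guid(prefix: str) -> str:
    """Append a GUID to a bucket name prefix to make the name hard to predict.
    The GUID plus its separating hyphen adds 37 characters, so keep the
    prefix at 26 characters or fewer to stay within the 63-character limit."""
    return f"{prefix}-{uuid.uuid4()}"

name = bucket_name_with_guid("amzn-s3-demo-bucket")
# e.g. amzn-s3-demo-bucket-f81d4fae-7dec-11d0-a765-00a0c91e6bf6
```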

## Account regional namespace general purpose buckets
<a name="account-regional-gp-buckets"></a>

Although Amazon S3 general purpose buckets exist in a shared global namespace, you can optionally create buckets in your account regional namespace. The account regional namespace is a reserved subdivision of the global bucket namespace. Only your account can create general purpose buckets in this namespace. New general purpose buckets created in your account regional namespace are unique to your account. These buckets can never be re-created by another account. These buckets support all the S3 features and AWS services that general purpose buckets in the shared global namespace already support. Your applications require no change to interact with buckets in your account regional namespace.

**Note**  
You can create buckets in your account regional namespace in all AWS Regions except Middle East (Bahrain) and Middle East (UAE).

Creating buckets in your account regional namespace is a security best practice. These bucket names can only ever be used by your account. You can create buckets in your account regional namespace to easily template general purpose bucket names across multiple AWS Regions. You can have assurance that no other account can create bucket names in your namespace. If another account attempts to create a bucket with your account regional suffix, the CreateBucket request will be rejected.

### Account regional namespace naming convention
<a name="account-regional-naming"></a>

General purpose buckets in your account regional namespace must follow a specific naming convention. These bucket names consist of a prefix that you choose, followed by a suffix that contains your 12-digit AWS account ID and the AWS Region code and that ends with `-an`.

```
bucket-name-prefix-accountId-region-an
```

For example, the following general purpose bucket exists in the account regional namespace for AWS Account 111122223333 in the us-west-2 Region:

```
amzn-s3-demo-bucket-111122223333-us-west-2-an
```

To create a bucket in your account regional namespace, make a CreateBucket request that specifies the `x-amz-bucket-namespace` request header with the value set to `account-regional`, and provide an account regional namespace formatted bucket name: `<customer-chosen-name>-<AWS-Account-ID>-<AWS-Region>-an`.
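The naming convention can be expressed as a small helper; this is a sketch, and the prefix and account ID below are the examples from this section:

```python
def account_regional_bucket_name(prefix: str, account_id: str, region: str) -> str:
    """Build an account regional namespace bucket name:
    <customer-chosen-name>-<AWS-Account-ID>-<AWS-Region>-an"""
    return f"{prefix}-{account_id}-{region}-an"

name = account_regional_bucket_name("amzn-s3-demo-bucket", "111122223333", "us-west-2")
# → amzn-s3-demo-bucket-111122223333-us-west-2-an
```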

**Note**  
When you create a general purpose bucket in your account regional namespace by using the console, a suffix is automatically added to the bucket name prefix that you provide. This suffix includes your AWS account ID and the AWS Region that you selected to create your bucket in. When you create a general purpose bucket in your account regional namespace by using the CreateBucket API, you must provide the full suffix, including your AWS account ID and the AWS Region, in your request. For a list of the AWS Region codes, see [AWS Region Code Format](#region-code-format).

### Integrating the account regional namespace to your CloudFormation templates
<a name="cfn-integration"></a>

You can update your infrastructure-as-code tools, like CloudFormation, to simplify creating buckets in your account regional namespace. CloudFormation offers the pseudo parameters `AWS::AccountId` and `AWS::Region`. These parameters make it easy to build CloudFormation templates that create account regional namespace buckets. For more information, see [Get AWS values using pseudo parameters](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/pseudo-parameter-reference.html#available-pseudo-parameters).

**Example 1 using BucketName with Sub:**

```
BucketName: !Sub "amzn-s3-demo-bucket-${AWS::AccountId}-${AWS::Region}-an"
BucketNamespace: "account-regional"
```

**Example 2 using BucketNamePrefix:**

```
BucketNamePrefix: 'amzn-s3-demo-bucket'
BucketNamespace: "account-regional"
```

Alternatively, you can use the `BucketNamePrefix` property in your CloudFormation template. `BucketNamePrefix` lets you provide only the customer-defined portion of the bucket name; CloudFormation then automatically adds the account regional namespace suffix based on the requesting AWS account and the specified AWS Region.

Using these options, you can build a custom CloudFormation template to easily create general purpose buckets in your account regional namespace. For more information, see [AWS::S3::Bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html) in the *AWS CloudFormation User Guide*.

## Restrictions and considerations
<a name="namespace-restrictions"></a>

When creating buckets in the shared global namespace, the following considerations apply:
+ A bucket name in the shared global namespace can't be used by another AWS account in the same partition until the bucket is deleted. After you delete a bucket in the shared global namespace, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and can therefore potentially receive requests intended for the deleted bucket.
+ When building applications that will create buckets in the shared global namespace, make sure to consider that your desired bucket names may be already taken by another account and that other accounts may have bucket names that are similar to yours.
+ Because Amazon S3 identifies buckets based on their names, an application that uses an incorrect bucket name in a request could inadvertently perform operations against a different bucket than expected. To help avoid unintentional bucket interactions in situations like this, you can use bucket owner condition. For more information, see [Verifying bucket ownership with bucket owner condition](bucket-owner-condition.md).

When creating buckets in your account regional namespace, the following restrictions and considerations apply:
+ Any attempt to re-create an account regional namespace bucket that you already own in any AWS Region will return an HTTP 409 BucketAlreadyOwnedByYou error.
+ You should use the S3 regional endpoints to create buckets in your account regional namespace. For [backwards compatibility](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#VirtualHostingBackwardsCompatibility), you can use the legacy global endpoint to create buckets in your account regional namespace in the US East (N. Virginia) Region.
+ Your account regional suffix counts toward the maximum of 63 characters allowed in general purpose bucket names. For example, if your account regional suffix is `-012345678910-us-east-1-an`, you have 37 characters available for your bucket name prefix.
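
This length budget can be checked programmatically. The following Python snippet is an illustrative helper (not part of any AWS SDK) that computes how many characters remain for your bucket name prefix after reserving room for the account regional suffix:

```python
def available_prefix_length(account_id: str, region: str) -> int:
    """Return how many characters remain for the bucket name prefix
    after reserving room for the account regional suffix."""
    # The suffix has the form -<12-digit-account-id>-<region-code>-an,
    # for example -012345678910-us-east-1-an.
    suffix = f"-{account_id}-{region}-an"
    return 63 - len(suffix)

print(available_prefix_length("012345678910", "us-east-1"))  # 37
```

Longer Region codes, such as `ap-southeast-1`, leave correspondingly fewer characters for the prefix.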

## AWS Region code format
<a name="region-code-format"></a>

To create a bucket in your account regional namespace, you must include the code of the AWS Region where you want to create the general purpose bucket in the bucket name suffix. You must specify the full AWS Region code (for example, `us-west-2`). For a complete list of AWS Region codes, see [AWS Regions](https://docs.aws.amazon.com/global-infrastructure/latest/regions/aws-regions.html#available-regions). The following bucket names show two examples of the AWS Region code format that you must use when creating buckets in your account regional namespace:
+ `amzn-s3-demo-bucket-012345678910-ap-southeast-1-an`
+ `amzn-s3-demo-bucket-987654321012-eu-north-1-an`
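
Given an account regional bucket name, the account ID and Region code can be recovered from the suffix. The following Python sketch is a hypothetical helper shown only for illustration; the regular expression assumes the standard `<prefix>-<12-digit-account-id>-<region-code>-an` layout:

```python
import re

# Matches customer-chosen-name-<12-digit-account-id>-<region-code>-an
_PATTERN = re.compile(
    r"^(?P<prefix>.+)-(?P<account>\d{12})-(?P<region>[a-z]{2}(?:-[a-z]+)+-\d)-an$"
)

def parse_account_regional_name(bucket_name: str) -> dict:
    """Split an account regional bucket name into prefix, account ID, and Region code."""
    match = _PATTERN.match(bucket_name)
    if match is None:
        raise ValueError(f"not an account regional bucket name: {bucket_name}")
    return match.groupdict()

parts = parse_account_regional_name("amzn-s3-demo-bucket-012345678910-ap-southeast-1-an")
print(parts["region"])  # ap-southeast-1
```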

## Requiring the creation of buckets in your account regional namespace
<a name="require-account-regional"></a>

You can require that your IAM principals create buckets only in your account regional namespace by using the `s3:x-amz-bucket-namespace` condition key. The following examples show how to enforce account regional bucket creation in an IAM policy, a Resource Control Policy, or a Service Control Policy.

### IAM policy
<a name="require-iam-policy"></a>

The following IAM policy denies the `s3:CreateBucket` permission to the IAM principal if the request does not include the `x-amz-bucket-namespace` header set to `account-regional`.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "RequireAccountRegionalBucketCreation",
      "Effect": "Deny",
      "Action": "s3:CreateBucket",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-bucket-namespace": "account-regional"
        }
      }
    }
  ]
}
```

### Resource Control Policy
<a name="require-rcp"></a>

The following Resource Control Policy denies the `s3:CreateBucket` permission to everyone if the request does not include the `x-amz-bucket-namespace` header set to `account-regional`.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "OnlyCreateBucketsInAccountRegionalNamespace",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:CreateBucket",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-bucket-namespace": "account-regional"
                }
            }
        }
    ]
}
```

### Service Control Policy
<a name="require-scp"></a>

The following Service Control Policy denies the `s3:CreateBucket` permission to everyone if the request does not include the `x-amz-bucket-namespace` header set to `account-regional`.

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "RequireAccountRegionalBucketCreation",
            "Effect": "Deny",
            "Action": "s3:CreateBucket",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-bucket-namespace": "account-regional"
                }
            }
        }
    ]
}
```

## Creating a bucket in your account regional namespace
<a name="create-account-regional-bucket"></a>

The following examples show you how to create a general purpose bucket in your account regional namespace.

### Using the AWS CLI
<a name="create-account-regional-cli"></a>

The following AWS CLI example creates a general purpose bucket in the account regional namespace for AWS account 012345678910 in the US West (N. California) Region (`us-west-1`). To use this example command, replace the `user input placeholders` with your own information.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket-012345678910-us-west-1-an \
    --bucket-namespace account-regional \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
```

# Common general purpose bucket patterns for building applications on Amazon S3
<a name="common-bucket-patterns"></a>

When you build applications on Amazon S3, you can use unique general purpose buckets to separate different datasets or workloads. When you build applications that serve end users or different user groups, use the following best-practice design patterns to build applications that take full advantage of Amazon S3 features and scalability.

**Important**  
We recommend that you create general purpose bucket names that are not predictable. Do not write code that assumes your chosen bucket name is available unless you have already created the bucket. We also recommend creating buckets in your account regional namespace so that only your account can ever own those bucket names. For more information, see [Namespaces for general purpose buckets](gpbucketnamespaces.md). For more information about general purpose bucket naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).

**Topics**
+ [Multi-tenant general purpose bucket pattern](#multi-tenant-buckets)
+ [Bucket-per-use pattern](#bucket-per-customer)

## Multi-tenant general purpose bucket pattern
<a name="multi-tenant-buckets"></a>

With multi-tenant buckets, you create a single general purpose bucket for a team or workload. You use [unique S3 prefixes](using-prefixes.md) to organize the objects that you store in the bucket. A prefix is a string of characters at the beginning of the object key name. A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes). You can think of prefixes as a way to organize your data in a similar way to directories. However, prefixes are not directories. 

For example, to store information about cities, you might organize it by continent, then by country, and then by province or state. Because these names don't usually contain punctuation, you might use the slash (/) as the delimiter. The following examples show prefixes being used to organize city names by continent, country, and then province or state, using a slash (/) delimiter.
+ Europe/France/Nouvelle-Aquitaine/Bordeaux
+ North America/Canada/Quebec/Montreal
+ North America/USA/Washington/Bellevue
+ North America/USA/Washington/Seattle
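
The way a delimiter groups keys can be sketched in plain Python. The following function is an illustrative local helper, not an AWS SDK call; it mimics the `CommonPrefixes` grouping that an S3 list request with a `Delimiter` parameter returns, without calling S3:

```python
def common_prefixes(keys, prefix="", delimiter="/"):
    """Group object keys the way an S3 list request with a delimiter would,
    returning the distinct common prefixes one level below `prefix`."""
    groups = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        head, sep, _ = key[len(prefix):].partition(delimiter)
        if sep:  # only keys that continue past the delimiter form a group
            groups.add(prefix + head + delimiter)
    return sorted(groups)

keys = [
    "Europe/France/Nouvelle-Aquitaine/Bordeaux",
    "North America/Canada/Quebec/Montreal",
    "North America/USA/Washington/Bellevue",
    "North America/USA/Washington/Seattle",
]
print(common_prefixes(keys))  # ['Europe/', 'North America/']
```

Listing with `prefix="North America/USA/"` would similarly return only `North America/USA/Washington/`, which is how prefixes give a directory-like view of a flat key space.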

This pattern scales well when you have hundreds of unique datasets within a general purpose bucket. With prefixes, you can easily organize and group these datasets.

However, one potential drawback to the multi-tenant general purpose bucket pattern is that many S3 features, like [default bucket encryption](bucket-encryption.md), [S3 Versioning](versioning-workflows.md), and [S3 Requester Pays](RequesterPaysBuckets.md), are set at the bucket level and not the prefix level. If the different datasets within the multi-tenant bucket have unique requirements, the inability to configure these features at the prefix level can make it difficult to specify the correct settings for each dataset. Additionally, in a multi-tenant bucket, [cost allocation](BucketBilling.md) can become complex as you work to understand the storage, requests, and data transfer associated with specific prefixes.

## Bucket-per-use pattern
<a name="bucket-per-customer"></a>

With the bucket-per-use pattern, you create a general purpose bucket for each distinct dataset, end user, or team. Because you can configure S3 bucket-level features for each of these buckets, you can use this pattern to configure unique bucket-level settings. For example, you can configure features like [default bucket encryption](bucket-encryption.md), [S3 Versioning](versioning-workflows.md), and [S3 Requester Pays](RequesterPaysBuckets.md) in a way that is customized to the dataset in each bucket. Using one bucket for each distinct dataset, end user, or team can also help you simplify both your access management and cost allocation strategies.

A potential drawback to this strategy is that you will need to manage potentially thousands of buckets. All AWS accounts have a default quota of 10,000 general purpose buckets. You can increase the bucket quota for an account by submitting a quota increase request. To request an increase for general purpose buckets, visit the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home/services/s3/quotas/).

To manage your bucket-per-use pattern and simplify your infrastructure management, you can use [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html#welcome-simplify-infrastructure-management). You can create a custom CloudFormation template for your pattern that already defines all of your desired settings for your S3 general purpose buckets so that you can easily deploy and track any changes to your infrastructure. For more information, see [AWS::S3::Bucket](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-s3-bucket.html) in the *AWS CloudFormation User Guide*.
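
As a sketch of such a template, the following minimal `AWS::S3::Bucket` resource (the logical ID `DatasetBucket` is an illustrative placeholder) enables versioning and default encryption. It intentionally omits the `BucketName` property, so CloudFormation generates a unique, non-predictable name, which aligns with the bucket-naming best practices in this guide:

```yaml
Resources:
  DatasetBucket:
    Type: AWS::S3::Bucket
    Properties:
      # No BucketName: CloudFormation generates a unique, hard-to-predict name.
      VersioningConfiguration:
        Status: Enabled
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionDefault:
              SSEAlgorithm: aws:kms
```

You can then deploy one stack per dataset, end user, or team, and every bucket created from the template starts with the same baseline settings.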

![\[A diagram showing you how you can create a CloudFormation template customized to your application that defines settings for your S3 buckets.\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/images/create-stack-diagram.png)


When building a workload with a bucket-per-use pattern, we recommend that you create the buckets in your account regional namespace. By creating buckets in your account regional namespace, you avoid competing with other accounts for bucket names and have assurance that only your account can ever create buckets with your selected naming convention. For more information on account regional namespaces, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).

# General purpose bucket naming rules
<a name="bucketnamingrules"></a>

When you create a general purpose bucket, make sure that you consider the length, valid characters, formatting, and uniqueness of bucket names. The following sections provide information about general purpose bucket naming, including naming rules, best practices, an example of creating a bucket in your account regional namespace, and an example of creating a general purpose bucket with a name that includes a globally unique identifier (GUID).

For information about object key names, see [Creating object key names](https://docs.aws.amazon.com/en_us/AmazonS3/latest/userguide/object-keys.html).

To create a general purpose bucket, see [Creating a general purpose bucket](create-bucket-overview.md).

**Topics**
+ [General purpose buckets naming rules](#general-purpose-bucket-names)
+ [Account regional namespace naming rules](#account-regional-naming-rules)
+ [Example general purpose bucket names](#bucket-names)
+ [Best practices](#automatically-created-buckets)
+ [Creating a bucket that uses a GUID in the bucket name](#create-bucket-name-guid)
+ [Creating a bucket in your account regional namespace](#create-account-regional-naming)

## General purpose buckets naming rules
<a name="general-purpose-bucket-names"></a>

The following naming rules apply to general purpose buckets.
+ Bucket names must be between 3 (min) and 63 (max) characters long.
+ Bucket names can consist only of lowercase letters, numbers, periods (`.`), and hyphens (`-`).
+ Bucket names must begin and end with a letter or number.
+ Bucket names must not contain two adjacent periods.
+ Bucket names must not be formatted as an IP address (for example, `192.168.5.4`).
+ Bucket names must not start with the prefix `xn--`.
+ Bucket names must not start with the prefix `sthree-`.
+ Bucket names must not start with the prefix `amzn-s3-demo-`.
+ Bucket names must not end with the suffix `-s3alias`. This suffix is reserved for access point alias names. For more information, see [Access point aliases](access-points-naming.md#access-points-alias).
+ Bucket names must not end with the suffix `--ol-s3`. This suffix is reserved for Object Lambda Access Point alias names. For more information, see [How to use a bucket-style alias for your S3 bucket Object Lambda Access Point](olap-use.md#ol-access-points-alias).
+ Bucket names must not end with the suffix `.mrap`. This suffix is reserved for Multi-Region Access Point names. For more information, see [Rules for naming Amazon S3 Multi-Region Access Points](multi-region-access-point-naming.md).
+ Bucket names must not end with the suffix `--x-s3`. This suffix is reserved for directory buckets. For more information, see [Directory bucket naming rules](directory-bucket-naming-rules.md).
+ Bucket names must not end with the suffix `--table-s3`. This suffix is reserved for S3 Tables buckets. For more information, see [Amazon S3 table bucket, table, and namespace naming rules](s3-tables-buckets-naming.md).
+ Bucket names can only end with the suffix `-an` when you are creating buckets in your account regional namespace. For more information, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).
+ Buckets used with Amazon S3 Transfer Acceleration can't have periods (`.`) in their names. For more information about Transfer Acceleration, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md).
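
Taken together, the rules above can be expressed as a small validator. The following Python function is an illustrative sketch, not an AWS-provided API; it checks the general rules listed here but does not implement the namespace-specific `-an` suffix rule or the Transfer Acceleration restriction on periods:

```python
import re

RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap", "--x-s3", "--table-s3")

def is_valid_bucket_name(name: str) -> bool:
    """Check a candidate general purpose bucket name against the naming rules."""
    if not 3 <= len(name) <= 63:
        return False
    # Lowercase letters, numbers, periods, and hyphens only;
    # must begin and end with a letter or number.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if ".." in name:  # no two adjacent periods
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):  # not formatted as an IP address
        return False
    if name.startswith(RESERVED_PREFIXES) or name.endswith(RESERVED_SUFFIXES):
        return False
    return True

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("example..com"))       # False
```

Validating names before calling `CreateBucket` lets your application fail fast instead of discovering an invalid name through an API error.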

**Important**  
General purpose buckets exist in a global namespace, which means that each bucket name must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has four partitions: `aws` (Standard Regions), `aws-cn` (China Regions), `aws-us-gov` (AWS GovCloud (US)), and `aws-eusc` (European Sovereign Cloud). After you create a general purpose bucket in the shared global namespace, no other account can create a bucket with that name within the partition. When a bucket owner deletes their bucket, the bucket name becomes available again in the global namespace for anyone to re-create.
A bucket name in the shared global namespace can't be used by another AWS account in the same partition until the bucket is deleted. **After you delete a bucket in the shared global namespace, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and can therefore potentially receive requests intended for the deleted bucket.** If you want to prevent this, or if you want to continue to use the same bucket name, don't delete the bucket. We recommend that you empty the bucket and keep it, and instead, block any bucket requests as needed. For buckets no longer in active use, we recommend emptying the bucket of all objects to minimize costs while retaining the bucket itself.
We recommend creating buckets in your account regional namespace for assurance that only your account can ever own these bucket names.
When you create a general purpose bucket, you choose its name and the AWS Region to create it in. After you create a general purpose bucket, you can't change its name or Region. 
Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.

**Note**  
Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names that were up to 255 characters long and included uppercase letters and underscores. Beginning March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in all other Regions.

## Account regional namespace naming rules
<a name="account-regional-naming-rules"></a>

Although Amazon S3 general purpose buckets exist in a shared global namespace, you can optionally create buckets in your account regional namespace. The account regional namespace is a reserved subdivision of the global bucket namespace where only your account can create general purpose buckets. New general purpose buckets created in your account regional namespace are unique to your account and can never be re-created by another account. These buckets support all the S3 features and AWS services that general purpose buckets in the shared global namespace already support, so your applications require no changes to interact with buckets in your account regional namespace.

General purpose buckets in your account regional namespace must follow a specific naming convention. These bucket names consist of a bucket name prefix that you choose, followed by a suffix that contains your 12-digit AWS account ID and the AWS Region code and that ends with `-an`.

```
bucket-name-prefix-accountId-region-an
```

For example, the following general purpose bucket exists in the account regional namespace for AWS account 111122223333 in the us-west-2 Region:

```
amzn-s3-demo-bucket-111122223333-us-west-2-an
```

To create a bucket in your account regional namespace, you make a `CreateBucket` request and specify the `x-amz-bucket-namespace` request header with the value set to `account-regional`, along with a bucket name in the account regional namespace format: `customer-chosen-name-AWS-Account-ID-AWS-Region-an`. For example, you could create a bucket named `amzn-s3-demo-bucket-111122223333-us-east-1-an`, where your account regional suffix is `-111122223333-us-east-1-an`. For more information about account regional namespaces, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).
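
The naming convention amounts to a simple string composition. The following Python helper is an illustrative sketch (not an AWS API); it builds the name from its three parts and enforces the 63-character limit:

```python
def account_regional_bucket_name(name_prefix: str, account_id: str, region: str) -> str:
    """Compose an account regional bucket name:
    <prefix>-<12-digit-account-id>-<region-code>-an."""
    name = f"{name_prefix}-{account_id}-{region}-an"
    if len(name) > 63:
        raise ValueError(f"bucket name exceeds 63 characters: {name}")
    return name

print(account_regional_bucket_name("amzn-s3-demo-bucket", "111122223333", "us-east-1"))
# amzn-s3-demo-bucket-111122223333-us-east-1-an
```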

## Example general purpose bucket names
<a name="bucket-names"></a>

The following bucket names show examples of which characters are allowed in general purpose bucket names: a-z, 0-9, and hyphens (`-`). The `amzn-s3-demo-` reserved prefix is used here only for illustration. Because it's a reserved prefix, you can't create bucket names that start with `amzn-s3-demo-`.
+ `amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111`
+ `amzn-s3-demo-bucket`

The following examples show bucket names in your account regional namespace. These buckets must adhere to the account regional namespace naming convention: `customer-chosen-name-AWS-Account-ID-AWS-Region-an`
+ `amzn-s3-demo-bucket-111122223333-us-west-2-an`
+ `amzn-s3-demo-bucket-012345678910-ap-southeast-2-an`

The following example bucket names are valid but not recommended for uses other than static website hosting because they contain periods (`.`):
+ `example.com`
+ `www.example.com`
+ `my.example.s3.bucket`

The following example bucket names are *not* valid:
+ `amzn_s3_demo_bucket` (contains underscores)
+ `AmznS3DemoBucket` (contains uppercase letters)
+ `amzn-s3-demo-bucket-` (starts with the reserved `amzn-s3-demo-` prefix and ends with a hyphen)
+ `example..com` (contains two periods in a row)
+ `192.168.5.4` (matches format of an IP address)

## Best practices
<a name="automatically-created-buckets"></a>

When naming your general purpose buckets, consider the following bucket naming best practices.

**Create buckets in your account regional namespace**  
We recommend creating buckets in your account regional namespace for assurance that only your account can ever own these bucket names. With account regional namespaces, you can create predictable bucket names across multiple AWS Regions with assurance that no other account can create bucket names in your namespace.

**Choose a bucket naming scheme that's unlikely to cause naming conflicts**  
If your application automatically creates buckets, choose a bucket naming scheme that's unlikely to cause naming conflicts. Ensure that your application logic will choose a different bucket name if a bucket name is already taken.

**Append globally unique identifiers (GUIDs) to bucket names**  
We recommend that you create bucket names that aren't predictable. Don't write code assuming your chosen bucket name is available unless you have already created the bucket. One method for creating bucket names that aren't predictable is to append a Globally Unique Identifier (GUID) to your bucket name, for example, `amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111`. For more information, see [Creating a bucket that uses a GUID in the bucket name](#create-bucket-name-guid).

**Avoid using periods (`.`) in bucket names**  
For best compatibility, we recommend that you avoid using periods (`.`) in bucket names, except for buckets that are used only for static website hosting. If you include periods in a bucket's name, you can't use virtual-host-style addressing over HTTPS, unless you perform your own certificate validation. The security certificates used for virtual hosting of buckets don't work for buckets with periods in their names. 

This limitation doesn't affect buckets used for static website hosting, because static website hosting is available only over HTTP. For more information about virtual-host-style addressing, see [Virtual hosting of general purpose buckets](VirtualHosting.md). For more information about static website hosting, see [Hosting a static website using Amazon S3](WebsiteHosting.md).

**Choose a relevant name**  
When you name a bucket, we recommend that you choose a name that's relevant to you or your business. Avoid using names associated with others. For example, avoid using `AWS` or `Amazon` in your bucket name.

**Don't delete buckets so that you can reuse bucket names**  
If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse. However, you aren't guaranteed to be able to reuse the name right away, or at all. After you delete a bucket in the shared global namespace, some time might pass before you can reuse the name. In addition, another AWS account might create a bucket with the same name before you can reuse the name. 

**After you delete a general purpose bucket in the shared global namespace, be aware that another AWS account in the same partition can use the same general purpose bucket name for a new bucket and can therefore potentially receive requests intended for the deleted general purpose bucket.** If you want to prevent this, or if you want to continue to use the same general purpose bucket name, don't delete the general purpose bucket. We recommend that you empty the bucket and keep it, and instead, block any bucket requests as needed.

## Creating a bucket that uses a GUID in the bucket name
<a name="create-bucket-name-guid"></a>

The following examples show you how to create a general purpose bucket that uses a GUID at the end of the bucket name.

### Using the AWS CLI
<a name="guid-cli-bucket-name"></a>

The following AWS CLI example creates a general purpose bucket in the US West (N. California) Region (`us-west-1`) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the `user input placeholders` with your own information.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
```

### Using the AWS SDK for Java
<a name="guid-sdk-bucket-name"></a>

The following example shows you how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region (`us-east-1`) by using the AWS SDK for Java. To use this example, replace the `user input placeholders` with your own information. For information about other AWS SDKs, see [Tools to Build on AWS](https://aws.amazon.com/developer/tools/).

```
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;

import java.util.UUID;

public class CreateBucketWithUUID {
    public static void main(String[] args) {
        final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build();
        // Append a random UUID (hyphens removed) so that the bucket name is hard to predict.
        String bucketName = "amzn-s3-demo-bucket" + UUID.randomUUID().toString().replace("-", "");
        CreateBucketRequest createRequest = new CreateBucketRequest(bucketName);
        System.out.println(bucketName);
        s3.createBucket(createRequest);
    }
}
```

## Creating a bucket in your account regional namespace
<a name="create-account-regional-naming"></a>

The following examples show you how to create a general purpose bucket in your account regional namespace.

### Using the AWS CLI
<a name="account-regional-cli-naming"></a>

The following AWS CLI example creates a general purpose bucket in the account regional namespace for AWS account 012345678910 in the US West (N. California) Region (`us-west-1`). To use this example command, replace the `user input placeholders` with your own information.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket-012345678910-us-west-1-an \
    --bucket-namespace account-regional \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
```

# General purpose bucket quotas, limitations, and restrictions
<a name="BucketRestrictions"></a>

An Amazon S3 general purpose bucket is owned by the AWS account that created it. Bucket ownership is not transferable to another account.

## Bucket quotas
<a name="bucket-quota-limits"></a>

By default, you can create up to 10,000 general purpose buckets per AWS account. To request a quota increase for general purpose buckets, visit the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home/services/s3/quotas/). 

**Important**  
We strongly recommend using only paginated `ListBuckets` requests. Unpaginated `ListBuckets` requests are only supported for AWS accounts set to the default general purpose bucket quota of 10,000. If you have an approved general purpose bucket quota above 10,000, you must send paginated `ListBuckets` requests to list your account’s buckets. All unpaginated `ListBuckets` requests will be rejected for AWS accounts with a general purpose bucket quota greater than 10,000. 

**Note**  
You must use the following AWS Regions to view your quota and bucket utilization, or to request an increase, for the general purpose buckets in your AWS account.  
General purpose bucket quotas for commercial Regions can only be viewed and managed from US East (N. Virginia).
General purpose bucket quotas for AWS GovCloud (US) can only be viewed and managed from AWS GovCloud (US-West).

For information about service quotas, see [AWS service quotas](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html) in the *Amazon Web Services General Reference*. 

## Objects and bucket limitations
<a name="object-bucket-limitations"></a>

There is no maximum bucket size and no limit to the number of objects that you can store in a bucket. You can store all of your objects in a single bucket, or you can organize them across several buckets. However, you can't create a bucket from within another bucket.

## Bucket naming rules
<a name="bucket-naming-limits"></a>

When you create a bucket, you choose its name and the AWS Region to create it in. After you create a bucket, you can't change its name or Region. For more information about bucket naming, see [General purpose bucket naming rules](bucketnamingrules.md).

# Accessing an Amazon S3 general purpose bucket
<a name="access-bucket-intro"></a>

You can access your Amazon S3 general purpose buckets by using the Amazon S3 console, AWS Command Line Interface, AWS SDKs, or the Amazon S3 REST API. Each method of accessing an S3 general purpose bucket supports specific use cases. For more information, see the following sections.

**Topics**
+ [Use cases](#accessing-use-cases)
+ [Amazon S3 console](#accessing-aws-management-console)
+ [AWS CLI](#accessing-aws-cli)
+ [AWS SDKs](#accessing-aws-sdks)
+ [Amazon S3 REST API](#AccessingUsingRESTAPI)
+ [Virtual hosting of general purpose buckets](VirtualHosting.md)

## Use cases
<a name="accessing-use-cases"></a>

Depending on the use case for your Amazon S3 general purpose bucket, there are different recommended methods to access the underlying data in your buckets. The following list includes common use cases for accessing your data. 
+ **Static websites** – You can use Amazon S3 to host a static website. In this use case, you can configure your S3 general purpose bucket to function like a website. For an example that walks you through the steps of hosting a website on Amazon S3, see [Tutorial: Configuring a static website on Amazon S3](HostingWebsiteOnS3Setup.md). 

  To host a static website with security settings like Block Public Access enabled, we recommend using Amazon CloudFront with Origin Access Control (OAC) and enforcing HTTPS along with additional security headers. For more information, see [Getting started with a secure static website](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html).
**Note**  
Amazon S3 supports both [virtual-hosted–style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#virtual-hosted-style-access) and [path-style URLs](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html#path-style-access) for static website access. Because buckets can be accessed using path-style and virtual-hosted–style URLs, we recommend that you create buckets with DNS-compliant bucket names. For more information, see [General purpose bucket quotas, limitations, and restrictions](BucketRestrictions.md).
+ **Shared datasets** – As you scale on Amazon S3, it's common to adopt a multi-tenant model, where you assign different end customers or business units to unique prefixes within a shared general purpose bucket. By using [Amazon S3 access points](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points.html), you can divide one large bucket policy into separate, discrete access point policies for each application that needs to access the shared dataset. This approach makes it simpler to focus on building the right access policy for an application without disrupting what any other application is doing within the shared dataset. For more information, see [Managing access to shared datasets with access points](access-points.md).
+ **High-throughput workloads** – Mountpoint for Amazon S3 is a high-throughput open source file client for mounting an Amazon S3 general purpose bucket as a local file system. With Mountpoint, your applications can access objects stored in Amazon S3 through file-system operations, such as open and read. Mountpoint automatically translates these operations into S3 object API calls, giving your applications access to the elastic storage and throughput of Amazon S3 through a file interface. For more information, see [Mount an Amazon S3 bucket as a local file system](mountpoint.md). 
+ **Multi-Region applications** – Amazon S3 Multi-Region Access Points provide a global endpoint that applications can use to fulfill requests from S3 general purpose buckets that are located in multiple AWS Regions. You can use Multi-Region Access Points to build multi-Region applications with the same architecture that's used in a single Region, and then run those applications anywhere in the world. Instead of sending requests over the public internet, Multi-Region Access Points provide built-in network resilience with acceleration of internet-based requests to Amazon S3. For more information, see [Managing multi-Region traffic with Multi-Region Access Points](MultiRegionAccessPoints.md).
+ **Secure Shell (SSH) File Transfer Protocol (SFTP)** – If you’re trying to securely transfer sensitive data over the internet, you can use an SFTP-enabled server with your Amazon S3 general purpose bucket. SFTP is a network protocol that supports the full security and authentication functionality of SSH. With this protocol, you have fine-grained control over user identities, permissions, and keys, or you can use IAM policies to manage access. To associate an SFTP-enabled server with your Amazon S3 bucket, create your SFTP-enabled server first. Then, set up user accounts, and associate the server with an Amazon S3 general purpose bucket. For a walkthrough of this process, see [AWS Transfer for SFTP – Fully Managed SFTP Service for Amazon S3](https://aws.amazon.com/blogs/aws/new-aws-transfer-for-sftp-fully-managed-sftp-service-for-amazon-s3/) in *AWS Blogs*.

## Amazon S3 console
<a name="accessing-aws-management-console"></a>

The console is a web-based user interface for managing Amazon S3 and AWS resources. With the Amazon S3 console, you can easily access a bucket and modify the bucket's properties. You can also perform most bucket operations by using the console UI, without having to write any code.

If you've signed up for an AWS account, you can access the Amazon S3 console by signing in to the AWS Management Console and choosing **S3** from the console home page. You can also use the following link to access the Amazon S3 console directly: [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

## AWS CLI
<a name="accessing-aws-cli"></a>

You can use the AWS CLI to issue commands or build scripts at your system's command line to perform AWS tasks, including Amazon S3 tasks. For example, if you need to access multiple buckets, you can save time by using the AWS CLI to automate common and repetitive tasks. Scriptability and repeatability for common actions are frequent considerations as organizations scale. 

The [AWS CLI](https://aws.amazon.com/cli/) provides commands for a broad set of AWS services. The AWS CLI is supported on Windows, macOS, and Linux. To get started, see the [AWS Command Line Interface User Guide](https://docs.aws.amazon.com/cli/latest/userguide/). For more information about the commands for Amazon S3, see [s3api](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/index.html) and [s3control](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3control/index.html) in the *AWS CLI Command Reference*.

## AWS SDKs
<a name="accessing-aws-sdks"></a>

AWS provides SDKs (software development kits) that consist of libraries and sample code for various programming languages and platforms (Java, Python, Ruby, .NET, iOS, Android, and so on). The AWS SDKs provide a convenient way to access Amazon S3 and other AWS services programmatically. Amazon S3 is a REST service. You can send requests to Amazon S3 using the AWS SDK libraries, which wrap the underlying Amazon S3 REST API and simplify your programming tasks. For example, the SDKs take care of tasks such as calculating signatures, cryptographically signing requests, managing errors, and retrying requests automatically. For information about the AWS SDKs, including how to download and install them, see [Tools for AWS](https://aws.amazon.com/tools/).

Every interaction with Amazon S3 is either authenticated or anonymous. If you are using the AWS SDKs, the libraries compute the signature for authentication from the keys that you provide. For more information about how to make requests to Amazon S3, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html).

## Amazon S3 REST API
<a name="AccessingUsingRESTAPI"></a>

The architecture of Amazon S3 is designed to be programming language-neutral, using AWS-supported interfaces to store and retrieve objects. You can access S3 and AWS programmatically by using the Amazon S3 REST API. The REST API is an HTTP interface to Amazon S3. With the REST API, you use standard HTTP requests to create, fetch, and delete buckets and objects.

To use the REST API, you can use any toolkit that supports HTTP. You can even use a browser to fetch objects, as long as they are anonymously readable.

The REST API uses standard HTTP headers and status codes, so that standard browsers and toolkits work as expected. In some areas, we have added functionality to HTTP (for example, we added headers to support access control). In these cases, we have done our best to add the new functionality in a way that matches the style of standard HTTP usage.

If you make direct REST API calls in your application, you must write the code to compute the signature and add it to the request. For more information about how to make requests to Amazon S3, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*.

# Virtual hosting of general purpose buckets
<a name="VirtualHosting"></a>

Virtual hosting is the practice of serving multiple websites from a single web server. One way to differentiate sites in your Amazon S3 REST API requests is by using the apparent hostname of the Request-URI instead of just the path name part of the URI. An ordinary Amazon S3 REST request specifies a bucket by using the first slash-delimited component of the Request-URI path. Instead, you can use Amazon S3 virtual hosting to address a general purpose bucket in a REST API call by using the HTTP `Host` header. In practice, Amazon S3 interprets `Host` as meaning that most buckets are automatically accessible for limited types of requests at `https://bucket-name.s3.region-code.amazonaws.com`. For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

Virtual hosting also has other benefits. By naming your bucket after your registered domain name and by making that name a DNS alias for Amazon S3, you can completely customize the URL of your Amazon S3 resources, for example, `http://my.bucket-name.com/`. You can also publish to the "root directory" of your bucket's virtual server. This ability can be important because many existing applications search for files in this standard location. For example, `favicon.ico`, `robots.txt`, and `crossdomain.xml` are all expected to be found at the root. 

**Important**  
When you're using virtual-hosted–style general purpose buckets with SSL, the SSL wildcard certificate matches only buckets that don't contain dots (`.`). To work around this limitation, use HTTP or write your own certificate-verification logic. For more information, see [Amazon S3 Path Deprecation Plan](https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/) on the *AWS News Blog*.

**Topics**
+ [Path-style requests](#path-style-access)
+ [Virtual-hosted–style requests](#virtual-hosted-style-access)
+ [HTTP `Host` header bucket specification](#VirtualHostingSpecifyBucket)
+ [Examples](#VirtualHostingExamples)
+ [Customizing Amazon S3 URLs with CNAME records](#VirtualHostingCustomURLs)
+ [How to associate a hostname with an Amazon S3 bucket](#VirtualHostingCustomURLsHowTo)
+ [Limitations](#VirtualHostingLimitations)
+ [Backward compatibility](#VirtualHostingBackwardsCompatibility)

## Path-style requests
<a name="path-style-access"></a>

Currently, Amazon S3 supports both virtual-hosted–style and path-style URL access in all AWS Regions. However, path-style URLs will be discontinued in the future. For more information, see the following **Important** note.

In Amazon S3, path-style URLs use the following format:

```
https://s3.region-code.amazonaws.com/bucket-name/key-name
```

For example, if you create a bucket named `amzn-s3-demo-bucket1` in the US West (Oregon) Region, and you want to access the `puppy.jpg` object in that bucket, you can use the following path-style URL:

```
https://s3.us-west-2.amazonaws.com/amzn-s3-demo-bucket1/puppy.jpg
```

**Important**  
Update (September 23, 2020) – To make sure that customers have the time that they need to transition to virtual-hosted–style URLs, we have decided to delay the deprecation of path-style URLs. For more information, see [Amazon S3 Path Deprecation Plan – The Rest of the Story](https://aws.amazon.com/blogs/aws/amazon-s3-path-deprecation-plan-the-rest-of-the-story/) in the *AWS News Blog*.

**Warning**  
When hosting website content that will be accessed from a web browser, avoid using path-style URLs, which might interfere with the browser's same-origin security model. To host website content, we recommend that you use either S3 website endpoints or a CloudFront distribution. For more information, see [Website endpoints](WebsiteEndpoints.md) and [Deploy a React-based single-page application to Amazon S3 and CloudFront](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) in the *AWS Prescriptive Guidance Patterns*.

## Virtual-hosted–style requests
<a name="virtual-hosted-style-access"></a>

In a virtual-hosted–style URI, the bucket name is part of the domain name in the URL.

Amazon S3 virtual-hosted–style URLs use the following format:

```
https://bucket-name.s3.region-code.amazonaws.com/key-name
```

In this example, `amzn-s3-demo-bucket1` is the bucket name, US West (Oregon) is the Region, and `puppy.png` is the key name:

```
https://amzn-s3-demo-bucket1.s3.us-west-2.amazonaws.com/puppy.png
```

## HTTP `Host` header bucket specification
<a name="VirtualHostingSpecifyBucket"></a>

As long as your `GET` request does not use the SSL endpoint, you can specify the bucket for the request by using the HTTP `Host` header. The `Host` header in a REST request is interpreted as follows: 
+ If the `Host` header is omitted or its value is `s3.region-code.amazonaws.com`, the bucket for the request will be the first slash-delimited component of the Request-URI, and the key for the request will be the rest of the Request-URI. This is the ordinary method, as illustrated by the first and second examples in this section. Omitting the `Host` header is valid only for HTTP 1.0 requests. 
+ Otherwise, if the value of the `Host` header ends in `.s3.region-code.amazonaws.com`, the bucket name is the leading component of the `Host` header's value up to `.s3.region-code.amazonaws.com`. The key for the request is the Request-URI. This interpretation exposes buckets as subdomains of `.s3.region-code.amazonaws.com`, as illustrated by the third and fourth examples in this section. 
+ Otherwise, the bucket for the request is the lowercase value of the `Host` header, and the key for the request is the Request-URI. This interpretation is useful when you have registered the same DNS name as your bucket name and have configured that name to be a canonical name (CNAME) alias for Amazon S3. The procedure for registering domain names and configuring CNAME DNS records is beyond the scope of this guide, but the result is illustrated by the final example in this section.

## Examples
<a name="VirtualHostingExamples"></a>

This section provides example URLs and requests.

**Example – Path-style URLs and requests**  
This example uses the following:  
+ **Bucket name** – `example.com`
+ **Region** – US East (N. Virginia)
+ **Key name** – `homepage.html`
The URL is as follows:  

```
http://s3.us-east-1.amazonaws.com/example.com/homepage.html
```
The request is as follows:  

```
GET /example.com/homepage.html HTTP/1.1
Host: s3.us-east-1.amazonaws.com
```
The request with HTTP 1.0 and omitting the `Host` header is as follows:  

```
GET /example.com/homepage.html HTTP/1.0
```

For information about DNS-compatible names, see [Limitations](#VirtualHostingLimitations). For more information about keys, see [Keys](Welcome.md#BasicsKeys).

**Example – Virtual-hosted–style URLs and requests**  
This example uses the following:  
+ **Bucket name** – `amzn-s3-demo-bucket1`
+ **Region** – Europe (Ireland)
+ **Key name** – `homepage.html`
The URL is as follows:  

```
http://amzn-s3-demo-bucket1.s3.eu-west-1.amazonaws.com/homepage.html
```
The request is as follows:  

```
GET /homepage.html HTTP/1.1
Host: amzn-s3-demo-bucket1.s3.eu-west-1.amazonaws.com
```

**Example – CNAME alias method**  
To use this method, you must configure your DNS name as a CNAME alias for `bucket-name.s3.us-east-1.amazonaws.com`. For more information, see [Customizing Amazon S3 URLs with CNAME records](#VirtualHostingCustomURLs).   
This example uses the following:  
+ **Bucket name** – `example.com`
+ **Key name** – `homepage.html`
The URL is as follows:  

```
http://www.example.com/homepage.html
```
The request is as follows:  

```
GET /homepage.html HTTP/1.1
Host: www.example.com
```

## Customizing Amazon S3 URLs with CNAME records
<a name="VirtualHostingCustomURLs"></a>

Depending on your needs, you might not want `s3.region-code.amazonaws.com` to appear on your website or service. For example, if you're hosting website images on Amazon S3, you might prefer `http://images.example.com/` instead of `http://images.example.com.s3.us-east-1.amazonaws.com/`. Any bucket with a DNS-compatible name can be referenced as follows: `http://bucket-name.s3.region-code.amazonaws.com/key-name`, for example, `http://images.example.com.s3.us-east-1.amazonaws.com/mydog.jpg`. By using a CNAME record, you can map `images.example.com` to an Amazon S3 hostname so that the previous URL becomes `http://images.example.com/mydog.jpg`. 

Your bucket name must be the same as the CNAME. For example, if you create a CNAME record to map `images.example.com` to `images.example.com.s3.us-east-1.amazonaws.com`, both `http://images.example.com/filename` and `http://images.example.com.s3.us-east-1.amazonaws.com/filename` will return the same object.

The CNAME DNS record should alias your domain name to the appropriate virtual-hosted–style hostname. For example, if your bucket name and domain name are `images.example.com` and your bucket is in the US East (N. Virginia) Region, the CNAME record should alias to `images.example.com.s3.us-east-1.amazonaws.com`. 

```
images.example.com CNAME images.example.com.s3.us-east-1.amazonaws.com.
```

Amazon S3 uses the hostname to determine the bucket name. So the CNAME and the bucket name must be the same. For example, suppose that you have configured `www.example.com` as a CNAME for `www.example.com.s3.us-east-1.amazonaws.com`. When you access `http://www.example.com`, Amazon S3 receives a request similar to the following:

**Example**  

```
GET / HTTP/1.1
Host: www.example.com
Date: date
Authorization: signatureValue
```

Amazon S3 sees only the original hostname `www.example.com` and is unaware of the CNAME mapping used to resolve the request. 

You can use any Amazon S3 endpoint in a CNAME alias. For example, `s3.ap-southeast-1.amazonaws.com` can be used in CNAME aliases. For more information about endpoints, see [Request endpoints](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTAPI.html) in the *Amazon S3 API Reference*. To create a static website by using a custom domain, see [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md).

**Important**  
When using custom URLs with CNAME records, you must ensure that a matching bucket exists for any CNAME or alias record that you configure. For example, if you create DNS entries for `www.example.com` and `login.example.com` to publish web content by using S3, you must create both buckets `www.example.com` and `login.example.com`.  
When a CNAME or alias record is configured to point to an S3 endpoint without a matching bucket, any AWS user can create that bucket and publish content under the configured alias, even though they don't own the domain.  
For the same reason, we recommend that you change or remove the corresponding CNAME or alias record when deleting a bucket.

## How to associate a hostname with an Amazon S3 bucket
<a name="VirtualHostingCustomURLsHowTo"></a>

**To associate a hostname with an Amazon S3 bucket by using a CNAME alias**

1. Select a hostname that belongs to a domain that you control. 

   This example uses the `images` subdomain of the `example.com` domain.

1. Create a bucket that matches the hostname. 

   In this example, the host and bucket names are `images.example.com`. The bucket name must *exactly* match the hostname. 

1. Create a CNAME DNS record that defines the hostname as an alias for the Amazon S3 bucket. 

   For example:

   `images.example.com CNAME images.example.com.s3.us-west-2.amazonaws.com`
**Important**  
For request-routing reasons, the CNAME DNS record must be defined exactly as shown in the preceding example. Otherwise, it might appear to operate correctly, but it will eventually result in unpredictable behavior.

   The procedure for configuring CNAME DNS records depends on your DNS server or DNS provider. For specific information, see your server documentation or contact your provider.

## Limitations
<a name="VirtualHostingLimitations"></a>

 SOAP APIs for Amazon S3 are not available for new customers, and are approaching End of Life (EOL) on August 31, 2025. We recommend that you use either the REST API or the AWS SDKs. 

## Backward compatibility
<a name="VirtualHostingBackwardsCompatibility"></a>

The following sections cover various aspects of Amazon S3 backward compatibility that relate to path-style and virtual-hosted–style URL requests.

### Legacy endpoints
<a name="s3-legacy-endpoints"></a>

Some Regions support legacy endpoints. You might see these endpoints in your server access logs or AWS CloudTrail logs. For more information, review the following information. For a complete list of Amazon S3 Regions and endpoints, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *Amazon Web Services General Reference*.

**Important**  
Although you might see legacy endpoints in your logs, we recommend that you always use the standard endpoint syntax to access your buckets.   
Amazon S3 virtual-hosted–style URLs use the following format:  

```
https://bucket-name.s3.region-code.amazonaws.com/key-name
```
In Amazon S3, path-style URLs use the following format:  

```
https://s3.region-code.amazonaws.com/bucket-name/key-name
```

#### s3-Region
<a name="s3-dash-region"></a>

Some older Amazon S3 Regions support endpoints that contain a dash (`-`) between `s3` and the Region code (for example, `s3-us-west-2`), instead of a dot (for example, `s3.us-west-2`). If your bucket is in one of these Regions, you might see the following endpoint format in your server access logs or CloudTrail logs:

```
https://bucket-name.s3-region-code.amazonaws.com
```

In this example, the bucket name is `amzn-s3-demo-bucket1` and the Region is US West (Oregon):

```
https://amzn-s3-demo-bucket1.s3-us-west-2.amazonaws.com
```

#### Legacy global endpoint
<a name="deprecated-global-endpoint"></a>

For some Regions, you can use the legacy global endpoint to construct requests that do not specify a Region-specific endpoint. The legacy global endpoint is as follows:

```
bucket-name.s3.amazonaws.com
```

In your server access logs or CloudTrail logs, you might see requests that use the legacy global endpoint. In this example, the bucket name is `amzn-s3-demo-bucket1` and the legacy global endpoint is: 

```
https://amzn-s3-demo-bucket1.s3.amazonaws.com
```

**Virtual-hosted–style requests for US East (N. Virginia)**  
Requests made with the legacy global endpoint go to the US East (N. Virginia) Region by default. Therefore, the legacy global endpoint is sometimes used in place of the Regional endpoint for US East (N. Virginia). If you create a bucket in US East (N. Virginia) and use the global endpoint, Amazon S3 routes your request to this Region by default. 

**Virtual-hosted–style requests for other Regions**  
The legacy global endpoint is also used for virtual-hosted–style requests in other supported Regions. If you create a bucket in a Region that was launched before March 20, 2019, and use the legacy global endpoint, Amazon S3 updates the DNS record to reroute the request to the correct location, which might take time. In the meantime, the default rule applies, and your virtual-hosted–style request goes to the US East (N. Virginia) Region. Amazon S3 then redirects it with an HTTP 307 Temporary Redirect to the correct Region. 

For S3 buckets in Regions launched after March 20, 2019, the DNS server doesn't route your request directly to the AWS Region where your bucket resides. It returns an HTTP 400 Bad Request error instead. For more information, see [Making requests ](https://docs.aws.amazon.com/AmazonS3/latest/API/MakingRequests.html) in the *Amazon S3 API Reference*.

**Path-style requests**  
For the US East (N. Virginia) Region, you can use the legacy global endpoint for path-style requests. 

For all other Regions, the path-style syntax requires that you use the Region-specific endpoint when attempting to access a bucket. If you try to access a bucket with the legacy global endpoint or another endpoint that is different from the one for the Region where the bucket resides, you receive an HTTP `301 Moved Permanently` error and a message that indicates the correct URI for your resource. For example, if you use `https://s3.amazonaws.com/bucket-name` for a bucket that was created in the US West (Oregon) Region, you will receive an HTTP `301 Moved Permanently` error.

# Creating a general purpose bucket
<a name="create-bucket-overview"></a>

To upload your data to Amazon S3, you must first create an Amazon S3 general purpose bucket in one of the AWS Regions. The AWS account that creates the bucket owns it. When you create a bucket, you must choose a bucket name and Region. During the creation process, you can optionally choose other storage management options for the bucket.

**Important**  
After you create a bucket, you can't change the bucket name, the bucket owner, or the Region. For more information about bucket naming, see [General purpose bucket naming rules](bucketnamingrules.md). 

By default, you can create up to 10,000 general purpose buckets per AWS account. To request a quota increase for general purpose buckets, visit the [Service Quotas console](https://console.aws.amazon.com/servicequotas/home/services/s3/quotas/). 

You can store any number of objects in a bucket. For a list of restrictions and limitations related to Amazon S3 general purpose buckets, see [General purpose bucket quotas, limitations, and restrictions](BucketRestrictions.md).

## General purpose bucket settings
<a name="create-bucket-settings"></a>

When you're creating a general purpose bucket, you must decide whether to create it in the shared global namespace or in your account regional namespace. This decision, along with the bucket name and Region, can't be changed after creation.

When you're creating a general purpose bucket, you can use the following settings to control various aspects of your bucket's behavior:
+ **S3 Bucket Namespace** – By default, Amazon S3 general purpose buckets exist in a global namespace. When you create a general purpose bucket, you can choose to create a bucket in the shared global namespace or in your account regional namespace. Your account regional namespace is a subdivision of the global namespace that only your account can create buckets in. New general purpose buckets created in your account regional namespace are unique to your account and can never be re-created by another account. These buckets support all the S3 features and AWS services that general purpose buckets in the shared global namespace already support, so your applications require no changes to interact with buckets in your account regional namespace. For more information on bucket namespaces, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).
+ **S3 Object Ownership** – S3 Object Ownership is an Amazon S3 bucket-level setting that you can use both to control ownership of objects that are uploaded to your bucket and to disable or enable access control lists (ACLs). By default, Object Ownership is set to the Bucket owner enforced setting, and all ACLs are disabled. With ACLs disabled, the bucket owner owns every object in the bucket and manages access to data exclusively by using policies. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).
+ **S3 Object Lock** – S3 Object Lock can help prevent Amazon S3 objects from being deleted or overwritten for a fixed amount of time or indefinitely. Object Lock uses a *write-once-read-many* (WORM) model to store objects. You can use Object Lock to help meet regulatory requirements that require WORM storage, or to add another layer of protection against object changes or deletion. For more information, see [Locking objects with Object Lock](object-lock.md).
+ **Tags** – An AWS tag is a key-value pair that holds metadata. You attach the tags to your Amazon S3 resources, such as buckets. You can tag resources when you create them or manage tags on existing resources. You can use tags for cost allocation to track storage costs by bucket tag in AWS Billing and Cost Management. You can also use tags for attribute-based access control (ABAC), to scale access permissions and grant access to S3 resources based on their tags. For more information, see [Using tags with S3 general purpose buckets](buckets-tagging.md).

After you create a general purpose bucket, or when you're creating a general purpose bucket by using the Amazon S3 console, you can also use the following settings to control other aspects of your bucket's behavior: 
+ **S3 Block Public Access** – The S3 Block Public Access feature provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects don't allow public access. However, users can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources. For more information, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
+ **S3 Versioning** – Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your bucket. With versioning, you can easily recover from both unintended user actions and application failures. By default, versioning is disabled for buckets. For more information, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).
+ **Default encryption** – You can set the default encryption type for all objects in your bucket. Server-side encryption with Amazon S3 managed keys (SSE-S3) is the base level of encryption configuration for every bucket in Amazon S3. All new objects uploaded to an S3 bucket are automatically encrypted with SSE-S3 as the base level of encryption. If you want to use a different type of default encryption, you can specify server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS), dual-layer server-side encryption with AWS KMS keys (DSSE-KMS), or server-side encryption with customer-provided keys (SSE-C) to encrypt your data. For more information, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md).

You can use the Amazon S3 console, Amazon S3 REST API, AWS Command Line Interface (AWS CLI), or AWS SDKs to create a general purpose bucket. For more information about the permissions required to create a general purpose bucket, see [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucket.html) in the *Amazon Simple Storage Service API Reference*.

If you're having trouble creating an Amazon S3 bucket, see [How do I troubleshoot errors when creating an Amazon S3 bucket?](https://repost.aws/knowledge-center/s3-create-bucket-errors) on AWS re:Post. 

## Using the S3 console
<a name="create-bucket"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the navigation bar on the top of the page, choose the name of the currently displayed AWS Region. Next, choose the Region in which you want to create a bucket. 
**Note**  
After you create a bucket, you can't change its Region. 
To minimize latency and costs and address regulatory requirements, choose a Region close to you. Objects stored in a Region never leave that Region unless you explicitly transfer them to another Region. For a list of Amazon S3 AWS Regions, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region) in the *Amazon Web Services General Reference*.

1. In the left navigation pane, choose **General purpose buckets**.

1. Choose **Create bucket**. The **Create bucket** page opens.

1. For **Bucket name**, enter a name for your bucket.

   The bucket name must:
   + Be unique within a partition. A partition is a grouping of Regions. AWS currently has three partitions: `aws` (commercial Regions), `aws-cn` (China Regions), and `aws-us-gov` (AWS GovCloud (US) Regions).
   + Be between 3 and 63 characters long.
   + Consist only of lowercase letters, numbers, periods (`.`), and hyphens (`-`). For best compatibility, we recommend that you avoid using periods (`.`) in bucket names, except for buckets that are used only for static website hosting.
   + Begin and end with a letter or number. 
   + For a complete list of bucket-naming rules, see [General purpose bucket naming rules](bucketnamingrules.md).
**Important**  
After you create the bucket, you can't change its name. 
Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket.
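The basic rules above can be expressed as a short validation sketch. This hypothetical helper covers only the length, character-set, and first/last-character rules listed here; the complete naming rules include additional restrictions (for example, no IP-address-formatted names) that it doesn't check:

```python
import re

# Hypothetical validator for the basic rules only: 3-63 characters;
# lowercase letters, numbers, periods, and hyphens; must begin and end
# with a letter or number. Not a complete implementation of the full
# bucket-naming rules.
_BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    return _BUCKET_NAME.match(name) is not None
```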

1. (Optional) Under **General configuration**, you can choose to copy an existing bucket's settings to your new bucket. If you don't want to copy the settings of an existing bucket, skip to the next step.
**Note**  
This option:  
+ Isn't available in the AWS CLI and is only available in the Amazon S3 console.
+ Doesn't copy the bucket policy from the existing bucket to the new bucket.

    To copy an existing bucket's settings, under **Copy settings from existing bucket**, select **Choose bucket**. The **Choose bucket** window opens. Find the bucket with the settings that you want to copy, and select **Choose bucket**. The **Choose bucket** window closes, and the **Create bucket** window reopens.

   Under **Copy settings from existing bucket**, you now see the name of the bucket that you selected. The settings of your new bucket now match the settings of the bucket that you selected. If you want to remove the copied settings, choose **Restore defaults**. Review the remaining bucket settings on the **Create bucket** page. If you don't want to make any changes, you can skip to the final step. 

1. Under **Object Ownership**, to disable or enable ACLs and control ownership of objects uploaded in your bucket, choose one of the following settings:

**ACLs disabled**
   +  **Bucket owner enforced (default)** – ACLs are disabled, and the bucket owner automatically owns and has full control over every object in the general purpose bucket. ACLs no longer affect access permissions to data in the S3 general purpose bucket. The bucket uses policies exclusively to define access control.

     By default, ACLs are disabled. A majority of modern use cases in Amazon S3 no longer require the use of ACLs. We recommend that you keep ACLs disabled, except in circumstances where you must control access for each object individually. For more information, see [Controlling ownership of objects and disabling ACLs for your bucket](about-object-ownership.md).

**ACLs enabled**
   + **Bucket owner preferred** – The bucket owner owns and has full control over new objects that other accounts write to the bucket with the `bucket-owner-full-control` canned ACL. 

     If you apply the **Bucket owner preferred** setting, to require all Amazon S3 uploads to include the `bucket-owner-full-control` canned ACL, you can [add a bucket policy](ensure-object-ownership.md#ensure-object-ownership-bucket-policy) that allows only object uploads that use this ACL.
   + **Object writer** – The AWS account that uploads an object owns the object, has full control over it, and can grant other users access to it through ACLs.
**Note**  
The default setting is **Bucket owner enforced**. To apply the default setting and keep ACLs disabled, only the `s3:CreateBucket` permission is needed. To enable ACLs, you must have the `s3:PutBucketOwnershipControls` permission.

1. Under **Block Public Access settings for this bucket**, choose the Block Public Access settings that you want to apply to the bucket. 

   By default, all four Block Public Access settings are enabled. We recommend that you keep all settings enabled, unless you know that you need to turn off one or more of them for your specific use case. For more information about blocking public access, see [Blocking public access to your Amazon S3 storage](access-control-block-public-access.md).
**Note**  
To enable all Block Public Access settings, only the `s3:CreateBucket` permission is required. To turn off any Block Public Access settings, you must have the `s3:PutBucketPublicAccessBlock` permission.
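The console applies these settings for you when it creates the bucket. If you later manage the same settings from the AWS CLI, the call looks like the following sketch (the bucket name is a placeholder):

```shell
# Enable all four Block Public Access settings on an existing bucket.
aws s3api put-public-access-block \
    --bucket amzn-s3-demo-bucket1 \
    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
```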

1. (Optional) By default, **Bucket Versioning** is disabled. Versioning is a means of keeping multiple variants of an object in the same bucket. You can use versioning to preserve, retrieve, and restore every version of every object stored in your bucket. With versioning, you can recover more easily from both unintended user actions and application failures. For more information about versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md). 

   To enable versioning on your bucket, choose **Enable**. 
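Outside the console, you can also enable versioning on an existing bucket with the AWS CLI. A sketch, using a placeholder bucket name:

```shell
# Turn on S3 Versioning for an existing bucket.
aws s3api put-bucket-versioning \
    --bucket amzn-s3-demo-bucket1 \
    --versioning-configuration Status=Enabled
```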

1. (Optional) Under **Tags**, you can choose to add tags to your bucket. With AWS cost allocation, you can use bucket tags to annotate billing for your use of a bucket. A tag is a key-value pair that represents a label that you assign to a bucket. For more information, see [Using cost allocation S3 bucket tags](CostAllocTagging.md).

   To add a bucket tag, enter a **Key** and optionally a **Value** and choose **Add Tag**.
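The equivalent AWS CLI call looks like the following sketch. The bucket name, the `project` key, and the `test` value are example placeholders:

```shell
# Replace the bucket's tag set with a single key-value pair.
aws s3api put-bucket-tagging \
    --bucket amzn-s3-demo-bucket1 \
    --tagging 'TagSet=[{Key=project,Value=test}]'
```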

1. To configure **Default encryption**, under **Encryption type**, choose one of the following: 
   + **Server-side encryption with Amazon S3 managed keys (SSE-S3)**
   + **Server-side encryption with AWS Key Management Service keys (SSE-KMS)**
   + **Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS)**
**Important**  
If you use the SSE-KMS or DSSE-KMS option for your default encryption configuration, you are subject to the requests per second (RPS) quota of AWS KMS. For more information about AWS KMS quotas and how to request a quota increase, see [Quotas](https://docs.aws.amazon.com/kms/latest/developerguide/limits.html) in the *AWS Key Management Service Developer Guide*.

   Buckets and new objects are encrypted by using server-side encryption with Amazon S3 managed keys (SSE-S3) as the base level of encryption configuration. For more information about default encryption, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). For more information about SSE-S3, see [Using server-side encryption with Amazon S3 managed keys (SSE-S3)](UsingServerSideEncryption.md).

   For more information about using server-side encryption to encrypt your data, see [Protecting data with encryption](UsingEncryption.md). 

1. If you chose **Server-side encryption with AWS Key Management Service keys (SSE-KMS)** or **Dual-layer server-side encryption with AWS Key Management Service (AWS KMS) keys (DSSE-KMS)**, do the following:

   1. Under **AWS KMS key**, specify your KMS key in one of the following ways:
      + To choose from a list of available KMS keys, choose **Choose from your AWS KMS keys**, and choose your **KMS key** from the list of available keys.

        Both the AWS managed key (`aws/s3`) and your customer managed keys appear in this list. For more information about customer managed keys, see [Customer keys and AWS keys](https://docs.aws.amazon.com//kms/latest/developerguide/concepts.html#key-mgmt) in the *AWS Key Management Service Developer Guide*.
      + To enter the KMS key ARN, choose **Enter AWS KMS key ARN**, and enter your KMS key ARN in the field that appears. 
      + To create a new customer managed key in the AWS KMS console, choose **Create a KMS key**.

        For more information about creating an AWS KMS key, see [Creating keys](https://docs.aws.amazon.com//kms/latest/developerguide/create-keys.html) in the *AWS Key Management Service Developer Guide*.
**Important**  
You can use only KMS keys that are available in the same AWS Region as the bucket. The Amazon S3 console lists only the first 100 KMS keys in the same Region as the bucket. To use a KMS key that isn't listed, you must enter your KMS key ARN. If you want to use a KMS key that's owned by a different account, you must first have permission to use the key, and then you must enter the KMS key ARN. For more information about cross account permissions for KMS keys, see [Creating KMS keys that other accounts can use](https://docs.aws.amazon.com//kms/latest/developerguide/key-policy-modifying-external-accounts.html#cross-account-console) in the *AWS Key Management Service Developer Guide*. For more information about SSE-KMS, see [Specifying server-side encryption with AWS KMS (SSE-KMS)](specifying-kms-encryption.md). For more information about DSSE-KMS, see [Using dual-layer server-side encryption with AWS KMS keys (DSSE-KMS)](UsingDSSEncryption.md).  
When you use an AWS KMS key for server-side encryption in Amazon S3, you must choose a symmetric encryption KMS key. Amazon S3 supports only symmetric encryption KMS keys and not asymmetric KMS keys. For more information, see [Identifying symmetric and asymmetric KMS keys](https://docs.aws.amazon.com//kms/latest/developerguide/find-symm-asymm.html) in the *AWS Key Management Service Developer Guide*.

   1. When you configure your bucket to use default encryption with SSE-KMS, you can also use S3 Bucket Keys. S3 Bucket Keys lower the cost of encryption by decreasing request traffic from Amazon S3 to AWS KMS. For more information, see [Reducing the cost of SSE-KMS with Amazon S3 Bucket Keys](bucket-key.md). S3 Bucket Keys aren't supported for DSSE-KMS.

      By default, S3 Bucket Keys are enabled in the Amazon S3 console. We recommend leaving S3 Bucket Keys enabled to lower your costs. To disable S3 Bucket Keys for your bucket, under **Bucket Key**, choose **Disable**.
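If you manage default encryption from the AWS CLI instead of the console, the following sketch sets SSE-KMS as the default encryption with an S3 Bucket Key enabled. The bucket name, account ID, and KMS key ARN are placeholders:

```shell
# Set SSE-KMS default encryption with an S3 Bucket Key enabled.
aws s3api put-bucket-encryption \
    --bucket amzn-s3-demo-bucket1 \
    --server-side-encryption-configuration '{
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"
            },
            "BucketKeyEnabled": true
        }]
    }'
```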

1. (Optional) S3 Object Lock helps protect new objects from being deleted or overwritten. For more information, see [Locking objects with Object Lock](object-lock.md). If you want to enable S3 Object Lock, do the following:

   1. Choose **Advanced settings**.
**Important**  
Enabling Object Lock automatically enables versioning for the bucket. After you enable Object Lock and create the bucket, you must also configure the Object Lock default retention and legal hold settings on the bucket's **Properties** tab.

   1. If you want to enable Object Lock, choose **Enable**, read the warning that appears, and acknowledge it.
**Note**  
To create an Object Lock enabled bucket, you must have the following permissions: `s3:CreateBucket`, `s3:PutBucketVersioning`, and `s3:PutBucketObjectLockConfiguration`.
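To create an Object Lock enabled bucket from the AWS CLI instead of the console, you can pass the `--object-lock-enabled-for-bucket` flag at creation time. A sketch with placeholder values:

```shell
# Create a bucket with Object Lock (and therefore versioning) enabled.
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1 \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1 \
    --object-lock-enabled-for-bucket
```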

1. Choose **Create bucket**.

## Using the AWS SDKs
<a name="create-bucket-intro"></a>

When you use the AWS SDKs to create a general purpose bucket, you first create a client and then use that client to send a create bucket request. As a best practice, create your client and bucket in the same AWS Region. If you don't specify a Region when you create a client or a bucket, Amazon S3 uses the default Region, US East (N. Virginia). If you want to constrain the bucket creation to a specific AWS Region, use the [CreateBucketConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateBucketConfiguration.html) request element.

To create a client to access a dual-stack endpoint, you must specify an AWS Region. For more information, see [Using Amazon S3 dual-stack endpoints ](https://docs.aws.amazon.com/AmazonS3/latest/API/dual-stack-endpoints.html#dual-stack-endpoints-description) in the *Amazon S3 API Reference*. For a list of available AWS Regions, see [Amazon Simple Storage Service endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*. 

When you create a client, the Region maps to the Region-specific endpoint. The client uses this endpoint to communicate with Amazon S3: `s3.region.amazonaws.com`. If your Region launched after March 20, 2019, your client and bucket must be in the same Region. However, you can use a client in the US East (N. Virginia) Region to create a bucket in any Region that launched before March 20, 2019. For more information, see [Legacy endpoints](VirtualHosting.md#s3-legacy-endpoints).

These AWS SDK code examples perform the following tasks:
+ **Create a client by explicitly specifying an AWS Region** – In the example, the client uses the `s3.us-west-2.amazonaws.com` endpoint to communicate with Amazon S3. You can specify any AWS Region. For a list of AWS Regions, see [Regions and endpoints](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*. 
+ **Send a create bucket request by specifying only a bucket name** – The client sends a request to Amazon S3 to create the bucket in the Region where you created a client. 
+ **Retrieve information about the location of the bucket** – Amazon S3 stores bucket location information in the *location* subresource that's associated with the bucket.

For additional AWS SDK examples and examples in other languages, see [Use CreateBucket with an AWS SDK or CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_CreateBucket_section.html) in the *Amazon Simple Storage Service API Reference*.

------
#### [ Java ]

For examples of how to create a bucket with the AWS SDK for Java, see [Create a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_CreateBucket_section.html) in the *Amazon S3 API Reference*.

When using the AWS SDK for Java 2.x, you can append a globally unique identifier (GUID) to your base bucket name to help avoid naming conflicts across all AWS accounts. For example, you can use `UUID.randomUUID().toString().replace("-", "")` to generate the identifier and concatenate it with the base bucket name.

------
#### [ .NET ]

For information about how to create and test a working sample, see the [AWS SDK for .NET Version 3 API Reference](https://docs.aws.amazon.com/sdkfornet/v3/apidocs/Index.html).

**Example**  

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using Amazon.S3.Util;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class CreateBucketTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            CreateBucketAsync().Wait();
        }

        static async Task CreateBucketAsync()
        {
            try
            {
                if (!(await AmazonS3Util.DoesS3BucketExistAsync(s3Client, bucketName)))
                {
                    var putBucketRequest = new PutBucketRequest
                    {
                        BucketName = bucketName,
                        UseClientRegion = true
                    };

                    PutBucketResponse putBucketResponse = await s3Client.PutBucketAsync(putBucketRequest);
                }
                // Retrieve the bucket location.
                string bucketLocation = await FindBucketLocationAsync(s3Client);
            }
            catch (AmazonS3Exception e)
            {
                Console.WriteLine("Error encountered on server. Message:'{0}' when writing an object", e.Message);
            }
            catch (Exception e)
            {
                Console.WriteLine("Unknown encountered on server. Message:'{0}' when writing an object", e.Message);
            }
        }
        static async Task<string> FindBucketLocationAsync(IAmazonS3 client)
        {
            string bucketLocation;
            var request = new GetBucketLocationRequest()
            {
                BucketName = bucketName
            };
            GetBucketLocationResponse response = await client.GetBucketLocationAsync(request);
            bucketLocation = response.Location.ToString();
            return bucketLocation;
        }
    }
}
```

------
#### [ Ruby ]

For information about how to create and test a working sample, see the [AWS SDK for Ruby - Version 3](https://docs.aws.amazon.com/sdk-for-ruby/v3/api/).

**Example**  

```
require 'aws-sdk-s3'

# Wraps Amazon S3 bucket actions.
class BucketCreateWrapper
  attr_reader :bucket

  # @param bucket [Aws::S3::Bucket] An Amazon S3 bucket initialized with a name. This is a client-side object until
  #                                 create is called.
  def initialize(bucket)
    @bucket = bucket
  end

  # Creates an Amazon S3 bucket in the specified AWS Region.
  #
  # @param region [String] The Region where the bucket is created.
  # @return [Boolean] True when the bucket is created; otherwise, false.
  def create?(region)
    @bucket.create(create_bucket_configuration: { location_constraint: region })
    true
  rescue Aws::Errors::ServiceError => e
    puts "Couldn't create bucket. Here's why: #{e.message}"
    false
  end

  # Gets the Region where the bucket is located.
  #
  # @return [String] The location of the bucket.
  def location
    if @bucket.nil?
      'None. You must create a bucket before you can get its location!'
    else
      @bucket.client.get_bucket_location(bucket: @bucket.name).location_constraint
    end
  rescue Aws::Errors::ServiceError => e
    "Couldn't get the location of #{@bucket.name}. Here's why: #{e.message}"
  end
end

# Example usage:
def run_demo
  region = "us-west-2"
  wrapper = BucketCreateWrapper.new(Aws::S3::Bucket.new("amzn-s3-demo-bucket-#{Random.uuid}"))
  return unless wrapper.create?(region)

  puts "Created bucket #{wrapper.bucket.name}."
  puts "Your bucket's region is: #{wrapper.location}"
end

run_demo if $PROGRAM_NAME == __FILE__
```

------

## Using the AWS CLI
<a name="creating-bucket-cli"></a>

The following AWS CLI example creates a general purpose bucket in the US West (N. California) Region (`us-west-1`) with a bucket name that includes a globally unique identifier (GUID). To use this example command, replace the *user input placeholders* with your own information.

```
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]' ) \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
```

For more information and additional examples, see [create-bucket](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/create-bucket.html) in the *AWS CLI Command Reference*.

# Viewing the properties for an S3 general purpose bucket
<a name="view-bucket-properties"></a>

You can view the properties of any Amazon S3 bucket that you own. These properties include the following:
+ **Bucket Versioning** – Keep multiple versions of an object in one general purpose bucket by using versioning. By default, versioning is disabled for a new bucket. For information about enabling versioning, see [Enabling versioning on buckets](manage-versioning-examples.md).
+ **Tags** – An AWS tag is a key-value pair that holds metadata. You attach the tags to your Amazon S3 resources, such as buckets. You can tag resources when you create them or manage tags on existing resources. You can use tags for cost allocation to track storage costs by bucket tag in AWS Billing and Cost Management. You can also use tags for attribute-based access control (ABAC), to scale access permissions and grant access to S3 resources based on their tags. For more information, see [Using tags with S3 general purpose buckets](buckets-tagging.md).
+ **Default encryption** – Enabling default encryption provides you with automatic server-side encryption. Amazon S3 encrypts an object before saving it to a disk and decrypts the object when you download it. For more information, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md). 
+ **Server access logging** – Get detailed records for the requests that are made to your general purpose bucket with server access logging. By default, Amazon S3 doesn't collect server access logs. For information about enabling server access logging, see [Enabling Amazon S3 server access logging](enable-server-access-logging.md).
+ **AWS CloudTrail data events** – Use CloudTrail to log data events. By default, trails don't log data events. Additional charges apply for data events. For more information, see [Logging Data Events for Trails](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) in the *AWS CloudTrail User Guide*.
+ **Event notifications** – Enable certain Amazon S3 general purpose bucket events to send notification messages to a destination whenever the events occur. For more information, see [Enabling and configuring event notifications using the Amazon S3 console](enable-event-notifications.md).
+ **Transfer acceleration** – Enable fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. For information about enabling transfer acceleration, see [Enabling and using S3 Transfer Acceleration](transfer-acceleration-examples.md).
+ **Object Lock** – Use S3 Object Lock to prevent an object from being deleted or overwritten for a fixed amount of time or indefinitely. For more information, see [Locking objects with Object Lock](object-lock.md).
+ **Requester Pays** – Enable Requester Pays if you want the requester (instead of the general purpose bucket owner) to pay for requests and data transfers. For more information, see [Using Requester Pays general purpose buckets for storage transfers and usage](RequesterPaysBuckets.md). 
+ **Static website hosting** – You can host a static website on Amazon S3. For more information, see [Hosting a static website using Amazon S3](WebsiteHosting.md).

You can view bucket properties by using the AWS Management Console, the AWS CLI, or the AWS SDKs.

## Using the S3 console
<a name="view-bucket"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, choose the name of the bucket that you want to view the properties for.

1. Choose the **Properties** tab.

1. On the **Properties** page, you can view and configure the properties listed above for the bucket.

## Using the AWS CLI
<a name="view-bucket-properties-cli"></a>

**View bucket properties with the AWS CLI**  
The following commands show how you can use the AWS CLI to list different general purpose bucket properties. 

The following command returns the tag set associated with the bucket *amzn-s3-demo-bucket1*. For more information about bucket tags, see [Using cost allocation S3 bucket tags](CostAllocTagging.md).

```
aws s3api get-bucket-tagging --bucket amzn-s3-demo-bucket1
```

For more information and examples, see [get-bucket-tagging](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-tagging.html) in the *AWS CLI Command Reference*.

The following command returns the versioning state of the bucket *amzn-s3-demo-bucket1*. For more information about bucket versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

```
aws s3api get-bucket-versioning --bucket amzn-s3-demo-bucket1
```

For more information and examples, see [get-bucket-versioning](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-versioning.html) in the *AWS CLI Command Reference*.

The following command returns the default encryption configuration for the bucket *amzn-s3-demo-bucket1*. By default, all buckets have a default encryption configuration that uses server-side encryption with Amazon S3 managed keys (SSE-S3). For more information about bucket default encryption, see [Setting default server-side encryption behavior for Amazon S3 buckets](bucket-encryption.md).

```
aws s3api get-bucket-encryption --bucket amzn-s3-demo-bucket1
```

For more information and examples, see [get-bucket-encryption](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-encryption.html) in the *AWS CLI Command Reference*.

The following command returns the notification configuration of the bucket *amzn-s3-demo-bucket1*. For more information about bucket event notifications, see [Amazon S3 Event Notifications](EventNotifications.md).

```
aws s3api get-bucket-notification-configuration --bucket amzn-s3-demo-bucket1
```

For more information and examples, see [get-bucket-notification-configuration](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-notification-configuration.html) in the *AWS CLI Command Reference*.

The following command returns the logging status for the bucket *amzn-s3-demo-bucket1*. For more information about server access logging, see [Logging requests with server access logging](ServerLogs.md).

```
aws s3api get-bucket-logging --bucket amzn-s3-demo-bucket1
```

For more information and examples, see [get-bucket-logging](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-logging.html) in the *AWS CLI Command Reference*.

## Using the AWS SDKs
<a name="view-bucket-properties-sdk"></a>

For examples of how to return general purpose bucket properties with the AWS SDKs, such as versioning, tags, and more, see [Code examples](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_ListBuckets_section.html) in the *Amazon S3 API Reference*.

For general information about using different AWS SDKs, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*.

# Listing Amazon S3 general purpose buckets
<a name="list-buckets"></a>

To return a list of the general purpose buckets that you own, you can use [ListBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListBuckets.html). You can list your buckets by using the Amazon S3 console, the AWS Command Line Interface, or the AWS SDKs. For `ListBuckets` requests made with the AWS CLI, AWS SDKs, or Amazon S3 REST API, AWS accounts that use the default service quota of 10,000 buckets support both paginated and unpaginated requests. Regardless of how many buckets are in your account, you can use page sizes between 1 and 10,000 buckets to list all of your buckets. For paginated requests, `ListBuckets` returns both the bucket name and the corresponding AWS Region for each bucket. The following AWS Command Line Interface and AWS SDK examples show you how to use pagination in your `ListBuckets` requests. Note that some AWS SDKs assist with pagination automatically.

**Permissions**  
To list all of your general purpose buckets, you must have the `s3:ListAllMyBuckets` permission. If you're encountering an `HTTP Access Denied (403 Forbidden)` error, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).

**Important**  
We strongly recommend using only paginated `ListBuckets` requests. Unpaginated `ListBuckets` requests are only supported for AWS accounts set to the default general purpose bucket quota of 10,000. If you have an approved general purpose bucket quota above 10,000, you must send paginated `ListBuckets` requests to list your account’s buckets. All unpaginated `ListBuckets` requests will be rejected for AWS accounts with a general purpose bucket quota greater than 10,000. 

## Using the S3 console
<a name="access-bucket-example-console"></a>

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. On the **General purpose buckets** tab, you can see a list of your general purpose buckets.

1. To find buckets by name, enter a bucket name in the **Find buckets by name** field.

## Using the AWS CLI
<a name="access-bucket-example-cli"></a>

To use the AWS CLI to list your general purpose buckets, you can use the `ls` or `list-buckets` command. The following examples show you how to create paginated `list-buckets` requests and both paginated and unpaginated `ls` requests. To use these examples, replace the *user input placeholders* with your own information.

**Example – List all the buckets in your account by using `ls` (unpaginated)**  
The following example command lists all the general purpose buckets in your account in a single unpaginated call. This call returns a list of all buckets in your account (up to 10,000 results):  

```
$ aws s3 ls
```
For more information and examples, see [List bucket and objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-listing-buckets).  


**Example – List all the buckets in your account by using `ls` (paginated)**  
The following example command makes one or more paginated calls to list all the general purpose buckets in your account, returning 100 buckets per page:  

```
$ aws s3 ls --page-size 100
```
For more information and examples, see [List bucket and objects](https://docs.aws.amazon.com/cli/latest/userguide/cli-services-s3-commands.html#using-s3-commands-listing-buckets).  


**Example – List all the buckets in your account (paginated)**  
The following example provides a paginated `list-buckets` command to list all the general purpose buckets in your account. The `--max-items` and `--page-size` options limit the number of buckets listed to 100 per page.   

```
$ aws s3api list-buckets \
    --max-items 100 \
    --page-size 100
```
If the number of items output (`--max-items`) is fewer than the total number of items returned by the underlying API calls, the output includes a `NextToken` value that you can pass to a subsequent command in the `--starting-token` argument to retrieve the next set of items. The following example shows how to use the token returned by the previous example to retrieve the next 100 buckets.   

```
$ aws s3api list-buckets \
    --max-items 100 \
    --page-size 100 \
    --starting-token eyJNYXJrZXIiOiBudWxsLCAiYm90b190cnVuY2F0ZV9hbW91bnQiOiAxfQ==
```
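Putting these pieces together, the following shell sketch pages through all of your buckets by passing each returned `NextToken` value back as `--starting-token`. It assumes that the `jq` JSON tool is installed and that your AWS CLI credentials are configured:

```shell
# Page through all buckets, 100 at a time, until no NextToken remains.
token=""
while :; do
    if [ -n "$token" ]; then
        out=$(aws s3api list-buckets --max-items 100 --page-size 100 --starting-token "$token")
    else
        out=$(aws s3api list-buckets --max-items 100 --page-size 100)
    fi
    echo "$out" | jq -r '.Buckets[].Name'
    token=$(echo "$out" | jq -r '.NextToken // empty')
    [ -z "$token" ] && break
done
```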


**Example – List all the buckets in an AWS Region (paginated)**  
The following example command uses the `--bucket-region` parameter to list up to 100 buckets in an account that are in the `us-east-2` Region. Requests made to a Regional endpoint that is different from the value specified in the `--bucket-region` parameter are not supported. For example, if you want to limit the response to your buckets in `us-east-2`, you must make your request to an endpoint in `us-east-2`.  

```
$ aws s3api list-buckets \
    --region us-east-2 \
    --max-items 100 \
    --page-size 100 \
    --bucket-region us-east-2
```


**Example – List all the buckets that begin with a specific bucket name prefix (paginated)**  
The following example command lists up to 100 buckets that have a name starting with the *amzn-s3-demo-bucket* prefix.   

```
$ aws s3api list-buckets \
    --max-items 100 \
    --page-size 100 \
    --prefix amzn-s3-demo-bucket
```

## Using the AWS SDKs
<a name="access-bucket-example-sdk"></a>

The following examples show you how to list your general purpose buckets by using the AWS SDKs.

------
#### [ SDK for Python ]

**Example – ListBuckets request (paginated)**  

```
import boto3

s3 = boto3.client('s3')
response = s3.list_buckets(MaxBuckets=100)
```

**Example – ListBuckets request with a continuation token (paginated)**  

```
import boto3

s3 = boto3.client('s3')
response = s3.list_buckets(
    MaxBuckets=1,
    ContinuationToken="eyJNYXJrZXIiOiBudWxsLCAiYm90b190cnVuY2F0ZV9hbW91bnQiOiAxfQ==EXAMPLE--",
)
```

------
#### [ SDK for Java ]

For examples of how to list buckets with the AWS SDK for Java, see [List buckets](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_ListBuckets_section.html) in the *Amazon S3 API Reference*.

------
#### [ SDK for Go ]

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-east-2"))
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	maxBuckets := 1000
	resp, err := client.ListBuckets(context.TODO(), &s3.ListBucketsInput{MaxBuckets: aws.Int32(int32(maxBuckets))})
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println("S3 Buckets:")
	for _, bucket := range resp.Buckets {
		fmt.Println("- Name:", *bucket.Name)
		fmt.Println("- BucketRegion:", *bucket.BucketRegion)
	}
	fmt.Println(resp.ContinuationToken == nil)
	fmt.Println(resp.Prefix == nil)
}
```

------

# Emptying a general purpose bucket
<a name="empty-bucket"></a>

You can empty a general purpose bucket's contents by using the Amazon S3 console, the AWS SDKs, or the AWS Command Line Interface (AWS CLI). When you empty a general purpose bucket, you delete all of the objects, but you keep the bucket. Emptying a bucket can't be undone. Objects added to the bucket while the empty bucket action is in progress might also be deleted. Before you can delete the bucket itself, you must delete all objects in it (including all object versions and delete markers).

When you empty a general purpose bucket that has S3 Versioning enabled or suspended, all versions of all the objects in the bucket are deleted. For more information, see [Working with objects in a versioning-enabled bucket](manage-objects-versioned-bucket.md).

While emptying your bucket, we recommend that you also remove all incomplete multipart uploads. You can use multipart uploads to upload very large objects (up to 5 TB) as a set of parts for improved throughput and quicker recovery from network issues. If the multipart upload process doesn't finish, the incomplete parts remain in the bucket in an unusable state and incur storage costs until the upload is completed or the parts are removed. For more information, see [Uploading and copying objects using multipart upload in Amazon S3](mpuoverview.md).

As a best practice, we recommend configuring lifecycle rules to expire objects and incomplete multipart uploads that are older than a specific number of days. When you create your lifecycle rule to expire incomplete multipart uploads, we recommend 7 days as a good starting point. For more information, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md).

Lifecycle expiration is an asynchronous process, so the rule might take some days to run before your bucket is empty. After the first time that Amazon S3 runs the rule, all objects that are eligible for expiration are marked for deletion. You're no longer charged for those objects that are marked for deletion. For more information, see [How do I empty an Amazon S3 bucket using a lifecycle configuration rule?](https://repost.aws/knowledge-center/s3-empty-bucket-lifecycle-rule).

## Using the S3 console
<a name="empty-bucket-console"></a>

You can use the Amazon S3 console to empty a general purpose bucket, which deletes all of the objects in the bucket without deleting the bucket. 

**To empty an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the bucket list, select the option next to the name of the bucket that you want to empty, and then choose **Empty**.

1. On the **Empty bucket** page, confirm that you want to empty the bucket by entering the bucket name into the text field, and then choose **Empty**.

1. Monitor the progress of the bucket emptying process on the **Empty bucket: Status** page.

## Using the AWS CLI
<a name="empty-bucket-awscli"></a>

You can empty a general purpose bucket using the AWS CLI only if the bucket does not have Bucket Versioning enabled. If versioning is not enabled, you can use the `rm` (remove) AWS CLI command with the `--recursive` parameter to empty the bucket (or remove a subset of objects with a specific key name prefix). 

The following `rm` command removes objects that have the key name prefix `doc`, for example, `doc/doc1` and `doc/doc2`.

```
$ aws s3 rm s3://bucket-name/doc --recursive
```

Use the following command to remove all objects without specifying a prefix.

```
$ aws s3 rm s3://bucket-name --recursive
```

For more information, see [Using high-level S3 commands with the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html) in the *AWS Command Line Interface User Guide*.

**Note**  
You can't use this command to permanently remove objects from a bucket that has versioning enabled. With versioning enabled, deleting an object only adds a delete marker; the object versions remain in the bucket. For more information about S3 Bucket Versioning, see [Retaining multiple versions of objects with S3 Versioning](Versioning.md).

## Using the AWS SDKs
<a name="empty-bucket-awssdks"></a>

You can use the AWS SDKs to empty a general purpose bucket or remove a subset of objects that have a specific key name prefix.

For an example of how to empty a bucket using the AWS SDK for Java, see [Deleting a general purpose bucket](delete-bucket.md). The code deletes all objects, regardless of whether the bucket has versioning enabled, and then deletes the bucket. To empty the bucket but keep it, remove the statement that deletes the bucket. 
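As a hedged sketch of the same operation in Python with boto3, the following helper deletes every object version and delete marker page by page, so it works whether or not versioning is enabled. The helper name `empty_bucket` and its structure are our own illustration, not an official SDK example:

```python
def empty_bucket(s3, bucket):
    """Delete every object version and delete marker in the bucket."""
    kwargs = {"Bucket": bucket}
    while True:
        page = s3.list_object_versions(**kwargs)
        # Both object versions and delete markers must be removed.
        targets = [
            {"Key": item["Key"], "VersionId": item["VersionId"]}
            for item in page.get("Versions", []) + page.get("DeleteMarkers", [])
        ]
        if targets:
            # DeleteObjects accepts up to 1,000 keys per request, which
            # matches the maximum page size of ListObjectVersions.
            s3.delete_objects(Bucket=bucket, Delete={"Objects": targets})
        if not page.get("IsTruncated"):
            return
        kwargs["KeyMarker"] = page["NextKeyMarker"]
        kwargs["VersionIdMarker"] = page["NextVersionIdMarker"]
```

With a real client, call `empty_bucket(boto3.client('s3'), 'amzn-s3-demo-bucket')`. Because each `ListObjectVersions` page contains at most 1,000 entries, every page can be deleted with a single `DeleteObjects` call.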

For more information about using other AWS SDKs, see [Tools for Amazon Web Services](https://aws.amazon.com/tools/).

## Using a lifecycle configuration
<a name="empty-bucket-lifecycle"></a>

To empty a large general purpose bucket, we recommend that you use an S3 Lifecycle configuration rule. Lifecycle expiration is an asynchronous process, so the rule might take some days to run before the bucket is empty. After the first time that Amazon S3 runs the rule, all objects that are eligible for expiration are marked for deletion. You're no longer charged for those objects that are marked for deletion. For more information, see [How do I empty an Amazon S3 bucket using a lifecycle configuration rule?](https://repost.aws/knowledge-center/s3-empty-bucket-lifecycle-rule).

If you use a lifecycle configuration to empty your bucket, the configuration should include [current versions, non-current versions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/versioning-workflows.html), [delete markers](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeleteMarker.html), and [incomplete multipart uploads](https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpu-abort-incomplete-mpu-lifecycle-config.html).

You can add lifecycle configuration rules to expire all objects or a subset of objects that have a specific key name prefix. For example, to remove all objects in a bucket, you can set a lifecycle rule to expire objects one day after creation.
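For illustration, such a rule might look like the following boto3 lifecycle configuration, which expires current versions one day after creation, expires noncurrent versions after one day, and aborts incomplete multipart uploads after seven days. Treat this as a sketch: the rule ID is an arbitrary placeholder, and you should review the configuration against your own retention requirements before applying it.

```python
lifecycle_configuration = {
    "Rules": [
        {
            "ID": "empty-bucket-rule",  # arbitrary identifier
            "Status": "Enabled",
            "Filter": {},               # empty filter applies to all objects
            "Expiration": {"Days": 1},
            "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

# With a real client and bucket (placeholder bucket name):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="amzn-s3-demo-bucket",
#     LifecycleConfiguration=lifecycle_configuration,
# )
```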

Amazon S3 supports a bucket lifecycle rule that you can use to stop multipart uploads that don't complete within a specified number of days after being initiated. We recommend that you configure this lifecycle rule to minimize your storage costs. For more information, see [Configuring a bucket lifecycle configuration to delete incomplete multipart uploads](mpu-abort-incomplete-mpu-lifecycle-config.md).

For more information about using a lifecycle configuration to empty a bucket, see [Setting an S3 Lifecycle configuration on a bucket](how-to-set-lifecycle-configuration-intro.md) and [Expiring objects](lifecycle-expire-general-considerations.md).

## Emptying a general purpose bucket with AWS CloudTrail configured
<a name="empty-bucket-cloudtrail"></a>

AWS CloudTrail tracks object-level data events in an Amazon S3 general purpose bucket, such as deleting objects. If you use a general purpose bucket as a destination to log your CloudTrail events and you delete objects from that same bucket, the logging itself may create new objects while you're emptying the bucket. To prevent this, stop your AWS CloudTrail trails. For more information about stopping your CloudTrail trails from logging events, see [Turning off logging for a trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-delete-trails-console.html) in the *AWS CloudTrail User Guide*.

An alternative to stopping your CloudTrail trails is to add a deny `s3:PutObject` statement to your bucket policy, which blocks new log objects from being delivered to the bucket. If you want to store new objects in the bucket at a later time, you must remove this deny statement. For more information, see [Object operations](security_iam_service-with-iam.md#using-with-s3-actions-related-to-objects) and [IAM JSON policy elements: Effect](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_effect.html) in the *IAM User Guide*.
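As an illustrative sketch, such a deny statement could be attached with boto3 as follows. The `Sid` and bucket name are placeholders; merge the statement into your existing bucket policy rather than overwriting it, because `put_bucket_policy` replaces the entire policy document.

```python
import json

bucket = "amzn-s3-demo-bucket"  # placeholder bucket name
deny_put_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyNewObjectsWhileEmptying",  # arbitrary identifier
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            # Deny applies to every object key in the bucket.
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

# With a real client:
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(deny_put_policy))
```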

# Deleting a general purpose bucket
<a name="delete-bucket"></a>

You can delete an empty Amazon S3 general purpose bucket. For information about emptying a general purpose bucket, see [Emptying a general purpose bucket](empty-bucket.md). 

You can delete a bucket by using the Amazon S3 console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the Amazon S3 REST API. 

**Important**  
Before deleting a general purpose bucket, consider the following:  
**If a bucket is deleted, it can't be restored by AWS.** Before deleting a bucket, make sure that you have backed up or replicated your data.
General purpose bucket names are unique within a global namespace. **If you delete a bucket in the shared global namespace, be aware that another AWS account can create a new bucket with the same name and can therefore potentially receive requests intended for the deleted bucket.** If you want to prevent this, or if you want to continue to use the same bucket name, don't delete the bucket. Instead, empty the bucket of all objects to minimize costs, keep the bucket, and block any bucket requests as needed.
We recommend creating buckets in your account regional namespace for assurance that only your account can ever own these bucket names. For more information, see [Namespaces for general purpose buckets](gpbucketnamespaces.md).
When you delete a general purpose bucket, the bucket might not be instantly removed. Instead, Amazon S3 queues the bucket for deletion. Because Amazon S3 is distributed across AWS Regions, the deletion process takes time to fully propagate and achieve consistency throughout the system.
If the bucket hosts a static website, and you created and configured an Amazon Route 53 hosted zone as described in [Tutorial: Configuring a static website using a custom domain registered with Route 53](website-hosting-custom-domain-walkthrough.md), you must clean up the Route 53 hosted zone settings that are related to the bucket. For more information, see [Step 2: Delete the Route 53 hosted zone](getting-started-cleanup.md#getting-started-cleanup-route53).
If the bucket receives log data from Elastic Load Balancing (ELB), we recommend that you stop the delivery of ELB logs to the bucket before deleting it. After you delete the bucket, if another user creates a bucket using the same name, your log data could potentially be delivered to that bucket. For information about ELB access logs, see [Access logs for your Classic Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/access-log-collection.html) in the *User Guide for Classic Load Balancers* and [Access logs for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html) in the *User Guide for Application Load Balancers*.

**Troubleshooting**  
If you are unable to delete an Amazon S3 general purpose bucket, consider the following:
+ **Make sure that the bucket is empty** – You can delete buckets only if they don't have any objects in them. Make sure that the bucket is empty. For information about emptying a bucket, see [Emptying a general purpose bucket](empty-bucket.md).
+ **Make sure that there aren't any access points attached** – You can delete buckets only if they don't have any S3 Access Points or Multi-Region Access Points attached within the same account. Before deleting the bucket, delete any same-account access points that are attached to the bucket.
+ **Make sure that you have the `s3:DeleteBucket` permission** – If you can't delete a bucket, work with your IAM administrator to confirm that you have the `s3:DeleteBucket` permission. For information about how to view or update IAM permissions, see [Changing permissions for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html) in the *IAM User Guide*. For troubleshooting information, see [Troubleshoot access denied (403 Forbidden) errors in Amazon S3](troubleshoot-403-errors.md).
+ **Check for `s3:DeleteBucket Deny` statements in AWS Organizations service control policies (SCPs) and resource control policies (RCPs)** – SCPs and RCPs can deny the delete permission on a bucket. For more information, see [service control policies](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_policies_scps.html) and [resource control policies](https://docs.aws.amazon.com//organizations/latest/userguide/orgs_manage_policies_rcps.html) in the *AWS Organizations User Guide*. 
+ **Check for `s3:DeleteBucket Deny` statements in your bucket policy** – If you have `s3:DeleteBucket` permissions in your IAM user or role policy and you can't delete a bucket, the bucket policy might include a `Deny` statement for `s3:DeleteBucket`. Buckets created by AWS Elastic Beanstalk have a policy containing this statement by default. Before you can delete the bucket, you must delete this statement or the bucket policy.

**Prerequisites**  
Before you can delete a general purpose bucket, you must empty it. For information about emptying a bucket, see [Emptying a general purpose bucket](empty-bucket.md).

## Using the S3 console
<a name="delete-bucket-console"></a>

**To delete an S3 bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the buckets list, select the option button next to the name of the bucket that you want to delete, and then choose **Delete** at the top of the page.

1. On the **Delete bucket** page, confirm that you want to delete the bucket by entering the bucket name in the text field, and then choose **Delete bucket**.
**Note**  
If the bucket contains any objects, empty the bucket before deleting it by choosing the **Empty bucket** button in the **This bucket is not empty** error alert and following the instructions on the **Empty bucket** page. Then return to the **Delete bucket** page and delete the bucket.

1. To verify that you've deleted the bucket, open the **General purpose buckets** list and enter the name of the bucket that you deleted. If the bucket can't be found, your deletion was successful. 

## Using the AWS SDK for Java
<a name="delete-empty-bucket"></a>

To empty and delete a general purpose bucket using the AWS SDK for Java, you must first delete all objects in the general purpose bucket, and then delete the bucket. 

For examples in other languages, see [Use DeleteBucket with an AWS SDK or CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_DeleteBucket_section.html) in the *Amazon Simple Storage Service API Reference*. For information about using other AWS SDKs, see [Tools for Amazon Web Services](https://aws.amazon.com/tools/).

------
#### [ Java ]

To delete a bucket that contains objects using the AWS SDK for Java, you must delete all objects first, and then delete the bucket. This approach works for buckets with or without versioning enabled.

**Note**  
For buckets without versioning enabled, you can delete all objects directly and then delete the bucket. For buckets with versioning enabled, you must delete all object versions before deleting the bucket.

For examples of how to delete a bucket with the AWS SDK for Java, see [Delete a bucket](https://docs.aws.amazon.com/AmazonS3/latest/API/s3_example_s3_DeleteBucket_section.html) in the *Amazon S3 API Reference*.

------

## Using the AWS CLI
<a name="delete-bucket-awscli"></a>

You can delete a general purpose bucket that contains objects with the AWS CLI if the bucket doesn't have versioning enabled. When you delete a bucket that contains objects, all the objects in the bucket are permanently deleted, including objects that have been transitioned to the S3 Glacier Flexible Retrieval storage class.

If your bucket doesn't have versioning enabled, you can use the `rb` (remove bucket) AWS CLI command with the `--force` parameter to delete the bucket and all the objects in it. This command deletes all the objects first and then deletes the bucket.

If versioning is enabled, using the `rb` command with the `--force` parameter doesn't delete versioned objects, so the bucket deletion fails because the bucket isn't empty. For more information about deleting versioned objects, see [Deleting object versions](https://docs.aws.amazon.com/AmazonS3/latest/userguide/DeletingObjectVersions.html).

To use the following command, replace `amzn-s3-demo-bucket` with the name of the bucket that you want to delete:

```
$ aws s3 rb s3://amzn-s3-demo-bucket --force  
```

For more information, see [Using High-Level S3 Commands with the AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html) in the *AWS Command Line Interface User Guide*.

# Mount an Amazon S3 bucket as a local file system
<a name="mountpoint"></a>

Mountpoint for Amazon S3 is a high-throughput open source file client for mounting an Amazon S3 bucket as a local file system. With Mountpoint, your applications can access objects stored in Amazon S3 through file system operations, such as open and read. Mountpoint automatically translates these operations into S3 object API calls, giving your applications access to the elastic storage and throughput of Amazon S3 through a file interface.

Mountpoint for Amazon S3 is [available for production use on your large-scale read-heavy applications](https://aws.amazon.com/blogs/aws/mountpoint-for-amazon-s3-generally-available-and-ready-for-production-workloads/): data lakes, machine learning training, image rendering, autonomous vehicle simulation, extract, transform, and load (ETL), and more. 

Mountpoint supports basic file system operations, and can read files up to 5 TB in size. It can list and read existing files, and it can create new ones. It cannot modify existing files or delete directories, and it does not support symbolic links or file locking. Mountpoint is ideal for applications that do not need all of the features of a shared file system and POSIX-style permissions but require Amazon S3's elastic throughput to read and write large S3 datasets. For details, see [Mountpoint file system behavior](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md) on GitHub. For workloads that require full POSIX support, we recommend [Amazon FSx for Lustre](https://aws.amazon.com/fsx/lustre/) and its [support for linking Amazon S3 buckets](https://docs.aws.amazon.com/fsx/latest/LustreGuide/create-dra-linked-data-repo.html).

Mountpoint for Amazon S3 is available only for Linux operating systems. You can use Mountpoint to access S3 objects in all storage classes except S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Intelligent-Tiering Archive Access Tier, and S3 Intelligent-Tiering Deep Archive Access Tier.

**Topics**
+ [Installing Mountpoint](mountpoint-installation.md)
+ [Configuring and using Mountpoint](mountpoint-usage.md)
+ [Troubleshooting Mountpoint](mountpoint-troubleshooting.md)

# Installing Mountpoint
<a name="mountpoint-installation"></a>

You can download and install prebuilt packages of Mountpoint for Amazon S3 by using the command line. The instructions for downloading and installing Mountpoint vary depending on which Linux operating system you're using. 

**Topics**
+ [Amazon Linux 2023 (AL2023)](#mountpoint-install-al2023)
+ [Other RPM-based distributions (Amazon Linux 2, Fedora, CentOS, RHEL)](#mountpoint-install-rpm)
+ [DEB-based distributions (Debian, Ubuntu)](#mountpoint.install.deb)
+ [Other Linux distributions](#mountpoint-install-other)
+ [Verifying the signature of the Mountpoint for Amazon S3 package](#mountpoint-install-verify)

## Amazon Linux 2023 (AL2023)
<a name="mountpoint-install-al2023"></a>

Mountpoint is available directly in the Amazon Linux 2023 repository since AL2023 version 2023.9.20251110.

1. Install it by entering the following command:

   ```
   sudo dnf install mount-s3
   ```

1. Verify that Mountpoint for Amazon S3 is successfully installed:

   ```
   mount-s3 --version
   ```

   You should see output similar to the following:

   ```
   mount-s3 1.21.0+1.amzn2023
   ```

## Other RPM-based distributions (Amazon Linux 2, Fedora, CentOS, RHEL)
<a name="mountpoint-install-rpm"></a>

1. Copy the following download URL for your architecture.

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.rpm
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.rpm
   ```

1. Download the Mountpoint for Amazon S3 package. Replace `download-link` with the appropriate download URL from the preceding step.

   ```
   wget download-link
   ```

1. (Optional) Verify the authenticity and integrity of the downloaded file. First, copy the appropriate signature URL for your architecture. 

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.rpm.asc
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.rpm.asc
   ```

   Next, see [Verifying the signature of the Mountpoint for Amazon S3 package](#mountpoint-install-verify).

1. Install the package by using the following command:

   ```
   sudo yum install ./mount-s3.rpm
   ```

1. Verify that Mountpoint is successfully installed by entering the following command:

   ```
   mount-s3 --version
   ```

   You should see output similar to the following:

   ```
   mount-s3 1.21.0
   ```

## DEB-based distributions (Debian, Ubuntu)
<a name="mountpoint.install.deb"></a>

1. Copy the download URL for your architecture. 

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.deb
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.deb
   ```

1. Download the Mountpoint for Amazon S3 package. Replace `download-link` with the appropriate download URL from the preceding step.

   ```
   wget download-link
   ```

1. (Optional) Verify the authenticity and integrity of the downloaded file. First, copy the signature URL for your architecture.

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.deb.asc
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.deb.asc
   ```

   Next, see [Verifying the signature of the Mountpoint for Amazon S3 package](#mountpoint-install-verify).

1. Install the package by using the following command:

   ```
   sudo apt-get install ./mount-s3.deb
   ```

1. Verify that Mountpoint for Amazon S3 is successfully installed by running the following command:

   ```
   mount-s3 --version
   ```

   You should see output similar to the following:

   ```
   mount-s3 1.21.0
   ```

## Other Linux distributions
<a name="mountpoint-install-other"></a>

1. Consult your operating system documentation to install the `FUSE` and `libfuse2` packages, which are required. 

1. Copy the download URL for your architecture. 

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.tar.gz
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.tar.gz
   ```

1. Download the Mountpoint for Amazon S3 package. Replace `download-link` with the appropriate download URL from the preceding step.

   ```
   wget download-link
   ```

1. (Optional) Verify the authenticity and integrity of the downloaded file. First, copy the signature URL for your architecture. 

   *x86\_64*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.tar.gz.asc
   ```

   *ARM64 (Graviton)*:

   ```
   https://s3.amazonaws.com/mountpoint-s3-release/latest/arm64/mount-s3.tar.gz.asc
   ```

   Next, see [Verifying the signature of the Mountpoint for Amazon S3 package](#mountpoint-install-verify).

1. Install the package by using the following command:

   ```
   sudo mkdir -p /opt/aws/mountpoint-s3 && sudo tar -C /opt/aws/mountpoint-s3 -xzf ./mount-s3.tar.gz
   ```

1. Add the `mount-s3` binary to your `PATH` environment variable. In your `$HOME/.profile` file, append the following line:

   ```
   export PATH=$PATH:/opt/aws/mountpoint-s3/bin
   ```

   Save the `.profile` file, and run the following command:

   ```
   source $HOME/.profile
   ```

1. Verify that Mountpoint for Amazon S3 is successfully installed by running the following command:

   ```
   mount-s3 --version
   ```

   You should see output similar to the following:

   ```
   mount-s3 1.21.0
   ```

## Verifying the signature of the Mountpoint for Amazon S3 package
<a name="mountpoint-install-verify"></a><a name="verify"></a>

1. Install GnuPG (the `gpg` command). It is required to verify the authenticity and integrity of a downloaded Mountpoint for Amazon S3 package. GnuPG is installed by default on Amazon Linux Amazon Machine Images (AMIs). After you install GnuPG, proceed to step 2. 

1. Download the Mountpoint public key by running the following command:

   ```
   wget https://s3.amazonaws.com/mountpoint-s3-release/public_keys/KEYS
   ```

1. Import the Mountpoint public key into your keyring by running the following command:

   ```
   gpg --import KEYS
   ```

1. Verify the fingerprint of the Mountpoint public key by running the following command:

   ```
   gpg --fingerprint mountpoint-s3@amazon.com
   ```

   Confirm that the displayed fingerprint string matches one of the following:

   ```
   8AEF E705 EBE3 29C0 948C  75A6 6F1C 3B3A EF4B 030B
   673F E406 1506 BB46 9A0E  F857 BE39 7A52 B086 DA5A (older key)
   ```

   If the fingerprint string doesn't match, do not finish installing Mountpoint, and contact [AWS Support](https://aws.amazon.com/premiumsupport/).

1. Download the package signature file. Replace `signature-link` with the appropriate signature link from the preceding sections.

   ```
   wget signature-link
   ```

1. Verify the signature of the downloaded package by running the following command. Replace `signature-filename` with the file name from the previous step.

   ```
   gpg --verify signature-filename
   ```

   For example, on RPM-based distributions, including Amazon Linux, enter the following command:

   ```
   gpg --verify mount-s3.rpm.asc
   ```

1. The output should include the phrase `Good signature`. If the output includes the phrase `BAD signature`, redownload the Mountpoint package file and repeat these steps. If the issue persists, do not finish installing Mountpoint, and contact [AWS Support](https://aws.amazon.com/premiumsupport/). 

   The output may include a warning that the key is not certified with a trusted signature. This does not indicate a problem. It only means that you have not independently verified the Mountpoint public key.

# Configuring and using Mountpoint
<a name="mountpoint-usage"></a>

To use Mountpoint for Amazon S3, your host needs valid AWS credentials with access to the Amazon S3 bucket or buckets that you would like to mount. For different ways to authenticate, see Mountpoint [AWS Credentials](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#aws-credentials) on GitHub. 

For example, you can create a new AWS Identity and Access Management (IAM) user and role for this purpose. Make sure that this role has access to the bucket or buckets that you would like to mount. You can [pass the IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html) to your Amazon EC2 instance with an instance profile. 

**Topics**
+ [Using Mountpoint for Amazon S3](#using-mountpoint)
+ [Configuring caching in Mountpoint](#mountpoint-caching)

## Using Mountpoint for Amazon S3
<a name="using-mountpoint"></a>

Use Mountpoint for Amazon S3 to do the following:

1. Mount your Amazon S3 buckets.

   1. You can mount Amazon S3 buckets manually by using the `mount-s3` command. 

      In the following example, replace `amzn-s3-demo-bucket` with the name of your S3 bucket, and replace `~/mnt` with the directory on your host where you want your S3 bucket to be mounted.

      ```
      mkdir ~/mnt
      mount-s3 amzn-s3-demo-bucket ~/mnt
      ```

      Because the Mountpoint client runs in the background by default, the `~/mnt` directory now gives you access to the objects in your Amazon S3 bucket.

   1. Alternatively, since Mountpoint v1.18, you can configure automatic mounting of Amazon S3 buckets when an instance starts up or reboots. 

      For existing or running Amazon EC2 instances, open the `/etc/fstab` file on your Linux system and add a line for the mount. For example, to mount *amzn-s3-demo-bucket* with the prefix `example-prefix/` at the system path `/mnt/mountpoint`, add the following line. To use this example, replace the *user input placeholders* with your own information. 

      ```
      s3://amzn-s3-demo-bucket/example-prefix/ /mnt/mountpoint mount-s3 _netdev,nosuid,nodev,nofail,rw 0 0
      ```

      See the following table for an explanation of the options used in the example.    
<a name="auto-mount-commands"></a>[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonS3/latest/userguide/mountpoint-usage.html)

      For new Amazon EC2 instances, you can modify user data on an Amazon EC2 template and set up the `fstab` file as follows. To use the following example, replace the *user input placeholders* with your own information.

      ```
      #!/bin/bash -e
      MP_RPM=$(mktemp --suffix=.rpm)
      curl https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.rpm > $MP_RPM
      yum install -y $MP_RPM
      rm $MP_RPM
      
      MNT_PATH=/mnt/mountpoint
      echo "s3://amzn-s3-demo-bucket/ ${MNT_PATH} mount-s3 _netdev,nosuid,nodev,rw,allow-other,nofail" >> /etc/fstab
      mkdir $MNT_PATH
      
      systemctl daemon-reload
      mount -a
      ```

1. Access the objects in your Amazon S3 bucket through Mountpoint.

   After you mount your bucket locally, you can use common Linux commands, such as `cat` or `ls`, to work with your S3 objects. Mountpoint for Amazon S3 interprets keys in your Amazon S3 bucket as file system paths by splitting them on the forward slash (`/`) character. For example, if you have the object key `Data/2023-01-01.csv` in your bucket, you will have a directory named `Data` in your Mountpoint file system, with a file named `2023-01-01.csv` inside it. 
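   The key-to-path mapping described above can be illustrated with a few lines of Python. This stands in for Mountpoint's actual implementation (which is written in Rust) and only demonstrates the splitting rule; the function name `key_to_path` is our own:

```python
def key_to_path(key):
    """Split an S3 object key on '/' into (directory parts, file name),
    mirroring how Mountpoint presents object keys as file system paths."""
    *dirs, name = key.split("/")
    return dirs, name
```

   For example, `key_to_path("Data/2023-01-01.csv")` yields the directory `Data` containing the file `2023-01-01.csv`.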

   Mountpoint for Amazon S3 intentionally does not implement the full [POSIX](https://en.wikipedia.org/wiki/POSIX) standard specification for file systems. Mountpoint is optimized for workloads that need high-throughput read and write access to data stored in Amazon S3 through a file system interface, but that otherwise do not rely on file system features. For more information, see Mountpoint for Amazon S3 [file system behavior](https://github.com/awslabs/mountpoint-s3/blob/main/doc/SEMANTICS.md) on GitHub. Customers that need richer file system semantics should consider other AWS file services, such as [Amazon Elastic File System (Amazon EFS) ](https://aws.amazon.com/efs/) or [Amazon FSx](https://aws.amazon.com/fsx/).

   

1. Unmount your Amazon S3 bucket by using the `umount` command. This command unmounts your S3 bucket and exits Mountpoint. 

   To use the following example command, replace `~/mnt` with the directory on your host where your S3 bucket is mounted.

   ```
   umount ~/mnt
   ```
**Note**  
To get a list of options for this command, run `umount --help`.

For additional Mountpoint configuration details, see [Amazon S3 bucket configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#s3-bucket-configuration) and [file system configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#file-system-configuration) on GitHub.

## Configuring caching in Mountpoint
<a name="mountpoint-caching"></a>

Mountpoint for Amazon S3 supports different types of data caching. To accelerate repeated read requests, you can opt in to the following: 
+ **Local cache** – You can use a local cache in your Amazon EC2 instance storage or an Amazon Elastic Block Store volume. If you repeatedly read the same data from the same compute instance and if you have unused space in your local instance storage for the repeatedly read dataset, you should opt in to a local cache. 
+ **Shared cache** – You can use a shared cache on S3 Express One Zone. If you repeatedly read small objects from multiple compute instances, or if you don't know the size of your repeatedly read dataset and want to benefit from the elasticity of cache size, you should opt in to the shared cache. After you opt in, Mountpoint retains objects with sizes up to one megabyte in a directory bucket that uses S3 Express One Zone. 
+ **Combined local and shared cache** – If you have unused space in your local cache but also want a shared cache across multiple instances, you can opt in to both a local cache and shared cache. 

Caching in Mountpoint is ideal for use cases where you repeatedly read data that doesn’t change between reads. For example, you can use caching with machine learning training jobs that need to read a training dataset multiple times to improve model accuracy.

For more information about how to configure caching in Mountpoint, see the following examples.

**Topics**
+ [Local cache](#local-cache-example)
+ [Shared cache](#shared-cache-example)
+ [Combined local and shared cache](#shared-local-cache-example)

### Local cache
<a name="local-cache-example"></a>

You can opt in to a local cache with the `--cache CACHE_PATH` flag. In the following example, replace *`CACHE_PATH`* with the filepath to the directory that you want to cache your data in. Replace *`amzn-s3-demo-bucket`* with the name of your Amazon S3 bucket, and replace *`~/mnt`* with the directory on your host where you want your S3 bucket to be mounted.

```
mkdir ~/mnt
mount-s3 --cache CACHE_PATH amzn-s3-demo-bucket ~/mnt
```

When you opt in to local caching while mounting an Amazon S3 bucket, Mountpoint creates an empty sub-directory at the configured cache location if that sub-directory doesn’t already exist. When you first mount a bucket and when you unmount it, Mountpoint deletes the contents of the local cache.

**Important**  
If you enable local caching, Mountpoint will persist unencrypted object content from your mounted Amazon S3 bucket at the local cache location provided at mount. In order to protect your data, you should restrict access to the data cache location by using file system access control mechanisms.
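For example, on Linux you might create the cache directory with owner-only permissions before passing it to `--cache` (a sketch; the path is a placeholder):

```
# Create a cache directory that only the current user can read or write.
CACHE_PATH="${TMPDIR:-/tmp}/mountpoint-cache"
mkdir -p "$CACHE_PATH"
chmod 700 "$CACHE_PATH"
```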

### Shared cache
<a name="shared-cache-example"></a>

If you repeatedly read small objects (up to 1 MB) from multiple compute instances, or if the dataset that you repeatedly read often exceeds the size of your local cache, you should use a shared cache in [S3 Express One Zone](https://aws.amazon.com/s3/storage-classes/express-one-zone/). When you read the same data repeatedly from multiple instances, a shared cache improves latency by avoiding redundant requests to your mounted Amazon S3 bucket. 

After you opt in to the shared cache, you pay for the data cached in your directory bucket in S3 Express One Zone. You also pay for requests made against your data in the directory bucket. For more information, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). Mountpoint never deletes cached objects from directory buckets. To manage your storage costs, you should set up a [Lifecycle policy on your directory bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/directory-buckets-objects-lifecycle.html) so that Amazon S3 expires the cached data in S3 Express One Zone after a period of time that you specify. For more information, see [Mountpoint for Amazon S3 caching configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#caching-configuration) on GitHub.
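As a sketch (the rule ID and the seven-day expiration period are placeholder values), a Lifecycle configuration that expires cached objects could look like the following:

```
{
    "Rules": [
        {
            "ID": "ExpireMountpointSharedCache",
            "Status": "Enabled",
            "Filter": {},
            "Expiration": { "Days": 7 }
        }
    ]
}
```

You could apply a rule like this to the cache directory bucket, for example by using the `aws s3api put-bucket-lifecycle-configuration` command in the AWS CLI.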

To opt in to caching in S3 Express One Zone when you mount an Amazon S3 bucket to your compute instance, use the `--cache-xz` flag and specify a directory bucket as your cache location. In the following example, replace the *user input placeholders*.

```
mount-s3 amzn-s3-demo-bucket ~/mnt --cache-xz amzn-s3-demo-bucket--usw2-az1--x-s3
```

### Combined local and shared cache
<a name="shared-local-cache-example"></a>

If you have unused space on your instance but you also want to use a shared cache across multiple instances, you can opt in to both a local cache and a shared cache. With this caching configuration, you can avoid redundant read requests from the same instance to the shared cache in the directory bucket when the required data is cached in local storage. This can reduce request costs and improve performance.

To opt in to both a local cache and a shared cache when you mount an Amazon S3 bucket, specify both cache locations by using the `--cache` and `--cache-xz` flags. To use the following example to opt in to both a local and a shared cache, replace the *user input placeholders*.

```
mount-s3 amzn-s3-demo-bucket ~/mnt --cache /path/to/mountpoint/cache --cache-xz amzn-s3-demo-bucket--usw2-az1--x-s3
```

For more information, see [Mountpoint for Amazon S3 caching configuration](https://github.com/awslabs/mountpoint-s3/blob/main/doc/CONFIGURATION.md#caching-configuration) on GitHub. 

**Important**  
If you enable shared caching, Mountpoint will copy object content from your mounted Amazon S3 bucket into the S3 directory bucket that you provide as your shared cache location, making it accessible to any caller with access to the S3 directory bucket. To protect your cached data, you should follow the [Security best practices for Amazon S3](security-best-practices.md) to ensure that your buckets use the correct policies and are not publicly accessible. You should use a directory bucket dedicated to Mountpoint shared caching and grant access only to Mountpoint clients.

# Troubleshooting Mountpoint
<a name="mountpoint-troubleshooting"></a>

Mountpoint for Amazon S3 is backed by AWS Support. If you need assistance, contact the [AWS Support Center](https://console.aws.amazon.com/support/home#/). 

You can also review and submit Mountpoint [Issues](https://github.com/awslabs/mountpoint-s3/issues) on GitHub.

If you discover a potential security issue in this project, we ask that you notify AWS Security through our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Do not create a public GitHub issue.

If your application behaves unexpectedly with Mountpoint, you can inspect your log information to diagnose the problem. 

**Logging**

By default, Mountpoint emits high-severity log information to [syslog](https://datatracker.ietf.org/doc/html/rfc5424). 

To view logs on most modern Linux distributions, including Amazon Linux, run the following `journald` command:

```
journalctl -e SYSLOG_IDENTIFIER=mount-s3
```

On other Linux systems, `syslog` entries are likely written to a file such as `/var/log/syslog`.
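For example, a small helper like the following (a sketch; the function name is hypothetical and the log path varies by distribution) filters Mountpoint entries out of such a file:

```
# Hypothetical helper: show recent Mountpoint entries from a syslog file.
show_mountpoint_logs() {
  log_file="${1:-/var/log/syslog}"
  if [ -f "$log_file" ]; then
    grep 'mount-s3' "$log_file" | tail -n 20
  fi
}
```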

You can use these logs to troubleshoot your application. For example, if your application tries to overwrite an existing file, the operation fails, and you will see a line similar to the following in the log:

```
[WARN] open{req=12 ino=2}: mountpoint_s3::fuse: open failed: inode error: inode 2 (full key "README.md") is not writable
```

For more information, see Mountpoint for Amazon S3 [Logging](https://github.com/awslabs/mountpoint-s3/blob/main/doc/LOGGING.md) on GitHub.

# Working with Storage Browser for Amazon S3
<a name="storage-browser"></a>

[Storage Browser for S3](https://aws.amazon.com/s3/features/storage-browser/) is an open source component that you can add to your web application to provide your end users with a simple graphical interface for data stored in Amazon S3. With Storage Browser for S3, you can provide authorized end users access to browse, download, upload, copy, and delete data in S3 directly from your own applications.

Storage Browser for S3 supports the following operations for files: `LIST`, `GET`, `PUT`, `COPY`, `UPLOAD`, and `DELETE`. Storage Browser for S3 displays only the data that your end users are authorized to access, and it optimizes upload requests to deliver high-throughput data transfer. Storage Browser also optimizes performance for faster load times, calculates checksums of the data that your end users upload, and accepts objects only after confirming that data integrity was maintained in transit over the public internet. You can control access to your data based on your end users’ identity by using AWS security and identity services, or your own managed services. You can also customize Storage Browser to match your existing application’s design and branding.

Storage Browser for S3 is only available for web and intranet applications built with the React framework. You can use Storage Browser to access Amazon S3 objects in all storage classes except S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, the S3 Intelligent-Tiering Archive Access tier, and the S3 Intelligent-Tiering Deep Archive Access tier.

Storage Browser for S3 is available to use with your web applications in the [AWS Amplify React](https://ui.docs.amplify.aws/) library. For more information about Storage Browser, see [Storage Browser for S3](https://aws.amazon.com/s3/features/storage-browser/).

**Topics**
+ [Using Storage Browser for S3](using-storagebrowser.md)
+ [Installing Storage Browser for S3](installing-storagebrowser.md)
+ [Setting up Storage Browser for S3](setup-storagebrowser.md)
+ [Configuring Storage Browser for S3](s3config-storagebrowser.md)
+ [Troubleshooting Storage Browser for S3](troubleshooting-storagebrowser.md)

# Using Storage Browser for S3
<a name="using-storagebrowser"></a>

In Storage Browser for S3, a *location* is an S3 general purpose bucket or prefix that you grant end users access to by using [S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html), IAM policies, or your own managed authorization service. After you've authorized your end users to access a specific location, they can work with any data within that location.

The Storage Browser for S3 user interface has four main views:
+ **Home page:** The initial view, which lists the root-level S3 locations that your end users can access, along with the permissions (READ/WRITE/READWRITE) for each location. 
+ **Location details:** This view allows users to browse files and folders in S3, and upload or download files. (Note that in Storage Browser for S3, *objects* are known as files, and *prefixes* and *buckets* are known as folders.)
+ **Location action:** After a user chooses an action (such as **Upload**) in Storage Browser, it opens up another view of the file location.
+ **Vertical ellipsis:** The vertical ellipsis icon opens the drop-down list of actions.

When using Storage Browser for S3, be aware of the following limitations:
+ Folders starting or ending with dots (.) aren’t supported.
+ S3 Access Grants with WRITE only permission isn't supported.
+ Storage Browser for S3 supports the `PUT` operation for files up to 160 GB in size.
+ Storage Browser for S3 only supports the `COPY` operation for files smaller than 5 GB. If the file size exceeds 5 GB, Storage Browser fails the request.
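As an illustration, the last two size limits can be pre-checked on the client before handing a file to Storage Browser (a sketch; the helper names are hypothetical, and the limits are taken from the list above):

```
# Hypothetical pre-checks for the Storage Browser size limits listed above.
put_limit=$((160 * 1024 * 1024 * 1024))   # PUT: files up to 160 GB
copy_limit=$((5 * 1024 * 1024 * 1024))    # COPY: files smaller than 5 GB

can_put()  { [ "$1" -le "$put_limit" ]; }
can_copy() { [ "$1" -lt "$copy_limit" ]; }
```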

# Installing Storage Browser for S3
<a name="installing-storagebrowser"></a>

The fastest way to get started with Storage Browser is to clone one of the sample projects on GitHub. These sample projects can help you deploy production-ready web apps for Storage Browser with preset integrations of AWS Identity and Access Management (IAM) and other AWS services, so that you can quickly connect authorized end users to data in S3.

For more information, see [Quick start](https://ui.docs.amplify.aws/react/connected-components/storage/storage-browser#quick-start) in the *Amplify Dev Center*.

## Installing Storage Browser for S3 from GitHub
<a name="install-storagebrowser-dependencies"></a>

Alternatively, you can install Storage Browser for S3 from the latest version of `aws-amplify/ui-react-storage` and `aws-amplify` packages in the [https://github.com/aws-amplify](https://github.com/aws-amplify) GitHub repository to start integrating Storage Browser into your existing application. When installing Storage Browser for S3, make sure to add the following dependencies to your `package.json` file:

```
"dependencies": {
    "aws-amplify/ui-react-storage": "latest",
    "aws-amplify": "latest",
  }
```

Alternatively, you can add the dependencies using Node Package Manager (NPM):

```
npm i --save @aws-amplify/ui-react-storage aws-amplify
```

# Setting up Storage Browser for S3
<a name="setup-storagebrowser"></a>

To connect end users with Amazon S3 *locations*, you must first set up an authentication and authorization method. You can do so with Storage Browser in one of three ways:
+ [Method 1: Managing data access for your customers and third party partners](#setup-storagebrowser-method1)
+ [Method 2: Managing data access for your IAM principals for your AWS account](#setup-storagebrowser-method2)
+ [Method 3: Managing data access at scale](#setup-storagebrowser-method3)

## Method 1: Managing data access for your customers and third party partners
<a name="setup-storagebrowser-method1"></a>

With this method, you can use [AWS Amplify Auth](https://docs.amplify.aws/react/build-a-backend/auth/set-up-auth/) to manage access control and security for files. This method is ideal when you want to connect your customers or third party partners with data in S3. With this option, your customers can authenticate using social or enterprise identity providers.

You provide IAM credentials to your end users and third party partners using AWS Amplify Auth with an S3 bucket that’s configured to use Amplify Storage. AWS Amplify Auth is built on [Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html), a fully managed customer identity and access management service where you can authenticate and authorize users from a built-in user directory or enterprise directory, or from consumer identity providers. The Amplify authorization model defines which prefixes the current authenticated user can access. For more information about how to set up authorization for AWS Amplify, see [Set up storage](https://docs.amplify.aws/react/build-a-backend/storage/set-up-storage/).

To initialize the component with the Amplify authentication and storage methods, add the following code snippet to your web application:

```
import {
  createAmplifyAuthAdapter,
  createStorageBrowser,
} from '@aws-amplify/ui-react-storage/browser';
import "@aws-amplify/ui-react-storage/styles.css";
import { Amplify } from 'aws-amplify';

import config from './amplify_outputs.json';

Amplify.configure(config);

export const { StorageBrowser } = createStorageBrowser({
  config: createAmplifyAuthAdapter(),
});
```

## Method 2: Managing data access for your IAM principals for your AWS account
<a name="setup-storagebrowser-method2"></a>

If you want to manage access for your IAM principals or your AWS account directly, you can create an IAM role that has permissions to invoke the [GetDataAccess](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_GetDataAccess.html) S3 API operation. To set this up, you must create an S3 Access Grants instance that maps permissions for S3 general purpose buckets and prefixes to the specified IAM identities. The Storage Browser component (which must be called on the client side after obtaining the IAM credentials) then invokes the [ListCallerAccessGrants](https://docs.aws.amazon.com/AmazonS3/latest/API/API_control_ListCallerAccessGrants.html) S3 API operation to fetch the grants available to the identity requester and populate the locations in the component. After you obtain the `s3:GetDataAccess` permission, the Storage Browser component uses those credentials to request access to data in S3.

```
import {
  createManagedAuthAdapter,
  createStorageBrowser,
} from '@aws-amplify/ui-react-storage/browser';
import "@aws-amplify/ui-react-storage/styles.css";

export const { StorageBrowser } = createStorageBrowser({
  config: createManagedAuthAdapter({
    credentialsProvider: async (options?: { forceRefresh?: boolean }) => {
      // return your credentials object
      return {
        credentials: {
          accessKeyId: 'my-access-key-id',
          secretAccessKey: 'my-secret-access-key',
          sessionToken: 'my-session-token',
          expiration: new Date()
        },
      }
    },
    // AWS `region` and `accountId`
    region: '',
    accountId: '',
    // call `onAuthStateChange` when end user auth state changes 
    // to clear sensitive data from the `StorageBrowser` state
    registerAuthListener: (onAuthStateChange) => {},
  })
});
```

## Method 3: Managing data access at scale
<a name="setup-storagebrowser-method3"></a>

If you want to associate an [S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) instance to your IAM Identity Center for a more scalable solution (such as providing data access to your whole company), you can request data from Amazon S3 on behalf of the current authenticated user. For example, you can grant user groups in your corporate directory access to your data in S3. This approach allows you to centrally manage S3 Access Grants permissions for your users and groups, including the ones hosted on external providers such as Microsoft Entra, Okta, and others.

When using this method, the [integration with the IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html) allows you to use existing user directories. Another benefit of an IAM Identity Center trusted identity propagation is that each [AWS CloudTrail data event for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html) contains a direct reference to the end user identity that accessed the S3 data.

If you have an application that supports OAuth 2.0 and your users need access from these applications to AWS services, we recommend that you use trusted identity propagation. With trusted identity propagation, a user can sign in to an application, and that application can pass the user’s identity in any requests that access data in AWS services. This application interacts with IAM Identity Center on behalf of any authenticated users. For more information, see [Using trusted identity propagation with customer managed applications](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation-using-customermanagedapps-setup.html).

[Trusted identity propagation](https://docs.aws.amazon.com//singlesignon/latest/userguide/trustedidentitypropagation-overview.html) is an AWS IAM Identity Center feature that administrators of connected AWS services can use to grant and audit access to service data. Access to this data is based on user attributes such as group associations. Setting up trusted identity propagation requires collaboration between the administrators of connected AWS services and the IAM Identity Center administrators. For more information, see [Prerequisites and considerations](https://docs.aws.amazon.com//singlesignon/latest/userguide/trustedidentitypropagation-overall-prerequisites.html).

### Setup
<a name="setup-workflow-storagebrowser-method3"></a>

To set up Storage Browser authentication in the AWS Management Console using [S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants.html) and [IAM Identity Center trusted identity propagation](https://docs.aws.amazon.com/singlesignon/latest/userguide/trustedidentitypropagation.html), your applications must request data from Amazon S3 on behalf of the current authenticated user. With this approach, you can give users or groups of users from your corporate directory direct access to your S3 buckets, prefixes, or objects. This means that your application won’t need to map any users to an IAM principal.

The following workflow outlines the steps for setting up Storage Browser for S3, using IAM Identity Center and S3 Access Grants:


| Steps | Description | 
| --- | --- | 
| 1 | [Enable IAM Identity Center for your AWS Organizations](#enable-iam-idc-org) | 
| 2 | [Configure AWS Identity and Access Management Identity Center federation](#configure-iam-idc) | 
| 3 | [Add a trusted token issuer in the AWS Identity and Access Management Identity Center console](#add-trusted-token-issuer-idc). The trusted token issuer represents your external identity provider (IdP) within IAM Identity Center, enabling it to recognize identity tokens for your application’s authenticated users. | 
| 4 | [Create an IAM role for the `bootstrap` application and `identity bearer`](#create-iam-role-bootstrap) | 
| 5 | [Create and configure your application in IAM Identity Center](#create-app-iam-idc). This application interacts with IAM Identity Center on behalf of authenticated users. | 
| 6 | [Add S3 Access Grants as a trusted application for identity propagation](#add-s3-ag-app). This step connects your application to S3 Access Grants, so that it can make requests to S3 Access Grants on behalf of authenticated users. | 
| 7 | [Create a grant to a user or group](#create-grant-user-group). This step syncs users from IAM Identity Center with the System for Cross-domain Identity Management (SCIM). SCIM keeps your IAM Identity Center identities in sync with identities from your identity provider (IdP). | 
| 8 | [Create your Storage Browser for S3 component](#create-storage-browser-component) | 

### Enable IAM Identity Center for your AWS Organizations
<a name="enable-iam-idc-org"></a>

To enable IAM Identity Center for your AWS Organizations, perform the following steps:

1. Sign in to the AWS Management Console, using one of these methods:

   1. **New to AWS (root user)** – Sign in as the account owner by choosing **Root user** and entering your AWS account email address. On the next page, enter your password.

   1. **Already using AWS (IAM credentials)** – Sign in using your IAM credentials with administrative permissions.

1. Open the [IAM Identity Center console](https://console.aws.amazon.com/singlesignon).

1. Under **Enable IAM Identity Center**, choose **Enable**.
**Note**  
IAM Identity Center requires the setup of AWS Organizations. If you haven't set up an organization, you can choose to have AWS create one for you. Choose **Create AWS organization** to complete this process.

1. Choose **Enable with AWS Organizations**.

1. Choose **Continue**.

1. (Optional) Add any tags that you want to associate with this organization instance.

1. (Optional) Configure the delegated administration.
**Note**  
If you’re using a multi-account environment, we recommend that you configure delegated administration. With delegated administration, you can limit the number of people who require access to the management account in AWS Organizations. For more information, see [Delegated administration](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html).

1. Choose **Save**.

AWS Organizations automatically sends a verification email to the address that is associated with your management account. There might be a delay before you receive the verification email. Make sure to verify your email address within 24 hours, before your verification email expires.

### Configure AWS Identity and Access Management Identity Center federation
<a name="configure-iam-idc"></a>

To use Storage Browser for S3 with corporate directory users, you must configure IAM Identity Center to use an external identity provider (IdP). You can use the preferred identity provider of your choice. However, be aware that each identity provider uses different configuration settings. For tutorials on using different external identity providers, see [IAM Identity Center source tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html).

**Note**  
Make sure to record the issuer URL and the audience attributes of the identity provider that you’ve configured because you will need to refer to them in later steps. If you don’t have the required access or permissions to configure an IdP, you might need to contact the administrator of the external IdP to obtain them. 

### Add a trusted token issuer in the AWS Identity and Access Management Identity Center console
<a name="add-trusted-token-issuer-idc"></a>

The trusted token issuer represents your external identity provider within AWS IAM Identity Center and enables it to recognize identity tokens for your application’s authenticated users. The account owner of the IAM Identity Center instance in your AWS Organizations must perform these steps, either in the IAM Identity Center console or programmatically. 

To add a trusted token issuer in the AWS Identity and Access Management Identity Center console, perform the following steps:

1. Open the [IAM Identity Center console](https://console.aws.amazon.com/singlesignon).

1. Choose **Settings**.

1. Choose the **Authentication** tab.

1. Navigate to the **Trusted token issuers** section, and fill out the following details:

   1. Under **Issuer URL**, enter the URL of the external IdP that serves as the trusted token issuer. You might need to contact the administrator of the external IdP to obtain this information. For more information, see [Using applications with a trusted token issuer](https://docs.aws.amazon.com/singlesignon/latest/userguide/using-apps-with-trusted-token-issuer.html).

   1. Under **Trusted token issuer name**, enter a name for the trusted token issuer. This name will appear in the list of trusted token issuers that you can select in *Step 8*, when an application resource is configured for identity propagation.

1. Update your **Map attributes** to your preferred application attribute, where each identity provider attribute is mapped to an IAM Identity Center attribute. For example, you might want to [map the application attribute](https://docs.aws.amazon.com/singlesignon/latest/userguide/mapawsssoattributestoapp.html) `email` to the IAM Identity Center user attribute `email`. To see the list of allowed user attributes in IAM Identity Center, see the table in [Attribute mappings for AWS Managed Microsoft AD directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/attributemappingsconcept.html).

1. (Optional) If you want to add a resource tag, enter the key and value pair. To add multiple resource tags, choose **Add new tag** to generate a new entry and enter the key and value pairs.

1. Choose **Create trusted token issuer**.

1. After you finish creating the trusted token issuer, contact the application administrator to let them know the name of the trusted token issuer, so that they can confirm that the trusted token issuer is visible in the applicable console. 

1. Make sure the application administrator selects this trusted token issuer in the applicable console to enable user access to the application from applications that are configured for trusted identity propagation.

### Create an IAM role for the `bootstrap` application and `identity bearer`
<a name="create-iam-role-bootstrap"></a>

To ensure that the `bootstrap` application and `identity bearer` users can properly work with each other, make sure to [create two IAM roles](https://docs.aws.amazon.com/managedservices/latest/onboardingguide/create-iam-role.html). One IAM role is required for the `bootstrap` application, and the other is used by the identity bearer, that is, the end users who access the web application that requests access through S3 Access Grants. The `bootstrap` application receives the token issued by the identity provider and invokes the `CreateTokenWithIAM` API operation, exchanging this token for one issued by IAM Identity Center.

Create an IAM role, such as `bootstrap-role`, with permissions such as the following. The following example IAM policy gives permissions to the `bootstrap-role` to perform the token exchange:

```
{
    "Version": "2012-10-17",		 	 	 
    "Statement": [{
        "Action": [
            "sso-oauth:CreateTokenWithIAM",
        ],
        "Resource": "arn:${Partition}:sso::${AccountId}:application/${InstanceId}/${ApplicationId}",
        "Effect": "Allow"
    },
    {
        "Action": [
            "sts:AssumeRole",
            "sts:SetContext"
        ],
        "Resource": "arn:aws:iam::${AccountId}:role/identity-bearer-role",
        "Effect": "Allow"
    }]
}
```

Then, create a second IAM role (such as `identity-bearer-role`), which the identity broker uses to generate the IAM credentials. The IAM credentials returned from the identity broker to the web application are used by the Storage Browser for S3 component to allow access to S3 data:

```
{
    "Action": [
        "s3:GetDataAccess",
        "s3:ListCallerAccessGrants",
    ],
    "Resource": "arn:${Partition}:s3:${Region}:${Account}:access-grants/default",
    "Effect": "Allow"
}
```

This IAM role (`identity-bearer-role`) must use a trust policy with the following statement:

```
{
   "Effect": "Allow",
   "Principal": {
      "AWS": "arn:${Partition}:iam::${Account}:role/${RoleNameWithPath}"
   },
   "Action": [
       "sts:AssumeRole",
       "sts:SetContext"
   ]
}
```

### Create and configure your application in IAM Identity Center
<a name="create-app-iam-idc"></a>

**Note**  
Before you begin, make sure that you’ve created the required IAM roles from the previous step. You’ll need to specify one of these IAM roles in this step.

To create and configure a customer managed application in AWS IAM Identity Center, perform the following steps:

1. Open the [IAM Identity Center console](https://console.aws.amazon.com/singlesignon).

1. Choose **Applications**.

1. Choose the **Customer managed** tab.

1. Choose **Add application**.

1. On the **Select application type** page, under **Setup preference**, choose **I have an application I want to set up**.

1. Under **Application type**, choose **OAuth 2.0**.

1. Choose **Next**. The **Specify application** page is displayed.

1. Under the **Application name and description** section, enter a **Display name** for the application, such as **storage-browser-oauth**.

1. Enter a **Description**. The application description appears in the IAM Identity Center console and API requests, but not in the AWS access portal.

1. Under **User and group assignment method**, choose **Do not require assignments**. This option allows all authorized IAM Identity Center users and groups access to this application.

1. Under **AWS access portal**, enter an application URL where users can access the application.

1. (Optional) If you want to add a resource tag, enter the key and value pair. To add multiple resource tags, choose **Add new tag** to generate a new entry and enter the key and value pairs.

1. Choose **Next**. The **Specify authentication** page is displayed.

1. Under **Authentication with trusted token issuer**, use the checkbox to select the trusted token issuer that you previously created.

1. Under **Configure selected trusted token issuers**, enter the [aud claim](https://docs.aws.amazon.com/singlesignon/latest/userguide/trusted-token-issuer-configuration-settings.html#trusted-token-issuer-aud-claim). The **aud claim** identifies the audience of the JSON Web Token (JWT), and it is the name by which the trusted token issuer identifies this application.
**Note**  
You might need to contact the administrator of the external IdP to obtain this information.

1. Choose **Next**. The **Specify authentication credentials** page is displayed.

1. Under **Configuration method**, choose **Enter one or more IAM roles**.

1. Under **Enter IAM roles**, add the [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) name or Amazon Resource Name (ARN) for the identity bearer token. You must enter the IAM role that you created in the previous step for the identity broker application (for example, **bootstrap-role**).

1. Choose **Next**.

1. On the **Review and configure** page, review the details of your application configuration. If you need to modify any of the settings, choose **Edit** for the section that you want to change, and then make your changes.

1. Choose **Submit**. The details page of the application that you just added is displayed.

After you’ve set up your applications, your users can access your applications from within their AWS access portal based on the [permission sets that you’ve created](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-create-a-permission-set.html) and the [user access that you’ve assigned](https://docs.aws.amazon.com/singlesignon/latest/userguide/get-started-assign-account-access-user.html).

### Add S3 Access Grants as a trusted application for identity propagation
<a name="add-s3-ag-app"></a>

After you set up your customer managed application, you must specify S3 Access Grants for identity propagation. S3 Access Grants vends credentials for users to access Amazon S3 data. When you sign in to your customer managed application, S3 Access Grants will pass your user identity to the trusted application.

 **Prerequisite:** Make sure that you’ve set up S3 Access Grants (such as [creating an S3 Access Grants instance](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-instance-create.html) and [registering a location](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-location-register.html)) before following these steps. For more information, see [Getting started with S3 Access Grants](https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-grants-get-started.html).

To add S3 Access Grants for identity propagation to your customer managed application, perform the following steps:

1. Open the [IAM Identity Center console](https://console.aws.amazon.com/singlesignon).

1. Choose **Applications**.

1. Choose the **Customer managed** tab.

1. In the **Customer managed applications** list, select the OAuth 2.0 application for which you want to initiate access requests. This is the application that your users will sign in to.

1. On the **Details** page, under **Trusted applications for identity propagation**, choose **Specify trusted applications**.

1. Under **Setup type**, select **Individual applications and specify access**, and then choose **Next**.

1. On the **Select service** page, choose **S3 Access Grants**. S3 Access Grants has applications that you can use to define your own web application for trusted identity propagation.

1. Choose **Next**. You'll select your applications in the next step.

1. On the **Select applications** page, choose **Individual applications**, select the checkbox for each application that can receive requests for access, and then choose **Next**.

1. On the **Configure access** page, under **Configuration method**, choose either of the following: 
   + **Select access per application –** Select this option to configure different access levels for each application. Choose the application for which you want to configure the access level, and then choose **Edit access**. In **Level of access to apply**, change the access levels as needed, and then choose **Save changes**.
   + **Apply same level of access to all applications** – Select this option if you don't need to configure access levels on a per-application basis.

1. Choose **Next**.

1. On the **Review configuration** page, review the choices that you made. 
**Note**  
Make sure that the `s3:access_grants:read_write` permission is granted for your users. This permission allows your users to retrieve credentials to access Amazon S3. To limit access to write operations, use either the IAM policy that you created previously or S3 Access Grants.

1. To make changes, choose **Edit** for the configuration section that you want to make changes to. Then, make the required changes and choose **Save changes**.

1. Choose **Trust applications** to add the trusted application for identity propagation.

### Create a grant to a user or group
<a name="create-grant-user-group"></a>

In this step, you use IAM Identity Center to provision your users. You can use SCIM for [automated or manual provisioning of users and groups](https://docs.aws.amazon.com/singlesignon/latest/userguide/provision-automatically.html). SCIM keeps your IAM Identity Center identities in sync with identities from your identity provider (IdP). This includes any provisioning, updates, and deprovisioning of users between your IdP and IAM Identity Center.

**Note**  
This step is required because when S3 Access Grants is used with IAM Identity Center, local IAM Identity Center users aren’t used. Instead, users must be synchronized from the identity provider with IAM Identity Center.

To synchronize users from your identity provider with IAM Identity Center, perform the following steps:

1. [Enable automatic provisioning](https://docs.aws.amazon.com/singlesignon/latest/userguide/how-to-with-scim.html).

1. [Generate an access token](https://docs.aws.amazon.com/singlesignon/latest/userguide/generate-token.html).

For examples of how to set up the identity provider with IAM Identity Center for your specific use case, see [IAM Identity Center Identity source tutorials](https://docs.aws.amazon.com/singlesignon/latest/userguide/tutorials.html).

### Create your Storage Browser for S3 component
<a name="create-storage-browser-component"></a>

After you’ve set up your IAM Identity Center instance and created grants in S3 Access Grants, open your React application. In the React application, use `createManagedAuthAdapter` to set up the authorization rules. You must provide a credentials provider to return the credentials you acquired from IAM Identity Center. You can then call `createStorageBrowser` to initialize the Storage Browser for S3 component:

```
import {
    createManagedAuthAdapter,
    createStorageBrowser,
} from '@aws-amplify/ui-react-storage/browser';
import '@aws-amplify/ui-react-storage/styles.css';

export const { StorageBrowser } = createStorageBrowser({
   config: createManagedAuthAdapter({
    credentialsProvider: async (options?: { forceRefresh?: boolean }) => {
      // return your credentials object
      return {
        credentials: {
          accessKeyId: 'my-access-key-id',
          secretAccessKey: 'my-secret-access-key',
          sessionToken: 'my-session-token',
          expiration: new Date(),
        },
      }
    },
    // AWS `region` and `accountId` of the S3 Access Grants Instance.
    region: '',
    accountId: '',
    // call `onAuthStateChange` when end user auth state changes 
    // to clear sensitive data from the `StorageBrowser` state
    registerAuthListener: (onAuthStateChange) => {},
  })
});
```

Then, create a mechanism to exchange the JSON web tokens (JWTs) from your web application with the IAM credentials from IAM Identity Center. For more information about how to exchange the JWT, see the following resources:
+ [How to develop a user-facing data application with IAM Identity Center and S3 Access Grants](https://aws.amazon.com/blogs/storage/how-to-develop-a-user-facing-data-application-with-iam-identity-center-and-s3-access-grants-part-2/) post in *AWS Storage Blog*
+ [Scaling data access with S3 Access Grants](https://aws.amazon.com/blogs/storage/scaling-data-access-with-amazon-s3-access-grants/) post in *AWS Storage Blog*
+ [S3 Access Grants workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/77b0af63-6ad2-4c94-bfc0-270eb9358c7a/en-US) on *AWS workshop studio*
+ [S3 Access Grants workshop](https://github.com/aws-samples/s3-access-grants-workshop) on *GitHub*

Then, set up an API endpoint to handle requests for fetching credentials. To validate the JSON web token (JWT) exchange, perform the following steps:

1. Retrieve the JSON web token from the authorization header for incoming requests.

1. Validate the token using the public keys from the specified JSON web key set (JWKS) URL.

1. Verify the token's expiration, issuer, subject, and audience claims.

To exchange the identity provider’s JSON web token with AWS IAM credentials, perform the following steps: 

**Tip**  
Make sure to avoid logging any sensitive information. We recommend that you use error handling controls for missing authorization, expired tokens, and other exceptions. For more information, see the [Implementing AWS Lambda error handling patterns](https://aws.amazon.com/blogs/compute/implementing-aws-lambda-error-handling-patterns/) post in the *AWS Compute Blog*.

1. Verify that the required **Permission** and **Scope** parameters are provided in the request.

1. Use the [CreateTokenWithIAM](https://docs.aws.amazon.com/singlesignon/latest/OIDCAPIReference/API_CreateTokenWithIAM.html) API operation to exchange the JSON web token for an IAM Identity Center token.
**Note**  
After the IdP JSON web token is used, it can’t be used again. A new token must be used to exchange with IAM Identity Center.

1. Use the [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html) API operation to assume a transient role using the IAM Identity Center token. Make sure to use the identity bearer role, also known as the role that carries the identity context (for example, **identity-bearer-role**) to request the credentials.

1. Return the IAM credentials to the web application.
**Note**  
Make sure that you’ve set up a proper logging mechanism. Responses are returned in a standardized JSON format with an appropriate HTTP status code.
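
As a sketch of steps 2 and 3 above, the same exchange can be exercised with the AWS CLI. The client ID, JWT assertion, account ID, and identity context values below are placeholders:

```
# Placeholder values throughout; substitute those from your own setup.
# Step 2: exchange the IdP-issued JWT for an IAM Identity Center token.
aws sso-oidc create-token-with-iam \
    --client-id "arn:aws:sso::111122223333:application/ssoins-EXAMPLE/apl-EXAMPLE" \
    --grant-type urn:ietf:params:oauth:grant-type:jwt-bearer \
    --assertion "eyJraWQ...idp-issued-jwt"

# Step 3: assume the identity bearer role, passing the identity context
# returned in the token from the previous call.
aws sts assume-role \
    --role-arn arn:aws:iam::111122223333:role/identity-bearer-role \
    --role-session-name storage-browser-session \
    --provided-contexts ProviderArn=arn:aws:iam::aws:contextProvider/IdentityCenter,ContextAssertion="identity-context-value"
```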

# Configuring Storage Browser for S3
<a name="s3config-storagebrowser"></a>

To access S3 buckets, the Storage Browser component makes REST API calls to Amazon S3. By default, [cross-origin resource sharing (CORS)](https://docs.aws.amazon.com/AmazonS3/latest/userguide/cors.html) isn’t enabled on S3 buckets. As a result, you must enable CORS for each S3 bucket that Storage Browser accesses data from.

For example, to enable CORS on your S3 bucket, you can update your CORS policy like this:

```
[
    {
        "ID": "S3CORSRuleId1",
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET",
            "HEAD",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "last-modified",
            "content-type",
            "content-length",
            "etag",
            "x-amz-version-id",
            "x-amz-request-id",
            "x-amz-id-2",
            "x-amz-cf-id",
            "x-amz-storage-class",
            "date",
            "access-control-expose-headers"
        ],
        "MaxAgeSeconds": 3000
    }
]
```
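
To apply a CORS configuration like the one above, save it to a file and use the AWS CLI `put-bucket-cors` command. The bucket and file names here are examples:

```
# Apply the CORS rules in cors.json to the bucket.
aws s3api put-bucket-cors \
    --bucket amzn-s3-demo-bucket \
    --cors-configuration file://cors.json

# Confirm that the configuration was applied.
aws s3api get-bucket-cors --bucket amzn-s3-demo-bucket
```

For production use, replace the wildcard `AllowedOrigins` with the specific origin of your web application.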

# Troubleshooting Storage Browser for S3
<a name="troubleshooting-storagebrowser"></a>

If you’re experiencing issues with Storage Browser for S3, make sure to review the following troubleshooting tips:
+ Don't use the same token (`idToken` or `accessToken`) for multiple requests. Tokens can't be reused; doing so will result in a request failure.
+ Make sure that the IAM credentials that you provide to the Storage Browser component include permissions to invoke the `s3:GetDataAccess` operation. Otherwise, your end users won’t be able to access your data.

Alternatively, you can check the following resources:
+ Storage Browser for S3 is backed by AWS Support. If you need assistance, contact the [AWS Support Center](https://console.aws.amazon.com/support/home#/).
+ If you’re having trouble with Storage Browser for S3 or would like to submit feedback, visit the [Amplify GitHub page](https://github.com/aws-amplify/amplify-ui).
+ If you discover a potential security issue in this project, you can notify AWS Security through the [AWS Vulnerability Reporting](https://aws.amazon.com/security/vulnerability-reporting/) page.

# Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration
<a name="transfer-acceleration"></a>

Amazon S3 Transfer Acceleration is a bucket-level feature that enables fast, easy, and secure transfers of files over long distances between your client and an S3 general purpose bucket. Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 general purpose buckets. Transfer Acceleration takes advantage of the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, the data is routed to Amazon S3 over an optimized network path.

When you use Transfer Acceleration, additional data transfer charges might apply. For more information about pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/).

## Why use Transfer Acceleration?
<a name="transfer-acceleration-why-use"></a>

You might want to use Transfer Acceleration on a general purpose bucket for various reasons:
+ Your customers upload to a centralized general purpose bucket from all over the world.
+ You transfer gigabytes to terabytes of data on a regular basis across continents.
+ You can't use all of your available bandwidth over the internet when uploading to Amazon S3.

For more information about when to use Transfer Acceleration, see [Amazon S3 FAQs](https://aws.amazon.com/s3/faqs/#s3ta).

## Requirements for using Transfer Acceleration
<a name="transfer-acceleration-requirements"></a>

The following are required when you are using Transfer Acceleration on an S3 bucket:
+ Transfer Acceleration is only supported on virtual-hosted style requests. For more information about virtual-hosted style requests, see [Making requests using the REST API ](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTAPI.html) in the *Amazon S3 API Reference*. 
+ The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (".").
+ Transfer Acceleration must be enabled on the bucket. For more information, see [Enabling and using S3 Transfer Acceleration](transfer-acceleration-examples.md). 

  After you enable Transfer Acceleration on a bucket, it might take up to 20 minutes before the data transfer speed to the bucket increases.
**Note**  
Transfer Acceleration is currently supported for buckets located in the following Regions:  
Asia Pacific (Tokyo) (ap-northeast-1)
Asia Pacific (Seoul) (ap-northeast-2)
Asia Pacific (Mumbai) (ap-south-1)
Asia Pacific (Singapore) (ap-southeast-1)
Asia Pacific (Sydney) (ap-southeast-2)
Canada (Central) (ca-central-1)
Europe (Frankfurt) (eu-central-1)
Europe (Ireland) (eu-west-1)
Europe (London) (eu-west-2)
Europe (Paris) (eu-west-3)
South America (São Paulo) (sa-east-1)
US East (N. Virginia) (us-east-1)
US East (Ohio) (us-east-2)
US West (N. California) (us-west-1)
US West (Oregon) (us-west-2)
+ To access the bucket that is enabled for Transfer Acceleration, you must use the endpoint `bucket-name.s3-accelerate.amazonaws.com`. Or, use the dual-stack endpoint `bucket-name.s3-accelerate.dualstack.amazonaws.com` to connect to the enabled bucket over IPv6. You can continue to use the regular endpoints for standard data transfer.
+ You must be the bucket owner to set the Transfer Acceleration state. The bucket owner can assign permissions to other users to allow them to set the acceleration state on a bucket. The `s3:PutAccelerateConfiguration` permission permits users to enable or disable Transfer Acceleration on a bucket. The `s3:GetAccelerateConfiguration` permission permits users to return the Transfer Acceleration state of a bucket, which is either `Enabled` or `Suspended`.
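
For example, an identity-based policy statement that grants a user permission to set and view the Transfer Acceleration state of a bucket might look like the following (the bucket name is an example):

```
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "s3:PutAccelerateConfiguration",
            "s3:GetAccelerateConfiguration"
        ],
        "Resource": "arn:aws:s3:::amzn-s3-demo-bucket"
    }]
}
```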

The following sections describe how to get started and use Amazon S3 Transfer Acceleration for transferring data.

**Topics**
+ [Why use Transfer Acceleration?](#transfer-acceleration-why-use)
+ [Requirements for using Transfer Acceleration](#transfer-acceleration-requirements)
+ [Getting started with Amazon S3 Transfer Acceleration](transfer-acceleration-getting-started.md)
+ [Enabling and using S3 Transfer Acceleration](transfer-acceleration-examples.md)
+ [Using the Amazon S3 Transfer Acceleration Speed Comparison tool](transfer-acceleration-speed-comparison.md)

# Getting started with Amazon S3 Transfer Acceleration
<a name="transfer-acceleration-getting-started"></a>

You can use Amazon S3 Transfer Acceleration for fast, easy, and secure transfers of files over long distances between your client and an S3 bucket. Transfer Acceleration uses the globally distributed edge locations in Amazon CloudFront. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.

To get started using Amazon S3 Transfer Acceleration, perform the following steps:

1. **Enable Transfer Acceleration on a bucket** 

   You can enable Transfer Acceleration on a bucket in any of the following ways:
   + Use the Amazon S3 console. 
   + Use the REST API [PutBucketAccelerateConfiguration](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTaccelerate.html) operation.
   + Use the AWS CLI and AWS SDKs. For more information, see [Developing with Amazon S3 using the AWS SDKs](https://docs.aws.amazon.com/AmazonS3/latest/API/sdk-general-information-section.html) in the *Amazon S3 API Reference*. 

   For more information, see [Enabling and using S3 Transfer Acceleration](transfer-acceleration-examples.md).
**Note**  
For your bucket to work with transfer acceleration, the bucket name must conform to DNS naming requirements and must not contain periods (`.`). 

1. **Transfer data to and from the acceleration-enabled bucket**

   Use one of the following `s3-accelerate` endpoint domain names:
   + To access an acceleration-enabled bucket, use `bucket-name.s3-accelerate.amazonaws.com`. 
   + To access an acceleration-enabled bucket over IPv6, use `bucket-name.s3-accelerate.dualstack.amazonaws.com`. 

     Amazon S3 dual-stack endpoints support requests to S3 buckets over IPv6 and IPv4. The Transfer Acceleration dual-stack endpoint only uses the virtual hosted-style type of endpoint name. For more information, see [Making requests to Amazon S3 over IPv6 ](https://docs.aws.amazon.com/AmazonS3/latest/API/ipv6-access.html) in the *Amazon S3 API Reference* and [Using Amazon S3 dual-stack endpoints ](https://docs.aws.amazon.com/AmazonS3/latest/API/dual-stack-endpoints.html) in the *Amazon S3 API Reference*.
**Note**  
Your data transfer application must use one of the following two types of endpoints to access the bucket for faster data transfer: `.s3-accelerate.amazonaws.com` or `.s3-accelerate.dualstack.amazonaws.com` for the dual-stack endpoint. If you want to use standard data transfer, you can continue to use the regular endpoints.

   You can point your Amazon S3 `PUT` object and `GET` object requests to the `s3-accelerate` endpoint domain name after you enable Transfer Acceleration. For example, suppose that you currently have a REST API application using [PutObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPUT.html) that uses the hostname `amzn-s3-demo-bucket.s3.us-east-1.amazonaws.com` in the `PUT` request. To accelerate the `PUT`, change the hostname in your request to `amzn-s3-demo-bucket.s3-accelerate.amazonaws.com`. To go back to using the standard upload speed, change the name back to `amzn-s3-demo-bucket.s3.us-east-1.amazonaws.com`.

   After Transfer Acceleration is enabled, it can take up to 20 minutes for you to realize the performance benefit. However, the accelerate endpoint is available as soon as you enable Transfer Acceleration.

   You can use the accelerate endpoint in the AWS CLI, AWS SDKs, and other tools that transfer data to and from Amazon S3. If you are using the AWS SDKs, some of the supported languages use an accelerate endpoint client configuration flag so you don't need to explicitly set the endpoint for Transfer Acceleration to `bucket-name.s3-accelerate.amazonaws.com`. For examples of how to use an accelerate endpoint client configuration flag, see [Enabling and using S3 Transfer Acceleration](transfer-acceleration-examples.md).

You can use all Amazon S3 operations through the transfer acceleration endpoints *except* for the following: 
+ [ListBuckets](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTServiceGET.html)
+ [CreateBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUT.html)
+ [DeleteBucket](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketDELETE.html)

Also, Amazon S3 Transfer Acceleration does not support cross-Region copies using [CopyObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html).

# Enabling and using S3 Transfer Acceleration
<a name="transfer-acceleration-examples"></a>

You can use Amazon S3 Transfer Acceleration to transfer files quickly and securely over long distances between your client and an S3 general purpose bucket. You can enable Transfer Acceleration using the S3 console, the AWS Command Line Interface (AWS CLI), API, or the AWS SDKs.

This section provides examples of how to enable Amazon S3 Transfer Acceleration on a bucket and use the acceleration endpoint for the enabled bucket. 

For more information about Transfer Acceleration requirements, see [Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration](transfer-acceleration.md).

## Using the S3 console
<a name="enable-transfer-acceleration"></a>

**Note**  
If you want to compare accelerated and non-accelerated upload speeds, open the [Amazon S3 Transfer Acceleration Speed Comparison tool](https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html).  
The Speed Comparison tool uses multipart upload to transfer a file from your browser to various AWS Regions with and without Transfer Acceleration. You can compare the upload speed for direct uploads and transfer accelerated uploads by Region.

**To enable transfer acceleration for an S3 general purpose bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **General purpose buckets** list, choose the name of the bucket that you want to enable transfer acceleration for.

1. Choose **Properties**.

1. Under **Transfer acceleration**, choose **Edit**.

1. Choose **Enable**, and choose **Save changes**.

**To access accelerated data transfers**

1. After Amazon S3 enables transfer acceleration for your bucket, view the **Properties** tab for the bucket.

1. Under **Transfer acceleration**, **Accelerated endpoint** displays the transfer acceleration endpoint for your bucket. Use this endpoint to access accelerated data transfers to and from your bucket. 

   If you suspend transfer acceleration, the accelerate endpoint no longer works.

## Using the AWS CLI
<a name="transfer-acceleration-examples-aws-cli"></a>

The following are examples of AWS CLI commands used for Transfer Acceleration. For instructions on setting up the AWS CLI, see [Developing with Amazon S3 using the AWS CLI](https://docs.aws.amazon.com/AmazonS3/latest/API/setup-aws-cli.html) in the *Amazon S3 API Reference*.

### Enabling Transfer Acceleration on a bucket
<a name="transfer-acceleration-examples-aws-cli-1"></a>

Use the AWS CLI [put-bucket-accelerate-configuration](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-accelerate-configuration.html) command to enable or suspend Transfer Acceleration on a bucket. 

The following example sets `Status=Enabled` to enable Transfer Acceleration on a bucket named `amzn-s3-demo-bucket`. To suspend Transfer Acceleration, use `Status=Suspended`.

**Example**  

```
$ aws s3api put-bucket-accelerate-configuration --bucket amzn-s3-demo-bucket --accelerate-configuration Status=Enabled
```
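
To verify the change, you can retrieve the bucket's accelerate configuration, which returns the current status:

```
$ aws s3api get-bucket-accelerate-configuration --bucket amzn-s3-demo-bucket
{
    "Status": "Enabled"
}
```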

### Using Transfer Acceleration
<a name="transfer-acceleration-examples-aws-cli-2"></a>

You can direct all Amazon S3 requests made by `s3` and `s3api` AWS CLI commands to the accelerate endpoint: `s3-accelerate.amazonaws.com`. To do this, set the configuration value `use_accelerate_endpoint` to `true` in a profile in your AWS CLI configuration file. Transfer Acceleration must be enabled on your bucket to use the accelerate endpoint. 

All requests are sent using the virtual style of bucket addressing: `amzn-s3-demo-bucket.s3-accelerate.amazonaws.com`. Any `ListBuckets`, `CreateBucket`, and `DeleteBucket` requests are not sent to the accelerate endpoint because the endpoint doesn't support those operations. 

For more information about `use_accelerate_endpoint`, see [AWS CLI S3 Configuration](https://docs.aws.amazon.com/cli/latest/topic/s3-config.html) in the *AWS CLI Command Reference*.

The following example sets `use_accelerate_endpoint` to `true` in the default profile.

**Example**  

```
$ aws configure set default.s3.use_accelerate_endpoint true
```

If you want to use the accelerate endpoint for some AWS CLI commands but not others, you can use either of the following methods: 
+ Use the accelerate endpoint for any `s3` or `s3api` command by setting the `--endpoint-url` parameter to `https://s3-accelerate.amazonaws.com`.
+ Set up separate profiles in your AWS CLI configuration file. For example, create one profile that sets `use_accelerate_endpoint` to `true` and a profile that does not. When you run a command, specify the profile that you want to use, depending on whether you want to use the accelerate endpoint. 
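
For example, the following commands create a separate named profile (the name `accelerated` is an example) that uses the accelerate endpoint, leaving the default profile unchanged:

```
$ aws configure set profile.accelerated.s3.use_accelerate_endpoint true
$ aws s3 cp file.txt s3://amzn-s3-demo-bucket/key-name --profile accelerated
```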

### Uploading an object to a bucket enabled for Transfer Acceleration
<a name="transfer-acceleration-examples-aws-cli-3"></a>

The following example uploads a file to a bucket named `amzn-s3-demo-bucket` that's been enabled for Transfer Acceleration by using the default profile that has been configured to use the accelerate endpoint.

**Example**  

```
$ aws s3 cp file.txt s3://amzn-s3-demo-bucket/key-name --region region
```

The following example uploads a file to a bucket enabled for Transfer Acceleration by using the `--endpoint-url` parameter to specify the accelerate endpoint.

**Example**  

```
$ aws configure set s3.addressing_style virtual
$ aws s3 cp file.txt s3://amzn-s3-demo-bucket/key-name --region region --endpoint-url https://s3-accelerate.amazonaws.com
```

## Using the AWS SDKs
<a name="transfer-acceleration-examples-sdk"></a>

The following are examples of using Transfer Acceleration to upload objects to Amazon S3 with the AWS SDKs. Some of the supported SDK languages (for example, Java and .NET) provide an accelerate endpoint client configuration flag, so you don't need to explicitly set the endpoint for Transfer Acceleration to `bucket-name.s3-accelerate.amazonaws.com`.

------
#### [ Java ]

To use an accelerate endpoint to upload an object to Amazon S3 with the AWS SDK for Java, you can:
+ Create an S3Client that is configured to use accelerate endpoints. All buckets that the client accesses must have Transfer Acceleration enabled.
+ Enable Transfer Acceleration on a specified bucket. This step is necessary only if the bucket you specify doesn't already have Transfer Acceleration enabled.
+ Verify that transfer acceleration is enabled for the specified bucket.
+ Upload a new object to the specified bucket using the bucket's accelerate endpoint.

For more information about using Transfer Acceleration, see [Getting started with Amazon S3 Transfer Acceleration](transfer-acceleration-getting-started.md).

The following code example shows how to configure Transfer Acceleration with the AWS SDK for Java.

```
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.BucketAccelerateStatus;
import software.amazon.awssdk.services.s3.model.GetBucketAccelerateConfigurationRequest;
import software.amazon.awssdk.services.s3.model.PutBucketAccelerateConfigurationRequest;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.AccelerateConfiguration;
import software.amazon.awssdk.services.s3.model.S3Exception;
import software.amazon.awssdk.core.exception.SdkClientException;

public class TransferAcceleration {
    public static void main(String[] args) {
        Region clientRegion = Region.US_EAST_1;
        String bucketName = "*** Provide bucket name ***";
        String keyName = "*** Provide key name ***";

        try {
            // Create an Amazon S3 client that is configured to use the accelerate endpoint.
            S3Client s3Client = S3Client.builder()
                    .region(clientRegion)
                    .credentialsProvider(ProfileCredentialsProvider.create())
                    .accelerate(true)
                    .build();

            // Enable Transfer Acceleration for the specified bucket.
            s3Client.putBucketAccelerateConfiguration(
                    PutBucketAccelerateConfigurationRequest.builder()
                            .bucket(bucketName)
                            .accelerateConfiguration(AccelerateConfiguration.builder()
                                    .status(BucketAccelerateStatus.ENABLED)
                                    .build())
                            .build());

            // Verify that transfer acceleration is enabled for the bucket.
            String accelerateStatus = s3Client.getBucketAccelerateConfiguration(
                    GetBucketAccelerateConfigurationRequest.builder()
                            .bucket(bucketName)
                            .build())
                    .status().toString();
            System.out.println("Bucket accelerate status: " + accelerateStatus);

            // Upload a new object using the accelerate endpoint.
            s3Client.putObject(PutObjectRequest.builder()
                            .bucket(bucketName)
                            .key(keyName)
                            .build(),
                    RequestBody.fromString("Test object for transfer acceleration"));
            System.out.println("Object \"" + keyName + "\" uploaded with transfer acceleration.");
        } catch (S3Exception e) {
            // The call was transmitted successfully, but Amazon S3 couldn't process
            // it, so it returned an error response.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // Amazon S3 couldn't be contacted for a response, or the client
            // couldn't parse the response from Amazon S3.
            e.printStackTrace();
        }
    }
}
```

------
#### [ .NET ]

The following example shows how to use the AWS SDK for .NET to enable Transfer Acceleration on a bucket. For information about setting up and running the code examples, see [Getting Started with the AWS SDK for .NET](https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/net-dg-setup.html) in the *AWS SDK for .NET Developer Guide*.

**Example**  

```
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System;
using System.Threading.Tasks;

namespace Amazon.DocSamples.S3
{
    class TransferAccelerationTest
    {
        private const string bucketName = "*** bucket name ***";
        // Specify your bucket region (an example region is shown).
        private static readonly RegionEndpoint bucketRegion = RegionEndpoint.USWest2;
        private static IAmazonS3 s3Client;
        public static void Main()
        {
            s3Client = new AmazonS3Client(bucketRegion);
            EnableAccelerationAsync().Wait();
        }

        static async Task EnableAccelerationAsync()
        {
            try
            {
                var putRequest = new PutBucketAccelerateConfigurationRequest
                {
                    BucketName = bucketName,
                    AccelerateConfiguration = new AccelerateConfiguration
                    {
                        Status = BucketAccelerateStatus.Enabled
                    }
                };
                await s3Client.PutBucketAccelerateConfigurationAsync(putRequest);

                var getRequest = new GetBucketAccelerateConfigurationRequest
                {
                    BucketName = bucketName
                };
                var response = await s3Client.GetBucketAccelerateConfigurationAsync(getRequest);

                Console.WriteLine("Acceleration state = '{0}' ", response.Status);
            }
            catch (AmazonS3Exception amazonS3Exception)
            {
                Console.WriteLine(
                    "Error occurred. Message:'{0}' when setting transfer acceleration",
                    amazonS3Exception.Message);
            }
        }
    }
}
```

When you upload an object to a bucket that has Transfer Acceleration enabled, you specify the accelerate endpoint when you create the client, as shown in the following example.

```
var client = new AmazonS3Client(new AmazonS3Config
{
    RegionEndpoint = bucketRegion,
    UseAccelerateEndpoint = true
});
```

------
#### [ JavaScript ]

For an example of enabling Transfer Acceleration by using the AWS SDK for JavaScript, see [PutBucketAccelerateConfiguration command](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/s3/command/PutBucketAccelerateConfigurationCommand/) in the *AWS SDK for JavaScript API Reference*.

------
#### [ Python (Boto) ]

For an example of enabling Transfer Acceleration by using the SDK for Python, see [put_bucket_accelerate_configuration](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Client.put_bucket_accelerate_configuration) in the *AWS SDK for Python (Boto3) API Reference*.

------
#### [ Other ]

For information about using other AWS SDKs, see [Sample Code and Libraries](https://aws.amazon.com/code/).

------

## Using the REST API
<a name="transfer-acceleration-examples-api"></a>

Use the REST API `PutBucketAccelerateConfiguration` operation to enable accelerate configuration on an existing bucket. 

For more information, see [https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketAccelerateConfiguration.html) in the *Amazon Simple Storage Service API Reference*.

# Using the Amazon S3 Transfer Acceleration Speed Comparison tool
<a name="transfer-acceleration-speed-comparison"></a>

You can use the [Amazon S3 Transfer Acceleration Speed Comparison tool](https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html) to compare accelerated and non-accelerated upload speeds across Amazon S3 Regions. The Speed Comparison tool uses multipart uploads to transfer a file from your browser to various Amazon S3 Regions with and without using Transfer Acceleration.

You can access the Speed Comparison tool by using either of the following methods:
+ Copy the following URL into your browser window, replacing `region` with the AWS Region that you are using (for example, `us-west-2`) and `amzn-s3-demo-bucket` with the name of the bucket that you want to evaluate: 

  `https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html?region=region&origBucketName=amzn-s3-demo-bucket`

  For a list of the Regions supported by Amazon S3, see [Amazon S3 endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/s3.html) in the *AWS General Reference*.
+ Use the Amazon S3 console. 
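As a convenience, the speed-test URL from the first method can be assembled in a short script. The following is a sketch (the helper name is ours, not part of the tool); the `comparsion` spelling matches the tool's actual URL:

```python
from urllib.parse import urlencode

def speed_test_url(region, bucket):
    """Build the Speed Comparison tool URL for a given Region and bucket."""
    base = ("https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com"
            "/en/accelerate-speed-comparsion.html")
    # The tool reads the Region and bucket name from query parameters.
    return base + "?" + urlencode({"region": region, "origBucketName": bucket})

print(speed_test_url("us-west-2", "amzn-s3-demo-bucket"))
```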

# Using Requester Pays general purpose buckets for storage transfers and usage
<a name="RequesterPaysBuckets"></a>

In general, bucket owners pay for all Amazon S3 storage and data transfer costs that are associated with their bucket. However, you can configure a general purpose bucket to be a *Requester Pays* bucket. With Requester Pays buckets, the requester instead of the bucket owner pays the cost of the request and the data download from the bucket. The bucket owner always pays the cost of storing data. 

Typically, you configure buckets to be Requester Pays buckets when you want to share data but not incur charges associated with others accessing the data. For example, you might use Requester Pays buckets when making available large datasets, such as zip code directories, reference data, geospatial information, or web crawling data. 

**Important**  
If you enable Requester Pays on a general purpose bucket, anonymous access to that bucket is not allowed.

You must authenticate all requests involving Requester Pays buckets. The request authentication enables Amazon S3 to identify and charge the requester for their use of the Requester Pays bucket. 

When the requester assumes an AWS Identity and Access Management (IAM) role before making their request, the account to which the role belongs is charged for the request. For more information about IAM roles, see [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) in the *IAM User Guide*. 

After you configure a bucket to be a Requester Pays bucket, requesters must show that they understand that they will be charged for the request and for the data download. To accept the charges, requesters must include `x-amz-request-payer` as a header in their DELETE, GET, HEAD, POST, and PUT requests, or add the `RequestPayer` parameter in their REST request. For AWS CLI requests, requesters can use the `--request-payer` parameter.
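For illustration, a small helper that stamps the acceptance header onto a request's header map might look like the following. This is a sketch; the function name is ours, not an SDK API:

```python
def with_requester_pays(headers=None):
    """Return a copy of the given headers with the Requester Pays
    acceptance header added. The original mapping is not modified."""
    merged = dict(headers or {})
    # Requesters signal that they accept the charges with this header.
    merged["x-amz-request-payer"] = "requester"
    return merged

print(with_requester_pays({"Host": "amzn-s3-demo-bucket.s3.amazonaws.com"}))
```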

**Example – Using Requester Pays when deleting an object**  
To use the following [https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObject.html) API example, replace the `user input placeholders` with your own information.  

```
DELETE /Key+?versionId=VersionId HTTP/1.1
Host: Bucket.s3.amazonaws.com
x-amz-mfa: MFA
x-amz-request-payer: RequestPayer
x-amz-bypass-governance-retention: BypassGovernanceRetention
x-amz-expected-bucket-owner: ExpectedBucketOwner
```

If the requester restores objects by using the [https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html](https://docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html) API, Requester Pays is supported as long as the `x-amz-request-payer` header or the `RequestPayer` parameter is in the request. However, the requester pays only for the cost of the request; the bucket owner pays the retrieval charges.

Requester Pays buckets do not support the following:
+ Anonymous requests
+ SOAP requests
+ Using a Requester Pays bucket as the target bucket for end-user logging, or vice versa. However, you can turn on end-user logging on a Requester Pays bucket where the target bucket is not a Requester Pays bucket. 

## How Requester Pays charges work
<a name="ChargeDetails"></a>

The charge for successful Requester Pays requests is straightforward: The requester pays for the data transfer and the request, and the bucket owner pays for the data storage. However, the bucket owner is charged for the request under the following conditions:
+ The request returns an `AccessDenied` (HTTP `403 Forbidden`) error and the request is initiated inside the bucket owner's individual AWS account or AWS organization.
+ The request is a SOAP request.

For more information about Requester Pays, see the following topics.

**Topics**
+ [How Requester Pays charges work](#ChargeDetails)
+ [Configuring Requester Pays on a bucket](RequesterPaysExamples.md)
+ [Retrieving the requestPayment configuration using the REST API](BucketPayerValues.md)
+ [Downloading objects from Requester Pays buckets](ObjectsinRequesterPaysBuckets.md)

# Configuring Requester Pays on a bucket
<a name="RequesterPaysExamples"></a>

You can configure an Amazon S3 bucket to be a *Requester Pays* bucket so that the requester pays the cost of the request and data download instead of the bucket owner.

This section provides examples of how to configure Requester Pays on an Amazon S3 bucket using the console and the REST API.

## Using the S3 console
<a name="configure-requester-pays-console"></a>

**To enable Requester Pays for an S3 general purpose bucket**

1. Sign in to the AWS Management Console and open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. In the left navigation pane, choose **General purpose buckets**.

1. In the **General purpose buckets** list, choose the name of the bucket that you want to enable Requester Pays for.

1. Choose **Properties**.

1. Under **Requester pays**, choose **Edit**.

1. Choose **Enable**, and choose **Save changes**.

   Amazon S3 enables Requester Pays for your bucket and displays your **Bucket overview**. Under **Requester pays**, you see **Enabled**.

## Using the REST API
<a name="RequesterPaysBucketConfiguration"></a>

Only the bucket owner can set the `RequestPaymentConfiguration.payer` configuration value of a bucket to `BucketOwner` (the default) or `Requester`. Setting the `requestPayment` resource is optional. By default, the bucket is not a Requester Pays bucket.

To revert a Requester Pays bucket to a regular bucket, you use the value `BucketOwner`. Typically, you would use `BucketOwner` when uploading data to the Amazon S3 bucket, and then you would set the value to `Requester` before publishing the objects in the bucket.

**To set requestPayment**
+ Use a `PUT` request to set the `Payer` value to `Requester` on a specified bucket.

  ```
  PUT ?requestPayment HTTP/1.1
  Host: [BucketName].s3.amazonaws.com
  Content-Length: 173
  Date: Wed, 01 Mar 2009 12:00:00 GMT
  Authorization: AWS [Signature]

  <RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Payer>Requester</Payer>
  </RequestPaymentConfiguration>
  ```
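The XML request body shown above can be generated with a standard library; the following is a minimal sketch using the `2006-03-01` schema namespace from the example (the helper name is ours):

```python
import xml.etree.ElementTree as ET

def request_payment_xml(payer):
    """Serialize a RequestPaymentConfiguration body.
    payer is 'Requester' or 'BucketOwner'."""
    root = ET.Element("RequestPaymentConfiguration",
                      xmlns="http://s3.amazonaws.com/doc/2006-03-01/")
    ET.SubElement(root, "Payer").text = payer
    return ET.tostring(root, encoding="unicode")

print(request_payment_xml("Requester"))
```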

If the request succeeds, Amazon S3 returns a response similar to the following.

```
HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Length: 0
Connection: close
Server: AmazonS3
x-amz-request-charged:requester
```

You can set Requester Pays only at the bucket level. You can't set Requester Pays for specific objects within the bucket.

You can configure a bucket to be `BucketOwner` or `Requester` at any time. However, it might take a few minutes for the new configuration value to take effect.

**Note**  
Bucket owners who give out presigned URLs should consider carefully before configuring a bucket to be Requester Pays, especially if the URL has a long lifetime. The bucket owner is charged each time the requester uses a presigned URL that uses the bucket owner's credentials. 

# Retrieving the requestPayment configuration using the REST API
<a name="BucketPayerValues"></a>

You can determine the `Payer` value that is set on a bucket by requesting the resource `requestPayment`.

**To return the requestPayment resource**
+ Use a GET request to obtain the `requestPayment` resource, as shown in the following request.

  ```
  GET ?requestPayment HTTP/1.1
  Host: [BucketName].s3.amazonaws.com
  Date: Wed, 01 Mar 2009 12:00:00 GMT
  Authorization: AWS [Signature]
  ```

If the request succeeds, Amazon S3 returns a response similar to the following.

```
HTTP/1.1 200 OK
x-amz-id-2: [id]
x-amz-request-id: [request_id]
Date: Wed, 01 Mar 2009 12:00:00 GMT
Content-Type: [type]
Content-Length: [length]
Connection: close
Server: AmazonS3

<?xml version="1.0" encoding="UTF-8"?>
<RequestPaymentConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Payer>Requester</Payer>
</RequestPaymentConfiguration>
```

This response shows that the `Payer` value is set to `Requester`. 

# Downloading objects from Requester Pays buckets
<a name="ObjectsinRequesterPaysBuckets"></a>

Because requesters are charged for downloading data from Requester Pays buckets, the requests must contain a special parameter, `x-amz-request-payer`, which confirms that the requester knows that they will be charged for the download. To access objects in Requester Pays buckets, requests must include one of the following.
+ For DELETE, GET, HEAD, POST, and PUT requests, include `x-amz-request-payer: requester` in the header.
+ For signed URLs, include `x-amz-request-payer=requester` in the request.

If the request succeeds and the requester is charged, the response includes the header `x-amz-request-charged:requester`. If `x-amz-request-payer` is not in the request, Amazon S3 returns a 403 error and charges the bucket owner for the request.
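As an illustration, a script that inspects the response could check for this confirmation header; the following is a sketch (the helper name is ours), assuming the response headers are available as a plain mapping:

```python
def charged_requester(response_headers):
    """Return True if the response confirms that the requester was charged.
    Real HTTP header names are case-insensitive, so normalize keys first."""
    normalized = {k.lower(): v for k, v in response_headers.items()}
    return normalized.get("x-amz-request-charged") == "requester"

print(charged_requester({"x-amz-request-charged": "requester"}))
```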

**Note**  
Bucket owners do not need to add `x-amz-request-payer` to their requests.  
Ensure that you have included `x-amz-request-payer` and its value in your signature calculation. For more information, see [Using an Authorization Header](https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-auth-using-authorization-header.html) in the *Amazon S3 API Reference*.

## Using the REST API
<a name="get-requester-pays-rest"></a>

**To download objects from a Requester Pays bucket**
+ Use a `GET` request to download an object from a Requester Pays bucket, as shown in the following request.

  ```
  GET /[destinationObject] HTTP/1.1
  Host: [BucketName].s3.amazonaws.com
  x-amz-request-payer: requester
  Date: Wed, 01 Mar 2009 12:00:00 GMT
  Authorization: AWS [Signature]
  ```

If the GET request succeeds and the requester is charged, the response includes `x-amz-request-charged:requester`.

Amazon S3 can return an `Access Denied` error for requests that try to get objects from a Requester Pays bucket. For more information, see [Error Responses](https://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html) in the *Amazon Simple Storage Service API Reference*.

## Using the AWS CLI
<a name="get-requester-pays-cli"></a>

To download objects from a Requester Pays bucket using the AWS CLI, you specify `--request-payer requester` as part of your `get-object` request. For more information, see [get-object](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-object.html) in the *AWS CLI Command Reference*.
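When scripting around the AWS CLI, the flag can be assembled into the command programmatically. The following sketch only builds the argument list (it doesn't run the command); the bucket, key, and output file names are placeholders:

```python
def get_object_command(bucket, key, outfile):
    """Build an 'aws s3api get-object' invocation for a Requester Pays
    bucket, suitable for passing to subprocess.run()."""
    return ["aws", "s3api", "get-object",
            "--bucket", bucket,
            "--key", key,
            # Confirms that the caller accepts the download charges.
            "--request-payer", "requester",
            outfile]

print(" ".join(get_object_command("amzn-s3-demo-bucket", "data/file.txt", "file.txt")))
```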