

# Exporting log data to Amazon S3


This chapter describes how to export log data from your log groups to an Amazon S3 bucket for custom processing and analysis, or for loading onto other systems. You can export to an S3 bucket in the same account or in a different account.

You can do the following:
+ Export log data to S3 buckets that are encrypted by SSE-KMS in AWS Key Management Service (AWS KMS) 
+ Export log data to S3 buckets that have S3 Object Lock enabled with a retention period

We recommend that you don't regularly export to Amazon S3 as a way to continuously archive your logs. For that use case, we instead recommend that you use subscriptions. For more information about subscriptions, see [Real-time processing of log data with subscriptions](Subscriptions.md). 

To begin the export process, you must create an S3 bucket to store the exported log data. You can store the exported files in your S3 bucket and define Amazon S3 lifecycle rules to archive or delete exported files automatically.

You can export to S3 buckets that are encrypted with AES-256 or with SSE-KMS. Exporting to buckets encrypted with DSSE-KMS is not supported.

You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To organize the exported data, specify a prefix for each export task; the prefix is used as the Amazon S3 key prefix for all exported objects. For example, `prod/app-logs/2026-01-03/` or `log-group-name/backup/`.
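For example, a prefix might combine the log group name and the export date. This naming scheme is only an illustration, not something CloudWatch Logs requires; a minimal sketch:

```python
from datetime import date

def export_prefix(log_group: str, day: date) -> str:
    # Log group names often start with "/" (for example "/aws/lambda/app");
    # strip it so the S3 key prefix doesn't begin with an empty path segment.
    return f"{log_group.lstrip('/')}/{day.isoformat()}/"

print(export_prefix("/aws/lambda/app", date(2026, 1, 3)))
# aws/lambda/app/2026-01-03/
```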

**Note**  
Time-based sorting on chunks of log data inside an exported file is not guaranteed. You can sort the exported log field data by using Linux utilities. For example, the following utility command sorts the events in all `.gz` files in a single folder.  

```
find . -exec zcat {} + | sed -r 's/^[0-9]+/\x0&/' | sort -z
```
The following utility command sorts .gz files from multiple subfolders.  

```
find ./*/ -type f -exec zcat {} + | sed -r 's/^[0-9]+/\x0&/' | sort -z
```
Additionally, you can redirect the sorted output to a file to save it.

Log data can take up to 12 hours to become available for export. Export tasks time out after 24 hours. If your export tasks are timing out, reduce the time range when you create the export task.
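If your tasks routinely time out, one approach is to split a large window into several smaller export tasks. A sketch of the splitting logic, assuming epoch-millisecond timestamps (the 6-hour chunk size is an arbitrary choice you would tune to your ingestion volume):

```python
def split_range(from_ms: int, to_ms: int, chunk_ms: int) -> list[tuple[int, int]]:
    """Split [from_ms, to_ms) into consecutive sub-ranges of at most chunk_ms each."""
    ranges = []
    start = from_ms
    while start < to_ms:
        end = min(start + chunk_ms, to_ms)
        ranges.append((start, end))
        start = end
    return ranges

# Split one day into 6-hour windows, then create one export task per window.
six_hours = 6 * 60 * 60 * 1000
day = 4 * six_hours
print(split_range(0, day, six_hours))
```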

For near real-time analysis of log data, see [Analyzing log data with CloudWatch Logs Insights](AnalyzingLogData.md) or [Real-time processing of log data with subscriptions](Subscriptions.md) instead.

**Topics**
+ [Concepts](#S3concepts)
+ [Export log data to Amazon S3 using the console](S3ExportTasksConsole.md)
+ [Export log data to Amazon S3 using the AWS CLI](S3ExportTasks.md)
+ [Describe export tasks (CLI)](DescribeExportTasks.md)
+ [Cancel an export task (CLI)](CancelExportTask.md)

## Concepts


Before you begin, become familiar with the following export concepts:

**log group name**  
The name of the log group associated with an export task. The log data in this log group will be exported to the specified S3 bucket.

**from (timestamp)**  
A required timestamp expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. All log events in the log group that were ingested on or after this time will be exported.

**to (timestamp)**  
A required timestamp expressed as the number of milliseconds since Jan 1, 1970 00:00:00 UTC. All log events in the log group that were ingested before this time will be exported.

**destination bucket**  
The name of the S3 bucket associated with an export task. This bucket is used to export the log data from the specified log group.

**destination prefix**  
An optional attribute that is used as the Amazon S3 key prefix for all exported objects. This helps create a folder-like organization in your bucket.
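The `from` and `to` values above are epoch milliseconds; one way to compute them from UTC date-times (a standalone sketch, not part of any AWS SDK):

```python
from datetime import datetime, timezone

def to_epoch_ms(dt: datetime) -> int:
    # Export task timestamps are milliseconds since Jan 1, 1970 00:00:00 UTC.
    return int(dt.timestamp() * 1000)

start = datetime(2026, 1, 1, tzinfo=timezone.utc)   # "from"
end = datetime(2026, 1, 2, tzinfo=timezone.utc)     # "to"
print(to_epoch_ms(start), to_epoch_ms(end))
```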

# Export log data to Amazon S3 using the console


In the following examples, you use the Amazon CloudWatch console to export all data from an Amazon CloudWatch Logs log group named `my-log-group` to an Amazon S3 bucket named `amzn-s3-demo-bucket`.

Exporting log data to S3 buckets that are encrypted by SSE-KMS is supported. Exporting to buckets encrypted with DSSE-KMS is not supported.

The details of how you set up the export depend on whether the Amazon S3 bucket that you want to export to is in the same account as the logs that are being exported, or in a different account.

**Topics**
+ [Same-account export (console)](#ExportSingleAccount)
+ [Cross-account export (console)](#ExportCrossAccount)

## Same-account export (console)


If the Amazon S3 bucket is in the same account as the logs that are being exported, use the instructions in this section.

**Topics**
+ [Create an Amazon S3 bucket (console)](#CreateS3BucketConsole)
+ [Set up access permissions (console)](#CreateIAMUser-With-S3-Access)
+ [Set permissions on an Amazon S3 bucket (console)](#S3PermissionsConsole)
+ [(Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS (console)](#S3-Export-KMSEncrypted)
+ [Create an export task (console)](#CreateExportTaskConsole)

### Create an Amazon S3 bucket (console)


We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip this procedure.

**Note**  
The Amazon S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to Amazon S3 buckets in a different Region.

**To create an Amazon S3 bucket**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. If necessary, change the Region. From the navigation bar, choose the Region where your CloudWatch Logs reside.

1. Choose **Create Bucket**.

1. For **Bucket Name**, enter a name for the bucket.

1. For **Region**, select the Region where your CloudWatch Logs data resides.

1. Choose **Create**.

### Set up access permissions (console)


To create the export task, you must be signed in with an IAM role that has the `AmazonS3ReadOnlyAccess` managed policy attached and with the following permissions:
+ `logs:CreateExportTask`
+ `logs:CancelExportTask`
+ `logs:DescribeExportTasks`
+ `logs:DescribeLogStreams`
+ `logs:DescribeLogGroups`

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### Set permissions on an Amazon S3 bucket (console)


By default, all Amazon S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.

When you set the policy, we recommend that you include a randomly generated string as the prefix for the bucket, so that only intended log streams are exported to the bucket.
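One way to generate such a random prefix (the 16-byte length is an arbitrary choice):

```python
import secrets

# A URL-safe random string; use it as the S3 bucket prefix of each export task
# and reference the same value in your bucket policy.
prefix = secrets.token_urlsafe(16)
print(prefix)
```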

**Important**  
To make exports to Amazon S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.   
In the following example, the list of account IDs in the `aws:SourceAccount` key would be the accounts from which a user can export log data to your Amazon S3 bucket. The `aws:SourceArn` key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.  
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.

**To set permissions on an Amazon S3 bucket**

1. In the Amazon S3 console, choose the bucket that you created.

1. Choose **Permissions**, **Bucket policy**.

1. In the **Bucket Policy Editor**, add the following policy. Change `amzn-s3-demo-bucket` to the name of your S3 bucket. Be sure to specify the correct Region endpoint, such as `us-east-1`, for **Principal**.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowCloudWatchLogsGetBucketAcl",
               "Action": "s3:GetBucketAcl",
               "Effect": "Allow",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
               "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": [
                           "123456789012",
                           "111122223333"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
                   }
               }
           },
           {
               "Sid": "AllowCloudWatchLogsPutObject",
               "Action": "s3:PutObject",
               "Effect": "Allow",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
               "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": [
                           "123456789012",
                           "111122223333"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
                   }
               }
           }
       ]
   }
   ```

------
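If you maintain this bucket policy in code rather than pasting it, the same document can be generated from your Region and account IDs. This sketch only builds the JSON; applying it to the bucket (for example with the `put-bucket-policy` CLI command) is a separate step:

```python
import json

def bucket_policy(bucket: str, region: str, account_ids: list[str]) -> str:
    # CloudWatch Logs exports are delivered by the regional service principal.
    principal = {"Service": f"logs.{region}.amazonaws.com"}
    source_arns = [f"arn:aws:logs:{region}:{acct}:log-group:*" for acct in account_ids]
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowCloudWatchLogsGetBucketAcl",
                "Action": "s3:GetBucketAcl",
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Principal": principal,
                "Condition": {
                    "StringEquals": {"aws:SourceAccount": account_ids},
                    "ArnLike": {"aws:SourceArn": source_arns},
                },
            },
            {
                "Sid": "AllowCloudWatchLogsPutObject",
                "Action": "s3:PutObject",
                "Effect": "Allow",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Principal": principal,
                "Condition": {
                    "StringEquals": {
                        "s3:x-amz-acl": "bucket-owner-full-control",
                        "aws:SourceAccount": account_ids,
                    },
                    "ArnLike": {"aws:SourceArn": source_arns},
                },
            },
        ],
    }, indent=2)

print(bucket_policy("amzn-s3-demo-bucket", "us-east-1", ["123456789012"]))
```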

1. Choose **Save** to set the policy that you just added as the access policy on your bucket. This policy enables CloudWatch Logs to export log data to your Amazon S3 bucket. The bucket owner has full permissions on all of the exported objects.
**Warning**  
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.

### (Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS (console)


This step is necessary only if you are exporting to an Amazon S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS. 

**To export to a bucket encrypted with SSE-KMS**

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. In the left navigation bar, choose **Customer managed keys**.

   Choose **Create Key**.

1. For **Key type**, choose **Symmetric**.

1. For **Key usage**, choose **Encrypt and decrypt** and then choose **Next**.

1. Under **Add labels**, enter an alias for the key and optionally add a description or tags. Then choose **Next**.

1. Under **Key administrators**, select who can administer this key, and then choose **Next**.

1. Under **Define key usage permissions**, make no changes and choose **Next**. 

1. Review the settings and choose **Finish**.

1. Back at the **Customer managed keys** page, choose the name of the key that you just created.

1. Choose the **Key policy** tab and choose **Switch to policy view**.

1. In the **Key policy** section, choose **Edit**.

1. Add the following statement to the key policy statement list. When you do, replace *Region* with the Region of your logs and replace *account-ARN* with the ARN of the account that owns the KMS key.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Allow CWL Service Principal usage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "logs.Region.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Enable IAM User Permissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "account-ARN"
               },
               "Action": [
                   "kms:GetKeyPolicy*",
                   "kms:PutKeyPolicy*",
                   "kms:DescribeKey*",
                   "kms:CreateAlias*",
                   "kms:ScheduleKeyDeletion*",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Choose **Save changes**.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Find the bucket that you created in [Create an Amazon S3 bucket (console)](#CreateS3BucketConsole) and choose the bucket name.

1. Choose the **Properties** tab. Then, under **Default Encryption**, choose **Edit**.

1. Under **Server-side Encryption**, choose **Enable**.

1. Under **Encryption type**, choose **AWS Key Management Service key (SSE-KMS)**.

1. Choose **Choose from your AWS KMS keys** and find the key that you created.

1. For **Bucket key**, choose **Enable**.

1. Choose **Save changes**.

### Create an export task (console)


In this procedure, you create the export task for exporting logs from a log group.

**To export data to Amazon S3 using the CloudWatch console**

1. Sign in with sufficient permissions as documented in [Set up access permissions (console)](#CreateIAMUser-With-S3-Access).

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**.

1. On the **Log Groups** screen, choose the name of the log group.

1. Choose **Actions**, **Export data to Amazon S3**.

1. On the **Export data to Amazon S3** screen, under **Define data export**, set the time range for the data to export using **From** and **To**.

1. If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose **Advanced**, and then for **Stream prefix**, enter the log stream prefix.

1. Under **Choose S3 bucket**, choose the account associated with the S3 bucket.

1. For **S3 bucket name**, choose an S3 bucket.

1. For **S3 Bucket prefix**, enter the randomly generated string that you specified in the bucket policy.

1. Choose **Export** to export your log data to Amazon S3.

1. To view the status of the log data that you exported to Amazon S3, choose **Actions** and then **View all exports to Amazon S3**.

## Cross-account export (console)


If the Amazon S3 bucket is in a different account than the logs that are being exported, use the instructions in this section.

**Topics**
+ [Create an Amazon S3 bucket for cross-account export (console)](#CreateS3BucketConsole-crossaccount)
+ [Set up access permissions for cross-account export (console)](#CreateIAMUser-With-S3-Access-crossaccount)
+ [Set permissions on an S3 bucket for cross-account export (console)](#S3PermissionsConsole-crossaccount)
+ [(Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS for cross-account export (console)](#S3-Export-KMSEncrypted-crossaccount)
+ [Create an export task for cross-account export (console)](#CreateExportTaskConsole-crossaccount)

### Create an Amazon S3 bucket for cross-account export (console)


We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip this procedure.

**Note**  
The Amazon S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to Amazon S3 buckets in a different Region.

**To create an Amazon S3 bucket**

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. If necessary, change the Region. From the navigation bar, choose the Region where your CloudWatch Logs reside.

1. Choose **Create Bucket**.

1. For **Bucket Name**, enter a name for the bucket.

1. For **Region**, select the Region where your CloudWatch Logs data resides.

1. Choose **Create**.

### Set up access permissions for cross-account export (console)


First, you must create a new IAM policy to enable CloudWatch Logs to have the `s3:PutObject` action for the destination Amazon S3 bucket in the destination account.

Along with the `s3:PutObject` action, the additional actions included in the policy depend on whether the destination bucket uses AWS KMS encryption or has ACLs enabled through the [S3 Object Ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) setting.
+ If the bucket uses KMS encryption, add the `kms:GenerateDataKey` and `kms:Decrypt` actions for the key resource.
+ If ACLs are enabled on the bucket, add the `s3:PutObjectAcl` action for the bucket resource.

Change `amzn-s3-demo-bucket` to the name of your destination S3 bucket in the following policies.

**To create an IAM policy to export logs to an Amazon S3 bucket**

1. Open the IAM console at [https://console.aws.amazon.com/iam/](https://console.aws.amazon.com/iam/).

1. In the navigation pane on the left, choose **Policies**.

1. Choose **Create policy**.

1. In the **Policy editor** section, choose **JSON**. 

1. If the destination bucket does not use AWS KMS encryption, paste the following policy into the editor.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

    If the destination bucket does use AWS KMS encryption, paste the following policy into the editor.

------
#### [ JSON ]

****  

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
       },
       {
         "Effect": "Allow",
         "Action": [
           "kms:GenerateDataKey",
           "kms:Decrypt"
         ],
         "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
       }
     ]
   }
   ```

------

   If ACLs are enabled on the destination bucket, also add the `s3:PutObjectAcl` action to the `s3:PutObject` `Action` block, as in the following policy.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                  "s3:PutObject",
                  "s3:PutObjectAcl"
               ],
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
           }
       ]
   }
   ```

------

1. Choose **Next**.

1. Enter a policy name. You will use this name to attach the policy to your IAM role.

1. Choose **Create policy** to save the new policy.

To create an export task, you must be signed in with an IAM role that has the `AmazonS3ReadOnlyAccess` managed policy attached, the IAM policy created above, and the following permissions:
+ `logs:CreateExportTask`
+ `logs:CancelExportTask`
+ `logs:DescribeExportTasks`
+ `logs:DescribeLogStreams`
+ `logs:DescribeLogGroups`

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### Set permissions on an S3 bucket for cross-account export (console)


By default, all S3 buckets and objects are private. Only the resource owner, the AWS account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.

When you set the policy, we recommend that you include a randomly generated string as the prefix for the bucket, so that only intended log streams are exported to the bucket.

**Important**  
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.   
In the following example, the list of account IDs in the `aws:SourceAccount` key would be the accounts from which a user can export log data to your S3 bucket. The `aws:SourceArn` key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.  
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.

**To set permissions on an Amazon S3 bucket**

1. In the Amazon S3 console, choose the bucket that you created.

1. Choose **Permissions**, **Bucket policy**.

1. In the **Bucket Policy Editor**, add the following policy. Change `amzn-s3-demo-bucket` to the name of your S3 bucket. Be sure to specify the correct Region endpoint, such as `us-east-1`, for **Principal**.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Action": "s3:GetBucketAcl",
               "Effect": "Allow",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
               "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
               "Condition": {
                   "StringEquals": {
                       "aws:SourceAccount": [
                           "123456789012",
                           "111122223333"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
                   }
               }
           },
           {
               "Action": "s3:PutObject",
               "Effect": "Allow",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
               "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control",
                       "aws:SourceAccount": [
                           "123456789012",
                           "111122223333"
                       ]
                   },
                   "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
                   }
               }
           },
           {
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/role_name"
               },
               "Action": "s3:PutObject",
               "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
               "Condition": {
                   "StringEquals": {
                       "s3:x-amz-acl": "bucket-owner-full-control"
                   }
               }
           }
       ]
   }
   ```

------

1. Choose **Save** to set the policy that you just added as the access policy on your bucket. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner has full permissions on all of the exported objects.
**Warning**  
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.

### (Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS for cross-account export (console)


This procedure is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS. 

**To export to a bucket encrypted with SSE-KMS**

1. Open the AWS KMS console at [https://console.aws.amazon.com/kms](https://console.aws.amazon.com/kms).

1. To change the AWS Region, use the Region selector in the upper-right corner of the page.

1. In the left navigation bar, choose **Customer managed keys**.

   Choose **Create Key**.

1. For **Key type**, choose **Symmetric**.

1. For **Key usage**, choose **Encrypt and decrypt** and then choose **Next**.

1. Under **Add labels**, enter an alias for the key and optionally add a description or tags. Then choose **Next**.

1. Under **Key administrators**, select who can administer this key, and then choose **Next**.

1. Under **Define key usage permissions**, make no changes and choose **Next**. 

1. Review the settings and choose **Finish**.

1. Back at the **Customer managed keys** page, choose the name of the key that you just created.

1. Choose the **Key policy** tab and choose **Switch to policy view**.

1. In the **Key policy** section, choose **Edit**.

1. Add the following statement to the key policy statement list. When you do, replace *us-east-1* with the Region of your logs, *account-ARN* with the ARN of the account that owns the KMS key, *123456789012* with the account number that owns the KMS key, *key-id* with the ID of the KMS key, and *role_name* with the role used to create the export task.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Allow CWL Service Principal usage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "logs.us-east-1.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Enable IAM User Permissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "account-ARN"
               },
               "Action": [
                   "kms:GetKeyPolicy*",
                   "kms:PutKeyPolicy*",
                   "kms:DescribeKey*",
                   "kms:CreateAlias*",
                   "kms:ScheduleKeyDeletion*",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Enable IAM Role Permissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/role_name"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:123456789012:key/key-id"
           }
       ]
   }
   ```

------

1. Choose **Save changes**.

1. Open the Amazon S3 console at [https://console.aws.amazon.com/s3/](https://console.aws.amazon.com/s3/).

1. Find the bucket that you created in [Create an Amazon S3 bucket for cross-account export (console)](#CreateS3BucketConsole-crossaccount) and choose the bucket name.

1. Choose the **Properties** tab. Then, under **Default Encryption**, choose **Edit**.

1. Under **Server-side Encryption**, choose **Enable**.

1. Under **Encryption type**, choose **AWS Key Management Service key (SSE-KMS)**.

1. Choose **Choose from your AWS KMS keys** and find the key that you created.

1. For **Bucket key**, choose **Enable**.

1. Choose **Save changes**.

### Create an export task for cross-account export (console)


In this procedure, you create the export task for exporting logs from a log group.

**To export data to Amazon S3 using the CloudWatch console**

1. Sign in with sufficient permissions as documented in [Set up access permissions (console)](#CreateIAMUser-With-S3-Access).

1. Open the CloudWatch console at [https://console.aws.amazon.com/cloudwatch/](https://console.aws.amazon.com/cloudwatch/).

1. In the navigation pane, choose **Log groups**.

1. On the **Log Groups** screen, choose the name of the log group.

1. Choose **Actions**, **Export data to Amazon S3**.

1. On the **Export data to Amazon S3** screen, under **Define data export**, set the time range for the data to export using **From** and **To**.

1. If your log group has multiple log streams, you can provide a log stream prefix to limit the log group data to a specific stream. Choose **Advanced**, and then for **Stream prefix**, enter the log stream prefix.

1. Under **Choose S3 bucket**, choose the account associated with the S3 bucket.

1. For **S3 bucket name**, choose an S3 bucket.

1. For **S3 Bucket prefix**, enter the randomly generated string that you specified in the bucket policy.

1. Choose **Export** to export your log data to Amazon S3.

1. To view the status of the log data that you exported to Amazon S3, choose **Actions** and then **View all exports to Amazon S3**.

# Export log data to Amazon S3 using the AWS CLI


In the following example, you use an export task to export all data from a CloudWatch Logs log group named `my-log-group` to an Amazon S3 bucket named `amzn-s3-demo-bucket`. This example assumes that you have already created a log group called `my-log-group`.

Exporting log data to S3 buckets that are encrypted by AWS KMS is supported. Exporting to buckets encrypted with DSSE-KMS is not supported.

The details of how you set up the export depend on whether the Amazon S3 bucket that you want to export to is in the same account as the logs that are being exported, or in a different account.

**Topics**
+ [Same-account export (CLI)](#ExportSingleAccount-CLI)
+ [Cross-account export (CLI)](#ExportCrossAccount-CLI)

## Same-account export (CLI)


If the Amazon S3 bucket is in the same account as the logs that are being exported, use the instructions in this section.

**Topics**
+ [Create an S3 bucket (CLI)](#CreateS3Bucket)
+ [Set up access permissions (CLI)](#CreateIAMUser-With-S3-Access-CLI)
+ [Set permissions on an S3 bucket (CLI)](#S3Permissions)
+ [(Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS (CLI)](#S3-Export-KMSEncrypted-CLI)
+ [Create an export task (CLI)](#CreateExportTask)

### Create an S3 bucket (CLI)


We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip this procedure.

**Note**  
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.

**To create an S3 bucket using the AWS CLI**  
At a command prompt, run the following [create-bucket](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) command, where `LocationConstraint` is the Region where you are exporting log data.

```
aws s3api create-bucket --bucket amzn-s3-demo-bucket --create-bucket-configuration LocationConstraint=us-east-2
```

The following is example output.

```
{
    "Location": "/amzn-s3-demo-bucket"
}
```

### Set up access permissions (CLI)


To create the export task later, you must be signed in with the `AmazonS3ReadOnlyAccess` managed policy attached and with the following permissions:
+ `logs:CreateExportTask`
+ `logs:CancelExportTask`
+ `logs:DescribeExportTasks`
+ `logs:DescribeLogStreams`
+ `logs:DescribeLogGroups`

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.
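As a sketch, the permissions listed above could be captured in an identity-based policy document like the following. The file name is arbitrary, and scoping `Resource` to `*` is an assumption for illustration; narrow it to specific log groups if your setup allows.

```shell
# Write an identity-based policy document carrying the five
# CloudWatch Logs export permissions listed above.
# Resource "*" is an assumption; scope it down where possible.
cat > cwl-export-permissions.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateExportTask",
        "logs:CancelExportTask",
        "logs:DescribeExportTasks",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups"
      ],
      "Resource": "*"
    }
  ]
}
EOF
grep -c '"logs:' cwl-export-permissions.json   # counts the 5 actions
```

You can then attach this document through whichever of the IAM workflows above applies to you.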

### Set permissions on an S3 bucket (CLI)


By default, all S3 buckets and objects are private. Only the resource owner, the account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.

**Important**  
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.   
In the following example, the list of account IDs in the `aws:SourceAccount` key would be the accounts from which a user can export log data to your S3 bucket. The `aws:SourceArn` key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.  
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.

**To set permissions on an S3 bucket**

1. Create a file named `policy.json` and add the following access policy, changing `amzn-s3-demo-bucket` to the name of your S3 bucket and the Region in the `Principal` service endpoint (for example, `logs.us-east-1.amazonaws.com`) to the Region where you are exporting log data. Use a text editor to create this policy file. Don't use the IAM console.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
         {
             "Sid": "AllowGetBucketAcl",
             "Action": "s3:GetBucketAcl",
             "Effect": "Allow",
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
             "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
             "Condition": {
               "StringEquals": {
                   "aws:SourceAccount": [
                       "123456789012",
                       "111122223333"
                   ]
               },
               "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
               }
             }
         },
         {
             "Sid": "AllowPutObject",
             "Action": "s3:PutObject",
             "Effect": "Allow",
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
             "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
             "Condition": {
               "StringEquals": {
                   "s3:x-amz-acl": "bucket-owner-full-control",
                   "aws:SourceAccount": [
                       "123456789012",
                       "111122223333"
                   ]
               },
               "ArnLike": {
                       "aws:SourceArn": [
                           "arn:aws:logs:us-east-1:123456789012:log-group:*",
                           "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
               }
             }
         }
       ]
   }
   ```

------

1. Set the policy that you just added as the access policy on your bucket by using the [put-bucket-policy](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-policy.html) command. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner will have full permissions on all of the exported objects.

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   ```
**Warning**  
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.

### (Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS (CLI)


This procedure is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS. 

**To export to a bucket encrypted with SSE-KMS**

1. Use a text editor to create a file named `key_policy.json` and add the following access policy. When you add the policy, make the following changes:
   + Replace *Region* with the Region of your logs. 
   + Replace *account-ARN* with the ARN of the account that owns the KMS key.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "Allow CWL Service Principal usage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "logs.Region.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "Enable IAM User Permissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "account-ARN"
               },
               "Action": [
                   "kms:GetKeyPolicy*",
                   "kms:PutKeyPolicy*",
                   "kms:DescribeKey*",
                   "kms:CreateAlias*",
                   "kms:ScheduleKeyDeletion*",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           }
       ]
   }
   ```

------

1. Enter the following command:

   ```
   aws kms create-key --policy file://key_policy.json
   ```

   The following is example output from this command:

   ```
   {
       "KeyMetadata": {
           "AWSAccountId": "account_id",
           "KeyId": "key_id",
           "Arn": "arn:aws:kms:us-east-2:account_id:key/key_id",
           "CreationDate": "time",
           "Enabled": true,
           "Description": "",
           "KeyUsage": "ENCRYPT_DECRYPT",
           "KeyState": "Enabled",
           "Origin": "AWS_KMS",
           "KeyManager": "CUSTOMER",
           "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
           "KeySpec": "SYMMETRIC_DEFAULT",
           "EncryptionAlgorithms": [
               "SYMMETRIC_DEFAULT"
           ],
           "MultiRegion": false
       }
   ```

1. Use a text editor to create a file called `bucketencryption.json` with the following contents.

   ```
   {
     "Rules": [
       {
         "ApplyServerSideEncryptionByDefault": {
           "SSEAlgorithm": "aws:kms",
           "KMSMasterKeyID": "{KMS Key ARN}"
         },
         "BucketKeyEnabled": true
       }
     ]
   }
   ```

1. Enter the following command, replacing *amzn-s3-demo-bucket* with the name of the bucket that you are exporting logs to.

   ```
   aws s3api put-bucket-encryption --bucket amzn-s3-demo-bucket --server-side-encryption-configuration file://bucketencryption.json
   ```

   If the command doesn't return an error, the process is successful.
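The `KMSMasterKeyID` value in `bucketencryption.json` must be the ARN of the key that you created in step 2. As a sketch, you can script that substitution instead of pasting the ARN by hand; the ARN below is a placeholder, so use the `Arn` value from your own `create-key` output.

```shell
# Fill bucketencryption.json from a shell variable so the key ARN
# from the create-key output isn't pasted by hand.
# The ARN below is a placeholder value.
KEY_ARN="arn:aws:kms:us-east-2:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
cat > bucketencryption.json <<EOF
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "${KEY_ARN}"
      },
      "BucketKeyEnabled": true
    }
  ]
}
EOF
# Confirm the substitution landed in the file.
grep -q "$KEY_ARN" bucketencryption.json && echo "key ARN inserted"
```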

### Create an export task (CLI)


Use the following command to create the export task. After you create it, the export task might take anywhere from a few seconds to a few hours, depending on the size of the data to export.

**To export data to Amazon S3 using the AWS CLI**

1. Sign in with sufficient permissions as documented in [Set up access permissions (CLI)](#CreateIAMUser-With-S3-Access-CLI).

1. At a command prompt, use the following [create-export-task](https://docs.aws.amazon.com/cli/latest/reference/logs/create-export-task.html) command to create the export task.

   ```
   aws logs create-export-task --profile CWLExportUser --task-name "my-log-group-09-10-2015" --log-group-name "my-log-group" --from 1441490400000 --to 1441494000000 --destination "amzn-s3-demo-bucket" --destination-prefix "export-task-output"
   ```

   The following is example output.

   ```
   {
       "taskId": "cda45419-90ea-4db5-9833-aade86253e66"
   }
   ```
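The `--from` and `--to` values are timestamps in milliseconds since January 1, 1970, 00:00:00 UTC. As a sketch, assuming GNU `date` (standard on Linux), you can compute them from human-readable times; the values below reproduce the example command above.

```shell
# Convert UTC times to millisecond epoch timestamps for --from/--to.
# %s yields seconds since the epoch; the trailing 000 is appended
# literally to express milliseconds. Assumes GNU date.
FROM=$(date -u -d "2015-09-05 22:00:00" +%s000)
TO=$(date -u -d "2015-09-05 23:00:00" +%s000)
echo "$FROM $TO"   # 1441490400000 1441494000000
```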

## Cross-account export (CLI)


If the Amazon S3 bucket is in a different account than the logs that are being exported, use the instructions in this section.

**Topics**
+ [Create an S3 bucket for cross-account export (CLI)](#CreateS3Bucket-CLI-crossaccount)
+ [Set up access permissions for cross-account export (CLI)](#CreateIAMUser-With-S3-Access-CLI-crossaccount)
+ [Set permissions on an S3 bucket for cross-account export (CLI)](#S3Permissions-CLI-crossaccount)
+ [(Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS for cross-account export (CLI)](#S3-Export-KMSEncrypted-CLI-crossaccount)
+ [Create an export task for cross-account export (CLI)](#CreateExportTask-CLI-crossaccount)

### Create an S3 bucket for cross-account export (CLI)


We recommend that you use a bucket that was created specifically for CloudWatch Logs. However, if you want to use an existing bucket, you can skip this procedure.

**Note**  
The S3 bucket must reside in the same Region as the log data to export. CloudWatch Logs doesn't support exporting data to S3 buckets in a different Region.

**To create an S3 bucket using the AWS CLI**  
At a command prompt, run the following [create-bucket](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) command, where `LocationConstraint` is the Region where you are exporting log data.

```
aws s3api create-bucket --bucket amzn-s3-demo-bucket --create-bucket-configuration LocationConstraint=us-east-2
```

The following is example output.

```
{
    "Location": "/amzn-s3-demo-bucket"
}
```

### Set up access permissions for cross-account export (CLI)


First, you must create a new IAM policy that enables CloudWatch Logs to perform the `s3:PutObject` action on the destination Amazon S3 bucket in the destination account.

Along with the `s3:PutObject` action, the additional actions included in the policy depend on whether the destination bucket uses AWS KMS encryption or has ACLs enabled by using the [S3 Object Ownership](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) setting.
+ If the bucket uses KMS encryption, add the `kms:GenerateDataKey` and `kms:Decrypt` actions for the key resource.
+ If ACLs are enabled on the bucket, add the `s3:PutObjectAcl` action for the bucket resource.

Change `amzn-s3-demo-bucket` to the name of your destination S3 bucket in the following policies.

The policy that you create depends on whether the destination bucket uses AWS KMS encryption. If it does not use AWS KMS encryption, create a policy with the following contents.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

------

If the destination bucket uses AWS KMS encryption, create a policy with the following contents.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [{
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
        }
    ]
}
```

------

If ACLs are enabled on the destination bucket, add the `s3:PutObjectAcl` action to the `s3:PutObject` `Action` block in the preceding policies, as shown in the following example.

------
#### [ JSON ]

****  

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
               "s3:PutObject",
               "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
```

------
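To avoid editing the JSON by hand, you can substitute your bucket name with `sed`. This sketch saves the first (unencrypted-bucket) policy from above to a file and swaps in a placeholder bucket name; the file name and `my-destination-bucket` are examples to adapt.

```shell
# Save the unencrypted-bucket policy above, then substitute the
# destination bucket name ("my-destination-bucket" is a placeholder).
cat > cross-account-policy.json <<'EOF'
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*"
        }
    ]
}
EOF
sed -i 's/amzn-s3-demo-bucket/my-destination-bucket/' cross-account-policy.json
# Show the resulting resource ARN.
grep -o 'arn:aws:s3:::[^"]*' cross-account-policy.json
```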

To create an export task, you must be signed in with an IAM role that has the `AmazonS3ReadOnlyAccess` managed policy attached, the IAM policy that you created above, and the following permissions:
+ `logs:CreateExportTask`
+ `logs:CancelExportTask`
+ `logs:DescribeExportTasks`
+ `logs:DescribeLogStreams`
+ `logs:DescribeLogGroups`

To provide access, add permissions to your users, groups, or roles:
+ Users and groups in AWS IAM Identity Center:

  Create a permission set. Follow the instructions in [Create a permission set](https://docs.aws.amazon.com//singlesignon/latest/userguide/howtocreatepermissionset.html) in the *AWS IAM Identity Center User Guide*.
+ Users managed in IAM through an identity provider:

  Create a role for identity federation. Follow the instructions in [Create a role for a third-party identity provider (federation)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-idp.html) in the *IAM User Guide*.
+ IAM users:
  + Create a role that your user can assume. Follow the instructions in [Create a role for an IAM user](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_roles_create_for-user.html) in the *IAM User Guide*.
  + (Not recommended) Attach a policy directly to a user or add a user to a user group. Follow the instructions in [Adding permissions to a user (console)](https://docs.aws.amazon.com//IAM/latest/UserGuide/id_users_change-permissions.html#users_change_permissions-add-console) in the *IAM User Guide*.

### Set permissions on an S3 bucket for cross-account export (CLI)


By default, all S3 buckets and objects are private. Only the resource owner, the account that created the bucket, can access the bucket and any objects that it contains. However, the resource owner can choose to grant access permissions to other resources and users by writing an access policy.

**Important**  
To make exports to S3 buckets more secure, we now require you to specify the list of source accounts that are allowed to export log data to your S3 bucket.   
In the following example, the list of account IDs in the `aws:SourceAccount` key would be the accounts from which a user can export log data to your S3 bucket. The `aws:SourceArn` key would be the resource for which the action is being taken. You may restrict this to a specific log group, or use a wildcard as shown in this example.  
We recommend that you also include the account ID of the account where the S3 bucket is created, to allow export within the same account.

**To set permissions on an S3 bucket**

1. Create a file named `policy.json` and add the following access policy, changing `amzn-s3-demo-bucket` to the name of your destination S3 bucket and the Region in the `Principal` service endpoint (for example, `logs.us-east-1.amazonaws.com`) to the Region where you are exporting log data. Use a text editor to create this policy file. Don't use the IAM console.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
         {
             "Action": "s3:GetBucketAcl",
             "Effect": "Allow",
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket",
             "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
             "Condition": {
               "StringEquals": {
                   "aws:SourceAccount": [
                   "123456789012",
                   "111122223333"
                   ]
               },
               "ArnLike": {
                       "aws:SourceArn": [
                       "arn:aws:logs:us-east-1:123456789012:log-group:*",
                       "arn:aws:logs:us-east-1:111122223333:log-group:*"
                        ]
               }
             }
         },
         {
             "Action": "s3:PutObject" ,
             "Effect": "Allow",
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
             "Principal": { "Service": "logs.us-east-1.amazonaws.com" },
             "Condition": {
               "StringEquals": {
                   "s3:x-amz-acl": "bucket-owner-full-control",
                   "aws:SourceAccount": [
                   "123456789012",
                   "111122223333"
                   ]
               },
               "ArnLike": {
                       "aws:SourceArn": [
                       "arn:aws:logs:us-east-1:123456789012:log-group:*",
                       "arn:aws:logs:us-east-1:111122223333:log-group:*"
                       ]
               }
             }
         },
         {
             "Effect": "Allow",
             "Principal": {
             "AWS": "arn:aws:iam::111122223333:role/role_name"
             },
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
             "Condition": {
               "StringEquals": {
                   "s3:x-amz-acl": "bucket-owner-full-control"
               }
             }
          }
       ]
   }
   ```

------

1. Set the policy that you just added as the access policy on your bucket by using the [put-bucket-policy](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-policy.html) command. This policy enables CloudWatch Logs to export log data to your S3 bucket. The bucket owner will have full permissions on all of the exported objects.

   ```
   aws s3api put-bucket-policy --bucket amzn-s3-demo-bucket --policy file://policy.json
   ```
**Warning**  
If the existing bucket already has one or more policies attached to it, add the statements for CloudWatch Logs access to that policy or policies. We recommend that you evaluate the resulting set of permissions to be sure that they're appropriate for the users who will access the bucket.

### (Optional) Exporting to a destination Amazon S3 bucket encrypted with SSE-KMS for cross-account export (CLI)


This procedure is necessary only if you are exporting to an S3 bucket that uses server-side encryption with AWS KMS keys. This encryption is known as SSE-KMS. 

**To export to a bucket encrypted with SSE-KMS**

1. Use a text editor to create a file named `key_policy.json` and add the following access policy. When you add the policy, make the following changes:
   + Replace *us-east-1* with the Region of your logs. 
   + Replace *account-ARN* with the ARN of the account that owns the KMS key.
   + Replace *123456789012* with the account ID of the account that owns the KMS key.
   + Replace *111122223333* with the account ID of the account that creates the export task.
   + Replace *1234abcd-12ab-34cd-56ef-1234567890ab* with the ID of the KMS key.
   + Replace *role_name* with the role that is used to create the export task.

------
#### [ JSON ]

****  

   ```
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Sid": "AllowCWLServicePrincipalUsage",
               "Effect": "Allow",
               "Principal": {
                   "Service": "logs.us-east-1.amazonaws.com"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "EnableIAMUserPermissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "account-ARN"
               },
               "Action": [
                   "kms:GetKeyPolicy*",
                   "kms:PutKeyPolicy*",
                   "kms:DescribeKey*",
                   "kms:CreateAlias*",
                   "kms:ScheduleKeyDeletion*",
                   "kms:Decrypt"
               ],
               "Resource": "*"
           },
           {
               "Sid": "EnableIAMRolePermissions",
               "Effect": "Allow",
               "Principal": {
                   "AWS": "arn:aws:iam::111122223333:role/role_name"
               },
               "Action": [
                   "kms:GenerateDataKey",
                   "kms:Decrypt"
               ],
               "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
           }
       ]
   }
   ```

------

1. Enter the following command:

   ```
   aws kms create-key --policy file://key_policy.json
   ```

   The following is example output from this command:

   ```
   {
       "KeyMetadata": {
           "AWSAccountId": "account_id",
           "KeyId": "key_id",
           "Arn": "arn:aws:kms:us-east-1:123456789012:key/key_id",
           "CreationDate": "time",
           "Enabled": true,
           "Description": "",
           "KeyUsage": "ENCRYPT_DECRYPT",
           "KeyState": "Enabled",
           "Origin": "AWS_KMS",
           "KeyManager": "CUSTOMER",
           "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
           "KeySpec": "SYMMETRIC_DEFAULT",
           "EncryptionAlgorithms": [
               "SYMMETRIC_DEFAULT"
           ],
           "MultiRegion": false
       }
   ```

1. Use a text editor to create a file called `bucketencryption.json` with the following contents.

   ```
   {
     "Rules": [
       {
         "ApplyServerSideEncryptionByDefault": {
           "SSEAlgorithm": "aws:kms",
           "KMSMasterKeyID": "{KMS Key ARN}"
         },
         "BucketKeyEnabled": true
       }
     ]
   }
   ```

1. Enter the following command, replacing *amzn-s3-demo-bucket* with the name of the bucket that you are exporting logs to.

   ```
   aws s3api put-bucket-encryption --bucket amzn-s3-demo-bucket --server-side-encryption-configuration file://bucketencryption.json
   ```

   If the command doesn't return an error, the process is successful.

### Create an export task for cross-account export (CLI)


Use the following command to create the export task. After you create it, the export task might take anywhere from a few seconds to a few hours, depending on the size of the data to export.

**To export data to Amazon S3 using the AWS CLI**

1. Sign in with sufficient permissions as documented in [Set up access permissions for cross-account export (CLI)](#CreateIAMUser-With-S3-Access-CLI-crossaccount).

1. At a command prompt, use the following [create-export-task](https://docs.aws.amazon.com/cli/latest/reference/logs/create-export-task.html) command to create the export task.

   ```
   aws logs create-export-task --profile CWLExportUser --task-name "my-log-group-09-10-2015" --log-group-name "my-log-group" --from 1441490400000 --to 1441494000000 --destination "amzn-s3-demo-bucket" --destination-prefix "export-task-output"
   ```

   The following is example output.

   ```
   {
       "taskId": "cda45419-90ea-4db5-9833-aade86253e66"
   }
   ```

# Describe export tasks (CLI)


After you create an export task, you can get the current status of the task.

**To describe export tasks using the AWS CLI**  
At a command prompt, use the following [describe-export-tasks](https://docs.aws.amazon.com/cli/latest/reference/logs/describe-export-tasks.html) command.

```
aws logs --profile CWLExportUser describe-export-tasks --task-id "cda45419-90ea-4db5-9833-aade86253e66"
```

The following is example output.

```
{
   "exportTasks": [
   {
      "destination": "my-exported-logs",
      "destinationPrefix": "export-task-output",
      "executionInfo": {
         "creationTime": 1441495400000
      },
      "from": 1441490400000,
      "logGroupName": "my-log-group",
      "status": {
         "code": "RUNNING",
         "message": "Started Successfully"
      },
      "taskId": "cda45419-90ea-4db5-9833-aade86253e66",
      "taskName": "my-log-group-09-10-2015",
      "to": 1441494000000
   }]
}
```

You can use the `describe-export-tasks` command in three different ways:
+ **Without any filters** – Lists all of your export tasks, in reverse order of creation.
+ **Filter on task ID** – Lists the export task, if one exists, with the specified ID.
+ **Filter on task status** – Lists the export tasks with the specified status.

For example, use the following command to filter on the `FAILED` status.

```
aws logs --profile CWLExportUser describe-export-tasks --status-code "FAILED"
```

The following is example output.

```
{
   "exportTasks": [
   {
      "destination": "amzn-s3-demo-bucket",
      "destinationPrefix": "export-task-output",
      "executionInfo": {
         "completionTime": 1441498600000,
         "creationTime": 1441495400000
      },
      "from": 1441490400000,
      "logGroupName": "my-log-group",
      "status": {
         "code": "FAILED",
         "message": "FAILED"
      },
      "taskId": "cda45419-90ea-4db5-9833-aade86253e66",
      "taskName": "my-log-group-09-10-2015",
      "to": 1441494000000
   }]
}
```

# Cancel an export task (CLI)


You can cancel an export task if it's in a `PENDING` or `RUNNING` state.

**To cancel an export task using the AWS CLI**  
At a command prompt, use the following [cancel-export-task](https://docs.aws.amazon.com/cli/latest/reference/logs/cancel-export-task.html) command:

```
aws logs --profile CWLExportUser cancel-export-task --task-id "cda45419-90ea-4db5-9833-aade86253e66"
```

You can use the [describe-export-tasks](https://docs.aws.amazon.com/cli/latest/reference/logs/describe-export-tasks.html) command to verify that the task was canceled successfully.
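As a sketch, a small helper can classify a status code returned by `describe-export-tasks` and tell you whether cancellation is still possible, given that only `PENDING` and `RUNNING` tasks can be canceled.

```shell
# Classify an export task status code; per the above, only tasks in
# the PENDING or RUNNING state can still be canceled.
cancelable() {
  case "$1" in
    PENDING|RUNNING) echo yes ;;
    *) echo no ;;
  esac
}

# Typically fed from a call such as:
#   aws logs describe-export-tasks --task-id "$TASK_ID" \
#       --query 'exportTasks[0].status.code' --output text
cancelable RUNNING     # yes
cancelable COMPLETED   # no
```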