

# Working with AWS CloudTrail Lake

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

AWS CloudTrail Lake lets you run SQL-based queries on your events. CloudTrail Lake converts existing events in row-based JSON format to [Apache ORC](https://orc.apache.org/) format. ORC is a columnar storage format that is optimized for fast retrieval of data. Events are aggregated into event data stores, which are immutable collections of events based on criteria that you select by applying [advanced event selectors](cloudtrail-lake-concepts.md#adv-event-selectors). You can keep the event data in an event data store for up to 3,653 days (about 10 years) if you choose the **One-year extendable retention pricing** option, or up to 2,557 days (about 7 years) if you choose the **Seven-year retention pricing** option. The selectors that you apply to an event data store control which events persist and are available for you to query. CloudTrail Lake is an auditing solution that can complement your compliance stack, and assist you with near real-time troubleshooting.

# CloudTrail Lake availability change


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting on May 31, 2026. For capabilities similar to CloudTrail Lake, explore CloudWatch.

After careful consideration, we decided to close AWS CloudTrail Lake to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal.

AWS CloudTrail Lake provides a managed solution for capturing, storing, and analyzing audit logs from AWS and non-AWS sources. CloudTrail Lake continues to be available for existing customers, but will only receive critical bug fixes and security updates.

This guide provides information about migration options for AWS CloudTrail Lake customers.

**Note**  
AWS CloudTrail continues to be fully supported. Only AWS CloudTrail Lake is no longer open to new customers. Your AWS CloudTrail trails, Insights, and aggregated events are not affected.

## Continued support for existing event data stores


AWS CloudTrail Lake supports two types of event data stores (EDS): organization event data stores, and account event data stores. The level of continued support depends on which type you have configured.
+ **Organization event data stores** – If your AWS Organization has an organization-level EDS, AWS CloudTrail Lake will continue to function as expected. This includes support for new member accounts added to your organization and expansion to additional AWS Regions. To learn how to create an organization event data store, see [Create an organization event data store](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-create-eds).
+ **Account event data stores** – If your AWS Organization has only account-level event data stores, AWS CloudTrail Lake will continue to support those existing accounts, including expansion to new AWS Regions. However, AWS CloudTrail Lake will not support ingestion for new accounts added to your organization. To capture AWS CloudTrail Lake data for new accounts in your organization, you must create an organization event data store or migrate to Amazon CloudWatch.

**Note**  
If you anticipate adding new member accounts to your AWS Organization and want AWS CloudTrail Lake to automatically cover those accounts, ensure you have an organization event data store configured. Account event data stores will not extend coverage to newly added organization member accounts.

## Migration options


We recommend that you migrate your AWS CloudTrail Lake logs data to Amazon CloudWatch.

Amazon CloudWatch  
+ Amazon CloudWatch unifies security, operations, and compliance data in one solution, and provides flexible analytics and seamless integration capabilities. You can automatically normalize and process data for consistency across sources, with built-in support for Open Cybersecurity Schema Framework (OCSF) and OpenTelemetry (OTel) formats, so you can focus on analytics and insights.
+ Amazon CloudWatch provides the current capabilities of CloudTrail Lake at a comparable price point, and has additional capabilities that were top requests from CloudTrail Lake customers. These include native analytics powered by OpenSearch, pre-built connectors for popular third-party sources, and open access through Apache Iceberg APIs.
+ To get started with Amazon CloudWatch, see [CloudWatch pipelines](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Unified-Pipelines.html) in the *Amazon CloudWatch User Guide*. For details about pricing, see [CloudWatch pricing](https://aws.amazon.com/cloudwatch/pricing/).

## Comparing architectures


The current AWS CloudTrail Lake architecture provides a managed solution for capturing, storing, and analyzing audit logs through event data stores and queries. This system operates as a feature within AWS CloudTrail. The recommended alternative, Amazon CloudWatch, maintains the core ability to capture, store, and analyze CloudTrail logs. It unifies security, operations, and compliance data in one solution. Amazon CloudWatch offers additional capabilities such as native analytics powered by OpenSearch, pre-built connectors for popular third-party sources, open access through Apache Iceberg APIs, and built-in support for Open Cybersecurity Schema Framework (OCSF) and OpenTelemetry (OTel) formats.


| Capability | CloudTrail Lake | CloudWatch | Details | 
| --- | --- | --- | --- | 
| Data Sources | 3 AWS, 16 Third-Party | 60+ AWS, 12 Third-Party | CloudWatch supports 30+ AWS sources, including VPC Flow, Lambda, EKS, ALB, NLB, and CloudTrail (except Network & Insights Events). | 
| Cross-account/cross-region Enablement | Yes | Partial | CloudWatch Ingestion supports enablement across accounts but requires separate enablement in each Region. | 
| Cross-account/cross-region Centralization | Yes | Yes | Enable aggregation of logs across accounts & Regions in a single account & Region. | 
| CloudTrail Safety Features – Late event ingestion, termination protection & immutability | Yes | Yes | CloudWatch supports only CloudTrail events/data via CW Ingestion (not trails). | 
| Data Transformation & Enrichment | Limited | Yes | CloudWatch supports Managed OCSF Transformations for key sources & custom transformations for remaining sources. | 
| Native Analytics | Yes | Yes | CloudTrail Lake supports SQL In-place queries with Athena. CloudWatch supports Logs QL, SQL & PPL In-place queries with OpenSearch. | 
| Nested SQL | Yes | No | CloudTrail Lake supports complex nested SQL queries. | 
| 3P Analytics | Yes | Yes | CloudWatch supports in-place queries with 3P tool of choice via Amazon S3 Tables and SageMaker AI Unified Studio. | 
| Data Export to other AWS destinations or 3P tools | Yes | Yes | You can send data via CloudWatch subscription filters and S3 Tables connectors. | 
| Additional Analytics | No | Yes | CloudWatch supports Alerts & Metrics for observability & security use cases. | 
| Event Selectors for CloudTrail | Yes | Limited | CloudWatch supports Advanced selectors for CloudTrail Data events. | 

## Migration procedure


AWS recently introduced a simplified way to unify your operational and security data by allowing you to import historical CloudTrail Lake event data stores (EDS) directly into Amazon CloudWatch. This integration uses CloudWatch's new unified data management architecture to provide a single pane of glass for your logs.

### Best Practice: The Pilot Approach


Before performing a full-scale migration of your historical data, we strongly recommend a pilot migration using a small subset of data. A pilot allows you to:
+ **Verify schema:** Ensure the imported logs appear in CloudWatch Logs in the expected format.
+ **Validate queries:** Test your CloudWatch Logs Insights queries and dashboards against the sample data to confirm your monitoring logic remains intact.
+ **Check permissions:** Validate that IAM permissions and roles are properly configured.
+ **Estimate costs:** Estimate the ingestion costs by observing the volume of a 24-hour sample.

Once you are satisfied with the results, you can proceed with confidence to migrate the full historical dataset.

Once you've successfully migrated your historical dataset, you can enable live ingestion of CloudTrail logs from CloudWatch to ensure you continue to capture logs.

**Note**  
Data prior to 2023 will not be migrated from CloudTrail Lake to Amazon CloudWatch. If you require access to events older than 2023, you must continue to query them directly within CloudTrail Lake, or move them to an Amazon S3 bucket.

### Prerequisites

+ **IAM Permissions:** Ensure your IAM identity has permissions to access both CloudTrail Lake (`cloudtrail:GetEventDataStore`, `cloudtrail:ListEventDataStores`) and CloudWatch Logs (`logs:CreateImportTask`), as well as IAM permissions (`iam:ListRoles`, `iam:CreateRole`, `iam:PassRole`).
+ **Service-Linked Role:** CloudTrail requires an IAM role to perform the export on your behalf. You can create this during the setup process in the console.
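As a sketch, an identity-based policy granting these permissions might look like the following. The resource scoping shown here is an assumption; restrict the role ARN and other resources to your own values.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LakeMigrationReadAndImport",
      "Effect": "Allow",
      "Action": [
        "cloudtrail:GetEventDataStore",
        "cloudtrail:ListEventDataStores",
        "logs:CreateImportTask",
        "iam:ListRoles"
      ],
      "Resource": "*"
    },
    {
      "Sid": "LakeMigrationRoleManagement",
      "Effect": "Allow",
      "Action": [
        "iam:CreateRole",
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::account-id:role/role-name"
    }
  ]
}
```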

### Method 1: Using the CloudTrail Console (Recommended)


This is the most direct way to push data from your existing Lake Event Data Store.

1. Open the CloudTrail console.

1. In the left navigation pane, under **Lake**, choose **Event data stores**.

1. Select the Event data store that contains the data you want to migrate.

1. Choose the **Actions** button in the top right and select **Export to CloudWatch**.

1. Configure Export Settings:
   + **Time Range:** (recommended for Pilot) Instead of selecting your entire history, choose a narrow window (for example, the last 24 hours) to verify the integration. Once verified, repeat the process for the remaining historical data.
   + **Destination:** Specify an existing CloudWatch Log Group or create a new one.

1. **IAM Role:** Choose an existing IAM role or select **Create new IAM role** to allow CloudTrail to deliver logs to CloudWatch.

1. Review the configuration and choose **Export**.

### Method 2: Using the AWS CLI (create-import-task)


This method allows you to programmatically trigger the ingestion of historical event data.

**Step 1: Identify the Source ARN**

You will need the Amazon Resource Name (ARN) of your CloudTrail Lake Event Data Store. You can find this in the CloudTrail console or by running `aws cloudtrail list-event-data-stores`.

**Step 2: Create the Import Task**

Use the `logs` service to create the task. You must specify the source ARN of the Event Data Store.

```
aws logs create-import-task \
    --import-source-arn "arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id" \
    --import-role-arn "arn:aws:iam::account-id:role/role-name" \
    --import-filter '{"startEventTime": START_TIME, "endEventTime": END_TIME}'
```

Parameters:
+ `--import-source-arn`: The ARN of the CloudTrail Lake Event Data Store containing the historical logs.
+ `--import-role-arn`: The ARN of the IAM role with the correct permissions.
+ `--import-filter`: Optional object specifying the start and end times of events you want imported.
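For a 24-hour pilot window, you can build the filter values in your shell. This sketch assumes GNU `date` (Linux) and that the import filter accepts ISO 8601 timestamps; confirm the exact timestamp format expected by your CLI version.

```
# Compute a 24-hour window ending now, in ISO 8601 (UTC).
END_TIME=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
START_TIME=$(date -u -d "24 hours ago" +"%Y-%m-%dT%H:%M:%SZ")

# Assemble the JSON value to pass to --import-filter.
FILTER="{\"startEventTime\": \"${START_TIME}\", \"endEventTime\": \"${END_TIME}\"}"
echo "${FILTER}"
```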

**Step 3: Monitor the Task Status**

Because the import is asynchronous, you can check the progress of the migration using the `describe-import-tasks` command:

```
aws logs describe-import-tasks \
    --import-id "import-id"
```
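To wait for completion in a script, you can poll until the task leaves its in-progress state. The `status` field name and the terminal status values in this sketch are assumptions; confirm them against the actual `describe-import-tasks` output in your account.

```
while true; do
  STATUS=$(aws logs describe-import-tasks \
      --import-id "import-id" \
      --query "importTasks[0].status" --output text)
  echo "Import status: ${STATUS}"
  case "${STATUS}" in
    COMPLETED|FAILED|CANCELLED) break ;;
  esac
  sleep 30
done
```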

## Additional resources

+ [AWS CloudTrail API Reference](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/Welcome.html)
+ [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html)
+ [Amazon CloudWatch User Guide](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Welcome.html)
+ [Working with CloudWatch telemetry enablement rules](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/telemetry-config-rules.html)
+ [CloudWatch Cross-account cross-Region log centralization](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CloudWatchLogs_Centralization.html)

## CloudTrail Lake event data stores


When you create an event data store, you choose the type of events to include in your event data store. You can create an event data store to include [CloudTrail events](query-event-data-store-cloudtrail.md) (management events, data events, network activity events), [CloudTrail Insights events](query-event-data-store-insights.md), [AWS Config configuration items](query-event-data-store-config.md), [AWS Audit Manager evidence](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence-finder.html#understanding-evidence-finder), or [events from outside of AWS](event-data-store-integration-events.md). Each event data store can only contain a specific event category (for example, AWS Config configuration items), because the [event schema](query-supported-event-schemas.md) is unique to the event category. You can store events from an organization in AWS Organizations in an [organization event data store](cloudtrail-lake-organizations.md), including events from multiple Regions and accounts. You can also run SQL queries across multiple event data stores using the supported SQL JOIN keywords. For information about running queries across multiple event data stores, see [Advanced, multi-table query support](query-limitations.md#query-advanced-multi-table).

You can copy trail events to a new or existing event data store to create a point-in-time snapshot of events logged to the trail. For more information, see [Copy trail events to an event data store](cloudtrail-copy-trail-to-lake-eds.md).

You can federate an event data store to see the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries on the event data using Amazon Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

You can attach a resource-based policy to your event data store to provide cross-account access to selected principals. You can add a resource-based policy when you create or update an event data store on the CloudTrail console, or by running the AWS CLI `put-resource-policy` command. For more information, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

By default, all events in an event data store are encrypted by CloudTrail. When you configure an event data store, you can choose to use your own AWS Key Management Service key. Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.

You can control access to actions on event data stores by using authorization based on tags. For more information and examples, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags) in this guide.

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

CloudTrail Lake supports Amazon CloudWatch metrics, which provide information about data ingested and storage bytes. For more information about supported CloudWatch metrics, see [Supported CloudWatch metrics](cloudtrail-lake-cloudwatch-metrics.md).

**Note**  
CloudTrail typically delivers events within an average of about 5 minutes of an API call. This time is not guaranteed.

## CloudTrail Lake queries


CloudTrail Lake queries offer a deeper and more customizable view of events than simple key and value lookups in **Event history**, or running `LookupEvents`. An **Event history** search is limited to a single AWS account, only returns events from a single AWS Region, and cannot query multiple attributes. In contrast, CloudTrail Lake users can run complex SQL queries across multiple event fields. CloudTrail Lake supports all valid Trino `SELECT` statements and functions. For more information about the supported SQL functions and operators, see [Functions and Operators](https://trino.io/docs/current/functions.html) on the Trino documentation website.

You can build a query on the CloudTrail Lake **Editor** tab by writing the query in SQL from scratch, by opening a saved or sample query and editing it, or by using the query generator to produce a query from an English language prompt. For more information, see [Create or edit a query with the CloudTrail console](query-create-edit-query.md) and [Create CloudTrail Lake queries from natural language prompts](lake-query-generator.md).

You can save CloudTrail Lake queries for future use, and view results of queries for up to seven days. When you run queries, you can save the query results to an Amazon S3 bucket.

The CloudTrail console provides a number of sample queries that can help you get started writing your own queries. For more information, see [View sample queries with the CloudTrail console](lake-console-queries.md).

CloudTrail Lake queries incur charges. When you run queries in Lake, you pay based upon the amount of data scanned. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

## CloudTrail Lake dashboards


You can use CloudTrail Lake dashboards to see event trends for the event data stores in your account. CloudTrail Lake offers the following types of dashboards:
+ **Managed dashboards** – You can view a managed dashboard to see event trends for an event data store that collects management events, data events, or Insights events. These dashboards are automatically available to you and are managed by CloudTrail Lake. CloudTrail offers 14 managed dashboards to choose from. You can manually refresh managed dashboards. You cannot modify, add, or remove the widgets for these dashboards, however, you can save a managed dashboard as a custom dashboard if you want to modify the widgets or set a refresh schedule.
+ **Custom dashboards** – Custom dashboards allow you to query events in any event data store type. You can add up to 10 widgets to a custom dashboard. You can manually refresh a custom dashboard, or you can set a refresh schedule.
+ **Highlights dashboards** – Enable the Highlights dashboard to view an at-a-glance overview of the AWS activity collected by the event data stores in your account. The Highlights dashboard is managed by CloudTrail and includes widgets that are relevant to your account. The widgets shown on the Highlights dashboard are unique to each account. These widgets could surface detected abnormal activity or anomalies. For example, your Highlights dashboard could include the **Total cross-account access widget**, which shows if there is an increase in abnormal cross-account activity. CloudTrail updates the Highlights dashboard every 6 hours. The dashboard shows the last 24 hours of data from the last update.

Each dashboard consists of one or more widgets and each widget represents a SQL query.

For more information, see [CloudTrail Lake dashboards](lake-dashboard.md).

## CloudTrail Lake integrations


You can use CloudTrail Lake *integrations* to log and store user activity data from outside of AWS, from any source in your hybrid environments, such as in-house or SaaS applications hosted on-premises or in the cloud, virtual machines, or containers. After you create event data stores in CloudTrail Lake and create a channel to log activity events, you call the `PutAuditEvents` API to ingest your application activity into CloudTrail. You can then use CloudTrail Lake to search, query, and analyze the data that is logged from your applications.

Integrations can also log events to your event data stores from over a dozen CloudTrail partners. In a partner integration, you create destination event data stores, a channel, and a resource policy. After you create the integration, you provide the channel ARN to the partner. There are two types of integrations: direct and solution. With direct integrations, the partner calls the `PutAuditEvents` API to deliver events to the event data store for your AWS account. With solution integrations, the application runs in your AWS account and the application calls the `PutAuditEvents` API to deliver events to the event data store for your AWS account.

For more information about integrations, see [Create an integration with an event source outside of AWS](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-integration.html).

## Additional resources


The following resources can help you get a better understanding of what CloudTrail Lake is and how you can use it.
+ [Modernize Your Audit Log Management Using CloudTrail Lake](https://www.youtube.com/watch?v=aLkecCsHhxw) (YouTube video)
+ [Log Activity Events from Non-AWS Sources in AWS CloudTrail Lake](https://www.youtube.com/watch?v=gF0FLdegQKM) (YouTube video)
+ [Analyze Activity Logs with AWS CloudTrail Lake and Amazon Athena](https://www.youtube.com/watch?v=cOeZaJt_k-w) (YouTube video)
+ [Get visibility into the activity logs for your workforce and customer identities](https://aws.amazon.com/blogs/mt/get-visibility-into-the-activity-logs-for-your-workforce-and-customer-identities/) (AWS blog)
+ [Using AWS CloudTrail Lake to identify older TLS connections to AWS service endpoints](https://aws.amazon.com/blogs/mt/using-aws-cloudtrail-lake-to-identify-older-tls-connections-to-aws-service-endpoints/) (AWS blog)
+ [How Arctic Wolf uses AWS CloudTrail Lake to Simplify Security and Operations](https://aws.amazon.com/blogs/mt/how-arctic-wolf-uses-aws-cloudtrail-lake-to-simplify-security-and-operations/) (AWS blog)
+ [CloudTrail Lake FAQs](https://aws.amazon.com/cloudtrail/faqs/#CloudTrail_Lake)
+ [AWS CloudTrail API Reference](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/Welcome.html)
+ [AWS CloudTrail Data API Reference](https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/Welcome.html)
+ [AWS CloudTrail Partner Onboarding Guide](https://docs.aws.amazon.com/awscloudtrail/latest/partner-onboarding/cloudtrail-lake-partner-onboarding.html)

# CloudTrail Lake supported Regions


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

Currently, CloudTrail Lake is supported in the following AWS Regions:



| Region Name | Region | 
| --- | --- | 
| US East (N. Virginia) | us-east-1 | 
| US East (Ohio) | us-east-2 | 
| US West (N. California) | us-west-1 | 
| US West (Oregon) | us-west-2 | 
| Africa (Cape Town) | af-south-1 | 
| Asia Pacific (Hong Kong) | ap-east-1 | 
| Asia Pacific (Hyderabad) | ap-south-2 | 
| Asia Pacific (Jakarta) | ap-southeast-3 | 
| Asia Pacific (Melbourne) | ap-southeast-4 | 
| Asia Pacific (Mumbai) | ap-south-1 | 
| Asia Pacific (Osaka) | ap-northeast-3 | 
| Asia Pacific (Seoul) | ap-northeast-2 | 
| Asia Pacific (Singapore) | ap-southeast-1 | 
| Asia Pacific (Sydney) | ap-southeast-2 | 
| Asia Pacific (Tokyo) | ap-northeast-1 | 
| Canada (Central) | ca-central-1 | 
| Europe (Frankfurt) | eu-central-1 | 
| Europe (Ireland) | eu-west-1 | 
| Europe (London) | eu-west-2 | 
| Europe (Milan) | eu-south-1 | 
| Europe (Paris) | eu-west-3 | 
| Europe (Spain) | eu-south-2 | 
| Europe (Stockholm) | eu-north-1 | 
| Europe (Zurich) | eu-central-2 | 
| Israel (Tel Aviv) | il-central-1 | 
| Middle East (Bahrain) | me-south-1 | 
| Middle East (UAE) | me-central-1 | 
| South America (São Paulo) | sa-east-1 | 
| AWS GovCloud (US-East) | us-gov-east-1 | 
| AWS GovCloud (US-West) | us-gov-west-1 | 

For information about CloudTrail service endpoints, see [AWS CloudTrail endpoints and quotas](https://docs.aws.amazon.com/general/latest/gr/ct.html).

For more information about using CloudTrail in the AWS GovCloud (US) Regions, see [Service Endpoints](https://docs.aws.amazon.com/govcloud-us/latest/UserGuide/using-govcloud-endpoints.html) in the *AWS GovCloud (US) User Guide*. 

# CloudTrail Lake concepts and terminology


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

This section describes the key concepts and terms to help you use AWS CloudTrail Lake.

**Topics**
+ [Event data stores](#cloudtrail-lake-concepts-eds)
+ [Integrations](#cloudtrail-lake-concepts-integrations)
+ [Queries](#cloudtrail-lake-concepts-queries)
+ [Dashboards](#cloudtrail-lake-concepts-dashboard)

## Event data stores


Events are aggregated into *event data stores*, which are immutable collections of events based on criteria that you select by applying advanced event selectors.

You can create an event data store to log [CloudTrail events](query-event-data-store-cloudtrail.md) (management events, data events, network activity events), [CloudTrail Insights events](query-event-data-store-insights.md), [AWS Audit Manager evidence](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence-finder.html#understanding-evidence-finder), [AWS Config configuration items](query-event-data-store-config.md), or [events outside of AWS](event-data-store-integration-events.md).

**Advanced event selectors**  
*Advanced event selectors* determine which events to include in an event data store. Advanced event selectors help you control costs by logging only those events that are important to you.  
For management events, data events, and network activity events, you can use advanced event selectors to filter events. For example, if you're creating an event data store to collect management events, you can filter out AWS Key Management Service (AWS KMS) or Amazon Relational Database Service (Amazon RDS) Data API events. Typically, AWS KMS actions such as `Encrypt`, `Decrypt`, and `GenerateDataKey` generate more than 99 percent of events.  
For AWS Config configuration items, Audit Manager evidence, or events outside of AWS, advanced event selectors are used only to include events of that type in the event data store.
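As an illustration, an advanced event selector that collects management events while filtering out AWS KMS events might look like the following. The field names follow the `AdvancedEventSelectors` API shape; the selector name is a placeholder.

```
[
  {
    "Name": "Management events without AWS KMS",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": [ "Management" ] },
      { "Field": "eventSource", "NotEquals": [ "kms.amazonaws.com" ] }
    ]
  }
]
```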

**Federation**  
*Federation* lets you see the metadata associated with an event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries on the event data using Amazon Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query.   
When you enable Lake query federation, CloudTrail creates the federated resources on your behalf and registers those resources with [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html). After Lake federation is enabled, you can directly query your event data in Athena without needing to perform any additional steps. For more information, see [Federate an event data store](query-federation.md).

**Pricing option**  
When you create an event data store, you choose the *pricing option* that you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for the event data store. For information about pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

**Retention period**  
An event data store’s *retention period* determines how long event data is kept in the event data store. CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days.
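For example, you can change an existing event data store's retention period with the AWS CLI (the ARN below is a placeholder):

```
aws cloudtrail update-event-data-store \
    --event-data-store "arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id" \
    --retention-period 90
```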

**Default retention period**  
An event data store’s *default retention period* is the default number of days that event data is kept in the event data store. During an event data store’s default retention period, storage is included with ingestion pricing at no additional charge. After the default retention period, pricing for storage is pay-as-you-go.

**Maximum retention period**  
An event data store’s *maximum retention period* represents the maximum number of days that you can keep data in an event data store.

**Termination protection**  
By default, event data stores enable *termination protection*, which protects an event data store from being accidentally deleted. To delete an event data store with termination protection enabled, choose **Change termination protection** from the **Actions** menu on the event data store’s details page. Then you can proceed with deleting the event data store. For more information, see [Change termination protection with the CloudTrail console](query-eds-termination-protection.md).

## Integrations


You can use CloudTrail Lake *integrations* to log and store user activity data from the following sources:
+ Outside of AWS
+ Any source in your hybrid environments, such as in-house or software as a service (SaaS) applications hosted on premises or in the cloud, virtual machines, or containers



An integration requires a channel to deliver the events and an event data store to receive the events. After you set up your integration, call the [PutAuditEvents](https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html) API operation to ingest your application activity into CloudTrail. Then, you can use CloudTrail Lake to search, query, and analyze the data that is logged from your applications. For more information, see [Create an integration with an event source outside of AWS](query-event-data-store-integration.md).

**Integration type**  
There are two types of integrations: *direct* and *solution*. With direct integrations, the partner calls the `PutAuditEvents` API operation to deliver events to the event data store for your AWS account. With solution integrations, the application runs in your AWS account and the application calls the `PutAuditEvents` API operation to deliver events to the event data store for your AWS account.

**Channels**  
Activity events from sources outside of AWS work by using *channels* to bring events into CloudTrail Lake from external partners that work with CloudTrail, or from your own sources. When you create a channel, you choose one or more event data stores to store events that arrive from the channel source. You can change the destination event data stores for a channel as needed, as long as the destination event data stores are set to log `eventCategory="ActivityAuditLog"` events. When you create a channel for events from an external partner, you provide a channel Amazon Resource Name (ARN) to the partner or source application.

**Resource-based policies**  
*Resource-based policies* are JSON policy documents that you attach to a resource. The resource-based policy attached to the channel allows the source to transmit events through the channel. If a channel doesn't have a resource policy, only the channel owner can call the `PutAuditEvents` API operation on the channel. For more information, see [AWS CloudTrail resource-based policy examples](security_iam_resource-based-policy-examples.md).
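For example, a channel policy that lets a partner account deliver events might look like the following sketch. The account ID and channel ARN are placeholders; see the linked examples for authoritative policies.

```python
import json

# Hypothetical partner account (444455556666) and channel ARN.
channel_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPartnerPutAuditEvents",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "cloudtrail-data:PutAuditEvents",
            "Resource": "arn:aws:cloudtrail:us-east-1:111122223333:channel/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE",
        }
    ],
}
print(channel_policy["Statement"][0]["Action"])  # cloudtrail-data:PutAuditEvents
```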

## Queries


*Queries* in CloudTrail Lake are authored in SQL. You can build a query on the CloudTrail Lake **Editor** tab by writing the query in SQL from scratch, by opening a saved or sample query and editing it, or by using the query generator to produce a query from an English language prompt. For more information, see [Create or edit a query with the CloudTrail console](query-create-edit-query.md) and [Create CloudTrail Lake queries from natural language prompts](lake-query-generator.md).

CloudTrail Lake supports all valid Trino `SELECT` statements and functions. For more information about the supported SQL functions and operators, see [Functions and Operators](https://trino.io/docs/current/functions.html) on the Trino documentation website.
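For instance, a typical Lake query groups events by source over a time window. The sketch below builds the SQL as a Python string; the event data store ID is a placeholder that you would replace with your own (you can copy it from the **Editor** tab).

```python
# Placeholder event data store ID; substitute the ID of your own event data store.
EDS_ID = "EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE"

# Count events per service for a given period, busiest services first.
query = f"""
SELECT eventSource, count(*) AS event_count
FROM {EDS_ID}
WHERE eventTime > '2024-01-01 00:00:00'
GROUP BY eventSource
ORDER BY event_count DESC
LIMIT 10
"""
```

You could run this query from the console **Editor** tab, or submit the same string programmatically with the `StartQuery` API operation.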

## Dashboards


By using CloudTrail Lake *dashboards*, you can visualize the events in an event data store and see event trends, such as top AWS services, users, and errors. For more information, see [CloudTrail Lake dashboards](lake-dashboard.md).

**Dashboard types**  
CloudTrail Lake offers the following types of dashboards:  
+ **Managed dashboards** – You can view a managed dashboard to see event trends for an event data store that collects management events, data events, or Insights events. These dashboards are automatically available to you and are managed by CloudTrail Lake. CloudTrail offers 14 managed dashboards to choose from. You can manually refresh managed dashboards. You cannot modify, add, or remove the widgets for these dashboards; however, you can save a managed dashboard as a custom dashboard if you want to modify the widgets or set a refresh schedule.
+ **Custom dashboards** – Custom dashboards allow you to query events in any event data store type. You can add up to 10 widgets to a custom dashboard. You can manually refresh a custom dashboard, or you can set a refresh schedule.
+ **Highlights dashboards** – Enable the Highlights dashboard to view an at-a-glance overview of the AWS activity collected by the event data stores in your account. The Highlights dashboard is managed by CloudTrail and includes widgets that are relevant to your account. The widgets shown on the Highlights dashboard are unique to each account. These widgets could surface detected abnormal activity or anomalies. For example, your Highlights dashboard could include the **Total cross-account access widget**, which shows if there is an increase in abnormal cross-account activity. CloudTrail updates the Highlights dashboard every 6 hours. The dashboard shows the last 24 hours of data from the last update.

**Widgets**  
*Widgets* are the components that make up a dashboard and provide a visualization, such as a line chart or bar chart. Each widget corresponds to a SQL query. When you refresh a dashboard, CloudTrail runs a query for each widget on the dashboard to populate the data for the widget.

# CloudTrail Lake event data stores


When you create an event data store in CloudTrail Lake, you choose the type of events to include in your event data store. You can create an event data store to include CloudTrail events (management events, data events, or network activity events), CloudTrail Insights events, AWS Config configuration items, or events outside of AWS. Each event data store type can only contain specific event categories (for example, AWS Config configuration items), because the event schema is unique to the event category. You can run SQL queries across multiple event data stores using the supported SQL JOIN keywords. For information about running queries across multiple event data stores, see [Advanced, multi-table query support](query-limitations.md#query-advanced-multi-table).
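A cross-store query can then be sketched like this. Both event data store IDs are placeholders, and the join condition assumes both stores collect management events so that `eventId` exists in each; your own join keys will depend on the event categories involved.

```python
# Hypothetical event data store IDs; substitute your own.
EDS_A = "EXAMPLEa-1111-4aaa-aaaa-aaaaaaaaaaaa"
EDS_B = "EXAMPLEb-2222-4bbb-bbbb-bbbbbbbbbbbb"

# Correlate rows from two event data stores on a shared column.
join_query = f"""
SELECT a.eventId, a.eventName, b.eventTime
FROM {EDS_A} AS a
JOIN {EDS_B} AS b ON a.eventId = b.eventId
LIMIT 10
"""
```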

The following table shows the supported event categories for each event data store type. The **eventCategory** column shows the value that you would specify in the advanced event selectors to collect events of that type.



| Event type (console) | eventCategory (API) | Description | 
| --- | --- | --- | 
| CloudTrail events |  `Management` `Data` `NetworkActivity`  | This event data store type can collect CloudTrail management events, data events, and network activity events. For more information, see [Create an event data store for CloudTrail events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-cloudtrail.html). | 
| CloudTrail Insights events |  `Insight`  | This event data store type can collect CloudTrail Insights events. To receive Insights events, you need a [source event data store](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-insights.html#query-event-data-store-cloudtrail-insights) that logs CloudTrail management events and enables Insights. For information about creating the source and destination event data stores, see [Create an event data store for CloudTrail Insights events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-insights.html). | 
| Configuration items |  `ConfigurationItem`  | This event data store type can collect AWS Config configuration items. For more information, see [Create an event data store for AWS Config configuration items](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html). | 
| Events from integration |  `ActivityAuditLog`  | This event data store type can collect non-AWS events from integrations. For more information, see [Create an event data store for events outside of AWS](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/event-data-store-integration-events.html). | 

You can also create an event data store for AWS Audit Manager evidence by using the Audit Manager console. For more information about aggregating evidence in CloudTrail Lake using Audit Manager, see [Understanding how evidence finder works with CloudTrail Lake](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence-finder.html#understanding-evidence-finder) in the *AWS Audit Manager User Guide*.

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

The sections which follow describe how to create, update, and manage event data stores.

**Topics**
+ [Create, update, and manage event data stores with the console](manage-lake-eds-console.md)
+ [Create, update, and manage event data stores with the AWS CLI](lake-eds-cli.md)
+ [Manage event data store lifecycles](query-eds-disable-termination.md)
+ [Copy trail events to an event data store](cloudtrail-copy-trail-to-lake-eds.md)
+ [Federate an event data store](query-federation.md)
+ [Understanding organization event data stores](cloudtrail-lake-organizations.md)

# Create, update, and manage event data stores with the console



You can use the CloudTrail console to create, update, delete, and restore event data stores.

You can update the following settings using the CloudTrail console:
+ You can change the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) from **Seven-year retention pricing** to **One-year extendable retention pricing**.
+ You can update the retention period for the event data store. The retention period determines how long event data is kept in the event data store. 
+ You can convert a multi-Region event data store to a single-Region event data store, or convert a single-Region event data store to a multi-Region event data store.
+ The management account for an AWS Organizations organization can convert an account-level event data store to an organization event data store, or can convert an organization event data store to an account-level event data store. This setting is not available on event data stores that collect events outside of AWS.
+ You can enable or disable [Lake query federation](query-federation.md). Federating an event data store allows you to query your event data from Amazon Athena.
+ You can add or edit the resource-based policy for an event data store to provide cross-account access to your event data store. For more information, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).
+ You can [stop event ingestion ](query-eds-stop-ingestion.md)and restart event ingestion on event data stores that collect management events, data events, or AWS Config configuration items.
+ You can enable or disable [termination protection](query-eds-termination-protection.md). Enabling termination protection protects an event data store from being accidentally deleted. Termination protection is enabled by default.
+ You can [restore](query-eds-restore.md) an event data store that is pending deletion.
+ You can add or remove tags. You can add up to 50 tag key pairs to help you identify, sort, and control access to your event data store.
+ You can add a KMS key to encrypt your event data store. You can’t remove a KMS key from an event data store.

Using the CloudTrail console to create or update an event data store provides the following advantages:
+ If you're configuring an event data store to collect data events, using the CloudTrail console allows you to view the available data event resource types. For more information, see [Logging data events](logging-data-events-with-cloudtrail.md).
+ If you're configuring an event data store to collect network activity events, using the CloudTrail console allows you to view the event sources for which you can log network activity events. For more information, see [Logging network activity events](logging-network-events-with-cloudtrail.md).
+ If you're configuring an event data store to collect events outside of AWS, using the CloudTrail console lets you view information about available partners. For more information, see [Create an event data store for events outside of AWS with the console](event-data-store-integration-events.md).

**Topics**
+ [Create an event data store for CloudTrail events with the console](query-event-data-store-cloudtrail.md)
+ [Create an event data store for Insights events with the console](query-event-data-store-insights.md)
+ [Create an event data store for configuration items with the console](query-event-data-store-config.md)
+ [Create an event data store for events outside of AWS with the console](event-data-store-integration-events.md)
+ [Update an event data store with the console](query-event-data-store-update.md)
+ [Stop and start event ingestion with the console](query-eds-stop-ingestion.md)
+ [Change termination protection with the console](query-eds-termination-protection.md)
+ [Delete an event data store with the console](query-event-data-store-delete.md)
+ [Restore an event data store with the console](query-eds-restore.md)
+ [Exporting data from CloudTrail Lake Event Data Store to CloudWatch](cloudtrail-lake-export-cloudwatch.md)

# Create an event data store for CloudTrail events with the console


Event data stores for CloudTrail events can include CloudTrail management events, data events, and network activity events. You can keep the event data in an event data store for up to 3,653 days (about 10 years) if you choose the **One-year extendable retention pricing** option, or up to 2,557 days (about 7 years) if you choose the **Seven-year retention pricing** option.

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

## To create an event data store for CloudTrail events


Use this procedure to create an event data store that logs CloudTrail management events, data events, or network activity events. 

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, enter a name for the event data store. A name is required.

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about seven years) for the **Seven-year retention pricing** option. 

    CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days. 
**Note**  
If you are copying trail events to this event data store, CloudTrail will not copy an event if its `eventTime` is older than the specified retention period. To determine the appropriate retention period, take the sum of the oldest event you want to copy in days and the number of days you want to retain the events in the event data store (**retention period** = *oldest-event-in-days* + *number-days-to-retain*). For example, if the oldest event you're copying is 45 days old and you want to keep the events in the event data store for a further 45 days, you would set the retention period to 90 days. 
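The arithmetic in the note can be sketched as follows; the range check assumes the **One-year extendable retention pricing** limits described above.

```python
def retention_period_days(oldest_event_age_days: int, days_to_retain: int) -> int:
    """Retention period needed so copied trail events are kept the desired time:
    age of the oldest event to copy plus the further retention you want."""
    period = oldest_event_age_days + days_to_retain
    # Valid range for the one-year extendable retention pricing option.
    if not 7 <= period <= 3653:
        raise ValueError("retention period must be between 7 and 3,653 days")
    return period

print(retention_period_days(45, 45))  # 90
```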

1. (Optional) To enable encryption using AWS Key Management Service, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow your event data store to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.

1. (Optional) If you want to query against your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).
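As an illustration, a statement granting another account query access could be sketched as follows. The account IDs and the event data store ARN are placeholders; see the linked examples for authoritative policies.

```python
import json

# Hypothetical ARNs; substitute your own account ID and event data store ID.
eds_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountQuery",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": [
                "cloudtrail:StartQuery",
                "cloudtrail:DescribeQuery",
                "cloudtrail:GetQueryResults",
            ],
            "Resource": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE",
        }
    ],
}
print(len(eds_policy["Statement"][0]["Action"]))  # 3
```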

1. (Optional) In the **Tags** section, you can add up to 50 tag key pairs to help you identify, sort, and control access to your event data store. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1.  Choose **Next** to configure the event data store. 

1.  On the **Choose events** page, choose **AWS events**, and then choose **CloudTrail events**. 

1. For **CloudTrail events**, choose at least one event type. By default, **Management events** is selected. You can add [management events](logging-management-events-with-cloudtrail.md), [data events](logging-data-events-with-cloudtrail.md), and [network activity events](logging-network-events-with-cloudtrail.md) to your event data store.

1. (Optional) Choose **Copy trail events** if you want to copy events from an existing trail to run queries on past events. To copy trail events to an organization event data store, you must use the management account for the organization. The delegated administrator account cannot copy trail events to an organization event data store. For more information about considerations for copying trail events, see [Considerations for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#cloudtrail-trail-copy-considerations-lake).

1. To have your event data store collect events from all accounts in an AWS Organizations organization, select **Enable for all accounts in my organization**. You must be signed in to the management account or delegated administrator account for the organization to create an event data store that collects events for an organization.
**Note**  
To copy trail events or enable Insights events, you must be signed in to the management account for your organization.

1. Expand **Additional settings** to choose whether you want your event data store to collect events for all AWS Regions, or only the current AWS Region, and choose whether the event data store ingests events. By default, your event data store collects events from all Regions in your account and starts ingesting events when it's created. 

   1. Select **Include only the current region in my event data store** to include only events that are logged in the current Region. If you do not choose this option, your event data store includes events from all Regions.

   1. Deselect **Ingest events** if you do not want the event data store to start ingesting events. For example, you may want to deselect **Ingest events**, if you are copying trail events and do not want the event data store to include any future events. By default, the event data store starts ingesting events when it's created.

1. If your event data store includes management events, you can choose from the following options. For more information about management events, see [Logging management events](logging-management-events-with-cloudtrail.md).

   1. Choose between **Simple event collection** or **Advanced event collection**:
      + Choose **Simple event collection** if you want to log all events, log only read events, or log only write events. You can also choose to exclude AWS Key Management Service and Amazon RDS Data API events.
      + Choose **Advanced event collection** if you want to include or exclude management events based on the values of advanced event selector fields, including the `eventName`, `eventType`, `eventSource`, `sessionCredentialFromConsole`, and `userIdentity.arn` fields.

   1. If you selected **Simple event collection**, choose whether you want to log all events, log only read events, or log only write events. You can also choose to exclude AWS KMS and Amazon RDS Data API events.

   1. If you selected **Advanced event collection**, make the following selections:

      1. In **Log selector template**, choose a predefined template, or **Custom** to build a custom configuration based on advanced event selector field values.

         You can choose from the following predefined templates:
         + **Log all events** – Choose this template to log all events.
         + **Log only read events** – Choose this template to log only read events. Read-only events are events that do not change the state of a resource, such as `Get*` or `Describe*` events.
         + **Log only write events** – Choose this template to log only write events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events.
         + **Log only AWS Management Console events** – Choose this template to log only events originating from the AWS Management Console.
         + **Exclude AWS service initiated events** – Choose this template to exclude AWS service events, which have an `eventType` of `AwsServiceEvent`, and events initiated with AWS service-linked roles (SLRs).

      1. (Optional) In **Selector name**, enter a name to identify your selector. The selector name is a descriptive name for an advanced event selector, such as "Log management events from AWS Management Console sessions". The selector name is listed as `Name` in the advanced event selector and is viewable if you expand the **JSON view**.

      1. If you chose **Custom**, in **Advanced event selectors** build an expression based on advanced event selector field values.
**Note**  
Selectors don't support the use of wildcards like `*`. To match multiple values with a single condition, use `StartsWith`, `EndsWith`, `NotStartsWith`, or `NotEndsWith` to explicitly match the beginning or end of the event field.

         1. Choose from the following fields.
            + **`readOnly`** – `readOnly` can be set to **equals** a value of `true` or `false`. When it is set to `false`, the event data store logs Write-only management events. Read-only management events are events that do not change the state of a resource, such as `Get*` or `Describe*` events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events. To log both **Read** and **Write** events, don't add a `readOnly` selector.
            + **`eventName`** – `eventName` can use any operator. You can use it to include or exclude any management event, such as `CreateAccessPoint` or `GetAccessPoint`.
            + **`userIdentity.arn`** – Include or exclude events for actions taken by specific IAM identities. For more information, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).
            + **`sessionCredentialFromConsole`** – Include or exclude events originating from an AWS Management Console session. This field can be set to **equals** or **not equals** with a value of `true`.
            + **`eventSource`** – You can use it to include or exclude specific event sources. The `eventSource` is typically a short form of the service name without spaces plus `.amazonaws.com`. For example, you could set `eventSource` **equals** to `ec2.amazonaws.com` to log only Amazon EC2 management events.
            + **`eventType`** – The [eventType](cloudtrail-event-reference-record-contents.md#ct-event-type) to include or exclude. For example, you can set this field to **not equals** `AwsServiceEvent` to exclude [AWS service events](non-api-aws-service-events.md).

          1. For each field, choose **+ Condition** to add as many conditions as you need, up to a maximum of 500 specified values for all conditions.

            For information about how CloudTrail evaluates multiple conditions, see [How CloudTrail evaluates multiple conditions for a field](filtering-data-events.md#filtering-data-events-conditions).
**Note**  
You can have a maximum of 500 values for all selectors on an event data store. This includes arrays of multiple values for a selector such as `eventName`. If you have single values for all selectors, you can have a maximum of 500 conditions added to a selector.

          1. Choose **+ Field** to add additional fields as required. To avoid errors, do not set conflicting or duplicate values for fields. 

      1. Optionally, expand **JSON view** to see your advanced event selectors as a JSON block.
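The JSON view for a selector like the ones described above might look like the following sketch; the selector name and field values are illustrative only.

```python
import json

# Illustrative advanced event selector: management events from console
# sessions, excluding AWS KMS events.
advanced_event_selector = {
    "Name": "Log management events from AWS Management Console sessions",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Management"]},
        {"Field": "sessionCredentialFromConsole", "Equals": ["true"]},
        {"Field": "eventSource", "NotEquals": ["kms.amazonaws.com"]},
    ],
}
print(json.dumps(advanced_event_selector, indent=2))
```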

   1. Choose **Enable Insights events capture** to enable Insights. To enable Insights, you need to set up a [destination event data store](query-event-data-store-insights.md#query-event-data-store-insights-procedure) to collect Insights events based upon the management event activity in this event data store.

      If you choose to enable Insights, do the following.

      1. Choose the destination event store that will log Insights events. The destination event data store will collect Insights events based upon the management event activity in this event data store. For information about how to create the destination event data store, see [To create a destination event data store that logs Insights events](query-event-data-store-insights.md#query-event-data-store-insights-procedure).

      1. Choose the Insights types. You can choose **API call rate**, **API error rate**, or both. You must be logging **Write** management events to log Insights events for **API call rate**. You must be logging **Read** or **Write** management events to log Insights events for **API error rate**.

1. To include data events in your event data store, do the following.

   1. Choose a resource type. This is the AWS service and resource on which data events are logged.

   1. In **Log selector template**, choose a predefined template, or choose **Custom** to define your own event collection conditions based on the values of advanced event selector fields.

      You can choose from the following predefined templates:
      + **Log all events** – Choose this template to log all events.
      + **Log only read events** – Choose this template to log only read events. Read-only events are events that do not change the state of a resource, such as `Get*` or `Describe*` events.
      + **Log only write events** – Choose this template to log only write events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events.
      + **Log only AWS Management Console events** – Choose this template to log only events originating from the AWS Management Console.
      + **Exclude AWS service initiated events** – Choose this template to exclude AWS service events, which have an `eventType` of `AwsServiceEvent`, and events initiated with AWS service-linked roles (SLRs).

   1. (Optional) In **Selector name**, enter a name to identify your selector. The selector name is a descriptive name for an advanced event selector, such as "Log data events for only two S3 buckets". The selector name is listed as `Name` in the advanced event selector and is viewable if you expand the **JSON view**.

   1. If you selected **Custom**, in **Advanced event selectors**, build an expression based on the values of advanced event selector fields.
**Note**  
Selectors don't support the use of wildcards like `*`. To match multiple values with a single condition, use `StartsWith`, `EndsWith`, `NotStartsWith`, or `NotEndsWith` to explicitly match the beginning or end of the event field.
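
The following Python sketch illustrates the behavior of a `StartsWith` condition: one specified value can match every object in an S3 bucket without any wildcard. The bucket ARN is a placeholder.

```python
def starts_with_match(value: str, prefixes: list) -> bool:
    """Mimics the StartsWith operator: true if the field value starts
    with any of the condition's specified values."""
    return any(value.startswith(p) for p in prefixes)

# Hypothetical bucket ARN prefix; one condition covers every object
# in the bucket, with no wildcard needed.
condition = ["arn:aws:s3:::amzn-s3-demo-bucket/"]

assert starts_with_match("arn:aws:s3:::amzn-s3-demo-bucket/images/cat.png", condition)
assert not starts_with_match("arn:aws:s3:::other-bucket/cat.png", condition)
```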

      1. Choose from the following fields.
         + **`readOnly`** - `readOnly` can be set to **equals** a value of `true` or `false`. Read-only data events are events that do not change the state of a resource, such as `Get*` or `Describe*` events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events. To log both `read` and `write` events, don't add a `readOnly` selector.
         + **`eventName`** - `eventName` can use any operator. You can use it to include or exclude any data event logged to CloudTrail, such as `PutBucket`, `GetItem`, or `GetSnapshotBlock`.
         + **`eventSource`** – The event source to include or exclude. This field can use any operator.
         + **`eventType`** – The event type to include or exclude. For example, you can set this field to **not equals** `AwsServiceEvent` to exclude [AWS service events](non-api-aws-service-events.md). For a list of event types, see [`eventType`](cloudtrail-event-reference-record-contents.md#ct-event-type) in [CloudTrail record contents for management, data, and network activity events](cloudtrail-event-reference-record-contents.md).
         + **`sessionCredentialFromConsole`** – Include or exclude events originating from an AWS Management Console session. This field can be set to **equals** or **not equals** with a value of `true`.
         + **`userIdentity.arn`** – Include or exclude events for actions taken by specific IAM identities. For more information, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).
         + **`resources.ARN`** - You can use any operator with `resources.ARN`, but if you use **equals** or **does not equal**, the value must exactly match the ARN of a valid resource of the type you've specified in the template as the value of `resources.type`.
**Note**  
You can't use the `resources.ARN` field to filter resource types that do not have ARNs.

           For more information about the ARN formats of data event resources, see [Actions, resources, and condition keys for AWS services](https://docs.aws.amazon.com/service-authorization/latest/reference/reference_policies_actions-resources-contextkeys.html) in the *Service Authorization Reference*.

      1. For each field, choose **+ Condition** to add as many conditions as you need, up to a maximum of 500 specified values for all conditions. For example, to exclude data events for two S3 buckets from data events that are logged on your event data store, you can set the field to **resources.ARN**, set the operator to **does not start with**, and then paste in an S3 bucket ARN for which you do not want to log events.

         To add the second S3 bucket, choose **+ Condition**, and then repeat the preceding instruction, pasting in or browsing for the ARN of a different bucket.

         For information about how CloudTrail evaluates multiple conditions, see [How CloudTrail evaluates multiple conditions for a field](filtering-data-events.md#filtering-data-events-conditions).
**Note**  
You can have a maximum of 500 values for all selectors on an event data store. This includes arrays of multiple values for a selector such as `eventName`. If you have single values for all selectors, you can have a maximum of 500 conditions added to a selector.

      1. Choose **+ Field** to add additional fields as required. To avoid errors, do not set conflicting or duplicate values for fields. For example, do not specify an ARN in one selector to be equal to a value, then specify that the ARN not equal the same value in another selector.

   1. Optionally, expand **JSON view** to see your advanced event selectors as a JSON block.
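
As a hedged example of what the **JSON view** might contain for the two-bucket exclusion described above, the following Python sketch prints one possible advanced event selector. The selector name and ARNs are placeholders, and the exact JSON shape the console renders may differ.

```python
import json

# Hypothetical selector excluding data events for two S3 buckets;
# the name and ARNs are placeholders.
selector = {
    "Name": "Exclude two S3 buckets",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Data"]},
        {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
        {"Field": "resources.ARN", "NotStartsWith": [
            "arn:aws:s3:::amzn-s3-demo-bucket1/",
            "arn:aws:s3:::amzn-s3-demo-bucket2/",
        ]},
    ],
}

print(json.dumps(selector, indent=2))
```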

   1. To add another resource type on which to log data events, choose **Add data event type**. Then repeat the preceding steps to configure advanced event selectors for the new resource type.

1. To include network activity events in your event data store, do the following.

   1. From **Network activity event source**, choose the source for network activity events.

   1. In **Log selector template**, choose a template. You can choose to log all network activity events, log all network activity access denied events, or choose **Custom** to build a custom log selector to filter on multiple fields, such as `eventName` and `vpcEndpointId`.

   1. (Optional) Enter a name to identify the selector. The selector name is listed as **Name** in the advanced event selector and is viewable if you expand the **JSON view**.

   1. In **Advanced event selectors**, build expressions by choosing values for **Field**, **Operator**, and **Value**. You can skip this step if you are using a predefined log template.

      1. For excluding or including network activity events, you can choose from the following fields in the console.
         + **`eventName`** – You can use any operator with `eventName`. You can use it to include or exclude any event, such as `CreateKey`.
         + **`errorCode`** – You can use it to filter on an error code. Currently, the only supported `errorCode` is `VpceAccessDenied`.
         +  **`vpcEndpointId`** – Identifies the VPC endpoint that the operation passed through. You can use any operator with `vpcEndpointId`. 

      1. For each field, choose **+ Condition** to add as many conditions as you need, up to a maximum of 500 specified values for all conditions.

      1. Choose **+ Field** to add additional fields as required. To avoid errors, do not set conflicting or duplicate values for fields.

   1. To add another event source for which you want to log network activity events, choose **Add network activity event selector**.

   1. Optionally, expand **JSON view** to see your advanced event selectors as a JSON block.

1. To copy existing trail events to your event data store, do the following.

   1. Choose the trail that you want to copy. By default, CloudTrail only copies CloudTrail events contained in the S3 bucket's `CloudTrail` prefix and the prefixes inside the `CloudTrail` prefix, and does not check prefixes for other AWS services. If you want to copy CloudTrail events contained in another prefix, choose **Enter S3 URI**, and then choose **Browse S3** to browse to the prefix. If the source S3 bucket for the trail uses a KMS key for data encryption, ensure that the KMS key policy allows CloudTrail to decrypt the data. If your source S3 bucket uses multiple KMS keys, you must update each key's policy to allow CloudTrail to decrypt the data in the bucket. For more information about updating the KMS key policy, see [KMS key policy for decrypting data in the source S3 bucket](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-kms).

   1. Choose the time range for copying the events. CloudTrail checks the prefix and log file name to verify the name contains a date between the chosen start and end date before attempting to copy trail events. You can choose a **Relative range** or an **Absolute range**. To avoid duplicating events between the source trail and destination event data store, choose a time range that is earlier than the creation of the event data store.
**Note**  
CloudTrail only copies trail events that have an `eventTime` within the event data store’s retention period. For example, if an event data store’s retention period is 90 days, then CloudTrail will not copy any trail events with an `eventTime` older than 90 days.
      + If you choose **Relative range**, you can choose to copy events logged in the last 6 months, 1 year, 2 years, 7 years, or a custom range. CloudTrail copies the events logged within the chosen time period.
      + If you choose **Absolute range**, you can choose a specific start and end date. CloudTrail copies the events that occurred between the chosen start and end dates.
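
As an illustration of how a relative range maps to an absolute window, the following Python sketch filters events by `eventTime` for a hypothetical "1 year" relative range. The dates are illustrative, and the console's exact window semantics may differ.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical "1 year" relative range evaluated at a fixed point
# in time.
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
start = now - timedelta(days=365)

def in_copy_window(event_time: datetime) -> bool:
    """True if an event's eventTime falls inside the chosen range."""
    return start <= event_time <= now

assert in_copy_window(datetime(2025, 1, 15, tzinfo=timezone.utc))
assert not in_copy_window(datetime(2023, 1, 15, tzinfo=timezone.utc))
```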

   1. For **Permissions**, choose from the following IAM role options. If you choose an existing IAM role, verify that the IAM role policy provides the necessary permissions. For more information about updating the IAM role permissions, see [IAM permissions for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-iam).
      + Choose **Create a new role (recommended)** to create a new IAM role. For **Enter IAM role name**, enter a name for the role. CloudTrail automatically creates the necessary permissions for this new role.
      + Choose **Use a custom IAM role ARN** to use a custom IAM role that is not listed. For **Enter IAM role ARN**, enter the IAM ARN.
      + Choose an existing IAM role from the drop-down list.

1. Choose **Next** to enrich your events by adding resource tag keys and IAM global condition keys.

1. In **Enrich events**, add up to 50 resource tag keys and 50 IAM global condition keys to provide additional metadata about your events. This helps you categorize and group related events.

   If you add resource tag keys, CloudTrail will include the selected tag keys associated with the resources that were involved in the API call. API events related to deleted resources will not have resource tags.

   If you add IAM global condition keys, CloudTrail will include information about the selected condition keys that were evaluated during the authorization process, including additional details about the principal, session, network, and the request itself. 

   Information about the resource tag keys and IAM global condition keys is shown in the `eventContext` field of the event. For more information, see [Enrich CloudTrail events by adding resource tag keys and IAM global condition keys](cloudtrail-context-events.md).
**Note**  
If an event contains a resource that doesn’t belong to the event Region, CloudTrail will not populate tags for this resource because tag retrieval is limited to the event Region.

1. Choose **Expand event size** to expand the event payload from 256 KB up to 1 MB. This option is automatically enabled when you add resource tag keys or IAM global condition keys to ensure all of your added keys are included in the event.

   Expanding the event size is helpful for analyzing and troubleshooting events because it allows you to see the full contents of the following fields as long as the event payload is less than 1 MB:
   + `annotation`
   + `requestID`
   + `additionalEventData`
   + `serviceEventDetails`
   + `userAgent`
   + `errorCode`
   + `responseElements`
   + `requestParameters`
   + `errorMessage`

   For more information about these fields, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html).

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.

   From this point forward, the event data store captures events that match its advanced event selectors (if you kept the **Ingest events** option selected). Events that occurred before you created the event data store are not in the event data store, unless you opted to copy existing trail events.

You can now run queries on your new event data store. The **Sample queries** tab provides example queries to get you started. For more information about creating and editing queries, see [Create or edit a query with the CloudTrail console](query-create-edit-query.md).

You can also view the [managed dashboards](lake-dashboard-managed.md), or [create custom dashboards](lake-dashboard-custom.md) to visualize event trends. For more information about Lake dashboards, see [CloudTrail Lake dashboards](lake-dashboard.md).

# Create an event data store for Insights events with the console
Create an event data store for Insights events

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

AWS CloudTrail Insights helps AWS users identify and respond to unusual activity associated with API call rates and API error rates by continuously analyzing CloudTrail management events. CloudTrail Insights analyzes your normal patterns of API call rates and API error rates, also called the *baseline*, and generates Insights events when the call volume or error rates are outside normal patterns. Insights events on API call rate are generated for `write` management APIs, and Insights events on API error rate are generated for both `read` and `write` management APIs.

To log Insights events in CloudTrail Lake, you need a destination event data store that logs Insights events and a source event data store that enables Insights and logs management events.

**Note**  
To log Insights events on the API call rate, the source event data store must log `write` management events. To log Insights events on the API error rate, the source event data store must log `read` or `write` management events. 

If you have CloudTrail Insights enabled on a source event data store and CloudTrail detects unusual activity, CloudTrail delivers Insights events to your destination event data store. Unlike other types of events captured in a CloudTrail event data store, Insights events are logged only when CloudTrail detects changes in your account's API usage that differ significantly from the account's typical usage patterns.

After you enable CloudTrail Insights for the first time on an event data store, CloudTrail may take up to 7 days to begin delivering Insights events, provided that unusual activity is detected during that time.

CloudTrail Insights analyzes the management events that occur in each Region for the event data store and generates an Insights event when it detects unusual activity that deviates from the baseline. A CloudTrail Insights event is generated in the same Region where its supporting management events are generated.

For an organization event data store, CloudTrail Insights analyzes the management events from each member account in the organization for each Region and generates an Insights event when unusual activity is detected that deviates from the baseline for the account and the Region.

Additional charges apply for ingesting Insights events in CloudTrail Lake. You will be charged separately if you enable Insights for both trails and CloudTrail Lake event data stores. For information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

**Topics**
+ [To create a destination event data store that logs Insights events](#query-event-data-store-insights-procedure)
+ [To create a source event data store that enables Insights events](#query-event-data-store-cloudtrail-insights)

## To create a destination event data store that logs Insights events


When you create an Insights event data store, you have the option to choose an existing source event data store that logs management events and then specify the Insights types you want to receive. Alternatively, you can enable Insights on a new or existing event data store after you create your Insights event data store, and then choose this event data store as the destination event data store.

This procedure shows you how to create a destination event data store that logs Insights events.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, open the **Lake** submenu, then choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, enter a name for the event data store. A name is required.

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store in days. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about seven years) for the **Seven-year retention pricing** option. The event data store retains event data for the specified number of days.
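
The retention bounds described above can be sketched as a simple validation. The pricing option labels used here are shorthand for illustration, not API values.

```python
# Pricing option labels are illustrative shorthand, not API values.
RETENTION_BOUNDS = {
    "one-year-extendable": (7, 3653),
    "seven-year": (7, 2557),
}

def valid_retention(pricing_option: str, days: int) -> bool:
    """True if the retention period is within the option's bounds."""
    low, high = RETENTION_BOUNDS[pricing_option]
    return low <= days <= high

assert valid_retention("one-year-extendable", 366)
assert valid_retention("seven-year", 2557)
assert not valid_retention("seven-year", 3000)
```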

1. (Optional) To enable encryption using AWS Key Management Service, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow your event data store to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.

1. (Optional) If you want to query against your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).
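
As a hedged illustration of a statement using the supported actions listed above, the following Python sketch prints a resource-based policy that allows a principal in another account to run queries. The account IDs, Region, event data store ARN, and statement ID are all placeholders.

```python
import json

# All identifiers below (account IDs, Region, event data store ARN,
# statement ID) are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountQuery",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            "Action": [
                "cloudtrail:StartQuery",
                "cloudtrail:GetQueryResults",
            ],
            "Resource": "arn:aws:cloudtrail:us-east-1:444455556666:eventdatastore/EXAMPLE-uuid",
        }
    ],
}

print(json.dumps(policy, indent=2))
```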

1. (Optional) In the **Tags** section, you can add up to 50 tag key pairs to help you identify, sort, and control access to your event data store. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1.  Choose **Next** to configure the event data store. 

1.  On the **Choose events** page, choose **AWS events**, and then choose **CloudTrail Insights events**. 

1. In **CloudTrail Insights events**, do the following.

   1. Choose **Allow delegated administrator access** if you want to give your organization's delegated administrator access to this event data store. This option is only available if you are signed in with the management account for an AWS Organizations organization.

   1. (Optional) Choose an existing source event data store that logs management events and specify the Insights types you want to receive.

      To add a source event data store, do the following.

      1. Choose **Add source event data store**.

      1. Choose the source event data store.

      1. Choose the **Insights type** that you want to receive.
         + `ApiCallRateInsight` – The `ApiCallRateInsight` Insights type analyzes write-only management API calls that are aggregated per minute against a baseline API call volume. To receive Insights on `ApiCallRateInsight`, the source event data store must log **Write** management events.
         + `ApiErrorRateInsight` – The `ApiErrorRateInsight` Insights type analyzes management API calls that result in error codes. The error is shown if the API call is unsuccessful. To receive Insights on `ApiErrorRateInsight`, the source event data store must log **Write** or **Read** management events.

      1. Repeat the previous two steps to add any additional Insights types you want to receive.
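
The logging requirement for each Insights type described above can be sketched as a small lookup. This is an illustration of the rules, not an API.

```python
# Encodes the rule above: API call rate Insights need Write
# management events; API error rate Insights need Read or Write.
REQUIRED_MGMT_EVENTS = {
    "ApiCallRateInsight": {"Write"},
    "ApiErrorRateInsight": {"Read", "Write"},  # either one suffices
}

def source_eligible(insight_type: str, logged: set) -> bool:
    """True if the source event data store logs at least one of the
    management event types the Insights type requires."""
    return bool(REQUIRED_MGMT_EVENTS[insight_type] & logged)

assert source_eligible("ApiCallRateInsight", {"Write"})
assert not source_eligible("ApiCallRateInsight", {"Read"})
assert source_eligible("ApiErrorRateInsight", {"Read"})
```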

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.

1. If you did not choose a source event data store in step 10, follow the steps in [To create a source event data store that enables Insights events](#query-event-data-store-cloudtrail-insights) to create a source event data store.

## To create a source event data store that enables Insights events


This procedure shows you how to create a source event data store that enables Insights events and logs management events.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, open the **Lake** submenu, then choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, enter a name for the event data store. A name is required.

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about seven years) for the **Seven-year retention pricing** option.

    CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days.
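
The retention check described above can be sketched as follows. The retention period and dates are illustrative.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # example retention period
now = datetime(2025, 6, 1, tzinfo=timezone.utc)

def is_retained(event_time: datetime) -> bool:
    """True while the event's eventTime is within the retention period."""
    return now - event_time <= timedelta(days=RETENTION_DAYS)

assert is_retained(now - timedelta(days=30))
assert not is_retained(now - timedelta(days=120))
```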

1. (Optional) To enable encryption using AWS Key Management Service, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow your event data store to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.

1. (Optional) If you want to query against your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).
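As an illustration, the following Python sketch builds a minimal resource-based policy statement that lets a second account run queries against the event data store and read the results. The account IDs and event data store ARN are placeholders, not values from this guide; only actions from the supported list above are used.

```python
import json

# Placeholder account IDs and event data store ARN -- substitute your own.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountQuery",
            "Effect": "Allow",
            # The principal in the other account that is allowed to query.
            "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
            # Only actions supported for event data stores may appear here.
            "Action": [
                "cloudtrail:StartQuery",
                "cloudtrail:GetQueryResults",
            ],
            "Resource": "arn:aws:cloudtrail:us-east-1:444455556666:eventdatastore/EXAMPLE-uuid",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

You would paste a policy like this into the resource policy editor in the console.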

1. (Optional) In the **Tags** section, you can add up to 50 tag key-value pairs to help you identify, sort, and control access to your event data store. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1.  Choose **Next** to configure the event data store. 

1.  On the **Choose events** page, choose **AWS events**, and then choose **CloudTrail events**. 

1. In **CloudTrail events**, leave **Management events** selected. 

1. To have your event data store collect events from all accounts in an AWS Organizations organization, select **Enable for all accounts in my organization**. You must be signed in to the management account for the organization to create an event data store that enables Insights.

1. Expand **Additional settings** to choose whether you want your event data store to collect events for all AWS Regions, or only the current AWS Region, and choose whether the event data store ingests events. By default, your event data store collects events from all Regions in your account and starts ingesting events when it's created. 

   1. Choose **Include only the current region in my event data store** if you want to include only events that are logged in the current Region. If you do not choose this option, your event data store includes events from all Regions.

   1. Leave **Ingest events** selected.

1. Choose between **Simple event collection** or **Advanced event collection**:
   + Choose **Simple event collection** if you want to log all events, log only read events, or log only write events. You can also choose to exclude AWS Key Management Service and Amazon RDS Data API events.
   + Choose **Advanced event collection** if you want to include or exclude management events based on the values of advanced event selector fields, including the `eventName`, `eventType`, `eventSource`, `sessionCredentialFromConsole`, and `userIdentity.arn` fields.

1. If you selected **Simple event collection**, choose whether you want to log all events, log only read events, or log only write events. You can also choose to exclude AWS KMS and Amazon RDS Data API events.

1. If you selected **Advanced event collection**, make the following selections:

   1. In **Log selector template**, choose a predefined template, or choose **Custom** to write your own event collection conditions based on the values of advanced event selector fields.

      You can choose from the following predefined templates:
      + **Log all events** – Choose this template to log all events.
      + **Log only read events** – Choose this template to log only read events. Read-only events are events that do not change the state of a resource, such as `Get*` or `Describe*` events.
      + **Log only write events** – Choose this template to log only write events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events.
      + **Log only AWS Management Console events** – Choose this template to log only events originating from the AWS Management Console.
      + **Exclude AWS service initiated events** – Choose this template to exclude AWS service events, which have an `eventType` of `AwsServiceEvent`, and events initiated with AWS service-linked roles (SLRs).

   1. (Optional) In **Selector name**, enter a name to identify your selector. The selector name is a descriptive name for an advanced event selector, such as "Log management events from AWS Management Console sessions". The selector name is listed as `Name` in the advanced event selector and is viewable if you expand the **JSON view**.

   1. If you chose **Custom**, in **Advanced event selectors** build an expression based on advanced event selector field values.
**Note**  
Selectors don't support the use of wildcards like `*`. To match multiple values with a single condition, you can use `StartsWith`, `EndsWith`, `NotStartsWith`, or `NotEndsWith` to explicitly match the beginning or end of the event field.

      1. Choose from the following fields.
         + **`readOnly`** – `readOnly` can be set to **equals** a value of `true` or `false`. When it is set to `false`, the event data store logs only write management events. Read-only management events are events that do not change the state of a resource, such as `Get*` or `Describe*` events. Write events add, change, or delete resources, attributes, or artifacts, such as `Put*`, `Delete*`, or `Write*` events. To log both **Read** and **Write** events, don't add a `readOnly` selector.
         + **`eventName`** – `eventName` can use any operator. You can use it to include or exclude any management event, such as `CreateAccessPoint` or `GetAccessPoint`.
         + **`userIdentity.arn`** – Include or exclude events for actions taken by specific IAM identities. For more information, see [CloudTrail userIdentity element](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-user-identity.html).
         + **`sessionCredentialFromConsole`** – Include or exclude events originating from an AWS Management Console session. This field can be set to **equals** or **not equals** with a value of `true`.
         + **`eventSource`** – You can use it to include or exclude specific event sources. The `eventSource` is typically a short form of the service name without spaces plus `.amazonaws.com`. For example, you could set `eventSource` **equals** to `ec2.amazonaws.com` to log only Amazon EC2 management events.
         + **`eventType`** – The [eventType](cloudtrail-event-reference-record-contents.md#ct-event-type) to include or exclude. For example, you can set this field to **not equals** `AwsServiceEvent` to exclude [AWS service events](non-api-aws-service-events.md).

      1. For each field, choose **+ Condition** to add as many conditions as you need, up to a maximum of 500 specified values for all conditions.

         For information about how CloudTrail evaluates multiple conditions, see [How CloudTrail evaluates multiple conditions for a field](filtering-data-events.md#filtering-data-events-conditions).
**Note**  
You can have a maximum of 500 values for all selectors on an event data store. This includes arrays of multiple values for a selector such as `eventName`. If you have single values for all selectors, you can have a maximum of 500 conditions added to a selector.

      1. Choose **+ Field** to add additional fields as required. To avoid errors, do not set conflicting or duplicate values for fields. 

   1. Optionally, expand **JSON view** to see your advanced event selectors as a JSON block.
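To make the field descriptions above concrete, here is a sketch of one advanced event selector, built as a Python dict so it can be validated before comparing it with the **JSON view**. It collects only Amazon EC2 write management events; the selector name is illustrative, and the operators (`Equals`, `StartsWith`, and so on) follow the advanced event selector format.

```python
import json

# Sketch of an advanced event selector for write-only Amazon EC2
# management events. The Name value is illustrative.
selector = {
    "Name": "Log EC2 write management events",
    "FieldSelectors": [
        {"Field": "eventCategory", "Equals": ["Management"]},
        {"Field": "readOnly", "Equals": ["false"]},
        {"Field": "eventSource", "Equals": ["ec2.amazonaws.com"]},
    ],
}

# Advanced event selectors are expressed as a JSON array.
print(json.dumps([selector], indent=2))
```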

1. Choose **Enable Insights events capture**. 

1. Choose the destination event data store that will log Insights events. The destination event data store collects Insights events based on the management event activity in this event data store. For information about how to create the destination event data store, see [To create a destination event data store that logs Insights events](#query-event-data-store-insights-procedure).

1. Choose the Insights types. You can choose **API call rate**, **API error rate**, or both. You must be logging **Write** management events to log Insights events for **API call rate**. You must be logging **Read** or **Write** management events to log Insights events for **API error rate**.

1. Choose **Next** to enrich your events by adding resource tag keys and IAM global condition keys.

1. In **Enrich events**, add up to 50 resource tag keys and 50 IAM global condition keys to provide additional metadata about your events. This helps you categorize and group related events.

   If you add resource tag keys, CloudTrail will include the selected tag keys associated with the resources that were involved in the API call. API events related to deleted resources will not have resource tags.

   If you add IAM global condition keys, CloudTrail will include information about the selected condition keys that were evaluated during the authorization process, including additional details about the principal, session, network, and the request itself. 

   Information about the resource tag keys and IAM global condition keys is shown in the `eventContext` field of the event. For more information, see [Enrich CloudTrail events by adding resource tag keys and IAM global condition keys](cloudtrail-context-events.md).
**Note**  
If an event contains a resource that doesn’t belong to the event Region, CloudTrail will not populate tags for this resource because tag retrieval is limited to the event Region.

1. Choose **Expand event size** to expand the event payload up to 1 MB from 256 KB. This option is automatically enabled when you add resource tag keys or IAM global condition keys to ensure all of your added keys are included in the event.

   Expanding the event size is helpful for analyzing and troubleshooting events because it allows you to see the full contents of the following fields as long as the event payload is less than 1 MB:
   + `annotation`
   + `requestID`
   + `additionalEventData`
   + `serviceEventDetails`
   + `userAgent`
   + `errorCode`
   + `responseElements`
   + `requestParameters`
   + `errorMessage`

   For more information about these fields, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html).

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.

   From this point forward, the event data store captures events that match its advanced event selectors. After you enable CloudTrail Insights for the first time on your source event data store, CloudTrail may take up to 7 days to begin delivering Insights events, provided that unusual activity is detected during that time.

   You can view the CloudTrail Lake dashboard to visualize the Insights events in your destination event data store. For more information about Lake dashboards, see [CloudTrail Lake dashboards](lake-dashboard.md).

Additional charges apply for ingesting Insights events in CloudTrail Lake. You will be charged separately if you enable Insights for both trails and event data stores. For information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

# Create an event data store for configuration items with the console
Create an event data store for AWS Config configuration items

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can create an event data store to include [AWS Config configuration items](https://docs.aws.amazon.com/config/latest/developerguide/config-concepts.html#config-items), and use the event data store to investigate non-compliant changes to your production environments. With an event data store, you can relate non-compliant rules to the users and resources associated with the changes. A configuration item represents a point-in-time view of the attributes of a supported AWS resource that exists in your account. AWS Config creates a configuration item whenever it detects a change to a resource type that it is recording. AWS Config also creates configuration items when a configuration snapshot is captured.

You can use both AWS Config and CloudTrail Lake to run queries against your configuration items. You can use AWS Config to query the current configuration state of AWS resources based on configuration properties for a single AWS account and AWS Region, or across multiple accounts and Regions. In contrast, you can use CloudTrail Lake to query across diverse data sources such as CloudTrail events, configuration items, and rule evaluations. CloudTrail Lake queries cover all AWS Config configuration items including resource configuration and compliance history.
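For example, a CloudTrail Lake query against a configuration item event data store can filter on the envelope fields that CloudTrail adds at ingestion, such as `eventTime` and `eventCategory`. The event data store ID below is a placeholder; in practice you would copy the real ID from the console. This sketch assembles the query string in Python:

```python
# Placeholder event data store ID -- copy the real ID from the console.
eds_id = "00000000-aaaa-bbbb-cccc-000000000000"

# Restrict to configuration items with an eventTime in 2024,
# newest first. Column names come from the configuration item schema.
query = (
    f"SELECT eventID, eventTime, awsRegion, recipientAccountId "
    f"FROM {eds_id} "
    f"WHERE eventCategory = 'ConfigurationItem' "
    f"AND eventTime > '2024-01-01 00:00:00' "
    f"ORDER BY eventTime DESC"
)

print(query)
```

You would run a query like this from the query editor in the CloudTrail console or with the `StartQuery` API.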

Creating an event data store for configuration items doesn't impact existing AWS Config advanced queries, or any configured AWS Config aggregators. You can continue to run advanced queries using AWS Config, and AWS Config continues to deliver history files to your S3 buckets.

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

## Limitations


The following limitations apply to event data stores for configuration items.
+ No support for custom configuration items
+ No support for event filtering using advanced event selectors

## Prerequisites


Before you create your event data store, set up AWS Config recording for all your accounts and Regions. You can use [Quick Setup](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-config.html), a capability of AWS Systems Manager, to quickly create a configuration recorder powered by AWS Config. 

**Note**  
You are charged service usage fees when AWS Config starts recording configurations. For more information about pricing, see [AWS Config Pricing](https://aws.amazon.com/config/pricing/). For information about managing the configuration recorder, see [Managing the Configuration Recorder](https://docs.aws.amazon.com/config/latest/developerguide/stop-start-recorder.html) in the *AWS Config Developer Guide*.  


Additionally, the following actions are recommended, but are not required to create an event data store.
+  Set up an Amazon S3 bucket to receive a configuration snapshot on request and configuration history. For more information about snapshots, see [Managing the Delivery Channel](https://docs.aws.amazon.com/config/latest/developerguide/manage-delivery-channel.html) and [Delivering Configuration Snapshot to an Amazon S3 Bucket](https://docs.aws.amazon.com/config/latest/developerguide/deliver-snapshot-cli.html) in the *AWS Config Developer Guide*. 
+  Specify the rules that you want AWS Config to use to evaluate compliance information for the recorded resource types. Several of the CloudTrail Lake sample queries for AWS Config require AWS Config Rules to evaluate the compliance state of your AWS resources. For more information about AWS Config Rules, see [Evaluating Resources with AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html) in the *AWS Config Developer Guide*. 

## To create an event data store for configuration items


1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, enter a name for the event data store. A name is required.

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about 7 years) for the **Seven-year retention pricing** option. 

    CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days.
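The retention check described above can be sketched as a simple comparison against `eventTime`; the dates and retention period below are illustrative.

```python
from datetime import datetime, timedelta, timezone

def is_retained(event_time: datetime, retention_days: int, now: datetime) -> bool:
    """Return True if the event's eventTime is within the retention period."""
    return now - event_time <= timedelta(days=retention_days)

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
recent = datetime(2024, 4, 1, tzinfo=timezone.utc)  # 61 days old
stale = datetime(2024, 1, 1, tzinfo=timezone.utc)   # 152 days old

print(is_retained(recent, 90, now))  # True: within the 90-day period
print(is_retained(stale, 90, now))   # False: eventTime is older than 90 days
```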

1. (Optional) To enable encryption using AWS Key Management Service, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow your event data store to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.

1. (Optional) If you want to query against your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).

1. (Optional) In the **Tags** section, you can add up to 50 tag key-value pairs to help you identify, sort, and control access to your event data store. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1. Choose **Next**.

1. On the **Choose events** page, choose **AWS events**, and then choose **Configuration items**.

1. CloudTrail stores the event data store resource in the Region in which you create it, but by default, the configuration items collected in the data store are from all Regions in your account that have recording enabled. Optionally, you can select **Include only the current region in my event data store** to include only configuration items that are captured in the current Region. If you do not choose this option, your event data store includes configuration items from all Regions that have recording enabled.

1. To have your event data store collect configuration items from all accounts in an AWS Organizations organization, select **Enable for all accounts in my organization**. You must be signed in to the management account or delegated administrator account for the organization to create an event data store that collects configuration items for an organization.

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.

   From this point forward, the event data store captures configuration items. Configuration items recorded before you created the event data store are not included in it.

## Configuration item schema


The following table describes the required and optional schema elements that match those in configuration item records. The contents of `eventData` are provided by your configuration items; other fields are provided by CloudTrail after ingestion.

CloudTrail event record contents are described in more detail in [CloudTrail record contents for management, data, and network activity events](cloudtrail-event-reference-record-contents.md).
+ [Fields that are provided by CloudTrail after ingestion](#fields-cloudtrail-event)
+ [Fields that are provided by your events](#fields-config)<a name="fields-cloudtrail-event"></a>


**Fields that are provided by CloudTrail after ingestion**  

| Field name | Input type | Requirement | Description | 
| --- | --- | --- | --- | 
| eventVersion | string | Required |  The version of the AWS event format.  | 
| eventCategory | string | Required |  The event category. For configuration items, the valid value is `ConfigurationItem`.  | 
| eventType | string | Required |  The event type. For configuration items, the valid value is `AwsConfigurationItem`.  | 
| eventID | string | Required |  A unique ID for an event.  | 
| eventTime |  string  | Required |  The event timestamp, in `yyyy-MM-DDTHH:mm:ss` format, in Coordinated Universal Time (UTC).  | 
| awsRegion | string | Required |  The AWS Region to which to assign an event.  | 
| recipientAccountId | string | Required |  Represents the AWS account ID that received this event.  | 
| addendum |  addendum  | Optional |  Shows information about why an event was delayed. If information was missing from an existing event, the addendum block includes the missing information and a reason for why it was missing.  | <a name="fields-config"></a>


**Fields in `eventData` are provided by your configuration items**  

| Field name | Input type | Requirement | Description | 
| --- | --- | --- | --- | 
| eventData |  -  | Required | Fields in eventData are provided by your configuration items. | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The version of the configuration item from its source.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The time when the configuration recording was initiated.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The configuration item status. Valid values are `OK`, `ResourceDiscovered`, `ResourceNotRecorded`, `ResourceDeleted`, and `ResourceDeletedNotRecorded`.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The 12-digit AWS account ID associated with the resource.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The type of AWS resource. For more information about valid resource types, see [ConfigurationItem](https://docs.aws.amazon.com/config/latest/APIReference/API_ConfigurationItem.html) in the *AWS Config API Reference*.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The ID of the resource (for example, sg-*xxxxxx*).  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  The custom name of the resource, if available.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | string | Optional |  Amazon Resource Name (ARN) associated with the resource.   | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The AWS Region where the resource resides.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The Availability Zone associated with the resource.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The time stamp when the resource was created.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  JSON  | Optional |  The description of the resource configuration.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  JSON  | Optional |  Configuration attributes that AWS Config returns for certain resource types to supplement the information returned for the configuration parameter.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  A list of CloudTrail event IDs.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  | - | Optional |  A list of related AWS resources.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The type of relationship with the related resource.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The resource type of the related resource.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The ID of the related resource (for example, sg-*xxxxxx*).  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  string  | Optional |  The custom name of the related resource, if available.  | 
|  [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-config.html)  |  JSON  | Optional |  A mapping of key value tags associated with the resource.  | 

The following example shows the hierarchy of schema elements that match those in configuration item records.

```
{
  "eventVersion": String,
  "eventCategory": String,
  "eventType": String,
  "eventID": String,
  "eventTime": String,
  "awsRegion": String,
  "recipientAccountId": String,
  "addendum": Addendum,
  "eventData": {
      "configurationItemVersion": String,
      "configurationItemCaptureTime": String,
      "configurationItemStatus": String,
      "configurationStateId": String,
      "accountId": String,
      "resourceType": String,
      "resourceId": String,
      "resourceName": String,
      "arn": String,
      "awsRegion": String, 
      "availabilityZone": String,
      "resourceCreationTime": String,
      "configuration": {
        JSON
      },
      "supplementaryConfiguration": {
        JSON
      },
      "relatedEvents": [
        String
      ],
      "relationships": [
        struct{
          "name" : String,
          "resourceType": String,
          "resourceId": String,
          "resourceName": String
        }
      ],
      "tags": {
        JSON
      }
  }
}
```
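For illustration, once events are in an event data store that uses this schema, you could query these fields using dot notation for the nested `eventData` elements. The following is a sketch only; the event data store ID in the `FROM` clause is a placeholder that you would replace with your own ID:

```
SELECT eventData.resourceId, eventData.configurationItemStatus, eventTime
FROM EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
WHERE eventData.resourceType = 'AWS::EC2::SecurityGroup'
  AND eventTime > '2024-01-01 00:00:00'
```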

# Create an event data store for events outside of AWS with the console
Create an event data store for events outside of AWS

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can create an event data store to include events outside of AWS, and then use CloudTrail Lake to search, query, and analyze the data that is logged from your applications.

You can use CloudTrail Lake *integrations* to log and store user activity data from outside of AWS, from any source in your hybrid environments, such as in-house or SaaS applications hosted on-premises or in the cloud, virtual machines, or containers.

When you create an event data store for an integration, you also create a channel, and attach a resource policy to the channel. 

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

## To create an event data store for events outside of AWS


1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, enter a name for the event data store. A name is required.

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about 7 years) for the **Seven-year retention pricing** option. 

    CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days. 

1. (Optional) To enable encryption using AWS Key Management Service, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow your event data store to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.
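As an illustrative sketch only (not the complete required policy; see the linked topic for the full requirements), the KMS key policy needs a statement along these lines so that CloudTrail can encrypt and decrypt event data store data:

```
{
  "Sid": "Allow CloudTrail to encrypt and decrypt event data store data",
  "Effect": "Allow",
  "Principal": { "Service": "cloudtrail.amazonaws.com" },
  "Action": [ "kms:GenerateDataKey", "kms:Decrypt" ],
  "Resource": "*"
}
```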

1. (Optional) If you want to query your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).
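   For illustration, a minimal policy statement that lets a principal in another account run and read queries might look like the following sketch, where the account ID and event data store ARN are placeholders:

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "AllowCrossAccountQuery",
         "Effect": "Allow",
         "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
         "Action": [
           "cloudtrail:StartQuery",
           "cloudtrail:GetQueryResults"
         ],
         "Resource": "arn:aws:cloudtrail:us-east-1:444455556666:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE"
       }
     ]
   }
   ```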

1. (Optional) In the **Tags** section, you can add up to 50 tag key pairs to help you identify, sort, and control access to your event data store. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1.  Choose **Next** to configure the event data store. 

1.  On the **Choose events** page, choose **Events from integrations**. 

1.  From **Events from integration**, choose the source to deliver events to the event data store. 

1. Provide a name to identify the integration's channel. The name can be 3-128 characters. Only letters, numbers, periods, underscores, and dashes are allowed.

1. In **Resource policy**, configure the resource policy for the integration's channel. Resource policies are JSON policy documents that specify what actions a specified principal can perform on the resource and under what conditions. The accounts defined as principals in the resource policy can call the `PutAuditEvents` API to deliver events to your channel. The resource owner has implicit access to the resource if their IAM policy allows the `cloudtrail-data:PutAuditEvents` action.

   The information required for the policy is determined by the integration type. For a direct integration, CloudTrail automatically adds the partner's AWS account IDs, and requires you to enter the unique external ID provided by the partner. For a solution integration, you must specify at least one AWS account ID as principal, and can optionally enter an external ID to help prevent the confused deputy problem.
**Note**  
If you do not create a resource policy for the channel, only the channel owner can call the `PutAuditEvents` API on the channel.

   1. For a direct integration, enter the external ID provided by your partner. The integration partner provides a unique external ID, such as an account ID or a randomly generated string, to use for the integration to help prevent the confused deputy problem. The partner is responsible for creating and providing a unique external ID.

       You can choose **How to find this?** to view the partner's documentation that describes how to find the external ID.   
![\[Partner documentation for external ID\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/integration-external-id.png)
**Note**  
If the resource policy includes an external ID, all calls to the `PutAuditEvents` API must include the external ID. However, if the policy does not define an external ID, the partner can still call the `PutAuditEvents` API and specify an `externalId` parameter.

   1.  For a solution integration, choose **Add AWS account** to specify each AWS account ID to add as a principal in the policy.

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.

1. Provide the channel Amazon Resource Name (ARN) to the partner application. Instructions for providing the channel ARN to the partner application are found on the partner documentation website. For more information, choose the **Learn more** link for the partner on the **Available sources** tab of the **Integrations** page to open the partner's page in AWS Marketplace.

The event data store starts ingesting partner events into CloudTrail through the integration's channel when you, the partner, or the partner application calls the `PutAuditEvents` API on the channel.
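To illustrate the channel resource policy configured in the steps above, a minimal sketch (the account ID and channel ARN are placeholders) that allows a principal in another account to deliver events to the channel might look like this:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPartnerPutAuditEvents",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "cloudtrail-data:PutAuditEvents",
      "Resource": "arn:aws:cloudtrail:us-east-1:444455556666:channel/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE"
    }
  ]
}
```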

# Update an event data store with the console
Update an event data store

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

This section describes how to update an event data store's settings using the AWS Management Console. For information about how to update an event data store using the AWS CLI, see [Update an event data store with the AWS CLI](lake-cli-update-eds.md).

**To update an event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store that you want to update. This action opens the event data store's details page.

1. In **General details**, choose **Edit** to change the following settings:
   + **Event data store name** - Change the name that identifies your event data store.
   + **[Pricing option](cloudtrail-lake-concepts.md#eds-pricing-tier)** - For event data stores using the **Seven-year retention pricing** option, you can choose to use **One-year extendable retention pricing** instead. We recommend one-year extendable retention pricing for event data stores that ingest less than 25 TB of event data on a monthly basis. We also recommend one-year extendable retention pricing if you're seeking a flexible retention period of up to 10 years. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).
**Note**  
You can't change the pricing option for event data stores that use **One-year extendable retention pricing**. If you want to use **Seven-year retention pricing**, [stop ingestion](query-eds-stop-ingestion.md) on your current event data store. Then create a new event data store with the **Seven-year retention pricing** option.
   + **Retention period** - Change the retention period for the event data store. The retention period determines how long event data is kept in the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about 7 years) for the **Seven-year retention pricing** option.
**Note**  
If you decrease the retention period of an event data store, CloudTrail will remove any events with an `eventTime` older than the new retention period. For example, if the previous retention period was 365 days and you decrease it to 100 days, CloudTrail will remove events with an `eventTime` older than 100 days.
   + **Encryption** - To encrypt your event data store using your own KMS key, choose **Use my own AWS KMS key**. By default, all events in an event data store are encrypted by CloudTrail. Using your own KMS key incurs AWS KMS costs for encryption and decryption.
**Note**  
After you associate an event data store with a KMS key, the KMS key can't be removed or changed.
   + To include only events that are logged in the current AWS Region, choose **Include only the current region in my event data store**. If you don't choose this option, your event data store includes events from all Regions.
   + To have your event data store collect events from all accounts in an AWS Organizations organization, choose **Enable for all accounts in my organization**. This option is only available if you're signed in with the management account for your organization, and the **Event type** for the event data store is **CloudTrail events** or **Configuration items**. 

   Choose **Save changes** when you're finished.

1. In **Lake query federation**, choose **Edit** to enable or disable Lake query federation. [Enabling Lake query federation](query-enable-federation.md) lets you view the metadata for your event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries on the event data using Amazon Athena. [Disabling Lake query federation](query-disable-federation.md) disables the integration with AWS Glue, AWS Lake Formation, and Amazon Athena. After disabling Lake query federation, you can no longer query your data in Athena. No CloudTrail Lake data is deleted when you disable federation and you can continue to run queries in CloudTrail Lake.

   To enable federation, do the following:

   1. Choose **Enable**.

   1. Choose whether to create a new IAM role, or use an existing role. When you create a new role, CloudTrail automatically creates a role with the required permissions. If you're using an existing role, be sure the role's policy provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1.  If you're creating a new IAM role, enter a name for the role. 

   1.  If you're choosing an existing IAM role, choose the role you want to use. The role must exist in your account. 

   Choose **Save changes** when you are finished.

1. In **Resource policy**, choose **Edit** to add or revise the resource-based policy for the event data store.

   Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).

1. Edit any additional settings specific to your event data store's **Event type**.

   **Settings for CloudTrail events**
   + To change which events your event data store logs, choose **Edit** in **CloudTrail events**.
   + In **Management events**, choose **Edit** to change the settings for management events. For more information, see [Updating the management event settings for an existing event data store](logging-management-events-with-cloudtrail.md#logging-management-events-with-the-cloudtrail-console-eds).
   + In **Data events**, choose **Edit** to change the settings for data events. You can choose which resource types you want to log and choose the log selector template you want to use. For more information, see [Updating an existing event data store to log data events using the console](logging-data-events-with-cloudtrail.md#logging-data-events-with-the-cloudtrail-console-eds).
   + In **Network activity events**, choose **Edit** to change the settings for network activity events. You can choose which network activity event type you want to log and choose the log selector template you want to use. For more information, see [Update an existing event data store to log network activity events](logging-network-events-with-cloudtrail.md#log-network-events-lake-console).
   + In **Enrich events, expand event size**, choose **Edit** to add or remove resource tags and IAM global condition keys, and expand the event size.

     In **Enrich events**, add up to 50 resource tag keys and 50 IAM global condition keys to provide additional metadata about your events. This helps you categorize and group related events.

     If you add resource tag keys, CloudTrail will include the selected tag keys associated with the resources that were involved in the API call. API events related to deleted resources will not have resource tags.

     If you add IAM global condition keys, CloudTrail will include information about the selected condition keys that were evaluated during the authorization process, including additional details about the principal, session, network, and the request itself.

     Information about the resource tag keys and IAM global condition keys is shown in the `eventContext` field of the event. For more information, see [Enrich CloudTrail events by adding resource tag keys and IAM global condition keys](cloudtrail-context-events.md).
**Note**  
If an event contains a resource that doesn’t belong to the event Region, CloudTrail will not populate tags for this resource because tag retrieval is limited to the event Region.

     Choose **Expand event size** to expand the event payload up to 1 MB from 256 KB. This option is automatically enabled when you add resource tag keys or IAM global condition keys to ensure all of your added keys are included in the event.

     Expanding the event size is helpful for analyzing and troubleshooting events because it allows you to see the full contents of the following fields as long as the event payload is less than 1 MB:
     + `annotation`
     + `requestID`
     + `additionalEventData`
     + `serviceEventDetails`
     + `userAgent`
     + `errorCode`
     + `responseElements`
     + `requestParameters`
     + `errorMessage`

     For more information about these fields, see [CloudTrail record contents](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-event-reference-record-contents.html).

     Choose **Save changes** when you're finished.

   **Settings for Events from integration**

   In **Integrations**, choose your integration. Then choose **Edit** to change the following settings:
   + In **Integration details**, change the name that identifies your integration's channel.
   + In **Event delivery location**, choose the destination for your events.
   + In **Resource policy**, configure the resource policy for the integration's channel.

   Choose **Save changes** when you're finished.

   For more information about these settings, see [Create an integration with a CloudTrail partner with the console](query-event-data-store-integration-partner.md).

1. To add, change, or remove tags, choose **Edit** in **Tags**. You can add up to 50 tag key pairs to help you identify, sort, and control access to your event data store. Choose **Save changes** when you're finished.

# Stop and start event ingestion with the console
Stop and start event ingestion

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

By default, event data stores are configured to ingest events. You can stop an event data store from ingesting events by using the console, AWS CLI, or APIs.

The options to **Start ingestion** and **Stop ingestion** are only available on event data stores containing either CloudTrail events (management events, data events, and network activity events), or AWS Config configuration items.

When you stop ingestion on an event data store, the event data store's state changes to `STOPPED_INGESTION`. You can still run queries on any events already in the event data store. You can also copy trail events to the event data store (if it contains only CloudTrail events).

**To stop an event data store from ingesting events**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Stop ingestion**.

1. When you are prompted to confirm, choose **Stop ingestion**. The event data store will stop ingesting live events.

1. To resume ingestion, choose **Start ingestion**.

**To restart event ingestion**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Start ingestion**.
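The console steps above also have AWS CLI equivalents. As a sketch (the event data store ARN is a placeholder), you could stop and later restart ingestion as follows:

```
aws cloudtrail stop-event-data-store-ingestion \
  --event-data-store arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE

aws cloudtrail start-event-data-store-ingestion \
  --event-data-store arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```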

# Change termination protection with the console
Change termination protection

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

By default, event data stores in AWS CloudTrail Lake are configured with termination protection enabled. Termination protection prevents an event data store from accidental deletion. If you want to delete the event data store, you must disable termination protection. You can disable termination protection by using the AWS Management Console, AWS CLI, or API operations.

**To turn off termination protection**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Change termination protection**.

1. Choose **Disabled**.

1. Choose **Save**. You can now [delete the event data store](query-event-data-store-delete.md).

**To turn on termination protection**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Change termination protection**.

1. To turn on termination protection, choose **Enabled**.

1. Choose **Save**.
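As noted above, you can also change termination protection with the AWS CLI. A sketch using a placeholder event data store ARN:

```
aws cloudtrail update-event-data-store \
  --event-data-store arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
  --no-termination-protection-enabled
```

To turn termination protection back on, run the same command with `--termination-protection-enabled`.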

# Delete an event data store with the console
Delete an event data store

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

This section describes how to delete an event data store using the CloudTrail console. For information about how to delete an event data store using the AWS CLI, see [Delete an event data store with the AWS CLI](lake-cli-delete-eds.md).

**Note**  
You can't delete an event data store if either [termination protection](query-eds-termination-protection.md) or [Lake query federation](query-enable-federation.md) is enabled. By default, CloudTrail enables termination protection to protect an event data store from being accidentally deleted.  
To delete an event data store with an event type of **Events from integration**, you must first delete the integration's channel. You can delete the channel from the integration's details page or by using the **aws cloudtrail delete-channel** command. For more information, see [Delete a channel to delete an integration with the AWS CLI](lake-cli-delete-integration.md).

**To delete an event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Delete**.

1. Type the name of the event data store to confirm that you want to delete it.

1. Choose **Delete**.

After you delete an event data store, the event data store's status changes to `PENDING_DELETION` and remains in that state for 7 days. You can [restore](query-eds-restore.md) an event data store during the 7-day wait period. While in the `PENDING_DELETION` state, an event data store isn't available for queries, and no other operations can be performed on the event data store except restore operations. An event data store that is pending deletion does not ingest events and does not incur costs. Event data stores that are pending deletion count toward the quota of event data stores that can exist in one AWS Region.
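
For reference, the console steps above correspond to a single AWS CLI call. The following sketch reuses the example ARN that appears elsewhere in this guide; substitute your own event data store ARN or ID suffix.

```
aws cloudtrail delete-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE
```

You can then run **get-event-data-store** with the same ARN to confirm that the status is `PENDING_DELETION`.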

# Restore an event data store with the console
Restore an event data store

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

After you delete an event data store in AWS CloudTrail Lake, its status changes to `PENDING_DELETION` and remains in that state for 7 days. During this time, you can restore the event data store by using the AWS Management Console, AWS CLI, or the [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_RestoreEventDataStore.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_RestoreEventDataStore.html) API operation.

This section describes how to restore an event data store using the console. For information about how to restore an event data store using the AWS CLI, see [Restore an event data store with the AWS CLI](lake-cli-manage-eds.md#lake-cli-restore-eds).

**To restore an event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Restore**.
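
The console restore corresponds to the **restore-event-data-store** AWS CLI command. The ARN below is the example ARN used elsewhere in this guide; substitute your own event data store ARN or ID suffix.

```
aws cloudtrail restore-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE
```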

# Exporting data from CloudTrail Lake Event Data Store to CloudWatch
Export Data from an event data store

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

Making CloudTrail Lake data available to CloudWatch provides several advantages:
+ **Centralized log management** - Combine CloudTrail events with application logs, infrastructure logs, and other data sources in CloudWatch.
+ **Simplified integration** - CloudWatch handles the import process with just a few steps - specify the event data store and data range.
+ **Historical data access** - Import historical CloudTrail Lake data to analyze past events alongside current operational data.
+ **No additional CloudTrail cost** - Simplified import of CloudTrail Lake data is available at no additional CloudTrail cost. However, you will incur CloudWatch costs, with Infrequent Access custom logs pricing applied.

This section describes how to export data from an event data store using the CloudTrail console. For information about how to perform this via SDK or AWS CLI, see the [CloudWatch documentation](https://docs.aws.amazon.com/AmazonCloudWatchLogs/latest/APIReference/).

**To export data from an event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store.

1. From **Actions**, choose **Export to CloudWatch**.

1. Choose the time range of the data to export from the event data store.

1. Follow the instructions to create a new IAM role, or provide an existing one, that CloudTrail will use to access your data for export.

1. Choose **Export**.

When making CloudTrail Lake data available for export into CloudWatch, consider the following:
+ **Pricing** - While simplified export of CloudTrail Lake data is available at no additional CloudTrail cost, you incur CloudWatch fees based on custom logs pricing.
+ **Data retention** - Ensure that your CloudTrail Lake event data store retention period covers the historical data you want to export.
+ **Regional availability** - Check the CloudWatch documentation for the AWS Regions that support this feature.
+ **Event data store access** - You must have access to the event data store from which data will be exported.

# Create, update, and manage event data stores with the AWS CLI


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

This section describes the AWS CLI commands you can use to create, update, and manage your CloudTrail Lake event data stores.

When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the **--region** parameter with the command.
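
For example, the following pair of commands lists event data stores first in your profile's default Region and then in an explicitly chosen Region; `us-west-2` here is only an illustration.

```
aws cloudtrail list-event-data-stores

aws cloudtrail list-event-data-stores --region us-west-2
```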

## Available commands for event data stores


Commands for creating and updating event data stores in CloudTrail Lake include:
+ `create-event-data-store` to create an event data store.
+ `get-event-data-store` to return information about the event data store including the advanced event selectors configured for the event data store.
+ `update-event-data-store` to change the configuration of an existing event data store.
+ `list-event-data-stores` to list the event data stores.
+ `delete-event-data-store` to delete an event data store.
+ `restore-event-data-store` to restore an event data store that is pending deletion.
+ `start-import` to start an import of trail events to an event data store, or retry a failed import.
+ `[get-import](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-import.html)` to return information about a specific import.
+ `[stop-import](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/stop-import.html)` to stop an import of trail events to an event data store.
+ `[list-imports](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/list-imports.html)` to return information on all imports, or a select set of imports by `ImportStatus` or `Destination`.
+ `[list-import-failures](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/list-import-failures.html)` to list import failures for the specified import.
+ `stop-event-data-store-ingestion` to stop event ingestion on an event data store.
+ `start-event-data-store-ingestion` to restart event ingestion on an event data store.
+ `enable-federation` to enable federation on an event data store to query the event data store in Amazon Athena.
+ `disable-federation` to disable federation on an event data store. After you disable federation, you can no longer query against the event data store's data in Amazon Athena. You can continue to query in CloudTrail Lake.
+ `[put-insight-selectors](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-insight-selectors.html)` to add or modify Insights event selectors for an existing event data store, and enable or disable Insights events.
+ `[get-insight-selectors](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-insight-selectors.html)` to return information about Insights event selectors configured for an event data store.
+ `[add-tags](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/add-tags.html)` to add one or more tags (key-value pairs) to an existing event data store.
+ `[remove-tags](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/remove-tags.html)` to remove one or more tags from an event data store.
+ `[list-tags](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/list-tags.html)` to return a list of tags associated with an event data store.
+ [`get-event-configuration`](lake-cli-manage-eds.md#lake-cli-get-event-configuration) to return any resource tag keys and IAM global conditions keys configured for the event data store. The command also returns whether the event data store is configured to collect `Standard` size events or `Large` size events.
+ [`put-event-configuration`](lake-cli-manage-eds.md#lake-cli-put-event-configuration) to expand the event size and add or remove resource tag keys and IAM global condition keys. For more information, see [Enrich CloudTrail events by adding resource tag keys and IAM global condition keys](cloudtrail-context-events.md).
+ `[put-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-resource-policy.html)` to attach a resource-based policy to an event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).
+ `[get-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-resource-policy.html)` to get the resource-based policy attached to an event data store. 
+ `[delete-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/delete-resource-policy.html)` to delete the resource-based policy attached to an event data store. 
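
As a quick illustration of the tagging commands in this list, the following sketch adds a tag to an event data store and then lists its tags. The ARN is the example ARN used elsewhere in this guide, and the key-value pair is arbitrary.

```
aws cloudtrail add-tags \
--resource-id arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE \
--tags-list Key=Environment,Value=Production

aws cloudtrail list-tags \
--resource-id-list arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE
```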

For a list of available commands for CloudTrail Lake queries, see [Available commands for CloudTrail Lake queries](lake-queries-cli.md#lake-queries-cli-commands).

For a list of available commands for CloudTrail Lake dashboards, see [Available commands for dashboards](lake-dashboard-cli.md#lake-dashboard-cli-commands).

For a list of available commands for CloudTrail Lake integrations, see [Available commands for CloudTrail Lake integrations](lake-integrations-cli.md#lake-integrations-cli-commands).

# Create an event data store with the AWS CLI


This section describes how to use the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html) command to create an event data store and provides examples of different types of event data stores that you can create.

When you create an event data store, the only required parameter is `--name`, which is used to identify the event data store. You can configure additional optional parameters, including:
+ `--advanced-event-selectors` - Specifies the type of events to include in the event data store. By default, event data stores log all management events. For more information about advanced event selectors, see [AdvancedEventSelector](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedEventSelector.html) in the CloudTrail API Reference.
+ `--kms-key-id` - Specifies the KMS key ID to use to encrypt the events delivered by CloudTrail. The value can be an alias name prefixed by `alias/`, a fully specified ARN to an alias, a fully specified ARN to a key, or a globally unique identifier.
+ `--multi-region-enabled` - Creates a multi-Region event data store that logs events for all AWS Regions in your account. This is the default behavior, even if you don't add the parameter.
+ `--organization-enabled` - Enables an event data store to collect events for all accounts in an organization. By default, the event data store is not enabled for all accounts in an organization.
+ `--billing-mode` - Determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store.

  The following are the possible values:
  + `EXTENDABLE_RETENTION_PRICING` - This billing mode is generally recommended if you ingest less than 25 TB of event data a month and want a flexible retention period of up to 3653 days (about 10 years). The default retention period for this billing mode is 366 days.
  + `FIXED_RETENTION_PRICING` - This billing mode is recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 2557 days (about 7 years). The default retention period for this billing mode is 2557 days.

  The default value is `EXTENDABLE_RETENTION_PRICING`.
+ `--retention-period` - The number of days to keep events in the event data store. Valid values are integers between 7 and 3653 if the `--billing-mode` is `EXTENDABLE_RETENTION_PRICING`, or between 7 and 2557 if the `--billing-mode` is set to `FIXED_RETENTION_PRICING`. If you do not specify `--retention-period`, CloudTrail uses the default retention period for the `--billing-mode`.
+ `--start-ingestion` - Starts event ingestion on the event data store when it's created. This is the default behavior, even if you don't add the parameter.

  Specify `--no-start-ingestion` if you do not want the event data store to ingest live events. For example, you may want to set this parameter if you are copying events to the event data store and only plan to use the event data to analyze past events. The `--no-start-ingestion` parameter is only valid if the `eventCategory` is `Management`, `Data`, or `ConfigurationItem`.

The following examples show how to create different types of event data stores.

**Topics**
+ [Create an event data store for S3 data events with the AWS CLI](#lake-cli-create-eds-data)
+ [Create an event data store for KMS network activity events with the AWS CLI](#lake-cli-create-eds-network)
+ [Create an event data store for AWS Config configuration items with the AWS CLI](#lake-cli-create-eds-config)
+ [Create an organization event data store for management events with the AWS CLI](#lake-cli-create-eds-org)
+ [Create event data stores for Insights events with the AWS CLI](#lake-cli-insights)

## Create an event data store for S3 data events with the AWS CLI


The following example AWS Command Line Interface (AWS CLI) **create-event-data-store** command creates an event data store named `my-event-data-store` that selects all Amazon S3 data events and is encrypted using a KMS key.

```
aws cloudtrail create-event-data-store \
--name my-event-data-store \
--kms-key-id "arn:aws:kms:us-east-1:123456789012:alias/KMS_key_alias" \
--advanced-event-selectors '[
        {
            "Name": "Select all S3 data events",
            "FieldSelectors": [
                { "Field": "eventCategory", "Equals": ["Data"] },
                { "Field": "resources.type", "Equals": ["AWS::S3::Object"] },
                { "Field": "resources.ARN", "StartsWith": ["arn:aws:s3"] }
            ]
        }
    ]'
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE",
    "Name": "my-event-data-store",
    "Status": "CREATED",
    "AdvancedEventSelectors": [
        {
            "Name": "Select all S3 data events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Data"
                    ]
                },
                {
                    "Field": "resources.type",
                    "Equals": [
                        "AWS::S3::Object"
                    ]
                },
                {
                    "Field": "resources.ARN",
                    "StartsWith": [
                        "arn:aws:s3"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 366,
    "KmsKeyId": "arn:aws:kms:us-east-1:123456789012:alias/KMS_key_alias",
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-11-09T22:19:39.417000-05:00",
    "UpdatedTimestamp": "2023-11-09T22:19:39.603000-05:00"
}
```

## Create an event data store for KMS network activity events with the AWS CLI


The following example shows how to create an event data store that includes `VpceAccessDenied` network activity events for AWS KMS. This example sets the `errorCode` field equal to `VpceAccessDenied` and the `eventSource` field equal to `kms.amazonaws.com`.

```
aws cloudtrail create-event-data-store \
--name EventDataStoreName \
--advanced-event-selectors '[
     {
        "Name": "Audit AccessDenied AWS KMS events over a VPC endpoint",
        "FieldSelectors": [
            {
                "Field": "eventCategory",
                "Equals": ["NetworkActivity"]
            },
            {
                "Field": "eventSource",
                "Equals": ["kms.amazonaws.com"]
            },
            {
                "Field": "errorCode",
                "Equals": ["VpceAccessDenied"]
            }
        ]
    }
]'
```

The command returns the following example output.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLEb4a8-99b1-4ec2-9258-EXAMPLEc890",
    "Name": "EventDataStoreName",
    "Status": "CREATED",
    "AdvancedEventSelectors": [
        {
            "Name": "Audit AccessDenied AWS KMS events over a VPC endpoint",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "NetworkActivity"
                    ]
                },
                {
                    "Field": "eventSource",
                    "Equals": [
                        "kms.amazonaws.com"
                    ]
                },
                {
                    "Field": "errorCode",
                    "Equals": [
                        "VpceAccessDenied"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "RetentionPeriod": 366,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2024-05-20T21:00:17.673000+00:00",
    "UpdatedTimestamp": "2024-05-20T21:00:17.820000+00:00"
}
```

For more information about network activity events, see [Logging network activity events](logging-network-events-with-cloudtrail.md).

## Create an event data store for AWS Config configuration items with the AWS CLI


The following example AWS CLI **create-event-data-store** command creates an event data store named `config-items-eds` that selects AWS Config configuration items. To collect configuration items, specify that the `eventCategory` field Equals `ConfigurationItem` in the advanced event selectors.

```
aws cloudtrail create-event-data-store \
--name config-items-eds \
--advanced-event-selectors '[
    {
        "Name": "Select AWS Config configuration items",
        "FieldSelectors": [
            { "Field": "eventCategory", "Equals": ["ConfigurationItem"] }
        ]
    }
]'
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE",
    "Name": "config-items-eds",
    "Status": "CREATED",
    "AdvancedEventSelectors": [
        {
            "Name": "Select AWS Config configuration items",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "ConfigurationItem"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 366,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-11-07T19:03:24.277000+00:00",
    "UpdatedTimestamp": "2023-11-07T19:03:24.468000+00:00"
}
```

## Create an organization event data store for management events with the AWS CLI


The following example AWS CLI **create-event-data-store** command creates an organization event data store that collects all management events and sets the `--billing-mode` parameter to `FIXED_RETENTION_PRICING`.

```
aws cloudtrail create-event-data-store --name org-management-eds --organization-enabled --billing-mode FIXED_RETENTION_PRICING
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE6-d493-4914-9182-e52a7934b207",
    "Name": "org-management-eds",
    "Status": "CREATED",
    "AdvancedEventSelectors": [
        {
            "Name": "Default management events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Management"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": true,
    "BillingMode": "FIXED_RETENTION_PRICING",
    "RetentionPeriod": 2557,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-11-16T15:30:50.689000+00:00",
    "UpdatedTimestamp": "2023-11-16T15:30:50.851000+00:00"
}
```

## Create event data stores for Insights events with the AWS CLI


To log Insights events in CloudTrail Lake, you need a destination event data store that collects Insights events and a source event data store that enables Insights and logs management events.

This procedure shows you how to create the destination and source event data stores and then enable Insights events.

1. Run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html) command to create a destination event data store that collects Insights events. The value for `eventCategory` must be `Insight`. Replace *retention-period-days* with the number of days you would like to retain events in your event data store. Valid values are integers between 7 and 3653 if the `--billing-mode` is `EXTENDABLE_RETENTION_PRICING`, or between 7 and 2557 if the `--billing-mode` is set to `FIXED_RETENTION_PRICING`. If you do not specify `--retention-period`, CloudTrail uses the default retention period for the `--billing-mode`.

   If you are signed in with the management account for an AWS Organizations organization, include the `--organization-enabled` parameter if you want to give your [delegated administrator](cloudtrail-delegated-administrator.md) access to the event data store.

   ```
   aws cloudtrail create-event-data-store \
   --name insights-event-data-store \
   --no-multi-region-enabled \
   --retention-period retention-period-days \
   --advanced-event-selectors '[
       {
         "Name": "Select Insights events",
         "FieldSelectors": [
             { "Field": "eventCategory", "Equals": ["Insight"] }
           ]
       }
     ]'
   ```

   The following is an example response.

   ```
   {
       "Name": "insights-event-data-store",
       "ARN": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE",
       "AdvancedEventSelectors": [
           {
              "Name": "Select Insights events",
              "FieldSelectors": [
                 {
                     "Field": "eventCategory",
                     "Equals": [
                         "Insight"
                       ]
                   }
               ]
           }
       ],
       "MultiRegionEnabled": false,
       "OrganizationEnabled": false,
       "BillingMode": "EXTENDABLE_RETENTION_PRICING",
       "RetentionPeriod": "90",
       "TerminationProtectionEnabled": true,
       "CreatedTimestamp": "2023-05-08T15:22:33.578000+00:00",
       "UpdatedTimestamp": "2023-05-08T15:22:33.714000+00:00"
   }
   ```

   You will use the `ARN` (or ID suffix of the ARN) from the response as the value for the `--insights-destination` parameter in step 3.

1. Run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-event-data-store.html) command to create a source event data store that logs management events. By default, event data stores log all management events. You don't need to specify the advanced event selectors if you want to log all management events. Replace *retention-period-days* with the number of days you would like to retain events in your event data store. Valid values are integers between 7 and 3653 if the `--billing-mode` is `EXTENDABLE_RETENTION_PRICING`, or between 7 and 2557 if the `--billing-mode` is set to `FIXED_RETENTION_PRICING`. If you do not specify `--retention-period`, CloudTrail uses the default retention period for the `--billing-mode`. If you are creating an organization event data store, include the `--organization-enabled` parameter.

   ```
   aws cloudtrail create-event-data-store --name source-event-data-store --retention-period retention-period-days
   ```

   The following is an example response.

   ```
   {
       "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE9952-4ab9-49c0-b788-f4f3EXAMPLE",
       "Name": "source-event-data-store",
       "Status": "CREATED",
       "AdvancedEventSelectors": [
           {
               "Name": "Default management events",
               "FieldSelectors": [
                   {
                       "Field": "eventCategory",
                       "Equals": [
                           "Management"
                       ]
                   }
               ]
           }
       ],
       "MultiRegionEnabled": true,
       "OrganizationEnabled": false,
       "BillingMode": "EXTENDABLE_RETENTION_PRICING",
       "RetentionPeriod": 90,
       "TerminationProtectionEnabled": true,
       "CreatedTimestamp": "2023-05-08T15:25:35.578000+00:00",
       "UpdatedTimestamp": "2023-05-08T15:25:35.714000+00:00"
   }
   ```

   You will use the `ARN` (or ID suffix of the ARN) from the response as the value for the `--event-data-store` parameter in step 3.

1. Run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/put-insight-selectors.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/put-insight-selectors.html) command to enable Insights events. Insights selector values can be `ApiCallRateInsight`, `ApiErrorRateInsight`, or both. For the `--event-data-store` parameter, specify the ARN (or ID suffix of the ARN) of the source event data store that logs management events and will enable Insights. For the `--insights-destination` parameter, specify the ARN (or ID suffix of the ARN) of the destination event data store that will log Insights events.

   ```
   aws cloudtrail put-insight-selectors --event-data-store arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE9952-4ab9-49c0-b788-f4f3EXAMPLE --insights-destination arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE --insight-selectors '[{"InsightType": "ApiCallRateInsight"},{"InsightType": "ApiErrorRateInsight"}]'
   ```

   The following result shows the Insights event selector that is configured for the event data store.

   ```
   {
     "EventDataStoreARN": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE9952-4ab9-49c0-b788-f4f3EXAMPLE",
     "InsightsDestination": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE",
     "InsightSelectors":
         [
            {
               "InsightType": "ApiErrorRateInsight"
            },
            {
               "InsightType": "ApiCallRateInsight"
            }
         ]
   }
   ```

   After you enable CloudTrail Insights for the first time on an event data store, CloudTrail may take up to 7 days to begin delivering Insights events, provided that unusual activity is detected during that time.

   CloudTrail Insights analyzes management events that occur in a single Region, not globally. A CloudTrail Insights event is generated in the same Region where its supporting management events are generated.

   For an organization event data store, CloudTrail analyzes management events from each member's account instead of analyzing the aggregation of all management events for the organization.

Additional charges apply for ingesting Insights events in CloudTrail Lake. You will be charged separately if you enable Insights for both trails and event data stores. For information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

# Import trail events to an event data store with the AWS CLI
Import trail events with the AWS CLI

This section shows how to create and configure an event data store by running the [https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/create-event-data-store.html](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/create-event-data-store.html) command and then how to import the events to that event data store by using the [https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/start-import.html](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/start-import.html) command. For more information about importing trail events, see [Copy trail events to an event data store](cloudtrail-copy-trail-to-lake-eds.md).

## Preparing to import trail events


Before you import trail events, make the following preparations.
+ Be sure you have a role with the [required permissions](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-iam) to import trail events to an event data store.
+ Determine the [--billing-mode](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) value you want to specify for the event data store. The `--billing-mode` determines the cost of ingesting and storing events, and the default and maximum retention period for the event data store.

  When you import trail events to CloudTrail Lake, CloudTrail unzips the logs that are stored in gzip (compressed) format. Then CloudTrail copies the events contained in the logs to your event data store. The size of the uncompressed data could be greater than the actual Amazon S3 storage size. To get a general estimate of the size of the uncompressed data, multiply the size of the logs in the S3 bucket by 10. You can use this estimate to choose the `--billing-mode` value for your use case.
+ Determine the value you want to specify for the `--retention-period`. CloudTrail will not copy an event if its `eventTime` is older than the specified retention period.

  To determine the appropriate retention period, take the sum of the age (in days) of the oldest event you want to copy and the number of days you want to retain the events in the event data store, as demonstrated in this equation:

  **Retention period** = *oldest-event-in-days* + *number-days-to-retain*

  For example, if the oldest event you're copying is 45 days old and you want to keep the events in the event data store for a further 45 days, you would set the retention period to 90 days.
+ Decide whether you want to use the event data store to analyze any future events. If you don't want to ingest any future events, include the `--no-start-ingestion` parameter when you create the event data store. By default, event data stores begin ingesting events when they're created.
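To make the size estimate and retention-period arithmetic above concrete, here is a small shell sketch. The byte count and bucket name are hypothetical; the `aws s3 ls` command mentioned in the comment is one way to obtain the compressed size.

```shell
# Hypothetical compressed size of the trail logs in the S3 bucket, in bytes.
# One way to get this number: aws s3 ls s3://your-log-bucket --recursive --summarize
COMPRESSED_BYTES=53687091200   # ~50 GiB of gzipped logs

# Rough uncompressed estimate: multiply the compressed size by 10.
ESTIMATED_GIB=$(( COMPRESSED_BYTES * 10 / 1024 / 1024 / 1024 ))
echo "Estimated uncompressed size: ~${ESTIMATED_GIB} GiB"

# Retention period = oldest-event-in-days + number-days-to-retain
OLDEST_EVENT_DAYS=45
DAYS_TO_RETAIN=45
RETENTION_PERIOD=$(( OLDEST_EVENT_DAYS + DAYS_TO_RETAIN ))
echo "Retention period: ${RETENTION_PERIOD} days"
```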

## To create an event data store and import trail events to that event data store


1. Run the **create-event-data-store** command to create the new event data store. In this example, the `--retention-period` is set to `120` because the oldest event being copied is 90 days old and we want to retain the events for 30 more days. The `--no-start-ingestion` parameter is included because we don't want to ingest any future events. `--billing-mode` isn't set because we're using the default value, `EXTENDABLE_RETENTION_PRICING`, which suits our expectation of ingesting less than 25 TB of event data.
**Note**  
If you're creating the event data store to replace your trail, we recommend configuring the `--advanced-event-selectors` to match the event selectors of your trail to ensure you have the same event coverage. By default, event data stores log all management events.

   ```
   aws cloudtrail create-event-data-store \
   --name import-trail-eds \
   --retention-period 120 \
   --no-start-ingestion
   ```

   The following is an example response:

   ```
   {
       "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEa-4357-45cd-bce5-17ec652719d9",
       "Name": "import-trail-eds",
       "Status": "CREATED",
       "AdvancedEventSelectors": [
           {
               "Name": "Default management events",
               "FieldSelectors": [
                   {
                       "Field": "eventCategory",
                       "Equals": [
                           "Management"
                       ]
                   }
               ]
           }
       ],
       "MultiRegionEnabled": true,
       "OrganizationEnabled": false,
       "BillingMode": "EXTENDABLE_RETENTION_PRICING",
       "RetentionPeriod": 120,
       "TerminationProtectionEnabled": true,
       "CreatedTimestamp": "2023-11-09T16:52:25.444000+00:00",
       "UpdatedTimestamp": "2023-11-09T16:52:25.569000+00:00"
   }
   ```

   The initial `Status` is `CREATED` so we'll run the **get-event-data-store** command to verify ingestion is stopped.

   ```
   aws cloudtrail get-event-data-store --event-data-store eds-id
   ```

   The response shows the `Status` is now `STOPPED_INGESTION`, which indicates the event data store is not ingesting live events.

   ```
   {
       "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEa-4357-45cd-bce5-17ec652719d9",
       "Name": "import-trail-eds",
       "Status": "STOPPED_INGESTION",
       "AdvancedEventSelectors": [
           {
               "Name": "Default management events",
               "FieldSelectors": [
                   {
                       "Field": "eventCategory",
                       "Equals": [
                           "Management"
                       ]
                   }
               ]
           }
       ],
       "MultiRegionEnabled": true,
       "OrganizationEnabled": false,
       "BillingMode": "EXTENDABLE_RETENTION_PRICING",
       "RetentionPeriod": 120,
       "TerminationProtectionEnabled": true,
       "CreatedTimestamp": "2023-11-09T16:52:25.444000+00:00",
       "UpdatedTimestamp": "2023-11-09T16:52:25.569000+00:00"
   }
   ```

1. Run the **start-import** command to import the trail events to the event data store created in step 1. Specify the ARN (or ID suffix of the ARN) of the event data store as the value for the `--destinations` parameter. For `--start-event-time`, specify the `eventTime` of the oldest event you want to copy; for `--end-event-time`, specify the `eventTime` of the newest event you want to copy. For `--import-source`, specify the S3 URI of the bucket containing your trail logs, the AWS Region of the bucket, and the ARN of the role used for importing trail events.

   ```
   aws cloudtrail start-import \
   --destinations '["arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEa-4357-45cd-bce5-17ec652719d9"]' \
   --start-event-time 2023-08-11T16:08:12.934000+00:00 \
   --end-event-time 2023-11-09T17:08:20.705000+00:00 \
   --import-source '{"S3": {"S3LocationUri": "s3://aws-cloudtrail-logs-123456789012-612ff1f6/AWSLogs/123456789012/CloudTrail/", "S3BucketRegion": "us-east-1", "S3BucketAccessRoleArn": "arn:aws:iam::123456789012:role/service-role/CloudTrailLake-us-east-1-copy-events-eds"}}'
   ```

   The following is an example response.

   ```
   {
      "CreatedTimestamp": "2023-11-09T17:08:20.705000+00:00",
      "Destinations": [
           "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEa-4357-45cd-bce5-17ec652719d9"
       ],
      "EndEventTime": "2023-11-09T17:08:20.705000+00:00",
      "ImportId": "EXAMPLEe-7be2-4658-9204-b38c3257fcd1",
      "ImportSource": { 
         "S3": { 
            "S3BucketAccessRoleArn": "arn:aws:iam::123456789012:role/service-role/CloudTrailLake-us-east-1-copy-events-eds",
            "S3BucketRegion":"us-east-1",
            "S3LocationUri": "s3://aws-cloudtrail-logs-123456789012-111ff1f6/AWSLogs/123456789012/CloudTrail/"
         }
      },
      "ImportStatus": "INITIALIZING",
      "StartEventTime": "2023-08-11T16:08:12.934000+00:00",
      "UpdatedTimestamp": "2023-11-09T17:08:20.806000+00:00"
   }
   ```

1. Run the [get-import](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-import.html) command to get information about the import.

   ```
   aws cloudtrail get-import --import-id import-id
   ```

   The following is an example response.

   ```
   {
       "ImportId": "EXAMPLEe-7be2-4658-9204-b38c3EXAMPLE",
       "Destinations": [
           "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEa-4357-45cd-bce5-17ec652719d9"
       ],
       "ImportSource": {
           "S3": {
               "S3LocationUri": "s3://aws-cloudtrail-logs-123456789012-111ff1f6/AWSLogs/123456789012/CloudTrail/",
               "S3BucketRegion":"us-east-1",
               "S3BucketAccessRoleArn": "arn:aws:iam::123456789012:role/service-role/CloudTrailLake-us-east-1-copy-events-eds"
           }
       },
       "StartEventTime": "2023-08-11T16:08:12.934000+00:00",
       "EndEventTime": "2023-11-09T17:08:20.705000+00:00",
       "ImportStatus": "COMPLETED",
       "CreatedTimestamp": "2023-11-09T17:08:20.705000+00:00",
       "ImportStatistics": {
           "PrefixesFound": 1548,
           "PrefixesCompleted": 1548,
           "FilesCompleted": 92845,
           "EventsCompleted": 577249,
           "FailedEntries": 0
       }
   }
   ```

   An import finishes with an `ImportStatus` of `COMPLETED` if there were no failures, or `FAILED` if there were failures.

   If the import had `FailedEntries`, you can run the [list-import-failures](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/list-import-failures.html) command to return a list of failures.

   ```
   aws cloudtrail list-import-failures --import-id import-id
   ```

   To retry an import that had failures, run the **start-import** command with only the `--import-id` parameter. When you retry an import, CloudTrail resumes the import at the location where the failure occurred.

   ```
   aws cloudtrail start-import --import-id import-id
   ```
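Because an import can take a while to finish, you can poll the **get-import** command until the `ImportStatus` reaches a terminal state. The sketch below is illustrative: the helper function and the 30-second interval are not part of the CloudTrail CLI, and the commented-out loop assumes valid AWS credentials and a real import ID.

```shell
# Return success (0) while the import is still running, so a caller keeps polling.
import_in_progress() {
  case "$1" in
    INITIALIZING|IN_PROGRESS) return 0 ;;
    *) return 1 ;;   # terminal states such as COMPLETED, FAILED, or STOPPED
  esac
}

# Polling loop (commented out; requires AWS credentials and a real import ID):
# while true; do
#   STATUS=$(aws cloudtrail get-import --import-id "$IMPORT_ID" \
#     --query ImportStatus --output text)
#   import_in_progress "$STATUS" || break
#   sleep 30
# done
# echo "Import finished with status: $STATUS"
```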

# Update an event data store with the AWS CLI


This section provides examples that show how to update an event data store's settings by running the AWS CLI `update-event-data-store` command.

**Topics**
+ [Update the billing mode with the AWS CLI](#lake-cli-update-billing-mode)
+ [Update the retention period, enable termination protection, and specify an AWS KMS key with the AWS CLI](#lake-cli-update-retention)
+ [Disable termination protection with the AWS CLI](#lake-cli-update-disable-termination)

## Update the billing mode with the AWS CLI


The `--billing-mode` for the event data store determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. If an event data store's `--billing-mode` is set to `FIXED_RETENTION_PRICING`, you can change the value to `EXTENDABLE_RETENTION_PRICING`. `EXTENDABLE_RETENTION_PRICING` is generally recommended if your event data store ingests less than 25 TB of event data per month and you want a flexible retention period of up to 3,653 days. For information about pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

**Note**  
You cannot change the `--billing-mode` value from `EXTENDABLE_RETENTION_PRICING` to `FIXED_RETENTION_PRICING`. If the event data store's billing mode is set to `EXTENDABLE_RETENTION_PRICING` and you want to use `FIXED_RETENTION_PRICING` instead, you can [stop ingestion](lake-cli-manage-eds.md#lake-cli-stop-ingestion-eds) on the event data store and create a new event data store that uses `FIXED_RETENTION_PRICING`.

The following example AWS CLI **update-event-data-store** command changes the `--billing-mode` for the event data store from `FIXED_RETENTION_PRICING` to `EXTENDABLE_RETENTION_PRICING`. The required `--event-data-store` parameter value is an ARN (or the ID suffix of the ARN); other parameters are optional.

```
aws cloudtrail update-event-data-store \
--region us-east-1 \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--billing-mode EXTENDABLE_RETENTION_PRICING
```

The following is an example response.

```
{
    "EventDataStoreArn": "event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE",
    "Name": "management-events-eds",
    "Status": "ENABLED",
    "AdvancedEventSelectors": [
        {
            "Name": "Default management events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Management"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 2557,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-10-27T10:55:55.384000-04:00",
    "UpdatedTimestamp": "2023-10-27T10:57:05.549000-04:00"
}
```

## Update the retention period, enable termination protection, and specify an AWS KMS key with the AWS CLI


The following example AWS CLI **update-event-data-store** command updates an event data store to change its retention period to 100 days and enable termination protection. The required `--event-data-store` parameter value is an ARN (or the ID suffix of the ARN); other parameters are optional. In this example, the `--retention-period` parameter is added to change the retention period to 100 days. Optionally, you can enable AWS Key Management Service encryption and specify an AWS KMS key by adding `--kms-key-id` to the command and specifying a KMS key ARN as the value. The `--termination-protection-enabled` parameter is added to enable termination protection on an event data store that did not have termination protection enabled.

An event data store that logs events from outside AWS cannot be updated to log AWS events. Similarly, an event data store that logs AWS events cannot be updated to log events from outside AWS.

**Note**  
If you decrease the retention period of an event data store, CloudTrail will remove any events with an `eventTime` older than the new retention period. For example, if the previous retention period was 365 days and you decrease it to 100 days, CloudTrail will remove events with an `eventTime` older than 100 days.

```
aws cloudtrail update-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--retention-period 100 \
--kms-key-id "arn:aws:kms:us-east-1:0123456789:alias/KMS_key_alias" \
--termination-protection-enabled
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE",
    "Name": "my-event-data-store",
    "Status": "ENABLED",
    "AdvancedEventSelectors": [
        {
            "Name": "Select all S3 data events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Data"
                    ]
                },
                {
                    "Field": "resources.type",
                    "Equals": [
                        "AWS::S3::Object"
                    ]
                },
                {
                    "Field": "resources.ARN",
                    "StartsWith": [
                        "arn:aws:s3"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 100,
    "KmsKeyId": "arn:aws:kms:us-east-1:0123456789:alias/KMS_key_alias",
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-10-27T10:55:55.384000-04:00",
    "UpdatedTimestamp": "2023-10-27T10:57:05.549000-04:00"
}
```

## Disable termination protection with the AWS CLI


By default, termination protection is enabled on an event data store to protect the event data store from accidental deletion. You cannot delete an event data store when termination protection is enabled. If you want to delete the event data store, you must first disable termination protection.

The following example AWS CLI **update-event-data-store** command disables termination protection by passing the `--no-termination-protection-enabled` parameter.

```
aws cloudtrail update-event-data-store \
--region us-east-1 \
--no-termination-protection-enabled \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE",
    "Name": "management-events-eds",
    "Status": "ENABLED",
    "AdvancedEventSelectors": [
        {
            "Name": "Default management events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Management"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 366,
    "TerminationProtectionEnabled": false,
    "CreatedTimestamp": "2023-10-27T10:55:55.384000-04:00",
    "UpdatedTimestamp": "2023-10-27T10:57:05.549000-04:00"
}
```

# Managing event data stores with the AWS CLI


This section describes several other commands that you can run to get information about your event data stores, start and stop ingestion on an event data store, and enable and disable [federation](query-federation.md) on an event data store.

**Topics**
+ [Get an event data store with the AWS CLI](#lake-cli-get-eds)
+ [List all event data stores in an account with the AWS CLI](#lake-cli-list-eds)
+ [Add resource tag keys and IAM global condition keys, and expand event size](#lake-cli-put-event-configuration)
+ [Get the event configuration for an event data store](#lake-cli-get-event-configuration)
+ [Get the resource-based policy for an event data store with the AWS CLI](#lake-cli-get-resource-policy)
+ [Attach a resource-based policy to an event data store with the AWS CLI](#lake-cli-put-resource-policy)
+ [Delete the resource-based policy attached to an event data store with the AWS CLI](#lake-cli-delete-resource-policy)
+ [Stop ingestion on an event data store with the AWS CLI](#lake-cli-stop-ingestion-eds)
+ [Start ingestion on an event data store with the AWS CLI](#lake-cli-start-ingestion-eds)
+ [Enable federation on an event data store](#lake-cli-enable-federation-eds)
+ [Disable federation on an event data store](#lake-cli-disable-federation-eds)
+ [Restore an event data store with the AWS CLI](#lake-cli-restore-eds)

## Get an event data store with the AWS CLI


The following example AWS CLI **get-event-data-store** command returns information about the event data store specified by the required `--event-data-store` parameter, which accepts an ARN or the ID suffix of the ARN.

```
aws cloudtrail get-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

The following is an example response. Creation and last updated times are in `timestamp` format.

```
{
    "EventDataStoreARN": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE",
    "Name": "s3-data-events-eds",
    "Status": "ENABLED",
    "AdvancedEventSelectors": [
        {
            "Name": "Log DeleteObject API calls for a specific S3 bucket",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Data"
                    ]
                },
                {
                    "Field": "eventName",
                    "Equals": [
                        "DeleteObject"
                    ]
                },
                {
                    "Field": "resources.ARN",
                    "StartsWith": [
                        "arn:aws:s3:::amzn-s3-demo-bucket"
                    ]
                },
                {
                    "Field": "readOnly",
                    "Equals": [
                        "false"
                    ]
                },
                {
                    "Field": "resources.type",
                    "Equals": [
                        "AWS::S3::Object"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": false,
    "BillingMode": "FIXED_RETENTION_PRICING",
    "RetentionPeriod": 2557,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-11-09T22:20:36.344000+00:00",
    "UpdatedTimestamp": "2023-11-09T22:20:36.476000+00:00"
}
```
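When you only need a single field from the response, the AWS CLI's global `--query` (JMESPath) and `--output` options can trim it down. The live call below is commented out because it needs credentials and a real ARN; the `sed` line is an offline sketch of the same extraction against a saved response fragment.

```shell
# Live call (commented out; requires credentials and a real event data store ARN):
# aws cloudtrail get-event-data-store \
#   --event-data-store "$EDS_ARN" \
#   --query 'Status' --output text

# Offline sketch: pull the Status field out of a saved response fragment.
RESPONSE='{"Name": "s3-data-events-eds", "Status": "ENABLED"}'
STATUS=$(printf '%s' "$RESPONSE" | sed -n 's/.*"Status": "\([^"]*\)".*/\1/p')
echo "$STATUS"
```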

## List all event data stores in an account with the AWS CLI


The following example AWS CLI **list-event-data-stores** command returns information about all event data stores in an account in the current Region. The optional `--max-results` parameter specifies the maximum number of results that you want the command to return on a single page. If there are more results than your specified `--max-results` value, run the command again, passing the returned `NextToken` value to the `--next-token` parameter to get the next page of results.

```
aws cloudtrail list-event-data-stores
```

The following is an example response.

```
{
    "EventDataStores": [
        {
            "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE7-cad6-4357-a84b-318f9868e969",
            "Name": "management-events-eds"
        },
        {
            "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE6-88e1-43b7-b066-9c046b4fd47a",
            "Name": "config-items-eds"
        },
        {
            "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEf-b314-4c85-964e-3e43b1e8c3b4",
            "Name": "s3-data-events"
        }
    ]
}
```
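A small sketch of paging through the results: when rendered with `--query NextToken --output text`, the AWS CLI prints the literal string `None` if there are no more pages, so a helper can decide whether to continue. The page size and commented-out commands are illustrative.

```shell
# True when a token returned by `--query NextToken --output text` indicates
# that another page of results is available.
has_next_page() {
  [ -n "$1" ] && [ "$1" != "None" ]
}

# Usage sketch (commented out; requires AWS credentials):
# TOKEN=$(aws cloudtrail list-event-data-stores --max-results 2 \
#   --query NextToken --output text)
# if has_next_page "$TOKEN"; then
#   aws cloudtrail list-event-data-stores --max-results 2 --next-token "$TOKEN"
# fi
```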

## Add resource tag keys and IAM global condition keys, and expand event size


Run the AWS CLI `put-event-configuration` command to expand the maximum event size and add up to 50 resource tag keys and 50 IAM global condition keys to provide additional metadata about your events.

The `put-event-configuration` command accepts the following arguments:
+ `--event-data-store` – Specify the ARN of the event data store or the ID suffix of the ARN. This parameter is required.
+ `--max-event-size` – Set this to `Large` to increase the maximum event size to 1 MB. By default, the value is `Standard`, which specifies a maximum event size of 256 KB.
**Note**  
To add resource tag keys or IAM global condition keys, you must set the event size to `Large` to ensure all of your added keys are included in the event.
+ `--context-key-selectors` – Specify the type of keys you want included in the events collected by your event data store. You can include resource tag keys and IAM global condition keys. Information about the added resource tags and IAM global condition keys is shown in the `eventContext` field in the event. For more information, see [Enrich CloudTrail events by adding resource tag keys and IAM global condition keys](cloudtrail-context-events.md).
  + Set the `Type` to `TagContext` to pass in an array of up to 50 resource tag keys. If you add resource tags, CloudTrail events will include the selected tag keys associated with the resources that were involved in the API call. API events related to deleted resources will not have resource tags.
  + Set the `Type` to `RequestContext` to pass in an array of up to 50 IAM global condition keys. If you add IAM global condition keys, CloudTrail events will include information about the selected condition keys that were evaluated during the authorization process, including additional details about the principal, session, network, and the request itself.

The following example sets the maximum event size to `Large` and adds two resource tag keys `myTagKey1` and `myTagKey2`.

```
aws cloudtrail put-event-configuration \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--max-event-size Large \
--context-key-selectors '[{"Type":"TagContext", "Equals":["myTagKey1","myTagKey2"]}]'
```

The next example sets the maximum event size to `Large` and adds an IAM global condition key (`aws:MultiFactorAuthAge`).

```
aws cloudtrail put-event-configuration \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--max-event-size Large \
--context-key-selectors '[{"Type":"RequestContext", "Equals":["aws:MultiFactorAuthAge"]}]'
```

The final example removes all resource tag keys and IAM global condition keys by passing an empty array, and sets the maximum event size back to `Standard`.

```
aws cloudtrail put-event-configuration \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--max-event-size Standard \
--context-key-selectors '[]'
```

## Get the event configuration for an event data store


Run the AWS CLI `get-event-configuration` command to return the event configuration for an event data store that collects CloudTrail events. This command returns the maximum event size and lists the resource tag keys and IAM global condition keys (if any) that are included in CloudTrail events.

```
aws cloudtrail get-event-configuration \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

## Get the resource-based policy for an event data store with the AWS CLI


The following example runs the `get-resource-policy` command on an organization event data store.

```
aws cloudtrail get-resource-policy --resource-arn arn:aws:cloudtrail:us-east-1:888888888888:eventdatastore/example6-d493-4914-9182-e52a7934b207
```

Because the command was run on an organization event data store, the output will show both the provided resource-based policy and the [`DelegatedAdminResourcePolicy`](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) generated for the delegated administrator accounts.

## Attach a resource-based policy to an event data store with the AWS CLI


To run queries on a dashboard during a manual or scheduled refresh, you need to attach a resource-based policy to every event data store that is associated with a widget on the dashboard. This allows CloudTrail Lake to run the queries on your behalf. For more information about the resource-based policy, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard).

The following example attaches a resource-based policy to an event data store that allows CloudTrail to run queries on a dashboard when the dashboard is refreshed. The policy is created in a separate file, *policy.json*, with the following example policy statement:


```
{ "Version":"2012-10-17",		 	 	  "Statement": [{ "Sid": "EDSPolicy", "Effect":
    "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Resource":
    "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/event_data_store_ID",
    "Action": "cloudtrail:StartQuery", "Condition": { "StringEquals": { "AWS:SourceArn":
    "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE",
    "AWS:SourceAccount": "123456789012" } } } ] }
```


Replace *123456789012* with your account ID, *arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/event\_data\_store\_ID* with the ARN of the event data store for which CloudTrail will run queries, and *arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE* with the ARN of the dashboard.

```
aws cloudtrail put-resource-policy \
--resource-arn eds-arn \
--resource-policy file://policy.json
```

The following is an example response.

```
{ "ResourceArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE", "ResourcePolicy": "policy-statement" }
```

For additional policy examples, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

## Delete the resource-based policy attached to an event data store with the AWS CLI


The following example deletes the resource-based policy attached to an event data store. Replace *eds-arn* with the ARN of the event data store.

```
aws cloudtrail delete-resource-policy --resource-arn eds-arn
```

This command produces no output if it's successful.

## Stop ingestion on an event data store with the AWS CLI


The following example AWS CLI **stop-event-data-store-ingestion** command stops an event data store from ingesting events. To stop ingestion, the event data store `Status` must be `ENABLED` and the `eventCategory` must be `Management`, `Data`, or `ConfigurationItem`. The event data store is specified by `--event-data-store`, which accepts an event data store ARN, or the ID suffix of the ARN. After you run **stop-event-data-store-ingestion**, the state of the event data store changes to `STOPPED_INGESTION`.

The event data store does count towards your account maximum of ten event data stores when its state is `STOPPED_INGESTION`.

```
aws cloudtrail stop-event-data-store-ingestion \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

There is no response if the operation is successful.

## Start ingestion on an event data store with the AWS CLI


The following example AWS CLI **start-event-data-store-ingestion** command starts event ingestion on an event data store. To start ingestion, the event data store `Status` must be `STOPPED_INGESTION` and the `eventCategory` must be `Management`, `Data`, or `ConfigurationItem`. The event data store is specified by `--event-data-store`, which accepts an event data store ARN, or the ID suffix of the ARN. After you run **start-event-data-store-ingestion**, the state of the event data store changes to `ENABLED`.

```
aws cloudtrail start-event-data-store-ingestion --event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

There is no response if the operation is successful.

## Enable federation on an event data store


To enable federation, run the **aws cloudtrail enable-federation** command, providing the required `--event-data-store` and `--role` parameters. For `--event-data-store`, provide the event data store ARN (or the ID suffix of the ARN). For `--role`, provide the ARN for your federation role. The role must exist in your account and provide the [required minimum permissions](query-federation.md#query-federation-permissions-role).

```
aws cloudtrail enable-federation \
--event-data-store arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id \
--role arn:aws:iam::account-id:role/federation-role-name
```

This example shows how a delegated administrator can enable federation on an organization event data store by specifying the ARN of the event data store in the management account and the ARN of the federation role in the delegated administrator account.

```
aws cloudtrail enable-federation \
--event-data-store arn:aws:cloudtrail:region:management-account-id:eventdatastore/eds-id \
--role arn:aws:iam::delegated-administrator-account-id:role/federation-role-name
```

## Disable federation on an event data store


To disable federation on the event data store, run the **aws cloudtrail disable-federation** command. The event data store is specified by `--event-data-store`, which accepts an event data store ARN or the ID suffix of the ARN.

```
aws cloudtrail disable-federation \
--event-data-store arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id
```

**Note**  
If this is an organization event data store, use the account ID for the management account.

## Restore an event data store with the AWS CLI


The following example AWS CLI **restore-event-data-store** command restores an event data store that is pending deletion. The event data store is specified by `--event-data-store`, which accepts an event data store ARN or the ID suffix of the ARN. You can only restore a deleted event data store within the seven-day wait period after deletion.

```
aws cloudtrail restore-event-data-store \
--event-data-store EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

The response includes information about the event data store, including its ARN, advanced event selectors, and the status of restoration.

# Delete an event data store with the AWS CLI


This section demonstrates how to delete an event data store by running the AWS CLI `delete-event-data-store` command.

To delete an event data store, specify the `--event-data-store` by providing the event data store ARN, or the ID suffix of the ARN. After you run **delete-event-data-store**, the final state of the event data store is `PENDING_DELETION`, and the event data store is automatically deleted after a wait period of 7 days.

After you run **delete-event-data-store** on an event data store, you cannot run **list-queries**, **describe-query**, or **get-query-results** on queries that use that event data store. The event data store still counts toward your account maximum of ten event data stores in an AWS Region while it is pending deletion.

**Note**  
You can't delete an event data store if `--termination-protection-enabled` is set or its `FederationStatus` is `ENABLED`.  
To delete an event data store with an `eventCategory` of `ActivityAuditLog`, you must first delete the integration's channel. You can delete the channel by using the `aws cloudtrail delete-channel` command. For more information, see [Delete a channel to delete an integration with the AWS CLI](lake-cli-delete-integration.md).  


```
aws cloudtrail delete-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

There is no response if the operation is successful.
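
If the delete fails because termination protection is still on, you can turn it off first with the **aws cloudtrail update-event-data-store** command. A minimal sketch, reusing the example event data store ARN from above:

```
aws cloudtrail update-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--no-termination-protection-enabled
```

If federation is enabled on the event data store, also run **disable-federation** (shown earlier) before retrying the delete.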

# Manage event data store lifecycles


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

The following are the lifecycle stages of an event data store:
+ `CREATED` – A short-term state indicating that the event data store has been created.
+ `ENABLED` – The event data store is active and ingesting events. You can run queries and copy trail events to the event data store.
+ `STARTING_INGESTION` – A short-term state indicating that the event data store will start ingesting live events.
+ `STOPPING_INGESTION` – A short-term state indicating that the event data store will stop ingesting live events.
+ `STOPPED_INGESTION` – The event data store is not ingesting live events. You can still run queries on any events already in the event data store and copy trail events to the event data store.
+ `PENDING_DELETION` – The event data store was in an `ENABLED` or `STOPPED_INGESTION` state and has been deleted but is within the 7-day wait period before permanent deletion. You cannot run queries on the event data store, and no operations can be performed on the event data store except restoration.
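
You can check which lifecycle state an event data store is currently in by running the **aws cloudtrail get-event-data-store** command and inspecting the `Status` field in the response. A sketch, using a placeholder ARN:

```
aws cloudtrail get-event-data-store \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```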

You can only delete an event data store if both federation and termination protection are disabled. *Termination protection* prevents an event data store from being accidentally deleted. By default, termination protection is enabled on an event data store. [Federation](query-federation.md) lets you query your event data store data in Athena and is disabled by default.

After you delete an event data store, it remains in the `PENDING_DELETION` state for 7 days before it is permanently deleted. You can restore an event data store during the 7-day wait period. While in the `PENDING_DELETION` state, an event data store is not available for queries, and no other operations can be performed on the event data store except restore operations. An event data store that is pending deletion does not ingest events and does not incur costs. However, event data stores that are pending deletion count toward the quota of event data stores that can exist in one AWS Region.

**Actions available on event data stores**

To [delete](query-event-data-store-delete.md) or [restore](query-eds-restore.md) an event data store, [copy trail events](cloudtrail-copy-trail-to-lake-eds.md), start or stop ingesting events, or turn on or turn off an event data store's termination protection, use commands on the **Actions** menu of the event data store's details page.

![\[Event data store Actions menu.\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-eds-actions.png)


The option to **Copy trail events** is only available on event data stores that contain CloudTrail events. The options to **Start ingestion** and **Stop ingestion** are only available on event data stores containing either CloudTrail events (management and data events), or AWS Config configuration items.

# Copy trail events to an event data store


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can copy trail events to a CloudTrail Lake event data store to create a point-in-time snapshot of events logged to the trail. Copying a trail's events does not interfere with the trail's ability to log events and does not modify the trail in any way.

You can copy trail events to an existing event data store configured for CloudTrail events, or you can create a new CloudTrail event data store and choose the **Copy trail events** option as part of event data store creation. For more information about copying trail events to an existing event data store, see [Copy trail events to an existing event data store with the console](cloudtrail-copy-trail-events-lake.md). For more information about creating a new event data store, see [Create an event data store for CloudTrail events with the console](query-event-data-store-cloudtrail.md). 

If you are copying trail events to an organization event data store, you must use the management account for the organization. You cannot copy trail events using the delegated administrator account for an organization.

CloudTrail Lake event data stores incur charges. When you create an event data store, you choose the [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option) you want to use for the event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention period for the event data store. For information about CloudTrail pricing and managing Lake costs, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

When you copy trail events to a CloudTrail Lake event data store, you incur charges based on the amount of uncompressed data the event data store ingests.

When you copy trail events to CloudTrail Lake, CloudTrail unzips the logs that are stored in gzip (compressed) format and then copies the events contained in the logs to your event data store. The size of the uncompressed data could be greater than the actual S3 storage size. To get a general estimate of the size of the uncompressed data, you can multiply the size of the logs in the S3 bucket by 10.
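
As a worked example of that estimate (the compressed size here is hypothetical):

```
# Rough Lake ingestion estimate: uncompressed size is about 10x the gzip size in S3.
compressed_gib=5                                # hypothetical size of gzip logs in S3
est_uncompressed_gib=$((compressed_gib * 10))
echo "Estimated ingestion: ${est_uncompressed_gib} GiB"
```

For 5 GiB of compressed logs, this prints an estimate of 50 GiB of ingested data.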

You can reduce costs by specifying a narrower time range for the copied events. If you are planning to only use the event data store to query your copied events, you can turn off event ingestion to avoid incurring charges on future events. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md).

**Scenarios**

The following table describes some common scenarios for copying trail events and how you accomplish each scenario using the console.


| Scenario | How do I accomplish this in the console? | 
| --- | --- | 
|  Analyze and query historical trail events in CloudTrail Lake without ingesting new events  |  Create a [new event data store](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-cloudtrail.html#query-event-data-store-cloudtrail-procedure) and choose the **Copy trail events** option as part of event data store creation. When creating the event data store, deselect **Ingest events** (step 15 of the procedure) to ensure the event data store contains only the historical events for your trail and no future events.  | 
|  Replace your existing trail with a CloudTrail Lake event data store  |  Create an event data store with the same event selectors as your trail to ensure that the event data store has the same coverage as your trail.  To avoid duplicating events between the source trail and destination event data store, choose a date range for the copied events that is earlier than the creation of the event data store. After your event data store is created, you can turn off logging for the trail to avoid additional charges.  | 

**Topics**
+ [Considerations for copying trail events](#cloudtrail-trail-copy-considerations-lake)
+ [Required permissions for copying trail events](#copy-trail-events-permissions)
+ [Copy trail events to an existing event data store with the console](cloudtrail-copy-trail-events-lake.md)
+ [Copy trail events to a new event data store with the console](scenario-lake-import.md)
+ [View event copy details with the CloudTrail console](copy-trail-details.md)

## Considerations for copying trail events


Consider the following factors when copying trail events.
+  When copying trail events, CloudTrail uses the S3 [GetObject](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html) API operation to retrieve the trail events in the source S3 bucket. Some S3 archival storage classes, such as S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, S3 Outposts, and the S3 Intelligent-Tiering Deep Archive tiers, are not accessible by using `GetObject`. To copy trail events stored in these archived storage classes, you must first restore a copy using the S3 `RestoreObject` operation. For information about restoring archived objects, see [Restoring Archived Objects](https://docs.aws.amazon.com/AmazonS3/latest/userguide/restoring-objects.html) in the *Amazon S3 User Guide*. 
+  When you copy trail events to an event data store, CloudTrail copies all trail events regardless of the configuration of the destination event data store's event types, advanced event selectors, or AWS Region. 
+  Before copying trail events to an existing event data store, be sure the event data store's pricing option and retention period are configured appropriately for your use case. 
  + **Pricing option:** The pricing option determines the cost for ingesting and storing events. For more information about pricing options, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Event data store pricing options](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option).
  + **Retention period:** The retention period determines how long event data is kept in the event data store. CloudTrail only copies trail events that have an `eventTime` within the event data store’s retention period. To determine the appropriate retention period, take the sum of the oldest event you want to copy in days and the number of days you want to retain the events in the event data store (**retention period** = *oldest-event-in-days* + *number-days-to-retain*). For example, if the oldest event you're copying is 45 days old and you want to keep the events in the event data store for a further 45 days, you would set the retention period to 90 days. 
+ If you are copying trail events to an event data store for investigation and do not want to ingest any future events, you can stop ingestion on the event data store. When creating the event data store, deselect the **Ingest events** option (step 15 of the [procedure](query-event-data-store-cloudtrail.md#query-event-data-store-cloudtrail-procedure)) to ensure the event data store contains only the historical events for your trail and no future events.
+  Before copying trail events, disable any access control lists (ACLs) attached to the source S3 bucket, and update the S3 bucket policy for the destination event data store. For more information about updating the S3 bucket policy, see [Amazon S3 bucket policy for copying trail events](cloudtrail-copy-trail-to-lake.md#cloudtrail-copy-trail-events-permissions-s3). For more information about disabling ACLs, see [ Controlling ownership of objects and disabling ACLs for your bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html). 
+  CloudTrail only copies trail events from Gzip compressed log files that are in the source S3 bucket. CloudTrail does not copy trail events from uncompressed log files, or log files that were compressed using a format other than Gzip. 
+  To avoid duplicating events between the source trail and destination event data store, choose a time range for the copied events that is earlier than the creation of the event data store. 
+  By default, CloudTrail only copies CloudTrail events contained in the S3 bucket's `CloudTrail` prefix and the prefixes inside the `CloudTrail` prefix, and does not check prefixes for other AWS services. If you want to copy CloudTrail events contained in another prefix, you must choose the prefix when you copy trail events. 
+  To copy trail events to an organization event data store, you must use the management account for the organization. You cannot use the delegated administrator account to copy trail events to an organization event data store. 
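
The retention-period arithmetic from the considerations above can be sketched as a quick calculation (the day counts are the example values from the text):

```
# retention period = oldest event age (days) + days to retain after the copy
oldest_event_days=45
days_to_retain=45
retention_period=$((oldest_event_days + days_to_retain))
echo "Set the retention period to at least ${retention_period} days"
```

With the example values, this prints a retention period of 90 days.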

## Required permissions for copying trail events


Before copying trail events, ensure you have all the required permissions for your IAM role. You only need to update the IAM role permissions if you choose an existing IAM role to copy trail events. If you choose to create a new IAM role, CloudTrail provides all necessary permissions for the role.

If the source S3 bucket uses a KMS key for data encryption, ensure that the KMS key policy allows CloudTrail to decrypt data in the bucket. If the source S3 bucket uses multiple KMS keys, you must update each key's policy to allow CloudTrail to decrypt data in the bucket.

**Topics**
+ [IAM permissions for copying trail events](#copy-trail-events-permissions-iam)
+ [Amazon S3 bucket policy for copying trail events](#copy-trail-events-permissions-s3)
+ [KMS key policy for decrypting data in the source S3 bucket](#copy-trail-events-permissions-kms)

### IAM permissions for copying trail events


When copying trail events, you have the option to create a new IAM role, or use an existing IAM role. When you choose a new IAM role, CloudTrail creates an IAM role with the required permissions and no further action is required on your part.

If you choose an existing role, ensure the IAM role's policies allow CloudTrail to copy trail events from the source S3 bucket. This section provides examples of the required IAM permissions policy and trust policy.

The following example provides the permissions policy, which allows CloudTrail to copy trail events from the source S3 bucket. Replace *amzn-s3-demo-bucket*, *myAccountID*, *region*, *prefix*, and *eventDataStoreId* with the appropriate values for your configuration. The *myAccountID* is the AWS account ID used for CloudTrail Lake, which may not be the same as the AWS account ID for the S3 bucket.

Replace *key-region*, *keyAccountID*, and *keyID* with the values for the KMS key used to encrypt the source S3 bucket. You can omit the `AWSCloudTrailImportKeyAccess` statement if the source S3 bucket does not use a KMS key for encryption.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Sid": "AWSCloudTrailImportBucketAccess",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketAcl"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "myAccountID",
          "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:eventdatastore/eventDataStoreId"
         }
       }
    },
    {
      "Sid": "AWSCloudTrailImportObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": [
        "arn:aws:s3:::amzn-s3-demo-bucket/prefix",
        "arn:aws:s3:::amzn-s3-demo-bucket/prefix/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "myAccountID",
          "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:eventdatastore/eventDataStoreId"
         }
       }
    },
    {
      "Sid": "AWSCloudTrailImportKeyAccess",
      "Effect": "Allow",
      "Action": ["kms:GenerateDataKey","kms:Decrypt"],
      "Resource": [
        "arn:aws:kms:key-region:keyAccountID:key/keyID"
      ]
    }
  ]
}
```

The following example provides the IAM trust policy, which allows CloudTrail to assume an IAM role to copy trail events from the source S3 bucket. Replace *myAccountID*, *region*, and *eventDataStoreId* with the appropriate values for your configuration. The *myAccountID* is the AWS account ID used for CloudTrail Lake, which may not be the same as the AWS account ID for the S3 bucket.

```
{
  "Version": "2012-10-17",		 	 	 
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudtrail.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "myAccountID",
          "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:eventdatastore/eventDataStoreId"
        }
      }
    }
  ]
}
```
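
If you want to set up the role with the AWS CLI instead of the console, a sketch like the following could combine the two policies above. The role name, policy name, and file names are hypothetical; save the permissions policy and trust policy shown above as `permissions-policy.json` and `trust-policy.json` first:

```
aws iam create-role \
--role-name CloudTrailImportRole \
--assume-role-policy-document file://trust-policy.json

aws iam put-role-policy \
--role-name CloudTrailImportRole \
--policy-name CloudTrailImportPermissions \
--policy-document file://permissions-policy.json
```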

### Amazon S3 bucket policy for copying trail events


By default, Amazon S3 buckets and objects are private. Only the resource owner (the AWS account that created the bucket) can access the bucket and objects it contains. The resource owner can grant access permissions to other resources and users by writing an access policy.

Before you copy trail events, you must update the S3 bucket policy to allow CloudTrail to copy trail events from the source S3 bucket.

You can add the following statement to the S3 bucket policy to grant these permissions. Replace *roleArn* and *amzn-s3-demo-bucket* with the appropriate values for your configuration.


```
{
  "Sid": "AWSCloudTrailImportBucketAccess",
  "Effect": "Allow",
  "Action": [
    "s3:ListBucket",
    "s3:GetBucketAcl",
    "s3:GetObject"
  ],
  "Principal": {
    "AWS": "roleArn"
  },
  "Resource": [
    "arn:aws:s3:::amzn-s3-demo-bucket",
    "arn:aws:s3:::amzn-s3-demo-bucket/*"
  ]
}
```

### KMS key policy for decrypting data in the source S3 bucket


If the source S3 bucket uses a KMS key for data encryption, ensure the KMS key policy provides CloudTrail with the `kms:Decrypt` and `kms:GenerateDataKey` permissions required to copy trail events from an S3 bucket with SSE-KMS encryption enabled. If your source S3 bucket uses multiple KMS keys, you must update each key's policy. Updating the KMS key policy allows CloudTrail to decrypt data in the source S3 bucket, run validation checks to ensure that events conform to CloudTrail standards, and copy events into the CloudTrail Lake event data store. 

The following example provides the KMS key policy, which allows CloudTrail to decrypt the data in the source S3 bucket. Replace *roleArn*, *amzn-s3-demo-bucket*, *myAccountID*, *region*, and *eventDataStoreId* with the appropriate values for your configuration. The *myAccountID* is the AWS account ID used for CloudTrail Lake, which may not be the same as the AWS account ID for the S3 bucket.

```
{
  "Sid": "AWSCloudTrailImportDecrypt",
  "Effect": "Allow",
  "Action": [
          "kms:Decrypt",
          "kms:GenerateDataKey"
  ],
  "Principal": {
    "AWS": "roleArn"
  },
  "Resource": "*",
  "Condition": {
    "StringLike": {
      "kms:EncryptionContext:aws:s3:arn": "arn:aws:s3:::amzn-s3-demo-bucket/*"
    },
    "StringEquals": {
      "aws:SourceAccount": "myAccountID",
      "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:eventdatastore/eventDataStoreId"
    }
  }
}
```

# Copy trail events to an existing event data store with the console

Use the following procedure to copy trail events to an existing event data store. For information about how to create a new event data store, see [Create an event data store for CloudTrail events with the console](query-event-data-store-cloudtrail.md).

**Note**  
 Before copying trail events to an existing event data store, be sure the event data store's pricing option and retention period are configured appropriately for your use case.   
**Pricing option:** The pricing option determines the cost for ingesting and storing events. For more information about pricing options, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Event data store pricing options](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option).
**Retention period:** The retention period determines how long event data is kept in the event data store. CloudTrail only copies trail events that have an `eventTime` within the event data store’s retention period. To determine the appropriate retention period, take the sum of the oldest event you want to copy in days and the number of days you want to retain the events in the event data store (**retention period** = *oldest-event-in-days* + *number-days-to-retain*). For example, if the oldest event you're copying is 45 days old and you want to keep the events in the event data store for a further 45 days, you would set the retention period to 90 days. 

**To copy trail events to an event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose **Copy trail events**.

1. On the **Copy trail events** page, for **Event source**, choose the trail that you want to copy. By default, CloudTrail only copies CloudTrail events contained in the S3 bucket's `CloudTrail` prefix and the prefixes inside the `CloudTrail` prefix, and does not check prefixes for other AWS services. If you want to copy CloudTrail events contained in another prefix, choose **Enter S3 URI**, and then choose **Browse S3** to browse to the prefix.

   If the source S3 bucket for the trail uses a KMS key for data encryption, ensure that the KMS key policy allows CloudTrail to decrypt the data. If your source S3 bucket uses multiple KMS keys, you must update each key's policy to allow CloudTrail to decrypt the data in the bucket. For more information about updating the KMS key policy, see [KMS key policy for decrypting data in the source S3 bucket](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-kms).

   The S3 bucket policy must grant CloudTrail access to copy trail events from your S3 bucket. For more information about updating the S3 bucket policy, see [Amazon S3 bucket policy for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-s3).

1. For **Specify a time range of events**, choose the time range for copying the events. CloudTrail checks the prefix and log file name to verify the name contains a date between the chosen start and end date before attempting to copy trail events. You can choose a **Relative range** or an **Absolute range**. To avoid duplicating events between the source trail and destination event data store, choose a time range that is earlier than the creation of the event data store.
**Note**  
CloudTrail only copies trail events that have an `eventTime` within the event data store’s retention period. For example, if an event data store’s retention period is 90 days, then CloudTrail will not copy any trail events with an `eventTime` older than 90 days.
   + If you choose **Relative range**, you can choose to copy events logged in the last 6 months, 1 year, 2 years, 7 years, or a custom range. CloudTrail copies the events logged within the chosen time period.
   + If you choose **Absolute range**, you can choose a specific start and end date. CloudTrail copies the events that occurred between the chosen start and end dates.

1. For **Delivery location**, choose the destination event data store from the drop-down list.

1. For **Permissions**, choose from the following IAM role options. If you choose an existing IAM role, verify that the IAM role policy provides the necessary permissions. For more information about updating the IAM role permissions, see [IAM permissions for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-iam).
   + Choose **Create a new role (recommended)** to create a new IAM role. For **Enter IAM role name**, enter a name for the role. CloudTrail automatically creates the necessary permissions for this new role.
   + Choose **Use a custom IAM role ARN** to use a custom IAM role that is not listed. For **Enter IAM role ARN**, enter the IAM ARN.
   + Choose an existing IAM role from the drop-down list.

1. Choose **Copy events**.

1. You are prompted to confirm. When you are ready to confirm, choose **Copy trail events to Lake**, and then choose **Copy events**.

1. On the **Copy details** page, you can see the copy status and review any failures. When a trail event copy completes, its **Copy status** is set to either **Completed** if there were no errors, or **Failed** if errors occurred.
**Note**  
Details shown on the event copy details page are not in real-time. The actual values for details such as **Prefixes copied** may be higher than what is shown on the page. CloudTrail updates the details incrementally over the course of the event copy.

1. If the **Copy status** is **Failed**, fix any errors shown in **Copy failures**, and then choose **Retry copy**. When you retry a copy, CloudTrail resumes the copy at the location where the failure occurred. 

For more information about viewing the details of a trail event copy, see [View event copy details with the CloudTrail console](copy-trail-details.md).
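
The console procedure above maps to the CloudTrail `StartImport` API. If you prefer the AWS CLI, a sketch like the following could start the same copy; the event data store ARN, bucket URI, role ARN, and time range are placeholders for your own values:

```
aws cloudtrail start-import \
--destinations arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--import-source S3={S3LocationUri=s3://amzn-s3-demo-bucket/AWSLogs/123456789012/CloudTrail/,S3BucketRegion=us-east-1,S3BucketAccessRoleArn=arn:aws:iam::123456789012:role/CloudTrailImportRole} \
--start-event-time 2024-01-01T00:00:00Z \
--end-event-time 2024-06-30T23:59:59Z
```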

# Copy trail events to a new event data store with the console

This walkthrough shows you how to copy trail events to a new CloudTrail Lake event data store for historical analysis. For more information about copying trail events, see [Copy trail events to an event data store](cloudtrail-copy-trail-to-lake-eds.md).

**To copy trail events to a new event data store**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose **Create event data store**.

1. On the **Configure event data store** page, in **General details**, give your event data store a name, such as *my-management-events-eds*. As a best practice, use a name that quickly identifies the purpose of the event data store. For information about CloudTrail naming requirements, see [Naming requirements for CloudTrail resources, S3 buckets, and KMS keys](cloudtrail-trail-naming-requirements.md).

1. Choose the **Pricing option** that you want to use for your event data store. The pricing option determines the cost for ingesting and storing events, and the default and maximum retention periods for your event data store. For more information, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/) and [Managing CloudTrail Lake costs](cloudtrail-lake-manage-costs.md). 

   The following are the available options:
   + **One-year extendable retention pricing** - Generally recommended if you expect to ingest less than 25 TB of event data per month and want a flexible retention period of up to 10 years. For the first 366 days (the default retention period), storage is included at no additional charge with ingestion pricing. After 366 days, extended retention is available at pay-as-you-go pricing. This is the default option.
     + **Default retention period:** 366 days
     + **Maximum retention period:** 3,653 days
   + **Seven-year retention pricing** - Recommended if you expect to ingest more than 25 TB of event data per month and need a retention period of up to 7 years. Retention is included with ingestion pricing at no additional charge.
     + **Default retention period:** 2,557 days
     + **Maximum retention period:** 2,557 days

1. Specify a retention period for the event data store. Retention periods can be between 7 days and 3,653 days (about 10 years) for the **One-year extendable retention pricing** option, or between 7 days and 2,557 days (about seven years) for the **Seven-year retention pricing** option.

    CloudTrail Lake determines whether to retain an event by checking if the `eventTime` of the event is within the specified retention period. For example, if you specify a retention period of 90 days, CloudTrail will remove events when their `eventTime` is older than 90 days. 
**Note**  
CloudTrail will not copy an event if its `eventTime` is older than the specified retention period.   
To determine the appropriate retention period, take the sum of the oldest event you want to copy in days and the number of days you want to retain the events in the event data store (**retention period** = *oldest-event-in-days* + *number-days-to-retain*). For example, if the oldest event you're copying is 45 days old and you want to keep the events in the event data store for a further 45 days, you would set the retention period to 90 days.

1. (Optional) In **Encryption**, choose whether you want to encrypt the event data store using your own KMS key. By default, all events in an event data store are encrypted by CloudTrail using a KMS key that AWS owns and manages for you.

   To enable encryption using your own KMS key, choose **Use my own AWS KMS key**. Choose **New** to have an AWS KMS key created for you, or choose **Existing** to use an existing KMS key. In **Enter KMS alias**, specify an alias, in the format `alias/`*MyAliasName*. Using your own KMS key requires that you edit your KMS key policy to allow CloudTrail logs to be encrypted and decrypted. For more information, see [Configure AWS KMS key policies for CloudTrail](create-kms-key-policy-for-cloudtrail.md). CloudTrail also supports AWS KMS multi-Region keys. For more information about multi-Region keys, see [Using multi-Region keys](https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) in the *AWS Key Management Service Developer Guide*.

   Using your own KMS key incurs AWS KMS costs for encryption and decryption. After you associate an event data store with a KMS key, the KMS key cannot be removed or changed.
**Note**  
To enable AWS Key Management Service encryption for an organization event data store, you must use an existing KMS key for the management account.

1. (Optional) If you want to query against your event data using Amazon Athena, choose **Enable** in **Lake query federation**. Federation lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro) and run SQL queries against the event data in Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. For more information, see [Federate an event data store](query-federation.md).

   To enable Lake query federation, choose **Enable** and then do the following:

   1. Choose whether you want to create a new role or use an existing IAM role. [AWS Lake Formation](https://docs.aws.amazon.com/lake-formation/latest/dg/how-it-works.html) uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates a role with the required permissions. If you choose an existing role, be sure the policy for the role provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

   1. If you are creating a new role, enter a name to identify the role.

   1. If you are using an existing role, choose the role you want to use. The role must exist in your account.

1. (Optional) Choose **Enable resource policy** to add a resource-based policy to your event data store. Resource-based policies allow you to control which principals can perform actions on your event data store. For example, you can add a resource-based policy that allows the root users in other accounts to query this event data store and view the query results. For example policies, see [Resource-based policy examples for event data stores](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds).

   A resource-based policy includes one or more statements. Each statement in the policy defines the [principals](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html) that are allowed or denied access to the event data store and the actions the principals can perform on the event data store resource.

   The following actions are supported in resource-based policies for event data stores:
   +  `cloudtrail:StartQuery` 
   +  `cloudtrail:CancelQuery` 
   +  `cloudtrail:ListQueries` 
   +  `cloudtrail:DescribeQuery` 
   +  `cloudtrail:GetQueryResults` 
   +  `cloudtrail:GenerateQuery` 
   +  `cloudtrail:GenerateQueryResultsSummary` 
   +  `cloudtrail:GetEventDataStore` 

   For [organization event data stores](cloudtrail-lake-organizations.md), CloudTrail creates a [default resource-based policy](cloudtrail-lake-organizations.md#cloudtrail-lake-organizations-eds-rbp) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in this policy are derived from the delegated administrator permissions in AWS Organizations. This policy is updated automatically following changes to the organization event data store or to the organization (for example, a CloudTrail delegated administrator account is registered or removed).
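As an illustration, the following sketch is a resource-based policy that lets the root user of a hypothetical account `444455556666` run queries against the event data store and read the results. The account IDs and *eds-id* are placeholder values; see the linked examples topic for complete, authoritative policies.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountQueryAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:root"
            },
            "Action": [
                "cloudtrail:StartQuery",
                "cloudtrail:GetQueryResults"
            ],
            "Resource": "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
        }
    ]
}
```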

1. (Optional) In **Tags**, add one or more custom tags (key-value pairs) to your event data store. Tags can help you identify your CloudTrail event data stores. For example, you could attach a tag with the name **stage** and the value **prod**. You can use tags to limit access to your event data store. You can also use tags to track the query and ingestion costs for your event data store.

   For information about how to use tags to track costs, see [Creating user-defined cost allocation tags for CloudTrail Lake event data stores](cloudtrail-budgets-tools.md#cloudtrail-lake-manage-costs-tags). For information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For information about how you can use tags in AWS, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.
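You can also attach tags from the AWS CLI with the **aws cloudtrail add-tags** command. The following sketch assumes a placeholder event data store ARN; substitute your own ARN.

```
aws cloudtrail add-tags
--resource-id arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id
--tags-list Key=stage,Value=prod
```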

1.  Choose **Next** to configure the event data store. 

1.  On the **Choose events** page, leave the default selections for **Event type**.  
![\[Choose event type for the event data store\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/lake-event-type.png)

1. For **CloudTrail events**, we'll leave **Management events** selected and choose **Copy trail events**. In this example, we're not concerned about the event types because we are only using the event data store to analyze past events and are not ingesting future events. 

   If you're creating an event data store to replace an existing trail, choose the same event selectors as your trail to ensure the event data store has the same event coverage.  
![\[Choose CloudTrail events types for your event data store\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/cloudtrail-events-copy-trail.png)

1. Choose **Enable for all accounts in my organization** if this is an organization event data store. This option is available only if you have accounts configured in AWS Organizations.
**Note**  
If you are creating an organization event data store, you must be signed in with the management account for the organization because only the management account can copy trail events to an organization event data store.

1.  For **Additional settings**, we'll deselect **Ingest events**, because in this example we don't want the event data store to ingest any future events as we're only interested in querying the copied events. By default, an event data store collects events for all AWS Regions and starts ingesting events when it's created.

1. For **Management events**, we'll leave the default settings.

1. In the **Copy trail events** area, complete the following steps.

   1. Choose the trail that you want to copy. In this example, we'll choose a trail named *management-events*.

      By default, CloudTrail only copies CloudTrail events contained in the S3 bucket's `CloudTrail` prefix and the prefixes inside the `CloudTrail` prefix, and does not check prefixes for other AWS services. If you want to copy CloudTrail events contained in another prefix, choose **Enter S3 URI**, and then choose **Browse S3** to browse to the prefix. If the source S3 bucket for the trail uses a KMS key for data encryption, ensure that the KMS key policy allows CloudTrail to decrypt the data. If your source S3 bucket uses multiple KMS keys, you must update each key's policy to allow CloudTrail to decrypt the data in the bucket. For more information about updating the KMS key policy, see [KMS key policy for decrypting data in the source S3 bucket](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-kms).

   1. Choose a time range for copying the events. CloudTrail checks the prefix and log file name to verify the name contains a date between the chosen start and end date before attempting to copy trail events. You can choose a **Relative range** or an **Absolute range**. To avoid duplicating events between the source trail and destination event data store, choose a time range that is earlier than the creation of the event data store.
      + If you choose **Relative range**, you can choose to copy events logged in the last 6 months, 1 year, 2 years, 7 years, or a custom range. CloudTrail copies the events logged within the chosen time period.
      + If you choose **Absolute range**, you can choose a specific start and end date. CloudTrail copies the events that occurred between the chosen start and end dates.

      In this example, we'll choose **Absolute range** and we'll select the entire month of May.  
![\[Choose absolute range for event data store\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/absolute-range-example.png)

   1. For **Permissions**, choose from the following IAM role options. If you choose an existing IAM role, verify that the IAM role policy provides the necessary permissions. For more information about updating the IAM role permissions, see [IAM permissions for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions-iam).
      + Choose **Create a new role (recommended)** to create a new IAM role. For **Enter IAM role name**, enter a name for the role. CloudTrail automatically creates the necessary permissions for this new role.
      + Choose **Use a custom IAM role ARN** to use a custom IAM role that is not listed. For **Enter IAM role ARN**, enter the IAM ARN.
      + Choose an existing IAM role from the drop-down list.

      In this example, we'll choose **Create a new role (recommended)** and will provide the name **copy-trail-events**.  
![\[Choose options for copying CloudTrail events\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/copy-trail-events.png)

1. Choose **Next** to review your choices.

1. On the **Review and create** page, review your choices. Choose **Edit** to make changes to a section. When you're ready to create the event data store, choose **Create event data store**.

1. The new event data store is visible in the **Event data stores** table on the **Event data stores** page.  
![\[View event data stores\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/event-data-stores-table.png)

1. Choose the event data store name to view its details page. The details page shows the details for your event data store and the status of the copy. The event copy status is shown in the **Event copy status** area.

   When a trail event copy completes, its **Copy status** is set to either **Completed** if there were no errors, or **Failed** if errors occurred.  
![\[View the event copy status on the details page\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/event-copy-status.png)

1. To view more details about the copy, choose the copy name in the **Event log S3 location** column, or choose the **View details** option from the **Actions** menu. For more information about viewing the details of a trail event copy, see [View event copy details with the CloudTrail console](copy-trail-details.md).  
![\[View event copy details\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/copy-details.png)

1.  The **Copy failures** area shows any errors that occurred when copying trail events. If the **Copy status** is **Failed**, fix any errors shown in **Copy failures**, and then choose **Retry copy**. When you retry a copy, CloudTrail resumes the copy at the location where the failure occurred. 
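The console steps above correspond to the **aws cloudtrail start-import** AWS CLI command. The following sketch copies events logged during May 2024; the bucket name, role name, Region, and event data store ID are placeholder values for illustration.

```
aws cloudtrail start-import
--destinations arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id
--start-event-time 2024-05-01T00:00:00Z
--end-event-time 2024-05-31T23:59:59Z
--import-source S3={S3LocationUri=s3://amzn-s3-demo-bucket/CloudTrail/,S3BucketRegion=us-east-1,S3BucketAccessRoleArn=arn:aws:iam::111111111111:role/copy-trail-events}
```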

# View event copy details with the CloudTrail console

After a trail event copy starts, you can view the event copy details, including the status of the copy and information about any copy failures.

**Note**  
Details shown on the event copy details page are not in real-time. The actual values for details such as **Prefixes copied** may be higher than what is shown on the page. CloudTrail updates the details incrementally over the course of the event copy.

**To access the event copy details page**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the left navigation pane, under **Lake**, choose **Event data stores**. 

1. Choose the event data store.

1. Choose the event copy in the **Event copy status** section.

## Copy details


From **Copy details**, you can view the following details about the trail event copy.
+ **Event log S3 location** - The location of the source S3 bucket containing the trail event log files.
+ **Copy ID** - The ID for the copy.
+ **Prefixes copied** - Represents the number of S3 prefixes copied. During a trail event copy, CloudTrail copies the events in the trail log files that are stored in the prefixes.
+ **Copy status** - The status of the copy.
  + **Initializing** - Initial status shown when the trail event copy starts.
  + **In progress** - Indicates the trail event copy is in progress.
**Note**  
You cannot copy trail events if another trail event copy is **In progress**. To stop a trail event copy, choose **Stop copy**.
  + **Stopped** - Indicates a **Stop copy** action occurred. To retry a trail event copy, choose **Retry copy**.
  + **Failed** - The copy completed, but some trail events failed to copy. Review the error messages in **Copy failures**. To retry a trail event copy, choose **Retry copy**. When you retry a copy, CloudTrail resumes the copy at the location where the failure occurred.
  + **Completed** - The copy completed without errors. You can query the copied trail events in the event data store.
+ **Created time** - Indicates when the trail event copy started.
+ **Finish time** - Indicates when the trail event copy completed or stopped.
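You can also retrieve these details from the AWS CLI with the **aws cloudtrail get-import** command, providing the import ID for the copy:

```
aws cloudtrail get-import
--import-id import-id
```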

## Copy failures


 From **Copy failures**, you can review the error location, error message, and error type for each copy failure. Common reasons for failure include an S3 prefix that contains an uncompressed file, or a file delivered by a service other than CloudTrail. Failures can also result from access issues. For example, if the event data store's S3 bucket did not grant CloudTrail access to import the events, you would get an `AccessDenied` error.

For each copy failure, review the following error information.
+  **Error location** - Indicates the location in the S3 bucket where the error occurred. If an error occurred because the source S3 bucket contained an uncompressed file, the **Error location** includes the prefix where you can find that file. 
+  **Error message** - Provides an explanation for why the error occurred. 
+  **Error type** - Provides the error type. For example, an **Error type** of `AccessDenied` indicates that the error occurred because of a permissions issue. For more information about the required permissions for copying trail events, see [Required permissions for copying trail events](cloudtrail-copy-trail-to-lake-eds.md#copy-trail-events-permissions). 

After resolving any failures, choose **Retry copy**. When you retry a copy, CloudTrail resumes the copy at the location where the failure occurred. 
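From the AWS CLI, you can list the same failure information with the **aws cloudtrail list-import-failures** command, providing the import ID for the copy:

```
aws cloudtrail list-import-failures
--import-id import-id
```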

# Federate an event data store


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

Federating an event data store lets you view the metadata associated with the event data store in the AWS Glue [Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/components-overview.html#data-catalog-intro), registers the Data Catalog with AWS Lake Formation, and lets you run SQL queries against your event data using Amazon Athena. The table metadata stored in the AWS Glue Data Catalog lets the Athena query engine know how to find, read, and process the data that you want to query. 

You can enable federation by using the CloudTrail console, AWS CLI, or [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_EnableFederation.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_EnableFederation.html) API operation. When you enable Lake query federation, CloudTrail creates a managed database named `aws:cloudtrail` (if the database doesn't already exist) and a managed federated table in the AWS Glue Data Catalog. The event data store ID is used for the table name. CloudTrail registers the federation role ARN and event data store in [AWS Lake Formation](query-federation-lake-formation.md), the service responsible for allowing fine-grained access control of the federated resources in the AWS Glue Data Catalog.

To enable Lake query federation, you must create a new IAM role or choose an existing role. Lake Formation uses this role to manage permissions for the federated event data store. When you create a new role using the CloudTrail console, CloudTrail automatically creates the required permissions for the role. If you choose an existing role, be sure that the role provides the [minimum permissions](#query-federation-permissions-role).

You can disable federation by using the CloudTrail console, AWS CLI, or [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_DisableFederation.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_DisableFederation.html) API operation. When you disable federation, CloudTrail disables the integration with AWS Glue, AWS Lake Formation, and Amazon Athena. After disabling Lake query federation, you can no longer query your event data in Athena. No CloudTrail Lake data is deleted when you disable federation and you can continue to run queries in CloudTrail Lake.

There are no CloudTrail charges for federating a CloudTrail Lake event data store. There are costs for running queries in Amazon Athena. For more information about Athena pricing, see [Amazon Athena Pricing](https://aws.amazon.com/athena/pricing/).

[![AWS Videos](http://img.youtube.com/vi/cOeZaJt_k-w/0.jpg)](https://www.youtube.com/watch?v=cOeZaJt_k-w)


**Topics**
+ [Considerations](#query-federation-considerations)
+ [Required permissions for federation](#query-federation-permissions)
+ [Enable Lake query federation](query-enable-federation.md)
+ [Disable Lake query federation](query-disable-federation.md)
+ [Managing CloudTrail Lake federation resources with AWS Lake Formation](query-federation-lake-formation.md)

## Considerations


Consider the following factors when federating an event data store:
+ There are no CloudTrail charges for federating a CloudTrail Lake event data store. There are costs for running queries in Amazon Athena. For more information about Athena pricing, see [Amazon Athena Pricing](https://aws.amazon.com/athena/pricing/).
+ Lake Formation is used to manage permissions for the federated resources. If you delete the federation role, or revoke permissions to the resources from Lake Formation or AWS Glue, you can't run queries from Athena. For more information about working with Lake Formation, see [Managing CloudTrail Lake federation resources with AWS Lake Formation](query-federation-lake-formation.md). 
+ Anyone using Amazon Athena to query data registered with Lake Formation must have an IAM permissions policy that allows the `lakeformation:GetDataAccess` action. The AWS managed policy: [https://docs.aws.amazon.com/athena/latest/ug/managed-policies.html#amazonathenafullaccess-managed-policy](https://docs.aws.amazon.com/athena/latest/ug/managed-policies.html#amazonathenafullaccess-managed-policy) allows this action. If you use inline policies, be sure to update permissions policies to allow this action. For more information, see [Managing Lake Formation and Athena user permissions](https://docs.aws.amazon.com/athena/latest/ug/lf-athena-user-permissions.html).
+ To create views on federated tables in Athena, you need a destination database other than `aws:cloudtrail`. This is because the `aws:cloudtrail` database is managed by CloudTrail.
+ To create a dataset in Amazon QuickSight, you must choose the **Use custom SQL** option. For more information, see [Creating a dataset using Amazon Athena data](https://docs.aws.amazon.com/quicksight/latest/user/create-a-data-set-athena.html).
+ If federation is enabled, you can't delete an event data store. To delete a federated event data store, you must first [disable federation](query-disable-federation.md) and [termination protection](query-eds-termination-protection.md) if it's enabled.
+ The following considerations apply to organization event data stores:
  + Only a single delegated administrator account or the management account can enable federation on an organization event data store. Other delegated administrator accounts can still query and share information using the [Lake Formation data sharing feature](https://docs.aws.amazon.com/lake-formation/latest/dg/data-sharing-overivew.html).
  + Any delegated administrator account or the organization's management account can disable federation.
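After federation is enabled, you can query the federated table from the Athena query editor. The following sketch counts recent API calls by service and event name; replace *eds-id* with your event data store ID, and confirm the column names and timestamp format against your event data store's schema.

```
SELECT eventsource, eventname, COUNT(*) AS call_count
FROM "aws:cloudtrail"."eds-id"
WHERE eventtime > '2024-05-01 00:00:00'
GROUP BY eventsource, eventname
ORDER BY call_count DESC
LIMIT 10
```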

## Required permissions for federation


Before federating an event data store, be sure that you have all the required permissions for the federation role and for enabling and disabling federation. You only need to update the federation role permissions if you choose an existing IAM role to enable federation. If you choose to create a new IAM role using the CloudTrail console, CloudTrail provides all necessary permissions for the role.

**Topics**
+ [IAM permissions for federating an event data store](#query-federation-permissions-role)
+ [Required permissions for enabling federation](#query-federation-permissions-enable)
+ [Required permissions for disabling federation](#query-federation-permissions-disable)

### IAM permissions for federating an event data store


When you enable federation, you have the option to create a new IAM role, or use an existing IAM role. When you choose a new IAM role, CloudTrail creates an IAM role with the required permissions and no further action is required on your part.

If you choose an existing role, ensure the IAM role's policies provide the required permissions to enable federation. This section provides examples of the required IAM role permission and trust policies.

The following example provides the permissions policy for the federation role. For the first statement, provide the full ARN of your event data store for the `Resource`.

The second statement in this policy allows Lake Formation to decrypt data for an event data store encrypted with a KMS key. Replace the Region, account ID, and *key-id* in the `Resource` ARN with the values for your KMS key. You can omit this statement if your event data store does not use a KMS key for encryption.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LakeFederationEDSDataAccess",
            "Effect": "Allow",
            "Action": "cloudtrail:GetEventDataStoreData",
            "Resource": "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
        },
        {
            "Sid": "LakeFederationKMSDecryptAccess",
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": "arn:aws:kms:us-east-1:111111111111:key/key-id"
        }
    ]
}
```

------

The following example provides the IAM trust policy, which allows AWS Lake Formation to assume an IAM role to manage permissions for the federated event data store. 

------
#### [ JSON ]


```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lakeformation.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

------
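If you prefer to create the federation role from the AWS CLI, you can combine the permissions policy and trust policy with the **aws iam create-role** and **aws iam put-role-policy** commands. The role name and file names below are placeholders; save the trust and permissions policies to the referenced files first.

```
aws iam create-role
--role-name cloudtrail-lake-federation-role
--assume-role-policy-document file://federation-trust-policy.json

aws iam put-role-policy
--role-name cloudtrail-lake-federation-role
--policy-name cloudtrail-lake-federation-permissions
--policy-document file://federation-permissions.json
```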

### Required permissions for enabling federation


The following example policy provides the minimum required permissions to enable federation on an event data store. This policy allows CloudTrail to enable federation on the event data store, AWS Glue to create the federated resources in the AWS Glue Data Catalog, and AWS Lake Formation to manage resource registration.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudTrailEnableFederation",
            "Effect": "Allow",
            "Action": "cloudtrail:EnableFederation",
            "Resource": "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
        },
        {
            "Sid": "FederationRoleAccess",
            "Effect": "Allow",
            "Action": [
                "iam:PassRole",
                "iam:GetRole"
            ],
            "Resource": "arn:aws:iam::111111111111:role/federation-role-name"
        },
        {
            "Sid": "GlueResourceCreation",
            "Effect": "Allow",
            "Action": [
                "glue:CreateDatabase",
                "glue:CreateTable",
                "glue:PassConnection"
            ],
            "Resource": [
                "arn:aws:glue:us-east-1:111111111111:catalog",
                "arn:aws:glue:us-east-1:111111111111:database/aws:cloudtrail",
                "arn:aws:glue:us-east-1:111111111111:table/aws:cloudtrail/eds-id",
                "arn:aws:glue:us-east-1:111111111111:connection/aws:cloudtrail"
            ]
        },
        {
            "Sid": "LakeFormationRegistration",
            "Effect": "Allow",
            "Action": [
                "lakeformation:RegisterResource",
                "lakeformation:DeregisterResource"
            ],
            "Resource": "arn:aws:lakeformation:us-east-1:111111111111:catalog:111111111111"
        }
    ]
}
```

------

### Required permissions for disabling federation


The following example policy provides the minimum required permissions to disable federation on an event data store. This policy allows CloudTrail to disable federation on the event data store, AWS Glue to delete the managed federated table in the AWS Glue Data Catalog, and Lake Formation to deregister the federated resource.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudTrailDisableFederation",
            "Effect": "Allow",
            "Action": "cloudtrail:DisableFederation",
            "Resource": "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
        },
        {
            "Sid": "GlueTableDeletion",
            "Effect": "Allow",
            "Action": "glue:DeleteTable",
            "Resource": [
                "arn:aws:glue:us-east-1:111111111111:catalog",
                "arn:aws:glue:us-east-1:111111111111:database/aws:cloudtrail",
                "arn:aws:glue:us-east-1:111111111111:table/aws:cloudtrail/eds-id"
            ]
        },
        {
            "Sid": "LakeFormationDeregistration",
            "Effect": "Allow",
            "Action": "lakeformation:DeregisterResource",
            "Resource": "arn:aws:lakeformation:us-east-1:111111111111:catalog:111111111111"
        }
    ]
}
```

------

# Enable Lake query federation


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can enable Lake query federation by using the CloudTrail console, AWS CLI, or [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_EnableFederation.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_EnableFederation.html) API operation. When you enable Lake query federation, CloudTrail creates a managed database named `aws:cloudtrail` (if the database doesn't already exist) and a managed federated table in the AWS Glue Data Catalog. The event data store ID is used for the table name. CloudTrail registers the federation role ARN and event data store in [AWS Lake Formation](query-federation-lake-formation.md), the service responsible for allowing fine-grained access control of the federated resources in the AWS Glue Data Catalog.

This section describes how to enable federation using the CloudTrail console and AWS CLI.

------
#### [ CloudTrail console ]

The following procedure shows you how to enable Lake query federation on an existing event data store.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store that you want to update. This opens the event data store's details page.

1. In **Lake query federation**, choose **Edit** and then choose **Enable**.

1. Choose whether to create a new IAM role, or use an existing role. When you create a new role, CloudTrail automatically creates a role with the required permissions. If you're using an existing role, be sure the role's policy provides the [required minimum permissions](query-federation.md#query-federation-permissions-role).

1.  If you're creating a new IAM role, enter a name for the role. 

1.  If you're choosing an existing IAM role, choose the role you want to use. The role must exist in your account. 

1. Choose **Save changes**. The **Federation status** changes to `Enabled`.

------
#### [ AWS CLI ]

To enable federation, run the **aws cloudtrail enable-federation** command, providing the required **--event-data-store** and **--role** parameters. For **--event-data-store**, provide the event data store ARN (or the ID suffix of the ARN). For **--role**, provide the ARN for your federation role. The role must exist in your account and provide the [required minimum permissions](query-federation.md#query-federation-permissions-role).

```
aws cloudtrail enable-federation
--event-data-store arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id
--role arn:aws:iam::account-id:role/federation-role-name
```

This example shows how a delegated administrator can enable federation on an organization event data store by specifying the ARN of the event data store in the management account and the ARN of the federation role in the delegated administrator account.

```
aws cloudtrail enable-federation
--event-data-store arn:aws:cloudtrail:region:management-account-id:eventdatastore/eds-id
--role arn:aws:iam::delegated-administrator-account-id:role/federation-role-name
```

------

# Disable Lake query federation


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can disable federation by using the CloudTrail console, AWS CLI, or [https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_DisableFederation.html](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_DisableFederation.html) API operation. When you disable federation, CloudTrail disables the integration with AWS Glue, AWS Lake Formation, and Amazon Athena. After disabling Lake query federation, you can no longer query your event data in Athena. No CloudTrail Lake data is deleted when you disable federation and you can continue to run queries in CloudTrail Lake.

This section describes how to disable federation using the CloudTrail console and AWS CLI.

------
#### [ CloudTrail console ]

The following procedure shows you how to disable Lake query federation on an existing event data store.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store that you want to update. This opens the event data store's details page.

1. In **Lake query federation**, choose **Edit** and then choose **Disable**.

1. Choose **Save changes**. The **Federation status** changes to `Disabled`.

------
#### [ AWS CLI ]

To disable federation on the event data store, run the **aws cloudtrail disable-federation** command. The event data store is specified by `--event-data-store`, which accepts an event data store ARN or the ID suffix of the ARN.

```
aws cloudtrail disable-federation
--event-data-store arn:aws:cloudtrail:region:account-id:eventdatastore/eds-id
```

**Note**  
If this is an organization event data store, use the account ID for the management account.

------

# Managing CloudTrail Lake federation resources with AWS Lake Formation


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

When you federate an event data store, CloudTrail registers the federation role ARN and event data store in AWS Lake Formation, the service responsible for allowing fine-grained access control of the federated resources in the AWS Glue Data Catalog. This section describes how you can use Lake Formation to manage the CloudTrail Lake federation resources.

When you enable federation, CloudTrail creates the following resources in the AWS Glue Data Catalog.
+ **Managed database** – CloudTrail creates 1 database with the name `aws:cloudtrail` per account. CloudTrail manages the database. You can't delete or modify the database in AWS Glue. 
+ **Managed federated table** – CloudTrail creates 1 table for each federated event data store and uses the event data store ID for the table name. CloudTrail manages the tables. You can't delete or modify the tables in AWS Glue. To delete a table, you must [disable federation](query-disable-federation.md) on the event data store. 

## Controlling access to federated resources


You can use one of two permissions methods to control access to the managed database and tables.
+ **IAM only access control** – With IAM only access control, all users in the account with the required IAM permissions are given access to all Data Catalog resources. For information about how AWS Glue works with IAM, see [How AWS Glue works with IAM](https://docs.aws.amazon.com/glue/latest/dg/security_iam_service-with-iam.html). 

  On the Lake Formation console, this method appears as **Use only IAM access control**.
**Note**  
If you want to create data filters and use other Lake Formation features, you must use Lake Formation access control.
+ **Lake Formation access control** – This method provides the following advantages. 
  + You can implement column-level, row-level, and cell-level security by creating [data filters](https://docs.aws.amazon.com/lake-formation/latest/dg/data-filters-about.html). For more information, see [Securing data lakes with row-level access control](https://docs.aws.amazon.com/lake-formation/latest/dg/cbac-tutorial.html) in the *AWS Lake Formation Developer Guide*.
  + Database and tables are only visible to Lake Formation administrators and creators of the database and resources. If another user needs access to these resources, you must explicitly [grant access by using Lake Formation permissions](https://docs.aws.amazon.com/lake-formation/latest/dg/granting-catalog-permissions.html).

For more information about access control, see [Methods for fine-grained access control](https://docs.aws.amazon.com/lake-formation/latest/dg/access-control-fine-grained.html).

## Determining the permissions method for a federated resource


When you enable federation for the first time, CloudTrail creates a managed database and managed federated table using your Lake Formation data lake settings.

After CloudTrail enables federation, you can verify which permissions method you are using for the managed database and managed federated table by checking the permissions for those resources. If the setting that assigns `ALL` (*Super*) to `IAM_ALLOWED_PRINCIPALS` is present for the resource, the resource is managed exclusively by IAM permissions. If the setting is missing, the resource is managed by Lake Formation permissions. For more information about Lake Formation permissions, see [Lake Formation permissions reference](https://docs.aws.amazon.com/lake-formation/latest/dg/lf-permissions-reference.html).

The permissions method for the managed database and managed federated table can differ. For example, if you check the values for the database and table, you could see the following:
+ For the database, the value that assigns `ALL` (*Super*) to `IAM_ALLOWED_PRINCIPALS` is present in the permissions, indicating that you're using IAM only access control for the database.
+ For the table, the value that assigns `ALL` (*Super*) to `IAM_ALLOWED_PRINCIPALS` is not present, which indicates that access is controlled by Lake Formation permissions.

You can switch between access methods at any time by adding or removing the `ALL` (*Super*) permission for `IAM_ALLOWED_PRINCIPALS` on any federated resource in Lake Formation.
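You can perform both the check and the switch with the AWS CLI by using the Lake Formation `list-permissions` and `revoke-permissions` commands. The following is a minimal sketch; the `aws` calls are shown commented out because they require live AWS credentials, and `aws:cloudtrail` is the managed database described above.

```shell
# Sketch: check and switch the permissions method on the managed database.
# The aws calls are commented out because they require live AWS credentials.
RESOURCE='{"Database": {"Name": "aws:cloudtrail"}}'

# 1. List permissions; if IAM_ALLOWED_PRINCIPALS holds ALL (Super), the
#    database is under IAM only access control:
# aws lakeformation list-permissions --resource "$RESOURCE" \
#   --query "PrincipalResourcePermissions[?Principal.DataLakePrincipalIdentifier=='IAM_ALLOWED_PRINCIPALS']"

# 2. Revoke ALL (Super) from IAM_ALLOWED_PRINCIPALS to switch the database
#    to Lake Formation access control:
# aws lakeformation revoke-permissions \
#   --principal DataLakePrincipalIdentifier=IAM_ALLOWED_PRINCIPALS \
#   --permissions "ALL" --resource "$RESOURCE"

echo "$RESOURCE" | python3 -m json.tool   # validate the resource JSON locally
```

The same pattern applies to a managed federated table by using a `Table` resource instead of a `Database` resource.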

## Cross-account sharing using Lake Formation


This section describes how to share a managed database and managed federated table across accounts by using Lake Formation.

You can share a managed database across accounts by taking these steps:

1. Update the [cross-account data sharing version](https://docs.aws.amazon.com/lake-formation/latest/dg/optimize-ram.html) to version 4. 

1. If the `Super` permission is granted to `IAM_ALLOWED_PRINCIPALS` on the database, remove it to switch to Lake Formation access control.

1. Grant `Describe` permissions to the external account on the database.

1. If a Data Catalog resource is shared with your AWS account and your account is not in the same AWS organization as the sharing account, accept the resource share invitation from AWS Resource Access Manager (AWS RAM). For more information, see [Accepting a resource share invitation from AWS RAM](https://docs.aws.amazon.com/lake-formation/latest/dg/accepting-ram-invite.html).

After completing these steps, the database should be visible to the external account. By default, sharing the database does not give access to any tables in the database.
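The grant in step 3 above can be sketched with the AWS CLI `grant-permissions` command. The external account ID below is a placeholder, and the `aws` call is commented out because it requires live AWS credentials.

```shell
# Sketch: grant Describe on the managed database to an external account
# (step 3 above). The account ID is a placeholder; the aws call is
# commented out because it requires live AWS credentials.
EXTERNAL_ACCOUNT=210987654321
RESOURCE='{"Database": {"Name": "aws:cloudtrail"}}'
# aws lakeformation grant-permissions \
#   --principal DataLakePrincipalIdentifier="$EXTERNAL_ACCOUNT" \
#   --permissions "DESCRIBE" --resource "$RESOURCE"
echo "$RESOURCE" | python3 -m json.tool   # validate the resource JSON locally
```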

 You can share all or individual managed federated tables with an external account by taking these steps:

1. Update the [cross-account data sharing version](https://docs.aws.amazon.com/lake-formation/latest/dg/optimize-ram.html) to version 4. 

1. If the `Super` permission is granted to `IAM_ALLOWED_PRINCIPALS` on the table, remove it to switch to Lake Formation access control.

1. (Optional) Specify any [data filters](https://docs.aws.amazon.com/lake-formation/latest/dg/data-filters-about.html) to restrict columns or rows.

1. Grant `Select` permissions to the external account on the table.

1. If a Data Catalog resource is shared with your AWS account and your account is not in the same AWS organization as the sharing account, accept the resource share invitation from AWS Resource Access Manager (AWS RAM). For accounts in the same organization, you can accept invitations automatically by using AWS RAM settings. For more information, see [Accepting a resource share invitation from AWS RAM](https://docs.aws.amazon.com/lake-formation/latest/dg/accepting-ram-invite.html).

1. The table should now be visible. To enable Amazon Athena queries on this table, create a [resource link in this account](https://docs.aws.amazon.com/lake-formation/latest/dg/create-resource-link-table.html) with the shared table.

The owning account can revoke sharing at any point by removing permissions for the external account from Lake Formation, or by [disabling federation](query-disable-federation.md) in CloudTrail.
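The table-sharing grant in the steps above can be sketched with the AWS CLI as follows. The table name is the event data store ID; both the account ID and `eds-id` are placeholders, and the `aws` call is commented out because it requires live AWS credentials.

```shell
# Sketch: grant Select on a managed federated table to an external account.
# The table name is the event data store ID; the account ID and eds-id are
# placeholders. The aws call is commented out because it requires live
# AWS credentials.
EXTERNAL_ACCOUNT=210987654321
RESOURCE='{"Table": {"DatabaseName": "aws:cloudtrail", "Name": "eds-id"}}'
# aws lakeformation grant-permissions \
#   --principal DataLakePrincipalIdentifier="$EXTERNAL_ACCOUNT" \
#   --permissions "SELECT" --resource "$RESOURCE"
echo "$RESOURCE" | python3 -m json.tool   # validate the resource JSON locally
```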

# Understanding organization event data stores
Organization event data stores

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

If you have created an organization in AWS Organizations, you can create an *organization event data store* that logs all events for all AWS accounts in that organization. Organization event data stores can apply to all AWS Regions, or the current Region. You can't use an organization event data store to collect events from outside of AWS.

You can [create an organization event data store](#cloudtrail-lake-organizations-create-eds) by using either the management account or the delegated administrator account. When a delegated administrator creates an organization event data store, the organization event data store exists in the management account for the organization. This is because the management account maintains ownership of all organization resources. 

The management account for an organization can [update an account-level event data store](#cloudtrail-lake-organizations-update-eds) to apply it to an organization.

When an event data store is specified as applying to an organization, it's automatically applied to all member accounts in the organization. Member accounts can't see the organization event data store, nor can they modify or delete it. By default, member accounts don't have access to the organization event data store, nor can they run queries on organization event data stores. 

The following table shows the capabilities of the management account and delegated administrator accounts within the AWS Organizations organization.


| Capabilities | Management account | Delegated administrator account | 
| --- | --- | --- | 
|  Register or remove delegated administrator accounts.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 
|  Create an organization event data store for AWS CloudTrail events or AWS Config configuration items.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  Enable Insights on an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 
|  Update an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes1  | 
|  Start and stop event ingestion on an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  Enable Lake query federation on an organization event data store.2  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  Disable Lake query federation on an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  Delete an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  Copy trail events to an event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 
|  Run queries on organization event data stores.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  | 
|  View a managed dashboard for an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 
|  Enable the Highlights dashboard for organization event data stores.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 
|  Create a widget for a custom dashboard that queries an organization event data store.  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/success_icon.svg) Yes  |  ![\[alt text not found\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/negative_icon.svg) No  | 

1Only the management account can convert an organization event data store to an account-level event data store, or convert an account-level event data store to an organization event data store. These actions are not allowed for the delegated administrator because organization event data stores only exist in the management account. When an organization event data store is converted to an account-level event data store, only the management account has access to the event data store. Likewise, only an account-level event data store in the management account can be converted to an organization event data store.

2Only a single delegated administrator account or the management account can enable federation on an organization event data store. Other delegated administrator accounts can query and share information using the [Lake Formation data sharing feature](https://docs.aws.amazon.com/lake-formation/latest/dg/data-sharing-overivew.html). Any delegated administrator account as well as the organization's management account can disable federation.

## Create an organization event data store


The management account or delegated administrator account for an organization can create an organization event data store to collect either CloudTrail events (management events, data events) or AWS Config configuration items.

**Note**  
Only the organization's management account can copy trail events to an event data store.

------
#### [ CloudTrail console ]

**To create an organization event data store using the console**

1. Follow the steps in the [create an event data store for CloudTrail events](query-event-data-store-cloudtrail.md#query-event-data-store-cloudtrail-procedure) procedure to create an organization event data store for CloudTrail management or data events.

   **OR**

   Follow the steps in the [create an event data store for AWS Config configuration items](query-event-data-store-config.md#create-config-event-data-store) procedure to create an organization event data store for AWS Config configuration items.

1. On the **Choose events** page, choose **Enable for all accounts in my organization**.

------
#### [ AWS CLI ]

To create an organization event data store, run the [create-event-data-store](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/create-event-data-store.html) command and include the `--organization-enabled` option.

The following example AWS CLI `create-event-data-store` command creates an organization event data store that collects all management events. Because CloudTrail logs management events by default, you don't need to specify advanced event selectors if your event data store is logging all management events and is not collecting any data events.

```
aws cloudtrail create-event-data-store --name org-management-eds --organization-enabled
```

The following is an example response.

```
{
    "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE6-d493-4914-9182-e52a7934b207",
    "Name": "org-management-eds",
    "Status": "CREATED",
    "AdvancedEventSelectors": [
        {
            "Name": "Default management events",
            "FieldSelectors": [
                {
                    "Field": "eventCategory",
                    "Equals": [
                        "Management"
                    ]
                }
            ]
        }
    ],
    "MultiRegionEnabled": true,
    "OrganizationEnabled": true,
    "BillingMode": "EXTENDABLE_RETENTION_PRICING",
    "RetentionPeriod": 366,
    "TerminationProtectionEnabled": true,
    "CreatedTimestamp": "2023-11-16T15:30:50.689000+00:00",
    "UpdatedTimestamp": "2023-11-16T15:30:50.851000+00:00"
}
```

The next example AWS CLI `create-event-data-store` command creates an organization event data store named `config-items-org-eds` that collects AWS Config configuration items. To collect configuration items, specify that the `eventCategory` field equals `ConfigurationItem` in the advanced event selectors.

```
aws cloudtrail create-event-data-store --name config-items-org-eds \
--organization-enabled \
--advanced-event-selectors '[
    {
        "Name": "Select AWS Config configuration items",
        "FieldSelectors": [
            { "Field": "eventCategory", "Equals": ["ConfigurationItem"] }
        ]
    }
]'
```

------

## Apply an account-level event data store to an organization


The organization's management account can convert an account-level event data store to apply it to an organization.

------
#### [ CloudTrail console ]

**To update an account-level event data store using the console**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Event data stores**.

1. Choose the event data store that you want to update. This action opens the event data store's details page.

1. In **General details**, choose **Edit**.

1. Choose **Enable for all accounts in my organization**.

1. Choose **Save changes**.

For additional information about updating an event data store, see [Update an event data store with the console](query-event-data-store-update.md).

------
#### [ AWS CLI ]

To update an account-level event data store to apply it to an organization, run the [update-event-data-store](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/update-event-data-store.html) command and include the `--organization-enabled` option.

```
aws cloudtrail update-event-data-store --region us-east-1 \
--organization-enabled \
--event-data-store arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE
```

------

## Default resource policy for delegated administrators


CloudTrail automatically generates a resource policy named `DelegatedAdminResourcePolicy` for [organization event data stores](#cloudtrail-lake-organizations) that lists the actions that the delegated administrator accounts are allowed to perform on organization event data stores. The permissions in `DelegatedAdminResourcePolicy` are derived from the delegated administrator permissions in AWS Organizations.

The purpose of `DelegatedAdminResourcePolicy` is to ensure that the delegated administrator accounts can manage the organization event data store on behalf of the organization, and are not unintentionally denied access when a resource-based policy attached to the organization event data store allows or denies principals from performing actions on it.

CloudTrail evaluates `DelegatedAdminResourcePolicy` in tandem with any resource-based policy provided for the organization event data store. The delegated administrator accounts would only be denied access if the provided resource-based policy included a statement that explicitly denied the delegated administrator accounts from performing an action on the organization event data store that the delegated administrator accounts would otherwise be able to perform.

This `DelegatedAdminResourcePolicy` policy is updated automatically when:
+ The management account converts an organization event data store to an account-level event data store, or converts an account-level event data store to an organization event data store.
+ There are organization changes. For example, the management account registers or removes a CloudTrail delegated administrator account.

You can view the up-to-date policy in the **Delegated administrator resource policy** section of the CloudTrail console, or by running the AWS CLI `get-resource-policy` command and passing the ARN of the organization event data store.

The following example runs the `get-resource-policy` command on an organization event data store.

```
aws cloudtrail get-resource-policy --resource-arn arn:aws:cloudtrail:us-east-1:888888888888:eventdatastore/example6-d493-4914-9182-e52a7934b207
```

The output of this command shows the resource-based policy and the `DelegatedAdminResourcePolicy` policy generated for the delegated administrator accounts.

## Additional resources

+ [Organization delegated administrator](cloudtrail-delegated-administrator.md)
+ [Add a CloudTrail delegated administrator](cloudtrail-add-delegated-administrator.md)
+ [Remove a CloudTrail delegated administrator](cloudtrail-remove-delegated-administrator.md)

# Create an integration with an event source outside of AWS
Integrations

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can use CloudTrail to log and store user activity data from any source in your hybrid environments, such as in-house or SaaS applications hosted on-premises or in the cloud, virtual machines, or containers. You can store, access, analyze, troubleshoot, and take action on this data without maintaining multiple log aggregators and reporting tools. 

Activity events from non-AWS sources work by using *channels* to bring events into CloudTrail Lake from external partners that work with CloudTrail, or from your own sources. When you create a channel, you choose one or more event data stores to store events that arrive from the channel source. You can change the destination event data stores for a channel as needed, as long as the destination event data stores are set to log `eventCategory="ActivityAuditLog"` events. When you create a channel for events from an external partner, you provide a channel ARN to the partner or source application. The resource policy attached to the channel allows the source to transmit events through the channel. If a channel does not have a resource policy, only the channel owner can call the `PutAuditEvents` API on the channel.
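The destination requirement above can be sketched with the AWS CLI: an event data store that receives channel events must select `eventCategory` equals `ActivityAuditLog` in its advanced event selectors. The event data store name below is a placeholder, and the `aws` call is commented out because it requires live AWS credentials.

```shell
# Sketch: an advanced event selector for an event data store that collects
# activity events from integrations (eventCategory = ActivityAuditLog).
# The name is a placeholder; the aws call is commented out because it
# requires live AWS credentials.
SELECTORS='[
  {
    "Name": "Select activity audit logs",
    "FieldSelectors": [
      { "Field": "eventCategory", "Equals": ["ActivityAuditLog"] }
    ]
  }
]'
# aws cloudtrail create-event-data-store \
#   --name my-activity-eds --advanced-event-selectors "$SELECTORS"
echo "$SELECTORS" | python3 -m json.tool   # validate the selector JSON locally
```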

CloudTrail has partnered with many event source providers, such as Okta and LaunchDarkly. When you create an integration with an event source outside AWS, you can choose one of these partners as your event source, or choose **My custom integration** to integrate events from your own sources into CloudTrail. A maximum of one channel is allowed per source.

There are two types of integrations: direct and solution. With direct integrations, the partner calls the `PutAuditEvents` API to deliver events to the event data store for your AWS account. With solution integrations, the application runs in your AWS account and the application calls the `PutAuditEvents` API to deliver events to the event data store for your AWS account.

From the **Integrations** page, you can choose the **Available sources** tab to view the **Integration type** for partners.

![\[Partner integration type\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/partner-integration-type.png)


To get started, create an integration to log events from partner or other application sources using the CloudTrail console.

**Topics**
+ [Create an integration with a CloudTrail partner with the console](query-event-data-store-integration-partner.md)
+ [Create a custom integration with the console](query-event-data-store-integration-custom.md)
+ [Create, update, and manage CloudTrail Lake integrations with the AWS CLI](lake-integrations-cli.md)
+ [Additional information about integration partners](#cloudtrail-lake-partner-information)
+ [CloudTrail Lake integrations event schema](query-integration-event-schema.md)

# Create an integration with a CloudTrail partner with the console


When you create an integration with an event source outside AWS, you can choose one of these partners as your event source. When you create an integration in CloudTrail with a partner application, the partner needs the Amazon Resource Name (ARN) of the channel that you create in this workflow to send events to CloudTrail. After you create the integration, you finish configuring the integration by following the partner's instructions to provide the required channel ARN to the partner. The integration starts ingesting partner events into CloudTrail after the partner calls `PutAuditEvents` on the integration's channel.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Integrations**. 

1. On the **Add integration** page, enter a name for your channel. The name can be 3-128 characters. Only letters, numbers, periods, underscores, and dashes are allowed.

1. Choose the partner application source from which you want to get events. If you're integrating with events from your own applications hosted on-premises or in the cloud, choose **My custom integration**.

1. From **Event delivery location**, choose to log the same activity events to existing event data stores, or create a new event data store.

   If you choose to create a new event data store, enter a name for the event data store, choose the pricing option, and specify the retention period in days. The event data store retains event data for the specified number of days.

   If you choose to log activity events to one or more existing event data stores, choose the event data stores from the list. The event data stores can only include activity events. The event type in the console must be **Events from integrations**. In the API, the `eventCategory` value must be `ActivityAuditLog`.

1. In **Resource policy**, configure the resource policy for the integration's channel. Resource policies are JSON policy documents that specify what actions a specified principal can perform on the resource and under what conditions. The accounts defined as principals in the resource policy can call the `PutAuditEvents` API to deliver events to your channel. The resource owner has implicit access to the resource if their IAM policy allows the `cloudtrail-data:PutAuditEvents` action.

   The information required for the policy is determined by the integration type. For a direct integration, CloudTrail automatically adds the partner's AWS account IDs, and requires you to enter the unique external ID provided by the partner. For a solution integration, you must specify at least one AWS account ID as principal, and can optionally enter an external ID to help prevent the confused deputy problem.
**Note**  
If you do not create a resource policy for the channel, only the channel owner can call the `PutAuditEvents` API on the channel.

   1. For a direct integration, enter the external ID provided by your partner. The integration partner provides a unique external ID, such as an account ID or a randomly generated string, to use for the integration to help prevent the confused deputy problem. The partner is responsible for creating and providing a unique external ID.

       You can choose **How to find this?** to view the partner's documentation that describes how to find the external ID.   
![\[Partner documentation for external ID\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/integration-external-id.png)
**Note**  
If the resource policy includes an external ID, all calls to the `PutAuditEvents` API must include the external ID. However, if the policy does not define an external ID, the partner can still call the `PutAuditEvents` API and specify an `externalId` parameter.

   1.  For a solution integration, choose **Add AWS account** to specify an AWS account ID to add as a principal in the policy.

1. (Optional) In the **Tags** area, you can add up to 50 tag key and value pairs to help you identify, sort, and control access to your event data store and channel. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) in the *AWS General Reference*.

1. When you are ready to create the new integration, choose **Add integration**. There is no review page. CloudTrail creates the integration, but you must provide the channel Amazon Resource Name (ARN) to the partner application. Instructions for providing the channel ARN to the partner application are found on the partner documentation website. For more information, choose the **Learn more** link for the partner on the **Available sources** tab of the **Integrations** page to open the partner's page in AWS Marketplace.

To finish the setup for your integration, provide the channel ARN to the partner or source application. Depending on the integration type, either you, the partner, or the application runs the `PutAuditEvents` API to deliver activity events to the event data store for your AWS account. After your activity events are delivered, you can use CloudTrail Lake to search, query, and analyze the data that is logged from your applications. Your event data includes fields that match the CloudTrail event payload, such as `eventVersion`, `eventSource`, and `userIdentity`.
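As an illustration of that delivery step, a `PutAuditEvents` call takes the channel ARN and a list of audit events, where each event's `eventData` value is a JSON string. The sketch below uses placeholder values and an illustrative minimal payload; see the CloudTrail Lake integrations event schema topic for the required `eventData` fields. The `aws` call is commented out because it requires live AWS credentials.

```shell
# Sketch: deliver one activity event to a channel with PutAuditEvents.
# All values are placeholders, and the eventData fields are illustrative;
# consult the integrations event schema for the required fields. The aws
# call is commented out because it requires live AWS credentials.
CHANNEL_ARN="arn:aws:cloudtrail:region:account-id:channel/channel-id"
AUDIT_EVENTS='[
  {
    "id": "event-id-1",
    "eventData": "{\"version\":\"1.0\",\"userIdentity\":{\"type\":\"CustomUser\",\"principalId\":\"user-1\"},\"eventSource\":\"my-app\",\"eventName\":\"Login\",\"eventTime\":\"2024-01-01T00:00:00Z\",\"UID\":\"event-uid-1\"}"
  }
]'
# aws cloudtrail-data put-audit-events \
#   --channel-arn "$CHANNEL_ARN" --audit-events "$AUDIT_EVENTS"
echo "$AUDIT_EVENTS" | python3 -m json.tool   # validate the payload JSON locally
```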

# Create a custom integration with the console


You can use CloudTrail to log and store user activity data from any source in your hybrid environments, such as in-house or SaaS applications hosted on-premises or in the cloud, virtual machines, or containers. Perform the first half of this procedure in the CloudTrail Lake console, then call the [PutAuditEvents](https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html) API to ingest events, providing your channel ARN and event payload. After you use the `PutAuditEvents` API to ingest your application activity into CloudTrail, you can use CloudTrail Lake to search, query, and analyze the data that is logged from your applications.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Integrations**. 

1. On the **Add integration** page, enter a name for your channel. The name can be 3-128 characters. Only letters, numbers, periods, underscores, and dashes are allowed.

1. Choose **My custom integration**.

1. From **Event delivery location**, choose to log the same activity events to existing event data stores, or create a new event data store.

   If you choose to create a new event data store, enter a name for the event data store and specify the retention period in days. You can keep the event data in an event data store for up to 3,653 days (about 10 years) if you choose the **One-year extendable retention pricing** option, or up to 2,557 days (about 7 years) if you choose the **Seven-year retention pricing** option.

   If you choose to log activity events to one or more existing event data stores, choose the event data stores from the list. The event data stores can only include activity events. The event type in the console must be **Events from integrations**. In the API, the `eventCategory` value must be `ActivityAuditLog`.

1. In **Resource policy**, configure the resource policy for the integration's channel. Resource policies are JSON policy documents that specify what actions a specified principal can perform on the resource and under what conditions. The accounts defined as principals in the resource policy can call the `PutAuditEvents` API to deliver events to your channel.
**Note**  
If you do not create a resource policy for the channel, only the channel owner can call the `PutAuditEvents` API on the channel.

   1. (Optional) Enter a unique external ID to provide an extra layer of protection. The external ID is a unique string, such as an account ID or a randomly generated string, that helps prevent the confused deputy problem. 
**Note**  
If the resource policy includes an external ID, all calls to the `PutAuditEvents` API must include the external ID. However, if the policy does not define an external ID, you can still call the `PutAuditEvents` API and specify an `externalId` parameter.

   1. Choose **Add AWS account** to specify each AWS account ID to add as a principal in the resource policy for the channel.

1. (Optional) In the **Tags** area, you can add up to 50 tag key and value pairs to help you identify, sort, and control access to your event data store and channel. For more information about how to use IAM policies to authorize access to an event data store based on tags, see [Examples: Denying access to create or delete event data stores based on tags](security_iam_id-based-policy-examples.md#security_iam_id-based-policy-examples-eds-tags). For more information about how you can use tags in AWS, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *AWS General Reference*.

1. When you are ready to create the new integration, choose **Add integration**. There is no review page. CloudTrail creates the integration, but to integrate your custom events, you must specify the channel ARN in a [https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html](https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html) request.

1. Call the `PutAuditEvents` API to ingest your activity events into CloudTrail. You can add up to 100 activity events (or up to 1 MB) per `PutAuditEvents` request. You'll need the channel ARN that you created in preceding steps, the payload of events that you want CloudTrail to add, and the external ID (if specified for your resource policy). Be sure that there is no sensitive or personally-identifying information in event payload before ingesting it into CloudTrail. Events that you ingest into CloudTrail must follow the [CloudTrail Lake integrations event schema](query-integration-event-schema.md).
**Tip**  
Use [AWS CloudShell](https://docs.aws.amazon.com/cloudshell/latest/userguide/welcome.html) to be sure you are running the most current AWS APIs.

   The following examples show how to use the **put-audit-events** CLI command. The **--audit-events** and **--channel-arn** parameters are required. You need the ARN of the channel that you created in the preceding steps, which you can copy from the integration details page. The value of **--audit-events** is a JSON array of event objects. `--audit-events` includes a required ID from the event, the required payload of the event as the value of `EventData`, and an [optional checksum](#event-data-store-integration-custom-checksum) to help validate the integrity of the event after ingestion into CloudTrail.

   ```
   aws cloudtrail-data put-audit-events \
   --region region \
   --channel-arn $ChannelArn \
   --audit-events \
   id="event_ID",eventData='"{event_payload}"' \
   id="event_ID",eventData='"{event_payload}"',eventDataChecksum="optional_checksum"
   ```

   The following is an example command with two event examples.

   ```
   aws cloudtrail-data put-audit-events \
   --region us-east-1 \
   --channel-arn arn:aws:cloudtrail:us-east-1:01234567890:channel/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE \
   --audit-events \
   id="EXAMPLE3-0f1f-4a85-9664-d50a3EXAMPLE",eventData='"{\"eventVersion\":\"0.01\",\"eventSource\":\"custom1.domain.com\", ...
   \}"' \
   id="EXAMPLE7-a999-486d-b241-b33a1EXAMPLE",eventData='"{\"eventVersion\":\"0.02\",\"eventSource\":\"custom2.domain.com\", ...
   \}"',eventDataChecksum="EXAMPLE6e7dd61f3ead...93a691d8EXAMPLE"
   ```

   The following example command adds the `--cli-input-json` parameter to specify a JSON file (`custom-events.json`) of event payload.

   ```
   aws cloudtrail-data put-audit-events \
   --channel-arn $channelArn \
   --cli-input-json file://custom-events.json \
   --region us-east-1
   ```

   The following are the sample contents of the example JSON file, `custom-events.json`.

   ```
   {
       "auditEvents": [
         {
           "eventData": "{\"version\":\"eventData.version\",\"UID\":\"UID\",
           \"userIdentity\":{\"type\":\"CustomUserIdentity\",\"principalId\":\"principalId\",
           \"details\":{\"key\":\"value\"}},\"eventTime\":\"2021-10-27T12:13:14Z\",\"eventName\":\"eventName\",
           \"userAgent\":\"userAgent\",\"eventSource\":\"eventSource\",
           \"requestParameters\":{\"key\":\"value\"},\"responseElements\":{\"key\":\"value\"},
           \"additionalEventData\":{\"key\":\"value\"},
           \"sourceIPAddress\":\"source_IP_address\",\"recipientAccountId\":\"recipient_account_ID\"}",
           "id": "1"
         }
      ]
   }
   ```
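Writing the escaped `eventData` strings by hand is error prone. The following Python sketch (a hypothetical helper, not part of any AWS SDK) serializes each event payload into the `eventData` string and enforces the documented limits of 100 events and 1 MB per `PutAuditEvents` request:

```python
import json

MAX_EVENTS_PER_REQUEST = 100
MAX_REQUEST_BYTES = 1_000_000  # documented limits: 100 events or 1 MB per request

def build_audit_events(payloads):
    """Build the auditEvents list for a single PutAuditEvents request.

    Each dict in `payloads` becomes the JSON-serialized value of
    `eventData`, paired with a sequential string `id`.
    """
    if len(payloads) > MAX_EVENTS_PER_REQUEST:
        raise ValueError(f"at most {MAX_EVENTS_PER_REQUEST} events per request")
    audit_events = [
        {"id": str(i + 1), "eventData": json.dumps(p, separators=(",", ":"))}
        for i, p in enumerate(payloads)
    ]
    body = json.dumps({"auditEvents": audit_events})
    if len(body.encode("utf-8")) > MAX_REQUEST_BYTES:
        raise ValueError("request payload exceeds 1 MB")
    return audit_events

events = build_audit_events(
    [{"version": "1.0", "eventSource": "custom1.domain.com", "eventName": "Login"}]
)
```

You can then write the resulting list as the `auditEvents` value in a `--cli-input-json` file, as in the `custom-events.json` example above.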

## (Optional) Calculate a checksum value


The checksum that you specify as the value of `EventDataChecksum` in a `PutAuditEvents` request helps you verify the integrity of events: CloudTrail confirms that the event it receives matches the checksum. The checksum is a Base64-encoded SHA-256 digest of the serialized event, which you can calculate by running the following command.

```
printf %s '{"eventData": "{\"version\":\"eventData.version\",\"UID\":\"UID\",
        \"userIdentity\":{\"type\":\"CustomUserIdentity\",\"principalId\":\"principalId\",
        \"details\":{\"key\":\"value\"}},\"eventTime\":\"2021-10-27T12:13:14Z\",\"eventName\":\"eventName\",
        \"userAgent\":\"userAgent\",\"eventSource\":\"eventSource\",
        \"requestParameters\":{\"key\":\"value\"},\"responseElements\":{\"key\":\"value\"},
        \"additionalEventData\":{\"key\":\"value\"},
        \"sourceIPAddress\":\"source_IP_address\",
        \"recipientAccountId\":\"recipient_account_ID\"}",
        "id": "1"}' \
 | openssl dgst -binary -sha256 | base64
```

The command returns the checksum. The following is an example.

```
EXAMPLEHjkI8iehvCUCWTIAbNYkOgO/t0YNw+7rrQE=
```

The checksum value becomes the value of `EventDataChecksum` in your `PutAuditEvents` request. If the checksum doesn't match the one calculated for the provided event, CloudTrail rejects the event with an `InvalidChecksum` error.
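The same checksum can be computed in Python instead of with `openssl`. This sketch hashes the exact serialized event object and Base64-encodes the SHA-256 digest:

```python
import base64
import hashlib

def event_checksum(serialized_event: str) -> str:
    """Return the Base64-encoded SHA-256 digest of a serialized event.

    The input must be byte-for-byte identical to the event string sent to
    CloudTrail, or the checksums won't match.
    """
    digest = hashlib.sha256(serialized_event.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

checksum = event_checksum('{"eventData": "...", "id": "1"}')
```

Note that whitespace and field order change the digest, so compute the checksum on the exact string you pass to `PutAuditEvents`.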

# Create, update, and manage CloudTrail Lake integrations with the AWS CLI


This section describes the commands you can use to create, update, and manage your CloudTrail Lake integrations using the AWS CLI.

When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the **--region** parameter with the command.

## Available commands for CloudTrail Lake integrations


Commands for creating, updating, and managing integrations in CloudTrail Lake include:
+ `create-event-data-store` to create an event data store for events outside of AWS.
+ `delete-channel` to delete a channel used for an integration.
+ `[delete-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/delete-resource-policy.html)` to delete the resource policy attached to a channel for a CloudTrail Lake integration.
+ `[get-channel](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-channel.html)` to return information about a CloudTrail channel.
+ `[get-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/get-resource-policy.html)` to retrieve the JSON text of the resource-based policy document attached to the CloudTrail channel.
+ `[list-channels](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/list-channels.html)` to list the channels in the current account, and their source names.
+ `[put-audit-events](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail-data/put-audit-events.html)` to ingest your application events into CloudTrail Lake. A required parameter, `auditEvents`, accepts the JSON records (also called payload) of events that you want CloudTrail to ingest. You can add up to 100 of these events (or up to 1 MB) per `PutAuditEvents` request.
+ `[put-resource-policy](https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-resource-policy.html)` to attach a resource-based permission policy to a CloudTrail channel that is used for an integration with an event source outside of AWS. For more information about resource-based policies, see [AWS CloudTrail resource-based policy examples](security_iam_resource-based-policy-examples.md).
+ `update-channel` to update a channel specified by a required channel ARN or UUID.

For a list of available commands for CloudTrail Lake event data stores, see [Available commands for event data stores](lake-eds-cli.md#lake-eds-cli-commands).

For a list of available commands for CloudTrail Lake queries, see [Available commands for CloudTrail Lake queries](lake-queries-cli.md#lake-queries-cli-commands).

For a list of available commands for CloudTrail Lake dashboards, see [Available commands for dashboards](lake-dashboard-cli.md#lake-dashboard-cli-commands).

# Create an integration to log events from outside AWS with the AWS CLI
Create an integration with the AWS CLI

This section describes how you can use the AWS CLI to create a CloudTrail Lake integration to log events from outside of AWS.

In the AWS CLI, you create an integration in four commands (three if you already have an event data store that meets the criteria). Event data stores that you use as the destinations for an integration must be for a single Region and single account; they cannot be multi-Region, they cannot log events for organizations in AWS Organizations, and they can only include activity events. The event type in the console must be **Events from integrations**. In the API, the `eventCategory` value must be `ActivityAuditLog`. For more information about integrations, see [Create an integration with an event source outside of AWS](query-event-data-store-integration.md).

1. Run [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/index.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/index.html) to create an event data store, if you do not already have one or more event data stores that you can use for the integration.

   The following example AWS CLI command creates an event data store that logs events from outside AWS. For activity events, the `eventCategory` field selector value is `ActivityAuditLog`. The event data store has a retention period of 90 days. By default, an event data store collects events from all Regions; because this event data store collects non-AWS events, set it to a single Region by adding the `--no-multi-region-enabled` option. Termination protection is enabled by default, and the event data store does not collect events for accounts in an organization.

   ```
   aws cloudtrail create-event-data-store \
   --name my-event-data-store \
   --no-multi-region-enabled \
   --retention-period 90 \
   --advanced-event-selectors '[
       {
         "Name": "Select all external events",
         "FieldSelectors": [
             { "Field": "eventCategory", "Equals": ["ActivityAuditLog"] }
           ]
       }
     ]'
   ```

   The following is an example response.

   ```
   {
       "EventDataStoreArn": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE",
       "Name": "my-event-data-store",
       "AdvancedEventSelectors": [
           {
              "Name": "Select all external events",
              "FieldSelectors": [
                 {
                     "Field": "eventCategory",
                     "Equals": [
                         "ActivityAuditLog"
                       ]
                   }
               ]
           }
       ],
       "MultiRegionEnabled": false,
       "OrganizationEnabled": false,
       "BillingMode": "EXTENDABLE_RETENTION_PRICING",
       "RetentionPeriod": 90,
       "TerminationProtectionEnabled": true,
       "CreatedTimestamp": "2023-10-27T10:55:55.384000-04:00",
       "UpdatedTimestamp": "2023-10-27T10:57:05.549000-04:00"
   }
   ```

   You'll need the event data store ID (the suffix of the ARN, or `EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE` in the preceding response example) to go on to the next step and create your channel.
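If you script these steps, the event data store ID is everything after the final `/` in the ARN; a one-line helper (a sketch) extracts it:

```python
def event_data_store_id(arn: str) -> str:
    """Return the event data store ID (the suffix after the last '/') from its ARN."""
    return arn.rsplit("/", 1)[-1]

arn = "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE"
eds_id = event_data_store_id(arn)  # EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE
```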

1. Run the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-channel.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/create-channel.html) command to create a channel that allows a partner or source application to send events to an event data store in CloudTrail.

   A channel has the following components:  
**Source**  
CloudTrail uses this information to determine the partners that are sending event data to CloudTrail on your behalf. A source is required, and can be either `Custom` for all valid non-AWS events, or the name of a partner event source. A maximum of one channel is allowed per source.  
For information about the `Source` values for available partners, see [Additional information about integration partners](query-event-data-store-integration.md#cloudtrail-lake-partner-information).  
**Ingestion status**  
The channel status shows when the last events were received from a channel source.  
**Destinations**  
The destinations are the CloudTrail Lake event data stores that are receiving events from the channel. You can change destination event data stores for a channel.

   To stop receiving events from a source, delete the channel.

   You need the ID of at least one destination event data store to run this command. The valid type of destination is `EVENT_DATA_STORE`. You can send ingested events to more than one event data store. The following example command creates a channel that sends events to two event data stores, represented by their IDs in the `Location` attribute of the `--destinations` parameter. The `--destinations`, `--name`, and `--source` parameters are required. To ingest events from a CloudTrail partner, specify the name of the partner as the value of `--source`. To ingest events from your own applications outside AWS, specify `Custom` as the value of `--source`.

   ```
   aws cloudtrail create-channel \
       --region us-east-1 \
       --destinations '[{"Type": "EVENT_DATA_STORE", "Location": "EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE"}, {"Type": "EVENT_DATA_STORE", "Location": "EXAMPLEg922-5n2l-3vz1-apqw8EXAMPLE"}]' \
       --name my-partner-channel \
       --source $partnerSourceName
   ```

   In the response to your **create-channel** command, copy the ARN of the new channel. You need the ARN to run the `put-resource-policy` and `put-audit-events` commands in the next steps.

1.  Run the **put-resource-policy** command to attach a resource policy to the channel. Resource policies are JSON policy documents that specify what actions a specified principal can perform on the resource and under what conditions. The accounts defined as principals in the channel's resource policy can call the `PutAuditEvents` API to deliver events. 
**Note**  
If you do not create a resource policy for the channel, only the channel owner can call the `PutAuditEvents` API on the channel.

   The information required for the policy is determined by the integration type.
   + For a direct integration, CloudTrail requires the policy to contain the partner's AWS account IDs, and requires you to enter the unique external ID provided by the partner. CloudTrail automatically adds the partner's AWS account IDs to the resource policy when you create an integration using the CloudTrail console. Refer to the [partner's documentation](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/query-event-data-store-integration.html#cloudtrail-lake-partner-information) to learn how to get the AWS account numbers required for the policy.
   + For a solution integration, you must specify at least one AWS account ID as principal, and can optionally enter an external ID to help prevent the confused deputy problem.

   The following are requirements for the resource policy:
   + The resource ARN defined in the policy must match the channel ARN that the policy is attached to.
   + The policy contains only one action: `cloudtrail-data:PutAuditEvents`.
   + The policy contains at least one statement. The policy can have a maximum of 20 statements.
   + Each statement contains at least one principal. A statement can have a maximum of 50 principals.

   ```
   aws cloudtrail put-resource-policy \
       --resource-arn "channelARN" \
       --policy '{
       "Version": "2012-10-17",
       "Statement":
       [
           {
               "Sid": "ChannelPolicy",
               "Effect": "Allow",
               "Principal":
               {
                   "AWS":
                   [
                       "arn:aws:iam::111122223333:root",
                       "arn:aws:iam::444455556666:root",
                       "arn:aws:iam::123456789012:root"
                   ]
               },
               "Action": "cloudtrail-data:PutAuditEvents",
               "Resource": "arn:aws:cloudtrail:us-east-1:777788889999:channel/EXAMPLE-80b5-40a7-ae65-6e099392355b",
               "Condition":
               {
                   "StringEquals":
                   {
                       "cloudtrail:ExternalId": "UniqueExternalIDFromPartner"
                   }
               }
           }
       ]
   }'
   ```

   For more information about resource policies, see [AWS CloudTrail resource-based policy examples](security_iam_resource-based-policy-examples.md).
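Before calling **put-resource-policy**, you can check a policy document against the requirements listed above. This Python sketch is a hypothetical local validator (not part of any AWS SDK) that applies those documented limits:

```python
import json

def validate_channel_policy(policy_json: str, channel_arn: str) -> None:
    """Raise ValueError if the policy breaks the documented channel-policy limits."""
    statements = json.loads(policy_json).get("Statement", [])
    if not 1 <= len(statements) <= 20:
        raise ValueError("policy must contain between 1 and 20 statements")
    for stmt in statements:
        if stmt.get("Action") != "cloudtrail-data:PutAuditEvents":
            raise ValueError("only cloudtrail-data:PutAuditEvents is allowed")
        if stmt.get("Resource") != channel_arn:
            raise ValueError("Resource must match the channel ARN")
        principals = stmt.get("Principal", {}).get("AWS", [])
        if isinstance(principals, str):
            principals = [principals]
        if not 1 <= len(principals) <= 50:
            raise ValueError("each statement needs 1 to 50 principals")

channel_arn = "arn:aws:cloudtrail:us-east-1:777788889999:channel/EXAMPLE-80b5-40a7-ae65-6e099392355b"
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "ChannelPolicy",
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::111122223333:root"]},
        "Action": "cloudtrail-data:PutAuditEvents",
        "Resource": channel_arn,
    }],
})
validate_channel_policy(policy, channel_arn)  # passes silently
```

CloudTrail enforces these limits on its side as well; the local check only saves a round trip when a policy is malformed.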

1. Run the [https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html](https://docs.aws.amazon.com/awscloudtraildata/latest/APIReference/API_PutAuditEvents.html) API to ingest your activity events into CloudTrail. You'll need the payload of events that you want CloudTrail to add. Be sure that there is no sensitive or personally-identifying information in event payload before ingesting it into CloudTrail. Note that the `PutAuditEvents` API uses the `cloudtrail-data` CLI endpoint, not the `cloudtrail` endpoint.

   The following examples show how to use the **put-audit-events** CLI command. The **--audit-events** and **--channel-arn** parameters are required. The **--external-id** parameter is required if an external ID is defined in the resource policy. You need the ARN of the channel that you created in the preceding step. The value of **--audit-events** is a JSON array of event objects. `--audit-events` includes a required ID from the event, the required payload of the event as the value of `EventData`, and an [optional checksum](#lake-cli-integration-checksum) to help validate the integrity of the event after ingestion into CloudTrail.

   ```
   aws cloudtrail-data put-audit-events \
   --channel-arn $ChannelArn \
   --external-id $UniqueExternalIDFromPartner \
   --audit-events \
   id="event_ID",eventData='"{event_payload}"' \
   id="event_ID",eventData='"{event_payload}"',eventDataChecksum="optional_checksum"
   ```

   The following is an example command with two event examples.

   ```
   aws cloudtrail-data put-audit-events \
   --channel-arn arn:aws:cloudtrail:us-east-1:123456789012:channel/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE \
   --external-id UniqueExternalIDFromPartner \
   --audit-events \
   id="EXAMPLE3-0f1f-4a85-9664-d50a3EXAMPLE",eventData='"{\"eventVersion\":\"0.01\",\"eventSource\":\"custom1.domain.com\", ...
   \}"' \
   id="EXAMPLE7-a999-486d-b241-b33a1EXAMPLE",eventData='"{\"eventVersion\":\"0.02\",\"eventSource\":\"custom2.domain.com\", ...
   \}"',eventDataChecksum="EXAMPLE6e7dd61f3ead...93a691d8EXAMPLE"
   ```

   The following example command adds the `--cli-input-json` parameter to specify a JSON file (`custom-events.json`) of event payload.

   ```
   aws cloudtrail-data put-audit-events \
   --channel-arn $channelArn \
   --external-id $UniqueExternalIDFromPartner \
   --cli-input-json file://custom-events.json \
   --region us-east-1
   ```

   The following are the sample contents of the example JSON file, `custom-events.json`.

   ```
   {
       "auditEvents": [
         {
           "eventData": "{\"version\":\"eventData.version\",\"UID\":\"UID\",
           \"userIdentity\":{\"type\":\"CustomUserIdentity\",\"principalId\":\"principalId\",
           \"details\":{\"key\":\"value\"}},\"eventTime\":\"2021-10-27T12:13:14Z\",\"eventName\":\"eventName\",
           \"userAgent\":\"userAgent\",\"eventSource\":\"eventSource\",
           \"requestParameters\":{\"key\":\"value\"},\"responseElements\":{\"key\":\"value\"},
           \"additionalEventData\":{\"key\":\"value\"},
           \"sourceIPAddress\":\"12.34.56.78\",\"recipientAccountId\":\"152089810396\"}",
           "id": "1"
         }
      ]
   }
   ```

You can verify that the integration is working, and CloudTrail is ingesting events from the source correctly, by running the [https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/get-channel.html](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/get-channel.html) command. The output of **get-channel** shows the most recent timestamp that CloudTrail received events.

```
aws cloudtrail get-channel --channel arn:aws:cloudtrail:us-east-1:01234567890:channel/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE
```
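If you script this check, you can read the most recent ingestion timestamp from the parsed **get-channel** output. The sketch below assumes the response contains an `IngestionStatus` object with a `LatestIngestionSuccessTime` field, per the `GetChannel` API response; the sample dict is illustrative only:

```python
from datetime import datetime

def latest_ingestion_time(get_channel_response: dict):
    """Return the most recent successful ingestion time from a parsed
    get-channel response, or None if no events have been ingested yet."""
    status = get_channel_response.get("IngestionStatus", {})
    value = status.get("LatestIngestionSuccessTime")
    return datetime.fromisoformat(value) if value else None

# Illustrative response fragment; the real response contains more fields.
response = {"IngestionStatus": {"LatestIngestionSuccessTime": "2023-10-27T10:55:55+00:00"}}
```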

## (Optional) Calculate a checksum value


The checksum that you specify as the value of `EventDataChecksum` in a `PutAuditEvents` request helps you verify the integrity of events: CloudTrail confirms that the event it receives matches the checksum. The checksum is a Base64-encoded SHA-256 digest of the serialized event, which you can calculate by running the following command.

```
printf %s '{"eventData": "{\"version\":\"eventData.version\",\"UID\":\"UID\",
        \"userIdentity\":{\"type\":\"CustomUserIdentity\",\"principalId\":\"principalId\",
        \"details\":{\"key\":\"value\"}},\"eventTime\":\"2021-10-27T12:13:14Z\",\"eventName\":\"eventName\",
        \"userAgent\":\"userAgent\",\"eventSource\":\"eventSource\",
        \"requestParameters\":{\"key\":\"value\"},\"responseElements\":{\"key\":\"value\"},
        \"additionalEventData\":{\"key\":\"value\"},
        \"sourceIPAddress\":\"source_IP_address\",
        \"recipientAccountId\":\"recipient_account_ID\"}",
        "id": "1"}' \
 | openssl dgst -binary -sha256 | base64
```

The command returns the checksum. The following is an example.

```
EXAMPLEDHjkI8iehvCUCWTIAbNYkOgO/t0YNw+7rrQE=
```

The checksum value becomes the value of `EventDataChecksum` in your `PutAuditEvents` request. If the checksum doesn't match the one calculated for the provided event, CloudTrail rejects the event with an `InvalidChecksum` error.

# Update a channel with the AWS CLI


This section describes how you can use the AWS CLI to update a channel for a CloudTrail Lake integration. You can run the `update-channel` command to update the name of the channel or to specify a different destination event data store. You cannot update the source of a channel.

When you run the command, the `--channel` parameter is required.

The following is an example that demonstrates how to update the channel name and destination.

```
aws cloudtrail update-channel \
--channel arn:aws:cloudtrail:us-east-1:123456789012:channel/EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE \
--name "new-channel-name" \
--destinations '[{"Type": "EVENT_DATA_STORE", "Location": "EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE"}, {"Type": "EVENT_DATA_STORE", "Location": "EXAMPLEg922-5n2l-3vz1-apqw8EXAMPLE"}]'
```

# Delete a channel to delete an integration with the AWS CLI
Delete a channel with the AWS CLI

This section describes how to run the `delete-channel` command to delete the channel for a CloudTrail Lake integration. Delete a channel when you want to stop ingesting partner events or other activity events from outside of AWS. The ARN or channel ID (the ARN suffix) of the channel that you want to delete is required.

The following example shows how to delete the channel.

```
aws cloudtrail delete-channel \
--channel EXAMPLE8-0558-4f7e-a06a-43969EXAMPLE
```

## Additional information about integration partners


The table in this section provides the source name for each integration partner and identifies the integration type (direct or solution).

The information in the **Source name** column is required when calling the `CreateChannel` API. You specify the source name as the value for the `Source` parameter.


****  

| Partner name (console) | Source name (API) | Integration type | 
| --- | --- | --- | 
| My custom integration | Custom | solution | 
| Cloud Storage Security | CloudStorageSecurityConsole | solution | 
| Clumio | Clumio | direct | 
| CrowdStrike | CrowdStrike | solution | 
| CyberArk | CyberArk | solution | 
| GitHub | GitHub | solution | 
| Kong Inc | KongGatewayEnterprise | solution | 
| LaunchDarkly | LaunchDarkly | direct | 
| Netskope | NetskopeCloudExchange | solution | 
| Nordcloud, an IBM Company | IBMMulticloud | direct | 
| MontyCloud | MontyCloud | direct | 
| Okta | OktaSystemLogEvents | solution | 
| One Identity | OneLogin | solution | 
| Shoreline.io | Shoreline | solution | 
| Snyk.io | Snyk | direct | 
| Wiz | WizAuditLogs | solution | 
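If you script `CreateChannel` calls, the table above can be captured as a lookup from console partner name to `Source` value. The sketch below includes only a subset of the rows; copy the values you need from the current table:

```python
# Source-name lookup for the CreateChannel Source parameter.
# Subset of the partner table above; verify against the current table before use.
PARTNER_SOURCE_NAMES = {
    "My custom integration": "Custom",
    "Cloud Storage Security": "CloudStorageSecurityConsole",
    "Kong Inc": "KongGatewayEnterprise",
    "Netskope": "NetskopeCloudExchange",
    "Okta": "OktaSystemLogEvents",
    "Wiz": "WizAuditLogs",
}

def source_name(partner: str) -> str:
    """Return the CreateChannel Source value for a console partner name."""
    return PARTNER_SOURCE_NAMES[partner]
```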

### View partner documentation


You can learn more about a partner's integration with CloudTrail Lake by viewing their documentation.

**To view partner documentation**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Integrations**. 

1. From the **Integrations** page, choose **Available sources**, then choose **Learn more** for the partner whose documentation you want to view. 

# CloudTrail Lake integrations event schema


The following table describes the required and optional schema elements that match those in CloudTrail event records. The contents of `eventData` are provided by your events; other fields are provided by CloudTrail after ingestion.

CloudTrail event record contents are described in more detail in [CloudTrail record contents for management, data, and network activity events](cloudtrail-event-reference-record-contents.md).
+ [Fields that are provided by CloudTrail after ingestion](#fields-cloudtrail)
+ [Fields that are provided by your events](#fields-event)<a name="fields-cloudtrail"></a>

The following fields are provided by CloudTrail after ingestion:


| Field name | Input type | Requirement | Description | 
| --- | --- | --- | --- | 
| eventVersion | string | Required |  The event version.  | 
| eventCategory | string | Required |  The event category. For non-AWS events, the value is `ActivityAuditLog`.  | 
| eventType | string | Required |  The event type. For non-AWS events, the valid value is `ActivityLog`.  | 
| eventID | string | Required | A unique ID for an event. | 
| eventTime |  string  | Required |  Event timestamp, in `yyyy-MM-DDTHH:mm:ss` format, in Universal Coordinated Time (UTC).  | 
| awsRegion | string | Required |  The AWS Region where the `PutAuditEvents` call was made.  | 
| recipientAccountId | string | Required |  Represents the account ID that received this event. CloudTrail populates this field by calculating it from event payload.  | 
| addendum |  -  | Optional |  Shows information about why event processing was delayed. If information was missing from an existing event, the addendum block includes the missing information and a reason for why it was missing.  | 
| addendum.reason | string | Optional |  The reason that the event or some of its contents were missing.  | 
| addendum.updatedFields | string | Optional |  The event record fields that are updated by the addendum. This is only provided if the reason is `UPDATED_DATA`.  | 
| addendum.originalUID | string | Optional |  The original event UID from the source. This is only provided if the reason is `UPDATED_DATA`.  | 
| addendum.originalEventID | string | Optional |  The original event ID. This is only provided if the reason is `UPDATED_DATA`.  | 
| metadata |  -  | Required |  Information about the channel that the event used.  | 
| metadata.ingestionTime | string | Required |  The timestamp when the event was processed, in `yyyy-MM-DDTHH:mm:ss` format, in Universal Coordinated Time (UTC).  | 
| metadata.channelArn | string | Required |  The ARN of the channel that the event used.  | <a name="fields-event"></a>

The following fields are provided by customer events:


| Field name | Input type | Requirement | Description | 
| --- | --- | --- | --- | 
| eventData |  -  | Required | The audit data sent to CloudTrail in a PutAuditEvents call. | 
|  version  | string | Required |  The version of the event from its source. Length constraints: Maximum length of 256.  | 
|  userIdentity  |  -  | Required |  Information about the user who made a request.  | 
|  type  |  string  | Required |  The type of user identity. Length constraints: Maximum length of 128.  | 
|  principalId  |  string  | Required |  A unique identifier for the actor of the event. Length constraints: Maximum length of 1024.  | 
|  details  |  JSON object  | Optional |  Additional information about the identity.  | 
|  userAgent  |  string  | Optional |  The agent through which the request was made. Length constraints: Maximum length of 1024.  | 
|  eventSource  |  string  | Required |  This is the partner event source, or the custom application about which events are logged. Length constraints: Maximum length of 1024.  | 
|  eventName  |  string  | Required |  The requested action, one of the actions in the API for the source service or application. Length constraints: Maximum length of 1024.  | 
|  eventTime  |  string  | Required |  Event timestamp, in `yyyy-MM-DDTHH:mm:ss` format, in Coordinated Universal Time (UTC).  | 
|  UID  | string | Required |  The UID value that identifies the request. The service or application that is called generates this value. Length constraints: Maximum length of 1024.  | 
|  requestParameters  |  JSON object  | Optional |  The parameters, if any, that were sent with the request. This field has a maximum size of 100 kB, and content exceeding the limit is rejected.  | 
|  responseElements  |  JSON object  | Optional |  The response element for actions that make changes (create, update, or delete actions). This field has a maximum size of 100 kB, and content exceeding the limit is rejected.  | 
|  errorCode  | string | Optional |  A string representing an error for the event. Length constraints: Maximum length of 256.  | 
|  errorMessage  | string | Optional |  The description of the error. Length constraints: Maximum length of 256.  | 
|  sourceIPAddress  |  string  | Optional |  The IP address from which the request was made. Both IPv4 and IPv6 addresses are accepted.  | 
|  recipientAccountId  | string | Required |  Represents the account ID that received this event. The account ID must be the same as the AWS account ID that owns the channel.  | 
|  additionalEventData  |  JSON object  | Optional |  Additional data about the event that was not part of the request or response. This field has a maximum size of 28 kB, and content exceeding that limit is rejected.  | 

The following example shows the hierarchy of schema elements that match those in CloudTrail event records.

```
{
    "eventVersion": String,
    "eventCategory": String,
    "eventType": String,
    "eventID": String,
    "eventTime": String,
    "awsRegion": String,
    "recipientAccountId": String,
    "addendum": {
       "reason": String,
       "updatedFields": String,
       "originalUID": String, 
       "originalEventID": String
    },
    "metadata" : { 
       "ingestionTime": String,
       "channelARN": String
    },
    "eventData": {
        "version": String,
        "userIdentity": {
          "type": String,
          "principalId": String,
          "details": {
             JSON
          }
        }, 
        "userAgent": String,
        "eventSource": String,
        "eventName": String,
        "eventTime": String,
        "UID": String,
        "requestParameters": {
           JSON
        },
        "responseElements": {
           JSON
        },
        "errorCode": String,
        "errorMessage": String,
        "sourceIPAddress": String,
        "recipientAccountId": String,
        "additionalEventData": {
           JSON
        }
    }
}
```
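As a sketch of how the required fields fit together, the following assembles a minimal `eventData` payload and checks it against the requirements in the table above. The field values, the 100 kB check, and the helper names are illustrative assumptions, not part of any AWS SDK:

```python
import json

# Required eventData fields from the customer event schema above.
REQUIRED_FIELDS = {"version", "userIdentity", "eventSource", "eventName",
                   "eventTime", "UID", "recipientAccountId"}

def build_audit_event(account_id):
    """Assemble a minimal customer event payload (placeholder values)."""
    return {
        "version": "1.0",
        "userIdentity": {"type": "CustomUser", "principalId": "user-123"},
        "eventSource": "MyCustomApp",
        "eventName": "UpdateRecord",
        "eventTime": "2024-01-01T12:00:00Z",
        "UID": "request-456",
        "recipientAccountId": account_id,
        # Optional; content over 100 kB is rejected by CloudTrail.
        "requestParameters": {"recordId": "rec-789"},
    }

def validate(event):
    """Check required fields and the requestParameters size limit."""
    missing = REQUIRED_FIELDS - event.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    params = json.dumps(event.get("requestParameters", {}))
    if len(params.encode()) > 100 * 1024:
        raise ValueError("requestParameters exceeds 100 kB")
    return True
```

A payload like this would be sent to CloudTrail in the `eventData` portion of a `PutAuditEvents` call.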

# CloudTrail Lake dashboards
Dashboards

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

You can use CloudTrail Lake dashboards to see event trends for the event data stores in your account. CloudTrail Lake offers the following types of dashboards:
+ **Managed dashboards** – You can view a managed dashboard to see event trends for an event data store that collects management events, data events, or Insights events. These dashboards are automatically available to you and are managed by CloudTrail Lake. CloudTrail offers 14 managed dashboards to choose from. You can manually refresh managed dashboards. You cannot modify, add, or remove the widgets for these dashboards; however, you can save a managed dashboard as a custom dashboard if you want to modify the widgets or set a refresh schedule.
+ **Custom dashboards** – Custom dashboards allow you to query events in any event data store type. You can add up to 10 widgets to a custom dashboard. You can manually refresh a custom dashboard, or you can set a refresh schedule.
+ **Highlights dashboards** – Enable the Highlights dashboard to view an at-a-glance overview of the AWS activity collected by the event data stores in your account. The Highlights dashboard is managed by CloudTrail and includes widgets that are relevant to your account. The widgets shown on the Highlights dashboard are unique to each account. These widgets could surface detected abnormal activity or anomalies. For example, your Highlights dashboard could include the **Total cross-account access widget**, which shows if there is an increase in abnormal cross-account activity. CloudTrail updates the Highlights dashboard every 6 hours. The dashboard shows the last 24 hours of data from the last update.

Each dashboard consists of one or more widgets and each widget provides a graphical representation of the results of a SQL query. To view the query for a widget, choose **View and edit query** to open up the query editor.

When a dashboard is refreshed, CloudTrail Lake runs queries to populate the dashboard's widgets. Because running queries incurs costs, CloudTrail asks you to acknowledge the costs associated with running queries. For more information about CloudTrail pricing, see [CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

**Topics**
+ [Prerequisites](#lake-dashboard-prerequisites)
+ [Limitations](#lake-dashboard-limitations)
+ [Region support](#lake-dashboard-regions)
+ [Required permissions](#lake-dashboard-permissions)
+ [View a managed dashboard with the CloudTrail console](lake-dashboard-managed.md)
+ [Enable the Highlights dashboard with the CloudTrail console](lake-dashboard-highlights.md)
+ [Disable the Highlights dashboard with the CloudTrail console](lake-dashboard-highlights-disable.md)
+ [Create a custom dashboard with the CloudTrail console](lake-dashboard-custom.md)
+ [Set a refresh schedule for a custom dashboard with the CloudTrail console](lake-dashboard-refresh.md)
+ [Disable the refresh schedule for a custom dashboard with the CloudTrail console](lake-dashboard-refresh-disable.md)
+ [Change termination protection with the CloudTrail console](lake-dashboard-termination-protection.md)
+ [Delete a custom dashboard with the CloudTrail console](lake-dashboard-delete.md)
+ [Create, update, and manage dashboards with the AWS CLI](lake-dashboard-cli.md)

## Prerequisites


The following prerequisites apply to CloudTrail Lake dashboards:
+ To view and use Lake dashboards, you must create at least one CloudTrail Lake event data store. You can create event data stores using the console, AWS CLI, or SDKs. For information about creating an event data store using the console, see [Create an event data store for CloudTrail events with the console](query-event-data-store-cloudtrail.md). For information about creating an event data store using the AWS CLI, see [Create an event data store with the AWS CLI](lake-cli-create-eds.md).
+ You must have adequate permissions to view, create, update, and refresh dashboards. For more information, see [Required permissions](#lake-dashboard-permissions).

## Limitations


The following limitations apply to CloudTrail Lake dashboards:
+ You can only enable the Highlights dashboard for event data stores that exist in your account.
+ You can only view managed dashboards for event data stores that exist in your account.
+ For custom dashboards, you can only add sample widgets or create new widgets that query event data stores that exist in your account.
+ Delegated administrators for an AWS Organizations organization cannot view or manage dashboards that are owned by the management account.

## Region support


The CloudTrail Lake dashboards are supported in all AWS Regions where CloudTrail Lake is supported.

The **Activity summary** widget on the **Highlights** dashboard is supported in the following Regions:
+ Asia Pacific (Tokyo) Region (ap-northeast-1)
+ US East (N. Virginia) Region (us-east-1)
+ US West (Oregon) Region (us-west-2)

All other widgets are supported in all AWS Regions where CloudTrail Lake is supported.

For information about CloudTrail Lake supported Regions, see [CloudTrail Lake supported Regions](cloudtrail-lake-supported-regions.md).

## Required permissions


This section describes the required permissions for CloudTrail Lake dashboards and discusses two types of IAM policies:
+ Identity-based policies, which allow you to perform actions to create, manage, and delete dashboards.
+ Resource-based policies, which allow CloudTrail to run queries on your event data stores when a dashboard is refreshed, and to perform scheduled refreshes of custom dashboards and the Highlights dashboard on your behalf. When you create dashboards using the CloudTrail console, you are given the option to attach resource-based policies. You can also run the AWS CLI [`put-resource-policy`](lake-dashboard-cli-manage.md#lake-dashboard-cli-add-rbp) command to add a resource-based policy to your event data stores or dashboards.

### Identity-based policy requirements


Identity-based policies are JSON permissions policy documents that you can attach to an identity, such as an IAM user, group of users, or role. These policies control what actions users and roles can perform, on which resources, and under what conditions. To learn how to create an identity-based policy, see [Define custom IAM permissions with customer managed policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_create.html) in the *IAM User Guide*.

To view and manage CloudTrail Lake dashboards, you need one of the following policies:
+ The [AWSCloudTrail_FullAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html) managed policy.
+ The [AdministratorAccess](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AdministratorAccess.html) managed policy.
+ A custom policy that includes one or more of the specific permissions described in the sections that follow.

**Topics**
+ [Required permissions for creating dashboards](#lake-dashboard-permissions-identity-create)
+ [Required permissions for updating dashboards](#lake-dashboard-permissions-identity-update)
+ [Required permissions for refreshing dashboards](#lake-dashboard-permissions-identity-create)

#### Required permissions for creating dashboards


The following sample policy provides the minimum permissions required for creating dashboards. In the resource ARNs, replace the partition (`aws`), Region (`us-east-1`), account ID (`111111111111`), and *eds-id* with the values for your configuration.
+ `StartQuery` permission is required only if the request contains widgets. Provide `StartQuery` permissions for all event data stores included in a widget query.
+ `StartDashboardRefresh` permission is required only if the dashboard has a refresh schedule.
+ For the Highlights dashboard, the caller must have `StartQuery` permission on all the event data stores in the account.


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "cloudtrail:CreateDashboard",
                "cloudtrail:StartDashboardRefresh",
                "cloudtrail:StartQuery"
            ],
            "Resource": [
                "arn:aws:cloudtrail:us-east-1:111111111111:dashboard/*",
                "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
            ]
        }
    ]
}
```

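The ARN substitution described above can be sketched as a small helper; `dashboard_arns` is a hypothetical convenience function for illustration, not part of any AWS SDK:

```python
def dashboard_arns(partition, region, account_id, eds_id):
    """Build the dashboard and event data store ARNs used in the sample policy."""
    prefix = f"arn:{partition}:cloudtrail:{region}:{account_id}"
    return [f"{prefix}:dashboard/*", f"{prefix}:eventdatastore/{eds_id}"]

# Values matching the sample policy above:
arns = dashboard_arns("aws", "us-east-1", "111111111111", "eds-id")
```

The same two resource ARNs appear in the update and refresh policies that follow, so the same substitution applies there.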

#### Required permissions for updating dashboards


The following sample policy provides the minimum permissions required for updating dashboards. In the resource ARNs, replace the partition (`aws`), Region (`us-east-1`), account ID (`111111111111`), and *eds-id* with the values for your configuration.
+ `StartQuery` permission is required only if the request contains widgets. Provide `StartQuery` permissions for all event data stores included in a widget query.
+ `StartDashboardRefresh` permission is required only if the dashboard has a refresh schedule.
+ For the Highlights dashboard, the caller must have `StartQuery` permission on all the event data stores in the account.


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "cloudtrail:UpdateDashboard",
                "cloudtrail:StartDashboardRefresh",
                "cloudtrail:StartQuery"
            ],
            "Resource": [
                "arn:aws:cloudtrail:us-east-1:111111111111:dashboard/*",
                "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
            ]
        }
    ]
}
```


#### Required permissions for refreshing dashboards


The following sample policy provides the minimum permissions required for refreshing dashboards. In the resource ARNs, replace the partition (`aws`), Region (`us-east-1`), account ID (`111111111111`), *dashboard-name*, and *eds-id* with the values for your configuration.
+ For custom dashboards and the Highlights dashboard, the caller must have the `cloudtrail:StartDashboardRefresh` permission.
+ For managed dashboards, the caller must have the `cloudtrail:StartDashboardRefresh` permission and the `cloudtrail:StartQuery` permission for the event data store involved in the refresh.


```
{
    "Version":"2012-10-17",		 	 	 
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": [
                "cloudtrail:StartDashboardRefresh",
                "cloudtrail:StartQuery"
            ],
            "Resource": [
                "arn:aws:cloudtrail:us-east-1:111111111111:dashboard/dashboard-name",
                "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"
            ]
        }
    ]
}
```


### Resource-based policies for dashboards and event data stores


Resource-based policies are JSON policy documents that you attach to a resource. Examples of resource-based policies are IAM role trust policies and Amazon S3 bucket policies. For the resource where the policy is attached, the policy defines what actions a specified principal can perform on that resource and under what conditions. You must specify a principal in a resource-based policy. 

To run queries on a dashboard during a manual or scheduled refresh, you must attach a resource-based policy to every event data store that is associated with a widget on the dashboard. This allows CloudTrail Lake to run the queries on your behalf. When you create a custom dashboard, or enable the **Highlights** dashboard using the CloudTrail console, CloudTrail gives you the option to choose which event data stores you want to apply permissions to. For more information about the resource-based policy, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard).

To set a refresh schedule for a dashboard, you must attach a resource-based policy to the dashboard to allow CloudTrail Lake to refresh the dashboard on your behalf. When you set a refresh schedule for a custom dashboard, or enable the **Highlights** dashboard using the CloudTrail console, CloudTrail gives you the option to attach a resource-based policy to your dashboard. For an example policy, see [Resource-based policy example for a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-dashboards).

You can attach a resource-based policy using the CloudTrail console, the [AWS CLI](lake-dashboard-cli-manage.md#lake-dashboard-cli-add-rbp), or the [PutResourcePolicy](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_PutResourcePolicy.html) API operation.
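As a sketch, a resource-based policy document might be assembled programmatically as follows. The event data store ARN is a placeholder, and the statement shape (principal and action) is an illustrative assumption — see the linked policy examples for the exact statements CloudTrail requires. The `PutResourcePolicy` call itself (shown commented out) requires AWS credentials:

```python
import json

# Hypothetical event data store ARN; replace with your own.
eds_arn = "arn:aws:cloudtrail:us-east-1:111111111111:eventdatastore/eds-id"

# Illustrative policy allowing CloudTrail to run dashboard queries.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowLakeDashboardQueries",
        "Effect": "Allow",
        "Principal": {"Service": "cloudtrail.amazonaws.com"},
        "Action": "cloudtrail:StartQuery",
        "Resource": eds_arn,
    }],
}

# Attaching the policy (requires credentials and boto3):
# import boto3
# boto3.client("cloudtrail").put_resource_policy(
#     ResourceArn=eds_arn, ResourcePolicy=json.dumps(policy))
```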

### KMS key permissions to decrypt data in an event data store


If an event data store being queried is encrypted with a KMS key, ensure the KMS key policy allows CloudTrail to decrypt the data in the event data store. The following example policy statement allows the CloudTrail service principal to decrypt the event data store.

```
{
      "Sid": "AllowCloudTrailDecryptAccess",
      "Effect": "Allow",
      "Principal": {
          "Service": "cloudtrail.amazonaws.com"
        },
      "Action": "kms:Decrypt",
      "Resource": "*"
}
```

# View a managed dashboard with the CloudTrail console
View a managed dashboard

CloudTrail Lake provides managed dashboards that show event trends for event data stores that collect management events, data events, and Insights events. These dashboards are managed by CloudTrail Lake. You cannot modify, add, or remove the widgets for these dashboards; however, you can save a managed dashboard as a custom dashboard if you want to modify the widgets or set a refresh schedule.

**Note**  
You can only view managed dashboards for event data stores that exist in your account.

**To view a managed dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. From **Managed dashboards**, choose the dashboard you want to view. For more information, see [Available managed dashboards](lake-managed-dashboards.md).
**Note**  
The dropdown shows only relevant event data stores for the selected dashboard. For example, if you choose dashboards focused on data events, like S3 data events, the dropdown will only show event data stores that are configured to collect data events.

1.  Choose the event data store for the dashboard. CloudTrail will run queries on this dashboard when the dashboard is refreshed.

1. To view the query for a widget, choose **View and edit query** at the bottom of the widget.

1. Choose to filter the dashboard data by an **Absolute range** or **Relative range**. Choose **Absolute range** to select a specific date and time range. Choose **Relative range** to select a predefined time range or a custom range. By default, the dashboard displays event data for the past 24 hours.
**Note**  
CloudTrail Lake queries incur costs based on the amount of data scanned. To help control costs, you can filter on a narrower time range. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

1. Choose the refresh icon to populate the graphics for the dashboard's widgets. Each widget indicates the status of the refresh.

## Save a managed dashboard as a custom dashboard


You cannot modify a managed dashboard, but you can save a copy as a custom dashboard. This allows you to set a refresh schedule for the dashboard and modify the widgets.

**To save a managed dashboard as a custom dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. Choose the managed dashboard that you want to create a copy of.

1. Choose **Save as new dashboard**.

1. Provide a name to identify the dashboard.

1. (Optional) In the **Tags** section, you can add up to 50 tag key-value pairs to help you identify and sort your dashboards. For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1. For **Permissions**, choose the event data stores that you want to apply permissions to. Because CloudTrail runs queries to populate data for the widgets on a dashboard, CloudTrail requires permissions to run queries on the event data store associated with the dashboard's widgets. For each event data store selected in this step, CloudTrail attaches a resource-based policy to the event data store that allows CloudTrail to run queries. You can deselect an event data store if you do not want to allow permissions.

1. Choose **Create dashboard**.

After you create the custom dashboard, you can [add widgets](lake-dashboard-custom-widgets.md), [remove widgets](lake-dashboard-custom-widgets-remove.md), and [set a refresh schedule](lake-dashboard-refresh.md) for the dashboard.

# Available managed dashboards


This section describes the available managed dashboards and the widgets featured on each dashboard.

**Topics**
+ [Security monitoring dashboard](#lake-managed-dashboard-security)
+ [IAM activity dashboard](#lake-managed-dashboard-iam)
+ [User activity dashboard](#lake-managed-dashboard-user)
+ [Enriched events dashboard](#lake-managed-dashboard-enriched-events)
+ [Error analysis dashboard](#lake-managed-dashboard-error)
+ [EC2 activity dashboard](#lake-managed-dashboard-ec2)
+ [Organizations activity dashboard](#lake-managed-dashboard-organizations)
+ [Resource changes dashboard](#lake-managed-dashboard-resources)
+ [Data events overview dashboard](#lake-managed-dashboard-data)
+ [Lambda data events dashboard](#lake-managed-dashboard-lambda)
+ [DynamoDB data events dashboard](#lake-managed-dashboard-dynamodb)
+ [S3 data events dashboard](#lake-managed-dashboard-s3)
+ [Insights events dashboard](#lake-managed-dashboard-insights)
+ [Management events dashboard](#lake-managed-dashboard-mgmt)
+ [Overview dashboard](#lake-managed-dashboard-overview)

## Security monitoring dashboard


This dashboard provides a centralized view of critical security-focused widgets, such as top access denied events, failed console login attempts and their associated IP addresses, root user console login attempts, destructive actions, and cross-account access. It supports quick incident detection and response to enhance your overall security posture.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Top access denied events**  
Tracks the most frequently occurring access-denied events, grouped by API.

**Failed ConsoleLogin attempts**  
Tracks the trend of failed console login attempts over time, with a breakdown of MFA versus non-MFA authenticated callers.

**Failed ConsoleLogin attempts by IP address**  
Tracks the IP addresses associated with failed console login attempts and displays the top offending IP addresses by failed login count.

**Root user ConsoleLogin attempts**  
Tracks the frequency of console login attempts made by root users over time.

**Destructive actions**  
Tracks the frequency of delete operations over time.

**Top cross-account access**  
Tracks the top cross-account activity by caller account ID and action.

**Users who disabled MFA**  
Tracks the most recent users who disabled MFA.

**Recent EC2 SecurityGroup and NetworkAcl changes**  
Tracks the most recent EC2 SecurityGroup and NetworkAcl changes.

**Recent EC2 SecurityGroup changes that allow public access**  
Tracks the most recent EC2 security groups that have rules allowing public (0.0.0.0/0) access.

**Potential CloudTrail disabling actions**  
Tracks recent actions that risk disrupting CloudTrail logging.

## IAM activity dashboard


This dashboard provides visibility into commonly used IAM APIs, API errors, changes to IAM entities, and top caller IP addresses, enabling the identification of unintended IAM actions and compliance issues.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Top IAM APIs**  
Tracks the most frequently used IAM APIs.

**Top IAM callers**  
Tracks the most frequent IAM API callers.

**IAM success vs failure trend**  
Tracks the trend of successful and failed IAM API calls over time.

**Top IAM API errors**  
Tracks the most frequent errors in calling IAM APIs.

**Top AccessDenied IAM APIs**  
Tracks the most frequent IAM API calls that failed with access denied errors.

**Top IP addresses of IAM calls**  
Tracks the top source IP addresses from which IAM API calls were made.

**Recent IAM policy changes**  
Tracks the most recent changes to IAM policies, categorized by the specific IAM API operation that facilitated the change, the IAM resource (user, role, or group) associated with the policy change, and the policy name or ARN that was used.

**Recent IAM user changes**  
Tracks the most recent changes to IAM users, categorized by the specific IAM API that facilitates user management, the IAM user affected by the change, and the event time.

**Top assumed IAM roles**  
Tracks the most frequently assumed IAM roles.

## User activity dashboard


This dashboard provides visibility into user activity trends, insights into key areas such as top active users, user traffic patterns, users with access denied errors, recent user operations, users who performed destructive activities and IAM policy changes, as well as privileged user actions. It helps detect unintended user actions and security risks.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**User activity trends by user ARN**  
Tracks the user activity trend over time by user ARN.

**User activity trends by API**  
Tracks the user activity trend over time by API.

**Most recent user activity**  
Tracks the most recent user actions. 

**Top users with errors**  
Tracks the users that have the highest number of errors.

**Top users with AccessDenied errors**  
Tracks the users that have the highest number of AccessDenied errors. 

**Top users making destructive actions**  
Tracks the users that are making the highest number of destructive actions. 

**Top users changing IAM policies**  
Tracks the IAM users who are frequently performing changes to IAM policies.

**Top actions performed by potential IAM privileged users**  
Tracks the most frequent actions by highly privileged IAM users, such as administrators.

## Enriched events dashboard


This dashboard provides insights into trends across tagged resources, principal activities, and AWS global condition keys. These insights help you analyze the most frequent resource and principal tag distributions, as well as frequently used global condition keys in role sessions, requests, and principals in the request context.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Enriched events over time**  
Tracks the count of enriched events over time.

**Most frequent resource tag key value pairs**  
Displays the most frequently used resource tag key-value pairs across enriched events. 

**Most frequent resource tag key value pairs with associated resources and users**  
Displays the most frequently used resource tag key-value pairs, showing which resources use these tags and which users are associated with them.

**Most frequent principal tag key value pairs**  
Displays the most frequently used principal tag key-value pairs across enriched events.

**Most frequent access denied actions grouped by principal tag key value pairs**  
Displays the most frequent access-denied actions grouped by principal tag key-value pairs across enriched events.

**Most frequent principal properties in IAM global condition keys**  
Displays the most frequently used IAM global condition keys for principal properties, showing their key-value pairs and counts across all events.

**Most frequent request properties in IAM global condition keys**  
Displays the most frequently used IAM global condition keys for request properties, showing their key-value pairs and counts across all events.

**Most frequent role session properties in IAM global condition keys**  
Displays the most frequently used IAM global condition keys for role session properties, showing their key-value pairs and counts across all events.

## Error analysis dashboard


This dashboard provides comprehensive insights into error trends across services, APIs, users, error codes, and throttled APIs. The visibility enables prompt identification and troubleshooting of potential availability issues for optimal system performance.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Error count by service**  
Tracks the error count of activities by service.

**Error count by API**  
Tracks the error count of activities by API.

**Top errors by error code**  
Tracks the most frequent errors by error code.

**Top errors by error message**  
Tracks the most frequent errors by error message.

**Top AccessDenied errors by API**  
Tracks the APIs with the most frequently reported access denied errors.

**Top throttled errors by API**  
Tracks the APIs with the most frequently reported throttled errors.

**Top users with errors**  
Tracks the users with the most frequently reported errors.

## EC2 activity dashboard


This dashboard provides comprehensive visibility into EC2 management activities, like API trends, access errors, top instance launchers, security changes, and network modifications. The insights help identify security risks and operational issues.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**EC2 instance management activity overview**  
Provides an overview of EC2 instance management activities over a specified time period, highlighting key operations such as launches, stops, and terminations.

**EC2 API success vs failure trends**  
Tracks the trend of success and failed EC2 API calls over time.

**Top EC2 errors**  
Tracks the most frequent error codes that occur during EC2 API calls.

**Top EC2 AccessDenied events**  
Tracks EC2 APIs with the most access denied errors.

**Top users launching EC2 instances**  
Tracks the users who are the most active in launching new EC2 instances.

**Recent EC2 SecurityGroup and NetworkInterface changes**  
Tracks the most recent EC2 security group and network interface changes. 

**Recent VPC management and route table changes**  
Tracks the most recent VPC management activities and route table changes. 

**Recent EC2 actions by root user**  
Tracks the most recent EC2 actions performed by root users with highly privileged permissions.

## Organizations activity dashboard


Designed for organization event data stores, this dashboard offers visibility into organizational activities and trends, including insights on active members, account management, access patterns, policy changes, and top services and APIs utilized.

This dashboard is available for organization event data stores and includes the following widgets:

**Activity trend in the organization**  
Tracks the overall activity trend across the entire AWS Organizations organization over time, providing visibility into periods of high or low activity levels.

**Member account management summary**  
Tracks the distribution of member account management activities within the organization, categorized based on the counts of each activity type.

**Most used services across organization**  
Tracks the AWS services that have been utilized the most across the organization.

**Most active accounts by service**  
Tracks the most active accounts utilizing an AWS service across the organization.

**Most used APIs across organization**  
Highlights the AWS APIs that have been invoked most frequently across the entire organization.

**Most active member accounts**  
Tracks the member accounts within the organization that have exhibited the highest count of activity.

**Access denied errors trend across the organization**  
Tracks the pattern of access denied errors occurring within the organization over time.

**Accounts with most access denied errors**  
Tracks the accounts within the organization that have experienced the highest number of access denied errors.

**Recent service control policy changes**  
Tracks the most recent changes made to service control policies (SCPs) within the organization.

## Resource changes dashboard


This dashboard provides a comprehensive view of resource management activities, monitoring trends in provisioning, deletion, and modifications across services. It highlights critical changes, including those made through CloudFormation, manually, and to policies like S3 bucket and KMS access.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Resource creation and deletion trends**  
Tracks the creation and deletion of resources within the account over time.

**Top users performing resource creation**  
Tracks the users who are most actively creating new resources.

**Top APIs used for resource creation**  
Tracks the APIs that are most frequently used for creating new resources within the account.

**Top APIs used for resource deletion**  
Tracks the APIs that are most frequently used for deleting resources within the account.

**Most recent resources created outside CloudFormation**  
Tracks new resources created outside of CloudFormation governance, emphasizing changes not managed through CloudFormation templates.

**Most recent resource changes made using console**  
 Tracks the most recent changes made to resources via the AWS Management Console.

**Most recent S3 bucket access changes**  
Tracks the most recent S3 bucket access changes. 

**Most recent KMS key access changes**  
Tracks the most recent KMS key policy changes. 

## Data events overview dashboard


This dashboard offers a centralized view of data events in the event data store, including overall activity trends, top services, APIs, regions, throttled data plane APIs, and leading data plane users. This dashboard helps you monitor data plane API activity for auditing and troubleshooting.

This dashboard is available for event data stores that collect data events and includes the following widgets:

**Overall data events trend**  
Tracks the trend in overall data events occurring within the account over time.

**Top services generating data events**  
Tracks the services generating the highest volume of data activity within the account.

**Top APIs generating data events**  
Tracks the APIs generating the highest volume of data activity within the account.

**Top regions generating data events**  
Tracks the regions generating the highest volume of data activity within the account.

**Top throttled data plane APIs**  
Tracks the data plane APIs that are experiencing frequent throttling within the account.

**Top users of data plane APIs**  
Tracks the users who make the most data plane API calls across the account.

## Lambda data events dashboard


This dashboard provides visibility into Lambda data plane API activity, including top users, frequently invoked functions, and common API errors. These insights help you audit Lambda usage, detect abnormalities, and mitigate operational or security risks.

This dashboard is available for event data stores that collect Lambda data events and includes the following widgets:

**Lambda data plane API activity**  
Tracks the trend in Lambda data plane API activity within the account over time.

**Lambda invocations success vs failure trend**  
Tracks the trend of success and failed Lambda invocations over time.

**Top users of Lambda invocations**  
Tracks the users who make the most invocations of Lambda functions across the account.

**Top invoked Lambda functions**  
Tracks the Lambda functions that are invoked most frequently within the account.

**Top 10 Lambda Invoke API errors**  
Tracks the top 10 errors encountered during Lambda Invoke API calls.

**Most throttled users of Lambda invocations**  
Tracks the users who experience the highest number of throttling events for Lambda invocations.

## DynamoDB data events dashboard


This dashboard provides visibility into DynamoDB data plane API activity, including usage trends, top APIs, and throttling patterns involving users and tables. These insights help you audit DynamoDB usage, detect abnormalities, and mitigate operational or security risks.

This dashboard is available for event data stores that collect DynamoDB data events and includes the following widgets:

**DynamoDB account data activity**  
Tracks the trend in DynamoDB data events occurring within the account over time.

**DynamoDB data plane APIs success vs failure trend**  
Tracks the trend of success and failed DynamoDB data plane API calls over time.

**Top 10 DynamoDB data plane APIs**  
Lists the top 10 DynamoDB data plane API calls.

**Top users of DynamoDB data plane APIs**  
Tracks the users who make the highest number of calls to DynamoDB data plane APIs within the account.

**Top 10 DynamoDB data plane API errors**  
Tracks the top 10 errors in calling DynamoDB data plane APIs.

**Most throttled users of DynamoDB data plane APIs**  
Tracks the users with most frequent throttling when calling DynamoDB data plane APIs.

**Top throttled DynamoDB data plane APIs**  
Tracks the DynamoDB data plane APIs that are experiencing frequent throttling within the account.

**Top throttled DynamoDB tables**  
Tracks the DynamoDB tables experiencing the highest rates of throttling within the account.

## S3 data events dashboard


This dashboard provides visibility into S3 data plane API activity, including usage trends, most accessed S3 objects, top S3 users, and top S3 actions. These insights help you audit S3 usage, detect abnormalities, and mitigate operational or security risks.

This dashboard is available for event data stores that collect Amazon S3 data events and includes the following widgets:

**S3 account activity**  
Tracks S3 account activity.

**Most accessed objects**  
Lists the most accessed S3 objects.

**S3 top users**  
Tracks the top S3 users. 

**Top S3 actions**  
Tracks the top S3 actions.

## Insights events dashboard


This dashboard provides visibility into the overall breakdown of Insights events by type, as well as the top users and services generating these event types. Additionally, it shows the daily count of Insights events and a 30-day historical view of Insights metrics.

**Note**  
After you enable CloudTrail Insights for the first time on the source event data store, it can take up to 7 days for CloudTrail to deliver the first Insights event, if unusual activity is detected.
The **Insights Events** dashboard only displays information about the Insights events collected by the selected event data store, which is determined by the configuration of the source event data store. For example, if you configure the source event data store to enable Insights events on `ApiCallRateInsight` but not `ApiErrorRateInsight`, you won't see information about Insights events on `ApiErrorRateInsight`.

This dashboard is available for event data stores that collect Insights events and includes the following widgets:

**Insight types**  
Tracks events by Insights type.

**Insights by date**  
Tracks Insights events by date. 

**API call rate Insights by event source**  
Tracks API call rate Insights by event source. To view data for this widget, your Insights event data store must be configured to collect Insights on API call rate.

**API error rate Insights by event source**  
Tracks API error rate Insights by event source. To view this widget, your Insights event data store must be configured to collect Insights on API error rate.

**Insights by top users**  
Lists the top users with requests resulting in Insights events.

**Insights events**  
Lists recent Insights events.

## Management events dashboard


This dashboard highlights insights on access denied events, destructive actions, console sign-in events, top errors by user, TLS version usage, and outdated TLS calls by user.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Top access denied events**  
Tracks the top events that resulted in access denied errors.

**Top errors by user**  
Tracks the top errors by user.

**Console sign-in events**  
Shows console sign-in events.

**Destructive actions**  
Tracks events that resulted in destructive actions.

**TLS version**  
Shows the TLS versions.

**Outdated TLS calls by user**  
Tracks calls using outdated TLS versions by user.

## Overview dashboard


This dashboard provides an at-a-glance summary of account activity, including read and write trends, top errors, most active Regions, top services, most throttled events, and top users.

This dashboard is available for event data stores that collect management events and includes the following widgets:

**Account activity**  
Tracks read and write activity for your account. 

**Top errors**  
Lists the most frequent errors.

**Most active regions**  
Shows the most active AWS Regions.

**Top services**  
Shows the top services.

**Most throttled events**  
Lists the most throttled events.

**Top users**  
Lists the top users.

# Enable the Highlights dashboard with the CloudTrail console

Enable the Highlights dashboard to view an at-a-glance overview of the AWS activity collected by the event data stores in your account. The Highlights dashboard is managed by CloudTrail and includes widgets that are relevant to your account. The widgets shown on the Highlights dashboard are unique to each account. These widgets could surface detected abnormal activity or anomalies. For example, your Highlights dashboard could include the **Total cross-account access widget**, which shows if there is an increase in abnormal cross-account activity.

CloudTrail updates the Highlights dashboard every 6 hours. The dashboard shows the last 24 hours of data from the last update.

**Note**  
You can only enable the Highlights dashboard for event data stores that exist in your account.  
You cannot set a refresh schedule for the Highlights dashboard, or add or remove widgets.

## To enable the Highlights dashboard


Use the following procedure to enable the Highlights dashboard.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Highlights** tab.

1. Because running queries incurs CloudTrail charges, CloudTrail asks you to review the cost information before enabling the **Highlights** dashboard. For information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

   Choose **Agree and enable Highlights** to enable the Highlights dashboard.

1. For **Permissions**, choose the event data stores that you want to apply permissions to. CloudTrail requires permissions to run queries on your event data stores and refresh the dashboard on your behalf. To provide permissions, CloudTrail attaches a default resource-based policy to each event data store selected in this step to allow CloudTrail to run queries on the event data store. CloudTrail attaches a resource-based policy to the dashboard to allow CloudTrail to refresh the dashboard every 6 hours.

   You can modify the resource-based policy for an event data store from its details page. You can modify the resource-based policy for a dashboard by selecting **Edit policy** from the **Actions** menu for the dashboard.

1. Choose **Confirm**.

 When you enable the **Highlights** dashboard, termination protection is automatically enabled. Termination protection protects a dashboard from being accidentally deleted. You'll need to disable termination protection if you want to disable the dashboard.

# Disable the Highlights dashboard with the CloudTrail console

This section describes how to disable the Highlights dashboard. Because termination protection is automatically enabled for the Highlights dashboard, you'll need to first disable termination protection and then disable the Highlights dashboard.

**To disable the Highlights dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Highlights** tab.

1. From **Actions**, choose **Change termination protection**.

1. Choose **Disabled**.

1. Choose **Save**.

1. From **Actions**, choose **Disable Highlights**.

# Create a custom dashboard with the CloudTrail console

You can create custom dashboards and add up to 10 widgets to each custom dashboard. You can choose to add sample widgets or create new widgets from SQL queries.

After you're done adding widgets, you can manually refresh the dashboard or set a refresh schedule.

**To create a custom dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. Choose **Build my own dashboard**.

1. Provide a dashboard name to identify your dashboard.

1. For **Permissions**, choose the event data stores that you want to apply permissions to. Because CloudTrail runs queries to populate data for the widgets on a dashboard, CloudTrail requires permissions to run queries on the event data stores associated with the dashboard's widgets. For each event data store selected in this step, CloudTrail attaches a resource-based policy to the event data store that allows CloudTrail to run queries on the event data store for this dashboard.

1. (Optional) In the **Tags** section, you can add up to 50 tag key-value pairs to help you identify and sort your dashboards. For more information about how you can use tags in AWS, see [Tagging AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html) in the *Tagging AWS Resources User Guide*.

1. Choose **Create dashboard**.

   Next, you can add widgets and [set a refresh schedule](lake-dashboard-refresh.md).

**Topics**
+ [Add a sample widget with the CloudTrail console](lake-dashboard-custom-widgets.md)
+ [Create a new widget from a SQL query with the CloudTrail console](lake-dashboard-custom-widgets-new.md)
+ [Remove a widget from a dashboard with the CloudTrail console](lake-dashboard-custom-widgets-remove.md)

# Add a sample widget with the CloudTrail console

This section describes how to add a sample widget to your dashboard. You can add a maximum of 10 widgets to a custom dashboard.

**Note**  
Sample widgets are limited to a single event data store that exists in your account. To query across multiple event data stores in your account, [create a new widget](lake-dashboard-custom-widgets-new.md).

**To add a sample widget to a dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. In **Custom dashboards**, choose the dashboard that you want to add a widget to.

1. From **Actions**, choose **Edit dashboard**.

1. From **Actions**, choose **Add sample widget**.

1. Choose the event data store you'd like to run the query on. You can only choose event data stores that exist in your account.

1. Choose the sample widget you'd like to add. By default, all sample widgets are shown. You can filter by a widget type (for example, IAM widgets).

1. Choose **View query** to view the query for the selected widget.

1. Choose **Add to dashboard** to add the widget to the dashboard.

1. Choose **Save** to save the dashboard.

# Create a new widget from a SQL query with the CloudTrail console

This section describes how to create a new widget by writing or pasting a SQL query and choosing a chart type. You can add a maximum of 10 widgets to a custom dashboard.

**To create a new widget from a SQL query**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. In **Custom dashboards**, choose the dashboard that you want to create a widget for.

1. From **Actions**, choose **Edit dashboard**.

1. From **Actions**, choose **Create new widget**.

1. Choose the event data store you'd like to run the query on. You can query across multiple event data stores as long as the event data stores exist in your account.

1. Write or copy the SQL query.

   You can also provide a natural language prompt in English and choose **Generate query** to produce a SQL query from your prompt. For more information, see [Create CloudTrail Lake queries from natural language prompts](lake-query-generator.md).

1. Choose **Run** to run the query and preview the query results.
**Note**  
When you run queries, you incur charges based on the amount of optimized and compressed data scanned. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` timestamps to queries.

1. Choose the **Visualizer** tab to select the chart type for the widget. You can choose from these chart types: table, bar chart, line chart, and pie chart.

1. Choose **Add to dashboard** to add the widget to the dashboard.

1. Choose **Save** to save the dashboard.
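As an illustrative sketch, a widget query constrained by `eventTime` might look like the following; the event data store ID and the timestamps are placeholders you would replace with your own values.

```
SELECT eventSource, COUNT(*) AS eventCount
FROM <event-data-store-ID>
WHERE eventTime > '2024-01-01 00:00:00'
  AND eventTime < '2024-01-08 00:00:00'
GROUP BY eventSource
ORDER BY eventCount DESC
LIMIT 10
```

Bounding the query with starting and ending `eventTime` timestamps limits the amount of data scanned, which helps control query charges.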

# Remove a widget from a dashboard with the CloudTrail console

This section describes how to remove a widget from a custom dashboard.

**To remove a widget from a dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. In **Custom dashboards**, choose the dashboard for which you want to remove a widget.

1. From **Actions**, choose **Edit dashboard**.

1. On the widget you want to remove, choose the remove icon (![\[Vertical ellipsis icon representing a menu or more options.\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/remove-icon.png)) and then choose **Remove**.

1. Choose **Save** to save the dashboard.

# Set a refresh schedule for a custom dashboard with the CloudTrail console

This section describes how to set a dashboard refresh schedule. You can set a refresh schedule to allow CloudTrail Lake to refresh a dashboard every 1 hour, 6 hours, 12 hours, or 24 hours (1 day).

When you set a refresh schedule using the CloudTrail console, CloudTrail attaches a resource-based policy to the dashboard that allows CloudTrail to refresh the dashboard on your behalf.

**To set a refresh schedule**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. In **Custom dashboards**, choose the dashboard that you want to set a refresh schedule for.

1. Choose the refresh frequency from the dropdown list.

1. To create a refresh schedule, CloudTrail attaches a resource-based policy to the dashboard to allow CloudTrail to refresh the dashboard on your behalf. Expand **Dashboard resource policy** to view the resource-based policy that CloudTrail will attach to the dashboard.

1. Because running queries incurs costs, CloudTrail asks you to confirm that you want CloudTrail to run queries for the scheduled frequency. Choose **Confirm** to set a refresh schedule.

# Disable the refresh schedule for a custom dashboard with the CloudTrail console

You can disable the refresh schedule if you no longer want CloudTrail to automatically refresh your dashboard, and instead wish to manually refresh your dashboard.

**To disable a refresh schedule**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  In the left navigation pane, under **Lake**, choose **Dashboard**. 

1. Choose the **Managed and custom dashboards** tab.

1. In **Custom dashboards**, choose the dashboard that you want to disable a refresh schedule for.

1. Choose **Disable refresh schedule** from the dropdown list.   
![\[Option for disabling refresh schedule\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/ct-lake-disable-schedule.png)

# Change termination protection with the CloudTrail console

Termination protection prevents a dashboard from being accidentally deleted. If you want to delete a custom dashboard, or disable the Highlights dashboard, you must first disable termination protection.

**To turn off termination protection**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Dashboard**.

1. Choose the dashboard you want to disable termination protection for.

1. From **Actions**, choose **Change termination protection**.

1. Choose **Disabled**.

1. Choose **Save**.

**To turn on termination protection**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Dashboard**.

1. Choose the dashboard you want to enable termination protection for.

1. From **Actions**, choose **Change termination protection**.

1. To turn on termination protection, choose **Enabled**.

1. Choose **Save**.

# Delete a custom dashboard with the CloudTrail console

This section describes how to delete a custom dashboard using the CloudTrail console.

**Note**  
You can't delete a dashboard if [termination protection](lake-dashboard-termination-protection.md) is enabled.

**To delete a dashboard**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, under **Lake**, choose **Dashboard**.

1. Choose the **Managed and custom dashboards** tab.

1. Choose the custom dashboard you want to delete.

1. From **Actions**, choose **Delete**.

1. Choose **Delete** to confirm you want to delete the dashboard.

# Create, update, and manage dashboards with the AWS CLI


This section describes the AWS CLI commands you can use to create, update, and manage your CloudTrail Lake dashboards.

When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the `--region` parameter with the command.

## Available commands for dashboards


Commands for creating and updating dashboards in CloudTrail Lake include:
+ `create-dashboard` to create a custom dashboard or enable the Highlights dashboard.
+ `update-dashboard` to update a custom dashboard or the Highlights dashboard.
+ `delete-dashboard` to delete a custom dashboard or the Highlights dashboard.
+ `get-dashboard` returns information about the specified dashboard.
+ `list-dashboards` lists all dashboards for your AWS account, or for the specified filter.
+ `start-dashboard-refresh` starts a refresh of the dashboard.
+ `get-resource-policy` gets the resource-based policy attached to the dashboard.
+ `put-resource-policy` attaches a resource-based policy to a dashboard to allow CloudTrail to refresh the dashboard asynchronously on your behalf. You also attach a resource-based policy to an event data store to allow CloudTrail to run queries on the event data store to populate the data for dashboard widgets.
+ `delete-resource-policy` removes the resource-based policy attached to a dashboard.
+ `add-tags` adds tags to identify the dashboard.
+ `remove-tags` removes tags from a dashboard.
+ `list-tags` lists tags for a dashboard.

For a list of available commands for CloudTrail Lake event data stores, see [Available commands for event data stores](lake-eds-cli.md#lake-eds-cli-commands).

For a list of available commands for CloudTrail Lake queries, see [Available commands for CloudTrail Lake queries](lake-queries-cli.md#lake-queries-cli-commands).

For a list of available commands for CloudTrail Lake integrations, see [Available commands for CloudTrail Lake integrations](lake-integrations-cli.md#lake-integrations-cli-commands).

**Topics**
+ [Create a dashboard with the AWS CLI](lake-dashboard-cli-create.md)
+ [Manage dashboards with the AWS CLI](lake-dashboard-cli-manage.md)
+ [Delete a dashboard with the AWS CLI](lake-dashboard-cli-delete.md)

# Create a dashboard with the AWS CLI


This section describes how to use the `create-dashboard` command to create a custom dashboard or enable the Highlights dashboard.

When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the `--region` parameter with the command.

 CloudTrail runs queries to populate the dashboard's widgets during a manual or scheduled refresh. CloudTrail must be granted permissions to run the `StartQuery` operation on each event data store associated with a dashboard widget. To provide permissions, run the `put-resource-policy` command to attach a resource-based policy to each event data store, or edit the event data store's policy on the CloudTrail console. For an example policy, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard). 
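As a rough sketch of the shape such an event data store policy takes (the ARN and account ID below are placeholders, and the exact statement may differ — refer to the linked example policy for the authoritative version):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudTrailDashboardQueries",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "cloudtrail:StartQuery",
      "Resource": "arn:aws:cloudtrail:us-east-1:111122223333:eventdatastore/EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "111122223333" }
      }
    }
  ]
}
```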

 To set a refresh schedule, CloudTrail must be granted permissions to run the `StartDashboardRefresh` operation to refresh the dashboard on your behalf. To provide permissions, run the `put-resource-policy` operation to attach a resource-based policy to the dashboard, or edit the dashboard's policy on the CloudTrail console. For an example policy, see [Resource-based policy example for a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-dashboards). 
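Similarly, a dashboard policy granting the refresh permission might look like the following sketch (again, the ARN and account ID are placeholders — refer to the linked example policy for the authoritative version):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudTrailDashboardRefresh",
      "Effect": "Allow",
      "Principal": { "Service": "cloudtrail.amazonaws.com" },
      "Action": "cloudtrail:StartDashboardRefresh",
      "Resource": "arn:aws:cloudtrail:us-east-1:111122223333:dashboard/exampleDashboard",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "111122223333" }
      }
    }
  ]
}
```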

**Topics**
+ [Create a custom dashboard with the AWS CLI](#lake-dashboard-cli-create-custom)
+ [Enable the Highlights dashboard with the AWS CLI](#lake-dashboard-cli-create-highlights)
+ [View properties for widgets](lake-widget-properties.md)

## Create a custom dashboard with the AWS CLI


The following procedure shows how to create a custom dashboard, attach the required resource-based policies to event data stores and the dashboard, and update the dashboard to set and enable a refresh schedule.

1. Run the `create-dashboard` to create a dashboard.

   When you create a custom dashboard, you can pass in an array with up to 10 widgets. A widget provides a graphical representation of the results for a query. Each widget consists of `ViewProperties`, `QueryStatement`, and `QueryParameters`.
   + `ViewProperties` – Specifies the properties for the view type. For more information, see [View properties for widgets](lake-widget-properties.md).
   + `QueryStatement` – The query CloudTrail runs when the dashboard is refreshed. You can query across multiple event data stores as long as the event data stores exist in your account.
   + `QueryParameters` – The following `QueryParameters` values are supported for custom dashboards: `$Period$`, `$StartTime$`, and `$EndTime$`. To use `QueryParameters`, place a `?` in the `QueryStatement` where you want to substitute the parameter. CloudTrail fills in the parameters when the query runs.

   The following example creates a dashboard with four widgets, one of each view type.
**Note**  
In this example, `?` is surrounded with single quotes because it is used with `eventTime`. Depending on your operating system, you might need to escape the single quotes. For more information, see [Using quotation marks and literals with strings in the AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-usage-parameters-quoting-strings.html).

   ```
   aws cloudtrail create-dashboard --name AccountActivityDashboard \
   --widgets '[
       {
         "ViewProperties": {
           "Height": "2",
           "Width": "4",
           "Title": "TopErrors",
           "View": "Table"
         },
         "QueryStatement": "SELECT errorCode, COUNT(*) AS eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' AND (errorCode is not null) GROUP BY errorCode ORDER BY eventCount DESC LIMIT 100",
         "QueryParameters": ["$StartTime$", "$EndTime$"]
       },
       {
         "ViewProperties": {
           "Height": "2",
           "Width": "4",
           "Title": "MostActiveRegions",
           "View": "PieChart",
           "LabelColumn": "awsRegion",
           "ValueColumn": "eventCount",
           "FilterColumn": "awsRegion"
         },
         "QueryStatement": "SELECT awsRegion, COUNT(*) AS eventCount FROM eds where eventTime > '?' and eventTime < '?' GROUP BY awsRegion ORDER BY eventCount LIMIT 100",
         "QueryParameters": ["$StartTime$", "$EndTime$"]
       },
       {
         "ViewProperties": {
           "Height": "2",
           "Width": "4",
           "Title": "AccountActivity",
           "View": "LineChart",
           "YAxisColumn": "eventCount",
           "XAxisColumn": "eventDate",
           "FilterColumn": "readOnly"
         },
         "QueryStatement": "SELECT DATE_TRUNC('?', eventTime) AS eventDate, IF(readOnly, 'read', 'write') AS readOnly, COUNT(*) as eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' GROUP BY DATE_TRUNC('?', eventTime), readOnly ORDER BY DATE_TRUNC('?', eventTime), readOnly",
         "QueryParameters": ["$Period$", "$StartTime$", "$EndTime$", "$Period$", "$Period$"]
       },
       {
         "ViewProperties": {
           "Height": "2",
           "Width": "4",
           "Title": "TopServices",
           "View": "BarChart",
           "LabelColumn": "service",
           "ValueColumn": "eventCount",
           "FilterColumn": "service",
           "Orientation": "Horizontal"
         },
         "QueryStatement": "SELECT REPLACE(eventSource, '.amazonaws.com') AS service, COUNT(*) AS eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' GROUP BY eventSource ORDER BY eventCount DESC LIMIT 100",
         "QueryParameters": ["$StartTime$", "$EndTime$"]
       }
     ]'
   ```

1. Create a separate file containing the resource-based policy needed for each event data store that is included in a widget's `QueryStatement`. Name the file *policy.json*.

   In the policy, replace *123456789012* with your account ID and *arn:aws:cloudtrail:us-east-1:123456789012:dashboard/exampleDashboard* with the ARN of the dashboard.

   For more information about resource-based policies for dashboards, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard).
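   As a starting point, *policy.json* might look like the following sketch, which mirrors the event data store policy used with `put-resource-policy` elsewhere in this guide; the account ID and dashboard ARN shown are placeholders to replace:

   ```
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "EDSPolicy",
         "Effect": "Allow",
         "Principal": { "Service": "cloudtrail.amazonaws.com" },
         "Action": "cloudtrail:StartQuery",
         "Condition": {
           "StringEquals": {
             "AWS:SourceArn": "arn:aws:cloudtrail:us-east-1:123456789012:dashboard/exampleDashboard",
             "AWS:SourceAccount": "123456789012"
           }
         }
       }
     ]
   }
   ```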

1. Run the `put-resource-policy` command to attach the policy. You can also update an event data store's resource-based policy on the CloudTrail console. 

   The following example attaches a resource-based policy to an event data store. 

   ```
   aws cloudtrail put-resource-policy \
   --resource-arn eds-arn \
   --resource-policy file://policy.json
   ```

1. Run the `put-resource-policy` command to attach a resource-based policy to the dashboard. For an example policy, see [Resource-based policy example for a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-dashboards).

   The following example attaches a resource-based policy to a dashboard. Replace *account-id* with your account ID and *dashboard-arn* with the ARN of the dashboard.

   ```
   aws cloudtrail put-resource-policy \
   --resource-arn dashboard-arn \
   --resource-policy '{"Version": "2012-10-17", "Statement": [{"Sid": "DashboardPolicy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "cloudtrail:StartDashboardRefresh", "Condition": { "StringEquals": { "AWS:SourceArn": "dashboard-arn", "AWS:SourceAccount": "account-id"}}}]}'
   ```

1. Run the `update-dashboard` command to set and enable a refresh schedule by configuring the `--refresh-schedule` parameter.

   The `--refresh-schedule` consists of the following optional parameters:
   + `Frequency` – The `Unit` and `Value` for the schedule.

     For custom dashboards, the unit can be `HOURS` or `DAYS`. When the unit is `HOURS`, the valid values are `1`, `6`, `12`, and `24`. When the unit is `DAYS`, the only valid value is `1`.
   + `Status` – Specifies whether the refresh schedule is enabled. Set the value to `ENABLED` to enable the refresh schedule, or to `DISABLED` to turn it off.
   + `TimeOfDay` – The time of day in UTC to run the schedule. For hourly frequencies, only the minutes portion is used. The default is 00:00.

   The following example sets a refresh schedule for every six hours and enables the schedule.

   ```
   aws cloudtrail update-dashboard --dashboard-id AccountActivityDashboard \
   --refresh-schedule '{"Frequency": {"Unit": "HOURS", "Value": 6}, "Status": "ENABLED"}'
   ```
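The way `QueryParameters` pair with the `?` placeholders in a `QueryStatement` can be sketched conceptually: each `?` is replaced, in order, by the value that the corresponding parameter resolves to at refresh time. This is an illustration of the substitution order only, not CloudTrail's implementation; the timestamp values are hypothetical:

```python
# Conceptual sketch: each '?' in the QueryStatement is replaced, in order,
# by the value the corresponding query parameter resolves to at refresh time
# (for example, $StartTime$ and $EndTime$ become concrete timestamps).
def substitute_params(query_statement, values):
    parts = query_statement.split("?")
    if len(parts) - 1 != len(values):
        raise ValueError("number of '?' placeholders must match number of values")
    out = parts[0]
    for value, rest in zip(values, parts[1:]):
        out += value + rest
    return out

stmt = "SELECT errorCode FROM eds WHERE eventTime > '?' AND eventTime < '?'"
print(substitute_params(stmt, ["2024-11-01 00:00:00", "2024-11-05 00:00:00"]))
```

This is why the number of entries in `QueryParameters` must match the number of `?` placeholders, and why `$Period$` is repeated in the line chart example.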

## Enable the Highlights dashboard with the AWS CLI


The following procedure shows how to create the Highlights dashboard, attach the required resource-based policies to your event data stores and the dashboard, and update the dashboard to set and enable the refresh schedule.

1. Run the `create-dashboard` command to create the Highlights dashboard. To create this dashboard, the `--name` must be `AWSCloudTrail-Highlights`.

   ```
   aws cloudtrail create-dashboard --name AWSCloudTrail-Highlights
   ```

1. For each event data store in your account, run the `put-resource-policy` command to attach a resource-based policy to the event data store. You can also update an event data store's resource-based policy on the CloudTrail console. For an example policy, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard).

   The following example attaches a resource-based policy to an event data store. Replace *account-id* with your account ID, *eds-arn* with the ARN of the event data store, and *dashboard-arn* with the ARN of the dashboard.

   ```
   aws cloudtrail put-resource-policy \
   --resource-arn eds-arn \
   --resource-policy '{"Version": "2012-10-17", "Statement": [{"Sid": "EDSPolicy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "cloudtrail:StartQuery", "Condition": { "StringEquals": { "AWS:SourceArn": "dashboard-arn", "AWS:SourceAccount": "account-id"}}}]}'
   ```

1. Run the `put-resource-policy` command to attach a resource-based policy to the dashboard. For an example policy, see [Resource-based policy example for a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-dashboards).

   The following example attaches a resource-based policy to a dashboard. Replace *account-id* with your account ID and *dashboard-arn* with the ARN of the dashboard.

   ```
   aws cloudtrail put-resource-policy \
   --resource-arn dashboard-arn \
   --resource-policy '{"Version": "2012-10-17", "Statement": [{"Sid": "DashboardPolicy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "cloudtrail:StartDashboardRefresh", "Condition": { "StringEquals": { "AWS:SourceArn": "dashboard-arn", "AWS:SourceAccount": "account-id"}}}]}'
   ```

1. Run the `update-dashboard` command to set and enable a refresh schedule by configuring the `--refresh-schedule` parameter. For the Highlights dashboard, the only valid `Unit` is `HOURS` and the only valid `Value` is `6`.

   ```
   aws cloudtrail update-dashboard --dashboard-id AWSCloudTrail-Highlights \
   --refresh-schedule '{"Frequency": {"Unit": "HOURS", "Value": 6}, "Status": "ENABLED"}'
   ```
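The refresh-schedule constraints described in the two procedures above can be summarized in a small validation sketch. This is illustrative only, not an AWS API; the `CUSTOM` and `HIGHLIGHTS` keys are names chosen here for the two dashboard types:

```python
# Documented refresh-schedule constraints:
#   custom dashboards:     Unit HOURS with Value 1, 6, 12, or 24; Unit DAYS with Value 1
#   Highlights dashboard:  Unit HOURS with Value 6 only
VALID_FREQUENCIES = {
    "CUSTOM": {"HOURS": {1, 6, 12, 24}, "DAYS": {1}},
    "HIGHLIGHTS": {"HOURS": {6}},
}

def is_valid_frequency(dashboard_type, unit, value):
    # Unknown dashboard types or units have no valid values.
    return value in VALID_FREQUENCIES.get(dashboard_type, {}).get(unit, set())
```

Checking a schedule locally before calling `update-dashboard` can save a round trip to the service when the frequency is invalid.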

# View properties for widgets


This section describes the configurable view properties for the four view types: table, line chart, pie chart, and bar chart.

**Topics**
+ [Table](#lake-widget-table)
+ [Line chart](#lake-widget-linechart)
+ [Pie chart](#lake-widget-piechart)
+ [Bar chart](#lake-widget-barchart)

## Table


The following example shows a widget configured as a table.

```
{
    "ViewProperties": {
       "Height": "2",
       "Width": "4",
       "Title": "TopErrors",
       "View": "Table"
    },
    "QueryStatement": "SELECT errorCode, COUNT(*) AS eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' AND (errorCode is not null) GROUP BY errorCode ORDER BY eventCount DESC LIMIT 100",
    "QueryParameters": ["$StartTime$", "$EndTime$"]
}
```

The following table describes the configurable view properties for a table.


| Parameter | Required | Value | 
| --- | --- | --- | 
|  `Height`  |  Yes  |  The height of the table in inches.  | 
|  `Width`  |  Yes  |  The width of the table in inches.  | 
|  `Title`  |  Yes  |  The title of the table.  | 
|  `View`  |  Yes  |  The widget view type. For a table, the value is `Table`.  | 

## Line chart


The following example shows a widget configured as a line chart.

```
{
    "ViewProperties": {
       "Height": "2",
       "Width": "4",
       "Title": "AccountActivity",
       "View": "LineChart",
       "YAxisColumn": "eventCount",
       "XAxisColumn": "eventDate",
       "FilterColumn": "readOnly"
    },
    "QueryStatement": "SELECT DATE_TRUNC('?', eventTime) AS eventDate, IF(readOnly, 'read', 'write') AS readOnly, COUNT(*) as eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' GROUP BY DATE_TRUNC('?', eventTime), readOnly ORDER BY DATE_TRUNC('?', eventTime), readOnly",
    "QueryParameters": ["$Period$", "$StartTime$", "$EndTime$", "$Period$", "$Period$"]
}
```

The following table describes the configurable view properties for a line chart.


| Parameter | Required | Value | 
| --- | --- | --- | 
|  `Height`  |  Yes  |  The height of the line chart in inches.  | 
|  `Width`  |  Yes  |  The width of the line chart in inches.  | 
|  `Title`  |  Yes  |  The title of the line chart.  | 
|  `View`  |  Yes  |  The widget view type. For a line chart, the value is `LineChart`.  | 
|  `YAxisColumn`  |  Yes  |  The field from the query results that you want to use for the Y axis column. For example, `eventCount`.  | 
|  `XAxisColumn`  |  Yes  |  The field from the query results that you want to use for the X axis column. For example, `eventDate`.  | 
|  `FilterColumn`  |  No  |  The field from the query results that you want to filter on. For example, `readOnly`.  | 

## Pie chart


The following example shows a widget configured as a pie chart.

```
{
    "ViewProperties": {
       "Height": "2",
       "Width": "4",
       "Title": "MostActiveRegions",
       "View": "PieChart",
       "LabelColumn": "awsRegion",
       "ValueColumn": "eventCount",
       "FilterColumn": "awsRegion"
    },
    "QueryStatement": "SELECT awsRegion, COUNT(*) AS eventCount FROM eds where eventTime > '?' and eventTime < '?' GROUP BY awsRegion ORDER BY eventCount LIMIT 100",
    "QueryParameters": ["$StartTime$", "$EndTime$"]
}
```

The following table describes configurable view properties for a pie chart.


| Parameter | Required | Value | 
| --- | --- | --- | 
|  `Height`  |  Yes  |  The height of the pie chart in inches.  | 
|  `Width`  |  Yes  |  The width of the pie chart in inches.  | 
|  `Title`  |  Yes  |  The title of the pie chart.  | 
|  `View`  |  Yes  |  The widget view type. For a pie chart, the value is `PieChart`.  | 
|  `LabelColumn`  |  Yes  |  The label for segments in the pie chart. For example, `awsRegion`.  | 
|  `ValueColumn`  |  Yes  |  The value for the segments in the pie chart. For example, `eventCount`.  | 
|  `FilterColumn`  |  No  |  The field from the query results that you want to filter on. For example, `awsRegion`.  | 

## Bar chart


The following example shows a widget configured as a bar chart.

```
{
    "ViewProperties": {
       "Height": "2",
       "Width": "4",
       "Title": "TopServices",
       "View": "BarChart",
       "LabelColumn": "service",
       "ValueColumn": "eventCount",
       "FilterColumn": "service",
       "Orientation": "Horizontal"
    },
    "QueryStatement": "SELECT REPLACE(eventSource, '.amazonaws.com') AS service, COUNT(*) AS eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' GROUP BY eventSource ORDER BY eventCount DESC LIMIT 100",
    "QueryParameters": ["$StartTime$", "$EndTime$"]
}
```

The following table describes the configurable view properties for a bar chart.


| Parameter | Required | Value | 
| --- | --- | --- | 
|  `Height`  |  Yes  |  The height of the bar chart in inches.  | 
|  `Width`  |  Yes  |  The width of the bar chart in inches.  | 
|  `Title`  |  Yes  |  The title of the bar chart.  | 
|  `View`  |  Yes  |  The widget view type. For a bar chart, the value is `BarChart`.  | 
|  `LabelColumn`  |  Yes  |  The label for bars in the bar chart. For example, `service`.  | 
|  `ValueColumn`  |  Yes  |  The value for the bars in the bar chart. For example, `eventCount`.  | 
|  `FilterColumn`  |  No  |  The field from the query results that you want to filter on. For example, `service`.  | 
|  `Orientation`  |  No  |  The orientation of the bar chart, either `Horizontal` or `Vertical`.  | 

# Manage dashboards with the AWS CLI


This section describes several other commands that you can run to manage your dashboards, including getting a dashboard, listing your dashboards, refreshing a dashboard, and updating a dashboard.

When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the `--region` parameter with the command.

**Topics**
+ [Get a dashboard with the AWS CLI](#lake-dashboard-cli-get)
+ [List dashboards with the AWS CLI](#lake-dashboard-cli-list)
+ [Attach a resource-based policy to an event data store or dashboard with the AWS CLI](#lake-dashboard-cli-add-rbp)
+ [Manually refresh a dashboard with the AWS CLI](#lake-dashboard-cli-refresh)
+ [Update a dashboard with the AWS CLI](#lake-dashboard-cli-update)

## Get a dashboard with the AWS CLI


Run the `get-dashboard` command to return a dashboard. Specify the `--dashboard-id` by providing the dashboard ARN or the dashboard name.

```
aws cloudtrail get-dashboard --dashboard-id arn:aws:cloudtrail:us-east-1:123456789012:dashboard/exampleDash
```

## List dashboards with the AWS CLI


Run the `list-dashboards` command to list the dashboards for your account.
+ Include the `--type` parameter to view only the `CUSTOM` or `MANAGED` dashboards.
+ Include the `--max-results` parameter to limit the number of results. Valid values are 1-100.
+ Include the `--name-prefix` parameter to return only dashboards whose names match the specified prefix.

The following example lists all dashboards.

```
aws cloudtrail list-dashboards
```

This example lists only the `CUSTOM` dashboards.

```
aws cloudtrail list-dashboards --type CUSTOM
```

The next example lists only the `MANAGED` dashboards.

```
aws cloudtrail list-dashboards --type MANAGED
```

The final example lists the dashboards matching the specified prefix.

```
aws cloudtrail list-dashboards --name-prefix ExamplePrefix
```

## Attach a resource-based policy to an event data store or dashboard with the AWS CLI


Run the `put-resource-policy` command to apply a resource-based policy to an event data store or dashboard.

### Attach a resource-based policy to an event data store


To run queries on a dashboard during a manual or scheduled refresh, you need to attach a resource-based policy to every event data store that is associated with a widget on the dashboard. This allows CloudTrail Lake to run the queries on your behalf. For more information about the resource-based policy, see [Example: Allow CloudTrail to run queries to refresh a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-eds-dashboard).

The following example attaches a resource-based policy to an event data store. Replace *account-id* with your account ID, *eds-arn* with the ARN of the event data store for which CloudTrail will run queries, and *dashboard-arn* with the ARN of the dashboard.

```
aws cloudtrail put-resource-policy \
--resource-arn eds-arn \
--resource-policy '{"Version": "2012-10-17", "Statement": [{"Sid": "EDSPolicy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "cloudtrail:StartQuery", "Condition": { "StringEquals": { "AWS:SourceArn": "dashboard-arn", "AWS:SourceAccount": "account-id"}}}]}'
```

### Attach a resource-based policy to a dashboard


To set a refresh schedule for a dashboard, you need to attach a resource-based policy to the dashboard to allow CloudTrail Lake to refresh the dashboard on your behalf. For more information about the resource-based policy, see [Resource-based policy example for a dashboard](security_iam_resource-based-policy-examples.md#security_iam_resource-based-policy-examples-dashboards).

The following example attaches a resource-based policy to a dashboard. Replace *account-id* with your account ID and *dashboard-arn* with the ARN of the dashboard.

```
aws cloudtrail put-resource-policy \
--resource-arn dashboard-arn \
--resource-policy '{"Version": "2012-10-17", "Statement": [{"Sid": "DashboardPolicy", "Effect": "Allow", "Principal": { "Service": "cloudtrail.amazonaws.com" }, "Action": "cloudtrail:StartDashboardRefresh", "Condition": { "StringEquals": { "AWS:SourceArn": "dashboard-arn", "AWS:SourceAccount": "account-id"}}}]}'
```

## Manually refresh a dashboard with the AWS CLI


Run the `start-dashboard-refresh` command to manually refresh the dashboard. Before you can run this command, you must [attach a resource-based policy](#lake-dashboard-cli-add-rbp-eds) to every event data store associated with a dashboard widget.

The following example shows how to manually refresh a custom dashboard.

```
aws cloudtrail start-dashboard-refresh \
--dashboard-id dashboard-id \
--query-parameter-values '{"$StartTime$": "2024-11-05T10:45:24.00Z"}'
```

The next example shows how to manually refresh a managed dashboard. Because managed dashboards are configured by CloudTrail, the refresh request needs to include the ID of the event data store that the queries will run on.

```
aws cloudtrail start-dashboard-refresh \
--dashboard-id dashboard-id \
--query-parameter-values '{"$StartTime$": "2024-11-05T10:45:24.00Z", "$EventDataStoreId$": "eds-id"}'
```

## Update a dashboard with the AWS CLI


Run the `update-dashboard` command to update a dashboard. You can update the dashboard to set a refresh schedule, enable or disable a refresh schedule, modify the widgets, and enable or disable termination protection.

### Update the refresh schedule with the AWS CLI


The following example updates the refresh schedule for a custom dashboard named `AccountActivityDashboard`.

```
aws cloudtrail update-dashboard --dashboard-id AccountActivityDashboard \
--refresh-schedule '{"Frequency": {"Unit": "HOURS", "Value": 6}, "Status": "ENABLED"}'
```

### Disable termination protection and the refresh schedule on a custom dashboard with the AWS CLI


The following example disables termination protection for a custom dashboard named `AccountActivityDashboard` to allow the dashboard to be deleted. It also turns off the refresh schedule.

```
aws cloudtrail update-dashboard --dashboard-id AccountActivityDashboard \
--refresh-schedule '{ "Status": "DISABLED"}' \
--no-termination-protection-enabled
```

### Add a widget to a custom dashboard


The following example adds a new widget named `TopServices` to the custom dashboard named `AccountActivityDashboard`. The widgets array includes the two widgets that were already created for the dashboard and the new widget.

**Note**  
In this example, `?` is surrounded with single quotes because it is used with `eventTime`. Depending on your operating system, you might need to escape the single quotes. For more information, see [Using quotation marks and literals with strings in the AWS CLI](https://docs.aws.amazon.com/cli/v1/userguide/cli-usage-parameters-quoting-strings.html).

```
aws cloudtrail update-dashboard --dashboard-id AccountActivityDashboard \
--widgets '[
    {
      "ViewProperties": {
        "Height": "2",
        "Width": "4",
        "Title": "TopErrors",
        "View": "Table"
      },
      "QueryStatement": "SELECT errorCode, COUNT(*) AS eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' AND (errorCode is not null) GROUP BY errorCode ORDER BY eventCount DESC LIMIT 100",
      "QueryParameters": ["$StartTime$", "$EndTime$"]
    },
    {
      "ViewProperties": {
        "Height": "2",
        "Width": "4",
        "Title": "MostActiveRegions",
        "View": "PieChart",
        "LabelColumn": "awsRegion",
        "ValueColumn": "eventCount",
        "FilterColumn": "awsRegion"
      },
      "QueryStatement": "SELECT awsRegion, COUNT(*) AS eventCount FROM eds where eventTime > '?' and eventTime < '?' GROUP BY awsRegion ORDER BY eventCount LIMIT 100",
      "QueryParameters": ["$StartTime$", "$EndTime$"]
    },
    {
      "ViewProperties": {
        "Height": "2",
        "Width": "4",
        "Title": "TopServices",
        "View": "BarChart",
        "LabelColumn": "service",
        "ValueColumn": "eventCount",
        "FilterColumn": "service",
        "Orientation": "Vertical"
      },
      "QueryStatement": "SELECT replace(eventSource, '.amazonaws.com') AS service, COUNT(*) as eventCount FROM eds WHERE eventTime > '?' AND eventTime < '?' GROUP BY eventSource ORDER BY eventCount DESC LIMIT 100",
      "QueryParameters": ["$StartTime$", "$EndTime$"]
    }
  ]'
```

# Delete a dashboard with the AWS CLI


This section describes how to use the AWS CLI `delete-dashboard` command to delete a CloudTrail Lake dashboard.

To delete a dashboard, specify the `--dashboard-id` by providing the dashboard ARN or the dashboard name.

```
aws cloudtrail delete-dashboard --dashboard-id arn:aws:cloudtrail:us-east-1:123456789012:dashboard/exampleDash
```

There is no response if the operation is successful.

**Note**  
You can't delete a dashboard if `--termination-protection-enabled` is set.

# CloudTrail Lake queries

**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

Queries in CloudTrail Lake are authored in SQL. You can build a query on the CloudTrail Lake **Editor** tab by writing the query in SQL from scratch, by opening a saved or sample query and editing it, or by using the query generator to produce a query from an English language prompt. You cannot overwrite an included sample query with your changes, but you can save it as a new query. For more information about the SQL query language that is allowed, see [CloudTrail Lake SQL constraints](query-limitations.md).

An unbounded query (such as `SELECT * FROM edsID`) scans all data in your event data store. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` time stamps to queries. The following is an example that searches for all events in a specified event data store where the event time is after (`>`) January 5, 2023 at 1:51 p.m. and before (`<`) January 19, 2023 at 1:51 p.m. Because an event data store has a minimum retention period of seven days, the minimum time span between starting and ending `eventTime` values is also seven days.

```
SELECT *
FROM eds-ID
WHERE
     eventtime > '2023-01-05 13:51:00' and eventtime < '2023-01-19 13:51:00'
```
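If you generate such bounded queries programmatically, a small helper can format the time window and enforce the seven-day minimum span mentioned above. This is an illustrative sketch; `eds_example` is a placeholder event data store ID:

```python
from datetime import datetime, timedelta

def bounded_query(eds_id, start, end):
    # Enforce the minimum seven-day span between the eventTime bounds.
    if end - start < timedelta(days=7):
        raise ValueError("eventTime span must be at least seven days")
    fmt = "%Y-%m-%d %H:%M:%S"
    return (
        f"SELECT * FROM {eds_id} WHERE "
        f"eventTime > '{start.strftime(fmt)}' AND eventTime < '{end.strftime(fmt)}'"
    )

print(bounded_query("eds_example", datetime(2023, 1, 5, 13, 51), datetime(2023, 1, 19, 13, 51)))
```

Bounding every query this way keeps the amount of scanned data, and therefore the query cost, predictable.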

For information about how to optimize your queries, see [Optimize CloudTrail Lake queries](lake-queries-optimization.md).

**Topics**
+ [Query editor tools](#query-editor-format-controls)
+ [Create CloudTrail Lake queries from natural language prompts](lake-query-generator.md)
+ [View sample queries with the CloudTrail console](lake-console-queries.md)
+ [Create or edit a query with the CloudTrail console](query-create-edit-query.md)
+ [Run a query and save query results with the console](query-run-query.md)
+ [View query results with the console](query-results.md)
+ [Summarize query results in natural language](query-results-summary.md)
+ [Download saved query results](view-download-cloudtrail-lake-query-results.md)
+ [Validate CloudTrail Lake saved query results](cloudtrail-query-results-validation.md)
+ [Optimize CloudTrail Lake queries](lake-queries-optimization.md)
+ [Run and manage CloudTrail Lake queries with the AWS CLI](lake-queries-cli.md)

## Query editor tools


A toolbar at the upper right of the query editor offers commands to help author and format your SQL query.

![\[Query editor toolbar\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-editor-toolbar.png)


The following list describes the commands on the toolbar.
+ **Undo** – Reverts the last content change made in the query editor.
+ **Redo** – Repeats the last content change made in the query editor.
+ **Format selected** – Arranges the query editor content according to SQL formatting and spacing conventions.
+ **Comment/uncomment selected** – Comments the selected portion of the query if it is not already commented. If the selected portion is already commented, choosing this option removes the comment.

# Create CloudTrail Lake queries from natural language prompts


You can use the CloudTrail Lake query generator to produce a query from an English language prompt that you provide. The query generator uses generative artificial intelligence (generative AI) to produce a ready-to-use SQL query from your prompt, which you can then run in Lake's query editor or fine-tune further. You don't need extensive knowledge of SQL or CloudTrail event fields to use the query generator.

The prompt can be a question or a statement about the event data in your CloudTrail Lake event data store. For example, you can enter prompts like "What are my top errors in the past month?" and “Give me a list of users that used SNS.”

A prompt must be between 3 and 500 characters long.

There are no charges for generating queries; however, when you run queries, you incur charges based on the amount of optimized and compressed data scanned. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` timestamps to queries.

**Note**  
You can provide feedback about a generated query by choosing the thumbs up or thumbs down button that appears below the generated query. When you provide feedback, CloudTrail saves your prompt and the generated query.  
Do not include any personally identifying, confidential, or sensitive information in your prompts.  
This feature uses generative AI large language models (LLMs); we recommend double-checking the LLM response.

**Note**  
CloudTrail automatically selects the optimal Region within your geography to process inference requests while generating queries. This maximizes available compute resources and model availability, and delivers the best customer experience. Your data remains stored only in the Region where the request originated; however, input prompts and output results may be processed outside that Region. All data is transmitted encrypted across Amazon's secure network.  
CloudTrail securely routes your inference requests to available compute resources within the geographic area where the request originated, as follows:  
Inference requests originating in the United States are processed within the United States.  
Inference requests originating in Japan are processed within Japan.  
Inference requests originating in Australia are processed within Australia.  
Inference requests originating in the European Union are processed within the European Union.  
Inference requests originating in India are processed within India.  
To opt out of the query generation feature, explicitly deny or remove the `cloudtrail:GenerateQuery` action from the IAM policy you are using.

You can access the query generator using the CloudTrail console and AWS CLI.

------
#### [ CloudTrail console ]

**To use the query generator on the CloudTrail console**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Query**. 

1. On the **Query** page, choose the **Editor** tab.

1. Choose the event data store you want to create a query for.

1. In the **Query generator** area, enter a prompt in plain English. For examples, see [Example prompts](#lake-query-generator-examples).

1. Choose **Generate query**. The query generator attempts to generate a query from your prompt. If successful, the query generator provides the SQL query in the editor. If it can't generate a query, rephrase your prompt and try again.

1. (Optional) You can provide feedback about the generated query. To provide feedback, choose the thumbs up or thumbs down button that appears below the prompt. When you provide feedback, CloudTrail saves your prompt and the generated query.

1. (Optional) Choose **Run** to run the query.
**Note**  
When you run queries, you incur charges based on the amount of optimized and compressed data scanned. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` timestamps to queries.

1. (Optional) If you run the query and there are results, you can choose **Summarize results** to generate a natural language summary in English of the query results. This option uses generative artificial intelligence (generative AI) to produce the summary. For more information about this option, see [Summarize query results in natural language](query-results-summary.md).

   You can provide feedback about the summary by choosing the thumbs up or thumbs down button that appears below the generated summary.
**Note**  
The query summarization feature is in preview release for CloudTrail Lake and is subject to change. This feature is available in the following regions: Asia Pacific (Tokyo), US East (N. Virginia), and US West (Oregon).

------
#### [ AWS CLI ]

**To generate a query with the AWS CLI**

Run the `generate-query` command to generate a query from an English prompt. For `--event-data-stores`, provide the ARN (or ID suffix of the ARN) of the event data store you want to query. You can only specify one event data store. For `--prompt`, provide the prompt in English. 

```
aws cloudtrail generate-query \
--event-data-stores arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE \
--prompt "Show me all console login events for the past week?"
```

If successful, the command outputs a SQL statement and provides a `QueryAlias` that you will use with the `start-query` command to run the query against your event data store.

```
{
  "QueryStatement": "SELECT * FROM $EDS_ID WHERE eventname = 'ConsoleLogin' AND eventtime >= timestamp '2024-09-16 00:00:00' AND eventtime <= timestamp '2024-09-23 00:00:00' AND eventSource = 'signin.amazonaws.com'",
  "QueryAlias": "AWSCloudTrail-UUID"
}
```

**To run a query with the AWS CLI**

Run the `start-query` command with the `QueryAlias` that the `generate-query` command returned in the previous example. You can also run the `start-query` command by providing the `QueryStatement` instead.

```
aws cloudtrail start-query --query-alias AWSCloudTrail-UUID
```

The response is a `QueryId` string. To get the status of a query, run `describe-query` using the `QueryId` value returned by `start-query`. If the query is successful, you can run `get-query-results` to get results.

```
{
    "QueryId": "EXAMPLE2-0add-4207-8135-2d8a4EXAMPLE"
}
```

**Note**  
Queries that run for longer than one hour might time out. You can still get partial results that were processed before the query timed out.  
If you are delivering the query results to an S3 bucket using the optional `--delivery-s3uri` parameter, the bucket policy must grant CloudTrail permission to deliver query results to the bucket. For information about manually editing the bucket policy, see [Amazon S3 bucket policy for CloudTrail Lake query results](s3-bucket-policy-lake-query-results.md).
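For example, a run that delivers the results to a bucket might look like the following sketch; the bucket name and prefix are placeholders, and the bucket policy must already grant CloudTrail delivery permission:

```shell
# Run the generated query and deliver the results to an S3 bucket.
# amzn-s3-demo-bucket/lake-query-results is a placeholder destination.
aws cloudtrail start-query \
    --query-alias AWSCloudTrail-UUID \
    --delivery-s3uri "s3://amzn-s3-demo-bucket/lake-query-results"
```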

------

## Required permissions


The [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html) and [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AdministratorAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AdministratorAccess.html) managed policies both provide the necessary permissions to use this feature.

You can also include the `cloudtrail:GenerateQuery` action in a new or existing customer managed or inline policy.
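As a sketch, a minimal identity-based policy statement granting only the query generator action might look like the following; the resource ARN is a placeholder for your own event data store:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "cloudtrail:GenerateQuery",
            "Resource": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE"
        }
    ]
}
```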

## Region support


This feature is supported in the following AWS Regions:
+ Asia Pacific (Mumbai) Region (ap-south-1)
+ Asia Pacific (Sydney) Region (ap-southeast-2)
+ Asia Pacific (Tokyo) Region (ap-northeast-1)
+ Canada (Central) Region (ca-central-1)
+ Europe (London) Region (eu-west-2)
+ US East (N. Virginia) Region (us-east-1)
+ US West (Oregon) Region (us-west-2)

## Limitations


The following are limitations of the query generator:
+ The query generator can only accept prompts in English.
+ The query generator can only generate queries for event data stores that collect CloudTrail events (management events, data events, network activity events).
+ The query generator cannot generate queries for prompts that do not pertain to CloudTrail Lake event data.

## Example prompts


This section provides example prompts and the resulting SQL queries generated from the prompts.

If you choose to run the example queries in this section, replace *eds-id* with the ID of the event data store that you want to query and replace the timestamps with the appropriate timestamps for your use case. Timestamps have the following format: `YYYY-MM-DD HH:MM:SS`.

**Prompt:** What are my top errors in the past month?

**SQL query:**

```
SELECT
    errorMessage,
    COUNT(*) as eventCount
FROM
    eds-id
WHERE
    errorMessage IS NOT NULL
AND eventTime >= timestamp '2024-05-01 00:00:00'
AND eventTime <= timestamp '2024-05-31 23:59:59'
GROUP BY 1
ORDER BY 2 DESC
LIMIT 2;
```

**Prompt:** Give me a list of users that used Amazon SNS.

**SQL query:**

```
SELECT
    DISTINCT userIdentity.arn AS user
FROM
    eds-id
WHERE
    eventSource = 'sns.amazonaws.com'
```

**Prompt:** What are my API counts each day for read and write events in the past month?

**SQL query:**

```
SELECT date(eventTime) AS event_date,
    SUM(
        CASE
            WHEN readonly = true THEN 1
            ELSE 0
        END
    ) AS read_events,
    SUM(
        CASE
            WHEN readonly = false THEN 1
            ELSE 0
        END
    ) AS write_events
FROM
    eds-id
WHERE
    eventTime >= timestamp '2024-05-04 00:00:00'
AND eventTime <= timestamp '2024-06-04 23:59:59'
GROUP BY 1
ORDER BY 1 ASC;
```

**Prompt:** Show any events with access denied errors for the past three weeks.

**SQL query:**

```
SELECT *
FROM 
  eds-id
WHERE
  (errorCode = 'AccessDenied' OR errorMessage = 'Access Denied')
AND eventTime >= timestamp '2024-05-16 01:00:00'
AND eventTime <= timestamp '2024-06-06 01:00:00'
```

**Prompt:** Query the number of calls each operator performed on the date *2024-05-01*. The operator is a principal tag.

**SQL query:**

```
SELECT element_at(
        eventContext.tagContext.principalTags,
        'operator'
    ) AS operator,
    COUNT(*) AS eventCount
FROM
    eds-id
WHERE eventtime >= '2024-05-01 00:00:00'
    AND eventtime < '2024-05-01 23:59:59'
GROUP BY 1
ORDER BY 2 DESC;
```

**Prompt:** Give me all event IDs that touched resources within the CloudFormation stack with name *myStack* on the date *2024-05-01*.

**SQL query:**

```
SELECT eventID
FROM
    eds-id
WHERE any_match(
        eventContext.tagcontext.resourcetags,
        rt->element_at(rt.tags, 'aws:cloudformation:stack-name') = 'myStack'
    )
    AND eventtime >= '2024-05-01 00:00:00'
    AND eventtime < '2024-05-01 23:59:59'
```

**Prompt:** Count the number of events grouped by resource tag '*solution*' values, listing them in descending order of count.

**SQL query:**

```
SELECT element_at(rt.tags, 'solution'),
    count(*) as event_count
FROM
    eds-id,
    unnest(eventContext.tagContext.resourceTags) as rt
WHERE eventtime < '2025-05-14 19:00:00'
GROUP BY 1
ORDER BY 2 DESC;
```

**Prompt:** Find all Amazon S3 data events where resource tag Environment has value *prod*.

**SQL query:**

```
SELECT *
FROM
    eds-id
WHERE eventCategory = 'Data'
    AND eventSource = 's3.amazonaws.com'
    AND eventtime >= '2025-05-14 00:00:00'
    AND eventtime < '2025-05-14 20:00:00'
    AND any_match(
        eventContext.tagContext.resourceTags,
        rt->element_at(rt.tags, 'Environment') = 'prod'
    )
```

# View sample queries with the CloudTrail console
View sample queries

The CloudTrail console provides a number of sample queries that can help you get started writing your own queries.

CloudTrail queries incur charges based upon the amount of data scanned. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` time stamps to queries. For more information about CloudTrail pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/).

**Note**  
You can also view queries created by the GitHub community. For more information, see [CloudTrail Lake sample queries](https://github.com/aws-samples/cloud-trail-lake-query-samples) on the GitHub website. AWS CloudTrail has not evaluated the queries in GitHub. 

**To view and run a sample query**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Query**. 

1. On the **Query** page, choose the **Sample queries** tab.

1. Choose a sample query from the list or enter a phrase to search by. In this example, we'll open the query **Investigate who made console changes** by choosing the **Query name**. This opens the query in the **Editor** tab.
**Note**  
By default, this page uses basic search functionality. You can improve the search functionality by adding permissions for the `cloudtrail:SearchSampleQueries` action, if it is not already provided by your permissions policy. The [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html) managed policy provides permissions to perform the `cloudtrail:SearchSampleQueries` action.  
![\[Sample queries tab\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-sample-console.png)

1. On the **Editor** tab, choose the event data store for which you want to run the query. When you choose the event data store from the list, CloudTrail automatically populates the event data store ID in the `FROM` line of the query editor.  
![\[Choose event data store for query\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-editor-console.png)

1. Choose **Run** to run the query.

   The **Command output** tab shows you metadata about your query, such as whether the query was successful, the number of records matched, and the run time of the query.  
![\[View query status\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-console-status.png)

   The **Query results** tab shows you the event data in the selected event data store that matched your query.  
![\[View query results\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-console-results.png)

For more information about editing a query, see [Create or edit a query with the CloudTrail console](query-create-edit-query.md). For more information about running a query and saving query results, see [Run a query and save query results with the console](query-run-query.md).

# Create or edit a query with the CloudTrail console
Create or edit a query

In this walkthrough, we open one of the sample queries, edit it to find actions taken by a specific user named `Alice`, and save it as a new query. You can also edit a saved query on the **Saved queries** tab, if you have saved queries. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` time stamps to queries.

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Query**. 

1. On the **Query** page, choose the **Sample queries** tab.

1. Open a sample query by choosing the **Query name**. This opens the query in the **Editor** tab. In this example, we'll select the query named **Investigate user actions** and edit the query to find the actions for a specific user named `Alice`.

1. In the **Editor** tab, edit the `WHERE` line to specify the user that you want to investigate and update the `eventTime` values as needed. The value of `FROM` is the ID portion of the event data store's ARN and is automatically populated by CloudTrail when you choose the event data store.

   ```
   SELECT
       eventID, eventName, eventSource, eventTime, userIdentity.arn AS user
   FROM
       event-data-store-id
   WHERE
       userIdentity.arn LIKE '%Alice%'
       AND eventTime > '2023-06-23 00:00:00' AND eventTime < '2023-06-26 00:00:00'
   ```

1. You can run a query before you save it, to verify that the query works. To run a query, choose an event data store from the **Event data store** drop-down list, and then choose **Run**. View the **Status** column of the **Command output** tab for the active query to verify that a query ran successfully.

1. When you have updated the sample query, choose **Save**.

1. In **Save query**, enter a name and description for the query. Choose **Save query** to save your changes as the new query. To discard changes to a query, choose **Cancel**, or close the **Save query** window.  
![\[Saving a changed query\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-save.png)
**Note**  
Saved queries are tied to your browser; if you use a different browser or a different device to access the CloudTrail console, the saved queries are not available.

1. Open the **Saved queries** tab to see the new query in the table.  
![\[Saved queries tab showing the new saved query\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-saved-table.png)

# Run a query and save query results with the console
Run a query and save query results

After you choose or save a query, you can run a query on an event data store. 

 When you run a query, you have the option to save the query results to an Amazon S3 bucket. When you run queries in CloudTrail Lake, you incur charges based on the amount of data scanned by the query. There are no additional CloudTrail Lake charges for saving query results to an S3 bucket; however, S3 storage charges apply. For more information about S3 pricing, see [Amazon S3 pricing](https://aws.amazon.com/s3/pricing/). 

 When you save query results, the query results may display in the CloudTrail console before they are viewable in the S3 bucket since CloudTrail delivers the query results after the query scan completes. While most queries complete within a few minutes, depending on the size of your event data store, it can take considerably longer for CloudTrail to deliver query results to your S3 bucket. CloudTrail delivers the query results to the S3 bucket in compressed gzip format. On average, after the query scan completes you can expect a latency of 60 to 90 seconds for every GB of data delivered to the S3 bucket.

**To run a query using CloudTrail Lake**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Query**. 

1. On the **Saved queries** or **Sample queries** tabs, choose a query to run by choosing the **Query name**. 

1. On the **Editor** tab, for **Event data store**, choose an event data store from the drop-down list.

1. (Optional) On the **Editor** tab, choose **Save results to S3** to save the query results to an S3 bucket. When you choose the default S3 bucket, CloudTrail creates and applies the required bucket policies. If you choose the default S3 bucket, your IAM policy needs to include permission for the `s3:PutEncryptionConfiguration` action because by default server-side encryption is enabled for the bucket. For more information about saving query results, see [Additional information about saved query results](#save-query-results). 
**Note**  
 To use a different bucket, specify a bucket name, or choose **Browse S3** to choose a bucket. The bucket policy must grant CloudTrail permission to deliver query results to the bucket. For information about manually editing the bucket policy, see [Amazon S3 bucket policy for CloudTrail Lake query results](s3-bucket-policy-lake-query-results.md). 

1. On the **Editor** tab, choose **Run**.

   Depending on the size of your event data store, and the number of days of data it includes, a query can take several minutes to run. The **Command output** tab shows the status of a query, and whether a query is finished running. When a query has finished running, open the **Query results** tab to see a table of results for the active query (the query currently shown in the editor).

**Note**  
Queries that run for longer than one hour might time out. You can still get partial results that were processed before the query timed out. CloudTrail does not deliver partial query results to an S3 bucket. To avoid a time out, you can refine your query to limit the amount of data scanned by specifying a narrower time range.

## Additional information about saved query results


After you save query results, you can download the saved query results from the S3 bucket. For more information about finding and downloading saved query results, see [Download saved query results](view-download-cloudtrail-lake-query-results.md).

You can also validate saved query results to determine whether the query results were modified, deleted, or unchanged after CloudTrail delivered the query results. For more information about validating saved query results, see [Validate CloudTrail Lake saved query results](cloudtrail-query-results-validation.md).

## Example: Save query results to an Amazon S3 bucket


This walkthrough shows how you can save query results to an S3 bucket and then download those query results.

**To save query results to an Amazon S3 bucket**

1. Sign in to the AWS Management Console and open the CloudTrail console at [https://console.aws.amazon.com/cloudtrail/](https://console.aws.amazon.com/cloudtrail/).

1.  From the navigation pane, under **Lake**, choose **Query**. 

1. On the **Sample queries** or **Saved queries** tabs, choose a query to run by choosing the **Query name**. In this example, we'll choose the sample query named **Investigate user actions**.

1. On the **Editor** tab, for **Event data store**, choose an event data store from the drop-down list. When you choose the event data store from the list, CloudTrail automatically populates the event data store ID in the `FROM` line.

1. In this sample query, we'll edit the `userIdentity.arn` value to specify a user named `Admin`, and we'll leave the default values for `eventTime`. When you run a query, you're charged for the amount of data scanned. To help control costs, we recommend that you constrain queries by adding starting and ending `eventTime` time stamps to queries.  
![\[Edit userIdentity.ARN value in sample query\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/sample-query-edit.png)

1. Choose **Save results to S3** to save the query results to an S3 bucket. When you choose the default S3 bucket, CloudTrail creates and applies the required bucket policies. If you choose the default S3 bucket, your IAM policy needs to include permission for the `s3:PutEncryptionConfiguration` action because by default server-side encryption is enabled for the bucket. In this example, we'll use the default S3 bucket.
**Note**  
 To use a different bucket, specify a bucket name, or choose **Browse S3** to choose a bucket. The bucket policy must grant CloudTrail permission to deliver query results to the bucket. For information about manually editing the bucket policy, see [Amazon S3 bucket policy for CloudTrail Lake query results](s3-bucket-policy-lake-query-results.md).   
![\[Chosen S3 bucket for saved query results.\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/save-query-results.png)

1. Choose **Run**. Depending on the size of your event data store, and the number of days of data it includes, a query can take several minutes to run. The **Command output** tab shows the status of a query, and whether a query is finished running. When a query has finished running, open the **Query results** tab to see a table of results for the active query (the query currently shown in the editor).

1. When CloudTrail completes delivery of the saved query results to your S3 bucket, the **Delivery status** column provides a link to the S3 bucket that contains your saved query result files as well as a [sign file](cloudtrail-query-results-validation.md#cloudtrail-results-file-validation-sign-file-structure) that you can use to verify your saved query results. Choose **View in S3** to view the query result files and sign files in the S3 bucket.
**Note**  
 When you save query results, the query results may display in the CloudTrail console before they are viewable in the S3 bucket because CloudTrail delivers the query results after the query scan completes. While most queries complete within a few minutes, depending on the size of your event data store, it can take considerably longer for CloudTrail to deliver query results to your S3 bucket. CloudTrail delivers the query results to the S3 bucket in compressed gzip format. On average, after the query scan completes you can expect a latency of 60 to 90 seconds for every GB of data delivered to the S3 bucket.  
![\[Query delivery status on Command output tab\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/query-delivery-status.png)

1. To download your query results, choose the query result file (in this example, `result_1.csv.gz`) and then choose **Download**.  
![\[Download query result file\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/download-query-results.png)

For information about validating saved query results, see [Validate CloudTrail Lake saved query results](cloudtrail-query-results-validation.md).

# View query results with the console
View query results

After your query finishes, you can view its results. The results of a query are available for seven days after the query finishes. You can view results for the active query on the **Query results** tab, or you can access results for all recent queries on the **Results history** tab on the **Lake** home page.

Query results can change from older runs of a query to newer ones, as later events in the query period can be logged between queries.

When you save query results, the query results may display in the CloudTrail console before they are viewable in the S3 bucket since CloudTrail delivers the query results after the query scan completes. While most queries complete within a few minutes, depending on the size of your event data store, it can take considerably longer for CloudTrail to deliver query results to your S3 bucket. CloudTrail delivers the query results to the S3 bucket in compressed gzip format.  On average, after the query scan completes you can expect a latency of 60 to 90 seconds for every GB of data delivered to the S3 bucket. For more information about finding and downloading saved query results, see [Download saved query results](view-download-cloudtrail-lake-query-results.md).

**Note**  
Queries that run for longer than one hour might time out. You can still get partial results that were processed before the query timed out. CloudTrail does not deliver partial query results to an S3 bucket. To avoid a time out, you can refine your query to limit the amount of data scanned by specifying a narrower time range.

**To view query results**

1. Choose the **Query results** tab on the query editor if it is not already selected. On the **Query results** tab for an active query, each row represents an event result that matched the query. Filter results by entering all or part of an event field value in the search bar. To copy an event, choose the event you want to copy and then choose **Copy**.

1. (Optional) Choose **Summarize results** to generate a natural language summary of the query results. The summary is provided in English. This option uses generative artificial intelligence (generative AI) to produce the summary. For more information about this option, see [Summarize query results in natural language](query-results-summary.md).

   You can provide feedback about the summary by choosing the thumbs up or thumbs down button that appears below the generated summary.
**Note**  
The query summarization feature is in preview release for CloudTrail Lake and is subject to change. This feature is available in the following regions: Asia Pacific (Tokyo), US East (N. Virginia), and US West (Oregon).

1. On the **Command output** tab, view metadata about the query that was run, such as the event data store ID, run time, number of results scanned, and whether or not the query was successful. If you saved the query results to an Amazon S3 bucket, the metadata also includes a link to the S3 bucket containing the saved query results.

# Summarize query results in natural language


**Note**  
The query summarization feature is in preview release for CloudTrail Lake and is subject to change.

**Note**  
CloudTrail will automatically select the optimal region within your geography to process inference requests while summarizing queries. This maximizes available compute resources, model availability, and delivers the best customer experience. Your data will remain stored only in the region where the request originated, however, input prompts and output results may be processed outside that region. All data will be transmitted encrypted across Amazon's secure network.  
 CloudTrail will securely route your inference requests to available compute resources within the geographic area where the request originated, as follows:  
+ Inference requests originating in the United States will be processed within the United States.
+ Inference requests originating in Japan will be processed within Japan.

 To opt out of the query summarization feature, you can explicitly deny or remove the `cloudtrail:GenerateQueryResultsSummary` action from the IAM policy you are using. 

After your query finishes, you can get a summary of your query results in natural language from the **Query results** tab in the query editor. This option uses generative artificial intelligence (generative AI) to produce the summary.

**To summarize query results**

1. From the **Query results** tab of the query editor, choose **Summarize results** to generate a natural language summary of the query results. The summary is provided in English.

1. (Optional) Provide feedback about the summary by choosing the thumbs up or thumbs down button that appears below the generated summary.

If the related event data store is encrypted using a KMS key, you cannot use the KMS key to encrypt the query results and summary. The query results and summary are instead encrypted by CloudTrail.

Access to the generated summary is authorized against the `GetQueryResults`, `GenerateQueryResultsSummary`, and KMS permissions (if the related event data store is encrypted with a KMS key). When a summary is generated, CloudTrail records an event named `GenerateQueryResultsSummary` for visibility.

## Required permissions


The [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AWSCloudTrail_FullAccess.html) and [https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AdministratorAccess.html](https://docs.aws.amazon.com/aws-managed-policy/latest/reference/AdministratorAccess.html) managed policies both provide the necessary permissions to use this feature.

You can also include the `cloudtrail:GenerateQueryResultsSummary` and `cloudtrail:GetQueryResults` actions in a new or existing customer managed or inline policy.
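As a sketch, a customer managed policy statement scoped to these two actions might look like the following; the resource ARN is a placeholder for your own event data store:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudtrail:GenerateQueryResultsSummary",
                "cloudtrail:GetQueryResults"
            ],
            "Resource": "arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE"
        }
    ]
}
```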

If the event data store related to the query results being summarized is encrypted with a KMS key, you also need permissions for the KMS key.

## Region support


This feature is available in the following AWS Regions:
+ Asia Pacific (Tokyo) Region (ap-northeast-1)
+ US East (N. Virginia) Region (us-east-1)
+ US West (Oregon) Region (us-west-2)

## Limitations


The following are limitations of this feature:
+ Summaries are in English only.
+ Summaries are limited to event data stores that collect CloudTrail events (management events, data events, network activity events).
+ Each summary is for the results of a single query.
+ The query results size must be less than 250 KB.
+ The monthly quota of query results that can be summarized is 3 MB.

# Download saved query results


After you save query results, you need to be able to locate the file containing the query results. CloudTrail delivers your query results to an Amazon S3 bucket that you specify when you save the query results. 

**Note**  
 When you save query results, the query results may display in the console before they are viewable in the S3 bucket since CloudTrail delivers the query results after the query scan completes. While most queries complete within a few minutes, depending on the size of your event data store, it can take considerably longer for CloudTrail to deliver query results to your S3 bucket. CloudTrail delivers the query results to the S3 bucket in compressed gzip format. On average, after the query scan completes you can expect a latency of 60 to 90 seconds for every GB of data delivered to the S3 bucket. 

**Topics**
+ [Find your CloudTrail Lake saved query results](#cloudtrail-find-lake-query-results)
+ [Download your CloudTrail Lake saved query results](#cloudtrail-download-lake-query-results)

## Find your CloudTrail Lake saved query results


CloudTrail publishes query result and sign files to your S3 bucket. The query result file contains the output of the saved query and the sign file provides the signature and hash value for the query results. You can use the sign file to validate the query results. For more information about validating query results, see [Validate CloudTrail Lake saved query results](cloudtrail-query-results-validation.md).

To retrieve a query result or sign file, you can use the Amazon S3 console, the Amazon S3 command line interface (CLI), or the API. 

**To find your query results and sign files with the Amazon S3 console**

1. Open the Amazon S3 console.

1. Choose the bucket you specified.

1. Navigate through the object hierarchy until you find the query result and sign files. The query result file has a .csv.gz extension and the sign file has a .json extension.

You will navigate through an object hierarchy that is similar to the following example, but with a different bucket name, account ID, date, and query ID. 

```
All Buckets
    amzn-s3-demo-bucket
        AWSLogs
            Account_ID
                CloudTrail-Lake
                    Query
                        2022
                            06
                              20
                                Query_ID
```
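With the AWS CLI, you can list the delivered objects instead of browsing the hierarchy in the console. The following sketch uses the placeholder bucket name, account ID, and date from the example above:

```shell
# List all query result and sign files delivered under the example date prefix
aws s3 ls s3://amzn-s3-demo-bucket/AWSLogs/123456789012/CloudTrail-Lake/Query/2022/06/20/ --recursive
```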

## Download your CloudTrail Lake saved query results
Download saved query results

When you save query results, CloudTrail delivers two types of files to your Amazon S3 bucket.
+ A sign file in JSON format that you can use to validate the query result files. The sign file is named result_sign.json. For more information about the sign file, see [CloudTrail sign file structure](cloudtrail-query-results-validation.md#cloudtrail-results-file-validation-sign-file-structure).
+ One or more query result files in CSV format, which contain the results from the query. The number of query result files delivered depends on the total size of the query results. The maximum file size for a query result file is 1 TB. Each query result file is named result_*number*.csv.gz. For example, if the total size of the query results was 2 TB, you would have two query result files, result_1.csv.gz and result_2.csv.gz.

 CloudTrail query result and sign files are Amazon S3 objects. You can use the S3 console, the AWS Command Line Interface (CLI), or the S3 API to retrieve query result and sign files. 

 The following procedure describes how to download the query result and sign files with the Amazon S3 console. 

**To download your query result or sign file with the Amazon S3 console**

1. Open the Amazon S3 console.

1. Choose the bucket and choose the file that you want to download.  
![\[CloudTrail query result file\]](http://docs.aws.amazon.com/awscloudtrail/latest/userguide/images/lake_query_results_S3.png)

1. Choose **Download** and follow any prompts to save the file.
**Note**  
Some browsers, such as Chrome, automatically extract the query result file for you. If your browser does this for you, skip to step 5.

1. Use a product such as [7-Zip](https://www.7-zip.org/) to extract the query result file.

1. Open the query result or sign file.

# Validate CloudTrail Lake saved query results
Validate saved query results

To determine whether the query results were modified, deleted, or unchanged after CloudTrail delivered them, you can use CloudTrail query results integrity validation. This feature is built using industry-standard algorithms: SHA-256 for hashing and SHA-256 with RSA for digital signing. This makes it computationally infeasible to modify, delete, or forge CloudTrail query result files without detection. You can use the command line to validate the query result files. 
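As a concrete illustration of the hashing half of this scheme, the following minimal Python sketch computes the hex-encoded SHA-256 digest of a file's raw (still compressed) bytes; this is the value that gets compared against the `fileHashValue` recorded in the sign file. The function name and file path are illustrative, not part of any AWS SDK.

```python
import hashlib

def sha256_hex(path):
    """Return the hex-encoded SHA-256 digest of a file's raw (still compressed) bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large query result files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example (path illustrative):
# sha256_hex("result_1.csv.gz")
```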

## Why use it?


Validated query result files are invaluable in security and forensic investigations. For example, a validated query result file enables you to assert positively that the query result file itself has not changed. The CloudTrail query result file integrity validation process also lets you know if a query result file has been deleted or changed. 

**Topics**
+ [Why use it?](#cloudtrail-query-results-validation-use-cases)
+ [Validate saved query results with the AWS CLI](#cloudtrail-query-results-validation-cli)
+ [CloudTrail sign file structure](#cloudtrail-results-file-validation-sign-file-structure)
+ [Custom implementations of CloudTrail query result file integrity validation](#cloudtrail-results-file-custom-validation)

## Validate saved query results with the AWS CLI
Validate query results with the AWS CLI

You can validate the integrity of the query result files and sign file by using the [verify-query-results](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/verify-query-results.html) command.

### Prerequisites


To validate query results integrity with the command line, the following conditions must be met:
+ You must have online connectivity to AWS.
+ You must use AWS CLI version 2.
+ To validate query result files and sign file locally, the following conditions apply:
  + You must put the query result files and sign file in the specified file path. Specify the file path as the value for the **--local-export-path** parameter.
  + You must not rename the query result files and sign file.
+ To validate the query result files and sign file in the S3 bucket, the following conditions apply:
  + You must not rename the query result files and sign file.
  + You must have read access to the Amazon S3 bucket that contains the query result files and sign file.
  + The specified S3 prefix must contain the query result files and sign file. Specify the S3 prefix as the value for the **--s3-prefix** parameter.

### verify-query-results


 The **verify-query-results** command verifies the hash value of each query result file by comparing the value with the `fileHashValue` in the sign file, and then validating the `hashSignature` in the sign file. 

When you verify query results, you can use either the **--s3-bucket** and **--s3-prefix** command line options to validate the query result files and sign file stored in an S3 bucket, or you can use the **--local-export-path** command line option to perform a local validation of the downloaded query result files and sign file.

**Note**  
The **verify-query-results** command is Region specific. You must specify the **--region** global option to validate query results for a specific AWS Region.

The following are the options for the **verify-query-results** command.

**--s3-bucket** *<string>*  
Specifies the S3 bucket name that stores the query result files and sign file. You cannot use this parameter with **--local-export-path**.

**--s3-prefix** *<string>*  
Specifies the S3 path of the S3 folder that contains the query result files and sign file (for example, `s3/path/`). You cannot use this parameter with **--local-export-path**. You do not need to provide this parameter if the files are located in the root directory of the S3 bucket.

**--local-export-path** *<string>*  
Specifies the local directory that contains the query result files and sign file (for example, `/local/path/to/export/file/`). You cannot use this parameter with **--s3-bucket** or **--s3-prefix**.

#### Examples


The following example validates query results using the **--s3-bucket** and **--s3-prefix** command line options to specify the S3 bucket name and prefix containing the query result files and sign file.

```
aws cloudtrail verify-query-results --s3-bucket amzn-s3-demo-bucket --s3-prefix prefix --region region
```

The following example validates downloaded query results using the **--local-export-path** command line option to specify the local path for the query result files and sign file. For more information about downloading query result files, see [Download your CloudTrail Lake saved query results](view-download-cloudtrail-lake-query-results.md#cloudtrail-download-lake-query-results).

```
aws cloudtrail verify-query-results --local-export-path local_file_path --region region
```

#### Validation results


The following table describes the possible validation messages for query result files and sign file.


****  

| File Type | Validation Message | Description | 
| --- | --- | --- | 
| Sign file | Successfully validated sign and query result files | The sign file signature is valid. The query result files it references can be checked. | 
| Query result file |  `ValidationError: "File file_name has inconsistent hash value with hash value recorded in sign file, hash value in sign file is expected_hash, but get computed_hash"`  | Validation failed because the hash value for the query result file did not match the `fileHashValue` in the sign file. | 
| Sign file |  `ValidationError: Invalid signature in sign file`  | Validation for the sign file failed because the signature is not valid. | 

## CloudTrail sign file structure


The sign file contains the name of each query result file that was delivered to your Amazon S3 bucket when you saved the query results, the hash value for each query result file, and the digital signature of the file. The digital signature and hash values are used for validating the integrity of the query result files and of the sign file itself. 

### Sign file location


The sign file is delivered to an Amazon S3 bucket location that follows this syntax.

```
s3://amzn-s3-demo-bucket/optional-prefix/AWSLogs/aws-account-ID/CloudTrail-Lake/Query/year/month/date/query-ID/result_sign.json
```
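For illustration, a small hypothetical helper can assemble that object key from its components. All parameter names and values here are placeholders you supply; this helper is not part of any AWS SDK.

```python
def sign_file_key(account_id, year, month, day, query_id, prefix=""):
    """Assemble the S3 object key of a sign file, following the documented layout.
    All argument values are placeholders supplied by the caller."""
    parts = [prefix] if prefix else []
    parts += ["AWSLogs", account_id, "CloudTrail-Lake", "Query",
              year, month, day, query_id, "result_sign.json"]
    return "/".join(parts)

# Example (values illustrative):
# sign_file_key("123456789012", "2022", "05", "10", "q-1", prefix="optional-prefix")
```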

### Sample sign file contents


The following example sign file contains information for CloudTrail Lake query results.

```
{
  "version": "1.0",
  "region": "us-east-1",
  "files": [
    {
      "fileHashValue" : "de85a48b8a363033c891abd723181243620a3af3b6505f0a44db77e147e9c188",
      "fileName" : "result_1.csv.gz"
    }
  ],
  "hashAlgorithm" : "SHA-256",
  "signatureAlgorithm" : "SHA256withRSA",
  "queryCompleteTime": "2022-05-10T22:06:30Z",
  "hashSignature" : "7664652aaf1d5a17a12ba50abe6aca77c0ec76264bdf7dce71ac6d1c7781117c2a412e5820bccf473b1361306dff648feae20083ad3a27c6118172a81635829bdc7f7b795ebfabeb5259423b2fb2daa7d1d02f55791efa403dac553171e7ce5f9307d13e92eeec505da41685b4102c71ec5f1089168dacde702c8d39fed2f25e9216be5c49769b9db51037cb70a84b5712e1dffb005a74580c7fdcbb89a16b9b7674e327de4f5414701a772773a4c98eb008cca34228e294169901c735221e34cc643ead34628aabf1ba2c32e0cdf28ef403e8fe3772499ac61e21b70802dfddded9bea0ddfc3a021bf2a0b209f312ccee5a43f2b06aa35cac34638f7611e5d7",
  "publicKeyFingerprint" : "67b9fa73676d86966b449dd677850753"
}
```

### Sign file field descriptions


The following are descriptions for each field in the sign file: 

`version`  
The version of the sign file. 

`region`  
The Region for the AWS account used for saving the query results. 

`files.fileHashValue`  
The hexadecimal encoded hash value of the compressed query result file content.

`files.fileName`  
The name of the query result file. 

`hashAlgorithm`  
The hash algorithm used to hash the query result file. 

`signatureAlgorithm`  
The algorithm used to sign the file. 

`queryCompleteTime`  
Indicates when CloudTrail delivered the query results to the S3 bucket. You can use this value to find the public key.

`hashSignature`  
The hash signature for the file.

`publicKeyFingerprint`  
The hexadecimal encoded fingerprint of the public key used to sign the file.

## Custom implementations of CloudTrail query result file integrity validation
Custom implementations of CloudTrail query results validation

Because CloudTrail uses industry standard, openly available cryptographic algorithms and hash functions, you can create your own tools to validate the integrity of the CloudTrail query result files. When you save query results to an Amazon S3 bucket, CloudTrail delivers a sign file to your S3 bucket. You can implement your own validation solution to validate the signature and query result files. For more information about the sign file, see [CloudTrail sign file structure](#cloudtrail-results-file-validation-sign-file-structure). 

This topic describes how the sign file is signed, and then details the steps that you will need to take to implement a solution that validates the sign file and the query result files that the sign file references. 

### Understanding how CloudTrail sign files are signed


CloudTrail sign files are signed with RSA digital signatures. For each sign file, CloudTrail does the following: 

1. Creates a hash list containing the hash value for each query result file.

1. Gets a private key unique to the Region.

1. Passes the SHA-256 hash of the data signing string and the private key to the RSA signing algorithm, which produces a digital signature.

1. Encodes the byte code of the signature into hexadecimal format.

1. Puts the digital signature into the sign file.

#### Contents of the data signing string


The data signing string consists of the hash value for each query result file separated by a space. The sign file lists the `fileHashValue` for each query result file.
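A minimal sketch of building that string from the sign file's JSON (the function name is illustrative):

```python
import json

def data_signing_string(sign_file_text):
    """Join the fileHashValue of each entry in the sign file's files array,
    separated by single spaces, in the order the entries appear."""
    sign = json.loads(sign_file_text)
    return " ".join(entry["fileHashValue"] for entry in sign["files"])
```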

### Custom validation implementation steps


When implementing a custom validation solution, you will need to validate the sign file and the query result files that it references. 

#### Validate the sign file


To validate a sign file, you need its signature, the public key whose private key was used to sign it, and a data signing string that you compute. 

1. Get the sign file.

1. Verify that the sign file has been retrieved from its original location. 

1. Get the hexadecimal-encoded signature of the sign file.

1. Get the hexadecimal-encoded fingerprint of the public key whose private key was used to sign the sign file.

1. Retrieve the public key for the time range corresponding to `queryCompleteTime` in the sign file. For the time range, choose a `StartTime` earlier than the `queryCompleteTime` and an `EndTime` later than the `queryCompleteTime`.

1. From among the public keys retrieved, choose the public key whose fingerprint matches the `publicKeyFingerprint` value in the sign file.

1. Using a hash list containing the hash value for each query result file separated by a space, recreate the data signing string used to verify the sign file signature. The sign file lists the `fileHashValue` for each query result file.

   For example, if your sign file's `files` array contains the following three query result files, your hash list is "aaa bbb ccc".

   ```
   "files": [
      {
           "fileHashValue": "aaa",
           "fileName": "result_1.csv.gz"
      },
      {
           "fileHashValue": "bbb",
           "fileName": "result_2.csv.gz"
      },
      {
           "fileHashValue": "ccc",
           "fileName": "result_3.csv.gz"
      }
   ],
   ```

1. Validate the signature by passing in the SHA-256 hash of the string, the public key, and the signature as parameters to the RSA signature verification algorithm. If the result is true, the sign file is valid. 
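The final verification step can be sketched with the third-party `cryptography` package, which performs the SHA-256 hashing as part of its SHA256withRSA verification. A locally generated key pair stands in for CloudTrail's Region key here so the flow can be exercised end to end; in a real validation you would use the public key retrieved via `ListPublicKeys`.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def verify_sign_file(public_key, data_signing_string, signature):
    """Return True if the SHA256withRSA signature over the data signing string is valid."""
    try:
        public_key.verify(signature, data_signing_string,
                          padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demo with a throwaway key pair (stand-in for CloudTrail's Region key):
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"aaa bbb ccc"  # a sample data signing string
sig = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())
assert verify_sign_file(private_key.public_key(), message, sig)
```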

#### Validate the query result files


If the sign file is valid, validate the query result files that the sign file references. To validate the integrity of a query result file, compute its SHA-256 hash value on its compressed content and compare the results with the `fileHashValue` for the query result file recorded in the sign file. If the hashes match, the query result file is valid.

The following sections describe the validation process in detail.

#### A. Get the sign file


The first steps are to get the sign file and get the fingerprint of the public key.

1. Get the sign file from your Amazon S3 bucket for the query results that you want to validate. 

1. Next, get the `hashSignature` value from the sign file.

1. In the sign file, get the fingerprint of the public key whose private key was used to sign the file from the `publicKeyFingerprint` field. 

#### B. Retrieve the public key for validating the sign file


To get the public key to validate the sign file, you can use either the AWS CLI or the CloudTrail API. In both cases, you specify a time range (that is, a start time and end time) for the sign file that you want to validate. Use a time range corresponding to the `queryCompleteTime` in the sign file. One or more public keys may be returned for the time range that you specify. The returned keys may have validity time ranges that overlap.

**Note**  
Because CloudTrail uses different private/public key pairs per Region, each sign file is signed with a private key unique to its Region. Therefore, when you validate a sign file from a particular Region, you must retrieve its public key from the same Region.
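Narrowing the retrieved keys to those valid at `queryCompleteTime` can be sketched as a simple filter over the records that `ListPublicKeys` returns. The helper below is illustrative and assumes the `ValidityStartTime` and `ValidityEndTime` fields are timezone-aware datetimes, as the AWS SDKs return them.

```python
from datetime import datetime, timezone

def keys_valid_at(public_keys, query_complete_time):
    """Keep only the key records whose validity window contains the sign
    file's queryCompleteTime (an ISO 8601 UTC timestamp string)."""
    t = datetime.strptime(query_complete_time,
                          "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
    return [k for k in public_keys
            if k["ValidityStartTime"] <= t <= k["ValidityEndTime"]]
```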

##### Use the AWS CLI to retrieve public keys


To retrieve a public key for a sign file by using the AWS CLI, use the `cloudtrail list-public-keys` command. The command has the following format: 

 `aws cloudtrail list-public-keys [--start-time <start-time>] [--end-time <end-time>]` 

The start-time and end-time parameters are UTC timestamps and are optional. If not specified, the current time is used, and the currently active public key or keys are returned.

 **Sample Response** 

The response is a list of JSON objects representing the key (or keys) returned. 

##### Use the CloudTrail API to retrieve public keys


To retrieve a public key for a sign file by using the CloudTrail API, pass in start time and end time values to the `ListPublicKeys` API. The `ListPublicKeys` API returns the public keys whose private keys were used to sign the file within the specified time range. For each public key, the API also returns the corresponding fingerprint.

##### `ListPublicKeys`


This section describes the request parameters and response elements for the `ListPublicKeys` API.

**Note**  
The encoding for the binary fields for `ListPublicKeys` is subject to change. 

 **Request Parameters** 


****  

| Name | Description | 
| --- | --- | 
|  StartTime  |  Optionally specifies, in UTC, the start of the time range to look up public keys for CloudTrail sign files. If StartTime is not specified, the current time is used, and the current public key is returned.  Type: DateTime   | 
|  EndTime  |  Optionally specifies, in UTC, the end of the time range to look up public keys for CloudTrail sign files. If EndTime is not specified, the current time is used.  Type: DateTime   | 

 **Response Elements** 

`PublicKeyList`, an array of `PublicKey` objects that contains: 


****  

| Name | Description | 
| --- | --- | 
|  Value  |  The DER encoded public key value in PKCS #1 format.  Type: Blob   | 
|  ValidityStartTime  |  The starting time of validity of the public key. Type: DateTime   | 
|  ValidityEndTime  |  The ending time of validity of the public key. Type: DateTime   | 
|  Fingerprint  |  The fingerprint of the public key. The fingerprint can be used to identify the public key that you must use to validate the sign file. Type: String   | 

#### C. Choose the public key to use for validation


From among the public keys retrieved by `list-public-keys` or `ListPublicKeys`, choose the public key whose fingerprint matches the fingerprint recorded in the `publicKeyFingerprint` field of the sign file. This is the public key that you will use to validate the sign file. 
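This selection step can be sketched as follows; the helper is illustrative and operates on key records shaped like the `ListPublicKeys` response.

```python
def select_key_by_fingerprint(public_keys, fingerprint):
    """Return the key record whose Fingerprint matches the sign file's
    publicKeyFingerprint, or raise if no retrieved key matches."""
    for key in public_keys:
        if key["Fingerprint"] == fingerprint:
            return key
    raise LookupError("no public key matches fingerprint " + fingerprint)
```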

#### D. Recreate the data signing string


Now that you have the signature of the sign file and the associated public key, you need to calculate the data signing string. After you have calculated the data signing string, you will have the inputs needed to verify the signature.

The data signing string consists of the hash value for each query result file separated by a space. After you recreate this string, you can validate the sign file.

#### E. Validate the sign file


Pass the recreated data signing string, digital signature, and public key to the RSA signature verification algorithm. If the output is true, the signature of the sign file is verified and the sign file is valid. 

#### F. Validate the query result files


After you have validated the sign file, you can validate the query result files it references. The sign file contains the SHA-256 hashes of the query result files. If one of the query result files was modified after CloudTrail delivered it, the SHA-256 hashes will change, and the signature of the sign file will not match. 

Use the following procedure to validate the query result files listed in the sign file's `files` array.

1. Retrieve the original hash of the file from the `files.fileHashValue` field in the sign file.

1. Hash the compressed contents of the query result file with the hashing algorithm specified in `hashAlgorithm`.

1. Compare the hash value that you generated for each query result file with the `files.fileHashValue` in the sign file. If the hashes match, the query result files are valid.
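The three steps above can be sketched as follows, assuming the query result files have been downloaded into the same local directory as the sign file (a layout you choose; CloudTrail does not require it). The function name is illustrative.

```python
import hashlib
import json
import os

def validate_result_files(sign_file_path):
    """For each query result file listed in the sign file, compare the recorded
    fileHashValue with a freshly computed SHA-256 over its compressed bytes.
    Returns a mapping of file name to True (valid) or False (mismatch)."""
    with open(sign_file_path) as f:
        sign = json.load(f)
    base = os.path.dirname(sign_file_path)
    results = {}
    for entry in sign["files"]:
        with open(os.path.join(base, entry["fileName"]), "rb") as rf:
            computed = hashlib.sha256(rf.read()).hexdigest()
        results[entry["fileName"]] = (computed == entry["fileHashValue"])
    return results
```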

### Validating signature and query result files offline


When validating sign and query result files offline, you can generally follow the procedures described in the previous sections. However, you must take into account the following information about public keys.

#### Public keys


In order to validate offline, the public key that you need for validating query result files in a given time range must first be obtained online (by calling `ListPublicKeys`, for example) and then stored offline. This step must be repeated whenever you want to validate additional files outside the initial time range that you specified.

### Sample validation snippet


The following sample snippet provides skeleton code for validating CloudTrail sign and query result files. The skeleton code is online/offline agnostic; that is, it is up to you to decide whether to implement it with or without online connectivity to AWS. The suggested implementation uses the [Java Cryptography Extension (JCE)](https://en.wikipedia.org/wiki/Java_Cryptography_Extension) and [Bouncy Castle](https://www.bouncycastle.org/) as a security provider. 

The sample snippet shows:
+ How to create the data signing string used to validate the sign file signature.
+ How to verify the sign file's signature.
+ How to calculate the hash value for the query result file and compare it with the `fileHashValue` listed in the sign file to verify the authenticity of the query result file.

```
import org.apache.commons.codec.binary.Hex;
import org.bouncycastle.asn1.pkcs.PKCSObjectIdentifiers;
import org.bouncycastle.asn1.pkcs.RSAPublicKey;
import org.bouncycastle.asn1.x509.AlgorithmIdentifier;
import org.bouncycastle.asn1.x509.SubjectPublicKeyInfo;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.json.JSONArray;
import org.json.JSONObject;
 
import java.security.KeyFactory;
import java.security.MessageDigest;
import java.security.PublicKey;
import java.security.Security;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
 
 
public class SignFileValidationSampleCode {
 
    public void validateSignFile(String s3Bucket, String s3PrefixPath) throws Exception {
        MessageDigest messageDigest = MessageDigest.getInstance("SHA-256");
 
        // Load the sign file from S3 (using Amazon S3 Client) or from your local copy
        JSONObject signFile = loadSignFileToMemory(s3Bucket, String.format("%s/%s", s3PrefixPath, "result_sign.json"));
 
        // Using the Bouncy Castle provider as a JCE security provider - http://www.bouncycastle.org/
        Security.addProvider(new BouncyCastleProvider());
 
        List<String> hashList = new ArrayList<>();
 
        JSONArray jsonArray = signFile.getJSONArray("files");
 
        for (int i = 0; i < jsonArray.length(); i++) {
            JSONObject file = jsonArray.getJSONObject(i);
            String fileS3ObjectKey = String.format("%s/%s", s3PrefixPath, file.getString("fileName"));
 
            // Load the export file from S3 (using Amazon S3 Client) or from your local copy
            byte[] exportFileContent = loadCompressedExportFileInMemory(s3Bucket, fileS3ObjectKey);
            messageDigest.update(exportFileContent);
            byte[] exportFileHash = messageDigest.digest();
            messageDigest.reset();
            byte[] expectedHash = Hex.decodeHex(file.getString("fileHashValue"));
 
            boolean signaturesMatch = Arrays.equals(expectedHash, exportFileHash);
            if (!signaturesMatch) {
                System.err.println(String.format("Export file: %s/%s hash doesn't match.\tExpected: %s Actual: %s",
                        s3Bucket, fileS3ObjectKey,
                        Hex.encodeHexString(expectedHash), Hex.encodeHexString(exportFileHash)));
            } else {
                System.out.println(String.format("Export file: %s/%s hash match",
                        s3Bucket, fileS3ObjectKey));
            }
 
            hashList.add(file.getString("fileHashValue"));
        }
        String hashListString = hashList.stream().collect(Collectors.joining(" "));
 
        /*
            NOTE:
            To find the right public key to verify the signature, call CloudTrail ListPublicKey API to get a list
            of public keys, then match by the publicKeyFingerprint in the sign file. Also, the public key bytes
            returned from ListPublicKey API are DER encoded in PKCS#1 format:
 
            PublicKeyInfo ::= SEQUENCE {
                algorithm       AlgorithmIdentifier,
                PublicKey       BIT STRING
            }
 
            AlgorithmIdentifier ::= SEQUENCE {
                algorithm       OBJECT IDENTIFIER,
                parameters      ANY DEFINED BY algorithm OPTIONAL
            }
        */
        byte[] pkcs1PublicKeyBytes = getPublicKey(signFile.getString("queryCompleteTime"),
                signFile.getString("publicKeyFingerprint"));
        byte[] signatureContent = Hex.decodeHex(signFile.getString("hashSignature"));
 
        // Transform the PKCS#1 formatted public key to x.509 format.
        RSAPublicKey rsaPublicKey = RSAPublicKey.getInstance(pkcs1PublicKeyBytes);
        AlgorithmIdentifier rsaEncryption = new AlgorithmIdentifier(PKCSObjectIdentifiers.rsaEncryption, null);
        SubjectPublicKeyInfo publicKeyInfo = new SubjectPublicKeyInfo(rsaEncryption, rsaPublicKey);
 
        // Create the PublicKey object needed for the signature validation
        PublicKey publicKey = KeyFactory.getInstance("RSA", "BC")
                .generatePublic(new X509EncodedKeySpec(publicKeyInfo.getEncoded()));
 
        // Verify signature
        Signature signature = Signature.getInstance("SHA256withRSA", "BC");
        signature.initVerify(publicKey);
        signature.update(hashListString.getBytes("UTF-8"));
 
        if (signature.verify(signatureContent)) {
            System.out.println("Sign file signature is valid.");
        } else {
            System.err.println("Sign file signature failed validation.");
        }
 
        System.out.println("Sign file validation completed.");
    }
}
```

# Optimize CloudTrail Lake queries
Optimize queries

This page provides guidance about how to optimize CloudTrail Lake queries to improve performance and reliability. It covers specific optimization techniques as well as workarounds for common query failures.

**Topics**
+ [Recommendations for optimizing queries](#lake-queries-tuning)
+ [Workarounds for query failures](#lake-queries-troubleshooting)

## Recommendations for optimizing queries


Follow the recommendations in this section to optimize your queries.

**Topics**
+ [Optimize aggregations](#query-optimization-aggregation)
+ [Use approximation techniques](#query-optimization-approximation)
+ [Limit query results](#query-optimization-limit)
+ [Optimize LIKE queries](#query-optimization-like)
+ [Use `UNION ALL` instead of `UNION`](#query-optimization-union)
+ [Include only required columns](#query-optimization-reqcolumns)
+ [Reduce window function scope](#query-optimization-windows)

### Optimize aggregations


Excluding redundant columns in `GROUP BY` clauses can improve performance as fewer columns require less memory. For example, in the following query, we can use the `arbitrary` function on a redundant column like `eventType` to improve the performance. The `arbitrary` function on `eventType` is used to pick the field value randomly from the group as the value is the same and doesn't need to be included in the `GROUP BY` clause.

```
SELECT eventName, eventSource, arbitrary(eventType), count(*) 
FROM $EDS_ID 
GROUP BY eventName, eventSource
```

It's possible to improve the performance of the `GROUP BY` function by ordering the list of fields within the `GROUP BY` in decreasing order of their unique value count (cardinality). For example, while getting the number of events of a type in each AWS Region, performance can be improved by using the `eventName`, `awsRegion` order in the `GROUP BY` function instead of `awsRegion`, `eventName` as there are more unique values of `eventName` than there are of `awsRegion`.

```
SELECT eventName, awsRegion, count(*) 
FROM $EDS_ID 
GROUP BY eventName, awsRegion
```

### Use approximation techniques


When exact counts of distinct values are not needed, use [approximate aggregate functions](https://trino.io/docs/current/functions/aggregate.html#approximate-aggregate-functions). For example, [`approx_distinct`](https://trino.io/docs/current/functions/aggregate.html#approx_distinct) uses much less memory and runs faster than the `COUNT(DISTINCT fieldName)` operation.

### Limit query results


If only a sample response is needed for a query, restrict the results to a small number of rows by using the `LIMIT` condition. Otherwise, the query will return large results and take more time for query execution.

Using `LIMIT` along with `ORDER BY` can provide results for the top or bottom N records faster as it reduces the amount of memory needed and time taken to sort.

```
SELECT * FROM $EDS_ID
ORDER BY eventTime 
LIMIT 100;
```

### Optimize LIKE queries


You can use `LIKE` to find matching strings, but with long strings this is compute intensive. The [`regexp_like`](https://trino.io/docs/current/functions/regexp.html#regexp_like) function is in most cases a faster alternative.

Often, you can optimize a search by anchoring the substring that you're looking for. For example, if you're looking for a prefix, it's better to use '`substr`%' instead of '%`substr`%' with the `LIKE` operator and '^`substr`' with the `regexp_like` function.

### Use `UNION ALL` instead of `UNION`


`UNION ALL` and `UNION` are two ways to combine the results of two queries into one result but `UNION` removes duplicates. `UNION` needs to process all the records and find the duplicates, which is memory and compute intensive, but `UNION ALL` is a relatively quick operation. Unless you need to deduplicate records, use `UNION ALL` for the best performance.

### Include only required columns


If you don't need a column, don't include it in your query. The less data a query has to process, the faster it will run. If you have queries that do `SELECT *` in the outermost query, you should change the `*` to a list of columns that you need.

The `ORDER BY` clause returns the results of a query in sorted order. When sorting a larger amount of data, if the required memory is not available, intermediate sorted results are written to disk, which can slow down query execution. If you don't strictly need your result to be sorted, avoid adding an `ORDER BY` clause. Also, avoid adding `ORDER BY` to inner queries if it is not strictly necessary. 

### Reduce window function scope


[Window functions](https://trino.io/docs/current/functions/window.html) keep all the records that they operate on in memory in order to calculate their result. When the window is very large, the window function can run out of memory. To make sure that queries run within the available memory limits, reduce the size of the windows that your window functions operate over by adding a `PARTITION BY` clause.

Sometimes queries with window functions can be rewritten without them. For example, instead of using `row_number` or `rank`, you can use aggregate functions like [`max_by`](https://trino.io/docs/current/functions/aggregate.html#max_by) or [`min_by`](https://trino.io/docs/current/functions/aggregate.html#min_by).

The following query finds the alias most recently assigned to each KMS key using `max_by`.

```
SELECT element_at(requestParameters, 'targetKeyId') as keyId, 
max_by(element_at(requestParameters, 'aliasName'), eventTime) as mostRecentAlias 
FROM $EDS_ID 
WHERE eventsource = 'kms.amazonaws.com' 
AND eventName in ('CreateAlias', 'UpdateAlias') 
AND eventTime > DATE_ADD('week', -1, CURRENT_TIMESTAMP) 
GROUP BY element_at(requestParameters, 'targetKeyId')
```

In this case, the `max_by` function returns the alias for the record with the latest event time within the group. This query runs faster and uses less memory than an equivalent query with a window function.

## Workarounds for query failures


This section provides workarounds for common query failures.

**Topics**
+ [Query fails because response is too large](#large-responses)
+ [Query fails due to resource exhaustion](#exhausted-resources)

### Query fails because response is too large


A query can fail with the message `Query response is too large` if the response exceeds the maximum size. If this occurs, reduce the aggregation scope.

Aggregation functions like `array_agg` can cause at least one row in the query response to be very large, causing the query to fail. For example, using `array_agg(eventName)` instead of `array_agg(DISTINCT eventName)` can significantly increase the response size because duplicate event names from the selected CloudTrail events are included.

### Query fails due to resource exhaustion


If sufficient memory is not available during memory-intensive operations such as joins, aggregations, and window functions, intermediate results are spilled to disk. Spilling slows query execution and might not be enough to prevent the query from failing with `Query exhausted resources at this scale factor`. Retrying the query can sometimes resolve this error.

If the error persists even after you optimize the query, scope down the query by using the `eventTime` of the events and run it multiple times over smaller intervals of the original time range.

# Run and manage CloudTrail Lake queries with the AWS CLI


You can use the AWS CLI to run and manage your CloudTrail Lake queries. When using the AWS CLI, remember that your commands run in the AWS Region configured for your profile. If you want to run the commands in a different Region, either change the default Region for your profile, or use the **--region** parameter with the command.

## Available commands for CloudTrail Lake queries


Commands for running and managing queries in CloudTrail Lake include:
+ `start-query` to run a query.
+ `describe-query` to return metadata about a query.
+ `generate-query` to produce a query from an English language prompt. For more information, see [Create CloudTrail Lake queries from natural language prompts](lake-query-generator.md).
+ `get-query-results` to return query results for the specified query ID.
+ `list-queries` to get a list of queries for the specified event data store.
+ `cancel-query` to cancel a running query.

For a list of available commands for CloudTrail Lake event data stores, see [Available commands for event data stores](lake-eds-cli.md#lake-eds-cli-commands).

For a list of available commands for CloudTrail Lake dashboards, see [Available commands for dashboards](lake-dashboard-cli.md#lake-dashboard-cli-commands).

For a list of available commands for CloudTrail Lake integrations, see [Available commands for CloudTrail Lake integrations](lake-integrations-cli.md#lake-integrations-cli-commands).

## Produce a query from a natural language prompt with the AWS CLI


Run the `generate-query` command to generate a query from an English prompt. For `--event-data-stores`, provide the ARN (or ID suffix of the ARN) of the event data store you want to query. You can only specify one event data store. For `--prompt`, provide the prompt in English.

```
aws cloudtrail generate-query \
--event-data-stores arn:aws:cloudtrail:us-east-1:123456789012:eventdatastore/EXAMPLE-ee54-4813-92d5-999aeEXAMPLE \
--prompt "Show me all console login events for the past week?"
```

If successful, the command outputs a SQL statement and provides a `QueryAlias` that you will use with the `start-query` command to run the query against your event data store.

```
{
  "QueryStatement": "SELECT * FROM $EDS_ID WHERE eventname = 'ConsoleLogin' AND eventtime >= timestamp '2024-09-16 00:00:00' AND eventtime <= timestamp '2024-09-23 00:00:00' AND eventSource = 'signin.amazonaws.com'",
  "QueryAlias": "AWSCloudTrail-UUID"
}
```

## Start a query with the AWS CLI


The following example AWS CLI **start-query** command runs a query on the event data store specified as an ID in the query statement. The `--query-statement` parameter provides a SQL query, enclosed in single quotation marks. The optional `--delivery-s3-uri` parameter delivers the query results to a specified S3 bucket. For more information about the query language you can use in CloudTrail Lake, see [CloudTrail Lake SQL constraints](query-limitations.md).

```
aws cloudtrail start-query \
--query-statement 'SELECT eventID, eventTime FROM EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE LIMIT 10' \
--delivery-s3-uri "s3://aws-cloudtrail-lake-query-results-123456789012-us-east-1"
```

The response is a `QueryId` string. To get the status of a query, run **describe-query** using the `QueryId` value returned by **start-query**. If the query is successful, you can run **get-query-results** to get results.

**Output**

```
{
    "QueryId": "EXAMPLE2-0add-4207-8135-2d8a4EXAMPLE"
}
```

**Note**  
Queries that run for longer than one hour might time out. You can still get partial results that were processed before the query timed out.  
If you are delivering the query results to an S3 bucket using the optional `--delivery-s3-uri` parameter, the bucket policy must grant CloudTrail permission to deliver query results to the bucket. For information about manually editing the bucket policy, see [Amazon S3 bucket policy for CloudTrail Lake query results](s3-bucket-policy-lake-query-results.md).
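The start-then-poll pattern described above can be sketched in Python. The `describe` argument stands in for `boto3.client('cloudtrail').describe_query` (or any callable with the same shape); the `TIMED_OUT` status and the polling defaults are assumptions to verify against the API reference.

```python
import time

# Statuses after which a query will not change state again (assumed set).
TERMINAL_STATES = {"FINISHED", "FAILED", "CANCELLED", "TIMED_OUT"}

def wait_for_query(describe, query_id, poll_seconds=5, max_polls=720):
    """Poll a describe callable until the query reaches a terminal state.

    `describe` is any callable that accepts QueryId as a keyword argument and
    returns a dict containing a QueryStatus key.
    """
    for _ in range(max_polls):
        response = describe(QueryId=query_id)
        if response["QueryStatus"] in TERMINAL_STATES:
            return response
        time.sleep(poll_seconds)
    raise TimeoutError(f"query {query_id} did not finish after {max_polls} polls")
```

Once the returned status is `FINISHED`, you can call **get-query-results** as described in the next section.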

## Get metadata about a query with the AWS CLI


The following example AWS CLI **describe-query** command gets metadata about a query, including query run time in milliseconds, number of events scanned and matched, total number of bytes scanned, and query status. The `BytesScanned` value matches the number of bytes for which your account is billed for the query, unless the query is still running. If the query results were delivered to an S3 bucket, the response also provides the S3 URI and the delivery status.

You must specify a value for either the `--query-id` or the `--query-alias` parameter. Specifying the `--query-alias` parameter returns information about the last query run for the alias. 

```
aws cloudtrail describe-query --query-id EXAMPLEd-17a7-47c3-a9a1-eccf7EXAMPLE
```

The following is an example response.

```
{
    "QueryId": "EXAMPLE2-0add-4207-8135-2d8a4EXAMPLE", 
    "QueryString": "SELECT eventID, eventTime FROM EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE LIMIT 10", 
    "QueryStatus": "RUNNING",
    "QueryStatistics": {
        "EventsMatched": 10,
        "EventsScanned": 1000,
        "BytesScanned": 35059,
        "ExecutionTimeInMillis": 3821,
        "CreationTime": "1598911142"
    }
}
```

## Get query results with the AWS CLI


The following example AWS CLI **get-query-results** command gets event data results of a query. You must specify the `--query-id` returned by the **start-query** command. The `BytesScanned` value matches the number of bytes for which your account is billed for the query, unless the query is still running. Optional parameters include `--max-query-results`, to specify a maximum number of results that you want the command to return on a single page. If there are more results than your specified `--max-query-results` value, run the command again adding the returned `NextToken` value to get the next page of results.

```
aws cloudtrail get-query-results \
--query-id EXAMPLEd-17a7-47c3-a9a1-eccf7EXAMPLE
```

**Output**

```
{
    "QueryStatus": "RUNNING",
    "QueryStatistics": {
        "ResultsCount": 244,
        "TotalResultsCount": 1582,
        "BytesScanned":27044
    },
    "QueryResults": [
      {
        "key": "eventName",
        "value": "StartQuery"
      }
   ],
    "QueryId": "EXAMPLE2-0add-4207-8135-2d8a4EXAMPLE", 
    "QueryString": "SELECT eventID, eventTime FROM EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE LIMIT 10",
    "NextToken": "20add42078135EXAMPLE"
}
```

## List all queries on an event data store with the AWS CLI


The following example AWS CLI **list-queries** command returns a list of queries and query statuses on a specified event data store for the past seven days. You must specify an ARN or the ID suffix of an ARN value for `--event-data-store`. Optionally, to shorten the list of results, you can specify a time range, formatted as timestamps, by adding `--start-time` and `--end-time` parameters, and a `--query-status` value. Valid values for `QueryStatus` are `QUEUED`, `RUNNING`, `FINISHED`, `FAILED`, and `CANCELLED`.

**list-queries** also has optional pagination parameters. Use `--max-results` to specify a maximum number of results that you want the command to return on a single page. If there are more results than your specified `--max-results` value, run the command again adding the returned `NextToken` value to get the next page of results.

```
aws cloudtrail list-queries \
--event-data-store EXAMPLE-f852-4e8f-8bd1-bcf6cEXAMPLE \
--query-status CANCELLED \
--start-time 1598384589 \
--end-time 1598384602 \
--max-results 10
```

**Output**

```
{
    "Queries": [
        {
          "QueryId": "EXAMPLE2-0add-4207-8135-2d8a4EXAMPLE", 
          "QueryStatus": "CANCELLED",
          "CreationTime": 1598911142
        },
        {
          "QueryId": "EXAMPLE2-4e89-9230-2127-5dr3aEXAMPLE", 
          "QueryStatus": "CANCELLED",
          "CreationTime": 1598296624
        }
     ],
    "NextToken": "20add42078135EXAMPLE"
}
```
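The `--start-time` and `--end-time` values in this example are POSIX timestamps. If it helps, the following sketch produces them from a UTC date; the helper name is ours, not part of the CLI.

```python
from datetime import datetime, timezone

def to_epoch(year, month, day, hour=0, minute=0, second=0):
    """Convert a UTC date/time to the integer epoch seconds the CLI expects."""
    return int(datetime(year, month, day, hour, minute, second,
                        tzinfo=timezone.utc).timestamp())
```

For example, `to_epoch(2020, 8, 25)` yields the timestamp for 2020-08-25T00:00:00Z, suitable for `--start-time`.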

## Cancel a running query with the AWS CLI


The following example AWS CLI **cancel-query** command cancels a query with a status of `RUNNING`. You must specify a value for `--query-id`. When you run **cancel-query**, the query status might show as `CANCELLED` even if the **cancel-query** operation is not yet finished.

**Note**  
A canceled query can incur charges. Your account is still charged for the amount of data that was scanned before you canceled the query.

The following is a CLI example.

```
aws cloudtrail cancel-query \
--query-id EXAMPLEd-17a7-47c3-a9a1-eccf7EXAMPLE
```

**Output**

```
QueryId -> (string)
QueryStatus -> (string)
```

# CloudTrail Lake SQL constraints


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

CloudTrail Lake queries are SQL strings. This section provides information about the supported functions, operators, and schemas.

Only `SELECT` statements are allowed. No query strings can change or mutate data.

The CloudTrail Lake syntax for a `SELECT` statement is as follows. The event data store ID—the ID portion of the event data store's ARN—is specified for the `FROM` value.

```
SELECT [ DISTINCT ] columns [ Aggregate ]
[ FROM table event_data_store_ID]
[ WHERE columns [ Conditions ] ]
[ GROUP BY columns [ DISTINCT | Aggregate ] ]
[ HAVING columns [ Aggregate | Conditions ] ]
[ ORDER BY columns [ Aggregate | ASC | DESC | NULLS FIRST | NULLS LAST ] ]
[ LIMIT [ INT ] ]
```
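For example, the following statement (with `$EDS_ID` standing in for your event data store ID) exercises most of these clauses; the column names follow the CloudTrail event schema used throughout this guide.

```
SELECT eventSource, COUNT(*) AS eventCount
FROM $EDS_ID
WHERE eventTime > '2024-01-01 00:00:00'
GROUP BY eventSource
HAVING COUNT(*) > 100
ORDER BY eventCount DESC
LIMIT 10
```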

CloudTrail Lake supports all valid Trino SQL `SELECT` statements, functions, and operators. For more information about the supported SQL functions and operators, see [Functions and Operators](https://trino.io/docs/current/functions.html) on the Trino documentation website. 

The CloudTrail console provides a number of sample queries that can help you get started writing your own queries. For more information, see [View sample queries with the CloudTrail console](lake-console-queries.md).

For information about how to optimize your queries, see [Optimize CloudTrail Lake queries](lake-queries-optimization.md).

**Topics**
+ [Supported functions, condition and join operators](#query-aggregates-condition-operators)
+ [Advanced, multi-table query support](#query-advanced-multi-table)

## Supported functions, condition and join operators


**Supported functions**

CloudTrail Lake supports all Trino functions. For more information about the supported functions, see [Functions and Operators](https://trino.io/docs/current/functions.html) on the Trino documentation website.

**Supported condition operators**

The following are supported condition operators.

```
AND
OR
IN
NOT
IS (NOT) NULL
LIKE
BETWEEN
GREATEST
LEAST
IS DISTINCT FROM
IS NOT DISTINCT FROM
<
>
<=
>=
<>
!=
( conditions ) #parenthesized conditions
```

**Supported join operators**

The following are the supported `JOIN` operators. For more information about running multi-table queries, see [Advanced, multi-table query support](#query-advanced-multi-table).

```
UNION 
UNION ALL 
EXCEPT 
INTERSECT 
LEFT JOIN 
RIGHT JOIN 
INNER JOIN
```

## Advanced, multi-table query support


CloudTrail Lake supports advanced query language across multiple event data stores.
+ [`UNION|UNION ALL|EXCEPT|INTERSECT`](#query-multi-table-union)
+ [`LEFT|RIGHT|INNER JOIN`](#query-multi-table-left-right)

To run your query, use the **start-query** command in the AWS CLI. The following is an example, using one of the sample queries in this section.

```
aws cloudtrail start-query
--query-statement "Select eventId, eventName from EXAMPLEf852-4e8f-8bd1-bcf6cEXAMPLE UNION Select eventId, eventName from EXAMPLEg741-6y1x-9p3v-bnh6iEXAMPLE UNION ALL Select eventId, eventName from EXAMPLEb529-4e8f9l3d-6m2z-lkp5sEXAMPLE ORDER BY eventId LIMIT 10;"
```

The response is a `QueryId` string. To get the status of a query, run `describe-query`, using the `QueryId` value returned by `start-query`. If the query is successful, you can run `get-query-results` to get results.

### `UNION|UNION ALL|EXCEPT|INTERSECT`


The following is an example query that uses `UNION` and `UNION ALL` to find events by their event ID and event name in three event data stores, EDS1, EDS2, and EDS3. The results are selected from each event data store first, then results are concatenated, ordered by event ID, and limited to ten events.

```
Select eventId, eventName from EDS1
UNION
Select eventId, eventName from EDS2
UNION ALL
Select eventId, eventName from EDS3 
ORDER BY eventId LIMIT 10;
```

### `LEFT|RIGHT|INNER JOIN`


The following is an example query that uses `LEFT JOIN` to match events in an event data store named `eds2`, mapped to `edsB`, against those in a primary (left) event data store, `eds1`, mapped to `edsA`. The returned events occur on or before January 1, 2020, and the query returns the event names from both stores along with a map element from the primary store.

```
SELECT edsA.eventName, edsB.eventName, element_at(edsA.map, 'test')
FROM eds1 as edsA 
LEFT JOIN eds2 as edsB
ON edsA.eventId = edsB.eventId 
WHERE edsA.eventtime <= '2020-01-01'
ORDER BY edsB.eventName;
```

# Supported SQL schemas for event data stores


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

The following sections provide the supported SQL schema for each event data store type.

**Topics**
+ [Supported schema for CloudTrail event record fields](#query-supported-event-schema)
+ [Supported schema for CloudTrail Insights event record fields](#query-supported-insights-schema)
+ [Supported schema for AWS Config configuration item record fields](#query-supported-config-items-schema)
+ [Supported schema for AWS Audit Manager evidence record fields](#query-supported-event-schema-audit-manager)
+ [Supported schema for non-AWS event fields](#query-supported-event-schema-integration)

## Supported schema for CloudTrail event record fields


The following is the valid SQL schema for CloudTrail management, data, and network activity event record fields. For more information about CloudTrail event record fields, see [CloudTrail record contents for management, data, and network activity events](cloudtrail-event-reference-record-contents.md).

```
[
    {
        "Name": "eventversion",
        "Type": "string"
    },
    {
        "Name": "useridentity",
        "Type": "struct<type:string,principalid:string,arn:string,accountid:string,accesskeyid:string,
                 username:string,sessioncontext:struct<attributes:struct<creationdate:timestamp,
                 mfaauthenticated:string>,sessionissuer:struct<type:string,principalid:string,arn:string,
                 accountid:string,username:string>,webidfederationdata:struct<federatedprovider:string,
                 attributes:map<string,string>>,sourceidentity:string,ec2roledelivery:string,
                 ec2issuedinvpc:string>,onbehalfof:struct<userid:string,identitystorearn:string>,
                 inscopeof:struct<sourcearn:string,sourceaccount:string,issuertype:string,
                 credentialsissuedto:string>,invokedby:string,identityprovider:string>"
    },
    {
        "Name": "eventtime",
        "Type": "timestamp"
    },
    {
        "Name": "eventsource",
        "Type": "string"
    },
    {
        "Name": "eventname",
        "Type": "string"
    },
    {
        "Name": "awsregion",
        "Type": "string"
    },
    {
        "Name": "sourceipaddress",
        "Type": "string"
    },
    {
        "Name": "useragent",
        "Type": "string"
    },
    {
        "Name": "errorcode",
        "Type": "string"
    },
    {
        "Name": "errormessage",
        "Type": "string"
    },
    {
        "Name": "requestparameters",
        "Type": "map<string,string>"
    },
    {
        "Name": "responseelements",
        "Type": "map<string,string>"
    },
    {
        "Name": "additionaleventdata",
        "Type": "map<string,string>"
    },
    {
        "Name": "requestid",
        "Type": "string"
    },
    {
        "Name": "eventid",
        "Type": "string"
    },
    {
        "Name": "readonly",
        "Type": "boolean"
    },
    {
        "Name": "resources",
        "Type": "array<struct<accountid:string,type:string,arn:string,arnprefix:string>>"
    },
    {
        "Name": "eventtype",
        "Type": "string"
    },
    {
        "Name": "apiversion",
        "Type": "string"
    },
    {
        "Name": "managementevent",
        "Type": "boolean"
    },
    {
        "Name": "recipientaccountid",
        "Type": "string"
    },
    {
        "Name": "sharedeventid",
        "Type": "string"
    },
    {
        "Name": "annotation",
        "Type": "string"
    },
    {
        "Name": "vpcendpointid",
        "Type": "string"
    },
    {
        "Name": "vpcendpointaccountid",
        "Type": "string"
    },
    {
        "Name": "serviceeventdetails",
        "Type": "map<string,string>"
    },
    {
        "Name": "addendum",
        "Type": "map<string,string>"
    },
    {
        "Name": "edgedevicedetails",
        "Type": "map<string,string>"
    },
    {
        "Name": "insightdetails",
        "Type": "map<string,string>"
    },
    {
        "Name": "eventcategory",
        "Type": "string"
    },
    {
        "Name": "tlsdetails",
        "Type": "struct<tlsversion:string,ciphersuite:string,clientprovidedhostheader:string>"
    },
    {
        "Name": "sessioncredentialfromconsole",
        "Type": "string"
    },
    {
        "Name": "eventjson",
        "Type": "string"
    },
    {
        "Name": "eventjsonchecksum",
        "Type": "string"
    }
]
```
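In queries against this schema, struct fields are reached with dot notation and map fields with `element_at`, as in earlier examples. The following is a hedged illustration; the event data store ID and the filter values are placeholders.

```
SELECT useridentity.arn, element_at(requestparameters, 'bucketName') AS bucket
FROM $EDS_ID
WHERE eventsource = 's3.amazonaws.com'
AND eventname = 'CreateBucket'
```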

## Supported schema for CloudTrail Insights event record fields


The following is the valid SQL schema for Insights event record fields. For Insights events, the value of `eventcategory` is `Insight`, and the value of `eventtype` is `AwsCloudTrailInsight`. For descriptions of these fields, see [CloudTrail record contents for Insights events for event data stores](cloudtrail-insights-fields-lake.md).

**Note**  
The `insightvalue`, `insightaverage`, `baselinevalue`, and `baselineaverage` fields within the `attributions` field of `insightContext` will be deprecated starting June 23, 2025.

```
[
    {
        "Name": "eventversion",
        "Type": "string"
    },
    {
        "Name": "eventcategory",
        "Type": "string"
    },
    {
        "Name": "eventtype",
        "Type": "string"
    },
    {
        "Name": "eventid",
        "Type": "string"
    },
    {
        "Name": "eventtime",
        "Type": "timestamp"
    },
    {
        "Name": "awsregion",
        "Type": "string"
    },
    {
        "Name": "recipientaccountid",
        "Type": "string"
    },
    {
        "Name": "sharedeventid",
        "Type": "string"
    },
    {
        "Name": "addendum",
        "Type": "map<string,string>"
    },
    {
        "Name": "insightsource",
        "Type": "string"
    },
    {
        "Name": "insightstate",
        "Type": "string"
    },
    {
        "Name": "insighteventsource",
        "Type": "string"
    },
    {
        "Name": "insighteventname",
        "Type": "string"
    },
    {
        "Name": "insighterrorcode",
        "Type": "string"
    },
    {
        "Name": "insighttype",
        "Type": "string"
    },
    {
        "Name": "insightContext",
        "Type": "struct<baselineaverage:double,insightaverage:double,
                 baselineduration:integer,insightduration:integer,
                 attributions:struct<attribute:string,insightvalue:string,
                 insightaverage:double,baselinevalue:string,baselineaverage:double,
                 insight:struct<value:string,average:double>,
                 baseline:struct<value:string,average:double>>>"
    }
]
```

## Supported schema for AWS Config configuration item record fields


The following is the valid SQL schema for configuration item record fields. For configuration items, the value of `eventcategory` is `ConfigurationItem`, and the value of `eventtype` is `AwsConfigurationItem`.

```
[
    {
        "Name": "eventversion",
        "Type": "string"
    },
    {
        "Name": "eventcategory",
        "Type": "string"
    },
    {
        "Name": "eventtype",
        "Type": "string"
    },
    {
        "Name": "eventid",
        "Type": "string"
    },
    {
        "Name": "eventtime",
        "Type": "timestamp"
    },
    {
        "Name": "awsregion",
        "Type": "string"
    },
    {
        "Name": "recipientaccountid",
        "Type": "string"
    },
    {
        "Name": "addendum",
        "Type": "map<string,string>"
    },
    {
        "Name": "eventdata",
        "Type": "struct<configurationitemversion:string,configurationitemcapturetime:
                 string,configurationitemstatus:string,configurationitemstateid:string,accountid:string,
                 resourcetype:string,resourceid:string,resourcename:string,arn:string,awsregion:string,
                 availabilityzone:string,resourcecreationtime:string,configuration:map<string,string>,
                 supplementaryconfiguration:map<string,string>,relatedevents:string,
                 relationships:struct<name:string,resourcetype:string,resourceid:string,
                 resourcename:string>,tags:map<string,string>>"
    }
]
```

## Supported schema for AWS Audit Manager evidence record fields


The following is the valid SQL schema for Audit Manager evidence record fields. For Audit Manager evidence record fields, the value of `eventcategory` is `Evidence`, and the value of `eventtype` is `AwsAuditManagerEvidence`. For more information about aggregating evidence in CloudTrail Lake using Audit Manager, see [Evidence finder](https://docs.aws.amazon.com/audit-manager/latest/userguide/evidence-finder.html) in the *AWS Audit Manager User Guide*.

```
[
    {
        "Name": "eventversion",
        "Type": "string"
    },
    {
        "Name": "eventcategory",
        "Type": "string"
    },
    {
        "Name": "eventtype",
        "Type": "string"
    },
    {
        "Name": "eventid",
        "Type": "string"
    },
    {
        "Name": "eventtime",
        "Type": "timestamp"
    },
    {
        "Name": "awsregion",
        "Type": "string"
    },
    {
        "Name": "recipientaccountid",
        "Type": "string"
    },
    {
        "Name": "addendum",
        "Type": "map<string,string>"
    },
    {
        "Name": "eventdata",
        "Type": "struct<attributes:map<string,string>,awsaccountid:string,awsorganization:string,
                 compliancecheck:string,datasource:string,eventname:string,eventsource:string,
                 evidenceawsaccountid:string,evidencebytype:string,iamid:string,evidenceid:string,
                 time:timestamp,assessmentid:string,controlsetid:string,controlid:string,
                 controlname:string,controldomainname:string,frameworkname:string,frameworkid:string,
                 service:string,servicecategory:string,resourcearn:string,resourcetype:string,
                 evidencefolderid:string,description:string,manualevidences3resourcepath:string,
                 evidencefoldername:string,resourcecompliancecheck:string>"
    }
]
```

## Supported schema for non-AWS event fields


The following is the valid SQL schema for non-AWS events. For non-AWS events, the value of `eventcategory` is `ActivityAuditLog`, and the value of `eventtype` is `ActivityLog`.

```
[
    {
        "Name": "eventversion",
        "Type": "string"
    },
    {
        "Name": "eventcategory",
        "Type": "string"
    },
    {
        "Name": "eventtype",
        "Type": "string"
    },
    {
        "Name": "eventid",
        "Type": "string"
    },
    {
        "Name": "eventtime",
        "Type": "timestamp"
    },
    {
        "Name": "awsregion",
        "Type": "string"
    },
    {
        "Name": "recipientaccountid",
        "Type": "string"
    },
    {
        "Name": "addendum",
        "Type": "struct<reason:string,updatedfields:string,originalUID:string,originaleventid:string>"
    },
    {
        "Name": "metadata",
        "Type": "struct<ingestiontime:string,channelarn:string>"
    },
    {
        "Name": "eventdata",
        "Type": "struct<version:string,useridentity:struct<type:string,
                 principalid:string,details:map<string,string>>,useragent:string,eventsource:string,
                 eventname:string,eventtime:string,uid:string,requestparameters:map<string,string>,
                 responseelements:map<string,string>,errorcode:string,errormessage:string,sourceipaddress:string,
                 recipientaccountid:string,additionaleventdata:map<string,string>>"
    }
]
```

# Supported CloudWatch metrics


**Note**  
AWS CloudTrail Lake will no longer be open to new customers starting May 31, 2026. If you would like to use CloudTrail Lake, sign up prior to that date. Existing customers can continue to use the service as normal. For more information, see [CloudTrail Lake availability change](cloudtrail-lake-service-availability-change.md).

CloudTrail Lake supports Amazon CloudWatch metrics. CloudWatch is a monitoring service for AWS resources. You can use CloudWatch to collect and track metrics, set alarms, and automatically react to changes in your AWS resources. 

The `AWS/CloudTrail` namespace includes the following metrics for CloudTrail Lake.



| Metric | Description | Units | 
| --- | --- | --- | 
| HourlyDataIngested | The amount of data ingested into the event data store during the last hour. This metric is updated every hour. This metric is available for all event data store types.  | Bytes | 
| TotalDataRetained |  The amount of data retained in the event data store during its entire retention period. This metric is updated nightly. This metric is available for all event data store types.  | Bytes | 
| TotalStorageBytes |  The total compressed bytes in the event data store as of the current day. This metric is available for all event data store types.  | Bytes | 
| TotalPaidStorageBytes |  For event data stores using the one-year extendable retention [pricing option](cloudtrail-lake-manage-costs.md#cloudtrail-lake-manage-costs-pricing-option), this is the total compressed bytes stored after 366 days, up to the maximum retention period configured for the event data store. With this pricing option, storage is included at no additional cost with ingestion pricing for the first 366 days, which is the default retention period for the event data store; after 366 days, storage is pay-as-you-go. For information about pricing, see [AWS CloudTrail Pricing](https://aws.amazon.com/cloudtrail/pricing/). This metric is only available for event data stores using the one-year extendable retention pricing option.  | Bytes | 
| HourlyEventsAnalyzed | The total number of events analyzed by CloudTrail Insights in the event data store. This metric is updated every hour. This metric is only available for CloudTrail event data stores that have CloudTrail Insights enabled.  | Count | 
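As a sketch of how you might retrieve one of these metrics, the helper below builds a parameter set for the CloudWatch `GetMetricStatistics` API. Any dimensions your event data store requires are omitted here as an assumption; verify them on the metric in the CloudWatch console.

```python
from datetime import datetime, timedelta, timezone

def ingestion_metric_params(hours=24):
    """Build GetMetricStatistics parameters covering the last `hours` hours
    of the HourlyDataIngested metric. Dimensions (if any are required for
    your event data store) would be added under a Dimensions key."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/CloudTrail",
        "MetricName": "HourlyDataIngested",
        "StartTime": end - timedelta(hours=hours),
        "EndTime": end,
        "Period": 3600,          # the metric is updated hourly
        "Statistics": ["Sum"],
    }
```

The result can be passed as `boto3.client('cloudwatch').get_metric_statistics(**ingestion_metric_params())`.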

For more information about CloudWatch metrics, see the following topics.
+  [Using Amazon CloudWatch metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) 
+  [Using Amazon CloudWatch alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) 