

# Collecting data from AWS services in Security Lake
<a name="internal-sources"></a>

Amazon Security Lake can collect logs and events from the following natively-supported AWS services:
+ AWS CloudTrail management and data events (S3, Lambda)
+ Amazon Elastic Kubernetes Service (Amazon EKS) Audit Logs
+ Amazon Route 53 resolver query logs
+ AWS Security Hub CSPM findings
+ Amazon Virtual Private Cloud (Amazon VPC) Flow Logs
+ AWS WAFv2 logs

Security Lake automatically transforms this data into the [Open Cybersecurity Schema Framework (OCSF) in Security Lake](open-cybersecurity-schema-framework.md) and stores it in Apache Parquet format.

**Tip**  
 Except for CloudTrail management events, you *don't* need to separately configure logging in these services to add them as log sources in Security Lake. If you already have logging configured in these services, you also *don't* need to change your logging configuration. Security Lake pulls data directly from these services through an independent and duplicated stream of events. 



## Prerequisite: Verify permissions
<a name="add-internal-sources-permissions"></a>

To add an AWS service as a source in Security Lake, you must have the necessary permissions. Verify that the AWS Identity and Access Management (IAM) policy attached to the role that you use to add a source has permission to perform the following actions:
+ `glue:CreateDatabase`
+ `glue:CreateTable`
+ `glue:GetDatabase`
+ `glue:GetTable`
+ `glue:UpdateTable`
+ `iam:CreateServiceLinkedRole`
+ `s3:GetObject`
+ `s3:PutObject`

We recommend that the role have the following condition and resource scope for the `s3:GetObject` and `s3:PutObject` permissions.

------
#### [ JSON ]


```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUpdatingSecurityLakeS3Buckets",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::aws-security-data-lake*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceAccount": "${aws:PrincipalAccount}"
                }
            }
        }
    ]
}
```

------

These actions allow you to collect logs and events from an AWS service and send them to the correct AWS Glue database and table.

If you use an AWS KMS key for server-side encryption of your data lake, you also need permission to perform the `kms:DescribeKey` action.
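If so, a policy statement like the following sketch grants that permission. The key ARN shown is a placeholder; scope the `Resource` to your own KMS key.

```
{
    "Sid": "AllowSecurityLakeKeyDescribe",
    "Effect": "Allow",
    "Action": "kms:DescribeKey",
    "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
}
```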

## Adding an AWS service as a source
<a name="add-internal-sources"></a>

After you add an AWS service as a source, Security Lake automatically starts collecting security logs and events from it. These instructions tell you how to add a natively-supported AWS service as a source in Security Lake. For instructions on adding a custom source, see [Collecting data from custom sources in Security Lake](custom-sources.md).

------
#### [ Console ]

**To add an AWS log source (console)**

1. Open the Security Lake console at [https://console.aws.amazon.com/securitylake/](https://console.aws.amazon.com/securitylake/).

1. Choose **Sources** from the navigation pane.

1. Select the AWS service that you want to collect data from, and choose **Configure**. 

1. In the **Source settings** section, enable the source and select the **Version** of the data source that you want to use for data ingestion. By default, Security Lake ingests the latest version of the data source.
**Important**  
If you don't have the required role permissions to enable the new version of the AWS log source in the specified Region, contact your Security Lake administrator. For more information, see [Update role permissions](https://docs.aws.amazon.com/security-lake/latest/userguide/internal-sources.html#update-role-permissions).

   For your subscribers to ingest the selected version of the data source, you must also update your subscriber settings. For details about how to edit a subscriber, see [Subscriber management in Amazon Security Lake](https://docs.aws.amazon.com/security-lake/latest/userguide/subscriber-management.html).

   Optionally, you can choose to ingest the latest version only and disable all previous source versions used for data ingestion. 

1. In the **Regions** section, select the Regions in which you want to collect data for the source. Security Lake will collect data from the source from *all* accounts in the selected Regions.

1. Choose **Enable**.

------
#### [ API ]

**To add an AWS log source (API)**

To add an AWS service as a source programmatically, use the [CreateAwsLogSource](https://docs.aws.amazon.com/security-lake/latest/APIReference/API_CreateAwsLogSource.html) operation of the Security Lake API. If you're using the AWS Command Line Interface (AWS CLI), run the [create-aws-log-source](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securitylake/create-aws-log-source.html) command. The `sourceName` and `regions` parameters are required. Optionally, you can limit the scope of the source to specific `accounts` or a specific `sourceVersion`.

**Important**  
When you don't provide a parameter in your command, Security Lake assumes that the missing parameter refers to the entire set. For example, if you don't provide the `accounts` parameter, the command applies to the entire set of accounts in your organization.

The following example adds VPC Flow Logs as a source in the designated accounts and Regions. This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

**Note**  
If you apply this request to a Region in which you haven't enabled Security Lake, you'll receive an error. You can resolve the error by enabling Security Lake in that Region or by using the `regions` parameter to specify only those Regions in which you've enabled Security Lake.

```
$ aws securitylake create-aws-log-source \
--sources sourceName=VPC_FLOW,accounts='["123456789012", "111122223333"]',regions='["us-east-2"]',sourceVersion="2.0"
```
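If you call the `CreateAwsLogSource` API operation directly instead of using the AWS CLI, the equivalent request body looks like the following sketch. The account IDs are placeholders.

```
{
    "sources": [
        {
            "sourceName": "VPC_FLOW",
            "accounts": ["123456789012", "111122223333"],
            "regions": ["us-east-2"],
            "sourceVersion": "2.0"
        }
    ]
}
```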

------

## Getting the status of source collection
<a name="get-status-internal-sources"></a>

Choose your access method, and follow the steps to get a snapshot of the accounts and sources for which log collection is enabled in the current Region.

------
#### [ Console ]

**To get the status of log collection in the current Region**

1. Open the Security Lake console at [https://console.aws.amazon.com/securitylake/](https://console.aws.amazon.com/securitylake/).

1. On the navigation pane, choose **Accounts**.

1. Hover the cursor over the number in the **Sources** column to see which logs are enabled for the selected account.

------
#### [ API ]

To get the status of log collection in the current Region, use the [GetDataLakeSources](https://docs.aws.amazon.com/security-lake/latest/APIReference/API_GetDataLakeSources.html) operation of the Security Lake API. If you're using the AWS CLI, run the [get-data-lake-sources](https://docs.aws.amazon.com/cli/latest/reference/securitylake/get-data-lake-sources.html) command. For the `accounts` parameter, you can specify one or more AWS account IDs as a list. If your request succeeds, Security Lake returns a snapshot for those accounts in the current Region, including which AWS sources Security Lake is collecting data from and the status of each source. If you don't include the `accounts` parameter, the response includes the status of log collection for all accounts in which Security Lake is configured in the current Region.

For example, the following AWS CLI command retrieves log collection status for the specified accounts in the current Region. This example is formatted for Linux, macOS, or Unix, and it uses the backslash (\) line-continuation character to improve readability.

```
$ aws securitylake get-data-lake-sources \
--accounts "123456789012" "111122223333"
```
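After you retrieve the response, you can summarize it per account. The following Python sketch parses an illustrative response shape (based on the `GetDataLakeSources` API reference; field names in your output may differ by API version) and groups source statuses by account.

```python
# Summarize which sources are enabled per account from a
# get-data-lake-sources response. The sample response below is an
# illustrative sketch, not captured output from a live account.
import json

sample_response = {
    "dataLakeSources": [
        {
            "account": "123456789012",
            "sourceName": "VPC_FLOW",
            "sourceStatuses": [
                {"resource": "vpc-flow-logs", "status": "COLLECTING"}
            ],
        },
        {
            "account": "123456789012",
            "sourceName": "ROUTE53",
            "sourceStatuses": [
                {"resource": "resolver-logs", "status": "NOT_COLLECTING"}
            ],
        },
    ]
}

def summarize(response):
    """Return {account: {sourceName: [statuses]}}."""
    summary = {}
    for src in response.get("dataLakeSources", []):
        statuses = [s["status"] for s in src.get("sourceStatuses", [])]
        summary.setdefault(src["account"], {})[src["sourceName"]] = statuses
    return summary

print(json.dumps(summarize(sample_response), indent=2))
```

This is useful when a delegated administrator account collects from many member accounts and you want a quick per-account view of which sources are actually collecting.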

------

# Updating role permissions in Security Lake
<a name="update-role-permissions"></a>

If you don't have the required role permissions or resources—new AWS Lambda function and Amazon Simple Queue Service (Amazon SQS) queue—to ingest data from a new version of the data source, you must update your `AmazonSecurityLakeMetaStoreManagerV2` role permissions and create a new set of resources to process data from your sources.

Choose your preferred method, and follow the instructions to update your role permissions and create new resources to process data from a new version of an AWS log source in a specified Region. This is a one-time action, as the permissions and resources are automatically applied to future data source releases.

------
#### [ Console ]

**To update role permissions (console)**

1. Open the Security Lake console at [https://console.aws.amazon.com/securitylake/](https://console.aws.amazon.com/securitylake/).

   Sign in with the credentials of the delegated Security Lake administrator.

1. In the navigation pane, under **Settings**, choose **General**.

1. Choose **Update role permissions**.

1. In the **Service access** section, do one of the following: 
   + **Create and use a new service role**— You can use the **AmazonSecurityLakeMetaStoreManagerV2** role created by Security Lake.
   + **Use an existing service role**— You can choose an existing service role from the **Service role name** list. 

1. Choose **Apply**.

------
#### [ API ]

**To update role permissions (API)**

To update permissions programmatically, use the [UpdateDataLake](https://docs.aws.amazon.com/security-lake/latest/APIReference/API_UpdateDataLake.html) operation of the Security Lake API. To update permissions using the AWS CLI, run the [update-data-lake](https://docs.aws.amazon.com/cli/latest/reference/securitylake/update-data-lake.html) command. 

To update your role permissions, you must attach the [AmazonSecurityLakeMetastoreManager](security-iam-awsmanpol.md#security-iam-awsmanpol-AmazonSecurityLakeMetastoreManager) policy to the role. 

------

## Deleting the AmazonSecurityLakeMetaStoreManager role
<a name="remove-sl-metastoremanager-role"></a>

**Important**  
After you update your role permissions to `AmazonSecurityLakeMetaStoreManagerV2`, confirm that the data lake works correctly before you remove the old `AmazonSecurityLakeMetaStoreManager` role. We recommend waiting at least 4 hours before removing the role.

If you decide to remove the role, you must first delete the `AmazonSecurityLakeMetaStoreManager` role from AWS Lake Formation.

Follow these steps to remove the `AmazonSecurityLakeMetaStoreManager` role from the Lake Formation console.

1. Sign in to the AWS Management Console, and open the Lake Formation console at [https://console.aws.amazon.com/lakeformation/](https://console.aws.amazon.com/lakeformation/).

1. In the Lake Formation console, from the navigation pane, choose **Administrative roles and tasks**.

1. Remove `AmazonSecurityLakeMetaStoreManager` from each Region.

# Removing an AWS service as a source from Security Lake
<a name="remove-internal-sources"></a>

Choose your access method, and follow these steps to remove a natively-supported AWS service as a Security Lake source. You can remove a source for one or more Regions. When you remove the source, Security Lake stops collecting data from that source in the specified Regions and accounts, and subscribers can no longer consume new data from the source. However, subscribers can still consume data that Security Lake collected from the source before removal. You can only use these instructions to remove a natively-supported AWS service as a source. For information about removing a custom source, see [Collecting data from custom sources in Security Lake](custom-sources.md).

------
#### [ Console ]

1. Open the Security Lake console at [https://console.aws.amazon.com/securitylake/](https://console.aws.amazon.com/securitylake/).

1. Choose **Sources** from the navigation pane.

1. Select a source, and choose **Disable**.

1. Select a Region or Regions in which you want to stop collecting data from this source. Security Lake will stop collecting data from the source from *all* accounts in the selected Regions.

------
#### [ API ]

To remove an AWS service as a source programmatically, use the [DeleteAwsLogSource](https://docs.aws.amazon.com/security-lake/latest/APIReference/API_DeleteAwsLogSource.html) operation of the Security Lake API. If you're using the AWS Command Line Interface (AWS CLI), run the [delete-aws-log-source](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securitylake/delete-aws-log-source.html) command. The `sourceName` and `regions` parameters are required. Optionally, you can limit the scope of the removal to specific `accounts` or a specific `sourceVersion`.

**Important**  
When you don't provide a parameter in your command, Security Lake assumes that the missing parameter refers to the entire set. For example, if you don't provide the `accounts` parameter, the command applies to the entire set of accounts in your organization.

The following example removes VPC Flow Logs as a source in the designated accounts and Regions.

```
$ aws securitylake delete-aws-log-source \
--sources sourceName=VPC_FLOW,accounts='["123456789012", "111122223333"]',regions='["us-east-1", "us-east-2"]',sourceVersion="2.0"
```

The following example removes Route 53 as a source in the designated account and Regions.

```
$ aws securitylake delete-aws-log-source \
--sources sourceName=ROUTE53,accounts='["123456789012"]',regions='["us-east-1", "us-east-2"]',sourceVersion="2.0"
```

The preceding examples are formatted for Linux, macOS, or Unix, and they use the backslash (\) line-continuation character to improve readability.

------

# CloudTrail event logs in Security Lake
<a name="cloudtrail-event-logs"></a>

AWS CloudTrail provides you with a history of AWS API calls for your account, including API calls made using the AWS Management Console, the AWS SDKs, the command line tools, and certain AWS services. CloudTrail also allows you to identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address that the calls were made from, and when the calls occurred. For more information, see the [AWS CloudTrail User Guide](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/).

Security Lake can collect logs associated with CloudTrail management events and CloudTrail data events for S3 and Lambda. CloudTrail management events, S3 data events, and Lambda data events are three separate sources in Security Lake. As a result, they have different values for the [sourceName](https://docs.aws.amazon.com/security-lake/latest/APIReference/API_AwsLogSourceConfiguration.html#securitylake-Type-AwsLogSourceConfiguration-sourceName) parameter when you add one of these as an ingested log source. Management events, also known as control plane events, provide insight into management operations that are performed on resources in your AWS account. CloudTrail data events, also known as data plane operations, show the resource operations performed on or within resources in your AWS account. These operations are often high-volume activities.
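Because the three CloudTrail-related sources are separate, enabling all of them requires three entries in the `sources` list of a `CreateAwsLogSource` request. The following sketch shows such a list; the `sourceName` values follow the `AwsLogSourceName` enumeration in the API reference, and the account ID is a placeholder.

```
[
    {"sourceName": "CLOUD_TRAIL_MGMT", "regions": ["us-east-1"], "accounts": ["123456789012"]},
    {"sourceName": "S3_DATA", "regions": ["us-east-1"], "accounts": ["123456789012"]},
    {"sourceName": "LAMBDA_DATA", "regions": ["us-east-1"], "accounts": ["123456789012"]}
]
```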

To collect CloudTrail management events in Security Lake, you must have at least one CloudTrail multi-Region organization trail that collects read and write CloudTrail management events, and logging must be enabled for the trail. You don't need to change your existing trail configuration to add it as a log source in Security Lake. Security Lake pulls data directly from CloudTrail through an independent and duplicated stream of events.

A multi-Region trail delivers log files from multiple Regions to a single Amazon Simple Storage Service (Amazon S3) bucket for a single AWS account. If you already have a multi-Region trail managed through CloudTrail console or AWS Control Tower, no further action is required.
+ For information about creating and managing a trail through CloudTrail, see [Creating a trail for an organization](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/creating-trail-organization.html) in the *AWS CloudTrail User Guide*. 
+ For information about creating and managing a trail through AWS Control Tower, see [Logging AWS Control Tower actions with AWS CloudTrail](https://docs.aws.amazon.com/controltower/latest/userguide/logging-using-cloudtrail.html) in the *AWS Control Tower User Guide*.

When you add CloudTrail events as a source, Security Lake immediately starts collecting your CloudTrail event logs. It consumes CloudTrail management and data events directly from CloudTrail through an independent and duplicated stream of events.

Security Lake doesn't manage your CloudTrail events or affect your existing CloudTrail configurations. To manage access and retention of your CloudTrail events directly, you must use the CloudTrail service console or API. For more information, see [Viewing events with CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html) in the *AWS CloudTrail User Guide*.

The following list provides GitHub repository links to the mapping reference for how Security Lake normalizes CloudTrail events to OCSF.

**GitHub OCSF repository for CloudTrail events**
+ Source version 1 [(v1.0.0-rc.2)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.0.0-rc.2/CloudTrail)
+ Source version 2 [(v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/CloudTrail)

# Amazon EKS Audit Logs in Security Lake
<a name="eks-audit-logs"></a>

When you add Amazon EKS Audit Logs as a source, Security Lake starts collecting in-depth information about the activities performed on the Kubernetes resources running in your Amazon Elastic Kubernetes Service (Amazon EKS) clusters. EKS Audit Logs help you detect potentially suspicious activities in your EKS clusters.

Security Lake consumes EKS Audit Log events directly from the Amazon EKS control plane logging feature through an independent and duplicated stream of audit logs. This process is designed to not require additional setup or affect existing Amazon EKS control plane logging configurations that you might have. For more information, see [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html) in the *Amazon EKS User Guide*.

Amazon EKS Audit Logs are supported only in OCSF v1.1.0. For information about how Security Lake normalizes EKS Audit Logs events to OCSF, see the mapping reference in the [GitHub OCSF repository for Amazon EKS Audit Logs events (v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/EKS%20Audit%20Logs).

# Route 53 resolver query logs in Security Lake
<a name="route-53-logs"></a>

Route 53 resolver query logs track DNS queries made by resources within your Amazon Virtual Private Cloud (Amazon VPC). This helps you understand how your applications are operating and spot security threats.

When you add Route 53 resolver query logs as a source in Security Lake, Security Lake immediately starts collecting your resolver query logs directly from Route 53 through an independent and duplicated stream of events.

Security Lake doesn't manage your Route 53 logs or affect your existing resolver query logging configurations. To manage resolver query logs, you must use the Route 53 service console. For more information, see [Managing Resolver query logging configurations](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-query-logging-configurations-managing.html) in the *Amazon Route 53 Developer Guide*.

The following list provides GitHub repository links to the mapping reference for how Security Lake normalizes Route 53 logs to OCSF.

**GitHub OCSF repository for Route 53 logs**
+ Source version 1 [(v1.0.0-rc.2)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.0.0-rc.2/Route53)
+ Source version 2 [(v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/Route53)

# Security Hub CSPM findings in Security Lake
<a name="security-hub-findings"></a>

Security Hub CSPM findings help you understand your security posture in AWS and let you check your environment against security industry standards and best practices. Security Hub CSPM collects findings from various sources, including integrations with other AWS services, third-party product integrations, and checks against Security Hub CSPM controls. Security Hub CSPM processes findings in a standard format called AWS Security Finding Format (ASFF).

When you add Security Hub CSPM findings as a source in Security Lake, Security Lake immediately starts collecting your findings directly from Security Hub CSPM through an independent and duplicated stream of events. Security Lake also transforms the findings from ASFF to the [Open Cybersecurity Schema Framework (OCSF) in Security Lake](open-cybersecurity-schema-framework.md).

Security Lake doesn't manage your Security Hub CSPM findings or affect your Security Hub CSPM settings. To manage Security Hub CSPM findings, you must use the Security Hub CSPM service console, API, or AWS CLI. For more information, see [Findings in AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings.html) in the *AWS Security Hub User Guide*.

The following list provides GitHub repository links to the mapping reference for how Security Lake normalizes Security Hub CSPM findings to OCSF.

**GitHub OCSF repository for Security Hub CSPM findings**
+ Source version 1 [(v1.0.0-rc.2)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.0.0-rc.2/Security%20Hub)
+ Source version 2 [(v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/Security%20Hub)

# VPC Flow Logs in Security Lake
<a name="vpc-flow-logs"></a>

The VPC Flow Logs feature of Amazon VPC captures information about the IP traffic going to and from network interfaces within your environment. 

When you add VPC Flow Logs as a source in Security Lake, Security Lake immediately starts collecting your VPC Flow Logs. It consumes VPC Flow Logs directly from Amazon VPC through an independent and duplicated stream of Flow Logs.

Security Lake doesn't manage your VPC Flow Logs or affect your Amazon VPC configurations. To manage your Flow Logs, you must use the Amazon VPC service console. For more information, see [Work with Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-flow-logs.html) in the *Amazon VPC User Guide*.

The following list provides GitHub repository links to the mapping reference for how Security Lake normalizes VPC Flow Logs to OCSF.

**GitHub OCSF repository for VPC Flow Logs**
+ Source version 1 [(v1.0.0-rc.2)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.0.0-rc.2/VPC%20Flowlogs)
+ Source version 2 [(v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/VPC%20Flowlogs)

# AWS WAF logs in Security Lake
<a name="aws-waf"></a>

When you add AWS WAF as a log source in Security Lake, Security Lake immediately starts collecting the logs. AWS WAF is a web application firewall that you can use to monitor web requests that your end users send to your applications and to control access to your content. Logged information includes the time that AWS WAF received a web request from your AWS resource, detailed information about the request, and details about the rules that the request matched. 

Security Lake consumes AWS WAF logs directly from AWS WAF through an independent and duplicated stream of logs. This process is designed to not require additional setup or affect existing AWS WAF configurations. Security Lake only retrieves log data that's permitted by the AWS WAF [web access control list (web ACL)](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html) configuration. If [Data protection](https://docs.aws.amazon.com/waf/latest/developerguide/waf-data-protection-and-logging.html) is enabled for the web ACL in Security Lake accounts, the generated data will be redacted or hashed based on your web ACL settings. For information about using AWS WAF to protect your application resources, see [How AWS WAF works](https://docs.aws.amazon.com/waf/latest/developerguide/how-aws-waf-works.html) in the *AWS WAF Developer Guide*.

**Important**  
If you are using an Amazon CloudFront distribution as the resource type in AWS WAF, you must select the US East (N. Virginia) Region to ingest the global logs in Security Lake.

AWS WAF logs are supported only in OCSF v1.1.0. For information about how Security Lake normalizes AWS WAF log events to OCSF, see the mapping reference in the [GitHub OCSF repository for AWS WAF logs (v1.1.0)](https://github.com/ocsf/examples/tree/main/mappings/markdown/AWS/v1.1.0/WAF).
