

# The AWS Security Reference Architecture



|  | 
| --- |
| Influence the future of the AWS Security Reference Architecture (AWS SRA) by taking a [short survey](https://amazonmr.au1.qualtrics.com/jfe/form/SV_e3XI1t37KMHU2ua). | 

The following diagram illustrates the AWS SRA. This architectural diagram brings together all the AWS security-related services. It is built around a simple, three-tier web architecture that can fit on a single page. In such a workload, there is a *web tier* through which users connect and interact with the *application tier,* which handles the actual business logic of the application: taking inputs from the user, doing some computation, and generating outputs. The application tier stores and retrieves information from the *data tier*. The architecture is purposefully modular and provides high-level abstraction for many modern web applications.

**Architecture diagrams**  
To customize the reference architecture diagrams in this guide based on your business needs, you can download the following .zip file and extract its contents.  
[![Download icon](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/download.png) Download the diagram source file (Microsoft PowerPoint format)](samples/aws-security-reference-architecture-diagrams.zip) 

![\[AWS Security Reference Architecture diagram.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/sra.png)


For this reference architecture, the actual web application and data tiers are deliberately represented as simply as possible, through Amazon EC2 instances and an Amazon Aurora database, respectively. Most architecture diagrams focus on, and dive deep into, the web, application, and data tiers, and for readability they often omit the security controls. This diagram flips that emphasis to show security wherever possible, and keeps the application and data tiers only as detailed as necessary to show security features meaningfully.

The AWS SRA contains all AWS security-related services available at the time of publication. (See [document history](doc-history.md).) However, not every workload or environment, based on its unique threat exposure, has to deploy every security service. Our goal is to provide a reference for a range of options, including descriptions of how these services fit together architecturally, so that your business can make decisions that are most appropriate for your infrastructure, workload, and security needs, based on risk.

The following sections walk through each OU and account to understand its objectives and the individual AWS security services associated with it. For each element (typically an AWS service), this document provides the following information:
+ Brief overview of the element and its security purpose in the AWS SRA. For more detailed descriptions and technical information about individual services, see [the appendix](appendix.md).
+ Recommended placement to most effectively enable and manage the service. This is captured in the individual architecture diagrams for each account and OU.
+ Configuration, management, and data sharing links to other security services. How does this service rely on, or support, other security services?
+ Design considerations. First, the document highlights *optional* features or configurations that have important security implications. Second, where our teams' experience includes common variations in the recommendations we make—typically as a result of alternate requirements or constraints—the document describes those options.

**Topics**
+ [Org Management account](org-management.md)
+ [Security OU – Security Tooling account](security-tooling.md)
+ [Security OU – Log Archive account](log-archive.md)
+ [Infrastructure OU – Network account](network.md)
+ [Infrastructure OU – Shared Services account](shared-services.md)
+ [Workloads OU – Application account](application.md)

# Org Management account




The following diagram illustrates the AWS security services that are configured in the Org Management account.

![\[Security services for Org Management account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/org-management-acct.png)


The sections [Using AWS Organizations for security](organizations-security.md) and [The management account, trusted access, and delegated administrators](management-account.md) earlier in this guide discussed the purpose and security objectives of the Org Management account in depth. Follow the [security best practices](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_best-practices_mgmt-acct.html) for your Org Management account. These include using an email address that is managed by your business, maintaining the correct administrative and security contact information (such as attaching a phone number to the account in the event AWS needs to contact the owner of the account), enabling multi-factor authentication (MFA) for all users, and regularly reviewing who has access to the Org Management account. Services deployed in the Org Management account should be configured with appropriate roles, trust policies, and other permissions so that the administrators of those services (who must access them in the Org Management account) cannot also inappropriately access other services.

## Service control policies


With [AWS Organizations](https://aws.amazon.com/organizations/), you can centrally manage policies across multiple AWS accounts. For example, you can apply [service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scp.html) (SCPs) across multiple AWS accounts that are members of an organization. SCPs allow you to define which AWS service APIs can and cannot be run by [IAM](https://aws.amazon.com/iam/) principals (such as IAM users and roles) in your organization's member AWS accounts. SCPs are created and applied from the Org Management account, which is the AWS account that you used when you created your organization. Read more about SCPs in the [Using AWS Organizations for security](organizations-security.md) section earlier in this reference. 
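
As an illustration of what an SCP looks like, the following sketch builds a simple policy document in Python. The specific statement (denying `cloudtrail:StopLogging` everywhere) is a hypothetical example for orientation, not a recommended baseline; tailor statements to your own requirements.

```python
import json

# A minimal, illustrative SCP: deny stopping CloudTrail logging in the
# member accounts the policy is attached to. The action chosen here is an
# example only.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyStopLogging",
            "Effect": "Deny",
            "Action": "cloudtrail:StopLogging",
            "Resource": "*",
        }
    ],
}

scp_json = json.dumps(scp, indent=2)
print(scp_json)

# To create and attach it from the Org Management account (sketch; requires
# appropriate credentials, so shown commented out):
# import boto3
# org = boto3.client("organizations")
# resp = org.create_policy(Name="deny-stop-logging",
#                          Type="SERVICE_CONTROL_POLICY",
#                          Content=scp_json,
#                          Description="Example SCP")
# org.attach_policy(PolicyId=resp["Policy"]["PolicySummary"]["Id"],
#                   TargetId="<OU or account ID>")
```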

If you use AWS Control Tower to manage your AWS organization, it will deploy [a set of SCPs as preventive guardrails](https://docs.aws.amazon.com/controltower/latest/userguide/guardrails-reference.html) (categorized as mandatory, strongly recommended, or elective). These guardrails help you govern your resources by enforcing organization-wide security controls. These SCPs automatically use an `aws-control-tower` tag that has a value of `managed-by-control-tower`.

**Design consideration**  
SCPs affect only *member* accounts in the AWS organization. Although they are applied from the Org Management account, they have no effect on users or roles in that account. To learn about how SCP evaluation logic works, and to see examples of recommended structures, see the AWS blog post [How to use service control policies in AWS Organizations](https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-in-aws-organizations/).

## Resource control policies


[Resource control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html) (RCPs) offer centralized control over the maximum available permissions for resources in your organization. An RCP defines a permissions guardrail or sets limits on the actions that identities can take on resources in your organization. You can use RCPs to restrict who can access your resources and enforce requirements on how your resources can be accessed in your organization's member AWS accounts. You can attach RCPs directly to individual accounts, OUs, or the organization root. For a detailed explanation of how RCPs work, see [RCP evaluation](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps_evaluation.html) in the AWS Organizations documentation. Read more about RCPs in the [Using AWS Organizations for security](organizations-security.md) section earlier in this reference.
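
To make the resource-centric nature of RCPs concrete, the following sketch builds an illustrative RCP that denies any Amazon S3 request made over plain HTTP, regardless of which identity makes the call. This mirrors a common data-perimeter control; adapt it to your own requirements before use.

```python
import json

# Illustrative RCP: because RCPs attach to resources rather than identities,
# the statement includes a Principal element ("*" = any caller). BoolIfExists
# ensures the deny applies only when the aws:SecureTransport key is present
# and false (i.e., the request used HTTP instead of HTTPS).
rcp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "EnforceSecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"aws:SecureTransport": "false"}},
        }
    ],
}

rcp_json = json.dumps(rcp, indent=2)
print(rcp_json)
```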

If you use AWS Control Tower to manage your AWS organization, it will deploy a set of RCPs as preventive guardrails (categorized as mandatory, strongly recommended, or elective). These guardrails help you govern your resources by enforcing organization-wide security controls. These RCPs automatically use an `aws-control-tower` tag that has a value of `managed-by-control-tower`.

**Design considerations**  
RCPs affect only resources in *member* accounts in the organization. They have no effect on resources in the management account. This also means that RCPs apply to member accounts that are designated as delegated administrators.
RCPs apply to resources for a subset of AWS services. For more information, see [List of AWS services that support RCPs](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_rcps.html#rcp-supported-services) in the AWS Organizations documentation. You can use [AWS Config Rules](https://aws.amazon.com/config/) and [AWS Lambda functions](https://aws.amazon.com/pm/lambda/) to monitor and automate the enforcement of security controls on resources that aren't currently supported by RCPs.
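
For the AWS Config plus Lambda pattern mentioned above, a custom rule's Lambda handler receives a resource's configuration item and reports it as compliant or noncompliant. The sketch below evaluates a hypothetical `encrypted` attribute; a real handler would report results back to AWS Config with `put_evaluations`, and the attribute checked is an assumption for illustration.

```python
import json

def evaluate(configuration_item):
    """Illustrative compliance check: flag resources whose (hypothetical)
    'encrypted' attribute is false. Replace with the attribute your
    organization actually needs to enforce."""
    encrypted = configuration_item.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"

def lambda_handler(event, context):
    # AWS Config delivers the configuration item inside a JSON-encoded
    # 'invokingEvent' field of the Lambda event.
    invoking_event = json.loads(event["invokingEvent"])
    item = invoking_event["configurationItem"]
    compliance = evaluate(item)
    # A real handler would report back to AWS Config, for example:
    # boto3.client("config").put_evaluations(
    #     Evaluations=[...], ResultToken=event["resultToken"])
    return {"resource_id": item.get("resourceId"), "compliance": compliance}

# Local dry run with a fake event:
fake_event = {
    "invokingEvent": json.dumps({
        "configurationItem": {
            "resourceId": "vol-123",
            "configuration": {"encrypted": False},
        }
    })
}
result = lambda_handler(fake_event, None)
print(result)
```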

## Declarative policies


A declarative policy is a type of AWS Organizations management policy that helps you centrally declare and enforce your desired configuration for a given AWS service at scale across an organization. Declarative policies currently support the [Amazon EC2](https://aws.amazon.com/ec2/), [Amazon VPC](https://aws.amazon.com/vpc/), and [Amazon EBS](https://aws.amazon.com/ebs/) services. Available service attributes include enforcing Instance Metadata Service Version 2 (IMDSv2), allowing troubleshooting through the EC2 serial console, configuring [Amazon Machine Image (AMI)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) settings, and blocking public access for Amazon EBS snapshots, Amazon EC2 AMIs, and Amazon VPC resources. For the latest supported services and attributes, see [Declarative policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_declarative.html#orgs_manage_policies_declarative-supported-controls) in the AWS Organizations documentation.

You can enforce the baseline configuration for an AWS service by making a few selections on the AWS Organizations and AWS Control Tower consoles or by using a few AWS Command Line Interface (AWS CLI) and AWS SDK commands. Declarative policies are enforced in the service's control plane, which means that the baseline configuration for an AWS service is always maintained, even when the service introduces new features or APIs, when new accounts are added to an organization, or when new principals and resources are created. Declarative policies can be applied to an entire organization or to specific OUs or accounts. The *effective policy* is the set of rules that are inherited from the organization root and OUs along with the policies that are directly attached to the account. If a declarative policy is [detached](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_policies_detach.html), the attribute state will roll back to its state before the declarative policy was attached.
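
A declarative policy document is JSON, like other Organizations management policies. The sketch below shows the general shape of a policy that requires IMDSv2 on new EC2 instances; the attribute names and the `@@assign` operator follow the management-policy conventions, but treat them as illustrative and confirm the exact schema in the AWS Organizations documentation before deploying.

```python
import json

# Sketch of a declarative policy requiring IMDSv2 for new EC2 instances.
# Attribute names below are illustrative assumptions -- verify against the
# declarative policy syntax in the AWS Organizations documentation.
declarative_policy = {
    "ec2_attributes": {
        "instance_metadata_defaults": {
            "http_tokens": {"@@assign": "required"}
        }
    }
}

policy_json = json.dumps(declarative_policy, indent=2)
print(policy_json)

# Hypothetical creation call, shown for orientation only:
# import boto3
# boto3.client("organizations").create_policy(
#     Name="require-imdsv2",
#     Type="DECLARATIVE_POLICY_EC2",
#     Content=policy_json,
#     Description="Example declarative policy")
```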

You can use declarative policies to create custom error messages. For example, if an API operation fails because of a declarative policy, you can set the error message or provide a custom URL, such as a link to an internal wiki or to a message that describes the failure. This helps provide users with more information so they can troubleshoot the issue themselves. You can also audit the creation, update, and deletion of declarative policies by using AWS CloudTrail.

Declarative policies provide *account status reports,* which enable you to review the current status of all attributes that are supported by declarative policies for the accounts in scope. You can choose the accounts and OUs to include in the report scope or choose an entire organization by selecting the root. This report helps you assess readiness by providing a breakdown by AWS Region and specifying whether the current state of an attribute is *uniform across accounts* (through the `numberOfMatchedAccounts` value) or *inconsistent across accounts* (through the `numberOfUnmatchedAccounts` value).
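
The matched and unmatched counts in the report reduce to a simple readiness check. The helper below is a sketch that assumes report entries carry the `numberOfMatchedAccounts` and `numberOfUnmatchedAccounts` fields named above; the entry shapes and values are made up for illustration.

```python
def attribute_is_uniform(entry):
    """Return True if every in-scope account already matches the desired
    attribute state (no unmatched accounts remain)."""
    return entry.get("numberOfUnmatchedAccounts", 0) == 0

# Example entries shaped like the fields described above (values made up):
report = [
    {"attribute": "imds_v2", "region": "us-east-1",
     "numberOfMatchedAccounts": 12, "numberOfUnmatchedAccounts": 0},
    {"attribute": "ebs_snapshot_block_public_access", "region": "us-east-1",
     "numberOfMatchedAccounts": 9, "numberOfUnmatchedAccounts": 3},
]

# Attributes that are not yet uniform across accounts need remediation
# before the declarative policy can be enforced cleanly.
not_ready = [e["attribute"] for e in report if not attribute_is_uniform(e)]
print(not_ready)
```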

**Design consideration**  
When you configure a service attribute by using a declarative policy, the policy might impact multiple APIs. Any noncompliant actions will fail. Account administrators will not be able to modify the value of the service attribute at the individual account level.

## Centralized root access


All member accounts in AWS Organizations have their own root user, which is an identity that has complete access to all AWS services and resources in that member account. IAM provides centralized root access management to manage root access across all member accounts. This helps prevent root user usage in member accounts and helps provide recovery at scale. The centralized root access feature has two essential capabilities: root credentials management and root sessions. 
+ The root credentials management capability provides central management and helps secure the root user across all member accounts. This capability includes the removal of long-term root credentials, prevention of root credential recovery by member accounts, and provisioning of new member accounts with no root credentials by default. It also provides an easy way to demonstrate compliance. When root user management is centralized, you can remove root user passwords, access keys, and signing certificates, and deactivate multi-factor authentication (MFA) for the root user, across all member accounts.
+ The root sessions capability enables you to perform privileged root user actions by using short-term credentials on member accounts from the Org Management account or from delegated administrator accounts. This capability helps you enable short-term root access that is scoped to specific actions, adhering to the principle of least privilege.

For centralized root credential management, you need to enable root credential management and root sessions capabilities at the organization level from the Org Management account or in a delegated administrator account. Following AWS SRA best practices, we delegate this capability to the Security Tooling account. For information about configuring and using centralized root user access, see the AWS Security blog post, [Centrally managing root access for customers using AWS Organizations](https://aws.amazon.com/blogs/aws/centrally-managing-root-access-for-customers-using-aws-organizations/).
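
To show how a root session might be scoped to a specific task, the sketch below builds an identity policy that allows `sts:AssumeRoot` only for deleting root credentials. The `sts:TaskPolicyArn` condition key and the managed task-policy ARN shown are assumptions based on the feature's documented task-scoping model; verify both in the IAM documentation before use.

```python
import json

# Illustrative identity policy for operators in the Security Tooling account.
# Scoping by task policy (here, deleting root user credentials) follows the
# least-privilege intent described above. The condition key and task-policy
# ARN are assumptions -- confirm them in the IAM documentation.
assume_root_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowScopedAssumeRoot",
            "Effect": "Allow",
            "Action": "sts:AssumeRoot",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "sts:TaskPolicyArn":
                        "arn:aws:iam::aws:policy/root-task/IAMDeleteRootUserCredentials"
                }
            },
        }
    ],
}

print(json.dumps(assume_root_policy, indent=2))
```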

## IAM Identity Center


[AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) is an identity federation service that helps you centrally manage single sign-on (SSO) access to all your AWS accounts, principals, and cloud workloads. IAM Identity Center also helps you manage access and permissions to commonly used third-party software as a service (SaaS) applications. Identity providers integrate with IAM Identity Center by using SAML 2.0. Bulk and just-in-time provisioning can be done by using the System for Cross-domain Identity Management (SCIM). IAM Identity Center can also integrate with on-premises or AWS-managed Microsoft Active Directory (AD) domains as an identity provider through the use of AWS Directory Service. IAM Identity Center includes a user portal where your end users can find and access their assigned AWS accounts, roles, cloud applications, and custom applications in one place.

IAM Identity Center natively integrates with AWS Organizations and runs in the Org Management account by default. However, to exercise least privilege and tightly control access to the management account, IAM Identity Center administration can be delegated to a specific member account. In the AWS SRA, the Shared Services account is the delegated administrator account for IAM Identity Center. Before you enable delegated administration for IAM Identity Center, review [these considerations](https://aws.amazon.com/blogs/security/getting-started-with-aws-sso-delegated-administration/#_Considerations_when_delegating). You will find more information about delegation in the [Shared Services account](shared-services.md) section. Even after you enable delegation, IAM Identity Center still needs to run in the Org Management account to perform certain [IAM Identity Center-related tasks](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html#delegated-admin-tasks-member-account), which include managing permission sets that are provisioned in the Org Management account. 

Within the IAM Identity Center console, accounts are displayed by their encapsulating OU. This enables you to quickly discover your AWS accounts, apply common sets of permissions, and manage access from a central location. 

IAM Identity Center includes an identity store where specific user information must be stored. However, IAM Identity Center does not have to be the authoritative source for workforce information. In cases where your enterprise already has an authoritative source, IAM Identity Center supports the following types of identity providers (IdPs).
+ **IAM Identity Center identity store –** Choose this option if the following two options are not available. Users are created, group assignments are made, and permissions are assigned in the identity store. Even if your authoritative source is external to IAM Identity Center, a copy of principal attributes will be stored in the identity store.
+ **Microsoft Active Directory (AD) –** Choose this option if you want to continue managing users in either your directory in AWS Directory Service for Microsoft Active Directory or your self-managed directory in Active Directory.
+ **External identity provider –** Choose this option if you prefer to manage users in an external third-party, SAML-based IdP.

You can rely on an existing IdP that is already in place within your enterprise. This makes it easier to manage access across multiple applications and services, because you are creating, managing, and revoking access from a single location. For example, if someone leaves your team, you can revoke their access to all applications and services (including AWS accounts) from one location. This reduces the need for multiple credentials and provides you with an opportunity to integrate with your human resources (HR) processes.

**Design consideration**  
Use an external IdP if that option is available to your enterprise. If your IdP supports System for Cross-domain Identity Management (SCIM), take advantage of the SCIM capability in IAM Identity Center to automate user, group, and permission provisioning (synchronization). This allows AWS access to stay in sync with your corporate workflow for new hires, employees who are moving to another team, and employees who are leaving the company. At any given time, you can have only one directory or one SAML 2.0 identity provider connected to IAM Identity Center. However, you can switch to another identity provider.

## IAM access advisor


IAM access advisor provides traceability data in the form of service last accessed information for your AWS accounts and OUs. Use this detective control to contribute to a [least privilege strategy](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege). For IAM principals, you can view two types of last accessed information: allowed AWS service information and allowed action information. The information includes the date and time when the attempt was made.

IAM access advisor within the Org Management account lets you view service last accessed data for the Org Management account, an OU, a member account, or an IAM policy in your AWS organization. This information is available in the IAM console within the management account and can also be obtained programmatically by using the IAM access advisor APIs through the AWS CLI or a programmatic client. The information indicates which principals in an organization or account last attempted to access the service and when. Last accessed information provides insight for actual service usage (see [example scenarios](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor-example-scenarios.html)), so you can reduce IAM permissions to only those services that are actually used.
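
In practice, last accessed data is retrieved by calling `generate_service_last_accessed_details` and then `get_service_last_accessed_details` on the IAM API, whose response includes a `ServicesLastAccessed` list. The sketch below processes entries of that shape locally to find services that a principal has not used recently; the sample data is made up.

```python
from datetime import datetime, timedelta, timezone

def unused_services(services, max_age_days=90, now=None):
    """Given entries shaped like the 'ServicesLastAccessed' list returned by
    IAM's GetServiceLastAccessedDetails API, return the service namespaces
    that were never accessed or not accessed within max_age_days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for svc in services:
        last = svc.get("LastAuthenticated")  # absent if never accessed
        if last is None or last < cutoff:
            stale.append(svc["ServiceNamespace"])
    return stale

# Example with made-up data:
now = datetime(2025, 1, 1, tzinfo=timezone.utc)
sample = [
    {"ServiceNamespace": "s3", "LastAuthenticated": now - timedelta(days=10)},
    {"ServiceNamespace": "redshift"},  # never accessed
]
print(unused_services(sample, max_age_days=90, now=now))
```

Candidates returned by a helper like this can guide which actions or services to trim from an IAM policy, in line with the least-privilege strategy above.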

## AWS Systems Manager


Quick Setup and Explorer, which are capabilities of [AWS Systems Manager](https://aws.amazon.com/systems-manager/), both support AWS Organizations and operate from the Org Management account.

[Quick Setup](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-quick-setup.html) is an automation feature of Systems Manager. It enables the Org Management account to easily define configurations for Systems Manager to engage on your behalf across accounts in your AWS organization. You can enable Quick Setup across your entire AWS organization or choose specific OUs. Quick Setup can schedule AWS Systems Manager Agent (SSM Agent) to run biweekly updates on your EC2 instances and can set up a daily scan of those instances to identify missing patches. 

[Explorer](https://docs.aws.amazon.com/systems-manager/latest/userguide/Explorer.html) is a customizable operations dashboard that reports information about your AWS resources. Explorer displays an aggregated view of operations data for your AWS accounts and across AWS Regions. This includes data about your EC2 instances and patch compliance details. After you complete Integrated Setup (which also includes Systems Manager OpsCenter) within AWS Organizations, you can aggregate data in Explorer by OU or for an entire AWS organization. Systems Manager aggregates the data into the AWS Org Management account before displaying it in Explorer.

The [Workloads OU](application.md) section later in this guide discusses the use of the SSM Agent on the EC2 instances in the Application account.

## AWS Control Tower


[AWS Control Tower](https://aws.amazon.com/controltower/) provides a straightforward way to set up and govern a secure, multi-account AWS environment, which is called a *landing zone*. AWS Control Tower creates your landing zone by using AWS Organizations, and provides ongoing account management and governance as well as implementation best practices. You can use AWS Control Tower to provision new accounts in a few steps while ensuring that the accounts conform to your organizational policies. You can even add existing accounts to a new AWS Control Tower environment. 

AWS Control Tower has a broad and flexible set of features. A key feature is its ability to *orchestrate* the capabilities of several other [AWS services](https://docs.aws.amazon.com/controltower/latest/userguide/integrated-services.html), including AWS Organizations, AWS Service Catalog, and IAM Identity Center, to build a landing zone. For example, by default AWS Control Tower uses AWS CloudFormation to establish a baseline, AWS Organizations service control policies (SCPs) to prevent configuration changes, and AWS Config rules to continuously detect non-conformance. AWS Control Tower employs blueprints that help you quickly align your multi-account AWS environment with [AWS Well-Architected security foundation design principles](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/security.html). Among its governance features, AWS Control Tower offers guardrails that prevent deployment of resources that don't conform to selected policies.

You can get started implementing AWS SRA guidance with AWS Control Tower. For example, AWS Control Tower establishes an AWS organization with the recommended multi-account architecture. It provides blueprints for identity management, federated access to accounts, centralized logging, cross-account security audits, a workflow for provisioning new accounts, and account baselines with network configurations.

In the AWS SRA, AWS Control Tower is within the Org Management account because AWS Control Tower uses this account to set up an AWS organization automatically and designates that account as the management account. This account is used for billing across your AWS organization. It's also used for Account Factory provisioning of accounts, to manage OUs, and to manage guardrails. If you are launching AWS Control Tower in an existing AWS organization, you can use the existing management account. AWS Control Tower will use that account as the designated management account.

**Design consideration**  
If you want to do additional baselining of controls and configurations across your accounts, you can use [Customizations for AWS Control Tower (CfCT)](https://aws.amazon.com/solutions/implementations/customizations-for-aws-control-tower/). With CfCT, you can customize your AWS Control Tower landing zone by using a CloudFormation template and SCPs. You can deploy the custom template and policies to individual accounts and OUs within your organization. CfCT integrates with AWS Control Tower lifecycle events to ensure that resource deployments stay in sync with your landing zone. 

## AWS Artifact


[AWS Artifact](https://aws.amazon.com/artifact/) provides on-demand access to AWS security and compliance reports and select online agreements. Reports available in AWS Artifact include System and Organization Controls (SOC) reports, Payment Card Industry (PCI) reports, and certifications from accreditation bodies across geographies and compliance verticals that validate the implementation and operating effectiveness of AWS security controls. AWS Artifact helps you perform your due diligence of AWS with enhanced transparency into our security control environment. It also lets you continuously monitor the security and compliance of AWS with immediate access to new reports.

AWS Artifact Agreements enable you to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA) for an individual account and for the accounts that are part of your organization in AWS Organizations. 

You can provide the AWS audit artifacts to your auditors or regulators as evidence of AWS security controls. You can also use the responsibility guidance provided by some of the AWS audit artifacts to design your cloud architecture. This guidance helps determine the additional security controls you can put in place to support the specific use cases of your system.

AWS Artifact is hosted in the Org Management account to provide a central location where you can review, accept, and manage agreements with AWS. This is because agreements that are accepted at the management account flow down to the member accounts.

**Design consideration**  
Users within the Org Management account should be restricted to using only the Agreements feature of AWS Artifact and nothing else. To implement segregation of duties, AWS Artifact is also hosted in the Security Tooling account, where you can delegate permissions to your compliance stakeholders and external auditors to access audit artifacts. You can implement this separation by defining fine-grained IAM permission policies. For examples, see [Example IAM policies](https://docs.aws.amazon.com/artifact/latest/ug/security-iam.html#example-iam-policies) in the AWS documentation.
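
As a sketch of such a fine-grained policy, the following allows only agreement-related AWS Artifact actions. The action names follow the style of the example policies in the AWS Artifact documentation but should be treated as illustrative; verify the current action list before use.

```python
import json

# Illustrative policy for Org Management account users who should manage
# only AWS Artifact agreements and nothing else. Action names are
# assumptions -- check the example IAM policies in the AWS Artifact docs.
agreements_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageAgreementsOnly",
            "Effect": "Allow",
            "Action": [
                "artifact:AcceptAgreement",
                "artifact:DownloadAgreement",
                "artifact:TerminateAgreement",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(agreements_only_policy, indent=2))
```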

## Distributed and centralized security service guardrails


In the AWS SRA, AWS Security Hub, AWS Security Hub CSPM, Amazon GuardDuty, AWS Config, IAM Access Analyzer, AWS CloudTrail organization trails, and often Amazon Macie are deployed with an appropriately delegated set of guardrails across accounts, and they also provide centralized monitoring, management, and governance across your AWS organization. You will find this group of services in every type of account represented in the AWS SRA. These services should be provisioned as part of your account onboarding and baselining process. The [GitHub code repository](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of AWS security-focused services across your accounts, including the AWS Org Management account.

In addition to these services, AWS SRA includes two security-focused services, Amazon Detective and AWS Audit Manager, which support the integration and delegated administrator functionality in AWS Organizations. However, those are not included as part of the recommended services for account baselining. We have seen that these services are best used in the following scenarios:
+ You have a dedicated team or group of resources that perform those digital forensics and IT audit functions. Detective is best utilized by security analyst teams, and Audit Manager is helpful to your internal audit or compliance teams.
+ You want to focus on a core set of tools such as AWS Config, Amazon GuardDuty, AWS Security Hub, and AWS Security Hub CSPM at the start of your project, and then build on these by using services that provide additional capabilities.

# Security OU – Security Tooling account




The following diagram illustrates the AWS security services that are configured in the Security Tooling account.

![\[Security services for Security Tooling account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/security-tooling-acct.png)


The Security Tooling account is dedicated to operating security services, monitoring AWS accounts, and automating security alerting and response. The security objectives include the following:
+ Provide a dedicated account with controlled access to manage access to the security guardrails, monitoring, and response.
+ Maintain the appropriate centralized security infrastructure to monitor security operations data and maintain traceability. Detection, investigation, and response are essential parts of the security lifecycle and can be used to support a quality process, a legal or compliance obligation, and for threat identification and response efforts.
+ Further support a defense-in-depth organization strategy by maintaining another layer of control over appropriate security configuration and operations such as encryption keys and security group settings. This is an account where security operators work. Read-only/audit roles to view AWS organization-wide information are typical, whereas write/modify roles are limited in number, tightly controlled, monitored, and logged.

**Design considerations**  
AWS Control Tower names the account under the Security OU the *Audit Account* by default. You can rename the account during the AWS Control Tower setup.
It might be appropriate to have more than one Security Tooling account. For example, monitoring and responding to security events are often assigned to a dedicated team. Network security might warrant its own account and roles in collaboration with the cloud infrastructure or network team. Such splits retain the objective of separating centralized security enclaves and further emphasize the separation of duties, least privilege, and potential simplicity of team assignments. If you are using AWS Control Tower, it restricts the creation of additional AWS accounts under the Security OU.

## Delegated administrator for security services


The Security Tooling account serves as the administrator account for security services that are managed in an administrator/member structure throughout the AWS accounts. As mentioned earlier, this is handled through the AWS Organizations delegated administrator functionality. Services in the AWS SRA that [currently support delegated administrator](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html) include IAM centralized management of root access, AWS Config, AWS Firewall Manager, Amazon GuardDuty, IAM Access Analyzer, Amazon Macie, AWS Security Hub, AWS Security Hub CSPM, Amazon Detective, AWS Audit Manager, Amazon Inspector, AWS CloudTrail, and AWS Systems Manager. Your security team manages the security features of these services and monitors any security-specific events or findings.

AWS IAM Identity Center supports delegated administration to a member account. AWS SRA uses the Shared Services account as the delegated administrator account for IAM Identity Center, as explained later in the [IAM Identity Center](shared-services.md#shared-sso) section of the Shared Services account.

## Centralized root access


The Security Tooling account is the delegated administrator account for the IAM centralized management of root access capability. This capability must be enabled at the organization level by enabling credential management and privileged root actions in member accounts. Delegated administrators must be explicitly granted `sts:AssumeRoot` permissions to be able to take privileged root actions on behalf of member accounts. This permission is available only after privileged root actions in member accounts are enabled from the organization management account or the delegated administrator account. With this permission, users can perform privileged root user tasks on member accounts centrally from the Security Tooling account. After you launch a privileged session, you can delete a misconfigured S3 bucket policy, delete a misconfigured Amazon SQS queue policy, delete the root user credentials for a member account, or reenable root user credentials for a member account. You can perform these actions from the console, by using the AWS Command Line Interface (AWS CLI), or through APIs.
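To scope which operators in the Security Tooling account can take a given privileged root action, the `sts:AssumeRoot` permission can be conditioned on a specific task policy. The following is a minimal sketch of such an identity-based IAM policy, expressed as a Python dictionary; the managed root-task policy ARN shown is illustrative, so verify the available task policy ARNs in the IAM documentation before use.

```python
import json

# Sketch of an identity-based policy for security operators: allow sts:AssumeRoot
# only for the task of unlocking a misconfigured S3 bucket policy. The
# sts:TaskPolicyArn condition key binds the permission to one privileged task.
assume_root_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3BucketPolicyRecovery",
            "Effect": "Allow",
            "Action": "sts:AssumeRoot",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    # Illustrative AWS managed root-task policy ARN.
                    "sts:TaskPolicyArn": "arn:aws:iam::aws:policy/root-task/S3UnlockBucketPolicy"
                }
            },
        }
    ],
}

print(json.dumps(assume_root_policy, indent=2))
```

Attaching separate statements per task policy lets you grant each operator role only the privileged root actions it needs.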

## AWS CloudTrail


[AWS CloudTrail](https://aws.amazon.com/cloudtrail/) is a service that supports the governance, compliance, and auditing of activity in your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail is integrated with AWS Organizations, and that integration can be used to create a single trail that logs all events for all accounts in the AWS organization. This is referred to as an *organization trail*. You can create and manage an organization trail only from within the management account for the organization or from a delegated administrator account. When you create an organization trail, a trail with the name that you specify is created in every AWS account that belongs to your AWS organization. The trail logs activity for all accounts, including the management account, in the AWS organization and stores the logs in a single S3 bucket. Because of the sensitivity of this S3 bucket, you should secure it by following the best practices outlined in the [Amazon S3 as central log store](log-archive.md#log-s3) section later in this guide. All accounts in the AWS organization can see the organization trail in their list of trails. However, member AWS accounts have view-only access to this trail. By default, when you create an organization trail in the CloudTrail console, the trail is a multi-Region trail. For additional security best practices, see the [CloudTrail documentation](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/best-practices-security.html).

In the AWS SRA, the Security Tooling account is the delegated administrator account for managing CloudTrail. The corresponding S3 bucket to store the organization trail logs is created in the Log Archive account. This is to separate the management and usage of CloudTrail log privileges. For information about how to create or update an S3 bucket to store log files for an organization trail, see the [CloudTrail documentation](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/create-s3-bucket-policy-for-cloudtrail.html#org-trail-bucket-policy). As a security best practice, add the `aws:SourceArn` condition key of the organization trail to the resource policy of the S3 bucket (and any other resources such as KMS keys or SNS topics). This ensures that the S3 bucket accepts only data that is associated with the specific trail. The trail is configured with log file validation for log file integrity validation. The log and digest files are encrypted by using SSE-KMS. The organization trail is also integrated with a log group in CloudWatch Logs to send events for long-term retention.
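The `aws:SourceArn` best practice can be sketched as the following S3 bucket policy for the organization trail bucket, assembled here in Python. The bucket name, account ID, organization ID, and trail name are placeholders; see the linked CloudTrail documentation for the authoritative policy text.

```python
import json

# Placeholders -- substitute your own values.
bucket = "amzn-s3-demo-org-trail-bucket"   # bucket in the Log Archive account
mgmt_account_id = "111122223333"           # organization management account
org_id = "o-exampleorgid"
trail_arn = f"arn:aws:cloudtrail:us-east-1:{mgmt_account_id}:trail/org-trail"

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{bucket}",
            # Accept requests made only on behalf of this specific trail.
            "Condition": {"StringEquals": {"aws:SourceArn": trail_arn}},
        },
        {
            "Sid": "AWSCloudTrailOrgWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/AWSLogs/{org_id}/*",
            "Condition": {
                "StringEquals": {
                    "aws:SourceArn": trail_arn,
                    "s3:x-amz-acl": "bucket-owner-full-control",
                }
            },
        },
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

The same `aws:SourceArn` condition should be repeated on the KMS key policy and any SNS topic policy that the trail uses.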

**Note**  
You can create and manage organization trails from both management and delegated administrator accounts. However, as a best practice, you should limit access to the management account and use the delegated administrator functionality where it is available.

**Design considerations**  
CloudTrail does not log data events by default, because these are often high-volume activities. However, you should capture data events for specific critical AWS resources, such as S3 buckets, Lambda functions, and SNS topics, as well as log events from outside AWS that are sent to CloudTrail Lake. To do this, configure your organization trail to include data events from specific resources by specifying the ARNs of each individual resource.
If a member account requires access to CloudTrail log files for its own account, you can [selectively share](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-sharing-logs.html) the organization's CloudTrail log files from the central S3 bucket. However, if member accounts require local Amazon CloudWatch log groups for their account's CloudTrail logs or want to configure log management and data events (read-only, write-only, management events, data events) differently from the organization trail, they can create a local trail with the appropriate controls. Local account-specific trails incur [additional cost](https://aws.amazon.com/cloudtrail/pricing/).
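The data-event scoping described above can be sketched with CloudTrail advanced event selectors. The bucket ARN below is a placeholder; a structure like this is passed to the trail's event selector configuration (for example, through the `PutEventSelectors` API).

```python
# Sketch: advanced event selectors that log S3 object-level (data) events
# for one critical bucket only, keeping high-volume data events scoped.
advanced_event_selectors = [
    {
        "Name": "Log data events for one critical bucket",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            {"Field": "resources.type", "Equals": ["AWS::S3::Object"]},
            # Placeholder ARN; the trailing slash matches objects in the bucket.
            {"Field": "resources.ARN", "StartsWith": ["arn:aws:s3:::amzn-s3-demo-bucket/"]},
        ],
    }
]

for selector in advanced_event_selectors:
    print(selector["Name"])
```

Adding one selector per critical resource type keeps the trail's data-event volume (and cost) predictable.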

## AWS Security Hub CSPM


[AWS Security Hub Cloud Security Posture Management](https://aws.amazon.com/security-hub/cspm/) (AWS Security Hub CSPM), previously known as AWS Security Hub, provides you with a comprehensive view of your security posture in AWS and helps you check your environment against security industry standards and best practices. Security Hub CSPM collects security data from integrated AWS services, supported third-party products, and other custom security products that you might use. It helps you continuously monitor and analyze your security trends and identify the highest priority security issues. In addition to the ingested sources, Security Hub CSPM generates its own findings, which are represented by security controls that map to one or more security standards. These standards include AWS Foundational Security Best Practices (FSBP), Center for Internet Security (CIS) AWS Foundations Benchmark v1.2.0 and v1.4.0, National Institute of Standards and Technology (NIST) SP 800-53 Rev. 5, Payment Card Industry Data Security Standard (PCI DSS), and [service-managed standards](https://docs.aws.amazon.com/securityhub/latest/userguide/service-managed-standards.html). For a list of current security standards and details on specific security controls, see the [Standards reference for Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/standards-reference.html) in the Security Hub CSPM documentation.

Security Hub CSPM integrates with AWS Organizations to simplify security posture management across all your existing and future accounts in your AWS organization. You can use the Security Hub CSPM [central configuration feature](https://docs.aws.amazon.com/securityhub/latest/userguide/central-configuration-intro.html) from the delegated administrator account (in this case, Security Tooling) to specify how the Security Hub CSPM service, security standards, and security controls are configured in your organization accounts and organizational units (OUs) across Regions. You can configure these settings in a few steps from one primary Region, which is referred to as the *home Region*. If you don't use central configuration, you must configure Security Hub CSPM separately in each account and Region. The delegated administrator can designate accounts and OUs as *self-managed*, where the member can configure settings separately in each Region, or as *centrally managed*, where the delegated administrator can configure the member account or OU across Regions. You can designate all accounts and OUs in your organization as centrally managed, all self-managed, or a combination of both. This simplifies the enforcement of a consistent configuration while providing the flexibility to modify it for each OU and account.
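A centrally managed configuration can be sketched as the configuration-policy payload that the delegated administrator passes to the Security Hub CSPM `CreateConfigurationPolicy` API. The standard ARN and control ID below are illustrative; check the API reference for the exact field names and valid values.

```python
# Sketch (field names based on the Security Hub CSPM CreateConfigurationPolicy
# API): enable the service and the FSBP standard for centrally managed
# accounts, with one illustrative control disabled.
configuration_policy = {
    "SecurityHub": {
        "ServiceEnabled": True,
        "EnabledStandardIdentifiers": [
            "arn:aws:securityhub:::standards/aws-foundational-security-best-practices/v/1.0.0"
        ],
        "SecurityControlsConfiguration": {
            # Illustrative control ID; all controls not listed stay enabled.
            "DisabledSecurityControlIds": ["CloudTrail.2"]
        },
    }
}

print(configuration_policy["SecurityHub"]["ServiceEnabled"])
```

After creating the policy, the delegated administrator associates it with target OUs or accounts, which Security Hub CSPM then configures consistently across Regions.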

The Security Hub CSPM delegated administrator account can also view findings, insights, and control details from all member accounts. You can additionally designate an aggregation Region within the delegated administrator account to centralize your findings across your accounts and linked Regions. Your findings are continuously and bidirectionally synced between the aggregation Region and all other Regions.

Security Hub CSPM supports integrations with several AWS services. Amazon GuardDuty, AWS Config, Amazon Macie, IAM Access Analyzer, AWS Firewall Manager, Amazon Inspector, Amazon Route 53 Resolver DNS Firewall, and AWS Systems Manager Patch Manager can feed findings to Security Hub CSPM. Security Hub CSPM processes findings by using a standard format called the [AWS Security Finding Format (ASFF)](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format.html). Security Hub CSPM correlates the findings across integrated products to prioritize the most important ones. You can enrich the metadata of Security Hub CSPM findings to help better contextualize, prioritize, and take action on the security findings. This enrichment adds resource tags, a new AWS application tag, and account name information to every finding that's ingested into Security Hub CSPM. This helps you fine-tune findings for automation rules, search or filter findings and insights, and assess security posture status by application. In addition, you can use [automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html#automation-rules-how-it-works) to automatically update findings. As Security Hub CSPM ingests findings, it can apply a variety of rule actions, such as suppressing findings, changing their severity, and adding notes to findings. These rule actions take effect when findings match your specified criteria, such as the resource or account IDs the finding is associated with, or its title. You can use automation rules to update select finding fields in the ASFF. Rules apply to both new and updated findings.
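An automation rule like those described above can be sketched as the following request body, shaped after the `CreateAutomationRule` API. The account ID is a placeholder; this hypothetical rule suppresses informational findings from a sandbox account.

```python
# Sketch of a Security Hub CSPM automation rule: suppress INFORMATIONAL
# findings from a sandbox account. The account ID is a placeholder.
automation_rule = {
    "RuleName": "Suppress informational sandbox findings",
    "RuleOrder": 1,
    "Description": "Sandbox findings are triaged separately",
    "Criteria": {
        # ASFF string filters: both conditions must match.
        "AwsAccountId": [{"Value": "444455556666", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "INFORMATIONAL", "Comparison": "EQUALS"}],
    },
    "Actions": [
        {
            "Type": "FINDING_FIELDS_UPDATE",
            "FindingFieldsUpdate": {
                "Workflow": {"Status": "SUPPRESSED"},
                "Note": {
                    "Text": "Auto-suppressed by automation rule",
                    "UpdatedBy": "sra-automation",
                },
            },
        }
    ],
}

print(automation_rule["RuleName"])
```

Because rules apply to both new and updated findings, the `RuleOrder` value controls which rule wins when several rules match the same finding.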

During the investigation of a security event, you can navigate from Security Hub CSPM to Amazon Detective to investigate a GuardDuty finding. Security Hub CSPM recommends aligning the delegated administrator accounts for services such as Detective (where they exist) for smoother integration. For example, if you do not align administrator accounts between Detective and Security Hub CSPM, navigating from findings into Detective will not work. For a comprehensive list, see [Overview of AWS service integrations with Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-internal-providers.html#internal-integrations-summary) in the Security Hub CSPM documentation.

You can use Security Hub CSPM with the [Network Access Analyzer](https://aws.amazon.com/blogs/aws/new-amazon-vpc-network-access-analyzer/) feature of Amazon VPC to help continuously monitor the compliance of your AWS network configuration. This helps you block unwanted network access and protect your critical resources from external exposure. For further architecture and implementation details, see the AWS blog post [Continuous verification of network compliance using Amazon VPC Network Access Analyzer and AWS Security Hub CSPM](https://aws.amazon.com/blogs/networking-and-content-delivery/continuous-verification-of-network-compliance-using-amazon-vpc-network-access-analyzer-and-aws-security-hub/).

In addition to its monitoring features, Security Hub CSPM supports integration with Amazon EventBridge to automate the remediation of specific findings. You can define custom actions to take when a finding is received. For example, you can configure custom actions to send findings to a ticketing system or to an automated remediation system. For additional discussions and examples, see the AWS blog posts [Automated response and remediation with AWS Security Hub CSPM](https://aws.amazon.com/blogs/security/automated-response-and-remediation-with-aws-security-hub/) and [How to deploy the AWS solution for Security Hub CSPM automated response and remediation](https://aws.amazon.com/blogs/security/how-to-deploy-the-aws-solution-for-security-hub-automated-response-and-remediation/).
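A custom action integration can be sketched as an EventBridge event pattern that matches findings sent through a hypothetical "send to ticketing" custom action. The custom action ARN is a placeholder created beforehand in the Security Tooling account.

```python
import json

# Sketch: an EventBridge event pattern that matches a Security Hub CSPM
# custom action. The custom action ARN is a placeholder.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Custom Action"],
    "resources": [
        "arn:aws:securityhub:us-east-1:111122223333:action/custom/SendToTicketing"
    ],
}

# This would be supplied as the EventPattern of an EventBridge rule whose
# target is the ticketing system or remediation workflow.
print(json.dumps(event_pattern))
```

When an analyst selects the custom action in the console, Security Hub CSPM emits a matching event, and the rule's target receives the full finding payload.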

Security Hub CSPM uses service-linked AWS Config Rules to perform most of its security checks for controls. To support these controls, [AWS Config must be enabled on all accounts](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-prereq-config.html)—including the administrator (or delegated administrator) account and member accounts—in each AWS Region where Security Hub CSPM is enabled.

**Design considerations**  
If a compliance standard, such as PCI-DSS, is already present in Security Hub CSPM, the fully managed Security Hub CSPM service is the easiest way to operationalize it. However, if you want to assemble your own compliance or security standard, which might include security, operational, or cost optimization checks, AWS Config conformance packs offer a simplified customization process. (For more information about AWS Config and conformance packs, see the [AWS Config](#tool-config) section.)
Common use cases for Security Hub CSPM include the following:  
+ As a dashboard that provides application owners with visibility into the security and compliance posture of their AWS resources
+ As a central view of security findings that security operations, incident responders, and threat hunters use to triage and act on AWS security and compliance findings across AWS accounts and Regions
+ To aggregate and route security and compliance findings from across AWS accounts and Regions to a centralized security information and event management (SIEM) tool or other security orchestration system
For additional guidance on these use cases, including how to set them up, see the blog post [Three recurring Security Hub CSPM usage patterns and how to deploy them](https://aws.amazon.com/blogs/security/three-recurring-security-hub-usage-patterns-and-how-to-deploy-them/).

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [Security Hub CSPM](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/securityhub/securityhub_org). It includes automatic enablement of the service, delegated administration to a member account (Security Tooling), and configuration to enable Security Hub CSPM for all existing and future accounts in the AWS organization.

## AWS Security Hub


[AWS Security Hub](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub-v2.html) is a unified cloud security solution that prioritizes your critical security threats and helps you respond at scale. Security Hub detects security issues in near real time by automatically correlating and enriching security signals from multiple sources, such as posture management (AWS Security Hub CSPM), vulnerability management (Amazon Inspector), sensitive data (Amazon Macie), and threat detection (Amazon GuardDuty). This enables security teams to prioritize active risks in their cloud environments through automated analyses and contextual insights. Security Hub provides a visual representation of the potential attack path that attackers can exploit to gain access to resources associated with an exposure finding. This transforms complex security signals into actionable insights, so you can make informed decisions about your security quickly.

Security Hub has been strategically redesigned to simplify the enablement of associated security service building blocks to arrive at a security outcome. By correlating security findings in a threat matrix across different security signals in near real time, you can prioritize the most critical risks first. The findings are correlated to detect exposure associated with AWS resources. Exposures represent broader weaknesses in security controls, misconfigurations, or other areas that could be exploited by active threats. For example, an exposure might be an EC2 instance that is reachable from the internet and has software vulnerabilities that have high likelihood of exploitation.

Security Hub and Security Hub CSPM are complementary services. [Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) provides a comprehensive view of your security posture and helps you evaluate your cloud environment against security industry standards and best practices. Security Hub provides a unified experience that helps you prioritize and respond to critical security issues. Security Hub CSPM findings are routed to Security Hub automatically, where they're correlated with findings from other security services, such as Amazon Inspector, to generate exposures. This helps you identify the most critical risks in your environment.

Security Hub also provides a summary of resources in your AWS environment by type and associated findings. Resources are prioritized by exposures and attack sequences. When you choose a resource type, you can review all the resources associated with that resource type.

For the optimal experience, we [recommend](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-v2-recommendations.html) enabling Security Hub and Security Hub CSPM as well as enabling these other security services: [Amazon GuardDuty](https://aws.amazon.com/guardduty/), [Amazon Inspector](https://aws.amazon.com/inspector/), and [Amazon Macie](https://aws.amazon.com/macie/). You can gain visibility into whether these services and features are uniformly enabled across all your organization's member accounts by using Security Hub Coverage findings.

In the AWS SRA, the Security Tooling account acts as the delegated administrator for Security Hub, Security Hub CSPM, and other AWS security services. Within the Security Tooling account you can view all resources associated with member accounts. You can also view all the resources in your home AWS Region from linked AWS Regions.

**Implementation note**  
[Enabling Security Hub](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-v2-enable.html#securityhub-v2-enable-management-account) requires three steps, including procedures that take into account whether you have previously enabled Security Hub CSPM. Security Hub is natively integrated with AWS Organizations, which simplifies the configuration and implementation process, and centralizes and aggregates all findings into a single location. In accordance with the AWS SRA best practice, use the [Security Tooling account](dedicated-accounts.md) as the delegated administrator account to manage and configure Security Hub. Use Security Hub configuration settings to enable all Regions, OUs, and accounts automatically, including future Regions and accounts. You should also set up cross-Region aggregation to aggregate findings, resources, and trends from multiple AWS Regions into a single home Region. During configuration, you can also enable any native integrations such as Jira Cloud or ServiceNow.

**Design considerations**  
Security Hub findings are formatted in the Open Cybersecurity Schema Framework (OCSF). Security Hub generates findings in OCSF and receives findings in OCSF from Security Hub CSPM and other AWS services. These OCSF findings can be sent over Amazon EventBridge for automations or stored in a central log aggregation account to perform security log analysis and retention.
The AWS organization management account cannot designate itself as the delegated administrator in Security Hub. This aligns with the AWS SRA best practice of designating the Security Tooling account as the delegated administrator. Also note:  
+ The delegated administrator account for Security Hub CSPM automatically becomes the delegated administrator for Security Hub.
+ Removing delegated administration through Security Hub also removes delegated administration for Security Hub CSPM. Likewise, removing delegated administration through Security Hub CSPM also removes it for Security Hub.
Security Hub includes features that automatically modify and take action on findings based on your specifications. Security Hub supports the following types of automations:  
+ Automation rules, which automatically update findings, suppress findings, and send findings to ticketing tools in near real time based on defined criteria.
+ Automated response and remediation, which uses custom EventBridge rules that define automatic actions to take against specific findings and insights.
Security Hub can configure Amazon Inspector across all member accounts and Regions through policies, and can configure GuardDuty and Security Hub CSPM through deployment. Policies generate AWS Organizations policies for accounts and Regions. Deployments are one-time actions that enable a security capability across selected accounts and Regions. Deployments do not apply to newly enabled accounts. As an alternative, you can auto-enable features for new member accounts in GuardDuty and Security Hub CSPM.

## Amazon GuardDuty


[Amazon GuardDuty](https://aws.amazon.com/guardduty/) is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. You must always capture and store appropriate logs for monitoring and audit purposes, but GuardDuty pulls independent streams of data directly from AWS CloudTrail, Amazon VPC flow logs, and AWS DNS logs. You don't have to manage Amazon S3 bucket policies or modify the way you collect and store your logs. GuardDuty permissions are managed as service-linked roles that you can revoke at any time by disabling GuardDuty. This makes it easy to enable the service without complex configuration, and it eliminates the risk that an IAM permission modification or S3 bucket policy change will affect the operation of the service.

In addition to providing [foundational data sources](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_data-sources.html), GuardDuty provides optional features to identify security findings. These include EKS Protection, RDS Protection, S3 Protection, Malware Protection, and Lambda Protection. For new detectors, these optional features are enabled by default except for EKS Protection, which must be manually enabled.
+ With [GuardDuty S3 Protection](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html), GuardDuty monitors Amazon S3 data events in CloudTrail in addition to the default CloudTrail management events. Monitoring data events enables GuardDuty to monitor object-level API operations for potential security risks to data within your S3 buckets.
+ [GuardDuty Malware Protection](https://docs.aws.amazon.com/guardduty/latest/ug/malware-protection.html) detects the presence of malware on Amazon EC2 instances or container workloads by initiating agentless scans on attached Amazon Elastic Block Store (Amazon EBS) volumes. GuardDuty also detects potential malware in S3 buckets by scanning newly uploaded objects or new versions of existing objects.
+ [GuardDuty RDS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html) is designed to profile and monitor access activity to Amazon Aurora databases without impacting database performance.
+ [GuardDuty EKS Protection](https://docs.aws.amazon.com/guardduty/latest/ug/kubernetes-protection.html) includes EKS Audit Log Monitoring and EKS Runtime Monitoring. With EKS Audit Log Monitoring, GuardDuty monitors [Kubernetes audit logs](https://docs.aws.amazon.com/guardduty/latest/ug/features-kubernetes-protection.html#guardduty_k8s-audit-logs) from Amazon EKS clusters and analyzes them for potentially malicious and suspicious activity. EKS Runtime Monitoring uses the GuardDuty security agent (which is an Amazon EKS add-on) to provide runtime visibility into individual Amazon EKS workloads. The GuardDuty security agent helps identify specific containers within your Amazon EKS clusters that are potentially compromised. It can also detect attempts to escalate privileges from an individual container to the underlying Amazon EC2 host or to the broader AWS environment.

GuardDuty also provides a feature known as [Extended Threat Detection](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty-extended-threat-detection.html) that automatically detects multi-stage attacks that span data sources, multiple types of AWS resources, and time within an AWS account. GuardDuty correlates these events, which are called *signals*, to identify scenarios that present themselves as potential threats to your AWS environment, and then generates an attack sequence finding. This covers threat scenarios that involve misuse of AWS credentials and attempts to compromise data in your AWS accounts. GuardDuty considers all attack sequence finding types as **Critical**. This feature is enabled by default, and there is no additional cost associated with it.

In the AWS SRA, GuardDuty is enabled in all accounts through AWS Organizations, and all findings are viewable and actionable by appropriate security teams in the GuardDuty delegated administrator account (in this case, the Security Tooling account). GuardDuty active findings are exported to a central S3 bucket in the Log Archive account, so you can retain the findings beyond 90 days. The findings are exported from the delegated administrator account and also include all the findings from associated member accounts in the same Region. The findings in the S3 bucket are encrypted with an AWS KMS customer managed key. The S3 bucket policy and KMS key policy are configured to allow only GuardDuty to use the resources.

When AWS Security Hub CSPM is enabled, GuardDuty findings automatically flow to Security Hub CSPM and Security Hub. When Amazon Detective is enabled, GuardDuty findings are included in the Detective log ingest process. GuardDuty and Detective support cross-service user workflows, where GuardDuty provides links from the console that redirect you from a selected finding to a Detective page that contains a curated set of visualizations for investigating that finding. For example, you can also integrate GuardDuty with Amazon EventBridge to automate best practices for GuardDuty, such as [automating responses to new GuardDuty findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings_cloudwatch.html).
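The EventBridge automation mentioned above typically starts with an event pattern that narrows the stream to findings worth paging on. The following is a sketch that matches only high-severity GuardDuty findings (GuardDuty severity is numeric; 7.0 to 8.9 is High), using EventBridge numeric matching.

```python
# Sketch: an EventBridge event pattern that matches only high-severity
# (and above) GuardDuty findings via numeric range matching.
guardduty_pattern = {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {"severity": [{"numeric": [">=", 7]}]},
}

print(guardduty_pattern["detail-type"][0])
```

A rule with this pattern can target an SNS topic, a Lambda function, or an incident-management integration, leaving lower-severity findings to routine triage in the Security Tooling account.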

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [GuardDuty](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/guardduty/guardduty_org). It includes encrypted S3 bucket configuration, delegated administration, and GuardDuty enablement for all existing and future accounts in the AWS organization.

## AWS Config


[AWS Config](https://aws.amazon.com/config/) is a service that enables you to assess, audit, and evaluate the configurations of supported AWS resources in your AWS accounts. AWS Config continuously monitors and records AWS resource configurations, and automatically evaluates recorded configurations against desired configurations. You can also integrate AWS Config with other services to do the heavy lifting in automated audit and monitoring pipelines. For example, AWS Config can monitor for changes in individual secrets in AWS Secrets Manager.

You can evaluate the configuration settings of your AWS resources by using [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html). AWS Config provides a library of customizable, predefined rules called [managed rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html), or you can write your own [custom rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_develop-rules.html). You can run AWS Config Rules in proactive mode (before resources have been deployed) or detective mode (after resources have been deployed). Resources can be evaluated when there are configuration changes, on a periodic schedule, or both. 
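To make the evaluation model concrete, the compliance decision at the core of a custom rule can be sketched as a pure function over a recorded configuration item. This is only the decision logic under an illustrative configuration shape, not the exact recorder schema; in a real custom Lambda rule you would wrap it in a handler that parses the AWS Config event and reports results through `PutEvaluations`.

```python
# Sketch: the decision logic of a hypothetical custom rule that requires
# S3 bucket versioning to be enabled. `configuration` stands in for the
# recorded configuration of the evaluated bucket.
def evaluate_bucket_versioning(configuration: dict) -> str:
    """Return an AWS Config compliance type for one configuration item."""
    versioning = configuration.get("supplementaryConfiguration", {}).get(
        "BucketVersioningConfiguration", {}
    )
    if versioning.get("status") == "Enabled":
        return "COMPLIANT"
    return "NON_COMPLIANT"

# Example configuration items (shapes are illustrative).
ok = {"supplementaryConfiguration": {"BucketVersioningConfiguration": {"status": "Enabled"}}}
bad = {"supplementaryConfiguration": {}}
print(evaluate_bucket_versioning(ok), evaluate_bucket_versioning(bad))
```

Keeping the decision logic pure like this makes the rule straightforward to unit test before it is deployed as a detective or proactive check.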

A [conformance pack](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html) is a collection of AWS Config rules and remediation actions that can be deployed as a single entity in an account and Region, or across an organization in AWS Organizations. Conformance packs are created by authoring a YAML template that contains the list of AWS Config managed or custom rules and remediation actions. To get started evaluating your AWS environment, use one of the [sample conformance pack templates](https://docs.aws.amazon.com/config/latest/developerguide/conformancepack-sample-templates.html).
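The YAML template behind a minimal conformance pack can be sketched as the following structure, shown here as its Python dictionary equivalent for clarity. The logical resource names are arbitrary; the `SourceIdentifier` values are AWS managed rule identifiers.

```python
# Sketch of a minimal conformance pack template: two AWS managed rules,
# expressed as the dictionary equivalent of the YAML template body.
conformance_pack_template = {
    "Resources": {
        "S3BucketVersioningEnabled": {
            "Type": "AWS::Config::ConfigRule",
            "Properties": {
                "ConfigRuleName": "s3-bucket-versioning-enabled",
                "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_VERSIONING_ENABLED"},
            },
        },
        "EncryptedVolumes": {
            "Type": "AWS::Config::ConfigRule",
            "Properties": {
                "ConfigRuleName": "encrypted-volumes",
                "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
            },
        },
    }
}

print(len(conformance_pack_template["Resources"]))
```

Serialized to YAML, this body can be deployed as a single pack to an account or, from the delegated administrator, across the organization.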

AWS Config integrates with AWS Security Hub CSPM to send the results of AWS Config managed and custom rule evaluations as findings into Security Hub CSPM.

AWS Config Rules can be used in conjunction with AWS Systems Manager to effectively remediate noncompliant resources. You use Systems Manager Explorer to gather the compliance status of AWS Config rules in your AWS accounts across AWS Regions and then use [Systems Manager Automation documents (runbooks)](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) to resolve your noncompliant AWS Config rules. For implementation details, see the blog post [Remediate noncompliant AWS Config rules with AWS Systems Manager Automation runbooks](https://aws.amazon.com/blogs/mt/remediate-noncompliant-aws-config-rules-with-aws-systems-manager-automation-runbooks/).

The AWS Config aggregator collects configuration and compliance data across multiple accounts, Regions, and organizations in AWS Organizations. The aggregator dashboard displays the configuration data of aggregated resources. Inventory and compliance dashboards offer essential and current information about your AWS resource configurations and compliance status across AWS accounts, across AWS Regions, or within an AWS organization. They enable you to visualize and assess your AWS resource inventory without needing to write AWS Config advanced queries. You can get essential insights such as a summary of compliance by resources, the top 10 accounts that have noncompliant resources, a comparison of running and stopped EC2 instances by type, and EBS volumes by volume type and size.

If you use AWS Control Tower to manage your AWS organization, it will deploy [a set of AWS Config rules as detective guardrails](https://docs.aws.amazon.com/controltower/latest/userguide/how-controls-work.html) (categorized as mandatory, strongly recommended, or elective). These guardrails help you govern your resources and monitor compliance across accounts in your AWS organization. These AWS Config rules will automatically use an `aws-control-tower` tag that has a value of `managed-by-control-tower`.

AWS Config must be enabled for each member account in the AWS organization and AWS Region that contains the resources that you want to protect. You can centrally manage (for example, create, update, and delete) AWS Config rules across all accounts within your AWS organization. From the AWS Config delegated administrator account, you can deploy a common set of AWS Config rules across all accounts and specify accounts where AWS Config rules should not be created. The AWS Config delegated administrator account can also aggregate resource configuration and compliance data from all member accounts to provide a single view. Use the APIs from the delegated administrator account to enforce governance by ensuring that the underlying AWS Config rules cannot be modified by the member accounts in your AWS organization. AWS Config is natively integrated to send findings to AWS Security Hub CSPM, if Security Hub CSPM is enabled and at least one AWS Config managed or custom rule exists.
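
To illustrate central rule management, the following sketch builds a `PutOrganizationConfigRule` request that the delegated administrator might submit; the rule name and excluded account ID are placeholders.

```python
# Sketch: deploy an organization-wide managed Config rule from the
# delegated administrator account, excluding selected accounts.
# The rule identifier is a real AWS managed rule; IDs are placeholders.
def org_rule_request(excluded_accounts: list) -> dict:
    return {
        "OrganizationConfigRuleName": "org-s3-public-read-prohibited",
        "OrganizationManagedRuleMetadata": {
            "RuleIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "ExcludedAccounts": excluded_accounts,
    }

# boto3.client("config").put_organization_config_rule(**org_rule_request([...]))
req = org_rule_request(["111122223333"])
```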

In the AWS SRA, the AWS Config delegated administrator account is the Security Tooling account. The AWS Config [delivery channel](https://docs.aws.amazon.com/config/latest/developerguide/manage-delivery-channel.html) is configured to deliver resource configuration snapshots in a centralized S3 bucket in the Log Archive account. Because the Log Archive account is the central log repository store, it is used to store resource configuration.

**Design considerations**  
AWS Config streams configuration and compliance change notifications to Amazon EventBridge. This means that you can use the native filtering capabilities in EventBridge to filter AWS Config events so that you can route specific types of notifications to specific targets. For example, you can send compliance notifications for specific rules or resource types to specific email addresses, or route configuration change notifications to an external IT service management (ITSM) or configuration management database (CMDB) tool. For more information, see the blog post [AWS Config best practices](https://aws.amazon.com/blogs/mt/aws-config-best-practices/).
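
As an illustration of that filtering, the snippet below shows an event pattern that matches only `NON_COMPLIANT` compliance-change notifications, together with a deliberately simplified local matcher (real EventBridge matching supports many more operators).

```python
# An EventBridge event pattern that matches only NON_COMPLIANT
# Config compliance-change notifications.
PATTERN = {
    "source": ["aws.config"],
    "detail-type": ["Config Rules Compliance Change"],
    "detail": {"newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]}},
}

def matches(pattern: dict, event: dict) -> bool:
    """Simplified matcher: exact-value membership, recursing into dicts."""
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not matches(expected, value):
                return False
        elif value not in expected:
            return False
    return True

sample = {
    "source": "aws.config",
    "detail-type": "Config Rules Compliance Change",
    "detail": {"newEvaluationResult": {"complianceType": "NON_COMPLIANT"}},
}
```

You would attach `PATTERN` to an EventBridge rule whose target is, for example, an SNS topic or an ITSM integration.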
In addition to using AWS Config proactive rule evaluation, you can use [AWS CloudFormation Guard](https://docs.aws.amazon.com/cfn-guard/latest/ug/what-is-guard.html), which is a policy-as-code evaluation tool that proactively checks for resource configuration compliance. The AWS CloudFormation Guard command line interface (CLI) provides you with a declarative, domain-specific language (DSL) that you can use to express policy as code. In addition, you can use the CLI commands to validate JSON-formatted or YAML-formatted structured data such as CloudFormation change sets, JSON-based Terraform configuration files, or Kubernetes configurations. You can run the evaluations locally by using the [AWS CloudFormation Guard CLI](https://catalog.us-east-1.prod.workshops.aws/workshops/fff8e490-f397-43d2-ae26-737a6dc4ac68/en-US/70-cfn-guard/73-checking-templates) as part of your authoring process or run it within your [deployment pipeline](https://catalog.us-east-1.prod.workshops.aws/workshops/fff8e490-f397-43d2-ae26-737a6dc4ac68/en-US/70-cfn-guard/75-add-guard-to-pipeline). If you have [AWS Cloud Development Kit (AWS CDK)](https://aws.amazon.com/cdk/) applications, you can use [cdk-nag](https://github.com/cdklabs/cdk-nag) for proactive checking of best practices.

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a [sample implementation](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/config/config_conformance_pack_org) that deploys AWS Config conformance packs to all AWS accounts and Regions within an AWS organization. The [AWS Config Aggregator](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/config/config_aggregator_org) module helps you configure an AWS Config aggregator by delegating administration to a member account (Security Tooling) from the Org Management account and then configuring the aggregator within the delegated administrator account for all existing and future accounts in the AWS organization. You can use the [AWS Config Control Tower Management Account](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/config/config_management_account) module to enable AWS Config within the Org Management account, because AWS Control Tower doesn't enable it there.

## Amazon Security Lake


[Amazon Security Lake](https://aws.amazon.com/security-lake/) is a fully managed security data lake service. You can use Security Lake to automatically centralize security data from AWS environments, software as a service (SaaS) providers, on premises, and [third-party sources](https://docs.aws.amazon.com/security-lake/latest/userguide/integrations-third-party.html). Security Lake helps you build a normalized data source that simplifies the usage of analytics tools over security data, so you can get a more complete understanding of your security posture across the entire organization. The data lake is backed by Amazon Simple Storage Service (Amazon S3) buckets, and you retain ownership over your data. Security Lake automatically collects logs for AWS services, including AWS CloudTrail, Amazon VPC, Amazon Route 53, Amazon S3, AWS Lambda, Amazon EKS audit logs, AWS Security Hub CSPM findings, and AWS WAF logs.

AWS SRA recommends that you use the Log Archive account as the delegated administrator account for Security Lake. For more information about setting up the delegated administrator account, see [Amazon Security Lake](log-archive.md#log-security-lake) in the *Security OU ‒ Log Archive account* section. Security teams that want to access Security Lake data or need the ability to write non-native logs to the Security Lake buckets by using custom extract, transform, and load (ETL) functions should operate within the Security Tooling account.

Security Lake can collect logs from different cloud providers, logs from third-party solutions, or other custom logs. We recommend that you use the Security Tooling account to perform the ETL functions to convert the logs to Open Cybersecurity Schema Framework (OCSF) format and output a file in Apache Parquet format. Security Lake creates the cross-account role with the proper permissions for the Security Tooling account and the custom source backed by Lambda functions or AWS Glue crawlers, to write data to the S3 buckets for Security Lake.
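
The ETL step can be sketched as follows; the field mapping is a deliberately small, illustrative subset of OCSF, and a real pipeline would emit the full OCSF class schema and write Apache Parquet (for example, with AWS Glue or pyarrow).

```python
# Sketch of a custom-source ETL step: map a proprietary log record onto a
# small, illustrative subset of OCSF fields. Field choices and the
# severity mapping here are assumptions for the example, not the full
# OCSF Authentication class schema.
import time

def to_ocsf(record: dict) -> dict:
    return {
        "class_name": "Authentication",  # illustrative OCSF event class
        "time": record.get("timestamp", int(time.time() * 1000)),
        "severity_id": 1 if record.get("outcome") == "success" else 3,
        "actor": {"user": {"name": record.get("user", "unknown")}},
        "status": record.get("outcome"),
        # OCSF keeps source fields with no schema home under "unmapped".
        "unmapped": {k: v for k, v in record.items()
                     if k not in {"timestamp", "outcome", "user"}},
    }

event = to_ocsf({"user": "alice", "outcome": "failure",
                 "src_ip": "203.0.113.7"})
```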

The Security Lake administrator should configure security teams that use the Security Tooling account and require access to the logs that Security Lake collects as [subscribers](https://docs.aws.amazon.com/security-lake/latest/userguide/subscriber-management.html). Security Lake supports two types of subscriber access:
+ **Data access** – Subscribers can directly access the Amazon S3 objects for Security Lake. Security Lake manages the infrastructure and permissions. When you configure the Security Tooling account as a Security Lake data access subscriber, the account is notified of new objects in the Security Lake buckets through Amazon Simple Queue Service (Amazon SQS), and Security Lake creates the permissions to access those new objects.
+ **Query access** – Subscribers can query source data from AWS Lake Formation tables in your S3 bucket by using services such as Amazon Athena. Cross-account access is automatically set up for query access by using Lake Formation. When you configure the Security Tooling account as a Security Lake query access subscriber, the account is given read-only access to the logs in the Security Lake account. When you use this subscriber type, the Athena and AWS Glue tables are shared from the Security Lake Log Archive account with the Security Tooling account through AWS Resource Access Manager (AWS RAM). To enable this capability, you have to update the cross-account data sharing settings to version 3.
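
For a data access subscriber, draining the Amazon SQS queue might look like the sketch below; the parsing assumes the standard S3 event notification shape, and the bucket and object key are placeholders.

```python
# Sketch: a data access subscriber processing its SQS queue for
# new-object notifications. The message body is assumed to follow the
# standard S3 event-notification shape; verify the exact format that
# Security Lake delivers in your setup.
import json

def new_object_keys(sqs_body: str) -> list:
    notification = json.loads(sqs_body)
    return [
        f"s3://{r['s3']['bucket']['name']}/{r['s3']['object']['key']}"
        for r in notification.get("Records", [])
    ]

body = json.dumps({"Records": [{
    "s3": {"bucket": {"name": "aws-security-data-lake-example"},
           "object": {"key": "ext/CLOUD_TRAIL/file.gz.parquet"}}}]})
```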

For more information about creating subscribers, see [Subscriber management](https://docs.aws.amazon.com/security-lake/latest/userguide/subscriber-management.html) in the Security Lake documentation.  

For best practices for ingesting custom sources, see [Collecting data from custom sources](https://docs.aws.amazon.com/security-lake/latest/userguide/custom-sources.html) in the Security Lake documentation.

You can use [Amazon Quick Sight](https://github.com/aws-samples/amazon-security-lake-quicksight), [Amazon OpenSearch Service](https://aws.amazon.com/blogs/big-data/ingest-transform-and-deliver-events-published-by-amazon-security-lake-to-amazon-opensearch-service), and [Amazon SageMaker](https://github.com/aws-samples/amazon-security-lake-machine-learning) to set up analytics against the security data that you store in Security Lake.

**Design consideration**  
If an application team needs query access to Security Lake data to meet a business requirement, the Security Lake administrator should configure that Application account as a subscriber.

## Amazon Macie


[Amazon Macie](https://aws.amazon.com/macie/) is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and help protect your sensitive data in AWS. You need to identify the type and classification of data your workload is processing to ensure that appropriate controls are enforced. You can use Macie to automate the discovery and reporting of sensitive data in two ways: by [performing automated sensitive data discovery](https://docs.aws.amazon.com/macie/latest/user/discovery-asdd.html) and by [creating and running sensitive data discovery jobs](https://docs.aws.amazon.com/macie/latest/user/discovery-jobs.html). With automated sensitive data discovery, Macie evaluates your S3 bucket inventory on a daily basis and uses sampling techniques to identify and select representative S3 objects from your buckets. Macie then retrieves and analyzes the selected objects, inspecting them for sensitive data. Sensitive data discovery jobs provide deeper and more targeted analysis. With this option, you define the breadth and depth of the analysis, including the S3 buckets to analyze, the sampling depth, and custom criteria that derive from the properties of S3 objects. If Macie detects a potential issue with the security or privacy of a bucket, it creates a [policy finding](https://docs.aws.amazon.com/macie/latest/user/findings-types.html#findings-policy-types) for you. Automated data discovery is enabled by default for all new Macie customers, and existing Macie customers can enable it with one click.

Macie is enabled in all accounts through AWS Organizations. Principals who have the appropriate permissions in the delegated administrator account (in this case, the Security Tooling account) can enable or suspend Macie in any account, create sensitive data discovery jobs for buckets that are owned by member accounts, and view all policy findings for all member accounts. Sensitive data findings can be viewed only by the account that created the sensitive findings job. For more information, see [Managing multiple Macie accounts as an organization](https://docs.aws.amazon.com/macie/latest/user/macie-accounts.html) in the Macie documentation.

Macie findings flow to AWS Security Hub CSPM for review and analysis. Macie also integrates with Amazon EventBridge to facilitate automated responses to findings such as alerts, feeds to security information and event management (SIEM) systems, and automated remediation.

**Design considerations**  
If S3 objects are encrypted with an AWS Key Management Service (AWS KMS) key that you manage, you can add the Macie service-linked role as a key user to that KMS key to enable Macie to scan the data.
Macie is optimized for scanning objects in Amazon S3. As a result, any Macie-supported object type that can be placed in Amazon S3 (permanently or temporarily) can be scanned for sensitive data. This means that data from other sources—for example, [periodic snapshot exports of Amazon Relational Database Service (Amazon RDS) or Amazon Aurora databases, exported Amazon DynamoDB tables](https://aws.amazon.com/about-aws/whats-new/2020/01/announcing-amazon-relational-database-service-snapshot-export-to-s3/), or extracted text files from native or third-party applications—can be moved to Amazon S3 and evaluated by Macie.
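
For the first consideration, adding Macie as a key user amounts to a key policy statement along these lines; the account ID is a placeholder, and the action list is a minimal illustration of what decryption for scanning requires.

```python
# Sketch: a KMS key policy statement that adds the Macie service-linked
# role as a key user so Macie can decrypt objects for scanning. The
# account ID is a placeholder; the role path is the standard Macie SLR.
import json

def macie_key_user_statement(account_id: str) -> dict:
    slr = (f"arn:aws:iam::{account_id}:role/aws-service-role/"
           "macie.amazonaws.com/AWSServiceRoleForAmazonMacie")
    return {
        "Sid": "AllowMacieKeyUsage",
        "Effect": "Allow",
        "Principal": {"AWS": slr},
        "Action": ["kms:Decrypt", "kms:DescribeKey"],
        "Resource": "*",
    }

stmt = macie_key_user_statement("111122223333")
policy_json = json.dumps(stmt)  # merge into the key policy's Statement list
```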

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [Amazon Macie](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/macie/macie_org). It includes delegating administration to a member account and configuring Macie within the delegated administrator account for all existing and future accounts in the AWS organization. Macie is also configured to send the findings to a central S3 bucket that is encrypted with a customer managed key in AWS KMS.

## IAM Access Analyzer


As you accelerate your AWS Cloud adoption journey and continue to innovate, it's critical to maintain tight control over fine-grained access (permissions), contain access proliferation, and ensure that permissions are used effectively. Excessive and unused access presents security challenges and makes it harder for enterprises to enforce the [principle of least privilege](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html). This principle is an important security architecture pillar that involves continually right-sizing IAM permissions to balance security requirements with operational and application development requirements. This effort involves multiple stakeholder personas, including central security and Cloud Center of Excellence (CCoE) teams as well as decentralized development teams.

[AWS Identity and Access Management Access Analyzer](https://aws.amazon.com/iam/access-analyzer) provides tools to efficiently set fine-grained permissions, verify intended permissions, and refine permissions by removing unused access to help you meet your enterprise security standards. It gives you visibility into [external and internal access to AWS resources and unused access findings](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings.html) through [dashboards](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-dashboard.html) and [AWS Security Hub CSPM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-securityhub-integration.html). Additionally, it supports [Amazon EventBridge](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-eventbridge.html) for event-based custom notification and remediation workflows.

The IAM Access Analyzer external access analyzer findings feature helps you identify the resources in your AWS organization and accounts, such as [Amazon S3 buckets or IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-resources.html), that are shared with an external entity. The AWS organization or account you choose is known as the *zone of trust*. The analyzer uses [automated reasoning](https://aws.amazon.com/what-is/automated-reasoning/) to analyze all [supported resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-resources.html) within the zone of trust, and generates findings for principals that can access the resources from outside the zone of trust. These findings help identify resources that are shared with an external entity and help you preview how your policy affects public and cross-account access to your resource before you deploy resource permissions. This is available at no additional cost.

Similarly, the IAM Access Analyzer internal access analyzer findings feature helps you identify the resources in your AWS organization and accounts that are shared with principals within your organization or account. This analysis supports the principle of least privilege by ensuring that your specified resources can be accessed only by the intended principals within your organization. This is a paid feature and requires explicit configuration of the resources to inspect. Use this feature judiciously to monitor specific sensitive resources that, by design, need to be locked down even internally.

IAM Access Analyzer findings also help you identify unused access granted in your AWS organizations and accounts, including: 
+ **Unused IAM roles** – Roles that have no access activity within the specified usage window.
+ **Unused IAM users, credentials, and access keys** – Credentials that belong to IAM users and are used to access AWS services and resources.
+ **Unused IAM policies and permissions** – Service-level and action-level permissions that weren't used by a role within a specified usage window. IAM Access Analyzer uses identity-based policies that are attached to roles to determine the services and actions that those roles can access. The analyzer provides a review of unused permissions for all service-level permissions.

You can use the findings generated from IAM Access Analyzer to gain visibility into, and remediate, any unintended or unused access based on your organization's policies and security standards. After remediation, these findings are marked as [resolved](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings-remediate.html) the next time the analyzer runs. If the finding is intentional, you can mark it as [archived](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-findings-archive.html) in IAM Access Analyzer and prioritize other findings that present a greater security risk. Additionally, you can set up [archive rules](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-archive-rules.html) to automatically archive specific findings. For example, you can create an archive rule to automatically archive any findings for a specific Amazon S3 bucket that you regularly grant access to. 
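
An archive rule such as the S3 bucket example can be expressed as the following `CreateArchiveRule` request sketch; the analyzer, rule, and bucket names are placeholders.

```python
# Sketch: an archive-rule filter that auto-archives external-access
# findings for one intentionally shared S3 bucket. Names are
# placeholders for boto3's create_archive_rule call.
def archive_rule_request(analyzer: str, bucket_arn: str) -> dict:
    return {
        "analyzerName": analyzer,
        "ruleName": "archive-known-shared-bucket",
        "filter": {
            "resource": {"eq": [bucket_arn]},
            "resourceType": {"eq": ["AWS::S3::Bucket"]},
        },
    }

# boto3.client("accessanalyzer").create_archive_rule(**archive_rule_request(...))
req = archive_rule_request("org-analyzer", "arn:aws:s3:::shared-data-bucket")
```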

As a builder, you can use IAM Access Analyzer to perform automated [IAM policy checks](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-checks-validating-policies.html) earlier in your development and deployment (CI/CD) process to adhere to your corporate security standards. You can integrate IAM Access Analyzer custom policy checks and policy reviews with AWS CloudFormation to automate policy reviews as a part of your development team's CI/CD pipelines. This includes: 
+ **IAM policy validation** – IAM Access Analyzer validates your policies against [IAM policy grammar and AWS best practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_grammar.html). You can view findings for policy validation checks, including security warnings, errors, general warnings, and suggestions for your policy. Over 100 [policy validation checks](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-reference-policy-checks.html) are currently available and can be automated by using the AWS Command Line Interface (AWS CLI) and APIs.
+ **IAM custom policy checks** – IAM Access Analyzer custom policy checks validate your policies against your specified security standards. Custom policy checks use automated reasoning to provide a higher level of assurance on meeting your corporate security standards. The types of custom policy checks include: 
  + **Check against a reference policy**: When you edit a policy, you can compare it with a reference policy, such as an existing version of the policy, to check whether the update grants new access. The [CheckNoNewAccess](https://docs.aws.amazon.com/access-analyzer/latest/APIReference/API_CheckNoNewAccess.html) API compares two policies (an updated policy and a reference policy) to determine whether the updated policy introduces new access over the reference policy, and returns a pass or fail response.
  + **Check against a list of IAM actions**: You can use the [CheckAccessNotGranted](https://docs.aws.amazon.com/access-analyzer/latest/APIReference/API_CheckAccessNotGranted.html) API to ensure that a policy doesn't grant access to a list of critical actions that are defined in your security standard. This API takes a policy and a list of up to 100 IAM actions to check whether the policy allows at least one of the actions, and returns a pass or fail response.
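
As a sketch of wiring the second check into a pipeline, the following builds a `CheckAccessNotGranted` request; the critical action list and sample policy are illustrative.

```python
# Sketch: build a CheckAccessNotGranted request so a pipeline can block
# policies that could grant critical actions. The action list and the
# sample policy are illustrative; the API accepts at most 100 actions.
import json

CRITICAL_ACTIONS = ["s3:DeleteBucket", "iam:CreateAccessKey"]

def check_request(policy_document: dict) -> dict:
    assert len(CRITICAL_ACTIONS) <= 100  # API limit on the action list
    return {
        "policyDocument": json.dumps(policy_document),
        "access": [{"actions": CRITICAL_ACTIONS}],
        "policyType": "IDENTITY_POLICY",
    }

# result = boto3.client("accessanalyzer").check_access_not_granted(
#     **check_request(policy))
# Inspect result["result"] (pass/fail) to gate the deployment.
req = check_request({"Version": "2012-10-17", "Statement": [
    {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"}]})
```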

Security teams and other IAM policy authors can use IAM Access Analyzer to author policies that comply with IAM policy grammar and security standards. Authoring right-sized policies manually can be error prone and time consuming. The IAM Access Analyzer [policy generation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html) feature assists in authoring IAM policies that are based on a principal's access activity. IAM Access Analyzer reviews AWS CloudTrail logs for [supported services](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation-action-last-accessed-support.html) and generates a policy template that contains the permissions that were used by the principal in the specified date range. You can then use this template to create a policy with fine-grained permissions that grants only the necessary permissions. Note the following constraints:
+ You must have a CloudTrail trail enabled for your account to generate a policy based on access activity.
+ IAM Access Analyzer doesn't identify action-level activity for data events, such as Amazon S3 data events, in generated policies.
+ The `iam:PassRole` action isn't tracked by CloudTrail and isn't included in generated policies.

IAM Access Analyzer is deployed in the Security Tooling account through the delegated administrator functionality in AWS Organizations. The delegated administrator has permissions to create and manage analyzers with the AWS organization as the zone of trust.

**Design consideration**  
To get account-scoped findings (where the account serves as the trusted boundary), you create an account-scoped analyzer in each member account. This can be done as part of the account pipeline. Account-scoped findings flow into Security Hub CSPM at the member account level. From there, they flow to the Security Hub CSPM delegated administrator account (Security Tooling).
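
The two analyzer scopes can be sketched as `CreateAnalyzer` requests; the analyzer names are placeholders.

```python
# Sketch: requests for the two external access analyzer scopes. The
# organization analyzer is created in the delegated administrator
# account; the account analyzer in each member account (for example,
# as a step in the account pipeline). Names are placeholders.
ORG_ANALYZER = {"analyzerName": "org-external-access",
                "type": "ORGANIZATION"}
ACCOUNT_ANALYZER = {"analyzerName": "account-external-access",
                    "type": "ACCOUNT"}  # the account is the zone of trust

# boto3.client("accessanalyzer").create_analyzer(**ACCOUNT_ANALYZER)
```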

**Implementation examples**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [IAM Access Analyzer](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/iam/iam_access_analyzer). It demonstrates how to configure an organization-level analyzer within a delegated administrator account and an account-level analyzer within each account.
For information about how you can integrate custom policy checks into builder workflows, see the AWS blog post [Introducing IAM Access Analyzer custom policy checks](https://aws.amazon.com/blogs/security/introducing-iam-access-analyzer-custom-policy-checks/).

## AWS Firewall Manager


[AWS Firewall Manager](https://aws.amazon.com/firewall-manager/) helps protect your network by simplifying your administration and maintenance tasks for AWS WAF, AWS Shield Advanced, Amazon VPC security groups, AWS Network Firewall, and Amazon Route 53 Resolver DNS Firewall across multiple accounts and resources. With Firewall Manager, you set up your AWS WAF firewall rules, Shield Advanced protections, Amazon VPC security groups, Network Firewall firewalls, and DNS Firewall rule group associations only once. The service automatically applies the rules and protections across your accounts and resources, even as you add new resources.

Firewall Manager is particularly useful when you want to protect your entire AWS organization instead of a small number of specific accounts and resources, or if you frequently add new resources that you want to protect. Firewall Manager uses security policies to let you define a set of configurations, including relevant rules, protections, and actions that must be deployed and the accounts and resources (indicated by tags) to include or exclude. You can create granular and flexible configurations while still being able to scale control out to large numbers of accounts and VPCs. These policies automatically and consistently enforce the rules you configure even when new accounts and resources are created. Firewall Manager is enabled in all accounts through AWS Organizations, and configuration and management are performed by the appropriate security teams in the Firewall Manager delegated administrator account (in this case, the Security Tooling account).

You must enable AWS Config for each AWS Region that contains the resources that you want to protect. If you don't want to enable AWS Config for all resources, you must enable it for resources that are associated with [the type of Firewall Manager policies that you use](https://docs.aws.amazon.com/waf/latest/developerguide/enable-config.html). When you use both AWS Security Hub CSPM and Firewall Manager, Firewall Manager automatically sends your findings to Security Hub CSPM. Firewall Manager creates findings for resources that are out of compliance and for attacks that it detects, and sends the findings to Security Hub CSPM. When you set up a Firewall Manager policy for AWS WAF, you can centrally enable logging on web access control lists (web ACLs) for all in-scope accounts and centralize the logs under a single account.

With Firewall Manager, you can have one or more administrators who manage the firewall resources of your organization. When you assign multiple administrators, you can apply restrictive administrative scope conditions to define the resources (accounts, OUs, Regions, policy types) that each administrator can manage. This gives you the flexibility to have different administrator roles within your organization, and helps you maintain the principle of least privilege. The AWS SRA uses one administrator with full administrative scope delegated to the Security Tooling account.

**Design consideration**  
Account managers of individual member accounts in the AWS organization can configure additional controls (such as AWS WAF rules and Amazon VPC security groups) in the Firewall Manager managed services according to their particular needs.

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [Firewall Manager](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/firewall_manager/firewall_manager_org). It demonstrates delegated administration (Security Tooling), deploys a maximum allowed security group, configures a security group policy, and configures multiple AWS WAF policies.

## Amazon EventBridge


[Amazon EventBridge](https://aws.amazon.com/eventbridge/) is a serverless event bus service that makes it straightforward to connect your applications with data from a variety of sources. It is frequently used in security automation. You can set up routing rules to determine where to send your data to build application architectures that react in real time to all your data sources. You can create a custom event bus to receive events from your custom applications, in addition to using the default event bus in each account. You can create an event bus in the Security Tooling account that can receive security-specific events from other accounts in the AWS organization. For example, by linking AWS Config Rules, Amazon GuardDuty, and AWS Security Hub CSPM with EventBridge, you create a flexible, automated pipeline for routing security data, raising alerts, and managing actions to resolve issues.

**Design considerations**  
EventBridge is capable of routing events to a number of different targets. One valuable pattern for automating security actions is to connect particular events to individual AWS Lambda responders, which take appropriate actions. For example, in certain circumstances you might want to use EventBridge to route a public S3 bucket finding to a Lambda responder that corrects the bucket policy and removes the public permissions. These responders can be integrated into your investigative playbooks and runbooks to coordinate response activities.
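
A responder for that S3 example might look like the following sketch; the event shape is a trimmed illustration of the `Security Hub Findings - Imported` detail type, and the remediation shown (re-applying the public access block) is one possible corrective action.

```python
# Sketch of a Lambda responder wired to an EventBridge rule: extract the
# bucket name from a routed Security Hub finding, then re-apply the
# S3 public access block. The sample event is a trimmed illustration.
def bucket_from_finding(event: dict):
    for finding in event.get("detail", {}).get("findings", []):
        for resource in finding.get("Resources", []):
            if resource.get("Type") == "AwsS3Bucket":
                # The resource Id is an ARN such as arn:aws:s3:::my-bucket
                return resource["Id"].rpartition(":")[2]
    return None

def handler(event, context):
    bucket = bucket_from_finding(event)
    if bucket:
        import boto3  # lazy import; executes only inside Lambda
        boto3.client("s3").put_public_access_block(
            Bucket=bucket,
            PublicAccessBlockConfiguration={
                "BlockPublicAcls": True, "IgnorePublicAcls": True,
                "BlockPublicPolicy": True, "RestrictPublicBuckets": True,
            },
        )
    return {"remediated_bucket": bucket}

sample = {"detail": {"findings": [{"Resources": [
    {"Type": "AwsS3Bucket", "Id": "arn:aws:s3:::public-demo-bucket"}]}]}}
```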
A best practice for a successful security operations team is to integrate the flow of security events and findings into a notification and workflow system such as a ticketing system, a bug/issue system, or another security information and event management (SIEM) system. This takes the workflow out of email and static reports, and helps you route, escalate, and manage events or findings. The flexible routing abilities in EventBridge are a powerful enabler for this integration.

## Amazon Detective


[Amazon Detective](https://aws.amazon.com/detective/) supports your responsive security control strategy by making it straightforward for your security analysts to analyze, investigate, and quickly identify the root cause of security findings or suspicious activities. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from AWS CloudTrail logs and Amazon VPC flow logs. Detective consumes these events by using independent streams of CloudTrail logs and Amazon VPC flow logs. You can use Detective to access up to a year of historical event data. Detective uses machine learning and visualization to create a unified, interactive view of the behavior of your resources and the interactions among them over time, called a *behavior graph*. You can explore the behavior graph to examine disparate actions such as failed login attempts or suspicious API calls.

Detective integrates with Amazon Security Lake to enable security analysts to query and retrieve logs that are stored in Security Lake. You can use this integration to get additional information from CloudTrail logs and Amazon VPC flow logs that are stored in Security Lake while conducting security investigations in Detective.

Detective also ingests findings that are detected by Amazon GuardDuty, including threats that are detected by [GuardDuty Runtime Monitoring](https://docs.aws.amazon.com/guardduty/latest/ug/runtime-monitoring.html). When an account enables Detective, it becomes the administrator account for the behavior graph. Before you try to enable Detective, make sure that your account has been enrolled in GuardDuty for at least 48 hours. If you do not meet this requirement, you cannot enable Detective.

Additional optional data sources for Detective include [Amazon EKS audit logs](https://docs.aws.amazon.com/detective/latest/userguide/source-data-types-EKS.html) and AWS Security Hub CSPM. The Amazon EKS audit log data source enhances the information provided about the following entity types: Amazon EKS clusters, Kubernetes pods, container images, and Kubernetes subjects. The Security Hub CSPM data source is part of [AWS security findings](https://docs.aws.amazon.com/detective/latest/userguide/source-data-types-asff.html): Security Hub CSPM correlates findings across products, and Detective ingests those findings.

Detective automatically groups multiple findings that are related to a single security compromise event into [finding groups](https://docs.aws.amazon.com/detective/latest/userguide/groups-about.html). Threat actors typically perform a sequence of actions that lead to multiple security findings spread across time and resources. Therefore, finding groups should be the starting point for investigations that involve multiple entities and findings. Detective also provides finding group summaries by using generative AI that automatically analyzes finding groups and provides insights in natural language to help you accelerate security investigations.

Detective integrates with AWS Organizations. The Org Management account delegates a member account as the Detective administrator account. In the AWS SRA, this is the Security Tooling account. The Detective administrator account can automatically enable all current member accounts in the organization as Detective member accounts, and can add new member accounts as they join the AWS organization. The Detective administrator account can also invite member accounts that currently do not reside in the AWS organization, but are within the same Region, to contribute their data to the primary account's behavior graph. When a member account accepts the invitation and is enabled, Detective begins to ingest and extract the member account's data into that behavior graph.
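The delegation and auto-enablement steps described above might be scripted roughly as follows. This is a sketch: the account ID is a placeholder, step 1 must run with Org Management account credentials, step 2 with the delegated administrator's credentials, and the single-graph assumption holds only for a standard single-Region setup.

```python
def designate_detective_admin(security_tooling_account_id: str) -> None:
    """Delegate Detective administration to the Security Tooling account, then
    turn on automatic enrollment of new organization accounts (illustrative)."""
    import boto3
    detective = boto3.client("detective")
    # Step 1 (Org Management credentials): delegate the administrator account.
    detective.enable_organization_admin_account(AccountId=security_tooling_account_id)
    # Step 2 (delegated administrator credentials): auto-enable new member
    # accounts as they join the organization.
    graph_arn = detective.list_graphs()["GraphList"][0]["Arn"]
    detective.update_organization_configuration(GraphArn=graph_arn, AutoEnable=True)
```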

**Design consideration**  
You can navigate to Detective finding profiles from the GuardDuty and AWS Security Hub CSPM consoles. These links can help streamline the investigation process. Your account must be the administrative account for both Detective and the service you are pivoting from (GuardDuty or Security Hub CSPM). If the primary accounts are the same for the services, the integration links work seamlessly.

## AWS Audit Manager


[AWS Audit Manager](https://aws.amazon.com/audit-manager/) helps you continually audit your AWS usage to simplify how you manage audits and compliance with regulations and industry standards. It enables you to move from manually collecting, reviewing, and managing evidence to a solution that automates evidence collection, provides a simple way to track the source of audit evidence, enables teamwork collaboration, and helps to manage evidence security and integrity. When it's time for an audit, Audit Manager helps you manage stakeholder reviews of your controls.

With Audit Manager you can audit against [prebuilt frameworks](https://docs.aws.amazon.com/audit-manager/latest/userguide/framework-overviews.html) such as the Center for Internet Security (CIS) benchmark, the CIS AWS Foundations Benchmark, System and Organization Controls 2 (SOC 2), and the Payment Card Industry Data Security Standard (PCI DSS). It also gives you the ability to create your own frameworks with standard or custom controls based on your specific requirements for internal audits.

Audit Manager collects four types of evidence. Three types of evidence are automated: compliance check evidence from AWS Config and AWS Security Hub CSPM, management events evidence from AWS CloudTrail, and configuration evidence from AWS service-to-service API calls. For evidence that cannot be automated, Audit Manager lets you upload manual evidence.

By default, your data in Audit Manager is encrypted by using AWS managed keys. The AWS SRA uses a customer managed key for encryption to provide greater control over logical access. You should also configure an S3 bucket in the AWS Region where Audit Manager publishes the assessment report. This bucket should be encrypted with a customer managed key and have a bucket policy that’s configured to allow only Audit Manager to publish reports.
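A bucket policy along those lines might look like the following sketch. The bucket name and organization ID are placeholders, and the service principal and condition key shown are assumptions to verify against the current Audit Manager documentation before use.

```python
def audit_report_bucket_policy(bucket_name: str, org_id: str) -> dict:
    """Illustrative bucket policy that allows only the Audit Manager service
    principal to write assessment reports, scoped to your AWS organization."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowAuditManagerPublishOnly",
                "Effect": "Allow",
                # Assumed service principal; confirm in the Audit Manager docs.
                "Principal": {"Service": "auditmanager.amazonaws.com"},
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
                # Restrict writes to requests made on behalf of your organization.
                "Condition": {"StringEquals": {"aws:SourceOrgID": org_id}},
            }
        ],
    }
```

You would apply the result with `s3:PutBucketPolicy` (for example, through the S3 console or `put_bucket_policy` in boto3), alongside default SSE-KMS encryption with your customer managed key.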

**Note**  
Audit Manager assists in collecting evidence that's relevant for verifying compliance with specific compliance standards and regulations. However, it doesn't assess your compliance. Therefore, the evidence that's collected through Audit Manager might not include details of your operational processes that are needed for audits. Audit Manager isn't a substitute for legal counsel or compliance experts. We recommend that you engage the services of a third-party assessor who is certified for the compliance framework(s) that you are evaluated against.

Audit Manager assessments can run over multiple accounts in your AWS organization. Audit Manager collects and consolidates evidence into a delegated administrator account in AWS Organizations. This audit functionality is primarily used by compliance and internal audit teams, and requires only read access to your AWS accounts.

**Design considerations**  
Audit Manager complements other AWS security services such as AWS Security Hub CSPM, AWS Security Hub, and AWS Config to help implement a risk management framework. Audit Manager provides independent risk assurance functionality, whereas Security Hub CSPM helps you oversee your risk and AWS Config conformance packs assist in managing your risks. Audit professionals who are familiar with the [Three Lines Model](https://www.theiia.org/en/content/position-papers/2020/the-iias-three-lines-model-an-update-of-the-three-lines-of-defense/) developed by the [Institute of Internal Auditors (IIA)](https://na.theiia.org/Pages/IIAHome.aspx) should note that this combination of AWS services helps you cover the three lines of defense. For more information, see the two-part [blog series](https://aws.amazon.com/blogs/mt/integrate-across-the-three-lines-model-part-2-transform-aws-config-conformance-packs-into-aws-audit-manager-assessments/) on the AWS Cloud Operations & Migrations blog.
In order for Audit Manager to collect Security Hub CSPM evidence, the delegated administrator account for both services has to be the same AWS account. For this reason, in the AWS SRA, the Security Tooling account is the delegated administrator for Audit Manager.

## AWS Artifact


[AWS Artifact](https://aws.amazon.com/artifact/) is hosted within the Security Tooling account to separate the compliance artifact management functionality from the AWS Org Management account. This separation of duty is important because we recommend that you avoid using the AWS Org Management account for deployments unless absolutely necessary. Instead, pass on deployments to member accounts. Because audit artifact management can be done from a member account and the function closely aligns with the security and compliance team, the Security Tooling account is designated as the administrator account for AWS Artifact. You can use AWS Artifact reports to download AWS security and compliance documents, such as AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Controls (SOC) reports.

AWS Artifact doesn't support the delegated administration feature. Instead, you can restrict this capability to only IAM roles in the Security Tooling account that pertain to your audit and compliance teams, so they can download, review, and provide those reports to external auditors as needed. You can additionally restrict specific IAM roles to have access to only specific AWS Artifact reports through IAM policies. For sample IAM policies, see the [AWS Artifact documentation](https://docs.aws.amazon.com/artifact/latest/ug/security-iam.html).
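As an illustration of that restriction, the following builds a read-only identity policy for audit-team roles. The action names mirror the sample policies in the AWS Artifact documentation, but treat them as assumptions and verify them against the current docs before attaching the policy.

```python
def artifact_read_only_policy() -> dict:
    """Illustrative identity policy granting read-only access to AWS Artifact
    reports; action names should be checked against the AWS Artifact docs."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Read and list actions only; no agreement acceptance actions.
                "Action": [
                    "artifact:GetReport",
                    "artifact:GetReportMetadata",
                    "artifact:GetTermForReport",
                    "artifact:ListReports",
                ],
                "Resource": "*",
            }
        ],
    }
```

To scope roles to specific reports, replace the `"Resource": "*"` element with the ARNs of the individual reports.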

**Design consideration**  
If you choose to have a dedicated AWS account for audit and compliance teams, you can host AWS Artifact in a security audit account, which is separate from the Security Tooling account. AWS Artifact reports provide evidence that demonstrates that an organization is following a documented process or meeting a specific requirement. Audit artifacts are gathered and archived throughout the system development lifecycle and can be used as evidence in internal or external audits and assessments.

## AWS KMS


[AWS Key Management Service](https://aws.amazon.com/kms/) (AWS KMS) helps you create and manage cryptographic keys and control their use across a wide range of AWS services and in your applications. AWS KMS is a secure and resilient service that uses hardware security modules to protect cryptographic keys. It follows industry standard lifecycle processes for key material, such as storage, rotation, and access control of keys. AWS KMS can help protect your data with encryption and signing keys, and can be used for both server-side encryption and client-side encryption through the [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html). For protection and flexibility, AWS KMS supports three types of keys: customer managed keys, AWS managed keys, and AWS owned keys. Customer managed keys are AWS KMS keys in your AWS account that you create, own, and manage. AWS managed keys are AWS KMS keys in your account that are created, managed, and used on your behalf by an AWS service that is integrated with AWS KMS. AWS owned keys are a collection of AWS KMS keys that an AWS service owns and manages for use in multiple AWS accounts. For more information about using AWS KMS keys, see the [AWS KMS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html) and [AWS KMS cryptographic details](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html).

One deployment option is to centralize the responsibility of AWS KMS key management to a single account while delegating the ability to use keys in the Application account by application resources by using a combination of key and IAM policies. This approach is secure and straightforward to manage, but you can encounter hurdles due to AWS KMS throttling limits, account service limits, and the security team being inundated with operational key management tasks. Another deployment option is to have a decentralized model in which AWS KMS keys reside in multiple accounts, and those responsible for the infrastructure and workloads in a specific account manage their own keys. This model gives your workload teams more control, flexibility, and agility over the use of encryption keys. It also helps avoid API limits, limits the scope of impact to one AWS account only, and simplifies reporting, auditing, and other compliance-related tasks. In a decentralized model it is important to deploy and enforce guardrails so that the decentralized keys are managed in the same way and usage of AWS KMS keys is audited according to established best practices and policies. For more information, see the whitepaper [AWS Key Management Service Best Practices](https://d0.awsstatic.com/whitepapers/aws-kms-best-practices.pdf). The AWS SRA recommends a distributed key management model in which the AWS KMS keys reside locally within the account where they are used. We recommend that you avoid using a single key in one account for all cryptographic functions. Keys can be created based on function and data protection requirements, and to enforce the principle of least privilege. In some cases, encryption permissions would be kept separate from decryption permissions, and administrators would manage lifecycle functions but would not be able to encrypt or decrypt data with the keys that they manage.
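The separation-of-duty idea in the last sentence can be sketched as a key policy in which administrators manage the key lifecycle but hold no cryptographic permissions, while encrypt and decrypt permissions go to different roles. The account ID and role names below are placeholders, and the exact action lists should be tuned to your requirements.

```python
def split_duty_key_policy(account_id: str, admin_role: str,
                          encrypt_role: str, decrypt_role: str) -> dict:
    """Illustrative AWS KMS key policy separating administration from use."""
    def arn(role: str) -> str:
        return f"arn:aws:iam::{account_id}:role/{role}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            # Standard safeguard so the key can never become unmanageable.
            {"Sid": "EnableRootAccountAccess",
             "Effect": "Allow",
             "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
             "Action": "kms:*",
             "Resource": "*"},
            # Administrators manage lifecycle but cannot encrypt or decrypt.
            {"Sid": "KeyAdministration",
             "Effect": "Allow",
             "Principal": {"AWS": arn(admin_role)},
             "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                        "kms:List*", "kms:Put*", "kms:Update*", "kms:Revoke*",
                        "kms:Disable*", "kms:Get*", "kms:Delete*",
                        "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
             "Resource": "*"},
            # Producers can encrypt only.
            {"Sid": "EncryptOnly",
             "Effect": "Allow",
             "Principal": {"AWS": arn(encrypt_role)},
             "Action": ["kms:Encrypt", "kms:GenerateDataKey*"],
             "Resource": "*"},
            # Consumers can decrypt only.
            {"Sid": "DecryptOnly",
             "Effect": "Allow",
             "Principal": {"AWS": arn(decrypt_role)},
             "Action": "kms:Decrypt",
             "Resource": "*"},
        ],
    }
```

You would pass the serialized policy to `kms.create_key(Policy=...)` and enable annual rotation with `kms.enable_key_rotation`.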

In the Security Tooling account, AWS KMS is used to manage the encryption of centralized security services such as the AWS CloudTrail organization trail that is managed by the AWS organization.

## AWS Private CA


[AWS Private Certificate Authority](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) (AWS Private CA) is a managed private CA service that helps you securely manage the lifecycle of your private end-entity TLS certificates for EC2 instances, containers, IoT devices, and on-premises resources. It enables encrypted TLS communication for your running applications. With AWS Private CA, you can create your own CA hierarchy (a root CA, through subordinate CAs, to end-entity certificates) and issue certificates with it to authenticate internal users, computers, applications, services, servers, and other devices, and to sign computer code. Certificates issued by a private CA are trusted only within your AWS organization, not on the internet.

A public key infrastructure (PKI) or security team can be responsible for managing all PKI infrastructure. This includes the management and creation of the private CA. However, there must be a provision that allows workload teams to self-serve their certificate requirements. The AWS SRA depicts a centralized CA hierarchy in which the root CA is hosted within the Security Tooling account. This enables security teams to enforce stringent security control, because the root CA is the foundation of the entire PKI. However, creation of private certificates from the private CA is delegated to application development teams by sharing out the CA to an Application account by using AWS Resource Access Manager (AWS RAM). AWS RAM manages the permissions required for cross-account sharing. This removes the need for a private CA in every account and provides a more cost-effective way of deployment. For more information about the workflow and implementation, see the blog post [How to use AWS RAM to share your AWS Private CA cross-account](https://aws.amazon.com/blogs/security/how-to-use-aws-ram-to-share-your-acm-private-ca-cross-account/).
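The cross-account sharing step might be sketched with AWS RAM as follows, run from the Security Tooling account. The CA ARN, account ID, and share name are placeholders.

```python
def share_private_ca(ca_arn: str, application_account_id: str) -> str:
    """Share a private CA with an Application account through AWS RAM
    (illustrative; identifiers are placeholders). Returns the share ARN."""
    import boto3
    ram = boto3.client("ram")
    share = ram.create_resource_share(
        name="private-ca-share",
        resourceArns=[ca_arn],
        principals=[application_account_id],
        # Keep the share inside the AWS organization.
        allowExternalPrincipals=False,
    )
    return share["resourceShare"]["resourceShareArn"]
```

After the share is accepted, application teams can request certificates from the shared CA through ACM without a private CA of their own.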

**Note**  
AWS Certificate Manager (ACM) also helps you provision, manage, and deploy public TLS certificates for use with AWS services. To support this functionality, ACM has to reside in the AWS account that would use the public certificate. This is discussed later in this guide, in the [Application account](application.md) section.

**Design considerations**  
With AWS Private CA, you can create a hierarchy of certificate authorities with up to five levels. You can also create multiple hierarchies, each with its own root. The AWS Private CA hierarchy should adhere to your organization's PKI design. However, keep in mind that increasing the CA hierarchy increases the number of certificates in the certification path, which, in turn, increases the validation time of an end-entity certificate. A well-defined CA hierarchy provides benefits that include granular security controls appropriate to each CA, division of administrative tasks through delegation of subordinate CAs to different applications, the use of CAs with limited, revocable trust, the ability to define different validity periods, and the ability to enforce path limits. Ideally, your root and subordinate CAs are in separate AWS accounts. For more information about planning a CA hierarchy by using AWS Private CA, see the [AWS Private CA documentation](https://docs.aws.amazon.com/privateca/latest/userguide/ca-hierarchy.html) and the blog post [How to secure an enterprise scale AWS Private CA hierarchy for automotive and manufacturing](https://aws.amazon.com/blogs/security/how-to-secure-an-enterprise-scale-acm-private-ca-hierarchy-for-automotive-and-manufacturing/).
AWS Private CA can integrate with your existing CA hierarchy, which allows you to use the automation and native AWS integration capability of ACM in conjunction with the existing root of trust that you use today. You can create a subordinate CA in AWS Private CA backed by a parent CA on premises. For more information about implementation, see [Installing a subordinate CA certificate signed by an external parent CA](https://docs.aws.amazon.com/privateca/latest/userguide/PCACertInstall.html#InstallSubordinateExternal) in the AWS Private CA documentation.

## Amazon Inspector


[Amazon Inspector](https://aws.amazon.com/inspector/) is an automated vulnerability management service that automatically discovers and scans Amazon EC2 instances, container images in Amazon Elastic Container Registry (Amazon ECR), AWS Lambda functions, and code repositories within your source code managers for known software vulnerabilities and unintended network exposure.

Amazon Inspector continuously assesses your environment throughout the lifecycle of your resources by automatically scanning resources whenever you make changes to them. Events that initiate rescanning a resource include installing a new package on an EC2 instance, installing a patch, and the publication of a new common vulnerabilities and exposures (CVE) report that affects the resource. Amazon Inspector supports Center for Internet Security (CIS) Benchmark assessments for operating systems in EC2 instances.

Amazon Inspector integrates with developer tools such as Jenkins and TeamCity for container image assessments. You can assess your container images for software vulnerabilities within your continuous integration and continuous delivery (CI/CD) tools, and push security to an earlier point in the software development lifecycle. Assessment findings are available in the CI/CD tool's dashboard, so you can perform automated actions in response to critical security issues such as blocked builds or image pushes to container registries. If you have an active AWS account, you can install the Amazon Inspector plugin from your CI/CD tool marketplace and add an Amazon Inspector scan in your build pipeline without needing to activate the Amazon Inspector service. This feature works with CI/CD tools hosted anywhere―on AWS, on premises, or in hybrid clouds―so you can consistently use a single solution across all your development pipelines. When Amazon Inspector is activated, it automatically discovers all your EC2 instances, container images in Amazon ECR and CI/CD tools, and Lambda functions at scale, and continuously monitors them for known vulnerabilities.

The network reachability findings of Amazon Inspector assess the accessibility of your EC2 instances to or from VPC edges such as internet gateways, VPC peering connections, or virtual private networks (VPNs) through a virtual gateway. These rules help automate the monitoring of your AWS networks and identify where network access to your EC2 instances might be misconfigured through mismanaged security groups, access control lists (ACLs), internet gateways, and so on. For more information, see the [Amazon Inspector documentation](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html).

When Amazon Inspector identifies vulnerabilities or open network paths, it produces a finding that you can investigate. The finding includes comprehensive details about the vulnerability, including a risk score, the affected resource, and remediation recommendations. The risk score is specifically tailored to your environment and is calculated by correlating up-to-date CVE information with temporal and environmental factors such as network accessibility and exploitability information to provide a contextual finding.

[Amazon Inspector Code Security](https://docs.aws.amazon.com/inspector/latest/user/code-security-assessments.html) scans first-party application source code, third-party application dependencies, and infrastructure as code (IaC) for vulnerabilities. After you activate Code Security, you can create and apply a scan configuration to your code repository to determine frequency, scan type, and repositories to be scanned. Code Security supports static application security testing (SAST), software composition analysis (SCA), and IaC scanning. To configure frequency, you can define scans on demand, at code changes, or periodically. Code scanning captures snippets of code to highlight detected vulnerabilities. The code snippets are stored encrypted with KMS keys. The delegated administrator for an organization cannot view code snippets that belong to member accounts. After you [integrate](https://docs.aws.amazon.com/inspector/latest/user/code-security-assessments-create-integration.html) your source code managers (SCMs) with Code Security, all code repositories are listed as projects in the Amazon Inspector console. Code Security monitors only the default branch of each repository. Amazon Inspector streamlines security remediation by providing specific code fix recommendations directly where developers work. The two-way integration with your SCM automatically suggests fixes as comments within pull requests (PRs) and merge requests (MRs) for critical and high findings, and alerts developers to the most important vulnerabilities to address without disrupting their workflow. 

In order to scan for vulnerabilities, EC2 instances must be [managed](https://docs.aws.amazon.com/systems-manager/latest/userguide/managed_instances.html) in AWS Systems Manager by using the AWS Systems Manager Agent (SSM Agent). No agents are required for network reachability of EC2 instances or vulnerability scanning of container images in Amazon ECR or Lambda functions.

Amazon Inspector is integrated with AWS Organizations and supports delegated administration. In the AWS SRA, the Security Tooling account is made the delegated administrator account for Amazon Inspector. The Amazon Inspector delegated administrator account can manage findings data and certain settings for members of the AWS organization. This includes viewing the details of aggregated findings for all member accounts, enabling or disabling scans for member accounts, and reviewing scanned resources within the AWS organization.
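The delegation flow might be scripted as follows. This is a sketch: account IDs are placeholders, step 1 runs with Org Management account credentials, and step 2 runs as the delegated administrator; the resource types shown are one possible selection.

```python
def enable_inspector_org(security_tooling_account_id: str,
                         member_account_ids: list) -> None:
    """Delegate Amazon Inspector administration to the Security Tooling account,
    then activate scanning for member accounts (illustrative)."""
    import boto3
    inspector = boto3.client("inspector2")
    # Step 1 (Org Management credentials): delegate administration.
    inspector.enable_delegated_admin_account(
        delegatedAdminAccountId=security_tooling_account_id
    )
    # Step 2 (delegated administrator credentials): activate scan types
    # for member accounts; adjust resourceTypes to your environment.
    inspector.enable(
        accountIds=member_account_ids,
        resourceTypes=["EC2", "ECR", "LAMBDA"],
    )
```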

**Design considerations**  
Amazon Inspector integrates with AWS Security Hub CSPM and Security Hub automatically when both services are enabled. You can use this integration to send all findings from Amazon Inspector to Security Hub CSPM, which will then include those findings in its analysis of your security posture.
Amazon Inspector automatically exports events for findings, resource coverage changes, and initial scans of individual resources to Amazon EventBridge, and, optionally, to an Amazon Simple Storage Service (Amazon S3) bucket. To export active findings to an S3 bucket, you need an AWS KMS key that Amazon Inspector can use to encrypt findings, and an S3 bucket with permissions that allow Amazon Inspector to upload objects. EventBridge integration enables you to monitor and process findings in near real time as part of your existing security and compliance workflows. EventBridge events are published to the Amazon Inspector delegated administrator account in addition to the member account from which they originated.
Amazon Inspector Code Security integrations with GitHub SaaS, GitHub Enterprise Cloud, and GitHub Enterprise Server require public internet access.

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [Amazon Inspector](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/inspector/inspector_org). It demonstrates delegated administration (Security Tooling) and configures Amazon Inspector for all existing and future accounts in the AWS organization.

## AWS Security Incident Response


[AWS Security Incident Response](https://aws.amazon.com/security-incident-response/) is a service that helps you prepare for, and respond to, security incidents in your AWS environment. It triages findings, escalates security events, and manages cases that require your immediate attention. Additionally, it gives you access to the AWS Customer Incident Response Team (CIRT), which investigates impacted resources. AWS Security Incident Response also provides automated response and remediation capabilities through AWS Systems Manager documents (SSM documents), which help security teams respond to, and recover from, security incidents more efficiently. AWS Security Incident Response [integrates with Amazon GuardDuty and AWS Security Hub CSPM](https://docs.aws.amazon.com/security-ir/latest/userguide/detect-and-analyze.html) to receive security findings and orchestrate automated responses.

In the AWS SRA, AWS Security Incident Response is deployed in the Security Tooling account as a delegated administrator account. The Security Tooling account is selected because it aligns with the account's purpose of operating security services and automating security alerting and responses. The Security Tooling account also acts as the delegated administrator account for Security Hub CSPM and GuardDuty, which, along with AWS Security Incident Response, help simplify workflow management. AWS Security Incident Response is configured to work with AWS Organizations, so you can manage incident responses across your organization's accounts from the Security Tooling account.

AWS Security Incident Response helps you implement the following phases of the incident response lifecycle:
+ Preparation: Create and maintain response plans and SSM documents for containment actions.
+ Detection and analysis: Automatically analyze security findings and determine incident severity.
+ Detection and analysis: Open a service-supported case and engage with the AWS CIRT for additional assistance. CIRT is a group of individuals who provide support during active security events. 
+ Containment and eradication: Run automated containment actions through SSM documents.
+ Post-incident activity: Document incident details and conduct post-incident analysis.

You can also use AWS Security Incident Response to create self-managed cases. AWS Security Incident Response can create an outbound notification or case when you need to be aware of, or act on, something that might impact your account or resources. This feature is available only when you enable the proactive response and alert triaging workflows as part of your subscription.

**Design considerations**  
When you implement AWS Security Incident Response, carefully review and test automated response actions before you enable them in production. Automation can speed up incident response, but improperly configured automated actions could impact legitimate workloads.
Consider using SSM documents in AWS Security Incident Response to implement organization-specific containment procedures while maintaining the service's built-in best practices for common incident types.
If you plan to use AWS Security Incident Response in a VPC, make sure that you have the appropriate VPC endpoints configured for Systems Manager and other integrated services to enable containment actions in private subnets.

## Deploying common security services within all AWS accounts


The [Apply security services across your AWS organization](security-services.md) section earlier in this reference highlighted security services that protect an AWS account, and noted that many of these services can also be configured and managed within AWS Organizations. Some of these services should be deployed in all accounts, and you will see them in the AWS SRA. This enables a consistent set of guardrails and provides centralized monitoring, management, and governance across your AWS organization.

Security Hub CSPM, GuardDuty, AWS Config, IAM Access Analyzer, and CloudTrail organization trails appear in all accounts. The first three support the delegated administrator feature discussed previously in the section [The management account, trusted access, and delegated administrators](management-account.md). CloudTrail currently uses a different aggregation mechanism.
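Delegation for these services follows two patterns: some are delegated through AWS Organizations, while others (such as GuardDuty) expose their own delegation API. A rough sketch, with placeholder IDs and example service principals to verify against each service's documentation:

```python
def delegate_security_services(security_tooling_account_id: str) -> None:
    """From the Org Management account, enable trusted access and register the
    Security Tooling account as delegated administrator (illustrative)."""
    import boto3
    org = boto3.client("organizations")
    # Services delegated through AWS Organizations (example principals).
    for principal in ("config.amazonaws.com", "access-analyzer.amazonaws.com"):
        org.enable_aws_service_access(ServicePrincipal=principal)
        org.register_delegated_administrator(
            AccountId=security_tooling_account_id, ServicePrincipal=principal
        )
    # GuardDuty delegates administration through its own API.
    boto3.client("guardduty").enable_organization_admin_account(
        AdminAccountId=security_tooling_account_id
    )
```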

The AWS SRA [GitHub code repository](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of enabling Security Hub CSPM, GuardDuty, AWS Config, AWS Firewall Manager, and CloudTrail organization trails across all your accounts, including the AWS Org Management account.

**Design considerations**  
Specific account configurations might necessitate additional security services. For example, accounts that manage S3 buckets (the Application and Log Archive accounts) should also include Amazon Macie and should consider turning on CloudTrail S3 data event logging as part of these common security services. (Macie supports delegated administration with centralized configuration and monitoring.) Another example is Amazon Inspector, which is applicable only for accounts that host either EC2 instances or Amazon ECR images.
In addition to the services described previously in this section, the AWS SRA includes two security-focused services, Amazon Detective and AWS Audit Manager, which support AWS Organizations integration and the delegated administrator functionality. However, those are not included as part of the recommended services for account baselining, because we have seen that these services are best used in the following scenarios:  
You have a dedicated team or group of resources that perform these functions. Detective is best utilized by security analyst teams and Audit Manager is helpful to your internal audit or compliance teams.
You want to focus on a core set of tools such as GuardDuty and Security Hub CSPM at the start of your project, and then build on these by using services that provide additional capabilities.

# Security OU – Log Archive account



|  | 
| --- |
| Influence the future of the AWS Security Reference Architecture (AWS SRA) by taking a [short survey](https://amazonmr.au1.qualtrics.com/jfe/form/SV_e3XI1t37KMHU2ua). | 

The following diagram illustrates the AWS security services that are configured in the Log Archive account.

![\[Security services in the Log Archive account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/log-archive-acct.png)


The Log Archive account is dedicated to ingesting and archiving all security-related logs and backups. With centralized logs in place, you can monitor, audit, and alert on Amazon S3 object access, unauthorized activity by identities, IAM policy changes, and other critical activities performed on sensitive resources. The security objectives are straightforward: This should be immutable storage, accessed only by controlled, automated, and monitored mechanisms, and built for durability (for example, by using the appropriate replication and archival processes). Controls can be implemented at depth to protect the integrity and availability of the logs and log management process. In addition to preventive controls, such as assigning least privilege roles to be used for access and encrypting logs with a controlled AWS KMS key, use detective controls such as AWS Config to monitor (and alert and remediate) this collection of permissions for unexpected changes.

**Design consideration**  
Operational log data used by your infrastructure, operations, and workload teams often overlaps with the log data used by security, audit, and compliance teams. We recommend that you consolidate your operational log data into the Log Archive account. Based on your specific security and governance requirements, you might need to filter operational log data saved to this account. You might also need to specify who has access to the operational log data in the Log Archive account.

## Types of logs


The primary logs shown in the AWS SRA include AWS CloudTrail (organization trail), Amazon VPC flow logs, access logs from Amazon CloudFront and AWS WAF, and DNS logs from Amazon Route 53. These logs provide an audit of actions taken (or attempted) by a user, role, AWS service, or network entity (identified, for example, by an IP address). Other log types (for example, application logs or database logs) can be captured and archived as well. For more information about log sources and logging best practices, see the [security documentation for each service](https://docs.aws.amazon.com/security/).

## Amazon S3 as central log store


Many AWS services log information in Amazon S3—either by default or exclusively. AWS CloudTrail, Amazon VPC Flow Logs, Elastic Load Balancing, Amazon GuardDuty, AWS Config, and AWS WAF are some examples of services that log information in Amazon S3. This means that log integrity is achieved through S3 object integrity; log confidentiality is achieved through S3 object access controls; and log availability is achieved through S3 Object Lock, S3 object versions, and S3 Lifecycle rules. By logging information in a dedicated and centralized S3 bucket that resides in a dedicated account, you can manage these logs in just a few buckets and enforce strict security controls, access, and separation of duties.

In the AWS SRA, the primary logs stored in Amazon S3 come from CloudTrail, so this section describes how to protect those objects. This guidance also applies to any other S3 objects created either by your own applications or by other AWS services. Apply these patterns whenever you have data in Amazon S3 that needs high integrity, strong access control, and automated retention or destruction. 

All new objects (including CloudTrail logs) that are uploaded to S3 buckets are [encrypted by default](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-encryption-faq.html) by using server-side encryption with Amazon S3 managed keys (SSE-S3). This helps protect the data at rest, but access is governed solely by IAM and S3 policies. To add a managed security layer, you can use server-side encryption with AWS KMS keys that you manage (SSE-KMS) on all security-related S3 buckets. This adds a second level of access control: to read log files, a user must have both Amazon S3 read permissions for the S3 object and an IAM policy or role that grants decrypt permissions through the associated key policy.
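As a concrete sketch of that second layer of access control, the following bucket policy denies any upload that is not encrypted with SSE-KMS under a designated key. The bucket name and key ARN are placeholders; substitute your own values.

```python
import json

# Hypothetical bucket and key ARN -- substitute your own.
BUCKET = "example-org-cloudtrail-logs"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"

def sse_kms_enforcement_policy(bucket: str, key_arn: str) -> dict:
    """Build a bucket policy that rejects uploads that are not
    encrypted with the designated KMS key (SSE-KMS)."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyNonKmsUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption": "aws:kms"
                    }
                },
            },
            {
                "Sid": "DenyWrongKmsKey",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption-aws-kms-key-id": key_arn
                    }
                },
            },
        ],
    }

policy = sse_kms_enforcement_policy(BUCKET, KMS_KEY_ARN)
print(json.dumps(policy, indent=2))
```

With this policy attached, even a principal with `s3:PutObject` permissions cannot write an object that bypasses the controlled KMS key.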

Two options help you protect or verify the integrity of CloudTrail log objects that are stored in Amazon S3. CloudTrail provides [log file integrity validation](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-validation-intro.html) to determine whether a log file was modified or deleted after CloudTrail delivered it. The other option is [S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html).
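For example, a default S3 Object Lock retention rule might look like the following sketch. The bucket name and the one-year compliance-mode retention period are assumptions for illustration; Object Lock must be enabled when the bucket is created.

```python
# Sketch: an S3 Object Lock configuration that retains log objects in
# compliance mode for one year. In compliance mode, no identity --
# including the root user -- can delete or overwrite a locked object
# version until the retention period expires.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # strictest mode; GOVERNANCE allows override
            "Years": 1,            # retention period for new object versions
        }
    },
}

# Applied with boto3 (requires s3:PutBucketObjectLockConfiguration):
# import boto3
# boto3.client("s3").put_object_lock_configuration(
#     Bucket="example-org-cloudtrail-logs",
#     ObjectLockConfiguration=object_lock_config,
# )
```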

In addition to protecting the S3 bucket itself, you can adhere to the principle of least privilege for the logging services (for example, CloudTrail) and the Log Archive account. For example, users with permissions granted by the AWS managed IAM policy `AWSCloudTrail_FullAccess` can disable or reconfigure the most sensitive and important auditing functions in their AWS accounts. Limit the application of this IAM policy to as few individuals as possible. 
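To complement that restriction, a service control policy (SCP) can deny trail-tampering actions organization-wide. The following is a hedged sketch; the exception role name is a hypothetical convention, not an AWS-defined value.

```python
import json

# Hypothetical SCP: deny actions that could disable or reconfigure
# CloudTrail logging anywhere in the organization, unless the request
# comes from a designated administration role.
cloudtrail_guard_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectCloudTrail",
            "Effect": "Deny",
            "Action": [
                "cloudtrail:StopLogging",
                "cloudtrail:DeleteTrail",
                "cloudtrail:UpdateTrail",
                "cloudtrail:PutEventSelectors",
            ],
            "Resource": "*",
            "Condition": {
                "ArnNotLike": {
                    # Placeholder role name -- match your own break-glass role
                    "aws:PrincipalArn": "arn:aws:iam::*:role/CloudTrailAdminRole"
                }
            },
        }
    ],
}

print(json.dumps(cloudtrail_guard_scp, indent=2))
```

Attached to the root or to member OUs, this SCP applies even to principals who hold `AWSCloudTrail_FullAccess` in their own accounts.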

Use detective controls, such as those delivered by AWS Config and IAM Access Analyzer, to monitor (and alert and remediate) this broader collective of preventive controls for unexpected changes.

For a deeper discussion of security best practices for S3 buckets, see the [Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html), [online tech talks](https://www.youtube.com/watch?v=7M3s_ix9ljE), and the blog post [Top 10 security best practices for securing data in Amazon S3](https://aws.amazon.com/blogs/security/top-10-security-best-practices-for-securing-data-in-amazon-s3/).

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [Amazon S3 block account public access](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/s3/s3_block_account_public_access). This module blocks Amazon S3 public access for all existing and future accounts in the AWS organization.

## Amazon Security Lake


AWS SRA recommends that you use the Log Archive account as the delegated administrator account for Amazon Security Lake. When you do this, Security Lake collects supported logs in dedicated S3 buckets in the same account as other SRA-recommended security logs.

To protect the availability of the logs and the log management process, the S3 buckets for Security Lake should be accessed only by the Security Lake service or by IAM roles that are managed by Security Lake for sources or subscribers. In addition to using preventive controls―such as assigning least-privilege roles for access, and encrypting logs with a controlled AWS KMS key―use detective controls such as AWS Config to monitor (and alert and remediate) this collection of permissions for unexpected changes.

The Security Lake administrator can enable log collection across your AWS organization. These logs are stored in regional S3 buckets in the Log Archive account. Additionally, to centralize logs and facilitate easier storage and analysis, the Security Lake administrator can choose one or more rollup Regions where logs from all the regional S3 buckets are consolidated and stored. Logs from supported AWS services are automatically converted into a standardized open-source schema called Open Cybersecurity Schema Framework (OCSF) and saved in Apache Parquet format in Security Lake S3 buckets. With OCSF support, Security Lake efficiently normalizes and consolidates security data from AWS and other enterprise security sources to create a unified and reliable repository of security-related information.

Security Lake can collect logs that are associated with AWS CloudTrail management events and CloudTrail data events for Amazon S3 and AWS Lambda. To collect CloudTrail management events in Security Lake, you must have at least one CloudTrail multi-Region organization trail that collects read and write CloudTrail management events. Logging must be enabled for the trail. A multi-Region trail delivers log files from multiple Regions to a single S3 bucket for a single AWS account. If the Regions are in different countries, consider data export requirements to determine whether multi-Region trails can be enabled.

AWS Security Hub CSPM is a supported native data source in Security Lake, and you should add Security Hub CSPM findings to Security Lake. Security Hub CSPM generates findings from many different AWS services and third-party integrations. These findings help you get an overview of your compliance posture and whether you're following security recommendations for AWS and AWS Partner solutions.

To gain visibility and actionable insights from logs and events, you can query the data by using tools such as [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html), [Amazon OpenSearch Service](https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html), [Amazon Quick Suite](https://docs.aws.amazon.com/quicksuite/latest/userguide/what-is.html), and third-party solutions. Users who require access to the Security Lake log data shouldn't access the Log Archive account directly. Instead, they should access the data from the Security Tooling account, or from other AWS accounts or on-premises locations that provide analytics tools such as OpenSearch Service, Quick Suite, or third-party security information and event management (SIEM) tools. To provide access to the data, the administrator should configure [Security Lake subscribers](https://docs.aws.amazon.com/security-lake/latest/userguide/subscriber-management.html) in the Log Archive account and configure the account that needs access as a [query access subscriber](https://docs.aws.amazon.com/security-lake/latest/userguide/subscriber-query-access.html). For more information, see [Amazon Security Lake](security-tooling.md#tool-security-lake) in the *Security OU ‒ Security Tooling account* section of this guide.
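As an illustration, a query access subscriber might run an Athena query like the following against the OCSF-normalized Security Hub findings table. Security Lake database and table names vary by Region and source version, so treat the names and field references here as placeholders.

```python
# Sketch of a subscriber-side Athena query against Security Lake data.
# Names follow the Security Lake naming pattern but are placeholders --
# check your own Glue catalog for the exact database and table.
DATABASE = "amazon_security_lake_glue_db_us_east_1"
TABLE = "amazon_security_lake_table_us_east_1_sh_findings_2_0"

query = f"""
SELECT time_dt,
       severity,
       status,
       finding_info.title
FROM "{DATABASE}"."{TABLE}"
WHERE severity IN ('High', 'Critical')
ORDER BY time_dt DESC
LIMIT 100
"""

# Executed with boto3 from the query-access subscriber account:
# import boto3
# boto3.client("athena").start_query_execution(
#     QueryString=query,
#     WorkGroup="primary",
#     ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
# )
print(query)
```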

Security Lake provides an AWS managed policy to help you manage administrator access to the service. For more information, see the [Security Lake User Guide](https://docs.aws.amazon.com/security-lake/latest/userguide/security-iam-awsmanpol.html). As a best practice, we recommend that you configure Security Lake through development pipelines and prevent configuration changes through the AWS Management Console or the AWS Command Line Interface (AWS CLI). Additionally, set up strict IAM policies and service control policies (SCPs) that grant only the permissions necessary to manage Security Lake. You can [configure notifications](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ways-to-add-notification-config-to-bucket.html) to detect any direct access to these S3 buckets.
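One hedged sketch of such a notification configuration publishes delete and ACL-change events on a log bucket to an SNS topic for alerting. The bucket and topic names are placeholders.

```python
# Sketch: an S3 event notification configuration for a security log
# bucket. Deleting a log object or changing its ACL publishes an event
# to an SNS topic, from which you can drive alerts or remediation.
notification_config = {
    "TopicConfigurations": [
        {
            "TopicArn": "arn:aws:sns:us-east-1:111122223333:log-bucket-alerts",
            "Events": [
                "s3:ObjectRemoved:*",  # any delete of a log object
                "s3:ObjectAcl:Put",    # any ACL change on a log object
            ],
        }
    ]
}

# Applied with boto3:
# import boto3
# boto3.client("s3").put_bucket_notification_configuration(
#     Bucket="example-org-security-lake-logs",
#     NotificationConfiguration=notification_config,
# )
```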

**Design consideration**  
Enabling CloudTrail management events in Security Lake incurs Security Lake charges. Collecting CloudTrail management events in Security Lake requires a CloudTrail multi-Region organization trail that collects read and write management events. This first trail is available at no cost to you. CloudTrail management events typically make up a small percentage (around 5 percent) of total CloudTrail events. This consideration applies to customers who use AWS Control Tower or who already centralize CloudTrail logs in a Log Archive account.

# Infrastructure OU – Network account




The following diagram illustrates the AWS security services that are configured in the Network account. 

![\[Security services for the Network account\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/network-acct.png)


The Network account manages the gateway between your application and the broader internet. It is important to protect that two-way interface. The Network account isolates networking services, configuration, and operation from the individual application workloads, security, and other infrastructure. This arrangement not only limits connectivity, permissions, and data flow, but also supports separation of duties and least privilege for the teams that need to operate in these accounts. By splitting network flow into separate inbound and outbound virtual private clouds (VPCs), you can protect sensitive infrastructure and traffic from undesired access. The inbound network is generally considered higher risk and deserves appropriate routing, monitoring, and mitigations for potential issues. These infrastructure accounts inherit permission guardrails from the Org Management account and the Infrastructure OU. Networking (and security) teams manage the majority of the infrastructure in this account.

## Network architecture


Although network design and specifics are beyond the scope of this document, we recommend these three options for network connectivity between the various accounts: VPC peering, AWS PrivateLink, and AWS Transit Gateway. Important considerations in choosing among these are operational norms, budgets, and specific bandwidth needs.
+ [VPC peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) ‒ The simplest way to connect two VPCs is to use VPC peering. A connection enables full bidirectional connectivity between the VPCs. VPCs that are in separate accounts and AWS Regions can also be peered together. At scale, when you have tens to hundreds of VPCs, interconnecting them with peering results in a mesh of hundreds to thousands of peering connections, which can be challenging to manage and scale. VPC peering is best used when resources in one VPC must communicate with resources in another VPC, the environment of both VPCs is controlled and secured, and the number of VPCs to be connected is fewer than 10 (to allow for the individual management of each connection).
+ [AWS PrivateLink](https://aws.amazon.com/privatelink) ‒ PrivateLink provides private connectivity between VPCs, services, and applications. You can create your own application in your VPC and configure it as a PrivateLink-powered service (referred to as an *endpoint service*). Other AWS principals can create a connection from their VPC to your endpoint service by using an [interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.html) or a [Gateway Load Balancer endpoint](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway-load-balancer.html), depending on the type of service. When you use PrivateLink, service traffic doesn't pass across a publicly routable network. Use PrivateLink when you have a client-server setup where you want to give one or more consumer VPCs unidirectional access to a specific service or set of instances in the service provider VPC. This is also a good option when clients and servers in the two VPCs have overlapping IP addresses, because PrivateLink uses elastic network interfaces within the client VPC so that there are no IP conflicts with the service provider.
+ [AWS Transit Gateway](https://aws.amazon.com/transit-gateway) ‒ Transit Gateway provides a hub-and-spoke design for connecting VPCs and on-premises networks as a fully managed service without requiring you to provision virtual appliances. AWS manages high availability and scalability. A transit gateway is a regional resource and can connect thousands of VPCs within the same AWS Region. You can attach your hybrid connectivity (VPN and AWS Direct Connect connections) to a single transit gateway, thereby consolidating and controlling your AWS organization's entire routing configuration in one place. A transit gateway solves the complexity involved with creating and managing multiple VPC peering connections at scale. It is the default for most network architectures, but specific needs around cost, bandwidth, and latency might make VPC peering a better fit for your needs.
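The scaling difference between full-mesh peering and a hub-and-spoke transit gateway can be quantified with simple arithmetic:

```python
def full_mesh_peering_links(vpc_count: int) -> int:
    """Peering connections needed for any-to-any connectivity:
    n * (n - 1) / 2 for n VPCs."""
    return vpc_count * (vpc_count - 1) // 2

def transit_gateway_attachments(vpc_count: int) -> int:
    """A hub-and-spoke design needs one attachment per VPC."""
    return vpc_count

for n in (5, 10, 100):
    print(f"{n:>3} VPCs: {full_mesh_peering_links(n):>5} peering links "
          f"vs {transit_gateway_attachments(n):>3} TGW attachments")
```

Ten VPCs already require 45 peering connections to mesh fully, and 100 VPCs require 4,950, which is why the guidance above suggests peering only below roughly 10 VPCs.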

## Inbound (ingress) VPC


The inbound VPC is intended to accept, inspect, and route network connections initiated from outside the application. Depending on the specifics of the application, you can expect to see some network address translation (NAT) in this VPC. Flow logs from this VPC are captured and stored in the Log Archive account.

## Outbound (egress) VPC


The outbound VPC is intended to handle network connections initiated from within the application. Depending on the specifics of the application, you can expect to see traffic NAT, AWS service-specific VPC endpoints, and hosting of external API endpoints in this VPC. Flow logs from this VPC are captured and stored in the Log Archive account.

## Inspection VPC


A dedicated inspection VPC provides a simplified and central approach for managing inspections between VPCs (in the same or in different AWS Regions), the internet, and on-premises networks. For the AWS SRA, ensure that all traffic between VPCs passes through the inspection VPC, and avoid using the inspection VPC for any other workload.

## AWS Network Firewall


[AWS Network Firewall](https://aws.amazon.com/network-firewall/) is a highly available, managed network firewall service for your VPC. It enables you to effortlessly deploy and manage stateful inspection, intrusion prevention and detection, and web filtering to help protect your virtual networks on AWS. You can use Network Firewall to decrypt TLS sessions and inspect inbound and outbound traffic. For more information about configuring Network Firewall, see the [AWS Network Firewall – New Managed Firewall Service in VPC](https://aws.amazon.com/blogs/aws/aws-network-firewall-new-managed-firewall-service-in-vpc/) blog post.

You use a firewall on a per-Availability Zone basis in your VPC. For each Availability Zone, you choose a subnet to host the firewall endpoint that filters your traffic. The firewall endpoint in an Availability Zone can protect all the subnets inside the zone except for the subnet where it's located. Depending on the use case and deployment model, the firewall subnet could be either public or private. The firewall is completely transparent to the traffic flow and does not perform network address translation (NAT). It preserves the source and destination address. In this reference architecture, the firewall endpoints are hosted in an inspection VPC. All traffic from the inbound VPC and to the outbound VPC is routed through this firewall subnet for inspection.

Network Firewall makes firewall activity visible in real time through Amazon CloudWatch metrics, and offers increased visibility of network traffic by sending logs to Amazon Simple Storage Service (Amazon S3), CloudWatch, and Amazon Data Firehose. Network Firewall is interoperable with your existing security approach, including technologies from [AWS Partners](https://aws.amazon.com/network-firewall/partners/). You can also import existing [Suricata](https://suricata-ids.org/) rulesets, which might have been written internally or sourced externally from third-party vendors or open-source platforms.
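For instance, an imported Suricata-compatible stateful rule might look like the following sketch. The domain, `sid` value, and rule group name are illustrative assumptions, not values from any real feed.

```python
# A minimal Suricata-compatible rule of the kind you can place in a
# Network Firewall stateful rule group: drop outbound TLS handshakes
# whose SNI matches a known-bad domain (placeholder value).
rule = (
    'drop tls $HOME_NET any -> $EXTERNAL_NET any '
    '(tls.sni; content:"malware.example.com"; nocase; '
    'msg:"Blocked TLS to known-bad domain"; sid:1000001; rev:1;)'
)

# The rule string goes into a stateful rule group, e.g. with boto3:
# import boto3
# boto3.client("network-firewall").create_rule_group(
#     RuleGroupName="block-bad-domains",
#     Type="STATEFUL",
#     Capacity=10,
#     Rules=rule,
# )
print(rule)
```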

In the AWS SRA, Network Firewall is used within the Network account because the network control-focused functionality of the service aligns with the intent of the account.

**Design considerations**  
AWS Firewall Manager supports Network Firewall, so you can centrally configure and deploy Network Firewall rules across your organization. (For details, see [Using AWS Network Firewall policies in Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/network-firewall-policies.html) in the AWS documentation.) When you configure Firewall Manager, it automatically creates a firewall with sets of rules in the accounts and VPCs that you specify. It also deploys an endpoint in a dedicated subnet for every Availability Zone that contains public subnets. At the same time, any changes to the centrally configured set of rules are automatically updated downstream on the deployed Network Firewall firewalls.
There are [multiple deployment models](https://aws.amazon.com/blogs/networking-and-content-delivery/deployment-models-for-aws-network-firewall/) available with Network Firewall. The right model depends on your use case and requirements. Examples include the following:  
+ A distributed deployment model, where Network Firewall is deployed into individual VPCs.
+ A centralized deployment model, where Network Firewall is deployed into a centralized VPC for east-west (VPC-to-VPC) or north-south (internet egress and ingress, on-premises) traffic.
+ A combined deployment model, where Network Firewall is deployed into a centralized VPC for east-west and a subset of north-south traffic.
As a best practice, do not use the Network Firewall subnet to deploy any other services. This is because Network Firewall cannot inspect traffic from sources or destinations within the firewall subnet.

## Network Access Analyzer


[Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-vaa.html) is a feature of Amazon VPC that identifies unintended network access to your resources. You can use Network Access Analyzer to validate network segmentation, identify resources that are accessible from the internet or accessible only from trusted IP address ranges, and validate that you have appropriate network controls on all network paths.

Network Access Analyzer uses automated reasoning algorithms to analyze the network paths that a packet can take between resources in an AWS network, and produces findings for paths that match your defined [Network Access Scope](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/working-with-network-access-scopes.html). Network Access Analyzer performs a static analysis of a network configuration, meaning that no packets are transmitted in the network as part of this analysis. 
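As a hedged sketch, a Network Access Scope for the Network account might match paths that originate at an internet gateway while excluding paths that pass through the inspection firewall, so that only unexpected paths produce findings. The structure below follows the MatchPaths/ExcludePaths schema; the resource type name and ARN are illustrative assumptions.

```python
# Sketch of a Network Access Scope definition (not a real deployed
# scope): flag any path from an internet gateway that does NOT traverse
# the inspection firewall. The firewall ARN is a placeholder.
access_scope = {
    "MatchPaths": [
        {
            "Source": {
                "ResourceStatement": {
                    "ResourceTypes": ["AWS::EC2::InternetGateway"]
                }
            }
        }
    ],
    "ExcludePaths": [
        {
            "ThroughResources": [
                {
                    "ResourceStatement": {
                        "Resources": [
                            "arn:aws:network-firewall:us-east-1:111122223333:firewall/inspection-firewall"
                        ]
                    }
                }
            ]
        }
    ],
}
```

Any finding produced by this scope represents internet-originated reachability that bypasses the intended inspection path.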

The Amazon Inspector Network Reachability rules provide a related feature. The findings generated by these rules are used in the Application account. Both Network Access Analyzer and Network Reachability use the latest technology from the [AWS provable security initiative](https://aws.amazon.com/security/provable-security/), and they apply this technology with different areas of focus. The Network Reachability package focuses specifically on EC2 instances and their internet accessibility.

The Network account defines the critical network infrastructure that controls the traffic in and out of your AWS environment. This traffic needs to be tightly monitored. In the AWS SRA, Network Access Analyzer is used within the Network account to help identify unintended network access, identify internet-accessible resources through internet gateways, and verify that appropriate network controls such as network firewalls and NAT gateways are present on all network paths between resources and internet gateways.

**Design consideration**  
Network Access Analyzer is a feature of Amazon VPC, and it can be used in any AWS account that has a VPC. Network administrators can be granted tightly scoped, cross-account IAM roles to validate that approved network paths are enforced within each AWS account.

## AWS RAM


[AWS Resource Access Manager](https://aws.amazon.com/ram/) (AWS RAM) helps you securely share the AWS resources that you create in one AWS account with other AWS accounts. AWS RAM provides a central place to manage resource sharing and to standardize this experience across accounts. This makes it simpler to manage resources while taking advantage of the administrative and billing isolation, and the reduced scope of impact, that a multi-account strategy provides. If your account is managed by AWS Organizations, AWS RAM lets you share resources with all accounts in the organization or only with the accounts within one or more specified organizational units (OUs). You can also share with specific AWS accounts by account ID, regardless of whether the account is part of an organization. You can also share [some supported resource types](https://docs.aws.amazon.com/ram/latest/userguide/shareable.html) with specified IAM roles and users.

AWS RAM enables you to share resources that do not support IAM resource-based policies, such as VPC subnets and Route 53 Resolver rules. Furthermore, with AWS RAM, the owners of a resource can see which principals have access to each resource that they have shared. IAM principals can retrieve the list of resources shared with them directly, which they can't do with resources shared through IAM resource-based policies. If AWS RAM is used to share resources outside your AWS organization, an invitation process is initiated. The recipient must accept the invitation before access to the resources is granted. This provides additional checks and balances.
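A sketch of the AWS RAM request a network administrator might make to share a VPC subnet with an entire OU follows. The subnet ARN, OU ARN, and share name are placeholders.

```python
# Sketch: parameters for an AWS RAM resource share that makes one VPC
# subnet available to every account in a target OU. All identifiers
# are placeholders -- substitute your own.
share_request = {
    "name": "shared-workload-subnets",
    "resourceArns": [
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0abc123def4567890",
    ],
    "principals": [
        "arn:aws:organizations::111122223333:ou/o-exampleorgid/ou-examplerootid-exampleouid",
    ],
    # Keep the share inside the organization so no invitation flow is
    # needed and external principals are rejected.
    "allowExternalPrincipals": False,
}

# import boto3
# boto3.client("ram").create_resource_share(**share_request)
```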

AWS RAM is invoked and managed by the resource owner, in the account where the shared resource is deployed. One common use case for AWS RAM illustrated in the AWS SRA is for network administrators to share VPC subnets and transit gateways with the entire AWS organization. This provides the ability to decouple AWS account and network management functions and helps achieve separation of duties. For more information about VPC sharing, see the AWS blog post [VPC sharing: A new approach to multiple accounts and VPC management](https://aws.amazon.com/blogs/networking-and-content-delivery/vpc-sharing-a-new-approach-to-multiple-accounts-and-vpc-management/) and the [AWS network infrastructure](https://docs.aws.amazon.com/whitepapers/latest/building-scalable-secure-multi-vpc-network-infrastructure/amazon-vpc-sharing.html) whitepaper.

**Design consideration**  
Although AWS RAM as a service is deployed only within the Network account in the AWS SRA, it would typically be deployed in more than one account. For example, you can centralize your data lake management in a single data lake account and then share the AWS Lake Formation data catalog resources (databases and tables) with other accounts in your AWS organization. For more information, see the [AWS Lake Formation documentation](https://docs.aws.amazon.com/lake-formation/latest/dg/sharing-catalog-resources.html) and the AWS blog post [Securely share your data across AWS accounts using AWS Lake Formation](https://aws.amazon.com/blogs/big-data/securely-share-your-data-across-aws-accounts-using-aws-lake-formation/). Additionally, security administrators can use AWS RAM to follow best practices when they build an AWS Private Certificate Authority hierarchy. CAs can be shared with external third parties, who can issue certificates without having access to the CA hierarchy. This allows the originating organization to limit and revoke third-party access.

## AWS Verified Access


[AWS Verified Access](https://aws.amazon.com/verified-access/) provides secure access to corporate applications and resources without a VPN. It improves security posture and helps apply zero trust access by evaluating each access request in real time against predefined requirements. You can define a unique access policy for each application with conditions based on [identity data](https://docs.aws.amazon.com/verified-access/latest/ug/user-trust.html) and [device posture](https://docs.aws.amazon.com/verified-access/latest/ug/device-trust.html). Verified Access provides secure access to HTTP(S) applications, such as browser-based applications, and to non-HTTP(S) applications over the TCP, SSH, and RDP protocols for applications such as Git repositories, databases, and groups of EC2 instances. These can be accessed by using a command-line terminal or from a desktop application.

Verified Access also simplifies security operations by helping administrators efficiently set and monitor access policies. This frees up time to update policies, respond to security and connectivity incidents, and audit for compliance standards. Verified Access also supports integration with AWS WAF to help you filter out common threats such as SQL injection and cross-site scripting (XSS).

Verified Access is seamlessly integrated with AWS IAM Identity Center, which allows users to authenticate with SAML-based third-party identity providers (IdPs). If you already have a custom IdP solution that is compatible with OpenID Connect (OIDC), Verified Access can also authenticate users by connecting directly with your IdP. Verified Access logs every access attempt so that you can quickly respond to security incidents and audit requests. It supports delivery of these logs to Amazon Simple Storage Service (Amazon S3), Amazon CloudWatch Logs, and Amazon Data Firehose.

Verified Access supports two common corporate application patterns: internal and internet-facing. Verified Access integrates with applications by using Application Load Balancers or elastic network interfaces. If you're using an Application Load Balancer, Verified Access requires an internal load balancer. Because Verified Access supports AWS WAF at the instance level, an existing application that has AWS WAF integration with an Application Load Balancer can move policies from the load balancer to the Verified Access instance. A corporate application is represented as a Verified Access endpoint. Each endpoint is associated with a Verified Access group and inherits the access policy for the group. A Verified Access group is a collection of Verified Access endpoints and a group-level Verified Access policy. Groups simplify policy management and enable IT administrators to set up baseline criteria. Application owners can further define granular policies depending on the sensitivity of the application.

In the AWS SRA, Verified Access is hosted within the Network account. The central IT team sets up centrally managed configurations. For example, they might connect trust providers such as identity providers (for example, Okta) and device trust providers (for example, Jamf), create groups, and determine the group-level policy. These configurations can then be shared with tens, hundreds, or thousands of workload accounts by using AWS RAM. This enables application teams to manage the underlying endpoints that manage their applications without overhead from other teams. AWS RAM provides a scalable way to leverage Verified Access for corporate applications that are hosted in different workload accounts.

**Design consideration**  
You can group endpoints for applications that have similar security requirements to simplify policy administration, and then share the group with application accounts. All applications in the group share the group policy. If an application in the group requires a specific policy because of an edge case, you can apply application-level policy for that application.
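Verified Access policies are written in the Cedar policy language. The following is a hedged sketch of a group-level policy; the trust-provider reference names (`idp`, `jamf`) and the claim names are examples that must match whatever you configured on your Verified Access instance.

```python
# A sketch of a group-level Verified Access policy in Cedar: allow
# access only to members of a named IdP group on a low-risk device.
# "idp", "jamf", "groups", and "risk" are placeholder names determined
# by your trust-provider configuration.
group_policy = """
permit(principal, action, resource)
when {
    context.idp.groups.contains("app-team")
    && context.jamf.risk == "LOW"
};
"""
print(group_policy)
```

An application-level policy with stricter conditions can then be layered on individual endpoints for edge cases, as described above.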

## Amazon VPC Lattice


[Amazon VPC Lattice](https://aws.amazon.com/vpc/lattice/) is an application networking service that connects, monitors, and secures service-to-service communications. A [service](https://docs.aws.amazon.com/vpc-lattice/latest/ug/services.html), often called a *microservice*, is an independently deployable unit of software that performs a specific task. VPC Lattice automatically manages network connectivity and application-layer routing between services across VPCs and AWS accounts without requiring you to manage the underlying network connectivity, frontend load balancers, or sidecar proxies. It provides a fully managed application-layer proxy that routes requests based on characteristics such as paths and headers. VPC Lattice is built into the VPC infrastructure, so it provides a consistent approach across a wide range of compute types such as Amazon Elastic Compute Cloud (Amazon EC2), Amazon Elastic Kubernetes Service (Amazon EKS), and AWS Lambda. VPC Lattice also supports weighted routing for blue/green and canary-style deployments. You can use VPC Lattice to create a [service network](https://docs.aws.amazon.com/vpc-lattice/latest/ug/service-networks.html) with a logical boundary that automatically implements service discovery and connectivity. VPC Lattice integrates with IAM for service-to-service authentication and authorization by using [auth policies](https://docs.aws.amazon.com/vpc-lattice/latest/ug/auth-policies.html).
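As a sketch of such an auth policy, the following allows only SigV4-signed requests from a specific workload role to invoke a VPC Lattice service. The account ID and role name are placeholders.

```python
import json

# Sketch of a VPC Lattice auth policy: permit a single workload role to
# invoke the service; all other (including anonymous) requests are
# implicitly denied. The role ARN is a placeholder.
auth_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/orders-service-role"
            },
            "Action": "vpc-lattice-svcs:Invoke",
            "Resource": "*",
        }
    ],
}
print(json.dumps(auth_policy, indent=2))
```

A policy like this can be attached at the service level by the service owner, while the network administrator applies a coarser policy at the service-network level; a request must satisfy both.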

VPC Lattice integrates with AWS RAM to enable sharing of services and service networks. The AWS SRA depicts a distributed architecture where developers or service owners create VPC Lattice services in their Application account. Service owners define the listeners, routing rules, and target groups, along with auth policies. They then share the services with other accounts and associate the services with VPC Lattice service networks. These networks are created by network administrators in the Network account and shared with the Application account. Network administrators configure service network-level auth policies and monitoring, and associate VPCs and VPC Lattice services with one or more service networks. For a detailed walkthrough of this distributed architecture, see the AWS blog post [Build secure multi-account multi-VPC connectivity for your applications with Amazon VPC Lattice](https://aws.amazon.com/blogs/networking-and-content-delivery/build-secure-multi-account-multi-vpc-connectivity-for-your-applications-with-amazon-vpc-lattice/).

**Design considerations**  
Depending on your organization's operating model for service or service network visibility, network administrators can share their service networks and give service owners the control to associate their services and VPCs with these service networks. Alternatively, service owners can share their services, and network administrators can associate the services with service networks.
A client can send requests to services that are associated with a service network only if the client is in a VPC that's associated with the same service network. Client traffic that traverses a VPC peering connection or a transit gateway is denied.

## Edge security


Edge security generally entails three types of protections: secure content delivery, network and application-layer protection, and distributed denial of service (DDoS) mitigation. Content such as data, videos, applications, and APIs has to be delivered quickly and securely, using the recommended version of TLS to encrypt communications between endpoints. The content should also have access restrictions through signed URLs, signed cookies, and token authentication. Application-level security should be designed to control bot traffic, block common attack patterns such as SQL injection or cross-site scripting (XSS), and provide web traffic visibility. At the edge, DDoS mitigation provides an important defense layer that ensures continued availability of mission-critical business operations and services. Applications and APIs should be protected from SYN floods, UDP floods, and other reflection attacks, and should have inline mitigation to stop basic network-layer attacks.

AWS offers several services to help provide a secure environment, from the core cloud to the edge of the AWS network. Amazon CloudFront, AWS Certificate Manager (ACM), AWS Shield, AWS WAF, and Amazon Route 53 work together to help create a flexible, layered security perimeter. With CloudFront, content, APIs, or applications can be delivered over HTTPS by using TLSv1.3 to encrypt and secure communication between viewer clients and CloudFront. You can use ACM to create a [custom SSL certificate](https://aws.amazon.com/cloudfront/custom-ssl-domains/) and deploy it to a CloudFront distribution for free. ACM automatically handles certificate renewal. Shield is a managed DDoS protection service that helps safeguard applications that run on AWS. It provides dynamic detection and automatic inline mitigations that minimize application downtime and latency. AWS WAF lets you create rules to filter web traffic based on specific conditions (IP addresses, HTTP headers and body, or custom URIs), common web attacks, and pervasive bots. Route 53 is a highly available and scalable DNS web service. Route 53 connects user requests to internet applications that run on AWS or on premises. The AWS SRA adopts a centralized network ingress architecture by using AWS Transit Gateway, hosted within the Network account, so the edge security infrastructure is also centralized in this account.

## Amazon CloudFront


[Amazon CloudFront](https://aws.amazon.com/cloudfront/) is a secure content delivery network (CDN) that provides inherent protection against common network-layer and transport-layer DDoS attempts. You can deliver your content, APIs, or applications by using TLS certificates, and advanced TLS features are enabled automatically. You can use AWS Certificate Manager (ACM) to create a custom TLS certificate and enforce HTTPS communications between viewers and CloudFront, as described later in the [ACM section](#network-acm). You can additionally require that the communications between CloudFront and your custom origin implement end-to-end encryption in transit. For this scenario, you must install a TLS certificate on your origin server. If your origin is an elastic load balancer, you can use a certificate that is generated by ACM or a certificate that is validated by a third-party certificate authority (CA) and imported into ACM. If S3 bucket website endpoints serve as the origin for CloudFront, you can't configure CloudFront to use HTTPS with your origin, because Amazon S3 doesn't support HTTPS for website endpoints. (However, you can still require HTTPS between viewers and CloudFront.) For all other origins that support installing HTTPS certificates, you must use a certificate that is signed by a trusted third-party CA.

CloudFront provides multiple options to secure and restrict access to your content. For example, it can restrict access to your Amazon S3 origin by using signed URLs and signed cookies. For more information, see [Configure secure access and restrict access to content](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecurityAndPrivateContent.html) in the CloudFront documentation.

The AWS SRA illustrates centralized CloudFront distributions in the Network account because they align with the centralized network pattern that's implemented by using AWS Transit Gateway. By deploying and managing CloudFront distributions in the Network account, you gain the benefits of centralized controls. You can manage all CloudFront distributions in a single place, which makes it easier to control access, configure settings, and monitor usage across all accounts. Additionally, you can manage the ACM certificates, DNS records, and CloudFront logging from one centralized account.

The CloudFront security dashboard provides AWS WAF visibility and controls directly in your CloudFront distribution. You get visibility into your application's top security trends, allowed and blocked traffic, and bot activity. You can use investigative tools such as visual log analyzers and built-in blocking controls to isolate traffic patterns and block traffic without querying logs or writing security rules.

**Design considerations**  
Alternatively, you can deploy CloudFront as part of the application in the Application account. In this scenario, the application team makes decisions such as how the CloudFront distributions are deployed, determines the appropriate cache policies, and takes responsibility for governance, auditing, and monitoring of the CloudFront distributions. By spreading CloudFront distributions across multiple accounts, you can benefit from additional service quotas. As another benefit, you can use CloudFront's inherent and automated [origin access identity (OAI) and origin access control (OAC)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) configuration to restrict access to Amazon S3 origins.
When you deliver web content through a CDN such as CloudFront, you have to prevent viewers from bypassing the CDN and accessing your origin content directly. To achieve this origin access restriction, you can use CloudFront and AWS WAF to add custom headers and verify the headers before you forward requests to your custom origin. For a detailed explanation of this solution, see the AWS security blog post [How to enhance Amazon CloudFront origin security with AWS WAF and AWS Secrets Manager](https://aws.amazon.com/blogs/security/how-to-enhance-amazon-cloudfront-origin-security-with-aws-waf-and-aws-secrets-manager/). An alternative method is to allow only the CloudFront managed prefix list in the security group that's associated with the Application Load Balancer. This helps ensure that only CloudFront distributions can access the load balancer.
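
The custom-header approach can be sketched as an AWS WAF rule on the Regional web ACL that blocks any request that doesn't carry the expected header value. The header name and secret value below are illustrative placeholders (the referenced blog post automates rotating the secret with Secrets Manager):

```json
{
  "Name": "VerifyCloudFrontOriginHeader",
  "Priority": 0,
  "Statement": {
    "NotStatement": {
      "Statement": {
        "ByteMatchStatement": {
          "SearchString": "example-secret-value",
          "FieldToMatch": { "SingleHeader": { "Name": "x-origin-verify" } },
          "TextTransformations": [ { "Priority": 0, "Type": "NONE" } ],
          "PositionalConstraint": "EXACTLY"
        }
      }
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "VerifyCloudFrontOriginHeader"
  }
}
```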

## AWS WAF


[AWS WAF](https://aws.amazon.com/waf/) is a web application firewall that helps protect your web applications from web exploits such as common vulnerabilities and bots that could affect application availability, compromise security, or consume excessive resources. It can be integrated with an Amazon CloudFront distribution, an Amazon API Gateway REST API, an Application Load Balancer, an AWS AppSync GraphQL API, an Amazon Cognito user pool, and the AWS App Runner service.

AWS WAF uses [web access control lists](https://docs.aws.amazon.com/waf/latest/developerguide/web-acl.html) (ACLs) to protect a set of AWS resources. A web ACL is a set of [rules](https://docs.aws.amazon.com/waf/latest/developerguide/how-aws-waf-works-components.html) that defines the inspection criteria, and an associated action to take (block, allow, count, or run bot control) if a web request meets the criteria. AWS WAF provides a set of [managed rules](https://aws.amazon.com/marketplace/solutions/security/waf-managed-rules) that provides protection against common application vulnerabilities. These rules are curated and managed by AWS and AWS Partners. AWS WAF also offers a powerful rule language for authoring custom rules. You can use custom rules to write inspection criteria that fit your particular needs. Examples include IP restrictions, geographical restrictions, and customized versions of managed rules that better fit your specific application behavior.
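
As an illustration of a custom rule, the following JSON sketches a web ACL rule that blocks requests originating outside an allowed set of countries. The rule name, priority, and country codes are illustrative placeholders:

```json
{
  "Name": "BlockOutsideAllowedCountries",
  "Priority": 1,
  "Statement": {
    "NotStatement": {
      "Statement": {
        "GeoMatchStatement": { "CountryCodes": [ "US", "CA" ] }
      }
    }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlockOutsideAllowedCountries"
  }
}
```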

AWS WAF provides intelligent tier managed rule groups for common and targeted bots and for account takeover prevention (ATP). You are charged a subscription fee and a traffic inspection fee when you use the bot control and ATP rule groups, so we recommend that you monitor your traffic first and then decide what to use. You can use the bot management and account takeover dashboards that are available for free on the AWS WAF console to monitor these activities and then decide whether you need an intelligent tier AWS WAF rule group.

In the AWS SRA, AWS WAF is integrated with CloudFront in the Network account. In this configuration, AWS WAF rule processing happens at the edge locations instead of within the VPC. This enables filtering of malicious traffic closer to the end user who requested the content, and helps restrict malicious traffic from entering your core network. 

You can send full AWS WAF logs to an S3 bucket in the Log Archive account by configuring cross-account access to the S3 bucket. For more information, see the [AWS re:Post article](https://repost.aws/knowledge-center/waf-send-logs-centralized-account) on this topic.
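
Delivery of AWS WAF logs to an S3 bucket is performed by the log delivery service, so the destination bucket (whose name must begin with `aws-waf-logs-`) needs a resource policy similar to the following sketch. The bucket name and account ID are illustrative placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AWSLogDeliveryWrite",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::aws-waf-logs-example-bucket/AWSLogs/111122223333/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control",
          "aws:SourceAccount": "111122223333"
        }
      }
    },
    {
      "Sid": "AWSLogDeliveryAclCheck",
      "Effect": "Allow",
      "Principal": { "Service": "delivery.logs.amazonaws.com" },
      "Action": "s3:GetBucketAcl",
      "Resource": "arn:aws:s3:::aws-waf-logs-example-bucket",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "111122223333" }
      }
    }
  ]
}
```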

**Design considerations**  
As an alternative to deploying AWS WAF centrally in the Network account, some use cases are better met by deploying AWS WAF in the Application account. For example, you might choose this option when you deploy your CloudFront distributions in your Application account or have public-facing Application Load Balancers, or if you're using API Gateway in front of your web applications. If you decide to deploy AWS WAF in each Application account, use AWS Firewall Manager to manage the AWS WAF rules in these accounts from the centralized Security Tooling account.   
You can also add general AWS WAF rules at the CloudFront layer and additional application-specific AWS WAF rules at a Regional resource such as the Application Load Balancer or the API gateway.

## AWS Shield


[AWS Shield](https://aws.amazon.com/shield/) is a managed DDoS protection service that safeguards applications that run on AWS. There are two tiers of Shield: Shield Standard and Shield Advanced. Shield Standard provides all AWS customers with protection against the most common infrastructure (layer 3 and 4) events at no additional charge. Shield Advanced provides more sophisticated automatic mitigations for unauthorized events that target applications on protected Amazon EC2 instances, Elastic Load Balancing load balancers, CloudFront distributions, AWS Global Accelerator accelerators, and Route 53 hosted zones. If you own high-visibility websites or are prone to frequent DDoS attacks, consider the additional features that Shield Advanced provides.

You can use the [Shield Advanced automatic application layer DDoS mitigation feature](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-automatic-app-layer-response.html) to configure Shield Advanced to respond automatically to mitigate application layer (layer 7) attacks against your protected CloudFront distributions, Elastic Load Balancing load balancers (Application, Network, and Classic), Amazon Route 53 hosted zones, Amazon EC2 Elastic IP addresses, and AWS Global Accelerator standard accelerators. When you enable this feature, Shield Advanced automatically generates custom AWS WAF rules to mitigate DDoS attacks. Shield Advanced also gives you access to the [AWS Shield Response Team](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-srt-support.html) (SRT). You can contact the SRT at any time to create and manage custom mitigations for your application or during an active DDoS attack. If you want the SRT to proactively monitor your protected resources and contact you during a DDoS attempt, consider enabling the [proactive engagement](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-srt-proactive-engagement.html) feature.

**Design considerations**  
If you have any workloads that are fronted by internet-facing resources in the Application account, such as CloudFront, an Application Load Balancer, or a Network Load Balancer, configure Shield Advanced in the Application account and add those resources to Shield protection. You can use AWS Firewall Manager to configure these options at scale.
If you have multiple resources in the data flow, such as a CloudFront distribution in front of an Application Load Balancer, use only the entry-point resource as the protected resource. This will ensure that you are not paying [Shield Data Transfer Out (DTO) fees](https://aws.amazon.com/shield/pricing/) twice for two resources.
Shield Advanced records metrics that you can monitor in Amazon CloudWatch. (For more information, see [Monitoring with Amazon CloudWatch](https://docs.aws.amazon.com/waf/latest/developerguide/monitoring-cloudwatch.html#set-ddos-alarms) in the AWS documentation.) Set up CloudWatch alarms to send SNS notifications to your security center when a DDoS event is detected. In a suspected DDoS event, contact the [AWS Enterprise Support](https://aws.amazon.com/premiumsupport/plans/enterprise/) team by filing a support ticket and assigning it the highest priority. The Enterprise Support team will include the Shield Response Team (SRT) when handling the event. In addition, you can preconfigure the AWS Shield engagement Lambda function to create a support ticket and send an email to the SRT.
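
As a sketch, a CloudWatch alarm on the Shield Advanced `DDoSDetected` metric might look like the following CLI input (usable with `aws cloudwatch put-metric-alarm --cli-input-json`). The resource ARN and SNS topic are illustrative placeholders, and you should tune the period and evaluation settings to your environment:

```json
{
  "AlarmName": "ShieldDDoSDetected-cloudfront-distribution",
  "Namespace": "AWS/DDoSProtection",
  "MetricName": "DDoSDetected",
  "Dimensions": [
    { "Name": "ResourceArn", "Value": "arn:aws:cloudfront::111122223333:distribution/EXAMPLEDISTID" }
  ],
  "Statistic": "Maximum",
  "Period": 60,
  "EvaluationPeriods": 1,
  "Threshold": 1,
  "ComparisonOperator": "GreaterThanOrEqualToThreshold",
  "AlarmActions": [ "arn:aws:sns:us-east-1:111122223333:security-center-topic" ]
}
```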

## AWS Certificate Manager (ACM)


[AWS Certificate Manager](https://aws.amazon.com/certificate-manager/) (ACM) lets you provision, manage, and deploy public and private TLS certificates for use with AWS services and your internal connected resources. With ACM, you can quickly request a certificate, deploy it on ACM-integrated AWS resources such as Elastic Load Balancing load balancers, CloudFront distributions, and APIs on Amazon API Gateway, and let ACM handle certificate renewals. When you request ACM public certificates, there is no need to generate a key pair or a certificate signing request (CSR), submit a CSR to a certificate authority (CA), or upload and install the certificate when it is received. ACM also provides the option to import TLS certificates that are issued by third-party CAs and deploy them with ACM-integrated services. When you use ACM to manage certificates, certificate private keys are securely protected and stored by using strong encryption and key management best practices. With ACM, there is no additional charge for provisioning public certificates.

ACM is used in the Network account to generate a public TLS certificate, which, in turn, is used by CloudFront distributions to establish the HTTPS connection between viewers and CloudFront. For more information, see the [CloudFront documentation](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html).

**Design consideration**  
For externally facing certificates, the ACM certificate must reside in the same account as the resource for which it is provisioned, because certificates cannot be shared across accounts.

## Amazon Route 53


[Amazon Route 53](https://aws.amazon.com/route53/) is a highly available and scalable DNS web service. You can use Route 53 to perform three main functions: domain registration, DNS routing, and health checking.

You can use Route 53 as a DNS service to map domain names to your EC2 instances, S3 buckets, CloudFront distributions, and other AWS resources. The distributed nature of the AWS DNS servers helps ensure that your end users are routed to your application consistently. Features such as Route 53 traffic flow and routing control help you improve reliability. If your primary application endpoint becomes unavailable, you can configure your failover to reroute your users to an alternate location. Route 53 Resolver provides recursive DNS for your VPC and on-premises networks over AWS Direct Connect or AWS managed VPN.

By using the IAM service with Route 53, you get fine-grained control over who can update your DNS data. You can enable DNS Security Extensions (DNSSEC) signing to let DNS resolvers validate that a DNS response came from Route 53 and has not been tampered with.

[Route 53 Resolver DNS Firewall](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-dns-firewall.html) provides protection for outbound DNS requests from your VPCs. These requests go through Route 53 Resolver for domain name resolution. A primary use of DNS Firewall protections is to help prevent DNS exfiltration of your data. With DNS Firewall, you can monitor and control the domains that your applications can query. You can deny access to the domains that you know are bad, and allow all other queries to pass through. Alternatively, you can deny access to all domains except for the ones that you explicitly trust. You can also use DNS Firewall to block resolution requests to resources in private hosted zones (shared or local), including VPC endpoint names. It can also block requests for public or private EC2 instance names.
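
For example, you can attach a rule to a DNS Firewall rule group that blocks the domains in a custom domain list and returns an override response instead of the real answer. The following CLI input for `aws route53resolver create-firewall-rule` is a sketch; the rule group ID, domain list ID, and override domain are illustrative placeholders:

```json
{
  "Name": "block-known-exfil-domains",
  "FirewallRuleGroupId": "rslvr-frg-examplegroup1",
  "FirewallDomainListId": "rslvr-fdl-exampledomains1",
  "Priority": 100,
  "Action": "BLOCK",
  "BlockResponse": "OVERRIDE",
  "BlockOverrideDomain": "blocked.example.com",
  "BlockOverrideDnsType": "CNAME",
  "BlockOverrideTtl": 3600
}
```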

A Route 53 Resolver is created by default as part of every VPC. In the AWS SRA, Route 53 is used in the Network account primarily for its DNS Firewall capability.

**Design consideration**  
DNS Firewall and AWS Network Firewall both offer domain name filtering, but for different types of traffic. You can use DNS Firewall and Network Firewall together to configure domain-based filtering for application-layer traffic over two different network paths:  
DNS Firewall provides filtering for outbound DNS queries that pass through the Route 53 Resolver from applications within your VPCs. You can also configure DNS Firewall to send custom responses for queries to blocked domain names.
Network Firewall provides filtering for both network-layer and application-layer traffic, but does not have visibility into queries made by Route 53 Resolver.

# Infrastructure OU – Shared Services account




The following diagram illustrates the AWS security services that are configured in the Shared Services account.

![\[Security services for Shared Services account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/shared-services-acct.png)


The Shared Services account is part of the Infrastructure OU, and its purpose is to support the services that multiple applications and teams use to deliver their outcomes. For example, directory services (Active Directory), messaging services, and metadata services are in this category. The AWS SRA highlights the shared services that support security controls. Although the Network accounts are also part of the Infrastructure OU, they are kept separate from the Shared Services account to support the separation of duties. The teams that manage these shared services don't need permissions or access to the Network accounts.

## AWS Systems Manager


[AWS Systems Manager](https://aws.amazon.com/systems-manager/) (which is also included in the Org Management account and in the Application account) provides a collection of capabilities that enable visibility and control of your AWS resources. One of these capabilities, Systems Manager Explorer, is a customizable operations dashboard that reports information about your AWS resources. You can synchronize operations data across all accounts in your AWS organization by using AWS Organizations and Systems Manager Explorer. Systems Manager is deployed in the Shared Services account through the delegated administrator functionality in AWS Organizations.

Systems Manager helps you work to maintain security and compliance by scanning your managed instances and reporting (or taking corrective action) on any policy violations it detects. By pairing Systems Manager with appropriate deployments in individual member AWS accounts (for example, the Application account), you can coordinate instance inventory data collection and centralize automation such as patching and security updates.

## AWS Managed Microsoft AD


[AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html), also known as AWS Managed Microsoft AD, enables your directory-aware workloads and AWS resources to use managed Active Directory on AWS. You can use AWS Managed Microsoft AD to join [Amazon EC2 for Windows Server](https://aws.amazon.com/windows/), [Amazon EC2 for Linux](https://aws.amazon.com/mp/linux/), and [Amazon RDS for SQL Server](https://aws.amazon.com/rds/sqlserver/) instances to your domain, and use [AWS end user computing (EUC)](https://aws.amazon.com/products/end-user-computing/) services, such as [Amazon WorkSpaces](https://aws.amazon.com/workspaces/), with Active Directory users and groups.

AWS Managed Microsoft AD helps you extend your existing Active Directory to AWS and use your existing on-premises user credentials to access cloud resources. You can also administer your on-premises users, groups, applications, and systems without the complexity of running and maintaining an on-premises, highly available Active Directory. You can join your existing computers, laptops, and printers to an AWS Managed Microsoft AD domain.

AWS Managed Microsoft AD is built on Microsoft Active Directory and doesn't require you to synchronize or replicate data from your existing Active Directory to the cloud. You can use familiar Active Directory administration tools and features, such as Group Policy Objects (GPOs), domain trusts, fine-grained password policies, group Managed Service Accounts (gMSAs), schema extensions, and Kerberos-based single sign-on. You can also delegate administrative tasks and authorize access using Active Directory security groups.

Multi-Region replication enables you to deploy and use a single AWS Managed Microsoft AD directory across multiple AWS Regions. This makes it easier and more cost-effective for you to deploy and manage your Microsoft Windows and Linux workloads globally. When you use the automated multi-Region replication capability, you get higher resiliency while your applications use a local directory for optimal performance.

AWS Managed Microsoft AD supports Lightweight Directory Access Protocol (LDAP) over SSL/TLS, also known as LDAPS, in both client and server roles. When acting as a server, AWS Managed Microsoft AD supports LDAPS over ports 636 (SSL) and 389 (TLS). You enable server-side LDAPS communications by installing a certificate on your AWS Managed Microsoft AD domain controllers from an AWS-based Active Directory Certificate Services (AD CS) certificate authority (CA). When acting as a client, AWS Managed Microsoft AD supports LDAPS over port 636 (SSL). You can enable client-side LDAPS communications by registering CA certificates from your server certificate issuers into AWS, and then enabling LDAPS on your directory.

In the AWS SRA, Directory Service is used within the Shared Services account to provide domain services for Microsoft-aware workloads across multiple AWS member accounts.

**Design consideration**  
You can grant your on-premises Active Directory users access to sign in to the AWS Management Console and AWS Command Line Interface (AWS CLI) with their existing Active Directory credentials by using IAM Identity Center and selecting AWS Managed Microsoft AD as the identity source. This enables your users to assume one of their assigned roles at sign-in, and to access and take action on the resources according to the permissions defined for the role. An alternative option is to use AWS Managed Microsoft AD to enable your users to assume an IAM role.

## IAM Identity Center


The AWS SRA uses the delegated administrator feature supported by AWS IAM Identity Center to delegate most of the administration of IAM Identity Center to the Shared Services account. This helps restrict the number of users who require access to the Org Management account. IAM Identity Center still needs to be enabled in the Org Management account to perform certain tasks, including the management of permission sets that are provisioned within the Org Management account.

The primary reason for using the Shared Services account as the delegated administrator for IAM Identity Center is the Active Directory location. If you plan to use Active Directory as your IAM Identity Center identity source, you will need to locate the directory in the member account that you have designated as your IAM Identity Center delegated administrator account. In the AWS SRA, the Shared Services account hosts AWS Managed Microsoft AD, so that account is made the delegated administrator for IAM Identity Center.

IAM Identity Center supports the registration of a single member account as a delegated administrator at one time. You can register a member account only when you sign in with credentials from the management account. To enable delegation, you have to consider the prerequisites listed in the [IAM Identity Center documentation](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html#delegated-admin-prereqs). The delegated administrator account can perform most IAM Identity Center management tasks, but with some restrictions, which are listed in the [IAM Identity Center documentation](https://docs.aws.amazon.com/singlesignon/latest/userguide/delegated-admin.html#delegated-admin-tasks-member-account). Access to the delegated administrator account for IAM Identity Center should be tightly controlled.

**Design considerations**  
If you decide to change the IAM Identity Center identity source from any other source to Active Directory, or change it from Active Directory to any other source, the directory must reside in (be owned by) the IAM Identity Center delegated administrator member account, if one exists; otherwise, it must be in the management account.
You can host your AWS Managed Microsoft AD within a dedicated VPC in a different account and then use [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) to share subnets from this other account to the delegated administrator account. That way, the AWS Managed Microsoft AD instance is controlled in the delegated administrator account, but from the network perspective it acts as if it is deployed in the VPC of another account. This is helpful when you have multiple AWS Managed Microsoft AD instances and you want to deploy them locally to where your workload is running but manage them centrally through one account.
If you have a dedicated identity team that performs regular identity and access management activities, or if you have strict security requirements to separate identity management functions from other shared services functions, you can host a dedicated AWS account for identity management. In this scenario, you designate this account as your delegated administrator for IAM Identity Center, and it also hosts your AWS Managed Microsoft AD directory. Alternatively, you can achieve the same level of logical isolation between your identity management workloads and other shared services workloads by using fine-grained IAM permissions within a single Shared Services account.
IAM Identity Center currently doesn't provide [multi-Region support](https://docs.aws.amazon.com/singlesignon/latest/userguide/regions.html#region-data). (To enable IAM Identity Center in a different Region, you must first delete your current IAM Identity Center configuration.) Furthermore, it doesn't support the use of different identity sources for different sets of accounts, and it doesn't let you delegate permissions management to different parts of your organization (that is, multiple delegated administrators) or to different groups of administrators. If you require any of these features, you can use [IAM federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) to manage your user identities within an identity provider (IdP) outside of AWS and give these external user identities permission to use AWS resources in your account. IAM supports IdPs that are compatible with [OpenID Connect (OIDC)](https://openid.net/connect/) or SAML 2.0. As a best practice, use SAML 2.0 federation with third-party identity providers such as Active Directory Federation Service (AD FS), Okta, Azure Active Directory (Azure AD), or Ping Identity to provide single sign-on capability for users to sign in to the AWS Management Console or to call AWS API operations. For more information about IAM federation and identity providers, see [About SAML 2.0-based federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html) in the IAM documentation.

# Workloads OU – Application account




The following diagram illustrates the AWS security services that are configured in the Application account (along with the application itself).

![\[Security services for Application account.\]](http://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/images/application-acct.png)


The Application account hosts the primary infrastructure and services to run and maintain an enterprise application. The Application account and Workloads OU serve a few primary security objectives. First, you create a separate account for each application to provide boundaries and controls between workloads so that you can avoid issues of commingling roles, permissions, data, and encryption keys. You want to provide a separate account container where the application team can be given broad rights to manage their own infrastructure without affecting others. Next, you add a layer of protection by providing a mechanism for the security operations team to monitor and collect security data. Employ an organization trail and local deployments of account security services (Amazon GuardDuty, AWS Config, AWS Security Hub CSPM, Amazon EventBridge, IAM Access Analyzer), which are configured and monitored by the security team. Finally, you enable your enterprise to set controls centrally. You align the Application account with the broader security structure by making it a member of the Workloads OU, through which it inherits appropriate service permissions, constraints, and guardrails.

**Design consideration**  
In your organization you are likely to have more than one business application. The Workloads OU is intended to house most of your business-specific workloads, including both production and non-production environments. These workloads can be a mix of commercial off-the-shelf (COTS) applications and your own internally developed custom applications and data services. There are a few patterns for organizing different business applications along with their development environments. One pattern is to have multiple child OUs based on your development environment, such as production, staging, test, and development, and to use separate child AWS accounts under those OUs that pertain to different applications. Another common pattern is to have separate child OUs per application and then use separate child AWS accounts for individual development environments. The exact OU and account structure depends on your application design and the teams that manage those applications. Consider the security controls that you want to enforce, whether they are environment-specific or application-specific, because it is easier to implement those controls as SCPs on OUs. For further considerations on organizing workload-oriented OUs, see the [Application OUs](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/application-ous.html) section of the AWS whitepaper *Organizing your AWS environment using multiple accounts*.

## Application VPC


The virtual private cloud (VPC) in the Application account needs both inbound access (for the simple web services that you are modeling) and outbound access (for application needs or AWS service needs). By default, resources inside a VPC are routable to one another. There are two private subnets: one to host the EC2 instances (application layer) and the other for Amazon Aurora (database layer). Network segmentation between different tiers, such as the application tier and database tier, is accomplished through VPC security groups, which restrict traffic at the instance level. For resiliency, the workload spans two or more Availability Zones and utilizes two subnets per zone.
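
The tier-to-tier segmentation described above can be sketched as a security group rule in which the database tier admits traffic only from the application tier's security group, not from a CIDR range. This is a minimal illustration; the group IDs are hypothetical placeholders, and the resulting dict matches the `IpPermissions` shape that the EC2 `AuthorizeSecurityGroupIngress` API expects.

```python
# Sketch: security-group rule for tier segmentation (illustrative only).
# Group IDs are hypothetical placeholders; in practice you would create
# the groups first and use the real IDs returned by the EC2 API.

APP_SG = "sg-0aaa111app"   # application-tier security group (placeholder)
DB_SG = "sg-0bbb222db"     # database-tier security group (placeholder)

def db_ingress_rule(app_sg_id: str, port: int = 3306) -> dict:
    """Build an ingress permission that admits only the app tier on the DB port."""
    return {
        "IpProtocol": "tcp",
        "FromPort": port,
        "ToPort": port,
        # Referencing the app tier's security group (rather than a CIDR)
        # means only instances in that group can reach the database.
        "UserIdGroupPairs": [{"GroupId": app_sg_id}],
    }

rule = db_ingress_rule(APP_SG)
# In practice: ec2.authorize_security_group_ingress(GroupId=DB_SG,
#                                                   IpPermissions=[rule])
```

Because the rule references a security group instead of IP addresses, instances can be added to or removed from the application tier without any change to the database tier's rules.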

**Design consideration**  
You can use [Traffic Mirroring](https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html) to copy network traffic from an elastic network interface of EC2 instances. You can then send the traffic to out-of-band security and monitoring appliances for content inspection, threat monitoring, or troubleshooting. For example, you might want to monitor the traffic that is leaving your VPC or the traffic whose source is outside your VPC. In this case, you will mirror all traffic except for the traffic passing within your VPC and send it to a single monitoring appliance. Amazon VPC flow logs do not capture mirrored traffic; they generally capture information from packet headers only. Traffic Mirroring provides deeper insight into the network traffic by allowing you to analyze actual traffic content, including payload. Enable Traffic Mirroring only for the elastic network interface of EC2 instances that might be operating as part of sensitive workloads or for which you expect to need detailed diagnostics in the event of an issue.

## VPC endpoints


[VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/concepts.html#concepts-service-consumers) provide another layer of security control as well as scalability and reliability. Use these to connect your application VPC to other AWS services. (In the Application account, the AWS SRA employs VPC endpoints for AWS KMS, AWS Systems Manager, and Amazon S3.) Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components. They allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic. You can use a VPC endpoint to privately connect your VPC to supported AWS services and VPC endpoint services powered by AWS PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with other AWS services. Traffic between your VPC and the other AWS service does not leave the Amazon network.

Another benefit of using VPC endpoints is to enable the configuration of endpoint policies. A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint. If you do not attach an IAM policy when you create an endpoint, AWS attaches a default IAM policy for you that allows full access to the service. An endpoint policy does not override or replace IAM user policies or service-specific policies (such as S3 bucket policies). It is a separate IAM policy for controlling access from the endpoint to the specified service. In this way, it adds another layer of control over which AWS principals can communicate with resources or services.
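
As a hedged illustration of an endpoint policy, the following sketch builds a policy document for an S3 gateway endpoint that allows access only to a single application bucket through the endpoint. The bucket name is a hypothetical placeholder; IAM policies and bucket policies still apply on top of this layer.

```python
import json

# Sketch: a VPC endpoint policy for an S3 gateway endpoint (illustrative).
# The bucket name is hypothetical; adjust to your environment.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAppBucketOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            # Only these ARNs are reachable through the endpoint; other
            # buckets are unreachable via this network path regardless of
            # what the caller's IAM policy allows.
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }
    ],
}

policy_document = json.dumps(endpoint_policy)
```

The serialized document is what you would pass as the `PolicyDocument` when creating or modifying the endpoint.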

## Amazon EC2


The [Amazon EC2](https://aws.amazon.com/ec2/) instances that compose our application make use of version 2 of the Instance Metadata Service (IMDSv2). IMDSv2 adds protections against four types of vulnerabilities that could be used to try to access the IMDS: open website application firewalls, open reverse proxies, server-side request forgery (SSRF) vulnerabilities, and open layer 3 firewalls and NATs. For more information, see the blog post [Add defense in depth against open firewalls, reverse proxies, and SSRF vulnerabilities with enhancements to the EC2 Instance Metadata Service](https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/).
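
To make the protection concrete, the sketch below builds the two HTTP requests of the IMDSv2 session flow without sending them (the IMDS endpoint is reachable only from an EC2 instance). The session-oriented design is what defeats those vulnerability classes: the initial PUT with a required TTL header is something open proxies and layer 3 firewalls typically cannot forward.

```python
# Sketch: the IMDSv2 session-token flow. This only builds the requests;
# on an instance you would send them with any HTTP client.
IMDS_BASE = "http://169.254.169.254/latest"

def build_token_request(ttl_seconds: int = 21600) -> dict:
    # Step 1: a PUT obtains a session token. Requiring a PUT plus a
    # custom TTL header is the defense against open proxies and SSRF,
    # which usually can only issue plain GETs.
    return {
        "method": "PUT",
        "url": f"{IMDS_BASE}/api/token",
        "headers": {"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    }

def build_metadata_request(token: str, path: str = "meta-data/instance-id") -> dict:
    # Step 2: every subsequent metadata GET must present the token.
    return {
        "method": "GET",
        "url": f"{IMDS_BASE}/{path}",
        "headers": {"X-aws-ec2-metadata-token": token},
    }
```

You can require IMDSv2 on an instance (rejecting tokenless IMDSv1 calls) by setting the instance metadata option `HttpTokens` to `required` through the `ModifyInstanceMetadataOptions` API.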

Use separate VPCs (as a subset of account boundaries) to isolate infrastructure by workload segments. Use subnets to isolate the tiers of your application (for example, web, application, and database) within a single VPC. Use private subnets for your instances if they should not be accessed directly from the internet. To call the Amazon EC2 API from your private subnet without using an internet gateway, use AWS PrivateLink. Restrict access to your instances by using [security groups](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html). Use [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) to monitor the traffic that reaches your instances. Use [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html), a capability of AWS Systems Manager, to access your instances remotely instead of opening inbound SSH ports and managing SSH keys. Use separate Amazon Elastic Block Store (Amazon EBS) volumes for the operating system and your data. You can [configure your AWS account](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) to enforce the encryption of the new EBS volumes and snapshot copies that you create.

**Implementation example**  
The [AWS SRA code library](https://github.com/aws-samples/aws-security-reference-architecture-examples) provides a sample implementation of [default Amazon EBS encryption in Amazon EC2](https://github.com/aws-samples/aws-security-reference-architecture-examples/blob/main/aws_sra_examples/solutions/ec2/ec2_default_ebs_encryption). It demonstrates how you can enable the account-level default Amazon EBS encryption within each AWS account and AWS Region in the AWS organization.
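
A minimal sketch of the same idea follows, with the EC2 client injected so the per-Region loop is visible; in practice you would pass clients created with `boto3.client("ec2", region_name=...)` for each Region in your organization. This is an illustration of the control, not the SRA code library's implementation.

```python
# Sketch: enable account-level default EBS encryption in every Region.
# The clients are injected (hypothetical helper); each must support the
# EC2 EnableEbsEncryptionByDefault operation.

def enable_default_ebs_encryption(ec2_clients_by_region: dict) -> dict:
    """Call EnableEbsEncryptionByDefault per Region and return the status map."""
    results = {}
    for region, client in ec2_clients_by_region.items():
        # After this call, new EBS volumes and snapshot copies created in
        # the Region are encrypted by default with the Region's default key.
        response = client.enable_ebs_encryption_by_default()
        results[region] = response.get("EbsEncryptionByDefault", False)
    return results
```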

## AWS Nitro Enclaves


[AWS Nitro Enclaves](https://aws.amazon.com/ec2/nitro/nitro-enclaves/) is an Amazon EC2 feature that allows you to create isolated execution environments, called *enclaves*, from EC2 instances. Enclaves are separate, hardened, and highly constrained virtual machines. The CPU and memory of a single parent EC2 instance is partitioned into isolated enclaves. Each enclave runs an independent kernel. Enclaves provide only secure local socket connectivity with their parent instance. They have no persistent storage, interactive access, or external networking. Users cannot SSH into an enclave, and the data and applications inside the enclave cannot be accessed by the processes, applications, or users (root or administrator) of the parent instance. You can secure your most sensitive data, such as personally identifiable information (PII), healthcare, financial, and intellectual property data, within EC2 instances. Nitro Enclaves enables you to focus on your application instead of worrying about integration with external services. Nitro Enclaves includes cryptographic attestation for your software so that you can be sure that only authorized code is running, and integration with the AWS KMS so that only your enclaves can access sensitive material. This helps reduce the attack surface area for your most sensitive data processing applications. There is no additional cost of using Nitro Enclaves.

[Cryptographic attestation](https://docs.aws.amazon.com/enclaves/latest/user/set-up-attestation.html) is a process used to prove the identity of an enclave. The attestation process is accomplished through the Nitro Hypervisor, which produces a signed attestation document for the enclave to prove its identity to another third party or service. Attestation documents contain key details of the enclave such as the enclave's public key, hashes of the enclave image and applications, and more.
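
The KMS integration mentioned above works through key-policy conditions on attestation measurements. As a hedged sketch (the role ARN and PCR0 hash are placeholders), a statement like the following allows `kms:Decrypt` only when the request carries an attestation document from a specific enclave image, using the documented `kms:RecipientAttestation:ImageSha384` condition key:

```python
# Sketch: a KMS key-policy statement scoped to an attested enclave image.
# The principal ARN and the PCR0 hash value are hypothetical placeholders.
enclave_decrypt_statement = {
    "Sid": "AllowDecryptFromAttestedEnclaveOnly",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:role/example-app-role"},
    "Action": "kms:Decrypt",
    "Resource": "*",
    "Condition": {
        "StringEqualsIgnoreCase": {
            # PCR0 is the hash of the enclave image; KMS validates it
            # against the signed attestation document before releasing
            # plaintext, so the parent instance alone cannot decrypt.
            "kms:RecipientAttestation:ImageSha384": "<enclave-image-pcr0-hash>"
        }
    },
}
```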

With AWS Certificate Manager (ACM) for Nitro Enclaves, you can use public and private SSL/TLS certificates with your web applications and web servers running on EC2 instances with Nitro Enclaves. SSL/TLS certificates are used to secure network communications and to establish the identity of websites over the internet and resources on private networks. ACM for Nitro Enclaves removes the time-consuming and error-prone manual process of purchasing, uploading, and renewing SSL/TLS certificates. ACM for Nitro Enclaves creates secure private keys, distributes the certificate and its private key to your enclave, and manages certificate renewals. With ACM for Nitro Enclaves, the certificate's private key remains isolated in the enclave, which prevents the instance and its users from accessing it. For more information, see [AWS Certificate Manager for Nitro Enclaves](https://docs.aws.amazon.com/enclaves/latest/user/nitro-enclave-refapp.html) in the Nitro Enclaves documentation.

## Application Load Balancers


[Application Load Balancers](https://aws.amazon.com/elasticloadbalancing/application-load-balancer/) distribute incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. In the AWS SRA, the target group for the load balancer consists of the application EC2 instances. The AWS SRA uses HTTPS listeners to ensure that the communication channel is encrypted. The Application Load Balancer uses a server certificate to terminate the front-end connection and then decrypt requests from clients before sending them to the targets.

AWS Certificate Manager (ACM) natively integrates with Application Load Balancers, and the AWS SRA uses ACM to generate and manage the necessary X.509 (TLS server) public certificates. You can enforce TLS 1.2 and strong ciphers for front-end connections through the Application Load Balancer security policy. For more information, see the [Elastic Load Balancing documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html).
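
The pieces come together in the listener configuration. The sketch below builds the parameter shape for the Elastic Load Balancing `CreateListener` API with a predefined security policy that permits only TLS 1.2 and 1.3 (`ELBSecurityPolicy-TLS13-1-2-2021-06` is one of the predefined policy names); the ARNs are placeholders for your load balancer, ACM certificate, and target group.

```python
# Sketch: HTTPS listener parameters that enforce TLS 1.2+ (illustrative).
# All ARNs are hypothetical placeholders.

def https_listener_params(lb_arn: str, cert_arn: str, tg_arn: str) -> dict:
    return {
        "LoadBalancerArn": lb_arn,
        "Protocol": "HTTPS",
        "Port": 443,
        # Predefined policy allowing only TLS 1.2/1.3 with strong ciphers.
        "SslPolicy": "ELBSecurityPolicy-TLS13-1-2-2021-06",
        # ACM-managed server certificate used to terminate TLS.
        "Certificates": [{"CertificateArn": cert_arn}],
        "DefaultActions": [{"Type": "forward", "TargetGroupArn": tg_arn}],
    }
```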

**Design considerations**  
For common scenarios such as strictly internal applications that require a private TLS certificate on the Application Load Balancer, you can use ACM within this account to generate a private certificate from AWS Private CA. In the AWS SRA, the ACM root private CA is hosted in the Security Tooling account and can be shared with the whole AWS organization or with specific AWS accounts to issue end-entity certificates, as described earlier in the [Security Tooling account](security-tooling.md#tool-acm) section.
For public certificates, you can use ACM to generate those certificates and manage them, including automated rotation. Alternatively, you can generate your own certificates by using SSL/TLS tools to create a certificate signing request (CSR), get the CSR signed by a certificate authority (CA) to produce a certificate, and then import the certificate into ACM or upload the certificate to IAM for use with the Application Load Balancer. If you import a certificate into ACM, you must monitor the expiration date of the certificate and renew it before it expires.
For additional layers of defense, you can deploy AWS WAF policies to protect the Application Load Balancer. Having edge policies, application policies, and even private or internal policy enforcement layers adds to the visibility of communication requests and provides unified policy enforcement. For more information, see the blog post [Deploying defense in depth using AWS Managed Rules for AWS WAF](https://aws.amazon.com/blogs/security/deploying-defense-in-depth-using-aws-managed-rules-for-aws-waf-part-2/).

## AWS Private CA


[AWS Private Certificate Authority](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) (AWS Private CA) is used in the Application account to generate private certificates to be used with an Application Load Balancer. It is a common scenario for Application Load Balancers to serve secure content over TLS. This requires TLS certificates to be installed on the Application Load Balancer. For applications that are strictly internal, private TLS certificates can provide the secure channel.

In the AWS SRA, AWS Private CA is hosted in the Security Tooling account and is shared out to the Application account by using AWS RAM. This allows developers in an Application account to request a certificate from a shared private CA. Sharing CAs across your organization or across AWS accounts helps reduce the cost and complexity of creating and managing duplicate CAs in all your AWS accounts. When you use ACM to issue private certificates from a shared CA, the certificate is generated locally in the requesting account, and ACM provides full lifecycle management and renewal.

## Amazon Inspector


The AWS SRA uses [Amazon Inspector](https://aws.amazon.com/inspector/) to automatically discover and scan EC2 instances and container images that reside in the Amazon Elastic Container Registry (Amazon ECR) for software vulnerabilities and unintended network exposure.

Amazon Inspector is placed in the Application account, because it provides vulnerability management services to EC2 instances in this account. Additionally, Amazon Inspector reports on [unwanted network paths](https://docs.aws.amazon.com/inspector/latest/user/findings-types.html#findings-types-network) to and from EC2 instances.

Amazon Inspector in member accounts is centrally managed by the delegated administrator account. In the AWS SRA, the Security Tooling account is the delegated administrator account. The delegated administrator account can manage findings data and certain settings for members of the organization. This includes viewing aggregated findings details for all member accounts, enabling or disabling scans for member accounts, and reviewing scanned resources within the AWS organization.

**Design consideration**  
You can use [Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html), a capability of AWS Systems Manager, to trigger on-demand patching to remediate Amazon Inspector zero-day or other critical security vulnerabilities. Patch Manager helps you patch those vulnerabilities without having to wait for your normal patching schedule. The remediation is carried out by using the Systems Manager Automation runbook. For more information, see the two-part blog series [Automate vulnerability management and remediation in AWS using Amazon Inspector and AWS Systems Manager](https://aws.amazon.com/blogs/mt/automate-vulnerability-management-and-remediation-in-aws-using-amazon-inspector-and-aws-systems-manager-part-1/).

## AWS Systems Manager


[AWS Systems Manager](https://aws.amazon.com/systems-manager/) is an AWS service that you can use to view operational data from multiple AWS services and automate operational tasks across your AWS resources. With automated approval workflows and runbooks, you can work to reduce human error and simplify maintenance and deployment tasks on AWS resources.

In addition to these general automation capabilities, Systems Manager supports a number of preventive, detective, and responsive security features. [AWS Systems Manager Agent](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent.html) (SSM Agent) is Amazon software that can be installed and configured on an EC2 instance, an on-premises server, or a virtual machine (VM). SSM Agent makes it possible for Systems Manager to update, manage, and configure these resources. Systems Manager helps you maintain security and compliance by scanning these managed instances and reporting (or taking corrective action) on any violations it detects in your patch, configuration, and custom policies.

The AWS SRA uses [Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html), a capability of Systems Manager, to provide an interactive, browser-based shell and CLI experience. This provides secure and auditable instance management without the need to open inbound ports, maintain bastion hosts, or manage SSH keys. The AWS SRA uses [Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager.html), a capability of Systems Manager, to apply patches to EC2 instances for both operating systems and applications.

The AWS SRA also uses [Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html), a capability of Systems Manager, to simplify common maintenance and deployment tasks of Amazon EC2 instances and other AWS resources. Automation can simplify common IT tasks such as changing the state of one or more nodes (using an approval automation) and managing node states according to a schedule. Systems Manager includes features that help you target large groups of instances by using tags, and velocity controls that help you roll out changes according to the limits you define. Automation offers one-click automations for simplifying complex tasks such as creating golden Amazon Machine Images (AMIs) and recovering unreachable EC2 instances. Additionally, you can enhance operational security by giving IAM roles access to specific runbooks to perform certain functions, without directly giving permissions to those roles. For example, if you want an IAM role to have permissions to restart specific EC2 instances after patch updates, but you don't want to grant the permission directly to that role, you can instead create an Automation runbook and give the role permissions to only run the runbook.

**Design considerations**  
Systems Manager relies on EC2 instance metadata to function correctly. Systems Manager can access instance metadata by using either version 1 or version 2 of the Instance Metadata Service (IMDSv1 and IMDSv2).
SSM Agent has to communicate with different AWS services and resources such as Amazon EC2 messages, Systems Manager, and Amazon S3. For this communication to happen, the subnet requires either outbound internet connectivity or provisioning of appropriate VPC endpoints. The AWS SRA uses VPC endpoints for the SSM Agent to establish private network paths to various AWS services.
Using Automation, you can share best practices with the rest of your organization. You can create best practices for resource management in runbooks and share the runbooks across AWS Regions and groups. You can also constrain the allowed values for runbook parameters. For these use cases, you might have to create Automation runbooks in a central account such as Security Tooling or Shared Services and share them with the rest of the AWS organization. Common use cases include the capability to centrally implement patching and security updates, remediate drift on VPC configurations or S3 bucket policies, and manage EC2 instances at scale. For implementation details, see the [Systems Manager documentation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation-multiple-accounts-and-regions.html).

## Amazon Aurora


In the AWS SRA, [Amazon Aurora](https://aws.amazon.com/rds/aurora/) and [Amazon S3](https://aws.amazon.com/s3/) make up the logical data tier. Aurora is a fully managed relational database engine that's compatible with MySQL and PostgreSQL. An application that is running on the EC2 instances communicates with Aurora and Amazon S3 as needed. Aurora is configured with a database cluster inside a DB subnet group.

**Design consideration**  
As in many database services, security for Aurora is managed at three levels. To control who can perform Amazon Relational Database Service (Amazon RDS) management actions on Aurora DB clusters and DB instances, you use IAM. To control which devices and EC2 instances can open connections to the cluster endpoint and port of the DB instance for Aurora DB clusters in a VPC, you use a VPC security group. To authenticate logins and permissions for an Aurora DB cluster, you can take the same approach as with a stand-alone DB instance of MySQL or PostgreSQL, or you can use IAM database authentication for Aurora MySQL-Compatible Edition. With this latter approach, you authenticate to your Aurora MySQL-Compatible DB cluster by using an IAM role and an authentication token.
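
The IAM database authentication approach can be sketched as follows. Instead of a stored password, the application requests a short-lived token (valid for 15 minutes) and presents it as the password over an SSL connection. The RDS client is injected (in practice `boto3.client("rds")`, whose `generate_db_auth_token` method this relies on); the host and user names are hypothetical.

```python
# Sketch: obtaining an IAM auth token for an Aurora MySQL-Compatible cluster.
# The client is injected; host and user are hypothetical placeholders.

def aurora_auth_token(rds_client, host: str, user: str, port: int = 3306) -> str:
    # The token is a presigned string, valid for 15 minutes; use it as the
    # password when opening the database connection over SSL. The calling
    # IAM role must allow rds-db:connect for this DB user.
    return rds_client.generate_db_auth_token(
        DBHostname=host, Port=port, DBUsername=user
    )
```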

## Amazon S3


[Amazon S3](https://aws.amazon.com/s3/) is an object storage service that offers industry-leading scalability, data availability, security, and performance. It is the data backbone of many applications built on AWS, and appropriate permissions and security controls are critical for protecting sensitive data. For recommended security best practices for Amazon S3, see the [documentation](https://docs.aws.amazon.com/AmazonS3/latest/dev/security-best-practices.html), [online tech talks](https://www.youtube.com/watch?v=7M3s_ix9ljE), and deeper dives in [blog posts](https://aws.amazon.com/blogs/storage/protect-amazon-s3-buckets-using-access-analyzer-for-s3/). The most important best practice is to block overly permissive access (especially public access) to S3 buckets.
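
As an illustration of that best practice, the following sketch shows the account-level block public access configuration, the shape passed to the S3 `PutPublicAccessBlock` API. Enabling all four settings is the recommended default unless a specific bucket is deliberately public.

```python
# Sketch: account-level S3 Block Public Access configuration (the
# PublicAccessBlockConfiguration shape used by PutPublicAccessBlock).
PUBLIC_ACCESS_BLOCK = {
    "BlockPublicAcls": True,        # reject new public ACLs on buckets/objects
    "IgnorePublicAcls": True,       # ignore any existing public ACLs
    "BlockPublicPolicy": True,      # reject bucket policies that grant public access
    "RestrictPublicBuckets": True,  # restrict access to buckets with public policies
}
# In practice: s3control.put_public_access_block(
#     AccountId="111122223333",
#     PublicAccessBlockConfiguration=PUBLIC_ACCESS_BLOCK)
```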

## AWS KMS


The AWS SRA illustrates the recommended distribution model for key management, where the AWS KMS key resides within the same AWS account as the resource to be encrypted. For this reason, AWS KMS is used in the Application account in addition to being included in the Security Tooling account. In the Application account, AWS KMS is used to manage keys that are specific to the application resources. You can implement a separation of duties by using [key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) to grant key usage permissions to local application roles and to restrict management and monitoring permissions to your key custodians. 
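
The separation of duties described above can be sketched as a key policy with two statements: one granting administrative actions to a custodian role and one granting cryptographic use to the application role, with neither role holding both sets of permissions. The account ID and role names are hypothetical placeholders.

```python
# Sketch: a KMS key policy separating key administration from key usage.
# Account ID and role names are hypothetical placeholders.
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "KeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/key-custodian"},
            # Custodians manage and monitor the key but cannot use it
            # for cryptographic operations.
            "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                       "kms:Put*", "kms:Disable*", "kms:ScheduleKeyDeletion"],
            "Resource": "*",
        },
        {
            "Sid": "KeyUsage",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/app-role"},
            # The application role can use the key but not administer it.
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}
```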

**Design consideration**  
In a distributed model, the AWS KMS key management responsibility resides with the application team. However, your central security team can be responsible for the governance and [monitoring](https://docs.aws.amazon.com/kms/latest/developerguide/monitoring-cloudwatch.html) of important cryptographic events such as the following:  
- The imported key material in a KMS key is nearing its expiration date.
- The key material in a KMS key was automatically rotated.
- A KMS key was deleted.
- There is a high rate of decryption failures.

## AWS CloudHSM


[AWS CloudHSM](https://aws.amazon.com/cloudhsm/) provides managed hardware security modules (HSMs) in the AWS Cloud. It enables you to generate and use your own encryption keys on AWS by using FIPS 140-2 level 3 validated HSMs that you control access to. You can use AWS CloudHSM to offload SSL/TLS processing for your web servers. This reduces the burden on the web server and provides extra security by storing the web server's private key in AWS CloudHSM. You could similarly deploy an HSM from AWS CloudHSM in the inbound VPC in the Network account to store your private keys and sign certificate requests if you need to act as an issuing certificate authority.

**Design consideration**  
If you have a hard requirement for FIPS 140-2 level 3, you can also choose to configure AWS KMS to use the AWS CloudHSM cluster as a custom key store rather than using the native KMS key store. By doing this, you benefit from the integration between AWS KMS and AWS services that encrypt your data, while being responsible for the HSMs that protect your KMS keys. This combines single-tenant HSMs under your control with the ease of use and integration of AWS KMS. To manage your AWS CloudHSM infrastructure, you have to employ a public key infrastructure (PKI) and have a team that has experience managing HSMs.

## AWS Secrets Manager


[AWS Secrets Manager](https://aws.amazon.com/secrets-manager/) helps you protect the credentials (*secrets*) that you need to access your applications, services, and IT resources. The service enables you to efficiently rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. You can replace hardcoded credentials in your code with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can't be compromised by someone who is examining your code, because the secret no longer exists in the code. Additionally, Secrets Manager helps you move your applications between environments (development, pre-production, production). Instead of changing the code, you can ensure that an appropriately named and referenced secret is available in the environment. This promotes the consistency and reusability of application code across different environments, while requiring fewer changes and human interactions after the code has been tested.
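
The replacement of hardcoded credentials with a runtime lookup can be sketched as follows. The Secrets Manager client is injected (in practice `boto3.client("secretsmanager")`, whose `get_secret_value` call this relies on), and the secret name is a hypothetical placeholder that each environment would provide under the same name.

```python
import json

# Sketch: fetch database credentials at runtime instead of hardcoding them.
# The client is injected; the secret name is a hypothetical placeholder.

def get_db_credentials(sm_client, secret_id: str = "app/aurora/credentials") -> dict:
    # GetSecretValue returns the current secret version; the caller's IAM
    # role (for example, the EC2 instance profile) must allow
    # secretsmanager:GetSecretValue on this secret.
    response = sm_client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])
```

Because the secret name, not its value, lives in the code, promoting the application from development to production requires no code change, only a secret with the same name in each environment.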

With Secrets Manager, you can manage access to secrets by using fine-grained IAM policies and resource-based policies. You can help secure secrets by encrypting them with encryption keys that you manage by using AWS KMS. Secrets Manager also integrates with AWS logging and monitoring services for centralized auditing.

Secrets Manager uses [envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) with AWS KMS keys and data keys to protect each secret value. When you create a secret, you can choose any symmetric customer managed key in the AWS account and Region, or you can use the AWS managed key for Secrets Manager.

As a best practice, you can monitor your secrets to log any changes to them. This helps you ensure that any unexpected usage or change can be investigated. Unwanted changes can be rolled back. Secrets Manager currently supports two AWS services that enable you to monitor your organization and activity: AWS CloudTrail and AWS Config. CloudTrail captures all API calls for Secrets Manager as events, including calls from the Secrets Manager console and from code calls to the Secrets Manager APIs. In addition, CloudTrail captures other related (non-API) events that might have a security or compliance impact on your AWS account or might help you troubleshoot operational problems. These include certain secrets rotation events and deletion of secret versions. AWS Config can provide detective controls by tracking and monitoring changes to secrets in Secrets Manager. These changes include a secret's description, rotation configuration, tags, and relationship to other AWS sources such as the KMS encryption key or the AWS Lambda functions used for secret rotation. You can also configure Amazon EventBridge, which receives configuration and compliance change notifications from AWS Config, to route particular secrets events for notification or remediation actions.
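
As one concrete example of such routing, the sketch below shows an EventBridge event pattern that matches Secrets Manager `DeleteSecret` calls recorded by CloudTrail, which you could attach to a rule whose target is a notification or remediation action. The pattern shape follows standard EventBridge conventions.

```python
# Sketch: an EventBridge event pattern matching Secrets Manager secret
# deletions recorded by CloudTrail (illustrative).
secret_deletion_pattern = {
    "source": ["aws.secretsmanager"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["secretsmanager.amazonaws.com"],
        "eventName": ["DeleteSecret"],
    },
}
# In practice you would pass json.dumps(secret_deletion_pattern) as the
# EventPattern when creating the EventBridge rule.
```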

In the AWS SRA, Secrets Manager is located in the Application account to support local application use cases and to manage secrets close to their usage. Here, an instance profile is attached to the EC2 instances in the Application account. Separate secrets can then be configured in Secrets Manager to allow that instance profile to retrieve secrets—for example, to join the appropriate Active Directory or LDAP domain and to access the Aurora database. Secrets Manager [integrates with Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html) to manage user credentials when you create, modify, or restore an Amazon RDS DB instance or a Multi-AZ DB cluster. This helps you manage the creation and rotation of credentials and replaces the hardcoded credentials in your code with programmatic API calls to Secrets Manager.

**Design consideration**  
In general, configure and manage Secrets Manager in the account that is closest to where the secrets will be used. This approach takes advantage of the local knowledge of the use case and provides speed and flexibility to application development teams. For tightly controlled information where an additional layer of control might be appropriate, secrets can be centrally managed by Secrets Manager in the Security Tooling account.

## Amazon Cognito


[Amazon Cognito](https://aws.amazon.com/cognito/) lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and efficiently. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Apple, Facebook, Google, and Amazon, and enterprise identity providers through SAML 2.0 and OpenID Connect. The two main components of Amazon Cognito are [user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) and [identity pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html). User pools are user directories that provide sign-up and sign-in options for your application users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together. For common usage scenarios, see the [Amazon Cognito documentation](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-scenarios.html).

Amazon Cognito provides a built-in and customizable UI for user sign-up and sign-in. You can use Android, iOS, and JavaScript SDKs for Amazon Cognito to add user sign-up and sign-in pages to your apps. [Amazon Cognito Sync](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-sync.html) is an AWS service and client library that enables cross-device syncing of application-related user data.

Amazon Cognito supports multi-factor authentication and encryption of data at rest and data in transit. Amazon Cognito user pools provide [advanced security features](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html) to help protect access to user accounts in your application. These advanced security features provide risk-based adaptive authentication and protection from the use of compromised credentials. 

**Design considerations**  
You can create an AWS Lambda function and then use a Lambda trigger to invoke that function during user pool operations such as user sign-up, confirmation, and sign-in (authentication). You can add authentication challenges, migrate users, and customize verification messages. For common operations and user flows, see the [Amazon Cognito documentation](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html). Amazon Cognito calls Lambda functions synchronously.
You can use Amazon Cognito user pools to secure small, multi-tenant applications. A common use case of multi-tenant design is to run workloads to support testing multiple versions of an application. Multi-tenant design is also useful for testing a single application with different datasets, which allows full use of your cluster resources. However, make sure that the number of tenants and expected volume align with the related Amazon Cognito [service quotas](https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html). These quotas are shared across all tenants in your application.
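As a concrete illustration of the Lambda trigger flow described above, the following sketch shows a pre sign-up trigger that auto-confirms users from an allow-listed email domain. The event shape follows the documented Cognito pre sign-up trigger contract; the domain name itself is a placeholder assumption.

```python
# Hypothetical pre sign-up Lambda trigger for an Amazon Cognito user pool.
# Auto-confirms sign-ups whose email belongs to an allow-listed domain.

ALLOWED_DOMAIN = "example.com"  # assumption: replace with your own domain

def lambda_handler(event, context):
    """Inspect the sign-up request and optionally auto-confirm the user.

    Amazon Cognito invokes this function synchronously and expects the
    (possibly modified) event object to be returned.
    """
    email = event["request"]["userAttributes"].get("email", "")
    if email.endswith("@" + ALLOWED_DOMAIN):
        # These response fields are part of the pre sign-up trigger contract.
        event["response"]["autoConfirmUser"] = True
        event["response"]["autoVerifyEmail"] = True
    return event
```

Because Amazon Cognito calls the function synchronously, keep the handler fast and side-effect free where possible; a slow trigger delays every sign-up.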

## Amazon Verified Permissions


[Amazon Verified Permissions](https://aws.amazon.com/verified-permissions/) is a scalable permissions management and fine-grained authorization service for the applications that you build. Developers and administrators can use [Cedar](https://www.cedarpolicy.com/en), a purpose-built and security-first open-source policy language, with roles and attributes to define more granular, context-aware, policy-based access controls. Developers can build more secure applications faster by externalizing authorization and centralizing policy management and administration. Verified Permissions includes schema definitions, policy statement grammar, and [automated reasoning](https://aws.amazon.com/blogs/security/aws-security-profile-byron-cook-director-aws-automated-reasoning-group/) that scale across millions of permissions, so you can enforce the principles of default deny and least privilege. The service also includes an evaluation simulator tool to help you test your authorization decisions and author policies. These features facilitate the deployment of an in-depth, fine-grained authorization model to support your [zero-trust](https://aws.amazon.com/security/zero-trust/) objectives. Verified Permissions centralizes permissions in a policy store and helps developers use those permissions to authorize user actions within their applications.
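To make the Cedar model concrete, the following is a minimal sample policy; the group, action, and resource names are illustrative assumptions, and in practice they must match the schema that you define in your policy store.

```cedar
// Sample Cedar policy: members of a hypothetical "Admins" group may
// view any photo in the "vacation" album.
permit (
    principal in UserGroup::"Admins",
    action == Action::"viewPhoto",
    resource in Album::"vacation"
);
```

Because Cedar policies default to deny, access is granted only when a `permit` policy such as this one matches the request and no `forbid` policy applies.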

You can connect your application to the service through the API to authorize user access requests. For each authorization request, the service retrieves the relevant policies and evaluates those policies to determine whether a user is permitted to take an action on a resource, based on context inputs such as users, roles, group membership, and attributes. You can configure and connect Verified Permissions to send your policy management and authorization logs to AWS CloudTrail. If you use Amazon Cognito as your identity store, you can integrate with Verified Permissions and use the ID and access tokens that Amazon Cognito returns in the authorization decisions in your applications. You provide Amazon Cognito tokens to Verified Permissions, which uses the attributes that the tokens contain to represent the principal and identify the principal's entitlements. For more information about this integration, see the AWS blog post [Simplifying fine-grained authorization with Amazon Verified Permissions and Amazon Cognito](https://aws.amazon.com/blogs/security/simplify-fine-grained-authorization-with-amazon-verified-permissions-and-amazon-cognito/). 
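The authorization request described above can be sketched with the AWS SDK for Python (Boto3) and the Verified Permissions `IsAuthorized` API. The policy store ID and the entity type names (`User`, `Photo`) are illustrative assumptions; they must match your own policy store and schema.

```python
def build_auth_request(policy_store_id, user_id, action_id, resource_id):
    """Build an IsAuthorized request for Amazon Verified Permissions.

    The entity type names used here (User, Action, Photo) are examples;
    use the types defined in your policy store's schema.
    """
    return {
        "policyStoreId": policy_store_id,
        "principal": {"entityType": "User", "entityId": user_id},
        "action": {"actionType": "Action", "actionId": action_id},
        "resource": {"entityType": "Photo", "entityId": resource_id},
    }

def is_allowed(request):
    """Call IsAuthorized and return True only on an explicit ALLOW decision.

    Requires AWS credentials and an existing policy store.
    """
    import boto3  # imported here so the pure builder above has no AWS dependency

    client = boto3.client("verifiedpermissions")
    response = client.is_authorized(**request)
    return response["decision"] == "ALLOW"
```

Returning `True` only on an explicit `ALLOW` keeps the application aligned with the default-deny posture: any error, ambiguity, or `DENY` decision results in access being refused.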

Verified Permissions helps you define policy-based access control (PBAC). PBAC is an access control model that uses permissions that are expressed as policies to determine who can access which resources in an application. PBAC brings together role-based access control (RBAC) and attribute-based access control (ABAC), resulting in a more powerful and flexible access control model. To learn more about PBAC and how you can design an authorization model by using Verified Permissions, see the AWS blog post [Policy-based access control in application development with Amazon Verified Permissions](https://aws.amazon.com/blogs/devops/policy-based-access-control-in-application-development-with-amazon-verified-permissions/).

In the AWS SRA, Verified Permissions is located in the Application account to support permission management for applications through its integration with Amazon Cognito.

## Layered defense


The Application account provides an opportunity to illustrate the layered defense principles that AWS enables. Consider the security of the EC2 instances that form the core of the simple example application in the AWS SRA, and you can see how AWS services work together in a layered defense. This approach aligns with the structural view of AWS security services, as described in the section [Apply security services across your AWS organization](security-services.md) earlier in this guide.
+ The innermost layer is the EC2 instances. As mentioned earlier, EC2 instances include many native security features either by default or as options. Examples include [IMDSv2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configuring-instance-metadata-service.html), the [Nitro system](https://aws.amazon.com/blogs/security/confidential-computing-an-aws-perspective/), and [Amazon EBS storage encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html).
+ The second layer of protection focuses on the operating system and software running on the EC2 instances. Services such as [Amazon Inspector](https://aws.amazon.com/inspector/) and [AWS Systems Manager](https://aws.amazon.com/systems-manager/) enable you to monitor, report, and take corrective action on these configurations. Amazon Inspector [monitors your software for vulnerabilities](https://aws.amazon.com/blogs/security/how-to-visualize-multi-account-amazon-inspector-findings-with-amazon-elasticsearch-service/), and Systems Manager helps you maintain security and compliance by scanning managed instances for their [patch](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) and [configuration status](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-compliance.html), and then reporting and taking any [corrective actions](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html) that you specify.
+ The instances, and the software running on them, sit within your AWS networking infrastructure. In addition to using the [security features of Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/security.html), the AWS SRA also uses VPC endpoints to provide private connectivity between the VPC and supported AWS services, and to place access policies at the network boundary.
+ The activity and configuration of the EC2 instances, software, network, and IAM roles and resources are further monitored by AWS account-focused services such as AWS Security Hub, Amazon GuardDuty, AWS CloudTrail, AWS Config, IAM Access Analyzer, and Amazon Macie.
+ Finally, beyond the Application account, AWS RAM helps control which resources are shared with other accounts, and service control policies (SCPs) in AWS Organizations help you enforce consistent permissions across the AWS organization.