

# Security
<a name="a-security"></a>

The Security pillar encompasses the ability to protect data, systems, and assets, taking advantage of cloud technologies to improve your security. You can find prescriptive guidance on implementation in the [Security Pillar whitepaper](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html?ref=wellarchitected-wp).

**Topics**
+ [Security foundations](a-sec-security.md)
+ [Identity and access management](a-identity-and-access-management.md)
+ [Detection](a-detective-controls.md)
+ [Infrastructure protection](a-infrastructure-protection.md)
+ [Data protection](a-data-protection.md)
+ [Incident response](a-incident-response.md)
+ [Application security](a-appsec.md)

# Security foundations
<a name="a-sec-security"></a>

**Topics**
+ [SEC 1. How do you securely operate your workload?](sec-01.md)

# SEC 1. How do you securely operate your workload?
<a name="sec-01"></a>

 To operate your workload securely, you must apply overarching best practices to every area of security. Take requirements and processes that you have defined in operational excellence at an organizational and workload level, and apply them to all areas. Staying up to date with AWS and industry recommendations and threat intelligence helps you evolve your threat model and control objectives. Automating security processes, testing, and validation permits you to scale your security operations. 

**Topics**
+ [SEC01-BP01 Separate workloads using accounts](sec_securely_operate_multi_accounts.md)
+ [SEC01-BP02 Secure account root user and properties](sec_securely_operate_aws_account.md)
+ [SEC01-BP03 Identify and validate control objectives](sec_securely_operate_control_objectives.md)
+ [SEC01-BP04 Keep up-to-date with security threats](sec_securely_operate_updated_threats.md)
+ [SEC01-BP05 Keep up-to-date with security recommendations](sec_securely_operate_updated_recommendations.md)
+ [SEC01-BP06 Automate testing and validation of security controls in pipelines](sec_securely_operate_test_validate_pipeline.md)
+ [SEC01-BP07 Identify threats and prioritize mitigations using a threat model](sec_securely_operate_threat_model.md)
+ [SEC01-BP08 Evaluate and implement new security services and features regularly](sec_securely_operate_implement_services_features.md)

# SEC01-BP01 Separate workloads using accounts
<a name="sec_securely_operate_multi_accounts"></a>

 Establish common guardrails and isolation between environments (such as production, development, and test) and workloads through a multi-account strategy. Account-level separation is strongly recommended because it provides a robust isolation boundary for security, billing, and access. 

**Desired outcome:** An account structure that isolates cloud operations, unrelated workloads, and environments into separate accounts, increasing security across the cloud infrastructure.

**Common anti-patterns:**
+  Placing multiple unrelated workloads with different data sensitivity levels into the same account.
+  Poorly defined organizational unit (OU) structure.

**Benefits of establishing this best practice:**
+  Decreased scope of impact if a workload is inadvertently accessed.
+  Central governance of access to AWS services, resources, and Regions.
+  Maintained security of the cloud infrastructure through policies and centralized administration of security services.
+  Automated account creation and maintenance process.
+  Centralized auditing of your infrastructure for compliance and regulatory requirements.

 **Level of risk exposed if this best practice is not established**: High 

## Implementation guidance
<a name="implementation-guidance"></a>

 AWS accounts provide a security isolation boundary between workloads or resources that operate at different sensitivity levels. AWS provides tools to manage your cloud workloads at scale through a multi-account strategy to leverage this isolation boundary. For guidance on the concepts, patterns, and implementation of a multi-account strategy on AWS, see [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html). 

 When you have multiple AWS accounts under central management, your accounts should be organized into a hierarchy defined by layers of organizational units (OUs). Security controls can then be organized and applied to the OUs and member accounts, establishing consistent preventative controls on member accounts in the organization. The security controls are inherited, allowing you to filter permissions available to member accounts located at lower levels of an OU hierarchy. A good design takes advantage of this inheritance to reduce the number and complexity of security policies required to achieve the desired security controls for each member account. 

 [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html) and [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) are two services that you can use to implement and manage this multi-account structure in your AWS environment. AWS Organizations allows you to organize accounts into a hierarchy defined by one or more layers of OUs, with each OU containing a number of member accounts. [Service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) (SCPs) allow the organization administrator to establish granular preventative controls on member accounts, and [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/config-rule-multi-account-deployment.html) can be used to establish proactive and detective controls on member accounts. Many AWS services [integrate with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html) to provide delegated administrative controls and to perform service-specific tasks across all member accounts in the organization. 
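As a concrete illustration, the sketch below builds the JSON document for one common preventative SCP: denying actions outside a set of approved Regions. The Region list and the global-service exemptions are assumptions for this example; tailor both to your organization before attaching the policy with AWS Organizations.

```python
import json

# Illustrative SCP: deny any action requested outside two approved
# Regions. Global services that must operate from us-east-1 (IAM,
# Organizations, Route 53, Support) are exempted via NotAction.
# Region list and exemptions are placeholders for this sketch.
region_deny_scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": [
                "iam:*",
                "organizations:*",
                "route53:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(region_deny_scp, indent=2))
```

Because SCPs attached to an OU are inherited by every account beneath it, attaching this one policy at the OU level enforces the Region restriction across all member accounts in that branch.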

 Layered on top of AWS Organizations, [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) provides a one-click, best-practices setup for a multi-account AWS environment with a [landing zone](https://docs.aws.amazon.com/controltower/latest/userguide/aws-multi-account-landing-zone.html). The landing zone is the entry point to the multi-account environment established by Control Tower. Control Tower provides several [benefits](https://aws.amazon.com/blogs/architecture/fast-and-secure-account-governance-with-customizations-for-aws-control-tower/) over AWS Organizations alone. Three that improve account governance are: 
+  Integrated mandatory security controls that are automatically applied to accounts admitted into the organization. 
+  Optional controls that can be turned on or off for a given set of OUs. 
+  [AWS Control Tower Account Factory](https://docs.aws.amazon.com/controltower/latest/userguide/account-factory.html) provides automated deployment of accounts containing pre-approved baselines and configuration options inside your organization. 

 **Implementation steps** 

1.  **Design an organizational unit structure:** A properly designed organizational unit structure reduces the management burden required to create and maintain service control policies and other security controls. Your organizational unit structure should be [aligned with your business needs, data sensitivity, and workload structure](https://aws.amazon.com/blogs/mt/best-practices-for-organizational-units-with-aws-organizations/). 

1.  **Create a landing zone for your multi-account environment:** A landing zone provides a consistent security and infrastructure foundation from which your organization can quickly develop, launch, and deploy workloads. You can use a [custom-built landing zone or AWS Control Tower](https://docs.aws.amazon.com/prescriptive-guidance/latest/migration-aws-environment/building-landing-zones.html) to orchestrate your environment. 

1.  **Establish guardrails:** Implement consistent security guardrails for your environment through your landing zone. AWS Control Tower provides a list of [mandatory](https://docs.aws.amazon.com/controltower/latest/userguide/mandatory-controls.html) and [optional](https://docs.aws.amazon.com/controltower/latest/userguide/optional-controls.html) controls that can be deployed. Mandatory controls are automatically deployed when implementing Control Tower. Review the list of highly recommended and optional controls, and implement controls that are appropriate to your needs. 

1.  **Restrict access to newly added Regions**: For new AWS Regions, IAM resources such as users and roles are only propagated to the Regions that you specify. This action can be performed through the [console when using Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/region-deny.html), or by adjusting [IAM permission policies in AWS Organizations](https://aws.amazon.com/blogs/security/setting-permissions-to-enable-accounts-for-upcoming-aws-regions/). 

1.  **Consider AWS [CloudFormation StackSets](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html)**: StackSets help you deploy resources including IAM policies, roles, and groups into different AWS accounts and Regions from an approved template. 
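To make step 5 more concrete, the sketch below assembles the parameters you could pass to CloudFormation's `CreateStackSet` API (for example via boto3's `cloudformation.create_stack_set`) to roll a baseline IAM audit role out to every account in an OU. The role name, trusted account ID, and template contents are placeholders invented for this example, not a prescribed baseline.

```python
import json

# A minimal baseline template: a read-only audit role deployed to each
# member account. The role name and trusted account ID (111122223333)
# are placeholders for this sketch.
baseline_template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "AuditRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": "org-audit-readonly",
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                        "Action": "sts:AssumeRole",
                    }],
                },
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/ReadOnlyAccess"
                ],
            },
        }
    },
}

# Parameters for CreateStackSet. SERVICE_MANAGED deploys through AWS
# Organizations and, with AutoDeployment enabled, automatically stacks
# new accounts that are later added to the target OU.
create_stack_set_params = {
    "StackSetName": "security-baseline",
    "TemplateBody": json.dumps(baseline_template),
    "Capabilities": ["CAPABILITY_NAMED_IAM"],
    "PermissionModel": "SERVICE_MANAGED",
    "AutoDeployment": {"Enabled": True, "RetainStacksOnAccountRemoval": False},
}

print(create_stack_set_params["StackSetName"])
```

After creating the stack set, a call to `CreateStackInstances` with `DeploymentTargets` naming your OU IDs and a list of Regions performs the actual fan-out deployment.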

## Resources
<a name="resources"></a>

**Related best practices:** 
+ [SEC02-BP04 Rely on a centralized identity provider](sec_identities_identity_provider.md)

**Related documents:** 
+  [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) 
+  [AWS Security Audit Guidelines](https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html) 
+  [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [Use CloudFormation StackSets to provision resources across multiple AWS accounts and regions](https://aws.amazon.com/blogs/aws/use-cloudformation-stacksets-to-provision-resources-across-multiple-aws-accounts-and-regions/) 
+  [Organizations FAQ](https://aws.amazon.com/organizations/faqs/) 
+  [AWS Organizations terminology and concepts](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html) 
+  [Best Practices for Service Control Policies in an AWS Organizations Multi-Account Environment](https://aws.amazon.com/blogs/industries/best-practices-for-aws-organizations-service-control-policies-in-a-multi-account-environment/) 
+  [AWS Account Management Reference Guide](https://docs.aws.amazon.com/accounts/latest/reference/accounts-welcome.html) 
+  [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html) 

**Related videos:** 
+  [Enable AWS adoption at scale with automation and governance](https://youtu.be/GUMSgdB-l6s) 
+  [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM) 
+  [Building and Governing Multiple Accounts using AWS Control Tower](https://www.youtube.com/watch?v=agpyuvRv5oo) 
+  [Enable Control Tower for Existing Organizations](https://www.youtube.com/watch?v=CwRy0t8nfgM) 

**Related workshops:** 
+  [Control Tower Immersion Day](https://controltower.aws-management.tools/immersionday/) 

# SEC01-BP02 Secure account root user and properties
<a name="sec_securely_operate_aws_account"></a>

 The root user is the most privileged user in an AWS account, with full administrative access to all resources within the account, and in some cases cannot be constrained by security policies. Deactivating programmatic access to the root user, establishing appropriate controls for the root user, and avoiding routine use of the root user helps reduce the risk of inadvertent exposure of the root credentials and subsequent compromise of the cloud environment. 

**Desired outcome:** Securing the root user helps reduce the chance that accidental or intentional damage can occur through the misuse of root user credentials. Establishing detective controls can also alert the appropriate personnel when actions are taken using the root user.

**Common anti-patterns:**
+  Using the root user for tasks other than the few that require root user credentials.  
+  Neglecting to test contingency plans on a regular basis to verify the functioning of critical infrastructure, processes, and personnel during an emergency. 
+  Only considering the typical account login flow and neglecting to consider or test alternate account recovery methods. 
+  Not handling DNS, email servers, and telephone providers as part of the critical security perimeter, as these are used in the account recovery flow. 

 **Benefits of establishing this best practice:** Securing access to the root user builds confidence that actions in your account are controlled and audited. 

 **Level of risk exposed if this best practice is not established**: High 

## Implementation guidance
<a name="implementation-guidance"></a>

 AWS offers many tools to help secure your account. However, because some of these measures are not turned on by default, you must take direct action to implement them. Consider these recommendations as foundational steps to securing your AWS account. As you implement these steps it’s important that you build a process to continuously assess and monitor the security controls. 

 When you first create an AWS account, you begin with one identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user. You can sign in as the root user using the email address and password that you used to create the account. Due to the elevated access granted to the AWS root user, you must limit use of the AWS root user to perform tasks that [specifically require it](https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html). The root user login credentials must be closely guarded, and multi-factor authentication (MFA) should always be used for the AWS account root user. 

 In addition to the normal authentication flow of signing in to your root user with a username, password, and multi-factor authentication (MFA) device, there are account recovery flows that grant access to your AWS account root user through the email address and phone number associated with your account. Therefore, it is equally important to secure the root user email account where the recovery email is sent and the phone number associated with the account. Also consider potential circular dependencies where the email address associated with the root user is hosted on email servers or domain name service (DNS) resources from the same AWS account. 

 When using AWS Organizations, there are multiple AWS accounts, each of which has a root user. One account is designated as the management account, and several layers of member accounts can then be added underneath it. Prioritize securing your management account’s root user, then address your member account root users. The strategy for securing your management account’s root user can differ from that for your member account root users, and you can place preventative security controls on your member account root users. 

 **Implementation steps** 

 The following implementation steps are recommended to establish controls for the root user. Where applicable, recommendations are cross-referenced to [CIS AWS Foundations benchmark version 1.4.0](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-cis-controls-1.4.0.html). In addition to these steps, consult [AWS best practice guidelines](https://aws.amazon.com/premiumsupport/knowledge-center/security-best-practices/) for securing your AWS account and resources. 

 **Preventative controls** 

1.  Set up accurate [contact information](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-primary.html) for the account. 

   1.  This information is used for the lost password recovery flow, lost MFA device account recovery flow, and for critical security-related communications with your team. 

   1.  Use an email address hosted by your corporate domain, preferably a distribution list, as the root user’s email address. Using a distribution list rather than an individual’s email account provides additional redundancy and continuity for access to the root account over long periods of time. 

   1.  The phone number listed on the contact information should be a dedicated, secure phone for this purpose. The phone number should not be listed or shared with anyone. 

1.  Do not create access keys for the root user. If access keys exist, remove them (CIS 1.4). 

   1.  Eliminate any long-lived programmatic credentials (access and secret keys) for the root user. 

   1.  If root user access keys already exist, you should transition processes using those keys to use temporary access keys from an AWS Identity and Access Management (IAM) role, then [delete the root user access keys](https://docs.aws.amazon.com/accounts/latest/reference/root-user-access-key.html#root-user-delete-access-key). 

1.  Determine if you need to store credentials for the root user. 

   1.  If you are using AWS Organizations to create new member accounts, the initial password for the root user on new member accounts is set to a random value that is not exposed to you. Consider using the password reset flow from your AWS Organization management account to [gain access to the member account](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_accounts_access.html#orgs_manage_accounts_access-as-root) if needed. 

   1.  For standalone AWS accounts or the AWS Organizations management account, consider creating and securely storing credentials for the root user. Use MFA for the root user. 

1.  Use preventative controls for member account root users in AWS multi-account environments. 

   1.  Consider using the [Disallow Creation of Root Access Keys for the Root User](https://docs.aws.amazon.com/controltower/latest/userguide/strongly-recommended-controls.html#disallow-root-access-keys) preventative guardrail for member accounts. 

   1.  Consider using the [Disallow Actions as a Root User](https://docs.aws.amazon.com/controltower/latest/userguide/strongly-recommended-controls.html#disallow-root-auser-actions) preventative guardrail for member accounts. 

1.  If you need credentials for the root user: 

   1.  Use a complex password. 

   1.  Turn on multi-factor authentication (MFA) for the root user, especially for AWS Organizations management (payer) accounts (CIS 1.5). 

   1.  Consider hardware MFA devices for resiliency and security, as single-purpose devices reduce the chances that a device containing your MFA codes is reused for other purposes. Verify that battery-powered hardware MFA devices are replaced regularly. (CIS 1.6) 
      +  To configure MFA for the root user, follow the instructions for creating either a [virtual MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root) or [hardware MFA device](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html#enable-hw-mfa-for-root). 

   1.  Consider enrolling multiple MFA devices for backup. [Up to 8 MFA devices are allowed per account](https://aws.amazon.com/blogs/security/you-can-now-assign-multiple-mfa-devices-in-iam/). 
      +  Note that enrolling more than one MFA device for the root user automatically turns off the [flow for recovering your account if the MFA device is lost](https://aws.amazon.com/premiumsupport/knowledge-center/reset-root-user-mfa/). 

   1.  Store the password securely, and consider circular dependencies if storing the password electronically. Don’t store the password in such a way that would require access to the same AWS account to obtain it. 

1.  Optional: Consider establishing a periodic password rotation schedule for the root user. 
   +  Credential management best practices depend on your regulatory and policy requirements. Root users protected by MFA are not reliant on the password as a single factor of authentication. 
   +  [Changing the root user password](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_change-root.html) on a periodic basis reduces the risk that an inadvertently exposed password can be misused. 
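Some of the preventative controls above can be verified programmatically. As a hedged sketch, the helper below interprets the summary map returned by IAM's `GetAccountSummary` API (boto3: `iam.get_account_summary()["SummaryMap"]`) and flags root access keys (step 2, CIS 1.4) and missing root MFA (step 5, CIS 1.5); the sample summary is illustrative data, not output from a real account.

```python
# Flag root user misconfigurations from an IAM account summary map.
# The keys AccountAccessKeysPresent and AccountMFAEnabled are the
# fields GetAccountSummary reports for the root user.
def root_user_findings(summary_map: dict) -> list[str]:
    findings = []
    if summary_map.get("AccountAccessKeysPresent", 0) != 0:
        findings.append("Root user has long-lived access keys; delete them (CIS 1.4).")
    if summary_map.get("AccountMFAEnabled", 0) != 1:
        findings.append("Root user does not have MFA enabled (CIS 1.5).")
    return findings

# Illustrative summary for an account that fails both checks.
sample = {"AccountAccessKeysPresent": 1, "AccountMFAEnabled": 0}
for finding in root_user_findings(sample):
    print(finding)
```

A periodic job running a check like this across member accounts complements, but does not replace, the preventative guardrails described above.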

 **Detective controls** 
+  Create alarms to detect use of the root credentials (CIS 1.7). [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_settingup.html) can monitor and alert on root user API credential usage through the [RootCredentialUsage](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#policy-iam-rootcredentialusage) finding. 
+  Evaluate and implement the detective controls included in the [AWS Well-Architected Security Pillar conformance pack for AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/operational-best-practices-for-wa-Security-Pillar.html), or if using AWS Control Tower, the [strongly recommended controls](https://docs.aws.amazon.com/controltower/latest/userguide/strongly-recommended-controls.html) available inside Control Tower. 
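Beyond GuardDuty, one common way to implement the root-usage alarm is an EventBridge rule over CloudTrail events. The sketch below builds such an event pattern; you could pass it (serialized with `json.dumps`) to EventBridge's `PutRule` API and attach a notification target such as an SNS topic. The rule name and delivery target are omitted from this sketch.

```python
import json

# EventBridge event pattern matching CloudTrail API calls and console
# sign-ins made with root credentials (CIS 1.7). Attach this pattern to
# a rule whose target notifies your security team.
root_activity_pattern = {
    "detail-type": [
        "AWS API Call via CloudTrail",
        "AWS Console Sign In via CloudTrail",
    ],
    "detail": {"userIdentity": {"type": ["Root"]}},
}

print(json.dumps(root_activity_pattern))
```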

 **Operational guidance** 
+  Determine who in the organization should have access to the root user credentials. 
  +  Use a two-person rule so that no one individual has access to all necessary credentials and MFA to obtain root user access. 
  +  Verify that the organization, and not a single individual, maintains control over the phone number and email alias associated with the account (which are used for password reset and MFA reset flow). 
+  Use the root user only by exception (CIS 1.7). 
  +  The AWS root user must not be used for everyday tasks, even administrative ones. Only sign in as the root user to perform [AWS tasks that require the root user](https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html). All other actions should be performed by other users assuming appropriate roles. 
+  Periodically check that access to the root user is functioning so that procedures are tested prior to an emergency situation requiring the use of the root user credentials. 
+  Periodically check that the email address associated with the account and those listed under [Alternate Contacts](https://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html) work. Monitor these email inboxes for security notifications you might receive from abuse@amazon.com. Also ensure any phone numbers associated with the account are working. 
+  Prepare incident response procedures to respond to root account misuse. Refer to the [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html) and the best practices in the [Incident Response section of the Security Pillar whitepaper](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/incident-response.html) for more information on building an incident response strategy for your AWS account. 

## Resources
<a name="resources"></a>

**Related best practices:** 
+ [SEC01-BP01 Separate workloads using accounts](sec_securely_operate_multi_accounts.md)
+ [SEC02-BP01 Use strong sign-in mechanisms](sec_identities_enforce_mechanisms.md)
+ [SEC03-BP02 Grant least privilege access](sec_permissions_least_privileges.md)
+ [SEC03-BP03 Establish emergency access process](sec_permissions_emergency_process.md)
+ [SEC10-BP05 Pre-provision access](sec_incident_response_pre_provision_access.md)

**Related documents:** 
+  [AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html) 
+  [AWS Security Audit Guidelines](https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html) 
+  [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [Amazon GuardDuty – root credential usage alert](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html#policy-iam-rootcredentialusage) 
+  [Step-by-step guidance on monitoring for root credential use through CloudTrail](https://docs.aws.amazon.com/securityhub/latest/userguide/iam-controls.html#iam-20) 
+  [MFA tokens approved for use with AWS](https://aws.amazon.com/iam/features/mfa/) 
+  Implementing [break glass access](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/break-glass-access.html) on AWS 
+  [Top 10 security items to improve in your AWS account](https://aws.amazon.com/blogs/security/top-10-security-items-to-improve-in-your-aws-account/) 
+  [What do I do if I notice unauthorized activity in my AWS account?](https://aws.amazon.com/premiumsupport/knowledge-center/potential-account-compromise/) 

**Related videos:** 
+  [Enable AWS adoption at scale with automation and governance](https://youtu.be/GUMSgdB-l6s) 
+  [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM) 
+  [Limiting use of AWS root credentials](https://youtu.be/SMjvtxXOXdU?t=979) from AWS re:inforce 2022 – Security best practices with AWS IAM

**Related examples and labs:** 
+ [Lab: AWS account setup and root user](https://www.wellarchitectedlabs.com/security/100_labs/100_aws_account_and_root_user/)

# SEC01-BP03 Identify and validate control objectives
<a name="sec_securely_operate_control_objectives"></a>

 Based on your compliance requirements and risks identified from your threat model, derive and validate the control objectives and controls that you need to apply to your workload. Ongoing validation of control objectives and controls help you measure the effectiveness of risk mitigation. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Identify compliance requirements: Discover the organizational, legal, and regulatory requirements that your workload must comply with. 
+  Identify AWS compliance resources: Identify resources that AWS has available to assist you with compliance. 
  +  [AWS Compliance](https://aws.amazon.com/compliance/)
  +  [AWS Artifact](https://aws.amazon.com/artifact/)

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Security Audit Guidelines](https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html) 
+ [Security Bulletins](https://aws.amazon.com/security/security-bulletins/)

 **Related videos:** 
+  [AWS Security Hub CSPM: Manage Security Alerts and Automate Compliance](https://youtu.be/HsWtPG_rTak) 
+  [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM) 

# SEC01-BP04 Keep up-to-date with security threats
<a name="sec_securely_operate_updated_threats"></a>

 To help you define and implement appropriate controls, recognize attack vectors by staying up to date with the latest security threats. Use AWS managed services to make it easier to receive notification of unexpected or unusual behavior in your AWS accounts. Investigate using AWS Partner tools or third-party threat information feeds as part of your security information flow. The [Common Vulnerabilities and Exposures (CVE) List](https://cve.mitre.org/) contains publicly disclosed cybersecurity vulnerabilities that you can use to stay up to date. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Subscribe to threat intelligence sources: Regularly review threat intelligence information from multiple sources that are relevant to the technologies used in your workload. 
  +  [Common Vulnerabilities and Exposures List](https://cve.mitre.org/)
+  Consider the [AWS Shield Advanced](https://aws.amazon.com/shield/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc) service: it provides near real-time visibility into intelligence sources if your workload is internet accessible. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Security Audit Guidelines](https://docs.aws.amazon.com/general/latest/gr/aws-security-audit-guide.html) 
+  [AWS Shield](https://aws.amazon.com/shield/) 
+ [Security Bulletins](https://aws.amazon.com/security/security-bulletins/)

 **Related videos:** 
+ [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM)

# SEC01-BP05 Keep up-to-date with security recommendations
<a name="sec_securely_operate_updated_recommendations"></a>

 Stay up-to-date with both AWS and industry security recommendations to evolve the security posture of your workload. [AWS Security Bulletins](https://aws.amazon.com/security/security-bulletins/?card-body.sort-by=item.additionalFields.bulletinDateSort&card-body.sort-order=desc&awsf.bulletins-year=year%232009) contain important information about security and privacy notifications. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Follow AWS updates: Subscribe to or regularly check for new recommendations, tips, and tricks. 
  +  [AWS Well-Architected Labs](https://wellarchitectedlabs.com/?ref=wellarchitected) 
  +  [AWS security blog](https://aws.amazon.com/blogs/security/?ref=wellarchitected) 
  +  [AWS service documentation](https://aws.amazon.com/documentation/?ref=wellarchitected) 
+  Subscribe to industry news: Regularly review news feeds from multiple sources that are relevant to the technologies that are used in your workload. 
  +  [Example: Common Vulnerabilities and Exposures List](https://cve.mitre.org/cve/?ref=wellarchitected) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Security Bulletins](https://aws.amazon.com/security/security-bulletins/) 

 **Related videos:** 
+  [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM) 

# SEC01-BP06 Automate testing and validation of security controls in pipelines
<a name="sec_securely_operate_test_validate_pipeline"></a>

 Establish secure baselines and templates for security mechanisms that are tested and validated as part of your build, pipelines, and processes. Use tools and automation to test and validate all security controls continuously. For example, scan items such as machine images and infrastructure-as-code templates for security vulnerabilities, irregularities, and drift from an established baseline at each stage. AWS CloudFormation Guard can help you verify that CloudFormation templates are safe, save you time, and reduce the risk of configuration error. 
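To show the shape of such a pipeline check, the sketch below scans a CloudFormation template (parsed into a dict) and reports S3 buckets without server-side encryption. This is a simplified stand-in for the kind of rule a tool like AWS CloudFormation Guard evaluates; the template contents and the specific property checked are assumptions for the example.

```python
# Return the logical IDs of S3 buckets in a CloudFormation template
# that lack a BucketEncryption property. A CI stage could fail the
# build when this list is non-empty.
def unencrypted_buckets(template: dict) -> list[str]:
    failures = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        if "BucketEncryption" not in resource.get("Properties", {}):
            failures.append(name)
    return failures

# Illustrative template: one compliant bucket, one misconfigured.
template = {
    "Resources": {
        "Logs": {"Type": "AWS::S3::Bucket", "Properties": {}},
        "Data": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                }
            },
        },
    }
}

print(unencrypted_buckets(template))  # flags "Logs"
```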

Reducing the number of security misconfigurations introduced into a production environment is critical—the more quality control and reduction of defects you can perform in the build process, the better. Design continuous integration and continuous deployment (CI/CD) pipelines to test for security issues whenever possible. CI/CD pipelines offer the opportunity to enhance security at each stage of build and delivery. CI/CD security tooling must also be kept updated to mitigate evolving threats.
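As an illustration of the kind of check a pipeline stage can run, the sketch below scans a CloudFormation-style template for drift from a baseline. The rules, resource names, and template here are hypothetical; in practice you would express such rules with AWS CloudFormation Guard rather than hand-rolled code.

```python
# Hypothetical baseline rules: each maps a resource type to required properties.
BASELINE_RULES = {
    "AWS::S3::Bucket": [
        ("BucketEncryption", "bucket must define server-side encryption"),
        ("PublicAccessBlockConfiguration", "bucket must block public access"),
    ],
}

def scan_template(template: dict) -> list[str]:
    """Return findings for resources that drift from the baseline."""
    findings = []
    for name, resource in template.get("Resources", {}).items():
        for prop, message in BASELINE_RULES.get(resource.get("Type"), []):
            if prop not in resource.get("Properties", {}):
                findings.append(f"{name}: {message}")
    return findings

# Illustrative template: one compliant bucket, one non-compliant bucket.
template = {
    "Resources": {
        "LogsBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "BucketEncryption": {},
                "PublicAccessBlockConfiguration": {},
            },
        },
        "ScratchBucket": {"Type": "AWS::S3::Bucket", "Properties": {}},
    }
}

findings = scan_template(template)
# A CI/CD stage would fail the build when findings is non-empty.
```

A pipeline stage running a check like this at build time catches the misconfiguration before it ever reaches production, which is the point of shifting validation left.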

Track changes to your workload configuration to support compliance auditing, change management, and investigations. You can use AWS Config to record and evaluate your AWS and third-party resources, and to continuously audit and assess overall compliance with rules and conformance packs, which are collections of rules with remediation actions.

Change tracking should include planned changes, which are part of your organization’s change control process (sometimes referred to as MACD—Move, Add, Change, Delete), unplanned changes, and unexpected changes, such as incidents. Changes might occur on the infrastructure, but they might also be related to other categories, such as changes in code repositories, machine images and application inventory changes, process and policy changes, or documentation changes.

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Automate configuration management: Enforce and validate secure configurations automatically by using a configuration management service or tool. 
  +  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
  +  [AWS CloudFormation](https://aws.amazon.com/cloudformation/)
  +  [Set Up a CI/CD Pipeline on AWS](https://aws.amazon.com/getting-started/projects/set-up-ci-cd-pipeline/)

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [How to use service control policies to set permission guardrails across accounts in your AWS Organization](https://aws.amazon.com/blogs/security/how-to-use-service-control-policies-to-set-permission-guardrails-across-accounts-in-your-aws-organization/) 

 **Related videos:** 
+  [Managing Multi-Account AWS Environments Using AWS Organizations](https://youtu.be/fxo67UeeN1A) 
+  [Security Best Practices the Well-Architected Way](https://youtu.be/u6BCVkXkPnM) 

# SEC01-BP07 Identify threats and prioritize mitigations using a threat model
<a name="sec_securely_operate_threat_model"></a>


> **Note**: This best practice was updated with new guidance on December 6, 2023.

 Perform threat modeling to identify and maintain an up-to-date register of potential threats and associated mitigations for your workload. Prioritize your threats and adapt your security control mitigations to prevent, detect, and respond. Revisit and maintain this register in the context of your workload and the evolving security landscape. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 **What is threat modeling?** 

 “Threat modeling works to identify, communicate, and understand threats and mitigations within the context of protecting something of value.” – [The Open Web Application Security Project (OWASP) Application Threat Modeling](https://owasp.org/www-community/Threat_Modeling) 

 **Why should you threat model?** 

 Systems are complex, and are becoming increasingly complex and capable over time, delivering more business value and increased customer satisfaction and engagement. This means that IT design decisions need to account for an ever-increasing number of use cases. This complexity and number of use-case permutations typically makes unstructured approaches ineffective for finding and mitigating threats. Instead, you need a systematic approach to enumerate the potential threats to the system, to devise mitigations, and to prioritize these mitigations so that the limited resources of your organization have the maximum impact on the overall security posture of the system. 

 Threat modeling is designed to provide this systematic approach, with the aim of finding and addressing issues early in the design process, when the mitigations have a low relative cost and effort compared to later in the lifecycle. This approach aligns with the industry principle of [*shift-left* security](https://owasp.org/www-project-devsecops-guideline/latest/00a-Overview). Ultimately, threat modeling integrates with an organization’s risk management process and helps drive decisions on which controls to implement by using a threat driven approach. 

 **When should threat modeling be performed?** 

 Start threat modeling as early as possible in the lifecycle of your workload. This gives you better flexibility on what to do with the threats you have identified. Much like software bugs, the earlier you identify threats, the more cost effective it is to address them. A threat model is a living document and should continue to evolve as your workloads change. Revisit your threat models over time, including when there is a major change, a change in the threat landscape, or when you adopt a new feature or service. 

### Implementation steps
<a name="implementation-steps"></a>

 **How can we perform threat modeling?** 

 There are many different ways to perform threat modeling. Much like programming languages, there are advantages and disadvantages to each, and you should choose the way that works best for you. One approach is to start with [Shostack’s 4 Question Frame for Threat Modeling](https://github.com/adamshostack/4QuestionFrame), which poses open-ended questions to provide structure to your threat modeling exercise: 

1.  **What are we working on?** 

    The purpose of this question is to help you understand and agree upon the system you are building and the details about that system that are relevant to security. Creating a model or diagram is the most popular way to answer this question, as it helps you to visualize what you are building, for example, using a [data flow diagram](https://en.wikipedia.org/wiki/Data-flow_diagram). Writing down assumptions and important details about your system also helps you define what is in scope. This allows everyone contributing to the threat model to focus on the same thing, and avoid time-consuming detours into out-of-scope topics (including out-of-date versions of your system). For example, if you are building a web application, it is probably not worth your time threat modeling the operating system trusted boot sequence for browser clients, as you have no ability to affect this with your design. 

1.  **What can go wrong?** 

    This is where you identify threats to your system. Threats are accidental or intentional actions or events that have unwanted impacts and could affect the security of your system. Without a clear understanding of what could go wrong, you have no way of doing anything about it. 

    There is no canonical list of what can go wrong. Creating this list requires brainstorming and collaboration between all of the individuals within your team and [relevant personas involved](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/#tips) in the threat modeling exercise. You can aid your brainstorming by using a model for identifying threats, such as [STRIDE](https://en.wikipedia.org/wiki/STRIDE_(security)), which suggests different categories to evaluate: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of privilege. In addition, you might want to aid the brainstorming by reviewing existing lists and research for inspiration, including the [OWASP Top 10](https://owasp.org/www-project-top-ten/), [HiTrust Threat Catalog](https://hitrustalliance.net/hitrust-threat-catalogue/), and your organization’s own threat catalog. 

1.  **What are we going to do about it?** 

    As was the case with the previous question, there is no canonical list of all possible mitigations. The inputs into this step are the identified threats, actors, and areas of improvement from the previous step. 

    Security and compliance is a [shared responsibility between you and AWS](https://aws.amazon.com/compliance/shared-responsibility-model/). It’s important to understand that when you ask “What are we going to do about it?”, that you are also asking “Who is responsible for doing something about it?”. Understanding the balance of responsibilities between you and AWS helps you scope your threat modeling exercise to the mitigations that are under your control, which are typically a combination of AWS service configuration options and your own system-specific mitigations. 

    For the AWS portion of the shared responsibility, you will find that [AWS services are in-scope of many compliance programs](https://aws.amazon.com/compliance/services-in-scope/). These programs help you to understand the robust controls in place at AWS to maintain security and compliance of the cloud. The audit reports from these programs are available for download for AWS customers from [AWS Artifact](https://aws.amazon.com/artifact/). 

    Regardless of which AWS services you are using, there’s always an element of customer responsibility, and mitigations aligned to these responsibilities should be included in your threat model. For security control mitigations for the AWS services themselves, you want to consider implementing security controls across domains, including domains such as identity and access management (authentication and authorization), data protection (at rest and in transit), infrastructure security, logging, and monitoring. The documentation for each AWS service has a [dedicated security chapter](https://docs.aws.amazon.com/security/) that provides guidance on the security controls to consider as mitigations. Importantly, consider the code that you are writing and its code dependencies, and think about the controls that you could put in place to address those threats. These controls could be things such as [input validation](https://cheatsheetseries.owasp.org/cheatsheets/Input_Validation_Cheat_Sheet.html), [session handling](https://owasp.org/www-project-mobile-top-10/2014-risks/m9-improper-session-handling), and [bounds handling](https://owasp.org/www-community/vulnerabilities/Buffer_Overflow). Often, the majority of vulnerabilities are introduced in custom code, so focus on this area. 

1.  **Did we do a good job?** 

    The aim is for your team and organization to improve both the quality of threat models and the velocity at which you are performing threat modeling over time. These improvements come from a combination of practice, learning, teaching, and reviewing. To go deeper and get hands on, it’s recommended that you and your team complete the [Threat modeling the right way for builders training course](https://explore.skillbuilder.aws/learn/course/external/view/elearning/13274/threat-modeling-the-right-way-for-builders-workshop) or [workshop](https://catalog.workshops.aws/threatmodel/en-US). In addition, if you are looking for guidance on how to integrate threat modeling into your organization’s application development lifecycle, see the [How to approach threat modeling](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/) post on the AWS Security Blog. 

 **Threat Composer** 

 To aid and guide you in performing threat modeling, consider using the [Threat Composer](https://github.com/awslabs/threat-composer#threat-composer) tool, which aims to reduce your time-to-value when threat modeling. The tool helps you do the following: 
+  Write useful threat statements aligned to [threat grammar](https://catalog.workshops.aws/threatmodel/en-US/what-can-go-wrong/threat-grammar) that work in a natural non-linear workflow 
+  Generate a human-readable threat model 
+  Generate a machine-readable threat model to allow you to treat threat models as code 
+  Help you to quickly identify areas of quality and coverage improvement using the Insights Dashboard 

 For further reference, visit Threat Composer and switch to the system-defined **Example Workspace**. 
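As an illustration of a machine-readable threat register, the sketch below records threats with a STRIDE category and a simple likelihood-times-impact priority. The schema, scoring scale, and example threats are assumptions for illustration only, not Threat Composer's actual format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Threat:
    # Statement loosely follows threat grammar: actor, action, impact.
    statement: str
    stride: str       # Spoofing, Tampering, Repudiation, InfoDisclosure, DoS, or EoP
    likelihood: int   # 1 (rare) .. 3 (likely) -- illustrative scale
    impact: int       # 1 (low)  .. 3 (high)  -- illustrative scale
    mitigation: str

    @property
    def priority(self) -> int:
        return self.likelihood * self.impact

# A small illustrative register; a real one evolves with the workload.
register = [
    Threat("An external actor with stolen credentials reads customer records",
           "InfoDisclosure", 2, 3, "Require MFA; encrypt data at rest"),
    Threat("An internal actor tampers with build artifacts in the pipeline",
           "Tampering", 1, 3, "Sign artifacts; restrict pipeline write access"),
]

# Address the highest-priority threats first.
register.sort(key=lambda t: t.priority, reverse=True)

# A machine-readable export that can be versioned alongside the workload code.
export = json.dumps([asdict(t) for t in register], indent=2)
```

Keeping the register in source control next to the workload makes it reviewable in pull requests and easy to revisit when the design or threat landscape changes.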

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC01-BP03 Identify and validate control objectives](sec_securely_operate_control_objectives.md) 
+  [SEC01-BP04 Keep up-to-date with security threats](sec_securely_operate_updated_threats.md) 
+  [SEC01-BP05 Keep up-to-date with security recommendations](sec_securely_operate_updated_recommendations.md) 
+  [SEC01-BP08 Evaluate and implement new security services and features regularly](sec_securely_operate_implement_services_features.md) 

 **Related documents:** 
+  [How to approach threat modeling](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/) (AWS Security Blog) 
+ [NIST: Guide to Data-Centric System Threat Modeling](https://csrc.nist.gov/publications/detail/sp/800-154/draft)

 **Related videos:** 
+ [AWS Summit ANZ 2021 - How to approach threat modelling ](https://www.youtube.com/watch?v=GuhIefIGeuA)
+ [AWS Summit ANZ 2022 - Scaling security – Optimise for fast and secure delivery ](https://www.youtube.com/watch?v=DjNPihdWHeA)

 **Related training:** 
+ [ Threat modeling the right way for builders – AWS Skill Builder virtual self-paced training ](https://explore.skillbuilder.aws/learn/course/external/view/elearning/13274/threat-modeling-the-right-way-for-builders-workshop)
+ [ Threat modeling the right way for builders – AWS Workshop ](https://catalog.workshops.aws/threatmodel)

 **Related tools:** 
+  [Threat Composer](https://github.com/awslabs/threat-composer#threat-composer) 

# SEC01-BP08 Evaluate and implement new security services and features regularly
<a name="sec_securely_operate_implement_services_features"></a>

 Evaluate and implement security services and features from AWS and AWS Partners that allow you to evolve the security posture of your workload. The AWS Security Blog highlights new AWS services and features, implementation guides, and general security guidance. [What's New with AWS?](https://aws.amazon.com/new) is a great way to stay up to date with all new AWS features, services, and announcements. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Plan regular reviews: Create a calendar of review activities that includes compliance requirements, evaluation of new AWS security features and services, and staying up-to-date with industry news. 
+  Discover AWS services and features: Discover the security features that are available for the services that you are using, and review new features as they are released. 
  + [AWS security blog](https://aws.amazon.com/blogs/security/) 
  + [AWS security bulletins ](https://aws.amazon.com/security/security-bulletins/)
  +  [AWS service documentation ](https://aws.amazon.com/documentation/)
+  Define an AWS service onboarding process: Define processes for onboarding new AWS services, including how you evaluate them for functionality and against the compliance requirements for your workload. 
+  Test new services and features: Test new services and features as they are released in a non-production environment that closely replicates your production one. 
+  Implement other defense mechanisms: Implement automated mechanisms to defend your workload, and explore the available options. 
  +  [Remediating non-compliant AWS resources by AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/remediation.html)

## Resources
<a name="resources"></a>

 **Related videos:** 
+  [Security Best Practices the Well-Architected Way ](https://youtu.be/u6BCVkXkPnM)

# Identity and access management
<a name="a-identity-and-access-management"></a>

**Topics**
+ [SEC 2. How do you manage authentication for people and machines?](sec-02.md)
+ [SEC 3. How do you manage permissions for people and machines?](sec-03.md)

# SEC 2. How do you manage authentication for people and machines?
<a name="sec-02"></a>

 There are two types of identities you must manage when operating secure AWS workloads. Understanding which type of identity you must manage and grant access to helps you verify that the right identities have access to the right resources under the right conditions. 

Human Identities: Your administrators, developers, operators, and end users require an identity to access your AWS environments and applications. These are members of your organization, or external users with whom you collaborate, and who interact with your AWS resources via a web browser, client application, or interactive command line tools. 

Machine Identities: Your service applications, operational tools, and workloads require an identity to make requests to AWS services, for example, to read data. These identities include machines running in your AWS environment such as Amazon EC2 instances or AWS Lambda functions. You may also manage machine identities for external parties who need access. Additionally, you may also have machines outside of AWS that need access to your AWS environment. 

**Topics**
+ [SEC02-BP01 Use strong sign-in mechanisms](sec_identities_enforce_mechanisms.md)
+ [SEC02-BP02 Use temporary credentials](sec_identities_unique.md)
+ [SEC02-BP03 Store and use secrets securely](sec_identities_secrets.md)
+ [SEC02-BP04 Rely on a centralized identity provider](sec_identities_identity_provider.md)
+ [SEC02-BP05 Audit and rotate credentials periodically](sec_identities_audit.md)
+ [SEC02-BP06 Leverage user groups and attributes](sec_identities_groups_attributes.md)

# SEC02-BP01 Use strong sign-in mechanisms
<a name="sec_identities_enforce_mechanisms"></a>

Sign-ins (authentication using sign-in credentials) can present risks when not using mechanisms like multi-factor authentication (MFA), especially in situations where sign-in credentials have been inadvertently disclosed or are easily guessed. Use strong sign-in mechanisms to reduce these risks by requiring MFA and strong password policies. 

 **Desired outcome:** Reduce the risks of unintended access to credentials in AWS by using strong sign-in mechanisms for [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) users, the [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html), [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) (successor to AWS Single Sign-On), and third-party identity providers. This means requiring MFA, enforcing strong password policies, and detecting anomalous login behavior. 

 **Common anti-patterns:** 
+  Not enforcing a strong password policy for your identities including complex passwords and MFA. 
+  Sharing the same credentials among different users. 
+  Not using detective controls for suspicious sign-ins. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 There are many ways for human identities to sign in to AWS. It is an AWS best practice to rely on a centralized identity provider using federation (direct federation or AWS IAM Identity Center) when authenticating to AWS. In that case, establish a secure sign-in process with your identity provider or Microsoft Active Directory. 

 When you first open an AWS account, you begin with an AWS account root user. You should only use the account root user to set up access for your users (and for [tasks that require the root user](https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html)). It’s important to turn on MFA for the account root user immediately after opening your AWS account and to secure the root user using the AWS [best practice guide](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_securely_operate_aws_account.html). 

 If you create users in AWS IAM Identity Center, then secure the sign-in process in that service. For consumer identities, you can use [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/index.html) and secure the sign-in process in that service, or by using one of the identity providers that Amazon Cognito user pools supports. 

 If you are using [AWS Identity and Access Management (IAM)](https://aws.amazon.com/iam/) users, you would secure the sign-in process using IAM. 

 Regardless of the sign-in method, it’s critical to enforce a strong sign-in policy. 

 **Implementation steps** 

 The following are general strong sign-in recommendations. The actual settings you configure should be set by your company policy or use a standard like [NIST 800-63](https://pages.nist.gov/800-63-3/sp800-63b.html). 
+  Require MFA. It’s an [IAM best practice to require MFA](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users) for human identities and workloads. Turning on MFA provides an additional layer of security requiring that users provide sign-in credentials and a one-time password (OTP) or a cryptographically verified and generated string from a hardware device. 
+  Enforce a minimum password length, which is a primary factor in password strength. 
+  Enforce password complexity to make passwords more difficult to guess. 
+  Allow users to change their own passwords. 
+  Create individual identities instead of shared credentials. By creating individual identities, you can give each user a unique set of security credentials. Individual users provide the ability to audit each user’s activity. 
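To make the password recommendations above concrete, a sketch of a policy check follows. The minimum length and required number of character classes here are illustrative values, not a standard; set them from your company policy or NIST 800-63.

```python
import string

# Illustrative policy values -- replace with your own policy or NIST 800-63 guidance.
MIN_LENGTH = 14
REQUIRED_CLASSES = 3  # of: lowercase, uppercase, digits, symbols

def meets_policy(password: str) -> bool:
    """Check a candidate password against minimum length and complexity."""
    if len(password) < MIN_LENGTH:
        return False
    classes = [
        any(c.islower() for c in password),
        any(c.isupper() for c in password),
        any(c.isdigit() for c in password),
        any(c in string.punctuation for c in password),
    ]
    return sum(classes) >= REQUIRED_CLASSES
```

Note that length is checked first, reflecting the guidance above that length is the primary factor in password strength; complexity rules only add on top of it.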

 IAM Identity Center recommendations: 
+  IAM Identity Center provides a predefined [password policy](https://docs.aws.amazon.com/singlesignon/latest/userguide/password-requirements.html) when using the default directory that establishes password length, complexity, and reuse requirements. 
+  [Turn on MFA](https://docs.aws.amazon.com/singlesignon/latest/userguide/mfa-enable-how-to.html) and configure the context-aware or always-on setting for MFA when the identity source is the default directory, AWS Managed Microsoft AD, or AD Connector. 
+  Allow users to [register their own MFA devices](https://docs.aws.amazon.com/singlesignon/latest/userguide/how-to-allow-user-registration.html). 

 Amazon Cognito user pools directory recommendations: 
+  Configure the [Password strength](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html) settings. 
+  [Require MFA](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-mfa.html) for users. 
+  Use the Amazon Cognito user pools [advanced security settings](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-advanced-security.html) for features like [adaptive authentication](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-settings-adaptive-authentication.html) which can block suspicious sign-ins. 

 IAM user recommendations: 
+  Ideally you are using IAM Identity Center or direct federation. However, you might have the need for IAM users. In that case, [set a password policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html) for IAM users. You can use the password policy to define requirements such as minimum length or whether the password requires non-alphabetic characters. 
+  Create an IAM policy to [enforce MFA sign-in](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_users-self-manage-mfa-and-creds.html#tutorial_mfa_step1) so that users are allowed to manage their own passwords and MFA devices. 
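The tutorial linked above uses a deny-unless-MFA pattern: deny every action except credential self-management when `aws:MultiFactorAuthPresent` is not true. The sketch below assembles a minimal statement of that shape; the `Sid` and the exact `NotAction` list are illustrative, and the full tutorial policy includes additional self-management actions.

```python
import json

# Minimal deny-unless-MFA statement modeled on the IAM tutorial pattern.
# Sid and NotAction list are illustrative; the real tutorial policy is longer.
deny_without_mfa = {
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": [
        "iam:ChangePassword",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "sts:GetSessionToken",
    ],
    "Resource": "*",
    # BoolIfExists treats a missing MFA context key the same as "false".
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
}

policy = {"Version": "2012-10-17", "Statement": [deny_without_mfa]}
policy_json = json.dumps(policy, indent=2)
```

With this attached, a user who signs in without MFA can still register an MFA device and change their password, but every other action is denied until they authenticate with MFA.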

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC02-BP03 Store and use secrets securely](sec_identities_secrets.md) 
+  [SEC02-BP04 Rely on a centralized identity provider](sec_identities_identity_provider.md) 
+  [SEC03-BP08 Share resources securely within your organization](sec_permissions_share_securely.md) 

 **Related documents:** 
+ [AWS IAM Identity Center Password Policy ](https://docs.aws.amazon.com/singlesignon/latest/userguide/password-requirements.html)
+ [ IAM user password policy ](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html)
+ [ Setting the AWS account root user password ](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html)
+ [ Amazon Cognito password policy ](https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-settings-policies.html)
+ [AWS credentials ](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html)
+ [ IAM security best practices ](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)

 **Related videos:** 
+  [Managing user permissions at scale with AWS IAM Identity Center](https://youtu.be/aEIqeFCcK7E) 
+  [Mastering identity at every layer of the cake](https://www.youtube.com/watch?v=vbjFjMNVEpc) 

# SEC02-BP02 Use temporary credentials
<a name="sec_identities_unique"></a>

 When doing any type of authentication, it’s best to use temporary credentials instead of long-term credentials to reduce or eliminate risks, such as credentials being inadvertently disclosed, shared, or stolen. 

**Desired outcome:** To reduce the risk of long-term credentials, use temporary credentials wherever possible for both human and machine identities. Long-term credentials create many risks; for example, they can be uploaded in code to public GitHub repositories. By using temporary credentials, you significantly reduce the chances of credentials becoming compromised. 

**Common anti-patterns:**
+  Developers using long-term access keys from IAM users rather than obtaining temporary credentials from the CLI using federation. 
+  Developers embedding long-term access keys in their code and uploading that code to public Git repositories. 
+  Developers embedding long-term access keys in mobile apps that are then made available in app stores. 
+  Users sharing long-term access keys with other users, or employees leaving the company with long-term access keys still in their possession. 
+  Using long-term access keys for machine identities when temporary credentials could be used. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Use temporary security credentials instead of long-term credentials for all AWS API and CLI requests. API and CLI requests to AWS services must, in nearly every case, be signed using [AWS access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html). These requests can be signed with either temporary or long-term credentials. The only time you should use long-term credentials, also known as long-term access keys, is if you are using an [IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) or the [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html). When you federate to AWS or assume an [IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) through other methods, temporary credentials are generated. Even when you access the AWS Management Console using sign-in credentials, temporary credentials are generated for you to make calls to AWS services. There are few situations where you need long-term credentials and you can accomplish nearly all tasks using temporary credentials. 

 Avoiding the use of long-term credentials in favor of temporary credentials should go hand in hand with a strategy of reducing the usage of IAM users in favor of federation and IAM roles. While IAM users have been used for both human and machine identities in the past, we now recommend avoiding them because of the risks of using long-term access keys. 

### Implementation steps
<a name="implementation-steps"></a>

 For human identities like employees, administrators, developers, operators, and customers: 
+  You should [rely on a centralized identity provider](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_identity_provider.html) and [require human users to use federation with an identity provider to access AWS using temporary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-users-federation-idp). Federation for your users can be done either with [direct federation to each AWS account](https://aws.amazon.com/identity/federation/) or using [AWS IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/what-is.html) and the identity provider of your choice. Federation provides a number of advantages over using IAM users in addition to eliminating long-term credentials. Your users can also request temporary credentials from the command line for [direct federation](https://aws.amazon.com/blogs/security/how-to-implement-federated-api-and-cli-access-using-saml-2-0-and-ad-fs/) or by using [IAM Identity Center](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html). This means that there are few use cases that require IAM users or long-term credentials for your users. 
+  When granting third parties, such as software as a service (SaaS) providers, access to resources in your AWS account, you can use [cross-account roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html) and [resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html). 
+  If you need to grant applications for consumers or customers access to your AWS resources, you can use [Amazon Cognito identity pools](https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html) or [Amazon Cognito user pools](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) to provide temporary credentials. The permissions for the credentials are configured through IAM roles. You can also define a separate IAM role with limited permissions for guest users who are not authenticated. 

 For machine identities, you might need to use long-term credentials. In these cases, you should [require workloads to use temporary credentials with IAM roles to access AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#bp-workloads-use-roles). 
+  For [Amazon Elastic Compute Cloud](https://aws.amazon.com/pm/ec2/) (Amazon EC2), you can use [roles for Amazon EC2](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html). 
+  [AWS Lambda](https://aws.amazon.com/lambda/) allows you to configure a [Lambda execution role to grant the service permissions](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) to perform AWS actions using temporary credentials. There are many other similar models for AWS services to grant temporary credentials using IAM roles. 
+  For IoT devices, you can use the [AWS IoT Core credential provider](https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html) to request temporary credentials. 
+  For on-premises systems or systems that run outside of AWS that need access to AWS resources, you can use [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html). 

 There are scenarios where temporary credentials are not an option and you might need to use long-term credentials. In these situations, [audit and rotate credentials periodically](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_audit.html) and [rotate access keys regularly for use cases that require long-term credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials). Some examples that might require long-term credentials include WordPress plugins and third-party AWS clients. In situations where you must use long-term credentials, or for credentials other than AWS access keys, such as database logins, you can use a service that is designed to handle the management of secrets, such as [AWS Secrets Manager](https://aws.amazon.com/secrets-manager/). Secrets Manager makes it simple to manage, rotate, and securely store encrypted secrets using [supported services](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating.html). For more information about rotating long-term credentials, see [rotating access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey). 
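
Where an application must read a long-term secret at runtime, fetching it on every request is both slow and subject to API rate limits. AWS publishes client-side caching helpers for Secrets Manager; the underlying idea can be sketched (as an illustration, not the AWS library API — `fetch` here stands in for a call such as `GetSecretValue`) as a small TTL cache:

```python
import time


class SecretCache:
    """Minimal TTL cache around a secret-fetching callable.

    Illustrative sketch only, not the AWS Secrets Manager caching
    library API; `fetch` is assumed to wrap a GetSecretValue call.
    """

    def __init__(self, fetch, ttl_seconds=300, clock=time.monotonic):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._clock = clock
        self._cache = {}  # secret_id -> (value, fetched_at)

    def get(self, secret_id):
        entry = self._cache.get(secret_id)
        if entry is not None:
            value, fetched_at = entry
            if self._clock() - fetched_at < self._ttl:
                return value  # still fresh; avoid another API call
        value = self._fetch(secret_id)
        self._cache[secret_id] = (value, self._clock())
        return value
```

Because entries are refetched after the TTL expires, a secret rotated in Secrets Manager is picked up within one TTL window without restarting the application.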

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [SEC02-BP03 Store and use secrets securely](sec_identities_secrets.md) 
+ [SEC02-BP04 Rely on a centralized identity provider](sec_identities_identity_provider.md) 
+ [SEC03-BP08 Share resources securely within your organization](sec_permissions_share_securely.md) 

 **Related documents:** 
+  [Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) 
+  [AWS Credentials](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html) 
+  [IAM Security Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [IAM Roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) 
+  [IAM Identity Center](https://aws.amazon.com/iam/identity-center/) 
+  [Identity Providers and Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) 
+  [Rotating Access Keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey) 
+  [Security Partner Solutions: Access and Access Control](https://aws.amazon.com/security/partner-solutions/#access-control) 
+  [The AWS Account Root User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) 

 **Related videos:** 
+  [Managing user permissions at scale with AWS IAM Identity Center](https://youtu.be/aEIqeFCcK7E) 
+  [Mastering identity at every layer of the cake](https://www.youtube.com/watch?v=vbjFjMNVEpc) 

# SEC02-BP03 Store and use secrets securely
<a name="sec_identities_secrets"></a>

 A workload requires an automated capability to prove its identity to databases, resources, and third-party services. This is accomplished using secret access credentials, such as API access keys, passwords, and OAuth tokens. Using a purpose-built service to store, manage, and rotate these credentials helps reduce the likelihood that those credentials become compromised. 

**Desired outcome:** Implementing a mechanism for securely managing application credentials that achieves the following goals: 
+  Identifying what secrets are required for the workload. 
+  Reducing the number of long-term credentials required by replacing them with short-term credentials when possible. 
+  Establishing secure storage and automated rotation of remaining long-term credentials. 
+  Auditing access to secrets that exist in the workload. 
+  Continual monitoring to verify that no secrets are embedded in source code during the development process. 
+  Reducing the likelihood of credentials being inadvertently disclosed. 

**Common anti-patterns:**
+  Not rotating credentials. 
+  Storing long-term credentials in source code or configuration files. 
+  Storing credentials at rest unencrypted. 

 **Benefits of establishing this best practice:**
+  Secrets are stored encrypted at rest and in transit. 
+  Access to credentials is gated through an API (think of it as a *credential vending machine*). 
+  Access to a credential (both read and write) is audited and logged. 
+  Separation of concerns: credential rotation is performed by a separate component, which can be segregated from the rest of the architecture. 
+  Secrets are automatically distributed on-demand to software components and rotation occurs in a central location. 
+  Access to credentials can be controlled in a fine-grained manner. 

 **Level of risk exposed if this best practice is not established**: High 

## Implementation guidance
<a name="implementation-guidance"></a>

 In the past, credentials used to authenticate to databases and third-party APIs, along with tokens and other secrets, might have been embedded in source code or environment files. AWS provides several mechanisms to store these credentials securely, automatically rotate them, and audit their usage. 

 The best way to approach secrets management is to follow the guidance of remove, replace, and rotate. The most secure credential is one that you do not have to store, manage, or handle. There might be credentials that are no longer necessary to the functioning of the workload that can be safely removed. 

 For credentials that are still required for the proper functioning of the workload, there might be an opportunity to replace a long-term credential with a temporary or short-term credential. For example, instead of hard-coding an AWS secret access key, consider replacing that long-term credential with a temporary credential using IAM roles. 

 Some long-lived secrets might not be able to be removed or replaced. These secrets can be stored in a service such as [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html), where they can be centrally stored, managed, and rotated on a regular basis. 

 An audit of the workload’s source code and configuration files can reveal many types of credentials. The following table summarizes strategies for handling common types of credentials: 


|  Credential type  |  Description  |  Suggested strategy  | 
| --- | --- | --- | 
|  IAM access keys  |  AWS IAM access and secret keys used to assume IAM roles inside of a workload  |  Replace: Use [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios.html) assigned to the compute instances (such as [Amazon EC2](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html) or [AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html)) instead. For interoperability with third parties that require access to resources in your AWS account, ask if they support [AWS cross-account access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html). For mobile apps, consider using temporary credentials through [Amazon Cognito identity pools (federated identities)](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html). For workloads running outside of AWS, consider [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) or [AWS Systems Manager Hybrid Activations](https://docs.aws.amazon.com/systems-manager/latest/userguide/activations.html).  | 
|  SSH keys  |  Secure Shell private keys used to log into Linux EC2 instances, manually or as part of an automated process  |  Replace: Use [AWS Systems Manager](https://aws.amazon.com/blogs/mt/vr-beneficios-session-manager/) or [EC2 Instance Connect](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Connect-using-EC2-Instance-Connect.html) to provide programmatic and human access to EC2 instances using IAM roles.  | 
|  Application and database credentials  |  Passwords – plain text string  |  Rotate: Store credentials in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) and establish automated rotation if possible.  | 
|  Amazon RDS and Aurora Admin Database credentials  |  Passwords – plain text string  |  Replace: Use the [Secrets Manager integration with Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-secrets-manager.html) or [Amazon Aurora](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-secrets-manager.html). In addition, some RDS database types can use IAM roles instead of passwords for some use cases (for more detail, see [IAM database authentication](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html)).  | 
|  OAuth tokens  |  Secret tokens – plain text string  |  Rotate: Store tokens in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) and configure automated rotation.  | 
|  API tokens and keys  |  Secret tokens – plain text string  |  Rotate: Store in [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) and establish automated rotation if possible.  | 

 A common anti-pattern is embedding IAM access keys inside source code, configuration files, or mobile apps. When an IAM access key is required to communicate with an AWS service, use [temporary (short-term) security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html). These short-term credentials can be provided through [IAM roles for EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) instances, [execution roles](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) for Lambda functions, [Cognito IAM roles](https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html) for mobile user access, and [IoT Core policies](https://docs.aws.amazon.com/iot/latest/developerguide/iot-policies.html) for IoT devices. When interfacing with third parties, prefer [delegating access to an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) with the necessary access to your account’s resources rather than configuring an IAM user and sending the third party the secret access key for that user. 
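
As an illustration of the third-party delegation pattern above, a role's trust policy can require the external party to present an agreed external ID when assuming the role. The account ID and external ID below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

The external ID helps mitigate the confused deputy problem: the third party can only assume the role on behalf of the customer who configured that ID.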

 There are many cases where the workload requires the storage of secrets necessary to interoperate with other services and resources. [AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) is purpose built to securely manage these credentials, as well as the storage, use, and rotation of API tokens, passwords, and other credentials. 

 AWS Secrets Manager provides five key capabilities to ensure the secure storage and handling of sensitive credentials: [encryption at rest](https://docs.aws.amazon.com/secretsmanager/latest/userguide/security-encryption.html), [encryption in transit](https://docs.aws.amazon.com/secretsmanager/latest/userguide/data-protection.html), [comprehensive auditing](https://docs.aws.amazon.com/secretsmanager/latest/userguide/monitoring.html), [fine-grained access control](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access.html), and [extensible credential rotation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html). Other secret management services from AWS Partners or locally developed solutions that provide similar capabilities and assurances are also acceptable. 

 **Implementation steps** 

1.  Identify code paths containing hard-coded credentials using automated tools such as [Amazon CodeGuru](https://aws.amazon.com/codeguru/features/). 

   1.  Use Amazon CodeGuru to scan your code repositories. Once the review is complete, filter on `Type=Secrets` in CodeGuru to find problematic lines of code. 

1.  Identify credentials that can be removed or replaced. 

   1.  Identify credentials no longer needed and mark for removal. 

   1.  For AWS secret access keys embedded in source code, replace them with IAM roles associated with the necessary resources. If part of your workload is outside AWS but requires IAM credentials to access AWS resources, consider [IAM Roles Anywhere](https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/) or [AWS Systems Manager Hybrid Activations](https://docs.aws.amazon.com/systems-manager/latest/userguide/activations.html). 

1.  For other third-party, long-lived secrets that require the use of the rotate strategy, integrate Secrets Manager into your code to retrieve third-party secrets at runtime. 

   1.  The CodeGuru console can automatically [create a secret in Secrets Manager](https://aws.amazon.com/blogs/aws/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets/) using the discovered credentials. 

   1.  Integrate secret retrieval from Secrets Manager into your application code. 

      1.  Serverless Lambda functions can use a language-agnostic [Lambda extension](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets_lambda.html). 

      1.  For EC2 instances or containers, AWS provides example [client-side code for retrieving secrets from Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) in several popular programming languages. 

1.  Periodically review your code base and re-scan to verify no new secrets have been added to the code. 

   1.  Consider using a tool such as [git-secrets](https://github.com/awslabs/git-secrets) to prevent committing new secrets to your source code repository. 

1.  [Monitor Secrets Manager activity](https://docs.aws.amazon.com/secretsmanager/latest/userguide/monitoring.html) for indications of unexpected usage, inappropriate secret access, or attempts to delete secrets. 

1.  Reduce human exposure to credentials. Restrict access to read, write, and modify credentials to an IAM role dedicated for this purpose, and only provide access to assume the role to a small subset of operational users. 
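
Steps 1 and 4 above can be approximated with a simple scanner. The sketch below flags the well-known `AKIA`/`ASIA`-prefixed access key ID format; treat it as a heuristic complement to tools like CodeGuru and git-secrets, not a replacement:

```python
import re

# AWS access key IDs are 20 characters: a 4-character prefix such as
# "AKIA" (long-term keys) or "ASIA" (temporary keys) followed by
# 16 uppercase letters and digits.
ACCESS_KEY_ID = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")


def scan_for_access_keys(text):
    """Return (line_number, matched_key) pairs for suspected access key IDs."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for match in ACCESS_KEY_ID.finditer(line):
            findings.append((lineno, match.group(0)))
    return findings
```

A check like this can run as a pre-commit hook or CI step, failing the build when any finding is returned.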

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [SEC02-BP02 Use temporary credentials](sec_identities_unique.md)
+ [SEC02-BP05 Audit and rotate credentials periodically](sec_identities_audit.md)

 **Related documents:** 
+  [Getting Started with AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html) 
+  [Identity Providers and Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) 
+  [Amazon CodeGuru Introduces Secrets Detector](https://aws.amazon.com/blogs/aws/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets/) 
+  [How AWS Secrets Manager uses AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/services-secrets-manager.html) 
+  [Secret encryption and decryption in Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/security-encryption.html) 
+  [Secrets Manager blog entries](https://aws.amazon.com/blogs/security/tag/aws-secrets-manager/) 
+  [Amazon RDS announces integration with AWS Secrets Manager](https://aws.amazon.com/about-aws/whats-new/2022/12/amazon-rds-integration-aws-secrets-manager/) 

 **Related videos:** 
+  [Best Practices for Managing, Retrieving, and Rotating Secrets at Scale](https://youtu.be/qoxxRlwJKZ4) 
+  [Find Hard-Coded Secrets Using Amazon CodeGuru Secrets Detector](https://www.youtube.com/watch?v=ryK3PN--oJs) 
+  [Securing Secrets for Hybrid Workloads Using AWS Secrets Manager](https://www.youtube.com/watch?v=k1YWhogGVF8) 

 **Related workshops:** 
+  [Store, retrieve, and manage sensitive credentials in AWS Secrets Manager](https://catalog.us-east-1.prod.workshops.aws/workshops/92e466fd-bd95-4805-9f16-2df07450db42/en-US) 
+  [AWS Systems Manager Hybrid Activations](https://mng.workshop.aws/ssm/capability_hands-on_labs/hybridactivations.html) 

# SEC02-BP04 Rely on a centralized identity provider
<a name="sec_identities_identity_provider"></a>

 For workforce identities (employees and contractors), rely on an identity provider that allows you to manage identities in a centralized place. This makes it easier to manage access across multiple applications and systems, because you are creating, assigning, managing, revoking, and auditing access from a single location. 

 **Desired outcome:** You have a centralized identity provider where you centrally manage workforce users, authentication policies (such as requiring multi-factor authentication (MFA)), and authorization to systems and applications (such as assigning access based on a user's group membership or attributes). Your workforce users sign in to the central identity provider and federate (single sign-on) to internal and external applications, removing the need for users to remember multiple credentials. Your identity provider is integrated with your human resources (HR) systems so that personnel changes are automatically synchronized to your identity provider. For example, if someone leaves your organization, you can automatically revoke access to federated applications and systems (including AWS). You have enabled detailed audit logging in your identity provider and are monitoring these logs for unusual user behavior. 

 **Common anti-patterns:** 
+  You do not use federation and single sign-on. Your workforce users create separate user accounts and credentials in multiple applications and systems. 
+  You have not automated the lifecycle of identities for workforce users, such as by integrating your identity provider with your HR systems. When a user leaves your organization or changes roles, you follow a manual process to delete or update their records in multiple applications and systems. 

 **Benefits of establishing this best practice:** By using a centralized identity provider, you have a single place to manage workforce user identities and policies, the ability to assign users and groups access to applications, and the ability to monitor user sign-in activity. By integrating with your human resources (HR) systems, when a user changes roles, these changes are synchronized to the identity provider, and their assigned applications and permissions are updated automatically. When a user leaves your organization, their identity is automatically disabled in the identity provider, revoking their access to federated applications and systems. 

 **Level of risk exposed if this best practice is not established**: High 

## Implementation guidance
<a name="implementation-guidance"></a>

 **Guidance for workforce users accessing AWS** 

 Workforce users like employees and contractors in your organization may require access to AWS using the AWS Management Console or AWS Command Line Interface (AWS CLI) to perform their job functions. You can grant AWS access to your workforce users by federating from your centralized identity provider to AWS at two levels: direct federation to each AWS account or federating to multiple accounts in your [AWS organization](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_getting-started_concepts.html). 
+  To federate your workforce users directly with each AWS account, you can use a centralized identity provider to federate to [AWS Identity and Access Management](https://aws.amazon.com/iam/) in that account. The flexibility of IAM allows you to enable a separate [SAML 2.0](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_saml.html) or [OpenID Connect (OIDC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) identity provider for each AWS account and use federated user attributes for access control. Your workforce users use their web browser to sign in to the identity provider by providing their credentials (such as passwords and MFA token codes). The identity provider issues a SAML assertion to their browser, which is submitted to the AWS Management Console sign-in URL to sign the user in to the [AWS Management Console by assuming an IAM role](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html). Your users can also obtain temporary AWS API credentials for use in the [AWS CLI](https://aws.amazon.com/cli/) or [AWS SDKs](https://aws.amazon.com/developer/tools/) from [AWS STS](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) by [assuming the IAM role using a SAML assertion](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRoleWithSAML.html) from the identity provider. 
+  To federate your workforce users with multiple accounts in your AWS organization, you can use [AWS IAM Identity Center](https://aws.amazon.com/single-sign-on/) to centrally manage access for your workforce users to AWS accounts and applications. You enable Identity Center for your organization and configure your identity source. IAM Identity Center provides a default identity source directory which you can use to manage your users and groups. Alternatively, you can choose an external identity source by [connecting to your external identity provider](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-idp.html) using SAML 2.0 and [automatically provisioning](https://docs.aws.amazon.com/singlesignon/latest/userguide/provision-automatically.html) users and groups using SCIM, or by [connecting to your Microsoft AD directory](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-ad.html) using [Directory Service](https://aws.amazon.com/directoryservice/). Once an identity source is configured, you can assign users and groups access to AWS accounts by defining least-privilege policies in your [permission sets](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsetsconcept.html). Your workforce users can authenticate through your central identity provider to sign in to the [AWS access portal](https://docs.aws.amazon.com/singlesignon/latest/userguide/using-the-portal.html) and single sign-on to the AWS accounts and cloud applications assigned to them. 
Your users can configure the [AWS CLI v2](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html) to authenticate with Identity Center and get credentials to run AWS CLI commands. Identity Center also allows single sign-on access to AWS applications such as [Amazon SageMaker AI Studio](https://docs.aws.amazon.com/sagemaker/latest/dg/onboard-sso-users.html) and [AWS IoT SiteWise Monitor portals](https://docs.aws.amazon.com/iot-sitewise/latest/userguide/monitor-getting-started.html). 

 After you follow the preceding guidance, your workforce users will no longer need to use IAM users and groups for normal operations when managing workloads on AWS. Instead, your users and groups are managed outside of AWS and users are able to access AWS resources as a *federated identity*. Federated identities use the groups defined by your centralized identity provider. You should identify and remove IAM groups, IAM users, and long-lived user credentials (passwords and access keys) that are no longer needed in your AWS accounts. You can [find unused credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html) using [IAM credential reports](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html), then [delete the corresponding IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_manage.html) and [delete IAM groups](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_groups_manage_delete.html). You can apply a [Service Control Policy (SCP)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) to your organization that prevents the creation of new IAM users and groups, helping enforce that access to AWS occurs through federated identities. 
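
As a sketch of such an SCP, a deny statement can block creation of new IAM users, groups, and long-term credentials. The action list below is illustrative; scope it to your needs and carve out exceptions (for example, for break-glass or automation roles) as required:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCreatingIAMUsersAndGroups",
      "Effect": "Deny",
      "Action": [
        "iam:CreateUser",
        "iam:CreateGroup",
        "iam:CreateAccessKey",
        "iam:CreateLoginProfile"
      ],
      "Resource": "*"
    }
  ]
}
```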

 **Guidance for users of your applications** 

 You can manage the identities of users of your applications, such as a mobile app, using [Amazon Cognito](https://aws.amazon.com/cognito/) as your centralized identity provider. Amazon Cognito enables authentication, authorization, and user management for your web and mobile apps. Amazon Cognito provides an identity store that scales to millions of users, supports social and enterprise identity federation, and offers advanced security features to help protect your users and business. You can integrate your custom web or mobile application with Amazon Cognito to add user authentication and access control to your applications in minutes. Built on open identity standards such as SAML and OpenID Connect (OIDC), Amazon Cognito supports various compliance regulations and integrates with frontend and backend development resources. 

### Implementation steps
<a name="implementation-steps"></a>

 **Steps for workforce users accessing AWS** 
+  Federate your workforce users to AWS from a centralized identity provider, using one of the following approaches: 
  +  Use IAM Identity Center to enable single sign-on to multiple AWS accounts in your AWS organization by federating with your identity provider. 
  +  Use IAM to connect your identity provider directly to each AWS account, enabling federated fine-grained access. 
+  Identify and remove IAM users and groups that are replaced by federated identities. 

 **Steps for users of your applications** 
+  Use Amazon Cognito as a centralized identity provider for your applications. 
+  Integrate your custom applications with Amazon Cognito using OpenID Connect and OAuth. You can develop your custom applications using the Amplify libraries that provide simple interfaces to integrate with a variety of AWS services, such as Amazon Cognito for authentication. 

## Resources
<a name="resources"></a>

 **Related Well-Architected best practices:** 
+  [SEC02-BP06 Leverage user groups and attributes](sec_identities_groups_attributes.md) 
+  [SEC03-BP02 Grant least privilege access](sec_permissions_least_privileges.md) 
+  [SEC03-BP06 Manage access based on lifecycle](sec_permissions_lifecycle.md) 

 **Related documents:** 
+  [Identity federation in AWS](https://aws.amazon.com/identity/federation/) 
+  [Security best practices in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [AWS Identity and Access Management Best practices](https://aws.amazon.com/iam/resources/best-practices/) 
+  [Getting started with IAM Identity Center delegated administration](https://aws.amazon.com/blogs/security/getting-started-with-aws-sso-delegated-administration/) 
+  [How to use customer managed policies in IAM Identity Center for advanced use cases](https://aws.amazon.com/blogs/security/how-to-use-customer-managed-policies-in-aws-single-sign-on-for-advanced-use-cases/) 
+  [AWS CLI v2: IAM Identity Center credential provider](https://docs.aws.amazon.com/sdkref/latest/guide/feature-sso-credentials.html) 

 **Related videos:** 
+  [AWS re:Inforce 2022 - AWS Identity and Access Management (IAM) deep dive](https://youtu.be/YMj33ToS8cI) 
+  [AWS re:Invent 2022 - Simplify your existing workforce access with IAM Identity Center](https://youtu.be/TvQN4OdR_0Y) 
+  [AWS re:Invent 2018: Mastering Identity at Every Layer of the Cake](https://youtu.be/vbjFjMNVEpc) 

 **Related examples:** 
+  [Workshop: Using AWS IAM Identity Center to achieve strong identity management](https://catalog.us-east-1.prod.workshops.aws/workshops/590f8439-42c7-46a1-8e70-28ee41498b3a/en-US) 
+  [Workshop: Serverless identity](https://identity-round-robin.awssecworkshops.com/serverless/) 

 **Related tools:** 
+  [AWS Security Competency Partners: Identity and Access Management](https://aws.amazon.com/security/partner-solutions/) 
+  [saml2aws](https://github.com/Versent/saml2aws) 

# SEC02-BP05 Audit and rotate credentials periodically
<a name="sec_identities_audit"></a>

Audit and rotate credentials periodically to limit how long the credentials can be used to access your resources. Long-term credentials create many risks, and these risks can be reduced by rotating long-term credentials regularly.

 **Desired outcome:** Implement credential rotation to help reduce the risks associated with long-term credential usage. Regularly audit and remediate non-compliance with credential rotation policies. 

 **Common anti-patterns:** 
+  Not auditing credential use. 
+  Using long-term credentials unnecessarily. 
+  Using long-term credentials and not rotating them regularly. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 When you cannot rely on temporary credentials and require long-term credentials, audit your credentials to verify that defined controls such as multi-factor authentication (MFA) are enforced, that credentials are rotated regularly, and that they have the appropriate level of access. 

 Periodic validation, preferably through an automated tool, is necessary to verify that the correct controls are enforced. For human identities, you should require users to change their passwords periodically and retire access keys in favor of temporary credentials. As you move from AWS Identity and Access Management (IAM) users to centralized identities, you can [generate a credential report](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html) to audit your users. 

 We also recommend that you enforce and monitor MFA in your identity provider. You can set up [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html), or use [AWS Security Hub CSPM Security Standards](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp-controls.html#fsbp-iam-3), to monitor whether users have configured MFA. Consider using IAM Roles Anywhere to provide temporary credentials for machine identities. In situations where using IAM roles and temporary credentials is not possible, frequent auditing and rotation of access keys is necessary. 

 **Implementation steps** 
+  **Regularly audit credentials:** Auditing the identities that are configured in your identity provider and IAM helps verify that only authorized identities have access to your workload. Such identities can include, but are not limited to, IAM users, AWS IAM Identity Center users, Active Directory users, or users in a different upstream identity provider. For example, remove people that leave the organization, and remove cross-account roles that are no longer required. Have a process in place to periodically audit permissions to the services accessed by an IAM entity. This helps you identify the policies you need to modify to remove any unused permissions. Use credential reports and [AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) to audit IAM credentials and permissions. You can use [Amazon CloudWatch to set up alarms for specific API calls](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/welcome.html) called within your AWS environment. [Amazon GuardDuty can also alert you to unexpected activity](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-iam.html), which might indicate overly permissive access or unintended access to IAM credentials. 
+  **Rotate credentials regularly:** When you are unable to use temporary credentials, rotate long-term IAM access keys regularly (maximum every 90 days). If an access key is unintentionally disclosed without your knowledge, this limits how long the credentials can be used to access your resources. For information about rotating access keys for IAM users, see [Rotating access keys](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey). 
+  **Review IAM permissions:** To improve the security of your AWS account, regularly review and monitor each of your IAM policies. Verify that policies adhere to the principle of least privilege. 
+  **Consider automating IAM resource creation and updates:** IAM Identity Center automates many IAM tasks, such as role and policy management. Alternatively, AWS CloudFormation can be used to automate the deployment of IAM resources, including roles and policies, to reduce the chance of human error because the templates can be verified and version controlled. 
+  **Use IAM Roles Anywhere to replace IAM users for machine identities:** IAM Roles Anywhere allows you to use roles in areas that you traditionally could not, such as on-premises servers. IAM Roles Anywhere uses a trusted X.509 certificate to authenticate to AWS and receive temporary credentials. Using IAM Roles Anywhere avoids the need to rotate these credentials, as long-term credentials are no longer stored in your on-premises environment. Note that you must still monitor and rotate the X.509 certificate as it approaches expiration. 
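The rotation check from the steps above can be sketched as a small helper: given the `AccessKeyMetadata` list shape returned by boto3's `iam.list_access_keys()`, it flags active keys older than the 90-day maximum. The key IDs and dates are made up, and actually rotating a key (`create_access_key`, then `update_access_key` or `delete_access_key` on the old one) is not shown.

```python
from datetime import datetime, timedelta, timezone

def keys_needing_rotation(access_keys, now=None, max_age_days=90):
    """Return IDs of active keys whose CreateDate exceeds max_age_days.

    access_keys has the shape of the AccessKeyMetadata list returned by
    boto3's iam.list_access_keys(): dicts with AccessKeyId, Status,
    and CreateDate.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in access_keys
            if k["Status"] == "Active" and k["CreateDate"] < cutoff]

# Illustrative metadata: one stale key, one recently created key.
keys = [
    {"AccessKeyId": "AKIAEXAMPLE1", "Status": "Active",
     "CreateDate": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"AccessKeyId": "AKIAEXAMPLE2", "Status": "Active",
     "CreateDate": datetime(2023, 5, 20, tzinfo=timezone.utc)},
]
print(keys_needing_rotation(keys, now=datetime(2023, 6, 1, tzinfo=timezone.utc)))
# → ['AKIAEXAMPLE1']
```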

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC02-BP02 Use temporary credentials](sec_identities_unique.md) 
+  [SEC02-BP03 Store and use secrets securely](sec_identities_secrets.md) 

 **Related documents:** 
+  [Getting Started with AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html) 
+  [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [Identity Providers and Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) 
+  [Security Partner Solutions: Access and Access Control](https://aws.amazon.com/security/partner-solutions/#access-control) 
+  [Temporary Security Credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) 
+ [ Getting credential reports for your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html)

 **Related videos:** 
+  [Best Practices for Managing, Retrieving, and Rotating Secrets at Scale](https://youtu.be/qoxxRlwJKZ4) 
+  [Managing user permissions at scale with AWS IAM Identity Center](https://youtu.be/aEIqeFCcK7E) 
+  [Mastering identity at every layer of the cake](https://www.youtube.com/watch?v=vbjFjMNVEpc) 

 **Related examples:** 
+ [ Well-Architected Lab - Automated IAM User Cleanup ](https://wellarchitectedlabs.com/security/200_labs/200_automated_iam_user_cleanup/)
+ [ Well-Architected Lab - Automated Deployment of IAM Groups and Roles ](https://wellarchitectedlabs.com/security/200_labs/200_automated_deployment_of_iam_groups_and_roles/)

# SEC02-BP06 Leverage user groups and attributes
<a name="sec_identities_groups_attributes"></a>

 As the number of users you manage grows, you will need to determine ways to organize them so that you can manage them at scale. Place users with common security requirements in groups defined by your identity provider, and put mechanisms in place to ensure that user attributes that may be used for access control (for example, department or location) are correct and updated. Use these groups and attributes to control access, rather than individual users. This allows you to manage access centrally by changing a user’s group membership or attributes once with a [permission set](https://docs.aws.amazon.com/singlesignon/latest/userguide/permissionsets.html), rather than updating many individual policies when a user’s access needs change.

You can use AWS IAM Identity Center (IAM Identity Center) to manage user groups and attributes. IAM Identity Center supports most commonly used attributes, whether they are entered manually during user creation or automatically provisioned using a synchronization engine, such as one defined in the System for Cross-Domain Identity Management (SCIM) specification. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  If you are using AWS IAM Identity Center (IAM Identity Center), configure groups: IAM Identity Center lets you configure groups of users and assign each group the desired level of permissions. 
  +  [AWS Single Sign-On - Manage Identities](https://docs.aws.amazon.com/singlesignon/latest/userguide/manage-your-identity-source-sso.html) 
+  Learn about attribute-based access control (ABAC): ABAC is an authorization strategy that defines permissions based on attributes. 
  +  [What Is ABAC for AWS?](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) 
  +  [Lab: IAM Tag Based Access Control for EC2](https://www.wellarchitectedlabs.com/Security/300_IAM_Tag_Based_Access_Control_for_EC2/README.html) 
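As an illustration of the ABAC model referenced above, this sketch simulates the effect of a policy condition that matches a principal tag against a resource tag. The `Project` tag key and the tag values are hypothetical; real enforcement happens in IAM, not in application code.

```python
def abac_allows(principal_tags, resource_tags, tag_key="Project"):
    """Simulate the effect of an ABAC policy condition such as
    "StringEquals": {"aws:ResourceTag/Project": "${aws:PrincipalTag/Project}"}.
    """
    principal_value = principal_tags.get(tag_key)
    return principal_value is not None and resource_tags.get(tag_key) == principal_value

# A developer tagged Project=web reaches web-tagged resources, nothing else.
dev = {"Project": "web", "Department": "engineering"}
print(abac_allows(dev, {"Project": "web"}))   # → True
print(abac_allows(dev, {"Project": "data"}))  # → False
```

Because the decision depends only on attributes, moving a user between projects means updating one tag rather than rewriting policies.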

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Getting Started with AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/getting-started.html) 
+  [IAM Best Practices](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html) 
+  [Identity Providers and Federation](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers.html) 
+  [The AWS Account Root User](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) 

 **Related videos:** 
+  [Best Practices for Managing, Retrieving, and Rotating Secrets at Scale](https://youtu.be/qoxxRlwJKZ4) 
+  [Managing user permissions at scale with AWS IAM Identity Center](https://youtu.be/aEIqeFCcK7E) 
+  [Mastering identity at every layer of the cake](https://www.youtube.com/watch?v=vbjFjMNVEpc) 

 **Related examples:** 
+  [Lab: IAM Tag Based Access Control for EC2](https://www.wellarchitectedlabs.com/Security/300_IAM_Tag_Based_Access_Control_for_EC2/README.html) 

# SEC 3. How do you manage permissions for people and machines?
<a name="sec-03"></a>

 Manage permissions to control access to people and machine identities that require access to AWS and your workload. Permissions control who can access what, and under what conditions. 

**Topics**
+ [SEC03-BP01 Define access requirements](sec_permissions_define.md)
+ [SEC03-BP02 Grant least privilege access](sec_permissions_least_privileges.md)
+ [SEC03-BP03 Establish emergency access process](sec_permissions_emergency_process.md)
+ [SEC03-BP04 Reduce permissions continuously](sec_permissions_continuous_reduction.md)
+ [SEC03-BP05 Define permission guardrails for your organization](sec_permissions_define_guardrails.md)
+ [SEC03-BP06 Manage access based on lifecycle](sec_permissions_lifecycle.md)
+ [SEC03-BP07 Analyze public and cross-account access](sec_permissions_analyze_cross_account.md)
+ [SEC03-BP08 Share resources securely within your organization](sec_permissions_share_securely.md)
+ [SEC03-BP09 Share resources securely with a third party](sec_permissions_share_securely_third_party.md)

# SEC03-BP01 Define access requirements
<a name="sec_permissions_define"></a>

Each component or resource of your workload needs to be accessed by administrators, end users, or other components. Have a clear definition of who or what should have access to each component, and choose the appropriate identity type and method of authentication and authorization.

 **Common anti-patterns:** 
+ Hard-coding or storing secrets in your application. 
+ Granting custom permissions for each user. 
+ Using long-lived credentials. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

Regular access to AWS accounts within the organization should be provided using [federated access](https://aws.amazon.com/identity/federation/) or a centralized identity provider. You should also centralize your identity management and ensure that there is an established practice to integrate AWS access to your employee access lifecycle. For example, when an employee changes to a job role with a different access level, their group membership should also change to reflect their new access requirements.

 When defining access requirements for non-human identities, determine which applications and components need access and how permissions are granted. Using IAM roles built with the least privilege access model is a recommended approach. [AWS Managed policies](https://docs.aws.amazon.com/singlesignon/latest/userguide/security-iam-awsmanpol.html) provide predefined IAM policies that cover most common use cases.

AWS services, such as [AWS Secrets Manager](https://aws.amazon.com/blogs/security/identify-arrange-manage-secrets-easily-using-enhanced-search-in-aws-secrets-manager/) and [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html), can help decouple secrets from the application or workload securely in cases where it's not feasible to use IAM roles. In Secrets Manager, you can establish automatic rotation for your credentials. You can use Systems Manager to reference parameters in your scripts, commands, SSM documents, configuration, and automation workflows by using the unique name that you specified when you created the parameter.
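As a sketch of decoupling secrets from application code, the following helper parses the response shape returned by Secrets Manager's `GetSecretValue`. The field names inside the secret (`username`, `password`) and the secret ID shown in the comment are illustrative, not part of any fixed schema.

```python
import json

def parse_db_secret(get_secret_value_response):
    """Extract connection fields from a Secrets Manager GetSecretValue response.

    The response shape (a dict carrying a JSON "SecretString") matches what
    boto3's secretsmanager.get_secret_value() returns; the field names inside
    the secret are illustrative.
    """
    secret = json.loads(get_secret_value_response["SecretString"])
    return secret["username"], secret["password"]

# In a real workload the response would come from, for example:
#   boto3.client("secretsmanager").get_secret_value(SecretId="prod/db")
fake_response = {"SecretString": json.dumps({"username": "app", "password": "s3cret"})}
print(parse_db_secret(fake_response))  # → ('app', 's3cret')
```

Because the application fetches the secret at runtime, automatic rotation in Secrets Manager requires no code or configuration change in the workload.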

You can use AWS Identity and Access Management Roles Anywhere to obtain [temporary security credentials in IAM](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) for workloads that run outside of AWS. Your workloads can use the same [IAM policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) and [IAM roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html) that you use with AWS applications to access AWS resources. 

 Where possible, prefer short-term temporary credentials over long-term static credentials. For scenarios in which you need users with programmatic access and long-term credentials, use [access key last used information](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey) to rotate and remove access keys. 

Users need programmatic access if they want to interact with AWS outside of the AWS Management Console. The way to grant programmatic access depends on the type of user that's accessing AWS.

To grant users programmatic access, choose one of the following options.



| Which user needs programmatic access? | To | By | 
| --- | --- | --- | 
| IAM | (Recommended) Use console credentials as temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/wellarchitected/2023-10-03/framework/sec_permissions_define.html)  | 
|  Workforce identity (Users managed in IAM Identity Center)  | Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/wellarchitected/2023-10-03/framework/sec_permissions_define.html)  | 
| IAM | Use temporary credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. | Following the instructions in [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html) in the IAM User Guide. | 
| IAM | (Not recommended) Use long-term credentials to sign programmatic requests to the AWS CLI, AWS SDKs, or AWS APIs. |  Following the instructions for the interface that you want to use. [\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/wellarchitected/2023-10-03/framework/sec_permissions_define.html)  | 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) 
+  [AWS IAM Identity Center](https://aws.amazon.com/iam/identity-center/) 
+  [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) 
+  [AWS Managed policies for IAM Identity Center](https://docs.aws.amazon.com/singlesignon/latest/userguide/security-iam-awsmanpol.html) 
+  [AWS IAM policy conditions](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html) 
+  [IAM use cases](https://docs.aws.amazon.com/IAM/latest/UserGuide/IAM_UseCases.html) 
+  [Remove unnecessary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials) 
+  [Working with Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) 
+  [How to control access to AWS resources based on AWS account, OU, or organization](https://aws.amazon.com/blogs/security/how-to-control-access-to-aws-resources-based-on-aws-account-ou-or-organization/) 
+  [Identify, arrange, and manage secrets easily using enhanced search in AWS Secrets Manager](https://aws.amazon.com/blogs/security/identify-arrange-manage-secrets-easily-using-enhanced-search-in-aws-secrets-manager/) 

 **Related videos:** 
+  [Become an IAM Policy Master in 60 Minutes or Less](https://youtu.be/YQsK4MtsELU) 
+  [Separation of Duties, Least Privilege, Delegation, and CI/CD](https://youtu.be/3H0i7VyTu70) 
+  [Streamlining identity and access management for innovation](https://www.youtube.com/watch?v=3qK0b1UkaE8) 

# SEC03-BP02 Grant least privilege access
<a name="sec_permissions_least_privileges"></a>

 It's a best practice to grant only the access that identities require to perform specific actions on specific resources under specific conditions. Use group and identity attributes to dynamically set permissions at scale, rather than defining permissions for individual users. For example, you can allow a group of developers access to manage only resources for their project. This way, if a developer leaves the project, the developer’s access is automatically revoked without changing the underlying access policies. 

**Desired outcome:** Users should only have the permissions required to do their job. Users should only be given access to production environments to perform a specific task within a limited time period, and access should be revoked once that task is complete. Permissions should be revoked when no longer needed, including when a user moves onto a different project or job function. Administrator privileges should be given only to a small group of trusted administrators. Permissions should be reviewed regularly to avoid permission creep. Machine or system accounts should be given the smallest set of permissions needed to complete their tasks. 

**Common anti-patterns:**
+  Defaulting to granting users administrator permissions. 
+  Using the root user for day-to-day activities. 
+  Creating policies that are overly permissive, but without full administrator privileges. 
+  Not reviewing permissions to understand whether they permit least privilege access. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 The principle of [least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) states that identities should only be permitted to perform the smallest set of actions necessary to fulfill a specific task. This balances usability, efficiency, and security. Operating under this principle helps limit unintended access and helps track who has access to what resources. IAM users and roles have no permissions by default. The root user has full access by default and should be tightly controlled, monitored, and used only for [tasks that require root access](https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html). 

 IAM policies are used to explicitly grant permissions to IAM roles or specific resources. For example, identity-based policies can be attached to IAM groups, while S3 buckets can be controlled by resource-based policies. 

 When creating an IAM policy, you can specify the service actions, resources, and conditions that must be true for AWS to allow or deny access. AWS supports a variety of conditions to help you scope down access. For example, by using the `PrincipalOrgID` [condition key](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_condition-keys.html), you can deny actions if the requestor isn’t a part of your AWS Organization. 

 You can also control requests that AWS services make on your behalf, such as AWS CloudFormation creating an AWS Lambda function, using the `CalledVia` condition key. You should layer different policy types to establish defense-in-depth and limit the overall permissions of your users. You can also restrict what permissions can be granted and under what conditions. For example, you can allow your application teams to create their own IAM policies for systems they build, but must also apply a [Permission Boundary](https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/) to limit the maximum permissions the system can receive. 
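A sketch of the `PrincipalOrgID` condition described above, expressed as a resource policy built in Python: requests from principals outside the organization are denied. The bucket name and organization ID (`o-exampleorgid`) are placeholders.

```python
import json

# Sketch of an S3 bucket resource policy using the aws:PrincipalOrgID
# condition key. The bucket name and organization ID are placeholders.
deny_outside_org = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyRequestsFromOutsideMyOrg",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:PrincipalOrgID": "o-exampleorgid"}
        },
    }],
}
print(json.dumps(deny_outside_org, indent=2))
```

Because the statement is an explicit `Deny`, it overrides any `Allow` granted elsewhere, which is what makes it useful as a guardrail.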

 **Implementation steps** 
+  **Implement least privilege policies**: Assign access policies with least privilege to IAM groups and roles to reflect the user’s role or function that you have defined. 
  +  **Base policies on API usage**: One way to determine the needed permissions is to review AWS CloudTrail logs. This review allows you to create permissions tailored to the actions that the user actually performs within AWS. [IAM Access Analyzer can automatically generate an IAM policy based on activity](https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/). You can use IAM Access Advisor at the organization or account level to [track the last accessed information for a particular policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html). 
+  **Consider using [AWS managed policies for job functions](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html).** When starting to create fine-grained permissions policies, it can be difficult to know where to start. AWS has managed policies for common job roles, for example billing, database administrators, and data scientists. These policies can help narrow the access that users have while determining how to implement the least privilege policies. 
+  **Remove unnecessary permissions:** Remove permissions that are not needed and trim back overly permissive policies. [IAM Access Analyzer policy generation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html) can help fine-tune permissions policies. 
+  **Ensure that users have limited access to production environments:** Users should only have access to production environments with a valid use case. After the user performs the specific tasks that required production access, access should be revoked. Limiting access to production environments helps prevent unintended production-impacting events and lowers the scope of impact of unintended access. 
+ **Consider permissions boundaries:** A permissions boundary is a feature for using a managed policy that sets the maximum permissions that an identity-based policy can grant to an IAM entity. An entity's permissions boundary allows it to perform only the actions that are allowed by both its identity-based policies and its permissions boundaries.  
+  **Consider [resource tags](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) for permissions:** An attribute-based access control model using resource tags allows you to grant access based on resource purpose, owner, environment, or other criteria. For example, you can use resource tags to differentiate between development and production environments. Using these tags, you can restrict developers to the development environment. By combining tagging and permissions policies, you can achieve fine-grained resource access without needing to define complicated, custom policies for every job function. 
+  **Use [service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) for AWS Organizations.** Service control policies centrally control the maximum available permissions for member accounts in your organization. Importantly, service control policies allow you to restrict root user permissions in member accounts. Also consider using AWS Control Tower, which provides prescriptive managed controls that enrich AWS Organizations. You can also define your own controls within Control Tower. 
+  **Establish a user lifecycle policy for your organization:** User lifecycle policies define tasks to perform when users are onboarded onto AWS, change job role or scope, or no longer need access to AWS. Permission reviews should be done during each step of a user’s lifecycle to verify that permissions are properly restrictive and to avoid permissions creep. 
+  **Establish a regular schedule to review permissions and remove any unneeded permissions:** You should regularly review user access to verify that users do not have overly permissive access. [AWS Config](https://aws.amazon.com/config/) and IAM Access Analyzer can help when auditing user permissions. 
+ **Establish a job role matrix:** A job role matrix visualizes the various roles and access levels required within your AWS footprint. Using a job role matrix, you can define and separate permissions based on user responsibilities within your organization. Use groups instead of applying permissions directly to individual users or roles.
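A job role matrix can be kept as simple structured data. This hypothetical sketch maps roles to the environments they may access and to an AWS managed job-function policy that approximates each role, with a helper to check an access request against the matrix; the role names and environment labels are made up.

```python
# Hypothetical job role matrix: each role maps to the environments it may
# access and an AWS managed job-function policy approximating its permissions.
JOB_ROLE_MATRIX = {
    "developer":      {"environments": ["dev"],         "policy": "PowerUserAccess"},
    "db-admin":       {"environments": ["dev", "prod"], "policy": "DatabaseAdministrator"},
    "security-audit": {"environments": ["dev", "prod"], "policy": "SecurityAudit"},
}

def allowed(role, environment):
    """Check an access request against the matrix; unknown roles get nothing."""
    entry = JOB_ROLE_MATRIX.get(role)
    return entry is not None and environment in entry["environments"]

print(allowed("developer", "prod"))  # → False
print(allowed("db-admin", "prod"))   # → True
```

Keeping the matrix in version control alongside your IAM automation makes permission reviews a diff rather than a spreadsheet exercise.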

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html?ref=wellarchitected#grant-least-privilege) 
+  [Permissions boundaries for IAM entities](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_boundaries.html) 
+  [Techniques for writing least privilege IAM policies](https://aws.amazon.com/blogs/security/techniques-for-writing-least-privilege-iam-policies/) 
+  [IAM Access Analyzer makes it easier to implement least privilege permissions by generating IAM policies based on access activity](https://aws.amazon.com/blogs/security/iam-access-analyzer-makes-it-easier-to-implement-least-privilege-permissions-by-generating-iam-policies-based-on-access-activity/) 
+  [Delegate permission management to developers by using IAM permissions boundaries](https://aws.amazon.com/blogs/security/delegate-permission-management-to-developers-using-iam-permissions-boundaries/) 
+  [Refining Permissions using last accessed information](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html) 
+  [IAM policy types and when to use them](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html) 
+  [Testing IAM policies with the IAM policy simulator](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_testing-policies.html) 
+  [Guardrails in AWS Control Tower](https://docs.aws.amazon.com/controltower/latest/userguide/guardrails.html) 
+  [Zero Trust architectures: An AWS perspective](https://aws.amazon.com/blogs/security/zero-trust-architectures-an-aws-perspective/) 
+  [How to implement the principle of least privilege with CloudFormation StackSets](https://aws.amazon.com/blogs/security/how-to-implement-the-principle-of-least-privilege-with-cloudformation-stacksets/) 
+  [Attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html?ref=wellarchitected) 
+ [Reducing policy scope by viewing user activity](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html?ref=wellarchitected) 
+  [View role access](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_manage_delete.html?ref=wellarchitected#roles-delete_prerequisites) 
+  [Use Tagging to Organize Your Environment and Drive Accountability](https://docs.aws.amazon.com/aws-technical-content/latest/cost-optimization-laying-the-foundation/tagging.html?ref=wellarchitected) 
+  [AWS Tagging Strategies](https://aws.amazon.com/answers/account-management/aws-tagging-strategies/?ref=wellarchitected) 
+  [Tagging AWS resources](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html) 

 **Related videos:** 
+  [Next-generation permissions management](https://www.youtube.com/watch?v=8vsD_aTtuTo) 
+  [Zero Trust: An AWS perspective](https://www.youtube.com/watch?v=1p5G1-4s1r0) 

 **Related examples:** 
+  [Lab: IAM permissions boundaries delegating role creation](https://wellarchitectedlabs.com/Security/300_IAM_Permission_Boundaries_Delegating_Role_Creation/README.html) 
+  [Lab: IAM tag based access control for EC2](https://wellarchitectedlabs.com/Security/300_IAM_Tag_Based_Access_Control_for_EC2/README.html?ref=wellarchitected) 

# SEC03-BP03 Establish emergency access process
<a name="sec_permissions_emergency_process"></a>

 Create a process that allows for emergency access to your workloads in the unlikely event of an issue with your centralized identity provider. 

 You must design processes for different failure modes that may result in an emergency event. For example, under normal circumstances, your workforce users federate to the cloud using a centralized identity provider ([SEC02-BP04](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_identity_provider.html)) to manage their workloads. However, if your centralized identity provider fails, or the configuration for federation in the cloud is modified, then your workforce users may not be able to federate into the cloud. An emergency access process allows authorized administrators to access your cloud resources through alternate means (such as an alternate form of federation or direct user access) to fix issues with your federation configuration or your workloads. The emergency access process is used until the normal federation mechanism is restored. 

 **Desired outcome:** 
+  You have defined and documented the failure modes that count as an emergency: consider your normal circumstances and the systems your users depend on to manage their workloads. Consider how each of these dependencies can fail and cause an emergency situation. You may find the questions and best practices in the [Reliability pillar](https://docs.aws.amazon.com/wellarchitected/latest/framework/a-reliability.html) useful to identify failure modes and architect more resilient systems to minimize the likelihood of failures. 
+  You have documented the steps that must be followed to confirm a failure as an emergency. For example, you can require your identity administrators to check the status of your primary and standby identity providers and, if both are unavailable, declare an emergency event for identity provider failure. 
+  You have defined an emergency access process specific to each type of emergency or failure mode. Being specific can reduce the temptation on the part of your users to overuse a general process for all types of emergencies. Your emergency access processes describe the circumstances under which each process should be used and, conversely, the situations where it should not be used, pointing to alternate processes that may apply. 
+  Your processes are well-documented with detailed instructions and playbooks that can be followed quickly and efficiently. Remember that an emergency event can be a stressful time for your users and they may be under extreme time pressure, so design your process to be as simple as possible. 

 **Common anti-patterns:** 
+  You do not have well-documented and well-tested emergency access processes. Your users are unprepared for an emergency and follow improvised processes when an emergency event arises. 
+  Your emergency access processes depend on the same systems (such as a centralized identity provider) as your normal access mechanisms. This means that the failure of such a system may impact both your normal and emergency access mechanisms and impair your ability to recover from the failure. 
+  Your emergency access processes are used in non-emergency situations. For example, your users frequently misuse emergency access processes as they find it easier to make changes directly than submit changes through a pipeline. 
+  Your emergency access processes do not generate sufficient logs to audit the processes, or the logs are not monitored to alert for potential misuse of the processes. 

 **Benefits of establishing this best practice:** 
+  By having well-documented and well-tested emergency access processes, you can reduce the time taken by your users to respond to and resolve an emergency event. This can result in less downtime and higher availability of the services you provide to your customers. 
+  You can track each emergency access request and detect and alert on unauthorized attempts to misuse the process for non-emergency events. 

 **Level of risk exposed if this best practice is not established**: Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 This section provides guidance for creating emergency access processes for several failure modes related to workloads deployed on AWS, starting with common guidance that applies to all failure modes and followed by specific guidance based on the type of failure mode. 

 **Common guidance for all failure modes** 

 Consider the following as you design an emergency access process for a failure mode: 
+  Document the pre-conditions and assumptions of the process: when the process should be used and when it should not be used. It helps to detail the failure mode and document assumptions, such as the state of other related systems. For example, the process for Failure Mode 2 assumes that the identity provider is available, but the configuration on AWS is modified or has expired. 
+  Pre-create resources needed by the emergency access process ([SEC10-BP05](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_incident_response_pre_provision_access.html)). For example, pre-create the emergency access AWS account with IAM users and roles, and the cross-account IAM roles in all the workload accounts. This verifies that these resources are ready and available when an emergency event happens. By pre-creating resources, you do not have a dependency on AWS [control plane](https://docs.aws.amazon.com/whitepapers/latest/aws-fault-isolation-boundaries/control-planes-and-data-planes.html) APIs (used to create and modify AWS resources) that may be unavailable in an emergency. Further, by pre-creating IAM resources, you do not need to account for [potential delays due to eventual consistency](https://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_general.html#troubleshoot_general_eventual-consistency). 
+  Include emergency access processes as part of your incident management plans ([SEC10-BP02](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_incident_response_develop_management_plans.html)). Document how emergency events are tracked and communicated to others in your organization such as peer teams, your leadership, and, when applicable, externally to your customers and business partners. 
+  Define the emergency access request process in your existing service request workflow system if you have one. Typically, such workflow systems allow you to create intake forms to collect information about the request, track the request through each stage of the workflow, and add both automated and manual approval steps. Relate each request with a corresponding emergency event tracked in your incident management system. Having a uniform system for emergency access requests allows you to track those requests in a single system, analyze usage trends, and improve your processes. 
+  Verify that your emergency access processes can only be initiated by authorized users and require approvals from the user's peers or management as appropriate. The approval process should operate effectively both inside and outside business hours. Define how approval requests fall back to secondary approvers if the primary approvers are unavailable, and how they escalate up your management chain until approved. 
+  Verify that the process generates detailed audit logs and events for both successful and failed attempts to gain emergency access. Monitor both the request process and the emergency access mechanism to detect misuse or unauthorized accesses. Correlate activity with ongoing emergency events from your incident management system and alert when actions happen outside of expected time periods. For example, you should monitor and alert on activity in the emergency access AWS account, as it should never be used in normal operations. 
+  Test emergency access processes periodically to verify that the steps are clear and grant the correct level of access quickly and efficiently. Your emergency access processes should be tested as part of incident response simulations ([SEC10-BP07](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_incident_response_run_game_days.html)) and disaster recovery tests ([REL13-BP03](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_planning_for_recovery_dr_tested.html)). 
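For the monitoring guidance above, one option is an Amazon EventBridge rule whose event pattern matches CloudTrail-delivered console sign-in and API activity in the emergency access account. The following is a minimal sketch, assuming a hypothetical account ID; the `source` and `detail-type` values are standard CloudTrail-to-EventBridge event fields:

```python
import json

# Hypothetical placeholder for the emergency access account ID.
EMERGENCY_ACCOUNT_ID = "111122223333"

# Event pattern matching console sign-ins and STS API calls recorded by
# CloudTrail. If the rule is deployed in the emergency access account
# itself, the recipientAccountId match is belt-and-braces.
event_pattern = {
    "source": ["aws.signin", "aws.sts"],
    "detail-type": [
        "AWS Console Sign In via CloudTrail",
        "AWS API Call via CloudTrail",
    ],
    "detail": {
        "recipientAccountId": [EMERGENCY_ACCOUNT_ID],
    },
}

print(json.dumps(event_pattern, indent=2))
```

A rule built from this pattern can target an SNS topic or your security operations tooling, so that any activity outside a tracked emergency event raises an alert.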

 **Failure Mode 1: Identity provider used to federate to AWS is unavailable** 

 As described in [SEC02-BP04 Rely on a centralized identity provider](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_identity_provider.html), we recommend relying on a centralized identity provider to federate your workforce users to grant access to AWS accounts. You can federate to multiple AWS accounts in your AWS organization using IAM Identity Center, or you can federate to individual AWS accounts using IAM. In both cases, workforce users authenticate with your centralized identity provider before being redirected to an AWS sign-in endpoint to complete single sign-on. 

 In the unlikely event that your centralized identity provider is unavailable, your workforce users can't federate to AWS accounts or manage their workloads. In this emergency event, you can provide an emergency access process for a small set of administrators to access AWS accounts to perform critical tasks that cannot wait until your centralized identity provider is back online. For example, suppose your identity provider is unavailable for four hours, and during that period you need to modify the upper limits of an Amazon EC2 Auto Scaling group in a production account to handle an unexpected spike in customer traffic. Your emergency administrators should follow the emergency access process to gain access to the specific production AWS account and make the necessary changes. 

 The emergency access process relies on a pre-created emergency access AWS account that is used solely for emergency access and has AWS resources (such as IAM roles and IAM users) to support the emergency access process. During normal operations, no one should access the emergency access account and you must monitor and alert on the misuse of this account (for more detail, see the preceding Common guidance section). 

 The emergency access account has emergency access IAM roles with permissions to assume cross-account roles in the AWS accounts that require emergency access. These IAM roles are pre-created and configured with trust policies that trust the emergency account's IAM roles. 
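As a concrete illustration of such a trust relationship, the following sketch builds the trust policy attached to a cross-account emergency role in a workload account. The account ID and role name are hypothetical placeholders:

```python
import json

# Hypothetical placeholders for the emergency access account and role.
EMERGENCY_ACCOUNT_ID = "111122223333"
EMERGENCY_ROLE_NAME = "EmergencyAccessRole"

# Trust policy on the workload account's emergency role: only the
# pre-created role in the emergency access account may assume it.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{EMERGENCY_ACCOUNT_ID}:role/{EMERGENCY_ROLE_NAME}"
            },
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```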

 The emergency access process can use one of the following approaches: 
+  You can pre-create a set of [IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users.html) for your emergency administrators in the emergency access account with associated strong passwords and MFA tokens. These IAM users have permissions to assume the IAM roles that then allow cross-account access to the AWS account where emergency access is required. We recommend creating as few such users as possible and assigning each user to a single emergency administrator. During an emergency, an emergency administrator user signs into the emergency access account using their password and MFA token code, switches to the emergency access IAM role in the emergency account, and finally switches to the emergency access IAM role in the workload account to perform the emergency change action. The advantage of this approach is that each IAM user is assigned to one emergency administrator and you can determine which user signed in by reviewing CloudTrail events. The disadvantage is that you have to maintain multiple IAM users with their associated long-lived passwords and MFA tokens. 
+  You can use the emergency access [AWS account root user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html) to sign into the emergency access account, assume the IAM role for emergency access, and assume the cross-account role in the workload account. We recommend setting a strong password and multiple MFA tokens for the root user. We also recommend storing the password and the MFA tokens in a secure enterprise credential vault that enforces strong authentication and authorization. You should secure the password and MFA token reset factors: set the email address for the account to an email distribution list that is monitored by your cloud security administrators, and the phone number of the account to a shared phone number that is also monitored by security administrators. The advantage of this approach is that there is one set of root user credentials to manage. The disadvantage is that since this is a shared user, multiple administrators have the ability to sign in as the root user. You must audit your enterprise vault log events to identify which administrator checked out the root user password. 
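With either approach, the identity in the emergency access account needs permission to assume only the designated cross-account emergency roles. A minimal sketch of that permissions policy, with hypothetical workload account IDs and role name:

```python
import json

# Hypothetical placeholder workload account IDs.
WORKLOAD_ACCOUNT_IDS = ["444455556666", "777788889999"]

# Permissions policy for the emergency access IAM user or role,
# scoped to the pre-created emergency roles in the workload accounts.
assume_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AssumeWorkloadEmergencyRoles",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": [
                f"arn:aws:iam::{acct}:role/WorkloadEmergencyRole"
                for acct in WORKLOAD_ACCOUNT_IDS
            ],
        }
    ],
}

print(json.dumps(assume_policy, indent=2))
```

Scoping the `Resource` element to specific role ARNs, rather than `*`, keeps the blast radius of the emergency identity as small as possible.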

 **Failure Mode 2: Identity provider configuration on AWS is modified or has expired** 

 To allow your workforce users to federate to AWS accounts, you can configure IAM Identity Center with an external identity provider or create an IAM Identity Provider ([SEC02-BP04](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_identity_provider.html)). Typically, you configure these by importing a SAML metadata XML document provided by your identity provider. The metadata XML document includes an X.509 certificate corresponding to a private key that the identity provider uses to sign its SAML assertions. 

 These configurations on the AWS side may be modified or deleted by mistake by an administrator. In another scenario, the X.509 certificate imported into AWS may expire before a new metadata XML document with a new certificate has been imported into AWS. Both scenarios can break federation to AWS for your workforce users, resulting in an emergency. 

 In such an emergency event, you can provide your identity administrators access to AWS to fix the federation issues. For example, your identity administrator uses the emergency access process to sign into the emergency access AWS account, switches to a role in the Identity Center administrator account, and updates the external identity provider configuration by importing the latest SAML metadata XML document from your identity provider to re-enable federation. Once federation is fixed, your workforce users continue to use the normal operating process to federate into their workload accounts. 

 You can follow the approaches detailed previously for Failure Mode 1 to create an emergency access process. You can grant least-privilege permissions to your identity administrators to access only the Identity Center administrator account and perform actions on Identity Center in that account. 

 **Failure Mode 3: Identity Center disruption** 

 In the unlikely event of an IAM Identity Center or AWS Region disruption, we recommend that you set up a configuration that you can use to provide temporary access to the AWS Management Console. 

 The emergency access process uses direct federation from your identity provider to IAM in an emergency account. For detail on the process and design considerations, see [Set up emergency access to the AWS Management Console](https://docs.aws.amazon.com/singlesignon/latest/userguide/emergency-access.html). 

### Implementation steps
<a name="implementation-steps"></a>

 **Common steps for all failure modes** 
+  Create an AWS account dedicated to emergency access processes. Pre-create the IAM resources needed in the account such as IAM roles or IAM users, and optionally IAM Identity Providers. Additionally, pre-create cross-account IAM roles in the workload AWS accounts with trust relationships with corresponding IAM roles in the emergency access account. You can use [CloudFormation StackSets with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-cloudformation.html) to create such resources in the member accounts in your organization. 
+  Create AWS Organizations [service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) (SCPs) to deny the deletion and modification of the cross-account IAM roles in the member AWS accounts. 
+  Enable CloudTrail for the emergency access AWS account and send the trail events to a central S3 bucket in your log collection AWS account. If you are using AWS Control Tower to set up and govern your AWS multi-account environment, then every account you create using AWS Control Tower or enroll in AWS Control Tower has CloudTrail enabled by default, with events delivered to an S3 bucket in a dedicated log archive AWS account. 
+  Monitor the emergency access account for activity by creating EventBridge rules that match on console login and API activity by the emergency IAM roles. Send notifications to your security operations center when activity happens outside of an ongoing emergency event tracked in your incident management system. 
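For the SCP step above, the following sketch shows a deny statement that protects the pre-created cross-account emergency roles from deletion or modification in member accounts. The role name is a hypothetical placeholder:

```python
import json

# SCP denying member-account principals the ability to delete or modify
# the pre-created emergency role. The role name is a placeholder.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectEmergencyRole",
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:PutRolePolicy",
                "iam:UpdateAssumeRolePolicy",
                "iam:UpdateRole",
            ],
            "Resource": "arn:aws:iam::*:role/WorkloadEmergencyRole",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Because SCPs apply to all principals in member accounts, including administrators, this guardrail protects the emergency path even from well-intentioned cleanup.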

 **Additional steps for Failure Mode 1: Identity provider used to federate to AWS is unavailable and Failure Mode 2: Identity provider configuration on AWS is modified or has expired** 
+  Pre-create resources depending on the mechanism you choose for emergency access: 
  +  **Using IAM users:** pre-create the IAM users with strong passwords and associated MFA devices. 
  +  **Using the emergency account root user:** configure the root user with a strong password and store the password in your enterprise credential vault. Associate multiple physical MFA devices with the root user and store the devices in locations that can be accessed quickly by members of your emergency administrator team. 

 **Additional steps for Failure Mode 3: Identity Center disruption** 
+  As detailed in [Set up emergency access to the AWS Management Console](https://docs.aws.amazon.com/singlesignon/latest/userguide/emergency-access.html), in the emergency access AWS account, create an IAM Identity Provider to enable direct SAML federation from your identity provider. 
+  Create emergency operations groups in your IdP with no members. 
+  Create IAM roles corresponding to the emergency operations groups in the emergency access account. 
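The trust policy on such an emergency operations role allows direct SAML federation from your identity provider. A minimal sketch, assuming hypothetical account ID and provider name:

```python
import json

# Hypothetical placeholders for the emergency access account and the
# IAM Identity Provider created for direct SAML federation.
EMERGENCY_ACCOUNT_ID = "111122223333"
SAML_PROVIDER_ARN = f"arn:aws:iam::{EMERGENCY_ACCOUNT_ID}:saml-provider/EmergencyIdP"

# Trust policy permitting AssumeRoleWithSAML only for assertions whose
# audience is the AWS sign-in endpoint.
saml_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": SAML_PROVIDER_ARN},
            "Action": "sts:AssumeRoleWithSAML",
            "Condition": {
                "StringEquals": {"SAML:aud": "https://signin.aws.amazon.com/saml"}
            },
        }
    ],
}

print(json.dumps(saml_trust_policy, indent=2))
```

During an emergency, you add authorized administrators to the otherwise-empty emergency operations groups in your IdP, and they federate directly into these roles.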

## Resources
<a name="resources"></a>

 **Related Well-Architected best practices:** 
+  [SEC02-BP04 Rely on a centralized identity provider](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_identities_identity_provider.html) 
+  [SEC03-BP02 Grant least privilege access](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html) 
+  [SEC10-BP02 Develop incident management plans](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_incident_response_develop_management_plans.html) 
+  [SEC10-BP07 Run game days](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_incident_response_run_game_days.html) 

 **Related documents:** 
+  [Set up emergency access to the AWS Management Console](https://docs.aws.amazon.com/singlesignon/latest/userguide/emergency-access.html) 
+  [Enabling SAML 2.0 federated users to access the AWS Management Console](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_enable-console-saml.html) 
+  [Break glass access](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/break-glass-access.html) 

 **Related videos:** 
+  [AWS re:Invent 2022 - Simplify your existing workforce access with IAM Identity Center](https://youtu.be/TvQN4OdR_0Y) 
+  [AWS re:Inforce 2022 - AWS Identity and Access Management (IAM) deep dive](https://youtu.be/YMj33ToS8cI) 

 **Related examples:** 
+  [AWS Break Glass Role](https://github.com/awslabs/aws-break-glass-role) 
+  [AWS customer playbook framework](https://github.com/aws-samples/aws-customer-playbook-framework) 
+  [AWS incident response playbook samples](https://github.com/aws-samples/aws-incident-response-playbooks) 

# SEC03-BP04 Reduce permissions continuously
<a name="sec_permissions_continuous_reduction"></a>

As your teams determine what access is required, remove unneeded permissions and establish review processes to achieve least privilege permissions. Continually monitor and remove unused identities and permissions for both human and machine access.

 **Desired outcome:** Permission policies should adhere to the least privilege principle. As job duties and roles become better defined, your permission policies need to be reviewed to remove unnecessary permissions. This approach lessens the scope of impact should credentials be inadvertently exposed or otherwise accessed without authorization. 

 **Common anti-patterns:** 
+  Defaulting to granting users administrator permissions. 
+  Creating policies that are overly permissive, but without full administrator privileges. 
+  Keeping permission policies after they are no longer needed. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 As teams and projects are just getting started, permissive permission policies might be used to inspire innovation and agility. For example, in a development or test environment, developers can be given access to a broad set of AWS services. We recommend that you evaluate access continuously and restrict access to only those services and service actions that are necessary to complete the current job. We recommend this evaluation for both human and machine identities. Machine identities, sometimes called system or service accounts, are identities that give AWS access to applications or servers. This access is especially important in a production environment, where overly permissive permissions can have a broad impact and potentially expose customer data. 

 AWS provides multiple methods to help identify unused users, roles, permissions, and credentials. AWS can also help analyze access activity of IAM users and roles, including associated access keys, and access to AWS resources such as objects in Amazon S3 buckets. AWS Identity and Access Management Access Analyzer policy generation can assist you in creating restrictive permission policies based on the actual services and actions a principal interacts with. [Attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) can help simplify permissions management, as you can provide permissions to users using their attributes instead of attaching permissions policies directly to each user. 

 **Implementation steps** 
+  **Use [AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html):** IAM Access Analyzer helps identify resources in your organization and accounts, such as Amazon Simple Storage Service (Amazon S3) buckets or IAM roles that are [shared with an external entity](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html). 
+  **Use [IAM Access Analyzer policy generation](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html):** IAM Access Analyzer policy generation helps you [create fine-grained permission policies based on an IAM user or role’s access activity](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html#access-analyzer-policy-generation-howitworks). 
+  **Determine an acceptable timeframe and usage policy for IAM users and roles:** Use the [last accessed timestamp](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor-view-data.html) to [identify unused users and roles](https://aws.amazon.com/blogs/security/identify-unused-iam-roles-remove-confidently-last-used-timestamp/) and remove them. Review service and action last accessed information to identify and [scope permissions for specific users and roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_access-advisor.html). For example, you can use last accessed information to identify the specific Amazon S3 actions that your application role requires and restrict the role’s access to only those actions. Last accessed information features are available in the AWS Management Console and programmatically, which allows you to incorporate them into your infrastructure workflows and automated tools. 
+  **Consider [logging data events in AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html):** By default, CloudTrail does not log data events such as Amazon S3 object-level activity (for example, `GetObject` and `DeleteObject`) or Amazon DynamoDB table activities (for example, `PutItem` and `DeleteItem`). Consider using logging for these events to determine what users and roles need access to specific Amazon S3 objects or DynamoDB table items. 
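To illustrate the timeframe-based review above, the following sketch flags roles not used within an acceptable window. The input mirrors the shape of the IAM `GetRole` response (`RoleLastUsed.LastUsedDate`); here sample data stands in for the API call, and the role names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def unused_roles(roles, max_age_days=90, now=None):
    """Return names of roles never used or not used since the cutoff."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for role in roles:
        last_used = role.get("RoleLastUsed", {}).get("LastUsedDate")
        if last_used is None or last_used < cutoff:
            stale.append(role["RoleName"])
    return stale

# Sample data standing in for IAM API responses.
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
roles = [
    {"RoleName": "app-role",
     "RoleLastUsed": {"LastUsedDate": datetime(2024, 5, 20, tzinfo=timezone.utc)}},
    {"RoleName": "legacy-role",
     "RoleLastUsed": {"LastUsedDate": datetime(2023, 1, 5, tzinfo=timezone.utc)}},
    {"RoleName": "never-used-role", "RoleLastUsed": {}},
]
print(unused_roles(roles, max_age_days=90, now=now))
# ['legacy-role', 'never-used-role']
```

Candidates flagged this way should still go through a review process before removal, since last accessed data has a reporting window and some access types are not tracked.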

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) 
+  [Remove unnecessary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials) 
+ [ What is AWS CloudTrail? ](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html)
+  [Working with Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) 
+ [ Logging and monitoring DynamoDB ](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/MonitoringDynamoDB.html)
+ [ Using CloudTrail event logging for Amazon S3 buckets and objects ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-cloudtrail-logging-for-s3.html)
+ [ Getting credential reports for your AWS account](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_getting-report.html)

 **Related videos:** 
+  [Become an IAM Policy Master in 60 Minutes or Less](https://youtu.be/YQsK4MtsELU) 
+  [Separation of Duties, Least Privilege, Delegation, and CI/CD](https://youtu.be/3H0i7VyTu70) 
+ [AWS re:Inforce 2022 - AWS Identity and Access Management (IAM) deep dive ](https://www.youtube.com/watch?v=YMj33ToS8cI)

# SEC03-BP05 Define permission guardrails for your organization
<a name="sec_permissions_define_guardrails"></a>

 Establish common controls that restrict access to all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your operators from deleting common resources, such as an IAM role used for your central security team. 

 **Common anti-patterns:** 
+ Running workloads in your organization's management account. 
+ Running production and non-production workloads in the same account. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 As you grow and manage additional workloads in AWS, you should separate these workloads using accounts and manage those accounts using AWS Organizations. We recommend that you establish common permission guardrails that restrict access to all identities in your organization. For example, you can restrict access to specific AWS Regions, or prevent your team from deleting common resources, such as an IAM role used by your central security team. 

 You can get started by implementing example service control policies, such as preventing users from turning off key services. SCPs use the IAM policy language and allow you to establish controls that all IAM principals (users and roles) adhere to. You can restrict access to specific service actions and resources, or apply conditions, to meet the access control needs of your organization. If necessary, you can define exceptions to your guardrails. For example, you can restrict service actions for all IAM entities in the account except for a specific administrator role. 
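The following sketch combines both ideas from the paragraph above: an SCP that denies actions outside approved Regions, with an exception for a specific administrator role. The Region list, exempted global services, and role name are assumptions to adapt to your organization:

```python
import json

# Guardrail SCP: deny non-global actions outside approved Regions,
# except when the caller is the designated administrator role.
region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            # Global services excluded from the Region restriction;
            # trim this list to your organization's needs.
            "NotAction": [
                "iam:*",
                "organizations:*",
                "sts:*",
                "support:*",
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                },
                # Exception: the hypothetical SecurityAdminRole is exempt.
                "ArnNotLike": {
                    "aws:PrincipalARN": "arn:aws:iam::*:role/SecurityAdminRole"
                },
            },
        }
    ],
}

print(json.dumps(region_guardrail, indent=2))
```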

 We recommend you avoid running workloads in your management account. The management account should be used to govern and deploy security guardrails that affect member accounts. Some AWS services support the use of a delegated administrator account. When available, use this delegated account instead of the management account. You should strongly limit access to the management account. 

Using a multi-account strategy allows you to have greater flexibility in applying guardrails to your workloads. The AWS Security Reference Architecture gives prescriptive guidance on how to design your account structure. AWS services such as AWS Control Tower provide capabilities to centrally manage both preventative and detective controls across your organization. Define a clear purpose for each account or OU within your organization and limit controls in line with that purpose. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Organizations](https://aws.amazon.com/organizations/) 
+ [Service control policies (SCPs)](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) 
+ [Get more out of service control policies in a multi-account environment](https://aws.amazon.com/blogs/security/get-more-out-of-service-control-policies-in-a-multi-account-environment/) 
+ [AWS Security Reference Architecture (AWS SRA)](https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/welcome.html) 

 **Related videos:** 
+ [Enforce Preventive Guardrails using Service Control Policies](https://www.youtube.com/watch?v=mEO05mmbSms) 
+  [Building governance at scale with AWS Control Tower](https://www.youtube.com/watch?v=Zxrs6YXMidk) 
+  [AWS Identity and Access Management deep dive](https://www.youtube.com/watch?v=YMj33ToS8cI) 

# SEC03-BP06 Manage access based on lifecycle
<a name="sec_permissions_lifecycle"></a>

 Integrate access controls with operator and application lifecycle and your centralized federation provider. For example, remove a user’s access when they leave the organization or change roles. 

As you manage workloads using separate accounts, there will be cases where you need to share resources between those accounts. We recommend that you share resources using [AWS Resource Access Manager (AWS RAM)](http://aws.amazon.com/ram/). This service allows you to easily and securely share AWS resources within your AWS organization and organizational units (OUs). Using AWS RAM, access to shared resources is automatically granted or revoked as accounts are moved in and out of the organization or OU with which they are shared. This helps ensure that resources are shared only with the accounts that you intend.

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

Implement a user access lifecycle policy for new users joining, job function changes, and users leaving so that only current users have access. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Attribute-based access control (ABAC)](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html) 
+  [Grant least privilege](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege) 
+  [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) 
+  [Remove unnecessary credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials) 
+  [Working with Policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage.html) 

 **Related videos:** 
+  [Become an IAM Policy Master in 60 Minutes or Less](https://youtu.be/YQsK4MtsELU) 
+  [Separation of Duties, Least Privilege, Delegation, and CI/CD](https://youtu.be/3H0i7VyTu70) 

# SEC03-BP07 Analyze public and cross-account access
<a name="sec_permissions_analyze_cross_account"></a>

Continually monitor findings that highlight public and cross-account access. Reduce public access and cross-account access to only the specific resources that require this access.

 **Desired outcome:** Know which of your AWS resources are shared and with whom. Continually monitor and audit your shared resources to verify they are shared with only authorized principals. 

 **Common anti-patterns:** 
+  Not keeping an inventory of shared resources. 
+  Not following a process for approval of cross-account or public access to resources. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 If your account is in AWS Organizations, you can grant access to resources to the entire organization, specific organizational units, or individual accounts. If your account is not a member of an organization, you can share resources with individual accounts. You can grant direct cross-account access using resource-based policies (for example, [Amazon Simple Storage Service (Amazon S3) bucket policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-policies.html)) or by allowing a principal in another account to assume an IAM role in your account. When using resource policies, verify that access is only granted to authorized principals. Define a process to approve all resources that are required to be publicly available. 
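As an illustration of the resource-based policy approach, this sketch builds an S3 bucket policy granting read access to a single principal in another account. The bucket name, account ID, and role name are hypothetical placeholders:

```python
import json

# Bucket policy granting a specific cross-account role read access.
# All identifiers below are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::444455556666:role/AnalyticsReadRole"
            },
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-shared-bucket",
                "arn:aws:s3:::example-shared-bucket/*",
            ],
        }
    ],
}

print(json.dumps(bucket_policy, indent=2))
```

Granting access to a named role, rather than the whole account root, keeps the shared access reviewable and revocable.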

 [AWS Identity and Access Management Access Analyzer](https://aws.amazon.com/iam/features/analyze-access/) uses [provable security](https://aws.amazon.com/security/provable-security/) to identify all access paths to a resource from outside of its account. It reviews resource policies continuously, and reports findings of public and cross-account access to make it simple for you to analyze potentially broad access. Consider configuring IAM Access Analyzer with AWS Organizations to verify that you have visibility to all your accounts. IAM Access Analyzer also allows you to [preview findings](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-access-preview.html) before deploying resource permissions. This allows you to validate that your policy changes grant only the intended public and cross-account access to your resources. When designing for multi-account access, you can use [trust policies](https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/) to control the cases in which a role can be assumed. For example, you could use the [`aws:PrincipalOrgID` condition key to deny an attempt to assume a role from outside your AWS organization](https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/). 
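A minimal sketch of such a trust policy, using the `aws:PrincipalOrgID` condition key so only principals from your own organization can assume the role; the account ID and organization ID are hypothetical placeholders:

```python
import json

# Trust policy restricting role assumption to principals in one
# organization. Both identifiers below are placeholders.
org_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}
            },
        }
    ],
}

print(json.dumps(org_trust_policy, indent=2))
```

Even if the principal ARN in the policy is later recycled in an unrelated account, the organization ID condition blocks the assumption attempt.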

 [AWS Config can report resources](https://docs.aws.amazon.com/config/latest/developerguide/operational-best-practices-for-Publicly-Accessible-Resources.html) that are misconfigured, and through AWS Config policy checks, can detect resources that have public access configured. Services such as [AWS Control Tower](https://aws.amazon.com/controltower/) and [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html) simplify deploying detective controls and guardrails across AWS Organizations to identify and remediate publicly exposed resources. For example, AWS Control Tower has a managed guardrail that can detect whether any [Amazon EBS snapshots are restorable by AWS accounts](https://docs.aws.amazon.com/controltower/latest/userguide/what-is-control-tower.html). 
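
To make the detection side concrete, this sketch shows the kind of evaluation logic that runs inside a custom AWS Config rule (normally deployed as a Lambda function). The configuration-item shape is simplified for illustration and the function name is hypothetical:

```python
# Sketch of custom AWS Config rule evaluation logic for S3 Block Public
# Access (assumption: simplified configuration-item structure).
def evaluate_s3_public_access(configuration_item: dict) -> str:
    """Return COMPLIANT only if all four Block Public Access settings are on."""
    config = configuration_item.get("supplementaryConfiguration", {}).get(
        "PublicAccessBlockConfiguration", {}
    )
    required = (
        "blockPublicAcls",
        "ignorePublicAcls",
        "blockPublicPolicy",
        "restrictPublicBuckets",
    )
    if all(config.get(key) is True for key in required):
        return "COMPLIANT"
    return "NON_COMPLIANT"
```

A real rule would read the configuration item from the Lambda event and report the result back with `PutEvaluations`; the compliance decision itself is as simple as the check above.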

 **Implementation steps** 
+  **Consider using [AWS Config for AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-config.html):** AWS Config allows you to aggregate findings from multiple accounts within an AWS organization to a delegated administrator account. This provides a comprehensive view, and allows you to [deploy AWS Config Rules across accounts to detect publicly accessible resources](https://docs.aws.amazon.com/config/latest/developerguide/config-rule-multi-account-deployment.html). 
+  **Configure AWS Identity and Access Management Access Analyzer:** IAM Access Analyzer helps you identify resources in your organization and accounts, such as Amazon S3 buckets or IAM roles, that are [shared with an external entity](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html). 
+  **Use auto-remediation in AWS Config to respond to changes in public access configuration of Amazon S3 buckets:** [You can automatically turn on the block public access settings for Amazon S3 buckets](https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/). 
+  **Implement monitoring and alerting to identify if Amazon S3 buckets have become public:** You must have [monitoring and alerting](https://aws.amazon.com/blogs/aws/amazon-s3-update-cloudtrail-integration/) in place to identify when Amazon S3 Block Public Access is turned off, and if Amazon S3 buckets become public. Additionally, if you are using AWS Organizations, you can create a [service control policy](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) that prevents changes to Amazon S3 public access policies. AWS Trusted Advisor checks for Amazon S3 buckets that have open access permissions. Bucket permissions that grant upload or delete access to everyone create potential security issues by allowing anyone to add, modify, or remove items in a bucket. The Trusted Advisor check examines explicit bucket permissions and associated bucket policies that might override the bucket permissions. You can also use AWS Config to monitor your Amazon S3 buckets for public access. For more information, see [How to Use AWS Config to Monitor for and Respond to Amazon S3 Buckets Allowing Public Access](https://aws.amazon.com/blogs/security/how-to-use-aws-config-to-monitor-for-and-respond-to-amazon-s3-buckets-allowing-public-access/). While reviewing access, it’s important to consider what types of data are contained in Amazon S3 buckets. [Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/findings-types.html) helps discover and protect sensitive data, such as PII, PHI, and credentials like private keys or AWS access keys. 
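
A preventive complement to the monitoring described above is a service control policy that denies changes to S3 Block Public Access settings. The following is a minimal sketch of such an SCP, built as a Python dictionary for readability; the statement ID is illustrative, and you would tailor the scope and add exceptions for your environment:

```python
import json

# Sketch of an SCP that prevents member accounts from changing
# Amazon S3 Block Public Access settings (bucket and account level).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3PublicAccessChanges",  # illustrative name
            "Effect": "Deny",
            "Action": [
                "s3:PutBucketPublicAccessBlock",
                "s3:PutAccountPublicAccessBlock",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attaching this at the organization root or an OU means Block Public Access can only be changed from accounts outside the SCP's scope, such as through a controlled exception process.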

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Using AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html?ref=wellarchitected)
+ [AWS Control Tower controls library ](https://docs.aws.amazon.com/controltower/latest/userguide/controls-reference.html)
+  [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-standards-fsbp.html)
+  [AWS Config Managed Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config_use-managed-rules.html) 
+  [AWS Trusted Advisor check reference](https://docs.aws.amazon.com/awssupport/latest/user/trusted-advisor-check-reference.html) 
+ [ Monitoring AWS Trusted Advisor check results with Amazon EventBridge ](https://docs.aws.amazon.com/awssupport/latest/user/cloudwatch-events-ta.html)
+ [ Managing AWS Config Rules Across All Accounts in Your Organization ](https://docs.aws.amazon.com/config/latest/developerguide/config-rule-multi-account-deployment.html)
+ [AWS Config and AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-config.html)

 **Related videos:** 
+ [Best Practices for securing your multi-account environment](https://www.youtube.com/watch?v=ip5sn3z5FNg)
+ [Dive Deep into IAM Access Analyzer](https://www.youtube.com/watch?v=i5apYXya2m0)

# SEC03-BP08 Share resources securely within your organization
<a name="sec_permissions_share_securely"></a>

As the number of workloads grows, you might need to share access to resources in those workloads or provision the resources multiple times across multiple accounts. You might have constructs to compartmentalize your environment, such as having development, testing, and production environments. However, having separation constructs does not prevent you from sharing securely. By sharing components that overlap, you can reduce operational overhead and provide a consistent experience without having to recreate the same resource multiple times. 

 **Desired outcome:** Minimize unintended access by using secure methods to share resources within your organization, and help with your data loss prevention initiative. Reduce your operational overhead compared to managing individual components, reduce errors from manually creating the same component multiple times, and increase your workloads’ scalability. You can benefit from decreased time to resolution in multi-point failure scenarios, and increase your confidence in determining when a component is no longer needed. For prescriptive guidance on analyzing externally shared resources, see [SEC03-BP07 Analyze public and cross-account access](sec_permissions_analyze_cross_account.md). 

 **Common anti-patterns:** 
+  Lack of a process to continually monitor and automatically alert on unexpected external sharing. 
+  Lack of a baseline for what should and should not be shared. 
+  Defaulting to a broadly open policy rather than sharing explicitly when required. 
+  Manually creating duplicate foundational resources in each account instead of sharing them. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 Architect your access controls and patterns to govern the consumption of shared resources securely and only with trusted entities. Monitor shared resources and review shared resource access continuously, and be alerted on inappropriate or unexpected sharing. Review [Analyze public and cross-account access](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_analyze_cross_account.html) to help you establish governance to reduce the external access to only resources that require it, and to establish a process to monitor continuously and alert automatically. 

 Cross-account sharing within AWS Organizations is supported by [a number of AWS services](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html), such as [AWS Security Hub CSPM](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-securityhub.html), [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_organizations.html), and [AWS Backup](https://docs.aws.amazon.com/organizations/latest/userguide/services-that-can-integrate-backup.html). These services allow for data to be shared to a central account, be accessible from a central account, or manage resources and data from a central account. For example, AWS Security Hub CSPM can transfer findings from individual accounts to a central account where you can view all the findings. AWS Backup can take a backup of a resource and share it across accounts. You can use [AWS Resource Access Manager](https://aws.amazon.com/ram/) (AWS RAM) to share other common resources, such as [VPC subnets and Transit Gateway attachments](https://docs.aws.amazon.com/ram/latest/userguide/shareable.html#shareable-vpc), [AWS Network Firewall](https://docs.aws.amazon.com/ram/latest/userguide/shareable.html#shareable-network-firewall), or [Amazon SageMaker AI pipelines](https://docs.aws.amazon.com/ram/latest/userguide/shareable.html#shareable-sagemaker). 

 To restrict your account to only share resources within your organization, use [service control policies (SCPs)](https://docs.aws.amazon.com/ram/latest/userguide/scp.html) to prevent access to external principals. When sharing resources, combine identity-based controls and network controls to [create a data perimeter for your organization](https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/building-a-data-perimeter-on-aws.html) to help protect against unintended access. A data perimeter is a set of preventive guardrails to help verify that only your trusted identities are accessing trusted resources from expected networks. These controls place appropriate limits on what resources can be shared and prevent sharing or exposing resources that should not be allowed. For example, as a part of your data perimeter, you can use VPC endpoint policies and the `aws:PrincipalOrgID` condition to verify that the identities accessing your Amazon S3 buckets belong to your organization. It is important to note that [SCPs do not apply to service-linked roles or AWS service principals](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html#scp-effects-on-permissions). 
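
As a sketch of the data perimeter idea, the following VPC endpoint policy allows Amazon S3 actions through the endpoint only for identities that belong to your organization. The organization ID is a placeholder, and a production policy would typically scope `Action` and `Resource` more narrowly:

```python
import json

# Placeholder organization ID (assumption); replace with your own.
ORG_ID = "o-exampleorgid"

# VPC endpoint policy sketch: permit S3 access through the endpoint
# only when the calling identity is in the organization.
endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

print(json.dumps(endpoint_policy, indent=2))
```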

 When using Amazon S3, [turn off ACLs for your Amazon S3 bucket](https://docs.aws.amazon.com/AmazonS3/latest/userguide/about-object-ownership.html) and use IAM policies to define access control. For [restricting access to an Amazon S3 origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) from [Amazon CloudFront](https://aws.amazon.com/cloudfront/), migrate from origin access identity (OAI) to origin access control (OAC) which supports additional features including server-side encryption with [AWS Key Management Service](https://aws.amazon.com/kms/). 

 In some cases, you might want to allow sharing resources outside of your organization or grant a third party access to your resources. For prescriptive guidance on managing permissions to share resources externally, see [Permissions management](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/permissions-management.html). 

 **Implementation steps** 

1.  **Use AWS Organizations.** 

    AWS Organizations is an account management service that allows you to consolidate multiple AWS accounts into an organization that you create and centrally manage. You can group your accounts into organizational units (OUs) and attach different policies to each OU to help you meet your budgetary, security, and compliance needs. You can also control how AWS artificial intelligence (AI) and machine learning (ML) services can collect and store data, and use the multi-account management of the AWS services integrated with Organizations. 

1.  **Integrate AWS Organizations with AWS services.** 

    When you use an AWS service to perform tasks on your behalf in the member accounts of your organization, AWS Organizations creates an IAM service-linked role (SLR) for that service in each member account. You should manage trusted access using the AWS Management Console, the AWS APIs, or the AWS CLI. For prescriptive guidance on turning on trusted access, see [Using AWS Organizations with other AWS services](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services.html) and [AWS services that you can use with Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html). 

1.  **Establish a data perimeter.** 

    A data perimeter on AWS is typically represented as your organization managed by AWS Organizations, along with any on-premises networks and systems that access your AWS resources. The goal of the perimeter is to verify that access is allowed only if the identity is trusted, the resource is trusted, and the network is expected. 

   1.  Define and implement the perimeters. 

       Follow the steps described in [Perimeter implementation](https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/perimeter-implementation.html) in the Building a Data Perimeter on AWS whitepaper for each authorization condition. For prescriptive guidance on protecting the network layer, see [Protecting networks](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/protecting-networks.html). 

   1.  Monitor and alert continually. 

       [AWS Identity and Access Management Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) helps identify resources in your organization and accounts that are shared with external entities. You can integrate [IAM Access Analyzer with AWS Security Hub CSPM](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-securityhub-integration.html) to send and aggregate findings for a resource from IAM Access Analyzer to Security Hub CSPM to help analyze the security posture of your environment. To integrate, turn on both IAM Access Analyzer and Security Hub CSPM in each Region in each account. You can also use AWS Config Rules to audit configuration and alert the appropriate party using [Amazon Q Developer in chat applications with AWS Security Hub CSPM](https://aws.amazon.com/blogs/security/enabling-aws-security-hub-integration-with-aws-chatbot/). You can then use [AWS Systems Manager Automation documents](https://docs.aws.amazon.com/config/latest/developerguide/remediation.html) to remediate noncompliant resources. 

   1.  For prescriptive guidance on monitoring and alerting continuously on resources shared externally, see [Analyze public and cross-account access](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_analyze_cross_account.html). 

1.  **Use resource sharing in AWS services and restrict accordingly.** 

    Many AWS services allow you to share resources with another account, or target a resource in another account, such as [Amazon Machine Images (AMIs)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) and [AWS Resource Access Manager (AWS RAM)](https://docs.aws.amazon.com/ram/latest/userguide/getting-started-sharing.html). When sharing an AMI, restrict `ModifyImageAttribute` API calls to specify only trusted accounts to share the AMI with. When using AWS RAM, specify the `ram:RequestedAllowsExternalPrincipals` condition to constrain sharing to your organization only, which helps prevent access from untrusted identities. For prescriptive guidance and considerations, see [Resource sharing and external targets](https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/perimeter-implementation.html). 

1.  **Use AWS RAM to share securely in an account or with other AWS accounts.** 

    [AWS RAM](https://aws.amazon.com/ram/) helps you securely share the resources that you have created with roles and users in your account and with other AWS accounts. In a multi-account environment, AWS RAM allows you to create a resource once and share it with other accounts. This approach helps reduce your operational overhead while providing consistency, visibility, and auditability through integrations with Amazon CloudWatch and AWS CloudTrail, which you do not receive when using cross-account access. 

    If you have resources that you shared previously using a resource-based policy, you can use the [`PromoteResourceShareCreatedFromPolicy` API](https://docs.aws.amazon.com/ram/latest/APIReference/API_PromoteResourceShareCreatedFromPolicy.html) or an equivalent to promote the resource share to a full AWS RAM resource share. 

    In some cases, you might need to take additional steps to share resources. For example, to share an encrypted snapshot, you need to [share an AWS KMS key](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html#share-kms-key). 
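
The AWS RAM restriction described in the steps above can be sketched as an SCP that denies creating or updating resource shares that would allow principals outside your organization, using the `ram:RequestedAllowsExternalPrincipals` condition key. The statement ID is illustrative:

```python
import json

# SCP sketch: deny AWS RAM resource shares that allow external
# principals (ram:RequestedAllowsExternalPrincipals is true).
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyExternalResourceShares",  # illustrative name
            "Effect": "Deny",
            "Action": [
                "ram:CreateResourceShare",
                "ram:UpdateResourceShare",
            ],
            "Resource": "*",
            "Condition": {
                "Bool": {"ram:RequestedAllowsExternalPrincipals": "true"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

With this SCP in place, shares created within your organization continue to work, while any attempt to create or update a share that permits external principals is denied.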

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC03-BP07 Analyze public and cross-account access](sec_permissions_analyze_cross_account.md) 
+  [SEC03-BP09 Share resources securely with a third party](sec_permissions_share_securely_third_party.md) 
+  [SEC05-BP01 Create network layers](sec_network_protection_create_layers.md) 

 **Related documents:** 
+ [Bucket owner granting cross-account permission to objects it does not own](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.html)
+ [How to use Trust Policies with IAM](https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/)
+ [Building Data Perimeter on AWS](https://docs.aws.amazon.com/whitepapers/latest/building-a-data-perimeter-on-aws/building-a-data-perimeter-on-aws.html)
+ [How to use an external ID when granting a third party access to your AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html)
+ [AWS services you can use with AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_integrate_services_list.html)
+ [ Establishing a data perimeter on AWS: Allow only trusted identities to access company data ](https://aws.amazon.com/blogs/security/establishing-a-data-perimeter-on-aws-allow-only-trusted-identities-to-access-company-data/)

 **Related videos:** 
+ [Granular Access with AWS Resource Access Manager](https://www.youtube.com/watch?v=X3HskbPqR2s)
+ [Securing your data perimeter with VPC endpoints](https://www.youtube.com/watch?v=iu0-o6hiPpI)
+ [ Establishing a data perimeter on AWS](https://www.youtube.com/watch?v=SMi5OBjp1fI)

 **Related tools:** 
+ [ Data Perimeter Policy Examples ](https://github.com/aws-samples/data-perimeter-policy-examples)

# SEC03-BP09 Share resources securely with a third party
<a name="sec_permissions_share_securely_third_party"></a>

 The security of your cloud environment doesn’t stop at your organization. Your organization might rely on a third party to manage a portion of your data. The permission management for the third-party managed system should follow the practice of just-in-time access using the principle of least privilege with temporary credentials. By working closely with a third party, you can reduce the scope of impact and risk of unintended access together. 

 **Desired outcome:** Long-term AWS Identity and Access Management (IAM) credentials, IAM access keys, and secret keys that are associated with a user can be used by anyone as long as the credentials are valid and active. Using an IAM role and temporary credentials helps you improve your overall security stance by reducing the effort to maintain long-term credentials, including the management and operational overhead of those sensitive details. By using a universally unique identifier (UUID) for the external ID in the IAM trust policy, and keeping the IAM policies attached to the IAM role under your control, you can audit and verify that the access granted to the third party is not too permissive. For prescriptive guidance on analyzing externally shared resources, see [SEC03-BP07 Analyze public and cross-account access](sec_permissions_analyze_cross_account.md). 

 **Common anti-patterns:** 
+  Using the default IAM trust policy without any conditions. 
+  Using long-term IAM credentials and access keys. 
+  Reusing external IDs. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 You might want to allow sharing resources outside of AWS Organizations or grant a third party access to your account. For example, a third party might provide a monitoring solution that needs to access resources within your account. In those cases, create an IAM cross-account role with only the privileges needed by the third party. Additionally, define a trust policy using the [external ID condition](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html). When using an external ID, you or the third party can generate a unique ID for each customer, third party, or tenancy. The unique ID should not be controlled by anyone but you after it’s created. The third party must implement a process to relate the external ID to the customer in a secure, auditable, and reproducible manner. 
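
A minimal sketch of such a cross-account trust policy with an external ID condition follows; the account ID and external ID are placeholder values, not real identifiers:

```python
import json

# Placeholder values for illustration only.
THIRD_PARTY_ACCOUNT_ID = "111122223333"
EXTERNAL_ID = "7f4a9b2c-1d3e-4f56-8a90-b1c2d3e4f5a6"

# Trust policy: the third party's account can assume the role only
# when it also supplies the agreed external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{THIRD_PARTY_ACCOUNT_ID}:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The third party then passes the same value as the `ExternalId` parameter when calling `sts:AssumeRole`; a request without it (or with a different value) is denied.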

 You can also use [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) to manage IAM roles for applications outside of AWS that use AWS APIs. 

 If the third party no longer requires access to your environment, remove the role. Avoid providing long-term credentials to a third party. Maintain awareness of other AWS services that support sharing. For example, the AWS Well-Architected Tool allows [sharing a workload](https://docs.aws.amazon.com/wellarchitected/latest/userguide/workloads-sharing.html) with other AWS accounts, and [AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) helps you securely share an AWS resource you own with other accounts. 

 **Implementation steps** 

1.  **Use cross-account roles to provide access to external accounts.** 

    [Cross-account roles](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_third-party.html) reduce the amount of sensitive information that is stored by external accounts and third parties for servicing their customers. Cross-account roles allow you to grant access to AWS resources in your account securely to a third party, such as AWS Partners or other accounts in your organization, while maintaining the ability to manage and audit that access. 

    The third party might be providing a service to you from a hybrid infrastructure or alternatively pulling data into an offsite location. [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) helps you allow third-party workloads to securely interact with your AWS workloads and further reduces the need for long-term credentials. 

    You should not use long-term credentials, or access keys associated with users, to provide external account access. Instead, use cross-account roles to provide the cross-account access. 

1.  **Use an external ID with third parties.** 

    Using an [external ID](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html) allows you to designate who can assume a role in an IAM trust policy. The trust policy can require that the party assuming the role assert the condition under which they are operating. It also provides a way for the account owner to permit the role to be assumed only under specific circumstances. The primary function of the external ID is to address and prevent the [confused deputy](https://aws.amazon.com/blogs/security/how-to-use-external-id-when-granting-access-to-your-aws-resources/) problem. 

    Use an external ID if you are an AWS account owner and you have configured a role for a third party that accesses other AWS accounts in addition to yours, or when you are in the position of assuming roles on behalf of different customers. Work with your third party or AWS Partner to establish an external ID condition to include in the IAM trust policy. 

1.  **Use universally unique external IDs.** 

    Implement a process that generates a random, unique value for an external ID, such as a universally unique identifier (UUID). A third party reusing external IDs across different customers does not address the confused deputy problem, because customer A might be able to view data of customer B by using the role ARN of customer B along with the duplicated external ID. In a multi-tenant environment, where a third party supports multiple customers with different AWS accounts, the third party must use a different unique ID as the external ID for each AWS account. The third party is responsible for detecting duplicate external IDs and securely mapping each customer to their respective external ID. The third party should test to verify that they can only assume the role when specifying the external ID. The third party should also refrain from storing the customer role ARN and the external ID until the external ID is required. 

    The external ID is not treated as a secret, but the external ID must not be an easily guessable value, such as a phone number, name, or account ID. Make the external ID a read-only field so that the external ID cannot be changed for the purpose of impersonating the setup. 

    You or the third party can generate the external ID. Define a process to determine who is responsible for generating the ID. Regardless of which entity creates the external ID, the third party should enforce uniqueness and consistent formatting across customers. 

1.  **Deprecate customer-provided long-term credentials.** 

    Deprecate the use of long-term credentials and use cross-account roles or IAM Roles Anywhere. If you must use long-term credentials, establish a plan to migrate to role-based access. For details on managing keys, see [Identity Management](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_audit.html). Also work with your AWS account team and the third party to establish a risk mitigation runbook. For prescriptive guidance on responding to and mitigating the potential impact of a security incident, see [Incident response](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/incident-response.html). 

1.  **Verify that setup has prescriptive guidance or is automated.** 

    The policy created for cross-account access in your accounts must follow the [least-privilege principle](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege). The third party must provide a role policy document or automated setup mechanism that uses an AWS CloudFormation template or an equivalent for you. This reduces the chance of errors associated with manual policy creation and offers an auditable trail. For more information on using a CloudFormation template to create cross-account roles, see [Cross-Account Roles](https://aws.amazon.com/blogs/apn/tag/cross-account-roles/). 

    The third party should provide an automated, auditable setup mechanism. If they provide only a role policy document outlining the access needed, automate the setup of the role yourself using a CloudFormation template or equivalent, and monitor for changes with drift detection as part of your audit practice. 

1.  **Account for changes.** 

    Your account structure, your need for the third party, or the service offering they provide might change. You should anticipate changes and failures, and plan accordingly with the right people, process, and technology. Audit the level of access you provide on a periodic basis, and implement detection methods to alert you to unexpected changes. Monitor and audit the use of the role and the datastore of the external IDs. You should be prepared to revoke third-party access, either temporarily or permanently, as a result of unexpected changes or access patterns. Also, measure the impact of your revocation operation, including the time it takes to perform, the people involved, the cost, and the impact to other resources. 

    For prescriptive guidance on detection methods, see the [Detection best practices](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/detection.html). 
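
The random, unique external ID generation described in the steps above can be sketched in Python. The in-memory set stands in for the third party's responsibility to detect and reject duplicates; a real implementation would persist the customer-to-ID mapping securely:

```python
import uuid

# IDs issued so far (assumption: in-memory set for illustration;
# a real third party would use durable, secure storage).
issued = set()

def issue_external_id() -> str:
    """Generate a random, universally unique external ID (UUID4)."""
    new_id = str(uuid.uuid4())
    if new_id in issued:
        # Astronomically unlikely for UUID4, but duplicates must be rejected.
        raise ValueError("duplicate external ID generated")
    issued.add(new_id)
    return new_id
```

A UUID4 satisfies the guidance that the ID not be an easily guessable value such as a phone number, name, or account ID.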

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC02-BP02 Use temporary credentials](sec_identities_unique.md) 
+  [SEC03-BP05 Define permission guardrails for your organization](sec_permissions_define_guardrails.md) 
+  [SEC03-BP06 Manage access based on lifecycle](sec_permissions_lifecycle.md) 
+  [SEC03-BP07 Analyze public and cross-account access](sec_permissions_analyze_cross_account.md) 
+ [ SEC04 Detection ](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/detection.html)

 **Related documents:** 
+ [ Bucket owner granting cross-account permission to objects it does not own ](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-walkthroughs-managing-access-example4.html)
+ [ How to use trust policies with IAM roles ](https://aws.amazon.com/blogs/security/how-to-use-trust-policies-with-iam-roles/)
+ [ Delegate access across AWS accounts using IAM roles ](https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html)
+ [ How do I access resources in another AWS account using IAM? ](https://aws.amazon.com/premiumsupport/knowledge-center/cross-account-access-iam/)
+ [ Security best practices in IAM ](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html)
+ [ Cross-account policy evaluation logic ](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_evaluation-logic-cross-account.html)
+ [ How to use an external ID when granting access to your AWS resources to a third party ](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html)
+ [ Collecting Information from AWS CloudFormation Resources Created in External Accounts with Custom Resources ](https://aws.amazon.com/blogs/apn/collecting-information-from-aws-cloudformation-resources-created-in-external-accounts-with-custom-resources/)
+ [ Securely Using External ID for Accessing AWS Accounts Owned by Others ](https://aws.amazon.com/blogs/apn/securely-using-external-id-for-accessing-aws-accounts-owned-by-others/)
+ [ Extend IAM roles to workloads outside of IAM with IAM Roles Anywhere ](https://aws.amazon.com/blogs/security/extend-aws-iam-roles-to-workloads-outside-of-aws-with-iam-roles-anywhere/)

 **Related videos:** 
+ [ How do I allow users or roles in a separate AWS account access to my AWS account? ](https://www.youtube.com/watch?v=20tr9gUY4i0)
+ [AWS re:Invent 2018: Become an IAM Policy Master in 60 Minutes or Less ](https://www.youtube.com/watch?v=YQsK4MtsELU)
+ [AWS Knowledge Center Live: IAM Best Practices and Design Decisions ](https://www.youtube.com/watch?v=xzDFPIQy4Ks)

 **Related examples:** 
+ [ Well-Architected Lab - Lambda cross account IAM role assumption (Level 300) ](https://www.wellarchitectedlabs.com/security/300_labs/300_lambda_cross_account_iam_role_assumption/)
+ [ Configure cross-account access to Amazon DynamoDB ](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/configure-cross-account-access-to-amazon-dynamodb.html)
+ [AWS STS Network Query Tool ](https://github.com/aws-samples/aws-sts-network-query-tool)

# Detection
<a name="a-detective-controls"></a>

**Topics**
+ [SEC 4. How do you detect and investigate security events?](sec-04.md)

# SEC 4. How do you detect and investigate security events?
<a name="sec-04"></a>

Capture and analyze events from logs and metrics to gain visibility. Take action on security events and potential threats to help secure your workload.

**Topics**
+ [SEC04-BP01 Configure service and application logging](sec_detect_investigate_events_app_service_logging.md)
+ [SEC04-BP02 Analyze logs, findings, and metrics centrally](sec_detect_investigate_events_analyze_all.md)
+ [SEC04-BP03 Automate response to events](sec_detect_investigate_events_auto_response.md)
+ [SEC04-BP04 Implement actionable security events](sec_detect_investigate_events_actionable_events.md)

# SEC04-BP01 Configure service and application logging
<a name="sec_detect_investigate_events_app_service_logging"></a>

Retain security event logs from services and applications. This is a fundamental principle of security for audit, investigations, and operational use cases, and a common security requirement driven by governance, risk, and compliance (GRC) standards, policies, and procedures.

 **Desired outcome:** An organization should be able to reliably and consistently retrieve security event logs from AWS services and applications in a timely manner when required to fulfill an internal process or obligation, such as a security incident response. Consider centralizing logs for better operational results. 

 **Common anti-patterns:** 
+  Logs are stored in perpetuity or deleted too soon. 
+  Everybody can access logs. 
+  Relying entirely on manual processes for log governance and use. 
+  Storing every single type of log just in case it is needed. 
+  Checking log integrity only when necessary. 

 **Benefits of establishing this best practice:** You implement a root cause analysis (RCA) mechanism for security incidents and gain a source of evidence for your governance, risk, and compliance obligations. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 During a security investigation or other use cases based on your requirements, you need to be able to review relevant logs to record and understand the full scope and timeline of the incident. Logs are also required for alert generation, indicating that certain actions of interest have happened. It is critical to select the relevant log sources, turn them on, store them, and set up querying, retrieval, and alerting mechanisms. 

 **Implementation steps** 
+  **Select and use log sources.** Ahead of a security investigation, you need to capture relevant logs to retroactively reconstruct activity in an AWS account. Select log sources relevant to your workloads. 

   The log source selection criteria should be based on the use cases required by your business. Establish a trail for each AWS account using AWS CloudTrail or an AWS Organizations trail, and configure an Amazon S3 bucket for it. 

   AWS CloudTrail is a logging service that tracks API calls made against an AWS account, capturing AWS service activity. It’s turned on by default with a 90-day retention of management events that can be [retrieved through CloudTrail Event history](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events.html) using the AWS Management Console, the AWS CLI, or an AWS SDK. For longer retention and visibility of data events, [create a CloudTrail trail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html) and associate it with an Amazon S3 bucket, and optionally with an Amazon CloudWatch log group. Alternatively, you can create a [CloudTrail Lake](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-lake.html), which retains CloudTrail logs for up to seven years and provides a SQL-based querying facility. 

   AWS recommends that customers using a VPC turn on network traffic and DNS logs using [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) and [Amazon Route 53 resolver query logs](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-query-logs.html), respectively, and stream them to either an Amazon S3 bucket or a CloudWatch log group. You can create a VPC flow log for a VPC, a subnet, or a network interface. For VPC Flow Logs, you can be selective about how and where you turn them on to reduce cost. 

   AWS CloudTrail Logs, VPC Flow Logs, and Route 53 resolver query logs are the basic logging sources to support security investigations in AWS. You can also use [Amazon Security Lake](https://docs.aws.amazon.com/security-lake/latest/userguide/what-is-security-lake.html) to collect, normalize, and store this log data in Apache Parquet format and Open Cybersecurity Schema Framework (OCSF), which is ready for querying. Security Lake also supports other AWS logs and logs from third-party sources. 

   AWS services can generate logs not captured by the basic log sources, such as Elastic Load Balancing logs, AWS WAF logs, AWS Config recorder logs, Amazon GuardDuty findings, Amazon Elastic Kubernetes Service (Amazon EKS) audit logs, and Amazon EC2 instance operating system and application logs. For a full list of logging and monitoring options, see [Appendix A: Cloud capability definitions – Logging and Events](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/logging-and-events.html) of the [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/detection.html). 
+  **Research logging capabilities for each AWS service and application:** Each AWS service and application provides options for log storage, each with its own retention and lifecycle capabilities. The two most common log storage services are Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch. For long retention periods, Amazon S3 is recommended for its cost effectiveness and flexible lifecycle capabilities. If your primary logging option is Amazon CloudWatch Logs, consider archiving less frequently accessed logs to Amazon S3. 
+  **Select log storage:** The choice of log storage is generally related to which querying tool you use, retention capabilities, familiarity, and cost. The main options for log storage are an Amazon S3 bucket or a CloudWatch log group. 

   An Amazon S3 bucket provides cost-effective, durable storage with an optional lifecycle policy. Logs stored in Amazon S3 buckets can be queried using services such as Amazon Athena. 

   A CloudWatch log group provides durable storage and a built-in query facility through CloudWatch Logs Insights. 
+  **Identify appropriate log retention:** When you use an Amazon S3 bucket or CloudWatch log group to store logs, you must establish adequate lifecycles for each log source to optimize storage and retrieval costs. Customers generally have between three months and one year of logs readily available for querying, with retention of up to seven years. The choice of availability and retention should align with your security requirements and a composite of statutory, regulatory, and business mandates. 
+  **Use logging for each AWS service and application with proper retention and lifecycle policies:** For each AWS service or application in your organization, look for the specific logging configuration guidance: 
  + [ Configure AWS CloudTrail Trail ](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-create-and-update-a-trail.html)
  + [ Configure VPC Flow Logs ](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html)
  + [ Configure Amazon GuardDuty Finding Export ](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_exportfindings.html)
  + [ Configure AWS Config recording ](https://docs.aws.amazon.com/systems-manager/latest/userguide/quick-setup-config.html)
  + [ Configure AWS WAF web ACL traffic ](https://docs.aws.amazon.com/waf/latest/developerguide/logging.html)
  + [ Configure AWS Network Firewall network traffic logs ](https://docs.aws.amazon.com/network-firewall/latest/developerguide/firewall-logging.html)
  + [ Configure Elastic Load Balancing access logs ](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-access-logs.html)
  + [ Configure Amazon Route 53 resolver query logs ](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resolver-query-logs.html)
  + [ Configure Amazon RDS logs ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_LogAccess.html)
  + [ Configure Amazon EKS Control Plane logs ](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html)
  + [ Configure Amazon CloudWatch agent for Amazon EC2 instances and on-premises servers ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html)
+  **Select and implement querying mechanisms for logs:** For log queries, you can use [CloudWatch Logs Insights](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AnalyzingLogData.html) for data stored in CloudWatch log groups, and [Amazon Athena](https://aws.amazon.com/athena/) and [Amazon OpenSearch Service](https://aws.amazon.com/opensearch-service/) for data stored in Amazon S3. You can also use third-party querying tools such as a security information and event management (SIEM) service. 

   The process for selecting a log querying tool should consider the people, process, and technology aspects of your security operations. Select a tool that fulfills operational, business, and security requirements, and is both accessible and maintainable in the long term. Keep in mind that log querying tools work optimally when the number of logs to be scanned is kept within the tool’s limits. It is not uncommon to have multiple querying tools because of cost or technical constraints. 

   For example, you might use a third-party security information and event management (SIEM) tool to perform queries for the last 90 days of data, but use Athena to perform queries beyond 90 days because of the log ingestion cost of a SIEM. Regardless of the implementation, verify that your approach minimizes the number of tools required to maximize operational efficiency, especially during a security event investigation. 
+  **Use logs for alerting:** AWS provides alerting through several security services: 
  +  [AWS Config](https://aws.amazon.com/config/) monitors and records your AWS resource configurations and allows you to automate the evaluation and remediation against desired configurations. 
  +  [Amazon GuardDuty](https://aws.amazon.com/guardduty/) is a threat detection service that continually monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. GuardDuty ingests, aggregates, and analyzes information from sources such as AWS CloudTrail management and data events, DNS logs, VPC Flow Logs, and Amazon EKS audit logs. Because GuardDuty pulls these data streams directly from the services, you don’t have to manage Amazon S3 bucket policies or modify the way you collect and store logs. It is still recommended to retain these logs for your own investigation and compliance purposes. 
  +  [AWS Security Hub CSPM](https://aws.amazon.com/security-hub/) provides a single place that aggregates, organizes, and prioritizes your security alerts or findings from multiple AWS services and optional third-party products to give you a comprehensive view of security alerts and compliance status. 

   You can also use custom alert generation engines for security alerts not covered by these services or for speciﬁc alerts relevant to your environment. For information on building these alerts and detections, see [Detection in the AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/detection.html). 
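
   The trail and retention steps above can be sketched in Python. This is a minimal, hedged example: the trail and bucket names (`org-trail`, `example-log-bucket`) are hypothetical, the dictionaries mirror the `CreateTrail` and `PutBucketLifecycleConfiguration` request shapes, and the boto3 calls are left commented so nothing runs against a live account.

   ```python
   def trail_params(trail_name: str, bucket_name: str) -> dict:
       """Parameters for cloudtrail.create_trail: a multi-Region trail
       with log file integrity validation enabled."""
       return {
           "Name": trail_name,
           "S3BucketName": bucket_name,
           "IsMultiRegionTrail": True,
           "EnableLogFileValidation": True,
       }

   def log_lifecycle(archive_after_days: int = 90,
                     expire_after_days: int = 2555) -> dict:
       """Lifecycle rule for the log bucket: archive to S3 Glacier after
       roughly three months, expire after roughly seven years (2,555 days),
       matching the retention guidance above."""
       return {
           "Rules": [{
               "ID": "security-log-retention",
               "Status": "Enabled",
               "Filter": {"Prefix": ""},
               "Transitions": [{"Days": archive_after_days,
                                "StorageClass": "GLACIER"}],
               "Expiration": {"Days": expire_after_days},
           }]
       }

   # import boto3
   # cloudtrail = boto3.client("cloudtrail")
   # cloudtrail.create_trail(**trail_params("org-trail", "example-log-bucket"))
   # cloudtrail.start_logging(Name="org-trail")
   # boto3.client("s3").put_bucket_lifecycle_configuration(
   #     Bucket="example-log-bucket",
   #     LifecycleConfiguration=log_lifecycle())
   ```

   Adjust the transition and expiration days to match your own statutory, regulatory, and business mandates.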

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC04-BP02 Analyze logs, findings, and metrics centrally](sec_detect_investigate_events_analyze_all.md) 
+  [SEC07-BP04 Define data lifecycle management](sec_data_classification_lifecycle_management.md) 
+  [SEC10-BP06 Pre-deploy tools](sec_incident_response_pre_deploy_tools.md) 

 **Related documents:** 
+ [AWS Security Incident Response Guide ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html)
+ [ Getting Started with Amazon Security Lake ](https://aws.amazon.com/security-lake/getting-started/)
+ [ Getting started: Amazon CloudWatch Logs](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html)
+  [Security Partner Solutions: Logging and Monitoring](https://aws.amazon.com/security/partner-solutions/#logging-monitoring) 

 **Related videos:** 
+ [AWS re:Invent 2022 - Introducing Amazon Security Lake ](https://www.youtube.com/watch?v=V7XwbPPjXSY)

 **Related examples:** 
+ [ Assisted Log Enabler for AWS](https://github.com/awslabs/assisted-log-enabler-for-aws/)
+ [AWS Security Hub CSPM Findings Historical Export ](https://github.com/aws-samples/aws-security-hub-findings-historical-export)

 **Related tools:** 
+ [ Snowflake for Cybersecurity ](https://www.snowflake.com/en/data-cloud/workloads/cybersecurity/)

# SEC04-BP02 Analyze logs, findings, and metrics centrally
<a name="sec_detect_investigate_events_analyze_all"></a>

 Security operations teams rely on the collection of logs and the use of search tools to discover potential events of interest, which might indicate unauthorized activity or unintentional change. However, simply analyzing collected data and manually processing information is insufficient to keep up with the volume of information flowing from complex architectures. Analysis and reporting alone don’t facilitate the assignment of the right resources to work an event in a timely fashion. 

A best practice for building a mature security operations team is to deeply integrate the flow of security events and findings into a notification and workflow system such as a ticketing system, a bug or issue system, or other security information and event management (SIEM) system. This takes the workflow out of email and static reports, and allows you to route, escalate, and manage events or findings. Many organizations are also integrating security alerts into their chat, collaboration, and developer productivity platforms. For organizations embarking on automation, an API-driven, low-latency ticketing system offers considerable flexibility when planning what to automate first.

This best practice applies not only to security events generated from log messages depicting user activity or network events, but also to changes detected in the infrastructure itself. Detecting a change, determining whether it was appropriate, and then routing that information to the correct remediation workflow is essential for maintaining and validating a secure architecture, especially for changes whose undesirability is subtle enough that they cannot currently be prevented with a combination of AWS Identity and Access Management (IAM) and AWS Organizations configuration.

Amazon GuardDuty and AWS Security Hub CSPM provide aggregation, deduplication, and analysis mechanisms for log records that are also made available to you via other AWS services. GuardDuty ingests, aggregates, and analyzes information from sources such as AWS CloudTrail management and data events, VPC DNS logs, and VPC Flow Logs. Security Hub CSPM can ingest, aggregate, and analyze output from GuardDuty, AWS Config, Amazon Inspector, Amazon Macie, AWS Firewall Manager, a significant number of third-party security products available in the AWS Marketplace, and, if built accordingly, your own code. Both GuardDuty and Security Hub CSPM have an administrator-member model that can aggregate findings and insights across multiple accounts. Security Hub CSPM is often used by customers who have an on-premises SIEM as an AWS-side log and alert preprocessor and aggregator, from which they can then ingest findings through Amazon EventBridge and an AWS Lambda-based processor and forwarder.
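
The Lambda-based processor and forwarder pattern described above can be sketched as follows. This is an illustrative handler, not a definitive implementation: the event shape follows the EventBridge "Security Hub Findings - Imported" event, and the downstream ticketing call (`create_ticket`) is a hypothetical placeholder for your SIEM or ticketing API.

```python
def handler(event: dict, context=None) -> list:
    """Extract the fields a ticketing system typically needs from a
    Security Hub findings event delivered by Amazon EventBridge."""
    tickets = []
    for finding in event.get("detail", {}).get("findings", []):
        tickets.append({
            "title": finding.get("Title"),
            "severity": finding.get("Severity", {}).get("Label"),
            "account": finding.get("AwsAccountId"),
            "resource_ids": [r.get("Id") for r in finding.get("Resources", [])],
        })
        # create_ticket(tickets[-1])  # hypothetical SIEM/ticketing call
    return tickets
```

Routing, escalation, and deduplication rules would then live in the ticketing system rather than in email or static reports.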

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Evaluate log processing capabilities: Review the options that are available for processing logs. 
  +  [Find an AWS Partner that specializes in logging and monitoring solutions ](https://aws.amazon.com/security/partner-solutions/#Logging_and_Monitoring)
+  As a start for analyzing CloudTrail logs, test Amazon Athena. 
  + [ Configuring Athena to analyze CloudTrail logs ](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html)
+  Implement centralized logging in AWS: See the following AWS example solution to centralize logging from multiple sources. 
  +  [Centralize logging solution ](https://aws.amazon.com/solutions/centralized-logging/)
+  Implement centralized logging with a partner: APN Partners have solutions to help you analyze logs centrally. 
  + [ Logging and Monitoring ](https://aws.amazon.com/security/partner-solutions/#Logging_and_Monitoring)
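
As a sketch of the Athena starting point above: the helper below builds a query for recent console sign-in events, and the commented boto3 call shows how it could be submitted. The table name (`cloudtrail_logs`) and results bucket are hypothetical; the table is assumed to exist per the "Configuring Athena to analyze CloudTrail logs" guide linked above.

```python
def console_logins_query(table: str = "cloudtrail_logs") -> str:
    """Athena SQL: recent console sign-in events, newest first."""
    return (
        f"SELECT eventtime, useridentity.arn, sourceipaddress "
        f"FROM {table} "
        f"WHERE eventname = 'ConsoleLogin' "
        f"ORDER BY eventtime DESC LIMIT 50"
    )

# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=console_logins_query(),
#     QueryExecutionContext={"Database": "default"},
#     ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
# )
```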

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Answers: Centralized Logging ](https://aws.amazon.com/answers/logging/centralized-logging/)
+  [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) 
+ [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/)
+  [Amazon EventBridge ](https://aws.amazon.com/eventbridge)
+ [ Getting started: Amazon CloudWatch Logs ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html)
+  [Security Partner Solutions: Logging and Monitoring](https://aws.amazon.com/security/partner-solutions/#logging-monitoring) 

 **Related videos:** 
+ [ Centrally Monitoring Resource Configuration and Compliance ](https://youtu.be/kErRv4YB_T4)
+  [Remediating Amazon GuardDuty and AWS Security Hub CSPM Findings ](https://youtu.be/nyh4imv8zuk)
+ [ Threat management in the cloud: Amazon GuardDuty and AWS Security Hub CSPM](https://youtu.be/vhYsm5gq9jE)

# SEC04-BP03 Automate response to events
<a name="sec_detect_investigate_events_auto_response"></a>

 Using automation to investigate and remediate events reduces human effort and error, and allows you to scale investigation capabilities. Regular reviews will help you tune automation tools, and continuously iterate. 

In AWS, you can route events of interest and information about potentially unexpected changes into an automated workflow using Amazon EventBridge. This service provides a scalable rules engine designed to broker both native AWS event formats (such as AWS CloudTrail events) and custom events that you can generate from your application. Amazon GuardDuty also allows you to route findings to a workflow system for those building incident response systems (such as AWS Step Functions), to a central security account, or to a bucket for further analysis.

Detecting change and routing this information to the correct workflow can also be accomplished using AWS Config Rules and [Conformance Packs](https://docs.aws.amazon.com/config/latest/developerguide/conformance-packs.html). AWS Config detects changes to in-scope services (though with higher latency than EventBridge) and generates events that can be parsed using AWS Config Rules for rollback, enforcement of compliance policy, and forwarding of information to systems, such as change management platforms and operational ticketing systems. As well as writing your own Lambda functions to respond to AWS Config events, you can also take advantage of the [AWS Config Rules Development Kit](https://github.com/awslabs/aws-config-rdk), and a [library of open source](https://github.com/awslabs/aws-config-rules) AWS Config Rules. Conformance packs are a collection of AWS Config Rules and remediation actions you deploy as a single entity authored as a YAML template. A [sample conformance pack template](https://docs.aws.amazon.com/config/latest/developerguide/operational-best-practices-for-wa-Security-Pillar.html) is available for the Well-Architected Security Pillar.
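
To make the EventBridge routing concrete, here is a hedged sketch of a rule that matches GuardDuty findings of high severity and forwards them to a target. The rule name, target ID, and SNS topic ARN are hypothetical; the pattern uses EventBridge numeric content filtering, and the boto3 calls are left commented.

```python
def guardduty_high_severity_pattern() -> dict:
    """EventBridge event pattern: GuardDuty findings with severity >= 7."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", 7]}]},
    }

# import json, boto3
# events = boto3.client("events")
# events.put_rule(
#     Name="guardduty-high-severity",
#     EventPattern=json.dumps(guardduty_high_severity_pattern()),
# )
# events.put_targets(
#     Rule="guardduty-high-severity",
#     Targets=[{"Id": "responder",
#               "Arn": "arn:aws:sns:us-east-1:111122223333:security-alerts"}],
# )
```

The same pattern shape works for routing to AWS Step Functions or an AWS Lambda function instead of an SNS topic.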

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Implement automated alerting with GuardDuty: GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Turn on GuardDuty and configure automated alerts. 
+  Automate investigation processes: Develop automated processes that investigate an event and report information to an administrator to save time. 
  + [ Lab: Amazon GuardDuty hands on ](https://hands-on-guardduty.awssecworkshops.com/)

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Answers: Centralized Logging ](https://aws.amazon.com/answers/logging/centralized-logging/)
+  [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html) 
+ [ Amazon CloudWatch ](https://aws.amazon.com/cloudwatch/)
+  [Amazon EventBridge ](https://aws.amazon.com/eventbridge)
+ [ Getting started: Amazon CloudWatch Logs ](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_GettingStarted.html)
+  [Security Partner Solutions: Logging and Monitoring](https://aws.amazon.com/security/partner-solutions/#logging-monitoring) 
+ [ Setting up Amazon GuardDuty ](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_settingup.html)

 **Related videos:** 
+ [ Centrally Monitoring Resource Configuration and Compliance ](https://youtu.be/kErRv4YB_T4)
+  [Remediating Amazon GuardDuty and AWS Security Hub CSPM Findings ](https://youtu.be/nyh4imv8zuk)
+ [ Threat management in the cloud: Amazon GuardDuty and AWS Security Hub CSPM](https://youtu.be/vhYsm5gq9jE)

 **Related examples:** 
+  [Lab: Automated Deployment of Detective Controls ](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_Detective_Controls/README.html)

# SEC04-BP04 Implement actionable security events
<a name="sec_detect_investigate_events_actionable_events"></a>

 Create alerts that are sent to and can be actioned by your team. Ensure that alerts include relevant information for the team to take action. For each detective mechanism you have, you should also have a process, in the form of a [runbook](https://wa.aws.amazon.com/wat.concept.runbook.en.html) or [playbook](https://wa.aws.amazon.com/wat.concept.playbook.en.html), to investigate. For example, when you use [Amazon GuardDuty](http://aws.amazon.com/guardduty), it generates different [findings](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_findings.html). You should have a runbook entry for each finding type; for example, if a [trojan](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_trojan.html) is discovered, your runbook should have simple instructions for investigating and remediating the finding. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Discover metrics available for AWS services: Review the metrics that are available through Amazon CloudWatch for the services that you are using. 
  +  [AWS service documentation](https://aws.amazon.com/documentation/) 
  +  [Using Amazon CloudWatch Metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html) 
+  Configure Amazon CloudWatch alarms. 
  +  [Using Amazon CloudWatch Alarms](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/AlarmThatSendsEmail.html) 
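
The alarm step above can be sketched as a `put_metric_alarm` parameter set. This is an assumption-laden example: the metric (`RootAccountUsage` in a `CloudTrailMetrics` namespace) presumes you have created a matching CloudTrail metric filter, and the SNS topic ARN is hypothetical.

```python
def alarm_params(topic_arn: str) -> dict:
    """Parameters for cloudwatch.put_metric_alarm: alert on any use of a
    hypothetical RootAccountUsage metric within a 5-minute period."""
    return {
        "AlarmName": "root-account-usage",
        "Namespace": "CloudTrailMetrics",   # assumed metric filter namespace
        "MetricName": "RootAccountUsage",   # assumed metric filter metric
        "Statistic": "Sum",
        "Period": 300,
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "TreatMissingData": "notBreaching",
        "AlarmActions": [topic_arn],
    }

# import boto3
# boto3.client("cloudwatch").put_metric_alarm(
#     **alarm_params("arn:aws:sns:us-east-1:111122223333:security-alerts"))
```

Pairing the alarm with a runbook entry, as described above, makes the resulting alert actionable rather than merely informative.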

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [ Amazon CloudWatch ](https://aws.amazon.com/cloudwatch/)
+  [Amazon EventBridge ](https://aws.amazon.com/eventbridge)
+  [Security Partner Solutions: Logging and Monitoring](https://aws.amazon.com/security/partner-solutions/#logging-monitoring) 

 **Related videos:** 
+ [ Centrally Monitoring Resource Configuration and Compliance ](https://youtu.be/kErRv4YB_T4)
+  [Remediating Amazon GuardDuty and AWS Security Hub CSPM Findings ](https://youtu.be/nyh4imv8zuk)
+ [ Threat management in the cloud: Amazon GuardDuty and AWS Security Hub CSPM](https://youtu.be/vhYsm5gq9jE)

# Infrastructure protection
<a name="a-infrastructure-protection"></a>

**Topics**
+ [SEC 5. How do you protect your network resources?](sec-05.md)
+ [SEC 6. How do you protect your compute resources?](sec-06.md)

# SEC 5. How do you protect your network resources?
<a name="sec-05"></a>

Any workload that has some form of network connectivity, whether it’s the internet or a private network, requires multiple layers of defense to help protect from external and internal network-based threats.

**Topics**
+ [SEC05-BP01 Create network layers](sec_network_protection_create_layers.md)
+ [SEC05-BP02 Control traffic at all layers](sec_network_protection_layered.md)
+ [SEC05-BP03 Automate network protection](sec_network_protection_auto_protect.md)
+ [SEC05-BP04 Implement inspection and protection](sec_network_protection_inspection.md)

# SEC05-BP01 Create network layers
<a name="sec_network_protection_create_layers"></a>

Group components that share sensitivity requirements into layers to minimize the potential scope of impact of unauthorized access. For example, a database cluster in a virtual private cloud (VPC) with no need for internet access should be placed in subnets with no route to or from the internet. Traffic should only flow from the adjacent, next-least-sensitive resource. Consider a web application sitting behind a load balancer: your database should not be accessible directly from the load balancer. Only the business logic or web server should have direct access to your database. 

 **Desired outcome:** Create a layered network. Layered networks help logically group similar networking components. They also shrink the potential scope of impact of unauthorized network access. A properly layered network makes it harder for unauthorized users to pivot to additional resources within your AWS environment. In addition to securing internal network paths, you should also protect your network edge, such as web applications and API endpoints. 

 **Common anti-patterns:** 
+  Creating all resources in a single VPC or subnet. 
+  Using overly permissive security groups. 
+  Failing to use subnets. 
+  Allowing direct access to data stores such as databases. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Components such as Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Relational Database Service (Amazon RDS) database clusters, and AWS Lambda functions that share reachability requirements can be segmented into layers formed by subnets. Consider deploying serverless workloads, such as [Lambda](https://docs.aws.amazon.com/lambda/index.html) functions, within a VPC or behind an [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html). [AWS Fargate](https://aws.amazon.com/fargate/getting-started/) tasks that have no need for internet access should be placed in subnets with no route to or from the internet. This layered approach mitigates the impact of a single layer misconfiguration, which could allow unintended access. For AWS Lambda, you can run your functions in your VPC to take advantage of VPC-based controls. 

 For network connectivity that can include thousands of VPCs, AWS accounts, and on-premises networks, you should use [AWS Transit Gateway](https://aws.amazon.com/transit-gateway/). Transit Gateway acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes. Traffic between Amazon Virtual Private Cloud (Amazon VPC) and Transit Gateway remains on the AWS private network, which reduces external exposure to unauthorized users and potential security issues. Transit Gateway Inter-Region peering also encrypts inter-Region traffic with no single point of failure or bandwidth bottleneck. 

 **Implementation steps** 
+  **Use [Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/how-reachability-analyzer-works.html) to analyze the path between a source and destination based on configuration:** Reachability Analyzer allows you to automate verification of connectivity to and from VPC connected resources. Note that this analysis is done by reviewing configuration (no network packets are sent in conducting the analysis). 
+  **Use [Amazon VPC Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-network-access-analyzer.html) to identify unintended network access to resources:** Amazon VPC Network Access Analyzer allows you to specify your network access requirements and identify potential network paths. 
+  **Consider whether resources need to be in a public subnet:** Do not place resources in public subnets of your VPC unless they absolutely must receive inbound network traffic from public sources. 
+  **Create [subnets in your VPCs](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html):** Create subnets for each network layer (in groups that include multiple Availability Zones) to enhance micro-segmentation. Also verify that you have associated the correct [route tables](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html) with your subnets to control routing and internet connectivity. 
+  **Use [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/security-group-policies.html) to manage your VPC security groups:** AWS Firewall Manager helps lessen the management burden of using multiple security groups. 
+  **Use [AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/waf-chapter.html) to protect against common web vulnerabilities:** AWS WAF can help enhance edge security by inspecting traffic for common web vulnerabilities, such as SQL injection. It also allows you to restrict traffic from IP addresses originating from certain countries or geographical locations. 
+  **Use [Amazon CloudFront](https://docs.aws.amazon.com/cloudfront/index.html) as a content distribution network (CDN):** Amazon CloudFront can help speed up your web application by storing data closer to your users. It can also improve edge security by enforcing HTTPS, restricting access to geographic areas, and ensuring that network traffic can only access resources when routed through CloudFront. 
+  **Use [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html) when creating application programming interfaces (APIs):** Amazon API Gateway helps you publish, monitor, and secure REST, HTTP, and WebSocket APIs. 
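
The first step above can be sketched as the request you might pass to boto3's `create_network_insights_path` before starting a Reachability Analyzer analysis. This is an illustrative sketch only: the resource IDs are hypothetical placeholders, and the code assembles parameters without making any AWS call.

```python
# Sketch: parameters for a Reachability Analyzer path between a source
# and destination, in the shape accepted by the boto3 EC2 client.
# The IDs below are hypothetical placeholders.

def build_reachability_path_params(source_id: str, destination_id: str,
                                   destination_port: int = 443) -> dict:
    """Assemble the request for ec2.create_network_insights_path()."""
    return {
        "Source": source_id,            # e.g. an EC2 instance or ENI ID
        "Destination": destination_id,  # e.g. an internet gateway ID
        "Protocol": "tcp",
        "DestinationPort": destination_port,
    }

params = build_reachability_path_params("i-0123456789abcdef0",
                                        "igw-0123456789abcdef0")
# In a real run you would then call:
#   ec2 = boto3.client("ec2")
#   path = ec2.create_network_insights_path(**params)
#   ec2.start_network_insights_analysis(
#       NetworkInsightsPathId=path["NetworkInsightsPath"]["NetworkInsightsPathId"])
```

Because the analysis is configuration-based, it can run safely in a pipeline before traffic ever flows.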

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/fms-chapter.html) 
+ [Amazon Inspector](https://aws.amazon.com/inspector)
+  [Amazon VPC Security](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html) 
+ [Reachability Analyzer](https://docs.aws.amazon.com/vpc/latest/reachability/what-is-reachability-analyzer.html)
+ [Amazon VPC Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/getting-started.html#run-analysis)

 **Related videos:** 
+  [AWS Transit Gateway reference architectures for many VPCs ](https://youtu.be/9Nikqn_02Oc)
+  [Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield](https://youtu.be/0xlwLEccRe0) 
+ [AWS re:Inforce 2022 - Validate effective network access controls on AWS](https://www.youtube.com/watch?v=aN2P2zeQek0)
+ [AWS re:Inforce 2022 - Advanced protections against bots using AWS WAF](https://www.youtube.com/watch?v=pZ2eftlwZns)

 **Related examples:** 
+  [Well-Architected Lab - Automated Deployment of VPC](https://www.wellarchitectedlabs.com/Security/200_Automated_Deployment_of_VPC/README.html) 
+ [Workshop: Amazon VPC Network Access Analyzer](https://catalog.us-east-1.prod.workshops.aws/workshops/cf2ecaa4-e4be-4f40-b93f-e9fe3b1c1f64)

# SEC05-BP02 Control traffic at all layers
<a name="sec_network_protection_layered"></a>

  When architecting your network topology, you should examine the connectivity requirements of each component: for example, whether a component requires inbound and outbound internet accessibility, connectivity to other VPCs, edge services, or external data centers. 

 A VPC allows you to define your network topology that spans an AWS Region with a private IPv4 address range that you set, or an IPv6 address range AWS selects. You should apply multiple controls with a defense in depth approach for both inbound and outbound traffic, including the use of security groups (stateful inspection firewall), Network ACLs, subnets, and route tables. Within a VPC, you can create subnets in an Availability Zone. Each subnet can have an associated route table that defines routing rules for managing the paths that traffic takes within the subnet. You can define an internet routable subnet by having a route that goes to an internet or NAT gateway attached to the VPC, or through another VPC. 

 When an instance, Amazon Relational Database Service (Amazon RDS) database, or other service is launched within a VPC, it has its own security group per network interface. This firewall is outside the operating system layer and can be used to define rules for allowed inbound and outbound traffic. You can also define relationships between security groups. For example, instances within a database tier security group only accept traffic from instances within the application tier, by reference to the security groups applied to the instances involved. Unless you are using non-TCP protocols, it shouldn’t be necessary to have an Amazon Elastic Compute Cloud (Amazon EC2) instance directly accessible by the internet (even with ports restricted by security groups) without a load balancer or [CloudFront](https://aws.amazon.com/cloudfront). This helps protect it from unintended access through an operating system or application issue. A subnet can also have a network ACL attached to it, which acts as a stateless firewall. You should configure the network ACL to narrow the scope of traffic allowed between layers; note that you need to define both inbound and outbound rules. 
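
The security-group relationship described above (a database tier that admits traffic only from the application-tier group, by reference rather than by CIDR) can be sketched as the request you might pass to boto3's `authorize_security_group_ingress`. The group IDs are hypothetical placeholders, and the code only assembles parameters; it does not call AWS.

```python
# Sketch: an ingress rule for a database-tier security group that accepts
# traffic only from members of the application-tier security group.

def db_tier_ingress(db_sg_id: str, app_sg_id: str, port: int = 3306) -> dict:
    """Assemble the request for ec2.authorize_security_group_ingress()."""
    return {
        "GroupId": db_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing the app-tier group admits instances by group
            # membership, not by IP address, so the rule stays valid as
            # instances are replaced.
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }],
    }

rule = db_tier_ingress("sg-0a1b2c3d4e5f67890", "sg-0f9e8d7c6b5a43210")
```

The group-reference pattern is what makes the tiers self-maintaining: scaling the application tier never requires touching the database tier's rules.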

 Some AWS services require components to access the internet for making API calls, where [AWS API endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html) are located. Other AWS services use [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints.html) within your Amazon VPCs. Many AWS services, including Amazon S3 and Amazon DynamoDB, support VPC endpoints, and this technology has been generalized in [AWS PrivateLink](https://aws.amazon.com/privatelink/). We recommend you use this approach to access AWS services, third-party services, and your own services hosted in other VPCs securely. All network traffic on AWS PrivateLink stays on the global AWS backbone and never traverses the internet. Connectivity can only be initiated by the consumer of the service, and not by the provider of the service. Using AWS PrivateLink for external service access allows you to create air-gapped VPCs with no internet access and helps protect your VPCs from external threat vectors. Third-party services can use AWS PrivateLink to allow their customers to connect to the services from their VPCs over private IP addresses. For VPC assets that need to make outbound connections to the internet, these can be made outbound only (one-way) through an AWS managed NAT gateway, outbound only internet gateway, or web proxies that you create and manage. 
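
As a sketch of the recommended VPC endpoint approach, the following assembles parameters in the shape accepted by boto3's `create_vpc_endpoint` for an interface (AWS PrivateLink) endpoint. The VPC, subnet, and security group IDs and the service name are hypothetical placeholders; no AWS call is made.

```python
# Sketch: an interface VPC endpoint so workloads can reach a service
# over AWS PrivateLink without traversing the internet.

def interface_endpoint_params(vpc_id: str, service_name: str,
                              subnet_ids: list, sg_id: str) -> dict:
    """Assemble the request for ec2.create_vpc_endpoint()."""
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Interface",
        "ServiceName": service_name,        # the service's endpoint name
        "SubnetIds": subnet_ids,            # one per Availability Zone
        "SecurityGroupIds": [sg_id],        # restrict who can reach the ENIs
        "PrivateDnsEnabled": True,          # resolve the public name privately
    }

params = interface_endpoint_params(
    "vpc-0123456789abcdef0",
    "com.amazonaws.us-east-1.execute-api",
    ["subnet-0aaa0000000000000", "subnet-0bbb0000000000000"],
    "sg-0ccc0000000000000",
)
```

Because connectivity can only be initiated by the consumer, attaching a security group to the endpoint's network interfaces gives you a second control point on top of the PrivateLink connection itself.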

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Control network traffic in a VPC: Implement VPC best practices to control traffic. 
  +  [Amazon VPC security](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html) 
  +  [VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-endpoints.html) 
  +  [Amazon VPC security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) 
  +  [Network ACLs](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html) 
+  Control traffic at the edge: Implement edge services, such as Amazon CloudFront, to provide an additional layer of protection and other features. 
  +  [Amazon CloudFront use cases](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/IntroductionUseCases.html) 
  +  [AWS Global Accelerator](https://docs.aws.amazon.com/global-accelerator/latest/dg/what-is-global-accelerator.html) 
  +  [AWS Web Application Firewall (AWS WAF)](https://docs.aws.amazon.com/waf/latest/developerguide/waf-section.html) 
  +  [Amazon Route 53](https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/Welcome.html) 
  +  [Amazon VPC Ingress Routing](https://aws.amazon.com/about-aws/whats-new/2019/12/amazon-vpc-ingress-routing-insert-virtual-appliances-forwarding-path-vpc-traffic/) 
+  Control private network traffic: Implement services that protect your private traffic for your workload. 
  +  [Amazon VPC Peering](https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html) 
  +  [Amazon VPC Endpoint Services (AWS PrivateLink)](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-service.html) 
  +  [Amazon VPC Transit Gateway](https://docs.aws.amazon.com/vpc/latest/tgw/what-is-transit-gateway.html) 
  +  [AWS Direct Connect](https://docs.aws.amazon.com/directconnect/latest/UserGuide/Welcome.html) 
  +  [AWS Site-to-Site VPN](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPC_VPN.html) 
  +  [AWS Client VPN](https://docs.aws.amazon.com/vpn/latest/clientvpn-user/user-getting-started.html) 
  +  [Amazon S3 Access Points](https://docs.aws.amazon.com/AmazonS3/latest/dev/access-points.html) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/fms-section.html) 
+  [Amazon Inspector](https://aws.amazon.com/inspector) 
+  [Getting started with AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started.html) 

 **Related videos:** 
+  [AWS Transit Gateway reference architectures for many VPCs](https://youtu.be/9Nikqn_02Oc) 
+  [Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield](https://youtu.be/0xlwLEccRe0)

 **Related examples:** 
+  [Lab: Automated Deployment of VPC](https://www.wellarchitectedlabs.com/Security/200_Automated_Deployment_of_VPC/README.html) 

# SEC05-BP03 Automate network protection
<a name="sec_network_protection_auto_protect"></a>

 Automate protection mechanisms to provide a self-defending network based on threat intelligence and anomaly detection. Examples include intrusion detection and prevention tools that can adapt to current threats and reduce their impact. A web application firewall is one place where you can automate network protection: for example, the [AWS WAF Security Automations solution](https://github.com/awslabs/aws-waf-security-automations) can automatically block requests originating from IP addresses associated with known threat actors. 
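
As a minimal sketch of the automation idea, blocking requests from a maintained set of known-bad IP addresses, the following assembles a WAFv2 web ACL rule in the shape accepted by boto3's `create_web_acl` or `update_web_acl`. The IP set ARN is a hypothetical placeholder; a real automation would also keep that IP set updated from threat intelligence.

```python
# Sketch: a WAFv2 rule that blocks any request whose source IP is in a
# referenced IP set. The IP set itself would be maintained by automation.

def block_ip_set_rule(ip_set_arn: str, priority: int = 0) -> dict:
    """Assemble one rule for the Rules list of a WAFv2 web ACL."""
    return {
        "Name": "BlockKnownBadIPs",
        "Priority": priority,
        "Statement": {"IPSetReferenceStatement": {"ARN": ip_set_arn}},
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "BlockKnownBadIPs",
        },
    }

rule = block_ip_set_rule(
    "arn:aws:wafv2:us-east-1:111122223333:regional/ipset/bad-actors/example")
```

The visibility configuration matters for automation: the CloudWatch metrics it emits are what let a feedback loop observe whether the block list is effective.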

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Automate protection for web-based traffic: AWS offers a solution that uses AWS CloudFormation to automatically deploy a set of AWS WAF rules designed to filter common web-based attacks. Users can select from preconfigured protective features that define the rules included in an AWS WAF web access control list (web ACL). 
  +  [AWS WAF security automations](https://aws.amazon.com/solutions/aws-waf-security-automations/) 
+  Consider AWS Partner solutions: AWS Partners offer hundreds of industry-leading products that are equivalent, identical to, or integrate with existing controls in your on-premises environments. These products complement the existing AWS services to allow you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on-premises environments. 
  +  [Infrastructure security](https://aws.amazon.com/security/partner-solutions/#infrastructure_security) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/fms-section.html) 
+  [Amazon Inspector](https://aws.amazon.com/inspector) 
+ [Amazon VPC Security](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html)
+  [Getting started with AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started.html) 

 **Related videos:** 
+  [AWS Transit Gateway reference architectures for many VPCs](https://youtu.be/9Nikqn_02Oc) 
+  [Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield](https://youtu.be/0xlwLEccRe0)

 **Related examples:** 
+  [Lab: Automated Deployment of VPC](https://www.wellarchitectedlabs.com/Security/200_Automated_Deployment_of_VPC/README.html) 

# SEC05-BP04 Implement inspection and protection
<a name="sec_network_protection_inspection"></a>

 Inspect and filter your traffic at each layer. You can inspect your VPC configurations for potential unintended access using [VPC Network Access Analyzer](https://docs.aws.amazon.com/vpc/latest/network-access-analyzer/what-is-vaa.html). You can specify your network access requirements and identify potential network paths that do not meet them. For components transacting over HTTP-based protocols, a web application firewall can help protect from common attacks. [AWS WAF](https://aws.amazon.com/waf) is a web application firewall that lets you monitor and block HTTP(S) requests that match your configurable rules before they reach an Amazon API Gateway API, an Amazon CloudFront distribution, or an Application Load Balancer. To get started with AWS WAF, you can use [AWS Managed Rules](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started.html#getting-started-wizard-add-rule-group) in combination with your own, or use existing [partner integrations](https://aws.amazon.com/waf/partners/). 

 For managing AWS WAF, AWS Shield Advanced protections, and Amazon VPC security groups across AWS Organizations, you can use AWS Firewall Manager. It allows you to centrally configure and manage firewall rules across your accounts and applications, making it easier to scale enforcement of common rules. It also allows you to rapidly respond to attacks, using [AWS Shield Advanced](https://docs.aws.amazon.com/waf/latest/developerguide/ddos-responding.html), or [solutions](https://aws.amazon.com/solutions/aws-waf-security-automations/) that can automatically block unwanted requests to your web applications. Firewall Manager also works with [AWS Network Firewall](https://aws.amazon.com/network-firewall/). AWS Network Firewall is a managed service that uses a rules engine to give you fine-grained control over both stateful and stateless network traffic. It supports the [Suricata compatible](https://docs.aws.amazon.com/network-firewall/latest/developerguide/stateful-rule-groups-ips.html) open source intrusion prevention system (IPS) specifications for rules to help protect your workload. 
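
To illustrate the Suricata-compatible rule support mentioned above, the following assembles a stateful rule group request in the shape accepted by boto3's `create_rule_group` for AWS Network Firewall. The rule itself is a hypothetical example policy (drop outbound telnet), not a recommendation for every environment, and no AWS call is made.

```python
# Sketch: a Network Firewall stateful rule group defined with a
# Suricata-compatible rules string.

suricata_rules = "\n".join([
    # Hypothetical example policy: drop telnet leaving the protected subnets.
    'drop tcp $HOME_NET any -> any 23 '
    '(msg:"Block outbound telnet"; sid:1000001; rev:1;)',
])

def stateful_rule_group_params(name: str, rules: str,
                               capacity: int = 100) -> dict:
    """Assemble the request for network_firewall.create_rule_group()."""
    return {
        "RuleGroupName": name,
        "Type": "STATEFUL",
        "Capacity": capacity,  # sizing budget for rules in this group
        "RuleGroup": {"RulesSource": {"RulesString": rules}},
    }

params = stateful_rule_group_params("block-telnet", suricata_rules)
```

Once created, the rule group would be attached to a firewall policy that Firewall Manager can then roll out across accounts.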

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Configure Amazon GuardDuty: GuardDuty is a threat detection service that continuously monitors for malicious activity and unauthorized behavior to protect your AWS accounts and workloads. Use GuardDuty and configure automated alerts. 
  +  [Amazon GuardDuty](https://docs.aws.amazon.com/guardduty/latest/ug/what-is-guardduty.html) 
  +  [Lab: Automated Deployment of Detective Controls](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_Detective_Controls/README.html) 
+  Configure virtual private cloud (VPC) Flow Logs: VPC Flow Logs is a feature that allows you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data can be published to Amazon CloudWatch Logs and Amazon Simple Storage Service (Amazon S3). After you've created a flow log, you can retrieve and view its data in the chosen destination. 
+  Consider VPC traffic mirroring: Traffic mirroring is an Amazon VPC feature that you can use to copy network traffic from an elastic network interface of Amazon Elastic Compute Cloud (Amazon EC2) instances and then send it to out-of-band security and monitoring appliances for content inspection, threat monitoring, and troubleshooting. 
  +  [VPC traffic mirroring](https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html) 
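
The VPC Flow Logs step above can be sketched as the request you might pass to boto3's `create_flow_logs`, publishing to CloudWatch Logs. The VPC ID, log group name, and IAM role ARN are hypothetical placeholders; no AWS call is made.

```python
# Sketch: enable flow logs for a VPC, capturing both accepted and
# rejected traffic and delivering records to CloudWatch Logs.

def flow_log_params(vpc_id: str, log_group: str, role_arn: str) -> dict:
    """Assemble the request for ec2.create_flow_logs()."""
    return {
        "ResourceIds": [vpc_id],
        "ResourceType": "VPC",
        "TrafficType": "ALL",  # or "ACCEPT" / "REJECT" to narrow capture
        "LogDestinationType": "cloud-watch-logs",
        "LogGroupName": log_group,
        "DeliverLogsPermissionArn": role_arn,  # role that can write the logs
    }

params = flow_log_params(
    "vpc-0123456789abcdef0",
    "/vpc/flow-logs/example",
    "arn:aws:iam::111122223333:role/vpc-flow-logs-example",
)
```

Capturing `ALL` traffic is the more useful default for investigations, since rejected flows are often exactly what you need to see.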

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Firewall Manager](https://docs.aws.amazon.com/waf/latest/developerguide/fms-section.html) 
+  [Amazon Inspector](https://aws.amazon.com/inspector) 
+  [Amazon VPC Security](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Security.html) 
+  [Getting started with AWS WAF](https://docs.aws.amazon.com/waf/latest/developerguide/getting-started.html) 

 **Related videos:** 
+  [AWS Transit Gateway reference architectures for many VPCs](https://youtu.be/9Nikqn_02Oc) 
+  [Application Acceleration and Protection with Amazon CloudFront, AWS WAF, and AWS Shield](https://youtu.be/0xlwLEccRe0) 

# SEC 6. How do you protect your compute resources?
<a name="sec-06"></a>

Compute resources in your workload require multiple layers of defense to help protect from external and internal threats. Compute resources include EC2 instances, containers, AWS Lambda functions, database services, IoT devices, and more.

**Topics**
+ [SEC06-BP01 Perform vulnerability management](sec_protect_compute_vulnerability_management.md)
+ [SEC06-BP02 Reduce attack surface](sec_protect_compute_reduce_surface.md)
+ [SEC06-BP03 Implement managed services](sec_protect_compute_implement_managed_services.md)
+ [SEC06-BP04 Automate compute protection](sec_protect_compute_auto_protection.md)
+ [SEC06-BP05 Enable people to perform actions at a distance](sec_protect_compute_actions_distance.md)
+ [SEC06-BP06 Validate software integrity](sec_protect_compute_validate_software_integrity.md)

# SEC06-BP01 Perform vulnerability management
<a name="sec_protect_compute_vulnerability_management"></a>

Frequently scan and patch for vulnerabilities in your code, dependencies, and in your infrastructure to help protect against new threats.

 **Desired outcome:** Create and maintain a vulnerability management program. Regularly scan and patch resources such as Amazon EC2 instances, Amazon Elastic Container Service (Amazon ECS) containers, and Amazon Elastic Kubernetes Service (Amazon EKS) workloads. Configure maintenance windows for AWS managed resources, such as Amazon Relational Database Service (Amazon RDS) databases. Use static code scanning to inspect application source code for common issues. Consider web application penetration testing if your organization has the requisite skills or can hire outside assistance. 

 **Common anti-patterns:** 
+  Not having a vulnerability management program. 
+  Performing system patching without considering severity or risk avoidance. 
+  Using software that has passed its vendor-provided end of life (EOL) date. 
+  Deploying code into production before analyzing it for security issues. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 A vulnerability management program includes security assessment, identifying issues, prioritizing, and performing patch operations as part of resolving the issues. Automation is the key to continually scanning workloads for issues and unintended network exposure and performing remediation. Automating the creation and updating of resources saves time and reduces the risk of configuration errors creating further issues. A well-designed vulnerability management program should also consider vulnerability testing during the development and deployment stages of the software life cycle. Implementing vulnerability management during development and deployment helps lessen the chance that a vulnerability can make its way into your production environment. 

 Implementing a vulnerability management program requires a good understanding of the [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/) and how it relates to your specific workloads. Under the Shared Responsibility Model, AWS is responsible for protecting the infrastructure of the AWS Cloud. This infrastructure is composed of the hardware, software, networking, and facilities that run AWS Cloud services. You are responsible for security in the cloud, for example, the actual data, security configuration, and management tasks of Amazon EC2 instances, and verifying that your Amazon S3 objects are properly classified and configured. Your approach to vulnerability management can also vary depending on the services you consume. For example, AWS manages the patching for our managed relational database service, Amazon RDS, but you would be responsible for patching self-hosted databases. 

 AWS has a range of services to help with your vulnerability management program. [Amazon Inspector](https://docs.aws.amazon.com/inspector/latest/user/what-is-inspector.html) continually scans AWS workloads for software issues and unintended network access. [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) helps manage patching across your Amazon EC2 instances. Findings from Amazon Inspector and Systems Manager can be viewed in [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html), a cloud security posture management service that helps automate AWS security checks and centralize security alerts. 

 [Amazon CodeGuru](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) can help identify potential issues in Java and Python applications using static code analysis. 

 **Implementation steps** 
+  **Configure [Amazon Inspector](https://docs.aws.amazon.com/inspector/v1/userguide/inspector_introduction.html):** Amazon Inspector automatically detects newly launched Amazon EC2 instances, Lambda functions, and eligible container images pushed to Amazon ECR and immediately scans them for software issues, potential defects, and unintended network exposure. 
+  **Scan source code:** Scan libraries and dependencies for issues and defects. [Amazon CodeGuru](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) can scan and provide recommendations for remediating [common security issues](https://docs.aws.amazon.com/codeguru/detector-library/index.html) for both Java and Python applications. [The OWASP Foundation](https://owasp.org/www-community/Source_Code_Analysis_Tools) publishes a list of Source Code Analysis Tools (also known as SAST tools). 
+  **Implement a mechanism to scan and patch your existing environment, as well as to scan as part of a CI/CD pipeline build process:** Implement a mechanism to scan and patch for issues in your dependencies and operating systems to help protect against new threats. Have that mechanism run on a regular basis. Software vulnerability management is essential to understanding where you need to apply patches or address software issues. Prioritize remediation of potential security issues by embedding vulnerability assessments early into your continuous integration/continuous delivery (CI/CD) pipeline. Your approach can vary based on the AWS services that you are consuming. To check for potential issues in software running in Amazon EC2 instances, add [Amazon Inspector](https://aws.amazon.com/inspector/) to your pipeline to alert you and stop the build process if issues or potential defects are detected. Amazon Inspector continually monitors resources. You can also use open source products such as [OWASP Dependency-Check](https://owasp.org/www-project-dependency-check/), [Snyk](https://snyk.io/product/open-source-security-management/), [OpenVAS](https://www.openvas.org/), package managers, and AWS Partner tools for vulnerability management. 
+  **Use [AWS Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/what-is-systems-manager.html):** You are responsible for patch management for your AWS resources, including Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Machine Images (AMIs), and other compute resources. [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) automates the process of patching managed instances with both security related and other types of updates. Patch Manager can be used to apply patches on Amazon EC2 instances for both operating systems and applications, including Microsoft applications, Windows service packs, and minor version upgrades for Linux based instances. In addition to Amazon EC2, Patch Manager can also be used to patch on-premises servers. 

   For a list of supported operating systems, see [Supported operating systems](https://docs.aws.amazon.com/systems-manager/latest/userguide/prereqs-operating-systems.html) in the Systems Manager User Guide. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches. 
+  **Use [AWS Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/what-is-securityhub.html):** Security Hub CSPM provides a comprehensive view of your security state in AWS. It collects security data across [multiple AWS services](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-internal-providers.html) and provides those findings in a standardized format, allowing you to prioritize security findings across AWS services. 
+  **Use [AWS CloudFormation](https://aws.amazon.com/cloudformation/):** [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) is an infrastructure as code (IaC) service that can help with vulnerability management by automating resource deployment and standardizing resource architecture across multiple accounts and environments. 
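
The Patch Manager step above can be sketched as a custom patch baseline that auto-approves security patches of Critical or Important severity after seven days, in the shape accepted by boto3's `create_patch_baseline`. The name, operating system, and approval window are illustrative choices, not recommendations for every environment, and no AWS call is made.

```python
# Sketch: a Systems Manager patch baseline that auto-approves security
# patches after a short bake period.

def patch_baseline_params(name: str, approve_after_days: int = 7) -> dict:
    """Assemble the request for ssm.create_patch_baseline()."""
    return {
        "Name": name,
        "OperatingSystem": "AMAZON_LINUX_2",
        "ApprovalRules": {"PatchRules": [{
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "CLASSIFICATION", "Values": ["Security"]},
                {"Key": "SEVERITY", "Values": ["Critical", "Important"]},
            ]},
            # Delay approval so patches can bake in lower environments first.
            "ApproveAfterDays": approve_after_days,
        }]},
    }

baseline = patch_baseline_params("example-security-baseline")
```

A baseline like this is then attached to a patch group of instances and applied during a maintenance window, so scanning and installing happen on a predictable schedule.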

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
+  [Security Overview of AWS Lambda](https://pages.awscloud.com/rs/112-TZM-766/images/Overview-AWS-Lambda-Security.pdf) 
+ [Amazon CodeGuru](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html)
+ [Improved, Automated Vulnerability Management for Cloud Workloads with a New Amazon Inspector](https://aws.amazon.com/blogs/aws/improved-automated-vulnerability-management-for-cloud-workloads-with-a-new-amazon-inspector/)
+ [Automate vulnerability management and remediation in AWS using Amazon Inspector and AWS Systems Manager – Part 1](https://aws.amazon.com/blogs/mt/automate-vulnerability-management-and-remediation-in-aws-using-amazon-inspector-and-aws-systems-manager-part-1/)

 **Related videos:** 
+  [Securing Serverless and Container Services](https://youtu.be/kmSdyN9qiXY) 
+  [Security best practices for the Amazon EC2 instance metadata service](https://youtu.be/2B5bhZzayjI) 

# SEC06-BP02 Reduce attack surface
<a name="sec_protect_compute_reduce_surface"></a>

 Reduce your exposure to unintended access by hardening operating systems and minimizing the components, libraries, and externally consumable services in use. Start by reducing unused components, whether they are operating system packages or applications, for Amazon Elastic Compute Cloud (Amazon EC2)-based workloads, or external software modules in your code, for all workloads. You can find many hardening and security configuration guides for common operating systems and server software. For example, you can start with the [Center for Internet Security](https://www.cisecurity.org/) and iterate.

 In Amazon EC2, you can create your own Amazon Machine Images (AMIs), which you have patched and hardened, to help you meet the specific security requirements for your organization. The patches and other security controls you apply on the AMI are effective at the point in time in which they were created—they are not dynamic unless you modify them after launch, for example, with AWS Systems Manager. 

 You can simplify the process of building secure AMIs with EC2 Image Builder. EC2 Image Builder significantly reduces the effort required to create and maintain golden images without writing and maintaining automation. When software updates become available, Image Builder automatically produces a new image without requiring users to manually initiate image builds. EC2 Image Builder allows you to easily validate the functionality and security of your images before using them in production with AWS-provided tests and your own tests. You can also apply AWS-provided security settings to further secure your images to meet internal security criteria. For example, you can produce images that conform to the Security Technical Implementation Guide (STIG) standard using AWS-provided templates. 
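
The EC2 Image Builder behavior described above can be sketched as the request you might pass to boto3's `create_image_pipeline`: a pipeline that rebuilds the golden image automatically when dependency updates are available. The name and ARNs are hypothetical placeholders, and no AWS call is made.

```python
# Sketch: an Image Builder pipeline that rebuilds a hardened AMI when
# its dependencies (e.g., base image or components) are updated.

def image_pipeline_params(name: str, recipe_arn: str, infra_arn: str) -> dict:
    """Assemble the request for imagebuilder.create_image_pipeline()."""
    return {
        "name": name,
        "imageRecipeArn": recipe_arn,
        "infrastructureConfigurationArn": infra_arn,
        "schedule": {
            "scheduleExpression": "cron(0 0 * * ? *)",  # evaluate daily
            # Only build when the schedule matches AND updates exist.
            "pipelineExecutionStartCondition":
                "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE",
        },
        "enhancedImageMetadataEnabled": True,
    }

params = image_pipeline_params(
    "example-hardened-ami",
    "arn:aws:imagebuilder:us-east-1:111122223333:image-recipe/example/1.0.0",
    "arn:aws:imagebuilder:us-east-1:111122223333:"
    "infrastructure-configuration/example",
)
```

The start condition is the key piece: it turns "patch the golden image when updates appear" from a manual chore into a standing policy.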

 Using third-party static code analysis tools, you can identify common security issues such as unchecked function input bounds, as well as applicable common vulnerabilities and exposures (CVEs). You can use [Amazon CodeGuru](https://aws.amazon.com/codeguru/) for supported languages. Dependency checking tools can also be used to determine whether libraries your code links against are the latest versions, are themselves free of CVEs, and have licensing conditions that meet your software policy requirements. 

 Using Amazon Inspector, you can perform configuration assessments against your instances for known CVEs, assess against security benchmarks, and automate the notification of defects. Amazon Inspector runs on production instances or in a build pipeline, and it notifies developers and engineers when findings are present. You can access findings programmatically and direct your team to backlogs and bug-tracking systems. [EC2 Image Builder](https://aws.amazon.com/image-builder/) can be used to maintain server images (AMIs) with automated patching, AWS-provided security policy enforcement, and other customizations. When using containers, implement [ECR Image Scanning](https://docs.aws.amazon.com/AmazonECR/latest/userguide/image-scanning.html) in your build pipeline and on a regular basis against your image repository to look for CVEs in your containers. 
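
The ECR scanning step can be sketched as the request you might pass to boto3's `put_image_scanning_configuration`, which turns on scan-on-push for a repository. The repository name is a hypothetical placeholder, and no AWS call is made.

```python
# Sketch: enable scan-on-push for an ECR repository so every image
# pushed from the build pipeline is checked for known CVEs.

def scan_on_push_params(repository: str) -> dict:
    """Assemble the request for ecr.put_image_scanning_configuration()."""
    return {
        "repositoryName": repository,
        "imageScanningConfiguration": {"scanOnPush": True},
    }

params = scan_on_push_params("example-app")
# A pipeline would follow up with ecr.describe_image_scan_findings()
# after the push to gate deployment on the results.
```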

 While Amazon Inspector and other tools are effective at identifying configurations and any CVEs that are present, other methods are required to test your workload at the application level. [Fuzzing](https://owasp.org/www-community/Fuzzing) is a well-known method of finding bugs using automation to inject malformed data into input fields and other areas of your application. 
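
The fuzzing idea can be sketched in a few lines: generate random malformed inputs, feed them to an input handler, and flag any failure mode you did not anticipate. The handler below is a toy stand-in for an application input path, not part of any AWS service; real fuzzers (e.g., coverage-guided tools) are far more sophisticated.

```python
# Minimal fuzzing sketch: random printable strings thrown at an input
# handler, counting crashes that are not the anticipated rejection.

import random
import string

def parse_quantity(text: str) -> int:
    """Toy handler under test: expects a positive decimal integer."""
    value = int(text)                  # raises ValueError on malformed input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

def fuzz(handler, trials: int = 500, seed: int = 0) -> int:
    """Return the number of unexpected exceptions across random inputs."""
    rng = random.Random(seed)          # fixed seed keeps runs reproducible
    unexpected = 0
    for _ in range(trials):
        candidate = "".join(rng.choice(string.printable)
                            for _ in range(rng.randint(0, 12)))
        try:
            handler(candidate)
        except ValueError:             # anticipated rejection of bad input
            pass
        except Exception:              # anything else is a finding
            unexpected += 1
    return unexpected

findings = fuzz(parse_quantity)
```

A handler that only ever raises its documented exception survives the run with zero findings; a crash with any other exception type is exactly the kind of bug fuzzing exists to surface.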

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Harden operating system: Configure operating systems to meet best practices. 
  +  [Securing Amazon Linux](https://www.cisecurity.org/benchmark/amazon_linux/) 
  +  [Securing Microsoft Windows Server](https://www.cisecurity.org/benchmark/microsoft_windows_server/) 
+  Harden containerized resources: Configure containerized resources to meet security best practices. 
+  Implement AWS Lambda best practices. 
  +  [AWS Lambda best practices](https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
+  [Replacing a Bastion Host with Amazon EC2 Systems Manager](https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-manager/) 
+  [Security Overview of AWS Lambda](https://pages.awscloud.com/rs/112-TZM-766/images/Overview-AWS-Lambda-Security.pdf) 

 **Related videos:** 
+  [Running high-security workloads on Amazon EKS](https://youtu.be/OWRWDXszR-4) 
+  [Securing Serverless and Container Services](https://youtu.be/kmSdyN9qiXY) 
+  [Security best practices for the Amazon EC2 instance metadata service](https://youtu.be/2B5bhZzayjI) 

 **Related examples:** 
+  [Lab: Automated Deployment of Web Application Firewall](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_Web_Application_Firewall/README.html) 

# SEC06-BP03 Implement managed services
<a name="sec_protect_compute_implement_managed_services"></a>

 Implement services that manage resources, such as Amazon Relational Database Service (Amazon RDS), AWS Lambda, and Amazon Elastic Container Service (Amazon ECS), to reduce your security maintenance tasks as part of the shared responsibility model. For example, Amazon RDS helps you set up, operate, and scale a relational database, and automates administration tasks such as hardware provisioning, database setup, patching, and backups. This means you have more time to focus on securing your application in the other ways described in the AWS Well-Architected Framework. Lambda lets you run code without provisioning or managing servers, so you only need to focus on connectivity, invocation, and security at the code level, not the infrastructure or operating system. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Explore available services: Explore, test, and implement services that manage resources, such as Amazon RDS, AWS Lambda, and Amazon ECS. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Website ](https://aws.amazon.com/)
+  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
+  [Replacing a Bastion Host with Amazon EC2 Systems Manager](https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-manager/) 
+  [Security Overview of AWS Lambda](https://pages.awscloud.com/rs/112-TZM-766/images/Overview-AWS-Lambda-Security.pdf) 

 **Related videos:** 
+  [Running high-security workloads on Amazon EKS](https://youtu.be/OWRWDXszR-4) 
+  [Securing Serverless and Container Services](https://youtu.be/kmSdyN9qiXY) 
+  [Security best practices for the Amazon EC2 instance metadata service](https://youtu.be/2B5bhZzayjI) 

 **Related examples:** 
+ [Lab: AWS Certificate Manager Request Public Certificate ](https://wellarchitectedlabs.com/security/200_labs/200_certificate_manager_request_public_certificate/)

# SEC06-BP04 Automate compute protection
<a name="sec_protect_compute_auto_protection"></a>

 Automate your compute protection mechanisms, including vulnerability management, attack surface reduction, and resource management. Automation frees you to invest time in securing other aspects of your workload, and reduces the risk of human error. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Automate configuration management: Enforce and validate secure configurations automatically by using a configuration management service or tool. 
  +  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
  +  [AWS CloudFormation](https://aws.amazon.com/cloudformation/) 
  +  [Lab: Automated deployment of VPC](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_VPC/README.html) 
  +  [Lab: Automated deployment of EC2 web application](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_EC2_Web_Application/README.html) 
+  Automate patching of Amazon Elastic Compute Cloud (Amazon EC2) instances: AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications. 
  +  [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) 
  +  [Centralized multi-account and multi-Region patching with AWS Systems Manager Automation](https://aws.amazon.com/blogs/mt/centralized-multi-account-and-multi-region-patching-with-aws-systems-manager-automation/) 
+  Implement intrusion detection and prevention: Implement an intrusion detection and prevention tool to monitor and stop malicious activity on instances. 
+  Consider AWS Partner solutions: AWS Partners offer hundreds of industry-leading products that are equivalent to, identical to, or integrate with existing controls in your on-premises environments. These products complement the existing AWS services and allow you to deploy a comprehensive security architecture and a more seamless experience across your cloud and on-premises environments. 
  +  [Infrastructure security](https://aws.amazon.com/security/partner-solutions/#infrastructure_security) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS CloudFormation](https://aws.amazon.com/cloudformation/) 
+  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
+  [AWS Systems Manager Patch Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-patch.html) 
+  [Centralized multi-account and multi-Region patching with AWS Systems Manager Automation](https://aws.amazon.com/blogs/mt/centralized-multi-account-and-multi-region-patching-with-aws-systems-manager-automation/) 
+  [Infrastructure security](https://aws.amazon.com/security/partner-solutions/#infrastructure_security) 
+  [Replacing a Bastion Host with Amazon EC2 Systems Manager](https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-manager/) 
+  [Security Overview of AWS Lambda](https://pages.awscloud.com/rs/112-TZM-766/images/Overview-AWS-Lambda-Security.pdf) 

 **Related videos:** 
+  [Running high-security workloads on Amazon EKS](https://youtu.be/OWRWDXszR-4) 
+  [Securing Serverless and Container Services](https://youtu.be/kmSdyN9qiXY) 
+  [Security best practices for the Amazon EC2 instance metadata service](https://youtu.be/2B5bhZzayjI) 

 **Related examples:** 
+  [Lab: Automated Deployment of Web Application Firewall](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_Web_Application_Firewall/README.html) 
+  [Lab: Automated deployment of Amazon EC2 web application](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_EC2_Web_Application/README.html) 

# SEC06-BP05 Enable people to perform actions at a distance
<a name="sec_protect_compute_actions_distance"></a>

 Removing the ability for interactive access reduces the risk of human error and the potential for manual configuration or management mistakes. For example, use a change management workflow to deploy Amazon Elastic Compute Cloud (Amazon EC2) instances using infrastructure as code, and then manage Amazon EC2 instances using tools such as AWS Systems Manager instead of allowing direct access or access through a bastion host. AWS Systems Manager can automate a variety of maintenance and deployment tasks, using features including [automation workflows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html), [documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-documents.html) (playbooks), and the [run command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html). AWS CloudFormation stacks built from pipelines can automate your infrastructure deployment and management tasks without using the AWS Management Console or APIs directly. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Replace console access: Replace console access (SSH or RDP) to instances with AWS Systems Manager Run Command to automate management tasks. 
  +  [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
+  [AWS Systems Manager Run Command](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html) 
+  [Replacing a Bastion Host with Amazon EC2 Systems Manager](https://aws.amazon.com/blogs/mt/replacing-a-bastion-host-with-amazon-ec2-systems-manager/) 
+  [Security Overview of AWS Lambda](https://pages.awscloud.com/rs/112-TZM-766/images/Overview-AWS-Lambda-Security.pdf) 

 **Related videos:** 
+  [Running high-security workloads on Amazon EKS](https://youtu.be/OWRWDXszR-4) 
+  [Securing Serverless and Container Services](https://youtu.be/kmSdyN9qiXY) 
+  [Security best practices for the Amazon EC2 instance metadata service](https://youtu.be/2B5bhZzayjI) 

 **Related examples:** 
+  [Lab: Automated Deployment of Web Application Firewall](https://wellarchitectedlabs.com/Security/200_Automated_Deployment_of_Web_Application_Firewall/README.html) 

# SEC06-BP06 Validate software integrity
<a name="sec_protect_compute_validate_software_integrity"></a>

 Implement mechanisms (for example, code signing) to validate that the software, code, and libraries used in the workload are from trusted sources and have not been tampered with. For example, you should verify the code signing certificate of binaries and scripts to confirm the author, and verify that they have not been altered since the author signed them. [AWS Signer](https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html) can help ensure the trust and integrity of your code by centrally managing the code-signing lifecycle, including signing certificates and public and private keys. You can learn how to use advanced patterns and best practices for code signing with [AWS Lambda](https://aws.amazon.com/blogs/security/best-practices-and-advanced-patterns-for-lambda-code-signing/). Additionally, comparing a checksum of software that you download against the checksum published by the provider can help verify that it has not been tampered with. 
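The checksum comparison described above can be sketched as follows. This is a minimal illustration: in practice, the expected digest is copied from the provider's published release notes, and the artifact bytes below are invented for the example.

```python
import hashlib

# Minimal sketch of checksum validation. In practice the expected digest
# is copied from the provider's published release notes; the artifact
# bytes below are invented for this example.
def sha256_of(data: bytes) -> str:
    """Hex-encoded SHA-256 digest of the given bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Compare the downloaded artifact's digest to the published value."""
    return sha256_of(data) == expected_sha256

artifact = b"example release artifact"
published = sha256_of(artifact)  # normally obtained from the provider

assert verify_download(artifact, published)
assert not verify_download(artifact + b" tampered", published)
print("integrity check passed")
```

Checksums protect against accidental corruption and naive tampering; code signing goes further by also binding the artifact to the author's identity through a certificate.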

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Investigate mechanisms: Code signing is one mechanism that can be used to validate software integrity. 
  +  [NIST: Security Considerations for Code Signing](https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.01262018.pdf) 

## Resources
<a name="resources"></a>

**Related documents:** 
+ [AWS Signer](https://docs.aws.amazon.com/signer/index.html)
+ [New – Code Signing, a Trust and Integrity Control for AWS Lambda](https://aws.amazon.com/blogs/aws/new-code-signing-a-trust-and-integrity-control-for-aws-lambda/) 

# Data protection
<a name="a-data-protection"></a>

**Topics**
+ [SEC 7. How do you classify your data?](sec-07.md)
+ [SEC 8. How do you protect your data at rest?](sec-08.md)
+ [SEC 9. How do you protect your data in transit?](sec-09.md)

# SEC 7. How do you classify your data?
<a name="sec-07"></a>

Classification provides a way to categorize data based on criticality and sensitivity, which helps you determine appropriate protection and retention controls.

**Topics**
+ [SEC07-BP01 Identify the data within your workload](sec_data_classification_identify_data.md)
+ [SEC07-BP02 Define data protection controls](sec_data_classification_define_protection.md)
+ [SEC07-BP03 Automate identification and classification](sec_data_classification_auto_classification.md)
+ [SEC07-BP04 Define data lifecycle management](sec_data_classification_lifecycle_management.md)

# SEC07-BP01 Identify the data within your workload
<a name="sec_data_classification_identify_data"></a>

It’s critical to understand the type and classification of data your workload is processing, the associated business processes, where the data is stored, and who the data owner is. You should also have an understanding of the applicable legal and compliance requirements of your workload, and what data controls need to be enforced. Identifying data is the first step in the data classification journey. 

**Benefits of establishing this best practice:**

 Data classification allows workload owners to identify locations that store sensitive data and determine how that data should be accessed and shared. 

 Data classification aims to answer the following questions: 
+ **What type of data do you have?**

  This could be data such as: 
  +  Intellectual property (IP) such as trade secrets, patents, or contract agreements. 
  +  Protected health information (PHI) such as medical records that contain medical history information connected to an individual. 
  +  Personally identifiable information (PII), such as name, address, date of birth, and national ID or registration number. 
  +  Credit card data, such as the Primary Account Number (PAN), cardholder name, expiration date, and service code number. 
+ **Where is the sensitive data stored?**
+ **Who can access, modify, and delete data?**
  +  Understanding user permissions is essential in guarding against potential data mishandling. 
+ **Who can perform create, read, update, and delete (CRUD) operations?**
  +  Account for potential escalation of privileges by understanding who can manage permissions to the data. 
+ **What business impact might occur if the data is disclosed unintentionally, altered, or deleted?**
  +  Understand the risk consequence if data is modified, deleted, or inadvertently disclosed. 

By knowing the answers to these questions, you can take the following actions: 
+  Decrease sensitive data scope (such as the number of sensitive data locations) and limit access to sensitive data to only approved users. 
+  Gain an understanding of different data types so that you can implement appropriate data protection mechanisms and techniques, such as encryption, data loss prevention, and identity and access management. 
+  Optimize costs by delivering the right control objectives for the data. 
+  Confidently answer questions from regulators and auditors regarding the types and amount of data, and how data of different sensitivities are isolated from each other. 

 **Level of risk exposed if this best practice is not established**: High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Data classification is the act of identifying the sensitivity of data. It might involve tagging to make the data easily searchable and trackable. Data classification also reduces the duplication of data, which can help reduce storage and backup costs while speeding up the search process. 

 Use services such as Amazon Macie to automate at scale both the discovery and classification of sensitive data. Other services, such as Amazon EventBridge and AWS Config, can be used to automate remediation for data security issues such as unencrypted Amazon Simple Storage Service (Amazon S3) buckets and Amazon Elastic Block Store (Amazon EBS) volumes, or untagged data resources. For a complete list of AWS service integrations, see the [EventBridge documentation](https://docs.aws.amazon.com/eventbridge/latest/userguide/event-types.html). 

 [Detecting PII](https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html) in unstructured data such as customer emails, support tickets, product reviews, and social media, is possible by [using Amazon Comprehend](https://aws.amazon.com/blogs/machine-learning/detecting-and-redacting-pii-using-amazon-comprehend/), which is a natural language processing (NLP) service that uses machine learning (ML) to find insights and relationships like people, places, sentiments, and topics in unstructured text. For a list of AWS services that can assist with data identification, see [Common techniques to detect PHI and PII data using AWS services](https://aws.amazon.com/blogs/industries/common-techniques-to-detect-phi-and-pii-data-using-aws-services/). 
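As a toy illustration of the kind of pattern matching involved (Amazon Comprehend and Amazon Macie use far more sophisticated, machine learning-based techniques), the following sketch scans unstructured text for a few hypothetical PII patterns. The patterns and category names are deliberately simple and not suitable for production use.

```python
import re

# Toy illustration only: real PII detection should use a managed service
# such as Amazon Comprehend or Amazon Macie. These patterns are
# hypothetical and far too simple for production use.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def find_pii(text: str) -> dict:
    """Return matches per PII category found in unstructured text."""
    return {name: hits for name, pat in PII_PATTERNS.items()
            if (hits := pat.findall(text))}

ticket = "Customer jane.doe@example.com reported card 4111 1111 1111 1111 declined."
print(find_pii(ticket))
```

In a real pipeline, detections like these would feed remediation (redaction, tagging, or access restriction) rather than just reporting, and managed services avoid the false-positive and coverage problems inherent in hand-written patterns.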

 Another method that supports data classification and protection is [AWS resource tagging](https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html). Tagging allows you to assign metadata to your AWS resources that you can use to manage, identify, organize, search for, and filter resources. 

 In some cases, you might choose to tag entire resources (such as an S3 bucket), especially when a specific workload or service is expected to store, process, or transmit data of an already known classification. 

 Where appropriate, you can tag an S3 bucket instead of individual objects for ease of administration and security maintenance. 

### Implementation steps
<a name="implementation-steps"></a>

**Detect sensitive data within Amazon S3: **

1.  Before starting, make sure you have the appropriate permissions to access the Amazon Macie console and API operations. For additional details, see [Getting started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html). 

1.  Use Amazon Macie to perform automated data discovery when your sensitive data resides in [Amazon S3](https://aws.amazon.com/s3/). 
   +  Use the [Getting Started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) guide to configure a repository for sensitive data discovery results and create a discovery job for sensitive data. 
   +  [How to use Amazon Macie to preview sensitive data in S3 buckets.](https://aws.amazon.com/blogs/security/how-to-use-amazon-macie-to-preview-sensitive-data-in-s3-buckets/) 

      By default, Macie analyzes objects by using the set of managed data identifiers that we recommend for automated sensitive data discovery. You can tailor the analysis by configuring Macie to use specific managed data identifiers, custom data identifiers, and allow lists when it performs automated sensitive data discovery for your account or organization. You can adjust the scope of the analysis by excluding specific buckets (for example, S3 buckets that typically store AWS logging data). 

1.  To configure and use automated sensitive data discovery, see [Performing automated sensitive data discovery with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/discovery-asdd-account-manage.html). 

1.  You might also consider [Automated Data Discovery for Amazon Macie](https://aws.amazon.com/blogs/aws/automated-data-discovery-for-amazon-macie/). 

**Detect sensitive data within Amazon RDS: **

 For more information on data discovery in [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) databases, see [Enabling data classification for Amazon RDS database with Macie](https://aws.amazon.com/blogs/security/enabling-data-classification-for-amazon-rds-database-with-amazon-macie/). 

**Detect sensitive data within DynamoDB: **
+  [Detecting sensitive data in DynamoDB with Macie](https://aws.amazon.com/blogs/security/detecting-sensitive-data-in-dynamodb-with-macie/) explains how to use Amazon Macie to detect sensitive data in [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) tables by exporting the data to Amazon S3 for scanning. 

**AWS Partner solutions: **
+  Consider using our extensive AWS Partner Network. AWS Partners have extensive tools and compliance frameworks that directly integrate with AWS services. Partners can provide you with a tailored governance and compliance solution to help you meet your organizational needs. 
+  For customized solutions in data classification, see [Data governance in the age of regulation and compliance requirements](https://aws.amazon.com/big-data/featured-partner-solutions-data-governance-compliance/). 

 You can automatically enforce the tagging standards that your organization adopts by creating and deploying policies using AWS Organizations. Tag policies let you specify rules that define valid key names and what values are valid for each key. You can choose to monitor only, which gives you an opportunity to evaluate and clean up your existing tags. After your tags are in compliance with your chosen standards, you can turn on enforcement in the tag policies to prevent non-compliant tags from being created. For more details, see [Securing resource tags used for authorization using a service control policy in AWS Organizations](https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/) and the example policy on [preventing tags from being modified except by authorized principals](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps_examples_tagging.html#example-require-restrict-tag-mods-to-admin). 
+  To begin using tag policies in [AWS Organizations](https://aws.amazon.com/organizations/), it’s strongly recommended that you follow the workflow in [Getting started with tag policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-getting-started.html) before moving on to more advanced tag policies. Understanding the effects of attaching a simple tag policy to a single account before expanding to an entire organizational unit (OU) or organization allows you to see a tag policy’s effects before you enforce compliance with the tag policy. [Getting started with tag policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-getting-started.html) provides links to instructions for more advanced policy-related tasks. 
+  Consider evaluating other [AWS services and features](https://docs.aws.amazon.com/whitepapers/latest/data-classification/using-aws-cloud-to-support-data-classification.html#aws-services-and-features) that support data classification, which are listed in the [Data Classification](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification.html) whitepaper. 
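A tag policy enforcing a data classification tag might look like the following sketch. The `DataClassification` key, its allowed values, and the enforced resource type are hypothetical; adapt them to your organization's classification schema.

```json
{
  "tags": {
    "DataClassification": {
      "tag_key": { "@@assign": "DataClassification" },
      "tag_value": {
        "@@assign": ["Public", "Internal", "Confidential", "Restricted"]
      },
      "enforced_for": { "@@assign": ["ec2:instance"] }
    }
  }
}
```

Attached in monitor-only mode, this policy reports non-compliant tags; once existing tags are cleaned up, the `enforced_for` block prevents new non-compliant tags from being applied to the listed resource types.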

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Getting Started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) 
+  [Automated data discovery with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/discovery-asdd.html) 
+  [Getting started with tag policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies-getting-started.html) 
+  [Detecting PII entities](https://docs.aws.amazon.com/comprehend/latest/dg/how-pii.html) 

 **Related blogs:** 
+  [How to use Amazon Macie to preview sensitive data in S3 buckets.](https://aws.amazon.com/blogs/security/how-to-use-amazon-macie-to-preview-sensitive-data-in-s3-buckets/) 
+  [Performing automated sensitive data discovery with Amazon Macie.](https://aws.amazon.com/blogs/aws/automated-data-discovery-for-amazon-macie/) 
+  [Common techniques to detect PHI and PII data using AWS Services](https://aws.amazon.com/blogs/industries/common-techniques-to-detect-phi-and-pii-data-using-aws-services/) 
+  [Detecting and redacting PII using Amazon Comprehend](https://aws.amazon.com/blogs/machine-learning/detecting-and-redacting-pii-using-amazon-comprehend/) 
+  [Securing resource tags used for authorization using a service control policy in AWS Organizations](https://aws.amazon.com/blogs/security/securing-resource-tags-used-for-authorization-using-service-control-policy-in-aws-organizations/) 
+  [Enabling data classification for Amazon RDS database with Macie](https://aws.amazon.com/blogs/security/enabling-data-classification-for-amazon-rds-database-with-amazon-macie/) 
+  [Detecting sensitive data in DynamoDB with Macie](https://aws.amazon.com/blogs/security/detecting-sensitive-data-in-dynamodb-with-macie/) 

 **Related videos:** 
+  [Event-driven data security using Amazon Macie](https://www.youtube.com/watch?v=onqA7MJssoU) 
+  [Amazon Macie for data protection and governance](https://www.youtube.com/watch?v=SmMSt0n6a4k) 
+  [Fine-tune sensitive data findings with allow lists](https://www.youtube.com/watch?v=JmQ_Hybh2KI) 

# SEC07-BP02 Define data protection controls
<a name="sec_data_classification_define_protection"></a>

 Protect data according to its classification level. For example, secure data classified as public by using relevant recommendations while protecting sensitive data with additional controls. 

By using resource tags, separate AWS accounts per sensitivity (and potentially also for each caveat, enclave, or community of interest), IAM policies, AWS Organizations SCPs, AWS Key Management Service (AWS KMS), and AWS CloudHSM, you can define and implement your policies for data classification and protection with encryption. For example, if you have a project with S3 buckets that contain highly critical data or Amazon Elastic Compute Cloud (Amazon EC2) instances that process confidential data, they can be tagged with a `Project=ABC` tag. Only your immediate team knows what the project code means, and it provides a way to use attribute-based access control. You can define levels of access to the AWS KMS encryption keys through key policies and grants to ensure that only appropriate services have access to the sensitive content through a secure mechanism. If you are making authorization decisions based on tags, you should make sure that the permissions on the tags are defined appropriately using tag policies in AWS Organizations.
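The tag-based pattern above can be expressed as an IAM policy condition. The following sketch, reusing the hypothetical `Project=ABC` tag from the example, allows principals to start and stop only the Amazon EC2 instances that share their `Project` tag; the actions and resource ARN are illustrative.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowProjectTaggedInstanceOps",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/Project": "ABC",
          "aws:PrincipalTag/Project": "ABC"
        }
      }
    }
  ]
}
```

Because both the resource tag and the principal tag must match, this is attribute-based access control: adding an instance to the project is a tagging operation, not a policy change, which is why the tags themselves must be protected with tag policies.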

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Define your data identification and classification schema: Identification and classification of your data is performed to assess the potential impact and type of data you store, and who can access it. 
  +  [AWS Documentation](https://docs.aws.amazon.com/) 
+  Discover available AWS controls: For the AWS services you are or plan to use, discover the security controls. Many services have a security section in their documentation. 
  +  [AWS Documentation](https://docs.aws.amazon.com/) 
+  Identify AWS compliance resources: Identify resources that AWS has available to assist. 
  +  [https://aws.amazon.com/compliance/](https://aws.amazon.com/compliance/?ref=wellarchitected) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Documentation](https://docs.aws.amazon.com/) 
+  [Data Classification whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification.html) 
+  [Getting started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) 
+  [AWS Compliance](https://aws.amazon.com/compliance/) 

 **Related videos:** 
+  [Introducing the New Amazon Macie](https://youtu.be/I-ewoQekdXE) 

# SEC07-BP03 Automate identification and classification
<a name="sec_data_classification_auto_classification"></a>

 Automating the identification and classification of data can help you implement the correct controls. Using automation for this instead of direct access from a person reduces the risk of human error and exposure. You should evaluate using a tool, such as [Amazon Macie](https://aws.amazon.com/macie/), that uses machine learning to automatically discover, classify, and protect sensitive data in AWS. Amazon Macie recognizes sensitive data, such as personally identifiable information (PII) or intellectual property, and provides you with dashboards and alerts that give visibility into how this data is being accessed or moved. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Use Amazon Simple Storage Service (Amazon S3) Inventory: Amazon S3 inventory is one of the tools you can use to audit and report on the replication and encryption status of your objects. 
  +  [Amazon S3 Inventory](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html) 
+  Consider Amazon Macie: Amazon Macie uses machine learning to automatically discover and classify data stored in Amazon S3.
  +  [Amazon Macie](https://aws.amazon.com/macie/) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Amazon Macie](https://aws.amazon.com/macie/) 
+  [Amazon S3 Inventory](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html) 
+  [Data Classification Whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification.html) 
+  [Getting started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) 

 **Related videos:** 
+  [Introducing the New Amazon Macie](https://youtu.be/I-ewoQekdXE) 

# SEC07-BP04 Define data lifecycle management
<a name="sec_data_classification_lifecycle_management"></a>

 Your defined lifecycle strategy should be based on sensitivity level as well as legal and organization requirements. Consider aspects such as the duration for which you retain data, data destruction processes, data access management, data transformation, and data sharing. When choosing a data classification methodology, balance usability against access. You should also accommodate the multiple levels of access and nuances for implementing a secure, but still usable, approach for each level. Always use a defense-in-depth approach and reduce human access to data and mechanisms for transforming, deleting, or copying data. For example, require users to strongly authenticate to an application, and give the application, rather than the users, the requisite access permissions to perform actions at a distance. In addition, ensure that users come from a trusted network path and require access to the decryption keys. Use tools, such as dashboards and automated reporting, to give users information from the data rather than giving them direct access to the data. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Identify data types: Identify the types of data that you are storing or processing in your workload. That data could be text, images, binary databases, and so forth. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Data Classification Whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification.html) 
+  [Getting started with Amazon Macie](https://docs.aws.amazon.com/macie/latest/user/getting-started.html) 

 **Related videos:** 
+  [Introducing the New Amazon Macie](https://youtu.be/I-ewoQekdXE) 

# SEC 8. How do you protect your data at rest?
<a name="sec-08"></a>

Protect your data at rest by implementing multiple controls to reduce the risk of unauthorized access or mishandling.

**Topics**
+ [SEC08-BP01 Implement secure key management](sec_protect_data_rest_key_mgmt.md)
+ [SEC08-BP02 Enforce encryption at rest](sec_protect_data_rest_encrypt.md)
+ [SEC08-BP03 Automate data at rest protection](sec_protect_data_rest_automate_protection.md)
+ [SEC08-BP04 Enforce access control](sec_protect_data_rest_access_control.md)
+ [SEC08-BP05 Use mechanisms to keep people away from data](sec_protect_data_rest_use_people_away.md)

# SEC08-BP01 Implement secure key management
<a name="sec_protect_data_rest_key_mgmt"></a>

 Secure key management includes the storage, rotation, access control, and monitoring of key material required to secure data at rest for your workload. 

 **Desired outcome:** A scalable, repeatable, and automated key management mechanism. The mechanism should provide the ability to enforce least privilege access to key material, and provide the correct balance among key availability, confidentiality, and integrity. Access to keys should be monitored, and key material rotated through an automated process. Key material should never be accessible to human identities. 

**Common anti-patterns:** 
+  Human access to unencrypted key material. 
+  Creating custom cryptographic algorithms. 
+  Overly broad permissions to access key material. 

 **Benefits of establishing this best practice:** By establishing a secure key management mechanism for your workload, you can help provide protection for your content against unauthorized access. Additionally, you may be subject to regulatory requirements to encrypt your data. An effective key management solution can provide technical mechanisms aligned to those regulations to protect key material. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Many regulatory requirements and best practices include encryption of data at rest as a fundamental security control. In order to comply with this control, your workload needs a mechanism to securely store and manage the key material used to encrypt your data at rest. 

 AWS offers AWS Key Management Service (AWS KMS) to provide durable, secure, and redundant storage for AWS KMS keys. [Many AWS services integrate with AWS KMS](https://aws.amazon.com/kms/features/#integration) to support encryption of your data. AWS KMS uses FIPS 140-2 Level 3 validated hardware security modules to protect your keys. There is no mechanism to export AWS KMS keys in plain text. 

 When deploying workloads using a multi-account strategy, it is considered [best practice](https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/application.html#app-kms) to keep AWS KMS keys in the same account as the workload that uses them. In this distributed model, responsibility for managing the AWS KMS keys resides with the application team. In other use cases, organizations may choose to store AWS KMS keys in a centralized account. This centralized structure requires additional policies to enable the cross-account access required for the workload account to access keys stored in the centralized account, but may be more applicable in use cases where a single key is shared across multiple AWS accounts. 

 Regardless of where the key material is stored, access to the key should be tightly controlled through the use of [key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) and IAM policies. Key policies are the primary way to control access to an AWS KMS key. In addition, AWS KMS key grants can provide AWS services with access to encrypt and decrypt data on your behalf. Take time to review the [best practices for access control to your AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies-best-practices.html). 
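As a minimal sketch of the least-privilege guidance above, the following builds a key policy that separates key administration from key usage. The account ID and role names are hypothetical placeholders; review any real policy against the AWS KMS key policy documentation before use.

```python
import json

ACCOUNT_ID = "111122223333"  # placeholder account ID

key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Key administrators can manage the key but cannot use it
            # for cryptographic operations.
            "Sid": "AllowKeyAdministration",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/KeyAdminRole"},
            "Action": [
                "kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                "kms:Update*", "kms:Revoke*", "kms:Disable*", "kms:Get*",
                "kms:Delete*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion",
            ],
            "Resource": "*",
        },
        {
            # The workload role can use the key but cannot administer it.
            "Sid": "AllowWorkloadUse",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{ACCOUNT_ID}:role/WorkloadRole"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
        },
    ],
}

print(json.dumps(key_policy, indent=2))
```

Splitting the statements this way means no single principal can both change the key policy and decrypt data, which supports separation of duties.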

 It is best practice to monitor the use of encryption keys to detect unusual access patterns. Operations performed using AWS managed keys and customer managed keys stored in AWS KMS can be logged in AWS CloudTrail and should be reviewed periodically. Special attention should be paid to monitoring key destruction events. To mitigate accidental or malicious destruction of key material, key destruction events do not delete the key material immediately. Attempts to delete keys in AWS KMS are subject to a [waiting period](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html#deleting-keys-how-it-works), which defaults to 30 days, providing administrators time to review these actions and roll back the request if necessary. 
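To illustrate the review step, here is a hypothetical sketch that flags key-deletion scheduling calls in already parsed CloudTrail records so an administrator can act within the waiting period. The field names follow the CloudTrail record format; the sample records are synthetic.

```python
def find_key_deletion_events(records):
    """Return (key ARN, caller ARN) pairs for ScheduleKeyDeletion calls."""
    flagged = []
    for r in records:
        if (r.get("eventSource") == "kms.amazonaws.com"
                and r.get("eventName") == "ScheduleKeyDeletion"):
            key_arn = r.get("resources", [{}])[0].get("ARN", "unknown")
            caller = r.get("userIdentity", {}).get("arn", "unknown")
            flagged.append((key_arn, caller))
    return flagged

# Synthetic sample records: one routine Decrypt call, one deletion request.
sample = [
    {"eventSource": "kms.amazonaws.com", "eventName": "Decrypt"},
    {"eventSource": "kms.amazonaws.com", "eventName": "ScheduleKeyDeletion",
     "resources": [{"ARN": "arn:aws:kms:us-east-1:111122223333:key/example"}],
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/KeyAdminRole"}},
]
print(find_key_deletion_events(sample))
```

In practice, logic like this would run as a scheduled job or an Amazon EventBridge rule target rather than over a static list.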

 Most AWS services use AWS KMS in a way that is transparent to you: your only requirement is to decide whether to use an AWS managed or customer managed key. If your workload requires the direct use of AWS KMS to encrypt or decrypt data, the best practice is to use [envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) to protect your data. The [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) can provide your applications with client-side encryption primitives to implement envelope encryption and integrate with AWS KMS. 
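The envelope encryption pattern itself can be sketched in a few lines: a unique data key encrypts the payload, and only the wrapped (encrypted) data key is stored alongside the ciphertext. The XOR keystream below is a deliberately toy stand-in for a real cipher, used only to show the structure; in practice the AWS Encryption SDK performs these steps with AES-GCM and wraps the data key by calling AWS KMS.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric "cipher" (NOT secure): XOR against a SHA-256-derived
    # stream. Stands in for AES-GCM purely to illustrate the pattern.
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def encrypt_envelope(wrapping_key: bytes, plaintext: bytes):
    data_key = os.urandom(32)                             # unique per object
    ciphertext = _keystream_xor(data_key, plaintext)      # data key encrypts data
    wrapped_key = _keystream_xor(wrapping_key, data_key)  # AWS KMS performs this step
    return wrapped_key, ciphertext  # plaintext data key is discarded, never stored

def decrypt_envelope(wrapping_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = _keystream_xor(wrapping_key, wrapped_key)  # unwrap the data key
    return _keystream_xor(data_key, ciphertext)

master = os.urandom(32)  # stands in for the KMS-held wrapping key
wrapped, ct = encrypt_envelope(master, b"sensitive record")
assert decrypt_envelope(master, wrapped, ct) == b"sensitive record"
```

The key property the sketch demonstrates is that the wrapping key never touches the bulk data, so rotating or revoking access to it governs access to every data key it has wrapped.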

### Implementation steps
<a name="implementation-steps"></a>

1.  Determine the appropriate [key management options](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) (AWS managed or customer managed) for the key. 
   +  For ease of use, AWS offers AWS owned and AWS managed keys for most services, which provide encryption-at-rest capability without the need to manage key material or key policies. 
   +  When using customer managed keys, consider the default key store to provide the best balance between agility, security, data sovereignty, and availability. Other use cases may require the use of custom key stores with [AWS CloudHSM](https://aws.amazon.com/cloudhsm/) or the [external key store](https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html). 

1.  Review the list of services that you are using for your workload to understand how AWS KMS integrates with each service. For example, Amazon EC2 instances can use encrypted Amazon EBS volumes; verifying that snapshots created from those volumes are also encrypted using a customer managed key mitigates accidental disclosure of unencrypted snapshot data. 
   +  [How AWS services use AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/service-integration.html) 
   +  For detailed information about the encryption options that an AWS service offers, see the Encryption at Rest topic in the user guide or the developer guide for the service. 

1.  Implement AWS KMS: AWS KMS makes it simple for you to create and manage keys and control the use of encryption across a wide range of AWS services and in your applications. 
   +  [Getting started: AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) 
   +  Review the [best practices for access control to your AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies-best-practices.html). 

1.  Consider AWS Encryption SDK: Use the AWS Encryption SDK with AWS KMS integration when your application needs to encrypt data client-side. 
   +  [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) 

1.  Enable [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) to automatically review and notify if there are overly broad AWS KMS key policies. 

1.  Enable [Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/kms-controls.html) to receive notifications if there are misconfigured key policies, keys scheduled for deletion, or keys without automated rotation enabled. 

1.  Determine the logging level appropriate for your AWS KMS keys. Since calls to AWS KMS, including read-only events, are logged, the CloudTrail logs associated with AWS KMS can become voluminous. 
   +  Some organizations prefer to segregate the AWS KMS logging activity into a separate trail. For more detail, see the [Logging AWS KMS API calls with CloudTrail](https://docs.aws.amazon.com/kms/latest/developerguide/logging-using-cloudtrail.html) section of the AWS KMS developer guide. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) 
+  [AWS cryptographic services and tools](https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-overview.html) 
+  [Protecting Amazon S3 Data Using Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html) 
+  [Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) 
+  [Digital sovereignty pledge](https://aws.amazon.com/blogs/security/aws-digital-sovereignty-pledge-control-without-compromise/) 
+  [Demystifying AWS KMS key operations, bring your own key, custom key store, and ciphertext portability](https://aws.amazon.com/blogs/security/demystifying-kms-keys-operations-bring-your-own-key-byok-custom-key-store-and-ciphertext-portability/) 
+  [AWS Key Management Service cryptographic details](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 
+  [AWS data protection: Using locks, keys, signatures, and certificates](https://www.youtube.com/watch?v=lD34wbc7KNA) 

 **Related examples:** 
+  [Implement advanced access control mechanisms using AWS KMS](https://catalog.workshops.aws/advkmsaccess/en-US/introduction) 

# SEC08-BP02 Enforce encryption at rest
<a name="sec_protect_data_rest_encrypt"></a>

 You should enforce the use of encryption for data at rest. Encryption maintains the confidentiality of sensitive data in the event of unauthorized access or accidental disclosure. 

 **Desired outcome:** Private data should be encrypted by default when at rest. Encryption helps maintain confidentiality of the data and provides an additional layer of protection against intentional or inadvertent data disclosure or exfiltration. Data that is encrypted cannot be read or accessed without first decrypting it. Any data stored unencrypted should be inventoried and controlled. 

 **Common anti-patterns:** 
+  Not using encrypt-by-default configurations. 
+  Providing overly permissive access to decryption keys. 
+  Not monitoring the use of encryption and decryption keys. 
+  Storing data unencrypted. 
+  Using the same encryption key for all data regardless of data usage, types, and classification. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Map encryption keys to data classifications within your workloads. This approach helps protect against overly permissive access when using either a single, or very small number of encryption keys for your data (see [SEC07-BP01 Identify the data within your workload](sec_data_classification_identify_data.md)). 

 AWS Key Management Service (AWS KMS) integrates with many AWS services to make it easier to encrypt your data at rest. For example, in Amazon Simple Storage Service (Amazon S3), you can set [default encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html) on a bucket so that new objects are automatically encrypted. When using AWS KMS, consider how tightly the data needs to be restricted. Default and service-controlled AWS KMS keys are managed and used on your behalf by AWS. For sensitive data that requires fine-grained access to the underlying encryption key, consider customer managed keys (CMKs). You have full control over CMKs, including rotation and access management through the use of key policies. 

 Additionally, [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) and [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html) support the enforcement of encryption by setting default encryption. You can use [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) to check automatically that you are using encryption, for example, for [Amazon Elastic Block Store (Amazon EBS) volumes](https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html), [Amazon Relational Database Service (Amazon RDS) instances](https://docs.aws.amazon.com/config/latest/developerguide/rds-storage-encrypted.html), and [Amazon S3 buckets](https://docs.aws.amazon.com/config/latest/developerguide/s3-default-encryption-kms.html). 
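The evaluation logic behind a rule like the managed encrypted-volumes AWS Config rule can be sketched locally. The function below takes (already retrieved) volume descriptions and reports which are non-compliant; field names follow the EC2 `DescribeVolumes` response shape, and the data is synthetic.

```python
def evaluate_volume_encryption(volumes):
    """Map each volume ID to COMPLIANT or NON_COMPLIANT based on encryption."""
    results = {}
    for v in volumes:
        encrypted = v.get("Encrypted", False)  # absent field treated as unencrypted
        results[v["VolumeId"]] = "COMPLIANT" if encrypted else "NON_COMPLIANT"
    return results

# Synthetic volume descriptions for illustration.
volumes = [
    {"VolumeId": "vol-0aaa", "Encrypted": True},
    {"VolumeId": "vol-0bbb", "Encrypted": False},
]
print(evaluate_volume_encryption(volumes))
```

AWS Config runs equivalent checks continuously against your recorded resource configuration, so you do not need to implement this yourself; the sketch only shows what the rule asserts.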

 AWS also provides options for client-side encryption, allowing you to encrypt data prior to uploading it to the cloud. The AWS Encryption SDK provides a way to encrypt your data using [envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping). You provide the wrapping key, and the AWS Encryption SDK generates a unique data key for each data object it encrypts. Consider AWS CloudHSM if you need a managed single-tenant hardware security module (HSM). AWS CloudHSM allows you to generate, import, and manage cryptographic keys on a FIPS 140-2 level 3 validated HSM. Some use cases for AWS CloudHSM include protecting private keys for issuing a certificate authority (CA), and turning on transparent data encryption (TDE) for Oracle databases. The AWS CloudHSM Client SDK provides software that allows you to encrypt data client side using keys stored inside AWS CloudHSM prior to uploading your data into AWS. The Amazon DynamoDB Encryption Client also allows you to encrypt and sign items prior to upload into a DynamoDB table. 

### Implementation steps
<a name="implementation-steps"></a>
+  **Enforce encryption at rest for Amazon S3:** Implement [Amazon S3 bucket default encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html). 
+  **Configure [default encryption for new Amazon EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html):** Specify that you want all newly created Amazon EBS volumes to be created in encrypted form, with the option of using the default key provided by AWS or a key that you create. 
+  **Configure encrypted Amazon Machine Images (AMIs):** Copying an existing AMI with encryption configured will automatically encrypt root volumes and snapshots. 
+  **Configure [Amazon RDS encryption](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html):** Configure encryption for your Amazon RDS database clusters and snapshots at rest by using the encryption option. 
+  **Create and configure AWS KMS keys with policies that limit access to the appropriate principals for each classification of data:** For example, create one AWS KMS key for encrypting production data and a different key for encrypting development or test data. You can also provide key access to other AWS accounts. Consider having different accounts for your development and production environments. If your production environment needs to decrypt artifacts in the development account, you can edit the CMK policy used to encrypt the development artifacts to give the production account the ability to decrypt those artifacts. The production environment can then ingest the decrypted data for use in production. 
+  **Configure encryption in additional AWS services:** For other AWS services you use, review the [security documentation](https://docs.aws.amazon.com/security/) for that service to determine the service’s encryption options. 
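The cross-account decrypt arrangement described above can be expressed as a single key policy statement on the development account's key. The account IDs here are hypothetical placeholders; note that the statement grants decrypt access only, so the production account cannot encrypt with or administer the key.

```python
import json

# Illustrative statement for the development account's key policy:
# the production account (444455556666, a placeholder) may decrypt
# development artifacts but nothing more.
DEV_KEY_STATEMENT = {
    "Sid": "AllowProductionDecrypt",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey"],
    "Resource": "*",
}

print(json.dumps(DEV_KEY_STATEMENT, indent=2))
```

The production account must also grant its own principals IAM permission to call `kms:Decrypt` on the development key; cross-account access requires both sides to allow it.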

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Crypto Tools](https://docs.aws.amazon.com/aws-crypto-tools) 
+  [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) 
+  [AWS KMS Cryptographic Details Whitepaper](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 
+  [AWS Key Management Service](https://aws.amazon.com/kms) 
+  [AWS cryptographic services and tools](https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-overview.html) 
+  [Amazon EBS Encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) 
+  [Default encryption for Amazon EBS volumes](https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/) 
+  [Encrypting Amazon RDS Resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) 
+  [How do I enable default encryption for an Amazon S3 bucket?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html) 
+  [Protecting Amazon S3 Data Using Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC08-BP03 Automate data at rest protection
<a name="sec_protect_data_rest_automate_protection"></a>

 Use automated tools to validate and enforce data at rest controls continuously, for example, by verifying that only encrypted storage resources exist. You can [automate validation that all EBS volumes are encrypted](https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html) using [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/evaluate-config.html). [AWS Security Hub CSPM](http://aws.amazon.com/security-hub/) can also verify several different controls through automated checks against security standards. Additionally, your AWS Config Rules can automatically [remediate noncompliant resources](https://docs.aws.amazon.com/config/latest/developerguide/remediation.html#setup-autoremediation). 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation_guidance"></a>

 *Data at rest* represents any data that you persist in non-volatile storage for any duration in your workload. This includes block storage, object storage, databases, archives, IoT devices, and any other storage medium on which data is persisted. Protecting your data at rest reduces the risk of unauthorized access when encryption and appropriate access controls are implemented. 

 Enforce encryption at rest: You should ensure that the only way to store data is by using encryption. AWS KMS integrates seamlessly with many AWS services to make it easier for you to encrypt all your data at rest. For example, in Amazon Simple Storage Service (Amazon S3) you can set [default encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html) on a bucket so that all new objects are automatically encrypted. Additionally, [Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) and [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html) support the enforcement of encryption by setting default encryption. You can use [AWS Managed Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) to check automatically that you are using encryption, for example, for [EBS volumes](https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html), [Amazon Relational Database Service (Amazon RDS) instances](https://docs.aws.amazon.com/config/latest/developerguide/rds-storage-encrypted.html), and [Amazon S3 buckets](https://docs.aws.amazon.com/config/latest/developerguide/s3-default-encryption-kms.html). 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Crypto Tools](https://docs.aws.amazon.com/aws-crypto-tools) 
+  [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC08-BP04 Enforce access control
<a name="sec_protect_data_rest_access_control"></a>

 To help protect your data at rest, enforce access control using mechanisms such as isolation and versioning, and apply the principle of least privilege. Prevent the granting of public access to your data. 

**Desired outcome:** Verify that only authorized users can access data on a need-to-know basis. Protect your data with regular backups and versioning to guard against intentional or inadvertent modification or deletion. Isolate critical data from other data to protect its confidentiality and integrity. 

**Common anti-patterns:**
+  Storing data with different sensitivity requirements or classification together. 
+  Using overly permissive permissions on decryption keys. 
+  Improperly classifying data. 
+  Not retaining detailed backups of important data. 
+  Providing persistent access to production data. 
+  Not auditing data access or regularly reviewing permissions.

**Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 Multiple controls can help protect your data at rest, including access (using least privilege), isolation, and versioning. Access to your data should be audited using detective mechanisms, such as AWS CloudTrail, and service level logs, such as Amazon Simple Storage Service (Amazon S3) access logs. You should inventory what data is publicly accessible, and create a plan to reduce the amount of publicly available data over time. 

 Amazon S3 Glacier Vault Lock and Amazon S3 Object Lock provide mandatory access control for your data: once a vault policy is locked with the compliance option, not even the root user can change it until the lock expires. 
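For the Object Lock side of this, a bucket-level default retention configuration looks like the following. The structure matches the S3 Object Lock configuration shape; the retention period is an illustrative value, and compliance mode should be chosen deliberately since it cannot be shortened or removed once applied.

```python
# Illustrative S3 Object Lock configuration: every new object version
# receives a default retention period that, in COMPLIANCE mode, no user
# (including the root user) can shorten or remove before it expires.
object_lock_configuration = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # GOVERNANCE mode, by contrast, can be
                                   # bypassed by specially privileged users
            "Days": 365,           # illustrative retention period
        }
    },
}
```

Choosing between GOVERNANCE and COMPLIANCE mode is the key design decision: GOVERNANCE supports operational flexibility, while COMPLIANCE supports regulatory retention requirements where immutability must be absolute.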

### Implementation steps
<a name="implementation-steps"></a>
+  **Enforce access control**: Enforce access control with least privileges, including access to encryption keys. 
+  **Separate data based on different classification levels**: Use different AWS accounts for data classification levels, and manage those accounts using [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html). 
+  **Review AWS Key Management Service (AWS KMS) policies**: [Review the level of access](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) granted in AWS KMS policies. 
+  **Review Amazon S3 bucket and object permissions**: Regularly review the level of access granted in S3 bucket policies. Best practice is to avoid using publicly readable or writeable buckets. Consider using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) to detect buckets that are publicly available, and Amazon CloudFront to serve content from Amazon S3. Verify that buckets that should not allow public access are properly configured to prevent public access. By default, all S3 buckets are private, and can only be accessed by users that have been explicitly granted access. 
+  **Use [AWS IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html):** IAM Access Analyzer analyzes Amazon S3 buckets and generates a finding when [an S3 policy grants access to an external entity.](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-resources.html#access-analyzer-s3) 
+  **Use [Amazon S3 versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) and [object lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html) when appropriate**. 
+  **Use [Amazon S3 Inventory](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html)**: Amazon S3 Inventory can be used to audit and report on the replication and encryption status of your S3 objects. 
+  **Review [Amazon EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) and [AMI sharing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharing-amis.html) permissions**: Sharing permissions can allow images and volumes to be shared with AWS accounts that are external to your workload. 
+  **Review [AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) Shares periodically to determine whether resources should continue to be shared.** Resource Access Manager allows you to share resources, such as AWS Network Firewall policies, Amazon Route 53 resolver rules, and subnets, within your Amazon VPCs. Audit shared resources regularly and stop sharing resources which no longer need to be shared. 
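The bucket permission review in the steps above can be partly automated. The hypothetical helper below takes (already retrieved) per-bucket Block Public Access settings and lists buckets whose protections are not fully enabled, so they can be prioritized for review; flag names follow the S3 `PublicAccessBlock` configuration, and the data is synthetic.

```python
# The four S3 Block Public Access flags that together prevent public access.
REQUIRED_FLAGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def buckets_needing_review(settings_by_bucket):
    """Return bucket names where any Block Public Access flag is off or missing."""
    return sorted(
        bucket
        for bucket, cfg in settings_by_bucket.items()
        if not all(cfg.get(flag, False) for flag in REQUIRED_FLAGS)
    )

# Synthetic settings: one fully protected bucket, one partially configured.
settings = {
    "app-logs": {flag: True for flag in REQUIRED_FLAGS},
    "public-web": {"BlockPublicAcls": True, "BlockPublicPolicy": False},
}
print(buckets_needing_review(settings))
```

Buckets on the resulting list are not necessarily public, only unprotected by the account-level guardrail, which is exactly the set a reviewer should inspect first.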

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [SEC03-BP01 Define access requirements](sec_permissions_define.md) 
+  [SEC03-BP02 Grant least privilege access](sec_permissions_least_privileges.md) 

 **Related documents:** 
+  [AWS KMS Cryptographic Details Whitepaper](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 
+  [Introduction to Managing Access Permissions to Your Amazon S3 Resources](https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-managing-access-s3-resources.html) 
+  [Overview of managing access to your AWS KMS resources](https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html) 
+  [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) 
+  [Amazon S3 + Amazon CloudFront: A Match Made in the Cloud](https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/) 
+  [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) 
+  [Locking Objects Using Amazon S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html) 
+  [Sharing an Amazon EBS Snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) 
+  [Shared AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharing-amis.html) 
+  [Hosting a single-page application on Amazon S3](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) 

 **Related videos:** 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC08-BP05 Use mechanisms to keep people away from data
<a name="sec_protect_data_rest_use_people_away"></a>

 Keep all users away from directly accessing sensitive data and systems under normal operational circumstances. For example, use a change management workflow to manage Amazon Elastic Compute Cloud (Amazon EC2) instances using tools instead of allowing direct access or the use of a bastion host. This can be achieved using [AWS Systems Manager Automation](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html), which uses [automation documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/automation-documents.html) that contain the steps required to perform tasks. These documents can be stored in source control, peer reviewed before running, and tested thoroughly to minimize risk compared to shell access. Business users can be given a dashboard instead of direct access to a data store to run queries. Where CI/CD pipelines are not used, determine which controls and processes are required to adequately provide a normally deactivated break-glass access mechanism. 
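As a sketch of the reviewable alternative to shell access described above, a Systems Manager Automation runbook (schema version 0.3) can be expressed as a document like the following. The step and parameter names are illustrative; the point is that the action is declared, parameterized, and source-controllable rather than typed interactively on an instance.

```python
# Illustrative Automation runbook: reboot an instance through a declared,
# reviewable workflow instead of an interactive shell session.
runbook = {
    "schemaVersion": "0.3",
    "description": "Reboot an EC2 instance through a reviewed workflow.",
    "parameters": {
        "InstanceId": {
            "type": "String",
            "description": "(Required) The instance to reboot.",
        }
    },
    "mainSteps": [
        {
            "name": "RebootInstance",
            "action": "aws:executeAwsApi",  # calls an AWS API as a runbook step
            "inputs": {
                "Service": "ec2",
                "Api": "RebootInstances",
                "InstanceIds": ["{{ InstanceId }}"],
            },
        }
    ],
}
```

Because the document is data, it can be diffed in pull requests, peer reviewed, and tested in a non-production account before anyone runs it against production, which is the risk reduction the paragraph above describes.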

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Implement mechanisms to keep people away from data: Mechanisms include using dashboards, such as Amazon QuickSight, to display data to users instead of directly querying. 
  +  [Amazon QuickSight](https://aws.amazon.com/quicksight/) 
+  Automate configuration management: Perform actions at a distance, enforce and validate secure configurations automatically by using a configuration management service or tool. Avoid use of bastion hosts or directly accessing EC2 instances. 
  +  [AWS Systems Manager](https://aws.amazon.com/systems-manager/) 
  +  [AWS CloudFormation](https://aws.amazon.com/cloudformation/) 
  +  [CI/CD Pipeline for AWS CloudFormation templates on AWS](https://aws.amazon.com/quickstart/architecture/cicd-taskcat/) 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS KMS Cryptographic Details Whitepaper](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC 9. How do you protect your data in transit?
<a name="sec-09"></a>

Protect your data in transit by implementing multiple controls to reduce the risk of unauthorized access or loss.

**Topics**
+ [SEC09-BP01 Implement secure key and certificate management](sec_protect_data_transit_key_cert_mgmt.md)
+ [SEC09-BP02 Enforce encryption in transit](sec_protect_data_transit_encrypt.md)
+ [SEC09-BP03 Automate detection of unintended data access](sec_protect_data_transit_auto_unintended_access.md)
+ [SEC09-BP04 Authenticate network communications](sec_protect_data_transit_authentication.md)

# SEC09-BP01 Implement secure key and certificate management
<a name="sec_protect_data_transit_key_cert_mgmt"></a>

 Transport Layer Security (TLS) certificates are used to secure network communications and establish the identity of websites, resources, and workloads over both the internet and private networks. 

 **Desired outcome:** A secure certificate management system that can provision, deploy, store, and renew certificates in a public key infrastructure (PKI). A secure key and certificate management mechanism prevents disclosure of certificate private key material and automatically renews certificates on a periodic basis. It also integrates with other services to provide secure network communications and identity for machine resources inside of your workload. Key material should never be accessible to human identities. 

 **Common anti-patterns:** 
+  Performing manual steps during the certificate deployment or renewal processes. 
+  Paying insufficient attention to certificate authority (CA) hierarchy when designing a private CA. 
+  Using self-signed certificates for public resources. 

 **Benefits of establishing this best practice:** 
+  Simplifies certificate management through automated deployment and renewal 
+  Encourages encryption of data in transit using TLS certificates 
+  Increases security and auditability of certificate actions taken by the certificate authority 
+  Organizes management duties at different layers of the CA hierarchy 

 **Level of risk exposed if this best practice is not established:** High

## Implementation guidance
<a name="implementation-guidance"></a>

 Modern workloads make extensive use of encrypted network communications using PKI protocols such as TLS. PKI certificate management can be complex, but automated certificate provisioning, deployment, and renewal can reduce the friction associated with certificate management. 

 AWS provides two services to manage general-purpose PKI certificates: [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) (ACM) and [AWS Private Certificate Authority (AWS Private CA)](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html). ACM is the primary service that customers use to provision, manage, and deploy certificates for both public-facing and private AWS workloads. ACM issues certificates using AWS Private CA and [integrates](https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html) with many other AWS managed services to provide secure TLS certificates for workloads. 

 AWS Private CA allows you to establish your own root or subordinate certificate authority and issue TLS certificates through an API. You can use these kinds of certificates in scenarios where you control and manage the trust chain on the client side of the TLS connection. In addition to TLS use cases, AWS Private CA can be used to issue certificates to Kubernetes pods, Matter device product attestations, code signing, and other use cases with a [custom template](https://docs.aws.amazon.com/privateca/latest/userguide/UsingTemplates.html). You can also use [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) to provide temporary IAM credentials to on-premises workloads that have been issued X.509 certificates signed by your Private CA. 

 In addition to ACM and AWS Private CA, [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html) provides specialized support for provisioning, managing, and deploying PKI certificates to IoT devices. AWS IoT Core provides specialized mechanisms for [onboarding IoT devices](https://docs.aws.amazon.com/whitepapers/latest/device-manufacturing-provisioning/device-manufacturing-provisioning.html) into your public key infrastructure at scale. 

**Considerations for establishing a private CA hierarchy **

 When you need to establish a private CA, it's important to take special care to properly design the CA hierarchy upfront. It's a best practice to deploy each level of your CA hierarchy into separate AWS accounts when creating a private CA hierarchy. This intentional step reduces the surface area for each level in the CA hierarchy, making it simpler to discover anomalies in CloudTrail log data and reducing the scope of access or impact if there is unauthorized access to one of the accounts. The root CA should reside in its own separate account and should only be used to issue one or more intermediate CA certificates. 

 Then, create one or more intermediate CAs in accounts separate from the root CA's account to issue certificates for end users, devices, or other workloads. Finally, issue certificates from your root CA to the intermediate CAs, which will in turn issue certificates to your end users or devices. For more information on planning your CA deployment and designing your CA hierarchy, including planning for resiliency, cross-region replication, sharing CAs across your organization, and more, see [Planning your AWS Private CA deployment](https://docs.aws.amazon.com/privateca/latest/userguide/PcaPlanning.html). 

### Implementation steps
<a name="implementation-steps"></a>

1.  Determine the relevant AWS services required for your use case: 
   +  Many use cases can leverage the existing AWS public key infrastructure using [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html). ACM can be used to deploy TLS certificates for web servers, load balancers, or other uses for publicly trusted certificates. 
   +  Consider [AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) when you need to establish your own private certificate authority hierarchy or need access to exportable certificates. ACM can then be used to issue [many types of end-entity certificates](https://docs.aws.amazon.com/privateca/latest/userguide/PcaIssueCert.html) using AWS Private CA. 
   +  For use cases where certificates must be provisioned at scale for embedded Internet of Things (IoT) devices, consider [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/x509-client-certs.html). 

1.  Implement automated certificate renewal whenever possible: 
   +  Use [ACM managed renewal](https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html) for certificates issued by ACM along with integrated AWS managed services. 

1.  Establish logging and audit trails: 
   +  Enable [CloudTrail logs](https://docs.aws.amazon.com/privateca/latest/userguide/PcaCtIntro.html) to track access to the accounts holding certificate authorities. Consider configuring log file integrity validation in CloudTrail to verify the authenticity of the log data. 
   +  Periodically generate and review [audit reports](https://docs.aws.amazon.com/privateca/latest/userguide/PcaAuditReport.html) that list the certificates that your private CA has issued or revoked. These reports can be exported to an S3 bucket. 
   +  When deploying a private CA, you will also need to establish an S3 bucket to store the Certificate Revocation List (CRL). For guidance on configuring this S3 bucket based on your workload's requirements, see [Planning a certificate revocation list (CRL)](https://docs.aws.amazon.com/privateca/latest/userguide/crl-planning.html). 
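Where a certificate cannot be renewed by ACM managed renewal, monitoring its remaining lifetime becomes your responsibility. As a hedged sketch (the endpoint name and alert threshold are assumptions), Python's standard `ssl` module can report the `notAfter` date of the certificate an endpoint presents, which a scheduled job could compare against a renewal threshold:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Parse the 'notAfter' string as returned by ssl.getpeercert()
    (for example 'Jun  1 12:00:00 2026 GMT') and return whole days remaining."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

def endpoint_cert_days_left(host, port=443):
    """Connect to host:port over TLS and report days until its certificate
    expires. Requires network access; host is whatever endpoint you operate."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return days_until_expiry(tls.getpeercert()["notAfter"])

# Deterministic check of the parsing logic with a fixed "now":
fixed_now = datetime(2026, 5, 1, tzinfo=timezone.utc)
print(days_until_expiry("Jun  1 12:00:00 2026 GMT", now=fixed_now))  # → 31
```

A job like this complements, rather than replaces, automated renewal: it acts as a detective control that catches certificates falling outside the managed renewal path.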

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC02-BP02 Use temporary credentials](sec_identities_unique.md) 
+ [SEC08-BP01 Implement secure key management](sec_protect_data_rest_key_mgmt.md)
+  [SEC09-BP04 Authenticate network communications](sec_protect_data_transit_authentication.md) 

 **Related documents:** 
+  [How to host and manage an entire private certificate infrastructure in AWS](https://aws.amazon.com/blogs/security/how-to-host-and-manage-an-entire-private-certificate-infrastructure-in-aws/) 
+  [How to secure an enterprise scale ACM Private CA hierarchy for automotive and manufacturing](https://aws.amazon.com/blogs/security/how-to-secure-an-enterprise-scale-acm-private-ca-hierarchy-for-automotive-and-manufacturing/) 
+  [Private CA best practices](https://docs.aws.amazon.com/privateca/latest/userguide/ca-best-practices.html) 
+  [How to use AWS RAM to share your ACM Private CA cross-account](https://aws.amazon.com/blogs/security/how-to-use-aws-ram-to-share-your-acm-private-ca-cross-account/) 

 **Related videos:** 
+  [Activating AWS Certificate Manager Private CA (workshop)](https://www.youtube.com/watch?v=XrrdyplT3PE) 

 **Related examples:** 
+  [Private CA workshop](https://catalog.workshops.aws/certificatemanager/en-US/introduction) 
+  [IOT Device Management Workshop](https://iot-device-management.workshop.aws/en/) (including device provisioning) 

 **Related tools:** 
+  [Plugin to Kubernetes cert-manager to use AWS Private CA](https://github.com/cert-manager/aws-privateca-issuer) 

# SEC09-BP02 Enforce encryption in transit
<a name="sec_protect_data_transit_encrypt"></a>

Enforce your defined encryption requirements based on your organization’s policies, regulatory obligations, and standards to help meet organizational, legal, and compliance requirements. Only use protocols with encryption when transmitting sensitive data outside of your virtual private cloud (VPC). Encryption helps maintain data confidentiality even when the data transits untrusted networks.

 **Desired outcome:** All data should be encrypted in transit using secure TLS protocols and cipher suites. Network traffic between your resources and the internet must be encrypted to mitigate unauthorized access to the data. Network traffic solely within your internal AWS environment should be encrypted using TLS wherever possible. The AWS internal network is encrypted by default and network traffic within a VPC cannot be spoofed or sniffed unless an unauthorized party has gained access to whatever resource is generating traffic (such as Amazon EC2 instances and Amazon ECS containers). Consider protecting network-to-network traffic with an IPsec virtual private network (VPN). 

 **Common anti-patterns:** 
+  Using deprecated versions of SSL, TLS, and cipher suite components (for example, SSL v3.0, 1024-bit RSA keys, and RC4 cipher). 
+  Allowing unencrypted (HTTP) traffic to or from public-facing resources. 
+  Not monitoring and replacing X.509 certificates prior to expiration. 
+  Using self-signed X.509 certificates for TLS. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 AWS services provide HTTPS endpoints using TLS for communication, providing encryption in transit when communicating with the AWS APIs. Insecure protocols like HTTP can be audited and blocked in a VPC through the use of security groups. HTTP requests can also be [automatically redirected to HTTPS](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html) in Amazon CloudFront or on an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#redirect-actions). You have full control over your computing resources to implement encryption in transit across your services. Additionally, you can use VPN connectivity into your VPC from an external network or [AWS Direct Connect](https://aws.amazon.com/directconnect/) to facilitate encryption of traffic. Verify that your clients are making calls to AWS APIs using at least TLS 1.2, as [AWS is deprecating the use of earlier versions of TLS in June 2023](https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/). AWS recommends using TLS 1.3. Third-party solutions are available in the AWS Marketplace if you have special requirements. 

 **Implementation steps** 
+  **Enforce encryption in transit:** Your defined encryption requirements should be based on the latest standards and best practices and only allow secure protocols. For example, configure a security group to only allow the HTTPS protocol to an application load balancer or Amazon EC2 instance. 
+  **Configure secure protocols in edge services:** [Configure HTTPS with Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html) and use a [security profile appropriate for your security posture and use case](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/secure-connections-supported-viewer-protocols-ciphers.html#secure-connections-supported-ciphers). 
+  **Use a [VPN for external connectivity](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html):** Consider using an IPsec VPN for securing point-to-point or network-to-network connections to help provide both data privacy and integrity. 
+  **Configure secure protocols in load balancers:** Select a security policy that provides the strongest cipher suites supported by the clients that will be connecting to the listener. [Create an HTTPS listener for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html). 
+  **Configure secure protocols in Amazon Redshift:** Configure your cluster to require a [Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connection](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html). 
+  **Configure secure protocols:** Review AWS service documentation to determine encryption-in-transit capabilities. 
+  **Configure secure access when uploading to Amazon S3 buckets:** Use Amazon S3 bucket policy controls to [enforce secure access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) to data. 
+  **Consider using [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/):** ACM allows you to provision, manage, and deploy public TLS certificates for use with AWS services. 
+  **Consider using [AWS Private Certificate Authority](https://aws.amazon.com/private-ca/) for private PKI needs:** AWS Private CA allows you to create private certificate authority (CA) hierarchies to issue end-entity X.509 certificates that can be used to create encrypted TLS channels. 
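For the Amazon S3 step above, the bucket policy pattern AWS documents is a `Deny` statement conditioned on `aws:SecureTransport`, which rejects any request that does not arrive over TLS. A minimal sketch follows; the bucket name is a placeholder to substitute with your own:

```python
import json

BUCKET = "amzn-s3-demo-bucket"  # placeholder; substitute your bucket name

# Deny any S3 request that does not arrive over TLS. aws:SecureTransport is
# the documented condition key for enforcing encryption in transit on S3.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because the `Deny` applies to both the bucket ARN and the object ARN pattern, it covers bucket-level operations as well as object reads and writes.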

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [ Using HTTPS with CloudFront ](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html)
+ [ Connect your VPC to remote networks using AWS Virtual Private Network](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html)
+ [ Create an HTTPS listener for your Application Load Balancer ](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html)
+ [ Tutorial: Configure SSL/TLS on Amazon Linux 2 ](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SSL-on-amazon-linux-2.html)
+ [ Using SSL/TLS to encrypt a connection to a DB instance ](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html)
+ [ Configuring security options for connections ](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html)

# SEC09-BP03 Automate detection of unintended data access
<a name="sec_protect_data_transit_auto_unintended_access"></a>

 Use tools such as Amazon GuardDuty to automatically detect suspicious activity or attempts to move data outside of defined boundaries. For example, GuardDuty can detect unusual Amazon Simple Storage Service (Amazon S3) read activity with the [Exfiltration:S3/AnomalousBehavior finding](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-s3.html#exfiltration-s3-objectreadunusual). In addition to GuardDuty, [Amazon VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html), which capture network traffic information, can be used with Amazon EventBridge to detect connections, both successful and denied. [Access Analyzer for S3](http://aws.amazon.com/blogs/storage/protect-amazon-s3-buckets-using-access-analyzer-for-s3) can help you assess what data is accessible to whom in your Amazon S3 buckets. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>
+  Automate detection of unintended data access: Use a tool or detection mechanism to automatically detect attempts to move data outside of defined boundaries, for example, to detect a database system that is copying data to an unrecognized host. 
  + [ VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) 
+  Consider Amazon Macie: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. 
  + [ Amazon Macie ](https://aws.amazon.com/macie/)
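To make the detection idea above concrete, a scheduled job can scan VPC Flow Log records (default format) for rejected flows and for large transfers to destinations outside an allowlist, such as a database copying data to an unrecognized host. This is a hedged sketch: the sample records, allowlist, and byte threshold are invented for illustration, and a production pipeline would read the logs from their configured destination rather than an inline list.

```python
# Sketch: flag VPC Flow Log records (default format) that were rejected,
# or that moved a large volume of data to a destination not in an allowlist.

FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

def parse_record(line):
    """Split a default-format flow log line into named fields."""
    return dict(zip(FIELDS, line.split()))

def suspicious(records, known_hosts, byte_threshold=10_000_000):
    """Yield rejected flows and large flows to unrecognized destinations."""
    for rec in records:
        if rec["action"] == "REJECT":
            yield rec
        elif rec["dstaddr"] not in known_hosts and int(rec["bytes"]) > byte_threshold:
            yield rec

sample = [
    "2 123456789010 eni-abc123 10.0.0.5 10.0.0.20 49152 5432 6 10 840 1695000000 1695000060 ACCEPT OK",
    "2 123456789010 eni-abc123 10.0.0.5 203.0.113.9 49153 443 6 90000 25000000 1695000000 1695000060 ACCEPT OK",
    "2 123456789010 eni-abc123 198.51.100.7 10.0.0.5 4444 22 6 1 40 1695000000 1695000060 REJECT OK",
]
records = [parse_record(line) for line in sample]
flagged = list(suspicious(records, known_hosts={"10.0.0.20"}))
print([(r["dstaddr"], r["action"]) for r in flagged])
# → [('203.0.113.9', 'ACCEPT'), ('10.0.0.5', 'REJECT')]
```

Note that the second record is flagged even though it was accepted: volume-based anomalies to unknown hosts are exactly the exfiltration pattern that network controls alone do not catch.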

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [ VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) 
+ [ Amazon Macie ](https://aws.amazon.com/macie/)

# SEC09-BP04 Authenticate network communications
<a name="sec_protect_data_transit_authentication"></a>



|  | 
| --- |
| This best practice was updated with new guidance on December 6, 2023. | 

 Verify the identity of the parties in network communications by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec. 

 Design your workload to use secure, authenticated network protocols whenever communicating between services, applications, or to users. Using network protocols that support authentication and authorization provides stronger control over network flows and reduces the impact of unauthorized access. 

 **Desired outcome:** A workload with well-defined data plane and control plane traffic flows between services. The traffic flows use authenticated and encrypted network protocols where technically feasible. 

 **Common anti-patterns:** 
+  Unencrypted or unauthenticated traffic flows within your workload. 
+  Reusing authentication credentials across multiple users or entities. 
+  Relying solely on network controls as an access control mechanism. 
+  Creating a custom authentication mechanism rather than relying on industry-standard authentication mechanisms. 
+  Overly permissive traffic flows between service components or other resources in the VPC. 

 **Benefits of establishing this best practice:** 
+  Limits the scope of impact for unauthorized access to one part of the workload. 
+  Provides a higher level of assurance that actions are only performed by authenticated entities. 
+  Improves decoupling of services by clearly defining and enforcing intended data transfer interfaces. 
+  Enhances monitoring, logging, and incident response through request attribution and well-defined communication interfaces. 
+  Provides defense-in-depth for your workloads by combining network controls with authentication and authorization controls. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 Your workload’s network traffic patterns can be characterized into two categories: 
+  *East-west traffic* represents traffic flows between services that make up a workload. 
+  *North-south traffic* represents traffic flows between your workload and consumers. 

 While it is common practice to encrypt north-south traffic, securing east-west traffic using authenticated protocols is less common. Modern security practices recommend that network design alone does not grant a trusted relationship between two entities. Even when two services reside within a common network boundary, it is still a best practice to encrypt, authenticate, and authorize communications between those services. 

 As an example, AWS service APIs use the [AWS Signature Version 4 (SigV4)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html) signature protocol to authenticate the caller, no matter what network the request originates from. This authentication ensures that AWS APIs can verify the identity that requested the action, and that identity can then be combined with policies to make an authorization decision to determine whether the action should be allowed or not. 
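The signing-key derivation at the heart of SigV4 can be sketched with nothing but HMAC-SHA256: the long-term secret is successively scoped to a date, Region, and service, so the derived key is useless outside that scope. This sketch covers only the key derivation (a real request also requires a canonical request and string-to-sign, which the AWS SDKs handle for you), and the secret key below is the placeholder value used in AWS documentation examples, not a real credential:

```python
import hashlib
import hmac

def sign(key, msg):
    """One HMAC-SHA256 step in the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key, date, region, service):
    """Scope a long-term secret key to a date, Region, and service."""
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Placeholder secret key from AWS documentation examples, not a real credential.
key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY", "20150830", "us-east-1", "iam"
)
print(len(key))  # 32-byte key, valid only for this date, Region, and service
```

Because the derived key changes whenever the date, Region, or service changes, a leaked signing key has a far smaller blast radius than the long-term secret itself.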

 Services such as [Amazon VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/access-management-overview.html) and [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) allow you to use the same SigV4 signature protocol to add authentication and authorization to east-west traffic in your own workloads. If resources outside of your AWS environment need to communicate with services that require SigV4-based authentication and authorization, you can use [AWS Identity and Access Management (IAM) Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) on the non-AWS resource to acquire temporary AWS credentials. These credentials can be used to sign requests to services using SigV4 to authorize access. 

 Another common mechanism for authenticating east-west traffic is TLS mutual authentication (mTLS). Many Internet of Things (IoT) devices, business-to-business applications, and microservices use mTLS to validate the identity of both sides of a TLS communication through the use of both client and server-side X.509 certificates. These certificates can be issued by AWS Private Certificate Authority (AWS Private CA). You can use services such as [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html) and [AWS App Mesh](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html) to provide mTLS authentication for inter- or intra-workload communication. While mTLS provides authentication information for both sides of a TLS communication, it does not provide a mechanism for authorization. 
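What distinguishes mTLS from ordinary TLS is a server-side requirement that the client present a certificate. As a hedged sketch using Python's standard `ssl` module (the certificate file paths are assumptions, and in the managed scenarios above API Gateway or App Mesh performs this handshake for you):

```python
import ssl

def build_mtls_server_context(server_cert=None, server_key=None, client_ca=None):
    """Server-side TLS context that rejects clients without a certificate.
    Pass real file paths in use; they are optional here only so the
    configuration itself can be inspected without certificate files."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.verify_mode = ssl.CERT_REQUIRED           # this line is what makes it mutual TLS
    if server_cert and server_key:
        ctx.load_cert_chain(certfile=server_cert, keyfile=server_key)
    if client_ca:
        ctx.load_verify_locations(cafile=client_ca)  # trust anchor for client certs
    return ctx

ctx = build_mtls_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

The `client_ca` trust anchor is where a private CA hierarchy, such as one built with AWS Private CA, plugs in: only clients holding certificates issued under that CA can complete the handshake.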

 Finally, OAuth 2.0 and OpenID Connect (OIDC) are two protocols typically used for controlling access to services by users, but are now becoming popular for service-to-service traffic as well. API Gateway provides a [JSON Web Token (JWT) authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html), allowing workloads to restrict access to API routes using JWTs issued from OIDC or OAuth 2.0 identity providers. OAuth2 scopes can be used as a source for basic authorization decisions, but the authorization checks still need to be implemented in the application layer, and OAuth2 scopes alone cannot support more complex authorization needs. 
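The scope-based check described above can be sketched by decoding a JWT payload and looking for a required scope. This sketch deliberately skips signature verification, which a real authorizer (such as the API Gateway JWT authorizer) must perform against the issuer's keys before trusting any claim; the token below is hand-built for illustration and is not signed by any real identity provider.

```python
import base64
import json

def b64url(data):
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_payload(token):
    """Decode the JWT payload segment. No signature verification is done here."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def has_scope(token, required):
    """Coarse authorization signal only; fine-grained checks belong in the app."""
    return required in decode_payload(token).get("scope", "").split()

# Hand-built example token (header.payload.signature); names are illustrative.
header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "svc-orders", "scope": "orders:read orders:write"}).encode())
token = f"{header}.{payload}.fake-signature"

print(has_scope(token, "orders:read"), has_scope(token, "payments:refund"))  # → True False
```

This mirrors the limitation stated above: the scope answers "may this caller reach this route at all," while decisions about specific records still belong in the application layer.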

### Implementation steps
<a name="implementation-steps"></a>
+  **Define and document your workload network flows:** The first step in implementing a defense-in-depth strategy is defining your workload’s traffic flows. 
  +  Create a data flow diagram that clearly defines how data is transmitted between different services that comprise your workload. This diagram is the first step to enforcing those flows through authenticated network channels. 
  +  Instrument your workload in development and testing phases to validate that the data flow diagram accurately reflects the workload’s behavior at runtime. 
  +  A data flow diagram can also be useful when performing a threat modeling exercise, as described in [SEC01-BP07 Identify threats and prioritize mitigations using a threat model](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_securely_operate_threat_model.html). 
+  **Establish network controls:** Consider AWS capabilities to establish network controls aligned to your data flows. While network boundaries should not be the only security control, they provide a layer in the defense-in-depth strategy to protect your workload. 
  +  Use [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html) to define and restrict data flows between resources. 
  +  Consider using [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to communicate with both AWS and third-party services that support AWS PrivateLink. Data sent through an AWS PrivateLink interface endpoint stays within the AWS network backbone and does not traverse the public internet. 
+  **Implement authentication and authorization across services in your workload:** Choose the set of AWS services most appropriate to provide authenticated, encrypted traffic flows in your workload. 
  +  Consider [Amazon VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/what-is-vpc-lattice.html) to secure service-to-service communication. VPC Lattice can use [SigV4 authentication combined with auth policies](https://docs.aws.amazon.com/vpc-lattice/latest/ug/auth-policies.html) to control service-to-service access. 
  +  For service-to-service communication using mTLS, consider [API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html) or [App Mesh](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html). [AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) can be used to establish a private CA hierarchy capable of issuing certificates for use with mTLS. 
  +  When integrating with services using OAuth 2.0 or OIDC, consider [API Gateway using the JWT authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html). 
  +  For communication between your workload and IoT devices, consider [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/client-authentication.html), which provides several options for network traffic encryption and authentication. 
+  **Monitor for unauthorized access:** Continually monitor for unintended communication channels, unauthorized principals attempting to access protected resources, and other improper access patterns. 
  +  If using VPC Lattice to manage access to your services, consider enabling and monitoring [VPC Lattice access logs](https://docs.aws.amazon.com/vpc-lattice/latest/ug/monitoring-access-logs.html). These access logs include information on the requesting entity, network information including source and destination VPC, and request metadata. 
  +  Consider enabling [VPC flow logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) to capture metadata on network flows and periodically review for anomalies. 
  +  Refer to the [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html) and the [Incident Response section](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/incident-response.html) of the AWS Well-Architected Framework security pillar for more guidance on planning, simulating, and responding to security incidents. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [ SEC03-BP07 Analyze public and cross-account access ](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_analyze_cross_account.html)
+ [ SEC02-BP02 Use temporary credentials ](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_unique.html)
+ [ SEC01-BP07 Identify threats and prioritize mitigations using a threat model ](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_securely_operate_threat_model.html)

 **Related documents:** 
+ [ Evaluating access control methods to secure Amazon API Gateway APIs ](https://aws.amazon.com/blogs/compute/evaluating-access-control-methods-to-secure-amazon-api-gateway-apis/)
+ [ Configuring mutual TLS authentication for a REST API ](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html)
+ [ How to secure API Gateway HTTP endpoints with JWT authorizer ](https://aws.amazon.com/blogs/security/how-to-secure-api-gateway-http-endpoints-with-jwt-authorizer/)
+ [ Authorizing direct calls to AWS services using AWS IoT Core credential provider ](https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html)
+ [AWS Security Incident Response Guide ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html)

 **Related videos:** 
+ [AWS re:invent 2022: Introducing VPC Lattice ](https://www.youtube.com/watch?v=fRjD1JI0H5w)
+ [AWS re:invent 2020: Serverless API authentication for HTTP APIs on AWS](https://www.youtube.com/watch?v=AW4kvUkUKZ0)

 **Related examples:** 
+ [ Amazon VPC Lattice Workshop ](https://catalog.us-east-1.prod.workshops.aws/workshops/9e543f60-e409-43d4-b37f-78ff3e1a07f5/en-US)
+ [ Zero-Trust Episode 1 – The Phantom Service Perimeter workshop ](https://catalog.us-east-1.prod.workshops.aws/workshops/dc413216-deab-4371-9e4a-879a4f14233d/en-US)

# Incident response
<a name="a-incident-response"></a>

**Topics**
+ [SEC 10. How do you anticipate, respond to, and recover from incidents?](sec-10.md)

# SEC 10. How do you anticipate, respond to, and recover from incidents?
<a name="sec-10"></a>

 Even with mature preventive and detective controls, your organization should implement mechanisms to respond to and mitigate the potential impact of security incidents. Your preparation strongly affects the ability of your teams to operate effectively during an incident, to isolate, contain, and perform forensics on issues, and to restore operations to a known good state. Putting in place the tools and access ahead of a security incident, then routinely practicing incident response through game days, helps ensure that you can recover while minimizing business disruption. 

**Topics**
+ [SEC10-BP01 Identify key personnel and external resources](sec_incident_response_identify_personnel.md)
+ [SEC10-BP02 Develop incident management plans](sec_incident_response_develop_management_plans.md)
+ [SEC10-BP03 Prepare forensic capabilities](sec_incident_response_prepare_forensic.md)
+ [SEC10-BP04 Develop and test security incident response playbooks](sec_incident_response_playbooks.md)
+ [SEC10-BP05 Pre-provision access](sec_incident_response_pre_provision_access.md)
+ [SEC10-BP06 Pre-deploy tools](sec_incident_response_pre_deploy_tools.md)
+ [SEC10-BP07 Run simulations](sec_incident_response_run_game_days.md)
+ [SEC10-BP08 Establish a framework for learning from incidents](sec_incident_response_establish_incident_framework.md)

# SEC10-BP01 Identify key personnel and external resources
<a name="sec_incident_response_identify_personnel"></a>

 Identify internal and external personnel, resources, and legal obligations that would help your organization respond to an incident. 

When you define your approach to incident response in the cloud, work in coordination with other teams (such as your legal counsel, leadership, business stakeholders, AWS Support, and others) to identify key personnel, stakeholders, and relevant contacts. To reduce dependency and decrease response time, make sure that your team, specialist security teams, and responders are educated about the services that you use and have opportunities to practice hands-on.

We encourage you to identify external AWS security partners that can provide you with outside expertise and a different perspective to augment your response capabilities. Your trusted security partners can help you identify potential risks or threats that you might not be familiar with.

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>
+  **Identify key personnel in your organization:** Maintain a contact list of personnel within your organization whom you would need to involve to respond to and recover from an incident. 
+  **Identify external partners:** If necessary, engage external partners who can help you respond to and recover from an incident. 
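
Such a contact list can also be kept in machine-readable form so that playbooks and tooling can consume it. The roster below is a minimal, hypothetical sketch; the names, roles, and escalation tiers are placeholders to adapt to your organization.

```python
# Hypothetical incident contact roster; all names, roles, and tiers
# here are illustrative placeholders, not a prescribed structure.
CONTACTS = [
    {"name": "On-call security engineer", "role": "responder",  "tier": 1},
    {"name": "Security team lead",        "role": "escalation", "tier": 2},
    {"name": "Legal counsel",             "role": "advisor",    "tier": 3},
    {"name": "External IR partner",       "role": "partner",    "tier": 3},
]

def escalation_path(contacts, max_tier):
    """Return the contacts to engage, ordered by tier, up to max_tier."""
    eligible = [c for c in contacts if c["tier"] <= max_tier]
    return sorted(eligible, key=lambda c: c["tier"])
```

Keeping the roster in version control alongside your playbooks helps keep it current as personnel change.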

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html) 

 **Related videos:** 
+  [Prepare for and respond to security incidents in your AWS environment ](https://youtu.be/8uiO0Z5meCs)


# SEC10-BP02 Develop incident management plans
<a name="sec_incident_response_develop_management_plans"></a>

The first document to develop for incident response is the incident response plan. The incident response plan is designed to be the foundation for your incident response program and strategy. 

 **Benefits of establishing this best practice:** Developing thorough and clearly defined incident response processes is key to a successful and scalable incident response program. When a security event occurs, clear steps and workflows can help you to respond in a timely manner. You might already have existing incident response processes. Regardless of your current state, it’s important to update, iterate, and test your incident response processes regularly. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

An incident management plan is critical for responding to, mitigating, and recovering from the potential impact of security incidents. An incident management plan is a structured process for identifying, remediating, and responding to security incidents in a timely manner.

 The cloud has many of the same operational roles and requirements found in an on-premises environment. When creating an incident management plan, it is important to factor in response and recovery strategies that best align with your business outcomes and compliance requirements. For example, if you operate workloads in AWS that are FedRAMP compliant in the United States, it’s useful to adhere to the [NIST SP 800-61 Computer Security Incident Handling Guide](https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-61r2.pdf). Similarly, when operating workloads that handle European personally identifiable information (PII), consider scenarios like how you might protect and respond to issues related to data residency as mandated by [EU General Data Protection Regulation (GDPR) Regulations](https://ec.europa.eu/info/law/law-topic/data-protection/reform/what-does-general-data-protection-regulation-gdpr-govern_en). 

 When building an incident management plan for your workloads in AWS, start with the [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/) for building a defense-in-depth approach towards incident response. In this model, AWS manages security of the cloud, and you are responsible for security in the cloud. This means that you retain control and are responsible for the security controls you choose to implement. The [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html) details key concepts and foundational guidance for building a cloud-centric incident management plan.

 An effective incident management plan must be continually iterated upon so that it remains current with your cloud operations goals. Consider using the implementation plans detailed below as you create and evolve your incident management plan. 

## Implementation steps
<a name="implementation-steps"></a>

 **Define roles and responsibilities** 

 Handling security events requires cross-organizational discipline and a bias for action. Within your organizational structure, there should be many people who are responsible, accountable, consulted, or kept informed during an incident, such as representatives from human resources (HR), the executive team, and legal. Consider these roles and responsibilities, and whether any third parties must be involved. Note that many geographies have local laws that govern what should and should not be done. Although it might seem bureaucratic to build a responsible, accountable, consulted, and informed (RACI) chart for your security response plans, doing so facilitates quick and direct communication and clearly outlines the leadership across different stages of the event. 
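
As a sketch of what such a RACI chart might look like in machine-readable form (the stage names and role assignments below are illustrative, not a prescribed mapping):

```python
# Illustrative RACI chart for incident response stages; stages and
# role assignments are example placeholders for your organization.
RACI = {
    "detection":   {"security_analyst": "R", "incident_manager": "A",
                    "legal": "I", "executive": "I"},
    "containment": {"security_analyst": "R", "incident_manager": "A",
                    "app_owner": "C", "executive": "I"},
    "recovery":    {"app_owner": "R", "incident_manager": "A",
                    "security_analyst": "C", "executive": "I"},
}

def who_is(raci, stage, letter):
    """List the roles holding a given RACI letter (R/A/C/I) for a stage."""
    return sorted(role for role, v in raci[stage].items() if v == letter)
```

Encoding the chart this way lets playbooks look up, for example, who is accountable during containment without consulting a separate document.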

 During an incident, including the owners and developers of impacted applications and resources is key because they are subject matter experts (SMEs) that can provide information and context to aid in measuring impact. Make sure to practice and build relationships with the developers and application owners before you rely on their expertise for incident response. Application owners or SMEs, such as your cloud administrators or engineers, might need to act in situations where the environment is unfamiliar or has complexity, or where the responders don’t have access. 

 Lastly, trusted partners might be involved in the investigation or response because they can provide additional expertise and valuable scrutiny. When you don’t have these skills on your own team, you might want to hire an external party for assistance. 

 **Understand AWS response teams and support** 
+  **AWS Support** 
  +  [Support](https://aws.amazon.com/premiumsupport/) offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. If you need technical support and more resources to help plan, deploy, and optimize your AWS environment, you can select a support plan that best aligns with your AWS use case. 
  +  Consider the [Support Center](https://console.aws.amazon.com/support) in the AWS Management Console (sign-in required) as the central point of contact to get support for issues that affect your AWS resources. Access to Support is controlled by AWS Identity and Access Management. For more information about getting access to Support features, see [Getting started with Support](https://docs.aws.amazon.com/awssupport/latest/user/getting-started.html#accessing-support). 
+  **AWS Customer Incident Response Team (CIRT)** 
  +  The AWS Customer Incident Response Team (CIRT) is a specialized 24/7 global AWS team that provides support to customers during active security events on the customer side of the [AWS Shared Responsibility Model](https://aws.amazon.com/compliance/shared-responsibility-model/). 
  +  When the AWS CIRT supports you, they provide assistance with triage and recovery for an active security event on AWS. They can assist in root cause analysis through the use of AWS service logs and provide you with recommendations for recovery. They can also provide security recommendations and best practices to help you avoid security events in the future. 
  +  AWS customers can engage the AWS CIRT through a [Support case](https://docs.aws.amazon.com/awssupport/latest/user/case-management.html). 
+  **DDoS response support** 
  +  AWS offers [AWS Shield](https://aws.amazon.com/shield/), which provides a managed distributed denial of service (DDoS) protection service that safeguards web applications running on AWS. Shield provides always-on detection and automatic inline mitigations that can minimize application downtime and latency, so there is no need to engage Support to benefit from DDoS protection. There are two tiers of Shield: AWS Shield Standard and AWS Shield Advanced. To learn about the differences between these two tiers, see [Shield features documentation](https://aws.amazon.com/shield/features/). 
+  **AWS Managed Services (AMS)** 
  +  [AWS Managed Services (AMS)](https://aws.amazon.com/managed-services/) provides ongoing management of your AWS infrastructure so you can focus on your applications. By implementing best practices to maintain your infrastructure, AMS helps reduce your operational overhead and risk. AMS automates common activities such as change requests, monitoring, patch management, security, and backup services, and provides full-lifecycle services to provision, run, and support your infrastructure. 
  +  AMS takes responsibility for deploying a suite of security detective controls and provides a 24/7 first line of response to alerts. When an alert is initiated, AMS follows a standard set of automated and manual playbooks to verify a consistent response. These playbooks are shared with AMS customers during onboarding so that they can develop and coordinate a response with AMS. 

 **Develop the incident response plan** 

 The incident response plan is designed to be the foundation for your incident response program and strategy. The incident response plan should be in a formal document. An incident response plan typically includes these sections: 
+  **An incident response team overview:** Outlines the goals and functions of the incident response team. 
+  **Roles and responsibilities:** Lists the incident response stakeholders and details their roles when an incident occurs. 
+  **A communication plan:** Details contact information and how you communicate during an incident. 
+  **Backup communication methods:** It’s a best practice to have out-of-band communication as a backup for incident communication. An example of an application that provides a secure out-of-band communications channel is AWS Wickr. 
+  **Phases of incident response and actions to take:** Enumerates the phases of incident response (for example, detect, analyze, contain, eradicate, and recover), including high-level actions to take within those phases. 
+  **Incident severity and prioritization definitions:** Details how to classify the severity of an incident, how to prioritize the incident, and then how the severity definitions affect escalation procedures. 

 While these sections are common throughout companies of different sizes and industries, each organization’s incident response plan is unique. You need to build an incident response plan that works best for your organization. 
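
As an example of how severity definitions might be encoded so that they can be applied consistently, here is a minimal, hypothetical classifier. The inputs, thresholds, and SEV labels are placeholders for your organization's own definitions.

```python
# Hypothetical severity classification rules; the criteria and labels
# are illustrative and should be replaced with your own definitions.
def classify_severity(data_exposed: bool, prod_impacted: bool,
                      spreading: bool) -> str:
    if data_exposed or (prod_impacted and spreading):
        return "SEV1"   # page leadership immediately
    if prod_impacted or spreading:
        return "SEV2"   # engage on-call responders
    return "SEV3"       # handle during business hours
```

Severity then drives the escalation procedures defined in the plan, so the same inputs always produce the same response posture.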

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC04 (How do you detect and investigate security events?)](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/detection.html) 

 **Related documents:** 
+  [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html) 
+ [ NIST: Computer Security Incident Handling Guide ](https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-61r2.pdf)

# SEC10-BP03 Prepare forensic capabilities
<a name="sec_incident_response_prepare_forensic"></a>

Ahead of a security incident, consider developing forensics capabilities to support security event investigations. 

 **Level of risk exposed if this best practice is not established:** Medium 

 Concepts from traditional on-premises forensics apply to AWS. For key information to start building forensics capabilities in the AWS Cloud, see [Forensic investigation environment strategies in the AWS Cloud](https://aws.amazon.com/blogs/security/forensic-investigation-environment-strategies-in-the-aws-cloud/). 

 Once you have your environment and AWS account structure set up for forensics, define the technologies required to effectively perform forensically sound methodologies across the four phases: 
+  **Collection:** Collect relevant AWS logs, such as AWS CloudTrail, AWS Config, VPC Flow Logs, and host-level logs. Collect snapshots, backups, and memory dumps of impacted AWS resources where available. 
+  **Examination:** Examine the data collected by extracting and assessing the relevant information. 
+  **Analysis:** Analyze the data collected in order to understand the incident and draw conclusions from it. 
+  **Reporting:** Present the information resulting from the analysis phase. 
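
The four phases above imply a chain of custody for each evidence item. The following is an illustrative sketch under the assumption that custody is tracked as an ordered log; the `EvidenceItem` structure is hypothetical, not a standard forensics format.

```python
from dataclasses import dataclass, field

# Phase names mirror the list above; the tracking structure is illustrative.
PHASES = ["collection", "examination", "analysis", "reporting"]

@dataclass
class EvidenceItem:
    source: str                      # e.g. a CloudTrail log archive
    chain_of_custody: list = field(default_factory=list)

    def advance(self, phase: str, handler: str):
        """Record a phase transition; phases must occur in order."""
        expected = PHASES[len(self.chain_of_custody)]
        if phase != expected:
            raise ValueError(f"expected phase {expected!r}, got {phase!r}")
        self.chain_of_custody.append({"phase": phase, "handler": handler})
```

Enforcing the phase order in tooling helps keep the methodology forensically sound and the custody log complete.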

## Implementation steps
<a name="implementation-steps"></a>

 **Prepare your forensics environment** 

 [AWS Organizations](https://aws.amazon.com/organizations/) helps you centrally manage and govern an AWS environment as you grow and scale AWS resources. An AWS organization consolidates your AWS accounts so that you can administer them as a single unit. You can use organizational units (OUs) to group accounts together to administer as a single unit. 

 For incident response, it’s helpful to have an AWS account structure that supports the functions of incident response, which includes a *security OU* and a *forensics OU*. Within the security OU, you should have accounts for: 
+  **Log archival:** Aggregate logs in a log archival AWS account with limited permissions. 
+  **Security tools:** Centralize security services in a security tool AWS account. This account operates as the delegated administrator for security services. 

 Within the forensics OU, you have the option to implement a single forensics account or accounts for each Region that you operate in, depending on which works best for your business and operational model. If you create a forensics account per Region, you can block the creation of AWS resources outside of that Region and reduce the risk of resources being copied to an unintended Region. For example, if you only operate in US East (N. Virginia) Region (`us-east-1`) and US West (Oregon) (`us-west-2`), then you would have two accounts in the forensics OU: one for `us-east-1` and one for `us-west-2`. 
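
One way to block resource creation outside a forensics account's home Region is a service control policy that uses the `aws:RequestedRegion` condition key. The sketch below is simplified; production Region-lock SCPs typically also exempt global services such as IAM, which is omitted here for brevity.

```python
import json

# Simplified Region-lock SCP sketch; real policies usually add a
# NotAction list exempting global services (for example, IAM).
def region_lock_scp(home_region: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyOutsideHomeRegion",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": [home_region]}
            },
        }],
    }
    return json.dumps(policy, indent=2)
```

Attaching such a policy to each per-Region forensics account in the forensics OU keeps evidence from drifting into unintended Regions.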

 Alternatively, you can create a single forensics AWS account that covers multiple Regions. If you do, exercise caution when copying AWS resources into that account to verify that you align with your data sovereignty requirements. Because it takes time to provision new accounts, it is imperative to create and instrument the forensics accounts well ahead of an incident so that responders can be prepared to effectively use them for response. 

 The following diagram displays a sample account structure including a forensics OU with per-Region forensics accounts: 

![\[Flow diagram showing a per-Region account structure for incident response, forking into a security and forensics OU.\]](http://docs.aws.amazon.com/wellarchitected/2023-10-03/framework/images/region-account-structure.png)


 **Capture backups and snapshots** 

 Setting up backups of key systems and databases is critical for recovering from a security incident and for forensics purposes. With backups in place, you can restore your systems to their previous safe state. On AWS, you can take snapshots of various resources. Snapshots provide you with point-in-time backups of those resources. There are many AWS services that can support you in backup and recovery. For details on these services and approaches for backup and recovery, see [Backup and Recovery Prescriptive Guidance](https://docs.aws.amazon.com/prescriptive-guidance/latest/backup-recovery/services.html) and [Use backups to recover from security incidents](https://aws.amazon.com/blogs/security/use-backups-to-recover-from-security-incidents/). 

 Especially when it comes to situations such as ransomware, it’s critical for your backups to be well protected. For guidance on securing your backups, see [Top 10 security best practices for securing backups in AWS](https://aws.amazon.com/blogs/security/top-10-security-best-practices-for-securing-backups-in-aws/). In addition to securing your backups, you should regularly test your backup and restore processes to verify that the technology and processes you have in place work as expected. 

 **Automate forensics** 

 During a security event, your incident response team must be able to collect and analyze evidence quickly while maintaining accuracy for the time period surrounding the event (such as capturing logs related to a specific event or resource or collecting memory dump of an Amazon EC2 instance). It’s both challenging and time consuming for the incident response team to manually collect the relevant evidence, especially across a large number of instances and accounts. Additionally, manual collection can be prone to human error. For these reasons, you should develop and implement automation for forensics as much as possible. 

 AWS offers a number of automation resources for forensics, which are listed in the following Resources section. These resources are examples of forensics patterns that we have developed and customers have implemented. While they might be a useful reference architecture to start with, consider modifying them or creating new forensics automation patterns based on your environment, requirements, tools, and forensics processes. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Security Incident Response Guide - Develop Forensics Capabilities ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/develop-forensics-capabilities.html)
+ [AWS Security Incident Response Guide - Forensics Resources ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/appendix-b-incident-response-resources.html#forensic-resources)
+ [Forensic investigation environment strategies in the AWS Cloud](https://aws.amazon.com/blogs/security/forensic-investigation-environment-strategies-in-the-aws-cloud/)
+  [How to automate forensic disk collection in AWS](https://aws.amazon.com/blogs/security/how-to-automate-forensic-disk-collection-in-aws/) 
+ [AWS Prescriptive Guidance - Automate incident response and forensics ](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automate-incident-response-and-forensics.html)

 **Related videos:** 
+ [ Automating Incident Response and Forensics ](https://www.youtube.com/watch?v=f_EcwmmXkXk)

 **Related examples:** 
+ [ Automated Incident Response and Forensics Framework ](https://github.com/awslabs/aws-automated-incident-response-and-forensics)
+ [ Automated Forensics Orchestrator for Amazon EC2 ](https://docs.aws.amazon.com/solutions/latest/automated-forensics-orchestrator-for-amazon-ec2/welcome.html)

# SEC10-BP04 Develop and test security incident response playbooks
<a name="sec_incident_response_playbooks"></a>

 A key part of preparing your incident response processes is developing playbooks. Incident response playbooks provide prescriptive guidance and steps to follow when a security event occurs. Having clear structure and steps simplifies the response and reduces the likelihood of human error. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 Playbooks should be created for incident scenarios such as: 
+  **Expected incidents**: Playbooks should be created for incidents you anticipate. This includes threats like denial of service (DoS), ransomware, and credential compromise. 
+  **Known security findings or alerts**: Playbooks should be created for your known security findings and alerts, such as GuardDuty findings. You might receive a GuardDuty finding and think, "Now what?" To prevent the mishandling or ignoring of a GuardDuty finding, create a playbook for each potential GuardDuty finding. Some remediation details and guidance can be found in the [GuardDuty documentation](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_remediate.html). It’s worth noting that GuardDuty is not enabled by default and does incur a cost. For more detail on GuardDuty, see [Appendix A: Cloud capability definitions - Visibility and alerting](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/visibility-and-alerting.html). 

 Playbooks should contain technical steps for a security analyst to complete in order to adequately investigate and respond to a potential security incident. 

### Implementation steps
<a name="implementation-steps"></a>

 Each playbook should include the following items: 
+  **Playbook overview**: What risk or incident scenario does this playbook address? What is the goal of the playbook? 
+  **Prerequisites**: What logs, detection mechanisms, and automated tools are required for this incident scenario? What is the expected notification? 
+  **Communication and escalation information**: Who is involved and what is their contact information? What are each of the stakeholders’ responsibilities? 
+  **Response steps**: Across phases of incident response, what tactical steps should be taken? What queries should an analyst run? What code should be run to achieve the desired outcome? 
  +  **Detect**: How will the incident be detected? 
  +  **Analyze**: How will the scope of impact be determined? 
  +  **Contain**: How will the incident be isolated to limit scope? 
  +  **Eradicate**: How will the threat be removed from the environment? 
  +  **Recover**: How will the affected system or resource be brought back into production? 
+  **Expected outcomes**: After queries and code are run, what is the expected result of the playbook? 
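
To keep playbooks consistent, you can validate their structure automatically. The following sketch assumes a playbook is stored as a simple mapping; the section keys mirror the list above and are illustrative, not a required schema.

```python
# Hypothetical playbook skeleton validator; section names follow the
# list above and are illustrative placeholders.
REQUIRED_SECTIONS = {
    "overview", "prerequisites", "communication_and_escalation",
    "response_steps", "expected_outcomes",
}
RESPONSE_PHASES = ["detect", "analyze", "contain", "eradicate", "recover"]

def validate_playbook(playbook: dict) -> list:
    """Return a list of problems; an empty list means the skeleton is complete."""
    problems = [f"missing section: {s}"
                for s in sorted(REQUIRED_SECTIONS - playbook.keys())]
    steps = playbook.get("response_steps", {})
    problems += [f"missing response phase: {p}"
                 for p in RESPONSE_PHASES if p not in steps]
    return problems
```

Running such a check in CI on a playbook repository catches incomplete playbooks before they are needed during an incident.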

## Resources
<a name="resources"></a>

 **Related Well-Architected best practices:** 
+  [SEC10-BP02 - Develop incident management plans](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_incident_response_develop_management_plans.html) 

 **Related documents:** 
+  [Framework for Incident Response Playbooks](https://github.com/aws-samples/aws-customer-playbook-framework)  
+  [Develop your own Incident Response Playbooks](https://github.com/aws-samples/aws-incident-response-playbooks-workshop)  
+  [Incident Response Playbook Samples](https://github.com/aws-samples/aws-incident-response-playbooks)  
+  [Building an AWS incident response runbook using Jupyter playbooks and CloudTrail Lake](https://catalog.workshops.aws/incident-response-jupyter/en-US)  

 

# SEC10-BP05 Pre-provision access
<a name="sec_incident_response_pre_provision_access"></a>

Verify that incident responders have the correct access pre-provisioned in AWS to reduce the time needed for investigation through to recovery.

 **Common anti-patterns:** 
+  Using the root account for incident response. 
+  Altering existing accounts. 
+  Manipulating IAM permissions directly when providing just-in-time privilege elevation. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

AWS recommends reducing or eliminating reliance on long-lived credentials wherever possible, in favor of temporary credentials and *just-in-time* privilege escalation mechanisms. Long-lived credentials are prone to security risk and increase operational overhead. For most management tasks, as well as incident response tasks, we recommend you implement [identity federation](https://aws.amazon.com/identity/federation/) alongside [temporary escalation for administrative access](https://aws.amazon.com/blogs/security/managing-temporary-elevated-access-to-your-aws-environment/). In this model, a user requests elevation to a higher level of privilege (such as an incident response role) and, provided the user is eligible for elevation, a request is sent to an approver. If the request is approved, the user receives a set of temporary [AWS credentials](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) which can be used to complete their tasks. After these credentials expire, the user must submit a new elevation request.
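
The request-approve-expire flow described above can be modeled in a few lines. This is a toy illustration only; a real implementation would integrate your identity provider and AWS STS rather than this hypothetical class.

```python
import time

# Toy model of temporary elevation: request, approval, short-lived
# credentials, expiry. Entirely illustrative; not a real credential broker.
class ElevationRequest:
    def __init__(self, user: str, role: str, ttl_seconds: int):
        self.user, self.role, self.ttl = user, role, ttl_seconds
        self.approved_at = None

    def approve(self):
        """An approver grants the request; credentials start their TTL now."""
        self.approved_at = time.time()

    def credentials_valid(self, now=None) -> bool:
        if self.approved_at is None:
            return False        # not yet approved: no credentials
        now = time.time() if now is None else now
        return now < self.approved_at + self.ttl
```

Once `credentials_valid` returns false, the responder must submit a new elevation request, mirroring the expiry behavior of STS temporary credentials.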

 We recommend the use of temporary privilege escalation in the majority of incident response scenarios. The correct way to do this is to use the [AWS Security Token Service](https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html) and [session policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html#policies_session) to scope access. 

 There are scenarios where federated identities are unavailable, such as: 
+  Outage related to a compromised identity provider (IdP). 
+  Misconfiguration or human error that breaks the federated access management system. 
+  Malicious activity, such as a distributed denial of service (DDoS) event, that renders the system unavailable. 

 In the preceding cases, there should be emergency *break glass* access configured to allow investigation and timely remediation of incidents. We recommend that you use a [user, group, or role with appropriate permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#lock-away-credentials) to perform tasks and access AWS resources. Use the root user only for [tasks that require root user credentials](https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html). To verify that incident responders have the correct level of access to AWS and other relevant systems, we recommend the pre-provisioning of dedicated accounts. The accounts require privileged access, and must be tightly controlled and monitored. The accounts must be built with the fewest privileges required to perform the necessary tasks, and the level of access should be based on the playbooks created as part of the incident management plan. 

 Use purpose-built and dedicated users and roles as a best practice. Temporarily escalating user or role access through the addition of IAM policies both makes it unclear what access users had during the incident, and risks the escalated privileges not being revoked. 

 It is important to remove as many dependencies as possible to verify that access can be gained under the widest possible number of failure scenarios. To support this, create a playbook to verify that incident response users are created as users in a dedicated security account, and not managed through any existing Federation or single sign-on (SSO) solution. Each individual responder must have their own named account. The account configuration must enforce [strong password policy](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html) and multi-factor authentication (MFA). If the incident response playbooks only require access to the AWS Management Console, the user should not have access keys configured and should be explicitly disallowed from creating access keys. This can be configured with IAM policies or service control policies (SCPs) as mentioned in the AWS Security Best Practices for [AWS Organizations SCPs](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html). The users should have no privileges other than the ability to assume incident response roles in other accounts. 
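
As a sketch of the identity policy such a break-glass user might carry, allowing only the assumption of incident response roles while denying access key creation (the role ARNs are placeholders):

```python
import json

# Sketch of a break-glass user identity policy; account IDs and role
# names are placeholders, and your SCPs may enforce the same restrictions.
def break_glass_policy(ir_role_arns: list) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowAssumeIncidentRoles",
             "Effect": "Allow",
             "Action": "sts:AssumeRole",
             "Resource": ir_role_arns},
            {"Sid": "DenyAccessKeyCreation",
             "Effect": "Deny",
             "Action": "iam:CreateAccessKey",
             "Resource": "*"},
        ],
    }
    return json.dumps(policy, indent=2)
```

The explicit deny on `iam:CreateAccessKey` supports console-only playbooks by preventing the user from minting long-lived credentials.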

 During an incident it might be necessary to grant access to other internal or external individuals to support investigation, remediation, or recovery activities. In this case, use the playbook mechanism mentioned previously, and there must be a process to verify that any additional access is revoked immediately after the incident is complete. 

 To verify that the use of incident response roles can be properly monitored and audited, it is essential that the IAM accounts created for this purpose are not shared between individuals, and that the AWS account root user is not used unless [required for a specific task](https://docs.aws.amazon.com/accounts/latest/reference/root-user-tasks.html). If the root user is required (for example, IAM access to a specific account is unavailable), use a separate process with a playbook available to verify availability of the root user sign-in credentials and MFA token. 

 To configure the IAM policies for the incident response roles, consider using [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-policy-generation.html) to generate policies based on AWS CloudTrail logs. To do this, grant administrator access to the incident response role on a non-production account and run through your playbooks. Once complete, a policy can be created that allows only the actions taken. This policy can then be applied to all the incident response roles across all accounts. You might wish to create a separate IAM policy for each playbook to allow easier management and auditing. Example playbooks could include response plans for ransomware, data breaches, loss of production access, and other scenarios. 

 Use the incident response accounts to assume dedicated incident response [IAM roles in other AWS accounts](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_common-scenarios_aws-accounts.html). These roles must be configured to only be assumable by users in the security account, and the trust relationship must require that the calling principal has authenticated using MFA. The roles must use tightly-scoped IAM policies to control access. Ensure that all `AssumeRole` requests for these roles are logged in CloudTrail and alerted on, and that any actions taken using these roles are logged. 
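
A trust policy for such a role might look like the following sketch: assumable only from the dedicated security account, and only when the calling principal authenticated with MFA. The account ID is a placeholder.

```python
import json

# Sketch of an incident response role trust policy; the account ID is
# a placeholder, and you may scope Principal to specific users or roles.
def ir_role_trust_policy(security_account_id: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{security_account_id}:root"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }
    return json.dumps(policy, indent=2)
```

The `aws:MultiFactorAuthPresent` condition rejects `AssumeRole` calls from sessions that did not authenticate with MFA.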

 It is strongly recommended that both the IAM accounts and the IAM roles are clearly named to allow them to be easily found in CloudTrail logs. An example of this would be to name the IAM accounts `<USER_ID>-BREAK-GLASS` and the IAM roles `BREAK-GLASS-ROLE`. 

 [CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) is used to log API activity in your AWS accounts and should be used to [configure alerts on usage of the incident response roles](https://aws.amazon.com/blogs/security/how-to-receive-notifications-when-your-aws-accounts-root-access-keys-are-used/). The instructions in that blog post, written for alerting on root access key usage, can be modified to configure an [Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) metric filter to filter on `AssumeRole` events related to the incident response IAM role: 

```
{ $.eventName = "AssumeRole" && $.requestParameters.roleArn = "<INCIDENT_RESPONSE_ROLE_ARN>" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }
```

 As the incident response roles are likely to have a high level of access, it is important that these alerts go to a wide group and are acted upon promptly. 
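
If you want to test sample events against this pattern before deploying it, the filter's match logic can be approximated in code. This re-implementation is illustrative and may not capture every nuance of CloudWatch filter-pattern semantics.

```python
# Illustrative re-implementation of the metric filter's match logic,
# useful for checking sample CloudTrail events before deploying the filter.
def matches_ir_role_assumption(event: dict, ir_role_arn: str) -> bool:
    return (
        event.get("eventName") == "AssumeRole"
        and event.get("requestParameters", {}).get("roleArn") == ir_role_arn
        and "invokedBy" not in event.get("userIdentity", {})
        and event.get("eventType") != "AwsServiceEvent"
    )
```

The `invokedBy` and `eventType` checks exclude role assumptions made by AWS services on your behalf, so only human use of the role triggers the alert.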

 During an incident, it is possible that a responder might require access to systems which are not directly secured by IAM. These could include Amazon Elastic Compute Cloud (Amazon EC2) instances, Amazon Relational Database Service (Amazon RDS) databases, or software-as-a-service (SaaS) platforms. Rather than using native protocols such as SSH or RDP, it is strongly recommended that [AWS Systems Manager Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager.html) is used for all administrative access to Amazon EC2 instances. This access is controlled with IAM and is fully audited. It might also be possible to automate parts of your playbooks using [AWS Systems Manager Run Command documents](https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html), which can reduce user error and improve time to recovery. For access to databases and third-party tools, we recommend storing access credentials in AWS Secrets Manager and granting access to the incident responder roles. 

 Finally, the management of the incident response IAM accounts should be added to your [Joiners, Movers, and Leavers processes](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/permissions-management.html) and reviewed and tested periodically to verify that only the intended access is allowed. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Managing temporary elevated access to your AWS environment](https://aws.amazon.com/blogs/security/managing-temporary-elevated-access-to-your-aws-environment/) 
+  [AWS Security Incident Response Guide ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html)
+  [AWS Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) 
+  [AWS Systems Manager Incident Manager](https://docs.aws.amazon.com/incident-manager/latest/userguide/what-is-incident-manager.html) 
+  [Setting an account password policy for IAM users](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html) 
+  [Using multi-factor authentication (MFA) in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html) 
+ [ Configuring Cross-Account Access with MFA ](https://aws.amazon.com/blogs/security/how-do-i-protect-cross-account-access-using-mfa-2/)
+ [ Using IAM Access Analyzer to generate IAM policies ](https://aws.amazon.com/blogs/security/use-iam-access-analyzer-to-generate-iam-policies-based-on-access-activity-found-in-your-organization-trail/)
+ [ Best Practices for AWS Organizations Service Control Policies in a Multi-Account Environment ](https://aws.amazon.com/blogs/industries/best-practices-for-aws-organizations-service-control-policies-in-a-multi-account-environment/)
+ [ How to Receive Notifications When Your AWS Account’s Root Access Keys Are Used ](https://aws.amazon.com/blogs/security/how-to-receive-notifications-when-your-aws-accounts-root-access-keys-are-used/)
+ [ Create fine-grained session permissions using IAM managed policies ](https://aws.amazon.com/blogs/security/create-fine-grained-session-permissions-using-iam-managed-policies/)

 **Related videos:** 
+ [ Automating Incident Response and Forensics in AWS](https://www.youtube.com/watch?v=f_EcwmmXkXk)
+  [DIY guide to runbooks, incident reports, and incident response](https://youtu.be/E1NaYN_fJUo) 
+ [ Prepare for and respond to security incidents in your AWS environment ](https://www.youtube.com/watch?v=8uiO0Z5meCs)

 **Related examples:** 
+ [ Lab: AWS Account Setup and Root User ](https://www.wellarchitectedlabs.com/security/300_labs/300_incident_response_playbook_with_jupyter-aws_iam/)
+ [ Lab: Incident Response with AWS Console and CLI ](https://wellarchitectedlabs.com/security/300_labs/300_incident_response_with_aws_console_and_cli/)

# SEC10-BP06 Pre-deploy tools
<a name="sec_incident_response_pre_deploy_tools"></a>

Verify that security personnel have the right tools pre-deployed to reduce the time for investigation through to recovery.

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 To automate security response and operations functions, you can use a comprehensive set of APIs and tools from AWS. You can fully automate identity management, network security, data protection, and monitoring capabilities and deliver them using popular software development methods that you already have in place. When you build security automation, your system can monitor, review, and initiate a response, rather than having people monitor your security position and manually react to events. 

 If your incident response teams continue to respond to alerts in the same way, they risk alert fatigue. Over time, the team can become desensitized to alerts and can either make mistakes handling ordinary situations or miss unusual alerts. Automation helps avoid alert fatigue by using functions that process the repetitive and ordinary alerts, leaving humans to handle the sensitive and unique incidents. Integrating anomaly detection systems, such as Amazon GuardDuty, AWS CloudTrail Insights, and Amazon CloudWatch Anomaly Detection, can reduce the burden of common threshold-based alerts. 

 You can improve manual processes by programmatically automating steps in the process. After you define the remediation pattern to an event, you can decompose that pattern into actionable logic, and write the code to perform that logic. Responders can then run that code to remediate the issue. Over time, you can automate more and more steps, and ultimately automatically handle whole classes of common incidents. 
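As an example of decomposing a remediation pattern into actionable logic, the sketch below identifies security group ingress rules that are open to the world. The function is a hypothetical helper; the rule dictionaries mirror the `IpPermissions` shape returned by the EC2 `DescribeSecurityGroups` API, and the logic operates on plain data so it can be tested without an AWS account.

```python
def rules_open_to_world(ip_permissions: list) -> list:
    """Return the ingress rules that allow traffic from 0.0.0.0/0 or ::/0.

    Each rule uses the same shape as an IpPermissions element of an
    EC2 DescribeSecurityGroups response.
    """
    open_rules = []
    for rule in ip_permissions:
        v4_open = any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", []))
        v6_open = any(r.get("CidrIpv6") == "::/0" for r in rule.get("Ipv6Ranges", []))
        if v4_open or v6_open:
            open_rules.append(rule)
    return open_rules
```

A responder, or an automated runbook, could pass the returned rules to `revoke_security_group_ingress` to contain the exposure, turning a manual playbook step into repeatable code.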

 During a security investigation, you need to be able to review relevant logs to record and understand the full scope and timeline of the incident. Logs are also required for alert generation, indicating that certain actions of interest have happened. It is critical to select and enable the relevant logs, store them, set up query and retrieval mechanisms, and configure alerting. Additionally, [Amazon Detective](https://aws.amazon.com/detective/) provides an effective way to search and analyze log data. 

 AWS offers over 200 cloud services and thousands of features. We recommend that you review the services that can support and simplify your incident response strategy. 

 In addition to logging, you should develop and implement a [tagging strategy](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html). Tagging can help provide context around the purpose of an AWS resource. Tagging can also be used for automation. 

### Implementation steps
<a name="implementation-steps"></a>

 **Select and set up logs for analysis and alerting** 

 See the following documentation on configuring logging for incident response: 
+ [ Logging strategies for security incident response ](https://aws.amazon.com/blogs/security/logging-strategies-for-security-incident-response/)
+  [SEC04-BP01 Configure service and application logging](sec_detect_investigate_events_app_service_logging.md) 

 **Enable security services to support detection and response** 

 AWS provides native detective, preventative, and responsive capabilities, and other services can be used to architect custom security solutions. For a list of the most relevant services for security incident response, see [Cloud capability definitions](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/appendix-a-cloud-capability-definitions.html). 

 **Develop and implement a tagging strategy** 

 Obtaining contextual information about the business use case and the internal stakeholders for an AWS resource can be difficult. Tags help address this by assigning metadata to your AWS resources; each tag consists of a user-defined key and value. You can create tags to categorize resources by purpose, owner, environment, type of data processed, and other criteria of your choice. 

 Having a consistent tagging strategy can speed up response times and minimize time spent on organizational context by allowing you to quickly identify and discern contextual information about an AWS resource. Tags can also serve as a mechanism to initiate response automations. For more detail on what to tag, see [Tagging your AWS resources](https://docs.aws.amazon.com/tag-editor/latest/userguide/tagging.html). You’ll want to first define the tags you want to implement across your organization. After that, you’ll implement and enforce this tagging strategy. For more detail on implementation and enforcement, see [Implement AWS resource tagging strategy using AWS Tag Policies and Service Control Policies (SCPs)](https://aws.amazon.com/blogs/mt/implement-aws-resource-tagging-strategy-using-aws-tag-policies-and-service-control-policies-scps/). 
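A tagging standard can also be audited in code. The sketch below checks a resource's tags against a required key set; the tag keys shown are illustrative examples, not a prescribed schema.

```python
# Example required keys -- replace with your organization's standard.
REQUIRED_TAG_KEYS = {"owner", "environment", "data-classification"}

def missing_tags(resource_tags: list, required: set = REQUIRED_TAG_KEYS) -> set:
    """Return the required tag keys that are absent from a resource.

    resource_tags uses the [{"Key": ..., "Value": ...}] shape that most
    AWS APIs return for tags.
    """
    present = {t["Key"] for t in resource_tags}
    return required - present
```

A check like this could run on a schedule or in a pipeline to flag untagged resources, while AWS Tag Policies and SCPs, linked above, provide organization-level enforcement.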

## Resources
<a name="resources"></a>

 **Related Well-Architected best practices:** 
+  [SEC04-BP01 Configure service and application logging](sec_detect_investigate_events_app_service_logging.md) 
+  [SEC04-BP02 Analyze logs, findings, and metrics centrally](sec_detect_investigate_events_analyze_all.md) 

 **Related documents:** 
+ [ Logging strategies for security incident response ](https://aws.amazon.com/blogs/security/logging-strategies-for-security-incident-response/)
+ [ Incident response cloud capability definitions ](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/appendix-a-cloud-capability-definitions.html)

 **Related examples:** 
+ [ Threat Detection and Response with Amazon GuardDuty and Amazon Detective ](https://catalog.workshops.aws/guardduty/en-US)
+ [ Security Hub Workshop ](https://catalog.workshops.aws/security-hub/en-US)
+ [ Vulnerability Management with Amazon Inspector ](https://catalog.workshops.aws/inspector/en-US)

# SEC10-BP07 Run simulations
<a name="sec_incident_response_run_game_days"></a>

 As organizations grow and evolve over time, so does the threat landscape, making it important to continually review your incident response capabilities. Running simulations (also known as game days) is one method that can be used to perform this assessment. Simulations use real-world security event scenarios designed to mimic a threat actor’s tactics, techniques, and procedures (TTPs) and allow an organization to exercise and evaluate their incident response capabilities by responding to these mock cyber events as they might occur in reality.

 **Benefits of establishing this best practice:** Simulations have a variety of benefits: 
+  Validating cyber readiness and developing the confidence of your incident responders. 
+  Testing the accuracy and efficiency of tools and workflows. 
+  Refining communication and escalation methods aligned with your incident response plan. 
+  Providing an opportunity to respond to less common vectors. 

**Level of risk exposed if this best practice is not established:** Medium

## Implementation guidance
<a name="implementation-guidance"></a>

 There are three main types of simulations: 
+  **Tabletop exercises:** The tabletop approach to simulations is a discussion-based session involving the various incident response stakeholders to practice roles and responsibilities and use established communication tools and playbooks. Exercise facilitation can typically be accomplished in a full day in a virtual venue, physical venue, or a combination. Because it is discussion-based, the tabletop exercise focuses on processes, people, and collaboration. Technology is an integral part of the discussion, but the actual use of incident response tools or scripts is generally not a part of the tabletop exercise. 
+  **Purple team exercises:** Purple team exercises increase the level of collaboration between the incident responders (blue team) and simulated threat actors (red team). The blue team is made up of members of the security operations center (SOC), but can also include other stakeholders that would be involved during an actual cyber event. The red team is made up of a penetration testing team or key stakeholders that are trained in offensive security. The red team works collaboratively with the exercise facilitators when designing a scenario so that the scenario is accurate and feasible. During purple team exercises, the primary focus is on the detection mechanisms, the tools, and the standard operating procedures (SOPs) supporting the incident response efforts. 
+  **Red team exercises:** During a red team exercise, the offense (red team) conducts a simulation to achieve a certain objective or set of objectives from a predetermined scope. The defenders (blue team) will not necessarily have knowledge of the scope and duration of the exercise, which provides a more realistic assessment of how they would respond to an actual incident. Because red team exercises can be invasive tests, be cautious and implement controls to verify that the exercise does not cause actual harm to your environment. 

 Consider facilitating cyber simulations at regular intervals. Each exercise type can provide unique benefits to the participants and the organization as a whole, so you might choose to start with less complex simulation types (such as tabletop exercises) and progress to more complex simulation types (red team exercises). You should select a simulation type based on your security maturity, resources, and your desired outcomes. Some customers might not choose to perform red team exercises due to complexity and cost. 

## Implementation steps
<a name="implementation-steps"></a>

 Regardless of the type of simulation you choose, simulations generally follow these implementation steps: 

1.  **Define core exercise elements:** Define the simulation scenario and the objectives of the simulation. Both of these should have leadership acceptance. 

1.  **Identify key stakeholders:** At a minimum, an exercise needs exercise facilitators and participants. Depending on the scenario, additional stakeholders such as legal, communications, or executive leadership might be involved. 

1.  **Build and test the scenario:** The scenario might need to be redefined as it is being built if specific elements aren’t feasible. A finalized scenario is expected as the output of this stage. 

1.  **Facilitate the simulation:** The type of simulation determines the facilitation used (a paper-based scenario compared to a highly technical, simulated scenario). The facilitators should align their facilitation tactics to the exercise objectives, and they should engage all exercise participants wherever possible to provide the most benefit. 

1.  **Develop the after-action report (AAR):** Identify areas that went well, those that can use improvement, and potential gaps. The AAR should measure the effectiveness of the simulation as well as the team’s response to the simulated event so that progress can be tracked over time with future simulations. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [AWS Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/welcome.html) 

 **Related videos:** 
+ [AWS GameDay - Security Edition](https://www.youtube.com/watch?v=XnfDWID_OQs)

# SEC10-BP08 Establish a framework for learning from incidents
<a name="sec_incident_response_establish_incident_framework"></a>

 Implementing a *lessons learned* framework and root cause analysis capability can not only help improve incident response capabilities, but also help prevent the incident from recurring. By learning from each incident, you can help avoid repeating the same mistakes, exposures, or misconfigurations, not only improving your security posture, but also minimizing time lost to preventable situations. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 It's important to implement a *lessons learned* framework that addresses, at a high level, the following questions: 
+  When is a lessons learned session held? 
+  What is involved in the lessons learned process? 
+  How is a lessons learned session performed? 
+  Who is involved in the process and how? 
+  How will areas of improvement be identified? 
+  How will you ensure improvements are effectively tracked and implemented? 

 The framework should not focus on or blame individuals, but instead should focus on improving tools and processes. 

### Implementation steps
<a name="implementation-steps"></a>

 Aside from the preceding high-level outcomes listed, it’s important to make sure that you ask the right questions to derive the most value (information that leads to actionable improvements) from the process. Consider these questions to help get you started in fostering your lessons learned discussions: 
+  What was the incident? 
+  When was the incident first identified? 
+  How was it identified? 
+  What systems alerted on the activity? 
+  What systems, services, and data were involved? 
+  What specifically occurred? 
+  What worked well? 
+  What didn't work well? 
+  Which processes or procedures failed, or failed to scale, in responding to the incident? 
+  What can be improved within the following areas: 
  +  **People** 
    +  Were the people who needed to be contacted actually available, and was the contact list up to date? 
    +  Were people missing training or capabilities needed to effectively respond and investigate the incident? 
    +  Were the appropriate resources ready and available? 
  +  **Process** 
    +  Were processes and procedures followed? 
    +  Were processes and procedures documented and available for this (type of) incident? 
    +  Were required processes and procedures missing? 
    +  Were the responders able to gain timely access to the required information to respond to the issue? 
  +  **Technology** 
    +  Did existing alerting systems effectively identify and alert on the activity? 
    +  How could we have reduced time-to-detection by 50%? 
    +  Do existing alerts need improvement or new alerts need to be built for this (type of) incident? 
    +  Did existing tools allow for effective investigation (search/analysis) of the incident? 
    +  What can be done to help identify this (type of) incident sooner? 
    +  What can be done to help prevent this (type of) incident from occurring again? 
    +  Who owns the improvement plan and how will you test that it has been implemented? 
    +  What is the timeline for the additional monitoring or preventative controls and processes to be implemented and tested? 

 This list isn’t all-inclusive, but is intended to serve as a starting point for identifying what the organization and business needs are and how you can analyze them in order to most effectively learn from incidents and continuously improve your security posture. Most important is getting started by incorporating lessons learned as a standard part of your incident response process, documentation, and expectations across the stakeholders. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Security Incident Response Guide - Establish a framework for learning from incidents](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/establish-framework-for-learning.html) 
+  [NCSC CAF guidance - Lessons learned](https://www.ncsc.gov.uk/collection/caf/caf-principles-and-guidance/d-2-lessons-learned) 

# Application security
<a name="a-appsec"></a>

**Topics**
+ [SEC 11. How do you incorporate and validate the security properties of applications throughout the design, development, and deployment lifecycle?](sec-11.md)

# SEC 11. How do you incorporate and validate the security properties of applications throughout the design, development, and deployment lifecycle?
<a name="sec-11"></a>

Training people, testing using automation, understanding dependencies, and validating the security properties of tools and applications help to reduce the likelihood of security issues in production workloads.

**Topics**
+ [SEC11-BP01 Train for application security](sec_appsec_train_for_application_security.md)
+ [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md)
+ [SEC11-BP03 Perform regular penetration testing](sec_appsec_perform_regular_penetration_testing.md)
+ [SEC11-BP04 Manual code reviews](sec_appsec_manual_code_reviews.md)
+ [SEC11-BP05 Centralize services for packages and dependencies](sec_appsec_centralize_services_for_packages_and_dependencies.md)
+ [SEC11-BP06 Deploy software programmatically](sec_appsec_deploy_software_programmatically.md)
+ [SEC11-BP07 Regularly assess security properties of the pipelines](sec_appsec_regularly_assess_security_properties_of_pipelines.md)
+ [SEC11-BP08 Build a program that embeds security ownership in workload teams](sec_appsec_build_program_that_embeds_security_ownership_in_teams.md)

# SEC11-BP01 Train for application security
<a name="sec_appsec_train_for_application_security"></a>

 Provide training to the builders in your organization on common practices for the secure development and operation of applications. Adopting security focused development practices helps reduce the likelihood of issues that are only detected at the security review stage. 

**Desired outcome:** Software should be designed and built with security in mind. When the builders in an organization are trained on secure development practices that start with a threat model, it improves the overall quality and security of the software produced. This approach can reduce the time to ship software or features because less rework is needed after the security review stage. 

 For the purposes of this best practice, *secure development* refers to the software that is being written and the tools or systems that support the software development lifecycle (SDLC). 

**Common anti-patterns:**
+  Waiting until a security review, and then considering the security properties of a system. 
+  Leaving all security decisions to the security team. 
+  Failing to communicate how the decisions taken in the SDLC relate to the overall security expectations or policies of the organization. 
+  Engaging in the security review process too late. 

**Benefits of establishing this best practice:**
+  Better knowledge of the organizational requirements for security early in the development cycle. 
+  Being able to identify and remediate potential security issues faster, resulting in a quicker delivery of features. 
+  Improved quality of software and systems. 

**Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 Provide training to the builders in your organization. Starting off with a course on [threat modeling](https://catalog.workshops.aws/threatmodel/en-US) is a good foundation for helping train for security. Ideally, builders should be able to self-serve access to information relevant to their workloads. This access helps them make informed decisions about the security properties of the systems they build without needing to ask another team. The process for engaging the security team for reviews should be clearly defined and simple to follow. The steps in the review process should be included in the security training. Where known implementation patterns or templates are available, they should be simple to find and link to the overall security requirements. Consider using [AWS CloudFormation](https://aws.amazon.com/cloudformation/), [AWS Cloud Development Kit (AWS CDK) Constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html), [Service Catalog](https://aws.amazon.com/servicecatalog/), or other templating tools to reduce the need for custom configuration. 

### Implementation steps
<a name="implementation-steps"></a>
+  Start builders with a course on [threat modeling](https://catalog.workshops.aws/threatmodel/en-US) to build a good foundation, and help train them on how to think about security. 
+  Provide access to [AWS Training and Certification](https://www.aws.training/LearningLibrary?query=&filters=Language%3A1%20Domain%3A27&from=0&size=15&sort=_score&trk=el_a134p000007C9OtAAK&trkCampaign=GLBL-FY21-TRAINCERT-800-Security&sc_channel=el&sc_campaign=GLBL-FY21-TRAINCERT-800-Security-Blog&sc_outcome=Training_and_Certification&sc_geo=mult), industry, or AWS Partner training. 
+  Provide training on your organization's security review process, which clarifies the division of responsibilities between the security team, workload teams, and other stakeholders. 
+  Publish self-service guidance on how to meet your security requirements, including code examples and templates, if available. 
+  Regularly obtain feedback from builder teams on their experience with the security review process and training, and use that feedback to improve. 
+  Use game days or bug bash campaigns to help reduce the number of issues, and increase the skills of your builders. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC11-BP08 Build a program that embeds security ownership in workload teams](sec_appsec_build_program_that_embeds_security_ownership_in_teams.md) 

 **Related documents:** 
+  [AWS Training and Certification](https://www.aws.training/LearningLibrary?query=&filters=Language%3A1%20Domain%3A27&from=0&size=15&sort=_score&trk=el_a134p000007C9OtAAK&trkCampaign=GLBL-FY21-TRAINCERT-800-Security&sc_channel=el&sc_campaign=GLBL-FY21-TRAINCERT-800-Security-Blog&sc_outcome=Training_and_Certification&sc_geo=mult) 
+  [How to think about cloud security governance](https://aws.amazon.com/blogs/security/how-to-think-about-cloud-security-governance/) 
+  [How to approach threat modeling](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/) 
+  [Accelerating training – The AWS Skills Guild](https://docs.aws.amazon.com/whitepapers/latest/public-sector-cloud-transformation/accelerating-training-the-aws-skills-guild.html) 

 **Related videos:** 
+  [Proactive security: Considerations and approaches](https://www.youtube.com/watch?v=CBrUE6Qwfag) 

 **Related examples:** 
+  [Workshop on threat modeling](https://catalog.workshops.aws/threatmodel) 
+  [Industry awareness for developers](https://owasp.org/www-project-top-ten/) 

 **Related services:** 
+  [AWS CloudFormation](https://aws.amazon.com/cloudformation/) 
+  [AWS Cloud Development Kit (AWS CDK) Constructs](https://docs.aws.amazon.com/cdk/v2/guide/constructs.html) 
+  [Service Catalog](https://aws.amazon.com/servicecatalog/) 
+  [AWS BugBust](https://docs.aws.amazon.com/codeguru/latest/bugbust-ug/what-is-aws-bugbust.html) 

# SEC11-BP02 Automate testing throughout the development and release lifecycle
<a name="sec_appsec_automate_testing_throughout_lifecycle"></a>

 Automate the testing for security properties throughout the development and release lifecycle. Automation makes it easier to consistently and repeatably identify potential issues in software prior to release, which reduces the risk of security issues in the software being provided. 

**Desired outcome:** The goal of automated testing is to provide a programmatic way of detecting potential issues early and often throughout the development lifecycle. When you automate regression testing, you can rerun functional and non-functional tests to verify that previously tested software still performs as expected after a change. When you define security unit tests to check for common misconfigurations, such as broken or missing authentication, you can identify and fix these issues early in the development process. 

 Test automation uses purpose-built test cases for application validation, based on the application’s requirements and desired functionality. The result of the automated testing is based on comparing the generated test output to its respective expected output, which expedites the overall testing lifecycle. Testing methodologies such as regression testing and unit test suites are best suited for automation. Automating the testing of security properties allows builders to receive automated feedback without having to wait for a security review. Automated tests in the form of static or dynamic code analysis can increase code quality and help detect potential software issues early in the development lifecycle. 
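As a sketch of the security unit tests described above, the following pytest-style check verifies that no API route skips authentication unless it is explicitly allowlisted. The route configuration format here is invented for illustration; a real project would derive it from its framework or infrastructure definitions.

```python
# Hypothetical route configuration, for illustration only.
ROUTES = [
    {"path": "/health", "auth_required": False},    # public by design
    {"path": "/admin/users", "auth_required": True},
    {"path": "/orders", "auth_required": True},
]

# Routes that are intentionally public.
PUBLIC_ALLOWLIST = {"/health"}

def unauthenticated_routes(routes, allowlist=PUBLIC_ALLOWLIST):
    """Return routes that skip authentication without being allowlisted."""
    return [r["path"] for r in routes
            if not r["auth_required"] and r["path"] not in allowlist]

def test_no_unexpected_public_routes():
    # Fails the build if a route is exposed without authentication.
    assert unauthenticated_routes(ROUTES) == []
```

Run in CI on every commit, a test like this turns the "missing authentication" misconfiguration into an automated, repeatable check rather than something caught at security review.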

**Common anti-patterns:**
+  Not communicating the test cases and test results of the automated testing. 
+  Performing the automated testing only immediately prior to a release. 
+  Automating test cases with frequently changing requirements. 
+  Failing to provide guidance on how to address the results of security tests. 

**Benefits of establishing this best practice:**
+  Reduced dependency on people evaluating the security properties of systems. 
+  Consistent findings across multiple workstreams. 
+  Reduced likelihood of introducing security issues into production software. 
+  Shorter window of time between detection and remediation due to catching software issues earlier. 
+  Increased visibility of systemic or repeated behavior across multiple workstreams, which can be used to drive organization-wide improvements. 

**Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

As you build your software, adopt various mechanisms for software testing to ensure that you are testing your application for both functional requirements, based on your application’s business logic, and non-functional requirements, which are focused on application reliability, performance, and security. 

 Static application security testing (SAST) analyzes your source code for anomalous security patterns, and provides indications for defect prone code. SAST relies on static inputs, such as documentation (requirements specification, design documentation, and design specifications) and application source code to test for a range of known security issues. Static code analyzers can help expedite the analysis of large volumes of code. The [NIST Quality Group](https://www.nist.gov/itl/ssd/software-quality-group) provides a comparison of [Source Code Security Analyzers](https://www.nist.gov/itl/ssd/software-quality-group/source-code-security-analyzers), which includes open source tools for [Byte Code Scanners](https://samate.nist.gov/index.php/Byte_Code_Scanners.html) and [Binary Code Scanners](https://samate.nist.gov/index.php/Binary_Code_Scanners.html).

 Complement your static testing with dynamic application security testing (DAST) methodologies, which perform tests against the running application to identify potentially unexpected behavior. Dynamic testing can detect potential issues that are not detectable via static analysis. Testing at the code repository, build, and pipeline stages allows you to check for different types of potential issues before they enter your code. [Amazon CodeWhisperer](https://aws.amazon.com/codewhisperer/) provides code recommendations, including security scanning, in the builder’s IDE. [Amazon CodeGuru Reviewer](https://aws.amazon.com/codeguru/) can identify critical issues, security issues, and hard-to-find bugs during application development, and provides recommendations to improve code quality. 

 The [Security for Developers workshop](https://catalog.workshops.aws/sec4devs) uses AWS developer tools, such as [AWS CodeBuild](https://aws.amazon.com/codebuild/), [AWS CodeCommit](https://aws.amazon.com/codecommit/), and [AWS CodePipeline](https://aws.amazon.com/codepipeline/), for release pipeline automation that includes SAST and DAST testing methodologies. 

 As you progress through your SDLC, establish an iterative process that includes periodic application reviews with your security team. Feedback gathered from these security reviews should be addressed and validated as part of your release readiness review. These reviews establish a robust application security posture, and provide builders with actionable feedback to address potential issues. 

### Implementation steps
<a name="implementation-steps"></a>
+  Implement consistent IDE, code review, and CI/CD tools that include security testing. 
+  Consider where in the SDLC it is appropriate to block pipelines instead of just notifying builders that issues need to be remediated. 
+  The [Security for Developers workshop](https://catalog.workshops.aws/sec4devs) provides an example of integrating static and dynamic testing into a release pipeline. 
+  Performing testing or code analysis using automated tools, such as [Amazon CodeWhisperer](https://aws.amazon.com/codewhisperer/) integrated with developer IDEs, and [Amazon CodeGuru Reviewer](https://aws.amazon.com/codeguru/) for scanning code on commit, helps builders get feedback at the right time. 
+  When building using AWS Lambda, you can use [Amazon Inspector](https://aws.amazon.com/about-aws/whats-new/2023/02/code-scans-lambda-functions-amazon-inspector-preview/) to scan the application code in your functions. 
+  When automated testing is included in CI/CD pipelines, you should use a ticketing system to track the notification and remediation of software issues. 
+  For security tests that might generate findings, linking to guidance for remediation helps builders improve code quality. 
+  Regularly analyze the findings from automated tools to prioritize the next automation, builder training, or awareness campaign. 
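
The decision of when to block a pipeline rather than only notify builders can be expressed as a simple severity-threshold policy. The sketch below is illustrative, not a feature of any particular CI/CD service; the severity names and data shape are assumptions for the example.

```python
# Hypothetical severity scale, ordered from least to most serious.
SEVERITY_ORDER = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list[dict], block_at: str = "high") -> dict:
    """Decide whether a pipeline stage should block or merely notify.

    Any finding at or above `block_at` blocks the deployment; everything
    else is surfaced to builders as a notification.
    """
    threshold = SEVERITY_ORDER[block_at]
    blocking = [f for f in findings if SEVERITY_ORDER[f["severity"]] >= threshold]
    notify = [f for f in findings if f not in blocking]
    return {"blocked": bool(blocking), "notify": notify}
```

Early in the SDLC a team might set `block_at="critical"` and tighten the threshold as tooling and remediation practices mature.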

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [Continuous Delivery and Continuous Deployment](https://aws.amazon.com/devops/continuous-delivery/) 
+  [AWS DevOps Competency Partners](https://aws.amazon.com/devops/partner-solutions/?blog-posts-cards.sort-by=item.additionalFields.createdDate&blog-posts-cards.sort-order=desc&partner-solutions-cards.sort-by=item.additionalFields.partnerNameLower&partner-solutions-cards.sort-order=asc&awsf.partner-solutions-filter-partner-type=partner-type%23technology&awsf.Filter%20Name%3A%20partner-solutions-filter-partner-location=*all&awsf.partner-solutions-filter-partner-location=*all&partner-case-studies-cards.sort-by=item.additionalFields.sortDate&partner-case-studies-cards.sort-order=desc&awsm.page-partner-solutions-cards=1) 
+  [AWS Security Competency Partners](https://aws.amazon.com/security/partner-solutions/?blog-posts-cards.sort-by=item.additionalFields.createdDate&blog-posts-cards.sort-order=desc&partner-solutions-cards.sort-by=item.additionalFields.partnerNameLower&partner-solutions-cards.sort-order=asc&awsf.partner-solutions-filter-partner-type=*all&awsf.Filter%20Name%3A%20partner-solutions-filter-partner-categories=use-case%23app-security&awsf.partner-solutions-filter-partner-location=*all&partner-case-studies-cards.sort-by=item.additionalFields.sortDate&partner-case-studies-cards.sort-order=desc&events-master-partner-webinars.sort-by=item.additionalFields.startDateTime&events-master-partner-webinars.sort-order=asc) for Application Security 
+  [Choosing a Well-Architected CI/CD approach](https://aws.amazon.com/blogs/devops/choosing-well-architected-ci-cd-open-source-software-aws-services/) 
+  [Monitoring CodeCommit events in Amazon EventBridge and Amazon CloudWatch Events](https://docs.aws.amazon.com/codecommit/latest/userguide/monitoring-events.html) 
+  [Secrets detection in Amazon CodeGuru Review](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/recommendations.html#secrets-detection) 
+  [Accelerate deployments on AWS with effective governance](https://aws.amazon.com/blogs/architecture/accelerate-deployments-on-aws-with-effective-governance/) 
+  [How AWS approaches automating safe, hands-off deployments](https://aws.amazon.com/builders-library/automating-safe-hands-off-deployments/) 

 **Related videos:**
+  [Hands-off: Automating continuous delivery pipelines at Amazon](https://www.youtube.com/watch?v=ngnMj1zbMPY) 
+  [Automating cross-account CI/CD pipelines](https://www.youtube.com/watch?v=AF-pSRSGNks) 

 **Related examples:**
+  [Industry awareness for developers](https://owasp.org/www-project-top-ten/) 
+  [AWS CodePipeline Governance](https://github.com/awslabs/aws-codepipeline-governance) (GitHub) 
+  [Security for Developers workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/66275888-6bab-4872-8c6e-ed2fe132a362/en-US) 

# SEC11-BP03 Perform regular penetration testing
<a name="sec_appsec_perform_regular_penetration_testing"></a>

Perform regular penetration testing of your software. This mechanism helps identify potential software issues that cannot be detected by automated testing or a manual code review. It can also help you understand the efficacy of your detective controls. Penetration testing should try to determine if the software can be made to perform in unexpected ways, such as exposing data that should be protected, or granting broader permissions than expected.

 

**Desired outcome:** Penetration testing is used to detect, remediate, and validate your application’s security properties. Regular and scheduled penetration testing should be performed as part of the software development lifecycle (SDLC). The findings from penetration tests should be addressed prior to the software being released. You should analyze the findings from penetration tests to identify if there are issues that could be found using automation. Having a regular and repeatable penetration testing process that includes an active feedback mechanism helps inform the guidance to builders and improves software quality. 

**Common anti-patterns:**
+  Only penetration testing for known or prevalent security issues. 
+  Penetration testing applications without dependent third-party tools and libraries. 
+  Only penetration testing for package security issues, and not evaluating implemented business logic. 

**Benefits of establishing this best practice:**
+  Increased confidence in the security properties of the software prior to release. 
+  Opportunity to identify preferred application patterns, which leads to greater software quality. 
+  A feedback loop that identifies earlier in the development cycle where automation or additional training can improve the security properties of software. 

**Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Penetration testing is a structured security testing exercise where you run planned security breach scenarios to detect, remediate, and validate security controls. Penetration tests start with reconnaissance, during which data is gathered based on the current design of the application and its dependencies. A curated list of security-specific testing scenarios is built and run. The key purpose of these tests is to uncover security issues in your application that could be exploited to gain unintended access to your environment, or unauthorized access to data. You should perform penetration testing when you launch new features, or whenever your application has undergone major changes in function or technical implementation. 

 You should identify the most appropriate stage in the development lifecycle to perform penetration testing. This testing should happen late enough that the functionality of the system is close to the intended release state, but with enough time remaining for any issues to be remediated. 

### Implementation steps
<a name="implementation-steps"></a>
+  Have a structured process for how penetration testing is scoped. Basing this process on the [threat model](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/) is a good way of maintaining context. 
+  Identify the appropriate place in the development cycle to perform penetration testing. This should be when there is minimal change expected in the application, but with enough time to perform remediation. 
+  Train your builders on what to expect from penetration testing findings, and how to get information on remediation. 
+  Use tools to speed up the penetration testing process by automating common or repeatable tests. 
+  Analyze penetration testing findings to identify systemic security issues, and use this data to inform additional automated testing and ongoing builder education. 
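
To illustrate the step about automating common or repeatable tests, the sketch below replays a small corpus of classic hostile inputs (path traversal, SQL injection, and script injection probes) against an input validator. The validator and corpus are hypothetical stand-ins; a real engagement uses purpose-built tooling and a much larger corpus.

```python
# A tiny corpus of classic hostile payloads a tester replays on every run.
PAYLOADS = [
    "../../etc/passwd",            # path traversal
    "' OR '1'='1",                 # SQL injection probe
    "<script>alert(1)</script>",   # script injection probe
]

def naive_validator(value: str) -> bool:
    """Hypothetical application validator: accepts short alphanumeric IDs."""
    return value.isalnum() and len(value) <= 32

def replay(validator) -> list[str]:
    """Return any payloads that the validator wrongly accepts."""
    return [p for p in PAYLOADS if validator(p)]

if __name__ == "__main__":
    print(replay(naive_validator))  # an empty list means every probe was rejected
```

Encoding known attack inputs as a repeatable test frees the penetration tester to spend time on the business-logic issues that automation cannot find.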

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC11-BP01 Train for application security](sec_appsec_train_for_application_security.md) 
+  [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md) 

 **Related documents:** 
+  [AWS Penetration Testing](https://aws.amazon.com/security/penetration-testing/) provides detailed guidance for penetration testing on AWS 
+  [Accelerate deployments on AWS with effective governance](https://aws.amazon.com/blogs/architecture/accelerate-deployments-on-aws-with-effective-governance/) 
+  [AWS Security Competency Partners](https://aws.amazon.com/security/partner-solutions/?blog-posts-cards.sort-by=item.additionalFields.createdDate&blog-posts-cards.sort-order=desc&partner-solutions-cards.sort-by=item.additionalFields.partnerNameLower&partner-solutions-cards.sort-order=asc&awsf.partner-solutions-filter-partner-type=*all&awsf.Filter%20Name%3A%20partner-solutions-filter-partner-categories=*all&awsf.partner-solutions-filter-partner-location=*all&partner-case-studies-cards.sort-by=item.additionalFields.sortDate&partner-case-studies-cards.sort-order=desc&events-master-partner-webinars.sort-by=item.additionalFields.startDateTime&events-master-partner-webinars.sort-order=asc) 
+  [Modernize your penetration testing architecture on AWS Fargate](https://aws.amazon.com/blogs/architecture/modernize-your-penetration-testing-architecture-on-aws-fargate/) 
+  [AWS Fault Injection Simulator](https://aws.amazon.com/fis/) 

 **Related examples:** 
+  [Automate API testing with AWS CodePipeline](https://github.com/aws-samples/aws-codepipeline-codebuild-with-postman) (GitHub) 
+  [Automated security helper](https://github.com/aws-samples/automated-security-helper) (GitHub) 

# SEC11-BP04 Manual code reviews
<a name="sec_appsec_manual_code_reviews"></a>

Perform a manual code review of the software that you produce. This process helps verify that the person who wrote the code is not the only one checking the code quality.

**Desired outcome:** Including a manual code review step during development increases the quality of the software being written, helps upskill less experienced members of the team, and provides an opportunity to identify places where automation can be used. Manual code reviews can be supported by automated tools and testing. 

**Common anti-patterns:**
+  Not performing reviews of code before deployment. 
+  Having the same person write and review the code. 
+  Not using automation to assist or orchestrate code reviews. 
+  Not training builders on application security before they review code. 

**Benefits of establishing this best practice:**
+  Increased code quality. 
+  Increased consistency of code development through reuse of common approaches. 
+  Reduction in the number of issues discovered during penetration testing and later stages. 
+  Improved knowledge transfer within the team. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 The review step should be implemented as part of the overall code management flow. The specifics depend on the approach used for branching, pull-requests, and merging. You might be using AWS CodeCommit or third-party solutions such as GitHub, GitLab, or Bitbucket. Whatever method you use, it’s important to verify that your processes require the review of code before it’s deployed in a production environment. Using tools such as [Amazon CodeGuru Reviewer](https://docs.aws.amazon.com/codeguru/latest/reviewer-ug/welcome.html) can make it easier to orchestrate the code review process. 

### Implementation steps
<a name="implementation-steps-required-field"></a>
+  Implement a manual review step as part of your code management flow and perform this review before proceeding. 
+  Consider [Amazon CodeGuru Reviewer](https://aws.amazon.com/codeguru/) for managing and assisting in code reviews. 
+  Implement an approval flow that requires a code review being completed before code can progress to the next stage. 
+  Verify there is a process to identify issues being found during manual code reviews that could be detected automatically. 
+  Integrate the manual code review step in a way that aligns with your code development practices. 
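
The approval-flow step above can be sketched as a rule that a change needs a minimum number of approvers other than its author. The data shape is an assumption for illustration; services such as AWS CodeCommit approval rule templates implement this kind of policy natively.

```python
def review_complete(author: str, approvals: set[str], required: int = 1) -> bool:
    """A change may progress only when enough reviewers other than the
    author have approved it (the author cannot approve their own code)."""
    independent = approvals - {author}
    return len(independent) >= required
```

For example, a self-approval such as `review_complete("alice", {"alice"})` fails the gate, while an independent review `review_complete("alice", {"bob"})` allows the change to progress.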

## Resources
<a name="resources-required-field"></a>

 **Related best practices:**
+  [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md) 

 **Related documents:**
+  [Working with pull requests in AWS CodeCommit repositories](https://docs.aws.amazon.com/codecommit/latest/userguide/pull-requests.html) 
+  [Working with approval rule templates in AWS CodeCommit](https://docs.aws.amazon.com/codecommit/latest/userguide/approval-rule-templates.html) 
+  [About pull requests in GitHub](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests) 
+  [Automate code reviews with Amazon CodeGuru Reviewer](https://aws.amazon.com/blogs/devops/automate-code-reviews-with-amazon-codeguru-reviewer/) 
+  [Automating detection of security vulnerabilities and bugs in CI/CD pipelines using Amazon CodeGuru Reviewer CLI](https://aws.amazon.com/blogs/devops/automating-detection-of-security-vulnerabilities-and-bugs-in-ci-cd-pipelines-using-amazon-codeguru-reviewer-cli/) 

 **Related videos:**
+  [Continuous improvement of code quality with Amazon CodeGuru](https://www.youtube.com/watch?v=iX1i35H1OVw) 

 **Related examples:** 
+  [Security for Developers workshop](https://catalog.workshops.aws/sec4devs) 

# SEC11-BP05 Centralize services for packages and dependencies
<a name="sec_appsec_centralize_services_for_packages_and_dependencies"></a>

Provide centralized services for builder teams to obtain software packages and other dependencies. This allows the validation of packages before they are included in the software that you write, and provides a source of data for the analysis of the software being used in your organization.

**Desired outcome:** Software is composed of a set of other software packages in addition to the code that is being written. This makes it simple to consume implementations of functionality that are repeatedly used, such as a JSON parser or an encryption library. Logically centralizing the sources for these packages and dependencies provides a mechanism for security teams to validate the properties of the packages before they are used. This approach also reduces the risk of an unexpected issue being caused by a change in an existing package, or by builder teams including arbitrary packages directly from the internet. Use this approach in conjunction with the manual and automated testing flows to increase the confidence in the quality of the software that is being developed. 

**Common anti-patterns:**
+  Pulling packages from arbitrary repositories on the internet. 
+  Not testing new packages before making them available to builders. 

**Benefits of establishing this best practice:**
+  Better understanding of what packages are being used in the software being built. 
+  Being able to notify workload teams when a package needs to be updated based on the understanding of who is using what. 
+  Reducing the risk of a package with issues being included in your software. 

**Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 Provide centralized services for packages and dependencies in a way that is simple for builders to consume. Centralized services can be logically central rather than implemented as a monolithic system. This approach allows you to provide services in a way that meets the needs of your builders. You should implement an efficient way of adding packages to the repository when updates happen or new requirements emerge. AWS services such as [AWS CodeArtifact](https://aws.amazon.com/codeartifact/) or similar AWS partner solutions provide a way of delivering this capability. 

### Implementation steps
<a name="implementation-steps"></a>
+ Implement a logically centralized repository service that is available in all of the environments where software is developed. 
+ Include access to the repository as part of the AWS account vending process.
+ Build automation to test packages before they are published in a repository.
+ Maintain metrics of the most commonly used packages, languages, and teams with the highest amount of change.
+  Provide an automated mechanism for builder teams to request new packages and provide feedback. 
+  Regularly scan packages in your repository to identify the potential impact of newly discovered issues. 
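
A logically centralized repository gives you a place to enforce checks like the following sketch, which consults an allow-list of vetted package versions before a dependency can be resolved. The package names and versions here are made up for the example; AWS CodeArtifact package origin controls provide this kind of capability as a managed feature.

```python
# Hypothetical allow-list of (name, version) pairs a security team has vetted.
APPROVED = {
    ("requests", "2.31.0"),
    ("cryptography", "42.0.5"),
}

def resolve(name: str, version: str) -> str:
    """Permit resolution only of vetted package versions; anything else,
    including arbitrary packages pulled from the internet, is rejected."""
    if (name, version) not in APPROVED:
        raise PermissionError(f"{name}=={version} has not been vetted")
    return f"{name}=={version}"
```

Because every resolution passes through one chokepoint, the same mechanism yields the usage data needed to notify teams when a vetted package later needs an update.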

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md) 

 **Related documents:** 
+  [Accelerate deployments on AWS with effective governance](https://aws.amazon.com/blogs/architecture/accelerate-deployments-on-aws-with-effective-governance/) 
+  [Tighten your package security with CodeArtifact Package Origin Control toolkit](https://aws.amazon.com/blogs/devops/tighten-your-package-security-with-codeartifact-package-origin-control-toolkit/) 
+  [Detecting security issues in logging with Amazon CodeGuru Reviewer](https://aws.amazon.com/blogs/devops/detecting-security-issues-in-logging-with-amazon-codeguru-reviewer/) 
+  [Supply chain Levels for Software Artifacts (SLSA)](https://slsa.dev/) 

 **Related videos:** 
+  [Proactive security: Considerations and approaches](https://www.youtube.com/watch?v=CBrUE6Qwfag) 
+  [The AWS Philosophy of Security (re:Invent 2017)](https://www.youtube.com/watch?v=KJiCfPXOW-U) 
+  [When security, safety, and urgency all matter: Handling Log4Shell](https://www.youtube.com/watch?v=pkPkm7W6rGg) 

 **Related examples:** 
+  [Multi Region Package Publishing Pipeline](https://github.com/aws-samples/multi-region-python-package-publishing-pipeline) (GitHub) 
+  [Publishing Node.js Modules on AWS CodeArtifact using AWS CodePipeline](https://github.com/aws-samples/aws-codepipeline-publish-nodejs-modules) (GitHub) 
+  [AWS CDK Java CodeArtifact Pipeline Sample](https://github.com/aws-samples/aws-cdk-codeartifact-pipeline-sample) (GitHub) 
+  [Distribute private .NET NuGet packages with AWS CodeArtifact](https://github.com/aws-samples/aws-codeartifact-nuget-dotnet-pipelines) (GitHub) 

# SEC11-BP06 Deploy software programmatically
<a name="sec_appsec_deploy_software_programmatically"></a>

Perform software deployments programmatically where possible. This approach reduces the likelihood that a deployment fails or an unexpected issue is introduced due to human error.

**Desired outcome:** Keeping people away from data is a key principle of building securely in the AWS Cloud. This principle includes how you deploy your software. 

 The benefit of not relying on people to deploy software is greater confidence that what you tested is what gets deployed, and that the deployment is performed consistently every time. The software should not need to be changed to function in different environments. Using the principles of twelve-factor application development, specifically the externalizing of configuration, allows you to deploy the same code to multiple environments without requiring changes. Cryptographically signing software packages is a good way to verify that nothing has changed between environments. The overall outcome of this approach is to reduce risk in your change process and improve the consistency of software releases. 
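
Externalizing configuration can be as simple as reading environment-specific values from the process environment, so the same artifact runs unchanged in every environment. The variable names below are assumptions for this sketch; in practice the values might be sourced from AWS Systems Manager Parameter Store at deploy time.

```python
import os

def load_config() -> dict:
    """Read deployment-specific settings from the environment, so the
    same code artifact is promoted unchanged between stages."""
    return {
        "table_name": os.environ.get("APP_TABLE_NAME", "app-dev"),
        "log_level": os.environ.get("APP_LOG_LEVEL", "INFO"),
    }

if __name__ == "__main__":
    # Promoting to production changes only the environment, not the code.
    os.environ["APP_TABLE_NAME"] = "app-prod"
    print(load_config()["table_name"])
```

Because no code path differs between stages, the artifact that passed testing is byte-for-byte the artifact that is deployed.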

**Common anti-patterns:**
+  Manually deploying software into production. 
+  Manually performing changes to software to cater to different environments. 

**Benefits of establishing this best practice:**
+  Increased confidence in the software release process. 
+  Reduced risk of a failed change impacting business functionality. 
+  Increased release cadence due to lower change risk. 
+  Automatic rollback capability for unexpected events during deployment. 
+  Ability to cryptographically prove that the software that was tested is the software deployed. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Build your AWS account structure to remove persistent human access from environments and use CI/CD tools to perform deployments. Architect your applications so that environment-specific configuration data is obtained from an external source, such as [AWS Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html). Sign packages after they have been tested, and validate these signatures during deployment. Configure your CI/CD pipelines to push application code and use canaries to confirm successful deployment. Use tools such as [AWS CloudFormation](https://aws.amazon.com/cloudformation/) or [AWS CDK](https://aws.amazon.com/cdk/) to define your infrastructure, then use [AWS CodeBuild](https://aws.amazon.com/codebuild/) and [AWS CodePipeline](https://aws.amazon.com/codepipeline/) to perform CI/CD operations. 

### Implementation steps
<a name="implementation-steps"></a>
+  Build well-defined CI/CD pipelines to streamline the deployment process. 
+  Using [AWS CodeBuild](https://aws.amazon.com/codebuild/) and [AWS CodePipeline](https://aws.amazon.com/codepipeline/) to provide CI/CD capability makes it simple to integrate security testing into your pipelines. 
+  Follow the guidance on separation of environments in the [Organizing Your AWS Environment Using Multiple Accounts](https://docs.aws.amazon.com/whitepapers/latest/organizing-your-aws-environment/organizing-your-aws-environment.html) whitepaper. 
+  Verify no persistent human access to environments where production workloads are running. 
+  Architect your applications to support the externalization of configuration data. 
+  Consider deploying using a blue/green deployment model. 
+  Implement canaries to validate the successful deployment of software. 
+  Use cryptographic tools such as [AWS Signer](https://docs.aws.amazon.com/signer/latest/developerguide/Welcome.html) or [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) to sign and verify the software packages that you are deploying. 
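
The signing step above can be illustrated with a minimal sketch that signs an artifact's SHA-256 digest and verifies the signature before deployment. Production systems use asymmetric signatures through services such as AWS Signer or AWS KMS; the symmetric HMAC key here is a simplification made for a self-contained example.

```python
import hashlib
import hmac

def sign_artifact(artifact: bytes, key: bytes) -> str:
    """Sign the SHA-256 digest of a build artifact."""
    digest = hashlib.sha256(artifact).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_artifact(artifact: bytes, key: bytes, signature: str) -> bool:
    """Refuse to deploy unless the artifact matches what was signed after testing."""
    return hmac.compare_digest(sign_artifact(artifact, key), signature)

if __name__ == "__main__":
    key = b"example-signing-key"   # in practice, held by a KMS-backed signer
    package = b"tested build artifact"
    sig = sign_artifact(package, key)
    print(verify_artifact(package, key, sig))               # True
    print(verify_artifact(b"tampered artifact", key, sig))  # False
```

Verifying the signature at deploy time is what lets you cryptographically prove that the software that was tested is the software that is deployed.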

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md) 

 **Related documents:** 
+  [AWS CI/CD Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/ef1c179d-8097-4f34-8dc3-0e9eb381b6eb/en-US/) 
+  [Accelerate deployments on AWS with effective governance](https://aws.amazon.com/blogs/architecture/accelerate-deployments-on-aws-with-effective-governance/) 
+  [Automating safe, hands-off deployments](https://aws.amazon.com/builders-library/automating-safe-hands-off-deployments/) 
+  [Code signing using AWS Certificate Manager Private CA and AWS Key Management Service asymmetric keys](https://aws.amazon.com/blogs/security/code-signing-aws-certificate-manager-private-ca-aws-key-management-service-asymmetric-keys/) 
+  [Code Signing, a Trust and Integrity Control for AWS Lambda](https://aws.amazon.com/blogs/aws/new-code-signing-a-trust-and-integrity-control-for-aws-lambda/) 

 **Related videos:** 
+  [Hands-off: Automating continuous delivery pipelines at Amazon](https://www.youtube.com/watch?v=ngnMj1zbMPY) 

 **Related examples:** 
+  [Blue/Green deployments with AWS Fargate](https://catalog.us-east-1.prod.workshops.aws/workshops/954a35ee-c878-4c22-93ce-b30b25918d89/en-US) 

# SEC11-BP07 Regularly assess security properties of the pipelines
<a name="sec_appsec_regularly_assess_security_properties_of_pipelines"></a>

 Apply the principles of the Well-Architected Security Pillar to your pipelines, with particular attention to the separation of permissions. Regularly assess the security properties of your pipeline infrastructure. Effectively managing the security *of* the pipelines allows you to deliver the security of the software that passes *through* the pipelines. 

**Desired outcome:** The pipelines used to build and deploy your software should follow the same recommended practices as any other workload in your environment. The tests that are implemented in the pipelines should not be editable by the builders who are using them. The pipelines should only have the permissions needed for the deployments they are doing and should implement safeguards to avoid deploying to the wrong environments. Pipelines should not rely on long-term credentials, and should be configured to emit state so that the integrity of the build environments can be validated. 

**Common anti-patterns:**
+  Security tests that can be bypassed by builders. 
+  Overly broad permissions for deployment pipelines. 
+  Pipelines not being configured to validate inputs. 
+  Not regularly reviewing the permissions associated with your CI/CD infrastructure. 
+  Use of long-term or hardcoded credentials. 

**Benefits of establishing this best practice:**
+  Greater confidence in the integrity of the software that is built and deployed through the pipelines. 
+  Ability to stop a deployment when there is suspicious activity. 

**Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Starting with managed CI/CD services that support IAM roles reduces the risk of credential leakage. Applying the Security Pillar principles to your CI/CD pipeline infrastructure can help you determine where security improvements can be made. Following the [AWS Deployment Pipelines Reference Architecture](https://aws.amazon.com/blogs/aws/new_deployment_pipelines_reference_architecture_and_-reference_implementations/) is a good starting point for building your CI/CD environments. Regularly reviewing the pipeline implementation and analyzing logs for unexpected behavior can help you understand the usage patterns of the pipelines being used to deploy software. 

### Implementation steps
<a name="implementation-steps"></a>
+  Start with the [AWS Deployment Pipelines Reference Architecture](https://aws.amazon.com/blogs/aws/new_deployment_pipelines_reference_architecture_and_-reference_implementations/). 
+  Consider using [AWS IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) to programmatically generate least privilege IAM policies for the pipelines. 
+  Integrate your pipelines with monitoring and alerting so that you are notified of unexpected or abnormal activity. For AWS managed services, [Amazon EventBridge](https://aws.amazon.com/eventbridge/) allows you to route data to targets such as [AWS Lambda](https://aws.amazon.com/lambda/) or [Amazon Simple Notification Service](https://aws.amazon.com/sns/) (Amazon SNS). 
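
One programmatic check when reviewing pipeline permissions is flagging policy statements that allow wildcard actions or resources. The sketch below inspects an IAM-style policy document represented as a Python dict; IAM Access Analyzer performs far more thorough analysis, so treat this as an illustration of the review step rather than a complete tool.

```python
def overly_broad_statements(policy: dict) -> list[dict]:
    """Return Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM permits either a single string or a list for these elements.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            flagged.append(stmt)
    return flagged
```

Running a check like this on every pipeline role as part of a regular review surfaces permission drift before it can be abused.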

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Deployment Pipelines Reference Architecture](https://aws.amazon.com/blogs/aws/new_deployment_pipelines_reference_architecture_and_-reference_implementations/) 
+  [Monitoring AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/monitoring.html) 
+  [Security best practices for AWS CodePipeline](https://docs.aws.amazon.com/codepipeline/latest/userguide/security-best-practices.html) 

 **Related examples:** 
+  [DevOps monitoring dashboard](https://github.com/aws-solutions/aws-devops-monitoring-dashboard) (GitHub) 

# SEC11-BP08 Build a program that embeds security ownership in workload teams
<a name="sec_appsec_build_program_that_embeds_security_ownership_in_teams"></a>

Build a program or mechanism that empowers builder teams to make security decisions about the software that they create. Your security team still needs to validate these decisions during a review, but embedding security ownership in builder teams allows for faster, more secure workloads to be built. This mechanism also promotes a culture of ownership that positively impacts the operation of the systems you build.

 

**Desired outcome:** To embed security ownership and decision making in builder teams, you can either train builders on how to think about security, or you can augment their training with security people embedded in or associated with the builder teams. Either approach is valid and allows the team to make higher-quality security decisions earlier in the development cycle. This ownership model is predicated on training for application security. Starting with the threat model for the particular workload helps focus the design thinking on the appropriate context. Another benefit of having a community of security-focused builders, or a group of security engineers working with builder teams, is that you can more deeply understand how software is written. This understanding helps you determine the next areas for improvement in your automation capability. 

**Common anti-patterns:**
+  Leaving all security design decisions to a security team. 
+  Not addressing security requirements early enough in the development process. 
+  Not obtaining feedback from builders and security people on the operation of the program. 

**Benefits of establishing this best practice:**
+  Reduced time to complete security reviews. 
+  Reduction in security issues that are only detected at the security review stage. 
+  Improvement in the overall quality of the software being written. 
+  Opportunity to identify and understand systemic issues or areas of high value improvement. 
+  Reduction in the amount of rework required due to security review findings. 
+  Improvement in the perception of the security function. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 Start with the guidance in [SEC11-BP01 Train for application security](sec_appsec_train_for_application_security.md). Then identify the operational model for the program that you think might work best for your organization. The two main patterns are to train builders or to embed security people in builder teams. After you have decided on the initial approach, pilot with a single workload team or a small group of teams to prove that the model works for your organization. Leadership support from both the builder and security parts of the organization helps with the delivery and success of the program. As you build this program, it's important to choose metrics that can be used to show the value of the program. Learning how AWS has approached this problem can also be instructive. This best practice is very much focused on organizational change and culture. The tools that you use should support the collaboration between the builder and security communities. 

### Implementation steps
<a name="implementation-steps"></a>
+  Start by training your builders for application security. 
+  Create a community and an onboarding program to educate builders. 
+  Pick a name for the program. Guardians, Champions, or Advocates are commonly used. 
+  Identify the model to use: train builders, embed security engineers, or have affinity security roles. 
+  Identify project sponsors from security, builders, and potentially other relevant groups. 
+  Track metrics for the number of people involved in the program, the time taken for reviews, and the feedback from builders and security people. Use these metrics to make improvements. 
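The metrics in the last step can be captured in whatever tooling you already use; as an illustration only, a minimal sketch of aggregating review-time and rework metrics might look like the following. The `Review` record and its field names are hypothetical, not an AWS-defined schema.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

# Hypothetical record of one security review; fields are illustrative.
@dataclass
class Review:
    workload: str
    opened: date
    closed: date
    findings_requiring_rework: int

def review_days(r: Review) -> int:
    """Calendar days the review stayed open."""
    return (r.closed - r.opened).days

def program_metrics(reviews: list[Review]) -> dict:
    """Aggregate two headline program metrics: mean review time and rework load."""
    return {
        "mean_review_days": mean(review_days(r) for r in reviews),
        "total_rework_findings": sum(r.findings_requiring_rework for r in reviews),
    }

reviews = [
    Review("payments-api", date(2023, 5, 1), date(2023, 5, 11), 4),
    Review("payments-api", date(2023, 9, 4), date(2023, 9, 8), 1),
]
print(program_metrics(reviews))  # {'mean_review_days': 7, 'total_rework_findings': 5}
```

Tracking these values per workload before and after onboarding a team to the program gives you a simple way to show whether review time and rework are actually decreasing.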

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC11-BP01 Train for application security](sec_appsec_train_for_application_security.md) 
+  [SEC11-BP02 Automate testing throughout the development and release lifecycle](sec_appsec_automate_testing_throughout_lifecycle.md) 

 **Related documents:** 
+  [How to approach threat modeling](https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling/) 
+  [How to think about cloud security governance](https://aws.amazon.com/blogs/security/how-to-think-about-cloud-security-governance/) 

 **Related videos:** 
+  [Proactive security: Considerations and approaches](https://www.youtube.com/watch?v=CBrUE6Qwfag) 