

# Data protection
<a name="a-data-protection"></a>

**Topics**
+ [SEC 7. How do you classify your data?](sec-07.md)
+ [SEC 8. How do you protect your data at rest?](sec-08.md)
+ [SEC 9. How do you protect your data in transit?](sec-09.md)

# SEC 7. How do you classify your data?
<a name="sec-07"></a>

Classification provides a way to categorize data based on criticality and sensitivity, which helps you determine appropriate protection and retention controls.

**Topics**
+ [SEC07-BP01 Understand your data classification scheme](sec_data_classification_identify_data.md)
+ [SEC07-BP02 Apply data protection controls based on data sensitivity](sec_data_classification_define_protection.md)
+ [SEC07-BP03 Automate identification and classification](sec_data_classification_auto_classification.md)
+ [SEC07-BP04 Define scalable data lifecycle management](sec_data_classification_lifecycle_management.md)

# SEC07-BP01 Understand your data classification scheme
<a name="sec_data_classification_identify_data"></a>

 Understand the classification of data your workload is processing, its handling requirements, the associated business processes, where the data is stored, and who the data owner is.  Your data classification and handling scheme should consider the applicable legal and compliance requirements of your workload and what data controls are needed. Understanding the data is the first step in the data classification journey.  

 **Desired outcome:** The types of data present in your workload are well-understood and documented.  Appropriate controls are in place to protect sensitive data based on its classification.  These controls govern considerations such as who is allowed to access the data and for what purpose, where the data is stored, the encryption policy for that data and how encryption keys are managed, the lifecycle for the data and its retention requirements, appropriate destruction processes, what backup and recovery processes are in place, and the auditing of access. 

 **Common anti-patterns:** 
+  Not having a formal data classification policy in place to define data sensitivity levels and their handling requirements 
+  Not having a good understanding of the sensitivity levels of data within your workload, and not capturing this information in architecture and operations documentation 
+  Failing to apply the appropriate controls around your data based on its sensitivity and requirements, as outlined in your data classification and handling policy 
+  Failing to provide feedback about data classification and handling requirements to owners of the policies 

 **Benefits of establishing this best practice:** This practice removes ambiguity around the appropriate handling of data within your workload.  Applying a formal policy that defines the sensitivity levels of data in your organization and their required protections can help you comply with legal regulations and other cybersecurity attestations and certifications.  Workload owners can have confidence in knowing where sensitive data is stored and what protection controls are in place.  Capturing these in documentation helps new team members better understand them and maintain controls early in their tenure. These practices can also help reduce costs by right sizing the controls for each type of data. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 When designing a workload, you may be considering ways to protect sensitive data intuitively.  For example, in a multi-tenant application, it is intuitive to think of each tenant's data as sensitive and put protections in place so that one tenant can't access the data of another tenant.  Likewise, you may intuitively design access controls so only administrators can modify data while other users have only read-level access or no access at all. 

 By having these data sensitivity levels defined and captured in policy, along with their data protection requirements, you can formally identify what data resides in your workload. You can then determine if the right controls are in place, if the controls can be audited, and what responses are appropriate if data is found to be mishandled. 

 To help with categorizing where sensitive data is present within your workload, consider using [resource tags](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html) where available.  For example, you can apply a tag that has a *tag key* of `Classification` and a *tag value* of `PHI` for protected health information (PHI), and another tag that has a *tag key* of `Sensitivity` and a *tag value* of `High`.  Services such as [AWS Config](https://aws.amazon.com/config/) can then be used to monitor these resources for changes and alert if they are modified in a way that brings them out of compliance with your protection requirements (such as changing the encryption settings).  You can capture the standard definition of your tag keys and acceptable values using [tag policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_tag-policies.html), a feature of AWS Organizations. It is not recommended that the tag key or value contains private or sensitive data. 
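To illustrate the kind of check a monitoring service performs, the following sketch flags resources whose tags mark them as sensitive but whose configuration lacks the required protection. The tag values, resource shapes, and `Encrypted` field are assumptions for this example; in practice a service such as AWS Config evaluates real resource configurations.

```python
# Illustrative compliance check over tagged resources. The tag keys/values and
# the resource dictionaries below are assumptions for the example; AWS Config
# would evaluate real resources against rules like this.

SENSITIVE_CLASSIFICATIONS = {"PHI", "PII"}

def non_compliant_resources(resources):
    """Return names of resources tagged as sensitive but not encrypted."""
    findings = []
    for resource in resources:
        tags = resource.get("Tags", {})
        if tags.get("Classification") in SENSITIVE_CLASSIFICATIONS:
            if not resource.get("Encrypted", False):
                findings.append(resource["Name"])
    return findings

resources = [
    {"Name": "patient-records", "Tags": {"Classification": "PHI", "Sensitivity": "High"}, "Encrypted": True},
    {"Name": "raw-uploads", "Tags": {"Classification": "PII"}, "Encrypted": False},
    {"Name": "public-assets", "Tags": {"Classification": "Public"}, "Encrypted": False},
]

print(non_compliant_resources(resources))  # only raw-uploads violates the policy
```

A finding like this would feed the alerting and remediation workflow your classification policy defines.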

### Implementation steps
<a name="implementation-steps"></a>

1.  Understand your organization's data classification scheme and protection requirements. 

1.  Identify the types of sensitive data processed by your workloads. 

1.  Verify that sensitive data is being stored and protected within your workload according to your policy.  Use techniques such as automated testing to audit the effectiveness of your controls. 

1.  Consider using resource and data-level tagging, where available, to tag data with its sensitivity level and other operational metadata that can help with monitoring and incident response. 

   1.   AWS Organizations tag policies can be used to enforce tagging standards. 
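As a sketch of step 4, a tag policy can standardize the `Classification` tag key and restrict its permitted values. The values and enforced resource type below are illustrative; see the tag policy syntax documentation for the full grammar.

```json
{
  "tags": {
    "Classification": {
      "tag_key": {
        "@@assign": "Classification"
      },
      "tag_value": {
        "@@assign": ["Public", "Internal", "Confidential", "PHI"]
      },
      "enforced_for": {
        "@@assign": ["s3:bucket"]
      }
    }
  }
}
```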

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SUS04-BP01 Implement a data classification policy](https://docs.aws.amazon.com/wellarchitected/latest/framework/sus_sus_data_a2.html) 

 **Related documents:** 
+  [Data Classification whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification-overview.html) 
+  [Best Practices for Tagging AWS Resources](https://docs.aws.amazon.com/whitepapers/latest/tagging-best-practices/tagging-best-practices.html) 

 **Related examples:** 
+  [AWS Organizations Tag Policy Syntax and Examples](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_example-tag-policies.html) 

 **Related tools** 
+  [AWS Tag Editor](https://docs.aws.amazon.com/tag-editor/latest/userguide/tag-editor.html) 

# SEC07-BP02 Apply data protection controls based on data sensitivity
<a name="sec_data_classification_define_protection"></a>

 Apply data protection controls that provide an appropriate level of control for each class of data defined in your classification policy.  This practice can allow you to protect sensitive data from unauthorized access and use, while preserving the availability and use of data. 

 **Desired outcome:** You have a classification policy that defines the different levels of sensitivity for data in your organization.  For each of these sensitivity levels, you have clear guidelines published for approved storage and handling services and locations, and their required configuration.  You implement the controls for each level according to the level of protection required and their associated costs.  You have monitoring and alerting in place to detect if data is present in unauthorized locations, processed in unauthorized environments, accessed by unauthorized actors, or the configuration of related services becomes non-compliant. 

 **Common anti-patterns:** 
+  Applying the same level of protection controls across all data. This may lead to over-provisioning security controls for low-sensitivity data, or insufficient protection of highly sensitive data. 
+  Not involving relevant stakeholders from security, compliance, and business teams when defining data protection controls. 
+  Overlooking the operational overhead and costs associated with implementing and maintaining data protection controls. 
+  Not conducting periodic data protection control reviews to maintain alignment with classification policies. 

 **Benefits of establishing this best practice:** By aligning your controls to the classification level of your data, your organization can invest in higher levels of control where needed. This can include increasing resources on securing, monitoring, measuring, remediating, and reporting.  Where fewer controls are appropriate, you can improve the accessibility and completeness of data for your workforce, customers, or constituents.  This approach gives your organization the most flexibility with data usage, while still adhering to data protection requirements. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Implementing data protection controls based on data sensitivity levels involves several key steps. First, identify the different data sensitivity levels within your workload architecture (such as public, internal, confidential, and restricted) and evaluate where you store and process this data. Next, define isolation boundaries around data based on its sensitivity level. We recommend you separate data into different AWS accounts, using [service control policies](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_manage_policies_scps.html) (SCPs) to restrict services and actions allowed for each data sensitivity level. This way, you can create strong isolation boundaries and enforce the principle of least privilege. 

 After you define the isolation boundaries, implement appropriate protection controls based on the data sensitivity levels. Refer to best practices for [Protecting data at rest](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/protecting-data-at-rest.html) and [Protecting data in transit](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/protecting-data-in-transit.html) to implement relevant controls like encryption, access controls, and auditing. Consider techniques like tokenization or anonymization to reduce the sensitivity level of your data. Simplify applying consistent data policies across your business with a centralized system for tokenization and de-tokenization. 
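The centralized tokenization idea can be sketched in a few lines. Here an in-memory dictionary stands in for the token vault, which is purely illustrative; a production system would use a hardened, access-controlled service such as the serverless tokenization solution linked in the resources below.

```python
import secrets

# Minimal sketch of centralized tokenization: sensitive values are swapped for
# random tokens, and only the vault can map them back. The in-memory dicts are
# a stand-in for a hardened, access-controlled store.

class TokenVault:
    def __init__(self):
        self._token_to_value = {}
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        if value in self._value_to_token:        # stable token per value
            return self._value_to_token[value]
        token = "tok_" + secrets.token_hex(8)    # carries no sensitive data
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
token = vault.tokenize("123-45-6789")
assert token != "123-45-6789"                    # downstream systems see only the token
assert vault.detokenize(token) == "123-45-6789"  # authorized callers can reverse it
```

Because the token itself carries no sensitive information, systems that handle only tokens can fall under a lower sensitivity classification, which is how tokenization reduces audit scope.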

 Continuously monitor and test the effectiveness of the implemented controls. Regularly review and update the data classification scheme, risk assessments, and protection controls as your organization's data landscape and threats evolve. Align the implemented data protection controls with relevant industry regulations, standards, and legal requirements. Further, provide security awareness and training to help employees understand the data classification scheme and their responsibilities in handling and protecting sensitive data. 

### Implementation steps
<a name="implementation-steps"></a>

1.  Identify the classification and sensitivity levels of data within your workload. 

1.  Define isolation boundaries for each level and determine an enforcement strategy. 

1.  Evaluate the controls you define that govern access, encryption, auditing, retention, and others required by your data classification policy. 

1.  Evaluate options to reduce the sensitivity level of data where appropriate, such as using tokenization or anonymization. 

1.  Verify your controls using automated testing and monitoring of your configured resources. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [PERF03-BP01 Use a purpose-built data store that best supports your data access and storage requirements](https://docs.aws.amazon.com/wellarchitected/latest/framework/perf_data_use_purpose_built_data_store.html) 
+  [COST04-BP05 Enforce data retention policies](https://docs.aws.amazon.com/wellarchitected/latest/framework/cost_decomissioning_resources_data_retention.html) 

 **Related documents:** 
+  [Data Classification whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification.html) 
+  [Best Practices for Security, Identity, & Compliance](https://aws.amazon.com/architecture/security-identity-compliance/?cards-all.sort-by=item.additionalFields.sortDate&cards-all.sort-order=desc&awsf.content-type=*all&awsf.methodology=*all) 
+  [AWS KMS Best Practices](https://docs.aws.amazon.com/kms/latest/developerguide/best-practices.html) 
+  [Encryption best practices and features for AWS services](https://docs.aws.amazon.com/prescriptive-guidance/latest/encryption-best-practices/welcome.html) 

 **Related examples:** 
+  [Building a serverless tokenization solution to mask sensitive data](https://aws.amazon.com/blogs/compute/building-a-serverless-tokenization-solution-to-mask-sensitive-data/) 
+  [How to use tokenization to improve data security and reduce audit scope](https://aws.amazon.com/blogs/security/how-to-use-tokenization-to-improve-data-security-and-reduce-audit-scope/) 

 **Related tools:** 
+  [AWS Key Management Service (AWS KMS)](https://aws.amazon.com/kms/) 
+  [AWS CloudHSM](https://aws.amazon.com/cloudhsm/) 
+  [AWS Organizations](https://aws.amazon.com/organizations/) 

# SEC07-BP03 Automate identification and classification
<a name="sec_data_classification_auto_classification"></a>

 Automating the identification and classification of data can help you implement the correct controls. Using automation to augment manual determination reduces the risk of human error and exposure. 

 **Desired outcome:** You are able to verify whether the proper controls are in place based on your classification and handling policy. Automated tools and services help you to identify and classify the sensitivity level of your data.  Automation also helps you continually monitor your environments to detect and alert if data is being stored or handled in unauthorized ways so corrective action can be taken quickly. 

 **Common anti-patterns:** 
+  Relying solely on manual processes for data identification and classification, which can be error-prone and time-consuming.  This can lead to inefficient and inconsistent data classification, especially as data volumes grow. 
+  Not having mechanisms to track and manage data assets across the organization. 
+  Overlooking the need for continuous monitoring and classification of data as it moves and evolves within the organization. 

 **Benefits of establishing this best practice:** Automating data identification and classification can lead to more consistent and accurate application of data protection controls, reducing the risk of human error.  Automation can also provide visibility into sensitive data access and movement, helping you detect unauthorized handling and take corrective action. 

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance
<a name="implementation-guidance"></a>

 While human judgment is often used to classify data during the initial design phases of a workload, consider having systems in place that automate identification and classification on test data as a preventive control. For example, developers can be provided a tool or service to scan representative data to determine its sensitivity.  Within AWS, you can upload data sets into [Amazon S3](https://aws.amazon.com/s3/) and scan them using [Amazon Macie](https://aws.amazon.com/macie/), [Amazon Comprehend](https://aws.amazon.com/comprehend/), or [Amazon Comprehend Medical](https://aws.amazon.com/comprehend/medical/).  Likewise, consider scanning data as part of unit and integration testing to detect where sensitive data is not expected. Alerting on sensitive data at this stage can highlight gaps in protections before deployment to production. Other features such as sensitive data detection in [AWS Glue](https://docs.aws.amazon.com/glue/latest/dg/detect-PII.html), [Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-message-data-protection-managed-data-identifiers.html), and [Amazon CloudWatch](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html) can also be used to detect PII and take mitigating action. For any automated tool or service, understand how it defines sensitive data, and augment it with other human or automated solutions to close any gaps as needed. 
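To illustrate scanning in a test suite, a simple pattern-based check could fail a build when sensitive data appears where none is expected. The two patterns below are assumptions for the example and are far narrower than what managed services like Amazon Macie detect.

```python
import re

# Illustrative pattern-based scan for use in unit or integration tests. The
# patterns are assumptions for the example; managed services such as Amazon
# Macie cover a much broader set of sensitive data types.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_sensitive_data(text: str) -> list[str]:
    """Return the names of patterns found in text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

# In a test, fail when sensitive data shows up where it should never be,
# such as application log output:
assert scan_for_sensitive_data("request id 12345 processed") == []
assert scan_for_sensitive_data("ssn 123-45-6789 seen") == ["US_SSN"]
```

Wiring a check like this into integration tests surfaces leaks (for example, PII emitted into logs) before a deployment reaches production.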

 As a detective control, use ongoing monitoring of your environments to detect if sensitive data is being stored in non-compliant ways.  This can help detect situations such as sensitive data being emitted into log files or being copied to a data analytics environment without proper de-identification or redaction.  Data that is stored in Amazon S3 can be continually monitored for sensitive data using Amazon Macie.   

### Implementation steps
<a name="implementation-steps"></a>

1.  Perform an initial scan of your environments for automated identification and classification. 

   1.  An initial full scan of your data can help produce a comprehensive understanding of where sensitive data resides in your environments. When a full scan is not initially required or is unable to be completed up-front due to cost, evaluate if data sampling techniques are suitable to achieve your outcomes. For example, Amazon Macie can be configured to perform a broad automated sensitive data discovery operation across your S3 buckets.  This capability uses sampling techniques to cost-efficiently perform a preliminary analysis of where sensitive data resides.  A deeper analysis of S3 buckets can then be performed using a sensitive data discovery job. Other data stores can also be exported to S3 to be scanned by Macie. 

1.  Configure ongoing scans of your environments. 

   1.  The automated sensitive data discovery capability of Macie can be used to perform ongoing scans of your environments.  Known S3 buckets that are authorized to store sensitive data can be excluded using an allow list in Macie. 

1.  Incorporate identification and classification into your build and test processes. 

   1.  Identify tools that developers can use to scan data for sensitivity while workloads are in development.  Use these tools as part of integration testing to alert when sensitive data is unexpected and prevent further deployment. 

1.  Implement a system or runbook to take action when sensitive data is found in unauthorized locations. 

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Glue: Detect and process sensitive data](https://docs.aws.amazon.com/glue/latest/dg/detect-PII.html) 
+  [Using managed data identifiers in Amazon SNS](https://docs.aws.amazon.com/sns/latest/dg/sns-message-data-protection-managed-data-identifiers.html) 
+  [Amazon CloudWatch Logs: Help protect sensitive log data with masking](https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/mask-sensitive-log-data.html) 

 **Related examples:** 
+  [Enabling data classification for Amazon RDS database with Macie](https://aws.amazon.com/blogs/security/enabling-data-classification-for-amazon-rds-database-with-amazon-macie/) 
+  [Detecting sensitive data in DynamoDB with Macie](https://aws.amazon.com/blogs/security/detecting-sensitive-data-in-dynamodb-with-macie/) 

 **Related tools:** 
+  [Amazon Macie](https://aws.amazon.com/macie/) 
+  [Amazon Comprehend](https://aws.amazon.com/comprehend/) 
+  [Amazon Comprehend Medical](https://aws.amazon.com/comprehend/medical/) 
+  [AWS Glue](https://aws.amazon.com/glue/) 

# SEC07-BP04 Define scalable data lifecycle management
<a name="sec_data_classification_lifecycle_management"></a>

 Understand your data lifecycle requirements as they relate to your different levels of data classification and handling.  This can include how data is handled when it first enters your environment, how data is transformed, and the rules for its destruction. Consider factors such as retention periods, access, auditing, and tracking provenance. 

 **Desired outcome:** You classify data as close as possible to the point and time of ingestion. When data classification requires masking, tokenization, or other processes that reduce the sensitivity level, you perform these actions as close as possible to the point and time of ingestion. 

 You delete data in accordance with your policy when it is no longer appropriate to keep, based on its classification. 

 **Common anti-patterns:** 
+  Implementing a one-size-fits-all approach to data lifecycle management, without considering varying sensitivity levels and access requirements. 
+  Considering lifecycle management only from the perspective of either data that is usable, or data that is backed up, but not both. 
+  Assuming that data that has entered your workload is valid, without establishing its value or provenance. 
+  Relying on data durability as a substitute for data backups and protection. 
+  Retaining data beyond its usefulness and required retention period. 

 **Benefits of establishing this best practice:** A well-defined and scalable data lifecycle management strategy helps maintain regulatory compliance, improves data security, optimizes storage costs, and enables efficient data access and sharing while maintaining appropriate controls. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Data within a workload is often dynamic.  The form it takes when entering your workload environment can be different from when it is stored or used in business logic, reporting, analytics, or machine learning.  In addition, the value of data can change over time. Some data is temporal in nature and loses value as it gets older.  Consider how these changes to your data impact evaluation under your data classification scheme and associated controls.  Where possible, use an automated lifecycle mechanism, such as [Amazon S3 lifecycle policies](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lifecycle-mgmt.html) and the [Amazon Data Lifecycle Manager](https://aws.amazon.com/ebs/data-lifecycle-manager/), to configure your data retention, archiving, and expiration processes.   
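The retention rules such a mechanism encodes can be sketched as plain logic. The classification levels and periods below are assumptions for the example; in practice these rules would be expressed as S3 lifecycle policies or Amazon Data Lifecycle Manager policies rather than application code.

```python
from datetime import date

# Illustrative retention schedule per classification level. The levels and
# periods are assumptions for the example; S3 lifecycle policies or Data
# Lifecycle Manager would apply rules like these in practice.
RETENTION = {
    "Public":       {"archive_after_days": 365, "delete_after_days": None},
    "Internal":     {"archive_after_days": 180, "delete_after_days": 730},
    "Confidential": {"archive_after_days": 90,  "delete_after_days": 365},
}

def lifecycle_action(classification: str, created: date, today: date) -> str:
    rule = RETENTION[classification]
    age = (today - created).days
    if rule["delete_after_days"] is not None and age >= rule["delete_after_days"]:
        return "delete"
    if age >= rule["archive_after_days"]:
        return "archive"
    return "retain"

today = date(2024, 6, 1)
print(lifecycle_action("Confidential", date(2024, 1, 1), today))  # archive
print(lifecycle_action("Confidential", date(2023, 1, 1), today))  # delete
```

Expressing the schedule per classification level, rather than one-size-fits-all, keeps highly sensitive data from outliving its required retention period while low-sensitivity data stays cheaply accessible.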

 Distinguish between data that is available for use, and data that is stored as a backup.  Consider using [AWS Backup](https://aws.amazon.com/backup/) to automate the backup of data across AWS services.  [Amazon EBS snapshots](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-snapshots.html) provide a way to copy an EBS volume and store it using S3 features, including lifecycle, data protection, and access protection mechanisms. Two of these mechanisms are [S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html) and [AWS Backup Vault Lock](https://docs.aws.amazon.com/aws-backup/latest/devguide/vault-lock.html), which can provide you with additional security and control over your backups. Maintain clear separation of duties and access for backups. Isolate backups at the account level to maintain separation from the affected environment during an event. 

 Another aspect of lifecycle management is recording the history of data as it progresses through your workload, called *data provenance tracking*. This can give confidence that you know where the data came from, any transformations performed, what owner or process made those changes, and when.  Having this history helps with troubleshooting issues and investigations during potential security events.  For example, you can log metadata about transformations in an [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) table.  Within a data lake, you can keep copies of transformed data in different S3 buckets for each data pipeline stage. Store schema and timestamp information in an [AWS Glue Data Catalog](https://docs.aws.amazon.com/glue/latest/dg/catalog-and-crawler.html).  Regardless of your solution, consider the requirements of your end users to determine the appropriate tooling you need to report on your data provenance.  This will help you determine how to best track your provenance. 
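A provenance log can be as simple as an append-only record of each transformation. The field names below are assumptions for the example; the same metadata could live in a DynamoDB table or an AWS Glue Data Catalog as the paragraph above describes.

```python
from datetime import datetime, timezone

# Illustrative append-only provenance log. Field names are assumptions for the
# example; in practice records like these could be written to a DynamoDB table.
provenance_log = []

def record_transformation(dataset: str, step: str, actor: str, source: str):
    provenance_log.append({
        "dataset": dataset,
        "step": step,        # what transformation ran
        "actor": actor,      # owner or process that made the change
        "source": source,    # where the input came from
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

record_transformation("orders", "ingest", "pipeline/ingest-job", "s3://raw-bucket/orders/")
record_transformation("orders", "redact-pii", "pipeline/redaction-job", "stage: ingest")

# Reconstruct the history of a dataset during an investigation:
history = [r["step"] for r in provenance_log if r["dataset"] == "orders"]
print(history)  # ['ingest', 'redact-pii']
```

During a security event, a log like this answers where the data came from, which process touched it, and when, without relying on memory or ad hoc reconstruction.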

### Implementation steps
<a name="implementation-steps"></a>

1.  Analyze the workload's data types, sensitivity levels, and access requirements to classify the data and define appropriate lifecycle management strategies. 

1.  Design and implement data retention policies and automated destruction processes that align with legal, regulatory, and organizational requirements. 

1.  Establish processes and automation for continuous monitoring, auditing, and adjustment of data lifecycle management strategies, controls, and policies as workload requirements and regulations evolve. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [COST04-BP05 Enforce data retention policies](https://docs.aws.amazon.com/wellarchitected/latest/framework/cost_decomissioning_resources_data_retention.html) 
+  [SUS04-BP03 Use policies to manage the lifecycle of your datasets](https://docs.aws.amazon.com/wellarchitected/latest/framework/sus_sus_data_a4.html) 

 **Related documents:** 
+  [Data Classification Whitepaper](https://docs.aws.amazon.com/whitepapers/latest/data-classification/data-classification-overview.html) 
+  [AWS Blueprint for Ransomware Defense](https://d1.awsstatic.com/whitepapers/compliance/AWS-Blueprint-for-Ransomware-Defense.pdf) 
+  [DevOps Guidance: Improve traceability with data provenance tracking](https://docs.aws.amazon.com/wellarchitected/latest/devops-guidance/ag.dlm.8-improve-traceability-with-data-provenance-tracking.html) 

 **Related examples:** 
+  [How to protect sensitive data for its entire lifecycle in AWS](https://aws.amazon.com/blogs/security/how-to-protect-sensitive-data-for-its-entire-lifecycle-in-aws/) 
+  [Build data lineage for data lakes using AWS Glue, Amazon Neptune, and Spline](https://aws.amazon.com/blogs/big-data/build-data-lineage-for-data-lakes-using-aws-glue-amazon-neptune-and-spline/) 

 **Related tools:** 
+  [AWS Backup](https://aws.amazon.com/backup/) 
+  [Amazon Data Lifecycle Manager](https://aws.amazon.com/ebs/data-lifecycle-manager/) 
+  [AWS Identity and Access Management Access Analyzer](https://aws.amazon.com/iam/access-analyzer/) 

# SEC 8. How do you protect your data at rest?
<a name="sec-08"></a>

Protect your data at rest by implementing multiple controls to reduce the risk of unauthorized access or mishandling.

**Topics**
+ [SEC08-BP01 Implement secure key management](sec_protect_data_rest_key_mgmt.md)
+ [SEC08-BP02 Enforce encryption at rest](sec_protect_data_rest_encrypt.md)
+ [SEC08-BP03 Automate data at rest protection](sec_protect_data_rest_automate_protection.md)
+ [SEC08-BP04 Enforce access control](sec_protect_data_rest_access_control.md)

# SEC08-BP01 Implement secure key management
<a name="sec_protect_data_rest_key_mgmt"></a>

 Secure key management includes the storage, rotation, access control, and monitoring of key material required to secure data at rest for your workload. 

 **Desired outcome:** A scalable, repeatable, and automated key management mechanism. The mechanism should enforce least-privilege access to key material and strike the correct balance between key availability, confidentiality, and integrity. Access to keys should be monitored, and key material rotated through an automated process. Key material should never be accessible to human identities. 

**Common anti-patterns:** 
+  Human access to unencrypted key material. 
+  Creating custom cryptographic algorithms. 
+  Overly broad permissions to access key material. 

 **Benefits of establishing this best practice:** By establishing a secure key management mechanism for your workload, you can help provide protection for your content against unauthorized access. Additionally, you may be subject to regulatory requirements to encrypt your data. An effective key management solution can provide technical mechanisms aligned to those regulations to protect key material. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Many regulatory requirements and best practices include encryption of data at rest as a fundamental security control. In order to comply with this control, your workload needs a mechanism to securely store and manage the key material used to encrypt your data at rest. 

 AWS offers AWS Key Management Service (AWS KMS) to provide durable, secure, and redundant storage for AWS KMS keys. [Many AWS services integrate with AWS KMS](https://aws.amazon.com/kms/features/#integration) to support encryption of your data. AWS KMS uses FIPS 140-2 Level 3 validated hardware security modules to protect your keys. There is no mechanism to export AWS KMS keys in plain text. 

 When deploying workloads using a multi-account strategy, it is considered [best practice](https://docs.aws.amazon.com/prescriptive-guidance/latest/security-reference-architecture/application.html#app-kms) to keep AWS KMS keys in the same account as the workload that uses them. In this distributed model, responsibility for managing the AWS KMS keys resides with the application team. In other use cases, organizations may choose to store AWS KMS keys in a centralized account. This centralized structure requires additional policies to enable the cross-account access required for the workload account to access keys stored in the centralized account, but may be more applicable in use cases where a single key is shared across multiple AWS accounts. 

 Regardless of where the key material is stored, access to the key should be tightly controlled through the use of [key policies](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) and IAM policies. Key policies are the primary way to control access to an AWS KMS key. In addition, AWS KMS key grants can provide access to AWS services to encrypt and decrypt data on your behalf. Take time to review the [best practices for access control to your AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies-best-practices.html). 

 It is best practice to monitor the use of encryption keys to detect unusual access patterns. Operations performed using AWS managed keys and customer managed keys stored in AWS KMS can be logged in AWS CloudTrail and should be reviewed periodically. Pay special attention to monitoring key destruction events. To mitigate accidental or malicious destruction of key material, key destruction events do not delete the key material immediately. Attempts to delete keys in AWS KMS are subject to a [waiting period](https://docs.aws.amazon.com/kms/latest/developerguide/deleting-keys.html#deleting-keys-how-it-works), which defaults to 30 days, providing administrators time to review these actions and roll back the request if necessary. 
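As an illustrative sketch of this monitoring, the following code flags key-deletion requests in CloudTrail event records so administrators can review them during the waiting period. The event dictionaries are hand-written in CloudTrail's general shape for the example; a real monitor would consume events delivered by CloudTrail.

```python
# Illustrative sketch: flag KMS key-deletion requests in CloudTrail events so
# administrators can review them during the waiting period. The event dicts
# below are hand-written in CloudTrail's general shape for the example.
WATCHED_EVENTS = {"ScheduleKeyDeletion", "DisableKey"}

def key_deletion_alerts(events):
    return [
        (e["eventName"], e["userIdentity"]["arn"])
        for e in events
        if e.get("eventSource") == "kms.amazonaws.com"
        and e.get("eventName") in WATCHED_EVENTS
    ]

events = [
    {"eventSource": "kms.amazonaws.com", "eventName": "Decrypt",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:role/app"}},
    {"eventSource": "kms.amazonaws.com", "eventName": "ScheduleKeyDeletion",
     "userIdentity": {"arn": "arn:aws:iam::111122223333:user/admin"}},
]

print(key_deletion_alerts(events))
```

Routing these alerts to an on-call channel gives administrators the full waiting period to cancel an unintended or malicious deletion request.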

 Most AWS services use AWS KMS in a way that is transparent to you: your only requirement is to decide whether to use an AWS managed or customer managed key. If your workload requires the direct use of AWS KMS to encrypt or decrypt data, the best practice is to use [envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) to protect your data. The [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) provides your applications with client-side encryption primitives to implement envelope encryption and integrates with AWS KMS. 
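 The envelope pattern itself is simple to sketch: a unique data key encrypts each message, and the data key is stored "wrapped" under a master key. The toy below uses an HMAC-derived keystream purely to keep the example dependency-free; it is not real cryptography, and production code should use the AWS Encryption SDK with AWS KMS instead: 

```python
import hmac
import hashlib
import secrets

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with an HMAC-SHA256 keystream (counter mode).
    # Encryption and decryption are the same operation. Illustration only.
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hmac.new(key, offset.to_bytes(8, "big"), hashlib.sha256).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

def envelope_encrypt(master_key: bytes, plaintext: bytes):
    data_key = secrets.token_bytes(32)                  # unique per message
    ciphertext = _keystream_xor(data_key, plaintext)
    wrapped_key = _keystream_xor(master_key, data_key)  # wrap under master key
    return wrapped_key, ciphertext                      # store both together

def envelope_decrypt(master_key: bytes, wrapped_key: bytes, ciphertext: bytes):
    data_key = _keystream_xor(master_key, wrapped_key)  # unwrap first
    return _keystream_xor(data_key, ciphertext)

master = secrets.token_bytes(32)
wrapped, ct = envelope_encrypt(master, b"sensitive payload")
assert envelope_decrypt(master, wrapped, ct) == b"sensitive payload"
```

 With AWS KMS, the analogous calls are `GenerateDataKey` (which returns both the plaintext and wrapped forms of the data key) and `Decrypt` (which unwraps it); the master key material never leaves AWS KMS. 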

### Implementation steps
<a name="implementation-steps"></a>

1.  Determine the appropriate [key management options](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#key-mgmt) (AWS managed or customer managed) for the key. 
   +  For ease of use, AWS offers AWS owned and AWS managed keys for most services, which provide encryption-at-rest capability without the need to manage key material or key policies. 
   +  When using customer managed keys, consider the default key store to provide the best balance between agility, security, data sovereignty, and availability. Other use cases may require the use of custom key stores with [AWS CloudHSM](https://aws.amazon.com/cloudhsm/) or the [external key store](https://docs.aws.amazon.com/kms/latest/developerguide/keystore-external.html). 

1.  Review the list of services that you are using for your workload to understand how AWS KMS integrates with each service. For example, EC2 instances can use encrypted EBS volumes; Amazon EBS snapshots created from those volumes are then also encrypted using the customer managed key, which mitigates accidental disclosure of unencrypted snapshot data. 
   +  [How AWS services use AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/service-integration.html) 
   +  For detailed information about the encryption options that an AWS service offers, see the Encryption at Rest topic in the user guide or the developer guide for the service. 

1.  Implement AWS KMS: AWS KMS makes it simple for you to create and manage keys and control the use of encryption across a wide range of AWS services and in your applications. 
   +  [Getting started: AWS Key Management Service (AWS KMS)](https://docs.aws.amazon.com/kms/latest/developerguide/getting-started.html) 
   +  Review the [best practices for access control to your AWS KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/iam-policies-best-practices.html). 

1.  Consider AWS Encryption SDK: Use the AWS Encryption SDK with AWS KMS integration when your application needs to encrypt data client-side. 
   +  [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) 

1.  Enable [IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html) to automatically review and notify if there are overly broad AWS KMS key policies. 

1.  Enable [Security Hub CSPM](https://docs.aws.amazon.com/securityhub/latest/userguide/kms-controls.html) to receive notifications if there are misconfigured key policies, keys scheduled for deletion, or keys without automated rotation enabled. 

1.  Determine the logging level appropriate for your AWS KMS keys. Since calls to AWS KMS, including read-only events, are logged, the CloudTrail logs associated with AWS KMS can become voluminous. 
   +  Some organizations prefer to segregate the AWS KMS logging activity into a separate trail. For more detail, see the [Logging AWS KMS API calls with CloudTrail](https://docs.aws.amazon.com/kms/latest/developerguide/logging-using-cloudtrail.html) section of the AWS KMS developer guide. 
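 As a lightweight complement to the steps above, a CI check can flag key policies with overly broad principals before they are deployed, similar to what IAM Access Analyzer reports after deployment. The sketch below inspects a policy document for wildcard principals; the sample policy is hypothetical: 

```python
import json

def wildcard_principal_statements(policy: dict):
    """Return the Sids of statements that grant access to any principal ('*')."""
    findings = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if principal == "*" or (isinstance(principal, dict)
                                and principal.get("AWS") == "*"):
            findings.append(stmt.get("Sid", "<no Sid>"))
    return findings

# Hypothetical key policy with one overly broad statement.
sample_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "TooBroad", "Effect": "Allow", "Principal": "*",
     "Action": "kms:Decrypt", "Resource": "*"},
    {"Sid": "Scoped", "Effect": "Allow",
     "Principal": {"AWS": "arn:aws:iam::111122223333:role/WorkloadRole"},
     "Action": "kms:Decrypt", "Resource": "*"}
  ]
}
""")

print(wildcard_principal_statements(sample_policy))  # flags only "TooBroad"
```

 A real check would also account for `Condition` blocks that narrow a wildcard principal; this sketch treats any bare `"*"` as a finding. 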

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Key Management Service](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) 
+  [AWS cryptographic services and tools](https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-overview.html) 
+  [Protecting Amazon S3 Data Using Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html) 
+  [Envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping) 
+  [Digital sovereignty pledge](https://aws.amazon.com/blogs/security/aws-digital-sovereignty-pledge-control-without-compromise/) 
+  [Demystifying AWS KMS key operations, bring your own key, custom key store, and ciphertext portability](https://aws.amazon.com/blogs/security/demystifying-kms-keys-operations-bring-your-own-key-byok-custom-key-store-and-ciphertext-portability/) 
+  [AWS Key Management Service cryptographic details](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 
+  [AWS data protection: Using locks, keys, signatures, and certificates](https://www.youtube.com/watch?v=lD34wbc7KNA) 

 **Related examples:** 
+  [Implement advanced access control mechanisms using AWS KMS](https://catalog.workshops.aws/advkmsaccess/en-US/introduction) 

# SEC08-BP02 Enforce encryption at rest
<a name="sec_protect_data_rest_encrypt"></a>

 You should enforce the use of encryption for data at rest. Encryption maintains the confidentiality of sensitive data in the event of unauthorized access or accidental disclosure. 

 **Desired outcome:** Private data should be encrypted by default when at rest. Encryption helps maintain confidentiality of the data and provides an additional layer of protection against intentional or inadvertent data disclosure or exfiltration. Data that is encrypted cannot be read or accessed without first decrypting it. Any data stored unencrypted should be inventoried and controlled. 

 **Common anti-patterns:** 
+  Not using encrypt-by-default configurations. 
+  Providing overly permissive access to decryption keys. 
+  Not monitoring the use of encryption and decryption keys. 
+  Storing data unencrypted. 
+  Using the same encryption key for all data regardless of data usage, types, and classification. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 Map encryption keys to data classifications within your workloads. This approach helps protect against overly permissive access when using either a single, or very small number of encryption keys for your data (see [SEC07-BP01 Understand your data classification scheme](sec_data_classification_identify_data.md)). 
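 One way to make this mapping explicit is a small lookup from classification level to key alias, which application code consults when choosing an encryption key. The classification levels and alias names below are hypothetical placeholders for your own scheme: 

```python
from typing import Optional

# Hypothetical mapping of data classification to AWS KMS key aliases.
# Key creation and alias naming follow your own classification scheme.
KEY_ALIAS_BY_CLASSIFICATION = {
    "public": None,                          # no dedicated CMK required
    "internal": "alias/workload-internal",
    "confidential": "alias/workload-confidential",
    "restricted": "alias/workload-restricted",
}

def key_alias_for(classification: str) -> Optional[str]:
    """Return the key alias for a classification, or raise on unknown levels."""
    try:
        return KEY_ALIAS_BY_CLASSIFICATION[classification]
    except KeyError:
        raise ValueError(f"unknown data classification: {classification}")
```

 Failing loudly on an unknown classification, rather than falling back to a shared default key, keeps a mislabeled dataset from silently landing under the wrong key. 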

 AWS Key Management Service (AWS KMS) integrates with many AWS services to make it easier to encrypt your data at rest. For example, in Amazon Simple Storage Service (Amazon S3), you can set [default encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket-encryption.html) on a bucket so that new objects are automatically encrypted. When using AWS KMS, consider how tightly the data needs to be restricted. Default and service-controlled AWS KMS keys are managed and used on your behalf by AWS. For sensitive data that requires fine-grained access to the underlying encryption key, consider customer managed keys (CMKs). You have full control over CMKs, including rotation and access management through the use of key policies. 
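 For example, default bucket encryption with a customer managed key is expressed as a server-side encryption configuration like the following; the key ARN is a placeholder: 

```json
{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
      },
      "BucketKeyEnabled": true
    }
  ]
}
```

 Enabling the S3 Bucket Key reduces the number of requests the bucket makes to AWS KMS, which can lower request costs for high-volume workloads. 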

 Additionally, [Amazon Elastic Compute Cloud (Amazon EC2)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html#encryption-by-default) and [Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html) support the enforcement of encryption by setting default encryption. You can use [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) to check automatically that you are using encryption, for example, for [Amazon Elastic Block Store (Amazon EBS) volumes](https://docs.aws.amazon.com/config/latest/developerguide/encrypted-volumes.html), [Amazon Relational Database Service (Amazon RDS) instances](https://docs.aws.amazon.com/config/latest/developerguide/rds-storage-encrypted.html), and [Amazon S3 buckets](https://docs.aws.amazon.com/config/latest/developerguide/s3-default-encryption-kms.html). 

 AWS also provides options for client-side encryption, allowing you to encrypt data prior to uploading it to the cloud. The AWS Encryption SDK provides a way to encrypt your data using [envelope encryption](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#enveloping). You provide the wrapping key, and the AWS Encryption SDK generates a unique data key for each data object it encrypts. Consider AWS CloudHSM if you need a managed single-tenant hardware security module (HSM). AWS CloudHSM allows you to generate, import, and manage cryptographic keys on a FIPS 140-2 level 3 validated HSM. Some use cases for AWS CloudHSM include protecting private keys for issuing a certificate authority (CA), and turning on transparent data encryption (TDE) for Oracle databases. The AWS CloudHSM Client SDK provides software that allows you to encrypt data client side using keys stored inside AWS CloudHSM prior to uploading your data into AWS. The Amazon DynamoDB Encryption Client also allows you to encrypt and sign items prior to upload into a DynamoDB table. 

 **Implementation steps** 
+  **Enforce encryption at rest for Amazon S3:** Implement [Amazon S3 bucket default encryption](https://docs.aws.amazon.com/AmazonS3/latest/userguide/default-bucket-encryption.html). 
+  **Configure [default encryption for new Amazon EBS volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html):** Specify that you want all newly created Amazon EBS volumes to be created in encrypted form, with the option of using the default key provided by AWS or a key that you create. 
+  **Configure encrypted Amazon Machine Images (AMIs):** Copying an existing AMI with encryption configured will automatically encrypt root volumes and snapshots. 
+  **Configure [Amazon RDS encryption](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Overview.Encryption.html):** Configure encryption for your Amazon RDS database clusters and snapshots at rest by using the encryption option. 
+  **Create and configure AWS KMS keys with policies that limit access to the appropriate principals for each classification of data:** For example, create one AWS KMS key for encrypting production data and a different key for encrypting development or test data. You can also provide key access to other AWS accounts. Consider having different accounts for your development and production environments. If your production environment needs to decrypt artifacts in the development account, you can edit the CMK policy used to encrypt the development artifacts to give the production account the ability to decrypt those artifacts. The production environment can then ingest the decrypted data for use in production. 
+  **Configure encryption in additional AWS services:** For other AWS services you use, review the [security documentation](https://docs.aws.amazon.com/security/) for that service to determine the service’s encryption options. 
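 The cross-account pattern described in the steps above can be expressed as an additional statement on the development account's CMK key policy; the account ID below is a placeholder: 

```json
{
  "Sid": "AllowProductionAccountDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::444455556666:root" },
  "Action": ["kms:Decrypt", "kms:DescribeKey"],
  "Resource": "*"
}
```

 Granting the account root principal delegates the permission to the production account; IAM policies in that account then determine which of its roles may actually call `kms:Decrypt` against the key. 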

## Resources
<a name="resources"></a>

 **Related documents:** 
+  [AWS Crypto Tools](https://docs.aws.amazon.com/aws-crypto-tools) 
+  [AWS Encryption SDK](https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/introduction.html) 
+  [AWS KMS Cryptographic Details Whitepaper](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 
+  [AWS Key Management Service](https://aws.amazon.com/kms) 
+  [AWS cryptographic services and tools](https://docs.aws.amazon.com/crypto/latest/userguide/awscryp-overview.html) 
+  [Amazon EBS Encryption](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html) 
+  [Default encryption for Amazon EBS volumes](https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/) 
+  [Encrypting Amazon RDS Resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) 
+  [How do I enable default encryption for an Amazon S3 bucket?](https://docs.aws.amazon.com/AmazonS3/latest/user-guide/default-bucket-encryption.html) 
+  [Protecting Amazon S3 Data Using Encryption](https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingEncryption.html) 

 **Related videos:** 
+  [How Encryption Works in AWS](https://youtu.be/plv7PQZICCM) 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC08-BP03 Automate data at rest protection
<a name="sec_protect_data_rest_automate_protection"></a>

 Use automation to validate and enforce data at rest controls.  Use automated scanning to detect misconfiguration of your data storage solutions, and perform remediations through automated programmatic response where possible.  Incorporate automation in your CI/CD processes to detect data storage misconfigurations before they are deployed to production. 

 **Desired outcome:** Automated systems scan and monitor data storage locations for misconfiguration of controls, unauthorized access, and unexpected use.  Detection of misconfigured storage locations initiates automated remediations.  Automated processes create data backups and store immutable copies outside of the original environment. 

 **Common anti-patterns:** 
+  Not enabling encrypt-by-default settings, where supported. 
+  Not considering security events, in addition to operational events, when formulating an automated backup and recovery strategy. 
+  Not enforcing public access settings for storage services. 
+  Not monitoring and auditing your controls for protecting data at rest. 

 **Benefits of establishing this best practice:** Automation reduces the risk of misconfiguring your data storage locations and helps prevent misconfigurations from entering your production environments. It also helps you detect and fix misconfigurations if they occur.  

 **Level of risk exposed if this best practice is not established:** Medium 

## Implementation guidance 
<a name="implementation-guidance"></a>

 Automation is a theme throughout the practices for protecting your data at rest. [SEC01-BP06 Automate deployment of standard security controls](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_securely_operate_automate_security_controls.html) describes how you can capture the configuration of your resources using *infrastructure as code* (IaC) templates, such as with [AWS CloudFormation](https://aws.amazon.com/cloudformation/).  These templates are committed to a version control system, and are used to deploy resources on AWS through a CI/CD pipeline.  These techniques equally apply to automating the configuration of your data storage solutions, such as encryption settings on Amazon S3 buckets.   
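 As a sketch, an IaC template can declare the encryption and public-access settings directly, so they are version-controlled and reviewable before deployment; the resource name is a placeholder: 

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  WorkloadDataBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```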

 You can check the settings that you define in your IaC templates for misconfiguration in your CI/CD pipelines using rules in [AWS CloudFormation Guard](https://docs.aws.amazon.com/cfn-guard/latest/ug/what-is-guard.html).  You can monitor settings that are not yet available in CloudFormation or other IaC tooling for misconfiguration with [AWS Config](https://aws.amazon.com/config/).  Alerts that Config generates for misconfigurations can be remediated automatically, as described in [SEC04-BP04 Initiate remediation for non-compliant resources](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_detect_investigate_events_noncompliant_resources.html). 
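 A CloudFormation Guard rule that checks templates for the S3 default-encryption setting might look like the following sketch (Guard 2.x syntax): 

```
let s3_buckets = Resources.*[ Type == 'AWS::S3::Bucket' ]

rule s3_buckets_encrypted when %s3_buckets !empty {
    %s3_buckets.Properties.BucketEncryption exists
    <<S3 buckets must define default encryption>>
}
```

 Running such rules as a pipeline step fails the build before a non-compliant bucket reaches an account, rather than detecting it after deployment. 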

 Using automation as part of your permissions management strategy is also an integral component of automated data protections. [SEC03-BP02 Grant least privilege access](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html) and [SEC03-BP04 Reduce permissions continuously](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_continuous_reduction.html) describe configuring least-privilege access policies that are continually monitored by the [AWS Identity and Access Management Access Analyzer](https://aws.amazon.com/iam/access-analyzer/) to generate findings when permission can be reduced.  Beyond automation for monitoring permissions, you can configure [Amazon GuardDuty](https://aws.amazon.com/guardduty/) to watch for anomalous data access behavior for your [EBS volumes](https://docs.aws.amazon.com/guardduty/latest/ug/guardduty_finding-types-ec2.html) (by way of an EC2 instance), [S3 buckets](https://docs.aws.amazon.com/guardduty/latest/ug/s3-protection.html), and supported [Amazon Relational Database Service databases](https://docs.aws.amazon.com/guardduty/latest/ug/rds-protection.html). 

 Automation also plays a role in detecting when sensitive data is stored in unauthorized locations. [SEC07-BP03 Automate identification and classification](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_data_classification_auto_classification.html) describes how [Amazon Macie](https://aws.amazon.com/macie/) can monitor your S3 buckets for unexpected sensitive data and generate alerts that can initiate an automated response. 

 Follow the practices in [REL09 Back up data](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/back-up-data.html) to develop an automated data backup and recovery strategy. Data backup and recovery is as important for recovering from security events as it is for operational events. 

### Implementation steps
<a name="implementation-steps"></a>

1.  Capture data storage configuration in IaC templates.  Use automated checks in your CI/CD pipelines to detect misconfigurations. 

   1.  You can use [AWS CloudFormation](https://aws.amazon.com/cloudformation/) for your IaC templates, and [CloudFormation Guard](https://docs.aws.amazon.com/cfn-guard/latest/ug/what-is-guard.html) for checking templates for misconfiguration. 

   1.  Use [AWS Config](https://aws.amazon.com/config/) to run rules in a proactive evaluation mode. Use this setting to check the compliance of a resource as a step in your CI/CD pipeline before creating it. 

1.  Monitor resources for data storage misconfigurations. 

   1.  Set [AWS Config](https://aws.amazon.com/config/) to monitor data storage resources for changes in control configurations and generate alerts to invoke remediation actions when it detects a misconfiguration. 

   1.  See [SEC04-BP04 Initiate remediation for non-compliant resources](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_detect_investigate_events_noncompliant_resources.html) for more guidance on automated remediations. 

1.  Monitor and reduce data access permissions continually through automation. 

   1.  [IAM Access Analyzer](https://aws.amazon.com/iam/access-analyzer/) can run continually to generate alerts when permissions can potentially be reduced. 

1.  Monitor and alert on anomalous data access behaviors. 

   1.  [GuardDuty](https://aws.amazon.com/guardduty/) watches for both known threat signatures and deviations from baseline access behaviors for data storage resources such as EBS volumes, S3 buckets, and RDS databases. 

1.  Monitor and alert on sensitive data being stored in unexpected locations. 

   1.  Use [Amazon Macie](https://aws.amazon.com/macie/) to continually scan your S3 buckets for sensitive data. 

1.  Automate secure and encrypted backups of your data. 

   1.  [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) is a managed service that creates encrypted and secure backups of various data sources on AWS.  [Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) allows you to copy full server workloads and maintain continuous data protection with a recovery point objective (RPO) measured in seconds.  You can configure both services to work together to automate creating data backups and copying them to failover locations.  This can help keep your data available when impacted by either operational or security events. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC01-BP06 Automate deployment of standard security controls](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_securely_operate_automate_security_controls.html) 
+  [SEC03-BP02 Grant least privilege access](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_least_privileges.html) 
+  [SEC03-BP04 Reduce permissions continuously](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_permissions_continuous_reduction.html) 
+  [SEC04-BP04 Initiate remediation for non-compliant resources](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_detect_investigate_events_noncompliant_resources.html) 
+  [SEC07-BP03 Automate identification and classification](https://docs.aws.amazon.com/wellarchitected/latest/framework/sec_data_classification_auto_classification.html) 
+  [REL09-BP02 Secure and encrypt backups](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_backing_up_data_secured_backups_data.html) 
+  [REL09-BP03 Perform data backup automatically](https://docs.aws.amazon.com/wellarchitected/latest/reliability-pillar/rel_backing_up_data_automated_backups_data.html) 

 **Related documents:** 
+  [AWS Prescriptive Guidance: Automatically encrypt existing and new Amazon EBS volumes](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/automatically-encrypt-existing-and-new-amazon-ebs-volumes.html) 
+  [Ransomware Risk Management on AWS Using the NIST Cyber Security Framework (CSF)](https://docs.aws.amazon.com/whitepapers/latest/ransomware-risk-management-on-aws-using-nist-csf/ransomware-risk-management-on-aws-using-nist-csf.html) 

 **Related examples:** 
+  [How to use AWS Config proactive rules and AWS CloudFormation Hooks to prevent creation of noncompliant cloud resources](https://aws.amazon.com/blogs/mt/how-to-use-aws-config-proactive-rules-and-aws-cloudformation-hooks-to-prevent-creation-of-non-complaint-cloud-resources/) 
+  [Automate and centrally manage data protection for Amazon S3 with AWS Backup](https://aws.amazon.com/blogs/storage/automate-and-centrally-manage-data-protection-for-amazon-s3-with-aws-backup/) 
+  [AWS re:Invent 2023 - Implement proactive data protection using Amazon EBS snapshots](https://www.youtube.com/watch?v=d7C6XsUnmHc) 
+  [AWS re:Invent 2022 - Build and automate for resilience with modern data protection](https://www.youtube.com/watch?v=OkaGvr3xYNk) 

 **Related tools:** 
+  [AWS CloudFormation Guard](https://docs.aws.amazon.com/cfn-guard/latest/ug/what-is-guard.html) 
+  [AWS CloudFormation Guard Rules Registry](https://github.com/aws-cloudformation/aws-guard-rules-registry) 
+  [IAM Access Analyzer](https://aws.amazon.com/iam/access-analyzer/) 
+  [Amazon Macie](https://aws.amazon.com/macie/) 
+  [AWS Backup](https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html) 
+  [Elastic Disaster Recovery](https://aws.amazon.com/disaster-recovery/) 

# SEC08-BP04 Enforce access control
<a name="sec_protect_data_rest_access_control"></a>

 To help protect your data at rest, enforce access control using mechanisms such as isolation and versioning, and apply the principle of least privilege. Prevent the granting of public access to your data. 

**Desired outcome:** Verify that only authorized users can access data on a need-to-know basis. Protect your data with regular backups and versioning to protect against intentional or inadvertent modification or deletion of data. Isolate critical data from other data to protect its confidentiality and data integrity. 

**Common anti-patterns:**
+  Storing data with different sensitivity requirements or classification together. 
+  Using overly permissive permissions on decryption keys. 
+  Improperly classifying data. 
+  Not retaining detailed backups of important data. 
+  Providing persistent access to production data. 
+  Not auditing data access or regularly reviewing permissions.

**Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 Multiple controls can help protect your data at rest, including access (using least privilege), isolation, and versioning. Access to your data should be audited using detective mechanisms, such as AWS CloudTrail, and service level logs, such as Amazon Simple Storage Service (Amazon S3) access logs. You should inventory what data is publicly accessible, and create a plan to reduce the amount of publicly available data over time. 

 Amazon S3 Glacier Vault Lock and Amazon S3 Object Lock provide mandatory access control for objects in Amazon S3: once a vault policy is locked with the compliance option, not even the root user can change it until the lock expires. 
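 For example, an S3 Object Lock default retention configuration in compliance mode looks like the following; the retention period is an example value: 

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "COMPLIANCE",
      "Days": 365
    }
  }
}
```

 In compliance mode, no identity, including the root user, can overwrite or delete a protected object version until its retention period ends. 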

### Implementation steps
<a name="implementation-steps"></a>
+  **Enforce access control**: Enforce access control with least privileges, including access to encryption keys. 
+  **Separate data based on different classification levels**: Use different AWS accounts for data classification levels, and manage those accounts using [AWS Organizations](https://docs.aws.amazon.com/organizations/latest/userguide/orgs_introduction.html). 
+  **Review AWS Key Management Service (AWS KMS) policies**: [Review the level of access](https://docs.aws.amazon.com/kms/latest/developerguide/overview.html) granted in AWS KMS policies. 
+  **Review Amazon S3 bucket and object permissions**: Regularly review the level of access granted in S3 bucket policies. Best practice is to avoid using publicly readable or writeable buckets. Consider using [AWS Config](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) to detect buckets that are publicly available, and Amazon CloudFront to serve content from Amazon S3. Verify that buckets that should not allow public access are properly configured to prevent public access. By default, all S3 buckets are private, and can only be accessed by users that have been explicitly granted access. 
+  **Use [AWS IAM Access Analyzer](https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html):** IAM Access Analyzer analyzes Amazon S3 buckets and generates a finding when [an S3 policy grants access to an external entity.](https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-resources.html#access-analyzer-s3) 
+  **Use [Amazon S3 versioning](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Versioning.html) and [object lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock.html) when appropriate**. 
+  **Use [Amazon S3 Inventory](https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-inventory.html)**: Amazon S3 Inventory can be used to audit and report on the replication and encryption status of your S3 objects. 
+  **Review [Amazon EBS](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) and [AMI sharing](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharing-amis.html) permissions**: Sharing permissions can allow images and volumes to be shared with AWS accounts that are external to your workload. 
+  **Review [AWS Resource Access Manager](https://docs.aws.amazon.com/ram/latest/userguide/what-is.html) shares periodically to determine whether resources should continue to be shared.** Resource Access Manager allows you to share resources, such as AWS Network Firewall policies, Amazon Route 53 resolver rules, and subnets, within your Amazon VPCs. Audit shared resources regularly and stop sharing resources that no longer need to be shared. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [SEC03-BP01 Define access requirements](sec_permissions_define.md) 
+  [SEC03-BP02 Grant least privilege access](sec_permissions_least_privileges.md) 

 **Related documents:** 
+  [AWS KMS Cryptographic Details Whitepaper](https://docs.aws.amazon.com/kms/latest/cryptographic-details/intro.html) 
+  [Introduction to Managing Access Permissions to Your Amazon S3 Resources](https://docs.aws.amazon.com/AmazonS3/latest/dev/intro-managing-access-s3-resources.html) 
+  [Overview of managing access to your AWS KMS resources](https://docs.aws.amazon.com/kms/latest/developerguide/control-access-overview.html) 
+  [AWS Config Rules](https://docs.aws.amazon.com/config/latest/developerguide/managed-rules-by-aws-config.html) 
+  [Amazon S3 + Amazon CloudFront: A Match Made in the Cloud](https://aws.amazon.com/blogs/networking-and-content-delivery/amazon-s3-amazon-cloudfront-a-match-made-in-the-cloud/) 
+  [Using versioning](https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html) 
+  [Locking Objects Using Amazon S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html) 
+  [Sharing an Amazon EBS Snapshot](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html) 
+  [Shared AMIs](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharing-amis.html) 
+  [Hosting a single-page application on Amazon S3](https://docs.aws.amazon.com/prescriptive-guidance/latest/patterns/deploy-a-react-based-single-page-application-to-amazon-s3-and-cloudfront.html) 

 **Related videos:** 
+  [Securing Your Block Storage on AWS](https://youtu.be/Y1hE1Nkcxs8) 

# SEC 9. How do you protect your data in transit?
<a name="sec-09"></a>

Protect your data in transit by implementing multiple controls to reduce the risk of unauthorized access or loss.

**Topics**
+ [

# SEC09-BP01 Implement secure key and certificate management
](sec_protect_data_transit_key_cert_mgmt.md)
+ [

# SEC09-BP02 Enforce encryption in transit
](sec_protect_data_transit_encrypt.md)
+ [

# SEC09-BP03 Authenticate network communications
](sec_protect_data_transit_authentication.md)

# SEC09-BP01 Implement secure key and certificate management
<a name="sec_protect_data_transit_key_cert_mgmt"></a>

 Transport Layer Security (TLS) certificates are used to secure network communications and establish the identity of websites, resources, and workloads over the internet and on private networks. 

 **Desired outcome:** A secure certificate management system that can provision, deploy, store, and renew certificates in a public key infrastructure (PKI). A secure key and certificate management mechanism prevents disclosure of certificate private key material and renews certificates automatically on a periodic basis. It also integrates with other services to provide secure network communications and identity for machine resources inside of your workload. Key material should never be accessible to human identities. 

 **Common anti-patterns:** 
+  Performing manual steps during the certificate deployment or renewal processes. 
+  Paying insufficient attention to certificate authority (CA) hierarchy when designing a private CA. 
+  Using self-signed certificates for public resources. 

 **Benefits of establishing this best practice:** 
+  Simplifies certificate management through automated deployment and renewal 
+  Encourages encryption of data in transit using TLS certificates 
+  Increases security and auditability of certificate actions taken by the certificate authority 
+  Organizes management duties at different layers of the CA hierarchy 

 **Level of risk exposed if this best practice is not established:** High

## Implementation guidance
<a name="implementation-guidance"></a>

 Modern workloads make extensive use of encrypted network communications using PKI protocols such as TLS. PKI certificate management can be complex, but automated certificate provisioning, deployment, and renewal can reduce the friction associated with certificate management. 

 AWS provides two services to manage general-purpose PKI certificates: [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) and [AWS Private Certificate Authority (AWS Private CA)](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html). ACM is the primary service that customers use to provision, manage, and deploy certificates for use in both public-facing and private AWS workloads. ACM issues certificates using AWS Private CA and [integrates](https://docs.aws.amazon.com/acm/latest/userguide/acm-services.html) with many other AWS managed services to provide secure TLS certificates for workloads. 

 AWS Private CA allows you to establish your own root or subordinate certificate authority and issue TLS certificates through an API. You can use these kinds of certificates in scenarios where you control and manage the trust chain on the client side of the TLS connection. In addition to TLS use cases, AWS Private CA can be used to issue certificates for Kubernetes pods, Matter device product attestation, code signing, and other use cases with a [custom template](https://docs.aws.amazon.com/privateca/latest/userguide/UsingTemplates.html). You can also use [IAM Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) to provide temporary IAM credentials to on-premises workloads that have been issued X.509 certificates signed by your Private CA. 

 In addition to ACM and AWS Private CA, [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html) provides specialized support for provisioning, managing, and deploying PKI certificates to IoT devices. It also provides mechanisms for [onboarding IoT devices](https://docs.aws.amazon.com/whitepapers/latest/device-manufacturing-provisioning/device-manufacturing-provisioning.html) into your public key infrastructure at scale. 

**Considerations for establishing a private CA hierarchy **

 When you need to establish a private CA, it's important to take special care to properly design the CA hierarchy upfront. It's a best practice to deploy each level of your CA hierarchy into separate AWS accounts when creating a private CA hierarchy. This intentional step reduces the surface area for each level in the CA hierarchy, making it simpler to discover anomalies in CloudTrail log data and reducing the scope of access or impact if there is unauthorized access to one of the accounts. The root CA should reside in its own separate account and should only be used to issue one or more intermediate CA certificates. 

 Then, create one or more intermediate CAs in accounts separate from the root CA's account to issue certificates for end users, devices, or other workloads. Finally, issue certificates from your root CA to the intermediate CAs, which will in turn issue certificates to your end users or devices. For more information on planning your CA deployment and designing your CA hierarchy, including planning for resiliency, cross-region replication, sharing CAs across your organization, and more, see [Planning your AWS Private CA deployment](https://docs.aws.amazon.com/privateca/latest/userguide/PcaPlanning.html). 

### Implementation steps
<a name="implementation-steps"></a>

1.  Determine the relevant AWS services required for your use case: 
   +  Many use cases can leverage the existing AWS public key infrastructure using [AWS Certificate Manager](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html). ACM can be used to deploy TLS certificates for web servers, load balancers, or other uses for publicly trusted certificates. 
   +  Consider [AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) when you need to establish your own private certificate authority hierarchy or need access to exportable certificates. ACM can then be used to issue [many types of end-entity certificates](https://docs.aws.amazon.com/privateca/latest/userguide/PcaIssueCert.html) using the AWS Private CA. 
   +  For use cases where certificates must be provisioned at scale for embedded Internet of Things (IoT) devices, consider [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/x509-client-certs.html). 

1.  Implement automated certificate renewal whenever possible: 
   +  Use [ACM managed renewal](https://docs.aws.amazon.com/acm/latest/userguide/managed-renewal.html) for certificates issued by ACM along with integrated AWS managed services. 

1.  Establish logging and audit trails: 
   +  Enable [CloudTrail logs](https://docs.aws.amazon.com/privateca/latest/userguide/PcaCtIntro.html) to track access to the accounts holding certificate authorities. Consider configuring log file integrity validation in CloudTrail to verify the authenticity of the log data. 
   +  Periodically generate and review [audit reports](https://docs.aws.amazon.com/privateca/latest/userguide/PcaAuditReport.html) that list the certificates that your private CA has issued or revoked. These reports can be exported to an S3 bucket. 
   +  When deploying a private CA, you will also need to establish an S3 bucket to store the Certificate Revocation List (CRL). For guidance on configuring this S3 bucket based on your workload's requirements, see [Planning a certificate revocation list (CRL)](https://docs.aws.amazon.com/privateca/latest/userguide/crl-planning.html). 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+  [SEC02-BP02 Use temporary credentials](sec_identities_unique.md) 
+ [SEC08-BP01 Implement secure key management](sec_protect_data_rest_key_mgmt.md)
+  [SEC09-BP03 Authenticate network communications](sec_protect_data_transit_authentication.md) 

 **Related documents:** 
+  [How to host and manage an entire private certificate infrastructure in AWS](https://aws.amazon.com/blogs/security/how-to-host-and-manage-an-entire-private-certificate-infrastructure-in-aws/) 
+  [How to secure an enterprise scale ACM Private CA hierarchy for automotive and manufacturing](https://aws.amazon.com/blogs/security/how-to-secure-an-enterprise-scale-acm-private-ca-hierarchy-for-automotive-and-manufacturing/) 
+  [Private CA best practices](https://docs.aws.amazon.com/privateca/latest/userguide/ca-best-practices.html) 
+  [How to use AWS RAM to share your ACM Private CA cross-account](https://aws.amazon.com/blogs/security/how-to-use-aws-ram-to-share-your-acm-private-ca-cross-account/) 

 **Related videos:** 
+  [Activating AWS Certificate Manager Private CA (workshop)](https://www.youtube.com/watch?v=XrrdyplT3PE) 

 **Related examples:** 
+  [Private CA workshop](https://catalog.workshops.aws/certificatemanager/en-US/introduction) 
+  [IoT Device Management Workshop](https://iot-device-management.workshop.aws/en/) (including device provisioning) 

 **Related tools:** 
+  [Plugin to Kubernetes cert-manager to use AWS Private CA](https://github.com/cert-manager/aws-privateca-issuer) 

# SEC09-BP02 Enforce encryption in transit
<a name="sec_protect_data_transit_encrypt"></a>

Enforce your defined encryption requirements based on your organization’s policies, regulatory obligations, and standards to help meet organizational, legal, and compliance requirements. Only use protocols with encryption when transmitting sensitive data outside of your virtual private cloud (VPC). Encryption helps maintain data confidentiality even when the data transits untrusted networks.

 **Desired outcome:** All data should be encrypted in transit using secure TLS protocols and cipher suites. Network traffic between your resources and the internet must be encrypted to mitigate unauthorized access to the data. Network traffic solely within your internal AWS environment should be encrypted using TLS wherever possible. The AWS internal network is encrypted by default and network traffic within a VPC cannot be spoofed or sniffed unless an unauthorized party has gained access to whatever resource is generating traffic (such as Amazon EC2 instances and Amazon ECS containers). Consider protecting network-to-network traffic with an IPsec virtual private network (VPN). 

 **Common anti-patterns:** 
+  Using deprecated versions of SSL, TLS, and cipher suite components (for example, SSL v3.0, 1024-bit RSA keys, and RC4 cipher). 
+  Allowing unencrypted (HTTP) traffic to or from public-facing resources. 
+  Not monitoring and replacing X.509 certificates prior to expiration. 
+  Using self-signed X.509 certificates for TLS. 

 **Level of risk exposed if this best practice is not established:** High 

## Implementation guidance
<a name="implementation-guidance"></a>

 AWS services provide HTTPS endpoints using TLS for communication, providing encryption in transit when communicating with the AWS APIs. Insecure protocols like HTTP can be audited and blocked in a VPC through the use of security groups. HTTP requests can also be [automatically redirected to HTTPS](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html) in Amazon CloudFront or on an [Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#redirect-actions). You have full control over your computing resources to implement encryption in transit across your services. Additionally, you can use VPN connectivity into your VPC from an external network or [AWS Direct Connect](https://aws.amazon.com/directconnect/) to facilitate encryption of traffic. Verify that your clients are making calls to AWS APIs using at least TLS 1.2, as [AWS is deprecating the use of earlier versions of TLS in June 2023](https://aws.amazon.com/blogs/security/tls-1-2-required-for-aws-endpoints/). AWS recommends using TLS 1.3. Third-party solutions are available in the AWS Marketplace if you have special requirements. 
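
The TLS version floor described above can also be enforced in your own client code. The following is a minimal sketch using Python's standard `ssl` module (this is not an AWS API; the same floor would be configured in whatever HTTP client your workload uses):

```python
import ssl

# A minimal sketch of enforcing a TLS version floor on the client side.
# ssl.create_default_context() already enables certificate and hostname
# verification; here we additionally refuse anything older than TLS 1.2.
def make_tls12_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 / 1.1
    return ctx

ctx = make_tls12_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

Any connection attempt with an older protocol version then fails during the handshake rather than silently downgrading.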

 **Implementation steps** 
+  **Enforce encryption in transit:** Your defined encryption requirements should be based on the latest standards and best practices and only allow secure protocols. For example, configure a security group to only allow the HTTPS protocol to an Application Load Balancer or Amazon EC2 instance. 
+  **Configure secure protocols in edge services:** [Configure HTTPS with Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html) and use a [security profile appropriate for your security posture and use case](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/secure-connections-supported-viewer-protocols-ciphers.html#secure-connections-supported-ciphers). 
+  **Use a [VPN for external connectivity](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html):** Consider using an IPsec VPN for securing point-to-point or network-to-network connections to help provide both data privacy and integrity. 
+  **Configure secure protocols in load balancers:** Select a security policy that provides the strongest cipher suites supported by the clients that will be connecting to the listener. [Create an HTTPS listener for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html). 
+  **Configure secure protocols in Amazon Redshift:** Configure your cluster to require a [secure socket layer (SSL) or transport layer security (TLS) connection](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html). 
+  **Configure secure protocols:** Review AWS service documentation to determine encryption-in-transit capabilities. 
+  **Configure secure access when uploading to Amazon S3 buckets:** Use Amazon S3 bucket policy controls to [enforce secure access](https://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html) to data. 
+  **Consider using [AWS Certificate Manager](https://aws.amazon.com/certificate-manager/):** ACM allows you to provision, manage, and deploy public TLS certificates for use with AWS services. 
+  **Consider using [AWS Private Certificate Authority](https://aws.amazon.com/private-ca/) for private PKI needs:** AWS Private CA allows you to create private certificate authority (CA) hierarchies to issue end-entity X.509 certificates that can be used to create encrypted TLS channels. 
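
As a concrete example of the Amazon S3 bucket policy control listed above, the documented `aws:SecureTransport` condition key can be used in a deny statement so that plain-HTTP requests are rejected. A minimal sketch (the bucket name is a placeholder):

```python
import json

# Sketch of an S3 bucket policy that denies any request made without TLS.
# "amzn-s3-demo-bucket" is a placeholder; aws:SecureTransport is the
# documented condition key that evaluates to "false" for HTTP requests.
def deny_insecure_transport_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(deny_insecure_transport_policy("amzn-s3-demo-bucket"))
```

Because the deny applies to every principal and action, it takes precedence over any allow statements that would otherwise permit unencrypted access.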

## Resources
<a name="resources"></a>

 **Related documents:** 
+ [Using HTTPS with CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html)
+ [Connect your VPC to remote networks using AWS Virtual Private Network](https://docs.aws.amazon.com/vpc/latest/userguide/vpn-connections.html)
+ [Create an HTTPS listener for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html)
+ [Tutorial: Configure SSL/TLS on Amazon Linux 2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/SSL-on-amazon-linux-2.html)
+ [Using SSL/TLS to encrypt a connection to a DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html)
+ [Configuring security options for connections](https://docs.aws.amazon.com/redshift/latest/mgmt/connecting-ssl-support.html)

# SEC09-BP03 Authenticate network communications
<a name="sec_protect_data_transit_authentication"></a>

 Verify the identity of communicating parties by using protocols that support authentication, such as Transport Layer Security (TLS) or IPsec. 

 Design your workload to use secure, authenticated network protocols when communicating between services and applications, or with users. Using network protocols that support authentication and authorization provides stronger control over network flows and reduces the impact of unauthorized access. 

 **Desired outcome:** A workload with well-defined data plane and control plane traffic flows between services. The traffic flows use authenticated and encrypted network protocols where technically feasible. 

 **Common anti-patterns:** 
+  Unencrypted or unauthenticated traffic flows within your workload. 
+  Reusing authentication credentials across multiple users or entities. 
+  Relying solely on network controls as an access control mechanism. 
+  Creating a custom authentication mechanism rather than relying on industry-standard authentication mechanisms. 
+  Overly permissive traffic flows between service components or other resources in the VPC. 

 **Benefits of establishing this best practice:** 
+  Limits the scope of impact for unauthorized access to one part of the workload. 
+  Provides a higher level of assurance that actions are only performed by authenticated entities. 
+  Improves decoupling of services by clearly defining and enforcing intended data transfer interfaces. 
+  Enhances monitoring, logging, and incident response through request attribution and well-defined communication interfaces. 
+  Provides defense-in-depth for your workloads by combining network controls with authentication and authorization controls. 

 **Level of risk exposed if this best practice is not established:** Low 

## Implementation guidance
<a name="implementation-guidance"></a>

 Your workload’s network traffic patterns can be grouped into two categories: 
+  *East-west traffic* represents traffic flows between services that make up a workload. 
+  *North-south traffic* represents traffic flows between your workload and consumers. 

 While it is common practice to encrypt north-south traffic, securing east-west traffic using authenticated protocols is less common. Modern security practice recommends that network design alone should not establish a trusted relationship between two entities. Even when two services reside within a common network boundary, it is still best practice to encrypt, authenticate, and authorize communications between those services. 

 As an example, AWS service APIs use the [AWS Signature Version 4 (SigV4)](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_aws-signing.html) signature protocol to authenticate the caller, no matter what network the request originates from. This authentication ensures that AWS APIs can verify the identity that requested the action, and that identity can then be combined with policies to make an authorization decision to determine whether the action should be allowed or not. 
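
The key property of SigV4 is that the long-term secret is never sent on the wire; instead, a signing key is derived from it through a chain of HMAC-SHA256 operations scoped to date, Region, and service. A standard-library sketch of that documented derivation (the credential values below are the placeholder examples from the AWS documentation, not real secrets):

```python
import hashlib
import hmac

# Sketch of the documented SigV4 signing-key derivation. The resulting
# key is scoped to one date, Region, and service, limiting the blast
# radius if a derived key is ever exposed.
def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Placeholder example credential from the AWS documentation.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                        "20150830", "us-east-1", "iam")
print(len(key))  # 32 (HMAC-SHA256 output length)
```

The derived key is then used to sign a canonical representation of the request; the service repeats the same derivation to verify the caller.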

 Services such as [Amazon VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/access-management-overview.html) and [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html) allow you to use the same SigV4 signature protocol to add authentication and authorization to east-west traffic in your own workloads. If resources outside of your AWS environment need to communicate with services that require SigV4-based authentication and authorization, you can use [AWS Identity and Access Management (IAM) Roles Anywhere](https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html) on the non-AWS resource to acquire temporary AWS credentials. These credentials can be used to sign requests to services using SigV4 to authorize access. 

 Another common mechanism for authenticating east-west traffic is TLS mutual authentication (mTLS). Many Internet of Things (IoT), business-to-business applications, and microservices use mTLS to validate the identity of both sides of a TLS communication through the use of both client and server-side X.509 certificates. These certificates can be issued by AWS Private Certificate Authority (AWS Private CA). You can use services such as [Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html) and [AWS App Mesh](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html) to provide mTLS authentication for inter- or intra-workload communication. While mTLS provides authentication information for both sides of a TLS communication, it does not provide a mechanism for authorization. 
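
As a rough illustration of the server half of mTLS, independent of any specific AWS service, Python's standard `ssl` module can require and verify client certificates. The `ca-chain.pem`, `server.pem`, and `server-key.pem` paths are placeholder names for files your PKI (for example, a hierarchy rooted in AWS Private CA) would provide:

```python
import ssl

# Sketch of the server side of mutual TLS (mTLS): besides presenting its
# own certificate, the server demands a client certificate that chains to
# a trusted CA. All file paths here are placeholders.
def make_mtls_server_context(ca_bundle: str = "") -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
    if ca_bundle:
        ctx.load_verify_locations(cafile=ca_bundle)  # CAs trusted for client certs
    # A real server would also load its own certificate and private key:
    # ctx.load_cert_chain(certfile="server.pem", keyfile="server-key.pem")
    return ctx

ctx = make_mtls_server_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

Because `verify_mode` is `CERT_REQUIRED`, the handshake fails for any client that cannot present a certificate chaining to the loaded CA bundle, which is exactly the authentication (but not authorization) property described above.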

 Finally, OAuth 2.0 and OpenID Connect (OIDC) are two protocols typically used for controlling access to services by users, but are now becoming popular for service-to-service traffic as well. API Gateway provides a [JSON Web Token (JWT) authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html), allowing workloads to restrict access to API routes using JWTs issued from OIDC or OAuth 2.0 identity providers. OAuth2 scopes can be used as a source for basic authorization decisions, but the authorization checks still need to be implemented in the application layer, and OAuth2 scopes alone cannot support more complex authorization needs. 
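
To make the authorizer's job concrete, the following standard-library sketch mints and verifies a JWT and then reads a claim. HS256 with a shared secret is used here only so the example is self-contained; OIDC providers typically sign with RS256 and publish public keys, and all names and values below are illustrative:

```python
import base64
import hashlib
import hmac
import json

# Sketch of the checks a JWT authorizer performs: verify the signature,
# then inspect claims (issuer, scope) before allowing the API route.
def _b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def _b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def make_jwt(claims: dict, secret: bytes) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str, secret: bytes) -> dict:
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = (header_b64 + "." + payload_b64).encode()
    expected = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(expected.decode(), sig_b64):
        raise ValueError("invalid signature")
    return json.loads(_b64url_decode(payload_b64))

secret = b"demo-shared-secret"  # placeholder, not a real key
token = make_jwt({"iss": "https://example-issuer", "scope": "orders:read"}, secret)
print(verify_jwt(token, secret)["scope"])  # orders:read
```

As the text notes, the verified `scope` claim can drive a basic allow/deny decision, but finer-grained authorization still belongs in the application layer.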

### Implementation steps
<a name="implementation-steps"></a>
+  **Define and document your workload network flows:** The first step in implementing a defense-in-depth strategy is defining your workload’s traffic flows. 
  +  Create a data flow diagram that clearly defines how data is transmitted between different services that comprise your workload. This diagram is the first step to enforcing those flows through authenticated network channels. 
  +  Instrument your workload in development and testing phases to validate that the data flow diagram accurately reflects the workload’s behavior at runtime. 
  +  A data flow diagram can also be useful when performing a threat modeling exercise, as described in [SEC01-BP07 Identify threats and prioritize mitigations using a threat model](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_securely_operate_threat_model.html). 
+  **Establish network controls:** Consider AWS capabilities to establish network controls aligned to your data flows. While network boundaries should not be the only security control, they provide a layer in the defense-in-depth strategy to protect your workload. 
  +  Use [security groups](https://docs.aws.amazon.com/vpc/latest/userguide/security-groups.html) to define and restrict data flows between resources. 
  +  Consider using [AWS PrivateLink](https://docs.aws.amazon.com/vpc/latest/privatelink/what-is-privatelink.html) to communicate with both AWS and third-party services that support AWS PrivateLink. Data sent through an AWS PrivateLink interface endpoint stays within the AWS network backbone and does not traverse the public internet. 
+  **Implement authentication and authorization across services in your workload:** Choose the set of AWS services most appropriate to provide authenticated, encrypted traffic flows in your workload. 
  +  Consider [Amazon VPC Lattice](https://docs.aws.amazon.com/vpc-lattice/latest/ug/what-is-vpc-lattice.html) to secure service-to-service communication. VPC Lattice can use [SigV4 authentication combined with auth policies](https://docs.aws.amazon.com/vpc-lattice/latest/ug/auth-policies.html) to control service-to-service access. 
  +  For service-to-service communication using mTLS, consider [API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html) or [App Mesh](https://docs.aws.amazon.com/app-mesh/latest/userguide/mutual-tls.html). [AWS Private CA](https://docs.aws.amazon.com/privateca/latest/userguide/PcaWelcome.html) can be used to establish a private CA hierarchy capable of issuing certificates for use with mTLS. 
  +  When integrating with services using OAuth 2.0 or OIDC, consider [API Gateway using the JWT authorizer](https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-jwt-authorizer.html). 
  +  For communication between your workload and IoT devices, consider [AWS IoT Core](https://docs.aws.amazon.com/iot/latest/developerguide/client-authentication.html), which provides several options for network traffic encryption and authentication. 
+  **Monitor for unauthorized access:** Continually monitor for unintended communication channels, unauthorized principals attempting to access protected resources, and other improper access patterns. 
  +  If using VPC Lattice to manage access to your services, consider enabling and monitoring [VPC Lattice access logs](https://docs.aws.amazon.com/vpc-lattice/latest/ug/monitoring-access-logs.html). These access logs include information on the requesting entity, network information including source and destination VPC, and request metadata. 
  +  Consider enabling [VPC flow logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) to capture metadata on network flows and periodically review for anomalies. 
  +  Refer to the [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html) and the [Incident Response section](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/incident-response.html) of the AWS Well-Architected Framework security pillar for more guidance on planning, simulating, and responding to security incidents. 

## Resources
<a name="resources"></a>

 **Related best practices:** 
+ [SEC03-BP07 Analyze public and cross-account access](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_permissions_analyze_cross_account.html)
+ [SEC02-BP02 Use temporary credentials](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_identities_unique.html)
+ [SEC01-BP07 Identify threats and prioritize mitigations using a threat model](https://docs.aws.amazon.com/wellarchitected/latest/security-pillar/sec_securely_operate_threat_model.html)

 **Related documents:** 
+ [Evaluating access control methods to secure Amazon API Gateway APIs](https://aws.amazon.com/blogs/compute/evaluating-access-control-methods-to-secure-amazon-api-gateway-apis/)
+ [Configuring mutual TLS authentication for a REST API](https://docs.aws.amazon.com/apigateway/latest/developerguide/rest-api-mutual-tls.html)
+ [How to secure API Gateway HTTP endpoints with JWT authorizer](https://aws.amazon.com/blogs/security/how-to-secure-api-gateway-http-endpoints-with-jwt-authorizer/)
+ [Authorizing direct calls to AWS services using AWS IoT Core credential provider](https://docs.aws.amazon.com/iot/latest/developerguide/authorizing-direct-aws.html)
+ [AWS Security Incident Response Guide](https://docs.aws.amazon.com/whitepapers/latest/aws-security-incident-response-guide/aws-security-incident-response-guide.html)

 **Related videos:** 
+ [AWS re:Invent 2022: Introducing VPC Lattice](https://www.youtube.com/watch?v=fRjD1JI0H5w)
+ [AWS re:Invent 2020: Serverless API authentication for HTTP APIs on AWS](https://www.youtube.com/watch?v=AW4kvUkUKZ0)

 **Related examples:** 
+ [Amazon VPC Lattice Workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/9e543f60-e409-43d4-b37f-78ff3e1a07f5/en-US)
+ [Zero-Trust Episode 1 – The Phantom Service Perimeter workshop](https://catalog.us-east-1.prod.workshops.aws/workshops/dc413216-deab-4371-9e4a-879a4f14233d/en-US)