

# Securing your workloads


A workload is a collection of resources and code that delivers business value, such as a customer-facing application or a backend process. As you build and deploy workloads on AWS, the controls in this section help you protect your data, limit exposure of sensitive resources, and establish secure defaults. The controls cover managing application secrets, restricting access scope, minimizing access routes to private resources, and encrypting data in transit and at rest.

**This section contains the following topics:**
+ [WKLD.01 Use IAM roles for compute environment permissions](wkld-01.md)
+ [WKLD.02 Restrict credential usage scope with resource-based policies](wkld-02.md)
+ [WKLD.03 Use ephemeral secrets or a secrets management service](wkld-03.md)
+ [WKLD.04 Prevent application secrets from being exposed](wkld-04.md)
+ [WKLD.05 Detect and remediate when secrets are exposed](wkld-05.md)
+ [WKLD.06 Use AWS Systems Manager instead of SSH or RDP](wkld-06.md)
+ [WKLD.07 Enable CloudTrail data events for Amazon S3 buckets with sensitive data](wkld-07.md)
+ [WKLD.08 Encrypt Amazon EBS volumes](wkld-08.md)
+ [WKLD.09 Encrypt Amazon RDS databases](wkld-09.md)
+ [WKLD.10 Deploy private resources into private subnets](wkld-10.md)
+ [WKLD.11 Restrict network access with security groups](wkld-11.md)
+ [WKLD.12 Use VPC endpoints to access supported AWS and external services](wkld-12.md)
+ [WKLD.13 Require HTTPS for public web endpoints](wkld-13.md)
+ [WKLD.14 Use edge protection services for public endpoints](wkld-14.md)
+ [WKLD.15 Define security controls in templates and deploy them by using CI/CD practices](wkld-15.md)

# WKLD.01 Use IAM roles for compute environment permissions


In AWS Identity and Access Management (IAM), a *role* represents a set of permissions that can be assumed by an IAM user, an AWS service, or a federated identity for a configurable period of time. Using roles removes the need to store or manage long-term credentials, which reduces the chance of unintended use. Assign an IAM role directly to Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Fargate tasks and services, AWS Lambda functions, and other AWS compute services that support IAM roles. Applications that use an AWS SDK and run in these compute environments automatically use the IAM role credentials for authentication.

For instructions on using IAM roles with services, see the following documentation:
+ [IAM roles for Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) in the Amazon EC2 documentation
+ [IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the Amazon Elastic Container Service (Amazon ECS) documentation
+ [Lambda execution role](https://docs.aws.amazon.com/lambda/latest/dg/lambda-intro-execution-role.html) in the AWS Lambda documentation
+ For other AWS compute services, refer to the *Security* section of the [AWS service documentation](https://docs.aws.amazon.com/).
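A role that a compute service assumes includes a *trust policy* that names the service as a principal. For example, the following standard trust policy allows Amazon EC2 to assume a role; you then attach permissions policies that grant only the access the application needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
```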

# WKLD.02 Restrict credential usage scope with resource-based policies


*Policies* define permissions or specify access conditions for AWS resources. There are two primary types of policies:
+ *Identity-based policies* are attached to principals and define what the principal's permissions are in the AWS environment.
+ *Resource-based policies* are attached to a resource, such as an Amazon Simple Storage Service (Amazon S3) bucket or a virtual private cloud (VPC) endpoint. These policies specify which principals are allowed access, which actions are supported, and any other conditions that must be met.

For a principal to access a resource, the principal must have permission in its identity-based policy and meet the conditions of the resource-based policy. For more information, see [Identity-based policies and resource-based policies](https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_identity-vs-resource.html) in the IAM documentation.

The following conditions help restrict access to trusted sources and reduce the risk of unintended access:
+ Restrict access to principals in a specified organization (defined in AWS Organizations) by using the `aws:PrincipalOrgID` condition.
+ Restrict access to traffic that originates from a specific VPC or VPC endpoint by using the `aws:SourceVpc` or `aws:SourceVpce` condition, respectively.
+ Allow or deny traffic based on the source IP address by using an `aws:SourceIp` condition.

The following example shows a resource-based policy that uses the `aws:PrincipalOrgID` condition to allow only principals in your organization to access an Amazon S3 bucket. Replace `o-xxxxxxxxxxx` with your organization ID and `bucket-name` with your bucket name:

```
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowFromOrganization",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "arn:aws:s3:::bucket-name/*",
        "Condition": {
          "StringEquals": {"aws:PrincipalOrgID": "o-xxxxxxxxxxx"}
        }
      }
    ]
}
```
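The `aws:SourceVpce` condition follows the same pattern. For example, the following sketch allows object reads only through a specific VPC endpoint (the endpoint ID and bucket name are placeholders to replace with your own):

```json
{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowFromVpcEndpoint",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::bucket-name/*",
        "Condition": {
          "StringEquals": {"aws:SourceVpce": "vpce-xxxxxxxxxxxxxxxxx"}
        }
      }
    ]
}
```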

# WKLD.03 Use ephemeral secrets or a secrets management service


Application secrets are credentials, such as key pairs, access tokens, digital certificates, and sign-in passwords, that an application uses to gain access to the services it depends on, such as a database. To help protect these secrets, we recommend that they be either *ephemeral* (generated at the time of request and short-lived, such as with IAM roles) or retrieved from a secrets management service. This reduces the risk of secrets being accidentally stored in static configuration files, environment variables, or source code. Centralizing secrets management also makes it straightforward to move application code between development and production environments without reconfiguring credentials.

For secrets management, use a combination of Parameter Store (a capability of AWS Systems Manager) and AWS Secrets Manager:
+ Use Parameter Store to manage secrets and other parameters that are individual key-value pairs, string-based, short in overall length, and accessed frequently. You use an AWS Key Management Service (AWS KMS) key to encrypt the secret. There is no charge to store parameters in the standard tier of Parameter Store. For more information about parameter tiers, see [Managing parameter tiers](https://docs.aws.amazon.com/systems-manager/latest/userguide/parameter-store-advanced-parameters.html) in the Systems Manager documentation.
+ Use Secrets Manager to store secrets that are in document form (such as multiple, related key-value pairs), that are larger than 4 KB (such as digital certificates), or that would benefit from automated rotation.

You can use Parameter Store APIs to retrieve secrets stored in Secrets Manager. With this approach, you can standardize the code in your application when using a combination of both services.
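Secrets Manager secrets are addressable in Parameter Store under the `/aws/reference/secretsmanager/` path. The idea can be sketched as follows (the secret name `prod/db-password` is a hypothetical example):

```python
# Build the Parameter Store reference name for a Secrets Manager secret,
# so one code path (ssm.get_parameter) can serve both services.
SECRETS_MANAGER_PREFIX = "/aws/reference/secretsmanager/"

def parameter_name(secret_id: str) -> str:
    """Return the Parameter Store name that resolves to a Secrets Manager secret."""
    return SECRETS_MANAGER_PREFIX + secret_id.lstrip("/")

# In application code (requires the AWS SDK and credentials):
#   import boto3
#   ssm = boto3.client("ssm")
#   value = ssm.get_parameter(Name=parameter_name("prod/db-password"),
#                             WithDecryption=True)["Parameter"]["Value"]

print(parameter_name("prod/db-password"))
```

With this convention, application code retrieves every secret through `GetParameter`, regardless of which service stores it.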

**To manage secrets in Parameter Store**

1. Create a symmetric AWS KMS key. For more information, see [Create a symmetric encryption KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/create-symmetric-cmk.html) in the AWS KMS documentation.

1. Create a `SecureString` parameter. For more information, see [Create a SecureString parameter](https://docs.aws.amazon.com/systems-manager/latest/userguide/param-create-cli.html#param-create-cli-securestring) in the Systems Manager documentation. Secrets in Parameter Store use the `SecureString` data type.

1. In your application, retrieve a parameter from Parameter Store by using the AWS SDK for your programming language. For code examples, see [GetParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_GetParameter.html) in the Systems Manager documentation.

**To manage secrets in Secrets Manager**

1. Create a secret. For more information, see [Create a secret](https://docs.aws.amazon.com/secretsmanager/latest/userguide/create_secret.html) in the Secrets Manager documentation.

1. Retrieve secrets from Secrets Manager in code. For more information, see [Get secrets from AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/retrieving-secrets.html) in the Secrets Manager documentation.

For information about improving the availability and latency of secret retrieval, see [Use AWS Secrets Manager client-side caching libraries to improve the availability and latency of using your secrets](https://aws.amazon.com/blogs/security/use-aws-secrets-manager-client-side-caching-libraries-to-improve-the-availability-and-latency-of-using-your-secrets/) on the AWS Security Blog. The client-side caching SDK reduces the number of API calls your application makes to Secrets Manager and can improve secret retrieval performance.
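The caching libraries handle refresh and rotation for you; the core idea can be sketched as a simple time-to-live (TTL) cache (a minimal illustration, not the library's implementation):

```python
import time

class SecretCache:
    """Cache secret values in memory so repeated reads avoid API calls."""

    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch          # callable that retrieves the secret value
        self._ttl = ttl_seconds
        self._cache = {}             # name -> (value, expiry timestamp)

    def get(self, name):
        value, expiry = self._cache.get(name, (None, 0.0))
        if time.monotonic() >= expiry:          # miss or expired: fetch again
            value = self._fetch(name)
            self._cache[name] = (value, time.monotonic() + self._ttl)
        return value

# Demonstration with a stand-in fetch function (a real application would
# call Secrets Manager or Parameter Store here).
calls = []
def fake_fetch(name):
    calls.append(name)
    return "secret-value"

cache = SecretCache(fake_fetch, ttl_seconds=300)
cache.get("db-password")
cache.get("db-password")   # served from cache; no second fetch
print(len(calls))
```

Within the TTL window, only the first read reaches the backing service; the rest are served from memory.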

## Cost considerations


The cost of secrets management depends on which service you use and how your application accesses secrets:
+ For Parameter Store, standard tier parameters are available at no additional charge for values up to 4 KB. The advanced tier applies additional charges for larger parameters or higher throughput.
+ AWS Secrets Manager charges for each secret stored on a monthly basis and charges for each API call made to retrieve secrets. Using the Secrets Manager client-side caching SDK reduces the number of API calls your application makes to Secrets Manager, which can reduce costs.
+ Encrypting secrets with an AWS managed KMS key is available at no additional charge. Customer-managed keys incur a monthly charge for each key and a charge for each API call.

For most early-stage startups, a cost effective starting point is to use Parameter Store with an AWS managed KMS key for frequently accessed secrets and use Secrets Manager for secrets that benefit from automated rotation.

For current pricing, see [AWS Systems Manager pricing](https://aws.amazon.com/systems-manager/pricing/) and [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

# WKLD.04 Prevent application secrets from being exposed


During local development, application secrets can be stored in local configuration or code files and accidentally checked in to source code repositories. If a repository hosted on a public service provider is unsecured, unauthorized users can access it and discover exposed secrets. Use available tools to prevent secrets from being committed to your repository. During code reviews, check for hardcoded credentials, API keys, and other secrets before merging changes.

The following open-source tools can help prevent application secrets from being checked in to source code repositories:
+ [Gitleaks](https://github.com/zricethezav/gitleaks) on GitHub
+ [detect-secrets](https://github.com/Yelp/detect-secrets) on GitHub
+ [git-secrets](https://github.com/awslabs/git-secrets) on GitHub
+ [TruffleHog](https://github.com/trufflesecurity/truffleHog) on GitHub
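These tools are pattern based at their core. The following is a minimal illustration of the idea, scanning only for AWS long-term access key IDs (real tools cover many more secret formats):

```python
import re
import sys

# AKIA followed by 16 uppercase letters or digits is the long-term
# access key ID format; the lookarounds avoid matching inside longer tokens.
ACCESS_KEY_RE = re.compile(r"(?<![A-Z0-9])AKIA[0-9A-Z]{16}(?![A-Z0-9])")

def find_secrets(text):
    """Return any substrings of text that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)

if __name__ == "__main__":
    # Example pre-commit usage: scan files passed on the command line.
    for path in sys.argv[1:]:
        with open(path) as f:
            for match in find_secrets(f.read()):
                print(f"{path}: possible access key {match}")
```

A check like this can run as a Git pre-commit hook and fail the commit when a match is found.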

**Note**  
These tools are open source and available at no charge.

For guidance on detecting and remediating secrets that have already been exposed, see [WKLD.05 Detect and remediate when secrets are exposed](wkld-05.md).

# WKLD.05 Detect and remediate when secrets are exposed


In [WKLD.03 Use ephemeral secrets or a secrets management service](wkld-03.md) and [WKLD.04 Prevent application secrets from being exposed](wkld-04.md), you put measures in place to protect secrets. In this control, you set up tooling to detect secrets that were accidentally committed or exposed, and you take action to revoke or rotate them.

An exposed secret can be exploited and risks unauthorized access to your AWS resources and data. Rotate or revoke it immediately after detection.

Scan code repositories regularly for accidentally committed secrets. Use [Kiro CLI](https://aws.amazon.com/kiro/) or the open-source tools listed in [WKLD.04](wkld-04.md) and integrate the tool into your local development or CI/CD pipeline. If you identify an exposed secret, remediate it immediately. Rotate or revoke the exposed credential to prevent further use, and remove it from source control history.

**To detect exposed secrets using Kiro CLI**

1. Install Kiro CLI in your development environment. For more information, see [Kiro CLI](https://kiro.dev/docs/cli/) in the Kiro documentation.

1. Configure Kiro CLI to scan your code repositories, focusing on high-risk repositories such as production or public-facing code.

1. Schedule regular scans. Consider daily scans for production repositories and weekly scans for development repositories.

1. Review scan results and identify any exposed secrets.

**To remediate exposed secrets**

1. Rotate or revoke the exposed secret immediately in the originating service (for example, regenerate an API key or reset a password).

1. Create a new secret in AWS Secrets Manager or AWS Systems Manager Parameter Store.

1. Update your applications to retrieve the new secret from the secure storage service.

1. Remove the exposed secret from your code repository history by using `git filter-repo`.

The open-source tools listed in [WKLD.04](wkld-04.md) can also detect secrets that are already present in your repository.

**Note**  
Kiro CLI is available at no charge under the Free tier. For more information, see [Kiro pricing](https://kiro.dev/pricing/).

# WKLD.06 Use AWS Systems Manager instead of SSH or RDP


*Public subnets*, which have a default route pointing to an internet gateway, present a greater security risk than *private subnets*, which have no direct route to the internet. You can run Amazon EC2 instances in private subnets and use the Session Manager capability of AWS Systems Manager to access them remotely. Through the AWS Command Line Interface (AWS CLI) or the AWS Management Console, you start a session that connects to the instance through a secure tunnel, which removes the need to manage credentials for Secure Shell (SSH) or Remote Desktop Protocol (RDP).

Use Session Manager instead of running Amazon EC2 instances in public subnets or running bastion hosts.

**To set up Session Manager**

1. Verify that the Amazon EC2 instance uses a supported operating system Amazon Machine Image (AMI), such as Amazon Linux or Ubuntu, with the AWS Systems Manager Agent (SSM Agent) pre-installed.

1. Confirm that the instance has connectivity, either through an internet gateway or through VPC endpoints, to the following endpoints (replacing `<Region>` with the appropriate AWS Region):
   + `ec2messages.<Region>.amazonaws.com`
   + `ssm.<Region>.amazonaws.com`
   + `ssmmessages.<Region>.amazonaws.com`

1. Attach the `AmazonSSMManagedInstanceCore` AWS managed policy to the IAM role associated with your instances.
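The endpoint hostnames follow a predictable per-Region pattern, which this small illustrative helper (not an AWS API) makes concrete:

```python
# Build the three Systems Manager endpoint hostnames that an instance
# must be able to reach in a given Region.
def session_manager_endpoints(region: str) -> list[str]:
    return [f"{service}.{region}.amazonaws.com"
            for service in ("ec2messages", "ssm", "ssmmessages")]

print(session_manager_endpoints("us-east-1"))
```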

For more information, see [Setting up Session Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html) in the *AWS Systems Manager User Guide*.

**To start a session**

1. See [Starting a session](https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#start-sys-console) in the Systems Manager documentation.

**Note**  
Session Manager is available at no additional charge for Amazon EC2 instances. If you use VPC endpoints for Session Manager connectivity, interface endpoints incur an hourly charge and a per-GB data-processing charge. For more information, see [Systems Manager pricing](https://aws.amazon.com/systems-manager/pricing/).

# WKLD.07 Enable CloudTrail data events for Amazon S3 buckets with sensitive data


By default, AWS CloudTrail captures *management events*, which are events that create, modify, or delete resources in your account. Management events do not include read or write operations on individual objects in Amazon S3 buckets. To support detection, auditing, and investigation during a security event, log data events for Amazon S3 buckets that store sensitive or business-critical data.

**To log data events for trails**

1. Open the [CloudTrail console](https://console.aws.amazon.com/cloudtrail/).

1. In the navigation pane, choose **Trails**, and then choose the name of a trail.

1. In **Data events**, choose **Edit**.

1. For **Data event source**, choose **S3**.

1. Under **All current and future S3 buckets**, clear **Read** and **Write** to deselect the default selection.

1. In **Individual bucket selection**, choose the bucket to log data events for. To add more buckets, choose **Add bucket**.

1. Choose whether to log **Read** events (such as `GetObject`), **Write** events (such as `PutObject`), or both.

1. Choose **Update trail**.
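You can configure the same settings programmatically. The following sketch shows the event selector shape used by the CloudTrail `PutEventSelectors` API (the bucket name is a placeholder):

```python
# Event selector that logs all data events for one bucket's objects.
selector = {
    "ReadWriteType": "All",                # log both Read and Write data events
    "IncludeManagementEvents": True,
    "DataResources": [
        {
            "Type": "AWS::S3::Object",
            # The trailing slash scopes logging to all objects in the bucket.
            "Values": ["arn:aws:s3:::amzn-s3-demo-bucket/"],
        }
    ],
}

# With the AWS SDK (requires credentials):
#   boto3.client("cloudtrail").put_event_selectors(
#       TrailName="my-trail", EventSelectors=[selector])

print(selector["DataResources"][0]["Type"])
```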

**Note**  
Additional charges apply for logging CloudTrail data events. For more information, see [AWS CloudTrail pricing](https://aws.amazon.com/cloudtrail/pricing/).

# WKLD.08 Encrypt Amazon EBS volumes


Verify that encryption by default is enabled for Amazon Elastic Block Store (Amazon EBS) volumes in your AWS account. Enabling encryption by default ensures that new Amazon EBS volumes and snapshots are encrypted automatically, removing the need to configure encryption for each volume individually. Encrypted volumes have the same input/output operations per second (IOPS) performance as unencrypted volumes with a minimal effect on latency. For more information, see [Must-know best practices for Amazon EBS encryption](https://aws.amazon.com/blogs/storage/must-know-best-practices-for-amazon-ebs-encryption/) on the AWS Storage Blog.

To enable encryption by default for Amazon EBS volumes, see [Enable encryption by default](https://docs.aws.amazon.com/ebs/latest/userguide/encryption-by-default.html) in the Amazon EBS documentation. Enabling encryption by default does not encrypt existing unencrypted volumes. To encrypt an existing unencrypted Amazon EBS volume, create an encrypted snapshot copy of the volume and then create a new encrypted volume from that snapshot. For step-by-step instructions, see [Create an Amazon EBS volume](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-creating-volume.html) in the Amazon EBS documentation.

**Note**  
Encrypting Amazon EBS volumes with an AWS managed AWS KMS key is available at no additional charge. Customer managed keys incur a monthly charge per key and a charge per API call. For more information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

# WKLD.09 Encrypt Amazon RDS databases


Enable encryption for [Amazon Relational Database Service (Amazon RDS)](https://aws.amazon.com/rds/) databases to protect data at rest. Amazon RDS encrypts data at the underlying volume level and delivers the same IOPS performance as unencrypted volumes with a minimal effect on latency. For more information, see [Overview of encrypting Amazon RDS resources](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html) in the Amazon RDS documentation.

To encrypt a new Amazon RDS database instance, see [Encrypt a database instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Enabling) in the Amazon RDS documentation.

**Note**  
Encryption must be enabled when creating the database. You cannot enable encryption on an existing unencrypted Amazon RDS database instance. If you need to encrypt an existing unencrypted database, you must create a new encrypted database and migrate your data. For more information, see [Copying a DB snapshot for Amazon RDS](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CopySnapshot.html) in the Amazon RDS documentation.

**Note**  
Encrypting Amazon RDS databases with an AWS managed AWS KMS key is available at no additional charge. Customer-managed keys incur a monthly charge per key and a charge per API call. For more information, see [AWS Key Management Service pricing](https://aws.amazon.com/kms/pricing/).

# WKLD.10 Deploy private resources into private subnets


Deploy resources that don't require direct internet access (such as Amazon EC2 instances, databases, queues, caching, or other infrastructure) into a VPC private subnet. Private subnets don't have a route declared in their route table to an attached internet gateway and cannot receive internet traffic. Traffic from a private subnet that is destined for the internet must go through network address translation (NAT). You can use a managed AWS NAT Gateway or an Amazon EC2 instance running NAT processes in a public subnet. For more information about network isolation, see [Infrastructure security in Amazon VPC](https://docs.aws.amazon.com/vpc/latest/userguide/infrastructure-security.html) in the Amazon Virtual Private Cloud (Amazon VPC) documentation.

Use the following practices when creating private resources and subnets:
+ When creating a private subnet, disable **Auto-assign public IPv4 address**.
+ When creating private Amazon EC2 instances, disable **Auto-assign Public IP**. This prevents a public IP address from being assigned if the instance is unintentionally deployed into a public subnet due to misconfiguration.
+ When creating [AWS Fargate](https://aws.amazon.com/fargate/) tasks and services, deploy them into private subnets and set **Assign public IP** to **TURNED OFF**. Fargate tasks deployed in a public subnet can be assigned a public IP address, which exposes them directly to the internet. For more information, see [AWS Fargate task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-networking.html) in the Amazon Elastic Container Service (Amazon ECS) documentation.

When deploying a resource, specify the private subnet in the resource's network configuration.
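For example, when launching an instance programmatically, you can disable public IP assignment explicitly in the network interface specification. The following is a sketch of the EC2 `RunInstances` request shape (the subnet ID is a placeholder):

```python
# Explicitly disabling the public IP guards against accidental exposure
# even if the subnet's auto-assign setting is misconfigured.
network_interface = {
    "DeviceIndex": 0,
    "SubnetId": "subnet-0123456789abcdef0",   # a private subnet
    "AssociatePublicIpAddress": False,        # never assign a public IPv4 address
}

# With the AWS SDK (requires credentials):
#   boto3.client("ec2").run_instances(..., NetworkInterfaces=[network_interface])
```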

**Note**  
Private subnets are available at no additional charge. If your private resources require outbound internet access, AWS NAT Gateway incurs an hourly charge and a per-GB data-processing charge. For more information, see [Amazon VPC pricing](https://aws.amazon.com/vpc/pricing/).

# WKLD.11 Restrict network access with security groups


Use security groups to control traffic to Amazon EC2 instances, containers, Amazon RDS databases, and other supported resources. *Security groups* act as a virtual firewall that can be applied to a group of related resources to consistently define rules for allowing inbound and outbound traffic. In addition to rules based on IP addresses and ports, security groups support rules to allow traffic from resources associated with other security groups. For example, a database security group can have rules to allow only traffic from an application server security group.

Security groups apply to AWS Fargate tasks in the same way they apply to Amazon EC2 instances. When you create an Amazon ECS service or run a Fargate task, you assign one or more security groups to the task's Elastic Network Interface. For more information, see [AWS Fargate task networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/fargate-task-networking.html) in the Amazon Elastic Container Service documentation.

By default, security groups allow all outbound traffic but don't allow inbound traffic. You can remove the outbound traffic rule, or configure additional rules to restrict outbound traffic and allow inbound traffic. If the security group has no outbound rules, outbound traffic from your instance is blocked. For more information, see [Control traffic to resources using security groups](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-security-groups.html) in the Amazon VPC documentation.

The following example shows three security groups that control traffic from an Application Load Balancer to containers (Amazon EC2 instances or Fargate tasks) that connect to an Amazon RDS for PostgreSQL database.


| Security group | Inbound rules | Outbound rules |
| --- | --- | --- |
| Application Load Balancer security group | **Description:** Allow HTTPS traffic from anywhere<br>**Type:** HTTPS<br>**Source:** Anywhere-IPv4 (0.0.0.0/0) | **Description:** Allow all traffic to anywhere<br>**Type:** All traffic<br>**Destination:** Anywhere-IPv4 (0.0.0.0/0) |
| Container security group (Amazon EC2 or Fargate task) | **Description:** Allow HTTP traffic from the Application Load Balancer<br>**Type:** HTTP<br>**Source:** Application Load Balancer security group | **Description:** Allow all traffic to anywhere<br>**Type:** All traffic<br>**Destination:** Anywhere-IPv4 (0.0.0.0/0) |
| Amazon RDS database security group | **Description:** Allow PostgreSQL traffic from the containers<br>**Type:** PostgreSQL<br>**Source:** Container security group | None |
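The database rule in this example can also be expressed programmatically. The following sketch uses the request shape of the EC2 `AuthorizeSecurityGroupIngress` API (both security group IDs are placeholders; PostgreSQL listens on TCP port 5432):

```python
# Allow PostgreSQL traffic into the database security group, but only
# from members of the container security group.
db_ingress = {
    "GroupId": "sg-0123456789abcdef0",        # database security group
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 5432,
            "ToPort": 5432,
            # Reference the container security group instead of an IP range.
            "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],
        }
    ],
}

# With the AWS SDK (requires credentials):
#   boto3.client("ec2").authorize_security_group_ingress(**db_ingress)
```

Referencing a security group rather than a CIDR range keeps the rule correct even as container IP addresses change.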

**Note**  
Security groups are available at no additional charge.

# WKLD.12 Use VPC endpoints to access supported AWS and external services


In VPCs, resources that need to access AWS or other external services require a route either to the internet (`0.0.0.0/0`) or to the public IP address of the target service. Use VPC endpoints to enable a private IP route from your VPC to supported AWS or other services, which removes the need for an internet gateway, NAT device, virtual private network (VPN) connection, or AWS Direct Connect connection.

You can attach policies and security groups to VPC endpoints to control access to a service. For example, you can write a VPC endpoint policy for [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) to allow only item-level actions and prevent table-level actions for resources in the VPC, regardless of their own permission policy. You can also write an Amazon S3 bucket policy to allow only requests originating from a specific VPC endpoint, denying other external access. A VPC endpoint can also have a security group rule that, for example, restricts access to Amazon EC2 instances associated with an application-specific security group, such as the business-logic tier of a web application.

VPC endpoints come in two types: *interface* endpoints and *gateway* endpoints. You access most services by using a VPC interface endpoint. DynamoDB is accessed using a gateway endpoint. Amazon S3 supports both interface and gateway endpoints. We recommend gateway endpoints for workloads that are contained within a single AWS account and Region. Gateway endpoints come at no additional charge. We recommend interface endpoints when you need more extensible access, such as to an Amazon S3 bucket from other VPCs, from on-premises networks, or from different AWS Regions. 

For more information about using VPC endpoints, see the following resources:
+ [Choosing your VPC endpoint strategy for Amazon S3](https://aws.amazon.com/blogs/architecture/choosing-your-vpc-endpoint-strategy-for-amazon-s3/) on the AWS Architecture Blog, for help selecting between gateway and interface endpoints for Amazon S3
+ [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html#create-interface-endpoint) in the Amazon VPC documentation
+ [Gateway endpoints](https://docs.aws.amazon.com/vpc/latest/privatelink/gateway-endpoints.html) in the Amazon VPC documentation
+ [Restricting access to a specific VPC](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example-bucket-policies-vpc-endpoint.html#example-bucket-policies-restrict-access-vpc) in the Amazon S3 documentation, for example bucket policies that restrict access to a specific VPC or VPC endpoint
+ [Endpoint policies for DynamoDB](https://docs.aws.amazon.com/vpc/latest/privatelink/vpc-endpoints-ddb.html#vpc-endpoints-policies-ddb) in the Amazon VPC documentation, for example endpoint policies that restrict DynamoDB actions
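As an illustration, a DynamoDB endpoint policy that allows item-level actions but no table-level actions might look like the following sketch (the Region, account ID, and table name are placeholders to replace with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowItemLevelActionsOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/table-name"
    }
  ]
}
```

Because the policy allows only these item-level actions, table-level actions such as `dynamodb:DeleteTable` are denied through the endpoint, regardless of the caller's own permissions.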

**Note**  
Gateway endpoints are available at no additional charge. Interface endpoints incur an hourly charge and a per-GB data-processing charge. These charges are lower than the equivalent charges for routing traffic through AWS NAT Gateway. For more information, see [Amazon VPC pricing](https://aws.amazon.com/vpc/pricing/).

# WKLD.13 Require HTTPS for public web endpoints


Require HTTPS so that your endpoints can use certificates to prove their identity and so that traffic between your endpoint and clients is encrypted. For public websites, HTTPS also improves search engine ranking.

Many AWS services provide public web endpoints for your resources, such as AWS Elastic Beanstalk, Amazon CloudFront, Amazon API Gateway, Elastic Load Balancing, and AWS Amplify. For instructions about how to require HTTPS for each of these services, see the following:
+ [Configuring HTTPS for your Elastic Beanstalk environment](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html) in the AWS Elastic Beanstalk documentation
+ [Requiring HTTPS for communication between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https.html) in the Amazon CloudFront documentation
+ [How can I use an Application Load Balancer to redirect HTTP requests to HTTPS?](https://repost.aws/knowledge-center/elb-redirect-http-to-https-using-alb) on AWS re:Post
+ [How do I redirect HTTP requests to HTTPS on a Classic Load Balancer?](https://repost.aws/knowledge-center/redirect-http-https-elb) on AWS re:Post (Classic Load Balancer is a legacy option; for new deployments, we recommend an Application Load Balancer)
+ [Connecting a custom domain](https://docs.aws.amazon.com/amplify/latest/userguide/custom-domains.html) in the AWS Amplify documentation

Static websites hosted on Amazon S3 do not support HTTPS. To require HTTPS for these websites, you can use CloudFront. When you use CloudFront to serve content from an Amazon S3 bucket, you don't need to enable public access on the bucket. Use an origin access control (OAC) to allow CloudFront to access the private bucket.

For instructions on setting up CloudFront to serve a static website hosted on Amazon S3, see [How do I use CloudFront to serve a static website hosted on Amazon S3?](https://repost.aws/knowledge-center/cloudfront-serve-static-website) on AWS re:Post.

**To configure HTTPS for a static website hosted on Amazon S3**

1. If you are configuring access to a public Amazon S3 bucket, require HTTPS between viewers and CloudFront. For more information, see [Require HTTPS for communication between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-viewers-to-cloudfront.html) in the Amazon CloudFront documentation.

1. If you are configuring access to a private Amazon S3 bucket, restrict access to Amazon S3 content by using an origin access control (OAC). For more information, see [Restricting access to an Amazon S3 origin](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html) in the Amazon CloudFront documentation.

Configure HTTPS endpoints to require modern Transport Layer Security (TLS) protocols and ciphers, unless compatibility with older protocols is needed. For example, use the `ELBSecurityPolicy-TLS13-1-0-PQ-2025-09` policy, or the most recent policy available, for Application Load Balancer HTTPS listeners. The most current policies require TLS 1.3 at a minimum, forward secrecy, and strong ciphers that are compatible with modern web browsers.
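As a sketch, an Application Load Balancer HTTPS listener that pins a specific security policy could be defined with the following parameters (the Elastic Load Balancing `CreateListener` request shape; all ARNs are placeholders):

```python
# HTTPS listener that terminates TLS with an ACM certificate and an
# explicitly chosen security policy, then forwards to a target group.
https_listener = {
    "LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                       "loadbalancer/app/my-alb/0123456789abcdef",
    "Protocol": "HTTPS",
    "Port": 443,
    "SslPolicy": "ELBSecurityPolicy-TLS13-1-0-PQ-2025-09",
    "Certificates": [
        {"CertificateArn": "arn:aws:acm:us-east-1:123456789012:"
                           "certificate/11111111-2222-3333-4444-555555555555"}
    ],
    "DefaultActions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                           "targetgroup/web/0123456789abcdef"}
    ],
}

# With the AWS SDK (requires credentials):
#   boto3.client("elbv2").create_listener(**https_listener)
```

Pinning the policy name in your deployment definition makes the TLS configuration explicit and reviewable, rather than relying on a service default.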

For more information about the available security policies for HTTPS public endpoints, see the following:
+ [Predefined SSL security policies for Classic Load Balancers](https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-security-policy-table.html) in the Elastic Load Balancing documentation
+ [Security policies for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-https-listener.html#describe-ssl-policies) in the Elastic Load Balancing documentation
+ [Supported protocols and ciphers between viewers and CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/secure-connections-supported-viewer-protocols-ciphers.html) in the Amazon CloudFront documentation
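As a sketch of the guidance above, the following hypothetical AWS CDK stack pins an Application Load Balancer HTTPS listener to a TLS 1.3-capable security policy. The `SslPolicy.RECOMMENDED_TLS` enum, the certificate ARN, and the fixed-response default action are illustrative assumptions; substitute your own policy, certificate, and target groups.

```typescript
import * as cdk from "aws-cdk-lib";
import {
  aws_ec2 as ec2,
  aws_elasticloadbalancingv2 as elbv2,
  aws_certificatemanager as acm,
} from "aws-cdk-lib";
import { Construct } from "constructs";

export class HttpsListenerStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });

    const lb = new elbv2.ApplicationLoadBalancer(this, "PublicAlb", {
      vpc,
      internetFacing: true, // placed in the VPC's public subnets
    });

    // Placeholder ARN: supply your own ACM certificate.
    const cert = acm.Certificate.fromCertificateArn(
      this,
      "Cert",
      "arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"
    );

    // Require a modern TLS security policy on the HTTPS listener.
    lb.addListener("Https", {
      port: 443,
      certificates: [cert],
      sslPolicy: elbv2.SslPolicy.RECOMMENDED_TLS,
      defaultAction: elbv2.ListenerAction.fixedResponse(200, {
        contentType: "text/plain",
        messageBody: "OK", // replace with forwarding to your target group
      }),
    });
  }
}
```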

# WKLD.14 Use edge protection services for public endpoints


Rather than serve traffic directly from compute services such as Amazon EC2 instances or containers, use an edge protection service. An edge protection service sits between internet traffic and your backend resources, filtering unwanted requests, enforcing encryption, and applying rules such as load balancing before traffic reaches your workloads.

AWS services that can provide public endpoint protection include AWS WAF, Amazon CloudFront, Elastic Load Balancing, Amazon API Gateway, and AWS Amplify Hosting. Deploy VPC-based services, such as Elastic Load Balancing, in a public subnet to receive internet traffic and forward it to your workloads running in a private subnet.

Through AWS Shield Standard, Amazon CloudFront, Amazon API Gateway, and Amazon Route 53 provide protection from Layer 3 and 4 distributed denial of service (DDoS) attacks at no additional charge. AWS WAF provides protection against Layer 7 attacks and incurs additional charges.
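As an illustration of Layer 7 protection, the following hedged AWS CDK sketch creates an AWS WAF web ACL that applies the AWS-managed common rule set. The `REGIONAL` scope shown suits an Application Load Balancer or API Gateway stage; metric and rule names are illustrative.

```typescript
import * as cdk from "aws-cdk-lib";
import { aws_wafv2 as wafv2 } from "aws-cdk-lib";
import { Construct } from "constructs";

export class EdgeProtectionStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // REGIONAL scope protects an ALB or API Gateway stage; use
    // scope: "CLOUDFRONT" (deployed in us-east-1) for distributions.
    new wafv2.CfnWebACL(this, "WebAcl", {
      defaultAction: { allow: {} },
      scope: "REGIONAL",
      visibilityConfig: {
        cloudWatchMetricsEnabled: true,
        metricName: "webAcl",
        sampledRequestsEnabled: true,
      },
      rules: [
        {
          name: "AWSManagedCommonRules",
          priority: 0,
          overrideAction: { none: {} },
          statement: {
            managedRuleGroupStatement: {
              vendorName: "AWS",
              name: "AWSManagedRulesCommonRuleSet",
            },
          },
          visibilityConfig: {
            cloudWatchMetricsEnabled: true,
            metricName: "commonRules",
            sampledRequestsEnabled: true,
          },
        },
      ],
    });
  }
}
```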

For instructions on getting started with each of these services, see the following:
+ [Getting started with AWS WAF](https://aws.amazon.com/waf/getting-started/)
+ [Getting started with Amazon CloudFront](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html)
+ [Getting started with Elastic Load Balancing](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/load-balancer-getting-started.html)
+ [Getting started with Amazon API Gateway](https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started.html)
+ [Getting started with AWS Amplify Hosting](https://docs.aws.amazon.com/amplify/latest/userguide/getting-started.html)

# WKLD.15 Define security controls in templates and deploy them by using CI/CD practices


*Infrastructure as code (IaC)* is the practice of defining your AWS resources and configurations in templates and code that you deploy by using continuous integration and continuous delivery (CI/CD) pipelines, the same pipelines used to deploy software applications. IaC tools, such as [AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html) and the [AWS Cloud Development Kit (AWS CDK)](https://docs.aws.amazon.com/cdk/v2/guide/home.html), support IAM identity-based and resource-based policies and integrate with AWS services, such as AWS WAF and Amazon VPC. Define your IAM policies, resource-based policies, and security service configurations as IaC templates. Commit the templates to a source code repository and deploy them by using CI/CD pipelines.

Commit application permission policies with application code in the same repository. Manage general resource policies and security service configurations in separate repositories and deployment pipelines. This separation reduces the risk of a single compromised repository affecting both application code and security configurations.

The following TypeScript AWS CDK stack demonstrates three foundational security controls from this document: an Amazon S3 bucket with `BlockPublicAccess` and server-side encryption ([ACCT.08](acct-08.md)), a CloudTrail trail with log file validation ([ACCT.07](acct-07.md)), and IAM Access Analyzer ([ACCT.11](acct-11.md)).

```
import * as cdk from "aws-cdk-lib";
import {
  aws_s3 as s3,
  aws_cloudtrail as cloudtrail,
  aws_accessanalyzer as accessanalyzer,
  aws_iam as iam,
  RemovalPolicy,
} from "aws-cdk-lib";
import { Construct } from "constructs";

export class SecurityBaselineStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const accountId = cdk.Stack.of(this).account;
    const region = cdk.Stack.of(this).region;
    // Derive the partition instead of hardcoding "aws" so the ARN is
    // also valid in other partitions (for example, GovCloud or China).
    const partition = cdk.Stack.of(this).partition;
    const trailName = "audit-trail";
    const trailArn = `arn:${partition}:cloudtrail:${region}:${accountId}:trail/${trailName}`;

    // -------------------------------------------------------
    // Tagging — applied to every resource in the stack
    // -------------------------------------------------------

    cdk.Tags.of(this).add("Environment", "production");
    cdk.Tags.of(this).add("Team", "platform");
    cdk.Tags.of(this).add("ManagedBy", "cdk");

    // -------------------------------------------------------
    // ACCT.08 — Block Public Access on S3
    // WKLD.08 — Encrypt data at rest (SSE-S3, AWS-managed)
    // -------------------------------------------------------
    const loggingBucket = new s3.Bucket(this, "AccessLogsBucket", {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      versioned: true,
      accessControl: s3.BucketAccessControl.LOG_DELIVERY_WRITE,
      lifecycleRules: [
        {
          id: "ArchiveAfter90Days",
          transitions: [
            {
              storageClass: s3.StorageClass.GLACIER,
              transitionAfter: cdk.Duration.days(90),
            },
          ],
        },
      ],
      removalPolicy: RemovalPolicy.RETAIN,
    });

    const auditLogsBucket = new s3.Bucket(this, "AuditLogsBucket", {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      versioned: true,
      serverAccessLogsBucket: loggingBucket,
      serverAccessLogsPrefix: "access-logs",
      lifecycleRules: [
        {
          id: "ArchiveAfter90Days",
          transitions: [
            {
              storageClass: s3.StorageClass.GLACIER,
              transitionAfter: cdk.Duration.days(90),
            },
          ],
        },
      ],
      removalPolicy: RemovalPolicy.RETAIN,
    });

    // -------------------------------------------------------
    // Bucket policy — CloudTrail access + account boundary
    //
    // Per AWS docs, CloudTrail needs two permissions:
    //   1. GetBucketAcl to verify bucket ownership
    //   2. PutObject to write log files
    // Both are scoped to this specific trail via aws:SourceArn.
    // -------------------------------------------------------

    auditLogsBucket.addToResourcePolicy(
      new iam.PolicyStatement({
        sid: "AWSCloudTrailAclCheck",
        effect: iam.Effect.ALLOW,
        principals: [new iam.ServicePrincipal("cloudtrail.amazonaws.com")],
        actions: ["s3:GetBucketAcl"],
        resources: [auditLogsBucket.bucketArn],
        conditions: {
          StringEquals: {
            "aws:SourceArn": trailArn,
          },
        },
      })
    );

    auditLogsBucket.addToResourcePolicy(
      new iam.PolicyStatement({
        sid: "AWSCloudTrailWrite",
        effect: iam.Effect.ALLOW,
        principals: [new iam.ServicePrincipal("cloudtrail.amazonaws.com")],
        actions: ["s3:PutObject"],
        resources: [
          `${auditLogsBucket.bucketArn}/cloudtrail/AWSLogs/${accountId}/*`,
        ],
        conditions: {
          StringEquals: {
            "s3:x-amz-acl": "bucket-owner-full-control",
            "aws:SourceArn": trailArn,
          },
        },
      })
    );

    auditLogsBucket.addToResourcePolicy(
      new iam.PolicyStatement({
        sid: "DenyExternalAccess",
        effect: iam.Effect.DENY,
        principals: [new iam.AnyPrincipal()],
        actions: ["s3:*"],
        resources: [
          auditLogsBucket.bucketArn,
          `${auditLogsBucket.bucketArn}/*`,
        ],
        conditions: {
          StringNotEquals: {
            "aws:PrincipalAccount": accountId,
          },
          Bool: {
            "aws:PrincipalIsAWSService": "false",
          },
        },
      })
    );

    // -------------------------------------------------------
    // ACCT.07 — Deliver CloudTrail logs to a protected S3 bucket
    // -------------------------------------------------------

    const trail = new cloudtrail.Trail(this, "AuditTrail", {
      trailName: trailName,
      bucket: auditLogsBucket,
      s3KeyPrefix: "cloudtrail",
      isMultiRegionTrail: true,
      isOrganizationTrail: false,
      includeGlobalServiceEvents: true,
      enableFileValidation: true,

      // ACCT.07: Captures both read and write management events.
      // For production environments, consider filtering high-volume events
      // per the guidance in ACCT.07.
      managementEvents: cloudtrail.ReadWriteType.ALL,
    });

    // -------------------------------------------------------
    // ACCT.11 — Enable IAM Access Analyzer (account scope)
    // -------------------------------------------------------

    const analyzer = new accessanalyzer.CfnAnalyzer(this, "AccessAnalyzer", {
      analyzerName: "account-analyzer",
      type: "ACCOUNT",
    });

    // -------------------------------------------------------
    // Outputs
    // -------------------------------------------------------

    new cdk.CfnOutput(this, "AuditLogsBucketArn", {
      description: "ARN of the audit logs S3 bucket",
      value: auditLogsBucket.bucketArn,
    });

    new cdk.CfnOutput(this, "AuditTrailArn", {
      description: "ARN of the CloudTrail trail",
      value: trail.trailArn,
    });

    new cdk.CfnOutput(this, "AccessAnalyzerArn", {
      description: "ARN of the IAM Access Analyzer",
      value: analyzer.attrArn,
    });
  }
}
```
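Because security controls defined as IaC are code, you can also assert them in your CI/CD pipeline before deployment. The following hedged sketch uses the `aws-cdk-lib/assertions` module to verify two of the controls in the stack above; the import path to the stack is illustrative, and the checks throw on failure under any test runner.

```typescript
import * as cdk from "aws-cdk-lib";
import { Template } from "aws-cdk-lib/assertions";
// Path is illustrative; import the stack from wherever you define it.
import { SecurityBaselineStack } from "../lib/security-baseline-stack";

const stack = new SecurityBaselineStack(new cdk.App(), "TestStack");
const template = Template.fromStack(stack);

// ACCT.08: every bucket in the stack must block public access.
template.allResourcesProperties("AWS::S3::Bucket", {
  PublicAccessBlockConfiguration: {
    BlockPublicAcls: true,
    BlockPublicPolicy: true,
    IgnorePublicAcls: true,
    RestrictPublicBuckets: true,
  },
});

// ACCT.07: the trail must validate its log files.
template.hasResourceProperties("AWS::CloudTrail::Trail", {
  EnableLogFileValidation: true,
});
```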

For more information about getting started with IaC on AWS, see [Getting started with the AWS CDK](https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html) in the AWS CDK documentation.

**Note**  
AWS CloudFormation is available at no additional charge, and the AWS CDK is open source and also available at no charge. You pay only for the AWS resources that your stacks create.