

# Architect your application for Amazon ECS

You architect your application by creating a task definition for your application. The task definition contains the parameters that define information about the application, including:
+ The capacity to use, which determines the infrastructure that your tasks are hosted on.

  When you use the EC2 capacity provider, you also choose the instance type. When you use the Amazon ECS Managed Instances capacity provider, you can provide instance requirements for Amazon ECS to manage compute capacity. For some instance types, such as GPU-based types, you need to set specific parameters. For more information, see [Amazon ECS task definition use cases](use-cases.md).
+ The container image, which holds your application code and all the dependencies that your application code requires to run.
+ The networking mode to use for the containers in your task.

  The networking mode determines how your task communicates over the network.

  For tasks that run on EC2 instances and Amazon ECS Managed Instances, there are multiple options, but we recommend that you use the `awsvpc` network mode. The `awsvpc` network mode simplifies container networking by giving you more control over how your applications communicate with each other and other services within your VPCs. 

  For tasks that run on Fargate, you must use the `awsvpc` network mode.
+ The logging configuration to use for your tasks.
+ Any data volumes that are used with the containers in the task.

For a complete list of task definition parameters, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).
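As a minimal sketch of how the parameters above fit together in a task definition for Fargate (the family name, container name, account ID, Region, and log group are all hypothetical):

```json
{
  "family": "web-app",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/web-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```

The `cpu` and `memory` values at the task level and the logging configuration correspond to the capacity and logging parameters described above; data volumes would be added under a `volumes` key.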

Use the following guidelines when you create your task definitions:
+ Use each task definition family for only one business purpose.

  If you group multiple types of application containers together in the same task definition, you can't independently scale those containers. For example, a website and an API typically require different scaling patterns. As traffic increases, you might need a different number of web containers than API containers. If these two containers are deployed in the same task definition, every task runs the same number of web containers and API containers.
+ Match each application version with a task definition revision within a task definition family.

  Within a task definition family, each task definition revision represents a point-in-time snapshot of the settings for a particular container image. This is similar to how the container is a snapshot of all the components needed to run a particular version of your application code.

  Create a one-to-one mapping between a version of application code, a container image tag, and a task definition revision. A typical release process involves a git commit that gets turned into a container image that's tagged with the git commit SHA. Then, that container image tag gets its own Amazon ECS task definition revision. Last, the Amazon ECS service is updated to deploy the new task definition revision.
+ Use different IAM roles for each task definition family.

  Define each task definition with its own IAM role. Implement this practice along with providing each business component its own task definition family. By implementing both of these best practices, you can limit how much access each service has to resources in your AWS account. For example, you can give your authentication service access to connect to your passwords database. At the same time, you can ensure that only your order service has access to the credit card payment information.
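Combining these guidelines, a single revision in a hypothetical `order-service` family might look like the following fragment, where the image tag is the git commit SHA and the IAM role is scoped to this one family (all names and the account ID are invented for illustration):

```json
{
  "family": "order-service",
  "taskRoleArn": "arn:aws:iam::123456789012:role/order-service-task-role",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "order-service",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/order-service:a1b2c3d"
    }
  ]
}
```

Registering a new revision for each commit SHA preserves the one-to-one mapping between application version, image tag, and task definition revision.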

# Best practices for Amazon ECS task sizes

Your container and task sizes are both essential for scaling and capacity planning. In Amazon ECS, CPU and memory are the two resource metrics used for capacity. CPU is measured in CPU units, where 1024 units equal 1 vCPU. Memory is measured in mebibytes (MiB). In your task definition, you can configure resource reservations and limits.

When you configure a reservation, you're setting the minimum amount of resources that a task requires. Your task receives at least the amount of resources requested. Your application might be able to use more CPU or memory than the reservation that you declare. However, this is subject to any limits that you also declared. Using more than the reservation amount is known as bursting. In Amazon ECS, reservations are guaranteed. For example, if you use Amazon EC2 instances to provide capacity, Amazon ECS doesn't place a task on an instance where the reservation can't be fulfilled.

A limit is the maximum amount of CPU units or memory that your container or task can use. Any attempt to use more CPU than this limit results in throttling. Any attempt to use more memory than this limit results in your container being stopped.

Choosing these values can be challenging, because the values that are best suited for your application depend greatly on its resource requirements. Load testing your application is the key to successful resource requirement planning and to better understanding your application's requirements.
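The unit arithmetic above can be sketched as follows; the instance size and reservation values are illustrative only:

```python
# Amazon ECS measures CPU in units of 1/1024 vCPU: 1024 units = 1 vCPU.
VCPU_UNITS = 1024

def vcpus_to_units(vcpus):
    """Convert a vCPU count to ECS CPU units."""
    return int(vcpus * VCPU_UNITS)

def tasks_that_fit(instance_vcpus, task_cpu_units):
    """How many tasks an instance can host based on CPU reservations alone.
    Amazon ECS won't place a task on an instance where the reservation
    can't be fulfilled."""
    return (instance_vcpus * VCPU_UNITS) // task_cpu_units

# A 1/4-vCPU reservation is 256 CPU units.
assert vcpus_to_units(0.25) == 256
# On a 2-vCPU instance, eight such tasks fit by reservation.
assert tasks_that_fit(2, 256) == 8
```

Remember that reservations are a floor, not a ceiling: a task can burst above its reservation up to any limit you declare.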

## Stateless applications


For stateless applications that scale horizontally, such as an application behind a load balancer, we recommend that you first determine the amount of memory that your application consumes when it serves requests. To do this, you can use traditional tools such as `ps` or `top`, or monitoring solutions such as CloudWatch Container Insights.

When determining a CPU reservation, consider how you want to scale your application to meet your business requirements. You can use smaller CPU reservations, such as 256 CPU units (or 1/4 vCPU), to scale out in a fine-grained way that minimizes cost. But, they might not scale fast enough to meet significant spikes in demand. You can use larger CPU reservations to scale in and out more quickly and therefore match demand spikes more quickly. However, larger CPU reservations are more costly.

## Other applications


For applications that don't scale horizontally, such as singleton workers or database servers, available capacity and cost represent your most important considerations. You should choose the amount of memory and CPU based on what load testing indicates you need to serve traffic to meet your service-level objective. Amazon ECS ensures that the application is placed on a host that has adequate capacity.

# Amazon ECS task networking for Amazon ECS Managed Instances

The networking behavior of Amazon ECS tasks running on Amazon ECS Managed Instances is determined by the *network mode* specified in the task definition. You must specify a network mode; you can't run tasks on Amazon ECS Managed Instances with a task definition that doesn't specify one. Amazon ECS Managed Instances supports the following network modes, ensuring backward compatibility for workloads migrating from Fargate or Amazon ECS on Amazon EC2:


| Network mode | Description | 
| --- | --- | 
|  `awsvpc`  |  Each task receives its own elastic network interface (ENI) and private IPv4 address. This provides the same networking properties as Amazon EC2 instances and is compatible with traditional Fargate tasks. Uses ENI trunking for high task density.  | 
|  `host`  |  Tasks share the host's network namespace directly. Container networking is tied to the underlying host instance.  | 

## Using a VPC in IPv6-only mode


In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create subnets that include only an IPv6 CIDR block. For more information, see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*. You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dualstack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind an Application Load Balancer or a Network Load Balancer, you can create a new dualstack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

   For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dualstack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.

# Allocate a network interface for tasks on Amazon ECS Managed Instances

Using the `awsvpc` network mode in Amazon ECS Managed Instances simplifies container networking by giving you more control over how your applications communicate with each other and with other services within your VPCs. The `awsvpc` network mode also provides greater security for your containers by allowing you to use security groups and network monitoring tools at a more granular level within your tasks.

By default, every Amazon ECS Managed Instances instance launches with a trunk elastic network interface (ENI) attached as its primary ENI when the instance type supports trunking. For more information about instance types that support ENI trunking, see [Supported instances for increased Amazon ECS container network interfaces](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/eni-trunking-supported-instance-types.html).

**Note**  
When the chosen instance type doesn't support trunk ENIs, the instance is launched with a regular ENI.

Each task that runs on the instance receives its own ENI attached to the trunk ENI, with a primary private IP address. If your VPC is configured for dual-stack mode and you use a subnet with an IPv6 CIDR block, the ENI also receives an IPv6 address. When you use a public subnet, you can optionally assign a public IP address to the instance's primary ENI by enabling IPv4 public addressing for the subnet. For more information, see [Modify the IP addressing attributes of your subnet](https://docs.aws.amazon.com//vpc/latest/userguide/subnet-public-ip.html) in the *Amazon VPC User Guide*. A task can only have one ENI that's associated with it at a time.

Containers that belong to the same task can also communicate over the `localhost` interface. For more information about VPCs and subnets, see [How Amazon VPC works](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html) in the *Amazon VPC User Guide*.

The following operations use the primary ENI attached to the instance:
+ **Image downloads** - Container images are downloaded from Amazon ECR through the primary ENI.
+ **Secrets retrieval** - Secrets Manager secrets and other credentials are retrieved through the primary ENI.
+ **Log uploads** - Logs are uploaded to CloudWatch through the primary ENI.
+ **Environment file downloads** - Environment files are downloaded through the primary ENI.

Application traffic flows through the task ENI.

Because each task gets its own ENI, you can use networking features such as VPC Flow Logs, which you can use to monitor traffic to and from your tasks. For more information, see [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) in the *Amazon VPC User Guide*.

You can also take advantage of AWS PrivateLink. You can configure a VPC interface endpoint so that you can access Amazon ECS APIs through private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and Amazon ECS to the Amazon network. You don't need an internet gateway, a NAT device, or a virtual private gateway. For more information, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/vpc-endpoints.html).

The `awsvpc` network mode also allows you to leverage Amazon VPC Traffic Mirroring for security and monitoring of network traffic when using instance types that don't have trunk ENIs attached. For more information, see [What is Traffic Mirroring?](https://docs.aws.amazon.com/vpc/latest/mirroring/what-is-traffic-mirroring.html) in the *Amazon VPC Traffic Mirroring Guide*.

## Considerations for `awsvpc` mode

+ Tasks require the Amazon ECS service-linked role for ENI management. This role is created automatically when you create a cluster or service.
+ Task ENIs are managed by Amazon ECS and cannot be manually detached or modified.
+ You can't assign a public IP address to the task ENI. The `assignPublicIp` parameter isn't supported when you run a standalone task (`RunTask`) or create or update a service (`CreateService`/`UpdateService`).
+ When you configure `awsvpc` networking at the task level, you must use the same VPC that you specified as part of the Amazon ECS Managed Instances capacity provider's launch template. You can use different subnets and security groups from those specified in the launch template.
+ For `awsvpc` network mode tasks, use `ip` target type when configuring load balancer target groups. Amazon ECS automatically manages target group registration for supported networking modes.
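The task-level networking described in the considerations above is supplied through the `networkConfiguration` parameter when you run a task or create a service. A minimal sketch, with hypothetical subnet and security group IDs (the subnet must be in the same VPC as the capacity provider's launch template):

```json
{
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0abc1234def567890"],
      "securityGroups": ["sg-0abc1234def567890"]
    }
  }
}
```

Note that `assignPublicIp` is deliberately absent here, because it isn't supported for tasks on Amazon ECS Managed Instances.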

## Using a VPC in dual-stack mode


When you use a VPC in dual-stack mode, your tasks can communicate over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other. Therefore, you must configure routing and security in your VPC separately for IPv4 and IPv6. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If you configured your VPC with an internet gateway or an egress-only internet gateway, you can use your VPC in dual-stack mode. Tasks that are assigned an IPv6 address can then access the internet through the internet gateway or egress-only internet gateway. NAT gateways are optional. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [Egress-only internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) in the *Amazon VPC User Guide*.

Amazon ECS tasks are assigned an IPv6 address if the following conditions are met:
+ The Amazon ECS Managed Instances instance that hosts the task is using version `1.45.0` or later of the container agent. For information about how to check the agent version your instance is using, and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ The `dualStackIPv6` account setting is enabled. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your task is using the `awsvpc` network mode.
+ Your VPC and subnet are configured for IPv6. The configuration includes the network interfaces that are created in the specified subnet. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) and [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the *Amazon VPC User Guide*.

# Host network mode

In `host` mode, tasks share the host's network namespace directly. The container's networking configuration is tied to the underlying Amazon ECS Managed Instances host instance that you specify using the `networkConfiguration` parameter when you create an Amazon ECS Managed Instances capacity provider.

There are significant drawbacks to using this network mode. You can’t run more than a single instantiation of a task on each host. This is because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when it's using `host` network mode. For example, if an application needs to listen on a particular port number, you can't remap the port number directly. Instead, you must manage any port conflicts through changing the application configuration.

There are also security implications when using the `host` network mode. This mode allows containers to impersonate the host, and it allows containers to connect to private loopback network services on the host.

Use `host` mode only when you need direct access to host networking or when you migrate applications that require host-level network access.
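As a minimal sketch (all names are hypothetical), a `host` mode task definition fragment looks like the following; the application binds directly to port 9100 on the instance itself, and if a `hostPort` is specified it must equal the `containerPort`:

```json
{
  "family": "metrics-agent",
  "networkMode": "host",
  "containerDefinitions": [
    {
      "name": "agent",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/metrics-agent:1.0",
      "portMappings": [
        { "containerPort": 9100, "hostPort": 9100 }
      ]
    }
  ]
}
```

Because port 9100 can be bound only once per instance, only one copy of this task fits on each host.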

# Amazon ECS task networking options for EC2

The networking behavior of Amazon ECS tasks that are hosted on Amazon EC2 instances depends on the *network mode* that's defined in the task definition. We recommend that you use the `awsvpc` network mode unless you have a specific need to use a different network mode.

The following are the available network modes.


| Network mode | Linux containers on EC2 | Windows containers on EC2 | Description | 
| --- | --- | --- | --- | 
|  `awsvpc`  |  Yes  |  Yes  |  The task is allocated its own elastic network interface (ENI) and a primary private IPv4 or IPv6 address. This gives the task the same networking properties as Amazon EC2 instances.  | 
|  `bridge`  |  Yes  |  No  |  The task uses Docker's built-in virtual network on Linux, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Linux uses the `bridge` Docker network driver. This is the default network mode on Linux if a network mode isn't specified in the task definition.  | 
|  `host`  |  Yes  |  No  |  The task uses the host's network, which bypasses Docker's built-in virtual network by mapping container ports directly to the ENI of the Amazon EC2 instance that hosts the task. Dynamic port mappings can’t be used in this network mode. A container in a task definition that uses this mode must specify a specific `hostPort` number. A port number on a host can’t be used by multiple tasks. As a result, you can’t run multiple tasks of the same task definition on a single Amazon EC2 instance.  | 
|  `none`  |  Yes  |  No  |  The task has no external network connectivity.  | 
|  `default`  |  No  |  Yes  |  The task uses Docker's built-in virtual network on Windows, which runs inside each Amazon EC2 instance that hosts the task. The built-in virtual network on Windows uses the `nat` Docker network driver. This is the default network mode on Windows if a network mode isn't specified in the task definition.  | 
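To illustrate one difference the table calls out: in `bridge` mode you can request a dynamic host port by setting `hostPort` to `0` (or omitting it), which `host` mode doesn't allow. A sketch with hypothetical names:

```json
{
  "family": "api",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:1.0",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }
      ]
    }
  ]
}
```

With a dynamic host port, each task copy gets a different ephemeral port on the instance, so multiple copies of the same task can run on one host.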

For more information about Docker networking on Linux, see [Networking overview](https://docs.docker.com/engine/network/) in the *Docker Documentation*.

For more information about Docker networking on Windows, see [Windows container networking](https://learn.microsoft.com/en-us/virtualization/windowscontainers/container-networking/architecture) in the Microsoft *Containers on Windows Documentation*.

## Using a VPC in IPv6-only mode


In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create new subnets that include only an IPv6 CIDR block. For more information, see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*.

You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dualstack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind an Application Load Balancer or a Network Load Balancer, you can create a new dualstack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

  For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ IPv6-only configuration isn't supported on Windows. You must use Amazon ECS-optimized Linux AMIs to run tasks in an IPv6-only configuration. For more information about Amazon ECS-optimized Linux AMIs, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
+ When you launch a container instance for running tasks in an IPv6-only configuration, you must set a primary IPv6 address for the instance by using the `--enable-primary-ipv6` EC2 parameter.
**Note**  
Without a primary IPv6 address, tasks running on the container instance in the host or bridge network modes will fail to register with load balancers or with AWS Cloud Map.

  For more information about the `--enable-primary-ipv6` parameter for running Amazon EC2 instances, see [run-instances](https://docs.aws.amazon.com/cli/latest/reference/ec2/run-instances.html) in the *AWS CLI Command Reference*.

  For more information about launching container instances using the AWS Management Console, see [Launching an Amazon ECS Linux container instance](launch_container_instance.md).
+ By default, the Amazon ECS container agent tries to detect the container instance's compatibility for an IPv6-only configuration by looking at the instance's default IPv4 and IPv6 routes. To override this behavior, set the `ECS_INSTANCE_IP_COMPATIBILITY` parameter to `ipv4` or `ipv6` in the instance's `/etc/ecs/ecs.config` file.
+ Tasks must use version `1.99.1` or later of the container agent. For information about how to check the agent version your instance is using and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dualstack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.
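As a minimal sketch of the agent override described in the considerations above, the `/etc/ecs/ecs.config` entry that forces IPv6-only behavior is a single line (no other agent settings are shown here):

```
ECS_INSTANCE_IP_COMPATIBILITY=ipv6
```

Setting the value to `ipv4` instead forces IPv4 behavior; leaving the variable unset keeps the agent's automatic route-based detection.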

### AWS Regions that support IPv6-only mode for Amazon ECS


You can run tasks in an IPv6-only configuration in the following AWS Regions where Amazon ECS is available:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Jakarta)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Milan)
+ Europe (Paris)
+ Europe (Spain)
+ Israel (Tel Aviv)
+ Middle East (Bahrain)
+ Middle East (UAE)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

# Allocate a network interface for an Amazon ECS task

The task networking features that are provided by the `awsvpc` network mode give Amazon ECS tasks the same networking properties as Amazon EC2 instances. Using the `awsvpc` network mode simplifies container networking, because you have more control over how your applications communicate with each other and other services within your VPCs. The `awsvpc` network mode also provides greater security for your containers by allowing you to use security groups and network monitoring tools at a more granular level within your tasks. You can also use other Amazon EC2 networking features such as VPC Flow Logs to monitor traffic to and from your tasks. Additionally, containers that belong to the same task can communicate over the `localhost` interface.

The task elastic network interface (ENI) is a fully managed feature of Amazon ECS. Amazon ECS creates the ENI and attaches it to the host Amazon EC2 instance with the specified security group. The task sends and receives network traffic over the ENI in the same way that Amazon EC2 instances do with their primary network interfaces. Each task ENI is assigned a private IPv4 address by default. If your VPC is enabled for dual-stack mode and you use a subnet with an IPv6 CIDR block, the task ENI will also receive an IPv6 address. Each task can only have one ENI. 

These ENIs are visible in the Amazon EC2 console for your account. Your account can't detach or modify the ENIs. This is to prevent accidental deletion of an ENI that is associated with a running task. You can view the ENI attachment information for tasks in the Amazon ECS console or with the [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) API operation. When the task stops or if the service is scaled down, the task ENI is detached and deleted.

When you need increased ENI density, use the `awsvpcTrunking` account setting. Amazon ECS then creates and attaches a "trunk" network interface to your container instance. The trunk network interface is fully managed by Amazon ECS and is deleted when you terminate or deregister your container instance from the Amazon ECS cluster. For more information about the `awsvpcTrunking` account setting, see [Prerequisites](container-instance-eni.md#eni-trunking-launching).

You specify `awsvpc` in the `networkMode` parameter of the task definition. For more information, see [Network mode](task_definition_parameters.md#network_mode). 

Then, when you run a task or create a service, use the `networkConfiguration` parameter that includes one or more subnets to place your tasks in and one or more security groups to attach to an ENI. For more information, see [Network configuration](service_definition_parameters.md#sd-networkconfiguration). The tasks are placed on compatible Amazon EC2 instances in the same Availability Zones as those subnets, and the specified security groups are associated with the ENI that's provisioned for the task.

## Linux considerations


Consider the following when you use the Linux operating system.
+ If you use a p5.48xlarge instance in `awsvpc` mode, you can't run more than one task on the instance.
+ Tasks and services that use the `awsvpc` network mode require the Amazon ECS service-linked role, which provides Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you automatically when you create a cluster, or when you create or update a service, in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role with the following AWS CLI command:

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Your Amazon EC2 Linux instance requires version `1.15.0` or later of the container agent to run tasks that use the `awsvpc` network mode. If you're using an Amazon ECS-optimized AMI, your instance needs at least version `1.15.0-4` of the `ecs-init` package as well.
+ Amazon ECS populates the hostname of the task with an Amazon-provided (internal) DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is set to a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ Each Amazon ECS task that uses the `awsvpc` network mode receives its own elastic network interface (ENI), which is attached to the Amazon EC2 Linux instance that hosts it. There's a default quota for the number of network interfaces that can be attached to an Amazon EC2 Linux instance, and the primary network interface counts as one toward that quota. For example, by default a `c5.large` instance can have up to three ENIs attached to it. Because the primary network interface counts as one, you can attach an additional two ENIs to the instance. Each task that uses the `awsvpc` network mode requires an ENI, so you can typically only run two such tasks on this instance type. For more information about the default ENI limits for each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*.
+ Amazon ECS supports launching Amazon EC2 Linux instances that use supported instance types with increased ENI density. When you opt in to the `awsvpcTrunking` account setting and register Amazon EC2 Linux instances that use these instance types to your cluster, these instances have a higher ENI quota, which means that you can place more tasks on each instance. To use the increased ENI density with the trunking feature, your Amazon EC2 instance must use version `1.28.1` or later of the container agent. If you're using an Amazon ECS-optimized AMI, your instance also requires at least version `1.28.1-2` of the `ecs-init` package. For more information about opting in to the `awsvpcTrunking` account setting, see [Access Amazon ECS features with account settings](ecs-account-settings.md). For more information about ENI trunking, see [Increasing Amazon ECS Linux container instance network interfaces](container-instance-eni.md).
+ When hosting tasks that use the `awsvpc` network mode on Amazon EC2 Linux instances, your task ENIs aren't given public IP addresses. To access the internet, tasks must be launched in a private subnet that's configured to use a NAT gateway. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*. Inbound network access must be from within a VPC that uses the private IP address or routed through a load balancer from within the VPC. Tasks that are launched within public subnets do not have access to the internet.
+ Amazon ECS recognizes only the ENIs that it attaches to your Amazon EC2 Linux instances. If you manually attached ENIs to your instances, Amazon ECS might attempt to add a task to an instance that doesn't have enough network adapters. This can result in the task timing out and moving to a deprovisioning status and then a stopped status. We recommend that you don't attach ENIs to your instances manually.
+ Amazon EC2 Linux instances must be registered with the `ecs.capability.task-eni` capability to be considered for placement of tasks with the `awsvpc` network mode. Instances running version `1.15.0-4` or later of `ecs-init` are registered with this attribute automatically.
+ The ENIs that are created and attached to your Amazon EC2 Linux instances cannot be detached manually or modified by your account. This is to prevent the accidental deletion of an ENI that is associated with a running task. To release the ENIs for a task, stop the task.
+ You can specify up to 16 subnets and 5 security groups in the `awsVpcConfiguration` when you run a task or create a service that uses the `awsvpc` network mode. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ When a task is started with the `awsvpc` network mode, the Amazon ECS container agent creates an additional `pause` container for each task before starting the containers in the task definition. It then configures the network namespace of the `pause` container by running the [amazon-ecs-cni-plugins](https://github.com/aws/amazon-ecs-cni-plugins) CNI plugins. The agent then starts the rest of the containers in the task so that they share the network stack of the `pause` container. This means that all containers in a task are addressable by the IP addresses of the ENI, and they can communicate with each other over the `localhost` interface.
+ Services with tasks that use the `awsvpc` network mode only support Application Load Balancer and Network Load Balancer. When you create any target groups for these services, you must choose `ip` as the target type. Do not use `instance`. This is because tasks that use the `awsvpc` network mode are associated with an ENI, not with an Amazon EC2 Linux instance. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).
+ If you update your VPC to use a different DHCP options set, the change isn't applied to existing tasks. To change these network settings safely, start new tasks (which pick up the new options), verify that they work correctly, and then stop the existing tasks.
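The ENI accounting described above can be sketched in a few lines. The quota values here are illustrative; check the Amazon EC2 documentation for the authoritative per-type limits.

```python
# Sketch: estimate how many awsvpc-mode tasks fit on an EC2 instance
# without ENI trunking. ENI limits are illustrative placeholder values.
ENI_LIMITS = {"c5.large": 3, "c5.xlarge": 4}  # total attachable ENIs per type

def max_awsvpc_tasks(instance_type: str) -> int:
    """Each awsvpc task needs a dedicated ENI; the primary interface uses one slot."""
    return ENI_LIMITS[instance_type] - 1

print(max_awsvpc_tasks("c5.large"))  # 2
```

With `awsvpcTrunking` enabled, the effective task limit comes from the trunk interface instead, so this simple subtraction no longer applies.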

## Windows considerations


 The following are considerations when you use the Windows operating system:
+ Container instances that use the Amazon ECS-optimized Windows Server 2016 AMI can't host tasks that use the `awsvpc` network mode. If your cluster contains both Amazon ECS-optimized Windows Server 2016 AMIs and Windows AMIs that support the `awsvpc` network mode, tasks that use the `awsvpc` network mode aren't launched on the Windows Server 2016 instances. Rather, they're launched on instances that support the `awsvpc` network mode.
+ Your Amazon EC2 Windows instance requires version `1.57.1` or later of the container agent to use CloudWatch metrics for Windows containers that use the `awsvpc` network mode.
+ Tasks and services that use the `awsvpc` network mode require the Amazon ECS service-linked role to provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you automatically when you create a cluster, or if you create or update a service, in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role with the following AWS CLI command.

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Your Amazon EC2 Windows instance requires version `1.54.0` or later of the container agent to run tasks that use the `awsvpc` network mode. When you bootstrap the instance, you must configure the options that are required for `awsvpc` network mode. For more information, see [Bootstrapping Amazon ECS Windows container instances to pass data](bootstrap_windows_container_instance.md).
+ Amazon ECS populates the hostname of the task with an Amazon-provided (internal) DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is set to a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ Each Amazon ECS task that uses the `awsvpc` network mode receives its own elastic network interface (ENI), which is attached to the Amazon EC2 Windows instance that hosts it. There's a default quota for the number of network interfaces that can be attached to an Amazon EC2 Windows instance, and the primary network interface counts as one toward that quota. For example, by default a `c5.large` instance can have up to three ENIs attached to it. Because the primary network interface counts as one, you can attach an additional two ENIs to the instance. Each task that uses the `awsvpc` network mode requires an ENI, so you can typically only run two such tasks on this instance type. For more information about the default ENI limits for each instance type, see [IP addresses per network interface per instance type](https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/using-eni.html#AvailableIpPerENI) in the *Amazon EC2 User Guide*.
+ When hosting tasks that use the `awsvpc` network mode on Amazon EC2 Windows instances, your task ENIs aren't given public IP addresses. To access the internet, launch tasks in a private subnet that's configured to use a NAT gateway. For more information, see [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*. Inbound network access must be from within the VPC that is using the private IP address or routed through a load balancer from within the VPC. Tasks that are launched within public subnets don't have access to the internet.
+ Amazon ECS recognizes only the ENIs that it has attached to your Amazon EC2 Windows instance. If you manually attached ENIs to your instances, Amazon ECS might attempt to add a task to an instance that doesn't have enough network adapters. This can result in the task timing out and moving to a deprovisioning status and then a stopped status. We recommend that you don't attach ENIs to your instances manually.
+ Amazon EC2 Windows instances must be registered with the `ecs.capability.task-eni` capability to be considered for placement of tasks with the `awsvpc` network mode. 
+  You can't manually modify or detach ENIs that are created and attached to your Amazon EC2 Windows instances. This is to prevent you from accidentally deleting an ENI that's associated with a running task. To release the ENIs for a task, stop the task.
+  You can only specify up to 16 subnets and 5 security groups in `awsVpcConfiguration` when you run a task or create a service that uses the `awsvpc` network mode. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ When a task is started with the `awsvpc` network mode, the Amazon ECS container agent creates an additional `pause` container for each task before starting the containers in the task definition. It then configures the network namespace of the `pause` container by running the [amazon-ecs-cni-plugins](https://github.com/aws/amazon-ecs-cni-plugins) CNI plugins. The agent then starts the rest of the containers in the task so that they share the network stack of the `pause` container. This means that all containers in a task are addressable by the IP addresses of the ENI, and they can communicate with each other over the `localhost` interface.
+ Services with tasks that use the `awsvpc` network mode only support Application Load Balancer and Network Load Balancer. When you create any target groups for these services, you must choose `ip` as the target type, not `instance`. This is because tasks that use the `awsvpc` network mode are associated with an ENI, not with an Amazon EC2 Windows instance. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).
+ If you update your VPC to use a different DHCP options set, the change isn't applied to existing tasks. To change these network settings safely, start new tasks (which pick up the new options), verify that they work correctly, and then stop the existing tasks.
+ The following are not supported when you use `awsvpc` network mode in an EC2 Windows configuration:
  + Dual-stack configuration
  + IPv6
  + ENI trunking

## Using a VPC in dual-stack mode


When using a VPC in dual-stack mode, your tasks can communicate over IPv4, IPv6, or both. IPv4 and IPv6 addresses are independent of each other, so you must configure routing and security in your VPC separately for IPv4 and IPv6. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If you configured your VPC with an internet gateway or an egress-only internet gateway, you can use your VPC in dual-stack mode. Tasks that are assigned an IPv6 address can then access the internet through the internet gateway or egress-only internet gateway; NAT gateways are optional. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [Egress-only internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/egress-only-internet-gateway.html) in the *Amazon VPC User Guide*.

Amazon ECS tasks are assigned an IPv6 address if the following conditions are met:
+ The Amazon EC2 Linux instance that hosts the task is using version `1.45.0` or later of the container agent. For information about how to check the agent version your instance is using, and updating it if needed, see [Updating the Amazon ECS container agent](ecs-agent-update.md).
+ The `dualStackIPv6` account setting is enabled. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your task is using the `awsvpc` network mode.
+ Your VPC and subnet are configured for IPv6. The configuration includes the network interfaces that are created in the specified subnet. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) and [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-ip-addressing.html#subnet-ipv6) in the *Amazon VPC User Guide*.

# Map Amazon ECS container ports to the EC2 instance network interface
Host network mode

The `host` network mode is only supported for Amazon ECS tasks hosted on Amazon EC2 instances. It's not supported when using Amazon ECS on Fargate.

The `host` network mode is the most basic network mode that's supported in Amazon ECS. Using host mode, the networking of the container is tied directly to the underlying host that's running the container.

![\[Diagram showing architecture of a network with containers using the host network mode.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-host.png)


Assume that you're running a Node.js container with an Express application that listens on port `3000`, similar to the one illustrated in the preceding diagram. When the `host` network mode is used, the container receives traffic on port 3000 using the IP address of the underlying host Amazon EC2 instance. We do not recommend using this mode.
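A task definition for this scenario might include a snippet like the following sketch (the container name and image are placeholders):

```
{
  "networkMode": "host",
  "containerDefinitions": [
    {
      "name": "express-app",
      "image": "my-repo/express-app:latest",
      "portMappings": [{"containerPort": 3000, "protocol": "tcp"}]
    }
  ]
}
```

In `host` mode, the container port is exposed directly on the instance, so if you specify a `hostPort`, it must match the `containerPort`.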

There are significant drawbacks to using this network mode. You can't run more than a single instantiation of a task on each host, because only the first task can bind to its required port on the Amazon EC2 instance. There's also no way to remap a container port when it's using the `host` network mode. For example, if an application needs to listen on a particular port number, you can't remap that port number directly. Instead, you must manage any port conflicts by changing the application configuration.

There are also security implications when using the `host` network mode. This mode allows containers to impersonate the host, and it allows containers to connect to private loopback network services on the host.

# Use Docker's virtual network for Amazon ECS Linux tasks
Bridge network mode

The `bridge` network mode is only supported for Amazon ECS tasks hosted on Amazon EC2 instances.

With `bridge` mode, you're using a virtual network bridge to create a layer between the host and the networking of the container. This way, you can create port mappings that remap a host port to a container port. The mappings can be either static or dynamic.

![\[Diagram showing architecture of a network using bridge network mode with static port mapping.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-bridge.png)


With a static port mapping, you explicitly define which host port to map to a container port. In the preceding example, port `80` on the host is mapped to port `3000` on the container. To communicate with the containerized application, you send traffic to port `80` on the Amazon EC2 instance's IP address. From the containerized application's perspective, inbound traffic arrives on port `3000`.

If you only want to change the port that traffic uses, a static port mapping is suitable. However, it still has the same disadvantage as the `host` network mode: you can't run more than a single instantiation of a task on each host, because a static port mapping allows only a single container to be mapped to port 80.
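The static mapping in this example might be expressed in a container definition as follows (a sketch):

```
"portMappings": [
  {
    "containerPort": 3000,
    "hostPort": 80,
    "protocol": "tcp"
  }
]
```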

To solve this problem, consider using the `bridge` network mode with a dynamic port mapping as shown in the following diagram.

![\[Diagram showing architecture of a network using bridge network mode with dynamic port mapping.\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/images/networkmode-bridge-dynamic.png)


If you don't specify a host port in the port mapping, Docker chooses a random, unused port from the ephemeral port range and assigns it as the public host port for the container. For example, the Node.js application listening on port `3000` in the container might be assigned a random high-numbered port such as `47760` on the Amazon EC2 host. This means that you can run multiple copies of that container on the host, with each copy assigned its own host port. Each copy of the container receives traffic on port `3000`, while clients send traffic to the randomly assigned host ports.
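A dynamic mapping simply omits `hostPort` (or sets it to `0`), so that Docker assigns a port from the ephemeral range (a sketch):

```
"portMappings": [
  {
    "containerPort": 3000,
    "protocol": "tcp"
  }
]
```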

Amazon ECS keeps track of the randomly assigned ports for each task by automatically updating load balancer target groups and AWS Cloud Map service discovery with the list of task IP addresses and ports. This makes it easier to operate services that use `bridge` mode with dynamic ports.

However, one disadvantage of the `bridge` network mode is that it's difficult to lock down service-to-service communication. Because services can be assigned any random, unused port, you must open broad port ranges between hosts, and you can't easily create specific rules so that a particular service can communicate only with one other specific service. The services have no specific ports to use for security group networking rules.

## Configuring bridge networking mode for IPv6-only workloads


To configure `bridge` mode for communication over IPv6, you must update Docker daemon settings. Update `/etc/docker/daemon.json` with the following:

```
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "ip6tables": true,
  "experimental": true
}
```

After you update the Docker daemon settings, restart the daemon (for example, with `sudo systemctl restart docker`) for the changes to take effect.

**Note**  
When you update and restart the daemon, Docker enables IPv6 forwarding on the instance, which can result in the loss of default routes on instances that use an Amazon Linux 2 AMI. To avoid this, use the following command to add a default route via the subnet's IPv6 gateway.  

```
ip route add default via FE80:EC2::1 dev eth0 metric 100
```

# Amazon ECS task networking options for Fargate
Task networking for Fargate

By default, every Amazon ECS task on Fargate is provided an elastic network interface (ENI) with a primary private IP address. When using a public subnet, you can optionally assign a public IP address to the task's ENI. If your VPC is configured for dual-stack mode and you use a subnet with an IPv6 CIDR block, your task's ENI also receives an IPv6 address. A task can only have one ENI that's associated with it at a time. Containers that belong to the same task can also communicate over the `localhost` interface. For more information about VPCs and subnets, see [How Amazon VPC works](https://docs.aws.amazon.com/vpc/latest/userguide/how-it-works.html) in the *Amazon VPC User Guide*.

For a task on Fargate to pull a container image, the task must have a route to the internet. The following are ways to satisfy this requirement:
+ When using a public subnet, you can assign a public IP address to the task ENI.
+ When using a private subnet, the subnet can have a NAT gateway attached.
+ When using container images that are hosted in Amazon ECR, you can configure Amazon ECR to use an interface VPC endpoint and the image pull occurs over the task's private IPv4 address. For more information, see [Amazon ECR interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html) in the *Amazon Elastic Container Registry User Guide*.
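For example, you might run a Fargate task in a public subnet and assign it a public IP address with a command like the following sketch (the cluster, task definition, subnet, and security group values are placeholders):

```
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task:1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"
```

For a task in a private subnet behind a NAT gateway, you would instead set `assignPublicIp=DISABLED`.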

Because each task gets its own ENI, you can use networking features such as VPC Flow Logs, which you can use to monitor traffic to and from your tasks. For more information, see [VPC Flow Logs](https://docs.aws.amazon.com/vpc/latest/userguide/flow-logs.html) in the *Amazon VPC User Guide*.

You can also take advantage of AWS PrivateLink. You can configure a VPC interface endpoint so that you can access Amazon ECS APIs through private IP addresses. AWS PrivateLink restricts all network traffic between your VPC and Amazon ECS to the Amazon network. You don't need an internet gateway, a NAT device, or a virtual private gateway. For more information, see [Amazon ECS interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/vpc-endpoints.html).

For examples of how to use the `NetworkConfiguration` resource with CloudFormation, see [CloudFormation example templates for Amazon ECS](working-with-templates.md).

The ENIs that are created are fully managed by AWS Fargate. Moreover, there's an associated IAM policy that's used to grant permissions for Fargate. For tasks using Fargate platform version `1.4.0` or later, the task receives a single ENI (referred to as the task ENI) and all network traffic flows through that ENI within your VPC. This traffic is recorded in your VPC flow logs. For tasks that use Fargate platform version `1.3.0` and earlier, in addition to the task ENI, the task also receives a separate Fargate owned ENI, which is used for some network traffic that isn't visible in the VPC flow logs. The following table describes the network traffic behavior and the required IAM policy for each platform version.


|  Action  |  Traffic flow with Linux platform version `1.3.0` and earlier  |  Traffic flow with Linux platform version `1.4.0`  |  Traffic flow with Windows platform version `1.0.0`  |  IAM permission  | 
| --- | --- | --- | --- | --- | 
|  Retrieving Amazon ECR login credentials  |  Fargate owned ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Image pull  |  Task ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Sending logs through a log driver  |  Task ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Sending logs through FireLens for Amazon ECS  |  Task ENI  |  Task ENI  |  Task ENI  |  Task IAM role  | 
|  Retrieving secrets from Secrets Manager or Systems Manager  |  Fargate owned ENI  |  Task ENI  |  Task ENI  |  Task execution IAM role  | 
|  Amazon EFS file system traffic  |  Not available  |  Task ENI  |  Task ENI  |  Task IAM role  | 
|  Application traffic  |  Task ENI  |  Task ENI  |  Task ENI  |  Task IAM role  | 

## Considerations


Consider the following when using task networking.
+ The Amazon ECS service-linked role is required to provide Amazon ECS with the permissions to make calls to other AWS services on your behalf. This role is created for you when you create a cluster or if you create or update a service in the AWS Management Console. For more information, see [Using service-linked roles for Amazon ECS](using-service-linked-roles.md). You can also create the service-linked role using the following AWS CLI command.

  ```
  aws iam create-service-linked-role --aws-service-name ecs.amazonaws.com
  ```
+ Amazon ECS populates the hostname of the task with an Amazon-provided DNS hostname when both the `enableDnsHostnames` and `enableDnsSupport` options are enabled on your VPC. If these options aren't enabled, the DNS hostname of the task is set to a random hostname. For more information about the DNS settings for a VPC, see [Using DNS with Your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html) in the *Amazon VPC User Guide*.
+ You can only specify up to 16 subnets and 5 security groups for `awsVpcConfiguration`. For more information, see [AwsVpcConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_AwsVpcConfiguration.html) in the *Amazon Elastic Container Service API Reference*.
+ You can't manually detach or modify the ENIs that are created and attached by Fargate. This is to prevent the accidental deletion of an ENI that's associated with a running task. To release the ENIs for a task, stop the task.
+ If you update a VPC subnet to use a different DHCP options set, the change isn't applied to existing tasks that use the VPC. To migrate smoothly, start new tasks (which receive the new setting), verify that they work correctly, and then stop the old tasks if no rollback is required.
+ The following applies to tasks run on Fargate platform version `1.4.0` or later for Linux or `1.0.0` for Windows. Tasks launched in dual-stack subnets receive an IPv4 address and an IPv6 address. Tasks launched in IPv6-only subnets receive only an IPv6 address.
+ For tasks that use platform version `1.4.0` or later for Linux or `1.0.0` for Windows, the task ENIs support jumbo frames. Network interfaces are configured with a maximum transmission unit (MTU), which is the size of the largest payload that fits within a single frame. The larger the MTU, the more application payload can fit within a single frame, which reduces per-frame overhead and increases efficiency. Supporting jumbo frames reduces overhead when the network path between your task and the destination supports jumbo frames.
+ Services with tasks that use Fargate only support Application Load Balancer and Network Load Balancer. Classic Load Balancer isn't supported. When you create any target groups, you must choose `ip` as the target type, not `instance`. For more information, see [Use load balancing to distribute Amazon ECS service traffic](service-load-balancing.md).

## Using a VPC in dual-stack mode


When using a VPC in dual-stack mode, your tasks can communicate over IPv4 or IPv6, or both. IPv4 and IPv6 addresses are independent of each other and you must configure routing and security in your VPC separately for IPv4 and IPv6. For more information about configuring your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.

If the following conditions are met, Amazon ECS tasks on Fargate are assigned an IPv6 address:
+ Your Amazon ECS `dualStackIPv6` account setting is turned on (`enabled`) for the IAM principal launching your tasks, in the Region where you launch them. This setting can only be modified using the API or AWS CLI. You can turn it on for a specific IAM principal or for your entire account by setting the account default. For more information, see [Access Amazon ECS features with account settings](ecs-account-settings.md).
+ Your VPC and subnet are enabled for IPv6. For more information about how to configure your VPC for dual-stack mode, see [Migrating to IPv6](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6.html) in the *Amazon VPC User Guide*.
+ Your subnet is enabled for auto-assigning IPv6 addresses. For more information about how to configure your subnet, see [Modify the IPv6 addressing attribute for your subnet](https://docs.aws.amazon.com/vpc/latest/userguide/modify-subnets.html) in the *Amazon VPC User Guide*.
+ The task or service uses Fargate platform version `1.4.0` or later for Linux.
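For example, you might turn on the `dualStackIPv6` account default with the AWS CLI:

```
aws ecs put-account-setting-default --name dualStackIPv6 --value enabled
```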

For Amazon ECS tasks on Fargate running in a VPC in dual-stack mode to communicate with the dependency services used during task launch (such as Amazon ECR, Systems Manager, and Secrets Manager), the public subnet's route table needs an IPv4 (`0.0.0.0/0`) route to an internet gateway, and the private subnet's route table needs an IPv4 (`0.0.0.0/0`) route to a NAT gateway. For more information, see [Internet gateways](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Internet_Gateway.html) and [NAT gateways](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html) in the *Amazon VPC User Guide*.

For examples of how to configure a dual-stack VPC, see [Example dual-stack VPC configuration](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-example.html).

## Using a VPC in IPv6-only mode


In an IPv6-only configuration, your Amazon ECS tasks communicate exclusively over IPv6. To set up VPCs and subnets for an IPv6-only configuration, you must add an IPv6 CIDR block to the VPC and create subnets that include only an IPv6 CIDR block. For more information see [Add IPv6 support for your VPC](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-migrate-ipv6-add.html) and [Create a subnet](https://docs.aws.amazon.com/vpc/latest/userguide/create-subnets.html) in the *Amazon VPC User Guide*. You must also update route tables with IPv6 targets and configure security groups with IPv6 rules. For more information, see [Configure route tables](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html) and [Configure security group rules](https://docs.aws.amazon.com/vpc/latest/userguide/working-with-security-group-rules.html) in the *Amazon VPC User Guide*.

The following considerations apply:
+ You can update an IPv4-only or dualstack Amazon ECS service to an IPv6-only configuration by either updating the service directly to use IPv6-only subnets or by creating a parallel IPv6-only service and using Amazon ECS blue-green deployments to shift traffic to the new service. For more information about Amazon ECS blue-green deployments, see [Amazon ECS blue/green deployments](deployment-type-blue-green.md).
+ An IPv6-only Amazon ECS service must use dual-stack load balancers with IPv6 target groups. If you're migrating an existing Amazon ECS service that's behind an Application Load Balancer or a Network Load Balancer, you can create a new dual-stack load balancer and shift traffic from the old load balancer, or update the IP address type of the existing load balancer.

   For more information about Network Load Balancers, see [Create a Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/create-network-load-balancer.html) and [Update the IP address types for your Network Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/load-balancer-ip-address-type.html) in the *User Guide for Network Load Balancers*. For more information about Application Load Balancers, see [Create an Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html) and [Update the IP address types for your Application Load Balancer](https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-ip-address-type.html) in the *User Guide for Application Load Balancers*.
+ IPv6-only configuration isn't supported on Windows.
+ For Amazon ECS tasks in an IPv6-only configuration to communicate with IPv4-only endpoints, you can set up DNS64 and NAT64 for network address translation from IPv6 to IPv4. For more information, see [DNS64 and NAT64](https://docs.aws.amazon.com/vpc/latest/userguide/nat-gateway-nat64-dns64.html) in the *Amazon VPC User Guide*.
+ IPv6-only configuration is supported on Fargate platform version `1.4.0` or later.
+ Amazon ECS workloads in an IPv6-only configuration must use Amazon ECR dualstack image URI endpoints when pulling images from Amazon ECR. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
**Note**  
Amazon ECR doesn't support dual-stack interface VPC endpoints that tasks in an IPv6-only configuration can use. For more information, see [Getting started with making requests over IPv6](https://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr-requests.html#ipv6-access-getting-started) in the *Amazon Elastic Container Registry User Guide*.
+ Amazon ECS Exec isn't supported in an IPv6-only configuration.
+ Amazon CloudWatch doesn't support a dual-stack FIPS endpoint, so you can't use a FIPS endpoint to monitor Amazon ECS tasks in an IPv6-only configuration that require FIPS-140 compliance. For more information about FIPS-140, see [AWS Fargate Federal Information Processing Standard (FIPS-140)](ecs-fips-compliance.md).
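
As a sketch of the DNS64 and NAT64 consideration above, the following CloudFormation fragment enables DNS64 on an IPv6-only subnet and routes the well-known NAT64 prefix `64:ff9b::/96` through a NAT gateway. The `MyVpc`, `PrivateRouteTable`, and `NatGateway` logical IDs and the subnet CIDR are placeholders assumed to be defined elsewhere in the template.

```json
{
    "PrivateIpv6Subnet": {
        "Type": "AWS::EC2::Subnet",
        "Properties": {
            "VpcId": { "Ref": "MyVpc" },
            "Ipv6Native": true,
            "EnableDns64": true,
            "Ipv6CidrBlock": "2600:1f13:abc:1200::/64"
        }
    },
    "Nat64Route": {
        "Type": "AWS::EC2::Route",
        "Properties": {
            "RouteTableId": { "Ref": "PrivateRouteTable" },
            "DestinationIpv6CidrBlock": "64:ff9b::/96",
            "NatGatewayId": { "Ref": "NatGateway" }
        }
    }
}
```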

### AWS Regions that support IPv6-only mode for Amazon ECS


You can run tasks in an IPv6-only configuration in the following AWS Regions where Amazon ECS is available:
+ US East (Ohio)
+ US East (N. Virginia)
+ US West (N. California)
+ US West (Oregon)
+ Africa (Cape Town)
+ Asia Pacific (Hong Kong)
+ Asia Pacific (Hyderabad)
+ Asia Pacific (Jakarta)
+ Asia Pacific (Melbourne)
+ Asia Pacific (Mumbai)
+ Asia Pacific (Osaka)
+ Asia Pacific (Seoul)
+ Asia Pacific (Singapore)
+ Asia Pacific (Sydney)
+ Asia Pacific (Tokyo)
+ Canada (Central)
+ Canada West (Calgary)
+ China (Beijing)
+ China (Ningxia)
+ Europe (Frankfurt)
+ Europe (London)
+ Europe (Milan)
+ Europe (Paris)
+ Europe (Spain)
+ Israel (Tel Aviv)
+ Middle East (Bahrain)
+ Middle East (UAE)
+ South America (São Paulo)
+ AWS GovCloud (US-East)
+ AWS GovCloud (US-West)

# Storage options for Amazon ECS tasks
Storage options for tasks

Amazon ECS provides you with flexible, cost-effective, and easy-to-use data storage options depending on your needs. Amazon ECS supports the following data volume options for containers:


| Data volume | Supported capacity | Supported operating systems | Storage persistence | Use cases | 
| --- | --- | --- | --- | --- | 
| Amazon Elastic Block Store (Amazon EBS) | Fargate, Amazon EC2, Amazon ECS Managed Instances | Linux, Windows (on Amazon EC2 only) | Can be persisted when attached to a standalone task. Ephemeral when attached to a task maintained by a service. | Amazon EBS volumes provide cost-effective, durable, high-performance block storage for data-intensive containerized workloads. Common use cases include transactional workloads such as databases, virtual desktops and root volumes, and throughput intensive workloads such as log processing and ETL workloads. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md). | 
| Amazon Elastic File System (Amazon EFS) | Fargate, Amazon EC2, Amazon ECS Managed Instances | Linux | Persistent | Amazon EFS volumes provide simple, scalable, and persistent shared file storage for use with your Amazon ECS tasks that grows and shrinks automatically as you add and remove files. Amazon EFS volumes support concurrency and are useful for containerized applications that scale horizontally and need storage functionalities like low latency, high throughput, and read-after-write consistency. Common use cases include workloads such as data analytics, media processing, content management, and web serving. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md). | 
| Amazon FSx for Windows File Server | Amazon EC2 | Windows | Persistent | FSx for Windows File Server volumes provide fully managed Windows file servers that you can use to provision your Windows tasks that need persistent, distributed, shared, and static file storage. Common use cases include .NET applications that might require local folders as persistent storage to save application outputs. Amazon FSx for Windows File Server offers a local folder in the container which allows for multiple containers to read-write on the same file system that's backed by a SMB Share. For more information, see [Use FSx for Windows File Server volumes with Amazon ECS](wfsx-volumes.md). | 
| Amazon FSx for NetApp ONTAP | Amazon EC2 | Linux | Persistent | Amazon FSx for NetApp ONTAP volumes provide fully managed NetApp ONTAP file systems that you can use to provision your Linux tasks that need persistent, high-performance, and feature-rich shared file storage. Amazon FSx for NetApp ONTAP supports NFS and SMB protocols and provides enterprise-grade features like snapshots, cloning, and data deduplication. Common use cases include high-performance computing workloads, content repositories, and applications requiring POSIX-compliant shared storage. For more information, see [Mounting Amazon FSx for NetApp ONTAP file systems from Amazon ECS containers](https://docs.aws.amazon.com/fsx/latest/ONTAPGuide/mount-ontap-ecs-containers.html). | 
| Amazon S3 Files | Fargate, Amazon ECS Managed Instances | Linux | Persistent | Amazon S3 Files is a high-performance file system that provides fast, cached access to Amazon S3 data through a mountable file system interface. S3 Files volumes give your containers direct file-system access to data stored in S3 buckets. Common use cases include data analytics, machine learning training, and applications that need high-throughput access to S3 data. For more information, see [Configuring S3 Files for Amazon ECS](s3files-volumes.md). | 
| Docker volumes | Amazon EC2 | Windows, Linux | Persistent | Docker volumes are a feature of the Docker container runtime that allow containers to persist data by mounting a directory from the file system of the host. Docker volume drivers (also referred to as plugins) are used to integrate container volumes with external storage systems. Docker volumes can be managed by third-party drivers or by the built in local driver. Common use cases for Docker volumes include providing persistent data volumes or sharing volumes at different locations on different containers on the same container instance. For more information, see [Use Docker volumes with Amazon ECS](docker-volumes.md). | 
| Bind mounts | Fargate, Amazon EC2, Amazon ECS Managed Instances | Windows, Linux | Ephemeral | Bind mounts consist of a file or directory on the host, such as an Amazon EC2 instance or AWS Fargate, that is mounted onto a container. Common use cases for bind mounts include sharing a volume from a source container with other containers in the same task, or mounting a host volume or an empty volume in one or more containers. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md). | 

# Use Amazon EBS volumes with Amazon ECS
Amazon EBS

Amazon Elastic Block Store (Amazon EBS) volumes provide highly available, cost-effective, durable, high-performance block storage for data-intensive workloads. Amazon EBS volumes can be used with Amazon ECS tasks for high throughput and transaction-intensive applications. For more information about Amazon EBS volumes, see [Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes.html) in the *Amazon EBS User Guide*.

Amazon EBS volumes that are attached to Amazon ECS tasks are managed by Amazon ECS on your behalf. During standalone task launch, you can provide the configuration that will be used to attach one EBS volume to the task. During service creation or update, you can provide the configuration that will be used to attach one EBS volume per task to each task managed by the Amazon ECS service. You can either configure new, empty volumes for attachment, or you can use snapshots to load data from existing volumes.

**Note**  
When you use snapshots to configure volumes, you can specify a `volumeInitializationRate`, in MiB/s, at which data is fetched from the snapshot to create volumes that are fully initialized in a predictable amount of time. For more information about volume initialization, see [Initialize Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/initalize-volume.html) in the *Amazon EBS User Guide*. For more information about configuring Amazon EBS volumes, see [Defer volume configuration to launch time in an Amazon ECS task definition](specify-ebs-config.md) and [Specify Amazon EBS volume configuration at Amazon ECS deployment](configure-ebs-volume.md).

Volume configuration is deferred to launch time by using the `configuredAtLaunch` parameter in the task definition. By providing volume configuration at launch time rather than in the task definition, you can create task definitions that aren't constrained to a specific data volume type or specific Amazon EBS volume settings. You can then reuse your task definitions across different runtime environments. For example, you can provision more throughput for your production workloads than for your pre-production environments.

Amazon EBS volumes attached to tasks can be encrypted with AWS Key Management Service (AWS KMS) keys to protect your data. For more information, see [Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks](ebs-kms-encryption.md).

To monitor your volume's performance, you can also use Amazon CloudWatch metrics. For more information about Amazon ECS metrics for Amazon EBS volumes, see [Amazon ECS CloudWatch metrics](available-metrics.md) and [Amazon ECS Container Insights metrics](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-metrics-ECS.html).

Attaching an Amazon EBS volume to a task is supported in all commercial and China [AWS Regions](https://docs.aws.amazon.com/glossary/latest/reference/glos-chap.html?icmpid=docs_homepage_addtlrcs#region) that support Amazon ECS.

## Supported operating systems and capacity


The following table provides the supported operating system and capacity configurations.


| Capacity | Linux  | Windows | 
| --- | --- | --- | 
| Fargate |  Amazon EBS volumes are supported on platform version 1.4.0 or later (Linux). For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md). | Not supported | 
| EC2 | Amazon EBS volumes are supported for tasks hosted on Nitro-based instances with Amazon ECS-optimized Amazon Machine Images (AMIs). For more information about instance types, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the Amazon EC2 User Guide. Amazon EBS volumes are supported on ECS-optimized AMI `20231219` or later. For more information, see [Retrieving Amazon ECS-Optimized AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_AMI.html). | Tasks hosted on Nitro-based instances with Amazon ECS-optimized Amazon Machine Images (AMIs). For more information about instance types, see [Instance types](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) in the Amazon EC2 User Guide. Amazon EBS volumes are supported on ECS-optimized AMI `20241017` or later. For more information, see [Retrieving Amazon ECS-Optimized Windows AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_windows_AMI.html). | 
| Amazon ECS Managed Instances | Amazon EBS volumes are supported for tasks hosted on Amazon ECS Managed Instances on Linux. | Not supported | 

## Considerations


Consider the following when using Amazon EBS volumes:
+ You can't configure Amazon EBS volumes for attachment to Fargate Amazon ECS tasks in the `use1-az3` Availability Zone.
+ The magnetic (`standard`) Amazon EBS volume type is not supported for tasks hosted on Fargate. For more information about Amazon EBS volume types, see [Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon EC2 User Guide*.
+ An Amazon ECS infrastructure IAM role is required when creating a service or a standalone task that is configuring a volume at deployment. You can attach the AWS managed `AmazonECSInfrastructureRolePolicyForVolumes` IAM policy to the role, or you can use the managed policy as a guide to create and attach your own policy with permissions that meet your specific needs. For more information, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ You can attach at most one Amazon EBS volume to each Amazon ECS task, and it must be a new volume. You can't attach an existing Amazon EBS volume to a task. However, you can configure a new Amazon EBS volume at deployment using the snapshot of an existing volume.
+ To use Amazon EBS volumes with Amazon ECS services, the deployment controller must be `ECS`. Both rolling and blue/green deployment strategies are supported when using this deployment controller.
+ For a container in your task to write to the mounted Amazon EBS volume, the container must have appropriate file system permissions. When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions that allow the specified user to read and write to the volume. If no user is specified, the container runs as root and has full access to the volume.
+ Amazon ECS automatically adds the reserved tags `AmazonECSCreated` and `AmazonECSManaged` to the attached volume. If you remove these tags from the volume, Amazon ECS won't be able to manage the volume on your behalf. For more information about tagging Amazon EBS volumes, see [Tagging Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specify-ebs-config.html#ebs-volume-tagging). For more information about tagging Amazon ECS resources, see [Tagging your Amazon ECS resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html).
+ Provisioning volumes from a snapshot of an Amazon EBS volume that contains partitions isn't supported.
+ Volumes that are attached to tasks that are managed by a service aren't preserved and are always deleted upon task termination.
+ You can't configure Amazon EBS volumes for attachment to Amazon ECS tasks that are running on AWS Outposts.
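
The infrastructure IAM role noted in the considerations above must be assumable by Amazon ECS. As a sketch, the role's trust policy would look like the following; the role name you attach it to (for example, `ecsInfrastructureRole`) is your choice, and you would additionally attach the `AmazonECSInfrastructureRolePolicyForVolumes` managed policy or an equivalent custom policy.

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "ecs.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}
```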

# Non-root user behavior


When you specify a non-root user in your container definition, Amazon ECS automatically configures the Amazon EBS volume with group-based permissions that allow the specified user to read and write to the volume. The volume is mounted with the following characteristics:
+ The volume is owned by the root user and root group.
+ Group permissions are set to allow read and write access.
+ The non-root user is added to the appropriate group to access the volume.

Follow these best practices when using Amazon EBS volumes with non-root containers:
+ Use consistent user IDs (UIDs) and group IDs (GIDs) across your container images to ensure consistent permissions.
+ Pre-create mount point directories in your container image and set appropriate ownership and permissions.
+ Test your containers with Amazon EBS volumes in a development environment to confirm that file system permissions work as expected.
+ If multiple containers in the same task share a volume, ensure they either use compatible UIDs/GIDs or mount the volume with consistent access expectations.
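
To make the non-root behavior concrete, the following container definition fragment is a minimal sketch that runs the container as a non-root user via the standard `user` parameter, so Amazon ECS configures group-based permissions on the attached volume. The UID/GID pair `1001:1001`, the container name, and the mount path are example values.

```json
{
    "name": "app",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "essential": true,
    "user": "1001:1001",
    "mountPoints": [
        {
            "sourceVolume": "myEBSVolume",
            "containerPath": "/mount/ebs",
            "readOnly": false
        }
    ]
}
```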

# Defer volume configuration to launch time in an Amazon ECS task definition
Defer volume configuration to launch time in a task definition

To configure an Amazon EBS volume for attachment to your task, you must specify the mount point configuration in your task definition and name the volume. You must also set `configuredAtLaunch` to `true` because Amazon EBS volumes can't be configured for attachment in the task definition. Instead, Amazon EBS volumes are configured for attachment during deployment.

To register the task definition by using the AWS Command Line Interface (AWS CLI), save the template as a JSON file, and then pass the file as input to the `[register-task-definition](https://docs.aws.amazon.com/cli/latest/reference/ecs/register-task-definition.html)` command. 

To create and register a task definition using the AWS Management Console, see [Creating an Amazon ECS task definition using the console](create-task-definition.md).

The following task definition shows the syntax for the `mountPoints` and `volumes` objects in the task definition. For more information about task definition parameters, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md). To use this example, replace the `user input placeholders` with your own information.

## Linux


```
{
    "family": "mytaskdef",
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "public.ecr.aws/nginx/nginx:latest",
            "networkMode": "awsvpc",
           "portMappings": [
                {
                    "name": "nginx-80-tcp",
                    "containerPort": 80,
                    "hostPort": 80,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEBSVolume",
                    "containerPath": "/mount/ebs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEBSVolume",
            "configuredAtLaunch": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE", "EC2"
    ],
    "cpu": "1024",
    "memory": "3072",
    "networkMode": "awsvpc"
}
```

## Windows


```
{
    "family": "windows-simple-iis-2019-core",
    "memory": "4096",
    "cpu": "2048",
    "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole",
    "runtimePlatform": {"operatingSystemFamily": "WINDOWS_SERVER_2019_CORE"},
    "requiresCompatibilities": ["EC2"],
    "containerDefinitions": [
        {
             "command": ["New-Item -Path C:\\inetpub\\wwwroot\\index.html -Type file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>Congratulations!</h2> <p>Your application is now running on a container in Amazon ECS.</p>'; C:\\ServiceMonitor.exe w3svc"],
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "essential": true,
            "cpu": 2048,
            "memory": 4096,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "name": "sample_windows_app",
            "portMappings": [
                {
                    "hostPort": 443,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEBSVolume",
                    "containerPath": "drive:\ebs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEBSVolume",
            "configuredAtLaunch": true
        }
    ],
    "requiresCompatibilities": [
        "FARGATE", "EC2"
    ],
    "cpu": "1024",
    "memory": "3072",
    "networkMode": "awsvpc"
}
```

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`configuredAtLaunch`  
Type: Boolean  
Required: Yes, when you want to attach an Amazon EBS volume directly to a task.  
Specifies whether a volume is configurable at launch. When set to `true`, you can configure the volume when you run a standalone task, or when you create or update a service. When set to `false`, you won't be able to provide another volume configuration in the task definition. This parameter must be provided and set to `true` to configure an Amazon EBS volume for attachment to a task.

# Encrypt data stored in Amazon EBS volumes attached to Amazon ECS tasks
Encrypt data stored in Amazon EBS volumes

You can use AWS Key Management Service (AWS KMS) to make and manage cryptographic keys that protect your data. Amazon EBS volumes are encrypted at rest by using AWS KMS keys. The following types of data are encrypted:
+ Data stored at rest on the volume
+ Disk I/O
+ Snapshots created from the volume
+ New volumes created from encrypted snapshots

Amazon EBS volumes that are attached to tasks can be encrypted by using either a default AWS managed key with the alias `alias/aws/ebs`, or a symmetric customer managed key specified in the volume configuration. Default AWS managed keys are unique to each AWS account per AWS Region and are created automatically. To create a symmetric customer managed key, follow the steps in [Creating symmetric encryption KMS keys](https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html#create-symmetric-cmk) in the *AWS KMS Developer Guide*.

You can configure Amazon EBS encryption by default so that all new volumes created and attached to a task in a specific AWS Region are encrypted by using the KMS key that you specify for your account. For more information about Amazon EBS encryption and encryption by default, see [Amazon EBS encryption](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption.html) in the *Amazon EBS User Guide*.

## Amazon ECS Managed Instances behavior


You encrypt Amazon EBS volumes by enabling encryption, either by using encryption by default or by enabling encryption when you create a volume that you want to encrypt. For information about how to enable encryption by default (at the account level), see [Encryption by default](https://docs.aws.amazon.com/ebs/latest/userguide/encryption-by-default.html) in the *Amazon EBS User Guide*.

You can configure any combination of these keys. The order of precedence of KMS keys is as follows:

1. The KMS key specified in the volume configuration. When you specify a KMS key in the volume configuration, it overrides the Amazon EBS default and any KMS key that is specified at the account level.

1. The KMS key specified at the account level. When you specify a KMS key at the account level, it overrides Amazon EBS default encryption but doesn't override any KMS key that is specified in the volume configuration.

1. Amazon EBS default encryption. Default encryption applies when you don't specify either an account-level KMS key or a key in the volume configuration. If you enable Amazon EBS encryption by default, the default is the KMS key you specify for encryption by default. Otherwise, the default is the AWS managed key with the alias `alias/aws/ebs`.
**Note**  
If you set `encrypted` to `false` in your volume configuration, specify no account-level KMS key, and enable Amazon EBS encryption by default, the volume will still be encrypted with the key specified for Amazon EBS encryption by default.

## Non-Amazon ECS Managed Instances behavior


You can also set up Amazon ECS cluster-level encryption for Amazon ECS managed storage when you create or update a cluster. Cluster-level encryption takes effect at the task level and can be used to encrypt the Amazon EBS volumes attached to each task running in a specific cluster by using the specified KMS key. For more information about configuring encryption at the cluster level for each task, see [ManagedStorageConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ManagedStorageConfiguration.html) in the *Amazon ECS API reference*.

You can configure any combination of these keys. The order of precedence of KMS keys is as follows:

1. The KMS key specified in the volume configuration. When you specify a KMS key in the volume configuration, it overrides the Amazon EBS default and any KMS key that is specified at the cluster level.

1. The KMS key specified at the cluster level. When you specify a KMS key for cluster-level encryption of Amazon ECS managed storage, it overrides Amazon EBS default encryption but does not override any KMS key that is specified in the volume configuration.

1. Amazon EBS default encryption. Default encryption applies when you don't specify either a cluster-level KMS key or a key in the volume configuration. If you enable Amazon EBS encryption by default, the default is the KMS key you specify for encryption by default. Otherwise, the default is the AWS managed key with the alias `alias/aws/ebs`.
**Note**  
If you set `encrypted` to `false` in your volume configuration, specify no cluster-level KMS key, and enable Amazon EBS encryption by default, the volume will still be encrypted with the key specified for Amazon EBS encryption by default.

## Customer managed KMS key policy


To encrypt an EBS volume that's attached to your task by using a customer managed key, you must configure your KMS key policy to ensure that the IAM role that you use for volume configuration has the necessary permissions to use the key. The key policy must include the `kms:CreateGrant` and `kms:GenerateDataKey*` permissions. The `kms:ReEncryptTo` and `kms:ReEncryptFrom` permissions are necessary for encrypting volumes that are created using snapshots. If you want to configure and encrypt only new, empty volumes for attachment, you can exclude the `kms:ReEncryptTo` and `kms:ReEncryptFrom` permissions. 

The following JSON snippet shows key policy statements that you can attach to your KMS key policy. Using these statements will provide access for Amazon ECS to use the key for encrypting the EBS volume. To use the example policy statements, replace the `user input placeholders` with your own information. As always, only configure the permissions that you need.

```
{
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
      "Action": "kms:DescribeKey",
      "Resource":"*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
      "Action": [
      "kms:GenerateDataKey*",
      "kms:ReEncryptTo",
      "kms:ReEncryptFrom"
      ],
      "Resource":"*",
      "Condition": {
        "StringEquals": {
          "kms:CallerAccount": "aws_account_id",
          "kms:ViaService": "ec2.region.amazonaws.com"
        },
        "ForAnyValue:StringEquals": {
          "kms:EncryptionContextKeys": "aws:ebs:id"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/ecsInfrastructureRole" },
      "Action": "kms:CreateGrant",
      "Resource":"*",
      "Condition": {
        "StringEquals": {
          "kms:CallerAccount": "aws_account_id",
          "kms:ViaService": "ec2.region.amazonaws.com"
        },
        "ForAnyValue:StringEquals": {
          "kms:EncryptionContextKeys": "aws:ebs:id"
        },
        "Bool": {
          "kms:GrantIsForAWSResource": true
        }
      }
    }
```

For more information about key policies and permissions, see [Key policies in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/key-policies.html) and [AWS KMS permissions](https://docs.aws.amazon.com/kms/latest/developerguide/kms-api-permissions-reference.html) in the *AWS KMS Developer Guide*. For troubleshooting EBS volume attachment issues related to key permissions, see [Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks](troubleshoot-ebs-volumes.md).

# Specify Amazon EBS volume configuration at Amazon ECS deployment
Specify Amazon EBS volume configuration at deployment

After you register a task definition with the `configuredAtLaunch` parameter set to `true`, you can configure an Amazon EBS volume at deployment when you run a standalone task, or when you create or update a service. For more information about deferring volume configuration to launch time using the `configuredAtLaunch` parameter, see [Defer volume configuration to launch time in an Amazon ECS task definition](specify-ebs-config.md).

To configure a volume, you can use the Amazon ECS APIs, or you can pass a JSON file as input for the following AWS CLI commands:
+ `[run-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/run-task.html)` to run a standalone ECS task.
+ `[start-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/start-task.html)` to run a standalone ECS task on a specific container instance. This command is not applicable for Fargate tasks.
+ `[create-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html)` to create a new ECS service.
+ `[update-service](https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html)` to update an existing service.
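
If you build the input file programmatically, its shape can be sketched in Python. The file name, cluster, task definition, and volume values below are illustrative placeholders, not values required by Amazon ECS:

```python
import json

# Hypothetical values; replace with your own cluster, task definition,
# infrastructure role ARN, and volume settings.
run_task_input = {
    "cluster": "mycluster",
    "taskDefinition": "mytaskdef",
    "volumeConfigurations": [
        {
            # Must match the volume name declared in the task definition
            # with "configuredAtLaunch": true.
            "name": "datadir",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
            },
        }
    ],
}

# Write the file that you pass to the AWS CLI, for example:
#   aws ecs run-task --cli-input-json file://run-task-ebs.json
with open("run-task-ebs.json", "w") as f:
    json.dump(run_task_input, f, indent=4)
```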

**Note**  
For a container in your task to write to the mounted Amazon EBS volume, the container must have appropriate file system permissions. When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions that allow the specified user to read and write to the volume. If no user is specified, the container runs as root and has full access to the volume.

 You can also configure an Amazon EBS volume by using the AWS Management Console. For more information, see [Running an application as an Amazon ECS task](standalone-task-create.md), [Creating an Amazon ECS rolling update deployment](create-service-console-v2.md), and [Updating an Amazon ECS service](update-service-console-v2.md).

The following JSON snippet shows all the parameters of an Amazon EBS volume that can be configured at deployment. To use these parameters for volume configuration, replace the `user input placeholders` with your own information. For more information about these parameters, see [Volume configurations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service_definition_parameters.html#sd-volumeConfigurations).

```
"volumeConfigurations": [
        {
            "name": "ebs-volume", 
            "managedEBSVolume": {
                "encrypted": true, 
                "kmsKeyId": "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab", 
                "volumeType": "gp3", 
                "sizeInGiB": 10, 
                "snapshotId": "snap-12345", 
                "volumeInitializationRate":100,
                "iops": 3000, 
                "throughput": 125, 
                "tagSpecifications": [
                    {
                        "resourceType": "volume", 
                        "tags": [
                            {
                                "key": "key1", 
                                "value": "value1"
                            }
                        ], 
                        "propagateTags": "NONE"
                    }
                ], 
                "roleArn": "arn:aws:iam::1111222333:role/ecsInfrastructureRole", 
                 "terminationPolicy": {
                    "deleteOnTermination": true//can't be configured for service-managed tasks, always true 
                },
                "filesystemType": "ext4"
            }
        }
    ]
```

**Important**  
Ensure that the volume `name` you specify in the configuration matches the volume `name` in your task definition.

For information about checking the status of volume attachment, see [Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks](troubleshoot-ebs-volumes.md). For information about the Amazon ECS infrastructure AWS Identity and Access Management (IAM) role necessary for EBS volume attachment, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).

The following are JSON snippet examples that show the configuration of Amazon EBS volumes. These examples can be used by saving the snippets in JSON files and passing the files as parameters (using the `--cli-input-json file://filename` parameter) for AWS CLI commands. Replace the `user input placeholders` with your own information.

## Configure a volume for a standalone task


The following JSON snippet shows the syntax for configuring the `volumeType`, `sizeInGiB`, `encrypted`, and `kmsKeyId` settings for an Amazon EBS volume attached to a standalone task. The configuration specified in the JSON file is used to create and attach an EBS volume to the standalone task.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "volumeConfigurations": [
        {
            "name": "datadir",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                "roleArn":"arn:aws:iam::1111222333:role/ecsInfrastructureRole",
                "encrypted": true,
                "kmsKeyId": "arn:aws:kms:region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        }
   ]
}
```

## Configure a volume at service creation


The following snippet shows the syntax for configuring Amazon EBS volumes for attachment to tasks managed by a service. The volumes are sourced from the snapshot specified using the `snapshotId` parameter at a rate of 200 MiB/s. The configuration specified in the JSON file is used to create and attach an EBS volume to each task managed by the service.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "serviceName": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": [
        {
            "name": "myEbsVolume",
            "managedEBSVolume": {
              "roleArn":"arn:aws:iam::1111222333:role/ecsInfrastructureRole",
              "snapshotId": "snap-12345",
              "volumeInitializationRate": 200
            }
        }
   ]
}
```

## Configure a volume at service update


The following JSON snippet shows the syntax for updating a service that previously did not have Amazon EBS volumes configured for attachment to tasks. You must provide the ARN of a task definition revision with `configuredAtLaunch` set to `true`. The snippet configures the `volumeType`, `sizeInGiB`, `iops`, `throughput`, and `filesystemType` settings. This configuration is used to create and attach an EBS volume to each task managed by the service.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "service": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": [
        {
            "name": "myEbsVolume",
            "managedEBSVolume": {
              "roleArn":"arn:aws:iam::1111222333:role/ecsInfrastructureRole",
               "volumeType": "gp3",
                "sizeInGiB": 100,
                 "iops": 3000, 
                "throughput": 125, 
                "filesystemType": "ext4"
            }
        }
   ]
}
```

## Configure a service to no longer use Amazon EBS volumes


The following JSON snippet shows the syntax for updating a service so that it no longer uses Amazon EBS volumes. You must provide the ARN of a task definition with `configuredAtLaunch` set to `false`, or a task definition without the `configuredAtLaunch` parameter. You must also provide an empty `volumeConfigurations` array.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "service": "mysvc",
   "desiredCount": 2,
   "volumeConfigurations": []
}
```

## Termination policy for Amazon EBS volumes


When an Amazon ECS task terminates, Amazon ECS uses the `deleteOnTermination` value to determine whether the Amazon EBS volume that's associated with the terminated task should be deleted. By default, EBS volumes that are attached to tasks are deleted when the task is terminated. For standalone tasks, you can change this setting to instead preserve the volume upon task termination.

**Note**  
Volumes that are attached to tasks that are managed by a service are not preserved and are always deleted upon task termination.
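
For a standalone task, preserving the volume can be sketched as follows. The volume name, size, and role ARN are illustrative placeholders:

```python
import json

# Hypothetical standalone-task volume configuration that preserves the
# EBS volume after the task stops. deleteOnTermination defaults to true;
# setting it to false is only honored for standalone tasks.
volume_config = {
    "name": "datadir",
    "managedEBSVolume": {
        "volumeType": "gp3",
        "sizeInGiB": 100,
        "roleArn": "arn:aws:iam::111122223333:role/ecsInfrastructureRole",
        "terminationPolicy": {
            "deleteOnTermination": False
        },
    },
}

print(json.dumps(volume_config, indent=4))
```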

## Tag Amazon EBS volumes


You can tag Amazon EBS volumes by using the `tagSpecifications` object. Using the object, you can provide your own tags and set propagation of tags from the task definition or the service, depending on whether the volume is attached to a standalone task or a task in a service. The maximum number of tags that can be attached to a volume is 50.

**Important**  
Amazon ECS automatically attaches the `AmazonECSCreated` and `AmazonECSManaged` reserved tags to an Amazon EBS volume. This means you can control the attachment of a maximum of 48 additional tags to a volume. These additional tags can be user-defined, ECS-managed, or propagated tags.

If you want to add Amazon ECS-managed tags to your volume, you must set `enableECSManagedTags` to `true` in your `RunTask`, `StartTask`, `CreateService`, or `UpdateService` call. If you turn on Amazon ECS-managed tags, Amazon ECS automatically tags the volume with cluster and service information (`aws:ecs:clusterName` and `aws:ecs:serviceName`). For more information about tagging Amazon ECS resources, see [Tagging your Amazon ECS resources](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-using-tags.html).

The following JSON snippet shows the syntax for tagging each Amazon EBS volume that is attached to each task in a service with a user-defined tag. To use this example for creating a service, replace the `user input placeholders` with your own information.

```
{
   "cluster": "mycluster",
   "taskDefinition": "mytaskdef",
   "serviceName": "mysvc",
   "desiredCount": 2,
   "enableECSManagedTags": true,
   "volumeConfigurations": [
        {
            "name": "datadir",
            "managedEBSVolume": {
                "volumeType": "gp3",
                "sizeInGiB": 100,
                 "tagSpecifications": [
                    {
                        "resourceType": "volume", 
                        "tags": [
                            {
                                "key": "key1", 
                                "value": "value1"
                            }
                        ], 
                        "propagateTags": "NONE"
                    }
                ],
                "roleArn":"arn:aws:iam:1111222333:role/ecsInfrastructureRole",
                "encrypted": true,
                "kmsKeyId": "arn:aws:kms:region:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"
            }
        }
   ]
}
```

**Important**  
You must specify a `volume` resource type to tag Amazon EBS volumes.
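
The tag limits described above can be captured as a pre-flight check before submitting a configuration. The helper name below is hypothetical, not part of any AWS SDK:

```python
# Hypothetical pre-flight check for a tagSpecifications entry: Amazon ECS
# reserves 2 of the 50 EBS volume tag slots (AmazonECSCreated and
# AmazonECSManaged), leaving at most 48 for user-defined, ECS-managed,
# or propagated tags.
MAX_VOLUME_TAGS = 50
RESERVED_ECS_TAGS = 2

def validate_tag_specification(spec):
    if spec.get("resourceType") != "volume":
        raise ValueError("resourceType must be 'volume' to tag EBS volumes")
    if len(spec.get("tags", [])) > MAX_VOLUME_TAGS - RESERVED_ECS_TAGS:
        raise ValueError("at most 48 additional tags can be attached")
    return spec

spec = {
    "resourceType": "volume",
    "tags": [{"key": "key1", "value": "value1"}],
    "propagateTags": "NONE",
}
validate_tag_specification(spec)  # passes
```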

# Performance of Amazon EBS volumes for Fargate on-demand tasks


The baseline Amazon EBS volume IOPS and throughput available for a Fargate on-demand task depends on the total CPU units you request for the task. If you request 0.25, 0.5, or 1 virtual CPU unit (vCPU) for your Fargate task, we recommend that you configure a General Purpose SSD volume (`gp2` or `gp3`) or a Hard Disk Drive (HDD) volume (`st1` or `sc1`). If you request more than 1 vCPU for your Fargate task, the following baseline performance limits apply to an Amazon EBS volume attached to the task. You may temporarily get higher EBS performance than the following limits. However, we recommend that you plan your workload based on these limits.


| CPU units requested (in vCPUs) | Baseline Amazon EBS IOPS (16 KiB I/O) | Baseline Amazon EBS throughput (in MiBps, 128 KiB I/O) | Baseline bandwidth (in Mbps) | 
| --- | --- | --- | --- | 
| 2 | 3,000 | 75 | 360 | 
| 4 | 5,000 | 120 | 1,150 | 
| 8 | 10,000 | 250 | 2,300 | 
| 16 | 15,000 | 500 | 4,500 | 

**Note**  
When you configure an Amazon EBS volume for attachment to a Fargate task, the Amazon EBS performance limit for the Fargate task is shared between the task's ephemeral storage and the attached volume.
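
For capacity planning, the table above can be expressed as a lookup. This is a minimal sketch; the function name and key layout are illustrative:

```python
# Baseline EBS performance limits for Fargate on-demand tasks with more
# than 1 vCPU, keyed by requested vCPUs, taken from the table above:
# IOPS at 16 KiB I/O, throughput in MiBps at 128 KiB I/O, bandwidth in Mbps.
FARGATE_EBS_BASELINE = {
    2: {"iops": 3000, "throughput_mibps": 75, "bandwidth_mbps": 360},
    4: {"iops": 5000, "throughput_mibps": 120, "bandwidth_mbps": 1150},
    8: {"iops": 10000, "throughput_mibps": 250, "bandwidth_mbps": 2300},
    16: {"iops": 15000, "throughput_mibps": 500, "bandwidth_mbps": 4500},
}

def baseline_limits(vcpus):
    """Return the baseline limits row for a supported vCPU count."""
    try:
        return FARGATE_EBS_BASELINE[vcpus]
    except KeyError:
        raise ValueError(f"no baseline row for {vcpus} vCPUs") from None

print(baseline_limits(4)["iops"])  # 5000
```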

# Performance of Amazon EBS volumes for EC2 tasks


Amazon EBS provides volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. For information about performance, including IOPS per volume and throughput per volume, see [Amazon EBS volume types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon Elastic Block Store User Guide*.

# Performance of Amazon EBS volumes for Amazon ECS Managed Instances tasks


Amazon EBS provides volume types, which differ in performance characteristics and price, so that you can tailor your storage performance and cost to the needs of your applications. For information about performance, including IOPS per volume and throughput per volume, see [Amazon EBS volume types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volume-types.html) in the *Amazon Elastic Block Store User Guide*.

# Troubleshooting Amazon EBS volume attachments to Amazon ECS tasks
Troubleshooting Amazon EBS volume attachment

You might need to troubleshoot or verify the attachment of Amazon EBS volumes to Amazon ECS tasks.

## Check volume attachment status


You can use the AWS Management Console to view the status of an Amazon EBS volume's attachment to an Amazon ECS task. If the task starts and the attachment fails, you'll also see a status reason that you can use to troubleshoot. The created volume will be deleted and the task will be stopped. For more information about status reasons, see [Status reasons for Amazon EBS volume attachment to Amazon ECS tasks](troubleshoot-ebs-volumes-scenarios.md).

**To view a volume's attachment status and status reason using the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, choose the cluster that your task is running in. The cluster's details page appears.

1. On the cluster's details page, choose the **Tasks** tab.

1. Choose the task that you want to view the volume attachment status for. You might need to use **Filter desired status** and choose **Stopped** if the task you want to examine has stopped.

1. On the task's details page, choose the **Volumes** tab. You will be able to see the attachment status of the Amazon EBS volume under **Attachment status**. If the volume fails to attach to the task, you can choose the status under **Attachment status** to display the cause of the failure.

You can also view a task's volume attachment status and associated status reason by using the [DescribeTasks](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DescribeTasks.html) API.
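
Programmatically, the same check amounts to scanning a task's attachments in the `DescribeTasks` response. The response fragment below is a hand-written sample, and the attachment `type` string and field names are assumptions for illustration, not an authoritative schema:

```python
# Hypothetical DescribeTasks-style response fragment; the attachment
# "type" value and detail names here are illustrative assumptions.
sample_task = {
    "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/mycluster/abc123",
    "attachments": [
        {
            "type": "AmazonElasticBlockStorage",
            "status": "ATTACHED",
            "details": [{"name": "volumeId", "value": "vol-0abc"}],
        }
    ],
}

def ebs_attachment_status(task):
    """Return (status, details) for the task's EBS attachment, if any."""
    for attachment in task.get("attachments", []):
        if attachment.get("type") == "AmazonElasticBlockStorage":
            return attachment.get("status"), attachment.get("details", [])
    return None, []

status, details = ebs_attachment_status(sample_task)
print(status)  # ATTACHED
```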

## Service and task failures


You might encounter service or task failures that aren't specific to Amazon EBS volumes but can still affect volume attachment. For more information, see:
+ [Service event messages](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-event-messages.html)
+ [Stopped task error codes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/stopped-task-error-codes.html)
+ [API failure reasons](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/api_failures_messages.html)

## Container can't write to Amazon EBS volume


Non-root user without proper permissions  
When you specify a non-root user in your container definition, Amazon ECS automatically configures the volume with group-based permissions to allow write access. However, if you're still experiencing permission issues:  
+ Verify that the `user` parameter is correctly specified in your container definition using the format `uid:gid` (for example, `1001:1001`).
+ Ensure your container image doesn't override the user permissions after the volume is mounted.
+ Check that your application is running with the expected user ID by examining the container logs or using Amazon ECS Exec to inspect the running container.

Root user with permission issues  
If no user is specified in your container definition, the container runs as root and should have full access to the volume. If you're experiencing issues:  
+ Verify that the volume is properly mounted by checking the mount points inside the container.
+ Ensure the volume isn't configured as read-only in your mount point configuration.

Multi-container tasks with different users  
In tasks with multiple containers running as different users, Amazon ECS automatically manages group permissions to allow all specified users to write to the volume. If containers can't write:  
+ Verify that all containers requiring write access have the `user` parameter properly configured.
+ Check that the volume is mounted in all containers that need access to it.

For more information about configuring users in container definitions, see [Amazon ECS task definition parameters for Fargate](task_definition_parameters.md).

# Status reasons for Amazon EBS volume attachment to Amazon ECS tasks
Status reasons for Amazon EBS volume attachment

Use the following reference to fix issues that you might encounter in the form of status reasons in the AWS Management Console when you configure Amazon EBS volumes for attachment to Amazon ECS tasks. For more information on locating these status reasons in the console, see [Check volume attachment status](troubleshoot-ebs-volumes.md#troubleshoot-ebs-volumes-location).

ECS was unable to assume the configured ECS Infrastructure Role 'arn:aws:iam::*111122223333*:role/*ecsInfrastructureRole*'. Please verify that the role being passed has the proper trust relationship with Amazon ECS  
This status reason appears in the following scenarios.  
+  You provide an IAM role without the necessary trust policy attached. Amazon ECS can't access the Amazon ECS infrastructure IAM role that you provide if the role doesn't have the necessary trust policy. The task can get stuck in the `DEPROVISIONING` state. For more information about the necessary trust policy, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ Your IAM user doesn't have permission to pass the Amazon ECS infrastructure role to Amazon ECS. The task can get stuck in the `DEPROVISIONING` state. To avoid this problem, you can attach the `PassRole` permission to your user. For more information, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
+ Your IAM role doesn't have the necessary permissions for Amazon EBS volume attachment. The task can get stuck in the `DEPROVISIONING` state. For more information about the specific permissions necessary for attaching Amazon EBS volumes to tasks, see [Amazon ECS infrastructure IAM role](infrastructure_IAM_role.md).
You may also see this error message due to a delay in role propagation. If retrying after waiting a few minutes doesn't fix the issue, the trust policy for the role might be misconfigured.

ECS failed to set up the EBS volume. Encountered IdempotentParameterMismatch: "The client token you have provided is associated with a resource that is already deleted. Please use a different client token."  
The following AWS KMS key scenarios can lead to an `IdempotentParameterMismatch` message appearing:  
+ You specify a KMS key ARN, ID, or alias that isn't valid. In this scenario, the task might appear to launch successfully, but the task eventually fails because AWS authenticates the KMS key asynchronously. For more information, see [Amazon EBS encryption](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption.html) in the *Amazon EC2 User Guide*.
+ You provide a customer managed key that lacks the permissions that allow the Amazon ECS infrastructure IAM role to use the key for encryption. To avoid key-policy permission issues, see the example AWS KMS key policy in [Data encryption for Amazon EBS volumes](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ebs-volumes.html#ebs-kms-encryption).
You can set up Amazon EventBridge to send Amazon EBS volume events and Amazon ECS task state change events to a target, such as an Amazon CloudWatch log group. You can then use these events to identify the specific customer managed key issue that affected volume attachment. For more information, see:  
+ [How can I create a CloudWatch log group to use as a target for an EventBridge rule?](https://repost.aws/knowledge-center/cloudwatch-log-group-eventbridge) on AWS re:Post.
+ [Task state change events](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_cwe_events.html#ecs_task_events).
+ [Amazon EventBridge events for Amazon EBS](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-cloud-watch-events.html) in the *Amazon EBS User Guide*.

ECS timed out while configuring the EBS volume attachment to your Task.  
The following file system format scenarios result in this message.  
+ The file system format that you specify during configuration isn't compatible with the [task's operating system](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RuntimePlatform.html).
+ You configure an Amazon EBS volume to be created from a snapshot, and the snapshot's file system format isn't compatible with the task's operating system. For volumes created from a snapshot, you must specify the same filesystem type that the volume was using when the snapshot was created.
You can use the Amazon ECS container agent logs to troubleshoot this message for EC2 tasks. For more information, see [Amazon ECS log file locations](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/logs.html) and [Amazon ECS log collector](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-logs-collector.html).

# Use Amazon EFS volumes with Amazon ECS
Amazon EFS

Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with your Amazon ECS tasks. With Amazon EFS, storage capacity is elastic: it grows and shrinks automatically as you add and remove files, so your applications have the storage they need, when they need it.

You can use Amazon EFS file systems with Amazon ECS to export file system data across your fleet of container instances. That way, your tasks have access to the same persistent storage, no matter the instance on which they land. Your task definitions must reference volume mounts on the container instance to use the file system.

For a tutorial, see [Configuring Amazon EFS file systems for Amazon ECS using the console](tutorial-efs-volumes.md).

## Considerations


 Consider the following when using Amazon EFS volumes:
+ For tasks that run on EC2, Amazon EFS file system support was added as a public preview with Amazon ECS-optimized AMI version `20191212` with container agent version 1.35.0. However, Amazon EFS file system support entered general availability with Amazon ECS-optimized AMI version `20200319` with container agent version 1.38.0, which contained the Amazon EFS access point and IAM authorization features. We recommend that you use Amazon ECS-optimized AMI version `20200319` or later to use these features. For more information, see [Amazon ECS-optimized Linux AMIs](ecs-optimized_AMI.md).
**Note**  
If you create your own AMI, you must use container agent 1.38.0 or later, `ecs-init` version 1.38.0-1 or later, and run the following commands on your Amazon EC2 instance to enable the Amazon ECS volume plugin. The commands are dependent on whether you're using Amazon Linux 2 or Amazon Linux as your base image.  
Amazon Linux 2  

  ```
  yum install amazon-efs-utils
  systemctl enable --now amazon-ecs-volume-plugin
  ```
Amazon Linux  

  ```
  yum install amazon-efs-utils
  sudo shutdown -r now
  ```
+ For tasks that are hosted on Fargate, Amazon EFS file systems are supported on platform version 1.4.0 or later (Linux). For more information, see [Fargate platform versions for Amazon ECS](platform-fargate.md).
+ When you use Amazon EFS volumes for tasks that are hosted on Fargate, Fargate creates a supervisor container that's responsible for managing the Amazon EFS volume. The supervisor container uses a small amount of the task's memory and CPU. The supervisor container is visible when querying the task metadata version 4 endpoint, and in CloudWatch Container Insights as the container name `aws-fargate-supervisor`. For more information about tasks on EC2, see [Amazon ECS task metadata endpoint version 4](task-metadata-endpoint-v4.md). For more information about tasks on Fargate, see [Amazon ECS task metadata endpoint version 4 for tasks on Fargate](task-metadata-endpoint-v4-fargate.md).
+ Using Amazon EFS volumes or specifying an `EFSVolumeConfiguration` isn't supported on external instances.
+ Using Amazon EFS volumes is supported for tasks that run on Amazon ECS Managed Instances.
+ We recommend that you set the `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION` parameter in the agent configuration file to a value that is less than the default (about 1 hour). This change helps prevent EFS mount credential expiration and allows for cleanup of mounts that are not in use.  For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).

## Use Amazon EFS access points


Amazon EFS access points are application-specific entry points into an EFS file system for managing application access to shared datasets. For more information about Amazon EFS access points and how to control access to them, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.

Access points can enforce a user identity, including the user's POSIX groups, for all file system requests that are made through the access point. Access points can also enforce a different root directory for the file system. This is so that clients can only access data in the specified directory or its subdirectories.

**Note**  
When creating an EFS access point, specify a path on the file system to serve as the root directory. When referencing the EFS file system with an access point ID in your Amazon ECS task definition, the root directory must either be omitted or set to `/`, which enforces the path set on the EFS access point.

You can use an Amazon ECS task IAM role to enforce that specific applications use a specific access point. By combining IAM policies with access points, you can provide secure access to specific datasets for your applications. For more information about how to use task IAM roles, see [Amazon ECS task IAM role](task-iam-roles.md).

# Best practices for using Amazon EFS volumes with Amazon ECS
Best practices for using Amazon EFS

Make note of the following best practice recommendations when you use Amazon EFS with Amazon ECS.

## Security and access controls for Amazon EFS volumes


Amazon EFS offers access control features that you can use to ensure that the data stored in an Amazon EFS file system is secure and accessible only from applications that need it. You can secure data by enabling encryption at rest and in-transit. For more information, see [Data encryption in Amazon EFS](https://docs.aws.amazon.com/efs/latest/ug/encryption.html) in the *Amazon Elastic File System User Guide*.

In addition to data encryption, you can also use Amazon EFS to restrict access to a file system. There are three ways to implement access control in EFS.
+ **Security groups**—With Amazon EFS mount targets, you can configure a security group that's used to permit and deny network traffic. You can configure the security group attached to Amazon EFS to permit NFS traffic (port 2049) from the security group that's attached to your Amazon ECS instances or, when using the `awsvpc` network mode, the Amazon ECS task.
+ **IAM**—You can restrict access to an Amazon EFS file system using IAM. When configured, Amazon ECS tasks require an IAM role for file system access to mount an EFS file system. For more information, see [Using IAM to control file system data access](https://docs.aws.amazon.com/efs/latest/ug/iam-access-control-nfs-efs.html) in the *Amazon Elastic File System User Guide*.

  IAM policies can also enforce predefined conditions such as requiring a client to use TLS when connecting to an Amazon EFS file system. For more information, see [Amazon EFS condition keys for clients](https://docs.aws.amazon.com/efs/latest/ug/iam-access-control-nfs-efs.html#efs-condition-keys-for-nfs) in the *Amazon Elastic File System User Guide*.
+ **Amazon EFS access points**—Amazon EFS access points are application-specific entry points into an Amazon EFS file system. You can use access points to enforce a user identity, including the user's POSIX groups, for all file system requests that are made through the access point. Access points can also enforce a different root directory for the file system. This is so that clients can only access data in the specified directory or its sub-directories.

### IAM policies


You can use IAM policies to control the access to the Amazon EFS file system.

You can specify the following actions for clients accessing a file system using a file system policy.


| Action | Description | 
| --- | --- | 
|  `elasticfilesystem:ClientMount`  |  Provides read-only access to a file system.  | 
|  `elasticfilesystem:ClientWrite`  |  Provides write permissions on a file system.  | 
|  `elasticfilesystem:ClientRootAccess`  |  Provides use of the root user when accessing a file system.  | 

You need to specify each action in a policy. The policies can be defined in the following ways:
+ Client-based - Attach the policy to the task role

  Set the **IAM authorization** option when you create the task definition. 
+ Resource-based - Attach the policy to the Amazon EFS file system

  If no resource-based policy exists, access is granted to all principals (`*`) by default when the file system is created. 

When you set the **IAM authorization** option, we merge the policy associated with the task role and the Amazon EFS resource-based policy. The **IAM authorization** option passes the task identity (the task role) with the policy to Amazon EFS. This allows the Amazon EFS resource-based policy to have context for the IAM user or role specified in the policy. If you don't set the option, the Amazon EFS resource-based policy identifies the IAM user as "anonymous".
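
A file system policy combining the actions in the table above might look like the following. This is a hedged sketch built as a Python dict so that it can be serialized with `json.dumps`; the role ARN, file system ARN, and account ID are placeholders:

```python
import json

# Hypothetical EFS file system (resource-based) policy: the task role gets
# mount and write access, but not root access, and only over TLS.
# All ARNs and IDs are placeholders.
file_system_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/ecsTaskRole"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-1234",
            "Condition": {
                # Require encrypted (TLS) connections to the file system.
                "Bool": {"aws:SecureTransport": "true"}
            }
        }
    ]
}

print(json.dumps(file_system_policy, indent=2))
```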

Consider implementing all three access controls on an Amazon EFS file system for maximum security. For example, you can configure the security group attached to an Amazon EFS mount target to only permit ingress NFS traffic from a security group that's associated with your container instance or Amazon ECS task. Additionally, you can configure Amazon EFS to require an IAM role to access the file system, even if the connection originates from a permitted security group. Lastly, you can use Amazon EFS access points to enforce POSIX user permissions and specify root directories for applications.

The following task definition snippet shows how to mount an Amazon EFS file system using an access point.

```
"volumes": [
    {
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-1234",
        "authorizationConfig": {
          "accessPointId": "fsap-1234",
          "iam": "ENABLED"
        },
        "transitEncryption": "ENABLED",
        "rootDirectory": ""
      },
      "name": "my-filesystem"
    }
]
```
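
For a container in the task to use this volume, a container definition mounts it by name. The fragment below is an illustrative sketch (the container name and `containerPath` are placeholders):

```python
import json

# Hypothetical container definition fragment that mounts the volume
# declared in the task definition by its name.
container_definition = {
    "name": "app",
    "mountPoints": [
        {
            "sourceVolume": "my-filesystem",  # must match the volume "name"
            "containerPath": "/mount/efs",
            "readOnly": False,
        }
    ],
}

print(json.dumps(container_definition, indent=2))
```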

## Amazon EFS volume performance


Amazon EFS offers two performance modes: General Purpose and Max I/O. General Purpose is suitable for latency-sensitive applications such as content management systems and CI/CD tools. In contrast, Max I/O file systems are suitable for workloads such as data analytics, media processing, and machine learning. These workloads need to perform parallel operations from hundreds or even thousands of containers and require the highest possible aggregate throughput and IOPS. For more information, see [Amazon EFS performance modes](https://docs.aws.amazon.com/efs/latest/ug/performance.html#performancemodes) in the *Amazon Elastic File System User Guide*.

Some latency-sensitive workloads require both the higher I/O levels provided by Max I/O performance mode and the lower latency provided by General Purpose performance mode. For this type of workload, we recommend creating multiple General Purpose performance mode file systems. That way, you can spread your application workload across all these file systems, as long as the workload and applications can support it.

## Amazon EFS volume throughput


All Amazon EFS file systems have an associated metered throughput that's determined by either the amount of provisioned throughput for file systems using *Provisioned Throughput* or the amount of data stored in the EFS Standard or One Zone storage class for file systems using *Bursting Throughput*. For more information, see [Understanding metered throughput](https://docs.aws.amazon.com/efs/latest/ug/performance.html#read-write-throughput) in the *Amazon Elastic File System User Guide*.

The default throughput mode for Amazon EFS file systems is bursting mode. With bursting mode, the throughput that's available to a file system scales in or out as a file system grows. Because file-based workloads typically spike, requiring high levels of throughput for periods of time and lower levels of throughput the rest of the time, Amazon EFS is designed to burst to allow high throughput levels for periods of time. Additionally, because many workloads are read-heavy, read operations are metered at a 1:3 ratio to other NFS operations (like write). 

All Amazon EFS file systems deliver a consistent baseline performance of 50 MB/s for each TB of Amazon EFS Standard or Amazon EFS One Zone storage. All file systems (regardless of size) can burst to 100 MB/s. File systems with more than 1 TB of EFS Standard or EFS One Zone storage can burst to 100 MB/s for each TB. Because read operations are metered at a 1:3 ratio, you can drive up to 300 MB/s for each TB of read throughput. As you add data to your file system, the maximum throughput that's available to the file system scales linearly and automatically with your storage in the Amazon EFS Standard storage class. If you need more throughput than you can achieve with your amount of data stored, you can configure Provisioned Throughput to the specific amount your workload requires.

File system throughput is shared across all Amazon EC2 instances connected to a file system. For example, a 1 TB file system that can burst to 100 MB/s of throughput can drive 100 MB/s from a single Amazon EC2 instance, or 10 Amazon EC2 instances can each drive 10 MB/s. For more information, see [Amazon EFS performance](https://docs.aws.amazon.com/efs/latest/ug/performance.html) in the *Amazon Elastic File System User Guide*.
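
As a rough illustration of these metering rules, the following sketch computes the figures from this section. It assumes only the documented rates (50 MB/s baseline per TB, the 100 MB/s burst floor, and the 1:3 read metering ratio) and is not an official formula.

```
# Rough model of the Bursting Throughput numbers described above (not an official formula).
def baseline_throughput_mbps(storage_tb: float) -> float:
    """Baseline metered throughput: 50 MB/s for each TB of Standard or One Zone storage."""
    return 50.0 * storage_tb

def burst_throughput_mbps(storage_tb: float) -> float:
    """Every file system can burst to 100 MB/s; above 1 TB, burst scales at 100 MB/s per TB."""
    return max(100.0, 100.0 * storage_tb)

def read_throughput_mbps(metered_mbps: float) -> float:
    """Reads are metered at a 1:3 ratio, so achievable read throughput is 3x the metered rate."""
    return 3.0 * metered_mbps

# A 2 TB file system: 100 MB/s baseline, 200 MB/s burst, and up to 600 MB/s of burst reads.
print(baseline_throughput_mbps(2))                     # 100.0
print(burst_throughput_mbps(2))                        # 200.0
print(read_throughput_mbps(burst_throughput_mbps(2)))  # 600.0
```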

## Optimizing cost for Amazon EFS volumes


Amazon EFS simplifies scaling storage for you. Amazon EFS file systems grow automatically as you add more data. Especially with Amazon EFS *Bursting Throughput* mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows. To improve the throughput without paying an additional cost for provisioned throughput on an EFS file system, you can share an Amazon EFS file system with multiple applications. Using Amazon EFS access points, you can implement storage isolation in shared Amazon EFS file systems. By doing so, even though the applications still share the same file system, they can't access data unless you authorize it.

As your data grows, Amazon EFS helps you automatically move infrequently accessed files to a lower storage class. The Amazon EFS Standard-Infrequent Access (IA) storage class reduces storage costs for files that aren't accessed every day. It does this without sacrificing the high availability, high durability, elasticity, and the POSIX file system access that Amazon EFS provides. For more information, see [EFS storage classes](https://docs.aws.amazon.com/efs/latest/ug/features.html) in the *Amazon Elastic File System User Guide*.

Consider using Amazon EFS lifecycle policies to automatically save money by moving infrequently accessed files to Amazon EFS IA storage. For more information, see [Amazon EFS lifecycle management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html) in the *Amazon Elastic File System User Guide*.

When creating an Amazon EFS file system, you can choose if Amazon EFS replicates your data across multiple Availability Zones (Standard) or stores your data redundantly within a single Availability Zone. The Amazon EFS One Zone storage class can reduce storage costs by a significant margin compared to Amazon EFS Standard storage classes. Consider using Amazon EFS One Zone storage class for workloads that don't require multi-AZ resilience. You can further reduce the cost of Amazon EFS One Zone storage by moving infrequently accessed files to Amazon EFS One Zone-Infrequent Access. For more information, see [Amazon EFS Infrequent Access](https://aws.amazon.com/efs/features/infrequent-access/).

## Amazon EFS volume data protection


Amazon EFS stores your data redundantly across multiple Availability Zones for file systems using Standard storage classes. If you select Amazon EFS One Zone storage classes, your data is redundantly stored within a single Availability Zone. Additionally, Amazon EFS is designed to provide 99.999999999% (11 9’s) of durability over a given year.

As with any environment, it's a best practice to have a backup and to build safeguards against accidental deletion. For Amazon EFS data, that best practice includes a functioning, regularly tested backup using AWS Backup. File systems using Amazon EFS One Zone storage classes are configured to automatically back up files by default at file system creation unless you choose to disable this functionality. For more information, see [Backing up EFS file systems](https://docs.aws.amazon.com/efs/latest/ug/awsbackup.html) in the *Amazon Elastic File System User Guide*.

# Specify an Amazon EFS file system in an Amazon ECS task definition
Specify an Amazon EFS file system in a task definition

To use Amazon EFS file system volumes for your containers, you must specify the volume and mount point configurations in your task definition. The following task definition JSON snippet shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "name": "container-using-efs",
            "image": "public.ecr.aws/amazonlinux/amazonlinux:latest",
            "entryPoint": [
                "sh",
                "-c"
            ],
            "command": [
                "ls -la /mount/efs"
            ],
            "mountPoints": [
                {
                    "sourceVolume": "myEfsVolume",
                    "containerPath": "/mount/efs",
                    "readOnly": true
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "myEfsVolume",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-1234",
                "rootDirectory": "/path/to/my/data",
                "transitEncryption": "ENABLED",
                "transitEncryptionPort": integer,
                "authorizationConfig": {
                    "accessPointId": "fsap-1234",
                    "iam": "ENABLED"
                }
            }
        }
    ]
}
```

`efsVolumeConfiguration`  
Type: Object  
Required: No  
This parameter is specified when using Amazon EFS volumes.    
`fileSystemId`  
Type: String  
Required: Yes  
The Amazon EFS file system ID to use.  
`rootDirectory`  
Type: String  
Required: No  
The directory within the Amazon EFS file system to mount as the root directory inside the host. If this parameter is omitted, the root of the Amazon EFS volume is used. Specifying `/` has the same effect as omitting this parameter.  
If an EFS access point is specified in the `authorizationConfig`, the root directory parameter must either be omitted or set to `/`, which enforces the path set on the EFS access point.  
`transitEncryption`  
Type: String  
Valid values: `ENABLED` \| `DISABLED`  
Required: No  
Specifies whether to enable encryption for Amazon EFS data in transit between the Amazon ECS host and the Amazon EFS server. If Amazon EFS IAM authorization is used, transit encryption must be enabled. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [Encrypting Data in Transit](https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html) in the *Amazon Elastic File System User Guide*.  
`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use when sending encrypted data between the Amazon ECS host and the Amazon EFS server. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses. For more information, see [EFS Mount Helper](https://docs.aws.amazon.com/efs/latest/ug/efs-mount-helper.html) in the *Amazon Elastic File System User Guide*.  
`authorizationConfig`  
Type: Object  
Required: No  
The authorization configuration details for the Amazon EFS file system.    
`accessPointId`  
Type: String  
Required: No  
The access point ID to use. If an access point is specified, the root directory value in the `efsVolumeConfiguration` must either be omitted or set to `/`, which enforces the path set on the EFS access point. If an access point is used, transit encryption must be enabled in the `EFSVolumeConfiguration`. For more information, see [Working with Amazon EFS Access Points](https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html) in the *Amazon Elastic File System User Guide*.  
`iam`  
Type: String  
Valid values: `ENABLED` \| `DISABLED`  
Required: No  
 Specifies whether to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the `EFSVolumeConfiguration`. If this parameter is omitted, the default value of `DISABLED` is used. For more information, see [IAM Roles for Tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html).

# Configuring Amazon EFS file systems for Amazon ECS using the console
Configuring Amazon EFS file systems

Learn how to use Amazon Elastic File System (Amazon EFS) file systems with Amazon ECS.

## Step 1: Create an Amazon ECS cluster


Use the following steps to create an Amazon ECS cluster. 

**To create a new cluster (Amazon ECS console)**

Before you begin, assign the appropriate IAM permission. For more information, see [Amazon ECS cluster examples](security_iam_id-based-policy-examples.md#IAM_cluster_policies).

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, select the Region to use.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **Create cluster**.

1. Under **Cluster configuration**, for **Cluster name**, enter `EFS-tutorial`.

1. (Optional) To change the VPC and subnets where your tasks and services launch, under **Networking**, perform any of the following operations:
   + To remove a subnet, under **Subnets**, choose **X** for each subnet that you want to remove.
   + To change to a VPC other than the **default** VPC, under **VPC**, choose an existing **VPC**, and then under **Subnets**, select each subnet.

1. To add Amazon EC2 instances to your cluster, expand **Infrastructure**, and then select **Amazon EC2 instances**. Next, configure the Auto Scaling group, which acts as the capacity provider:

   1. To create an Auto Scaling group, from **Auto Scaling group (ASG)**, select **Create new group**, and then provide the following details about the group:
      + For **Operating system/Architecture**, choose Amazon Linux 2.
      + For **EC2 instance type**, choose `t2.micro`.
      + For **SSH key pair**, choose the pair that proves your identity when you connect to the instance.
      + For **Capacity**, enter `1`.

1. Choose **Create**.

## Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system


In this step, you create a security group for your Amazon EC2 instances that allows inbound network traffic on port 80 and your Amazon EFS file system that allows inbound access from your container instances. 

Create a security group for your Amazon EC2 instances with the following options:
+ **Security group name** - a unique name for your security group.
+ **VPC** - the VPC that you identified earlier for your cluster.
+ **Inbound rule**
  + **Type** - **HTTP**
  + **Source** - **0.0.0.0/0**.

Create a security group for your Amazon EFS file system with the following options:
+ **Security group name** - a unique name for your security group. For example, `EFS-access-for-sg-dc025fa2`.
+ **VPC** - the VPC that you identified earlier for your cluster.
+ **Inbound rule**
  + **Type** - **NFS**
  + **Source** - **Custom** with the ID of the security group you created for your instances.

For information about how to create a security group, see [Create a security group for your Amazon EC2 instance](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-security-group.html) in the *Amazon EC2 User Guide*.
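
If you prefer the AWS CLI, the same two security groups can be created along the following lines. This is a sketch; the VPC ID, group names, and group IDs are placeholders, and NFS uses TCP port 2049.

```
# Security group for the container instances: allow inbound HTTP from anywhere.
aws ec2 create-security-group --group-name ecs-instance-sg \
    --description "HTTP access for container instances" --vpc-id vpc-1234abcd
aws ec2 authorize-security-group-ingress --group-id sg-instances1234 \
    --protocol tcp --port 80 --cidr 0.0.0.0/0

# Security group for the file system: allow inbound NFS (TCP 2049)
# only from the instance security group.
aws ec2 create-security-group --group-name EFS-access-for-sg-dc025fa2 \
    --description "NFS access from container instances" --vpc-id vpc-1234abcd
aws ec2 authorize-security-group-ingress --group-id sg-efs5678 \
    --protocol tcp --port 2049 --source-group sg-instances1234
```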

## Step 3: Create an Amazon EFS file system


In this step, you create an Amazon EFS file system.

**To create an Amazon EFS file system for Amazon ECS tasks.**

1. Open the Amazon Elastic File System console at [https://console.aws.amazon.com/efs/](https://console.aws.amazon.com/efs/).

1. Choose **Create file system**.

1. Enter a name for your file system and then choose the VPC that your container instances are hosted in. By default, each subnet in the specified VPC receives a mount target that uses the default security group for that VPC. Then, choose **Customize**.
**Note**  
This tutorial assumes that your Amazon EFS file system, Amazon ECS cluster, container instances, and tasks are in the same VPC. For more information about mounting a file system from a different VPC, see [Walkthrough: Mount a file system from a different VPC](https://docs.aws.amazon.com/efs/latest/ug/efs-different-vpc.html) in the *Amazon EFS User Guide*.

1. On the **File system settings** page, configure optional settings and then under **Performance settings**, choose the **Bursting** throughput mode for your file system. After you have configured settings, select **Next**.

   1. (Optional) Add tags for your file system. For example, you could specify a unique name for the file system by entering that name in the **Value** column next to the **Name** key.

   1. (Optional) Enable lifecycle management to save money on infrequently accessed storage. For more information, see [EFS Lifecycle Management](https://docs.aws.amazon.com/efs/latest/ug/lifecycle-management-efs.html) in the *Amazon Elastic File System User Guide*.

   1. (Optional) Enable encryption. Select the check box to enable encryption of your Amazon EFS file system at rest.

1. On the **Network access** page, under **Mount targets**, replace the existing security group configuration for every availability zone with the security group you created for the file system in [Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system](#efs-security-group) and then choose **Next**.

1.  You do not need to configure **File system policy** for this tutorial, so you can skip the section by choosing **Next**.

1. Review your file system options and choose **Create** to complete the process.

1. From the **File systems** screen, record the **File system ID**. In the next step, you will reference this value in your Amazon ECS task definition.

## Step 4: Add content to the Amazon EFS file system


In this step, you mount the Amazon EFS file system on an Amazon EC2 instance and add content to it. This is for testing purposes in this tutorial, to illustrate the persistent nature of the data. In normal use, your application or another process writes the data to your Amazon EFS file system.
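
In this tutorial, the console's user data script mounts the file system for you. For reference, a manual mount with the Amazon EFS mount helper looks roughly like the following; the file system ID and mount path are placeholders.

```
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs/fs1
sudo mount -t efs -o tls fs-1234:/ /mnt/efs/fs1
```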

**To create an Amazon EC2 instance and mount the Amazon EFS file system**

1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

1. Choose **Launch Instance**.

1. Under **Application and OS Images (Amazon Machine Image)**, select the **Amazon Linux 2 AMI (HVM)**.

1. Under **Instance type**, keep the default instance type, `t2.micro`.

1.  Under **Key pair (login)**, select a key pair for SSH access to the instance.

1. Under **Network settings**, select the VPC that you specified for your Amazon EFS file system and Amazon ECS cluster. Select a subnet and the instance security group that you created in [Step 2: Create a security group for Amazon EC2 instances and the Amazon EFS file system](#efs-security-group). Ensure that **Auto-assign public IP** is enabled.

1. Under **Configure storage**, choose the **Edit** button for file systems and then choose **EFS**. Select the file system you created in [Step 3: Create an Amazon EFS file system](#efs-create-filesystem). You can optionally change the mount point or leave the default value.
**Important**  
You must select a subnet before you can add a file system to the instance.

1. Clear the **Automatically create and attach security groups** check box. Leave the other check box selected. Choose **Add shared file system**.

1. Under **Advanced Details**, ensure that the user data script is populated automatically with the Amazon EFS file system mounting steps.

1.  Under **Summary**, ensure the **Number of instances** is **1**. Choose **Launch instance**.

1. On the **Launch an instance** page, choose **View all instances** to see the status of your instances. Initially, the **Instance state** status is `PENDING`. After the state changes to `RUNNING` and the instance passes all status checks, the instance is ready for use.

Now, you connect to the Amazon EC2 instance and add content to the Amazon EFS file system.

**To connect to the Amazon EC2 instance and add content to the Amazon EFS file system**

1. SSH to the Amazon EC2 instance you created. For more information, see [Connect to your Linux instance using SSH](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connect-to-linux-instance.html) in the *Amazon EC2 User Guide*.

1. From the terminal window, run the **df -T** command to verify that the Amazon EFS file system is mounted. In the following output, the Amazon EFS file system is the `nfs4` entry mounted at `/mnt/efs/fs1`.

   ```
   $ df -T
   Filesystem     Type            1K-blocks    Used        Available Use% Mounted on
   devtmpfs       devtmpfs           485468       0           485468   0% /dev
   tmpfs          tmpfs              503480       0           503480   0% /dev/shm
   tmpfs          tmpfs              503480     424           503056   1% /run
   tmpfs          tmpfs              503480       0           503480   0% /sys/fs/cgroup
   /dev/xvda1     xfs               8376300 1310952          7065348  16% /
   127.0.0.1:/    nfs4     9007199254739968       0 9007199254739968   0% /mnt/efs/fs1
   tmpfs          tmpfs              100700       0           100700   0% /run/user/1000
   ```

1. Navigate to the directory that the Amazon EFS file system is mounted at. In the example above, that is `/mnt/efs/fs1`.

1. Create a file named `index.html` with the following content:

   ```
   <html>
       <body>
           <h1>It Works!</h1>
           <p>You are using an Amazon EFS file system for persistent container storage.</p>
       </body>
   </html>
   ```

## Step 5: Create a task definition


The following task definition creates a data volume named `efs-html`. The `nginx` container mounts the host data volume at the NGINX root, `/usr/share/nginx/html`.

**To create a new task definition using the Amazon ECS console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, copy and paste the following JSON text, replacing the `fileSystemId` with the ID of your Amazon EFS file system.

   ```
   {
       "containerDefinitions": [
           {
               "memory": 128,
               "portMappings": [
                   {
                       "hostPort": 80,
                       "containerPort": 80,
                       "protocol": "tcp"
                   }
               ],
               "essential": true,
               "mountPoints": [
                   {
                       "containerPath": "/usr/share/nginx/html",
                       "sourceVolume": "efs-html"
                   }
               ],
               "name": "nginx",
               "image": "public.ecr.aws/docker/library/nginx:latest"
           }
       ],
       "volumes": [
           {
               "name": "efs-html",
               "efsVolumeConfiguration": {
                   "fileSystemId": "fs-1324abcd",
                   "transitEncryption": "ENABLED"
               }
           }
       ],
       "family": "efs-tutorial",
       "executionRoleArn":"arn:aws:iam::111122223333:role/ecsTaskExecutionRole"
   }
   ```
**Note**  
The Amazon ECS task execution IAM role does not require any specific Amazon EFS-related permissions to mount an Amazon EFS file system. By default, if no Amazon EFS resource-based policy exists, access is granted to all principals (`*`) at file system creation.  
The Amazon ECS task role is only required if "EFS IAM authorization" is enabled in the Amazon ECS task definition. When enabled, the task role identity must be allowed access to the Amazon EFS file system in the Amazon EFS resource-based policy, and anonymous access should be disabled.
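
If you do enable EFS IAM authorization, the Amazon EFS resource-based policy must allow the task role. The following is a sketch of such a file system policy; the account ID, role name, Region, and file system ID are placeholders.

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEcsTaskRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:role/ecsTaskRole"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:111122223333:file-system/fs-1324abcd"
        }
    ]
}
```

With a policy like this in place, clients that don't present the allowed role are denied by default.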

1. Choose **Create**.

## Step 6: Run a task and view the results


Now that your Amazon EFS file system is created and there is web content for the NGINX container to serve, you can run a task using the task definition that you created. The NGINX web server serves your simple HTML page. If you update the content in your Amazon EFS file system, those changes are propagated to any containers that have also mounted that file system.

The task runs in the subnet that you defined for the cluster.

**To run a task and view the results using the console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. On the **Clusters** page, select the cluster to run the standalone task in.

   Determine the resource from where you launch the service.    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-efs-volumes.html)

1. (Optional) Choose how your scheduled task is distributed across your cluster infrastructure. Expand **Compute configuration**, and then do the following:    
[\[See the AWS documentation website for more details\]](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/tutorial-efs-volumes.html)

1. For **Application type**, choose **Task**.

1. For **Task definition**, choose the `efs-tutorial` task definition that you created earlier.

1. For **Desired tasks**, enter `1`.

1. Choose **Create**.

1. On the **Cluster** page, choose **Infrastructure**.

1. Under **Container Instances**, choose the container instance to connect to.

1. On the **Container Instance** page, under **Networking**, record the **Public IP** for your instance.

1. Open a browser and enter the public IP address. You should see the following message:

   ```
   It works!
   You are using an Amazon EFS file system for persistent container storage.
   ```
**Note**  
If you do not see the message, make sure that the security group for your container instance allows inbound network traffic on port 80 and the security group for your file system allows inbound access from the container instance.

# Use FSx for Windows File Server volumes with Amazon ECS
FSx for Windows File Server

FSx for Windows File Server provides fully managed Windows file servers that are backed by a Windows file system. When you use FSx for Windows File Server together with Amazon ECS, you can provision your Windows tasks with persistent, distributed, shared, static file storage. For more information, see [What Is FSx for Windows File Server?](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html).

**Note**  
EC2 instances that use the Amazon ECS-Optimized Windows Server 2016 Full AMI do not support FSx for Windows File Server ECS task volumes.  
You can't use FSx for Windows File Server volumes in a Windows containers on Fargate configuration. Instead, you can [modify containers to mount them on startup](https://aws.amazon.com/blogs/containers/use-smb-storage-with-windows-containers-on-aws-fargate/).

You can use FSx for Windows File Server to deploy Windows workloads that require access to shared external storage, highly-available Regional storage, or high-throughput storage. You can mount one or more FSx for Windows File Server file system volumes to an Amazon ECS container that runs on an Amazon ECS Windows instance. You can share FSx for Windows File Server file system volumes between multiple Amazon ECS containers within a single Amazon ECS task.

To enable the use of FSx for Windows File Server with Amazon ECS, include the FSx for Windows File Server file system ID and the related information in a task definition. Before you create and run a task definition, you need the following.
+ An ECS Windows EC2 instance that's joined to a valid domain. The domain can be hosted by [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html), an on-premises Active Directory, or a self-hosted Active Directory on Amazon EC2.
+ An AWS Secrets Manager secret or Systems Manager parameter that contains the credentials that are used to join the Active Directory domain and attach the FSx for Windows File Server file system. The credential values are the name and password credentials that you entered when creating the Active Directory.
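
For example, you could store the Active Directory credentials with the AWS CLI along these lines. This is a sketch; the secret name and values are placeholders, and you supply the user name and password that you entered when creating the Active Directory.

```
aws secretsmanager create-secret \
    --name fsx-windows-ad-credentials \
    --secret-string '{"username":"admin","password":"your-ad-password"}'
```

Record the secret ARN that the command returns; you reference it as the `credentialsParameter` in the task definition volume configuration.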

For a related tutorial, see [Learn how to configure FSx for Windows File Server file systems for Amazon ECS](tutorial-wfsx-volumes.md).

## Considerations


Consider the following when using FSx for Windows File Server volumes:
+ FSx for Windows File Server volumes are natively supported with Amazon ECS on Windows Amazon EC2 instances — Amazon ECS automatically manages the mount through task definition configuration.

  On Linux Amazon EC2 instances, Amazon ECS can't automatically mount FSx for Windows File Server volumes through task definitions. However, you can manually mount an FSx for Windows File Server file share on a Linux EC2 instance at the host level and then bind-mount that path into your Amazon ECS containers. For more information, see [Mounting Amazon FSx file shares from Linux](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/map-shares-linux.html).
**Important**  
This is a self-managed configuration. For guidance on mounting and maintaining FSx for Windows File Server file shares on Linux, refer to the [FSx for Windows File Server documentation](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/).
**Important**  
When using a manually mounted FSx for Windows File Server share on Linux EC2 instances, Amazon ECS and FSx for Windows File Server operate independently — Amazon ECS does not monitor the Amazon FSx mount, and FSx for Windows File Server does not track Amazon ECS task placement or lifecycle events. You are responsible for ensuring network reachability between your Amazon ECS container instances and the Amazon FSx file system, implementing mount health checks, and handling reconnection logic to tolerate failover events.
+ FSx for Windows File Server with Amazon ECS doesn't support AWS Fargate.
+ FSx for Windows File Server with Amazon ECS isn't supported on Amazon ECS Managed Instances.
+ FSx for Windows File Server with Amazon ECS with `awsvpc` network mode requires version `1.54.0` or later of the container agent.
+ The maximum number of drive letters that can be used for an Amazon ECS task is 23. Each task with an FSx for Windows File Server volume gets a drive letter assigned to it.
+ By default, task resource cleanup time is three hours after the task ended. Even if no tasks use it, a file mapping that's created by a task persists for three hours. The default cleanup time can be configured by using the Amazon ECS environment variable `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION`. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md).
+ Tasks typically only run in the same VPC as the FSx for Windows File Server file system. However, it's possible to have cross-VPC support if there's established network connectivity between the Amazon ECS cluster VPC and the FSx for Windows File Server file system through VPC peering.
+ You control access to an FSx for Windows File Server file system at the network level by configuring the VPC security groups. Only tasks that are hosted on EC2 instances joined to the Active Directory domain with correctly configured Active Directory security groups can access the FSx for Windows File Server file share. If the security groups are misconfigured, Amazon ECS fails to launch the task with the following error message: `unable to mount file system fs-id`.
+ FSx for Windows File Server is integrated with AWS Identity and Access Management (IAM) to control the actions that your IAM users and groups can take on specific FSx for Windows File Server resources. With client authorization, customers can define IAM roles that allow or deny access to specific FSx for Windows File Server file systems, optionally require read-only access, and optionally allow or disallow root access to the file system from the client. For more information, see [Security](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/security.html) in the Amazon FSx Windows User Guide.
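
As noted above, you can change the task resource cleanup time through the agent configuration. On a Linux container instance the variable lives in `/etc/ecs/ecs.config`; on Windows it's set as an environment variable for the agent. The value below is illustrative.

```
# Illustrative: wait one hour instead of the default three before cleaning up task resources.
ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION=1h
```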

# Best practices for using FSx for Windows File Server with Amazon ECS
Best practices for using FSx for Windows File Server

Make note of the following best practice recommendations when you use FSx for Windows File Server with Amazon ECS.

## Security and access controls for FSx for Windows File Server


FSx for Windows File Server offers the following access control features that you can use to ensure that the data stored in an FSx for Windows File Server file system is secure and accessible only from applications that need it.

### Data encryption for FSx for Windows File Server volumes


FSx for Windows File Server supports two forms of encryption for file systems. They are encryption of data in transit and encryption at rest. Encryption of data in transit is supported on file shares that are mapped on a container instance that supports SMB protocol 3.0 or newer. Encryption of data at rest is automatically enabled when creating an Amazon FSx file system. Amazon FSx automatically encrypts data in transit using SMB encryption as you access your file system without the need for you to modify your applications. For more information, see [Data encryption in Amazon FSx](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/encryption.html) in the *Amazon FSx for Windows File Server User Guide*.

### Use Windows ACLs for folder level access control


Windows Amazon EC2 instances access Amazon FSx file shares using Active Directory credentials and use standard Windows access control lists (ACLs) for fine-grained file-level and folder-level access control. You can create multiple credentials, each one for a specific folder within the share that maps to a specific task.

In the following example, the task has access to the folder `App01` using a credential saved in Secrets Manager. Its Amazon Resource Name (ARN) is `1234`.

```
"rootDirectory": "\\path\\to\\my\\data\App01",
"credentialsParameter": "arn-1234",
"domain": "corp.fullyqualified.com",
```

In another example, a task has access to the folder `App02` using a credential saved in Secrets Manager. Its ARN is `6789`.

```
"rootDirectory": "\\path\\to\\my\\data\App02",
"credentialsParameter": "arn-6789",
"domain": "corp.fullyqualified.com",
```
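The per-folder pattern above can be sketched as data. The following Python sketch is illustrative only: the `fsx_volume` helper, the ARNs, and the paths are taken from the examples above, not from a real deployment.

```python
# Sketch of the per-folder access pattern above: each task definition volume
# points at a different folder and a different credential, so Windows ACLs can
# scope each task to only its own data. ARNs and paths are the illustrative
# values from the examples, not real resources.
def fsx_volume(root_directory, credentials_parameter, domain):
    """Build the authorization portion of an fsxWindowsFileServerVolumeConfiguration."""
    return {
        "rootDirectory": root_directory,
        "credentialsParameter": credentials_parameter,
        "domain": domain,
    }

app01 = fsx_volume(r"\\path\to\my\data\App01", "arn-1234", "corp.fullyqualified.com")
app02 = fsx_volume(r"\\path\to\my\data\App02", "arn-6789", "corp.fullyqualified.com")

# Different folders are guarded by different credentials.
assert app01["credentialsParameter"] != app02["credentialsParameter"]
```

Because each volume carries its own `credentialsParameter`, revoking one task's access is a matter of rotating or deleting one credential without touching the others.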

# Specify an FSx for Windows File Server file system in an Amazon ECS task definition
Specify an FSx for Windows File Server file system in a task definition

To use FSx for Windows File Server file system volumes for your containers, specify the volume and mount point configurations in your task definition. The following task definition JSON snippet shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "portMappings": [],
            "command": ["New-Item -Path C:\\fsx-windows-dir\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>It Works!</h2> <p>You are using Amazon FSx for Windows File Server file system for persistent container storage.</p>' -Force"],
            "cpu": 512,
            "memory": 256,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "essential": false,
            "name": "container1",
            "mountPoints": [
                {
                    "sourceVolume": "fsx-windows-dir",
                    "containerPath": "C:\\fsx-windows-dir",
                    "readOnly": false
                }
            ]
        },
        {
            "entryPoint": [
                "powershell",
                "-Command"
            ],
            "portMappings": [
                {
                    "hostPort": 443,
                    "protocol": "tcp",
                    "containerPort": 80
                }
            ],
            "command": ["Remove-Item -Recurse C:\\inetpub\\wwwroot\\* -Force; Start-Sleep -Seconds 120; Move-Item -Path C:\\fsx-windows-dir\\index.html -Destination C:\\inetpub\\wwwroot\\index.html -Force; C:\\ServiceMonitor.exe w3svc"],
            "mountPoints": [
                {
                    "sourceVolume": "fsx-windows-dir",
                    "containerPath": "C:\\fsx-windows-dir",
                    "readOnly": false
                }
            ],
            "cpu": 512,
            "memory": 256,
            "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
            "essential": true,
            "name": "container2"
        }
    ],
    "family": "fsx-windows",
    "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    "volumes": [
        {
            "name": "fsx-windows-dir",
            "fsxWindowsFileServerVolumeConfiguration": {
                "fileSystemId": "fs-0eeb5730b2EXAMPLE",
                "authorizationConfig": {
                    "domain": "example.com",
                    "credentialsParameter": "arn:arn-1234"
                },
                "rootDirectory": "share"
            }
        }
    ]
}
```
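In the snippet above, both containers reference the task-level volume by name through `mountPoints.sourceVolume`. A common mistake is a mount point that names a volume the task never declares. The following sketch shows that cross-check; `undeclared_volumes` is a hypothetical helper for illustration, not part of any Amazon ECS API.

```python
# Cross-check a task definition: every mountPoints.sourceVolume must match the
# name of a volume declared at the task level. This is an illustrative
# validation helper, not an Amazon ECS API.
def undeclared_volumes(task_definition):
    declared = {v["name"] for v in task_definition.get("volumes", [])}
    missing = set()
    for container in task_definition["containerDefinitions"]:
        for mount in container.get("mountPoints", []):
            if mount["sourceVolume"] not in declared:
                missing.add((container["name"], mount["sourceVolume"]))
    return missing

# A reduced version of the task definition above: the mount point name
# matches the declared volume, so nothing is missing.
task_definition = {
    "containerDefinitions": [
        {"name": "container1",
         "mountPoints": [{"sourceVolume": "fsx-windows-dir",
                          "containerPath": "C:\\fsx-windows-dir"}]},
    ],
    "volumes": [{"name": "fsx-windows-dir"}],
}
assert undeclared_volumes(task_definition) == set()
```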

`FSxWindowsFileServerVolumeConfiguration`  
Type: Object  
Required: No  
This parameter is specified when you're using an [FSx for Windows File Server](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) file system for task storage.    
`fileSystemId`  
Type: String  
Required: Yes  
The FSx for Windows File Server file system ID to use.  
`rootDirectory`  
Type: String  
Required: Yes  
The directory within the FSx for Windows File Server file system to mount as the root directory inside the host.  
`authorizationConfig`    
`credentialsParameter`  
Type: String  
Required: Yes  
The authorization credential options:  
+ Amazon Resource Name (ARN) of a [Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) secret.
+ Amazon Resource Name (ARN) of a [Systems Manager](https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html) parameter.  
`domain`  
Type: String  
Required: Yes  
A fully qualified domain name that's hosted by an [AWS Directory Service for Microsoft Active Directory](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/directory_microsoft_ad.html) (AWS Managed Microsoft AD) directory or a self-hosted EC2 Active Directory.
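The parameter reference above can be condensed into a required-field check. The following sketch is illustrative only; the field names come from the reference, while the `missing_fields` helper and sample values are assumptions for demonstration.

```python
# Minimal check of the required fsxWindowsFileServerVolumeConfiguration fields
# described in the parameter reference above. Illustrative helper, not part of
# any AWS SDK.
REQUIRED = ("fileSystemId", "rootDirectory", "authorizationConfig")
AUTH_REQUIRED = ("credentialsParameter", "domain")

def missing_fields(config):
    missing = [f for f in REQUIRED if f not in config]
    auth = config.get("authorizationConfig", {})
    missing += [f"authorizationConfig.{f}" for f in AUTH_REQUIRED if f not in auth]
    return missing

# Values mirror the example task definition shown earlier in this section.
config = {
    "fileSystemId": "fs-0eeb5730b2EXAMPLE",
    "rootDirectory": "share",
    "authorizationConfig": {
        "domain": "example.com",
        "credentialsParameter": "arn:arn-1234",
    },
}
assert missing_fields(config) == []
```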

## Methods for storing FSx for Windows File Server volume credentials


There are two different methods for storing credentials for use with `credentialsParameter`.
+ **AWS Secrets Manager secret**

  This credential can be created in the AWS Secrets Manager console by using the *Other type of secret* category. You add a row for each key/value pair: username/admin and password/*password*.
+ **Systems Manager parameter**

  This credential can be created in the Systems Manager Parameter Store console by entering text in the form shown in the following example snippet.

  ```
  {
    "username": "admin",
    "password": "password"
  }
  ```

The `credentialsParameter` in the task definition `FSxWindowsFileServerVolumeConfiguration` parameter holds either the secret ARN or the Systems Manager parameter ARN. For more information, see [What is AWS Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html) in the *AWS Secrets Manager User Guide* and [Systems Manager Parameter Store](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-parameter-store.html) in the *AWS Systems Manager User Guide*.
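Because both storage methods yield an ARN, the service segment of the ARN tells you which backend a given `credentialsParameter` points at. The following sketch is illustrative; the sample ARNs are fabricated, and `credential_source` is a hypothetical helper, not an AWS API.

```python
# Both storage methods above yield an ARN that goes into credentialsParameter.
# This sketch tells the two apart by the service segment of the ARN (the third
# colon-separated field). The sample ARNs are fabricated for illustration.
def credential_source(arn):
    service = arn.split(":")[2]
    if service == "secretsmanager":
        return "Secrets Manager secret"
    if service == "ssm":
        return "Systems Manager parameter"
    raise ValueError(f"unsupported credential ARN: {arn}")

assert credential_source(
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:fsx-creds-AbCdEf"
) == "Secrets Manager secret"
assert credential_source(
    "arn:aws:ssm:us-east-1:111122223333:parameter/fsx-creds"
) == "Systems Manager parameter"
```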

# Learn how to configure FSx for Windows File Server file systems for Amazon ECS
Learn how to configure FSx for Windows File Server file systems

Learn how to launch an Amazon ECS-optimized Windows instance that hosts an FSx for Windows File Server file system and containers that can access the file system. To do this, you first create an AWS Directory Service AWS Managed Microsoft AD directory. Then, you create an FSx for Windows File Server file system and a cluster with an Amazon EC2 instance and a task definition. You configure the task definition so that your containers use the FSx for Windows File Server file system. Finally, you test the file system.

It takes 20 to 45 minutes each time you launch or delete either the Active Directory or the FSx for Windows File Server file system. Plan for at least 90 minutes to complete the tutorial, or complete it over a few sessions.

## Prerequisites for the tutorial

+ An administrative user. See [Set up to use Amazon ECS](get-set-up-for-amazon-ecs.md).
+ (Optional) A `PEM` key pair for connecting to your EC2 Windows instance through RDP access. For information about how to create key pairs, see [Amazon EC2 key pairs and Amazon EC2 instances](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) in the *Amazon EC2 User Guide*.
+ A VPC with at least one public and one private subnet, and one security group. You can use your default VPC. You don't need a NAT gateway or device. Directory Service doesn't support Network Address Translation (NAT) with Active Directory. For this to work, the Active Directory, FSx for Windows File Server file system, ECS Cluster, and EC2 instance must be located within your VPC. For more information regarding VPCs and Active Directories, see [Create a VPC](https://docs.aws.amazon.com/vpc/latest/userguide/create-vpc.html) and [Prerequisites for creating an AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html#ms_ad_getting_started_prereqs).
+ The IAM ecsInstanceRole and ecsTaskExecutionRole roles exist in your account. These roles allow Amazon ECS to make API calls and to access containers, secrets, directories, and file servers on your behalf.

## Step 1: Create IAM access roles


**Create the IAM roles with the AWS Management Console.**

1. See [Amazon ECS container instance IAM role](instance_IAM_role.md) to check whether you have an ecsInstanceRole and to see how you can create one if you don't have one.

1. We recommend that role policies are customized for minimum permissions in an actual production environment. For the purpose of working through this tutorial, verify that the following AWS managed policies are attached to your ecsInstanceRole. Attach the policies if they are not already attached.
   + AmazonEC2ContainerServiceforEC2Role
   + AmazonSSMManagedInstanceCore
   + AmazonSSMDirectoryServiceAccess

   To attach AWS managed policies:

   1. Open the [IAM console](https://console.aws.amazon.com//iam/).

   1. In the navigation pane, choose **Roles.**

   1. Choose the role that you want to attach a policy to, for example **ecsInstanceRole**.

   1. Choose **Permissions, Attach policies**.

   1. To narrow the available policies to attach, use **Filter**.

   1. Select the appropriate policy and choose **Attach policy**.

1. See [Amazon ECS task execution IAM role](task_execution_IAM_role.md) to check whether you have an ecsTaskExecutionRole and to see how you can create one if you don't have one.

   We recommend that role policies are customized for minimum permissions in an actual production environment. For the purpose of working through this tutorial, verify that the following AWS managed policies are attached to your ecsTaskExecutionRole. Attach the policies if they are not already attached. Use the procedure given in the preceding section to attach the AWS managed policies.
   + SecretsManagerReadWrite
   + AmazonFSxReadOnlyAccess
   + AmazonSSMReadOnlyAccess
   + AmazonECSTaskExecutionRolePolicy
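The two policy checklists in this step can be modeled as data. The following sketch is illustrative only: it tracks which of the tutorial's AWS managed policies still need to be attached to each role; actually attaching them happens in the IAM console as described above, and the `policies_to_attach` helper is a made-up name for demonstration.

```python
# The AWS managed policies this tutorial expects on each role, taken from the
# lists above. Illustrative bookkeeping only; attaching policies is done in the
# IAM console (or via the IAM AttachRolePolicy API, not shown here).
TUTORIAL_POLICIES = {
    "ecsInstanceRole": [
        "AmazonEC2ContainerServiceforEC2Role",
        "AmazonSSMManagedInstanceCore",
        "AmazonSSMDirectoryServiceAccess",
    ],
    "ecsTaskExecutionRole": [
        "SecretsManagerReadWrite",
        "AmazonFSxReadOnlyAccess",
        "AmazonSSMReadOnlyAccess",
        "AmazonECSTaskExecutionRolePolicy",
    ],
}

def policies_to_attach(role_name, already_attached):
    """Return the tutorial policies that still need to be attached to a role."""
    attached = set(already_attached)
    return [p for p in TUTORIAL_POLICIES[role_name] if p not in attached]

# An ecsInstanceRole that already has the ECS policy still needs both SSM policies.
assert policies_to_attach("ecsInstanceRole", ["AmazonEC2ContainerServiceforEC2Role"]) == [
    "AmazonSSMManagedInstanceCore",
    "AmazonSSMDirectoryServiceAccess",
]
```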

## Step 2: Create Windows Active Directory (AD)


1. Follow the steps described in [Creating your AWS Managed Microsoft AD](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/ms_ad_getting_started.html#ms_ad_getting_started_create_directory) in the *AWS Directory Service Administration Guide*. Use the VPC you have designated for this tutorial. In Step 3 of *Creating your AWS Managed Microsoft AD*, save the user name and admin password for use in a later step. Also, note the fully qualified directory DNS name for future steps. You can complete the following step while the Active Directory is being created.

1. Create an AWS Secrets Manager secret to use in the following steps. For more information, see [Get started with Secrets Manager](https://docs.aws.amazon.com/secretsmanager/latest/userguide/intro.html#get-started) in the *AWS Secrets Manager User Guide*.

   1. Open the [Secrets Manager console](https://console.aws.amazon.com//secretsmanager/).

   1. Click **Store a new secret**.

   1. Select **Other type of secrets**.

   1. For **Secret key/value**, in the first row, create a key **username** with value **admin**. Choose **Add row**.

   1. In the new row, create a key **password**. For value, type in the password you entered in Step 3 of *Creating your AWS Managed Microsoft AD*.

   1. Click on the **Next** button.

   1. Provide a secret name and description. Click **Next**.

   1. Click **Next**. Click **Store**.

   1. On the **Secrets** list page, click the secret you have just created.

   1. Save the ARN of the new secret for use in the following steps.

   1. You can proceed to the next step while your Active Directory is being created.

## Step 3: Verify and update your security group


In this step, you verify and update the rules for the security group that you're using. For this, you can use the default security group that was created for your VPC.

**Verify and update security group.**

You need to create or edit your security group to allow traffic to and from the ports described in [Amazon VPC Security Groups](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/limit-access-security-groups.html#fsx-vpc-security-groups) in the *FSx for Windows File Server User Guide*. You can do this by creating the security group inbound rule shown in the first row of the following table of inbound rules. This rule allows inbound traffic from network interfaces (and their associated instances) that are assigned to the security group. All of the cloud resources you create are within the same VPC and attached to the same security group. Therefore, this rule allows traffic to be sent to and from the FSx for Windows File Server file system, Active Directory, and ECS instance as required. The other inbound rules allow traffic to serve the website and RDP access for connecting to your ECS instance.

The following table shows which security group inbound rules are required for this tutorial.


| Type | Protocol | Port range | Source | 
| --- | --- | --- | --- | 
|  All traffic  |  All  |  All  |  *sg-securitygroup*  | 
|  HTTPS  |  TCP  |  443  |  0.0.0.0/0  | 
|  RDP  |  TCP  |  3389  |  your laptop IP address  | 

The following table shows which security group outbound rules are required for this tutorial.


| Type | Protocol | Port range | Destination | 
| --- | --- | --- | --- | 
|  All traffic  |  All  |  All  |  0.0.0.0/0  | 
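The rules in the tables above can be expressed as data. The following sketch is illustrative only; the rule entries are the ones from the inbound table, and `allows_port` is a hypothetical helper for reasoning about coverage, not an EC2 API.

```python
# The inbound rules from the table above, as data. The self-referencing rule
# (source is the security group itself) is what lets the FSx file system, the
# Active Directory, and the ECS instance talk to each other inside one VPC.
INBOUND_RULES = [
    {"type": "All traffic", "protocol": "All", "port": "All", "source": "sg-securitygroup"},
    {"type": "HTTPS", "protocol": "TCP", "port": 443, "source": "0.0.0.0/0"},
    {"type": "RDP", "protocol": "TCP", "port": 3389, "source": "your laptop IP address"},
]

def allows_port(rules, port):
    """True if any rule opens the given port (or all ports)."""
    return any(r["port"] in (port, "All") for r in rules)

assert allows_port(INBOUND_RULES, 443)   # serves the sample website
assert allows_port(INBOUND_RULES, 3389)  # RDP access to the instance
```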

1. Open the [EC2 console](https://console.aws.amazon.com//ec2/) and select **Security Groups** from the left-hand menu.

1. From the list of security groups now displayed, select the check box to the left of the security group that you are using for this tutorial.

   Your security group details are displayed.

1. Edit the inbound and outbound rules by selecting the **Inbound rules** or **Outbound rules** tabs and choosing the **Edit inbound rules** or **Edit outbound rules** buttons. Edit the rules to match those displayed in the preceding tables. After you create your EC2 instance later on in this tutorial, edit the inbound rule RDP source with the public IP address of your EC2 instance as described in [Connect to your Windows instance using RDP](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connecting_to_windows_instance.html) from the *Amazon EC2 User Guide*.

## Step 4: Create an FSx for Windows File Server file system


After your security group is verified and updated and your Active Directory is created and is in the active status, create the FSx for Windows File Server file system in the same VPC as your Active Directory. Use the following steps to create an FSx for Windows File Server file system for your Windows tasks.

**Create your first file system.**

1. Open the [Amazon FSx console](https://console.aws.amazon.com//fsx/).

1. On the dashboard, choose **Create file system** to start the file system creation wizard.

1. On the **Select file system type** page, choose **FSx for Windows File Server**, and then choose **Next**. The **Create file system** page appears.

1. In the **File system details** section, provide a name for your file system. Naming your file systems makes it easier to find and manage them. You can use up to 256 Unicode characters. Allowed characters are letters, numbers, spaces, and the special characters plus sign (+), minus sign (-), equal sign (=), period (.), underscore (_), colon (:), and forward slash (/).

1. For **Deployment type**, choose **Single-AZ** to deploy a file system in a single Availability Zone. *Single-AZ 2* is the latest generation of single Availability Zone file systems, and it supports SSD and HDD storage.

1. For **Storage type**, choose **HDD**.

1. For **Storage capacity**, enter the minimum storage capacity. 

1. Keep **Throughput capacity** at its default setting.

1. In the **Network & security** section, choose the same Amazon VPC that you chose for your Directory Service directory.

1. For **VPC Security Groups**, choose the security group that you verified in *Step 3: Verify and update your security group*.

1. For **Windows authentication**, choose **AWS Managed Microsoft Active Directory**, and then choose your Directory Service directory from the list.

1. For **Encryption**, keep the default **Encryption key** setting of **aws/fsx (default)**.

1. Keep the default settings for **Maintenance preferences**.

1. Click on the **Next** button.

1. Review the file system configuration shown on the **Create file system** page. For your reference, note which file system settings you can modify after the file system is created. Choose **Create file system**.

1. Note the file system ID. You will need to use it in a later step.

   You can go on to the next steps to create a cluster and EC2 instance while the FSx for Windows File Server file system is being created.

## Step 5: Create an Amazon ECS cluster


**Create a cluster using the Amazon ECS console**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. From the navigation bar, select the Region to use.

1. In the navigation pane, choose **Clusters**.

1. On the **Clusters** page, choose **Create cluster**.

1. Under **Cluster configuration**, for **Cluster name**, enter **windows-fsx-cluster**.

1. Expand **Infrastructure**, clear **AWS Fargate (serverless)**, and then select **Amazon EC2 instances**.

   1. To create an Auto Scaling group, from **Auto Scaling group (ASG)**, select **Create new group**, and then provide the following details about the group:
     + For **Operating system/Architecture**, choose **Windows Server 2019 Core**.
     + For **EC2 instance type**, choose t2.medium or t2.micro.

1. Choose **Create**.

## Step 6: Create an Amazon ECS optimized Amazon EC2 instance


Create an Amazon ECS Windows container instance.

**To create an Amazon ECS instance**

1. Use the `aws ssm get-parameters` command to retrieve the AMI name for the Region that hosts your VPC. For more information, see [Retrieving Amazon ECS-Optimized AMI metadata](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/retrieve-ecs-optimized_windows_AMI.html).

1. Use the Amazon EC2 console to launch the instance.

   1. Open the Amazon EC2 console at [https://console.aws.amazon.com/ec2/](https://console.aws.amazon.com/ec2/).

   1. From the navigation bar, select the Region to use.

   1. From the **EC2 Dashboard**, choose **Launch instance**.

   1. For **Name**, enter a unique name.

   1. For **Application and OS Images (Amazon Machine Image)**, in the **search** field, enter the AMI name that you retrieved.

   1. For **Instance type**, choose t2.medium or t2.micro.

   1. For **Key pair (login)**, choose a key pair. If you don't specify a key pair, you can't connect to the instance through RDP later.

   1. Under **Network settings**, for **VPC** and **Subnet**, choose your VPC and a public subnet.

   1. Under **Network settings**, for **Security group**, choose an existing security group, or create a new one. Ensure that the security group you choose has the inbound and outbound rules that you verified in *Step 3: Verify and update your security group*.

   1. Under **Network settings**, for **Auto-assign Public IP**, select **Enable**. 

   1. Expand **Advanced details**, and then for **Domain join directory**, select the ID of the Active Directory that you created. This option domain joins your AD when the EC2 instance is launched.

   1. Under **Advanced details**, for **IAM instance profile** , choose **ecsInstanceRole**.

   1. Configure your Amazon ECS container instance with the following user data. Under **Advanced details**, paste the following script into the **User data** field. If you named your cluster something other than *windows-fsx-cluster*, replace the cluster name in the script with your own.

      ```
      <powershell>
      Initialize-ECSAgent -Cluster windows-fsx-cluster -EnableTaskIAMRole
      </powershell>
      ```

   1. When you are ready, select the acknowledgment field, and then choose **Launch Instances**. 

   1. A confirmation page lets you know that your instance is launching. Choose **View Instances** to close the confirmation page and return to the console.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose the **Infrastructure** tab and verify that your instance has been registered in the **windows-fsx-cluster** cluster.

## Step 7: Register a Windows task definition


Before you can run Windows containers in your Amazon ECS cluster, you must register a task definition. The following example task definition serves a simple web page. The task launches two containers that have access to the FSx for Windows File Server file system. The first container writes an HTML file to the file system. The second container moves the HTML file from the file system into the web server's root directory and serves the webpage.

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition**, **Create new task definition with JSON**.

1. In the JSON editor box, replace the values for your task execution role and the details about your FSx file system and then choose **Save**.

   ```
   {
       "containerDefinitions": [
           {
               "entryPoint": [
                   "powershell",
                   "-Command"
               ],
               "portMappings": [],
               "command": ["New-Item -Path C:\\fsx-windows-dir\\index.html -ItemType file -Value '<html> <head> <title>Amazon ECS Sample App</title> <style>body {margin-top: 40px; background-color: #333;} </style> </head><body> <div style=color:white;text-align:center> <h1>Amazon ECS Sample App</h1> <h2>It Works!</h2> <p>You are using Amazon FSx for Windows File Server file system for persistent container storage.</p>' -Force"],
               "cpu": 512,
               "memory": 256,
               "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
               "essential": false,
               "name": "container1",
               "mountPoints": [
                   {
                       "sourceVolume": "fsx-windows-dir",
                       "containerPath": "C:\\fsx-windows-dir",
                       "readOnly": false
                   }
               ]
           },
           {
               "entryPoint": [
                   "powershell",
                   "-Command"
               ],
               "portMappings": [
                   {
                       "hostPort": 443,
                       "protocol": "tcp",
                       "containerPort": 80
                   }
               ],
               "command": ["Remove-Item -Recurse C:\\inetpub\\wwwroot\\* -Force; Start-Sleep -Seconds 120; Move-Item -Path C:\\fsx-windows-dir\\index.html -Destination C:\\inetpub\\wwwroot\\index.html -Force; C:\\ServiceMonitor.exe w3svc"],
               "mountPoints": [
                   {
                       "sourceVolume": "fsx-windows-dir",
                       "containerPath": "C:\\fsx-windows-dir",
                       "readOnly": false
                   }
               ],
               "cpu": 512,
               "memory": 256,
               "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
               "essential": true,
               "name": "container2"
           }
       ],
       "family": "fsx-windows",
       "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
       "volumes": [
           {
               "name": "fsx-windows-dir",
               "fsxWindowsFileServerVolumeConfiguration": {
                   "fileSystemId": "fs-0eeb5730b2EXAMPLE",
                   "authorizationConfig": {
                       "domain": "example.com",
                       "credentialsParameter": "arn:arn-1234"
                   },
                   "rootDirectory": "share"
               }
           }
       ]
   }
   ```

## Step 8: Run a task and view the results


Before running the task, verify that the status of your FSx for Windows File Server file system is **Available**. After it is available, you can run a task using the task definition that you created. The task starts two containers that pass an HTML file between them through the file system. After the handoff, a web server serves the simple HTML page.

**Note**  
You might not be able to connect to the website from within a VPN.

**Run a task and view the results with the Amazon ECS console.**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose the **Tasks** tab, and then choose **Run new task**.

1. For **Launch Type**, choose **EC2**.

1. Under **Deployment configuration**, for **Task Definition**, choose **fsx-windows**, and then choose **Create**.

1. When your task status is **RUNNING**, choose the task ID.

1. Under **Containers**, when the **container1** status is **STOPPED**, select **container2** to view the container's details.

1. Under **Container details for container2**, select **Network bindings**, and then choose the external IP address that is associated with the container. Your browser opens and displays the following message.

   ```
   Amazon ECS Sample App
   It Works! 
   You are using Amazon FSx for Windows File Server file system for persistent container storage.
   ```
**Note**  
It may take a few minutes for the message to be displayed. If you don't see this message after a few minutes, check that you aren't running in a VPN and make sure that the security group for your container instance allows inbound traffic on port 443.

## Step 9: Clean up


**Note**  
It takes 20 to 45 minutes to delete the FSx for Windows File Server file system or the AD. You must wait until the FSx for Windows File Server file system delete operations are complete before starting the AD delete operations.

**Delete FSx for Windows File Server file system.**

1. Open the [Amazon FSx console](https://console.aws.amazon.com//fsx/).

1. Choose the radio button to the left of the FSx for Windows File Server file system that you just created.

1. Choose **Actions**.

1. Select **Delete file system**.

**Delete AD.**

1. Open the [Directory Service console](https://console.aws.amazon.com//directoryservicev2/).

1. Choose the radio button to the left of the AD you just created.

1. Choose **Actions**.

1. Select **Delete directory**.

**Delete the cluster.**

1. Open the console at [https://console.aws.amazon.com/ecs/v2](https://console.aws.amazon.com/ecs/v2).

1. In the navigation pane, choose **Clusters**, and then choose **windows-fsx-cluster**.

1. Choose **Delete cluster**.

1. Enter the phrase and then choose **Delete**.

**Terminate EC2 instance.**

1. Open the [Amazon EC2 console](https://console.aws.amazon.com//ec2/).

1. From the left-hand menu, select **Instances**.

1. Check the box to the left of the EC2 instance you created.

1. Choose **Instance state**, then **Terminate instance**.

**Delete secret.**

1. Open the [Secrets Manager console](https://console.aws.amazon.com//secretsmanager/).

1. Select the secret that you created for this walkthrough.

1. Click **Actions**.

1. Select **Delete secret**.

# Configuring S3 Files for Amazon ECS
Amazon S3 Files

S3 Files is a shared file system that connects any AWS compute resource directly with your data in Amazon S3. It provides fast, direct access to all of your S3 data as files with full file system semantics and low-latency performance, without your data ever leaving S3. You can read, write, and organize data using file and directory operations, while S3 Files keeps your file system and S3 bucket synchronized automatically. With Amazon ECS, you can define S3 file systems as volumes in your task definitions, giving your containers direct file system access to data stored in S3 buckets. To learn more about Amazon S3 Files and its capabilities, see the [Amazon S3 User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).

## Availability


S3 Files support in Amazon ECS is available for the following launch types at General Availability:
+ **Fargate** — Fully supported.
+ **Amazon ECS Managed Instances** — Fully supported.

**Important**  
S3 Files are not supported on the Amazon EC2 launch type at this time. If you configure an S3 file system in a task definition and attempt to run it on the Amazon EC2 launch type, the task will fail at launch. Amazon EC2 launch type support is planned for a future release.

## Considerations

+ S3 file systems use a dedicated `s3filesVolumeConfiguration` parameter in the task definition.
+ S3 file systems require a full Amazon Resource Name (ARN) to identify the file system. The ARN format is:

  ```
  arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
  ```
+ Transit encryption is mandatory for S3 file system volumes and is automatically enforced. There is no option to disable it.
+ Task IAM Role is mandatory for S3 file system volumes and is automatically enforced. There is no option to disable it.
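The ARN format shown in the considerations can be checked before registering a task definition. The following sketch is illustrative: the regular expression simply mirrors the format above, and `is_s3files_arn` is a hypothetical helper, not part of any AWS SDK.

```python
import re

# Validate the S3 Files file-system ARN format from the considerations above:
# arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
# Illustrative helper only; the sample ARNs are fabricated.
ARN_PATTERN = re.compile(
    r"^arn:[^:]+:s3files:[^:]*:[0-9]{12}:file-system/fs-[a-zA-Z0-9]+$"
)

def is_s3files_arn(arn):
    return bool(ARN_PATTERN.match(arn))

assert is_s3files_arn("arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0")
assert not is_s3files_arn("fs-0123456789abcdef0")  # bare file system IDs are not accepted
```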

## Prerequisites


Before configuring S3 file system volumes in your Amazon ECS task definitions, ensure the following prerequisites are met:
+ **An S3 file system and mount target** — You must have an S3 file system created and associated with an S3 bucket. For instructions on creating an S3 file system, see the [Amazon S3 Files User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/).
+ **A Task IAM Role** — Your task definition must include a Task IAM Role with the following permissions:
  + Permissions to connect to and interact with S3 file systems from your application code (running in the container).
  + Permissions to read S3 objects from your application code (running in the container).
+ **VPC and security group configuration** — Your S3 file system must be accessible from the VPC and subnets where your Amazon ECS tasks run.
+ **(Optional) S3 Files access points** — If you want to enforce application-specific access controls, create an S3 Files access point and provide the ARN in the task definition.

For more information, refer to [prerequisites for S3 Files](https://docs.aws.amazon.com/AmazonS3/latest/userguide/s3-files-prereq-policies.html#s3-files-prereq-iam-compute-role).

# Specify an Amazon S3 Files volume in your Amazon ECS task definition
Specify an S3 Files volume in a task definition

You can configure S3 Files volumes in your Amazon ECS task definitions using the Amazon ECS console, the AWS CLI, or the AWS API.

## Using the Amazon ECS console


1. Open the Amazon ECS console at [https://console.aws.amazon.com/ecs/](https://console.aws.amazon.com/ecs/).

1. In the navigation pane, choose **Task definitions**.

1. Choose **Create new task definition** or select an existing task definition and create a new revision.

1. In the **Infrastructure** section, ensure you have a Task IAM Role configured with the required permissions.

1. In the **Storage** section, choose **Add volume**.

1. For **Volume type**, select **S3 Files**.

1. For **File system ARN**, enter the full ARN of your S3 file system. The ARN format is:

   ```
   arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx
   ```

1. (Optional) For **Root directory**, enter the path within the file system to mount as the root. If not specified, the root of the file system (`/`) is used.

1. (Optional) For **Transit encryption port**, enter the port number for sending encrypted data between the Amazon ECS host and the S3 file system. If you don't specify a transit encryption port, it uses the port selection strategy that the Amazon EFS mount helper uses.

1. (Optional) For **Access point ARN**, select the S3 Files access point to use from the dropdown list.

1. In the **Container mount points** section, select the container, and then specify the path inside the container where the volume is mounted.

1. Choose **Create** to create the task definition.

## Using the AWS CLI


To specify an S3 Files volume in a task definition using the AWS CLI, use the `register-task-definition` command with the `s3filesVolumeConfiguration` parameter in the volume definition.

The following is an example task definition JSON snippet that defines an S3 Files volume and mounts it to a container:

```
{
  "family": "s3files-task-example",
  "taskRoleArn": "arn:aws:iam::123456789012:role/ecsTaskRole",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "my-image:latest",
      "essential": true,
      "mountPoints": [
        {
          "containerPath": "/mnt/s3data",
          "sourceVolume": "my-s3files-volume"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "my-s3files-volume",
      "s3filesVolumeConfiguration": {
        "fileSystemArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0",
        "rootDirectory": "/",
        "transitEncryptionPort": 2999
      }
    }
  ]
}
```

Register the task definition:

```
aws ecs register-task-definition --cli-input-json file://s3files-task-def.json
```
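Before you register the task definition, you can catch JSON syntax errors locally. The following sketch writes a minimal, illustrative file and validates it with the standard `python3 -m json.tool` parser:

```shell
# Write a minimal task definition file for demonstration, then confirm it parses.
cat > s3files-task-def.json <<'EOF'
{
  "family": "s3files-task-example",
  "volumes": [
    {
      "name": "my-s3files-volume",
      "s3filesVolumeConfiguration": {
        "fileSystemArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0"
      }
    }
  ]
}
EOF
python3 -m json.tool s3files-task-def.json > /dev/null && echo "JSON is valid"
```

If the file contains a syntax error, `json.tool` prints the location of the error instead of the success message.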

To use an access point, include the `accessPointArn` parameter:

```
{
  "name": "my-s3files-volume",
  "s3filesVolumeConfiguration": {
    "fileSystemArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0",
    "rootDirectory": "/",
    "transitEncryptionPort": 2999,
    "accessPointArn": "arn:aws:s3files:us-east-1:123456789012:file-system/fs-0123456789abcdef0/access-point/fsap-0123456789abcdef0"
  }
}
```

## S3 Files volume configuration parameters


The following table describes the parameters available in the `s3filesVolumeConfiguration` object:

`fileSystemArn`  
Type: String  
Required: Yes  
The full ARN of the S3 file system to mount. Format: `arn:{partition}:s3files:{region}:{account-id}:file-system/fs-xxxxx`

`rootDirectory`  
Type: String  
Required: No  
The directory within the S3 file system to mount as the root of the volume. Defaults to `/` if not specified.

`transitEncryptionPort`  
Type: Integer  
Required: No  
The port to use for sending encrypted data between the Amazon ECS host and the S3 file system. Transit encryption itself is always enabled and cannot be disabled.

`accessPointArn`  
Type: String  
Required: No  
The full ARN of the S3 Files access point to use. Access points provide application-specific entry points into the file system with enforced user identity and root directory settings.

# Use Docker volumes with Amazon ECS
Docker volumes

When using Docker volumes, the built-in `local` driver or a third-party volume driver can be used. Docker volumes are managed by Docker and a directory is created in `/var/lib/docker/volumes` on the container instance that contains the volume data.

To use Docker volumes, specify a `dockerVolumeConfiguration` in your task definition. For more information, see [Volumes](https://docs.docker.com/engine/storage/volumes/) in the Docker documentation.

Some common use cases for Docker volumes are the following:
+ To provide persistent data volumes for use with containers
+ To share a defined data volume at different locations on different containers on the same container instance
+ To define an empty, nonpersistent data volume and mount it on multiple containers within the same task
+ To provide a data volume to your task that's managed by a third-party driver

## Considerations for using Docker volumes


Consider the following when using Docker volumes:
+ Docker volumes are only supported when using the EC2 launch type or external instances.
+ Windows containers only support the use of the `local` driver.
+ If a third-party driver is used, make sure it's installed and active on the container instance before the container agent is started. If the third-party driver isn't active before the agent is started, you can restart the container agent using one of the following commands:
  + For the Amazon ECS-optimized Amazon Linux 2 AMI:

    ```
    sudo systemctl restart ecs
    ```
  + For the Amazon ECS-optimized Amazon Linux AMI:

    ```
    sudo stop ecs && sudo start ecs
    ```

For information about how to specify a Docker volume in a task definition, see [Specify a Docker volume in an Amazon ECS task definition](specify-volume-config.md).

# Specify a Docker volume in an Amazon ECS task definition
Specify a Docker volume in a task definition

Before your containers can use data volumes, you must specify the volume and mount point configurations in your task definition. This section describes the volume configuration for a container. For tasks that use a Docker volume, specify a `dockerVolumeConfiguration`. For tasks that use a bind mount host volume, specify a `host` and optional `sourcePath`.

The following task definition JSON shows the syntax for the `volumes` and `mountPoints` objects for a container.

```
{
    "containerDefinitions": [
        {
            "mountPoints": [
                {
                    "sourceVolume": "string",
                    "containerPath": "/path/to/mount_volume",
                    "readOnly": boolean
                }
            ]
        }
    ],
    "volumes": [
        {
            "name": "string",
            "dockerVolumeConfiguration": {
                "scope": "string",
                "autoprovision": boolean,
                "driver": "string",
                "driverOpts": {
                    "key": "value"
                },
                "labels": {
                    "key": "value"
                }
            }
        }
    ]
}
```

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`dockerVolumeConfiguration`  
Type: [DockerVolumeConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_DockerVolumeConfiguration.html) Object  
Required: No  
This parameter is specified when using Docker volumes. Docker volumes are supported only when running tasks on EC2 instances. Windows containers support only the use of the `local` driver. To use bind mounts, specify a `host` instead.    
`scope`  
Type: String  
Valid Values: `task` | `shared`  
Required: No  
The scope for the Docker volume, which determines its lifecycle. Docker volumes that are scoped to a `task` are automatically provisioned when the task starts and destroyed when the task stops. Docker volumes that are scoped as `shared` persist after the task stops.  
`autoprovision`  
Type: Boolean  
Default value: `false`  
Required: No  
If this value is `true`, the Docker volume is created if it doesn't already exist. This field is used only if the `scope` is `shared`. If the `scope` is `task`, then this parameter must be omitted.  
`driver`  
Type: String  
Required: No  
The Docker volume driver to use. The driver value must match the driver name provided by Docker because this name is used for task placement. If the driver was installed by using the Docker plugin CLI, use `docker plugin ls` to retrieve the driver name from your container instance. If the driver was installed by using another method, use Docker plugin discovery to retrieve the driver name.  
`driverOpts`  
Type: String to string map  
Required: No  
A map of Docker driver-specific options to pass through. This parameter maps to `DriverOpts` in the Create a volume section of Docker.  
`labels`  
Type: String to string map  
Required: No  
Custom metadata to add to your Docker volume.

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.
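As a quick local sanity check, the volume `name` rule above (up to 255 letters, numbers, hyphens, and underscores) can be expressed as a regular expression. This is a sketch; Amazon ECS performs its own validation when you register the task definition.

```shell
# Return success when a candidate volume name matches the documented rule.
valid_volume_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9_-]{1,255}$'
}

valid_volume_name "scratch_volume-1" && echo "ok: scratch_volume-1"
valid_volume_name "bad name!" || echo "rejected: bad name!"
```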

# Docker volume examples for Amazon ECS
Docker volume examples

The following examples show how to provide ephemeral storage for a container, how to provide a shared volume for multiple containers, and how to provide NFS persistent storage for a container.

**To provide ephemeral storage for a container using a Docker volume**

In this example, a container uses an empty data volume that is disposed of after the task finishes. One example use case is a container that needs to access a scratch file storage location during a task. You can achieve this using a Docker volume.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, we specify the scope as `task` so the volume is deleted after the task stops and use the built-in `local` driver.

   ```
   "volumes": [
       {
           "name": "scratch",
           "dockerVolumeConfiguration" : {
               "scope": "task",
               "driver": "local",
               "labels": {
                   "scratch": "space"
               }
           }
       }
   ]
   ```

1. In the `containerDefinitions` section, define a container with `mountPoints` values that reference the name of the defined volume, and specify the `containerPath` value at which to mount the volume in the container.

   ```
   "containerDefinitions": [
       {
           "name": "container-1",
           "mountPoints": [
               {
                 "sourceVolume": "scratch",
                 "containerPath": "/var/scratch"
               }
           ]
       }
   ]
   ```
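Putting the two snippets above together, a complete, minimal task definition for this example might look like the following (the image name and resource sizes are placeholders):

```
{
  "family": "scratch-volume-example",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "container-1",
      "image": "my-repo/my-image",
      "cpu": 100,
      "memory": 100,
      "essential": true,
      "mountPoints": [
        {
          "sourceVolume": "scratch",
          "containerPath": "/var/scratch"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "scratch",
      "dockerVolumeConfiguration": {
        "scope": "task",
        "driver": "local",
        "labels": {
          "scratch": "space"
        }
      }
    }
  ]
}
```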

**To provide persistent storage for multiple containers using a Docker volume**

In this example, you want a shared volume that multiple containers can use, and you want it to persist after any single task that uses it has stopped. The built-in `local` driver is used, so the volume is still tied to the lifecycle of the container instance.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, specify a `shared` scope so that the volume persists, set `autoprovision` to `true` so that the volume is created for use, and use the built-in `local` driver.

   ```
   "volumes": [
       {
           "name": "database",
           "dockerVolumeConfiguration" : {
               "scope": "shared",
               "autoprovision": true,
               "driver": "local",
               "labels": {
                   "database": "database_name"
               }
           }
       }
   ]
   ```

1. In the `containerDefinitions` section, define a container with `mountPoints` values that reference the name of the defined volume, and specify the `containerPath` value at which to mount the volume in the container.

   ```
   "containerDefinitions": [
       {
           "name": "container-1",
           "mountPoints": [
           {
             "sourceVolume": "database",
             "containerPath": "/var/database"
           }
         ]
       },
       {
         "name": "container-2",
         "mountPoints": [
           {
             "sourceVolume": "database",
             "containerPath": "/var/database"
           }
         ]
       }
     ]
   ```

**To provide NFS persistent storage for a container using a Docker volume**

In this example, a container uses an NFS data volume that is automatically mounted when the task starts and unmounted when the task stops. This uses the Docker built-in `local` driver. One example use case is local NFS storage that you need to access from an ECS Anywhere task. You can achieve this by using a Docker volume with NFS driver options.

1. In the task definition `volumes` section, define a data volume with `name` and `DockerVolumeConfiguration` values. In this example, specify a `task` scope so the volume is unmounted after the task stops. Use the `local` driver and configure the `driverOpts` with the `type`, `device`, and `o` options accordingly. Replace `NFS_SERVER` with the NFS server endpoint.

   ```
   "volumes": [
          {
              "name": "NFS",
              "dockerVolumeConfiguration" : {
                  "scope": "task",
                  "driver": "local",
                  "driverOpts": {
                      "type": "nfs",
                      "device": "$NFS_SERVER:/mnt/nfs",
                      "o": "addr=$NFS_SERVER"
                  }
              }
          }
      ]
   ```

1. In the `containerDefinitions` section, define a container with `mountPoints` values that reference the name of the defined volume and the `containerPath` value to mount the volume on the container.

   ```
   "containerDefinitions": [
          {
              "name": "container-1",
              "mountPoints": [
                  {
                    "sourceVolume": "NFS",
                    "containerPath": "/var/nfsmount"
                  }
              ]
          }
      ]
   ```
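With the `local` driver, Docker passes these options through to the Linux `mount` command, so the configuration above is roughly equivalent to `mount -t nfs -o addr=NFS_SERVER NFS_SERVER:/mnt/nfs <volume path>`. For reference, a filled-in version of the driver options with a hypothetical server address looks like this:

```
"driverOpts": {
    "type": "nfs",
    "device": "10.0.0.10:/mnt/nfs",
    "o": "addr=10.0.0.10"
}
```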

# Use bind mounts with Amazon ECS
Bind mounts

With bind mounts, a file or directory on a host, such as an Amazon EC2 instance, is mounted into a container. Bind mounts are supported for tasks that are hosted on both Fargate and Amazon EC2 instances. Bind mounts are tied to the lifecycle of the container that uses them. After all of the containers that use a bind mount are stopped, such as when a task is stopped, the data is removed. For tasks that are hosted on Amazon EC2 instances, the data can be tied to the lifecycle of the host Amazon EC2 instance by specifying a `host` and optional `sourcePath` value in your task definition. For more information, see [Bind mounts](https://docs.docker.com/engine/storage/bind-mounts/) in the Docker documentation.

The following are common use cases for bind mounts.
+ To provide an empty data volume to mount in one or more containers.
+ To mount a host data volume in one or more containers.
+ To share a data volume from a source container with other containers in the same task.
+ To expose a path and its contents from a Dockerfile to one or more containers.

## Considerations when using bind mounts


When using bind mounts, consider the following.
+ By default, tasks that are hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows) receive a minimum of 20 GiB of ephemeral storage for bind mounts. You can increase the total amount of ephemeral storage up to a maximum of 200 GiB by specifying the `ephemeralStorage` parameter in your task definition.
+ To expose files from a Dockerfile to a data volume when a task is run, the Amazon ECS data plane looks for a `VOLUME` directive. If the absolute path that's specified in the `VOLUME` directive is the same as the `containerPath` that's specified in the task definition, the data in the `VOLUME` directive path is copied to the data volume. In the following Dockerfile example, a file that's named `examplefile` in the `/var/log/exported` directory is written to the host and then mounted inside the container.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN mkdir -p /var/log/exported
  RUN touch /var/log/exported/examplefile
  VOLUME ["/var/log/exported"]
  ```

  By default, the volume permissions are set to `0755` and the owner as `root`. You can customize these permissions in the Dockerfile. The following example defines the owner of the directory as `node`.

  ```
  FROM public.ecr.aws/amazonlinux/amazonlinux:latest
  RUN yum install -y shadow-utils && yum clean all
  RUN useradd node
  RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
  RUN touch /var/log/exported/examplefile
  USER node
  VOLUME ["/var/log/exported"]
  ```
+ For tasks that are hosted on Amazon EC2 instances, when a `host` and `sourcePath` value aren't specified, the Docker daemon manages the bind mount for you. When no containers reference this bind mount, the Amazon ECS container agent task cleanup service eventually deletes it. By default, this happens three hours after the container exits. However, you can configure this duration with the `ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION` agent variable. For more information, see [Amazon ECS container agent configuration](ecs-agent-config.md). If you need this data to persist beyond the lifecycle of the container, specify a `sourcePath` value for the bind mount.
+ For tasks that are hosted on Amazon ECS Managed Instances, portions of the root filesystem are read-only. Read/write bind mounts must use writable directories such as `/var` for persistent data or `/tmp` for temporary data. Attempting to create read/write bind mounts to other directories results in the task failing to launch with an error similar to the following:

  ```
  error creating empty volume: error while creating volume path '/path': mkdir /path: read-only file system
  ```

  Read-only bind mounts (configured with `"readOnly": true` in the `mountPoints` parameter) can point to any accessible directory on the host.

  To view a full list of writable paths, you can run a task on an Amazon ECS Managed Instance and inspect the instance's mount table from within a container. Create a task definition with the following settings to access the host filesystem:

  ```
  {
      "pidMode": "host",
      "containerDefinitions": [{
          "privileged": true,
          ...
      }]
  }
  ```

  Then run the following commands from within the container:

  ```
  # List writable mounts
  cat /proc/1/root/proc/1/mounts | awk '$4 ~ /^rw,/ || $4 == "rw" {print $2}' | sort
  
  # List read-only mounts
  cat /proc/1/root/proc/1/mounts | awk '$4 ~ /^ro,/ || $4 == "ro" {print $2}' | sort
  ```
**Important**  
The `privileged` setting grants the container extended capabilities on the host, equivalent to root access. In this example, it is used to inspect the host's mount table for diagnostic purposes. For more information, see [Avoid running containers as privileged (Amazon EC2)](security-tasks-containers.md#security-tasks-containers-recommendations-avoid-privileged-containers).

  For more information about running commands interactively in containers, see [Monitor Amazon ECS containers with ECS Exec](ecs-exec.md).
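
  The `awk` filters above operate on the standard `/proc/mounts` format, where the fourth field holds the comma-separated mount options. The following sketch runs the same read-write filter over a few sample (hypothetical) mount-table lines:

  ```shell
  # Field 4 holds the mount options; print the mount point ($2) when it starts with rw.
  printf '%s\n' \
    'overlay / overlay ro,relatime 0 0' \
    'tmpfs /tmp tmpfs rw,nosuid 0 0' \
    '/dev/xvda1 /var ext4 rw,noatime 0 0' \
    | awk '$4 ~ /^rw,/ || $4 == "rw" {print $2}' | sort
  # Prints /tmp and /var; the read-only overlay root is filtered out.
  ```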

# Specify a bind mount in an Amazon ECS task definition
Specify a bind mount in a task definition

For Amazon ECS tasks that are hosted on either Fargate or Amazon EC2 instances, the following task definition JSON snippet shows the syntax for the `volumes`, `mountPoints`, and `ephemeralStorage` objects for a task definition.

```
{
   "family": "",
   ...
   "containerDefinitions" : [
      {
         "mountPoints" : [
            {
               "containerPath" : "/path/to/mount_volume",
               "sourceVolume" : "string"
            }
          ],
          "name" : "string"
       }
    ],
    ...
    "volumes" : [
       {
          "name" : "string"
       }
    ],
    "ephemeralStorage": {
        "sizeInGiB": integer
    }
}
```

For Amazon ECS tasks that are hosted on Amazon EC2 instances, you can use the optional `host` parameter and a `sourcePath` when specifying the task volume details. When it's specified, it ties the bind mount to the lifecycle of the task rather than the container.

```
"volumes" : [
    {
        "host" : {
            "sourcePath" : "string"
        },
        "name" : "string"
    }
]
```

The following describes each task definition parameter in more detail.

`name`  
Type: String  
Required: No  
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, hyphens (`-`), and underscores (`_`) are allowed. This name is referenced in the `sourceVolume` parameter of the container definition `mountPoints` object.

`host`  
Required: No  
The `host` parameter ties the lifecycle of the bind mount to the host Amazon EC2 instance, rather than the task, and determines where the data is stored. If the `host` parameter is empty, then the Docker daemon assigns a host path for your data volume, but the data is not guaranteed to persist after the containers associated with it stop running.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`.  
The `sourcePath` parameter is supported only when using tasks that are hosted on Amazon EC2 instances or Amazon ECS Managed Instances.  
`sourcePath`  
Type: String  
Required: No  
When the `host` parameter is used, specify a `sourcePath` to declare the path on the host Amazon EC2 instance that is presented to the container. If this parameter is empty, then the Docker daemon assigns a host path for you. If the `host` parameter contains a `sourcePath` file location, then the data volume persists at the specified location on the host Amazon EC2 instance until you delete it manually. If the `sourcePath` value does not exist on the host Amazon EC2 instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.

`mountPoints`  
Type: Object array  
Required: No  
The mount points for the data volumes in your container. This parameter maps to `Volumes` in the create-container Docker API and the `--volume` option to docker run.  
Windows containers can mount whole directories on the same drive as `$env:ProgramData`. Windows containers cannot mount directories on a different drive, and mount points cannot be used across drives. You must specify mount points to attach an Amazon EBS volume directly to an Amazon ECS task.    
`sourceVolume`  
Type: String  
Required: Yes, when `mountPoints` are used  
The name of the volume to mount.  
`containerPath`  
Type: String  
Required: Yes, when `mountPoints` are used  
The path in the container where the volume will be mounted.  
`readOnly`  
Type: Boolean  
Required: No  
If this value is `true`, the container has read-only access to the volume. If this value is `false`, then the container can write to the volume. The default value is `false`.  
For tasks that run on EC2 instances running the Windows operating system, leave the value as the default of `false`.

`ephemeralStorage`  
Type: Object  
Required: No  
The amount of ephemeral storage to allocate for the task. This parameter is used to expand the total amount of ephemeral storage available, beyond the default amount, for tasks hosted on AWS Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows).  
You can use the Copilot CLI, CloudFormation, the AWS SDK, or the CLI to specify ephemeral storage for a bind mount.

# Bind mount examples for Amazon ECS


The following examples cover the common use cases for using a bind mount for your containers.

**To allocate an increased amount of ephemeral storage space for a Fargate task**

For Amazon ECS tasks that are hosted on Fargate using platform version `1.4.0` or later (Linux) or `1.0.0` or later (Windows), you can allocate more than the default amount of ephemeral storage for the containers in your task to use. This example can be incorporated into the other examples to allocate more ephemeral storage for your Fargate tasks.
+ In the task definition, define an `ephemeralStorage` object. The `sizeInGiB` must be an integer between the values of `21` and `200` and is expressed in GiB.

  ```
  "ephemeralStorage": {
      "sizeInGiB": integer
  }
  ```

**To provide an empty data volume for one or more containers**

In some cases, you want to provide the containers in a task some scratch space. For example, you might have two database containers that need to access the same scratch file storage location during a task. This can be achieved using a bind mount.

1. In the task definition `volumes` section, define a bind mount with the name `database_scratch`.

   ```
     "volumes": [
       {
         "name": "database_scratch"
       }
     ]
   ```

1. In the `containerDefinitions` section, create the database container definitions so that they mount the volume.

   ```
   "containerDefinitions": [
       {
         "name": "database1",
         "image": "my-repo/database",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "database_scratch",
             "containerPath": "/var/scratch"
           }
         ]
       },
       {
         "name": "database2",
         "image": "my-repo/database",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "database_scratch",
             "containerPath": "/var/scratch"
           }
         ]
       }
     ]
   ```

**To expose a path and its contents in a Dockerfile to a container**

In this example, you have a Dockerfile that writes data that you want to mount inside a container. This example works for tasks that are hosted on Fargate or Amazon EC2 instances.

1. Create a Dockerfile. The following example uses the public Amazon Linux 2 container image and creates a file that's named `examplefile` in the `/var/log/exported` directory that we want to mount inside the container. The `VOLUME` directive should specify an absolute path.

   ```
   FROM public.ecr.aws/amazonlinux/amazonlinux:latest
   RUN mkdir -p /var/log/exported
   RUN touch /var/log/exported/examplefile
   VOLUME ["/var/log/exported"]
   ```

   By default, the volume permissions are set to `0755` and the owner as `root`. These permissions can be changed in the Dockerfile. In the following example, the owner of the `/var/log/exported` directory is set to `node`.

   ```
   FROM public.ecr.aws/amazonlinux/amazonlinux:latest
   RUN yum install -y shadow-utils && yum clean all
   RUN useradd node
   RUN mkdir -p /var/log/exported && chown node:node /var/log/exported
   USER node
   RUN touch /var/log/exported/examplefile
   VOLUME ["/var/log/exported"]
   ```

1. In the task definition `volumes` section, define a volume with the name `application_logs`.

   ```
     "volumes": [
       {
         "name": "application_logs"
       }
     ]
   ```

1. In the `containerDefinitions` section, create the application container definitions so that they mount the storage. The `containerPath` value must match the absolute path that's specified in the `VOLUME` directive from the Dockerfile.

   ```
     "containerDefinitions": [
       {
         "name": "application1",
         "image": "my-repo/application",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "application_logs",
             "containerPath": "/var/log/exported"
           }
         ]
       },
       {
         "name": "application2",
         "image": "my-repo/application",
         "cpu": 100,
         "memory": 100,
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "application_logs",
             "containerPath": "/var/log/exported"
           }
         ]
       }
     ]
   ```

**To provide an empty data volume for a container that's tied to the lifecycle of the host Amazon EC2 instance**

For tasks that are hosted on Amazon EC2 instances, you can use bind mounts and have the data tied to the lifecycle of the host Amazon EC2 instance. You can do this by using the `host` parameter and specifying a `sourcePath` value. Any files that exist at the `sourcePath` are presented to the containers at the `containerPath` value. Any files that are written to the `containerPath` value are written to the `sourcePath` value on the host Amazon EC2 instance.
**Important**  
Amazon ECS doesn't sync your storage across Amazon EC2 instances. Tasks that use persistent storage can be placed on any Amazon EC2 instance in your cluster that has available capacity. If your tasks require persistent storage after stopping and restarting, always specify the same Amazon EC2 instance at task launch time with the AWS CLI [start-task](https://docs.aws.amazon.com/cli/latest/reference/ecs/start-task.html) command. You can also use Amazon EFS volumes for persistent storage. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).

1. In the task definition `volumes` section, define a bind mount with `name` and `sourcePath` values. In the following example, the host Amazon EC2 instance contains data at `/ecs/webdata` that you want to mount inside the container.

   ```
     "volumes": [
       {
         "name": "webdata",
         "host": {
           "sourcePath": "/ecs/webdata"
         }
       }
     ]
   ```

1. In the `containerDefinitions` section, define a container with a `mountPoints` value that references the name of the bind mount, and specify the `containerPath` value at which to mount the bind mount in the container.

   ```
     "containerDefinitions": [
       {
         "name": "web",
         "image": "public.ecr.aws/docker/library/nginx:latest",
         "cpu": 99,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webdata",
             "containerPath": "/usr/share/nginx/html"
           }
         ]
       }
     ]
   ```

**To mount a defined volume on multiple containers at different locations**

You can define a data volume in a task definition and mount that volume at different locations on different containers. For example, your container host has a website data folder at `/data/webroot`. You might want to mount that data volume as read-only on two different web servers that have different document roots.

1. In the task definition `volumes` section, define a data volume with the name `webroot` and the source path `/data/webroot`.

   ```
     "volumes": [
       {
         "name": "webroot",
         "host": {
           "sourcePath": "/data/webroot"
         }
       }
     ]
   ```

1. In the `containerDefinitions` section, define a container for each web server with `mountPoints` values that associate the `webroot` volume with the `containerPath` value pointing to the document root for that container.

   ```
     "containerDefinitions": [
       {
         "name": "web-server-1",
         "image": "my-repo/ubuntu-apache",
         "cpu": 100,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webroot",
             "containerPath": "/var/www/html",
             "readOnly": true
           }
         ]
       },
       {
         "name": "web-server-2",
         "image": "my-repo/sles11-apache",
         "cpu": 100,
         "memory": 100,
         "portMappings": [
           {
             "containerPort": 8080,
             "hostPort": 8080
           }
         ],
         "essential": true,
         "mountPoints": [
           {
             "sourceVolume": "webroot",
             "containerPath": "/srv/www/htdocs",
             "readOnly": true
           }
         ]
       }
     ]
   ```

**To mount volumes from another container using `volumesFrom`**

For tasks hosted on Amazon EC2 instances, you can define one or more volumes on a container, and then use the `volumesFrom` parameter in a different container definition within the same task to mount all of the volumes from the `sourceContainer` at their originally defined mount points. The `volumesFrom` parameter applies to volumes defined in the task definition, and those that are built into the image with a Dockerfile.

1. (Optional) To share a volume that is built into an image, use the `VOLUME` instruction in the Dockerfile. The following example Dockerfile uses an `httpd` image, and then adds a volume and mounts it at `dockerfile_volume` in the Apache document root, which is the folder served by the `httpd` web server.

   ```
   FROM httpd
   VOLUME ["/usr/local/apache2/htdocs/dockerfile_volume"]
   ```

   You can build an image with this Dockerfile and push it to a repository, such as Docker Hub, and use it in your task definition. The example `my-repo/httpd_dockerfile_volume` image that's used in the following steps was built with the preceding Dockerfile.

1. Create a task definition that defines your other volumes and mount points for the containers. In this example `volumes` section, you create an empty volume called `empty`, which the Docker daemon manages. There's also a host volume defined that's called `host_etc`. It exports the `/etc` folder on the host container instance.

   ```
   {
     "family": "test-volumes-from",
     "volumes": [
       {
         "name": "empty",
         "host": {}
       },
       {
         "name": "host_etc",
         "host": {
           "sourcePath": "/etc"
         }
       }
     ],
   ```

   In the container definitions section, create a container that mounts the volumes defined earlier. In this example, the `web` container mounts the `empty` and `host_etc` volumes. This is the container that uses the image built with a volume in the Dockerfile.

   ```
   "containerDefinitions": [
       {
         "name": "web",
         "image": "my-repo/httpd_dockerfile_volume",
         "cpu": 100,
         "memory": 500,
         "portMappings": [
           {
             "containerPort": 80,
             "hostPort": 80
           }
         ],
         "mountPoints": [
           {
             "sourceVolume": "empty",
             "containerPath": "/usr/local/apache2/htdocs/empty_volume"
           },
           {
             "sourceVolume": "host_etc",
             "containerPath": "/usr/local/apache2/htdocs/host_etc"
           }
         ],
         "essential": true
       },
   ```

   Create another container that uses `volumesFrom` to mount all of the volumes that are associated with the `web` container. All of the volumes on the `web` container are likewise mounted on the `busybox` container. This includes the volume that's specified in the Dockerfile that was used to build the `my-repo/httpd_dockerfile_volume` image.

   ```
       {
         "name": "busybox",
         "image": "busybox",
         "volumesFrom": [
           {
             "sourceContainer": "web"
           }
         ],
         "cpu": 100,
         "memory": 500,
         "entryPoint": [
           "sh",
           "-c"
         ],
         "command": [
           "echo $(date) > /usr/local/apache2/htdocs/empty_volume/date && echo $(date) > /usr/local/apache2/htdocs/host_etc/date && echo $(date) > /usr/local/apache2/htdocs/dockerfile_volume/date"
         ],
         "essential": false
       }
     ]
   }
   ```

   When this task is run, the two containers mount the volumes, and the `command` in the `busybox` container writes the date and time to a file. This file is called `date` in each of the volume folders. The folders are then visible at the website displayed by the `web` container.
**Note**  
Because the `busybox` container runs a quick command and then exits, it must be set as `"essential": false` in the container definition. Otherwise, it stops the entire task when it exits.

# Managing container swap memory space on Amazon ECS
Managing container swap memory space

With Amazon ECS, you can control the usage of swap memory space on your Linux-based Amazon EC2 instances at the container level. Using a per-container swap configuration, each container within a task definition can have swap enabled or disabled. For those that have it enabled, the maximum amount of swap space that's used can be limited. For example, latency-critical containers can have swap disabled. In contrast, containers with high transient memory demands can have swap turned on to reduce the chances of out-of-memory errors when the container is under load.

The swap configuration for a container is managed by the following container definition parameters.

`maxSwap`  
The total amount of swap memory (in MiB) a container can use. This parameter is translated to the `--memory-swap` option to docker run where the value is the sum of the container memory plus the `maxSwap` value.  
If a `maxSwap` value of `0` is specified, the container doesn't use swap. Accepted values are `0` or any positive integer. If the `maxSwap` parameter is omitted, the container uses the swap configuration for the container instance that it's running on. A `maxSwap` value must be set for the `swappiness` parameter to be used.

`swappiness`  
You can use this to tune a container's memory swappiness behavior. A `swappiness` value of `0` causes swapping to not occur unless required. A `swappiness` value of `100` causes pages to be swapped aggressively. Accepted values are whole numbers between `0` and `100`. If the `swappiness` parameter isn't specified, a default value of `60` is used. If a value isn't specified for `maxSwap`, this parameter is ignored. This parameter maps to the `--memory-swappiness` option to docker run.

The following shows the JSON syntax:

```
"containerDefinitions": [{
        ...
        "linuxParameters": {
            "maxSwap": integer,
            "swappiness": integer
        },
        ...
}]
```
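
As a hedged illustration of these parameters (the container name and values are only examples), the following snippet enables swap for a container with 1024 MiB of memory and caps its swap usage at 512 MiB. With these values, the `--memory-swap` value passed to docker run is the sum of the container memory and `maxSwap`: 1024 + 512 = 1536 MiB.

```
"containerDefinitions": [{
        "name": "batch-worker",
        "memory": 1024,
        "linuxParameters": {
            "maxSwap": 512,
            "swappiness": 60
        }
}]
```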

## Considerations


Consider the following when you use a per-container swap configuration.
+ Swap space must be enabled and allocated on the Amazon EC2 instance hosting your tasks for the containers to use. By default, the Amazon ECS-optimized AMIs do not have swap enabled. You must enable swap on the instance to use this feature. For more information, see [Instance Store Swap Volumes](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-store-swap-volumes.html) in the *Amazon EC2 User Guide* or [How do I allocate memory to work as swap space in an Amazon EC2 instance?](https://repost.aws/knowledge-center/ec2-memory-swap-file).
+ The swap space container definition parameters are only supported for task definitions that use the EC2 launch type. They are not supported for task definitions intended only for use with Amazon ECS on Fargate.
+ This feature is only supported for Linux containers. Windows containers are not supported currently.
+ If the `maxSwap` and `swappiness` container definition parameters are omitted from a task definition, each container has a default `swappiness` value of `60`. Moreover, the total swap usage is limited to two times the memory of the container.
+ If you're using tasks on Amazon Linux 2023, the `swappiness` parameter isn't supported.

# Amazon ECS task definition differences for Amazon ECS Managed Instances
Task definition differences for Amazon ECS Managed Instances

In order to use Amazon ECS Managed Instances, you must configure your task definition to use the Amazon ECS Managed Instances launch type. There are additional considerations when using Amazon ECS Managed Instances.

## Task definition parameters


Tasks that use Amazon ECS Managed Instances support most of the Amazon ECS task definition parameters that are available. However, some parameters have specific behaviors or limitations when used with Amazon ECS Managed Instances tasks.

The following task definition parameters are not valid in Amazon ECS Managed Instances tasks:
+ `disableNetworking`
+ `dnsSearchDomains`
+ `dnsServers`
+ `dockerLabels`
+ `dockerSecurityOptions`
+ `dockerVolumeConfiguration`
+ `ephemeralStorage`
+ `extraHosts`
+ `fsxWindowsFileServerVolumeConfiguration`
+ `hostname`
+ `inferenceAccelerator`
+ `ipcMode`
+ `links`
+ `maxSwap`
+ `proxyConfiguration`
+ `sharedMemorySize`
+ `sourcePath` volumes
+ `swappiness`
+ `tmpfs`

The following task definition parameters are valid in Amazon ECS Managed Instances tasks, but have limitations that should be noted:
+ `networkConfiguration` - Amazon ECS Managed Instances tasks use the `awsvpc` or `host` network mode.
+ `placementConstraints` - The following constraint attributes are supported.
  + `ecs.subnet-id`
  + `ecs.availability-zone`
  + `ecs.instance-type`
  + `ecs.cpu-architecture`
+ `requiresCompatibilities` - Must include `MANAGED_INSTANCES` to ensure the task definition is compatible with Amazon ECS Managed Instances.
+ `resourceRequirements` - `InferenceAccelerator` is not supported.
+ `operatingSystemFamily` - Amazon ECS Managed Instances use `LINUX`.
+ `volumes` - When using bind mounts with a `sourcePath`, the path must point to a writable directory on the host. Portions of the Amazon ECS Managed Instance filesystem are read-only. Writable directories include `/var` and `/tmp`. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).
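
As a hedged sketch of the bind mount limitation above (the volume name and path are only examples), a `volumes` entry for Managed Instances must point at a writable host directory, such as one under `/var`:

```
  "volumes": [
    {
      "name": "app-data",
      "host": {
        "sourcePath": "/var/app-data"
      }
    }
  ]
```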

To ensure that your task definition validates for use with Amazon ECS Managed Instances, you can specify the following when you register the task definition: 
+ In the AWS Management Console, for the **Requires Compatibilities** field, specify `MANAGED_INSTANCES`.
+ In the AWS CLI, specify the `--requires-compatibilities` option.
+ In the Amazon ECS API, specify the `requiresCompatibilities` flag.
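
For example, a registration command with the AWS CLI might look like the following sketch (the input file name is illustrative):

```
aws ecs register-task-definition \
    --cli-input-json file://my-task-def.json \
    --requires-compatibilities MANAGED_INSTANCES
```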

# Amazon ECS task definition differences for Fargate
Task definition differences for Fargate

In order to use Fargate, you must configure your task definition to use the Fargate launch type. There are additional considerations when using Fargate.

## Task definition parameters


Tasks that use Fargate don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently for Fargate tasks.

The following task definition parameters are not valid in Fargate tasks:
+ `disableNetworking`
+ `dnsSearchDomains`
+ `dnsServers`
+ `dockerSecurityOptions`
+ `extraHosts`
+ `gpu`
+ `ipcMode`
+ `links`
+ `placementConstraints`
+ `privileged`
+ `maxSwap`
+ `swappiness`

The following task definition parameters are valid in Fargate tasks, but have limitations that should be noted:
+ `linuxParameters` – When specifying Linux-specific options that are applied to the container, for `capabilities` the only capability you can add is `CAP_SYS_PTRACE`. The `devices`, `sharedMemorySize`, and `tmpfs` parameters are not supported. For more information, see [Linux parameters](task_definition_parameters.md#container_definition_linuxparameters).
+ `volumes` – Fargate tasks only support bind mount host volumes, so the `dockerVolumeConfiguration` parameter is not supported. For more information, see [Volumes](task_definition_parameters.md#volumes).
+ `cpu` - For Windows containers on AWS Fargate, the value cannot be less than 1 vCPU.
+ `networkConfiguration` - Fargate tasks always use the `awsvpc` network mode.

To ensure that your task definition validates for use with Fargate, you can specify the following when you register the task definition: 
+ In the AWS Management Console, for the **Requires Compatibilities** field, specify `FARGATE`.
+ In the AWS CLI, specify the `--requires-compatibilities` option.
+ In the Amazon ECS API, specify the `requiresCompatibilities` flag.
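
For example, in the task definition JSON, the compatibility requirement can be declared as follows (the family name is illustrative):

```
{
  "family": "my-fargate-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": [ "FARGATE" ],
  ...
}
```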

## Operating Systems and architectures


When you configure a task and container definition for AWS Fargate, you must specify the operating system that the container runs on. The following operating systems are supported for AWS Fargate:
+ Amazon Linux 2
**Note**  
Linux containers use only the kernel and kernel configuration from the host operating system. For example, the kernel configuration includes the `sysctl` system controls. A Linux container image can be made from a base image that contains the files and programs from any Linux distribution. If the CPU architecture matches, you can run containers from any Linux container image on any operating system.
+ Windows Server 2019 Full
+ Windows Server 2019 Core
+ Windows Server 2022 Full
+ Windows Server 2022 Core

When you run Windows containers on AWS Fargate, you must use the X86_64 CPU architecture.

When you run Linux containers on AWS Fargate, you can use the X86_64 CPU architecture, or the ARM64 architecture for your ARM-based applications. For more information, see [Amazon ECS task definitions for 64-bit ARM workloads](ecs-arm64.md).

## Task CPU and memory


Amazon ECS task definitions for AWS Fargate require that you specify CPU and memory at the task level. Most use cases are satisfied by only specifying these resources at the task level. The table below shows the valid combinations of task-level CPU and memory. You can specify memory values in the task definition as a string in MiB or GB. For example, you can specify a memory value either as `3072` in MiB or `3 GB` in GB. You can specify CPU values in the JSON file as a string in CPU units or virtual CPUs (vCPUs). For example, you can specify a CPU value either as `1024` in CPU units or `1 vCPU` in vCPUs.


|  CPU value  |  Memory value  |  Operating systems supported for AWS Fargate  | 
| --- | --- | --- | 
|  256 (.25 vCPU)  |  512 MiB, 1 GB, 2 GB  |  Linux  | 
|  512 (.5 vCPU)  |  1 GB, 2 GB, 3 GB, 4 GB  |  Linux  | 
|  1024 (1 vCPU)  |  2 GB, 3 GB, 4 GB, 5 GB, 6 GB, 7 GB, 8 GB  |  Linux, Windows  | 
|  2048 (2 vCPU)  |  Between 4 GB and 16 GB in 1 GB increments  |  Linux, Windows  | 
|  4096 (4 vCPU)  |  Between 8 GB and 30 GB in 1 GB increments  |  Linux, Windows  | 
|  8192 (8 vCPU)  This option requires Linux platform `1.4.0` or later.   |  Between 16 GB and 60 GB in 4 GB increments  |  Linux  | 
|  16384 (16 vCPU)  This option requires Linux platform `1.4.0` or later.   |  Between 32 GB and 120 GB in 8 GB increments  |  Linux  | 
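
As a hedged illustration of the string forms described above (the family name is only an example), a Fargate task definition might specify a valid CPU and memory combination from the table like this:

```
{
  "family": "sample-fargate",
  "cpu": "1 vCPU",
  "memory": "3 GB",
  ...
}
```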

## Task networking


Amazon ECS tasks for AWS Fargate require the `awsvpc` network mode, which provides each task with an elastic network interface. When you run a task or create a service with this network mode, you must specify one or more subnets to attach the network interface and one or more security groups to apply to the network interface. 

If you are using public subnets, decide whether to provide a public IP address for the network interface. For a Fargate task in a public subnet to pull container images, a public IP address needs to be assigned to the task's elastic network interface, with a route to the internet or a NAT gateway that can route requests to the internet. For a Fargate task in a private subnet to pull container images, you need a NAT gateway in the subnet to route requests to the internet. When you host your container images in Amazon ECR, you can configure Amazon ECR to use an interface VPC endpoint. In this case, the task's private IPv4 address is used for the image pull. For more information about Amazon ECR interface endpoints, see [Amazon ECR interface VPC endpoints (AWS PrivateLink)](https://docs.aws.amazon.com/AmazonECR/latest/userguide/vpc-endpoints.html) in the *Amazon Elastic Container Registry User Guide*.

The following is an example of the `networkConfiguration` section for a Fargate service:

```
"networkConfiguration": { 
   "awsvpcConfiguration": { 
      "assignPublicIp": "ENABLED",
      "securityGroups": [ "sg-12345678" ],
      "subnets": [ "subnet-12345678" ]
   }
}
```

## Task resource limits


Amazon ECS task definitions for Linux containers on AWS Fargate support the `ulimits` parameter to define the resource limits to set for a container.

Amazon ECS task definitions for Windows on AWS Fargate do not support the `ulimits` parameter to define the resource limits to set for a container.

Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the `nofile` resource limit parameter. The `nofile` resource limit sets a restriction on the number of open files that a container can use. On Fargate, the default `nofile` soft limit is `65535` and hard limit is `65535`. You can set the values of both limits up to `1048576`.

The following is an example task definition snippet that shows how to define custom `nofile` soft and hard limits:

```
"ulimits": [
    {
       "name": "nofile",
       "softLimit": 2048,
       "hardLimit": 8192
    }
]
```

For more information on the other resource limits that can be adjusted, see [Resource limits](task_definition_parameters.md#container_definition_limits).

## Logging


### Event logging


Amazon ECS logs the actions that it takes to EventBridge. You can use Amazon ECS events for EventBridge to receive near real-time notifications regarding the current state of your Amazon ECS clusters, services, and tasks. Additionally, you can automate actions to respond to these events. For more information, see [Automate responses to Amazon ECS errors using EventBridge](cloudwatch_event_stream.md).

### Task lifecycle logging


Tasks that run on Fargate publish timestamps to track the task through the states of the task lifecycle. You can see the timestamps in the task details in the AWS Management Console and by describing the task in the AWS CLI and SDKs. For example, you can use the timestamps to evaluate how much time the task spent downloading the container images and decide if you should optimize the container image size, or use Seekable OCI indexes. For more information about container image practices, see [Best practices for Amazon ECS container images](container-considerations.md).

### Application logging


Amazon ECS task definitions for AWS Fargate support the `awslogs`, `splunk`, and `awsfirelens` log drivers for the log configuration.

The `awslogs` log driver configures your Fargate tasks to send log information to Amazon CloudWatch Logs. The following shows a snippet of a task definition where the `awslogs` log driver is configured:

```
"logConfiguration": { 
   "logDriver": "awslogs",
   "options": { 
      "awslogs-group" : "/ecs/fargate-task-definition",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "ecs"
   }
}
```

For more information about using the `awslogs` log driver in a task definition to send your container logs to CloudWatch Logs, see [Send Amazon ECS logs to CloudWatch](using_awslogs.md).

For more information about the `awsfirelens` log driver in a task definition, see [Send Amazon ECS logs to an AWS service or AWS Partner](using_firelens.md).

For more information about using the `splunk` log driver in a task definition, see [`splunk` log driver](example_task_definitions.md#example_task_definition-splunk).

## Task storage


For Amazon ECS tasks hosted on Fargate, the following storage types are supported:
+ Amazon EBS volumes provide cost-effective, durable, high-performance block storage for data-intensive containerized workloads. For more information, see [Use Amazon EBS volumes with Amazon ECS](ebs-volumes.md).
+ Amazon EFS volumes for persistent storage. For more information, see [Use Amazon EFS volumes with Amazon ECS](efs-volumes.md).
+ Bind mounts for ephemeral storage. For more information, see [Use bind mounts with Amazon ECS](bind-mounts.md).
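
For example, a hedged `volumes` entry for an Amazon EFS volume might look like the following sketch (the volume name and file system ID are illustrative):

```
  "volumes": [
    {
      "name": "efs-storage",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-1234567890abcdef0",
        "rootDirectory": "/"
      }
    }
  ]
```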

## Lazy loading container images using Seekable OCI (SOCI)
Lazy loading SOCI container images

Amazon ECS tasks on Fargate that use Linux platform version `1.4.0` can use Seekable OCI (SOCI) to help start tasks faster. With SOCI, containers only spend a few seconds on the image pull before they can start, providing time for environment setup and application instantiation while the image is downloaded in the background. This is called *lazy loading*. When Fargate starts an Amazon ECS task, Fargate automatically detects if a SOCI index exists for an image in the task and starts the container without waiting for the entire image to be downloaded.

For containers that run without SOCI indexes, container images are downloaded completely before the container is started. This behavior is the same on all other platform versions of Fargate and on the Amazon ECS-optimized AMI on Amazon EC2 instances.

Seekable OCI (SOCI) is an open source technology developed by AWS that can launch containers faster by lazily loading the container image. SOCI works by creating an index (SOCI index) of the files within an existing container image. This index helps to launch containers faster, providing the capability to extract an individual file from a container image before downloading the entire image. The SOCI index must be stored as an artifact in the same repository as the image within the container registry. You should only use SOCI indexes from trusted sources, because the index is the authoritative source for the contents of the image. For more information, see [Introducing Seekable OCI for lazy loading container images](https://aws.amazon.com/about-aws/whats-new/2022/09/introducing-seekable-oci-lazy-loading-container-images/).

Customers looking to use SOCI can only use SOCI Index Manifest v2. Existing customers that have previously used SOCI on Fargate can continue to use SOCI Index Manifest v1; however, we strongly advise that those customers migrate to SOCI Index Manifest v2. SOCI Index Manifest v2 creates an explicit relationship between container images and their SOCI indexes to ensure consistent deployments.
<a name="fargate-soci-considerations"></a>
**Considerations**  
If you want Fargate to use a SOCI index to lazily load container images in a task, consider the following:
+ Only tasks that run on Linux platform version `1.4.0` can use SOCI indexes. Tasks that run Windows containers on Fargate aren't supported.
+ Tasks that run on the X86_64 or ARM64 CPU architecture are supported.
+ Container images in the task definition must be stored in a compatible image registry. The following lists the compatible registries:
  + Amazon ECR private registries.
+ Only container images that use gzip compression, or are not compressed are supported. Container images that use zstd compression aren't supported.
+ For SOCI Index Manifest v2, generating a SOCI index manifest modifies the container image manifest, because an annotation for the SOCI index is added. This results in a new container image digest. The contents of the container image's filesystem layers do not change.
+ For SOCI Index Manifest v2, if the container image was already stored in the container image repository before the SOCI index was generated, you need to push the container image again. Pushing the container image again doesn't increase storage costs by duplicating the filesystem layers; it only uploads a new manifest file.
+ We recommend that you try lazy loading with container images greater than 250 MiB compressed in size. You are less likely to see a reduction in the time to load smaller images.
+ Because lazy loading can change how long your tasks take to start, you might need to change various timeouts like the health check grace period for Elastic Load Balancing.
+ If you want to prevent a container image from being lazy loaded, push the container image again without a SOCI index attached.
<a name="create-soci"></a>
**Creating a Seekable OCI index**  
For a container image to be lazy loaded, it needs a SOCI index (a metadata file) created and stored in the container image repository alongside the container image. To create and push a SOCI index, you can use the open source [soci-snapshotter CLI tool](https://github.com/awslabs/soci-snapshotter) on GitHub. Or, you can deploy the CloudFormation AWS SOCI Index Builder, a serverless solution that automatically creates and pushes a SOCI index when a container image is pushed to Amazon ECR. For more information about the solution and the installation steps, see [CloudFormation AWS SOCI Index Builder](https://awslabs.github.io/cfn-ecr-aws-soci-index-builder/) on GitHub. The CloudFormation AWS SOCI Index Builder is a way to automate getting started with SOCI, while the open source soci-snapshotter tool has more flexibility around index generation and can integrate index generation into your continuous integration and continuous delivery (CI/CD) pipelines.

**Note**  
For the SOCI index to be created for an image, the image must exist in the containerd image store on the computer running `soci-snapshotter`. If the image is in the Docker image store, the image can't be found.
<a name="verify-soci"></a>
**Verifying that a task used lazy loading**  
To verify that a task was lazily loaded using SOCI, check the task metadata endpoint from inside the task. When you query task metadata endpoint version 4, there is a `Snapshotter` field in the default path for the container that you are querying from. Additionally, there are `Snapshotter` fields for each container in the `/task` path. The default value for this field is `overlayfs`, and this field is set to `soci` if SOCI is used. To verify that a container image has a SOCI Index Manifest v2 attached, you can retrieve the image index from Amazon ECR using the AWS CLI.

```
IMAGE_REPOSITORY=r
IMAGE_TAG=latest

aws ecr batch-get-image \
    --repository-name=$IMAGE_REPOSITORY \
    --image-ids imageTag=$IMAGE_TAG \
    --query 'images[0].imageManifest' --output text | jq -r '.manifests[] | select(.artifactType=="application/vnd.amazon.soci.index.v2+json")'
```
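
To check the `Snapshotter` field described earlier, you can query the task metadata endpoint from inside a running container. The following is a hedged sketch; it assumes that `curl` and `jq` are available in the container image:

```
curl --silent ${ECS_CONTAINER_METADATA_URI_V4}/task | jq -r '.Containers[].Snapshotter'
```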

To verify that a container image has a SOCI Index Manifest v1 attached, you can use the OCI Referrers API.

```
ACCOUNT_ID=111222333444
AWS_REGION=us-east-1
IMAGE_REPOSITORY=nginx-demo
IMAGE_TAG=latest
IMAGE_DIGEST=$(aws ecr describe-images --repository-name $IMAGE_REPOSITORY --image-ids imageTag=$IMAGE_TAG --query 'imageDetails[0].imageDigest' --output text)
ECR_PASSWORD=$(aws ecr get-login-password)

curl \
    --silent \
    --user AWS:$ECR_PASSWORD \
    https://$ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/v2/$IMAGE_REPOSITORY/referrers/$IMAGE_DIGEST?artifactType=application%2Fvnd.amazon.soci.index.v1%2Bjson | jq -r '.'
```

# Amazon ECS task definition differences for EC2 instances running Windows
Task definition differences for EC2 instances running Windows

Tasks that run on EC2 Windows instances don't support all of the Amazon ECS task definition parameters that are available. Some parameters aren't supported at all, and others behave differently.

The following task definition parameters aren't supported for Amazon EC2 Windows task definitions:
+ `containerDefinitions`
  + `disableNetworking`
  + `dnsServers`
  + `dnsSearchDomains`
  + `extraHosts`
  + `links`
  + `linuxParameters`
  + `privileged`
  + `readonlyRootFilesystem`
  + `user`
  + `ulimits`
+ `volumes`
  + `dockerVolumeConfiguration`
+ `cpu`

  We recommend specifying container-level CPU for Windows containers.
+ `memory`

  We recommend specifying container-level memory for Windows containers.
+ `proxyConfiguration`
+ `ipcMode`
+ `pidMode`
+ `taskRoleArn`

  The IAM roles for tasks feature on EC2 Windows instances requires additional configuration, but much of this configuration is similar to configuring IAM roles for tasks on Linux container instances. For more information, see [Amazon EC2 Windows instance additional configuration](task-iam-roles.md#windows_task_IAM_roles).