Describes a daemon task definition. You can specify a family and revision to find information about a specific daemon task definition, or you can simply specify the family to find the latest ACTIVE revision in that family.
See also: AWS API Documentation
describe-daemon-task-definition
--daemon-task-definition <value>
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
[--cli-binary-format <value>]
[--no-cli-pager]
[--cli-auto-prompt]
[--no-cli-auto-prompt]
[--cli-error-format <value>]
--daemon-task-definition (string) [required]
The family for the latest ACTIVE revision, family and revision (family:revision) for a specific revision in the family, or the full Amazon Resource Name (ARN) of the daemon task definition to describe.
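For example, each of the following forms identifies a daemon task definition (a sketch; the family name, revision, account ID, and Region are hypothetical, and the command is assumed to live under the ecs service namespace that the surrounding Amazon ECS references suggest):

```shell
# Latest ACTIVE revision in the family
aws ecs describe-daemon-task-definition --daemon-task-definition my-daemon

# A specific revision (family:revision)
aws ecs describe-daemon-task-definition --daemon-task-definition my-daemon:3

# Full ARN
aws ecs describe-daemon-task-definition \
    --daemon-task-definition arn:aws:ecs:us-east-1:111122223333:task-definition/my-daemon:3
```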
--cli-input-json | --cli-input-yaml (string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton (string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
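A typical skeleton workflow, sketched with hypothetical file names (the exact key names in the generated skeleton may differ between CLI versions, as noted above):

```shell
# Generate an input skeleton, edit it, then feed it back in
aws ecs describe-daemon-task-definition --generate-cli-skeleton input > input.json
# ... edit input.json to set the daemon task definition to describe ...
aws ecs describe-daemon-task-definition --cli-input-json file://input.json
```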
--debug (boolean)
Turn on debug logging.
--endpoint-url (string)
Override command’s default URL with the given URL.
--no-verify-ssl (boolean)
By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.
--no-paginate (boolean)
Disable automatic pagination. If automatic pagination is disabled, the AWS CLI will only make one call, for the first page of results.
--output (string)
The formatting style for command output.
--query (string)
A JMESPath query to use in filtering the response data.
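For example, to extract just the family and revision from the response (a sketch; the family name is hypothetical, and the output shape follows the daemonTaskDefinition structure described in the output section):

```shell
aws ecs describe-daemon-task-definition \
    --daemon-task-definition my-daemon \
    --query 'daemonTaskDefinition.[family,revision]' \
    --output text
```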
--profile (string)
Use a specific profile from your credential file.
--region (string)
The region to use. Overrides config/env settings.
--version (string)
Display the version of this tool.
--color (string)
Turn on/off color output.
--no-sign-request (boolean)
Do not sign requests. Credentials will not be loaded if this argument is provided.
--ca-bundle (string)
The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.
--cli-read-timeout (int)
The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout. The default value is 60 seconds.
--cli-connect-timeout (int)
The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout. The default value is 60 seconds.
--cli-binary-format (string)
The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64 encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior and binary values must be passed literally. When providing contents from a file that map to a binary blob, fileb:// will always be treated as binary and use the file contents directly regardless of the cli-binary-format setting. When using file://, the file contents will need to be properly formatted for the configured cli-binary-format.
--no-cli-pager (boolean)
Disable cli pager for output.
--cli-auto-prompt (boolean)
Automatically prompt for CLI input parameters.
--no-cli-auto-prompt (boolean)
Disable the automatic prompting for CLI input parameters.
--cli-error-format (string)
The formatting style for error output. By default, errors are displayed in enhanced format.
daemonTaskDefinition -> (structure)
The full daemon task definition description.
daemonTaskDefinitionArn -> (string)
The full Amazon Resource Name (ARN) of the daemon task definition.
family -> (string)
The name of the family that this daemon task definition is registered to.
revision -> (integer)
The revision of the daemon task definition in a particular family. The revision is a version number of a daemon task definition in a family. When you register a daemon task definition for the first time, the revision is 1. Each time that you register a new revision of a daemon task definition in the same family, the revision value always increases by one.
taskRoleArn -> (string)
The short name or full Amazon Resource Name (ARN) of the IAM role that grants containers in the daemon task permission to call Amazon Web Services APIs on your behalf.
executionRoleArn -> (string)
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf.
containerDefinitions -> (list)
A list of container definitions in JSON format that describe the containers that make up the daemon task.
(structure)
A container definition for a daemon task. Daemon container definitions describe the containers that run as part of a daemon task on container instances managed by capacity providers.
name -> (string)
The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
image -> (string) [required]
The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest.
memory -> (integer)
The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.
memoryReservation -> (integer)
The soft limit (in MiB) of memory to reserve for the container.
repositoryCredentials -> (structure)
The private repository authentication credentials to use.
credentialsParameter -> (string) [required]
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
Note
When you use the Amazon ECS API, CLI, or Amazon Web Services SDK, if the secret exists in the same Region as the task that you’re launching, then you can use either the full ARN or the name of the secret. When you use the Amazon Web Services Management Console, you must specify the full ARN of the secret.
healthCheck -> (structure)
The container health check command and associated configuration parameters for the container.
command -> (list) [required]
A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell.
When you use the Amazon Web Services Management Console JSON panel, the Command Line Interface, or the APIs, enclose the list of commands in double quotes and brackets.
[ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]
You don’t include the double quotes and brackets when you use the Amazon Web Services Management Console.
CMD-SHELL, curl -f http://localhost/ || exit 1
An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
(string)
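Put together with the timing parameters defined below, a healthCheck object in a container definition might look like the following sketch (the endpoint and timing values are illustrative only):

```json
{
  "command": [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ],
  "interval": 30,
  "timeout": 5,
  "retries": 3,
  "startPeriod": 10
}
```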
interval -> (integer)
The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command.
timeout -> (integer)
The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
retries -> (integer)
The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command.
startPeriod -> (integer)
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command.
Note
If a health check succeeds within the startPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
cpu -> (integer)
The number of cpu units reserved for the container.
essential -> (boolean)
If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
entryPoint -> (list)
The entry point that’s passed to the container.
(string)
command -> (list)
The command that’s passed to the container.
(string)
workingDirectory -> (string)
The working directory to run commands inside the container in.
environmentFiles -> (list)
A list of files containing the environment variables to pass to a container.
(structure)
A list of files containing the environment variables to pass to a container. You can specify up to ten environment files. The file must have a .env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored.
If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they’re processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide.
Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply.
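The VARIABLE=VALUE format described above can be sketched locally (the file name and variables are hypothetical):

```shell
# Write a sample environment file in the format described above.
cat > sample.env <<'EOF'
# Lines beginning with # are comments and are ignored
DB_HOST=db.example.com
DB_PORT=5432
EOF

# Print only the lines that would be read as environment variables.
grep -v '^#' sample.env
```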
You must use the following platforms for the Fargate launch type:
- Linux platform version 1.4.0 or later.
- Windows platform version 1.0.0 or later.
Consider the following when using the Fargate launch type:
- The file is handled like a native Docker env-file.
- There is no support for shell escape handling.
- The container entry point interprets the VARIABLE values.
value -> (string) [required]
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
type -> (string) [required]
The file type to use. Environment files are objects in Amazon S3. The only supported value is s3.
Possible values: s3
environment -> (list)
The environment variables to pass to a container.
(structure)
A key-value pair object.
name -> (string)
The name of the key-value pair. For environment variables, this is the name of the environment variable.
value -> (string)
The value of the key-value pair. For environment variables, this is the value of the environment variable.
secrets -> (list)
The secrets to pass to the container.
(structure)
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
- To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
- To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name -> (string) [required]
The name of the secret.
valueFrom -> (string) [required]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
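As a sketch, a secrets entry that injects a Secrets Manager secret as an environment variable (the name and ARN are hypothetical):

```json
[
  {
    "name": "DB_PASSWORD",
    "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/db-AbCdEf"
  }
]
```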
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
Note
If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
readonlyRootFilesystem -> (boolean)
When this parameter is true, the container is given read-only access to its root file system.
mountPoints -> (list)
The mount points for data volumes in your container.
(structure)
The details for a volume mount point that’s used in a container definition.
sourceVolume -> (string)
The name of the volume to mount. Must be a volume name referenced in the name parameter of the task definition volume.
containerPath -> (string)
The path on the container to mount the host volume at.
readOnly -> (boolean)
If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
logConfiguration -> (structure)
The log configuration specification for the container.
logDriver -> (string) [required]
The log driver to use for the container.
For tasks on Fargate, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an Amazon Web Services service or Amazon Web Services Partner.
Note
If you have a custom driver that isn’t listed, you can fork the Amazon ECS container agent project that’s available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don’t currently provide support for running modified copies of this software.
Possible values: json-file | syslog | journald | gelf | fluentd | awslogs | splunk | awsfirelens
options -> (map)
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
awslogs-create-group
Required: No
Specify whether you want the log group to be created automatically. If this option isn’t specified, it defaults to false.
Note
Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
awslogs-region
Required: Yes
Specify the Amazon Web Services Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single Region in CloudWatch Logs. This is so that they’re all visible in one location. Otherwise, you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
awslogs-group
Required: Yes
Make sure to specify a log group that the awslogs log driver sends its log streams to.
awslogs-stream-prefix
Required: Yes, when using Fargate. Optional when using EC2.
Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id.
If you don’t specify a prefix with this option, then the log stream is named after the container ID that’s assigned by the Docker daemon on the container instance. Because it’s difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option.
For Amazon ECS services, you can use the service name as the prefix. Doing so, you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to.
You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
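The awslogs options above combine into a logConfiguration such as the following sketch (the group name, Region, and prefix are hypothetical):

```json
{
  "logDriver": "awslogs",
  "options": {
    "awslogs-create-group": "true",
    "awslogs-group": "/ecs/my-daemon",
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "my-daemon"
  }
}
```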
awslogs-datetime-format
Required: No
This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry.
For more information, see awslogs-datetime-format.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
Note
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
awslogs-multiline-pattern
Required: No
This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don’t match the pattern. The matched line is the delimiter between log messages.
For more information, see awslogs-multiline-pattern.
This option is ignored if awslogs-datetime-format is also configured.
You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options.
Note
Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.
mode
Required: No
Valid values: non-blocking | blocking
This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted.
If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container healthcheck failure.
If you use the non-blocking mode, the container’s logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver.
You can set a default mode for all containers in a specific Amazon Web Services Region by using the defaultLogDriverMode account setting. If you don’t specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide.
Note
On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following:
- Set the mode option in your container definition’s logConfiguration as blocking.
- Set the defaultLogDriverMode account setting to blocking.
max-buffer-size
Required: No
Default value: 10m
When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that’s used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url.
When you use the awsfirelens log router to route logs to an Amazon Web Services service or Amazon Web Services Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory before being sent to the log router container. It can help to resolve a potential log loss issue, because high throughput might result in memory running out for the buffer inside of Docker.
Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the Amazon Web Services Region with region and a name for the log stream with delivery_stream.
When you export logs to Amazon Kinesis Data Streams, you can specify an Amazon Web Services Region with region and a data stream name with stream.
When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks.
When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options.
This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
key -> (string)
value -> (string)
secretOptions -> (list)
The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide .
(structure)
An object representing the secret to expose to your container. Secrets can be exposed to a container in the following ways:
- To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
- To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
name -> (string) [required]
The name of the secret.
valueFrom -> (string) [required]
The secret to expose to the container. The supported values are either the full ARN of the Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.
For information about the required Identity and Access Management permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide.
Note
If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
firelensConfiguration -> (structure)
The FireLens configuration for the container. This is used to specify and configure a log router for container logs.
type -> (string) [required]
The log router to use. The valid values are fluentd or fluentbit.
Possible values: fluentd | fluentbit
options -> (map)
The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details, to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide.
Note
Tasks hosted on Fargate only support the file configuration file type.
key -> (string)
value -> (string)
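For example, a firelensConfiguration that enables log metadata and loads a custom Fluent Bit configuration file (the file path is hypothetical):

```json
{
  "type": "fluentbit",
  "options": {
    "enable-ecs-log-metadata": "true",
    "config-file-type": "file",
    "config-file-value": "/fluent-bit/etc/extra.conf"
  }
}
```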
privileged -> (boolean)
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user).
user -> (string)
The user to use inside the container.
ulimits -> (list)
A list of ulimits to set in the container.
(structure)
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on Fargate use the default resource limit values set by the operating system, with the exception of the nofile resource limit parameter, which Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535.
You can specify the ulimit settings for a container in a task definition.
name -> (string) [required]
The type of the ulimit.
Possible values: core | cpu | data | fsize | locks | memlock | msgqueue | nice | nofile | nproc | rss | rtprio | rttime | sigpending | stack
softLimit -> (integer) [required]
The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
hardLimit -> (integer) [required]
The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
linuxParameters -> (structure)
Linux-specific modifications that are applied to the container configuration, such as Linux kernel capabilities.
capabilities -> (structure)
The Linux capabilities for the container that are added to or dropped from the default configuration provided by Docker.
add -> (list)
The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run.
Note
Tasks launched on Fargate only support adding the SYS_PTRACE kernel capability.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string)
drop -> (list)
The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run.
Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
(string)
devices -> (list)
Any host devices to expose to the container.
(structure)
An object representing a container instance host device.
hostPath -> (string) [required]
The path for the device on the host container instance.
containerPath -> (string)
The path inside the container at which to expose the host device.
permissions -> (list)
The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
(string)
Possible values: read | write | mknod
initProcessEnabled -> (boolean)
Run an init process inside the container that forwards signals and reaps processes.
tmpfs -> (list)
The container path, mount options, and size (in MiB) of the tmpfs mount.
(structure)
The container path, mount options, and size of the tmpfs mount.
containerPath -> (string) [required]
The absolute file path where the tmpfs volume is to be mounted.
size -> (integer) [required]
The maximum size (in MiB) of the tmpfs volume.
mountOptions -> (list)
The list of tmpfs volume mount options.
Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
(string)
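A linuxParameters sketch combining the fields above (all values are illustrative):

```json
{
  "capabilities": { "add": [ "SYS_PTRACE" ], "drop": [ "NET_RAW" ] },
  "initProcessEnabled": true,
  "tmpfs": [
    {
      "containerPath": "/scratch",
      "size": 128,
      "mountOptions": [ "rw", "noexec" ]
    }
  ]
}
```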
dependsOn -> (list)
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.
(structure)
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you’re using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
Note
For tasks that use the Fargate launch type, the task or service requires the following platforms:
- Linux platform version 1.3.0 or later.
- Windows platform version 1.0.0 or later.
For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
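For example, a dependsOn list that waits for a log router container to pass its health check and for an init container to exit successfully (the container names are hypothetical):

```json
[
  { "containerName": "log-router", "condition": "HEALTHY" },
  { "containerName": "init-config", "condition": "SUCCESS" }
]
```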
containerName -> (string) [required]
The name of a container.
condition -> (string) [required]
The dependency condition of the container. The following are the available conditions and their behavior:
- START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
- COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can’t be set on an essential container.
- SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can’t be set on an essential container.
- HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
Possible values:
- START
- COMPLETE
- SUCCESS
- HEALTHY
startTimeout -> (integer)
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
stopTimeout -> (integer)
Time duration (in seconds) to wait before the container is forcefully killed if it doesn’t exit normally on its own.
systemControls -> (list)
A list of namespaced kernel parameters to set in the container.
(structure)
A list of namespaced kernel parameters to set in the container. This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer lived connections.
We don’t recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
- For tasks that use the awsvpc network mode, including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that’s started last determines which systemControls take effect.
- For tasks that use the host network mode, network namespace systemControls aren’t supported.
If you’re setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
- For tasks that use the host IPC mode, IPC namespace systemControls aren’t supported.
- For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
Note
This parameter is not supported for Windows containers.
Note
This parameter is only supported for tasks that are hosted on Fargate if the tasks are using platform version 1.4.0 or later (Linux). This isn’t supported for Windows containers on Fargate.
namespace -> (string)
The namespaced kernel parameter to set a value for.
value -> (string)
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*"
Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted.
All of these values are supported by Fargate.
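For illustration, a container definition that raises the TCP keepalive time mentioned above might include a systemControls entry like the following (the container name and value are hypothetical; note that system control values are passed as strings):

```json
{
  "name": "app",
  "systemControls": [
    {
      "namespace": "net.ipv4.tcp_keepalive_time",
      "value": "500"
    }
  ]
}
```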
interactive -> (boolean)
When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated.
pseudoTerminal -> (boolean)
When this parameter is true, a TTY is allocated.
restartPolicy -> (structure)
The restart policy for the container. When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.
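A sketch of a restart policy, using the fields described below (the exit code and period are illustrative): this configuration restarts the container on any nonzero exit, but never more than once every 300 seconds.

```json
{
  "name": "app",
  "restartPolicy": {
    "enabled": true,
    "ignoredExitCodes": [0],
    "restartAttemptPeriod": 300
  }
}
```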
enabled -> (boolean) [required]
Specifies whether a restart policy is enabled for the container.
ignoredExitCodes -> (list)
A list of exit codes that Amazon ECS will ignore and not attempt a restart on. You can specify a maximum of 50 container exit codes. By default, Amazon ECS does not ignore any exit codes.
(integer)
restartAttemptPeriod -> (integer)
A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every restartAttemptPeriod seconds. If a container isn’t able to run for this time period and exits early, it will not be restarted. You can set a minimum restartAttemptPeriod of 60 seconds and a maximum restartAttemptPeriod of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted.
volumes -> (list)
The list of data volume definitions for the daemon task.
(structure)
A data volume definition for a daemon task.
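As a sketch of the fields described below (the volume name and path are invented for illustration), a bind mount host volume that persists data at a fixed path on the container instance might be declared as:

```json
{
  "volumes": [
    {
      "name": "app-data",
      "host": {
        "sourcePath": "/var/lib/app-data"
      }
    }
  ]
}
```

Because sourcePath is specified, the data persists at that location on the host until deleted manually; this form is not supported on the Fargate launch type.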
name -> (string)
The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
host -> (structure)
The contents of the host parameter determine whether your bind mount host volume persists on the host container instance and where it’s stored.
sourcePath -> (string)
When the host parameter is used, specify a sourcePath to declare the path on the host container instance that’s presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn’t exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported.
If you’re using the Fargate launch type, the sourcePath parameter is not supported.
cpu -> (string)
The number of CPU units used by the daemon task.
memory -> (string)
The amount of memory (in MiB) used by the daemon task.
status -> (string)
The status of the daemon task definition. The valid values are ACTIVE, DELETE_IN_PROGRESS, and DELETED.
Possible values:
- ACTIVE
- DELETE_IN_PROGRESS
- DELETED
registeredAt -> (timestamp)
The Unix timestamp for the time when the daemon task definition was registered.
deleteRequestedAt -> (timestamp)
The Unix timestamp for the time when the daemon task definition delete was requested.
registeredBy -> (string)
The principal that registered the daemon task definition.
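Putting the trailing metadata fields together, a trimmed response might look like the following. This is a sketch: the top-level key name, timestamps, and ARN are assumptions for illustration, not taken from this document.

```json
{
  "daemonTaskDefinition": {
    "status": "ACTIVE",
    "registeredAt": "2024-01-15T10:30:00+00:00",
    "registeredBy": "arn:aws:iam::111122223333:user/example"
  }
}
```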