CfnDaemonTaskDefinition
- class aws_cdk.aws_ecs.CfnDaemonTaskDefinition(scope, id, *, container_definitions=None, cpu=None, execution_role_arn=None, family=None, memory=None, tags=None, task_role_arn=None, volumes=None)
Bases: CfnResource

The details of a daemon task definition.
A daemon task definition is a template that describes the containers that form a daemon. Daemons deploy cross-cutting software agents independently across your Amazon ECS infrastructure.
- CloudformationResource:
AWS::ECS::DaemonTaskDefinition
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import CfnTag
from aws_cdk import aws_ecs as ecs

cfn_daemon_task_definition = ecs.CfnDaemonTaskDefinition(self, "MyCfnDaemonTaskDefinition",
    container_definitions=[ecs.CfnDaemonTaskDefinition.DaemonContainerDefinitionProperty(
        image="image",
        name="name",
        # the properties below are optional
        command=["command"],
        cpu=123,
        depends_on=[ecs.CfnDaemonTaskDefinition.ContainerDependencyProperty(
            condition="condition",
            container_name="containerName"
        )],
        entry_point=["entryPoint"],
        environment=[ecs.CfnDaemonTaskDefinition.KeyValuePairProperty(
            name="name",
            value="value"
        )],
        environment_files=[ecs.CfnDaemonTaskDefinition.EnvironmentFileProperty(
            type="type",
            value="value"
        )],
        essential=False,
        firelens_configuration=ecs.CfnDaemonTaskDefinition.FirelensConfigurationProperty(
            options={"options_key": "options"},
            type="type"
        ),
        health_check=ecs.CfnDaemonTaskDefinition.HealthCheckProperty(
            command=["command"],
            interval=123,
            retries=123,
            start_period=123,
            timeout=123
        ),
        interactive=False,
        linux_parameters=ecs.CfnDaemonTaskDefinition.LinuxParametersProperty(
            capabilities=ecs.CfnDaemonTaskDefinition.KernelCapabilitiesProperty(
                add=["add"],
                drop=["drop"]
            ),
            devices=[ecs.CfnDaemonTaskDefinition.DeviceProperty(
                container_path="containerPath",
                host_path="hostPath",
                permissions=["permissions"]
            )],
            init_process_enabled=False,
            tmpfs=[ecs.CfnDaemonTaskDefinition.TmpfsProperty(
                size=123,
                # the properties below are optional
                container_path="containerPath",
                mount_options=["mountOptions"]
            )]
        ),
        log_configuration=ecs.CfnDaemonTaskDefinition.LogConfigurationProperty(
            log_driver="logDriver",
            # the properties below are optional
            options={"options_key": "options"},
            secret_options=[ecs.CfnDaemonTaskDefinition.SecretProperty(
                name="name",
                value_from="valueFrom"
            )]
        ),
        memory=123,
        memory_reservation=123,
        mount_points=[ecs.CfnDaemonTaskDefinition.MountPointProperty(
            container_path="containerPath",
            read_only=False,
            source_volume="sourceVolume"
        )],
        privileged=False,
        pseudo_terminal=False,
        readonly_root_filesystem=False,
        repository_credentials=ecs.CfnDaemonTaskDefinition.RepositoryCredentialsProperty(
            credentials_parameter="credentialsParameter"
        ),
        restart_policy=ecs.CfnDaemonTaskDefinition.RestartPolicyProperty(
            enabled=False,
            ignored_exit_codes=[123],
            restart_attempt_period=123
        ),
        secrets=[ecs.CfnDaemonTaskDefinition.SecretProperty(
            name="name",
            value_from="valueFrom"
        )],
        start_timeout=123,
        stop_timeout=123,
        system_controls=[ecs.CfnDaemonTaskDefinition.SystemControlProperty(
            namespace="namespace",
            value="value"
        )],
        ulimits=[ecs.CfnDaemonTaskDefinition.UlimitProperty(
            hard_limit=123,
            name="name",
            soft_limit=123
        )],
        user="user",
        working_directory="workingDirectory"
    )],
    cpu="cpu",
    execution_role_arn="executionRoleArn",
    family="family",
    memory="memory",
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    task_role_arn="taskRoleArn",
    volumes=[ecs.CfnDaemonTaskDefinition.VolumeProperty(
        host=ecs.CfnDaemonTaskDefinition.HostVolumePropertiesProperty(
            source_path="sourcePath"
        ),
        name="name"
    )]
)
Create a new AWS::ECS::DaemonTaskDefinition.

- Parameters:
- scope (Construct) – Scope in which this resource is defined.
- id (str) – Construct identifier for this resource (unique in its scope).
- container_definitions (Union[IResolvable, Sequence[Union[IResolvable, DaemonContainerDefinitionProperty, Dict[str, Any]]], None]) – A list of container definitions in JSON format that describe the containers that make up the daemon task.
- cpu (Optional[str]) – The number of CPU units used by the daemon task.
- execution_role_arn (Optional[str]) – The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf.
- family (Optional[str]) – The name of a family that this daemon task definition is registered to.
- memory (Optional[str]) – The amount of memory (in MiB) used by the daemon task.
- tags (Optional[Sequence[Union[CfnTag, Dict[str, Any]]]])
- task_role_arn (Optional[str]) – The short name or full Amazon Resource Name (ARN) of the IAM role that grants containers in the daemon task permission to call Amazon Web Services APIs on your behalf.
- volumes (Union[IResolvable, Sequence[Union[IResolvable, VolumeProperty, Dict[str, Any]]], None]) – The list of data volume definitions for the daemon task.
Methods
- add_deletion_override(path)
Syntactic sugar for addOverride(path, undefined).

- Parameters:
- path (str) – The path of the value to delete.
- Return type:
None
- add_dependency(target)
Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
This can be used for resources across stacks (or nested stack) boundaries and the dependency will automatically be transferred to the relevant scope.
- Parameters:
- target (CfnResource)
- Return type:
None
- add_depends_on(target)
(deprecated) Indicates that this resource depends on another resource and cannot be provisioned unless the other resource has been successfully provisioned.
- Parameters:
- target (CfnResource)
- Deprecated:
use addDependency
- Stability:
deprecated
- Return type:
None
- add_metadata(key, value)
Add a value to the CloudFormation Resource Metadata.
- Parameters:
- key (str)
- value (Any)
- Return type:
None
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- add_override(path, value)
Adds an override to the synthesized CloudFormation resource.
To add a property override, either use addPropertyOverride or prefix path with "Properties." (i.e. Properties.TopicName).

If the override is nested, separate each nested level using a dot (.) in the path parameter. If there is an array as part of the nesting, specify the index in the path.

To include a literal . in the property name, prefix with a \. In most programming languages you will need to write this as "\\." because the \ itself will need to be escaped.

For example:

cfn_resource.add_override("Properties.GlobalSecondaryIndexes.0.Projection.NonKeyAttributes", ["myattribute"])
cfn_resource.add_override("Properties.GlobalSecondaryIndexes.1.ProjectionType", "INCLUDE")
would add the overrides. Example:

"Properties": {
  "GlobalSecondaryIndexes": [
    {
      "Projection": {
        "NonKeyAttributes": [ "myattribute" ]
        ...
      }
      ...
    },
    {
      "ProjectionType": "INCLUDE"
      ...
    },
  ]
  ...
}
The value argument to addOverride will not be processed or translated in any way. Pass raw JSON values in here with the correct capitalization for CloudFormation. If you pass CDK classes or structs, they will be rendered with lowercased key names, and CloudFormation will reject the template.

- Parameters:
- path (str) – The path of the property. You can use dot notation to override values in complex types. Any intermediate keys will be created as needed.
- value (Any) – The value. Could be primitive or complex.
- Return type:
None
- add_property_deletion_override(property_path)
Adds an override that deletes the value of a property from the resource definition.
- Parameters:
- property_path (str) – The path to the property.
- Return type:
None
- add_property_override(property_path, value)
Adds an override to a resource property.
Syntactic sugar for addOverride("Properties.<...>", value).

- Parameters:
- property_path (str) – The path of the property.
- value (Any) – The value.
- Return type:
None
- apply_removal_policy(policy=None, *, apply_to_update_replace_policy=None, default=None)
Sets the deletion policy of the resource based on the removal policy specified.
The Removal Policy controls what happens to this resource when it stops being managed by CloudFormation, either because you’ve removed it from the CDK application or because you’ve made a change that requires the resource to be replaced.
The resource can be deleted (RemovalPolicy.DESTROY), or left in your AWS account for data recovery and cleanup later (RemovalPolicy.RETAIN). In some cases, a snapshot can be taken of the resource prior to deletion (RemovalPolicy.SNAPSHOT). A list of resources that support this policy can be found in the following link:

- Parameters:
- policy (Optional[RemovalPolicy])
- apply_to_update_replace_policy (Optional[bool]) – Apply the same deletion policy to the resource's "UpdateReplacePolicy". Default: true
- default (Optional[RemovalPolicy]) – The default policy to apply in case the removal policy is not defined. Default: - Default value is resource specific. To determine the default value for a resource, please consult that specific resource's documentation.
- Return type:
None
- cfn_property_name(cdk_property_name)
- Parameters:
- cdk_property_name (str)
- Return type:
Optional[str]
- get_att(attribute_name, type_hint=None)
Returns a token for a runtime attribute of this resource.

Ideally, use generated attribute accessors (e.g. resource.arn), but this can be used for future compatibility in case there is no generated attribute.

- Parameters:
- attribute_name (str) – The name of the attribute.
- type_hint (Optional[ResolutionTypeHint])
- Return type:
Reference
- get_metadata(key)
Retrieve a value from the CloudFormation Resource Metadata.
- Parameters:
- key (str)
- Return type:
Any
Note that this is a different set of metadata from CDK node metadata; this metadata ends up in the stack template under the resource, whereas CDK node metadata ends up in the Cloud Assembly.
- inspect(inspector)
Examines the CloudFormation resource and discloses attributes.
- Parameters:
- inspector (TreeInspector) – tree inspector to collect and process attributes.
- Return type:
None
- obtain_dependencies()
Retrieves an array of resources this resource depends on.
This assembles dependencies on resources across stacks (including nested stacks) automatically.
- Return type:
List[Union[Stack,CfnResource]]
- obtain_resource_dependencies()
Get a shallow copy of dependencies between this resource and other resources in the same stack.
- Return type:
List[CfnResource]
- override_logical_id(new_logical_id)
Overrides the auto-generated logical ID with a specific ID.
- Parameters:
- new_logical_id (str) – The new logical ID to use for this stack element.
- Return type:
None
- remove_dependency(target)
Indicates that this resource no longer depends on another resource.
This can be used for resources across stacks (including nested stacks) and the dependency will automatically be removed from the relevant scope.
- Parameters:
- target (CfnResource)
- Return type:
None
- replace_dependency(target, new_target)
Replaces one dependency with another.
- Parameters:
- target (CfnResource) – The dependency to replace.
- new_target (CfnResource) – The new dependency to add.
- Return type:
None
- to_string()
Returns a string representation of this construct.
- Return type:
str
- Returns:
a string representation of this resource
- with_(*mixins)
Applies one or more mixins to this construct.
Mixins are applied in order. The list of constructs is captured at the start of the call, so constructs added by a mixin will not be visited. Use multiple with() calls if subsequent mixins should apply to added constructs.

- Parameters:
- mixins (IMixin)
- Return type:
Attributes
- CFN_RESOURCE_TYPE_NAME = 'AWS::ECS::DaemonTaskDefinition'
- attr_daemon_task_definition_arn
DaemonTaskDefinitionArn
- Type:
cloudformationAttribute
- cdk_tag_manager
Tag Manager which manages the tags for this resource.
- cfn_options
Options for this resource, such as condition, update policy etc.
- cfn_resource_type
AWS resource type.
- container_definitions
A list of container definitions in JSON format that describe the containers that make up the daemon task.
- cpu
The number of CPU units used by the daemon task.
- creation_stack
return:
the stack trace of the point where this Resource was created from, sourced from the +metadata+ entry typed +aws:cdk:logicalId+, and with the bottom-most node +internal+ entries filtered.
- daemon_task_definition_ref
A reference to a DaemonTaskDefinition resource.
- env
- execution_role_arn
The Amazon Resource Name (ARN) of the task execution role that grants the Amazon ECS container agent permission to make Amazon Web Services API calls on your behalf.
- family
The name of a family that this daemon task definition is registered to.
- logical_id
The logical ID for this CloudFormation stack element.
The logical ID of the element is calculated from the path of the resource node in the construct tree.
To override this value, use overrideLogicalId(newLogicalId).

- Returns:
the logical ID as a stringified token. This value will only get resolved during synthesis.
- memory
The amount of memory (in MiB) used by the daemon task.
- node
The tree node.
- ref
Return a string that will be resolved to a CloudFormation { Ref } for this element.

If, by any chance, the intrinsic reference of a resource is not a string, you could coerce it to an IResolvable through Lazy.any({ produce: resource.ref }).
- stack
The stack in which this element is defined.
CfnElements must be defined within a stack scope (directly or indirectly).
- tags
- task_role_arn
The short name or full Amazon Resource Name (ARN) of the IAM role that grants containers in the daemon task permission to call Amazon Web Services APIs on your behalf.
- volumes
The list of data volume definitions for the daemon task.
Static Methods
- classmethod arn_for_daemon_task_definition(resource)
- Parameters:
- resource (IDaemonTaskDefinitionRef)
- Return type:
str
- classmethod is_cfn_daemon_task_definition(x)
Checks whether the given object is a CfnDaemonTaskDefinition.
- Parameters:
- x (Any)
- Return type:
bool
- classmethod is_cfn_element(x)
Returns true if a construct is a stack element (i.e. part of the synthesized cloudformation template).

Uses duck-typing instead of instanceof to allow stack elements from different versions of this library to be included in the same stack.

- Parameters:
- x (Any)
- Return type:
bool
- Returns:
The construct as a stack element or undefined if it is not a stack element.
- classmethod is_cfn_resource(x)
Check whether the given object is a CfnResource.
- Parameters:
- x (Any)
- Return type:
bool
- classmethod is_construct(x)
Checks if x is a construct.

Use this method instead of instanceof to properly detect Construct instances, even when the construct library is symlinked.

Explanation: in JavaScript, multiple copies of the constructs library on disk are seen as independent, completely different libraries. As a consequence, the class Construct in each copy of the constructs library is seen as a different class, and an instance of one class will not test as instanceof the other class. npm install will not create installations like this, but users may manually symlink construct libraries together or use a monorepo tool: in those cases, multiple copies of the constructs library can be accidentally installed, and instanceof will behave unpredictably. It is safest to avoid using instanceof and to use this type-testing method instead.

- Parameters:
- x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Construct.
ContainerDependencyProperty
- class CfnDaemonTaskDefinition.ContainerDependencyProperty(*, condition=None, container_name=None)
Bases: object

The dependencies defined for container startup and shutdown.
A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.

Your Amazon ECS container instances require at least version 1.26.0 of the container agent to use container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init package. If your container instances are launched from version 20190301 or later, then they contain the required versions of the container agent and ecs-init. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.

For tasks that use the Fargate launch type, the task or service requires the following platforms:

- Linux platform version 1.3.0 or later.
- Windows platform version 1.0.0 or later.

For more information about how to create a container dependency, see Container dependency in the Amazon Elastic Container Service Developer Guide.
- Parameters:
- condition (Optional[str]) – The dependency condition of the container. The following are the available conditions and their behavior: START - this condition emulates the behavior of links and volumes today; it validates that a dependent container is started before permitting other containers to start. COMPLETE - this condition validates that a dependent container runs to completion (exits) before permitting other containers to start; this can be useful for nonessential containers that run a script and then exit; this condition can't be set on an essential container. SUCCESS - this condition is the same as COMPLETE, but it also requires that the container exits with a zero status; this condition can't be set on an essential container. HEALTHY - this condition validates that the dependent container passes its Docker health check before permitting other containers to start; this requires that the dependent container has health checks configured; this condition is confirmed only at task startup.
- container_name (Optional[str]) – The name of a container.
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

container_dependency_property = ecs.CfnDaemonTaskDefinition.ContainerDependencyProperty(
    condition="condition",
    container_name="containerName"
)
Attributes
- condition
The dependency condition of the container.
The following are the available conditions and their behavior:
- START - This condition emulates the behavior of links and volumes today. It validates that a dependent container is started before permitting other containers to start.
- COMPLETE - This condition validates that a dependent container runs to completion (exits) before permitting other containers to start. This can be useful for nonessential containers that run a script and then exit. This condition can't be set on an essential container.
- SUCCESS - This condition is the same as COMPLETE, but it also requires that the container exits with a zero status. This condition can't be set on an essential container.
- HEALTHY - This condition validates that the dependent container passes its Docker health check before permitting other containers to start. This requires that the dependent container has health checks configured. This condition is confirmed only at task startup.
DaemonContainerDefinitionProperty
- class CfnDaemonTaskDefinition.DaemonContainerDefinitionProperty(*, image, name, command=None, cpu=None, depends_on=None, entry_point=None, environment=None, environment_files=None, essential=None, firelens_configuration=None, health_check=None, interactive=None, linux_parameters=None, log_configuration=None, memory=None, memory_reservation=None, mount_points=None, privileged=None, pseudo_terminal=None, readonly_root_filesystem=None, repository_credentials=None, restart_policy=None, secrets=None, start_timeout=None, stop_timeout=None, system_controls=None, ulimits=None, user=None, working_directory=None)
Bases: object

A container definition for a daemon task.
Daemon container definitions describe the containers that run as part of a daemon task on container instances managed by capacity providers.
- Parameters:
- image (str) – The image used to start the container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either repository-url/image:tag or repository-url/image@digest.
- name (str) – The name of the container. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
- command (Optional[Sequence[str]]) – The command that's passed to the container.
- cpu (Union[int, float, None]) – The number of cpu units reserved for the container.
- depends_on (Union[IResolvable, Sequence[Union[IResolvable, ContainerDependencyProperty, Dict[str, Any]]], None]) – The dependencies defined for container startup and shutdown. A container can contain multiple dependencies on other containers in a task definition.
- entry_point (Optional[Sequence[str]]) – The entry point that's passed to the container.
- environment (Union[IResolvable, Sequence[Union[IResolvable, KeyValuePairProperty, Dict[str, Any]]], None]) – The environment variables to pass to a container.
- environment_files (Union[IResolvable, Sequence[Union[IResolvable, EnvironmentFileProperty, Dict[str, Any]]], None]) – A list of files containing the environment variables to pass to a container.
- essential (Union[bool, IResolvable, None]) – If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
- firelens_configuration (Union[IResolvable, FirelensConfigurationProperty, Dict[str, Any], None]) – The FireLens configuration for the container. This is used to specify and configure a log router for container logs. For more information, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
- health_check (Union[IResolvable, HealthCheckProperty, Dict[str, Any], None]) – An object representing a container health check. Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image's Dockerfile). This configuration maps to the HEALTHCHECK parameter of docker run. The Amazon ECS container agent only monitors and reports on the health checks specified in the task definition; Amazon ECS does not monitor Docker health checks that are embedded in a container image and not specified in the container definition. You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console. The health check is designed to make sure that your containers survive agent restarts, upgrades, or temporary unavailability. The possible healthStatus values for a container are: HEALTHY (the container health check has passed successfully), UNHEALTHY (the container health check has failed), and UNKNOWN (the container health check is being evaluated, there's no container health check defined, or Amazon ECS doesn't have the health status of the container). The task's healthStatus is derived from the health of its essential containers with the following priority order (high to low): UNHEALTHY (one or more essential containers have failed their health check), UNKNOWN (any essential container running within the task is in an UNKNOWN state and no other essential containers have an UNHEALTHY state), HEALTHY (all essential containers within the task have passed their health checks). For example, with two essential containers, the task is UNHEALTHY if either container is UNHEALTHY, UNKNOWN if neither is UNHEALTHY but at least one is UNKNOWN, and HEALTHY only when both are HEALTHY; the same rules apply with three or more containers. If a task is run manually, and not as part of a service, the task will continue its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy then the task will be stopped and the service scheduler will replace it. When a container health check fails for a task that is part of a service, the following process occurs: 1. the task is marked as UNHEALTHY; 2. the unhealthy task is stopped, passing through the states DEACTIVATING (Amazon ECS performs additional steps before stopping the task, for example deregistering Elastic Load Balancing target groups for tasks in services configured to use them), STOPPING (the task is in the process of being stopped), DEPROVISIONING (resources associated with the task are being cleaned up), and STOPPED (the task has been completely stopped); 3. after the old task stops, a new task is launched to ensure service operation, passing through PROVISIONING (resources required for the task are being provisioned), PENDING (the task is waiting to be placed on a container instance), ACTIVATING (Amazon ECS pulls container images, creates containers, configures task networking, registers load balancer target groups, and configures service discovery status), and RUNNING (the task is running and performing its work). For more detailed information about task lifecycle states, see Task lifecycle in the Amazon Elastic Container Service Developer Guide. Notes about container health check support: if the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won't cause a container to transition to an UNHEALTHY status (this is by design, to ensure that containers remain running during agent restarts or temporary unavailability; the health check status is the "last heard from" response from the Amazon ECS agent, so if the container was considered HEALTHY prior to the disconnect, that status will remain until the agent reconnects and another health check occurs; no assumptions are made about the status of the container health checks). Container health checks require version 1.17.0 or greater of the Amazon ECS container agent; for more information, see Updating the Amazon ECS container agent. Container health checks are supported for Fargate tasks if you're using platform version 1.1.0 or greater; for more information, see platform versions. Container health checks aren't supported for tasks that are part of a service that's configured to use a Classic Load Balancer. For an example of how to specify a task definition with multiple containers where container dependency is specified, see Container dependency in the Amazon Elastic Container Service Developer Guide.
- interactive (Union[bool, IResolvable, None]) – When this parameter is true, you can deploy containerized applications that require stdin or a tty to be allocated.
- linux_parameters (Union[IResolvable, LinuxParametersProperty, Dict[str, Any], None]) – The Linux-specific options that are applied to the container, such as Linux KernelCapabilities.
- log_configuration (Union[IResolvable, LogConfigurationProperty, Dict[str, Any], None]) – The log configuration for the container. This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses; however, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon; additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options; for more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide. For tasks on AWS Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task (for example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to).
- memory (Union[int, float, None]) – The amount (in MiB) of memory to present to the container. If the container attempts to exceed the memory specified here, the container is killed.
- memory_reservation (Union[int, float, None]) – The soft limit (in MiB) of memory to reserve for the container.
- mount_points (Union[IResolvable, Sequence[Union[IResolvable, MountPointProperty, Dict[str, Any]]], None]) – The mount points for data volumes in your container.
- privileged (Union[bool, IResolvable, None]) – When this parameter is true, the container is given elevated privileges on the host container instance (similar to the root user).
- pseudo_terminal (Union[bool, IResolvable, None]) – When this parameter is true, a TTY is allocated.
- readonly_root_filesystem (Union[bool, IResolvable, None]) – When this parameter is true, the container is given read-only access to its root file system.
- repository_credentials (Union[IResolvable, RepositoryCredentialsProperty, Dict[str, Any], None]) – The repository credentials for private registry authentication.
- restart_policy (Union[IResolvable, RestartPolicyProperty, Dict[str, Any], None])
- secrets (Union[IResolvable, Sequence[Union[IResolvable, SecretProperty, Dict[str, Any]]], None]) – The secrets to pass to the container.
- start_timeout (Union[int, float, None]) – Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
- stop_timeout (Union[int, float, None]) – Time duration (in seconds) to wait before the container is forcefully killed if it doesn't exit normally on its own.
- system_controls (Union[IResolvable, Sequence[Union[IResolvable, SystemControlProperty, Dict[str, Any]]], None]) – A list of namespaced kernel parameters to set in the container.
- ulimits (Union[IResolvable, Sequence[Union[IResolvable, UlimitProperty, Dict[str, Any]]], None]) – A list of ulimits to set in the container.
- user (Optional[str]) – The user to use inside the container.
- working_directory (Optional[str]) – The working directory in which to run commands inside the container.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

daemon_container_definition_property = ecs.CfnDaemonTaskDefinition.DaemonContainerDefinitionProperty(
    image="image",
    name="name",

    # the properties below are optional
    command=["command"],
    cpu=123,
    depends_on=[ecs.CfnDaemonTaskDefinition.ContainerDependencyProperty(
        condition="condition",
        container_name="containerName"
    )],
    entry_point=["entryPoint"],
    environment=[ecs.CfnDaemonTaskDefinition.KeyValuePairProperty(
        name="name",
        value="value"
    )],
    environment_files=[ecs.CfnDaemonTaskDefinition.EnvironmentFileProperty(
        type="type",
        value="value"
    )],
    essential=False,
    firelens_configuration=ecs.CfnDaemonTaskDefinition.FirelensConfigurationProperty(
        options={
            "options_key": "options"
        },
        type="type"
    ),
    health_check=ecs.CfnDaemonTaskDefinition.HealthCheckProperty(
        command=["command"],
        interval=123,
        retries=123,
        start_period=123,
        timeout=123
    ),
    interactive=False,
    linux_parameters=ecs.CfnDaemonTaskDefinition.LinuxParametersProperty(
        capabilities=ecs.CfnDaemonTaskDefinition.KernelCapabilitiesProperty(
            add=["add"],
            drop=["drop"]
        ),
        devices=[ecs.CfnDaemonTaskDefinition.DeviceProperty(
            container_path="containerPath",
            host_path="hostPath",
            permissions=["permissions"]
        )],
        init_process_enabled=False,
        tmpfs=[ecs.CfnDaemonTaskDefinition.TmpfsProperty(
            size=123,

            # the properties below are optional
            container_path="containerPath",
            mount_options=["mountOptions"]
        )]
    ),
    log_configuration=ecs.CfnDaemonTaskDefinition.LogConfigurationProperty(
        log_driver="logDriver",

        # the properties below are optional
        options={
            "options_key": "options"
        },
        secret_options=[ecs.CfnDaemonTaskDefinition.SecretProperty(
            name="name",
            value_from="valueFrom"
        )]
    ),
    memory=123,
    memory_reservation=123,
    mount_points=[ecs.CfnDaemonTaskDefinition.MountPointProperty(
        container_path="containerPath",
        read_only=False,
        source_volume="sourceVolume"
    )],
    privileged=False,
    pseudo_terminal=False,
    readonly_root_filesystem=False,
    repository_credentials=ecs.CfnDaemonTaskDefinition.RepositoryCredentialsProperty(
        credentials_parameter="credentialsParameter"
    ),
    restart_policy=ecs.CfnDaemonTaskDefinition.RestartPolicyProperty(
        enabled=False,
        ignored_exit_codes=[123],
        restart_attempt_period=123
    ),
    secrets=[ecs.CfnDaemonTaskDefinition.SecretProperty(
        name="name",
        value_from="valueFrom"
    )],
    start_timeout=123,
    stop_timeout=123,
    system_controls=[ecs.CfnDaemonTaskDefinition.SystemControlProperty(
        namespace="namespace",
        value="value"
    )],
    ulimits=[ecs.CfnDaemonTaskDefinition.UlimitProperty(
        hard_limit=123,
        name="name",
        soft_limit=123
    )],
    user="user",
    working_directory="workingDirectory"
)
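Beyond the generated placeholders above, the following sketch shows what a filled-in daemon container definition might look like for a hypothetical node-level monitoring agent. The image URI, log group name, AWS Region, and resource sizes are illustrative assumptions, not defaults or recommendations:

```python
from aws_cdk import aws_ecs as ecs

# Hypothetical monitoring-agent container; all concrete values are
# placeholders chosen for illustration only.
agent_container = ecs.CfnDaemonTaskDefinition.DaemonContainerDefinitionProperty(
    image="public.ecr.aws/example/monitoring-agent:latest",  # assumed image URI
    name="monitoring-agent",
    essential=True,   # the task stops if this container stops
    cpu=256,
    memory=512,
    log_configuration=ecs.CfnDaemonTaskDefinition.LogConfigurationProperty(
        log_driver="awslogs",
        options={
            "awslogs-group": "/ecs/monitoring-agent",  # assumed log group
            "awslogs-region": "us-east-1",             # assumed Region
            "awslogs-stream-prefix": "agent",
        },
    ),
)
```

Marking the agent container essential means the daemon task stops (and is rescheduled) if the agent container exits.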
Attributes
- command
The command that’s passed to the container.
- cpu
The number of
cpu units reserved for the container.
- depends_on
The dependencies defined for container startup and shutdown.
A container can contain multiple dependencies on other containers in a task definition.
- entry_point
The entry point that’s passed to the container.
- environment
The environment variables to pass to a container.
- environment_files
A list of files containing the environment variables to pass to a container.
- essential
If the
essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped.
- firelens_configuration
The FireLens configuration for the container.
This is used to specify and configure a log router for container logs. For more information, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
- health_check
An object representing a container health check.
Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image’s Dockerfile). This configuration maps to the
HEALTHCHECK parameter of docker run. The Amazon ECS container agent only monitors and reports on the health checks specified in the task definition. Amazon ECS does not monitor Docker health checks that are embedded in a container image and not specified in the container definition. You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console. The health check is designed to make sure that your containers survive agent restarts, upgrades, or temporary unavailability. Amazon ECS performs health checks on containers with the default that launched the container instance or the task. The following describes the possible healthStatus values for a container: HEALTHY - The container health check has passed successfully. UNHEALTHY - The container health check has failed. UNKNOWN - The container health check is being evaluated, there’s no container health check defined, or Amazon ECS doesn’t have the health status of the container.
The following describes the possible
healthStatus values based on the container health checker status of essential containers in the task, with the following priority order (high to low): UNHEALTHY - One or more essential containers have failed their health check. UNKNOWN - Any essential container running within the task is in an UNKNOWN state and no other essential containers have an UNHEALTHY state. HEALTHY - All essential containers within the task have passed their health checks.
Consider the following task health example with 2 containers.
If Container1 is
UNHEALTHY and Container2 is UNKNOWN, the task health is UNHEALTHY. If Container1 is
UNHEALTHY and Container2 is HEALTHY, the task health is UNHEALTHY. If Container1 is
HEALTHY and Container2 is UNKNOWN, the task health is UNKNOWN. If Container1 is
HEALTHY and Container2 is HEALTHY, the task health is HEALTHY.
Consider the following task health example with 3 containers.
If Container1 is
UNHEALTHY, Container2 is UNKNOWN, and Container3 is UNKNOWN, the task health is UNHEALTHY. If Container1 is
UNHEALTHY, Container2 is UNKNOWN, and Container3 is HEALTHY, the task health is UNHEALTHY. If Container1 is
UNHEALTHY, Container2 is HEALTHY, and Container3 is HEALTHY, the task health is UNHEALTHY. If Container1 is
HEALTHY, Container2 is UNKNOWN, and Container3 is HEALTHY, the task health is UNKNOWN. If Container1 is
HEALTHY, Container2 is UNKNOWN, and Container3 is UNKNOWN, the task health is UNKNOWN. If Container1 is
HEALTHY, Container2 is HEALTHY, and Container3 is HEALTHY, the task health is HEALTHY.
If a task is run manually, and not as part of a service, the task will continue its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy then the task will be stopped and the service scheduler will replace it. When a container health check fails for a task that is part of a service, the following process occurs:
The task is marked as
UNHEALTHY. The unhealthy task will be stopped, and during the stopping process, it will go through the following states:
DEACTIVATING - In this state, Amazon ECS performs additional steps before stopping the task. For example, for tasks that are part of services configured to use Elastic Load Balancing target groups, target groups will be deregistered in this state. STOPPING - The task is in the process of being stopped. DEPROVISIONING - Resources associated with the task are being cleaned up. STOPPED - The task has been completely stopped.
After the old task stops, a new task will be launched to ensure service operation, and the new task will go through the following lifecycle:
PROVISIONING - Resources required for the task are being provisioned. PENDING - The task is waiting to be placed on a container instance. ACTIVATING - In this state, Amazon ECS pulls container images, creates containers, configures task networking, registers load balancer target groups, and configures service discovery status. RUNNING - The task is running and performing its work.
For more detailed information about task lifecycle states, see Task lifecycle in the Amazon Elastic Container Service Developer Guide. The following are notes about container health check support:
If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won’t cause a container to transition to an
UNHEALTHY status. This is by design, to ensure that containers remain running during agent restarts or temporary unavailability. The health check status is the “last heard from” response from the Amazon ECS agent, so if the container was considered HEALTHY prior to the disconnect, that status will remain until the agent reconnects and another health check occurs. There are no assumptions made about the status of the container health checks. Container health checks require version
1.17.0 or greater of the Amazon ECS container agent. For more information, see Updating the Amazon ECS container agent. Container health checks are supported for Fargate tasks if you’re using platform version
1.1.0 or greater. For more information, see platform versions. Container health checks aren’t supported for tasks that are part of a service that’s configured to use a Classic Load Balancer.
For an example of how to specify a task definition with multiple containers where container dependency is specified, see Container dependency in the Amazon Elastic Container Service Developer Guide.
- image
The image used to start the container.
This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. Other repositories are specified with either
repository-url/image:tag or repository-url/image@digest.
- interactive
When this parameter is
true, you can deploy containerized applications that require stdin or a tty to be allocated.
- linux_parameters
The Linux-specific options that are applied to the container, such as Linux KernelCapabilities.
- log_configuration
The log configuration for the container.
This parameter maps to
LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers. Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on AWS Fargate, the supported log drivers are
awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the
ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide. For tasks on AWS Fargate, because you don’t have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
- memory
The amount (in MiB) of memory to present to the container.
If the container attempts to exceed the memory specified here, the container is killed.
- memory_reservation
The soft limit (in MiB) of memory to reserve for the container.
- mount_points
The mount points for data volumes in your container.
- name
The name of the container.
Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed.
- privileged
When this parameter is true, the container is given elevated privileges on the host container instance (similar to the
root user).
- pseudo_terminal
When this parameter is
true, a TTY is allocated.
- readonly_root_filesystem
When this parameter is true, the container is given read-only access to its root file system.
- repository_credentials
The repository credentials for private registry authentication.
- restart_policy
The restart policy for a container.
When you set up a restart policy, Amazon ECS can restart the container without needing to replace the task.
- secrets
The secrets to pass to the container.
- start_timeout
Time duration (in seconds) to wait before giving up on resolving dependencies for a container.
- stop_timeout
Time duration (in seconds) to wait before the container is forcefully killed if it doesn’t exit normally on its own.
- system_controls
A list of namespaced kernel parameters to set in the container.
- ulimits
A list of
ulimits to set in the container.
- user
The user to use inside the container.
- working_directory
The working directory in which to run commands inside the container.
DeviceProperty
- class CfnDaemonTaskDefinition.DeviceProperty(*, container_path=None, host_path=None, permissions=None)
Bases:
objectAn object representing a container instance host device.
- Parameters:
container_path (
Optional[str]) – The path inside the container at which to expose the host device. host_path (
Optional[str]) – The path for the device on the host container instance. permissions (
Optional[Sequence[str]]) – The explicit permissions to provide to the container for the device. By default, the container has permissions for read, write, and mknod for the device.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

device_property = ecs.CfnDaemonTaskDefinition.DeviceProperty(
    container_path="containerPath",
    host_path="hostPath",
    permissions=["permissions"]
)
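As a more concrete (hypothetical) illustration, the sketch below exposes the host’s FUSE device to the container; the /dev/fuse path is an assumption chosen for this example, not a requirement:

```python
from aws_cdk import aws_ecs as ecs

# Map the host's FUSE device into the container (illustrative paths).
fuse_device = ecs.CfnDaemonTaskDefinition.DeviceProperty(
    host_path="/dev/fuse",        # device on the container instance
    container_path="/dev/fuse",   # where the container sees it
    permissions=["read", "write", "mknod"],  # the documented defaults, made explicit
)
```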
Attributes
- container_path
The path inside the container at which to expose the host device.
- host_path
The path for the device on the host container instance.
- permissions
The explicit permissions to provide to the container for the device.
By default, the container has permissions for
read, write, and mknod for the device.
EnvironmentFileProperty
- class CfnDaemonTaskDefinition.EnvironmentFileProperty(*, type=None, value=None)
Bases:
objectA list of files containing the environment variables to pass to a container.
You can specify up to ten environment files. The file must have a
.env file extension. Each line in an environment file should contain an environment variable in VARIABLE=VALUE format. Lines beginning with # are treated as comments and are ignored. If there are environment variables specified using the environment parameter in a container definition, they take precedence over the variables contained within an environment file. If multiple environment files are specified that contain the same variable, they’re processed from the top down. We recommend that you use unique variable names. For more information, see Use a file to pass environment variables to a container in the Amazon Elastic Container Service Developer Guide. Environment variable files are objects in Amazon S3 and all Amazon S3 security considerations apply. You must use the following platforms for the Fargate launch type: Linux platform version
1.4.0 or later. Windows platform version
1.0.0 or later.
Consider the following when using the Fargate launch type:
The file is handled like a native Docker env-file.
There is no support for shell escape handling.
The container entry point interprets the
VARIABLE values.
- Parameters:
type (
Optional[str]) – The file type to use. Environment files are objects in Amazon S3. The only supported value is s3. value (
Optional[str]) – The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

environment_file_property = ecs.CfnDaemonTaskDefinition.EnvironmentFileProperty(
    type="type",
    value="value"
)
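A filled-in sketch might look like the following. The bucket name and object key are placeholders, and s3 is the only supported type per the parameter description above:

```python
from aws_cdk import aws_ecs as ecs

# Pull environment variables from a .env object in S3.
# The bucket/key in this ARN are illustrative placeholders.
env_file = ecs.CfnDaemonTaskDefinition.EnvironmentFileProperty(
    type="s3",  # the only supported value
    value="arn:aws:s3:::my-config-bucket/app.env",  # assumed S3 object ARN
)
```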
Attributes
- type
The file type to use.
Environment files are objects in Amazon S3. The only supported value is
s3.
- value
The Amazon Resource Name (ARN) of the Amazon S3 object containing the environment variable file.
FirelensConfigurationProperty
- class CfnDaemonTaskDefinition.FirelensConfigurationProperty(*, options=None, type=None)
Bases:
objectThe FireLens configuration for the container.
This is used to specify and configure a log router for container logs. For more information, see Custom log routing in the Amazon Elastic Container Service Developer Guide.
- Parameters:
options (
Union[Mapping[str, str], IResolvable, None]) – The options to use when configuring the log router. This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details, to the log event. If specified, the syntax to use is "options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on AWS Fargate only support the file configuration file type. type (
Optional[str]) – The log router to use. The valid values are fluentd or fluentbit.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

firelens_configuration_property = ecs.CfnDaemonTaskDefinition.FirelensConfigurationProperty(
    options={
        "options_key": "options"
    },
    type="type"
)
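Following the options syntax documented above, a filled-in sketch for a Fluent Bit router with a custom configuration file in S3 might look like this; the bucket ARN is a placeholder, and tasks on AWS Fargate support only the file configuration file type:

```python
from aws_cdk import aws_ecs as ecs

# Fluent Bit log router with a custom config file pulled from S3.
# The S3 ARN is an assumed placeholder.
firelens = ecs.CfnDaemonTaskDefinition.FirelensConfigurationProperty(
    type="fluentbit",  # valid values: "fluentd" or "fluentbit"
    options={
        "enable-ecs-log-metadata": "true",
        "config-file-type": "s3",
        "config-file-value": "arn:aws:s3:::my-config-bucket/fluent.conf",
    },
)
```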
Attributes
- options
The options to use when configuring the log router.
This field is optional and can be used to specify a custom configuration file or to add additional metadata, such as the task, task definition, cluster, and container instance details to the log event. If specified, the syntax to use is
"options":{"enable-ecs-log-metadata":"true|false","config-file-type":"s3|file","config-file-value":"arn:aws:s3:::mybucket/fluent.conf|filepath"}. For more information, see Creating a task definition that uses a FireLens configuration in the Amazon Elastic Container Service Developer Guide. Tasks hosted on AWS Fargate only support the file configuration file type.
- type
The log router to use.
The valid values are
fluentd or fluentbit.
HealthCheckProperty
- class CfnDaemonTaskDefinition.HealthCheckProperty(*, command=None, interval=None, retries=None, start_period=None, timeout=None)
Bases:
objectAn object representing a container health check.
Health check parameters that are specified in a container definition override any Docker health checks that exist in the container image (such as those specified in a parent image or from the image’s Dockerfile). This configuration maps to the
HEALTHCHECK parameter of docker run. The Amazon ECS container agent only monitors and reports on the health checks specified in the task definition. Amazon ECS does not monitor Docker health checks that are embedded in a container image and not specified in the container definition. You can view the health status of both individual containers and a task with the DescribeTasks API operation or when viewing the task details in the console. The health check is designed to make sure that your containers survive agent restarts, upgrades, or temporary unavailability. Amazon ECS performs health checks on containers with the default that launched the container instance or the task. The following describes the possible healthStatus values for a container: HEALTHY - The container health check has passed successfully. UNHEALTHY - The container health check has failed. UNKNOWN - The container health check is being evaluated, there’s no container health check defined, or Amazon ECS doesn’t have the health status of the container.
The following describes the possible
healthStatus values based on the container health checker status of essential containers in the task, with the following priority order (high to low): UNHEALTHY - One or more essential containers have failed their health check. UNKNOWN - Any essential container running within the task is in an UNKNOWN state and no other essential containers have an UNHEALTHY state. HEALTHY - All essential containers within the task have passed their health checks.
Consider the following task health example with 2 containers.
If Container1 is
UNHEALTHY and Container2 is UNKNOWN, the task health is UNHEALTHY. If Container1 is
UNHEALTHY and Container2 is HEALTHY, the task health is UNHEALTHY. If Container1 is
HEALTHY and Container2 is UNKNOWN, the task health is UNKNOWN. If Container1 is
HEALTHY and Container2 is HEALTHY, the task health is HEALTHY.
Consider the following task health example with 3 containers.
If Container1 is
UNHEALTHY, Container2 is UNKNOWN, and Container3 is UNKNOWN, the task health is UNHEALTHY. If Container1 is
UNHEALTHY, Container2 is UNKNOWN, and Container3 is HEALTHY, the task health is UNHEALTHY. If Container1 is
UNHEALTHY, Container2 is HEALTHY, and Container3 is HEALTHY, the task health is UNHEALTHY. If Container1 is
HEALTHY, Container2 is UNKNOWN, and Container3 is HEALTHY, the task health is UNKNOWN. If Container1 is
HEALTHY, Container2 is UNKNOWN, and Container3 is UNKNOWN, the task health is UNKNOWN. If Container1 is
HEALTHY, Container2 is HEALTHY, and Container3 is HEALTHY, the task health is HEALTHY.
If a task is run manually, and not as part of a service, the task will continue its lifecycle regardless of its health status. For tasks that are part of a service, if the task reports as unhealthy then the task will be stopped and the service scheduler will replace it. When a container health check fails for a task that is part of a service, the following process occurs:
The task is marked as
UNHEALTHY. The unhealthy task will be stopped, and during the stopping process, it will go through the following states:
DEACTIVATING - In this state, Amazon ECS performs additional steps before stopping the task. For example, for tasks that are part of services configured to use Elastic Load Balancing target groups, target groups will be deregistered in this state. STOPPING - The task is in the process of being stopped. DEPROVISIONING - Resources associated with the task are being cleaned up. STOPPED - The task has been completely stopped.
After the old task stops, a new task will be launched to ensure service operation, and the new task will go through the following lifecycle:
PROVISIONING - Resources required for the task are being provisioned. PENDING - The task is waiting to be placed on a container instance. ACTIVATING - In this state, Amazon ECS pulls container images, creates containers, configures task networking, registers load balancer target groups, and configures service discovery status. RUNNING - The task is running and performing its work.
For more detailed information about task lifecycle states, see Task lifecycle in the Amazon Elastic Container Service Developer Guide. The following are notes about container health check support:
If the Amazon ECS container agent becomes disconnected from the Amazon ECS service, this won’t cause a container to transition to an
UNHEALTHY status. This is by design, to ensure that containers remain running during agent restarts or temporary unavailability. The health check status is the “last heard from” response from the Amazon ECS agent, so if the container was considered HEALTHY prior to the disconnect, that status will remain until the agent reconnects and another health check occurs. There are no assumptions made about the status of the container health checks. Container health checks require version
1.17.0 or greater of the Amazon ECS container agent. For more information, see Updating the Amazon ECS container agent. Container health checks are supported for Fargate tasks if you’re using platform version
1.1.0 or greater. For more information, see platform versions. Container health checks aren’t supported for tasks that are part of a service that’s configured to use a Classic Load Balancer.
For an example of how to specify a task definition with multiple containers where container dependency is specified, see Container dependency in the Amazon Elastic Container Service Developer Guide.
- Parameters:
command (
Optional[Sequence[str]]) – A string array representing the command that the container runs to determine if it is healthy. The string array must start with CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell. When you use the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in double quotes and brackets: [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]. You don’t include the double quotes and brackets when you use the AWS Management Console: CMD-SHELL, curl -f http://localhost/ || exit 1. An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command. interval (
Union[int, float, None]) – The time period in seconds between each health check execution. You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a command. retries (
Union[int, float, None]) – The number of times to retry a failed health check before the container is considered unhealthy. You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a command. start_period (
Union[int, float, None]) – The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries. You can specify between 0 and 300 seconds. By default, the startPeriod is off. This value applies only when you specify a command. If a health check succeeds within the startPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries. timeout (
Union[int, float, None]) – The time period in seconds to wait for a health check to succeed before it is considered a failure. You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a command.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

health_check_property = ecs.CfnDaemonTaskDefinition.HealthCheckProperty(
    command=["command"],
    interval=123,
    retries=123,
    start_period=123,
    timeout=123
)
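Using the documented value ranges and defaults, a filled-in health check might look like the sketch below. The curl endpoint is the documented example command, and the start_period value is an illustrative assumption:

```python
from aws_cdk import aws_ecs as ecs

# Curl-based health check; values chosen inside the documented ranges
# (interval 5-300 s, retries 1-10, timeout 2-60 s, start_period 0-300 s).
health_check = ecs.CfnDaemonTaskDefinition.HealthCheckProperty(
    command=["CMD-SHELL", "curl -f http://localhost/ || exit 1"],
    interval=30,      # the documented default
    retries=3,        # the documented default
    start_period=60,  # assumed 60 s bootstrap grace period
    timeout=5,        # the documented default
)
```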
Attributes
- command
A string array representing the command that the container runs to determine if it is healthy.
The string array must start with
CMD to run the command arguments directly, or CMD-SHELL to run the command with the container’s default shell. When you use the AWS Management Console JSON panel, the AWS CLI, or the APIs, enclose the list of commands in double quotes and brackets: [ "CMD-SHELL", "curl -f http://localhost/ || exit 1" ]. You don’t include the double quotes and brackets when you use the AWS Management Console: CMD-SHELL, curl -f http://localhost/ || exit 1. An exit code of 0 indicates success, and a non-zero exit code indicates failure. For more information, see HealthCheck in the docker container create command.
- interval
The time period in seconds between each health check execution.
You may specify between 5 and 300 seconds. The default value is 30 seconds. This value applies only when you specify a
command.
- retries
The number of times to retry a failed health check before the container is considered unhealthy.
You may specify between 1 and 10 retries. The default value is 3. This value applies only when you specify a
command.
- start_period
The optional grace period to provide containers time to bootstrap before failed health checks count towards the maximum number of retries.
You can specify between 0 and 300 seconds. By default, the
startPeriod is off. This value applies only when you specify a command. If a health check succeeds within the startPeriod, then the container is considered healthy and any subsequent failures count toward the maximum number of retries.
- timeout
The time period in seconds to wait for a health check to succeed before it is considered a failure.
You may specify between 2 and 60 seconds. The default value is 5. This value applies only when you specify a
command.
HostVolumePropertiesProperty
- class CfnDaemonTaskDefinition.HostVolumePropertiesProperty(*, source_path=None)
Bases:
objectDetails on a container instance bind mount host volume.
- Parameters:
source_path (
Optional[str]) – When the host parameter is used, specify a sourcePath to declare the path on the host container instance that’s presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn’t exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you’re using the Fargate launch type, the sourcePath parameter is not supported. - See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

host_volume_properties_property = ecs.CfnDaemonTaskDefinition.HostVolumePropertiesProperty(
    source_path="sourcePath"
)
Attributes
- source_path
When the
host parameter is used, specify a sourcePath to declare the path on the host container instance that’s presented to the container. If this parameter is empty, then the Docker daemon has assigned a host path for you. If the
host parameter contains a sourcePath file location, then the data volume persists at the specified location on the host container instance until you delete it manually. If the sourcePath value doesn’t exist on the host container instance, the Docker daemon creates it. If the location does exist, the contents of the source path folder are exported. If you’re using the Fargate launch type, the sourcePath parameter is not supported.
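A filled-in sketch might bind-mount a host directory as follows; the directory path is an assumption for illustration, and as noted above, sourcePath is not supported on the Fargate launch type:

```python
from aws_cdk import aws_ecs as ecs

# Persist daemon state on the container instance itself (illustrative path).
# Docker creates the directory if it doesn't already exist on the host.
host_volume = ecs.CfnDaemonTaskDefinition.HostVolumePropertiesProperty(
    source_path="/var/lib/agent-state"  # assumed host directory
)
```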
KernelCapabilitiesProperty
- class CfnDaemonTaskDefinition.KernelCapabilitiesProperty(*, add=None, drop=None)
Bases:
objectThe Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition.
For more detailed information about these Linux capabilities, see the capabilities(7) Linux manual page. The following describes how Docker processes the Linux capabilities specified in the
add and drop request parameters. For information about the latest behavior, see Docker Compose: order of cap_drop and cap_add in the Docker Community Forum. When the container is a privileged container, the container capabilities are all of the default Docker capabilities. The capabilities specified in the
add request parameter and the drop request parameter are ignored. When the
add request parameter is set to ALL, the container capabilities are all of the default Docker capabilities, excluding those specified in the drop request parameter. When the
drop request parameter is set to ALL, the container capabilities are the capabilities specified in the add request parameter. When the
add request parameter and the drop request parameter are both empty, the container capabilities are all of the default Docker capabilities. The default is to first drop the capabilities specified in the
drop request parameter, and then add the capabilities specified in the add request parameter.
- Parameters:
add (Optional[Sequence[str]]) – The Linux capabilities for the container that have been added to the default configuration provided by Docker. This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
drop (Optional[Sequence[str]]) – The Linux capabilities for the container that have been removed from the default configuration provided by Docker. This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

kernel_capabilities_property = ecs.CfnDaemonTaskDefinition.KernelCapabilitiesProperty(
    add=["add"],
    drop=["drop"]
)
Attributes
- add
The Linux capabilities for the container that have been added to the default configuration provided by Docker.
This parameter maps to CapAdd in the docker container create command and the --cap-add option to docker run. Tasks launched on AWS Fargate only support adding the SYS_PTRACE kernel capability. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
- drop
The Linux capabilities for the container that have been removed from the default configuration provided by Docker.
This parameter maps to CapDrop in the docker container create command and the --cap-drop option to docker run. Valid values: "ALL" | "AUDIT_CONTROL" | "AUDIT_WRITE" | "BLOCK_SUSPEND" | "CHOWN" | "DAC_OVERRIDE" | "DAC_READ_SEARCH" | "FOWNER" | "FSETID" | "IPC_LOCK" | "IPC_OWNER" | "KILL" | "LEASE" | "LINUX_IMMUTABLE" | "MAC_ADMIN" | "MAC_OVERRIDE" | "MKNOD" | "NET_ADMIN" | "NET_BIND_SERVICE" | "NET_BROADCAST" | "NET_RAW" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_ADMIN" | "SYS_BOOT" | "SYS_CHROOT" | "SYS_MODULE" | "SYS_NICE" | "SYS_PACCT" | "SYS_PTRACE" | "SYS_RAWIO" | "SYS_RESOURCE" | "SYS_TIME" | "SYS_TTY_CONFIG" | "SYSLOG" | "WAKE_ALARM"
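The add/drop rules described above can be modeled as a small function. This is a sketch for intuition only: DEFAULT_CAPS is a hypothetical stand-in for Docker's real default capability set, and the function simply restates the documented resolution order.

```python
# A stand-in for Docker's default capability set (illustrative subset).
DEFAULT_CAPS = {"CHOWN", "KILL", "NET_BIND_SERVICE", "SETGID", "SETUID"}

def effective_capabilities(add, drop, privileged=False):
    """Model the documented resolution of the add/drop capability lists."""
    if privileged:
        # Privileged container: all defaults; add and drop are ignored.
        return set(DEFAULT_CAPS)
    if "ALL" in add:
        # All defaults, excluding those listed in drop.
        return set(DEFAULT_CAPS) - set(drop)
    if "ALL" in drop:
        # Only the capabilities listed in add remain.
        return set(add)
    # Default order: drop first, then add.
    return (set(DEFAULT_CAPS) - set(drop)) | set(add)
```

For example, `effective_capabilities(["SYS_PTRACE"], ["ALL"])` yields only `{"SYS_PTRACE"}`, the common least-privilege pattern of dropping everything and adding back what the container needs.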
KeyValuePairProperty
- class CfnDaemonTaskDefinition.KeyValuePairProperty(*, name=None, value=None)
Bases:
object
A key-value pair object.
- Parameters:
name (Optional[str]) – The name of the key-value pair. For environment variables, this is the name of the environment variable.
value (Optional[str]) – The value of the key-value pair. For environment variables, this is the value of the environment variable.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

key_value_pair_property = ecs.CfnDaemonTaskDefinition.KeyValuePairProperty(
    name="name",
    value="value"
)
Attributes
- name
The name of the key-value pair.
For environment variables, this is the name of the environment variable.
- value
The value of the key-value pair.
For environment variables, this is the value of the environment variable.
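Since container environment variables are supplied as a list of these name/value pairs rather than as a mapping, a tiny helper (illustrative only, not part of the CDK API) can convert a plain Python dict into that shape:

```python
def to_key_value_pairs(env):
    """Convert {"NAME": "value"} into the name/value pair list shape."""
    return [{"name": k, "value": v} for k, v in sorted(env.items())]

pairs = to_key_value_pairs({"LOG_LEVEL": "info", "REGION": "us-east-1"})
# pairs[0] == {"name": "LOG_LEVEL", "value": "info"}
```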
LinuxParametersProperty
- class CfnDaemonTaskDefinition.LinuxParametersProperty(*, capabilities=None, devices=None, init_process_enabled=None, tmpfs=None)
Bases:
object
The Linux-specific options that are applied to the container, such as Linux KernelCapabilities.
- Parameters:
capabilities (Union[IResolvable, KernelCapabilitiesProperty, Dict[str, Any], None]) – The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition. For more detailed information about these Linux capabilities, see the capabilities(7) Linux manual page. The following describes how Docker processes the Linux capabilities specified in the add and drop request parameters. For information about the latest behavior, see Docker Compose: order of cap_drop and cap_add in the Docker Community Forum. - When the container is a privileged container, the container capabilities are all of the default Docker capabilities. The capabilities specified in the add request parameter and the drop request parameter are ignored. - When the add request parameter is set to ALL, the container capabilities are all of the default Docker capabilities, excluding those specified in the drop request parameter. - When the drop request parameter is set to ALL, the container capabilities are the capabilities specified in the add request parameter. - When the add request parameter and the drop request parameter are both empty, the container capabilities are all of the default Docker capabilities. - The default is to first drop the capabilities specified in the drop request parameter, and then add the capabilities specified in the add request parameter.
devices (Union[IResolvable, Sequence[Union[IResolvable, DeviceProperty, Dict[str, Any]]], None]) – Any host devices to expose to the container. This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
init_process_enabled (Union[bool, IResolvable, None]) – Run an init process inside the container that forwards signals and reaps processes. This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
tmpfs (Union[IResolvable, Sequence[Union[IResolvable, TmpfsProperty, Dict[str, Any]]], None]) – The container path, mount options, and size (in MiB) of the tmpfs mount. This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

linux_parameters_property = ecs.CfnDaemonTaskDefinition.LinuxParametersProperty(
    capabilities=ecs.CfnDaemonTaskDefinition.KernelCapabilitiesProperty(
        add=["add"],
        drop=["drop"]
    ),
    devices=[ecs.CfnDaemonTaskDefinition.DeviceProperty(
        container_path="containerPath",
        host_path="hostPath",
        permissions=["permissions"]
    )],
    init_process_enabled=False,
    tmpfs=[ecs.CfnDaemonTaskDefinition.TmpfsProperty(
        size=123,

        # the properties below are optional
        container_path="containerPath",
        mount_options=["mountOptions"]
    )]
)
Attributes
- capabilities
The Linux capabilities to add or remove from the default Docker configuration for a container defined in the task definition.
For more detailed information about these Linux capabilities, see the capabilities(7) Linux manual page. The following describes how Docker processes the Linux capabilities specified in the add and drop request parameters. For information about the latest behavior, see Docker Compose: order of cap_drop and cap_add in the Docker Community Forum.
When the container is a privileged container, the container capabilities are all of the default Docker capabilities. The capabilities specified in the add request parameter and the drop request parameter are ignored.
When the add request parameter is set to ALL, the container capabilities are all of the default Docker capabilities, excluding those specified in the drop request parameter.
When the drop request parameter is set to ALL, the container capabilities are the capabilities specified in the add request parameter.
When the add request parameter and the drop request parameter are both empty, the container capabilities are all of the default Docker capabilities.
The default is to first drop the capabilities specified in the drop request parameter, and then add the capabilities specified in the add request parameter.
- devices
Any host devices to expose to the container.
This parameter maps to Devices in the docker container create command and the --device option to docker run. If you're using tasks that use the Fargate launch type, the devices parameter isn't supported.
- init_process_enabled
Run an init process inside the container that forwards signals and reaps processes.
This parameter maps to the --init option to docker run. This parameter requires version 1.25 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
- tmpfs
The container path, mount options, and size (in MiB) of the tmpfs mount.
This parameter maps to the --tmpfs option to docker run. If you're using tasks that use the Fargate launch type, the tmpfs parameter isn't supported.
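Putting these options together, a Linux-parameters fragment might look like the following plain-dict sketch. The CloudFormation-style key names are assumed to mirror AWS::ECS::TaskDefinition, and all values are placeholders; remember that Devices and Tmpfs are not supported on Fargate.

```python
# Hypothetical LinuxParameters fragment (key names assumed, values are
# placeholders).
linux_parameters = {
    "InitProcessEnabled": True,        # forward signals and reap processes
    "Capabilities": {
        "Drop": ["ALL"],               # start from nothing...
        "Add": ["NET_BIND_SERVICE"],   # ...then add back what's needed
    },
    "Tmpfs": [{
        "ContainerPath": "/scratch",
        "Size": 256,                   # MiB
        "MountOptions": ["rw", "noexec"],
    }],
}
```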
LogConfigurationProperty
- class CfnDaemonTaskDefinition.LogConfigurationProperty(*, log_driver, options=None, secret_options=None)
Bases:
object
The log configuration for the container.
This parameter maps to LogConfig in the docker container create command and the --log-driver option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition. Understand the following when specifying a log configuration for your containers.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. Additional log drivers may be available in future releases of the Amazon ECS container agent. For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the ECS_AVAILABLE_LOGGING_DRIVERS environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS container agent configuration in the Amazon Elastic Container Service Developer Guide.
For tasks on AWS Fargate, because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
- Parameters:
log_driver (str) – The log driver to use for the container. For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
options (Union[Mapping[str, str], IResolvable, None]) – The configuration options to send to the log driver. The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group – Required: No. Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region – Required: Yes. Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group – Required: Yes. Make sure to specify a log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix – Required: Yes when using Fargate, optional when using EC2. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix, so that you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
- awslogs-datetime-format – Required: No. This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
- awslogs-multiline-pattern – Required: No. This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode – Required: No. Valid values: non-blocking | blocking. This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following: set the mode option in your container definition's logConfiguration as blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size – Required: No. Default value: 10m. When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
secret_options (Union[IResolvable, Sequence[Union[IResolvable, SecretProperty, Dict[str, Any]]], None]) – The secrets to pass to the log configuration. For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

log_configuration_property = ecs.CfnDaemonTaskDefinition.LogConfigurationProperty(
    log_driver="logDriver",

    # the properties below are optional
    options={
        "options_key": "options"
    },
    secret_options=[ecs.CfnDaemonTaskDefinition.SecretProperty(
        name="name",
        value_from="valueFrom"
    )]
)
Attributes
- log_driver
The log driver to use for the container.
For tasks on AWS Fargate, the supported log drivers are awslogs, splunk, and awsfirelens. For tasks hosted on Amazon EC2 instances, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, syslog, splunk, and awsfirelens. For more information about using the awslogs log driver, see Send Amazon ECS logs to CloudWatch in the Amazon Elastic Container Service Developer Guide. For more information about using the awsfirelens log driver, see Send Amazon ECS logs to an AWS service or AWS Partner. If you have a custom driver that isn't listed, you can fork the Amazon ECS container agent project that's available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we don't currently provide support for running modified copies of this software.
- options
The configuration options to send to the log driver.
The options you can specify depend on the log driver. Some of the options you can specify when you use the awslogs log driver to route logs to Amazon CloudWatch include the following:
- awslogs-create-group – Required: No. Specify whether you want the log group to be created automatically. If this option isn't specified, it defaults to false. Your IAM policy must include the logs:CreateLogGroup permission before you attempt to use awslogs-create-group.
- awslogs-region – Required: Yes. Specify the Region that the awslogs log driver is to send your Docker logs to. You can choose to send all of your logs from clusters in different Regions to a single region in CloudWatch Logs so that they're all visible in one location, or you can separate them by Region for more granularity. Make sure that the specified log group exists in the Region that you specify with this option.
- awslogs-group – Required: Yes. Make sure to specify a log group that the awslogs log driver sends its log streams to.
- awslogs-stream-prefix – Required: Yes when using Fargate, optional when using EC2. Use the awslogs-stream-prefix option to associate a log stream with the specified prefix, the container name, and the ID of the Amazon ECS task that the container belongs to. If you specify a prefix with this option, then the log stream takes the format prefix-name/container-name/ecs-task-id. If you don't specify a prefix with this option, then the log stream is named after the container ID that's assigned by the Docker daemon on the container instance. Because it's difficult to trace logs back to the container that sent them with just the Docker container ID (which is only available on the container instance), we recommend that you specify a prefix with this option. For Amazon ECS services, you can use the service name as the prefix, so that you can trace log streams to the service that the container belongs to, the name of the container that sent them, and the ID of the task that the container belongs to. You must specify a stream-prefix for your logs to have your logs appear in the Log pane when using the Amazon ECS console.
- awslogs-datetime-format – Required: No. This option defines a multiline start pattern in Python strftime format. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. One example of a use case for using this format is for parsing output such as a stack dump, which might otherwise be logged in multiple entries. The correct pattern allows it to be captured in a single entry. For more information, see awslogs-datetime-format. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
- awslogs-multiline-pattern – Required: No. This option defines a multiline start pattern that uses a regular expression. A log message consists of a line that matches the pattern and any following lines that don't match the pattern. The matched line is the delimiter between log messages. For more information, see awslogs-multiline-pattern. This option is ignored if awslogs-datetime-format is also configured. You cannot configure both the awslogs-datetime-format and awslogs-multiline-pattern options. Multiline logging performs regular expression parsing and matching of all log messages. This might have a negative impact on logging performance.
The following options apply to all supported log drivers.
- mode – Required: No. Valid values: non-blocking | blocking. This option defines the delivery mode of log messages from the container to the log driver specified using logDriver. The delivery mode you choose affects application availability when the flow of logs from the container is interrupted. If you use the blocking mode and the flow of logs is interrupted, calls from container code to write to the stdout and stderr streams will block. The logging thread of the application will block as a result. This may cause the application to become unresponsive and lead to container health check failure. If you use the non-blocking mode, the container's logs are instead stored in an in-memory intermediate buffer configured with the max-buffer-size option. This prevents the application from becoming unresponsive when logs cannot be sent. We recommend using this mode if you want to ensure service availability and are okay with some log loss. For more information, see Preventing log loss with non-blocking mode in the awslogs container log driver. You can set a default mode for all containers in a specific Region by using the defaultLogDriverMode account setting. If you don't specify the mode option or configure the account setting, Amazon ECS will default to the non-blocking mode. For more information about the account setting, see Default log driver mode in the Amazon Elastic Container Service Developer Guide. On June 25, 2025, Amazon ECS changed the default log driver mode from blocking to non-blocking to prioritize task availability over logging. To continue using the blocking mode after this change, do one of the following: set the mode option in your container definition's logConfiguration as blocking, or set the defaultLogDriverMode account setting to blocking.
- max-buffer-size – Required: No. Default value: 10m. When non-blocking mode is used, the max-buffer-size log option controls the size of the buffer that's used for intermediate message storage. Make sure to specify an adequate buffer size based on your application. When the buffer fills up, further logs cannot be stored. Logs that cannot be stored are lost.
To route logs using the splunk log router, you need to specify a splunk-token and a splunk-url. When you use the awsfirelens log router to route logs to an AWS Service or AWS Partner Network destination for log storage and analytics, you can set the log-driver-buffer-limit option to limit the number of events that are buffered in memory, before being sent to the log router container. It can help to resolve potential log loss issues because high throughput might result in memory running out for the buffer inside of Docker. Other options you can specify when using awsfirelens to route logs depend on the destination. When you export logs to Amazon Data Firehose, you can specify the AWS Region with region and a name for the log stream with delivery_stream. When you export logs to Amazon Kinesis Data Streams, you can specify an AWS Region with region and a data stream name with stream. When you export logs to Amazon OpenSearch Service, you can specify options like Name, Host (OpenSearch Service endpoint without protocol), Port, Index, Type, Aws_auth, Aws_region, Suppress_Type_Name, and tls. For more information, see Under the hood: FireLens for Amazon ECS Tasks. When you export logs to Amazon S3, you can specify the bucket using the bucket option. You can also specify region, total_file_size, upload_timeout, and use_put_object as options. This parameter requires version 1.19 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
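As a concrete illustration of the awslogs settings described above, the options map below combines a stream prefix (so streams can be traced back to service, container, and task) with non-blocking delivery and an explicit buffer size. All names and values are placeholders you would adapt to your own setup.

```python
# Illustrative awslogs options map (placeholder values).
awslogs_options = {
    "awslogs-group": "/ecs/my-daemon",
    "awslogs-create-group": "true",     # requires logs:CreateLogGroup in IAM
    "awslogs-region": "us-east-1",
    "awslogs-stream-prefix": "daemon",  # prefix-name/container-name/ecs-task-id
    "mode": "non-blocking",             # buffer instead of blocking stdout/stderr
    "max-buffer-size": "25m",           # default is 10m in non-blocking mode
}
```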
- secret_options
The secrets to pass to the log configuration.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
MountPointProperty
- class CfnDaemonTaskDefinition.MountPointProperty(*, container_path=None, read_only=None, source_volume=None)
Bases:
object
The details for a volume mount point that's used in a container definition.
- Parameters:
container_path (Optional[str]) – The path on the container to mount the host volume at.
read_only (Union[bool, IResolvable, None]) – If this value is true, the container has read-only access to the volume. If this value is false, then the container can write to the volume. The default value is false.
source_volume (Optional[str]) – The name of the volume to mount. Must be a volume name referenced in the name parameter of task definition volume.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

mount_point_property = ecs.CfnDaemonTaskDefinition.MountPointProperty(
    container_path="containerPath",
    read_only=False,
    source_volume="sourceVolume"
)
Attributes
- container_path
The path on the container to mount the host volume at.
- read_only
If this value is true, the container has read-only access to the volume.
If this value is false, then the container can write to the volume. The default value is false.
- source_volume
The name of the volume to mount.
Must be a volume name referenced in the name parameter of task definition volume.
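Because every sourceVolume must match the name of a volume declared on the task definition, a quick sanity check like the following (an illustrative helper, not part of the CDK API) can catch dangling references before deployment:

```python
def unresolved_mount_points(volumes, mount_points):
    """Return sourceVolume names that match no declared volume."""
    names = {v["name"] for v in volumes}
    return [m.get("sourceVolume") for m in mount_points
            if m.get("sourceVolume") not in names]

# Placeholder data: one declared volume, two mount points.
volumes = [{"name": "agent-state"}]
mounts = [
    {"containerPath": "/data", "sourceVolume": "agent-state", "readOnly": False},
    {"containerPath": "/oops", "sourceVolume": "missing"},
]
bad = unresolved_mount_points(volumes, mounts)  # ["missing"]
```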
RepositoryCredentialsProperty
- class CfnDaemonTaskDefinition.RepositoryCredentialsProperty(*, credentials_parameter=None)
Bases:
object
The repository credentials for private registry authentication.
- Parameters:
credentials_parameter (Optional[str]) – The Amazon Resource Name (ARN) of the secret containing the private repository credentials. When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you’re launching, then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

repository_credentials_property = ecs.CfnDaemonTaskDefinition.RepositoryCredentialsProperty(
    credentials_parameter="credentialsParameter"
)
Attributes
- credentials_parameter
The Amazon Resource Name (ARN) of the secret containing the private repository credentials.
When you use the Amazon ECS API, CLI, or AWS SDK, if the secret exists in the same Region as the task that you’re launching, then you can use either the full ARN or the name of the secret. When you use the AWS Management Console, you must specify the full ARN of the secret.
RestartPolicyProperty
- class CfnDaemonTaskDefinition.RestartPolicyProperty(*, enabled=None, ignored_exit_codes=None, restart_attempt_period=None)
Bases:
object
The restart policy for a container.
- Parameters:
enabled (Union[bool,IResolvable,None]) – Specifies whether a restart policy is enabled for the container.
ignored_exit_codes (Union[Sequence[Union[int,float]],IResolvable,None]) – A list of exit codes that Amazon ECS will ignore and not attempt a restart on.
restart_attempt_period (Union[int,float,None]) – A period of time (in seconds) that the container must run for before a restart can be attempted.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

restart_policy_property = ecs.CfnDaemonTaskDefinition.RestartPolicyProperty(
    enabled=False,
    ignored_exit_codes=[123],
    restart_attempt_period=123
)
Attributes
- enabled
Specifies whether a restart policy is enabled for the container.
- ignored_exit_codes
A list of exit codes that Amazon ECS will ignore and not attempt a restart on.
- restart_attempt_period
A period of time (in seconds) that the container must run for before a restart can be attempted.
SecretProperty
- class CfnDaemonTaskDefinition.SecretProperty(*, name, value_from)
Bases:
object
An object representing the secret to expose to your container.
Secrets can be exposed to a container in the following ways:
To inject sensitive data into your containers as environment variables, use the secrets container definition parameter.
To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
For more information, see Specifying sensitive data in the Amazon Elastic Container Service Developer Guide.
- Parameters:
name (str) – The name of the secret.
value_from (str) – The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

secret_property = ecs.CfnDaemonTaskDefinition.SecretProperty(
    name="name",
    value_from="valueFrom"
)
Attributes
- name
The name of the secret.
- value_from
The secret to expose to the container.
The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store. For information about the required AWS Identity and Access Management (IAM) permissions, see Required IAM permissions for Amazon ECS secrets (for Secrets Manager) or Required IAM permissions for Amazon ECS secrets (for Systems Manager Parameter store) in the Amazon Elastic Container Service Developer Guide. If the SSM Parameter Store parameter exists in the same Region as the task you’re launching, then you can use either the full ARN or name of the parameter. If the parameter exists in a different Region, then the full ARN must be specified.
SystemControlProperty
- class CfnDaemonTaskDefinition.SystemControlProperty(*, namespace=None, value=None)
Bases:
object
A list of namespaced kernel parameters to set in the container.
This parameter maps to Sysctls in the docker container create command and the --sysctl option to docker run. For example, you can configure the net.ipv4.tcp_keepalive_time setting to maintain longer-lived connections. We don’t recommend that you specify network-related systemControls parameters for multiple containers in a single task that also uses either the awsvpc or host network mode. Doing this has the following disadvantages:
For tasks that use the awsvpc network mode, including Fargate, if you set systemControls for any container, it applies to all containers in the task. If you set different systemControls for multiple containers in a single task, the container that’s started last determines which systemControls take effect.
For tasks that use the host network mode, the network namespace systemControls aren’t supported.
If you’re setting an IPC resource namespace to use for the containers in the task, the following conditions apply to your system controls. For more information, see IPC mode.
For tasks that use the host IPC mode, IPC namespace systemControls aren’t supported.
For tasks that use the task IPC mode, IPC namespace systemControls values apply to all containers within a task.
This parameter is not supported for Windows containers. This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version 1.4.0 or later (Linux). This isn’t supported for Windows containers on Fargate.
- Parameters:
namespace (Optional[str]) – The namespaced kernel parameter to set a value for.
value (Optional[str]) – The namespaced kernel parameter to set a value for. Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

system_control_property = ecs.CfnDaemonTaskDefinition.SystemControlProperty(
    namespace="namespace",
    value="value"
)
Attributes
- namespace
The namespaced kernel parameter to set a value for.
- value
The namespaced kernel parameter to set a value for.
Valid IPC namespace values: "kernel.msgmax" | "kernel.msgmnb" | "kernel.msgmni" | "kernel.sem" | "kernel.shmall" | "kernel.shmmax" | "kernel.shmmni" | "kernel.shm_rmid_forced", and Sysctls that start with "fs.mqueue.*". Valid network namespace values: Sysctls that start with "net.*". Only namespaced Sysctls that exist within the container starting with "net.*" are accepted. All of these values are supported by Fargate.
TmpfsProperty
- class CfnDaemonTaskDefinition.TmpfsProperty(*, size, container_path=None, mount_options=None)
Bases:
object
The container path, mount options, and size of the tmpfs mount.
- Parameters:
size (Union[int,float]) – The maximum size (in MiB) of the tmpfs volume.
container_path (Optional[str]) – The absolute file path where the tmpfs volume is to be mounted.
mount_options (Optional[Sequence[str]]) – The list of tmpfs volume mount options. Valid values: "defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

tmpfs_property = ecs.CfnDaemonTaskDefinition.TmpfsProperty(
    size=123,

    # the properties below are optional
    container_path="containerPath",
    mount_options=["mountOptions"]
)
Attributes
- container_path
The absolute file path where the tmpfs volume is to be mounted.
- mount_options
The list of tmpfs volume mount options.
Valid values:
"defaults" | "ro" | "rw" | "suid" | "nosuid" | "dev" | "nodev" | "exec" | "noexec" | "sync" | "async" | "dirsync" | "remount" | "mand" | "nomand" | "atime" | "noatime" | "diratime" | "nodiratime" | "bind" | "rbind" | "unbindable" | "runbindable" | "private" | "rprivate" | "shared" | "rshared" | "slave" | "rslave" | "relatime" | "norelatime" | "strictatime" | "nostrictatime" | "mode" | "uid" | "gid" | "nr_inodes" | "nr_blocks" | "mpol"
- size
The maximum size (in MiB) of the tmpfs volume.
UlimitProperty
- class CfnDaemonTaskDefinition.UlimitProperty(*, hard_limit, name, soft_limit)
Bases:
object
The ulimit settings to pass to the container.
Amazon ECS tasks hosted on AWS Fargate use the default resource limit values set by the operating system, with the exception of the nofile resource limit parameter, which AWS Fargate overrides. The nofile resource limit sets a restriction on the number of open files that a container can use. The default nofile soft limit is 65535 and the default hard limit is 65535. You can specify the ulimit settings for a container in a task definition.
- Parameters:
hard_limit (Union[int,float]) – The hard limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
name (str) – The type of the ulimit.
soft_limit (Union[int,float]) – The soft limit for the ulimit type. The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

ulimit_property = ecs.CfnDaemonTaskDefinition.UlimitProperty(
    hard_limit=123,
    name="name",
    soft_limit=123
)
Attributes
- hard_limit
The hard limit for the ulimit type.
The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
- name
The type of the ulimit.
- soft_limit
The soft limit for the ulimit type.
The value can be specified in bytes, seconds, or as a count, depending on the type of the ulimit.
VolumeProperty
- class CfnDaemonTaskDefinition.VolumeProperty(*, host=None, name=None)
Bases:
object
The data volume configuration for tasks launched using this task definition.
Specifying a volume configuration in a task definition is optional. The volume configuration may contain multiple volumes, but only one volume configured at launch is supported. Each volume defined in the volume configuration may only specify a name and one of either configuredAtLaunch, dockerVolumeConfiguration, efsVolumeConfiguration, fsxWindowsFileServerVolumeConfiguration, or host. If an empty volume configuration is specified, by default Amazon ECS uses a host volume. For more information, see Using data volumes in tasks.
- Parameters:
host (Union[IResolvable,HostVolumePropertiesProperty,Dict[str,Any],None]) – Details on a container instance bind mount host volume.
name (Optional[str]) – The name of the volume. Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk import aws_ecs as ecs

volume_property = ecs.CfnDaemonTaskDefinition.VolumeProperty(
    host=ecs.CfnDaemonTaskDefinition.HostVolumePropertiesProperty(
        source_path="sourcePath"
    ),
    name="name"
)
Attributes
- host
Details on a container instance bind mount host volume.
- name
The name of the volume.
Up to 255 letters (uppercase and lowercase), numbers, underscores, and hyphens are allowed. When using a volume configured at launch, the name is required and must also be specified as the volume name in the ServiceVolumeConfiguration or TaskVolumeConfiguration parameter when creating your service or standalone task. For all other types of volumes, this name is referenced in the sourceVolume parameter of the mountPoints object in the container definition. When a volume is using the efsVolumeConfiguration, the name is required.