CfnAutoScalingGroupPropsMixin
- class aws_cdk.mixins_preview.aws_autoscaling.mixins.CfnAutoScalingGroupPropsMixin(props, *, strategy=None)
Bases:
Mixin
The AWS::AutoScaling::AutoScalingGroup resource defines an Amazon EC2 Auto Scaling group, which is a collection of Amazon EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. For more information about Amazon EC2 Auto Scaling, see the Amazon EC2 Auto Scaling User Guide. .. epigraph:
Amazon EC2 Auto Scaling configures instances launched as part of an Auto Scaling group using either a `launch template <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-launchtemplate.html>`_ or a launch configuration. We strongly recommend that you do not use launch configurations. For more information, see `Launch configurations <https://docs.aws.amazon.com/autoscaling/ec2/userguide/launch-configurations.html>`_ in the *Amazon EC2 Auto Scaling User Guide* . For help migrating from launch configurations to launch templates, see `Migrate AWS CloudFormation stacks from launch configurations to launch templates <https://docs.aws.amazon.com/autoscaling/ec2/userguide/migrate-launch-configurations-with-cloudformation.html>`_ in the *Amazon EC2 Auto Scaling User Guide* .
- See:
- CloudformationResource:
AWS::AutoScaling::AutoScalingGroup
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

cfn_auto_scaling_group_props_mixin = autoscaling_mixins.CfnAutoScalingGroupPropsMixin(autoscaling_mixins.CfnAutoScalingGroupMixinProps(
    auto_scaling_group_name="autoScalingGroupName",
    availability_zone_distribution=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AvailabilityZoneDistributionProperty(
        capacity_distribution_strategy="capacityDistributionStrategy"
    ),
    availability_zone_impairment_policy=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AvailabilityZoneImpairmentPolicyProperty(
        impaired_zone_health_check_behavior="impairedZoneHealthCheckBehavior",
        zonal_shift_enabled=False
    ),
    availability_zones=["availabilityZones"],
    capacity_rebalance=False,
    capacity_reservation_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CapacityReservationSpecificationProperty(
        capacity_reservation_preference="capacityReservationPreference",
        capacity_reservation_target=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CapacityReservationTargetProperty(
            capacity_reservation_ids=["capacityReservationIds"],
            capacity_reservation_resource_group_arns=["capacityReservationResourceGroupArns"]
        )
    ),
    context="context",
    cooldown="cooldown",
    default_instance_warmup=123,
    desired_capacity="desiredCapacity",
    desired_capacity_type="desiredCapacityType",
    health_check_grace_period=123,
    health_check_type="healthCheckType",
    instance_id="instanceId",
    instance_maintenance_policy=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceMaintenancePolicyProperty(
        max_healthy_percentage=123,
        min_healthy_percentage=123
    ),
    launch_configuration_name="launchConfigurationName",
    launch_template=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
        launch_template_id="launchTemplateId",
        launch_template_name="launchTemplateName",
        version="version"
    ),
    lifecycle_hook_specification_list=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LifecycleHookSpecificationProperty(
        default_result="defaultResult",
        heartbeat_timeout=123,
        lifecycle_hook_name="lifecycleHookName",
        lifecycle_transition="lifecycleTransition",
        notification_metadata="notificationMetadata",
        notification_target_arn="notificationTargetArn",
        role_arn="roleArn"
    )],
    load_balancer_names=["loadBalancerNames"],
    max_instance_lifetime=123,
    max_size="maxSize",
    metrics_collection=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MetricsCollectionProperty(
        granularity="granularity",
        metrics=["metrics"]
    )],
    min_size="minSize",
    mixed_instances_policy=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MixedInstancesPolicyProperty(
        instances_distribution=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstancesDistributionProperty(
            on_demand_allocation_strategy="onDemandAllocationStrategy",
            on_demand_base_capacity=123,
            on_demand_percentage_above_base_capacity=123,
            spot_allocation_strategy="spotAllocationStrategy",
            spot_instance_pools=123,
            spot_max_price="spotMaxPrice"
        ),
        launch_template=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateProperty(
            launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
                launch_template_id="launchTemplateId",
                launch_template_name="launchTemplateName",
                version="version"
            ),
            overrides=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateOverridesProperty(
                instance_requirements=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(
                    accelerator_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
                        max=123,
                        min=123
                    ),
                    accelerator_manufacturers=["acceleratorManufacturers"],
                    accelerator_names=["acceleratorNames"],
                    accelerator_total_memory_mib=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
                        max=123,
                        min=123
                    ),
                    accelerator_types=["acceleratorTypes"],
                    allowed_instance_types=["allowedInstanceTypes"],
                    bare_metal="bareMetal",
                    baseline_ebs_bandwidth_mbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
                        max=123,
                        min=123
                    ),
                    baseline_performance_factors=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
                        cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
                            references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
                                instance_family="instanceFamily"
                            )]
                        )
                    ),
                    burstable_performance="burstablePerformance",
                    cpu_manufacturers=["cpuManufacturers"],
                    excluded_instance_types=["excludedInstanceTypes"],
                    instance_generations=["instanceGenerations"],
                    local_storage="localStorage",
                    local_storage_types=["localStorageTypes"],
                    max_spot_price_as_percentage_of_optimal_on_demand_price=123,
                    memory_gib_per_v_cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
                        max=123,
                        min=123
                    ),
                    memory_mib=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
                        max=123,
                        min=123
                    ),
                    network_bandwidth_gbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
                        max=123,
                        min=123
                    ),
                    network_interface_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
                        max=123,
                        min=123
                    ),
                    on_demand_max_price_percentage_over_lowest_price=123,
                    require_hibernate_support=False,
                    spot_max_price_percentage_over_lowest_price=123,
                    total_local_storage_gb=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
                        max=123,
                        min=123
                    ),
                    v_cpu_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
                        max=123,
                        min=123
                    )
                ),
                instance_type="instanceType",
                launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
                    launch_template_id="launchTemplateId",
                    launch_template_name="launchTemplateName",
                    version="version"
                ),
                weighted_capacity="weightedCapacity"
            )]
        )
    ),
    new_instances_protected_from_scale_in=False,
    notification_configuration=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NotificationConfigurationProperty(
        notification_types=["notificationTypes"],
        topic_arn="topicArn"
    ),
    notification_configurations=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NotificationConfigurationProperty(
        notification_types=["notificationTypes"],
        topic_arn="topicArn"
    )],
    placement_group="placementGroup",
    service_linked_role_arn="serviceLinkedRoleArn",
    skip_zonal_shift_validation=False,
    tags=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TagPropertyProperty(
        key="key",
        propagate_at_launch=False,
        value="value"
    )],
    target_group_arns=["targetGroupArns"],
    termination_policies=["terminationPolicies"],
    traffic_sources=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TrafficSourceIdentifierProperty(
        identifier="identifier",
        type="type"
    )],
    vpc_zone_identifier=["vpcZoneIdentifier"]
), strategy=mixins.PropertyMergeStrategy.OVERRIDE)
Create a mixin to apply properties to AWS::AutoScaling::AutoScalingGroup.
- Parameters:
props (Union[CfnAutoScalingGroupMixinProps, Dict[str, Any]]) – L1 properties to apply.
strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
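The difference between the two strategies can be pictured with plain dictionaries. The sketch below is illustrative only, not the CDK implementation: it shows how a MERGE-style deep merge of nested properties differs from an OVERRIDE-style wholesale replacement.

```python
def merge_props(base, overlay):
    """MERGE-like behavior: nested dicts are combined key by key
    instead of being replaced wholesale."""
    result = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_props(result[key], value)
        else:
            result[key] = value
    return result


def override_props(base, overlay):
    """OVERRIDE-like behavior: any property in the overlay replaces the
    base property entirely, including whole nested structures."""
    result = dict(base)
    result.update(overlay)
    return result


base = {"minSize": "1", "instanceMaintenancePolicy": {"minHealthyPercentage": 90}}
overlay = {"instanceMaintenancePolicy": {"maxHealthyPercentage": 110}}

# MERGE keeps minHealthyPercentage and adds maxHealthyPercentage;
# OVERRIDE replaces the nested policy, leaving only maxHealthyPercentage.
print(merge_props(base, overlay))
print(override_props(base, overlay))
```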
Methods
- apply_to(construct)
Apply the mixin properties to the construct.
- Parameters:
construct (IConstruct)
- Return type:
- supports(construct)
Check if this mixin supports the given construct.
- Parameters:
construct (IConstruct)
- Return type:
bool
Attributes
- CFN_PROPERTY_KEYS = ['autoScalingGroupName', 'availabilityZoneDistribution', 'availabilityZoneImpairmentPolicy', 'availabilityZones', 'capacityRebalance', 'capacityReservationSpecification', 'context', 'cooldown', 'defaultInstanceWarmup', 'desiredCapacity', 'desiredCapacityType', 'healthCheckGracePeriod', 'healthCheckType', 'instanceId', 'instanceMaintenancePolicy', 'launchConfigurationName', 'launchTemplate', 'lifecycleHookSpecificationList', 'loadBalancerNames', 'maxInstanceLifetime', 'maxSize', 'metricsCollection', 'minSize', 'mixedInstancesPolicy', 'newInstancesProtectedFromScaleIn', 'notificationConfiguration', 'notificationConfigurations', 'placementGroup', 'serviceLinkedRoleArn', 'skipZonalShiftValidation', 'tags', 'targetGroupArns', 'terminationPolicies', 'trafficSources', 'vpcZoneIdentifier']
Static Methods
- classmethod is_mixin(x)
(experimental) Checks if x is a Mixin.
- Parameters:
x (Any) – Any object.
- Return type:
bool
- Returns:
true if x is an object created from a class which extends Mixin.
- Stability:
experimental
AcceleratorCountRequestProperty
- class CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(*, max=None, min=None)
Bases:
object
AcceleratorCountRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum number of accelerators for an instance type.
- Parameters:
max (Union[int, float, None]) – The maximum value.
min (Union[int, float, None]) – The minimum value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

accelerator_count_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The maximum value.
- min
The minimum value.
AcceleratorTotalMemoryMiBRequestProperty
- class CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(*, max=None, min=None)
Bases:
object
AcceleratorTotalMemoryMiBRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum total memory size for the accelerators for an instance type, in MiB.
- Parameters:
max (Union[int, float, None]) – The memory maximum in MiB.
min (Union[int, float, None]) – The memory minimum in MiB.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

accelerator_total_memory_mib_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The memory maximum in MiB.
- min
The memory minimum in MiB.
AvailabilityZoneDistributionProperty
- class CfnAutoScalingGroupPropsMixin.AvailabilityZoneDistributionProperty(*, capacity_distribution_strategy=None)
Bases:
object
AvailabilityZoneDistribution is a property of the AWS::AutoScaling::AutoScalingGroup resource.
- Parameters:
capacity_distribution_strategy (Optional[str]) – If launches fail in an Availability Zone, the following strategies are available. The default is balanced-best-effort.
- balanced-only – If launches fail in an Availability Zone, Auto Scaling will continue to attempt to launch in the unhealthy zone to preserve a balanced distribution.
- balanced-best-effort – If launches fail in an Availability Zone, Auto Scaling will attempt to launch in another healthy Availability Zone instead.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

availability_zone_distribution_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AvailabilityZoneDistributionProperty(
    capacity_distribution_strategy="capacityDistributionStrategy"
)
Attributes
- capacity_distribution_strategy
If launches fail in an Availability Zone, the following strategies are available. The default is balanced-best-effort.
- balanced-only – If launches fail in an Availability Zone, Auto Scaling will continue to attempt to launch in the unhealthy zone to preserve a balanced distribution.
- balanced-best-effort – If launches fail in an Availability Zone, Auto Scaling will attempt to launch in another healthy Availability Zone instead.
AvailabilityZoneImpairmentPolicyProperty
- class CfnAutoScalingGroupPropsMixin.AvailabilityZoneImpairmentPolicyProperty(*, impaired_zone_health_check_behavior=None, zonal_shift_enabled=None)
Bases:
object
Describes an Availability Zone impairment policy.
- Parameters:
impaired_zone_health_check_behavior (Optional[str]) – Specifies the health check behavior for the impaired Availability Zone in an active zonal shift. If you select Replace unhealthy, instances that appear unhealthy will be replaced in all Availability Zones. If you select Ignore unhealthy, instances will not be replaced in the Availability Zone with the active zonal shift. For more information, see Auto Scaling group zonal shift in the Amazon EC2 Auto Scaling User Guide.
zonal_shift_enabled (Union[bool, IResolvable, None]) – If true, enable zonal shift for your Auto Scaling group.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

availability_zone_impairment_policy_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AvailabilityZoneImpairmentPolicyProperty(
    impaired_zone_health_check_behavior="impairedZoneHealthCheckBehavior",
    zonal_shift_enabled=False
)
Attributes
- impaired_zone_health_check_behavior
Specifies the health check behavior for the impaired Availability Zone in an active zonal shift.
If you select Replace unhealthy, instances that appear unhealthy will be replaced in all Availability Zones. If you select Ignore unhealthy, instances will not be replaced in the Availability Zone with the active zonal shift. For more information, see Auto Scaling group zonal shift in the Amazon EC2 Auto Scaling User Guide.
- zonal_shift_enabled
If true, enable zonal shift for your Auto Scaling group.
BaselineEbsBandwidthMbpsRequestProperty
- class CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(*, max=None, min=None)
Bases:
object
BaselineEbsBandwidthMbpsRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum baseline bandwidth performance for an instance type, in Mbps.
- Parameters:
max (Union[int, float, None]) – The maximum value in Mbps.
min (Union[int, float, None]) – The minimum value in Mbps.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

baseline_ebs_bandwidth_mbps_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The maximum value in Mbps.
- min
The minimum value in Mbps.
BaselinePerformanceFactorsRequestProperty
- class CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(*, cpu=None)
Bases:
object
The baseline performance to consider, using an instance family as a baseline reference.
The instance family establishes the lowest acceptable level of performance. Auto Scaling uses this baseline to guide instance type selection, but there is no guarantee that the selected instance types will always exceed the baseline for every application.
Currently, this parameter only supports CPU performance as a baseline performance factor. For example, specifying c6i uses the CPU performance of the c6i family as the baseline reference.
- Parameters:
cpu (Union[IResolvable, CpuPerformanceFactorRequestProperty, Dict[str, Any], None]) – The CPU performance to consider, using an instance family as the baseline reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

baseline_performance_factors_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
    cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
        references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
            instance_family="instanceFamily"
        )]
    )
)
Attributes
- cpu
The CPU performance to consider, using an instance family as the baseline reference.
CapacityReservationSpecificationProperty
- class CfnAutoScalingGroupPropsMixin.CapacityReservationSpecificationProperty(*, capacity_reservation_preference=None, capacity_reservation_target=None)
Bases:
object
Describes the Capacity Reservation preference and targeting options.
If you specify open or none for CapacityReservationPreference, do not specify a CapacityReservationTarget.
- Parameters:
capacity_reservation_preference (Optional[str]) – The capacity reservation preference. The following options are available:
- capacity-reservations-only – Auto Scaling will only launch instances into a Capacity Reservation or Capacity Reservation resource group. If capacity isn’t available, instances will fail to launch.
- capacity-reservations-first – Auto Scaling will try to launch instances into a Capacity Reservation or Capacity Reservation resource group first. If capacity isn’t available, instances will run in On-Demand capacity.
- none – Auto Scaling will not launch instances into a Capacity Reservation. Instances will run in On-Demand capacity.
- default – Auto Scaling uses the Capacity Reservation preference from your launch template or an open Capacity Reservation.
capacity_reservation_target (Union[IResolvable, CapacityReservationTargetProperty, Dict[str, Any], None]) – Describes a target Capacity Reservation or Capacity Reservation resource group.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

capacity_reservation_specification_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CapacityReservationSpecificationProperty(
    capacity_reservation_preference="capacityReservationPreference",
    capacity_reservation_target=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CapacityReservationTargetProperty(
        capacity_reservation_ids=["capacityReservationIds"],
        capacity_reservation_resource_group_arns=["capacityReservationResourceGroupArns"]
    )
)
Attributes
- capacity_reservation_preference
The capacity reservation preference. The following options are available:
- capacity-reservations-only – Auto Scaling will only launch instances into a Capacity Reservation or Capacity Reservation resource group. If capacity isn’t available, instances will fail to launch.
- capacity-reservations-first – Auto Scaling will try to launch instances into a Capacity Reservation or Capacity Reservation resource group first. If capacity isn’t available, instances will run in On-Demand capacity.
- none – Auto Scaling will not launch instances into a Capacity Reservation. Instances will run in On-Demand capacity.
- default – Auto Scaling uses the Capacity Reservation preference from your launch template or an open Capacity Reservation.
- See:
- capacity_reservation_target
Describes a target Capacity Reservation or Capacity Reservation resource group.
CapacityReservationTargetProperty
- class CfnAutoScalingGroupPropsMixin.CapacityReservationTargetProperty(*, capacity_reservation_ids=None, capacity_reservation_resource_group_arns=None)
Bases:
object
The target for the Capacity Reservation.
Specify Capacity Reservations IDs or Capacity Reservation resource group ARNs.
- Parameters:
capacity_reservation_ids (Optional[Sequence[str]]) – The Capacity Reservation IDs to launch instances into.
capacity_reservation_resource_group_arns (Optional[Sequence[str]]) – The resource group ARNs of the Capacity Reservation to launch instances into.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

capacity_reservation_target_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CapacityReservationTargetProperty(
    capacity_reservation_ids=["capacityReservationIds"],
    capacity_reservation_resource_group_arns=["capacityReservationResourceGroupArns"]
)
Attributes
- capacity_reservation_ids
The Capacity Reservation IDs to launch instances into.
- capacity_reservation_resource_group_arns
The resource group ARNs of the Capacity Reservation to launch instances into.
CpuPerformanceFactorRequestProperty
- class CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(*, references=None)
Bases:
object
The CPU performance to consider, using an instance family as the baseline reference.
- Parameters:
references (Union[IResolvable, Sequence[Union[IResolvable, PerformanceFactorReferenceRequestProperty, Dict[str, Any]]], None]) – Specify an instance family to use as the baseline reference for CPU performance. All instance types that match your specified attributes will be compared against the CPU performance of the referenced instance family, regardless of CPU manufacturer or architecture differences. .. epigraph:: Currently only one instance family can be specified in the list.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

cpu_performance_factor_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
    references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
        instance_family="instanceFamily"
    )]
)
Attributes
- references
Specify an instance family to use as the baseline reference for CPU performance.
All instance types that match your specified attributes will be compared against the CPU performance of the referenced instance family, regardless of CPU manufacturer or architecture differences. .. epigraph:
Currently only one instance family can be specified in the list.
InstanceMaintenancePolicyProperty
- class CfnAutoScalingGroupPropsMixin.InstanceMaintenancePolicyProperty(*, max_healthy_percentage=None, min_healthy_percentage=None)
Bases:
object
InstanceMaintenancePolicy is a property of the AWS::AutoScaling::AutoScalingGroup resource. For more information, see Instance maintenance policies in the Amazon EC2 Auto Scaling User Guide.
- Parameters:
max_healthy_percentage (Union[int, float, None]) – Specifies the upper threshold as a percentage of the desired capacity of the Auto Scaling group. It represents the maximum percentage of the group that can be in service and healthy, or pending, to support your workload when replacing instances. Value range is 100 to 200. To clear a previously set value, specify a value of -1. Both MinHealthyPercentage and MaxHealthyPercentage must be specified, and the difference between them cannot be greater than 100. A large range increases the number of instances that can be replaced at the same time.
min_healthy_percentage (Union[int, float, None]) – Specifies the lower threshold as a percentage of the desired capacity of the Auto Scaling group. It represents the minimum percentage of the group to keep in service, healthy, and ready to use to support your workload when replacing instances. Value range is 0 to 100. To clear a previously set value, specify a value of -1.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

instance_maintenance_policy_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceMaintenancePolicyProperty(
    max_healthy_percentage=123,
    min_healthy_percentage=123
)
Attributes
- max_healthy_percentage
Specifies the upper threshold as a percentage of the desired capacity of the Auto Scaling group.
It represents the maximum percentage of the group that can be in service and healthy, or pending, to support your workload when replacing instances. Value range is 100 to 200. To clear a previously set value, specify a value of -1.
Both MinHealthyPercentage and MaxHealthyPercentage must be specified, and the difference between them cannot be greater than 100. A large range increases the number of instances that can be replaced at the same time.
- min_healthy_percentage
Specifies the lower threshold as a percentage of the desired capacity of the Auto Scaling group.
It represents the minimum percentage of the group to keep in service, healthy, and ready to use to support your workload when replacing instances. Value range is 0 to 100. To clear a previously set value, specify a value of -1.
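The constraints documented for these two thresholds (max in 100 to 200, min in 0 to 100, both specified together, difference no greater than 100, and -1 to clear) can be captured in a small standalone check. This is an illustrative sketch, not part of the CDK or CloudFormation API:

```python
def validate_instance_maintenance_policy(min_healthy, max_healthy):
    """Return True when min/max healthy percentages satisfy the
    documented InstanceMaintenancePolicy constraints."""
    # A value of -1 for both clears a previously set policy
    if min_healthy == -1 and max_healthy == -1:
        return True
    # Both thresholds must be specified together
    if min_healthy is None or max_healthy is None:
        return False
    # Documented value ranges: min 0-100, max 100-200
    if not (0 <= min_healthy <= 100) or not (100 <= max_healthy <= 200):
        return False
    # The difference between them cannot be greater than 100
    return max_healthy - min_healthy <= 100


print(validate_instance_maintenance_policy(90, 110))   # True
print(validate_instance_maintenance_policy(0, 200))    # False: difference is 200
print(validate_instance_maintenance_policy(-1, -1))    # True: clears the policy
```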
InstanceRequirementsProperty
- class CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(*, accelerator_count=None, accelerator_manufacturers=None, accelerator_names=None, accelerator_total_memory_mib=None, accelerator_types=None, allowed_instance_types=None, bare_metal=None, baseline_ebs_bandwidth_mbps=None, baseline_performance_factors=None, burstable_performance=None, cpu_manufacturers=None, excluded_instance_types=None, instance_generations=None, local_storage=None, local_storage_types=None, max_spot_price_as_percentage_of_optimal_on_demand_price=None, memory_gib_per_v_cpu=None, memory_mib=None, network_bandwidth_gbps=None, network_interface_count=None, on_demand_max_price_percentage_over_lowest_price=None, require_hibernate_support=None, spot_max_price_percentage_over_lowest_price=None, total_local_storage_gb=None, v_cpu_count=None)
Bases:
object
The attributes for the instance types for a mixed instances policy.
Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
When you specify multiple attributes, you get instance types that satisfy all of the specified attributes. If you specify multiple values for an attribute, you get instance types that satisfy any of the specified values.
To limit the list of instance types from which Amazon EC2 Auto Scaling can identify matching instance types, you can use one of the following parameters, but not both in the same request:
- AllowedInstanceTypes – The instance types to include in the list. All other instance types are ignored, even if they match your specified attributes.
- ExcludedInstanceTypes – The instance types to exclude from the list, even if they match your specified attributes.
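The matching rule described above, where every specified attribute must be satisfied but any listed value within one attribute is enough, can be sketched as a plain filter over hypothetical instance-type records. The catalog, field names, and values below are invented for illustration; the real selection is performed by Amazon EC2 Auto Scaling:

```python
def matches_requirements(instance_type, requirements):
    """An instance type matches when EVERY specified attribute is
    satisfied; within one attribute, ANY of the listed values suffices."""
    return all(
        instance_type.get(attribute) in allowed_values
        for attribute, allowed_values in requirements.items()
    )


# Hypothetical catalog records, for illustration only
catalog = [
    {"name": "c6i.large", "cpu_manufacturer": "intel", "bare_metal": "no"},
    {"name": "c6a.large", "cpu_manufacturer": "amd", "bare_metal": "no"},
    {"name": "c6g.metal", "cpu_manufacturer": "amazon-web-services", "bare_metal": "yes"},
]

requirements = {
    "cpu_manufacturer": ["intel", "amd"],  # ANY of these manufacturers
    "bare_metal": ["no"],                  # AND not bare metal
}

print([t["name"] for t in catalog if matches_requirements(t, requirements)])
# ['c6i.large', 'c6a.large']
```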
You must specify VCpuCount and MemoryMiB. All other attributes are optional. Any unspecified optional attribute is set to its default.
For an example template, see Configure Amazon EC2 Auto Scaling resources.
For more information, see Creating an Auto Scaling group using attribute-based instance type selection in the Amazon EC2 Auto Scaling User Guide . For help determining which instance types match your attributes before you apply them to your Auto Scaling group, see Preview instance types with specified attributes in the Amazon EC2 User Guide for Linux Instances .
InstanceRequirements is a property of the LaunchTemplateOverrides property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplate property type.
- Parameters:
accelerator_count (Union[IResolvable, AcceleratorCountRequestProperty, Dict[str, Any], None]) – The minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips) for an instance type. To exclude accelerator-enabled instance types, set Max to 0. Default: No minimum or maximum limits
accelerator_manufacturers (Optional[Sequence[str]]) – Indicates whether instance types must have accelerators by specific manufacturers. - For instance types with NVIDIA devices, specify nvidia. - For instance types with AMD devices, specify amd. - For instance types with AWS devices, specify amazon-web-services. - For instance types with Xilinx devices, specify xilinx. Default: Any manufacturer
accelerator_names (Optional[Sequence[str]]) – Lists the accelerators that must be on an instance type. - For instance types with NVIDIA A100 GPUs, specify a100. - For instance types with NVIDIA V100 GPUs, specify v100. - For instance types with NVIDIA K80 GPUs, specify k80. - For instance types with NVIDIA T4 GPUs, specify t4. - For instance types with NVIDIA M60 GPUs, specify m60. - For instance types with AMD Radeon Pro V520 GPUs, specify radeon-pro-v520. - For instance types with Xilinx VU9P FPGAs, specify vu9p. Default: Any accelerator
accelerator_total_memory_mib (Union[IResolvable, AcceleratorTotalMemoryMiBRequestProperty, Dict[str, Any], None]) – The minimum and maximum total memory size for the accelerators on an instance type, in MiB. Default: No minimum or maximum limits
accelerator_types (Optional[Sequence[str]]) – Lists the accelerator types that must be on an instance type. - For instance types with GPU accelerators, specify gpu. - For instance types with FPGA accelerators, specify fpga. - For instance types with inference accelerators, specify inference. Default: Any accelerator type
allowed_instance_types (Optional[Sequence[str]]) – The instance types to apply your specified attributes against. All other instance types are ignored, even if they match your specified attributes. You can use strings with one or more wild cards, represented by an asterisk (*), to allow an instance type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*. For example, if you specify c5*, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types. .. epigraph:: If you specify AllowedInstanceTypes, you can’t specify ExcludedInstanceTypes. Default: All instance types
bare_metal (Optional[str]) – Indicates whether bare metal instance types are included, excluded, or required. Default: excluded
baseline_ebs_bandwidth_mbps (Union[IResolvable, BaselineEbsBandwidthMbpsRequestProperty, Dict[str, Any], None]) – The minimum and maximum baseline bandwidth performance for an instance type, in Mbps. For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide. Default: No minimum or maximum limits
baseline_performance_factors (Union[IResolvable, BaselinePerformanceFactorsRequestProperty, Dict[str, Any], None]) – The baseline performance factors for the instance requirements.
burstable_performance (Optional[str]) – Indicates whether burstable performance instance types are included, excluded, or required. For more information, see Burstable performance instances in the Amazon EC2 User Guide. Default: excluded
cpu_manufacturers (Optional[Sequence[str]]) – Lists which specific CPU manufacturers to include. - For instance types with Intel CPUs, specify intel. - For instance types with AMD CPUs, specify amd. - For instance types with AWS CPUs, specify amazon-web-services. - For instance types with Apple CPUs, specify apple. .. epigraph:: Don’t confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template. Default: Any manufacturer
excluded_instance_types (Optional[Sequence[str]]) – The instance types to exclude. You can use strings with one or more wild cards, represented by an asterisk (*), to exclude an instance family, type, size, or generation. The following are examples: m5.8xlarge, c5*.*, m5a.*, r*, *3*. For example, if you specify c5*, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify m5a.*, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types. .. epigraph:: If you specify ExcludedInstanceTypes, you can’t specify AllowedInstanceTypes. Default: No excluded instance types
instance_generations (Optional[Sequence[str]]) – Indicates whether current or previous generation instance types are included. - For current generation instance types, specify current. The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide. - For previous generation instance types, specify previous. Default: Any current or previous generation
local_storage (
Optional[str]) – Indicates whether instance types with instance store volumes are included, excluded, or required. For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide . Default:includedlocal_storage_types (
Optional[Sequence[str]]) – Indicates the type of local storage that is required. - For instance types with hard disk drive (HDD) storage, specifyhdd. - For instance types with solid state drive (SSD) storage, specifyssd. Default: Any local storage typemax_spot_price_as_percentage_of_optimal_on_demand_price (
Union[int,float,None]) – [Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. If you setDesiredCapacityTypetovcpuormemory-mib, the price protection threshold is based on the per-vCPU or per-memory price instead of the per instance price. .. epigraph:: Only one ofSpotMaxPricePercentageOverLowestPriceorMaxSpotPriceAsPercentageOfOptimalOnDemandPricecan be specified. If you don’t specify either, Amazon EC2 Auto Scaling will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as999999.memory_gib_per_v_cpu (
Union[IResolvable,MemoryGiBPerVCpuRequestProperty,Dict[str,Any],None]) – The minimum and maximum amount of memory per vCPU for an instance type, in GiB. Default: No minimum or maximum limitsmemory_mib (
Union[IResolvable,MemoryMiBRequestProperty,Dict[str,Any],None]) – The minimum and maximum instance memory size for an instance type, in MiB.network_bandwidth_gbps (
Union[IResolvable,NetworkBandwidthGbpsRequestProperty,Dict[str,Any],None]) – The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps). Default: No minimum or maximum limitsnetwork_interface_count (
Union[IResolvable,NetworkInterfaceCountRequestProperty,Dict[str,Any],None]) – The minimum and maximum number of network interfaces for an instance type. Default: No minimum or maximum limitson_demand_max_price_percentage_over_lowest_price (
Union[int,float,None]) – [Price protection] The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price. The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. To turn off price protection, specify a high value, such as999999. If you setDesiredCapacityTypetovcpuormemory-mib, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per instance price. Default:20require_hibernate_support (
Union[bool,IResolvable,None]) – Indicates whether instance types must provide On-Demand Instance hibernation support. Default:falsespot_max_price_percentage_over_lowest_price (
Union[int,float,None]) – [Price protection] The price protection threshold for Spot Instances, as a percentage higher than an identified Spot price. The identified Spot price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold. The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage. If you setDesiredCapacityTypetovcpuormemory-mib, the price protection threshold is based on the per-vCPU or per-memory price instead of the per instance price. .. epigraph:: Only one ofSpotMaxPricePercentageOverLowestPriceorMaxSpotPriceAsPercentageOfOptimalOnDemandPricecan be specified. If you don’t specify either, Amazon EC2 Auto Scaling will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as999999.total_local_storage_gb (
Union[IResolvable,TotalLocalStorageGBRequestProperty,Dict[str,Any],None]) – The minimum and maximum total local storage size for an instance type, in GB. Default: No minimum or maximum limitsv_cpu_count (
Union[IResolvable,VCpuCountRequestProperty,Dict[str,Any],None]) – The minimum and maximum number of vCPUs for an instance type.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

instance_requirements_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(
    accelerator_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
        max=123,
        min=123
    ),
    accelerator_manufacturers=["acceleratorManufacturers"],
    accelerator_names=["acceleratorNames"],
    accelerator_total_memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
        max=123,
        min=123
    ),
    accelerator_types=["acceleratorTypes"],
    allowed_instance_types=["allowedInstanceTypes"],
    bare_metal="bareMetal",
    baseline_ebs_bandwidth_mbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
        max=123,
        min=123
    ),
    baseline_performance_factors=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
        cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
            references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
                instance_family="instanceFamily"
            )]
        )
    ),
    burstable_performance="burstablePerformance",
    cpu_manufacturers=["cpuManufacturers"],
    excluded_instance_types=["excludedInstanceTypes"],
    instance_generations=["instanceGenerations"],
    local_storage="localStorage",
    local_storage_types=["localStorageTypes"],
    max_spot_price_as_percentage_of_optimal_on_demand_price=123,
    memory_gi_b_per_v_cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
        max=123,
        min=123
    ),
    memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
        max=123,
        min=123
    ),
    network_bandwidth_gbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
        max=123,
        min=123
    ),
    network_interface_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
        max=123,
        min=123
    ),
    on_demand_max_price_percentage_over_lowest_price=123,
    require_hibernate_support=False,
    spot_max_price_percentage_over_lowest_price=123,
    total_local_storage_gb=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
        max=123,
        min=123
    ),
    v_cpu_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
        max=123,
        min=123
    )
)
Attributes
- accelerator_count
The minimum and maximum number of accelerators (GPUs, FPGAs, or AWS Inferentia chips) for an instance type.
To exclude accelerator-enabled instance types, set ``Max`` to ``0``.
Default: No minimum or maximum limits
- accelerator_manufacturers
Indicates whether instance types must have accelerators by specific manufacturers.
- For instance types with NVIDIA devices, specify ``nvidia``.
- For instance types with AMD devices, specify ``amd``.
- For instance types with AWS devices, specify ``amazon-web-services``.
- For instance types with Xilinx devices, specify ``xilinx``.
Default: Any manufacturer
- accelerator_names
Lists the accelerators that must be on an instance type.
- For instance types with NVIDIA A100 GPUs, specify ``a100``.
- For instance types with NVIDIA V100 GPUs, specify ``v100``.
- For instance types with NVIDIA K80 GPUs, specify ``k80``.
- For instance types with NVIDIA T4 GPUs, specify ``t4``.
- For instance types with NVIDIA M60 GPUs, specify ``m60``.
- For instance types with AMD Radeon Pro V520 GPUs, specify ``radeon-pro-v520``.
- For instance types with Xilinx VU9P FPGAs, specify ``vu9p``.
Default: Any accelerator
- accelerator_total_memory_mib
The minimum and maximum total memory size for the accelerators on an instance type, in MiB.
Default: No minimum or maximum limits
- accelerator_types
Lists the accelerator types that must be on an instance type.
- For instance types with GPU accelerators, specify ``gpu``.
- For instance types with FPGA accelerators, specify ``fpga``.
- For instance types with inference accelerators, specify ``inference``.
Default: Any accelerator type
- allowed_instance_types
The instance types to apply your specified attributes against.
All other instance types are ignored, even if they match your specified attributes.
You can use strings with one or more wildcards, represented by an asterisk (``*``), to allow an instance type, size, or generation. The following are examples: ``m5.8xlarge``, ``c5*.*``, ``m5a.*``, ``r*``, ``*3*``.
For example, if you specify ``c5*``, Amazon EC2 Auto Scaling will allow the entire C5 instance family, which includes all C5a and C5n instance types. If you specify ``m5a.*``, Amazon EC2 Auto Scaling will allow all the M5a instance types, but not the M5n instance types. .. epigraph:
If you specify ``AllowedInstanceTypes``, you can't specify ``ExcludedInstanceTypes``.
Default: All instance types
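The wildcard semantics described above can be approximated with Python's ``fnmatch`` from the standard library. This is an illustrative sketch of the matching behavior, not part of the CDK API; the ``is_allowed`` helper is hypothetical.

```python
from fnmatch import fnmatch

def is_allowed(instance_type: str, allowed_patterns: list[str]) -> bool:
    # Hypothetical helper: approximates AllowedInstanceTypes wildcard
    # matching, where * stands for any sequence of characters.
    return any(fnmatch(instance_type, pattern) for pattern in allowed_patterns)

# "c5*" allows the entire C5 family, including C5a and C5n instance types.
print(is_allowed("c5n.xlarge", ["c5*"]))   # True
# "m5a.*" allows M5a sizes but not M5n instance types.
print(is_allowed("m5n.large", ["m5a.*"]))  # False
```

The same sketch applies to ``ExcludedInstanceTypes``, with a match meaning the type is excluded rather than allowed.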
- bare_metal
Indicates whether bare metal instance types are included, excluded, or required.
Default: ``excluded``
- baseline_ebs_bandwidth_mbps
The minimum and maximum baseline bandwidth performance for an instance type, in Mbps.
For more information, see Amazon EBS–optimized instances in the Amazon EC2 User Guide .
Default: No minimum or maximum limits
- baseline_performance_factors
The baseline performance factors for the instance requirements.
- burstable_performance
Indicates whether burstable performance instance types are included, excluded, or required.
For more information, see Burstable performance instances in the Amazon EC2 User Guide .
Default: ``excluded``
- cpu_manufacturers
Lists which specific CPU manufacturers to include.
- For instance types with Intel CPUs, specify ``intel``.
- For instance types with AMD CPUs, specify ``amd``.
- For instance types with AWS CPUs, specify ``amazon-web-services``.
- For instance types with Apple CPUs, specify ``apple``.
Don’t confuse the CPU hardware manufacturer with the CPU hardware architecture. Instances will be launched with a compatible CPU architecture based on the Amazon Machine Image (AMI) that you specify in your launch template.
Default: Any manufacturer
- excluded_instance_types
The instance types to exclude.
You can use strings with one or more wildcards, represented by an asterisk (``*``), to exclude an instance family, type, size, or generation. The following are examples: ``m5.8xlarge``, ``c5*.*``, ``m5a.*``, ``r*``, ``*3*``.
For example, if you specify ``c5*``, you are excluding the entire C5 instance family, which includes all C5a and C5n instance types. If you specify ``m5a.*``, Amazon EC2 Auto Scaling will exclude all the M5a instance types, but not the M5n instance types. .. epigraph:
If you specify ``ExcludedInstanceTypes``, you can't specify ``AllowedInstanceTypes``.
Default: No excluded instance types
- instance_generations
Indicates whether current or previous generation instance types are included.
- For current generation instance types, specify ``current``. The current generation includes EC2 instance types currently recommended for use. This typically includes the latest two to three generations in each instance family. For more information, see Instance types in the Amazon EC2 User Guide.
- For previous generation instance types, specify ``previous``.
Default: Any current or previous generation
- local_storage
Indicates whether instance types with instance store volumes are included, excluded, or required.
For more information, see Amazon EC2 instance store in the Amazon EC2 User Guide .
Default: ``included``
- local_storage_types
Indicates the type of local storage that is required.
- For instance types with hard disk drive (HDD) storage, specify ``hdd``.
- For instance types with solid state drive (SSD) storage, specify ``ssd``.
Default: Any local storage type
- max_spot_price_as_percentage_of_optimal_on_demand_price
[Price protection] The price protection threshold for Spot Instances, as a percentage of an identified On-Demand price.
The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage.
If you set ``DesiredCapacityType`` to ``vcpu`` or ``memory-mib``, the price protection threshold is based on the per-vCPU or per-memory price instead of the per-instance price. .. epigraph:
Only one of ``SpotMaxPricePercentageOverLowestPrice`` or ``MaxSpotPriceAsPercentageOfOptimalOnDemandPrice`` can be specified. If you don't specify either, Amazon EC2 Auto Scaling will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as ``999999``.
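The difference between the two Spot price-protection thresholds can be sketched in plain Python. The helper names are hypothetical; the arithmetic simply restates the descriptions above: one threshold is a percentage *of* the identified On-Demand price, the other is a percentage *above* the identified price.

```python
def cutoff_percent_of(identified_price: float, threshold: int) -> float:
    # MaxSpotPriceAsPercentageOfOptimalOnDemandPrice: the cutoff is a
    # percentage OF the identified On-Demand price.
    return identified_price * threshold / 100

def cutoff_percent_over(identified_price: float, threshold: int) -> float:
    # SpotMaxPricePercentageOverLowestPrice: the cutoff is a percentage
    # ABOVE the identified price.
    return identified_price * (100 + threshold) / 100

# With an identified price of $0.50/hr and a threshold of 50:
print(cutoff_percent_of(0.50, 50))    # 0.25
print(cutoff_percent_over(0.50, 50))  # 0.75
```

Instance types whose Spot price exceeds the computed cutoff are excluded from selection.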
- memory_gib_per_v_cpu
The minimum and maximum amount of memory per vCPU for an instance type, in GiB.
Default: No minimum or maximum limits
- memory_mib
The minimum and maximum instance memory size for an instance type, in MiB.
- network_bandwidth_gbps
The minimum and maximum amount of network bandwidth, in gigabits per second (Gbps).
Default: No minimum or maximum limits
- network_interface_count
The minimum and maximum number of network interfaces for an instance type.
Default: No minimum or maximum limits
- on_demand_max_price_percentage_over_lowest_price
[Price protection] The price protection threshold for On-Demand Instances, as a percentage higher than an identified On-Demand price.
The identified On-Demand price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage.
To turn off price protection, specify a high value, such as ``999999``.
If you set ``DesiredCapacityType`` to ``vcpu`` or ``memory-mib``, the price protection threshold is applied based on the per-vCPU or per-memory price instead of the per-instance price.
Default: ``20``
- require_hibernate_support
Indicates whether instance types must provide On-Demand Instance hibernation support.
Default: ``false``
- spot_max_price_percentage_over_lowest_price
[Price protection] The price protection threshold for Spot Instances, as a percentage higher than an identified Spot price.
The identified Spot price is the price of the lowest priced current generation C, M, or R instance type with your specified attributes. If no current generation C, M, or R instance type matches your attributes, then the identified price is from either the lowest priced current generation instance types or, failing that, the lowest priced previous generation instance types that match your attributes. When Amazon EC2 Auto Scaling selects instance types with your attributes, we will exclude instance types whose price exceeds your specified threshold.
The parameter accepts an integer, which Amazon EC2 Auto Scaling interprets as a percentage.
If you set ``DesiredCapacityType`` to ``vcpu`` or ``memory-mib``, the price protection threshold is based on the per-vCPU or per-memory price instead of the per-instance price. .. epigraph:
Only one of ``SpotMaxPricePercentageOverLowestPrice`` or ``MaxSpotPriceAsPercentageOfOptimalOnDemandPrice`` can be specified. If you don't specify either, Amazon EC2 Auto Scaling will automatically apply optimal price protection to consistently select from a wide range of instance types. To indicate no price protection threshold for Spot Instances, meaning you want to consider all instance types that match your attributes, include one of these parameters and specify a high value, such as ``999999``.
- total_local_storage_gb
The minimum and maximum total local storage size for an instance type, in GB.
Default: No minimum or maximum limits
- v_cpu_count
The minimum and maximum number of vCPUs for an instance type.
InstancesDistributionProperty
- class CfnAutoScalingGroupPropsMixin.InstancesDistributionProperty(*, on_demand_allocation_strategy=None, on_demand_base_capacity=None, on_demand_percentage_above_base_capacity=None, spot_allocation_strategy=None, spot_instance_pools=None, spot_max_price=None)
Bases:
object
Use this structure to specify the distribution of On-Demand Instances and Spot Instances and the allocation strategies used to fulfill On-Demand and Spot capacities for a mixed instances policy.
For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
``InstancesDistribution`` is a property of the AWS::AutoScaling::AutoScalingGroup MixedInstancesPolicy property type.
- Parameters:
on_demand_allocation_strategy (
Optional[str]) – The allocation strategy to apply to your On-Demand Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify. The following lists the valid values: - lowest-price - Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify ``InstanceRequirements``. - prioritized - You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify ``InstanceRequirements`` and cannot be used for groups that do.
on_demand_base_capacity (Union[int, float, None]) – The minimum amount of the Auto Scaling group's capacity that must be fulfilled by On-Demand Instances. This base portion is launched first as your group scales. This number has the same unit of measurement as the group's desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement. Default: 0 .. epigraph:: An update to this setting means a gradual replacement of instances to adjust the current On-Demand Instance levels. When replacing instances, Amazon EC2 Auto Scaling launches new instances before terminating the previous ones.
on_demand_percentage_above_base_capacity (Union[int, float, None]) – Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond ``OnDemandBaseCapacity``. Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used. Default: 100 .. epigraph:: An update to this setting means a gradual replacement of instances to adjust the current On-Demand and Spot Instance levels for your additional capacity higher than the base capacity. When replacing instances, Amazon EC2 Auto Scaling launches new instances before terminating the previous ones.
spot_allocation_strategy (Optional[str]) – The allocation strategy to apply to your Spot Instances when they are launched. Possible instance types are determined by the launch template overrides that you specify. The following lists the valid values: - capacity-optimized - Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use ``capacity-optimized-prioritized``. - capacity-optimized-prioritized - You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to ``prioritized``, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify ``InstanceRequirements``. - lowest-price - Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the ``SpotInstancePools`` property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity. - price-capacity-optimized (recommended) - The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
spot_instance_pools (Union[int, float, None]) – The number of Spot Instance pools across which to allocate your Spot Instances. The Spot pools are determined from the different instance types in the overrides. Valid only when the ``SpotAllocationStrategy`` is ``lowest-price``. Value must be in the range of 1–20. Default: 2
spot_max_price (Optional[str]) – The maximum price per unit hour that you are willing to pay for a Spot Instance. If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string ("") for the value. .. epigraph:: If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one. Valid Range: Minimum value of 0.001
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

instances_distribution_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstancesDistributionProperty(
    on_demand_allocation_strategy="onDemandAllocationStrategy",
    on_demand_base_capacity=123,
    on_demand_percentage_above_base_capacity=123,
    spot_allocation_strategy="spotAllocationStrategy",
    spot_instance_pools=123,
    spot_max_price="spotMaxPrice"
)
Attributes
- on_demand_allocation_strategy
The allocation strategy to apply to your On-Demand Instances when they are launched.
Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
- lowest-price - Uses price to determine which instance types are the highest priority, launching the lowest priced instance types within an Availability Zone first. This is the default value for Auto Scaling groups that specify ``InstanceRequirements``.
- prioritized - You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling launches your highest priority instance types first. If all your On-Demand capacity cannot be fulfilled using your highest priority instance type, then Amazon EC2 Auto Scaling launches the remaining capacity using the second priority instance type, and so on. This is the default value for Auto Scaling groups that don't specify ``InstanceRequirements`` and cannot be used for groups that do.
- on_demand_base_capacity
The minimum amount of the Auto Scaling group’s capacity that must be fulfilled by On-Demand Instances.
This base portion is launched first as your group scales.
This number has the same unit of measurement as the group’s desired capacity. If you change the default unit of measurement (number of instances) by specifying weighted capacity values in your launch template overrides list, or by changing the default desired capacity type setting of the group, you must specify this number using the same unit of measurement.
Default: 0 .. epigraph:
An update to this setting means a gradual replacement of instances to adjust the current On-Demand Instance levels. When replacing instances, Amazon EC2 Auto Scaling launches new instances before terminating the previous ones.
- on_demand_percentage_above_base_capacity
Controls the percentages of On-Demand Instances and Spot Instances for your additional capacity beyond ``OnDemandBaseCapacity``.
Expressed as a number (for example, 20 specifies 20% On-Demand Instances, 80% Spot Instances). If set to 100, only On-Demand Instances are used.
Default: 100 .. epigraph:
An update to this setting means a gradual replacement of instances to adjust the current On-Demand and Spot Instance levels for your additional capacity higher than the base capacity. When replacing instances, Amazon EC2 Auto Scaling launches new instances before terminating the previous ones.
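The interplay of ``OnDemandBaseCapacity`` and ``OnDemandPercentageAboveBaseCapacity`` can be sketched as simple arithmetic. The ``capacity_split`` helper is hypothetical, and rounding of fractional instances is deliberately left out of this sketch.

```python
def capacity_split(desired: int, base: int, pct_above_base: int) -> tuple[float, float]:
    # The base portion is always On-Demand; the capacity above the base is
    # split between On-Demand and Spot by the given percentage.
    above_base = max(desired - base, 0)
    on_demand = base + above_base * pct_above_base / 100
    return on_demand, desired - on_demand

# desired capacity 10, base 2, 25% On-Demand above base -> 4 On-Demand, 6 Spot
print(capacity_split(10, 2, 25))  # (4.0, 6.0)
```

With a percentage of 100 the entire group is On-Demand; with 0, only the base portion is.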
- spot_allocation_strategy
The allocation strategy to apply to your Spot Instances when they are launched.
Possible instance types are determined by the launch template overrides that you specify.
The following lists the valid values:
- capacity-optimized - Requests Spot Instances using pools that are optimally chosen based on the available Spot capacity. This strategy has the lowest risk of interruption. To give certain instance types a higher chance of launching first, use ``capacity-optimized-prioritized``.
- capacity-optimized-prioritized - You set the order of instance types for the launch template overrides from highest to lowest priority (from first to last in the list). Amazon EC2 Auto Scaling honors the instance type priorities on a best effort basis but optimizes for capacity first. Note that if the On-Demand allocation strategy is set to ``prioritized``, the same priority is applied when fulfilling On-Demand capacity. This is not a valid value for Auto Scaling groups that specify ``InstanceRequirements``.
- lowest-price - Requests Spot Instances using the lowest priced pools within an Availability Zone, across the number of Spot pools that you specify for the ``SpotInstancePools`` property. To ensure that your desired capacity is met, you might receive Spot Instances from several pools. This is the default value, but it might lead to high interruption rates because this strategy only considers instance price and not available capacity.
- price-capacity-optimized (recommended) - The price and capacity optimized allocation strategy looks at both price and capacity to select the Spot Instance pools that are the least likely to be interrupted and have the lowest possible price.
- spot_instance_pools
The number of Spot Instance pools across which to allocate your Spot Instances.
The Spot pools are determined from the different instance types in the overrides. Valid only when the ``SpotAllocationStrategy`` is ``lowest-price``. Value must be in the range of 1–20.
Default: 2
- spot_max_price
The maximum price per unit hour that you are willing to pay for a Spot Instance.
If your maximum price is lower than the Spot price for the instance types that you selected, your Spot Instances are not launched. We do not recommend specifying a maximum price because it can lead to increased interruptions. When Spot Instances launch, you pay the current Spot price. To remove a maximum price that you previously set, include the property but specify an empty string (“”) for the value. .. epigraph:
If you specify a maximum price, your instances will be interrupted more frequently than if you do not specify one.
Valid Range: Minimum value of 0.001
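Beyond the generated placeholder snippet, the distribution settings above can be sketched at the CloudFormation level. The following is a minimal illustration using plain Python dicts keyed by the CloudFormation property names (not the generated CDK classes); the values are hypothetical placeholders, not recommendations for your workload.

```python
# A sketch of an instances distribution that keeps a small On-Demand base and
# uses the recommended Spot allocation strategy. Plain dicts, keyed by the
# CloudFormation property names; all values are illustrative.
instances_distribution = {
    "OnDemandBaseCapacity": 2,                  # always-on On-Demand base
    "OnDemandPercentageAboveBaseCapacity": 25,  # 25% On-Demand / 75% Spot above base
    "SpotAllocationStrategy": "price-capacity-optimized",  # recommended
    # "SpotInstancePools" is valid only with "lowest-price", so it is omitted here.
}

# "lowest-price" is the only strategy that accepts SpotInstancePools (1-20):
lowest_price_distribution = {
    "SpotAllocationStrategy": "lowest-price",
    "SpotInstancePools": 2,  # the default value
}
```

Note the design trade-off the strategies encode: ``lowest-price`` optimizes cost only and tends to raise interruption rates, while ``price-capacity-optimized`` balances both, which is why the documentation recommends it.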
LaunchTemplateOverridesProperty
- class CfnAutoScalingGroupPropsMixin.LaunchTemplateOverridesProperty(*, instance_requirements=None, instance_type=None, launch_template_specification=None, weighted_capacity=None)
Bases:
object
Use this structure to let Amazon EC2 Auto Scaling do the following when the Auto Scaling group has a mixed instances policy:
- Override the instance type that is specified in the launch template.
- Use multiple instance types.
Specify the instance types that you want, or define your instance requirements instead and let Amazon EC2 Auto Scaling provision the available instance types that meet your requirements. This can provide Amazon EC2 Auto Scaling with a larger selection of instance types to choose from when fulfilling Spot and On-Demand capacities. You can view which instance types are matched before you apply the instance requirements to your Auto Scaling group.
After you define your instance requirements, you don’t have to keep updating these settings to get new EC2 instance types automatically. Amazon EC2 Auto Scaling uses the instance requirements of the Auto Scaling group to determine whether a new EC2 instance type can be used.
``LaunchTemplateOverrides`` is a property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplate property type.
- Parameters:
instance_requirements (Union[IResolvable, InstanceRequirementsProperty, Dict[str, Any], None]) – The instance requirements. Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types. You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template. .. epigraph:: If you specify ``InstanceRequirements``, you can't specify ``InstanceType``.
instance_type (Optional[str]) – The instance type, such as ``m3.xlarge``. You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon EC2 User Guide. You can specify up to 40 instance types per Auto Scaling group.
launch_template_specification (Union[IResolvable, LaunchTemplateSpecificationProperty, Dict[str, Any], None]) – Provides a launch template for the specified instance type or set of instance requirements. For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the ``LaunchTemplate`` definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide. You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the ``LaunchTemplate`` definition count towards this limit.
weighted_capacity (Optional[str]) – If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic. When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a ``WeightedCapacity`` of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configure instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide. Value must be in the range of 1-999. If you specify a value for ``WeightedCapacity`` for one instance type, you must specify a value for ``WeightedCapacity`` for all of them. .. epigraph:: Every Auto Scaling group has three size parameters (``DesiredCapacity``, ``MaxSize``, and ``MinSize``). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

launch_template_overrides_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateOverridesProperty(
    instance_requirements=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(
        accelerator_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
            max=123,
            min=123
        ),
        accelerator_manufacturers=["acceleratorManufacturers"],
        accelerator_names=["acceleratorNames"],
        accelerator_total_memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
            max=123,
            min=123
        ),
        accelerator_types=["acceleratorTypes"],
        allowed_instance_types=["allowedInstanceTypes"],
        bare_metal="bareMetal",
        baseline_ebs_bandwidth_mbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
            max=123,
            min=123
        ),
        baseline_performance_factors=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
            cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
                references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
                    instance_family="instanceFamily"
                )]
            )
        ),
        burstable_performance="burstablePerformance",
        cpu_manufacturers=["cpuManufacturers"],
        excluded_instance_types=["excludedInstanceTypes"],
        instance_generations=["instanceGenerations"],
        local_storage="localStorage",
        local_storage_types=["localStorageTypes"],
        max_spot_price_as_percentage_of_optimal_on_demand_price=123,
        memory_gi_bPer_vCpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
            max=123,
            min=123
        ),
        memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
            max=123,
            min=123
        ),
        network_bandwidth_gbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
            max=123,
            min=123
        ),
        network_interface_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
            max=123,
            min=123
        ),
        on_demand_max_price_percentage_over_lowest_price=123,
        require_hibernate_support=False,
        spot_max_price_percentage_over_lowest_price=123,
        total_local_storage_gb=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
            max=123,
            min=123
        ),
        v_cpu_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
            max=123,
            min=123
        )
    ),
    instance_type="instanceType",
    launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
        launch_template_id="launchTemplateId",
        launch_template_name="launchTemplateName",
        version="version"
    ),
    weighted_capacity="weightedCapacity"
)
Attributes
- instance_requirements
The instance requirements.
Amazon EC2 Auto Scaling uses your specified requirements to identify instance types. Then, it uses your On-Demand and Spot allocation strategies to launch instances from these instance types.
You can specify up to four separate sets of instance requirements per Auto Scaling group. This is useful for provisioning instances from different Amazon Machine Images (AMIs) in the same Auto Scaling group. To do this, create the AMIs and create a new launch template for each AMI. Then, create a compatible set of instance requirements for each launch template. .. epigraph:
If you specify ``InstanceRequirements`` , you can't specify ``InstanceType`` .
- instance_type
The instance type, such as ``m3.xlarge``.
You must specify an instance type that is supported in your requested Region and Availability Zones. For more information, see Instance types in the Amazon EC2 User Guide.
You can specify up to 40 instance types per Auto Scaling group.
- launch_template_specification
Provides a launch template for the specified instance type or set of instance requirements.
For example, some instance types might require a launch template with a different AMI. If not provided, Amazon EC2 Auto Scaling uses the launch template that's specified in the ``LaunchTemplate`` definition. For more information, see Specifying a different launch template for an instance type in the Amazon EC2 Auto Scaling User Guide.
You can specify up to 20 launch templates per Auto Scaling group. The launch templates specified in the overrides and in the ``LaunchTemplate`` definition count towards this limit.
- weighted_capacity
If you provide a list of instance types to use, you can specify the number of capacity units provided by each instance type in terms of virtual CPUs, memory, storage, throughput, or other relative performance characteristic.
When a Spot or On-Demand Instance is launched, the capacity units count toward the desired capacity. Amazon EC2 Auto Scaling launches instances until the desired capacity is totally fulfilled, even if this results in an overage. For example, if there are two units remaining to fulfill capacity, and Amazon EC2 Auto Scaling can only launch an instance with a ``WeightedCapacity`` of five units, the instance is launched, and the desired capacity is exceeded by three units. For more information, see Configure instance weighting for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide. Value must be in the range of 1-999.
If you specify a value for ``WeightedCapacity`` for one instance type, you must specify a value for ``WeightedCapacity`` for all of them. .. epigraph:
   Every Auto Scaling group has three size parameters ( ``DesiredCapacity`` , ``MaxSize`` , and ``MinSize`` ). Usually, you set these sizes based on a specific number of instances. However, if you configure a mixed instances policy that defines weights for the instance types, you must specify these sizes with the same units that you use for weighting instances.
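The overage behavior described for ``weighted_capacity`` can be illustrated with a small stand-alone sketch. The ``launch_with_weights`` helper below is hypothetical (not part of the CDK or Auto Scaling API); it simply replays the documented rule that instances launch until desired capacity is met, even if that overshoots.

```python
def launch_with_weights(remaining_capacity: int, weight: int) -> tuple:
    """Simulate fulfilling remaining desired capacity with instances that each
    contribute `weight` capacity units. Auto Scaling keeps launching until the
    capacity is met, even if the last launch overshoots.
    Returns (instances_launched, overage)."""
    launched = 0
    fulfilled = 0
    while fulfilled < remaining_capacity:
        launched += 1
        fulfilled += weight
    return launched, fulfilled - remaining_capacity

# The documented example: two units remaining, WeightedCapacity of five ->
# one instance launches and the desired capacity is exceeded by three units.
print(launch_with_weights(2, 5))
```

This is why the epigraph warns that ``DesiredCapacity``, ``MaxSize``, and ``MinSize`` must be expressed in the same capacity units you use for weighting: the group's arithmetic is done in units, not instance counts.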
LaunchTemplateProperty
- class CfnAutoScalingGroupPropsMixin.LaunchTemplateProperty(*, launch_template_specification=None, overrides=None)
Bases:
object
Use this structure to specify the launch templates and instance types (overrides) for a mixed instances policy.
``LaunchTemplate`` is a property of the AWS::AutoScaling::AutoScalingGroup MixedInstancesPolicy property type.
- Parameters:
launch_template_specification (Union[IResolvable, LaunchTemplateSpecificationProperty, Dict[str, Any], None]) – The launch template.
overrides (Union[IResolvable, Sequence[Union[IResolvable, LaunchTemplateOverridesProperty, Dict[str, Any]]], None]) – Any properties that you specify override the same properties in the launch template.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

launch_template_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateProperty(
    launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
        launch_template_id="launchTemplateId",
        launch_template_name="launchTemplateName",
        version="version"
    ),
    overrides=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateOverridesProperty(
        instance_requirements=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(
            accelerator_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
                max=123,
                min=123
            ),
            accelerator_manufacturers=["acceleratorManufacturers"],
            accelerator_names=["acceleratorNames"],
            accelerator_total_memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
                max=123,
                min=123
            ),
            accelerator_types=["acceleratorTypes"],
            allowed_instance_types=["allowedInstanceTypes"],
            bare_metal="bareMetal",
            baseline_ebs_bandwidth_mbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
                max=123,
                min=123
            ),
            baseline_performance_factors=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
                cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
                    references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
                        instance_family="instanceFamily"
                    )]
                )
            ),
            burstable_performance="burstablePerformance",
            cpu_manufacturers=["cpuManufacturers"],
            excluded_instance_types=["excludedInstanceTypes"],
            instance_generations=["instanceGenerations"],
            local_storage="localStorage",
            local_storage_types=["localStorageTypes"],
            max_spot_price_as_percentage_of_optimal_on_demand_price=123,
            memory_gi_bPer_vCpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
                max=123,
                min=123
            ),
            memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
                max=123,
                min=123
            ),
            network_bandwidth_gbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
                max=123,
                min=123
            ),
            network_interface_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
                max=123,
                min=123
            ),
            on_demand_max_price_percentage_over_lowest_price=123,
            require_hibernate_support=False,
            spot_max_price_percentage_over_lowest_price=123,
            total_local_storage_gb=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
                max=123,
                min=123
            ),
            v_cpu_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
                max=123,
                min=123
            )
        ),
        instance_type="instanceType",
        launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
            launch_template_id="launchTemplateId",
            launch_template_name="launchTemplateName",
            version="version"
        ),
        weighted_capacity="weightedCapacity"
    )]
)
Attributes
- launch_template_specification
The launch template.
- overrides
Any properties that you specify override the same properties in the launch template.
LaunchTemplateSpecificationProperty
- class CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(*, launch_template_id=None, launch_template_name=None, version=None)
Bases:
object
Specifies a launch template to use when provisioning EC2 instances for an Auto Scaling group.
You must specify the following:
- The ID or the name of the launch template, but not both.
- The version of the launch template.
``LaunchTemplateSpecification`` is a property of the AWS::AutoScaling::AutoScalingGroup resource. It is also a property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplate and AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property types.
For information about creating a launch template, see AWS::EC2::LaunchTemplate and Create a launch template for an Auto Scaling group in the Amazon EC2 Auto Scaling User Guide.
For examples of launch templates, see Create launch templates .
- Parameters:
launch_template_id (Optional[str]) – The ID of the launch template. You must specify the ``LaunchTemplateID`` or the ``LaunchTemplateName``, but not both.
launch_template_name (Optional[str]) – The name of the launch template. You must specify the ``LaunchTemplateName`` or the ``LaunchTemplateID``, but not both.
version (Optional[str]) – The version number of the launch template. Specifying ``$Latest`` or ``$Default`` for the template version number is not supported. However, you can specify ``LatestVersionNumber`` or ``DefaultVersionNumber`` using the ``Fn::GetAtt`` intrinsic function. For more information, see Fn::GetAtt. .. epigraph:: For an example of using the ``Fn::GetAtt`` function, see the Examples section of the ``AWS::AutoScaling::AutoScalingGroup`` resource.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

launch_template_specification_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
    launch_template_id="launchTemplateId",
    launch_template_name="launchTemplateName",
    version="version"
)
Attributes
- launch_template_id
The ID of the launch template.
You must specify the ``LaunchTemplateID`` or the ``LaunchTemplateName``, but not both.
- launch_template_name
The name of the launch template.
You must specify the ``LaunchTemplateName`` or the ``LaunchTemplateID``, but not both.
- version
The version number of the launch template.
Specifying ``$Latest`` or ``$Default`` for the template version number is not supported. However, you can specify ``LatestVersionNumber`` or ``DefaultVersionNumber`` using the ``Fn::GetAtt`` intrinsic function. For more information, see Fn::GetAtt. .. epigraph:
   For an example of using the ``Fn::GetAtt`` function, see the `Examples <https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-autoscaling-autoscalinggroup.html#aws-resource-autoscaling-autoscalinggroup--examples>`_ section of the ``AWS::AutoScaling::AutoScalingGroup`` resource.
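The constraints above (ID or name but not both; ``$Latest``/``$Default`` unsupported) can be captured in a small stand-alone check. The ``validate_launch_template_spec`` helper below is hypothetical, purely illustrative, and not part of the CDK or CloudFormation API; it operates on a plain dict keyed by the CloudFormation property names.

```python
def validate_launch_template_spec(spec: dict) -> None:
    """Check the documented LaunchTemplateSpecification constraints:
    exactly one of LaunchTemplateId / LaunchTemplateName, and a version
    that is not the unsupported $Latest or $Default literal."""
    has_id = "LaunchTemplateId" in spec
    has_name = "LaunchTemplateName" in spec
    if has_id == has_name:  # both present, or neither
        raise ValueError("Specify the launch template ID or name, but not both")
    if spec.get("Version") in ("$Latest", "$Default"):
        raise ValueError(
            "$Latest/$Default are not supported here; reference "
            "LatestVersionNumber or DefaultVersionNumber via Fn::GetAtt instead"
        )

# A valid specification (hypothetical ID) passes silently:
validate_launch_template_spec({"LaunchTemplateId": "lt-0123456789abcdef0", "Version": "1"})
```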
LifecycleHookSpecificationProperty
- class CfnAutoScalingGroupPropsMixin.LifecycleHookSpecificationProperty(*, default_result=None, heartbeat_timeout=None, lifecycle_hook_name=None, lifecycle_transition=None, notification_metadata=None, notification_target_arn=None, role_arn=None)
Bases:
object
``LifecycleHookSpecification`` specifies a lifecycle hook for the ``LifecycleHookSpecificationList`` property of the AWS::AutoScaling::AutoScalingGroup resource. A lifecycle hook specifies actions to perform when Amazon EC2 Auto Scaling launches or terminates instances.
For more information, see Amazon EC2 Auto Scaling lifecycle hooks in the Amazon EC2 Auto Scaling User Guide. You can find a sample template snippet in the Examples section of the ``AWS::AutoScaling::LifecycleHook`` resource.
- Parameters:
default_result (Optional[str]) – The action the Auto Scaling group takes when the lifecycle hook timeout elapses or if an unexpected failure occurs. The default value is ``ABANDON``. Valid values: ``CONTINUE`` | ``ABANDON``
heartbeat_timeout (Union[int, float, None]) – The maximum time, in seconds, that can elapse before the lifecycle hook times out. The range is from ``30`` to ``7200`` seconds. The default value is ``3600`` seconds (1 hour).
lifecycle_hook_name (Optional[str]) – The name of the lifecycle hook.
lifecycle_transition (Optional[str]) – The lifecycle transition. For Auto Scaling groups, there are two major lifecycle transitions. To create a lifecycle hook for scale-out events, specify ``autoscaling:EC2_INSTANCE_LAUNCHING``. To create a lifecycle hook for scale-in events, specify ``autoscaling:EC2_INSTANCE_TERMINATING``.
notification_metadata (Optional[str]) – Additional information that you want to include any time Amazon EC2 Auto Scaling sends a message to the notification target.
notification_target_arn (Optional[str]) – The Amazon Resource Name (ARN) of the notification target that Amazon EC2 Auto Scaling sends notifications to when an instance is in a wait state for the lifecycle hook. You can specify an Amazon SNS topic or an Amazon SQS queue.
role_arn (Optional[str]) – The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target. For information about creating this role, see Prepare to add a lifecycle hook to your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide. Valid only if the notification target is an Amazon SNS topic or an Amazon SQS queue.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

lifecycle_hook_specification_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LifecycleHookSpecificationProperty(
    default_result="defaultResult",
    heartbeat_timeout=123,
    lifecycle_hook_name="lifecycleHookName",
    lifecycle_transition="lifecycleTransition",
    notification_metadata="notificationMetadata",
    notification_target_arn="notificationTargetArn",
    role_arn="roleArn"
)
Attributes
- default_result
The action the Auto Scaling group takes when the lifecycle hook timeout elapses or if an unexpected failure occurs.
The default value is ``ABANDON``.
Valid values: ``CONTINUE`` | ``ABANDON``
- heartbeat_timeout
The maximum time, in seconds, that can elapse before the lifecycle hook times out.
The range is from ``30`` to ``7200`` seconds. The default value is ``3600`` seconds (1 hour).
- lifecycle_hook_name
The name of the lifecycle hook.
- lifecycle_transition
The lifecycle transition. For Auto Scaling groups, there are two major lifecycle transitions.
- To create a lifecycle hook for scale-out events, specify ``autoscaling:EC2_INSTANCE_LAUNCHING``.
- To create a lifecycle hook for scale-in events, specify ``autoscaling:EC2_INSTANCE_TERMINATING``.
- notification_metadata
Additional information that you want to include any time Amazon EC2 Auto Scaling sends a message to the notification target.
- notification_target_arn
The Amazon Resource Name (ARN) of the notification target that Amazon EC2 Auto Scaling sends notifications to when an instance is in a wait state for the lifecycle hook.
You can specify an Amazon SNS topic or an Amazon SQS queue.
- role_arn
The ARN of the IAM role that allows the Auto Scaling group to publish to the specified notification target.
For information about creating this role, see Prepare to add a lifecycle hook to your Auto Scaling group in the Amazon EC2 Auto Scaling User Guide .
Valid only if the notification target is an Amazon SNS topic or an Amazon SQS queue.
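At the CloudFormation level, a lifecycle hook for the scale-out transition might look like the following sketch. Plain Python dict, keyed by the CloudFormation property names; the hook name and both ARNs are hypothetical placeholders.

```python
# A sketch of a scale-out lifecycle hook that pauses new instances so a
# bootstrap step can run before they enter service. All values illustrative.
launch_hook = {
    "LifecycleHookName": "install-monitoring-agent",              # hypothetical name
    "LifecycleTransition": "autoscaling:EC2_INSTANCE_LAUNCHING",  # scale-out hook
    "HeartbeatTimeout": 300,     # seconds; valid range 30-7200, default 3600
    "DefaultResult": "ABANDON",  # CONTINUE | ABANDON; ABANDON is the default
    # Optional notification plumbing (hypothetical ARNs); RoleARN is valid
    # only when a notification target is set:
    "NotificationTargetARN": "arn:aws:sqs:us-east-1:123456789012:hook-queue",
    "RoleARN": "arn:aws:iam::123456789012:role/hook-publish-role",
}
```

Choosing ``ABANDON`` as the default result means a failed or timed-out bootstrap terminates the instance rather than putting an unconfigured instance into service.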
MemoryGiBPerVCpuRequestProperty
- class CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(*, max=None, min=None)
Bases:
object
``MemoryGiBPerVCpuRequest`` is a property of the ``InstanceRequirements`` property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum amount of memory per vCPU for an instance type, in GiB.
- Parameters:
max (Union[int, float, None]) – The memory maximum in GiB.
min (Union[int, float, None]) – The memory minimum in GiB.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

memory_gi_bPer_vCpu_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The memory maximum in GiB.
- min
The memory minimum in GiB.
MemoryMiBRequestProperty
- class CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(*, max=None, min=None)
Bases:
object
``MemoryMiBRequest`` is a property of the ``InstanceRequirements`` property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum instance memory size for an instance type, in MiB.
- Parameters:
max (Union[int, float, None]) – The memory maximum in MiB.
min (Union[int, float, None]) – The memory minimum in MiB.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

memory_mi_bRequest_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The memory maximum in MiB.
- min
The memory minimum in MiB.
MetricsCollectionProperty
- class CfnAutoScalingGroupPropsMixin.MetricsCollectionProperty(*, granularity=None, metrics=None)
Bases:
object
``MetricsCollection`` is a property of the AWS::AutoScaling::AutoScalingGroup resource that describes the group metrics that an Amazon EC2 Auto Scaling group sends to Amazon CloudWatch. These metrics describe the group rather than any of its instances.
For more information, see Monitor CloudWatch metrics for your Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide. You can find a sample template snippet in the Examples section of the ``AWS::AutoScaling::AutoScalingGroup`` resource.
- Parameters:
granularity (Optional[str]) – The frequency at which Amazon EC2 Auto Scaling sends aggregated data to CloudWatch. The only valid value is ``1Minute``.
metrics (Optional[Sequence[str]]) – Identifies the metrics to enable. You can specify one or more of the following metrics: ``GroupMinSize``, ``GroupMaxSize``, ``GroupDesiredCapacity``, ``GroupInServiceInstances``, ``GroupPendingInstances``, ``GroupStandbyInstances``, ``GroupTerminatingInstances``, ``GroupTotalInstances``, ``GroupInServiceCapacity``, ``GroupPendingCapacity``, ``GroupStandbyCapacity``, ``GroupTerminatingCapacity``, ``GroupTotalCapacity``, ``WarmPoolDesiredCapacity``, ``WarmPoolWarmedCapacity``, ``WarmPoolPendingCapacity``, ``WarmPoolTerminatingCapacity``, ``WarmPoolTotalCapacity``, ``GroupAndWarmPoolDesiredCapacity``, ``GroupAndWarmPoolTotalCapacity``. If you specify ``Granularity`` and don't specify any metrics, all metrics are enabled. For more information, see Amazon CloudWatch metrics for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

metrics_collection_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MetricsCollectionProperty(
    granularity="granularity",
    metrics=["metrics"]
)
Attributes
- granularity
The frequency at which Amazon EC2 Auto Scaling sends aggregated data to CloudWatch.
The only valid value is ``1Minute``.
- metrics
Identifies the metrics to enable.
You can specify one or more of the following metrics:
- GroupMinSize
- GroupMaxSize
- GroupDesiredCapacity
- GroupInServiceInstances
- GroupPendingInstances
- GroupStandbyInstances
- GroupTerminatingInstances
- GroupTotalInstances
- GroupInServiceCapacity
- GroupPendingCapacity
- GroupStandbyCapacity
- GroupTerminatingCapacity
- GroupTotalCapacity
- WarmPoolDesiredCapacity
- WarmPoolWarmedCapacity
- WarmPoolPendingCapacity
- WarmPoolTerminatingCapacity
- WarmPoolTotalCapacity
- GroupAndWarmPoolDesiredCapacity
- GroupAndWarmPoolTotalCapacity
If you specify ``Granularity`` and don't specify any metrics, all metrics are enabled.
For more information, see Amazon CloudWatch metrics for Amazon EC2 Auto Scaling in the Amazon EC2 Auto Scaling User Guide.
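At the CloudFormation level, a ``MetricsCollection`` entry that enables only a subset of group metrics might look like the following sketch. Plain Python dict, keyed by the CloudFormation property names; the chosen metric subset is an arbitrary illustration.

```python
# A sketch of a MetricsCollection list. Granularity must be "1Minute";
# omitting the "Metrics" key entirely would enable all group metrics.
metrics_collection = [{
    "Granularity": "1Minute",
    "Metrics": ["GroupMinSize", "GroupMaxSize", "GroupDesiredCapacity"],
}]
```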
MixedInstancesPolicyProperty
- class CfnAutoScalingGroupPropsMixin.MixedInstancesPolicyProperty(*, instances_distribution=None, launch_template=None)
Bases:
object
Use this structure to launch multiple instance types and On-Demand Instances and Spot Instances within a single Auto Scaling group.
A mixed instances policy contains information that Amazon EC2 Auto Scaling can use to launch instances and help optimize your costs. For more information, see Auto Scaling groups with multiple instance types and purchase options in the Amazon EC2 Auto Scaling User Guide .
You can create a mixed instances policy for new and existing Auto Scaling groups. You must use a launch template to configure the policy. You cannot use a launch configuration.
There are key differences between Spot Instances and On-Demand Instances:
- The price for Spot Instances varies based on demand.
- Amazon EC2 can terminate an individual Spot Instance as the availability of, or price for, Spot Instances changes. When a Spot Instance is terminated, the Auto Scaling group attempts to launch a replacement instance to maintain the desired capacity for the group.
``MixedInstancesPolicy`` is a property of the AWS::AutoScaling::AutoScalingGroup resource.
- Parameters:
instances_distribution (Union[IResolvable, InstancesDistributionProperty, Dict[str, Any], None]) – The instances distribution.
launch_template (Union[IResolvable, LaunchTemplateProperty, Dict[str, Any], None]) – One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

mixed_instances_policy_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MixedInstancesPolicyProperty(
    instances_distribution=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstancesDistributionProperty(
        on_demand_allocation_strategy="onDemandAllocationStrategy",
        on_demand_base_capacity=123,
        on_demand_percentage_above_base_capacity=123,
        spot_allocation_strategy="spotAllocationStrategy",
        spot_instance_pools=123,
        spot_max_price="spotMaxPrice"
    ),
    launch_template=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateProperty(
        launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
            launch_template_id="launchTemplateId",
            launch_template_name="launchTemplateName",
            version="version"
        ),
        overrides=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateOverridesProperty(
            instance_requirements=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.InstanceRequirementsProperty(
                accelerator_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorCountRequestProperty(
                    max=123,
                    min=123
                ),
                accelerator_manufacturers=["acceleratorManufacturers"],
                accelerator_names=["acceleratorNames"],
                accelerator_total_memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.AcceleratorTotalMemoryMiBRequestProperty(
                    max=123,
                    min=123
                ),
                accelerator_types=["acceleratorTypes"],
                allowed_instance_types=["allowedInstanceTypes"],
                bare_metal="bareMetal",
                baseline_ebs_bandwidth_mbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselineEbsBandwidthMbpsRequestProperty(
                    max=123,
                    min=123
                ),
                baseline_performance_factors=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.BaselinePerformanceFactorsRequestProperty(
                    cpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.CpuPerformanceFactorRequestProperty(
                        references=[autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
                            instance_family="instanceFamily"
                        )]
                    )
                ),
                burstable_performance="burstablePerformance",
                cpu_manufacturers=["cpuManufacturers"],
                excluded_instance_types=["excludedInstanceTypes"],
                instance_generations=["instanceGenerations"],
                local_storage="localStorage",
                local_storage_types=["localStorageTypes"],
                max_spot_price_as_percentage_of_optimal_on_demand_price=123,
                memory_gi_bPer_vCpu=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryGiBPerVCpuRequestProperty(
                    max=123,
                    min=123
                ),
                memory_mi_b=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.MemoryMiBRequestProperty(
                    max=123,
                    min=123
                ),
                network_bandwidth_gbps=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
                    max=123,
                    min=123
                ),
                network_interface_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
                    max=123,
                    min=123
                ),
                on_demand_max_price_percentage_over_lowest_price=123,
                require_hibernate_support=False,
                spot_max_price_percentage_over_lowest_price=123,
                total_local_storage_gb=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
                    max=123,
                    min=123
                ),
                v_cpu_count=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
                    max=123,
                    min=123
                )
            ),
            instance_type="instanceType",
            launch_template_specification=autoscaling_mixins.CfnAutoScalingGroupPropsMixin.LaunchTemplateSpecificationProperty(
                launch_template_id="launchTemplateId",
                launch_template_name="launchTemplateName",
                version="version"
            ),
            weighted_capacity="weightedCapacity"
        )]
    )
)
Attributes
- instances_distribution
The instances distribution.
- launch_template
One or more launch templates and the instance types (overrides) that are used to launch EC2 instances to fulfill On-Demand and Spot capacities.
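For orientation, the following is a minimal sketch of the CloudFormation template-level shape a mixed instances policy serializes to, written as a plain dict rather than the mixin classes above; the launch template ID, instance types, and percentages are illustrative assumptions, not recommendations:

```python
# A sketch of the CloudFormation-level shape of a minimal MixedInstancesPolicy.
# All concrete values (IDs, types, percentages) are placeholders.
mixed_instances_policy = {
    "InstancesDistribution": {
        # Keep 2 base instances On-Demand, then split extra capacity 50/50
        # between On-Demand and Spot.
        "OnDemandBaseCapacity": 2,
        "OnDemandPercentageAboveBaseCapacity": 50,
        "SpotAllocationStrategy": "price-capacity-optimized",
    },
    "LaunchTemplate": {
        "LaunchTemplateSpecification": {
            "LaunchTemplateId": "lt-0123456789abcdef0",  # placeholder ID
            "Version": "$Latest",
        },
        # Each override is one candidate instance type; WeightedCapacity
        # counts larger types as more than one unit of desired capacity.
        "Overrides": [
            {"InstanceType": "m5.large", "WeightedCapacity": "1"},
            {"InstanceType": "m5.xlarge", "WeightedCapacity": "2"},
        ],
    },
}
```

The `overrides` list in the typed example above corresponds directly to the `Overrides` array here.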
NetworkBandwidthGbpsRequestProperty
- class CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(*, max=None, min=None)
Bases:
object

NetworkBandwidthGbpsRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum network bandwidth for an instance type, in Gbps.

Setting the minimum bandwidth does not guarantee that your instance will achieve the minimum bandwidth. Amazon EC2 will identify instance types that support the specified minimum bandwidth, but the actual bandwidth of your instance might go below the specified minimum at times. For more information, see Available instance bandwidth in the Amazon EC2 User Guide for Linux Instances.
- Parameters:
max (Union[int, float, None]) – The maximum amount of network bandwidth, in gigabits per second (Gbps).
min (Union[int, float, None]) – The minimum amount of network bandwidth, in gigabits per second (Gbps).
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

network_bandwidth_gbps_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkBandwidthGbpsRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The maximum amount of network bandwidth, in gigabits per second (Gbps).
- min
The minimum amount of network bandwidth, in gigabits per second (Gbps).
NetworkInterfaceCountRequestProperty
- class CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(*, max=None, min=None)
Bases:
object

NetworkInterfaceCountRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum number of network interfaces for an instance type.
- Parameters:
max (Union[int, float, None]) – The maximum number of network interfaces.
min (Union[int, float, None]) – The minimum number of network interfaces.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

network_interface_count_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NetworkInterfaceCountRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The maximum number of network interfaces.
- min
The minimum number of network interfaces.
NotificationConfigurationProperty
- class CfnAutoScalingGroupPropsMixin.NotificationConfigurationProperty(*, notification_types=None, topic_arn=None)
Bases:
object

A structure that specifies an Amazon SNS notification configuration for the NotificationConfigurations property of the AWS::AutoScaling::AutoScalingGroup resource.

For an example template snippet, see Configure Amazon EC2 Auto Scaling resources.
For more information, see Get Amazon SNS notifications when your Auto Scaling group scales in the Amazon EC2 Auto Scaling User Guide .
- Parameters:
notification_types (Optional[Sequence[str]]) – A list of event types that send a notification. Event types can include any of the following types. Allowed values: autoscaling:EC2_INSTANCE_LAUNCH | autoscaling:EC2_INSTANCE_LAUNCH_ERROR | autoscaling:EC2_INSTANCE_TERMINATE | autoscaling:EC2_INSTANCE_TERMINATE_ERROR | autoscaling:TEST_NOTIFICATION
topic_arn (Optional[str]) – The Amazon Resource Name (ARN) of the Amazon SNS topic.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

notification_configuration_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.NotificationConfigurationProperty(
    notification_types=["notificationTypes"],
    topic_arn="topicArn"
)
Attributes
- notification_types
A list of event types that send a notification. Event types can include any of the following types.
Allowed values:
- autoscaling:EC2_INSTANCE_LAUNCH
- autoscaling:EC2_INSTANCE_LAUNCH_ERROR
- autoscaling:EC2_INSTANCE_TERMINATE
- autoscaling:EC2_INSTANCE_TERMINATE_ERROR
- autoscaling:TEST_NOTIFICATION
- topic_arn
The Amazon Resource Name (ARN) of the Amazon SNS topic.
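A notification configuration ultimately renders to one entry in the NotificationConfigurations list of the template. The sketch below shows that template-level shape as a plain dict, using real event-type values from the allowed list above; the topic ARN is a placeholder assumption:

```python
# A sketch of one NotificationConfigurations entry at the CloudFormation
# template level. The topic ARN is a placeholder, not a real resource.
notification_configuration = {
    "TopicARN": "arn:aws:sns:us-west-2:123456789012:my-asg-topic",  # placeholder
    # Notify on successful launches and on both kinds of failure events.
    "NotificationTypes": [
        "autoscaling:EC2_INSTANCE_LAUNCH",
        "autoscaling:EC2_INSTANCE_LAUNCH_ERROR",
        "autoscaling:EC2_INSTANCE_TERMINATE_ERROR",
    ],
}
```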
PerformanceFactorReferenceRequestProperty
- class CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(*, instance_family=None)
Bases:
object

Specify an instance family to use as the baseline reference for CPU performance.
All instance types that match your specified attributes will be compared against the CPU performance of the referenced instance family, regardless of CPU manufacturer or architecture differences. .. epigraph:
Currently only one instance family can be specified in the list.
- Parameters:
instance_family (Optional[str]) – The instance family to use as a baseline reference. .. epigraph:: Make sure that you specify the correct value for the instance family. The instance family is everything before the period (.) in the instance type name. For example, in the instance type c6i.large, the instance family is c6i, not c6. For more information, see Amazon EC2 instance type naming conventions in Amazon EC2 Instance Types. The following instance types are not supported for performance protection: c1, g3 | g3s, hpc7g, m1 | m2, mac1 | mac2 | mac2-m1ultra | mac2-m2 | mac2-m2pro, p3dn | p4d | p5, t1, u-12tb1 | u-18tb1 | u-24tb1 | u-3tb1 | u-6tb1 | u-9tb1 | u7i-12tb | u7in-16tb | u7in-24tb | u7in-32tb. If you enable performance protection by specifying a supported instance family, the returned instance types will exclude the preceding unsupported instance families. If you specify an unsupported instance family as a value for baseline performance, the API returns an empty response.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

performance_factor_reference_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.PerformanceFactorReferenceRequestProperty(
    instance_family="instanceFamily"
)
Attributes
- instance_family
The instance family to use as a baseline reference.
Make sure that you specify the correct value for the instance family. The instance family is everything before the period (.) in the instance type name. For example, in the instance type c6i.large, the instance family is c6i, not c6. For more information, see Amazon EC2 instance type naming conventions in Amazon EC2 Instance Types.
The following instance types are not supported for performance protection:
- c1
- g3 | g3s
- hpc7g
- m1 | m2
- mac1 | mac2 | mac2-m1ultra | mac2-m2 | mac2-m2pro
- p3dn | p4d | p5
- t1
- u-12tb1 | u-18tb1 | u-24tb1 | u-3tb1 | u-6tb1 | u-9tb1 | u7i-12tb | u7in-16tb | u7in-24tb | u7in-32tb
If you enable performance protection by specifying a supported instance family, the returned instance types will exclude the preceding unsupported instance families.
If you specify an unsupported instance family as a value for baseline performance, the API returns an empty response.
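At the CloudFormation template level, the reference family sits inside the BaselinePerformanceFactors structure. A minimal sketch as a plain dict, assuming c6i as the baseline family (any supported family works the same way):

```python
# A sketch of the template-level BaselinePerformanceFactors shape.
# Only one instance family may appear in the References list.
baseline_performance_factors = {
    "Cpu": {
        "References": [
            # "c6i" is everything before the "." in a type like c6i.large.
            {"InstanceFamily": "c6i"}
        ]
    }
}
```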
TagPropertyProperty
- class CfnAutoScalingGroupPropsMixin.TagPropertyProperty(*, key=None, propagate_at_launch=None, value=None)
Bases:
object

A structure that specifies a tag for the Tags property of the AWS::AutoScaling::AutoScalingGroup resource.

For more information, see Tag Auto Scaling groups and instances in the Amazon EC2 Auto Scaling User Guide. You can find a sample template snippet in the Examples section of the AWS::AutoScaling::AutoScalingGroup resource.

CloudFormation adds the following tags to all Auto Scaling groups and associated instances:
aws:cloudformation:stack-name
aws:cloudformation:stack-id
aws:cloudformation:logical-id
- Parameters:
key (Optional[str]) – The tag key.
propagate_at_launch (Union[bool, IResolvable, None]) – Set to true if you want CloudFormation to copy the tag to EC2 instances that are launched as part of the Auto Scaling group. Set to false if you want the tag attached only to the Auto Scaling group and not copied to any instances launched as part of the Auto Scaling group.
value (Optional[str]) – The tag value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

tag_property_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TagPropertyProperty(
    key="key",
    propagate_at_launch=False,
    value="value"
)
Attributes
- key
The tag key.
- propagate_at_launch
Set to true if you want CloudFormation to copy the tag to EC2 instances that are launched as part of the Auto Scaling group.
Set to false if you want the tag attached only to the Auto Scaling group and not copied to any instances launched as part of the Auto Scaling group.
- value
The tag value.
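The propagation flag is the part that trips people up, so here is a sketch of two Tags entries at the CloudFormation template level, one copied to launched instances and one kept only on the group itself; the key and value strings are illustrative placeholders:

```python
# A sketch of two template-level Tags entries for an Auto Scaling group.
tags = [
    # Copied to every EC2 instance the group launches.
    {"Key": "Name", "Value": "web-fleet", "PropagateAtLaunch": True},
    # Stays on the Auto Scaling group only; instances never see it.
    {"Key": "CostCenter", "Value": "1234", "PropagateAtLaunch": False},
]
```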
TotalLocalStorageGBRequestProperty
- class CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(*, max=None, min=None)
Bases:
object

TotalLocalStorageGBRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum total local storage size for an instance type, in GB.
- Parameters:
max (Union[int, float, None]) – The storage maximum in GB.
min (Union[int, float, None]) – The storage minimum in GB.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

total_local_storage_gb_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TotalLocalStorageGBRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The storage maximum in GB.
- min
The storage minimum in GB.
TrafficSourceIdentifierProperty
- class CfnAutoScalingGroupPropsMixin.TrafficSourceIdentifierProperty(*, identifier=None, type=None)
Bases:
object

Identifying information for a traffic source.
- Parameters:
identifier (Optional[str]) – Identifies the traffic source. For Application Load Balancers, Gateway Load Balancers, Network Load Balancers, and VPC Lattice, this will be the Amazon Resource Name (ARN) for a target group in this account and Region. For Classic Load Balancers, this will be the name of the Classic Load Balancer in this account and Region. For example: - Application Load Balancer ARN: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/1234567890123456 - Classic Load Balancer name: my-classic-load-balancer - VPC Lattice ARN: arn:aws:vpc-lattice:us-west-2:123456789012:targetgroup/tg-1234567890123456. To get the ARN of a target group for an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer, or the name of a Classic Load Balancer, use the Elastic Load Balancing DescribeTargetGroups and DescribeLoadBalancers API operations. To get the ARN of a target group for VPC Lattice, use the VPC Lattice GetTargetGroup API operation.
type (Optional[str]) – Provides additional context for the value of Identifier. The following lists the valid values: - elb if Identifier is the name of a Classic Load Balancer. - elbv2 if Identifier is the ARN of an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target group. - vpc-lattice if Identifier is the ARN of a VPC Lattice target group. Required if the identifier is the name of a Classic Load Balancer.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

traffic_source_identifier_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.TrafficSourceIdentifierProperty(
    identifier="identifier",
    type="type"
)
Attributes
- identifier
Identifies the traffic source.
For Application Load Balancers, Gateway Load Balancers, Network Load Balancers, and VPC Lattice, this will be the Amazon Resource Name (ARN) for a target group in this account and Region. For Classic Load Balancers, this will be the name of the Classic Load Balancer in this account and Region.
For example:
- Application Load Balancer ARN: arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/1234567890123456
- Classic Load Balancer name: my-classic-load-balancer
- VPC Lattice ARN: arn:aws:vpc-lattice:us-west-2:123456789012:targetgroup/tg-1234567890123456
To get the ARN of a target group for an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer, or the name of a Classic Load Balancer, use the Elastic Load Balancing DescribeTargetGroups and DescribeLoadBalancers API operations.
To get the ARN of a target group for VPC Lattice, use the VPC Lattice GetTargetGroup API operation.
- type
Provides additional context for the value of Identifier.
The following lists the valid values:
- elb if Identifier is the name of a Classic Load Balancer.
- elbv2 if Identifier is the ARN of an Application Load Balancer, Gateway Load Balancer, or Network Load Balancer target group.
- vpc-lattice if Identifier is the ARN of a VPC Lattice target group.
Required if the identifier is the name of a Classic Load Balancer.
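Putting the two fields together, here is a sketch of a template-level TrafficSources list pairing each identifier style with its matching type; the ARN and load balancer name reuse the example values from the parameter description above:

```python
# A sketch of template-level TrafficSources entries: a target group ARN
# pairs with Type "elbv2", a Classic Load Balancer name with Type "elb".
traffic_sources = [
    {
        "Identifier": "arn:aws:elasticloadbalancing:us-west-2:123456789012:targetgroup/my-targets/1234567890123456",
        "Type": "elbv2",
    },
    # For a Classic Load Balancer, Type is required alongside the name.
    {"Identifier": "my-classic-load-balancer", "Type": "elb"},
]
```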
VCpuCountRequestProperty
- class CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(*, max=None, min=None)
Bases:
object

VCpuCountRequest is a property of the InstanceRequirements property of the AWS::AutoScaling::AutoScalingGroup LaunchTemplateOverrides property type that describes the minimum and maximum number of vCPUs for an instance type.
- Parameters:
max (Union[int, float, None]) – The maximum number of vCPUs.
min (Union[int, float, None]) – The minimum number of vCPUs.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

v_cpu_count_request_property = autoscaling_mixins.CfnAutoScalingGroupPropsMixin.VCpuCountRequestProperty(
    max=123,
    min=123
)
Attributes
- max
The maximum number of vCPUs.