CfnScalingPolicyPropsMixin
- class aws_cdk.mixins_preview.aws_autoscaling.mixins.CfnScalingPolicyPropsMixin(props, *, strategy=None)
Bases: Mixin
The AWS::AutoScaling::ScalingPolicy resource specifies an Amazon EC2 Auto Scaling scaling policy so that the Auto Scaling group can scale the number of instances available for your application.
For more information about using scaling policies to scale your Auto Scaling group automatically, see Dynamic scaling and Predictive scaling in the Amazon EC2 Auto Scaling User Guide.
- See:
- CloudformationResource:
AWS::AutoScaling::ScalingPolicy
- Mixin:
true
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

cfn_scaling_policy_props_mixin = autoscaling_mixins.CfnScalingPolicyPropsMixin(autoscaling_mixins.CfnScalingPolicyMixinProps(
    adjustment_type="adjustmentType",
    auto_scaling_group_name="autoScalingGroupName",
    cooldown="cooldown",
    estimated_instance_warmup=123,
    metric_aggregation_type="metricAggregationType",
    min_adjustment_magnitude=123,
    policy_type="policyType",
    predictive_scaling_configuration=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingConfigurationProperty(
        max_capacity_breach_behavior="maxCapacityBreachBehavior",
        max_capacity_buffer=123,
        metric_specifications=[autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingMetricSpecificationProperty(
            customized_capacity_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedCapacityMetricProperty(
                metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                    expression="expression",
                    id="id",
                    label="label",
                    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                                name="name",
                                value="value"
                            )],
                            metric_name="metricName",
                            namespace="namespace"
                        ),
                        stat="stat",
                        unit="unit"
                    ),
                    return_data=False
                )]
            ),
            customized_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedLoadMetricProperty(
                metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                    expression="expression",
                    id="id",
                    label="label",
                    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                                name="name",
                                value="value"
                            )],
                            metric_name="metricName",
                            namespace="namespace"
                        ),
                        stat="stat",
                        unit="unit"
                    ),
                    return_data=False
                )]
            ),
            customized_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedScalingMetricProperty(
                metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                    expression="expression",
                    id="id",
                    label="label",
                    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                                name="name",
                                value="value"
                            )],
                            metric_name="metricName",
                            namespace="namespace"
                        ),
                        stat="stat",
                        unit="unit"
                    ),
                    return_data=False
                )]
            ),
            predefined_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedLoadMetricProperty(
                predefined_metric_type="predefinedMetricType",
                resource_label="resourceLabel"
            ),
            predefined_metric_pair_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
                predefined_metric_type="predefinedMetricType",
                resource_label="resourceLabel"
            ),
            predefined_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedScalingMetricProperty(
                predefined_metric_type="predefinedMetricType",
                resource_label="resourceLabel"
            ),
            target_value=123
        )],
        mode="mode",
        scheduling_buffer_time=123
    ),
    scaling_adjustment=123,
    step_adjustments=[autoscaling_mixins.CfnScalingPolicyPropsMixin.StepAdjustmentProperty(
        metric_interval_lower_bound=123,
        metric_interval_upper_bound=123,
        scaling_adjustment=123
    )],
    target_tracking_configuration=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingConfigurationProperty(
        customized_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.CustomizedMetricSpecificationProperty(
            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                name="name",
                value="value"
            )],
            metric_name="metricName",
            metrics=[autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
                expression="expression",
                id="id",
                label="label",
                metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
                    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                            name="name",
                            value="value"
                        )],
                        metric_name="metricName",
                        namespace="namespace"
                    ),
                    period=123,
                    stat="stat",
                    unit="unit"
                ),
                period=123,
                return_data=False
            )],
            namespace="namespace",
            period=123,
            statistic="statistic",
            unit="unit"
        ),
        disable_scale_in=False,
        predefined_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(
            predefined_metric_type="predefinedMetricType",
            resource_label="resourceLabel"
        ),
        target_value=123
    )
), strategy=mixins.PropertyMergeStrategy.OVERRIDE)
Create a mixin to apply properties to AWS::AutoScaling::ScalingPolicy.
- Parameters:
  - props (Union[CfnScalingPolicyMixinProps, Dict[str, Any]]) – L1 properties to apply.
  - strategy (Optional[PropertyMergeStrategy]) – (experimental) Strategy for merging nested properties. Default: PropertyMergeStrategy.MERGE
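Beyond the generated placeholder example above, a minimal hand-written sketch may be easier to read. The values below are illustrative choices, not defaults; the mixin carries only a target tracking configuration and relies on the default PropertyMergeStrategy.MERGE:

from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Only the properties you want to layer onto the scaling policy need to be set;
# everything else on the construct is left untouched by the mixin.
cpu_target_tracking = autoscaling_mixins.CfnScalingPolicyPropsMixin(
    autoscaling_mixins.CfnScalingPolicyMixinProps(
        policy_type="TargetTrackingScaling",
        target_tracking_configuration=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingConfigurationProperty(
            predefined_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(
                predefined_metric_type="ASGAverageCPUUtilization"
            ),
            target_value=50
        )
    )
    # strategy is omitted, so nested properties are merged (PropertyMergeStrategy.MERGE).
)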
Methods
- apply_to(construct)
  Apply the mixin properties to the construct.
  - Parameters:
    construct (IConstruct)
  - Return type:
- supports(construct)
  Check if this mixin supports the given construct.
  - Parameters:
    construct (IConstruct)
  - Return type:
    bool
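A hedged sketch of how these two methods are typically used together; cpu_target_tracking is the mixin from the sketch above, and scaling_policy stands in for an L1 scaling policy construct (for example, an aws_cdk.aws_autoscaling.CfnScalingPolicy) that you have defined elsewhere in your stack:

# Guard with supports() before applying, since apply_to() is only meaningful for
# constructs this mixin targets (AWS::AutoScaling::ScalingPolicy).
if cpu_target_tracking.supports(scaling_policy):
    cpu_target_tracking.apply_to(scaling_policy)
else:
    raise TypeError("CfnScalingPolicyPropsMixin does not support this construct")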
Attributes
- CFN_PROPERTY_KEYS = ['adjustmentType', 'autoScalingGroupName', 'cooldown', 'estimatedInstanceWarmup', 'metricAggregationType', 'minAdjustmentMagnitude', 'policyType', 'predictiveScalingConfiguration', 'scalingAdjustment', 'stepAdjustments', 'targetTrackingConfiguration']
Static Methods
- classmethod is_mixin(x)
  (experimental) Checks if x is a Mixin.
  - Parameters:
    x (Any) – Any object.
  - Return type:
    bool
  - Returns:
    true if x is an object created from a class which extends Mixin.
  - Stability:
    experimental
CustomizedMetricSpecificationProperty
- class CfnScalingPolicyPropsMixin.CustomizedMetricSpecificationProperty(*, dimensions=None, metric_name=None, metrics=None, namespace=None, period=None, statistic=None, unit=None)
Bases: object
Contains customized metric specification information for a target tracking scaling policy for Amazon EC2 Auto Scaling.
To create your customized metric specification:
- Add values for each required property from CloudWatch. You can use an existing metric, or a new metric that you create. To use your own metric, you must first publish the metric to CloudWatch. For more information, see Publish Custom Metrics in the Amazon CloudWatch User Guide.
- Choose a metric that changes proportionally with capacity. The value of the metric should increase or decrease in inverse proportion to the number of capacity units. That is, the value of the metric should decrease when capacity increases.
For more information about CloudWatch, see Amazon CloudWatch Concepts .
CustomizedMetricSpecification is a property of the AWS::AutoScaling::ScalingPolicy TargetTrackingConfiguration property type.
- Parameters:
  - dimensions (Union[IResolvable, Sequence[Union[IResolvable, MetricDimensionProperty, Dict[str, Any]]], None]) – The dimensions of the metric. Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
  - metric_name (Optional[str]) – The name of the metric. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
  - metrics (Union[IResolvable, Sequence[Union[IResolvable, TargetTrackingMetricDataQueryProperty, Dict[str, Any]]], None]) – The metrics to include in the target tracking scaling policy, as a metric data query. This can include both raw metric and metric math expressions.
  - namespace (Optional[str]) – The namespace of the metric.
  - period (Union[int, float, None]) – The period of the metric in seconds. The default value is 60. Accepted values are 10, 30, and 60. For a high-resolution metric, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
  - statistic (Optional[str]) – The statistic of the metric.
  - unit (Optional[str]) – The unit of the metric. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

customized_metric_specification_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.CustomizedMetricSpecificationProperty(
    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
        name="name",
        value="value"
    )],
    metric_name="metricName",
    metrics=[autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
        expression="expression",
        id="id",
        label="label",
        metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
            metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                    name="name",
                    value="value"
                )],
                metric_name="metricName",
                namespace="namespace"
            ),
            period=123,
            stat="stat",
            unit="unit"
        ),
        period=123,
        return_data=False
    )],
    namespace="namespace",
    period=123,
    statistic="statistic",
    unit="unit"
)
Attributes
- dimensions
The dimensions of the metric.
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
- metric_name
The name of the metric.
To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics .
- metrics
The metrics to include in the target tracking scaling policy, as a metric data query.
This can include both raw metric and metric math expressions.
- namespace
The namespace of the metric.
- period
The period of the metric in seconds.
The default value is 60. Accepted values are 10, 30, and 60. For a high-resolution metric, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
- statistic
The statistic of the metric.
- unit
The unit of the metric.
For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
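As a concrete counterpart to the placeholder example above, a sketch of a specification for a custom metric; the namespace, metric name, and dimension are hypothetical values for a metric you would have published yourself, in line with the guidance at the top of this section:

from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Hypothetical custom metric "MyBacklogPerInstance" published to the "MyApp" namespace.
# It follows the guidance above: its value falls as the group adds capacity.
backlog_spec = autoscaling_mixins.CfnScalingPolicyPropsMixin.CustomizedMetricSpecificationProperty(
    namespace="MyApp",
    metric_name="MyBacklogPerInstance",
    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
        name="QueueName",
        value="my-work-queue"
    )],
    statistic="Average",
    period=60,
    unit="Count"
)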
MetricDataQueryProperty
- class CfnScalingPolicyPropsMixin.MetricDataQueryProperty(*, expression=None, id=None, label=None, metric_stat=None, return_data=None)
Bases: object
The metric data to return.
Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
MetricDataQuery is a property of the following property types:
- AWS::AutoScaling::ScalingPolicy PredictiveScalingCustomizedScalingMetric
- AWS::AutoScaling::ScalingPolicy PredictiveScalingCustomizedLoadMetric
- AWS::AutoScaling::ScalingPolicy PredictiveScalingCustomizedCapacityMetric
Predictive scaling uses the time series data received from CloudWatch to understand how to schedule capacity based on your historical workload patterns.
You can call for a single metric or perform math expressions on multiple metrics. Any expressions used in a metric specification must eventually return a single time series.
For more information and examples, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
- Parameters:
  - expression (Optional[str]) – The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions. Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
  - id (Optional[str]) – A short name that identifies the object’s results in the response. This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
  - label (Optional[str]) – A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
  - metric_stat (Union[IResolvable, MetricStatProperty, Dict[str, Any], None]) – Information about the metric data to return. Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
  - return_data (Union[bool, IResolvable, None]) – Indicates whether to return the timestamps and raw data values of this metric. If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification. If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

metric_data_query_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
    expression="expression",
    id="id",
    label="label",
    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                name="name",
                value="value"
            )],
            metric_name="metricName",
            namespace="namespace"
        ),
        stat="stat",
        unit="unit"
    ),
    return_data=False
)
Attributes
- expression
The math expression to perform on the returned data, if this object is performing a math expression.
This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
- id
A short name that identifies the object’s results in the response.
This name must be unique among all MetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
- label
A human-readable label for this metric or expression.
This is especially useful if this is a math expression, so that you know what the value represents.
- metric_stat
Information about the metric data to return.
Conditional: Within each MetricDataQuery object, you must specify either Expression or MetricStat, but not both.
- return_data
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
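To make the Expression/MetricStat conditional and the ReturnData rule concrete, a sketch of a two-query setup; the Auto Scaling group name and the expression are illustrative assumptions, not values required by this module:

from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

MDQ = autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty

queries = [
    # Raw metric: ReturnData is False because it only feeds the expression below.
    MDQ(
        id="cpu_sum",
        metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
            metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                namespace="AWS/EC2",
                metric_name="CPUUtilization",
                dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                    name="AutoScalingGroupName",
                    value="my-asg"  # hypothetical group name
                )]
            ),
            stat="Sum"
        ),
        return_data=False
    ),
    # Final math expression: the only query that returns data to the policy.
    MDQ(
        id="scaled_cpu",
        expression="cpu_sum / 100",
        label="CPU utilization scaled to a fraction",
        return_data=True
    ),
]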
MetricDimensionProperty
- class CfnScalingPolicyPropsMixin.MetricDimensionProperty(*, name=None, value=None)
Bases: object
MetricDimension specifies a name/value pair that is part of the identity of a CloudWatch metric for the Dimensions property of the AWS::AutoScaling::ScalingPolicy CustomizedMetricSpecification property type. Duplicate dimensions are not allowed.
- Parameters:
  - name (Optional[str]) – The name of the dimension.
  - value (Optional[str]) – The value of the dimension.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

metric_dimension_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
    name="name",
    value="value"
)
Attributes
- name
  The name of the dimension.
- value
  The value of the dimension.
MetricProperty
- class CfnScalingPolicyPropsMixin.MetricProperty(*, dimensions=None, metric_name=None, namespace=None)
Bases: object
Represents a specific metric.
Metric is a property of the AWS::AutoScaling::ScalingPolicy MetricStat property type.
- Parameters:
  - dimensions (Union[IResolvable, Sequence[Union[IResolvable, MetricDimensionProperty, Dict[str, Any]]], None]) – The dimensions for the metric. For the list of available dimensions, see the AWS documentation available from the table in AWS services that publish CloudWatch metrics in the Amazon CloudWatch User Guide. Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
  - metric_name (Optional[str]) – The name of the metric.
  - namespace (Optional[str]) – The namespace of the metric. For more information, see the table in AWS services that publish CloudWatch metrics in the Amazon CloudWatch User Guide.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
        name="name",
        value="value"
    )],
    metric_name="metricName",
    namespace="namespace"
)
Attributes
- dimensions
The dimensions for the metric.
For the list of available dimensions, see the AWS documentation available from the table in AWS services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
Conditional: If you published your metric with dimensions, you must specify the same dimensions in your scaling policy.
- metric_name
The name of the metric.
- namespace
The namespace of the metric.
For more information, see the table in AWS services that publish CloudWatch metrics in the Amazon CloudWatch User Guide .
MetricStatProperty
- class CfnScalingPolicyPropsMixin.MetricStatProperty(*, metric=None, stat=None, unit=None)
Bases: object
MetricStat is a property of the AWS::AutoScaling::ScalingPolicy MetricDataQuery property type. This structure defines the CloudWatch metric to return, along with the statistic and unit.
For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide .
- Parameters:
  - metric (Union[IResolvable, MetricProperty, Dict[str, Any], None]) – The CloudWatch metric to return, including the metric name, namespace, and dimensions. To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics.
  - stat (Optional[str]) – The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide. The most commonly used metrics for predictive scaling are Average and Sum.
  - unit (Optional[str]) – The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

metric_stat_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
            name="name",
            value="value"
        )],
        metric_name="metricName",
        namespace="namespace"
    ),
    stat="stat",
    unit="unit"
)
Attributes
- metric
The CloudWatch metric to return, including the metric name, namespace, and dimensions.
To get the exact metric name, namespace, and dimensions, inspect the Metric object that is returned by a call to ListMetrics .
- stat
The statistic to return.
It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metrics for predictive scaling are Average and Sum.
- unit
The unit to use for the returned data points.
For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
PredefinedMetricSpecificationProperty
- class CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(*, predefined_metric_type=None, resource_label=None)
Bases: object
Contains predefined metric specification information for a target tracking scaling policy for Amazon EC2 Auto Scaling.
PredefinedMetricSpecification is a property of the AWS::AutoScaling::ScalingPolicy TargetTrackingConfiguration property type.
- Parameters:
  - predefined_metric_type (Optional[str]) – The metric type. The following predefined metrics are available: - ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling group. - ASGAverageNetworkIn - Average number of bytes received on all network interfaces by the Auto Scaling group. - ASGAverageNetworkOut - Average number of bytes sent out on all network interfaces by the Auto Scaling group. - ALBRequestCountPerTarget - Average Application Load Balancer request count per target for your Auto Scaling group.
  - resource_label (Optional[str]) – A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can’t specify a resource label unless the target group is attached to the Auto Scaling group. You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is: app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff. Where: - app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN - targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN. To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predefined_metric_specification_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(
    predefined_metric_type="predefinedMetricType",
    resource_label="resourceLabel"
)
Attributes
- predefined_metric_type
  The metric type. The following predefined metrics are available:
  - ASGAverageCPUUtilization - Average CPU utilization of the Auto Scaling group.
  - ASGAverageNetworkIn - Average number of bytes received on all network interfaces by the Auto Scaling group.
  - ASGAverageNetworkOut - Average number of bytes sent out on all network interfaces by the Auto Scaling group.
  - ALBRequestCountPerTarget - Average Application Load Balancer request count per target for your Auto Scaling group.
  - See:
- resource_label
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group.
You can’t specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
- app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN
- targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
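A small sketch of how a resource label could be assembled from the two ARNs; the ARNs below are made-up placeholders that follow the documented ALB and target group ARN formats:

# Hypothetical ARNs for an Application Load Balancer and one of its target groups.
alb_arn = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
           "loadbalancer/app/my-alb/778d41231b141a0f")
tg_arn = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
          "targetgroup/my-alb-target-group/943f017f100becff")

# Keep everything after "loadbalancer/" for the ALB and the last ARN segment for
# the target group, then join them with "/".
alb_part = alb_arn.split("loadbalancer/", 1)[1]  # app/my-alb/778d41231b141a0f
tg_part = tg_arn.split(":")[-1]                  # targetgroup/my-alb-target-group/943f017f100becff
resource_label = f"{alb_part}/{tg_part}"
# -> "app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff"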
PredictiveScalingConfigurationProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingConfigurationProperty(*, max_capacity_breach_behavior=None, max_capacity_buffer=None, metric_specifications=None, mode=None, scheduling_buffer_time=None)
Bases: object
PredictiveScalingConfiguration is a property of the AWS::AutoScaling::ScalingPolicy resource that specifies a predictive scaling policy for Amazon EC2 Auto Scaling.
For more information, see Predictive scaling in the Amazon EC2 Auto Scaling User Guide.
- Parameters:
  - max_capacity_breach_behavior (Optional[str]) – Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group. Defaults to HonorMaxCapacity if not specified. The following are possible values: - HonorMaxCapacity - Amazon EC2 Auto Scaling can’t increase the maximum capacity of the group when the forecast capacity is close to or exceeds the maximum capacity. - IncreaseMaxCapacity - Amazon EC2 Auto Scaling can increase the maximum capacity of the group when the forecast capacity is close to or exceeds the maximum capacity. The upper limit is determined by the forecasted capacity and the value for MaxCapacityBuffer. Note: Use caution when allowing the maximum capacity to be automatically increased. This can lead to more instances being launched than intended if the increased maximum capacity is not monitored and managed. The increased maximum capacity then becomes the new normal maximum capacity for the Auto Scaling group until you manually update it. The maximum capacity does not automatically decrease back to the original maximum.
  - max_capacity_buffer (Union[int, float, None]) – The size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity. The value is specified as a percentage relative to the forecast capacity. For example, if the buffer is 10, this means a 10 percent buffer, such that if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55. If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity. Required if the MaxCapacityBreachBehavior property is set to IncreaseMaxCapacity, and cannot be used otherwise.
  - metric_specifications (Union[IResolvable, Sequence[Union[IResolvable, PredictiveScalingMetricSpecificationProperty, Dict[str, Any]]], None]) – This structure includes the metrics and target utilization to use for predictive scaling. This is an array, but we currently only support a single metric specification. That is, you can specify a target value and a single metric pair, or a target value and one scaling metric and one load metric.
  - mode (Optional[str]) – The predictive scaling mode. Defaults to ForecastOnly if not specified.
  - scheduling_buffer_time (Union[int, float, None]) – The amount of time, in seconds, by which the instance launch time can be advanced. For example, the forecast says to add capacity at 10:00 AM, and you choose to pre-launch instances by 5 minutes. In that case, the instances will be launched at 9:55 AM. The intention is to give resources time to be provisioned. It can take a few minutes to launch an EC2 instance. The actual amount of time required depends on several factors, such as the size of the instance and whether there are startup scripts to complete. The value must be less than the forecast interval duration of 3600 seconds (60 minutes). Defaults to 300 seconds if not specified.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_configuration_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingConfigurationProperty(
    max_capacity_breach_behavior="maxCapacityBreachBehavior",
    max_capacity_buffer=123,
    metric_specifications=[autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingMetricSpecificationProperty(
        customized_capacity_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedCapacityMetricProperty(
            metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                expression="expression",
                id="id",
                label="label",
                metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                            name="name",
                            value="value"
                        )],
                        metric_name="metricName",
                        namespace="namespace"
                    ),
                    stat="stat",
                    unit="unit"
                ),
                return_data=False
            )]
        ),
        customized_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedLoadMetricProperty(
            metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                expression="expression",
                id="id",
                label="label",
                metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                            name="name",
                            value="value"
                        )],
                        metric_name="metricName",
                        namespace="namespace"
                    ),
                    stat="stat",
                    unit="unit"
                ),
                return_data=False
            )]
        ),
        customized_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedScalingMetricProperty(
            metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
                expression="expression",
                id="id",
                label="label",
                metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                            name="name",
                            value="value"
                        )],
                        metric_name="metricName",
                        namespace="namespace"
                    ),
                    stat="stat",
                    unit="unit"
                ),
                return_data=False
            )]
        ),
        predefined_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedLoadMetricProperty(
            predefined_metric_type="predefinedMetricType",
            resource_label="resourceLabel"
        ),
        predefined_metric_pair_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
            predefined_metric_type="predefinedMetricType",
            resource_label="resourceLabel"
        ),
        predefined_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedScalingMetricProperty(
            predefined_metric_type="predefinedMetricType",
            resource_label="resourceLabel"
        ),
        target_value=123
    )],
    mode="mode",
    scheduling_buffer_time=123
)
Attributes
- max_capacity_breach_behavior
Defines the behavior that should be applied if the forecast capacity approaches or exceeds the maximum capacity of the Auto Scaling group.
Defaults to HonorMaxCapacity if not specified.
The following are possible values:
- HonorMaxCapacity - Amazon EC2 Auto Scaling can’t increase the maximum capacity of the group when the forecast capacity is close to or exceeds the maximum capacity.
- IncreaseMaxCapacity - Amazon EC2 Auto Scaling can increase the maximum capacity of the group when the forecast capacity is close to or exceeds the maximum capacity. The upper limit is determined by the forecasted capacity and the value for MaxCapacityBuffer.
Use caution when allowing the maximum capacity to be automatically increased. This can lead to more instances being launched than intended if the increased maximum capacity is not monitored and managed. The increased maximum capacity then becomes the new normal maximum capacity for the Auto Scaling group until you manually update it. The maximum capacity does not automatically decrease back to the original maximum.
- max_capacity_buffer
The size of the capacity buffer to use when the forecast capacity is close to or exceeds the maximum capacity.
The value is specified as a percentage relative to the forecast capacity. For example, if the buffer is 10, this means a 10 percent buffer, such that if the forecast capacity is 50, and the maximum capacity is 40, then the effective maximum capacity is 55.
If set to 0, Amazon EC2 Auto Scaling may scale capacity higher than the maximum capacity to equal but not exceed forecast capacity.
Required if the MaxCapacityBreachBehavior property is set to IncreaseMaxCapacity, and cannot be used otherwise.
- metric_specifications
This structure includes the metrics and target utilization to use for predictive scaling.
This is an array, but we currently only support a single metric specification. That is, you can specify a target value and a single metric pair, or a target value and one scaling metric and one load metric.
- mode
The predictive scaling mode.
Defaults to ForecastOnly if not specified.
- scheduling_buffer_time
The amount of time, in seconds, by which the instance launch time can be advanced.
For example, the forecast says to add capacity at 10:00 AM, and you choose to pre-launch instances by 5 minutes. In that case, the instances will be launched at 9:55 AM. The intention is to give resources time to be provisioned. It can take a few minutes to launch an EC2 instance. The actual amount of time required depends on several factors, such as the size of the instance and whether there are startup scripts to complete.
The value must be less than the forecast interval duration of 3600 seconds (60 minutes). Defaults to 300 seconds if not specified.
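A back-of-the-envelope sketch of the buffer arithmetic described for max_capacity_buffer above; the service applies its own rules and rounding, so this only illustrates the percentage calculation from the documented example:

# Documented example: buffer of 10 (percent), forecast capacity 50, maximum capacity 40.
forecast_capacity = 50
max_capacity = 40
max_capacity_buffer = 10  # percent

# Simplified: when the forecast reaches or exceeds the maximum, the buffer is applied
# on top of the forecast, giving an effective maximum of 55 in this example.
if forecast_capacity >= max_capacity:
    effective_max_capacity = forecast_capacity * (1 + max_capacity_buffer / 100)
else:
    effective_max_capacity = max_capacity

print(effective_max_capacity)  # 55.0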
PredictiveScalingCustomizedCapacityMetricProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedCapacityMetricProperty(*, metric_data_queries=None)
Bases: object
Contains capacity metric information for the CustomizedCapacityMetricSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
- Parameters:
  - metric_data_queries (Union[IResolvable, Sequence[Union[IResolvable, MetricDataQueryProperty, Dict[str, Any]]], None]) – One or more metric data queries to provide the data points for a capacity metric. Use multiple metric data queries only if you are performing a math expression on returned data.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_customized_capacity_metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedCapacityMetricProperty(
    metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
        expression="expression",
        id="id",
        label="label",
        metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
            metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                    name="name",
                    value="value"
                )],
                metric_name="metricName",
                namespace="namespace"
            ),
            stat="stat",
            unit="unit"
        ),
        return_data=False
    )]
)
Attributes
- metric_data_queries
One or more metric data queries to provide the data points for a capacity metric.
Use multiple metric data queries only if you are performing a math expression on returned data.
PredictiveScalingCustomizedLoadMetricProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedLoadMetricProperty(*, metric_data_queries=None)
Bases: object
Contains load metric information for the CustomizedLoadMetricSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
- Parameters:
  - metric_data_queries (Union[IResolvable, Sequence[Union[IResolvable, MetricDataQueryProperty, Dict[str, Any]]], None]) – One or more metric data queries to provide the data points for a load metric. Use multiple metric data queries only if you are performing a math expression on returned data.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_customized_load_metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedLoadMetricProperty(
    metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
        expression="expression",
        id="id",
        label="label",
        metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
            metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                    name="name",
                    value="value"
                )],
                metric_name="metricName",
                namespace="namespace"
            ),
            stat="stat",
            unit="unit"
        ),
        return_data=False
    )]
)
Attributes
- metric_data_queries
One or more metric data queries to provide the data points for a load metric.
Use multiple metric data queries only if you are performing a math expression on returned data.
PredictiveScalingCustomizedScalingMetricProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedScalingMetricProperty(*, metric_data_queries=None)
Bases: object
Contains scaling metric information for the CustomizedScalingMetricSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
- Parameters:
  - metric_data_queries (Union[IResolvable, Sequence[Union[IResolvable, MetricDataQueryProperty, Dict[str, Any]]], None]) – One or more metric data queries to provide the data points for a scaling metric. Use multiple metric data queries only if you are performing a math expression on returned data.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_customized_scaling_metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedScalingMetricProperty(
    metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
        expression="expression",
        id="id",
        label="label",
        metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
            metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                    name="name",
                    value="value"
                )],
                metric_name="metricName",
                namespace="namespace"
            ),
            stat="stat",
            unit="unit"
        ),
        return_data=False
    )]
)
Attributes
- metric_data_queries
One or more metric data queries to provide the data points for a scaling metric.
Use multiple metric data queries only if you are performing a math expression on returned data.
PredictiveScalingMetricSpecificationProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingMetricSpecificationProperty(*, customized_capacity_metric_specification=None, customized_load_metric_specification=None, customized_scaling_metric_specification=None, predefined_load_metric_specification=None, predefined_metric_pair_specification=None, predefined_scaling_metric_specification=None, target_value=None)
Bases: object
A structure that specifies a metric specification for the MetricSpecifications property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingConfiguration property type.
You must specify either a metric pair, or a load metric and a scaling metric individually. Specifying a metric pair instead of individual metrics provides a simpler way to configure metrics for a scaling policy. You choose the metric pair, and the policy automatically knows the correct sum and average statistics to use for the load metric and the scaling metric.
Example
You create a predictive scaling policy and specify ALBRequestCount as the value for the metric pair and 1000.0 as the target value. For this type of metric, you must provide the metric dimension for the corresponding target group, so you also provide a resource label for the Application Load Balancer target group that is attached to your Auto Scaling group.
The number of requests the target group receives per minute provides the load metric, and the request count averaged between the members of the target group provides the scaling metric. In CloudWatch, this refers to the RequestCount and RequestCountPerTarget metrics, respectively.
For optimal use of predictive scaling, you adhere to the best practice of using a dynamic scaling policy to automatically scale between the minimum capacity and maximum capacity in response to real-time changes in resource utilization.
Amazon EC2 Auto Scaling consumes data points for the load metric over the last 14 days and creates an hourly load forecast for predictive scaling. (A minimum of 24 hours of data is required.)
After creating the load forecast, Amazon EC2 Auto Scaling determines when to reduce or increase the capacity of your Auto Scaling group in each hour of the forecast period so that the average number of requests received by each instance is as close to 1000 requests per minute as possible at all times.
For information about using custom metrics with predictive scaling, see Advanced predictive scaling policy configurations using custom metrics in the Amazon EC2 Auto Scaling User Guide .
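A sketch of the metric specification described in the example above: an ALBRequestCount metric pair with a target of 1000.0 requests per instance; the resource label is the documented placeholder, not a real target group:

from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Resource label for the (hypothetical) target group attached to the Auto Scaling group.
resource_label = "app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff"

alb_request_count_spec = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingMetricSpecificationProperty(
    predefined_metric_pair_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
        predefined_metric_type="ALBRequestCount",
        resource_label=resource_label
    ),
    target_value=1000.0
)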
- Parameters:
  - customized_capacity_metric_specification (Union[IResolvable, PredictiveScalingCustomizedCapacityMetricProperty, Dict[str, Any], None]) – The customized capacity metric specification.
  - customized_load_metric_specification (Union[IResolvable, PredictiveScalingCustomizedLoadMetricProperty, Dict[str, Any], None]) – The customized load metric specification.
  - customized_scaling_metric_specification (Union[IResolvable, PredictiveScalingCustomizedScalingMetricProperty, Dict[str, Any], None]) – The customized scaling metric specification.
  - predefined_load_metric_specification (Union[IResolvable, PredictiveScalingPredefinedLoadMetricProperty, Dict[str, Any], None]) – The predefined load metric specification.
  - predefined_metric_pair_specification (Union[IResolvable, PredictiveScalingPredefinedMetricPairProperty, Dict[str, Any], None]) – The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
  - predefined_scaling_metric_specification (Union[IResolvable, PredictiveScalingPredefinedScalingMetricProperty, Dict[str, Any], None]) – The predefined scaling metric specification.
  - target_value (Union[int, float, None]) – Specifies the target utilization. Note: Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_metric_specification_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingMetricSpecificationProperty(
    customized_capacity_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedCapacityMetricProperty(
        metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
            expression="expression",
            id="id",
            label="label",
            metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                        name="name",
                        value="value"
                    )],
                    metric_name="metricName",
                    namespace="namespace"
                ),
                stat="stat",
                unit="unit"
            ),
            return_data=False
        )]
    ),
    customized_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedLoadMetricProperty(
        metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
            expression="expression",
            id="id",
            label="label",
            metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                        name="name",
                        value="value"
                    )],
                    metric_name="metricName",
                    namespace="namespace"
                ),
                stat="stat",
                unit="unit"
            ),
            return_data=False
        )]
    ),
    customized_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingCustomizedScalingMetricProperty(
        metric_data_queries=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDataQueryProperty(
            expression="expression",
            id="id",
            label="label",
            metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricStatProperty(
                metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                        name="name",
                        value="value"
                    )],
                    metric_name="metricName",
                    namespace="namespace"
                ),
                stat="stat",
                unit="unit"
            ),
            return_data=False
        )]
    ),
    predefined_load_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedLoadMetricProperty(
        predefined_metric_type="predefinedMetricType",
        resource_label="resourceLabel"
    ),
    predefined_metric_pair_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
        predefined_metric_type="predefinedMetricType",
        resource_label="resourceLabel"
    ),
    predefined_scaling_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedScalingMetricProperty(
        predefined_metric_type="predefinedMetricType",
        resource_label="resourceLabel"
    ),
    target_value=123
)
Attributes
- customized_capacity_metric_specification
The customized capacity metric specification.
- customized_load_metric_specification
The customized load metric specification.
- customized_scaling_metric_specification
The customized scaling metric specification.
- predefined_load_metric_specification
The predefined load metric specification.
- predefined_metric_pair_specification
The predefined metric pair specification from which Amazon EC2 Auto Scaling determines the appropriate scaling metric and load metric to use.
- predefined_scaling_metric_specification
The predefined scaling metric specification.
- target_value
Specifies the target utilization.
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
PredictiveScalingPredefinedLoadMetricProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedLoadMetricProperty(*, predefined_metric_type=None, resource_label=None)
Bases: object
Contains load metric information for the PredefinedLoadMetricSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
Does not apply to policies that use a metric pair for the metric specification.
- Parameters:
  - predefined_metric_type (Optional[str]) – The metric type.
  - resource_label (Optional[str]) – A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group. You can’t specify a resource label unless the target group is attached to the Auto Scaling group. You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is: app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff. Where: - app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN - targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN. To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_predefined_load_metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedLoadMetricProperty(
    predefined_metric_type="predefinedMetricType",
    resource_label="resourceLabel"
)
Attributes
- predefined_metric_type
The metric type.
- resource_label
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the request count served by your Auto Scaling group.
You can’t specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
- app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN
- targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
PredictiveScalingPredefinedMetricPairProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(*, predefined_metric_type=None, resource_label=None)
Bases: object
Contains metric pair information for the PredefinedMetricPairSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
For more information, see Predictive scaling in the Amazon EC2 Auto Scaling User Guide.
- Parameters:
  - predefined_metric_type (Optional[str]) – Indicates which metrics to use. There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group’s total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
  - resource_label (Optional[str]) – A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group. You can’t specify a resource label unless the target group is attached to the Auto Scaling group. You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is: app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff. Where: - app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN - targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN. To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_predefined_metric_pair_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
    predefined_metric_type="predefinedMetricType",
    resource_label="resourceLabel"
)
Attributes
- predefined_metric_type
Indicates which metrics to use.
There are two different types of metrics for each metric type: one is a load metric and one is a scaling metric. For example, if the metric type is ASGCPUUtilization, the Auto Scaling group’s total CPU metric is used as the load metric, and the average CPU metric is used for the scaling metric.
- resource_label
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the total and average request count served by your Auto Scaling group.
You can’t specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff.
Where:
- app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN
- targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
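For a metric type that needs no resource label, a minimal sketch using the ASGCPUUtilization pair mentioned above:

from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# ASGCPUUtilization: total CPU becomes the load metric, average CPU the scaling metric.
cpu_metric_pair = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedMetricPairProperty(
    predefined_metric_type="ASGCPUUtilization"
    # resource_label is left unset; it is needed for the ALBRequestCount metric pair.
)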
PredictiveScalingPredefinedScalingMetricProperty
- class CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedScalingMetricProperty(*, predefined_metric_type=None, resource_label=None)
Bases: object
Contains scaling metric information for the PredefinedScalingMetricSpecification property of the AWS::AutoScaling::ScalingPolicy PredictiveScalingMetricSpecification property type.
Does not apply to policies that use a metric pair for the metric specification.
- Parameters:
  - predefined_metric_type (Optional[str]) – The metric type.
  - resource_label (Optional[str]) – A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group. You can’t specify a resource label unless the target group is attached to the Auto Scaling group. You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is: app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff. Where: - app/<load-balancer-name>/<load-balancer-id> is the final portion of the load balancer ARN - targetgroup/<target-group-name>/<target-group-id> is the final portion of the target group ARN. To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

predictive_scaling_predefined_scaling_metric_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.PredictiveScalingPredefinedScalingMetricProperty(
    predefined_metric_type="predefinedMetricType",
    resource_label="resourceLabel"
)
Attributes
- predefined_metric_type
The metric type.
- resource_label
A label that uniquely identifies a specific Application Load Balancer target group from which to determine the average request count served by your Auto Scaling group.
You can’t specify a resource label unless the target group is attached to the Auto Scaling group.
You create the resource label by appending the final portion of the load balancer ARN and the final portion of the target group ARN into a single value, separated by a forward slash (/). The format of the resource label is:
app/my-alb/778d41231b141a0f/targetgroup/my-alb-target-group/943f017f100becff. Where:
app/my-alb/778d41231b141a0f is the final portion of the load balancer ARN
targetgroup/my-alb-target-group/943f017f100becff is the final portion of the target group ARN.
To find the ARN for an Application Load Balancer, use the DescribeLoadBalancers API operation. To find the ARN for the target group, use the DescribeTargetGroups API operation.
StepAdjustmentProperty
- class CfnScalingPolicyPropsMixin.StepAdjustmentProperty(*, metric_interval_lower_bound=None, metric_interval_upper_bound=None, scaling_adjustment=None)
Bases:
object
StepAdjustment specifies a step adjustment for the StepAdjustments property of the AWS::AutoScaling::ScalingPolicy resource.
For the following examples, suppose that you have an alarm with a breach threshold of 50 (see the sketch following the rules below):
To trigger a step adjustment when the metric is greater than or equal to 50 and less than 60, specify a lower bound of 0 and an upper bound of 10.
To trigger a step adjustment when the metric is greater than 40 and less than or equal to 50, specify a lower bound of -10 and an upper bound of 0.
There are a few rules for the step adjustments for your step policy:
The ranges of your step adjustments can’t overlap or have a gap.
At most one step adjustment can have a null lower bound. If one step adjustment has a negative lower bound, then there must be a step adjustment with a null lower bound.
At most one step adjustment can have a null upper bound. If one step adjustment has a positive upper bound, then there must be a step adjustment with a null upper bound.
The upper and lower bound can’t be null in the same step adjustment.
For more information, see Step adjustments in the Amazon EC2 Auto Scaling User Guide .
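Building on the threshold-of-50 example above, here is a minimal sketch of a pair of scale-out step adjustments that satisfies these rules; the second adjustment and both scaling_adjustment values are illustrative assumptions, not taken from the documentation.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Metric >= 50 and < 60: lower bound 0 (inclusive), upper bound 10 (exclusive).
small_step = autoscaling_mixins.CfnScalingPolicyPropsMixin.StepAdjustmentProperty(
    metric_interval_lower_bound=0,
    metric_interval_upper_bound=10,
    scaling_adjustment=1  # illustrative: add one unit of capacity
)

# Metric >= 60: omitting the upper bound leaves it null (positive infinity),
# which the rules require because the first adjustment has a positive upper bound.
large_step = autoscaling_mixins.CfnScalingPolicyPropsMixin.StepAdjustmentProperty(
    metric_interval_lower_bound=10,
    scaling_adjustment=2  # illustrative: add two units of capacity
)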
You can find a sample template snippet in the Examples section of the AWS::AutoScaling::ScalingPolicy resource.
- Parameters:
metric_interval_lower_bound (Union[int, float, None]) – The lower bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
metric_interval_upper_bound (Union[int, float, None]) – The upper bound for the difference between the alarm threshold and the CloudWatch metric. If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity. The upper bound must be greater than the lower bound.
scaling_adjustment (Union[int, float, None]) – The amount by which to scale, based on the specified adjustment type. A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a non-negative value.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

step_adjustment_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.StepAdjustmentProperty(
    metric_interval_lower_bound=123,
    metric_interval_upper_bound=123,
    scaling_adjustment=123
)
Attributes
- metric_interval_lower_bound
The lower bound for the difference between the alarm threshold and the CloudWatch metric.
If the metric value is above the breach threshold, the lower bound is inclusive (the metric must be greater than or equal to the threshold plus the lower bound). Otherwise, it is exclusive (the metric must be greater than the threshold plus the lower bound). A null value indicates negative infinity.
- metric_interval_upper_bound
The upper bound for the difference between the alarm threshold and the CloudWatch metric.
If the metric value is above the breach threshold, the upper bound is exclusive (the metric must be less than the threshold plus the upper bound). Otherwise, it is inclusive (the metric must be less than or equal to the threshold plus the upper bound). A null value indicates positive infinity.
The upper bound must be greater than the lower bound.
- scaling_adjustment
The amount by which to scale, based on the specified adjustment type.
A positive value adds to the current capacity while a negative number removes from the current capacity. For exact capacity, you must specify a non-negative value.
TargetTrackingConfigurationProperty
- class CfnScalingPolicyPropsMixin.TargetTrackingConfigurationProperty(*, customized_metric_specification=None, disable_scale_in=None, predefined_metric_specification=None, target_value=None)
Bases:
object
TargetTrackingConfiguration is a property of the AWS::AutoScaling::ScalingPolicy resource that specifies a target tracking scaling policy configuration for Amazon EC2 Auto Scaling.
For more information about scaling policies, see Dynamic scaling in the Amazon EC2 Auto Scaling User Guide.
- Parameters:
customized_metric_specification (Union[IResolvable, CustomizedMetricSpecificationProperty, Dict[str, Any], None]) – A customized metric. You must specify either a predefined metric or a customized metric.
disable_scale_in (Union[bool, IResolvable, None]) – Indicates whether scaling in by the target tracking scaling policy is disabled. If scaling in is disabled, the target tracking scaling policy doesn’t remove instances from the Auto Scaling group. Otherwise, the target tracking scaling policy can remove instances from the Auto Scaling group. The default is false.
predefined_metric_specification (Union[IResolvable, PredefinedMetricSpecificationProperty, Dict[str, Any], None]) – A predefined metric. You must specify either a predefined metric or a customized metric.
target_value (Union[int, float, None]) – The target value for the metric. Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

target_tracking_configuration_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingConfigurationProperty(
    customized_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.CustomizedMetricSpecificationProperty(
        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
            name="name",
            value="value"
        )],
        metric_name="metricName",
        metrics=[autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
            expression="expression",
            id="id",
            label="label",
            metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
                metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
                    dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                        name="name",
                        value="value"
                    )],
                    metric_name="metricName",
                    namespace="namespace"
                ),
                period=123,
                stat="stat",
                unit="unit"
            ),
            period=123,
            return_data=False
        )],
        namespace="namespace",
        period=123,
        statistic="statistic",
        unit="unit"
    ),
    disable_scale_in=False,
    predefined_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(
        predefined_metric_type="predefinedMetricType",
        resource_label="resourceLabel"
    ),
    target_value=123
)
Attributes
- customized_metric_specification
A customized metric.
You must specify either a predefined metric or a customized metric.
- disable_scale_in
Indicates whether scaling in by the target tracking scaling policy is disabled.
If scaling in is disabled, the target tracking scaling policy doesn’t remove instances from the Auto Scaling group. Otherwise, the target tracking scaling policy can remove instances from the Auto Scaling group. The default is
false.
- predefined_metric_specification
A predefined metric.
You must specify either a predefined metric or a customized metric.
- target_value
The target value for the metric.
Some metrics are based on a count instead of a percentage, such as the request count for an Application Load Balancer or the number of messages in an SQS queue. If the scaling policy specifies one of these metrics, specify the target utilization as the optimal average request or message count per instance during any one-minute interval.
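For a more concrete picture than the generated placeholders above, here is a minimal sketch of a target tracking configuration that tracks average CPU utilization at roughly 50%; the ASGAverageCPUUtilization metric type and the target value are illustrative choices.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

cpu_target_tracking = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingConfigurationProperty(
    # A predefined metric; a customized metric could be supplied instead, but not both.
    predefined_metric_specification=autoscaling_mixins.CfnScalingPolicyPropsMixin.PredefinedMetricSpecificationProperty(
        predefined_metric_type="ASGAverageCPUUtilization"
    ),
    target_value=50,
    disable_scale_in=False  # keep the default: the policy may also scale in
)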
TargetTrackingMetricDataQueryProperty
- class CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(*, expression=None, id=None, label=None, metric_stat=None, period=None, return_data=None)
Bases:
object
The metric data to return.
Also defines whether this call is returning data for one metric only, or whether it is performing a math expression on the values of returned metric statistics to create a new time series. A time series is a series of data points, each of which is associated with a timestamp.
You can use
TargetTrackingMetricDataQuery structures with a PutScalingPolicy operation when you specify a TargetTrackingConfiguration in the request.
You can call for a single metric or perform math expressions on multiple metrics. Any expressions used in a metric specification must eventually return a single time series.
For more information, see Create a target tracking scaling policy for Amazon EC2 Auto Scaling using metric math in the Amazon EC2 Auto Scaling User Guide.
- Parameters:
expression (Optional[str]) – The math expression to perform on the returned data, if this object is performing a math expression. This expression can use the Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions. Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
id (Optional[str]) – A short name that identifies the object’s results in the response. This name must be unique among all TargetTrackingMetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
label (Optional[str]) – A human-readable label for this metric or expression. This is especially useful if this is a math expression, so that you know what the value represents.
metric_stat (Union[IResolvable, TargetTrackingMetricStatProperty, Dict[str, Any], None]) – Information about the metric data to return. Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
period (Union[int, float, None]) – The period of the metric in seconds. The default value is 60. Accepted values are 10, 30, and 60. For high-resolution metrics, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
return_data (Union[bool, IResolvable, None]) – Indicates whether to return the timestamps and raw data values of this metric. If you use any math expressions, specify true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification. If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

target_tracking_metric_data_query_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
    expression="expression",
    id="id",
    label="label",
    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                name="name",
                value="value"
            )],
            metric_name="metricName",
            namespace="namespace"
        ),
        period=123,
        stat="stat",
        unit="unit"
    ),
    period=123,
    return_data=False
)
Attributes
- expression
The math expression to perform on the returned data, if this object is performing a math expression.
This expression can use the
Id of the other metrics to refer to those metrics, and can also use the Id of other expressions to use the result of those expressions.
Conditional: Within each TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
- id
A short name that identifies the object’s results in the response.
This name must be unique among all
TargetTrackingMetricDataQuery objects specified for a single scaling policy. If you are performing math expressions on this set of data, this name represents that data and can serve as a variable in the mathematical expression. The valid characters are letters, numbers, and underscores. The first character must be a lowercase letter.
- label
A human-readable label for this metric or expression.
This is especially useful if this is a math expression, so that you know what the value represents.
- metric_stat
Information about the metric data to return.
Conditional: Within each
TargetTrackingMetricDataQuery object, you must specify either Expression or MetricStat, but not both.
- period
The period of the metric in seconds.
The default value is 60. Accepted values are 10, 30, and 60. For high-resolution metrics, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
- return_data
Indicates whether to return the timestamps and raw data values of this metric.
If you use any math expressions, specify
true for this value for only the final math expression that the metric specification is based on. You must specify false for ReturnData for all the other metrics and expressions used in the metric specification.
If you are only retrieving metrics and not performing any math expressions, do not specify anything for ReturnData. This sets it to its default (true).
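Tying expression, id, metric_stat, and return_data together, here is a sketch of a two-query metric specification in which only the final math expression returns data; the CloudWatch metric names, the group name, and the doubling expression are illustrative assumptions.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Raw CPU metric: referenced by the expression below, so it does not return data itself.
cpu_query = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
    id="cpu",
    metric_stat=autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
        metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
            metric_name="CPUUtilization",
            namespace="AWS/EC2",
            dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
                name="AutoScalingGroupName",
                value="my-asg"  # hypothetical group name
            )]
        ),
        stat="Average"
    ),
    return_data=False
)

# Final math expression: the only query that returns data to the scaling policy.
scaled_cpu_query = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricDataQueryProperty(
    id="scaled_cpu",
    expression="cpu * 2",  # illustrative expression referencing the Id above
    label="CPU utilization, doubled",
    return_data=True
)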
TargetTrackingMetricStatProperty
- class CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(*, metric=None, period=None, stat=None, unit=None)
Bases:
object
This structure defines the CloudWatch metric to return, along with the statistic and unit.
TargetTrackingMetricStat is a property of the TargetTrackingMetricDataQuery object.
For more information about the CloudWatch terminology below, see Amazon CloudWatch concepts in the Amazon CloudWatch User Guide.
- Parameters:
metric (Union[IResolvable, MetricProperty, Dict[str, Any], None]) – The metric to use.
period (Union[int, float, None]) – The period of the metric in seconds. The default value is 60. Accepted values are 10, 30, and 60. For high-resolution metrics, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
stat (Optional[str]) – The statistic to return. It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide. The most commonly used metric for scaling is Average.
unit (Optional[str]) – The unit to use for the returned data points. For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference.
- See:
- ExampleMetadata:
fixture=_generated
Example:
# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

target_tracking_metric_stat_property = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
        dimensions=[autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricDimensionProperty(
            name="name",
            value="value"
        )],
        metric_name="metricName",
        namespace="namespace"
    ),
    period=123,
    stat="stat",
    unit="unit"
)
Attributes
- metric
The metric to use.
- period
The period of the metric in seconds.
The default value is 60. Accepted values are 10, 30, and 60. For high-resolution metrics, set the value to less than 60. For more information, see Create a target tracking policy using high-resolution metrics for faster response.
- stat
The statistic to return.
It can include any CloudWatch statistic or extended statistic. For a list of valid values, see the table in Statistics in the Amazon CloudWatch User Guide .
The most commonly used metric for scaling is
Average.
- unit
The unit to use for the returned data points.
For a complete list of the units that CloudWatch supports, see the MetricDatum data type in the Amazon CloudWatch API Reference .
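As a closing usage note, here is a minimal sketch of a high-resolution metric stat; the CPUUtilization metric, AWS/EC2 namespace, Percent unit, and 10-second period are illustrative assumptions.
from aws_cdk.mixins_preview.aws_autoscaling import mixins as autoscaling_mixins

# Average CPU at 10-second granularity (a high-resolution period, i.e. less than 60 seconds).
high_res_cpu_stat = autoscaling_mixins.CfnScalingPolicyPropsMixin.TargetTrackingMetricStatProperty(
    metric=autoscaling_mixins.CfnScalingPolicyPropsMixin.MetricProperty(
        metric_name="CPUUtilization",
        namespace="AWS/EC2"
    ),
    period=10,
    stat="Average",
    unit="Percent"
)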