CfnInferenceComponentPropsMixin

class aws_cdk.mixins_preview.aws_sagemaker.mixins.CfnInferenceComponentPropsMixin(props, *, strategy=None)

Bases: Mixin

Creates an inference component, which is a SageMaker AI hosting object that you can use to deploy a model to an endpoint.

In the inference component settings, you specify the model, the endpoint, and how the model utilizes the resources that the endpoint hosts. You can optimize resource utilization by tailoring how the required CPU cores, accelerators, and memory are allocated. You can deploy multiple inference components to an endpoint, where each inference component contains one model and the resource utilization needs for that individual model. After you deploy an inference component, you can directly invoke the associated model when you use the InvokeEndpoint API action.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sagemaker-inferencecomponent.html

CloudformationResource:

AWS::SageMaker::InferenceComponent

Mixin:

true

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

cfn_inference_component_props_mixin = sagemaker_mixins.CfnInferenceComponentPropsMixin(sagemaker_mixins.CfnInferenceComponentMixinProps(
    deployment_config=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(
        auto_rollback_configuration=sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
            alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
                alarm_name="alarmName"
            )]
        ),
        rolling_update_policy=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
            maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
                type="type",
                value=123
            ),
            maximum_execution_timeout_in_seconds=123,
            rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
                type="type",
                value=123
            ),
            wait_interval_in_seconds=123
        )
    ),
    endpoint_arn="endpointArn",
    endpoint_name="endpointName",
    inference_component_name="inferenceComponentName",
    runtime_config=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
        copy_count=123,
        current_copy_count=123,
        desired_copy_count=123
    ),
    specification=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
        base_inference_component_name="baseInferenceComponentName",
        compute_resource_requirements=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
            max_memory_required_in_mb=123,
            min_memory_required_in_mb=123,
            number_of_accelerator_devices_required=123,
            number_of_cpu_cores_required=123
        ),
        container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
            artifact_url="artifactUrl",
            deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
                resolution_time="resolutionTime",
                resolved_image="resolvedImage",
                specified_image="specifiedImage"
            ),
            environment={
                "environment_key": "environment"
            },
            image="image"
        ),
        model_name="modelName",
        startup_parameters=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
            container_startup_health_check_timeout_in_seconds=123,
            model_data_download_timeout_in_seconds=123
        )
    ),
    tags=[CfnTag(
        key="key",
        value="value"
    )],
    variant_name="variantName"
),
    strategy=mixins.PropertyMergeStrategy.OVERRIDE
)

Create a mixin to apply properties to AWS::SageMaker::InferenceComponent.

Parameters:
  • props (CfnInferenceComponentMixinProps) – The resource properties to apply to the construct.

  • strategy (Optional[PropertyMergeStrategy]) – How these properties are merged with properties that are already set on the construct.

Methods

apply_to(construct)

Apply the mixin properties to the construct.

Parameters:

construct (IConstruct) – The construct to apply the mixin properties to.

Return type:

IConstruct

supports(construct)

Check if this mixin supports the given construct.

Parameters:

construct (IConstruct) – The construct to check.

Return type:

bool
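
A minimal, illustrative sketch of using these methods together. It assumes the code runs inside a Stack (so self is a valid scope); the endpoint, model, and construct names are placeholders, and the L1 construct configuration is kept deliberately minimal.

# Apply the mixin to an existing AWS::SageMaker::InferenceComponent construct.
from aws_cdk import aws_sagemaker as sagemaker
from aws_cdk.mixins_preview import mixins
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component = sagemaker.CfnInferenceComponent(self, "MyInferenceComponent",
    endpoint_name="my-endpoint",
    specification=sagemaker.CfnInferenceComponent.InferenceComponentSpecificationProperty(
        model_name="my-model"
    )
)

# Build a mixin that only overrides the runtime config.
props_mixin = sagemaker_mixins.CfnInferenceComponentPropsMixin(
    sagemaker_mixins.CfnInferenceComponentMixinProps(
        runtime_config=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
            copy_count=2
        )
    ),
    strategy=mixins.PropertyMergeStrategy.OVERRIDE
)

# supports() guards against applying the mixin to an unsupported construct.
if props_mixin.supports(inference_component):
    props_mixin.apply_to(inference_component)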

Attributes

CFN_PROPERTY_KEYS = ['deploymentConfig', 'endpointArn', 'endpointName', 'inferenceComponentName', 'runtimeConfig', 'specification', 'tags', 'variantName']

Static Methods

classmethod is_mixin(x)

(experimental) Checks if x is a Mixin.

Parameters:

x (Any) – Any object.

Return type:

bool

Returns:

True if x is an object created from a class which extends Mixin.

Stability:

experimental

AlarmProperty

class CfnInferenceComponentPropsMixin.AlarmProperty(*, alarm_name=None)

Bases: object

An Amazon CloudWatch alarm configured to monitor metrics on an endpoint.

Parameters:

alarm_name (Optional[str]) – The name of a CloudWatch alarm in your account.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-alarm.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

alarm_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
    alarm_name="alarmName"
)

Attributes

alarm_name

The name of a CloudWatch alarm in your account.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-alarm.html#cfn-sagemaker-inferencecomponent-alarm-alarmname

AutoRollbackConfigurationProperty

class CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(*, alarms=None)

Bases: object

Specifies the CloudWatch alarms that SageMaker AI monitors in order to automatically roll back an inference component deployment.

Parameters:

alarms (Union[IResolvable, Sequence[Union[IResolvable, AlarmProperty, Dict[str, Any]]], None]) – The CloudWatch alarms to monitor during the deployment.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-autorollbackconfiguration.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

auto_rollback_configuration_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
    alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
        alarm_name="alarmName"
    )]
)

Attributes

alarms

The CloudWatch alarms to monitor during the deployment.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-autorollbackconfiguration.html#cfn-sagemaker-inferencecomponent-autorollbackconfiguration-alarms

DeployedImageProperty

class CfnInferenceComponentPropsMixin.DeployedImageProperty(*, resolution_time=None, resolved_image=None, specified_image=None)

Bases: object

Gets the Amazon Elastic Container Registry (Amazon ECR) path of the Docker image of the model that is hosted in this ProductionVariant.

If you used the registry/repository[:tag] form to specify the image path of the primary container when you created the model hosted in this ProductionVariant, the path resolves to the form registry/repository[@digest]. A digest is a hash value that identifies a specific version of an image. For information about Amazon ECR paths, see Pulling an Image in the Amazon ECR User Guide.

Parameters:
  • resolution_time (Optional[str]) – The date and time when the image path for the model resolved to the ResolvedImage.

  • resolved_image (Optional[str]) – The specific digest path of the image hosted in this ProductionVariant.

  • specified_image (Optional[str]) – The image path you specified when you created the model.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-deployedimage.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

deployed_image_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
    resolution_time="resolutionTime",
    resolved_image="resolvedImage",
    specified_image="specifiedImage"
)
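
As a concrete illustration of that resolution, the specified tag form and the resolved digest form might look like this. The account ID, Region, repository, timestamp, and digest are all hypothetical, and the digest is truncated.

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

deployed_image_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
    # registry/repository[:tag] form, as specified when the model was created.
    specified_image="123456789012.dkr.ecr.us-west-2.amazonaws.com/my-model:latest",
    # registry/repository[@digest] form that the tag resolved to.
    resolved_image="123456789012.dkr.ecr.us-west-2.amazonaws.com/my-model@sha256:0123abcd...",
    resolution_time="2024-01-01T00:00:00Z"
)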

Attributes

resolution_time

The date and time when the image path for the model resolved to the ResolvedImage.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-deployedimage.html#cfn-sagemaker-inferencecomponent-deployedimage-resolutiontime

resolved_image

The specific digest path of the image hosted in this ProductionVariant.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-deployedimage.html#cfn-sagemaker-inferencecomponent-deployedimage-resolvedimage

specified_image

The image path you specified when you created the model.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-deployedimage.html#cfn-sagemaker-inferencecomponent-deployedimage-specifiedimage

InferenceComponentCapacitySizeProperty

class CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(*, type=None, value=None)

Bases: object

Specifies the type and size of the endpoint capacity to activate for a rolling deployment or a rollback strategy.

You can specify your batches as either of the following:

  • A count of inference component copies

  • The overall percentage of your fleet

For a rollback strategy, if you don’t specify the fields in this object, or if you set the Value parameter to 100%, then SageMaker AI uses a blue/green rollback strategy and rolls all traffic back to the blue fleet.

Parameters:
  • type (Optional[str]) – Specifies the endpoint capacity type. - COPY_COUNT - The endpoint activates based on the number of inference component copies. - CAPACITY_PERCENT - The endpoint activates based on the specified percentage of capacity.

  • value (Union[int, float, None]) – Defines the capacity size, either as a number of inference component copies or a capacity percentage.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcapacitysize.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_capacity_size_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="type",
    value=123
)
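
For instance, the two capacity types might be used as follows (the values are illustrative):

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

# Activate capacity two inference component copies at a time.
copy_count_batch = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="COPY_COUNT",
    value=2
)

# Activate capacity in batches of 50% of the fleet.
capacity_percent_batch = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
    type="CAPACITY_PERCENT",
    value=50
)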

Attributes

type

Specifies the endpoint capacity type.

  • COPY_COUNT - The endpoint activates based on the number of inference component copies.

  • CAPACITY_PERCENT - The endpoint activates based on the specified percentage of capacity.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcapacitysize.html#cfn-sagemaker-inferencecomponent-inferencecomponentcapacitysize-type

value

Defines the capacity size, either as a number of inference component copies or a capacity percentage.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcapacitysize.html#cfn-sagemaker-inferencecomponent-inferencecomponentcapacitysize-value

InferenceComponentComputeResourceRequirementsProperty

class CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(*, max_memory_required_in_mb=None, min_memory_required_in_mb=None, number_of_accelerator_devices_required=None, number_of_cpu_cores_required=None)

Bases: object

Defines the compute resources to allocate to run a model, plus any adapter models, that you assign to an inference component.

These resources include CPU cores, accelerators, and memory.

Parameters:
  • max_memory_required_in_mb (Union[int, float, None]) – The maximum MB of memory to allocate to run a model that you assign to an inference component.

  • min_memory_required_in_mb (Union[int, float, None]) – The minimum MB of memory to allocate to run a model that you assign to an inference component.

  • number_of_accelerator_devices_required (Union[int, float, None]) – The number of accelerators to allocate to run a model that you assign to an inference component. Accelerators include GPUs and AWS Inferentia.

  • number_of_cpu_cores_required (Union[int, float, None]) – The number of CPU cores to allocate to run a model that you assign to an inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_compute_resource_requirements_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
    max_memory_required_in_mb=123,
    min_memory_required_in_mb=123,
    number_of_accelerator_devices_required=123,
    number_of_cpu_cores_required=123
)
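
For example, a model that needs one accelerator, two CPU cores, and between 4 GiB and 8 GiB of memory could be declared as follows (the sizing values are illustrative):

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

gpu_model_requirements = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
    number_of_accelerator_devices_required=1,
    number_of_cpu_cores_required=2,
    min_memory_required_in_mb=4096,  # 4 GiB
    max_memory_required_in_mb=8192   # 8 GiB
)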

Attributes

max_memory_required_in_mb

The maximum MB of memory to allocate to run a model that you assign to an inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements.html#cfn-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements-maxmemoryrequiredinmb

min_memory_required_in_mb

The minimum MB of memory to allocate to run a model that you assign to an inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements.html#cfn-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements-minmemoryrequiredinmb

number_of_accelerator_devices_required

The number of accelerators to allocate to run a model that you assign to an inference component.

Accelerators include GPUs and AWS Inferentia.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements.html#cfn-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements-numberofacceleratordevicesrequired

number_of_cpu_cores_required

The number of CPU cores to allocate to run a model that you assign to an inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements.html#cfn-sagemaker-inferencecomponent-inferencecomponentcomputeresourcerequirements-numberofcpucoresrequired

InferenceComponentContainerSpecificationProperty

class CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(*, artifact_url=None, deployed_image=None, environment=None, image=None)

Bases: object

Defines a container that provides the runtime environment for a model that you deploy with an inference component.

Parameters:
  • artifact_url (Optional[str]) – The Amazon S3 path where the model artifacts, which result from model training, are stored. This path must point to a single gzip compressed tar archive (.tar.gz suffix).

  • deployed_image (Union[IResolvable, DeployedImageProperty, Dict[str, Any], None]) – Information about the Docker image of the model, including the image path you specified and the digest path it resolved to.

  • environment (Union[Mapping[str, str], IResolvable, None]) – The environment variables to set in the Docker container. Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. The map supports up to 16 entries.

  • image (Optional[str]) – The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcontainerspecification.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_container_specification_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
    artifact_url="artifactUrl",
    deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
        resolution_time="resolutionTime",
        resolved_image="resolvedImage",
        specified_image="specifiedImage"
    ),
    environment={
        "environment_key": "environment"
    },
    image="image"
)
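
A more concrete sketch, with a hypothetical ECR image URI, S3 artifact path, and environment variable. The deployed_image field describes the resolved image and is omitted here.

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

container_spec = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
    # ECR path of the Docker image that serves inference requests.
    image="123456789012.dkr.ecr.us-west-2.amazonaws.com/my-inference-image:latest",
    # Model artifacts as a single gzip-compressed tar archive in S3.
    artifact_url="s3://my-model-bucket/model.tar.gz",
    environment={
        "LOG_LEVEL": "INFO"
    }
)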

Attributes

artifact_url

The Amazon S3 path where the model artifacts, which result from model training, are stored.

This path must point to a single gzip compressed tar archive (.tar.gz suffix).

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcontainerspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentcontainerspecification-artifacturl

deployed_image

Information about the Docker image of the model, including the image path you specified and the digest path it resolved to.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcontainerspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentcontainerspecification-deployedimage

environment

The environment variables to set in the Docker container.

Each key and value in the Environment string-to-string map can have a length of up to 1024 characters. The map supports up to 16 entries.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcontainerspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentcontainerspecification-environment

image

The Amazon Elastic Container Registry (Amazon ECR) path where the Docker image for the model is stored.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentcontainerspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentcontainerspecification-image

InferenceComponentDeploymentConfigProperty

class CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(*, auto_rollback_configuration=None, rolling_update_policy=None)

Bases: object

The deployment configuration for an endpoint that hosts inference components.

The configuration includes the desired deployment strategy and rollback settings.

Parameters:
  • auto_rollback_configuration (Union[IResolvable, AutoRollbackConfigurationProperty, Dict[str, Any], None]) – The configuration for automatically rolling back the deployment when CloudWatch alarms are tripped.

  • rolling_update_policy (Union[IResolvable, InferenceComponentRollingUpdatePolicyProperty, Dict[str, Any], None]) – Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentdeploymentconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_deployment_config_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentDeploymentConfigProperty(
    auto_rollback_configuration=sagemaker_mixins.CfnInferenceComponentPropsMixin.AutoRollbackConfigurationProperty(
        alarms=[sagemaker_mixins.CfnInferenceComponentPropsMixin.AlarmProperty(
            alarm_name="alarmName"
        )]
    ),
    rolling_update_policy=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
        maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
            type="type",
            value=123
        ),
        maximum_execution_timeout_in_seconds=123,
        rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
            type="type",
            value=123
        ),
        wait_interval_in_seconds=123
    )
)

Attributes

auto_rollback_configuration

The configuration for automatically rolling back the deployment when CloudWatch alarms are tripped.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentdeploymentconfig.html#cfn-sagemaker-inferencecomponent-inferencecomponentdeploymentconfig-autorollbackconfiguration

rolling_update_policy

Specifies a rolling deployment strategy for updating a SageMaker AI endpoint.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentdeploymentconfig.html#cfn-sagemaker-inferencecomponent-inferencecomponentdeploymentconfig-rollingupdatepolicy

InferenceComponentRollingUpdatePolicyProperty

class CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(*, maximum_batch_size=None, maximum_execution_timeout_in_seconds=None, rollback_maximum_batch_size=None, wait_interval_in_seconds=None)

Bases: object

Specifies a rolling deployment strategy for updating a SageMaker AI inference component.

Parameters:
  • maximum_batch_size (Union[IResolvable, InferenceComponentCapacitySizeProperty, Dict[str, Any], None]) – The batch size for each rolling step in the deployment process. For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.

  • maximum_execution_timeout_in_seconds (Union[int, float, None]) – The time limit for the total deployment. Exceeding this limit causes a timeout.

  • rollback_maximum_batch_size (Union[IResolvable, InferenceComponentCapacitySizeProperty, Dict[str, Any], None]) – The batch size for a rollback to the old endpoint fleet. If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.

  • wait_interval_in_seconds (Union[int, float, None]) – The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_rolling_update_policy_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
    maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
        type="type",
        value=123
    ),
    maximum_execution_timeout_in_seconds=123,
    rollback_maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
        type="type",
        value=123
    ),
    wait_interval_in_seconds=123
)
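
For instance, a policy that rolls out two copies per batch, bakes each batch for five minutes, and caps the whole deployment at 30 minutes might look like this (values illustrative). rollback_maximum_batch_size is omitted, so a rollback defaults to 100% of capacity at once.

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

rolling_update_policy = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRollingUpdatePolicyProperty(
    maximum_batch_size=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentCapacitySizeProperty(
        type="COPY_COUNT",
        value=2
    ),
    wait_interval_in_seconds=300,               # 5-minute baking period per batch
    maximum_execution_timeout_in_seconds=1800   # 30-minute overall limit
)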

Attributes

maximum_batch_size

The batch size for each rolling step in the deployment process.

For each step, SageMaker AI provisions capacity on the new endpoint fleet, routes traffic to that fleet, and terminates capacity on the old endpoint fleet. The value must be between 5% and 50% of the copy count of the inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy.html#cfn-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy-maximumbatchsize

maximum_execution_timeout_in_seconds

The time limit for the total deployment.

Exceeding this limit causes a timeout.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy.html#cfn-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy-maximumexecutiontimeoutinseconds

rollback_maximum_batch_size

The batch size for a rollback to the old endpoint fleet.

If this field is absent, the value is set to the default, which is 100% of the total capacity. When the default is used, SageMaker AI provisions the entire capacity of the old fleet at once during rollback.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy.html#cfn-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy-rollbackmaximumbatchsize

wait_interval_in_seconds

The length of the baking period, during which SageMaker AI monitors alarms for each batch on the new fleet.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy.html#cfn-sagemaker-inferencecomponent-inferencecomponentrollingupdatepolicy-waitintervalinseconds

InferenceComponentRuntimeConfigProperty

class CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(*, copy_count=None, current_copy_count=None, desired_copy_count=None)

Bases: object

Runtime settings for a model that is deployed with an inference component.

Parameters:
  • copy_count (Union[int, float, None]) – The number of runtime copies of the model container to deploy with the inference component. Each copy can serve inference requests.

  • current_copy_count (Union[int, float, None]) – The number of runtime copies of the model container that are currently deployed.

  • desired_copy_count (Union[int, float, None]) – The number of runtime copies of the model container that you requested to deploy with the inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentruntimeconfig.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_runtime_config_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
    copy_count=123,
    current_copy_count=123,
    desired_copy_count=123
)
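
Since current_copy_count and desired_copy_count describe the deployed state, a declaration usually needs only copy_count; for example (value illustrative):

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

runtime_config = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentRuntimeConfigProperty(
    copy_count=2  # two copies of the model container, each able to serve requests
)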

Attributes

copy_count

The number of runtime copies of the model container to deploy with the inference component.

Each copy can serve inference requests.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentruntimeconfig.html#cfn-sagemaker-inferencecomponent-inferencecomponentruntimeconfig-copycount

current_copy_count

The number of runtime copies of the model container that are currently deployed.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentruntimeconfig.html#cfn-sagemaker-inferencecomponent-inferencecomponentruntimeconfig-currentcopycount

desired_copy_count

The number of runtime copies of the model container that you requested to deploy with the inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentruntimeconfig.html#cfn-sagemaker-inferencecomponent-inferencecomponentruntimeconfig-desiredcopycount

InferenceComponentSpecificationProperty

class CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(*, base_inference_component_name=None, compute_resource_requirements=None, container=None, model_name=None, startup_parameters=None)

Bases: object

Details about the resources to deploy with this inference component, including the model, container, and compute resources.

Parameters:
  • base_inference_component_name (Optional[str]) – The name of an existing inference component that is to contain the inference component that you’re creating with your request. Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component. When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type. Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.

  • compute_resource_requirements (Union[IResolvable, InferenceComponentComputeResourceRequirementsProperty, Dict[str, Any], None]) – The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component. Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.

  • container (Union[IResolvable, InferenceComponentContainerSpecificationProperty, Dict[str, Any], None]) – Defines a container that provides the runtime environment for a model that you deploy with an inference component.

  • model_name (Optional[str]) – The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.

  • startup_parameters (Union[IResolvable, InferenceComponentStartupParametersProperty, Dict[str, Any], None]) – Settings that take effect while the model container starts up.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_specification_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
    base_inference_component_name="baseInferenceComponentName",
    compute_resource_requirements=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentComputeResourceRequirementsProperty(
        max_memory_required_in_mb=123,
        min_memory_required_in_mb=123,
        number_of_accelerator_devices_required=123,
        number_of_cpu_cores_required=123
    ),
    container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
        artifact_url="artifactUrl",
        deployed_image=sagemaker_mixins.CfnInferenceComponentPropsMixin.DeployedImageProperty(
            resolution_time="resolutionTime",
            resolved_image="resolvedImage",
            specified_image="specifiedImage"
        ),
        environment={
            "environment_key": "environment"
        },
        image="image"
    ),
    model_name="modelName",
    startup_parameters=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
        container_startup_health_check_timeout_in_seconds=123,
        model_data_download_timeout_in_seconds=123
    )
)
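
To make the adapter case concrete (see base_inference_component_name below): an adapter specification names the base inference component, points the container at the adapter artifacts via artifact_url, and omits compute_resource_requirements because the adapter uses the base component's resources. All names here are placeholders.

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

adapter_spec = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentSpecificationProperty(
    base_inference_component_name="my-base-inference-component",
    # The container carries only the adapter artifacts.
    container=sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentContainerSpecificationProperty(
        artifact_url="s3://my-adapter-bucket/adapter.tar.gz"
    )
    # compute_resource_requirements is omitted: the adapter runs on the
    # compute resources of the base inference component.
)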

Attributes

base_inference_component_name

The name of an existing inference component that is to contain the inference component that you’re creating with your request.

Specify this parameter only if your request is meant to create an adapter inference component. An adapter inference component contains the path to an adapter model. The purpose of the adapter model is to tailor the inference output of a base foundation model, which is hosted by the base inference component. The adapter inference component uses the compute resources that you assigned to the base inference component.

When you create an adapter inference component, use the Container parameter to specify the location of the adapter artifacts. In the parameter value, use the ArtifactUrl parameter of the InferenceComponentContainerSpecification data type.

Before you can create an adapter inference component, you must have an existing inference component that contains the foundation model that you want to adapt.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentspecification-baseinferencecomponentname

compute_resource_requirements

The compute resources allocated to run the model, plus any adapter models, that you assign to the inference component.

Omit this parameter if your request is meant to create an adapter inference component. An adapter inference component is loaded by a base inference component, and it uses the compute resources of the base inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentspecification-computeresourcerequirements

container

Defines a container that provides the runtime environment for a model that you deploy with an inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentspecification-container

model_name

The name of an existing SageMaker AI model object in your account that you want to deploy with the inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentspecification-modelname

startup_parameters

Settings that take effect while the model container starts up.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentspecification.html#cfn-sagemaker-inferencecomponent-inferencecomponentspecification-startupparameters

InferenceComponentStartupParametersProperty

class CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(*, container_startup_health_check_timeout_in_seconds=None, model_data_download_timeout_in_seconds=None)

Bases: object

Settings that take effect while the model container starts up.

Parameters:
  • container_startup_health_check_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, for your inference container to pass the health check that SageMaker AI hosting performs. For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.

  • model_data_download_timeout_in_seconds (Union[int, float, None]) – The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentstartupparameters.html

ExampleMetadata:

fixture=_generated

Example:

# The code below shows an example of how to instantiate this type.
# The values are placeholders you should change.
from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

inference_component_startup_parameters_property = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
    container_startup_health_check_timeout_in_seconds=123,
    model_data_download_timeout_in_seconds=123
)
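
For example, to allow up to 10 minutes for the container health check and up to 15 minutes for the model download (values illustrative):

from aws_cdk.mixins_preview.aws_sagemaker import mixins as sagemaker_mixins

startup_parameters = sagemaker_mixins.CfnInferenceComponentPropsMixin.InferenceComponentStartupParametersProperty(
    container_startup_health_check_timeout_in_seconds=600,  # 10 minutes
    model_data_download_timeout_in_seconds=900              # 15 minutes
)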

Attributes

container_startup_health_check_timeout_in_seconds

The timeout value, in seconds, for your inference container to pass the health check that SageMaker AI hosting performs.

For more information about health checks, see How Your Container Should Respond to Health Check (Ping) Requests.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentstartupparameters.html#cfn-sagemaker-inferencecomponent-inferencecomponentstartupparameters-containerstartuphealthchecktimeoutinseconds

model_data_download_timeout_in_seconds

The timeout value, in seconds, to download and extract the model that you want to host from Amazon S3 to the individual inference instance associated with this inference component.

See:

http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-sagemaker-inferencecomponent-inferencecomponentstartupparameters.html#cfn-sagemaker-inferencecomponent-inferencecomponentstartupparameters-modeldatadownloadtimeoutinseconds